1. 28 May, 2021 3 commits
  2. 27 May, 2021 8 commits
    • 
      Reduce the range of OIDs reserved for genbki.pl. · a4390abe
      Tom Lane authored
      Commit ab596105 increased FirstBootstrapObjectId from 12000 to 13000,
      but we've had some push-back about that.  It's worrisome to reduce the
      daylight between there and FirstNormalObjectId, because the number of
      OIDs consumed during initdb for collation objects is hard to predict.
      
      We can improve the situation by abandoning the assumption that these
      OIDs must be globally unique.  It should be sufficient for them to be
      unique per-catalog.  (Any code that's unhappy about that is broken
      anyway, since no more than per-catalog uniqueness can be guaranteed
      once the OID counter wraps around.)  With that change, the largest OID
      assigned during genbki.pl (starting from a base of 10000) is a bit
      under 11000.  This allows reverting FirstBootstrapObjectId to 12000
      with reasonable confidence that that will be sufficient for many years
      to come.
      
      We are not, at this time, abandoning the expectation that
      hand-assigned OIDs (below 10000) are globally unique.  Someday that'll
      likely be necessary, but the need seems years away still.
      
      This is late for v14, but it seems worth doing it now so that
      downstream software doesn't have to deal with the consequences of
      a change in FirstBootstrapObjectId.  In any case, we already
      bought into forcing an initdb for beta2, so another catversion
      bump won't hurt.
      
      Discussion: https://postgr.es/m/1665197.1622065382@sss.pgh.pa.us
    • 
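The per-catalog scheme described above can be sketched in a few lines. This is an illustrative model, not genbki.pl's actual Perl code: the class name, method names, and catalog names are hypothetical; only the constants (base 10000, FirstBootstrapObjectId 12000) come from the message.

```python
# Hypothetical sketch of per-catalog OID assignment: each catalog keeps its
# own counter starting at the genbki base (10000), so the same OID value may
# legitimately appear in two different catalogs.
GENBKI_BASE = 10000
FIRST_BOOTSTRAP_OBJECT_ID = 12000  # the value restored by this commit

class OidAssigner:
    def __init__(self, base=GENBKI_BASE):
        self.base = base
        self.counters = {}  # per-catalog next-OID counters

    def assign(self, catalog):
        oid = self.counters.get(catalog, self.base)
        self.counters[catalog] = oid + 1
        return oid

assigner = OidAssigner()
a = assigner.assign("pg_proc")
b = assigner.assign("pg_type")
# Per-catalog uniqueness: both catalogs may reuse OID 10000.
assert a == b == 10000
```

Because each counter only has to stay below 12000 on its own, the headroom no longer depends on the sum of all catalogs' assignments.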
      Rethink definition of pg_attribute.attcompression. · e6241d8e
      Tom Lane authored
      Redefine '\0' (InvalidCompressionMethod) as meaning "if we need to
      compress, use the current setting of default_toast_compression".
      This allows '\0' to be a suitable default choice regardless of
      datatype, greatly simplifying code paths that initialize tupledescs
      and the like.  It seems like a more user-friendly approach as well,
      because now the default compression choice doesn't migrate into table
      definitions, meaning that changing default_toast_compression is
      usually sufficient to flip an installation's behavior; one needn't
      tediously issue per-column ALTER SET COMPRESSION commands.
      
      Along the way, fix a few minor bugs and documentation issues
      with the per-column-compression feature.  Adopt more robust
      APIs for SetIndexStorageProperties and GetAttributeCompression.
      
      Bump catversion because typical contents of attcompression will now
      be different.  We could get away without doing that, but it seems
      better to ensure v14 installations all agree on this.  (We already
      forced initdb for beta2, anyway.)
      
      Discussion: https://postgr.es/m/626613.1621787110@sss.pgh.pa.us
    • 
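The resolution rule described above can be sketched as a small function. This is an illustrative model, not the actual C code; the function name is hypothetical, while `'\0'` (InvalidCompressionMethod) and the `default_toast_compression` GUC come from the message.

```python
# '\0' (InvalidCompressionMethod) now means "use default_toast_compression
# at compression time" rather than being resolved into the column definition.
INVALID_COMPRESSION = '\0'

def effective_compression(attcompression, default_toast_compression):
    """Return the method actually used when a value must be compressed."""
    if attcompression == INVALID_COMPRESSION:
        return default_toast_compression
    return attcompression

# A column created with the default keeps '\0', so flipping the GUC changes
# behavior without any per-column ALTER ... SET COMPRESSION.
assert effective_compression(INVALID_COMPRESSION, 'pglz') == 'pglz'
assert effective_compression(INVALID_COMPRESSION, 'lz4') == 'lz4'
# An explicit per-column setting still wins.
assert effective_compression('lz4', 'pglz') == 'lz4'
```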
      Fix vpath build in libpq_pipeline test · a717e5c7
      Peter Eisentraut authored
      The path needs to be set to refer to the build directory, not the
      current directory, because that's actually the source directory at
      that point.
      
      fix for 6abc8c25
    • 
      Add NO_INSTALL option to pgxs · 6abc8c25
      Peter Eisentraut authored
      Apply in libpq_pipeline test makefile, so that the test file is not
      installed into tmp_install.
      Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
      Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
      Discussion: https://www.postgresql.org/message-id/flat/cb9d16a6-760f-cd44-28d6-b091d5fb6ca7%40enterprisedb.com
    • 
      Fix MSVC scripts when building with GSSAPI/Kerberos · 02511066
      Michael Paquier authored
      The deliverables of upstream Kerberos on Windows are installed with
      paths that do not match our MSVC scripts.  First, the include folder was
      named "inc/" in our scripts, but the upstream MSIs use "include/".
      Second, the build would fail with 64-bit environments as the libraries
      are named differently.
      
      This commit adjusts the MSVC scripts to be compatible with the latest
      upstream installations, and I have checked that compilation works with
      both the 32-bit and 64-bit installations.
      
      Special thanks to Kondo Yuta for the help in investigating the situation
      in hamerkop, which had an incorrect configuration for the GSS
      compilation.
      
      Reported-by: Brian Ye
      Discussion: https://postgr.es/m/162128202219.27274.12616756784952017465@wrigleys.postgresql.org
      Backpatch-through: 9.6
    • 
      Replace run-time error check with assertion · 388e75ad
      Peter Eisentraut authored
      The error message was checking that the structures returned from the
      parser matched expectations.  That's something we usually use
      assertions for, not a full user-facing error message.  So replace that
      with an assertion (hidden inside lfirst_node()).
      Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
      Discussion: https://www.postgresql.org/message-id/flat/452e9df8-ec89-e01b-b64a-8cc6ce830458%40enterprisedb.com
    • 
      doc: Fix description of some GUCs in docs and postgresql.conf.sample · 2941138e
      Michael Paquier authored
      The descriptions of the following parameters were imprecise or
      incorrect about their context (PGC_POSTMASTER or PGC_SIGHUP):
      - autovacuum_work_mem (docs, as of 9.6~)
      - huge_page_size (docs, as of 14~)
      - max_logical_replication_workers (docs, as of 10~)
      - max_sync_workers_per_subscription (docs, as of 10~)
      - min_dynamic_shared_memory (docs, as of 14~)
      - recovery_init_sync_method (postgresql.conf.sample, as of 14~)
      - remove_temp_files_after_crash (docs, as of 14~)
      - restart_after_crash (docs, as of 9.6~)
      - ssl_min_protocol_version (docs, as of 12~)
      - ssl_max_protocol_version (docs, as of 12~)
      
      This commit adjusts the description of all these parameters to be more
      consistent with the practice used for the others.
      
      Reviewed-by: Justin Pryzby
      Discussion: https://postgr.es/m/YK2ltuLpe+FbRXzA@paquier.xyz
      Backpatch-through: 9.6
    • 
      Fix assertion during streaming of multi-insert toast changes. · 6f4bdf81
      Amit Kapila authored
      While decoding a multi-insert WAL record we can't clean up the toast
      data until we get to the last insert of that record.  If we stream the
      changes before we get that last change, the memory for the toast chunks
      won't be released, even though we expect the txn to have streamed all
      changes after streaming.  This restriction mainly ensures the
      correctness of streamed transactions, and it doesn't seem worth lifting
      it just to allow this case, because we will stream the transaction
      anyway once such an insert is complete.
      
      Previously we were using two different flags (one for toast tuples and
      another for speculative inserts) to indicate partial changes. Now instead
      we replaced both of them with a single flag to indicate partial changes.
      
      Reported-by: Pavan Deolasee
      Author: Dilip Kumar
      Reviewed-by: Pavan Deolasee, Amit Kapila
      Discussion: https://postgr.es/m/CABOikdN-_858zojYN-2tNcHiVTw-nhxPwoQS4quExeweQfG1Ug@mail.gmail.com
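The single-flag rule described above can be sketched as follows. This is an illustrative model; the class and flag names are hypothetical, not the C field names in the reorder buffer code.

```python
# A transaction with a pending partial change (toast chunks of a multi-insert,
# or a speculative insert) must not be streamed until the change is complete.
# The commit merges two separate flags into this one flag.
class ReorderTxn:
    def __init__(self):
        self.has_partial_change = False

    def begin_multi_insert_toast(self):
        self.has_partial_change = True

    def complete_last_insert(self):
        self.has_partial_change = False

    def can_stream(self):
        return not self.has_partial_change

txn = ReorderTxn()
txn.begin_multi_insert_toast()
assert not txn.can_stream()   # streaming deferred mid multi-insert
txn.complete_last_insert()
assert txn.can_stream()       # stream once the insert is complete
```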
  3. 26 May, 2021 1 commit
  4. 25 May, 2021 10 commits
  5. 24 May, 2021 2 commits
  6. 23 May, 2021 5 commits
    • 
      Re-order pg_attribute columns to eliminate some padding space. · f5024d8d
      Tom Lane authored
      Now that attcompression is just a char, there's a lot of wasted
      padding space after it.  Move it into the group of char-wide
      columns to save a net of 4 bytes per pg_attribute entry.  While
      we're at it, swap the order of attstorage and attalign to make for
      a more logical grouping of these columns.
      
      Also re-order actions in related code to match the new field ordering.
      
      This patch also fixes one outright bug: equalTupleDescs() failed to
      compare attcompression.  That could, for example, cause relcache
      reload to fail to adopt a new value following a change.
      
      Michael Paquier and Tom Lane, per a gripe from Andres Freund.
      
      Discussion: https://postgr.es/m/20210517204803.iyk5wwvwgtjcmc5w@alap3.anarazel.de
    • 
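The padding effect described above is easy to demonstrate outside PostgreSQL. A minimal sketch via ctypes, assuming a typical platform where int is 4 bytes with 4-byte alignment (the struct and field names are made up for illustration):

```python
# C pads each field to its alignment, so a lone char before an int wastes
# 3 bytes.  Grouping the char-wide fields lets them share one alignment slot.
import ctypes

class Scattered(ctypes.Structure):
    _fields_ = [("a", ctypes.c_char),   # 1 byte + 3 bytes padding
                ("b", ctypes.c_int),    # 4 bytes
                ("c", ctypes.c_char),   # 1 byte + 3 bytes padding
                ("d", ctypes.c_int)]    # 4 bytes

class Grouped(ctypes.Structure):
    _fields_ = [("a", ctypes.c_char),   # the two chars share a single
                ("c", ctypes.c_char),   # 4-byte alignment slot
                ("b", ctypes.c_int),
                ("d", ctypes.c_int)]

assert ctypes.sizeof(Scattered) == 16
assert ctypes.sizeof(Grouped) == 12   # net saving from grouping the chars
```

The same reasoning applied to pg_attribute yields the 4-byte-per-entry saving the commit message cites.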
      Be more verbose when the postmaster unexpectedly quits. · bc2a389e
      Tom Lane authored
      Emit a LOG message when the postmaster stops because of a failure in
      the startup process.  There already is a similar message if we exit
      for that reason during PM_STARTUP phase, so it seems inconsistent
      that there was none if the startup process fails later on.
      
      Also emit a LOG message when the postmaster stops after a crash
      because restart_after_crash is disabled.  This seems potentially
      helpful in case DBAs (or developers) forget that that's set.
      Also, it was the only remaining place where the postmaster would
      do an abnormal exit without any comment as to why.
      
      In passing, remove an unreachable call of ExitPostmaster(0).
      
      Discussion: https://postgr.es/m/194914.1621641288@sss.pgh.pa.us
    • 
      doc: word-wrap and indent PG 14 relnotes · 8f73ed6b
      Bruce Momjian authored
    • 
      Fix access to no-longer-open relcache entry in logical-rep worker. · b39630fd
      Tom Lane authored
      If we redirected a replicated tuple operation into a partition child
      table, and then tried to fire AFTER triggers for that event, the
      relation cache entry for the child table was already closed.  This has
      no visible ill effects as long as the entry is still there and still
      valid, but an unluckily-timed cache flush could result in a crash or
      other misbehavior.
      
      To fix, postpone the ExecCleanupTupleRouting call (which is what
      closes the child table) until after we've fired triggers.  This
      requires a bit of refactoring so that the cleanup function can
      have access to the necessary state.
      
      In HEAD, I took the opportunity to simplify some of worker.c's
      function APIs based on use of the new ApplyExecutionData struct.
      However, it doesn't seem safe/practical to back-patch that aspect,
      at least not without a lot of analysis of possible interactions
      with a04daa97.
      
      In passing, add an Assert to afterTriggerInvokeEvents to catch
      such cases.  This seems worthwhile because we've grown a number
      of fairly unstructured ways of calling AfterTriggerEndQuery.
      
      Back-patch to v13, where worker.c grew the ability to deal with
      partitioned target tables.
      
      Discussion: https://postgr.es/m/3382681.1621381328@sss.pgh.pa.us
    • 
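The ordering bug and its fix can be sketched as a toy sequence. This is illustrative only; the Python function names merely echo the C functions the message mentions.

```python
# Firing AFTER triggers after tuple routing has been cleaned up means the
# trigger code sees a closed relcache entry.  The fix reorders the steps:
# fire triggers first, clean up afterwards.
log = []

def exec_cleanup_tuple_routing():
    log.append("close child relation")

def fire_after_triggers():
    # Safe only while the child relation is still open.
    assert "close child relation" not in log, "relation already closed!"
    log.append("fire AFTER triggers")

# Fixed order: triggers first, then cleanup.
fire_after_triggers()
exec_cleanup_tuple_routing()
assert log == ["fire AFTER triggers", "close child relation"]
```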
  7. 22 May, 2021 4 commits
    • 
      Remove plpgsql's special-case code paths for SET/RESET. · 30168be8
      Tom Lane authored
      In the wake of 84f5c290, it's no longer necessary for plpgsql to
      handle SET/RESET specially.  The point of that was just to avoid
      taking a new transaction snapshot prematurely, which the regular code
      path through _SPI_execute_plan() now does just fine (in fact better,
      since it now does the right thing for LOCK too).  Hence, rip out a
      few lines of code, going back to the old way of treating SET/RESET
      as a generic SQL command.  This essentially reverts all but the
      test cases from b981275b.
      
      Discussion: https://postgr.es/m/15990-eee2ac466b11293d@postgresql.org
    • 
      Fix planner's use of Result Cache with unique joins · 9e215378
      David Rowley authored
      When the planner considered using a Result Cache node to cache results
      from the inner side of a Nested Loop Join, it failed to consider that the
      inner path's parameterization may not be the entire join condition.  If
      the join was marked as inner_unique then we may accidentally put the cache
      in singlerow mode.  This meant that entries would be marked as complete
      after caching the first row.  That was wrong: if only part of the join
      condition was parameterized, then the uniqueness of the unique join was
      not guaranteed at the Result Cache's level.  The uniqueness is only
      guaranteed
      after Nested Loop applies the join filter.  If subsequent rows were found,
      this would lead to:
      
      ERROR: cache entry already complete
      
      This could have been fixed by only putting the cache in singlerow mode if
      the entire join condition was parameterized.  However, Nested Loop will
      only read its inner side so far as the first matching row when the join is
      unique, so that might mean we never get an opportunity to mark cache
      entries as complete.  Since non-complete cache entries are useless for
      subsequent lookups, we just don't bother considering a Result Cache path
      in this case.
      
      In passing, remove the XXX comment that claimed the above ERROR might be
      better suited to be an Assert.  After there being an actual case which
      triggered it, it seems better to keep it an ERROR.
      
      Reported-by: David Christensen
      Discussion: https://postgr.es/m/CAOxo6X+dy-V58iEPFgst8ahPKEU+38NZzUuc+a7wDBZd4TrHMQ@mail.gmail.com
    • 
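The planner rule the message arrives at can be sketched as a small decision function. This is an illustrative model with hypothetical names, not the actual planner code:

```python
# Only consider a Result Cache for a unique join when the inner path's
# parameterization covers the entire join condition; otherwise singlerow
# entries could be wrongly marked complete, and non-complete entries are
# useless, so the cache path is not considered at all.
def consider_result_cache(inner_unique, param_covers_whole_join_cond):
    if not inner_unique:
        return {"use_cache": True, "singlerow": False}
    if param_covers_whole_join_cond:
        # Uniqueness holds at the cache level: one row per key.
        return {"use_cache": True, "singlerow": True}
    # Unique join with partial parameterization: don't cache at all.
    return {"use_cache": False, "singlerow": False}

assert consider_result_cache(True, True) == {"use_cache": True, "singlerow": True}
assert consider_result_cache(True, False)["use_cache"] is False
assert consider_result_cache(False, False)["use_cache"] is True
```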
  8. 21 May, 2021 6 commits
    • 
      Disallow whole-row variables in GENERATED expressions. · 4b100744
      Tom Lane authored
      This was previously allowed, but I think that was just an oversight.
      It's a clear violation of the rule that a generated column cannot
      depend on itself or other generated columns.  Moreover, because the
      code was relying on the assumption that no such cross-references
      exist, it was pretty easy to crash ALTER TABLE and perhaps other
      places.  Even if you managed not to crash, you got quite unstable,
      implementation-dependent results.
      
      Per report from Vitaly Ustinov.
      Back-patch to v12 where GENERATED came in.
      
      Discussion: https://postgr.es/m/CAM_DEiWR2DPT6U4xb-Ehigozzd3n3G37ZB1+867zbsEVtYoJww@mail.gmail.com
    • 
      Fix usage of "tableoid" in GENERATED expressions. · 2b0ee126
      Tom Lane authored
      We consider this supported (though I've got my doubts that it's a
      good idea, because tableoid is not immutable).  However, several
      code paths failed to fill the field in soon enough, causing such
      a GENERATED expression to see zero or the wrong value.  This
      occurred when ALTER TABLE adds a new GENERATED column to a table
      with existing rows, and during regular INSERT or UPDATE on a
      foreign table with GENERATED columns.
      
      Noted during investigation of a report from Vitaly Ustinov.
      Back-patch to v12 where GENERATED came in.
      
      Discussion: https://postgr.es/m/CAM_DEiWR2DPT6U4xb-Ehigozzd3n3G37ZB1+867zbsEVtYoJww@mail.gmail.com
    • 
      Restore the portal-level snapshot after procedure COMMIT/ROLLBACK. · 84f5c290
      Tom Lane authored
      COMMIT/ROLLBACK necessarily destroys all snapshots within the session.
      The original implementation of intra-procedure transactions just
      cavalierly did that, ignoring the fact that this left us executing in
      a rather different environment than normal.  In particular, it turns
      out that handling of toasted datums depends rather critically on there
      being an outer ActiveSnapshot: otherwise, when SPI or the core
      executor pop whatever snapshot they used and return, it's unsafe to
      dereference any toasted datums that may appear in the query result.
      It's possible to demonstrate "no known snapshots" and "missing chunk
      number N for toast value" errors as a result of this oversight.
      
      Historically this outer snapshot has been held by the Portal code,
      and that seems like a good plan to preserve.  So add infrastructure
      to pquery.c to allow re-establishing the Portal-owned snapshot if it's
      not there anymore, and add enough bookkeeping support that we can tell
      whether it is or not.
      
      We can't, however, just re-establish the Portal snapshot as part of
      COMMIT/ROLLBACK.  As in normal transaction start, acquiring the first
      snapshot should wait until after SET and LOCK commands.  Hence, teach
      spi.c about doing this at the right time.  (Note that this patch
      doesn't fix the problem for any PLs that try to run intra-procedure
      transactions without using SPI to execute SQL commands.)
      
      This makes SPI's no_snapshots parameter rather a misnomer, so in HEAD,
      rename that to allow_nonatomic.
      
      replication/logical/worker.c also needs some fixes, because it wasn't
      careful to hold a snapshot open around AFTER trigger execution.
      That code doesn't use a Portal, which I suspect someday we're gonna
      have to fix.  But for now, just rearrange the order of operations.
      This includes back-patching the recent addition of finish_estate()
      to centralize the cleanup logic there.
      
      This also back-patches commit 2ecfeda3 into v13, to improve the
      test coverage for worker.c (it was that test that exposed that
      worker.c's snapshot management is wrong).
      
      Per bug #15990 from Andreas Wicht.  Back-patch to v11 where
      intra-procedure COMMIT was added.
      
      Discussion: https://postgr.es/m/15990-eee2ac466b11293d@postgresql.org
    • 
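The bookkeeping described above can be sketched as a toy state machine. This is purely illustrative; the class, method names, and the set of snapshot-free commands are simplifications of the real pquery.c/spi.c machinery:

```python
# The portal remembers whether it still holds the outer snapshot.  An
# intra-procedure COMMIT destroys it; it is re-established lazily, only
# when the next command actually needs a snapshot, so that SET and LOCK
# run first without acquiring one (as at normal transaction start).
class Portal:
    def __init__(self):
        self.has_snapshot = True   # portal-level snapshot held at start

    def intra_procedure_commit(self):
        self.has_snapshot = False  # COMMIT destroys all session snapshots

    def ensure_snapshot(self):
        if not self.has_snapshot:
            self.has_snapshot = True  # re-establish the portal snapshot

    def run(self, command):
        if command not in ("SET", "LOCK"):
            self.ensure_snapshot()
        return self.has_snapshot

p = Portal()
p.intra_procedure_commit()
p.run("SET")                  # no snapshot acquired yet
assert not p.has_snapshot
p.run("SELECT")               # first snapshot-needing command
assert p.has_snapshot         # toasted datums in its result are now safe
```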
      Fix deadlock for multiple replicating truncates of the same table. · 6d0eb385
      Amit Kapila authored
      While applying a truncate change, the logical apply worker acquired
      only RowExclusiveLock on the relation being truncated.  This allowed
      two apply workers to truncate the relation at the same time, which led
      to a deadlock: after updating the pg_class tuple, the first worker
      tried to acquire SHARE lock on the relation and began waiting for the
      second worker, which held RowExclusiveLock on it; when the second
      worker then tried to update the pg_class tuple, it began waiting for
      the first worker.  Fix this by acquiring AccessExclusiveLock on the
      relation before applying the truncate change, as we do for a normal
      truncate operation.
      
      Author: Peter Smith, test case by Haiying Tang
      Reviewed-by: Dilip Kumar, Amit Kapila
      Backpatch-through: 11
      Discussion: https://postgr.es/m/CAHut+PsNm43p0jM+idTvWwiGZPcP0hGrHMPK9TOAkc+a4UpUqw@mail.gmail.com
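The deadlock above is a classic wait-for cycle, which can be sketched with a toy detector. This is illustrative only and not PostgreSQL's lock manager; the worker names and graph representation are made up:

```python
# Each worker holds RowExclusiveLock and then waits for SHARE, which
# conflicts with the other's RowExclusiveLock: A waits for B, B waits for A.
def has_deadlock(wait_for):
    # Detect a cycle in a wait-for graph given as {waiter: blocker}.
    for start in wait_for:
        seen, node = set(), start
        while node in wait_for:
            if node in seen:
                return True
            seen.add(node)
            node = wait_for[node]
    return False

# Before the fix: both workers hold RowExclusiveLock, each waits on the other.
assert has_deadlock({"worker1": "worker2", "worker2": "worker1"})

# After the fix: AccessExclusiveLock is taken up front, so the second worker
# simply queues behind the first; no cycle.
assert not has_deadlock({"worker2": "worker1"})
```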
  9. 20 May, 2021 1 commit
    • 
      Avoid detoasting failure after COMMIT inside a plpgsql FOR loop. · f21fadaf
      Tom Lane authored
      exec_for_query() normally tries to prefetch a few rows at a time
      from the query being iterated over, so as to reduce executor
      entry/exit overhead.  Unfortunately this is unsafe if we have
      COMMIT or ROLLBACK within the loop, because there might be
      TOAST references in the data that we prefetched but haven't
      yet examined.  Immediately after the COMMIT/ROLLBACK, we have
      no snapshots in the session, meaning that VACUUM is at liberty
      to remove recently-deleted TOAST rows.
      
      This was originally reported as a case triggering the "no known
      snapshots" error in init_toast_snapshot(), but even if you miss
      hitting that, you can get "missing toast chunk", as illustrated
      by the added isolation test case.
      
      To fix, just disable prefetching in non-atomic contexts.  Maybe
      there will be performance complaints prompting us to work harder
      later, but it's not clear at the moment that this really costs
      much, and I doubt we'd want to back-patch any complicated fix.
      
      In passing, adjust that error message in init_toast_snapshot()
      to be a little clearer about the likely cause of the problem.
      
      Patch by me, based on earlier investigation by Konstantin Knizhnik.
      
      Per bug #15990 from Andreas Wicht.  Back-patch to v11 where
      intra-procedure COMMIT was added.
      
      Discussion: https://postgr.es/m/15990-eee2ac466b11293d@postgresql.org
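The fix reduces to a one-line rule, sketched here with a hypothetical batch size (plpgsql's actual prefetch constant and function are not reproduced):

```python
# exec_for_query() prefetches a batch of rows per executor entry, but in a
# non-atomic context (where COMMIT/ROLLBACK may occur mid-loop) it must
# fetch one row at a time, so no unexamined TOAST references are left
# dangling when the snapshots go away.
PREFETCH_ROWS = 10  # hypothetical batch size, for illustration

def fetch_count(atomic):
    return PREFETCH_ROWS if atomic else 1

assert fetch_count(atomic=True) == 10   # normal case: batch prefetch
assert fetch_count(atomic=False) == 1   # non-atomic: no prefetch
```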