1. 12 Jun, 2021 2 commits
  2. 11 Jun, 2021 11 commits
  3. 10 Jun, 2021 7 commits
    • Reconsider the handling of procedure OUT parameters. · e56bce5d
      Tom Lane authored
      Commit 2453ea14 redefined pg_proc.proargtypes to include the types of
      OUT parameters, for procedures only.  While that had some advantages
      for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty
      disastrous from a number of other perspectives.  Notably, since the
      primary key of pg_proc is name + proargtypes, this made it possible to
      have multiple procedures with identical names + input arguments and
      differing output argument types.  That would make it impossible to call
      any one of the procedures by writing just NULL (or "?", or any other
      data-type-free notation) for the output argument(s).  The change also
      seems likely to cause grave confusion for client applications that
      examine pg_proc and expect the traditional definition of proargtypes.
      
      Hence, revert the definition of proargtypes to what it was, and
      undo a number of complications that had been added to support that.
      
      To support the SQL-spec behavior of DROP PROCEDURE, when there are
      no argmode markers in the command's parameter list, we perform the
      lookup both ways (that is, matching against both proargtypes and
      proallargtypes), succeeding if we get just one unique match.
      In principle this could result in ambiguous-function failures
      that would not happen when using only one of the two rules.
      However, overloading of procedure names is thought to be a pretty
      rare usage, so this shouldn't cause many problems in practice.
      Postgres-specific code such as pg_dump can defend against any
      possibility of such failures by being careful to specify argmodes
      for all procedure arguments.
      
      This also fixes a few other bugs in the area of CALL statements
      with named parameters, and improves the documentation a little.
      
      catversion bump forced because the representation of procedures
      with OUT arguments changes.
      
      Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
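
      As a hedged illustration of the behavior described above (a sketch,
      not part of the commit), the following libpq program creates a
      procedure with an OUT parameter, calls it with a plain NULL for the
      output argument, and drops it with explicit argmodes so the lookup
      cannot be ambiguous.  The connection string, the procedure name
      ptest, and the run() helper are all hypothetical.

      #include <stdio.h>
      #include <libpq-fe.h>

      /* Execute one SQL command and report any failure. */
      static void
      run(PGconn *conn, const char *sql)
      {
          PGresult   *res = PQexec(conn, sql);

          if (PQresultStatus(res) != PGRES_COMMAND_OK &&
              PQresultStatus(res) != PGRES_TUPLES_OK)
              fprintf(stderr, "%s failed: %s", sql, PQerrorMessage(conn));
          PQclear(res);
      }

      int
      main(void)
      {
          PGconn     *conn = PQconnectdb("dbname=postgres");

          if (PQstatus(conn) != CONNECTION_OK)
          {
              fprintf(stderr, "%s", PQerrorMessage(conn));
              return 1;
          }
          run(conn, "CREATE PROCEDURE ptest(IN x int, OUT y int) "
                    "LANGUAGE plpgsql AS $$ BEGIN y := x * 2; END $$");
          /* A data-type-free NULL stands in for the OUT argument. */
          run(conn, "CALL ptest(21, NULL)");
          /* Spelling out the argmodes keeps the DROP lookup unambiguous. */
          run(conn, "DROP PROCEDURE ptest(IN int, OUT int)");
          PQfinish(conn);
          return 0;
      }
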
    • Rearrange logrep worker's snapshot handling some more. · 3a09d75b
      Tom Lane authored
      It turns out that worker.c's code path for TRUNCATE was also
      careless about establishing a snapshot while executing user-defined
      code, allowing the checks added by commit 84f5c290 to fail when
      a trigger is fired in that context.
      
      We could just wrap Push/PopActiveSnapshot around the truncate call,
      but it seems better to establish a policy of holding a snapshot
      throughout execution of a replication step.  To help with that and
      possible future requirements, replace the previous ensure_transaction
      calls with pairs of begin/end_replication_step calls.
      
      Per report from Mark Dilger.  Back-patch to v11, like the previous
      changes.
      
      Discussion: https://postgr.es/m/B4A3AF82-79ED-4F4C-A4E5-CD2622098972@enterprisedb.com
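
      A minimal sketch of the policy described above, using the names given
      in this commit message (the committed code in worker.c differs in
      detail, and the handler body here is elided): every replication step,
      including the TRUNCATE path, runs between begin_replication_step()
      and end_replication_step(), so any user-defined code it reaches sees
      an active snapshot.

      static void
      apply_handle_truncate(StringInfo s)
      {
          /* Start a transaction if needed and push an active snapshot. */
          begin_replication_step();

          /*
           * ... decode the TRUNCATE message and execute it; triggers fired
           * here may run arbitrary user-defined code that expects an
           * active snapshot to be set ...
           */

          /* Pop the snapshot and make this step's changes visible. */
          end_replication_step();
      }
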
    • Shut down EvalPlanQual machinery when LockRows node reaches the end. · bb4aed46
      Tom Lane authored
      Previously, we left the EPQ sub-executor alone until ExecEndLockRows.
      This caused any buffer pins or other resources that it might hold to
      remain held until ExecutorEnd, which in some code paths means that
      they are held till the Portal is closed.  That can cause user-visible
      problems, such as blocking VACUUM; and it's unlike the behavior of
      ordinary table-scanning nodes, which will have released all buffer
      pins by the time they return an EOF indication.
      
      We can make LockRows work more like other plan nodes by calling
      EvalPlanQualEnd just before returning NULL.  We still need to call it
      in ExecEndLockRows in case the node was not run to completion, but in
      the normal case the second call does nothing and costs little.
      
      Per report from Yura Sokolov.  In principle this is a longstanding
      bug, but in view of the lack of other complaints and the low severity
      of the consequences, I chose not to back-patch.
      
      Discussion: https://postgr.es/m/4aa370cb91ecf2f9885d98b80ad1109c@postgrespro.ru
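
      The change amounts to roughly the following inside ExecLockRows()
      (a simplified excerpt, not the exact patch; the surrounding executor
      names follow the usual conventions): once the outer plan reports EOF,
      shut the EPQ machinery down immediately so its buffer pins are
      released, instead of waiting for ExecEndLockRows().

          /* ... at the top of ExecLockRows()'s per-tuple processing ... */
          slot = ExecProcNode(outerPlan);
          if (TupIsNull(slot))
          {
              /*
               * Release EPQ resources (buffer pins etc.) right away; the
               * later call in ExecEndLockRows() then has nothing to do.
               */
              EvalPlanQualEnd(&node->lr_epqstate);
              return NULL;
          }
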
    • Avoid ECPG test failures in some GSS-capable environments. · 9bb5eecc
      Tom Lane authored
      Buildfarm member hamerkop has been reporting that two cases in
      connect/test5.pgc show different error messages than the test expects,
      because since commit ffa2e467 libpq's connection failure messages
      are exposing the fact that a GSS-encrypted connection was attempted
      and failed.  That's pretty interesting information in itself, and
      I certainly don't wish to shoot the messenger, but we need to do
      something to stabilize the ECPG results.
      
      For the second of these two failure cases, we can add the
      gssencmode=disable option to prevent the discrepancy.  However,
      that solution is problematic for the first failure, because the only
      unique thing about that case is that it's testing a completely-omitted
      connection target; there's no place to add the option without defeating
      the point of the test case.  After some thrashing around with
      alternative fixes that turned out to have undesirable side-effects,
      the most workable answer is just to give up and remove that test case.
      Perhaps we can revert this later, if we figure out why the GSS code
      is misbehaving in hamerkop's environment.
      
      Thanks to Michael Paquier for exploration of alternatives.
      
      Discussion: https://postgr.es/m/YLRZH6CWs9N6Pusy@paquier.xyz
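
      For reference, gssencmode is an ordinary libpq connection option, so
      in plain libpq terms the workaround applied to the second case
      amounts to something like the following (the connection parameters
      here are hypothetical):

      #include <libpq-fe.h>

      static PGconn *
      connect_without_gss(void)
      {
          /*
           * Force a non-GSS-encrypted connection attempt so that the error
           * messages seen on failure stay stable across GSS-capable
           * environments.
           */
          return PQconnectdb("host=localhost dbname=ecpg1_regression "
                             "gssencmode=disable");
      }
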
    • Add some const decorations · b29fa951
      Peter Eisentraut authored
      One of these functions is new in PostgreSQL 14; might as well start it
      out right.
    • Adjust new test case to set wal_keep_size. · 4dcb1d08
      Robert Haas authored
      Per buildfarm member conchuela and Kyotaro Horiguchi, the WAL segment
      that the cascading standby needs can be removed too quickly.
      Hopefully setting wal_keep_size will prevent that.
      
      Kyotaro Horiguchi
      
      Discussion: http://postgr.es/m/20210610.101240.1270925505780628275.horikyota.ntt@gmail.com
  4. 09 Jun, 2021 2 commits
    • Fix corner case failure of new standby to follow new primary. · caba8f0d
      Robert Haas authored
      This only happens if (1) the new standby has no WAL available locally,
      (2) the new standby is starting from the old timeline, (3) the promotion
      happened in the WAL segment from which the new standby is starting,
      (4) the timeline history file for the new timeline is available from
      the archive but the WAL files for it are not (i.e. this is a race),
      (5) the WAL files for the new timeline are available via streaming,
      and (6) recovery_target_timeline='latest'.
      
      Commit ee994272 introduced this
      logic and was an improvement over the previous code, but it mishandled
      this case. If recovery_target_timeline='latest' and restore_command is
      set, validateRecoveryParameters() can change recoveryTargetTLI to be
      different from receiveTLI. If streaming is then tried afterward,
      expectedTLEs gets initialized with the history of the wrong timeline.
      It's supposed to be a list of entries explaining how to get to the
      target timeline, but in this case it ends up with a list of entries
      explaining how to get to the new standby's original timeline, which
      isn't right.
      
      Dilip Kumar and Robert Haas, reviewed by Kyotaro Horiguchi.
      
      Discussion: http://postgr.es/m/CAFiTN-sE-jr=LB8jQuxeqikd-Ux+jHiXyh4YDiZMPedgQKup0g@mail.gmail.com
    • Fix inconsistencies in psql --help=commands · 845cad4d
      Michael Paquier authored
      The set of subcommands supported by \dAp, \do and \dy was described
      incorrectly in psql's --help.  The documentation was already consistent
      with the code.
      
      Reported-by: inoas, from IRC
      Author: Matthijs van der Vleuten
      Reviewed-by: Neil Chen
      Discussion: https://postgr.es/m/6a984e24-2171-4039-9050-92d55e7b23fe@www.fastmail.com
      Backpatch-through: 9.6
  5. 08 Jun, 2021 8 commits
  6. 07 Jun, 2021 7 commits
    • Reorder superuser check in pg_log_backend_memory_contexts() · 4e47b028
      Michael Paquier authored
      The use of this function is limited to superusers and the code
      includes a hardcoded check for that.  However, the code would look up
      the PGPROC entry to signal for the memory dump before checking
      whether the user is a superuser, which makes no sense if we already
      know that an error will be returned.  Note that the old ordering
      would let even non-authorized users learn whether a given process was
      a PostgreSQL process, which is no longer the case; this also avoids
      taking ProcArrayLock when doing so would most likely turn out to be
      unnecessary.
      
      Thanks to Julien Rouhaud and Tom Lane for the discussion.
      
      Discussion: https://postgr.es/m/YLxw1uVGIAP5uMPl@paquier.xyz
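
      A simplified sketch of the reordering (the committed function's error
      texts and surrounding details differ; the helper names reflect the
      usual backend APIs): fail fast on the privilege check before looking
      up the PGPROC entry, since that lookup takes ProcArrayLock.

      Datum
      pg_log_backend_memory_contexts(PG_FUNCTION_ARGS)
      {
          int         pid = PG_GETARG_INT32(0);
          PGPROC     *proc;

          /* Check privileges first: no point in doing any work otherwise. */
          if (!superuser())
              ereport(ERROR,
                      (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                       errmsg("must be a superuser to use this function")));

          /* Only now look up the target backend; this takes ProcArrayLock. */
          proc = BackendPidGetProc(pid);

          /* ... validate proc and signal it to log its memory contexts ... */

          PG_RETURN_BOOL(true);
      }
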
    • Add _outTidRangePath() · 3bb309be
      Peter Eisentraut authored
      We have outNode() coverage for all path nodes, but this one was
      missed when it was added.
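
      The addition follows the pattern of the other path-node output
      functions in outfuncs.c; roughly the following, with the field list
      assumed from TidRangePath's definition:

      static void
      _outTidRangePath(StringInfo str, const TidRangePath *node)
      {
          WRITE_NODE_TYPE("TIDRANGEPATH");

          _outPathInfo(str, (const Path *) node);

          WRITE_NODE_FIELD(tidrangequals);
      }
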
    • Stabilize contrib/seg regression test. · d16ebfbf
      Tom Lane authored
      If autovacuum comes along just after we fill table test_seg with
      some data, it will update the stats to the point where we prefer
      a plain indexscan over a bitmap scan, breaking the expected
      output (as well as the point of the test case).  To fix, just
      force a bitmap scan to be chosen here.
      
      This has evidently been wrong since commit de1d042f.  It's not
      clear why buildfarm failures due to it appeared only recently,
      but prairiedog has failed twice on this test in the past week.
      Hence, backpatch to v11 where this test case came in.
    • Fix incautious handling of possibly-miscoded strings in client code. · 42f94f56
      Tom Lane authored
      An incorrectly-encoded multibyte character near the end of a string
      could cause various processing loops to run past the string's
      terminating NUL, with results ranging from no detectable issue to
      a program crash, depending on what happens to be in the following
      memory.
      
      This isn't an issue in the server, because we take care to verify
      the encoding of strings before doing any interesting processing
      on them.  However, that lack of care leaked into client-side code
      which shouldn't assume that anyone has validated the encoding of
      its input.
      
      Although this is certainly a bug worth fixing, the PG security team
      elected not to regard it as a security issue, primarily because
      any untrusted text should be sanitized by PQescapeLiteral or
      the like before being incorporated into a SQL or psql command.
      (If an app fails to do so, the same technique can be used to
      cause SQL injection, with probably much more dire consequences
      than a mere client-program crash.)  Those functions were already
      made proof against this class of problem, cf CVE-2006-2313.
      
      To fix, invent PQmblenBounded() which is like PQmblen() except it
      won't return more than the number of bytes remaining in the string.
      In HEAD we can make this a new libpq function, as PQmblen() is.
      It seems imprudent to change libpq's API in stable branches though,
      so in the back branches define PQmblenBounded as a macro in the files
      that need it.  (Note that just changing PQmblen's behavior would not
      be a good idea; notably, it would completely break the escaping
      functions' defense against this exact problem.  So we just want a
      version for those callers that don't have any better way of handling
      this issue.)
      
      Per private report from houjingyi.  Back-patch to all supported branches.
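
      For reference, a sketch of the idea: in the back branches the helper
      can be a local macro with the same behavior as the new libpq
      function, and client-side loops advance by its result rather than
      PQmblen()'s.  scan_string() below is only an illustrative caller.

      #include <string.h>
      #include <libpq-fe.h>

      /* Never report more bytes than remain before the terminating NUL. */
      #define PQmblenBounded(s, e)  strnlen((s), PQmblen((s), (e)))

      /*
       * Walk a possibly-miscoded string one character at a time without
       * running past its end, even if the last byte claims to start a
       * longer multibyte sequence.
       */
      static void
      scan_string(const char *s, int encoding)
      {
          for (const char *p = s; *p != '\0'; p += PQmblenBounded(p, encoding))
          {
              /* ... inspect the character starting at p ... */
          }
      }
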
    • Fix portability issue in test indirect_toast · 68a6d8a8
      Michael Paquier authored
      When run on a server with default_toast_compression set to LZ4, this
      test would fail because of an inconsistency in the order of the
      tuples treated: LZ4 causes one tuple to be stored inline instead of
      being externalized.  As the goal of this test is to check data that
      is stored externally, stick with pglz as the compression algorithm so
      that all of the test's data is stored the way it should be.
      
      Analyzed-by: Dilip Kumar
      Discussion: https://postgr.es/m/YLrDWxJgM8WWMoCg@paquier.xyz
    • Remove two_phase variable from CreateReplicationSlotCmd struct. · be57f216
      Amit Kapila authored
      Commit 19890a06 added the option to enable two_phase commits via
      pg_create_logical_replication_slot but didn't extend the replication
      protocol to match.  However, by mistake, it added the two_phase
      variable to CreateReplicationSlotCmd, which is needed only once the
      replication protocol is extended.
      
      Reported-by: Jeff Davis
      Author: Ajin Cherian
      Reviewed-by: Amit Kapila
      Discussion: https://postgr.es/m/64b9f783c6e125f18f88fbc0c0234e34e71d8639.camel@j-davis.com
    • Fix rescanning of async-aware Append nodes. · f3baaf28
      Etsuro Fujita authored
      In cases where run-time pruning isn't required, the synchronous and
      asynchronous subplans for an async-aware Append node, as determined
      by classify_matching_subplans(), should be re-used when rescanning
      the node.  However, the previous code re-determined them with that
      function on every rescan, leading to incorrect results in a normal
      build and an Assert failure in an Assert-enabled build, since that
      function doesn't expect to be called repeatedly in such cases.  Fix
      the code accordingly.
      
      My oversight in commit 27e1f145.
      
      While at it, initialize async-related pointers/variables to NULL/zero
      explicitly in ExecInitAppend() and ExecReScanAppend(), just to be sure.
      (The variables would already have been set to zero before we reach
      the latter function, but let's do so anyway.)
      
      Reviewed-by: Kyotaro Horiguchi
      Discussion: https://postgr.es/m/CAPmGK16Q4B2_KY%2BJH7rb7wQbw54AUprp7TMekGTd2T1B62yysQ%40mail.gmail.com
  7. 06 Jun, 2021 3 commits