1. 16 May, 2022 2 commits
    • Fix incorrect row estimates used for Memoize costing · 23c2b76a
      David Rowley authored
      In order to estimate the cache hit ratio of a Memoize node, one of the
      inputs we require is the estimated number of times the Memoize node will
      be rescanned.  The higher this number, the larger the cache hit ratio is
      likely to become.  Unfortunately, the value being passed as the number of
      "calls" to the Memoize node was incorrectly taken from the Nested Loop's
      outer_path->parent->rows instead of outer_path->rows.  This failed to
      account for the fact that the outer_path might be parameterized by some
      upper-level Nested Loop.
      
      This problem could lead to Memoize plans appearing more favorable than
      they might actually be.  It could also lead to extended executor startup
      times when work_mem values were large, because the planner would set an
      overly large MemoizePath->est_entries, resulting in the Memoize hash
      table initially being made much larger than required.
      
      Fix this simply by passing outer_path->rows rather than
      outer_path->parent->rows.  Also, adjust the expected regression test
      output for a plan change.
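      
      For illustration only, here is a standalone C sketch of the effect, using
      a simplified hit-ratio model and made-up numbers (this is not the
      planner's actual cost_memoize_rescan() logic):
      
          #include <stdio.h>
      
          /* Toy model: fraction of rescans that could be served from cache. */
          static double
          est_hit_ratio(double calls, double ndistinct)
          {
              double hit_ratio = (calls - ndistinct) / calls;
      
              return (hit_ratio > 0.0) ? hit_ratio : 0.0;
          }
      
          int
          main(void)
          {
              double ndistinct = 1000.0;
      
              /* inflated "calls" taken from outer_path->parent->rows */
              printf("calls = 100000 -> hit ratio %.2f\n",
                     est_hit_ratio(100000.0, ndistinct));
              /* the parameterized estimate from outer_path->rows */
              printf("calls = 50     -> hit ratio %.2f\n",
                     est_hit_ratio(50.0, ndistinct));
              return 0;
          }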
      
      Reported-by: Pavel Stehule
      Author: David Rowley
      Discussion: https://postgr.es/m/CAFj8pRAMp%3DQsMi6sPQJ4W3hczoFJRvyXHJV3AZAZaMyTVM312Q%40mail.gmail.com
      Backpatch-through: 14, where Memoize was introduced
    • Fix control file update done in restartpoints still running after promotion · 6dced63b
      Michael Paquier authored
      If a cluster is promoted (i.e. the control file shows a state different
      from DB_IN_ARCHIVE_RECOVERY) while CreateRestartPoint() is still
      running, this function could skip the update of the control file's
      "checkPoint" and "checkPointCopy" but still recycle and/or remove past
      WAL segments, using the to-be-updated LSN values as reference points
      for the cleanup.  If the end-of-recovery checkpoint triggered by the
      promotion did not complete because the cluster abruptly stopped or
      crashed, a follow-up restart attempting crash recovery would then fail
      with a PANIC on a missing checkpoint record.  The PANIC is caused by
      the redo LSN referred to in the control file pointing into a segment
      that is already gone, recycled by the previous restartpoint while
      "checkPoint" was left out of sync in the control file.
      
      This commit fixes the update of the control file during restartpoints
      so that "checkPoint" and "checkPointCopy" are updated even if the
      cluster has been promoted while a restartpoint is running, keeping them
      consistent with the set of WAL segments actually recycled at the end of
      CreateRestartPoint().
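      
      As a toy illustration of the hazard and of what keeping these values in
      sync prevents (hypothetical types and numbers only, not the real
      ControlFileData or CreateRestartPoint() code):
      
          #include <stdbool.h>
          #include <stdint.h>
          #include <stdio.h>
      
          typedef struct
          {
              uint64_t checkPoint;    /* stand-in for the control file field */
          } ToyControlFile;
      
          static void
          toy_restartpoint(ToyControlFile *cf, uint64_t newCheckPoint,
                           bool promoted, bool buggy)
          {
              /* Buggy flow: skip the control file update once promoted. */
              if (!(buggy && promoted))
                  cf->checkPoint = newCheckPoint;
      
              /* WAL up to the new checkpoint is recycled in both flows. */
              printf("control file checkPoint = %llu, WAL removed up to %llu\n",
                     (unsigned long long) cf->checkPoint,
                     (unsigned long long) newCheckPoint);
          }
      
          int
          main(void)
          {
              ToyControlFile cf = {100};
      
              toy_restartpoint(&cf, 200, true, true);   /* 100 vs 200: out of sync */
              cf.checkPoint = 100;
              toy_restartpoint(&cf, 200, true, false);  /* 200 vs 200: consistent */
              return 0;
          }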
      
      7863ee4 has already fixed this problem on master, but the release
      timing of the latest point versions did not leave enough time to study
      and fix it on all the stable branches.
      
      Reported-by: Fujii Masao, Rui Zhao
      Author: Kyotaro Horiguchi
      Reviewed-by: Nathan Bossart, Michael Paquier
      Discussion: https://postgr.es/m/20220316.102444.2193181487576617583.horikyota.ntt@gmail.com
      Backpatch-through: 10
  2. 12 May, 2022 1 commit
    • Make pull_var_clause() handle GroupingFuncs exactly like Aggrefs. · ac51c9fb
      Tom Lane authored
      This follows in the footsteps of commit 2591ee8ec by removing one more
      ill-advised shortcut from planning of GroupingFuncs.  It's true that
      we don't intend to execute the argument expression(s) at runtime, but
      we still have to process any Vars appearing within them, or we risk
      failure at setrefs.c time (or more fundamentally, in EXPLAIN trying
      to print such an expression).  Vars in upper plan nodes have to have
      referents in the next plan level, whether we ever execute 'em or not.
      
      Per bug #17479 from Michael J. Sullivan.  Back-patch to all supported
      branches.
      
      Richard Guo
      
      Discussion: https://postgr.es/m/17479-6260deceaf0ad304@postgresql.org
  3. 11 May, 2022 2 commits
    • Fix the logical replication timeout during large transactions. · d6da71fa
      Amit Kapila authored
      The problem is that we don't send keep-alive messages for a long time
      while processing large transactions during logical replication when we
      don't send any data of such transactions.  This can happen when the
      table modified in the transaction is not published or when all the
      changes get filtered out.  We do try to send a keep-alive message if
      necessary at the end of the transaction (via WalSndWriteData()), but by
      that time the subscriber side can time out and exit.
      
      To fix this, we try to send a keepalive message, if required, after
      processing a certain threshold of changes.
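      
      A minimal standalone C sketch of the approach (the threshold value and
      the function name here are assumptions for illustration, not the
      walsender's actual code):
      
          #include <stdio.h>
      
          #define CHANGES_THRESHOLD 100    /* assumed value */
      
          /* Give the output machinery a chance to send a keepalive. */
          static void
          maybe_send_keepalive(long changes_skipped)
          {
              if (changes_skipped % CHANGES_THRESHOLD == 0)
                  printf("keepalive sent after %ld skipped changes\n",
                         changes_skipped);
          }
      
          int
          main(void)
          {
              /* pretend we skip 350 changes of an unpublished table */
              for (long n = 1; n <= 350; n++)
                  maybe_send_keepalive(n);
              return 0;
          }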
      
      Reported-by: Fabrice Chapuis
      Author: Wang wei and Amit Kapila
      Reviewed-by: Masahiko Sawada, Euler Taveira, Hou Zhijie, Hayato Kuroda
      Backpatch-through: 10
      Discussion: https://postgr.es/m/CAA5-nLARN7-3SLU_QUxfy510pmrYK6JJb=bk3hcgemAM_pAv+w@mail.gmail.com
    • Improve setup of environment values for commands in MSVC's vcregress.pl · ca9e9b08
      Michael Paquier authored
      The current setup assumes that commands for lz4, zstd and gzip always
      exist by default if not enforced by a user's environment.  However,
      vcpkg, as one example, installs libraries but no binaries, so this
      default assumption that a command is always present would cause
      failures.  This commit improves the detection of such external
      commands as follows:
      * If an ENV value is available, trust the environment/user and use it.
      * If an ENV value is not available, check whether the command can be
      executed from the current PATH by launching a simple "$command --version"
      (which should be portable enough).
      ** On execution failure, ignore ENV{command}.
      ** On execution success, set ENV{command} = "$command".
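      
      A rough restatement of that rule as a standalone C sketch (the real
      change lives in the Perl script vcregress.pl; the environment variable
      names and the use of POSIX setenv()/system() here are illustrative
      assumptions):
      
          #include <stdio.h>
          #include <stdlib.h>
      
          static void
          setup_command(const char *envname, const char *command)
          {
              char probe[256];
      
              /* If an ENV value is available, trust the environment/user. */
              if (getenv(envname) != NULL)
                  return;
      
              /* Otherwise, check that "$command --version" runs from the PATH. */
              snprintf(probe, sizeof(probe), "%s --version", command);
              if (system(probe) == 0)
                  setenv(envname, command, 1);    /* ENV{command} = "$command" */
              /* on failure, leave it unset so related tests are skipped */
          }
      
          int
          main(void)
          {
              setup_command("GZIP_PROGRAM", "gzip");
              setup_command("LZ4", "lz4");
              setup_command("ZSTD", "zstd");
              return 0;
          }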
      
      Note that this new rule applies to gzip, lz4 and zstd, but not to tar,
      which we assume will always exist.  Those commands are set up in the
      environment only when using bincheck and taptest.  The CI includes all
      of those commands and I have checked that their setup is correct there.
      I have also tested this change in an MSVC environment that has none of
      those commands.
      
      While on it, remove the references to lz4 from the documentation and
      vcregress.pl in v13 and older branches.  --with-lz4 was added in v14,
      so there is no point in having this information in those older
      branches.
      
      Reported-by: Andrew Dunstan
      Reviewed-by: Andrew Dunstan
      Discussion: https://postgr.es/m/14402151-376b-a57a-6d0c-10ad12608e12@dunslane.net
      Backpatch-through: 10
  4. 10 May, 2022 1 commit
    • configure: don't probe for libldap_r if libldap is 2.5 or newer. · 12736e7d
      Tom Lane authored
      In OpenLDAP 2.5 and later, libldap itself is always thread-safe and
      there's never a libldap_r.  Our existing coding dealt with that
      by assuming it wouldn't find libldap_r if libldap is thread-safe.
      But that rule fails to cope if there are multiple OpenLDAP versions
      visible, as is likely to be the case on macOS in particular.  We'd
      end up using shiny new libldap in the backend and a hoary libldap_r
      in libpq.
      
      Instead, once we've found libldap, check if it's >= 2.5 (by
      probing for a function introduced then) and don't bother looking
      for libldap_r if so.  While one can imagine library setups that
      this'd still give the wrong answer for, they seem unlikely to
      occur in practice.
      
      Per report from Peter Eisentraut.  Back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/fedacd7c-2a38-25c9-e7ff-dea549d0e979@enterprisedb.com
  5. 09 May, 2022 8 commits
  6. 08 May, 2022 1 commit
  7. 07 May, 2022 2 commits
  8. 06 May, 2022 2 commits
  9. 05 May, 2022 2 commits
  10. 04 May, 2022 2 commits
  11. 03 May, 2022 3 commits
  12. 02 May, 2022 1 commit
  13. 28 Apr, 2022 1 commit
    • Disable asynchronous execution if using gating Result nodes. · ebb79024
      Etsuro Fujita authored
      mark_async_capable_plan(), which is called from create_append_plan() to
      determine whether subplans are async-capable, failed to take into
      account that the subplan created from a given subpath might include a
      gating Result node when the subpath is a SubqueryScanPath or
      ForeignPath.  This caused a segmentation fault when the subplan created
      from a SubqueryScanPath included such a Result node, and caused
      ExecAsyncRequest() to throw an error about an unrecognized node type
      when the subplan created from a ForeignPath included one; in the latter
      case the Result node was unintentionally considered async-capable, even
      though we don't currently support executing Result nodes asynchronously.
      Fix this by modifying mark_async_capable_plan() to disable asynchronous
      execution in such cases.  Also, adjust the code in the ProjectionPath
      case of mark_async_capable_plan() for consistency with the other cases,
      and adjust/improve the comments there.
      
      is_async_capable_path(), added in commit 27e1f145 and rewritten into
      mark_async_capable_plan() in a later commit, has the same issue and
      causes the execution-time error mentioned above, so back-patch to v14,
      where the aforesaid commit went in.
      
      Per report from Justin Pryzby.
      
      Etsuro Fujita, reviewed by Zhihong Yu and Justin Pryzby.
      
      Discussion: https://postgr.es/m/20220408124338.GK24419%40telsasoft.com
  14. 25 Apr, 2022 2 commits
  15. 23 Apr, 2022 1 commit
  16. 21 Apr, 2022 4 commits
    • Remove inadequate assertion check in CTE inlining. · da22ef38
      Tom Lane authored
      inline_cte() expected to find exactly as many references to the
      target CTE as its cterefcount indicates.  While that should be
      accurate for the tree as emitted by the parser, there are some
      optimizations that occur upstream of here that could falsify it,
      notably removal of unused subquery output expressions.
      
      Trying to make the accounting 100% accurate seems expensive and
      doomed to future breakage.  It's not really worth it, because
      all this code is protecting is downstream assumptions that every
      referenced CTE has a plan.  Let's convert those assertions to
      regular test-and-elog just in case there's some actual problem,
      and then drop the failing assertion.
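      
      As a standalone C sketch of the test-and-elog pattern described here
      (with a stand-in elog() so it compiles on its own; the variable names
      and message text are illustrative, not the actual planner code):
      
          #include <stdio.h>
          #include <stdlib.h>
      
          #define ERROR 1
          #define elog(level, ...) \
              do { fprintf(stderr, __VA_ARGS__); fputc('\n', stderr); \
                   exit(level); } while (0)
      
          struct CtePlan;            /* opaque stand-in for the CTE's plan */
      
          static void
          use_cte_plan(struct CtePlan *cteplan, const char *ctename)
          {
              /* Previously Assert(cteplan != NULL); now a runtime check. */
              if (cteplan == NULL)
                  elog(ERROR, "could not find plan for CTE \"%s\"", ctename);
      
              /* ... use the plan ... */
          }
      
          int
          main(void)
          {
              use_cte_plan(NULL, "cte_x");    /* exercises the error path */
              return 0;
          }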
      
      Per report from Tomas Vondra (thanks also to Richard Guo for
      analysis).  Back-patch to v12 where the faulty code came in.
      
      Discussion: https://postgr.es/m/29196a1e-ed47-c7ca-9be2-b1c636816183@enterprisedb.com
    • Support new perl module namespace in stable branches · b235d41d
      Andrew Dunstan authored
      Commit b3b4d8e68a moved our perl test modules to a better namespace
      structure, but this has made life hard for people wishing to backpatch
      improvements in the TAP tests. Here we alleviate much of that difficulty
      by implementing the new module names on top of the old modules, mostly
      by using a little perl typeglob aliasing magic, so that we don't have a
      dual maintenance burden. This should work both for the case where a new
      test is backpatched and the case where a fix to an existing test that
      uses the new namespace is backpatched.
      
      Reviewed by Michael Paquier
      
      Per complaint from Andres Freund
      
      Discussion: https://postgr.es/m/20220418141530.nfxtkohefvwnzncl@alap3.anarazel.de
      
      Applied to branches 10 through 14
    • postgres_fdw: Disable batch insert when BEFORE ROW INSERT triggers exist. · 89d349b0
      Etsuro Fujita authored
      Previously, we allowed this, but such triggers might query the table
      being inserted into and act differently when tuples that have already
      been processed and prepared for insertion are not yet visible there, so
      disable batch insert in such cases.
      
      Back-patch to v14 where batch insert was added.
      
      Discussion: https://postgr.es/m/CAPmGK16_uPqsmgK0-LpLSUk54_BoK13bPrhxhfjSoSTVz414hA%40mail.gmail.com
    • Fix CLUSTER tuplesorts on abbreviated expressions. · e4521841
      Peter Geoghegan authored
      CLUSTER sort won't use the datum1 SortTuple field when clustering
      against an index whose leading key is an expression.  This makes it
      unsafe to use the abbreviated keys optimization, which was missed by the
      logic that sets up SortSupport state.  Affected tuplesorts output tuples
      in a completely bogus order as a result (the wrong SortSupport based
      comparator was used for the leading attribute).
      
      This issue is similar to the bug fixed on the master branch by recent
      commit cc58eecc5d.  But it's a far older issue, that dates back to the
      introduction of the abbreviated keys optimization by commit 4ea51cdf.
      
      Backpatch to all supported versions.
      
      Author: Peter Geoghegan <pg@bowt.ie>
      Author: Thomas Munro <thomas.munro@gmail.com>
      Discussion: https://postgr.es/m/CA+hUKG+bA+bmwD36_oDxAoLrCwZjVtST2fqe=b4=qZcmU7u89A@mail.gmail.com
      Backpatch: 10-
  17. 20 Apr, 2022 4 commits
    • Disallow infinite endpoints in generate_series() for timestamps. · e3463294
      Tom Lane authored
      Such cases will lead to infinite loops, so they're of no practical
      value.  The numeric variant of generate_series() already threw error
      for this, so borrow its message wording.
      
      Per report from Richard Wesley.  Back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/91B44E7B-68D5-448F-95C8-B4B3B0F5DEAF@duckdblabs.com
    • Allow db.schema.table patterns, but complain about random garbage. · 4a66300a
      Robert Haas authored
      psql, pg_dump, and pg_amcheck share code to process object name
      patterns like 'foo*.bar*' to match all tables with names starting in
      'bar' that are in schemas starting with 'foo'. Before v14, any number
      of extra name parts were silently ignored, so a command line '\d
      foo.bar.baz.bletch.quux' was interpreted as '\d bletch.quux'.  In v14,
      as a result of commit 2c8726c4, we
      instead treated this as a request for table quux in a schema named
      'foo.bar.baz.bletch'. That caused problems for people like Justin
      Pryzby who were accustomed to copying strings of the form
      db.schema.table from messages generated by PostgreSQL itself and using
      them as arguments to \d.
      
      Accordingly, revise things so that if an object name pattern contains
      more parts than we're expecting, we throw an error, unless there's
      exactly one extra part and it matches the current database name.
      That way, thisdb.myschema.mytable is accepted as meaning just
      myschema.mytable, but otherdb.myschema.mytable is an error, and so
      is some.random.garbage.myschema.mytable.
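      
      As a toy C sketch of that rule (plain string splitting and comparison
      only; the real shared frontend code also handles quoting and treats the
      extra part as a pattern to match against the database name):
      
          #include <stdio.h>
          #include <string.h>
      
          static void
          check_pattern(const char *pattern, const char *current_db)
          {
              char buf[256];
              char *parts[8];
              int nparts = 0;
      
              snprintf(buf, sizeof(buf), "%s", pattern);
              for (char *tok = strtok(buf, "."); tok && nparts < 8;
                   tok = strtok(NULL, "."))
                  parts[nparts++] = tok;
      
              if (nparts > 3)
                  printf("error: too many dotted names: %s\n", pattern);
              else if (nparts == 3 && strcmp(parts[0], current_db) != 0)
                  printf("error: database name does not match: %s\n", pattern);
              else
                  printf("ok: %s\n", pattern);
          }
      
          int
          main(void)
          {
              check_pattern("thisdb.myschema.mytable", "thisdb");
              check_pattern("otherdb.myschema.mytable", "thisdb");
              check_pattern("some.random.garbage.myschema.mytable", "thisdb");
              return 0;
          }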
      
      Mark Dilger, per report from Justin Pryzby and discussion among
      various people.
      
      Discussion: https://www.postgresql.org/message-id/20211013165426.GD27491%40telsasoft.com
    • Stabilize streaming tests in test_decoding. · 7891a0d5
      Amit Kapila authored
      We have some streaming tests that rely on the size of changes, and
      these can fail if background activity such as autoanalyze generates
      additional changes like invalidation messages.  Avoid such failures by
      increasing autovacuum_naptime to a reasonably high value (1d).
      
      Author: Dilip Kumar
      Backpatch-through: 14
      Discussion: https://postgr.es/m/1958043.1650129119@sss.pgh.pa.us
    • Fix breakage in AlterFunction(). · 08a9e7a8
      Tom Lane authored
      An ALTER FUNCTION command that tried to update both the function's
      proparallel property and its proconfig list failed to do the former,
      because it stored the new proparallel value into a tuple that was
      no longer the interesting one.  Carelessness in 7aea8e4f.
      
      (I did not bother with a regression test, because the only likely
      future breakage would be for someone to ignore the comment I added
      and add some other field update after the heap_modify_tuple step.
      A test using existing function properties could not catch that.)
      
      Per report from Bryn Llewellyn.  Back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/8AC9A37F-99BD-446F-A2F7-B89AD0022774@yugabyte.com
  18. 19 Apr, 2022 1 commit