1. 01 Jun, 2022 3 commits
    • Fix pl/perl test case so it will still work under Perl 5.36. · 1072e4c4
      Tom Lane authored
      Perl 5.36 has reclassified the warning condition that this test
      case used, so that the expected error fails to appear.  Tweak
      the test so it instead exercises a case that's handled the same
      way in all Perl versions of interest.
      
      This appears to meet our standards for back-patching into
      out-of-support branches: it changes no user-visible behavior
      but enables testing of old branches with newer tools.
      Hence, back-patch as far as 9.2.
      
      Dagfinn Ilmari Mannsåker, per report from Jitka Plesníková.
      
      Discussion: https://postgr.es/m/564579.1654093326@sss.pgh.pa.us
    • Doc: fix a couple of erroneous examples. · 80cd99c5
      Tom Lane authored
      The example given for anyelement <@ anymultirange didn't return
      true as claimed; adjust it so it does.
      
      In passing, change a couple of sample results where the modern
      numeric-based logic produces a different number of trailing zeroes
      than before.
      
      Erik Rijkers
      
      Discussion: https://postgr.es/m/cc35735d-1ec1-5bb3-9e27-cddbab7afa23@xs4all.nl
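
      For reference, a query of this shape (an illustration, not necessarily
      the exact example now used in the docs) does return true:

          -- 4 falls inside the first range of the multirange
          SELECT 4 <@ '{[1,7), [10,20)}'::int4multirange;  -- true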
    • Doc: mention limitation of the number of resultset columns · f2a0060a
      David Rowley authored
      The PostgreSQL limitations section of the documentation mentioned the
      limit on the number of columns that can exist in a table.  Users might
      be surprised to find that there's also a limit on the number of columns
      that can appear in a targetlist, which they can run into when selecting
      a large number of columns from several wide tables.  Here we document
      that limitation and mention what the limit actually is.
      
      Wording proposal by Alvaro Herrera
      
      Reported-by: Vladimir Sitnikov
      Author: Dave Crammer
      Reviewed-by: Tom Lane
      Discussion: https://postgr.es/m/CAB=Je-E18aTYpNqje4mT0iEADpeGLSzwUvo3H9kRRuDdsNo4aQ@mail.gmail.com
      Backpatch-through: 12, where the limitations section was added
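
      As a rough illustration of the limit being documented (assuming the
      target-list limit of 1664 entries enforced by current releases), a
      generated SELECT with too many output columns is rejected during parse
      analysis:

          -- Build and run a SELECT with 2000 output columns; it fails because
          -- the target list exceeds the documented limit.
          DO $$
          BEGIN
            EXECUTE 'SELECT ' || (SELECT string_agg('1 AS c' || g, ', ')
                                  FROM generate_series(1, 2000) AS g);
          END;
          $$;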
  2. 31 May, 2022 4 commits
  3. 30 May, 2022 1 commit
  4. 29 May, 2022 2 commits
  5. 28 May, 2022 1 commit
  6. 26 May, 2022 2 commits
    • Remove misguided SSL key file ownership check in libpq. · b4be4a08
      Tom Lane authored
      Commits a59c79564 et al. tried to sync libpq's SSL key file
      permissions checks with what we've used for years in the backend.
      We did not intend to create any new failure cases, but it turns out
      we did: restricting the key file's ownership breaks cases where the
      client is allowed to read a key file despite not having the identical
      UID.  In particular a client running as root used to be able to read
      someone else's key file; and having seen that I suspect that there are
      other, less-dubious use cases that this restriction breaks on some
      platforms.
      
      We don't really need an ownership check, since if we can read the key
      file despite its having restricted permissions, it must have the right
      ownership --- under normal conditions anyway, and the point of this
      patch is that any additional corner cases where that works should be
      deemed allowable, as they have been historically.  Hence, just drop
      the ownership check, and rearrange the permissions check to get rid
      of its faulty assumption that geteuid() can't be zero.  (Note that the
      comparable backend-side code doesn't have to cater for geteuid() == 0,
      since the server rejects that very early on.)
      
      This does have the end result that the permissions safety check used
      for a root user's private key file is weaker than that used for
      anyone else's.  While odd, root really ought to know what she's doing
      with file permissions, so I think this is acceptable.
      
      Per report from Yogendra Suralkar.  Like the previous patch,
      back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/MW3PR15MB3931DF96896DC36D21AFD47CA3D39@MW3PR15MB3931.namprd15.prod.outlook.com
    • In CREATE FOREIGN TABLE syntax synopsis, fix partitioning stuff. · a5fc06bf
      Robert Haas authored
      Foreign tables can be used as partitions of a partitioned table, but
      previous documentation commits left the syntax synopsis both incomplete
      and incorrect.
      
      Justin Pryzby and Amit Langote
      
      Discussion: http://postgr.es/m/20220521130922.GX19626@telsasoft.com
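
      A minimal sketch of the case the synopsis covers, assuming a
      postgres_fdw foreign server named "remote" already exists: a foreign
      table is created directly as a partition of a local partitioned table.

          CREATE TABLE measurements (
              logdate date NOT NULL,
              reading numeric
          ) PARTITION BY RANGE (logdate);

          CREATE FOREIGN TABLE measurements_2022
              PARTITION OF measurements
              FOR VALUES FROM ('2022-01-01') TO ('2023-01-01')
              SERVER remote
              OPTIONS (table_name 'measurements_2022');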
  7. 21 May, 2022 2 commits
    • Show 'AS "?column?"' explicitly when it's important. · 6f7eec11
      Tom Lane authored
      ruleutils.c was coded to suppress the AS label for a SELECT output
      expression if the column name is "?column?", which is the parser's
      fallback if it can't think of something better.  This is fine, and
      avoids ugly clutter, so long as (1) nothing further up in the parse
      tree relies on that column name or (2) the same fallback would be
      assigned when the rule or view definition is reloaded.  Unfortunately
      (2) is far from certain, both because ruleutils.c might print the
      expression in a different form from how it was originally written
      and because FigureColname's rules might change in future releases.
      So we shouldn't rely on that.
      
      Detecting exactly whether there is any outer-level use of a SELECT
      column name would be rather expensive.  This patch takes the simpler
      approach of just passing down a flag indicating whether there *could*
      be any outer use; for example, the output column names of a SubLink
      are not referenceable, and we also do not care about the names exposed
      by the right-hand side of a setop.  This is sufficient to suppress
      unwanted clutter in all but one case in the regression tests.  That
      seems like reasonable evidence that it won't be too much in users'
      faces, while still fixing the cases we need to fix.
      
      Per bug #17486 from Nicolas Lutic.  This issue is ancient, so
      back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/17486-1ad6fd786728b8af@postgresql.org
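
      To see where the fallback name comes from, and the sort of construct
      (an illustration, not the query from the bug report) in which an outer
      query can come to depend on it:

          SELECT 1 + 1;        -- the output column is named "?column?"

          -- The subquery's output column gets the fallback name and the outer
          -- query references it by that name; suppressing the AS label when
          -- deparsing such a view risks the reloaded definition assigning a
          -- different name.
          CREATE VIEW v AS
            SELECT "?column?" + 1 AS result
            FROM (SELECT 1 + 1) AS sub;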
    • doc: Mention pg_read_all_stats in description of track_activities · 7f798e89
      Michael Paquier authored
      The description of track_activities mentioned that the tracked
      information is visible to superusers and that each session can see its
      own information, without mentioning pg_read_all_stats.  Roles granted
      the privileges of pg_read_all_stats can also see this information, so
      mention that in the docs.
      
      Author: Ian Barwick
      Reviewed-by: Nathan Bossart
      Discussion: https://postgr.es/m/CAB8KJ=jhPyYFu-A5r-ZGP+Ax715mUKsMxAGcEQ9Cx_mBAmrPow@mail.gmail.com
      Backpatch-through: 10
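
      A minimal sketch of what the added sentence describes (the role name is
      illustrative):

          CREATE ROLE monitor LOGIN;
          GRANT pg_read_all_stats TO monitor;

          -- Connected as "monitor" (not a superuser), the query text of other
          -- sessions is visible in pg_stat_activity instead of showing as
          -- "<insufficient privilege>".
          SELECT pid, usename, state, query FROM pg_stat_activity;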
  8. 20 May, 2022 2 commits
  9. 19 May, 2022 3 commits
  10. 18 May, 2022 2 commits
  11. 16 May, 2022 2 commits
    • Fix incorrect row estimates used for Memoize costing · 23c2b76a
      David Rowley authored
      In order to estimate the cache hit ratio of a Memoize node, one of the
      inputs we require is the estimated number of times the Memoize node will
      be rescanned.  The higher this number, the larger the cache hit ratio is
      likely to become.  Unfortunately, the value being passed as the number of
      "calls" to the Memoize was incorrectly using the Nested Loop's
      outer_path->parent->rows instead of outer_path->rows.  This failed to
      account for the fact that the outer_path might be parameterized by some
      upper-level Nested Loop.
      
      This problem could lead to Memoize plans appearing more favorable than
      they might actually be.  It could also lead to extended executor startup
      times when work_mem values were large, due to the planner setting an
      overly large MemoizePath->est_entries and the Memoize hash table
      therefore being made much larger initially than required.
      
      Fix this simply by passing outer_path->rows rather than
      outer_path->parent->rows.  Also, adjust the expected regression test
      output for a plan change.
      
      Reported-by: Pavel Stehule
      Author: David Rowley
      Discussion: https://postgr.es/m/CAFj8pRAMp%3DQsMi6sPQJ4W3hczoFJRvyXHJV3AZAZaMyTVM312Q%40mail.gmail.com
      Backpatch-through: 14, where Memoize was introduced
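
      The costing change is internal to the planner, but a quick way to check
      whether Memoize is chosen for a given join, and how the plan compares
      without it, is the sketch below (hypothetical tables; whether a Memoize
      node actually appears depends on statistics and settings):

          CREATE TABLE t1 (a int);
          CREATE TABLE t2 (a int PRIMARY KEY);
          INSERT INTO t1 SELECT g % 1000 FROM generate_series(1, 10000) AS g;
          INSERT INTO t2 SELECT g FROM generate_series(1, 1000) AS g;
          ANALYZE t1, t2;

          EXPLAIN SELECT * FROM t1 JOIN t2 ON t1.a = t2.a;  -- may use Memoize

          SET enable_memoize = off;                         -- compare costs
          EXPLAIN SELECT * FROM t1 JOIN t2 ON t1.a = t2.a;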
    • Fix control file update done in restartpoints still running after promotion · 6dced63b
      Michael Paquier authored
      If a cluster is promoted (i.e. the control file shows a state different
      from DB_IN_ARCHIVE_RECOVERY) while CreateRestartPoint() is still
      processing, this function could miss updating the control file's
      "checkPoint" and "checkPointCopy" fields yet still recycle and/or remove
      past WAL segments, assuming that the to-be-updated LSN values should be
      used as reference points for the cleanup.  A follow-up restart
      attempting crash recovery then fails with a PANIC on a missing
      checkpoint record if the cluster stopped abruptly or crashed before the
      end-of-recovery checkpoint triggered by the promotion could complete.
      The PANIC happens because the redo LSN referred to in the control file
      points into a segment that is already gone, recycled by the previous
      restartpoint while "checkPoint" was left out-of-sync in the control
      file.
      
      This commit fixes the update of the control file during restartpoints so
      that "checkPoint" and "checkPointCopy" are updated even if the cluster
      has been promoted while a restartpoint is running, keeping them
      consistent with the set of WAL segments actually recycled at the end of
      CreateRestartPoint().
      
      7863ee4 already fixed this problem on master, but the release timing of
      the latest point versions did not leave enough time to study and fix
      that on all the stable branches.
      
      Reported-by: Fujii Masao, Rui Zhao
      Author: Kyotaro Horiguchi
      Reviewed-by: Nathan Bossart, Michael Paquier
      Discussion: https://postgr.es/m/20220316.102444.2193181487576617583.horikyota.ntt@gmail.com
      Backpatch-through: 10
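
      The fields in question can be inspected from SQL; on a standby, these
      are the values that this fix keeps consistent with the set of WAL
      segments recycled by the restartpoint:

          -- checkpoint_lsn and redo_lsn are read from the control file's
          -- checkPoint / checkPointCopy data.
          SELECT checkpoint_lsn, redo_lsn, timeline_id
          FROM pg_control_checkpoint();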
  12. 12 May, 2022 1 commit
    • Make pull_var_clause() handle GroupingFuncs exactly like Aggrefs. · ac51c9fb
      Tom Lane authored
      This follows in the footsteps of commit 2591ee8ec by removing one more
      ill-advised shortcut from planning of GroupingFuncs.  It's true that
      we don't intend to execute the argument expression(s) at runtime, but
      we still have to process any Vars appearing within them, or we risk
      failure at setrefs.c time (or more fundamentally, in EXPLAIN trying
      to print such an expression).  Vars in upper plan nodes have to have
      referents in the next plan level, whether we ever execute 'em or not.
      
      Per bug #17479 from Michael J. Sullivan.  Back-patch to all supported
      branches.
      
      Richard Guo
      
      Discussion: https://postgr.es/m/17479-6260deceaf0ad304@postgresql.org
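
      An illustrative query (not the one from the bug report) in which
      GROUPING()'s arguments contain Vars that are never evaluated at run time
      yet still need referents in the plan tree:

          SELECT a, b, GROUPING(a, b) AS grouping_level
          FROM (VALUES (1, 10), (1, 20), (2, 30)) AS t(a, b)
          GROUP BY ROLLUP (a, b);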
  13. 11 May, 2022 2 commits
    • Fix the logical replication timeout during large transactions. · d6da71fa
      Amit Kapila authored
      The problem is that during logical replication we can go for a long time
      without sending keep-alive messages while processing a large transaction
      from which no data is sent at all.  This can happen when the table
      modified in the transaction is not published or when all of its changes
      get filtered out.  We do try to send a keep-alive if necessary at the
      end of the transaction (via WalSndWriteData()), but by that time the
      subscriber side may already have timed out and exited.

      To fix this, try to send a keep-alive message, if required, after
      processing a certain threshold of changes.
      
      Reported-by: Fabrice Chapuis
      Author: Wang wei and Amit Kapila
      Reviewed By: Masahiko Sawada, Euler Taveira, Hou Zhijie, Hayato Kuroda
      Backpatch-through: 10
      Discussion: https://postgr.es/m/CAA5-nLARN7-3SLU_QUxfy510pmrYK6JJb=bk3hcgemAM_pAv+w@mail.gmail.com
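
      A sketch of the triggering scenario (table and publication names are
      illustrative, and both tables are assumed to exist on the publisher):
      the publication covers only one table, so a large transaction touching
      a different table generates no data to send, and previously no
      keep-alives either, letting the subscriber hit wal_receiver_timeout.

          -- On the publisher: only t_published is replicated.
          CREATE PUBLICATION pub_one FOR TABLE t_published;

          -- A large transaction touching only an unpublished table produces
          -- no logical-replication traffic while it is decoded.
          BEGIN;
          INSERT INTO t_unpublished
              SELECT g FROM generate_series(1, 10000000) AS g;
          COMMIT;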
    • Improve setup of environment values for commands in MSVC's vcregress.pl · ca9e9b08
      Michael Paquier authored
      The current setup assumes that the lz4, zstd and gzip commands always
      exist by default if not set in the user's environment.  However, vcpkg,
      as one example, installs libraries but no binaries, so assuming by
      default that a command is always present would cause failures.  This
      commit improves the detection of such external commands as follows:
      * If an ENV value is available, trust the environment/user and use it.
      * If an ENV value is not available, check whether the command can be run
      from the current PATH by launching a simple "$command --version" (which
      should be portable enough).
      ** On execution failure, ignore ENV{command}.
      ** On execution success, set ENV{command} = "$command".
      
      Note that this new rule applies to gzip, lz4 and zstd but not to tar,
      which we assume will always exist.  Those commands are set up in the
      environment only when using bincheck and taptest.  The CI includes all
      those commands and I have checked that their setup is correct there.  I
      have also tested this change in an MSVC environment where we have none
      of those commands.

      While on it, remove the references to lz4 from the documentation and
      vcregress.pl in ~v13.  --with-lz4 was added in v14~, so there is no
      point in keeping this information in these older branches.
      
      Reported-by: Andrew Dunstan
      Reviewed-by: Andrew Dunstan
      Discussion: https://postgr.es/m/14402151-376b-a57a-6d0c-10ad12608e12@dunslane.net
      Backpatch-through: 10
  14. 10 May, 2022 1 commit
    • configure: don't probe for libldap_r if libldap is 2.5 or newer. · 12736e7d
      Tom Lane authored
      In OpenLDAP 2.5 and later, libldap itself is always thread-safe and
      there's never a libldap_r.  Our existing coding dealt with that
      by assuming it wouldn't find libldap_r if libldap is thread-safe.
      But that rule fails to cope if there are multiple OpenLDAP versions
      visible, as is likely to be the case on macOS in particular.  We'd
      end up using shiny new libldap in the backend and a hoary libldap_r
      in libpq.
      
      Instead, once we've found libldap, check if it's >= 2.5 (by
      probing for a function introduced then) and don't bother looking
      for libldap_r if so.  While one can imagine library setups that
      this'd still give the wrong answer for, they seem unlikely to
      occur in practice.
      
      Per report from Peter Eisentraut.  Back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/fedacd7c-2a38-25c9-e7ff-dea549d0e979@enterprisedb.com
  15. 09 May, 2022 8 commits
  16. 08 May, 2022 1 commit
  17. 07 May, 2022 2 commits
  18. 06 May, 2022 1 commit