1. 27 Jan, 2022 1 commit
  2. 26 Jan, 2022 1 commit
    • Fix pg_hba_file_rules for authentication method cert · 4afae689
      Magnus Hagander authored
      For authentication method cert, clientcert=verify-full is implied. But
      the pg_hba_file_rules entry would incorrectly show clientcert=verify-ca.
      
      Per bug #17354
      
      Reported-By: Feike Steenbergen
      Reviewed-By: Jonathan Katz
      Backpatch-through: 12
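      A hedged sketch of the kind of change involved in hba.c's parsing
      step (illustrative, using the ClientCertMode values from hba.h; not
      necessarily the exact committed hunk):

          if (parsedline->auth_method == uaCert)
          {
              /*
               * For authentication method cert, client certificate
               * verification is mandatory and implies verify-full, so store
               * that value for pg_hba_file_rules to report.
               */
              parsedline->clientcert = clientCertFull;
          }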
  3. 25 Jan, 2022 3 commits
  4. 24 Jan, 2022 2 commits
    • Fix limitations on what SQL commands can be issued to a walsender. · 1efcc594
      Tom Lane authored
      In logical replication mode, a WalSender is supposed to be able
      to execute any regular SQL command, as well as the special
      replication commands.  Poor design of the replication-command
      parser caused it to fail in various cases, notably:
      
      * semicolons embedded in a command, or multiple SQL commands
      sent in a single message;
      
      * dollar-quoted literals containing odd numbers of single
      or double quote marks;
      
      * commands starting with a comment.
      
      The basic problem here is that we're trying to run repl_scanner.l
      across the entire input string even when it's not a replication
      command.  Since repl_scanner.l does not understand all of the
      token types known to the core lexer, this is doomed to have
      failure modes.
      
      We certainly don't want to make repl_scanner.l as big as scan.l,
      so instead rejigger stuff so that we only lex the first token of
      a non-replication command.  That will usually look like an IDENT
      to repl_scanner.l, though a comment would end up getting reported
      as a '-' or '/' single-character token.  If the token is a replication
      command keyword, we push it back and proceed normally with repl_gram.y
      parsing.  Otherwise, we can drop out of exec_replication_command()
      without examining the rest of the string.
      
      (It's still theoretically possible for repl_scanner.l to fail on
      the first token; but that could only happen if it's an unterminated
      single- or double-quoted string, in which case you'd have gotten
      largely the same error from the core lexer too.)
      
      In this way, repl_gram.y isn't involved at all in handling general
      SQL commands, so we can get rid of the SQLCmd node type.  (In
      the back branches, we can't remove it because renumbering enum
      NodeTag would be an ABI break; so just leave it sit there unused.)
      
      I failed to resist the temptation to clean up some other sloppy
      coding in repl_scanner.l while at it.  The only externally-visible
      behavior change from that is it now accepts \r and \f as whitespace,
      same as the core lexer.
      
      Per bug #17379 from Greg Rychlewski.  Back-patch to all supported
      branches.
      
      Discussion: https://postgr.es/m/17379-6a5c6cfb3f1f5e77@postgresql.org
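      A hedged sketch of the resulting control flow in
      exec_replication_command() (simplified; helper names as on master,
      surrounding locals assumed):

          /* Set up the replication scanner over the raw command string. */
          replication_scanner_init(cmd_string);

          /*
           * Lex only the first token.  If it is not a replication-command
           * keyword, this is plain SQL: bail out here and let the caller run
           * the string through exec_simple_query() instead.
           */
          if (!replication_scanner_is_replication_command())
          {
              replication_scanner_finish();
              return false;
          }

          /* It is a replication command, so parse it fully with repl_gram.y. */
          replication_yyparse();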
    • Remember to reset yy_start state when firing up repl_scanner.l. · ef9706bb
      Tom Lane authored
      Without this, we get odd behavior when the previous cycle of
      lexing exited in a non-default exclusive state.  Every other
      copy of this code is aware that it has to do BEGIN(INITIAL),
      but repl_scanner.l did not get that memo.
      
      The real-world impact of this is probably limited, since most
      replication clients would abandon their connection after getting
      a syntax error.  Still, it's a bug.
      
      This mistake is old, so back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/1874781.1643035952@sss.pgh.pa.us
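      The fix itself is the standard one-liner in the scanner's init
      function; a hedged sketch of the idiom, as the other scanners
      already do it:

          void
          replication_scanner_init(const char *str)
          {
              /* ... set up the flex input buffer over str ... */

              /* Make sure we start in the proper start condition. */
              BEGIN(INITIAL);
          }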
  5. 23 Jan, 2022 4 commits
  6. 22 Jan, 2022 1 commit
  7. 21 Jan, 2022 4 commits
  8. 20 Jan, 2022 4 commits
  9. 19 Jan, 2022 2 commits
  10. 18 Jan, 2022 2 commits
    • Try to stabilize the reloptions test. · ff4d0d8c
      Thomas Munro authored
      Where we test vacuum_truncate's effects, sometimes this is failing to
      truncate as expected on the build farm.  That could be explained by page
      skipping, so disable it explicitly, with the theory that commit fe246d1c
      didn't go far enough.
      
      Back-patch to 12, where the vacuum_truncate tests were added.
      
      Discussion: https://postgr.es/m/CA%2BhUKGLT2UL5_JhmBzUgkdyKfc%3D5J-gJSQJLysMs4rqLUKLAzw%40mail.gmail.com
    • Fix psql \d's query for identifying parent triggers. · 3886785b
      Tom Lane authored
      The original coding (from c33869cc) failed with "more than one row
      returned by a subquery used as an expression" if there were unrelated
      triggers of the same tgname on parent partitioned tables.  (That's
      possible because statement-level triggers don't get inherited.)  Fix
      by applying LIMIT 1 after sorting the candidates by inheritance level.
      
      Also, wrap the subquery in a CASE so that we don't have to execute it at
      all when the trigger is visibly non-inherited.  Aside from saving some
      cycles, this avoids the need for a confusing and undocumented NULLIF().
      
      While here, tweak the format of the emitted query to look a bit
      nicer for "psql -E", and add some explanation of this subquery,
      because it badly needs it.
      
      Report and patch by Justin Pryzby (with some editing by me).
      Back-patch to v13 where the faulty code came in.
      
      Discussion: https://postgr.es/m/20211217154356.GJ17618@telsasoft.com
  11. 17 Jan, 2022 2 commits
    • Avoid calling gettext() in signal handlers. · 4e872656
      Tom Lane authored
      It seems highly unlikely that gettext() can be relied on to be
      async-signal-safe.  psql used to understand that, but someone got
      it wrong long ago in the src/bin/scripts/ version of handle_sigint,
      and then the bad idea was perpetuated when those two versions were
      unified into src/fe_utils/cancel.c.
      
      I'm unsure why there have not been field complaints about this
      ... maybe gettext() is signal-safe once it's translated at least
      one message?  But we have no business assuming any such thing.
      
      In cancel.c (v13 and up), I preserved our ability to localize
      "Cancel request sent" messages by invoking gettext() before
      the signal handler is set up.  In earlier branches I just made
      src/bin/scripts/ not localize those messages, as psql did then.
      
      (Just for extra unsafety, the src/bin/scripts/ version was
      invoking fprintf() from a signal handler.  Sigh.)
      
      Noted while fixing signal-safety issues in PQcancel() itself.
      Back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/2937814.1641960929@sss.pgh.pa.us
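      A minimal standalone sketch of the pattern (hypothetical names, not
      the exact cancel.c code): translate the message once, before the
      handler is installed, and make only async-signal-safe calls inside
      the handler itself.

          #include <libintl.h>
          #include <signal.h>
          #include <string.h>
          #include <unistd.h>
          #define _(msg) gettext(msg)

          /* Translated up front, while calling gettext() is still safe. */
          static const char *cancel_sent_msg = "Cancel request sent\n";

          static void
          handle_sigint(int signum)
          {
              (void) signum;
              /* Async-signal-safe calls only: no gettext(), no fprintf(). */
              (void) write(STDERR_FILENO, cancel_sent_msg, strlen(cancel_sent_msg));
          }

          void
          setup_cancel_handler(void)
          {
              cancel_sent_msg = _("Cancel request sent\n");
              signal(SIGINT, handle_sigint);
          }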
    • Avoid calling strerror[_r] in PQcancel(). · 05094987
      Tom Lane authored
      PQcancel() is supposed to be safe to call from a signal handler,
      and indeed psql uses it that way.  All of the library functions
      it uses are specified to be async-signal-safe by POSIX ...
      except for strerror.  Neither plain strerror nor strerror_r
      are considered safe.  When this code was written, back in the
      dark ages, we probably figured "oh, strerror will just index
      into a constant array of strings" ... but in any locale except C,
      that's unlikely to be true.  Probably the reason we've not heard
      complaints is that (a) this error-handling code is unlikely to be
      reached in normal use, and (b) in many scenarios, localized error
      strings would already have been loaded, after which maybe it's
      safe to call strerror here.  Still, this is clearly unacceptable.
      
      The best we can do without relying on strerror is to print the
      decimal value of errno, so make it do that instead.  (This is
      probably not much loss of user-friendliness, given that it is
      hard to get a failure here.)
      
      Back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/2937814.1641960929@sss.pgh.pa.us
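      A hedged sketch of that approach (hypothetical helper, not the
      actual libpq code): format the decimal value of errno by hand,
      using only operations on recent POSIX async-signal-safe lists.

          #include <string.h>

          static void
          append_errno_decimal(char *buf, size_t bufsize, int errnum)
          {
              char        digits[16];
              int         pos = sizeof(digits);
              unsigned int u = (errnum < 0) ? -(unsigned int) errnum : (unsigned int) errnum;

              digits[--pos] = '\0';
              do
              {
                  digits[--pos] = '0' + (u % 10);
                  u /= 10;
              } while (u != 0 && pos > 1);
              if (errnum < 0 && pos > 0)
                  digits[--pos] = '-';

              strncat(buf, digits + pos, bufsize - strlen(buf) - 1);
          }

      Such a helper would be called where the old code called strerror(),
      e.g. append_errno_decimal(errbuf, sizeof(errbuf), SOCK_ERRNO).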
  12. 16 Jan, 2022 2 commits
  13. 15 Jan, 2022 2 commits
    • Build inherited extended stats on partitioned tables · ea212bd9
      Tomas Vondra authored
      Commit 859b3003de disabled building of extended stats for inheritance
      trees, to prevent updating the same catalog row twice. While that
      resolved the issue, it also means there are no extended stats for
      declaratively partitioned tables, because there are no data in the
      non-leaf relations.
      
      That also means declaratively partitioned tables were not affected by
      the issue 859b3003de addressed, so this is a regression affecting
      queries that calculate estimates for the inheritance tree as a whole
      (which includes e.g. GROUP BY queries).
      
      But because partitioned tables are empty, we can invert the condition
      and build statistics only for the case with inheritance, without losing
      anything. And we can consider them when calculating estimates.
      
      It may be necessary to run ANALYZE on partitioned tables, to collect
      proper statistics. For declarative partitioning there should be no
      prior statistics, and it might take time before autoanalyze is
      triggered. For tables partitioned by inheritance the statistics may
      include data from child relations (if built before 859b3003de),
      contradicting the current code.
      
      Report and patch by Justin Pryzby, minor fixes and cleanup by me.
      Backpatch all the way back to PostgreSQL 10, where extended statistics
      were introduced (same as 859b3003de).
      
      Author: Justin Pryzby
      Reported-by: Justin Pryzby
      Backpatch-through: 10
      Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
    • Ignore extended statistics for inheritance trees · 2cc007fd
      Tomas Vondra authored
      Since commit 859b3003de we only build extended statistics for individual
      relations, ignoring the child relations. This resolved the issue with
      updating the catalog tuple twice, but we still tried to use the statistics
      when calculating estimates for the whole inheritance tree. When the
      relations contain very distinct data, it may produce bogus estimates.
      
      This is roughly the same issue 427c6b5b addressed ~15 years ago, and we
      fix it the same way - by ignoring extended statistics when calculating
      estimates for the inheritance tree as a whole. We still consider
      extended statistics when calculating estimates for individual child
      relations, of course.
      
      This may result in plan changes due to different estimates, but if the
      old statistics were not describing the inheritance tree particularly
      well it's quite likely the new plans are actually better.
      
      Report and patch by Justin Pryzby, minor fixes and cleanup by me.
      Backpatch all the way back to PostgreSQL 10, where extended statistics
      were introduced (same as 859b3003de).
      
      Author: Justin Pryzby
      Reported-by: Justin Pryzby
      Backpatch-through: 10
      Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
  14. 14 Jan, 2022 2 commits
    • Fix possible HOT corruption when RECENTLY_DEAD changes to DEAD while pruning. · dad1539a
      Andres Freund authored
      Since dc7420c2 the horizon used for pruning is determined "lazily". A more
      accurate horizon is built on-demand, rather than in GetSnapshotData(). If a
      horizon computation is triggered between two HeapTupleSatisfiesVacuum() calls
      for the same tuple, the result can change from RECENTLY_DEAD to DEAD.
      
      heap_page_prune() can process the same tid multiple times (once following an
      update chain, once "directly"). When the result of HeapTupleSatisfiesVacuum()
      of a tuple changes from RECENTLY_DEAD during the first access, to DEAD in the
      second, the "tuple is DEAD and doesn't chain to anything else" path in
      heap_prune_chain() can end up marking the target of a LP_REDIRECT ItemId
      unused.
      
      Initially this is not easily visible. But once the target of an
      LP_REDIRECT ItemId is marked unused, a new tuple version can reuse it.
      At that point the corruption may become visible, as index entries
      pointing to the "original" redirect item now point to an unrelated
      tuple.
      
      To fix, compute HTSV for all tuples on a page only once. This fixes the entire
      class of problems of HTSV changing inside heap_page_prune(). However,
      visibility changes can obviously still occur between HTSV checks inside
      heap_page_prune() and outside (e.g. in lazy_scan_prune()).
      
      The computation of HTSV is now done in bulk, in heap_page_prune(), rather than
      on-demand in heap_prune_chain(). Besides being a bit simpler, it also is
      faster: Memory accesses can happen sequentially, rather than in the order of
      HOT chains.
      
      There are other causes of HeapTupleSatisfiesVacuum() results changing between
      two visibility checks for the same tuple, even before dc7420c2. E.g.
      HEAPTUPLE_INSERT_IN_PROGRESS can change to HEAPTUPLE_DEAD when a transaction
      aborts between the two checks. None of these other visibility status
      changes are known to cause corruption, but heap_page_prune()'s approach makes
      it hard to be confident.
      
      A patch implementing a more fundamental redesign of heap_page_prune(), which
      fixes this bug and simplifies pruning substantially, has been proposed by
      Peter Geoghegan in
      https://postgr.es/m/CAH2-WzmNk6V6tqzuuabxoxM8HJRaWU6h12toaS-bqYcLiht16A@mail.gmail.com
      
      However, that redesign is a larger change than desirable for backpatching. As
      the new design still benefits from the batched visibility determination
      introduced in this commit, it makes sense to commit this narrower fix to 14
      and master, and then commit Peter's improvement in master.
      
      The precise sequence required to trigger the bug is complicated and hard to
      exercise in an isolation test (until we have wait points). Due to that the
      isolation test initially posted at
      https://postgr.es/m/20211119003623.d3jusiytzjqwb62p%40alap3.anarazel.de
      and updated in
      https://postgr.es/m/20211122175914.ayk6gg6nvdwuhrzb%40alap3.anarazel.de
      isn't committable.
      
      A followup commit will introduce additional assertions, to detect problems
      like this more easily.
      
      Bug: #17255
      Reported-By: Alexander Lakhin <exclusion@gmail.com>
      Debugged-By: Andres Freund <andres@anarazel.de>
      Debugged-By: Peter Geoghegan <pg@bowt.ie>
      Author: Andres Freund <andres@anarazel.de>
      Reviewed-By: Peter Geoghegan <pg@bowt.ie>
      Discussion: https://postgr.es/m/20211122175914.ayk6gg6nvdwuhrzb@alap3.anarazel.de
      Backpatch: 14-, the oldest branch containing dc7420c2
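      A hedged sketch of the batched computation (simplified: the real
      code goes through heap_prune_satisfies_vacuum() and a GlobalVisState
      rather than a bare OldestXmin, and the usual pruning locals such as
      page, maxoff, blockno and buffer are assumed):

          int8        htsv[MaxHeapTuplesPerPage + 1];
          OffsetNumber offnum;

          /* Compute HeapTupleSatisfiesVacuum() exactly once per line pointer. */
          for (offnum = FirstOffsetNumber;
               offnum <= maxoff;
               offnum = OffsetNumberNext(offnum))
          {
              ItemId      itemid = PageGetItemId(page, offnum);
              HeapTupleData tup;

              if (!ItemIdIsNormal(itemid))
              {
                  htsv[offnum] = -1;  /* no normal tuple to check here */
                  continue;
              }

              tup.t_data = (HeapTupleHeader) PageGetItem(page, itemid);
              tup.t_len = ItemIdGetLength(itemid);
              ItemPointerSet(&tup.t_self, blockno, offnum);
              tup.t_tableOid = RelationGetRelid(relation);

              htsv[offnum] = HeapTupleSatisfiesVacuum(&tup, OldestXmin, buffer);
          }

          /*
           * heap_prune_chain() then consults htsv[] instead of calling
           * HeapTupleSatisfiesVacuum() again, so the verdict for a given tid
           * cannot change between the chain-following and direct passes.
           */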
    • Revert error handling improvements for cryptohashes · ad5b6f24
      Michael Paquier authored
      This reverts commits ab27df24, af8d530e and 3a0cced8, that introduced
      pg_cryptohash_error().  In order to make the core code able to pass down
      the new error types that this introduced, some of the MD5-related
      routines had to be reworked, causing an ABI breakage, but we found that
      some external extensions rely on them.  Maintaining compatibility
      outweighs the error report benefits, so just revert the change in v14.
      
      Reported-by: Laurenz Albe
      Discussion: https://postgr.es/m/9f0c0a96d28cf14fc87296bbe67061c14eb53ae8.camel@cybertec.at
  15. 13 Jan, 2022 2 commits
  16. 12 Jan, 2022 2 commits
    • Fix memory leak in indexUnchanged hint mechanism. · 41ee68a9
      Peter Geoghegan authored
      Commit 9dc718bd added a "logically unchanged by UPDATE" hinting
      mechanism, which is currently used within nbtree indexes only (see
      commit d168b666).  This mechanism determined whether or not the incoming
      item is a logically unchanged duplicate (a duplicate needed only for
      MVCC versioning purposes) once per row updated per non-HOT update.  This
      approach led to memory leaks which were noticeable with an UPDATE
      statement that updated sufficiently many rows, at least on tables that
      happen to have an expression index.
      
      On HEAD, fix the issue by adding a cache to the executor's per-index
      IndexInfo struct.
      
      Take a different approach on Postgres 14 to avoid an ABI break: simply
      pass down the hint to all indexes unconditionally with non-HOT UPDATEs.
      This is deemed acceptable because the hint is currently interpreted
      within btinsert() as "perform a bottom-up index deletion pass if and
      when the only alternative is splitting the leaf page -- prefer to delete
      any LP_DEAD-set items first".  nbtree must always treat the hint as a
      noisy signal about what might work, as a strategy of last resort, with
      costs imposed on non-HOT updaters.  (The same thing might not be true
      within another index AM that applies the hint, which is why the original
      behavior is preserved on HEAD.)
      
      Author: Peter Geoghegan <pg@bowt.ie>
      Reported-By: Klaudie Willis <Klaudie.Willis@protonmail.com>
      Diagnosed-By: Tom Lane <tgl@sss.pgh.pa.us>
      Discussion: https://postgr.es/m/261065.1639497535@sss.pgh.pa.us
      Backpatch: 14-, where the hinting mechanism was added.
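      A hedged sketch of the v14 approach in the executor's
      index-insertion loop (illustrative, not the exact committed hunk):

          /*
           * On 14, skip the per-tuple "logically unchanged" analysis (and
           * its leak-prone allocations): for a non-HOT UPDATE, just pass the
           * hint unconditionally to every index.
           */
          indexUnchanged = update;

          satisfiesConstraint =
              index_insert(indexRelation, values, isnull, tupleid,
                           heapRelation, checkUnique,
                           indexUnchanged, indexInfo);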
    • Fix comment related to pg_cryptohash_error() · af8d530e
      Michael Paquier authored
      One of the comments introduced in b69aba7 was worded a bit weirdly, so
      improve it.
      
      Reported-by: Sergey Shinderuk
      Discussion: https://postgr.es/m/71b9a5d2-a3bf-83bc-a243-93dcf0bcfb3b@postgrespro.ru
      Backpatch-through: 14
  17. 11 Jan, 2022 2 commits
    • Clean up error message reported after \password encryption failure. · ab27df24
      Tom Lane authored
      Experimenting with FIPS mode enabled, I saw
      
      regression=# \password joe
      Enter new password for user "joe":
      Enter it again:
      could not encrypt password: disabled for FIPS
      out of memory
      
      because PQencryptPasswordConn was still of the opinion that "out of
      memory" is always appropriate to print.
      
      Minor oversight in b69aba745.  Like that one, back-patch to v14.
    • Improve error handling of cryptohash computations · 3a0cced8
      Michael Paquier authored
      The existing cryptohash facility was causing problems in some code paths
      related to MD5 (frontend and backend) that relied on the fact that the
      only type of error that could happen would be an OOM, as the MD5
      implementation used in PostgreSQL ~13 (the in-core implementation is
      used when compiling with or without OpenSSL in those older versions)
      could fail only under this circumstance.
      
      The new cryptohash facilities can fail for reasons other than OOMs, like
      attempting MD5 when FIPS is enabled (upstream OpenSSL allows that up to
      1.0.2, Fedora and Photon patch OpenSSL 1.1.1 to allow that), so this
      would cause incorrect reports to show up.
      
      This commit extends the cryptohash APIs so that callers of those routines
      can fetch more context when an error happens, by using a new routine
      called pg_cryptohash_error().  The error states are stored within each
      implementation's internal context data, so that it is possible to extend
      the logic depending on what's suited for an implementation.  The default
      implementation requires few error states, but OpenSSL could report
      various issues depending on its internal state so more is needed in
      cryptohash_openssl.c, and the code is shaped so that we are always able to
      grab the necessary information.
      
      The core code is changed to adapt to the new error routine, painting
      more "const" across the call stack where the static errors are stored,
      particularly in authentication code paths on variables that provide
      log details.  This way, any future changes would warn if attempting to
      free these strings.  The MD5 authentication code was also a bit blurry
      about the handling of "logdetail" (LOG sent to the postmaster), so
      improve the comments related to that, while on it.
      
      The origin of the problem is 87ae9691, that introduced the centralized
      cryptohash facility.  Extra changes are done for pgcrypto in v14 for the
      non-OpenSSL code path to cope with the improvements done by this
      commit.
      
      Reported-by: Michael Mühlbeyer
      Author: Michael Paquier
      Reviewed-by: Tom Lane
      Discussion: https://postgr.es/m/89B7F072-5BBE-4C92-903E-D83E865D9367@trivadis.com
      Backpatch-through: 14
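      A hedged sketch of how a caller can now surface the real failure
      (hypothetical wrapper function; the pg_cryptohash_* calls are the
      API described above):

          static bool
          compute_md5(const uint8 *buf, size_t buf_len, uint8 *hash,
                      const char **errstr)
          {
              pg_cryptohash_ctx *ctx = pg_cryptohash_create(PG_MD5);

              if (ctx == NULL)
              {
                  *errstr = "out of memory";  /* creation can only fail on OOM */
                  return false;
              }

              if (pg_cryptohash_init(ctx) < 0 ||
                  pg_cryptohash_update(ctx, buf, buf_len) < 0 ||
                  pg_cryptohash_final(ctx, hash, MD5_DIGEST_LENGTH) < 0)
              {
                  /* No longer assume OOM: fetch the real reason. */
                  *errstr = pg_cryptohash_error(ctx);
                  pg_cryptohash_free(ctx);
                  return false;
              }

              pg_cryptohash_free(ctx);
              return true;
          }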
  18. 10 Jan, 2022 2 commits