1. 01 Sep, 2021 3 commits
    • pgbench: Fix bug in measurement of disconnection delays. · d760d942
      Fujii Masao authored
      When the -C/--connect option is specified, pgbench establishes and
      closes a connection for each transaction. In this case pgbench needs
      to measure the time taken by all of those connections and
      disconnections in order to include the average connection time in
      the benchmark result. But previously pgbench did not measure the
      disconnection delays. To fix the bug, this commit makes pgbench
      measure the disconnection delay whenever a connection is closed at
      the end of a transaction, if the -C/--connect option is specified.
      
      Back-patch to v14. Per discussion, we concluded not to back-patch to v13
      or before because changing that behavior in stable branches would
      surprise users rather than providing benefits.
      
      Author: Yugo Nagata
      Reviewed-by: Fabien COELHO, Tatsuo Ishii, Asif Rehman, Fujii Masao
      Discussion: https://postgr.es/m/20210614151155.a393bc7d8fed183e38c9f52a@sraoss.co.jp
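
      As an illustration of the affected mode (database name and workload
      parameters below are arbitrary), a run like this opens and closes a
      connection for every transaction, and the average connection time it
      reports now includes the disconnection delays as well:

        # -C/--connect: establish and close a connection per transaction.
        pgbench -C -c 8 -j 4 -T 60 benchdb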
    • Fix the random test failure in 001_rep_changes. · b7ad093d
      Amit Kapila authored
      The check to test whether the subscription workers were restarting
      after a change in the subscription was failing. The reason was that
      the test assumed the walsender had started before it reached the
      'streaming' state, but the walsender was exiting due to an error
      before that. The walsender was erroring out before reaching the
      'streaming' state because it tried to acquire the slot before the
      previous walsender had exited.
      
      In passing, improve the die messages so that any future failures are
      easier to investigate.
      
      Reported-by: Michael Paquier, as per buildfarm
      Author: Ajin Cherian
      Reviewed-by: Masahiko Sawada, Amit Kapila
      Backpatch-through: 10, where this test was introduced
      Discussion: https://postgr.es/m/YRnhFxa9bo73wfpV@paquier.xyz
    • VACUUM VERBOSE: Don't report "pages removed". · 0d892cf7
      Peter Geoghegan authored
      It doesn't make any sense to report this information, since VACUUM
      VERBOSE reports on heap relation truncation directly.  This was an
      oversight in commit 7ab96cf6, which made VACUUM VERBOSE output a little
      more consistent with nearby autovacuum-specific log output.  In
      passing, adjust comments that describe how this is supposed to work.
      
      Also bring truncation-related VACUUM VERBOSE output in line with the
      convention established for VACUUM VERBOSE output by commit f4f4a649.
      
      Author: Peter Geoghegan <pg@bowt.ie>
      Backpatch: 14-, where VACUUM VERBOSE's output changed.
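
      A quick way to exercise the code path with a throwaway table (names
      and sizes here are arbitrary); any heap truncation is reported
      directly in the VERBOSE output, with no separate "pages removed"
      line:

        -- Leave the tail of the table empty so VACUUM can truncate it.
        CREATE TABLE vacuum_demo (id int);
        INSERT INTO vacuum_demo SELECT generate_series(1, 100000);
        DELETE FROM vacuum_demo WHERE id > 1000;
        VACUUM (VERBOSE, TRUNCATE) vacuum_demo;
        DROP TABLE vacuum_demo;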
  2. 31 Aug, 2021 7 commits
    • Don't print extra parens around expressions in extended stats · 4d1816ec
      Tomas Vondra authored
      The code printing expressions for extended statistics doubled the
      parens, producing results like ((a+1)), which is unnecessary and not
      consistent with how we print expressions elsewhere.
      
      Fixed by tweaking the code to produce just a single set of parens.
      
      Reported by Mark Dilger, fix by me. Backpatch to 14, where support for
      extended statistics on expressions was added.
      
      Reported-by: Mark Dilger
      Discussion: https://postgr.es/m/20210122040101.GF27167%40telsasoft.com
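
      For illustration (table and statistics names are made up), expression
      statistics and one way to inspect how their expressions are printed:

        CREATE TABLE parens_demo (a int, b int);
        CREATE STATISTICS parens_demo_stats ON (a + 1), (a * b)
          FROM parens_demo;
        -- Expressions should appear with a single set of parens,
        -- e.g. (a + 1) rather than ((a + 1)).
        SELECT pg_get_statisticsobjdef(oid)
          FROM pg_statistic_ext
         WHERE stxname = 'parens_demo_stats';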
    • Mark the timestamptz variant of date_bin() as stable · 3eda9fc0
      John Naylor authored
      Previously, it was treated as immutable for lack of an explicit
      marking. This is not correct, since the time zone could change.
      
      Bump catversion
      
      Discussion: https://www.postgresql.org/message-id/CAFBsxsG2UHk8mOWL0tca%3D_cg%2B_oA5mVRNLhDF0TBw980iOg5NQ%40mail.gmail.com
      Backpatch to v14, when this function came in
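
      A quick catalog check (not part of the commit) to verify the marking:

        -- provolatile: 'i' = immutable, 's' = stable, 'v' = volatile.
        SELECT provolatile
          FROM pg_proc
         WHERE oid = 'date_bin(interval, timestamptz, timestamptz)'::regprocedure;
        -- Expected after this change: 's'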
    • In pg_dump, avoid doing per-table queries for RLS policies. · a20a9f26
      Tom Lane authored
      For no particularly good reason, getPolicies() queried pg_policy
      separately for each table.  We can collect all the policies in
      a single query instead, and attach them to the correct TableInfo
      objects using findTableByOid() lookups.  On the regression
      database, this reduces the number of queries substantially, and
      provides a visible savings even when running against a local
      server.
      
      Per complaint from Hubert Depesz Lubaczewski.  Since this is such
      a simple fix and can have a visible performance benefit, back-patch
      to all supported branches.
      
      Discussion: https://postgr.es/m/20210826084430.GA26282@depesz.com
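
      A sketch of the single-query approach (column list abbreviated and
      illustrative, not pg_dump's actual query text): collect every policy
      at once and match each row to its table client-side via polrelid:

        SELECT pol.oid, pol.polrelid, pol.polname, pol.polcmd,
               pg_get_expr(pol.polqual, pol.polrelid)      AS qual,
               pg_get_expr(pol.polwithcheck, pol.polrelid) AS with_check
          FROM pg_policy pol
         ORDER BY pol.polrelid;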
    • Cache the results of format_type() queries in pg_dump. · 9407dbbc
      Tom Lane authored
      There's long been a "TODO: there might be some value in caching
      the results" annotation on pg_dump's getFormattedTypeName function;
      but we hadn't gotten around to checking what it was costing us to
      repetitively look up type names.  It turns out that when dumping the
      current regression database, about 10% of the total number of queries
      issued are duplicative format_type() queries.  However, Hubert Depesz
      Lubaczewski reported a not-unusual case where these account for over
      half of the queries issued by pg_dump.  Individually these queries
      aren't expensive, but when network lag is a factor, they add up to a
      problem.  We can very easily add some caching to getFormattedTypeName
      to solve it.
      
      Since this is such a simple fix and can have a visible performance
      benefit, back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/20210826084430.GA26282@depesz.com
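
      The repeated lookups are of this general shape (the type shown is
      arbitrary); since the formatted name for a given type OID and typmod
      cannot change during a dump, the result can simply be cached after
      the first query:

        SELECT pg_catalog.format_type('pg_catalog.int4'::pg_catalog.regtype,
                                      NULL);
        -- => integer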
    • Rename the role in stats_ext to have regress_ prefix · 4090ff2a
      Tomas Vondra authored
      Commit 5be8ce82e8 added a new role to the stats_ext regression suite,
      but the role name did not start with regress_, causing failures when
      running with ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS. Fixed by
      renaming the role to start with the expected regress_ prefix.
      
      Backpatch-through: 10, same as the new regression test
      Discussion: https://postgr.es/m/1F238937-7CC2-4703-A1B1-6DC225B8978A%40enterprisedb.com
    • Fix lookup error in extended stats ownership check · a371a5ba
      Tomas Vondra authored
      When an ownership check on an extended statistics object failed, the
      code was calling aclcheck_error_type to report the failure, which is
      clearly wrong, resulting in cache lookup errors. Fix by calling
      aclcheck_error instead.
      
      This issue exists since the introduction of extended statistics, so
      backpatch all the way back to PostgreSQL 10. It went unnoticed because
      there were no tests triggering the error, so add one.
      
      Reported-by: Mark Dilger
      Backpatch-through: 10, where extended stats were introduced
      Discussion: https://postgr.es/m/1F238937-7CC2-4703-A1B1-6DC225B8978A%40enterprisedb.com
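
      One way to reach the fixed check (all names are made up); a non-owner
      manipulating the statistics object now gets an ordinary ownership
      error rather than a cache lookup error:

        CREATE TABLE owner_demo (a int, b int);
        CREATE STATISTICS owner_demo_stats ON a, b FROM owner_demo;
        CREATE ROLE regress_not_owner;
        SET ROLE regress_not_owner;
        -- Previously: "cache lookup failed ..."; now an error along the
        -- lines of "must be owner of statistics object owner_demo_stats".
        ALTER STATISTICS owner_demo_stats RENAME TO owner_demo_stats2;
        RESET ROLE;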
    • Fix missed lock acquisition while inlining new-style SQL functions. · 983d7033
      Tom Lane authored
      When starting to use a query parsetree loaded from the catalogs,
      we must begin by applying AcquireRewriteLocks(), to obtain the same
      relation locks that the parser would have gotten if the query were
      entered interactively, and to do some other cleanup such as dealing
      with later-dropped columns.  New-style SQL functions are just as
      subject to this rule as other stored parsetrees; however, of the
      places dealing with such functions, only init_sql_fcache had gotten
      the memo.  In particular, if we successfully inlined a new-style
      set-returning SQL function that contained any relation references,
      we'd either get an assertion failure or attempt to use those
      relation(s) sans locks.
      
      I also added AcquireRewriteLocks calls to fmgr_sql_validator and
      print_function_sqlbody.  Desultory experiments didn't demonstrate any
      failures in those, but I suspect that I just didn't try hard enough.
      Certainly we don't expect nearby code paths to operate without locks.
      
      On the same logic of it-ought-to-have-the-same-effects-as-the-old-code,
      call pg_rewrite_query() in fmgr_sql_validator, too.  It's possible
      that neither code path there needs to bother with rewriting, but
      doing the analysis to prove that is beyond my goals for today.
      
      Per bug #17161 from Alexander Lakhin.
      
      Discussion: https://postgr.es/m/17161-048a1cdff8422800@postgresql.org
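
      For reference, the kind of function involved (names are illustrative):
      a new-style SQL function with a standard-conforming body, returning a
      set and referencing a relation, so that inlining it must take the same
      locks the parser would have:

        CREATE TABLE inline_demo (x int);
        CREATE FUNCTION inline_demo_fn() RETURNS SETOF inline_demo
        LANGUAGE SQL
        BEGIN ATOMIC
          SELECT * FROM inline_demo;
        END;
        -- Inlined into the calling query; the referenced relation must be
        -- locked (via AcquireRewriteLocks) before the stored parsetree is
        -- used.
        SELECT * FROM inline_demo_fn();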
  3. 30 Aug, 2021 5 commits
  4. 28 Aug, 2021 4 commits
  5. 27 Aug, 2021 8 commits
  6. 25 Aug, 2021 9 commits
  7. 24 Aug, 2021 2 commits
    • Fix regexp misbehavior with capturing parens inside "{0}". · 244dd799
      Tom Lane authored
      Regexps like "(.){0}...\1" drew an "invalid backreference number".
      That's not unreasonable on its face, since the capture group will
      never be matched if it's iterated zero times.  However, other engines
      such as Perl's don't complain about this, nor do we throw an error for
      related cases such as "(.)|\1", even though that backref can never
      succeed either.  Also, if the zero-iterations case happens at runtime
      rather than compile time --- say, "(x)*...\1" when there's no "x" to
      be found --- that's not an error, we just deem the backref to not
      match.  Making this even less defensible, no error was thrown for
      nested cases such as "((.)){0}...\2"; and to add insult to injury,
      those cases could result in assertion failures instead.  (It seems
      that nothing especially bad happened in non-assert builds, though.)
      
      Let's just fix it so that no error is thrown and instead the backref
      is deemed to never match, so that compile-time detection of no
      iterations behaves the same as run-time detection.
      
      Per report from Mark Dilger.  This appears to be an aboriginal error
      in Spencer's library, so back-patch to all supported versions.
      
      Pre-v14, it turns out to also be necessary to back-patch one aspect of
      commits cb76fbd7e/00116dee5, namely to create capture-node subREs with
      the begin/end states of their subexpressions, not the current lp/rp
      of the outer parseqatom invocation.  Otherwise delsub complains that
      we're trying to disconnect a state from itself.  This is a bit scary
      but code examination shows that it's safe: in the pre-v14 code, if we
      want to wrap iteration around the subexpression, the first thing we do
      is overwrite the atom's begin/end fields with new states.  So the
      bogus values didn't survive long enough to be used for anything, except
      if no iteration is required, in which case it doesn't matter.
      
      Discussion: https://postgr.es/m/A099E4A8-4377-4C64-A98C-3DEDDC075502@enterprisedb.com
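
      Illustrated at the SQL level (input strings are arbitrary):

        -- Previously raised "invalid backreference number"; now the
        -- backref inside the zero-iterations group simply never matches,
        -- so this returns NULL:
        SELECT regexp_match('abc', '(.){0}.*\1');
        -- Analogous case that has always been accepted without error:
        SELECT regexp_match('abc', '(.)|\1');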
    • Fix Alter Subscription's Add/Drop Publication behavior. · 5cfcd46e
      Amit Kapila authored
      The current refresh behavior tries to refresh only the added/dropped
      publications, but that leads to removing the wrong tables from the
      subscription. We can't refresh just the dropped publication because it
      is quite possible that some of its tables were already removed from
      the publication by that time, and those would then remain part of the
      subscription. Also, there is a chance that tables belonging to the
      publication being dropped are also part of another publication, so we
      can't remove those either.
      
      So, we decided that by default the add/drop commands will also act
      like REFRESH PUBLICATION, meaning they refresh all of the
      subscription's publications. We could have kept the old behavior for
      "add publication", but it is better to be consistent with "drop
      publication".
      
      Author: Hou Zhijie
      Reviewed-by: Masahiko Sawada, Amit Kapila
      Backpatch-through: 14, where it was introduced
      Discussion: https://postgr.es/m/OS0PR01MB5716935D4C2CC85A6143073F94EF9@OS0PR01MB5716.jpnprd01.prod.outlook.com
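
      For illustration (subscription and publication names are made up):

        -- By default this now also acts like ALTER SUBSCRIPTION ... REFRESH
        -- PUBLICATION, refreshing all publications of the subscription:
        ALTER SUBSCRIPTION mysub ADD PUBLICATION insert_only_pub;
        -- The refresh can still be skipped explicitly:
        ALTER SUBSCRIPTION mysub DROP PUBLICATION insert_only_pub
          WITH (refresh = false);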
  8. 23 Aug, 2021 2 commits
    • Prevent regexp back-refs from sometimes matching when they shouldn't. · 779557bd
      Tom Lane authored
      The recursion in cdissect() was careless about clearing match data
      for capturing parentheses after rejecting a partial match.  This
      could allow a later back-reference to succeed when by rights it
      should fail for lack of a defined referent.
      
      To fix, think a little more rigorously about what the contract
      between different levels of cdissect's recursion needs to be.
      With the right spec, we can fix this using fewer rather than more
      resets of the match data; the key decision being that a failed
      sub-match is now explicitly responsible for clearing any matches
      it may have set.
      
      There are enough other cross-checks and optimizations in the code
      that it's not especially easy to exhibit this problem; usually, the
      match will fail as-expected.  Plus, regexps that are even potentially
      vulnerable are most likely user errors, since there's just not much
      point in writing a back-ref that doesn't always have a referent.
      These facts perhaps explain why the issue hasn't been detected,
      even though it's almost certainly a couple of decades old.
      
      Discussion: https://postgr.es/m/151435.1629733387@sss.pgh.pa.us
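
      The affected regexps are hard to exhibit, as noted above; as a generic
      illustration of the contract being tightened, a back-reference
      succeeds only against the exact text its capturing group matched:

        SELECT regexp_match('abab', '(a.)\1');   -- matches: {ab}
        SELECT regexp_match('abac', '(a.)\1');   -- no match: NULL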
    • Avoid creating archive status ".ready" files too early · e3fb6170
      Alvaro Herrera authored
      WAL records may span multiple segments, but XLogWrite() does not
      wait for the entire record to be written out to disk before
      creating archive status files.  Instead, as soon as the last WAL page of
      the segment is written, the archive status file is created, and the
      archiver may process it.  If PostgreSQL crashes before it is able to
      write and flush the rest of the record (in the next WAL segment), the
      wrong version of the first segment file lingers in the archive, which
      causes operations such as point-in-time restores to fail.
      
      To fix this, keep track of records that span across segments and ensure
      that segments are only marked ready-for-archival once such records have
      been completely written to disk.
      
      This has always been wrong, so backpatch all the way back.
      
      Author: Nathan Bossart <bossartn@amazon.com>
      Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
      Reviewed-by: Ryo Matsumura <matsumura.ryo@fujitsu.com>
      Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
      Discussion: https://postgr.es/m/CBDDFA01-6E40-46BB-9F98-9340F4379505@amazon.com