1. 13 Mar, 2021 7 commits
  2. 12 Mar, 2021 17 commits
  3. 11 Mar, 2021 11 commits
    • Save a few cycles during nbtree VACUUM. · 7bb97211
      Peter Geoghegan authored
      Avoid calling RelationGetNumberOfBlocks() unnecessarily in the common
      case where there are no deleted but not yet recycled pages to recycle
      during a cleanup-only nbtree VACUUM operation.
      
      Follow-up to commit e5d8a999, which (among other things) taught the
      "skip full scan" nbtree VACUUM mechanism to only trigger a full index
      scan when the absolute number of deleted pages in the index is
      considered excessive.
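      The shape of the fast path can be sketched in Python (purely
      illustrative names; the real code is C in nbtree):

```python
# Hypothetical model of the guard described above: names are
# illustrative, not the actual nbtree symbols.
def maybe_recycle(pages_deleted_not_recycled, get_num_blocks):
    """Recycle pass for a cleanup-only VACUUM.  In the common case there
    is nothing to recycle, so skip the relation-size lookup entirely."""
    if pages_deleted_not_recycled == 0:
        return 0  # no RelationGetNumberOfBlocks()-style call needed
    num_blocks = get_num_blocks()  # comparatively costly lookup
    # ... decide which deleted pages are now safe to recycle ...
    return num_blocks
```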
    • Add back vacuum_cleanup_index_scale_factor parameter. · effdd3f3
      Peter Geoghegan authored
      Commit 9f3665fb removed the vacuum_cleanup_index_scale_factor storage
      parameter.  However, that creates dump/reload hazards when moving across
      major versions.
      
      Add back the vacuum_cleanup_index_scale_factor parameter (though not the
      GUC of the same name) purely to avoid problems when using tools like
      pg_upgrade.  The parameter remains disabled and undocumented.
      
      No backpatch to Postgres 13, since vacuum_cleanup_index_scale_factor was
      only disabled by REL_13_STABLE's version of master branch commit
      9f3665fb in the first place -- the parameter already looks like this on
      REL_13_STABLE.
      
      Discussion: https://postgr.es/m/YEm/a3Ko3nKnBuVq@paquier.xyz
    • Be clear about whether a recovery pause has taken effect. · 32fd2b57
      Robert Haas authored
      Previously, the code and documentation seem to have essentially
      assumed that a call to pg_wal_replay_pause() would take place
      immediately, but that's not the case, because we only check for a
      pause in certain places. This means that a tool that uses this
      function and then wants to do something else afterward that is
      dependent on the pause having taken effect doesn't know how long it
      needs to wait to be sure that no more WAL is going to be replayed.
      
      To avoid that, add a new function pg_get_wal_replay_pause_state()
      which returns either 'not paused', 'pause requested', or 'paused'.
      After calling pg_wal_replay_pause() the status will immediately
      change from 'not paused' to 'pause requested'; once the startup
      process has noticed this, the status will change to 'paused'.  For backward
      compatibility, pg_is_wal_replay_paused() still exists and returns
      the same thing as before: true if a pause has been requested,
      whether or not it has taken effect yet; and false if not.
      The documentation is updated to clarify.
      
      To improve the chances that a pause request is quickly confirmed
      effective, adjust things so that WaitForWALToBecomeAvailable will
      swiftly reach a call to recoveryPausesHere() when a pause request
      is made.
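      A toy Python model of the two-step protocol described above
      (illustrative names mirroring the SQL functions; not the server
      implementation):

```python
from enum import Enum

class RecoveryPauseState(Enum):
    NOT_PAUSED = "not paused"
    PAUSE_REQUESTED = "pause requested"
    PAUSED = "paused"

class RecoveryModel:
    """Toy model: a pause is recorded immediately but only takes
    effect once the startup process notices it."""
    def __init__(self):
        self.state = RecoveryPauseState.NOT_PAUSED

    def pg_wal_replay_pause(self):
        # The request itself is immediate ...
        if self.state is RecoveryPauseState.NOT_PAUSED:
            self.state = RecoveryPauseState.PAUSE_REQUESTED

    def startup_process_notices_pause(self):
        # ... but replay only stops at the next check.
        if self.state is RecoveryPauseState.PAUSE_REQUESTED:
            self.state = RecoveryPauseState.PAUSED

    def pg_get_wal_replay_pause_state(self):
        return self.state.value

    def pg_is_wal_replay_paused(self):
        # Backward-compatible: true as soon as a pause is *requested*.
        return self.state is not RecoveryPauseState.NOT_PAUSED
```

      A tool that needs the pause to be effective would poll the new
      function until it reports 'paused'.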
      
      Dilip Kumar, reviewed by Simon Riggs, Kyotaro Horiguchi, Yugo Nagata,
      Masahiko Sawada, and Bharath Rupireddy.
      
      Discussion: http://postgr.es/m/CAFiTN-vcLLWEm8Zr%3DYK83rgYrT9pbC8VJCfa1kY9vL3AUPfu6g%40mail.gmail.com
    • Re-simplify management of inStart in pqParseInput3's subroutines. · 51c54bb6
      Tom Lane authored
      Commit 92785dac copied some logic related to advancement of inStart
      from pqParseInput3 into getRowDescriptions and getAnotherTuple,
      because it wanted to allow user-defined row processor callbacks to
      potentially longjmp out of the library, and inStart would have to be
      updated before that happened to avoid an infinite loop.  We later
      decided that that API was impossibly fragile and reverted it, but
      we didn't undo all of the related code changes, and this bit of
      messiness survived.  Undo it now so that there's just one place in
      pqParseInput3's processing where inStart is advanced; this will
      simplify addition of better tracing support.
      
      getParamDescriptions had grown similar processing somewhere along
      the way (not in 92785dac; I didn't track down just when), but it's
      actually buggy because its handling of corrupt-message cases seems to
      have been copied from the v2 logic where we lacked a known message
      length.  The cases where we "goto not_enough_data" should not simply
      return EOF, because then we won't consume the message, potentially
      creating an infinite loop.  That situation now represents a
      definitively corrupt message, and we should report it as such.
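      The distinction the fix draws can be sketched in Python against a
      simplified v3-style layout (a hypothetical 4-byte length word
      followed by a 2-byte parameter count and 4-byte OIDs; the real
      wire format also carries a message-type byte):

```python
import struct

def parse_param_description(buf):
    """Return ('eof',) while the message is still incomplete,
    ('error', reason) for a definitively corrupt message, or
    ('ok', oids) on success."""
    if len(buf) < 4:
        return ('eof',)  # length word not fully received yet
    (msg_len,) = struct.unpack('!I', buf[:4])
    if len(buf) < msg_len:
        return ('eof',)  # wait for the rest of the message
    body = buf[4:msg_len]
    # From here on the whole message is buffered: running out of fields
    # now means corruption, and returning EOF would never consume the
    # message -- the infinite loop the fix removes.
    if len(body) < 2:
        return ('error', 'corrupt ParameterDescription message')
    (nparams,) = struct.unpack('!H', body[:2])
    if len(body) != 2 + 4 * nparams:
        return ('error', 'corrupt ParameterDescription message')
    oids = list(struct.unpack('!%dI' % nparams, body[2:]))
    return ('ok', oids)
```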
      
      Although no field reports of getParamDescriptions getting stuck in
      a loop have been seen, it seems appropriate to back-patch that fix.
      I chose to back-patch all of this to keep the logic looking more alike
      in supported branches.
      
      Discussion: https://postgr.es/m/2217283.1615411989@sss.pgh.pa.us
    • Refactor and generalize the ParallelSlot machinery. · f71519e5
      Robert Haas authored
      Create a wrapper object, ParallelSlotArray, to encapsulate the
      number of slots and the slot array itself, plus some other relevant
      bits of information. This reduces the number of parameters we have
      to pass around all over the place.
      
      Allow for a ParallelSlotArray to contain slots connected to
      different databases within a single cluster. The current clients
      of this mechanism don't need this, but it is expected to be used
      by future patches.
      
      Defer connecting to databases until we actually need the connection
      for something. This is a slight behavior change for vacuumdb and
      reindexdb. If you specify a number of jobs that is larger than the
      number of objects, the extra connections will now not be used.
      But, on the other hand, if you specify a number of jobs that is
      so large that it's going to fail, the failure would previously have
      happened before any operations were actually started, and now it
      won't.
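      A minimal Python sketch of the deferred-connection behavior
      (hypothetical names; the real ParallelSlotArray is C):

```python
class ParallelSlot:
    """Illustrative slot: the connection is opened lazily, on first use."""
    def __init__(self, connect):
        self._connect = connect
        self.conn = None

    def get_connection(self):
        if self.conn is None:
            self.conn = self._connect()  # defer until actually needed
        return self.conn

class ParallelSlotArray:
    """Wrapper bundling the slots and their count, as described above."""
    def __init__(self, njobs, connect):
        self.slots = [ParallelSlot(connect) for _ in range(njobs)]

    def connections_opened(self):
        return sum(1 for s in self.slots if s.conn is not None)
```

      With this shape, asking for more jobs than there are objects just
      leaves the extra slots unconnected.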
      
      Mark Dilger, reviewed by me.
      
      Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com
      Discussion: http://postgr.es/m/BA592F2D-F928-46FF-9516-2B827F067F57@enterprisedb.com
    • Set libcrypto callbacks for all connection threads in libpq · 2c0cefcd
      Michael Paquier authored
      Based on an analysis of the OpenSSL code with Jacob, moving to EVP for
      the cryptohash computations makes it necessary to set up the libcrypto
      callbacks, which were previously set only for SSL connections, not for
      connections without SSL.  Not setting the callbacks makes the use of
      threads potentially unsafe for connections calling cryptohashes during
      authentication, like MD5 or SCRAM, if a failure happens during a
      cryptohash computation.  The logic setting the libssl and libcrypto
      states is then split into two parts, both using the same locking, with
      libcrypto being set up for SSL and non-SSL connections, while SSL
      connections set any libssl state afterwards as needed.
      
      Prior to this commit, only SSL connections would have set libcrypto
      callbacks that are necessary to ensure a proper thread locking when
      using multiple concurrent threads in libpq (ENABLE_THREAD_SAFETY).  Note
      that this is only required for OpenSSL 1.0.2 and 1.0.1 (oldest version
      supported on HEAD), as 1.1.0 has its own internal locking and it has
      dropped support for CRYPTO_set_locking_callback().
      
      Tests with up to 300 threads with OpenSSL 1.0.1 and 1.0.2, mixing SSL
      and non-SSL connection threads did not show any performance impact after
      some micro-benchmarking.  pgbench can be used here with -C and a
      mostly-empty script (with one \set meta-command for example) to stress
      authentication requests, and we have mixed that with some custom
      programs for testing.
      
      Reported-by: Jacob Champion
      Author: Michael Paquier
      Reviewed-by: Jacob Champion
      Discussion: https://postgr.es/m/fd3ba610085f1ff54623478cf2f7adf5af193cbb.camel@vmware.com
    • Doc: B-Tree only has one additional parameter. · 3f0daeb0
      Peter Geoghegan authored
      Oversight in commit 9f3665fb.
      
      Backpatch: 13-, just like commit 9f3665fb.
    • Improve comment for struct BufferDesc. · 049d9b87
      Thomas Munro authored
      Add a note that per-buffer I/O condition variables currently live
      outside the BufferDesc struct.  Follow-up for commit d8725104.
      Reported-by: Julien Rouhaud <rjuju123@gmail.com>
      Discussion: https://postgr.es/m/20210311031118.hucytmrgwlktjxgq%40nol
    • tutorial: land height is "elevation", not "altitude" · 2950ff32
      Bruce Momjian authored
      This is a follow-on patch to 92c12e46.  In that patch, we renamed
      "altitude" to "elevation" in the docs, based on these details:
      
         https://mapscaping.com/blogs/geo-candy/what-is-the-difference-between-elevation-relief-and-altitude
      
      This renames the tutorial SQL files to match the documentation.
      
      Reported-by: max1@inbox.ru
      
      Discussion: https://postgr.es/m/161512392887.1046.3137472627109459518@wrigleys.postgresql.org
      
      Backpatch-through: 9.6
    • VACUUM ANALYZE: Always update pg_class.reltuples. · 5f8727f5
      Peter Geoghegan authored
      vacuumlazy.c sometimes fails to update pg_class entries for each index
      (to ensure that pg_class.reltuples is current), even though analyze.c
      assumed that that must have happened during VACUUM ANALYZE.  There are
      at least a couple of reasons for this.  For example, vacuumlazy.c could
      fail to update pg_class when the index AM indicated that its statistics
      are merely an estimate, per the contract for amvacuumcleanup() routines
      established by commit e5734597 back in 2006.
      
      Stop assuming that pg_class must have been updated with accurate
      statistics within VACUUM ANALYZE -- update pg_class for indexes at the
      same time as the table relation in all cases.  That way VACUUM ANALYZE
      will never fail to keep pg_class.reltuples reasonably accurate.
      
      The only downside of this approach (compared to the old approach) is
      that it might inaccurately set pg_class.reltuples for indexes whose heap
      relation ends up with the same inaccurate value anyway.  This doesn't
      seem too bad.  We already consistently called vac_update_relstats() (to
      update pg_class) for the heap/table relation twice during any VACUUM
      ANALYZE -- once in vacuumlazy.c, and once in analyze.c.  We now make
      sure that we call vac_update_relstats() at least once (though often
      twice) for each index.
      
      This is follow-up work to commit 9f3665fb, which dealt with issues in
      btvacuumcleanup().  Technically this fixes an unrelated issue, though.
      btvacuumcleanup() no longer provides an accurate num_index_tuples value
      following commit 9f3665fb (when there was no btbulkdelete() call during
      the VACUUM operation in question), but hashvacuumcleanup() has worked in
      the same way for many years now.
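      The new policy can be sketched in Python (hypothetical data shapes
      standing in for the C vac_update_relstats() calls):

```python
def vacuum_update_relstats(pg_class, table, index_stats):
    """Write reltuples for the table *and* each of its indexes together,
    so no index entry is left stale when an AM reports only an estimate.
    (Hypothetical data shapes; illustrative only.)"""
    pg_class[table['name']] = table['reltuples']
    for index_name, num_index_tuples in index_stats.items():
        # Updated unconditionally -- no longer skipped for estimates.
        pg_class[index_name] = num_index_tuples
    return pg_class
```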
      
      Author: Peter Geoghegan <pg@bowt.ie>
      Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
      Discussion: https://postgr.es/m/CAH2-WzknxdComjhqo4SUxVFk_Q1171GJO2ZgHZ1Y6pion6u8rA@mail.gmail.com
      Backpatch: 13-, just like commit 9f3665fb.
    • Don't consider newly inserted tuples in nbtree VACUUM. · 9f3665fb
      Peter Geoghegan authored
      Remove the entire idea of "stale stats" within nbtree VACUUM (stop
      caring about stats involving the number of inserted tuples).  Also
      remove the vacuum_cleanup_index_scale_factor GUC/param on the master
      branch (though just disable them on postgres 13).
      
      The vacuum_cleanup_index_scale_factor/stats interface made the nbtree AM
      partially responsible for deciding when pg_class.reltuples stats needed
      to be updated.  This seems contrary to the spirit of the index AM API,
      though -- it is not actually necessary for an index AM's bulk delete and
      cleanup callbacks to provide accurate stats when it happens to be
      inconvenient.  The core code owns that.  (Index AMs have the authority
      to perform or not perform certain kinds of deferred cleanup based on
      their own considerations, such as page deletion and recycling, but that
      has little to do with pg_class.reltuples/num_index_tuples.)
      
      This issue was fairly harmless until the introduction of the
      autovacuum_vacuum_insert_threshold feature by commit b07642db, which had
      an undesirable interaction with the vacuum_cleanup_index_scale_factor
      mechanism: it made insert-driven autovacuums perform full index scans,
      even though there is no real benefit to doing so.  This has been tied to
      a regression with an append-only insert benchmark [1].
      
      Also have remaining cases that perform a full scan of an index during a
      cleanup-only nbtree VACUUM indicate that the final tuple count is only
      an estimate.  This prevents vacuumlazy.c from setting the index's
      pg_class.reltuples in those cases (it will now only update pg_class when
      vacuumlazy.c had TIDs for nbtree to bulk delete).  This arguably fixes
      an oversight in deduplication-related bugfix commit 48e12913.
      
      [1] https://smalldatum.blogspot.com/2021/01/insert-benchmark-postgres-is-still.html
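      The estimate flag's effect can be sketched in Python (illustrative
      field names loosely following IndexBulkDeleteResult; not the real
      C structures):

```python
def cleanup_stats(had_bulkdelete, counted_tuples):
    """Outcome of an nbtree cleanup pass: cleanup-only scans flag their
    tuple count as merely an estimate (illustrative field names)."""
    return {
        'num_index_tuples': counted_tuples,
        'estimated_count': not had_bulkdelete,
    }

def maybe_update_reltuples(pg_class, index_name, stats):
    # The vacuumlazy.c-style caller only trusts exact counts.
    if not stats['estimated_count']:
        pg_class[index_name] = stats['num_index_tuples']
    return pg_class
```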
      
      Author: Peter Geoghegan <pg@bowt.ie>
      Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
      Discussion: https://postgr.es/m/CAD21AoA4WHthN5uU6+WScZ7+J_RcEjmcuH94qcoUPuB42ShXzg@mail.gmail.com
      Backpatch: 13-, where autovacuum_vacuum_insert_threshold was added.
  4. 10 Mar, 2021 5 commits