07 Apr, 2021 8 commits
    • Teach VACUUM to bypass unnecessary index vacuuming. · 5100010e
      Peter Geoghegan authored
      VACUUM has never needed to call ambulkdelete() for each index in cases
      where there are precisely zero TIDs in its dead_tuples array by the end
      of its first pass over the heap (also its only pass over the heap in
      this scenario).  Index vacuuming is simply not required when this
      happens.  Index cleanup will still go ahead, but in practice most calls
      to amvacuumcleanup() are no-ops when there were zero preceding
      ambulkdelete() calls.  In short, VACUUM has generally managed to avoid
      index scans when there were clearly no index tuples to delete from
      indexes.  But cases with _close to_ no index tuples to delete were
      another matter -- a round of ambulkdelete() calls took place (one per
      index), each of which performed a full index scan.
      
      VACUUM now behaves just as if there were zero index tuples to delete in
      cases where there are in fact "virtually zero" such tuples.  That is, it
      can now bypass index vacuuming and heap vacuuming as an optimization
      (though not index cleanup).  Whether or not VACUUM bypasses indexes is
      determined dynamically, based on the just-observed number of heap pages
      in the table that have one or more LP_DEAD items (LP_DEAD items in heap
      pages have a 1:1 correspondence with index tuples that still need to be
      deleted from each index in the worst case).
      
      We only skip index vacuuming when 2% or less of the table's pages have
      one or more LP_DEAD items -- bypassing index vacuuming as an
      optimization must not noticeably impede setting bits in the visibility
      map.  As a further condition, the dead_tuples array (i.e. VACUUM's array
      of LP_DEAD item TIDs) must not exceed 32MB at the point that the first
      pass over the heap finishes, which is also when the decision to bypass
      is made.  (The VACUUM must also have been able to fit all TIDs in its
      maintenance_work_mem-bound dead_tuples space, though with the default
      maintenance_work_mem setting the 32MB cap is always reached first, so
      this extra condition cannot matter.)
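
      For illustration, here is a minimal SQL sketch of a workload that
      should qualify for the bypass (the table name is made up, and the
      exact wording of the VERBOSE output may differ):

          -- A table whose rows span a few thousand heap pages.
          CREATE TABLE bypass_demo (id int PRIMARY KEY, padding text);
          INSERT INTO bypass_demo
              SELECT g, repeat('x', 500) FROM generate_series(1, 100000) g;

          -- Delete a handful of rows, leaving LP_DEAD items in well
          -- under 2% of the table's pages.
          DELETE FROM bypass_demo WHERE id < 20;

          -- The first heap pass should find few enough LP_DEAD pages
          -- (and a small enough dead_tuples array) to skip the
          -- ambulkdelete() calls entirely.
          VACUUM (VERBOSE) bypass_demo;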
      
      This avoids surprising jumps in the duration and overhead of routine
      vacuuming with workloads where successive VACUUM operations consistently
      have almost zero dead index tuples.  The number of LP_DEAD items may
      well accumulate over multiple VACUUM operations, before finally the
      threshold is crossed and VACUUM performs conventional index vacuuming.
      Even then, the optimization will have avoided a great deal of largely
      unnecessary index vacuuming.
      
      In the future we may teach VACUUM to skip index vacuuming on a per-index
      basis, using a much more sophisticated approach.  For now we only
      consider the extreme cases, where we can be quite confident that index
      vacuuming just isn't worth it using simple heuristics.
      
      Also log information about how many heap pages have one or more LP_DEAD
      items when autovacuum logging is enabled.
      
      Author: Masahiko Sawada <sawada.mshk@gmail.com>
      Author: Peter Geoghegan <pg@bowt.ie>
      Discussion: https://postgr.es/m/CAD21AoD0SkE11fMw4jD4RENAwBMcw1wasVnwpJVw3tVqPOQgAw@mail.gmail.com
      Discussion: https://postgr.es/m/CAH2-WzmkebqPd4MVGuPTOS9bMFvp9MDs5cRTCOsv1rQJ3jCbXw@mail.gmail.com
    • Fix regression test failure caused by commit 4f0b0966 · bc707286
      Bruce Momjian authored
      The query originally used was too simple, causing explain_filter() to
      be unable to remove JIT output text.
      
      Reported-by: Tom Lane
      
      Author: Julien Rouhaud
    • Fix some failures with connection tests on Windows hosts · c7578fa6
      Michael Paquier authored
      These tests rely on truncating the log file to make sure that each
      connection attempt matches its expected backend log pattern.  That
      truncation fails on Windows hosts, as reported by buildfarm member
      fairywren.  Instead of a truncation, do a rotation of the log file and
      restart the node.  This will ensure that the connection attempt data
      is unique for each test.
      
      Discussion: https://postgr.es/m/YG05nCI8x8B+Ad3G@paquier.xyz
    • SQL-standard function body · e717a9a1
      Peter Eisentraut authored
      This adds support for writing CREATE FUNCTION and CREATE PROCEDURE
      statements for language SQL with a function body that conforms to the
      SQL standard and is portable to other implementations.
      
      Instead of the PostgreSQL-specific AS $$ string literal $$ syntax,
      this allows writing out the SQL statements making up the body
      unquoted, either as a single statement:
      
          CREATE FUNCTION add(a integer, b integer) RETURNS integer
              LANGUAGE SQL
              RETURN a + b;
      
      or as a block:
      
          CREATE PROCEDURE insert_data(a integer, b integer)
          LANGUAGE SQL
          BEGIN ATOMIC
            INSERT INTO tbl VALUES (a);
            INSERT INTO tbl VALUES (b);
          END;
      
      The function body is parsed at function definition time and stored as
      expression nodes in a new pg_proc column prosqlbody.  So at run time,
      no further parsing is required.
      
      However, this form does not support polymorphic arguments, because
      parse analysis is no longer done at call time.
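
      For example, a sketch of the limitation (the function name is made
      up; the exact error message may vary):

          -- Rejected at definition time: a polymorphic argument would
          -- require parse analysis at call time, which the SQL-standard
          -- body form no longer performs.
          CREATE FUNCTION identity_fn(a anyelement) RETURNS anyelement
              LANGUAGE SQL
              RETURN a;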
      
      Dependencies between the function and the objects it uses are fully
      tracked.
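
      A small sketch of what that tracking implies, reusing the
      insert_data procedure above (the error wording is approximate):

          -- Fails with a dependency error: insert_data's stored body
          -- references tbl directly.
          DROP TABLE tbl;

          -- Dropping with CASCADE would drop insert_data as well.
          DROP TABLE tbl CASCADE;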
      
      A new RETURN statement is introduced.  This can only be used inside
      function bodies.  Internally, it is treated much like a SELECT
      statement.
      
      psql needs some new intelligence to keep track of function body
      boundaries so that it doesn't send off statements when it sees
      semicolons that are inside a function body.

      Tested-by: Jaime Casanova <jcasanov@systemguards.com.ec>
      Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
      Discussion: https://www.postgresql.org/message-id/flat/1c11f1eb-f00c-43b7-799d-2d44132c02d7@2ndquadrant.com
    • Add wraparound failsafe to VACUUM. · 1e55e7d1
      Peter Geoghegan authored
      Add a failsafe mechanism that is triggered by VACUUM when it notices
      that the table's relfrozenxid and/or relminmxid are dangerously far in
      the past.  VACUUM checks the age of the table dynamically, at regular
      intervals.
      
      When the failsafe triggers, VACUUM takes extraordinary measures to
      finish as quickly as possible so that relfrozenxid and/or relminmxid can
      be advanced.  VACUUM will stop applying any cost-based delay that may be
      in effect.  VACUUM will also bypass any further index vacuuming and heap
      vacuuming -- it only completes whatever remaining pruning and freezing
      is required.  Bypassing index/heap vacuuming is enabled by commit
      8523492d, which made it possible to dynamically trigger the mechanism
      already used within VACUUM when it is run with INDEX_CLEANUP off.
      
      It is expected that the failsafe will almost always trigger within an
      autovacuum to prevent wraparound, long after the autovacuum began.
      However, the failsafe mechanism can trigger in any VACUUM operation.
      Even in a non-aggressive VACUUM, where we're unlikely to advance
      relfrozenxid, it still seems like a good idea to finish off remaining
      pruning and freezing.  An aggressive/anti-wraparound VACUUM will be
      launched immediately afterwards.  Note that the anti-wraparound VACUUM
      that follows will itself trigger the failsafe, usually before it even
      begins its first (and only) pass over the heap.
      
      The failsafe is controlled by two new GUCs: vacuum_failsafe_age, and
      vacuum_multixact_failsafe_age.  There are no equivalent reloptions,
      since that isn't expected to be useful.  The GUCs have rather high
      defaults (both default to 1.6 billion), and are expected to generally
      only be used to make the failsafe trigger sooner/more frequently.
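
      A sketch of lowering the cutoffs so the failsafe triggers sooner
      (the GUC names are from this commit; the values are arbitrary):

          -- Halve the failsafe cutoffs relative to the 1.6 billion
          -- defaults, cluster-wide.
          ALTER SYSTEM SET vacuum_failsafe_age = 800000000;
          ALTER SYSTEM SET vacuum_multixact_failsafe_age = 800000000;
          SELECT pg_reload_conf();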
      
      Author: Masahiko Sawada <sawada.mshk@gmail.com>
      Author: Peter Geoghegan <pg@bowt.ie>
      Discussion: https://postgr.es/m/CAD21AoD0SkE11fMw4jD4RENAwBMcw1wasVnwpJVw3tVqPOQgAw@mail.gmail.com
      Discussion: https://postgr.es/m/CAH2-WzmgH3ySGYeC-m-eOBsa2=sDwa292-CFghV4rESYo39FsQ@mail.gmail.com
    • Make use of in-core query id added by commit 5fd9dfa5 · 4f0b0966
      Bruce Momjian authored
      Use the in-core query id computation for pg_stat_activity,
      log_line_prefix, and EXPLAIN VERBOSE.
      
      Similar to other fields in pg_stat_activity, only the queryid of
      top-level statements is exposed, and if the backend's status isn't
      active then the queryid of the last executed statement is displayed.
      
      Add a %Q placeholder to include the queryid in log_line_prefix, which
      will also only expose top level statements.
      
      For EXPLAIN VERBOSE, if a query identifier has been computed, either by
      enabling compute_query_id or using a third-party module, display it.
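
      A combined sketch of the three consumers (the query_id column name
      and the "Query Identifier" output line are assumptions about how
      this commit surfaces the value; enabling compute_query_id may
      require superuser):

          SET compute_query_id = on;

          -- EXPLAIN VERBOSE now prints a Query Identifier line.
          EXPLAIN (VERBOSE) SELECT 1;

          -- pg_stat_activity exposes the top-level statement's queryid.
          SELECT pid, query_id, query FROM pg_stat_activity;

          -- log_line_prefix can include it via the new %Q placeholder.
          ALTER SYSTEM SET log_line_prefix = '%m [%p] %Q ';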
      
      Bump catalog version.
      
      Discussion: https://postgr.es/m/20210407125726.tkvjdbw76hxnpwfi@nol
      
      Author: Julien Rouhaud
      
      Reviewed-by: Alvaro Herrera, Nitin Jadhav, Zhihong Yu
    • amcheck: fix multiple problems with TOAST pointer validation · ec7ffb80
      Robert Haas authored
      First, don't perform database access while holding a buffer lock.
      When checking a heap, we can validate that TOAST pointers are sane by
      performing a scan on the TOAST index and looking up the chunks that
      correspond to each value ID that appears in a TOAST pointer in the main
      table. But, to do that while holding a buffer lock at least risks
      causing other backends to wait uninterruptibly, and probably can cause
      undetected and uninterruptible deadlocks.  So, instead, make a list of
      checks to perform while holding the lock, and then perform the checks
      after releasing it.
      
      Second, adjust things so that we don't try to follow TOAST pointers
      for tuples that are already eligible to be pruned. The TOAST tuples
      become eligible for pruning at the same time that the main tuple does,
      so trying to check them may lead to spurious reports of corruption,
      as observed in the buildfarm. The necessary infrastructure to decide
      whether or not the tuple being checked is prunable was added by
      commit 3b6c1259, but it wasn't
      actually used for its intended purpose prior to this patch.
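
      A sketch of exercising these checks from SQL (the check_toast
      parameter of verify_heapam is an assumption about its signature,
      and the table name is made up):

          CREATE EXTENSION IF NOT EXISTS amcheck;

          -- Validate the heap, following TOAST pointers only for tuples
          -- that are not already eligible to be pruned.
          SELECT * FROM verify_heapam('my_table', check_toast => true);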
      
      Mark Dilger, adjusted by me to avoid a memory leak.
      
      Discussion: http://postgr.es/m/AC5479E4-6321-473D-AC92-5EC36299FBC2@enterprisedb.com
    • Move pg_stat_statements query jumbling to core. · 5fd9dfa5
      Bruce Momjian authored
      Add compute_query_id GUC to control whether a query identifier should be
      computed by the core (off by default).  It's therefore now possible to
      disable core queryid computation and use pg_stat_statements with a
      different algorithm to compute the query identifier by using a
      third-party module.
      
      To ensure that a single source of query identifier can be used and is
      well defined, modules that calculate a query identifier should throw
      an error if compute_query_id is set to compute a query id and a query
      identifier was already calculated.
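
      A sketch of the resulting setup for pg_stat_statements using the
      in-core computation (shared_preload_libraries needs a restart to
      take effect):

          ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';
          ALTER SYSTEM SET compute_query_id = on;

          -- After restarting the server:
          CREATE EXTENSION pg_stat_statements;
          SELECT queryid, calls, query FROM pg_stat_statements LIMIT 5;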
      
      Discussion: https://postgr.es/m/20210407125726.tkvjdbw76hxnpwfi@nol
      
      Author: Julien Rouhaud
      
      Reviewed-by: Alvaro Herrera, Nitin Jadhav, Zhihong Yu