1. 25 Mar, 2019 5 commits
  2. 24 Mar, 2019 8 commits
      Un-hide most cascaded-drop details in regression test results. · 940311e4
      Tom Lane authored
      Now that the ordering of DROP messages ought to be stable everywhere,
      we should not need these kluges of hiding DETAIL output just to avoid
      unstable ordering.  Hiding it is not great for test coverage, so
      let's undo that where possible.
      
      In a small number of places, it's necessary to leave it in, for
      example because the output might include a variable pg_temp_nnn
      schema name.  I also left things alone in places where the details
      would depend on other regression test scripts, e.g. plpython_drop.sql.
      
      Perhaps buildfarm experience will show this to be a bad idea,
      but if so I'd like to know why.
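      As a hedged illustration of the kind of output involved (hypothetical
      objects; output abbreviated), the DETAIL lines whose ordering is at
      issue look like this:

```sql
CREATE TABLE t1 (a int PRIMARY KEY);
CREATE VIEW v1 AS SELECT * FROM t1;
CREATE VIEW v2 AS SELECT * FROM t1;

DROP TABLE t1 CASCADE;
-- NOTICE:  drop cascades to 2 other objects
-- DETAIL:  drop cascades to view v1
--          drop cascades to view v2
```

      It is the ordering of those per-object DETAIL lines that was
      previously unstable and hence hidden in some regression tests.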
      
      Discussion: https://postgr.es/m/E1h6eep-0001Mw-Vd@gemulon.postgresql.org
      Sort dependent objects before reporting them in DROP ROLE. · af6550d3
      Tom Lane authored
      Commit 8aa9dd74 didn't quite finish the job in this area after all,
      because DROP ROLE has a code path distinct from DROP OWNED BY, and
      it was still reporting dependent objects in whatever order the index
      scan returned them in.
      
      Buildfarm experience shows that index ordering of equal-keyed objects is
      significantly less stable than before in the wake of using heap TIDs as
      tie-breakers.  So if we try to hide the unstable ordering by suppressing
      DETAIL reports, we're just going to end up having to do that for every
      DROP that reports multiple objects.  That's not great from a coverage
      or problem-detection standpoint, and it's something we'll inevitably
      forget in future patches, leading to more iterations of fixing-an-
      unstable-result.  So let's just bite the bullet and sort here too.
      
      Discussion: https://postgr.es/m/E1h6eep-0001Mw-Vd@gemulon.postgresql.org
      Remove dead code from nbtsplitloc.c. · 59ab3be9
      Peter Geoghegan authored
      It doesn't make sense to consider the possibility that there will only
      be one candidate split point when choosing among split points to find
      the split with the lowest penalty.  This is a vestige of an earlier
      version of the patch that became commit fab25024.
      
      Issue spotted while rereviewing coverage of the nbtree patch series
      using gcov.
      Avoid double-free in vacuumlo error path. · bd9396a0
      Tom Lane authored
      The code would do "PQclear(res)" twice if lo_unlink failed, evidently
      due to careless thinking about how far out a "break" would break.
      Remove the extra PQclear and adjust the loop logic so that we'll fall
      out of both levels of loop after an error, as was clearly the intent.
      
      Spotted by Coverity.  I have no idea why it took this long to notice,
      since the bug has been there since commit 67ccbb08.  Accordingly,
      back-patch to all supported branches.
      Make current_logfiles use permissions assigned to files in data directory · 276d2e6c
      Michael Paquier authored
      Since its introduction in 19dc233c, current_logfiles has been assigned
      the same permissions as a log file, which can be enforced with
      log_file_mode.  This setup can lead to incompatibility problems with
      group access permissions as current_logfiles is not located in the log
      directory, but at the root of the data folder.  Hence, if group
      permissions are used but log_file_mode is more restrictive, a backup
      with a user in the group having read access could fail even if the log
      directory is located outside of the data folder.
      
      Per discussion with the folks mentioned below, we have concluded that
      current_logfiles should not be treated as a log file as it only stores
      metadata related to log files, and that it should use the same
      permissions as all other files in the data directory.  This solution has
      the merit of being simple and fixes all the interaction problems between
      group access and log_file_mode.
      
      Author: Haribabu Kommi
      Reviewed-by: Stephen Frost, Robert Haas, Tom Lane, Michael Paquier
      Discussion: https://postgr.es/m/CAJrrPGcEotF1P7AWoeQyD3Pqr-0xkQg_Herv98DjbaMj+naozw@mail.gmail.com
      Backpatch-through: 11, where group access has been added.
      Transaction chaining · 280a408b
      Peter Eisentraut authored
      Add command variants COMMIT AND CHAIN and ROLLBACK AND CHAIN, which
      start new transactions with the same transaction characteristics as the
      just finished one, per SQL standard.
      
      Support for transaction chaining in PL/pgSQL is also added.  This
      functionality is especially useful when running COMMIT in a loop in
      PL/pgSQL.
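      A minimal sketch of the new syntax (hypothetical statements; the
      characteristics shown are just examples of what gets carried over):

```sql
BEGIN ISOLATION LEVEL SERIALIZABLE READ ONLY;
SELECT count(*) FROM pg_class;
COMMIT AND CHAIN;    -- next transaction is again SERIALIZABLE, READ ONLY
SELECT count(*) FROM pg_class;
ROLLBACK AND CHAIN;  -- likewise starts a new chained transaction
COMMIT;              -- a plain COMMIT ends the chain
```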
      Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
      Discussion: https://www.postgresql.org/message-id/flat/28536681-324b-10dc-ade8-ab46f7645a5a@2ndquadrant.com
      Remove spurious return. · b2db2770
      Andres Freund authored
      Per buildfarm member anole.
      
      Author: Andres Freund
      tableam: Add tuple_{insert, delete, update, lock} and use. · 5db6df0c
      Andres Freund authored
      This adds new, required, table AM callbacks for insert/delete/update
      and lock_tuple. To be able to reasonably use those, the EvalPlanQual
      mechanism had to be adapted, moving more logic into the AM.
      
      Previously both delete/update/lock call-sites and the EPQ mechanism had
      to have awareness of the specific tuple format to be able to fetch the
      latest version of a tuple. Obviously that needs to be abstracted
      away. To do so, move the logic that finds the latest row version into
      the AM. lock_tuple has a new flag argument,
      TUPLE_LOCK_FLAG_FIND_LAST_VERSION, that forces it to lock the last
      version, rather than the current one.  It'd have been possible to do
      so via a separate callback as well, but finding the last version
      usually also necessitates locking the newest version, making it
      sensible to combine the two. This replaces the previous use of
      EvalPlanQualFetch().  Additionally HeapTupleUpdated, which previously
      signaled either a concurrent update or delete, is now split into two,
      to avoid callers needing AM specific knowledge to differentiate.
      
      The move of finding the latest row version into tuple_lock means that
      encountering a row concurrently moved into another partition will now
      raise an error about "tuple to be locked" rather than "tuple to be
      updated/deleted" - which is accurate, as that always happens when
      locking rows. While possibly slightly less helpful for users, it seems
      like an acceptable trade-off.
      
      As part of this commit HTSU_Result has been renamed to TM_Result, and
      its members have been expanded to differentiate between updating and
      deleting. HeapUpdateFailureData has been renamed to TM_FailureData.
      
      The interface to speculative insertion is changed so nodeModifyTable.c
      does not have to set the speculative token itself anymore. Instead
      there's a version of tuple_insert, tuple_insert_speculative, that
      performs the speculative insertion (without requiring a flag to signal
      that fact), and the speculative insertion is either made permanent
      with table_complete_speculative(succeeded = true) or aborted with
      succeeded = false.
      
      Note that multi_insert is not yet routed through tableam, nor is
      COPY. Changing multi_insert requires changes to copy.c that are large
      enough to better be done separately.
      
      Similarly, although simpler, CREATE TABLE AS and CREATE MATERIALIZED
      VIEW are also only going to be adjusted in a later commit.
      
      Author: Andres Freund and Haribabu Kommi
      Discussion:
          https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
          https://postgr.es/m/20190313003903.nwvrxi7rw3ywhdel@alap3.anarazel.de
          https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
  3. 23 Mar, 2019 8 commits
      Remove inadequate check for duplicate "xml" PI. · f778e537
      Tom Lane authored
      I failed to think about PIs starting with "xml".  We don't really
      need this check at all, so just take it out.  Oversight in
      commit 8d1dadb2 et al.
      Ensure xmloption = content while restoring pg_dump output. · 4870dce3
      Tom Lane authored
      In combination with the previous commit, this ensures that valid XML
      data can always be dumped and reloaded, whether it is "document"
      or "content".
      
      Discussion: https://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com
      Accept XML documents when xmloption = content, as required by SQL:2006+. · 8d1dadb2
      Tom Lane authored
      Previously we were using the SQL:2003 definition, which doesn't allow
      this, but that creates a serious dump/restore gotcha: there is no
      setting of xmloption that will allow all valid XML data.  Hence,
      switch to the 2006 definition.
      
      Since libxml doesn't accept <!DOCTYPE> directives in the mode we
      use for CONTENT parsing, the implementation is to detect <!DOCTYPE>
      in the input and switch to DOCUMENT parsing mode.  This should not
      cost much, because <!DOCTYPE> should be close to the front of the
      input if it's there at all.  It's possible that this causes the
      error messages for malformed input to be slightly different than
      they were before, if said input includes <!DOCTYPE>; but that does
      not seem like a big problem.
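      A hedged sketch of the behavioral difference (hypothetical values;
      under the SQL:2003 rules the DOCTYPE form was rejected as CONTENT):

```sql
SET xmloption = content;
-- Always accepted as CONTENT: a fragment that is not a single document
SELECT 'plain text with <a>markup</a>'::xml;
-- Rejected before this commit, accepted under the SQL:2006 definition,
-- since a document carrying <!DOCTYPE> is now also valid CONTENT:
SELECT '<!DOCTYPE greeting><greeting>hello</greeting>'::xml;
```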
      
      In passing, buy back a few cycles in parsing of large XML documents
      by not doing strlen() of the whole input in parse_xml_decl().
      
      Back-patch because dump/restore failures are not nice.  This change
      shouldn't break any cases that worked before, so it seems safe to
      back-patch.
      
      Chapman Flack (revised a bit by me)
      
      Discussion: https://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com
      Suppress DETAIL output from an event_trigger test. · 05f110cc
      Peter Geoghegan authored
      Suppress 3 lines of unstable DETAIL output from a DROP ROLE statement in
      event_trigger.sql.  This is further cleanup for commit dd299df8.
      
      Note that the event_trigger test instability issue is very similar to
      the recently suppressed foreign_data test instability issue.  Both
      issues involve DETAIL output for a DROP ROLE statement that needed to be
      changed as part of dd299df8.
      
      Per buildfarm member macaque.
      Add nbtree high key "continuescan" optimization. · 29b64d1d
      Peter Geoghegan authored
      Teach nbtree forward index scans to check the high key before moving to
      the right sibling page in the hope of finding that it isn't actually
      necessary to do so.  The new check may indicate that the scan definitely
      cannot find matching tuples to the right, ending the scan immediately.
      We already opportunistically force a similar "continuescan oriented"
      key check of the final non-pivot tuple when it's clear that it cannot be
      returned to the scan due to being dead-to-all.  The new high key check
      is complementary.
      
      The new approach for forward scans is more effective than checking the
      final non-pivot tuple, especially with composite indexes and non-unique
      indexes.  The improvements to the logic for picking a split point added
      by commit fab25024 make it likely that relatively dissimilar high keys
      will appear on a page.  A distinguishing key value that can only appear
      on non-pivot tuples on the right sibling page will often be present in
      leaf page high keys.
      
      Since forcing the final item to be key checked no longer makes any
      difference in the case of forward scans, the existing extra key check is
      now only used for backwards scans.  Backward scans continue to
      opportunistically check the final non-pivot tuple, which is actually the
      first non-pivot tuple on the page (not the last).
      
      Note that even pg_upgrade'd v3 indexes make use of this optimization.
      
      Author: Peter Geoghegan, Heikki Linnakangas
      Reviewed-By: Heikki Linnakangas
      Discussion: https://postgr.es/m/CAH2-WzkOmUduME31QnuTFpimejuQoiZ-HOf0pOWeFZNhTMctvA@mail.gmail.com
      Improve format of code and some error messages in pg_checksums · 4ba96d1b
      Michael Paquier authored
      This makes the code more consistent with the surroundings.
      
      Author: Fabrízio de Royes Mello
      Discussion: https://postgr.es/m/CAFcNs+pXb_35r5feMU3-dWsWxXU=Yjq+spUsthFyGFbT0QcaKg@mail.gmail.com
      Add unreachable "break" to satisfy -Wimplicit-fallthrough. · fb50d3f0
      Tom Lane authored
      gcc is a bit pickier about this than perhaps it should be.
      
      Discussion: https://postgr.es/m/E1h6zzT-0003ft-DD@gemulon.postgresql.org
      Expand EPQ tests for UPDATEs and DELETEs · cdcffe22
      Andres Freund authored
      Previously there was basically no coverage for UPDATEs encountering
      deleted rows, and no coverage for DELETE having to perform EPQ. That's
      problematic for an upcoming commit in which EPQ is taught to integrate
      with tableams.  Also, there was no test for UPDATE to encounter a row
      UPDATEd into another partition.
      
      Author: Andres Freund
  4. 22 Mar, 2019 18 commits
  5. 21 Mar, 2019 1 commit