1. 23 Aug, 2018 2 commits
  2. 22 Aug, 2018 8 commits
  3. 21 Aug, 2018 5 commits
  4. 20 Aug, 2018 1 commit
  5. 19 Aug, 2018 2 commits
  6. 18 Aug, 2018 2 commits
  7. 17 Aug, 2018 6 commits
    • Tom Lane: Ensure schema qualification in pg_restore DISABLE/ENABLE TRIGGER commands. · 6771c932
      Previously, this code blindly followed the common coding pattern of
      passing PQserverVersion(AH->connection) as the server-version parameter
      of fmtQualifiedId.  That works as long as we have a connection; but in
      pg_restore with text output, we don't.  Instead we got a zero from
      PQserverVersion, which fmtQualifiedId interpreted as "server is too old to
      have schemas", and so the name went unqualified.  That still accidentally
      managed to work in many cases, which is probably why this ancient bug went
      undetected for so long.  It only became obvious in the wake of the changes
      to force dump/restore to execute with restricted search_path.
      
      In HEAD/v11, let's deal with this by ripping out fmtQualifiedId's server-
      version behavioral dependency, and just making it schema-qualify all the
      time.  We no longer support pg_dump from servers old enough to need the
      ability to omit schema name, let alone restoring to them.  (Also, the few
      callers outside pg_dump already didn't work with pre-schema servers.)
      
      In older branches, that's not an acceptable solution, so instead just
      tweak the DISABLE/ENABLE TRIGGER logic to ensure it will schema-qualify
      its output regardless of server version.
      
      Per bug #15338 from Oleg somebody.  Back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/153452458706.1316.5328079417086507743@wrigleys.postgresql.org
    • Peter Eisentraut: InsertPgAttributeTuple() to set attcacheoff · e4597ee6
      InsertPgAttributeTuple() is the interface between in-memory tuple
      descriptors and on-disk pg_attribute, so it makes sense to give it the
      job of resetting attcacheoff.  This avoids each caller having to do so.
      Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
    • Andrew Gierth: Set scan direction appropriately for SubPlans (bug #15336) · 520acab1
      When executing a SubPlan in an expression, the EState's direction
      field was left alone, resulting in an attempt to execute the subplan
      backwards if it was encountered during a backwards scan of a cursor.
      Also, though much less likely, it was possible to reach the execution
      of an InitPlan while in backwards-scan state.
      
      Repair by saving/restoring estate->es_direction and forcing forward
      scan mode in the relevant places.
      
      Backpatch all the way, since this has been broken since 8.3 (prior to
      commit c7ff7663, SubPlans had their own EStates rather than sharing
      the parent plan's, so there was no confusion over scan direction).
      
      Per bug #15336 reported by Vladimir Baranoff; analysis and patch by
      me, review by Tom Lane.
      
      Discussion: https://postgr.es/m/153449812167.1304.1741624125628126322@wrigleys.postgresql.org
    • Tom Lane: Fix configure's snprintf test so it exposes HP-UX bug. · 9bed827b
      Since commit e1d19c90, buildfarm member gharial has been failing with
      symptoms indicating that snprintf sometimes returns -1 for buffer
      overrun, even though it passes the added configure check.  Some
      google research suggests that this happens only in limited cases,
      such as when the overrun happens partway through a %d item.  Adjust
      the configure check to exercise it that way.  Since I'm now feeling
      more paranoid than I was before, also make the test explicitly verify
      that the buffer doesn't get physically overrun.
    • Bruce Momjian: pg_upgrade: issue helpful error message for use on standbys · b94f7b53
      Commit 777e6ddf checked for a shutdown
      message from a standby and allowed it to continue.  This patch reports a
      helpful error message in these cases, suggesting to use rsync as
      documented.
      
      Diagnosed-by: Martín Marqués
      
      Discussion: https://postgr.es/m/CAPdiE1xYCow-reLjrhJ9DqrMu-ppNq0ChUUEvVdxhdjGRD5_eA@mail.gmail.com
      
      Backpatch-through: 9.3
  8. 16 Aug, 2018 10 commits
  9. 15 Aug, 2018 3 commits
    • Alvaro Herrera: Update FSM on WAL replay of page all-visible/frozen · ab7dbd68
      We aren't very strict about keeping FSM up to date on WAL replay,
      because per-page freespace values aren't critical in replicas (can't
      write to heap in a replica; and if the replica is promoted, the values
      would be updated by VACUUM anyway).  However, VACUUM since 9.6 can skip
      processing pages marked all-visible or all-frozen, and if such pages are
      recorded in FSM with wrong values, those values are blindly propagated
      to FSM's upper layers by VACUUM's FreeSpaceMapVacuum.  (This rationale
      assumes that crashes are not very frequent, because those would cause
      outdated FSM to occur in the primary.)
      
      Even when the FSM is outdated on a standby, things are normally not too
      bad, because most per-page FSM values will be zero (other than those
      propagated with the base backup that created the standby); the per-page
      value is only maintained by WAL replay of heap insert/update/delete once
      the remaining free space drops below 0.2*BLCKSZ.  However, if
      wal_log_hints=on causes complete FSM pages to be propagated to a standby
      via full-page images, many too-optimistic per-page values can end up
      registered in the standby.
      
      Incorrect per-page values aren't critical in most cases, since an
      inserter that is given a page that doesn't actually contain the claimed
      free space will update FSM with the correct value, and retry until it
      finds a usable page.  However, if there are many such updates to do, an
      inserter can spend a long time doing them before a usable page is found;
      in a heavily trafficked insert-only table with many concurrent inserters
      this has been observed to cause several second stalls, causing visible
      application malfunction.
      
      To fix this problem, it seems sufficient to have heap_xlog_visible
      (replay of setting all-visible and all-frozen VM bits for a heap page)
      update the FSM value for the page being processed.  This fixes the
      per-page counters together with making the page skippable to vacuum, so
      when vacuum does FreeSpaceMapVacuum, the values propagated to FSM upper
      layers are the correct ones, avoiding the problem.
      
      While at it, apply the same fix to heap_xlog_clean (replay of tuple
      removal by HOT pruning and vacuum).  This makes any space freed by the
      cleaning available earlier than the next vacuum in the promoted replica.
      
      Backpatch to 9.6, where this problem was diagnosed on an insert-only
      table with all-frozen pages, which were introduced as a concept in that
      release.  Theoretically it could apply with all-visible pages to older
      branches, but there's been no report of that and it doesn't backpatch
      cleanly anyway.
      
      Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
      Discussion: https://postgr.es/m/20180802172857.5skoexsilnjvgruk@alvherre.pgsql
    • Tom Lane: Clean up assorted misuses of snprintf()'s result value. · cc4f6b77
      Fix a small number of places that were testing the result of snprintf()
      but doing so incorrectly.  The right test for buffer overrun, per C99,
      is "result >= bufsize" not "result > bufsize".  Some places were also
      checking for failure with "result == -1", but the standard only says
      that a negative value is delivered on failure.
      
      (Note that this only makes these places correct if snprintf() delivers
      C99-compliant results.  But at least now these places are consistent
      with all the other places where we assume that.)
      
      Also, make psql_start_test() and isolation_start_test() check for
      buffer overrun while constructing their shell commands.  There seems
      like a higher risk of overrun, with more severe consequences, here
      than there is for the individual file paths that are made elsewhere
      in the same functions, so this seemed like a worthwhile change.
      
      Also fix guc.c's do_serialize() to initialize errno = 0 before
      calling vsnprintf.  In principle, this should be unnecessary because
      vsnprintf should have set errno if it returns a failure indication ...
      but the other two places this coding pattern is cribbed from don't
      assume that, so let's be consistent.
      
      These errors are all very old, so back-patch as appropriate.  I think
      that only the shell command overrun cases are even theoretically
      reachable in practice, but there's not much point in erroneous error
      checks.
      
      Discussion: https://postgr.es/m/17245.1534289329@sss.pgh.pa.us
    • Tom Lane: Make snprintf.c follow the C99 standard for snprintf's result value. · 805889d7
      C99 says that the result should be the number of bytes that would have
      been emitted given a large enough buffer, not the number we actually
      were able to put in the buffer.  It's time to make our substitute
      implementation comply with that.  Not doing so results in inefficiency
      in buffer-enlargement cases, and also poses a portability hazard for
      third-party code that might expect C99-compliant snprintf behavior
      within Postgres.
      
      In passing, remove useless tests for str == NULL; neither C99 nor
      predecessor standards ever allowed that except when count == 0,
      so I see no reason to expend cycles on making that a non-crash case
      for this implementation.  Also, don't waste a byte in pg_vfprintf's
      local I/O buffer; this might have performance benefits by allowing
      aligned writes during flushbuffer calls.
      
      Discussion: https://postgr.es/m/17245.1534289329@sss.pgh.pa.us
  10. 14 Aug, 2018 1 commit