1. 09 Sep, 2018 1 commit
    • Allow ENOENT in check_mode_recursive(). · c85ad9cc
      Noah Misch authored
      Buildfarm member tern failed src/bin/pg_ctl/t/001_start_stop.pl when a
      check_mode_recursive() call overlapped a server's startup-time deletion
      of pg_stat/global.stat.  Just warn.  Also, include errno in the message.
      Back-patch to v11, where check_mode_recursive() first appeared.
  2. 08 Sep, 2018 4 commits
    • Fix logical subscriber wait in test. · 076a3c21
      Noah Misch authored
      Buildfarm members sungazer and tern revealed this deficit.  Back-patch
      to v10, like commit 4f10e7ea, which
      introduced the test.
    • Minor cleanup/future-proofing for pg_saslprep(). · f47f3148
      Tom Lane authored
      Ensure that pg_saslprep() initializes its output argument to NULL in
      all failure paths, and then remove the redundant initialization that
      some (not all) of its callers did.  This does not fix any live bug,
      but it reduces the odds of future bugs of omission.
      
      Also add a comment about why the existing failure-path coding is
      adequate.
      
      Back-patch so as to keep the function's API consistent across branches,
      again to forestall future bug introduction.
      
      Patch by me, reviewed by Michael Paquier
      
      Discussion: https://postgr.es/m/16558.1536407783@sss.pgh.pa.us
    • Remove duplicated words split across lines in comments · 9226a3b8
      Michael Paquier authored
      This has been detected using some interesting tricks with sed, and
      the method used is described in detail in the discussion below.
      
      Author: Justin Pryzby
      Discussion: https://postgr.es/m/20180908013109.GB15350@telsasoft.com
    • Save/restore SPI's global variables in SPI_connect() and SPI_finish(). · 361844fe
      Tom Lane authored
      This patch removes two sources of interference between nominally
      independent functions when one SPI-using function calls another,
      perhaps without knowing that it does so.
      
      Chapman Flack pointed out that xml.c's query_to_xml_internal() expects
      SPI_tuptable and SPI_processed to stay valid across datatype output
      function calls; but it's possible that such a call could involve
      re-entrant use of SPI.  It seems likely that there are similar hazards
      elsewhere, if not in the core code then in third-party SPI users.
      Previously SPI_finish() reset SPI's API globals to zeroes/nulls, which
      would typically make for a crash in such a situation.  Restoring them
      to the values they had at SPI_connect() seems like a considerably more
      useful behavior, and it still meets the design goal of not leaving any
      dangling pointers to tuple tables of the function being exited.
      
      Also, cause SPI_connect() to reset these variables to zeroes/nulls after
      saving them.  This prevents interference in the opposite direction: it's
      possible that a SPI-using function that's only ever been tested standalone
      contains assumptions that these variables start out as zeroes.  That was
      the case as long as you were the outermost SPI user, but not so much for
      an inner user.  Now it's consistent.
      
      Report and fix suggestion by Chapman Flack, actual patch by me.
      Back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/9fa25bef-2e4f-1c32-22a4-3ad0723c4a17@anastigmatix.net
  3. 07 Sep, 2018 6 commits
    • Limit depth of forced recursion for CLOBBER_CACHE_RECURSIVELY. · f510412d
      Tom Lane authored
      It's somewhat surprising that we got away with this before.  (Actually,
      since nobody tests this routinely AFAIK, it might've been broken for
      awhile.  But it's definitely broken in the wake of commit f868a814.)
      It seems sufficient to limit the forced recursion to a small number
      of levels.
      
      Back-patch to all supported branches, like the preceding patch.
      
      Discussion: https://postgr.es/m/12259.1532117714@sss.pgh.pa.us
    • Fix longstanding recursion hazard in sinval message processing. · f868a814
      Tom Lane authored
      LockRelationOid and sibling routines supposed that, if our session already
      holds the lock they were asked to acquire, they could skip calling
      AcceptInvalidationMessages on the grounds that we must have already read
      any remote sinval messages issued against the relation being locked.
      This is normally true, but there's a critical special case where it's not:
      processing inside AcceptInvalidationMessages might attempt to access system
      relations, resulting in a recursive call to acquire a relation lock.
      
      Hence, if the outer call had acquired that same system catalog lock, we'd
      fall through, despite the possibility that there's an as-yet-unread sinval
      message for that system catalog.  This could, for example, result in
      failure to access a system catalog or index that had just been processed
      by VACUUM FULL.  This is the explanation for buildfarm failures we've been
      seeing intermittently for the past three months.  The bug is far older
      than that, but commits a54e1f15 et al added a new recursion case within
      AcceptInvalidationMessages that is apparently easier to hit than any
      previous case.
      
      To fix this, we must not skip calling AcceptInvalidationMessages until
      we have *finished* a call to it since acquiring a relation lock, not
      merely acquired the lock.  (There's already adequate logic inside
      AcceptInvalidationMessages to deal with being called recursively.)
      Fortunately, we can implement that at trivial cost, by adding a flag
      to LOCALLOCK hashtable entries that tracks whether we know we have
      completed such a call.
      
      There is an API hazard added by this patch for external callers of
      LockAcquire: if anything is testing for LOCKACQUIRE_ALREADY_HELD,
      it might be fooled by the new return code LOCKACQUIRE_ALREADY_CLEAR
      into thinking the lock wasn't already held.  This should be a fail-soft
      condition, though, unless something very bizarre is being done in
      response to the test.
      
      Also, I added an additional output argument to LockAcquireExtended,
      assuming that that probably isn't called by any outside code given
      the very limited usefulness of its additional functionality.
      
      Back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/12259.1532117714@sss.pgh.pa.us
    • Improve handling of corrupted two-phase state files at recovery · 8582b4d0
      Michael Paquier authored
      When a corrupted two-phase state file is found by WAL replay, be it
      during crash recovery or archive recovery, the file is simply
      skipped and a WARNING is logged to the user, causing the
      transaction to be silently lost.  An on-disk two-phase state file
      is as likely to be corrupted as what is stored in WAL records, but
      WAL records are already able to fail hard if there is a CRC
      mismatch.  On-disk two-phase state files, on the contrary, are
      simply ignored if corrupted.  Note that when restoring the initial
      two-phase state at recovery, files newer than the horizon XID are
      discarded, hence no file present in pg_twophase/ should be torn:
      they have all been made durable by a previous checkpoint, so by
      design recovery should never see any corrupted two-phase state
      file.
      
      The situation has improved since 978b2f65, which added two-phase
      state information directly to WAL instead of using on-disk files,
      so the risk is limited to long-lived two-phase transactions that
      survive across at least one checkpoint.  Backups carrying
      legitimate two-phase state files could also silently lose
      transactions when restored if those files get corrupted.
      
      This behavior has existed since two-phase commit was introduced;
      no back-patch is done for now, given the lack of complaints about
      this problem.
      
      Author: Michael Paquier
      Discussion: https://postgr.es/m/20180709050309.GM1467@paquier.xyz
    • Refactor installation of extension headers. · 7b6b167f
      Andrew Gierth authored
      Commit be54b377 failed on gmake 3.80 due to a chained conditional,
      which on closer examination could be removed entirely with some
      refactoring elsewhere for a net simplification and more robustness
      against empty expansions. Along the way, add some more comments.
      
      Also make explicit in the documentation and comments that built
      headers are not removed by 'make clean', since we don't typically want
      that for headers generated by a separate ./configure step, and it's
      much easier to add your own 'distclean' rule or use EXTRA_CLEAN than
      to try and override a deletion rule in pgxs.mk.
      
      Per buildfarm member prairiedog and comments by Michael Paquier, though
      all the actual changes are my fault.
    • libpq: Change "options" dispchar to normal · 1fea1e32
      Peter Eisentraut authored
      libpq connection options as returned by PQconndefaults() have a
      "dispchar" field that determines (among other things) whether an option
      is a "debug" option, which shouldn't be shown by default to clients.
      postgres_fdw makes use of that to control which connection options to
      accept from a foreign server configuration.
      
      Curiously, the "options" option, which allows passing configuration
      settings to the backend server, was listed as a debug option, which
      prevented it from being used by postgres_fdw.  Maybe it was once meant
      for debugging, but it's clearly in general use nowadays.
      
      So change the dispchar for it to be the normal non-debug case.  Also
      remove the "debug" reference from its label field.
      Reported-by: Shinoda, Noriyoshi <noriyoshi.shinoda@hpe.com>
    • Use C99 designated initializers for some structs · 98afa68d
      Peter Eisentraut authored
      These are just a few particularly egregious cases that were hard to read
      and write, and error prone because of many similar adjacent types.
      
      Discussion: https://www.postgresql.org/message-id/flat/4c9f01be-9245-2148-b569-61a8562ef190%402ndquadrant.com
  4. 06 Sep, 2018 4 commits
    • Fix inconsistent argument naming. · 75f78553
      Tom Lane authored
      Typo in commit 842cb9fa.
    • Make contrib/unaccent's unaccent() function work when not in search path. · a5322ca1
      Tom Lane authored
      Since the fixes for CVE-2018-1058, we've advised people to schema-qualify
      function references in order to fix failures in code that executes under
      a minimal search_path setting.  However, that's insufficient to make the
      single-argument form of unaccent() work, because it looks up the "unaccent"
      text search dictionary using the search path.
      
      The most expedient answer seems to be to remove the search_path dependency
      by making it look in the same schema that the unaccent() function itself
      is declared in.  This will definitely work for the normal usage of this
      function with the unaccent dictionary provided by the extension.
      It's barely possible that there are people who were relying on the
      search-path-dependent behavior to select other dictionaries with the same
      name; but if there are any such people at all, they can still get that
      behavior by writing unaccent('unaccent', ...), or possibly
      unaccent('unaccent'::text::regdictionary, ...) if the lookup has to be
      postponed to runtime.
      
      Per complaint from Gunnlaugur Thor Briem.  Back-patch to all supported
      branches.
      
      Discussion: https://postgr.es/m/CAPs+M8LCex6d=DeneofdsoJVijaG59m9V0ggbb3pOH7hZO4+cQ@mail.gmail.com
    • Refactor dlopen() support · 842cb9fa
      Peter Eisentraut authored
      Nowadays, all platforms except Windows and older HP-UX have standard
      dlopen() support.  So having a separate implementation per platform
      under src/backend/port/dynloader/ is a bit excessive.  Instead, treat
      dlopen() like other library functions that happen to be missing
      sometimes and put a replacement implementation under src/port/.
      
      Discussion: https://www.postgresql.org/message-id/flat/e11a49cb-570a-60b7-707d-7084c8de0e61%402ndquadrant.com#54e735ae37476a121abb4e33c2549b03
    • Fix the overrun in hash index metapage for smaller block sizes. · ac27c74d
      Amit Kapila authored
      Commit 620b49a1 changed the value of HASH_MAX_BITMAPS with the
      intent of allowing many non-unique values in hash indexes without
      worrying about reaching the limit on the number of overflow pages.
      At the time, it didn't occur to us that this can overrun the block
      for smaller block sizes.
      
      Choose the value of HASH_MAX_BITMAPS based on BLCKSZ such that it
      gives the same answer as now for the cases where the overrun
      doesn't occur, and some other sufficiently safe value for the
      cases where an overrun currently does occur.  This allows us not
      to change the behavior in any case that currently works, so
      there's really no reason for a HASH_VERSION bump.
      
      Author: Dilip Kumar
      Reviewed-by: Amit Kapila
      Backpatch-through: 10
      Discussion: https://postgr.es/m/CAA4eK1LtF4VmU4mx_+i72ff1MdNZ8XaJMGkt2HV8+uSWcn8t4A@mail.gmail.com
  5. 05 Sep, 2018 6 commits
    • Allow extensions to install built as well as unbuilt headers. · be54b377
      Andrew Gierth authored
      Commit df163230 overlooked the case that an out-of-tree extension
      might need to build its header files (e.g. via ./configure). If it is
      also doing a VPATH build, the HEADERS_* rules in the original commit
      would then fail to find the files, since they would be looking only
      under $(srcdir) and not in the build directory.
      
      Fix by adding HEADERS_built and HEADERS_built_$(MODULE) which behave
      like DATA_built in that they look in the build dir rather than the
      source dir (and also make the files dependencies of the "all" target).
      
      No Windows support appears to be needed for this, since it is only
      relevant to out-of-tree builds (no support exists in Mkvcbuild.pm to
      build extension header files in any case).
    • Remove no-longer-used variable. · 54b01b92
      Tom Lane authored
      Oversight in 2fbdf1b3.  Per buildfarm.
    • Make argument names of pg_get_object_address consistent, and fix docs. · ae5205c8
      Tom Lane authored
      pg_get_object_address and pg_identify_object_as_address are supposed
      to be inverses, but they disagreed as to the names of the arguments
      representing the textual form of an object address.  Moreover, the
      documented argument names didn't agree with reality at all, either
      for these functions or pg_identify_object.
      
      In HEAD and v11, I think we can get away with renaming the input
      arguments of pg_get_object_address to match the outputs of
      pg_identify_object_as_address.  In theory that might break queries
      using named-argument notation to call pg_get_object_address, but
      it seems really unlikely that anybody is doing that, or that they'd
      have much trouble adjusting if they were.  In older branches, we'll
      just live with the lack of consistency.
      
      Aside from fixing the documentation of these functions to match reality,
      I couldn't resist the temptation to do some copy-editing.
      
      Per complaint from Jean-Pierre Pelletier.  Back-patch to 9.5 where these
      functions were introduced.  (Before v11, this is a documentation change
      only.)
      
      Discussion: https://postgr.es/m/CANGqjDnWH8wsTY_GzDUxbt4i=y-85SJreZin4Hm8uOqv1vzRQA@mail.gmail.com
    • Simplify partitioned table creation vs. relcache · 2fbdf1b3
      Alvaro Herrera authored
      In the original code, we were storing the pg_inherits row for a
      partitioned table too early: enough that we had a hack for relcache to
      avoid falling flat on its face while reading such a partial entry.  If
      we finish the pg_class creation first and *then* store the pg_inherits
      entry, we don't need that hack.
      
      Also recognize that pg_class.relpartbound is not marked NOT NULL and
      therefore it's entirely possible to read null values, so having only
      Assert() protection isn't enough.  Change those to if/elog tests
      instead.  This qualifies as a robustness fix, so backpatch to pg11.
      
      In passing, remove one access that wasn't actually needed, and reword
      one message to be like all the others that check for the same thing.
      
      Reviewed-by: Amit Langote
      Discussion: https://postgr.es/m/20180903213916.hh6wasnrdg6xv2ud@alvherre.pgsql
    • PL/Python: Remove use of simple slicing API · f5a6509b
      Peter Eisentraut authored
      The simple slicing API (sq_slice, sq_ass_slice) has been deprecated
      since Python 2.0 and has been removed altogether in Python 3, so remove
      those functions from the PLyResult class.  Instead, the non-slice
      mapping functions mp_subscript and mp_ass_subscript can take slice
      objects as an index.  Since we just pass the index through to the
      underlying list object, we already support that.  Test coverage was
      already in place.
    • docs: improve AT TIME ZONE description · dd6073f2
      Bruce Momjian authored
      The previous description was unclear.  Also add a third example, change
      use of time zone acronyms to more verbose descriptions, and add a
      mention that using 'time' with AT TIME ZONE uses the current time zone
      rules.
      
      Backpatch-through: 9.3
  6. 04 Sep, 2018 5 commits
    • Improve some error message strings and errcodes · d6e98ebe
      Michael Paquier authored
      This makes a bit less work for translators, by unifying error strings a
      bit more with what the rest of the code does, this time for three error
      strings in autoprewarm and one in base backup code.
      
      After some code review of slot.c, some file-access failures were
      found to be reported with an incorrect internal-error errcode,
      where a corrupted-data errcode makes the most sense, similar to
      the previous work done in e41d0a10.  Also, after calling rmtree(),
      a WARNING was reported that duplicates what the internal call
      already reports, so make the code consistent with all other code
      paths calling this function.
      
      Author: Michael Paquier
      Discussion: https://postgr.es/m/20180902200747.GC1343@paquier.xyz
    • Fully enforce uniqueness of constraint names. · 17b7c302
      Tom Lane authored
      It's been true for a long time that we expect names of table and domain
      constraints to be unique among the constraints of that table or domain.
      However, the enforcement of that has been pretty haphazard, and it missed
      some corner cases such as creating a CHECK constraint and then an index
      constraint of the same name (as per recent report from André Hänsel).
      Also, due to the lack of an actual unique index enforcing this, duplicates
      could be created through race conditions.
      
      Moreover, the code that searches pg_constraint has been quite inconsistent
      about how to handle duplicate names if one did occur: some places checked
      and threw errors if there was more than one match, while others just
      processed the first match they came to.
      
      To fix, create a unique index on (conrelid, contypid, conname).  Since
      either conrelid or contypid is zero, this will separately enforce
      uniqueness of constraint names among constraints of any one table and any
      one domain.  (If we ever implement SQL assertions, and put them into this
      catalog, more thought might be needed.  But it'd be at least as reasonable
      to put them into a new catalog; having overloaded this one catalog with
      two kinds of constraints was a mistake already IMO.)  This index can replace
      the existing non-unique index on conrelid, though we need to keep the one
      on contypid for query performance reasons.
      
      Having done that, we can simplify the logic in various places that either
      coped with duplicates or neglected to, as well as potentially improve
      lookup performance when searching for a constraint by name.
      
      Also, as per our usual practice, install a preliminary check so that you
      get something more friendly than a unique-index violation report in the
      case complained of by André.  And teach ChooseIndexName to avoid choosing
      autogenerated names that would draw such a failure.
      
      While it's not possible to make such a change in the back branches,
      it doesn't seem quite too late to put this into v11, so do so.
      
      Discussion: https://postgr.es/m/0c1001d4428f$0942b430$1bc81c90$@webkr.de
    • Clean up after TAP tests in oid2name and vacuumlo. · f30c6f52
      Tom Lane authored
      Oversights in commits 1aaf532d and bfea331a.  Unlike the case for
      traditional-style REGRESS tests, pgxs.mk doesn't have any builtin support
      for TAP tests, so it doesn't realize it should remove tmp_check/.
      Maybe we should build some actual pgxs infrastructure for TAP tests ...
      but for the moment, just remove explicitly.
    • Prohibit pushing subqueries containing window function calculation to workers. · 14e9b2a7
      Amit Kapila authored
      
      Allowing window function calculation in workers leads to inconsistent
      results because if the input row ordering is not fully deterministic, the
      output of window functions might vary across workers.  The fix is to treat
      them as parallel-restricted.
      
      In passing, improve the coding pattern in max_parallel_hazard_walker
      so that it has a chain of mutually-exclusive if ... else if ... else if
      ... else if ... IsA tests.
      
      Reported-by: Marko Tiikkaja
      Bug: 15324
      Author: Amit Kapila
      Reviewed-by: Tom Lane
      Backpatch-through: 9.6
      Discussion: https://postgr.es/m/CAL9smLAnfPJCDUUG4ckX2iznj53V7VSMsYefzZieN93YxTNOcw@mail.gmail.com
    • During the split, set checksum on an empty hash index page. · 7c9e19ca
      Amit Kapila authored
      On a split, we allocate a new splitpoint's worth of bucket pages wherein
      we initialize the last page with zeros which is fine, but we forgot to set
      the checksum for that last page.
      
      We decided to back-patch this fix only as far as 10 because we
      don't have an easy way to test it in prior versions.  Another
      reason is that the hash-index code was changed heavily in 10, so
      it is not advisable to push the fix into prior versions without
      testing it there.
      
      Author: Amit Kapila
      Reviewed-by: Yugo Nagata
      Backpatch-through: 10
      Discussion: https://postgr.es/m/5d03686d-727c-dbf8-0064-bf8b97ffe850@2ndquadrant.com
  7. 03 Sep, 2018 2 commits
    • Remove pg_constraint.conincluding · c076f3d7
      Alvaro Herrera authored
      This column was added in commit 8224de4f ("Indexes with INCLUDE
      columns and their support in B-tree") to ease writing the ruleutils.c
      supporting code for that feature, but it turns out to be unnecessary --
      we can do the same thing with just one more syscache lookup.
      
      Even the documentation for the new column being removed in this commit
      is awkward.
      
      Discussion: https://postgr.es/m/20180902165018.33otxftp3olgtu4t@alvherre.pgsql
    • Fix memory leak in TRUNCATE decoding · 4ddd8f5f
      Tomas Vondra authored
      When decoding a TRUNCATE record, the relids array was being allocated in
      the main ReorderBuffer memory context, but not released with the change
      resulting in a memory leak.
      
      The array was also ignored when serializing/deserializing the
      change, on the assumption that all the information is stored in
      the change itself.  So when spilling the change to disk, we
      serialized only the pointer to the relids array.  Because the
      array was never released, the pointer remained valid even after
      loading the change back into memory, which prevented an actual
      crash.
      
      This fixes both the memory leak and the (de)serialization.  The
      relids array is still allocated in the main ReorderBuffer memory
      context (none of the existing ones seems like a good match, and
      adding an extra context seems like overkill).  The allocation is
      wrapped in a new ReorderBuffer API function, to keep the details
      within reorderbuffer.c, just like the other ReorderBufferGet
      methods do.
      
      Author: Tomas Vondra
      Discussion: https://www.postgresql.org/message-id/flat/66175a41-9342-2845-652f-1bd4c3ee50aa%402ndquadrant.com
      Backpatch: 11, where decoding of TRUNCATE was introduced
  8. 02 Sep, 2018 1 commit
  9. 01 Sep, 2018 6 commits
  10. 31 Aug, 2018 5 commits
    • Fix psql's \dC command to annotate I/O conversion casts as such. · 115bf1e7
      Tom Lane authored
      A cast declared WITH INOUT was described as '(binary coercible)',
      which seems pretty inaccurate; let's print '(with inout)' instead.
      Per complaint from Jean-Pierre Pelletier.
      
      This definitely seems like a bug fix, but given that it's been wrong
      since 8.4 and nobody complained before, I'm hesitant to back-patch a
      behavior change into stable branches.  It doesn't seem too late for
      v11 though.
      
      Discussion: https://postgr.es/m/5b887023.1c69fb81.ff96e.6a1d@mx.google.com
    • Ensure correct minimum consistent point on standbys · c186ba13
      Michael Paquier authored
      Commit 8d68ee6 improved the startup process's calculation of the
      minimum consistent point, ensuring that all available WAL gets
      replayed during crash recovery, but it introduced an incorrect
      calculation of the minimum recovery point for non-startup
      processes.  This can cause incorrect page references on a standby
      when, for example, the background writer has flushed a couple of
      pages to disk but has not yet updated the control file to tell a
      subsequent crash recovery how far it should replay.
      
      The only case where this has been reported to be a problem is when a
      standby needs to calculate the latest removed xid when replaying a btree
      deletion record, so one would need connections on a standby that happen
      just after recovery has thought it reached a consistent point.  Using a
      background worker which is started after the consistent point is reached
      would be the easiest way to get into problems if it connects to a
      database.  Having clients which attempt to connect periodically could
      also be a problem, but the odds of seeing this problem are much lower.
      
      The fix is pretty simple: give non-startup processes access to the
      minimum recovery point written in the control file so that they
      can use it as a reference, while the startup process still
      initializes its own reference to the minimum consistent point, so
      that the original problem of incorrect page references showing up
      after a post-promotion crash does not recur.
      
      Reported-by: Alexander Kukushkin
      Diagnosed-by: Alexander Kukushkin
      Author: Michael Paquier
      Reviewed-by: Kyotaro Horiguchi, Alexander Kukushkin
      Discussion: https://postgr.es/m/153492341830.1368.3936905691758473953@wrigleys.postgresql.org
      Backpatch-through: 9.3
    • Code review for pg_verify_checksums.c. · d9c366f9
      Tom Lane authored
      Use postgres_fe.h, since this is frontend code.  Pretend that we've heard
      of project style guidelines for, eg, #include order.  Use BlockNumber not
      int arithmetic for block numbers, to avoid misbehavior with relations
      exceeding 2^31 blocks.  Avoid an unnecessary strict-aliasing warning
      (per report from Michael Banck).  Const-ify assorted stuff.  Avoid
      scribbling on the output of readdir() -- perhaps that's safe in practice,
      but POSIX forbids it, and this code has so far earned exactly zero
      credibility portability-wise.  Editorialize on an ambiguously-worded
      message.
      
      I did not touch the problem of the "buf" local variable being possibly
      insufficiently aligned; that's not specific to this code, and seems like
      it should be fixed as part of a different, larger patch.
      
      Discussion: https://postgr.es/m/1535618100.1286.3.camel@credativ.de
    • Enforce cube dimension limit in all cube construction functions · f919c165
      Alexander Korotkov authored
      contrib/cube limits the cube datatype to 100 dimensions.  However,
      the limit is not enforced everywhere, and one can actually
      construct a cube with more than 100 dimensions, which then causes
      trouble with dump/restore.  This commit adds checks for the
      dimension limit to all functions responsible for cube
      construction.  Backpatch to all supported versions.
      
      Reported-by: Andrew Gierth
      Discussion: https://postgr.es/m/87va7uybt4.fsf%40news-spur.riddles.org.uk
      Author: Andrey Borodin with small additions by me
      Review: Tom Lane
      Backpatch-through: 9.3
    • Split contrib/cube platform-depended checks into separate test · 38970ce8
      Alexander Korotkov authored
      We're currently maintaining two outputs for the cube regression
      test.  But that appears to be unsuitable, because those outputs
      differ in only a few checks involving scientific notation.  So,
      split the checks involving scientific notation into a separate
      test, making contrib/cube easier to maintain.  Backpatch to all
      supported versions in order to make further backpatching easier.
      
      Discussion: https://postgr.es/m/CAPpHfdvJgWjxHsJTtT%2Bo1tz3OR8EFHcLQjhp-d3%2BUcmJLh-fQA%40mail.gmail.com
      Author: Alexander Korotkov
      Backpatch-through: 9.3