1. 07 Jan, 2019 1 commit
  2. 06 Jan, 2019 1 commit
    • Tom Lane's avatar
      Replace the data structure used for keyword lookup. · afb0d071
      Tom Lane authored
      Previously, ScanKeywordLookup was passed an array of string pointers.
      This had some performance deficiencies: the strings themselves might
      be scattered all over the place depending on the compiler (and some
      quick checking shows that at least with gcc-on-Linux, they indeed
      weren't reliably close together).  That led to very cache-unfriendly
      behavior as the binary search touched strings in many different pages.
      Also, depending on the platform, the string pointers might need to
      be adjusted at program start, so that they couldn't be simple constant
      data.  And the ScanKeyword struct had been designed with an eye to
      32-bit machines originally; on 64-bit it requires 16 bytes per
      keyword, making it even more cache-unfriendly.
      
      Redesign so that the keyword strings themselves are allocated
      consecutively (as part of one big char-string constant), thereby
      eliminating the touch-lots-of-unrelated-pages syndrome.  And get
      rid of the ScanKeyword array in favor of three separate arrays:
      uint16 offsets into the keyword array, uint16 token codes, and
      uint8 keyword categories.  That reduces the overhead per keyword
      to 5 bytes instead of 16 (even less in programs that only need
      one of the token codes and categories); moreover, the binary search
      only touches the offsets array, further reducing its cache footprint.
      This also lets us put the token codes somewhere else than the
      keyword strings are, which avoids some unpleasant build dependencies.
      
      While we're at it, wrap the data used by ScanKeywordLookup into
      a struct that can be treated as an opaque type by most callers.
      That doesn't change things much right now, but it will make it
      less painful to switch to a hash-based lookup method, as is being
      discussed in the mailing list thread.
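
      For illustration, here is a minimal sketch of that layout (the type,
      field, and function names are simplified stand-ins, not the actual
      PostgreSQL declarations, and the token/category arrays are omitted):

          #include <stdint.h>
          #include <string.h>

          typedef struct KeywordList
          {
              const char     *kw_string;   /* all keywords, NUL-separated */
              const uint16_t *kw_offsets;  /* start of each keyword in kw_string */
              int             num_keywords;
          } KeywordList;

          /* Binary search touches only the compact offsets array (plus the
           * one big string blob), keeping the cache footprint small. */
          static int
          keyword_lookup(const KeywordList *list, const char *word)
          {
              int     low = 0;
              int     high = list->num_keywords - 1;

              while (low <= high)
              {
                  int     mid = (low + high) / 2;
                  int     cmp = strcmp(word, list->kw_string + list->kw_offsets[mid]);

                  if (cmp == 0)
                      return mid;     /* index into token/category arrays */
                  else if (cmp < 0)
                      high = mid - 1;
                  else
                      low = mid + 1;
              }
              return -1;
          }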
      
      Most of the change here is associated with adding a generator
      script that can build the new data structure from the same
      list-of-PG_KEYWORD header representation we used before.
      The PG_KEYWORD lists that plpgsql and ecpg used to embed in
      their scanner .c files have to be moved into headers, and the
      Makefiles have to be taught to invoke the generator script.
      This work is also necessary if we're to consider hash-based lookup,
      since the generator script is what would be responsible for
      constructing a hash table.
      
      Aside from saving a few kilobytes in each program that includes
      the keyword table, this seems to speed up raw parsing (flex+bison)
      by a few percent.  So it's worth doing even as it stands, though
      we think we can gain even more with a follow-on patch to switch
      to hash-based lookup.
      
      John Naylor, with further hacking by me
      
      Discussion: https://postgr.es/m/CAJVSVGXdFVU2sgym89XPL=Lv1zOS5=EHHQ8XWNzFL=mTXkKMLw@mail.gmail.com
      afb0d071
  3. 05 Jan, 2019 1 commit
    • Tom Lane's avatar
      Fix program build rule in src/bin/scripts/Makefile. · c5c7fa26
      Tom Lane authored
      Commit 69ae9dcb added a globally-visible "%: %.o" rule, but we failed
      to notice that src/bin/scripts/Makefile already had such a rule.
      Apparently, the later occurrence of the same rule wins in nearly all
      versions of gmake ... but not in the one used by buildfarm member jacana.
      jacana is evidently using the global rule, which says to link "$<",
      ie just the first dependency.  But the scripts makefile needs to
      link "$^", ie all the dependencies listed for the target.
      
      There is, fortunately, no good reason not to use "$^" in the global
      version of the rule, so we can just do that and get rid of the local
      version.
      c5c7fa26
  4. 04 Jan, 2019 6 commits
  5. 03 Jan, 2019 2 commits
    • Tom Lane's avatar
      Use symbolic references for pg_language OIDs in the bootstrap data. · 814c9019
      Tom Lane authored
      This patch teaches genbki.pl to replace pg_language names by OIDs
      in much the same way as it already does for pg_am names etc, and
      converts pg_proc.dat to use such symbolic references in the prolang
      column.
      
      Aside from getting rid of a few more magic numbers in the initial
      catalog data, this means that Gen_fmgrtab.pl no longer needs to read
      pg_language.dat, since it doesn't have to know the OID of the "internal"
      language; now it's just looking for the string "internal".
      
      No need for a catversion bump, since the contents of postgres.bki
      don't actually change at all.
      
      John Naylor
      
      Discussion: https://postgr.es/m/CAJVSVGWtUqxpfAaxS88vEGvi+jKzWZb2EStu5io-UPc4p9rSJg@mail.gmail.com
      814c9019
    • Tom Lane's avatar
      Improve ANALYZE's handling of concurrent-update scenarios. · 7170268e
      Tom Lane authored
      This patch changes the rule for whether or not a tuple seen by ANALYZE
      should be included in its sample.
      
      When we last touched this logic, in commit 51e1445f, we weren't
      thinking very hard about tuples being UPDATEd by a long-running
      concurrent transaction.  In such a case, we might see the pre-image as
      either LIVE or DELETE_IN_PROGRESS depending on timing; and we might see
      the post-image not at all, or as INSERT_IN_PROGRESS.  Since the existing
      code will not sample either DELETE_IN_PROGRESS or INSERT_IN_PROGRESS
      tuples, this leads to concurrently-updated rows being omitted from the
      sample entirely.  That's not very helpful, and it's especially the wrong
      thing if the concurrent transaction ends up rolling back.
      
      The right thing seems to be to sample DELETE_IN_PROGRESS rows just as if
      they were live.  This makes the "sample it" and "count it" decisions the
      same, which seems good for consistency.  It's clearly the right thing
      if the concurrent transaction ends up rolling back; in effect, we are
      sampling as though IN_PROGRESS transactions haven't happened yet.
      Also, this combination of choices ensures maximum robustness against
      the different combinations of whether and in which state we might see the
      pre- and post-images of an update.
      
      It's slightly annoying that we end up recording immediately-out-of-date
      stats in the case where the transaction does commit, but on the other
      hand the stats are fine for columns that didn't change in the update.
      And the alternative of sampling INSERT_IN_PROGRESS rows instead seems
      like a bad idea, because then the sampling would be inconsistent with
      the way rows are counted for the stats report.
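
      As a rough sketch of the sampling rule described above (the enum mirrors
      the names of PostgreSQL's tuple-visibility states, but the function is
      purely illustrative, not the actual acquire_sample_rows() code):

          #include <stdbool.h>

          typedef enum
          {
              TUPLE_LIVE,
              TUPLE_DEAD,
              TUPLE_INSERT_IN_PROGRESS,
              TUPLE_DELETE_IN_PROGRESS
          } TupleVisibility;

          /* Sample DELETE_IN_PROGRESS rows as though they were live, so the
           * "sample it" decision matches the "count it" decision; skip
           * INSERT_IN_PROGRESS rows, which are treated as not there yet. */
          static bool
          should_sample(TupleVisibility state)
          {
              switch (state)
              {
                  case TUPLE_LIVE:
                  case TUPLE_DELETE_IN_PROGRESS:
                      return true;
                  default:
                      return false;
              }
          }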
      
      Per report from Mark Chambers; thanks to Jeff Janes for diagnosing
      what was happening.  Back-patch to all supported versions.
      
      Discussion: https://postgr.es/m/CAFh58O_Myr6G3tcH3gcGrF-=OExB08PJdWZcSBcEcovaiPsrHA@mail.gmail.com
      7170268e
  6. 02 Jan, 2019 5 commits
    • Tom Lane's avatar
      Don't believe MinMaxExpr is leakproof without checking. · 68a13f28
      Tom Lane authored
      MinMaxExpr invokes the btree comparison function for its input datatype,
      so it's only leakproof if that function is.  Many such functions are
      indeed leakproof, but others are not, and we should not just assume that
      they are.  Hence, adjust contain_leaked_vars to verify the leakproofness
      of the referenced function explicitly.
      
      I didn't add a regression test because it would need to depend on
      some particular comparison function being leaky, and that's a moving
      target, per discussion.
      
      This has been wrong all along, so back-patch to supported branches.
      
      Discussion: https://postgr.es/m/31042.1546194242@sss.pgh.pa.us
      68a13f28
    • Tom Lane's avatar
      Ensure link commands list *.o files before LDFLAGS. · 69ae9dcb
      Tom Lane authored
      It's important for link commands to list *.o input files before -l
      switches for libraries, as library code may not get pulled into the link
      unless referenced by an earlier command-line entry.  This is certainly
      necessary for static libraries (.a style).  Apparently on some platforms
      it is also necessary for shared libraries, as reported by Donald Dong.
      
      We often put -l switches for within-tree libraries into LDFLAGS, meaning
      that link commands that list *.o files after LDFLAGS are hazardous.
      Most of our link commands got this right, but a few did not.  In
      particular, places that relied on gmake's default implicit link rule
      failed, because that puts LDFLAGS first.  Fix that by overriding the
      built-in rule with our own.  The implicit link rules in
      src/makefiles/Makefile.* for single-.o-file shared libraries mostly
      got this wrong too, so fix them.  I also changed the link rules for the
      backend and a couple of other places for consistency, even though they
      are not (currently) at risk because they aren't adding any -l switches
      to LDFLAGS.
      
      Arguably, the real problem here is that we're abusing LDFLAGS by
      putting -l switches in it and we should stop doing that.  But changing
      that would be quite invasive, so I'm not eager to do so.
      
      Perhaps this is a candidate for back-patching, but so far it seems
      that problems can only be exhibited in test code we don't normally
      build, and at least some of the problems are new in HEAD anyway.
      So I'll refrain for now.
      
      Donald Dong and Tom Lane
      
      Discussion: https://postgr.es/m/CAKABAquXn-BF-vBeRZxhzvPyfMqgGuc74p8BmQZyCFDpyROBJQ@mail.gmail.com
      69ae9dcb
    • Bruce Momjian's avatar
      Update copyright for 2019 · 97c39498
      Bruce Momjian authored
      Backpatch-through: certain files through 9.4
      97c39498
    • Peter Eisentraut's avatar
      Convert unaccent tests to UTF-8 · b6f3649b
      Peter Eisentraut authored
      This makes it easier to add new tests that are specific to Unicode
      features.  The files were previously in KOI8-R.
      
      Discussion: https://www.postgresql.org/message-id/8506.1545111362@sss.pgh.pa.us
      b6f3649b
  7. 01 Jan, 2019 2 commits
    • Michael Paquier's avatar
      Remove configure switch --disable-strong-random · 1707a0d2
      Michael Paquier authored
      This removes a portion of infrastructure introduced by fe0a0b59 to allow
      compilation of Postgres in environments where no strong random source is
      available, meaning that there is no linking to OpenSSL and no
      /dev/urandom (Windows having its own CryptoAPI).  No systems shipped
      this century lack /dev/urandom, and the buildfarm is actually not
      testing this switch at all, so just remove it.  This simplifies
      particularly some backend code which included a fallback implementation
      using shared memory, and removes a set of alternate regression output
      files from pgcrypto.
      
      Author: Michael Paquier
      Reviewed-by: Tom Lane
      Discussion: https://postgr.es/m/20181230063219.GG608@paquier.xyz
      1707a0d2
    • Michael Paquier's avatar
      Fix generation of padding message before encrypting Elgamal in pgcrypto · d880b208
      Michael Paquier authored
      fe0a0b59, which added a stronger random source to Postgres, introduced
      a thinko when creating the padding message that gets encrypted for
      Elgamal.  The padding message must not contain zero bytes, so any
      zeros are replaced with random bytes.  However, if pg_strong_random()
      failed, the message could still be treated as correctly formed for
      encryption even though zero bytes remained in it.
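
      A sketch of the corrected behavior (pg_strong_random() is PostgreSQL's
      strong-random primitive; the surrounding function is illustrative, not
      the actual pgcrypto code):

          #include <stdbool.h>
          #include <stddef.h>

          /* PostgreSQL's strong-random primitive: fills buf with len random
           * bytes, returning false on failure. */
          extern bool pg_strong_random(void *buf, size_t len);

          /* Fill a padding block with non-zero random bytes, treating any
           * failure of the random source as a hard error instead of silently
           * leaving zero bytes behind. */
          static bool
          fill_nonzero_padding(unsigned char *buf, size_t len)
          {
              if (!pg_strong_random(buf, len))
                  return false;

              for (size_t i = 0; i < len; i++)
              {
                  while (buf[i] == 0)
                  {
                      if (!pg_strong_random(&buf[i], 1))
                          return false;   /* caller must abort encryption */
                  }
              }
              return true;
          }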
      
      Author: Tom Lane
      Reviewed-by: Michael Paquier
      Discussion: https://postgr.es/m/20186.1546188423@sss.pgh.pa.us
      Backpatch-through: 10
      d880b208
  8. 31 Dec, 2018 7 commits
    • Michael Paquier's avatar
      Improve comments and logs in do_pg_stop/start_backup · 8d3b389e
      Michael Paquier authored
      The function name pg_stop_backup() has been included for ages in some
      log messages when stopping the backup, which is confusing for base
      backups taken with the replication protocol because this function is
      never called.  Some other comments and messages in this area are
      improved while at it.
      
      The new wording is based on input and suggestions from several people,
      all listed below.
      
      Author: Michael Paquier
      Reviewed-by: Peter Eisentraut, Álvaro Herrera, Tom Lane
      Discussion: https://postgr.es/m/20181221040510.GA12599@paquier.xyz
      8d3b389e
    • Noah Misch's avatar
      Process EXTRA_INSTALL serially, during the first temp-install. · aa019da5
      Noah Misch authored
      This closes a race condition in "make -j check-world"; the symptom was
      EEXIST errors.  Back-patch to v10, before which parallel check-world had
      worse problems.
      
      Discussion: https://postgr.es/m/20181224221601.GA3227827@rfd.leadboat.com
      aa019da5
    • Noah Misch's avatar
      Send EXTRA_INSTALL errors to install.log, not stderr. · 76f7b0b0
      Noah Misch authored
      We already redirected other temp-install stderr and all temp-install
      stdout in this way.  Back-patch to v10, like the next commit.
      
      Discussion: https://postgr.es/m/20181224221601.GA3227827@rfd.leadboat.com
      76f7b0b0
    • Noah Misch's avatar
      pg_regress: Promptly detect failed postmaster startup. · 94600dd4
      Noah Misch authored
      Detect it the way pg_ctl's wait_for_postmaster() does.  When pg_regress
      spawned a postmaster that failed startup, we were detecting that only
      with "pg_regress: postmaster did not respond within 60 seconds".
      Back-patch to 9.4 (all supported versions).
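
      One plausible way to detect that promptly, sketched under the assumption
      that pg_regress knows the spawned postmaster's PID (hypothetical code,
      not the actual implementation):

          #include <stdbool.h>
          #include <sys/types.h>
          #include <sys/wait.h>

          /* Between connection attempts, check whether the postmaster child
           * has already exited instead of waiting out the full timeout. */
          static bool
          postmaster_has_exited(pid_t postmaster_pid)
          {
              int     status;

              return waitpid(postmaster_pid, &status, WNOHANG) == postmaster_pid;
          }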
      
      Reviewed by Tom Lane.
      
      Discussion: https://postgr.es/m/20181231172922.GA199150@gust.leadboat.com
      94600dd4
    • Tom Lane's avatar
      Update leakproofness markings on some btree comparison functions. · d01e75d6
      Tom Lane authored
      Mark pg_lsn and oidvector comparison functions as leakproof.  Per
      discussion, these clearly are leakproof so we might as well mark them so.
      
      On the other hand, remove leakproof markings from name comparison
      functions other than equal/not-equal.  Now that these depend on
      varstr_cmp, they can't be considered leakproof if text comparison isn't.
      (This was my error in commit 586b98fd.)
      
      While at it, add some opr_sanity queries to catch cases where related
      functions do not have the same volatility and leakproof markings.
      This would clearly be bogus for commutator or negator pairs.  In the
      domain of btree comparison functions, we do have some exceptions,
      because text equality is leakproof but inequality comparisons are not.
      That's odd on first glance but is reasonable (for now anyway) given
      the much greater complexity of the inequality code paths.
      
      Discussion: https://postgr.es/m/20181231172551.GA206480@gust.leadboat.com
      d01e75d6
    • Alvaro Herrera's avatar
      Remove some useless code · e439c6f0
      Alvaro Herrera authored
      In commit 8b08f7d4 I added member relationId to IndexStmt struct.
      I'm now not sure why; DefineIndex doesn't need it, since the relation
      OID is passed as a separate argument anyway.  Remove it.
      
      Also remove a redundant assignment to the relationId argument (it wasn't
      redundant when added by commit e093dcdd, but should have been removed
      in commit 5f173040), and use relationId instead of stmt->relation when
      locking the relation in the second phase of CREATE INDEX CONCURRENTLY,
      which is not only confusing but also means we resolve the name twice
      for no reason.
      e439c6f0
    • Tom Lane's avatar
      Fix oversight in commit b5415e3c. · b2edbbd0
      Tom Lane authored
      While rearranging code in tidpath.c, I overlooked the fact that we ought
      to check restriction_is_securely_promotable when trying to use a join
      clause as a TID qual.  Since tideq itself is leakproof, this wouldn't
      really allow any interesting leak AFAICT, but it still seems like we
      had better check it.
      
      For consistency with the corresponding logic in indxpath.c, also
      check rinfo->pseudoconstant.  I'm not sure right now that it's
      possible for that to be set in a join clause, but if it were,
      a match couldn't be made anyway.
      b2edbbd0
  9. 30 Dec, 2018 5 commits
    • Peter Eisentraut's avatar
      Change "checkpoint starting" message to use "wal" · 60d99797
      Peter Eisentraut authored
      This catches up with the recent renaming of all user-facing mentions
      of "xlog" to "wal".
      
      Discussion: https://www.postgresql.org/message-id/flat/20181129084708.GA9562%40msg.credativ.de
      60d99797
    • Tom Lane's avatar
      Add a hash opclass for type "tid". · 0a6ea400
      Tom Lane authored
      Up to now we've not worried much about joins where the join key is a
      relation's CTID column, reasoning that storing a table's CTIDs in some
      other table would be pretty useless.  However, there are use-cases for
      this sort of query involving self-joins, so that argument doesn't really
      hold water.
      
      With larger relations, a merge or hash join is desirable.  We had a btree
      opclass for type "tid", allowing merge joins on CTID, but no hash opclass
      so that hash joins weren't possible.  Add the missing infrastructure.
      
      This also potentially enables hash aggregation on "tid", though the
      use-cases for that aren't too clear.
      
      Discussion: https://postgr.es/m/1853.1545453106@sss.pgh.pa.us
      0a6ea400
    • Tom Lane's avatar
      Support parameterized TidPaths. · b5415e3c
      Tom Lane authored
      Up to now we've not worried much about joins where the join key is a
      relation's CTID column, reasoning that storing a table's CTIDs in some
      other table would be pretty useless.  However, there are use-cases for
      this sort of query involving self-joins, so that argument doesn't really
      hold water.
      
      This patch allows generating plans for joins on CTID that use a nestloop
      with inner TidScan, similar to what we might do with an index on the join
      column.  This is the most efficient way to join when the outer side of
      the nestloop is expected to yield relatively few rows.
      
      This change requires upgrading tidpath.c and the generated TidPaths
      to work with RestrictInfos instead of bare qual clauses, but that's
      long-postponed technical debt anyway.
      
      Discussion: https://postgr.es/m/17443.1545435266@sss.pgh.pa.us
      b5415e3c
    • Tom Lane's avatar
      Teach eval_const_expressions to constant-fold LEAST/GREATEST expressions. · 6f19a8c4
      Tom Lane authored
      Doing this requires an assumption that the invoked btree comparison
      function is immutable.  We could check that explicitly, but in other
      places such as contain_mutable_functions we just assume that it's true,
      so we may as well do likewise here.  (If the comparison function's
      behavior isn't immutable, the sort order in indexes built with it would
      be unstable, so it seems certainly wrong for it not to be so.)
      
      Vik Fearing
      
      Discussion: https://postgr.es/m/c6e8504c-4c43-35fa-6c8f-3c0b80a912cc@2ndquadrant.com
      6f19a8c4
    • Michael Paquier's avatar
      Trigger stmt_beg and stmt_end for top-level statement blocks of PL/pgSQL · e0ef136d
      Michael Paquier authored
      PL/pgSQL provides a set of callbacks, invoked at function setup, begin,
      and end, as well as at statement begin and end, which can be used for
      extra instrumentation of functions written in this language.  When
      calling a routine, a trigger, or an event trigger, the statement
      callbacks were not being invoked for the top-level statement block,
      leading to inconsistent handling compared with other statements.  This
      inconsistency can complicate extensions doing instrumentation work on
      top of PL/pgSQL, so this commit makes sure that all statement blocks,
      including the top-level one, go through the corresponding callbacks.
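
      For illustration, an instrumentation extension hooks such callbacks
      roughly as in the simplified sketch below (the struct is a stand-in
      for PL/pgSQL's plugin interface, not a copy of it):

          #include <stdio.h>

          typedef struct PlpgsqlStmt PlpgsqlStmt;   /* opaque statement handle */

          typedef struct InstrumentationPlugin
          {
              void    (*stmt_beg) (PlpgsqlStmt *stmt);
              void    (*stmt_end) (PlpgsqlStmt *stmt);
          } InstrumentationPlugin;

          static int  statements_entered = 0;

          static void
          count_stmt_beg(PlpgsqlStmt *stmt)
          {
              (void) stmt;
              /* With this commit, the top-level block reaches here too. */
              statements_entered++;
          }

          static void
          count_stmt_end(PlpgsqlStmt *stmt)
          {
              (void) stmt;
              printf("statements entered so far: %d\n", statements_entered);
          }

          static InstrumentationPlugin counter_plugin = {
              count_stmt_beg,
              count_stmt_end
          };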
      
      Author: Pavel Stehule
      Reviewed-by: Michael Paquier
      Discussion: https://postgr.es/m/CAFj8pRArEANsaUjo5in9_iQt0vKf9ecwDAmsdN_EBwL13ps12A@mail.gmail.com
      e0ef136d
  10. 29 Dec, 2018 4 commits
    • Tom Lane's avatar
      Use pg_strong_random() to select each server process's random seed. · 4203842a
      Tom Lane authored
      Previously we just set the seed based on process ID and start timestamp.
      Both those values are directly available within the session, and can
      be found out or guessed by other users too, making the session's series
      of random(3) values fairly predictable.  Up to now, our backend-internal
      uses of random(3) haven't seemed security-critical, but commit 88bdbd3f
      added one that potentially is: when using log_statement_sample_rate, a
      user might be able to predict which of his SQL statements will get logged.
      
      To improve this situation, upgrade the per-process seed initialization
      method to use pg_strong_random() if available, greatly reducing the
      predictability of the initial seed value.  This adds a few tens of
      microseconds to process start time, but since backend startup time is
      at least a couple of milliseconds, that seems an acceptable price.
      
      This means that pg_strong_random() needs to be able to run without
      reliance on any backend infrastructure, since it will be invoked
      before any of that is up.  It was safe for that already, but adjust
      comments and #include commands to make it clearer.
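
      A sketch of that seeding strategy (illustrative only; pg_strong_random()
      is the real API, while the PID-and-timestamp mix stands in for the
      previous fallback behavior):

          #include <stdbool.h>
          #include <stddef.h>
          #include <time.h>
          #include <unistd.h>

          extern bool pg_strong_random(void *buf, size_t len);

          /* Prefer a strong random seed; fall back to the old-style mix of
           * process ID and start time only if the strong source fails. */
          static unsigned int
          choose_process_seed(void)
          {
              unsigned int seed;

              if (!pg_strong_random(&seed, sizeof(seed)))
                  seed = (unsigned int) getpid() ^ (unsigned int) time(NULL);
              return seed;
          }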
      
      Discussion: https://postgr.es/m/3859.1545849900@sss.pgh.pa.us
      4203842a
    • Tom Lane's avatar
      Use a separate random seed for SQL random()/setseed() functions. · 6645ad6b
      Tom Lane authored
      Previously, the SQL random() function depended on libc's random(3),
      and setseed() invoked srandom(3).  This results in interference between
      these functions and backend-internal uses of random(3).  We'd never paid
      too much mind to that, but in the wake of commit 88bdbd3f which added
      log_statement_sample_rate, the interference arguably has a security
      consequence: if log_statement_sample_rate is active then an unprivileged
      user could probably control which if any of his SQL commands get logged,
      by issuing setseed() at the right times.  That seems bad.
      
      To fix this reliably, we need random() and setseed() to use their own
      private random state variable.  Standard random(3) isn't amenable to such
      usage, so let's switch to pg_erand48().  It's hard to say whether that's
      more or less "random" than any particular platform's version of random(3),
      but it does have a wider seed value and a longer period than are required
      by POSIX, so we can hope that this isn't a big downgrade.  Also, we should
      now have uniform behavior of random() across platforms, which is worth
      something.
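
      A sketch of what such a private state looks like (pg_erand48() is
      PostgreSQL's 48-bit generator; the wrapper functions are simplified
      stand-ins for the SQL-callable random() and setseed()):

          #include <stdint.h>

          extern double pg_erand48(unsigned short xseed[3]);

          /* Private state for SQL random()/setseed(), untouched by any
           * internal uses of random(3) elsewhere in the backend. */
          static unsigned short drandom_seed[3];

          static double
          sql_random(void)
          {
              return pg_erand48(drandom_seed);    /* uniform in [0, 1) */
          }

          static void
          sql_setseed(double seed)                /* expected in [-1, 1] */
          {
              uint64_t iseed = (uint64_t) (int64_t) (seed * (double) UINT64_C(0x7FFFFFFFFFFF));

              drandom_seed[0] = (unsigned short) iseed;
              drandom_seed[1] = (unsigned short) (iseed >> 16);
              drandom_seed[2] = (unsigned short) (iseed >> 32);
          }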
      
      While at it, upgrade the per-process seed initialization method to use
      pg_strong_random() if available, greatly reducing the predictability
      of the initial seed value.  (I'll separately do something similar for
      the internal uses of random().)
      
      In addition to forestalling the possible security problem, this has a
      benefit in the other direction, which is that we can now document
      setseed() as guaranteeing a reproducible sequence of random() values.
      Previously, because of the possibility of internal calls of random(3),
      we could not promise any such thing.
      
      Discussion: https://postgr.es/m/3859.1545849900@sss.pgh.pa.us
      6645ad6b
    • Peter Eisentraut's avatar
      1a4eba4e
    • Peter Eisentraut's avatar
      Remove redundant translation markers · e3299d36
      Peter Eisentraut authored
      psql_error() already handles that itself.
      e3299d36
  11. 28 Dec, 2018 6 commits
    • Michael Paquier's avatar
      Improve description of DEFAULT_XLOG_SEG_SIZE in pg_config.h · 0a5a493f
      Michael Paquier authored
      This was incorrectly referring to --walsegsize, and its description
      has been rewritten more clearly.
      
      Author: Ian Barwick, Tom Lane
      Reviewed-by: Álvaro Herrera, Michael Paquier
      Discussion: https://postgr.es/m/08534fc6-119a-c498-254e-d5acc4e6bf85@2ndquadrant.com
      0a5a493f
    • Tom Lane's avatar
      Marginal performance hacking in erand48.c. · 6b9bba2d
      Tom Lane authored
      Get rid of the multiplier and addend variables in favor of hard-wired
      constants.  Do the multiply-and-add using uint64 arithmetic, rather
      than manually combining several narrower multiplications and additions.
      Make _dorand48 return the full-width new random value, and have its
      callers use that directly (after suitable masking) rather than
      reconstructing what they need from the unsigned short[] representation.
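
      In essence, the state advance becomes a single 64-bit multiply-and-add,
      roughly as sketched below (names are illustrative; the constants are the
      standard rand48 multiplier and addend):

          #include <stdint.h>

          /* Advance the 48-bit LCG state in one uint64 multiply-and-add
           * instead of piecing together several 16-bit multiplications. */
          static uint64_t
          advance_rand48(unsigned short xseed[3])
          {
              uint64_t state = (uint64_t) xseed[0] |
                               ((uint64_t) xseed[1] << 16) |
                               ((uint64_t) xseed[2] << 32);

              state = (state * UINT64_C(0x5DEECE66D) + UINT64_C(0xB)) &
                      ((UINT64_C(1) << 48) - 1);

              xseed[0] = (unsigned short) state;
              xseed[1] = (unsigned short) (state >> 16);
              xseed[2] = (unsigned short) (state >> 32);

              return state;   /* full 48-bit value; callers mask as needed */
          }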
      
      On my machine, this is good for a nearly factor-of-2 speedup of
      pg_erand48(), probably mostly from needing just one call of ldexp()
      rather than three.  The wins for the other functions are smaller
      but measurable.  While none of the existing call sites are really
      performance-critical, a cycle saved is a cycle earned; and besides
      the machine code is smaller this way (at least on x86_64).
      
      Patch by me, but the original idea to optimize this by switching
      to int64 arithmetic is from Fabien Coelho.
      
      Discussion: https://postgr.es/m/1551.1546018192@sss.pgh.pa.us
      6b9bba2d
    • Tom Lane's avatar
      Fix latent problem with pg_jrand48(). · e0904664
      Tom Lane authored
      POSIX specifies that jrand48() returns a signed 32-bit value (in the
      range [-2^31, 2^31)), but our code was returning an unsigned 32-bit
      value (in the range [0, 2^32)).  This doesn't actually matter to any
      existing call site, because they all cast the "long" result to int32
      or uint32; but it will doubtless bite somebody in the future.
      To fix, cast the arithmetic result to int32 explicitly before the
      compiler widens it to long (if widening is needed).
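
      Roughly, given the 48-bit state produced by such an advance step, the
      POSIX-conformant result is the signed high 32 bits (illustrative sketch):

          #include <stdint.h>

          /* Cast to int32_t first, so the value is sign-extended rather than
           * zero-extended when widened to long. */
          static long
          jrand48_result(uint64_t state48)
          {
              return (long) (int32_t) (state48 >> 16);
          }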
      
      While at it, upgrade this file's far-short-of-project-style comments.
      Had there been some peer pressure to document pg_jrand48() properly,
      maybe this thinko wouldn't have gotten committed to begin with.
      
      Backpatch to v10 where pg_jrand48() was added, just in case somebody
      back-patches a fix that uses it and depends on the standard behavior.
      
      Discussion: https://postgr.es/m/17235.1545951602@sss.pgh.pa.us
      e0904664
    • Alvaro Herrera's avatar
      Fix thinko in previous commit · 4ed6c071
      Alvaro Herrera authored
      4ed6c071
    • Alvaro Herrera's avatar
      Rewrite ExecPartitionCheckEmitError for clarity · e8b0e6b8
      Alvaro Herrera authored
      The original was hard to follow and failed to comply with the DRY principle.
      
      Discussion: https://postgr.es/m/20181206222221.g5witbsklvqthjll@alvherre.pgsql
      e8b0e6b8
    • Michael Paquier's avatar
      Clarify referential actions in docs of CREATE/ALTER TABLE · f7ea1a42
      Michael Paquier authored
      The documentation of ON DELETE and ON UPDATE uses the term "action",
      which is also used on the ALTER TABLE page for other purposes.  This
      commit renames the term to "referential_action", which is more
      consistent with the SQL specification.  The new term is now used on the
      documentation of both CREATE TABLE and ALTER TABLE for consistency.
      
      Reported-by: Brigitte Blanc-Lafay
      Author: Lætitia Avrot
      Reviewed-by: Álvaro Herrera
      Discussion: https://postgr.es/m/CAB_COdiHEVVs0uB+uYCjjYUwQ4YFFekppq+Xqv6qAM8+cd42gA@mail.gmail.com
      f7ea1a42