1. 02 Apr, 2020 12 commits
    • doc: remove comma, related to commit 92d31085 · c713dc2f
      Bruce Momjian authored
      Reported-by: Peter Eisentraut
      
      Discussion: https://postgr.es/m/750b8832-d123-7f9b-931e-43ce8321b2d7@2ndquadrant.com
      
      Backpatch-through: 9.5
      c713dc2f
    • Improve user control over truncation of logged bind-parameter values. · 0b34e7d3
      Tom Lane authored
      This patch replaces the boolean GUC log_parameters_on_error introduced
      by commit ba79cb5d with an integer log_parameter_max_length_on_error,
      adding the ability to specify how many bytes to trim each logged
      parameter value to.  (The previous coding hard-wired that choice at
      64 bytes.)
      
      In addition, add a new parameter log_parameter_max_length that provides
      similar control over truncation of query parameters that are logged in
      response to statement-logging options, as opposed to errors.  Previous
      releases always logged such parameters in full, possibly causing log
      bloat.
      
      For backwards compatibility with prior releases,
      log_parameter_max_length defaults to -1 (log in full), while
      log_parameter_max_length_on_error defaults to 0 (no logging).
      
      Per discussion, log_parameter_max_length is SUSET since the DBA should
      control routine logging behavior, but log_parameter_max_length_on_error
      is USERSET because it also affects errcontext data sent back to the
      client.
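      As a sketch of how the two new settings interact (GUC names are from this
      commit; the values are illustrative, in bytes):

      ```sql
      -- Trim each bind parameter logged with an error message to 1024 bytes;
      -- 0 (the default) disables this logging, -1 logs values in full.
      -- USERSET, since it also affects errcontext data sent to the client.
      SET log_parameter_max_length_on_error = 1024;

      -- Cap parameters written by statement logging at 100 bytes;
      -- SUSET, so this requires superuser (the default of -1 logs in full).
      ALTER SYSTEM SET log_parameter_max_length = 100;
      SELECT pg_reload_conf();
      ```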
      
      Alexey Bashtanov, editorialized a little by me
      
      Discussion: https://postgr.es/m/b10493cc-a399-a03a-67c7-068f2791ee50@imap.cc
      0b34e7d3
    • Attempt to stabilize partitionwise_aggregate test · cefb82d4
      David Rowley authored
      In b07642db, we added code to trigger autovacuums based on the number of
      INSERTs into a table. This seems to have caused some destabilization of
      the regression tests, likely due to an autovacuum triggering mid-test:
      per a theory from Tom Lane, one of the test's queries causes autovacuum
      to skip some number of pages, resulting in the reltuples estimate
      changing.
      
      The failure this is attempting to fix concerns the order of subnodes in
      an Append. Since the planner orders these according to subnode cost,
      it's possible that a small change in the reltuples value changes a
      subnode's cost enough that it swaps position with one of its fellow
      subnodes.
      
      The failure here seems to occur only on slower buildfarm machines. In
      this case, lousyjack appears to have taken over 8 minutes to run just
      the partitionwise_aggregate test. Such a slow run increases the chance
      that the autovacuum launcher triggers a vacuum mid-test. Faster machines
      run this test in sub-second time, so they have a much smaller window for
      an autovacuum to trigger.
      
      Here we fix this by disabling autovacuum on all tables created in the test.
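      The fix amounts to a storage parameter on each table the test creates; a
      minimal sketch (hypothetical table name):

      ```sql
      -- Prevent autovacuum from visiting this table mid-test, keeping the
      -- reltuples estimate (and hence the Append subnode costs) stable.
      CREATE TABLE tab_1 (a int, b int) WITH (autovacuum_enabled = off);

      -- For a table that already exists:
      ALTER TABLE tab_1 SET (autovacuum_enabled = off);
      ```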
      
      Additionally, this reverts the change made in the
      partitionwise_aggregate test in 2dc16efe.
      
      Discussion: https://postgr.es/m/22297.1585797192@sss.pgh.pa.us
      cefb82d4
    • Add SQL functions for Unicode normalization · 2991ac5f
      Peter Eisentraut authored
      This adds SQL expressions NORMALIZE() and IS NORMALIZED to convert and
      check Unicode normal forms, per SQL standard.
      
      To support fast IS NORMALIZED tests, we pull in a new data file
      DerivedNormalizationProps.txt from Unicode and build a lookup table
      from that, using techniques similar to ones already used for other
      Unicode data.  make update-unicode will keep it up to date.  We only
      build and use these tables for the NFC and NFKC forms, because they
      are too big for NFD and NFKD and the improvement is not significant
      enough there.
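      A quick illustration of the new expressions, using U+0061 U+0301 (a
      decomposed "á"):

      ```sql
      SELECT normalize(U&'\0061\0301', NFC);          -- single code point U+00E1
      SELECT U&'\0061\0301' IS NFC NORMALIZED;        -- false
      SELECT normalize(U&'\0061\0301') IS NORMALIZED; -- true; NFC is the default form
      ```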
      Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
      Reviewed-by: Andreas Karlsson <andreas@proxel.se>
      Discussion: https://www.postgresql.org/message-id/flat/c1909f27-c269-2ed9-12f8-3ab72c8caf7a@2ndquadrant.com
      2991ac5f
    • Fix whitespace · 070c3d39
      Peter Eisentraut authored
      070c3d39
    • doc: Update for Unix-domain sockets on Windows · 580a446c
      Peter Eisentraut authored
      Update the documentation to reflect that Unix-domain sockets are now
      usable on Windows.
      580a446c
    • Add some comments to some SQL features · c6e0edad
      Peter Eisentraut authored
      Otherwise, it could be confusing to a reader that some of these
      well-publicized features are simply listed as unsupported without
      further explanation.
      c6e0edad
    • Add maintenance_io_concurrency to postgresql.conf.sample. · 37b3794d
      Thomas Munro authored
      New GUC from commit fc34b0d9.
      37b3794d
    • Allow parallel vacuum to accumulate buffer usage. · 3a5e2213
      Amit Kapila authored
      Commit 40d964ec allowed the VACUUM command to process indexes in
      parallel but forgot to accumulate the buffer usage stats of parallel
      workers.  This commit allows the leader backend to accumulate the buffer
      usage stats of all the parallel workers.
      
      Reported-by: Julien Rouhaud
      Author: Sawada Masahiko
      Reviewed-by: Dilip Kumar, Amit Kapila and Julien Rouhaud
      Discussion: https://postgr.es/m/20200328151721.GB12854@nol
      3a5e2213
    • Allow pg_stat_statements to track planning statistics. · 17e03282
      Fujii Masao authored
      This commit makes pg_stat_statements support the new GUC
      pg_stat_statements.track_planning. If this option is enabled,
      pg_stat_statements tracks the planning statistics of statements,
      e.g., the number of times a statement was planned, the total time
      spent planning it, etc. This feature is useful for identifying
      statements that take a long time to plan. Previously,
      pg_stat_statements tracked only execution statistics, so it could not
      be used for that purpose.
      
      The planning and execution statistics are stored separately at the end
      of each phase, so there is not always a one-to-one relationship between
      them. For example, if a statement is successfully planned but fails in
      the execution phase, only its planning statistics are stored. As a
      result, users may see different pg_stat_statements results than in
      previous versions. To avoid this, pg_stat_statements.track_planning
      can be disabled.
      
      This commit bumps the version of pg_stat_statements to 1.8
      since it changes the definition of pg_stat_statements function.
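      After enabling the new GUC, plan-heavy statements can be spotted with a
      query along these lines (column names per pg_stat_statements 1.8):

      ```sql
      -- requires pg_stat_statements in shared_preload_libraries
      ALTER SYSTEM SET pg_stat_statements.track_planning = on;
      SELECT pg_reload_conf();

      SELECT query, plans, total_plan_time, calls, total_exec_time
      FROM pg_stat_statements
      ORDER BY total_plan_time DESC
      LIMIT 5;
      ```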
      
      Author: Julien Rouhaud, Pascal Legrand, Thomas Munro, Fujii Masao
      Reviewed-by: Sergei Kornilov, Tomas Vondra, Yoshikazu Imai, Haribabu Kommi, Tom Lane
      Discussion: https://postgr.es/m/CAHGQGwFx_=DO-Gu-MfPW3VQ4qC7TfVdH2zHmvZfrGv6fQ3D-Tw@mail.gmail.com
      Discussion: https://postgr.es/m/CAEepm=0e59Y_6Q_YXYCTHZkqOc6H2pJ54C_Xe=VFu50Aqqp_sA@mail.gmail.com
      Discussion: https://postgr.es/m/DB6PR0301MB21352F6210E3B11934B0DCC790B00@DB6PR0301MB2135.eurprd03.prod.outlook.com
      17e03282
    • Collect statistics about SLRU caches · 28cac71b
      Tomas Vondra authored
      There are a number of SLRU caches used to access important data such as
      clog, commit timestamps, multixacts, asynchronous notifications, etc.
      Until now we had no easy way to monitor these shared caches or compute
      hit ratios, read/write counts, and so on.
      
      This commit extends the statistics collector to track this information
      for a predefined list of SLRUs, and also introduces a new system view
      pg_stat_slru displaying the data.
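      For example, hit ratios can be computed from the new view like so:

      ```sql
      SELECT name, blks_hit, blks_read,
             round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3)
               AS hit_ratio
      FROM pg_stat_slru;
      ```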
      
      The list of built-in SLRUs is fixed, but additional SLRUs may be defined
      in extensions. Unfortunately, there's no suitable registry of SLRUs, so
      this patch simply defines a fixed list of SLRUs with entries for the
      built-in ones and one entry for all additional SLRUs. Extensions adding
      their own SLRU are fairly rare, so this seems acceptable.
      
      This patch only allows monitoring of SLRUs, not tuning. The SLRU sizes
      are still fixed (hard-coded in the code) and it's not entirely clear
      which of the SLRUs might need a GUC to tune size. In a way, allowing us
      to determine that is one of the goals of this patch.
      
      Bump catversion as the patch introduces new functions and system view.
      
      Author: Tomas Vondra
      Reviewed-by: Alvaro Herrera
      Discussion: https://www.postgresql.org/message-id/flat/20200119143707.gyinppnigokesjok@development
      28cac71b
  2. 01 Apr, 2020 13 commits
    • Clean up parsing of ltree and lquery some more. · 17ca0679
      Tom Lane authored
      Fix lquery parsing to handle repeated flag characters correctly, to
      enforce the max label length in some cases where it previously did not,
      and to detect empty labels in some cases where it previously did not.

      In a more cosmetic vein, use a switch rather than if-then chains to
      handle the different states, avoid unnecessary checks on charlen when
      looking for ASCII characters, and factor out multiple copies of the
      label length checking code.
      
      Tom Lane and Dmitry Belyavsky
      
      Discussion: https://postgr.es/m/CADqLbzLVkBuPX0812o+z=c3i6honszsZZ6VQOSKR3VPbB56P3w@mail.gmail.com
      17ca0679
    • Add support for binary I/O of ltree, lquery, and ltxtquery types. · 949a9f04
      Tom Lane authored
      Not much to say here --- does what it says on the tin.  The "binary"
      representation in each case is really just the same as the text format,
      though we prefix a version-number byte in case anyone ever feels
      motivated to change that.  Thus, there's not any expectation of improved
      speed or reduced space; the point here is just to allow clients to use
      binary format for all columns of a query result or COPY data.
      
      This makes use of the recently added ALTER TYPE support to add binary
      I/O functions to an existing data type.  As in commit a8081860,
      we can piggy-back on there already being a new-for-v13 version of the
      ltree extension, so we don't need a new update script file.
      
      Nino Floris, reviewed by Alexander Korotkov and myself
      
      Discussion: https://postgr.es/m/CANmj9Vxx50jOo1L7iSRxd142NyTz6Bdcgg7u9P3Z8o0=HGkYyQ@mail.gmail.com
      949a9f04
    • Check equality semantics for unique indexes on partitioned tables. · 501b0187
      Tom Lane authored
      We require the partition key to be a subset of the set of columns
      being made unique, so that physically-separate indexes on the different
      partitions are sufficient to enforce the uniqueness constraint.
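      The rule being checked, in sketch form:

      ```sql
      CREATE TABLE parted (id int, val text) PARTITION BY RANGE (id);

      -- OK: the partition key column "id" is part of the unique column set,
      -- so each partition's local index can enforce global uniqueness.
      CREATE UNIQUE INDEX ON parted (id, val);

      -- Rejected: "val" alone cannot be proven unique across partitions.
      -- CREATE UNIQUE INDEX ON parted (val);   -- ERROR
      ```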
      
      The existing code checked that the listed columns appear, but did not
      inquire into the index semantics, which is a serious oversight given
      that different index opclasses might enforce completely different
      notions of uniqueness.
      
      Ideally, perhaps, we'd just match the partition key opfamily to the
      index opfamily.  But hash partitioning uses hash opfamilies which we
      can't directly match to btree opfamilies.  Hence, look up the equality
      operator in each family, and accept if it's the same operator.  This
      should be okay in a fairly general sense, since the equality operator
      ought to precisely represent the opfamily's notion of uniqueness.
      
      A remaining weak spot is that we don't have a cross-index-AM notion of
      which opfamily member is "equality".  But we know which one to use for
      hash and btree AMs, and those are the only two that are relevant here
      at present.  (Any non-core AMs that know how to enforce equality are
      out of luck, for now.)
      
      Back-patch to v11 where this feature was introduced.
      
      Guancheng Luo, revised a bit by me
      
      Discussion: https://postgr.es/m/D9C3CEF7-04E8-47A1-8300-CA1DCD5ED40D@gmail.com
      501b0187
    • Improve selectivity estimation for assorted match-style operators. · a8081860
      Tom Lane authored
      Quite a few matching operators such as JSONB's @> used "contsel" and
      "contjoinsel" as their selectivity estimators.  That was a bad idea,
      because (a) contsel is only a stub, yielding a fixed default estimate,
      and (b) that default is 0.001, meaning we estimate these operators as
      five times more selective than equality, which is surely pretty silly.
      
      There's a good model for improving this in ltree's ltreeparentsel():
      for any "var OP constant" query, we can try applying the operator
      to all of the column's MCV and histogram values, taking the latter
      as being a random sample of the non-MCV values.  That code is
      actually 100% generic, except for the question of exactly what
      default selectivity ought to be plugged in when we don't have stats.
      
      Hence, migrate the guts of ltreeparentsel() into the core code, provide
      wrappers "matchingsel" and "matchingjoinsel" with a more-appropriate
      default estimate, and use those for the non-geometric operators that
      formerly used contsel (mostly JSONB containment operators and tsquery
      matching).
      
      Also apply this code to some match-like operators in hstore, ltree, and
      pg_trgm, including the former users of ltreeparentsel as well as ones
      that improperly used contsel.  Since commit 911e7020 just created new
      versions of those extensions that we haven't released yet, we can sneak
      this change into those new versions instead of having to create an
      additional generation of update scripts.
      
      Patch by me, reviewed by Alexey Bashtanov
      
      Discussion: https://postgr.es/m/12237.1582833074@sss.pgh.pa.us
      a8081860
    • Refactor code to look up local replication tuple · d8653f46
      Peter Eisentraut authored
      This unifies some duplicate code.
      
      Author: Amit Langote <amitlangote09@gmail.com>
      Discussion: https://www.postgresql.org/message-id/CA+HiwqFjYE5anArxvkjr37AQMd52L-LZtz9Ld2QrLQ3YfcYhTw@mail.gmail.com
      d8653f46
    • Update SQL features count · 36962349
      Peter Eisentraut authored
      The previously listed total of 179 does not appear to be correct for
      SQL:2016 anymore.  (Previous SQL versions had slightly different
      feature sets, so it's plausible that it was once correct.)  The
      currently correct count is the number of rows in the respective tables
      in appendix F in SQL parts 2 and 11, minus 2 features that are listed
      twice.  Thus the correct count is currently 177.  This also matches
      the number of Core entries the built documentation currently shows, so
      it's internally consistent.
      36962349
    • 4d276ba9
    • Correct CREATE INDEX documentation for opclass parameters · 3eabc623
      Alexander Korotkov authored
      Old versions of the opclass parameters patch supported the ability to
      specify DEFAULT as the opclass name in the CREATE INDEX command.  This
      ability was removed in the final version, but 911e7020 still mentions
      it in the documentation.
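      In the final syntax, parameters follow an explicitly named opclass; for
      example, with pg_trgm's GiST opclass (table and column hypothetical):

      ```sql
      CREATE INDEX t_trgm_idx ON t USING gist (txt gist_trgm_ops (siglen = 32));
      ```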
      3eabc623
    • Fix crash in psql when attempting to reuse old connection · 8d84dd00
      Michael Paquier authored
      In a psql session, if the connection to the server is abruptly cut, the
      referenced connection becomes NULL as of CheckConnection().  This could
      cause a hard crash in psql when attempting to connect by reusing the
      past connection's data, because of a null-pointer dereference in either
      PQhost() or PQdb().  The issue is fixed by making sure that the past
      connection is not reused if it no longer exists.

      The issue was introduced by 6e5f8d48, so backpatch down to 12.
      
      Reported-by: Hugh Wang
      Author: Michael Paquier
      Reviewed-by: Álvaro Herrera, Tom Lane
      Discussion: https://postgr.es/m/16330-b34835d83619e25d@postgresql.org
      Backpatch-through: 12
      8d84dd00
    • Fix coverity complaint about commit 40d964ec. · 2401d937
      Amit Kapila authored
      Coverity complained that dividing integer expressions and then
      converting the integer quotient to type "double" would lose the
      fractional part.  Casting one of the arguments of the expression to
      double should fix the report.
      
      Author: Mahendra Singh Thalor
      Reviewed-by: Amit Kapila
      Discussion: https://postgr.es/m/20200329224818.6phnhv7o2q2rfovf@alap3.anarazel.de
      2401d937
    • psql: do file completion for \gx · 08481eed
      Bruce Momjian authored
      This was missed when the feature was added.
      
      Reported-by: Vik Fearing
      
      Discussion: https://postgr.es/m/eca20529-0b06-b493-ee38-f071a75dcd5b@postgresfriends.org
      
      Backpatch-through: 10
      08481eed
    • Add -c/--restore-target-wal to pg_rewind · a7e8ece4
      Michael Paquier authored
      pg_rewind needs to copy, from the source cluster to the target cluster,
      the set of relation blocks changed between the previous checkpoint where
      WAL forked and the end of WAL on the target.  Building this list of
      relation blocks requires a range of WAL segments that may no longer be
      present in the target's pg_wal, causing pg_rewind to fail.  It is
      possible to work around this issue by manually copying the needed WAL
      segments, but that may involve extra and ultimately useless work.
      
      This commit introduces a new option allowing pg_rewind to use a
      restore_command while doing the rewind, grabbing the parameter value of
      restore_command from the target cluster's configuration.  This makes
      the rewind operation more reliable, since only the WAL segments needed
      by the rewind are restored from the archives.

      To make this possible, a new routine is added to src/common/ to allow
      frontend tools to restore files from archives using an already-built
      restore command.  This version is simpler than the backend equivalent,
      as there is no need to handle the non-recovery case.
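      With the new option, pg_rewind reads restore_command from the target
      cluster's configuration, i.e. a setting along these lines (archive path
      illustrative):

      ```sql
      -- On the target cluster, before running pg_rewind -c / --restore-target-wal:
      ALTER SYSTEM SET restore_command = 'cp /mnt/archive/%f "%p"';
      ```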
      
      Author: Alexey Kondratov
      Reviewed-by: Andrey Borodin, Andres Freund, Alvaro Herrera, Alexander
      Korotkov, Michael Paquier
      Discussion: https://postgr.es/m/a3acff50-5a0d-9a2c-b3b2-ee36168955c1@postgrespro.ru
      a7e8ece4
  3. 31 Mar, 2020 15 commits