1. 03 Apr, 2020 15 commits
    • pg_validatebackup: Use tempdir_short in TAP tests. · 21dc4884
      Robert Haas authored
      The buildfarm is very unhappy right now because TAP test
      003_corruption.pl uses TestLib::tempdir to generate the name of
      a temporary directory that is used as a tablespace name, and
      this results in a 'symbolic link target too long' error message
      on many of the buildfarm machines, but not on my machine.
      
      It appears that other people have run into similar problems in
      the past and that TestLib::tempdir_short was the solution, so
      let's try using that instead.
    • pg_validatebackup: Adjust TAP tests to placate perlcritic. · 87e30043
      Robert Haas authored
      It seems that we have a policy that every Perl subroutine should
      end with an explicit "return", so add explicit "return"
      statements to all the new subroutines added by my prior
      commit 0d8c9c12.
      
      Per buildfarm.
    • Generate backup manifests for base backups, and validate them. · 0d8c9c12
      Robert Haas authored
      A manifest is a JSON document which includes (1) the file name, size,
      last modification time, and an optional checksum for each file backed
      up, (2) timelines and LSNs for whatever WAL will need to be replayed
      to make the backup consistent, and (3) a checksum for the manifest
      itself. By default, we use CRC-32C when checksumming data files,
      because we are trying to detect corruption and user error, not foil an
      adversary. However, pg_basebackup and the server-side BASE_BACKUP
      command now have options to select a different algorithm, so users
      wanting a cryptographic hash function can select SHA-224, SHA-256,
      SHA-384, or SHA-512. Users not wanting file checksums at all can
      disable them, or disable generation of the backup manifest altogether.
      Using a cryptographic hash function in place of CRC-32C consumes
      significantly more CPU cycles, which may slow down backups in some
      cases.
      
      A new tool called pg_validatebackup can validate a backup against the
      manifest. If no checksums are present, it can still check that the
      right files exist and that they have the expected sizes. If checksums
      are present, it can also verify that each file has the expected
      checksum. Additionally, it calls pg_waldump to verify that the
      expected WAL files are present and parseable. Only plain format
      backups can be validated directly, but tar format backups can be
      validated after extracting them.
      
      Robert Haas, with help, ideas, review, and testing from David Steele,
      Stephen Frost, Andrew Dunstan, Rushabh Lathia, Suraj Kharage, Tushar
      Ahuja, Rajkumar Raghuwanshi, Mark Dilger, Davinder Singh, Jeevan
      Chalke, Amit Kapila, Andres Freund, and Noah Misch.
      
      Discussion: http://postgr.es/m/CA+TgmoZV8dw1H2bzZ9xkKwdrk8+XYa+DC9H=F7heO2zna5T6qg@mail.gmail.com
    • Include information on buffer usage during planning phase, in EXPLAIN output, take two. · ce77abe6
      Fujii Masao authored
      When the BUFFERS option is enabled, the EXPLAIN command includes
      information on buffer usage for each plan node in its output. In addition,
      this commit makes EXPLAIN also include information on buffer usage
      during the planning phase. This makes it easier to discern cases where
      a lot of buffer access happens during planning.
      
      This commit revives the original commit ed7a5095 that was reverted by
      commit 19db23bc. The original commit had to be reverted because it caused
      regression test failures on the buildfarm members prion and dory. But
      since commit c0885c4c got rid of the cause of the test failure, the
      original commit can be safely introduced again.
      
      Author: Julien Rouhaud, slightly revised by Fujii Masao
      Reviewed-by: Justin Pryzby
      Discussion: https://postgr.es/m/16109-26a1a88651e90608@postgresql.org
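      
      As an illustration (the table name is made up), a query like the
      following now also reports planning-phase buffer usage, shown under a
      "Planning:" group in the text output when any buffers were touched
      while planning:
      
        EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM some_table WHERE id = 42;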
    • Fix bugs in gin_fuzzy_search_limit processing. · e41955fa
      Tom Lane authored
      entryGetItem()'s three code paths each contained bugs associated
      with filtering the entries for gin_fuzzy_search_limit.
      
      The posting-tree path failed to advance "advancePast" after having
      decided to filter an item.  If we ran out of items on the current
      page and needed to advance to the next, what would actually happen
      is that entryLoadMoreItems() would re-load the same page.  Eventually,
      the random dropItem() test would accept one of the same items it'd
      previously rejected, and we'd move on --- but it could take awhile
      with small gin_fuzzy_search_limit.  To add insult to injury, this
      case would inevitably cause entryLoadMoreItems() to decide it needed
      to re-descend from the root, making things even slower.
      
      The posting-list path failed to implement gin_fuzzy_search_limit
      filtering at all, so that all entries in the posting list would
      be returned.
      
      The bitmap-result path used a "gotitem" variable that it failed to
      update in the one place where it'd actually make a difference, ie
      at the one "continue" statement.  I think this was unreachable in
      practice, because if we'd looped around then it shouldn't be the
      case that the entries on the new page are before advancePast.
      Still, the "gotitem" variable was contributing nothing to either
      clarity or correctness, so get rid of it.
      
      Refactor all three loops so that the termination conditions are
      more alike and less unreadable.
      
      The code coverage report showed that we had no coverage at all for
      the re-descend-from-root code path in entryLoadMoreItems(), which
      seems like a very bad thing, so add a test case that exercises it.
      We also had exactly no coverage for gin_fuzzy_search_limit, so add a
      simplistic test case that at least hits those code paths a little bit.
      
      Back-patch to all supported branches.
      
      Adé Heyward and Tom Lane
      
      Discussion: https://postgr.es/m/CAEknJCdS-dE1Heddptm7ay2xTbSeADbkaQ8bU2AXRCVC2LdtKQ@mail.gmail.com
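      
      For anyone wanting to exercise this locally, a minimal sketch (the
      table and column are hypothetical, with a GIN index on body_tsv):
      
        SET gin_fuzzy_search_limit = 1000;
        SELECT count(*) FROM docs WHERE body_tsv @@ to_tsquery('common & word');
        RESET gin_fuzzy_search_limit;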
    • Improve stability of explain regression test. · c0885c4c
      Fujii Masao authored
      The explain regression test runs EXPLAIN commands via the function
      that filters unstable outputs. To produce more stable test output,
      this commit improves the function so that it also filters out text-mode
      Buffers lines. This is necessary because text-mode Buffers lines vary
      depending on the system state.
      
      This improvement will get rid of the regression test failure that
      the commit ed7a5095 caused on the buildfarm members prion and
      dory because of the instability of Buffers lines.
      
      Author: Fujii Masao
      Reviewed-by: Tom Lane
      Discussion: https://postgr.es/m/20200403025751.GB1759@paquier.xyz
    • Alvaro Herrera · 347d2b07
    • pg_waldump: Don't call XLogDumpDisplayStats() if -q is specified. · 3031440e
      Robert Haas authored
      Commit ac44367e introduced this
      problem.
      
      Report and fix by Fujii Masao.
      
      Discussion: http://postgr.es/m/d332b8f0-0c72-3cd6-6945-7a86a503662a@oss.nttdata.com
    • Add checksum helper functions. · c12e43a2
      Robert Haas authored
      These functions make it easier to write code that wants to compute a
      checksum for some data while allowing the user to configure the type
      of checksum that gets used.
      
      This is another piece of infrastructure for the upcoming patch to add
      backup manifests.
      
      Patch written from scratch by me, but it is similar to previous work
      by Rushabh Lathia and Suraj Kharage. Suraj also reviewed this version
      off-list. Advice on how not to break Windows from Davinder Singh.
      
      Discussion: http://postgr.es/m/CA+TgmoZV8dw1H2bzZ9xkKwdrk8+XYa+DC9H=F7heO2zna5T6qg@mail.gmail.com
      Discussion: http://postgr.es/m/CA+TgmoZRTBiPyvQEwV79PU1ePTtSEo2UeVncrkJMbn1sU1gnRA@mail.gmail.com
    • Fix bogus CALLED_AS_TRIGGER() defenses. · 6dd9f357
      Tom Lane authored
      contrib/lo's lo_manage() thought it could use
      trigdata->tg_trigger->tgname in its error message about
      not being called as a trigger.  That naturally led to a core dump.
      
      unique_key_recheck() figured it could Assert that fcinfo->context
      is a TriggerData node in advance of having checked that it's
      being called as a trigger.  That's harmless in production builds,
      and perhaps not that easy to reach in any case, but it's logically
      wrong.
      
      The first of these per bug #16340 from William Crowell;
      the second from manual inspection of other CALLED_AS_TRIGGER
      call sites.
      
      Back-patch the lo.c change to all supported branches, the
      other to v10 where the thinko crept in.
      
      Discussion: https://postgr.es/m/16340-591c7449dc7c8c47@postgresql.org
    • Revert "Include information on buffer usage during planning phase, in EXPLAIN output." · 19db23bc
      Fujii Masao authored
      This reverts commit ed7a5095.
      
      Per buildfarm member prion.
    • Add wait events for recovery conflicts. · 18808f8c
      Fujii Masao authored
      This commit introduces new wait events RecoveryConflictSnapshot and
      RecoveryConflictTablespace. The former is reported while waiting for
      recovery conflict resolution on a vacuum cleanup. The latter is reported
      while waiting for recovery conflict resolution on dropping tablespace.
      
      This commit also changes the code so that the wait event Lock is reported
      while waiting in ResolveRecoveryConflictWithVirtualXIDs() for recovery
      conflict resolution on a lock. The wait event Lock is normally reported
      during such a wait, but previously it was not reported when the wait
      happened in ResolveRecoveryConflictWithVirtualXIDs().
      
      Author: Masahiko Sawada
      Reviewed-by: Fujii Masao
      Discussion: https://postgr.es/m/CA+fd4k4mXWTwfQLS3RPwGr4xnfAEs1ysFfgYHvmmoUgv6Zxvmg@mail.gmail.com
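      
      For illustration, on a standby the new wait events (typically reported
      for the startup process) can be observed along these lines:
      
        SELECT pid, backend_type, wait_event_type, wait_event
        FROM pg_stat_activity
        WHERE wait_event IN ('RecoveryConflictSnapshot',
                             'RecoveryConflictTablespace');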
    • Add support for \aset in pgbench · 9d8ef988
      Michael Paquier authored
      This option is similar to \gset, except that it is able to store all
      results from combined SQL queries into separate variables.  If a query
      returns multiple rows, the last result is stored and if a query returns
      no rows, nothing is stored.
      
      While on it, add a TAP test for \gset to check for a failure when a
      query returns multiple rows.
      
      Author: Fabien Coelho
      Reviewed-by: Ibrar Ahmed, Michael Paquier
      Discussion: https://postgr.es/m/alpine.DEB.2.21.1904081914200.2529@lancre
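      
      A minimal pgbench script sketch (variable names are illustrative); the
      single \aset sets both "one" and "two" from the combined queries:
      
        SELECT 1 AS one \; SELECT 2 AS two \aset
        SELECT :one + :two AS sum;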
    • Include information on buffer usage during planning phase, in EXPLAIN output. · ed7a5095
      Fujii Masao authored
      When the BUFFERS option is enabled, the EXPLAIN command includes
      information on buffer usage for each plan node in its output. In addition,
      this commit makes EXPLAIN also include information on buffer usage
      during the planning phase. This makes it easier to discern cases where
      a lot of buffer access happens during planning.
      
      Author: Julien Rouhaud, slightly revised by Fujii Masao
      Reviewed-by: Justin Pryzby
      Discussion: https://postgr.es/m/16109-26a1a88651e90608@postgresql.org
    • pg_waldump: Add a --quiet option. · ac44367e
      Robert Haas authored
      The primary motivation for this change is that it will be used by the
      upcoming patch to add backup manifests, but it also seems to have some
      potential more general use.
      
      Andres Freund and Robert Haas
      
      Discussion: http://postgr.es/m/20200330020814.nspra4mvby42yoa4@alap3.anarazel.de
  2. 02 Apr, 2020 14 commits
    • Improve stability fix for partition_aggregate test. · 7cb0a423
      Tom Lane authored
      Instead of disabling autovacuum on these test tables, adjust the
      partition boundaries so that the child partitions are not all the
      same size.  That should cause the planner to use a predictable
      ordering of the per-partition scan nodes even in cases where
      autovacuum causes the rowcount estimates to be off a bit.
      Moreover, this also lets these tests show that the planner does
      properly order the tables in descending size order, something
      that wasn't being proven before.
      
      The pagg_tab1 and pagg_tab2 partitions are still all the same
      size, but that should be fine, because those tables are so small
      that (1) autovacuum won't fire on them, and (2) even if it did,
      it couldn't change the reltuples value --- with only one page,
      it can't see just part of the relation.
      
      Discussion: https://postgr.es/m/24467.1585838693@sss.pgh.pa.us
    • doc: remove unnecessary INNER keyword · 8da1538b
      Bruce Momjian authored
      Commit 9b2009c4 added a join that did not use the INNER keyword, while
      the existing query used it.  It was cleaner to remove the INNER keyword
      from the existing query.
      
      Reported-by: Peter Eisentraut
      
      Discussion: https://postgr.es/m/a1ffbfda-59d2-5732-e5fb-3df8582b6434@2ndquadrant.com
      
      Backpatch-through: 9.5
    • doc: remove comma, related to commit 92d31085 · c713dc2f
      Bruce Momjian authored
      Reported-by: Peter Eisentraut
      
      Discussion: https://postgr.es/m/750b8832-d123-7f9b-931e-43ce8321b2d7@2ndquadrant.com
      
      Backpatch-through: 9.5
    • Improve user control over truncation of logged bind-parameter values. · 0b34e7d3
      Tom Lane authored
      This patch replaces the boolean GUC log_parameters_on_error introduced
      by commit ba79cb5d with an integer log_parameter_max_length_on_error,
      adding the ability to specify how many bytes to trim each logged
      parameter value to.  (The previous coding hard-wired that choice at
      64 bytes.)
      
      In addition, add a new parameter log_parameter_max_length that provides
      similar control over truncation of query parameters that are logged in
      response to statement-logging options, as opposed to errors.  Previous
      releases always logged such parameters in full, possibly causing log
      bloat.
      
      For backwards compatibility with prior releases,
      log_parameter_max_length defaults to -1 (log in full), while
      log_parameter_max_length_on_error defaults to 0 (no logging).
      
      Per discussion, log_parameter_max_length is SUSET since the DBA should
      control routine logging behavior, but log_parameter_max_length_on_error
      is USERSET because it also affects errcontext data sent back to the
      client.
      
      Alexey Bashtanov, editorialized a little by me
      
      Discussion: https://postgr.es/m/b10493cc-a399-a03a-67c7-068f2791ee50@imap.cc
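      
      A possible configuration sketch (the values are only illustrative, and
      the settings can equally go in postgresql.conf; -1 means log in full,
      0 means do not log):
      
        SET log_parameter_max_length = 512;
        SET log_parameter_max_length_on_error = 64;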
    • Tomas Vondra
    • Attempt to stabilize partitionwise_aggregate test · cefb82d4
      David Rowley authored
      In b07642db, we added code to trigger autovacuums based on the number of
      INSERTs into a table. This seems to have caused some destabilization of
      the regression tests. Likely this is due to an autovacuum triggering
      mid-test and (per a theory from Tom Lane) one of the test's queries
      causing autovacuum to skip some number of pages, resulting in the
      reltuples estimate changing.
      
      The failure that this is attempting to fix is around the order of subnodes
      in an Append. Since the planner orders these according to the subnode
      cost, then it's possible that a small change in the reltuples value changes
      the subnode's cost enough that it swaps position with one of its fellow
      subnodes.
      
      The failure here only seems to occur on slower buildfarm machines, in
      this case lousyjack, which seems to have taken over 8 minutes to run just
      the partitionwise_aggregate test. Such a slow run would increase the
      chances that the autovacuum launcher would trigger a vacuum mid-test.
      Faster machines run this test in sub-second time, so they have a much
      smaller window for an autovacuum to trigger.
      
      Here we fix this by disabling autovacuum on all tables created in the test.
      
      Additionally, this reverts the change made in the
      partitionwise_aggregate test in 2dc16efe.
      
      Discussion: https://postgr.es/m/22297.1585797192@sss.pgh.pa.us
    • Add SQL functions for Unicode normalization · 2991ac5f
      Peter Eisentraut authored
      This adds SQL expressions NORMALIZE() and IS NORMALIZED to convert and
      check Unicode normal forms, per SQL standard.
      
      To support fast IS NORMALIZED tests, we pull in a new data file
      DerivedNormalizationProps.txt from Unicode and build a lookup table
      from that, using techniques similar to ones already used for other
      Unicode data.  make update-unicode will keep it up to date.  We only
      build and use these tables for the NFC and NFKC forms, because they
      are too big for NFD and NFKD and the improvement is not significant
      enough there.
      Reviewed-by: default avatarDaniel Verite <daniel@manitou-mail.org>
      Reviewed-by: default avatarAndreas Karlsson <andreas@proxel.se>
      Discussion: https://www.postgresql.org/message-id/flat/c1909f27-c269-2ed9-12f8-3ab72c8caf7a@2ndquadrant.com
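      
      A quick sketch of the new syntax (U&'\0061\0301' is 'a' plus a combining
      acute accent, i.e. two code points):
      
        SELECT normalize(U&'\0061\0301', NFC) = U&'\00E1' AS composes,
               U&'\0061\0301' IS NFC NORMALIZED AS already_nfc;
      
      which should return true and false respectively.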
    • Fix whitespace · 070c3d39
      Peter Eisentraut authored
    • doc: Update for Unix-domain sockets on Windows · 580a446c
      Peter Eisentraut authored
      Update the documentation to reflect that Unix-domain sockets are now
      usable on Windows.
    • Add some comments to some SQL features · c6e0edad
      Peter Eisentraut authored
      Otherwise, it could be confusing to a reader that some of these
      well-publicized features are simply listed as unsupported without
      further explanation.
    • Add maintenance_io_concurrency to postgresql.conf.sample. · 37b3794d
      Thomas Munro authored
      New GUC from commit fc34b0d9.
    • Allow parallel vacuum to accumulate buffer usage. · 3a5e2213
      Amit Kapila authored
      Commit 40d964ec allowed the vacuum command to process indexes in parallel
      but forgot to accumulate the buffer usage stats of parallel workers.  This
      commit allows the leader backend to accumulate the buffer usage stats of
      all the parallel workers.
      
      Reported-by: Julien Rouhaud
      Author: Sawada Masahiko
      Reviewed-by: Dilip Kumar, Amit Kapila and Julien Rouhaud
      Discussion: https://postgr.es/m/20200328151721.GB12854@nol
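      
      For example, with a command along these lines (the table name is made
      up, and it needs enough suitable indexes for workers to be launched):
      
        VACUUM (PARALLEL 2) big_table;
      
      the buffer usage reported for the statement, e.g. via
      pg_stat_statements, now includes the parallel workers' activity.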
    • Allow pg_stat_statements to track planning statistics. · 17e03282
      Fujii Masao authored
      This commit makes pg_stat_statements support the new GUC
      pg_stat_statements.track_planning. If this option is enabled,
      pg_stat_statements tracks the planning statistics of the statements,
      e.g., the number of times the statement was planned, the total time
      spent planning the statement, etc. This feature is useful for finding
      statements that take a long time to plan. Previously, since
      pg_stat_statements tracked only the execution statistics, it could not
      be used for that purpose.
      
      The planning and execution statistics are stored separately, at the end
      of each phase, so there is not always a one-to-one relationship between
      them. For example, if a statement is successfully planned but fails in
      the execution phase, only its planning statistics are stored. This may
      cause users to see pg_stat_statements results that differ from the
      previous version. To avoid this, pg_stat_statements.track_planning can
      be disabled.
      
      This commit bumps the version of pg_stat_statements to 1.8
      since it changes the definition of pg_stat_statements function.
      
      Author: Julien Rouhaud, Pascal Legrand, Thomas Munro, Fujii Masao
      Reviewed-by: Sergei Kornilov, Tomas Vondra, Yoshikazu Imai, Haribabu Kommi, Tom Lane
      Discussion: https://postgr.es/m/CAHGQGwFx_=DO-Gu-MfPW3VQ4qC7TfVdH2zHmvZfrGv6fQ3D-Tw@mail.gmail.com
      Discussion: https://postgr.es/m/CAEepm=0e59Y_6Q_YXYCTHZkqOc6H2pJ54C_Xe=VFu50Aqqp_sA@mail.gmail.com
      Discussion: https://postgr.es/m/DB6PR0301MB21352F6210E3B11934B0DCC790B00@DB6PR0301MB2135.eurprd03.prod.outlook.com
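      
      A sketch of how the new counters can be consulted (assumes
      pg_stat_statements is in shared_preload_libraries and
      pg_stat_statements.track_planning = on):
      
        CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
        SELECT query, plans, total_plan_time, calls, total_exec_time
        FROM pg_stat_statements
        ORDER BY total_plan_time DESC
        LIMIT 5;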
    • Collect statistics about SLRU caches · 28cac71b
      Tomas Vondra authored
      There are a number of SLRU caches used to access important data like clog,
      commit timestamps, multixact, asynchronous notifications, etc. Until now
      we had no easy way to monitor these shared caches or compute hit ratios,
      numbers of reads/writes, etc.
      
      This commit extends the statistics collector to track this information
      for a predefined list of SLRUs, and also introduces a new system view
      pg_stat_slru displaying the data.
      
      The list of built-in SLRUs is fixed, but additional SLRUs may be defined
      in extensions. Unfortunately, there's no suitable registry of SLRUs, so
      this patch simply defines a fixed list of SLRUs with entries for the
      built-in ones and one entry for all additional SLRUs. Extensions adding
      their own SLRU are fairly rare, so this seems acceptable.
      
      This patch only allows monitoring of SLRUs, not tuning. The SLRU sizes
      are still fixed (hard-coded in the code) and it's not entirely clear
      which of the SLRUs might need a GUC to tune size. In a way, allowing us
      to determine that is one of the goals of this patch.
      
      Bump catversion as the patch introduces new functions and system view.
      
      Author: Tomas Vondra
      Reviewed-by: Alvaro Herrera
      Discussion: https://www.postgresql.org/message-id/flat/20200119143707.gyinppnigokesjok@development
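      
      A minimal look at the new view, for illustration:
      
        SELECT name, blks_hit, blks_read, blks_written, flushes, truncates
        FROM pg_stat_slru;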
  3. 01 Apr, 2020 11 commits
    • Clean up parsing of ltree and lquery some more. · 17ca0679
      Tom Lane authored
      Fix lquery parsing to handle repeated flag characters correctly,
      and to enforce the max label length correctly in some cases where
      it did not before, and to detect empty labels in some cases where
      it did not before.
      
      In a more cosmetic vein, use a switch rather than if-then chains to
      handle the different states, and avoid unnecessary checks on charlen
      when looking for ASCII characters, and factor out multiple copies of
      the label length checking code.
      
      Tom Lane and Dmitry Belyavsky
      
      Discussion: https://postgr.es/m/CADqLbzLVkBuPX0812o+z=c3i6honszsZZ6VQOSKR3VPbB56P3w@mail.gmail.com
    • Add support for binary I/O of ltree, lquery, and ltxtquery types. · 949a9f04
      Tom Lane authored
      Not much to say here --- does what it says on the tin.  The "binary"
      representation in each case is really just the same as the text format,
      though we prefix a version-number byte in case anyone ever feels
      motivated to change that.  Thus, there's not any expectation of improved
      speed or reduced space; the point here is just to allow clients to use
      binary format for all columns of a query result or COPY data.
      
      This makes use of the recently added ALTER TYPE support to add binary
      I/O functions to an existing data type.  As in commit a8081860,
      we can piggy-back on there already being a new-for-v13 version of the
      ltree extension, so we don't need a new update script file.
      
      Nino Floris, reviewed by Alexander Korotkov and myself
      
      Discussion: https://postgr.es/m/CANmj9Vxx50jOo1L7iSRxd142NyTz6Bdcgg7u9P3Z8o0=HGkYyQ@mail.gmail.com
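      
      A small sketch of exercising the new binary output path (the table name
      is made up; a client driver requesting binary results works the same way):
      
        CREATE EXTENSION IF NOT EXISTS ltree;
        CREATE TABLE paths (p ltree);
        INSERT INTO paths VALUES ('Top.Science.Astronomy');
        COPY paths TO STDOUT (FORMAT binary);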
    • Check equality semantics for unique indexes on partitioned tables. · 501b0187
      Tom Lane authored
      We require the partition key to be a subset of the set of columns
      being made unique, so that physically-separate indexes on the different
      partitions are sufficient to enforce the uniqueness constraint.
      
      The existing code checked that the listed columns appear, but did not
      inquire into the index semantics, which is a serious oversight given
      that different index opclasses might enforce completely different
      notions of uniqueness.
      
      Ideally, perhaps, we'd just match the partition key opfamily to the
      index opfamily.  But hash partitioning uses hash opfamilies which we
      can't directly match to btree opfamilies.  Hence, look up the equality
      operator in each family, and accept if it's the same operator.  This
      should be okay in a fairly general sense, since the equality operator
      ought to precisely represent the opfamily's notion of uniqueness.
      
      A remaining weak spot is that we don't have a cross-index-AM notion of
      which opfamily member is "equality".  But we know which one to use for
      hash and btree AMs, and those are the only two that are relevant here
      at present.  (Any non-core AMs that know how to enforce equality are
      out of luck, for now.)
      
      Back-patch to v11 where this feature was introduced.
      
      Guancheng Luo, revised a bit by me
      
      Discussion: https://postgr.es/m/D9C3CEF7-04E8-47A1-8300-CA1DCD5ED40D@gmail.com
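      
      A sketch of the accepted case (hash partitioning whose equality operator
      matches the one used by the default btree opclass of the unique index):
      
        CREATE TABLE t (id int, val text) PARTITION BY HASH (id);
        CREATE TABLE t_p0 PARTITION OF t FOR VALUES WITH (MODULUS 2, REMAINDER 0);
        CREATE TABLE t_p1 PARTITION OF t FOR VALUES WITH (MODULUS 2, REMAINDER 1);
        CREATE UNIQUE INDEX ON t (id);
      
      With this commit, an index opclass whose equality operator does not match
      the partitioning opfamily's equality operator is rejected.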
    • Improve selectivity estimation for assorted match-style operators. · a8081860
      Tom Lane authored
      Quite a few matching operators such as JSONB's @> used "contsel" and
      "contjoinsel" as their selectivity estimators.  That was a bad idea,
      because (a) contsel is only a stub, yielding a fixed default estimate,
      and (b) that default is 0.001, meaning we estimate these operators as
      five times more selective than equality, which is surely pretty silly.
      
      There's a good model for improving this in ltree's ltreeparentsel():
      for any "var OP constant" query, we can try applying the operator
      to all of the column's MCV and histogram values, taking the latter
      as being a random sample of the non-MCV values.  That code is
      actually 100% generic, except for the question of exactly what
      default selectivity ought to be plugged in when we don't have stats.
      
      Hence, migrate the guts of ltreeparentsel() into the core code, provide
      wrappers "matchingsel" and "matchingjoinsel" with a more-appropriate
      default estimate, and use those for the non-geometric operators that
      formerly used contsel (mostly JSONB containment operators and tsquery
      matching).
      
      Also apply this code to some match-like operators in hstore, ltree, and
      pg_trgm, including the former users of ltreeparentsel as well as ones
      that improperly used contsel.  Since commit 911e7020 just created new
      versions of those extensions that we haven't released yet, we can sneak
      this change into those new versions instead of having to create an
      additional generation of update scripts.
      
      Patch by me, reviewed by Alexey Bashtanov
      
      Discussion: https://postgr.es/m/12237.1582833074@sss.pgh.pa.us
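      
      For illustration, the row estimate for a containment qual such as the
      following (the table and column are hypothetical) is now derived from
      the column's MCV list and histogram when statistics are available,
      rather than from a flat 0.001 default:
      
        EXPLAIN SELECT * FROM events WHERE payload @> '{"status": "active"}';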
    • Refactor code to look up local replication tuple · d8653f46
      Peter Eisentraut authored
      This unifies some duplicate code.
      
      Author: Amit Langote <amitlangote09@gmail.com>
      Discussion: https://www.postgresql.org/message-id/CA+HiwqFjYE5anArxvkjr37AQMd52L-LZtz9Ld2QrLQ3YfcYhTw@mail.gmail.com
    • Update SQL features count · 36962349
      Peter Eisentraut authored
      The previously listed total of 179 does not appear to be correct for
      SQL:2016 anymore.  (Previous SQL versions had slightly different
      feature sets, so it's plausible that it was once correct.)  The
      currently correct count is the number of rows in the respective tables
      in appendix F in SQL parts 2 and 11, minus 2 features that are listed
      twice.  Thus the correct count is currently 177.  This also matches
      the number of Core entries the built documentation currently shows, so
      it's internally consistent.
    • Alexander Korotkov · 4d276ba9
    • Correct CREATE INDEX documentation for opclass parameters · 3eabc623
      Alexander Korotkov authored
      Old versions of the opclass parameters patch supported the ability to
      specify DEFAULT as the opclass name in the CREATE INDEX command.  This
      ability was removed in the final version, but 911e7020 still mentions it
      in the documentation.
    • Alexander Korotkov
    • Fix crash in psql when attempting to reuse old connection · 8d84dd00
      Michael Paquier authored
      In a psql session, if the connection to the server is abruptly cut, the
      referenced connection would become NULL as of CheckConnection().  This
      could cause a hard crash with psql if attempting to connect by reusing
      the past connection's data because of a null-pointer dereference with
      either PQhost() or PQdb().  This issue is fixed by making sure that no
      reuse of the past connection is done if it does not exist.
      
      The issue was introduced by 6e5f8d48, so backpatch down to 12.
      
      Reported-by: Hugh Wang
      Author: Michael Paquier
      Reviewed-by: Álvaro Herrera, Tom Lane
      Discussion: https://postgr.es/m/16330-b34835d83619e25d@postgresql.org
      Backpatch-through: 12
    • Fix coverity complaint about commit 40d964ec. · 2401d937
      Amit Kapila authored
      Coverity complained that dividing integer expressions and then
      converting the integer quotient to type "double" would lose the
      fractional part.  Typecasting one of the arguments of the expression
      to double should fix the report.
      
      Author: Mahendra Singh Thalor
      Reviewed-by: Amit Kapila
      Discussion: https://postgr.es/m/20200329224818.6phnhv7o2q2rfovf@alap3.anarazel.de