1. 03 Jun, 2021 2 commits
    • Standardize usages of appendStringInfo and appendPQExpBuffer · f736e188
      David Rowley authored
      Fix a few places that were using appendStringInfo() when they should have
      been using appendStringInfoString().  Also some cases of
      appendPQExpBuffer() that would have been better suited to use
      appendPQExpBufferChar(), and finally, some places that used
      appendPQExpBuffer() when appendPQExpBufferStr() would have suited better.
      
      No bugs are being fixed here.  The aim is just to make the code use
      the most suitable function for the job.
      
      All the code being changed here is new to PG14.  It makes sense to fix
      these before we branch for PG15.  There are a few other places that we
      could fix, but those cases are older code so fixing those seems less
      worthwhile as it may cause unnecessary back-patching pain in the future.
      
      Author: Hou Zhijie
      Discussion: https://postgr.es/m/OS0PR01MB5716732158B1C4142C6FE375943D9@OS0PR01MB5716.jpnprd01.prod.outlook.com
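The difference the commit standardizes can be sketched with a toy buffer, a stand-in for PostgreSQL's StringInfo/PQExpBuffer rather than the real API: the printf-style append pays for format parsing even when there are no % escapes, while the plain-string and single-character variants copy directly.

```c
#include <stdio.h>
#include <string.h>

/* Toy stand-in for PostgreSQL's StringInfo; fixed-size, illustration only */
typedef struct { char data[256]; int len; } Buf;

/* printf-style append: parses the format string on every call */
static void buf_appendf(Buf *b, const char *fmt, const char *arg)
{
    b->len += snprintf(b->data + b->len, sizeof(b->data) - b->len, fmt, arg);
}

/* plain-string append: a straight copy, the better choice for constants */
static void buf_append_str(Buf *b, const char *s)
{
    size_t n = strlen(s);
    memcpy(b->data + b->len, s, n + 1);
    b->len += (int) n;
}

/* single-character append: cheaper still than a one-char string */
static void buf_append_char(Buf *b, char c)
{
    b->data[b->len++] = c;
    b->data[b->len] = '\0';
}
```

Both paths produce the same bytes; the commit is purely about picking the cheapest variant that does the job.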
    • Ignore more environment variables in TAP tests · 8279f68a
      Michael Paquier authored
      Various environment variables were not getting reset in the TAP tests,
      which would cause failures depending on the tests or the environment
      variables involved.  For example, PGSSL{MAX,MIN}PROTOCOLVERSION could
      cause failures in the SSL tests.  Even worse, a junk value of
      PGCLIENTENCODING makes server startup fail.  The list of variables
      reset is adjusted in each stable branch depending on what is supported.
      
      While at it, simplify the code a bit, per a suggestion from Andrew
      Dunstan, by using a list of variables instead of deleting them one
      by one.
      
      Reviewed-by: Andrew Dunstan, Daniel Gustafsson
      Discussion: https://postgr.es/m/YLbjjRpucIeZ78VQ@paquier.xyz
      Backpatch-through: 9.6
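A minimal C analog of the list-based cleanup (the real change is in the Perl TAP infrastructure; the variable list here is a small illustrative subset of the full set):

```c
#include <stdlib.h>

/* Clear a whole list of PG* variables in one loop instead of deleting
 * them one by one.  Illustrative subset only. */
static const char *pg_vars[] = {
    "PGCLIENTENCODING",
    "PGSSLMINPROTOCOLVERSION",
    "PGSSLMAXPROTOCOLVERSION",
    NULL
};

static void reset_pg_env(void)
{
    for (const char **v = pg_vars; *v != NULL; v++)
        unsetenv(*v);
}
```

Extending the list in a stable branch then means adding one string, not one more deletion statement.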
  2. 02 Jun, 2021 9 commits
  3. 01 Jun, 2021 2 commits
    • Reject SELECT ... GROUP BY GROUPING SETS (()) FOR UPDATE. · 1103033a
      Tom Lane authored
      This case should be disallowed, just as FOR UPDATE with a plain
      GROUP BY is disallowed; FOR UPDATE only makes sense when each row
      of the query result can be identified with a single table row.
      However, we missed teaching CheckSelectLocking() to check
      groupingSets as well as groupClause, so that it would allow
      degenerate grouping sets.  That resulted in a bad plan and
      a null-pointer dereference in the executor.
      
      Looking around for other instances of the same bug, the only one
      I found was in examine_simple_variable().  That'd just lead to
      silly estimates, but it should be fixed too.
      
      Per private report from Yaoguang Chen.
      Back-patch to all supported branches.
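A hedged sketch of the missing check, with stub types standing in for the parser's Query node: locking must be refused when either grouping field is set, since even a degenerate GROUPING SETS (()) means result rows no longer map to single table rows.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the Query fields CheckSelectLocking inspects */
struct QueryStub {
    void *groupClause;   /* non-NULL if GROUP BY is present */
    void *groupingSets;  /* non-NULL if GROUPING SETS is present */
};

static bool locking_allowed(const struct QueryStub *q)
{
    /* The original bug: only groupClause was checked here */
    return q->groupClause == NULL && q->groupingSets == NULL;
}
```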
    • pgoutput: Fix memory leak due to RelationSyncEntry.map. · eb89cb43
      Amit Kapila authored
      Release memory allocated when creating the tuple-conversion map and its
      component TupleDescs when its owning sync entry is invalidated.
      The TupleDescs must also be freed when no map is deemed necessary
      to begin with.
      
      Reported-by: Andres Freund
      Author: Amit Langote
      Reviewed-by: Takamichi Osumi, Amit Kapila
      Backpatch-through: 13, where it was introduced
      Discussion: https://postgr.es/m/MEYP282MB166933B1AB02B4FE56E82453B64D9@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
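The shape of the fix can be sketched with stub types (illustrative only; the real code frees a tuple-conversion map and its component TupleDescs rather than a plain allocation):

```c
#include <stddef.h>
#include <stdlib.h>

/* Stand-in for RelationSyncEntry's cached conversion map */
struct SyncEntryStub {
    void *map;
};

static void invalidate_entry(struct SyncEntryStub *e)
{
    if (e->map != NULL)
    {
        free(e->map);   /* the buggy version only reset the pointer */
        e->map = NULL;  /* avoid dangling pointer and double free */
    }
}
```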
  4. 31 May, 2021 5 commits
    • Fix error handling in replacement pthread_barrier_init(). · a40646e3
      Thomas Munro authored
      Commit 44bf3d50 incorrectly used an errno-style interface when supplying
      missing pthread functionality (i.e. on macOS), but it should check for
      and return error numbers directly.
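The interface contract at issue, in a simplified stub (not the real barrier implementation): pthread-style functions report failure by returning an error number, not by setting errno and returning -1.

```c
#include <errno.h>

/* Replacement pthread functions must follow the pthread convention */
static int barrier_init_stub(unsigned count)
{
    if (count == 0)
        return EINVAL;  /* return the error number directly ... */
    /* ... not "errno = EINVAL; return -1;" as the errno-style version did */
    return 0;
}
```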
    • Fix RADIUS error reporting in hba file parsing · 7c544ecd
      Peter Eisentraut authored
      The RADIUS-related checks in parse_hba_line() did not respect elevel
      and did not fill in *err_msg.  Also, verify_option_list_length()
      pasted together error messages in an untranslatable way.  To fix the
      latter, remove the function and do the error checking inline.  It's a
      bit more verbose but only minimally longer, and it makes fixing the
      first two issues straightforward.
      Reviewed-by: Magnus Hagander <magnus@hagander.net>
      Discussion: https://www.postgresql.org/message-id/flat/8381e425-8c23-99b3-15ec-3115001db1b2%40enterprisedb.com
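A sketch of the soft-error convention that parse_hba_line() expects; the function name, signature, and message below are illustrative, not the actual RADIUS-checking code: a failing check fills *err_msg and returns false so the caller can report at the requested elevel.

```c
#include <stdio.h>

static char errbuf[128];

/* Hypothetical check: secrets must number 1 or match the server count */
static int check_radius_lists(int nservers, int nsecrets, char **err_msg)
{
    if (nsecrets != 1 && nsecrets != nservers)
    {
        snprintf(errbuf, sizeof(errbuf),
                 "the number of RADIUS secrets (%d) must be 1 or the same "
                 "as the number of RADIUS servers (%d)",
                 nsecrets, nservers);
        *err_msg = errbuf;  /* caller logs this at its own elevel */
        return 0;
    }
    return 1;
}
```

Building the message in one translatable snprintf call also avoids the untranslatable string-pasting that verify_option_list_length() did.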
    • Fix mis-planning of repeated application of a projection. · 6ee41a30
      Tom Lane authored
      create_projection_plan contains a hidden assumption (here made
      explicit by an Assert) that a projection-capable Path will yield a
      projection-capable Plan.  Unfortunately, that assumption is violated
      only a few lines away, by create_projection_plan itself.  This means
      that two stacked ProjectionPaths can yield an outcome where we try to
      jam the upper path's tlist into a non-projection-capable child node,
      resulting in an invalid plan.
      
      There isn't any good reason to have stacked ProjectionPaths; indeed the
      whole concept is faulty, since the set of Vars/Aggs/etc needed by the
      upper one wouldn't necessarily be available in the output of the lower
      one, nor could the lower one create such values if they weren't
      available from its input.  Hence, we can fix this by adjusting
      create_projection_path to strip any top-level ProjectionPath from the
      subpath it's given.  (This amounts to saying "oh, we changed our
      minds about what we need to project here".)
      
      The test case added here only fails in v13 and HEAD; before that, we
      don't attempt to shove the Sort into the parallel part of the plan,
      for reasons that aren't entirely clear to me.  However, all the
      directly-related code looks generally the same as far back as v11,
      where the hazard was introduced (by d7c19e62).  So I've got no faith
      that the same type of bug doesn't exist in v11 and v12, given the
      right test case.  Hence, back-patch the code changes, but not the
      irrelevant test case, into those branches.
      
      Per report from Bas Poot.
      
      Discussion: https://postgr.es/m/534fca83789c4a378c7de379e9067d4f@politie.nl
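The fix can be sketched with hypothetical stand-ins for the planner's Path nodes: create_projection_path() now strips a top-level ProjectionPath from the subpath it is given, so ProjectionPaths never stack.

```c
#include <stddef.h>

/* Stand-ins for the planner's node tags and Path structs */
enum PathTag { T_PlainPath, T_ProjPath };

struct PathStub {
    enum PathTag tag;
    struct PathStub *subpath;  /* only meaningful for T_ProjPath */
};

static struct PathStub *strip_projection(struct PathStub *subpath)
{
    /* "oh, we changed our minds about what we need to project here" */
    if (subpath->tag == T_ProjPath)
        return subpath->subpath;
    return subpath;
}
```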
    • Raise a timeout to 180s, in test 010_logical_decoding_timelines.pl. · d03eeab8
      Noah Misch authored
      Per buildfarm member hornet.  Also, update Pod documentation showing the
      lower value.  Back-patch to v10, where the test first appeared.
    • Improve some error wording with multirange type parsing · 12cc9566
      Michael Paquier authored
      Braces were referred to in some error messages as just "brackets"
      (not curly brackets or curly braces), which can be confusing as
      other types of brackets could be meant.
      
      While at it, add one test checking for junk characters after a
      right brace.
      
      Author: Kyotaro Horiguchi
      Discussion: https://postgr.es/m/20210514.153153.1814935914483287479.horikyota.ntt@gmail.com
  5. 29 May, 2021 2 commits
  6. 28 May, 2021 3 commits
  7. 27 May, 2021 8 commits
    • Reduce the range of OIDs reserved for genbki.pl. · a4390abe
      Tom Lane authored
      Commit ab596105 increased FirstBootstrapObjectId from 12000 to 13000,
      but we've had some push-back about that.  It's worrisome to reduce the
      daylight between there and FirstNormalObjectId, because the number of
      OIDs consumed during initdb for collation objects is hard to predict.
      
      We can improve the situation by abandoning the assumption that these
      OIDs must be globally unique.  It should be sufficient for them to be
      unique per-catalog.  (Any code that's unhappy about that is broken
      anyway, since no more than per-catalog uniqueness can be guaranteed
      once the OID counter wraps around.)  With that change, the largest OID
      assigned during genbki.pl (starting from a base of 10000) is a bit
      under 11000.  This allows reverting FirstBootstrapObjectId to 12000
      with reasonable confidence that that will be sufficient for many years
      to come.
      
      We are not, at this time, abandoning the expectation that
      hand-assigned OIDs (below 10000) are globally unique.  Someday that'll
      likely be necessary, but the need seems years away still.
      
      This is late for v14, but it seems worth doing it now so that
      downstream software doesn't have to deal with the consequences of
      a change in FirstBootstrapObjectId.  In any case, we already
      bought into forcing an initdb for beta2, so another catversion
      bump won't hurt.
      
      Discussion: https://postgr.es/m/1665197.1622065382@sss.pgh.pa.us
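The resulting OID layout, using the constants named in the commit (FirstGenbkiObjectId is the base genbki.pl assigns from; values as stated in the commit message):

```c
/* OID range boundaries after this commit */
enum {
    FirstGenbkiObjectId    = 10000,  /* genbki.pl assigns upward from here */
    FirstBootstrapObjectId = 12000,  /* reverted from 13000 */
    FirstNormalObjectId    = 16384   /* first OID assigned post-initdb */
};
```

With genbki.pl topping out a bit under 11000, the gap below FirstBootstrapObjectId and the restored daylight below FirstNormalObjectId are both preserved.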
    • Rethink definition of pg_attribute.attcompression. · e6241d8e
      Tom Lane authored
      Redefine '\0' (InvalidCompressionMethod) as meaning "if we need to
      compress, use the current setting of default_toast_compression".
      This allows '\0' to be a suitable default choice regardless of
      datatype, greatly simplifying code paths that initialize tupledescs
      and the like.  It seems like a more user-friendly approach as well,
      because now the default compression choice doesn't migrate into table
      definitions, meaning that changing default_toast_compression is
      usually sufficient to flip an installation's behavior; one needn't
      tediously issue per-column ALTER SET COMPRESSION commands.
      
      Along the way, fix a few minor bugs and documentation issues
      with the per-column-compression feature.  Adopt more robust
      APIs for SetIndexStorageProperties and GetAttributeCompression.
      
      Bump catversion because typical contents of attcompression will now
      be different.  We could get away without doing that, but it seems
      better to ensure v14 installations all agree on this.  (We already
      forced initdb for beta2, anyway.)
      
      Discussion: https://postgr.es/m/626613.1621787110@sss.pgh.pa.us
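A sketch of the redefined lookup, assuming the catalog's single-byte method codes ('p' for pglz, 'l' for lz4): '\0' no longer pins a method into the table definition but defers to default_toast_compression at compression time.

```c
/* '\0' now means "use the current default", per this commit */
#define InvalidCompressionMethod '\0'

static char default_toast_compression = 'p';  /* 'p' = pglz, 'l' = lz4 */

static char resolve_compression(char attcompression)
{
    if (attcompression == InvalidCompressionMethod)
        return default_toast_compression;
    return attcompression;
}
```

Flipping the GUC thus changes behavior for every column still at '\0', without per-column ALTER SET COMPRESSION.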
    • Fix vpath build in libpq_pipeline test · a717e5c7
      Peter Eisentraut authored
      The path needs to be set to refer to the build directory, not the
      current directory, because that's actually the source directory at
      that point.
      
      Fix for 6abc8c25.
    • Add NO_INSTALL option to pgxs · 6abc8c25
      Peter Eisentraut authored
      Apply in libpq_pipeline test makefile, so that the test file is not
      installed into tmp_install.
      Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
      Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
      Discussion: https://www.postgresql.org/message-id/flat/cb9d16a6-760f-cd44-28d6-b091d5fb6ca7%40enterprisedb.com
    • Fix MSVC scripts when building with GSSAPI/Kerberos · 02511066
      Michael Paquier authored
      The deliverables of upstream Kerberos on Windows are installed with
      paths that do not match our MSVC scripts.  First, the include folder was
      named "inc/" in our scripts, but the upstream MSIs use "include/".
      Second, the build would fail with 64-bit environments as the libraries
      are named differently.
      
      This commit adjusts the MSVC scripts to be compatible with the latest
      upstream installations, and I have checked that compilation works
      with both the 32-bit and 64-bit installations.
      
      Special thanks to Kondo Yuta for the help in investigating the situation
      in hamerkop, which had an incorrect configuration for the GSS
      compilation.
      
      Reported-by: Brian Ye
      Discussion: https://postgr.es/m/162128202219.27274.12616756784952017465@wrigleys.postgresql.org
      Backpatch-through: 9.6
    • Replace run-time error check with assertion · 388e75ad
      Peter Eisentraut authored
      The error message was checking that the structures returned from the
      parser matched expectations.  That's something we usually use
      assertions for, not a full user-facing error message.  So replace that
      with an assertion (hidden inside lfirst_node()).
      Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
      Discussion: https://www.postgresql.org/message-id/flat/452e9df8-ec89-e01b-b64a-8cc6ce830458%40enterprisedb.com
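The assertion-based pattern referred to (lfirst_node()/castNode()) can be sketched with stub node types: the tag check becomes an internal invariant enforced by an assertion, not a user-facing error message.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the real NodeTag enum and Node structs in nodes/nodes.h */
enum NodeTagStub { T_StubA, T_StubB };

struct NodeStub { enum NodeTagStub tag; };

/* castNode-style helper: asserts the tag, then returns the pointer */
static struct NodeStub *cast_node(struct NodeStub *n, enum NodeTagStub tag)
{
    assert(n != NULL && n->tag == tag);  /* internal invariant, not ereport */
    return n;
}
```

An unexpected tag now trips the assertion in debug builds instead of producing a full translated error for a condition users should never see.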
    • doc: Fix description of some GUCs in docs and postgresql.conf.sample · 2941138e
      Michael Paquier authored
      The description of the following parameters was imprecise, or
      incorrect, about their context (PGC_POSTMASTER or PGC_SIGHUP):
      - autovacuum_work_mem (docs, as of 9.6~)
      - huge_page_size (docs, as of 14~)
      - max_logical_replication_workers (docs, as of 10~)
      - max_sync_workers_per_subscription (docs, as of 10~)
      - min_dynamic_shared_memory (docs, as of 14~)
      - recovery_init_sync_method (postgresql.conf.sample, as of 14~)
      - remove_temp_files_after_crash (docs, as of 14~)
      - restart_after_crash (docs, as of 9.6~)
      - ssl_min_protocol_version (docs, as of 12~)
      - ssl_max_protocol_version (docs, as of 12~)
      
      This commit adjusts the description of all these parameters to be more
      consistent with the practice used for the others.
      
      Reviewed-by: Justin Pryzby
      Discussion: https://postgr.es/m/YK2ltuLpe+FbRXzA@paquier.xyz
      Backpatch-through: 9.6
    • Fix assertion during streaming of multi-insert toast changes. · 6f4bdf81
      Amit Kapila authored
      While decoding a multi-insert WAL record we can't clean up the toast
      data until we get the last insert of that record.  So if we stream
      the changes before we get the last change, the memory for the toast
      chunks won't be released, even though we expect the txn to have
      streamed all changes after streaming.  This restriction is mainly to
      ensure the correctness of streamed transactions, and it doesn't seem
      worth lifting it just to allow this case, because the transaction
      will be streamed anyway once such an insert is complete.
      
      Previously we were using two different flags (one for toast tuples
      and another for speculative inserts) to indicate partial changes.
      Now both are replaced with a single flag indicating partial changes.
      
      Reported-by: Pavan Deolasee
      Author: Dilip Kumar
      Reviewed-by: Pavan Deolasee, Amit Kapila
      Discussion: https://postgr.es/m/CABOikdN-_858zojYN-2tNcHiVTw-nhxPwoQS4quExeweQfG1Ug@mail.gmail.com
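A sketch of the simplification with stub types: one partial-change flag covers both pending toast chunks and pending speculative inserts, and streaming waits until it clears.

```c
#include <stdbool.h>

/* Stand-in for the transaction state tracked by the reorder buffer */
struct TxnStub {
    bool has_partial_change;  /* replaces the two separate flags */
};

static void note_toast_chunk(struct TxnStub *txn)
{
    txn->has_partial_change = true;   /* mid-record: can't clean up toast */
}

static void note_insert_complete(struct TxnStub *txn)
{
    txn->has_partial_change = false;  /* last insert of the WAL record seen */
}

static bool can_stream(const struct TxnStub *txn)
{
    return !txn->has_partial_change;  /* defer streaming until complete */
}
```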
  8. 26 May, 2021 1 commit
  9. 25 May, 2021 8 commits
    • Make detach-partition-concurrently-4 less timing sensitive · eb43bdbf
      Alvaro Herrera authored
      Same as 5e0b1aeb, for the companion test file.
      
      This one seems lower probability (only two failures in a month of runs);
      I was hardly able to reproduce a failure without a patch, so the fact
      that I was also unable to reproduce one with it doesn't say anything.
      We'll have to wait for further buildfarm results to see if we need any
      further adjustments.
      
      Discussion: https://postgr.es/m/20210524090712.GA3771394@rfd.leadboat.com
    • Fix use of uninitialized variable in inline_function(). · e30e3fde
      Tom Lane authored
      Commit e717a9a1 introduced a code path that bypassed the call of
      get_expr_result_type, which is not good because we need its rettupdesc
      result to pass to check_sql_fn_retval.  We'd failed to notice right
      away because the code path in which check_sql_fn_retval uses that
      argument is fairly hard to reach in this context.  It's not impossible
      though, and in any case inline_function would have no business
      assuming that check_sql_fn_retval doesn't need that value.
      
      To fix, move get_expr_result_type out of the if-block, which in
      turn requires moving the construction of the dummy FuncExpr
      out of it.
      
      Per report from Ranier Vilela.  (I'm bemused by the lack of any
      compiler complaints...)
      
      Discussion: https://postgr.es/m/CAEudQAqBqQpQ3HruWAGU_7WaMJ7tntpk0T8k_dVtNB46DqdBgw@mail.gmail.com
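The shape of the fix, in a purely illustrative stub (the names below only stand in for get_expr_result_type and its consumer): a value needed unconditionally later must be computed before the branch, not inside one arm of it.

```c
/* Stand-in for get_expr_result_type's rettupdesc/rettype output */
static int compute_rettype(void) { return 42; }

static int inline_stub(int take_fast_path)
{
    int rettype = compute_rettype();  /* moved out of the if-block */

    if (take_fast_path)
    {
        /* the buggy code path bypassed the call entirely,
         * leaving rettype uninitialized when this branch ran */
    }
    /* check_sql_fn_retval-style consumer: needs rettype on both paths */
    return rettype;
}
```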
    • Make detach-partition-concurrently-3 less timing-sensitive · 5e0b1aeb
      Alvaro Herrera authored
      This recently added test has proven to be too sensitive to timing
      when sending a cancel to a session waiting for a lock.
      
      We fix this by running a no-op query in the blocked session immediately
      after the cancel; this prevents the session that sent the cancel
      from sending another query before the cancel has been reported.
      Idea by Noah Misch.
      
      With that fix, we sometimes see that the cancel error report is shown
      only relative to the step that is cancelled, instead of together with
      the step that sends the cancel.  To increase the probability that both
      steps are shown together, add a 0.1s sleep to the cancel.  In normal
      conditions this appears sufficient to silence most failures, but
      we'll see what the slower buildfarm members say about it.
      Reported-by: Takamichi Osumi <osumi.takamichi@fujitsu.com>
      Discussion: https://postgr.es/m/OSBPR01MB4888C4ABA361C7E81094AC66ED269@OSBPR01MB4888.jpnprd01.prod.outlook.com
    • Fix memory leak when de-toasting compressed values in VACUUM FULL/CLUSTER · fb0f5f01
      Michael Paquier authored
      VACUUM FULL and CLUSTER can be used to enforce the use of the existing
      compression method of a toastable column if a value currently stored is
      compressed with a method that does not match the column's defined
      method.  The code in charge of decompressing and recompressing toast
      values at rewrite left around the detoasted values, causing an
      accumulation of memory allocated in TopTransactionContext.
      
      When processing large relations, this could cause the system to run out
      of memory.  The detoasted values are not needed once their tuple is
      rewritten, and this commit ensures that the necessary cleanup happens.
      
      Issue introduced by bbe0a81d.  While at it, the comments in the
      area are reordered a bit.
      
      Reported-by: Andres Freund
      Analyzed-by: Andres Freund
      Author: Michael Paquier
      Reviewed-by: Dilip Kumar
      Discussion: https://postgr.es/m/20210521211929.pcehg6f23icwstdb@alap3.anarazel.de
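A sketch of the cleanup with stub allocation (the real code pfrees a value detoasted into TopTransactionContext once its tuple has been rewritten; malloc/free here are stand-ins):

```c
#include <stddef.h>
#include <stdlib.h>

/* Stands in for detoasting a compressed value for recompression */
static void *detoast_copy(size_t len)
{
    return malloc(len);
}

/* Stands in for forming the rewritten tuple, then releasing the copy */
static void rewrite_tuple(void **value, const void *original)
{
    /* ... form the new tuple from *value ... */
    if (*value != NULL && *value != original)
    {
        free(*value);   /* the missing cleanup added by this commit */
        *value = NULL;
    }
}
```

Without the free, each rewritten row's detoasted copy accumulated until the rewrite finished, which is how large relations could exhaust memory.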
    • Doc: Update logical decoding stats information. · 0c6b92d9
      Amit Kapila authored
      Add information about the pg_stat_replication_slots view along with
      the other system catalogs related to logical decoding.
      
      Author: Vignesh C
      Reviewed-by: Amit Kapila
      Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
    • Improve docs and error messages for parallel vacuum. · 0734b0e9
      Amit Kapila authored
      The error messages, docs, and one of the options were using
      'parallel degree' to indicate the parallelism used by the vacuum
      command.  We normally use 'parallel workers' elsewhere, so change
      parallel vacuum accordingly.
      
      Author: Bharath Rupireddy
      Reviewed-by: Dilip Kumar, Amit Kapila
      Backpatch-through: 13
      Discussion: https://postgr.es/m/CALj2ACWz=PYrrFXVsEKb9J1aiX4raA+UBe02hdRp_zqDkrWUiw@mail.gmail.com
    • Disallow SSL renegotiation · 01e6f1a8
      Michael Paquier authored
      SSL renegotiation is already disabled as of 48d23c72, however this does
      not prevent the server from complying with a client willing to use
      renegotiation.  In the last couple of years, renegotiation had its set
      of security issues and flaws (like the recent CVE-2021-3449), and it
      could be possible to crash the backend with a client attempting
      renegotiation.
      
      This commit takes one extra step by disabling renegotiation in the
      backend in the same way as SSL compression (f9264d15) or tickets
      (97d3a0b0).  OpenSSL 1.1.0h has added an option named
      SSL_OP_NO_RENEGOTIATION able to achieve that.  In older versions
      there is an option called SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS that
      was undocumented, and could be set within the SSL object created when
      the TLS connection opens, but I have decided not to use it, as it feels
      trickier to rely on, and it is not official.  Note that this option is
      not usable in OpenSSL < 1.1.0h as the internal contents of the *SSL
      object are hidden from applications.
      
      SSL renegotiation concerns protocols up to TLSv1.2.
      
      Per original report from Robert Haas, with a patch based on a suggestion
      by Andres Freund.
      
      Author: Michael Paquier
      Reviewed-by: Daniel Gustafsson
      Discussion: https://postgr.es/m/YKZBXx7RhU74FlTE@paquier.xyz
      Backpatch-through: 9.6
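The option-flag pattern can be sketched without OpenSSL headers; the real code uses OpenSSL's SSL_CTX_set_options() with SSL_OP_NO_RENEGOTIATION (1.1.0h+) alongside SSL_OP_NO_COMPRESSION and SSL_OP_NO_TICKET, while the bit values below are hypothetical stand-ins so the sketch compiles standalone.

```c
/* Hypothetical stand-ins for SSL_OP_NO_COMPRESSION, SSL_OP_NO_TICKET
 * and SSL_OP_NO_RENEGOTIATION; real values come from openssl/ssl.h */
enum {
    OPT_NO_COMPRESSION   = 1u << 0,  /* cf. f9264d15 */
    OPT_NO_TICKET        = 1u << 1,  /* cf. 97d3a0b0 */
    OPT_NO_RENEGOTIATION = 1u << 2   /* this commit */
};

/* OR the hardening flags into the context options, as the backend does */
static unsigned set_ssl_options(unsigned opts)
{
    return opts | OPT_NO_COMPRESSION | OPT_NO_TICKET | OPT_NO_RENEGOTIATION;
}
```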