1. 05 Aug, 2021 4 commits
    • 
      Doc: remove commit 2945a488 from v14 release notes. · 64b7a835
      Tom Lane authored
      Now that this has been back-patched, it's no longer a new feature
      for v14.
    • 
      postgres_fdw: Fix issues with generated columns in foreign tables. · 588d3f59
      Etsuro Fujita authored
      postgres_fdw imported generated columns from the remote tables as plain
      columns, and caused failures like "ERROR: cannot insert a non-DEFAULT
      value into column "foo"" when inserting into the foreign tables, as it
      tried to insert values into the generated columns.  To fix, we do the
      following under the assumption that generated columns in a postgres_fdw
      foreign table are defined so that they represent generated columns in
      the underlying remote table:
      
      * Send DEFAULT for the generated columns to the foreign server on insert
        or update, not generated column values computed on the local server.
      * Add to postgresImportForeignSchema() an option "import_generated" to
        include column generated expressions in the definitions of foreign
        tables imported from a foreign server.  The option is true by default.
      
      The assumption seems reasonable, because that would make a query of the
      postgres_fdw foreign table return values for the generated columns that
      are consistent with the generated expression.
      
      While here, fix another issue in postgresImportForeignSchema(): it tried
      to include column generated expressions as column default expressions in
      the foreign table definitions when the import_default option was enabled.
      
      Per bug #16631 from Daniel Cherniy.  Back-patch to v12 where generated
      columns were added.
      
      Discussion: https://postgr.es/m/16631-e929fe9db0ffc7cf%40postgresql.org
    • 
      Fix division-by-zero error in to_char() with 'EEEE' format. · ecbdbdfd
      Dean Rasheed authored
      This fixes a long-standing bug when using to_char() to format a
      numeric value in scientific notation -- if the value's exponent is
      less than -NUMERIC_MAX_DISPLAY_SCALE-1 (-1001), it produced a
      division-by-zero error.
      
      The reason for this error was that get_str_from_var_sci() divides its
      input by 10^exp, which it produced using power_var_int(). However, the
      underflow test in power_var_int() causes it to return zero if the
      result scale is too small. That's not a problem for power_var_int()'s
      only other caller, power_var(), since that limits the rscale to 1000,
      but in get_str_from_var_sci() the exponent can be much smaller,
      requiring a much larger rscale. Fix by introducing a new function to
      compute 10^exp directly, with no rscale limit. This also allows 10^exp
      to be computed more efficiently, without any numeric multiplication,
      division or rounding.
      
      Discussion: https://postgr.es/m/CAEZATCWhojfH4whaqgUKBe8D5jNHB8ytzemL-PnRx+KCTyMXmg@mail.gmail.com
    • 
      pgbench: When using pipelining only do PQconsumeInput() when necessary. · fa604e0d
      Andres Freund authored
      Up to now we did a PQconsumeInput() for each pipelined query, asking the OS
      for more input - which it often won't have, as all results might already have
      been sent. That turns out to have a noticeable performance impact.
      
      Alvaro Herrera reviewed the idea to add the PQisBusy() check, but not this
      concrete patch.
      
      Author: Andres Freund <andres@anarazel.de>
      Discussion: https://postgr.es/m/20210720180039.23rivhdft3l4mayn@alap3.anarazel.de
      Backpatch: 14, where libpq/pgbench pipelining was introduced.
  2. 04 Aug, 2021 1 commit
  3. 03 Aug, 2021 6 commits
  4. 02 Aug, 2021 1 commit
    • 
      Fix oversight in commit 1ec7fca8592178281cd5cdada0f27a340fb813fc. · fb234086
      Etsuro Fujita authored
      I failed to account for the possibility that when
      ExecAppendAsyncEventWait() notifies multiple async-capable nodes using
      postgres_fdw, a preceding node might invoke process_pending_request() to
      process a pending asynchronous request made by a succeeding node.  In
      that case the succeeding node should produce a tuple to return to the
      parent Append node from tuples fetched by process_pending_request() when
      notified.  Repair.
      
      Per buildfarm via Michael Paquier.  Back-patch to v14, like the previous
      commit.
      
      Thanks to Tom Lane for testing.
      
      Discussion: https://postgr.es/m/YQP0UPT8KmPiHTMs%40paquier.xyz
  5. 31 Jul, 2021 2 commits
    • 
      Use elog, not Assert, to report failure to provide an outer snapshot. · ec410c98
      Tom Lane authored
      As of commit 84f5c290, executing SQL commands (via SPI or otherwise)
      requires having either an active Portal, or a caller-established
      active snapshot.  We were simply Assert'ing that that's the case.
      But we've now had a couple different reports of people testing
      extensions that didn't meet this requirement, and were confused by
      the resulting crash.  Let's convert the Assert to a test-and-elog,
      in hopes of making the issue clearer for extension authors.
      
      Per gripes from Liu Huailing and RekGRpth.  Back-patch to v11,
      like the prior commit.
      
      Discussion: https://postgr.es/m/OSZPR01MB6215671E3C5956A034A080DFBEEC9@OSZPR01MB6215.jpnprd01.prod.outlook.com
      Discussion: https://postgr.es/m/17035-14607d308ac8643c@postgresql.org
    • 
      Fix corner-case errors and loss of precision in numeric_power(). · 0d6b8749
      Dean Rasheed authored
      This fixes a couple of related problems that arise when raising
      numbers to very large powers.
      
      Firstly, when raising a negative number to a very large integer power,
      the result should be well-defined, but the previous code would only
      cope if the exponent was small enough to go through power_var_int().
      Otherwise it would throw an internal error, attempting to take the
      logarithm of a negative number. Fix this by adding suitable handling
      to the general case in power_var() to cope with negative bases,
      checking for integer powers there.
      
      Next, when raising a (positive or negative) number whose absolute
      value is slightly less than 1 to a very large power, the result should
      approach zero as the power is increased. However, in some cases, for
      sufficiently large powers, this would lose all precision and return 1
      instead of 0. This was due to the way that the local_rscale was being
      calculated for the final full-precision calculation:
      
        local_rscale = rscale + (int) val - ln_dweight + 8
      
      The first two terms on the right hand side are meant to give the
      number of significant digits required in the result ("val" being the
      estimated result weight). However, this failed to account for the fact
      that rscale is clipped to a maximum of NUMERIC_MAX_DISPLAY_SCALE
       (1000), and the result weight might be less than -1000, causing their
      sum to be negative, leading to a loss of precision. Fix this by
      forcing the number of significant digits calculated to be nonnegative.
      It's OK for it to be zero (when the result weight is less than -1000),
      since the local_rscale value then includes a few extra digits to
      ensure an accurate result.
      
      Finally, add additional underflow checks to exp_var() and power_var(),
      so that they consistently return zero for cases like this where the
      result is indistinguishable from zero. Some paths through this code
      already returned zero in such cases, but others were throwing overflow
      errors.
      
      Dean Rasheed, reviewed by Yugo Nagata.
      
      Discussion: http://postgr.es/m/CAEZATCW6Dvq7+3wN3tt5jLj-FyOcUgT5xNoOqce5=6Su0bCR0w@mail.gmail.com
  6. 30 Jul, 2021 4 commits
    • 
      Fix range check in ECPG numeric to int conversion · f051b87a
      John Naylor authored
      The previous coding guarded against -INT_MAX instead of INT_MIN,
      leading to -2147483648 being rejected as out of range.
      
      Per bug #17128 from Kevin Sweet
      
      Discussion: https://www.postgresql.org/message-id/flat/17128-55a8a879727a3e3a%40postgresql.org
      Reviewed-by: Tom Lane
      Backpatch to all supported branches
    • 
      Update obsolete comment that still referred to CheckpointLock · 99da905d
      Heikki Linnakangas authored
      CheckpointLock was removed in commit d18e7566, and commit ce197e91d0
      updated a leftover comment in CreateCheckPoint, but there was another
      copy of it in CreateRestartPoint still.
    • 
      postgres_fdw: Fix handling of pending asynchronous requests. · 1cf7fb37
      Etsuro Fujita authored
       A pending asynchronous request is handled by process_pending_request(),
       which previously not only processed an in-progress remote query but
       also performed ExecForeignScan() to produce, from the result of the
       remote query, a tuple to return asynchronously to the local server.
       But that led to a server crash when executing a query, or to an
       "InstrStartNode called twice in a row" or "InstrEndLoop called on
       running node" failure under EXPLAIN ANALYZE, in cases where the plan
       tree contained multiple async-capable nodes accessing an
       initplan/subplan that itself contained multiple async-capable nodes
       scanning the same foreign tables as the parent async-capable nodes,
       as reported by Andrey Lepikhov.  The reason is that the second step
       of process_pending_request(), invoked while executing the
       initplan/subplan for one of the parent async-capable nodes, caused
       recursive execution of the initplan/subplan for another of the
       parent async-capable nodes.
      
      To fix, split process_pending_request() into the two steps and postpone
      the second step until ForeignAsyncConfigureWait() is called for each of
       the pending asynchronous requests.  Also, ExecAppendAsyncEventWait()
       assumed that FDWs called from ForeignAsyncConfigureWait() there would
       register at least one wait event in the WaitEventSet created in that
       function; instead, allow FDWs to register zero wait events, and
       modify ExecAppendAsyncEventWait() to just return in that case.
      
      Oversight in commit 27e1f145.  Back-patch to v14 where that commit went
      in.
      
      Andrey Lepikhov and Etsuro Fujita
      
      Discussion: https://postgr.es/m/fe5eaa19-1704-e4a4-76ee-3b9d37ade399@postgrespro.ru
    • 
      Remove unused argument in apply_handle_commit_internal(). · f4b939f1
      Amit Kapila authored
      Oversight in commit 0926e96c.
      
      Author: Masahiko Sawada
      Reviewed-By: Amit Kapila
      Backpatch-through: 14, where it was introduced
      Discussion: https://postgr.es/m/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK=30xJfUVihNZDA@mail.gmail.com
  7. 29 Jul, 2021 6 commits
  8. 28 Jul, 2021 3 commits
  9. 27 Jul, 2021 5 commits
  10. 26 Jul, 2021 5 commits
  11. 25 Jul, 2021 2 commits
    • 
      Get rid of artificial restriction on hash table sizes on Windows. · b154ee63
      Tom Lane authored
      The point of introducing the hash_mem_multiplier GUC was to let users
      reproduce the old behavior of hash aggregation, i.e. that it could use
      more than work_mem at need.  However, the implementation failed to get
      the job done on Win64, where work_mem is clamped to 2GB to protect
      various places that calculate memory sizes using "long int".  As
      written, the same clamp was applied to hash_mem.  This resulted in
      severe performance regressions for queries requiring a bit more than
      2GB for hash aggregation, as they now spill to disk and there's no
      way to stop that.
      
      Getting rid of the work_mem restriction seems like a good idea, but
      it's a big job and could not conceivably be back-patched.  However,
      there's only a fairly small number of places that are concerned with
      the hash_mem value, and it turns out to be possible to remove the
      restriction there without too much code churn or any ABI breaks.
      So, let's do that for now to fix the regression, and leave the
      larger task for another day.
      
      This patch does introduce a bit more infrastructure that should help
      with the larger task, namely pg_bitutils.h support for working with
      size_t values.
      
      Per gripe from Laurent Hasson.  Back-patch to v13 where the
      behavior change came in.
      
      Discussion: https://postgr.es/m/997817.1627074924@sss.pgh.pa.us
      Discussion: https://postgr.es/m/MN2PR15MB25601E80A9B6D1BA6F592B1985E39@MN2PR15MB2560.namprd15.prod.outlook.com
    • 
       Deduplicate choice of horizon for a relation in procarray.c. · 3d0a4636
      Andres Freund authored
      5a1e1d83 was a minimal bug fix for dc7420c2. To avoid future bugs of
      that kind, deduplicate the choice of a relation's horizon into a new helper,
      GlobalVisHorizonKindForRel().
      
      As the code in question was only introduced in dc7420c2 it seems worth
      backpatching this change as well, otherwise 14 will look different from all
      other branches.
      
      A different approach to this was suggested by Matthias van de Meent.
      
      Author: Andres Freund
      Discussion: https://postgr.es/m/20210621122919.2qhu3pfugxxp3cji@alap3.anarazel.de
      Backpatch: 14, like 5a1e1d83
  12. 24 Jul, 2021 1 commit
    • 
      Fix check for conflicting session- vs transaction-level locks. · 712ba6b8
      Tom Lane authored
      We have an implementation restriction that PREPARE TRANSACTION can't
      handle cases where both session-lifespan and transaction-lifespan locks
      are held on the same lockable object.  (That's because we'd otherwise
      need to acquire a new PROCLOCK entry during post-prepare cleanup, which
      is an operation that might fail.  The situation can only arise with odd
      usages of advisory locks, so removing the restriction is probably not
      worth the amount of effort it would take.)  AtPrepare_Locks attempted
      to enforce this, but its logic was many bricks shy of a load, because
      it only detected cases where the session and transaction locks had the
      same lockmode.  Locks of different modes on the same object would lead
      to the rather unhelpful message "PANIC: we seem to have dropped a bit
      somewhere".
      
      To fix, build a transient hashtable with one entry per locktag,
      not one per locktag + mode, and use that to detect conflicts.
      
      Per bug #17122 from Alexander Pyhalov.  This bug is ancient,
      so back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/17122-04f3c32098a62233@postgresql.org