1. 03 Oct, 2019 4 commits
    • Avoid unnecessary out-of-memory errors during encoding conversion. · 8e10405c
      Tom Lane authored
      Encoding conversion uses the very simplistic rule that the output
      can't be more than 4X longer than the input, and pallocs a buffer
      of that size.  This results in failure to convert any string longer
      than 1/4 GB, which is becoming an annoying limitation.
      
      As a band-aid to improve matters, allow the allocated output buffer
      size to exceed 1GB.  We still insist that the final result fit into
      MaxAllocSize (1GB), though.  Perhaps it'd be safe to relax that
      restriction, but it'd require close analysis of all callers, which
      is daunting (not least because external modules might call these
      functions).  For the moment, this should allow a 2X to 4X improvement
      in the longest string we can convert, which is a useful gain in
      return for quite a simple patch.
      
      Also, once we have successfully converted a long string, repalloc
      the output down to the actual string length, returning the excess
      to the malloc pool.  This seems worth doing since we can usually
      expect to give back several MB if we take this path at all.
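      The allocate-worst-case-then-shrink pattern can be sketched in standalone C, using plain malloc/realloc as stand-ins for palloc/repalloc; the function name is illustrative and the "conversion" here is just an identity copy, not real transcoding:

```c
#include <stdlib.h>
#include <string.h>

#define MAX_CONVERSION_GROWTH 4

/* Allocate a worst-case-sized output buffer, "convert" into it, then
 * shrink the buffer to the actual result length so the excess goes
 * back to the allocator.  Illustrative stand-in, not PostgreSQL code. */
static char *
convert_with_shrink(const char *src, size_t srclen, size_t *dstlen)
{
    /* Worst case: every input byte grows by MAX_CONVERSION_GROWTH. */
    char *dst = malloc(srclen * MAX_CONVERSION_GROWTH + 1);
    char *shrunk;

    if (dst == NULL)
        return NULL;

    /* A real conversion would transcode here; we just copy. */
    memcpy(dst, src, srclen);
    dst[srclen] = '\0';
    *dstlen = srclen;

    /* Now that the true length is known, give the excess back,
     * as the commit does with repalloc after a long conversion. */
    shrunk = realloc(dst, srclen + 1);
    return shrunk ? shrunk : dst;   /* on failure, keep the big buffer */
}
```

      The shrink step only pays off for long strings, which is why the commit applies it on the large-allocation path.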
      
      This still leaves much to be desired, most notably that the assumption
      that MAX_CONVERSION_GROWTH == 4 is very fragile, and yet we have no
      guard code verifying that the output buffer isn't overrun.  Fixing
      that would require significant changes in the encoding conversion
      APIs, so it'll have to wait for some other day.
      
      The present patch seems safely back-patchable, so patch all supported
      branches.
      
      Alvaro Herrera and Tom Lane
      
      Discussion: https://postgr.es/m/20190816181418.GA898@alvherre.pgsql
      Discussion: https://postgr.es/m/3614.1569359690@sss.pgh.pa.us
    • Allow repalloc() to give back space when a large chunk is downsized. · c477f3e4
      Tom Lane authored
      Up to now, if you resized a large (>8K) palloc chunk down to a smaller
      size, aset.c made no attempt to return any space to the malloc pool.
      That's unpleasant if a really large allocation is resized to a
      significantly smaller size.  I think no such cases existed when this
      code was designed, and I'm not sure whether they're common even yet,
      but an upcoming fix to encoding conversion will certainly create such
      cases.  Therefore, fix AllocSetRealloc so that it gives realloc()
      a chance to do something with the block.  This doesn't noticeably
      increase complexity; we mostly just have to change the order in which
      the cases are considered.
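      The idea can be illustrated outside of aset.c with plain realloc(); block_downsize is a hypothetical helper sketching the behavior, not the actual AllocSetRealloc:

```c
#include <stdlib.h>

/* When a large chunk shrinks, hand the block to realloc() so the C
 * library gets a chance to trim it, instead of keeping the original
 * oversized block and only adjusting the requested size.
 * Illustrative stand-in, not PostgreSQL code. */
static void *
block_downsize(void *block, size_t oldsize, size_t newsize)
{
    if (newsize >= oldsize)
        return block;            /* growing is handled elsewhere */

    /* realloc() may shrink in place or move the data; either way the
     * allocator can reclaim the difference. */
    return realloc(block, newsize);
}
```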
      
      Back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/20190816181418.GA898@alvherre.pgsql
      Discussion: https://postgr.es/m/3614.1569359690@sss.pgh.pa.us
    • Selectively include window frames in expression walks/mutates. · b7a1c553
      Andrew Gierth authored
      query_tree_walker and query_tree_mutator were skipping the
      windowClause of the query, without regard for the fact that the
      startOffset and endOffset in a WindowClause node are expression trees
      that need to be processed.  This was an oversight in commit ec4be2ee
      from 2010, which added the expression fields; the main symptom is that
      function parameters in window frame clauses don't work in inlined
      functions.
      
      Fix (as conservatively as possible since this needs to not break
      existing out-of-tree callers) and add tests.
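      A miniature analogue of the walker fix, with illustrative types and names rather than PostgreSQL's actual Node machinery:

```c
#include <stddef.h>

/* A WindowClause-like node whose frame offsets are expression trees.
 * The bug was that the query tree walker skipped these two fields
 * entirely; the fix is to recurse into both offset expressions. */
typedef struct Expr { int value; } Expr;
typedef struct WindowClauseLike {
    Expr *startOffset;   /* frame start expression, may be NULL */
    Expr *endOffset;     /* frame end expression, may be NULL */
} WindowClauseLike;

/* Walker callbacks return nonzero to abort the walk, as in PostgreSQL. */
typedef int (*walker_fn)(Expr *node, void *context);

static int
window_clause_walk(WindowClauseLike *wc, walker_fn walker, void *context)
{
    if (wc->startOffset && walker(wc->startOffset, context))
        return 1;
    if (wc->endOffset && walker(wc->endOffset, context))
        return 1;
    return 0;
}

/* Example callback: count visited expression nodes. */
static int
count_nodes(Expr *node, void *context)
{
    (void) node;
    (*(int *) context)++;
    return 0;                    /* continue walking */
}
```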
      
      Backpatch all the way, since this has been broken since 9.0.
      
      Per report from Alastair McKinley; fix by me with kibitzing and review
      from Tom Lane.
      
      Discussion: https://postgr.es/m/DB6PR0202MB2904E7FDDA9D81504D1E8C68E3800@DB6PR0202MB2904.eurprd02.prod.outlook.com
    • pgbench: add --partitions and --partition-method options. · b1c1aa53
      Amit Kapila authored
      These new options allow users to partition the pgbench_accounts table by
      specifying the number of partitions and partitioning method.  The values
      allowed for partitioning method are range and hash.
      
      This feature allows users to measure the overhead of partitioning, if any.
      
      Author: Fabien COELHO
      Reviewed-by: Amit Kapila, Amit Langote, Dilip Kumar, Asif Rehman, and
      Alvaro Herrera
      Discussion: https://postgr.es/m/alpine.DEB.2.21.1907230826190.7008@lancre
  2. 02 Oct, 2019 2 commits
  3. 01 Oct, 2019 7 commits
    • Blind attempt to fix pglz_maximum_compressed_size · 540f3168
      Tomas Vondra authored
      Commit 11a078cf triggered failures on big-endian machines, and the
      only plausible place for an issue seems to be that TOAST_COMPRESS_SIZE
      calls VARSIZE instead of VARSIZE_ANY. So try fixing that blindly.
      
      Discussion: https://www.postgresql.org/message-id/20191001131803.j6uin7nho7t6vxzy%40development
    • Mark two variables in aset.c with PG_USED_FOR_ASSERTS_ONLY · fa2fe04b
      Tomas Vondra authored
      This fixes two compiler warnings about unused variables in non-assert builds,
      introduced by 5dd7fc15.
    • Optimize partial TOAST decompression · 11a078cf
      Tomas Vondra authored
      Commit 4d0e994e added support for partial TOAST decompression, so the
      decompression is interrupted after producing the requested prefix. For
      prefix and slices near the beginning of the entry, this may save a lot
      of decompression work.
      
      That however only deals with decompression - the whole compressed entry
      was still fetched and re-assembled, even though the compression used
      only a small fraction of it. This commit improves that by computing how
      much compressed data may be needed to decompress the requested prefix,
      and then fetches only the necessary part.
      
      We always need to fetch a bit more compressed data than the requested
      (uncompressed) prefix, because the prefix may not be compressible at all
      and pglz itself adds a bit of overhead. That means this optimization is
      most effective when the requested prefix is much smaller than the whole
      compressed entry.
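      The bound involved can be sketched as follows; the 9/8 growth factor reflects pglz's worst case of one control byte per eight literal bytes, but the function below is a simplified, illustrative stand-in, not the exact pglz_maximum_compressed_size:

```c
#include <stdint.h>

/* Upper bound on how much compressed input is needed to decompress a
 * prefix of rawsize bytes: incompressible data grows under pglz
 * framing (one control byte per 8 literal bytes, rounded up), and we
 * never need more than the whole compressed entry. */
static int32_t
max_compressed_needed(int32_t rawsize, int32_t total_compressed_size)
{
    int32_t bound = (rawsize * 9 + 8) / 8;

    return bound < total_compressed_size ? bound : total_compressed_size;
}
```

      Fetching only this many compressed bytes, instead of the whole entry, is what makes the optimization effective for small prefixes of large values.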
      
      Author: Binguo Bao
      Reviewed-by: Andrey Borodin, Tomas Vondra, Paul Ramsey
      Discussion: https://www.postgresql.org/message-id/flat/CAL-OGkthU9Gs7TZchf5OWaL-Gsi=hXqufTxKv9qpNG73d5na_g@mail.gmail.com
    • Fix test_session_hooks with parallel workers · 002962dc
      Michael Paquier authored
      Several buildfarm machines, like crake and thorntail, have been
      complaining that the new module test_session_hooks is unstable.  The
      issue was that the module was trying to log some session start and end
      activity for parallel workers, which makes little sense as they don't
      support DML, so just prevent this pattern from happening in the module.
      
      This could be reproduced by enforcing force_parallel_mode=regress, which
      is the value used by some of the buildfarm members.
      
      Discussion: https://postgr.es/m/20191001045246.GF2781@paquier.xyz
    • Add hooks for session start and session end, take two · e788bd92
      Michael Paquier authored
      These hooks can be used in loadable modules.  A simple test module is
      included.
      
      The first attempt was done with cd8ce3a2 but we lacked handling for
      NO_INSTALLCHECK in the MSVC scripts (problem solved afterwards by
      431f1599), so the buildfarm got angry.  This also fixes a couple of
      issues noticed upon review compared to the first attempt, so the code
      has changed slightly, resulting in a simpler test module.
      
      Author: Fabrízio de Royes Mello, Yugo Nagata
      Reviewed-by: Andrew Dunstan, Michael Paquier, Aleksandr Parfenov
      Discussion: https://postgr.es/m/20170720204733.40f2b7eb.nagata@sraoss.co.jp
      Discussion: https://postgr.es/m/20190823042602.GB5275@paquier.xyz
    • Fix confusing error caused by connection parameter channel_binding · 41a6de41
      Michael Paquier authored
      When using a client compiled without channel binding support (linking to
      OpenSSL 1.0.1 or older) to connect to a server which supports channel
      binding (linking to OpenSSL 1.0.2 or newer), libpq would generate a
      confusing error message with channel_binding=require for an SSL
      connection, where the server sends back SCRAM-SHA-256-PLUS:
      "channel binding is required, but server did not offer an authentication
      method that supports channel binding."
      
      This is confusing because the server did send a SASL mechanism able to
      support channel binding, but libpq was not able to detect that
      properly.
      
      The situation can be summarized as follows for the case described in
      the previous paragraph, for the SASL mechanisms used with the various
      modes of channel_binding:
      1) Client supports channel binding.
      1-1) channel_binding = disable => OK, with SCRAM-SHA-256.
      1-2) channel_binding = prefer => OK, with SCRAM-SHA-256-PLUS.
      1-3) channel_binding = require => OK, with SCRAM-SHA-256-PLUS.
      2) Client does not support channel binding.
      2-1) channel_binding = disable => OK, with SCRAM-SHA-256.
      2-2) channel_binding = prefer => OK, with SCRAM-SHA-256.
      2-3) channel_binding = require => failure with new error message,
      instead of the confusing one.
      This commit updates case 2-3 to generate a better error message.  Note
      that the SSL TAP tests are not impacted as it is not possible to test
      with mixed versions of OpenSSL for the backend and libpq.
      
      Reported-by: Tom Lane
      Author: Michael Paquier
      Reviewed-by: Jeff Davis, Tom Lane
      Discussion: https://postgr.es/m/24857.1569775891@sss.pgh.pa.us
    • Add transparent block-level memory accounting · 5dd7fc15
      Tomas Vondra authored
      Adds accounting of memory allocated in a memory context. Compared to
      various ad hoc solutions, the main advantage is that the accounting is
      transparent and does not require direct control over allocations (this
      matters for use cases where the allocations happen in user code, for
      example aggregate states allocated in transition functions).
      
      To reduce overhead, the accounting happens at the block level (not for
      individual chunks) and only the context immediately owning the block is
      updated.  When inquiring about the amount of memory allocated in a
      context, we have to recursively walk all child contexts.
      
      This "lazy" accounting works well for cases with a relatively small number
      of contexts in the relevant subtree and/or with infrequent inquiries.
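      A toy model of the lazy scheme, with illustrative field and function names rather than PostgreSQL's actual MemoryContext structures:

```c
#include <stddef.h>

/* Each context tracks only the bytes in blocks it owns directly
 * (mem_allocated); children are kept in a sibling-linked list, as in
 * PostgreSQL's context tree.  Reporting a subtree total therefore
 * requires recursing over the children at inquiry time. */
typedef struct Context {
    size_t mem_allocated;        /* blocks owned by this context only */
    struct Context *firstchild;
    struct Context *nextsibling;
} Context;

static size_t
context_mem_allocated(const Context *ctx)
{
    size_t total = ctx->mem_allocated;

    for (const Context *c = ctx->firstchild; c; c = c->nextsibling)
        total += context_mem_allocated(c);   /* lazy: walk on inquiry */
    return total;
}
```

      Updating only the owning context keeps the allocation hot path cheap; the cost is paid on the (presumed rare) inquiry instead.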
      
      Author: Jeff Davis
      Reviewed-by: Tomas Vondra, Melanie Plageman, Soumyadeep Chakraborty
      Discussion: https://www.postgresql.org/message-id/flat/027a129b8525601c6a680d27ce3a7172dab61aab.camel@j-davis.com
  4. 30 Sep, 2019 11 commits
  5. 29 Sep, 2019 6 commits
    • jit: Re-allow JIT compilation of execGrouping.c hashtable comparisons. · ac88807f
      Andres Freund authored
      In the course of 5567d12c, 356687bd and 317ffdfe, I changed
      BuildTupleHashTable[Ext]'s call to ExecBuildGroupingEqual to not pass
      in the parent node, but NULL. Which in turn prevents the tuple
      equality comparator from being JIT compiled.  While that fixes
      bug #15486, it is not actually necessary after all of the above commits,
      as we don't re-build the comparator when using the new
      BuildTupleHashTableExt() interface (the contents of the hashtable
      are reset, but the TupleHashTable itself is not).
      
      Therefore re-allow JIT compilation for callers that use
      BuildTupleHashTableExt with a separate context for "metadata" and
      content.
      
      As in the previous commit, there's ongoing work to make this easier to
      test to prevent such regressions in the future, but that
      infrastructure is not going to be backpatchable.
      
      The performance impact of not JIT compiling hashtable equality
      comparators can be substantial e.g. for aggregation queries that
      aggregate a lot of input rows to few output rows (when there are a lot
      of output groups, there will be fewer comparisons).
      
      Author: Andres Freund
      Discussion: https://postgr.es/m/20190927072053.njf6prdl3vb7y7qb@alap3.anarazel.de
      Backpatch: 11, just as 5567d12c
    • Fix determination when slot types for upper executor nodes are fixed. · 97e971ee
      Andres Freund authored
      For many queries, the fact that the tuple descriptor from the lower
      node was not taken into account when determining whether the type of a
      slot is fixed led to tuple deforming for such upper nodes not being
      JIT accelerated.
      
      I broke this in 675af5c0.
      
      There is ongoing work to enable writing regression tests for related
      behavior (including a patch that would have detected this
      regression), by optionally showing such details in EXPLAIN. But as it
      seems unlikely that that will be suitable for stable branches, just
      merge the fix for now.
      
      While it's fairly close to the 12 release window, the facts that 11
      continues to perform JITed tuple deforming in these cases, that there
      are still cases where we do so in 12, and that the performance
      regression can be sizable weigh in favor of fixing it now.
      
      Author: Andres Freund
      Discussion: https://postgr.es/m/20190927072053.njf6prdl3vb7y7qb@alap3.anarazel.de
      Backpatch: 12-, where 675af5c0 was merged.
    • Allow SSL TAP tests to run on Windows · 258bf86a
      Andrew Dunstan authored
      Windows does not enforce key file permission checks in libpq, and psql
      can produce CRLF line endings on Windows.
      
      Backpatch to Release 12 (CRLF) and Release 11 (permissions check)
    • doc: Further clarify how recovery target parameters are applied · e04a53a6
      Peter Eisentraut authored
      Recovery target parameters are all applied even in standby mode.  The
      previous documentation mostly wished they were not, but this was never
      the case.
      
      Discussion: https://www.postgresql.org/message-id/flat/e445616d-023e-a268-8aa1-67b8b335340c%40pgmasters.net
    • Fix bogus order of error checks in new channel_binding code. · 2c97f734
      Tom Lane authored
      Coverity pointed out that it's pretty silly to check for a null pointer
      after we've already dereferenced the pointer.  To fix, just swap the
      order of the two error checks.  Oversight in commit d6e612f8.
    • doc: Add a link target · 92f1545d
      Peter Eisentraut authored
      Forward-patched from PostgreSQL 12 release notes patch, for
      consistency.
  6. 28 Sep, 2019 3 commits
  7. 27 Sep, 2019 7 commits