  1. 31 Dec, 2017 1 commit
    • Improve regression tests' code coverage for plpgsql control structures. · dd2243f2
      Tom Lane authored
      I noticed that our code coverage report showed considerable deficiency
      in test coverage for PL/pgSQL control statements.  Notably, both
      exec_stmt_block and most of the loop control statements had very poor
      coverage of handling of return/exit/continue result codes from their
      child statements; and exec_stmt_fori was seriously lacking in feature
      coverage, having no test that exercised its BY or REVERSE features,
      nor verification that its overflow defenses work.
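      
      As a hedged illustration of the overflow defenses being tested, here is
      a self-contained C sketch (illustrative only, with a hypothetical helper
      name; not the actual exec_stmt_fori code) of stepping an integer loop
      with BY and REVERSE without wrapping around when a bound sits near
      INT_MAX or INT_MIN:
      
          #include <limits.h>
          #include <stdbool.h>
          #include <stdio.h>
          
          /* Stand-in for an integer FOR loop with BY and REVERSE; assumes
           * step >= 1, as plpgsql requires a positive BY value. */
          static void
          int_for_loop(int start, int stop, int step, bool reverse)
          {
              int value = start;
          
              for (;;)
              {
                  if (reverse ? (value < stop) : (value > stop))
                      break;                  /* range exhausted */
                  printf("%d\n", value);      /* loop body runs here */
          
                  /* Overflow defense: stop instead of wrapping around when
                   * the next step would pass INT_MAX or INT_MIN. */
                  if (reverse)
                  {
                      if (value < INT_MIN + step)
                          break;
                      value -= step;
                  }
                  else
                  {
                      if (value > INT_MAX - step)
                          break;
                      value += step;
                  }
              }
          }
          
          int
          main(void)
          {
              int_for_loop(10, 2, 3, true);   /* like FOR i IN REVERSE 10..2 BY 3 */
              return 0;
          }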
      
      Now that we have some infrastructure for plpgsql-specific test scripts,
      the natural thing to do is make a new script rather than further extend
      plpgsql.sql.  So I created a new script plpgsql_control.sql with the
      charter to test plpgsql control structures, and moved a few existing
      tests there because they fell entirely under that charter.  I then
      added new test cases that exercise the bits of code complained of above.
      
      Of the five kinds of loop statements, only exec_stmt_while's result code
      handling is fully exercised by these tests.  That would be a deficiency
      as things stand, but a follow-on commit will merge the loop statements'
      result code handling into one implementation.  So testing each usage of
      that implementation separately seems redundant.
      
      In passing, also add a couple test cases to plpgsql.sql to more fully
      exercise plpgsql's code related to expanded arrays --- I had thought
      that area was sufficiently covered already, but the coverage report
      showed a couple of un-executed code paths.
      
      Discussion: https://postgr.es/m/26314.1514670401@sss.pgh.pa.us
  2. 29 Dec, 2017 7 commits
  3. 28 Dec, 2017 1 commit
    • Fix rare assertion failure in parallel hash join. · f83040c6
      Andres Freund authored
      When a backend runs out of inner tuples to hash, it should detach from
      grow_batch_barrier only after it has flushed all batches to disk and
      merged counters, not before.  Otherwise a concurrent backend in
      ExecParallelHashIncreaseNumBatches() could stop waiting for this
      backend and try to read tuples before they have been written.  This
      commit reorders those operations and should fix the assertion failures
      seen occasionally on the build farm since commit
      18042840.
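      
      A self-contained C sketch of the ordering rule at issue (plain POSIX
      threads rather than PostgreSQL's barrier machinery; all names here are
      stand-ins): a participant must publish its results before releasing
      peers, or a released peer may read unwritten state.
      
          #include <pthread.h>
          #include <stdio.h>
          
          static int batches_flushed = 0;        /* stands in for batch files */
          static pthread_barrier_t grow_barrier; /* for grow_batch_barrier */
          
          static void *
          inner_worker(void *arg)
          {
              (void) arg;
              batches_flushed = 1;                 /* flush and merge first... */
              pthread_barrier_wait(&grow_barrier); /* ...then detach/release */
              return NULL;
          }
          
          int
          main(void)
          {
              pthread_t t;
          
              pthread_barrier_init(&grow_barrier, NULL, 2);
              pthread_create(&t, NULL, inner_worker, NULL);
              pthread_barrier_wait(&grow_barrier); /* released after publish */
              printf("peer sees batches_flushed = %d\n", batches_flushed);
              pthread_join(t, NULL);
              pthread_barrier_destroy(&grow_barrier);
              return 0;
          }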
      
      Author: Thomas Munro
      Discussion: https://postgr.es/m/E1eRwXy-0004IK-TO%40gemulon.postgresql.org
  4. 27 Dec, 2017 5 commits
  5. 26 Dec, 2017 2 commits
  6. 25 Dec, 2017 1 commit
    • Add polygon opclass for SP-GiST · ff963b39
      Teodor Sigaev authored
      The polygon opclass uses the SP-GiST compress-method feature added
      earlier; for now it is the only operator class using that feature.
      SP-GiST actually indexes the bounding boxes of the input polygons, so
      some of the supported operations are lossy.  The opclass reuses most
      methods of the corresponding SP-GiST opclass over boxes, treating each
      bounding box as a point in 4-D space.
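      
      A hedged sketch of the compress idea (simplified stand-in types, not
      PostgreSQL's geometry code): reduce each polygon to its bounding box,
      which the index treats as the 4-D point (low.x, low.y, high.x, high.y);
      index matches are then rechecked against the real polygon.
      
          #include <float.h>
          
          typedef struct { double x, y; } Pt;
          typedef struct { Pt low, high; } BBox;
          
          /* Lossy "compress": only the bounding box is indexed, so any index
           * match must be rechecked against the original polygon. */
          static BBox
          polygon_bbox(const Pt *verts, int n)
          {
              BBox b = {{DBL_MAX, DBL_MAX}, {-DBL_MAX, -DBL_MAX}};
          
              for (int i = 0; i < n; i++)
              {
                  if (verts[i].x < b.low.x)  b.low.x  = verts[i].x;
                  if (verts[i].y < b.low.y)  b.low.y  = verts[i].y;
                  if (verts[i].x > b.high.x) b.high.x = verts[i].x;
                  if (verts[i].y > b.high.y) b.high.y = verts[i].y;
              }
              return b;
          }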
      
      Bump catalog version.
      
      Authors: Nikita Glukhov, Alexander Korotkov, with minor editorialization by me
      Reviewed-By: all authors + Darafei Praliaskouski
      Discussion: https://www.postgresql.org/message-id/flat/54907069.1030506@sigaev.ru
  7. 24 Dec, 2017 1 commit
  8. 22 Dec, 2017 2 commits
  9. 21 Dec, 2017 8 commits
    • Minor edits to catalog files and scripts · 9373baa0
      Alvaro Herrera authored
      This fixes a few typos and small mistakes; it also cleans up a few
      minor stylistic issues.  The biggest functional change is that
      Gen_fmgrtab.pl no longer knows the OID of language 'internal'.
      
      Author: John Naylor
      Discussion: https://postgr.es/m/CAJVSVGXAkwbk-A9QHHHf00N905kKisyQbaYwKqaRpze_gPXGfg@mail.gmail.com
    • Adjust assertion in GetCurrentCommandId. · cce1ecfc
      Robert Haas authored
      currentCommandIdUsed is only used to skip redundant increments of the
      command counter, and CommandCounterIncrement() is categorically denied
      under parallelism anyway.  Therefore, it's OK for
      GetCurrentCommandId() to mark the counter value used, as long as it
      happens in the leader, not a worker.
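      
      A hedged, self-contained sketch of the adjusted check (stand-in
      globals and function names, not the actual xact.c code):
      
          #include <assert.h>
          #include <stdbool.h>
          
          typedef unsigned int CommandId;
          
          static CommandId currentCommandId = 1;
          static bool currentCommandIdUsed = false;
          static bool is_parallel_worker = false; /* stand-in for
                                                   * IsParallelWorker() */
          
          static CommandId
          get_current_command_id(bool used)
          {
              if (used)
              {
                  /* Only the leader may mark the command ID used; a worker
                   * cannot run CommandCounterIncrement(), so marking must
                   * never happen there. */
                  assert(!is_parallel_worker);
                  currentCommandIdUsed = true;
              }
              return currentCommandId;
          }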
      
      Prior to commit e9baa5e9, the slightly
      incorrect check didn't matter, but now it does.  A test case added by
      commit 18042840 uncovered the problem
      by accident; it caused failures with force_parallel_mode=on/regress.
      
      Report and review by Andres Freund.  Patch by me.
      
      Discussion: http://postgr.es/m/20171221143106.5lhtygohvmazli3x@alap3.anarazel.de
    • Rearrange execution of PARAM_EXTERN Params for plpgsql's benefit. · 6719b238
      Tom Lane authored
      This patch does three interrelated things:
      
      * Create a new expression execution step type EEOP_PARAM_CALLBACK
      and add the infrastructure needed for add-on modules to generate that.
      As discussed, the best control mechanism for that seems to be to add
      another hook function to ParamListInfo, which will be called by
      ExecInitExpr if it's supplied and a PARAM_EXTERN Param is found.
      For stand-alone expressions, we add a new entry point to allow the
      ParamListInfo to be specified directly, since it can't be retrieved
      from the parent plan node's EState.
      
      * Redesign the API for the ParamListInfo paramFetch hook so that the
      ParamExternData array can be entirely virtual.  This also lets us get rid
      of ParamListInfo.paramMask, instead leaving it to the paramFetch hook to
      decide which param IDs should be accessible or not.  plpgsql_param_fetch
      was already doing the identical masking check, so having callers do it too
      seemed redundant.  While I was at it, I added a "speculative" flag to
      paramFetch that the planner can specify as TRUE to avoid unwanted failures.
      This solves an ancient problem for plpgsql that it couldn't provide values
      of non-DTYPE_VAR variables to the planner for fear of triggering premature
      "record not assigned yet" or "field not found" errors during planning.
      
      * Rework plpgsql to get rid of the need for "unshared" parameter lists,
      by dint of turning the single ParamListInfo per estate into a nearly
      read-only data structure that doesn't instantiate any per-variable data.
      Instead, the paramFetch hook controls access to per-variable data and can
      make the right decisions on the fly, replacing the cases that we used to
      need multiple ParamListInfos for.  This might perhaps have been a
      performance loss on its own, but by using a paramCompile hook we can
      bypass plpgsql_param_fetch entirely during normal query execution.
      (It's now only called when, eg, we copy the ParamListInfo into a cursor
      portal.  copyParamList() or SerializeParamList() effectively instantiate
      the virtual parameter array as a simple physical array without a
      paramFetch hook, which is what we want in those cases.)  This allows
      reverting most of commit 6c82d8d1, though I kept the cosmetic
      code-consolidation aspects of that (eg the assign_simple_var function).
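      
      A hedged sketch of the second item's virtual-parameter-array idea
      (simplified stand-in types; hook and field names are illustrative,
      not PostgreSQL's actual ParamListInfo API):
      
          #include <stdbool.h>
          #include <stddef.h>
          
          typedef struct VirtualParams VirtualParams;
          
          typedef struct ParamValue
          {
              long value;                 /* stand-in for a Datum */
              bool isnull;
          } ParamValue;
          
          /* Fetch hook: compute parameter "paramid" on demand into *workspace.
           * "speculative" requests a soft failure (return NULL) instead of an
           * error, so the planner can probe values safely. */
          typedef ParamValue *(*ParamFetchHook) (VirtualParams *params,
                                                 int paramid, bool speculative,
                                                 ParamValue *workspace);
          
          struct VirtualParams
          {
              int            numParams;
              ParamFetchHook paramFetch;  /* non-NULL: the array is virtual */
              void          *fetchArg;    /* hook state, e.g. plpgsql's vars */
          };
          
          static ParamValue *
          param_fetch(VirtualParams *params, int paramid, bool speculative,
                      ParamValue *workspace)
          {
              /* The hook, not a separate paramMask, decides visibility. */
              if (paramid < 1 || paramid > params->numParams)
                  return NULL;
              return params->paramFetch(params, paramid, speculative, workspace);
          }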
      
      Performance testing shows this to be at worst a break-even change,
      and it can provide wins ranging up to 20% in test cases involving
      accesses to fields of "record" variables.  The fact that values of
      such variables can now be exposed to the planner might produce wins
      in some situations, too, but I've not pursued that angle.
      
      In passing, remove the "parent" pointer from the arguments to
      ExecInitExprRec and related functions, instead storing that pointer in a
      transient field in ExprState.  The ParamListInfo pointer for a stand-alone
      expression is handled the same way; we'd otherwise have had to add
      yet another recursively-passed-down argument in expression compilation.
      
      Discussion: https://postgr.es/m/32589.1513706441@sss.pgh.pa.us
    • Get rid of copy_partition_key · 8a0596cb
      Alvaro Herrera authored
      That function currently exists to avoid leaking memory in
      CacheMemoryContext in case of trouble while the partition key is being
      built, but there's a better way: allocate everything in a memcxt that
      goes away if the current (sub)transaction fails, and once the partition
      key is built and no further errors can occur, make the memcxt permanent
      by making it a child of CacheMemoryContext.
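      
      A hedged sketch of the pattern (assumes a backend environment; the
      memory-context calls are real PostgreSQL APIs, but this is a sketch,
      not the committed code):
      
          /* Build in a context that is freed automatically if the
           * (sub)transaction aborts partway through. */
          MemoryContext partkeycxt;
          
          partkeycxt = AllocSetContextCreate(CurTransactionContext,
                                             "partition key",
                                             ALLOCSET_SMALL_SIZES);
          
          /* ... allocate the partition key in partkeycxt; any elog(ERROR)
           * here frees everything when the subtransaction is aborted ... */
          
          /* Success: make the context permanent by reparenting it. */
          MemoryContextSetParent(partkeycxt, CacheMemoryContext);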
      
      Reviewed-by: Tom Lane
      Discussion: https://postgr.es/m/20171027172730.eh2domlkpn4ja62m@alvherre.pgsql
    • Fix typo · 9ef6aba1
      Alvaro Herrera authored
    • Avoid putting build-location-dependent strings into generated files. · c98c35cd
      Tom Lane authored
      Various Perl scripts we use to generate files were in the habit of
      printing things like "generated by $0" into their output files.
      That looks like a fine idea at first glance, but it results in
      non-reproducible output, because in VPATH builds $0 won't be just
      the name of the script file, but a full path for it.  We'd prefer
      that you get identical results whether using VPATH or not, so this
      is a bad thing.
      
      Some of these places also printed their input file name(s), causing
      an additional hazard of the same type.
      
      Hence, establish a policy that thou shalt not print $0, nor input file
      pathnames, into output files (they're still allowed in error messages,
      though).  Instead just write the script name verbatim.  While we are at
      it, we can make these annotations more useful by giving the script's
      full relative path name within the PG source tree, eg instead of
      Gen_fmgrtab.pl let's print src/backend/utils/Gen_fmgrtab.pl.
      
      Not all of the changes made here actually affect any files shipped
      in finished tarballs today, but it seems best to apply the policy
      everyplace so that nobody copies unsafe code into places where it
      could matter.
      
      Christoph Berg and Tom Lane
      
      Discussion: https://postgr.es/m/20171215102223.GB31812@msg.df7cb.de
    • Cancel CV sleep during subtransaction abort. · 59d1e2b9
      Robert Haas authored
      Generally, error recovery paths that need to do things like
      LWLockReleaseAll and pgstat_report_wait_end also need to call
      ConditionVariableCancelSleep, but AbortSubTransaction was missed.
      
      Since subtransaction abort might destroy the DSM segment that
      contains the ConditionVariable stored in cv_sleep_target, this
      can result in a crash for anything using condition variables.
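      
      A hedged sketch of where the missing call belongs (abridged and
      illustrative; the real AbortSubTransaction does much more, and the
      exact ordering shown is not guaranteed to match the commit):
      
          static void
          AbortSubTransaction(void)
          {
              /* ... */
              ConditionVariableCancelSleep();  /* forget cv_sleep_target
                                                * before the DSM segment
                                                * holding the CV goes away */
              LWLockReleaseAll();
              pgstat_report_wait_end();
              /* ... */
          }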
      
      Reported and diagnosed by Andres Freund.
      
      Discussion: http://postgr.es/m/20171221110048.rxk6464azzl5t2fi@alap3.anarazel.de
    • Add parallel-aware hash joins. · 18042840
      Andres Freund authored
      Introduce parallel-aware hash joins that appear in EXPLAIN plans as Parallel
      Hash Join with Parallel Hash.  While hash joins could already appear in
      parallel queries, they were previously always parallel-oblivious and had a
      partial subplan only on the outer side, meaning that the work of the inner
      subplan was duplicated in every worker.
      
      After this commit, the planner will consider using a partial subplan on the
      inner side too, using the Parallel Hash node to divide the work over the
      available CPU cores and combine its results in shared memory.  If the join
      needs to be split into multiple batches in order to respect work_mem, then
      workers process different batches as much as possible and then work together
      on the remaining batches.
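      
      An illustrative plan shape (relation and column names hypothetical):
      
          Gather
            Workers Planned: 2
            ->  Parallel Hash Join
                  Hash Cond: (o.id = i.id)
                  ->  Parallel Seq Scan on outer_tab o
                  ->  Parallel Hash
                        ->  Parallel Seq Scan on inner_tab i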
      
      The advantages of a parallel-aware hash join over a parallel-oblivious hash
      join used in a parallel query are that it:
      
       * avoids wasting memory on duplicated hash tables
       * avoids wasting disk space on duplicated batch files
       * divides the work of building the hash table over the CPUs
      
      One disadvantage is that there is some communication between the participating
      CPUs which might outweigh the benefits of parallelism in the case of small
      hash tables.  This is avoided by the planner's existing reluctance to supply
      partial plans for small scans, but it may be necessary to estimate
      synchronization costs in future if that situation changes.  Another is that
      outer batch 0 must be written to disk if multiple batches are required.
      
      A potential future advantage of parallel-aware hash joins is that right and
      full outer joins could be supported, since there is a single set of matched
      bits for each hashtable, but that is not yet implemented.
      
      A new GUC enable_parallel_hash is defined to control the feature, defaulting
      to on.
      
      Author: Thomas Munro
      Reviewed-By: Andres Freund, Robert Haas
      Tested-By: Rafia Sabih, Prabhat Sahu
      Discussion:
          https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com
          https://postgr.es/m/CAEepm=37HKyJ4U6XOLi=JgfSHM3o6B-GaeO-6hkOmneTDkH+Uw@mail.gmail.com
  10. 20 Dec, 2017 1 commit
  11. 19 Dec, 2017 5 commits
  12. 18 Dec, 2017 6 commits
    • Add shared tuplestores. · ab9e0e71
      Andres Freund authored
      SharedTuplestore allows multiple participants to write into it and
      then read the tuples back from it in parallel.  Each reader receives
      partial results.
      
      For now it always uses disk files, but other buffering policies and
      other kinds of scans (ie each reader receives complete results) may be
      useful in future.
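      
      A hedged, self-contained C sketch of the read side's access pattern
      (C11 atomics standing in for shared memory; not the actual
      SharedTuplestore API): readers claim positions from a shared cursor,
      so each reader sees a disjoint subset of the tuples.
      
          #include <stdatomic.h>
          #include <stdbool.h>
          
          #define NTUPLES 1024
          
          static int tuples[NTUPLES];    /* stands in for tuples in disk files */
          static atomic_int scan_pos;    /* shared scan cursor */
          
          /* Each parallel reader loops on this; the atomic increment hands
           * out disjoint positions, so every tuple goes to exactly one
           * reader. */
          static bool
          scan_next(int *out)
          {
              int pos = atomic_fetch_add(&scan_pos, 1);
          
              if (pos >= NTUPLES)
                  return false;          /* scan exhausted */
              *out = tuples[pos];
              return true;
          }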
      
      The upcoming parallel hash join feature will use this facility.
      
      Author: Thomas Munro
      Reviewed-By: Peter Geoghegan, Andres Freund, Robert Haas
      Discussion: https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com
    • Move SCRAM-related name definitions to scram-common.h · 25d53269
      Peter Eisentraut authored
      The SCRAM mechanism names and channel binding names have been included
      via scram.h by the libpq frontend code, but that header references a
      set of routines that are used only by the backend.  scram-common.h, by
      contrast, is usable by both the backend and libpq, so getting those
      names from there seems more reasonable.
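      
      A hedged sketch of the resulting layout (macro names are illustrative,
      not necessarily the committed ones; the mechanism strings are the
      standard SASL names):
      
          /* scram-common.h: safe to include from both backend and libpq */
          #define EXAMPLE_SCRAM_MECHANISM_NAME      "SCRAM-SHA-256"
          #define EXAMPLE_SCRAM_MECHANISM_PLUS_NAME "SCRAM-SHA-256-PLUS"
          
          /* scram.h stays backend-only, declaring backend routines. */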
      
      Author: Michael Paquier <michael.paquier@gmail.com>
    • doc: Fix figures in example description · 53cba77b
      Peter Eisentraut authored
      oversight in 244c8b46
      
      Reported-by: Blaz Merela <blaz@merela.org>
    • Fix bug in cancellation of non-exclusive backup to avoid assertion failure. · 56a95ee5
      Fujii Masao authored
      Previously an assertion failure occurred when pg_stop_backup() for a
      non-exclusive backup was aborted while it was waiting for WAL files to
      be archived.  This assertion failure happened in do_pg_abort_backup(),
      which is called when a non-exclusive backup is canceled.
      do_pg_abort_backup() assumes that there is at least one non-exclusive
      backup running when it's called, but pg_stop_backup() can be canceled
      even after it marks the end of the non-exclusive backup (e.g., while
      waiting for WAL archiving).  This broke the assumption that
      do_pg_abort_backup() relies on, causing an assertion failure.
      
      This commit changes do_pg_abort_backup() so that it does nothing when
      the non-exclusive backup has already been marked as completed.  That
      is, the assumption is changed as well, and do_pg_abort_backup() can
      now handle even the case where it's called when no backup is running.
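      
      A hedged, self-contained sketch of the changed contract (stand-in
      counter and function name, not the actual xlog.c code):
      
          static int nonexclusive_backups = 0;   /* stand-in for shared state */
          
          static void
          do_abort_backup(void)
          {
              /* Previously this path effectively asserted that a backup was
               * still running; now a call after the backup has been marked
               * complete is simply a no-op. */
              if (nonexclusive_backups <= 0)
                  return;
              nonexclusive_backups--;
          }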
      
      Backpatch to 9.6 where SQL-callable non-exclusive backup was added.
      
      Author: Masahiko Sawada and Michael Paquier
      Reviewed-By: Robert Haas and Fujii Masao
      Discussion: https://www.postgresql.org/message-id/CAD21AoD2L1Fu2c==gnVASMyFAAaq3y-AQ2uEVj-zTCGFFjvmDg@mail.gmail.com
    • Fix crashes on plans with multiple Gather (Merge) nodes. · fd7c0fa7
      Robert Haas authored
      es_query_dsa turns out to be broken by design, because it supposes
      that there is only one DSA for the whole query, whereas there is
      actually one per Gather (Merge) node.  For now, work around that
      problem by setting and clearing the pointer around the sections of
      code that might need it.  It's probably a better idea to get rid of
      es_query_dsa altogether in favor of having each node keep track
      individually of which DSA is relevant, but that seems like more than
      we would want to back-patch.
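      
      A hedged sketch of the workaround's shape (abridged; field names as in
      the executor, but this is not the committed diff):
      
          /* Around a section that must use this Gather node's DSA area: */
          dsa_area   *save_dsa = estate->es_query_dsa;
          
          estate->es_query_dsa = node->pei->area;   /* this node's area */
          /* ... run code that consults estate->es_query_dsa ... */
          estate->es_query_dsa = save_dsa;          /* restore for others */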
      
      Thomas Munro, reviewed and tested by Andreas Seltenreich, Amit
      Kapila, and by me.
      
      Discussion: http://postgr.es/m/CAEepm=1U6as=brnVvMNixEV2tpi8NuyQoTmO8Qef0-VV+=7MDA@mail.gmail.com
    • Fix typo on comment · 7731c320
      Magnus Hagander authored
      Author: David Rowley