1. 03 Jan, 2018 12 commits
    • Fix typo · bab29698
      Alvaro Herrera authored
      Author: Dagfinn Ilmari Mannsåker
      Discussion: https://postgr.es/m/d8jefpk4jtd.fsf@dalvik.ping.uio.no
      bab29698
    • Revert "Fix isolation test to be less timing-dependent" · 6c8be596
      Alvaro Herrera authored
      This reverts commit 2268e6af.  It turned out that inconsistency in
      the report is still possible, so go back to the simpler formulation of
      the test and instead add an alternate expected output.
      
      Discussion: https://postgr.es/m/20180103193728.ysqpcp2xjnqpiep7@alvherre.pgsql
      6c8be596
    • Rename pg_rewind's copy_file_range() to avoid conflict with new linux syscall. · 3e68686e
      Andres Freund authored
      Upcoming versions of glibc will contain copy_file_range(2), a wrapper
      around a new linux syscall for in-kernel copying of data ranges. This
      conflicts with pg_rewind's function of the same name.

      Therefore, rename pg_rewind's version. As our version isn't a generic
      copying facility, we chose a rewind-specific function name.
      
      Per buildfarm animal caiman and subsequent discussion with Tom Lane.
      
      Author: Andres Freund
      Discussion:
          https://postgr.es/m/20180103033425.w7jkljth3e26sduc@alap3.anarazel.de
          https://postgr.es/m/31122.1514951044@sss.pgh.pa.us
      Backpatch: 9.5-, where pg_rewind was introduced
      3e68686e
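
      The renamed routine is just a private range copier rather than a general
      syscall wrapper.  A minimal standalone sketch of a pread/pwrite-based
      range copy in that spirit (the function name, buffer size, and error
      handling here are illustrative, not pg_rewind's actual code):

          #include <sys/types.h>
          #include <unistd.h>

          /* Copy "len" bytes at offset "off" from src_fd to the same offset in dst_fd. */
          static int
          rewind_style_copy_range(int src_fd, int dst_fd, off_t off, size_t len)
          {
              char    buf[8192];

              while (len > 0)
              {
                  size_t  chunk = len < sizeof(buf) ? len : sizeof(buf);
                  ssize_t nread = pread(src_fd, buf, chunk, off);

                  if (nread <= 0)
                      return -1;      /* read error or unexpected EOF */
                  if (pwrite(dst_fd, buf, (size_t) nread, off) != nread)
                      return -1;      /* short or failed write */
                  off += nread;
                  len -= (size_t) nread;
              }
              return 0;
          }

      A glibc that exposes copy_file_range(2) would make a file-local function
      of that exact name a conflicting declaration, which is why a rename was
      needed rather than any behavioral change.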
    • Fix use of config-specific libraries for Windows OpenSSL · 99d5a3ff
      Andrew Dunstan authored
      Commit 614350a3 allowed for different builds of OpenSSL libraries on
      Windows, but ignored the fact that the alternative builds don't have
      config-specific libraries. This patch fixes the Solution file to ask for
      the correct libraries.
      
      Per offline discussions with Leonardo Cecchi and Marco Nenciarini.
      
      Backpatch to all live branches.
      99d5a3ff
    • Make XactLockTableWait work for transactions that are not yet self-locked · 3c27944f
      Alvaro Herrera authored
      XactLockTableWait assumed that its xid argument has already added itself
      to the lock table.  That assumption led to another assumption that if
      locking the xid has succeeded but the xid is reported as still in
      progress, then the input xid must have been a subtransaction.
      
      These assumptions hold true for the original uses of this code in
      locking related to on-disk tuples, but they break down in logical
      replication slot snapshot building -- in particular, when a logged
      standby snapshot contains an xid that's already in ProcArray but not yet
      in the lock table.  This leads to assertion failures that can be
      reproduced all the way back to 9.4, when logical decoding was
      introduced.
      
      To fix, change SubTransGetParent to SubTransGetTopmostTransaction which
      has a slightly different API: it returns the argument Xid if there is no
      parent, and it goes all the way to the top instead of moving up the
      levels one by one.  Also, to avoid busy-waiting, add a 1ms sleep to give
      the other process time to register itself in the lock table.
      
      For consistency, change ConditionalXactLockTableWait the same way.
      
      Author: Petr Jelínek
      Discussion: https://postgr.es/m/1B3E32D8-FCF4-40B4-AEF9-5C0E3AC57969@postgrespro.ru
      Reported-by: Konstantin Knizhnik
      Diagnosed-by: Stas Kelvich, Petr Jelínek
      Reviewed-by: Andres Freund, Robert Haas
      3c27944f
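
      A standalone model of the resulting wait loop; the stubbed predicates
      stand in for the real lock-table and ProcArray checks, and all names
      here are illustrative rather than the server's actual code:

          #include <stdbool.h>
          #include <unistd.h>

          typedef unsigned int TransactionId;

          /* Stubs standing in for lock-table and transaction-status checks. */
          static void wait_for_xid_lock(TransactionId xid) { (void) xid; }
          static bool xid_in_progress(TransactionId xid)   { (void) xid; return false; }
          static TransactionId topmost_xid(TransactionId xid) { return xid; }

          static void
          wait_for_transaction_end(TransactionId xid)
          {
              for (;;)
              {
                  wait_for_xid_lock(xid);         /* blocks until xid's lock is free */
                  if (!xid_in_progress(xid))
                      break;                      /* it's really gone; done waiting */

                  TransactionId top = topmost_xid(xid);

                  if (top == xid)
                  {
                      /* No parent: xid simply hasn't self-locked yet; don't busy-wait. */
                      usleep(1000);               /* the commit's 1 ms sleep */
                      continue;
                  }
                  xid = top;                      /* wait on the topmost transaction instead */
              }
          }

      The behavioral change is that an in-progress xid with no parent is no
      longer assumed to be a subtransaction; the wait simply retries after the
      short sleep.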
    • Fix some minor errors in new PHJ code. · 6fcde240
      Tom Lane authored
      Correct ExecParallelHashTuplePrealloc's estimate of whether the
      space_allowed limit is exceeded.  Be more consistent about tuples that
      are exactly HASH_CHUNK_THRESHOLD in size (they're "small", not "large").
      Neither of these things explain the current buildfarm unhappiness, but
      they're still bugs.
      
      Thomas Munro, per gripe by me
      
      Discussion: https://postgr.es/m/CAEepm=34PDuR69kfYVhmZPgMdy8pSA-MYbpesEN1SR+2oj3Y+w@mail.gmail.com
      6fcde240
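
      The threshold fix boils down to counting a tuple whose size is exactly
      HASH_CHUNK_THRESHOLD as "small".  A trivial illustration of the intended
      comparison (the constant values are assumptions made for the sketch, not
      authoritative):

          #include <stdbool.h>
          #include <stddef.h>

          #define HASH_CHUNK_SIZE       (32 * 1024)            /* assumed value */
          #define HASH_CHUNK_THRESHOLD  (HASH_CHUNK_SIZE / 4)  /* assumed value */

          /*
           * Tuples up to and including HASH_CHUNK_THRESHOLD bytes are packed into
           * shared chunks ("small"); only strictly larger tuples get a dedicated
           * oversized chunk ("large").
           */
          static bool
          tuple_is_large(size_t tuple_size)
          {
              return tuple_size > HASH_CHUNK_THRESHOLD;   /* equality counts as small */
          }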
    • Teach eval_const_expressions() to handle some more cases. · 3decd150
      Tom Lane authored
      Add some infrastructure (mostly macros) to make it easier to write
      typical cases for constant-expression simplification.  Add simplification
      processing for ArrayRef, RowExpr, and ScalarArrayOpExpr node types,
      which formerly went unsimplified even if all their inputs were constants.
      Also teach it to simplify FieldSelect from a composite constant.
      Make use of the new infrastructure to reduce the amount of code needed
      for the existing ArrayExpr and ArrayCoerceExpr cases.
      
      One existing test case changes output as a result of the fact that
      RowExpr can now be folded to a constant.  All the new code is exercised
      by existing test cases according to gcov, so I feel no need to add
      additional tests.
      
      Tom Lane, reviewed by Dmitry Dolgov
      
      Discussion: https://postgr.es/m/3be3b82c-e29c-b674-2163-bf47d98817b1@iki.fi
      3decd150
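
      The pattern shared by the new cases is simple: if every input of a node
      is already a constant, evaluate the node once at plan time and replace
      it with the resulting constant.  A standalone sketch of that shape over
      a toy expression tree (all types and helpers here are invented for
      illustration; they are not the planner's real data structures):

          #include <stdbool.h>
          #include <stdlib.h>

          typedef struct Expr
          {
              bool    is_const;       /* already a literal constant? */
              long    value;          /* its value, if so */
              int     nargs;
              struct Expr **args;     /* inputs, possibly already simplified */
          } Expr;

          /* Fold an n-ary "sum" node whose inputs are all constants into one constant. */
          static Expr *
          simplify_sum(Expr *node)
          {
              long    total = 0;

              for (int i = 0; i < node->nargs; i++)
              {
                  if (!node->args[i]->is_const)
                      return node;    /* not all inputs constant: leave it alone */
                  total += node->args[i]->value;
              }

              Expr   *folded = calloc(1, sizeof(Expr));

              if (folded == NULL)
                  return node;        /* out of memory: just skip the optimization */
              folded->is_const = true;
              folded->value = total;
              return folded;          /* RowExpr/ArrayRef folding follows the same shape */
          }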
    • Allow ldaps when using ldap authentication · 35c0754f
      Peter Eisentraut authored
      While ldaptls=1 provides an RFC 4513 conforming way to do LDAP
      authentication with TLS encryption, there was an earlier de facto
      standard way to do LDAP over SSL called LDAPS.  Even though it's not
      enshrined in a standard, it's still widely used and sometimes required
      by organizations' network policies.  There seems to be no reason not to
      support it when available in the client library.  Therefore, add support
      when using OpenLDAP 2.4+ or Windows.  It can be configured with
      ldapscheme=ldaps or ldapurl=ldaps://...
      
      Add tests for both ways of requesting LDAPS and a test for the
      pre-existing ldaptls=1.  Modify the 001_auth.pl test for "diagnostic
      messages", which was previously relying on the server rejecting
      ldaptls=1.
      
      Author: Thomas Munro
      Reviewed-By: Peter Eisentraut
      Discussion: https://postgr.es/m/CAEepm=1s+pA-LZUjQ-9GQz0Z4rX_eK=DFXAF1nBQ+ROPimuOYQ@mail.gmail.com
      35c0754f
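
      As a hypothetical example (host names and base DN are placeholders, not
      anything from the commit), either of these pg_hba.conf lines would
      request LDAPS:

          host  all  all  0.0.0.0/0  ldap  ldapserver=ldap.example.net ldapscheme=ldaps ldapbasedn="dc=example,dc=net"
          host  all  all  0.0.0.0/0  ldap  ldapurl="ldaps://ldap.example.net/dc=example,dc=net?uid?sub"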
    • Fix isolation test to be less timing-dependent · 2268e6af
      Alvaro Herrera authored
      I did this by adding another locking process, which makes the other two
      wait.  This way the output should be stable enough.
      
      Per buildfarm and Andres Freund
      Discussion: https://postgr.es/m/20180103034445.t3utrtrnrevfsghm@alap3.anarazel.de
      2268e6af
    • Update copyright for 2018 · 9d4649ca
      Bruce Momjian authored
      Backpatch-through: certain files through 9.3
      9d4649ca
    • Simplify representation of aggregate transition values a bit. · f9ccf92e
      Andres Freund authored
      Previously, aggregate transition values for hash-based and other forms
      of aggregation (i.e., sorted and ungrouped aggregation) were represented
      differently. Hash-based aggregation used a grouping-set-indexed array
      pointing to an array of transition values, whereas other forms of
      aggregation used one flattened array with the index computed from the
      grouping set and transition offsets.

      That made upcoming changes hard, so represent both as a
      grouping-set-indexed array of per-group data.
      
      As a nice side-effect this also makes aggregation slightly faster,
      because computing offsets with `transno + (setno * numTrans)` turns
      out not to be that cheap (too big for x86 lea for example).
      
      Author: Andres Freund
      Discussion: https://postgr.es/m/20171128003121.nmxbm2ounxzb6n2t@alap3.anarazel.de
      f9ccf92e
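
      The two representations can be pictured with plain C arrays: the old
      flattened layout computed `transno + (setno * numTrans)` on every
      access, while the new layout indexes by grouping set first and then by
      transition number.  A standalone sketch (array sizes and names are
      illustrative):

          #include <stdio.h>
          #include <stdlib.h>

          #define NUM_SETS   3            /* grouping sets */
          #define NUM_TRANS  4            /* transition values per grouping set */

          int
          main(void)
          {
              /* Old layout: one flat array, index computed on every access. */
              long   *flat = calloc(NUM_SETS * NUM_TRANS, sizeof(long));

              /* New layout: a grouping-set-indexed array of per-group data. */
              long  **by_set = calloc(NUM_SETS, sizeof(long *));

              for (int setno = 0; setno < NUM_SETS; setno++)
                  by_set[setno] = calloc(NUM_TRANS, sizeof(long));

              int     setno = 1, transno = 2;

              flat[transno + (setno * NUM_TRANS)] = 42;   /* multiply-add per lookup */
              by_set[setno][transno] = 42;                /* two plain indexed loads */

              printf("%ld %ld\n", flat[transno + (setno * NUM_TRANS)],
                     by_set[setno][transno]);
              return 0;
          }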
    • Ensure proper alignment of tuples in HashMemoryChunkData buffers. · 5dc692f7
      Tom Lane authored
      The previous coding relied (without any documentation) on the data[]
      member of HashMemoryChunkData being at a MAXALIGN'ed offset.  If it
      was not, the tuples would not be maxaligned either, leading to failures
      on alignment-picky machines.  While there seems to be no live bug on any
      platform we support, this is clearly pretty fragile: any addition to or
      rearrangement of the fields in HashMemoryChunkData could break it.
      Let's remove the hazard by getting rid of the data[] member and instead
      using pointer arithmetic with an explicitly maxalign'ed offset.
      
      Discussion: https://postgr.es/m/14483.1514938129@sss.pgh.pa.us
      5dc692f7
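
      The shape of the fix is to stop depending on where a trailing data[]
      member happens to land and instead compute the start of the tuple area
      with an explicitly maxaligned offset.  A standalone sketch of that idiom
      (the struct, the fixed alignment value, and the macro are simplified
      stand-ins, not the real HashMemoryChunkData or MAXALIGN):

          #include <stdint.h>
          #include <stdlib.h>

          #define MAX_ALIGN   8   /* assumed maximum alignment for the sketch */
          #define MAXALIGN(x) (((uintptr_t) (x) + (MAX_ALIGN - 1)) & ~(uintptr_t) (MAX_ALIGN - 1))

          typedef struct ChunkHeader
          {
              struct ChunkHeader *next;
              size_t  used;
              size_t  maxlen;
              /* no data[] member: tuples start at an explicitly maxaligned offset */
          } ChunkHeader;

          /* Tuple storage begins here, maxaligned regardless of the header's layout. */
          #define CHUNK_DATA(chunk)  ((char *) (chunk) + MAXALIGN(sizeof(ChunkHeader)))

          int
          main(void)
          {
              ChunkHeader *chunk = malloc(MAXALIGN(sizeof(ChunkHeader)) + 1024);
              char        *tuple_area = CHUNK_DATA(chunk);  /* safe place for maxaligned tuples */

              (void) tuple_area;
              free(chunk);
              return 0;
          }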
  2. 02 Jan, 2018 2 commits
    • Fix deadlock hazard in CREATE INDEX CONCURRENTLY · 54eff531
      Alvaro Herrera authored
      Multiple sessions doing CREATE INDEX CONCURRENTLY simultaneously are
      supposed to be able to work in parallel, as evidenced by fixes in commit
      c3d09b3b specifically to support this case.  In reality, one of the
      sessions would be aborted by a mysterious "deadlock detected" error.
      
      Jeff Janes diagnosed that this is because of leftover snapshots used for
      system catalog scans -- this was broken by 8aa3e47510b9 keeping track of
      (registering) the catalog snapshot.  To fix the deadlocks, it's enough
      to de-register that snapshot prior to waiting.
      
      Backpatch to 9.4, which introduced MVCC catalog scans.
      
      Include an isolationtester spec that, for me (Álvaro), reproduces the
      deadlock with the unpatched code 8 times out of 10.
      
      Author: Jeff Janes
      Diagnosed-by: Jeff Janes
      Reported-by: Jeremy Finzel
      Discussion: https://postgr.es/m/CAMa1XUhHjCv8Qkx0WOr1Mpm_R4qxN26EibwCrj0Oor2YBUFUTg%40mail.gmail.com
      54eff531
    • Don't cast between GinNullCategory and bool · 43803626
      Peter Eisentraut authored
      The original idea was that we could use an isNull-style bool array
      directly as a GinNullCategory array.  However, the existing code already
      acknowledges that that doesn't really work, because of the possibility
      that bool as currently defined can have arbitrary bit patterns for true
      values.  So it has to loop through the nullFlags array to set each bool
      value to an acceptable value.  But if we are looping through the whole
      array anyway, we might as well build a proper GinNullCategory array
      instead and abandon the type casting.  That makes the code much safer in
      case bool is ever changed to something else.
      Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
      43803626
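
      Roughly, the safe conversion looks like the following standalone sketch
      (the category type and constants are simplified stand-ins for
      GinNullCategory, not the real gin.h definitions):

          #include <stdbool.h>
          #include <stdlib.h>

          typedef signed char NullCategory;   /* stand-in for GinNullCategory */

          #define CAT_NORM_KEY  0
          #define CAT_NULL_KEY  1

          /*
           * Build a proper category array from an isNull-style bool array instead
           * of reinterpreting the bool array in place: any nonzero bit pattern
           * counts as true, and the result stays valid even if bool's
           * representation ever changes.
           */
          static NullCategory *
          categories_from_null_flags(const bool *nullFlags, int nentries)
          {
              NullCategory *categories = malloc(nentries * sizeof(NullCategory));

              if (categories == NULL)
                  return NULL;

              for (int i = 0; i < nentries; i++)
                  categories[i] = nullFlags[i] ? CAT_NULL_KEY : CAT_NORM_KEY;

              return categories;
          }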
  3. 01 Jan, 2018 2 commits
  4. 31 Dec, 2017 2 commits
    • Merge coding of return/exit/continue cases in plpgsql's loop statements. · 3e724aac
      Tom Lane authored
      plpgsql's five different loop control statements contained three distinct
      implementations of the same (or what ought to be the same, at least)
      logic for handling return/exit/continue result codes from their child
      statements.  At best, that's trouble waiting to happen, and there seems
      no very good reason for the coding to be so different.  Refactor so that
      all the common logic is expressed in a single macro.
      
      Discussion: https://postgr.es/m/26314.1514670401@sss.pgh.pa.us
      3e724aac
    • Improve regression tests' code coverage for plpgsql control structures. · dd2243f2
      Tom Lane authored
      I noticed that our code coverage report showed considerable deficiency
      in test coverage for PL/pgSQL control statements.  Notably, both
      exec_stmt_block and most of the loop control statements had very poor
      coverage of handling of return/exit/continue result codes from their
      child statements; and exec_stmt_fori was seriously lacking in feature
      coverage, having no test that exercised its BY or REVERSE features,
      nor verification that its overflow defenses work.
      
      Now that we have some infrastructure for plpgsql-specific test scripts,
      the natural thing to do is make a new script rather than further extend
      plpgsql.sql.  So I created a new script plpgsql_control.sql with the
      charter to test plpgsql control structures, and moved a few existing
      tests there because they fell entirely under that charter.  I then
      added new test cases that exercise the bits of code complained of above.
      
      Of the five kinds of loop statements, only exec_stmt_while's result code
      handling is fully exercised by these tests.  That would be a deficiency
      as things stand, but a follow-on commit will merge the loop statements'
      result code handling into one implementation.  So testing each usage of
      that implementation separately seems redundant.
      
      In passing, also add a couple test cases to plpgsql.sql to more fully
      exercise plpgsql's code related to expanded arrays --- I had thought
      that area was sufficiently covered already, but the coverage report
      showed a couple of un-executed code paths.
      
      Discussion: https://postgr.es/m/26314.1514670401@sss.pgh.pa.us
      dd2243f2
  5. 29 Dec, 2017 7 commits
  6. 28 Dec, 2017 1 commit
    • Fix rare assertion failure in parallel hash join. · f83040c6
      Andres Freund authored
      When a backend runs out of inner tuples to hash, it should detach from
      grow_batch_barrier only after it has flushed all batches to disk and
      merged counters, not before.  Otherwise a concurrent backend in
      ExecParallelHashIncreaseNumBatches() could stop waiting for this
      backend and try to read tuples before they have been written.  This
      commit reorders those operations and should fix the assertion failures
      seen occasionally on the build farm since commit
      18042840.
      
      Author: Thomas Munro
      Discussion: https://postgr.es/m/E1eRwXy-0004IK-TO%40gemulon.postgresql.org
      f83040c6
  7. 27 Dec, 2017 5 commits
  8. 26 Dec, 2017 2 commits
  9. 25 Dec, 2017 1 commit
    • Add polygon opclass for SP-GiST · ff963b39
      Teodor Sigaev authored
      The polygon opclass uses the compress-method feature of SP-GiST added
      earlier; for now it is the only operator class that uses this feature.
      SP-GiST actually indexes the bounding boxes of the input polygons, so
      some of the supported operations are lossy. The opclass reuses most
      methods of the corresponding SP-GiST opclass over boxes and treats
      bounding boxes as points in 4D space.
      
      Bump catalog version.
      
      Authors: Nikita Glukhov and Alexander Korotkov, with minor editing by me
      Reviewed-By: all authors + Darafei Praliaskouski
      Discussion: https://www.postgresql.org/message-id/flat/54907069.1030506@sigaev.ru
      ff963b39
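
      Treating a bounding box as a point in 4D space just means using its four
      coordinates as a single indexing key.  A standalone sketch of that
      mapping (struct names are illustrative, not the opclass's actual code):

          typedef struct Box2D
          {
              double  xmin;
              double  ymin;
              double  xmax;
              double  ymax;
          } Box2D;

          typedef struct Point4D
          {
              double  coord[4];
          } Point4D;

          /* The bounding box of an indexed polygon becomes one point in 4D space. */
          static Point4D
          box_to_4d_point(Box2D box)
          {
              Point4D p;

              p.coord[0] = box.xmin;
              p.coord[1] = box.ymin;
              p.coord[2] = box.xmax;
              p.coord[3] = box.ymax;
              return p;
          }

      Because only the bounding box is indexed, operators that need exact
      polygon geometry must recheck the actual value, which is why some of the
      supported operations are lossy.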
  10. 24 Dec, 2017 1 commit
  11. 22 Dec, 2017 2 commits
  12. 21 Dec, 2017 3 commits
    • Minor edits to catalog files and scripts · 9373baa0
      Alvaro Herrera authored
      This fixes a few typos and small mistakes; it also cleans a few
      minor stylistic issues.  The biggest functional change is that
      Gen_fmgrtab.pl no longer knows the OID of language 'internal'.
      
      Author: John Naylor
      Discussion: https://postgr.es/m/CAJVSVGXAkwbk-A9QHHHf00N905kKisyQbaYwKqaRpze_gPXGfg@mail.gmail.com
      9373baa0
    • Adjust assertion in GetCurrentCommandId. · cce1ecfc
      Robert Haas authored
      currentCommandIdUsed is only used to skip redundant increments of the
      command counter, and CommandCounterIncrement() is categorically denied
      under parallelism anyway.  Therefore, it's OK for
      GetCurrentCommandId() to mark the counter value used, as long as it
      happens in the leader, not a worker.
      
      Prior to commit e9baa5e9, the slightly
      incorrect check didn't matter, but now it does.  A test case added by
      commit 18042840 uncovered the problem
      by accident; it caused failures with force_parallel_mode=on/regress.
      
      Report and review by Andres Freund.  Patch by me.
      
      Discussion: http://postgr.es/m/20171221143106.5lhtygohvmazli3x@alap3.anarazel.de
      cce1ecfc
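
      The adjusted rule is that marking the command counter as used is fine in
      the parallel leader but must never happen in a worker.  A standalone
      sketch of that check (variable and function names are stand-ins, not the
      real xact.c code):

          #include <assert.h>
          #include <stdbool.h>

          typedef unsigned int CommandId;

          static CommandId currentCommandId = 1;
          static bool      currentCommandIdUsed = false;
          static bool      is_parallel_worker = false;   /* true only in a worker process */

          static CommandId
          get_current_command_id(bool used)
          {
              if (used)
              {
                  /*
                   * Only the leader may mark the counter used; a worker doing so
                   * would imply a command-counter increment under parallelism.
                   */
                  assert(!is_parallel_worker);
                  currentCommandIdUsed = true;
              }
              return currentCommandId;
          }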
    • Rearrange execution of PARAM_EXTERN Params for plpgsql's benefit. · 6719b238
      Tom Lane authored
      This patch does three interrelated things:
      
      * Create a new expression execution step type EEOP_PARAM_CALLBACK
      and add the infrastructure needed for add-on modules to generate that.
      As discussed, the best control mechanism for that seems to be to add
      another hook function to ParamListInfo, which will be called by
      ExecInitExpr if it's supplied and a PARAM_EXTERN Param is found.
      For stand-alone expressions, we add a new entry point to allow the
      ParamListInfo to be specified directly, since it can't be retrieved
      from the parent plan node's EState.
      
      * Redesign the API for the ParamListInfo paramFetch hook so that the
      ParamExternData array can be entirely virtual.  This also lets us get rid
      of ParamListInfo.paramMask, instead leaving it to the paramFetch hook to
      decide which param IDs should be accessible or not.  plpgsql_param_fetch
      was already doing the identical masking check, so having callers do it too
      seemed redundant.  While I was at it, I added a "speculative" flag to
      paramFetch that the planner can specify as TRUE to avoid unwanted failures.
      This solves an ancient problem for plpgsql that it couldn't provide values
      of non-DTYPE_VAR variables to the planner for fear of triggering premature
      "record not assigned yet" or "field not found" errors during planning.
      
      * Rework plpgsql to get rid of the need for "unshared" parameter lists,
      by dint of turning the single ParamListInfo per estate into a nearly
      read-only data structure that doesn't instantiate any per-variable data.
      Instead, the paramFetch hook controls access to per-variable data and can
      make the right decisions on the fly, replacing the cases that we used to
      need multiple ParamListInfos for.  This might perhaps have been a
      performance loss on its own, but by using a paramCompile hook we can
      bypass plpgsql_param_fetch entirely during normal query execution.
      (It's now only called when, eg, we copy the ParamListInfo into a cursor
      portal.  copyParamList() or SerializeParamList() effectively instantiate
      the virtual parameter array as a simple physical array without a
      paramFetch hook, which is what we want in those cases.)  This allows
      reverting most of commit 6c82d8d1, though I kept the cosmetic
      code-consolidation aspects of that (eg the assign_simple_var function).
      
      Performance testing shows this to be at worst a break-even change,
      and it can provide wins ranging up to 20% in test cases involving
      accesses to fields of "record" variables.  The fact that values of
      such variables can now be exposed to the planner might produce wins
      in some situations, too, but I've not pursued that angle.
      
      In passing, remove the "parent" pointer from the arguments to
      ExecInitExprRec and related functions, instead storing that pointer in a
      transient field in ExprState.  The ParamListInfo pointer for a stand-alone
      expression is handled the same way; we'd otherwise have had to add
      yet another recursively-passed-down argument in expression compilation.
      
      Discussion: https://postgr.es/m/32589.1513706441@sss.pgh.pa.us
      6719b238
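
      The central idea of the redesigned paramFetch hook is that the parameter
      array can be entirely virtual: instead of pre-populating a physical
      ParamExternData array, the executor asks a callback for each parameter's
      value on demand, and the callback also decides which parameter IDs are
      visible.  A much-simplified standalone sketch of that shape (all types,
      fields, and signatures here are invented for illustration and do not
      match the actual ParamListInfo API):

          #include <stdbool.h>
          #include <stddef.h>

          typedef struct ParamValue
          {
              long    value;
              bool    isnull;
          } ParamValue;

          struct ParamSource;
          typedef ParamValue *(*param_fetch_hook) (struct ParamSource *source,
                                                   int paramid,
                                                   bool speculative,
                                                   ParamValue *workspace);

          typedef struct ParamSource
          {
              param_fetch_hook fetch;     /* NULL: the physical array below is authoritative */
              void       *fetch_arg;      /* per-caller state, e.g. a variable table */
              int         numParams;
              ParamValue  params[];       /* may go entirely unused when fetch is set */
          } ParamSource;

          /* Executor-side access: consult the hook if present, else the physical slot. */
          static ParamValue *
          get_param(ParamSource *source, int paramid, ParamValue *workspace)
          {
              if (paramid < 1 || paramid > source->numParams)
                  return NULL;            /* a hook can likewise refuse IDs it wants hidden */
              if (source->fetch != NULL)
                  return source->fetch(source, paramid, false, workspace);
              return &source->params[paramid - 1];
          }

      Copying such a list for a cursor portal then amounts to calling the hook
      once per parameter and materializing the results into a plain physical
      array with no hook, matching the copyParamList()/SerializeParamList()
      behavior described above.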