1. 10 Nov, 2015 1 commit
    • Tom Lane's avatar
      Improve our workaround for 'TeX capacity exceeded' in building PDF files. · 944b41fc
      Tom Lane authored
      In commit a5ec86a7 I wrote a quick hack
      that reduced the number of TeX string pool entries created while converting
      our documentation to PDF form.  That held the fort for a while, but as of
      HEAD we're back up against the same limitation.  It turns out that the
      original coding of \FlowObjectSetup actually results in *three* string pool
      entries being generated for every "flow object" (that is, potential
      cross-reference target) in the documentation, and my previous hack only got
      rid of one of them.  With a little more care, we can reduce the string
      count to one per flow object plus one per actually-cross-referenced flow
      object (about 115000 + 5000 as of current HEAD); that should work until
      the documentation volume roughly doubles from where it is today.
      
      As a not-incidental side benefit, this change also causes pdfjadetex to
      stop emitting unreferenced hyperlink anchors (bookmarks) into the PDF file.
      It had been making one willy-nilly for every flow object; now it's just one
      per actually-cross-referenced object.  This results in close to a 2X
      savings in PDF file size.  We will still want to run the output through
      "jpdftweak" to get it to be compressed; but we no longer need removal of
      unreferenced bookmarks, so we might be able to find a quicker tool for
      that step.
      
      Although the failure only affects HEAD and US-format output at the moment,
      9.5 cannot be more than a few pages short of failing likewise, so it
      will inevitably fail after a few rounds of minor-version release notes.
      I don't have a lot of faith that we'll never hit the limit in the older
      branches; and anyway it would be nice to get rid of jpdftweak across the
      board.  Therefore, back-patch to all supported branches.
      944b41fc
  2. 09 Nov, 2015 4 commits
  3. 08 Nov, 2015 3 commits
    • Andres Freund's avatar
      Set replication origin when decoding commit records. · f3a764b0
      Andres Freund authored
      By accident the replication origin was not set properly in
      DecodeCommit(). That's bad because the origin is passed to the output
      plugin's origin filter, and accessible from the output plugin via
      ReorderBufferTXN->origin_id.  Accessing the origin of individual changes
      worked before the fix, which is why this wasn't noticed earlier.
      
      Reported-By: Craig Ringer
      Author: Craig Ringer
      Discussion: CAMsr+YFhBJLp=qfSz3-J+0P1zLkE8zNXM2otycn20QRMx380gw@mail.gmail.com
      Backpatch: 9.5, where replication origins were introduced
      f3a764b0
    • Noah Misch's avatar
      Don't connect() to a wildcard address in test_postmaster_connection(). · fed19f31
      Noah Misch authored
      At least OpenBSD, NetBSD, and Windows don't support it.  This repairs
      pg_ctl for listen_addresses='0.0.0.0' and listen_addresses='::'.  Since
      pg_ctl prefers to test a Unix-domain socket, Windows users are most
      likely to need this change.  Back-patch to 9.1 (all supported versions).
      This could change pg_ctl interaction with loopback-interface firewall
      rules.  Therefore, in 9.4 and earlier (released branches), activate the
      change only on known-affected platforms.
      
      Reported (bug #13611) and designed by Kondo Yuta.
      fed19f31
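      A minimal standalone sketch of the idea behind the fix (probe_host() and
      the sample addresses are illustrative, not pg_ctl's actual code): when
      listen_addresses names a wildcard address, probe the matching loopback
      address instead of calling connect() on the wildcard itself.

      #include <stdio.h>
      #include <string.h>

      static const char *
      probe_host(const char *listen_addr)
      {
          if (strcmp(listen_addr, "0.0.0.0") == 0)
              return "127.0.0.1";     /* IPv4 wildcard -> IPv4 loopback */
          if (strcmp(listen_addr, "::") == 0)
              return "::1";           /* IPv6 wildcard -> IPv6 loopback */
          return listen_addr;         /* anything else is probed as-is */
      }

      int
      main(void)
      {
          const char *addrs[] = {"0.0.0.0", "::", "192.0.2.10"};

          for (int i = 0; i < 3; i++)
              printf("listen_addresses='%s' -> connect to %s\n",
                     addrs[i], probe_host(addrs[i]));
          return 0;
      }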
    • Robert Haas's avatar
      Remove set-but-not-used variables. · fba60e57
      Robert Haas authored
      Reported by both Peter Eisentraut and Kevin Grittner.
      fba60e57
  4. 07 Nov, 2015 6 commits
    • Tom Lane's avatar
      Update 9.5 release notes through today. · ad9fad7b
      Tom Lane authored
      ad9fad7b
    • Tom Lane's avatar
      Add "xid <> xid" and "xid <> int4" operators. · c5e86ea9
      Tom Lane authored
      The corresponding "=" operators have been there a long time, and not
      having their negators is a bit of a nuisance.
      
      Michael Paquier
      c5e86ea9
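      A hedged usage sketch from a libpq client exercising the new negators
      (the connection string and the literal xid values are placeholders):

      #include <stdio.h>
      #include <libpq-fe.h>

      int
      main(void)
      {
          PGconn   *conn = PQconnectdb("dbname=postgres");
          PGresult *res;

          if (PQstatus(conn) != CONNECTION_OK)
          {
              fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
              PQfinish(conn);
              return 1;
          }

          /* Only the "=" forms existed before; the "<>" forms now work too. */
          res = PQexec(conn, "SELECT '10'::xid <> '10'::xid, '10'::xid <> 11");
          if (PQresultStatus(res) == PGRES_TUPLES_OK)
              printf("xid <> xid: %s, xid <> int4: %s\n",
                     PQgetvalue(res, 0, 0), PQgetvalue(res, 0, 1));

          PQclear(res);
          PQfinish(conn);
          return 0;
      }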
    • Tom Lane's avatar
      Rename PQsslAttributes() to PQsslAttributeNames(), and const-ify fully. · 9042f583
      Tom Lane authored
      Per discussion, the original name was a bit misleading, and
      PQsslAttributeNames() seems more apropos.  It's not quite too late to
      change this in 9.5, so let's change it while we can.
      
      Also, make sure that the pointer array is const, not only the pointed-to
      strings.
      
      Minor documentation wordsmithing while at it.
      
      Lars Kanis, slight adjustments by me
      9042f583
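      A small libpq sketch of the renamed, fully const-qualified API (the
      connection string is a placeholder): PQsslAttributeNames() yields a
      NULL-terminated array of names, each usable with PQsslAttribute().

      #include <stdio.h>
      #include <libpq-fe.h>

      int
      main(void)
      {
          PGconn *conn = PQconnectdb("dbname=postgres sslmode=prefer");
          const char *const *names = PQsslAttributeNames(conn);

          for (int i = 0; names[i] != NULL; i++)
          {
              const char *val = PQsslAttribute(conn, names[i]);

              printf("%s = %s\n", names[i], val ? val : "(not available)");
          }

          PQfinish(conn);
          return 0;
      }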
    • Tom Lane's avatar
      Fix enforcement of restrictions inside regexp lookaround constraints. · a43b4ab1
      Tom Lane authored
      Lookahead and lookbehind constraints aren't allowed to contain backrefs,
      and parentheses within them are always considered non-capturing.  Or so
      says the manual.  But the regexp parser forgot about these rules once
      inside a parenthesized subexpression, so that constructs like (\w)(?=(\1))
      were accepted (but then not correctly executed --- a case like this acted
      like (\w)(?=\w), without any enforcement that the two \w's match the same
      text).  And in (?=((foo))) the innermost parentheses would be counted as
      capturing parentheses, though no text would ever be captured for them.
      
      To fix, properly pass down the "type" argument to the recursive invocation
      of parse().
      
      Back-patch to all supported branches; it was agreed that silent
      misexecution of such patterns is worse than throwing an error, even though
      new errors in minor releases are generally not desirable.
      a43b4ab1
    • Robert Haas's avatar
      Try to convince gcc that TupleQueueRemap never falls off the end. · 8d7396e5
      Robert Haas authored
      Without this, MacOS gcc version 4.2.1 isn't convinced.
      8d7396e5
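      A generic illustration of the compiler issue, not the actual
      TupleQueueRemap code: an exhaustive switch over an enum can still draw a
      "control reaches end of non-void function" warning from older gcc, and
      an unreachable trailing return silences it.

      #include <stdio.h>

      typedef enum { REMAP_NONE, REMAP_ARRAY, REMAP_RECORD } RemapKind;

      static int
      remap_cost(RemapKind kind)
      {
          switch (kind)
          {
              case REMAP_NONE:
                  return 0;
              case REMAP_ARRAY:
                  return 1;
              case REMAP_RECORD:
                  return 2;
          }

          /* never reached, but keeps older compilers (e.g. gcc 4.2.1) quiet */
          return -1;
      }

      int
      main(void)
      {
          printf("%d\n", remap_cost(REMAP_RECORD));
          return 0;
      }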
    • Robert Haas's avatar
      When completing ALTER INDEX .. SET, add an equals sign also. · af9773cf
      Robert Haas authored
      Jeff Janes
      af9773cf
  5. 06 Nov, 2015 6 commits
    • Robert Haas's avatar
      Modify tqueue infrastructure to support transient record types. · 6e71dd7c
      Robert Haas authored
      Commit 4a4e6893, which introduced this
      mechanism, failed to account for the fact that the RECORD pseudo-type
      uses transient typmods that are only meaningful within a single
      backend.  Transferring such tuples without modification between two
      cooperating backends does not work.  This commit installs a system
      for passing the tuple descriptors over the same shm_mq being used to
      send the tuples themselves.  The two sides might not assign the same
      transient typmod to any given tuple descriptor, so we must also
      substitute the appropriate receiver-side typmod for the one used by
      the sender.  That adds some CPU overhead, but still seems better than
      being unable to pass records between cooperating parallel processes.
      
      Along the way, move the logic for handling multiple tuple queues from
      tqueue.c to nodeGather.c; tqueue.c now provides a TupleQueueReader,
      which reads from a single queue, rather than a TupleQueueFunnel, which
      potentially reads from multiple queues.  This change was suggested
      previously as a way to make sure that nodeGather.c rather than tqueue.c
      had policy control over the order in which to read from queues, but
      it wasn't clear to me until now how good an idea it was.  typmod
      mapping needs to be performed separately for each queue, and it is
      much simpler if the tqueue.c code handles that and leaves multiplexing
      multiple queues to higher layers of the stack.
      6e71dd7c
    • Robert Haas's avatar
      Remove unnecessary cast in previous commit. · cbb82e37
      Robert Haas authored
      Noted by Kyotaro Horiguchi, who also reviewed the previous patch, but
      I failed to notice his review before committing.
      cbb82e37
    • Robert Haas's avatar
      Add sort support routine for the UUID data type. · a76ef15d
      Robert Haas authored
      This introduces a simple encoding scheme to produce abbreviated keys:
      pack as many bytes of each UUID as will fit into a Datum.  On
      little-endian machines, a byteswap is also performed; the abbreviated
      comparator can therefore just consist of a simple 3-way unsigned integer
      comparison.
      
      The purpose of this change is to speed up sorting data on a column
      of type UUID.
      
      Peter Geoghegan
      a76ef15d
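      A standalone sketch of the abbreviation idea (the function name and
      sample values are illustrative; the backend works on Datum and byteswaps
      on little-endian machines, which yields the same ordering as the
      explicit big-endian packing used here):

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      /* Pack the leading bytes of a 16-byte UUID into an integer so that
       * unsigned comparison of the keys matches memcmp() order on the prefix. */
      static uint64_t
      uuid_abbrev_key(const unsigned char uuid[16])
      {
          uint64_t key = 0;

          for (size_t i = 0; i < sizeof(key); i++)
              key = (key << 8) | uuid[i];
          return key;
      }

      int
      main(void)
      {
          unsigned char a[16] = {0x01, 0x23};     /* remaining bytes are zero */
          unsigned char b[16] = {0x01, 0x24};

          printf("abbreviated: a < b is %d\n",
                 uuid_abbrev_key(a) < uuid_abbrev_key(b));
          printf("full memcmp: a < b is %d\n", memcmp(a, b, 16) < 0);
          return 0;
      }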
    • Stephen Frost's avatar
      Set include_realm=1 default in parse_hba_line · 5644419b
      Stephen Frost authored
      With include_realm=1 being set down in parse_hba_auth_opt, if multiple
      options are passed on the pg_hba line, such as:
      
      host all     all    0.0.0.0/0    gss include_realm=0 krb_realm=XYZ.COM
      
      We would mistakenly reset include_realm back to 1.  Instead, we need to
      set include_realm=1 up in parse_hba_line, prior to parsing any of the
      additional options.
      
      Discovered by Jeff McCormick during testing.
      
      Bug introduced by 9a088417.
      
      Back-patch to 9.5
      5644419b
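      A generic sketch of the shape of the fix, not the actual hba.c code:
      line-level defaults are established once, before the per-option loop,
      so an explicit include_realm=0 can no longer be clobbered afterwards.

      #include <stdio.h>
      #include <string.h>

      struct hba_opts
      {
          int  include_realm;
          char krb_realm[64];
      };

      static void
      parse_line_options(struct hba_opts *opts, const char *const *tokens,
                         int ntokens)
      {
          /* defaults first, once per line ... */
          opts->include_realm = 1;
          opts->krb_realm[0] = '\0';

          /* ... then let each explicit option override them */
          for (int i = 0; i < ntokens; i++)
          {
              if (strcmp(tokens[i], "include_realm=0") == 0)
                  opts->include_realm = 0;
              else if (strncmp(tokens[i], "krb_realm=", 10) == 0)
                  snprintf(opts->krb_realm, sizeof(opts->krb_realm), "%s",
                           tokens[i] + 10);
          }
      }

      int
      main(void)
      {
          const char *tokens[] = {"include_realm=0", "krb_realm=XYZ.COM"};
          struct hba_opts opts;

          parse_line_options(&opts, tokens, 2);
          printf("include_realm=%d krb_realm=%s\n",
                 opts.include_realm, opts.krb_realm);
          return 0;
      }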
    • Robert Haas's avatar
      pg_size_pretty: Format negative values similar to positive ones. · 8a1fab36
      Robert Haas authored
      Previously, negative values were always displayed in bytes, regardless
      of how large they were.
      
      Adrian Vondendriesch, reviewed by Julien Rouhaud and myself
      8a1fab36
    • Robert Haas's avatar
      Document interaction of bgworkers with LISTEN/NOTIFY. · dde5f09f
      Robert Haas authored
      Thomas Munro and Robert Haas, reviewed by Haribabu Kommi
      dde5f09f
  6. 05 Nov, 2015 5 commits
    • Tom Lane's avatar
      Fix erroneous hash calculations in gin_extract_jsonb_path(). · b23af458
      Tom Lane authored
      The jsonb_path_ops code calculated hash values inconsistently in some cases
      involving nested arrays and objects.  This would result in queries possibly
      not finding entries that they should find, when using a jsonb_path_ops GIN
      index for the search.  The problem cases involve JSONB values that contain
      both scalars and sub-objects at the same nesting level, for example an
      array containing both scalars and sub-arrays.  To fix, reset the current
      stack->hash after processing each value or sub-object, not before; and
      don't try to be cute about the outermost level's initial hash.
      
      Correcting this means that existing jsonb_path_ops indexes may now be
      inconsistent with the new hash calculation code.  The symptom is the same
      --- searches not finding entries they should find --- but the specific
      rows affected are likely to be different.  Users will need to REINDEX
      jsonb_path_ops indexes to make sure that all searches work as expected.
      
      Per bug #13756 from Daniel Cheng.  Back-patch to 9.4 where the faulty
      logic was introduced.
      b23af458
    • Tom Lane's avatar
      Fix memory leaks in PL/Python. · 8c75ad43
      Tom Lane authored
      Previously, plpython was in the habit of allocating a lot of stuff in
      TopMemoryContext, and it was very slipshod about making sure that stuff
      got cleaned up; in particular, use of TopMemoryContext as fn_mcxt for
      function calls represents an unfixable leak, since we generally don't
      know what the called function might have allocated in fn_mcxt.  This
      results in session-lifespan leakage in certain usage scenarios, as for
      example in a case reported by Ed Behn back in July.
      
      To fix, get rid of all the retail allocations in TopMemoryContext.
      All long-lived allocations are now made in sub-contexts that are
      associated with specific objects (either pl/python procedures, or
      Python-visible objects such as cursors and plans).  We can clean these
      up when the associated object is deleted.
      
      I went so far as to get rid of PLy_malloc completely.  There were a
      couple of places where it could still have been used safely, but on
      the whole it was just an invitation to bad coding.
      
      Haribabu Kommi, based on a draft patch by Heikki Linnakangas;
      some further work by me
      8c75ad43
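      A backend-style sketch of the pattern adopted here (the struct and
      function names are simplified stand-ins for the real plpython ones, and
      the fragment compiles only against the server headers): each long-lived
      object gets its own child context of TopMemoryContext, and deleting
      that context releases everything the object ever allocated.

      #include "postgres.h"
      #include "utils/memutils.h"

      typedef struct PLyPlanLike
      {
          MemoryContext cxt;          /* all of this object's allocations */
          char         *querytext;
      } PLyPlanLike;

      static PLyPlanLike *
      make_plan_like(const char *query)
      {
          MemoryContext cxt;
          PLyPlanLike *plan;

          cxt = AllocSetContextCreate(TopMemoryContext,
                                      "PL/Python plan-like object",
                                      ALLOCSET_DEFAULT_MINSIZE,
                                      ALLOCSET_DEFAULT_INITSIZE,
                                      ALLOCSET_DEFAULT_MAXSIZE);
          plan = (PLyPlanLike *) MemoryContextAlloc(cxt, sizeof(PLyPlanLike));
          plan->cxt = cxt;
          plan->querytext = MemoryContextStrdup(cxt, query);
          return plan;
      }

      static void
      free_plan_like(PLyPlanLike *plan)
      {
          /* one call releases everything; no retail bookkeeping needed */
          MemoryContextDelete(plan->cxt);
      }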
    • Robert Haas's avatar
      Pass extra data to bgworkers, and use this to fix parallel contexts. · 64b2e7ad
      Robert Haas authored
      Up until now, the total amount of data that could be passed to a
      background worker at startup was one datum, which can be as small as
      4 bytes on some systems.  That's enough to pass a dsm_handle or an
      array index, but not much else.  Add a bgw_extra field to the
      BackgroundWorker struct, allowing up to 128 bytes to be passed to
      a new worker on any platform.
      
      Use this to fix a problem I recently discovered with the parallel
      context machinery added in 9.5: the master assigns each worker an
      array index, and each worker subsequently assigns itself an array
      index, and there's nothing to guarantee that the two sets of indexes
      match, leading to chaos.
      
      Normally, I would not back-patch the change to add bgw_extra, since it
      is basically a feature addition.  However, since 9.5 is still in beta
      and there seems to be no other sensible way to repair the broken
      parallel context machinery, back-patch to 9.5.  Existing background
      worker code can ignore the bgw_extra field without a problem, but
      might need to be recompiled since the structure size has changed.
      
      Report and patch by me.  Review by Amit Kapila.
      64b2e7ad
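      A sketch of how extension code might use the new field (the struct and
      helper names are illustrative, and the version-specific entry-point
      fields of BackgroundWorker are elided): up to BGW_EXTRALEN (128) bytes
      are copied in at registration time and read back by the worker from
      MyBgworkerEntry->bgw_extra.

      #include "postgres.h"
      #include "postmaster/bgworker.h"

      typedef struct MyWorkerSetup
      {
          int    worker_index;        /* assigned by the launching backend */
          uint32 flags;
      } MyWorkerSetup;

      static void
      launch_one_worker(MyWorkerSetup *setup, Datum main_arg)
      {
          BackgroundWorker worker;
          BackgroundWorkerHandle *handle;

          memset(&worker, 0, sizeof(worker));
          snprintf(worker.bgw_name, BGW_MAXLEN, "example worker");
          worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
          worker.bgw_start_time = BgWorkerStart_ConsistentState;
          worker.bgw_restart_time = BGW_NEVER_RESTART;
          worker.bgw_main_arg = main_arg;
          /* ... set the entry-point fields appropriate to your branch ... */

          StaticAssertStmt(sizeof(MyWorkerSetup) <= BGW_EXTRALEN,
                           "setup data must fit in bgw_extra");
          memcpy(worker.bgw_extra, setup, sizeof(MyWorkerSetup));

          if (!RegisterDynamicBackgroundWorker(&worker, &handle))
              elog(ERROR, "could not register background worker");
      }

      /* In the worker's main function: */
      static void
      read_setup(MyWorkerSetup *setup)
      {
          memcpy(setup, MyBgworkerEntry->bgw_extra, sizeof(MyWorkerSetup));
      }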
    • Tom Lane's avatar
      Improve implementation of GEQO's init_tour() function. · 59464bd6
      Tom Lane authored
      Rather than filling a temporary array and then copying values to the
      output array, we can generate the required random permutation in-place
      using the Fisher-Yates shuffle algorithm.  This is shorter as well as
      more efficient than before.  It's pretty unlikely that anyone would
      notice a speed improvement, but shorter code is better.
      
      Nathan Wagner, edited a bit by me
      59464bd6
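      A standalone sketch of the in-place construction (random()/srandom()
      stand in for GEQO's own PRNG, and elements here are numbered from 0
      rather than 1): the "inside-out" Fisher-Yates shuffle builds a uniform
      random permutation with no temporary array and one random draw per
      element.

      #include <stdio.h>
      #include <stdlib.h>

      /* Fill tour[0..n-1] with a random permutation of 0..n-1, in place. */
      static void
      random_permutation(int *tour, int n)
      {
          for (int i = 0; i < n; i++)
          {
              int j = (int) (random() % (i + 1));   /* uniform in 0..i */

              if (j != i)
                  tour[i] = tour[j];  /* move slot j's occupant out of the way */
              tour[j] = i;            /* drop the new element into slot j */
          }
      }

      int
      main(void)
      {
          int tour[10];

          srandom(42);
          random_permutation(tour, 10);
          for (int i = 0; i < 10; i++)
              printf("%d ", tour[i]);
          printf("\n");
          return 0;
      }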
    • Peter Eisentraut's avatar
      Update spelling of COPY options · 7bd099d5
      Peter Eisentraut authored
      The preferred spelling was changed from FORCE QUOTE to FORCE_QUOTE and
      the like, but some code was still referring to the old spellings.
      7bd099d5
  7. 04 Nov, 2015 1 commit
  8. 03 Nov, 2015 8 commits
    • Tom Lane's avatar
      Allow postgres_fdw to ship extension funcs/operators for remote execution. · d8949416
      Tom Lane authored
      The user can whitelist specified extension(s) in the foreign server's
      options, whereupon we will treat immutable functions and operators of those
      extensions as candidates to be sent for remote execution.
      
      Whitelisting an extension in this way basically promises that the extension
      exists on the remote server and behaves compatibly with the local instance.
      We have no way to prove that formally, so we have to rely on the user to
      get it right.  But this seems like something that people can usually get
      right in practice.
      
      We might in future allow functions and operators to be whitelisted
      individually, but extension granularity is a very convenient special case,
      so it got done first.
      
      The patch as-committed lacks any regression tests, which is unfortunate,
      but introducing dependencies on other extensions for testing purposes
      would break "make installcheck" scenarios, which is worse.  I have some
      ideas about klugy ways around that, but it seems like material for a
      separate patch.  For the moment, leave the problem open.
      
      Paul Ramsey, hacked up a bit more by me
      d8949416
    • Robert Haas's avatar
      Improve comments about abbreviation abort. · ee44cb75
      Robert Haas authored
      Peter Geoghegan
      ee44cb75
    • Robert Haas's avatar
      postgres_fdw: Add ORDER BY to some remote SQL queries. · f18c944b
      Robert Haas authored
      If the join problem's entire ORDER BY clause can be pushed to the
      remote server, consider a path that adds this ORDER BY clause.  If
      use_remote_estimate is on, we cost this path using an additional
      remote EXPLAIN.  If not, we just estimate that the path costs 20%
      more, which is intended to be large enough that we won't request a
      remote sort when it's not helpful, but small enough that we'll have
      the remote side do the sort when in doubt.  In some cases, the remote
      sort might actually be free, because the remote query plan might
      happen to produce output that is ordered the way we need, but without
      remote estimates we have no way of knowing that.
      
      It might also be useful to request sorted output from the remote side
      if it enables an efficient merge join, but this patch doesn't attempt
      to handle that case.
      
      Ashutosh Bapat with revisions by me.  Also reviewed by Fabrízio de Royes
      Mello and Jeevan Chalke.
      f18c944b
    • Tom Lane's avatar
      Remove obsolete advice about doubling backslashes in regex escapes. · fc0b8935
      Tom Lane authored
      Standard-conforming literals have been the default for long enough that
      it no longer seems necessary to go out of our way to tell people to write
      regex escapes illegibly.
      fc0b8935
    • Tom Lane's avatar
      Code + docs review for unicode linestyle patch. · a69b0b2c
      Tom Lane authored
      Fix some brain fade in commit a2dabf0e: erroneous variable names
      in docs, rearrangements that made sentences less clear not more so,
      undocumented and poorly-chosen-anyway API behaviors of subroutines,
      bad grammar in error messages, copy-and-paste faults.
      
      Albe Laurenz and Tom Lane
      a69b0b2c
    • Robert Haas's avatar
      shm_mq: Third attempt at fixing nowait behavior in shm_mq_receive. · 4efe26cb
      Robert Haas authored
      Commit a1480ec1 purported to fix the
      problems with commit b2ccb5f4, but it
      didn't completely fix them.  The problem is that the checks were
      performed in the wrong order, leading to a race condition.  If the
      sender attached, sent a message, and detached after the receiver
      called shm_mq_get_sender and before the receiver called
      shm_mq_counterparty_gone, we'd incorrectly return SHM_MQ_DETACHED
      before all messages were read.  Repair by reversing the order of
      operations, and add a long comment explaining why this new logic is
      (hopefully) correct.
      4efe26cb
    • Robert Haas's avatar
      Correct tiny inaccuracy in strxfrm cache comment. · 0279f62f
      Robert Haas authored
      Peter Geoghegan
      0279f62f
    • Tom Lane's avatar
      Remove some more dead Alpha-specific code. · 620ac88d
      Tom Lane authored
      620ac88d
  9. 02 Nov, 2015 2 commits
    • Robert Haas's avatar
      Fix problems with ParamListInfo serialization mechanism. · 1efc7e53
      Robert Haas authored
      Commit d1b7c1ff introduced a mechanism
      for serializing a ParamListInfo structure to be passed to a parallel
      worker.  However, this mechanism failed to handle external expanded
      values, as pointed out by Noah Misch.  Repair.
      
      Moreover, plpgsql_param_fetch requires adjustment because the
      serialization mechanism needs it to skip evaluating unused parameters
      just as we would do when it is called from copyParamList, but params
      == estate->paramLI in that case.  To fix, make the bms_is_member test
      in that function unconditional.
      
      Finally, have setup_param_list set a new ParamListInfo field,
      paramMask, to the parameters actually used in the expression, so that
      we don't try to fetch those that are not needed when serializing a
      parameter list.  This isn't necessary for correctness, but it makes
      the performance of the parallel executor code comparable to what we
      do for cases involving cursors.
      
      Design suggestions and extensive review by Noah Misch.  Patch by me.
      1efc7e53
    • Kevin Grittner's avatar
      Add RMV to list of commands taking AE lock. · bf25fb2f
      Kevin Grittner authored
      Backpatch to 9.3, where it was initially omitted.
      
      Craig Ringer, with minor adjustment by Kevin Grittner
      bf25fb2f
  10. 31 Oct, 2015 1 commit
    • Kevin Grittner's avatar
      Fix serialization anomalies due to race conditions on INSERT. · 585e2a3b
      Kevin Grittner authored
      On insert the CheckForSerializableConflictIn() test was performed
      before the page(s) which were going to be modified had been locked
      (with an exclusive buffer content lock).  If another process
      acquired a relation SIReadLock on the heap and scanned to a page on
      which an insert was going to occur before the page was so locked,
      a rw-conflict would be missed, which could allow a serialization
      anomaly to go undetected.  The window between the check and the page
      lock was small, so the bug was generally not noticed unless there
      was high concurrency with multiple processes inserting into the
      same table.
      
      This was reported by Peter Bailis as bug #11732, by Sean Chittenden
      as bug #13667, and by others.
      
      The race condition was eliminated in heap_insert() by moving the
      check down below the acquisition of the buffer lock, which had been
      the very next statement.  Because of the loop locking and unlocking
      multiple buffers in heap_multi_insert() a check was added after all
      inserts were completed.  The check before the start of the inserts
      was left because it might avoid a large amount of work to detect a
      serialization anomaly before performing all of the inserts and
      the related WAL logging.
      
      While investigating this bug, other SSI bugs which were even harder
      to hit in practice were noticed and fixed, an unnecessary check
      (covered by another check, so redundant) was removed from
      heap_update(), and comments were improved.
      
      Back-patch to all supported branches.
      
      Kevin Grittner and Thomas Munro
      585e2a3b
  11. 30 Oct, 2015 3 commits
    • Tom Lane's avatar
      Implement lookbehind constraints in our regular-expression engine. · 12c9a040
      Tom Lane authored
      A lookbehind constraint is like a lookahead constraint in that it consumes
      no text; but it checks for existence (or nonexistence) of a match *ending*
      at the current point in the string, rather than one *starting* at the
      current point.  This is a long-requested feature since it exists in many
      other regex libraries, but Henry Spencer had never got around to
      implementing it in the code we use.
      
      Just making it work is actually pretty trivial; but naive copying of the
      logic for lookahead constraints leads to code that often spends O(N^2) time
      to scan an N-character string, because we have to run the match engine
      from string start to the current probe point each time the constraint is
      checked.  In typical use-cases a lookbehind constraint will be written at
      the start of the regex and hence will need to be checked at every character
      --- so O(N^2) work overall.  To fix that, I introduced a third copy of the
      core DFA matching loop, paralleling the existing longest() and shortest()
      loops.  This version, matchuntil(), can suspend and resume matching given
      a couple of pointers' worth of storage space.  So we need only run it
      across the string once, stopping at each interesting probe point and then
      resuming to advance to the next one.
      
      I also put in an optimization that simplifies one-character lookahead and
      lookbehind constraints, such as "(?=x)" or "(?<!\w)", into AHEAD and BEHIND
      constraints, which already existed in the engine.  This avoids the overhead
      of the LACON machinery entirely for these rather common cases.
      
      The net result is that lookbehind constraints run a factor of three or so
      slower than Perl's for multi-character constraints, but faster than Perl's
      for one-character constraints ... and they work fine for variable-length
      constraints, which Perl gives up on entirely.  So that's not bad from a
      competitive perspective, and there's room for further optimization if
      anyone cares.  (In reality, raw scan rate across a large input string is
      probably not that big a deal for Postgres usage anyway; so I'm happy if
      it's linear.)
      12c9a040
    • Robert Haas's avatar
      doc: security_barrier option is a Boolean, not a string. · c5057b2b
      Robert Haas authored
      Mistake introduced by commit 5bd91e3a.
      
      Hari Babu
      c5057b2b
    • Robert Haas's avatar
      Update parallel executor support to reuse the same DSM. · 3a1f8611
      Robert Haas authored
      Commit b0b0d84b purported to make it
      possible to relaunch workers using the same parallel context, but it had
      an unpleasant race condition: we might reinitialize after the workers
      have sent their last control message but before they have detached the
      DSM, leading to crashes.  Repair by introducing a new ParallelContext
      operation, ReinitializeParallelDSM.
      
      Adjust execParallel.c to use this new support, so that we can rescan a
      Gather node by relaunching workers but without needing to recreate the
      DSM.
      
      Amit Kapila, with some adjustments by me.  Extracted from latest parallel
      sequential scan patch.
      3a1f8611
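      A sketch of the relaunch pattern this commit makes possible (context
      creation and error handling are elided, since CreateParallelContext's
      signature varies across branches): the same DSM segment is reset with
      ReinitializeParallelDSM and reused for the next launch instead of being
      torn down and rebuilt.

      #include "postgres.h"
      #include "access/parallel.h"

      static void
      run_two_passes(ParallelContext *pcxt)
      {
          /* first pass: set up the DSM once and launch the workers */
          InitializeParallelDSM(pcxt);
          LaunchParallelWorkers(pcxt);
          WaitForParallelWorkersToFinish(pcxt);

          /* rescan: reset per-launch state in the existing DSM and relaunch */
          ReinitializeParallelDSM(pcxt);
          LaunchParallelWorkers(pcxt);
          WaitForParallelWorkersToFinish(pcxt);

          DestroyParallelContext(pcxt);
      }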