1. 05 Sep, 2017 5 commits
    • Support retaining data dirs on successful TAP tests · 90627cf9
      Peter Eisentraut authored
      This moves the test data directories from randomly named temporary
      directories to statically named ones, to make debugging easier.  The
      data directory will be retained if tests fail or the test
      code dies/exits with failure, and is automatically removed on the next
      make check.
      
      If the environment variable PG_TEST_NOCLEAN is defined, the data
      directories will be retained regardless of test or exit status.
      
      Author: Daniel Gustafsson <daniel@yesql.se>
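      
      A minimal sketch of the retention rule described above.  The real logic
      lives in the Perl TAP framework, not in C; this is only an analogue, and
      the helper name is hypothetical.
      
        /* Keep the data directory when PG_TEST_NOCLEAN is set or the tests
         * failed; otherwise the next "make check" is free to remove it. */
        #include <stdbool.h>
        #include <stdlib.h>
        
        static bool
        keep_test_datadir(bool tests_failed)
        {
            if (getenv("PG_TEST_NOCLEAN") != NULL)
                return true;            /* retain regardless of test status */
            return tests_failed;        /* retain only to aid debugging */
        }
      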
    • In psql, use PSQL_PAGER in preference to PAGER, if it's set. · 5e8304fd
      Tom Lane authored
      This allows the user's environment to set up a psql-specific choice
      of pager, in much the same way that we provide PSQL_EDITOR to allow
      a psql-specific override of the more widely known EDITOR variable.
      
      Pavel Stehule, reviewed by Thomas Munro
      
      Discussion: https://postgr.es/m/CAFj8pRD3RRk9S1eRbnGm_T6brc3Ss5mohraNzTSJquzx+pmtKA@mail.gmail.com
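      
      A minimal sketch of the lookup order this introduces, assuming a
      built-in fallback of "more"; an illustration only, not psql's actual
      code.
      
        #include <stdio.h>
        #include <stdlib.h>
        
        /* PSQL_PAGER, if set, wins over the generic PAGER. */
        static const char *
        choose_pager(void)
        {
            const char *pager = getenv("PSQL_PAGER");   /* psql-specific override */
        
            if (pager == NULL)
                pager = getenv("PAGER");                /* general-purpose fallback */
            if (pager == NULL)
                pager = "more";                         /* assumed default */
            return pager;
        }
        
        int
        main(void)
        {
            printf("pager: %s\n", choose_pager());
            return 0;
        }
      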
    • Correct base backup throttling · ebd346ca
      Alvaro Herrera authored
      Throttling for sending a base backup in walsender is broken for the case
      where there is a lot of WAL traffic, because the latch used to put the
      walsender to sleep is also signalled by regular WAL traffic (and each
      signal causes an additional batch of data to be sent); the net effect is
      that there is little or no actual throttling.  This is undesirable, so
      rewrite the sleep into a loop to achieve the desired effect.
      
      Author: Jeff Janes, small tweaks by me
      Reviewed-by: Antonin Houska
      Discussion: https://postgr.es/m/CAMkU=1xH6mde-yL-Eo1TKBGNd0PB1-TMxvrNvqcAkN-qr2E9mw@mail.gmail.com
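      
      The fix is conceptually a sleep loop that tolerates spurious wake-ups.
      A self-contained sketch of that idea using plain POSIX calls (the real
      code uses the walsender's latch, not signals; the helper name is
      hypothetical):
      
        #include <errno.h>
        #include <time.h>
        
        /* Sleep for the full throttling interval even if woken early (here by
         * a signal; in walsender, by latch wake-ups caused by WAL traffic). */
        static void
        throttle_sleep(long sleep_usec)
        {
            struct timespec req;
            struct timespec rem;
        
            req.tv_sec = sleep_usec / 1000000L;
            req.tv_nsec = (sleep_usec % 1000000L) * 1000L;
        
            while (nanosleep(&req, &rem) == -1 && errno == EINTR)
                req = rem;      /* interrupted: sleep again for what remains */
        }
      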
    • Add psql variables showing server version and psql version. · 9ae9d8c1
      Tom Lane authored
      We already had a psql variable VERSION that shows the verbose form of
      psql's own version.  Add VERSION_NAME to show the short form (e.g.,
      "11devel") and VERSION_NUM to show the numeric form (e.g., 110000).
      Also add SERVER_VERSION_NAME and SERVER_VERSION_NUM to show the short and
      numeric forms of the server's version.  (We'd probably add SERVER_VERSION
      with the verbose string if it were readily available; but adding another
      network round trip to get it seems too expensive.)
      
      The numeric forms, in particular, are expected to be useful for scripting
      purposes, now that psql can do conditional tests.
      
      Fabien Coelho, reviewed by Pavel Stehule
      
      Discussion: https://postgr.es/m/alpine.DEB.2.20.1704020917220.4632@lancre
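      
      For C client code, the same numeric form is already available through
      libpq; a minimal sketch (the connection string is an assumption):
      
        #include <stdio.h>
        #include <libpq-fe.h>
        
        int
        main(void)
        {
            PGconn     *conn = PQconnectdb("dbname=postgres"); /* assumed conninfo */
            int         vnum;
        
            if (PQstatus(conn) != CONNECTION_OK)
            {
                fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
                PQfinish(conn);
                return 1;
            }
        
            vnum = PQserverVersion(conn);   /* e.g. 110000 for "11devel" */
            printf("server version number: %d\n", vnum);
        
            PQfinish(conn);
            return 0;
        }
      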
    • Reformat psql's --help=variables output. · 3955c8c4
      Tom Lane authored
      The previous format with variable names and descriptions in separate
      columns was extremely constraining about the length of the descriptions.
      We'd dealt with that in several inconsistent ways over the years,
      including letting the lines run over 80 characters, breaking descriptions
      into multiple lines, or shoving the description onto a separate line.
      But it's been a long time since the output could realistically fit onto
      a single screen vertically, so let's just rely even more heavily on the
      pager to deal with the vertical distance, and split each entry into two
      (or more) lines, in the format
      
        variable-name
          variable description goes here
      
      Each variable name + description remains a single translatable string,
      in hopes of reducing translator confusion; we're just changing the
      embedded whitespace.
      
      I failed to resist the temptation to copy-edit one or two of the
      descriptions while at it.
      
      Discussion: https://postgr.es/m/2947.1504542679@sss.pgh.pa.us
  2. 04 Sep, 2017 4 commits
    • Be more careful about newline-chomping in pgbench. · 0b707d6e
      Tom Lane authored
      process_backslash_command would drop the last character of the input
      command on the assumption that it was a newline.  Given an input file
      whose last line lacks a terminating newline, this could drop the last
      character of the command.  Fix that by actually testing that the
      character being removed is a newline.
      
      While at it, allow for Windows newlines (\r\n), and suppress multiple
      newlines if any.  I do not think either of those cases really occurs,
      since (a) we read script files in text mode and (b) the lexer stops
      when it hits a newline.  But it's cheap enough and it provides a
      stronger guarantee about what the result string looks like.
      
      This is just cosmetic, I think, since the possibly-overly-chomped
      line was only used for display not for further processing.  So
      it doesn't seem necessary to back-patch.
      
      Fabien Coelho, reviewed by Nikolay Shaplov, whacked around a bit by me
      
      Discussion: https://postgr.es/m/alpine.DEB.2.20.1704171422500.4025@lancre
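      
      A self-contained sketch of the stronger chomping rule (handle \r\n and
      repeated trailing newlines, strip nothing when there is no newline);
      a hypothetical helper, not pgbench's actual code.
      
        #include <stdio.h>
        #include <string.h>
        
        /* Remove any trailing \n and \r characters in place. */
        static void
        chomp_trailing_newlines(char *line)
        {
            size_t len = strlen(line);
        
            while (len > 0 && (line[len - 1] == '\n' || line[len - 1] == '\r'))
                line[--len] = '\0';
        }
        
        int
        main(void)
        {
            char buf[] = "\\set scale 10\r\n";
        
            chomp_trailing_newlines(buf);
            printf("[%s]\n", buf);      /* prints [\set scale 10] */
            return 0;
        }
      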
    • Fix some subtle problems in pgbench transaction stats counting. · c23bb6ba
      Tom Lane authored
      With --latency-limit, transactions might get skipped even beyond the
      transaction count limit specified by -t, throwing off the expected
      number of transactions and thus the denominator for later stats.
      Be sure to stop skipping transactions once -t is reached.
      
      Also, include skipped transactions in the "cnt" fields; this requires
      discounting them again in a couple of places, but most places are
      better off with this definition.  In particular this is needed to
      get correct overall stats for the combination of -R/-L/-t options.
      Merge some more processing into processXactStats() to simplify this.
      
      In passing, add a check that --progress-timestamp is specified only
      when --progress is.
      
      We might consider back-patching this, but given that it only matters
      for a combination of options, and given the lack of field complaints,
      consensus seems to be not to bother.
      
      Fabien Coelho, reviewed by Nikolay Shaplov
      
      Discussion: https://postgr.es/m/alpine.DEB.2.20.1704171422500.4025@lancre
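      
      A sketch of the counting convention the commit settles on, with
      hypothetical field names rather than pgbench's own: skipped transactions
      are included in "cnt" (so the -t limit also stops the skipping), and are
      discounted only where an executed-only denominator is needed.
      
        #include <stdint.h>
        
        typedef struct TxnStats
        {
            int64_t     cnt;            /* executed + skipped transactions */
            int64_t     skipped;        /* skipped under --latency-limit */
            double      total_latency;  /* accumulated latency of executed txns */
        } TxnStats;
        
        /* Stop issuing (and skipping) transactions once the -t limit is hit. */
        static int
        reached_txn_limit(const TxnStats *st, int64_t nxacts)
        {
            return nxacts > 0 && st->cnt >= nxacts;
        }
        
        /* Averages must discount skipped transactions from the denominator. */
        static double
        avg_latency(const TxnStats *st)
        {
            int64_t     executed = st->cnt - st->skipped;
        
            return executed > 0 ? st->total_latency / (double) executed : 0.0;
        }
      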
    • Adjust pgbench to allow non-ASCII characters in variable names. · 9d36a386
      Tom Lane authored
      This puts it in sync with psql's notion of what is a valid variable name.
      Like psql, we document that "non-Latin letters" are allowed, but actually
      any non-ASCII character is accepted.
      
      Fabien Coelho
      
      Discussion: https://postgr.es/m/20170405.094548.1184280384967203518.t-ishii@sraoss.co.jp
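      
      A sketch of the accepted character set as described above (hypothetical
      helpers, not the actual psql or pgbench lexer rules): ASCII letters,
      digits, underscores, and any non-ASCII byte.
      
        #include <ctype.h>
        #include <stdbool.h>
        
        /* A byte is acceptable in a variable name if it is an ASCII letter,
         * digit, or underscore, or any non-ASCII (high-bit-set) byte. */
        static bool
        is_variable_char(unsigned char c)
        {
            return c >= 0x80 || c == '_' || isalnum(c);
        }
        
        static bool
        valid_variable_name(const char *name)
        {
            if (*name == '\0')
                return false;
            for (const unsigned char *p = (const unsigned char *) name; *p; p++)
                if (!is_variable_char(*p))
                    return false;
            return true;
        }
      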
  3. 03 Sep, 2017 2 commits
  4. 02 Sep, 2017 1 commit
  5. 01 Sep, 2017 17 commits
  6. 31 Aug, 2017 5 commits
    • Avoid memory leaks when a GatherMerge node is rescanned. · 2d44c58c
      Tom Lane authored
      Rescanning a GatherMerge led to leaking some memory in the executor's
      query-lifespan context, because most of the node's working data structures
      were simply abandoned and rebuilt from scratch.  In practice, this might
      never amount to much, given the cost of relaunching worker processes ---
      but it's still pretty messy, so let's fix it.
      
      We can rearrange things so that the tuple arrays are simply cleared and
      reused, and we don't need to rebuild the TupleTableSlots either, just
      clear them.  One small complication is that because we might get a
      different number of workers on each iteration, we can't keep the old
      convention that the leader's gm_slots[] entry is the last one; the leader
      might clobber a TupleTableSlot that we need for a worker in a future
      iteration.  Hence, adjust the logic so that the leader has slot 0 always,
      while the active workers have slots 1..n.
      
      Back-patch to v10 to keep all the existing versions of nodeGatherMerge.c
      in sync --- because of the renumbering of the slots, there would otherwise
      be a very large risk that any future backpatches in this module would
      introduce bugs.
      
      Discussion: https://postgr.es/m/8670.1504192177@sss.pgh.pa.us
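      
      A generic sketch of the reuse pattern described above, with hypothetical
      structures rather than the executor's own: allocate the per-participant
      arrays once, reserve index 0 for the leader, and on rescan just clear
      the entries instead of abandoning the old arrays.
      
        #include <stdlib.h>
        #include <string.h>
        
        typedef struct MergeState
        {
            int      maxworkers;    /* allocated capacity; workers use 1..n */
            int      nworkers;      /* active workers this iteration (may vary) */
            void   **slots;         /* slot 0 is the leader, 1..nworkers workers */
        } MergeState;
        
        static void
        merge_state_init(MergeState *st, int maxworkers)
        {
            st->maxworkers = maxworkers;
            st->nworkers = 0;
            st->slots = calloc(maxworkers + 1, sizeof(void *));
        }
        
        /* On rescan: clear and reuse, don't reallocate. */
        static void
        merge_state_rescan(MergeState *st, int nworkers_this_time)
        {
            st->nworkers = nworkers_this_time;
            memset(st->slots, 0, (st->maxworkers + 1) * sizeof(void *));
        }
      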
    • Expand partitioned tables in PartDesc order. · 30833ba1
      Robert Haas authored
      Previously, we expanded the inheritance hierarchy in the order in
      which find_all_inheritors had locked the tables, but that turns out
      to block quite a bit of useful optimization.  For example, a
      partition-wise join can't count on two tables with matching bounds
      to get expanded in the same order.
      
      Where possible, this change results in expanding partitioned tables in
      *bound* order.  Bound order isn't well-defined for a list-partitioned
      table with a null-accepting partition or for a list-partitioned table
      where the bounds for a single partition are interleaved with other
      partitions.  However, when expansion in bound order is possible, it
      opens up further opportunities for optimization, such as
      strength-reducing MergeAppend to Append when the expansion order
      matches the desired sort order.
      
      Patch by me, with cosmetic revisions by Ashutosh Bapat.
      
      Discussion: http://postgr.es/m/CA+TgmoZrKj7kEzcMSum3aXV4eyvvbh9WD=c6m=002WMheDyE3A@mail.gmail.com
    • Clean up shm_mq cleanup. · 6708e447
      Tom Lane authored
      The logic around shm_mq_detach was a few bricks shy of a load, because
      (contrary to the comments for shm_mq_attach) all it did was update the
      shared shm_mq state.  That left us leaking a bit of process-local
      memory, but much worse, the on_dsm_detach callback for shm_mq_detach
      was still armed.  That means that whenever we ultimately detach from
      the DSM segment, we'd run shm_mq_detach again for already-detached,
      possibly long-dead queues.  This accidentally fails to fail today,
      because we only ever re-use a shm_mq's memory for another shm_mq, and
      multiple detach attempts on the last such shm_mq are fairly harmless.
      But it's gonna bite us someday, so let's clean it up.
      
      To do that, change shm_mq_detach's API so it takes a shm_mq_handle
      not the underlying shm_mq.  This makes the callers simpler in most
      cases anyway.  Also fix a few places in parallel.c that were just
      pfree'ing the handle structs rather than doing proper cleanup.
      
      Back-patch to v10 because of the risk that the revenant shm_mq_detach
      callbacks would cause a live bug sometime.  Since this is an API
      change, it's too late to do it in 9.6.  (We could make a variant
      patch that preserves API, but I'm not excited enough to do that.)
      
      Discussion: https://postgr.es/m/8670.1504192177@sss.pgh.pa.us
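      
      A generic sketch of the cleanup discipline involved, with hypothetical
      names rather than the dsm/shm_mq API: an explicit detach must both
      disarm the registered on-detach callback and free local state, or the
      later segment-wide cleanup pass will "detach" the same object again.
      
        #include <stdlib.h>
        
        #define MAX_CALLBACKS 8
        
        typedef void (*cleanup_fn)(void *arg);
        
        static struct { cleanup_fn fn; void *arg; } callbacks[MAX_CALLBACKS];
        static int  ncallbacks;
        
        static int
        register_cleanup(cleanup_fn fn, void *arg)
        {
            callbacks[ncallbacks].fn = fn;
            callbacks[ncallbacks].arg = arg;
            return ncallbacks++;
        }
        
        static void
        cancel_cleanup(int idx)
        {
            callbacks[idx].fn = NULL;   /* disarm; slot is skipped later */
        }
        
        /* Proper explicit detach: disarm the callback, then free local state. */
        static void
        detach_handle(void *handle, int callback_idx)
        {
            cancel_cleanup(callback_idx);
            free(handle);
        }
        
        /* Segment-wide cleanup runs whatever callbacks are still armed. */
        static void
        run_remaining_callbacks(void)
        {
            for (int i = 0; i < ncallbacks; i++)
                if (callbacks[i].fn != NULL)
                    callbacks[i].fn(callbacks[i].arg);
        }
      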
    • Improve code coverage of select_parallel test. · 4b1dd62a
      Tom Lane authored
      Make sure that rescans of parallel indexscans are tested.
      Per code coverage report.
    • Remove mention of pre-8.2 coding convention for PG_MODULE_MAGIC · b5c75fec
      Peter Eisentraut authored
      PG_MODULE_MAGIC has been around since 8.2, and 8.1 has long been EOL,
      so remove the mention of #ifdef guards for compiling against pre-8.2
      sources from the documentation.
      
      Author: Daniel Gustafsson <daniel@yesql.se>
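      
      For reference, the form the documentation now shows for a loadable
      module: PG_MODULE_MAGIC appears unconditionally, exactly once, with no
      #ifdef guard.  Minimal skeleton; the function is just an example.
      
        #include "postgres.h"
        #include "fmgr.h"
        
        PG_MODULE_MAGIC;
        
        PG_FUNCTION_INFO_V1(hello_world);
        
        Datum
        hello_world(PG_FUNCTION_ARGS)
        {
            PG_RETURN_INT32(42);
        }
      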
  7. 30 Aug, 2017 4 commits
    • Code review for nodeGatherMerge.c. · 04e96786
      Tom Lane authored
      Comment the fields of GatherMergeState, and organize them a bit more
      sensibly.  Comment GMReaderTupleBuffer more usefully too.  Improve
      assorted other comments that were obsolete or just not very good English.
      
      Get rid of the use of a GMReaderTupleBuffer for the leader process;
      that was confusing, since only the "done" field was used, and that
      in a way redundant with need_to_scan_locally.
      
      In gather_merge_init, avoid calling load_tuple_array for
      already-known-exhausted workers.  I'm not sure if there's a live bug there,
      but the case is unlikely to be well tested due to timing considerations.
      
      Remove some useless code, such as duplicating the tts_isempty test done by
      TupIsNull.
      
      Remove useless initialization of ps.qual, replacing that with an assertion
      that we have no qual to check.  (If we did, the code would fail to check
      it.)
      
      Avoid applying heap_copytuple to a null tuple.  While that fails to crash,
      it's confusing and makes the code less legible, not more so, IMO.
      
      Propagate a couple of these changes into nodeGather.c, as well.
      
      Back-patch to v10, partly because of the possibility that the
      gather_merge_init change is fixing a live bug, but mostly to keep
      the branches in sync to ease future bug fixes.
    • Separate reinitialization of shared parallel-scan state from ExecReScan. · 41b0dd98
      Tom Lane authored
      Previously, the parallel executor logic did reinitialization of shared
      state within the ExecReScan code for parallel-aware scan nodes.  This is
      problematic, because it means that the ExecReScan call has to occur
      synchronously (ie, during the parent Gather node's ReScan call).  That is
      swimming very much against the tide so far as the ExecReScan machinery is
      concerned; the fact that it works at all today depends on a lot of fragile
      assumptions, such as that no plan node between Gather and a parallel-aware
      scan node is parameterized.  Another objection is that because ExecReScan
      might be called in workers as well as the leader, hacky extra tests are
      needed in some places to prevent unwanted shared-state resets.
      
      Hence, let's separate this code into two functions, a ReInitializeDSM
      call and the ReScan call proper.  ReInitializeDSM is called only in
      the leader and is guaranteed to run before we start new workers.
      ReScan is returned to its traditional function of resetting only local
      state, which means that ExecReScan's usual habits of delaying or
      eliminating child rescan calls are safe again.
      
      As with the preceding commit 7df2c1f8, it doesn't seem to be necessary
      to make these changes in 9.6, which is a good thing because the FDW and
      CustomScan APIs are impacted.
      
      Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com
    • Restore test case from a2b70c89. · 6c2c5bea
      Tom Lane authored
      Revert the reversion commits a20aac89 and 9b644745c.  In the wake of
      commit 7df2c1f8, we should get stable buildfarm results from this test;
      if not, I'd like to know sooner not later.
      
      Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com
    • Force rescanning of parallel-aware scan nodes below a Gather[Merge]. · 7df2c1f8
      Tom Lane authored
      The ExecReScan machinery contains various optimizations for postponing
      or skipping rescans of plan subtrees; for example a HashAgg node may
      conclude that it can re-use the table it built before, instead of
      re-reading its input subtree.  But that is wrong if the input contains
      a parallel-aware table scan node, since the portion of the table scanned
      by the leader process is likely to vary from one rescan to the next.
      This explains the timing-dependent buildfarm failures we saw after
      commit a2b70c89.
      
      The established mechanism for showing that a plan node's output is
      potentially variable is to mark it as depending on some runtime Param.
      Hence, to fix this, invent a dummy Param (one that has a PARAM_EXEC
      parameter number, but carries no actual value) associated with each Gather
      or GatherMerge node, mark parallel-aware nodes below that node as dependent
      on that Param, and arrange for ExecReScanGather[Merge] to flag that Param
      as changed whenever the Gather[Merge] node is rescanned.
      
      This solution breaks an undocumented assumption made by the parallel
      executor logic, namely that all rescans of nodes below a Gather[Merge]
      will happen synchronously during the ReScan of the top node itself.
      But that's fundamentally contrary to the design of the ExecReScan code,
      and so was doomed to fail someday anyway (even if you want to argue
      that the bug being fixed here wasn't a failure of that assumption).
      A follow-on patch will address that issue.  In the meantime, the worst
      that's expected to happen is that given very bad timing luck, the leader
      might have to do all the work during a rescan, because workers think
      they have nothing to do, if they are able to start up before the eventual
      ReScan of the leader's parallel-aware table scan node has reset the
      shared scan state.
      
      Although this problem exists in 9.6, there does not seem to be any way
      for it to manifest there.  Without GatherMerge, it seems that a plan tree
      that has a rescan-short-circuiting node below Gather will always also
      have one above it that will short-circuit in the same cases, preventing
      the Gather from being rescanned.  Hence we won't take the risk of
      back-patching this change into 9.6.  But v10 needs it.
      
      Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com
  8. 29 Aug, 2017 2 commits