  30 Aug, 2017: 4 commits
    • Code review for nodeGatherMerge.c. · 04e96786
      Tom Lane authored
      Comment the fields of GatherMergeState, and organize them a bit more
      sensibly.  Comment GMReaderTupleBuffer more usefully too.  Improve
      assorted other comments that were obsolete or just not very good English.
      
      Get rid of the use of a GMReaderTupleBuffer for the leader process;
      that was confusing, since only the "done" field was used, and in a way
      that was redundant with need_to_scan_locally.
      
      In gather_merge_init, avoid calling load_tuple_array for
      already-known-exhausted workers.  I'm not sure if there's a live bug there,
      but the case is unlikely to be well tested due to timing considerations.
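
      A rough sketch of the resulting guard (the indexing and field names here
      are assumptions, not the literal diff):

          /* Sketch: skip pre-loading the tuple array for a reader that is
           * already known to be exhausted; "done" mirrors the
           * GMReaderTupleBuffer flag described above. */
          for (int i = 0; i < gm_state->nreaders; i++)
          {
              if (!gm_state->gm_tuple_buffers[i].done)
                  load_tuple_array(gm_state, i);
          }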
      
      Remove some useless code, such as duplicating the tts_isempty test done by
      TupIsNull.
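
      For reference, the TupIsNull macro already folds in the empty-slot test
      (shown here in paraphrase), so spelling it out again is pure duplication:

          /* effectively what TupIsNull expands to */
          #define TupIsNull(slot)  ((slot) == NULL || (slot)->tts_isempty)

          /* before: the extra tts_isempty test repeats work TupIsNull does */
          if (TupIsNull(slot) || slot->tts_isempty)
              return NULL;

          /* after: this checks exactly the same thing */
          if (TupIsNull(slot))
              return NULL;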
      
      Remove useless initialization of ps.qual, replacing that with an assertion
      that we have no qual to check.  (If we did, the code would fail to check
      it.)
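
      The shape of that change is roughly as follows (abbreviated; treat the
      exact spellings as approximate rather than the committed diff):

          /* before: a qual was initialized here but never evaluated anywhere
           * in the node's execution path, so a non-empty qual would simply
           * be ignored */
          gm_state->ps.qual = ExecInitQual(node->plan.qual, &gm_state->ps);

          /* after: make the unsupported case explicit instead */
          Assert(!node->plan.qual);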
      
      Avoid applying heap_copytuple to a null tuple.  While that fails to
      crash, it's confusing, and it makes the code less legible, not more
      so, IMO.
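
      (heap_copytuple quietly returns NULL when handed an invalid tuple, which
      is why the old coding didn't crash.)  Making the null case explicit
      spares the reader from having to know that quirk, along these lines
      (sketch, not the literal diff):

          /* before: relies on heap_copytuple(NULL) returning NULL */
          tup = heap_copytuple(tup);

          /* after: the null case is visible at the call site */
          if (HeapTupleIsValid(tup))
              tup = heap_copytuple(tup);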
      
      Propagate a couple of these changes into nodeGather.c, as well.
      
      Back-patch to v10, partly because of the possibility that the
      gather_merge_init change is fixing a live bug, but mostly to keep
      the branches in sync to ease future bug fixes.
    • Separate reinitialization of shared parallel-scan state from ExecReScan. · 41b0dd98
      Tom Lane authored
      Previously, the parallel executor logic did reinitialization of shared
      state within the ExecReScan code for parallel-aware scan nodes.  This is
      problematic, because it means that the ExecReScan call has to occur
      synchronously (ie, during the parent Gather node's ReScan call).  That is
      swimming very much against the tide so far as the ExecReScan machinery is
      concerned; the fact that it works at all today depends on a lot of fragile
      assumptions, such as that no plan node between Gather and a parallel-aware
      scan node is parameterized.  Another objection is that because ExecReScan
      might be called in workers as well as the leader, hacky extra tests are
      needed in some places to prevent unwanted shared-state resets.
      
      Hence, let's separate this code into two functions, a ReInitializeDSM
      call and the ReScan call proper.  ReInitializeDSM is called only in
      the leader and is guaranteed to run before we start new workers.
      ReScan is returned to its traditional function of resetting only local
      state, which means that ExecReScan's usual habits of delaying or
      eliminating child rescan calls are safe again.
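
      A minimal sketch of the resulting division of labor, using a
      hypothetical parallel-aware scan node (names like FooScanState are
      illustrative only, not the real API):

          /* Shared scan state, living in dynamic shared memory. */
          typedef struct ParallelFooScanDesc
          {
              int         next_item;      /* next unit of work to hand out */
          } ParallelFooScanDesc;

          typedef struct FooScanState
          {
              ParallelFooScanDesc *pscan; /* NULL when not parallel-aware */
              int         local_pos;      /* purely per-process state */
          } FooScanState;

          /* Runs only in the leader, before any new workers are launched. */
          void
          ExecFooScanReInitializeDSM(FooScanState *node)
          {
              node->pscan->next_item = 0; /* reset the shared position */
          }

          /* The ExecReScan path now touches local state only, so the usual
           * tricks of delaying or skipping child rescans are safe again. */
          void
          ExecReScanFooScan(FooScanState *node)
          {
              node->local_pos = 0;
          }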
      
      As with the preceding commit 7df2c1f8, it doesn't seem to be necessary
      to make these changes in 9.6, which is a good thing because the FDW and
      CustomScan APIs are impacted.
      
      Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com
    • Restore test case from a2b70c89. · 6c2c5bea
      Tom Lane authored
      Revert the reversion commits a20aac89 and 9b644745c.  In the wake of
      commit 7df2c1f8, we should get stable buildfarm results from this test;
      if not, I'd like to know sooner not later.
      
      Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com
    • Force rescanning of parallel-aware scan nodes below a Gather[Merge]. · 7df2c1f8
      Tom Lane authored
      The ExecReScan machinery contains various optimizations for postponing
      or skipping rescans of plan subtrees; for example a HashAgg node may
      conclude that it can re-use the table it built before, instead of
      re-reading its input subtree.  But that is wrong if the input contains
      a parallel-aware table scan node, since the portion of the table scanned
      by the leader process is likely to vary from one rescan to the next.
      This explains the timing-dependent buildfarm failures we saw after
      commit a2b70c89.
      
      The established mechanism for showing that a plan node's output is
      potentially variable is to mark it as depending on some runtime Param.
      Hence, to fix this, invent a dummy Param (one that has a PARAM_EXEC
      parameter number, but carries no actual value) associated with each Gather
      or GatherMerge node, mark parallel-aware nodes below that node as dependent
      on that Param, and arrange for ExecReScanGather[Merge] to flag that Param
      as changed whenever the Gather[Merge] node is rescanned.
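
      In code, the signaling boils down to the ordinary chgParam mechanism; a
      hedged sketch (the rescan_param field name is an assumption here, and
      the surrounding housekeeping is elided):

          static void
          ExecReScanGather(GatherState *node)
          {
              Gather     *gather = (Gather *) node->ps.plan;
              PlanState  *outerPlan = outerPlanState(node);

              /* ... shut down workers, reset this node's local state ... */

              /*
               * Mark the dummy Param as changed on the child, so that any
               * parallel-aware descendant that was made to depend on it
               * knows its output may differ and cannot short-circuit its
               * rescan.
               */
              outerPlan->chgParam = bms_add_member(outerPlan->chgParam,
                                                   gather->rescan_param);
          }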
      
      This solution breaks an undocumented assumption made by the parallel
      executor logic, namely that all rescans of nodes below a Gather[Merge]
      will happen synchronously during the ReScan of the top node itself.
      But that's fundamentally contrary to the design of the ExecReScan code,
      and so was doomed to fail someday anyway (even if you want to argue
      that the bug being fixed here wasn't a failure of that assumption).
      A follow-on patch will address that issue.  In the meantime, the worst
      that's expected to happen is that, given very bad timing luck, the
      leader might have to do all the work during a rescan: if workers manage
      to start up before the eventual ReScan of the leader's parallel-aware
      table scan node has reset the shared scan state, they will conclude
      they have nothing to do.
      
      Although this problem exists in 9.6, there does not seem to be any way
      for it to manifest there.  Without GatherMerge, it seems that a plan tree
      that has a rescan-short-circuiting node below Gather will always also
      have one above it that will short-circuit in the same cases, preventing
      the Gather from being rescanned.  Hence we won't take the risk of
      back-patching this change into 9.6.  But v10 needs it.
      
      Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com