1. 30 Aug, 2017 2 commits
    • Restore test case from a2b70c89. · 6c2c5bea
      Tom Lane authored
      Revert the reversion commits a20aac89 and 9b644745c.  In the wake of
      commit 7df2c1f8, we should get stable buildfarm results from this test;
      if not, I'd like to know sooner rather than later.
      
      Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com
    • Force rescanning of parallel-aware scan nodes below a Gather[Merge]. · 7df2c1f8
      Tom Lane authored
      The ExecReScan machinery contains various optimizations for postponing
      or skipping rescans of plan subtrees; for example a HashAgg node may
      conclude that it can reuse the hash table it built before, instead of
      re-reading its input subtree (a minimal sketch of this short-circuit
      pattern follows this commit message).  But that is wrong if the input
      contains
      a parallel-aware table scan node, since the portion of the table scanned
      by the leader process is likely to vary from one rescan to the next.
      This explains the timing-dependent buildfarm failures we saw after
      commit a2b70c89.
      
      The established mechanism for showing that a plan node's output is
      potentially variable is to mark it as depending on some runtime Param.
      Hence, to fix this, invent a dummy Param (one that has a PARAM_EXEC
      parameter number, but carries no actual value) associated with each Gather
      or GatherMerge node, mark parallel-aware nodes below that node as dependent
      on that Param, and arrange for ExecReScanGather[Merge] to flag that Param
      as changed whenever the Gather[Merge] node is rescanned (see the second
      sketch following this commit message).
      
      This solution breaks an undocumented assumption made by the parallel
      executor logic, namely that all rescans of nodes below a Gather[Merge]
      will happen synchronously during the ReScan of the top node itself.
      But that's fundamentally contrary to the design of the ExecReScan code,
      and so was doomed to fail someday anyway (even if you want to argue
      that the bug being fixed here wasn't a failure of that assumption).
      A follow-on patch will address that issue.  In the meantime, the worst
      that's expected to happen is that, given very bad timing luck, the
      leader might have to do all the work during a rescan: any workers that
      manage to start up before the eventual ReScan of the leader's
      parallel-aware table scan node has reset the shared scan state will
      conclude that they have nothing to do.
      
      Although this problem exists in 9.6, there does not seem to be any way
      for it to manifest there.  Without GatherMerge, it seems that a plan tree
      that has a rescan-short-circuiting node below Gather will always also
      have one above it that will short-circuit in the same cases, preventing
      the Gather from being rescanned.  Hence we won't take the risk of
      back-patching this change into 9.6.  But v10 needs it.
      
      Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com
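
      A minimal sketch of the rescan short-circuit described in the first
      paragraph of the commit message above.  It is not the actual nodeAgg.c
      code; the structure, field, and function names (for instance
      table_filled) are simplified approximations.  The point is that a node
      skips re-reading its input whenever the child's chgParam is empty,
      i.e. no runtime Param the input depends on has changed; that is exactly
      the assumption a parallel-aware scan below it violates.

      #include "postgres.h"
      #include "executor/executor.h"

      /* Hypothetical, simplified rescan routine for a hashed Agg node. */
      static void
      ExecReScanHashAggSketch(AggState *node)
      {
          PlanState  *outerPlan = outerPlanState(node);

          /*
           * If the hash table is already built and no Param affecting the
           * input subtree has changed, reuse the table instead of rescanning
           * the input.  With a parallel-aware scan below, the leader's share
           * of the table can differ between rescans even though no Param
           * changed, so this shortcut would return stale results.
           */
          if (node->table_filled && outerPlan->chgParam == NULL)
              return;

          /* Otherwise discard the old table and rescan the input subtree. */
          node->table_filled = false;
          if (outerPlan->chgParam == NULL)
              ExecReScan(outerPlan);
      }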
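
      And a sketch of the dummy-Param signalling described in the second
      paragraph, assuming the Gather plan node carries the dummy Param's
      PARAM_EXEC number in a field shown here as rescan_param (field and
      function names are illustrative, not necessarily what was committed).
      Flagging that Param in the child's chgParam forces every node between
      the Gather[Merge] and the parallel-aware scan to perform a real rescan
      instead of short-circuiting.

      #include "postgres.h"
      #include "executor/executor.h"
      #include "nodes/bitmapset.h"

      /* Hypothetical, simplified rescan routine for a Gather node. */
      static void
      ExecReScanGatherSketch(GatherState *node)
      {
          Gather     *gather = (Gather *) node->ps.plan;
          PlanState  *outerPlan = outerPlanState(node);

          /*
           * Forget the old shared-state setup; it is rebuilt the next time
           * the node is executed.  (The real code also shuts down any
           * still-running workers first.)
           */
          node->initialized = false;

          /*
           * Mark the dummy Param as changed in the child's chgParam, so that
           * parallel-aware nodes below, which were marked as depending on
           * it, cannot skip their rescan.
           */
          if (gather->rescan_param >= 0)
              outerPlan->chgParam = bms_add_member(outerPlan->chgParam,
                                                   gather->rescan_param);

          /*
           * If no Param changed at all, rescan the child synchronously;
           * otherwise it will be rescanned when it is next executed.
           */
          if (outerPlan->chgParam == NULL)
              ExecReScan(outerPlan);
      }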