Commit 2d44c58c authored by Tom Lane

Avoid memory leaks when a GatherMerge node is rescanned.

Rescanning a GatherMerge led to leaking some memory in the executor's
query-lifespan context, because most of the node's working data structures
were simply abandoned and rebuilt from scratch.  In practice, this might
never amount to much, given the cost of relaunching worker processes ---
but it's still pretty messy, so let's fix it.

We can rearrange things so that the tuple arrays are simply cleared and
reused, and we don't need to rebuild the TupleTableSlots either, just
clear them.  One small complication is that because we might get a
different number of workers on each iteration, we can't keep the old
convention that the leader's gm_slots[] entry is the last one; the leader
might clobber a TupleTableSlot that we need for a worker in a future
iteration.  Hence, adjust the logic so that the leader has slot 0 always,
while the active workers have slots 1..n.
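
To make the reuse idea concrete, here is a minimal C sketch of the rescan-time cleanup described above. It is illustrative only, not the committed nodeGatherMerge.c code: the per-reader buffer layout (gm_tuple_buffers with tuple/nTuples/readCounter/done fields) is an assumption, while ExecClearTuple() and heap_freetuple() are the ordinary executor/heap helpers.

/*
 * Illustrative sketch only (not the committed code): on rescan, buffered
 * worker tuples are freed and the existing TupleTableSlots are cleared,
 * instead of abandoning the arrays and rebuilding them from scratch.
 */
#include "postgres.h"

#include "access/htup_details.h"
#include "executor/tuptable.h"
#include "nodes/execnodes.h"

/* Per-worker buffer of already-read tuples (assumed layout). */
typedef struct GMReaderTupleBuffer
{
	HeapTuple  *tuple;			/* array of buffered tuples */
	int			nTuples;		/* number of tuples currently buffered */
	int			readCounter;	/* index of next tuple to return */
	bool		done;			/* has this worker run out of tuples? */
} GMReaderTupleBuffer;

static void
gather_merge_clear_tuples_sketch(GatherMergeState *gm_state)
{
	int			i;

	for (i = 0; i < gm_state->nreaders; i++)
	{
		GMReaderTupleBuffer *buf = &gm_state->gm_tuple_buffers[i];

		/* Free any tuples still buffered for this worker ... */
		while (buf->readCounter < buf->nTuples)
			heap_freetuple(buf->tuple[buf->readCounter++]);
		buf->nTuples = buf->readCounter = 0;

		/* ... and clear, but keep, the worker's slot (entries 1..n). */
		ExecClearTuple(gm_state->gm_slots[i + 1]);
	}

	/* Under the new convention the leader's slot is entry 0; clear it too. */
	ExecClearTuple(gm_state->gm_slots[0]);
}
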

Back-patch to v10 to keep all the existing versions of nodeGatherMerge.c
in sync --- because of the renumbering of the slots, there would otherwise
be a very large risk that any future backpatches in this module would
introduce bugs.

Discussion: https://postgr.es/m/8670.1504192177@sss.pgh.pa.us
parent 30833ba1
@@ -1958,7 +1958,8 @@ typedef struct GatherMergeState
 	int			gm_nkeys;		/* number of sort columns */
 	SortSupport gm_sortkeys;	/* array of length gm_nkeys */
 	struct ParallelExecutorInfo *pei;
-	/* all remaining fields are reinitialized during a rescan: */
+	/* all remaining fields are reinitialized during a rescan */
+	/* (but the arrays are not reallocated, just cleared) */
 	int			nworkers_launched;	/* original number of workers */
 	int			nreaders;		/* number of active workers */
 	TupleTableSlot **gm_slots;	/* array with nreaders+1 entries */
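
For context on the gm_slots comment above: after the renumbering, entry 0 is always the leader's slot and entries 1..nreaders belong to the launched workers. The snippet below is a hypothetical illustration of that indexing, not code from the commit; the helper name is invented, and it assumes the leader fills slot 0 from its own outer subplan while worker tuples arrive through their numbered slots.

/*
 * Hypothetical illustration of the new slot numbering (not committed code):
 * reader 0 is the leader, readers 1..nreaders are the launched workers.
 */
#include "postgres.h"

#include "executor/executor.h"
#include "executor/tuptable.h"
#include "nodes/execnodes.h"

static TupleTableSlot *
gm_slot_for_reader_sketch(GatherMergeState *gm_state, int reader)
{
	if (reader == 0)
	{
		/* Leader runs the plan itself and copies the result into slot 0. */
		TupleTableSlot *outerslot = ExecProcNode(outerPlanState(gm_state));

		if (TupIsNull(outerslot))
			return NULL;
		return ExecCopySlot(gm_state->gm_slots[0], outerslot);
	}

	/* Worker tuples are delivered into slots 1..nreaders. */
	return gm_state->gm_slots[reader];
}
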