Commit d2d8a229 authored by Tomas Vondra's avatar Tomas Vondra

Implement Incremental Sort

Incremental Sort is an optimized variant of multikey sort for cases when
the input is already sorted by a prefix of the requested sort keys. For
example, when the relation is already sorted by (key1, key2) and we need
to sort it by (key1, key2, key3), we can simply split the input rows into
groups having equal values in (key1, key2), and only sort/compare the
remaining column key3.

This has a number of benefits:

- Reduced memory consumption, because only a single group (determined by
  values in the sorted prefix) needs to be kept in memory. This may also
  eliminate the need to spill to disk.

- Lower startup cost, because Incremental Sort produces results after each
  prefix group, which is beneficial for plans where startup cost matters
  (for example, queries with a LIMIT clause).

We consider both Sort and Incremental Sort, and decide based on costing.

The implemented algorithm operates in two different modes:

- Fetching a minimum number of tuples without checking for equality on the
  prefix keys, and sorting on all columns when that is safe.

- Fetching all tuples for a single prefix group and then sorting by
  comparing only the remaining (non-prefix) keys.

We always start in the first mode, and employ a heuristic to switch into
the second mode if we believe it's beneficial - the goal is to minimize
the number of unnecessary comparisons while keeping memory consumption
below work_mem.
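
To make the group-splitting idea concrete, here is a minimal, self-contained
C sketch that sorts rows already ordered by (key1, key2) on a third column,
one prefix group at a time. It is purely illustrative - the names and
structure are invented for this example and it is not the executor code added
by this commit.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Row { int key1, key2, key3; } Row;

    static int
    cmp_key3(const void *a, const void *b)
    {
        int x = ((const Row *) a)->key3;
        int y = ((const Row *) b)->key3;

        return (x > y) - (x < y);
    }

    int
    main(void)
    {
        /* Input is already sorted by (key1, key2), but not by key3. */
        Row rows[] = {
            {1, 1, 9}, {1, 1, 3}, {1, 2, 7}, {2, 1, 5}, {2, 1, 1}, {2, 1, 4}
        };
        int n = sizeof(rows) / sizeof(rows[0]);
        int start = 0;

        for (int i = 1; i <= n; i++)
        {
            /* A prefix group ends when (key1, key2) changes, or at the end. */
            if (i == n ||
                rows[i].key1 != rows[start].key1 ||
                rows[i].key2 != rows[start].key2)
            {
                /* Sort only this group, and only by the remaining key. */
                qsort(&rows[start], i - start, sizeof(Row), cmp_key3);
                start = i;
            }
        }

        for (int i = 0; i < n; i++)
            printf("%d %d %d\n", rows[i].key1, rows[i].key2, rows[i].key3);
        return 0;
    }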

This is a very old patch series. The idea was originally proposed by
Alexander Korotkov back in 2013, and then revived in 2017. In 2018 the
patch was taken over by James Coleman, who wrote and rewrote most of the
current code.

There have been many reviewers/contributors since 2013 - I've done my best to
pick the most active ones, and listed them in this commit message.

Author: James Coleman, Alexander Korotkov
Reviewed-by: Tomas Vondra, Andreas Karlsson, Marti Raudsepp, Peter Geoghegan, Robert Haas, Thomas Munro, Antonin Houska, Andres Freund, Alexander Kuzmenkov
Discussion: https://postgr.es/m/CAPpHfdscOX5an71nHd8WSUH6GNOCf=V7wgDaTXdDd9=goN-gfA@mail.gmail.com
Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com
parent 3c855354
...@@ -4573,6 +4573,20 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
</listitem>
</varlistentry>
<varlistentry id="guc-enable-incrementalsort" xreflabel="enable_incrementalsort">
<term><varname>enable_incrementalsort</varname> (<type>boolean</type>)
<indexterm>
<primary><varname>enable_incrementalsort</varname> configuration parameter</primary>
</indexterm>
</term>
<listitem>
<para>
Enables or disables the query planner's use of incremental sort steps.
The default is <literal>on</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry id="guc-enable-indexscan" xreflabel="enable_indexscan"> <varlistentry id="guc-enable-indexscan" xreflabel="enable_indexscan">
<term><varname>enable_indexscan</varname> (<type>boolean</type>) <term><varname>enable_indexscan</varname> (<type>boolean</type>)
<indexterm> <indexterm>
......
...@@ -291,7 +291,47 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 = 42;
often see this plan type for queries that fetch just a single row. It's
also often used for queries that have an <literal>ORDER BY</literal> condition
that matches the index order, because then no extra sorting step is needed
to satisfy the <literal>ORDER BY</literal>. In this example, adding
<literal>ORDER BY unique1</literal> would use the same plan because the
index already implicitly provides the requested ordering.
</para>
<para>
The planner may implement an <literal>ORDER BY</literal> clause in several
ways. The above example shows that such an ordering clause may be
implemented implicitly. The planner may also add an explicit
<literal>sort</literal> step:
<screen>
EXPLAIN SELECT * FROM tenk1 ORDER BY unique1;
QUERY PLAN
-------------------------------------------------------------------
Sort (cost=1109.39..1134.39 rows=10000 width=244)
Sort Key: unique1
-> Seq Scan on tenk1 (cost=0.00..445.00 rows=10000 width=244)
</screen>
If a part of the plan guarantees an ordering on a prefix of the
required sort keys, then the planner may instead decide to use an
<literal>incremental sort</literal> step:
<screen>
EXPLAIN SELECT * FROM tenk1 ORDER BY four, ten LIMIT 100;
QUERY PLAN
------------------------------------------------------------------------------------------------------
Limit (cost=521.06..538.05 rows=100 width=244)
-> Incremental Sort (cost=521.06..2220.95 rows=10000 width=244)
Sort Key: four, ten
Presorted Key: four
-> Index Scan using index_tenk1_on_four on tenk1 (cost=0.29..1510.08 rows=10000 width=244)
</screen>
Compared to regular sorts, sorting incrementally allows returning tuples
before the entire result set has been sorted, which particularly enables
optimizations with <literal>LIMIT</literal> queries. It may also reduce
memory usage and the likelihood of spilling sorts to disk, but it comes at
the cost of the increased overhead of splitting the result set into multiple
sorting batches.
</para>
<para>
......
This diff is collapsed.
...@@ -46,6 +46,7 @@ OBJS = \ ...@@ -46,6 +46,7 @@ OBJS = \
nodeGroup.o \ nodeGroup.o \
nodeHash.o \ nodeHash.o \
nodeHashjoin.o \ nodeHashjoin.o \
nodeIncrementalSort.o \
nodeIndexonlyscan.o \ nodeIndexonlyscan.o \
nodeIndexscan.o \ nodeIndexscan.o \
nodeLimit.o \ nodeLimit.o \
......
...@@ -30,6 +30,7 @@ ...@@ -30,6 +30,7 @@
#include "executor/nodeGroup.h" #include "executor/nodeGroup.h"
#include "executor/nodeHash.h" #include "executor/nodeHash.h"
#include "executor/nodeHashjoin.h" #include "executor/nodeHashjoin.h"
#include "executor/nodeIncrementalSort.h"
#include "executor/nodeIndexonlyscan.h" #include "executor/nodeIndexonlyscan.h"
#include "executor/nodeIndexscan.h" #include "executor/nodeIndexscan.h"
#include "executor/nodeLimit.h" #include "executor/nodeLimit.h"
...@@ -252,6 +253,10 @@ ExecReScan(PlanState *node) ...@@ -252,6 +253,10 @@ ExecReScan(PlanState *node)
ExecReScanSort((SortState *) node); ExecReScanSort((SortState *) node);
break; break;
case T_IncrementalSortState:
ExecReScanIncrementalSort((IncrementalSortState *) node);
break;
case T_GroupState: case T_GroupState:
ExecReScanGroup((GroupState *) node); ExecReScanGroup((GroupState *) node);
break; break;
...@@ -557,8 +562,17 @@ ExecSupportsBackwardScan(Plan *node) ...@@ -557,8 +562,17 @@ ExecSupportsBackwardScan(Plan *node)
case T_CteScan: case T_CteScan:
case T_Material: case T_Material:
case T_Sort: case T_Sort:
/* these don't evaluate tlist */
return true; return true;
case T_IncrementalSort:
/*
* Unlike full sort, incremental sort keeps only a single group of
* tuples in memory, so it can't scan backwards.
*/
return false;
case T_LockRows: case T_LockRows:
case T_Limit: case T_Limit:
return ExecSupportsBackwardScan(outerPlan(node)); return ExecSupportsBackwardScan(outerPlan(node));
......
...@@ -31,6 +31,7 @@ ...@@ -31,6 +31,7 @@
#include "executor/nodeForeignscan.h" #include "executor/nodeForeignscan.h"
#include "executor/nodeHash.h" #include "executor/nodeHash.h"
#include "executor/nodeHashjoin.h" #include "executor/nodeHashjoin.h"
#include "executor/nodeIncrementalSort.h"
#include "executor/nodeIndexonlyscan.h" #include "executor/nodeIndexonlyscan.h"
#include "executor/nodeIndexscan.h" #include "executor/nodeIndexscan.h"
#include "executor/nodeSeqscan.h" #include "executor/nodeSeqscan.h"
...@@ -283,6 +284,10 @@ ExecParallelEstimate(PlanState *planstate, ExecParallelEstimateContext *e) ...@@ -283,6 +284,10 @@ ExecParallelEstimate(PlanState *planstate, ExecParallelEstimateContext *e)
/* even when not parallel-aware, for EXPLAIN ANALYZE */ /* even when not parallel-aware, for EXPLAIN ANALYZE */
ExecSortEstimate((SortState *) planstate, e->pcxt); ExecSortEstimate((SortState *) planstate, e->pcxt);
break; break;
case T_IncrementalSortState:
/* even when not parallel-aware, for EXPLAIN ANALYZE */
ExecIncrementalSortEstimate((IncrementalSortState *) planstate, e->pcxt);
break;
default: default:
break; break;
...@@ -496,6 +501,10 @@ ExecParallelInitializeDSM(PlanState *planstate, ...@@ -496,6 +501,10 @@ ExecParallelInitializeDSM(PlanState *planstate,
/* even when not parallel-aware, for EXPLAIN ANALYZE */ /* even when not parallel-aware, for EXPLAIN ANALYZE */
ExecSortInitializeDSM((SortState *) planstate, d->pcxt); ExecSortInitializeDSM((SortState *) planstate, d->pcxt);
break; break;
case T_IncrementalSortState:
/* even when not parallel-aware, for EXPLAIN ANALYZE */
ExecIncrementalSortInitializeDSM((IncrementalSortState *) planstate, d->pcxt);
break;
default: default:
break; break;
...@@ -972,6 +981,7 @@ ExecParallelReInitializeDSM(PlanState *planstate, ...@@ -972,6 +981,7 @@ ExecParallelReInitializeDSM(PlanState *planstate,
break; break;
case T_HashState: case T_HashState:
case T_SortState: case T_SortState:
case T_IncrementalSortState:
/* these nodes have DSM state, but no reinitialization is required */ /* these nodes have DSM state, but no reinitialization is required */
break; break;
...@@ -1032,6 +1042,9 @@ ExecParallelRetrieveInstrumentation(PlanState *planstate, ...@@ -1032,6 +1042,9 @@ ExecParallelRetrieveInstrumentation(PlanState *planstate,
case T_SortState: case T_SortState:
ExecSortRetrieveInstrumentation((SortState *) planstate); ExecSortRetrieveInstrumentation((SortState *) planstate);
break; break;
case T_IncrementalSortState:
ExecIncrementalSortRetrieveInstrumentation((IncrementalSortState *) planstate);
break;
case T_HashState: case T_HashState:
ExecHashRetrieveInstrumentation((HashState *) planstate); ExecHashRetrieveInstrumentation((HashState *) planstate);
break; break;
...@@ -1318,6 +1331,11 @@ ExecParallelInitializeWorker(PlanState *planstate, ParallelWorkerContext *pwcxt) ...@@ -1318,6 +1331,11 @@ ExecParallelInitializeWorker(PlanState *planstate, ParallelWorkerContext *pwcxt)
/* even when not parallel-aware, for EXPLAIN ANALYZE */ /* even when not parallel-aware, for EXPLAIN ANALYZE */
ExecSortInitializeWorker((SortState *) planstate, pwcxt); ExecSortInitializeWorker((SortState *) planstate, pwcxt);
break; break;
case T_IncrementalSortState:
/* even when not parallel-aware, for EXPLAIN ANALYZE */
ExecIncrementalSortInitializeWorker((IncrementalSortState *) planstate,
pwcxt);
break;
default: default:
break; break;
......
...@@ -88,6 +88,7 @@ ...@@ -88,6 +88,7 @@
#include "executor/nodeGroup.h" #include "executor/nodeGroup.h"
#include "executor/nodeHash.h" #include "executor/nodeHash.h"
#include "executor/nodeHashjoin.h" #include "executor/nodeHashjoin.h"
#include "executor/nodeIncrementalSort.h"
#include "executor/nodeIndexonlyscan.h" #include "executor/nodeIndexonlyscan.h"
#include "executor/nodeIndexscan.h" #include "executor/nodeIndexscan.h"
#include "executor/nodeLimit.h" #include "executor/nodeLimit.h"
...@@ -313,6 +314,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags) ...@@ -313,6 +314,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
estate, eflags); estate, eflags);
break; break;
case T_IncrementalSort:
result = (PlanState *) ExecInitIncrementalSort((IncrementalSort *) node,
estate, eflags);
break;
case T_Group: case T_Group:
result = (PlanState *) ExecInitGroup((Group *) node, result = (PlanState *) ExecInitGroup((Group *) node,
estate, eflags); estate, eflags);
...@@ -693,6 +699,10 @@ ExecEndNode(PlanState *node) ...@@ -693,6 +699,10 @@ ExecEndNode(PlanState *node)
ExecEndSort((SortState *) node); ExecEndSort((SortState *) node);
break; break;
case T_IncrementalSortState:
ExecEndIncrementalSort((IncrementalSortState *) node);
break;
case T_GroupState: case T_GroupState:
ExecEndGroup((GroupState *) node); ExecEndGroup((GroupState *) node);
break; break;
...@@ -839,6 +849,30 @@ ExecSetTupleBound(int64 tuples_needed, PlanState *child_node) ...@@ -839,6 +849,30 @@ ExecSetTupleBound(int64 tuples_needed, PlanState *child_node)
sortState->bound = tuples_needed; sortState->bound = tuples_needed;
} }
} }
else if (IsA(child_node, IncrementalSortState))
{
/*
* If it is an IncrementalSort node, notify it that it can use bounded
* sort.
*
* Note: it is the responsibility of nodeIncrementalSort.c to react
* properly to changes of these parameters. If we ever redesign this,
* it'd be a good idea to integrate this signaling with the
* parameter-change mechanism.
*/
IncrementalSortState *sortState = (IncrementalSortState *) child_node;
if (tuples_needed < 0)
{
/* make sure flag gets reset if needed upon rescan */
sortState->bounded = false;
}
else
{
sortState->bounded = true;
sortState->bound = tuples_needed;
}
}
else if (IsA(child_node, AppendState)) else if (IsA(child_node, AppendState))
{ {
/* /*
......
This diff is collapsed.
...@@ -93,7 +93,8 @@ ExecSort(PlanState *pstate) ...@@ -93,7 +93,8 @@ ExecSort(PlanState *pstate)
plannode->collations, plannode->collations,
plannode->nullsFirst, plannode->nullsFirst,
work_mem, work_mem,
NULL, node->randomAccess); NULL,
node->randomAccess);
if (node->bounded) if (node->bounded)
tuplesort_set_bound(tuplesortstate, node->bound); tuplesort_set_bound(tuplesortstate, node->bound);
node->tuplesortstate = (void *) tuplesortstate; node->tuplesortstate = (void *) tuplesortstate;
......
...@@ -927,6 +927,24 @@ _copyMaterial(const Material *from) ...@@ -927,6 +927,24 @@ _copyMaterial(const Material *from)
} }
/*
* CopySortFields
*
* This function copies the fields of the Sort node. It is used by
* all the copy functions for classes which inherit from Sort.
*/
static void
CopySortFields(const Sort *from, Sort *newnode)
{
CopyPlanFields((const Plan *) from, (Plan *) newnode);
COPY_SCALAR_FIELD(numCols);
COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
}
/* /*
* _copySort * _copySort
*/ */
...@@ -938,13 +956,29 @@ _copySort(const Sort *from) ...@@ -938,13 +956,29 @@ _copySort(const Sort *from)
/* /*
* copy node superclass fields * copy node superclass fields
*/ */
CopyPlanFields((const Plan *) from, (Plan *) newnode); CopySortFields(from, newnode);
COPY_SCALAR_FIELD(numCols); return newnode;
COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber)); }
COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool)); /*
* _copyIncrementalSort
*/
static IncrementalSort *
_copyIncrementalSort(const IncrementalSort *from)
{
IncrementalSort *newnode = makeNode(IncrementalSort);
/*
* copy node superclass fields
*/
CopySortFields((const Sort *) from, (Sort *) newnode);
/*
* copy remainder of node
*/
COPY_SCALAR_FIELD(nPresortedCols);
return newnode; return newnode;
} }
...@@ -4898,6 +4932,9 @@ copyObjectImpl(const void *from) ...@@ -4898,6 +4932,9 @@ copyObjectImpl(const void *from)
case T_Sort: case T_Sort:
retval = _copySort(from); retval = _copySort(from);
break; break;
case T_IncrementalSort:
retval = _copyIncrementalSort(from);
break;
case T_Group: case T_Group:
retval = _copyGroup(from); retval = _copyGroup(from);
break; break;
......
...@@ -837,10 +837,8 @@ _outMaterial(StringInfo str, const Material *node) ...@@ -837,10 +837,8 @@ _outMaterial(StringInfo str, const Material *node)
} }
static void static void
_outSort(StringInfo str, const Sort *node) _outSortInfo(StringInfo str, const Sort *node)
{ {
WRITE_NODE_TYPE("SORT");
_outPlanInfo(str, (const Plan *) node); _outPlanInfo(str, (const Plan *) node);
WRITE_INT_FIELD(numCols); WRITE_INT_FIELD(numCols);
...@@ -850,6 +848,24 @@ _outSort(StringInfo str, const Sort *node) ...@@ -850,6 +848,24 @@ _outSort(StringInfo str, const Sort *node)
WRITE_BOOL_ARRAY(nullsFirst, node->numCols); WRITE_BOOL_ARRAY(nullsFirst, node->numCols);
} }
static void
_outSort(StringInfo str, const Sort *node)
{
WRITE_NODE_TYPE("SORT");
_outSortInfo(str, node);
}
static void
_outIncrementalSort(StringInfo str, const IncrementalSort *node)
{
WRITE_NODE_TYPE("INCREMENTALSORT");
_outSortInfo(str, (const Sort *) node);
WRITE_INT_FIELD(nPresortedCols);
}
static void static void
_outUnique(StringInfo str, const Unique *node) _outUnique(StringInfo str, const Unique *node)
{ {
...@@ -3786,6 +3802,9 @@ outNode(StringInfo str, const void *obj) ...@@ -3786,6 +3802,9 @@ outNode(StringInfo str, const void *obj)
case T_Sort: case T_Sort:
_outSort(str, obj); _outSort(str, obj);
break; break;
case T_IncrementalSort:
_outIncrementalSort(str, obj);
break;
case T_Unique: case T_Unique:
_outUnique(str, obj); _outUnique(str, obj);
break; break;
......
...@@ -2150,12 +2150,13 @@ _readMaterial(void) ...@@ -2150,12 +2150,13 @@ _readMaterial(void)
} }
/* /*
* _readSort * ReadCommonSort
* Assign the basic stuff of all nodes that inherit from Sort
*/ */
static Sort * static void
_readSort(void) ReadCommonSort(Sort *local_node)
{ {
READ_LOCALS(Sort); READ_TEMP_LOCALS();
ReadCommonPlan(&local_node->plan); ReadCommonPlan(&local_node->plan);
...@@ -2164,6 +2165,32 @@ _readSort(void) ...@@ -2164,6 +2165,32 @@ _readSort(void)
READ_OID_ARRAY(sortOperators, local_node->numCols); READ_OID_ARRAY(sortOperators, local_node->numCols);
READ_OID_ARRAY(collations, local_node->numCols); READ_OID_ARRAY(collations, local_node->numCols);
READ_BOOL_ARRAY(nullsFirst, local_node->numCols); READ_BOOL_ARRAY(nullsFirst, local_node->numCols);
}
/*
* _readSort
*/
static Sort *
_readSort(void)
{
READ_LOCALS_NO_FIELDS(Sort);
ReadCommonSort(local_node);
READ_DONE();
}
/*
* _readIncrementalSort
*/
static IncrementalSort *
_readIncrementalSort(void)
{
READ_LOCALS(IncrementalSort);
ReadCommonSort(&local_node->sort);
READ_INT_FIELD(nPresortedCols);
READ_DONE(); READ_DONE();
} }
...@@ -2801,6 +2828,8 @@ parseNodeString(void) ...@@ -2801,6 +2828,8 @@ parseNodeString(void)
return_value = _readMaterial(); return_value = _readMaterial();
else if (MATCH("SORT", 4)) else if (MATCH("SORT", 4))
return_value = _readSort(); return_value = _readSort();
else if (MATCH("INCREMENTALSORT", 15))
return_value = _readIncrementalSort();
else if (MATCH("GROUP", 5)) else if (MATCH("GROUP", 5))
return_value = _readGroup(); return_value = _readGroup();
else if (MATCH("AGG", 3)) else if (MATCH("AGG", 3))
......
...@@ -3881,6 +3881,10 @@ print_path(PlannerInfo *root, Path *path, int indent) ...@@ -3881,6 +3881,10 @@ print_path(PlannerInfo *root, Path *path, int indent)
ptype = "Sort"; ptype = "Sort";
subpath = ((SortPath *) path)->subpath; subpath = ((SortPath *) path)->subpath;
break; break;
case T_IncrementalSortPath:
ptype = "IncrementalSort";
subpath = ((SortPath *) path)->subpath;
break;
case T_GroupPath: case T_GroupPath:
ptype = "Group"; ptype = "Group";
subpath = ((GroupPath *) path)->subpath; subpath = ((GroupPath *) path)->subpath;
......
...@@ -128,6 +128,7 @@ bool enable_indexonlyscan = true; ...@@ -128,6 +128,7 @@ bool enable_indexonlyscan = true;
bool enable_bitmapscan = true; bool enable_bitmapscan = true;
bool enable_tidscan = true; bool enable_tidscan = true;
bool enable_sort = true; bool enable_sort = true;
bool enable_incrementalsort = true;
bool enable_hashagg = true; bool enable_hashagg = true;
bool enable_hashagg_disk = true; bool enable_hashagg_disk = true;
bool enable_groupingsets_hash_disk = false; bool enable_groupingsets_hash_disk = false;
...@@ -1648,9 +1649,9 @@ cost_recursive_union(Path *runion, Path *nrterm, Path *rterm) ...@@ -1648,9 +1649,9 @@ cost_recursive_union(Path *runion, Path *nrterm, Path *rterm)
} }
/* /*
* cost_sort * cost_tuplesort
* Determines and returns the cost of sorting a relation, including * Determines and returns the cost of sorting a relation using tuplesort,
* the cost of reading the input data. * not including the cost of reading the input data.
* *
* If the total volume of data to sort is less than sort_mem, we will do * If the total volume of data to sort is less than sort_mem, we will do
* an in-memory sort, which requires no I/O and about t*log2(t) tuple * an in-memory sort, which requires no I/O and about t*log2(t) tuple
...@@ -1677,39 +1678,23 @@ cost_recursive_union(Path *runion, Path *nrterm, Path *rterm) ...@@ -1677,39 +1678,23 @@ cost_recursive_union(Path *runion, Path *nrterm, Path *rterm)
* specifying nonzero comparison_cost; typically that's used for any extra * specifying nonzero comparison_cost; typically that's used for any extra
* work that has to be done to prepare the inputs to the comparison operators. * work that has to be done to prepare the inputs to the comparison operators.
* *
* 'pathkeys' is a list of sort keys
* 'input_cost' is the total cost for reading the input data
* 'tuples' is the number of tuples in the relation * 'tuples' is the number of tuples in the relation
* 'width' is the average tuple width in bytes * 'width' is the average tuple width in bytes
* 'comparison_cost' is the extra cost per comparison, if any * 'comparison_cost' is the extra cost per comparison, if any
* 'sort_mem' is the number of kilobytes of work memory allowed for the sort * 'sort_mem' is the number of kilobytes of work memory allowed for the sort
* 'limit_tuples' is the bound on the number of output tuples; -1 if no bound * 'limit_tuples' is the bound on the number of output tuples; -1 if no bound
*
* NOTE: some callers currently pass NIL for pathkeys because they
* can't conveniently supply the sort keys. Since this routine doesn't
* currently do anything with pathkeys anyway, that doesn't matter...
* but if it ever does, it should react gracefully to lack of key data.
* (Actually, the thing we'd most likely be interested in is just the number
* of sort keys, which all callers *could* supply.)
*/ */
void static void
cost_sort(Path *path, PlannerInfo *root, cost_tuplesort(Cost *startup_cost, Cost *run_cost,
List *pathkeys, Cost input_cost, double tuples, int width, double tuples, int width,
Cost comparison_cost, int sort_mem, Cost comparison_cost, int sort_mem,
double limit_tuples) double limit_tuples)
{ {
Cost startup_cost = input_cost;
Cost run_cost = 0;
double input_bytes = relation_byte_size(tuples, width); double input_bytes = relation_byte_size(tuples, width);
double output_bytes; double output_bytes;
double output_tuples; double output_tuples;
long sort_mem_bytes = sort_mem * 1024L; long sort_mem_bytes = sort_mem * 1024L;
if (!enable_sort)
startup_cost += disable_cost;
path->rows = tuples;
/* /*
* We want to be sure the cost of a sort is never estimated as zero, even * We want to be sure the cost of a sort is never estimated as zero, even
* if passed-in tuple count is zero. Besides, mustn't do log(0)... * if passed-in tuple count is zero. Besides, mustn't do log(0)...
...@@ -1748,7 +1733,7 @@ cost_sort(Path *path, PlannerInfo *root, ...@@ -1748,7 +1733,7 @@ cost_sort(Path *path, PlannerInfo *root,
* *
* Assume about N log2 N comparisons * Assume about N log2 N comparisons
*/ */
startup_cost += comparison_cost * tuples * LOG2(tuples); *startup_cost = comparison_cost * tuples * LOG2(tuples);
/* Disk costs */ /* Disk costs */
...@@ -1759,7 +1744,7 @@ cost_sort(Path *path, PlannerInfo *root, ...@@ -1759,7 +1744,7 @@ cost_sort(Path *path, PlannerInfo *root,
log_runs = 1.0; log_runs = 1.0;
npageaccesses = 2.0 * npages * log_runs; npageaccesses = 2.0 * npages * log_runs;
/* Assume 3/4ths of accesses are sequential, 1/4th are not */ /* Assume 3/4ths of accesses are sequential, 1/4th are not */
startup_cost += npageaccesses * *startup_cost += npageaccesses *
(seq_page_cost * 0.75 + random_page_cost * 0.25); (seq_page_cost * 0.75 + random_page_cost * 0.25);
} }
else if (tuples > 2 * output_tuples || input_bytes > sort_mem_bytes) else if (tuples > 2 * output_tuples || input_bytes > sort_mem_bytes)
...@@ -1770,12 +1755,12 @@ cost_sort(Path *path, PlannerInfo *root, ...@@ -1770,12 +1755,12 @@ cost_sort(Path *path, PlannerInfo *root,
* factor is a bit higher than for quicksort. Tweak it so that the * factor is a bit higher than for quicksort. Tweak it so that the
* cost curve is continuous at the crossover point. * cost curve is continuous at the crossover point.
*/ */
startup_cost += comparison_cost * tuples * LOG2(2.0 * output_tuples); *startup_cost = comparison_cost * tuples * LOG2(2.0 * output_tuples);
} }
else else
{ {
/* We'll use plain quicksort on all the input tuples */ /* We'll use plain quicksort on all the input tuples */
startup_cost += comparison_cost * tuples * LOG2(tuples); *startup_cost = comparison_cost * tuples * LOG2(tuples);
} }
/* /*
...@@ -1786,8 +1771,143 @@ cost_sort(Path *path, PlannerInfo *root, ...@@ -1786,8 +1771,143 @@ cost_sort(Path *path, PlannerInfo *root,
* here --- the upper LIMIT will pro-rate the run cost so we'd be double * here --- the upper LIMIT will pro-rate the run cost so we'd be double
* counting the LIMIT otherwise. * counting the LIMIT otherwise.
*/ */
run_cost += cpu_operator_cost * tuples; *run_cost = cpu_operator_cost * tuples;
}
/*
* cost_incremental_sort
* Determines and returns the cost of sorting a relation incrementally, when
* the input path is presorted by a prefix of the pathkeys.
*
* 'presorted_keys' is the number of leading pathkeys by which the input path
* is sorted.
*
* We estimate the number of groups into which the relation is divided by the
* leading pathkeys, and then calculate the cost of sorting a single group
* with tuplesort using cost_tuplesort().
*/
void
cost_incremental_sort(Path *path,
PlannerInfo *root, List *pathkeys, int presorted_keys,
Cost input_startup_cost, Cost input_total_cost,
double input_tuples, int width, Cost comparison_cost, int sort_mem,
double limit_tuples)
{
Cost startup_cost = 0,
run_cost = 0,
input_run_cost = input_total_cost - input_startup_cost;
double group_tuples,
input_groups;
Cost group_startup_cost,
group_run_cost,
group_input_run_cost;
List *presortedExprs = NIL;
ListCell *l;
int i = 0;
Assert(presorted_keys != 0);
/*
* We want to be sure the cost of a sort is never estimated as zero, even
* if passed-in tuple count is zero. Besides, mustn't do log(0)...
*/
if (input_tuples < 2.0)
input_tuples = 2.0;
/* Extract presorted keys as list of expressions */
foreach(l, pathkeys)
{
PathKey *key = (PathKey *) lfirst(l);
EquivalenceMember *member = (EquivalenceMember *)
linitial(key->pk_eclass->ec_members);
presortedExprs = lappend(presortedExprs, member->em_expr);
i++;
if (i >= presorted_keys)
break;
}
/* Estimate number of groups with equal presorted keys */
input_groups = estimate_num_groups(root, presortedExprs, input_tuples, NULL);
group_tuples = input_tuples / input_groups;
group_input_run_cost = input_run_cost / input_groups;
/*
* Estimate average cost of sorting of one group where presorted keys are
* equal. Incremental sort is sensitive to distribution of tuples to the
* groups, where we're relying on quite rough assumptions. Thus, we're
* pessimistic about incremental sort performance and increase its average
* group size by half.
*/
cost_tuplesort(&group_startup_cost, &group_run_cost,
1.5 * group_tuples, width, comparison_cost, sort_mem,
limit_tuples);
/*
* Startup cost of incremental sort is the startup cost of its first group
* plus the cost of its input.
*/
startup_cost += group_startup_cost
+ input_startup_cost + group_input_run_cost;
/*
* After we started producing tuples from the first group, the cost of
* producing all the tuples is given by the cost to finish processing this
* group, plus the total cost to process the remaining groups, plus the
* remaining cost of input.
*/
run_cost += group_run_cost
+ (group_run_cost + group_startup_cost) * (input_groups - 1)
+ group_input_run_cost * (input_groups - 1);
/*
* Incremental sort adds some overhead by itself. Firstly, it has to
* detect the sort groups. This is roughly equal to one extra copy and
* comparison per tuple. Secondly, it has to reset the tuplesort context
* for every group.
*/
run_cost += (cpu_tuple_cost + comparison_cost) * input_tuples;
run_cost += 2.0 * cpu_tuple_cost * input_groups;
path->rows = input_tuples;
path->startup_cost = startup_cost;
path->total_cost = startup_cost + run_cost;
}
/*
* cost_sort
* Determines and returns the cost of sorting a relation, including
* the cost of reading the input data.
*
* NOTE: some callers currently pass NIL for pathkeys because they
* can't conveniently supply the sort keys. Since this routine doesn't
* currently do anything with pathkeys anyway, that doesn't matter...
* but if it ever does, it should react gracefully to lack of key data.
* (Actually, the thing we'd most likely be interested in is just the number
* of sort keys, which all callers *could* supply.)
*/
void
cost_sort(Path *path, PlannerInfo *root,
List *pathkeys, Cost input_cost, double tuples, int width,
Cost comparison_cost, int sort_mem,
double limit_tuples)
{
Cost startup_cost;
Cost run_cost;
cost_tuplesort(&startup_cost, &run_cost,
tuples, width,
comparison_cost, sort_mem,
limit_tuples);
if (!enable_sort)
startup_cost += disable_cost;
startup_cost += input_cost;
path->rows = tuples;
path->startup_cost = startup_cost; path->startup_cost = startup_cost;
path->total_cost = startup_cost + run_cost; path->total_cost = startup_cost + run_cost;
} }
......
...@@ -334,6 +334,60 @@ pathkeys_contained_in(List *keys1, List *keys2) ...@@ -334,6 +334,60 @@ pathkeys_contained_in(List *keys1, List *keys2)
return false; return false;
} }
/*
* pathkeys_count_contained_in
* Same as pathkeys_contained_in, but also sets length of longest
* common prefix of keys1 and keys2.
*/
bool
pathkeys_count_contained_in(List *keys1, List *keys2, int *n_common)
{
int n = 0;
ListCell *key1,
*key2;
/*
* See if we can avoid looping through both lists. This optimization
* gains us several percent in planning time in a worst-case test.
*/
if (keys1 == keys2)
{
*n_common = list_length(keys1);
return true;
}
else if (keys1 == NIL)
{
*n_common = 0;
return true;
}
else if (keys2 == NIL)
{
*n_common = 0;
return false;
}
/*
* If both lists are non-empty, iterate through both to find out how many
* items are shared.
*/
forboth(key1, keys1, key2, keys2)
{
PathKey *pathkey1 = (PathKey *) lfirst(key1);
PathKey *pathkey2 = (PathKey *) lfirst(key2);
if (pathkey1 != pathkey2)
{
*n_common = n;
return false;
}
n++;
}
/* If we ended with a null value, then we've processed the whole list. */
*n_common = n;
return (key1 == NULL);
}
/* /*
* get_cheapest_path_for_pathkeys * get_cheapest_path_for_pathkeys
* Find the cheapest path (according to the specified criterion) that * Find the cheapest path (according to the specified criterion) that
...@@ -1786,26 +1840,26 @@ right_merge_direction(PlannerInfo *root, PathKey *pathkey) ...@@ -1786,26 +1840,26 @@ right_merge_direction(PlannerInfo *root, PathKey *pathkey)
* Count the number of pathkeys that are useful for meeting the * Count the number of pathkeys that are useful for meeting the
* query's requested output ordering. * query's requested output ordering.
* *
* Unlike merge pathkeys, this is an all-or-nothing affair: it does us * Because of the possibility of incremental sort, a prefix list of
* no good to order by just the first key(s) of the requested ordering. * keys is potentially useful for improving the performance of the requested
* So the result is always either 0 or list_length(root->query_pathkeys). * ordering. Thus we return 0 if no valuable keys are found, or the number
* of leading keys shared by the list and the requested ordering.
*/ */
static int static int
pathkeys_useful_for_ordering(PlannerInfo *root, List *pathkeys) pathkeys_useful_for_ordering(PlannerInfo *root, List *pathkeys)
{ {
int n_common_pathkeys;
if (root->query_pathkeys == NIL) if (root->query_pathkeys == NIL)
return 0; /* no special ordering requested */ return 0; /* no special ordering requested */
if (pathkeys == NIL) if (pathkeys == NIL)
return 0; /* unordered path */ return 0; /* unordered path */
if (pathkeys_contained_in(root->query_pathkeys, pathkeys)) (void) pathkeys_count_contained_in(root->query_pathkeys, pathkeys,
{ &n_common_pathkeys);
/* It's useful ... or at least the first N keys are */
return list_length(root->query_pathkeys);
}
return 0; /* path ordering not useful */ return n_common_pathkeys;
} }
/* /*
......
...@@ -98,6 +98,8 @@ static Plan *create_projection_plan(PlannerInfo *root, ...@@ -98,6 +98,8 @@ static Plan *create_projection_plan(PlannerInfo *root,
int flags); int flags);
static Plan *inject_projection_plan(Plan *subplan, List *tlist, bool parallel_safe); static Plan *inject_projection_plan(Plan *subplan, List *tlist, bool parallel_safe);
static Sort *create_sort_plan(PlannerInfo *root, SortPath *best_path, int flags); static Sort *create_sort_plan(PlannerInfo *root, SortPath *best_path, int flags);
static IncrementalSort *create_incrementalsort_plan(PlannerInfo *root,
IncrementalSortPath *best_path, int flags);
static Group *create_group_plan(PlannerInfo *root, GroupPath *best_path); static Group *create_group_plan(PlannerInfo *root, GroupPath *best_path);
static Unique *create_upper_unique_plan(PlannerInfo *root, UpperUniquePath *best_path, static Unique *create_upper_unique_plan(PlannerInfo *root, UpperUniquePath *best_path,
int flags); int flags);
...@@ -244,6 +246,10 @@ static MergeJoin *make_mergejoin(List *tlist, ...@@ -244,6 +246,10 @@ static MergeJoin *make_mergejoin(List *tlist,
static Sort *make_sort(Plan *lefttree, int numCols, static Sort *make_sort(Plan *lefttree, int numCols,
AttrNumber *sortColIdx, Oid *sortOperators, AttrNumber *sortColIdx, Oid *sortOperators,
Oid *collations, bool *nullsFirst); Oid *collations, bool *nullsFirst);
static IncrementalSort *make_incrementalsort(Plan *lefttree,
int numCols, int nPresortedCols,
AttrNumber *sortColIdx, Oid *sortOperators,
Oid *collations, bool *nullsFirst);
static Plan *prepare_sort_from_pathkeys(Plan *lefttree, List *pathkeys, static Plan *prepare_sort_from_pathkeys(Plan *lefttree, List *pathkeys,
Relids relids, Relids relids,
const AttrNumber *reqColIdx, const AttrNumber *reqColIdx,
...@@ -258,6 +264,8 @@ static EquivalenceMember *find_ec_member_for_tle(EquivalenceClass *ec, ...@@ -258,6 +264,8 @@ static EquivalenceMember *find_ec_member_for_tle(EquivalenceClass *ec,
Relids relids); Relids relids);
static Sort *make_sort_from_pathkeys(Plan *lefttree, List *pathkeys, static Sort *make_sort_from_pathkeys(Plan *lefttree, List *pathkeys,
Relids relids); Relids relids);
static IncrementalSort *make_incrementalsort_from_pathkeys(Plan *lefttree,
List *pathkeys, Relids relids, int nPresortedCols);
static Sort *make_sort_from_groupcols(List *groupcls, static Sort *make_sort_from_groupcols(List *groupcls,
AttrNumber *grpColIdx, AttrNumber *grpColIdx,
Plan *lefttree); Plan *lefttree);
...@@ -460,6 +468,11 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags) ...@@ -460,6 +468,11 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
(SortPath *) best_path, (SortPath *) best_path,
flags); flags);
break; break;
case T_IncrementalSort:
plan = (Plan *) create_incrementalsort_plan(root,
(IncrementalSortPath *) best_path,
flags);
break;
case T_Group: case T_Group:
plan = (Plan *) create_group_plan(root, plan = (Plan *) create_group_plan(root,
(GroupPath *) best_path); (GroupPath *) best_path);
...@@ -1994,6 +2007,32 @@ create_sort_plan(PlannerInfo *root, SortPath *best_path, int flags) ...@@ -1994,6 +2007,32 @@ create_sort_plan(PlannerInfo *root, SortPath *best_path, int flags)
return plan; return plan;
} }
/*
* create_incrementalsort_plan
*
* Do the same as create_sort_plan, but create an IncrementalSort plan.
*/
static IncrementalSort *
create_incrementalsort_plan(PlannerInfo *root, IncrementalSortPath *best_path,
int flags)
{
IncrementalSort *plan;
Plan *subplan;
/* See comments in create_sort_plan() above */
subplan = create_plan_recurse(root, best_path->spath.subpath,
flags | CP_SMALL_TLIST);
plan = make_incrementalsort_from_pathkeys(subplan,
best_path->spath.path.pathkeys,
IS_OTHER_REL(best_path->spath.subpath->parent) ?
best_path->spath.path.parent->relids : NULL,
best_path->nPresortedCols);
copy_generic_path_info(&plan->sort.plan, (Path *) best_path);
return plan;
}
/* /*
* create_group_plan * create_group_plan
* *
...@@ -5090,6 +5129,12 @@ label_sort_with_costsize(PlannerInfo *root, Sort *plan, double limit_tuples) ...@@ -5090,6 +5129,12 @@ label_sort_with_costsize(PlannerInfo *root, Sort *plan, double limit_tuples)
Plan *lefttree = plan->plan.lefttree; Plan *lefttree = plan->plan.lefttree;
Path sort_path; /* dummy for result of cost_sort */ Path sort_path; /* dummy for result of cost_sort */
/*
* This function shouldn't have to deal with IncrementalSort plans because
* they are only created from corresponding Path nodes.
*/
Assert(IsA(plan, Sort));
cost_sort(&sort_path, root, NIL, cost_sort(&sort_path, root, NIL,
lefttree->total_cost, lefttree->total_cost,
lefttree->plan_rows, lefttree->plan_rows,
...@@ -5677,9 +5722,12 @@ make_sort(Plan *lefttree, int numCols, ...@@ -5677,9 +5722,12 @@ make_sort(Plan *lefttree, int numCols,
AttrNumber *sortColIdx, Oid *sortOperators, AttrNumber *sortColIdx, Oid *sortOperators,
Oid *collations, bool *nullsFirst) Oid *collations, bool *nullsFirst)
{ {
Sort *node = makeNode(Sort); Sort *node;
Plan *plan = &node->plan; Plan *plan;
node = makeNode(Sort);
plan = &node->plan;
plan->targetlist = lefttree->targetlist; plan->targetlist = lefttree->targetlist;
plan->qual = NIL; plan->qual = NIL;
plan->lefttree = lefttree; plan->lefttree = lefttree;
...@@ -5693,6 +5741,37 @@ make_sort(Plan *lefttree, int numCols, ...@@ -5693,6 +5741,37 @@ make_sort(Plan *lefttree, int numCols,
return node; return node;
} }
/*
* make_incrementalsort --- basic routine to build an IncrementalSort plan node
*
* Caller must have built the sortColIdx, sortOperators, collations, and
* nullsFirst arrays already.
*/
static IncrementalSort *
make_incrementalsort(Plan *lefttree, int numCols, int nPresortedCols,
AttrNumber *sortColIdx, Oid *sortOperators,
Oid *collations, bool *nullsFirst)
{
IncrementalSort *node;
Plan *plan;
node = makeNode(IncrementalSort);
plan = &node->sort.plan;
plan->targetlist = lefttree->targetlist;
plan->qual = NIL;
plan->lefttree = lefttree;
plan->righttree = NULL;
node->nPresortedCols = nPresortedCols;
node->sort.numCols = numCols;
node->sort.sortColIdx = sortColIdx;
node->sort.sortOperators = sortOperators;
node->sort.collations = collations;
node->sort.nullsFirst = nullsFirst;
return node;
}
/* /*
* prepare_sort_from_pathkeys * prepare_sort_from_pathkeys
* Prepare to sort according to given pathkeys * Prepare to sort according to given pathkeys
...@@ -6039,6 +6118,42 @@ make_sort_from_pathkeys(Plan *lefttree, List *pathkeys, Relids relids) ...@@ -6039,6 +6118,42 @@ make_sort_from_pathkeys(Plan *lefttree, List *pathkeys, Relids relids)
collations, nullsFirst); collations, nullsFirst);
} }
/*
* make_incrementalsort_from_pathkeys
* Create sort plan to sort according to given pathkeys
*
* 'lefttree' is the node which yields input tuples
* 'pathkeys' is the list of pathkeys by which the result is to be sorted
* 'relids' is the set of relations required by prepare_sort_from_pathkeys()
* 'nPresortedCols' is the number of presorted columns in input tuples
*/
static IncrementalSort *
make_incrementalsort_from_pathkeys(Plan *lefttree, List *pathkeys,
Relids relids, int nPresortedCols)
{
int numsortkeys;
AttrNumber *sortColIdx;
Oid *sortOperators;
Oid *collations;
bool *nullsFirst;
/* Compute sort column info, and adjust lefttree as needed */
lefttree = prepare_sort_from_pathkeys(lefttree, pathkeys,
relids,
NULL,
false,
&numsortkeys,
&sortColIdx,
&sortOperators,
&collations,
&nullsFirst);
/* Now build the Sort node */
return make_incrementalsort(lefttree, numsortkeys, nPresortedCols,
sortColIdx, sortOperators,
collations, nullsFirst);
}
/* /*
* make_sort_from_sortclauses * make_sort_from_sortclauses
* Create sort plan to sort according to given sortclauses * Create sort plan to sort according to given sortclauses
...@@ -6774,6 +6889,7 @@ is_projection_capable_path(Path *path) ...@@ -6774,6 +6889,7 @@ is_projection_capable_path(Path *path)
case T_Hash: case T_Hash:
case T_Material: case T_Material:
case T_Sort: case T_Sort:
case T_IncrementalSort:
case T_Unique: case T_Unique:
case T_SetOp: case T_SetOp:
case T_LockRows: case T_LockRows:
......
...@@ -4924,13 +4924,16 @@ create_distinct_paths(PlannerInfo *root, ...@@ -4924,13 +4924,16 @@ create_distinct_paths(PlannerInfo *root,
* Build a new upperrel containing Paths for ORDER BY evaluation. * Build a new upperrel containing Paths for ORDER BY evaluation.
* *
* All paths in the result must satisfy the ORDER BY ordering. * All paths in the result must satisfy the ORDER BY ordering.
* The only new path we need consider is an explicit sort on the * The only new paths we need consider are an explicit full sort
* cheapest-total existing path. * and incremental sort on the cheapest-total existing path.
* *
* input_rel: contains the source-data Paths * input_rel: contains the source-data Paths
* target: the output tlist the result Paths must emit * target: the output tlist the result Paths must emit
* limit_tuples: estimated bound on the number of output tuples, * limit_tuples: estimated bound on the number of output tuples,
* or -1 if no LIMIT or couldn't estimate * or -1 if no LIMIT or couldn't estimate
*
* XXX This only looks at sort_pathkeys. I wonder if it needs to look at the
* other pathkeys (grouping, ...) like generate_useful_gather_paths.
*/ */
static RelOptInfo * static RelOptInfo *
create_ordered_paths(PlannerInfo *root, create_ordered_paths(PlannerInfo *root,
...@@ -4964,29 +4967,77 @@ create_ordered_paths(PlannerInfo *root, ...@@ -4964,29 +4967,77 @@ create_ordered_paths(PlannerInfo *root,
foreach(lc, input_rel->pathlist) foreach(lc, input_rel->pathlist)
{ {
Path *path = (Path *) lfirst(lc); Path *input_path = (Path *) lfirst(lc);
Path *sorted_path = input_path;
bool is_sorted; bool is_sorted;
int presorted_keys;
is_sorted = pathkeys_count_contained_in(root->sort_pathkeys,
input_path->pathkeys, &presorted_keys);
is_sorted = pathkeys_contained_in(root->sort_pathkeys, if (is_sorted)
path->pathkeys);
if (path == cheapest_input_path || is_sorted)
{ {
if (!is_sorted) /* Use the input path as is, but add a projection step if needed */
if (sorted_path->pathtarget != target)
sorted_path = apply_projection_to_path(root, ordered_rel,
sorted_path, target);
add_path(ordered_rel, sorted_path);
}
else
{
/*
* Try adding an explicit sort, but only to the cheapest total path
* since a full sort should generally add the same cost to all
* paths.
*/
if (input_path == cheapest_input_path)
{ {
/* An explicit sort here can take advantage of LIMIT */ /*
path = (Path *) create_sort_path(root, * Sort the cheapest input path. An explicit sort here can
ordered_rel, * take advantage of LIMIT.
path, */
root->sort_pathkeys, sorted_path = (Path *) create_sort_path(root,
limit_tuples); ordered_rel,
input_path,
root->sort_pathkeys,
limit_tuples);
/* Add projection step if needed */
if (sorted_path->pathtarget != target)
sorted_path = apply_projection_to_path(root, ordered_rel,
sorted_path, target);
add_path(ordered_rel, sorted_path);
} }
/*
* If incremental sort is enabled, then try it as well. Unlike with
* regular sorts, we can't just look at the cheapest path, because
* the cost of incremental sort depends on how well presorted the
* path is. Additionally incremental sort may enable a cheaper
* startup path to win out despite higher total cost.
*/
if (!enable_incrementalsort)
continue;
/* Likewise, if the path can't be used for incremental sort. */
if (!presorted_keys)
continue;
/* Also consider incremental sort. */
sorted_path = (Path *) create_incremental_sort_path(root,
ordered_rel,
input_path,
root->sort_pathkeys,
presorted_keys,
limit_tuples);
/* Add projection step if needed */ /* Add projection step if needed */
if (path->pathtarget != target) if (sorted_path->pathtarget != target)
path = apply_projection_to_path(root, ordered_rel, sorted_path = apply_projection_to_path(root, ordered_rel,
path, target); sorted_path, target);
add_path(ordered_rel, path); add_path(ordered_rel, sorted_path);
} }
} }
......
...@@ -678,6 +678,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset) ...@@ -678,6 +678,7 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
case T_Material: case T_Material:
case T_Sort: case T_Sort:
case T_IncrementalSort:
case T_Unique: case T_Unique:
case T_SetOp: case T_SetOp:
......
...@@ -2688,6 +2688,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, ...@@ -2688,6 +2688,7 @@ finalize_plan(PlannerInfo *root, Plan *plan,
case T_Hash: case T_Hash:
case T_Material: case T_Material:
case T_Sort: case T_Sort:
case T_IncrementalSort:
case T_Unique: case T_Unique:
case T_SetOp: case T_SetOp:
case T_Group: case T_Group:
......
...@@ -2753,6 +2753,57 @@ create_set_projection_path(PlannerInfo *root, ...@@ -2753,6 +2753,57 @@ create_set_projection_path(PlannerInfo *root,
return pathnode; return pathnode;
} }
/*
* create_incremental_sort_path
* Creates a pathnode that represents performing an incremental sort.
*
* 'rel' is the parent relation associated with the result
* 'subpath' is the path representing the source of data
* 'pathkeys' represents the desired sort order
* 'presorted_keys' is the number of keys by which the input path is
* already sorted
* 'limit_tuples' is the estimated bound on the number of output tuples,
* or -1 if no LIMIT or couldn't estimate
*/
SortPath *
create_incremental_sort_path(PlannerInfo *root,
RelOptInfo *rel,
Path *subpath,
List *pathkeys,
int presorted_keys,
double limit_tuples)
{
IncrementalSortPath *sort = makeNode(IncrementalSortPath);
SortPath *pathnode = &sort->spath;
pathnode->path.pathtype = T_IncrementalSort;
pathnode->path.parent = rel;
/* Sort doesn't project, so use source path's pathtarget */
pathnode->path.pathtarget = subpath->pathtarget;
/* For now, assume we are above any joins, so no parameterization */
pathnode->path.param_info = NULL;
pathnode->path.parallel_aware = false;
pathnode->path.parallel_safe = rel->consider_parallel &&
subpath->parallel_safe;
pathnode->path.parallel_workers = subpath->parallel_workers;
pathnode->path.pathkeys = pathkeys;
pathnode->subpath = subpath;
cost_incremental_sort(&pathnode->path,
root, pathkeys, presorted_keys,
subpath->startup_cost,
subpath->total_cost,
subpath->rows,
subpath->pathtarget->width,
0.0, /* XXX comparison_cost shouldn't be 0? */
work_mem, limit_tuples);
sort->nPresortedCols = presorted_keys;
return pathnode;
}
/* /*
* create_sort_path * create_sort_path
* Creates a pathnode that represents performing an explicit sort. * Creates a pathnode that represents performing an explicit sort.
......
...@@ -991,6 +991,15 @@ static struct config_bool ConfigureNamesBool[] = ...@@ -991,6 +991,15 @@ static struct config_bool ConfigureNamesBool[] =
true, true,
NULL, NULL, NULL NULL, NULL, NULL
}, },
{
{"enable_incrementalsort", PGC_USERSET, QUERY_TUNING_METHOD,
gettext_noop("Enables the planner's use of incremental sort steps."),
NULL
},
&enable_incrementalsort,
true,
NULL, NULL, NULL
},
{ {
{"enable_hashagg", PGC_USERSET, QUERY_TUNING_METHOD, {"enable_hashagg", PGC_USERSET, QUERY_TUNING_METHOD,
gettext_noop("Enables the planner's use of hashed aggregation plans."), gettext_noop("Enables the planner's use of hashed aggregation plans."),
......
...@@ -360,6 +360,7 @@ ...@@ -360,6 +360,7 @@
#enable_parallel_append = on #enable_parallel_append = on
#enable_seqscan = on #enable_seqscan = on
#enable_sort = on #enable_sort = on
#enable_incrementalsort = on
#enable_tidscan = on #enable_tidscan = on
#enable_partitionwise_join = off #enable_partitionwise_join = off
#enable_partitionwise_aggregate = off #enable_partitionwise_aggregate = off
......
This diff is collapsed.
...@@ -86,10 +86,12 @@ ...@@ -86,10 +86,12 @@
#define SO_nodeDisplay(l) nodeDisplay(l) #define SO_nodeDisplay(l) nodeDisplay(l)
#define SO_printf(s) printf(s) #define SO_printf(s) printf(s)
#define SO1_printf(s, p) printf(s, p) #define SO1_printf(s, p) printf(s, p)
#define SO2_printf(s, p1, p2) printf(s, p1, p2)
#else #else
#define SO_nodeDisplay(l) #define SO_nodeDisplay(l)
#define SO_printf(s) #define SO_printf(s)
#define SO1_printf(s, p) #define SO1_printf(s, p)
#define SO2_printf(s, p1, p2)
#endif /* EXEC_SORTDEBUG */ #endif /* EXEC_SORTDEBUG */
/* ---------------- /* ----------------
......
/*-------------------------------------------------------------------------
*
* nodeIncrementalSort.h
*
* Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* src/include/executor/nodeIncrementalSort.h
*
*-------------------------------------------------------------------------
*/
#ifndef NODEINCREMENTALSORT_H
#define NODEINCREMENTALSORT_H
#include "access/parallel.h"
#include "nodes/execnodes.h"
extern IncrementalSortState *ExecInitIncrementalSort(IncrementalSort *node, EState *estate, int eflags);
extern void ExecEndIncrementalSort(IncrementalSortState *node);
extern void ExecReScanIncrementalSort(IncrementalSortState *node);
/* parallel instrumentation support */
extern void ExecIncrementalSortEstimate(IncrementalSortState *node, ParallelContext *pcxt);
extern void ExecIncrementalSortInitializeDSM(IncrementalSortState *node, ParallelContext *pcxt);
extern void ExecIncrementalSortInitializeWorker(IncrementalSortState *node, ParallelWorkerContext *pcxt);
extern void ExecIncrementalSortRetrieveInstrumentation(IncrementalSortState *node);
#endif /* NODEINCREMENTALSORT_H */
...@@ -1982,6 +1982,21 @@ typedef struct MaterialState ...@@ -1982,6 +1982,21 @@ typedef struct MaterialState
Tuplestorestate *tuplestorestate; Tuplestorestate *tuplestorestate;
} MaterialState; } MaterialState;
/* ----------------
* When performing sorting by multiple keys, it's possible that the input
* dataset is already sorted on a prefix of those keys. We call these
* "presorted keys".
* PresortedKeyData represents information about one such key.
* ----------------
*/
typedef struct PresortedKeyData
{
FmgrInfo flinfo; /* comparison function info */
FunctionCallInfo fcinfo; /* comparison function call info */
OffsetNumber attno; /* attribute number in tuple */
} PresortedKeyData;
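
/*
 * Editorial sketch (not part of this commit's diff): one way these fields
 * can be used to check whether a tuple still belongs to the current prefix
 * group.  The function name and surrounding setup are assumptions - the
 * actual executor code lives in nodeIncrementalSort.c (collapsed above) -
 * and each keys[i].fcinfo is assumed to be initialized for a two-argument
 * comparison function returning equality as a boolean.  The usual headers
 * (postgres.h, fmgr.h, executor/tuptable.h) are assumed to be included.
 */
static bool
prefix_keys_match(PresortedKeyData *keys, int nkeys,
				  TupleTableSlot *pivot, TupleTableSlot *tuple)
{
	/* Later keys are the most likely to change, so compare them first. */
	for (int i = nkeys - 1; i >= 0; i--)
	{
		PresortedKeyData *key = &keys[i];
		bool		isnullA,
					isnullB;
		Datum		datumA = slot_getattr(pivot, key->attno, &isnullA);
		Datum		datumB = slot_getattr(tuple, key->attno, &isnullB);
		Datum		result;

		/* Two NULLs compare as equal; NULL vs non-NULL ends the group. */
		if (isnullA || isnullB)
		{
			if (isnullA == isnullB)
				continue;
			return false;
		}

		key->fcinfo->args[0].value = datumA;
		key->fcinfo->args[0].isnull = false;
		key->fcinfo->args[1].value = datumB;
		key->fcinfo->args[1].isnull = false;
		key->fcinfo->isnull = false;

		result = FunctionCallInvoke(key->fcinfo);
		if (key->fcinfo->isnull)
			elog(ERROR, "comparison function %u returned NULL",
				 key->flinfo.fn_oid);

		if (!DatumGetBool(result))
			return false;
	}

	return true;
}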
/* ---------------- /* ----------------
* Shared memory container for per-worker sort information * Shared memory container for per-worker sort information
* ---------------- * ----------------
...@@ -2010,6 +2025,71 @@ typedef struct SortState ...@@ -2010,6 +2025,71 @@ typedef struct SortState
SharedSortInfo *shared_info; /* one entry per worker */ SharedSortInfo *shared_info; /* one entry per worker */
} SortState; } SortState;
/* ----------------
* Instrumentation information for IncrementalSort
* ----------------
*/
typedef struct IncrementalSortGroupInfo
{
int64 groupCount;
long maxDiskSpaceUsed;
long totalDiskSpaceUsed;
long maxMemorySpaceUsed;
long totalMemorySpaceUsed;
bits32 sortMethods; /* bitmask of TuplesortMethod */
} IncrementalSortGroupInfo;
typedef struct IncrementalSortInfo
{
IncrementalSortGroupInfo fullsortGroupInfo;
IncrementalSortGroupInfo prefixsortGroupInfo;
} IncrementalSortInfo;
/* ----------------
* Shared memory container for per-worker incremental sort information
* ----------------
*/
typedef struct SharedIncrementalSortInfo
{
int num_workers;
IncrementalSortInfo sinfo[FLEXIBLE_ARRAY_MEMBER];
} SharedIncrementalSortInfo;
/* ----------------
* IncrementalSortState information
* ----------------
*/
typedef enum
{
INCSORT_LOADFULLSORT,
INCSORT_LOADPREFIXSORT,
INCSORT_READFULLSORT,
INCSORT_READPREFIXSORT,
} IncrementalSortExecutionStatus;
typedef struct IncrementalSortState
{
ScanState ss; /* its first field is NodeTag */
bool bounded; /* is the result set bounded? */
int64 bound; /* if bounded, how many tuples are needed */
bool outerNodeDone; /* finished fetching tuples from outer node */
int64 bound_Done; /* value of bound we did the sort with */
IncrementalSortExecutionStatus execution_status;
int64 n_fullsort_remaining;
Tuplesortstate *fullsort_state; /* private state of tuplesort.c */
Tuplesortstate *prefixsort_state; /* private state of tuplesort.c */
/* the keys by which the input path is already sorted */
PresortedKeyData *presorted_keys;
IncrementalSortInfo incsort_info;
/* slot for pivot tuple defining values of presorted keys within group */
TupleTableSlot *group_pivot;
TupleTableSlot *transfer_tuple;
bool am_worker; /* are we a worker? */
SharedIncrementalSortInfo *shared_info; /* one entry per worker */
} IncrementalSortState;
/* --------------------- /* ---------------------
* GroupState information * GroupState information
* --------------------- * ---------------------
......
...@@ -74,6 +74,7 @@ typedef enum NodeTag ...@@ -74,6 +74,7 @@ typedef enum NodeTag
T_HashJoin, T_HashJoin,
T_Material, T_Material,
T_Sort, T_Sort,
T_IncrementalSort,
T_Group, T_Group,
T_Agg, T_Agg,
T_WindowAgg, T_WindowAgg,
...@@ -130,6 +131,7 @@ typedef enum NodeTag ...@@ -130,6 +131,7 @@ typedef enum NodeTag
T_HashJoinState, T_HashJoinState,
T_MaterialState, T_MaterialState,
T_SortState, T_SortState,
T_IncrementalSortState,
T_GroupState, T_GroupState,
T_AggState, T_AggState,
T_WindowAggState, T_WindowAggState,
...@@ -245,6 +247,7 @@ typedef enum NodeTag ...@@ -245,6 +247,7 @@ typedef enum NodeTag
T_ProjectionPath, T_ProjectionPath,
T_ProjectSetPath, T_ProjectSetPath,
T_SortPath, T_SortPath,
T_IncrementalSortPath,
T_GroupPath, T_GroupPath,
T_UpperUniquePath, T_UpperUniquePath,
T_AggPath, T_AggPath,
......
...@@ -1638,6 +1638,15 @@ typedef struct SortPath ...@@ -1638,6 +1638,15 @@ typedef struct SortPath
Path *subpath; /* path representing input source */ Path *subpath; /* path representing input source */
} SortPath; } SortPath;
/*
* IncrementalSortPath
*/
typedef struct IncrementalSortPath
{
SortPath spath;
int nPresortedCols; /* number of presorted columns */
} IncrementalSortPath;
/* /*
* GroupPath represents grouping (of presorted input) * GroupPath represents grouping (of presorted input)
* *
......
...@@ -774,6 +774,16 @@ typedef struct Sort ...@@ -774,6 +774,16 @@ typedef struct Sort
bool *nullsFirst; /* NULLS FIRST/LAST directions */ bool *nullsFirst; /* NULLS FIRST/LAST directions */
} Sort; } Sort;
/* ----------------
* incremental sort node
* ----------------
*/
typedef struct IncrementalSort
{
Sort sort;
int nPresortedCols; /* number of presorted columns */
} IncrementalSort;
/* --------------- /* ---------------
* group node - * group node -
* Used for queries with GROUP BY (but no aggregates) specified. * Used for queries with GROUP BY (but no aggregates) specified.
......
...@@ -53,6 +53,7 @@ extern PGDLLIMPORT bool enable_indexonlyscan; ...@@ -53,6 +53,7 @@ extern PGDLLIMPORT bool enable_indexonlyscan;
extern PGDLLIMPORT bool enable_bitmapscan; extern PGDLLIMPORT bool enable_bitmapscan;
extern PGDLLIMPORT bool enable_tidscan; extern PGDLLIMPORT bool enable_tidscan;
extern PGDLLIMPORT bool enable_sort; extern PGDLLIMPORT bool enable_sort;
extern PGDLLIMPORT bool enable_incrementalsort;
extern PGDLLIMPORT bool enable_hashagg; extern PGDLLIMPORT bool enable_hashagg;
extern PGDLLIMPORT bool enable_hashagg_disk; extern PGDLLIMPORT bool enable_hashagg_disk;
extern PGDLLIMPORT bool enable_groupingsets_hash_disk; extern PGDLLIMPORT bool enable_groupingsets_hash_disk;
...@@ -103,6 +104,11 @@ extern void cost_sort(Path *path, PlannerInfo *root, ...@@ -103,6 +104,11 @@ extern void cost_sort(Path *path, PlannerInfo *root,
List *pathkeys, Cost input_cost, double tuples, int width, List *pathkeys, Cost input_cost, double tuples, int width,
Cost comparison_cost, int sort_mem, Cost comparison_cost, int sort_mem,
double limit_tuples); double limit_tuples);
extern void cost_incremental_sort(Path *path,
PlannerInfo *root, List *pathkeys, int presorted_keys,
Cost input_startup_cost, Cost input_total_cost,
double input_tuples, int width, Cost comparison_cost, int sort_mem,
double limit_tuples);
extern void cost_append(AppendPath *path); extern void cost_append(AppendPath *path);
extern void cost_merge_append(Path *path, PlannerInfo *root, extern void cost_merge_append(Path *path, PlannerInfo *root,
List *pathkeys, int n_streams, List *pathkeys, int n_streams,
......
...@@ -184,6 +184,12 @@ extern ProjectSetPath *create_set_projection_path(PlannerInfo *root, ...@@ -184,6 +184,12 @@ extern ProjectSetPath *create_set_projection_path(PlannerInfo *root,
RelOptInfo *rel, RelOptInfo *rel,
Path *subpath, Path *subpath,
PathTarget *target); PathTarget *target);
extern SortPath *create_incremental_sort_path(PlannerInfo *root,
RelOptInfo *rel,
Path *subpath,
List *pathkeys,
int presorted_keys,
double limit_tuples);
extern SortPath *create_sort_path(PlannerInfo *root, extern SortPath *create_sort_path(PlannerInfo *root,
RelOptInfo *rel, RelOptInfo *rel,
Path *subpath, Path *subpath,
......
...@@ -185,6 +185,7 @@ typedef enum ...@@ -185,6 +185,7 @@ typedef enum
extern PathKeysComparison compare_pathkeys(List *keys1, List *keys2); extern PathKeysComparison compare_pathkeys(List *keys1, List *keys2);
extern bool pathkeys_contained_in(List *keys1, List *keys2); extern bool pathkeys_contained_in(List *keys1, List *keys2);
extern bool pathkeys_count_contained_in(List *keys1, List *keys2, int *n_common);
extern Path *get_cheapest_path_for_pathkeys(List *paths, List *pathkeys, extern Path *get_cheapest_path_for_pathkeys(List *paths, List *pathkeys,
Relids required_outer, Relids required_outer,
CostSelector cost_criterion, CostSelector cost_criterion,
......
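Taken together, pathkeys_count_contained_in and create_incremental_sort_path give the planner what it needs to consider an incremental sort: count how many leading pathkeys the input path already provides, and build the new path when that count is positive. The sketch below shows the intended shape of such a call site; the surrounding function and variable names are hypothetical, loosely modeled on how a caller would use the declarations above rather than copied from the planner code.

/*
 * Sketch of a planner call site for the two new entry points declared
 * above.  Hypothetical wrapper; the real call sites live in the planner.
 */
#include "postgres.h"
#include "optimizer/pathnode.h"
#include "optimizer/paths.h"

static void
maybe_add_incremental_sort(PlannerInfo *root, RelOptInfo *rel,
                           Path *subpath, List *query_pathkeys,
                           double limit_tuples)
{
    int         presorted_keys;
    bool        is_sorted;

    /* how many leading pathkeys does the input already satisfy? */
    is_sorted = pathkeys_count_contained_in(query_pathkeys,
                                            subpath->pathkeys,
                                            &presorted_keys);

    if (is_sorted)
        return;                 /* already fully sorted, nothing to add */

    if (presorted_keys > 0)
    {
        /* sorted on a useful prefix: consider an incremental sort */
        SortPath   *ipath = create_incremental_sort_path(root, rel, subpath,
                                                         query_pathkeys,
                                                         presorted_keys,
                                                         limit_tuples);

        add_path(rel, (Path *) ipath);
    }
}

add_path() then leaves it to the usual cost comparison to choose between this path, a full Sort, and any other candidates for the relation.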
...@@ -61,14 +61,17 @@ typedef struct SortCoordinateData *SortCoordinate; ...@@ -61,14 +61,17 @@ typedef struct SortCoordinateData *SortCoordinate;
* Data structures for reporting sort statistics. Note that * Data structures for reporting sort statistics. Note that
* TuplesortInstrumentation can't contain any pointers because we * TuplesortInstrumentation can't contain any pointers because we
* sometimes put it in shared memory. * sometimes put it in shared memory.
*
 * TuplesortMethod is used in a bitmask in Incremental Sort's shared memory
 * instrumentation, so each value needs to be a separate bit.
*/ */
typedef enum typedef enum
{ {
SORT_TYPE_STILL_IN_PROGRESS = 0, SORT_TYPE_STILL_IN_PROGRESS = 1 << 0,
SORT_TYPE_TOP_N_HEAPSORT, SORT_TYPE_TOP_N_HEAPSORT = 1 << 1,
SORT_TYPE_QUICKSORT, SORT_TYPE_QUICKSORT = 1 << 2,
SORT_TYPE_EXTERNAL_SORT, SORT_TYPE_EXTERNAL_SORT = 1 << 3,
SORT_TYPE_EXTERNAL_MERGE SORT_TYPE_EXTERNAL_MERGE = 1 << 4
} TuplesortMethod; } TuplesortMethod;
typedef enum typedef enum
...@@ -215,6 +218,7 @@ extern Tuplesortstate *tuplesort_begin_datum(Oid datumType, ...@@ -215,6 +218,7 @@ extern Tuplesortstate *tuplesort_begin_datum(Oid datumType,
bool randomAccess); bool randomAccess);
extern void tuplesort_set_bound(Tuplesortstate *state, int64 bound); extern void tuplesort_set_bound(Tuplesortstate *state, int64 bound);
extern bool tuplesort_used_bound(Tuplesortstate *state);
extern void tuplesort_puttupleslot(Tuplesortstate *state, extern void tuplesort_puttupleslot(Tuplesortstate *state,
TupleTableSlot *slot); TupleTableSlot *slot);
...@@ -239,6 +243,8 @@ extern bool tuplesort_skiptuples(Tuplesortstate *state, int64 ntuples, ...@@ -239,6 +243,8 @@ extern bool tuplesort_skiptuples(Tuplesortstate *state, int64 ntuples,
extern void tuplesort_end(Tuplesortstate *state); extern void tuplesort_end(Tuplesortstate *state);
extern void tuplesort_reset(Tuplesortstate *state);
extern void tuplesort_get_stats(Tuplesortstate *state, extern void tuplesort_get_stats(Tuplesortstate *state,
TuplesortInstrumentation *stats); TuplesortInstrumentation *stats);
extern const char *tuplesort_method_name(TuplesortMethod m); extern const char *tuplesort_method_name(TuplesortMethod m);
......
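The comment on TuplesortMethod above explains the renumbering: Incremental Sort's shared-memory instrumentation records the methods used as a bitmask, so each method must occupy its own bit. The standalone snippet below simply demonstrates that property by OR-ing several observed methods into one mask and decoding it; the enum values are copied from the diff, while the scenario and variable names are invented for illustration.

/*
 * Standalone demo of why each TuplesortMethod is a distinct bit: several
 * observed methods can be OR'ed into one mask and decoded later.  Enum
 * values copied from the diff; the reporting scenario is invented.
 */
#include <stdio.h>

typedef enum
{
    SORT_TYPE_STILL_IN_PROGRESS = 1 << 0,
    SORT_TYPE_TOP_N_HEAPSORT = 1 << 1,
    SORT_TYPE_QUICKSORT = 1 << 2,
    SORT_TYPE_EXTERNAL_SORT = 1 << 3,
    SORT_TYPE_EXTERNAL_MERGE = 1 << 4
} TuplesortMethod;

int main(void)
{
    /* e.g. most groups sorted in memory, one spilled and was merged */
    unsigned int used_methods = 0;

    used_methods |= SORT_TYPE_QUICKSORT;
    used_methods |= SORT_TYPE_QUICKSORT;        /* OR-ing twice is harmless */
    used_methods |= SORT_TYPE_EXTERNAL_MERGE;

    if (used_methods & SORT_TYPE_QUICKSORT)
        printf("quicksort was used\n");
    if (used_methods & SORT_TYPE_EXTERNAL_MERGE)
        printf("external merge was used\n");
    if (!(used_methods & SORT_TYPE_TOP_N_HEAPSORT))
        printf("top-N heapsort was not used\n");
    return 0;
}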
...@@ -21,7 +21,7 @@ QUERY PLAN ...@@ -21,7 +21,7 @@ QUERY PLAN
Sort Sort
Sort Key: id, data Sort Key: id, data
-> Seq Scan on test_dc -> Index Scan using test_dc_pkey on test_dc
Filter: ((data)::text = '34'::text) Filter: ((data)::text = '34'::text)
step select2: SELECT * FROM test_dc WHERE data=34 ORDER BY id,data; step select2: SELECT * FROM test_dc WHERE data=34 ORDER BY id,data;
id data id data
......
...@@ -11,6 +11,8 @@ SET enable_partitionwise_aggregate TO true; ...@@ -11,6 +11,8 @@ SET enable_partitionwise_aggregate TO true;
SET enable_partitionwise_join TO true; SET enable_partitionwise_join TO true;
-- Disable parallel plans. -- Disable parallel plans.
SET max_parallel_workers_per_gather TO 0; SET max_parallel_workers_per_gather TO 0;
-- Disable incremental sort, which can influence selected plans due to fuzz factor.
SET enable_incrementalsort TO off;
-- --
-- Tests for list partitioned tables. -- Tests for list partitioned tables.
-- --
......
...@@ -78,6 +78,7 @@ select name, setting from pg_settings where name like 'enable%'; ...@@ -78,6 +78,7 @@ select name, setting from pg_settings where name like 'enable%';
enable_hashagg | on enable_hashagg | on
enable_hashagg_disk | on enable_hashagg_disk | on
enable_hashjoin | on enable_hashjoin | on
enable_incrementalsort | on
enable_indexonlyscan | on enable_indexonlyscan | on
enable_indexscan | on enable_indexscan | on
enable_material | on enable_material | on
...@@ -91,7 +92,7 @@ select name, setting from pg_settings where name like 'enable%'; ...@@ -91,7 +92,7 @@ select name, setting from pg_settings where name like 'enable%';
enable_seqscan | on enable_seqscan | on
enable_sort | on enable_sort | on
enable_tidscan | on enable_tidscan | on
(19 rows) (20 rows)
-- Test that the pg_timezone_names and pg_timezone_abbrevs views are -- Test that the pg_timezone_names and pg_timezone_abbrevs views are
-- more-or-less working. We can't test their contents in any great detail -- more-or-less working. We can't test their contents in any great detail
......
...@@ -78,7 +78,7 @@ test: brin gin gist spgist privileges init_privs security_label collate matview ...@@ -78,7 +78,7 @@ test: brin gin gist spgist privileges init_privs security_label collate matview
# ---------- # ----------
# Another group of parallel tests # Another group of parallel tests
# ---------- # ----------
test: create_table_like alter_generic alter_operator misc async dbsize misc_functions sysviews tsrf tidscan collate.icu.utf8 test: create_table_like alter_generic alter_operator misc async dbsize misc_functions sysviews tsrf tidscan collate.icu.utf8 incremental_sort
# rules cannot run concurrently with any test that creates # rules cannot run concurrently with any test that creates
# a view or rule in the public schema # a view or rule in the public schema
......
...@@ -89,6 +89,7 @@ test: select_distinct_on ...@@ -89,6 +89,7 @@ test: select_distinct_on
test: select_implicit test: select_implicit
test: select_having test: select_having
test: subselect test: subselect
test: incremental_sort
test: union test: union
test: case test: case
test: join test: join
......
...@@ -12,6 +12,8 @@ SET enable_partitionwise_aggregate TO true; ...@@ -12,6 +12,8 @@ SET enable_partitionwise_aggregate TO true;
SET enable_partitionwise_join TO true; SET enable_partitionwise_join TO true;
-- Disable parallel plans. -- Disable parallel plans.
SET max_parallel_workers_per_gather TO 0; SET max_parallel_workers_per_gather TO 0;
-- Disable incremental sort, which can influence selected plans due to fuzz factor.
SET enable_incrementalsort TO off;
-- --
-- Tests for list partitioned tables. -- Tests for list partitioned tables.
......