Commit 391c3811 authored by Tom Lane

Rename SortMem and VacuumMem to work_mem and maintenance_work_mem.

Make btree index creation and initial validation of foreign-key constraints
use maintenance_work_mem rather than work_mem as their memory limit.
Add some code to guc.c to allow these variables to be referenced by their
old names in SHOW and SET commands, for backwards compatibility.
parent 39d715be
@@ -865,7 +865,7 @@ get_crosstab_tuplestore(char *sql,
 	MemoryContext SPIcontext;
 	/* initialize our tuplestore */
-	tupstore = tuplestore_begin_heap(true, false, SortMem);
+	tupstore = tuplestore_begin_heap(true, false, work_mem);
 	/* Connect to SPI manager */
 	if ((ret = SPI_connect()) < 0)
@@ -1246,7 +1246,7 @@ connectby(char *relname,
 	oldcontext = MemoryContextSwitchTo(per_query_ctx);
 	/* initialize our tuplestore */
-	tupstore = tuplestore_begin_heap(true, false, SortMem);
+	tupstore = tuplestore_begin_heap(true, false, work_mem);
 	MemoryContextSwitchTo(oldcontext);
......
 <!--
-$PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.34 2004/01/19 20:12:30 tgl Exp $
+$PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.35 2004/02/03 17:34:02 tgl Exp $
 -->
 <chapter id="backup">
  <title>Backup and Restore</title>
@@ -156,8 +156,8 @@ pg_dump -h <replaceable>host1</> <replaceable>dbname</> | psql -h <replaceable>h
   <tip>
    <para>
     Restore performance can be improved by increasing the
-    configuration parameter <varname>sort_mem</varname> (see <xref
-    linkend="runtime-config-resource-memory">).
+    configuration parameter <varname>maintenance_work_mem</varname>
+    (see <xref linkend="runtime-config-resource-memory">).
    </para>
   </tip>
  </sect2>
......
-<!-- $PostgreSQL: pgsql/doc/src/sgml/installation.sgml,v 1.193 2004/01/19 21:20:06 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/installation.sgml,v 1.194 2004/02/03 17:34:02 tgl Exp $ -->
 <chapter id="installation">
  <title><![%standalone-include[<productname>PostgreSQL</>]]>
@@ -1399,7 +1399,7 @@ kill `cat /usr/local/pgsql/data/postmaster.pid`
    not designed for optimum performance. To achieve optimum
    performance, several server parameters must be adjusted, the two
    most common being <varname>shared_buffers</varname> and
-   <varname> sort_mem</varname> mentioned in the documentation.
+   <varname>work_mem</varname>.
    Other parameters mentioned in the documentation also affect
    performance.
   </para>
......
 <!--
-$PostgreSQL: pgsql/doc/src/sgml/perform.sgml,v 1.40 2004/01/11 05:46:58 neilc Exp $
+$PostgreSQL: pgsql/doc/src/sgml/perform.sgml,v 1.41 2004/02/03 17:34:02 tgl Exp $
 -->
 <chapter id="performance-tips">
@@ -684,16 +684,18 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
    </para>
   </sect2>

-  <sect2 id="populate-sort-mem">
-   <title>Increase <varname>sort_mem</varname></title>
+  <sect2 id="populate-work-mem">
+   <title>Increase <varname>maintenance_work_mem</varname></title>
    <para>
-    Temporarily increasing the <varname>sort_mem</varname>
+    Temporarily increasing the <varname>maintenance_work_mem</varname>
     configuration variable when restoring large amounts of data can
     lead to improved performance. This is because when a B-tree index
     is created from scratch, the existing content of the table needs
-    to be sorted. Allowing the merge sort to use more buffer pages
-    means that fewer merge passes will be required.
+    to be sorted. Allowing the merge sort to use more memory
+    means that fewer merge passes will be required. A larger setting for
+    <varname>maintenance_work_mem</varname> may also speed up validation
+    of foreign-key constraints.
    </para>
   </sect2>
......
 <!--
-$PostgreSQL: pgsql/doc/src/sgml/plpgsql.sgml,v 1.34 2004/01/24 22:05:08 tgl Exp $
+$PostgreSQL: pgsql/doc/src/sgml/plpgsql.sgml,v 1.35 2004/02/03 17:34:02 tgl Exp $
 -->
 <chapter id="plpgsql">
@@ -1354,7 +1354,7 @@ SELECT * FROM some_func();
     allow users to define set-returning functions
     that do not have this limitation. Currently, the point at
     which data begins being written to disk is controlled by the
-    <varname>sort_mem</> configuration variable. Administrators
+    <varname>work_mem</> configuration variable. Administrators
     who have sufficient memory to store larger result sets in
     memory should consider increasing this parameter.
    </para>
......
 <!--
-$PostgreSQL: pgsql/doc/src/sgml/ref/postgres-ref.sgml,v 1.42 2003/11/29 19:51:39 pgsql Exp $
+$PostgreSQL: pgsql/doc/src/sgml/ref/postgres-ref.sgml,v 1.43 2004/02/03 17:34:02 tgl Exp $
 PostgreSQL documentation
 -->
@@ -41,7 +41,7 @@ PostgreSQL documentation
    <arg>-s</arg>
    <arg>-t<group choice="plain"><arg>pa</arg><arg>pl</arg><arg>ex</arg></group></arg>
   </group>
-  <arg>-S <replaceable>sort-mem</replaceable></arg>
+  <arg>-S <replaceable>work-mem</replaceable></arg>
   <arg>-W <replaceable>seconds</replaceable></arg>
   <arg>--<replaceable>name</replaceable>=<replaceable>value</replaceable></arg>
   <arg choice="plain"><replaceable>database</replaceable></arg>
@@ -64,7 +64,7 @@ PostgreSQL documentation
    <arg>-s</arg>
    <arg>-t<group choice="plain"><arg>pa</arg><arg>pl</arg><arg>ex</arg></group></arg>
   </group>
-  <arg>-S <replaceable>sort-mem</replaceable></arg>
+  <arg>-S <replaceable>work-mem</replaceable></arg>
   <arg>-v <replaceable>protocol</replaceable></arg>
   <arg>-W <replaceable>seconds</replaceable></arg>
   <arg>--<replaceable>name</replaceable>=<replaceable>value</replaceable></arg>
@@ -197,16 +197,13 @@ PostgreSQL documentation
     </varlistentry>

     <varlistentry>
-     <term><option>-S</option> <replaceable class="parameter">sort-mem</replaceable></term>
+     <term><option>-S</option> <replaceable class="parameter">work-mem</replaceable></term>
      <listitem>
       <para>
        Specifies the amount of memory to be used by internal sorts and hashes
-       before resorting to temporary disk files. The value is specified in
-       kilobytes, and defaults to 1024. Note that for a complex query,
-       several sorts and/or hashes might be running in parallel, and each one
-       will be allowed to use as much as
-       <replaceable class="parameter">sort-mem</replaceable> kilobytes
-       before it starts to put data into temporary files.
+       before resorting to temporary disk files. See the description of the
+       <varname>work_mem</> configuration parameter in <xref
+       linkend="runtime-config-resource-memory">.
       </para>
      </listitem>
     </varlistentry>
......
 <!--
-$PostgreSQL: pgsql/doc/src/sgml/ref/postmaster.sgml,v 1.44 2003/12/14 00:15:03 neilc Exp $
+$PostgreSQL: pgsql/doc/src/sgml/ref/postmaster.sgml,v 1.45 2004/02/03 17:34:02 tgl Exp $
 PostgreSQL documentation
 -->
@@ -541,10 +541,10 @@ PostgreSQL documentation
    <para>
     Named run-time parameters can be set in either of these styles:
 <screen>
-<prompt>$</prompt> <userinput>postmaster -c sort_mem=1234</userinput>
-<prompt>$</prompt> <userinput>postmaster --sort-mem=1234</userinput>
+<prompt>$</prompt> <userinput>postmaster -c work_mem=1234</userinput>
+<prompt>$</prompt> <userinput>postmaster --work-mem=1234</userinput>
 </screen>
-    Either form overrides whatever setting might exist for <varname>sort_mem</>
+    Either form overrides whatever setting might exist for <varname>work_mem</>
     in <filename>postgresql.conf</>. Notice that underscores in parameter
     names can be written as either underscore or dash on the command line.
    </para>
......
 <!--
-$PostgreSQL: pgsql/doc/src/sgml/runtime.sgml,v 1.235 2004/01/27 16:51:43 neilc Exp $
+$PostgreSQL: pgsql/doc/src/sgml/runtime.sgml,v 1.236 2004/02/03 17:34:02 tgl Exp $
 -->
 <Chapter Id="runtime">
@@ -850,37 +850,41 @@ SET ENABLE_SEQSCAN TO OFF;
     </varlistentry>

     <varlistentry>
-     <term><varname>sort_mem</varname> (<type>integer</type>)</term>
+     <term><varname>work_mem</varname> (<type>integer</type>)</term>
      <listitem>
       <para>
-       Specifies the amount of memory to be used by internal sort operations and
-       hash tables before switching to temporary disk files. The value is
+       Specifies the amount of memory to be used by internal sort operations
+       and hash tables before switching to temporary disk files. The value is
        specified in kilobytes, and defaults to 1024 kilobytes (1 MB).
        Note that for a complex query, several sort or hash operations might be
        running in parallel; each one will be allowed to use as much memory
       as this value specifies before it starts to put data into temporary
-       files. Also, several running sessions could be doing
-       sort operations simultaneously. So the total memory used could be many
-       times the value of <varname>sort_mem</varname>. Sort operations are used
-       by <literal>ORDER BY</>, merge joins, and <command>CREATE INDEX</>.
+       files. Also, several running sessions could be doing such operations
+       concurrently. So the total memory used could be many
+       times the value of <varname>work_mem</varname>; it is necessary to
+       keep this fact in mind when choosing the value. Sort operations are
+       used for <literal>ORDER BY</>, <literal>DISTINCT</>, and
+       merge joins.
        Hash tables are used in hash joins, hash-based aggregation, and
-       hash-based processing of <literal>IN</> subqueries. Because
-       <command>CREATE INDEX</> is used when restoring a database,
-       increasing <varname>sort_mem</varname> before doing a large
-       restore operation can improve performance.
+       hash-based processing of <literal>IN</> subqueries.
       </para>
      </listitem>
     </varlistentry>

     <varlistentry>
-     <term><varname>vacuum_mem</varname> (<type>integer</type>)</term>
+     <term><varname>maintenance_work_mem</varname> (<type>integer</type>)</term>
      <listitem>
       <para>
-       Specifies the maximum amount of memory to be used by
-       <command>VACUUM</command> to keep track of to-be-reclaimed
-       rows. The value is specified in kilobytes, and defaults to
-       8192 kB. Larger settings may improve the speed of
-       vacuuming large tables that have many deleted rows.
+       Specifies the maximum amount of memory to be used in maintenance
+       operations, such as <command>VACUUM</command>, <command>CREATE
+       INDEX</>, and <command>ALTER TABLE ADD FOREIGN KEY</>.
+       The value is specified in kilobytes, and defaults to 16384 kilobytes
+       (16 MB). Since only one of these operations can be executed at
+       a time by a database session, and an installation normally doesn't
+       have very many of them happening concurrently, it's safe to set this
+       value significantly larger than <varname>work_mem</varname>. Larger
+       settings may improve performance for vacuuming and for restoring
+       database dumps.
       </para>
      </listitem>
     </varlistentry>
@@ -2840,7 +2844,7 @@ $ <userinput>postmaster -o '-S 1024 -s'</userinput>
       <row>
        <entry><option>-S <replaceable>x</replaceable></option><footnoteref linkend="fn.runtime-config-short">
        </entry>
-       <entry><literal>sort_mem = <replaceable>x</replaceable></></entry>
+       <entry><literal>work_mem = <replaceable>x</replaceable></></entry>
       </row>
       <row>
......
@@ -12,7 +12,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/access/nbtree/nbtree.c,v 1.109 2004/01/07 18:56:24 neilc Exp $
+ *	  $PostgreSQL: pgsql/src/backend/access/nbtree/nbtree.c,v 1.110 2004/02/03 17:34:02 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -117,13 +117,14 @@ btbuild(PG_FUNCTION_ARGS)
 	if (buildstate.usefast)
 	{
-		buildstate.spool = _bt_spoolinit(index, indexInfo->ii_Unique);
+		buildstate.spool = _bt_spoolinit(index, indexInfo->ii_Unique, false);
 		/*
-		 * Different from spool, the uniqueness isn't checked for spool2.
+		 * If building a unique index, put dead tuples in a second spool
+		 * to keep them out of the uniqueness check.
 		 */
 		if (indexInfo->ii_Unique)
-			buildstate.spool2 = _bt_spoolinit(index, false);
+			buildstate.spool2 = _bt_spoolinit(index, false, true);
 	}
 	/* do the heap scan */
......
@@ -36,7 +36,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/access/nbtree/nbtsort.c,v 1.80 2004/01/07 18:56:24 neilc Exp $
+ *	  $PostgreSQL: pgsql/src/backend/access/nbtree/nbtsort.c,v 1.81 2004/02/03 17:34:02 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -112,14 +112,25 @@ static void _bt_load(Relation index, BTSpool *btspool, BTSpool *btspool2);
  * create and initialize a spool structure
  */
 BTSpool *
-_bt_spoolinit(Relation index, bool isunique)
+_bt_spoolinit(Relation index, bool isunique, bool isdead)
 {
 	BTSpool    *btspool = (BTSpool *) palloc0(sizeof(BTSpool));
+	int			btKbytes;
 	btspool->index = index;
 	btspool->isunique = isunique;
-	btspool->sortstate = tuplesort_begin_index(index, isunique, false);
+
+	/*
+	 * We size the sort area as maintenance_work_mem rather than work_mem to
+	 * speed index creation. This should be OK since a single backend can't
+	 * run multiple index creations in parallel. Note that creation of a
+	 * unique index actually requires two BTSpool objects. We expect that the
+	 * second one (for dead tuples) won't get very full, so we give it only
+	 * work_mem.
+	 */
+	btKbytes = isdead ? work_mem : maintenance_work_mem;
+	btspool->sortstate = tuplesort_begin_index(index, isunique,
+											   btKbytes, false);
 	/*
 	 * Currently, tuplesort provides sort functions on IndexTuples. If we
......
@@ -10,11 +10,11 @@
  * relations with finite memory space usage. To do that, we set upper bounds
  * on the number of tuples and pages we will keep track of at once.
  *
- * We are willing to use at most VacuumMem memory space to keep track of
- * dead tuples. We initially allocate an array of TIDs of that size.
- * If the array threatens to overflow, we suspend the heap scan phase
- * and perform a pass of index cleanup and page compaction, then resume
- * the heap scan with an empty TID array.
+ * We are willing to use at most maintenance_work_mem memory space to keep
+ * track of dead tuples. We initially allocate an array of TIDs of that size.
+ * If the array threatens to overflow, we suspend the heap scan phase and
+ * perform a pass of index cleanup and page compaction, then resume the heap
+ * scan with an empty TID array.
  *
  * We can limit the storage for page free space to MaxFSMPages entries,
  * since that's the most the free space map will be willing to remember
@@ -31,7 +31,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/commands/vacuumlazy.c,v 1.33 2003/11/29 19:51:48 pgsql Exp $
+ *	  $PostgreSQL: pgsql/src/backend/commands/vacuumlazy.c,v 1.34 2004/02/03 17:34:02 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -908,8 +908,8 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 	int			maxtuples;
 	int			maxpages;
-	maxtuples = (int) ((VacuumMem * 1024L) / sizeof(ItemPointerData));
-	/* stay sane if small VacuumMem */
+	maxtuples = (int) ((maintenance_work_mem * 1024L) / sizeof(ItemPointerData));
+	/* stay sane if small maintenance_work_mem */
 	if (maxtuples < MAX_TUPLES_PER_PAGE)
 		maxtuples = MAX_TUPLES_PER_PAGE;
@@ -942,8 +942,8 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats,
 {
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
-	 * could if we are given a really small VacuumMem. In that case, just
-	 * forget the last few tuples.
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples.
 	 */
 	if (vacrelstats->num_dead_tuples < vacrelstats->max_dead_tuples)
 	{
......
@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/executor/execQual.c,v 1.153 2004/01/07 18:56:26 neilc Exp $
+ *	  $PostgreSQL: pgsql/src/backend/executor/execQual.c,v 1.154 2004/02/03 17:34:02 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -1116,7 +1116,7 @@ ExecMakeTableFunctionResult(ExprState *funcexpr,
 										   0,
 										   false);
 	}
-	tupstore = tuplestore_begin_heap(true, false, SortMem);
+	tupstore = tuplestore_begin_heap(true, false, work_mem);
 	MemoryContextSwitchTo(oldcontext);
 	rsinfo.setResult = tupstore;
 	rsinfo.setDesc = tupdesc;
......
@@ -45,7 +45,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/executor/nodeAgg.c,v 1.117 2003/11/29 19:51:48 pgsql Exp $
+ *	  $PostgreSQL: pgsql/src/backend/executor/nodeAgg.c,v 1.118 2004/02/03 17:34:02 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -248,7 +248,7 @@ initialize_aggregates(AggState *aggstate,
 			peraggstate->sortstate =
 				tuplesort_begin_datum(peraggstate->inputType,
 									  peraggstate->sortOperator,
-									  false);
+									  work_mem, false);
 		}
 		/*
......
@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/executor/nodeHash.c,v 1.81 2003/11/29 19:51:48 pgsql Exp $
+ *	  $PostgreSQL: pgsql/src/backend/executor/nodeHash.c,v 1.82 2004/02/03 17:34:02 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -348,9 +348,9 @@ ExecChooseHashTableSize(double ntuples, int tupwidth,
 	inner_rel_bytes = ntuples * tupsize * FUDGE_FAC;
 	/*
-	 * Target in-memory hashtable size is SortMem kilobytes.
+	 * Target in-memory hashtable size is work_mem kilobytes.
 	 */
-	hash_table_bytes = SortMem * 1024L;
+	hash_table_bytes = work_mem * 1024L;
 	/*
 	 * Count the number of hash buckets we want for the whole relation,
......
@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/executor/nodeIndexscan.c,v 1.90 2004/01/07 18:56:26 neilc Exp $
+ *	  $PostgreSQL: pgsql/src/backend/executor/nodeIndexscan.c,v 1.91 2004/02/03 17:34:02 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -40,7 +40,7 @@
  * preferred way to do this is to record already-returned tuples in a hash
  * table (using the TID as unique identifier). However, in a very large
  * scan this could conceivably run out of memory. We limit the hash table
- * to no more than SortMem KB; if it grows past that, we fall back to the
+ * to no more than work_mem KB; if it grows past that, we fall back to the
  * pre-7.4 technique: evaluate the prior-scan index quals again for each
  * tuple (which is space-efficient, but slow).
  *
@@ -1002,7 +1002,7 @@ create_duphash(IndexScanState *node)
 	HASHCTL		hash_ctl;
 	long		nbuckets;
-	node->iss_MaxHash = (SortMem * 1024L) /
+	node->iss_MaxHash = (work_mem * 1024L) /
 		(MAXALIGN(sizeof(HASHELEMENT)) + MAXALIGN(sizeof(DupHashTabEntry)));
 	MemSet(&hash_ctl, 0, sizeof(hash_ctl));
 	hash_ctl.keysize = SizeOfIptrData;
......
@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/executor/nodeMaterial.c,v 1.45 2003/11/29 19:51:48 pgsql Exp $
+ *	  $PostgreSQL: pgsql/src/backend/executor/nodeMaterial.c,v 1.46 2004/02/03 17:34:02 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -62,7 +62,7 @@ ExecMaterial(MaterialState *node)
 	 */
 	if (tuplestorestate == NULL)
 	{
-		tuplestorestate = tuplestore_begin_heap(true, false, SortMem);
+		tuplestorestate = tuplestore_begin_heap(true, false, work_mem);
 		node->tuplestorestate = (void *) tuplestorestate;
 	}
......
@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/executor/nodeSort.c,v 1.46 2003/11/29 19:51:48 pgsql Exp $
+ *	  $PostgreSQL: pgsql/src/backend/executor/nodeSort.c,v 1.47 2004/02/03 17:34:02 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -17,6 +17,7 @@
 #include "executor/execdebug.h"
 #include "executor/nodeSort.h"
+#include "miscadmin.h"
 #include "utils/tuplesort.h"
@@ -88,6 +89,7 @@ ExecSort(SortState *node)
 											  plannode->numCols,
 											  plannode->sortOperators,
 											  plannode->sortColIdx,
+											  work_mem,
 											  true /* randomAccess */ );
 		node->tuplesortstate = (void *) tuplesortstate;
......
@@ -49,7 +49,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/optimizer/path/costsize.c,v 1.123 2004/01/19 03:52:28 tgl Exp $
+ *	  $PostgreSQL: pgsql/src/backend/optimizer/path/costsize.c,v 1.124 2004/02/03 17:34:03 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -503,18 +503,18 @@ cost_functionscan(Path *path, Query *root, RelOptInfo *baserel)
  * Determines and returns the cost of sorting a relation, including
  * the cost of reading the input data.
  *
- * If the total volume of data to sort is less than SortMem, we will do
+ * If the total volume of data to sort is less than work_mem, we will do
  * an in-memory sort, which requires no I/O and about t*log2(t) tuple
  * comparisons for t tuples.
  *
- * If the total volume exceeds SortMem, we switch to a tape-style merge
+ * If the total volume exceeds work_mem, we switch to a tape-style merge
  * algorithm. There will still be about t*log2(t) tuple comparisons in
  * total, but we will also need to write and read each tuple once per
  * merge pass. We expect about ceil(log6(r)) merge passes where r is the
  * number of initial runs formed (log6 because tuplesort.c uses six-tape
- * merging). Since the average initial run should be about twice SortMem,
+ * merging). Since the average initial run should be about twice work_mem,
  * we have
- *	disk traffic = 2 * relsize * ceil(log6(p / (2*SortMem)))
+ *	disk traffic = 2 * relsize * ceil(log6(p / (2*work_mem)))
  *	cpu = comparison_cost * t * log2(t)
  *
  * The disk traffic is assumed to be half sequential and half random
@@ -542,7 +542,7 @@ cost_sort(Path *path, Query *root,
 	Cost		startup_cost = input_cost;
 	Cost		run_cost = 0;
 	double		nbytes = relation_byte_size(tuples, width);
-	long		sortmembytes = SortMem * 1024L;
+	long		work_mem_bytes = work_mem * 1024L;
 	if (!enable_sort)
 		startup_cost += disable_cost;
@@ -564,10 +564,10 @@ cost_sort(Path *path, Query *root,
 	startup_cost += 2.0 * cpu_operator_cost * tuples * LOG2(tuples);
 	/* disk costs */
-	if (nbytes > sortmembytes)
+	if (nbytes > work_mem_bytes)
 	{
 		double		npages = ceil(nbytes / BLCKSZ);
-		double		nruns = nbytes / (sortmembytes * 2);
+		double		nruns = nbytes / (work_mem_bytes * 2);
 		double		log_runs = ceil(LOG6(nruns));
 		double		npageaccesses;
@@ -594,7 +594,7 @@ cost_sort(Path *path, Query *root,
  * Determines and returns the cost of materializing a relation, including
  * the cost of reading the input data.
  *
- * If the total volume of data to materialize exceeds SortMem, we will need
+ * If the total volume of data to materialize exceeds work_mem, we will need
  * to write it to disk, so the cost is much higher in that case.
  */
 void
@@ -604,10 +604,10 @@ cost_material(Path *path,
 	Cost		startup_cost = input_cost;
 	Cost		run_cost = 0;
 	double		nbytes = relation_byte_size(tuples, width);
-	long		sortmembytes = SortMem * 1024L;
+	long		work_mem_bytes = work_mem * 1024L;
 	/* disk costs */
-	if (nbytes > sortmembytes)
+	if (nbytes > work_mem_bytes)
 	{
 		double		npages = ceil(nbytes / BLCKSZ);
......
@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/optimizer/plan/planner.c,v 1.165 2004/01/18 00:50:02 tgl Exp $
+ *	  $PostgreSQL: pgsql/src/backend/optimizer/plan/planner.c,v 1.166 2004/02/03 17:34:03 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -966,7 +966,7 @@ grouping_planner(Query *parse, double tuple_fraction)
 			{
 				/*
 				 * Use hashed grouping if (a) we think we can fit the
-				 * hashtable into SortMem, *and* (b) the estimated cost is
+				 * hashtable into work_mem, *and* (b) the estimated cost is
 				 * no more than doing it the other way. While avoiding
 				 * the need for sorted input is usually a win, the fact
 				 * that the output won't be sorted may be a loss; so we
@@ -979,7 +979,7 @@ grouping_planner(Query *parse, double tuple_fraction)
 				 */
 				int			hashentrysize = cheapest_path_width + 64 + numAggs * 100;
-				if (hashentrysize * dNumGroups <= SortMem * 1024L)
+				if (hashentrysize * dNumGroups <= work_mem * 1024L)
 				{
 					/*
 					 * Okay, do the cost comparison. We need to consider
......
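The grouping_planner() hunk gates hashed aggregation on the estimated hash table fitting in work_mem. That test can be isolated as a small predicate; `hashagg_table_fits` is an illustrative name, and the `+ 64 + numAggs * 100` per-entry overhead is the planner's rough allowance for bucket and per-aggregate transition state, taken verbatim from the hunk.

```c
#include <stdbool.h>

/*
 * Sketch of the planner's hashed-grouping gate: hashed aggregation is
 * considered only when hashentrysize * dNumGroups fits within work_mem
 * (which, like all these GUCs, is measured in kilobytes).
 */
static bool
hashagg_table_fits(int cheapest_path_width, int numAggs,
                   double dNumGroups, int work_mem_kb)
{
    int         hashentrysize = cheapest_path_width + 64 + numAggs * 100;

    return hashentrysize * dNumGroups <= work_mem_kb * 1024L;
}
```

With the default work_mem of 1024 kB, 32-byte rows and two aggregates give a 296-byte entry, so hashed grouping stays on the table up to roughly 3500 estimated groups.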
...@@ -7,7 +7,7 @@ ...@@ -7,7 +7,7 @@
* Portions Copyright (c) 1994, Regents of the University of California * Portions Copyright (c) 1994, Regents of the University of California
* *
* IDENTIFICATION * IDENTIFICATION
* $PostgreSQL: pgsql/src/backend/optimizer/plan/subselect.c,v 1.87 2004/01/12 22:20:28 tgl Exp $ * $PostgreSQL: pgsql/src/backend/optimizer/plan/subselect.c,v 1.88 2004/02/03 17:34:03 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
...@@ -614,12 +614,12 @@ subplan_is_hashable(SubLink *slink, SubPlan *node) ...@@ -614,12 +614,12 @@ subplan_is_hashable(SubLink *slink, SubPlan *node)
return false; return false;
/* /*
* The estimated size of the subquery result must fit in SortMem. (XXX * The estimated size of the subquery result must fit in work_mem. (XXX
* what about hashtable overhead?) * what about hashtable overhead?)
*/ */
subquery_size = node->plan->plan_rows * subquery_size = node->plan->plan_rows *
(MAXALIGN(node->plan->plan_width) + MAXALIGN(sizeof(HeapTupleData))); (MAXALIGN(node->plan->plan_width) + MAXALIGN(sizeof(HeapTupleData)));
if (subquery_size > SortMem * 1024L) if (subquery_size > work_mem * 1024L)
return false; return false;
/* /*
......
...@@ -8,7 +8,7 @@ ...@@ -8,7 +8,7 @@
* *
* *
* IDENTIFICATION * IDENTIFICATION
* $PostgreSQL: pgsql/src/backend/optimizer/util/pathnode.c,v 1.100 2004/01/19 03:49:41 tgl Exp $ * $PostgreSQL: pgsql/src/backend/optimizer/util/pathnode.c,v 1.101 2004/02/03 17:34:03 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
...@@ -637,7 +637,7 @@ create_unique_path(Query *root, RelOptInfo *rel, Path *subpath) ...@@ -637,7 +637,7 @@ create_unique_path(Query *root, RelOptInfo *rel, Path *subpath)
*/ */
int hashentrysize = rel->width + 64; int hashentrysize = rel->width + 64;
if (hashentrysize * pathnode->rows <= SortMem * 1024L) if (hashentrysize * pathnode->rows <= work_mem * 1024L)
{ {
cost_agg(&agg_path, root, cost_agg(&agg_path, root,
AGG_HASHED, 0, AGG_HASHED, 0,
......
...@@ -8,7 +8,7 @@ ...@@ -8,7 +8,7 @@
* *
* *
* IDENTIFICATION * IDENTIFICATION
* $PostgreSQL: pgsql/src/backend/tcop/postgres.c,v 1.387 2004/01/28 21:02:40 tgl Exp $ * $PostgreSQL: pgsql/src/backend/tcop/postgres.c,v 1.388 2004/02/03 17:34:03 tgl Exp $
* *
* NOTES * NOTES
* this is the "main" module of the postgres backend and * this is the "main" module of the postgres backend and
...@@ -1987,7 +1987,7 @@ usage(char *progname) ...@@ -1987,7 +1987,7 @@ usage(char *progname)
printf(gettext(" -o FILENAME send stdout and stderr to given file\n")); printf(gettext(" -o FILENAME send stdout and stderr to given file\n"));
printf(gettext(" -P disable system indexes\n")); printf(gettext(" -P disable system indexes\n"));
printf(gettext(" -s show statistics after each query\n")); printf(gettext(" -s show statistics after each query\n"));
printf(gettext(" -S SORT-MEM set amount of memory for sorts (in kbytes)\n")); printf(gettext(" -S WORK-MEM set amount of memory for sorts (in kbytes)\n"));
printf(gettext(" --describe-config describe configuration parameters, then exit\n")); printf(gettext(" --describe-config describe configuration parameters, then exit\n"));
printf(gettext(" --help show this help, then exit\n")); printf(gettext(" --help show this help, then exit\n"));
printf(gettext(" --version output version information, then exit\n")); printf(gettext(" --version output version information, then exit\n"));
...@@ -2277,7 +2277,7 @@ PostgresMain(int argc, char *argv[], const char *username) ...@@ -2277,7 +2277,7 @@ PostgresMain(int argc, char *argv[], const char *username)
/* /*
* S - amount of sort memory to use in 1k bytes * S - amount of sort memory to use in 1k bytes
*/ */
SetConfigOption("sort_mem", optarg, ctx, gucsource); SetConfigOption("work_mem", optarg, ctx, gucsource);
break; break;
case 's': case 's':
......
...@@ -17,7 +17,7 @@ ...@@ -17,7 +17,7 @@
* *
* Portions Copyright (c) 1996-2003, PostgreSQL Global Development Group * Portions Copyright (c) 1996-2003, PostgreSQL Global Development Group
* *
* $PostgreSQL: pgsql/src/backend/utils/adt/ri_triggers.c,v 1.66 2004/01/07 18:56:28 neilc Exp $ * $PostgreSQL: pgsql/src/backend/utils/adt/ri_triggers.c,v 1.67 2004/02/03 17:34:03 tgl Exp $
* *
* ---------- * ----------
*/ */
...@@ -41,6 +41,7 @@ ...@@ -41,6 +41,7 @@
#include "utils/lsyscache.h" #include "utils/lsyscache.h"
#include "utils/typcache.h" #include "utils/typcache.h"
#include "utils/acl.h" #include "utils/acl.h"
#include "utils/guc.h"
#include "miscadmin.h" #include "miscadmin.h"
...@@ -2572,6 +2573,8 @@ RI_Initial_Check(FkConstraint *fkconstraint, Relation rel, Relation pkrel) ...@@ -2572,6 +2573,8 @@ RI_Initial_Check(FkConstraint *fkconstraint, Relation rel, Relation pkrel)
const char *sep; const char *sep;
List *list; List *list;
List *list2; List *list2;
int old_work_mem;
char workmembuf[32];
int spi_result; int spi_result;
void *qplan; void *qplan;
...@@ -2665,6 +2668,23 @@ RI_Initial_Check(FkConstraint *fkconstraint, Relation rel, Relation pkrel) ...@@ -2665,6 +2668,23 @@ RI_Initial_Check(FkConstraint *fkconstraint, Relation rel, Relation pkrel)
snprintf(querystr + strlen(querystr), sizeof(querystr) - strlen(querystr), snprintf(querystr + strlen(querystr), sizeof(querystr) - strlen(querystr),
")"); ")");
/*
* Temporarily increase work_mem so that the check query can be executed
* more efficiently. It seems okay to do this because the query is simple
* enough to not use a multiple of work_mem, and one typically would not
* have many large foreign-key validations happening concurrently. So
* this seems to meet the criteria for being considered a "maintenance"
* operation, and accordingly we use maintenance_work_mem.
*
* We do the equivalent of "SET LOCAL work_mem" so that transaction abort
* will restore the old value if we lose control due to an error.
*/
old_work_mem = work_mem;
snprintf(workmembuf, sizeof(workmembuf), "%d", maintenance_work_mem);
(void) set_config_option("work_mem", workmembuf,
PGC_USERSET, PGC_S_SESSION,
true, true);
if (SPI_connect() != SPI_OK_CONNECT) if (SPI_connect() != SPI_OK_CONNECT)
elog(ERROR, "SPI_connect failed"); elog(ERROR, "SPI_connect failed");
...@@ -2741,6 +2761,16 @@ RI_Initial_Check(FkConstraint *fkconstraint, Relation rel, Relation pkrel) ...@@ -2741,6 +2761,16 @@ RI_Initial_Check(FkConstraint *fkconstraint, Relation rel, Relation pkrel)
if (SPI_finish() != SPI_OK_FINISH) if (SPI_finish() != SPI_OK_FINISH)
elog(ERROR, "SPI_finish failed"); elog(ERROR, "SPI_finish failed");
/*
* Restore work_mem for the remainder of the current transaction.
* This is another SET LOCAL, so it won't affect the session value,
* nor any tentative value if there is one.
*/
snprintf(workmembuf, sizeof(workmembuf), "%d", old_work_mem);
(void) set_config_option("work_mem", workmembuf,
PGC_USERSET, PGC_S_SESSION,
true, true);
return true; return true;
} }
......
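The RI_Initial_Check() change above does the equivalent of `SET LOCAL work_mem` twice: once to raise it to maintenance_work_mem, once to put the saved value back, relying on the GUC machinery to undo everything on transaction abort. A toy model of that save/format/restore pattern, with `set_config_int` standing in for the real set_config_option() (hypothetical helper; the real function takes GucContext and GucSource arguments as shown in the hunk):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* toy session state: current work_mem setting, in kilobytes */
static int work_mem = 1024;

/* stand-in for set_config_option(); applies a string-valued setting */
static void
set_config_int(const char *name, const char *value)
{
    if (strcmp(name, "work_mem") == 0)
        work_mem = atoi(value);
}

/*
 * Mirror of the pattern in RI_Initial_Check(): save the old value,
 * format the new one as text, apply it, do the work, restore.  In the
 * real code transaction abort would also restore the old value.
 */
static void
run_check_with_maintenance_mem(int maintenance_work_mem)
{
    int         old_work_mem = work_mem;
    char        workmembuf[32];

    snprintf(workmembuf, sizeof(workmembuf), "%d", maintenance_work_mem);
    set_config_int("work_mem", workmembuf);

    /* ... run the foreign-key validation query here ... */

    snprintf(workmembuf, sizeof(workmembuf), "%d", old_work_mem);
    set_config_int("work_mem", workmembuf);
}
```

Formatting through a text buffer rather than assigning the int directly may look roundabout, but it matches the real code path: GUC values always flow through the string-based set_config_option() interface so that the usual validation and rollback hooks apply.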
...@@ -8,7 +8,7 @@ ...@@ -8,7 +8,7 @@
* *
* *
* IDENTIFICATION * IDENTIFICATION
* $PostgreSQL: pgsql/src/backend/utils/init/globals.c,v 1.81 2004/01/28 21:02:40 tgl Exp $ * $PostgreSQL: pgsql/src/backend/utils/init/globals.c,v 1.82 2004/02/03 17:34:03 tgl Exp $
* *
* NOTES * NOTES
* Globals used all over the place should be declared here and not * Globals used all over the place should be declared here and not
...@@ -78,6 +78,6 @@ int CTimeZone = 0; ...@@ -78,6 +78,6 @@ int CTimeZone = 0;
bool enableFsync = true; bool enableFsync = true;
bool allowSystemTableMods = false; bool allowSystemTableMods = false;
int SortMem = 1024; int work_mem = 1024;
int VacuumMem = 8192; int maintenance_work_mem = 16384;
int NBuffers = 1000; int NBuffers = 1000;
...@@ -10,7 +10,7 @@ ...@@ -10,7 +10,7 @@
* Written by Peter Eisentraut <peter_e@gmx.net>. * Written by Peter Eisentraut <peter_e@gmx.net>.
* *
* IDENTIFICATION * IDENTIFICATION
* $PostgreSQL: pgsql/src/backend/utils/misc/guc.c,v 1.183 2004/02/02 00:17:21 momjian Exp $ * $PostgreSQL: pgsql/src/backend/utils/misc/guc.c,v 1.184 2004/02/03 17:34:03 tgl Exp $
* *
*-------------------------------------------------------------------- *--------------------------------------------------------------------
*/ */
...@@ -1030,23 +1030,23 @@ static struct config_int ConfigureNamesInt[] = ...@@ -1030,23 +1030,23 @@ static struct config_int ConfigureNamesInt[] =
}, },
{ {
{"sort_mem", PGC_USERSET, RESOURCES_MEM, {"work_mem", PGC_USERSET, RESOURCES_MEM,
gettext_noop("Sets the maximum memory to be used for sorts and hash tables."), gettext_noop("Sets the maximum memory to be used for query workspaces."),
gettext_noop("Specifies the amount of memory to be used by internal " gettext_noop("This much memory may be used by each internal "
"sort operations and hash tables before switching to temporary disk " "sort operation and hash table before switching to "
"files") "temporary disk files.")
}, },
&SortMem, &work_mem,
1024, 8 * BLCKSZ / 1024, INT_MAX, NULL, NULL 1024, 8 * BLCKSZ / 1024, INT_MAX / 1024, NULL, NULL
}, },
{ {
{"vacuum_mem", PGC_USERSET, RESOURCES_MEM, {"maintenance_work_mem", PGC_USERSET, RESOURCES_MEM,
gettext_noop("Sets the maximum memory used to keep track of to-be-reclaimed rows."), gettext_noop("Sets the maximum memory to be used for maintenance operations."),
NULL gettext_noop("This includes operations such as VACUUM and CREATE INDEX.")
}, },
&VacuumMem, &maintenance_work_mem,
8192, 1024, INT_MAX, NULL, NULL 16384, 1024, INT_MAX / 1024, NULL, NULL
}, },
{ {
...@@ -1709,6 +1709,19 @@ static struct config_string ConfigureNamesString[] = ...@@ -1709,6 +1709,19 @@ static struct config_string ConfigureNamesString[] =
/******** end of options list ********/ /******** end of options list ********/
/*
* To allow continued support of obsolete names for GUC variables, we apply
* the following mappings to any unrecognized name. Note that an old name
* should be mapped to a new one only if the new variable has very similar
* semantics to the old.
*/
static const char * const map_old_guc_names[] = {
"sort_mem", "work_mem",
"vacuum_mem", "maintenance_work_mem",
NULL
};
/* /*
* Actual lookup of variables is done through this single, sorted array. * Actual lookup of variables is done through this single, sorted array.
*/ */
...@@ -1723,6 +1736,7 @@ static char *guc_string_workspace; /* for avoiding memory leaks */ ...@@ -1723,6 +1736,7 @@ static char *guc_string_workspace; /* for avoiding memory leaks */
static int guc_var_compare(const void *a, const void *b); static int guc_var_compare(const void *a, const void *b);
static int guc_name_compare(const char *namea, const char *nameb);
static void ReportGUCOption(struct config_generic * record); static void ReportGUCOption(struct config_generic * record);
static char *_ShowOption(struct config_generic * record); static char *_ShowOption(struct config_generic * record);
...@@ -1812,11 +1826,12 @@ find_option(const char *name) ...@@ -1812,11 +1826,12 @@ find_option(const char *name)
{ {
const char **key = &name; const char **key = &name;
struct config_generic **res; struct config_generic **res;
int i;
Assert(name); Assert(name);
/* /*
* by equating const char ** with struct config_generic *, we are * By equating const char ** with struct config_generic *, we are
* assuming the name field is first in config_generic. * assuming the name field is first in config_generic.
*/ */
res = (struct config_generic **) bsearch((void *) &key, res = (struct config_generic **) bsearch((void *) &key,
...@@ -1826,6 +1841,19 @@ find_option(const char *name) ...@@ -1826,6 +1841,19 @@ find_option(const char *name)
guc_var_compare); guc_var_compare);
if (res) if (res)
return *res; return *res;
/*
* See if the name is an obsolete name for a variable. We assume that
* the set of supported old names is short enough that a brute-force
* search is the best way.
*/
for (i = 0; map_old_guc_names[i] != NULL; i += 2)
{
if (guc_name_compare(name, map_old_guc_names[i]) == 0)
return find_option(map_old_guc_names[i+1]);
}
/* Unknown name */
return NULL; return NULL;
} }
...@@ -1838,16 +1866,19 @@ guc_var_compare(const void *a, const void *b) ...@@ -1838,16 +1866,19 @@ guc_var_compare(const void *a, const void *b)
{ {
struct config_generic *confa = *(struct config_generic **) a; struct config_generic *confa = *(struct config_generic **) a;
struct config_generic *confb = *(struct config_generic **) b; struct config_generic *confb = *(struct config_generic **) b;
const char *namea;
const char *nameb;
return guc_name_compare(confa->name, confb->name);
}
static int
guc_name_compare(const char *namea, const char *nameb)
{
/* /*
* The temptation to use strcasecmp() here must be resisted, because * The temptation to use strcasecmp() here must be resisted, because
* the array ordering has to remain stable across setlocale() calls. * the array ordering has to remain stable across setlocale() calls.
* So, build our own with a simple ASCII-only downcasing. * So, build our own with a simple ASCII-only downcasing.
*/ */
namea = confa->name;
nameb = confb->name;
while (*namea && *nameb) while (*namea && *nameb)
{ {
char cha = *namea++; char cha = *namea++;
......
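The guc.c hunks add two pieces that work together: the `map_old_guc_names` pair array consulted when find_option() misses, and guc_name_compare(), a locale-independent comparison split out of guc_var_compare(). Both can be exercised in isolation; `map_old_name` below is an illustrative wrapper for the fallback loop inside find_option():

```c
#include <stddef.h>

/* (old, new) pairs, as in the commit; NULL-terminated */
static const char *const map_old_guc_names[] = {
    "sort_mem", "work_mem",
    "vacuum_mem", "maintenance_work_mem",
    NULL
};

/*
 * ASCII-only case-insensitive compare.  strcasecmp() is deliberately
 * avoided: its result can change across setlocale() calls, which would
 * destabilize the bsearch ordering of the GUC array.
 */
static int
guc_name_compare(const char *namea, const char *nameb)
{
    while (*namea && *nameb)
    {
        char        cha = *namea++;
        char        chb = *nameb++;

        if (cha >= 'A' && cha <= 'Z')
            cha += 'a' - 'A';
        if (chb >= 'A' && chb <= 'Z')
            chb += 'a' - 'A';
        if (cha != chb)
            return cha - chb;
    }
    if (*namea)
        return 1;               /* a is longer */
    if (*nameb)
        return -1;              /* b is longer */
    return 0;
}

/* Map an obsolete GUC name to its current name, or return NULL. */
static const char *
map_old_name(const char *name)
{
    int         i;

    for (i = 0; map_old_guc_names[i] != NULL; i += 2)
    {
        if (guc_name_compare(name, map_old_guc_names[i]) == 0)
            return map_old_guc_names[i + 1];
    }
    return NULL;
}
```

This is why `SHOW sort_mem` and `SET vacuum_mem = ...` keep working after the rename: the brute-force scan is fine because the alias list stays tiny.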
...@@ -56,8 +56,8 @@ ...@@ -56,8 +56,8 @@
# - Memory - # - Memory -
#shared_buffers = 1000 # min 16, at least max_connections*2, 8KB each #shared_buffers = 1000 # min 16, at least max_connections*2, 8KB each
#sort_mem = 1024 # min 64, size in KB #work_mem = 1024 # min 64, size in KB
#vacuum_mem = 8192 # min 1024, size in KB #maintenance_work_mem = 16384 # min 1024, size in KB
#debug_shared_buffers = 0 # 0-600 seconds #debug_shared_buffers = 0 # 0-600 seconds
# - Background writer - # - Background writer -
......
...@@ -12,7 +12,7 @@ ...@@ -12,7 +12,7 @@
* Portions Copyright (c) 1994, Regents of the University of California * Portions Copyright (c) 1994, Regents of the University of California
* *
* IDENTIFICATION * IDENTIFICATION
* $PostgreSQL: pgsql/src/backend/utils/mmgr/portalmem.c,v 1.63 2003/11/29 19:52:04 pgsql Exp $ * $PostgreSQL: pgsql/src/backend/utils/mmgr/portalmem.c,v 1.64 2004/02/03 17:34:03 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
...@@ -282,8 +282,8 @@ PortalCreateHoldStore(Portal portal) ...@@ -282,8 +282,8 @@ PortalCreateHoldStore(Portal portal)
/* Create the tuple store, selecting cross-transaction temp files. */ /* Create the tuple store, selecting cross-transaction temp files. */
oldcxt = MemoryContextSwitchTo(portal->holdContext); oldcxt = MemoryContextSwitchTo(portal->holdContext);
/* XXX: Should SortMem be used for this? */ /* XXX: Should maintenance_work_mem be used for the portal size? */
portal->holdStore = tuplestore_begin_heap(true, true, SortMem); portal->holdStore = tuplestore_begin_heap(true, true, work_mem);
MemoryContextSwitchTo(oldcxt); MemoryContextSwitchTo(oldcxt);
} }
......
...@@ -30,15 +30,15 @@ ...@@ -30,15 +30,15 @@
* heap. When the run number at the top of the heap changes, we know that * heap. When the run number at the top of the heap changes, we know that
* no more records of the prior run are left in the heap. * no more records of the prior run are left in the heap.
* *
* The (approximate) amount of memory allowed for any one sort operation * The approximate amount of memory allowed for any one sort operation
* is given in kilobytes by the external variable SortMem. Initially, * is specified in kilobytes by the caller (most pass work_mem). Initially,
* we absorb tuples and simply store them in an unsorted array as long as * we absorb tuples and simply store them in an unsorted array as long as
* we haven't exceeded SortMem. If we reach the end of the input without * we haven't exceeded workMem. If we reach the end of the input without
* exceeding SortMem, we sort the array using qsort() and subsequently return * exceeding workMem, we sort the array using qsort() and subsequently return
* tuples just by scanning the tuple array sequentially. If we do exceed * tuples just by scanning the tuple array sequentially. If we do exceed
* SortMem, we construct a heap using Algorithm H and begin to emit tuples * workMem, we construct a heap using Algorithm H and begin to emit tuples
* into sorted runs in temporary tapes, emitting just enough tuples at each * into sorted runs in temporary tapes, emitting just enough tuples at each
* step to get back within the SortMem limit. Whenever the run number at * step to get back within the workMem limit. Whenever the run number at
* the top of the heap changes, we begin a new run with a new output tape * the top of the heap changes, we begin a new run with a new output tape
* (selected per Algorithm D). After the end of the input is reached, * (selected per Algorithm D). After the end of the input is reached,
* we dump out remaining tuples in memory into a final run (or two), * we dump out remaining tuples in memory into a final run (or two),
...@@ -49,7 +49,7 @@ ...@@ -49,7 +49,7 @@
* next tuple from its source tape (if any). When the heap empties, the merge * next tuple from its source tape (if any). When the heap empties, the merge
* is complete. The basic merge algorithm thus needs very little memory --- * is complete. The basic merge algorithm thus needs very little memory ---
* only M tuples for an M-way merge, and M is at most six in the present code. * only M tuples for an M-way merge, and M is at most six in the present code.
* However, we can still make good use of our full SortMem allocation by * However, we can still make good use of our full workMem allocation by
* pre-reading additional tuples from each source tape. Without prereading, * pre-reading additional tuples from each source tape. Without prereading,
* our access pattern to the temporary file would be very erratic; on average * our access pattern to the temporary file would be very erratic; on average
* we'd read one block from each of M source tapes during the same time that * we'd read one block from each of M source tapes during the same time that
...@@ -59,7 +59,7 @@ ...@@ -59,7 +59,7 @@
* of the temp file, ensuring that things will be even worse when it comes * of the temp file, ensuring that things will be even worse when it comes
* time to read that tape. A straightforward merge pass thus ends up doing a * time to read that tape. A straightforward merge pass thus ends up doing a
* lot of waiting for disk seeks. We can improve matters by prereading from * lot of waiting for disk seeks. We can improve matters by prereading from
* each source tape sequentially, loading about SortMem/M bytes from each tape * each source tape sequentially, loading about workMem/M bytes from each tape
* in turn. Then we run the merge algorithm, writing but not reading until * in turn. Then we run the merge algorithm, writing but not reading until
* one of the preloaded tuple series runs out. Then we switch back to preread * one of the preloaded tuple series runs out. Then we switch back to preread
* mode, fill memory again, and repeat. This approach helps to localize both * mode, fill memory again, and repeat. This approach helps to localize both
...@@ -78,7 +78,7 @@ ...@@ -78,7 +78,7 @@
* Portions Copyright (c) 1994, Regents of the University of California * Portions Copyright (c) 1994, Regents of the University of California
* *
* IDENTIFICATION * IDENTIFICATION
* $PostgreSQL: pgsql/src/backend/utils/sort/tuplesort.c,v 1.40 2003/11/29 19:52:04 pgsql Exp $ * $PostgreSQL: pgsql/src/backend/utils/sort/tuplesort.c,v 1.41 2004/02/03 17:34:03 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
...@@ -323,7 +323,7 @@ struct Tuplesortstate ...@@ -323,7 +323,7 @@ struct Tuplesortstate
* *
* NOTES about memory consumption calculations: * NOTES about memory consumption calculations:
* *
* We count space allocated for tuples against the SortMem limit, plus * We count space allocated for tuples against the workMem limit, plus
* the space used by the variable-size arrays memtuples and memtupindex. * the space used by the variable-size arrays memtuples and memtupindex.
* Fixed-size space (primarily the LogicalTapeSet I/O buffers) is not * Fixed-size space (primarily the LogicalTapeSet I/O buffers) is not
* counted. * counted.
...@@ -351,7 +351,7 @@ typedef struct ...@@ -351,7 +351,7 @@ typedef struct
} DatumTuple; } DatumTuple;
static Tuplesortstate *tuplesort_begin_common(bool randomAccess); static Tuplesortstate *tuplesort_begin_common(int workMem, bool randomAccess);
static void puttuple_common(Tuplesortstate *state, void *tuple); static void puttuple_common(Tuplesortstate *state, void *tuple);
static void inittapes(Tuplesortstate *state); static void inittapes(Tuplesortstate *state);
static void selectnewtape(Tuplesortstate *state); static void selectnewtape(Tuplesortstate *state);
...@@ -406,10 +406,16 @@ static Tuplesortstate *qsort_tuplesortstate; ...@@ -406,10 +406,16 @@ static Tuplesortstate *qsort_tuplesortstate;
* access was requested, rescan, markpos, and restorepos can also be called.) * access was requested, rescan, markpos, and restorepos can also be called.)
* For Datum sorts, putdatum/getdatum are used instead of puttuple/gettuple. * For Datum sorts, putdatum/getdatum are used instead of puttuple/gettuple.
* Call tuplesort_end to terminate the operation and release memory/disk space. * Call tuplesort_end to terminate the operation and release memory/disk space.
*
* Each variant of tuplesort_begin has a workMem parameter specifying the
* maximum number of kilobytes of RAM to use before spilling data to disk.
* (The normal value of this parameter is work_mem, but some callers use
* other values.) Each variant also has a randomAccess parameter specifying
* whether the caller needs non-sequential access to the sort result.
*/ */
static Tuplesortstate * static Tuplesortstate *
tuplesort_begin_common(bool randomAccess) tuplesort_begin_common(int workMem, bool randomAccess)
{ {
Tuplesortstate *state; Tuplesortstate *state;
...@@ -417,7 +423,7 @@ tuplesort_begin_common(bool randomAccess) ...@@ -417,7 +423,7 @@ tuplesort_begin_common(bool randomAccess)
state->status = TSS_INITIAL; state->status = TSS_INITIAL;
state->randomAccess = randomAccess; state->randomAccess = randomAccess;
state->availMem = SortMem * 1024L; state->availMem = workMem * 1024L;
state->tapeset = NULL; state->tapeset = NULL;
state->memtupcount = 0; state->memtupcount = 0;
...@@ -442,9 +448,9 @@ Tuplesortstate * ...@@ -442,9 +448,9 @@ Tuplesortstate *
tuplesort_begin_heap(TupleDesc tupDesc, tuplesort_begin_heap(TupleDesc tupDesc,
int nkeys, int nkeys,
Oid *sortOperators, AttrNumber *attNums, Oid *sortOperators, AttrNumber *attNums,
bool randomAccess) int workMem, bool randomAccess)
{ {
Tuplesortstate *state = tuplesort_begin_common(randomAccess); Tuplesortstate *state = tuplesort_begin_common(workMem, randomAccess);
int i; int i;
AssertArg(nkeys > 0); AssertArg(nkeys > 0);
...@@ -488,9 +494,9 @@ tuplesort_begin_heap(TupleDesc tupDesc, ...@@ -488,9 +494,9 @@ tuplesort_begin_heap(TupleDesc tupDesc,
Tuplesortstate * Tuplesortstate *
tuplesort_begin_index(Relation indexRel, tuplesort_begin_index(Relation indexRel,
bool enforceUnique, bool enforceUnique,
bool randomAccess) int workMem, bool randomAccess)
{ {
Tuplesortstate *state = tuplesort_begin_common(randomAccess); Tuplesortstate *state = tuplesort_begin_common(workMem, randomAccess);
state->comparetup = comparetup_index; state->comparetup = comparetup_index;
state->copytup = copytup_index; state->copytup = copytup_index;
...@@ -508,9 +514,9 @@ tuplesort_begin_index(Relation indexRel, ...@@ -508,9 +514,9 @@ tuplesort_begin_index(Relation indexRel,
Tuplesortstate * Tuplesortstate *
tuplesort_begin_datum(Oid datumType, tuplesort_begin_datum(Oid datumType,
Oid sortOperator, Oid sortOperator,
bool randomAccess) int workMem, bool randomAccess)
{ {
Tuplesortstate *state = tuplesort_begin_common(randomAccess); Tuplesortstate *state = tuplesort_begin_common(workMem, randomAccess);
RegProcedure sortFunction; RegProcedure sortFunction;
int16 typlen; int16 typlen;
bool typbyval; bool typbyval;
...@@ -1077,7 +1083,7 @@ mergeruns(Tuplesortstate *state) ...@@ -1077,7 +1083,7 @@ mergeruns(Tuplesortstate *state)
/* /*
* If we produced only one initial run (quite likely if the total data * If we produced only one initial run (quite likely if the total data
* volume is between 1X and 2X SortMem), we can just use that tape as * volume is between 1X and 2X workMem), we can just use that tape as
* the finished output, rather than doing a useless merge. * the finished output, rather than doing a useless merge.
*/ */
if (state->currentRun == 1) if (state->currentRun == 1)
......
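The tuplesort.c change threads workMem through as a per-call parameter, seeding `state->availMem = workMem * 1024L` and charging each stored tuple against it until the sort must spill to tape. A toy model of that accounting, with illustrative names (`ToySortState`, `toy_sort_puttuple`); only the availMem initialization and the LACKMEM-style check mirror the real code:

```c
/*
 * Toy model of tuplesort memory accounting under the new per-call
 * workMem parameter: availMem starts at workMem kilobytes, each tuple
 * is charged against it, and when it goes negative the sort switches
 * from the in-memory qsort() path to building runs on tape (the real
 * code would call inittapes() at that point).
 */
typedef struct
{
    long        availMem;       /* remaining memory budget, in bytes */
    int         spilled;        /* switched to tape sort? */
} ToySortState;

#define LACKMEM(state)  ((state)->availMem < 0)

static void
toy_sort_init(ToySortState *state, int workMem)
{
    state->availMem = workMem * 1024L;
    state->spilled = 0;
}

static void
toy_sort_puttuple(ToySortState *state, long tuple_bytes)
{
    state->availMem -= tuple_bytes;
    if (LACKMEM(state))
        state->spilled = 1;     /* real code: inittapes(state) */
}
```

Making workMem a parameter rather than reading the SortMem global is what lets btree builds pass maintenance_work_mem while ordinary query sorts keep passing work_mem.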
...@@ -36,7 +36,7 @@ ...@@ -36,7 +36,7 @@
* Portions Copyright (c) 1994, Regents of the University of California * Portions Copyright (c) 1994, Regents of the University of California
* *
* IDENTIFICATION * IDENTIFICATION
* $PostgreSQL: pgsql/src/backend/utils/sort/tuplestore.c,v 1.17 2003/11/29 19:52:04 pgsql Exp $ * $PostgreSQL: pgsql/src/backend/utils/sort/tuplestore.c,v 1.18 2004/02/03 17:34:03 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
...@@ -219,10 +219,7 @@ tuplestore_begin_common(bool randomAccess, bool interXact, int maxKBytes) ...@@ -219,10 +219,7 @@ tuplestore_begin_common(bool randomAccess, bool interXact, int maxKBytes)
state->myfile = NULL; state->myfile = NULL;
state->memtupcount = 0; state->memtupcount = 0;
if (maxKBytes > 0)
state->memtupsize = 1024; /* initial guess */ state->memtupsize = 1024; /* initial guess */
else
state->memtupsize = 1; /* won't really need any space */
state->memtuples = (void **) palloc(state->memtupsize * sizeof(void *)); state->memtuples = (void **) palloc(state->memtupsize * sizeof(void *));
USEMEM(state, GetMemoryChunkSpace(state->memtuples)); USEMEM(state, GetMemoryChunkSpace(state->memtuples));
...@@ -250,7 +247,7 @@ tuplestore_begin_common(bool randomAccess, bool interXact, int maxKBytes) ...@@ -250,7 +247,7 @@ tuplestore_begin_common(bool randomAccess, bool interXact, int maxKBytes)
* no longer wanted. * no longer wanted.
* *
* maxKBytes: how much data to store in memory (any data beyond this * maxKBytes: how much data to store in memory (any data beyond this
* amount is paged to disk). * amount is paged to disk). When in doubt, use work_mem.
*/ */
Tuplestorestate * Tuplestorestate *
tuplestore_begin_heap(bool randomAccess, bool interXact, int maxKBytes) tuplestore_begin_heap(bool randomAccess, bool interXact, int maxKBytes)
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
* *
* Copyright (c) 2000-2003, PostgreSQL Global Development Group * Copyright (c) 2000-2003, PostgreSQL Global Development Group
* *
* $PostgreSQL: pgsql/src/bin/psql/tab-complete.c,v 1.100 2004/01/25 03:07:22 neilc Exp $ * $PostgreSQL: pgsql/src/bin/psql/tab-complete.c,v 1.101 2004/02/03 17:34:03 tgl Exp $
*/ */
/*---------------------------------------------------------------------- /*----------------------------------------------------------------------
...@@ -533,6 +533,7 @@ psql_completion(char *text, int start, int end) ...@@ -533,6 +533,7 @@ psql_completion(char *text, int start, int end)
"log_planner_stats", "log_planner_stats",
"log_statement", "log_statement",
"log_statement_stats", "log_statement_stats",
"maintenance_work_mem",
"max_connections", "max_connections",
"max_expr_depth", "max_expr_depth",
"max_files_per_process", "max_files_per_process",
...@@ -547,7 +548,6 @@ psql_completion(char *text, int start, int end) ...@@ -547,7 +548,6 @@ psql_completion(char *text, int start, int end)
"shared_buffers", "shared_buffers",
"seed", "seed",
"server_encoding", "server_encoding",
"sort_mem",
"sql_inheritance", "sql_inheritance",
"ssl", "ssl",
"statement_timeout", "statement_timeout",
...@@ -567,10 +567,10 @@ psql_completion(char *text, int start, int end) ...@@ -567,10 +567,10 @@ psql_completion(char *text, int start, int end)
"unix_socket_directory", "unix_socket_directory",
"unix_socket_group", "unix_socket_group",
"unix_socket_permissions", "unix_socket_permissions",
"vacuum_mem",
"wal_buffers", "wal_buffers",
"wal_debug", "wal_debug",
"wal_sync_method", "wal_sync_method",
"work_mem",
NULL NULL
}; };
......
...@@ -7,7 +7,7 @@ ...@@ -7,7 +7,7 @@
* Portions Copyright (c) 1996-2003, PostgreSQL Global Development Group * Portions Copyright (c) 1996-2003, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California * Portions Copyright (c) 1994, Regents of the University of California
* *
* $PostgreSQL: pgsql/src/include/access/nbtree.h,v 1.75 2003/12/21 01:23:06 tgl Exp $ * $PostgreSQL: pgsql/src/include/access/nbtree.h,v 1.76 2004/02/03 17:34:03 tgl Exp $
* *
*------------------------------------------------------------------------- *-------------------------------------------------------------------------
*/ */
...@@ -490,7 +490,7 @@ extern BTItem _bt_formitem(IndexTuple itup); ...@@ -490,7 +490,7 @@ extern BTItem _bt_formitem(IndexTuple itup);
*/ */
typedef struct BTSpool BTSpool; /* opaque type known only within nbtsort.c */ typedef struct BTSpool BTSpool; /* opaque type known only within nbtsort.c */
extern BTSpool *_bt_spoolinit(Relation index, bool isunique); extern BTSpool *_bt_spoolinit(Relation index, bool isunique, bool isdead);
extern void _bt_spooldestroy(BTSpool *btspool); extern void _bt_spooldestroy(BTSpool *btspool);
extern void _bt_spool(BTItem btitem, BTSpool *btspool); extern void _bt_spool(BTItem btitem, BTSpool *btspool);
extern void _bt_leafbuild(BTSpool *btspool, BTSpool *spool2); extern void _bt_leafbuild(BTSpool *btspool, BTSpool *spool2);
......
@@ -12,7 +12,7 @@
 * Portions Copyright (c) 1996-2003, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
- * $PostgreSQL: pgsql/src/include/miscadmin.h,v 1.149 2004/01/30 15:57:04 momjian Exp $
+ * $PostgreSQL: pgsql/src/include/miscadmin.h,v 1.150 2004/02/03 17:34:03 tgl Exp $
 *
 * NOTES
 *    some of the information in this file should be moved to
@@ -207,8 +207,8 @@ extern int CTimeZone;
extern bool enableFsync;
extern bool allowSystemTableMods;
-extern DLLIMPORT int SortMem;
-extern int VacuumMem;
+extern DLLIMPORT int work_mem;
+extern DLLIMPORT int maintenance_work_mem;
/*
 * A few postmaster startup options are exported here so the
...
@@ -13,7 +13,7 @@
 * Portions Copyright (c) 1996-2003, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
- * $PostgreSQL: pgsql/src/include/utils/tuplesort.h,v 1.14 2003/11/29 22:41:16 pgsql Exp $
+ * $PostgreSQL: pgsql/src/include/utils/tuplesort.h,v 1.15 2004/02/03 17:34:04 tgl Exp $
 *
 *-------------------------------------------------------------------------
 */
@@ -39,13 +39,13 @@ typedef struct Tuplesortstate Tuplesortstate;
extern Tuplesortstate *tuplesort_begin_heap(TupleDesc tupDesc,
						int nkeys,
						Oid *sortOperators, AttrNumber *attNums,
-						bool randomAccess);
+						int workMem, bool randomAccess);
extern Tuplesortstate *tuplesort_begin_index(Relation indexRel,
						bool enforceUnique,
-						bool randomAccess);
+						int workMem, bool randomAccess);
extern Tuplesortstate *tuplesort_begin_datum(Oid datumType,
						Oid sortOperator,
-						bool randomAccess);
+						int workMem, bool randomAccess);
extern void tuplesort_puttuple(Tuplesortstate *state, void *tuple);
...
@@ -3,7 +3,7 @@
 * procedural language
 *
 * IDENTIFICATION
- *    $PostgreSQL: pgsql/src/pl/plpgsql/src/pl_exec.c,v 1.94 2003/11/29 19:52:12 pgsql Exp $
+ *    $PostgreSQL: pgsql/src/pl/plpgsql/src/pl_exec.c,v 1.95 2004/02/03 17:34:04 tgl Exp $
 *
 * This software is copyrighted by Jan Wieck - Hamburg.
 *
@@ -1770,7 +1770,7 @@ exec_init_tuple_store(PLpgSQL_execstate * estate)
	estate->tuple_store_cxt = rsi->econtext->ecxt_per_query_memory;
	oldcxt = MemoryContextSwitchTo(estate->tuple_store_cxt);
-	estate->tuple_store = tuplestore_begin_heap(true, false, SortMem);
+	estate->tuple_store = tuplestore_begin_heap(true, false, work_mem);
	MemoryContextSwitchTo(oldcxt);
	estate->rettupdesc = rsi->expectedDesc;
...