Commit bf62b1a0 authored by Bruce Momjian

Proofreading improvements for the Administration documentation book.

parent 1e4cc384
<!-- $PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.140 2010/02/03 17:25:05 momjian Exp $ -->
<chapter id="backup">
<title>Backup and Restore</title>
@@ -20,8 +20,7 @@
<listitem><para>File system level backup</para></listitem>
<listitem><para>Continuous archiving</para></listitem>
</itemizedlist>
Each has its own strengths and weaknesses; each is discussed in turn below.
</para>
<sect1 id="backup-dump">
@@ -37,14 +36,14 @@
<synopsis>
pg_dump <replaceable class="parameter">dbname</replaceable> &gt; <replaceable class="parameter">outfile</replaceable>
</synopsis>
As you see, <application>pg_dump</> writes its result to the
standard output. We will see below how this can be useful.
</para>
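<para>
For example, a typical invocation might look like this (a sketch only;
the host, user, and database names are placeholders for your own):
<programlisting>
pg_dump -h db.example.com -U postgres mydb &gt; mydb.sql
</programlisting>
</para>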
<para>
<application>pg_dump</> is a regular <productname>PostgreSQL</>
client application (albeit a particularly clever one). This means
that you can perform this backup procedure from any remote host that has
access to the database. But remember that <application>pg_dump</>
does not operate with special permissions. In particular, it must
have read access to all tables that you want to back up, so in
@@ -76,8 +75,8 @@
<para>
Dumps created by <application>pg_dump</> are internally consistent,
meaning, the dump represents a snapshot of the database at the time
<application>pg_dump</> began running. <application>pg_dump</> does not
block other operations on the database while it is working.
(Exceptions are those operations that need to operate with an
exclusive lock, such as most forms of <command>ALTER TABLE</command>.)
@@ -85,9 +84,9 @@
<important>
<para>
If your database schema relies on OIDs (for instance, as foreign
keys) you must instruct <application>pg_dump</> to dump the OIDs
as well. To do this, use the <option>-o</option> command-line
option.
</para>
</important>
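<para>
For instance, a minimal sketch (the database name is a placeholder):
<programlisting>
pg_dump -o mydb &gt; mydb.sql
</programlisting>
</para>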
@@ -102,43 +101,43 @@
<synopsis>
psql <replaceable class="parameter">dbname</replaceable> &lt; <replaceable class="parameter">infile</replaceable>
</synopsis>
where <replaceable class="parameter">infile</replaceable> is the
file output by the <application>pg_dump</> command. The database <replaceable
class="parameter">dbname</replaceable> will not be created by this
command, so you must create it yourself from <literal>template0</>
before executing <application>psql</> (e.g., with
<literal>createdb -T template0 <replaceable
class="parameter">dbname</></literal>). <application>psql</>
supports options similar to <application>pg_dump</> for specifying
the database server to connect to and the user name to use. See
the <xref linkend="app-psql"> reference page for more information.
</para>
<para>
Before restoring an SQL dump, all the users who own objects or were
granted permissions on objects in the dumped database must already
exist. If they do not, the restore will fail to recreate the
objects with the original ownership and/or permissions.
(Sometimes this is what you want, but usually it is not.)
</para>
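<para>
One way to ensure this, if the original cluster is still available, is
to dump and restore the global objects first (a sketch; adjust the
connection options as needed):
<programlisting>
pg_dumpall -g &gt; globals.sql
psql -f globals.sql postgres
</programlisting>
</para>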
<para>
By default, the <application>psql</> script will continue to
execute after an SQL error is encountered. You might wish to run
<application>psql</application> with
the <literal>ON_ERROR_STOP</> variable set to alter that
behaviour and have <application>psql</application> exit with an
exit status of 3 if an SQL error occurs:
<programlisting>
psql --set ON_ERROR_STOP=on dbname &lt; infile
</programlisting>
Either way, you will only have a partially restored database.
Alternatively, you can specify that the whole dump should be
restored as a single transaction, so the restore is either fully
completed or fully rolled back. This mode can be specified by
passing the <option>-1</> or <option>--single-transaction</>
command-line options to <application>psql</>. When using this
mode, be aware that even a minor error can roll back a
restore that has already run for many hours. However, that might
still be preferable to manually cleaning up a complex database
after a partially restored dump.
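For example (a sketch; substitute your own database and dump file names):
<programlisting>
psql -1 mydb &lt; mydb.sql
</programlisting>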
@@ -197,11 +196,11 @@
psql -f <replaceable class="parameter">infile</replaceable> postgres
</synopsis>
(Actually, you can specify any existing database name to start from,
but if you are loading into an empty cluster then <literal>postgres</>
should usually be used.) It is always necessary to have
database superuser access when restoring a <application>pg_dumpall</>
dump, as that is required to restore the role and tablespace information.
If you use tablespaces, make sure that the tablespace paths in the
dump are appropriate for the new installation.
</para>
@@ -218,13 +217,11 @@
<title>Handling large databases</title>
<para>
Some operating systems have maximum file size limits that cause
problems when creating large <application>pg_dump</> output files.
Fortunately, <application>pg_dump</> can write to the standard
output, so you can use standard Unix tools to work around this
potential problem. There are several possible methods:
</para>
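<para>
For instance, a condensed sketch of the compressed-dump method described
below (file and database names are placeholders):
<programlisting>
pg_dump mydb | gzip &gt; mydb.sql.gz
gunzip -c mydb.sql.gz | psql mydb
</programlisting>
</para>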
<formalpara>
@@ -255,7 +252,7 @@
<title>Use <command>split</>.</title>
<para>
The <command>split</command> command
allows you to split the output into smaller files that are
acceptable in size to the underlying file system. For example, to
make chunks of 1 megabyte:
@@ -310,11 +307,10 @@
<para>
An alternative backup strategy is to directly copy the files that
<productname>PostgreSQL</> uses to store the data in the database;
<xref linkend="creating-cluster"> explains where these files
are located. You can use whatever method you prefer
for doing file system backups; for example:
<programlisting>
tar -cf backup.tar /usr/local/pgsql/data
@@ -336,7 +332,7 @@
an atomic snapshot of the state of the file system,
but also because of internal buffering within the server).
Information about stopping the server can be found in
<xref linkend="server-shutdown">. Needless to say, you
also need to shut down the server before restoring the data.
</para>
</listitem>
@@ -347,8 +343,8 @@
database, you might be tempted to try to back up or restore only certain
individual tables or databases from their respective files or
directories. This will <emphasis>not</> work because the
information contained in these files is not usable without
the commit log files,
<filename>pg_clog/*</filename>, which contain the commit status of
all transactions. A table file is only usable with this
information. Of course it is also impossible to restore only a
@@ -371,11 +367,11 @@
above) from the snapshot to a backup device, then release the frozen
snapshot. This will work even while the database server is running.
However, a backup created in this way saves
the database files in a state as if the database server was not
properly shut down; therefore, when you start the database server
on the backed-up data, it will think the previous server instance
crashed and will replay the WAL log. This is not a problem; just
be aware of it (and be sure to include the WAL files in your backup).
</para>
<para>
@@ -386,7 +382,7 @@
not be possible to use snapshot backup because the snapshots
<emphasis>must</> be simultaneous.
Read your file system documentation very carefully before trusting
the consistent-snapshot technique in such situations.
</para>
<para>
@@ -411,9 +407,8 @@
</para>
<para>
Note that a file system backup will typically be larger
than an SQL dump. (<application>pg_dump</application> does not need to dump
the contents of indexes for example, just the commands to recreate
them.) However, taking a file system backup might be faster.
</para>
@@ -437,31 +432,31 @@
<para>
At all times, <productname>PostgreSQL</> maintains a
<firstterm>write ahead log</> (WAL) in the <filename>pg_xlog/</>
subdirectory of the cluster's data directory. The log records
every change made to the database's data files. This log exists
primarily for crash-safety purposes: if the system crashes, the
database can be restored to consistency by <quote>replaying</> the
log entries made since the last checkpoint. However, the existence
of the log makes it possible to use a third strategy for backing up
databases: we can combine a file-system-level backup with backup of
the WAL files. If recovery is needed, we restore the file system backup and
then replay from the backed-up WAL files to bring the system to a
current state. This approach is more complex to administer than
either of the previous approaches, but it has some significant
benefits:
<itemizedlist>
<listitem>
<para>
We do not need a perfectly consistent file system backup as the starting point.
Any internal inconsistency in the backup will be corrected by log
replay (this is not significantly different from what happens during
crash recovery). So we do not need a file system snapshot capability,
just <application>tar</> or a similar archiving tool.
</para>
</listitem>
<listitem>
<para>
Since we can combine an indefinitely long sequence of WAL files
for replay, continuous backup can be achieved simply by continuing to archive
the WAL files. This is particularly valuable for large databases, where
it might not be convenient to take a full backup frequently.
@@ -469,7 +464,7 @@
</listitem>
<listitem>
<para>
It is not necessary to replay the WAL entries all the
way to the end. We could stop the replay at any point and have a
consistent snapshot of the database as it was at that time. Thus,
this technique supports <firstterm>point-in-time recovery</>: it is
@@ -521,8 +516,8 @@
abstract WAL sequence. When not using WAL archiving, the system
normally creates just a few segment files and then
<quote>recycles</> them by renaming no-longer-needed segment files
to higher segment numbers. It's assumed that segment files whose
contents precede the checkpoint-before-last are no longer of
interest and can be recycled.
</para>
@@ -535,7 +530,7 @@
directory on another machine, write them onto a tape drive (ensuring that
you have a way of identifying the original name of each file), or batch
them together and burn them onto CDs, or something else entirely. To
provide the database administrator with flexibility,
<productname>PostgreSQL</> tries not to make any assumptions about how
the archiving will be done. Instead, <productname>PostgreSQL</> lets
the administrator specify a shell command to be executed to copy a
@@ -552,11 +547,11 @@
these settings will always be placed in the
<filename>postgresql.conf</filename> file.
In <varname>archive_command</>,
<literal>%p</> is replaced by the path name of the file to
archive, while <literal>%f</> is replaced by only the file name.
(The path name is relative to the current working directory,
i.e., the cluster's data directory.)
Use <literal>%%</> if you need to embed an actual <literal>%</>
character in the command. The simplest useful command is something
like:
<programlisting>
@@ -584,7 +579,7 @@
<para>
It is important that the archive command return zero exit status if and
only if it succeeds. Upon getting a zero result,
<productname>PostgreSQL</> will assume that the file has been
successfully archived, and will remove or recycle it. However, a nonzero
status tells <productname>PostgreSQL</> that the file was not archived;
@@ -602,7 +597,7 @@
nonzero status in this case</>. We have found that <literal>cp -i</> does
this correctly on some platforms but not others. If the chosen command
does not itself handle this case correctly, you should add a command
to test for existence of the archive file. For example, something
like:
<programlisting>
archive_command = 'test ! -f .../%f &amp;&amp; cp %p .../%f'
@@ -620,14 +615,14 @@
is reported appropriately so that the situation can be
resolved reasonably quickly. The <filename>pg_xlog/</> directory will
continue to fill with WAL segment files until the situation is resolved.
(If the file system containing <filename>pg_xlog/</> fills up,
<productname>PostgreSQL</> will do a PANIC shutdown. No committed
transactions will be lost, but the database will remain offline until
you free some space.)
</para>
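<para>
Putting these pieces together, a minimal archiving setup in
<filename>postgresql.conf</filename> might look like this (a sketch only;
the archive directory is a placeholder for a location of your choosing):
<programlisting>
archive_mode = on
archive_command = 'test ! -f /mnt/server/archivedir/%f &amp;&amp; cp %p /mnt/server/archivedir/%f'
</programlisting>
</para>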
<para>
The speed of the archiving command is unimportant as long as it can keep up
with the average rate at which your server generates WAL data. Normal
operation continues even if the archiving process falls a little behind.
If archiving falls significantly behind, this will increase the amount of
@@ -642,8 +637,8 @@
In writing your archive command, you should assume that the file names to
be archived can be up to 64 characters long and can contain any
combination of ASCII letters, digits, and dots. It is not necessary to
preserve the original relative path (<literal>%p</>) but it is necessary to
preserve the file name (<literal>%f</>).
</para>
<para>
@@ -667,7 +662,7 @@
a limit on how old unarchived data can be, you can set
<xref linkend="guc-archive-timeout"> to force the server to switch
to a new WAL segment file at least that often. Note that archived
files that are archived early due to a forced switch are still the same
length as completely full files. It is therefore unwise to set a very
short <varname>archive_timeout</> &mdash; it will bloat your archive
storage. <varname>archive_timeout</> settings of a minute or so are
@@ -676,7 +671,7 @@
<para>
Also, you can force a segment switch manually with
<function>pg_switch_xlog</> if you want to ensure that a
just-finished transaction is archived as soon as possible. Other utility
functions related to WAL management are listed in <xref
linkend="functions-admin-backup-table">.
@@ -711,7 +706,7 @@
</listitem>
<listitem>
<para>
Connect to the database as a superuser and issue the command:
<programlisting>
SELECT pg_start_backup('label');
</programlisting>
@@ -720,7 +715,8 @@
full path where you intend to put the backup dump file.)
<function>pg_start_backup</> creates a <firstterm>backup label</> file,
called <filename>backup_label</>, in the cluster directory with
information about your backup, including the start time and label
string.
</para>
<para>
@@ -735,9 +731,9 @@
required for the checkpoint will be spread out over a significant
period of time, by default half your inter-checkpoint interval
(see the configuration parameter
<xref linkend="guc-checkpoint-completion-target">). This is
usually what you want, because it minimizes the impact on query
processing. If you want to start the backup as soon as
possible, use:
<programlisting>
SELECT pg_start_backup('label', true);
@@ -760,14 +756,14 @@
SELECT pg_stop_backup();
</programlisting>
This terminates the backup mode and performs an automatic switch to
the next WAL segment. The reason for the switch is to arrange for
the last WAL segment file written during the backup interval to be
ready to archive.
</para>
</listitem>
<listitem>
<para>
Once the WAL segment files active during the backup are archived, you are
done. The file identified by <function>pg_stop_backup</>'s result is
the last segment that is required to form a complete set of backup files.
<function>pg_stop_backup</> does not return until the last segment has
@@ -788,10 +784,10 @@
</para>
<para>
Some file system backup tools emit warnings or errors
if the files they are trying to copy change while the copy proceeds.
When taking a base backup of an active database, this situation is normal
and not an error. However, you need to ensure that you can distinguish
complaints of this sort from real errors. For example, some versions
of <application>rsync</> return a separate exit code for
<quote>vanished source files</>, and you can write a driver script to
@@ -804,7 +800,7 @@
</para>
<para>
It is not necessary to be concerned about the amount of time elapsed
between <function>pg_start_backup</> and the start of the actual backup,
nor between the end of the backup and <function>pg_stop_backup</>; a
few minutes' delay won't hurt anything. (However, if you normally run the
@@ -812,23 +808,23 @@
in performance between <function>pg_start_backup</> and
<function>pg_stop_backup</>, since <varname>full_page_writes</> is
effectively forced on during backup mode.) You must ensure that these
steps are carried out in sequence, without any possible
overlap, or you will invalidate the backup.
</para>
<para>
Be certain that your backup dump includes all of the files under
the database cluster directory (e.g., <filename>/usr/local/pgsql/data</>).
If you are using tablespaces that do not reside underneath this directory,
be careful to include them as well (and be sure that your backup dump
archives symbolic links as links, otherwise the restore will corrupt
your tablespaces).
</para>
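<para>
Putting the steps together, a base backup session might look like this
(a sketch only; the label, paths, and connection options are placeholders
for your own setup):
<programlisting>
psql -c "SELECT pg_start_backup('nightly base backup');"
tar -cf /backups/base-backup.tar /usr/local/pgsql/data
psql -c "SELECT pg_stop_backup();"
</programlisting>
</para>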
<para>
You can, however, omit from the backup dump the files within the
cluster's <filename>pg_xlog/</> subdirectory. This
slight adjustment is worthwhile because it reduces the risk
of mistakes when restoring. This is easy to arrange if
<filename>pg_xlog/</> is a symbolic link pointing to someplace outside
the cluster directory, which is a common setup anyway for performance
@@ -836,12 +832,12 @@
</para>
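<para>
For instance, if your <application>tar</> supports an
<option>--exclude</option> option, something like this could be used
(again, only a sketch):
<programlisting>
tar -cf backup.tar --exclude=pg_xlog /usr/local/pgsql/data
</programlisting>
</para>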
<para>
To make use of the backup, you will need to keep all the WAL
segment files generated during and after the file system backup.
To aid you in doing this, the <function>pg_stop_backup</> function
creates a <firstterm>backup history file</> that is immediately
stored into the WAL archive area. This file is named after the first
WAL segment file that you need for the file system backup.
For example, if the starting WAL file is
<literal>0000000100001234000055CD</> the backup history file will be
named something like
@@ -860,9 +856,9 @@
The backup history file is just a small text file. It contains the
label string you gave to <function>pg_start_backup</>, as well as
the starting and ending times and WAL segments of the backup.
If you used the label to identify the associated dump file,
then the archived history file is enough to tell you which dump file to
restore.
</para>
<para>
@@ -878,13 +874,13 @@
<para>
It's also worth noting that the <function>pg_start_backup</> function
makes a file named <filename>backup_label</> in the database cluster
directory, which is removed by <function>pg_stop_backup</>.
This file will of course be archived as a part of your backup dump file.
The backup label file includes the label string you gave to
<function>pg_start_backup</>, as well as the time at which
<function>pg_start_backup</> was run, and the name of the starting WAL
file. In case of confusion it is
therefore possible to look inside a backup dump file and determine
exactly which backup session the dump file came from.
</para>
@@ -917,20 +913,20 @@
location in case you need them later. Note that this precaution will
require that you have enough free space on your system to hold two
copies of your existing database. If you do not have enough space,
you should at least save the contents of the cluster's <filename>pg_xlog</>
subdirectory, as it might contain logs which
were not archived before the system went down.
</para>
</listitem>
<listitem>
<para>
Remove all existing files and subdirectories under the cluster data
directory and under the root directories of any tablespaces you are using.
</para>
</listitem>
<listitem>
<para>
Restore the database files from your file system backup. Be sure that they
are restored with the right ownership (the database system user, not
<literal>root</>!) and with the right permissions. If you are using
tablespaces,
@@ -941,17 +937,18 @@
<listitem>
<para>
Remove any files present in <filename>pg_xlog/</>; these came from the
file system backup and are therefore probably obsolete rather than current.
If you didn't archive <filename>pg_xlog/</> at all, then recreate
it with proper permissions,
being careful to ensure that you re-establish it as a symbolic link
if you had it set up that way before.
</para>
</listitem>
<listitem>
<para>
If you have unarchived WAL segment files that you saved in step 2,
copy them into <filename>pg_xlog/</>. (It is best to copy them,
not move them, so you still have the unmodified files if a
problem occurs and you have to start over.)
</para>
</listitem>
@@ -960,7 +957,7 @@
Create a recovery command file <filename>recovery.conf</> in the cluster
data directory (see <xref linkend="recovery-config-settings">). You might
also want to temporarily modify <filename>pg_hba.conf</> to prevent
ordinary users from connecting until you are sure the recovery was successful.
</para>
</listitem>
<listitem>
@@ -971,28 +968,28 @@
simply be restarted and it will continue recovery. Upon completion
of the recovery process, the server will rename
<filename>recovery.conf</> to <filename>recovery.done</> (to prevent
accidentally re-entering recovery mode later) and then
commence normal database operations.
</para>
</listitem>
<listitem>
<para>
Inspect the contents of the database to ensure you have recovered to
the desired state. If not, return to step 1. If all is well,
allow your users to connect by restoring <filename>pg_hba.conf</> to normal.
</para>
</listitem>
</orderedlist>
</para>
<para>
The key part of all this is to set up a recovery configuration file that
describes how you want to recover and how far the recovery should
run. You can use <filename>recovery.conf.sample</> (normally
located in the installation's <filename>share/</> directory) as a
prototype. The one thing that you absolutely must specify in
<filename>recovery.conf</> is the <varname>restore_command</>,
which tells <productname>PostgreSQL</> how to retrieve archived
WAL file segments. Like the <varname>archive_command</>, this is
a shell command string. It can contain <literal>%f</>, which is
replaced by the name of the desired log file, and <literal>%p</>,
@@ -1006,14 +1003,14 @@
restore_command = 'cp /mnt/server/archivedir/%f %p'
</programlisting>
which will copy previously archived WAL segments from the directory
<filename>/mnt/server/archivedir</>. Of course, you can use something
much more complicated, perhaps even a shell script that requests the
operator to mount an appropriate tape.
</para>
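<para>
A minimal <filename>recovery.conf</> might therefore contain nothing more
than the following (a sketch; the archive location is a placeholder, and
the commented-out line shows where a recovery target could be added if you
do not want to recover all the way to the end of the WAL):
<programlisting>
restore_command = 'cp /mnt/server/archivedir/%f %p'
# recovery_target_time = '2010-02-03 17:00:00'
</programlisting>
</para>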
<para>
It is important that the command return nonzero exit status on failure.
The command <emphasis>will</> be called requesting files that are not present
in the archive; it must return nonzero when so asked. This is not an
error condition. Not all of the requested files will be WAL segment
files; you should also expect requests for files with a suffix of
@@ -1025,7 +1022,7 @@
<para>
WAL segments that cannot be found in the archive will be sought in
<filename>pg_xlog/</>; this allows use of recent un-archived segments.
However, segments that are available from the archive will be used in
preference to files in <filename>pg_xlog/</>. The system will not
overwrite the existing contents of <filename>pg_xlog/</> when retrieving
archived files.
@@ -1034,13 +1031,13 @@
<para>
Normally, recovery will proceed through all available WAL segments,
thereby restoring the database to the current point in time (or as
close as possible given the available WAL segments). Therefore, a normal
recovery will end with a <quote>file not found</> message, the exact text
of the error message depending upon your choice of
<varname>restore_command</>. You may also see an error message
at the start of recovery for a file named something like
<filename>00000001.history</>. This is also normal and does not
indicate a problem in simple recovery situations; see
<xref linkend="backup-timelines"> for discussion.
</para>
@@ -1058,15 +1055,15 @@
<para>
The stop point must be after the ending time of the base backup, i.e.,
the end time of <function>pg_stop_backup</>. You cannot use a base backup
to recover to a time when that backup was in progress. (To
recover to such a time, you must go back to your previous base backup
and roll forward from there.)
</para>
</note>
<para>
If recovery finds corrupted WAL data, recovery will
halt at that point and the server will not start. In such a case the
recovery process could be re-run from the beginning, specifying a
<quote>recovery target</> before the point of corruption so that recovery
can complete normally.
@@ -1085,7 +1082,9 @@
<para>
These settings can only be made in the <filename>recovery.conf</>
file, and apply only for the duration of the recovery. (A sample file,
<filename>share/recovery.conf.sample</>, exists in the installation's
<filename>share/</> directory.) They must be
reset for any subsequent recovery you wish to perform. They cannot be
changed once recovery has begun.
The parameters for streaming replication are described in <xref
@@ -1103,22 +1102,22 @@
but optional for streaming replication.
Any <literal>%f</> in the string is
replaced by the name of the file to retrieve from the archive,
and any <literal>%p</> is replaced by the copy destination path name
on the server.
(The path name is relative to the current working directory,
i.e., the cluster's data directory.)
Any <literal>%r</> is replaced by the name of the file containing the
last valid restart point. That is the earliest file that must be kept
to allow a restore to be restartable, so this information can be used
to truncate the archive to just the minimum required to support
restarting from the current restore. <literal>%r</> is typically only
used by warm-standby configurations
(see <xref linkend="warm-standby">).
Write <literal>%%</> to embed an actual <literal>%</> character.
</para>
<para>
It is important for the command to return a zero exit status
only if it succeeds. The command <emphasis>will</> be asked for file
names that are not present in the archive; it must return nonzero
when so asked. Examples:
@@ -1221,7 +1220,7 @@
<para> <para>
Specifies recovering into a particular timeline. The default is Specifies recovering into a particular timeline. The default is
to recover along the same timeline that was current when the to recover along the same timeline that was current when the
base backup was taken. You would only need to set this parameter base backup was taken. You only need to set this parameter
in complex re-recovery situations, where you need to return to in complex re-recovery situations, where you need to return to
a state that itself was reached after a point-in-time recovery. a state that itself was reached after a point-in-time recovery.
See <xref linkend="backup-timelines"> for discussion. See <xref linkend="backup-timelines"> for discussion.
...@@ -1245,28 +1244,28 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows ...@@ -1245,28 +1244,28 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
<para> <para>
The ability to restore the database to a previous point in time creates The ability to restore the database to a previous point in time creates
some complexities that are akin to science-fiction stories about time some complexities that are akin to science-fiction stories about time
travel and parallel universes. In the original history of the database, travel and parallel universes. For example, in the original history of the database,
perhaps you dropped a critical table at 5:15PM on Tuesday evening, but suppose you dropped a critical table at 5:15PM on Tuesday evening, but
didn't realize your mistake until Wednesday noon. didn't realize your mistake until Wednesday noon.
Unfazed, you get out your backup, restore to the point-in-time 5:14PM Unfazed, you get out your backup, restore to the point-in-time 5:14PM
Tuesday evening, and are up and running. In <emphasis>this</> history of Tuesday evening, and are up and running. In <emphasis>this</> history of
the database universe, you never dropped the table at all. But suppose the database universe, you never dropped the table. But suppose
you later realize this wasn't such a great idea after all, and would like you later realize this wasn't such a great idea, and would like
to return to sometime Wednesday morning in the original history. to return to sometime Wednesday morning in the original history.
You won't be able You won't be able
to if, while your database was up-and-running, it overwrote some of the to if, while your database was up-and-running, it overwrote some of the
sequence of WAL segment files that led up to the time you now wish you WAL segment files that led up to the time you now wish you
could get back to. So you really want to distinguish the series of could get back to. To avoid this, you need to distinguish the series of
WAL records generated after you've done a point-in-time recovery from WAL records generated after you've done a point-in-time recovery from
those that were generated in the original database history. those that were generated in the original database history.
</para> </para>
<para> <para>
To deal with these problems, <productname>PostgreSQL</> has a notion To deal with this problem, <productname>PostgreSQL</> has a notion
of <firstterm>timelines</>. Whenever an archive recovery is completed, of <firstterm>timelines</>. Whenever an archive recovery completes,
a new timeline is created to identify the series of WAL records a new timeline is created to identify the series of WAL records
generated after that recovery. The timeline generated after that recovery. The timeline
ID number is part of WAL segment file names, and so a new timeline does ID number is part of WAL segment file names, so a new timeline does
not overwrite the WAL data generated by previous timelines. It is not overwrite the WAL data generated by previous timelines. It is
in fact possible to archive many different timelines. While that might in fact possible to archive many different timelines. While that might
seem like a useless feature, it's often a lifesaver. Consider the seem like a useless feature, it's often a lifesaver. Consider the
...@@ -1275,11 +1274,11 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows ...@@ -1275,11 +1274,11 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
until you find the best place to branch off from the old history. Without until you find the best place to branch off from the old history. Without
timelines this process would soon generate an unmanageable mess. With timelines this process would soon generate an unmanageable mess. With
timelines, you can recover to <emphasis>any</> prior state, including timelines, you can recover to <emphasis>any</> prior state, including
states in timeline branches that you later abandoned. states in timeline branches that you abandoned earlier.
</para> </para>
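<para>
 For illustration, in a hypothetical segment file name such as
<programlisting>
00000002000000A10000003F
</programlisting>
 the first eight hexadecimal digits (<literal>00000002</>) are the
 timeline ID, so segments written on different timelines have
 distinct names.
</para>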
<para> <para>
Each time a new timeline is created, <productname>PostgreSQL</> creates Every time a new timeline is created, <productname>PostgreSQL</> creates
a <quote>timeline history</> file that shows which timeline it branched a <quote>timeline history</> file that shows which timeline it branched
off from and when. These history files are necessary to allow the system off from and when. These history files are necessary to allow the system
to pick the right WAL segment files when recovering from an archive that to pick the right WAL segment files when recovering from an archive that
...@@ -1287,15 +1286,15 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows ...@@ -1287,15 +1286,15 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
archive area just like WAL segment files. The history files are just archive area just like WAL segment files. The history files are just
small text files, so it's cheap and appropriate to keep them around small text files, so it's cheap and appropriate to keep them around
indefinitely (unlike the segment files which are large). You can, if indefinitely (unlike the segment files which are large). You can, if
you like, add comments to a history file to make your own notes about you like, add comments to a history file to record your own notes about
how and why this particular timeline came to be. Such comments will be how and why this particular timeline was created. Such comments will be
especially valuable when you have a thicket of different timelines as especially valuable when you have a thicket of different timelines as
a result of experimentation. a result of experimentation.
</para> </para>
<para> <para>
The default behavior of recovery is to recover along the same timeline The default behavior of recovery is to recover along the same timeline
that was current when the base backup was taken. If you want to recover that was current when the base backup was taken. If you wish to recover
into some child timeline (that is, you want to return to some state that into some child timeline (that is, you want to return to some state that
was itself generated after a recovery attempt), you need to specify the was itself generated after a recovery attempt), you need to specify the
target timeline ID in <filename>recovery.conf</>. You cannot recover into target timeline ID in <filename>recovery.conf</>. You cannot recover into
...@@ -1319,13 +1318,13 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows ...@@ -1319,13 +1318,13 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
for point-in-time recovery, yet are typically much faster to backup and for point-in-time recovery, yet are typically much faster to backup and
restore than <application>pg_dump</> dumps. (They are also much larger restore than <application>pg_dump</> dumps. (They are also much larger
than <application>pg_dump</> dumps, so in some cases the speed advantage than <application>pg_dump</> dumps, so in some cases the speed advantage
could be negated.) might be negated.)
</para> </para>
<para> <para>
To prepare for standalone hot backups, set <varname>archive_mode</> to To prepare for standalone hot backups, set <varname>archive_mode</> to
<literal>on</>, and set up an <varname>archive_command</> that performs <literal>on</>, and set up an <varname>archive_command</> that performs
archiving only when a <quote>switch file</> exists. For example: archiving only when a <emphasis>switch file</> exists. For example:
<programlisting> <programlisting>
archive_command = 'test ! -f /var/lib/pgsql/backup_in_progress || cp -i %p /var/lib/pgsql/archive/%f &lt; /dev/null' archive_command = 'test ! -f /var/lib/pgsql/backup_in_progress || cp -i %p /var/lib/pgsql/archive/%f &lt; /dev/null'
</programlisting> </programlisting>
...@@ -1538,7 +1537,7 @@ archive_command = 'local_backup_script.sh' ...@@ -1538,7 +1537,7 @@ archive_command = 'local_backup_script.sh'
in continuous archiving mode, while each standby server operates in in continuous archiving mode, while each standby server operates in
continuous recovery mode, reading the WAL files from the primary. No continuous recovery mode, reading the WAL files from the primary. No
changes to the database tables are required to enable this capability, changes to the database tables are required to enable this capability,
so it offers low administration overhead in comparison with some other so it offers low administration overhead compared to some other
replication approaches. This configuration also has relatively low replication approaches. This configuration also has relatively low
performance impact on the primary server. performance impact on the primary server.
</para> </para>
...@@ -1549,7 +1548,7 @@ archive_command = 'local_backup_script.sh' ...@@ -1549,7 +1548,7 @@ archive_command = 'local_backup_script.sh'
implements file-based log shipping, which means that WAL records are implements file-based log shipping, which means that WAL records are
transferred one file (WAL segment) at a time. WAL files (16MB) can be transferred one file (WAL segment) at a time. WAL files (16MB) can be
shipped easily and cheaply over any distance, whether it be to an shipped easily and cheaply over any distance, whether it be to an
adjacent system, another system on the same site or another system on adjacent system, another system at the same site, or another system on
the far side of the globe. The bandwidth required for this technique the far side of the globe. The bandwidth required for this technique
varies according to the transaction rate of the primary server. varies according to the transaction rate of the primary server.
Record-based log shipping is also possible with custom-developed Record-based log shipping is also possible with custom-developed
...@@ -1563,10 +1562,10 @@ archive_command = 'local_backup_script.sh' ...@@ -1563,10 +1562,10 @@ archive_command = 'local_backup_script.sh'
failure: transactions not yet shipped will be lost. The length of the failure: transactions not yet shipped will be lost. The length of the
window of data loss can be limited by use of the window of data loss can be limited by use of the
<varname>archive_timeout</varname> parameter, which can be set as low <varname>archive_timeout</varname> parameter, which can be set as low
as a few seconds if required. However such low settings will as a few seconds if required. However, such a low setting will
substantially increase the bandwidth requirements for file shipping. substantially increase the bandwidth required for file shipping.
If you need a window of less than a minute or so, it's probably better If you need a window of less than a minute or so, it's probably better
to look into record-based log shipping. to consider record-based log shipping.
</para> </para>
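<para>
 For example, to bound the potential loss window to roughly one minute
 you might set, in <filename>postgresql.conf</>:
<programlisting>
archive_timeout = 60            # force a segment switch every 60 seconds
</programlisting>
</para>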
<para> <para>
...@@ -1587,12 +1586,12 @@ archive_command = 'local_backup_script.sh' ...@@ -1587,12 +1586,12 @@ archive_command = 'local_backup_script.sh'
It is usually wise to create the primary and standby servers It is usually wise to create the primary and standby servers
so that they are as similar as possible, at least from the so that they are as similar as possible, at least from the
perspective of the database server. In particular, the path names perspective of the database server. In particular, the path names
associated with tablespaces will be passed across as-is, so both associated with tablespaces will be passed across unmodified, so both
primary and standby servers must have the same mount paths for primary and standby servers must have the same mount paths for
tablespaces if that feature is used. Keep in mind that if tablespaces if that feature is used. Keep in mind that if
<xref linkend="sql-createtablespace" endterm="sql-createtablespace-title"> <xref linkend="sql-createtablespace" endterm="sql-createtablespace-title">
is executed on the primary, any new mount point needed for it must is executed on the primary, any new mount point needed for it must
be created on both the primary and all standby servers before the command be created on the primary and all standby servers before the command
is executed. Hardware need not be exactly the same, but experience shows is executed. Hardware need not be exactly the same, but experience shows
that maintaining two identical systems is easier than maintaining two that maintaining two identical systems is easier than maintaining two
dissimilar ones over the lifetime of the application and system. dissimilar ones over the lifetime of the application and system.
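 For example (the path is hypothetical), the directory must already
 exist on the primary and on every standby before running:
<programlisting>
CREATE TABLESPACE fastspace LOCATION '/ssd1/postgresql/fastspace';
</programlisting>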
...@@ -1603,7 +1602,7 @@ archive_command = 'local_backup_script.sh' ...@@ -1603,7 +1602,7 @@ archive_command = 'local_backup_script.sh'
<para> <para>
In general, log shipping between servers running different major In general, log shipping between servers running different major
<productname>PostgreSQL</> release <productname>PostgreSQL</> release
levels will not be possible. It is the policy of the PostgreSQL Global levels is not possible. It is the policy of the PostgreSQL Global
Development Group not to make changes to disk formats during minor release Development Group not to make changes to disk formats during minor release
upgrades, so it is likely that running different minor release levels upgrades, so it is likely that running different minor release levels
on primary and standby servers will work successfully. However, no on primary and standby servers will work successfully. However, no
...@@ -1617,13 +1616,13 @@ archive_command = 'local_backup_script.sh' ...@@ -1617,13 +1616,13 @@ archive_command = 'local_backup_script.sh'
<para> <para>
There is no special mode required to enable a standby server. The There is no special mode required to enable a standby server. The
operations that occur on both primary and standby servers are entirely operations that occur on both primary and standby servers are
normal continuous archiving and recovery tasks. The only point of normal continuous archiving and recovery tasks. The only point of
contact between the two database servers is the archive of WAL files contact between the two database servers is the archive of WAL files
that both share: primary writing to the archive, standby reading from that both share: primary writing to the archive, standby reading from
the archive. Care must be taken to ensure that WAL archives for separate the archive. Care must be taken to ensure that WAL archives from separate
primary servers do not become mixed together or confused. The archive primary servers do not become mixed together or confused. The archive
need not be large, if it is only required for the standby operation. need not be large if it is only required for standby operation.
</para> </para>
<para> <para>
...@@ -1665,31 +1664,31 @@ if (!triggered) ...@@ -1665,31 +1664,31 @@ if (!triggered)
as a <filename>contrib</> module named <application>pg_standby</>. It as a <filename>contrib</> module named <application>pg_standby</>. It
should be used as a reference on how to correctly implement the logic should be used as a reference on how to correctly implement the logic
described above. It can also be extended as needed to support specific described above. It can also be extended as needed to support specific
configurations or environments. configurations and environments.
</para> </para>
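<para>
 As an illustration only (the paths, options, and trigger-file location
 are site-specific; consult the <application>pg_standby</> documentation),
 a standby's <filename>recovery.conf</> might contain:
<programlisting>
restore_command = 'pg_standby -d -s 2 -t /tmp/pgsql.trigger.5442 /path/to/archive %f %p %r 2&gt;&gt;standby.log'
</programlisting>
</para>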
<para> <para>
<productname>PostgreSQL</productname> does not provide the system <productname>PostgreSQL</productname> does not provide the system
software required to identify a failure on the primary and notify software required to identify a failure on the primary and notify
the standby system and then the standby database server. Many such the standby database server. Many such tools exist and are well
tools exist and are well integrated with other aspects required for integrated with the operating system facilities required for
successful failover, such as IP address migration. successful failover, such as IP address migration.
</para> </para>
<para> <para>
The means for triggering failover is an important part of planning and The method for triggering failover is an important part of planning
design. The <varname>restore_command</> is executed in full once and design. One potential option is the <varname>restore_command</>
for each WAL file. The process running the <varname>restore_command</> command. It is executed once for each WAL file, but the process
is therefore created and dies for each file, so there is no daemon running the <varname>restore_command</> is created and dies for
or server process and so we cannot use signals and a signal each file, so there is no daemon or server process, and we cannot
handler. A more permanent notification is required to trigger the use signals or a signal handler. Therefore, the
failover. It is possible to use a simple timeout facility, <varname>restore_command</> is not suitable to trigger failover.
especially if used in conjunction with a known It is possible to use a simple timeout facility, especially if
<varname>archive_timeout</> setting on the primary. This is used in conjunction with a known <varname>archive_timeout</>
somewhat error prone since a network problem or busy primary server might setting on the primary. However, this is somewhat error prone
be sufficient to initiate failover. A notification mechanism such since a network problem or busy primary server might be sufficient
as the explicit creation of a trigger file is less error prone, if to initiate failover. A notification mechanism such as the explicit
this can be arranged. creation of a trigger file is ideal, if this can be arranged.
</para> </para>
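<para>
 For example, if the restore script has been written to watch a
 trigger file at an agreed-upon (hypothetical) path, failover can be
 initiated simply by creating that file:
<programlisting>
touch /tmp/pgsql.trigger.5442
</programlisting>
</para>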
<para> <para>
...@@ -1697,7 +1696,7 @@ if (!triggered) ...@@ -1697,7 +1696,7 @@ if (!triggered)
option of the <varname>restore_command</>. This option specifies the option of the <varname>restore_command</>. This option specifies the
last archive file name that needs to be kept to allow the recovery to last archive file name that needs to be kept to allow the recovery to
restart correctly. This can be used to truncate the archive once restart correctly. This can be used to truncate the archive once
files are no longer required, if the archive is writable from the files are no longer required, assuming the archive is writable from the
standby server. standby server.
</para> </para>
</sect2> </sect2>
...@@ -1711,15 +1710,15 @@ if (!triggered) ...@@ -1711,15 +1710,15 @@ if (!triggered)
<orderedlist> <orderedlist>
<listitem> <listitem>
<para> <para>
Set up primary and standby systems as near identically as Set up primary and standby systems to be as nearly identical as
possible, including two identical copies of possible, including two identical copies of
<productname>PostgreSQL</> at the same release level. <productname>PostgreSQL</> at the same release level.
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
Set up continuous archiving from the primary to a WAL archive located Set up continuous archiving from the primary to a WAL archive
in a directory on the standby server. Ensure that directory on the standby server. Ensure that
<xref linkend="guc-archive-mode">, <xref linkend="guc-archive-mode">,
<xref linkend="guc-archive-command"> and <xref linkend="guc-archive-command"> and
<xref linkend="guc-archive-timeout"> <xref linkend="guc-archive-timeout">
...@@ -1777,9 +1776,10 @@ if (!triggered) ...@@ -1777,9 +1776,10 @@ if (!triggered)
</para> </para>
<para> <para>
If the primary server fails and then immediately restarts, you must have If the primary server fails and the standby server becomes the
a mechanism for informing it that it is no longer the primary. This is new primary, and then the old primary restarts, you must have
sometimes known as STONITH (Shoot the Other Node In The Head), which is a mechanism for informing the old primary that it is no longer the primary. This is
sometimes known as STONITH (Shoot The Other Node In The Head), which is
necessary to avoid situations where both systems think they are the necessary to avoid situations where both systems think they are the
primary, which will lead to confusion and ultimately data loss. primary, which will lead to confusion and ultimately data loss.
</para> </para>
...@@ -1803,7 +1803,7 @@ if (!triggered) ...@@ -1803,7 +1803,7 @@ if (!triggered)
either on the former primary system when it comes up, or on a third, either on the former primary system when it comes up, or on a third,
possibly new, system. Once complete the primary and standby can be possibly new, system. Once complete the primary and standby can be
considered to have switched roles. Some people choose to use a third considered to have switched roles. Some people choose to use a third
server to provide backup to the new primary until the new standby server to provide backup for the new primary until the new standby
server is recreated, server is recreated,
though clearly this complicates the system configuration and though clearly this complicates the system configuration and
operational processes. operational processes.
...@@ -1834,15 +1834,15 @@ if (!triggered) ...@@ -1834,15 +1834,15 @@ if (!triggered)
to find out the file name and the exact byte offset within it of to find out the file name and the exact byte offset within it of
the current end of WAL. It can then access the WAL file directly the current end of WAL. It can then access the WAL file directly
and copy the data from the last known end of WAL through the current end and copy the data from the last known end of WAL through the current end
over to the standby server(s). With this approach, the window for data over to the standby servers. With this approach, the window for data
loss is the polling cycle time of the copying program, which can be very loss is the polling cycle time of the copying program, which can be very
small, but there is no wasted bandwidth from forcing partially-used small, and there is no wasted bandwidth from forcing partially-used
segment files to be archived. Note that the standby servers' segment files to be archived. Note that the standby servers'
<varname>restore_command</> scripts still deal in whole WAL files, <varname>restore_command</> scripts can only deal with whole WAL files,
so the incrementally copied data is not ordinarily made available to so the incrementally copied data is not ordinarily made available to
the standby servers. It is of use only when the primary dies &mdash; the standby servers. It is of use only when the primary dies &mdash;
then the last partial WAL file is fed to the standby before allowing then the last partial WAL file is fed to the standby before allowing
it to come up. So correct implementation of this process requires it to come up. The correct implementation of this process requires
cooperation of the <varname>restore_command</> script with the data cooperation of the <varname>restore_command</> script with the data
copying program. copying program.
</para> </para>
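<para>
 For example, the copying program could poll the primary with a query
 such as (a sketch; scheduling and error handling are up to the program):
<programlisting>
SELECT * FROM pg_xlogfile_name_offset(pg_current_xlog_location());
</programlisting>
 to learn the current WAL file name and the byte offset within it.
</para>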
...@@ -2090,10 +2090,11 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' ...@@ -2090,10 +2090,11 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
</para> </para>
<para> <para>
If we take a backup of the standby server's data directory while it is processing If we take a file system backup of the standby server's data
logs shipped from the primary, we will be able to reload that data and directory while it is processing
logs shipped from the primary, we will be able to reload that backup and
restart the standby's recovery process from the last restart point. restart the standby's recovery process from the last restart point.
We no longer need to keep WAL files from before the restart point. We no longer need to keep WAL files from before the standby's restart point.
If we need to recover, it will be faster to recover from the incrementally If we need to recover, it will be faster to recover from the incrementally
updated backup than from the original base backup. updated backup than from the original base backup.
</para> </para>
...@@ -2106,7 +2107,8 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' ...@@ -2106,7 +2107,8 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
backup. You can do this by running <application>pg_controldata</> backup. You can do this by running <application>pg_controldata</>
on the standby server to inspect the control file and determine the on the standby server to inspect the control file and determine the
current checkpoint WAL location, or by using the current checkpoint WAL location, or by using the
<varname>log_checkpoints</> option to print values to the server log. <varname>log_checkpoints</> option to print values to the standby's
server log.
</para> </para>
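<para>
 For example (the data directory path is illustrative):
<programlisting>
pg_controldata /usr/local/pgsql/data
</programlisting>
 prints the control-file contents, including the latest checkpoint
 information, while adding
<programlisting>
log_checkpoints = on
</programlisting>
 to the standby's <filename>postgresql.conf</> records checkpoint and
 restart-point activity in its server log.
</para>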
</sect2> </sect2>
</sect1> </sect1>
...@@ -2892,27 +2894,35 @@ LOG: database system is ready to accept read only connections ...@@ -2892,27 +2894,35 @@ LOG: database system is ready to accept read only connections
</para> </para>
<para> <para>
As a general rule, the internal data storage format is subject to <productname>PostgreSQL</> major versions are represented by the
change between major releases of <productname>PostgreSQL</> (where first two digit groups of the version number, e.g. 8.4.
the number after the first dot changes). This does not apply to <productname>PostgreSQL</> minor versions are represented by the
different minor releases under the same major release (where the third group of version digits, i.e., 8.4.2 is the second minor
number after the second dot changes); these always have compatible release of 8.4. Minor releases never change the internal storage
storage formats. For example, releases 8.1.1, 8.2.3, and 8.3 are format and are always compatible with earlier and later minor
not compatible, whereas 8.2.3 and 8.2.4 are. When you update with 8.4.0, 8.4.1, and 8.4.6. To update between compatible versions,
between compatible versions, you can simply replace the executables with 8.4, 8.4.1 and 8.4.6. To update between compatible versions,
and reuse the data directory on disk. Otherwise you need to back you simply replace the executables while the server is down and
up your data and restore it on the new server. This has to be done restart the server. The data directory remains unchanged &mdash;
using <application>pg_dump</>; file system level backup methods minor upgrades are that simple.
obviously won't work. There are checks in place that prevent you </para>
from using a data directory with an incompatible version of
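<para>
 A minor-version update might therefore look like this (the
 installation step depends on how <productname>PostgreSQL</> was
 packaged):
<programlisting>
pg_ctl stop                 # assumes PGDATA points at the data directory
# install the new minor-release executables over the old ones
pg_ctl start
</programlisting>
</para>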
<productname>PostgreSQL</productname>, so no great harm can be done by <para>
trying to start the wrong server version on a data directory. For <emphasis>major</> releases of <productname>PostgreSQL</>, the
internal data storage format is subject to change. When migrating
data from one major version of <productname>PostgreSQL</> to another,
you need to back up your data and restore it on the new server.
This must be done using <application>pg_dump</>; file system level
backup methods will not work. There are checks in place that prevent
you from using a data directory with an incompatible version of
<productname>PostgreSQL</productname>, so no great harm can be done
by trying to start the wrong server version on a data directory.
</para> </para>
<para> <para>
It is recommended that you use the <application>pg_dump</> and It is recommended that you use the <application>pg_dump</> and
<application>pg_dumpall</> programs from the newer version of <application>pg_dumpall</> programs from the newer version of
<productname>PostgreSQL</>, to take advantage of any enhancements <productname>PostgreSQL</>, to take advantage of enhancements
that might have been made in these programs. Current releases of the that might have been made in these programs. Current releases of the
dump programs can read data from any server version back to 7.0. dump programs can read data from any server version back to 7.0.
</para> </para>
...@@ -2926,9 +2936,9 @@ LOG: database system is ready to accept read only connections ...@@ -2926,9 +2936,9 @@ LOG: database system is ready to accept read only connections
pg_dumpall -p 5432 | psql -d postgres -p 6543 pg_dumpall -p 5432 | psql -d postgres -p 6543
</programlisting> </programlisting>
to transfer your data. Or use an intermediate file if you want. to transfer your data. Or use an intermediate file if you wish.
Then you can shut down the old server and start the new server at Then you can shut down the old server and start the new server using
the port the old one was running at. You should make sure that the the port the old one was running on. You should make sure that the
old database is not updated after you begin to run old database is not updated after you begin to run
<application>pg_dumpall</>, otherwise you will lose that data. See <xref <application>pg_dumpall</>, otherwise you will lose that data. See <xref
linkend="client-authentication"> for information on how to prohibit linkend="client-authentication"> for information on how to prohibit
...@@ -2949,13 +2959,14 @@ pg_dumpall -p 5432 | psql -d postgres -p 6543 ...@@ -2949,13 +2959,14 @@ pg_dumpall -p 5432 | psql -d postgres -p 6543
<para> <para>
If you cannot or do not want to run two servers in parallel, you can If you cannot or do not want to run two servers in parallel, you can
do the backup step before installing the new version, bring down do the backup step before installing the new version, bring down
the server, move the old version out of the way, install the new the old server, move the old version out of the way, install the new
version, start the new server, and restore the data. For example: version, start the new server, and restore the data. For example:
<programlisting> <programlisting>
pg_dumpall &gt; backup pg_dumpall &gt; backup
pg_ctl stop pg_ctl stop
mv /usr/local/pgsql /usr/local/pgsql.old mv /usr/local/pgsql /usr/local/pgsql.old
# Rename any tablespace directories as well
cd ~/postgresql-&version; cd ~/postgresql-&version;
gmake install gmake install
initdb -D /usr/local/pgsql/data initdb -D /usr/local/pgsql/data
...@@ -2976,7 +2987,7 @@ psql -f backup postgres ...@@ -2976,7 +2987,7 @@ psql -f backup postgres
This is usually not a big problem, but if you plan on using two This is usually not a big problem, but if you plan on using two
installations in parallel for a while you should assign them installations in parallel for a while you should assign them
different installation directories at build time. (This problem different installation directories at build time. (This problem
is rectified in <productname>PostgreSQL</> 8.0 and later, so long is rectified in <productname>PostgreSQL</> version 8.0 and later, so long
as you move all subdirectories containing installed files together; as you move all subdirectories containing installed files together;
for example if <filename>/usr/local/postgres/bin/</> goes to for example if <filename>/usr/local/postgres/bin/</> goes to
<filename>/usr/local/postgres.old/bin/</>, then <filename>/usr/local/postgres.old/bin/</>, then
......
<!-- $PostgreSQL: pgsql/doc/src/sgml/catalogs.sgml,v 2.219 2010/01/22 16:40:18 rhaas Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/catalogs.sgml,v 2.220 2010/02/03 17:25:05 momjian Exp $ -->
<!-- <!--
Documentation of the system catalogs, directed toward PostgreSQL developers Documentation of the system catalogs, directed toward PostgreSQL developers
--> -->
...@@ -5569,7 +5569,9 @@ ...@@ -5569,7 +5569,9 @@
inserted before a datum of this type so that it begins on the inserted before a datum of this type so that it begins on the
specified boundary. The alignment reference is the beginning specified boundary. The alignment reference is the beginning
of the first datum in the sequence. of the first datum in the sequence.
</para><para> </para>
<para>
Possible values are: Possible values are:
<itemizedlist> <itemizedlist>
<listitem> <listitem>
......
<!-- $PostgreSQL: pgsql/doc/src/sgml/charset.sgml,v 2.95 2009/05/18 08:59:28 petere Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/charset.sgml,v 2.96 2010/02/03 17:25:05 momjian Exp $ -->
<chapter id="charset"> <chapter id="charset">
<title>Localization</> <title>Localization</>
...@@ -6,8 +6,8 @@ ...@@ -6,8 +6,8 @@
<para> <para>
This chapter describes the available localization features from the This chapter describes the available localization features from the
point of view of the administrator. point of view of the administrator.
<productname>PostgreSQL</productname> supports localization with <productname>PostgreSQL</productname> supports two localization
two approaches: facilities:
<itemizedlist> <itemizedlist>
<listitem> <listitem>
...@@ -67,10 +67,10 @@ initdb --locale=sv_SE ...@@ -67,10 +67,10 @@ initdb --locale=sv_SE
(<literal>sv</>) as spoken (<literal>sv</>) as spoken
in Sweden (<literal>SE</>). Other possibilities might be in Sweden (<literal>SE</>). Other possibilities might be
<literal>en_US</> (U.S. English) and <literal>fr_CA</> (French <literal>en_US</> (U.S. English) and <literal>fr_CA</> (French
Canadian). If more than one character set can be useful for a Canadian). If more than one character set can be used for a
locale then the specifications look like this: locale then the specifications look like this:
<literal>cs_CZ.ISO8859-2</>. What locales are available under what <literal>cs_CZ.ISO8859-2</>. What locales are available on your
names on your system depends on what was provided by the operating system under what names depends on what was provided by the operating
system vendor and what was installed. On most Unix systems, the command system vendor and what was installed. On most Unix systems, the command
<literal>locale -a</> will provide a list of available locales. <literal>locale -a</> will provide a list of available locales.
Windows uses more verbose locale names, such as <literal>German_Germany</> Windows uses more verbose locale names, such as <literal>German_Germany</>
...@@ -80,8 +80,8 @@ initdb --locale=sv_SE ...@@ -80,8 +80,8 @@ initdb --locale=sv_SE
<para> <para>
Occasionally it is useful to mix rules from several locales, e.g., Occasionally it is useful to mix rules from several locales, e.g.,
use English collation rules but Spanish messages. To support that, a use English collation rules but Spanish messages. To support that, a
set of locale subcategories exist that control only a certain set of locale subcategories exist that control only certain
aspect of the localization rules: aspects of the localization rules:
<informaltable> <informaltable>
<tgroup cols="2"> <tgroup cols="2">
...@@ -127,13 +127,13 @@ initdb --locale=sv_SE ...@@ -127,13 +127,13 @@ initdb --locale=sv_SE
</para> </para>
<para> <para>
The nature of some locale categories is that their value has to be Some locale categories must have their values
fixed when the database is created. You can use different settings fixed when the database is created. You can use different settings
for different databases, but once a database is created, you cannot for different databases, but once a database is created, you cannot
change them for that database anymore. <literal>LC_COLLATE</literal> change them for that database anymore. <literal>LC_COLLATE</literal>
and <literal>LC_CTYPE</literal> are these categories. They affect and <literal>LC_CTYPE</literal> are two such categories. They affect
the sort order of indexes, so they must be kept fixed, or indexes on the sort order of indexes, so they must be kept fixed, or indexes on
text columns will become corrupt. The default values for these text columns would become corrupt. The default values for these
categories are determined when <command>initdb</command> is run, and categories are determined when <command>initdb</command> is run, and
those values are used when new databases are created, unless those values are used when new databases are created, unless
specified otherwise in the <command>CREATE DATABASE</command> command. specified otherwise in the <command>CREATE DATABASE</command> command.
...@@ -146,7 +146,7 @@ initdb --locale=sv_SE ...@@ -146,7 +146,7 @@ initdb --locale=sv_SE
linkend="runtime-config-client-format"> for details). The values linkend="runtime-config-client-format"> for details). The values
that are chosen by <command>initdb</command> are actually only written that are chosen by <command>initdb</command> are actually only written
into the configuration file <filename>postgresql.conf</filename> to into the configuration file <filename>postgresql.conf</filename> to
serve as defaults when the server is started. If you delete these serve as defaults when the server is started. If you disable these
assignments from <filename>postgresql.conf</filename> then the assignments from <filename>postgresql.conf</filename> then the
server will inherit the settings from its execution environment. server will inherit the settings from its execution environment.
</para> </para>
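<para>
 As an illustration (the values shown are examples only), the
 assignments written by <command>initdb</command> into
 <filename>postgresql.conf</filename> look like:
<programlisting>
lc_messages = 'sv_SE.UTF-8'
lc_monetary = 'sv_SE.UTF-8'
lc_numeric = 'sv_SE.UTF-8'
lc_time = 'sv_SE.UTF-8'
</programlisting>
 Commenting these lines out causes the server to take the
 corresponding settings from its execution environment instead.
</para>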
...@@ -178,7 +178,7 @@ initdb --locale=sv_SE ...@@ -178,7 +178,7 @@ initdb --locale=sv_SE
settings for the purpose of setting the language of messages. If settings for the purpose of setting the language of messages. If
in doubt, please refer to the documentation of your operating in doubt, please refer to the documentation of your operating
system, in particular the documentation about system, in particular the documentation about
<application>gettext</>, for more information. <application>gettext</>.
</para> </para>
</note> </note>
...@@ -320,8 +320,9 @@ initdb --locale=sv_SE ...@@ -320,8 +320,9 @@ initdb --locale=sv_SE
<para> <para>
An important restriction, however, is that each database's character set An important restriction, however, is that each database's character set
must be compatible with the database's <envar>LC_CTYPE</> and must be compatible with the database's <envar>LC_CTYPE</> (character
<envar>LC_COLLATE</> locale settings. For <literal>C</> or classification) and <envar>LC_COLLATE</> (string sort order) locale
settings. For <literal>C</> or
<literal>POSIX</> locale, any character set is allowed, but for other <literal>POSIX</> locale, any character set is allowed, but for other
locales there is only one character set that will work correctly. locales there is only one character set that will work correctly.
(On Windows, however, UTF-8 encoding can be used with any locale.) (On Windows, however, UTF-8 encoding can be used with any locale.)
...@@ -543,7 +544,7 @@ initdb --locale=sv_SE ...@@ -543,7 +544,7 @@ initdb --locale=sv_SE
<entry>LATIN1 with Euro and accents</entry> <entry>LATIN1 with Euro and accents</entry>
<entry>Yes</entry> <entry>Yes</entry>
<entry>1</entry> <entry>1</entry>
<entry>ISO885915</entry> <entry><literal>ISO885915</></entry>
</row> </row>
<row> <row>
<entry><literal>LATIN10</literal></entry> <entry><literal>LATIN10</literal></entry>
...@@ -694,7 +695,7 @@ initdb --locale=sv_SE ...@@ -694,7 +695,7 @@ initdb --locale=sv_SE
</table> </table>
<para> <para>
Not all <acronym>API</>s support all the listed character sets. For example, the Not all client <acronym>API</>s support all the listed character sets. For example, the
<productname>PostgreSQL</> <productname>PostgreSQL</>
JDBC driver does not support <literal>MULE_INTERNAL</>, <literal>LATIN6</>, JDBC driver does not support <literal>MULE_INTERNAL</>, <literal>LATIN6</>,
<literal>LATIN8</>, and <literal>LATIN10</>. <literal>LATIN8</>, and <literal>LATIN10</>.
...@@ -710,7 +711,7 @@ initdb --locale=sv_SE ...@@ -710,7 +711,7 @@ initdb --locale=sv_SE
much a declaration that a specific encoding is in use, as a declaration much a declaration that a specific encoding is in use, as a declaration
of ignorance about the encoding. In most cases, if you are of ignorance about the encoding. In most cases, if you are
working with any non-ASCII data, it is unwise to use the working with any non-ASCII data, it is unwise to use the
<literal>SQL_ASCII</> setting, because <literal>SQL_ASCII</> setting because
<productname>PostgreSQL</productname> will be unable to help you by <productname>PostgreSQL</productname> will be unable to help you by
converting or validating non-ASCII characters. converting or validating non-ASCII characters.
</para> </para>
...@@ -720,17 +721,17 @@ initdb --locale=sv_SE ...@@ -720,17 +721,17 @@ initdb --locale=sv_SE
<title>Setting the Character Set</title> <title>Setting the Character Set</title>
<para> <para>
<command>initdb</> defines the default character set <command>initdb</> defines the default character set (encoding)
for a <productname>PostgreSQL</productname> cluster. For example, for a <productname>PostgreSQL</productname> cluster. For example,
<screen> <screen>
initdb -E EUC_JP initdb -E EUC_JP
</screen> </screen>
sets the default character set (encoding) to sets the default character set to
<literal>EUC_JP</literal> (Extended Unix Code for Japanese). You <literal>EUC_JP</literal> (Extended Unix Code for Japanese). You
can use <option>--encoding</option> instead of can use <option>--encoding</option> instead of
<option>-E</option> if you prefer to type longer option strings. <option>-E</option> if you prefer longer option strings.
If no <option>-E</> or <option>--encoding</option> option is If no <option>-E</> or <option>--encoding</option> option is
given, <command>initdb</> attempts to determine the appropriate given, <command>initdb</> attempts to determine the appropriate
encoding to use based on the specified or default locale. encoding to use based on the specified or default locale.
...@@ -762,8 +763,8 @@ CREATE DATABASE korean WITH ENCODING 'EUC_KR' LC_COLLATE='ko_KR.euckr' LC_CTYPE= ...@@ -762,8 +763,8 @@ CREATE DATABASE korean WITH ENCODING 'EUC_KR' LC_COLLATE='ko_KR.euckr' LC_CTYPE=
<para> <para>
The encoding for a database is stored in the system catalog The encoding for a database is stored in the system catalog
<literal>pg_database</literal>. You can see it by using the <literal>pg_database</literal>. You can see it by using the
<option>-l</option> option or the <command>\l</command> command <command>psql</command> <option>-l</option> option or the
of <command>psql</command>. <command>\l</command> command.
<screen> <screen>
$ <userinput>psql -l</userinput> $ <userinput>psql -l</userinput>
...@@ -784,11 +785,11 @@ $ <userinput>psql -l</userinput> ...@@ -784,11 +785,11 @@ $ <userinput>psql -l</userinput>
<important> <important>
<para> <para>
On most modern operating systems, <productname>PostgreSQL</productname> On most modern operating systems, <productname>PostgreSQL</productname>
can determine which character set is implied by an <envar>LC_CTYPE</> can determine which character set is implied by the <envar>LC_CTYPE</>
setting, and it will enforce that only the matching database encoding is setting, and it will enforce that only the matching database encoding is
used. On older systems it is your responsibility to ensure that you use used. On older systems it is your responsibility to ensure that you use
the encoding expected by the locale you have selected. A mistake in the encoding expected by the locale you have selected. A mistake in
this area is likely to lead to strange misbehavior of locale-dependent this area is likely to lead to strange behavior of locale-dependent
operations such as sorting. operations such as sorting.
</para> </para>
...@@ -1190,9 +1191,9 @@ RESET client_encoding; ...@@ -1190,9 +1191,9 @@ RESET client_encoding;
<para> <para>
If the conversion of a particular character is not possible If the conversion of a particular character is not possible
&mdash; suppose you chose <literal>EUC_JP</literal> for the &mdash; suppose you chose <literal>EUC_JP</literal> for the
server and <literal>LATIN1</literal> for the client, then some server and <literal>LATIN1</literal> for the client, and some
Japanese characters do not have a representation in Japanese characters are returned that do not have a representation in
<literal>LATIN1</literal> &mdash; then an error is reported. <literal>LATIN1</literal> &mdash; an error is reported.
</para> </para>
<para> <para>
...@@ -1249,7 +1250,8 @@ RESET client_encoding; ...@@ -1249,7 +1250,8 @@ RESET client_encoding;
<listitem> <listitem>
<para> <para>
<acronym>UTF</acronym>-8 is defined here. <acronym>UTF</acronym>-8 (8-bit UCS/Unicode Transformation
Format) is defined here.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
......
<!-- $PostgreSQL: pgsql/doc/src/sgml/client-auth.sgml,v 1.130 2010/02/02 19:09:36 mha Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/client-auth.sgml,v 1.131 2010/02/03 17:25:05 momjian Exp $ -->
<chapter id="client-authentication"> <chapter id="client-authentication">
<title>Client Authentication</title> <title>Client Authentication</title>
...@@ -162,7 +162,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> ...@@ -162,7 +162,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable>
<term><literal>hostnossl</literal></term> <term><literal>hostnossl</literal></term>
<listitem> <listitem>
<para> <para>
This record type has the opposite logic to <literal>hostssl</>: This record type has the opposite behavior of <literal>hostssl</>;
it only matches connection attempts made over it only matches connection attempts made over
TCP/IP that do not use <acronym>SSL</acronym>. TCP/IP that do not use <acronym>SSL</acronym>.
</para> </para>
...@@ -218,7 +218,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> ...@@ -218,7 +218,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable>
<para> <para>
Specifies the client machine IP address range that this record Specifies the client machine IP address range that this record
matches. This field contains an IP address in standard dotted decimal matches. This field contains an IP address in standard dotted decimal
notation and a CIDR mask length. (IP addresses can only be notation and a <acronym>CIDR</> mask length. (IP addresses can only be
specified numerically, not as domain or host names.) The mask specified numerically, not as domain or host names.) The mask
length indicates the number of high-order bits of the client length indicates the number of high-order bits of the client
IP address that must match. Bits to the right of this must IP address that must match. Bits to the right of this must
...@@ -239,6 +239,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> ...@@ -239,6 +239,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable>
<literal>172.20.143.89/32</literal> for a single host, or <literal>172.20.143.89/32</literal> for a single host, or
<literal>172.20.143.0/24</literal> for a small network, or <literal>172.20.143.0/24</literal> for a small network, or
<literal>10.6.0.0/16</literal> for a larger one. <literal>10.6.0.0/16</literal> for a larger one.
<literal>0.0.0.0/0</literal> (<quote>all balls</>) represents all addresses.
To specify a single host, use a CIDR mask of 32 for IPv4 or To specify a single host, use a CIDR mask of 32 for IPv4 or
128 for IPv6. In a network address, do not omit trailing zeroes. 128 for IPv6. In a network address, do not omit trailing zeroes.
</para> </para>
...@@ -296,8 +297,8 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> ...@@ -296,8 +297,8 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable>
Allow the connection unconditionally. This method Allow the connection unconditionally. This method
allows anyone that can connect to the allows anyone that can connect to the
<productname>PostgreSQL</productname> database server to login as <productname>PostgreSQL</productname> database server to login as
any <productname>PostgreSQL</productname> user they like, any <productname>PostgreSQL</productname> user they wish,
without the need for a password. See <xref without the need for a password or any other authentication. See <xref
linkend="auth-trust"> for details. linkend="auth-trust"> for details.
</para> </para>
</listitem> </listitem>
...@@ -308,7 +309,10 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> ...@@ -308,7 +309,10 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable>
<listitem> <listitem>
<para> <para>
Reject the connection unconditionally. This is useful for Reject the connection unconditionally. This is useful for
<quote>filtering out</> certain hosts from a group. <quote>filtering out</> certain hosts from a group, e.g. a
<literal>reject</> line blocks a specific host from connecting,
but a later line allows the remaining hosts in a specific
network to connect.
</para> </para>
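<para>
 For example (the addresses are illustrative), the following pair
 rejects one troublesome host and then allows the rest of its network:
<programlisting>
# TYPE  DATABASE  USER  CIDR-ADDRESS      METHOD
host    all       all   192.168.54.1/32   reject
host    all       all   192.168.54.0/24   md5
</programlisting>
</para>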
</listitem> </listitem>
</varlistentry> </varlistentry>
...@@ -388,7 +392,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> ...@@ -388,7 +392,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable>
<term><literal>ldap</></term> <term><literal>ldap</></term>
<listitem> <listitem>
<para> <para>
Authenticate using an LDAP server. See <xref Authenticate using an <acronym>LDAP</> server. See <xref
linkend="auth-ldap"> for details. linkend="auth-ldap"> for details.
</para> </para>
</listitem> </listitem>
...@@ -473,7 +477,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> ...@@ -473,7 +477,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable>
the main server process receives a the main server process receives a
<systemitem>SIGHUP</systemitem><indexterm><primary>SIGHUP</primary></indexterm> <systemitem>SIGHUP</systemitem><indexterm><primary>SIGHUP</primary></indexterm>
signal. If you edit the file on an signal. If you edit the file on an
active system, you will need to signal the server active system, you will need to signal the postmaster
(using <literal>pg_ctl reload</> or <literal>kill -HUP</>) to make it (using <literal>pg_ctl reload</> or <literal>kill -HUP</>) to make it
re-read the file. re-read the file.
</para> </para>
...@@ -485,7 +489,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> ...@@ -485,7 +489,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable>
<literal>CONNECT</> privilege for the database. If you wish to <literal>CONNECT</> privilege for the database. If you wish to
restrict which users can connect to which databases, it's usually restrict which users can connect to which databases, it's usually
easier to control this by granting/revoking <literal>CONNECT</> privilege easier to control this by granting/revoking <literal>CONNECT</> privilege
than to put the rules into <filename>pg_hba.conf</filename> entries. than to put the rules in <filename>pg_hba.conf</filename> entries.
</para> </para>
</tip> </tip>
...@@ -498,7 +502,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> ...@@ -498,7 +502,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable>
<example id="example-pg-hba.conf"> <example id="example-pg-hba.conf">
<title>Example <filename>pg_hba.conf</filename> entries</title> <title>Example <filename>pg_hba.conf</filename> entries</title>
<programlisting> <programlisting>
# Allow any user on the local system to connect to any database under # Allow any user on the local system to connect to any database with
# any database user name using Unix-domain sockets (the default for local # any database user name using Unix-domain sockets (the default for local
# connections). # connections).
# #
...@@ -517,7 +521,7 @@ host all all 127.0.0.1 255.255.255.255 trus ...@@ -517,7 +521,7 @@ host all all 127.0.0.1 255.255.255.255 trus
# Allow any user from any host with IP address 192.168.93.x to connect # Allow any user from any host with IP address 192.168.93.x to connect
# to database "postgres" as the same user name that ident reports for # to database "postgres" as the same user name that ident reports for
# the connection (typically the Unix user name). # the connection (typically the operating system user name).
# #
# TYPE DATABASE USER CIDR-ADDRESS METHOD # TYPE DATABASE USER CIDR-ADDRESS METHOD
host postgres all 192.168.93.0/24 ident host postgres all 192.168.93.0/24 ident
...@@ -531,8 +535,8 @@ host postgres all 192.168.12.10/32 md5 ...@@ -531,8 +535,8 @@ host postgres all 192.168.12.10/32 md5
# In the absence of preceding "host" lines, these two lines will # In the absence of preceding "host" lines, these two lines will
# reject all connections from 192.168.54.1 (since that entry will be # reject all connections from 192.168.54.1 (since that entry will be
# matched first), but allow Kerberos 5 connections from anywhere else # matched first), but allow Kerberos 5 connections from anywhere else
# on the Internet. The zero mask means that no bits of the host IP # on the Internet. The zero mask causes no bits of the host IP
# address are considered so it matches any host. # address to be considered, so it matches any host.
# #
# TYPE DATABASE USER CIDR-ADDRESS METHOD # TYPE DATABASE USER CIDR-ADDRESS METHOD
host all all 192.168.54.1/32 reject host all all 192.168.54.1/32 reject
...@@ -654,7 +658,7 @@ mymap /^(.*)@otherdomain\.com$ guest ...@@ -654,7 +658,7 @@ mymap /^(.*)@otherdomain\.com$ guest
when the main server process receives a when the main server process receives a
<systemitem>SIGHUP</systemitem><indexterm><primary>SIGHUP</primary></indexterm> <systemitem>SIGHUP</systemitem><indexterm><primary>SIGHUP</primary></indexterm>
signal. If you edit the file on an signal. If you edit the file on an
active system, you will need to signal the server active system, you will need to signal the postmaster
(using <literal>pg_ctl reload</> or <literal>kill -HUP</>) to make it (using <literal>pg_ctl reload</> or <literal>kill -HUP</>) to make it
re-read the file. re-read the file.
</para> </para>
...@@ -663,16 +667,16 @@ mymap /^(.*)@otherdomain\.com$ guest ...@@ -663,16 +667,16 @@ mymap /^(.*)@otherdomain\.com$ guest
A <filename>pg_ident.conf</filename> file that could be used in A <filename>pg_ident.conf</filename> file that could be used in
conjunction with the <filename>pg_hba.conf</> file in <xref conjunction with the <filename>pg_hba.conf</> file in <xref
linkend="example-pg-hba.conf"> is shown in <xref linkend="example-pg-hba.conf"> is shown in <xref
linkend="example-pg-ident.conf">. In this example setup, anyone linkend="example-pg-ident.conf">. In this example, anyone
logged in to a machine on the 192.168 network that does not have the logged in to a machine on the 192.168 network that does not have the
Unix user name <literal>bryanh</>, <literal>ann</>, or operating system user name <literal>bryanh</>, <literal>ann</>, or
<literal>robert</> would not be granted access. Unix user <literal>robert</> would not be granted access. Unix user
<literal>robert</> would only be allowed access when he tries to <literal>robert</> would only be allowed access when he tries to
connect as <productname>PostgreSQL</> user <literal>bob</>, not connect as <productname>PostgreSQL</> user <literal>bob</>, not
as <literal>robert</> or anyone else. <literal>ann</> would as <literal>robert</> or anyone else. <literal>ann</> would
only be allowed to connect as <literal>ann</>. User only be allowed to connect as <literal>ann</>. User
<literal>bryanh</> would be allowed to connect as either <literal>bryanh</> would be allowed to connect as either
<literal>bryanh</> himself or as <literal>guest1</>. <literal>bryanh</> or as <literal>guest1</>.
</para> </para>
<example id="example-pg-ident.conf"> <example id="example-pg-ident.conf">
...@@ -759,7 +763,7 @@ omicron bryanh guest1 ...@@ -759,7 +763,7 @@ omicron bryanh guest1
The password-based authentication methods are <literal>md5</> The password-based authentication methods are <literal>md5</>
and <literal>password</>. These methods operate and <literal>password</>. These methods operate
similarly except for the way that the password is sent across the similarly except for the way that the password is sent across the
connection: respectively, MD5-hashed and clear-text. connection: MD5-hashed and clear-text, respectively.
</para> </para>
<para> <para>
...@@ -780,8 +784,8 @@ omicron bryanh guest1 ...@@ -780,8 +784,8 @@ omicron bryanh guest1
catalog. Passwords can be managed with the SQL commands catalog. Passwords can be managed with the SQL commands
<xref linkend="sql-createuser" endterm="sql-createuser-title"> and <xref linkend="sql-createuser" endterm="sql-createuser-title"> and
<xref linkend="sql-alteruser" endterm="sql-alteruser-title">, <xref linkend="sql-alteruser" endterm="sql-alteruser-title">,
e.g., <userinput>CREATE USER foo WITH PASSWORD 'secret';</userinput>. e.g., <userinput>CREATE USER foo WITH PASSWORD 'secret'</userinput>.
By default, that is, if no password has been set up, the stored password If no password has been set up for a user, the stored password
is null and password authentication will always fail for that user. is null and password authentication will always fail for that user.
</para> </para>
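<para>
For example, an existing password could be changed with a command such as the following (a sketch; <literal>foo</> is the user created above):
<programlisting>
ALTER USER foo WITH PASSWORD 'newsecret';
</programlisting>
</para>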
...@@ -802,7 +806,7 @@ omicron bryanh guest1 ...@@ -802,7 +806,7 @@ omicron bryanh guest1
authentication according to RFC 1964. <productname>GSSAPI</productname> authentication according to RFC 1964. <productname>GSSAPI</productname>
provides automatic authentication (single sign-on) for systems provides automatic authentication (single sign-on) for systems
that support it. The authentication itself is secure, but the that support it. The authentication itself is secure, but the
data sent over the database connection will be in clear unless data sent over the database connection will be sent unencrypted unless
<acronym>SSL</acronym> is used. <acronym>SSL</acronym> is used.
</para> </para>
...@@ -877,7 +881,7 @@ omicron bryanh guest1 ...@@ -877,7 +881,7 @@ omicron bryanh guest1
<para> <para>
When using <productname>Kerberos</productname> authentication, When using <productname>Kerberos</productname> authentication,
<productname>SSPI</productname> works the same way <productname>SSPI</productname> works the same way
<productname>GSSAPI</productname> does. See <xref linkend="gssapi-auth"> <productname>GSSAPI</productname> does; see <xref linkend="gssapi-auth">
for details. for details.
</para> </para>
...@@ -941,7 +945,7 @@ omicron bryanh guest1 ...@@ -941,7 +945,7 @@ omicron bryanh guest1
<productname>Kerberos</productname> is an industry-standard secure <productname>Kerberos</productname> is an industry-standard secure
authentication system suitable for distributed computing over a public authentication system suitable for distributed computing over a public
network. A description of the <productname>Kerberos</productname> system network. A description of the <productname>Kerberos</productname> system
is far beyond the scope of this document; in full generality it can be is beyond the scope of this document; in full generality it can be
quite complex (yet powerful). The quite complex (yet powerful). The
<ulink url="http://www.nrl.navy.mil/CCS/people/kenh/kerberos-faq.html"> <ulink url="http://www.nrl.navy.mil/CCS/people/kenh/kerberos-faq.html">
Kerberos <acronym>FAQ</></ulink> or Kerberos <acronym>FAQ</></ulink> or
...@@ -973,8 +977,9 @@ omicron bryanh guest1 ...@@ -973,8 +977,9 @@ omicron bryanh guest1
changed from the default <literal>postgres</literal> at build time using changed from the default <literal>postgres</literal> at build time using
<literal>./configure --with-krb-srvnam=</><replaceable>whatever</>. <literal>./configure --with-krb-srvnam=</><replaceable>whatever</>.
In most environments, In most environments,
this parameter never needs to be changed. However, to support multiple this parameter never needs to be changed. However, it is necessary
<productname>PostgreSQL</> installations on the same host it is necessary. when supporting multiple <productname>PostgreSQL</> installations
on the same host.
Some Kerberos implementations might also require a different service name, Some Kerberos implementations might also require a different service name,
such as Microsoft Active Directory which requires the service name such as Microsoft Active Directory which requires the service name
to be in uppercase (<literal>POSTGRES</literal>). to be in uppercase (<literal>POSTGRES</literal>).
...@@ -1005,7 +1010,7 @@ omicron bryanh guest1 ...@@ -1005,7 +1010,7 @@ omicron bryanh guest1
of the key file is specified by the <xref of the key file is specified by the <xref
linkend="guc-krb-server-keyfile"> configuration linkend="guc-krb-server-keyfile"> configuration
parameter. The default is parameter. The default is
<filename>/usr/local/pgsql/etc/krb5.keytab</> (or whichever <filename>/usr/local/pgsql/etc/krb5.keytab</> (or whatever
directory was specified as <varname>sysconfdir</> at build time). directory was specified as <varname>sysconfdir</> at build time).
</para> </para>
...@@ -1035,7 +1040,7 @@ omicron bryanh guest1 ...@@ -1035,7 +1040,7 @@ omicron bryanh guest1
<productname>Apache</productname> web server, you can use <productname>Apache</productname> web server, you can use
<literal>AuthType KerberosV5SaveCredentials</literal> with a <literal>AuthType KerberosV5SaveCredentials</literal> with a
<application>mod_perl</application> script. This gives secure <application>mod_perl</application> script. This gives secure
database access over the web, no extra passwords required. database access over the web, with no additional passwords required.
</para> </para>
<para> <para>
...@@ -1137,13 +1142,13 @@ omicron bryanh guest1 ...@@ -1137,13 +1142,13 @@ omicron bryanh guest1
Since <productname>PostgreSQL</> knows both <replaceable>X</> and Since <productname>PostgreSQL</> knows both <replaceable>X</> and
<replaceable>Y</> when a physical connection is established, it <replaceable>Y</> when a physical connection is established, it
can interrogate the ident server on the host of the connecting can interrogate the ident server on the host of the connecting
client and could theoretically determine the operating system user client and can theoretically determine the operating system user
for any given connection this way. for any given connection.
</para> </para>
<para> <para>
The drawback of this procedure is that it depends on the integrity The drawback of this procedure is that it depends on the integrity
of the client: if the client machine is untrusted or compromised of the client: if the client machine is untrusted or compromised,
an attacker could run just about any program on port 113 and an attacker could run just about any program on port 113 and
return any user name he chooses. This authentication method is return any user name he chooses. This authentication method is
therefore only appropriate for closed networks where each client therefore only appropriate for closed networks where each client
...@@ -1562,7 +1567,7 @@ FATAL: database "testdb" does not exist ...@@ -1562,7 +1567,7 @@ FATAL: database "testdb" does not exist
<para> <para>
The server log might contain more information about an The server log might contain more information about an
authentication failure than is reported to the client. If you are authentication failure than is reported to the client. If you are
confused about the reason for a failure, check the log. confused about the reason for a failure, check the server log.
</para> </para>
</tip> </tip>
</sect1> </sect1>
......
<!-- $PostgreSQL: pgsql/doc/src/sgml/config.sgml,v 1.248 2010/02/01 13:40:28 sriggs Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/config.sgml,v 1.249 2010/02/03 17:25:05 momjian Exp $ -->
<chapter Id="runtime-config"> <chapter Id="runtime-config">
<title>Server Configuration</title> <title>Server Configuration</title>
...@@ -21,10 +21,10 @@ ...@@ -21,10 +21,10 @@
<para> <para>
All parameter names are case-insensitive. Every parameter takes a All parameter names are case-insensitive. Every parameter takes a
value of one of five types: Boolean, integer, floating point, value of one of five types: Boolean, integer, floating point,
string or enum. Boolean values can be written as <literal>ON</literal>, string or enum. Boolean values can be written as <literal>on</literal>,
<literal>OFF</literal>, <literal>TRUE</literal>, <literal>off</literal>, <literal>true</literal>,
<literal>FALSE</literal>, <literal>YES</literal>, <literal>false</literal>, <literal>yes</literal>,
<literal>NO</literal>, <literal>1</literal>, <literal>0</literal> <literal>no</literal>, <literal>1</literal>, <literal>0</literal>
(all case-insensitive) or any unambiguous prefix of these. (all case-insensitive) or any unambiguous prefix of these.
</para> </para>
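<para>
For example, the following <filename>postgresql.conf</> lines are all equivalent ways of turning a Boolean parameter on (<varname>autovacuum</> is used purely as an illustration):
<programlisting>
autovacuum = on
autovacuum = true
autovacuum = yes
autovacuum = 1
</programlisting>
</para>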
...@@ -66,8 +66,8 @@ shared_buffers = 128MB ...@@ -66,8 +66,8 @@ shared_buffers = 128MB
</programlisting> </programlisting>
One parameter is specified per line. The equal sign between name and One parameter is specified per line. The equal sign between name and
value is optional. Whitespace is insignificant and blank lines are value is optional. Whitespace is insignificant and blank lines are
ignored. Hash marks (<literal>#</literal>) introduce comments ignored. Hash marks (<literal>#</literal>) designate the rest of the
anywhere. Parameter values that are not simple identifiers or line as a comment. Parameter values that are not simple identifiers or
numbers must be single-quoted. To embed a single quote in a parameter numbers must be single-quoted. To embed a single quote in a parameter
value, write either two quotes (preferred) or backslash-quote. value, write either two quotes (preferred) or backslash-quote.
</para> </para>
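<para>
As a sketch of the quoting rules, a value containing a single quote could be written either way (<varname>log_line_prefix</> is shown here only as an example):
<programlisting>
log_line_prefix = 'it''s %u: '    # doubled quote (preferred)
log_line_prefix = 'it\'s %u: '    # backslash-quote
</programlisting>
</para>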
...@@ -155,9 +155,9 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -155,9 +155,9 @@ SET ENABLE_SEQSCAN TO OFF;
values for the parameter. Some parameters cannot be changed via values for the parameter. Some parameters cannot be changed via
<command>SET</command>: for example, if they control behavior that <command>SET</command>: for example, if they control behavior that
cannot be changed without restarting the entire cannot be changed without restarting the entire
<productname>PostgreSQL</productname> server. Also, some parameters can <productname>PostgreSQL</productname> server. Also,
be modified via <command>SET</command> or <command>ALTER</> by superusers, some <command>SET</command> or <command>ALTER</> parameter modifications
but not by ordinary users. require superuser permission.
</para> </para>
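<para>
For instance, a session-level parameter can be changed and then inspected like this (a simple illustration):
<programlisting>
SET work_mem TO '32MB';
SHOW work_mem;
</programlisting>
</para>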
<para> <para>
...@@ -329,7 +329,7 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -329,7 +329,7 @@ SET ENABLE_SEQSCAN TO OFF;
at all, in which case only Unix-domain sockets can be used to connect at all, in which case only Unix-domain sockets can be used to connect
to it. to it.
The default value is <systemitem class="systemname">localhost</>, The default value is <systemitem class="systemname">localhost</>,
which allows only local <quote>loopback</> connections to be which allows only local TCP/IP <quote>loopback</> connections to be
made. While client authentication (<xref made. While client authentication (<xref
linkend="client-authentication">) allows fine-grained control linkend="client-authentication">) allows fine-grained control
over who can access the server, <varname>listen_addresses</varname> over who can access the server, <varname>listen_addresses</varname>
...@@ -440,8 +440,8 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -440,8 +440,8 @@ SET ENABLE_SEQSCAN TO OFF;
server.) In combination with the parameter server.) In combination with the parameter
<varname>unix_socket_permissions</varname> this can be used as <varname>unix_socket_permissions</varname> this can be used as
an additional access control mechanism for Unix-domain connections. an additional access control mechanism for Unix-domain connections.
By default this is the empty string, which selects the default By default this is the empty string, which uses the default
group for the current user. This parameter can only be set at group of the server user. This parameter can only be set at
server start. server start.
</para> </para>
</listitem> </listitem>
...@@ -457,7 +457,7 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -457,7 +457,7 @@ SET ENABLE_SEQSCAN TO OFF;
Sets the access permissions of the Unix-domain socket. Unix-domain Sets the access permissions of the Unix-domain socket. Unix-domain
sockets use the usual Unix file system permission set. sockets use the usual Unix file system permission set.
The parameter value is expected to be a numeric mode The parameter value is expected to be a numeric mode
specification in the form accepted by the specified in the format accepted by the
<function>chmod</function> and <function>umask</function> <function>chmod</function> and <function>umask</function>
system calls. (To use the customary octal format the number system calls. (To use the customary octal format the number
must start with a <literal>0</literal> (zero).) must start with a <literal>0</literal> (zero).)
...@@ -469,7 +469,7 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -469,7 +469,7 @@ SET ENABLE_SEQSCAN TO OFF;
<literal>0770</literal> (only user and group, see also <literal>0770</literal> (only user and group, see also
<varname>unix_socket_group</varname>) and <literal>0700</literal> <varname>unix_socket_group</varname>) and <literal>0700</literal>
(only user). (Note that for a Unix-domain socket, only write (only user). (Note that for a Unix-domain socket, only write
permission matters and so there is no point in setting or revoking permission matters, so there is no point in setting or revoking
read or execute permissions.) read or execute permissions.)
</para> </para>
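<para>
For example, to restrict socket access to the server's user and a hypothetical <literal>pgusers</> group, one might set (a sketch only):
<programlisting>
unix_socket_group = 'pgusers'          # hypothetical group name
unix_socket_permissions = 0770         # user and group only
</programlisting>
</para>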
...@@ -581,7 +581,7 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -581,7 +581,7 @@ SET ENABLE_SEQSCAN TO OFF;
<para> <para>
Maximum time to complete client authentication, in seconds. If a Maximum time to complete client authentication, in seconds. If a
would-be client has not completed the authentication protocol in would-be client has not completed the authentication protocol in
this much time, the server breaks the connection. This prevents this much time, the server closes the connection. This prevents
hung clients from occupying a connection indefinitely. hung clients from occupying a connection indefinitely.
The default is one minute (<literal>1m</>). The default is one minute (<literal>1m</>).
This parameter can only be set in the <filename>postgresql.conf</> This parameter can only be set in the <filename>postgresql.conf</>
...@@ -707,8 +707,9 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -707,8 +707,9 @@ SET ENABLE_SEQSCAN TO OFF;
<para> <para>
With this parameter enabled, you can still create ordinary global With this parameter enabled, you can still create ordinary global
users. Simply append <literal>@</> when specifying the user users. Simply append <literal>@</> when specifying the user
name in the client. The <literal>@</> will be stripped off name in the client, e.g. <literal>joe@</>. The <literal>@</>
before the user name is looked up by the server. will be stripped off before the user name is looked up by the
server.
</para> </para>
<para> <para>
...@@ -783,15 +784,15 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -783,15 +784,15 @@ SET ENABLE_SEQSCAN TO OFF;
session. These are session-local buffers used only for access to session. These are session-local buffers used only for access to
temporary tables. The default is eight megabytes temporary tables. The default is eight megabytes
(<literal>8MB</>). The setting can be changed within individual (<literal>8MB</>). The setting can be changed within individual
sessions, but only up until the first use of temporary tables sessions, but only before the first use of temporary tables
within a session; subsequent attempts to change the value will within the session; subsequent attempts to change the value will
have no effect on that session. have no effect on that session.
</para> </para>
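<para>
For example, the limit could be raised at the start of a session, before any temporary table has been touched (a sketch):
<programlisting>
SET temp_buffers = '64MB';
</programlisting>
</para>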
<para> <para>
A session will allocate temporary buffers as needed up to the limit A session will allocate temporary buffers as needed up to the limit
given by <varname>temp_buffers</>. The cost of setting a large given by <varname>temp_buffers</>. The cost of setting a large
value in sessions that do not actually need a lot of temporary value in sessions that do not actually need many temporary
buffers is only a buffer descriptor, or about 64 bytes, per buffers is only a buffer descriptor, or about 64 bytes, per
increment in <varname>temp_buffers</>. However if a buffer is increment in <varname>temp_buffers</>. However if a buffer is
actually used an additional 8192 bytes will be consumed for it actually used an additional 8192 bytes will be consumed for it
...@@ -849,13 +850,13 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -849,13 +850,13 @@ SET ENABLE_SEQSCAN TO OFF;
<listitem> <listitem>
<para> <para>
Specifies the amount of memory to be used by internal sort operations Specifies the amount of memory to be used by internal sort operations
and hash tables before switching to temporary disk files. The value and hash tables before writing to temporary disk files. The value
defaults to one megabyte (<literal>1MB</>). defaults to one megabyte (<literal>1MB</>).
Note that for a complex query, several sort or hash operations might be Note that for a complex query, several sort or hash operations might be
running in parallel; each one will be allowed to use as much memory running in parallel; each operation will be allowed to use as much memory
as this value specifies before it starts to put data into temporary as this value specifies before it starts to write data into temporary
files. Also, several running sessions could be doing such operations files. Also, several running sessions could be doing such operations
concurrently. So the total memory used could be many concurrently. Therefore, the total memory used could be many
times the value of <varname>work_mem</varname>; it is necessary to times the value of <varname>work_mem</varname>; it is necessary to
keep this fact in mind when choosing the value. Sort operations are keep this fact in mind when choosing the value. Sort operations are
used for <literal>ORDER BY</>, <literal>DISTINCT</>, and used for <literal>ORDER BY</>, <literal>DISTINCT</>, and
...@@ -873,7 +874,7 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -873,7 +874,7 @@ SET ENABLE_SEQSCAN TO OFF;
</indexterm> </indexterm>
<listitem> <listitem>
<para> <para>
Specifies the maximum amount of memory to be used in maintenance Specifies the maximum amount of memory to be used by maintenance
operations, such as <command>VACUUM</command>, <command>CREATE operations, such as <command>VACUUM</command>, <command>CREATE
INDEX</>, and <command>ALTER TABLE ADD FOREIGN KEY</>. It defaults INDEX</>, and <command>ALTER TABLE ADD FOREIGN KEY</>. It defaults
to 16 megabytes (<literal>16MB</>). Since only one of these to 16 megabytes (<literal>16MB</>). Since only one of these
...@@ -916,9 +917,9 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -916,9 +917,9 @@ SET ENABLE_SEQSCAN TO OFF;
the actual kernel limit will mean that a runaway recursive function the actual kernel limit will mean that a runaway recursive function
can crash an individual backend process. On platforms where can crash an individual backend process. On platforms where
<productname>PostgreSQL</productname> can determine the kernel limit, <productname>PostgreSQL</productname> can determine the kernel limit,
it will not let you set this variable to an unsafe value. However, the server will not allow this variable to be set to an unsafe
not all platforms provide the information, so caution is recommended value. However, not all platforms provide the information,
in selecting a value. so caution is recommended in selecting a value.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
...@@ -942,7 +943,7 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -942,7 +943,7 @@ SET ENABLE_SEQSCAN TO OFF;
a safe per-process limit, you don't need to worry about this setting. a safe per-process limit, you don't need to worry about this setting.
But on some platforms (notably, most BSD systems), the kernel will But on some platforms (notably, most BSD systems), the kernel will
allow individual processes to open many more files than the system allow individual processes to open many more files than the system
can really support when a large number of processes all try to open can actually support if many processes all try to open
that many files. If you find yourself seeing <quote>Too many open that many files. If you find yourself seeing <quote>Too many open
files</> failures, try reducing this setting. files</> failures, try reducing this setting.
This parameter can only be set at server start. This parameter can only be set at server start.
...@@ -957,14 +958,14 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -957,14 +958,14 @@ SET ENABLE_SEQSCAN TO OFF;
</indexterm> </indexterm>
<listitem> <listitem>
<para> <para>
This variable specifies one or more shared libraries that are This variable specifies one or more shared libraries
to be preloaded at server start. If more than one library is to be to be preloaded at server start. For example,
loaded, separate their names with commas. For example,
<literal>'$libdir/mylib'</literal> would cause <literal>'$libdir/mylib'</literal> would cause
<literal>mylib.so</> (or on some platforms, <literal>mylib.so</> (or on some platforms,
<literal>mylib.sl</>) to be preloaded from the installation's <literal>mylib.sl</>) to be preloaded from the installation's
standard library directory. standard library directory.
This parameter can only be set at server start. If more than one library is to be loaded, separate their names
with commas. This parameter can only be set at server start.
</para> </para>
<para> <para>
...@@ -1024,15 +1025,15 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -1024,15 +1025,15 @@ SET ENABLE_SEQSCAN TO OFF;
various I/O operations that are performed. When the accumulated various I/O operations that are performed. When the accumulated
cost reaches a limit (specified by cost reaches a limit (specified by
<varname>vacuum_cost_limit</varname>), the process performing <varname>vacuum_cost_limit</varname>), the process performing
the operation will sleep for a while (specified by the operation will sleep for a short period of time, as specified by
<varname>vacuum_cost_delay</varname>). Then it will reset the <varname>vacuum_cost_delay</varname>. Then it will reset the
counter and continue execution. counter and continue execution.
</para> </para>
<para> <para>
The intent of this feature is to allow administrators to reduce The intent of this feature is to allow administrators to reduce
the I/O impact of these commands on concurrent database the I/O impact of these commands on concurrent database
activity. There are many situations in which it is not very activity. There are many situations where it is not
important that maintenance commands like important that maintenance commands like
<command>VACUUM</command> and <command>ANALYZE</command> finish <command>VACUUM</command> and <command>ANALYZE</command> finish
quickly; however, it is usually very important that these quickly; however, it is usually very important that these
...@@ -1156,15 +1157,15 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -1156,15 +1157,15 @@ SET ENABLE_SEQSCAN TO OFF;
<para> <para>
There is a separate server There is a separate server
process called the <firstterm>background writer</>, whose function process called the <firstterm>background writer</>, whose function
is to issue writes of <quote>dirty</> shared buffers. The intent is is to issue writes of <quote>dirty</> (new or modified) shared
that server processes handling user queries should seldom or never have buffers. It writes shared buffers so server processes handling
to wait for a write to occur, because the background writer will do it. user queries seldom or never need to wait for a write to occur.
However there is a net overall However, the background writer does cause a net overall
increase in I/O load, because a repeatedly-dirtied page might increase in I/O load, because while a repeatedly-dirtied page might
otherwise be written only once per checkpoint interval, but the otherwise be written only once per checkpoint interval, the
background writer might write it several times in the same interval. background writer might write it several times as it is dirtied
The parameters discussed in this subsection can be used to in the same interval. The parameters discussed in this subsection
tune the behavior for local needs. can be used to tune the behavior for local needs.
</para> </para>
<variablelist> <variablelist>
...@@ -1329,7 +1330,9 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -1329,7 +1330,9 @@ SET ENABLE_SEQSCAN TO OFF;
allowed to do its best in buffering, ordering, and delaying allowed to do its best in buffering, ordering, and delaying
writes. This can result in significantly improved performance. writes. This can result in significantly improved performance.
However, if the system crashes, the results of the last few However, if the system crashes, the results of the last few
committed transactions might be lost in part or whole. In the committed transactions might be completely lost, or worse,
might appear partially committed, leaving the database in an
inconsistent state. In the
worst case, unrecoverable data corruption might occur. worst case, unrecoverable data corruption might occur.
(Crashes of the database software itself are <emphasis>not</> (Crashes of the database software itself are <emphasis>not</>
a risk factor here. Only an operating-system-level crash a risk factor here. Only an operating-system-level crash
...@@ -1357,7 +1360,7 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -1357,7 +1360,7 @@ SET ENABLE_SEQSCAN TO OFF;
</para> </para>
<para> <para>
This parameter can only be set in the <filename>postgresql.conf</> <varname>fsync</varname> can only be set in the <filename>postgresql.conf</>
file or on the server command line. file or on the server command line.
If you turn this parameter off, also consider turning off If you turn this parameter off, also consider turning off
<xref linkend="guc-full-page-writes">. <xref linkend="guc-full-page-writes">.
...@@ -1409,7 +1412,7 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -1409,7 +1412,7 @@ SET ENABLE_SEQSCAN TO OFF;
<para> <para>
Method used for forcing WAL updates out to disk. Method used for forcing WAL updates out to disk.
If <varname>fsync</varname> is off then this setting is irrelevant, If <varname>fsync</varname> is off then this setting is irrelevant,
since updates will not be forced out at all. since WAL file updates will not be forced out at all.
Possible values are: Possible values are:
</para> </para>
<itemizedlist> <itemizedlist>
...@@ -1468,8 +1471,8 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -1468,8 +1471,8 @@ SET ENABLE_SEQSCAN TO OFF;
that contains a mix of old and new data. The row-level change data that contains a mix of old and new data. The row-level change data
normally stored in WAL will not be enough to completely restore normally stored in WAL will not be enough to completely restore
such a page during post-crash recovery. Storing the full page image such a page during post-crash recovery. Storing the full page image
guarantees that the page can be correctly restored, but at a price guarantees that the page can be correctly restored, but at the price
in increasing the amount of data that must be written to WAL. of increasing the amount of data that must be written to WAL.
(Because WAL replay always starts from a checkpoint, it is sufficient (Because WAL replay always starts from a checkpoint, it is sufficient
to do this during the first change of each page after a checkpoint. to do this during the first change of each page after a checkpoint.
Therefore, one way to reduce the cost of full-page writes is to Therefore, one way to reduce the cost of full-page writes is to
...@@ -1483,7 +1486,7 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -1483,7 +1486,7 @@ SET ENABLE_SEQSCAN TO OFF;
<varname>fsync</>, though smaller. It might be safe to turn off <varname>fsync</>, though smaller. It might be safe to turn off
this parameter if you have hardware (such as a battery-backed disk this parameter if you have hardware (such as a battery-backed disk
controller) or file-system software that reduces controller) or file-system software that reduces
the risk of partial page writes to an acceptably low level (e.g., ReiserFS 4). the risk of partial page writes to an acceptably low level (e.g., ZFS).
</para> </para>
<para> <para>
...@@ -1630,8 +1633,8 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -1630,8 +1633,8 @@ SET ENABLE_SEQSCAN TO OFF;
</indexterm> </indexterm>
<listitem> <listitem>
<para> <para>
Specifies the target length of checkpoints, as a fraction of Specifies the target of checkpoint completion, as a fraction of
the checkpoint interval. The default is 0.5. total time between checkpoints. The default is 0.5.
This parameter can only be set in the <filename>postgresql.conf</> This parameter can only be set in the <filename>postgresql.conf</>
file or on the server command line. file or on the server command line.
...@@ -1671,7 +1674,7 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -1671,7 +1674,7 @@ SET ENABLE_SEQSCAN TO OFF;
<listitem> <listitem>
<para> <para>
When <varname>archive_mode</> is enabled, completed WAL segments When <varname>archive_mode</> is enabled, completed WAL segments
can be sent to archive storage by setting are sent to archive storage by setting
<xref linkend="guc-archive-command">. <xref linkend="guc-archive-command">.
<varname>archive_mode</> and <varname>archive_command</> are <varname>archive_mode</> and <varname>archive_command</> are
separate variables so that <varname>archive_command</> can be separate variables so that <varname>archive_command</> can be
...@@ -1688,10 +1691,10 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -1688,10 +1691,10 @@ SET ENABLE_SEQSCAN TO OFF;
</indexterm> </indexterm>
<listitem> <listitem>
<para> <para>
The shell command to execute to archive a completed segment of The shell command to execute to archive a completed WAL file
the WAL file series. Any <literal>%p</> in the string is segment. Any <literal>%p</> in the string is
replaced by the path name of the file to archive, and any replaced by the path name of the file to archive, and any
<literal>%f</> is replaced by the file name only. <literal>%f</> is replaced by only the file name.
(The path name is relative to the working directory of the server, (The path name is relative to the working directory of the server,
i.e., the cluster's data directory.) i.e., the cluster's data directory.)
Use <literal>%%</> to embed an actual <literal>%</> character in the Use <literal>%%</> to embed an actual <literal>%</> character in the
...@@ -1701,9 +1704,13 @@ SET ENABLE_SEQSCAN TO OFF; ...@@ -1701,9 +1704,13 @@ SET ENABLE_SEQSCAN TO OFF;
file or on the server command line. It is ignored unless file or on the server command line. It is ignored unless
<varname>archive_mode</> was enabled at server start. <varname>archive_mode</> was enabled at server start.
If <varname>archive_command</> is an empty string (the default) while If <varname>archive_command</> is an empty string (the default) while
<varname>archive_mode</> is enabled, then WAL archiving is temporarily <varname>archive_mode</> is enabled, WAL archiving is temporarily
disabled, but the server continues to accumulate WAL segment files in disabled, but the server continues to accumulate WAL segment files in
the expectation that a command will soon be provided. the expectation that a command will soon be provided. Setting
<varname>archive_command</> to a command that does nothing but
return true, e.g., <literal>/bin/true</>, effectively disables
archiving, but also breaks the chain of WAL files needed for
archive recovery, so it should only be used in unusual circumstances.
</para> </para>
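<para>
As a sketch of the situation described above, archiving could be short-circuited with:
<programlisting>
archive_command = '/bin/true'   # reports success but stores nothing; breaks the WAL chain
</programlisting>
</para>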
<para> <para>
It is important for the command to return a zero exit status if It is important for the command to return a zero exit status if
...@@ -1723,11 +1730,11 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows ...@@ -1723,11 +1730,11 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows
</indexterm> </indexterm>
<listitem> <listitem>
<para> <para>
The <xref linkend="guc-archive-command"> is only invoked on The <xref linkend="guc-archive-command"> is only invoked for
completed WAL segments. Hence, if your server generates little WAL completed WAL segments. Hence, if your server generates little WAL
traffic (or has slack periods where it does so), there could be a traffic (or has slack periods where it does so), there could be a
long delay between the completion of a transaction and its safe long delay between the completion of a transaction and its safe
recording in archive storage. To put a limit on how old unarchived recording in archive storage. To limit how old unarchived
data can be, you can set <varname>archive_timeout</> to force the data can be, you can set <varname>archive_timeout</> to force the
server to switch to a new WAL segment file periodically. When this server to switch to a new WAL segment file periodically. When this
parameter is greater than zero, the server will switch to a new parameter is greater than zero, the server will switch to a new
...@@ -1854,16 +1861,15 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows ...@@ -1854,16 +1861,15 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows
These configuration parameters provide a crude method of These configuration parameters provide a crude method of
influencing the query plans chosen by the query optimizer. If influencing the query plans chosen by the query optimizer. If
the default plan chosen by the optimizer for a particular query the default plan chosen by the optimizer for a particular query
is not optimal, a temporary solution can be found by using one is not optimal, a <emphasis>temporary</> solution is to use one
of these configuration parameters to force the optimizer to of these configuration parameters to force the optimizer to
choose a different plan. Turning one of these settings off choose a different plan.
permanently is seldom a good idea, however.
Better ways to improve the quality of the Better ways to improve the quality of the
plans chosen by the optimizer include adjusting the <xref plans chosen by the optimizer include adjusting the <xref
linkend="runtime-config-query-constants" linkend="runtime-config-query-constants"
endterm="runtime-config-query-constants-title">, running <xref endterm="runtime-config-query-constants-title">, running <xref
linkend="sql-analyze" endterm="sql-analyze-title"> more linkend="sql-analyze" endterm="sql-analyze-title"> manually, increasing
frequently, increasing the value of the <xref the value of the <xref
linkend="guc-default-statistics-target"> configuration parameter, linkend="guc-default-statistics-target"> configuration parameter,
and increasing the amount of statistics collected for and increasing the amount of statistics collected for
specific columns using <command>ALTER TABLE SET specific columns using <command>ALTER TABLE SET
...@@ -1950,7 +1956,7 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows ...@@ -1950,7 +1956,7 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows
<listitem> <listitem>
<para> <para>
Enables or disables the query planner's use of nested-loop join Enables or disables the query planner's use of nested-loop join
plans. It's not possible to suppress nested-loop joins entirely, plans. It is impossible to suppress nested-loop joins entirely,
but turning this variable off discourages the planner from using but turning this variable off discourages the planner from using
one if there are other methods available. The default is one if there are other methods available. The default is
<literal>on</>. <literal>on</>.
...@@ -1969,7 +1975,7 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows ...@@ -1969,7 +1975,7 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows
<listitem> <listitem>
<para> <para>
Enables or disables the query planner's use of sequential scan Enables or disables the query planner's use of sequential scan
plan types. It's not possible to suppress sequential scans plan types. It is impossible to suppress sequential scans
entirely, but turning this variable off discourages the planner entirely, but turning this variable off discourages the planner
from using one if there are other methods available. The from using one if there are other methods available. The
default is <literal>on</>. default is <literal>on</>.
...@@ -1985,7 +1991,7 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows ...@@ -1985,7 +1991,7 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows
<listitem> <listitem>
<para> <para>
Enables or disables the query planner's use of explicit sort Enables or disables the query planner's use of explicit sort
steps. It's not possible to suppress explicit sorts entirely, steps. It is impossible to suppress explicit sorts entirely,
but turning this variable off discourages the planner from but turning this variable off discourages the planner from
using one if there are other methods available. The default using one if there are other methods available. The default
is <literal>on</>. is <literal>on</>.
...@@ -2017,8 +2023,8 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows ...@@ -2017,8 +2023,8 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows
The <firstterm>cost</> variables described in this section are measured The <firstterm>cost</> variables described in this section are measured
on an arbitrary scale. Only their relative values matter, hence on an arbitrary scale. Only their relative values matter, hence
scaling them all up or down by the same factor will result in no change scaling them all up or down by the same factor will result in no change
in the planner's choices. Traditionally, these variables have been in the planner's choices. By default, these cost variables are based on
referenced to sequential page fetches as the unit of cost; that is, the cost of sequential page fetches; that is,
<varname>seq_page_cost</> is conventionally set to <literal>1.0</> <varname>seq_page_cost</> is conventionally set to <literal>1.0</>
and the other cost variables are set with reference to that. But and the other cost variables are set with reference to that. But
you can use a different scale if you prefer, such as actual execution you can use a different scale if you prefer, such as actual execution
...@@ -2029,7 +2035,7 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows ...@@ -2029,7 +2035,7 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows
<para> <para>
Unfortunately, there is no well-defined method for determining ideal Unfortunately, there is no well-defined method for determining ideal
values for the cost variables. They are best treated as averages over values for the cost variables. They are best treated as averages over
the entire mix of queries that a particular installation will get. This the entire mix of queries that a particular installation will receive. This
means that changing them on the basis of just a few experiments is very means that changing them on the basis of just a few experiments is very
risky. risky.
</para> </para>
...@@ -2193,8 +2199,8 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows ...@@ -2193,8 +2199,8 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows
<para> <para>
Enables or disables genetic query optimization. Enables or disables genetic query optimization.
This is on by default. It is usually best not to turn it off in This is on by default. It is usually best not to turn it off in
production; the <varname>geqo_threshold</varname> variable provides a production; the <varname>geqo_threshold</varname> variable provides
more granular way to control use of GEQO. more granular control of GEQO.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
...@@ -2211,7 +2217,8 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows ...@@ -2211,7 +2217,8 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows
<literal>FULL OUTER JOIN</> construct counts as only one <literal>FROM</> <literal>FULL OUTER JOIN</> construct counts as only one <literal>FROM</>
item.) The default is 12. For simpler queries it is usually best item.) The default is 12. For simpler queries it is usually best
to use the deterministic, exhaustive planner, but for queries with to use the deterministic, exhaustive planner, but for queries with
many tables the deterministic planner takes too long. many tables the deterministic planner takes too long, often
longer than the penalty of executing a suboptimal plan.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
...@@ -2320,8 +2327,8 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows ...@@ -2320,8 +2327,8 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows
</indexterm> </indexterm>
<listitem> <listitem>
<para> <para>
Sets the default statistics target for table columns that have Sets the default statistics target for table columns without
not had a column-specific target set via <command>ALTER TABLE a column-specific target set via <command>ALTER TABLE
SET STATISTICS</>. Larger values increase the time needed to SET STATISTICS</>. Larger values increase the time needed to
do <command>ANALYZE</>, but might improve the quality of the do <command>ANALYZE</>, but might improve the quality of the
planner's estimates. The default is 100. For more information planner's estimates. The default is 100. For more information
...@@ -2349,6 +2356,8 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows ...@@ -2349,6 +2356,8 @@ archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows
<literal>partition</> (examine constraints only for inheritance child <literal>partition</> (examine constraints only for inheritance child
tables and <literal>UNION ALL</> subqueries). tables and <literal>UNION ALL</> subqueries).
<literal>partition</> is the default setting. <literal>partition</> is the default setting.
It is often used with inheritance and partitioned tables to
improve performance.
</para> </para>
<para> <para>
...@@ -2366,9 +2375,7 @@ SELECT * FROM parent WHERE key = 2400; ...@@ -2366,9 +2375,7 @@ SELECT * FROM parent WHERE key = 2400;
</programlisting> </programlisting>
With constraint exclusion enabled, this <command>SELECT</> With constraint exclusion enabled, this <command>SELECT</>
will not scan <structname>child1000</> at all. This can will not scan <structname>child1000</> at all, improving performance.
improve performance when inheritance is used to build
partitioned tables.
</para> </para>
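<para>
The kind of child-table constraint the planner relies on here might look like the following sketch (names and ranges are illustrative, not the documented example):
<programlisting>
CREATE TABLE child1000 (
    CHECK ( key &gt;= 1000 AND key &lt; 2000 )
) INHERITS (parent);
</programlisting>
</para>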
<para> <para>
...@@ -2449,8 +2456,8 @@ SELECT * FROM parent WHERE key = 2400; ...@@ -2449,8 +2456,8 @@ SELECT * FROM parent WHERE key = 2400;
for most uses. Setting it to 1 prevents any reordering of for most uses. Setting it to 1 prevents any reordering of
explicit <literal>JOIN</>s. Thus, the explicit join order explicit <literal>JOIN</>s. Thus, the explicit join order
specified in the query will be the actual order in which the specified in the query will be the actual order in which the
relations are joined. The query planner does not always choose relations are joined. Because the query planner does not always choose
the optimal join order; advanced users can elect to the optimal join order, advanced users can elect to
temporarily set this variable to 1, and then specify the join temporarily set this variable to 1, and then specify the join
order they desire explicitly. order they desire explicitly.
For more information see <xref linkend="explicit-joins">. For more information see <xref linkend="explicit-joins">.
...@@ -2505,7 +2512,8 @@ SELECT * FROM parent WHERE key = 2400; ...@@ -2505,7 +2512,8 @@ SELECT * FROM parent WHERE key = 2400;
<para> <para>
If <systemitem>csvlog</> is included in <varname>log_destination</>, If <systemitem>csvlog</> is included in <varname>log_destination</>,
log entries are output in <quote>comma separated log entries are output in <quote>comma separated
value</> format, which is convenient for loading them into programs. value</> (<acronym>CSV</>) format, which is convenient for
loading logs into programs.
See <xref linkend="runtime-config-logging-csvlog"> for details. See <xref linkend="runtime-config-logging-csvlog"> for details.
<varname>logging_collector</varname> must be enabled to generate <varname>logging_collector</varname> must be enabled to generate
CSV-format log output. CSV-format log output.
...@@ -2521,7 +2529,7 @@ SELECT * FROM parent WHERE key = 2400; ...@@ -2521,7 +2529,7 @@ SELECT * FROM parent WHERE key = 2400;
<literal>LOCAL0</> through <literal>LOCAL7</> (see <xref <literal>LOCAL0</> through <literal>LOCAL7</> (see <xref
linkend="guc-syslog-facility">), but the default linkend="guc-syslog-facility">), but the default
<application>syslog</application> configuration on most platforms <application>syslog</application> configuration on most platforms
will discard all such messages. You will need to add something like will discard all such messages. You will need to add something like:
<programlisting> <programlisting>
local0.* /var/log/postgresql local0.* /var/log/postgresql
</programlisting> </programlisting>
...@@ -2539,9 +2547,8 @@ local0.* /var/log/postgresql ...@@ -2539,9 +2547,8 @@ local0.* /var/log/postgresql
</indexterm> </indexterm>
<listitem> <listitem>
<para> <para>
This parameter allows messages sent to <application>stderr</>, This parameter captures plain and CSV-format log messages
and CSV-format log output, to be sent to <application>stderr</> and redirects them into log files.
captured and redirected into log files.
This approach is often more useful than This approach is often more useful than
logging to <application>syslog</>, since some types of messages logging to <application>syslog</>, since some types of messages
might not appear in <application>syslog</> output (a common example might not appear in <application>syslog</> output (a common example
...@@ -2832,7 +2839,11 @@ local0.* /var/log/postgresql ...@@ -2832,7 +2839,11 @@ local0.* /var/log/postgresql
Controls the amount of detail written in the server log for each Controls the amount of detail written in the server log for each
message that is logged. Valid values are <literal>TERSE</>, message that is logged. Valid values are <literal>TERSE</>,
<literal>DEFAULT</>, and <literal>VERBOSE</>, each adding more <literal>DEFAULT</>, and <literal>VERBOSE</>, each adding more
fields to displayed messages. fields to displayed messages. <literal>VERBOSE</> logging
output includes the <link
linkend="errcodes-appendix">SQLSTATE</> error
code and the source code file name, function name,
and line number that generated the error.
Only superusers can change this setting. Only superusers can change this setting.
</para> </para>
</listitem> </listitem>
...@@ -2845,8 +2856,8 @@ local0.* /var/log/postgresql ...@@ -2845,8 +2856,8 @@ local0.* /var/log/postgresql
</indexterm> </indexterm>
<listitem> <listitem>
<para> <para>
Controls whether or not the SQL statement that causes an error Controls which SQL statements causing an error
condition will be recorded in the server log. The current condition are recorded in the server log. The current
SQL statement is included in the log entry for any message of SQL statement is included in the log entry for any message of
the specified severity or higher. the specified severity or higher.
Valid values are <literal>DEBUG5</literal>, Valid values are <literal>DEBUG5</literal>,
...@@ -3165,7 +3176,7 @@ local0.* /var/log/postgresql ...@@ -3165,7 +3176,7 @@ local0.* /var/log/postgresql
<listitem> <listitem>
<para> <para>
By default, connection log messages only show the IP address of the By default, connection log messages only show the IP address of the
connecting host. Turning on this parameter causes logging of the connecting host. Turning this parameter on causes logging of the
host name as well. Note that depending on your host name resolution host name as well. Note that depending on your host name resolution
setup this might impose a non-negligible performance penalty. setup this might impose a non-negligible performance penalty.
This parameter can only be set in the <filename>postgresql.conf</> This parameter can only be set in the <filename>postgresql.conf</>
...@@ -3312,7 +3323,7 @@ FROM pg_stat_activity; ...@@ -3312,7 +3323,7 @@ FROM pg_stat_activity;
If you set a nonempty value for <varname>log_line_prefix</>, If you set a nonempty value for <varname>log_line_prefix</>,
you should usually make its last character be a space, to provide you should usually make its last character be a space, to provide
visual separation from the rest of the log line. A punctuation visual separation from the rest of the log line. A punctuation
character could be used too. character can be used too.
</para> </para>
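<para>
For instance, one possible prefix ending in a space (shown only as an illustration):
<programlisting>
log_line_prefix = '%t [%p] %u@%d '
</programlisting>
</para>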
</tip> </tip>
...@@ -3392,11 +3403,11 @@ FROM pg_stat_activity; ...@@ -3392,11 +3403,11 @@ FROM pg_stat_activity;
</indexterm> </indexterm>
<listitem> <listitem>
<para> <para>
Controls logging of use of temporary files. Controls logging of temporary file names and sizes.
Temporary files can be Temporary files can be
created for sorts, hashes, and temporary query results. created for sorts, hashes, and temporary query results.
A log entry is made for each temporary file when it is deleted. A log entry is made for each temporary file when it is deleted.
A value of zero logs all temporary files, while positive A value of zero logs all temporary file information, while positive
values log only files whose size is greater than or equal to values log only files whose size is greater than or equal to
the specified number of kilobytes. The the specified number of kilobytes. The
default setting is <literal>-1</>, which disables such logging. default setting is <literal>-1</>, which disables such logging.
...@@ -3415,7 +3426,7 @@ FROM pg_stat_activity; ...@@ -3415,7 +3426,7 @@ FROM pg_stat_activity;
Sets the time zone used for timestamps written in the log. Sets the time zone used for timestamps written in the log.
Unlike <xref linkend="guc-timezone">, this value is cluster-wide, Unlike <xref linkend="guc-timezone">, this value is cluster-wide,
so that all sessions will report timestamps consistently. so that all sessions will report timestamps consistently.
The default is <literal>unknown</>, which means to use whatever The default is <literal>unknown</>, which means use whatever
the system environment specifies as the time zone. See <xref the system environment specifies as the time zone. See <xref
linkend="datatype-timezones"> for more information. linkend="datatype-timezones"> for more information.
This parameter can only be set in the <filename>postgresql.conf</> This parameter can only be set in the <filename>postgresql.conf</>
...@@ -3432,7 +3443,8 @@ FROM pg_stat_activity; ...@@ -3432,7 +3443,8 @@ FROM pg_stat_activity;
<para> <para>
Including <literal>csvlog</> in the <varname>log_destination</> list Including <literal>csvlog</> in the <varname>log_destination</> list
provides a convenient way to import log files into a database table. provides a convenient way to import log files into a database table.
This option emits log lines in comma-separated-value format, This option emits log lines in comma-separated-values
(<acronym>CSV</>) format,
with these columns: with these columns:
timestamp with milliseconds, timestamp with milliseconds,
user name, user name,
...@@ -3503,7 +3515,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; ...@@ -3503,7 +3515,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
<para> <para>
There are a few things you need to do to simplify importing CSV log There are a few things you need to do to simplify importing CSV log
files easily and automatically: files:
<orderedlist> <orderedlist>
<listitem> <listitem>
...@@ -3575,11 +3587,11 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; ...@@ -3575,11 +3587,11 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
<listitem> <listitem>
<para> <para>
Enables the collection of information on the currently Enables the collection of information on the currently
executing command of each session, along with the time at executing command of each session, along with the time when
which that command began execution. This parameter is on by that command began execution. This parameter is on by
default. Note that even when enabled, this information is not default. Note that even when enabled, this information is not
visible to all users, only to superusers and the user owning visible to all users, only to superusers and the user owning
the session being reported on; so it should not represent a the session being reported on, so it should not represent a
security risk. security risk.
Only superusers can change this setting. Only superusers can change this setting.
</para> </para>
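<para>
When this parameter is enabled, the collected information can be inspected with a query such as (a minimal sketch):
<programlisting>
SELECT * FROM pg_stat_activity;
</programlisting>
</para>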
...@@ -3666,8 +3678,8 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; ...@@ -3666,8 +3678,8 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
<para> <para>
Sets the directory to store temporary statistics data in. This can be Sets the directory to store temporary statistics data in. This can be
a path relative to the data directory or an absolute path. The default a path relative to the data directory or an absolute path. The default
is <filename>pg_stat_tmp</filename>. Pointing this at a RAM based is <filename>pg_stat_tmp</filename>. Pointing this at a RAM-based
filesystem will decrease physical I/O requirements and can lead to file system will decrease physical I/O requirements and can lead to
improved performance. improved performance.
This parameter can only be set in the <filename>postgresql.conf</> This parameter can only be set in the <filename>postgresql.conf</>
file or on the server command line. file or on the server command line.
...@@ -3701,9 +3713,10 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; ...@@ -3701,9 +3713,10 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
</indexterm> </indexterm>
<listitem> <listitem>
<para> <para>
For each query, write performance statistics of the respective For each query, output performance statistics of the respective
module to the server log. This is a crude profiling module to the server log. This is a crude profiling
instrument. <varname>log_statement_stats</varname> reports total instrument, similar to the Unix <function>getrusage()</> operating
system facility. <varname>log_statement_stats</varname> reports total
statement statistics, while the others report per-module statistics. statement statistics, while the others report per-module statistics.
<varname>log_statement_stats</varname> cannot be enabled together with <varname>log_statement_stats</varname> cannot be enabled together with
any of the per-module options. All of these options are disabled by any of the per-module options. All of these options are disabled by
...@@ -3742,7 +3755,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; ...@@ -3742,7 +3755,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
<para> <para>
Controls whether the server should run the Controls whether the server should run the
autovacuum launcher daemon. This is on by default; however, autovacuum launcher daemon. This is on by default; however,
<xref linkend="guc-track-counts"> must also be turned on for <xref linkend="guc-track-counts"> must also be enabled for
autovacuum to work. autovacuum to work.
This parameter can only be set in the <filename>postgresql.conf</> This parameter can only be set in the <filename>postgresql.conf</>
file or on the server command line. file or on the server command line.
...@@ -3800,7 +3813,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; ...@@ -3800,7 +3813,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
database. In each round the daemon examines the database. In each round the daemon examines the
database and issues <command>VACUUM</> and <command>ANALYZE</> commands database and issues <command>VACUUM</> and <command>ANALYZE</> commands
as needed for tables in that database. The delay is measured as needed for tables in that database. The delay is measured
in seconds, and the default is one minute (<literal>1m</>). in seconds, and the default is one minute (<literal>1min</>).
This parameter can only be set in the <filename>postgresql.conf</> This parameter can only be set in the <filename>postgresql.conf</>
file or on the server command line. file or on the server command line.
</para> </para>
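<para>
For example, in <filename>postgresql.conf</> (simply restating the default, for illustration):
<programlisting>
autovacuum_naptime = 1min
</programlisting>
</para>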
...@@ -3965,7 +3978,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; ...@@ -3965,7 +3978,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
<para> <para>
This variable specifies the order in which schemas are searched This variable specifies the order in which schemas are searched
when an object (table, data type, function, etc.) is referenced by a when an object (table, data type, function, etc.) is referenced by a
simple name with no schema component. When there are objects of simple name with no schema specified. When there are objects of
identical names in different schemas, the one found first identical names in different schemas, the one found first
in the search path is used. An object that is not in any of the in the search path is used. An object that is not in any of the
schemas in the search path can only be referenced by specifying schemas in the search path can only be referenced by specifying
...@@ -3973,7 +3986,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; ...@@ -3973,7 +3986,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
</para> </para>
<para> <para>
The value for <varname>search_path</varname> has to be a comma-separated The value for <varname>search_path</varname> must be a comma-separated
list of schema names. If one of the list items is list of schema names. If one of the list items is
the special value <literal>$user</literal>, then the schema the special value <literal>$user</literal>, then the schema
having the name returned by <function>SESSION_USER</> is substituted, if there having the name returned by <function>SESSION_USER</> is substituted, if there
...@@ -3993,9 +4006,9 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; ...@@ -3993,9 +4006,9 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
<literal>pg_temp_<replaceable>nnn</></>, is always searched if it <literal>pg_temp_<replaceable>nnn</></>, is always searched if it
exists. It can be explicitly listed in the path by using the exists. It can be explicitly listed in the path by using the
alias <literal>pg_temp</>. If it is not listed in the path then alias <literal>pg_temp</>. If it is not listed in the path then
it is searched first (before even <literal>pg_catalog</>). However, it is searched first (even before <literal>pg_catalog</>). However,
the temporary schema is only searched for relation (table, view, the temporary schema is only searched for relation (table, view,
sequence, etc) and data type names. It will never be searched for sequence, etc) and data type names. It is never searched for
function or operator names. function or operator names.
</para> </para>
...@@ -4022,7 +4035,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; ...@@ -4022,7 +4035,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
via the <acronym>SQL</acronym> function via the <acronym>SQL</acronym> function
<function>current_schemas()</>. This is not quite the same as <function>current_schemas()</>. This is not quite the same as
examining the value of <varname>search_path</varname>, since examining the value of <varname>search_path</varname>, since
<function>current_schemas()</> shows how the requests <function>current_schemas()</> shows how the items
appearing in <varname>search_path</varname> were resolved. appearing in <varname>search_path</varname> were resolved.
</para> </para>
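    <para>
     As a quick illustration (the schema name <literal>myschema</> below is
     hypothetical), the raw setting and the resolved search order can be
     compared like this:
<programlisting>
CREATE SCHEMA myschema;
SET search_path TO myschema, public;
SHOW search_path;              -- myschema, public
SELECT current_schemas(true);  -- typically {pg_catalog,myschema,public}: how the setting was resolved
</programlisting>
    </para>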
...@@ -4075,11 +4088,11 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; ...@@ -4075,11 +4088,11 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
<indexterm><primary>tablespace</><secondary>temporary</></> <indexterm><primary>tablespace</><secondary>temporary</></>
<listitem> <listitem>
<para> <para>
This variable specifies tablespace(s) in which to create temporary This variable specifies tablespaces in which to create temporary
objects (temp tables and indexes on temp tables) when a objects (temp tables and indexes on temp tables) when a
<command>CREATE</> command does not explicitly specify a tablespace. <command>CREATE</> command does not explicitly specify a tablespace.
Temporary files for purposes such as sorting large data sets Temporary files for purposes such as sorting large data sets
are also created in these tablespace(s). are also created in these tablespaces.
</para> </para>
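    <para>
     For example (the tablespace names are hypothetical and must already
     exist and be writable by the current user):
<programlisting>
SET temp_tablespaces = 'fasttemp1', 'fasttemp2';
CREATE TEMPORARY TABLE scratch (id int);   -- placed in one of the listed tablespaces
</programlisting>
    </para>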
<para> <para>
...@@ -4210,8 +4223,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; ...@@ -4210,8 +4223,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
milliseconds, starting from the time the command arrives at the server milliseconds, starting from the time the command arrives at the server
from the client. If <varname>log_min_error_statement</> is set to from the client. If <varname>log_min_error_statement</> is set to
<literal>ERROR</> or lower, the statement that timed out will also be <literal>ERROR</> or lower, the statement that timed out will also be
logged. A value of zero (the default) turns off the logged. A value of zero (the default) turns this off.
limitation.
</para> </para>
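    <para>
     A minimal per-session sketch of this timeout (the
     <varname>statement_timeout</> parameter; the five-second value is only
     an example):
<programlisting>
SET statement_timeout = '5s';
SELECT pg_sleep(10);        -- canceled: "canceling statement due to statement timeout"
RESET statement_timeout;    -- back to the default of 0, i.e. disabled
</programlisting>
    </para>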
<para> <para>
...@@ -4527,7 +4539,9 @@ SET XML OPTION { DOCUMENT | CONTENT }; ...@@ -4527,7 +4539,9 @@ SET XML OPTION { DOCUMENT | CONTENT };
<para> <para>
Only superusers can change this setting, because it affects the Only superusers can change this setting, because it affects the
messages sent to the server log as well as to the client. messages sent to the server log as well as to the client, and
an improper value might make the server
logs harder to read.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
...@@ -4631,12 +4645,12 @@ SET XML OPTION { DOCUMENT | CONTENT }; ...@@ -4631,12 +4645,12 @@ SET XML OPTION { DOCUMENT | CONTENT };
</para> </para>
<para> <para>
The value for <varname>dynamic_library_path</varname> has to be a The value for <varname>dynamic_library_path</varname> must be a
list of absolute directory paths separated by colons (or semi-colons list of absolute directory paths separated by colons (or semi-colons
on Windows). If a list element starts on Windows). If a list element starts
with the special string <literal>$libdir</literal>, the with the special string <literal>$libdir</literal>, the
compiled-in <productname>PostgreSQL</productname> package compiled-in <productname>PostgreSQL</productname> package
library directory is substituted for <literal>$libdir</literal>. This library directory is substituted for <literal>$libdir</literal>; this
is where the modules provided by the standard is where the modules provided by the standard
<productname>PostgreSQL</productname> distribution are installed. <productname>PostgreSQL</productname> distribution are installed.
(Use <literal>pg_config --pkglibdir</literal> to find out the name of (Use <literal>pg_config --pkglibdir</literal> to find out the name of
...@@ -4674,7 +4688,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' ...@@ -4674,7 +4688,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
</indexterm> </indexterm>
<listitem> <listitem>
<para> <para>
Soft upper limit of the size of the set returned by GIN index. For more Soft upper limit of the size of the set returned by GIN index scans. For more
information see <xref linkend="gin-tips">. information see <xref linkend="gin-tips">.
</para> </para>
</listitem> </listitem>
...@@ -4711,7 +4725,8 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' ...@@ -4711,7 +4725,8 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
</para> </para>
<para> <para>
There is no performance advantage to loading a library at session Unlike <varname>local_preload_libraries</>, there is no
performance advantage to loading a library at session
start rather than when it is first used. Rather, the intent of start rather than when it is first used. Rather, the intent of
this feature is to allow debugging or performance-measurement this feature is to allow debugging or performance-measurement
libraries to be loaded into specific sessions without an explicit libraries to be loaded into specific sessions without an explicit
...@@ -4761,10 +4776,10 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' ...@@ -4761,10 +4776,10 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
<para> <para>
This is the amount of time, in milliseconds, to wait on a lock This is the amount of time, in milliseconds, to wait on a lock
before checking to see if there is a deadlock condition. The before checking to see if there is a deadlock condition. The
check for deadlock is relatively slow, so the server doesn't run check for deadlock is relatively expensive, so the server doesn't run
it every time it waits for a lock. We optimistically assume it every time it waits for a lock. We optimistically assume
that deadlocks are not common in production applications and that deadlocks are not common in production applications and
just wait on the lock for a while before starting the check for a just wait on the lock for a while before checking for a
deadlock. Increasing this value reduces the amount of time deadlock. Increasing this value reduces the amount of time
wasted in needless deadlock checks, but slows down reporting of wasted in needless deadlock checks, but slows down reporting of
real deadlock errors. The default is one second (<literal>1s</>), real deadlock errors. The default is one second (<literal>1s</>),
...@@ -4792,7 +4807,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' ...@@ -4792,7 +4807,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
</indexterm> </indexterm>
<listitem> <listitem>
<para> <para>
The shared lock table is created to track locks on The shared lock table tracks locks on
<varname>max_locks_per_transaction</varname> * (<xref <varname>max_locks_per_transaction</varname> * (<xref
linkend="guc-max-connections"> + <xref linkend="guc-max-connections"> + <xref
linkend="guc-max-prepared-transactions">) objects (e.g., tables); linkend="guc-max-prepared-transactions">) objects (e.g., tables);
...@@ -4889,7 +4904,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' ...@@ -4889,7 +4904,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
<para> <para>
Note that in a standard-conforming string literal, <literal>\</> just Note that in a standard-conforming string literal, <literal>\</> just
means <literal>\</> anyway. This parameter affects the handling of means <literal>\</> anyway. This parameter only affects the handling of
non-standard-conforming literals, including non-standard-conforming literals, including
escape string syntax (<literal>E'...'</>). escape string syntax (<literal>E'...'</>).
</para> </para>
...@@ -4908,9 +4923,8 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' ...@@ -4908,9 +4923,8 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
newly-created tables, if neither <literal>WITH OIDS</literal> newly-created tables, if neither <literal>WITH OIDS</literal>
nor <literal>WITHOUT OIDS</literal> is specified. It also nor <literal>WITHOUT OIDS</literal> is specified. It also
determines whether OIDs will be included in tables created by determines whether OIDs will be included in tables created by
<command>SELECT INTO</command>. In <productname>PostgreSQL</> <command>SELECT INTO</command>. The parameter is <literal>off</>
8.1 <varname>default_with_oids</> is <literal>off</> by default; in by default; in <productname>PostgreSQL</> 8.0 and earlier, it
prior versions of <productname>PostgreSQL</productname>, it
was on by default. was on by default.
</para> </para>
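    <para>
     For instance (the table names are hypothetical; an explicit clause
     always overrides the default):
<programlisting>
SET default_with_oids = off;
CREATE TABLE t_plain (id int);            -- created without OIDs
CREATE TABLE t_oids (id int) WITH OIDS;   -- explicit WITH OIDS still wins
</programlisting>
    </para>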
...@@ -4983,7 +4997,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' ...@@ -4983,7 +4997,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
<listitem> <listitem>
<para> <para>
This controls the inheritance semantics. If turned <literal>off</>, This controls the inheritance semantics. If turned <literal>off</>,
subtables are not included by various commands by default; basically subtables are not accessed by various commands by default; basically
an implied <literal>ONLY</literal> key word. This was added for an implied <literal>ONLY</literal> key word. This was added for
compatibility with releases prior to 7.1. See compatibility with releases prior to 7.1. See
<xref linkend="ddl-inherit"> for more information. <xref linkend="ddl-inherit"> for more information.
...@@ -5006,12 +5020,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' ...@@ -5006,12 +5020,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
<productname>PostgreSQL</productname> to have its historical <productname>PostgreSQL</productname> to have its historical
behavior of treating backslashes as escape characters. behavior of treating backslashes as escape characters.
The default will change to <literal>on</> in a future release The default will change to <literal>on</> in a future release
to improve compatibility with the standard. to improve compatibility with the SQL standard.
Applications can check this Applications can check this
parameter to determine how string literals will be processed. parameter to determine how string literals will be processed.
The presence of this parameter can also be taken as an indication The presence of this parameter can also be taken as an indication
that the escape string syntax (<literal>E'...'</>) is supported. that the escape string syntax (<literal>E'...'</>) is supported.
Escape string syntax should be used if an application desires Escape string syntax (<xref linkend="sql-syntax-strings-escape">)
should be used if an application desires
backslashes to be treated as escape characters. backslashes to be treated as escape characters.
</para> </para>
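    <para>
     A short sketch of the difference (any warnings depend on
     <varname>escape_string_warning</>):
<programlisting>
SET standard_conforming_strings = off;   -- historical behavior
SELECT 'a\nb';    -- backslash acts as an escape: a, newline, b
SET standard_conforming_strings = on;    -- SQL-standard behavior
SELECT 'a\nb';    -- four literal characters: a, \, n, b
SELECT E'a\nb';   -- escape string syntax interprets \n regardless of the setting
</programlisting>
    </para>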
</listitem> </listitem>
...@@ -5072,11 +5087,11 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' ...@@ -5072,11 +5087,11 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
null values, so if you use that interface to access the database you null values, so if you use that interface to access the database you
might want to turn this option on. Since expressions of the might want to turn this option on. Since expressions of the
form <literal><replaceable>expr</> = NULL</literal> always form <literal><replaceable>expr</> = NULL</literal> always
return the null value (using the correct interpretation) they are not return the null value (using the SQL standard interpretation), they are not
very useful and do not appear often in normal applications, so very useful and do not appear often in normal applications, so
this option does little harm in practice. But new users are this option does little harm in practice. But new users are
frequently confused about the semantics of expressions frequently confused about the semantics of expressions
involving null values, so this option is not on by default. involving null values, so this option is off by default.
</para> </para>
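    <para>
     A minimal demonstration (the temporary table exists only for
     illustration):
<programlisting>
CREATE TEMP TABLE t (x int);
INSERT INTO t VALUES (NULL), (1);
SELECT count(*) FROM t WHERE x = NULL;    -- 0: the comparison yields null, not true
SELECT count(*) FROM t WHERE x IS NULL;   -- 1: the SQL-standard test
SET transform_null_equals = on;
SELECT count(*) FROM t WHERE x = NULL;    -- 1: rewritten to x IS NULL
</programlisting>
    </para>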
<para> <para>
...@@ -5200,7 +5215,8 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' ...@@ -5200,7 +5215,8 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
less than the value of <literal>NAMEDATALEN</> when building less than the value of <literal>NAMEDATALEN</> when building
the server. The default value of <literal>NAMEDATALEN</> is the server. The default value of <literal>NAMEDATALEN</> is
64; therefore the default 64; therefore the default
<varname>max_identifier_length</varname> is 63 bytes. <varname>max_identifier_length</varname> is 63 bytes, which
can be less than 63 characters when using multi-byte encodings.
</para> </para>
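    <para>
     The effective limit can be inspected directly; for example:
<programlisting>
SHOW max_identifier_length;   -- 63 on a standard build (NAMEDATALEN = 64)
-- Longer identifiers are accepted but truncated (with a notice) to this many
-- bytes, which can be fewer than 63 characters in a multi-byte server encoding.
</programlisting>
    </para>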
</listitem> </listitem>
</varlistentry> </varlistentry>
...@@ -5355,8 +5371,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' ...@@ -5355,8 +5371,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
module for a specific class is loaded, it will add the proper variable module for a specific class is loaded, it will add the proper variable
definitions for its class name, convert any placeholder definitions for its class name, convert any placeholder
values according to those definitions, and issue warnings for any values according to those definitions, and issue warnings for any
placeholders of its class that remain (which presumably would be unrecognized placeholders of its class that remain.
misspelled configuration variables).
</para> </para>
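    <para>
     For example, assuming the class <literal>plruby</> (the one used in the
     example below) is listed in <varname>custom_variable_classes</>, a
     placeholder can be set before the module itself is loaded:
<programlisting>
SET plruby.use_strict = true;   -- kept as a placeholder string until plruby defines it
SHOW plruby.use_strict;
</programlisting>
    </para>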
<para> <para>
...@@ -5377,9 +5392,9 @@ plruby.use_strict = true # generates error: unknown class name ...@@ -5377,9 +5392,9 @@ plruby.use_strict = true # generates error: unknown class name
<para> <para>
The following parameters are intended for work on the The following parameters are intended for work on the
<productname>PostgreSQL</productname> source, and in some cases <productname>PostgreSQL</productname> source code, and in some cases
to assist with recovery of severely damaged databases. There to assist with recovery of severely damaged databases. There
should be no reason to use them in a production database setup. should be no reason to use them on a production database.
As such, they have been excluded from the sample As such, they have been excluded from the sample
<filename>postgresql.conf</> file. Note that many of these <filename>postgresql.conf</> file. Note that many of these
parameters require special source compilation flags to work at all. parameters require special source compilation flags to work at all.
...@@ -5445,7 +5460,7 @@ plruby.use_strict = true # generates error: unknown class name ...@@ -5445,7 +5460,7 @@ plruby.use_strict = true # generates error: unknown class name
<para> <para>
If nonzero, a delay of this many seconds occurs when a new If nonzero, a delay of this many seconds occurs when a new
server process is started, after it conducts the server process is started, after it conducts the
authentication procedure. This is intended to give an authentication procedure. This is intended to give developers an
opportunity to attach to the server process with a debugger. opportunity to attach to the server process with a debugger.
This parameter cannot be changed after session start. This parameter cannot be changed after session start.
</para> </para>
...@@ -5461,7 +5476,7 @@ plruby.use_strict = true # generates error: unknown class name ...@@ -5461,7 +5476,7 @@ plruby.use_strict = true # generates error: unknown class name
<para> <para>
If nonzero, a delay of this many seconds occurs just after a If nonzero, a delay of this many seconds occurs just after a
new server process is forked, before it conducts the new server process is forked, before it conducts the
authentication procedure. This is intended to give an authentication procedure. This is intended to give developers an
opportunity to attach to the server process with a debugger to opportunity to attach to the server process with a debugger to
trace down misbehavior in authentication. trace down misbehavior in authentication.
This parameter can only be set in the <filename>postgresql.conf</> This parameter can only be set in the <filename>postgresql.conf</>
...@@ -5482,7 +5497,7 @@ plruby.use_strict = true # generates error: unknown class name ...@@ -5482,7 +5497,7 @@ plruby.use_strict = true # generates error: unknown class name
commands. <xref linkend="guc-client-min-messages"> or commands. <xref linkend="guc-client-min-messages"> or
<xref linkend="guc-log-min-messages"> must be <xref linkend="guc-log-min-messages"> must be
<literal>DEBUG1</literal> or lower to send this output to the <literal>DEBUG1</literal> or lower to send this output to the
client or server log, respectively. client or server logs, respectively.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
...@@ -5719,9 +5734,9 @@ plruby.use_strict = true # generates error: unknown class name ...@@ -5719,9 +5734,9 @@ plruby.use_strict = true # generates error: unknown class name
namely all the rows on the damaged page. But it allows you to get namely all the rows on the damaged page. But it allows you to get
past the error and retrieve rows from any undamaged pages that might past the error and retrieve rows from any undamaged pages that might
be present in the table. So it is useful for recovering data if be present in the table. So it is useful for recovering data if
corruption has occurred due to hardware or software error. You should corruption has occurred due to a hardware or software error. You should
generally not set this on until you have given up hope of recovering generally not set this on until you have given up hope of recovering
data from the damaged page(s) of a table. The data from the damaged pages of a table. The
default setting is <literal>off</>, and it can only be changed default setting is <literal>off</>, and it can only be changed
by a superuser. by a superuser.
</para> </para>
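    <para>
     A last-resort salvage sketch (the table name is hypothetical; rows on
     the zeroed pages are lost, so copy out the surviving data immediately):
<programlisting>
SET zero_damaged_pages = on;                 -- superuser only; affects this session
COPY damaged_table TO '/tmp/salvage.copy';   -- damaged pages are zeroed as they are read
SET zero_damaged_pages = off;
</programlisting>
    </para>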
......
<!-- $PostgreSQL: pgsql/doc/src/sgml/diskusage.sgml,v 1.18 2007/01/31 20:56:16 momjian Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/diskusage.sgml,v 1.19 2010/02/03 17:25:05 momjian Exp $ -->
<chapter id="diskusage"> <chapter id="diskusage">
<title>Monitoring Disk Usage</title> <title>Monitoring Disk Usage</title>
...@@ -18,10 +18,10 @@ ...@@ -18,10 +18,10 @@
<para> <para>
Each table has a primary heap disk file where most of the data is Each table has a primary heap disk file where most of the data is
stored. If the table has any columns with potentially-wide values, stored. If the table has any columns with potentially-wide values,
there is also a <acronym>TOAST</> file associated with the table, there also might be a <acronym>TOAST</> file associated with the table,
which is used to store values too wide to fit comfortably in the main which is used to store values too wide to fit comfortably in the main
table (see <xref linkend="storage-toast">). There will be one index on the table (see <xref linkend="storage-toast">). There will be one index on the
<acronym>TOAST</> table, if present. There might also be indexes associated <acronym>TOAST</> table, if present. There also might be indexes associated
with the base table. Each table and index is stored in a separate disk with the base table. Each table and index is stored in a separate disk
file &mdash; possibly more than one file, if the file would exceed one file &mdash; possibly more than one file, if the file would exceed one
gigabyte. Naming conventions for these files are described in <xref gigabyte. Naming conventions for these files are described in <xref
...@@ -29,7 +29,7 @@ ...@@ -29,7 +29,7 @@
</para> </para>
<para> <para>
You can monitor disk space from three ways: using You can monitor disk space in three ways: using
SQL functions listed in <xref linkend="functions-admin-dbsize">, SQL functions listed in <xref linkend="functions-admin-dbsize">,
using <command>VACUUM</> information, and from the command line using <command>VACUUM</> information, and from the command line
using the tools in <filename>contrib/oid2name</>. The SQL functions using the tools in <filename>contrib/oid2name</>. The SQL functions
...@@ -60,13 +60,15 @@ SELECT relfilenode, relpages FROM pg_class WHERE relname = 'customer'; ...@@ -60,13 +60,15 @@ SELECT relfilenode, relpages FROM pg_class WHERE relname = 'customer';
like the following: like the following:
<programlisting> <programlisting>
SELECT relname, relpages SELECT relname, relpages
FROM pg_class, FROM pg_class,
(SELECT reltoastrelid FROM pg_class (SELECT reltoastrelid
WHERE relname = 'customer') ss FROM pg_class
WHERE oid = ss.reltoastrelid WHERE relname = 'customer') AS ss
OR oid = (SELECT reltoastidxid FROM pg_class WHERE oid = ss.reltoastrelid OR
oid = (SELECT reltoastidxid
FROM pg_class
WHERE oid = ss.reltoastrelid) WHERE oid = ss.reltoastrelid)
ORDER BY relname; ORDER BY relname;
relname | relpages relname | relpages
----------------------+---------- ----------------------+----------
...@@ -79,11 +81,11 @@ SELECT relname, relpages ...@@ -79,11 +81,11 @@ SELECT relname, relpages
You can easily display index sizes, too: You can easily display index sizes, too:
<programlisting> <programlisting>
SELECT c2.relname, c2.relpages SELECT c2.relname, c2.relpages
FROM pg_class c, pg_class c2, pg_index i FROM pg_class c, pg_class c2, pg_index i
WHERE c.relname = 'customer' WHERE c.relname = 'customer' AND
AND c.oid = i.indrelid c.oid = i.indrelid AND
AND c2.oid = i.indexrelid c2.oid = i.indexrelid
ORDER BY c2.relname; ORDER BY c2.relname;
relname | relpages relname | relpages
----------------------+---------- ----------------------+----------
...@@ -95,7 +97,9 @@ SELECT c2.relname, c2.relpages ...@@ -95,7 +97,9 @@ SELECT c2.relname, c2.relpages
It is easy to find your largest tables and indexes using this It is easy to find your largest tables and indexes using this
information: information:
<programlisting> <programlisting>
SELECT relname, relpages FROM pg_class ORDER BY relpages DESC; SELECT relname, relpages
FROM pg_class
ORDER BY relpages DESC;
relname | relpages relname | relpages
----------------------+---------- ----------------------+----------
...@@ -105,9 +109,8 @@ SELECT relname, relpages FROM pg_class ORDER BY relpages DESC; ...@@ -105,9 +109,8 @@ SELECT relname, relpages FROM pg_class ORDER BY relpages DESC;
</para> </para>
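  <para>
   The size functions mentioned above report sizes directly, with no page
   arithmetic required; for example, using the same sample table:
<programlisting>
SELECT pg_size_pretty(pg_total_relation_size('customer'));    -- table plus TOAST and indexes
SELECT pg_size_pretty(pg_database_size(current_database()));
</programlisting>
  </para>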
<para> <para>
You can also use <filename>contrib/oid2name</> to show disk usage. See You can also use <filename>contrib/oid2name</> to show disk usage; see
<filename>README.oid2name</> in that directory for examples. It includes a script that <xref linkend="oid2name"> for more details and examples.
shows disk usage for each database.
</para> </para>
</sect1> </sect1>
...@@ -116,7 +119,7 @@ SELECT relname, relpages FROM pg_class ORDER BY relpages DESC; ...@@ -116,7 +119,7 @@ SELECT relname, relpages FROM pg_class ORDER BY relpages DESC;
<para> <para>
The most important disk monitoring task of a database administrator The most important disk monitoring task of a database administrator
is to make sure the disk doesn't grow full. A filled data disk will is to make sure the disk doesn't become full. A filled data disk will
not result in data corruption, but it might prevent useful activity not result in data corruption, but it might prevent useful activity
from occurring. If the disk holding the WAL files grows full, database from occurring. If the disk holding the WAL files grows full, database
server panic and consequent shutdown might occur. server panic and consequent shutdown might occur.
...@@ -140,7 +143,7 @@ SELECT relname, relpages FROM pg_class ORDER BY relpages DESC; ...@@ -140,7 +143,7 @@ SELECT relname, relpages FROM pg_class ORDER BY relpages DESC;
If your system supports per-user disk quotas, then the database If your system supports per-user disk quotas, then the database
will naturally be subject to whatever quota is placed on the user will naturally be subject to whatever quota is placed on the user
the server runs as. Exceeding the quota will have the same bad the server runs as. Exceeding the quota will have the same bad
effects as running out of space entirely. effects as running out of disk space entirely.
</para> </para>
</sect1> </sect1>
</chapter> </chapter>
<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.36 2010/01/15 09:18:59 heikki Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.37 2010/02/03 17:25:05 momjian Exp $ -->
<chapter id="high-availability"> <chapter id="high-availability">
<title>High Availability, Load Balancing, and Replication</title> <title>High Availability, Load Balancing, and Replication</title>
...@@ -67,7 +67,7 @@ ...@@ -67,7 +67,7 @@
<para> <para>
Performance must be considered in any choice. There is usually a Performance must be considered in any choice. There is usually a
trade-off between functionality and trade-off between functionality and
performance. For example, a full synchronous solution over a slow performance. For example, a fully synchronous solution over a slow
network might cut performance by more than half, while an asynchronous network might cut performance by more than half, while an asynchronous
one might have a minimal performance impact. one might have a minimal performance impact.
</para> </para>
...@@ -89,7 +89,7 @@ ...@@ -89,7 +89,7 @@
Shared disk failover avoids synchronization overhead by having only one Shared disk failover avoids synchronization overhead by having only one
copy of the database. It uses a single disk array that is shared by copy of the database. It uses a single disk array that is shared by
multiple servers. If the main database server fails, the standby server multiple servers. If the main database server fails, the standby server
is able to mount and start the database as though it was recovering from is able to mount and start the database as though it were recovering from
a database crash. This allows rapid failover with no data loss. a database crash. This allows rapid failover with no data loss.
</para> </para>
...@@ -149,7 +149,7 @@ protocol to make nodes agree on a serializable transactional order. ...@@ -149,7 +149,7 @@ protocol to make nodes agree on a serializable transactional order.
<para> <para>
A PITR warm standby server can be kept more up-to-date using the A PITR warm standby server can be kept more up-to-date using the
streaming replication feature built into <productname>PostgreSQL</> 8.5 streaming replication feature built into <productname>PostgreSQL</> 8.5
onwards. onwards; see <xref linkend="warm-standby">.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
...@@ -190,7 +190,7 @@ protocol to make nodes agree on a serializable transactional order. ...@@ -190,7 +190,7 @@ protocol to make nodes agree on a serializable transactional order.
<para> <para>
If queries are simply broadcast unmodified, functions like If queries are simply broadcast unmodified, functions like
<function>random()</>, <function>CURRENT_TIMESTAMP</>, and <function>random()</>, <function>CURRENT_TIMESTAMP</>, and
sequences would have different values on different servers. sequences can have different values on different servers.
This is because each server operates independently, and because This is because each server operates independently, and because
SQL queries are broadcast (and not actual modified rows). If SQL queries are broadcast (and not actual modified rows). If
this is unacceptable, either the middleware or the application this is unacceptable, either the middleware or the application
......
<!-- $PostgreSQL: pgsql/doc/src/sgml/install-win32.sgml,v 1.55 2010/01/12 20:13:32 mha Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/install-win32.sgml,v 1.56 2010/02/03 17:25:05 momjian Exp $ -->
<chapter id="install-win32"> <chapter id="install-win32">
<title>Installation from Source Code on <productname>Windows</productname></title> <title>Installation from Source Code on <productname>Windows</productname></title>
...@@ -388,7 +388,7 @@ ...@@ -388,7 +388,7 @@
</userinput> </userinput>
</screen> </screen>
To change the schedule used (default is the parallel), append it to the To change the schedule used (default is parallel), append it to the
command line like: command line like:
<screen> <screen>
<userinput> <userinput>
...@@ -544,9 +544,10 @@ ...@@ -544,9 +544,10 @@
Normally you do not need to install any of the client files. You should Normally you do not need to install any of the client files. You should
place the <filename>libpq.dll</filename> file in the same directory place the <filename>libpq.dll</filename> file in the same directory
as your applications executable file. Do not install as your applications executable file. Do not install
<filename>libpq.dll</filename> into your Windows, System or System32 <filename>libpq.dll</filename> into your <filename>Windows</>,
directory unless absolutely necessary. <filename>System</> or <filename>System32</> directory unless
If this file is installed using a setup program, it should absolutely necessary.
If this file is installed using a setup program, then it should
be installed with version checking using the be installed with version checking using the
<symbol>VERSIONINFO</symbol> resource included in the file, to <symbol>VERSIONINFO</symbol> resource included in the file, to
ensure that a newer version of the library is not overwritten. ensure that a newer version of the library is not overwritten.
......
<!-- $PostgreSQL: pgsql/doc/src/sgml/installation.sgml,v 1.340 2010/01/28 23:59:52 adunstan Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/installation.sgml,v 1.341 2010/02/03 17:25:05 momjian Exp $ -->
<chapter id="installation"> <chapter id="installation">
<title><![%standalone-include[<productname>PostgreSQL</>]]> <title><![%standalone-include[<productname>PostgreSQL</>]]>
...@@ -1106,7 +1106,7 @@ su - postgres ...@@ -1106,7 +1106,7 @@ su - postgres
a larger segment size. This can be helpful to reduce the number of a larger segment size. This can be helpful to reduce the number of
file descriptors consumed when working with very large tables. file descriptors consumed when working with very large tables.
But be careful not to select a value larger than is supported But be careful not to select a value larger than is supported
by your platform and the filesystem(s) you intend to use. Other by your platform and the file systems you intend to use. Other
tools you might wish to use, such as <application>tar</>, could tools you might wish to use, such as <application>tar</>, could
also set limits on the usable file size. also set limits on the usable file size.
It is recommended, though not absolutely required, that this value It is recommended, though not absolutely required, that this value
......
<!-- $PostgreSQL: pgsql/doc/src/sgml/maintenance.sgml,v 1.97 2009/11/16 21:32:06 tgl Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/maintenance.sgml,v 1.98 2010/02/03 17:25:05 momjian Exp $ -->
<chapter id="maintenance"> <chapter id="maintenance">
<title>Routine Database Maintenance Tasks</title> <title>Routine Database Maintenance Tasks</title>
...@@ -17,13 +17,13 @@ ...@@ -17,13 +17,13 @@
discussed here are <emphasis>required</emphasis>, but they discussed here are <emphasis>required</emphasis>, but they
are repetitive in nature and can easily be automated using standard are repetitive in nature and can easily be automated using standard
tools such as <application>cron</application> scripts or tools such as <application>cron</application> scripts or
Windows' <application>Task Scheduler</>. But it is the database Windows' <application>Task Scheduler</>. It is the database
administrator's responsibility to set up appropriate scripts, and to administrator's responsibility to set up appropriate scripts, and to
check that they execute successfully. check that they execute successfully.
</para> </para>
<para> <para>
One obvious maintenance task is creation of backup copies of the data on a One obvious maintenance task is the creation of backup copies of the data on a
regular schedule. Without a recent backup, you have no chance of recovery regular schedule. Without a recent backup, you have no chance of recovery
after a catastrophe (disk failure, fire, mistakenly dropping a critical after a catastrophe (disk failure, fire, mistakenly dropping a critical
table, etc.). The backup and recovery mechanisms available in table, etc.). The backup and recovery mechanisms available in
...@@ -118,7 +118,7 @@ ...@@ -118,7 +118,7 @@
the standard form of <command>VACUUM</> can run in parallel with production the standard form of <command>VACUUM</> can run in parallel with production
database operations. (Commands such as <command>SELECT</command>, database operations. (Commands such as <command>SELECT</command>,
<command>INSERT</command>, <command>UPDATE</command>, and <command>INSERT</command>, <command>UPDATE</command>, and
<command>DELETE</command> will continue to function as normal, though you <command>DELETE</command> will continue to function normally, though you
will not be able to modify the definition of a table with commands such as will not be able to modify the definition of a table with commands such as
<command>ALTER TABLE</command> while it is being vacuumed.) <command>ALTER TABLE</command> while it is being vacuumed.)
<command>VACUUM FULL</> requires exclusive lock on the table it is <command>VACUUM FULL</> requires exclusive lock on the table it is
...@@ -151,11 +151,11 @@ ...@@ -151,11 +151,11 @@
<command>UPDATE</> or <command>DELETE</> of a row does not <command>UPDATE</> or <command>DELETE</> of a row does not
immediately remove the old version of the row. immediately remove the old version of the row.
This approach is necessary to gain the benefits of multiversion This approach is necessary to gain the benefits of multiversion
concurrency control (see <xref linkend="mvcc">): the row version concurrency control (<acronym>MVCC</>, see <xref linkend="mvcc">): the row version
must not be deleted while it is still potentially visible to other must not be deleted while it is still potentially visible to other
transactions. But eventually, an outdated or deleted row version is no transactions. But eventually, an outdated or deleted row version is no
longer of interest to any transaction. The space it occupies must then be longer of interest to any transaction. The space it occupies must then be
reclaimed for reuse by new rows, to avoid infinite growth of disk reclaimed for reuse by new rows, to avoid unbounded growth of disk
space requirements. This is done by running <command>VACUUM</>. space requirements. This is done by running <command>VACUUM</>.
</para> </para>
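  <para>
   A minimal sketch of the two forms (the table name is hypothetical):
<programlisting>
VACUUM VERBOSE ANALYZE customer;   -- plain VACUUM: runs alongside normal reads and writes
VACUUM FULL customer;              -- requires an exclusive lock; reserve it for severe bloat
</programlisting>
  </para>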
...@@ -309,14 +309,14 @@ ...@@ -309,14 +309,14 @@
statistics more frequently than others if your application requires it. statistics more frequently than others if your application requires it.
In practice, however, it is usually best to just analyze the entire In practice, however, it is usually best to just analyze the entire
database, because it is a fast operation. <command>ANALYZE</> uses a database, because it is a fast operation. <command>ANALYZE</> uses a
statistical random sampling of the rows of a table rather than reading statistically random sampling of the rows of a table rather than reading
every single row. every single row.
</para> </para>
<tip> <tip>
<para> <para>
Although per-column tweaking of <command>ANALYZE</> frequency might not be Although per-column tweaking of <command>ANALYZE</> frequency might not be
very productive, you might well find it worthwhile to do per-column very productive, you might find it worthwhile to do per-column
adjustment of the level of detail of the statistics collected by adjustment of the level of detail of the statistics collected by
<command>ANALYZE</>. Columns that are heavily used in <literal>WHERE</> <command>ANALYZE</>. Columns that are heavily used in <literal>WHERE</>
clauses and have highly irregular data distributions might require a clauses and have highly irregular data distributions might require a
...@@ -341,11 +341,11 @@ ...@@ -341,11 +341,11 @@
numbers: a row version with an insertion XID greater than the current numbers: a row version with an insertion XID greater than the current
transaction's XID is <quote>in the future</> and should not be visible transaction's XID is <quote>in the future</> and should not be visible
to the current transaction. But since transaction IDs have limited size to the current transaction. But since transaction IDs have limited size
(32 bits at this writing) a cluster that runs for a long time (more (32 bits), a cluster that runs for a long time (more
than 4 billion transactions) would suffer <firstterm>transaction ID than 4 billion transactions) would suffer <firstterm>transaction ID
wraparound</>: the XID counter wraps around to zero, and all of a sudden wraparound</>: the XID counter wraps around to zero, and all of a sudden
transactions that were in the past appear to be in the future &mdash; which transactions that were in the past appear to be in the future &mdash; which
means their outputs become invisible. In short, catastrophic data loss. means their output becomes invisible. In short, catastrophic data loss.
(Actually the data is still there, but that's cold comfort if you cannot (Actually the data is still there, but that's cold comfort if you cannot
get at it.) To avoid this, it is necessary to vacuum every table get at it.) To avoid this, it is necessary to vacuum every table
in every database at least once every two billion transactions. in every database at least once every two billion transactions.
...@@ -353,8 +353,9 @@ ...@@ -353,8 +353,9 @@
<para> <para>
The reason that periodic vacuuming solves the problem is that The reason that periodic vacuuming solves the problem is that
<productname>PostgreSQL</productname> distinguishes a special XID <productname>PostgreSQL</productname> reserves a special XID
<literal>FrozenXID</>. This XID is always considered older as <literal>FrozenXID</>. This XID does not follow the normal XID
comparison rules and is always considered older
than every normal XID. Normal XIDs are than every normal XID. Normal XIDs are
compared using modulo-2<superscript>31</> arithmetic. This means compared using modulo-2<superscript>31</> arithmetic. This means
that for every normal XID, there are two billion XIDs that are that for every normal XID, there are two billion XIDs that are
...@@ -365,12 +366,12 @@ ...@@ -365,12 +366,12 @@
the next two billion transactions, no matter which normal XID we are the next two billion transactions, no matter which normal XID we are
talking about. If the row version still exists after more than two billion talking about. If the row version still exists after more than two billion
transactions, it will suddenly appear to be in the future. To transactions, it will suddenly appear to be in the future. To
prevent data loss, old row versions must be reassigned the XID prevent this, old row versions must be reassigned the XID
<literal>FrozenXID</> sometime before they reach the <literal>FrozenXID</> sometime before they reach the
two-billion-transactions-old mark. Once they are assigned this two-billion-transactions-old mark. Once they are assigned this
special XID, they will appear to be <quote>in the past</> to all special XID, they will appear to be <quote>in the past</> to all
normal transactions regardless of wraparound issues, and so such normal transactions regardless of wraparound issues, and so such
row versions will be good until deleted, no matter how long that is. row versions will be valid until deleted, no matter how long that is.
This reassignment of old XIDs is handled by <command>VACUUM</>. This reassignment of old XIDs is handled by <command>VACUUM</>.
</para> </para>
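  <para>
   Whether freezing is keeping up can be checked from SQL; for example:
<programlisting>
SELECT datname, age(datfrozenxid) FROM pg_database;   -- transactions since each database's freeze horizon
SELECT relname, age(relfrozenxid)
  FROM pg_class
 WHERE relkind = 'r'
 ORDER BY age(relfrozenxid) DESC
 LIMIT 10;                                            -- tables closest to needing an anti-wraparound vacuum
</programlisting>
  </para>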
...@@ -398,14 +399,14 @@ ...@@ -398,14 +399,14 @@
<para> <para>
The maximum time that a table can go unvacuumed is two billion The maximum time that a table can go unvacuumed is two billion
transactions minus the <varname>vacuum_freeze_min_age</> that was used transactions minus the <varname>vacuum_freeze_min_age</> value at
when <command>VACUUM</> last scanned the whole table. If it were to go the time <command>VACUUM</> last scanned the whole table. If it were to go
unvacuumed for longer than unvacuumed for longer than
that, data loss could result. To ensure that this does not happen, that, data loss could result. To ensure that this does not happen,
autovacuum is invoked on any table that might contain XIDs older than the autovacuum is invoked on any table that might contain XIDs older than the
age specified by the configuration parameter <xref age specified by the configuration parameter <xref
linkend="guc-autovacuum-freeze-max-age">. (This will happen even if linkend="guc-autovacuum-freeze-max-age">. (This will happen even if
autovacuum is otherwise disabled.) autovacuum is disabled.)
</para> </para>
<para> <para>
...@@ -416,10 +417,10 @@ ...@@ -416,10 +417,10 @@
For tables that are regularly vacuumed for space reclamation purposes, For tables that are regularly vacuumed for space reclamation purposes,
this is of little importance. However, for static tables this is of little importance. However, for static tables
(including tables that receive inserts, but no updates or deletes), (including tables that receive inserts, but no updates or deletes),
there is no need for vacuuming for space reclamation, and so it can there is no need to vacuum for space reclamation, so it can
be useful to try to maximize the interval between forced autovacuums be useful to try to maximize the interval between forced autovacuums
on very large static tables. Obviously one can do this either by on very large static tables. Obviously one can do this either by
increasing <varname>autovacuum_freeze_max_age</> or by decreasing increasing <varname>autovacuum_freeze_max_age</> or decreasing
<varname>vacuum_freeze_min_age</>. <varname>vacuum_freeze_min_age</>.
</para> </para>
...@@ -444,10 +445,10 @@ ...@@ -444,10 +445,10 @@
The sole disadvantage of increasing <varname>autovacuum_freeze_max_age</> The sole disadvantage of increasing <varname>autovacuum_freeze_max_age</>
(and <varname>vacuum_freeze_table_age</> along with it) (and <varname>vacuum_freeze_table_age</> along with it)
is that the <filename>pg_clog</> subdirectory of the database cluster is that the <filename>pg_clog</> subdirectory of the database cluster
will take more space, because it must store the commit status for all will take more space, because it must store the commit status of all
transactions back to the <varname>autovacuum_freeze_max_age</> horizon. transactions back to the <varname>autovacuum_freeze_max_age</> horizon.
The commit status uses two bits per transaction, so if The commit status uses two bits per transaction, so if
<varname>autovacuum_freeze_max_age</> has its maximum allowed value of <varname>autovacuum_freeze_max_age</> is set to its maximum allowed value of
a little less than two billion, <filename>pg_clog</> can be expected to a little less than two billion, <filename>pg_clog</> can be expected to
grow to about half a gigabyte. If this is trivial compared to your grow to about half a gigabyte. If this is trivial compared to your
total database size, setting <varname>autovacuum_freeze_max_age</> to total database size, setting <varname>autovacuum_freeze_max_age</> to
...@@ -530,7 +531,7 @@ HINT: To avoid a database shutdown, execute a database-wide VACUUM in "mydb". ...@@ -530,7 +531,7 @@ HINT: To avoid a database shutdown, execute a database-wide VACUUM in "mydb".
superuser, else it will fail to process system catalogs and thus not superuser, else it will fail to process system catalogs and thus not
be able to advance the database's <structfield>datfrozenxid</>.) be able to advance the database's <structfield>datfrozenxid</>.)
If these warnings are If these warnings are
ignored, the system will shut down and refuse to execute any new ignored, the system will shut down and refuse to start any new
transactions once there are fewer than 1 million transactions left transactions once there are fewer than 1 million transactions left
until wraparound: until wraparound:
...@@ -592,14 +593,14 @@ HINT: Stop the postmaster and use a standalone backend to VACUUM in "mydb". ...@@ -592,14 +593,14 @@ HINT: Stop the postmaster and use a standalone backend to VACUUM in "mydb".
The <xref linkend="guc-autovacuum-max-workers"> setting limits how many The <xref linkend="guc-autovacuum-max-workers"> setting limits how many
workers may be running at any time. If several large tables all become workers may be running at any time. If several large tables all become
eligible for vacuuming in a short amount of time, all autovacuum workers eligible for vacuuming in a short amount of time, all autovacuum workers
may become occupied with vacuuming those tables for a long period. might become occupied with vacuuming those tables for a long period.
This would result This would result
in other tables and databases not being vacuumed until a worker became in other tables and databases not being vacuumed until a worker became
available. There is not a limit on how many workers might be in a available. There is no limit on how many workers might be in a
single database, but workers do try to avoid repeating work that has single database, but workers do try to avoid repeating work that has
already been done by other workers. Note that the number of running already been done by other workers. Note that the number of running
workers does not count towards the <xref linkend="guc-max-connections"> nor workers does not count towards <xref linkend="guc-max-connections"> or
the <xref linkend="guc-superuser-reserved-connections"> limits. <xref linkend="guc-superuser-reserved-connections"> limits.
</para> </para>
<para> <para>
...@@ -699,36 +700,26 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu ...@@ -699,36 +700,26 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu
</para> </para>
<para> <para>
In <productname>PostgreSQL</> releases before 7.4, periodic reindexing Index pages that have become
was frequently necessary to avoid <quote>index bloat</>, due to lack of completely empty are reclaimed for re-use. However, there is still the possibility
internal space reclamation in B-tree indexes. Any situation in which the of inefficient use of space: if all but a few index keys on a page have
range of index keys changed over time &mdash; for example, an index on been deleted, the page remains allocated. Therefore, a usage
timestamps in a table where old entries are eventually deleted &mdash; pattern in which most, but not all, keys in each range are eventually
would result in bloat, because index pages for no-longer-needed portions deleted will see poor use of space. For such usage patterns,
of the key range were not reclaimed for re-use. Over time, the index size periodic reindexing is recommended.
could become indefinitely much larger than the amount of useful data in it.
</para>
<para>
In <productname>PostgreSQL</> 7.4 and later, index pages that have become
completely empty are reclaimed for re-use. There is still a possibility
for inefficient use of space: if all but a few index keys on a page have
been deleted, the page remains allocated. So a usage pattern in which all
but a few keys in each range are eventually deleted will see poor use of
space. For such usage patterns, periodic reindexing is recommended.
</para> </para>
<para> <para>
The potential for bloat in non-B-tree indexes has not been well The potential for bloat in non-B-tree indexes has not been well
characterized. It is a good idea to keep an eye on the index's physical researched. It is a good idea to periodically monitor the index's physical
size when using any non-B-tree index type. size when using any non-B-tree index type.
</para> </para>
<para> <para>
Also, for B-tree indexes a freshly-constructed index is somewhat faster to Also, for B-tree indexes, a freshly-constructed index is slightly faster to
access than one that has been updated many times, because logically access than one that has been updated many times because logically
adjacent pages are usually also physically adjacent in a newly built index. adjacent pages are usually also physically adjacent in a newly built index.
(This consideration does not currently apply to non-B-tree indexes.) It (This consideration does not apply to non-B-tree indexes.) It
might be worthwhile to reindex periodically just to improve access speed. might be worthwhile to reindex periodically just to improve access speed.
</para> </para>
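  <para>
   For example (the index and table names are hypothetical; note that
   <command>REINDEX</> locks out writes to the table while it runs):
<programlisting>
REINDEX INDEX customer_pkey;   -- rebuild one index
REINDEX TABLE customer;        -- rebuild every index on the table
</programlisting>
  </para>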
</sect1> </sect1>
...@@ -744,11 +735,11 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu ...@@ -744,11 +735,11 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu
<para> <para>
It is a good idea to save the database server's log output It is a good idea to save the database server's log output
somewhere, rather than just routing it to <filename>/dev/null</>. somewhere, rather than just discarding it via <filename>/dev/null</>.
The log output is invaluable when it comes time to diagnose The log output is invaluable when diagnosing
problems. However, the log output tends to be voluminous problems. However, the log output tends to be voluminous
(especially at higher debug levels) and you won't want to save it (especially at higher debug levels), so you won't want to save it
indefinitely. You need to <quote>rotate</> the log files so that indefinitely. You need to <emphasis>rotate</> the log files so that
new log files are started and old ones removed after a reasonable new log files are started and old ones removed after a reasonable
period of time. period of time.
</para> </para>
...@@ -758,7 +749,7 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu ...@@ -758,7 +749,7 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu
<command>postgres</command> into a <command>postgres</command> into a
file, you will have log output, but file, you will have log output, but
the only way to truncate the log file is to stop and restart the only way to truncate the log file is to stop and restart
the server. This might be OK if you are using the server. This might be acceptable if you are using
<productname>PostgreSQL</productname> in a development environment, <productname>PostgreSQL</productname> in a development environment,
but few production servers would find this behavior acceptable. but few production servers would find this behavior acceptable.
</para> </para>
...@@ -766,17 +757,18 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu ...@@ -766,17 +757,18 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu
<para> <para>
A better approach is to send the server's A better approach is to send the server's
<systemitem>stderr</> output to some type of log rotation program. <systemitem>stderr</> output to some type of log rotation program.
There is a built-in log rotation program, which you can use by There is a built-in log rotation facility, which you can use by
setting the configuration parameter <literal>logging_collector</> to setting the configuration parameter <literal>logging_collector</> to
<literal>true</> in <filename>postgresql.conf</>. The control <literal>true</> in <filename>postgresql.conf</>. The control
parameters for this program are described in <xref parameters for this program are described in <xref
linkend="runtime-config-logging-where">. You can also use this approach linkend="runtime-config-logging-where">. You can also use this approach
to capture the log data in machine readable CSV format. to capture the log data in machine readable <acronym>CSV</>
(comma-separated values) format.
</para> </para>
<para> <para>
Alternatively, you might prefer to use an external log rotation Alternatively, you might prefer to use an external log rotation
program, if you have one that you are already using with other program if you have one that you are already using with other
server software. For example, the <application>rotatelogs</application> server software. For example, the <application>rotatelogs</application>
tool included in the <productname>Apache</productname> distribution tool included in the <productname>Apache</productname> distribution
can be used with <productname>PostgreSQL</productname>. To do this, can be used with <productname>PostgreSQL</productname>. To do this,
...@@ -794,7 +786,7 @@ pg_ctl start | rotatelogs /var/log/pgsql_log 86400 ...@@ -794,7 +786,7 @@ pg_ctl start | rotatelogs /var/log/pgsql_log 86400
<para> <para>
Another production-grade approach to managing log output is to Another production-grade approach to managing log output is to
send it all to <application>syslog</> and let send it to <application>syslog</> and let
<application>syslog</> deal with file rotation. To do this, set the <application>syslog</> deal with file rotation. To do this, set the
configuration parameter <literal>log_destination</> to <literal>syslog</> configuration parameter <literal>log_destination</> to <literal>syslog</>
(to log to <application>syslog</> only) in (to log to <application>syslog</> only) in
...@@ -810,15 +802,15 @@ pg_ctl start | rotatelogs /var/log/pgsql_log 86400 ...@@ -810,15 +802,15 @@ pg_ctl start | rotatelogs /var/log/pgsql_log 86400
On many systems, however, <application>syslog</> is not very reliable, On many systems, however, <application>syslog</> is not very reliable,
particularly with large log messages; it might truncate or drop messages particularly with large log messages; it might truncate or drop messages
just when you need them the most. Also, on <productname>Linux</>, just when you need them the most. Also, on <productname>Linux</>,
<application>syslog</> will sync each message to disk, yielding poor <application>syslog</> will flush each message to disk, yielding poor
performance. (You can use a <literal>-</> at the start of the file name performance. (You can use a <quote><literal>-</></> at the start of the file name
in the <application>syslog</> configuration file to disable syncing.) in the <application>syslog</> configuration file to disable syncing.)
</para> </para>
   <para>
    Note that all the solutions described above take care of starting new
    log files at configurable intervals, but they do not handle deletion
    of old, no-longer-useful log files.  You will probably want to set
    up a batch job to periodically delete old log files.  Another possibility
    is to configure the rotation program so that old log files are overwritten
    cyclically.
   </para>
...
<!-- $PostgreSQL: pgsql/doc/src/sgml/manage-ag.sgml,v 2.61 2010/02/03 17:25:05 momjian Exp $ -->
<chapter id="managing-databases">
 <title>Managing Databases</title>
@@ -25,7 +25,7 @@
   A database is a named collection of <acronym>SQL</acronym> objects
   (<quote>database objects</quote>).  Generally, every database
   object (tables, functions, etc.) belongs to one and only one
   database.  (However, there are a few system catalogs, for example
   <literal>pg_database</>, that belong to a whole cluster and
   are accessible from each database within the cluster.)  More
   accurately, a database is a collection of schemas and the schemas
@@ -38,15 +38,15 @@
   When connecting to the database server, a client must specify in
   its connection request the name of the database it wants to connect
   to.  It is not possible to access more than one database per
   connection.  However, an application is not restricted in the number of
   connections it opens to the same or other databases.  Databases are
   physically separated and access control is managed at the
   connection level.  If one <productname>PostgreSQL</> server
   instance is to house projects or users that should be separate and
   for the most part unaware of each other, it is therefore
   recommended to put them into separate databases.  If the projects
   or users are interrelated and should be able to use each other's
   resources, they should be put in the same database but possibly
   into separate schemas.  Schemas are a purely logical structure and who can
   access what is managed by the privilege system.  More information about
   managing schemas is in <xref linkend="ddl-schemas">.
@@ -94,7 +94,7 @@ CREATE DATABASE <replaceable>name</>;
   where <replaceable>name</> follows the usual rules for
   <acronym>SQL</acronym> identifiers.  The current role automatically
   becomes the owner of the new database.  It is the privilege of the
   owner of a database to remove it later (which also removes all
   the objects in it, even if they have a different owner).
  </para>
@@ -123,14 +123,14 @@ CREATE DATABASE <replaceable>name</>;
   new database is created within the
   cluster, <literal>template1</literal> is essentially cloned.
   This means that any changes you make in <literal>template1</> are
   propagated to all subsequently created databases.  Because of this,
   avoid creating objects in <literal>template1</> unless you want them
   propagated to every newly created database.  More details
   appear in <xref linkend="manage-ag-templatedbs">.
  </para>
  <para>
   As a convenience, there is a program you can
   execute from the shell to create new databases,
   <command>createdb</>.<indexterm><primary>createdb</></>
@@ -143,8 +143,7 @@ createdb <replaceable class="parameter">dbname</replaceable>
   exactly as described above.
   The <xref linkend="app-createdb"> reference page contains the invocation
   details.  Note that <command>createdb</> without any arguments will create
   a database with the current user name.
  </para>
  <note>
@@ -155,8 +154,8 @@ createdb <replaceable class="parameter">dbname</replaceable>
  </note>
  <para>
   Sometimes you want to create a database for someone else, and have him
   become the owner of the new database, so he can
   configure and manage it himself.  To achieve that, use one of the
   following commands:
<programlisting>
@@ -167,7 +166,7 @@ CREATE DATABASE <replaceable>dbname</> OWNER <replaceable>rolename</>;
createdb -O <replaceable>rolename</> <replaceable>dbname</>
</programlisting>
   from the shell.
   Only a superuser is allowed to create a database for
   someone else (that is, for a role you are not a member of).
  </para>
 </sect1>
@@ -186,7 +185,7 @@ createdb -O <replaceable>rolename</> <replaceable>dbname</>
   objects in databases.  For example, if you install the procedural
   language <application>PL/Perl</> in <literal>template1</>, it will
   automatically be available in user databases without any extra
   action being taken when those databases are created.
  </para>
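  <para>
   For example, one way to make <application>PL/Perl</> available in every
   future database is to install it into <literal>template1</> once (this
   sketch assumes the PL/Perl support files are already installed on the
   server):
<programlisting>
createlang plperl template1
</programlisting>
  </para>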
@@ -204,7 +203,7 @@ createdb -O <replaceable>rolename</> <replaceable>dbname</>
   <literal>template1</>.  This is particularly handy when restoring a
   <literal>pg_dump</> dump: the dump script should be restored in a
   virgin database to ensure that one recreates the correct contents
   of the dumped database, without conflicting with objects that
   might have been added to <literal>template1</> later on.
  </para>
@@ -238,8 +237,8 @@ createdb -T template0 <replaceable>dbname</>
   The principal limitation is that no other sessions can be connected to
   the source database while it is being copied.  <command>CREATE
   DATABASE</> will fail if any other connection exists when it starts;
   during the copy operation, new connections to the source database
   are prevented.
  </para>
  <para>
@@ -251,9 +250,9 @@ createdb -T template0 <replaceable>dbname</>
   cloned by any user with <literal>CREATEDB</> privileges; if it is not set,
   only superusers and the owner of the database can clone it.
   If <literal>datallowconn</literal> is false, then no new connections
   to that database will be allowed (but existing sessions are not terminated
   simply by setting the flag false).  The <literal>template0</literal>
   database is normally marked <literal>datallowconn = false</> to prevent its modification.
   Both <literal>template0</literal> and <literal>template1</literal>
   should always be marked with <literal>datistemplate = true</>.
  </para>
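  <para>
   For illustration only, a superuser can adjust these per-database flags
   with an ordinary <command>UPDATE</> on <structname>pg_database</>; the
   database name used here is hypothetical:
<programlisting>
UPDATE pg_database SET datistemplate = true  WHERE datname = 'mytemplate';
UPDATE pg_database SET datallowconn = false WHERE datname = 'mytemplate';
</programlisting>
  </para>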
@@ -274,7 +273,7 @@ createdb -T template0 <replaceable>dbname</>
   The <literal>postgres</> database is also created when a database
   cluster is initialized.  This database is meant as a default database for
   users and applications to connect to.  It is simply a copy of
   <literal>template1</> and can be dropped and recreated if necessary.
  </para>
  </note>
 </sect1>
@@ -294,7 +293,7 @@ createdb -T template0 <replaceable>dbname</>
   <acronym>GEQO</acronym> optimizer for a given database, you'd
   ordinarily have to either disable it for all databases or make sure
   that every connecting client is careful to issue <literal>SET geqo
   TO off</literal>.  To make this setting the default within a particular
   database, you can execute the command:
<programlisting>
ALTER DATABASE mydb SET geqo TO off;
@@ -306,7 +305,7 @@ ALTER DATABASE mydb SET geqo TO off;
   Note that users can still alter this setting during their sessions; it
   will only be the default.  To undo any such setting, use
   <literal>ALTER DATABASE <replaceable>dbname</> RESET
   <replaceable>varname</></literal>.
  </para>
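  <para>
   Continuing the example above, the database-level default could later be
   removed with:
<programlisting>
ALTER DATABASE mydb RESET geqo;
</programlisting>
  </para>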
 </sect1>
@@ -387,7 +386,7 @@ dropdb <replaceable class="parameter">dbname</replaceable>
CREATE TABLESPACE fastspace LOCATION '/mnt/sda1/postgresql/data';
</programlisting>
   The location must be an existing, empty directory that is owned by
   the <productname>PostgreSQL</> operating system user.  All objects subsequently
   created within the tablespace will be stored in files underneath this
   directory.
  </para>
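  <para>
   For example, the directory used above could be prepared like this before
   running <command>CREATE TABLESPACE</> (the operating system user and group
   name <literal>postgres</> are an assumption; substitute whatever account
   your server runs under):
<screen>
root# <userinput>mkdir -p /mnt/sda1/postgresql/data</userinput>
root# <userinput>chown postgres:postgres /mnt/sda1/postgresql/data</userinput>
</screen>
  </para>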
@@ -405,7 +404,7 @@ CREATE TABLESPACE fastspace LOCATION '/mnt/sda1/postgresql/data';
  <para>
   Creation of the tablespace itself must be done as a database superuser,
   but after that you can allow ordinary database users to use it.
   To do that, grant them the <literal>CREATE</> privilege on it.
  </para>
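  <para>
   A minimal sketch, using the tablespace created above and a hypothetical
   role <literal>joe</>:
<programlisting>
GRANT CREATE ON TABLESPACE fastspace TO joe;
-- joe can now place objects in the tablespace, for example:
CREATE TABLE foo (i int) TABLESPACE fastspace;
</programlisting>
  </para>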
@@ -500,8 +499,8 @@ SELECT spcname FROM pg_tablespace;
   Although not recommended, it is possible to adjust the tablespace
   layout by hand by redefining these links.  Two warnings: do not do so
   while the server is running; and after you restart the server,
   update the <structname>pg_tablespace</> catalog with the new
   locations.  (If you do not, <literal>pg_dump</> will continue to output
   the old tablespace locations.)
  </para>
...
<!-- $PostgreSQL: pgsql/doc/src/sgml/monitoring.sgml,v 1.76 2010/02/03 17:25:05 momjian Exp $ -->
<chapter id="monitoring">
 <title>Monitoring Database Activity</title>
@@ -43,7 +43,7 @@
  </indexterm>
  <para>
   On most Unix platforms, <productname>PostgreSQL</productname> modifies its
   command title as reported by <command>ps</>, so that individual server
   processes can readily be identified.  A sample display is
@@ -61,7 +61,7 @@ postgres 1016 0.1 2.4 6532 3080 pts/1 SN 13:19 0:00 postgres: tgl reg
   platforms, as do the details of what is shown.  This example is from a
   recent Linux system.)  The first process listed here is the
   master server process.  The command arguments
   shown for it are the same ones used when it was launched.  The next two
   processes are background worker processes automatically launched by the
   master process.  (The <quote>stats collector</> process will not be present
   if you have set
@@ -73,22 +73,22 @@ postgres 1016 0.1 2.4 6532 3080 pts/1 SN 13:19 0:00 postgres: tgl reg
postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <replaceable>activity</>
</screen>
   The user, database, and (client) host items remain the same for
   the life of the client connection, but the activity indicator changes.
   The activity can be <literal>idle</> (i.e., waiting for a client command),
   <literal>idle in transaction</> (waiting for client inside a <command>BEGIN</> block),
   or a command type name such as <literal>SELECT</>.  Also,
   <literal>waiting</> is appended if the server process is presently waiting
   on a lock held by another session.  In the above example we can infer
   that process 1003 is waiting for process 1016 to complete its transaction and
   thereby release some lock.
  </para>
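  <para>
   As a quick illustration, on a typical Linux or BSD system the server
   processes can be listed with something like the following (assuming the
   server runs under an operating system account named
   <literal>postgres</>; the exact <command>ps</> flags vary between
   platforms):
<screen>
$ <userinput>ps auxww | grep ^postgres</userinput>
</screen>
  </para>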
  <para>
   If you have turned off <xref linkend="guc-update-process-title"> then the
   activity indicator is not updated; the process title is set only once
   when a new process is launched.  On some platforms this saves a measurable
   amount of per-command overhead; on others it's insignificant.
  </para>
  <tip>
@@ -118,15 +118,15 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
   is a subsystem that supports collection and reporting of information about
   server activity.  Presently, the collector can count accesses to tables
   and indexes in both disk-block and individual-row terms.  It also tracks
   the total number of rows in each table, and the last vacuum and analyze times
   for each table.  It can also count calls to user-defined functions and
   the total time spent in each one.
  </para>
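  <para>
   For example, table-level counters can be inspected through one of the
   standard statistics views, such as:
<programlisting>
SELECT relname, seq_scan, idx_scan, n_live_tup, last_vacuum
FROM pg_stat_user_tables
ORDER BY seq_scan DESC;
</programlisting>
   (This is only a sample query; the full set of views and columns is
   documented later in this chapter.)
  </para>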
  <para>
   <productname>PostgreSQL</productname> also supports reporting of the exact
   command currently being executed by other server processes.  This is a
   facility independent of the collector process.
  </para>
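  <para>
   Those current commands are visible in the <structname>pg_stat_activity</>
   view; a minimal example query, using the column names of this release, is:
<programlisting>
SELECT procpid, usename, current_query
FROM pg_stat_activity;
</programlisting>
  </para>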
 <sect2 id="monitoring-stats-setup">
@@ -172,7 +172,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
   When the postmaster shuts down, a permanent copy of the statistics
   data is stored in the <filename>global</filename> subdirectory.  For increased
   performance, the parameter <xref linkend="guc-stats-temp-directory"> can
   be pointed at a RAM-based file system, decreasing physical I/O requirements.
  </para>
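  <para>
   A sketch of that setup on Linux, assuming a <filename>tmpfs</> mount point
   of your own choosing (the path and size are only examples):
<programlisting>
# mount a small RAM-backed file system (add an equivalent line to /etc/fstab to make it permanent)
mount -t tmpfs -o size=64M tmpfs /var/run/pgsql_stats_tmp

# then, in postgresql.conf:
stats_temp_directory = '/var/run/pgsql_stats_tmp'
</programlisting>
   The directory must be writable by the database server's operating system
   user.
  </para>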
 </sect2>
@@ -205,9 +205,9 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
   any of these statistics, it first fetches the most recent report emitted by
   the collector process and then continues to use this snapshot for all
   statistical views and functions until the end of its current transaction.
   So the statistics will show static information as long as you continue the
   current transaction.  Similarly, information about the current queries of
   all sessions is collected when any such information is first requested
   within a transaction, and the same information will be displayed throughout
   the transaction.
   This is a feature, not a bug, because it allows you to perform several
@@ -1603,7 +1603,7 @@ Total time (ns) 2312105013
   SystemTap uses a different notation for trace scripts than DTrace does,
   even though the underlying trace points are compatible.  One point worth
   noting is that at this writing, SystemTap scripts must reference probe
   names using double underscores in place of hyphens.  This is expected to
   be fixed in future SystemTap releases.
  </para>
 </note>
...
<!-- $PostgreSQL: pgsql/doc/src/sgml/regress.sgml,v 1.65 2010/02/03 17:25:06 momjian Exp $ -->
<chapter id="regress">
 <title id="regress-title">Regression Tests</title>
@@ -26,17 +26,14 @@
   running server, or using a temporary installation within the build
   tree.  Furthermore, there is a <quote>parallel</quote> and a
   <quote>sequential</quote> mode for running the tests.  The
   sequential method runs each test script alone, while the
   parallel method starts up multiple server processes to run groups
   of tests in parallel.  Parallel testing gives confidence that
   interprocess communication and locking are working correctly.  (For
   historical reasons, the sequential test is usually run against an
   existing installation and the parallel method against a temporary
   installation, but there are no technical reasons for this.)
  </para>
  <para>
   To run the parallel regression tests after building but before installation,
   type:
<screen>
gmake check
@@ -44,7 +41,7 @@ gmake check
   in the top-level directory.  (Or you can change to
   <filename>src/test/regress</filename> and run the command there.)
   This will first build several auxiliary files, such as
   sample user-defined trigger functions, and then run the test driver
   script.  At the end you should see something like:
<screen>
<computeroutput>
@@ -206,9 +203,9 @@ gmake installcheck
  <para>
   If you run the tests against a server that was
   initialized with a collation-order locale other than C, then
   there might be differences due to sort order and subsequent
   failures.  The regression test suite is set up to handle this
   problem by providing alternate result files that together are
   known to handle a large number of locales.
  </para>
@@ -270,7 +267,7 @@ gmake check NO_LOCALE=1
   results involving mathematical functions of <type>double
   precision</type> columns have been observed.  The <literal>float8</> and
   <literal>geometry</> tests are particularly prone to small differences
   across platforms, or even with different compiler optimization settings.
   Human eyeball comparison is needed to determine the real
   significance of these differences, which are usually 10 places to
   the right of the decimal point.
@@ -298,10 +295,10 @@ different order than what appears in the expected file.  In most cases
   this is not, strictly speaking, a bug.  Most of the regression test
   scripts are not so pedantic as to use an <literal>ORDER BY</> for every single
   <literal>SELECT</>, and so their result row orderings are not well-defined
   according to the SQL specification.  In practice, since we are
   looking at the same queries being executed on the same data by the same
   software, we usually get the same result ordering on all platforms,
   so the lack of <literal>ORDER BY</> is not a problem.  Some queries do exhibit
   cross-platform ordering differences, however.  When testing against an
   already-installed server, ordering differences can also be caused by
   non-C locale settings or non-default parameter settings, such as custom values
@@ -311,8 +308,8 @@ of <varname>work_mem</> or the planner cost parameters.
  <para>
   Therefore, if you see an ordering difference, it's not something to
   worry about, unless the query does have an <literal>ORDER BY</> that your
   result is violating.  However, please report it anyway, so that we can add an
   <literal>ORDER BY</> to that particular query to eliminate the bogus
   <quote>failure</quote> in future releases.
  </para>
@@ -364,7 +361,7 @@ diff results/random.out expected/random.out
  <para>
   Since some of the tests inherently produce environment-dependent
   results, we have provided ways to specify alternate <quote>expected</>
   result files.  Each regression test can have several comparison files
   showing possible results on different platforms.  There are two
   independent mechanisms for determining which comparison file is used
@@ -410,7 +407,7 @@ testname:output:platformpattern=comparisonfilename
<programlisting>
float8:out:i.86-.*-openbsd=float8-small-is-zero.out
</programlisting>
   which will trigger on any machine where the output of
   <command>config.guess</command> matches <literal>i.86-.*-openbsd</literal>.
   Other lines
   in <filename>resultmap</> select the variant comparison file for other
...
<!-- $PostgreSQL: pgsql/doc/src/sgml/runtime.sgml,v 1.431 2010/02/03 17:25:06 momjian Exp $ -->
<chapter Id="runtime">
 <title>Server Setup and Operation</title>
@@ -16,7 +16,7 @@
  </indexterm>
  <para>
   As with any server daemon that is accessible to the outside world,
   it is advisable to run <productname>PostgreSQL</productname> under a
   separate user account.  This user account should only own the data
   that is managed by the server, and should not be shared with other
@@ -146,7 +146,7 @@ postgres$ <userinput>initdb -D /usr/local/pgsql/data</userinput>
   superuser</></indexterm>  Also, specify <option>-A md5</> or
   <option>-A password</> so that the default <literal>trust</> authentication
   mode is not used; or modify the generated <filename>pg_hba.conf</filename>
   file after running <command>initdb</command>, but
   <emphasis>before</> you start the server for the first time.  (Other
   reasonable approaches include using <literal>ident</literal> authentication
   or file system permissions to restrict connections.  See <xref
@@ -230,7 +230,7 @@ $ <userinput>postgres -D /usr/local/pgsql/data</userinput>
  <para>
   Normally it is better to start <command>postgres</command> in the
   background.  For this, use the usual Unix shell syntax:
<screen>
$ <userinput>postgres -D /usr/local/pgsql/data &gt;logfile 2&gt;&amp;1 &amp;</userinput>
</screen>
@@ -449,7 +449,7 @@ DETAIL:  Failed system call was semget(5440126, 17, 03600).
  <para>
   Although the error conditions possible on the client side are quite
   varied and application-dependent, a few of them might be directly
   related to how the server was started.  Conditions other than
   those shown below should be documented with the respective client
   application.
  </para>
@@ -524,16 +524,16 @@ psql: could not connect to server: No such file or directory
   relevant for <productname>PostgreSQL</>).  Almost all modern
   operating systems provide these features, but not all of them have
   them turned on or sufficiently sized by default, especially systems
   with a BSD heritage.  (On <systemitem class="osname">Windows</>,
   <productname>PostgreSQL</> provides its own replacement
   implementation of these facilities, so most of this section
   can be disregarded.)
  </para>
  <para>
   The complete lack of these facilities is usually manifested by an
   <errorname>Illegal system call</> error upon server start.  In
   that case there is no alternative but to reconfigure your
   kernel.  <productname>PostgreSQL</> won't work without them.
  </para>
@@ -541,7 +541,7 @@ psql: could not connect to server: No such file or directory
   When <productname>PostgreSQL</> exceeds one of the various hard
   <acronym>IPC</> limits, the server will refuse to start and
   should leave an instructive error message describing the problem
   and what to do about it.  (See also <xref
   linkend="server-start-failures">.)  The relevant kernel
   parameters are named consistently across different systems; <xref
   linkend="sysvipc-parameters"> gives an overview.  The methods to set
@@ -621,7 +621,7 @@ psql: could not connect to server: No such file or directory
     <row>
      <entry><varname>SEMVMX</></>
      <entry>Maximum value of semaphore</>
      <entry>at least 1000 (The default is often 32767; do not change unless necessary)</>
     </row>
    </tbody>
@@ -633,7 +633,7 @@ psql: could not connect to server: No such file or directory
   <indexterm><primary>SHMMAX</primary></indexterm> The most important
   shared memory parameter is <varname>SHMMAX</>, the maximum size, in
   bytes, of a shared memory segment.  If you get an error message from
   <function>shmget</> like <quote>Invalid argument</>, it is
   likely that this limit has been exceeded.  The size of the required
   shared memory segment varies depending on several
   <productname>PostgreSQL</> configuration parameters, as shown in
@@ -681,7 +681,7 @@ psql: could not connect to server: No such file or directory
   least <literal>ceil((max_connections + autovacuum_max_workers) / 16)</>.
   Lowering the number
   of allowed connections is a temporary workaround for failures,
   which are usually confusingly worded <quote>No space
   left on device</>, from the function <function>semget</>.
  </para>
@@ -706,8 +706,8 @@ psql: could not connect to server: No such file or directory
  <para>
   Various other settings related to <quote>semaphore undo</>, such as
   <varname>SEMMNU</> and <varname>SEMUME</>, do not affect
   <productname>PostgreSQL</>.
  </para>
@@ -758,24 +758,6 @@ options "SHMMAX=\(SHMALL*PAGE_SIZE\)"
  </para>
  </formalpara>
<para>
For those running 4.0 and earlier releases, use <command>bpatch</>
to find the <varname>sysptsize</> value in the current
kernel. This is computed dynamically at boot time.
<screen>
$ <userinput>bpatch -r sysptsize</>
<computeroutput>0x9 = 9</>
</screen>
Next, add <varname>SYSPTSIZE</> as a hard-coded value in the
kernel configuration file. Increase the value you found using
<command>bpatch</>. Add 1 for every additional 4 MB of
shared memory you desire.
<programlisting>
options "SYSPTSIZE=16"
</programlisting>
<varname>sysptsize</> cannot be changed by <command>sysctl</command>.
</para>
  <formalpara>
   <title>Semaphores</>
   <para>
@@ -837,9 +819,9 @@ options "SEMMNS=240"
    <literal>security.jail.sysvipc_allowed</>, <application>postmaster</>s
    running in different jails should be run by different operating system
    users.  This improves security because it prevents non-root users
    from interfering with shared memory or semaphores in different jails,
    and it allows the PostgreSQL IPC cleanup code to function properly.
    (In FreeBSD 6.0 and later the IPC cleanup code does not properly detect
    processes in other jails, preventing the running of postmasters on the
    same port in different jails.)
   </para>
@@ -863,7 +845,8 @@ options "SEMMNS=240"
    to be enabled when the kernel is compiled.  (They are by
    default.)  The maximum size of shared memory is determined by
    the option <varname>SHMMAXPGS</> (in pages).  The following
    shows an example of how to set the various parameters on
    <systemitem class="osname">NetBSD</>
    (<systemitem class="osname">OpenBSD</> uses <literal>option</> instead):
<programlisting>
options SYSVSHM
@@ -902,7 +885,7 @@ options SEMMAP=256
     <acronym>IPC</> parameters can be set in the <application>System
     Administration Manager</> (<acronym>SAM</>) under
     <menuchoice><guimenu>Kernel
     Configuration</><guimenuitem>Configurable Parameters</></>.  Choose
     <guibutton>Create A New Kernel</> when you're done.
    </para>
   </listitem>
@@ -926,8 +909,8 @@ options SEMMAP=256
<prompt>$</prompt> <userinput>sysctl -w kernel.shmmax=134217728</userinput>
<prompt>$</prompt> <userinput>sysctl -w kernel.shmall=2097152</userinput>
</screen>
    In addition these settings can be preserved between reboots in
    the file <filename>/etc/sysctl.conf</filename>.
   </para>
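   <para>
    Using the same illustrative values as above, the corresponding
    <filename>/etc/sysctl.conf</filename> entries would be:
<programlisting>
kernel.shmmax = 134217728
kernel.shmall = 2097152
</programlisting>
    Most Linux distributions apply this file at boot time (or after running
    <command>sysctl -p</command>).
   </para>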
   <para>
@@ -964,7 +947,7 @@ sysctl -w kern.sysv.shmall
    In OS X 10.3 and later, these commands have been moved to
    <filename>/etc/rc</> and must be edited there.  Note that
    <filename>/etc/rc</> is usually overwritten by OS X updates (such as
    10.3.6 to 10.3.7) so you should expect to have to redo your edits
    after each update.
   </para>
@@ -995,7 +978,7 @@ kern.sysv.shmall=1024
   </para>
   <para>
    In all OS X versions, you will need to reboot to have changes in the
    shared memory parameters take effect.
   </para>
  </listitem>
@@ -1304,11 +1287,11 @@ echo -17 > /proc/self/oom_adj
    Some vendors' Linux 2.4 kernels are reported to have early versions
    of the 2.6 overcommit <command>sysctl</command> parameter.  However, setting
    <literal>vm.overcommit_memory</> to 2
    on a 2.4 kernel that does not have the relevant code will make
    things worse, not better.  It is recommended that you inspect
    the actual kernel source code (see the function
    <function>vm_enough_memory</> in the file <filename>mm/mmap.c</>)
    to verify what is supported in your kernel before you try this in a 2.4
    installation.  The presence of the <filename>overcommit-accounting</>
    documentation file should <emphasis>not</> be taken as evidence that the
    feature is there.  If in any doubt, consult a kernel expert or your
@@ -1357,7 +1340,7 @@ echo -17 > /proc/self/oom_adj
    The server disallows new connections and sends all existing
    server processes <systemitem>SIGTERM</systemitem>, which will cause them
    to abort their current transactions and exit promptly.  It then
    waits for all server processes to exit and finally shuts down.
    If the server is in online backup mode, backup mode will be
    terminated, rendering the backup useless.
   </para>
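   <para>
    This shutdown mode corresponds to <command>pg_ctl</>'s
    <literal>fast</> mode; for example, with the data directory used
    elsewhere in this chapter:
<screen>
$ <userinput>pg_ctl stop -D /usr/local/pgsql/data -m fast</userinput>
</screen>
    which is equivalent to sending the corresponding signal to the
    postmaster yourself.
   </para>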
@@ -1428,7 +1411,7 @@ $ <userinput>kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`</userinput
  <para>
   While the server is running, it is not possible for a malicious user
   to take the place of the normal database server.  However, when the
   server is down, it is possible for a local user to spoof the normal
   server by starting their own server.  The spoof server could read
   passwords and queries sent by clients, but could not return any data
   because the <varname>PGDATA</> directory would still be secure because
@@ -1489,7 +1472,7 @@ $ <userinput>kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`</userinput
   the administrator cannot determine the actual password assigned
   to the user.  If MD5 encryption is used for client authentication,
   the unencrypted password is never even temporarily present on the
   server because the client MD5-encrypts it before being sent
   across the network.
  </para>
  </listitem>
@@ -1523,11 +1506,12 @@ $ <userinput>kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`</userinput
  <listitem>
   <para>
    On Linux, encryption can be layered on top of a file system
    using a <quote>loopback device</quote>.  This allows an entire
    file system partition to be encrypted on disk, and decrypted by the
    operating system.  On FreeBSD, the equivalent facility is called
    GEOM Based Disk Encryption (<acronym>gbde</acronym>), and many
    other operating systems support this functionality, including Windows.
   </para>
   <para>
@@ -1550,7 +1534,7 @@ $ <userinput>kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`</userinput
   <para>
    The <literal>MD5</> authentication method double-encrypts the
    password on the client before sending it to the server.  It first
    MD5-encrypts it based on the user name, and then encrypts it
    based on a random salt sent by the server when the database
    connection was made.  It is this double-encrypted value that is
    sent over the network to the server.  Double-encryption not only
@@ -1635,7 +1619,7 @@ $ <userinput>kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`</userinput
   <productname>PostgreSQL</> server can be started with
   <acronym>SSL</> enabled by setting the parameter
   <xref linkend="guc-ssl"> to <literal>on</> in
   <filename>postgresql.conf</>.  The server will listen for both normal
   and <acronym>SSL</> connections on the same TCP port, and will negotiate
   with any connecting client on whether to use <acronym>SSL</>.  By
   default, this is at the client's option; see <xref
@@ -1750,7 +1734,7 @@ $ <userinput>kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`</userinput
    <row>
     <entry><filename>server.key</></entry>
     <entry>server private key</entry>
     <entry>proves server certificate was sent by the owner; it does not indicate
     certificate owner is trustworthy</entry>
    </row>
@@ -1828,7 +1812,7 @@ chmod og-rwx server.key
  </indexterm>
  <para>
   It is possible to use <application>SSH</application> to encrypt the network
   connection between clients and a
   <productname>PostgreSQL</productname> server.  Done properly, this
   provides an adequately secure network connection, even for non-SSL-capable
@@ -1845,7 +1829,7 @@ chmod og-rwx server.key
ssh -L 63333:localhost:5432 joe@foo.com
</programlisting>
   The first number in the <option>-L</option> argument, 63333, is the
   port number of your end of the tunnel; it can be any unused port.
   (IANA reserves ports 49152 through 65535 for private use.)  The
   second number, 5432, is the remote end of the tunnel: the port
   number your server is using.  The name or IP address between the
@@ -1873,7 +1857,7 @@ psql -h localhost -p 63333 postgres
   In order for the
   tunnel setup to succeed you must be allowed to connect via
   <command>ssh</command> as <literal>joe@foo.com</literal>, just
   as if you had attempted to use <command>ssh</command> to create a
   terminal session.
  </para>
...
<!-- $PostgreSQL: pgsql/doc/src/sgml/user-manag.sgml,v 1.41 2008/10/28 12:10:42 mha Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/user-manag.sgml,v 1.42 2010/02/03 17:25:06 momjian Exp $ -->
<chapter id="user-manag"> <chapter id="user-manag">
<title>Database Roles and Privileges</title> <title>Database Roles and Privileges</title>
...@@ -11,8 +11,7 @@ ...@@ -11,8 +11,7 @@
tables) and can assign privileges on those objects to other roles to tables) and can assign privileges on those objects to other roles to
control who has access to which objects. Furthermore, it is possible control who has access to which objects. Furthermore, it is possible
to grant <firstterm>membership</> in a role to another role, thus to grant <firstterm>membership</> in a role to another role, thus
allowing the member role use of privileges assigned to the role it is allowing the member role to use privileges assigned to another role.
a member of.
</para> </para>
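As a rough, illustrative sketch of the mechanics (the role names group_role and member_role are placeholders, not taken from this patch), membership is set up with ordinary SQL commands:
<programlisting>
CREATE ROLE group_role;           -- a role used only as a group
CREATE ROLE member_role LOGIN;    -- an ordinary login role
GRANT group_role TO member_role;  -- member_role may now use group_role's privileges
</programlisting>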
<para> <para>
...@@ -110,9 +109,9 @@ SELECT rolname FROM pg_roles; ...@@ -110,9 +109,9 @@ SELECT rolname FROM pg_roles;
</para> </para>
<para> <para>
Every connection to the database server is made in the name of some Every connection to the database server is made using the name of some
particular role, and this role determines the initial access privileges for particular role, and this role determines the initial access privileges for
commands issued on that connection. commands issued in that connection.
The role name to use for a particular database The role name to use for a particular database
connection is indicated by the client that is initiating the connection is indicated by the client that is initiating the
connection request in an application-specific fashion. For example, connection request in an application-specific fashion. For example,
...@@ -129,11 +128,11 @@ SELECT rolname FROM pg_roles; ...@@ -129,11 +128,11 @@ SELECT rolname FROM pg_roles;
The set of database roles a given client connection can connect as The set of database roles a given client connection can connect as
is determined by the client authentication setup, as explained in is determined by the client authentication setup, as explained in
<xref linkend="client-authentication">. (Thus, a client is not <xref linkend="client-authentication">. (Thus, a client is not
necessarily limited to connect as the role with the same name as limited to connecting as the role matching
its operating system user, just as a person's login name its operating system user, just as a person's login name
need not match her real name.) Since the role need not match her real name.) Since the role
identity determines the set of privileges available to a connected identity determines the set of privileges available to a connected
client, it is important to carefully configure this when setting up client, it is important to carefully configure privileges when setting up
a multiuser environment. a multiuser environment.
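A quick way to check which role identity a session ended up with (an illustrative query, not part of this patch):
<programlisting>
SELECT current_user, session_user;
</programlisting>
Here session_user is the role used to log in, while current_user also reflects any later SET ROLE.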
</para> </para>
</sect1> </sect1>
...@@ -152,7 +151,7 @@ SELECT rolname FROM pg_roles; ...@@ -152,7 +151,7 @@ SELECT rolname FROM pg_roles;
<para> <para>
Only roles that have the <literal>LOGIN</> attribute can be used Only roles that have the <literal>LOGIN</> attribute can be used
as the initial role name for a database connection. A role with as the initial role name for a database connection. A role with
the <literal>LOGIN</> attribute can be considered the same thing the <literal>LOGIN</> attribute can be considered the same
as a <quote>database user</>. To create a role with login privilege, as a <quote>database user</>. To create a role with login privilege,
use either: use either:
<programlisting> <programlisting>
...@@ -204,7 +203,7 @@ CREATE USER <replaceable>name</replaceable>; ...@@ -204,7 +203,7 @@ CREATE USER <replaceable>name</replaceable>;
other roles, too, as well as grant or revoke membership in them. other roles, too, as well as grant or revoke membership in them.
However, to create, alter, drop, or change membership of a However, to create, alter, drop, or change membership of a
superuser role, superuser status is required; superuser role, superuser status is required;
<literal>CREATEROLE</> is not sufficient for that. <literal>CREATEROLE</> is insufficient for that.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
...@@ -250,15 +249,15 @@ CREATE USER <replaceable>name</replaceable>; ...@@ -250,15 +249,15 @@ CREATE USER <replaceable>name</replaceable>;
want to disable index scans (hint: not a good idea) anytime you want to set a statement timeout anytime you
connect, you can use: connect, you can use:
<programlisting> <programlisting>
ALTER ROLE myname SET enable_indexscan TO off; ALTER ROLE myname SET statement_timeout = '5min';
</programlisting> </programlisting>
This will save the setting (but not set it immediately). In This will save the setting (but not set it immediately). In
subsequent connections by this role it will appear as though subsequent connections by this role it will appear as though
<literal>SET enable_indexscan TO off;</literal> had been executed <literal>SET statement_timeout = '5min'</literal> had been executed
just before the session started. just before the session started.
You can still alter this setting during the session; it will only You can still alter this setting during the session; it will only
be the default. To remove a role-specific default setting, use be the default. To remove a role-specific default setting, use
<literal>ALTER ROLE <replaceable>rolename</> RESET <replaceable>varname</>;</literal>. <literal>ALTER ROLE <replaceable>rolename</> RESET <replaceable>varname</></literal>.
Note that role-specific defaults attached to roles without Note that role-specific defaults attached to roles without
<literal>LOGIN</> privilege are fairly useless, since they will never <literal>LOGIN</> privilege are fairly useless, since they will never
be invoked. be invoked.
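To make the mechanics concrete, a minimal sketch (assuming the login role myname from the example above):
<programlisting>
ALTER ROLE myname SET statement_timeout = '5min';  -- default for future sessions of myname
ALTER ROLE myname RESET statement_timeout;         -- drop the role-specific default again
</programlisting>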
...@@ -381,15 +380,16 @@ REVOKE <replaceable>group_role</replaceable> FROM <replaceable>role1</replaceabl ...@@ -381,15 +380,16 @@ REVOKE <replaceable>group_role</replaceable> FROM <replaceable>role1</replaceabl
</para> </para>
<para> <para>
The members of a role can use the privileges of the group role in two The members of a group role can use the privileges of the role in two
ways. First, every member of a group can explicitly do ways. First, every member of a group can explicitly do
<xref linkend="sql-set-role" endterm="sql-set-role-title"> to <xref linkend="sql-set-role" endterm="sql-set-role-title"> to
temporarily <quote>become</> the group role. In this state, the temporarily <quote>become</> the group role. In this state, the
database session has access to the privileges of the group role rather database session has access to the privileges of the group role rather
than the original login role, and any database objects created are than the original login role, and any database objects created are
considered owned by the group role not the login role. Second, member considered owned by the group role not the login role. Second, member
roles that have the <literal>INHERIT</> attribute automatically have use of roles that have the <literal>INHERIT</> attribute automatically inherit the
privileges of roles they are members of. As an example, suppose we have privileges of roles of which they are members, including any
privileges those roles have themselves inherited. As an example, suppose we have
done: done:
<programlisting> <programlisting>
CREATE ROLE joe LOGIN INHERIT; CREATE ROLE joe LOGIN INHERIT;
...@@ -454,7 +454,7 @@ RESET ROLE; ...@@ -454,7 +454,7 @@ RESET ROLE;
special privileges, but they are never inherited as ordinary privileges special privileges, but they are never inherited as ordinary privileges
on database objects are. You must actually <command>SET ROLE</> to a on database objects are. You must actually <command>SET ROLE</> to a
specific role having one of these attributes in order to make use of specific role having one of these attributes in order to make use of
the attribute. Continuing the above example, we might well choose to the attribute. Continuing the above example, we might choose to
grant <literal>CREATEDB</> and <literal>CREATEROLE</> to the grant <literal>CREATEDB</> and <literal>CREATEROLE</> to the
<literal>admin</> role. Then a session connecting as role <literal>joe</> <literal>admin</> role. Then a session connecting as role <literal>joe</>
would not have these privileges immediately, only after doing would not have these privileges immediately, only after doing
...@@ -478,7 +478,7 @@ DROP ROLE <replaceable>name</replaceable>; ...@@ -478,7 +478,7 @@ DROP ROLE <replaceable>name</replaceable>;
</sect1> </sect1>
<sect1 id="perm-functions"> <sect1 id="perm-functions">
<title>Functions and Triggers</title> <title>Function and Trigger Security</title>
<para> <para>
Functions and triggers allow users to insert code into the backend Functions and triggers allow users to insert code into the backend
......
<!-- $PostgreSQL: pgsql/doc/src/sgml/wal.sgml,v 1.60 2009/11/28 16:21:31 momjian Exp $ --> <!-- $PostgreSQL: pgsql/doc/src/sgml/wal.sgml,v 1.61 2010/02/03 17:25:06 momjian Exp $ -->
<chapter id="wal"> <chapter id="wal">
<title>Reliability and the Write-Ahead Log</title> <title>Reliability and the Write-Ahead Log</title>
...@@ -42,9 +42,9 @@ ...@@ -42,9 +42,9 @@
<para> <para>
Next, there might be a cache in the disk drive controller; this is Next, there might be a cache in the disk drive controller; this is
particularly common on <acronym>RAID</> controller cards. Some of particularly common on <acronym>RAID</> controller cards. Some of
these caches are <firstterm>write-through</>, meaning writes are passed these caches are <firstterm>write-through</>, meaning writes are sent
along to the drive as soon as they arrive. Others are to the drive as soon as they arrive. Others are
<firstterm>write-back</>, meaning data is passed on to the drive at <firstterm>write-back</>, meaning data is sent to the drive at
some later time. Such caches can be a reliability hazard because the some later time. Such caches can be a reliability hazard because the
memory in the disk controller cache is volatile, and will lose its memory in the disk controller cache is volatile, and will lose its
contents in a power failure. Better controller cards have contents in a power failure. Better controller cards have
...@@ -61,7 +61,7 @@ ...@@ -61,7 +61,7 @@
particularly likely to have write-back caches that will not survive a particularly likely to have write-back caches that will not survive a
power failure. To check write caching on <productname>Linux</> use power failure. To check write caching on <productname>Linux</> use
<command>hdparm -I</>; it is enabled if there is a <literal>*</> next <command>hdparm -I</>; it is enabled if there is a <literal>*</> next
to <literal>Write cache</>. <command>hdparm -W</> to turn off to <literal>Write cache</>. Use <command>hdparm -W</> to turn off
write caching. On <productname>FreeBSD</> use write caching. On <productname>FreeBSD</> use
<application>atacontrol</>. (For SCSI disks use <ulink <application>atacontrol</>. (For SCSI disks use <ulink
url="http://sg.torque.net/sg/sdparm.html"><application>sdparm</></ulink> url="http://sg.torque.net/sg/sdparm.html"><application>sdparm</></ulink>
...@@ -79,10 +79,10 @@ ...@@ -79,10 +79,10 @@
</para> </para>
<para> <para>
When the operating system sends a write request to the disk hardware, When the operating system sends a write request to the storage hardware,
there is little it can do to make sure the data has arrived at a truly there is little it can do to make sure the data has arrived at a truly
non-volatile storage area. Rather, it is the non-volatile storage area. Rather, it is the
administrator's responsibility to be sure that all storage components administrator's responsibility to make certain that all storage components
ensure data integrity. Avoid disk controllers that have non-battery-backed ensure data integrity. Avoid disk controllers that have non-battery-backed
write caches. At the drive level, disable write-back caching if the write caches. At the drive level, disable write-back caching if the
drive cannot guarantee the data will be written before shutdown. drive cannot guarantee the data will be written before shutdown.
...@@ -100,11 +100,11 @@ ...@@ -100,11 +100,11 @@
to power loss at any time, meaning some of the 512-byte sectors were to power loss at any time, meaning some of the 512-byte sectors were
written, and others were not. To guard against such failures, written, and others were not. To guard against such failures,
<productname>PostgreSQL</> periodically writes full page images to <productname>PostgreSQL</> periodically writes full page images to
permanent storage <emphasis>before</> modifying the actual page on permanent WAL storage <emphasis>before</> modifying the actual page on
disk. By doing this, during crash recovery <productname>PostgreSQL</> can disk. By doing this, during crash recovery <productname>PostgreSQL</> can
restore partially-written pages. If you have a battery-backed disk restore partially-written pages. If you have a battery-backed disk
controller or file-system software that prevents partial page writes controller or file-system software that prevents partial page writes
(e.g., ReiserFS 4), you can turn off this page imaging by using the (e.g., ZFS), you can turn off this page imaging by turning off the
<xref linkend="guc-full-page-writes"> parameter. <xref linkend="guc-full-page-writes"> parameter.
</para> </para>
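As a small illustration (not part of this patch): the current value can be checked from SQL, while changing it is done in postgresql.conf followed by a reload:
<programlisting>
SHOW full_page_writes;
-- in postgresql.conf, only if the file system itself prevents partial page writes:
--   full_page_writes = off
</programlisting>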
</sect1> </sect1>
...@@ -140,12 +140,12 @@ ...@@ -140,12 +140,12 @@
<tip> <tip>
<para> <para>
Because <acronym>WAL</acronym> restores database file Because <acronym>WAL</acronym> restores database file
contents after a crash, journaled filesystems are not necessary for contents after a crash, journaled file systems are not necessary for
reliable storage of the data files or WAL files. In fact, journaling reliable storage of the data files or WAL files. In fact, journaling
overhead can reduce performance, especially if journaling overhead can reduce performance, especially if journaling
causes file system <emphasis>data</emphasis> to be flushed causes file system <emphasis>data</emphasis> to be flushed
to disk. Fortunately, data flushing during journaling can to disk. Fortunately, data flushing during journaling can
often be disabled with a filesystem mount option, e.g. often be disabled with a file system mount option, e.g.
<literal>data=writeback</> on a Linux ext3 file system. <literal>data=writeback</> on a Linux ext3 file system.
Journaled file systems do improve boot speed after a crash. Journaled file systems do improve boot speed after a crash.
</para> </para>
...@@ -308,7 +308,7 @@ ...@@ -308,7 +308,7 @@
committing at about the same time. Setting <varname>commit_delay</varname> committing at about the same time. Setting <varname>commit_delay</varname>
can only help when there are many concurrently committing transactions, can only help when there are many concurrently committing transactions,
and it is difficult to tune it to a value that actually helps rather and it is difficult to tune it to a value that actually helps rather
than hurting throughput. than hurts throughput.
</para> </para>
</sect1> </sect1>
...@@ -326,7 +326,7 @@ ...@@ -326,7 +326,7 @@
<para> <para>
<firstterm>Checkpoints</firstterm><indexterm><primary>checkpoint</></> <firstterm>Checkpoints</firstterm><indexterm><primary>checkpoint</></>
are points in the sequence of transactions at which it is guaranteed are points in the sequence of transactions at which it is guaranteed
that the data files have been updated with all information written before that the heap and index data files have been updated with all information written before
the checkpoint. At checkpoint time, all dirty data pages are flushed to the checkpoint. At checkpoint time, all dirty data pages are flushed to
disk and a special checkpoint record is written to the log file. disk and a special checkpoint record is written to the log file.
(The changes were previously flushed to the <acronym>WAL</acronym> files.) (The changes were previously flushed to the <acronym>WAL</acronym> files.)
...@@ -349,18 +349,18 @@ ...@@ -349,18 +349,18 @@
</para> </para>
<para> <para>
The server's background writer process will automatically perform The server's background writer process automatically performs
a checkpoint every so often. A checkpoint is created every <xref a checkpoint every so often. A checkpoint is created every <xref
linkend="guc-checkpoint-segments"> log segments, or every <xref linkend="guc-checkpoint-segments"> log segments, or every <xref
linkend="guc-checkpoint-timeout"> seconds, whichever comes first. linkend="guc-checkpoint-timeout"> seconds, whichever comes first.
The default settings are 3 segments and 300 seconds respectively. The default settings are 3 segments and 300 seconds (5 minutes), respectively.
It is also possible to force a checkpoint by using the SQL command It is also possible to force a checkpoint by using the SQL command
<command>CHECKPOINT</command>. <command>CHECKPOINT</command>.
</para> </para>
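A brief, illustrative sketch (not part of this patch) of inspecting these settings and forcing a checkpoint by hand:
<programlisting>
SHOW checkpoint_segments;   -- 3 by default
SHOW checkpoint_timeout;    -- 5min by default
CHECKPOINT;                 -- force an immediate checkpoint (superuser only)
</programlisting>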
<para> <para>
Reducing <varname>checkpoint_segments</varname> and/or Reducing <varname>checkpoint_segments</varname> and/or
<varname>checkpoint_timeout</varname> causes checkpoints to be done <varname>checkpoint_timeout</varname> causes checkpoints to occur
more often. This allows faster after-crash recovery (since less work more often. This allows faster after-crash recovery (since less work
will need to be redone). However, one must balance this against the will need to be redone). However, one must balance this against the
increased cost of flushing dirty data pages more often. If increased cost of flushing dirty data pages more often. If
...@@ -469,7 +469,7 @@ ...@@ -469,7 +469,7 @@
server processes to add their commit records to the log so as to have all server processes to add their commit records to the log so as to have all
of them flushed with a single log sync. No sleep will occur if of them flushed with a single log sync. No sleep will occur if
<xref linkend="guc-fsync"> <xref linkend="guc-fsync">
is not enabled, nor if fewer than <xref linkend="guc-commit-siblings"> is not enabled, or if fewer than <xref linkend="guc-commit-siblings">
other sessions are currently in active transactions; this avoids other sessions are currently in active transactions; this avoids
sleeping when it's unlikely that any other session will commit soon. sleeping when it's unlikely that any other session will commit soon.
Note that on most platforms, the resolution of a sleep request is Note that on most platforms, the resolution of a sleep request is
...@@ -483,7 +483,7 @@ ...@@ -483,7 +483,7 @@
The <xref linkend="guc-wal-sync-method"> parameter determines how The <xref linkend="guc-wal-sync-method"> parameter determines how
<productname>PostgreSQL</productname> will ask the kernel to force <productname>PostgreSQL</productname> will ask the kernel to force
<acronym>WAL</acronym> updates out to disk. <acronym>WAL</acronym> updates out to disk.
All the options should be the same as far as reliability goes, All the options should be the same in terms of reliability,
but it's quite platform-specific which one will be the fastest. but it's quite platform-specific which one will be the fastest.
Note that this parameter is irrelevant if <varname>fsync</varname> Note that this parameter is irrelevant if <varname>fsync</varname>
has been turned off. has been turned off.
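For reference, the values in effect can be examined with SHOW (an illustrative query, not part of this patch):
<programlisting>
SHOW fsync;
SHOW wal_sync_method;
</programlisting>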
...@@ -521,26 +521,26 @@ ...@@ -521,26 +521,26 @@
<filename>access/xlog.h</filename>; the record content is dependent <filename>access/xlog.h</filename>; the record content is dependent
on the type of event that is being logged. Segment files are given on the type of event that is being logged. Segment files are given
ever-increasing numbers as names, starting at ever-increasing numbers as names, starting at
<filename>000000010000000000000000</filename>. The numbers do not wrap, at <filename>000000010000000000000000</filename>. The numbers do not wrap,
present, but it should take a very very long time to exhaust the but it will take a very, very long time to exhaust the
available stock of numbers. available stock of numbers.
</para> </para>
<para> <para>
It is of advantage if the log is located on another disk than the It is advantageous if the log is located on a different disk from the
main database files. This can be achieved by moving the directory main database files. This can be achieved by moving the
<filename>pg_xlog</filename> to another location (while the server <filename>pg_xlog</filename> directory to another location (while the server
is shut down, of course) and creating a symbolic link from the is shut down, of course) and creating a symbolic link from the
original location in the main data directory to the new location. original location in the main data directory to the new location.
</para> </para>
<para> <para>
The aim of <acronym>WAL</acronym>, to ensure that the log is The aim of <acronym>WAL</acronym> is to ensure that the log is
written before database records are altered, can be subverted by written before database records are altered, but this can be subverted by
disk drives<indexterm><primary>disk drive</></> that falsely report a disk drives<indexterm><primary>disk drive</></> that falsely report a
successful write to the kernel, successful write to the kernel,
when in fact they have only cached the data and not yet stored it when in fact they have only cached the data and not yet stored it
on the disk. A power failure in such a situation might still lead to on the disk. A power failure in such a situation might lead to
irrecoverable data corruption. Administrators should try to ensure irrecoverable data corruption. Administrators should try to ensure
that disks holding <productname>PostgreSQL</productname>'s that disks holding <productname>PostgreSQL</productname>'s
<acronym>WAL</acronym> log files do not make such false reports. <acronym>WAL</acronym> log files do not make such false reports.
...@@ -549,8 +549,8 @@ ...@@ -549,8 +549,8 @@
<para> <para>
After a checkpoint has been made and the log flushed, the After a checkpoint has been made and the log flushed, the
checkpoint's position is saved in the file checkpoint's position is saved in the file
<filename>pg_control</filename>. Therefore, when recovery is to be <filename>pg_control</filename>. Therefore, at the start of recovery,
done, the server first reads <filename>pg_control</filename> and the server first reads <filename>pg_control</filename> and
then the checkpoint record; then it performs the REDO operation by then the checkpoint record; then it performs the REDO operation by
scanning forward from the log position indicated in the checkpoint scanning forward from the log position indicated in the checkpoint
record. Because the entire content of data pages is saved in the record. Because the entire content of data pages is saved in the
...@@ -562,12 +562,12 @@ ...@@ -562,12 +562,12 @@
<para> <para>
To deal with the case where <filename>pg_control</filename> is To deal with the case where <filename>pg_control</filename> is
corrupted, we should support the possibility of scanning existing log corrupt, we should support the possibility of scanning existing log
segments in reverse order &mdash; newest to oldest &mdash; in order to find the segments in reverse order &mdash; newest to oldest &mdash; in order to find the
latest checkpoint. This has not been implemented yet. latest checkpoint. This has not been implemented yet.
<filename>pg_control</filename> is small enough (less than one disk page) <filename>pg_control</filename> is small enough (less than one disk page)
that it is not subject to partial-write problems, and as of this writing that it is not subject to partial-write problems, and as of this writing
there have been no reports of database failures due solely to inability there have been no reports of database failures due solely to the inability
to read <filename>pg_control</filename> itself. So while it is to read <filename>pg_control</filename> itself. So while it is
theoretically a weak spot, <filename>pg_control</filename> does not theoretically a weak spot, <filename>pg_control</filename> does not
seem to be a problem in practice. seem to be a problem in practice.
......