Commit 92934258 authored by Peter Eisentraut

spell checker run

parent 96ee6ff5
-<!-- $PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.107 2007/10/16 19:44:18 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.108 2007/11/28 15:42:30 petere Exp $ -->
 <chapter id="backup">
 <title>Backup and Restore</title>
@@ -1034,7 +1034,7 @@ restore_command = 'cp /mnt/server/archivedir/%f %p'
 (The path name is relative to the working directory of the server,
 i.e., the cluster's data directory.)
 Any <literal>%r</> is replaced by the name of the file containing the
-last valid restartpoint. That is the earliest file that must be kept
+last valid restart point. That is the earliest file that must be kept
 to allow a restore to be restartable, so this information can be used
 to truncate the archive to just the minimum required to support
 restart of the current restore. <literal>%r</> would only be used in a
@@ -1479,7 +1479,7 @@ if (!triggered)
 <para>
 The size of the WAL archive can be minimized by using the <literal>%r</>
 option of the <varname>restore_command</>. This option specifies the
-last archive filename that needs to be kept to allow the recovery to
+last archive file name that needs to be kept to allow the recovery to
 restart correctly. This can be used to truncate the archive once
 files are no longer required, if the archive is writable from the
 standby server.

-<!-- $PostgreSQL: pgsql/doc/src/sgml/config.sgml,v 1.157 2007/11/28 05:01:24 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/config.sgml,v 1.158 2007/11/28 15:42:30 petere Exp $ -->
 <chapter Id="runtime-config">
 <title>Server Configuration</title>
@@ -608,7 +608,7 @@ SET ENABLE_SEQSCAN TO OFF;
 </indexterm>
 <listitem>
 <para>
-Sets the realm to match Kerberos, GSSAPI and SSPI usernames against.
+Sets the realm to match Kerberos, GSSAPI and SSPI user names against.
 See <xref linkend="kerberos-auth">, <xref linkend="gssapi-auth"> or
 <xref linkend="sspi-auth"> for details. This parameter can only be
 set at server start.
@@ -3166,12 +3166,11 @@ local0.* /var/log/postgresql
 Including <literal>csvlog</> in the <varname>log_destination</> list
 provides a convenient way to import log files into a database table.
 This option emits log lines in comma-separated-value format,
-with these columns: timestamp with milliseconds, username, database
-name, session id, host:port number, process id, per-process line
-number, command tag, session start time, virtual transaction id,
+with these columns: timestamp with milliseconds, user name, database
+name, session ID, host:port number, process ID, per-process line
+number, command tag, session start time, virtual transaction ID,
 regular transaction id, error severity, SQL state code, error message.
 Here is a sample table definition for storing CSV-format log output:
-</para>
 <programlisting>
 CREATE TABLE postgres_log
@@ -3193,15 +3192,16 @@ CREATE TABLE postgres_log
 PRIMARY KEY (session_id, process_line_num)
 );
 </programlisting>
+</para>
 <para>
 To import a log file into this table, use the <command>COPY FROM</>
 command:
-</para>
 <programlisting>
 COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
 </programlisting>
+</para>
 <para>
 There are a few things you need to do to simplify importing CSV log
@@ -3221,7 +3221,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
 <listitem>
 <para>
 Set <varname>log_rotation_size</varname> to 0 to disable
-size-based log rotation, as it makes the log filename difficult
+size-based log rotation, as it makes the log file name difficult
 to predict.
 </para>
 </listitem>

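For context, a minimal sketch of querying the imported CSV log once the COPY shown in the hunk above has run; error_severity and log_time are assumed column names, since most of the sample table definition is elided in this diff:

    -- Summarize today's imported CSV log entries by severity.
    -- error_severity and log_time are assumed from the full postgres_log
    -- definition, which is not shown in the hunk above.
    SELECT error_severity, count(*) AS entries
    FROM postgres_log
    WHERE log_time >= current_date
    GROUP BY error_severity
    ORDER BY entries DESC;
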
-<!-- $PostgreSQL: pgsql/doc/src/sgml/ddl.sgml,v 1.76 2007/06/20 23:11:38 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/ddl.sgml,v 1.77 2007/11/28 15:42:31 petere Exp $ -->
 <chapter id="ddl">
 <title>Data Definition</title>
@@ -2793,7 +2793,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate &gt;= DATE '2006-01-01';
 range tests for range partitioning, as illustrated in the preceding
 examples. A good rule of thumb is that partitioning constraints should
 contain only comparisons of the partitioning column(s) to constants
-using btree-indexable operators.
+using B-tree-indexable operators.
 </para>
 </listitem>

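As a concrete illustration of that rule of thumb, a sketch of a partition constraint that constraint exclusion can use, following the measurement/logdate example named in the hunk header (the child table name is made up):

    -- The CHECK compares the partitioning column only to constants, using
    -- B-tree-indexable operators (>=, <), so this partition can be pruned.
    CREATE TABLE measurement_y2006m01 (
        CHECK ( logdate >= DATE '2006-01-01' AND logdate < DATE '2006-02-01' )
    ) INHERITS (measurement);
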
-<!-- $PostgreSQL: pgsql/doc/src/sgml/func.sgml,v 1.413 2007/11/28 05:13:41 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/func.sgml,v 1.414 2007/11/28 15:42:31 petere Exp $ -->
 <chapter id="functions">
 <title>Functions and Operators</title>
@@ -1313,7 +1313,7 @@
 <entry>
 <acronym>ASCII</acronym> code of the first character of the
 argument. For <acronym>UTF8</acronym> returns the Unicode code
-point of the character. For other multi-byte encodings. the
+point of the character. For other multibyte encodings. the
 argument must be a strictly <acronym>ASCII</acronym> character.
 </entry>
 <entry><literal>ascii('x')</literal></entry>
@@ -1338,7 +1338,7 @@
 <entry><type>text</type></entry>
 <entry>
 Character with the given code. For <acronym>UTF8</acronym> the
-argument is treated as a Unicode code point. For other multi-byte
+argument is treated as a Unicode code point. For other multibyte
 encodings the argument must designate a strictly
 <acronym>ASCII</acronym> character.
 </entry>
@@ -1359,7 +1359,7 @@
 <parameter>src_encoding</parameter>. The
 <parameter>string</parameter> must be valid in this encoding.
 Conversions can be defined by <command>CREATE CONVERSION</command>.
-Also there are some pre-defined conversions. See <xref
+Also there are some predefined conversions. See <xref
 linkend="conversion-names"> for available conversions.
 </entry>
 <entry><literal>convert('text_in_utf8', 'UTF8', 'LATIN1')</literal></entry>
@@ -6823,7 +6823,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple
 <para>
 Notice that except for the two-argument form of <function>enum_range</>,
 these functions disregard the specific value passed to them; they care
-only about its declared datatype. Either NULL or a specific value of
+only about its declared data type. Either null or a specific value of
 the type can be passed, with the same result. It is more common to
 apply these functions to a table column or function argument than to
 a hardwired type name as suggested by the examples.
@@ -8381,7 +8381,7 @@ cursor_to_xml(cursor refcursor, count int, nulls boolean, tableforest boolean, t
 ...
 ]]></screen>
-If no table name is avaible, that is, when mapping a query or a
+If no table name is available, that is, when mapping a query or a
 cursor, the string <literal>table</literal> is used in the first
 format, <literal>row</literal> in the second format.
 </para>

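A few quick calls tying the entries above together; the rainbow type is the one created in the hunk header, and the commented output values are indicative:

    SELECT ascii('x');    -- 120: code of the first character of the argument
    SELECT chr(65);       -- 'A': the inverse of ascii() for plain ASCII input

    -- The enum support functions look only at the declared type of their
    -- argument, so a null cast to the type is enough:
    SELECT enum_range(NULL::rainbow);   -- all rainbow values, in declared order
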
-<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.28 2007/11/28 10:10:14 petere Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.29 2007/11/28 15:42:31 petere Exp $ -->
 <chapter id="high-availability">
 <title>High Availability, Load Balancing, and Replication</title>
@@ -66,7 +66,7 @@
 <para>
 Performance must be considered in any choice. There is usually a
-tradeoff between functionality and
+trade-off between functionality and
 performance. For example, a full synchronous solution over a slow
 network might cut performance by more than half, while an asynchronous
 one might have a minimal performance impact.
@@ -202,13 +202,13 @@ protocol to make nodes agree on a serializable transactional order.
 </varlistentry>
 <varlistentry>
-<term>Asynchronous Multi-Master Replication</term>
+<term>Asynchronous Multimaster Replication</term>
 <listitem>
 <para>
 For servers that are not regularly connected, like laptops or
 remote servers, keeping data consistent among servers is a
-challenge. Using asynchronous multi-master replication, each
+challenge. Using asynchronous multimaster replication, each
 server works independently, and periodically communicates with
 the other servers to identify conflicting transactions. The
 conflicts can be resolved by users or conflict resolution rules.
@@ -217,18 +217,18 @@ protocol to make nodes agree on a serializable transactional order.
 </varlistentry>
 <varlistentry>
-<term>Synchronous Multi-Master Replication</term>
+<term>Synchronous Multimaster Replication</term>
 <listitem>
 <para>
-In synchronous multi-master replication, each server can accept
+In synchronous multimaster replication, each server can accept
 write requests, and modified data is transmitted from the
 original server to every other server before each transaction
 commits. Heavy write activity can cause excessive locking,
 leading to poor performance. In fact, write performance is
 often worse than that of a single server. Read requests can
 be sent to any server. Some implementations use shared disk
-to reduce the communication overhead. Synchronous multi-master
+to reduce the communication overhead. Synchronous multimaster
 replication is best for mostly read workloads, though its big
 advantage is that any server can accept write requests &mdash;
 there is no need to partition workloads between master and
@@ -279,8 +279,8 @@ protocol to make nodes agree on a serializable transactional order.
 <entry>Warm Standby Using PITR</entry>
 <entry>Master-Slave Replication</entry>
 <entry>Statement-Based Replication Middleware</entry>
-<entry>Asynchronous Multi-Master Replication</entry>
-<entry>Synchronous Multi-Master Replication</entry>
+<entry>Asynchronous Multimaster Replication</entry>
+<entry>Synchronous Multimaster Replication</entry>
 </row>
 </thead>
@@ -401,7 +401,7 @@ protocol to make nodes agree on a serializable transactional order.
 </varlistentry>
 <varlistentry>
-<term>Multi-Server Parallel Query Execution</term>
+<term>Multiple-Server Parallel Query Execution</term>
 <listitem>
 <para>

-<!-- $PostgreSQL: pgsql/doc/src/sgml/install-win32.sgml,v 1.41 2007/08/03 10:47:10 mha Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/install-win32.sgml,v 1.42 2007/11/28 15:42:31 petere Exp $ -->
 <chapter id="install-win32">
 <title>Installation on <productname>Windows</productname></title>
@@ -82,7 +82,7 @@
 <term><productname>ActiveState Perl</productname></term>
 <listitem><para>
 ActiveState Perl is required to run the build generation scripts. MinGW
-or Cygwin perl will not work. It must also be present in the PATH.
+or Cygwin Perl will not work. It must also be present in the PATH.
 Binaries can be downloaded from
 <ulink url="http://www.activestate.com"></>.
 </para></listitem>
@@ -209,7 +209,7 @@
 </userinput>
 </screen>
 To change the default build configuration to debug, put the following
-in the buildenv.bat file:
+in the <filename>buildenv.bat</filename> file:
 <screen>
 <userinput>
 set CONFIG=Debug
@@ -261,8 +261,8 @@
 <para>
 To run the regression tests, make sure you have completed the build of all
 required parts first. Also, make sure that the DLLs required to load all
-parts of the system (such as the perl and python DLLs for the procedural
-languages) are present in the system PATH. If they are not, set it through
+parts of the system (such as the Perl and Python DLLs for the procedural
+languages) are present in the system path. If they are not, set it through
 the <filename>buildenv.bat</filename> file. To run the tests, run one of
 the following commands from the <filename>src\tools\msvc</filename>
 directory:
@@ -282,7 +282,7 @@
 </screen>
 To change the schedule used (default is the parallel), append it to the
-commandline like:
+command line like:
 <screen>
 <userinput>
 vcregress check serial
@@ -321,7 +321,7 @@
 </varlistentry>
 <varlistentry>
-<term>DocBook DSSL 1.79</term>
+<term>DocBook DSSSL 1.79</term>
 <listitem><para>
 Download from
 <ulink url="http://sourceforge.net/project/downloading.php?groupname=docbook&amp;filename=docbook-dsssl-1.79.zip"></>

-<!-- $PostgreSQL: pgsql/doc/src/sgml/installation.sgml,v 1.297 2007/11/05 17:43:20 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/installation.sgml,v 1.298 2007/11/28 15:42:31 petere Exp $ -->
 <chapter id="installation">
 <title><![%standalone-include[<productname>PostgreSQL</>]]>
@@ -1308,7 +1308,7 @@ su - postgres
 <term><envar>TCLSH</envar></term>
 <listitem>
 <para>
-Full path to the Tcl interpreter. This wil be used to
+Full path to the Tcl interpreter. This will be used to
 determine the dependencies for building PL/Tcl.
 </para>
 </listitem>

-<!-- $PostgreSQL: pgsql/doc/src/sgml/libpq.sgml,v 1.246 2007/09/26 08:45:50 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/libpq.sgml,v 1.247 2007/11/28 15:42:31 petere Exp $ -->
 <chapter id="libpq">
 <title><application>libpq</application> - C Library</title>
@@ -4976,7 +4976,7 @@ defaultNoticeProcessor(void *arg, const char *message)
 used. (Therefore, put more-specific entries first when you are using
 wildcards.) If an entry needs to contain <literal>:</literal> or
 <literal>\</literal>, escape this character with <literal>\</literal>.
-A host name of <literal>localhost</> matches both TCP (hostname
+A host name of <literal>localhost</> matches both TCP (host name
 <literal>localhost</>) and Unix domain socket (<literal>pghost</> empty
 or the default socket directory) connections coming from the local
 machine.

-<!-- $PostgreSQL: pgsql/doc/src/sgml/maintenance.sgml,v 1.84 2007/10/07 01:16:42 alvherre Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/maintenance.sgml,v 1.85 2007/11/28 15:42:31 petere Exp $ -->
 <chapter id="maintenance">
 <title>Routine Database Maintenance Tasks</title>
@@ -115,7 +115,7 @@
 <command>UPDATE</> or <command>DELETE</> of a row does not
 immediately remove the old version of the row.
 This approach is necessary to gain the benefits of multiversion
-concurrency control (see <xref linkend="mvcc">): the row version
+concurrency control (see <xref linkend="mvcc">): the row versions
 must not be deleted while it is still potentially visible to other
 transactions. But eventually, an outdated or deleted row version is no
 longer of interest to any transaction. The space it occupies must be
@@ -486,7 +486,7 @@ HINT: Stop the postmaster and use a standalone backend to VACUUM in "mydb".
 <para>
 Beginning in <productname>PostgreSQL</productname> 8.3, autovacuum has a
-multi-process architecture: there is a daemon process, called the
+multiprocess architecture: There is a daemon process, called the
 <firstterm>autovacuum launcher</firstterm>, which is in charge of starting
 an <firstterm>autovacuum worker</firstterm> process on each database every
 <xref linkend="guc-autovacuum-naptime"> seconds. On each run, the worker

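For reference, a minimal sketch of the manual counterpart of what an autovacuum worker does for a single table (the table name is illustrative):

    -- Reclaim space held by dead row versions and refresh planner statistics;
    -- in 8.3 the autovacuum workers normally issue the equivalent themselves.
    VACUUM ANALYZE mytable;
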
-<!-- $PostgreSQL: pgsql/doc/src/sgml/monitoring.sgml,v 1.54 2007/09/25 20:03:37 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/monitoring.sgml,v 1.55 2007/11/28 15:42:31 petere Exp $ -->
 <chapter id="monitoring">
 <title>Monitoring Database Activity</title>
@@ -236,7 +236,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 <entry>One row only, showing cluster-wide statistics from the
 background writer: number of scheduled checkpoints, requested
 checkpoints, buffers written by checkpoints and cleaning scans,
-and the number of times the bgwriter stopped a cleaning scan
+and the number of times the background writer stopped a cleaning scan
 because it had written too many buffers. Also includes
 statistics about the shared buffer pool, including buffers written
 by backends (that is, not by the background writer) and total buffers
@@ -777,7 +777,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 <entry><literal><function>pg_stat_get_bgwriter_timed_checkpoints</function>()</literal></entry>
 <entry><type>bigint</type></entry>
 <entry>
-The number of times the bgwriter has started timed checkpoints
+The number of times the background writer has started timed checkpoints
 (because the <varname>checkpoint_timeout</varname> time has expired)
 </entry>
 </row>
@@ -786,7 +786,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 <entry><literal><function>pg_stat_get_bgwriter_requested_checkpoints</function>()</literal></entry>
 <entry><type>bigint</type></entry>
 <entry>
-The number of times the bgwriter has started checkpoints based
+The number of times the background writer has started checkpoints based
 on requests from backends because the <varname>checkpoint_segments</varname>
 has been exceeded or because the <command>CHECKPOINT</command>
 command has been issued
@@ -797,7 +797,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 <entry><literal><function>pg_stat_get_bgwriter_buf_written_checkpoints</function>()</literal></entry>
 <entry><type>bigint</type></entry>
 <entry>
-The number of buffers written by the bgwriter during checkpoints
+The number of buffers written by the background writer during checkpoints
 </entry>
 </row>
@@ -805,7 +805,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 <entry><literal><function>pg_stat_get_bgwriter_buf_written_clean</function>()</literal></entry>
 <entry><type>bigint</type></entry>
 <entry>
-The number of buffers written by the bgwriter for routine cleaning of
+The number of buffers written by the background writer for routine cleaning of
 dirty pages
 </entry>
 </row>
@@ -814,7 +814,7 @@ postgres: <replaceable>user</> <replaceable>database</> <replaceable>host</> <re
 <entry><literal><function>pg_stat_get_bgwriter_maxwritten_clean</function>()</literal></entry>
 <entry><type>bigint</type></entry>
 <entry>
-The number of times the bgwriter has stopped its cleaning scan because
+The number of times the background writer has stopped its cleaning scan because
 it has written more buffers than specified in the
 <varname>bgwriter_lru_maxpages</varname> parameter
 </entry>
@@ -1180,7 +1180,7 @@ provider postgresql {
 <para>
 You should take care that the data types specified for the probe arguments
-match the datatypes of the variables used in the <literal>PG_TRACE</>
+match the data types of the variables used in the <literal>PG_TRACE</>
 macro. This is not checked at compile time. You can check that your newly
 added trace point is available by recompiling, then running the new binary,
 and as root, executing a DTrace command such as:

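The statistics access functions named in the hunks above can be called directly; a sketch that reads the background-writer counters in one row (the pg_stat_bgwriter view exposes the same counters):

    SELECT pg_stat_get_bgwriter_timed_checkpoints()       AS checkpoints_timed,
           pg_stat_get_bgwriter_requested_checkpoints()   AS checkpoints_req,
           pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint,
           pg_stat_get_bgwriter_buf_written_clean()       AS buffers_clean,
           pg_stat_get_bgwriter_maxwritten_clean()        AS maxwritten_clean;
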
-<!-- $PostgreSQL: pgsql/doc/src/sgml/perform.sgml,v 1.66 2007/10/22 21:34:33 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/perform.sgml,v 1.67 2007/11/28 15:42:31 petere Exp $ -->
 <chapter id="performance-tips">
 <title>Performance Tips</title>
@@ -738,10 +738,10 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
 <xref linkend="guc-from-collapse-limit"> and <xref
 linkend="guc-join-collapse-limit">
 are similarly named because they do almost the same thing: one controls
-when the planner will <quote>flatten out</> subselects, and the
+when the planner will <quote>flatten out</> subqueries, and the
 other controls when it will flatten out explicit joins. Typically
 you would either set <varname>join_collapse_limit</> equal to
-<varname>from_collapse_limit</> (so that explicit joins and subselects
+<varname>from_collapse_limit</> (so that explicit joins and subqueries
 act similarly) or set <varname>join_collapse_limit</> to 1 (if you want
 to control join order with explicit joins). But you might set them
 differently if you are trying to fine-tune the trade-off between planning

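Both settings discussed above are ordinary configuration parameters, so the two typical patterns look like this (8 is the default for both):

    -- Keep subquery and explicit-join flattening in step with each other ...
    SET from_collapse_limit = 8;
    SET join_collapse_limit = 8;
    -- ... or pin the join order to the order written in the query:
    SET join_collapse_limit = 1;
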
-<!-- $PostgreSQL: pgsql/doc/src/sgml/plpgsql.sgml,v 1.117 2007/10/26 01:11:09 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/plpgsql.sgml,v 1.118 2007/11/28 15:42:31 petere Exp $ -->
 <chapter id="plpgsql">
 <title><application>PL/pgSQL</application> - <acronym>SQL</acronym> Procedural Language</title>
@@ -3348,7 +3348,7 @@ SELECT * INTO myrec FROM dictionary WHERE word LIKE search_term;
 where <literal>search_term</> is a <application>PL/pgSQL</application>
 variable. The cached plan for this query will never use an index on
 <structfield>word</>, since the planner cannot assume that the
-<literal>LIKE</> pattern will be left-anchored at runtime. To use
+<literal>LIKE</> pattern will be left-anchored at run time. To use
 an index the query must be planned with a specific constant
 <literal>LIKE</> pattern provided. This is another situation where
 <command>EXECUTE</command> can be used to force a new plan to be

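A sketch of the EXECUTE workaround the paragraph refers to, reusing the dictionary/search_term/myrec names from the surrounding example: the pattern becomes a constant at plan time, so an index on word can be considered.

    -- Inside the PL/pgSQL function body:
    EXECUTE 'SELECT * FROM dictionary WHERE word LIKE '
            || quote_literal(search_term)
       INTO myrec;
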
 <!--
-$PostgreSQL: pgsql/doc/src/sgml/ref/alter_table.sgml,v 1.97 2007/05/17 23:36:04 neilc Exp $
+$PostgreSQL: pgsql/doc/src/sgml/ref/alter_table.sgml,v 1.98 2007/11/28 15:42:31 petere Exp $
 PostgreSQL documentation
 -->
@@ -214,7 +214,7 @@ where <replaceable class="PARAMETER">action</replaceable> is one of:
 of course the integrity of the constraint cannot be guaranteed if the
 triggers are not executed.
 The trigger firing mechanism is also affected by the configuration
-variable <xref linkend="guc-session-replication-role">. Simply ENABLEd
+variable <xref linkend="guc-session-replication-role">. Simply enabled
 triggers will fire when the replication role is <quote>origin</>
 (the default) or <quote>local</>. Triggers configured <literal>ENABLE REPLICA</literal>
 will only fire if the session is in <quote>replica</> mode and triggers

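A short illustration of the firing modes described above (table and trigger names are invented for the example):

    -- Fire this trigger only when the session is in replica mode:
    ALTER TABLE orders ENABLE REPLICA TRIGGER orders_audit;
    -- Switch the current session's replication role (superuser only):
    SET session_replication_role = replica;
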
 <!--
-$PostgreSQL: pgsql/doc/src/sgml/ref/create_operator.sgml,v 1.48 2007/02/01 00:28:18 momjian Exp $
+$PostgreSQL: pgsql/doc/src/sgml/ref/create_operator.sgml,v 1.49 2007/11/28 15:42:31 petere Exp $
 PostgreSQL documentation
 -->
@@ -222,9 +222,9 @@ COMMUTATOR = OPERATOR(myschema.===) ,
 <para>
 The obsolete options <literal>SORT1</>, <literal>SORT2</>,
 <literal>LTCMP</>, and <literal>GTCMP</> were formerly used to
-specify the names of sort operators associated with a mergejoinable
+specify the names of sort operators associated with a merge-joinable
 operator. This is no longer necessary, since information about
-associated operators is found by looking at btree operator families
+associated operators is found by looking at B-tree operator families
 instead. If one of these options is given, it is ignored except
 for implicitly setting <literal>MERGES</> true.
 </para>

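In the current style, then, an equality operator only declares MERGES (and, if appropriate, HASHES) and lets the planner find the rest in the operator families; a sketch using complex-number names that are illustrative here:

    CREATE OPERATOR = (
        LEFTARG    = complex,
        RIGHTARG   = complex,
        PROCEDURE  = complex_eq,   -- assumed equality support function
        COMMUTATOR = =,
        MERGES,                    -- sort/comparison details come from the
        HASHES                     -- B-tree and hash operator families
    );
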
 <!--
-$PostgreSQL: pgsql/doc/src/sgml/ref/delete.sgml,v 1.31 2007/06/11 01:16:21 tgl Exp $
+$PostgreSQL: pgsql/doc/src/sgml/ref/delete.sgml,v 1.32 2007/11/28 15:42:31 petere Exp $
 PostgreSQL documentation
 -->
@@ -150,7 +150,7 @@ DELETE FROM [ ONLY ] <replaceable class="PARAMETER">table</replaceable> [ [ AS ]
 from this cursor. The cursor must be a simple (non-join, non-aggregate)
 query on the <command>DELETE</>'s target table.
 Note that <literal>WHERE CURRENT OF</> cannot be
-specified together with a boolean condition.
+specified together with a Boolean condition.
 </para>
 </listitem>
 </varlistentry>

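A minimal sketch of WHERE CURRENT OF in a DELETE, using an illustrative films table; note that it takes the place of, rather than combines with, a Boolean condition:

    BEGIN;
    DECLARE c_films CURSOR FOR SELECT * FROM films;
    FETCH 1 FROM c_films;                        -- position the cursor on a row
    DELETE FROM films WHERE CURRENT OF c_films;  -- delete exactly that row
    COMMIT;
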
 <!--
-$PostgreSQL: pgsql/doc/src/sgml/ref/explain.sgml,v 1.40 2007/04/12 22:39:21 neilc Exp $
+$PostgreSQL: pgsql/doc/src/sgml/ref/explain.sgml,v 1.41 2007/11/28 15:42:31 petere Exp $
 PostgreSQL documentation
 -->
@@ -160,7 +160,7 @@ ROLLBACK;
 </para>
 <para>
-In order to measure the runtime cost of each node in the execution
+In order to measure the run-time cost of each node in the execution
 plan, the current implementation of <command>EXPLAIN
 ANALYZE</command> can add considerable profiling overhead to query
 execution. As a result, running <command>EXPLAIN ANALYZE</command>

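The ROLLBACK visible in the hunk header belongs to the pattern this page recommends for data-modifying statements, since EXPLAIN ANALYZE really executes them; a sketch with an illustrative table name:

    BEGIN;
    EXPLAIN ANALYZE UPDATE accounts SET balance = balance * 1.01;
    ROLLBACK;   -- the statement actually ran, so undo its effects
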
 <!--
-$PostgreSQL: pgsql/doc/src/sgml/ref/select.sgml,v 1.101 2007/06/08 20:26:18 tgl Exp $
+$PostgreSQL: pgsql/doc/src/sgml/ref/select.sgml,v 1.102 2007/11/28 15:42:31 petere Exp $
 PostgreSQL documentation
 -->
@@ -698,7 +698,7 @@ SELECT name FROM distributors ORDER BY code;
 assumed by default. Alternatively, a specific ordering operator
 name can be specified in the <literal>USING</> clause.
 An ordering operator must be a less-than or greater-than
-member of some btree operator family.
+member of some B-tree operator family.
 <literal>ASC</> is usually equivalent to <literal>USING &lt;</> and
 <literal>DESC</> is usually equivalent to <literal>USING &gt;</>.
 (But the creator of a user-defined data type can define exactly what the

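Continuing the query shown in the hunk header, the USING form spells out the ordering operator explicitly:

    -- USING > must name a greater-than member of some B-tree operator family;
    -- for ordinary types this is the same as ORDER BY code DESC.
    SELECT name FROM distributors ORDER BY code USING >;
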
 <!--
-$PostgreSQL: pgsql/doc/src/sgml/ref/update.sgml,v 1.44 2007/06/11 01:16:22 tgl Exp $
+$PostgreSQL: pgsql/doc/src/sgml/ref/update.sgml,v 1.45 2007/11/28 15:42:31 petere Exp $
 PostgreSQL documentation
 -->
@@ -169,7 +169,7 @@ UPDATE [ ONLY ] <replaceable class="PARAMETER">table</replaceable> [ [ AS ] <rep
 from this cursor. The cursor must be a simple (non-join, non-aggregate)
 query on the <command>UPDATE</>'s target table.
 Note that <literal>WHERE CURRENT OF</> cannot be
-specified together with a boolean condition.
+specified together with a Boolean condition.
 </para>
 </listitem>
 </varlistentry>

-<!-- $PostgreSQL: pgsql/doc/src/sgml/runtime.sgml,v 1.385 2007/11/08 15:21:03 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/runtime.sgml,v 1.386 2007/11/28 15:42:31 petere Exp $ -->
 <chapter Id="runtime">
 <title>Operating System Environment</title>
@@ -180,9 +180,9 @@ postgres$ <userinput>initdb -D /usr/local/pgsql/data</userinput>
 <acronym>NFS</> implementations have non-standard semantics, this can
 cause reliability problems (see <ulink
 url="http://www.time-travellers.org/shane/papers/NFS_considered_harmful.html"></ulink>).
-Specifically, delayed (asynchonous) writes to the <acronym>NFS</>
+Specifically, delayed (asynchronous) writes to the <acronym>NFS</>
 server can cause reliability problems; if possible, mount
-<acronym>NFS</> file systems synchonously (without caching) to avoid
+<acronym>NFS</> file systems synchronously (without caching) to avoid
 this. (Storage Area Networks (<acronym>SAN</>) use a low-level
 communication protocol rather than <acronym>NFS</>.)
 </para>

-<!-- $PostgreSQL: pgsql/doc/src/sgml/textsearch.sgml,v 1.38 2007/11/20 15:58:52 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/textsearch.sgml,v 1.39 2007/11/28 15:42:31 petere Exp $ -->
 <chapter id="textsearch">
 <title id="textsearch-title">Full Text Search</title>
@@ -32,7 +32,7 @@
 Textual search operators have existed in databases for years.
 <productname>PostgreSQL</productname> has
 <literal>~</literal>, <literal>~*</literal>, <literal>LIKE</literal>, and
-<literal>ILIKE</literal> operators for textual datatypes, but they lack
+<literal>ILIKE</literal> operators for textual data types, but they lack
 many essential properties required by modern information systems:
 </para>
@@ -132,7 +132,7 @@
 <listitem>
 <para>
-Map synonyms to a single word using <application>ispell</>.
+Map synonyms to a single word using <application>Ispell</>.
 </para>
 </listitem>
@@ -145,7 +145,7 @@
 <listitem>
 <para>
 Map different variations of a word to a canonical form using
-an <application>ispell</> dictionary.
+an <application>Ispell</> dictionary.
 </para>
 </listitem>
@@ -725,7 +725,7 @@ UPDATE tt SET ti =
 <para>
 <function>to_tsquery</function> creates a <type>tsquery</> value from
 <replaceable>querytext</replaceable>, which must consist of single tokens
-separated by the boolean operators <literal>&amp;</literal> (AND),
+separated by the Boolean operators <literal>&amp;</literal> (AND),
 <literal>|</literal> (OR) and <literal>!</literal> (NOT). These operators
 can be grouped using parentheses. In other words, the input to
 <function>to_tsquery</function> must already follow the general rules for
@@ -783,7 +783,7 @@ SELECT to_tsquery('''supernovae stars'' &amp; !crab');
 <function>plainto_tsquery</> transforms unformatted text
 <replaceable>querytext</replaceable> to <type>tsquery</type>.
 The text is parsed and normalized much as for <function>to_tsvector</>,
-then the <literal>&amp;</literal> (AND) boolean operator is inserted
+then the <literal>&amp;</literal> (AND) Boolean operator is inserted
 between surviving words.
 </para>
@@ -798,7 +798,7 @@ SELECT to_tsquery('''supernovae stars'' &amp; !crab');
 </programlisting>
 Note that <function>plainto_tsquery</> cannot
-recognize either boolean operators or weight labels in its input:
+recognize either Boolean operators or weight labels in its input:
 <programlisting>
 SELECT plainto_tsquery('english', 'The Fat &amp; Rats:C');
@@ -1085,7 +1085,7 @@ ORDER BY rank DESC LIMIT 10;
 </listitem>
 <listitem>
 <para>
-<literal>HighlightAll</literal>: boolean flag; if
+<literal>HighlightAll</literal>: Boolean flag; if
 <literal>true</literal> the whole document will be highlighted.
 </para>
 </listitem>
@@ -1131,7 +1131,7 @@ query.',
 <type>tsvector</type> summary, so it can be slow and should be used with
 care. A typical mistake is to call <function>ts_headline</function> for
 <emphasis>every</emphasis> matching document when only ten documents are
-to be shown. <acronym>SQL</acronym> subselects can help; here is an
+to be shown. <acronym>SQL</acronym> subqueries can help; here is an
 example:
 <programlisting>
@@ -1945,7 +1945,7 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h
 <listitem>
 <para>
-Linguistic - ispell dictionaries try to reduce input words to a
+Linguistic - Ispell dictionaries try to reduce input words to a
 normalized form; stemmer dictionaries remove word endings
 </para>
 </listitem>
@@ -2395,7 +2395,7 @@ crab nebulae : crab
 </programlisting>
 Below we create a dictionary and bind some token types to
-an astronomical thesaurus and english stemmer:
+an astronomical thesaurus and English stemmer:
 <programlisting>
 CREATE TEXT SEARCH DICTIONARY thesaurus_astro (
@@ -2610,7 +2610,7 @@ CREATE TEXT SEARCH DICTIONARY english_stem (
 Several predefined text search configurations are available, and
 you can create custom configurations easily. To facilitate management
 of text search objects, a set of <acronym>SQL</acronym> commands
-is available, and there are several psql commands that display information
+is available, and there are several <application>psql</application> commands that display information
 about text search objects (<xref linkend="textsearch-psql">).
 </para>
@@ -2644,7 +2644,7 @@ CREATE TEXT SEARCH DICTIONARY pg_dict (
 );
 </programlisting>
-Next we register the <productname>ispell</> dictionary
+Next we register the <productname>Ispell</> dictionary
 <literal>english_ispell</literal>, which has its own configuration files:
 <programlisting>
@@ -2834,7 +2834,7 @@ SELECT * FROM ts_debug('english','a fat cat sat on a mat - it ate a fat rats');
 <para>
 For a more extensive demonstration, we
 first create a <literal>public.english</literal> configuration and
-ispell dictionary for the English language:
+Ispell dictionary for the English language:
 </para>
 <programlisting>

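A sketch of the subquery pattern the ts_headline paragraph describes (the apod table and its body/ti columns are illustrative): rank and LIMIT first, then call ts_headline only for the ten rows actually shown.

    SELECT id, ts_headline(body, q), rank
    FROM (SELECT id, body, q, ts_rank_cd(ti, q) AS rank
          FROM apod, to_tsquery('stars') q
          WHERE ti @@ q
          ORDER BY rank DESC
          LIMIT 10) AS foo;
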
-<!-- $PostgreSQL: pgsql/doc/src/sgml/wal.sgml,v 1.45 2007/08/01 22:45:07 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/wal.sgml,v 1.46 2007/11/28 15:42:31 petere Exp $ -->
 <chapter id="wal">
 <title>Reliability and the Write-Ahead Log</title>
@@ -162,7 +162,7 @@
 <firstterm>Asynchronous commit</> is an option that allows transactions
 to complete more quickly, at the cost that the most recent transactions may
 be lost if the database should crash. In many applications this is an
-acceptable tradeoff.
+acceptable trade-off.
 </para>
 <para>
@@ -210,7 +210,7 @@
 <para>
 The user can select the commit mode of each transaction, so that
 it is possible to have both synchronous and asynchronous commit
-transactions running concurrently. This allows flexible tradeoffs
+transactions running concurrently. This allows flexible trade-offs
 between performance and certainty of transaction durability.
 The commit mode is controlled by the user-settable parameter
 <xref linkend="guc-synchronous-commit">, which can be changed in any of
@@ -223,7 +223,7 @@
 Certain utility commands, for instance <command>DROP TABLE</>, are
 forced to commit synchronously regardless of the setting of
 <varname>synchronous_commit</varname>. This is to ensure consistency
-between the server's filesystem and the logical state of the database.
+between the server's file system and the logical state of the database.
 The commands supporting two-phase commit, such as <command>PREPARE
 TRANSACTION</>, are also always synchronous.
 </para>
@@ -234,11 +234,11 @@
 <acronym>WAL</acronym> records,
 then changes made during that transaction <emphasis>will</> be lost.
 The duration of the
-risk window is limited because a background process (the <quote>wal
+risk window is limited because a background process (the <quote>WAL
 writer</>) flushes unwritten <acronym>WAL</acronym> records to disk
 every <xref linkend="guc-wal-writer-delay"> milliseconds.
 The actual maximum duration of the risk window is three times
-<varname>wal_writer_delay</varname> because the wal writer is
+<varname>wal_writer_delay</varname> because the WAL writer is
 designed to favor writing whole pages at a time during busy periods.
 </para>

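Since the commit mode is chosen per transaction, a common pattern is to turn it off only where losing the very latest rows after a crash is acceptable (the audit_log table is illustrative):

    BEGIN;
    SET LOCAL synchronous_commit TO off;   -- applies to this transaction only
    INSERT INTO audit_log VALUES ('page viewed');
    COMMIT;   -- returns without waiting for the WAL flush
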
-<!-- $PostgreSQL: pgsql/doc/src/sgml/xoper.sgml,v 1.42 2007/02/06 04:38:31 tgl Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/xoper.sgml,v 1.43 2007/11/28 15:42:31 petere Exp $ -->
 <sect1 id="xoper">
 <title>User-Defined Operators</title>
@@ -340,7 +340,7 @@ table1.column1 OP table2.column2
 some form of equality. In most cases it is only practical to support
 hashing for operators that take the same data type on both sides.
 However, sometimes it is possible to design compatible hash functions
-for two or more datatypes; that is, functions that will generate the
+for two or more data types; that is, functions that will generate the
 same hash codes for <quote>equal</> values, even though the values
 have different representations. For example, it's fairly simple
 to arrange this property when hashing integers of different widths.
@@ -378,8 +378,8 @@ table1.column1 OP table2.column2
 if they are different) that appears in the same operator family.
 If this is not the case, planner errors might occur when the operator
 is used. Also, it is a good idea (but not strictly required) for
-a hash operator family that supports multiple datatypes to provide
-equality operators for every combination of the datatypes; this
+a hash operator family that supports multiple data types to provide
+equality operators for every combination of the data types; this
 allows better optimization.
 </para>
@@ -450,8 +450,8 @@ table1.column1 OP table2.column2
 if they are different) that appears in the same operator family.
 If this is not the case, planner errors might occur when the operator
 is used. Also, it is a good idea (but not strictly required) for
-a btree operator family that supports multiple datatypes to provide
-equality operators for every combination of the datatypes; this
+a btree operator family that supports multiple data types to provide
+equality operators for every combination of the data types; this
 allows better optimization.
 </para>