Commit bbde0903 authored by Bruce Momjian

Remove tabs from SGML files.

parent b913a94d
<!-- $PostgreSQL: pgsql/doc/src/sgml/config.sgml,v 1.140 2007/08/21 15:13:16 momjian Exp $ -->
<chapter Id="runtime-config">
 <title>Server Configuration</title>
@@ -2287,7 +2287,7 @@ SELECT * FROM parent WHERE key = 2400;
<listitem>
<para>
This parameter allows messages sent to <application>stderr</>,
and CSV logs, to be
captured and redirected into log files.
This method, in combination with logging to <application>stderr</>,
is often more useful than
@@ -2295,8 +2295,8 @@ SELECT * FROM parent WHERE key = 2400;
might not appear in <application>syslog</> output (a common example
is dynamic-linker failure messages).
This parameter can only be set at server start.
<varname>logging_collector</varname> must be enabled to generate
CSV logs.
</para>
</listitem>
</varlistentry>
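As a rough sketch (not part of this commit), the collector and CSV output described above would typically be enabled with <filename>postgresql.conf</filename> settings along these lines; the directory name is only an example:
<programlisting>
logging_collector = on            # required for csvlog output to be written
log_destination = 'stderr,csvlog' # keep plain-text stderr output and also write CSV logs
log_directory = 'pg_log'          # where the collector places captured log files (example path)
</programlisting>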
@@ -2342,11 +2342,11 @@ SELECT * FROM parent WHERE key = 2400;
file or on the server command line.
</para>
<para>
If <varname>log_destination</> is set to <systemitem>csvlog</>,
<literal>.csv</> will be appended to the timestamped
<varname>log_filename</> to create the final log file name.
(If log_filename ends in <literal>.log</>, the suffix is overwritten.)
In the case of the example above, the
file name will be <literal>server_log.1093827753.csv</literal>.
</para>
</listitem>
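To make the suffix rule concrete, assuming the example above uses an epoch-seconds pattern for <varname>log_filename</varname> (an assumption, since the pattern itself is not shown in this hunk), the settings and resulting file name would be roughly:
<programlisting>
log_destination = 'csvlog'
log_filename = 'server_log.%s'    # assumed pattern; %s expands to seconds since the epoch
# the collector then writes a file such as server_log.1093827753.csv
</programlisting>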
@@ -3088,9 +3088,9 @@ SELECT * FROM parent WHERE key = 2400;
<title>Using the csvlog</title>
<para>
Including <literal>csvlog</> in the <varname>log_destination</> list
provides a convenient way to import log files into a database table.
Here is a sample table definition for storing csvlog output:
</para>
<programlisting>
@@ -3124,7 +3124,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
<para>
There are a few things you need to do to import csvlog files easily and
automatically (a consolidated configuration sketch follows the list):
<orderedlist>
<listitem>
@@ -3141,15 +3141,15 @@ guess what
<listitem>
<para>
Set <varname>log_rotation_size</varname> to 0 to disable
size-based log rotation, as it makes the log filename difficult
to predict.
</para>
</listitem>
<listitem>
<para>
Set <varname>log_truncate_on_rotation</varname> = on so that old
log data isn't mixed with the new in the same file.
</para>
</listitem>
@@ -3160,12 +3160,12 @@ guess what
the same information twice. The COPY command commits all of
the data it imports at one time, and any single error will
cause the entire import to fail.
If you import a partial log file and later import the file again
when it is complete, the primary key violation will cause the
import to fail. Wait until the log is complete and closed before
importing it. This will also protect against accidentally importing a
partial line that hasn't been completely written, which would
also cause the COPY to fail.
</para>
</listitem>
</orderedlist>
...