Commit d07454f5 authored by Peter Eisentraut

Markup additions and spell check. (covers Admin Guide)

parent 84956e71
<!-- $Header: /cvsroot/pgsql/doc/src/sgml/backup.sgml,v 2.12 2001/08/25 18:52:41 tgl Exp $ -->
<!-- $Header: /cvsroot/pgsql/doc/src/sgml/backup.sgml,v 2.13 2001/09/09 23:52:12 petere Exp $ -->
<chapter id="backup">
<title>Backup and Restore</title>
......@@ -236,10 +236,10 @@ cat <replaceable class="parameter">filename</replaceable>.* | psql <replaceable
<formalpara>
<title>Use the custom dump format (V7.1).</title>
<para>
If PostgreSQL was built on a system with the zlib compression library
If PostgreSQL was built on a system with the <application>zlib</> compression library
installed, the custom dump format will compress data as it writes it
to the output file. For large databases, this will produce similar dump
sizes to using gzip, but has the added advantage that the tables can be
sizes to using <command>gzip</command>, but has the added advantage that the tables can be
restored selectively. The following command dumps a database using the
custom dump format:
......
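For reference, a minimal sketch of the kind of invocation the passage above refers to, using pg_dump's custom format and pg_restore for selective restore; the database name, file name, and the exact restore options are placeholders to be checked against the release's reference pages:

    pg_dump -Fc mydb > mydb.dump
    pg_restore -d mydb mydb.dump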
......@@ -22,7 +22,7 @@
Windows. The makefiles included in the source distribution are
written for <productname>Microsoft Visual C++</productname> and will
probably not work with other systems. It should be possible to
compile the libaries manually in other cases.
compile the libraries manually in other cases.
</para>
<tip>
......@@ -99,7 +99,7 @@
</para>
<para>
If you plan to do development using libpq on this machine, you will
If you plan to do development using <application>libpq</application> on this machine, you will
have to add the <filename>src\include</filename> and
<filename>src\interfaces\libpq</filename> subdirectories of the
source tree to the include path in your compilers settings.
......
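As an illustration of the include-path requirement mentioned above, a hedged sketch of a Microsoft Visual C++ command-line compile; the source-tree location and the import-library path/name are assumptions, not something the document specifies:

    cl /I C:\pgsql\src\include /I C:\pgsql\src\interfaces\libpq testlibpq.c C:\pgsql\src\interfaces\libpq\libpq.lib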
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/maintenance.sgml,v 1.2 2001/08/27 23:42:34 tgl Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/maintenance.sgml,v 1.3 2001/09/09 23:52:12 petere Exp $
-->
<chapter id="maintenance">
......@@ -94,7 +94,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/maintenance.sgml,v 1.2 2001/08/27 23:42:34
In normal <productname>PostgreSQL</productname> operation, an UPDATE or
DELETE of a row does not immediately remove the old <firstterm>tuple</>
(version of the row). This approach is necessary to gain the benefits
of multi-version concurrency control (see the User's Guide): the tuple
of multiversion concurrency control (see the User's Guide): the tuple
must not be deleted while
it is still potentially visible to other transactions. But eventually,
an outdated or deleted tuple is no longer of interest to any transaction.
......@@ -106,7 +106,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/maintenance.sgml,v 1.2 2001/08/27 23:42:34
<para>
Clearly, a table that receives frequent updates or deletes will need
to be vacuumed more often than tables that are seldom updated. It may
be useful to set up periodic cron tasks that vacuum only selected tables,
be useful to set up periodic <application>cron</> tasks that vacuum only selected tables,
skipping tables that are known not to change often. This is only likely
to be helpful if you have both large heavily-updated tables and large
seldom-updated tables --- the extra cost of vacuuming a small table
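A hedged sketch of what such a cron entry might look like, run from the crontab of the database superuser; the table name, database name, and schedule are placeholders:

    # crontab entry: vacuum one heavily-updated table nightly at 03:00
    0 3 * * * psql -c "VACUUM ANALYZE bigtable" mydb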
......@@ -170,7 +170,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/maintenance.sgml,v 1.2 2001/08/27 23:42:34
statistics updates if the statistical distribution of the data is not
changing much. A simple rule of thumb is to think about how much
the minimum and maximum values of the columns in the table change.
For example, a timestamp column that contains the time of row update
For example, a <type>timestamp</type> column that contains the time of row update
will have a constantly-increasing maximum value as rows are added and
updated; such a column will probably need more frequent statistics
updates than, say, a column containing URLs for pages accessed on a
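Where only the planner statistics need refreshing, a sketch of the corresponding command (in releases where ANALYZE can be run on its own; otherwise VACUUM ANALYZE serves the same purpose); the names are placeholders:

    psql -c "ANALYZE mytable" mydb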
......@@ -233,12 +233,12 @@ $Header: /cvsroot/pgsql/doc/src/sgml/maintenance.sgml,v 1.2 2001/08/27 23:42:34
<para>
Prior to <productname>PostgreSQL</productname> 7.2, the only defense
against XID wraparound was to re-initdb at least every 4 billion
against XID wraparound was to re-<command>initdb</> at least every 4 billion
transactions. This of course was not very satisfactory for high-traffic
sites, so a better solution has been devised. The new approach allows an
installation to remain up indefinitely, without initdb or any sort of
installation to remain up indefinitely, without <command>initdb</> or any sort of
restart. The price is this maintenance requirement:
<emphasis>every table in the database must be VACUUMed at least once every
<emphasis>every table in the database must be vacuumed at least once every
billion transactions</emphasis>.
</para>
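A sketch of how that requirement is commonly met, assuming the vacuumdb wrapper script is available and run with sufficient privileges; the schedule is illustrative only:

    # weekly database-wide vacuum of every database in the installation
    0 4 * * 0 vacuumdb --all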
......@@ -342,7 +342,7 @@ VACUUM
user-created databases that are to be marked <literal>datallowconn</> =
<literal>false</> in <filename>pg_database</>, since there isn't any
convenient way to vacuum a database that you can't connect to. Note
that VACUUM's automatic warning message about unvacuumed databases will
that <command>VACUUM</command>'s automatic warning message about unvacuumed databases will
ignore <filename>pg_database</> entries with <literal>datallowconn</> =
<literal>false</>, so as to avoid giving false warnings about these
databases; therefore it's up to you to ensure that such databases are
......
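If such a database ever does need a vacuum, one hedged approach (a sketch only; it assumes superuser access and that direct updates of pg_database are acceptable on this release) is to re-enable connections temporarily:

    psql -c "UPDATE pg_database SET datallowconn = true WHERE datname = 'mydb'" template1
    psql -c "VACUUM" mydb
    psql -c "UPDATE pg_database SET datallowconn = false WHERE datname = 'mydb'" template1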
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/manage-ag.sgml,v 2.13 2001/03/29 18:25:10 petere Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/manage-ag.sgml,v 2.14 2001/09/09 23:52:12 petere Exp $
-->
<chapter id="managing-databases">
......@@ -85,11 +85,11 @@ CREATE DATABASE <replaceable>name</>
createdb <replaceable class="parameter">dbname</replaceable>
</synopsis>
<filename>createdb</> does no magic. It connects to the template1
<command>createdb</> does no magic. It connects to the template1
database and executes the <command>CREATE DATABASE</> command,
exactly as described above. It uses <application>psql</> program
internally. The reference page on createdb contains the invocation
details. In particular, createdb without any arguments will create
internally. The reference page on <command>createdb</> contains the invocation
details. In particular, <command>createdb</> without any arguments will create
a database with the current user name, which may or may not be what
you want.
</para>
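To make the equivalence described above concrete, the two invocations below (database name is a placeholder) end up doing the same thing:

    createdb mydb
    psql -c "CREATE DATABASE mydb" template1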
......@@ -132,7 +132,7 @@ export PGDATA2
setenv PGDATA2 /home/postgres/data
</programlisting>
</informalexample>
in csh or tcsh. You have to make sure that this environment
in <application>csh</> or <application>tcsh</>. You have to make sure that this environment
variable is always defined in the server environment, otherwise
you won't be able to access that database. Therefore you probably
want to set it in some sort of shell start-up file or server
......
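Once the variable is known to the server, using the alternative location looks roughly like this (a sketch; it assumes the initlocation utility and createdb's -D option as documented for this release, and the database name is a placeholder):

    initlocation PGDATA2
    createdb -D PGDATA2 mydb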
<!-- $Header: /cvsroot/pgsql/doc/src/sgml/regress.sgml,v 1.18 2001/08/06 22:53:26 tgl Exp $ -->
<!-- $Header: /cvsroot/pgsql/doc/src/sgml/regress.sgml,v 1.19 2001/09/09 23:52:12 petere Exp $ -->
<chapter id="regress">
<title id="regress-title">Regression Tests</title>
......@@ -102,7 +102,7 @@
<prompt>$ </prompt><userinput>gmake installcheck</userinput>
</screen>
The tests will expect to contact the server at the local host and the
default port number, unless directed otherwise by PGHOST and PGPORT
default port number, unless directed otherwise by <envar>PGHOST</envar> and <envar>PGPORT</envar>
environment variables.
</para>
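For example, to point the tests at a server listening on a non-default host and port (values illustrative):

    PGHOST=localhost PGPORT=5433 gmake installcheck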
......@@ -178,7 +178,7 @@
<title>Date and time differences</title>
<para>
Some of the queries in the <quote>timestamp</quote> test will
Some of the queries in the <filename>timestamp</filename> test will
fail if you run the test on the day of a daylight-savings time
changeover, or the day before or after one. These queries assume
that the intervals between midnight yesterday, midnight today and
......@@ -189,21 +189,21 @@
<para>
Most of the date and time results are dependent on the time zone
environment. The reference files are generated for time zone
PST8PDT (Berkeley, California) and there will be apparent
<literal>PST8PDT</literal> (Berkeley, California) and there will be apparent
failures if the tests are not run with that time zone setting.
The regression test driver sets environment variable
<envar>PGTZ</envar> to <literal>PST8PDT</literal>, which normally
ensures proper results. However, your system must provide library
support for the PST8PDT time zone, or the time zone-dependent
support for the <literal>PST8PDT</literal> time zone, or the time zone-dependent
tests will fail. To verify that your machine does have this
support, type the following:
<screen>
<prompt>$ </prompt><userinput>env TZ=PST8PDT date</userinput>
</screen>
The command above should have returned the current system time in
the PST8PDT time zone. If the PST8PDT database is not available,
the <literal>PST8PDT</literal> time zone. If the <literal>PST8PDT</literal> database is not available,
then your system may have returned the time in GMT. If the
PST8PDT time zone is not available, you can set the time zone
<literal>PST8PDT</literal> time zone is not available, you can set the time zone
rules explicitly:
<programlisting>
PGTZ='PST8PDT7,M04.01.0,M10.05.03'; export PGTZ
......@@ -220,7 +220,7 @@ PGTZ='PST8PDT7,M04.01.0,M10.05.03'; export PGTZ
<para>
Some systems using older time zone libraries fail to apply
daylight-savings corrections to dates before 1970, causing
pre-1970 PDT times to be displayed in PST instead. This will
pre-1970 <acronym>PDT</acronym> times to be displayed in <acronym>PST</acronym> instead. This will
result in localized differences in the test results.
</para>
</sect2>
......@@ -300,7 +300,7 @@ ORDER BY to that particular query and thereby eliminate the bogus
</para>
<para>
You might wonder why we don't ORDER all the regress test SELECTs to
You might wonder why we don't order all the regress test queries explicitly to
get rid of this issue once and for all. The reason is that that would
make the regression tests less useful, not more, since they'd tend
to exercise query plan types that produce ordered results to the
......@@ -352,7 +352,7 @@ testname/platformpattern=comparisonfilename
</synopsis>
The test name is just the name of the particular regression test
module. The platform pattern is a pattern in the style of
expr(1) (that is, a regular expression with an implicit
<citerefentry><refentrytitle>expr</><manvolnum>1</></citerefentry> (that is, a regular expression with an implicit
<literal>^</literal> anchor
at the start). It is matched against the platform name as printed
by <filename>config.guess</filename> followed by
......@@ -365,19 +365,19 @@ testname/platformpattern=comparisonfilename
<para>
For example: some systems using older time zone libraries fail to apply
daylight-savings corrections to dates before 1970, causing
pre-1970 PDT times to be displayed in PST instead. This causes a
pre-1970 <acronym>PDT</acronym> times to be displayed in <acronym>PST</acronym> instead. This causes a
few differences in the <filename>horology</> regression test.
Therefore, we provide a variant comparison file,
<filename>horology-no-DST-before-1970.out</filename>, which includes
the results to be expected on these systems. To silence the bogus
<quote>failure</quote> message on HPPA platforms, resultmap
<quote>failure</quote> message on <systemitem>HPPA</systemitem> platforms, <filename>resultmap</filename>
includes
<programlisting>
horology/hppa=horology-no-DST-before-1970
</programlisting>
which will trigger on any machine for which config.guess's output
which will trigger on any machine for which the output of <command>config.guess</command>
begins with <quote><literal>hppa</literal></quote>. Other lines
in resultmap select the variant comparison file for other
in <filename>resultmap</> select the variant comparison file for other
platforms where it's appropriate.
</para>
......
......@@ -39,7 +39,7 @@ CREATE USER <replaceable>name</replaceable>
<command>initdb</command>) it will have the same name as the
operating system user that initialized the area (and is presumably
being used as the user that runs the server). Customarily, this user
will be called <quote>postgres</quote>. In order to create more
will be called <systemitem>postgres</systemitem>. In order to create more
users you have to first connect as this initial user.
</para>
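Concretely, once connected (or acting) as that initial user, a new user can be created either with the shell wrapper or with the SQL command from the synopsis above; the user name is a placeholder:

    createuser joe
    psql -c "CREATE USER joe" template1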
......@@ -132,7 +132,7 @@ ALTER GROUP <replaceable>name</replaceable> DROP USER <replaceable>uname1</repla
<para>
When a database object is created, it is assigned an owner. The
owner is the user that executed the creation statement. There is
currenty no polished interface for changing the owner of a database
currently no polished interface for changing the owner of a database
object. By default, only an owner (or a superuser) can do anything
with the object. In order to allow other users to use it,
<firstterm>privileges</firstterm> must be granted.
......@@ -169,7 +169,7 @@ GRANT SELECT ON accounts TO GROUP staff;
REVOKE ALL ON accounts FROM PUBLIC;
</programlisting>
The set of privileges held by the table owner is always implicit
and is never revokable.
and cannot be revoked.
</para>
</sect1>
......@@ -179,7 +179,7 @@ REVOKE ALL ON accounts FROM PUBLIC;
<para>
Functions and triggers allow users to insert code into the backend
server that other users may execute without knowing it. Hence, both
mechanisms permit users to <firstterm>trojan horse</firstterm>
mechanisms permit users to <firstterm>Trojan horse</firstterm>
others with relative impunity. The only real protection is tight
control over who can define functions (e.g., write to relations
with SQL fields) and triggers. Audit trails and alerters on the
......
<!-- $Header: /cvsroot/pgsql/doc/src/sgml/wal.sgml,v 1.8 2001/08/25 18:52:41 tgl Exp $ -->
<!-- $Header: /cvsroot/pgsql/doc/src/sgml/wal.sgml,v 1.9 2001/09/09 23:52:12 petere Exp $ -->
<chapter id="wal">
<title>Write-Ahead Logging (<acronym>WAL</acronym>)</title>
......@@ -38,7 +38,7 @@
The first obvious benefit of using <acronym>WAL</acronym> is a
significantly reduced number of disk writes, since only the log
file needs to be flushed to disk at the time of transaction
commit; in multi-user environments, commits of many transactions
commit; in multiuser environments, commits of many transactions
may be accomplished with a single <function>fsync()</function> of
the log file. Furthermore, the log file is written sequentially,
and so the cost of syncing the log is much less than the cost of
......@@ -287,7 +287,7 @@
record to the log with <function>LogInsert</function> but before
performing a <function>LogFlush</function>. This delay allows other
backends to add their commit records to the log so as to have all
of them flushed with a single log sync. No sleep will occur if fsync
of them flushed with a single log sync. No sleep will occur if <varname>fsync</varname>
is not enabled or if fewer than <varname>COMMIT_SIBLINGS</varname>
other backends are not currently in active transactions; this avoids
sleeping when it's unlikely that any other backend will commit soon.
......
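The parameters involved can be set in postgresql.conf; a hedged sketch with illustrative values (defaults and units should be checked against the release's documentation):

    # postgresql.conf excerpt
    fsync = true
    commit_delay = 100        # delay between LogInsert and LogFlush, in microseconds
    commit_siblings = 5       # minimum concurrent open transactions before the delay applies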