Commit 39712d11 authored by Neil Conway

Make a few marginal improvements to the documentation for the AV launcher daemon.
parent 513836c7
-<!-- $PostgreSQL: pgsql/doc/src/sgml/maintenance.sgml,v 1.73 2007/05/03 15:47:48 alvherre Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/maintenance.sgml,v 1.74 2007/05/15 15:52:40 neilc Exp $ -->
 <chapter id="maintenance">
 <title>Routine Database Maintenance Tasks</title>
@@ -485,7 +485,9 @@ HINT: Stop the postmaster and use a standalone backend to VACUUM in "mydb".
 multi-process architecture: there is a daemon process, called the
 <firstterm>autovacuum launcher</firstterm>, which is in charge of starting
 an <firstterm>autovacuum worker</firstterm> process on each database every
-<xref linkend="guc-autovacuum-naptime"> seconds.
+<xref linkend="guc-autovacuum-naptime"> seconds. On each run, the worker
+process checks each table within that database, and <command>VACUUM</> or
+<command>ANALYZE</> commands are issued as needed.
 </para>
 <para>
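The launcher/worker scheduling described in this hunk is driven by a handful of server settings. A minimal postgresql.conf sketch; the parameter names are the actual autovacuum GUCs of this era, but the values here are purely illustrative, not recommendations:

```ini
# postgresql.conf -- illustrative values only
autovacuum = on                 # run the autovacuum launcher daemon
autovacuum_naptime = 1min       # delay between worker runs on a given database
autovacuum_max_workers = 3      # cap on concurrently running worker processes
```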
@@ -493,7 +495,7 @@ HINT: Stop the postmaster and use a standalone backend to VACUUM in "mydb".
 processes that may be running at any time, so if the <command>VACUUM</>
 and <command>ANALYZE</> work to do takes too long to run, the deadline may
 be failed to meet for other databases. Also, if a particular database
-takes long to process, more than one worker may be processing it
+takes a long time to process, more than one worker may be processing it
 simultaneously. The workers are smart enough to avoid repeating work that
 other workers have done, so this is normally not a problem. Note that the
 number of running workers does not count towards the <xref
@@ -501,12 +503,6 @@ HINT: Stop the postmaster and use a standalone backend to VACUUM in "mydb".
 linkend="guc-superuser-reserved-connections"> limits.
 </para>
-<para>
-On each run, the worker process checks each table within that database, and
-<command>VACUUM</command> or <command>ANALYZE</command> commands are
-issued as needed.
-</para>
 <para>
 Tables whose <structfield>relfrozenxid</> value is more than
 <varname>autovacuum_freeze_max_age</> transactions old are always
@@ -591,19 +587,19 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tuples
 <caution>
 <para>
 The contents of the <structname>pg_autovacuum</structname> system
-catalog are currently not saved in database dumps created by
-the tools <command>pg_dump</command> and <command>pg_dumpall</command>.
-If you want to preserve them across a dump/reload cycle, make sure you
-dump the catalog manually.
+catalog are currently not saved in database dumps created by the
+tools <application>pg_dump</> and <application>pg_dumpall</>. If
+you want to preserve them across a dump/reload cycle, make sure
+you dump the catalog manually.
 </para>
 </caution>
 <para>
-When multiple workers are running, the cost limit is "balanced" among all
-the running workers, so that the total impact on the system is the same,
-regardless of the number of workers actually running.
+When multiple workers are running, the cost limit is
+<quote>balanced</quote> among all the running workers, so that the
+total impact on the system is the same, regardless of the number
+of workers actually running.
 </para>
 </sect2>
 </sect1>
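The hunk header above quotes the analyze-threshold formula from the surrounding section (vacuum uses the same shape). A small Python sketch of both trigger formulas; the base values and scale factors used as defaults here (50, 0.1, 0.2) are assumptions matching the commonly shipped defaults of this era, so check the actual `autovacuum_*_threshold` and `*_scale_factor` settings on your server:

```python
def analyze_threshold(n_tuples, base=50, scale_factor=0.1):
    # Tuples changed since the last ANALYZE before autovacuum analyzes the table:
    # analyze threshold = analyze base threshold + analyze scale factor * number of tuples
    return base + scale_factor * n_tuples

def vacuum_threshold(n_tuples, base=50, scale_factor=0.2):
    # Same formula with the vacuum base threshold and scale factor.
    return base + scale_factor * n_tuples

# For a 10,000-tuple table under the assumed defaults, an ANALYZE is
# triggered after about 1,050 changed tuples and a VACUUM after about
# 2,050 obsoleted tuples.
assert abs(analyze_threshold(10000) - 1050) < 1e-6
assert abs(vacuum_threshold(10000) - 2050) < 1e-6
```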
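The last hunk says the cost limit is balanced among running workers so that the total I/O impact on the system stays constant. A deliberately simplified Python sketch of that idea; the real balancing also honors per-table cost settings from <structname>pg_autovacuum</>, which this even split ignores:

```python
def balanced_cost_limits(total_limit, n_workers):
    # Even split: each worker gets an equal share of the global vacuum
    # cost limit, so the sum (the total system impact) is unchanged
    # regardless of how many workers happen to be running.
    return [total_limit / n_workers] * n_workers

# Whether one worker or three are running, the aggregate budget is the same.
one = balanced_cost_limits(200, 1)
three = balanced_cost_limits(200, 3)
assert abs(sum(one) - sum(three)) < 1e-9
```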