Commit 991bfe11 authored by Heikki Linnakangas

Enhance documentation of the built-in standby mode, explaining the retry
loop in standby mode, trying to restore from archive, pg_xlog and
streaming.

Move sections around to make the high availability chapter more
coherent: the most prominent part is now a "Log-Shipping Standby Servers"
section that describes what a standby server is (like the old
"Warm Standby Servers for High Availability" section), and how to
set up a warm standby server, including streaming replication, using the
built-in standby mode. The pg_standby method is described in another
section called "Alternative method for log shipping", with the added
caveat that it doesn't work with streaming replication.
parent 55a01b4c
<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.54 2010/03/19 19:31:06 sriggs Exp $ -->
<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.55 2010/03/31 19:13:01 heikki Exp $ -->
<chapter id="high-availability">
<title>High Availability, Load Balancing, and Replication</title>
@@ -455,32 +455,10 @@ protocol to make nodes agree on a serializable transactional order.
</sect1>
<sect1 id="warm-standby">
<title>File-based Log Shipping</title>
<indexterm zone="high-availability">
<primary>warm standby</primary>
</indexterm>
<indexterm zone="high-availability">
<primary>PITR standby</primary>
</indexterm>
<indexterm zone="high-availability">
<primary>standby server</primary>
</indexterm>
<indexterm zone="high-availability">
<primary>log shipping</primary>
</indexterm>
<sect1 id="warm-standby">
<title>Log-Shipping Standby Servers</title>
<indexterm zone="high-availability">
<primary>witness server</primary>
</indexterm>
<indexterm zone="high-availability">
<primary>STONITH</primary>
</indexterm>
<para>
Continuous archiving can be used to create a <firstterm>high
@@ -510,8 +488,8 @@ protocol to make nodes agree on a serializable transactional order.
adjacent system, another system at the same site, or another system on
the far side of the globe. The bandwidth required for this technique
varies according to the transaction rate of the primary server.
Record-based log shipping is also possible with custom-developed
procedures, as discussed in <xref linkend="warm-standby-record">.
Record-based log shipping is also possible with streaming replication
(see <xref linkend="streaming-replication">).
</para>
<para>
@@ -519,26 +497,52 @@ protocol to make nodes agree on a serializable transactional order.
records are shipped after transaction commit. As a result, there is a
window for data loss should the primary server suffer a catastrophic
failure; transactions not yet shipped will be lost. The size of the
data loss window can be limited by use of the
data loss window in file-based log shipping can be limited by use of the
<varname>archive_timeout</varname> parameter, which can be set as low
as a few seconds. However such a low setting will
substantially increase the bandwidth required for file shipping.
If you need a window of less than a minute or so, consider using
<xref linkend="streaming-replication">.
streaming replication (see <xref linkend="streaming-replication">).
</para>
<para>
The standby server is not available for access, since it is continually
performing recovery processing. Recovery performance is sufficiently
good that the standby will typically be only moments away from full
Recovery performance is sufficiently good that the standby will
typically be only moments away from full
availability once it has been activated. As a result, this is called
a warm standby configuration which offers high
availability. Restoring a server from an archived base backup and
rollforward will take considerably longer, so that technique only
offers a solution for disaster recovery, not high availability.
A standby server can also be used for read-only queries, in which case
it is called a Hot Standby server. See <xref linkend="hot-standby"> for
more information.
</para>
<sect2 id="warm-standby-planning">
<indexterm zone="high-availability">
<primary>warm standby</primary>
</indexterm>
<indexterm zone="high-availability">
<primary>PITR standby</primary>
</indexterm>
<indexterm zone="high-availability">
<primary>standby server</primary>
</indexterm>
<indexterm zone="high-availability">
<primary>log shipping</primary>
</indexterm>
<indexterm zone="high-availability">
<primary>witness server</primary>
</indexterm>
<indexterm zone="high-availability">
<primary>STONITH</primary>
</indexterm>
<sect2 id="standby-planning">
<title>Planning</title>
<para>
@@ -573,188 +577,114 @@ protocol to make nodes agree on a serializable transactional order.
versa.
</para>
<para>
There is no special mode required to enable a standby server. The
operations that occur on both primary and standby servers are
normal continuous archiving and recovery tasks. The only point of
contact between the two database servers is the archive of WAL files
that both share: primary writing to the archive, standby reading from
the archive. Care must be taken to ensure that WAL archives from separate
primary servers do not become mixed together or confused. The archive
need not be large if it is only required for standby operation.
</para>
</sect2>
<para>
The magic that makes the two loosely coupled servers work together is
simply a <varname>restore_command</> used on the standby that,
when asked for the next WAL file, waits for it to become available from
the primary. The <varname>restore_command</> is specified in the
<filename>recovery.conf</> file on the standby server. Normal recovery
processing would request a file from the WAL archive, reporting failure
if the file was unavailable. For standby processing it is normal for
the next WAL file to be unavailable, so the standby must wait for
it to appear. For files ending in <literal>.backup</> or
<literal>.history</> there is no need to wait, and a non-zero return
code must be returned. A waiting <varname>restore_command</> can be
written as a custom script that loops after polling for the existence of
the next WAL file. There must also be some way to trigger failover, which
should interrupt the <varname>restore_command</>, break the loop and
return a file-not-found error to the standby server. This ends recovery
and the standby will then come up as a normal server.
</para>
<sect2 id="standby-server-operation">
<title>Standby Server Operation</title>
<para>
Pseudocode for a suitable <varname>restore_command</> is:
<programlisting>
triggered = false;
while (!NextWALFileReady() &amp;&amp; !triggered)
{
    sleep(100000L);    /* wait for ~0.1 sec */
    if (CheckForExternalTrigger())
        triggered = true;
}
if (!triggered)
    CopyWALFileForRecovery();
</programlisting>
In standby mode, the server continuously applies WAL received from the
master server. The standby server can read WAL from a WAL archive
(see <varname>restore_command</>) or directly from the master
over a TCP connection (streaming replication). The standby server will
also attempt to restore any WAL found in the standby cluster's
<filename>pg_xlog</> directory. That typically happens after a server
restart, when the standby again replays WAL that was streamed from the
master before the restart, but you can also manually copy files to
<filename>pg_xlog</> at any time to have them replayed.
</para>
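<para>
As a rough illustration of that last point (the file name and data
directory path here are only hypothetical examples), a WAL segment saved
elsewhere can simply be copied into the standby's <filename>pg_xlog</>
directory and it will be picked up and replayed:
<programlisting>
cp /mnt/wal-backup/000000010000000000000042 /var/lib/pgsql/standby-data/pg_xlog/
</programlisting>
</para>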
<para>
A working example of a waiting <varname>restore_command</> is provided
as a <filename>contrib</> module named <application>pg_standby</>. It
should be used as a reference on how to correctly implement the logic
described above. It can also be extended as needed to support specific
configurations and environments.
At startup, the standby begins by restoring all WAL available in the
archive location, calling <varname>restore_command</>. Once it
reaches the end of WAL available there and <varname>restore_command</>
fails, it tries to restore any WAL available in the <filename>pg_xlog</>
directory. If that fails, and streaming replication has been configured,
the standby tries to connect to the primary server and start streaming
WAL from the last valid record found in the archive or <filename>pg_xlog</>.
If that fails, or streaming replication is not configured, or if the
connection is later disconnected, the standby starts again from the
beginning and tries to restore the file from the archive. This loop of
retries from the archive, <filename>pg_xlog</>, and via streaming
replication continues until the server is stopped or failover is
triggered by a trigger file.
</para>
<para>
<productname>PostgreSQL</productname> does not provide the system
software required to identify a failure on the primary and notify
the standby database server. Many such tools exist and are well
integrated with the operating system facilities required for
successful failover, such as IP address migration.
Standby mode is exited and the server switches to normal operation
when a trigger file is found (<varname>trigger_file</>). Before failover,
it will restore any WAL available in the archive or in
<filename>pg_xlog</>, but won't try to connect to the master or wait for
files to become available in the archive.
</para>
</sect2>
<para>
The method for triggering failover is an important part of planning
and design. One potential option is the <varname>restore_command</>
command. It is executed once for each WAL file, but the process
running the <varname>restore_command</> is created and dies for
each file, so there is no daemon or server process, and
signals or a signal handler cannot be used. Therefore, the
<varname>restore_command</> is not suitable to trigger failover.
It is possible to use a simple timeout facility, especially if
used in conjunction with a known <varname>archive_timeout</>
setting on the primary. However, this is somewhat error prone
since a network problem or busy primary server might be sufficient
to initiate failover. A notification mechanism such as the explicit
creation of a trigger file is ideal, if this can be arranged.
</para>
<sect2 id="preparing-master-for-standby">
<title>Preparing Master for Standby Servers</title>
<para>
The size of the WAL archive can be minimized by using the <literal>%r</>
option of the <varname>restore_command</>. This option specifies the
last archive file name that needs to be kept to allow the recovery to
restart correctly. This can be used to truncate the archive once
files are no longer required, assuming the archive is writable from the
standby server.
Set up continuous archiving to a WAL archive on the master, as described
in <xref linkend="continuous-archiving">. The archive location should be
accessible from the standby even when the master is down, i.e., it should
reside on the standby server itself or another trusted server, not on
the master server.
</para>
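<para>
For example (the archive path is hypothetical), the relevant settings in
<filename>postgresql.conf</> on the master might look like this, with the
archive directory residing on the standby or on a shared location the
standby can read:
<programlisting>
archive_mode = on
archive_command = 'cp %p /mnt/standby-archive/%f'   # or e.g. scp to the standby host
archive_timeout = 60                                 # optional, limits the data-loss window
</programlisting>
</para>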
</sect2>
<sect2 id="warm-standby-config">
<title>Implementation</title>
<para>
The short procedure for configuring a standby server is as follows. For
full details of each step, refer to previous sections as noted.
<orderedlist>
<listitem>
<para>
Set up primary and standby systems as nearly identical as
possible, including two identical copies of
<productname>PostgreSQL</> at the same release level.
</para>
</listitem>
<listitem>
<para>
Set up continuous archiving from the primary to a WAL archive
directory on the standby server. Ensure that
<xref linkend="guc-archive-mode">,
<xref linkend="guc-archive-command"> and
<xref linkend="guc-archive-timeout">
are set appropriately on the primary
(see <xref linkend="backup-archiving-wal">).
</para>
</listitem>
<listitem>
<para>
Make a base backup of the primary server (see <xref
linkend="backup-base-backup">), and load this data onto the standby.
</para>
</listitem>
<listitem>
<para>
Begin recovery on the standby server from the local WAL
archive, using a <filename>recovery.conf</> that specifies a
<varname>restore_command</> that waits as described
previously (see <xref linkend="backup-pitr-recovery">).
</para>
</listitem>
</orderedlist>
If you want to use streaming replication, set up authentication to allow
streaming replication connections and set <varname>max_wal_senders</> in
the configuration file of the primary server.
</para>
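<para>
A minimal sketch of those primary-side settings (the user name, client
address and number of senders are only examples):
<programlisting>
# postgresql.conf on the primary
max_wal_senders = 3

# pg_hba.conf on the primary: allow the standby to connect to the
# replication pseudo-database
host    replication    foo    192.168.1.100/32    md5
</programlisting>
</para>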
<para>
Recovery treats the WAL archive as read-only, so once a WAL file has
been copied to the standby system it can be copied to tape at the same
time as it is being read by the standby database server.
Thus, running a standby server for high availability can be performed at
the same time as files are stored for longer term disaster recovery
purposes.
Take a base backup as described in <xref linkend="backup-base-backup">
to bootstrap the standby server.
</para>
</sect2>
<sect2 id="standby-server-setup">
<title>Setting Up the Standby Server</title>
<para>
For testing purposes, it is possible to run both primary and standby
servers on the same system. This does not provide any worthwhile
improvement in server robustness, nor would it be described as HA.
To set up the standby server, restore the base backup taken from the
primary server (see <xref linkend="backup-pitr-recovery">). In the recovery command file
<filename>recovery.conf</> in the standby's cluster data directory,
turn on <varname>standby_mode</>. Set <varname>restore_command</> to
a simple command to copy files from the WAL archive. If you want to
use streaming replication, set <varname>primary_conninfo</>.
</para>
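<para>
A minimal <filename>recovery.conf</> sketch for the built-in standby mode
(all paths and the connection string are hypothetical; adjust them to your
environment):
<programlisting>
standby_mode = 'on'
restore_command = 'cp /mnt/standby-archive/%f %p'
# only needed if you want streaming replication:
primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
# optional, allows promoting the standby by creating this file:
trigger_file = '/var/lib/pgsql/standby-data/failover.trigger'
</programlisting>
</para>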
</sect2>
<sect2 id="warm-standby-record">
<title>Record-based Log Shipping</title>
<note>
<para>
Do not use pg_standby or similar tools with the built-in standby mode
described here. <varname>restore_command</> should return immediately
if the file does not exist; the server will retry the command if
necessary. See <xref linkend="log-shipping-alternative">
for using tools like pg_standby.
</para>
</note>
<para>
<productname>PostgreSQL</productname> directly supports file-based
log shipping as described above. It is also possible to implement
record-based log shipping, though this requires custom development.
You can use <varname>restartpoint_command</> to prune the archive of files
no longer needed by the standby.
</para>
<para>
An external program can call the <function>pg_xlogfile_name_offset()</>
function (see <xref linkend="functions-admin">)
to find out the file name and the exact byte offset within it of
the current end of WAL. It can then access the WAL file directly
and copy the data from the last known end of WAL through the current end
over to the standby servers. With this approach, the window for data
loss is the polling cycle time of the copying program, which can be very
small, and there is no wasted bandwidth from forcing partially-used
segment files to be archived. Note that the standby servers'
<varname>restore_command</> scripts can only deal with whole WAL files,
so the incrementally copied data is not ordinarily made available to
the standby servers. It is of use only when the primary dies &mdash;
then the last partial WAL file is fed to the standby before allowing
it to come up. The correct implementation of this process requires
cooperation of the <varname>restore_command</> script with the data
copying program.
If you're setting up the standby server for high availability purposes,
set up WAL archiving, connections and authentication like the primary
server, because the standby server will work as a primary server after
failover. If you're setting up the standby server for reporting
purposes, with no plans to fail over to it, configure the standby
accordingly.
</para>
<para>
Starting with <productname>PostgreSQL</> version 9.0, you can use
streaming replication (see <xref linkend="streaming-replication">) to
achieve the same benefits with less effort.
You can have any number of standby servers, but if you use streaming
replication, make sure you set <varname>max_wal_senders</> high enough in
the primary to allow them to be connected simultaneously.
</para>
</sect2>
</sect1>
<sect1 id="streaming-replication">
<sect2 id="streaming-replication">
<title>Streaming Replication</title>
<indexterm zone="high-availability">
@@ -785,101 +715,40 @@ if (!triggered)
delete old WAL files still required by the standby.
</para>
<sect2 id="streaming-replication-setup">
<title>Setup</title>
<para>
The short procedure for configuring streaming replication is as follows.
For full details of each step, refer to other sections as noted.
<orderedlist>
<listitem>
<para>
Set up primary and standby systems as near identically as possible,
including two identical copies of <productname>PostgreSQL</> at the
same release level.
</para>
</listitem>
<listitem>
<para>
Set up continuous archiving from the primary to a WAL archive located
in a directory on the standby server. In particular, set
<xref linkend="guc-archive-mode"> and
<xref linkend="guc-archive-command">
to archive WAL files in a location accessible from the standby
(see <xref linkend="backup-archiving-wal">).
</para>
</listitem>
<para>
To use streaming replication, set up a file-based log-shipping standby
server as described in <xref linkend="warm-standby">. The step that
turns a file-based log-shipping standby into a streaming replication
standby is setting the <varname>primary_conninfo</> setting in the
<filename>recovery.conf</> file to point to the primary server. Set
<xref linkend="guc-listen-addresses"> and authentication options
(see <filename>pg_hba.conf</>) on the primary so that the standby server
can connect to the <literal>replication</> pseudo-database on the primary
server (see <xref linkend="streaming-replication-authentication">).
</para>
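<para>
On the primary side this typically just means making sure the server
listens on an address the standby can reach; the value below is only an
example:
<programlisting>
# postgresql.conf on the primary
listen_addresses = '*'    # or a specific address reachable by the standby
</programlisting>
</para>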
<listitem>
<para>
Set <xref linkend="guc-listen-addresses"> and authentication options
(see <filename>pg_hba.conf</>) on the primary so that the standby server can connect to
the <literal>replication</> pseudo-database on the primary server (see
<xref linkend="streaming-replication-authentication">).
</para>
<para>
On systems that support the keepalive socket option, setting
<xref linkend="guc-tcp-keepalives-idle">,
<xref linkend="guc-tcp-keepalives-interval"> and
<xref linkend="guc-tcp-keepalives-count"> helps the master promptly
notice a broken connection.
</para>
</listitem>
<listitem>
<para>
Set the maximum number of concurrent connections from the standby servers
(see <xref linkend="guc-max-wal-senders"> for details).
</para>
</listitem>
<listitem>
<para>
Start the <productname>PostgreSQL</> server on the primary.
</para>
</listitem>
<listitem>
<para>
Make a base backup of the primary server (see
<xref linkend="backup-base-backup">), and load this data onto the
standby. Note that all files present in <filename>pg_xlog</>
and <filename>pg_xlog/archive_status</> on the <emphasis>standby</>
server should be removed because they might be obsolete.
</para>
</listitem>
<listitem>
<para>
If you're setting up the standby server for high availability purposes,
set up WAL archiving, connections and authentication like the primary
server, because the standby server will work as a primary server after
failover. If you're setting up the standby server for reporting
purposes, with no plans to fail over to it, configure the standby
accordingly.
</para>
</listitem>
<listitem>
<para>
Create a recovery command file <filename>recovery.conf</> in the data
directory on the standby server. Set <varname>restore_command</varname>
as you would in normal recovery from a continuous archiving backup
(see <xref linkend="backup-pitr-recovery">). <literal>pg_standby</> or
similar tools that wait for the next WAL file to arrive cannot be used
with streaming replication, as the server handles retries and waiting
itself. Enable <varname>standby_mode</varname>. Set
<varname>primary_conninfo</varname> to point to the primary server.
</para>
<para>
On systems that support the keepalive socket option, setting
<xref linkend="guc-tcp-keepalives-idle">,
<xref linkend="guc-tcp-keepalives-interval"> and
<xref linkend="guc-tcp-keepalives-count"> helps the master promptly
notice a broken connection.
</para>
</listitem>
<listitem>
<para>
Start the <productname>PostgreSQL</> server on the standby. The standby
server will go into recovery mode and proceed to receive WAL records
from the primary and apply them continuously.
</para>
</listitem>
</orderedlist>
</para>
</sect2>
<para>
Set the maximum number of concurrent connections from the standby servers
(see <xref linkend="guc-max-wal-senders"> for details).
</para>
<para>
When the standby is started and <varname>primary_conninfo</> is set
correctly, the standby will connect to the primary after replaying all
WAL files available in the archive. If the connection is established
successfully, you will see a walreceiver process in the standby, and
a corresponding walsender process in the primary.
</para>
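<para>
A rough way to check this (exact process titles can vary slightly between
platforms) is to look at the process list on each machine:
<programlisting>
$ ps -ef | grep 'wal receiver'    # on the standby
$ ps -ef | grep 'wal sender'      # on the primary
</programlisting>
</para>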
<sect2 id="streaming-replication-authentication">
<sect3 id="streaming-replication-authentication">
<title>Authentication</title>
<para>
It is very important that the access privilege for replication be setup
@@ -928,7 +797,8 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
automatically. If you mention the database parameter at all within
<varname>primary_conninfo</varname> then a FATAL error will be raised.
</para>
</sect2>
</sect3>
</sect2>
</sect1>
<sect1 id="warm-standby-failover">
@@ -989,8 +859,220 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
failover mechanism to ensure that it will really work when you need it.
Written administration procedures are advised.
</para>
<para>
To trigger failover of a log-shipping standby server, create a trigger
file with the filename and path specified by the <varname>trigger_file</>
setting in <filename>recovery.conf</>. If <varname>trigger_file</> is
not given, there is no way to exit recovery in the standby and promote
it to a master. That can be useful, for example, for reporting servers that are
only used to offload read-only queries from the primary, not for high
availability purposes.
</para>
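<para>
For example, if <filename>recovery.conf</> contains a (hypothetical)
setting such as <literal>trigger_file = '/var/lib/pgsql/standby-data/failover.trigger'</>,
failover can be triggered simply by creating that file on the standby:
<programlisting>
touch /var/lib/pgsql/standby-data/failover.trigger
</programlisting>
</para>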
</sect1>
<sect1 id="log-shipping-alternative">
<title>Alternative Method for Log Shipping</title>
<para>
An alternative to the built-in standby mode described in the previous
sections is to use a <varname>restore_command</> that polls the archive
location.
This was the only option available in versions 8.4 and below. In this
setup, set <varname>standby_mode</> off, because you are implementing
the polling required for standby operation yourself. See
contrib/pg_standby (<xref linkend="pgstandby">) for a reference
implementation of this.
</para>
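<para>
As a sketch of this setup (the archive path, trigger file and log file
names are only examples; see <xref linkend="pgstandby"> for the full set of
options), the <filename>recovery.conf</> might contain:
<programlisting>
standby_mode = 'off'
restore_command = 'pg_standby -d -s 2 -t /tmp/pgsql.trigger.5432 /mnt/standby-archive %f %p %r 2>>standby.log'
</programlisting>
Here <application>pg_standby</> does the waiting and polling for each WAL
file itself, and the <literal>%r</> argument lets it clean up archive files
that are no longer needed for a restart.
</para>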
<para>
Note that in this mode, the server will apply WAL one file at a
time, so if you use the standby server for queries (see <xref linkend="hot-standby">),
there is a bigger delay between an action in the master and when the
action becomes visible in the standby, corresponding to the time it takes
to fill up the WAL file. <varname>archive_timeout</> can be used to make
that delay shorter. Also note that you can't combine streaming replication
with this method.
</para>
<para>
The operations that occur on both primary and standby servers are
normal continuous archiving and recovery tasks. The only point of
contact between the two database servers is the archive of WAL files
that both share: primary writing to the archive, standby reading from
the archive. Care must be taken to ensure that WAL archives from separate
primary servers do not become mixed together or confused. The archive
need not be large if it is only required for standby operation.
</para>
<para>
The magic that makes the two loosely coupled servers work together is
simply a <varname>restore_command</> used on the standby that,
when asked for the next WAL file, waits for it to become available from
the primary. The <varname>restore_command</> is specified in the
<filename>recovery.conf</> file on the standby server. Normal recovery
processing would request a file from the WAL archive, reporting failure
if the file was unavailable. For standby processing it is normal for
the next WAL file to be unavailable, so the standby must wait for
it to appear. For files ending in <literal>.backup</> or
<literal>.history</> there is no need to wait, and a non-zero return
code must be returned. A waiting <varname>restore_command</> can be
written as a custom script that loops after polling for the existence of
the next WAL file. There must also be some way to trigger failover, which
should interrupt the <varname>restore_command</>, break the loop and
return a file-not-found error to the standby server. This ends recovery
and the standby will then come up as a normal server.
</para>
<para>
Pseudocode for a suitable <varname>restore_command</> is:
<programlisting>
triggered = false;
while (!NextWALFileReady() &amp;&amp; !triggered)
{
    sleep(100000L);    /* wait for ~0.1 sec */
    if (CheckForExternalTrigger())
        triggered = true;
}
if (!triggered)
    CopyWALFileForRecovery();
</programlisting>
</para>
<para>
A working example of a waiting <varname>restore_command</> is provided
as a <filename>contrib</> module named <application>pg_standby</>. It
should be used as a reference on how to correctly implement the logic
described above. It can also be extended as needed to support specific
configurations and environments.
</para>
<para>
<productname>PostgreSQL</productname> does not provide the system
software required to identify a failure on the primary and notify
the standby database server. Many such tools exist and are well
integrated with the operating system facilities required for
successful failover, such as IP address migration.
</para>
<para>
The method for triggering failover is an important part of planning
and design. One potential option is the <varname>restore_command</>
command. It is executed once for each WAL file, but the process
running the <varname>restore_command</> is created and dies for
each file, so there is no daemon or server process, and
signals or a signal handler cannot be used. Therefore, the
<varname>restore_command</> is not suitable to trigger failover.
It is possible to use a simple timeout facility, especially if
used in conjunction with a known <varname>archive_timeout</>
setting on the primary. However, this is somewhat error prone
since a network problem or busy primary server might be sufficient
to initiate failover. A notification mechanism such as the explicit
creation of a trigger file is ideal, if this can be arranged.
</para>
<para>
The size of the WAL archive can be minimized by using the <literal>%r</>
option of the <varname>restore_command</>. This option specifies the
last archive file name that needs to be kept to allow the recovery to
restart correctly. This can be used to truncate the archive once
files are no longer required, assuming the archive is writable from the
standby server.
</para>
<sect2 id="warm-standby-config">
<title>Implementation</title>
<para>
The short procedure for configuring a standby server is as follows. For
full details of each step, refer to previous sections as noted.
<orderedlist>
<listitem>
<para>
Set up primary and standby systems as nearly identical as
possible, including two identical copies of
<productname>PostgreSQL</> at the same release level.
</para>
</listitem>
<listitem>
<para>
Set up continuous archiving from the primary to a WAL archive
directory on the standby server. Ensure that
<xref linkend="guc-archive-mode">,
<xref linkend="guc-archive-command"> and
<xref linkend="guc-archive-timeout">
are set appropriately on the primary
(see <xref linkend="backup-archiving-wal">).
</para>
</listitem>
<listitem>
<para>
Make a base backup of the primary server (see <xref
linkend="backup-base-backup">), and load this data onto the standby.
</para>
</listitem>
<listitem>
<para>
Begin recovery on the standby server from the local WAL
archive, using a <filename>recovery.conf</> that specifies a
<varname>restore_command</> that waits as described
previously (see <xref linkend="backup-pitr-recovery">).
</para>
</listitem>
</orderedlist>
</para>
<para>
Recovery treats the WAL archive as read-only, so once a WAL file has
been copied to the standby system it can be copied to tape at the same
time as it is being read by the standby database server.
Thus, running a standby server for high availability can be performed at
the same time as files are stored for longer term disaster recovery
purposes.
</para>
<para>
For testing purposes, it is possible to run both primary and standby
servers on the same system. This does not provide any worthwhile
improvement in server robustness, nor would it be described as HA.
</para>
</sect2>
<sect2 id="warm-standby-record">
<title>Record-based Log Shipping</title>
<para>
<productname>PostgreSQL</productname> directly supports file-based
log shipping as described above. It is also possible to implement
record-based log shipping, though this requires custom development.
</para>
<para>
An external program can call the <function>pg_xlogfile_name_offset()</>
function (see <xref linkend="functions-admin">)
to find out the file name and the exact byte offset within it of
the current end of WAL. It can then access the WAL file directly
and copy the data from the last known end of WAL through the current end
over to the standby servers. With this approach, the window for data
loss is the polling cycle time of the copying program, which can be very
small, and there is no wasted bandwidth from forcing partially-used
segment files to be archived. Note that the standby servers'
<varname>restore_command</> scripts can only deal with whole WAL files,
so the incrementally copied data is not ordinarily made available to
the standby servers. It is of use only when the primary dies &mdash;
then the last partial WAL file is fed to the standby before allowing
it to come up. The correct implementation of this process requires
cooperation of the <varname>restore_command</> script with the data
copying program.
</para>
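<para>
For illustration, the copying program could obtain the current end of WAL
with a query along these lines (a sketch; the exact usage depends on the
program):
<programlisting>
SELECT file_name, file_offset
  FROM pg_xlogfile_name_offset(pg_current_xlog_location());
</programlisting>
</para>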
<para>
Starting with <productname>PostgreSQL</> version 9.0, you can use
streaming replication (see <xref linkend="streaming-replication">) to
achieve the same benefits with less effort.
</para>
</sect2>
</sect1>
<sect1 id="hot-standby">
<title>Hot Standby</title>