<!-- $PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.85 2006/09/15 22:02:21 momjian Exp $ -->

<chapter id="backup">
 <title>Backup and Restore</title>

 <indexterm zone="backup"><primary>backup</></>

 <para>
  As with everything that contains valuable data, <productname>PostgreSQL</>
  databases should be backed up regularly. While the procedure is
  essentially simple, it is important to have a basic understanding of
  the underlying techniques and assumptions.
 </para>

 <para>
  There are three fundamentally different approaches to backing up
  <productname>PostgreSQL</> data:
  <itemizedlist>
   <listitem><para><acronym>SQL</> dump</para></listitem>
   <listitem><para>File system level backup</para></listitem>
   <listitem><para>Continuous Archiving</para></listitem>
  </itemizedlist>
  Each has its own strengths and weaknesses.
 </para>

 <sect1 id="backup-dump">
  <title><acronym>SQL</> Dump</title>

  <para>
   The idea behind the SQL-dump method is to generate a text file with SQL
   commands that, when fed back to the server, will recreate the
   database in the same state as it was at the time of the dump.
   <productname>PostgreSQL</> provides the utility program
   <xref linkend="app-pgdump"> for this purpose. The basic usage of this
   command is:
<synopsis>
pg_dump <replaceable class="parameter">dbname</replaceable> &gt; <replaceable class="parameter">outfile</replaceable>
</synopsis>
   As you see, <application>pg_dump</> writes its results to the
   standard output. We will see below how this can be useful.
  </para>

  <para>
   <application>pg_dump</> is a regular <productname>PostgreSQL</>
   client application (albeit a particularly clever one). This means
   that you can do this backup procedure from any remote host that has
   access to the database. But remember that <application>pg_dump</>
   does not operate with special permissions. In particular, it must
   have read access to all tables that you want to back up, so in
   practice you almost always have to run it as a database superuser.
  </para>

  <para>
   To specify which database server <application>pg_dump</> should
   contact, use the command line options <option>-h
   <replaceable>host</></> and <option>-p <replaceable>port</></>. The
   default host is the local host or whatever your
   <envar>PGHOST</envar> environment variable specifies. Similarly,
   the default port is indicated by the <envar>PGPORT</envar>
   environment variable or, failing that, by the compiled-in default.
   (Conveniently, the server will normally have the same compiled-in
   default.)
  </para>
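  <para>
   For example, to dump a database from a server on another host listening
   on a non-default port, a command along these lines could be used (the
   host name, port, and database name here are only placeholders):
<programlisting>
pg_dump -h db.example.com -p 5432 mydb &gt; outfile
</programlisting>
  </para>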

  <para>
   Like any other <productname>PostgreSQL</> client application,
   <application>pg_dump</> will by default connect with the database
   user name that is equal to the current operating system user name. To override
   this, either specify the <option>-U</option> option or set the
   environment variable <envar>PGUSER</envar>. Remember that
   <application>pg_dump</> connections are subject to the normal
   client authentication mechanisms (which are described in <xref
   linkend="client-authentication">).
  </para>
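  <para>
   For instance, to connect as a specific database superuser rather than
   your operating system user (the user and database names below are only
   placeholders):
<programlisting>
pg_dump -U postgres mydb &gt; outfile
</programlisting>
  </para>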

  <para>
   Dumps created by <application>pg_dump</> are internally consistent,
   that is, updates to the database while <application>pg_dump</> is
   running will not be in the dump. <application>pg_dump</> does not
   block other operations on the database while it is working.
   (Exceptions are those operations that need to operate with an
   exclusive lock, such as <command>VACUUM FULL</command>.)
  </para>

  <important>
   <para>
    When your database schema relies on OIDs (for instance as foreign
    keys) you must instruct <application>pg_dump</> to dump the OIDs
    as well. To do this, use the <option>-o</option> command line
    option.
   </para>
  </important>

  <sect2 id="backup-dump-restore">
   <title>Restoring the dump</title>

   <para>
    The text files created by <application>pg_dump</> are intended to
    be read in by the <application>psql</application> program. The
    general command form to restore a dump is
<synopsis>
psql <replaceable class="parameter">dbname</replaceable> &lt; <replaceable class="parameter">infile</replaceable>
</synopsis>
    where <replaceable class="parameter">infile</replaceable> is what
    you used as <replaceable class="parameter">outfile</replaceable>
    for the <application>pg_dump</> command. The database <replaceable
    class="parameter">dbname</replaceable> will not be created by this
    command; you must create it yourself from <literal>template0</> before executing
    <application>psql</> (e.g., with <literal>createdb -T template0
    <replaceable class="parameter">dbname</></literal>).
    <application>psql</> supports options similar to <application>pg_dump</> 
    for controlling the database server location and the user name. See
    <xref linkend="app-psql">'s reference page for more information.
   </para>

   <para>
    Not only must the target database already exist before starting to
    run the restore, but so must all the users who own objects in the
    dumped database or were granted permissions on the objects.  If they
    do not, then the restore will fail to recreate the objects with the
    original ownership and/or permissions.  (Sometimes this is what you want,
    but usually it is not.)
   </para>

   <para>
    Once restored, it is wise to run <xref linkend="sql-analyze"
    endterm="sql-analyze-title"> on each database so the optimizer has
    useful statistics. An easy way to do this is to run
    <command>vacuumdb -a -z</>, which will <command>VACUUM ANALYZE</> all
    databases and is equivalent to running <command>VACUUM ANALYZE</command>
    manually in each one.
   </para>

   <para>
    The ability of <application>pg_dump</> and <application>psql</> to
    write to or read from pipes makes it possible to dump a database
    directly from one server to another; for example:
<programlisting>
pg_dump -h <replaceable>host1</> <replaceable>dbname</> | psql -h <replaceable>host2</> <replaceable>dbname</>
</programlisting>
   </para>

   <important>
    <para>
     The dumps produced by <application>pg_dump</> are relative to
     <literal>template0</>. This means that any languages, procedures,
     etc. added to <literal>template1</> will also be dumped by
     <application>pg_dump</>. As a result, when restoring, if you are
     using a customized <literal>template1</>, you must create the
     empty database from <literal>template0</>, as in the example
     above.
    </para>
   </important>

   <para>
    For advice on how to load large amounts of data into
    <productname>PostgreSQL</productname> efficiently, refer to <xref
    linkend="populate">.
   </para>
  </sect2>

  <sect2 id="backup-dump-all">
   <title>Using <application>pg_dumpall</></title>

   <para>
    The above mechanism is cumbersome and inappropriate when backing
    up an entire database cluster. For this reason the <xref
    linkend="app-pg-dumpall"> program is provided.
    <application>pg_dumpall</> backs up each database in a given
    cluster, and also preserves cluster-wide data such as users and
    groups. The basic usage of this command is:
<synopsis>
pg_dumpall &gt; <replaceable>outfile</>
</synopsis>
    The resulting dump can be restored with <application>psql</>:
<synopsis>
psql -f <replaceable class="parameter">infile</replaceable> postgres
</synopsis>
    (Actually, you can specify any existing database name to start from,
    but if you are reloading in an empty cluster then <literal>postgres</>
    should generally be used.)  It is always necessary to have
    database superuser access when restoring a <application>pg_dumpall</>
    dump, as that is required to restore the user and group information.
   </para>
  </sect2>

  <sect2 id="backup-dump-large">
   <title>Handling large databases</title>

   <para>
    Since <productname>PostgreSQL</productname> allows tables larger
    than the maximum file size on your system, it can be problematic
    to dump such a table to a file, because the resulting file will likely
    be larger than the maximum size allowed by your system. Since
    <application>pg_dump</> can write to the standard output, you can
    just use standard Unix tools to work around this possible problem.
   </para>

   <formalpara>
    <title>Use compressed dumps.</title>
    <para>
     You can use your favorite compression program, for example
     <application>gzip</application>.

<programlisting>
pg_dump <replaceable class="parameter">dbname</replaceable> | gzip &gt; <replaceable class="parameter">filename</replaceable>.gz
</programlisting>

     Reload with

<programlisting>
createdb <replaceable class="parameter">dbname</replaceable>
gunzip -c <replaceable class="parameter">filename</replaceable>.gz | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>

     or

<programlisting>
cat <replaceable class="parameter">filename</replaceable>.gz | gunzip | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>
    </para>
   </formalpara>

   <formalpara>
    <title>Use <command>split</>.</title>
    <para>
     The <command>split</command> command
     allows you to split the output into pieces that are
     acceptable in size to the underlying file system. For example, to
     make chunks of 1 megabyte:

<programlisting>
pg_dump <replaceable class="parameter">dbname</replaceable> | split -b 1m - <replaceable class="parameter">filename</replaceable>
</programlisting>

     Reload with

<programlisting>
createdb <replaceable class="parameter">dbname</replaceable>
cat <replaceable class="parameter">filename</replaceable>* | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>
    </para>
   </formalpara>

   <formalpara>
    <title>Use the custom dump format.</title>
    <para>
     If <productname>PostgreSQL</productname> was built on a system with the
     <application>zlib</> compression library installed, the custom dump
     format will compress data as it writes it to the output file. This will
     produce dump file sizes similar to using <command>gzip</command>, but it
     has the added advantage that tables can be restored selectively. The
     following command dumps a database using the custom dump format:

<programlisting>
pg_dump -Fc <replaceable class="parameter">dbname</replaceable> &gt; <replaceable class="parameter">filename</replaceable>
</programlisting>

     A custom-format dump is not a script for <application>psql</>, but
     instead must be restored with <application>pg_restore</>.
     See the <xref linkend="app-pgdump"> and <xref
     linkend="app-pgrestore"> reference pages for details.
    </para>
   </formalpara>
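   <para>
    For example, a custom-format dump made as above might be restored with
    <application>pg_restore</> like this (the database must already exist;
    the names are placeholders):
<programlisting>
pg_restore -d <replaceable class="parameter">dbname</replaceable> <replaceable class="parameter">filename</replaceable>
</programlisting>
   </para>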

  </sect2>
 </sect1>

 <sect1 id="backup-file">
  <title>File system level backup</title>

  <para>
   An alternative backup strategy is to directly copy the files that
   <productname>PostgreSQL</> uses to store the data in the database. In
   <xref linkend="creating-cluster"> it is explained where these files
   are located, but you have probably found them already if you are
   interested in this method. You can use whatever method you prefer
   for doing ordinary file system backups; for example:

<programlisting>
tar -cf backup.tar /usr/local/pgsql/data
</programlisting>
  </para>

  <para>
   There are two restrictions, however, which make this method
   impractical, or at least inferior to the <application>pg_dump</>
   method:

   <orderedlist>
    <listitem>
     <para>
      The database server <emphasis>must</> be shut down in order to
      get a usable backup. Half-way measures such as disallowing all
      connections will <emphasis>not</emphasis> work
      (mainly because <command>tar</command> and similar tools do not take an
      atomic snapshot of the state of the file system at a point in
      time). Information about stopping the server can be found in
      <xref linkend="server-shutdown">.  Needless to say that you
      also need to shut down the server before restoring the data.
     </para>
    </listitem>

    <listitem>
     <para>
      If you have dug into the details of the file system layout of the
      database, you may be tempted to try to back up or restore only certain
      individual tables or databases from their respective files or
      directories. This will <emphasis>not</> work because the
      information in these files is only half the
      truth. The other half is in the commit log files
      <filename>pg_clog/*</filename>, which contain the commit status of
      all transactions. A table file is only usable with this
      information. Of course it is also impossible to restore only a
      table and the associated <filename>pg_clog</filename> data
      because that would render all other tables in the database
      cluster useless.  So file system backups only work for complete
      restoration of an entire database cluster.
     </para>
    </listitem>
   </orderedlist>
  </para>

  <para>
   An alternative file-system backup approach is to make a
   <quote>consistent snapshot</quote> of the data directory, if the
   file system supports that functionality (and you are willing to
   trust that it is implemented correctly).  The typical procedure is
   to make a <quote>frozen snapshot</> of the volume containing the
   database, then copy the whole data directory (not just parts, see
   above) from the snapshot to a backup device, then release the frozen
   snapshot.  This will work even while the database server is running.
   However, a backup created in this way saves
   the database files in a state where the database server was not
   properly shut down; therefore, when you start the database server
   on the backed-up data, it will think the server had crashed
   and replay the WAL log.  This is not a problem; just be aware of
   it (and be sure to include the WAL files in your backup).
  </para>
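  <para>
   As an illustration only (the volume names, mount points, and the use of
   LVM are assumptions; consult your own volume manager's documentation),
   such a procedure might look like:
<programlisting>
lvcreate --size 1G --snapshot --name pgsnap /dev/vg0/pgdata   # freeze a snapshot
mount /dev/vg0/pgsnap /mnt/pgsnap                             # mount it read-only if possible
tar -cf /backup/base-backup.tar -C /mnt/pgsnap .              # copy the whole data directory
umount /mnt/pgsnap
lvremove -f /dev/vg0/pgsnap                                    # release the snapshot
</programlisting>
  </para>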

  <para>
   If your database is spread across multiple file systems, there may not 
   be any way to obtain exactly-simultaneous frozen snapshots of all 
   the volumes.  For example, if your data files and WAL log are on different
   disks, or if tablespaces are on different file systems, it might
   not be possible to use snapshot backup because the snapshots must be
   simultaneous.
   Read your file system documentation very carefully before trusting
   the consistent-snapshot technique in such situations.  The safest
   approach is to shut down the database server for long enough to
   establish all the frozen snapshots.
  </para>

  <para>
   Another option is to use <application>rsync</> to perform a file
   system backup.  This is done by first running <application>rsync</>
   while the database server is running, then shutting down the database
   server just long enough to do a second <application>rsync</>.  The
   second <application>rsync</> will be much quicker than the first,
   because it has relatively little data to transfer, and the end result
   will be consistent because the server was down.  This method
   allows a file system backup to be performed with minimal downtime.
  </para>
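  <para>
   A minimal sketch of this procedure (the paths and the exact
   <application>rsync</> options are examples only and depend on your
   installation):
<programlisting>
rsync -a /usr/local/pgsql/data/ /backup/pgsql/data/   # first pass, server still running
pg_ctl stop
rsync -a /usr/local/pgsql/data/ /backup/pgsql/data/   # quick second pass, server stopped
pg_ctl start
</programlisting>
  </para>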

  <para>
   Note that a file system backup will not necessarily be
   smaller than an SQL dump. On the contrary, it will most likely be
   larger. (<application>pg_dump</application> does not need to dump
   the contents of indexes, for example, just the commands to recreate
   them.)
  </para>
 </sect1>

 <sect1 id="continuous-archiving">
  <title>Continuous Archiving and Point-In-Time Recovery (PITR)</title>

  <indexterm zone="backup">
   <primary>continuous archiving</primary>
  </indexterm>

  <indexterm zone="backup">
   <primary>point-in-time recovery</primary>
  </indexterm>

  <indexterm zone="backup">
   <primary>PITR</primary>
  </indexterm>

  <para>
   At all times, <productname>PostgreSQL</> maintains a
   <firstterm>write ahead log</> (WAL) in the <filename>pg_xlog/</>
   subdirectory of the cluster's data directory. The log describes
   every change made to the database's data files.  This log exists
   primarily for crash-safety purposes: if the system crashes, the
   database can be restored to consistency by <quote>replaying</> the
   log entries made since the last checkpoint.  However, the existence
   of the log makes it possible to use a third strategy for backing up
   databases: we can combine a file-system-level backup with backup of
   the WAL files.  If recovery is needed, we restore the backup and
   then replay from the backed-up WAL files to bring the backup up to
   current time.  This approach is more complex to administer than
   either of the previous approaches, but it has some significant
   benefits:
  <itemizedlist>
   <listitem>
    <para>
     We do not need a perfectly consistent backup as the starting point.
     Any internal inconsistency in the backup will be corrected by log
     replay (this is not significantly different from what happens during
     crash recovery).  So we don't need file system snapshot capability,
     just <application>tar</> or a similar archiving tool.
    </para>
   </listitem>
   <listitem>
    <para>
     Since we can string together an indefinitely long sequence of WAL files
     for replay, continuous backup can be achieved simply by continuing to archive
     the WAL files.  This is particularly valuable for large databases, where
     it may not be convenient to take a full backup frequently.
    </para>
   </listitem>
   <listitem>
    <para>
     There is nothing that says we have to replay the WAL entries all the
     way to the end.  We could stop the replay at any point and have a
     consistent snapshot of the database as it was at that time.  Thus,
     this technique supports <firstterm>point-in-time recovery</>: it is
     possible to restore the database to its state at any time since your base
     backup was taken.
    </para>
   </listitem>
   <listitem>
    <para>
     If we continuously feed the series of WAL files to another
     machine that has been loaded with the same base backup file, we
     have a <quote>hot standby</> system: at any point we can bring up
     the second machine and it will have a nearly-current copy of the
     database.
    </para>
   </listitem>
  </itemizedlist>
  </para>

  <para>
   As with the plain file-system-backup technique, this method can only
   support restoration of an entire database cluster, not a subset.
   Also, it requires a lot of archival storage: the base backup may be bulky,
   and a busy system will generate many megabytes of WAL traffic that
   have to be archived.  Still, it is the preferred backup technique in
   many situations where high reliability is needed.
  </para>

  <para>
   To recover successfully using continuous archiving (also called
   <quote>online backup</> by many database vendors), you need a continuous
   sequence of archived WAL files that extends back at least as far as the
   start time of your backup.  So to get started, you should set up and test
   your procedure for archiving WAL files <emphasis>before</> you take your
   first base backup.  Accordingly, we first discuss the mechanics of
   archiving WAL files.
  </para>

  <sect2 id="backup-archiving-wal">
   <title>Setting up WAL archiving</title>

   <para>
    In an abstract sense, a running <productname>PostgreSQL</> system
    produces an indefinitely long sequence of WAL records.  The system
    physically divides this sequence into WAL <firstterm>segment
    files</>, which are normally 16MB apiece (although the size can be
    altered when building <productname>PostgreSQL</>).  The segment
    files are given numeric names that reflect their position in the
    abstract WAL sequence.  When not using WAL archiving, the system
    normally creates just a few segment files and then
    <quote>recycles</> them by renaming no-longer-needed segment files
    to higher segment numbers.  It's assumed that a segment file whose
    contents precede the checkpoint-before-last is no longer of
    interest and can be recycled.
   </para>

   <para>
    When archiving WAL data, we want to capture the contents of each segment
    file once it is filled, and save that data somewhere before the segment
    file is recycled for reuse.  Depending on the application and the
    available hardware, there could be many different ways of <quote>saving
    the data somewhere</>: we could copy the segment files to an NFS-mounted
    directory on another machine, write them onto a tape drive (ensuring that
    you have a way of restoring the file with its original file name), or batch
    them together and burn them onto CDs, or something else entirely.  To
    provide the database administrator with as much flexibility as possible,
    <productname>PostgreSQL</> tries not to make any assumptions about how 
    the archiving will be done.  Instead, <productname>PostgreSQL</> lets
    the administrator specify a shell command to be executed to copy a
    completed segment file to wherever it needs to go.  The command could be
    as simple as a <literal>cp</>, or it could invoke a complex shell
    script &mdash; it's all up to you.
   </para>

   <para>
    The shell command to use is specified by the <xref
    linkend="guc-archive-command"> configuration parameter, which in practice
    will always be placed in the <filename>postgresql.conf</filename> file.
    In this string,
    any <literal>%p</> is replaced by the absolute path of the file to
    archive, while any <literal>%f</> is replaced by the file name only.
    Write <literal>%%</> if you need to embed an actual <literal>%</>
    character in the command.  The simplest useful command is something
    like
<programlisting>
archive_command = 'cp -i %p /mnt/server/archivedir/%f &lt;/dev/null'
</programlisting>
    which will copy archivable WAL segments to the directory
    <filename>/mnt/server/archivedir</>.  (This is an example, not a 
    recommendation, and may not work on all platforms.)
   </para>

   <para>
    The archive command will be executed under the ownership of the same
    user that the <productname>PostgreSQL</> server is running as.  Since
    the series of WAL files being archived contains effectively everything
    in your database, you will want to be sure that the archived data is
    protected from prying eyes; for example, archive into a directory that
    does not have group or world read access.
   </para>

   <para>
    It is important that the archive command return zero exit status if and
    only if it succeeded.  Upon getting a zero result,
    <productname>PostgreSQL</> will assume that the WAL segment file has been
    successfully archived, and will remove or recycle it.
    However, a nonzero status tells
    <productname>PostgreSQL</> that the file was not archived; it will try
    again periodically until it succeeds.
   </para>

   <para>
    The archive command should generally be designed to refuse to overwrite
    any pre-existing archive file.  This is an important safety feature to
    preserve the integrity of your archive in case of administrator error
    (such as sending the output of two different servers to the same archive
    directory).
    It is advisable to test your proposed archive command to ensure that it
    indeed does not overwrite an existing file, <emphasis>and that it returns
    nonzero status in this case</>.  We have found that <literal>cp -i</> does
    this correctly on some platforms but not others.  If the chosen command
    does not itself handle this case correctly, you should add a command
    to test for pre-existence of the archive file.  For example, something
    like
<programlisting>
archive_command = 'test ! -f .../%f &amp;&amp; cp %p .../%f'
</programlisting>
    works correctly on most Unix variants.
   </para>

   <para>
    While designing your archiving setup, consider what will happen if
    the archive command fails repeatedly because some aspect requires 
    operator intervention or the archive runs out of space. For example, this
    could occur if you write to tape without an autochanger; when the tape 
    fills, nothing further can be archived until the tape is swapped.
    You should ensure that any error condition or request to a human operator
    is reported appropriately so that the situation can be 
    resolved relatively quickly. The <filename>pg_xlog/</> directory will
    continue to fill with WAL segment files until the situation is resolved.
   </para>

   <para>
    The speed of the archiving command is not important, so long as it can keep up
    with the average rate at which your server generates WAL data.  Normal
    operation continues even if the archiving process falls a little behind.
    If archiving falls significantly behind, this will increase the amount of
    data that would be lost in the event of a disaster. It will also mean that
    the <filename>pg_xlog/</> directory will contain large numbers of
    not-yet-archived segment files, which could eventually exceed available
    disk space. You are advised to monitor the archiving process to ensure that
    it is working as you intend.
   </para>

   <para>
    In writing your archive command, you should assume that the file names to
    be archived may be up to 64 characters long and may contain any
    combination of ASCII letters, digits, and dots.  It is not necessary to
    remember the original full path (<literal>%p</>) but it is necessary to
    remember the file name (<literal>%f</>).
   </para>

   <para>
    Note that although WAL archiving will allow you to restore any
    modifications made to the data in your <productname>PostgreSQL</> database,
    it will not restore changes made to configuration files (that is,
    <filename>postgresql.conf</>, <filename>pg_hba.conf</> and
    <filename>pg_ident.conf</>), since those are edited manually rather
    than through SQL operations.
    You may wish to keep the configuration files in a location that will
    be backed up by your regular file system backup procedures.  See
    <xref linkend="runtime-config-file-locations"> for how to relocate the
    configuration files.
   </para>

   <para>
    The archive command is only invoked on completed WAL segments.  Hence,
    if your server generates little WAL traffic (or has slack periods
    where it does so), there could be a long delay between the completion
    of a transaction and its safe recording in archive storage.  To put
    a limit on how old unarchived data can be, you can set
    <xref linkend="guc-archive-timeout"> to force the server to switch
    to a new WAL segment file at least that often.  Note that archived
    files that are ended early due to a forced switch are still the same
    length as completely full files.  It is therefore unwise to set a very
    short <varname>archive_timeout</> &mdash; it will bloat your archive
    storage.  <varname>archive_timeout</> settings of a minute or so are
    usually reasonable.
   </para>
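   <para>
    For example, a setting along these lines in
    <filename>postgresql.conf</> caps the delay at about a minute (the
    exact value is only an illustration):
<programlisting>
archive_timeout = 60   # force a WAL segment switch at least once per minute
</programlisting>
   </para>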

   <para>
    Also, you can force a segment switch manually with
    <function>pg_switch_xlog()</>,
    if you want to ensure that a just-finished transaction is archived
    immediately.  Other utility functions related to WAL management are
    listed in <xref linkend="functions-admin-backup-table">.
   </para>
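   <para>
    For example, to force an immediate segment switch from
    <application>psql</>:
<programlisting>
SELECT pg_switch_xlog();
</programlisting>
   </para>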
  </sect2>

  <sect2 id="backup-base-backup">
   <title>Making a Base Backup</title>

   <para>
    The procedure for making a base backup is relatively simple:
  <orderedlist>
   <listitem>
    <para>
     Ensure that WAL archiving is enabled and working.
    </para>
   </listitem>
   <listitem>
    <para>
     Connect to the database as a superuser, and issue the command
<programlisting>
SELECT pg_start_backup('label');
</programlisting>
     where <literal>label</> is any string you want to use to uniquely
     identify this backup operation.  (One good practice is to use the
     full path where you intend to put the backup dump file.)
     <function>pg_start_backup</> creates a <firstterm>backup label</> file,
     called <filename>backup_label</>, in the cluster directory with
     information about your backup.
    </para>

    <para>
     It does not matter which database within the cluster you connect to in
     order to issue this command.  You can ignore the result returned by the function;
     but if it reports an error, deal with that before proceeding.
    </para>
   </listitem>
   <listitem>
    <para>
     Perform the backup, using any convenient file-system-backup tool
     such as <application>tar</> or <application>cpio</> (a minimal
     <application>tar</> sketch follows this list).  It is neither
     necessary nor desirable to stop normal operation of the database
     while you do this.
    </para>
   </listitem>
   <listitem>
    <para>
     Again connect to the database as a superuser, and issue the command
<programlisting>
SELECT pg_stop_backup();
</programlisting>
     This should return successfully.
    </para>
   </listitem>
   <listitem>
    <para>
     Once the WAL segment files used during the backup are archived as part
     of normal database activity, you are done.  The file identified by
     <function>pg_stop_backup</>'s result is the last segment that needs
     to be archived to complete the backup.
    </para>
   </listitem>
  </orderedlist>
   </para>
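   <para>
    As an illustration of step 3, a base backup of a cluster located in
    <filename>/usr/local/pgsql/data</> might be taken with something like
    the following (the archive location is only a placeholder):
<programlisting>
tar -cf /mnt/server/backups/base-backup.tar /usr/local/pgsql/data
</programlisting>
   </para>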

   <para>
    Some backup tools that you might wish to use emit warnings or errors
    if the files they are trying to copy change while the copy proceeds.
    This situation is normal, and not an error, when taking a base backup of
    an active database; so you need to ensure that you can distinguish
    complaints of this sort from real errors.  For example, some versions
    of <application>rsync</> return a separate exit code for <quote>vanished
    source files</>, and you can write a driver script to accept this exit
    code as a non-error case.  Also,
    some versions of GNU <application>tar</> consider it an error if a file
    is changed while <application>tar</> is copying it.  There does not seem
    to be any very convenient way to distinguish this error from other types
    of errors, other than manual inspection of <application>tar</>'s messages.
    GNU <application>tar</> is therefore not the best tool for making base
    backups.
   </para>

   <para>
    It is not necessary to be very concerned about the amount of time elapsed
    between <function>pg_start_backup</> and the start of the actual backup,
    nor between the end of the backup and <function>pg_stop_backup</>; a
    few minutes' delay won't hurt anything.  You
    must however be quite sure that these operations are carried out in
    sequence and do not overlap.
   </para>

   <para>
    Be certain that your backup dump includes all of the files underneath
    the database cluster directory (e.g., <filename>/usr/local/pgsql/data</>).
    If you are using tablespaces that do not reside underneath this directory,
    be careful to include them as well (and be sure that your backup dump
    archives symbolic links as links, otherwise the restore will mess up
    your tablespaces).
   </para>

   <para>
    You may, however, omit from the backup dump the files within the
    <filename>pg_xlog/</> subdirectory of the cluster directory.  This
    slight complication is worthwhile because it reduces the risk
    of mistakes when restoring.  This is easy to arrange if
    <filename>pg_xlog/</> is a symbolic link pointing to someplace outside
    the cluster directory, which is a common setup anyway for performance
    reasons.
   </para>

   <para>
    To make use of this backup, you will need to keep around all the WAL
    segment files generated during and after the file system backup.
    To aid you in doing this, the <function>pg_stop_backup</> function
    creates a <firstterm>backup history file</> that is immediately
    stored into the WAL archive area. This file is named after the first
    WAL segment file that you need to have to make use of the backup.
    For example, if the starting WAL file is
    <literal>0000000100001234000055CD</> the backup history file will be
    named something like
    <literal>0000000100001234000055CD.007C9330.backup</>. (The second
    number in the file name stands for an exact position within the WAL
    file, and can ordinarily be ignored.) Once you have safely archived
    the file system backup and the WAL segment files used during the
    backup (as specified in the backup history file), all archived WAL
    segments with numerically smaller names are no longer needed to recover
    the file system backup and may be deleted. However, you should
    consider keeping several backup sets to be absolutely certain that
    you can recover your data.
   </para>

   <para>
    The backup history file is just a small text file. It contains the
    label string you gave to <function>pg_start_backup</>, as well as
    the starting and ending times and WAL segments of the backup.
    If you used the label to identify where the associated dump file is kept, 
    then the archived history file is enough to tell you which dump file to
    restore, should you need to do so.
   </para>

   <para>
    Since you have to keep around all the archived WAL files back to your
    last base backup, the interval between base backups should usually be
    chosen based on how much storage you want to expend on archived WAL
    files.  You should also consider how long you are prepared to spend
    recovering, if recovery should be necessary &mdash; the system will have to
    replay all those WAL segments, and that could take a while if it has
    been a long time since the last base backup.
   </para>

   <para>
    It's also worth noting that the <function>pg_start_backup</> function
    makes a file named <filename>backup_label</> in the database cluster
    directory, which is then removed again by <function>pg_stop_backup</>.
    This file will of course be archived as a part of your backup dump file.
    The backup label file includes the label string you gave to
    <function>pg_start_backup</>, as well as the time at which
    <function>pg_start_backup</> was run, and the name of the starting WAL
    file.  In case of confusion it will
    therefore be possible to look inside a backup dump file and determine
    exactly which backup session the dump file came from.
   </para>

   <para>
    It is also possible to make a backup dump while the server is
    stopped.  In this case, you obviously cannot use
    <function>pg_start_backup</> or <function>pg_stop_backup</>, and
    you will therefore be left to your own devices to keep track of which
    backup dump is which and how far back the associated WAL files go.
    It is generally better to follow the continuous archiving procedure above.
   </para>
  </sect2>

  <sect2 id="backup-pitr-recovery">
   <title>Recovering using a Continuous Archive Backup</title>

   <para>
    Okay, the worst has happened and you need to recover from your backup.
    Here is the procedure:
  <orderedlist>
   <listitem>
    <para>
     Stop the server, if it's running.
    </para>
   </listitem>
   <listitem>
    <para>
     If you have the space to do so,
     copy the whole cluster data directory and any tablespaces to a temporary 
     location in case you need them later. Note that this precaution will
     require that you have enough free space on your system to hold two
     copies of your existing database. If you do not have enough space, 
     you need at the least to copy the contents of the <filename>pg_xlog</>
     subdirectory of the cluster data directory, as it may contain logs which
     were not archived before the system went down.
    </para>
   </listitem>
   <listitem>
    <para>
     Clean out all existing files and subdirectories under the cluster data
     directory and under the root directories of any tablespaces you are using.
    </para>
   </listitem>
   <listitem>
    <para>
     Restore the database files from your backup dump.  Be careful that they
     are restored with the right ownership (the database system user, not
     root!) and with the right permissions.  If you are using tablespaces,
     you may want to verify that the symbolic links in <filename>pg_tblspc/</>
     were correctly restored.
    </para>
   </listitem>
   <listitem>
    <para>
     Remove any files present in <filename>pg_xlog/</>; these came from the
     backup dump and are therefore probably obsolete rather than current.
     If you didn't archive <filename>pg_xlog/</> at all, then re-create it,
     and be sure to re-create the subdirectory
      <filename>pg_xlog/archive_status/</> as well.
    </para>
   </listitem>
   <listitem>
    <para>
     If you had unarchived WAL segment files that you saved in step 2,
     copy them into <filename>pg_xlog/</>.  (It is best to copy them,
     not move them, so that you still have the unmodified files if a
     problem occurs and you have to start over.)
    </para>
   </listitem>
   <listitem>
    <para>
     Create a recovery command file <filename>recovery.conf</> in the cluster
     data directory (see <xref linkend="recovery-config-settings">). You may 
     also want to temporarily modify <filename>pg_hba.conf</> to prevent 
     ordinary users from connecting until you are sure the recovery has worked.
    </para>
   </listitem>
   <listitem>
    <para>
     Start the server.  The server will go into recovery mode and
     proceed to read through the archived WAL files it needs.  Should the
     recovery be terminated because of an external error, the server can
     simply be restarted and it will continue recovery.  Upon completion
     of the recovery process, the server will rename
     <filename>recovery.conf</> to <filename>recovery.done</> (to prevent
     accidentally re-entering recovery mode in case of a crash later) and then
     commence normal database operations.
    </para>
   </listitem>
   <listitem>
    <para>
     Inspect the contents of the database to ensure you have recovered to
     where you want to be.  If not, return to step 1.  If all is well,
     let in your users by restoring <filename>pg_hba.conf</> to normal.
    </para>
   </listitem>
  </orderedlist>
   </para>

   <para>
    The key part of all this is to set up a recovery command file that
    describes how you want to recover and how far the recovery should
    run.  You can use <filename>recovery.conf.sample</> (normally
    installed in the installation <filename>share/</> directory) as a
    prototype.  The one thing that you absolutely must specify in
    <filename>recovery.conf</> is the <varname>restore_command</>,
    which tells <productname>PostgreSQL</> how to get back archived
    WAL file segments.  Like the <varname>archive_command</>, this is
    a shell command string.  It may contain <literal>%f</>, which is
    replaced by the name of the desired log file, and <literal>%p</>,
    which is replaced by the absolute path to copy the log file to.
    Write <literal>%%</> if you need to embed an actual <literal>%</>
    character in the command.  The simplest useful command is
    something like
<programlisting>
restore_command = 'cp /mnt/server/archivedir/%f %p'
</programlisting>
    which will copy previously archived WAL segments from the directory
    <filename>/mnt/server/archivedir</>.  You could of course use something
    much more complicated, perhaps even a shell script that requests the
    operator to mount an appropriate tape.
   </para>

   <para>
    It is important that the command return nonzero exit status on failure.
    The command <emphasis>will</> be asked for log files that are not present
    in the archive; it must return nonzero when so asked.  This is not an
    error condition.  Be aware also that the base name of the <literal>%p</>
    path will be different from <literal>%f</>; do not expect them to be
    interchangeable.
   </para>

   <para>
    WAL segments that cannot be found in the archive will be sought in
    <filename>pg_xlog/</>; this allows use of recent un-archived segments.
    However, segments that are available from the archive will be used in
    preference to files in <filename>pg_xlog/</>.  The system will not
    overwrite the existing contents of <filename>pg_xlog/</> when retrieving
    archived files.
   </para>

   <para>
    Normally, recovery will proceed through all available WAL segments,
    thereby restoring the database to the current point in time (or as
    close as we can get given the available WAL segments).  But if you want
    to recover to some previous point in time (say, right before the junior
    DBA dropped your main transaction table), just specify the required
    stopping point in <filename>recovery.conf</>.  You can specify the stop
    point, known as the <quote>recovery target</>, either by date/time or
    by completion of a specific transaction ID.  As of this writing only
    the date/time option is very usable, since there are no tools to help
    you identify with any accuracy which transaction ID to use.
930
   </para>
931

932 933 934 935 936 937 938 939
   <note>
     <para>
      The stop point must be after the ending time of the base backup (the
      time of <function>pg_stop_backup</>).  You cannot use a base backup
      to recover to a time when that backup was still going on.  (To
      recover to such a time, you must go back to your previous base backup
      and roll forward from there.)
     </para>
940 941 942 943 944 945 946 947 948 949 950 951 952 953 954
   </note>

   <para>
    If recovery finds a corruption in the WAL data then recovery will
    complete at that point and the server will not start. The recovery 
    process could be re-run from the beginning, specifying a 
    <quote>recovery target</> so that recovery can complete normally.
    If recovery fails for an external reason, such as a system crash or
    the WAL archive has become inaccessible, then the recovery can be
    simply restarted and it will restart almost from where it failed.
    Restartable recovery works by writing a restartpoint record to the control
    file at the first safely usable checkpoint record found after 
    <varname>checkpoint_timeout</> seconds. 
   </para>


    <sect3 id="recovery-config-settings" xreflabel="Recovery Settings">
     <title>Recovery Settings</title>

     <para>
      These settings can only be made in the <filename>recovery.conf</>
      file, and apply only for the duration of the recovery. They must be
      reset for any subsequent recovery you wish to perform. They cannot be
      changed once recovery has begun.
     </para>
965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981

     <variablelist>

     <varlistentry id="restore-command" xreflabel="restore_command">
      <term><varname>restore_command</varname> (<type>string</type>)</term>
      <listitem>
       <para>
        The shell command to execute to retrieve an archived segment of
        the WAL file series. This parameter is required.
        Any <literal>%f</> in the string is
        replaced by the name of the file to retrieve from the archive,
        and any <literal>%p</> is replaced by the absolute path to copy
        it to on the server.
        Write <literal>%%</> to embed an actual <literal>%</> character
        in the command. 
       </para>
       <para>
982 983
        It is important for the command to return a zero exit status if and
        only if it succeeds.  The command <emphasis>will</> be asked for file
984 985
        names that are not present in the archive; it must return nonzero
        when so asked.  Examples:
986 987 988 989 990 991 992 993 994 995 996 997 998 999
<programlisting>
restore_command = 'cp /mnt/server/archivedir/%f "%p"'
restore_command = 'copy /mnt/server/archivedir/%f "%p"'  # Windows
</programlisting>
       </para>
      </listitem>
     </varlistentry>

     <varlistentry id="recovery-target-time" xreflabel="recovery_target_time">
      <term><varname>recovery_target_time</varname> 
           (<type>timestamp</type>)
      </term>
      <listitem>
       <para>
Peter Eisentraut's avatar
1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069
        will proceed.
        At most one of <varname>recovery_target_time</> and
        <xref linkend="recovery-target-xid"> can be specified.
        The default is to recover to the end of the WAL log.
        The precise stopping point is also influenced by 
        <xref linkend="recovery-target-inclusive">.
       </para>
      </listitem>
     </varlistentry>

     <varlistentry id="recovery-target-xid" xreflabel="recovery_target_xid">
      <term><varname>recovery_target_xid</varname> (<type>string</type>)</term>
      <listitem>
       <para>
        This parameter specifies the transaction ID up to which recovery
        will proceed. Keep in mind 
        that while transaction IDs are assigned sequentially at transaction 
        start, transactions can complete in a different numeric order.
        The transactions that will be recovered are those that committed
        before (and optionally including) the specified one.
        At most one of <varname>recovery_target_xid</> and
        <xref linkend="recovery-target-time"> can be specified.
        The default is to recover to the end of the WAL log.
        The precise stopping point is also influenced by 
        <xref linkend="recovery-target-inclusive">.
       </para>
      </listitem>
     </varlistentry>

     <varlistentry id="recovery-target-inclusive" 
                   xreflabel="recovery_target_inclusive">
      <term><varname>recovery_target_inclusive</varname> 
        (<type>boolean</type>)
      </term>
      <listitem>
       <para>
        Specifies whether we stop just after the specified recovery target
        (<literal>true</literal>), or just before the recovery target 
        (<literal>false</literal>).
        Applies to both <xref linkend="recovery-target-time">
        and <xref linkend="recovery-target-xid">, whichever one is
        specified for this recovery.  This indicates whether transactions
        having exactly the target commit time or ID, respectively, will
        be included in the recovery.  Default is <literal>true</>.
       </para>
      </listitem>
     </varlistentry>

     <varlistentry id="recovery-target-timeline" 
                   xreflabel="recovery_target_timeline">
      <term><varname>recovery_target_timeline</varname> 
        (<type>string</type>)
      </term>
      <listitem>
       <para>
        Specifies recovering into a particular timeline.  The default is
        to recover along the same timeline that was current when the
        base backup was taken.  You would only need to set this parameter
        in complex re-recovery situations, where you need to return to
        a state that itself was reached after a point-in-time recovery.
        See <xref linkend="backup-timelines"> for discussion.
       </para>
      </listitem>
     </varlistentry>

   </variablelist>

   </sect3>

  </sect2>

  <sect2 id="backup-timelines">
   <title>Timelines</title>

  <indexterm zone="backup">
   <primary>timelines</primary>
  </indexterm>

   <para>
    The ability to restore the database to a previous point in time creates
    some complexities that are akin to science-fiction stories about time
    travel and parallel universes.  In the original history of the database,
    perhaps you dropped a critical table at 5:15PM on Tuesday evening.
    Unfazed, you get out your backup, restore to the point-in-time 5:14PM
    Tuesday evening, and are up and running.  In <emphasis>this</> history of
    the database universe, you never dropped the table at all.  But suppose
    you later realize this wasn't such a great idea after all, and would like
    to return to some later point in the original history.  You won't be able
    to if, while your database was up-and-running, it overwrote some of the
    sequence of WAL segment files that led up to the time you now wish you
    could get back to.  So you really want to distinguish the series of
    WAL records generated after you've done a point-in-time recovery from
    those that were generated in the original database history.
   </para>

   <para>
    To deal with these problems, <productname>PostgreSQL</> has a notion
    of <firstterm>timelines</>.  Each time you recover to a point-in-time
    earlier than the end of the WAL sequence, a new timeline is created
    to identify the series of WAL records generated after that recovery.
    (If recovery proceeds all the way to the end of WAL, however, we do not
    start a new timeline: we just extend the existing one.)  The timeline
    ID number is part of WAL segment file names, and so a new timeline does
    not overwrite the WAL data generated by previous timelines.  It is
    in fact possible to archive many different timelines.  While that might
    seem like a useless feature, it's often a lifesaver.  Consider the
    situation where you aren't quite sure what point-in-time to recover to,
    and so have to do several point-in-time recoveries by trial and error
    until you find the best place to branch off from the old history.  Without
    timelines this process would soon generate an unmanageable mess.  With
    timelines, you can recover to <emphasis>any</> prior state, including
    states in timeline branches that you later abandoned.
   </para>

   <para>
    Each time a new timeline is created, <productname>PostgreSQL</> creates
    a <quote>timeline history</> file that shows which timeline it branched
    off from and when.  These history files are necessary to allow the system
    to pick the right WAL segment files when recovering from an archive that
    contains multiple timelines.  Therefore, they are archived into the WAL
    archive area just like WAL segment files.  The history files are just
    small text files, so it's cheap and appropriate to keep them around
    indefinitely (unlike the segment files which are large).  You can, if
    you like, add comments to a history file to make your own notes about
    how and why this particular timeline came to be.  Such comments will be
    especially valuable when you have a thicket of different timelines as
    a result of experimentation.
   </para>

   <para>
    The default behavior of recovery is to recover along the same timeline
    that was current when the base backup was taken.  If you want to recover
    into some child timeline (that is, you want to return to some state that
    was itself generated after a recovery attempt), you need to specify the
    target timeline ID in <filename>recovery.conf</>.  You cannot recover into
    timelines that branched off earlier than the base backup.
   </para>
  </sect2>

  <sect2 id="continuous-archiving-caveats">
   <title>Caveats</title>

   <para>
    At this writing, there are several limitations of the continuous archiving
    technique.  These will probably be fixed in future releases:

  <itemizedlist>
   <listitem>
    <para>
     Operations on hash indexes are
     not presently WAL-logged, so replay will not update these indexes.
      The recommended workaround is to manually <command>REINDEX</> each
      such index after completing a recovery operation (see the example
      following this list).
    </para>
   </listitem>

   <listitem>
    <para>
     If a <command>CREATE DATABASE</> command is executed while a base
     backup is being taken, and then the template database that the
     <command>CREATE DATABASE</> copied is modified while the base backup
     is still in progress, it is possible that recovery will cause those
     modifications to be propagated into the created database as well.
     This is of course undesirable.  To avoid this risk, it is best not to
     modify any template databases while taking a base backup.
    </para>
   </listitem>

   <listitem>
    <para>
     <command>CREATE TABLESPACE</> commands are WAL-logged with the literal
     absolute path, and will therefore be replayed as tablespace creations
     with the same absolute path.  This might be undesirable if the log is
     being replayed on a different machine.  It can be dangerous even if
     the log is being replayed on the same machine, but into a new data
     directory: the replay will still overwrite the contents of the original
     tablespace.  To avoid potential gotchas of this sort, the best practice
     is to take a new base backup after creating or dropping tablespaces.
    </para>
   </listitem>
  </itemizedlist>
   </para>
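
   <para>
    For example, to rebuild a hash index named
    <literal>accounts_balance_hash_idx</> (a hypothetical name) after
    recovery completes, you would run:
<programlisting>
REINDEX INDEX accounts_balance_hash_idx;
</programlisting>
   </para>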

   <para>
    It should also be noted that the default <acronym>WAL</acronym>
    format is fairly bulky since it includes many disk page snapshots.
    These page snapshots are designed to support crash recovery,
    since we may need to fix partially-written disk pages.  Depending
    on your system hardware and software, the risk of partial writes may
    be small enough to ignore, in which case you can significantly reduce
    the total volume of archived logs by turning off page snapshots
    using the <xref linkend="guc-full-page-writes"> parameter.
    (Read the notes and warnings in
    <xref linkend="wal"> before you do so.)
    Turning off page snapshots does not prevent use of the logs for PITR
    operations.
    An area for future development is to compress archived WAL data by
    removing unnecessary page copies even when <varname>full_page_writes</>
    is on.  In the meantime, administrators
    may wish to reduce the number of page snapshots included in WAL by
    increasing the checkpoint interval parameters as much as feasible.
   </para>
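
   <para>
    For example, either of the following <filename>postgresql.conf</>
    settings would reduce the volume of page snapshots (the values shown
    are illustrative, not recommendations):
<programlisting>
full_page_writes = off          # omit most page snapshots from WAL

# or, leave full_page_writes on but checkpoint less often,
# so that fewer pages need fresh snapshots:
checkpoint_segments = 30
checkpoint_timeout = 1800       # seconds
</programlisting>
   </para>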
  </sect2>
 </sect1>

 <sect1 id="warm-standby">
  <title>Warm Standby Servers for High Availability</title>

  <indexterm zone="backup">
   <primary>Warm Standby</primary>
  </indexterm>

  <indexterm zone="backup">
   <primary>PITR Standby</primary>
  </indexterm>

  <indexterm zone="backup">
   <primary>Standby Server</primary>
  </indexterm>

  <indexterm zone="backup">
   <primary>Log Shipping</primary>
  </indexterm>

  <indexterm zone="backup">
   <primary>Witness Server</primary>
  </indexterm>

  <indexterm zone="backup">
   <primary>STONITH</primary>
  </indexterm>

  <indexterm zone="backup">
   <primary>High Availability</primary>
  </indexterm>

  <para>
   Continuous Archiving can be used to create a High Availability (HA)
   cluster configuration with one or more Standby Servers ready to take
   over operations in the case that the Primary Server fails. This
   capability is more widely known as Warm Standby Log Shipping.
  </para>

  <para>
   The Primary and Standby Server work together to provide this capability,
   though the servers are only loosely coupled. The Primary Server operates
   in Continuous Archiving mode, while the Standby Server operates in a
   continuous Recovery mode, reading the WAL files from the Primary. No
   changes to the database tables are required to enable this capability,
   so it offers a low administration overhead in comparison with other
   replication approaches. This configuration also has a very low
   performance impact on the Primary server.
  </para>

  <para>
   Directly moving WAL or "log" records from one database server to another
   is typically described as Log Shipping. PostgreSQL implements file-based
   Log Shipping, meaning WAL records are batched one file at a time. WAL
   files can be shipped easily and cheaply over any distance, whether it be
   to an adjacent system, another system on the same site or another system
   on the far side of the globe. The bandwidth required for this technique
   varies according to the transaction rate of the Primary Server.
   Record-based Log Shipping is also possible with custom-developed
   procedures, discussed in a later section. Future developments are likely
   to include options for synchronous and/or integrated record-based log
   shipping.
  </para>

  <para>
   It should be noted that the log shipping is asynchronous, i.e. the WAL
   records are shipped after transaction commit. As a result there can be a
   small window of data loss, should the Primary Server suffer a
   catastrophic failure. The window of data loss is minimized by the use of
   the archive_timeout parameter, which can be set as low as a few seconds
   if required. A very low setting can increase the bandwidth requirements
   for file shipping.
  </para>
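
  <para>
   For example, the following settings on the Primary (the archive path is
   illustrative) would bound the loss window at roughly one minute:
<programlisting>
archive_command = 'cp %p /mnt/server/archivedir/%f'
archive_timeout = 60            # seconds; force a segment switch this often
</programlisting>
  </para>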

  <para>
   The Standby server is not available for access, since it is continually
   performing recovery processing. Recovery performance is sufficiently
   good that the Standby will typically be only minutes away from full
   availability once it has been activated. As a result, we refer to this
   capability as a Warm Standby configuration that offers High
   Availability. Restoring a server from an archived base backup and
   rollforward can take considerably longer and so that technique only
   really offers a solution for Disaster Recovery, not HA.
  </para>

  <para>
   Other mechanisms for High Availability replication are available, both
   commercially and as open-source software.  
  </para>

  <para>
   In general, log shipping between servers running different release
   levels will not be possible. It is the policy of the PostgreSQL Global
   Development Group not to make changes to disk formats during minor release
   upgrades, so it is likely that running different minor release levels 
   on Primary and Standby servers will work successfully. However, no
   formal support for that is offered and you are advised not to allow this
   to occur over long periods.
  </para>

  <sect2 id="warm-standby-planning">
   <title>Planning</title>

   <para>
    On the Standby server all tablespaces and paths will refer to similarly
    named mount points, so it is important to create the Primary and Standby
    servers so that they are as similar as possible, at least from the
    perspective of the database server. Furthermore, any CREATE TABLESPACE
    commands will be passed across as-is, so any new mount points must be
    created on both servers before they are used on the Primary. Hardware
    need not be the same, but experience shows that maintaining two
    identical systems is easier than maintaining two dissimilar ones over
    the whole lifetime of the application and system.
   </para>

   <para>
    There is no special mode required to enable a Standby server. The
    operations that occur on both Primary and Standby servers are entirely
    normal continuous archiving and recovery tasks. The primary point of
    contact between the two database servers is the archive of WAL files
    that both share: Primary writing to the archive, Standby reading from
    the archive. Care must be taken to ensure that WAL archives for separate
    servers do not become mixed together or confused.
   </para>

   <para>
    The magic that makes the two loosely coupled servers work together is
    simply a restore_command that waits for the next WAL file to be archived
    from the Primary. The restore_command is specified in the recovery.conf
    file on the Standby Server. Normal recovery processing would request a
    file from the WAL archive, causing an error if the file was unavailable.
    For Standby processing it is normal for the next file to be unavailable,
    so we must be patient and wait for it to appear. A waiting
    restore_command can be written as a custom script that loops after
    polling for the existence of the next WAL file. There must also be some
    way to trigger failover, which should interrupt the restore_command,
    break the loop and return a file not found error to the Standby Server.
    This ends recovery and the Standby will then come up as a normal
    server.
   </para>

   <para>
    Sample code for the C version of the restore_command would be:
<programlisting>
triggered = false;
while (!NextWALFileReady() && !triggered)
{
    usleep(100000L);        /* wait for ~0.1 sec */
    if (CheckForExternalTrigger())
        triggered = true;
}
if (!triggered)
    CopyWALFileForRecovery();
</programlisting>
</programlisting>
   </para>
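
   <para>
    An equivalent shell-script sketch, suitable for use as the
    restore_command itself, might look like this (the archive directory and
    trigger-file path are illustrative, not built-in defaults):
<programlisting>
#!/bin/sh
# Hypothetical waiting restore_command, e.g. configured as:
#   restore_command = 'wait_for_wal.sh /mnt/standby/archive %f %p'
ARCHIVEDIR="$1"                     # directory the Primary archives into
WALFILE="$2"                        # file requested by recovery (%f)
DESTPATH="$3"                       # where recovery wants the file (%p)
TRIGGER=/tmp/pgsql.trigger          # create this file to trigger failover

while true
do
    if [ -f "$TRIGGER" ]; then
        exit 1                      # "not found" ends recovery; Standby comes up
    fi
    if [ -f "$ARCHIVEDIR/$WALFILE" ]; then
        exec cp "$ARCHIVEDIR/$WALFILE" "$DESTPATH"
    fi
    sleep 1                         # wait for the next file to be archived
done
</programlisting>
   </para>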

   <para>
    PostgreSQL does not provide the system software required to identify a
    failure on the Primary and notify the Standby system and then the
    Standby database server. Many such tools exist and are well integrated
    with other aspects of a system failover, such as IP address migration.
   </para>

   <para>
    Triggering failover is an important part of planning and design. The
    restore_command is executed in full once for each WAL file. The process
    running the restore_command is therefore created and dies for each file,
    so there is no daemon or server process and so we cannot use signals and
    a signal handler. A more permanent notification is required to trigger
    the failover. It is possible to use a simple timeout facility,
    especially if used in conjunction with a known archive_timeout setting
    on the Primary. This is somewhat error prone, since a network problem or
    a busy Primary server might be enough to trigger failover. A notification
    mechanism such as the explicit creation of a trigger file is less error
    prone, if this can be arranged.
   </para>
  </sect2>

  <sect2 id="warm-standby-config">
   <title>Implementation</title>

   <para>
    The short procedure for configuring a Standby Server is as follows. For
    full details of each step, refer to previous sections as noted.
    <orderedlist>
     <listitem>
      <para>
       Set up Primary and Standby systems as near identically as possible,
       including two identical copies of PostgreSQL at the same release level.
      </para>
     </listitem>
     <listitem>
      <para>
       Set up Continuous Archiving from the Primary to a WAL archive located
       in a directory on the Standby Server. Ensure that both <xref
       linkend="guc-archive-command"> and <xref linkend="guc-archive-timeout">
       are set. (See <xref linkend="backup-archiving-wal">)
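       A sample configuration for this step appears after this list.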
      </para>
     </listitem>
     <listitem>
      <para>
       Make a Base Backup of the Primary Server. (See <xref
       linkend="backup-base-backup">)
      </para>
     </listitem>
     <listitem>
      <para>
       Begin recovery on the Standby Server from the local WAL archive,
       using a recovery.conf that specifies a restore_command that waits as
       described previously. (See <xref linkend="backup-pitr-recovery">)
      </para>
     </listitem>
    </orderedlist>
   </para>
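
   <para>
    As a sketch of step 2, the Primary's <filename>postgresql.conf</> might
    contain (the host name and directory are illustrative):
<programlisting>
archive_command = 'scp %p standby:/mnt/standby/archive/%f'
archive_timeout = 60
</programlisting>
    with the Standby's waiting restore_command polling the same
    /mnt/standby/archive directory.
   </para>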

   <para>
    Recovery treats the WAL Archive as read-only, so once a WAL file has
    been copied to the Standby system it can be copied to tape at the same
    time as it is being used by the Standby database server to recover.
    Thus, running a Standby Server for High Availability can be performed at
    the same time as files are stored for longer term Disaster Recovery
    purposes. 
   </para>

   <para>
    For testing purposes, it is possible to run both Primary and Standby
    servers on the same system. This does not provide any worthwhile
    improvement on server robustness, nor would it be described as HA.
   </para>
  </sect2>

  <sect2 id="warm-standby-failover">
   <title>Failover</title>

   <para>
    If the Primary Server fails then the Standby Server should begin
    failover procedures.
   </para>

   <para>
    If the Standby Server fails then no failover need take place. If the
    Standby Server can be restarted, then the recovery process can also be
    immediately restarted, taking advantage of Restartable Recovery.
   </para>

   <para>
    If the Primary Server fails and then immediately restarts, you must have
    a mechanism for informing it that it is no longer the Primary. This is
    sometimes known as STONITH (Shoot The Other Node In The Head), which is
    necessary to avoid situations where both systems think they are the
    Primary, which can lead to confusion and ultimately data loss.
   </para>

   <para>
    Many failover systems use just two systems, the Primary and the Standby,
    connected by some kind of heartbeat mechanism to continually verify the
    connectivity between the two and the viability of the Primary. It is
    also possible to use a third system, known as a Witness Server to avoid
    some problems of inappropriate failover, but the additional complexity
    may not be worthwhile unless it is set-up with sufficient care and
    rigorous testing.
   </para>

   <para>
    At the instant that failover takes place to the Standby, we have only a
    single server in operation. This is known as a degenerate state.
    The former Standby is now the Primary, but the former Primary is down 
    and may stay down. We must now fully re-create a Standby server, 
    either on the former Primary system when it comes up, or on a third, 
    possibly new, system. Once complete the Primary and Standby can be 
    considered to have switched roles. Some people choose to use a third 
    server to provide additional protection across the failover interval, 
    though clearly this complicates the system configuration and 
    operational processes (and this can also act as a Witness Server).
   </para>

   <para>
    So, switching from Primary to Standby Server can be fast, but requires
    some time to re-prepare the failover cluster. Regular switching from
    Primary to Standby is encouraged, since it allows regular downtime on
    each system as required to maintain HA. This also acts as a test of the
    failover mechanism, ensuring that it will really work when you need it. Written
    administration procedures are advised.
   </para>
  </sect2>

  <sect2 id="warm-standby-record">
   <title>Implementing Record-based Log Shipping</title>

   <para>
    The main features for Log Shipping in this release are based around the
    file-based Log Shipping described above. It is also possible to
    implement record-based Log Shipping using the pg_xlogfile_name_offset()
    function, though this requires custom development.
   </para>

   <para>
    An external program can call pg_xlogfile_name_offset() to find out the
    filename and the exact byte offset within it of the latest WAL pointer.
    If the external program regularly polls the server it can find out how
    far forward the pointer has moved. It can then access the WAL file
    directly and copy those bytes across to a less up-to-date copy on a
    Standby Server.
   </para>
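
   <para>
    As a sketch, such a program might poll with a query like the following
    (assuming the companion function pg_current_xlog_location() is
    available; the copying logic itself is not shown):
<programlisting>
SELECT * FROM pg_xlogfile_name_offset(pg_current_xlog_location());
</programlisting>
    and compare the file name and byte offset returned by successive polls
    to determine which bytes still need to be copied to the Standby.
   </para>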
  </sect2>
 </sect1>

 <sect1 id="migration">
  <title>Migration Between Releases</title>

  <indexterm zone="migration">
   <primary>upgrading</primary>
  </indexterm>

  <indexterm zone="migration">
   <primary>version</primary>
   <secondary>compatibility</secondary>
  </indexterm>

  <para>
   This section discusses how to migrate your database data from one
   <productname>PostgreSQL</> release to a newer one.
   The software installation procedure <foreignphrase>per se</> is not the
   subject of this section; those details are in <xref linkend="installation">.
  </para>

  <para>
   As a general rule, the internal data storage format is subject to
   change between major releases of <productname>PostgreSQL</> (where
   the number after the first dot changes). This does not apply to
   different minor releases under the same major release (where the
   number after the second dot changes); these always have compatible
   storage formats. For example, releases 7.2.1, 7.3.2, and 7.4 are
   not compatible, whereas 7.2.1 and 7.2.2 are. When you update
   between compatible versions, you can simply replace the executables
   and reuse the data directory on disk. Otherwise you need to back
   up your data and restore it on the new server.  This has to be done
   using <application>pg_dump</>; file system level backup methods
   obviously won't work. There are checks in place that prevent you
   from using a data directory with an incompatible version of
   <productname>PostgreSQL</productname>, so no great harm can be done by
   trying to start the wrong server version on a data directory.
  </para>

  <para>
   It is recommended that you use the <application>pg_dump</> and
   <application>pg_dumpall</> programs from the newer version of
   <productname>PostgreSQL</>, to take advantage of any enhancements
   that may have been made in these programs.  Current releases of the
   dump programs can read data from any server version back to 7.0.
  </para>

  <para>
   The least downtime can be achieved by installing the new server in
   a different directory and running both the old and the new servers
   in parallel, on different ports. Then you can use something like

<programlisting>
pg_dumpall -p 5432 | psql -d postgres -p 6543
</programlisting>

   to transfer your data.  Or use an intermediate file if you want.
   Then you can shut down the old server and start the new server at
   the port the old one was running at. You should make sure that the
   old database is not updated after you run <application>pg_dumpall</>,
   otherwise you will obviously lose that data. See <xref
   linkend="client-authentication"> for information on how to prohibit
   access.
  </para>

  <para>
   In practice you probably want to test your client
   applications on the new setup before switching over completely.
   This is another reason for setting up concurrent installations
   of old and new versions.
  </para>

  <para>
   If you cannot or do not want to run two servers in parallel you can
   do the backup step before installing the new version, bring down
   the server, move the old version out of the way, install the new
   version, start the new server, restore the data. For example:

<programlisting>
pg_dumpall &gt; backup
pg_ctl stop
mv /usr/local/pgsql /usr/local/pgsql.old
cd ~/postgresql-&version;
gmake install
initdb -D /usr/local/pgsql/data
postgres -D /usr/local/pgsql/data
psql -f backup postgres
</programlisting>

   See <xref linkend="runtime"> about ways to start and stop the
   server and other details. The installation instructions will advise
   you of strategic places to perform these steps.
  </para>

  <note>
   <para>
    When you <quote>move the old installation out of the way</quote>
    it may no longer be perfectly usable. Some of the executable programs
    contain absolute paths to various installed programs and data files.
    This is usually not a big problem but if you plan on using two
    installations in parallel for a while you should assign them
    different installation directories at build time.  (This problem
    is rectified in <productname>PostgreSQL</> 8.0 and later, but you
    need to be wary of moving older installations.)
   </para>
  </note>
 </sect1>
</chapter>

<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
sgml-omittag:nil
sgml-shorttag:t
sgml-minimize-attributes:nil
sgml-always-quote-attributes:t
sgml-indent-step:1
sgml-indent-tabs-mode:nil
sgml-indent-data:t
sgml-parent-document:nil
sgml-default-dtd-file:"./reference.ced"
sgml-exposed-tags:nil
sgml-local-catalogs:("/usr/share/sgml/catalog")
sgml-local-ecat-files:nil
End:
-->