Commit b9f5c93b authored by Peter Eisentraut

Regenerate text files.

parent e873207f
PostgreSQL Installation Instructions
This document describes the installation of PostgreSQL from the source code
distribution.
-------------------------------------------------------------------------------
Short Version
./configure
gmake
su
gmake install
adduser postgres
mkdir /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
su - postgres
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data >logfile 2>&1 &
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test
The long version is the rest of this document.
-------------------------------------------------------------------------------
Requirements
In general, a modern Unix-compatible platform should be able to run PostgreSQL.
The platforms that had received specific testing at the time of release are
listed in the Section called Supported Platforms below. In the "doc"
subdirectory of the distribution there are several platform-specific FAQ
documents you might wish to consult if you are having trouble. The following
software packages are required for building PostgreSQL:
* GNU make is required; other make programs will *not* work. GNU make is
often installed under the name "gmake"; this document will always refer
to it by that name. (On some systems GNU make is the default tool with
the name "make".) To test for GNU make enter
gmake --version
It is recommended to use version 3.76.1 or later.
* You need an ISO/ANSI C compiler. Recent versions of GCC are
recommended, but PostgreSQL is known to build with a wide variety of
compilers from different vendors.
* gzip is needed to unpack the distribution in the first place. If you are
reading this, you probably already got past that hurdle.
* The GNU Readline library (for comfortable line editing and command
history retrieval) will be used by default. If you don't want to use it
then you must specify the "--without-readline" option for "configure".
(On NetBSD, the "libedit" library is Readline-compatible and is used if
"libreadline" is not found.)
* To build on Windows NT or Windows 2000 you need the Cygwin and cygipc
packages. See the file "doc/FAQ_MSWIN" for details.
The following packages are optional. They are not required in the default
configuration, but they are needed when certain build options are enabled, as
explained below.
* To build the server programming language PL/Perl you need a full Perl
installation, including the "libperl" library and the header files. Since
PL/Perl will be a shared library, the "libperl" library must be a shared
library also on most platforms. This appears to be the default in recent
Perl versions, but it was not in earlier versions, and in general it is
the choice of whoever installed Perl at your site.
If you don't have the shared library but you need one, a message like
this will appear during the build to point out this fact:
*** Cannot build PL/Perl because libperl is not a shared library.
*** You might have to rebuild your Perl installation. Refer to
*** the documentation for details.
(If you don't follow the on-screen output you will merely notice that the
PL/Perl library object, "plperl.so" or similar, will not be installed.)
If you see this, you will have to rebuild and install Perl manually to be
able to build PL/Perl. During the configuration process for Perl, request
a shared library (see the sketch following this list).
* To build the PL/Python server programming language, you need a Python
installation, including the header files. Since PL/Python will be a
shared library, the "libpython" library must be a shared library also on
most platforms. This is not the case in a default Python installation.
If after building and installing you have a file called "plpython.so"
(possibly a different extension), then everything went well. Otherwise
you should have seen a notice like this flying by:
*** Cannot build PL/Python because libpython is not a shared library.
*** You might have to rebuild your Python installation. Refer to
*** the documentation for details.
That means you have to rebuild (part of) your Python installation to
supply this shared library.
The catch is that the Python distribution or the Python maintainers do
not provide any direct way to do this. The closest thing we can offer you
is the information in Python FAQ 3.30. On some operating systems you
don't really have to build a shared library, but then you will have to
convince the PostgreSQL build system of this. Consult the "Makefile" in
the "src/pl/plpython" directory for details.
* If you want to build Tcl or Tk components (clients and the PL/Tcl
language) you of course need a Tcl installation.
* To build the JDBC driver, you need Ant 1.5 or higher and a JDK. Ant is a
special tool for building Java-based packages. It can be downloaded from
the Ant web site.
If you have several Java compilers installed, it depends on the Ant
configuration which one gets used. Precompiled Ant distributions are
typically set up to read a file ".antrc" in the current user's home
directory for configuration. For example, to use a different JDK than the
default, this may work:
JAVA_HOME=/usr/local/sun-jdk1.3
JAVACMD=$JAVA_HOME/bin/java
Note: Do not try to build the driver by calling "ant" or even
"javac" directly. This will not work. Run "gmake" normally as
described below.
* To enable Native Language Support (NLS), that is, the ability to display
a program's messages in a language other than English, you need an
implementation of the Gettext API. Some operating systems have this
built-in (e.g., Linux, NetBSD, Solaris); for other systems you can
download an add-on package from here: http://www.postgresql.org/~petere/
gettext.html. If you are using the Gettext implementation in the GNU C
library then you will additionally need the GNU Gettext package for some
utility programs. For any of the other implementations you will not need
it.
* Kerberos, OpenSSL, or PAM, if you want to support authentication using
these services.
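Regarding the PL/Perl item above: the exact procedure depends on your Perl
version, but a common way to obtain a shared "libperl" is to pass the
-Duseshrplib flag when configuring the Perl build (a sketch only, run from
the Perl source directory):
sh Configure -Duseshrplib
Answer the remaining Configure prompts as usual, then rebuild and reinstall
Perl.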
If you are building from a CVS tree instead of using a released source package,
or if you want to do development, you also need the following packages:
* Flex and Bison are needed to build a CVS checkout or if you changed the
actual scanner and parser definition files. If you need them, be sure to
get Flex 2.5.4 or later and Bison 1.875 or later. Other yacc programs can
sometimes be used, but doing so requires extra effort and is not
recommended. Other lex programs will definitely not work.
If you need to get a GNU package, you can find it at your local GNU mirror site
(see http://www.gnu.org/order/ftp.html for a list) or at ftp://ftp.gnu.org/
gnu/.
Also check that you have sufficient disk space. You will need about 65 MB for
the source tree during compilation and about 15 MB for the installation
directory. An empty database cluster takes about 25 MB; databases take about
five times the amount of space that a flat text file with the same data would
take. If you are going to run the regression tests you will temporarily need up
to an extra 90 MB. Use the "df" command to check for disk space.
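For example, to check the free space on the file system that will hold the
installation directory (the path is only an example; use whatever location
you chose):
df -k /usr/local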
-------------------------------------------------------------------------------
If You Are Upgrading
The internal data storage format changes with new releases of PostgreSQL.
Therefore, if you are upgrading an existing installation that does not have a
version number "7.4.x", you must back up and restore your data as shown here.
These instructions assume that your existing installation is under the "/usr/
local/pgsql" directory, and that the data area is in "/usr/local/pgsql/data".
Substitute your paths appropriately.
1. Make sure that your database is not updated during or after the backup.
This does not affect the integrity of the backup, but the changed data
would of course not be included. If necessary, edit the permissions in
the file "/usr/local/pgsql/data/pg_hba.conf" (or equivalent) to disallow
access from everyone except you.
2. To back up your database installation, type:
pg_dumpall > outputfile
If you need to preserve OIDs (such as when using them as foreign keys),
then use the "-o" option when running "pg_dumpall".
"pg_dumpall" does not save large objects. Check the documentation if you
need to do this.
To make the backup, you can use the "pg_dumpall" command from the version
you are currently running. For best results, however, try to use the
"pg_dumpall" command from PostgreSQL 7.4, since this version contains
bug fixes and improvements over older versions. While this advice might
seem idiosyncratic since you haven't installed the new version yet, it is
advisable to follow it if you plan to install the new version in parallel
with the old version. In that case you can complete the installation
normally and transfer the data later. This will also decrease the
downtime.
3. If you are installing the new version at the same location as the old one
then shut down the old server, at the latest before you install the new
files:
kill -INT `cat /usr/local/pgsql/data/postmaster.pid`
Versions prior to 7.0 do not have this "postmaster.pid" file. If you are
using such a version you must find out the process ID of the server
yourself, for example by typing "ps ax | grep postmaster", and supply it
to the "kill" command.
On systems that have PostgreSQL started at boot time, there is probably a
start-up file that will accomplish the same thing. For example, on a Red
Hat Linux system one might find that
/etc/rc.d/init.d/postgresql stop
works. Another possibility is "pg_ctl stop" (see the sketch following this
list).
4. If you are installing in the same place as the old version then it is
also a good idea to move the old installation out of the way, in case you
have trouble and need to revert to it. Use a command like this:
mv /usr/local/pgsql /usr/local/pgsql.old
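Regarding step 3: if you prefer "pg_ctl stop", an invocation like the
following typically works (a sketch; the exact options accepted can vary
between PostgreSQL versions):
/usr/local/pgsql/bin/pg_ctl stop -D /usr/local/pgsql/data -m fast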
After you have installed PostgreSQL 7.4, create a new database directory and
start the new server. Remember that you must execute these commands while
logged in to the special database user account (which you already have if you
are upgrading).
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data
Finally, restore your data with
/usr/local/pgsql/bin/psql -d template1 -f outputfile
using the *new* psql.
These topics are discussed at length in the documentation, which you are
encouraged to read in any case.
-------------------------------------------------------------------------------
Installation Procedure
1. Configuration
The first step of the installation procedure is to configure the source
tree for your system and choose the options you would like. This is done
by running the "configure" script. For a default installation simply
enter
./configure
This script will run a number of tests to guess values for various system
dependent variables and detect some quirks of your operating system, and
finally will create several files in the build tree to record what it
found. (You can also run "configure" in a directory outside the source
tree if you want to keep the build directory separate.)
The default configuration will build the server and utilities, as well as
all client applications and interfaces that require only a C compiler.
All files will be installed under "/usr/local/pgsql" by default.
You can customize the build and installation process by supplying one or
more of the following command line options to "configure":
--prefix=PREFIX
Install all files under the directory "PREFIX" instead of "/usr/local/pgsql".
The actual files will be installed into various subdirectories; no files will
ever be installed directly into the "PREFIX" directory.
If you have special needs, you can also customize the individual
subdirectories with the following options.
--exec-prefix=EXEC-PREFIX
You can install architecture-dependent files under a different prefix,
"EXEC-PREFIX", than what "PREFIX" was set to. This can be useful to share
architecture-independent files between hosts. If you omit this, then
"EXEC-PREFIX" is set equal to "PREFIX" and both architecture-dependent and
independent files will be installed under the same tree, which is probably
what you want.
--bindir=DIRECTORY
Specifies the directory for executable programs. The default is
"EXEC-PREFIX/bin", which normally means "/usr/local/pgsql/bin".
--datadir=DIRECTORY
Sets the directory for read-only data files used by the installed programs.
The default is "PREFIX/share". Note that this has nothing to do with where
your database files will be placed.
--sysconfdir=DIRECTORY
The directory for various configuration files, "PREFIX/etc" by default.
--libdir=DIRECTORY
The location to install libraries and dynamically loadable modules. The
default is "EXEC-PREFIX/lib".
--includedir=DIRECTORY
The directory for installing C and C++ header files. The default is
"PREFIX/include".
--docdir=DIRECTORY
Documentation files, except "man" pages, will be installed into this
directory. The default is "PREFIX/doc".
--mandir=DIRECTORY
The man pages that come with PostgreSQL will be installed under this
directory, in their respective "manx" subdirectories. The default is
"PREFIX/man".
Note: Care has been taken to make it possible to install PostgreSQL into
shared installation locations (such as "/usr/local/include") without
interfering with the namespace of the rest of the system. First, the string
"/postgresql" is automatically appended to datadir, sysconfdir, and docdir,
unless the fully expanded directory name already contains the string
"postgres" or "pgsql". For example, if you choose "/usr/local" as prefix, the
documentation will be installed in "/usr/local/doc/postgresql", but if the
prefix is "/opt/postgres", then it will be in "/opt/postgres/doc". The public
C header files of the client interfaces are installed into includedir and are
namespace-clean. The internal header files and the server header files are
installed into private directories under includedir. See the documentation of
each interface for information about how to get at its header files. Finally,
a private subdirectory will also be created, if appropriate, under libdir for
dynamically loadable modules.
--with-includes=DIRECTORIES
"DIRECTORIES" is a colon-separated list of directories that will be added to
the list the compiler searches for header files. If you have optional
packages (such as GNU Readline) installed in a non-standard location, you
have to use this option and probably also the corresponding
"--with-libraries" option.
Example: --with-includes=/opt/gnu/include:/usr/sup/include.
--with-libraries=DIRECTORIES
"DIRECTORIES" is a colon-separated list of directories to search for
libraries. You will probably have to use this option (and the corresponding
"--with-includes" option) if you have packages installed in non-standard
locations.
Example: --with-libraries=/opt/gnu/lib:/usr/sup/lib.
--enable-nls[=LANGUAGES]
Enables Native Language Support (NLS), that is, the ability to display a
program's messages in a language other than English. "LANGUAGES" is a space
separated list of codes of the languages that you want supported, for example
--enable-nls='de fr'. (The intersection between your list and the set of
actually provided translations will be computed automatically.) If you do not
specify a list, then all available translations are installed.
To use this option, you will need an implementation of the Gettext API; see
above.
--with-pgport=NUMBER
Set "NUMBER" as the default port number for server and clients. The default
is 5432. The port can always be changed later on, but if you specify it here
then both server and clients will have the same default compiled in, which
can be very convenient. Usually the only good reason to select a non-default
value is if you intend to run multiple PostgreSQL servers on the same
machine.
--with-perl
Build the PL/Perl server-side language.
--with-python
Build the PL/Python server-side language.
--with-tcl
Build components that require Tcl/Tk, which are libpgtcl, pgtclsh, pgtksh,
and PL/Tcl. But see below about "--without-tk".
--without-tk
If you specify "--with-tcl" and this option, then the program that requires
Tk (pgtksh) will be excluded.
--with-tclconfig=DIRECTORY, --with-tkconfig=DIRECTORY
Tcl/Tk installs the files "tclConfig.sh" and "tkConfig.sh", which contain
configuration information needed to build modules interfacing to Tcl or Tk.
These files are normally found automatically at their well-known locations,
but if you want to use a different version of Tcl or Tk you can specify the
directory in which to find them.
--with-java
Build the JDBC driver and associated Java packages.
--with-krb4[=DIRECTORY], --with-krb5[=DIRECTORY]
Build with support for Kerberos authentication. You can use either Kerberos
version 4 or 5, but not both. The "DIRECTORY" argument specifies the root
directory of the Kerberos installation; "/usr/athena" is assumed as default.
If the relevant header files and libraries are not under a common parent
directory, then you must use the "--with-includes" and "--with-libraries"
options in addition to this option. If, on the other hand, the required files
are in a location that is searched by default (e.g., "/usr/lib"), then you
can leave off the argument.
"configure" will check for the required header files and libraries to make
sure that your Kerberos installation is sufficient before proceeding.
--with-krb-srvnam=NAME
The name of the Kerberos service principal. postgres is the default. There's
probably no reason to change this.
--with-openssl[=DIRECTORY]
Build with support for SSL (encrypted) connections. This requires the OpenSSL
package to be installed. The "DIRECTORY" argument specifies the root
directory of the OpenSSL installation; the default is "/usr/local/ssl".
"configure" will check for the required header files and libraries to make
sure that your OpenSSL installation is sufficient before proceeding.
--with-pam
Build with PAM (Pluggable Authentication Modules) support.
--without-readline
Prevents the use of the Readline library. This disables command-line editing
and history in psql, so it is not recommended.
--with-rendezvous
Build with Rendezvous support.
--disable-spinlocks
Allow the build to succeed even if PostgreSQL has no CPU spinlock support for
the platform. The lack of spinlock support will result in poor performance;
therefore, this option should only be used if the build aborts and informs
you that the platform lacks spinlock support.
--enable-thread-safety
Make the client libraries thread-safe. This allows concurrent threads in
libpq and ECPG programs to safely control their private connection handles.
--without-zlib
Prevents the use of the Zlib library. This disables compression support in
pg_dump. This option is only intended for those rare systems where this
library is not available.
--enable-debug
Compiles all programs and libraries with debugging symbols. This means that
you can run the programs through a debugger to analyze problems. This
enlarges the size of the installed executables considerably, and on non-GCC
compilers it usually also disables compiler optimization, causing slowdowns.
However, having the symbols available is extremely helpful for dealing with
any problems that may arise. Currently, this option is recommended for
production installations only if you use GCC. But you should always have it
on if you are doing development work or running a beta version.
--enable-cassert
Enables assertion checks in the server, which test for many "can't happen"
conditions. This is invaluable for code development purposes, but the tests
slow things down a little. Also, having the tests turned on won't necessarily
enhance the stability of your server! The assertion checks are not
categorized for severity, and so what might be a relatively harmless bug will
still lead to server restarts if it triggers an assertion failure. Currently,
this option is not recommended for production use, but you should have it on
for development work or when running a beta version.
--enable-depend
Enables automatic dependency tracking. With this option, the makefiles are
set up so that all affected object files will be rebuilt when any header file
is changed. This is useful if you are doing development work, but is just
wasted overhead if you intend only to compile once and install. At present,
this option will work only if you use GCC.
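As an illustration only (the installation prefix and the particular options
chosen here are arbitrary examples, not recommendations), several of the
options described above can be combined in a single invocation:
./configure --prefix=/opt/postgres --with-perl --with-openssl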
If you prefer a C compiler different from the one "configure" picks then you
can set the environment variable CC to the program of your choice. By
default, "configure" will pick "gcc" unless this is inappropriate for the
platform. Similarly, you can override the default compiler flags with the
CFLAGS variable.
You can specify environment variables on the "configure" command line, for
example:
./configure CC=/opt/bin/gcc CFLAGS='-O2 -pipe'
2. Build
To start the build, type
gmake
(Remember to use GNU make.) The build may take anywhere from 5 minutes to
half an hour depending on your hardware. The last line displayed should be
All of PostgreSQL is successfully made. Ready to install.
3. Regression Tests
If you want to test the newly built server before you install it, you can run
the regression tests at this point. The regression tests are a test suite to
verify that PostgreSQL runs on your machine in the way the developers
expected it to. Type
gmake check
(This won't work as root; do it as an unprivileged user.) It is possible that
some tests fail, due to differences in error message wording or floating
point results. The file "src/test/regress/README" and the documentation
contain detailed information about interpreting the test results. You can
repeat this test at any later time by issuing the same command.
4. Installing The Files
Note: If you are upgrading an existing system and are going to install the
new files over the old ones, then you should have backed up your data and
shut down the old server by now, as explained in the Section called If You
Are Upgrading above.
To install PostgreSQL enter
gmake install
This will install files into the directories that were specified in step 1.
Make sure that you have appropriate permissions to write into that area.
Normally you need to do this step as root. Alternatively, you could create
the target directories in advance and arrange for appropriate permissions to
be granted.
You can use gmake install-strip instead of gmake install to strip the
executable files and libraries as they are installed. This will save some
space. If you built with debugging support, stripping will effectively remove
the debugging support, so it should only be done if debugging is no longer
needed. install-strip tries to do a reasonable job saving space, but it does
not have perfect knowledge of how to strip every unneeded byte from an
executable file, so if you want to save all the disk space you possibly can,
you will have to do manual work.
The standard installation provides only the header files needed for client
application development. If you plan to do any server-side program
development (such as custom functions or data types written in C), then you
may want to install the entire PostgreSQL include tree into your target
include directory. To do that, enter
gmake install-all-headers
This adds a megabyte or two to the installation footprint, and is only useful
if you don't plan to keep the whole source tree around for reference. (If you
do, you can just use the source's include directory when building server-side
software.)
Client-only installation: If you want to install only the client applications
and interface libraries, then you can use these commands:
gmake -C src/bin install
gmake -C src/include install
gmake -C src/interfaces install
gmake -C doc install
Uninstallation: To undo the installation use the command "gmake uninstall".
However, this will not remove any created directories.
Cleaning: After the installation you can make room by removing the built
files from the source tree with the command "gmake clean". This will preserve
the files made by the "configure" program, so that you can rebuild everything
with "gmake" later on. To reset the source tree to the state in which it was
distributed, use "gmake distclean". If you are going to build for several
platforms from the same source tree you must do this and re-configure for
each build.
If you perform a build and then discover that your "configure" options were
wrong, or if you change anything that "configure" investigates (for example,
software upgrades), then it's a good idea to do "gmake distclean" before
reconfiguring and rebuilding. Without this, your changes in configuration
choices may not propagate everywhere they need to.
-------------------------------------------------------------------------------
Post-Installation Setup
Tuning
By default, PostgreSQL is configured to run on minimal hardware. This allows
it to start up with almost any hardware configuration. However, the default
configuration is not designed for optimum performance. To achieve optimum
performance, several server variables must be adjusted, the two most common
being shared_buffers and sort_mem mentioned in the documentation. Other
parameters in the documentation also affect performance.
-------------------------------------------------------------------------------
Shared Libraries
On some systems that have shared libraries (which most systems do) you need
to tell your system how to find the newly installed shared libraries. The
systems on which this is *not* necessary include BSD/OS, FreeBSD, HP-UX,
IRIX, Linux, NetBSD, OpenBSD, Tru64 UNIX (formerly Digital UNIX), and
Solaris.
The method to set the shared library search path varies between platforms,
but the most widely usable method is to set the environment variable
LD_LIBRARY_PATH like so: In Bourne shells ("sh", "ksh", "bash", "zsh")
LD_LIBRARY_PATH=/usr/local/pgsql/lib
export LD_LIBRARY_PATH
or in "csh" or "tcsh"
setenv LD_LIBRARY_PATH /usr/local/pgsql/lib
Replace /usr/local/pgsql/lib with whatever you set "--libdir" to in step 1.
You should put these commands into a shell start-up file such as
"/etc/profile" or "~/.bash_profile". Some good information about the caveats
associated with this method can be found at
http://www.visi.com/~barr/ldpath.html.
On some systems it might be preferable to set the environment variable
LD_RUN_PATH *before* building.
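A minimal sketch of that approach, using the Bourne-shell syntax shown above
and assuming the default library directory, is to export the variable and
then run the build:
LD_RUN_PATH=/usr/local/pgsql/lib
export LD_RUN_PATH
gmake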
On Cygwin, put the library directory in the PATH or move the ".dll" files
into the "bin" directory.
If in doubt, refer to the manual pages of your system (perhaps "ld.so" or
"rld"). If you later on get a message like
psql: error in loading shared libraries
libpq.so.2.1: cannot open shared object file: No such file or directory
then this step was necessary. Simply take care of it then.
If you are on BSD/OS, Linux, or SunOS 4 and you have root access you can run
/sbin/ldconfig /usr/local/pgsql/lib
(or equivalent directory) after installation to enable the run-time linker to
find the shared libraries faster. Refer to the manual page of "ldconfig" for
more information. On FreeBSD, NetBSD, and OpenBSD the command is
/sbin/ldconfig -m /usr/local/pgsql/lib
instead. Other systems are not known to have an equivalent command.
-------------------------------------------------------------------------------
Environment Variables
If you installed into "/usr/local/pgsql" or some other location that is not
searched for programs by default, you should add "/usr/local/pgsql/bin" (or
whatever you set "--bindir" to in step 1) into your PATH. Strictly speaking,
this is not necessary, but it will make the use of PostgreSQL much more
convenient.
To do this, add the following to your shell start-up file, such as
"~/.bash_profile" (or "/etc/profile", if you want it to affect every user):
PATH=/usr/local/pgsql/bin:$PATH
export PATH
If you are using "csh" or "tcsh", then use this command:
set path = ( /usr/local/pgsql/bin $path )
To enable your system to find the man documentation, you need to add lines
like the following to a shell start-up file unless you installed into a
location that is searched by default.
MANPATH=/usr/local/pgsql/man:$MANPATH
export MANPATH
The environment variables PGHOST and PGPORT specify to client applications
the host and port of the database server, overriding the compiled-in
defaults. If you are going to run client applications remotely then it is
convenient if every user that plans to use the database sets PGHOST. This is
not required, however: the settings can be communicated via command line
options to most client programs.
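For example, a user who always connects to the same remote server might add
something like the following to a shell start-up file (the host name here is
only a placeholder):
PGHOST=db.example.com
PGPORT=5432
export PGHOST PGPORT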
_________________________________________________________________ The following is a quick summary of how to get PostgreSQL up and running once
installed. The main documentation contains more information.
Getting Started
1. Create a user account for the PostgreSQL server. This is the user the
The following is a quick summary of how to get PostgreSQL up and server will run as. For production use you should create a separate,
running once installed. The main documentation contains more unprivileged account ("postgres" is commonly used). If you do not have
information. root access or just want to play around, your own user account is enough,
1. Create a user account for the PostgreSQL server. This is the user but running the server as root is a security risk and will not work.
the server will run as. For production use you should create a
separate, unprivileged account ("postgres" is commonly used). If adduser postgres
you do not have root access or just want to play around, your own
user account is enough, but running the server as root is a 2. Create a database installation with the "initdb" command. To run "initdb"
security risk and will not work. you must be logged in to your PostgreSQL server account. It will not work
adduser postgres as root.
2. Create a database installation with the "initdb" command. To run
"initdb" you must be logged in to your PostgreSQL server account. root# mkdir /usr/local/pgsql/data
It will not work as root. root# chown postgres /usr/local/pgsql/data
root# mkdir /usr/local/pgsql/data root# su - postgres
root# chown postgres /usr/local/pgsql/data postgres$ /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
root# su - postgres
postgres$ /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data The "-D" option specifies the location where the data will be stored. You
The "-D" option specifies the location where the data will be can use any path you want, it does not have to be under the installation
stored. You can use any path you want, it does not have to be directory. Just make sure that the server account can write to the
under the installation directory. Just make sure that the server directory (or create it, if it doesn't already exist) before starting
account can write to the directory (or create it, if it doesn't "initdb", as illustrated here.
already exist) before starting "initdb", as illustrated here.
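
    For illustration only (the path is arbitrary), the data area could just as
    well be placed outside the installation tree:

    postgres$ /usr/local/pgsql/bin/initdb -D /home/postgres/data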
 3. The previous step should have told you how to start up the database
    server. Do so now. The command should look something like

    /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data

    This will start the server in the foreground. To put the server in the
    background use something like

    nohup /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data \
        </dev/null >>server.log 2>&1 </dev/null &

    To stop a server running in the background you can type

    kill `cat /usr/local/pgsql/data/postmaster.pid`

    In order to allow TCP/IP connections (rather than only Unix domain socket
    ones) you need to pass the "-i" option to "postmaster".
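
    For example, building on the background-start command above, a server that
    should also accept TCP/IP connections could be started like this:

    nohup /usr/local/pgsql/bin/postmaster -i -D /usr/local/pgsql/data \
        </dev/null >>server.log 2>&1 &

    Keep in mind that remote clients must also be permitted to connect in
    "pg_hba.conf"; see the main documentation on client authentication.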
 4. Create a database:

    createdb testdb

    Then enter

    psql testdb

    to connect to that database. At the prompt you can enter SQL commands and
    start experimenting.
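
    As a quick first check (purely illustrative), you can also run a single
    SQL command noninteractively and verify that the server answers:

    psql -c 'SELECT version();' testdb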
-------------------------------------------------------------------------------

What Now?

 * The PostgreSQL distribution contains a comprehensive documentation set,
   which you should read sometime. After installation, the documentation can
   be accessed by pointing your browser to
   "/usr/local/pgsql/doc/html/index.html", unless you changed the installation
   directories.

   The first few chapters of the main documentation are the Tutorial, which
   should be your first reading if you are completely new to SQL databases. If
   you are familiar with database concepts then you want to proceed with the
   part on server administration, which contains information about how to set
   up the database server, database users, and authentication.

 * Usually, you will want to modify your computer so that it will
   automatically start the database server whenever it boots. Some suggestions
   for this are in the documentation.

 * Run the regression tests against the installed server (using "gmake
   installcheck"). If you didn't run the tests before installation, you should
   definitely do it now. This is also explained in the documentation.

 * By default, PostgreSQL is configured to run on minimal hardware. This
   allows it to start up with almost any hardware configuration. The default
   configuration is, however, not designed for optimum performance. To achieve
   optimum performance, several server parameters must be adjusted, the two
   most common being shared_buffers and sort_mem mentioned in the
   documentation; an illustrative example follows this list. Other parameters
   mentioned in the documentation also affect performance.
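
As an illustration only (suitable values depend on your workload and on how
much memory the machine has), these two settings are adjusted in the
"postgresql.conf" file inside the data directory:

shared_buffers = 4096          # number of shared memory buffers (8 kB each)
sort_mem = 8192                # memory per sort operation, in kB

The server must be restarted for a new shared_buffers value to take effect;
consult the main documentation on run-time configuration before changing these.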
-------------------------------------------------------------------------------

Supported Platforms

PostgreSQL has been verified by the developer community to work on the
platforms listed below. A supported platform generally means that PostgreSQL
builds and installs according to these instructions and that the regression
tests pass.

    Note: If you are having problems with the installation on a supported
    platform, please write to <pgsql-bugs@postgresql.org> or
    <pgsql-ports@postgresql.org>, not to the people listed here.
 ______________________________________________________________________________
|OS__________|Processor|Version|Reported______________________|Remarks________|
|AIX         |RS6000   |7.4    |2003-10-25, Hans-Jürgen       |see also doc/  |
|____________|_________|_______|Schönig_(<hs@cybertec.at>)____|FAQ_AIX________|
|BSD/OS      |x86      |7.4    |2003-10-24, Bruce Momjian     |4.3            |
|____________|_________|_______|(<pgman@candle.pha.pa.us>)____|_______________|
|FreeBSD     |Alpha    |7.4    |2003-10-25, Peter Eisentraut  |4.8            |
|____________|_________|_______|(<peter_e@gmx.net>)___________|_______________|
|FreeBSD     |x86      |7.4    |2003-10-24, Peter Eisentraut  |4.9            |
|____________|_________|_______|(<peter_e@gmx.net>)___________|_______________|
|HP-UX       |PA-RISC  |7.4    |2003-10-31, 10.20, Tom Lane   |gcc and cc; see|
|            |         |       |(<tgl@sss.pgh.pa.us>); 2003-  |also doc/      |
|            |         |       |11-04, 11.00, Peter Eisentraut|FAQ_HPUX       |
|____________|_________|_______|(<peter_e@gmx.net>)___________|_______________|
|IRIX        |MIPS     |7.4    |2003-11-12, Robert E.         |6.5.20, cc only|
|            |         |       |Bruccoleri                    |               |
|____________|_________|_______|(<bruc@stone.congenomics.com>)|_______________|
|Linux       |Alpha    |7.4    |2003-10-25, Noèl Köthe        |2.4            |
|____________|_________|_______|(<noel@debian.org>)___________|_______________|
|Linux       |armv4l   |7.4    |2003-10-25, Noèl Köthe        |2.4            |
|____________|_________|_______|(<noel@debian.org>)___________|_______________|
|Linux       |Itanium  |7.4    |2003-10-25, Noèl Köthe        |2.4            |
|____________|_________|_______|(<noel@debian.org>)___________|_______________|
|Linux       |m68k     |7.4    |2003-10-25, Noèl Köthe        |2.4            |
|____________|_________|_______|(<noel@debian.org>)___________|_______________|
|Linux       |MIPS     |7.4    |2003-10-25, Noèl Köthe        |2.4            |
|____________|_________|_______|(<noel@debian.org>)___________|_______________|
|Linux       |Opteron  |7.4    |2003-11-01, Jani Averbach     |2.6            |
|____________|_________|_______|(<jaa@cc.jyu.fi>)_____________|_______________|
|Linux       |PPC      |7.4    |2003-10-25, Noèl Köthe        |               |
|____________|_________|_______|(<noel@debian.org>)___________|_______________|
|Linux       |S/390    |7.4    |2003-10-25, Noèl Köthe        |2.4            |
|____________|_________|_______|(<noel@debian.org>)___________|_______________|
|Linux       |Sparc    |7.4    |2003-10-24, Peter Eisentraut  |2.4, 32-bit    |
|____________|_________|_______|(<peter_e@gmx.net>)___________|_______________|
|Linux       |x86      |7.4    |2003-10-24, Peter Eisentraut  |2.4            |
|____________|_________|_______|(<peter_e@gmx.net>)___________|_______________|
|MacOS X     |PPC      |7.4    |2003-10-24, 10.2.8, Adam      |               |
|            |         |       |Witney                        |               |
|            |         |       |(<awitney@sghms.ac.uk>), 10.3,|               |
|            |         |       |Marko Karppinen               |               |
|____________|_________|_______|(<marko@karppinen.fi>)________|_______________|
|NetBSD      |arm32    |7.4    |2003-11-12, Patrick Welche    |1.6ZE/acorn32  |
|____________|_________|_______|(<prlw1@newn.cam.ac.uk>)______|_______________|
|NetBSD      |x86      |7.4    |2003-10-24, Peter Eisentraut  |1.6            |
|____________|_________|_______|(<peter_e@gmx.net>)___________|_______________|
|OpenBSD     |Sparc    |7.4    |2003-11-01, Peter Eisentraut  |3.4            |
|____________|_________|_______|(<peter_e@gmx.net>)___________|_______________|
|OpenBSD     |x86      |7.4    |2003-10-24, Peter Eisentraut  |3.2            |
|____________|_________|_______|(<peter_e@gmx.net>)___________|_______________|
|Solaris     |Sparc    |7.4    |2003-10-26, Christopher Browne|2.8; see also  |
|____________|_________|_______|(<cbbrowne@libertyrms.info>)__|doc/FAQ_Solaris|
|Solaris     |x86      |7.4    |2003-10-26, Kurt Roeckx       |2.6; see also  |
|____________|_________|_______|(<Q@ping.be>)_________________|doc/FAQ_Solaris|
|Tru64 UNIX  |Alpha    |7.4    |2003-10-25, 5.1b, Peter       |               |
|            |         |       |Eisentraut                    |               |
|            |         |       |(<peter_e@gmx.net>); 2003-10- |               |
|            |         |       |29, 4.0g, Alessio Bragadini   |               |
|____________|_________|_______|(<alessio@albourne.com>)______|_______________|
|UnixWare    |x86      |7.4    |2003-11-03, Larry Rosenman    |7.1.3; join    |
|            |         |       |(<ler@lerctr.org>)            |test may fail, |
|            |         |       |                              |see also doc/  |
|____________|_________|_______|______________________________|FAQ_SCO________|
|Windows with|x86      |7.4    |2003-10-24, Peter Eisentraut  |see doc/       |
|Cygwin______|_________|_______|(<peter_e@gmx.net>)___________|FAQ_MSWIN______|
|Windows     |x86      |7.4    |2003-10-27, Dave Page         |native is      |
|            |         |       |(<dpage@vale-housing.co.uk>)  |client-side    |
|            |         |       |                              |only, see      |
|____________|_________|_______|______________________________|documentation__|
Unsupported Platforms: The following platforms are either known not to work, or
they used to work in a previous release and we did not receive explicit
confirmation of a successful test with version 7.4 at the time this list was
compiled. We include these here to let you know that these platforms *could* be
supported if given some attention.
________________________________________________________________________________
|OS________|Processor__|Version|Reported_______________________|Remarks__________|
|BeOS |x86 |7.2 |2001-11-29, Cyril Velter |needs updates to |
|__________|___________|_______|(<cyril.velter@libertysurf.fr>)|semaphore_code___|
|Linux |PlayStation|7.4 |2003-11-02, Peter Eisentraut |needs new |
| |2 | |(<peter_e@gmx.net>) |config.guess, -- |
| | | | |disable- |
| | | | |spinlocks, #undef|
| | | | |HAS_TEST_AND_SET,|
| | | | |disable tas_dummy|
|__________|___________|_______|_______________________________|()_______________|
|Linux |PA-RISC |7.4 |2003-10-25, Noèl Köthe |needs --disable- |
| | | |(<noel@debian.org>) |spinlocks, |
|__________|___________|_______|_______________________________|otherwise_OK_____|
|NetBSD |Alpha |7.2 |2001-11-20, Thomas Thai |1.5W |
|__________|___________|_______|(<tom@minnesota.com>)__________|_________________|
|NetBSD |MIPS |7.2.1 |2002-06-13, Warwick Hunter |1.5.3 |
|__________|___________|_______|(<whunter@agile.tv>)___________|_________________|
|NetBSD |PPC |7.2 |2001-11-28, Bill Studenmund |1.5 |
|__________|___________|_______|(<wrstuden@netbsd.org>)________|_________________|
|NetBSD |Sparc |7.2 |2001-12-03, Matthew Green |32- and 64-bit |
|__________|___________|_______|(<mrg@eterna.com.au>)__________|builds___________|
|NetBSD |VAX |7.1 |2001-03-30, Tom I. Helbekkmo |1.5 |
|__________|___________|_______|(<tih@kpnQwest.no>)____________|_________________|
|QNX 4 RTOS|x86 |7.2 |2001-12-10, Bernd Tegge |needs updates to |
| | | |(<tegge@repas-aeg.de>) |semaphore code; |
| | | | |see also doc/ |
|__________|___________|_______|_______________________________|FAQ_QNX4_________|
|QNX RTOS |x86 |7.2 |2001-11-20, Igor Kovalenko |patches available|
|v6 | | |(<Igor.Kovalenko@motorola.com>)|in archives, but |
|__________|___________|_______|_______________________________|too_late_for_7.2_|
|SCO |x86 |7.3.1 |2002-12-11, Shibashish Satpathy|5.0.4, gcc; see |
|OpenServer|___________|_______|(<shib@postmark.net>)__________|also_doc/FAQ_SCO_|
|SunOS 4 |Sparc |7.2 |2001-12-04, Tatsuo Ishii (<t- | |
|__________|___________|_______|ishii@sra.co.jp>)______________|_________________|

Regression Tests

The regression tests are a comprehensive set of tests for the SQL
implementation in PostgreSQL. They test standard SQL operations as well as the
extended capabilities of PostgreSQL. From PostgreSQL 6.1 onward, the regression
tests are current for every official release.

-------------------------------------------------------------------------------
Running the Tests

The regression test can be run against an already installed and running server,
or using a temporary installation within the build tree. Furthermore, there is
a "parallel" and a "sequential" mode for running the tests. The sequential
method runs each test script in turn, whereas the parallel method starts up
multiple server processes to run groups of tests in parallel. Parallel testing
gives confidence that interprocess communication and locking are working
correctly. For historical reasons, the sequential test is usually run against
an existing installation and the parallel method against a temporary
installation, but there are no technical reasons for this.

To run the regression tests after building but before installation, type

gmake check

in the top-level directory. (Or you can change to "src/test/regress" and run
the command there.) This will first build several auxiliary files, such as some
sample user-defined trigger functions, and then run the test driver script. At
the end you should see something like

======================
 All 93 tests passed.
======================

or otherwise a note about which tests failed. See the Section called Test
Evaluation below for more.

Because this test method runs a temporary server, it will not work when you are
the root user (since the server will not start as root). If you already did the
build as root, you do not have to start all over. Instead, make the regression
test directory writable by some other user, log in as that user, and restart
the tests. For example

root# chmod -R a+w src/test/regress
root# chmod -R a+w contrib/spi
root# su - joeuser
joeuser$ cd top-level build directory
joeuser$ gmake check

(The only possible "security risk" here is that other users might be able to
alter the regression test results behind your back. Use common sense when
managing user permissions.)

Alternatively, run the tests after installation.

The parallel regression test starts quite a few processes under your user ID.
Presently, the maximum concurrency is twenty parallel test scripts, which means
sixty processes: there's a server process, a psql, and usually a shell parent
process for the psql for each test script. So if your system enforces a
per-user limit on the number of processes, make sure this limit is at least
seventy-five or so, else you may get random-seeming failures in the parallel
test. If you are not in a position to raise the limit, you can cut down the
degree of parallelism by setting the MAX_CONNECTIONS parameter. For example,

gmake MAX_CONNECTIONS=10 check

runs no more than ten tests concurrently.
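
If you would rather raise the per-user process limit than reduce the
parallelism, many shells let you inspect and adjust it with "ulimit". The exact
option is shell-dependent, so treat this as a sketch (bash syntax):

ulimit -u          # show the current limit on the number of user processes
ulimit -u 200      # raise it for this session, if the hard limit allows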
On some systems, the default Bourne-compatible shell ("/bin/sh") gets confused
when it has to manage too many child processes in parallel. This may cause the
parallel test run to lock up or fail. In such cases, specify a different
Bourne-compatible shell on the command line, for example:

gmake SHELL=/bin/ksh check

If no non-broken shell is available, you may be able to work around the problem
by limiting the number of connections, as shown above.

To run the tests after installation, initialize a data area and start the
server, then type

gmake installcheck

The tests will expect to contact the server at the local host and the default
port number, unless directed otherwise by PGHOST and PGPORT environment
variables.
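
For example, if the server was started on a non-default port (the number shown
here is only illustrative), the environment variable can be set right on the
command line:

PGPORT=5433 gmake installcheck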
-------------------------------------------------------------------------------

Test Evaluation

Some properly installed and fully functional PostgreSQL installations can
"fail" some of these regression tests due to platform-specific artifacts such
as varying floating-point representation and time zone support. The tests are
currently evaluated using a simple "diff" comparison against the outputs
generated on a reference system, so the results are sensitive to small system
differences. When a test is reported as "failed", always examine the
differences between expected and actual results; you may well find that the
differences are not significant. Nonetheless, we still strive to maintain
accurate reference files across all supported platforms, so it can be expected
that all tests pass.

The actual outputs of the regression tests are in files in the
"src/test/regress/results" directory. The test script uses "diff" to compare
each output file against the reference outputs stored in the
"src/test/regress/expected" directory. Any differences are saved for your
inspection in "src/test/regress/regression.diffs". (Or you can run "diff"
yourself, if you prefer.)
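
For instance, to look directly at the differences of a single test (the float8
test is picked arbitrarily here), you could run:

diff src/test/regress/expected/float8.out src/test/regress/results/float8.out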
-------------------------------------------------------------------------------

Error message differences

Some of the regression tests involve intentional invalid input values. Error
messages can come from either the PostgreSQL code or from the host platform
system routines. In the latter case, the messages may vary between platforms,
but should reflect similar information. These differences in messages will
result in a "failed" regression test that can be validated by inspection.

-------------------------------------------------------------------------------

Locale differences

If you run the tests against an already-installed server that was initialized
with a collation-order locale other than C, then there may be differences due
to sort order and follow-up failures. The regression test suite is set up to
handle this problem by providing alternative result files that together are
known to handle a large number of locales. For example, for the "char" test,
the expected file "char.out" handles the C and POSIX locales, and the file
"char_1.out" handles many other locales. The regression test driver will
automatically pick the best file to match against when checking for success
and for computing failure differences. (This means that the regression tests
cannot detect whether the results are appropriate for the configured locale.
The tests will simply pick the one result file that works best.)

If for some reason the existing expected files do not cover some locale, you
can add a new file. The naming scheme is "testname_digit.out". The actual digit
is not significant. Remember that the regression test driver will consider all
such files to be equally valid test results. If the test results are
platform-specific, the technique described in the Section called
Platform-specific comparison files should be used instead.
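
For example, if the supplied files for the "char" test did not match your
locale (a hypothetical case), you could record your verified output as an
additional expected file, where 9 is simply an unused digit:

cp src/test/regress/results/char.out src/test/regress/expected/char_9.out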
-------------------------------------------------------------------------------

Date and time differences

A few of the queries in the "horology" test will fail if you run the test on
the day of a daylight-saving time changeover, or the day after one. These
queries expect that the intervals between midnight yesterday, midnight today
and midnight tomorrow are exactly twenty-four hours --- which is wrong if
daylight-saving time went into or out of effect meanwhile.

    Note: Because USA daylight-saving time rules are used, this problem
    always occurs on the first Sunday of April, the last Sunday of
    October, and their following Mondays, regardless of when daylight-
    saving time is in effect where you live. Also note that the problem
    appears or disappears at midnight Pacific time (UTC-7 or UTC-8), not
    midnight your local time. Thus the failure may appear late on
    Saturday or persist through much of Tuesday, depending on where you
    live.

Most of the date and time results are dependent on the time zone environment.
The reference files are generated for time zone PST8PDT (Berkeley, California),
and there will be apparent failures if the tests are not run with that time
zone setting. The regression test driver sets environment variable PGTZ to
PST8PDT, which normally ensures proper results. However, your operating system
must provide support for the PST8PDT time zone, or the time zone-dependent
tests will fail. To verify that your machine does have this support, type the
following:

env TZ=PST8PDT date

The command above should have returned the current system time in the PST8PDT
time zone. If the PST8PDT time zone is missing, your system may have returned
the time in UTC instead; in that case you can set the time zone rules
explicitly:

PGTZ='PST8PDT7,M04.01.0,M10.05.03'; export PGTZ

There appear to be some systems that do not accept the recommended syntax for
explicitly setting the local time zone rules; you may need to use a different
PGTZ setting on such machines.

Some systems using older time-zone libraries fail to apply daylight-saving
corrections to dates before 1970, causing pre-1970 PDT times to be displayed in
PST instead. This will result in localized differences in the test results.
-------------------------------------------------------------------------------

Floating-point differences

Some of the tests involve computing 64-bit floating-point numbers (double
precision) from table columns. Differences in results involving mathematical
functions of double precision columns have been observed. The float8 and
geometry tests are particularly prone to small differences across platforms,
or even with different compiler optimization options. Human eyeball comparison
is needed to determine the real significance of these differences, which are
usually 10 places to the right of the decimal point.

Some systems display minus zero as -0, while others just show 0.

Some systems signal errors from pow() and exp() differently from the mechanism
expected by the current PostgreSQL code.
-------------------------------------------------------------------------------

Row ordering differences

You might see differences in which the same rows are output in a different
order than what appears in the expected file. In most cases this is not,
strictly speaking, a bug. Most of the regression test scripts are not so
pedantic as to use an ORDER BY for every single SELECT, and so their result row
orderings are not well-defined according to the letter of the SQL
specification. In practice, since we are looking at the same queries being
executed on the same data by the same software, we usually get the same result
ordering on all platforms, and so the lack of ORDER BY isn't a problem. Some
queries do exhibit cross-platform ordering differences, however. (Ordering
differences can also be triggered by non-C locale settings.)

Therefore, if you see an ordering difference, it's not something to worry
about, unless the query does have an ORDER BY that your result is violating.
But please report it anyway, so that we can add an ORDER BY to that particular
query and thereby eliminate the bogus "failure" in future releases.

You might wonder why we don't order all the regression test queries explicitly
to get rid of this issue once and for all. The reason is that that would make
the regression tests less useful, not more, since they'd tend to exercise query
plan types that produce ordered results to the exclusion of those that don't.
-------------------------------------------------------------------------------

The "random" test

There is at least one case in the "random" test script that is intended to
produce random results. This causes random to fail the regression test once in
a while (perhaps once in every five to ten trials). Typing

diff results/random.out expected/random.out

should produce only one or a few lines of differences. You need not worry
unless the random test always fails in repeated attempts. (On the other hand,
if the random test is *never* reported to fail even in many trials of the
regression tests, you probably *should* worry.)
-------------------------------------------------------------------------------
Platform-specific comparison files
Since some of the tests inherently produce platform-specific results, we have
provided a way to supply platform-specific result comparison files. Frequently,
the same variation applies to multiple platforms; rather than supplying a
separate comparison file for every platform, there is a mapping file that
defines which comparison file to use. So, to eliminate bogus test "failures"
for a particular platform, you must choose or make a variant result file, and
then add a line to the mapping file, which is "src/test/regress/resultmap".
Each line in the mapping file is of the form
testname/platformpattern=comparisonfilename
The test name is just the name of the particular regression test module. The
platform pattern is a pattern in the style of the Unix tool "expr" (that is, a
regular expression with an implicit ^ anchor at the start). It is matched
against the platform name as printed by "config.guess" followed by :gcc or :cc,
depending on whether you use the GNU compiler or the system's native compiler
(on systems where there is a difference). The comparison file name is the name
of the substitute result comparison file.
For example: some systems using older time zone libraries fail to apply
daylight-saving corrections to dates before 1970, causing pre-1970 PDT times to
be displayed in PST instead. This causes a few differences in the "horology"
regression test. Therefore, we provide a variant comparison file, "horology-no-
DST-before-1970.out", which includes the results to be expected on these
systems. To silence the bogus "failure" message on HPUX platforms, "resultmap"
includes
horology/.*-hpux=horology-no-DST-before-1970
which will trigger on any machine for which the output of "config.guess"
includes -hpux. Other lines in "resultmap" select the variant comparison file
for other platforms where it's appropriate.
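
As a purely hypothetical further illustration, if some NetBSD machines needed
their own variant of the geometry results, a mapping entry like the following
(the variant file name is invented for this example) could be added:

geometry/.*-netbsd=geometry-positive-zeros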