PostgreSQL Installation Instructions

This document describes the installation of PostgreSQL from the source code
distribution.

-------------------------------------------------------------------------------

Short Version

  ./configure
  gmake
  su
  gmake install
  adduser postgres
  mkdir /usr/local/pgsql/data
  chown postgres /usr/local/pgsql/data
  su - postgres
  /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
  /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data >logfile 2>&1 &
  /usr/local/pgsql/bin/createdb test
  /usr/local/pgsql/bin/psql test

The long version is the rest of this document.
-------------------------------------------------------------------------------

Requirements

In general, a modern Unix-compatible platform should be able to run
PostgreSQL. The platforms that had received specific testing at the time of
release are listed in the section called Supported Platforms below. In the
"doc" subdirectory of the distribution there are several platform-specific
FAQ documents you might wish to consult if you are having trouble.

The following software packages are required for building PostgreSQL:

  * GNU make is required; other make programs will *not* work. GNU make is
    often installed under the name "gmake"; this document will always refer
    to it by that name. (On some systems GNU make is the default tool with
    the name "make".) To test for GNU make enter

      gmake --version

    It is recommended to use version 3.76.1 or later.

  * You need an ISO/ANSI C compiler. Recent versions of GCC are recommended,
    but PostgreSQL is known to build with a wide variety of compilers from
    different vendors.

  * gzip is needed to unpack the distribution in the first place. If you are
    reading this, you probably already got past that hurdle.

  * The GNU Readline library (for comfortable line editing and command
    history retrieval) will be used by default. If you don't want to use it
    then you must specify the "--without-readline" option for "configure".
    (On NetBSD, the "libedit" library is Readline-compatible and is used if
    "libreadline" is not found.)

  * To build on Windows NT or Windows 2000 you need the Cygwin and cygipc
    packages. See the file "doc/FAQ_MSWIN" for details.

The following packages are optional. They are not required in the default
configuration, but they are needed when certain build options are enabled, as
explained below.

  * To build the server programming language PL/Perl you need a full Perl
    installation, including the "libperl" library and the header files.
    Since PL/Perl will be a shared library, the "libperl" library must be a
    shared library also on most platforms. This appears to be the default in
    recent Perl versions, but it was not in earlier versions, and in general
    it is the choice of whoever installed Perl at your site.

    If you don't have the shared library but you need one, a message like
    this will appear during the build to point out this fact:

      *** Cannot build PL/Perl because libperl is not a shared library.
      *** You might have to rebuild your Perl installation. Refer to
      *** the documentation for details.

    (If you don't follow the on-screen output you will merely notice that
    the PL/Perl library object, "plperl.so" or similar, will not be
    installed.) If you see this, you will have to rebuild and install Perl
    manually to be able to build PL/Perl. During the configuration process
    for Perl, request a shared library.
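    Before rebuilding, it can be worth checking how your existing Perl was
    built; asking Perl for its own configuration is one way to do that (a
    sketch; the exact quoting of the output varies between Perl versions):

      perl -V:useshrplib

    A value of 'true' indicates that "libperl" was already built as a
    shared library.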
will be a shared library, the "libpython" library must be a shared
library also on most platforms. This is not the case in a default
* To build the PL/Python server programming language, you need a Python
Python installation.
installation, including the header files. Since PL/Python will be a
If after building and installing you have a file called
shared library, the "libpython" library must be a shared library also on
"plpython.so" (possibly a different extension), then everything
most platforms. This is not the case in a default Python installation.
went well. Otherwise you should have seen a notice like this
If after building and installing you have a file called "plpython.so"
flying by:
(possibly a different extension), then everything went well. Otherwise
*** Cannot build PL/Python because libpython is not a shared library.
you should have seen a notice like this flying by:
*** You might have to rebuild your Python installation. Refer to
*** the documentation for details.
*** Cannot build PL/Python because libpython is not a shared library.
That means you have to rebuild (part of) your Python installation
*** You might have to rebuild your Python installation. Refer to
to supply this shared library.
*** the documentation for details.
The catch is that the Python distribution or the Python
maintainers do not provide any direct way to do this. The closest
That means you have to rebuild (part of) your Python installation to
thing we can offer you is the information in Python FAQ 3.30. On
supply this shared library.
some operating systems you don't really have to build a shared
The catch is that the Python distribution or the Python maintainers do
library, but then you will have to convince the PostgreSQL build
not provide any direct way to do this. The closest thing we can offer you
system of this. Consult the "Makefile" in the "src/pl/plpython"
is the information in Python FAQ 3.30. On some operating systems you
directory for details.
don't really have to build a shared library, but then you will have to
convince the PostgreSQL build system of this. Consult the "Makefile" in
the "src/pl/plpython" directory for details.
  * If you want to build Tcl or Tk components (clients and the PL/Tcl
    language) you of course need a Tcl installation.

  * To build the JDBC driver, you need Ant 1.5 or higher and a JDK. Ant is a
    special tool for building Java-based packages. It can be downloaded from
    the Ant web site.

    If you have several Java compilers installed, it depends on the Ant
    configuration which one gets used. Precompiled Ant distributions are
    typically set up to read a file ".antrc" in the current user's home
    directory for configuration. For example, to use a different JDK than
    the default, this may work:

      JAVA_HOME=/usr/local/sun-jdk1.3
      JAVACMD=$JAVA_HOME/bin/java

    Note: Do not try to build the driver by calling "ant" or even "javac"
    directly. This will not work. Run "gmake" normally as described below.

  * To enable Native Language Support (NLS), that is, the ability to display
    a program's messages in a language other than English, you need an
    implementation of the Gettext API. Some operating systems have this
    built-in (e.g., Linux, NetBSD, Solaris); for other systems you can
    download an add-on package from here:
    http://www.postgresql.org/~petere/gettext.html. If you are using the
    Gettext implementation in the GNU C library then you will additionally
    need the GNU Gettext package for some utility programs. For any of the
    other implementations you will not need it.

  * Kerberos, OpenSSL, or PAM, if you want to support authentication using
    these services.

If you are building from a CVS tree instead of using a released source
package, or if you want to do development, you also need the following
packages:

  * Flex and Bison are needed to build a CVS checkout or if you changed the
    actual scanner and parser definition files. If you need them, be sure to
    get Flex 2.5.4 or later and Bison 1.875 or later. Other yacc programs
    can sometimes be used, but doing so requires extra effort and is not
    recommended. Other lex programs will definitely not work.

If you need to get a GNU package, you can find it at your local GNU mirror
site (see http://www.gnu.org/order/ftp.html for a list) or at
ftp://ftp.gnu.org/gnu/.

Also check that you have sufficient disk space. You will need about 65 MB for
the source tree during compilation and about 15 MB for the installation
directory. An empty database cluster takes about 25 MB; databases take about
five times the amount of space that a flat text file with the same data would
take. If you are going to run the regression tests you will temporarily need
up to an extra 90 MB. Use the "df" command to check for disk space.
-------------------------------------------------------------------------------

If You Are Upgrading

The internal data storage format changes with new releases of PostgreSQL.
Therefore, if you are upgrading an existing installation that does not have a
version number "7.4.x", you must back up and restore your data as shown here.
These instructions assume that your existing installation is under the
"/usr/local/pgsql" directory, and that the data area is in
"/usr/local/pgsql/data". Substitute your paths appropriately.

  1. Make sure that your database is not updated during or after the backup.
     This does not affect the integrity of the backup, but the changed data
     would of course not be included. If necessary, edit the permissions in
     the file "/usr/local/pgsql/data/pg_hba.conf" (or equivalent) to
     disallow access from everyone except you.

  2. To back up your database installation, type:

       pg_dumpall > outputfile

     If you need to preserve OIDs (such as when using them as foreign keys),
     then use the "-o" option when running "pg_dumpall". (A combined example
     of this step and the next one appears after this list.)

     "pg_dumpall" does not save large objects. Check the documentation if
     you need to do this.

     To make the backup, you can use the "pg_dumpall" command from the
     version you are currently running. For best results, however, try to
     use the "pg_dumpall" command from PostgreSQL 7.4, since this version
     contains bug fixes and improvements over older versions. While this
     advice might seem idiosyncratic since you haven't installed the new
     version yet, it is advisable to follow it if you plan to install the
     new version in parallel with the old version. In that case you can
     complete the installation normally and transfer the data later. This
     will also decrease the downtime.

  3. If you are installing the new version at the same location as the old
     one then shut down the old server, at the latest before you install the
     new files:

       kill -INT `cat /usr/local/pgsql/data/postmaster.pid`

     Versions prior to 7.0 do not have this "postmaster.pid" file. If you
     are using such a version you must find out the process ID of the server
     yourself, for example by typing "ps ax | grep postmaster", and supply
     it to the "kill" command.

     On systems that have PostgreSQL started at boot time, there is probably
     a start-up file that will accomplish the same thing. For example, on a
     Red Hat Linux system one might find that

       /etc/rc.d/init.d/postgresql stop

     works. Another possibility is "pg_ctl stop".

  4. If you are installing in the same place as the old version then it is
     also a good idea to move the old installation out of the way, in case
     you have trouble and need to revert to it. Use a command like this:

       mv /usr/local/pgsql /usr/local/pgsql.old
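As an illustration of steps 2 and 3, on a default installation the backup and
shutdown might look like this (a sketch only; the "-o" flag is needed only if
you rely on OIDs, and "pg_ctl" here is the one from the old installation):

  pg_dumpall -o > outputfile
  /usr/local/pgsql/bin/pg_ctl stop -D /usr/local/pgsql/data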
After you have installed PostgreSQL 7.4, create a new database directory and
start the new server. Remember that you must execute these commands while
logged in to the special database user account (which you already have if you
are upgrading).

  /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
  /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data

Finally, restore your data with

  /usr/local/pgsql/bin/psql -d template1 -f outputfile

using the *new* psql.

These topics are discussed at length in the documentation, which you are
encouraged to read in any case.

-------------------------------------------------------------------------------

Installation Procedure
  1. Configuration

     The first step of the installation procedure is to configure the source
     tree for your system and choose the options you would like. This is
     done by running the "configure" script. For a default installation
     simply enter

       ./configure

     This script will run a number of tests to guess values for various
     system dependent variables and detect some quirks of your operating
     system, and finally will create several files in the build tree to
     record what it found. (You can also run "configure" in a directory
     outside the source tree if you want to keep the build directory
     separate.)

     The default configuration will build the server and utilities, as well
     as all client applications and interfaces that require only a C
     compiler. All files will be installed under "/usr/local/pgsql" by
     default.

     You can customize the build and installation process by supplying one
     or more of the following command line options to "configure":

     --prefix=PREFIX

        Install all files under the directory "PREFIX" instead of
        "/usr/local/pgsql". The actual files will be installed into various
        subdirectories; no files will ever be installed directly into the
        "PREFIX" directory.

        If you have special needs, you can also customize the individual
        subdirectories with the following options.
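        For instance, to place the whole installation under a separate tree
        (the path here is only an illustration):

          ./configure --prefix=/opt/postgresql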
     --exec-prefix=EXEC-PREFIX

        You can install architecture-dependent files under a different
        prefix, "EXEC-PREFIX", than what "PREFIX" was set to. This can be
        useful to share architecture-independent files between hosts. If you
        omit this, then "EXEC-PREFIX" is set equal to "PREFIX" and both
        architecture-dependent and independent files will be installed under
        the same tree, which is probably what you want.

     --bindir=DIRECTORY

        Specifies the directory for executable programs. The default is
        "EXEC-PREFIX/bin", which normally means "/usr/local/pgsql/bin".

     --datadir=DIRECTORY

        Sets the directory for read-only data files used by the installed
        programs. The default is "PREFIX/share". Note that this has nothing
        to do with where your database files will be placed.

     --sysconfdir=DIRECTORY

        The directory for various configuration files, "PREFIX/etc" by
        default.

     --libdir=DIRECTORY

        The location to install libraries and dynamically loadable modules.
        The default is "EXEC-PREFIX/lib".

     --includedir=DIRECTORY

        The directory for installing C and C++ header files. The default is
        "PREFIX/include".

     --docdir=DIRECTORY

        Documentation files, except "man" pages, will be installed into this
        directory. The default is "PREFIX/doc".

     --mandir=DIRECTORY

        The man pages that come with PostgreSQL will be installed under this
        directory, in their respective "manx" subdirectories. The default is
        "PREFIX/man".

     Note: Care has been taken to make it possible to install PostgreSQL
     into shared installation locations (such as "/usr/local/include")
     without interfering with the namespace of the rest of the system.
     First, the string "/postgresql" is automatically appended to datadir,
     sysconfdir, and docdir, unless the fully expanded directory name
     already contains the string "postgres" or "pgsql". For example, if you
     choose "/usr/local" as prefix, the documentation will be installed in
     "/usr/local/doc/postgresql", but if the prefix is "/opt/postgres", then
     it will be in "/opt/postgres/doc". The public C header files of the
     client interfaces are installed into includedir and are
     namespace-clean. The internal header files and the server header files
     are installed into private directories under includedir. See the
     documentation of each interface for information about how to get at its
     header files. Finally, a private subdirectory will also be created, if
     appropriate, under libdir for dynamically loadable modules.

     --with-includes=DIRECTORIES

        "DIRECTORIES" is a colon-separated list of directories that will be
        added to the list the compiler searches for header files. If you
        have optional packages (such as GNU Readline) installed in a
        non-standard location, you have to use this option and probably also
        the corresponding "--with-libraries" option.

        Example: --with-includes=/opt/gnu/include:/usr/sup/include.

     --with-libraries=DIRECTORIES

        "DIRECTORIES" is a colon-separated list of directories to search for
        libraries. You will probably have to use this option (and the
        corresponding "--with-includes" option) if you have packages
        installed in non-standard locations.

        Example: --with-libraries=/opt/gnu/lib:/usr/sup/lib.

     --enable-nls[=LANGUAGES]

        Enables Native Language Support (NLS), that is, the ability to
        display a program's messages in a language other than English.
        "LANGUAGES" is a space separated list of codes of the languages that
        you want supported, for example --enable-nls='de fr'. (The
        intersection between your list and the set of actually provided
        translations will be computed automatically.) If you do not specify
        a list, then all available translations are installed.

        To use this option, you will need an implementation of the Gettext
        API; see above.

     --with-pgport=NUMBER

        Set "NUMBER" as the default port number for server and clients. The
        default is 5432. The port can always be changed later on, but if you
        specify it here then both server and clients will have the same
        default compiled in, which can be very convenient. Usually the only
        good reason to select a non-default value is if you intend to run
        multiple PostgreSQL servers on the same machine.
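        For example, a second server built to listen on port 5433 by default
        (the port number is only an illustration) could be configured with:

          ./configure --with-pgport=5433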
     --with-perl

        Build the PL/Perl server-side language.

     --with-python

        Build the PL/Python server-side language.

     --with-tcl

        Build components that require Tcl/Tk, which are libpgtcl, pgtclsh,
        pgtksh, and PL/Tcl. But see below about "--without-tk".

     --without-tk

        If you specify "--with-tcl" and this option, then the program that
        requires Tk (pgtksh) will be excluded.

     --with-tclconfig=DIRECTORY, --with-tkconfig=DIRECTORY

        Tcl/Tk installs the files "tclConfig.sh" and "tkConfig.sh", which
        contain configuration information needed to build modules
        interfacing to Tcl or Tk. These files are normally found
        automatically at their well-known locations, but if you want to use
        a different version of Tcl or Tk you can specify the directory in
        which to find them.

     --with-java

        Build the JDBC driver and associated Java packages.

     --with-krb4[=DIRECTORY], --with-krb5[=DIRECTORY]

        Build with support for Kerberos authentication. You can use either
        Kerberos version 4 or 5, but not both. The "DIRECTORY" argument
        specifies the root directory of the Kerberos installation;
        "/usr/athena" is assumed as default. If the relevant header files
        and libraries are not under a common parent directory, then you must
        use the "--with-includes" and "--with-libraries" options in addition
        to this option. If, on the other hand, the required files are in a
        location that is searched by default (e.g., "/usr/lib"), then you
        can leave off the argument.

        "configure" will check for the required header files and libraries
        to make sure that your Kerberos installation is sufficient before
        proceeding.

     --with-krb-srvnam=NAME

        The name of the Kerberos service principal. postgres is the default.
        There's probably no reason to change this.

     --with-openssl[=DIRECTORY]

        Build with support for SSL (encrypted) connections. This requires
        the OpenSSL package to be installed. The "DIRECTORY" argument
        specifies the root directory of the OpenSSL installation; the
        default is "/usr/local/ssl".

        "configure" will check for the required header files and libraries
        to make sure that your OpenSSL installation is sufficient before
        proceeding.

     --with-pam

        Build with PAM (Pluggable Authentication Modules) support.

     --without-readline

        Prevents the use of the Readline library. This disables command-line
        editing and history in psql, so it is not recommended.

     --with-rendezvous

        Build with Rendezvous support.

     --disable-spinlocks

        Allow the build to succeed even if PostgreSQL has no CPU spinlock
        support for the platform. The lack of spinlock support will result
        in poor performance; therefore, this option should only be used if
        the build aborts and informs you that the platform lacks spinlock
        support.

     --enable-thread-safety

        Make the client libraries thread-safe. This allows concurrent
        threads in libpq and ECPG programs to safely control their private
        connection handles.

     --without-zlib

        Prevents the use of the Zlib library. This disables compression
        support in pg_dump. This option is only intended for those rare
        systems where this library is not available.

     --enable-debug

        Compiles all programs and libraries with debugging symbols. This
        means that you can run the programs through a debugger to analyze
        problems. This enlarges the size of the installed executables
        considerably, and on non-GCC compilers it usually also disables
        compiler optimization, causing slowdowns. However, having the
        symbols available is extremely helpful for dealing with any problems
        that may arise. Currently, this option is recommended for production
        installations only if you use GCC. But you should always have it on
        if you are doing development work or running a beta version.

     --enable-cassert

        Enables assertion checks in the server, which test for many "can't
        happen" conditions. This is invaluable for code development
        purposes, but the tests slow things down a little. Also, having the
        tests turned on won't necessarily enhance the stability of your
        server! The assertion checks are not categorized for severity, and
        so what might be a relatively harmless bug will still lead to server
        restarts if it triggers an assertion failure. Currently, this option
        is not recommended for production use, but you should have it on for
        development work or when running a beta version.

     --enable-depend

        Enables automatic dependency tracking. With this option, the
        makefiles are set up so that all affected object files will be
        rebuilt when any header file is changed. This is useful if you are
        doing development work, but is just wasted overhead if you intend
        only to compile once and install. At present, this option will work
        only if you use GCC.
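     A development build might combine the three preceding options; this is
     just an illustration, not a recommendation for production use:

       ./configure --enable-debug --enable-cassert --enable-depend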
     If you prefer a C compiler different from the one "configure" picks
     then you can set the environment variable CC to the program of your
     choice. By default, "configure" will pick "gcc" unless this is
     inappropriate for the platform. Similarly, you can override the default
     compiler flags with the CFLAGS variable.

     You can specify environment variables on the "configure" command line,
     for example:

       ./configure CC=/opt/bin/gcc CFLAGS='-O2 -pipe'

  2. Build

     To start the build, type

       gmake

     (Remember to use GNU make.) The build may take anywhere from 5 minutes
     to half an hour depending on your hardware. The last line displayed
     should be

       All of PostgreSQL is successfully made. Ready to install.

  3. Regression Tests

     If you want to test the newly built server before you install it, you
     can run the regression tests at this point. The regression tests are a
     test suite to verify that PostgreSQL runs on your machine in the way
     the developers expected it to. Type

       gmake check

     (This won't work as root; do it as an unprivileged user.) It is
     possible that some tests fail, due to differences in error message
     wording or floating point results. The file "src/test/regress/README"
     and the documentation contain detailed information about interpreting
     the test results. You can repeat this test at any later time by issuing
     the same command.

  4. Installing The Files

     Note: If you are upgrading an existing system and are going to install
     the new files over the old ones, then you should have backed up your
     data and shut down the old server by now, as explained in the section
     called If You Are Upgrading above.

     To install PostgreSQL enter

       gmake install

     This will install files into the directories that were specified in
     step 1. Make sure that you have appropriate permissions to write into
     that area. Normally you need to do this step as root. Alternatively,
     you could create the target directories in advance and arrange for
     appropriate permissions to be granted.

     You can use gmake install-strip instead of gmake install to strip the
     executable files and libraries as they are installed. This will save
     some space. If you built with debugging support, stripping will
     effectively remove the debugging support, so it should only be done if
     debugging is no longer needed. install-strip tries to do a reasonable
     job saving space, but it does not have perfect knowledge of how to
     strip every unneeded byte from an executable file, so if you want to
     save all the disk space you possibly can, you will have to do manual
     work.

     The standard installation provides only the header files needed for
     client application development. If you plan to do any server-side
     program development (such as custom functions or data types written in
     C), then you may want to install the entire PostgreSQL include tree
     into your target include directory. To do that, enter

       gmake install-all-headers

     This adds a megabyte or two to the installation footprint, and is only
     useful if you don't plan to keep the whole source tree around for
     reference. (If you do, you can just use the source's include directory
     when building server-side software.)

     Client-only installation: If you want to install only the client
     applications and interface libraries, then you can use these commands:

       gmake -C src/bin install
       gmake -C src/include install
       gmake -C src/interfaces install
       gmake -C doc install

Uninstallation: To undo the installation use the command "gmake uninstall".
However, this will not remove any created directories.

Cleaning: After the installation you can make room by removing the built
files from the source tree with the command "gmake clean". This will preserve
the files made by the "configure" program, so that you can rebuild everything
with "gmake" later on. To reset the source tree to the state in which it was
distributed, use "gmake distclean". If you are going to build for several
platforms from the same source tree you must do this and re-configure for
each build.

If you perform a build and then discover that your "configure" options were
wrong, or if you change anything that "configure" investigates (for example,
software upgrades), then it's a good idea to do "gmake distclean" before
reconfiguring and rebuilding. Without this, your changes in configuration
choices may not propagate everywhere they need to.
-------------------------------------------------------------------------------

Post-Installation Setup

Tuning

By default, PostgreSQL is configured to run on minimal hardware. This allows
it to start up with almost any hardware configuration. However, the default
configuration is not designed for optimum performance. To achieve optimum
performance, several server variables must be adjusted, the two most common
being shared_buffers and sort_mem, which are described in the documentation.
Other parameters in the documentation also affect performance.
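A minimal sketch of what such an adjustment might look like in
"postgresql.conf"; the values below are purely illustrative, so take the
actual settings from the documentation and your hardware:

  shared_buffers = 1000      # shared memory buffers (each normally 8 kB)
  sort_mem = 8192            # memory per sort operation, in kB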
-------------------------------------------------------------------------------

Shared Libraries

On some systems that have shared libraries (which most systems do) you need
to tell your system how to find the newly installed shared libraries. The
systems on which this is *not* necessary include BSD/OS, FreeBSD, HP-UX,
IRIX, Linux, NetBSD, OpenBSD, Tru64 UNIX (formerly Digital UNIX), and
Solaris.

The method to set the shared library search path varies between platforms,
but the most widely usable method is to set the environment variable
LD_LIBRARY_PATH like so: In Bourne shells ("sh", "ksh", "bash", "zsh")

  LD_LIBRARY_PATH=/usr/local/pgsql/lib
  export LD_LIBRARY_PATH

or in "csh" or "tcsh"

  setenv LD_LIBRARY_PATH /usr/local/pgsql/lib

Replace /usr/local/pgsql/lib with whatever you set "--libdir" to in step 1.
You should put these commands into a shell start-up file such as
"/etc/profile" or "~/.bash_profile". Some good information about the caveats
associated with this method can be found at
http://www.visi.com/~barr/ldpath.html.

On some systems it might be preferable to set the environment variable
LD_RUN_PATH *before* building.

On Cygwin, put the library directory in the PATH or move the ".dll" files
into the "bin" directory.

If in doubt, refer to the manual pages of your system (perhaps "ld.so" or
"rld"). If you later on get a message like

  psql: error in loading shared libraries
  libpq.so.2.1: cannot open shared object file: No such file or directory

then this step was necessary. Simply take care of it then.

If you are on BSD/OS, Linux, or SunOS 4 and you have root access you can run

  /sbin/ldconfig /usr/local/pgsql/lib

(or equivalent directory) after installation to enable the run-time linker to
find the shared libraries faster. Refer to the manual page of "ldconfig" for
more information. On FreeBSD, NetBSD, and OpenBSD the command is

  /sbin/ldconfig -m /usr/local/pgsql/lib

instead. Other systems are not known to have an equivalent command.
-------------------------------------------------------------------------------

Environment Variables

If you installed into "/usr/local/pgsql" or some other location that is not
searched for programs by default, you should add "/usr/local/pgsql/bin" (or
whatever you set "--bindir" to in step 1) into your PATH. Strictly speaking,
this is not necessary, but it will make the use of PostgreSQL much more
convenient.

To do this, add the following to your shell start-up file, such as
"~/.bash_profile" (or "/etc/profile", if you want it to affect every user):

    PATH=/usr/local/pgsql/bin:$PATH
    export PATH

If you are using "csh" or "tcsh", then use this command:

    set path = ( /usr/local/pgsql/bin $path )

To enable your system to find the man documentation, you need to add lines
like the following to a shell start-up file unless you installed into a
location that is searched by default.

    MANPATH=/usr/local/pgsql/man:$MANPATH
    export MANPATH

The environment variables PGHOST and PGPORT specify to client applications
the host and port of the database server, overriding the compiled-in
defaults. If you are going to run client applications remotely then it is
convenient if every user that plans to use the database sets PGHOST. This is
not required, however: the settings can be communicated via command line
options to most client programs.
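For example, a user who always works with a database on another machine
might add something like the following to a shell start-up file (the host
name here is only a placeholder; 5432 is the compiled-in default port):

    PGHOST=db.example.com
    PGPORT=5432
    export PGHOST PGPORT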
-------------------------------------------------------------------------------

Getting Started

The following is a quick summary of how to get PostgreSQL up and running
once installed. The main documentation contains more information.

1. Create a user account for the PostgreSQL server. This is the user the
   server will run as. For production use you should create a separate,
   unprivileged account ("postgres" is commonly used). If you do not have
   root access or just want to play around, your own user account is
   enough, but running the server as root is a security risk and will not
   work.

       adduser postgres

2. Create a database installation with the "initdb" command. To run
   "initdb" you must be logged in to your PostgreSQL server account. It
   will not work as root.

       root# mkdir /usr/local/pgsql/data
       root# chown postgres /usr/local/pgsql/data
       root# su - postgres
       postgres$ /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data

   The "-D" option specifies the location where the data will be stored.
   You can use any path you want, it does not have to be under the
   installation directory. Just make sure that the server account can
   write to the directory (or create it, if it doesn't already exist)
   before starting "initdb", as illustrated here.

3. The previous step should have told you how to start up the database
   server. Do so now. The command should look something like

       /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data

   This will start the server in the foreground. To put the server in the
   background use something like

       nohup /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data \
           </dev/null >>server.log 2>&1 &

   To stop a server running in the background you can type

       kill `cat /usr/local/pgsql/data/postmaster.pid`

   In order to allow TCP/IP connections (rather than only Unix domain
   socket ones) you need to pass the "-i" option to "postmaster".
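   For example, to have the server accept TCP/IP connections you could
   start it as follows and then connect from another machine (a sketch
   only; remote clients must also be permitted in "pg_hba.conf", and the
   host name below is just a placeholder):

       # on the server machine
       nohup /usr/local/pgsql/bin/postmaster -i -D /usr/local/pgsql/data \
           </dev/null >>server.log 2>&1 &

       # from a client machine
       psql -h myserver.example.com testdb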
4. Create a database:

       createdb testdb

   Then enter

       psql testdb

   to connect to that database. At the prompt you can enter SQL commands
   and start experimenting.
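   A first session might look something like this (the table is just an
   example; psql shows the database name in its prompt):

       psql testdb
       testdb=> SELECT version();
       testdb=> CREATE TABLE mytable (greeting text);
       testdb=> INSERT INTO mytable VALUES ('Hello, world');
       testdb=> SELECT * FROM mytable;
       testdb=> \q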
-------------------------------------------------------------------------------

What Now?

   * The PostgreSQL distribution contains a comprehensive documentation
     set, which you should read sometime. After installation, the
     documentation can be accessed by pointing your browser to
     "/usr/local/pgsql/doc/html/index.html", unless you changed the
     installation directories.

     The first few chapters of the main documentation are the Tutorial,
     which should be your first reading if you are completely new to SQL
     databases. If you are familiar with database concepts then you want
     to proceed with the part on server administration, which contains
     information about how to set up the database server, database users,
     and authentication.

   * Usually, you will want to modify your computer so that it will
     automatically start the database server whenever it boots. Some
     suggestions for this are in the documentation.

   * Run the regression tests against the installed server (using "gmake
     installcheck"). If you didn't run the tests before installation, you
     should definitely do it now. This is also explained in the
     documentation.

   * By default, PostgreSQL is configured to run on minimal hardware. This
     allows it to start up with almost any hardware configuration. The
     default configuration is, however, not designed for optimum
     performance. To achieve optimum performance, several server
     parameters must be adjusted, the two most common being shared_buffers
     and sort_mem mentioned in the documentation. Other parameters
     mentioned in the documentation also affect performance; an
     illustrative snippet follows this list.
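As an illustration only (suitable values depend entirely on your hardware
and workload, so treat the numbers below as placeholders), such settings are
edited in "postgresql.conf" in the data directory; shared_buffers requires a
server restart, while sort_mem can also be changed per session:

    # excerpt from /usr/local/pgsql/data/postgresql.conf -- example values only
    shared_buffers = 1000        # shared memory buffers, 8 kB each
    sort_mem = 8192              # memory used per sort operation, in kB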
-------------------------------------------------------------------------------

Supported Platforms

PostgreSQL has been verified by the developer community to work on the
platforms listed below. A supported platform generally means that PostgreSQL
builds and installs according to these instructions and that the regression
tests pass.

   Note: If you are having problems with the installation on a supported
   platform, please write to <pgsql-bugs@postgresql.org> or
   <pgsql-ports@postgresql.org>, not to the people listed here.

OS | Processor | Version | Reported | Remarks
AIX | RS6000 | 7.4 | 2003-10-25, Hans-Jürgen Schönig (<hs@cybertec.at>) | see also doc/FAQ_AIX
BSD/OS | x86 | 7.4 | 2003-10-24, Bruce Momjian (<pgman@candle.pha.pa.us>) | 4.3
FreeBSD | Alpha | 7.4 | 2003-10-25, Peter Eisentraut (<peter_e@gmx.net>) | 4.8
FreeBSD | x86 | 7.4 | 2003-10-24, Peter Eisentraut (<peter_e@gmx.net>) | 4.9
HP-UX | PA-RISC | 7.4 | 2003-10-31, 10.20, Tom Lane (<tgl@sss.pgh.pa.us>); 2003-11-04, 11.00, Peter Eisentraut (<peter_e@gmx.net>) | gcc and cc; see also doc/FAQ_HPUX
IRIX | MIPS | 7.4 | 2003-11-12, Robert E. Bruccoleri (<bruc@stone.congenomics.com>) | 6.5.20, cc only
Linux | Alpha | 7.4 | 2003-10-25, Noèl Köthe (<noel@debian.org>) | 2.4
Linux | armv4l | 7.4 | 2003-10-25, Noèl Köthe (<noel@debian.org>) | 2.4
Linux | Itanium | 7.4 | 2003-10-25, Noèl Köthe (<noel@debian.org>) | 2.4
Linux | m68k | 7.4 | 2003-10-25, Noèl Köthe (<noel@debian.org>) | 2.4
Linux | MIPS | 7.4 | 2003-10-25, Noèl Köthe (<noel@debian.org>) | 2.4
Linux | Opteron | 7.4 | 2003-11-01, Jani Averbach (<jaa@cc.jyu.fi>) | 2.6
Linux | PPC | 7.4 | 2003-10-25, Noèl Köthe (<noel@debian.org>) |
Linux | S/390 | 7.4 | 2003-10-25, Noèl Köthe (<noel@debian.org>) | 2.4
Linux | Sparc | 7.4 | 2003-10-24, Peter Eisentraut (<peter_e@gmx.net>) | 2.4, 32-bit
Linux | x86 | 7.4 | 2003-10-24, Peter Eisentraut (<peter_e@gmx.net>) | 2.4
MacOS X | PPC | 7.4 | 2003-10-24, 10.2.8, Adam Witney (<awitney@sghms.ac.uk>), 10.3, Marko Karppinen (<marko@karppinen.fi>) |
NetBSD | arm32 | 7.4 | 2003-11-12, Patrick Welche (<prlw1@newn.cam.ac.uk>) | 1.6ZE/acorn32
NetBSD | x86 | 7.4 | 2003-10-24, Peter Eisentraut (<peter_e@gmx.net>) | 1.6
OpenBSD | Sparc | 7.4 | 2003-11-01, Peter Eisentraut (<peter_e@gmx.net>) | 3.4
OpenBSD | x86 | 7.4 | 2003-10-24, Peter Eisentraut (<peter_e@gmx.net>) | 3.2
Solaris | Sparc | 7.4 | 2003-10-26, Christopher Browne (<cbbrowne@libertyrms.info>) | 2.8; see also doc/FAQ_Solaris
Solaris | x86 | 7.4 | 2003-10-26, Kurt Roeckx (<Q@ping.be>) | 2.6; see also doc/FAQ_Solaris
Tru64 UNIX | Alpha | 7.4 | 2003-10-25, 5.1b, Peter Eisentraut (<peter_e@gmx.net>); 2003-10-29, 4.0g, Alessio Bragadini (<alessio@albourne.com>) |
UnixWare | x86 | 7.4 | 2003-11-03, Larry Rosenman (<ler@lerctr.org>) | 7.1.3; join test may fail, see also doc/FAQ_SCO
Windows with Cygwin | x86 | 7.4 | 2003-10-24, Peter Eisentraut (<peter_e@gmx.net>) | see doc/FAQ_MSWIN
Windows | x86 | 7.4 | 2003-10-27, Dave Page (<dpage@vale-housing.co.uk>) | native is client-side only, see documentation

Unsupported Platforms: The following platforms are either known not to work,
or they used to work in a previous release and we did not receive explicit
confirmation of a successful test with version 7.4 at the time this list was
compiled. We include these here to let you know that these platforms *could*
be supported if given some attention.

OS | Processor | Version | Reported | Remarks
BeOS | x86 | 7.2 | 2001-11-29, Cyril Velter (<cyril.velter@libertysurf.fr>) | needs updates to semaphore code
Linux | PlayStation 2 | 7.4 | 2003-11-02, Peter Eisentraut (<peter_e@gmx.net>) | needs new config.guess, --disable-spinlocks, #undef HAS_TEST_AND_SET, disable tas_dummy()
Linux | PA-RISC | 7.4 | 2003-10-25, Noèl Köthe (<noel@debian.org>) | needs --disable-spinlocks, otherwise OK
NetBSD | Alpha | 7.2 | 2001-11-20, Thomas Thai (<tom@minnesota.com>) | 1.5W
NetBSD | MIPS | 7.2.1 | 2002-06-13, Warwick Hunter (<whunter@agile.tv>) | 1.5.3
NetBSD | PPC | 7.2 | 2001-11-28, Bill Studenmund (<wrstuden@netbsd.org>) | 1.5
NetBSD | Sparc | 7.2 | 2001-12-03, Matthew Green (<mrg@eterna.com.au>) | 32- and 64-bit builds
NetBSD | VAX | 7.1 | 2001-03-30, Tom I. Helbekkmo (<tih@kpnQwest.no>) | 1.5
QNX 4 RTOS | x86 | 7.2 | 2001-12-10, Bernd Tegge (<tegge@repas-aeg.de>) | needs updates to semaphore code; see also doc/FAQ_QNX4
QNX RTOS v6 | x86 | 7.2 | 2001-11-20, Igor Kovalenko (<Igor.Kovalenko@motorola.com>) | patches available in archives, but too late for 7.2
SCO OpenServer | x86 | 7.3.1 | 2002-12-11, Shibashish Satpathy (<shib@postmark.net>) | 5.0.4, gcc; see also doc/FAQ_SCO
SunOS 4 | Sparc | 7.2 | 2001-12-04, Tatsuo Ishii (<t-ishii@sra.co.jp>) |
src/test/regress/README
View file @ b9f5c93b
Regression Tests

Introduction

The regression tests are a comprehensive set of tests for the SQL
implementation in PostgreSQL. They test standard SQL operations as well as
the extended capabilities of PostgreSQL. The test suite was originally
developed by Jolly Chen and Andrew Yu, and was extensively revised and
repackaged by Marc Fournier and Thomas Lockhart. From PostgreSQL 6.1 onward,
the regression tests are current for every official release.

-------------------------------------------------------------------------------

Running the Tests

The regression test can be run against an already installed and running
server, or using a temporary installation within the build tree.
Furthermore, there is a "parallel" and a "sequential" mode for running the
tests. The sequential method runs each test script in turn, whereas the
parallel method starts up multiple server processes to run groups of tests
in parallel. Parallel testing gives confidence that interprocess
communication and locking are working correctly. For historical reasons, the
sequential test is usually run against an existing installation and the
parallel method against a temporary installation, but there are no technical
reasons for this.

To run the regression tests after building but before installation, type

    gmake check

in the top-level directory. (Or you can change to "src/test/regress" and run
the command there.) This will first build several auxiliary files, such as
some sample user-defined trigger functions, and then run the test driver
script. At the end you should see something like

    ======================
     All 93 tests passed.
    ======================

or otherwise a note about which tests failed. See the Section called Test
Evaluation below for more.
Because this test method runs a temporary server, it will not work when you
are the root user (since the server will not start as root). If you already
did the build as root, you do not have to start all over. Instead, make the
regression test directory writable by some other user, log in as that user,
and restart the tests. For example

    root# chmod -R a+w src/test/regress
    root# chmod -R a+w contrib/spi
    root# su - joeuser
    joeuser$ cd top-level build directory
    joeuser$ gmake check

(The only possible "security risk" here is that other users might be able to
alter the regression test results behind your back. Use common sense when
managing user permissions.)

Alternatively, run the tests after installation.

The parallel regression test starts quite a few processes under your user
ID. Presently, the maximum concurrency is twenty parallel test scripts,
which means sixty processes: there's a server process, a psql, and usually a
shell parent process for the psql for each test script. So if your system
enforces a per-user limit on the number of processes, make sure this limit
is at least seventy-five or so, else you may get random-seeming failures in
the parallel test. If you are not in a position to raise the limit, you can
cut down the degree of parallelism by setting the MAX_CONNECTIONS parameter.
For example,

    gmake MAX_CONNECTIONS=10 check

runs no more than ten tests concurrently.

On some systems, the default Bourne-compatible shell ("/bin/sh") gets
confused when it has to manage too many child processes in parallel. This
may cause the parallel test run to lock up or fail. In such cases, specify a
different Bourne-compatible shell on the command line, for example:

    gmake SHELL=/bin/ksh check

If no non-broken shell is available, you may be able to work around the
problem by limiting the number of connections, as shown above.

To run the tests after installation, initialize a data area and start the
server, then type

    gmake installcheck

The tests will expect to contact the server at the local host and the
default port number, unless directed otherwise by the PGHOST and PGPORT
environment variables.
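For example, to point the tests at a server listening on a non-default port
you might run something like this (the port number is only an illustration):

    PGPORT=5433 gmake installcheck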
-------------------------------------------------------------------------------

Test Evaluation

Some properly installed and fully functional PostgreSQL installations can
"fail" some of these regression tests due to platform-specific artifacts
such as varying floating-point representation and time zone support. The
tests are currently evaluated using a simple "diff" comparison against the
outputs generated on a reference system, so the results are sensitive to
small system differences. When a test is reported as "failed", always
examine the differences between expected and actual results; you may well
find that the differences are not significant. Nonetheless, we still strive
to maintain accurate reference files across all supported platforms, so it
can be expected that all tests pass.

The actual outputs of the regression tests are in files in the
"src/test/regress/results" directory. The test script uses "diff" to compare
each output file against the reference outputs stored in the
"src/test/regress/expected" directory. Any differences are saved for your
inspection in "src/test/regress/regression.diffs". (Or you can run "diff"
yourself, if you prefer.)
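For instance, to inspect just one test's differences you could run something
like the following from the top-level directory (the test name is only an
example):

    diff src/test/regress/expected/float8.out src/test/regress/results/float8.out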
-------------------------------------------------------------------------------

Error message differences

Some of the regression tests involve intentional invalid input values. Error
messages can come from either the PostgreSQL code or from the host platform
system routines. In the latter case, the messages may vary between
platforms, but should reflect similar information. These differences in
messages will result in a "failed" regression test that can be validated by
inspection.

-------------------------------------------------------------------------------
Locale differences

If you run the tests against an already-installed server that was
initialized with a collation-order locale other than C, then there may be
differences due to sort order and follow-up failures. The regression test
suite is set up to handle this problem by providing alternative result files
that together are known to handle a large number of locales. For example,
for the "char" test, the expected file "char.out" handles the C and POSIX
locales, and the file "char_1.out" handles many other locales. The
regression test driver will automatically pick the best file to match
against when checking for success and for computing failure differences.
(This means that the regression tests cannot detect whether the results are
appropriate for the configured locale. The tests will simply pick the one
result file that works best.)

If for some reason the existing expected files do not cover some locale, you
can add a new file. The naming scheme is testname_digit.out. The actual
digit is not significant. Remember that the regression test driver will
consider all such files to be equally valid test results. If the test
results are platform-specific, the technique described in the Section called
Platform-specific comparison files should be used instead.
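If you would rather avoid locale-dependent differences altogether and are
free to re-initialize the installation you test against, one option (a
sketch, not a requirement) is to create the database cluster in the C
locale:

    initdb --locale=C -D /usr/local/pgsql/data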
-------------------------------------------------------------------------------

Date and time differences

A few of the queries in the "horology" test will fail if you run the test on
the day of a daylight-saving time changeover, or the day after one. These
queries expect that the intervals between midnight yesterday, midnight today
and midnight tomorrow are exactly twenty-four hours --- which is wrong if
daylight-saving time went into or out of effect meanwhile.

   Note: Because USA daylight-saving time rules are used, this problem
   always occurs on the first Sunday of April, the last Sunday of October,
   and their following Mondays, regardless of when daylight-saving time is
   in effect where you live. Also note that the problem appears or
   disappears at midnight Pacific time (UTC-7 or UTC-8), not midnight your
   local time. Thus the failure may appear late on Saturday or persist
   through much of Tuesday, depending on where you live.

Most of the date and time results are dependent on the time zone
environment. The reference files are generated for time zone PST8PDT
(Berkeley, California), and there will be apparent failures if the tests are
not run with that time zone setting. The regression test driver sets the
environment variable PGTZ to PST8PDT, which normally ensures proper results.
However, your operating system must provide support for the PST8PDT time
zone, or the time zone-dependent tests will fail. To verify that your
machine does have this support, type the following:

    env TZ=PST8PDT date

The command above should have returned the current system time in the
PST8PDT time zone. If the PST8PDT time zone is not available, then your
system may have returned the time in UTC. If the PST8PDT time zone is
missing, you can set the time zone rules explicitly:

    PGTZ='PST8PDT7,M04.01.0,M10.05.03'; export PGTZ

There appear to be some systems that do not accept the recommended syntax
for explicitly setting the local time zone rules; you may need to use a
different PGTZ setting on such machines.

Some systems using older time-zone libraries fail to apply daylight-saving
corrections to dates before 1970, causing pre-1970 PDT times to be displayed
in PST instead. This will result in localized differences in the test
results.
-------------------------------------------------------------------------------

Floating-point differences

Some of the tests involve computing 64-bit floating-point numbers (double
precision) from table columns. Differences in results involving mathematical
functions of double precision columns have been observed. The float8 and
geometry tests are particularly prone to small differences across platforms,
or even with different compiler optimization options. Human eyeball
comparison is needed to determine the real significance of these
differences, which are usually 10 places to the right of the decimal point.

Some systems display minus zero as -0, while others just show 0.

Some systems signal errors from pow() and exp() differently from the
mechanism expected by the current PostgreSQL code.

-------------------------------------------------------------------------------

Polygon differences

Several of the tests involve operations on geographic data about the
Oakland/Berkeley, California street map. The map data is expressed as
polygons whose vertices are represented as pairs of double precision numbers
(decimal latitude and longitude). Initially, some tables are created and
loaded with geographic data, then some views are created that join two
tables using the polygon intersection operator (##), then a select is done
on the view.

When comparing the results from different platforms, differences occur in
the 2nd or 3rd place to the right of the decimal point. The SQL statements
where these problems occur are the following:

    SELECT * from street;
    SELECT * from iexit;

-------------------------------------------------------------------------------
Row ordering differences

You might see differences in which the same rows are output in a different
order than what appears in the expected file. In most cases this is not,
strictly speaking, a bug. Most of the regression test scripts are not so
pedantic as to use an ORDER BY for every single SELECT, and so their result
row orderings are not well-defined according to the letter of the SQL
specification. In practice, since we are looking at the same queries being
executed on the same data by the same software, we usually get the same
result ordering on all platforms, and so the lack of ORDER BY isn't a
problem. Some queries do exhibit cross-platform ordering differences,
however. (Ordering differences can also be triggered by non-C locale
settings.)

Therefore, if you see an ordering difference, it's not something to worry
about, unless the query does have an ORDER BY that your result is violating.
But please report it anyway, so that we can add an ORDER BY to that
particular query and thereby eliminate the bogus "failure" in future
releases.

You might wonder why we don't order all the regression test queries
explicitly to get rid of this issue once and for all. The reason is that
that would make the regression tests less useful, not more, since they'd
tend to exercise query plan types that produce ordered results to the
exclusion of those that don't.

-------------------------------------------------------------------------------
The "random" test
The "random" test
There is at least one case in the
"random"
test script that is intended to
There is at least one case in the
random
test script that is intended to
produce random results. This causes random to fail the regression test once
produce random results. This causes random to fail the regression test once
in
in
a while (perhaps once in every five to ten trials). Typing
a while (perhaps once in every five to ten trials). Typing
diff results/random.out expected/random.out
diff results/random.out expected/random.out
should produce only one or a few lines of differences. You need not worry
should produce only one or a few lines of differences. You need not worry
unless the random test always fails in repeated attempts. (On the other
unless the random test always fails in repeated attempts. (On the other hand,
hand, if the random test is never reported to fail even in many trials of
if the random test is *never* reported to fail even in many trials of the
the regression tests, you probably should worry.)
regression tests, you probably *should* worry.)
-------------------------------------------------------------------------------
Platform-specific comparison files
Since some of the tests inherently produce platform-specific results, we have
provided a way to supply platform-specific result comparison files. Frequently,
the same variation applies to multiple platforms; rather than supplying a
separate comparison file for every platform, there is a mapping file that
defines which comparison file to use. So, to eliminate bogus test "failures"
for a particular platform, you must choose or make a variant result file, and
then add a line to the mapping file, which is "src/test/regress/resultmap".
Each line in the mapping file is of the form
testname/platformpattern=comparisonfilename
The test name is just the name of the particular regression test module. The
platform pattern is a pattern in the style of the Unix tool "expr" (that is, a
regular expression with an implicit ^ anchor at the start). It is matched
against the platform name as printed by "config.guess" followed by :gcc or :cc,
depending on whether you use the GNU compiler or the system's native compiler
(on systems where there is a difference). The comparison file name is the name
of the substitute result comparison file.
For example: some systems using older time zone libraries fail to apply
daylight-saving corrections to dates before 1970, causing pre-1970 PDT times to
be displayed in PST instead. This causes a few differences in the "horology"
regression test. Therefore, we provide a variant comparison file, "horology-no-
DST-before-1970.out", which includes the results to be expected on these
systems. To silence the bogus "failure" message on HPUX platforms, "resultmap"
includes
horology/.*-hpux=horology-no-DST-before-1970
which will trigger on any machine for which the output of "config.guess"
includes -hpux. Other lines in "resultmap" select the variant comparison file
for other platforms where it's appropriate.
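To see which platform name the driver will try to match, you can run the
"config.guess" script that ships with the sources; assuming it lives in the
"config" subdirectory of the source tree (as it does in current sources),
that would be:

    sh config/config.guess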