Commit af7d257e authored by Tom Lane

Remove contrib modules that have been migrated to pgfoundry: adddepend,
dbase, dbmirror, fulltextindex, mac, userlock; or abandoned: mSQL-interface,
tips.
parent a3242fb4
-# $PostgreSQL: pgsql/contrib/Makefile,v 1.67 2006/09/04 15:07:46 petere Exp $
+# $PostgreSQL: pgsql/contrib/Makefile,v 1.68 2006/09/05 17:20:26 tgl Exp $
 subdir = contrib
 top_builddir = ..
@@ -9,11 +9,8 @@ WANTED_DIRS = \
 	btree_gist \
 	chkpass \
 	cube \
-	dbase \
 	dblink \
-	dbmirror \
 	earthdistance \
-	fulltextindex \
 	fuzzystrmatch \
 	intagg \
 	intarray \
@@ -31,9 +28,7 @@ WANTED_DIRS = \
 	seg \
 	spi \
 	tablefunc \
-	tips \
 	tsearch2 \
-	userlock \
 	vacuumlo

 ifeq ($(with_openssl),yes)
@@ -41,9 +36,6 @@ WANTED_DIRS += sslinfo
 endif

 # Missing:
-#	adddepend	\ (does not have a makefile)
-#	mSQL-interface	\ (requires msql installed)
-#	mac		\ (does not have a makefile)
 #	start-scripts	\ (does not have a makefile)
 #	xml2		\ (requires libxml installed)
......
@@ -24,13 +24,9 @@ procedure.
 Index:
 ------
-adddepend -
-	Add object dependency information to pre-7.3 objects.
-	by Rod Taylor <rbt@rbt.ca>
 adminpack -
 	File and log manipulation routines, used by pgAdmin
-	by From: Dave Page <dpage@vale-housing.co.uk>
+	by Dave Page <dpage@vale-housing.co.uk>
 btree_gist -
 	Support for emulating BTREE indexing in GiST
@@ -44,28 +40,14 @@ cube -
 	Multidimensional-cube datatype (GiST indexing example)
 	by Gene Selkov, Jr. <selkovjr@mcs.anl.gov>
-dbase -
-	Converts from dbase/xbase to PostgreSQL
-	by Maarten.Boekhold <Maarten.Boekhold@reuters.com>,
-	Frank Koormann <fkoorman@usf.uni-osnabrueck.de>,
-	Ivan Baldo <lubaldo@adinet.com.uy>
 dblink -
 	Allows remote query execution
 	by Joe Conway <mail@joeconway.com>
-dbmirror -
-	Replication server
-	by Steven Singer <ssinger@navtechinc.com>
 earthdistance -
 	Operator for computing earth distance for two points
 	by Hal Snyder <hal@vailsys.com>
-fulltextindex -
-	Full text indexing using triggers
-	by Maarten Boekhold <maartenb@dutepp0.et.tudelft.nl>
 fuzzystrmatch -
 	Levenshtein, metaphone, and soundex fuzzy string matching
 	by Joe Conway <mail@joeconway.com>, Joel Burton <jburton@scw.org>
@@ -90,14 +72,6 @@ ltree -
 	Tree-like data structures
 	by Teodor Sigaev <teodor@sigaev.ru> and Oleg Bartunov <oleg@sai.msu.su>
-mSQL-interface -
-	mSQL API translation library
-	by Aldrin Leal <aldrin@americasnet.com>
-mac -
-	Support functions for MAC address types
-	by Lawrence E. Rosenman <ler@lerctr.org>
 oid2name -
 	Maps numeric files to table names
 	by B Palmer <bpalmer@crimelabs.net>
@@ -139,6 +113,10 @@ seg -
 spi -
 	Various trigger functions, examples for using SPI.
+sslinfo -
+	Functions to get information about SSL certificates
+	by Victor Wagner <vitus@cryptocom.ru>
 start-scripts -
 	Scripts for starting the server at boot time.
@@ -146,19 +124,11 @@ tablefunc -
 	Examples of functions returning tables
 	by Joe Conway <mail@joeconway.com>
-tips -
-	Getting Apache to log to PostgreSQL
-	by Terry Mackintosh <terry@terrym.com>
 tsearch2 -
 	Full-text-index support using GiST
 	by Teodor Sigaev <teodor@sigaev.ru> and Oleg Bartunov
 	<oleg@sai.msu.su>.
-userlock -
-	User locks
-	by Massimo Dal Zotto <dz@cs.unitn.it>
 vacuumlo -
 	Remove orphaned large objects
 	by Peter T Mount <peter@retep.org.uk>
......
Dependency Additions For PostgreSQL 7.3 Upgrades
In PostgreSQL releases prior to 7.3, certain database objects didn't
have proper dependencies. For example:
1) When you created a table with a SERIAL column, there was no linkage
to its underlying sequence. If you dropped the table with the SERIAL
column, the sequence was not automatically dropped.
2) When you created a foreign key, it created three triggers. If you
wanted to drop the foreign key, you had to drop the three triggers
individually.
3) When you created a column with constraint UNIQUE, a unique index was
created but there was no indication that the index was created as a
UNIQUE column constraint.
Fortunately, PostgreSQL 7.3 and later now track such dependencies
and handles these cases. Unfortunately, PostgreSQL dumps from prior
releases don't contain such dependency information.
This script operates on >= 7.3 databases and adds dependency information
for the objects listed above. It prompts the user on whether to create
a linkage for each object. You can use the -Y option to prevent such
prompting and have it generate all possible linkages.
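For the SERIAL case, the linkage the script adds corresponds to the ownership
dependency that newer servers create automatically.  A minimal sketch of
establishing it by hand, on a server that supports ALTER SEQUENCE ... OWNED BY
(8.2 or later) and using hypothetical table and sequence names, would be:

    -- Hypothetical objects: table "invoice" with SERIAL column "id",
    -- restored from a pre-7.3 dump without its dependency.
    ALTER SEQUENCE invoice_id_seq OWNED BY invoice.id;
    -- After this, dropping the table (or the column) also drops the sequence.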
This program requires the Pg:DBD Perl interface.
Usage:
adddepend [options] [dbname [username]]
Options:
-d <dbname> Specify database name to connect to (default: postgres)
-h <host> Specify database server host (default: localhost)
-p <port> Specify database server port (default: 5432)
-u <username> Specify database username (default: postgres)
--password=<pw> Specify database password (default: blank)
-Y The script normally asks whether the user wishes to apply
the conversion for each item found. This forces YES to all
questions.
Rod Taylor <pg@rbt.ca>
# $PostgreSQL: pgsql/contrib/dbase/Makefile,v 1.8 2005/09/27 17:13:01 tgl Exp $
PROGRAM = dbf2pg
OBJS = dbf.o dbf2pg.o endian.o
PG_CPPFLAGS = -I$(libpq_srcdir)
PG_LIBS = $(libpq_pgport)
# Uncomment this to provide charset translation
#PG_CPPFLAGS += -DHAVE_ICONV_H
# You might need to uncomment this too, if libiconv is a separate
# library on your platform
#PG_LIBS += -liconv
DOCS = README.dbf2pg
MAN = dbf2pg.1 # XXX not implemented
ifdef USE_PGXS
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
else
subdir = contrib/dbase
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif
dbf2sql(1L)
NAME
       dbf2sql - Insert xBase-style .dbf-files into a PostgreSQL-table
SYNOPSIS
       "dbf2pg [options] dbf-file"
       Options:
       [-v[v]] [-f] [-u | -l] [-c | -D] [-d database] [-t table]
       [-h host] [-s oldname=[newname][,oldname=[newname]]] [-b start]
       [-e end] [-W] [-U username] [-B transaction_size]
       [-F charset_from [-T charset_to]]
DESCRIPTION
       This manual page documents the program dbf2pg.  It takes
       an xBase-style .dbf-file, and inserts it into the specified
       database and table.
OPTIONS
       -v     Display some status-messages.
       -vv    Also display progress.
       -f     Convert all field-names from the .dbf-file to
              lowercase.
       -u     Convert the contents of all fields to uppercase.
       -l     Convert the contents of all fields to lowercase.
       -c     Create the table specified with -t.  If this table
              already exists, first DROP it.
       -D     Delete the contents of the table specified with -t.
              Note that this table has to exist.  An error is
              returned if this is not the case.
       -W     Ask for password.
       -d database
              Specify the database to use.  An error is returned
              if this database does not exist.  Default is
              "test".
       -t table
              Specify the table to insert in.  An error is
              returned if this table does not exist.  Default is
              "test".
       -h host
              Specify the host to which to connect.  Default is
              "localhost".
       -s oldname=[newname][,oldname=[newname]]
              Change the name of a field from oldname to newname.
              This is mainly used to avoid using reserved SQL
              keywords.  When the new fieldname is empty, the field
              is skipped in both the CREATE-clause and the
              INSERT-clauses; in other words, it will not be present
              in the SQL-table.
              Example:
              -s SELECT=SEL,remark=,COMMIT=doit
              This is done before the -f operator has taken
              effect!
       -b start
              Specify the first record-number in the xBase-file
              we will insert.
       -e end Specify the last record-number in the xBase-file we
              will insert.
       -B transaction_size
              Specify the number of records per transaction,
              default is all records.
       -U username
              Log in to the database as the specified user.
       -F charset_from
              If specified, it converts the data from the specified
              charset.  Example:
              -F IBM437
              Consult your system documentation to see the
              conversions available.  This requires iconv to be
              enabled in the compile.
       -T charset_to
              Together with -F charset_from, it converts the
              data to the specified charset.  Default is
              "ISO-8859-1".  This requires iconv to be enabled
              in the compile.
ENVIRONMENT
       This program is affected by the environment-variables as
       used by PostgreSQL.  See the PostgreSQL documentation for
       more info.  This program can optionally use iconv
       character set conversion routines.
BUGS
       Fields larger than 8192 characters are not supported and
       could break the program.
       Some charset conversions could cause the output to be
       larger than the input and could break the program.
/* $PostgreSQL: pgsql/contrib/dbase/dbf.h,v 1.9 2006/03/11 04:38:28 momjian Exp $ */
/* header-file for dbf.c
declares routines for reading and writing xBase-files (.dbf), and
associated structures
Maarten Boekhold (maarten.boekhold@reuters.com) 29 oktober 1995
*/
#ifndef _DBF_H
#define _DBF_H
#ifdef _WIN32
#include <gmon.h> /* we need it to define u_char type */
#endif
#include <sys/types.h>
/**********************************************************************
The DBF-part
***********************************************************************/
#define DBF_FILE_MODE 0644
/* byte offsets for date in dbh_date */
#define DBH_DATE_YEAR 0
#define DBH_DATE_MONTH 1
#define DBH_DATE_DAY 2
/* maximum fieldname-length */
#define DBF_NAMELEN 11
/* magic-cookies for the file */
#define DBH_NORMAL 0x03
#define DBH_MEMO 0x83
/* magic-cookies for the fields */
#define DBF_ERROR -1
#define DBF_VALID 0x20
#define DBF_DELETED 0x2A
/* diskheader */
typedef struct
{
u_char dbh_dbt; /* identification field */
u_char dbh_year; /* last modification-date */
u_char dbh_month;
u_char dbh_day;
u_char dbh_records[4]; /* number of records */
u_char dbh_hlen[2]; /* length of this header */
u_char dbh_rlen[2]; /* length of a record */
u_char dbh_stub[20]; /* misc stuff we don't need */
} dbf_header;
/* disk field-description */
typedef struct
{
char dbf_name[DBF_NAMELEN]; /* field-name terminated with \0 */
u_char dbf_type; /* field-type */
u_char dbf_reserved[4]; /* some reserved stuff */
u_char dbf_flen; /* field-length */
u_char dbf_dec; /* number of decimal positions if type is 'N' */
u_char dbf_stub[14]; /* stuff we don't need */
} dbf_field;
/* memory field-description */
typedef struct
{
char db_name[DBF_NAMELEN]; /* field-name terminated with \0 */
u_char db_type; /* field-type */
u_char db_flen; /* field-length */
u_char db_dec; /* number of decimal positions */
} f_descr;
/* memory dbf-header */
typedef struct
{
int db_fd; /* file-descriptor */
u_long db_offset; /* current offset in file */
u_char db_memo; /* memo-file present */
u_char db_year; /* last update as YYMMDD */
u_char db_month;
u_char db_day;
u_long db_hlen; /* length of the diskheader, for calculating
* the offsets */
u_long db_records; /* number of records */
u_long db_currec; /* current record-number starting at 0 */
u_short db_rlen; /* length of the record */
u_char db_nfields; /* number of fields */
u_char *db_buff; /* record-buffer to save malloc()'s */
f_descr *db_fields; /* pointer to an array of field- descriptions */
} dbhead;
/* structure that contains everything a user wants from a field, including
the contents (in ASCII). Warning! db_flen may be bigger than the actual
length of db_name! This is because a field doesn't have to be completely
filled */
typedef struct
{
char db_name[DBF_NAMELEN]; /* field-name terminated with \0 */
u_char db_type; /* field-type */
u_char db_flen; /* field-length */
u_char db_dec; /* number of decimal positions */
u_char *db_contents; /* contents of the field in ASCII */
} field;
/* prototypes for functions */
extern dbhead *dbf_open(char *file, int flags);
extern int dbf_write_head(dbhead * dbh);
extern int dbf_put_fields(dbhead * dbh);
extern int dbf_add_field(dbhead * dbh, char *name, u_char type,
u_char length, u_char dec);
extern dbhead *dbf_open_new(char *name, int flags);
extern void dbf_close(dbhead * dbh);
extern int dbf_get_record(dbhead * dbh, field * fields, u_long rec);
extern field *dbf_build_record(dbhead * dbh);
extern void dbf_free_record(dbhead * dbh, field * fields);
extern int dbf_put_record(dbhead * dbh, field * rec, u_long where);
/*********************************************************************
The endian-part
***********************************************************************/
extern long get_long(u_char *cp);
extern void put_long(u_char *cp, long lval);
extern short get_short(u_char *cp);
extern void put_short(u_char *cp, short lval);
#endif /* _DBF_H */
.\" $PostgreSQL: pgsql/contrib/dbase/dbf2pg.1,v 1.3 2006/03/11 04:38:28 momjian Exp $
.TH dbf2sql 1L \" -*- nroff -*-
.SH NAME
dbf2sql \- Insert xBase\-style .dbf\-files into a PostgreSQL\-table
.SH SYNOPSIS
.B dbf2pg [options] dbf-file
.br
.br
Options:
.br
[-v[v]] [-f] [-u | -l] [-c | -D] [-d database] [-t table]
[-h host] [-s oldname=[newname][,oldname=[newname]]]
[-b start] [-e end] [-W] [-U username] [-B transaction_size]
[-F charset_from [-T charset_to]]
.SH DESCRIPTION
This manual page documents the program
.BR dbf2pg.
It takes an xBase-style .dbf-file, and inserts it into the specified
database and table.
.SS OPTIONS
.TP
.I "\-v"
Display some status-messages.
.TP
.I "-vv"
Also display progress.
.TP
.I "-f"
Convert all field-names from the .dbf-file to lowercase.
.TP
.I "-u"
Convert the contents of all fields to uppercase.
.TP
.I "-l"
Convert the contents of all fields to lowercase.
.TP
.I "-c"
Create the table specified with
.IR \-t .
If this table already exists, first
.BR DROP
it.
.TP
.I "-D"
Delete the contents of the table specified with
.IR \-t .
Note that this table has to exist. An error is returned if this is not the
case.
.TP
.I "-W"
Ask for password.
.TP
.I "-d database"
Specify the database to use. An error is returned if this database does not
exist. Default is "test".
.TP
.I "-t table"
Specify the table to insert in. An error is returned if this table does not
exist. Default is "test".
.TP
.I "-h host"
Specify the host to which to connect. Default is "localhost".
.TP
.I "-s oldname=newname[,oldname=newname]"
Change the name of a field from
.BR oldname
to
.BR newname .
This is mainly used to avoid using reserved SQL-keywords. Example:
.br
.br
-s SELECT=SEL,COMMIT=doit
.br
.br
This is done
.BR before
the
.IR -f
operator has taken effect!
.TP
.I "-b start"
Specify the first record-number in the xBase-file we will insert.
.TP
.I "-e end"
Specify the last record-number in the xBase-file we will insert.
.TP
.I "-B transaction_size"
Specify the number of records per transaction, default is all records.
.TP
.I "-U username"
Log as the specified user in the database.
.TP
.I "-F charset_from"
If specified, it converts the data from the specified charset. Example:
.br
.br
-F IBM437
.br
.br
Consult your system documentation to see the conversions available.
.TP
.I "-T charset_to"
Together with
.I "-F charset_from"
, it converts the data to the specified charset. Default is "ISO-8859-1".
.SH ENVIRONMENT
This program is affected by the environment-variables as used
by
.B PostgreSQL.
See the documentation of PostgreSQL for more info.
.SH BUGS
Fields larger than 8192 characters are not supported and could break the
program.
.br
Some charset conversions could cause the output to be larger than the input
and could break the program.
/* $PostgreSQL: pgsql/contrib/dbase/endian.c,v 1.4 2006/03/11 04:38:28 momjian Exp $ */
/* Maarten Boekhold (maarten.boekhold@reuters.com) oktober 1995 */
#include <sys/types.h>
#include "dbf.h"
/*
* routine to change little endian long to host long
*/
long
get_long(u_char *cp)
{
long ret;
ret = *cp++;
ret += ((*cp++) << 8);
ret += ((*cp++) << 16);
ret += ((*cp++) << 24);
return ret;
}
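/*
 * routine to change host long to little endian
 */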
void
put_long(u_char *cp, long lval)
{
cp[0] = lval & 0xff;
cp[1] = (lval >> 8) & 0xff;
cp[2] = (lval >> 16) & 0xff;
cp[3] = (lval >> 24) & 0xff;
}
/*
* routine to change little endian short to host short
*/
short
get_short(u_char *cp)
{
short ret;
ret = *cp++;
ret += ((*cp++) << 8);
return ret;
}
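/*
 * routine to change host short to little endian
 */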
void
put_short(u_char *cp, short sval)
{
cp[0] = sval & 0xff;
cp[1] = (sval >> 8) & 0xff;
}
-- Adjust this setting to control where the objects get created.
SET search_path = public;
CREATE TRIGGER "MyTableName_Trig"
AFTER INSERT OR DELETE OR UPDATE ON "MyTableName"
FOR EACH ROW EXECUTE PROCEDURE "recordchange" ();
# $PostgreSQL: pgsql/contrib/dbmirror/Makefile,v 1.5 2005/09/27 17:13:01 tgl Exp $
MODULES = pending
SCRIPTS = clean_pending.pl DBMirror.pl
DATA = AddTrigger.sql MirrorSetup.sql slaveDatabase.conf
DOCS = README.dbmirror
ifdef USE_PGXS
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
else
subdir = contrib/dbmirror
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif
BEGIN;
CREATE FUNCTION "recordchange" () RETURNS trigger
AS '$libdir/pending', 'recordchange'
LANGUAGE C;
CREATE TABLE dbmirror_MirrorHost (
MirrorHostId serial PRIMARY KEY,
SlaveName varchar NOT NULL
);
CREATE TABLE dbmirror_Pending (
SeqId serial PRIMARY KEY,
TableName name NOT NULL,
Op character,
XID integer NOT NULL
);
CREATE INDEX dbmirror_Pending_XID_Index ON dbmirror_Pending (XID);
CREATE TABLE dbmirror_PendingData (
SeqId integer NOT NULL,
IsKey boolean NOT NULL,
Data varchar,
PRIMARY KEY (SeqId, IsKey) ,
FOREIGN KEY (SeqId) REFERENCES dbmirror_Pending (SeqId) ON UPDATE CASCADE ON DELETE CASCADE
);
CREATE TABLE dbmirror_MirroredTransaction (
XID integer NOT NULL,
LastSeqId integer NOT NULL,
MirrorHostId integer NOT NULL,
PRIMARY KEY (XID, MirrorHostId),
FOREIGN KEY (MirrorHostId) REFERENCES dbmirror_MirrorHost (MirrorHostId) ON UPDATE CASCADE ON DELETE CASCADE,
FOREIGN KEY (LastSeqId) REFERENCES dbmirror_Pending (SeqId) ON UPDATE CASCADE ON DELETE CASCADE
);
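-- Note (added for clarity): the statements below rename the stock nextval/setval
-- functions to nextval_pg/setval_pg and install wrappers from the pending module
-- in their place, presumably so that sequence updates are also captured in the
-- queue tables and mirrored to the slaves.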
UPDATE pg_proc SET proname='nextval_pg' WHERE proname='nextval';
CREATE FUNCTION pg_catalog.nextval(regclass) RETURNS bigint
AS '$libdir/pending', 'nextval_mirror'
LANGUAGE C STRICT;
UPDATE pg_proc set proname='setval_pg' WHERE proname='setval';
CREATE FUNCTION pg_catalog.setval(regclass, bigint, boolean) RETURNS bigint
AS '$libdir/pending', 'setval3_mirror'
LANGUAGE C STRICT;
CREATE FUNCTION pg_catalog.setval(regclass, bigint) RETURNS bigint
AS '$libdir/pending', 'setval_mirror'
LANGUAGE C STRICT;
COMMIT;
DBMirror - PostgreSQL Database Mirroring
===================================================
DBMirror is a database mirroring system developed for the PostgreSQL database.
Written and maintained by Steven Singer (ssinger@navtechinc.com)
(c) 2001-2004 Navtech Systems Support Inc.
ALL RIGHTS RESERVED
Permission to use, copy, modify, and distribute this software and its
documentation for any purpose, without fee, and without a written agreement
is hereby granted, provided that the above copyright notice and this
paragraph and the following two paragraphs appear in all copies.
IN NO EVENT SHALL THE AUTHOR OR DISTRIBUTORS BE LIABLE TO ANY PARTY FOR
DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
DOCUMENTATION, EVEN IF THE AUTHOR OR DISTRIBUTORS HAVE BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
THE AUTHOR AND DISTRIBUTORS SPECIFICALLY DISCLAIMS ANY WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS
ON AN "AS IS" BASIS, AND THE AUTHOR AND DISTRIBUTORS HAS NO OBLIGATIONS TO
PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
Overview
--------------------------------------------------------------------
The mirroring system is trigger based and provides the following key features:
-Support for multiple mirror slaves
-Transactions are maintained
-Per table selection of what gets mirrored.
The system is based on the idea that a master database exists where all
edits are made to the tables being mirrored.  A trigger attached to the
tables being mirrored runs on each edit, logging information about the
change to the Pending table and PendingData table.
A Perl script (DBMirror.pl) runs continuously for each slave database (a
database that the changes are supposed to be mirrored to), examining the
Pending table for transactions that need to be sent to that particular
slave database.  Those transactions are then mirrored to the slave database,
and the MirroredTransaction table is updated to reflect that the
transaction has been sent.
If a transaction has been sent to all known slave hosts (all entries
in the MirrorHost table) then all records of it are purged from the
Pending tables.
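As a rough illustration only, using the table definitions from MirrorSetup.sql,
the change queue that the trigger fills can be inspected with a query such as:

    SELECT p.SeqId, p.TableName, p.Op, p.XID, d.IsKey, d.Data
      FROM dbmirror_Pending p
      JOIN dbmirror_PendingData d ON d.SeqId = p.SeqId
     ORDER BY p.SeqId;
    -- Each dbmirror_Pending row records one table edit and its transaction id;
    -- the matching dbmirror_PendingData rows carry the key and row data for it.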
Requirements:
---------------------------------
-PostgreSQL-8.1 (Older versions are no longer supported)
-Perl 5.6 or 5.8 (Other versions might work)
-PgPerl (http://gborg.postgresql.org/project/pgperl/projdisplay.php)
Upgrading from versions prior to 8.0
---------------------------------------
Users upgrading from a version of dbmirror prior to the one shipped with
PostgreSQL 8.0 will need to perform the following steps:
1. Dump the database, then drop it (dropdb; do not use the -C option)
2. Create database with createdb.
3. Run psql databasename -f MirrorSetup.sql
4. Restore the database(do not use the -C option of pg_dump/pg_restore)
5. Run the SQL commands: DROP TABLE "Pending"; DROP TABLE "PendingData";
DROP TABLE "MirrorHost"; DROP TABLE "MirroredTransaction";
The above steps are needed (a) because the names of the tables used by dbmirror
to store data have changed, and (b) because, in order for sequences to be
mirrored properly, all serial types must be recreated.
Installation Instructions
------------------------------------------------------------------------
1) Compile pending.c
The file pending.c contains the recordchange trigger. This runs every
time a row inside of a table being mirrored changes.
To build the trigger run make on the "Makefile" in the DBMirror directory.
PostgreSQL-8.0 Make Instructions:
If you have already run "configure" in the top (pgsql) directory
then run "make" in the dbmirror directory to compile the trigger.
You should now have a file named pending.so that contains the trigger.
Install this file in your Postgresql lib directory (/usr/local/pgsql/lib)
2) Run MirrorSetup.sql
This file contains SQL commands to setup the Mirroring environment.
This includes
-Telling PostgreSQL about the "recordchange" trigger function.
-Creating the dbmirror_Pending,dbmirror_PendingData,dbmirror_MirrorHost,
dbmirror_MirroredTransaction tables
To execute the script use psql as follows
"psql -f MirrorSetup.sql MyDatabaseName"
where MyDatabaseName is the name of the database you wish to install mirroring
on (your master).
3) Create slaveDatabase.conf files.
Each slave database needs its own configuration file for the
DBMirror.pl script. See slaveDatabase.conf for a sample.
The master settings refer to the master database(The one that is
being mirrored).
The slave settings refer to the database that the data is being
mirrored to.
The slaveName setting in the configuration file must match the slave
name specified in the dbmirror_MirrorHost table.
DBMirror.pl can be run in two modes of operation:
A) It can connect directly to the slave database. To do this specify
a slave database name and optional host and port along with a username
and password. See slaveDatabase.conf for details.
The master user must have sufficient permissions to modify the Pending
tables and to read all of the tables being mirrored.
The slave user must have enough permissions on the slave database to
modify(INSERT,UPDATE,DELETE) any tables on the slave system that are being
mirrored.
B) The SQL statements that should be executed on the slave can be
written to files which can then be executed against the slave database
through psql.  This is suitable for setups where there is no direct
connection between the slave database and the master.  A file is
generated for each transaction in the directory specified by
TransactionFileDirectory.  The file name contains the date/time the
file was created along with the transaction id.
4) Add the trigger to tables.
Execute the SQL code in AddTrigger.sql once for each table that should
be mirrored. Replace MyTableName with the name of the table that should
be mirrored.
NOTE: DBMirror requires that every table being mirrored have a primary key
defined.
5) Create the slave database.
The DBMirror system keeps the contents of mirrored tables identical on the
master and slave databases. When you first install the mirror triggers the
master and slave databases must be the same.
If you are starting with an empty master database then the slave should
be empty as well. Otherwise use pg_dump to ensure that the slave database
tables are initially identical to the master.
6) Add entries in the dbmirror_MirrorHost table.
Each slave database must have an entry in the dbmirror_MirrorHost table.
The name of the host in the dbmirror_MirrorHost table must exactly match the
slaveHost variable for that slave in the configuration file.
For example
INSERT INTO dbmirror_MirrorHost (SlaveName) VALUES ('backup_system');
7) Start DBMirror.pl
DBMirror.pl is the Perl script that handles the mirroring.
It requires the Perl library Pg (see http://gborg.postgresql.org/project/pgperl/projdisplay.php).
It takes its configuration file as an argument (the one from step 3).
One instance of DBMirror.pl runs for each slave machine that is receiving
mirrored data.
Any errors are printed to standard out and emailed to the address specified in
the configuration file.
DBMirror.pl can be run from the master, the slave, or a third machine, as long
as it is able to access both the master and slave databases (this is not
required if SQL files are being generated).
8) Periodically run clean_pending.pl
clean_pending.pl cleans out any entries from the Pending tables that
have already been mirrored to all hosts in the MirrorHost table.
It uses the same configuration file as DBMirror.pl.
Normally DBMirror.pl will clean these tables as it goes, but in some
circumstances this will not happen.  For example, if a transaction has been
mirrored to all slaves except one, and that host is then removed from the
MirrorHost table (it stops being a mirror slave), the transactions that had
already been mirrored to all the other hosts will not be deleted from the
Pending tables by DBMirror.pl, because DBMirror.pl will never run against
those transactions again; they have already been sent to all the remaining
hosts.  clean_pending.pl will remove these transactions.
TODO(Current Limitations)
----------
-Support for selective mirroring based on the content of data.
-Support for BLOB's.
-Support for multi-master mirroring with conflict resolution.
-Better support for dealing with Schema changes.
Significant Changes Since 7.4
----------------
-Support for mirroring SEQUENCE's
-Support for unix domain sockets
-Support for outputting slave SQL statements to a file
-Changed the names of the replication tables; they are now named
 dbmirror_pending etc.
Credits
-----------
Achilleus Mantzios <achill@matrix.gatewaynet.com>
Steven Singer
Navtech Systems Support Inc.
ssinger@navtechinc.com
#!/usr/bin/perl
# clean_pending.pl
# This perl script removes entries from the pending,pendingKeys,
# pendingDeleteData tables that have already been mirrored to all hosts.
#
#
#
# Written by Steven Singer (ssinger@navtechinc.com)
# (c) 2001-2002 Navtech Systems Support Inc.
# Released under the GNU Public License version 2. See COPYING.
#
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
##############################################################################
# $PostgreSQL: pgsql/contrib/dbmirror/clean_pending.pl,v 1.5 2004/09/10 04:31:06 neilc Exp $
##############################################################################
=head1 NAME
clean_pending.pl - A Perl script to remove old entries from the
pending, pendingKeys, and pendingDeleteData tables.
=head1 SYNOPSIS
clean_pending.pl databasename
=head1 DESCRIPTION
This Perl script connects to the database specified as a command line argument
on the local system. It uses a hard-coded username and password.
It then removes any entries from the pending, pendingDeleteData, and
pendingKeys tables that have already been sent to all hosts in mirrorHosts.
=cut
BEGIN {
# add in a global path to files
#Ensure that Pg is in the path.
}
use strict;
use Pg;
if ($#ARGV != 0) {
die "usage: clean_pending.pl configFile\n";
}
if( ! defined do $ARGV[0]) {
die("Invalid Configuration file $ARGV[0]");
}
#connect to the database.
my $connectString = "host=$::masterHost dbname=$::masterDb user=$::masterUser password=$::masterPassword";
my $dbConn = Pg::connectdb($connectString);
unless($dbConn->status == PGRES_CONNECTION_OK) {
printf("Can't connect to database\n");
die;
}
my $result = $dbConn->exec("BEGIN");
unless($result->resultStatus == PGRES_COMMAND_OK) {
die $dbConn->errorMessage;
}
#delete all transactions that have been sent to all mirrorhosts
#or delete everything if no mirror hosts are defined.
# Postgres takes the "SELECT COUNT(*) FROM dbmirror_MirrorHost" and makes it into
# an InitPlan.  EXPLAIN shows this.
my $deletePendingQuery = 'DELETE FROM dbmirror_Pending WHERE (SELECT ';
$deletePendingQuery .= ' COUNT(*) FROM dbmirror_MirroredTransaction WHERE ';
$deletePendingQuery .= ' XID=dbmirror_Pending.XID) = (SELECT COUNT(*) FROM ';
$deletePendingQuery .= ' dbmirror_MirrorHost) OR (SELECT COUNT(*) FROM ';
$deletePendingQuery .= ' dbmirror_MirrorHost) = 0';
my $result = $dbConn->exec($deletePendingQuery);
unless ($result->resultStatus == PGRES_COMMAND_OK ) {
printf($dbConn->errorMessage);
die;
}
$dbConn->exec("COMMIT");
$result = $dbConn->exec('VACUUM dbmirror_Pending');
unless ($result->resultStatus == PGRES_COMMAND_OK) {
printf($dbConn->errorMessage);
}
$result = $dbConn->exec('VACUUM dbmirror_PendingData');
unless($result->resultStatus == PGRES_COMMAND_OK) {
printf($dbConn->errorMessage);
}
$result = $dbConn->exec('VACUUM dbmirror_MirroredTransaction');
unless($result->resultStatus == PGRES_COMMAND_OK) {
printf($dbConn->errorMessage);
}
#########################################################################
# Config file for DBMirror.pl
# This file contains a sample configuration file for DBMirror.pl
# It contains configuration information to mirror data from
# the master database to a single slave system.
#
# $PostgreSQL: pgsql/contrib/dbmirror/slaveDatabase.conf,v 1.3 2004/09/10 04:31:06 neilc Exp $
#######################################################################
$masterHost = "masterMachine.mydomain.com";
$masterDb = "myDatabase";
$masterUser = "postgres";
$masterPassword = "postgrespassword";
# Where to email Error messages to
# $errorEmailAddr = "me@mydomain.com";
$slaveInfo->{"slaveName"} = "backupMachine";
$slaveInfo->{"slaveHost"} = "backupMachine.mydomain.com";
$slaveInfo->{"slaveDb"} = "myDatabase";
$slaveInfo->{"slavePort"} = 5432;
$slaveInfo->{"slaveUser"} = "postgres";
$slaveInfo->{"slavePassword"} = "postgrespassword";
# If uncommented then text files with SQL statements are generated instead
# of connecting to the slave database directly.
# slaveDb should then be commented out.
# $slaveInfo{"TransactionFileDirectory"} = '/tmp';
#
# The number of seconds dbmirror should sleep for between checking to see
# if more data is ready to be mirrored.
$sleepInterval = 60;
#If you want to use syslog
# $syslog = 1;
# $PostgreSQL: pgsql/contrib/fulltextindex/Makefile,v 1.14 2005/09/27 17:13:02 tgl Exp $
MODULES = fti
DATA_built = fti.sql
DOCS = README.fti
SCRIPTS = fti.pl
ifdef USE_PGXS
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
else
subdir = contrib/fulltextindex
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif
Place "stop" words in lookup table
WARNING
-------
This implementation of full text indexing is very slow and inefficient. It is
STRONGLY recommended that you switch to using contrib/tsearch which offers these
features:
Advantages
----------
* Actively developed and improved
* Tight integration with OpenFTS (openfts.sourceforge.net)
* Orders of magnitude faster (eg. 300 times faster for two keyword search)
* No extra tables or multi-way joins required
* Select syntax allows easy 'and'ing, 'or'ing and 'not'ing of keywords
* Built-in stemmer with customisable dictionaries (ie. searching for 'jellies' will find 'jelly')
* Stop words automatically ignored
* Supports non-C locales
Disadvantages
-------------
* Only indexes full words - substring searches on words won't work.
eg. Searching for 'burg' won't find 'burger'
Due to the deficiencies in this module, it is quite likely that it will be removed from the standard PostgreSQL distribution in the future.
-- Adjust this setting to control where the objects get created.
SET search_path = public;
CREATE OR REPLACE FUNCTION fti() RETURNS trigger AS
'MODULE_PATHNAME', 'fti'
LANGUAGE C VOLATILE CALLED ON NULL INPUT;
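For illustration only: the trigger function above was attached once per indexed
table.  The sketch below assumes a hypothetical table "cds" with a full-text
index table "cds_fti"; the argument convention (index table first, then the
columns to index) is an assumption here, not something shown on this page, so
check the module's README.fti before relying on it.

    CREATE TRIGGER cds_fti_trigger
        AFTER INSERT OR UPDATE OR DELETE ON cds
        FOR EACH ROW EXECUTE PROCEDURE fti(cds_fti, title, artist);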
-- Adjust this setting to control where the objects get created.
SET search_path = public;
DROP FUNCTION fti() CASCADE;
#
# $PostgreSQL: pgsql/contrib/mSQL-interface/Makefile,v 1.12 2006/07/15 03:33:14 tgl Exp $
#
MODULE_big = mpgsql
SO_MAJOR_VERSION = 0
SO_MINOR_VERSION = 0
OBJS = mpgsql.o
DOCS = README.mpgsql
PG_CPPFLAGS = -I$(libpq_srcdir)
ifdef USE_PGXS
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
else
subdir = contrib/mSQL-interface
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif
Hello! :)
(Sorry for my english. But if i wrote in portuguese, you wouldn't
understand nothing. :])
I found it's the right place to post this. I'm a newcomer in these
lists. I hope i did it right. :]
<BOREDOM>
When i started using SQL, i started with mSQL. I developed a lot
of useful apps for me and my job with C, mainly because i loved it's
elegant, simple api. But for a large project i'm doing in these days, i
thought it was not enough, because it lacked a lot of features i started to
need, like security and subselects. (and it's not free :))
So after looking at the options, choose to start again with
postgres. It offered everything that i needed, and the documentation is
really good (remember me to thank the one who wrote'em).
But for my little apps, i needed to start porting them to libpq.
After looking at pq's syntax, i found it was better to write a bridge
between the mSQL api and libpq. I found that rewriting the libmsql.a
routines that calls libpq would made things much easier. I guess the
results are quite good right now.
</BOREDOM>
Ok. Lets' summarize it:
mpgsql.c is the bridge. Acting as a wrapper, it's really good,
since i could run mSQL. But it's not accurate. Some highlights:
CONS:
* It's not well documented
(this post is it's first documentation attempt, in fact);
* It doesn't handle field types correctly. I plan to fix it,
if people start doing feedbacks;
* It's limited to 10 simultaneous connections. I plan to enhance
this, i'm just figuring out;
* I'd like to make it reentrant/thread safe, although i don't
think this could be done without changing the API structure;
* Error Management should be better. This is my first priority
now;
* Some calls are just empty implementations.
PROS:
* the mSQL Monitor runs Okay. :]
* It's really cool. :)
* Make mSQL-made applications compatible with postgresql just by
changing link options.
* Uses postgreSQL. :]
* the mSQL API it's far easier to use and understand than libpq.
Consider this example:
#include "msql.h"
void main(int argc, char **argv, char **envp) {
int sid;
sid = msqlConnect(NULL); /* Connects via unix socket */
if (sid >= 0) {
m_result *rlt;
m_row *row;
msqlSelectDB(sid, "hosts");
if (msqlQuery(sid, "select host_id from hosts")) {
rlt = msqlStoreResult();
while (row = (m_row*)msqlFetchRow(rlt))
printf("hostid: %s\n", row[0]);
msqlFreeResult(rlt);
}
msqlClose(sid);
}
}
I enclose mpgsql.c inside. I'd like to maintain it, and (maybe, am
i dreaming) make it as part of the pgsql distribution. I guess it doesn't
depends on me, but mainly on it's acceptance by its users.
Hm... i forgot: you'll need a msql.h copy, since it's copyrighted
by Hughes Technologies Pty Ltd. If you haven't got it yet, fetch one
from www.hughes.com.au.
I would like to catch users ideas. My next goal is to add better
error handling, and to make it better documented, and try to let relshow
run through it. :)
done. Aldrin Leal <aldrin@americasnet.com>
This directory contains tools to create a mapping table from MAC
addresses (e.g., Ethernet hardware addresses) to human-readable
manufacturer strings. The `createoui' script builds the table
structure, `updateoui' obtains the current official mapping table
from the web site of the IEEE, converts it, and stores it in the
database, `dropoui' removes everything. Use the --help option to
get more usage information from the respective script. All three
use the psql program; any extra arguments will be passed to psql.
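A minimal sketch of the kind of lookup this mapping table enables; the table
and column names (oui, addr, name) are assumptions rather than something shown
here, while trunc(macaddr) is the standard function that zeroes the low three
bytes so an address can be matched against its manufacturer prefix:

    -- Assumed schema: createoui builds something like oui(addr macaddr, name text).
    SELECT name
      FROM oui
     WHERE addr = trunc('08:00:2b:01:02:03'::macaddr);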
# $PostgreSQL: pgsql/contrib/tips/Makefile,v 1.8 2005/09/27 17:13:10 tgl Exp $
DOCS = README.apachelog
ifdef USE_PGXS
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
else
subdir = contrib/tips
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif