Commit 500b62b0 authored by Bruce Momjian's avatar Bruce Momjian

pg_dump patch from Philip Warner

parent 20c01ef1
From owner-pgsql-hackers@hub.org Wed Sep 22 20:31:02 1999
Received: from renoir.op.net (root@renoir.op.net [209.152.193.4])
by candle.pha.pa.us (8.9.0/8.9.0) with ESMTP id UAA15611
for <maillist@candle.pha.pa.us>; Wed, 22 Sep 1999 20:31:01 -0400 (EDT)
Received: from hub.org (hub.org [216.126.84.1]) by renoir.op.net (o1/$ Revision: 1.18 $) with ESMTP id UAA02926 for <maillist@candle.pha.pa.us>; Wed, 22 Sep 1999 20:21:24 -0400 (EDT)
Received: from hub.org (hub.org [216.126.84.1])
by hub.org (8.9.3/8.9.3) with ESMTP id UAA75413;
Wed, 22 Sep 1999 20:09:35 -0400 (EDT)
(envelope-from owner-pgsql-hackers@hub.org)
Received: by hub.org (TLB v0.10a (1.23 tibbs 1997/01/09 00:29:32)); Wed, 22 Sep 1999 20:08:50 +0000 (EDT)
Received: (from majordom@localhost)
by hub.org (8.9.3/8.9.3) id UAA75058
for pgsql-hackers-outgoing; Wed, 22 Sep 1999 20:06:58 -0400 (EDT)
(envelope-from owner-pgsql-hackers@postgreSQL.org)
Received: from sss.sss.pgh.pa.us (sss.pgh.pa.us [209.114.166.2])
by hub.org (8.9.3/8.9.3) with ESMTP id UAA74982
for <pgsql-hackers@postgreSQL.org>; Wed, 22 Sep 1999 20:06:25 -0400 (EDT)
(envelope-from tgl@sss.pgh.pa.us)
Received: from sss.sss.pgh.pa.us (localhost [127.0.0.1])
by sss.sss.pgh.pa.us (8.9.1/8.9.1) with ESMTP id UAA06411
for <pgsql-hackers@postgreSQL.org>; Wed, 22 Sep 1999 20:05:40 -0400 (EDT)
To: pgsql-hackers@postgreSQL.org
Subject: [HACKERS] Progress report: buffer refcount bugs and SQL functions
Date: Wed, 22 Sep 1999 20:05:39 -0400
Message-ID: <6408.938045139@sss.pgh.pa.us>
From: Tom Lane <tgl@sss.pgh.pa.us>
Sender: owner-pgsql-hackers@postgreSQL.org
Precedence: bulk
Status: RO
I have been finding a lot of interesting stuff while looking into
the buffer reference count/leakage issue.
It turns out that there were two specific things that were camouflaging
the existence of bugs in this area:
1. The BufferLeakCheck routine that's run at transaction commit was
only looking for nonzero PrivateRefCount to indicate a missing unpin.
It failed to notice nonzero LastRefCount --- which meant that an
error in refcount save/restore usage could leave a buffer pinned,
and BufferLeakCheck wouldn't notice.
2. The BufferIsValid macro, which you'd think just checks whether
it's handed a valid buffer identifier or not, actually did more:
it only returned true if the buffer ID was valid *and* the buffer
had positive PrivateRefCount. That meant that the common pattern
if (BufferIsValid(buf))
ReleaseBuffer(buf);
wouldn't complain if it were handed a valid but already unpinned buffer.
And that behavior masks bugs that result in buffers being unpinned too
early. For example, consider a sequence like
1. LockBuffer (buffer now has refcount 1). Store reference to
a tuple on that buffer page in a tuple table slot.
2. Copy buffer reference to a second tuple-table slot, but forget to
increment buffer's refcount.
3. Release second tuple table slot. Buffer refcount drops to 0,
so it's unpinned.
4. Release original tuple slot. Because of BufferIsValid behavior,
no assert happens here; in fact nothing at all happens.
This is, of course, buggy code: during the interval from 3 to 4 you
still have an apparently valid tuple reference in the original slot,
which someone might try to use; but the buffer it points to is unpinned
and could be replaced at any time by another backend.
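To make the masking effect concrete, here is a minimal standalone sketch
(invented names, not actual backend code) in which a validity test that
also insists on a positive reference count hides the extra release:

#include <stdio.h>

#define NBUF 4
static int refcount[NBUF];		/* stand-in for PrivateRefCount[] */

static int buffer_is_valid(int buf)	/* models the old BufferIsValid */
{
	return buf >= 0 && buf < NBUF && refcount[buf] > 0;
}

static void release(int buf)		/* models ReleaseBuffer */
{
	if (refcount[buf] <= 0)
		fprintf(stderr, "bug: releasing already-unpinned buffer %d\n", buf);
	else
		refcount[buf]--;
}

int main(void)
{
	int buf = 0;

	refcount[buf] = 1;		/* step 1: pin once, reference kept in slot A */
					/* step 2: copy reference to slot B, forget to pin */
	release(buf);			/* step 3: clear slot B; refcount drops to 0 */
	if (buffer_is_valid(buf))	/* step 4: clear slot A; the guard sees     */
		release(buf);		/* refcount 0 and silently does nothing      */
	return 0;
}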
In short, we had errors that would mask both missing-pin bugs and
missing-unpin bugs. And naturally there were a few such bugs lurking
behind them...
3. The buffer refcount save/restore stuff, which I had suspected
was useless, is not only useless but also buggy. The reason it's
buggy is that it only works if used in a nested fashion. You could
save state A, pin some buffers, save state B, pin some more
buffers, restore state B (thereby unpinning what you pinned since
the save), and finally restore state A (unpinning the earlier stuff).
What you could not do is save state A, pin, save B, pin more, then
restore state A --- that might unpin some of A's buffers, or some
of B's buffers, or some unforeseen combination thereof. If you
restore A and then restore B, you do not necessarily return to a zero-
pins state, either. And it turns out the actual usage pattern was a
nearly random sequence of saves and restores, compounded by a failure to
do all of the restores reliably (which was masked by the oversight in
BufferLeakCheck).
What I have done so far is to rip out the buffer refcount save/restore
support (including LastRefCount), change BufferIsValid to a simple
validity check (so that you get an assert if you unpin something that
was pinned), change ExecStoreTuple so that it increments the refcount
when it is handed a buffer reference (for symmetry with ExecClearTuple's
decrement of the refcount), and fix about a dozen bugs exposed by these
changes.
I am still getting Buffer Leak notices in the "misc" regression test,
specifically in the queries that invoke more than one SQL function.
What I find there is that SQL functions are not always run to
completion. Apparently, when a function can return multiple tuples,
it won't necessarily be asked to produce them all. And when it isn't,
postquel_end() isn't invoked for the function's current query, so its
tuple table isn't cleared, so we have dangling refcounts if any of the
tuples involved are in disk buffers.
It may be that the save/restore code was a misguided attempt to fix
this problem. I can't tell. But I think what we really need to do is
find some way of ensuring that Postquel function execution contexts
always get shut down by the end of the query, so that they don't leak
resources.
I suppose a straightforward approach would be to keep a list of open
function contexts somewhere (attached to the outer execution context,
perhaps), and clean them up at outer-plan shutdown.
What I am wondering, though, is whether this addition is actually
necessary, or is it a bug that the functions aren't run to completion
in the first place? I don't really understand the semantics of this
"nested dot notation". I suppose it is a Berkeleyism; I can't find
anything about it in the SQL92 document. The test cases shown in the
misc regress test seem peculiar, not to say wrong. For example:
regression=> SELECT p.hobbies.equipment.name, p.hobbies.name, p.name FROM person p;
name |name |name
-------------+-----------+-----
advil |posthacking|mike
peet's coffee|basketball |joe
hightops |basketball |sally
(3 rows)
which doesn't appear to agree with the contents of the underlying
relations:
regression=> SELECT * FROM hobbies_r;
name |person
-----------+------
posthacking|mike
posthacking|jeff
basketball |joe
basketball |sally
skywalking |
(5 rows)
regression=> SELECT * FROM equipment_r;
name |hobby
-------------+-----------
advil |posthacking
peet's coffee|posthacking
hightops |basketball
guts |skywalking
(4 rows)
I'd have expected an output along the lines of
advil |posthacking|mike
peet's coffee|posthacking|mike
hightops |basketball |joe
hightops |basketball |sally
Is the regression test's expected output wrong, or am I misunderstanding
what this query is supposed to do? Is there any documentation anywhere
about how SQL functions returning multiple tuples are supposed to
behave?
regards, tom lane
************
From owner-pgsql-hackers@hub.org Thu Sep 23 11:03:19 1999
Received: from hub.org (hub.org [216.126.84.1])
by candle.pha.pa.us (8.9.0/8.9.0) with ESMTP id LAA16211
for <maillist@candle.pha.pa.us>; Thu, 23 Sep 1999 11:03:17 -0400 (EDT)
Received: from hub.org (hub.org [216.126.84.1])
by hub.org (8.9.3/8.9.3) with ESMTP id KAA58151;
Thu, 23 Sep 1999 10:53:46 -0400 (EDT)
(envelope-from owner-pgsql-hackers@hub.org)
Received: by hub.org (TLB v0.10a (1.23 tibbs 1997/01/09 00:29:32)); Thu, 23 Sep 1999 10:53:05 +0000 (EDT)
Received: (from majordom@localhost)
by hub.org (8.9.3/8.9.3) id KAA57948
for pgsql-hackers-outgoing; Thu, 23 Sep 1999 10:52:23 -0400 (EDT)
(envelope-from owner-pgsql-hackers@postgreSQL.org)
Received: from sss.sss.pgh.pa.us (sss.pgh.pa.us [209.114.166.2])
by hub.org (8.9.3/8.9.3) with ESMTP id KAA57841
for <hackers@postgreSQL.org>; Thu, 23 Sep 1999 10:51:50 -0400 (EDT)
(envelope-from tgl@sss.pgh.pa.us)
Received: from sss.sss.pgh.pa.us (localhost [127.0.0.1])
by sss.sss.pgh.pa.us (8.9.1/8.9.1) with ESMTP id KAA14211;
Thu, 23 Sep 1999 10:51:10 -0400 (EDT)
To: Andreas Zeugswetter <andreas.zeugswetter@telecom.at>
cc: hackers@postgreSQL.org
Subject: Re: [HACKERS] Progress report: buffer refcount bugs and SQL functions
In-reply-to: Your message of Thu, 23 Sep 1999 10:07:24 +0200
<37E9DFBC.5C0978F@telecom.at>
Date: Thu, 23 Sep 1999 10:51:10 -0400
Message-ID: <14209.938098270@sss.pgh.pa.us>
From: Tom Lane <tgl@sss.pgh.pa.us>
Sender: owner-pgsql-hackers@postgreSQL.org
Precedence: bulk
Status: RO
Andreas Zeugswetter <andreas.zeugswetter@telecom.at> writes:
> That is what I use it for. I have never used it with a
> returns setof function, but reading the comments in the regression test,
> -- mike needs advil and peet's coffee,
> -- joe and sally need hightops, and
> -- everyone else is fine.
> it looks like the results you expected are correct, and currently the
> wrong result is given.
Yes, I have concluded the same (and partially fixed it, per my previous
message).
> Those that don't have a hobby should return name|NULL|NULL. A hobby
> that doesn't need equipment, name|hobby|NULL.
That's a good point. Currently (both with and without my uncommitted
fix) you get *no* rows out from ExecTargetList if there are any Iters
that return empty result sets. It might be more reasonable to treat an
empty result set as if it were NULL, which would give the behavior you
suggest.
This would be an easy change to my current patch, and I'm prepared to
make it before committing what I have, if people agree that that's a
more reasonable definition. Comments?
regards, tom lane
************
From owner-pgsql-hackers@hub.org Thu Sep 23 04:31:15 1999
Received: from renoir.op.net (root@renoir.op.net [209.152.193.4])
by candle.pha.pa.us (8.9.0/8.9.0) with ESMTP id EAA11344
for <maillist@candle.pha.pa.us>; Thu, 23 Sep 1999 04:31:15 -0400 (EDT)
Received: from hub.org (hub.org [216.126.84.1]) by renoir.op.net (o1/$ Revision: 1.18 $) with ESMTP id EAA05350 for <maillist@candle.pha.pa.us>; Thu, 23 Sep 1999 04:24:29 -0400 (EDT)
Received: from hub.org (hub.org [216.126.84.1])
by hub.org (8.9.3/8.9.3) with ESMTP id EAA85679;
Thu, 23 Sep 1999 04:16:26 -0400 (EDT)
(envelope-from owner-pgsql-hackers@hub.org)
Received: by hub.org (TLB v0.10a (1.23 tibbs 1997/01/09 00:29:32)); Thu, 23 Sep 1999 04:09:52 +0000 (EDT)
Received: (from majordom@localhost)
by hub.org (8.9.3/8.9.3) id EAA84708
for pgsql-hackers-outgoing; Thu, 23 Sep 1999 04:08:57 -0400 (EDT)
(envelope-from owner-pgsql-hackers@postgreSQL.org)
Received: from gandalf.telecom.at (gandalf.telecom.at [194.118.26.84])
by hub.org (8.9.3/8.9.3) with ESMTP id EAA84632
for <hackers@postgresql.org>; Thu, 23 Sep 1999 04:08:03 -0400 (EDT)
(envelope-from andreas.zeugswetter@telecom.at)
Received: from telecom.at (w0188000580.f000.d0188.sd.spardat.at [172.18.65.249])
by gandalf.telecom.at (xxx/xxx) with ESMTP id KAA195294
for <hackers@postgresql.org>; Thu, 23 Sep 1999 10:07:27 +0200
Message-ID: <37E9DFBC.5C0978F@telecom.at>
Date: Thu, 23 Sep 1999 10:07:24 +0200
From: Andreas Zeugswetter <andreas.zeugswetter@telecom.at>
X-Mailer: Mozilla 4.61 [en] (Win95; I)
X-Accept-Language: en
MIME-Version: 1.0
To: hackers@postgreSQL.org
Subject: Re: [HACKERS] Progress report: buffer refcount bugs and SQL functions
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Sender: owner-pgsql-hackers@postgreSQL.org
Precedence: bulk
Status: RO
> Is the regression test's expected output wrong, or am I misunderstanding
> what this query is supposed to do? Is there any documentation anywhere
> about how SQL functions returning multiple tuples are supposed to
> behave?
They are supposed to behave somewhat like a view.
Not all rows are necessarily fetched.
If used in a context that needs a single-row answer,
and the answer has multiple rows, it is supposed to
elog at runtime. Like in:
select * from tbl where col=funcreturningmultipleresults();
-- this must elog
while this is ok:
select * from tbl where col in (select funcreturningmultipleresults());
But the caller could only fetch the first row if he wanted.
The nested notation is supposed to call the function passing it the tuple
as the first argument. This is what can be used to "fake" a column
onto a table (computed column).
That is what I use it for. I have never used it with a
returns setof function, but reading the comments in the regression test,
-- mike needs advil and peet's coffee,
-- joe and sally need hightops, and
-- everyone else is fine.
it looks like the results you expected are correct, and currently the
wrong result is given.
But I think this query could also elog without removing substantial
functionality.
SELECT p.name, p.hobbies.name, p.hobbies.equipment.name FROM person p;
Actually, for me it would be intuitive that this query returns one row per
person, but elogs on those that have more than one hobby or a hobby that
needs more than one piece of equipment. Those that don't have a hobby should
return name|NULL|NULL. A hobby that doesn't need equipment, name|hobby|NULL.
Andreas
************
From owner-pgsql-hackers@hub.org Wed Sep 22 22:01:07 1999
Received: from renoir.op.net (root@renoir.op.net [209.152.193.4])
by candle.pha.pa.us (8.9.0/8.9.0) with ESMTP id WAA16360
for <maillist@candle.pha.pa.us>; Wed, 22 Sep 1999 22:01:05 -0400 (EDT)
Received: from hub.org (hub.org [216.126.84.1]) by renoir.op.net (o1/$ Revision: 1.18 $) with ESMTP id VAA08386 for <maillist@candle.pha.pa.us>; Wed, 22 Sep 1999 21:37:24 -0400 (EDT)
Received: from hub.org (hub.org [216.126.84.1])
by hub.org (8.9.3/8.9.3) with ESMTP id VAA88083;
Wed, 22 Sep 1999 21:28:11 -0400 (EDT)
(envelope-from owner-pgsql-hackers@hub.org)
Received: by hub.org (TLB v0.10a (1.23 tibbs 1997/01/09 00:29:32)); Wed, 22 Sep 1999 21:27:48 +0000 (EDT)
Received: (from majordom@localhost)
by hub.org (8.9.3/8.9.3) id VAA87938
for pgsql-hackers-outgoing; Wed, 22 Sep 1999 21:26:52 -0400 (EDT)
(envelope-from owner-pgsql-hackers@postgreSQL.org)
Received: from orion.SAPserv.Hamburg.dsh.de (Tpolaris2.sapham.debis.de [53.2.131.8])
by hub.org (8.9.3/8.9.3) with SMTP id VAA87909
for <pgsql-hackers@postgresql.org>; Wed, 22 Sep 1999 21:26:36 -0400 (EDT)
(envelope-from wieck@debis.com)
Received: by orion.SAPserv.Hamburg.dsh.de
for pgsql-hackers@postgresql.org
id m11TxXw-0003kLC; Thu, 23 Sep 99 03:19 MET DST
Message-Id: <m11TxXw-0003kLC@orion.SAPserv.Hamburg.dsh.de>
From: wieck@debis.com (Jan Wieck)
Subject: Re: [HACKERS] Progress report: buffer refcount bugs and SQL functions
To: tgl@sss.pgh.pa.us (Tom Lane)
Date: Thu, 23 Sep 1999 03:19:39 +0200 (MET DST)
Cc: pgsql-hackers@postgreSQL.org
Reply-To: wieck@debis.com (Jan Wieck)
In-Reply-To: <6408.938045139@sss.pgh.pa.us> from "Tom Lane" at Sep 22, 99 08:05:39 pm
X-Mailer: ELM [version 2.4 PL25]
Content-Type: text
Sender: owner-pgsql-hackers@postgreSQL.org
Precedence: bulk
Status: RO
Tom Lane wrote:
> [...]
>
> What I am wondering, though, is whether this addition is actually
> necessary, or is it a bug that the functions aren't run to completion
> in the first place? I don't really understand the semantics of this
> "nested dot notation". I suppose it is a Berkeleyism; I can't find
> anything about it in the SQL92 document. The test cases shown in the
> misc regress test seem peculiar, not to say wrong. For example:
>
> [...]
>
> Is the regression test's expected output wrong, or am I misunderstanding
> what this query is supposed to do? Is there any documentation anywhere
> about how SQL functions returning multiple tuples are supposed to
> behave?
I've said some time ago (maybe too long ago) that SQL functions
returning tuple sets are broken in general. This nested dot
notation (which I think is an artefact of the postquel
query language) is implemented via set functions.
Set functions have totally different semantics from all other
functions. First, they don't really return a tuple set as
someone might think - all that screwed-up code instead
simulates that they return something you could consider a
scan of the last SQL statement in the function. Then, on
each subsequent call inside the same command, they return
a "tuple table slot" containing the next tuple found (that's
why their Func node is mangled up after the first call).
Second, they have a targetlist which I think was originally
intended to extract attributes out of the tuples returned
when the above scan is asked for the next tuple. But as I
read the code, it invokes the function again, and this might
cause the resource leakage you see.
Third, all this seems never to have been implemented (or
thought through?) to the end. A targetlist doesn't make sense
at this place because it could contain at most a single attribute
- so a single attno would have the same power. And if set
functions could appear in the rangetable (FROM clause), then
they would be treated as such, and regular Var nodes in the
query would do it.
I think you shouldn't really care for that regression test
and maybe we should disable set functions until we really
implement stored procedures returning sets in the rangetable.
Set functions were planned by Stonebraker's team as
something like what today is called stored procedures. But AFAIK
they never reached a useful state, because even in Postgres
4.2 you couldn't get more than one attribute out
of a set function. It was a feature of the postquel
query language that you could get one attribute from a set
function via
RETRIEVE (attributename(setfuncname()))
While working on the constraint triggers I've come across
another regression test (triggers :-) that's erroneous too.
The funny_dup17 trigger proc executes an INSERT into the same
relation it gets fired for by a previous INSERT. And it
stops this recursion only when it reaches a nesting level of
17, which can only occur if it is fired DURING the
execution of its own SPI_exec(). After Vadim quoted some
SQL92 definitions about when constraint checks and triggers
are to be executed, I decided to fire regular triggers at the
end of the query too. Thus no nesting at all is possible for
AFTER triggers, which leaves that test in an endless loop.
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#========================================= wieck@debis.com (Jan Wieck) #
************
From owner-pgsql-hackers@hub.org Thu Sep 23 11:01:06 1999
Received: from renoir.op.net (root@renoir.op.net [209.152.193.4])
by candle.pha.pa.us (8.9.0/8.9.0) with ESMTP id LAA16162
for <maillist@candle.pha.pa.us>; Thu, 23 Sep 1999 11:01:04 -0400 (EDT)
Received: from hub.org (hub.org [216.126.84.1]) by renoir.op.net (o1/$ Revision: 1.18 $) with ESMTP id KAA28544 for <maillist@candle.pha.pa.us>; Thu, 23 Sep 1999 10:45:54 -0400 (EDT)
Received: from hub.org (hub.org [216.126.84.1])
by hub.org (8.9.3/8.9.3) with ESMTP id KAA52943;
Thu, 23 Sep 1999 10:20:51 -0400 (EDT)
(envelope-from owner-pgsql-hackers@hub.org)
Received: by hub.org (TLB v0.10a (1.23 tibbs 1997/01/09 00:29:32)); Thu, 23 Sep 1999 10:19:58 +0000 (EDT)
Received: (from majordom@localhost)
by hub.org (8.9.3/8.9.3) id KAA52472
for pgsql-hackers-outgoing; Thu, 23 Sep 1999 10:19:03 -0400 (EDT)
(envelope-from owner-pgsql-hackers@postgreSQL.org)
Received: from sss.sss.pgh.pa.us (sss.pgh.pa.us [209.114.166.2])
by hub.org (8.9.3/8.9.3) with ESMTP id KAA52431
for <pgsql-hackers@postgresql.org>; Thu, 23 Sep 1999 10:18:47 -0400 (EDT)
(envelope-from tgl@sss.pgh.pa.us)
Received: from sss.sss.pgh.pa.us (localhost [127.0.0.1])
by sss.sss.pgh.pa.us (8.9.1/8.9.1) with ESMTP id KAA13253;
Thu, 23 Sep 1999 10:18:02 -0400 (EDT)
To: wieck@debis.com (Jan Wieck)
cc: pgsql-hackers@postgreSQL.org
Subject: Re: [HACKERS] Progress report: buffer refcount bugs and SQL functions
In-reply-to: Your message of Thu, 23 Sep 1999 03:19:39 +0200 (MET DST)
<m11TxXw-0003kLC@orion.SAPserv.Hamburg.dsh.de>
Date: Thu, 23 Sep 1999 10:18:01 -0400
Message-ID: <13251.938096281@sss.pgh.pa.us>
From: Tom Lane <tgl@sss.pgh.pa.us>
Sender: owner-pgsql-hackers@postgreSQL.org
Precedence: bulk
Status: RO
wieck@debis.com (Jan Wieck) writes:
> Tom Lane wrote:
>> What I am wondering, though, is whether this addition is actually
>> necessary, or is it a bug that the functions aren't run to completion
>> in the first place?
> I've said some time (maybe too long) ago, that SQL functions
> returning tuple sets are broken in general.
Indeed they are. Try this on for size (using the regression database):
SELECT p.name, p.hobbies.equipment.name FROM person p;
SELECT p.hobbies.equipment.name, p.name FROM person p;
You get different result sets!?
The problem in this example is that ExecTargetList returns the isDone
flag from the last targetlist entry, regardless of whether there are
incomplete iterations in previous entries. More generally, the buffer
leak problem that I started with only occurs if some Iter nodes are not
run to completion --- but execQual.c has no mechanism to make sure that
they have all reached completion simultaneously.
What we really need to make functions-returning-sets work properly is
an implementation somewhat like aggregate functions. We need to make
a list of all the Iter nodes present in a targetlist and cycle through
the values returned by each in a methodical fashion (run the rightmost
through its full cycle, then advance the next-to-rightmost one value,
run the rightmost through its cycle again, etc etc). Also there needs
to be an understanding of the hierarchy when an Iter appears in the
arguments of another Iter's function. (You cycle the upper one for
*each* set of arguments created by cycling its sub-Iters.)
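A minimal standalone sketch of that cycling order (plain integer arrays
stand in for the value sets the Iter nodes would produce; this is only an
illustration, not executor code):

#include <stdio.h>

#define NSETS 3

int main(void)
{
	int sizes[NSETS] = {2, 2, 3};	/* cardinality of each set function */
	int idx[NSETS] = {0, 0, 0};	/* current position within each set */
	int done = 0;
	int i;

	while (!done)
	{
		printf("row: %d %d %d\n", idx[0], idx[1], idx[2]);

		/* advance the rightmost iterator; carry to the left on wraparound */
		i = NSETS - 1;
		while (i >= 0 && ++idx[i] == sizes[i])
		{
			idx[i] = 0;
			i--;
		}
		if (i < 0)
			done = 1;	/* every iterator wrapped: all combinations emitted */
	}
	return 0;
}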
I am not particularly interested in working on this feature right now,
since AFAIK it's a Berkeleyism not found in SQL92. What I've done
is to hack ExecTargetList so that it behaves semi-sanely when there's
more than one Iter at the top level of the target list --- it still
doesn't really give the right answer, but at least it will keep
generating tuples until all the Iters are done at the same time.
It happens that that's enough to give correct answers for the examples
shown in the misc regress test. Even when it fails to generate all
the possible combinations, there will be no buffer leaks.
So, I'm going to declare victory and go home ;-). We ought to add a
TODO item along the lines of
* Functions returning sets don't really work right
in hopes that someone will feel like tackling this someday.
regards, tom lane
************
......@@ -4,7 +4,7 @@
#
# Copyright (c) 1994, Regents of the University of California
#
# $Header: /cvsroot/pgsql/src/bin/pg_dump/Makefile,v 1.17 2000/07/03 16:35:39 petere Exp $
# $Header: /cvsroot/pgsql/src/bin/pg_dump/Makefile,v 1.18 2000/07/04 14:25:26 momjian Exp $
#
#-------------------------------------------------------------------------
......@@ -12,21 +12,19 @@ subdir = src/bin/pg_dump
top_builddir = ../../..
include ../../Makefile.global
OBJS= pg_dump.o common.o $(STRDUP)
OBJS= pg_backup_archiver.o pg_backup_custom.o pg_backup_files.o \
pg_backup_plain_text.o $(STRDUP)
CFLAGS+= -I$(LIBPQDIR)
LDFLAGS+= -lz
all: submake pg_dump$(X) pg_restore$(X)
all: submake pg_dump pg_dumpall
pg_dump$(X): pg_dump.o common.o $(OBJS) $(LIBPQDIR)/libpq.a
$(CC) $(CFLAGS) -o $@ pg_dump.o common.o $(OBJS) $(LIBPQ) $(LDFLAGS)
pg_dump: $(OBJS) $(LIBPQDIR)/libpq.a
$(CC) $(CFLAGS) -o $@ $(OBJS) $(LIBPQ) $(LDFLAGS)
pg_dumpall: pg_dumpall.sh
sed -e 's:__VERSION__:$(VERSION):g' \
-e 's:__MULTIBYTE__:$(MULTIBYTE):g' \
-e 's:__bindir__:$(bindir):g' \
< $< > $@
pg_restore$(X): pg_restore.o $(OBJS) $(LIBPQDIR)/libpq.a
$(CC) $(CFLAGS) -o $@ pg_restore.o $(OBJS) $(LIBPQ) $(LDFLAGS)
../../utils/strdup.o:
$(MAKE) -C ../../utils strdup.o
......@@ -37,6 +35,7 @@ submake:
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) $(bindir)/pg_dump$(X)
$(INSTALL_PROGRAM) pg_restore$(X) $(bindir)/pg_restore$(X)
$(INSTALL_SCRIPT) pg_dumpall $(bindir)/pg_dumpall
$(INSTALL_SCRIPT) pg_upgrade $(bindir)/pg_upgrade
......@@ -50,7 +49,7 @@ depend dep:
$(CC) -MM $(CFLAGS) *.c >depend
clean distclean maintainer-clean:
rm -f pg_dump$(X) $(OBJS) pg_dumpall
rm -f pg_dump$(X) pg_restore$(X) $(OBJS) pg_dump.o common.o pg_restore.o
ifeq (depend,$(wildcard depend))
include depend
......
Notes on pg_dump
================
pg_dump, by default, still outputs text files.
pg_dumpall forces all pg_dump output to be text, since it also outputs text into the same output stream.
The plain text output format cannot be used as input to pg_restore.
To dump a database in the new custom format, type:
pg_dump <db-name> -Fc > <backup-file>
To restore, try:
To list contents:
pg_restore -l <backup-file> | less
or to list tables:
pg_restore <backup-file> --table | less
or to list in a different order:
pg_restore <backup-file> -l --oid --rearrange | less
Once you are happy with the list, just remove the '-l', and an SQL script will be output.
You can also dump a listing:
pg_restore -l <backup-file> > toc.lis
or
pg_restore -l <backup-file> -f toc.lis
edit it, and rearrange the lines (or delete some):
vi toc.lis
then use it to restore selected items:
pg_restore <backup-file> --use=toc.lis -l | less
When you like the list, type
pg_restore backup.bck --use=toc.lis > script.sql
or, simply:
createdb newdbname
pg_restore backup.bck --use=toc.lis | psql newdbname
Philip Warner, 3-Jul-2000
pjw@rhyme.com.au
......@@ -8,7 +8,7 @@
*
*
* IDENTIFICATION
* $Header: /cvsroot/pgsql/src/bin/pg_dump/common.c,v 1.43 2000/06/14 18:17:50 petere Exp $
* $Header: /cvsroot/pgsql/src/bin/pg_dump/common.c,v 1.44 2000/07/04 14:25:27 momjian Exp $
*
* Modifications - 6/12/96 - dave@bensoft.com - version 1.13.dhb.2
*
......@@ -232,10 +232,13 @@ strInArray(const char *pattern, char **arr, int arr_size)
*/
TableInfo *
dumpSchema(FILE *fout,
int *numTablesPtr,
const char *tablename,
const bool aclsSkip)
dumpSchema(Archive *fout,
int *numTablesPtr,
const char *tablename,
const bool aclsSkip,
const bool oids,
const bool schemaOnly,
const bool dataOnly)
{
int numTypes;
int numFuncs;
......@@ -290,7 +293,7 @@ dumpSchema(FILE *fout,
g_comment_start, g_comment_end);
flagInhAttrs(tblinfo, numTables, inhinfo, numInherits);
if (!tablename && fout)
if (!tablename && !dataOnly)
{
if (g_verbose)
fprintf(stderr, "%s dumping out database comment %s\n",
......@@ -306,16 +309,13 @@ dumpSchema(FILE *fout,
dumpTypes(fout, finfo, numFuncs, tinfo, numTypes);
}
if (fout)
{
if (g_verbose)
fprintf(stderr, "%s dumping out tables %s\n",
g_comment_start, g_comment_end);
dumpTables(fout, tblinfo, numTables, inhinfo, numInherits,
tinfo, numTypes, tablename, aclsSkip);
}
if (g_verbose)
fprintf(stderr, "%s dumping out tables %s\n",
g_comment_start, g_comment_end);
dumpTables(fout, tblinfo, numTables, inhinfo, numInherits,
tinfo, numTypes, tablename, aclsSkip, oids, schemaOnly, dataOnly);
if (!tablename && fout)
if (!tablename && !dataOnly)
{
if (g_verbose)
fprintf(stderr, "%s dumping out user-defined procedural languages %s\n",
......@@ -323,7 +323,7 @@ dumpSchema(FILE *fout,
dumpProcLangs(fout, finfo, numFuncs, tinfo, numTypes);
}
if (!tablename && fout)
if (!tablename && !dataOnly)
{
if (g_verbose)
fprintf(stderr, "%s dumping out user-defined functions %s\n",
......@@ -331,7 +331,7 @@ dumpSchema(FILE *fout,
dumpFuncs(fout, finfo, numFuncs, tinfo, numTypes);
}
if (!tablename && fout)
if (!tablename && !dataOnly)
{
if (g_verbose)
fprintf(stderr, "%s dumping out user-defined aggregates %s\n",
......@@ -339,7 +339,7 @@ dumpSchema(FILE *fout,
dumpAggs(fout, agginfo, numAggregates, tinfo, numTypes);
}
if (!tablename && fout)
if (!tablename && !dataOnly)
{
if (g_verbose)
fprintf(stderr, "%s dumping out user-defined operators %s\n",
......@@ -363,7 +363,7 @@ dumpSchema(FILE *fout,
*/
extern void
dumpSchemaIdx(FILE *fout, const char *tablename,
dumpSchemaIdx(Archive *fout, const char *tablename,
TableInfo *tblinfo, int numTables)
{
int numIndices;
......
/*-------------------------------------------------------------------------
*
* pg_backup.h
*
* Public interface to the pg_dump archiver routines.
*
* See the headers to pg_restore for more details.
*
* Copyright (c) 2000, Philip Warner
* Rights are granted to use this software in any way so long
* as this notice is not removed.
*
* The author is not responsible for loss or damages that may
* result from its use.
*
*
* IDENTIFICATION
*
* Modifications - 28-Jun-2000 - pjw@rhyme.com.au
*
* Initial version.
*
*-------------------------------------------------------------------------
*/
#ifndef PG_BACKUP__
#include "config.h"
#include "c.h"
#define PG_BACKUP__
typedef enum _archiveFormat {
archUnknown = 0,
archCustom = 1,
archFiles = 2,
archTar = 3,
archPlainText = 4
} ArchiveFormat;
/*
* We may want to have some user-readable data, but in the meantime
* this gives us some abstraction and type checking.
*/
typedef struct _Archive {
/* Nothing here */
} Archive;
typedef int (*DataDumperPtr)(Archive* AH, char* oid, void* userArg);
typedef struct _restoreOptions {
int dataOnly;
int dropSchema;
char *filename;
int schemaOnly;
int verbose;
int aclsSkip;
int tocSummary;
char *tocFile;
int oidOrder;
int origOrder;
int rearrange;
int format;
char *formatName;
int selTypes;
int selIndex;
int selFunction;
int selTrigger;
int selTable;
char *indexNames;
char *functionNames;
char *tableNames;
char *triggerNames;
int *idWanted;
int limitToList;
int compression;
} RestoreOptions;
/*
* Main archiver interface.
*/
/* Called to add a TOC entry */
extern void ArchiveEntry(Archive* AH, const char* oid, const char* name,
const char* desc, const char* (deps[]), const char* defn,
const char* dropStmt, const char* owner,
DataDumperPtr dumpFn, void* dumpArg);
/* Called to write *data* to the archive */
extern int WriteData(Archive* AH, const void* data, int dLen);
extern void CloseArchive(Archive* AH);
extern void RestoreArchive(Archive* AH, RestoreOptions *ropt);
/* Open an existing archive */
extern Archive* OpenArchive(const char* FileSpec, ArchiveFormat fmt);
/* Create a new archive */
extern Archive* CreateArchive(const char* FileSpec, ArchiveFormat fmt, int compression);
/* The --list option */
extern void PrintTOCSummary(Archive* AH, RestoreOptions *ropt);
extern RestoreOptions* NewRestoreOptions(void);
/* Rearrange TOC entries */
extern void MoveToStart(Archive* AH, char *oType);
extern void MoveToEnd(Archive* AH, char *oType);
extern void SortTocByOID(Archive* AH);
extern void SortTocByID(Archive* AH);
extern void SortTocFromFile(Archive* AH, RestoreOptions *ropt);
/* Convenience functions used only when writing DATA */
extern int archputs(const char *s, Archive* AH);
extern int archputc(const char c, Archive* AH);
extern int archprintf(Archive* AH, const char *fmt, ...);
#endif
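As a rough sketch of how a client such as pg_dump might drive the interface
above (the function name exampleDump, the OID, and the row text are invented
for illustration only):

#include <string.h>
#include "pg_backup.h"

/* Invoked by the archiver when this entry's data should be written. */
static int dumpGreeting(Archive* AH, char* oid, void* userArg)
{
	const char *rows = (const char *) userArg;

	return WriteData(AH, rows, strlen(rows));
}

int exampleDump(void)
{
	Archive* AH = CreateArchive("example.bck", archCustom, 0 /* no compression */);

	/* One TOC entry whose data is produced by the dumpGreeting callback */
	ArchiveEntry(AH, "18000", "greeting", "TABLE DATA", NULL,
				 "", "", "postgres",
				 dumpGreeting,
				 "INSERT INTO greeting VALUES ('hello');\n");

	CloseArchive(AH);		/* the format module flushes TOC and data here */
	return 0;
}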
/*-------------------------------------------------------------------------
*
* pg_backup_archiver.c
*
* Private implementation of the archiver routines.
*
* See the headers to pg_restore for more details.
*
* Copyright (c) 2000, Philip Warner
* Rights are granted to use this software in any way so long
* as this notice is not removed.
*
* The author is not responsible for loss or damages that may
* result from its use.
*
*
* IDENTIFICATION
*
* Modifications - 28-Jun-2000 - pjw@rhyme.com.au
*
* Initial version.
*
*-------------------------------------------------------------------------
*/
#include "pg_backup.h"
#include "pg_backup_archiver.h"
#include <string.h>
#include <unistd.h> /* for dup */
#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
static void _SortToc(ArchiveHandle* AH, TocSortCompareFn fn);
static int _tocSortCompareByOIDNum(const void *p1, const void *p2);
static int _tocSortCompareByIDNum(const void *p1, const void *p2);
static ArchiveHandle* _allocAH(const char* FileSpec, ArchiveFormat fmt,
int compression, ArchiveMode mode);
static int _printTocEntry(ArchiveHandle* AH, TocEntry* te, RestoreOptions *ropt);
static int _tocEntryRequired(TocEntry* te, RestoreOptions *ropt);
static void _disableTriggers(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt);
static void _enableTriggers(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt);
static TocEntry* _getTocEntry(ArchiveHandle* AH, int id);
static void _moveAfter(ArchiveHandle* AH, TocEntry* pos, TocEntry* te);
static void _moveBefore(ArchiveHandle* AH, TocEntry* pos, TocEntry* te);
static int _discoverArchiveFormat(ArchiveHandle* AH);
static char *progname = "Archiver";
/*
* Wrapper functions.
*
* The objective is to make writing new formats and dumpers as simple
* as possible, if necessary at the expense of extra function calls etc.
*
*/
/* Create a new archive */
/* Public */
Archive* CreateArchive(const char* FileSpec, ArchiveFormat fmt, int compression)
{
ArchiveHandle* AH = _allocAH(FileSpec, fmt, compression, archModeWrite);
return (Archive*)AH;
}
/* Open an existing archive */
/* Public */
Archive* OpenArchive(const char* FileSpec, ArchiveFormat fmt)
{
ArchiveHandle* AH = _allocAH(FileSpec, fmt, 0, archModeRead);
return (Archive*)AH;
}
/* Public */
void CloseArchive(Archive* AHX)
{
ArchiveHandle* AH = (ArchiveHandle*)AHX;
(*AH->ClosePtr)(AH);
/* Close the output */
if (AH->gzOut)
GZCLOSE(AH->OF);
else if (AH->OF != stdout)
fclose(AH->OF);
}
/* Public */
void RestoreArchive(Archive* AHX, RestoreOptions *ropt)
{
ArchiveHandle* AH = (ArchiveHandle*) AHX;
TocEntry *te = AH->toc->next;
int reqs;
OutputContext sav;
if (ropt->filename || ropt->compression)
sav = SetOutput(AH, ropt->filename, ropt->compression);
ahprintf(AH, "--\n-- Selected TOC Entries:\n--\n");
/* Drop the items at the start, in reverse order */
if (ropt->dropSchema) {
te = AH->toc->prev;
while (te != AH->toc) {
reqs = _tocEntryRequired(te, ropt);
if ( (reqs & 1) && te->dropStmt) { /* We want the schema */
ahprintf(AH, "%s", te->dropStmt);
}
te = te->prev;
}
}
te = AH->toc->next;
while (te != AH->toc) {
reqs = _tocEntryRequired(te, ropt);
if (reqs & 1) /* We want the schema */
_printTocEntry(AH, te, ropt);
if (AH->PrintTocDataPtr != NULL && (reqs & 2) != 0) {
#ifndef HAVE_ZLIB
if (AH->compression != 0)
die_horribly("%s: Unable to restore data from a compressed archive\n", progname);
#endif
_disableTriggers(AH, te, ropt);
(*AH->PrintTocDataPtr)(AH, te, ropt);
_enableTriggers(AH, te, ropt);
}
te = te->next;
}
if (ropt->filename)
ResetOutput(AH, sav);
}
RestoreOptions* NewRestoreOptions(void)
{
RestoreOptions* opts;
opts = (RestoreOptions*)calloc(1, sizeof(RestoreOptions));
opts->format = archUnknown;
return opts;
}
static void _disableTriggers(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt)
{
ahprintf(AH, "-- Disable triggers\n");
ahprintf(AH, "UPDATE \"pg_class\" SET \"reltriggers\" = 0 WHERE \"relname\" !~ '^pg_';\n\n");
}
static void _enableTriggers(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt)
{
ahprintf(AH, "-- Enable triggers\n");
ahprintf(AH, "BEGIN TRANSACTION;\n");
ahprintf(AH, "CREATE TEMP TABLE \"tr\" (\"tmp_relname\" name, \"tmp_reltriggers\" smallint);\n");
ahprintf(AH, "INSERT INTO \"tr\" SELECT C.\"relname\", count(T.\"oid\") FROM \"pg_class\" C,"
" \"pg_trigger\" T WHERE C.\"oid\" = T.\"tgrelid\" AND C.\"relname\" !~ '^pg_' "
" GROUP BY 1;\n");
ahprintf(AH, "UPDATE \"pg_class\" SET \"reltriggers\" = TMP.\"tmp_reltriggers\" "
"FROM \"tr\" TMP WHERE "
"\"pg_class\".\"relname\" = TMP.\"tmp_relname\";\n");
ahprintf(AH, "DROP TABLE \"tr\";\n");
ahprintf(AH, "COMMIT TRANSACTION;\n\n");
}
/*
* This is a routine that is available to pg_dump, hence the 'Archive*' parameter.
*/
/* Public */
int WriteData(Archive* AHX, const void* data, int dLen)
{
ArchiveHandle* AH = (ArchiveHandle*)AHX;
return (*AH->WriteDataPtr)(AH, data, dLen);
}
/*
* Create a new TOC entry. The TOC was designed as a TOC, but is now the
* repository for all metadata. But the name has stuck.
*/
/* Public */
void ArchiveEntry(Archive* AHX, const char* oid, const char* name,
const char* desc, const char* (deps[]), const char* defn,
const char* dropStmt, const char* owner,
DataDumperPtr dumpFn, void* dumpArg)
{
ArchiveHandle* AH = (ArchiveHandle*)AHX;
TocEntry* newToc;
AH->lastID++;
AH->tocCount++;
newToc = (TocEntry*)malloc(sizeof(TocEntry));
if (!newToc)
die_horribly("Archiver: unable to allocate memory for TOC entry\n");
newToc->prev = AH->toc->prev;
newToc->next = AH->toc;
AH->toc->prev->next = newToc;
AH->toc->prev = newToc;
newToc->id = AH->lastID;
newToc->oid = strdup(oid);
newToc->oidVal = atoi(oid);
newToc->name = strdup(name);
newToc->desc = strdup(desc);
newToc->defn = strdup(defn);
newToc->dropStmt = strdup(dropStmt);
newToc->owner = strdup(owner);
newToc->printed = 0;
newToc->formatData = NULL;
newToc->dataDumper = dumpFn;
newToc->dataDumperArg = dumpArg;
newToc->hadDumper = dumpFn ? 1 : 0;
if (AH->ArchiveEntryPtr != NULL) {
(*AH->ArchiveEntryPtr)(AH, newToc);
}
/* printf("New toc owned by '%s', oid %d\n", newToc->owner, newToc->oidVal); */
}
/* Public */
void PrintTOCSummary(Archive* AHX, RestoreOptions *ropt)
{
ArchiveHandle* AH = (ArchiveHandle*) AHX;
TocEntry *te = AH->toc->next;
OutputContext sav;
if (ropt->filename)
sav = SetOutput(AH, ropt->filename, ropt->compression);
ahprintf(AH, ";\n; Selected TOC Entries:\n;\n");
while (te != AH->toc) {
if (_tocEntryRequired(te, ropt) != 0)
ahprintf(AH, "%d; %d %s %s %s\n", te->id, te->oidVal, te->desc, te->name, te->owner);
te = te->next;
}
if (ropt->filename)
ResetOutput(AH, sav);
}
/***********
* Sorting and Reordering
***********/
/*
* Move TOC entries of the specified type to the START of the TOC.
*/
/* Public */
void MoveToStart(Archive* AHX, char *oType)
{
ArchiveHandle* AH = (ArchiveHandle*)AHX;
TocEntry *te = AH->toc->next;
TocEntry *newTe;
while (te != AH->toc) {
te->_moved = 0;
te = te->next;
}
te = AH->toc->prev;
while (te != AH->toc && !te->_moved) {
newTe = te->prev;
if (strcmp(te->desc, oType) == 0) {
_moveAfter(AH, AH->toc, te);
}
te = newTe;
}
}
/*
* Move TOC entries of the specified type to the end of the TOC.
*/
/* Public */
void MoveToEnd(Archive* AHX, char *oType)
{
ArchiveHandle* AH = (ArchiveHandle*)AHX;
TocEntry *te = AH->toc->next;
TocEntry *newTe;
while (te != AH->toc) {
te->_moved = 0;
te = te->next;
}
te = AH->toc->next;
while (te != AH->toc && !te->_moved) {
newTe = te->next;
if (strcmp(te->desc, oType) == 0) {
_moveBefore(AH, AH->toc, te);
}
te = newTe;
}
}
/*
* Sort TOC by OID
*/
/* Public */
void SortTocByOID(Archive* AHX)
{
ArchiveHandle* AH = (ArchiveHandle*)AHX;
_SortToc(AH, _tocSortCompareByOIDNum);
}
/*
* Sort TOC by ID
*/
/* Public */
void SortTocByID(Archive* AHX)
{
ArchiveHandle* AH = (ArchiveHandle*)AHX;
_SortToc(AH, _tocSortCompareByIDNum);
}
void SortTocFromFile(Archive* AHX, RestoreOptions *ropt)
{
ArchiveHandle* AH = (ArchiveHandle*)AHX;
FILE *fh;
char buf[1024];
char *cmnt;
char *endptr;
int id;
TocEntry *te;
TocEntry *tePrev;
int i;
/* Allocate space for the 'wanted' array, and init it */
ropt->idWanted = (int*)malloc(sizeof(int)*AH->tocCount);
for ( i = 0 ; i < AH->tocCount ; i++ )
ropt->idWanted[i] = 0;
ropt->limitToList = 1;
/* Mark all entries as 'not moved' */
te = AH->toc->next;
while (te != AH->toc) {
te->_moved = 0;
te = te->next;
}
/* Set prev entry as head of list */
tePrev = AH->toc;
/* Setup the file */
fh = fopen(ropt->tocFile, PG_BINARY_R);
if (!fh)
die_horribly("%s: could not open TOC file\n", progname);
while (fgets(buf, 1024, fh) != NULL)
{
/* Find a comment */
cmnt = strchr(buf, ';');
if (cmnt == buf)
continue;
/* End string at comment */
if (cmnt != NULL)
cmnt[0] = '\0';
/* Skip if all spaces */
if (strspn(buf, " \t") == strlen(buf))
continue;
/* Get an ID */
id = strtol(buf, &endptr, 10);
if (endptr == buf)
{
fprintf(stderr, "%s: warning - line ignored: %s\n", progname, buf);
continue;
}
/* Find TOC entry */
te = _getTocEntry(AH, id);
if (!te)
die_horribly("%s: could not find entry for id %d\n",progname, id);
ropt->idWanted[id-1] = 1;
_moveAfter(AH, tePrev, te);
tePrev = te;
}
fclose(fh);
}
/**********************
* Convenience functions that look like standard IO functions
* for writing data when in dump mode.
**********************/
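/* For example, a DataDumperPtr callback might emit a line of data with
 *		archprintf(AH, "%s\t%s\n", col1, col2);
 * (illustrative only).
 */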
/* Public */
int archputs(const char *s, Archive* AH) {
return WriteData(AH, s, strlen(s));
}
/* Public */
int archputc(const char c, Archive* AH) {
return WriteData(AH, &c, 1);
}
/* Public */
int archprintf(Archive* AH, const char *fmt, ...)
{
char *p = NULL;
va_list ap;
int bSize = strlen(fmt) + 1024;
int cnt = -1;
va_start(ap, fmt);
while (cnt < 0) {
if (p != NULL) free(p);
bSize *= 2;
if ((p = malloc(bSize)) == NULL)
{
va_end(ap);
die_horribly("%s: could not allocate buffer for archprintf\n", progname);
}
cnt = vsnprintf(p, bSize, fmt, ap);
}
va_end(ap);
WriteData(AH, p, cnt);
free(p);
return cnt;
}
/*******************************
* Stuff below here should be 'private' to the archiver routines
*******************************/
OutputContext SetOutput(ArchiveHandle* AH, char *filename, int compression)
{
OutputContext sav;
#ifdef HAVE_ZLIB
char fmode[10];
#endif
int fn = 0;
/* Replace the AH output file handle */
sav.OF = AH->OF;
sav.gzOut = AH->gzOut;
if (filename) {
fn = 0;
} else if (AH->FH) {
fn = fileno(AH->FH);
} else if (AH->fSpec) {
fn = 0;
filename = AH->fSpec;
} else {
fn = fileno(stdout);
}
/* If compression explicitly requested, use gzopen */
#ifdef HAVE_ZLIB
if (compression != 0)
{
sprintf(fmode, "wb%d", compression);
if (fn) {
AH->OF = gzdopen(dup(fn), fmode); /* Don't use PG_BINARY_x since this is zlib */
} else {
AH->OF = gzopen(filename, fmode);
}
AH->gzOut = 1;
} else { /* Use fopen */
#endif
if (fn) {
AH->OF = fdopen(dup(fn), PG_BINARY_W);
} else {
AH->OF = fopen(filename, PG_BINARY_W);
}
AH->gzOut = 0;
#ifdef HAVE_ZLIB
}
#endif
return sav;
}
void ResetOutput(ArchiveHandle* AH, OutputContext sav)
{
if (AH->gzOut)
GZCLOSE(AH->OF);
else
fclose(AH->OF);
AH->gzOut = sav.gzOut;
AH->OF = sav.OF;
}
/*
* Print formatted text to the output file (usually stdout).
*/
int ahprintf(ArchiveHandle* AH, const char *fmt, ...)
{
char *p = NULL;
va_list ap;
int bSize = strlen(fmt) + 1024; /* Should be enough */
int cnt = -1;
va_start(ap, fmt);
while (cnt < 0) {
if (p != NULL) free(p);
bSize *= 2;
p = (char*)malloc(bSize);
if (p == NULL)
{
va_end(ap);
die_horribly("%s: could not allocate buffer for ahprintf\n", progname);
}
cnt = vsnprintf(p, bSize, fmt, ap);
}
va_end(ap);
ahwrite(p, 1, cnt, AH);
free(p);
return cnt;
}
/*
* Write buffer to the output file (usually stdout).
*/
int ahwrite(const void *ptr, size_t size, size_t nmemb, ArchiveHandle* AH)
{
if (AH->gzOut)
return GZWRITE((void*)ptr, size, nmemb, AH->OF);
else
return fwrite((void*)ptr, size, nmemb, AH->OF);
}
void die_horribly(const char *fmt, ...)
{
va_list ap;
va_start(ap, fmt);
vfprintf(stderr, fmt, ap);
va_end(ap);
exit(1);
}
static void _moveAfter(ArchiveHandle* AH, TocEntry* pos, TocEntry* te)
{
te->prev->next = te->next;
te->next->prev = te->prev;
te->prev = pos;
te->next = pos->next;
pos->next->prev = te;
pos->next = te;
te->_moved = 1;
}
static void _moveBefore(ArchiveHandle* AH, TocEntry* pos, TocEntry* te)
{
te->prev->next = te->next;
te->next->prev = te->prev;
te->prev = pos->prev;
te->next = pos;
pos->prev->next = te;
pos->prev = te;
te->_moved = 1;
}
static TocEntry* _getTocEntry(ArchiveHandle* AH, int id)
{
TocEntry *te;
te = AH->toc->next;
while (te != AH->toc) {
if (te->id == id)
return te;
te = te->next;
}
return NULL;
}
int TocIDRequired(ArchiveHandle* AH, int id, RestoreOptions *ropt)
{
TocEntry *te = _getTocEntry(AH, id);
if (!te)
return 0;
return _tocEntryRequired(te, ropt);
}
int WriteInt(ArchiveHandle* AH, int i)
{
int b;
/* This is a bit yucky, but I don't want to make the
* binary format very dependent on representation,
* and not knowing much about it, I write out a
* sign byte. If you change this, don't forget to change the
* file version #, and modify readInt to read the new format
* AS WELL AS the old formats.
*/
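/* Illustrative example: with intSize = 4, i = -300 is written as five
 * bytes: 0x01 (sign), then 0x2C 0x01 0x00 0x00 (least significant
 * byte first).
 */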
/* SIGN byte */
if (i < 0) {
(*AH->WriteBytePtr)(AH, 1);
i = -i;
} else {
(*AH->WriteBytePtr)(AH, 0);
}
for(b = 0 ; b < AH->intSize ; b++) {
(*AH->WriteBytePtr)(AH, i & 0xFF);
i = i / 256;
}
return AH->intSize + 1;
}
int ReadInt(ArchiveHandle* AH)
{
int res = 0;
int shft = 1;
int bv, b;
int sign = 0; /* Default positive */
if (AH->version > K_VERS_1_0)
/* Read a sign byte */
sign = (*AH->ReadBytePtr)(AH);
for( b = 0 ; b < AH->intSize ; b++) {
bv = (*AH->ReadBytePtr)(AH);
res = res + shft * bv;
shft *= 256;
}
if (sign)
res = - res;
return res;
}
int WriteStr(ArchiveHandle* AH, char* c)
{
int l = WriteInt(AH, strlen(c));
return (*AH->WriteBufPtr)(AH, c, strlen(c)) + l;
}
char* ReadStr(ArchiveHandle* AH)
{
char* buf;
int l;
l = ReadInt(AH);
buf = (char*)malloc(l+1);
if (!buf)
die_horribly("Archiver: Unable to allocate sufficient memory in ReadStr\n");
(*AH->ReadBufPtr)(AH, (void*)buf, l);
buf[l] = '\0';
return buf;
}
int _discoverArchiveFormat(ArchiveHandle* AH)
{
FILE *fh;
char sig[6]; /* More than enough */
int cnt;
int wantClose = 0;
if (AH->fSpec) {
wantClose = 1;
fh = fopen(AH->fSpec, PG_BINARY_R);
} else {
fh = stdin;
}
if (!fh)
die_horribly("Archiver: could not open input file\n");
cnt = fread(sig, 1, 5, fh);
if (cnt != 5) {
fprintf(stderr, "Archiver: input file is too short, or is unreadable\n");
exit(1);
}
if (strncmp(sig, "PGDMP", 5) != 0)
{
fprintf(stderr, "Archiver: input file does not appear to be a valid archive\n");
exit(1);
}
AH->vmaj = fgetc(fh);
AH->vmin = fgetc(fh);
/* Check header version; varies from V1.0 */
if (AH->vmaj > 1 || ( (AH->vmaj == 1) && (AH->vmin > 0) ) ) /* Version > 1.0 */
AH->vrev = fgetc(fh);
else
AH->vrev = 0;
AH->intSize = fgetc(fh);
AH->format = fgetc(fh);
/* Make a convenient integer <maj><min><rev>00 */
AH->version = ( (AH->vmaj * 256 + AH->vmin) * 256 + AH->vrev ) * 256 + 0;
/* If we can't seek, then mark the header as read */
if (fseek(fh, 0, SEEK_SET) != 0)
AH->readHeader = 1;
/* Close the file */
if (wantClose)
fclose(fh);
return AH->format;
}
/*
* Allocate an archive handle
*/
static ArchiveHandle* _allocAH(const char* FileSpec, ArchiveFormat fmt,
int compression, ArchiveMode mode) {
ArchiveHandle* AH;
AH = (ArchiveHandle*)malloc(sizeof(ArchiveHandle));
if (!AH)
die_horribly("Archiver: Could not allocate archive handle\n");
AH->vmaj = K_VERS_MAJOR;
AH->vmin = K_VERS_MINOR;
AH->intSize = sizeof(int);
AH->lastID = 0;
if (FileSpec) {
AH->fSpec = strdup(FileSpec);
} else {
AH->fSpec = NULL;
}
AH->FH = NULL;
AH->formatData = NULL;
AH->currToc = NULL;
AH->currUser = "";
AH->toc = (TocEntry*)malloc(sizeof(TocEntry));
if (!AH->toc)
die_horribly("Archiver: Could not allocate TOC header\n");
AH->tocCount = 0;
AH->toc->next = AH->toc;
AH->toc->prev = AH->toc;
AH->toc->id = 0;
AH->toc->oid = NULL;
AH->toc->name = NULL; /* eg. MY_SPECIAL_FUNCTION */
AH->toc->desc = NULL; /* eg. FUNCTION */
AH->toc->defn = NULL; /* ie. sql to define it */
AH->toc->depOid = NULL;
AH->mode = mode;
AH->format = fmt;
AH->compression = compression;
AH->ArchiveEntryPtr = NULL;
AH->StartDataPtr = NULL;
AH->WriteDataPtr = NULL;
AH->EndDataPtr = NULL;
AH->WriteBytePtr = NULL;
AH->ReadBytePtr = NULL;
AH->WriteBufPtr = NULL;
AH->ReadBufPtr = NULL;
AH->ClosePtr = NULL;
AH->WriteExtraTocPtr = NULL;
AH->ReadExtraTocPtr = NULL;
AH->PrintExtraTocPtr = NULL;
AH->readHeader = 0;
/* Open stdout with no compression for AH output handle */
AH->gzOut = 0;
AH->OF = stdout;
if (fmt == archUnknown)
fmt = _discoverArchiveFormat(AH);
switch (fmt) {
case archCustom:
InitArchiveFmt_Custom(AH);
break;
case archFiles:
InitArchiveFmt_Files(AH);
break;
case archPlainText:
InitArchiveFmt_PlainText(AH);
break;
default:
die_horribly("Archiver: Unrecognized file format '%d'\n", fmt);
}
return AH;
}
void WriteDataChunks(ArchiveHandle* AH)
{
TocEntry *te = AH->toc->next;
while (te != AH->toc) {
if (te->dataDumper != NULL) {
AH->currToc = te;
/* printf("Writing data for %d (%x)\n", te->id, te); */
if (AH->StartDataPtr != NULL) {
(*AH->StartDataPtr)(AH, te);
}
/* printf("Dumper arg for %d is %x\n", te->id, te->dataDumperArg); */
/*
* The user-provided DataDumper routine needs to call AH->WriteData
*/
(*te->dataDumper)((Archive*)AH, te->oid, te->dataDumperArg);
if (AH->EndDataPtr != NULL) {
(*AH->EndDataPtr)(AH, te);
}
AH->currToc = NULL;
}
te = te->next;
}
}
void WriteToc(ArchiveHandle* AH)
{
TocEntry *te = AH->toc->next;
/* printf("%d TOC Entries to save\n", AH->tocCount); */
WriteInt(AH, AH->tocCount);
while (te != AH->toc) {
WriteInt(AH, te->id);
WriteInt(AH, te->dataDumper ? 1 : 0);
WriteStr(AH, te->oid);
WriteStr(AH, te->name);
WriteStr(AH, te->desc);
WriteStr(AH, te->defn);
WriteStr(AH, te->dropStmt);
WriteStr(AH, te->owner);
if (AH->WriteExtraTocPtr) {
(*AH->WriteExtraTocPtr)(AH, te);
}
te = te->next;
}
}
void ReadToc(ArchiveHandle* AH)
{
int i;
TocEntry *te = AH->toc->next;
AH->tocCount = ReadInt(AH);
for( i = 0 ; i < AH->tocCount ; i++) {
te = (TocEntry*)malloc(sizeof(TocEntry));
te->id = ReadInt(AH);
/* Sanity check */
if (te->id <= 0 || te->id > AH->tocCount)
die_horribly("Archiver: failed sanity check (bad entry id) - perhaps a corrupt TOC\n");
te->hadDumper = ReadInt(AH);
te->oid = ReadStr(AH);
te->oidVal = atoi(te->oid);
te->name = ReadStr(AH);
te->desc = ReadStr(AH);
te->defn = ReadStr(AH);
te->dropStmt = ReadStr(AH);
te->owner = ReadStr(AH);
if (AH->ReadExtraTocPtr) {
(*AH->ReadExtraTocPtr)(AH, te);
}
te->prev = AH->toc->prev;
AH->toc->prev->next = te;
AH->toc->prev = te;
te->next = AH->toc;
}
}
static int _tocEntryRequired(TocEntry* te, RestoreOptions *ropt)
{
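/* The result is a bit mask: 1 means the entry's schema/definition is
 * wanted, 2 means its data is wanted, 3 means both; 0 skips the entry
 * entirely.
 */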
int res = 3; /* Data and Schema */
/* If it's an ACL, maybe ignore it */
if (ropt->aclsSkip && strcmp(te->desc,"ACL") == 0)
return 0;
/* Check if tablename only is wanted */
if (ropt->selTypes)
{
if ( (strcmp(te->desc, "TABLE") == 0) || (strcmp(te->desc, "TABLE DATA") == 0) )
{
if (!ropt->selTable)
return 0;
if (ropt->tableNames && strcmp(ropt->tableNames, te->name) != 0)
return 0;
} else if (strcmp(te->desc, "INDEX") == 0) {
if (!ropt->selIndex)
return 0;
if (ropt->indexNames && strcmp(ropt->indexNames, te->name) != 0)
return 0;
} else if (strcmp(te->desc, "FUNCTION") == 0) {
if (!ropt->selFunction)
return 0;
if (ropt->functionNames && strcmp(ropt->functionNames, te->name) != 0)
return 0;
} else if (strcmp(te->desc, "TRIGGER") == 0) {
if (!ropt->selTrigger)
return 0;
if (ropt->triggerNames && strcmp(ropt->triggerNames, te->name) != 0)
return 0;
} else {
return 0;
}
}
/* Mask it if we only want schema */
if (ropt->schemaOnly)
res = res & 1;
/* Mask it if we only want data */
if (ropt->dataOnly)
res = res & 2;
/* Mask it if we don't have a schema contribution */
if (!te->defn || strlen(te->defn) == 0)
res = res & 2;
/* Mask it if we don't have a possible data contribution */
if (!te->hadDumper)
res = res & 1;
/* Finally, if we used a list, limit based on that as well */
if (ropt->limitToList && !ropt->idWanted[te->id - 1])
return 0;
return res;
}
static int _printTocEntry(ArchiveHandle* AH, TocEntry* te, RestoreOptions *ropt)
{
ahprintf(AH, "--\n-- TOC Entry ID %d (OID %s)\n--\n-- Name: %s Type: %s Owner: %s\n",
te->id, te->oid, te->name, te->desc, te->owner);
if (AH->PrintExtraTocPtr != NULL) {
(*AH->PrintExtraTocPtr)(AH, te);
}
ahprintf(AH, "--\n\n");
if (te->owner && strlen(te->owner) != 0 && strcmp(AH->currUser, te->owner) != 0) {
ahprintf(AH, "\\connect - %s\n", te->owner);
AH->currUser = te->owner;
}
ahprintf(AH, "%s\n", te->defn);
return 1;
}
void WriteHead(ArchiveHandle* AH)
{
(*AH->WriteBufPtr)(AH, "PGDMP", 5); /* Magic code */
(*AH->WriteBytePtr)(AH, AH->vmaj);
(*AH->WriteBytePtr)(AH, AH->vmin);
(*AH->WriteBytePtr)(AH, AH->vrev);
(*AH->WriteBytePtr)(AH, AH->intSize);
(*AH->WriteBytePtr)(AH, AH->format);
#ifndef HAVE_ZLIB
if (AH->compression != 0)
fprintf(stderr, "%s: WARNING - requested compression not available in this installation - "
"archive will be uncompressed \n", progname);
AH->compression = 0;
(*AH->WriteBytePtr)(AH, 0);
#else
(*AH->WriteBytePtr)(AH, AH->compression);
#endif
}
void ReadHead(ArchiveHandle* AH)
{
char tmpMag[7];
int fmt;
if (AH->readHeader)
return;
(*AH->ReadBufPtr)(AH, tmpMag, 5);
if (strncmp(tmpMag,"PGDMP", 5) != 0)
die_horribly("Archiver: Did not fing magic PGDMP in file header\n");
AH->vmaj = (*AH->ReadBytePtr)(AH);
AH->vmin = (*AH->ReadBytePtr)(AH);
if (AH->vmaj > 1 || ( (AH->vmaj == 1) && (AH->vmin > 0) ) ) /* Version > 1.0 */
{
AH->vrev = (*AH->ReadBytePtr)(AH);
} else {
AH->vrev = 0;
}
AH->version = ( (AH->vmaj * 256 + AH->vmin) * 256 + AH->vrev ) * 256 + 0;
if (AH->version < K_VERS_1_0 || AH->version > K_VERS_MAX)
die_horribly("Archiver: unsupported version (%d.%d) in file header\n", AH->vmaj, AH->vmin);
AH->intSize = (*AH->ReadBytePtr)(AH);
if (AH->intSize > 32)
die_horribly("Archiver: sanity check on integer size (%d) failes\n", AH->intSize);
if (AH->intSize > sizeof(int))
fprintf(stderr, "\nWARNING: Backup file was made on a machine with larger integers, "
"some operations may fail\n");
fmt = (*AH->ReadBytePtr)(AH);
if (AH->format != fmt)
die_horribly("Archiver: expected format (%d) differs from format found in file (%d)\n",
AH->format, fmt);
if (AH->version >= K_VERS_1_2)
{
AH->compression = (*AH->ReadBytePtr)(AH);
} else {
AH->compression = Z_DEFAULT_COMPRESSION;
}
#ifndef HAVE_ZLIB
fprintf(stderr, "%s: WARNING - archive is compressed - any data will not be available\n", progname);
#endif
}
static void _SortToc(ArchiveHandle* AH, TocSortCompareFn fn)
{
TocEntry** tea;
TocEntry* te;
int i;
/* Allocate an array for quicksort (TOC size + head & foot) */
tea = (TocEntry**)malloc(sizeof(TocEntry*) * (AH->tocCount + 2) );
/* Build array of toc entries, including header at start and end */
te = AH->toc;
for( i = 0 ; i <= AH->tocCount+1 ; i++) {
/* printf("%d: %x (%x, %x) - %d\n", i, te, te->prev, te->next, te->oidVal); */
tea[i] = te;
te = te->next;
}
/* Sort it, but ignore the header entries */
qsort(&(tea[1]), AH->tocCount, sizeof(TocEntry*), fn);
/* Rebuild list: this works because we have headers at each end */
for( i = 1 ; i <= AH->tocCount ; i++) {
tea[i]->next = tea[i+1];
tea[i]->prev = tea[i-1];
}
te = AH->toc;
for( i = 0 ; i <= AH->tocCount+1 ; i++) {
/* printf("%d: %x (%x, %x) - %d\n", i, te, te->prev, te->next, te->oidVal); */
te = te->next;
}
AH->toc->next = tea[1];
AH->toc->prev = tea[AH->tocCount];
}
static int _tocSortCompareByOIDNum(const void* p1, const void* p2)
{
TocEntry* te1 = *(TocEntry**)p1;
TocEntry* te2 = *(TocEntry**)p2;
int id1 = te1->oidVal;
int id2 = te2->oidVal;
/* printf("Comparing %d to %d\n", id1, id2); */
if (id1 < id2) {
return -1;
} else if (id1 > id2) {
return 1;
} else {
return _tocSortCompareByIDNum(p1, p2);
}
}
static int _tocSortCompareByIDNum(const void* p1, const void* p2)
{
TocEntry* te1 = *(TocEntry**)p1;
TocEntry* te2 = *(TocEntry**)p2;
int id1 = te1->id;
int id2 = te2->id;
/* printf("Comparing %d to %d\n", id1, id2); */
if (id1 < id2) {
return -1;
} else if (id1 > id2) {
return 1;
} else {
return 0;
}
}
/*-------------------------------------------------------------------------
*
* pg_backup_archiver.h
*
* Private interface to the pg_dump archiver routines.
* It is NOT intended that these routines be called by any
* dumper directly.
*
* See the headers to pg_restore for more details.
*
* Copyright (c) 2000, Philip Warner
* Rights are granted to use this software in any way so long
* as this notice is not removed.
*
* The author is not responsible for loss or damages that may
* result from its use.
*
*
* IDENTIFICATION
*
* Modifications - 28-Jun-2000 - pjw@rhyme.com.au
*
* Initial version.
*
*-------------------------------------------------------------------------
*/
#ifndef __PG_BACKUP_ARCHIVE__
#define __PG_BACKUP_ARCHIVE__
#include <stdio.h>
#ifdef HAVE_ZLIB
#include <zlib.h>
#define GZCLOSE(fh) gzclose(fh)
#define GZWRITE(p, s, n, fh) gzwrite(fh, p, n * s)
#define GZREAD(p, s, n, fh) gzread(fh, p, n * s)
#else
#define GZCLOSE(fh) fclose(fh)
#define GZWRITE(p, s, n, fh) fwrite(p, s, n, fh)
#define GZREAD(p, s, n, fh) fread(p, s, n, fh)
#define Z_DEFAULT_COMPRESSION -1
typedef struct _z_stream {
void *next_in;
void *next_out;
int avail_in;
int avail_out;
} z_stream;
typedef z_stream *z_streamp;
#endif
#include "pg_backup.h"
#define K_VERS_MAJOR 1
#define K_VERS_MINOR 2
#define K_VERS_REV 0
/* Some important version numbers (checked in code) */
#define K_VERS_1_0 (( (1 * 256 + 0) * 256 + 0) * 256 + 0)
#define K_VERS_1_2 (( (1 * 256 + 2) * 256 + 0) * 256 + 0)
#define K_VERS_MAX (( (1 * 256 + 2) * 256 + 255) * 256 + 0)
struct _archiveHandle;
struct _tocEntry;
struct _restoreList;
typedef void (*ClosePtr) (struct _archiveHandle* AH);
typedef void (*ArchiveEntryPtr) (struct _archiveHandle* AH, struct _tocEntry* te);
typedef void (*StartDataPtr) (struct _archiveHandle* AH, struct _tocEntry* te);
typedef int (*WriteDataPtr) (struct _archiveHandle* AH, const void* data, int dLen);
typedef void (*EndDataPtr) (struct _archiveHandle* AH, struct _tocEntry* te);
typedef int (*WriteBytePtr) (struct _archiveHandle* AH, const int i);
typedef int (*ReadBytePtr) (struct _archiveHandle* AH);
typedef int (*WriteBufPtr) (struct _archiveHandle* AH, const void* c, int len);
typedef int (*ReadBufPtr) (struct _archiveHandle* AH, void* buf, int len);
typedef void (*SaveArchivePtr) (struct _archiveHandle* AH);
typedef void (*WriteExtraTocPtr) (struct _archiveHandle* AH, struct _tocEntry* te);
typedef void (*ReadExtraTocPtr) (struct _archiveHandle* AH, struct _tocEntry* te);
typedef void (*PrintExtraTocPtr) (struct _archiveHandle* AH, struct _tocEntry* te);
typedef void (*PrintTocDataPtr) (struct _archiveHandle* AH, struct _tocEntry* te,
RestoreOptions *ropt);
typedef int (*TocSortCompareFn) (const void* te1, const void *te2);
typedef enum _archiveMode {
archModeWrite,
archModeRead
} ArchiveMode;
typedef struct _outputContext {
void *OF;
int gzOut;
} OutputContext;
typedef struct _archiveHandle {
char vmaj; /* Version of file */
char vmin;
char vrev;
int version; /* Conveniently formatted version */
int intSize; /* Size of an integer in the archive */
ArchiveFormat format; /* Archive format */
int readHeader; /* Used if file header has been read already */
ArchiveEntryPtr ArchiveEntryPtr; /* Called for each metadata object */
StartDataPtr StartDataPtr; /* Called when table data is about to be dumped */
WriteDataPtr WriteDataPtr; /* Called to send some table data to the archive */
EndDataPtr EndDataPtr; /* Called when table data dump is finished */
WriteBytePtr WriteBytePtr; /* Write a byte to output */
ReadBytePtr ReadBytePtr; /* Read a byte from the archive */
WriteBufPtr WriteBufPtr; /* Write a buffer of given length to the archive */
ReadBufPtr ReadBufPtr; /* Read a buffer of given length from the archive */
ClosePtr ClosePtr; /* Close the archive */
WriteExtraTocPtr WriteExtraTocPtr; /* Write extra TOC entry data associated with */
/* the current archive format */
ReadExtraTocPtr ReadExtraTocPtr; /* Read extra info associated with archive format */
PrintExtraTocPtr PrintExtraTocPtr; /* Extra TOC info for format */
PrintTocDataPtr PrintTocDataPtr; /* Print data for a given TOC entry */
int lastID; /* Last internal ID for a TOC entry */
char* fSpec; /* Archive File Spec */
FILE *FH; /* General purpose file handle */
void *OF; /* Output file */
int gzOut;
struct _tocEntry* toc; /* List of TOC entries */
int tocCount; /* Number of TOC entries */
struct _tocEntry* currToc; /* Used when dumping data */
char *currUser; /* Restore: current username in script */
int compression; /* Compression requested on open */
ArchiveMode mode; /* File mode - r or w */
void* formatData; /* Header data specific to file format */
} ArchiveHandle;
typedef struct _tocEntry {
struct _tocEntry* prev;
struct _tocEntry* next;
int id;
int hadDumper; /* Archiver was passed a dumper routine (used in restore) */
char* oid;
int oidVal;
char* name;
char* desc;
char* defn;
char* dropStmt;
char* owner;
char** depOid;
int printed; /* Indicates if entry defn has been dumped */
DataDumperPtr dataDumper; /* Routine to dump data for object */
void* dataDumperArg; /* Arg for above routine */
void* formatData; /* TOC Entry data specific to file format */
int _moved; /* Marker used when rearranging TOC */
} TocEntry;
extern void die_horribly(const char *fmt, ...);
extern void WriteTOC(ArchiveHandle* AH);
extern void ReadTOC(ArchiveHandle* AH);
extern void WriteHead(ArchiveHandle* AH);
extern void ReadHead(ArchiveHandle* AH);
extern void WriteToc(ArchiveHandle* AH);
extern void ReadToc(ArchiveHandle* AH);
extern void WriteDataChunks(ArchiveHandle* AH);
extern int TocIDRequired(ArchiveHandle* AH, int id, RestoreOptions *ropt);
/*
* Mandatory routines for each supported format
*/
extern int WriteInt(ArchiveHandle* AH, int i);
extern int ReadInt(ArchiveHandle* AH);
extern char* ReadStr(ArchiveHandle* AH);
extern int WriteStr(ArchiveHandle* AH, char* s);
extern void InitArchiveFmt_Custom(ArchiveHandle* AH);
extern void InitArchiveFmt_Files(ArchiveHandle* AH);
extern void InitArchiveFmt_PlainText(ArchiveHandle* AH);
extern OutputContext SetOutput(ArchiveHandle* AH, char *filename, int compression);
extern void ResetOutput(ArchiveHandle* AH, OutputContext savedContext);
int ahwrite(const void *ptr, size_t size, size_t nmemb, ArchiveHandle* AH);
int ahprintf(ArchiveHandle* AH, const char *fmt, ...);
#endif
/*-------------------------------------------------------------------------
*
* pg_backup_custom.c
*
* Implements the custom output format.
*
* See the headers to pg_restore for more details.
*
* Copyright (c) 2000, Philip Warner
* Rights are granted to use this software in any way so long
* as this notice is not removed.
*
* The author is not responsible for loss or damages that may
* result from its use.
*
*
* IDENTIFICATION
*
* Modifications - 28-Jun-2000 - pjw@rhyme.com.au
*
* Initial version.
*
*-------------------------------------------------------------------------
*/
#include <stdlib.h>
#include "pg_backup.h"
#include "pg_backup_archiver.h"
extern int errno;
static void _ArchiveEntry(ArchiveHandle* AH, TocEntry* te);
static void _StartData(ArchiveHandle* AH, TocEntry* te);
static int _WriteData(ArchiveHandle* AH, const void* data, int dLen);
static void _EndData(ArchiveHandle* AH, TocEntry* te);
static int _WriteByte(ArchiveHandle* AH, const int i);
static int _ReadByte(ArchiveHandle* );
static int _WriteBuf(ArchiveHandle* AH, const void* buf, int len);
static int _ReadBuf(ArchiveHandle* AH, void* buf, int len);
static void _CloseArchive(ArchiveHandle* AH);
static void _PrintTocData(ArchiveHandle* AH, TocEntry* te, RestoreOptions *ropt);
static void _WriteExtraToc(ArchiveHandle* AH, TocEntry* te);
static void _ReadExtraToc(ArchiveHandle* AH, TocEntry* te);
static void _PrintExtraToc(ArchiveHandle* AH, TocEntry* te);
static void _PrintData(ArchiveHandle* AH);
static void _skipData(ArchiveHandle* AH);
#define zlibOutSize 4096
#define zlibInSize 4096
typedef struct {
z_streamp zp;
char* zlibOut;
char* zlibIn;
int inSize;
int hasSeek;
int filePos;
int dataStart;
} lclContext;
typedef struct {
int dataPos;
int dataLen;
} lclTocEntry;
static int _getFilePos(ArchiveHandle* AH, lclContext* ctx);
static char* progname = "Archiver(custom)";
/*
* Handler functions.
*/
void InitArchiveFmt_Custom(ArchiveHandle* AH)
{
lclContext* ctx;
/* Assuming static functions, this can be copied for each format. */
AH->ArchiveEntryPtr = _ArchiveEntry;
AH->StartDataPtr = _StartData;
AH->WriteDataPtr = _WriteData;
AH->EndDataPtr = _EndData;
AH->WriteBytePtr = _WriteByte;
AH->ReadBytePtr = _ReadByte;
AH->WriteBufPtr = _WriteBuf;
AH->ReadBufPtr = _ReadBuf;
AH->ClosePtr = _CloseArchive;
AH->PrintTocDataPtr = _PrintTocData;
AH->ReadExtraTocPtr = _ReadExtraToc;
AH->WriteExtraTocPtr = _WriteExtraToc;
AH->PrintExtraTocPtr = _PrintExtraToc;
/*
* Set up some special context used in compressing data.
*/
ctx = (lclContext*)malloc(sizeof(lclContext));
if (ctx == NULL)
die_horribly("%s: Unable to allocate archive context",progname);
AH->formatData = (void*)ctx;
ctx->zp = (z_streamp)malloc(sizeof(z_stream));
if (ctx->zp == NULL)
die_horribly("%s: unable to allocate zlib stream archive context",progname);
ctx->zlibOut = (char*)malloc(zlibOutSize);
ctx->zlibIn = (char*)malloc(zlibInSize);
ctx->inSize = zlibInSize;
ctx->filePos = 0;
if (ctx->zlibOut == NULL || ctx->zlibIn == NULL)
die_horribly("%s: unable to allocate buffers in archive context",progname);
/*
* Now open the file
*/
if (AH->mode == archModeWrite) {
if (AH->fSpec && strcmp(AH->fSpec,"") != 0) {
AH->FH = fopen(AH->fSpec, PG_BINARY_W);
} else {
AH->FH = stdout;
}
if (!AH->FH)
die_horribly("%s: unable to open archive file %s",progname, AH->fSpec);
ctx->hasSeek = (fseek(AH->FH, 0, SEEK_CUR) == 0);
} else {
if (AH->fSpec && strcmp(AH->fSpec,"") != 0) {
AH->FH = fopen(AH->fSpec, PG_BINARY_R);
} else {
AH->FH = stdin;
}
if (!AH->FH)
die_horribly("%s: unable to open archive file %s",progname, AH->fSpec);
ctx->hasSeek = (fseek(AH->FH, 0, SEEK_CUR) == 0);
ReadHead(AH);
ReadToc(AH);
ctx->dataStart = _getFilePos(AH, ctx);
}
}
/*
* - Start a new TOC entry
*/
static void _ArchiveEntry(ArchiveHandle* AH, TocEntry* te)
{
lclTocEntry* ctx;
ctx = (lclTocEntry*)malloc(sizeof(lclTocEntry));
if (te->dataDumper) {
ctx->dataPos = -1;
} else {
ctx->dataPos = 0;
}
ctx->dataLen = 0;
te->formatData = (void*)ctx;
}
static void _WriteExtraToc(ArchiveHandle* AH, TocEntry* te)
{
lclTocEntry* ctx = (lclTocEntry*)te->formatData;
WriteInt(AH, ctx->dataPos);
WriteInt(AH, ctx->dataLen);
}
static void _ReadExtraToc(ArchiveHandle* AH, TocEntry* te)
{
lclTocEntry* ctx = (lclTocEntry*)te->formatData;
if (ctx == NULL) {
ctx = (lclTocEntry*)malloc(sizeof(lclTocEntry));
te->formatData = (void*)ctx;
}
ctx->dataPos = ReadInt( AH );
ctx->dataLen = ReadInt( AH );
}
static void _PrintExtraToc(ArchiveHandle* AH, TocEntry* te)
{
lclTocEntry* ctx = (lclTocEntry*)te->formatData;
ahprintf(AH, "-- Data Pos: %d (Length %d)\n", ctx->dataPos, ctx->dataLen);
}
static void _StartData(ArchiveHandle* AH, TocEntry* te)
{
lclContext* ctx = (lclContext*)AH->formatData;
z_streamp zp = ctx->zp;
lclTocEntry* tctx = (lclTocEntry*)te->formatData;
tctx->dataPos = _getFilePos(AH, ctx);
WriteInt(AH, te->id); /* For sanity check */
#ifdef HAVE_ZLIB
if (AH->compression < 0 || AH->compression > 9) {
AH->compression = Z_DEFAULT_COMPRESSION;
}
if (AH->compression != 0) {
zp->zalloc = Z_NULL;
zp->zfree = Z_NULL;
zp->opaque = Z_NULL;
if (deflateInit(zp, AH->compression) != Z_OK)
die_horribly("%s: could not initialize compression library - %s\n",progname, zp->msg);
}
#else
AH->compression = 0;
#endif
/* Just be paranoid - maybe End is called after Start, with no Write */
zp->next_out = ctx->zlibOut;
zp->avail_out = zlibOutSize;
}
static int _DoDeflate(ArchiveHandle* AH, lclContext* ctx, int flush)
{
z_streamp zp = ctx->zp;
#ifdef HAVE_ZLIB
char* out = ctx->zlibOut;
int res = Z_OK;
if (AH->compression != 0)
{
res = deflate(zp, flush);
if (res == Z_STREAM_ERROR)
die_horribly("%s: could not compress data - %s\n",progname, zp->msg);
if ( ( (flush == Z_FINISH) && (zp->avail_out < zlibOutSize) )
|| (zp->avail_out == 0)
|| (zp->avail_in != 0)
)
{
/*
* Extra paranoia: avoid zero-length chunks since a zero
* length chunk is the EOF marker. This should never happen
* but...
*/
if (zp->avail_out < zlibOutSize) {
/* printf("Wrote %d byte deflated chunk\n", zlibOutSize - zp->avail_out); */
WriteInt(AH, zlibOutSize - zp->avail_out);
fwrite(out, 1, zlibOutSize - zp->avail_out, AH->FH);
ctx->filePos += zlibOutSize - zp->avail_out;
}
zp->next_out = out;
zp->avail_out = zlibOutSize;
}
} else {
#endif
if (zp->avail_in > 0)
{
WriteInt(AH, zp->avail_in);
fwrite(zp->next_in, 1, zp->avail_in, AH->FH);
ctx->filePos += zp->avail_in;
zp->avail_in = 0;
} else {
#ifdef HAVE_ZLIB
if (flush == Z_FINISH)
res = Z_STREAM_END;
#endif
}
#ifdef HAVE_ZLIB
}
return res;
#else
return 1;
#endif
}
static int _WriteData(ArchiveHandle* AH, const void* data, int dLen)
{
lclContext* ctx = (lclContext*)AH->formatData;
z_streamp zp = ctx->zp;
zp->next_in = (void*)data;
zp->avail_in = dLen;
while (zp->avail_in != 0) {
/* printf("Deflating %d bytes\n", dLen); */
_DoDeflate(AH, ctx, 0);
}
return dLen;
}
static void _EndData(ArchiveHandle* AH, TocEntry* te)
{
lclContext* ctx = (lclContext*)AH->formatData;
lclTocEntry* tctx = (lclTocEntry*) te->formatData;
#ifdef HAVE_ZLIB
z_streamp zp = ctx->zp;
int res;
if (AH->compression != 0)
{
zp->next_in = NULL;
zp->avail_in = 0;
do {
/* printf("Ending data output\n"); */
res = _DoDeflate(AH, ctx, Z_FINISH);
} while (res != Z_STREAM_END);
if (deflateEnd(zp) != Z_OK)
die_horribly("%s: error closing compression stream - %s\n", progname, zp->msg);
}
#endif
/* Send the end marker */
WriteInt(AH, 0);
tctx->dataLen = _getFilePos(AH, ctx) - tctx->dataPos;
}
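/*
 * For clarity, the byte stream emitted by _StartData/_WriteData/_EndData for
 * a single TOC entry therefore looks like this (a sketch of the code above,
 * not a formal specification):
 *
 *   WriteInt(te->id)                sanity-check id, written by _StartData
 *   WriteInt(len), then len bytes   one (possibly deflated) chunk, repeated
 *   WriteInt(0)                     end-of-data marker, written by _EndData
 *
 * _PrintData and _skipData below consume exactly this sequence.
 */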
/*
* Print data for a given TOC entry
*/
static void _PrintTocData(ArchiveHandle* AH, TocEntry* te, RestoreOptions *ropt)
{
lclContext* ctx = (lclContext*)AH->formatData;
int id;
lclTocEntry* tctx = (lclTocEntry*) te->formatData;
if (tctx->dataPos == 0)
return;
if (!ctx->hasSeek || tctx->dataPos < 0) {
id = ReadInt(AH);
while (id != te->id) {
if (TocIDRequired(AH, id, ropt) & 2)
die_horribly("%s: Dumping a specific TOC data block out of order is not supported"
" without on this input stream (fseek required)\n", progname);
_skipData(AH);
id = ReadInt(AH);
}
} else {
if (fseek(AH->FH, tctx->dataPos, SEEK_SET) != 0)
die_horribly("%s: error %d in file seek\n",progname, errno);
id = ReadInt(AH);
}
if (id != te->id)
die_horribly("%s: Found unexpected block ID (%d) when reading data - expected %d\n",
progname, id, te->id);
ahprintf(AH, "--\n-- Data for TOC Entry ID %d (OID %s) %s %s\n--\n\n",
te->id, te->oid, te->desc, te->name);
_PrintData(AH);
ahprintf(AH, "\n\n");
}
/*
* Print data from current file position.
*/
static void _PrintData(ArchiveHandle* AH)
{
lclContext* ctx = (lclContext*)AH->formatData;
z_streamp zp = ctx->zp;
int blkLen;
char* in = ctx->zlibIn;
int cnt;
#ifdef HAVE_ZLIB
int res;
char* out = ctx->zlibOut;
res = Z_OK;
if (AH->compression != 0) {
zp->zalloc = Z_NULL;
zp->zfree = Z_NULL;
zp->opaque = Z_NULL;
if (inflateInit(zp) != Z_OK)
die_horribly("%s: could not initialize compression library - %s\n", progname, zp->msg);
}
#endif
blkLen = ReadInt(AH);
while (blkLen != 0) {
if (blkLen > ctx->inSize) {
free(ctx->zlibIn);
ctx->zlibIn = NULL;
ctx->zlibIn = (char*)malloc(blkLen);
if (!ctx->zlibIn)
die_horribly("%s: failed to allocate decompression buffer\n", progname);
ctx->inSize = blkLen;
in = ctx->zlibIn;
}
cnt = fread(in, 1, blkLen, AH->FH);
if (cnt != blkLen)
die_horribly("%s: could not read data block - expected %d, got %d\n", progname, blkLen, cnt);
ctx->filePos += blkLen;
zp->next_in = in;
zp->avail_in = blkLen;
#ifdef HAVE_ZLIB
if (AH->compression != 0) {
while (zp->avail_in != 0) {
zp->next_out = out;
zp->avail_out = zlibOutSize;
res = inflate(zp, 0);
if (res != Z_OK && res != Z_STREAM_END)
die_horribly("%s: unable to uncompress data - %s\n", progname, zp->msg);
out[zlibOutSize - zp->avail_out] = '\0';
ahwrite(out, 1, zlibOutSize - zp->avail_out, AH);
}
} else {
#endif
ahwrite(in, 1, zp->avail_in, AH);
zp->avail_in = 0;
#ifdef HAVE_ZLIB
}
#endif
blkLen = ReadInt(AH);
}
#ifdef HAVE_ZLIB
if (AH->compression != 0)
{
zp->next_in = NULL;
zp->avail_in = 0;
while (res != Z_STREAM_END) {
zp->next_out = out;
zp->avail_out = zlibOutSize;
res = inflate(zp, 0);
if (res != Z_OK && res != Z_STREAM_END)
die_horribly("%s: unable to uncompress data - %s\n", progname, zp->msg);
out[zlibOutSize - zp->avail_out] = '\0';
ahwrite(out, 1, zlibOutSize - zp->avail_out, AH);
}
}
#endif
}
/*
* Skip data from current file position.
*/
static void _skipData(ArchiveHandle* AH)
{
lclContext* ctx = (lclContext*)AH->formatData;
int blkLen;
char* in = ctx->zlibIn;
int cnt;
blkLen = ReadInt(AH);
while (blkLen != 0) {
if (blkLen > ctx->inSize) {
free(ctx->zlibIn);
ctx->zlibIn = (char*)malloc(blkLen);
ctx->inSize = blkLen;
in = ctx->zlibIn;
}
cnt = fread(in, 1, blkLen, AH->FH);
if (cnt != blkLen)
die_horribly("%s: could not read data block - expected %d, got %d\n", progname, blkLen, cnt);
ctx->filePos += blkLen;
blkLen = ReadInt(AH);
}
}
static int _WriteByte(ArchiveHandle* AH, const int i)
{
lclContext* ctx = (lclContext*)AH->formatData;
int res;
res = fputc(i, AH->FH);
if (res != EOF) {
ctx->filePos += 1;
}
return res;
}
static int _ReadByte(ArchiveHandle* AH)
{
lclContext* ctx = (lclContext*)AH->formatData;
int res;
res = fgetc(AH->FH);
if (res != EOF) {
ctx->filePos += 1;
}
return res;
}
static int _WriteBuf(ArchiveHandle* AH, const void* buf, int len)
{
lclContext* ctx = (lclContext*)AH->formatData;
int res;
res = fwrite(buf, 1, len, AH->FH);
ctx->filePos += res;
return res;
}
static int _ReadBuf(ArchiveHandle* AH, void* buf, int len)
{
lclContext* ctx = (lclContext*)AH->formatData;
int res;
res = fread(buf, 1, len, AH->FH);
ctx->filePos += res;
return res;
}
static void _CloseArchive(ArchiveHandle* AH)
{
lclContext* ctx = (lclContext*)AH->formatData;
int tpos;
if (AH->mode == archModeWrite) {
WriteHead(AH);
tpos = ftell(AH->FH);
WriteToc(AH);
ctx->dataStart = _getFilePos(AH, ctx);
WriteDataChunks(AH);
/* This is not an essential operation - it is really only
* needed if we expect to be doing seeks to read the data back
* - it may be ok to just use the existing self-consistent block
* formatting.
*/
if (ctx->hasSeek) {
fseek(AH->FH, tpos, SEEK_SET);
WriteToc(AH);
}
}
fclose(AH->FH);
AH->FH = NULL;
}
static int _getFilePos(ArchiveHandle* AH, lclContext* ctx)
{
int pos;
if (ctx->hasSeek) {
pos = ftell(AH->FH);
if (pos != ctx->filePos) {
fprintf(stderr, "Warning: ftell mismatch with filePos\n");
}
} else {
pos = ctx->filePos;
}
return pos;
}
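/*
 * Putting _CloseArchive and the data routines together, a custom-format
 * archive produced by this module is laid out roughly as follows (a sketch
 * pieced together from the code above, not a formal file-format spec):
 *
 *   WriteHead()        archive header
 *   WriteToc()         TOC; rewritten in place at close time when the file
 *                      is seekable, so that dataPos/dataLen are meaningful
 *   WriteDataChunks()  for each dumped table: int id, then the
 *                      length-prefixed chunks described above, ending with
 *                      a zero length
 */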
/*-------------------------------------------------------------------------
*
* pg_backup_files.c
*
* This file is copied from the 'custom' format file, but dumps data into
* separate files, and the TOC into the 'main' file.
*
* IT IS FOR DEMONSTRATION PURPOSES ONLY.
*
* (and could probably be used as a basis for writing a tar file)
*
* See the headers to pg_restore for more details.
*
* Copyright (c) 2000, Philip Warner
* Rights are granted to use this software in any way so long
* as this notice is not removed.
*
* The author is not responsible for loss or damages that may
* result from its use.
*
*
* IDENTIFICATION
*
* Modifications - 28-Jun-2000 - pjw@rhyme.com.au
*
* Initial version.
*
*-------------------------------------------------------------------------
*/
#include <stdlib.h>
#include <string.h>
#include "pg_backup.h"
#include "pg_backup_archiver.h"
static void _ArchiveEntry(ArchiveHandle* AH, TocEntry* te);
static void _StartData(ArchiveHandle* AH, TocEntry* te);
static int _WriteData(ArchiveHandle* AH, const void* data, int dLen);
static void _EndData(ArchiveHandle* AH, TocEntry* te);
static int _WriteByte(ArchiveHandle* AH, const int i);
static int _ReadByte(ArchiveHandle* );
static int _WriteBuf(ArchiveHandle* AH, const void* buf, int len);
static int _ReadBuf(ArchiveHandle* AH, void* buf, int len);
static void _CloseArchive(ArchiveHandle* AH);
static void _PrintTocData(ArchiveHandle* AH, TocEntry* te, RestoreOptions *ropt);
static void _WriteExtraToc(ArchiveHandle* AH, TocEntry* te);
static void _ReadExtraToc(ArchiveHandle* AH, TocEntry* te);
static void _PrintExtraToc(ArchiveHandle* AH, TocEntry* te);
typedef struct {
int hasSeek;
int filePos;
} lclContext;
typedef struct {
#ifdef HAVE_ZLIB
gzFile *FH;
#else
FILE *FH;
#endif
char *filename;
} lclTocEntry;
/*
* Initializer
*/
void InitArchiveFmt_Files(ArchiveHandle* AH)
{
lclContext* ctx;
/* Assuming static functions, this can be copied for each format. */
AH->ArchiveEntryPtr = _ArchiveEntry;
AH->StartDataPtr = _StartData;
AH->WriteDataPtr = _WriteData;
AH->EndDataPtr = _EndData;
AH->WriteBytePtr = _WriteByte;
AH->ReadBytePtr = _ReadByte;
AH->WriteBufPtr = _WriteBuf;
AH->ReadBufPtr = _ReadBuf;
AH->ClosePtr = _CloseArchive;
AH->PrintTocDataPtr = _PrintTocData;
AH->ReadExtraTocPtr = _ReadExtraToc;
AH->WriteExtraTocPtr = _WriteExtraToc;
AH->PrintExtraTocPtr = _PrintExtraToc;
/*
* Set up some special context used in compressing data.
*/
ctx = (lclContext*)malloc(sizeof(lclContext));
AH->formatData = (void*)ctx;
ctx->filePos = 0;
/*
* Now open the TOC file
*/
if (AH->mode == archModeWrite) {
if (AH->fSpec && strcmp(AH->fSpec,"") != 0) {
AH->FH = fopen(AH->fSpec, PG_BINARY_W);
} else {
AH->FH = stdout;
}
ctx->hasSeek = (fseek(AH->FH, 0, SEEK_CUR) == 0);
if (AH->compression < 0 || AH->compression > 9) {
AH->compression = Z_DEFAULT_COMPRESSION;
}
} else {
if (AH->fSpec && strcmp(AH->fSpec,"") != 0) {
AH->FH = fopen(AH->fSpec, PG_BINARY_R);
} else {
AH->FH = stdin;
}
ctx->hasSeek = (fseek(AH->FH, 0, SEEK_CUR) == 0);
ReadHead(AH);
ReadToc(AH);
fclose(AH->FH); /* Nothing else in the file... */
}
}
/*
* - Start a new TOC entry
* Setup the output file name.
*/
static void _ArchiveEntry(ArchiveHandle* AH, TocEntry* te)
{
lclTocEntry* ctx;
char fn[1024];
ctx = (lclTocEntry*)malloc(sizeof(lclTocEntry));
if (te->dataDumper) {
#ifdef HAVE_ZLIB
if (AH->compression == 0) {
sprintf(fn, "%d.dat", te->id);
} else {
sprintf(fn, "%d.dat.gz", te->id);
}
#else
sprintf(fn, "%d.dat", te->id);
#endif
ctx->filename = strdup(fn);
} else {
ctx->filename = NULL;
ctx->FH = NULL;
}
te->formatData = (void*)ctx;
}
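/*
 * Example of the naming scheme above: a table whose TOC entry id is 6 has
 * its COPY data written to "6.dat" (or "6.dat.gz" when compression is in
 * use); _WriteExtraToc below records that filename in the TOC.
 */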
static void _WriteExtraToc(ArchiveHandle* AH, TocEntry* te)
{
lclTocEntry* ctx = (lclTocEntry*)te->formatData;
if (ctx->filename) {
WriteStr(AH, ctx->filename);
} else {
WriteStr(AH, "");
}
}
static void _ReadExtraToc(ArchiveHandle* AH, TocEntry* te)
{
lclTocEntry* ctx = (lclTocEntry*)te->formatData;
if (ctx == NULL) {
ctx = (lclTocEntry*)malloc(sizeof(lclTocEntry));
te->formatData = (void*)ctx;
}
ctx->filename = ReadStr(AH);
if (strlen(ctx->filename) == 0) {
free(ctx->filename);
ctx->filename = NULL;
}
ctx->FH = NULL;
}
static void _PrintExtraToc(ArchiveHandle* AH, TocEntry* te)
{
lclTocEntry* ctx = (lclTocEntry*)te->formatData;
ahprintf(AH, "-- File: %s\n", ctx->filename);
}
static void _StartData(ArchiveHandle* AH, TocEntry* te)
{
lclTocEntry* tctx = (lclTocEntry*)te->formatData;
char fmode[10];
sprintf(fmode, "wb%d", AH->compression);
#ifdef HAVE_ZLIB
tctx->FH = gzopen(tctx->filename, fmode);
#else
tctx->FH = fopen(tctx->filename, PG_BINARY_W);
#endif
}
static int _WriteData(ArchiveHandle* AH, const void* data, int dLen)
{
lclTocEntry* tctx = (lclTocEntry*)AH->currToc->formatData;
GZWRITE((void*)data, 1, dLen, tctx->FH);
return dLen;
}
static void _EndData(ArchiveHandle* AH, TocEntry* te)
{
lclTocEntry* tctx = (lclTocEntry*) te->formatData;
/* Close the file */
GZCLOSE(tctx->FH);
tctx->FH = NULL;
}
/*
* Print data for a given TOC entry
*/
static void _PrintTocData(ArchiveHandle* AH, TocEntry* te, RestoreOptions *ropt)
{
lclTocEntry* tctx = (lclTocEntry*) te->formatData;
char buf[4096];
int cnt;
if (!tctx->filename)
return;
#ifdef HAVE_ZLIB
AH->FH = gzopen(tctx->filename,"rb");
#else
AH->FH = fopen(tctx->filename,PG_BINARY_R);
#endif
ahprintf(AH, "--\n-- Data for TOC Entry ID %d (OID %s) %s %s\n--\n\n",
te->id, te->oid, te->desc, te->name);
while ( (cnt = GZREAD(buf, 1, 4096, AH->FH)) > 0) {
ahwrite(buf, 1, cnt, AH);
}
GZCLOSE(AH->FH);
ahprintf(AH, "\n\n");
}
static int _WriteByte(ArchiveHandle* AH, const int i)
{
lclContext* ctx = (lclContext*)AH->formatData;
int res;
res = fputc(i, AH->FH);
if (res != EOF) {
ctx->filePos += 1;
}
return res;
}
static int _ReadByte(ArchiveHandle* AH)
{
lclContext* ctx = (lclContext*)AH->formatData;
int res;
res = fgetc(AH->FH);
if (res != EOF) {
ctx->filePos += 1;
}
return res;
}
static int _WriteBuf(ArchiveHandle* AH, const void* buf, int len)
{
lclContext* ctx = (lclContext*)AH->formatData;
int res;
res = fwrite(buf, 1, len, AH->FH);
ctx->filePos += res;
return res;
}
static int _ReadBuf(ArchiveHandle* AH, void* buf, int len)
{
lclContext* ctx = (lclContext*)AH->formatData;
int res;
res = fread(buf, 1, len, AH->FH);
ctx->filePos += res;
return res;
}
static void _CloseArchive(ArchiveHandle* AH)
{
if (AH->mode == archModeWrite) {
WriteHead(AH);
WriteToc(AH);
fclose(AH->FH);
WriteDataChunks(AH);
}
AH->FH = NULL;
}
/*-------------------------------------------------------------------------
*
* pg_backup_plain_text.c
*
* This file is copied from the 'custom' format file, but dumps data
* directly to a text file, and the TOC into the 'main' file.
*
* See the headers to pg_restore for more details.
*
* Copyright (c) 2000, Philip Warner
* Rights are granted to use this software in any way so long
* as this notice is not removed.
*
* The author is not responsible for loss or damages that may
* result from its use.
*
*
* IDENTIFICATION
*
* Modifications - 01-Jul-2000 - pjw@rhyme.com.au
*
* Initial version.
*
*-------------------------------------------------------------------------
*/
#include <stdlib.h>
#include <string.h>
#include <unistd.h> /* for dup */
#include "pg_backup.h"
#include "pg_backup_archiver.h"
static void _ArchiveEntry(ArchiveHandle* AH, TocEntry* te);
static void _StartData(ArchiveHandle* AH, TocEntry* te);
static int _WriteData(ArchiveHandle* AH, const void* data, int dLen);
static void _EndData(ArchiveHandle* AH, TocEntry* te);
static int _WriteByte(ArchiveHandle* AH, const int i);
static int _WriteBuf(ArchiveHandle* AH, const void* buf, int len);
static void _CloseArchive(ArchiveHandle* AH);
static void _PrintTocData(ArchiveHandle* AH, TocEntry* te, RestoreOptions *ropt);
/*
* Initializer
*/
void InitArchiveFmt_PlainText(ArchiveHandle* AH)
{
/* Assuming static functions, this can be copied for each format. */
AH->ArchiveEntryPtr = _ArchiveEntry;
AH->StartDataPtr = _StartData;
AH->WriteDataPtr = _WriteData;
AH->EndDataPtr = _EndData;
AH->WriteBytePtr = _WriteByte;
AH->WriteBufPtr = _WriteBuf;
AH->ClosePtr = _CloseArchive;
AH->PrintTocDataPtr = _PrintTocData;
/*
* Now prevent reading...
*/
if (AH->mode == archModeRead)
die_horribly("%s: This format can not be read\n");
}
/*
* - Start a new TOC entry
*/
static void _ArchiveEntry(ArchiveHandle* AH, TocEntry* te)
{
/* Don't need to do anything */
}
static void _StartData(ArchiveHandle* AH, TocEntry* te)
{
ahprintf(AH, "--\n-- Data for TOC Entry ID %d (OID %s) %s %s\n--\n\n",
te->id, te->oid, te->desc, te->name);
}
static int _WriteData(ArchiveHandle* AH, const void* data, int dLen)
{
ahwrite(data, 1, dLen, AH);
return dLen;
}
static void _EndData(ArchiveHandle* AH, TocEntry* te)
{
ahprintf(AH, "\n\n");
}
/*
* Print data for a given TOC entry
*/
static void _PrintTocData(ArchiveHandle* AH, TocEntry* te, RestoreOptions *ropt)
{
if (te->dataDumper)
(*te->dataDumper)((Archive*)AH, te->oid, te->dataDumperArg);
}
static int _WriteByte(ArchiveHandle* AH, const int i)
{
/* Don't do anything */
return 0;
}
static int _WriteBuf(ArchiveHandle* AH, const void* buf, int len)
{
/* Don't do anything */
return len;
}
static void _CloseArchive(ArchiveHandle* AH)
{
/* Nothing to do */
}
......@@ -22,7 +22,7 @@
*
*
* IDENTIFICATION
* $Header: /cvsroot/pgsql/src/bin/pg_dump/pg_dump.c,v 1.153 2000/07/02 15:21:05 petere Exp $
* $Header: /cvsroot/pgsql/src/bin/pg_dump/pg_dump.c,v 1.154 2000/07/04 14:25:28 momjian Exp $
*
* Modifications - 6/10/96 - dave@bensoft.com - version 1.13.dhb
*
......@@ -49,7 +49,19 @@
*
* Modifications - 1/26/98 - pjlobo@euitt.upm.es
* - Added support for password authentication
*-------------------------------------------------------------------------
*
* Modifications - 28-Jun-2000 - Philip Warner pjw@rhyme.com.au
* - Used custom IO routines to allow for more
* output formats and simple rearrangement of order.
* - Discouraged operations more appropriate to the 'restore'
* operation (e.g. -c "clean schema" - the drop commands are now
* always dumped, but pg_restore can be told not to output them).
* - Added RI warnings to the 'as insert strings' output mode
* - Added a small number of comments
* - Added a -Z option for compression level on compressed formats
* - Restored '-f' in usage output
*
*-------------------------------------------------------------------------
*/
#include <unistd.h> /* for getopt() */
......@@ -77,25 +89,25 @@
#endif
#include "pg_dump.h"
#include "pg_backup.h"
static void dumpComment(FILE *outfile, const char *target, const char *oid);
static void dumpSequence(FILE *fout, TableInfo tbinfo);
static void dumpACL(FILE *fout, TableInfo tbinfo);
static void dumpTriggers(FILE *fout, const char *tablename,
static void dumpComment(Archive *outfile, const char *target, const char *oid);
static void dumpSequence(Archive *fout, TableInfo tbinfo);
static void dumpACL(Archive *fout, TableInfo tbinfo);
static void dumpTriggers(Archive *fout, const char *tablename,
TableInfo *tblinfo, int numTables);
static void dumpRules(FILE *fout, const char *tablename,
static void dumpRules(Archive *fout, const char *tablename,
TableInfo *tblinfo, int numTables);
static char *checkForQuote(const char *s);
static void clearTableInfo(TableInfo *, int);
static void dumpOneFunc(FILE *fout, FuncInfo *finfo, int i,
static void dumpOneFunc(Archive *fout, FuncInfo *finfo, int i,
TypeInfo *tinfo, int numTypes);
static int findLastBuiltinOid(void);
static bool isViewRule(char *relname);
static void setMaxOid(FILE *fout);
static void setMaxOid(Archive *fout);
static void AddAcl(char *aclbuf, const char *keyword);
static char *GetPrivileges(const char *s);
static void becomeUser(FILE *fout, const char *username);
extern char *optarg;
extern int optind,
......@@ -105,7 +117,7 @@ extern int optind,
bool g_verbose; /* User wants verbose narration of our
* activities. */
int g_last_builtin_oid; /* value of the last builtin oid */
FILE *g_fout; /* the script file */
Archive *g_fout; /* the output archive */
PGconn *g_conn; /* the database connection */
bool force_quotes; /* User wants to suppress double-quotes */
......@@ -114,7 +126,6 @@ bool attrNames; /* put attr names into insert strings */
bool schemaOnly;
bool dataOnly;
bool aclsSkip;
bool dropSchema;
char g_opaque_type[10]; /* name for the opaque type */
......@@ -123,6 +134,12 @@ char g_comment_start[10];
char g_comment_end[10];
typedef struct _dumpContext {
TableInfo *tblinfo;
int tblidx;
bool oids;
} DumpContext;
static void
help(const char *progname)
{
......@@ -133,39 +150,45 @@ help(const char *progname)
#ifdef HAVE_GETOPT_LONG
puts(
" -a, --data-only dump out only the data, not the schema\n"
" -c, --clean clean (drop) schema prior to create\n"
" -d, --inserts dump data as INSERT, rather than COPY, commands\n"
" -D, --attribute-inserts dump data as INSERT commands with attribute names\n"
" -h, --host <hostname> server host name\n"
" -i, --ignore-version proceed when database version != pg_dump version\n"
" -n, --no-quotes suppress most quotes around identifiers\n"
" -N, --quotes enable most quotes around identifiers\n"
" -o, --oids dump object ids (oids)\n"
" -p, --port <port> server port number\n"
" -s, --schema-only dump out only the schema, no data\n"
" -t, --table <table> dump for this table only\n"
" -u, --password use password authentication\n"
" -v, --verbose verbose\n"
" -x, --no-acl do not dump ACL's (grant/revoke)\n"
" -a, --data-only dump out only the data, not the schema\n"
" -c, --clean clean (drop) schema prior to create\n"
" -d, --inserts dump data as INSERT, rather than COPY, commands\n"
" -D, --attribute-inserts dump data as INSERT commands with attribute names\n"
" -f, --file specify output file name\n"
" -F, --format {c|f|p} output file format (custom, files, plain text)\n"
" -h, --host <hostname> server host name\n"
" -i, --ignore-version proceed when database version != pg_dump version\n"
" -n, --no-quotes suppress most quotes around identifiers\n"
" -N, --quotes enable most quotes around identifiers\n"
" -o, --oids dump object ids (oids)\n"
" -p, --port <port> server port number\n"
" -s, --schema-only dump out only the schema, no data\n"
" -t, --table <table> dump for this table only\n"
" -u, --password use password authentication\n"
" -v, --verbose verbose\n"
" -x, --no-acl do not dump ACL's (grant/revoke)\n"
" -Z, --compress {0-9} compression level for compressed formats\n"
);
#else
puts(
" -a dump out only the data, no schema\n"
" -c clean (drop) schema prior to create\n"
" -d dump data as INSERT, rather than COPY, commands\n"
" -D dump data as INSERT commands with attribute names\n"
" -h <hostname> server host name\n"
" -i proceed when database version != pg_dump version\n"
" -n suppress most quotes around identifiers\n"
" -N enable most quotes around identifiers\n"
" -o dump object ids (oids)\n"
" -p <port> server port number\n"
" -s dump out only the schema, no data\n"
" -t <table> dump for this table only\n"
" -u use password authentication\n"
" -v verbose\n"
" -x do not dump ACL's (grant/revoke)\n"
" -a dump out only the data, no schema\n"
" -c clean (drop) schema prior to create\n"
" -d dump data as INSERT, rather than COPY, commands\n"
" -D dump data as INSERT commands with attribute names\n"
" -f specify output file name\n"
" -F {c|f|p} output file format (custom, files, plain text)\n"
" -h <hostname> server host name\n"
" -i proceed when database version != pg_dump version\n"
" -n suppress most quotes around identifiers\n"
" -N enable most quotes around identifiers\n"
" -o dump object ids (oids)\n"
" -p <port> server port number\n"
" -s dump out only the schema, no data\n"
" -t <table> dump for this table only\n"
" -u use password authentication\n"
" -v verbose\n"
" -x do not dump ACL's (grant/revoke)\n"
" -Z {0-9} compression level for compressed formats\n"
);
#endif
puts("If no database name is not supplied, then the PGDATABASE environment\nvariable value is used.\n");
......@@ -212,7 +235,8 @@ isViewRule(char *relname)
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
fprintf(stderr, "isViewRule(): SELECT failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "isViewRule(): SELECT failed. Explanation from backend: '%s'.\n",
PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
......@@ -224,10 +248,17 @@ isViewRule(char *relname)
#define COPYBUFSIZ 8192
static void
dumpClasses_nodumpData(FILE *fout, const char *classname, const bool oids)
/*
* Dump a table's contents for loading using the COPY command
* - this routine is called by the Archiver when it wants the table
* to be dumped.
*/
static int
dumpClasses_nodumpData(Archive *fout, char* oid, void *dctxv)
{
const DumpContext *dctx = (DumpContext*)dctxv;
const char *classname = dctx->tblinfo[dctx->tblidx].relname;
const bool oids = dctx->oids;
PGresult *res;
char query[255];
......@@ -237,14 +268,14 @@ dumpClasses_nodumpData(FILE *fout, const char *classname, const bool oids)
if (oids == true)
{
fprintf(fout, "COPY %s WITH OIDS FROM stdin;\n",
archprintf(fout, "COPY %s WITH OIDS FROM stdin;\n",
fmtId(classname, force_quotes));
sprintf(query, "COPY %s WITH OIDS TO stdout;\n",
fmtId(classname, force_quotes));
}
else
{
fprintf(fout, "COPY %s FROM stdin;\n", fmtId(classname, force_quotes));
archprintf(fout, "COPY %s FROM stdin;\n", fmtId(classname, force_quotes));
sprintf(query, "COPY %s TO stdout;\n", fmtId(classname, force_quotes));
}
res = PQexec(g_conn, query);
......@@ -283,21 +314,21 @@ dumpClasses_nodumpData(FILE *fout, const char *classname, const bool oids)
}
else
{
fputs(copybuf, fout);
archputs(copybuf, fout);
switch (ret)
{
case EOF:
copydone = true;
/* FALLTHROUGH */
case 0:
fputc('\n', fout);
archputc('\n', fout);
break;
case 1:
break;
}
}
}
fprintf(fout, "\\.\n");
archprintf(fout, "\\.\n");
}
ret = PQendcopy(g_conn);
if (ret != 0)
......@@ -312,13 +343,17 @@ dumpClasses_nodumpData(FILE *fout, const char *classname, const bool oids)
exit_nicely(g_conn);
}
}
return 1;
}
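/*
 * Note on the callback protocol (a sketch based on the archiver code above):
 * dumpClasses no longer writes COPY data directly; it registers this routine
 * and a DumpContext via ArchiveEntry(). The archiver later invokes it as
 *
 *     (*te->dataDumper)((Archive *) AH, te->oid, te->dataDumperArg);
 *
 * - from the plain-text format's _PrintTocData, as shown above, and
 * presumably from WriteDataChunks for the other formats - so output produced
 * through archprintf/archputs is routed to whichever format is selected.
 */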
static void
dumpClasses_dumpData(FILE *fout, const char *classname)
static int
dumpClasses_dumpData(Archive *fout, char* oid, void *dctxv)
{
const DumpContext *dctx = (DumpContext*)dctxv;
const char *classname = dctx->tblinfo[dctx->tblidx].relname;
PGresult *res;
PQExpBuffer q = createPQExpBuffer();
int tuple;
......@@ -330,12 +365,13 @@ dumpClasses_dumpData(FILE *fout, const char *classname)
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
fprintf(stderr, "dumpClasses(): command failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "dumpClasses(): command failed. Explanation from backend: '%s'.\n",
PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
for (tuple = 0; tuple < PQntuples(res); tuple++)
{
fprintf(fout, "INSERT INTO %s ", fmtId(classname, force_quotes));
archprintf(fout, "INSERT INTO %s ", fmtId(classname, force_quotes));
if (attrNames == true)
{
resetPQExpBuffer(q);
......@@ -347,16 +383,16 @@ dumpClasses_dumpData(FILE *fout, const char *classname)
appendPQExpBuffer(q, fmtId(PQfname(res, field), force_quotes));
}
appendPQExpBuffer(q, ") ");
fprintf(fout, "%s", q->data);
archprintf(fout, "%s", q->data);
}
fprintf(fout, "VALUES (");
archprintf(fout, "VALUES (");
for (field = 0; field < PQnfields(res); field++)
{
if (field > 0)
fprintf(fout, ",");
archprintf(fout, ",");
if (PQgetisnull(res, tuple, field))
{
fprintf(fout, "NULL");
archprintf(fout, "NULL");
continue;
}
switch (PQftype(res, field))
......@@ -367,7 +403,7 @@ dumpClasses_dumpData(FILE *fout, const char *classname)
case FLOAT4OID:
case FLOAT8OID:/* float types */
/* These types are printed without quotes */
fprintf(fout, "%s",
archprintf(fout, "%s",
PQgetvalue(res, tuple, field));
break;
default:
......@@ -378,7 +414,7 @@ dumpClasses_dumpData(FILE *fout, const char *classname)
* Quote mark ' goes to '' per SQL standard, other
* stuff goes to \ sequences.
*/
putc('\'', fout);
archputc('\'', fout);
expsrc = PQgetvalue(res, tuple, field);
while (*expsrc)
{
......@@ -386,42 +422,43 @@ dumpClasses_dumpData(FILE *fout, const char *classname)
if (ch == '\\' || ch == '\'')
{
putc(ch, fout); /* double these */
putc(ch, fout);
archputc(ch, fout); /* double these */
archputc(ch, fout);
}
else if (ch < '\040')
{
/* generate octal escape for control chars */
putc('\\', fout);
putc(((ch >> 6) & 3) + '0', fout);
putc(((ch >> 3) & 7) + '0', fout);
putc((ch & 7) + '0', fout);
archputc('\\', fout);
archputc(((ch >> 6) & 3) + '0', fout);
archputc(((ch >> 3) & 7) + '0', fout);
archputc((ch & 7) + '0', fout);
}
else
putc(ch, fout);
archputc(ch, fout);
}
putc('\'', fout);
archputc('\'', fout);
break;
}
}
fprintf(fout, ");\n");
archprintf(fout, ");\n");
}
PQclear(res);
return 1;
}
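/*
 * Worked example of the escaping rules above (hypothetical value): a field
 * containing  O'Hara<tab>x  is emitted as  'O''Hara\011x'  - the quote is
 * doubled per the SQL standard and the control character (tab, octal 011)
 * becomes a backslash escape.
 */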
/*
* DumpClasses -
* dump the contents of all the classes.
*/
static void
dumpClasses(const TableInfo *tblinfo, const int numTables, FILE *fout,
dumpClasses(const TableInfo *tblinfo, const int numTables, Archive *fout,
const char *onlytable, const bool oids)
{
int i;
char *all_only;
int i;
char *all_only;
DataDumperPtr dumpFn;
DumpContext *dumpCtx;
if (onlytable == NULL)
all_only = "all";
......@@ -447,7 +484,7 @@ dumpClasses(const TableInfo *tblinfo, const int numTables, FILE *fout,
if (g_verbose)
fprintf(stderr, "%s dumping out schema of sequence '%s' %s\n",
g_comment_start, tblinfo[i].relname, g_comment_end);
becomeUser(fout, tblinfo[i].usename);
/* becomeUser(fout, tblinfo[i].usename); */
dumpSequence(fout, tblinfo[i]);
}
}
......@@ -470,17 +507,27 @@ dumpClasses(const TableInfo *tblinfo, const int numTables, FILE *fout,
fprintf(stderr, "%s dumping out the contents of Table '%s' %s\n",
g_comment_start, classname, g_comment_end);
becomeUser(fout, tblinfo[i].usename);
/* becomeUser(fout, tblinfo[i].usename); */
dumpCtx = (DumpContext*)malloc(sizeof(DumpContext));
dumpCtx->tblinfo = (TableInfo*)tblinfo;
dumpCtx->tblidx = i;
dumpCtx->oids = oids;
if (!dumpData)
dumpClasses_nodumpData(fout, classname, oids);
dumpFn = dumpClasses_nodumpData;
/* dumpClasses_nodumpData(fout, classname, oids); */
else
dumpClasses_dumpData(fout, classname);
dumpFn = dumpClasses_dumpData;
/* dumpClasses_dumpData(fout, classname); */
ArchiveEntry(fout, tblinfo[i].oid, fmtId(tblinfo[i].relname, false),
"TABLE DATA", NULL, "", "", tblinfo[i].usename,
dumpFn, dumpCtx);
}
}
}
static void
prompt_for_password(char *username, char *password)
{
......@@ -579,6 +626,7 @@ main(int argc, char **argv)
int c;
const char *progname;
const char *filename = NULL;
const char *format = "p";
const char *dbname = NULL;
const char *pghost = NULL;
const char *pgport = NULL;
......@@ -591,12 +639,18 @@ main(int argc, char **argv)
char username[100];
char password[100];
bool use_password = false;
int compressLevel = -1;
bool ignore_version = false;
int plainText = 0;
int outputClean = 0;
RestoreOptions *ropt;
#ifdef HAVE_GETOPT_LONG
static struct option long_options[] = {
{"data-only", no_argument, NULL, 'a'},
{"clean", no_argument, NULL, 'c'},
{"file", required_argument, NULL, 'f'},
{"format", required_argument, NULL, 'F'},
{"inserts", no_argument, NULL, 'd'},
{"attribute-inserts", no_argument, NULL, 'D'},
{"host", required_argument, NULL, 'h'},
......@@ -610,6 +664,7 @@ main(int argc, char **argv)
{"password", no_argument, NULL, 'u'},
{"verbose", no_argument, NULL, 'v'},
{"no-acl", no_argument, NULL, 'x'},
{"compress", required_argument, NULL, 'Z'},
{"help", no_argument, NULL, '?'},
{"version", no_argument, NULL, 'V'}
};
......@@ -619,7 +674,6 @@ main(int argc, char **argv)
g_verbose = false;
force_quotes = true;
dropSchema = false;
strcpy(g_comment_start, "-- ");
g_comment_end[0] = '\0';
......@@ -634,9 +688,9 @@ main(int argc, char **argv)
#ifdef HAVE_GETOPT_LONG
while ((c = getopt_long(argc, argv, "acdDf:h:inNop:st:uvxzV?", long_options, &optindex)) != -1)
while ((c = getopt_long(argc, argv, "acdDf:F:h:inNop:st:uvxzZ:V?", long_options, &optindex)) != -1)
#else
while ((c = getopt(argc, argv, "acdDf:h:inNop:st:uvxzV?-")) != -1)
while ((c = getopt(argc, argv, "acdDf:F:h:inNop:st:uvxzZ:V?-")) != -1)
#endif
{
switch (c)
......@@ -645,9 +699,10 @@ main(int argc, char **argv)
dataOnly = true;
break;
case 'c': /* clean (i.e., drop) schema prior to
* create */
dropSchema = true;
break;
* create */
outputClean = 1;
break;
case 'd': /* dump data as proper insert strings */
dumpData = true;
break;
......@@ -659,6 +714,9 @@ main(int argc, char **argv)
case 'f':
filename = optarg;
break;
case 'F':
format = optarg;
break;
case 'h': /* server host */
pghost = optarg;
break;
......@@ -716,6 +774,9 @@ main(int argc, char **argv)
case 'x': /* skip ACL dump */
aclsSkip = true;
break;
case 'Z': /* Compression Level */
compressLevel = atoi(optarg);
break;
case 'V':
version();
exit(0);
......@@ -750,6 +811,14 @@ main(int argc, char **argv)
}
}
if (dataOnly && schemaOnly)
{
fprintf(stderr,
"%s: 'Schema Only' and 'Data Only' are incompatible options.\n",
progname);
exit(1);
}
if (dumpData == true && oids == true)
{
fprintf(stderr,
......@@ -759,18 +828,36 @@ main(int argc, char **argv)
}
/* open the output file */
if (filename == NULL)
g_fout = stdout;
else
{
g_fout = fopen(filename, PG_BINARY_W);
if (g_fout == NULL)
{
switch (format[0]) {
case 'c':
case 'C':
g_fout = CreateArchive(filename, archCustom, compressLevel);
break;
case 'f':
case 'F':
g_fout = CreateArchive(filename, archFiles, compressLevel);
break;
case 'p':
case 'P':
plainText = 1;
g_fout = CreateArchive(filename, archPlainText, 0);
break;
default:
fprintf(stderr,
"%s: could not open output file named %s for writing\n",
progname, filename);
exit(1);
}
"%s: invalid output format '%s' specified\n", progname, format);
exit(1);
}
if (g_fout == NULL)
{
fprintf(stderr,
"%s: could not open output file named %s for writing\n",
progname, filename);
exit(1);
}
/* find database */
......@@ -847,32 +934,14 @@ main(int argc, char **argv)
if (oids == true)
setMaxOid(g_fout);
if (!dataOnly)
{
if (g_verbose)
if (g_verbose)
fprintf(stderr, "%s last builtin oid is %u %s\n",
g_comment_start, g_last_builtin_oid, g_comment_end);
tblinfo = dumpSchema(g_fout, &numTables, tablename, aclsSkip);
}
else
tblinfo = dumpSchema(NULL, &numTables, tablename, aclsSkip);
tblinfo = dumpSchema(g_fout, &numTables, tablename, aclsSkip, oids, schemaOnly, dataOnly);
if (!schemaOnly)
{
if (dataOnly)
fprintf(g_fout, "UPDATE \"pg_class\" SET \"reltriggers\" = 0 WHERE \"relname\" !~ '^pg_';\n");
dumpClasses(tblinfo, numTables, g_fout, tablename, oids);
if (dataOnly)
{
fprintf(g_fout, "BEGIN TRANSACTION;\n");
fprintf(g_fout, "CREATE TEMP TABLE \"tr\" (\"tmp_relname\" name, \"tmp_reltriggers\" smallint);\n");
fprintf(g_fout, "INSERT INTO \"tr\" SELECT C.\"relname\", count(T.\"oid\") FROM \"pg_class\" C, \"pg_trigger\" T WHERE C.\"oid\" = T.\"tgrelid\" AND C.\"relname\" !~ '^pg_' GROUP BY 1;\n");
fprintf(g_fout, "UPDATE \"pg_class\" SET \"reltriggers\" = TMP.\"tmp_reltriggers\" FROM \"tr\" TMP WHERE \"pg_class\".\"relname\" = TMP.\"tmp_relname\";\n");
fprintf(g_fout, "COMMIT TRANSACTION;\n");
}
}
dumpClasses(tblinfo, numTables, g_fout, tablename, oids);
if (!dataOnly) /* dump indexes and triggers at the end
* for performance */
......@@ -882,9 +951,22 @@ main(int argc, char **argv)
dumpRules(g_fout, tablename, tblinfo, numTables);
}
fflush(g_fout);
if (g_fout != stdout)
fclose(g_fout);
if (plainText)
{
ropt = NewRestoreOptions();
ropt->filename = (char*)filename;
ropt->dropSchema = outputClean;
ropt->aclsSkip = aclsSkip;
if (compressLevel == -1)
ropt->compression = 0;
else
ropt->compression = compressLevel;
RestoreArchive(g_fout, ropt);
}
CloseArchive(g_fout);
clearTableInfo(tblinfo, numTables);
PQfinish(g_conn);
......@@ -1203,6 +1285,22 @@ clearTableInfo(TableInfo *tblinfo, int numTables)
if (tblinfo[i].typnames[j])
free(tblinfo[i].typnames[j]);
}
if (tblinfo[i].triggers) {
for (j = 0; j < tblinfo[i].ntrig ; j++)
{
if (tblinfo[i].triggers[j].tgsrc)
free(tblinfo[i].triggers[j].tgsrc);
if (tblinfo[i].triggers[j].oid)
free(tblinfo[i].triggers[j].oid);
if (tblinfo[i].triggers[j].tgname)
free(tblinfo[i].triggers[j].tgname);
if (tblinfo[i].triggers[j].tgdel)
free(tblinfo[i].triggers[j].tgdel);
}
free(tblinfo[i].triggers);
}
if (tblinfo[i].atttypmod)
free((int *) tblinfo[i].atttypmod);
if (tblinfo[i].inhAttrs)
......@@ -1387,7 +1485,8 @@ getAggregates(int *numAggs)
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
fprintf(stderr, "getAggregates(): SELECT failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "getAggregates(): SELECT failed. Explanation from backend: '%s'.\n",
PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
......@@ -1470,7 +1569,8 @@ getFuncs(int *numFuncs)
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
fprintf(stderr, "getFuncs(): SELECT failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "getFuncs(): SELECT failed. Explanation from backend: '%s'.\n",
PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
......@@ -1540,9 +1640,10 @@ getTables(int *numTables, FuncInfo *finfo, int numFuncs)
int ntups;
int i;
PQExpBuffer query = createPQExpBuffer();
PQExpBuffer delqry = createPQExpBuffer();
TableInfo *tblinfo;
int i_oid;
int i_reloid;
int i_relname;
int i_relkind;
int i_relacl;
......@@ -1573,7 +1674,8 @@ getTables(int *numTables, FuncInfo *finfo, int numFuncs)
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
fprintf(stderr, "getTables(): SELECT failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "getTables(): SELECT failed. Explanation from backend: '%s'.\n",
PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
......@@ -1583,7 +1685,7 @@ getTables(int *numTables, FuncInfo *finfo, int numFuncs)
tblinfo = (TableInfo *) malloc(ntups * sizeof(TableInfo));
i_oid = PQfnumber(res, "oid");
i_reloid = PQfnumber(res, "oid");
i_relname = PQfnumber(res, "relname");
i_relkind = PQfnumber(res, "relkind");
i_relacl = PQfnumber(res, "relacl");
......@@ -1594,7 +1696,7 @@ getTables(int *numTables, FuncInfo *finfo, int numFuncs)
for (i = 0; i < ntups; i++)
{
tblinfo[i].oid = strdup(PQgetvalue(res, i, i_oid));
tblinfo[i].oid = strdup(PQgetvalue(res, i, i_reloid));
tblinfo[i].relname = strdup(PQgetvalue(res, i, i_relname));
tblinfo[i].relacl = strdup(PQgetvalue(res, i, i_relacl));
tblinfo[i].sequence = (strcmp(PQgetvalue(res, i, i_relkind), "S") == 0);
......@@ -1633,7 +1735,8 @@ getTables(int *numTables, FuncInfo *finfo, int numFuncs)
if (!res2 ||
PQresultStatus(res2) != PGRES_TUPLES_OK)
{
fprintf(stderr, "getTables(): SELECT (for inherited CHECK) failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "getTables(): SELECT (for inherited CHECK) failed. "
"Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
ntups2 = PQntuples(res2);
......@@ -1677,7 +1780,8 @@ getTables(int *numTables, FuncInfo *finfo, int numFuncs)
if (!res2 ||
PQresultStatus(res2) != PGRES_TUPLES_OK)
{
fprintf(stderr, "getTables(): SELECT (for CHECK) failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "getTables(): SELECT (for CHECK) failed. "
"Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
ntups2 = PQntuples(res2);
......@@ -1787,7 +1891,8 @@ getTables(int *numTables, FuncInfo *finfo, int numFuncs)
if (!res2 ||
PQresultStatus(res2) != PGRES_TUPLES_OK)
{
fprintf(stderr, "getTables(): SELECT (for TRIGGER) failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "getTables(): SELECT (for TRIGGER) failed. "
"Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
ntups2 = PQntuples(res2);
......@@ -1808,9 +1913,7 @@ getTables(int *numTables, FuncInfo *finfo, int numFuncs)
i_tgdeferrable = PQfnumber(res2, "tgdeferrable");
i_tginitdeferred = PQfnumber(res2, "tginitdeferred");
tblinfo[i].triggers = (char **) malloc(ntups2 * sizeof(char *));
tblinfo[i].trcomments = (char **) malloc(ntups2 * sizeof(char *));
tblinfo[i].troids = (char **) malloc(ntups2 * sizeof(char *));
tblinfo[i].triggers = (TrigInfo*) malloc(ntups2 * sizeof(TrigInfo));
resetPQExpBuffer(query);
for (i2 = 0; i2 < ntups2; i2++)
{
......@@ -1876,19 +1979,12 @@ getTables(int *numTables, FuncInfo *finfo, int numFuncs)
}
else
tgfunc = strdup(finfo[findx].proname);
#if 0
/* XXX - how to emit this DROP TRIGGER? */
if (dropSchema)
{
resetPQExpBuffer(query);
appendPQExpBuffer(query, "DROP TRIGGER %s ",
appendPQExpBuffer(delqry, "DROP TRIGGER %s ",
fmtId(PQgetvalue(res2, i2, i_tgname),
force_quotes));
appendPQExpBuffer(query, "ON %s;\n",
fmtId(tblinfo[i].relname, force_quotes));
fputs(query->data, fout);
}
#endif
force_quotes));
appendPQExpBuffer(delqry, "ON %s;\n",
fmtId(tblinfo[i].relname, force_quotes));
resetPQExpBuffer(query);
if (tgisconstraint)
......@@ -1954,7 +2050,8 @@ getTables(int *numTables, FuncInfo *finfo, int numFuncs)
p = strchr(p, '\\');
if (p == NULL)
{
fprintf(stderr, "getTables(): relation '%s': bad argument string (%s) for trigger '%s'\n",
fprintf(stderr, "getTables(): relation '%s': bad argument "
"string (%s) for trigger '%s'\n",
tblinfo[i].relname,
PQgetvalue(res2, i2, i_tgargs),
PQgetvalue(res2, i2, i_tgname));
......@@ -1983,7 +2080,7 @@ getTables(int *numTables, FuncInfo *finfo, int numFuncs)
}
appendPQExpBuffer(query, ");\n");
tblinfo[i].triggers[i2] = strdup(query->data);
tblinfo[i].triggers[i2].tgsrc = strdup(query->data);
/*** Initialize trcomments and troids ***/
......@@ -1992,8 +2089,10 @@ getTables(int *numTables, FuncInfo *finfo, int numFuncs)
fmtId(PQgetvalue(res2, i2, i_tgname), force_quotes));
appendPQExpBuffer(query, "ON %s",
fmtId(tblinfo[i].relname, force_quotes));
tblinfo[i].trcomments[i2] = strdup(query->data);
tblinfo[i].troids[i2] = strdup(PQgetvalue(res2, i2, i_tgoid));
tblinfo[i].triggers[i2].tgcomment = strdup(query->data);
tblinfo[i].triggers[i2].oid = strdup(PQgetvalue(res2, i2, i_tgoid));
tblinfo[i].triggers[i2].tgname = strdup(fmtId(PQgetvalue(res2, i2, i_tgname),false));
tblinfo[i].triggers[i2].tgdel = strdup(delqry->data);
if (tgfunc)
free(tgfunc);
......@@ -2003,8 +2102,6 @@ getTables(int *numTables, FuncInfo *finfo, int numFuncs)
else
{
tblinfo[i].triggers = NULL;
tblinfo[i].trcomments = NULL;
tblinfo[i].troids = NULL;
}
}
......@@ -2044,7 +2141,8 @@ getInherits(int *numInherits)
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
fprintf(stderr, "getInherits(): SELECT failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "getInherits(): SELECT failed. Explanation from backend: '%s'.\n",
PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
......@@ -2122,7 +2220,8 @@ getTableAttrs(TableInfo *tblinfo, int numTables)
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
fprintf(stderr, "getTableAttrs(): SELECT failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "getTableAttrs(): SELECT failed. "
"Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
......@@ -2172,7 +2271,8 @@ getTableAttrs(TableInfo *tblinfo, int numTables)
if (!res2 ||
PQresultStatus(res2) != PGRES_TUPLES_OK)
{
fprintf(stderr, "getTableAttrs(): SELECT (for DEFAULT) failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "getTableAttrs(): SELECT (for DEFAULT) failed. "
"Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
tblinfo[i].adef_expr[j] = strdup(PQgetvalue(res2, 0, PQfnumber(res2, "adsrc")));
......@@ -2236,7 +2336,8 @@ getIndices(int *numIndices)
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
fprintf(stderr, "getIndices(): SELECT failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "getIndices(): SELECT failed. "
"Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
......@@ -2290,7 +2391,7 @@ getIndices(int *numIndices)
*/
static void
dumpComment(FILE *fout, const char *target, const char *oid)
dumpComment(Archive *fout, const char *target, const char *oid)
{
PGresult *res;
......@@ -2318,8 +2419,13 @@ dumpComment(FILE *fout, const char *target, const char *oid)
if (PQntuples(res) != 0)
{
i_description = PQfnumber(res, "description");
fprintf(fout, "COMMENT ON %s IS '%s';\n",
target, checkForQuote(PQgetvalue(res, 0, i_description)));
resetPQExpBuffer(query);
appendPQExpBuffer(query, "COMMENT ON %s IS '%s';\n",
target, checkForQuote(PQgetvalue(res, 0, i_description)));
ArchiveEntry(fout, oid, target, "COMMENT", NULL, query->data, "" /*Del*/,
"" /*Owner*/, NULL, NULL);
}
/*** Clear the statement buffer and return ***/
......@@ -2339,7 +2445,7 @@ dumpComment(FILE *fout, const char *target, const char *oid)
*/
void
dumpDBComment(FILE *fout)
dumpDBComment(Archive *fout)
{
PGresult *res;
......@@ -2384,11 +2490,12 @@ dumpDBComment(FILE *fout)
*
*/
void
dumpTypes(FILE *fout, FuncInfo *finfo, int numFuncs,
dumpTypes(Archive *fout, FuncInfo *finfo, int numFuncs,
TypeInfo *tinfo, int numTypes)
{
int i;
PQExpBuffer q = createPQExpBuffer();
PQExpBuffer delq = createPQExpBuffer();
int funcInd;
for (i = 0; i < numTypes; i++)
......@@ -2419,14 +2526,7 @@ dumpTypes(FILE *fout, FuncInfo *finfo, int numFuncs,
if (funcInd != -1)
dumpOneFunc(fout, finfo, funcInd, tinfo, numTypes);
becomeUser(fout, tinfo[i].usename);
if (dropSchema)
{
resetPQExpBuffer(q);
appendPQExpBuffer(q, "DROP TYPE %s;\n", fmtId(tinfo[i].typname, force_quotes));
fputs(q->data, fout);
}
appendPQExpBuffer(delq, "DROP TYPE %s;\n", fmtId(tinfo[i].typname, force_quotes));
resetPQExpBuffer(q);
appendPQExpBuffer(q,
......@@ -2456,14 +2556,18 @@ dumpTypes(FILE *fout, FuncInfo *finfo, int numFuncs,
else
appendPQExpBuffer(q, ");\n");
fputs(q->data, fout);
ArchiveEntry(fout, tinfo[i].oid, fmtId(tinfo[i].typname, force_quotes), "TYPE", NULL,
q->data, delq->data, tinfo[i].usename, NULL, NULL);
/*** Dump Type Comments ***/
resetPQExpBuffer(q);
resetPQExpBuffer(delq);
appendPQExpBuffer(q, "TYPE %s", fmtId(tinfo[i].typname, force_quotes));
dumpComment(fout, q->data, tinfo[i].oid);
resetPQExpBuffer(q);
}
}
......@@ -2473,12 +2577,15 @@ dumpTypes(FILE *fout, FuncInfo *finfo, int numFuncs,
*
*/
void
dumpProcLangs(FILE *fout, FuncInfo *finfo, int numFuncs,
dumpProcLangs(Archive *fout, FuncInfo *finfo, int numFuncs,
TypeInfo *tinfo, int numTypes)
{
PGresult *res;
PQExpBuffer query = createPQExpBuffer();
PQExpBuffer defqry = createPQExpBuffer();
PQExpBuffer delqry = createPQExpBuffer();
int ntups;
int i_oid;
int i_lanname;
int i_lanpltrusted;
int i_lanplcallfoid;
......@@ -2489,7 +2596,7 @@ dumpProcLangs(FILE *fout, FuncInfo *finfo, int numFuncs,
int i,
fidx;
appendPQExpBuffer(query, "SELECT * FROM pg_language "
appendPQExpBuffer(query, "SELECT oid, * FROM pg_language "
"WHERE lanispl "
"ORDER BY oid");
res = PQexec(g_conn, query->data);
......@@ -2505,6 +2612,7 @@ dumpProcLangs(FILE *fout, FuncInfo *finfo, int numFuncs,
i_lanpltrusted = PQfnumber(res, "lanpltrusted");
i_lanplcallfoid = PQfnumber(res, "lanplcallfoid");
i_lancompiler = PQfnumber(res, "lancompiler");
i_oid = PQfnumber(res, "oid");
for (i = 0; i < ntups; i++)
{
......@@ -2516,7 +2624,8 @@ dumpProcLangs(FILE *fout, FuncInfo *finfo, int numFuncs,
}
if (fidx >= numFuncs)
{
fprintf(stderr, "dumpProcLangs(): handler procedure for language %s not found\n", PQgetvalue(res, i, i_lanname));
fprintf(stderr, "dumpProcLangs(): handler procedure for "
"language %s not found\n", PQgetvalue(res, i, i_lanname));
exit_nicely(g_conn);
}
......@@ -2525,16 +2634,18 @@ dumpProcLangs(FILE *fout, FuncInfo *finfo, int numFuncs,
lanname = checkForQuote(PQgetvalue(res, i, i_lanname));
lancompiler = checkForQuote(PQgetvalue(res, i, i_lancompiler));
if (dropSchema)
fprintf(fout, "DROP PROCEDURAL LANGUAGE '%s';\n", lanname);
appendPQExpBuffer(delqry, "DROP PROCEDURAL LANGUAGE '%s';\n", lanname);
fprintf(fout, "CREATE %sPROCEDURAL LANGUAGE '%s' "
appendPQExpBuffer(defqry, "CREATE %sPROCEDURAL LANGUAGE '%s' "
"HANDLER %s LANCOMPILER '%s';\n",
(PQgetvalue(res, i, i_lanpltrusted)[0] == 't') ? "TRUSTED " : "",
(PQgetvalue(res, i, i_lanpltrusted)[0] == 't') ? "TRUSTED " : "",
lanname,
fmtId(finfo[fidx].proname, force_quotes),
lancompiler);
ArchiveEntry(fout, PQgetvalue(res, i, i_oid), lanname, "PROCEDURAL LANGUAGE",
NULL, defqry->data, delqry->data, "", NULL, NULL);
free(lanname);
free(lancompiler);
}
......@@ -2549,7 +2660,7 @@ dumpProcLangs(FILE *fout, FuncInfo *finfo, int numFuncs,
*
*/
void
dumpFuncs(FILE *fout, FuncInfo *finfo, int numFuncs,
dumpFuncs(Archive *fout, FuncInfo *finfo, int numFuncs,
TypeInfo *tinfo, int numTypes)
{
int i;
......@@ -2566,10 +2677,12 @@ dumpFuncs(FILE *fout, FuncInfo *finfo, int numFuncs,
*/
static void
dumpOneFunc(FILE *fout, FuncInfo *finfo, int i,
dumpOneFunc(Archive *fout, FuncInfo *finfo, int i,
TypeInfo *tinfo, int numTypes)
{
PQExpBuffer q = createPQExpBuffer();
PQExpBuffer fn = createPQExpBuffer();
PQExpBuffer delqry = createPQExpBuffer();
PQExpBuffer fnlist = createPQExpBuffer();
int j;
char *func_def;
......@@ -2584,69 +2697,60 @@ dumpOneFunc(FILE *fout, FuncInfo *finfo, int i,
else
finfo[i].dumped = 1;
becomeUser(fout, finfo[i].usename);
/* becomeUser(fout, finfo[i].usename); */
sprintf(query, "SELECT lanname FROM pg_language WHERE oid = %u",
finfo[i].lang);
res = PQexec(g_conn, query);
if (!res ||
PQresultStatus(res) != PGRES_TUPLES_OK)
{
{
fprintf(stderr, "dumpOneFunc(): SELECT for procedural language failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
}
nlangs = PQntuples(res);
if (nlangs != 1)
{
{
fprintf(stderr, "dumpOneFunc(): procedural language for function %s not found\n", finfo[i].proname);
exit_nicely(g_conn);
}
}
i_lanname = PQfnumber(res, "lanname");
func_def = finfo[i].prosrc;
strcpy(func_lang, PQgetvalue(res, 0, i_lanname));
PQclear(res);
if (dropSchema)
{
resetPQExpBuffer(q);
appendPQExpBuffer(q, "DROP FUNCTION %s (", fmtId(finfo[i].proname, force_quotes));
for (j = 0; j < finfo[i].nargs; j++)
{
char *typname;
typname = findTypeByOid(tinfo, numTypes, finfo[i].argtypes[j]);
appendPQExpBuffer(q, "%s%s",
(j > 0) ? "," : "",
fmtId(typname, false));
}
appendPQExpBuffer(q, ");\n");
fputs(q->data, fout);
}
resetPQExpBuffer(q);
appendPQExpBuffer(q, "CREATE FUNCTION %s (", fmtId(finfo[i].proname, force_quotes));
resetPQExpBuffer(fn);
appendPQExpBuffer(fn, "%s (", fmtId(finfo[i].proname, force_quotes));
for (j = 0; j < finfo[i].nargs; j++)
{
char *typname;
char *typname;
typname = findTypeByOid(tinfo, numTypes, finfo[i].argtypes[j]);
appendPQExpBuffer(q, "%s%s",
(j > 0) ? "," : "",
fmtId(typname, false));
appendPQExpBuffer(fn, "%s%s",
(j > 0) ? "," : "",
fmtId(typname, false));
appendPQExpBuffer(fnlist, "%s%s",
(j > 0) ? "," : "",
fmtId(typname, false));
(j > 0) ? "," : "",
fmtId(typname, false));
}
appendPQExpBuffer(q, " ) RETURNS %s%s AS '%s' LANGUAGE '%s';\n",
appendPQExpBuffer(fn, ")");
resetPQExpBuffer(delqry);
appendPQExpBuffer(delqry, "DROP FUNCTION %s;\n", fn->data );
resetPQExpBuffer(q);
appendPQExpBuffer(q, "CREATE FUNCTION %s ", fn->data );
appendPQExpBuffer(q, "RETURNS %s%s AS '%s' LANGUAGE '%s';\n",
(finfo[i].retset) ? " SETOF " : "",
fmtId(findTypeByOid(tinfo, numTypes, finfo[i].prorettype), false),
fmtId(findTypeByOid(tinfo, numTypes, finfo[i].prorettype), false),
func_def, func_lang);
fputs(q->data, fout);
ArchiveEntry(fout, finfo[i].oid, fn->data, "FUNCTION", NULL, q->data, delqry->data,
finfo[i].usename, NULL, NULL);
/*** Dump Function Comments ***/
......@@ -2664,11 +2768,12 @@ dumpOneFunc(FILE *fout, FuncInfo *finfo, int i,
*
*/
void
dumpOprs(FILE *fout, OprInfo *oprinfo, int numOperators,
dumpOprs(Archive *fout, OprInfo *oprinfo, int numOperators,
TypeInfo *tinfo, int numTypes)
{
int i;
PQExpBuffer q = createPQExpBuffer();
PQExpBuffer delq = createPQExpBuffer();
PQExpBuffer leftarg = createPQExpBuffer();
PQExpBuffer rightarg = createPQExpBuffer();
PQExpBuffer commutator = createPQExpBuffer();
......@@ -2739,19 +2844,13 @@ dumpOprs(FILE *fout, OprInfo *oprinfo, int numOperators,
appendPQExpBuffer(sort2, ",\n\tSORT2 = %s ",
findOprByOid(oprinfo, numOperators, oprinfo[i].oprrsortop));
becomeUser(fout, oprinfo[i].usename);
if (dropSchema)
{
resetPQExpBuffer(q);
appendPQExpBuffer(q, "DROP OPERATOR %s (%s", oprinfo[i].oprname,
resetPQExpBuffer(delq);
appendPQExpBuffer(delq, "DROP OPERATOR %s (%s", oprinfo[i].oprname,
fmtId(findTypeByOid(tinfo, numTypes, oprinfo[i].oprleft),
false));
appendPQExpBuffer(q, ", %s);\n",
fmtId(findTypeByOid(tinfo, numTypes, oprinfo[i].oprright),
appendPQExpBuffer(delq, ", %s);\n",
fmtId(findTypeByOid(tinfo, numTypes, oprinfo[i].oprright),
false));
fputs(q->data, fout);
}
resetPQExpBuffer(q);
appendPQExpBuffer(q,
......@@ -2764,12 +2863,13 @@ dumpOprs(FILE *fout, OprInfo *oprinfo, int numOperators,
commutator->data,
negator->data,
restrictor->data,
(strcmp(oprinfo[i].oprcanhash, "t") == 0) ? ",\n\tHASHES" : "",
(strcmp(oprinfo[i].oprcanhash, "t") == 0) ? ",\n\tHASHES" : "",
join->data,
sort1->data,
sort2->data);
fputs(q->data, fout);
ArchiveEntry(fout, oprinfo[i].oid, oprinfo[i].oprname, "OPERATOR", NULL,
q->data, delq->data, oprinfo[i].usename, NULL, NULL);
}
}
......@@ -2779,11 +2879,13 @@ dumpOprs(FILE *fout, OprInfo *oprinfo, int numOperators,
*
*/
void
dumpAggs(FILE *fout, AggInfo *agginfo, int numAggs,
dumpAggs(Archive *fout, AggInfo *agginfo, int numAggs,
TypeInfo *tinfo, int numTypes)
{
int i;
PQExpBuffer q = createPQExpBuffer();
PQExpBuffer delq = createPQExpBuffer();
PQExpBuffer aggSig = createPQExpBuffer();
PQExpBuffer sfunc1 = createPQExpBuffer();
PQExpBuffer sfunc2 = createPQExpBuffer();
PQExpBuffer basetype = createPQExpBuffer();
......@@ -2848,15 +2950,12 @@ dumpAggs(FILE *fout, AggInfo *agginfo, int numAggs,
else
comma2[0] = '\0';
becomeUser(fout, agginfo[i].usename);
resetPQExpBuffer(aggSig);
appendPQExpBuffer(aggSig, "%s %s", agginfo[i].aggname,
fmtId(findTypeByOid(tinfo, numTypes, agginfo[i].aggbasetype), false));
if (dropSchema)
{
resetPQExpBuffer(q);
appendPQExpBuffer(q, "DROP AGGREGATE %s %s;\n", agginfo[i].aggname,
fmtId(findTypeByOid(tinfo, numTypes, agginfo[i].aggbasetype), false));
fputs(q->data, fout);
}
resetPQExpBuffer(delq);
appendPQExpBuffer(delq, "DROP AGGREGATE %s;\n", aggSig->data);
resetPQExpBuffer(q);
appendPQExpBuffer(q, "CREATE AGGREGATE %s ( %s %s%s %s%s %s );\n",
......@@ -2868,7 +2967,8 @@ dumpAggs(FILE *fout, AggInfo *agginfo, int numAggs,
comma2,
finalfunc->data);
fputs(q->data, fout);
ArchiveEntry(fout, agginfo[i].oid, aggSig->data, "AGGREGATE", NULL,
q->data, delq->data, agginfo[i].usename, NULL, NULL);
/*** Dump Aggregate Comments ***/
......@@ -2927,6 +3027,22 @@ GetPrivileges(const char *s)
return strdup(aclbuf);
}
/*
* The name says it all; a function to append a string if the dest
* is big enough, and to realloc the dest to fit if it is not.
*/
static void strcatalloc(char **dest, int *dSize, char *src)
{
int dLen = strlen(*dest);
int sLen = strlen(src);
if ( (dLen + sLen) >= *dSize) {
*dSize = (dLen + sLen) * 2;
*dest = realloc(*dest, *dSize);
}
strcpy(*dest + dLen, src);
}
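As a quick illustration of the growth strategy strcatalloc uses (double the
buffer to at least the needed length, then copy the new piece onto the end),
here is a minimal standalone sketch; the helper name, buffer name and starting
size are illustrative only, and unlike the original it also checks the realloc
result:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Same idea as strcatalloc above: grow dest when needed, then append src. */
static void append_grow(char **dest, int *dSize, const char *src)
{
	int		dLen = strlen(*dest);
	int		sLen = strlen(src);

	if (dLen + sLen >= *dSize)
	{
		*dSize = (dLen + sLen) * 2;
		*dest = realloc(*dest, *dSize);
		if (*dest == NULL)
		{
			fprintf(stderr, "append_grow: out of memory\n");
			exit(1);
		}
	}
	strcpy(*dest + dLen, src);
}

int main(void)
{
	int		sSize = 16;			/* deliberately small to force a realloc */
	char   *sql = malloc(sSize);

	sql[0] = '\0';
	append_grow(&sql, &sSize, "REVOKE ALL on mytable from PUBLIC;\n");
	append_grow(&sql, &sSize, "GRANT SELECT on mytable to PUBLIC;\n");
	fputs(sql, stdout);
	free(sql);
	return 0;
}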
/*
* dumpACL:
* Write out grant/revoke information
......@@ -2934,23 +3050,30 @@ GetPrivileges(const char *s)
*/
static void
dumpACL(FILE *fout, TableInfo tbinfo)
dumpACL(Archive *fout, TableInfo tbinfo)
{
const char *acls = tbinfo.relacl;
char *aclbuf,
const char *acls = tbinfo.relacl;
char *aclbuf,
*tok,
*eqpos,
*priv;
char *sql;
char tmp[1024];
int sSize = 4096;
if (strlen(acls) == 0)
return; /* table has default permissions */
/*
* Allocate a largish buffer for the output SQL.
*/
sql = (char*)malloc(sSize);
/*
* Revoke Default permissions for PUBLIC. Is this actually necessary,
* or is it just a waste of time?
*/
fprintf(fout,
"REVOKE ALL on %s from PUBLIC;\n",
sprintf(sql, "REVOKE ALL on %s from PUBLIC;\n",
fmtId(tbinfo.relname, force_quotes));
/* Make a working copy of acls so we can use strtok */
......@@ -2985,9 +3108,9 @@ dumpACL(FILE *fout, TableInfo tbinfo)
priv = GetPrivileges(eqpos + 1);
if (*priv)
{
fprintf(fout,
"GRANT %s on %s to ",
sprintf(tmp, "GRANT %s on %s to ",
priv, fmtId(tbinfo.relname, force_quotes));
strcatalloc(&sql, &sSize, tmp);
/*
* Note: fmtId() can only be called once per printf, so don't
......@@ -2996,22 +3119,26 @@ dumpACL(FILE *fout, TableInfo tbinfo)
if (eqpos == tok)
{
/* Empty left-hand side means "PUBLIC" */
fprintf(fout, "PUBLIC;\n");
strcatalloc(&sql, &sSize, "PUBLIC;\n");
}
else
{
*eqpos = '\0'; /* it's ok to clobber aclbuf */
if (strncmp(tok, "group ", strlen("group ")) == 0)
fprintf(fout, "GROUP %s;\n",
sprintf(tmp, "GROUP %s;\n",
fmtId(tok + strlen("group "), force_quotes));
else
fprintf(fout, "%s;\n", fmtId(tok, force_quotes));
sprintf(tmp, "%s;\n", fmtId(tok, force_quotes));
strcatalloc(&sql, &sSize, tmp);
}
}
free(priv);
}
free(aclbuf);
ArchiveEntry(fout, tbinfo.oid, tbinfo.relname, "ACL", NULL, sql, "", "", NULL, NULL);
}
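The comments above about calling fmtId() only once per printf/append deserve a
small illustration: fmtId() returns a pointer into a static buffer, so two
calls whose results are consumed by the same printf both end up pointing at
whatever was formatted last. A self-contained toy sketch of the pitfall,
using a hypothetical stand-in rather than the real fmtId():

#include <stdio.h>

/* Stand-in for fmtId(): like the real one, it returns a pointer into a
 * static buffer, so only the most recently formatted identifier is valid. */
static const char *toy_fmtId(const char *identifier)
{
	static char buf[64];

	snprintf(buf, sizeof(buf), "\"%s\"", identifier);
	return buf;
}

int main(void)
{
	/* Wrong: both %s arguments are the same pointer into the static buffer,
	 * so the same (last-formatted) identifier is printed twice. */
	printf("GRANT SELECT on %s to %s;\n", toy_fmtId("mytable"), toy_fmtId("someuser"));

	/* Right: consume each result before making the next call. */
	printf("GRANT SELECT on %s to ", toy_fmtId("mytable"));
	printf("%s;\n", toy_fmtId("someuser"));
	return 0;
}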
......@@ -3021,15 +3148,17 @@ dumpACL(FILE *fout, TableInfo tbinfo)
*/
void
dumpTables(FILE *fout, TableInfo *tblinfo, int numTables,
dumpTables(Archive *fout, TableInfo *tblinfo, int numTables,
InhInfo *inhinfo, int numInherits,
TypeInfo *tinfo, int numTypes, const char *tablename,
const bool aclsSkip)
const bool aclsSkip, const bool oids,
const bool schemaOnly, const bool dataOnly)
{
int i,
j,
k;
PQExpBuffer q = createPQExpBuffer();
PQExpBuffer delq = createPQExpBuffer();
char *serialSeq = NULL; /* implicit sequence name created
* by SERIAL datatype */
const char *serialSeqSuffix = "_id_seq"; /* suffix for implicit
......@@ -3041,7 +3170,6 @@ dumpTables(FILE *fout, TableInfo *tblinfo, int numTables,
int precision;
int scale;
/* First - dump SEQUENCEs */
if (tablename)
{
......@@ -3056,7 +3184,7 @@ dumpTables(FILE *fout, TableInfo *tblinfo, int numTables,
if (!tablename || (!strcmp(tblinfo[i].relname, tablename))
|| (serialSeq && !strcmp(tblinfo[i].relname, serialSeq)))
{
becomeUser(fout, tblinfo[i].usename);
/* becomeUser(fout, tblinfo[i].usename); */
dumpSequence(fout, tblinfo[i]);
if (!aclsSkip)
dumpACL(fout, tblinfo[i]);
......@@ -3082,14 +3210,8 @@ dumpTables(FILE *fout, TableInfo *tblinfo, int numTables,
parentRels = tblinfo[i].parentRels;
numParents = tblinfo[i].numParents;
becomeUser(fout, tblinfo[i].usename);
if (dropSchema)
{
resetPQExpBuffer(q);
appendPQExpBuffer(q, "DROP TABLE %s;\n", fmtId(tblinfo[i].relname, force_quotes));
fputs(q->data, fout);
}
resetPQExpBuffer(delq);
appendPQExpBuffer(delq, "DROP TABLE %s;\n", fmtId(tblinfo[i].relname, force_quotes));
resetPQExpBuffer(q);
appendPQExpBuffer(q, "CREATE TABLE %s (\n\t", fmtId(tblinfo[i].relname, force_quotes));
......@@ -3191,8 +3313,14 @@ dumpTables(FILE *fout, TableInfo *tblinfo, int numTables,
}
appendPQExpBuffer(q, ";\n");
fputs(q->data, fout);
if (!aclsSkip)
if (!dataOnly) {
ArchiveEntry(fout, tblinfo[i].oid, fmtId(tblinfo[i].relname, false),
"TABLE", NULL, q->data, delq->data, tblinfo[i].usename,
NULL, NULL);
}
if (!dataOnly && !aclsSkip)
dumpACL(fout, tblinfo[i]);
/* Dump Field Comments */
......@@ -3221,7 +3349,7 @@ dumpTables(FILE *fout, TableInfo *tblinfo, int numTables,
* write out to fout all the user-define indices
*/
void
dumpIndices(FILE *fout, IndInfo *indinfo, int numIndices,
dumpIndices(Archive *fout, IndInfo *indinfo, int numIndices,
TableInfo *tblinfo, int numTables, const char *tablename)
{
int i,
......@@ -3236,6 +3364,7 @@ dumpIndices(FILE *fout, IndInfo *indinfo, int numIndices,
int nclass;
PQExpBuffer q = createPQExpBuffer(),
delq = createPQExpBuffer(),
id1 = createPQExpBuffer(),
id2 = createPQExpBuffer();
PGresult *res;
......@@ -3270,11 +3399,11 @@ dumpIndices(FILE *fout, IndInfo *indinfo, int numIndices,
res = PQexec(g_conn, q->data);
if (!res || PQresultStatus(res) != PGRES_TUPLES_OK)
{
fprintf(stderr, "dumpIndices(): SELECT (funcname) failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "dumpIndices(): SELECT (funcname) failed. "
"Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
funcname = strdup(PQgetvalue(res, 0,
PQfnumber(res, "proname")));
funcname = strdup(PQgetvalue(res, 0, PQfnumber(res, "proname")));
PQclear(res);
}
......@@ -3292,11 +3421,11 @@ dumpIndices(FILE *fout, IndInfo *indinfo, int numIndices,
res = PQexec(g_conn, q->data);
if (!res || PQresultStatus(res) != PGRES_TUPLES_OK)
{
fprintf(stderr, "dumpIndices(): SELECT (classname) failed. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "dumpIndices(): SELECT (classname) failed. "
"Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
classname[nclass] = strdup(PQgetvalue(res, 0,
PQfnumber(res, "opcname")));
classname[nclass] = strdup(PQgetvalue(res, 0, PQfnumber(res, "opcname")));
PQclear(res);
}
......@@ -3351,38 +3480,38 @@ dumpIndices(FILE *fout, IndInfo *indinfo, int numIndices,
* is not necessarily right but should answer 99% of the time.
* Would have to add owner name to IndInfo to do it right.
*/
becomeUser(fout, tblinfo[tableInd].usename);
resetPQExpBuffer(id1);
resetPQExpBuffer(id2);
appendPQExpBuffer(id1, fmtId(indinfo[i].indexrelname, force_quotes));
appendPQExpBuffer(id2, fmtId(indinfo[i].indrelname, force_quotes));
if (dropSchema)
{
resetPQExpBuffer(q);
appendPQExpBuffer(q, "DROP INDEX %s;\n", id1->data);
fputs(q->data, fout);
}
resetPQExpBuffer(delq);
appendPQExpBuffer(delq, "DROP INDEX %s;\n", id1->data);
fprintf(fout, "CREATE %s INDEX %s on %s using %s (",
(strcmp(indinfo[i].indisunique, "t") == 0) ? "UNIQUE" : "",
resetPQExpBuffer(q);
appendPQExpBuffer(q, "CREATE %s INDEX %s on %s using %s (",
(strcmp(indinfo[i].indisunique, "t") == 0) ? "UNIQUE" : "",
id1->data,
id2->data,
indinfo[i].indamname);
if (funcname)
{
/* need two separate appends here because fmtId() has a static return area */
fprintf(fout, " %s", fmtId(funcname, false));
fprintf(fout, " (%s) %s );\n", attlist->data, fmtId(classname[0], force_quotes));
appendPQExpBuffer(q, " %s", fmtId(funcname, false));
appendPQExpBuffer(q, " (%s) %s );\n", attlist->data,
fmtId(classname[0], force_quotes));
free(funcname);
free(classname[0]);
}
else
fprintf(fout, " %s );\n", attlist->data);
appendPQExpBuffer(q, " %s );\n", attlist->data);
/* Dump Index Comments */
ArchiveEntry(fout, tblinfo[tableInd].oid, id1->data, "INDEX", NULL, q->data, delq->data,
tblinfo[tableInd].usename, NULL, NULL);
resetPQExpBuffer(q);
appendPQExpBuffer(q, "INDEX %s", id1->data);
dumpComment(fout, q->data, indinfo[i].indoid);
......@@ -3463,16 +3592,19 @@ dumpTuples(PGresult *res, FILE *fout, int *attrmap)
*/
static void
setMaxOid(FILE *fout)
setMaxOid(Archive *fout)
{
PGresult *res;
Oid max_oid;
PGresult *res;
Oid max_oid;
char sql[1024];
int pos;
res = PQexec(g_conn, "CREATE TABLE pgdump_oid (dummy int4)");
if (!res ||
PQresultStatus(res) != PGRES_COMMAND_OK)
{
fprintf(stderr, "Can not create pgdump_oid table. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "Can not create pgdump_oid table. "
"Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
PQclear(res);
......@@ -3480,7 +3612,8 @@ setMaxOid(FILE *fout)
if (!res ||
PQresultStatus(res) != PGRES_COMMAND_OK)
{
fprintf(stderr, "Can not insert into pgdump_oid table. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "Can not insert into pgdump_oid table. "
"Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
max_oid = atol(PQoidStatus(res));
......@@ -3494,18 +3627,21 @@ setMaxOid(FILE *fout)
if (!res ||
PQresultStatus(res) != PGRES_COMMAND_OK)
{
fprintf(stderr, "Can not drop pgdump_oid table. Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
fprintf(stderr, "Can not drop pgdump_oid table. "
"Explanation from backend: '%s'.\n", PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
PQclear(res);
if (g_verbose)
fprintf(stderr, "%s maximum system oid is %u %s\n",
g_comment_start, max_oid, g_comment_end);
fprintf(fout, "CREATE TABLE pgdump_oid (dummy int4);\n");
fprintf(fout, "COPY pgdump_oid WITH OIDS FROM stdin;\n");
fprintf(fout, "%-d\t0\n", max_oid);
fprintf(fout, "\\.\n");
fprintf(fout, "DROP TABLE pgdump_oid;\n");
pos = snprintf(sql, 1024, "CREATE TABLE pgdump_oid (dummy int4);\n");
pos = pos + snprintf(sql+pos, 1024-pos, "COPY pgdump_oid WITH OIDS FROM stdin;\n");
pos = pos + snprintf(sql+pos, 1024-pos, "%-d\t0\n", max_oid);
pos = pos + snprintf(sql+pos, 1024-pos, "\\.\n");
pos = pos + snprintf(sql+pos, 1024-pos, "DROP TABLE pgdump_oid;\n");
ArchiveEntry(fout, "0", "Max OID", "<Init>", NULL, sql, "","", NULL, NULL);
}
/*
......@@ -3586,7 +3722,7 @@ checkForQuote(const char *s)
static void
dumpSequence(FILE *fout, TableInfo tbinfo)
dumpSequence(Archive *fout, TableInfo tbinfo)
{
PGresult *res;
int4 last,
......@@ -3598,6 +3734,7 @@ dumpSequence(FILE *fout, TableInfo tbinfo)
called;
const char *t;
PQExpBuffer query = createPQExpBuffer();
PQExpBuffer delqry = createPQExpBuffer();
appendPQExpBuffer(query,
"SELECT sequence_name, last_value, increment_by, max_value, "
......@@ -3607,7 +3744,8 @@ dumpSequence(FILE *fout, TableInfo tbinfo)
res = PQexec(g_conn, query->data);
if (!res || PQresultStatus(res) != PGRES_TUPLES_OK)
{
fprintf(stderr, "dumpSequence(%s): SELECT failed. Explanation from backend: '%s'.\n", tbinfo.relname, PQerrorMessage(g_conn));
fprintf(stderr, "dumpSequence(%s): SELECT failed. "
"Explanation from backend: '%s'.\n", tbinfo.relname, PQerrorMessage(g_conn));
exit_nicely(g_conn);
}
......@@ -3639,21 +3777,22 @@ dumpSequence(FILE *fout, TableInfo tbinfo)
PQclear(res);
if (dropSchema)
{
resetPQExpBuffer(query);
appendPQExpBuffer(query, "DROP SEQUENCE %s;\n", fmtId(tbinfo.relname, force_quotes));
fputs(query->data, fout);
}
resetPQExpBuffer(delqry);
appendPQExpBuffer(delqry, "DROP SEQUENCE %s;\n", fmtId(tbinfo.relname, force_quotes));
resetPQExpBuffer(query);
appendPQExpBuffer(query,
"CREATE SEQUENCE %s start %d increment %d maxvalue %d "
"minvalue %d cache %d %s;\n",
fmtId(tbinfo.relname, force_quotes), last, incby, maxv, minv, cache,
fmtId(tbinfo.relname, force_quotes), last, incby, maxv, minv, cache,
(cycled == 't') ? "cycle" : "");
fputs(query->data, fout);
if (called != 'f') {
appendPQExpBuffer(query, "SELECT nextval ('%s');\n", fmtId(tbinfo.relname, force_quotes));
}
ArchiveEntry(fout, tbinfo.oid, fmtId(tbinfo.relname, force_quotes), "SEQUENCE", NULL,
query->data, delqry->data, tbinfo.usename, NULL, NULL);
/* Dump Sequence Comments */
......@@ -3661,18 +3800,11 @@ dumpSequence(FILE *fout, TableInfo tbinfo)
appendPQExpBuffer(query, "SEQUENCE %s", fmtId(tbinfo.relname, force_quotes));
dumpComment(fout, query->data, tbinfo.oid);
if (called == 'f')
return; /* nothing to do more */
resetPQExpBuffer(query);
appendPQExpBuffer(query, "SELECT nextval ('%s');\n", fmtId(tbinfo.relname, force_quotes));
fputs(query->data, fout);
}
static void
dumpTriggers(FILE *fout, const char *tablename,
dumpTriggers(Archive *fout, const char *tablename,
TableInfo *tblinfo, int numTables)
{
int i,
......@@ -3688,16 +3820,17 @@ dumpTriggers(FILE *fout, const char *tablename,
continue;
for (j = 0; j < tblinfo[i].ntrig; j++)
{
becomeUser(fout, tblinfo[i].usename);
fputs(tblinfo[i].triggers[j], fout);
dumpComment(fout, tblinfo[i].trcomments[j], tblinfo[i].troids[j]);
ArchiveEntry(fout, tblinfo[i].triggers[j].oid, tblinfo[i].triggers[j].tgname,
"TRIGGER", NULL, tblinfo[i].triggers[j].tgsrc, "",
tblinfo[i].usename, NULL, NULL);
dumpComment(fout, tblinfo[i].triggers[j].tgcomment, tblinfo[i].triggers[j].oid);
}
}
}
static void
dumpRules(FILE *fout, const char *tablename,
dumpRules(Archive *fout, const char *tablename,
TableInfo *tblinfo, int numTables)
{
PGresult *res;
......@@ -3753,7 +3886,9 @@ dumpRules(FILE *fout, const char *tablename,
for (i = 0; i < nrules; i++)
{
fprintf(fout, "%s\n", PQgetvalue(res, i, i_definition));
ArchiveEntry(fout, PQgetvalue(res, i, i_oid), PQgetvalue(res, i, i_rulename),
"RULE", NULL, PQgetvalue(res, i, i_definition),
"", "", NULL, NULL);
/* Dump rule comments */
......@@ -3767,25 +3902,3 @@ dumpRules(FILE *fout, const char *tablename,
}
}
/* Issue a psql \connect command to become the specified user.
* We want to do this only if we are dumping ACLs,
* and only if the new username is different from the last one
* (to avoid the overhead of useless backend launches).
*/
static void
becomeUser(FILE *fout, const char *username)
{
static const char *lastusername = "";
if (aclsSkip)
return;
if (strcmp(lastusername, username) == 0)
return;
fprintf(fout, "\\connect - %s\n", username);
lastusername = username;
}
......@@ -6,7 +6,7 @@
* Portions Copyright (c) 1996-2000, PostgreSQL, Inc
* Portions Copyright (c) 1994, Regents of the University of California
*
* $Id: pg_dump.h,v 1.48 2000/04/12 17:16:15 momjian Exp $
* $Id: pg_dump.h,v 1.49 2000/07/04 14:25:28 momjian Exp $
*
* Modifications - 6/12/96 - dave@bensoft.com - version 1.13.dhb.2
*
......@@ -25,6 +25,7 @@
#include "pqexpbuffer.h"
#include "catalog/pg_index.h"
#include "pg_backup.h"
/* The data structures used to store system catalog information */
......@@ -64,6 +65,15 @@ typedef struct _funcInfo
int dumped; /* 1 if already dumped */
} FuncInfo;
typedef struct _trigInfo
{
char *oid;
char *tgname;
char *tgsrc;
char *tgdel;
char *tgcomment;
} TrigInfo;
typedef struct _tableInfo
{
char *oid;
......@@ -94,9 +104,7 @@ typedef struct _tableInfo
int ncheck; /* # of CHECK expressions */
char **check_expr; /* [CONSTRAINT name] CHECK expressions */
int ntrig; /* # of triggers */
char **triggers; /* CREATE TRIGGER ... */
char **trcomments; /* COMMENT ON TRIGGER ... */
char **troids; /* TRIGGER oids */
TrigInfo *triggers; /* Triggers on the table */
char *primary_key; /* PRIMARY KEY of the table, if any */
} TableInfo;
......@@ -162,7 +170,7 @@ typedef struct _oprInfo
extern bool g_force_quotes; /* double-quotes for identifiers flag */
extern bool g_verbose; /* verbose flag */
extern int g_last_builtin_oid; /* value of the last builtin oid */
extern FILE *g_fout; /* the script file */
extern Archive *g_fout; /* the script file */
/* placeholders for comment starting and ending delimiters */
extern char g_comment_start[10];
......@@ -179,11 +187,14 @@ extern char g_opaque_type[10]; /* name for the opaque type */
* common utility functions
*/
extern TableInfo *dumpSchema(FILE *fout,
extern TableInfo *dumpSchema(Archive *fout,
int *numTablesPtr,
const char *tablename,
const bool acls);
extern void dumpSchemaIdx(FILE *fout,
const bool acls,
const bool oids,
const bool schemaOnly,
const bool dataOnly);
extern void dumpSchemaIdx(Archive *fout,
const char *tablename,
TableInfo *tblinfo,
int numTables);
......@@ -215,22 +226,23 @@ extern TableInfo *getTables(int *numTables, FuncInfo *finfo, int numFuncs);
extern InhInfo *getInherits(int *numInherits);
extern void getTableAttrs(TableInfo *tbinfo, int numTables);
extern IndInfo *getIndices(int *numIndices);
extern void dumpDBComment(FILE *outfile);
extern void dumpTypes(FILE *fout, FuncInfo *finfo, int numFuncs,
extern void dumpDBComment(Archive *outfile);
extern void dumpTypes(Archive *fout, FuncInfo *finfo, int numFuncs,
TypeInfo *tinfo, int numTypes);
extern void dumpProcLangs(FILE *fout, FuncInfo *finfo, int numFuncs,
extern void dumpProcLangs(Archive *fout, FuncInfo *finfo, int numFuncs,
TypeInfo *tinfo, int numTypes);
extern void dumpFuncs(FILE *fout, FuncInfo *finfo, int numFuncs,
extern void dumpFuncs(Archive *fout, FuncInfo *finfo, int numFuncs,
TypeInfo *tinfo, int numTypes);
extern void dumpAggs(FILE *fout, AggInfo *agginfo, int numAggregates,
extern void dumpAggs(Archive *fout, AggInfo *agginfo, int numAggregates,
TypeInfo *tinfo, int numTypes);
extern void dumpOprs(FILE *fout, OprInfo *agginfo, int numOperators,
extern void dumpOprs(Archive *fout, OprInfo *agginfo, int numOperators,
TypeInfo *tinfo, int numTypes);
extern void dumpTables(FILE *fout, TableInfo *tbinfo, int numTables,
extern void dumpTables(Archive *fout, TableInfo *tbinfo, int numTables,
InhInfo *inhinfo, int numInherits,
TypeInfo *tinfo, int numTypes, const char *tablename,
const bool acls);
extern void dumpIndices(FILE *fout, IndInfo *indinfo, int numIndices,
const bool acls, const bool oids,
const bool schemaOnly, const bool dataOnly);
extern void dumpIndices(Archive *fout, IndInfo *indinfo, int numIndices,
TableInfo *tbinfo, int numTables, const char *tablename);
extern const char *fmtId(const char *identifier, bool force_quotes);
......
......@@ -6,7 +6,7 @@
# and "pg_group" tables, which belong to the whole installation rather
# than any one individual database.
#
# $Header: /cvsroot/pgsql/src/bin/pg_dump/Attic/pg_dumpall.sh,v 1.1 2000/07/03 16:35:39 petere Exp $
# $Header: /cvsroot/pgsql/src/bin/pg_dump/Attic/pg_dumpall.sh,v 1.2 2000/07/04 14:25:28 momjian Exp $
CMDNAME=`basename $0`
......@@ -135,7 +135,7 @@ fi
PSQL="${PGPATH}/psql $connectopts"
PGDUMP="${PGPATH}/pg_dump $connectopts $pgdumpextraopts"
PGDUMP="${PGPATH}/pg_dump $connectopts $pgdumpextraopts -Fp"
echo "--"
......
/*-------------------------------------------------------------------------
*
* pg_restore.c
* pg_restore is a utility for extracting postgres database definitions
* from a backup archive created by pg_dump using the archiver
* interface.
*
* pg_restore will read the backup archive and
* dump out a script that reproduces
* the schema of the database in terms of
* user-defined types
* user-defined functions
* tables
* indices
* aggregates
* operators
* ACL - grant/revoke
*
* the output script is SQL that is understood by PostgreSQL
*
* Basic process in a restore operation is:
*
* Open the Archive and read the TOC.
* Set flags in TOC entries, and *maybe* reorder them.
* Generate script to stdout
* Exit
*
* Copyright (c) 2000, Philip Warner
* Rights are granted to use this software in any way so long
* as this notice is not removed.
*
* The author is not responsible for loss or damages that may
* result from its use.
*
*
* IDENTIFICATION
*
* Modifications - 28-Jun-2000 - pjw@rhyme.com.au
*
* Initial version. Command processing taken from original pg_dump.
*
*-------------------------------------------------------------------------
*/
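Stripped of option handling, the basic process described above corresponds to
a handful of archiver calls; the sketch below assumes the pg_backup.h
declarations match their usage in main() further down, and uses a hard-coded
file name and default options purely for illustration:

#include "pg_backup.h"

/* Minimal sketch: open an archive, optionally rearrange the TOC, and
 * emit the restore script to stdout via RestoreArchive(). */
static void restore_sketch(void)
{
	RestoreOptions *opts = NewRestoreOptions();
	Archive	   *AH;

	opts->format = archCustom;					/* assume a custom-format dump */

	AH = OpenArchive("backup.dump", opts->format);	/* hypothetical file name */

	MoveToEnd(AH, "TABLE DATA");				/* same reordering as -r below */
	MoveToEnd(AH, "INDEX");

	RestoreArchive(AH, opts);					/* SQL script goes to stdout */
	CloseArchive(AH);
}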
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <ctype.h>
/*
#include "postgres.h"
#include "access/htup.h"
#include "catalog/pg_type.h"
#include "catalog/pg_language.h"
#include "catalog/pg_index.h"
#include "catalog/pg_trigger.h"
#include "libpq-fe.h"
*/
#include "pg_backup.h"
#ifndef HAVE_STRDUP
#include "strdup.h"
#endif
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
#ifdef HAVE_GETOPT_H
#include <getopt.h>
#else
#include <unistd.h>
#endif
/* Forward decls */
static void usage(const char *progname);
static char* _cleanupName(char* name);
typedef struct option optType;
#ifdef HAVE_GETOPT_H
struct option cmdopts[] = {
{ "clean", 0, NULL, 'c' },
{ "data-only", 0, NULL, 'a' },
{ "file", 1, NULL, 'f' },
{ "format", 1, NULL, 'F' },
{ "function", 2, NULL, 'p' },
{ "index", 2, NULL, 'i'},
{ "list", 0, NULL, 'l'},
{ "no-acl", 0, NULL, 'x' },
{ "oid-order", 0, NULL, 'o'},
{ "orig-order", 0, NULL, 'O' },
{ "rearrange", 0, NULL, 'r'},
{ "schema-only", 0, NULL, 's' },
{ "table", 2, NULL, 't'},
{ "trigger", 2, NULL, 'T' },
{ "use-list", 1, NULL, 'u'},
{ "verbose", 0, NULL, 'v' },
{ NULL, 0, NULL, 0}
};
#endif
int main(int argc, char **argv)
{
RestoreOptions *opts;
char *progname;
int c;
Archive* AH;
char *fileSpec;
opts = NewRestoreOptions();
progname = *argv;
#ifdef HAVE_GETOPT_LONG
while ((c = getopt_long(argc, argv, "acf:F:i:loOp:rst:T:u:vx", cmdopts, NULL)) != EOF)
#else
while ((c = getopt(argc, argv, "acf:F:i:loOp:rst:T:u:vx")) != -1)
#endif
{
switch (c)
{
case 'a': /* Dump data only */
opts->dataOnly = 1;
break;
case 'c': /* clean (i.e., drop) schema prior to
* create */
opts->dropSchema = 1;
break;
case 'f': /* output file name */
opts->filename = strdup(optarg);
break;
case 'F':
if (strlen(optarg) != 0)
opts->formatName = strdup(optarg);
break;
case 'o':
opts->oidOrder = 1;
break;
case 'O':
opts->origOrder = 1;
break;
case 'r':
opts->rearrange = 1;
break;
case 'p': /* Function */
opts->selTypes = 1;
opts->selFunction = 1;
opts->functionNames = _cleanupName(optarg);
break;
case 'i': /* Index */
opts->selTypes = 1;
opts->selIndex = 1;
opts->indexNames = _cleanupName(optarg);
break;
case 'T': /* Trigger */
opts->selTypes = 1;
opts->selTrigger = 1;
opts->triggerNames = _cleanupName(optarg);
break;
case 's': /* dump schema only */
opts->schemaOnly = 1;
break;
case 't': /* Dump data for this table only */
opts->selTypes = 1;
opts->selTable = 1;
opts->tableNames = _cleanupName(optarg);
break;
case 'l': /* Dump the TOC summary */
opts->tocSummary = 1;
break;
case 'u': /* input TOC summary file name */
opts->tocFile = strdup(optarg);
break;
case 'v': /* verbose */
opts->verbose = 1;
break;
case 'x': /* skip ACL dump */
opts->aclsSkip = 1;
break;
default:
usage(progname);
break;
}
}
if (optind < argc) {
fileSpec = argv[optind];
} else {
fileSpec = NULL;
}
if (opts->formatName) {
switch (opts->formatName[0]) {
case 'c':
case 'C':
opts->format = archCustom;
break;
case 'f':
case 'F':
opts->format = archFiles;
break;
default:
fprintf(stderr, "%s: Unknown archive format '%s', please specify 'f' or 'c'\n", progname, opts->formatName);
exit (1);
}
}
AH = OpenArchive(fileSpec, opts->format);
if (opts->tocFile)
SortTocFromFile(AH, opts);
if (opts->oidOrder)
SortTocByOID(AH);
else if (opts->origOrder)
SortTocByID(AH);
if (opts->rearrange) {
MoveToEnd(AH, "TABLE DATA");
MoveToEnd(AH, "INDEX");
MoveToEnd(AH, "TRIGGER");
MoveToEnd(AH, "RULE");
MoveToEnd(AH, "ACL");
}
if (opts->tocSummary) {
PrintTOCSummary(AH, opts);
} else {
RestoreArchive(AH, opts);
}
CloseArchive(AH);
return 1;
}
static void usage(const char *progname)
{
#ifdef HAVE_GETOPT_LONG
fprintf(stderr,
"usage: %s [options] [backup file]\n"
" -a, --data-only \t dump out only the data, no schema\n"
" -c, --clean \t clean(drop) schema prior to create\n"
" -f filename \t script output filename\n"
" -F, --format {c|f} \t specify backup file format\n"
" -p, --function[=name] \t dump functions or named function\n"
" -i, --index[=name] \t dump indexes or named index\n"
" -l, --list \t dump summarized TOC for this file\n"
" -o, --oid-order \t dump in oid order\n"
" -O, --orig-order \t dump in original dump order\n"
" -r, --rearrange \t rearrange output to put indexes etc at end\n"
" -s, --schema-only \t dump out only the schema, no data\n"
" -t [table], --table[=table] \t dump for this table only\n"
" -T, --trigger[=name] \t dump triggers or named trigger\n"
" -u, --use-list filename \t use specified TOC for ordering output from this file\n"
" -v \t verbose\n"
" -x, --no-acl \t skip dumping of ACLs (grant/revoke)\n"
, progname);
#else
fprintf(stderr,
"usage: %s [options] [backup file]\n"
" -a \t dump out only the data, no schema\n"
" -c \t clean(drop) schema prior to create\n"
" -f filename NOT IMPLEMENTED \t script output filename\n"
" -F {c|f} \t specify backup file format\n"
" -p name \t dump functions or named function\n"
" -i name \t dump indexes or named index\n"
" -l \t dump summarized TOC for this file\n"
" -o \t dump in oid order\n"
" -O \t dump in original dump order\n"
" -r \t rearrange output to put indexes etc at end\n"
" -s \t dump out only the schema, no data\n"
" -t name \t dump for this table only\n"
" -T name \t dump triggers or named trigger\n"
" -u filename \t use specified TOC for ordering output from this file\n"
" -v \t verbose\n"
" -x \t skip dumping of ACLs (grant/revoke)\n"
, progname);
#endif
fprintf(stderr,
"\nIf [backup file] is not supplied, then standard input "
"is used.\n");
fprintf(stderr, "\n");
exit(1);
}
static char* _cleanupName(char* name)
{
int i;
if (!name)
return NULL;
if (strlen(name) == 0)
return NULL;
name = strdup(name);
if (name[0] == '"')
{
strcpy(name, &name[1]);
if (*(name + strlen(name) - 1) == '"')
*(name + strlen(name) - 1) = '\0';
}
/* otherwise, convert table name to lowercase... */
else
{
for (i = 0; name[i]; i++)
if (isascii((unsigned char) name[i]) && isupper(name[i]))
name[i] = tolower(name[i]);
}
return name;
}