- 03 Oct, 2012 6 commits
-
-
Alvaro Herrera authored
Apparently this was considered in the original code (see commit cec3b0a9) but I failed to notice that such entries would always be skipped by the database check at the start of the loop. Per bugs #7578 by Nikolay, #6116 by tushar.qa@gmail.com.
-
Heikki Linnakangas authored
This allows logging only some fraction of transactions, greatly reducing the amount of log generated. Tomas Vondra, reviewed by Robert Haas and Jeff Janes.
-
Heikki Linnakangas authored
You can now get the number of rows processed by a COPY statement in a PL/pgSQL function with "GET DIAGNOSTICS x = ROW_COUNT". Pavel Stehule, reviewed by Amit Kapila, with some editing by me.
-
Heikki Linnakangas authored
The comment explaining the naming of timeline history files was wrong, and the history file was not being archived. Pointed out by Fujii Masao.
-
Peter Eisentraut authored
-
Bruce Momjian authored
Backpatch to 9.2.
-
- 02 Oct, 2012 12 commits
-
-
Tom Lane authored
On some platforms these functions return NULL, rather than the more common practice of returning a pointer to a zero-sized block of memory. Hack our various wrapper functions to hide the difference by substituting a size request of 1. This is probably not so important for the callers, who should never touch the block anyway if they asked for size 0 --- but it's important for the wrapper functions themselves, which mistakenly treated the NULL result as an out-of-memory failure. This broke at least pg_dump for the case of no user-defined aggregates, as per report from Matthew Carrington.

Back-patch to 9.2 to fix the pg_dump issue. Given the lack of previous complaints, it seems likely that there is no live bug in previous releases, even though some of these functions were in place before that.
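As an illustration of the pattern described here, a minimal sketch of a "malloc or die" wrapper that substitutes a size request of 1; the function name and error handling are illustrative, not the actual frontend code:

```c
#include <stdio.h>
#include <stdlib.h>

/*
 * Sketch: treat a zero-size request as a request for 1 byte, so that
 * platforms whose malloc(0) returns NULL are not misread as out of memory.
 */
static void *
xmalloc(size_t size)
{
    void *ptr;

    if (size == 0)
        size = 1;               /* avoid platform-dependent malloc(0) */

    ptr = malloc(size);
    if (ptr == NULL)
    {
        fprintf(stderr, "out of memory\n");
        exit(EXIT_FAILURE);
    }
    return ptr;
}

int
main(void)
{
    char *p = xmalloc(0);       /* never NULL, even for a zero-size request */

    free(p);
    return 0;
}
```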
-
Alvaro Herrera authored
Instead of having each object type implement the catalog munging independently, centralize knowledge about how to do it and expand the existing table in objectaddress.c with enough data about each object type to support this operation.

Author: KaiGai Kohei. Tweaks by me. Reviewed by Robert Haas.
-
Tom Lane authored
We had a number of variants on the theme of "malloc or die", with the majority named like "pg_malloc", but by no means all. Standardize on the names pg_malloc, pg_malloc0, pg_realloc, pg_strdup. Get rid of pg_calloc entirely in favor of using pg_malloc0. This is an essentially cosmetic change, so no back-patch. (I did find a couple of places where psql and pg_dump were using plain malloc or strdup instead of the pg_ versions, but they don't look significant enough to bother back-patching.)
-
Heikki Linnakangas authored
Fujii Masao
-
Bruce Momjian authored
objects does not match between the old and new clusters. Backpatch to 9.2.
-
Bruce Momjian authored
entries are not dumped. This fixes an error caused by dropping/recreating the information_schema, but other failures were also possible. Backpatch to 9.2.
-
Bruce Momjian authored
comparison; also report the old/new values if they don't match. Backpatch to 9.2.
-
Heikki Linnakangas authored
timeval.t_sec is of type time_t, which is not always compatible with long. I'm not sure if this was just a harmless warning or a real bug, but this fixes it, anyway.
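To illustrate the portability point (a sketch, not the code this commit touched): time_t need not have the same width as long, so an explicit cast to a known-width type is the safe way to hand a seconds value to code expecting a particular integer type.

```c
#include <stdio.h>
#include <sys/time.h>

int
main(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);

    /*
     * tv_sec is a time_t, which may be narrower or wider than long on a
     * given platform; cast explicitly rather than assuming compatibility.
     */
    printf("seconds since epoch: %lld\n", (long long) tv.tv_sec);
    return 0;
}
```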
-
Andrew Dunstan authored
-
Heikki Linnakangas authored
Hopefully this makes the *BSD buildfarm animals happy.
-
Heikki Linnakangas authored
This is just refactoring, to make the functions accessible outside xlog.c. A followup patch will make use of that, to allow fetching timeline history files over streaming replication.
-
Heikki Linnakangas authored
This affects date_in(), and a couple of other functions that use DecodeDate(). Hitoshi Harada
-
- 01 Oct, 2012 4 commits
-
-
Bruce Momjian authored
don't accidentally remove it.
-
Alvaro Herrera authored
The error messages they generate are not portable enough. Also, since the only point of the alter_generic_1 expected file was to cover platforms with no collation support, it's now useless, so remove it.
-
Heikki Linnakangas authored
Jeff Janes
-
Tom Lane authored
On reflection (especially after noticing how many buildfarm critters have __builtin_types_compatible_p but not _Static_assert), it seems like we ought to try a bit harder to make these macros do something everywhere. The initial cut at it would have been no help to code that is compiled only on platforms without _Static_assert, for instance; and in any case not all our contributors do their initial coding on the latest gcc version.

Some googling about static assertions turns up quite a bit of prior art for making it work in compilers that lack _Static_assert. The method that seems closest to our needs involves defining a struct with a bit-field that has negative width if the assertion condition fails. There seems no reliable way to get the error message string to be output, but throwing a compile error with a confusing message is better than missing the problem altogether. In the same spirit, if we don't have __builtin_types_compatible_p we can at least insist that the variable have the same width as the type. This won't catch errors such as "wrong pointer type", but it's far better than nothing.

In addition to changing the macro definitions, adjust a compile-time-constant Assert in contrib/hstore to use StaticAssertStmt, so we can get some buildfarm coverage on whether that macro behaves sanely or not. There's surely more places that could be converted, but this is the first one I came across.
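A sketch of the fallback technique described here, assuming a pre-C11 compiler without _Static_assert; the macro names are illustrative, not the exact ones in c.h:

```c
/*
 * Compile-time assertion without _Static_assert: a bit-field whose width
 * becomes negative when the condition is false forces a compile error.
 * There is no way to surface a nice message, but the build still breaks.
 */
#define STATIC_ASSERT_SKETCH(condition) \
	((void) sizeof(struct { int static_assert_failure : (condition) ? 1 : -1; }))

/*
 * Fallback type check when __builtin_types_compatible_p is unavailable:
 * at least insist that the variable has the same width as the type.
 */
#define ASSERT_VARIABLE_WIDTH_SKETCH(varname, typename) \
	STATIC_ASSERT_SKETCH(sizeof(varname) == sizeof(typename))

int
main(void)
{
	long	counter = 0;

	STATIC_ASSERT_SKETCH(sizeof(long) >= sizeof(int));	/* compiles */
	ASSERT_VARIABLE_WIDTH_SKETCH(counter, long);		/* compiles */
	/* STATIC_ASSERT_SKETCH(sizeof(char) > 1);  -- would fail to compile */
	return (int) counter;
}
```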
-
- 30 Sep, 2012 3 commits
-
-
Tom Lane authored
Currently, the macros only work with fairly recent gcc versions, but there is room to expand them to other compilers that have comparable features. Heavily revised and autoconfiscated version of a patch by Andres Freund.
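For compilers that do have C11 _Static_assert (recent gcc, as noted here), the check can be written directly; a minimal sketch:

```c
/* Assuming a compiler with C11 _Static_assert, e.g. recent gcc. */
#include <limits.h>

_Static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");

int
main(void)
{
	return 0;
}
```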
-
Peter Eisentraut authored
-
Peter Eisentraut authored
There are apparently some incompatibilities, per buildfarm.
-
- 29 Sep, 2012 5 commits
-
-
Tom Lane authored
The tar output module did some very ugly and ultimately incorrect hacking on COPY commands to try to get them to work in the context of restoring a deconstructed tar archive. In particular, it would fail altogether for table names containing any upper-case characters, since it smashed the command string to lower-case before modifying it (and, just to add insult to injury, did that in a way that would fail in multibyte encodings). I don't see any particular value in being flexible about the case of the command keywords, since the string will just have been created by dumpTableData, so let's get rid of the whole case-folding thing.

Also, it doesn't seem to meet the POLA for the script to restore data only in COPY mode, so add \i commands to make it have comparable behavior in --inserts mode.

Noted while looking at the tar-output code in connection with Brian Weaver's patch.
-
Peter Eisentraut authored
Many distributors use this, so we might as well see the warnings too.
-
Peter Eisentraut authored
Since Python 2.2 is no longer supported, we can now use Py_RETURN_TRUE and Py_RETURN_FALSE instead of the old workaround.
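A sketch of the idiom in the Python C API; the example function is hypothetical, not PL/Python's actual code:

```c
#include <Python.h>

/*
 * Py_RETURN_TRUE / Py_RETURN_FALSE expand to "increment the refcount of
 * Py_True/Py_False and return it", replacing the older hand-written
 * workaround.
 */
static PyObject *
is_positive(PyObject *self, PyObject *args)
{
	long	value;

	if (!PyArg_ParseTuple(args, "l", &value))
		return NULL;

	if (value > 0)
		Py_RETURN_TRUE;
	else
		Py_RETURN_FALSE;
}
```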
-
Peter Eisentraut authored
oid is a numeric type, so transform it to the appropriate Python numeric type like the other ones.
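A minimal sketch of the kind of conversion involved (an assumed shape, not PL/Python's actual function): an Oid is an unsigned 32-bit integer, so it maps naturally onto a Python number.

```c
#include <Python.h>

/* Typedef and function name are illustrative only. */
typedef unsigned int Oid;

static PyObject *
oid_to_python(Oid value)
{
	return PyLong_FromUnsignedLong((unsigned long) value);
}
```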
-
Alvaro Herrera authored
The original only expected file failed to consider machines without non-default collation support. Per buildfarm. Also, move the test to another parallel group; the one it was originally put in is already full according to comments in the schedule file. Per note from Tom Lane.
-
- 28 Sep, 2012 4 commits
-
-
Andrew Dunstan authored
-
Alvaro Herrera authored
This makes refactoring of parts of the ALTER command safe(r) because we ensure no change in functionality. Author: KaiGai Kohei
-
Tom Lane authored
Both programs got the "magic" string wrong, causing standard-conforming tar implementations to believe the output was just legacy tar format without any POSIX extensions. This doesn't actually matter that much, especially since pg_dump failed to fill the POSIX fields anyway, but still there is little point in emitting tar format if we can't be compliant with the standard.

In addition, pg_dump failed to write the EOF marker correctly (there should be 2 blocks of zeroes not just one), pg_basebackup put the numeric group ID in the wrong place, and both programs had a pretty brain-dead idea of how to compute the checksum. Fix all that and improve the comments a bit.

pg_restore is modified to accept either the correct POSIX-compliant "magic" string or the previous value. This part of the change will need to be back-patched to avoid an unnecessary compatibility break when a previous version tries to read tar-format output from 9.3 pg_dump.

Brian Weaver and Tom Lane
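To make the conventions concrete, a standalone sketch of the POSIX ("ustar") rules described above, using the standard header offsets; this is illustrative, not pg_dump's or pg_basebackup's code:

```c
#include <string.h>

#define TAR_BLOCK_SIZE 512

/*
 * POSIX ustar conventions:
 *  - magic is "ustar" plus a NUL at offset 257, version "00" at offset 263;
 *  - the checksum is the byte sum of the whole 512-byte header, with the
 *    8-byte checksum field (offsets 148..155) counted as spaces;
 *  - end of archive is two 512-byte blocks of zeroes.
 */
static void
tar_set_magic(unsigned char *header)
{
	memcpy(&header[257], "ustar", 6);	/* 6 bytes, including the NUL */
	memcpy(&header[263], "00", 2);		/* 2 bytes, no NUL */
}

static unsigned int
tar_checksum(const unsigned char *header)
{
	unsigned int sum = 0;
	int			i;

	for (i = 0; i < TAR_BLOCK_SIZE; i++)
		sum += (i >= 148 && i < 156) ? ' ' : header[i];

	return sum;
}

static void
tar_write_eof(unsigned char *buf)
{
	/* Caller supplies a buffer of at least 2 * TAR_BLOCK_SIZE bytes. */
	memset(buf, 0, 2 * TAR_BLOCK_SIZE);
}
```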
-
Peter Eisentraut authored
-
- 27 Sep, 2012 6 commits
-
-
Tom Lane authored
This fixes another error in commit 9e8da0f7. I neglected to make the mark/restore functionality save and restore the current set of array key values, which led to strange behavior if an IndexScan with ScalarArrayOpExpr quals was used as the inner side of a mergejoin. Per bug #7570 from Melese Tesfaye.
-
Alvaro Herrera authored
This worked fine for superusers, but not for ordinary users trying to cancel their own processes. Tweak the order the checks are done in so that we correctly return SIGNAL_BACKEND_ERROR (which current callers know to ignore without erroring out) so that an ordinary user can loop through a resultset without fearing that a process might exit in the middle of said looping -- causing the remaining processes to go unsignalled.

Incidentally, the last in-core caller of IsBackendPid() is now gone. However, the function is exported and must remain in place, because there are plenty of callers in external modules.

Author: Josh Kupershmidt. Reviewed by Noah Misch.
-
Tom Lane authored
This script is a bit slow, but still it only takes a fraction of the time the bison run does, so the overhead doesn't seem intolerable. And we definitely need some mechanical aid here, because people keep missing the need to add new keywords to the appropriate keyword-list production. While at it, I moved check_keywords.pl from src/tools into src/backend/parser where it's actually used, and did some very minor cleanup on the script.
-
Peter Eisentraut authored
This mirrors the behavior of pg_regress and makes the test run much faster.
-
Tom Lane authored
There were assorted places where unreserved keywords were not treated the same as T_WORD (that is, a random unrecognized identifier). Fix them. It might not always be possible to allow this, but it is in all these places, so I don't see any downside. Per gripe from Jim Wilson. Arguably this is a bug fix, but given the lack of other complaints and the ease of working around it (just quote the word), I won't risk back-patching.
-
Tom Lane authored
Once again, somebody who ought to know better forgot this. We really need some automated cross-check on the keyword-list productions, I think. Per report from Brian Weaver.
-