- 27 May, 2012 5 commits
-
Bruce Momjian authored
-
Bruce Momjian authored
pg_catalog schema, even though they are not explicitly dumped (they are implicitly dumped, e.g. create language plperl).
-
Bruce Momjian authored
-
Magnus Hagander authored
Avoids the need for an external script in the most common scenario. Behavior can be overridden using the -n/--noloop command-line parameter.
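A rough, self-contained sketch of the loop-on-disconnect behavior described above. The option name comes from the message; the function names, retry interval, and overall structure are illustrative assumptions, not pg_basebackup's actual code.

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static bool noloop = false;        /* would be set by the -n/--noloop option */

    /* Stand-in for one streaming attempt: returns true on a clean, requested
     * shutdown, false if the connection was lost. */
    static bool
    stream_wal_once(void)
    {
        return false;
    }

    static void
    stream_wal(void)
    {
        for (;;)
        {
            if (stream_wal_once())
                break;                          /* finished normally */
            if (noloop)
            {
                fprintf(stderr, "stream ended; not retrying (--noloop)\n");
                break;
            }
            fprintf(stderr, "stream ended; retrying in 5 seconds\n");
            sleep(5);
        }
    }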
-
Magnus Hagander authored
Write the file to a temporary name and then rename() it into the permanent name, to ensure it can't end up half-written and corrupt in case of a crash during shutdown. Unlink the file after it has been read so it's removed from the data directory and not included in base backups going to replication slaves.
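The write-then-rename technique used here is a standard POSIX idiom; a minimal generic sketch follows (the function name, paths, and error handling are illustrative, not the actual pg_basebackup code). After the consumer has read the file it can simply unlink() it, matching the cleanup described above.

    #include <stdio.h>
    #include <unistd.h>

    /* Write "content" to "path" atomically: write a temporary file first, then
     * rename() it into place.  rename() is atomic on POSIX filesystems, so a
     * crash leaves either the old file or the complete new one, never a torn
     * half-written file. */
    static int
    write_file_atomically(const char *path, const char *tmppath, const char *content)
    {
        FILE *fp = fopen(tmppath, "w");

        if (fp == NULL)
            return -1;
        if (fputs(content, fp) == EOF || fflush(fp) != 0 || fsync(fileno(fp)) != 0)
        {
            fclose(fp);
            unlink(tmppath);
            return -1;
        }
        fclose(fp);
        return rename(tmppath, path);   /* atomically replaces any old version */
    }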
-
- 26 May, 2012 1 commit
-
Tom Lane authored
The only interesting-for-performance case wherein we force heapscan here is when we're rebuilding the relcache init file, and the only such case that is likely to be examining a catalog big enough to be syncscanned is RelationBuildTupleDesc. But the early-exit optimization in that code gets broken if we start the scan at a random place within the catalog, so that allowing syncscan is actually a big deoptimization if pg_attribute is large (at least for the normal case where the rows for core system catalogs have never been changed since initdb). Hence, prevent syncscan here. Per my testing pursuant to complaints from Jeff Frost and Greg Sabino Mullane, though neither of them seems to have actually hit this specific problem. Back-patch to 8.3, where syncscan was introduced.
-
- 25 May, 2012 5 commits
-
Tom Lane authored
Previously, casts to name could generate invalidly-encoded results. Also, make these functions match namein() more exactly, by consistently using palloc0() instead of ad-hoc zeroing code. Back-patch to all supported branches. Karl Schnaitter and Tom Lane
-
Tom Lane authored
The previous coding presented a significant bottleneck when dumping databases containing many thousands of schemas, since the total time spent searching would increase roughly as O(N^2) in the number of objects. Noted by Jeff Janes, though I rewrote his proposed patch to use the existing findObjectByOid infrastructure. Since this is a longstanding performance bug, backpatch to all supported versions.
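The fix amounts to looking objects up by OID through a sorted index instead of scanning linearly; a self-contained sketch of that idea, with simplified types and names that are not pg_dump's actual findObjectByOid code:

    #include <stdlib.h>

    typedef unsigned int Oid;

    typedef struct
    {
        Oid oid;
        /* ... other per-object fields ... */
    } DumpableObjectLite;

    /* Comparator on OID, usable with both qsort() and bsearch(). */
    static int
    oid_cmp(const void *a, const void *b)
    {
        Oid oa = ((const DumpableObjectLite *) a)->oid;
        Oid ob = ((const DumpableObjectLite *) b)->oid;

        return (oa > ob) - (oa < ob);
    }

    /* O(log N) lookup in an array previously sorted with
     * qsort(objs, n, sizeof(*objs), oid_cmp).  Repeating this for thousands
     * of objects stays cheap, unlike an O(N) scan per lookup, which is what
     * made the old approach O(N^2) overall. */
    static DumpableObjectLite *
    find_object_by_oid(DumpableObjectLite *objs, size_t n, Oid oid)
    {
        DumpableObjectLite key;

        key.oid = oid;
        return bsearch(&key, objs, n, sizeof(DumpableObjectLite), oid_cmp);
    }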
-
Bruce Momjian authored
-
Magnus Hagander authored
When backing up from a standby server, the backup process will not automatically switch to a new xlog segment. So we must accept a partially transferred xlog file in this case, but rename it into position anyway. In passing, merge the two callbacks for segment end and stop stream into a single callback, since their implementations were close to identical, and rename this callback to reflect that it stops streaming rather than continues it. Patch by Magnus Hagander, review by Fujii Masao
-
Bruce Momjian authored
start/stop output, to fix file share error reported by Edmund Horner
-
- 24 May, 2012 6 commits
-
Bruce Momjian authored
document fix of double counting and read/write count addition, per Peter Geoghegan
-
Bruce Momjian authored
Geoghegan
-
Bruce Momjian authored
-
Tom Lane authored
zaptreesubs() was coded to unconditionally reset a capture subre's corresponding pmatch[] entry. However, in regexes without backrefs, that array is caller-supplied and might not have as many entries as the regex has capturing parens. So check the array length and do nothing if there is no corresponding entry, much as subset() does. Failure to check this resulted in a stack clobber in the case reported by Marko Kreen. This bug appears to have been latent in the regex library from the beginning. It was not exposed because find() called dissect() not cdissect(), and the dissect() code path didn't ever call zaptreesubs() (formerly zapmem()). When I unified dissect() and cdissect() in commit 4dd78bf3, the problem was exposed. Now that I've seen this, I'm rather suspicious that we might need to back-patch it; but will refrain for now, for lack of evidence that the case can be hit in the previous coding.
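The essence of the fix is a bounds check before touching a caller-supplied match array; a generic, self-contained sketch follows (the struct and function names mirror the description, not the regex library's actual code):

    #include <stddef.h>

    typedef struct
    {
        long rm_so;     /* start offset of the match, or -1 */
        long rm_eo;     /* end offset of the match, or -1 */
    } regmatch_lite;

    /* Reset the entry for capture group "subno", but only if the caller
     * actually supplied space for it.  Without the nmatch check, a regex with
     * more capturing parens than the caller's array has entries would write
     * past the end of the array (the stack clobber described above). */
    static void
    zap_capture(regmatch_lite *pmatch, size_t nmatch, size_t subno)
    {
        if (pmatch == NULL || subno >= nmatch)
            return;                 /* no corresponding entry: do nothing */
        pmatch[subno].rm_so = -1;
        pmatch[subno].rm_eo = -1;
    }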
-
Peter Eisentraut authored
For space reasons, drop SQL:1999 and SQL:2003. Only keep the latest two and SQL-92 for historical comparison.
-
Bruce Momjian authored
Windows, to avoid opening a file by multiple processes.
-
- 23 May, 2012 5 commits
-
Magnus Hagander authored
Fujii Masao
-
Peter Eisentraut authored
And align a bit better with the rest of the debug output.
-
Bruce Momjian authored
-
Bruce Momjian authored
-
Bruce Momjian authored
Korotkov, per Alexander Korotkov.
-
- 22 May, 2012 12 commits
-
Tom Lane authored
If a seqscan encounters many consecutive pages containing only dead tuples, it can remain in the loop in heapgettup for a long time, and there was no CHECK_FOR_INTERRUPTS anywhere in that loop. This meant there were real-world situations where a query would be effectively uncancelable for long stretches. Add a check placed to occur once per page, which should be enough to provide reasonable response time without adding any measurable overhead. Report and patch by Merlin Moncure (though I tweaked it a bit). Back-patch to all supported branches.
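A simplified, self-contained sketch of the "check once per page" placement; the page loop, tuple handling, and the interrupt check are stand-ins for PostgreSQL's real heapgettup loop and CHECK_FOR_INTERRUPTS macro.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    static volatile bool cancel_requested = false;  /* set from a signal handler in real code */

    static void
    check_for_interrupts(void)
    {
        if (cancel_requested)
        {
            fprintf(stderr, "query canceled\n");
            exit(1);
        }
    }

    /* Scan nblocks pages.  Even if every page contains only dead tuples and we
     * never return a row, the per-page check keeps the scan cancelable without
     * paying a per-tuple cost. */
    static void
    scan_pages(size_t nblocks)
    {
        for (size_t blkno = 0; blkno < nblocks; blkno++)
        {
            check_for_interrupts();     /* once per page, as in the fix above */
            /* ... examine all tuples on page blkno ... */
        }
    }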
-
Peter Eisentraut authored
Every time I read this I had doubts about whether the argument to the -x option should include the dot (yes). A small example should clarify this.
-
Bruce Momjian authored
-
Bruce Momjian authored
reindexed, not vacuumed (typo). Per report from Thomas REISS
-
Bruce Momjian authored
types, per Alexander Korotkov
-
Bruce Momjian authored
-
Robert Haas authored
When the column name is an unqualified name, rather than table.column, the error message complains about too many dotted names, which is wrong. Report by Peter Eisentraut based on examination of the sepgsql regression test output, but the problem also affects COMMENT. New wording as suggested by Tom Lane.
-
Robert Haas authored
Document some more things as incompatibilities, and improve wording of another item. Noah Misch
-
Robert Haas authored
Magnus Hagander, reviewed by Fujii Masao, with slight wording changes by me.
-
Robert Haas authored
In commit d526575f, we changed things so that buffer usage counts are incremented when the buffer is pinned, rather than when it is unpinned, but the README file didn't get the memo. Report by Amit Kapila.
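For reference, a toy model of what the corrected README describes: the usage counter is bumped, and capped, when a buffer is pinned, while unpinning only drops the reference count. The struct and constant are simplified stand-ins, not the actual buffer manager code.

    #define MAX_USAGE_COUNT 5          /* cap, analogous to BM_MAX_USAGE_COUNT */

    typedef struct
    {
        int refcount;                  /* how many backends have the buffer pinned */
        int usage_count;               /* feeds the clock-sweep replacement policy */
    } ToyBufferDesc;

    static void
    pin_buffer(ToyBufferDesc *buf)
    {
        buf->refcount++;
        if (buf->usage_count < MAX_USAGE_COUNT)
            buf->usage_count++;        /* incremented at pin time, per commit d526575f */
    }

    static void
    unpin_buffer(ToyBufferDesc *buf)
    {
        buf->refcount--;               /* usage_count is no longer touched here */
    }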
-
Tom Lane authored
There is no reason to do this as early as possible in postmaster startup, and good reason not to do it until we have completely created the postmaster's lock file, namely that it might contribute to pg_ctl thinking that postmaster startup has timed out. (This would require a rather unusual amount of time to be spent scanning temp file directories, but we have at least one field report of it happening reproducibly.) Back-patch to 9.1. Before that, pg_ctl didn't wait for additional info to be added to the lock file, so it wasn't a problem. Note that this is not a complete fix to the slow-start issue in 9.1, because we still had identify_system_timezone being run during postmaster start in 9.1. But that's at least a reasonably well-defined delay, with an easy workaround if needed, whereas the temp-files scan is not so predictable and cannot be avoided.
-
Tom Lane authored
The accurate info about what's in a lock file has been in miscadmin.h for some time, so let's just make this comment point there instead of maintaining a duplicative copy.
-
- 21 May, 2012 4 commits
-
Peter Eisentraut authored
The list was neither logical nor numerical nor alphabetical. Let's go with alphabetical.
-
Peter Eisentraut authored
For the record, fe-print.c is also missing, but it's sort of deprecated, and the string internationalization there has some issues, and it doesn't seem worth fixing that. So let's leave that out.
-
Tom Lane authored
Josh Kupershmidt
-
Tom Lane authored
Per discussion, we should explain that we follow RFC 3339 and not really the letter of the ISO 8601 spec for timestamp output format. Mostly Brendan Jurd's wording, though I tweaked it to clarify that we do take 'T' on input. Minor additional copy-editing and markup-tweaking, too.
-
- 19 May, 2012 2 commits
-
Peter Eisentraut authored
Detectable by gcc -Wlogical-op. Add two regression test cases that would previously allow incorrect values to pass.
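Not the actual PostgreSQL bug, just a generic illustration of the kind of mistake gcc -Wlogical-op flags: identical operands of a logical operator, typically a copy-paste slip that silently accepts or rejects the wrong values.

    #include <stdbool.h>

    /* Intended to accept only 'y' or 'n'; the copy-pasted second operand means
     * 'n' is rejected.  gcc -Wlogical-op reports "logical 'or' of equal
     * expressions" for the buggy line. */
    static bool
    is_yes_or_no_buggy(char c)
    {
        return c == 'y' || c == 'y';   /* bug: second test should be c == 'n' */
    }

    static bool
    is_yes_or_no_fixed(char c)
    {
        return c == 'y' || c == 'n';
    }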
-
Peter Eisentraut authored
initdb: Add -T option
oid2name: Put options in some non-random order
pg_dump: Put --section option in the right place
And some additional markup and terminology improvements.
-