- 30 May, 2012 5 commits
-
-
Robert Haas authored
First, the previous code failed to account for the fact that, during Hot Standby operation, the startup process takes AccessExclusiveLocks on relations without setting MyDatabaseId. This resulted in fast path strong lock counts failing to be incremented when the startup process took locks, which in turn allowed conflicting lock requests to succeed when they should not have. Report by Erik Rijkers, diagnosis by Heikki Linnakangas. Second, LockReleaseAll() failed to honor the allLocks and lockmethodid restrictions with respect to fast-path locks. It's not clear to me whether this produces any user-visible breakage at the moment, but it's certainly wrong. Rearrange order of operations in LockReleaseAll to fix. Noted by Tom Lane.
-
Tom Lane authored
Overly tight coding caused the password transformation loop to stop examining input once it had processed a byte equal to 0x80. Thus, if the given password string contained such a byte (which is possible though not highly likely in UTF8, and perhaps also in other non-ASCII encodings), all subsequent characters would not contribute to the hash, making the password much weaker than it appears on the surface. This would only affect cases where applications used DES crypt() to encode passwords before storing them in the database. If a weak password has been created in this fashion, the hash will stop matching after this update has been applied, so it will be easy to tell if any passwords were unexpectedly weak. Changing to a different password would be a good idea in such a case. (Since DES has been considered inadequately secure for some time, changing to a different encryption algorithm can also be recommended.) This code, and the bug, are shared with at least PHP, FreeBSD, and OpenBSD. Since the other projects have already published their fixes, there is no point in trying to keep this commit private. This bug has been assigned CVE-2012-2143, and credit for its discovery goes to Rubin Xu and Joseph Bonneau.
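For illustration, a small standalone sketch of this class of bug, not the actual crypt() source: each password byte is doubled before use, and storing the doubled value back into a char shifts the high bit out, so a 0x80 byte becomes 0 and is mistaken for the end of the password. Function and buffer names are invented.

    #include <stdio.h>
    #include <string.h>

    static int
    count_mixed_bytes(const char *key)
    {
        char    keybuf[64];
        int     n = 0;

        while (n < (int) sizeof(keybuf) - 1)
        {
            /* buggy pattern: 0x80 << 1 truncates to 0x00 when stored in a char */
            if ((keybuf[n] = *key << 1) == 0)
                break;              /* looks like end of string, but may be 0x80 */
            key++;
            n++;
        }
        keybuf[n] = 0;
        return n;                   /* number of password bytes actually mixed in */
    }

    int
    main(void)
    {
        /* 0x80 can legitimately appear inside a UTF-8 password */
        const char pw[] = {'a', 'b', (char) 0x80, 'c', 'd', 0};

        /* Only "ab" contributes; "cd" is silently ignored, weakening the hash. */
        printf("bytes mixed: %d of %d\n", count_mixed_bytes(pw), (int) strlen(pw));
        return 0;
    }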
-
Heikki Linnakangas authored
We used to mimic the way a stack is constructed when descending the tree during normal GiST inserts, but that was quite complicated during a buffered build. It was also wrong: in GiST, the left-to-right relationships on different levels might not match each other, so that when you know the parent of a child page, you won't necessarily find the parent of the page to the right of the child page by following the rightlinks at the parent level. This sometimes led to "could not re-find parent" errors while building a GiST index. We now use a simple hash table to track the parent of every internal page. Whenever a page is split, and downlinks are moved from one page to another, we update the hash table accordingly. This is also better for performance than the old method, as we never need to move right to re-find the parent page, which could take a significant amount of time for buffers that were created much earlier in the index build.
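A minimal standalone sketch of the bookkeeping idea, not the actual GiST build code (names and the fixed-size open-addressing table are invented, and block number 0 is used as the "empty slot" marker): map each child block number to its current parent, and update the mapping whenever a split moves downlinks, so the parent can always be found without walking rightlinks.

    #include <stdint.h>
    #include <stdio.h>

    #define MAP_SIZE 1024           /* must be a power of two for the mask below */

    typedef struct ParentEntry
    {
        uint32_t child;             /* child block number (0 = unused slot) */
        uint32_t parent;            /* its current parent block number */
    } ParentEntry;

    static ParentEntry parent_map[MAP_SIZE];

    static void
    remember_parent(uint32_t child, uint32_t parent)
    {
        uint32_t i = (child * 2654435761u) & (MAP_SIZE - 1);

        while (parent_map[i].child != 0 && parent_map[i].child != child)
            i = (i + 1) & (MAP_SIZE - 1);   /* linear probing */
        parent_map[i].child = child;
        parent_map[i].parent = parent;      /* insert or update */
    }

    static uint32_t
    lookup_parent(uint32_t child)
    {
        uint32_t i = (child * 2654435761u) & (MAP_SIZE - 1);

        while (parent_map[i].child != 0)
        {
            if (parent_map[i].child == child)
                return parent_map[i].parent;
            i = (i + 1) & (MAP_SIZE - 1);
        }
        return 0;                   /* not found */
    }

    int
    main(void)
    {
        remember_parent(10, 3);     /* page 10 hangs under page 3 */
        remember_parent(11, 3);

        /* page 3 splits and page 11's downlink moves to new page 7 */
        remember_parent(11, 7);

        printf("parent of 10 = %u, parent of 11 = %u\n",
               lookup_parent(10), lookup_parent(11));
        return 0;
    }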
-
Heikki Linnakangas authored
There were two bugs here: We forgot to call the gistFreeBuildBuffers() function at the end of the build, and we passed interXact == true to BufFileCreateTemp, so the file wasn't automatically cleaned up at end-of-transaction either.
-
Tom Lane authored
The initial implementation of pg_dump's --section option supposed that the existing --schema-only and --data-only options could be made equivalent to --section settings. This is wrong, though, due to dubious but long since set-in-stone decisions about where to dump SEQUENCE SET items, as seen in bug report from Martin Pitt. (And I'm not totally convinced there weren't other bugs, either.) Undo that coupling and instead drive --section filtering off current-section state tracked as we scan through the TOC list to call _tocEntryRequired(). To make sure those decisions don't shift around and hopefully save a few cycles, run _tocEntryRequired() only once per TOC entry and save the result in a new TOC field. This required minor rejiggering of ACL handling but also allows a far cleaner implementation of inhibit_data_for_failed_table. Also, to ensure that pg_dump and pg_restore have the same behavior with respect to the --section switches, add _tocEntryRequired() filtering to WriteToc() and WriteDataChunks(), rather than trying to implement section filtering in an entirely orthogonal way in dumpDumpableObject(). This required adjusting the handling of the special ENCODING and STDSTRINGS items, but they were pretty weird before anyway. Minor other code review for the patch, too.
-
- 29 May, 2012 3 commits
-
-
Heikki Linnakangas authored
The result of (maintenance_work_mem * 1024) / BLCKSZ doesn't fit in a signed 32-bit integer, if maintenance_work_mem >= 2GB. Use double instead. And while we're at it, write the calculations in an easier-to-understand form, with the intermediate steps written out and commented.
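A small standalone arithmetic demo of the overflow, assuming the default BLCKSZ of 8192 and maintenance_work_mem expressed in kB as the GUC machinery does:

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int32_t maintenance_work_mem = 2 * 1024 * 1024;     /* 2GB, in kB */
        int32_t blcksz = 8192;

        /*
         * The old formula effectively did this in 32-bit signed arithmetic.
         * 2097152 * 1024 = 2^31 does not fit in int32, so the product wraps
         * (computed via uint32_t here to keep the demo free of undefined
         * behaviour).
         */
        int32_t wrapped = (int32_t) ((uint32_t) maintenance_work_mem * 1024) / blcksz;

        /* The fix: do the calculation in double before dividing. */
        double  pages = (double) maintenance_work_mem * 1024.0 / blcksz;

        printf("32-bit result: %d pages (nonsense), double result: %.0f pages\n",
               wrapped, pages);
        return 0;
    }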
-
Tom Lane authored
AbortOutOfAnyTransaction failed to do anything if the state it saw on entry corresponded to failing partway through StartTransaction. I fixed AbortCurrentTransaction to cope with that case way back in commit 60b2444c, but evidently overlooked that AbortOutOfAnyTransaction should do likewise. Back-patch to all supported branches. It's not clear that this omission has any more-than-cosmetic consequences, but it's also not clear that it doesn't, so back-patching seems the least risky choice.
-
Tom Lane authored
This patch fixes three places (which AFAICT is all of them) where runtime was O(N^2) in the number of TOC entries, by using an index array to replace linear searches of the TOC list. This performance issue is a bit less bad than those recently fixed, because it depends on the number of items dumped not the number in the source database, so the problem can be dodged by doing partial dumps. The previous coding already had an instance of one of the two index arrays needed, but it was only calculated in parallel-restore cases; now we need it all the time. I also chose to move the arrays into the ArchiveHandle data structure, to make this code a bit more ready for the day that we try to sling multiple ArchiveHandles around in pg_dump or pg_restore. Since we still need some server-side work before pg_dump can really cope nicely with tens of thousands of tables, there's probably little point in back-patching.
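A standalone sketch of the technique, with structures only loosely modeled on pg_dump's: build an array of pointers sorted by dump ID once, then resolve each lookup with a binary search, turning an O(N) scan of the TOC list into O(log N) per lookup.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct TocEntry
    {
        int         dumpId;
        const char *desc;
    } TocEntry;

    static int
    by_dump_id(const void *a, const void *b)
    {
        const TocEntry *ta = *(TocEntry *const *) a;
        const TocEntry *tb = *(TocEntry *const *) b;

        return (ta->dumpId > tb->dumpId) - (ta->dumpId < tb->dumpId);
    }

    static TocEntry *
    find_by_dump_id(TocEntry **index, int n, int dumpId)
    {
        int     lo = 0, hi = n - 1;

        while (lo <= hi)
        {
            int     mid = (lo + hi) / 2;

            if (index[mid]->dumpId == dumpId)
                return index[mid];
            if (index[mid]->dumpId < dumpId)
                lo = mid + 1;
            else
                hi = mid - 1;
        }
        return NULL;
    }

    int
    main(void)
    {
        TocEntry    entries[] = {{7, "TABLE a"}, {3, "SCHEMA s"}, {12, "INDEX i"}};
        int         n = 3;
        TocEntry   *index[3];
        TocEntry   *te;

        for (int i = 0; i < n; i++)          /* build the index array once */
            index[i] = &entries[i];
        qsort(index, n, sizeof(TocEntry *), by_dump_id);

        te = find_by_dump_id(index, n, 12);
        printf("dump ID 12 -> %s\n", te ? te->desc : "not found");
        return 0;
    }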
-
- 28 May, 2012 1 commit
-
-
Peter Eisentraut authored
Drop special handling of host component with slashes to mean Unix-domain socket. Specify it as a separate parameter or using percent-encoding now. Allow omitting username, password, and port even if the corresponding designators are present in the URI. Handle percent-encoding in query parameter keywords. Alex Shulgin, with some documentation improvements by myself
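For illustration, a tiny libpq client using the URI syntax with a percent-encoded Unix-domain socket directory; the socket path and database name here are made-up examples.

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /*
         * With the slash shorthand removed, a Unix-domain socket directory is
         * given either percent-encoded in the host part of the URI, as here,
         * or as an explicit query parameter, e.g.
         * "postgresql:///mydb?host=/var/run/postgresql".
         */
        PGconn *conn = PQconnectdb("postgresql://%2Fvar%2Frun%2Fpostgresql/mydb");

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 0;
    }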
-
- 27 May, 2012 9 commits
-
-
Peter Eisentraut authored
Set E081 Basic Privileges to supported, since by the letter of it, we support it, even though not all possible forms of USAGE privileges are implemented.
-
Peter Eisentraut authored
This was from a time when readline support wasn't standard. And it doesn't help analyzing current line editing library problems.
-
Peter Eisentraut authored
This is related to aa90e148, but this code is only used under -DLINUX_OOM_ADJ, so it was apparently overlooked then.
-
Peter Eisentraut authored
Use SvREFCNT_inc_simple_void() instead of SvREFCNT_inc() to avoid warning about unused return value.
-
Bruce Momjian authored
-
Bruce Momjian authored
pg_catalog schema, even though they are not explicitly dumped (they are implicitly dumped, e.g. create language plperl).
-
Bruce Momjian authored
-
Magnus Hagander authored
Avoids the need for an external script in the most common scenario. Behavior can be overridden using the -n/--noloop command-line parameter.
-
Magnus Hagander authored
Write the file to a temporary name and then rename() it into the permanent name, to ensure it can't end up half-written and corrupt in case of a crash during shutdown. Unlink the file after it has been read so it's removed from the data directory and not included in base backups going to replication slaves.
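A standalone sketch of the write-then-rename pattern described above (file names are made up): write the data under a temporary name, fsync it, then rename() it into place, so a crash mid-write can never leave a half-written file under the permanent name.

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/types.h>

    static int
    write_file_atomically(const char *path, const char *tmppath,
                          const char *data, size_t len)
    {
        int     fd = open(tmppath, O_WRONLY | O_CREAT | O_TRUNC, 0600);

        if (fd < 0)
            return -1;
        if (write(fd, data, len) != (ssize_t) len || fsync(fd) != 0)
        {
            close(fd);
            unlink(tmppath);
            return -1;
        }
        close(fd);

        /* rename() is atomic: readers see either the old file or the new one */
        return rename(tmppath, path);
    }

    int
    main(void)
    {
        const char *data = "some state to persist across shutdown\n";

        if (write_file_atomically("state.dat", "state.dat.tmp",
                                  data, strlen(data)) != 0)
            perror("write_file_atomically");
        return 0;
    }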
-
- 26 May, 2012 1 commit
-
-
Tom Lane authored
The only interesting-for-performance case wherein we force heapscan here is when we're rebuilding the relcache init file, and the only such case that is likely to be examining a catalog big enough to be syncscanned is RelationBuildTupleDesc. But the early-exit optimization in that code gets broken if we start the scan at a random place within the catalog, so that allowing syncscan is actually a big deoptimization if pg_attribute is large (at least for the normal case where the rows for core system catalogs have never been changed since initdb). Hence, prevent syncscan here. Per my testing pursuant to complaints from Jeff Frost and Greg Sabino Mullane, though neither of them seem to have actually hit this specific problem. Back-patch to 8.3, where syncscan was introduced.
-
- 25 May, 2012 5 commits
-
-
Tom Lane authored
Previously, casts to name could generate invalidly-encoded results. Also, make these functions match namein() more exactly, by consistently using palloc0() instead of ad-hoc zeroing code. Back-patch to all supported branches. Karl Schnaitter and Tom Lane
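A standalone sketch of the encoding-safety idea, not the backend functions themselves: when truncating a string to fit a fixed-size Name buffer, back up to a UTF-8 character boundary instead of cutting a multibyte sequence in half, and build the result in a zero-filled buffer (analogous to using palloc0) so it is always NUL-terminated. The source file is assumed to be UTF-8 encoded.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NAMEDATALEN 64          /* same default length as the backend's Name */

    static char *
    str_to_name(const char *s)
    {
        size_t  len = strlen(s);
        char   *name = calloc(1, NAMEDATALEN);   /* zero-filled, like palloc0() */

        if (len > NAMEDATALEN - 1)
        {
            len = NAMEDATALEN - 1;
            /* don't cut in the middle of a UTF-8 multibyte sequence */
            while (len > 0 && (s[len] & 0xC0) == 0x80)
                len--;
        }
        memcpy(name, s, len);
        return name;
    }

    int
    main(void)
    {
        char    longname[80];
        char   *name;

        memset(longname, 'x', 60);
        strcpy(longname + 60, "äöü");        /* 66 bytes total in UTF-8 */

        name = str_to_name(longname);
        printf("kept %zu of %zu bytes: %s\n", strlen(name), strlen(longname), name);
        free(name);
        return 0;
    }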
-
Tom Lane authored
The previous coding presented a significant bottleneck when dumping databases containing many thousands of schemas, since the total time spent searching would increase roughly as O(N^2) in the number of objects. Noted by Jeff Janes, though I rewrote his proposed patch to use the existing findObjectByOid infrastructure. Since this is a longstanding performance bug, backpatch to all supported versions.
-
Bruce Momjian authored
-
Magnus Hagander authored
When backing up from a standby server, the backup process will not automatically switch xlog segment. So we must accept a partially transferred xlog file in this case, but rename it into position anyway. In passing, merge the two callbacks for segment end and stop stream into a single callback, since their implementations were close to identical, and rename this callback to reflect that it stops streaming rather than continues it. Patch by Magnus Hagander, review by Fujii Masao
-
Bruce Momjian authored
start/stop output, to fix file share error reported by Edmund Horner
-
- 24 May, 2012 6 commits
-
-
Bruce Momjian authored
document fix of double counting and read/write count addition, per Peter Geoghegan
-
Bruce Momjian authored
Geoghegan
-
Bruce Momjian authored
-
Tom Lane authored
zaptreesubs() was coded to unconditionally reset a capture subre's corresponding pmatch[] entry. However, in regexes without backrefs, that array is caller-supplied and might not have as many entries as the regex has capturing parens. So check the array length and do nothing if there is no corresponding entry, much as subset() does. Failure to check this resulted in a stack clobber in the case reported by Marko Kreen. This bug appears to have been latent in the regex library from the beginning. It was not exposed because find() called dissect() not cdissect(), and the dissect() code path didn't ever call zaptreesubs() (formerly zapmem()). When I unified dissect() and cdissect() in commit 4dd78bf3, the problem was exposed. Now that I've seen this, I'm rather suspicious that we might need to back-patch it; but will refrain for now, for lack of evidence that the case can be hit in the previous coding.
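A standalone sketch of the defensive pattern involved (the struct mirrors the usual regmatch_t layout; the function is invented for illustration): since the caller-supplied pmatch[] array may have fewer entries than the regex has capturing parens, every write must be bounds-checked rather than assumed to fit.

    #include <stddef.h>
    #include <stdio.h>

    typedef struct
    {
        long    rm_so;              /* start offset of the match */
        long    rm_eo;              /* end offset of the match */
    } match_slot;

    static void
    record_capture(match_slot *pmatch, size_t nmatch,
                   size_t capno, long start, long end)
    {
        if (capno >= nmatch)
            return;                 /* no slot for this capture: do nothing */
        pmatch[capno].rm_so = start;
        pmatch[capno].rm_eo = end;
    }

    int
    main(void)
    {
        match_slot  pmatch[2];      /* caller asked for 2 slots ... */

        /* ... but the regex has 3 capturing groups; the third write is a no-op */
        record_capture(pmatch, 2, 0, 0, 10);
        record_capture(pmatch, 2, 1, 2, 4);
        record_capture(pmatch, 2, 2, 5, 7);     /* would have clobbered the stack */

        printf("group 1: [%ld, %ld)\n", pmatch[1].rm_so, pmatch[1].rm_eo);
        return 0;
    }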
-
Peter Eisentraut authored
For space reasons, drop SQL:1999 and SQL:2003. Only keep the latest two and SQL-92 for historical comparison.
-
Bruce Momjian authored
Windows, to avoid opening a file by multiple processes.
-
- 23 May, 2012 5 commits
-
-
Magnus Hagander authored
Fujii Masao
-
Peter Eisentraut authored
And align a bit better with the rest of the debug output.
-
Bruce Momjian authored
-
Bruce Momjian authored
-
Bruce Momjian authored
Korotkov, per Alexander Korotkov.
-
- 22 May, 2012 5 commits
-
-
Tom Lane authored
If a seqscan encounters many consecutive pages containing only dead tuples, it can remain in the loop in heapgettup for a long time, and there was no CHECK_FOR_INTERRUPTS anywhere in that loop. This meant there were real-world situations where a query would be effectively uncancelable for long stretches. Add a check placed to occur once per page, which should be enough to provide reasonable response time without adding any measurable overhead. Report and patch by Merlin Moncure (though I tweaked it a bit). Back-patch to all supported branches.
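A standalone analogue of the fix, not the backend's heapgettup itself: a long scan that polls for a cancel request once per "page" of work rather than once per tuple, which is roughly what placing CHECK_FOR_INTERRUPTS() at page boundaries achieves, keeping the loop cancelable with negligible overhead. The page and tuple counts are arbitrary.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t cancel_pending = 0;

    static void
    handle_sigint(int signo)
    {
        (void) signo;
        cancel_pending = 1;
    }

    int
    main(void)
    {
        long    pages_scanned = 0;

        signal(SIGINT, handle_sigint);

        for (long page = 0; page < 1000000; page++)
        {
            /* once per page: cheap check, keeps the scan responsive to Ctrl-C */
            if (cancel_pending)
            {
                fprintf(stderr, "canceling scan at page %ld\n", page);
                break;
            }

            for (int tuple = 0; tuple < 100; tuple++)
            {
                /* ... examine a tuple; all dead, so nothing is returned ... */
                usleep(10);
            }
            pages_scanned++;
        }

        printf("scanned %ld pages\n", pages_scanned);
        return 0;
    }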
-
Peter Eisentraut authored
Every time I read this I had doubts about whether the argument to the -x option should include the dot (yes). A small example should clarify this.
-
Bruce Momjian authored
-
Bruce Momjian authored
reindexed, not vacuumed (typo). Per report from Thomas REISS
-
Bruce Momjian authored
types, per Alexander Korotkov
-