- 03 Oct, 2013 2 commits
-
Peter Eisentraut authored
-
Peter Eisentraut authored
The cancel handler was uselessly set up even before the first connection was opened. By setting it up afterwards, the user can use Ctrl+C to abort psql if the initial connection attempt hangs. Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com> Reviewed-by: Ryan Kelly <rpkelly22@gmail.com>
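A minimal sketch of the new ordering, assuming a setup_cancel_handler() like psql's (names and placement illustrative, not the committed hunk):

```c
#include <stdio.h>
#include "libpq-fe.h"

extern void setup_cancel_handler(void);    /* psql-style SIGINT handler setup (assumed) */

/*
 * Because the cancel handler is installed only after the connection
 * attempt, Ctrl+C during a hanging PQconnectdb() still terminates the
 * process normally instead of being swallowed.
 */
static PGconn *
connect_then_arm_cancel(const char *conninfo)
{
    PGconn *conn = PQconnectdb(conninfo);  /* Ctrl+C can still abort this */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return NULL;
    }

    setup_cancel_handler();    /* from here on, Ctrl+C cancels queries */
    return conn;
}
```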
-
- 02 Oct, 2013 2 commits
-
Bruce Momjian authored
-
Magnus Hagander authored
-
- 01 Oct, 2013 3 commits
-
Bruce Momjian authored
-
Alvaro Herrera authored
This is in support of a future REINDEX CONCURRENTLY feature. Michael Paquier
-
Alvaro Herrera authored
With the PGXS boilerplate in place, pg_xlogdump currently fails with an ominous error message that certain targets cannot be built because certain files do not exist. Remove that and instead throw a quick error message alerting the user to the actual problem, which should be easier to diagnose than the status quo. Andres Freund
-
- 30 Sep, 2013 6 commits
-
Andrew Dunstan authored
Error noted by Andres Freund.
-
Andrew Dunstan authored
Push dependency on installdirs down to individual targets. Christoph Berg
-
Heikki Linnakangas authored
Previously bms_add_member() would palloc a whole new copy of the existing set, copy the words, and pfree the old one. repalloc() is potentially much faster, and more importantly, this is less surprising if CurrentMemoryContext is not the same as the context the old set is in. bms_add_member() still allocates a new bitmapset in CurrentMemoryContext if NULL is passed as argument, but that is a lot less likely to induce bugs. Nicholas White.
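A sketch of the repalloc-based growth path; WORDNUM, BITNUM, BITMAPSET_SIZE and the Bitmapset layout follow src/backend/nodes/bitmapset.c, but this is illustrative, not the committed code verbatim:

```c
Bitmapset *
bms_add_member(Bitmapset *a, int x)
{
    int     wordnum = WORDNUM(x);

    if (a == NULL)
        return bms_make_singleton(x);   /* still uses CurrentMemoryContext */

    if (wordnum >= a->nwords)
    {
        int     oldnwords = a->nwords;
        int     i;

        /* Grow the set in place, staying in its original context. */
        a = (Bitmapset *) repalloc(a, BITMAPSET_SIZE(wordnum + 1));
        a->nwords = wordnum + 1;
        for (i = oldnwords; i < a->nwords; i++)
            a->words[i] = 0;            /* zero only the newly added words */
    }

    a->words[wordnum] |= ((bitmapword) 1 << BITNUM(x));
    return a;
}
```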
-
Heikki Linnakangas authored
lo_open registers the currently active snapshot, and checks if the large object exists after that. Normally, snapshots registered by lo_open are unregistered at end of transaction when the lo descriptor is closed, but if we error out before the lo descriptor is added to the list of open descriptors, it is leaked. Fix by moving the snapshot registration to after checking if the large object exists. Reported by Pavel Stehule. Backpatch to 8.4. The snapshot registration system was introduced in 8.4, so prior versions are not affected (and not supported, anyway).
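A sketch of the reordering, simplified from the large-object code (not the committed hunk verbatim):

```c
static Snapshot
lo_open_snapshot(Oid lobjId)
{
    /* Check existence first, so erroring out here can no longer leak a
     * registered snapshot. */
    if (!LargeObjectExists(lobjId))
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_OBJECT),
                 errmsg("large object %u does not exist", lobjId)));

    /* Only now register the snapshot; the descriptor created by the
     * caller takes ownership, and normal end-of-transaction cleanup of
     * open descriptors unregisters it. */
    return RegisterSnapshot(GetActiveSnapshot());
}
```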
-
Fujii Masao authored
Pavan Deolasee
-
Andrew Dunstan authored
-
- 29 Sep, 2013 2 commits
-
Andrew Dunstan authored
This should have been done when the json functionality was added to hstore in 9.3.0. To handle this correctly, the upgrade script therefore uses conditional logic, in a plpgsql DO statement, to add the two new functions and the new cast. If hstore_to_json_loose is detected as already present and dependent on the hstore extension, nothing is done. This requires that plpgsql be installed in the database. People who installed the earlier, spurious 1.1 version of hstore will need to run ALTER EXTENSION hstore UPDATE; to pick up the new functions properly.
-
Andrew Dunstan authored
Cédric Villemain
-
- 26 Sep, 2013 4 commits
-
Robert Haas authored
David Rowley, after a suggestion from Heikki Linnakangas. Reviewed by Albe Laurenz, and further edited by me.
-
Andrew Dunstan authored
The behaviour in json_populate_record() and json_populate_recordset() was changed during development, but the docs were not updated to match.
-
Heikki Linnakangas authored
There is a rare race condition, when a transaction that inserted a tuple aborts while vacuum is processing the page containing the inserted tuple. Vacuum prunes the page first, which normally removes any dead tuples, but if the inserting transaction aborts right after that, the loop after pruning will see a dead tuple and remove it instead. That's OK, but if the page is on a table with no indexes, and the page becomes completely empty after removing the dead tuple (or tuples) on it, it will be immediately marked as all-visible. That's OK, but the sanity check in vacuum would throw a warning because it thinks that the page contains dead tuples and was nevertheless marked as all-visible, even though it just vacuumed away the dead tuples and so it doesn't actually contain any. Spotted this while reading the code. It's difficult to hit the race condition otherwise, but can be done by putting a breakpoint after the heap_page_prune() call. Backpatch all the way to 8.4, where this code first appeared.
-
Noah Misch authored
Previous code gave a mean delay 0.44% below target. This change also has the effect of increasing the maximum possible delay. Fabien COELHO
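The bias comes from sampling the uniform variate on a discrete grid; a sketch of the idea, not the exact pgbench code:

```c
#include <math.h>

/* For a target mean delay D, an exponentially distributed wait is
 * D * -ln(u) with u uniform on (0,1].  If u is drawn as k/n for k in
 * 1..n, the mean is a right-endpoint Riemann approximation of a
 * decreasing function, so it comes out below D (about 0.44% below for
 * n = 1000).  Drawing u as (k - 0.5)/n centers each grid cell, fixing
 * the mean, and also raises the maximum possible delay, since the
 * smallest u becomes 0.5/n instead of 1/n. */
static long
throttle_wait_usec(double target_usec, long k, long n)
{
    double  u = ((double) k - 0.5) / (double) n;   /* k drawn from 1..n */

    return (long) (target_usec * -log(u));
}
```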
-
- 25 Sep, 2013 1 commit
-
Heikki Linnakangas authored
B-tree operators are not allowed to leak memory into the current memory context, but range_cmp leaked detoasted copies of its arguments. That caused a quick out-of-memory error when creating an index on a range column. Reported by Marian Krucina, bug #8468.
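A sketch of the standard no-leak idiom for a btree comparison function, presumably what the fix applies (bound comparison elided; not the committed hunk verbatim):

```c
Datum
range_cmp(PG_FUNCTION_ARGS)
{
    RangeType  *r1 = PG_GETARG_RANGE(0);   /* may be a detoasted copy */
    RangeType  *r2 = PG_GETARG_RANGE(1);
    int         cmp;

    cmp = 0;    /* ... compare the ranges' bounds here ... */

    /* pfree the inputs if (and only if) detoasting made fresh copies,
     * so nothing leaks into the caller's memory context. */
    PG_FREE_IF_COPY(r1, 0);
    PG_FREE_IF_COPY(r2, 1);

    PG_RETURN_INT32(cmp);
}
```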
-
- 24 Sep, 2013 1 commit
-
Alvaro Herrera authored
-
- 23 Sep, 2013 8 commits
-
Noah Misch authored
Though @libdir@ almost always matches @abs_builddir@ in this context, the test could only fail if they differed. Back-patch to 9.1, where the test was introduced. Hamid Quddus Akhtar
-
Noah Misch authored
Fabien COELHO
-
Robert Haas authored
Mike Blackwell and Robert Haas
-
Robert Haas authored
Previously, arbitrary system columns could be mentioned in table constraints, but they were not correctly checked at runtime, because the values weren't actually set correctly in the tuple. Since it seems easy enough to initialize the table OID properly, do that, and continue allowing that column, but disallow the rest unless and until someone figures out a way to make them work properly. No back-patch, because this doesn't seem important enough to take the risk of destabilizing the back branches. In fact, this will pose a dump-and-reload hazard for those upgrading from previous versions: constraints that were accepted before but were not correctly enforced will now either be enforced correctly or not accepted at all. Either could result in restore failures, but in practice I think very few users will notice the difference, since the use case is pretty marginal anyway and few users will be relying on features that have not historically worked. Amit Kapila, reviewed by Rushabh Lathia, with doc changes by me.
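A sketch of the tableoid initialization in the insert path (placement and surrounding code are illustrative, not the committed hunk):

```c
/* Set the tuple's table OID before evaluating constraints, so a CHECK
 * constraint mentioning tableoid sees the real value rather than an
 * uninitialized one. */
tuple->t_tableOid = RelationGetRelid(resultRelationDesc);

if (resultRelationDesc->rd_att->constr)
    ExecConstraints(resultRelInfo, slot, estate);
```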
-
Bruce Momjian authored
-
Bruce Momjian authored
-
Stephen Frost authored
In libpq, we set up callback routines and pass them to OpenSSL to handle locking. When we run out of SSL connections, we try to clean things up by de-registering the hooks. Unfortunately, we had a few calls into the OpenSSL library after these hooks were de-registered during SSL cleanup, which led to deadlocks. This moves the thread callback cleanup to be after all SSL-cleanup related OpenSSL library calls. I've been unable to reproduce the deadlock with this fix. In passing, also move the close_SSL call to be after unlocking our ssl_config mutex when in a failure state. While it looks pretty unlikely to be an issue, it could have resulted in deadlocks if we ended up in this code path due to something other than SSL_new failing. Thanks to Heikki for pointing this out. Back-patch to all supported versions; note that the close_SSL issue only goes back to 9.0, so that hunk isn't included in the 8.4 patch. Initially found and reported by Vesa-Matti J Kari; many thanks to both Heikki and Andres for their help running down the specific issue and reviewing the patch.
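The schematic shape of the fix; names and structure are illustrative, not libpq's exact internals. The invariant is that every call into OpenSSL happens while the locking callbacks are still registered:

```c
#include <openssl/ssl.h>

static void
cleanup_ssl_connection(SSL *ssl)
{
    SSL_shutdown(ssl);                     /* OpenSSL work first ...     */
    SSL_free(ssl);

    CRYPTO_set_locking_callback(NULL);     /* ... locking hooks torn down */
    CRYPTO_set_id_callback(NULL);          /* only once OpenSSL is done  */
}
```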
-
Heikki Linnakangas authored
When a timeline history file is fetched from the server, it is initially created with a temporary file name, and renamed into place. However, the temporary file name was constructed using an uninitialized buffer. Usually that meant that the file was created in the current directory instead of the target, which usually goes unnoticed, but if the target is on a different filesystem than the current dir, the rename() would fail. Fix that. The second issue is that pg_receivexlog would not take .partial files into account when scanning the target directory for existing WAL files to determine the starting point. If the timeline has switched in the server several times in the last WAL segment, and pg_receivexlog is restarted, it would choose a too-old starting point. That's not a problem as long as the old WAL segment exists in the server and can be streamed over, but will cause a failure if it's not. Backpatch to 9.3, where this timeline handling code was written. Analysed by Andrew Gierth, bug #8453, based on a bug report on IRC.
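A sketch of the first fix (names illustrative): build the temporary name explicitly in the target directory rather than reading from an uninitialized buffer, so the later rename() stays within one filesystem:

```c
#include <stdio.h>

static void
make_history_tmppath(char *tmppath, size_t len,
                     const char *basedir, const char *histfname)
{
    snprintf(tmppath, len, "%s/%s.tmp", basedir, histfname);
    /* The history file is then written and fsync'd at tmppath and
     * rename()d to its final name in the same directory, so the rename
     * never crosses a filesystem boundary. */
}
```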
-
- 19 Sep, 2013 1 commit
-
Robert Haas authored
Per complaint from Andrew Gierth.
-
- 18 Sep, 2013 3 commits
-
Fujii Masao authored
Ian Lawrence Barwick
-
Robert Haas authored
Etsuro Fujita
-
Robert Haas authored
Etsuro Fujita
-
- 17 Sep, 2013 1 commit
-
Alvaro Herrera authored
This has been unused since commit 8563ccae. Noted by Antonin Houska
-
- 16 Sep, 2013 2 commits
-
Alvaro Herrera authored
It seems to make more sense to use "cutoff multixact" terminology throughout the backend code; "freeze" is associated with replacing an Xid with FrozenTransactionId, which is not what we do for MultiXactIds. Andres Freund, with some adjustments by Álvaro Herrera
-
Heikki Linnakangas authored
Bernd Helmle
-
- 15 Sep, 2013 1 commit
-
Peter Eisentraut authored
-
- 12 Sep, 2013 1 commit
-
Noah Misch authored
Once the administrator has called for an immediate shutdown or a backend crash has triggered a reinitialization, no mere SIGINT or SIGTERM should change that course. Such derailment remains possible when the signal arrives before quickdie() blocks signals. That being a narrow race affecting most PostgreSQL signal handlers in some way, leave it for another patch. Back-patch this to all supported versions.
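A sketch of the idea, with shutdown states ordered by severity as in postmaster.c; the function here is illustrative, not the committed hunk:

```c
typedef enum
{
    NoShutdown,
    SmartShutdown,          /* SIGTERM */
    FastShutdown,           /* SIGINT  */
    ImmediateShutdown       /* SIGQUIT */
} ShutdownState;

static ShutdownState Shutdown = NoShutdown;

static void
request_shutdown(ShutdownState requested)
{
    /* A milder request must never derail a harder shutdown (or a
     * crash-induced reinitialization) already in progress. */
    if (requested <= Shutdown)
        return;

    Shutdown = requested;
    /* ... proceed with the escalated shutdown ... */
}
```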
-
- 11 Sep, 2013 2 commits
-
Kevin Grittner authored
Comments and the tests make clear that the intent is to test with and without an index, but there was no index.
-
Bruce Momjian authored
Albe Laurenz
-