- 12 Dec, 2013 5 commits
-
-
Simon Riggs authored
If autovacuum_work_mem is set, autovacuum workers now use this parameter in preference to maintenance_work_mem. Peter Geoghegan
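A minimal postgresql.conf sketch of how the two settings interact (values illustrative):
    # postgresql.conf
    maintenance_work_mem = 1GB    # still used by manual VACUUM, CREATE INDEX, etc.
    autovacuum_work_mem = 256MB   # autovacuum workers prefer this; -1 (the default) falls back to maintenance_work_mem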
-
Simon Riggs authored
Set min_recovery_apply_delay to force a delay in recovery apply for commit and restore point WAL records. Other records are replayed immediately. Delay is measured between WAL record time and local standby time. Robert Haas, Fabrízio de Royes Mello and Simon Riggs. Detailed review by Mitsumasa Kondo.
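A sketch of the standby-side setup, assuming the recovery.conf placement used for standby settings at the time (values illustrative):
    # recovery.conf on the standby
    standby_mode = 'on'
    primary_conninfo = 'host=primary ...'
    min_recovery_apply_delay = '5min'   # commit and restore point records held back five minutes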
-
Tatsuo Ishii authored
Integer overflow produced a negative percentage and a negative remaining time, like this: 239300000 of 3800000000 tuples (-48%) done (elapsed 226.86 s, remaining -696.10 s).
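Redoing the arithmetic in 64 bits shows what the output should have been (the 32-bit computation wraps around instead):
    -- 239300000 * 100 overflows a 32-bit integer; widening first gives the right answer
    SELECT 239300000::bigint * 100 / 3800000000 AS percent_done;   -- 6, not -48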
-
Heikki Linnakangas authored
For extra clarity. Rajeev Rastogi, reviewed by Amit Kapila
-
- 11 Dec, 2013 11 commits
-
-
Tom Lane authored
We had coverage for functions returning setof a named composite type, but not for anonymous records, which is a somewhat different code path. In view of a recent crash report from Sergey Konoplev, this seems worth testing, though I doubt there's any deterministic bug here today.
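The code path in question is reached by a hypothetical function like this, where the caller must supply a column definition list:
    CREATE FUNCTION anon_rec() RETURNS SETOF record
        LANGUAGE sql AS $$ SELECT i, i::text FROM generate_series(1, 3) AS s(i) $$;
    SELECT * FROM anon_rec() AS t(n int, label text);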
-
Simon Riggs authored
Hari Babu Kommi, reviewed by David Rowley
-
Simon Riggs authored
Hari Babu Kommi, reviewed by David Rowley
-
Tom Lane authored
Make the COPY test, which loads most of the large static tables used in the tests, also explicitly ANALYZE those tables. This allows us to get rid of various ad-hoc, and rather redundant, ANALYZE commands that had gotten stuck into various test scripts over time to ensure we got consistent plan choices. (We could have done a database-wide ANALYZE, but that would cause stats to get attached to the small static tables too, which results in plan changes compared to the historical behavior. I'm not sure that's a good idea, so not going that far for now.) Back-patch to 9.0, since 9.0 and 9.1 are currently sometimes failing regression tests for lack of an "ANALYZE tenk1" in the subselect test. There's no need for this in 8.4 since we didn't print any plans back then.
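In effect, the test now does the equivalent of the following after each bulk load (data path illustrative):
    COPY tenk1 FROM '/path/to/src/test/regress/data/tenk.data';
    ANALYZE tenk1;   -- stable statistics, hence stable plan choices in later tests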
-
Robert Haas authored
There's no real point in not doing this. It doesn't cost anything in performance or space. So let's go wild. Andres Freund, with substantial editing as to style by me.
-
Kevin Grittner authored
The test only needs the one table to be vacuumed. Vacuuming the database may affect other tests. Per gripe from Tom Lane. Back-patch to 9.3, where the test was added.
-
Peter Eisentraut authored
From: Dimitri Fontaine <dimitri@2ndQuadrant.fr>
-
Robert Haas authored
Christopher Browne, reviewed by Sameer Thakur, Amit Kapila, and Peter Eisentraut.
-
Peter Eisentraut authored
This was broken in d6464fdc.
-
Robert Haas authored
When this reloption is set and wal_level=logical is configured, we'll record the CIDs stamped by inserts, updates, and deletes to the table just as we would for an actual catalog table. This will allow logical decoding to use historical MVCC snapshots to access such tables just as they access ordinary catalog tables. Replication solutions built around the logical decoding machinery will likely need to set this option for their configuration tables; it might also be needed by extensions which perform table access in their output functions. Andres Freund, reviewed by myself and others.
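As a sketch, a replication solution could opt its configuration table in like this (the reloption name is user_catalog_table in the released 9.4; the table is hypothetical):
    CREATE TABLE replication_metadata (
        node    text PRIMARY KEY,
        setting text
    ) WITH (user_catalog_table = true);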
-
Robert Haas authored
When wal_level=logical, we'll log columns from the old tuple as configured by the REPLICA IDENTITY facility added in commit 07cacba9. This makes it possible for a properly-configured logical replication solution to correctly follow table updates even if they change the chosen key columns, or, with REPLICA IDENTITY FULL, even if the table has no key at all. Note that updates which do not modify the replica identity column won't log anything extra, making the choice of a good key (i.e. one that will rarely be changed) important to performance when wal_level=logical is configured. Each insert, update, or delete to a catalog table will also log the CMIN and/or CMAX values stamped by the current transaction. This is necessary because logical decoding will require access to historical snapshots of the catalog in order to decode some data types, and the CMIN/CMAX values that we may need in order to judge row visibility may have been overwritten by the time we need them. Andres Freund, reviewed in various versions by myself, Heikki Linnakangas, KONDO Mitsumasa, and many others.
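For reference, the REPLICA IDENTITY facility from commit 07cacba9 is set per table; a hypothetical example:
    -- log the columns of a chosen unique index with each old tuple
    ALTER TABLE accounts REPLICA IDENTITY USING INDEX accounts_pkey;
    -- or log the entire old row, for tables with no suitable key
    ALTER TABLE accounts REPLICA IDENTITY FULL;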
-
- 10 Dec, 2013 2 commits
-
-
Tom Lane authored
An expression such as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) could produce an invalid plan that results in a crash at execution time, if the planner attempts to flatten the outer IN into a semi-join. This happens because convert_testexpr() was not expecting any nested SubLinks and would wrongly replace any PARAM_SUBLINK Params belonging to the inner SubLink. (I think the comment denying that this case could happen was wrong when written; it's certainly been wrong for quite a long time, since very early versions of the semijoin flattening logic.) Per report from Teodor Sigaev. Back-patch to all supported branches.
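A hypothetical query of the shape described, where the boolean result of the inner IN feeds the outer IN:
    SELECT * FROM t1
    WHERE (t1.a IN (SELECT t2.a FROM t2)) IN (SELECT t3.flag FROM t3);
    -- t3.flag is boolean; flattening the outer IN must not touch the
    -- PARAM_SUBLINK Params that belong to the inner SubLink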
-
Noah Misch authored
SQL-standard TABLE() is a subset of UNNEST(); they deal with arrays and other collection types. This feature, however, deals with set-returning functions. Use a different syntax for this feature to keep open the possibility of implementing the standard TABLE().
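The syntax this feature settled on is ROWS FROM() (as released in 9.4), which zips several set-returning functions into one FROM item:
    SELECT * FROM ROWS FROM (generate_series(1, 3),
                             unnest(ARRAY['a', 'b', 'c'])) AS t(n, letter);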
-
- 09 Dec, 2013 3 commits
-
-
Bruce Momjian authored
-
Bruce Momjian authored
Miles Elam
-
Robert Haas authored
Per complaint from Tom Lane.
-
- 08 Dec, 2013 2 commits
-
-
Magnus Hagander authored
Peter Geoghegan
-
Joe Conway authored
Previous commit e5de6012 modified dblink to ensure client encoding matched the server. However, the added PQsetClientEncoding() call introduced significant overhead. Restore original performance in the common case where client encoding already matches server encoding by doing nothing in that case. Applies to all active branches. Issue reported and work sponsored by Zonar Systems.
-
- 07 Dec, 2013 4 commits
-
-
Magnus Hagander authored
Noted by Peter Geoghegan
-
Peter Eisentraut authored
This sets up ECDH key exchange, when compiling against OpenSSL that supports EC. Then the ECDHE-RSA and ECDHE-ECDSA cipher suites can be used for SSL connections. The latter one means that EC keys are now usable. The reason for EC key exchange is that it's faster than DHE and it allows reaching higher security levels where RSA would be horribly slow. There is also a new GUC option, ssl_ecdh_curve, that specifies the curve name used for ECDH. It defaults to "prime256v1", which is the most common curve in use in HTTPS. From: Marko Kreen <markokr@gmail.com> Reviewed-by: Adrian Klaver <adrian.klaver@gmail.com>
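Sketch of the server-side configuration, using the default curve named in the commit:
    # postgresql.conf
    ssl = on
    ssl_ecdh_curve = 'prime256v1'   # any curve name known to OpenSSL may be used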
-
Fujii Masao authored
The query ID is the internal hash identifier of the statement, and was not available in pg_stat_statements view so far. Daniel Farina, Sameer Thakur and Peter Geoghegan, reviewed by me.
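With the column exposed (named queryid in the released version), statements can be ranked by their hash, e.g.:
    SELECT queryid, calls, total_time, query
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 5;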
-
Peter Eisentraut authored
By default, OpenSSL (and SSL/TLS in general) lets the client cipher order take priority. This is OK for browsers where the ciphers were tuned, but few PostgreSQL client libraries make the cipher order configurable. So it makes sense to have the cipher order in postgresql.conf take priority over client defaults. This patch adds the setting "ssl_prefer_server_ciphers" that can be turned on so that server cipher order is preferred. Per discussion, this now defaults to on. From: Marko Kreen <markokr@gmail.com> Reviewed-by: Adrian Klaver <adrian.klaver@gmail.com>
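An illustrative snippet; combined with ssl_ciphers, the server now controls cipher choice:
    # postgresql.conf
    ssl_prefer_server_ciphers = on              # default per this commit
    ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'    # the server-side list now wins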
-
- 06 Dec, 2013 2 commits
-
-
Bruce Momjian authored
Text from Adam Vande More
-
Bruce Momjian authored
Previously, the requirements for how intermediate certificates are handled, and how they must chain to root certificates, were unclear.
-
- 05 Dec, 2013 3 commits
-
-
Alvaro Herrera authored
In 247c76a9, I added some code to do fine-grained checking of MultiXact status of locking/updating transactions when traversing an update chain. There was a thinko in that patch which would have the traversal abort, that is, return HeapTupleUpdated, when the other transaction is a committed lock-only. In this case we should ignore it and return success instead. Of course, in the case where there is a committed update, HeapTupleUpdated is the correct return value. A user-visible symptom of this bug is that in REPEATABLE READ and SERIALIZABLE transaction isolation modes spurious serializability errors can occur: ERROR: could not serialize access due to concurrent update. In order for this to happen, there needs to be a tuple that's key-share-locked and also updated, and the update must abort; a subsequent transaction trying to acquire a new lock on that tuple would abort with the above error. The reason is that the initial FOR KEY SHARE is seen as committed by the new locking transaction, which triggers this bug. (If the UPDATE commits, then the serialization error is correctly reported.) When running a query in READ COMMITTED mode, what happens is that the locking is aborted by the HeapTupleUpdated return value, then EvalPlanQual fetches the newest version of the tuple, which is then the only version that gets locked. (The second time the tuple is checked there is no misbehavior on the committed lock-only, because it's not checked by the code that traverses update chains; so no bug.) Only the newest version of the tuple is locked, not older ones, but this is harmless. The isolation test added by this commit illustrates the desired behavior, including the proper serialization errors that get thrown. Backpatch to 9.3.
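A rough three-session sketch of the failure mode described above (hypothetical table t; before this fix, session 3 failed):
    -- session 1: key-share lock the tuple, keep the transaction open
    BEGIN; SELECT * FROM t WHERE id = 1 FOR KEY SHARE;
    -- session 2: a non-key update joins the multixact, then aborts
    BEGIN; UPDATE t SET val = 'x' WHERE id = 1; ROLLBACK;
    -- session 1: commit, leaving a committed lock-only member
    COMMIT;
    -- session 3: try to take a new lock at higher isolation
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT * FROM t WHERE id = 1 FOR UPDATE;
    -- before the fix: ERROR: could not serialize access due to concurrent update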
-
Tom Lane authored
Current OpenSSL code includes a BIO_clear_retry_flags() step in the sock_write() function. Either we failed to copy the code correctly, or they added this since we copied it. In any case, lack of the clear step appears to be the cause of the server lockup after connection loss reported in bug #8647 from Valentine Gogichashvili. Assume that this is correct coding for all OpenSSL versions, and hence back-patch to all supported branches. Diagnosis and patch by Alexander Kukushkin.
-
Alvaro Herrera authored
HeapTupleSatisfiesUpdate can very easily "forget" tuple locks while checking the contents of a multixact and finding it contains an aborted update, by setting the HEAP_XMAX_INVALID bit. This would lead to concurrent transactions not noticing any previous locks held by transactions that might still be running, and thus being able to acquire subsequent locks they wouldn't normally be able to acquire. This bug was introduced in commit 1ce150b7; backpatch this fix to 9.3, like that commit. This change reverts the change to the delete-abort-savept isolation test in 1ce150b7, because that behavior change was caused by this bug. Noticed by Andres Freund while investigating a different issue reported by Noah Misch.
-
- 04 Dec, 2013 3 commits
-
-
Bruce Momjian authored
Christoph Berg
-
Bruce Momjian authored
Report from Erik Rijkers
-
Peter Eisentraut authored
-
- 03 Dec, 2013 5 commits
-
-
Heikki Linnakangas authored
This is the same trick we use when taking a full page image of a buffer passed to XLogInsert.
-
Heikki Linnakangas authored
Insertion to a non-leaf GIN page didn't make a full-page image of the page, which is wrong. The code used to do it correctly, but was changed (commit 853d1c31) because the redo routine didn't track incomplete splits correctly when the page was restored from a full-page image. Of course, that was not the right way to fix it; the redo routine should've been fixed instead. The redo routine was surreptitiously fixed in 2010 (commit 4016bdef), so all we need to do now is revert the code that creates the record to its original form. This doesn't change the format of the WAL record. Backpatch to all supported versions.
-
Bruce Momjian authored
-
Bruce Momjian authored
This function no longer takes an argument.
-
Bruce Momjian authored
Previously, missing or invalid service files made PQconndefaults() return NULL. Also fix pg_upgrade to report "out of memory" for a null return from PQconndefaults(). Patch by Steve Singer, rewritten by me.
-