- 27 Nov, 2013 13 commits
-
-
Tom Lane authored
When acquiring a lock in fast-path mode, we must reset the locallock object's lock and proclock fields to NULL. They are not necessarily that way to start with, because the locallock could be left over from a failed lock acquisition attempt earlier in the transaction. Failure to do this led to all sorts of interesting misbehaviors when LockRelease tried to clean up no-longer-related lock and proclock objects in shared memory. Per report from Dan Wood.

In passing, modify LockRelease to elog, not just Assert, if it doesn't find lock and proclock objects for a formerly fast-path lock, matching the code in FastPathGetRelationLockEntry and LockRefindAndRelease. This isn't a bug, but it will help in diagnosing any future bugs in this area.

Also, modify FastPathTransferRelationLocks and FastPathGetRelationLockEntry to break out of their loops over the fastpath array once they've found the sole matching entry. This was inconsistently done in some search loops and not others. Improve assorted related comments, too.

Back-patch to 9.2 where the fast-path mechanism was introduced.
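As an illustration of the reset described above - a self-contained sketch using stand-in types, not the actual lock.c code:

    #include <stddef.h>

    /* Stand-in for the per-backend lock table entry; the real LOCALLOCK
     * also carries lock tags, owner bookkeeping, and fast-path state. */
    typedef struct LocalLockStub
    {
        void   *lock;        /* pointer into shared memory (LOCK) */
        void   *proclock;    /* pointer into shared memory (PROCLOCK) */
    } LocalLockStub;

    static void
    acquire_fastpath(LocalLockStub *locallock)
    {
        /*
         * The entry may be left over from a failed acquisition earlier in
         * the transaction, so clear the stale shared-memory pointers before
         * recording the lock in the backend's fast-path slots.
         */
        locallock->lock = NULL;
        locallock->proclock = NULL;

        /* ... fast-path bookkeeping would go here ... */
    }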
-
Kevin Grittner authored
Per report from AK
-
Tom Lane authored
Correct an obsolete statement that no backend touches another backend's PROCLOCK lists. This was probably wrong even when written (the deadlock checker looks at everybody's lists), and it's certainly quite wrong now that fast-path locking can require creation of lock and proclock objects on behalf of another backend. Also improve some statements in the hot standby explanation, and do one or two other trivial bits of wordsmithing/reformatting.
-
Heikki Linnakangas authored
Replace it with an approach similar to what GiST uses: when a page is split, the left sibling is marked with a flag indicating that the parent hasn't been updated yet. When the parent is updated, the flag is cleared. If an insertion steps on a page with the flag set, it finishes the split before proceeding with the insertion.

The post-recovery cleanup mechanism was never totally reliable, as insertion into the parent could fail, e.g. because of running out of memory or disk space, leaving the tree in an inconsistent state.

This also divides the responsibility for WAL-logging more clearly between the generic ginbtree.c code and the parts specific to entry and posting trees. There is now a common WAL record format for insertions and deletions, written by ginbtree.c, followed by a tree-specific payload, which is returned by the placetopage and split callbacks.
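A rough, self-contained sketch of the flag-based approach (stand-in types and names, not the actual ginbtree.c code):

    #include <stdbool.h>

    /* Stand-in page representation; the real flag lives in the GIN page
     * opaque data. */
    typedef struct PageStub
    {
        bool    incomplete_split;   /* split done, parent downlink missing */
    } PageStub;

    static void
    finish_split(PageStub *page)
    {
        /* ... insert the missing downlink into the parent here ... */
        page->incomplete_split = false;
    }

    static void
    insert_on_page(PageStub *page)
    {
        /*
         * If a previous split on this page never got its downlink into the
         * parent, finish that work first, then do our own insertion.
         */
        if (page->incomplete_split)
            finish_split(page);

        /* ... place the new tuple on the page ... */
    }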
-
Heikki Linnakangas authored
Separate the insertion payload from the more static portions of GinBtree. GinBtree now only contains information related to searching the tree, and the information about what to insert is passed separately. Add the root block number to GinBtree, instead of passing it around to all the functions as an argument. Split off ginFinishSplit() from ginInsertValue(). ginFinishSplit is responsible for finding the parent and inserting the downlink into it.
-
Heikki Linnakangas authored
I neglected this in the previous commit that updated the plpython2 output, which I forgot to "git add" earlier. As pointed out by Rodolfo Campero and Marko Kreen.
-
Heikki Linnakangas authored
Vacuum recognizes that it can update relfrozenxid by checking whether it has processed all pages of a relation. Unfortunately it performed that check after truncating the dead pages at the end of the relation, and used the new number of pages to decide whether all pages had been scanned. If the new number of pages happened to be smaller than or equal to the number of pages scanned, it incorrectly decided that all pages were scanned.

This can lead to relfrozenxid being updated even though some pages that still contain old XIDs were skipped. That can lead to data loss due to xid wraparounds, with some rows suddenly missing. This likely has escaped notice so far because it takes a large number (~2^31) of xids being used to see the effect, while a full-table vacuum before that would fix the issue.

The incorrect logic was introduced by commit b4b6923e. Backpatch this fix down to 8.4, like that commit.

Andres Freund, with some modifications by me.
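A minimal sketch of the ordering rule, with hypothetical names (not the real vacuumlazy.c code): the decision must be based on the page count captured before any truncation.

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Hypothetical helper: may relfrozenxid be advanced?  The first argument
     * must be the relation size noted before empty tail pages are truncated
     * away, not the post-truncation size.
     */
    static bool
    can_advance_relfrozenxid(uint32_t pages_before_truncation,
                             uint32_t pages_scanned)
    {
        return pages_scanned >= pages_before_truncation;
    }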
-
Michael Meskes authored
The latest fixes removed a limitation that was still in the docs, so Zoltan updated the docs, too.
-
Michael Meskes authored
Patch by Böszörményi Zoltán <zb@cybertec.at>
-
Fujii Masao authored
Haribabu Kommi, slightly modified by me.
-
Fujii Masao authored
Backpatch to 9.1. Josh Kupershmidt
-
Peter Eisentraut authored
Reviewed-by: Ali Dar <ali.munir.dar@gmail.com> Reviewed-by: Amit Khandekar <amit.khandekar@enterprisedb.com> Reviewed-by: Rodolfo Campero <rodolfo.campero@anachronics.com>
-
Peter Eisentraut authored
That way, the HTML file name of the index will be the same as currently for the DSSSL build.
-
- 26 Nov, 2013 9 commits
-
-
Heikki Linnakangas authored
-
Michael Meskes authored
Patch by Boszormenyi Zoltan <zb@cybertec.at>
-
Michael Meskes authored
Patch by Boszormenyi Zoltan <zb@cybertec.at>
-
Michael Meskes authored
Patch by Boszormenyi Zoltan <zb@cybertec.at>
-
Michael Meskes authored
Valgrind runs. Patch by Boszormenyi Zoltan <zb@cybertec.at>
-
Michael Meskes authored
variables is varchar. This fixes this test case:

    int main(void)
    {
        exec sql begin declare section;
        varchar a[50], b[50];
        exec sql end declare section;
        return 0;
    }

Since varchars are internally turned into custom structs and the type name is emitted for these variable declarations, the preprocessed code previously had:

    struct varchar_1 { ... } a , struct varchar_2 { ... } b ;

The comma in the generated C file was a syntax error. There are no regression test changes since it's not exercised. Patch by Boszormenyi Zoltan <zb@cybertec.at>
-
Heikki Linnakangas authored
Domains over arrays are now converted to and from Python lists when passed as arguments or return values, just like regular arrays. This has some potential to break applications that rely on the old behavior of passing them as strings, but in practice there probably aren't many such applications out there. Rodolfo Campero
-
Jeff Davis authored
The omission was apparently an oversight in the original patch.
-
Bruce Momjian authored
Change SET LOCAL/CONSTRAINTS/TRANSACTION behavior outside of a transaction block from error (post-9.3) to warning. (Was nothing in <= 9.3.) Also change ABORT outside of a transaction block from notice to warning.
-
- 25 Nov, 2013 2 commits
-
-
Michael Meskes authored
ECPG is not supposed to accept nested comments in C code and write them to the output. Such comments are only allowed in the SQL parts and must not be written into the C file. The differing handling of the two kinds of comments is also documented now.
-
Michael Meskes authored
The last fix used the wrong non-terminal to define valid types.
-
- 24 Nov, 2013 7 commits
-
-
Jeff Davis authored
Previously, messages were emitted at the LOG level every time a backend preloaded a library. That was acceptable (though unnecessary) for shared_preload_libraries; but it was excessive for local_preload_libraries and session_preload_libraries. Reduce to DEBUG1. Also, there was logic in the EXEC_BACKEND case to avoid repeated messages for shared_preload_libraries by demoting them to DEBUG2. DEBUG1 seems more appropriate there, as well, so eliminate that special case. Peter Geoghegan.
-
Tom Lane authored
These functions must be careful that they return the intended value of errno to their callers. There were several scenarios where this might not happen:

1. The recent SSL renegotiation patch added a hunk of code that would execute after setting errno. In the first place, it's doubtful that we should consider renegotiation to be successfully completed after a failure, and in the second, there's no real guarantee that the called OpenSSL routines wouldn't clobber errno. Fix by not executing that hunk except during success exit.

2. errno was left in an unknown state in case of an unrecognized return code from SSL_get_error(). While this is a "can't happen" case, it seems like a good idea to be sure we know what would happen, so reset errno to ECONNRESET in such cases. (The corresponding code in libpq's fe-secure.c already did this.)

3. There was an (undocumented) assumption that client_read_ended() wouldn't change errno. While true in the current state of the code, this seems less than future-proof. Add explicit saving/restoring of errno to make sure that changes in the called functions won't break things.

I see no need to back-patch, since #1 is new code and the other two issues are mostly hypothetical. Per discussion with Amit Kapila.
-
Michael Meskes authored
When parsing C variable definitions ecpg should allow sizeof() operators as array dimensions.
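For example, a declare section along these lines (a hypothetical snippet in the style of the ecpg regression tests) is now accepted by the preprocessor:

    int
    main(void)
    {
        exec sql begin declare section;
        char buf[sizeof(int)];      /* sizeof() used as an array dimension */
        exec sql end declare section;

        (void) buf;
        return 0;
    }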
-
Michael Meskes authored
The SQL standard asks for nested comments to be allowed, while C does not allow them. Therefore the two kinds of comments, while mostly similar, have to be parsed separately.
-
Tom Lane authored
This function formerly crashed if called as a statement-level trigger, or if a column-name argument wasn't given. In passing, add the trigger name to all error messages from the function. (None of them are expected cases, so this shouldn't pose any compatibility risk.) Marc Cousin, reviewed by Sawada Masahiko
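For context, a generic sketch of the sort of defensive checks a row-level trigger function can make (illustrative code only, not the actual contrib function):

    #include "postgres.h"
    #include "fmgr.h"
    #include "commands/trigger.h"

    /* Illustrative skeleton: refuse to run unless fired as a row-level
     * trigger with the expected single column-name argument. */
    static void
    check_trigger_call(PG_FUNCTION_ARGS)
    {
        TriggerData *trigdata = (TriggerData *) fcinfo->context;

        if (!CALLED_AS_TRIGGER(fcinfo))
            elog(ERROR, "not called as a trigger");

        if (!TRIGGER_FIRED_FOR_ROW(trigdata->tg_event))
            ereport(ERROR,
                    (errmsg("trigger \"%s\" must be fired FOR EACH ROW",
                            trigdata->tg_trigger->tgname)));

        if (trigdata->tg_trigger->tgnargs != 1)
            ereport(ERROR,
                    (errmsg("trigger \"%s\" expects one column-name argument",
                            trigdata->tg_trigger->tgname)));
    }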
-
Peter Eisentraut authored
From: Dimitri Fontaine <dimitri@2ndQuadrant.fr>
-
Tom Lane authored
The previous coding labeled expressions such as pg_index.indkey[1:3] as being of int2vector type, which is not right because the subscript bounds of such a result don't, in general, satisfy the restrictions of int2vector. To fix, implicitly promote the result of slicing int2vector to int2[], or oidvector to oid[]. This is similar to what we've done with domains over arrays, which is a good analogy because these types are very much like restricted domains of the corresponding regular-array types.

A side-effect is that we now also forbid array-element updates on such columns; e.g., while "update pg_index set indkey[4] = 42" would have worked before if you were superuser (and corrupted your catalogs irretrievably, no doubt), it's now disallowed. This seems like a good thing since, again, some choices of subscripting would've led to results not satisfying the restrictions of int2vector. The case of an array-slice update was rejected before, though with a different error message than you get now. We could make these cases work in future if we added a cast from int2[] to int2vector (with a cast function checking the subscript restrictions), but it seems unlikely that there's any value in that.

Per report from Ronan Dunklau. Back-patch to all supported branches because of the crash risks involved.
-
- 23 Nov, 2013 3 commits
-
-
Tom Lane authored
If logging is enabled, either ereport() or fprintf() might stomp on errno internally, causing this function to return the wrong result. That might only end in a misleading error report, but in any code that's examining errno to decide what to do next, the consequences could be far graver. This has been broken since the very first version of this file in 2006 ... it's a bit astonishing that we didn't identify this long ago. Reported by Amit Kapila, though this isn't his proposed fix.
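A generic sketch of the errno-preservation pattern behind the fix (standalone example, not the PostgreSQL source):

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Capture errno before any logging call, since fprintf() or other
     * logging machinery may overwrite it, and restore it so the caller
     * still sees the real failure code. */
    static ssize_t
    write_with_log(int fd, const void *buf, size_t len)
    {
        ssize_t rc = write(fd, buf, len);

        if (rc < 0)
        {
            int save_errno = errno;     /* logging below may clobber errno */

            fprintf(stderr, "write to fd %d failed\n", fd);
            errno = save_errno;         /* hand the original error back */
        }
        return rc;
    }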
-
Peter Eisentraut authored
Two call sites were apparently thinking that the last argument of SPI_execute_plan() is the number of query parameters, but it is actually the row limit. Change the calls to 0, since we don't care about the limit there. The previous code didn't break anything, but it was still wrong.
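For reference, a hedged sketch of a corrected call (illustrative helper, not the code touched by this commit); the last argument of SPI_execute_plan() is a fetch limit, with 0 meaning no limit:

    #include "postgres.h"
    #include "executor/spi.h"

    /* Illustrative helper: execute a prepared SPI plan with no row limit.
     * The parameter values travel in the values/nulls arrays; the final
     * argument is a row-count limit, not the number of parameters. */
    static void
    run_saved_plan(SPIPlanPtr plan, Datum *values, const char *nulls)
    {
        int rc = SPI_execute_plan(plan, values, nulls,
                                  false,    /* read_only */
                                  0);       /* tcount: 0 = no row limit */

        if (rc < 0)
            elog(ERROR, "SPI_execute_plan failed: error code %d", rc);
    }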
-
Peter Eisentraut authored
A pointer to a C string was treated as a pointer to a "name" datum and passed to SPI_execute_plan(). This pointer would then end up being passed through datumCopy(), which would try to copy the entire 64 bytes of name data, thus running past the end of the C string. Fix by converting the string to a proper name structure. Found by LLVM AddressSanitizer.
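A sketch of the safe conversion along the lines described above (hypothetical helper, not the actual patched code):

    #include "postgres.h"
    #include "utils/builtins.h"      /* namestrcpy() */

    /* Illustrative helper: build a proper NameData - a fixed-size,
     * zero-padded buffer of NAMEDATALEN bytes - instead of handing a
     * bare C string to code that expects a "name" Datum. */
    static Datum
    cstring_to_name_datum(const char *str)
    {
        Name    name = (Name) palloc0(NAMEDATALEN);

        namestrcpy(name, str);        /* copies, truncating/padding safely */
        return NameGetDatum(name);    /* now safe to pass through datumCopy() */
    }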
-
- 22 Nov, 2013 6 commits
-
-
Tom Lane authored
pullup_replace_vars()'s decisions about whether a pulled-up replacement expression needs to be wrapped in a PlaceHolderVar depend on the assumption that what looks like a Var behaves like a Var. However, if the Var is a join alias reference, later flattening of join aliases might replace the Var with something that's not a Var at all, and should have been wrapped.

To fix, do a forcible pass of flatten_join_alias_vars() on the subquery targetlist before we start to copy items out of it. We'll re-run that processing on the pulled-up expressions later, but that's harmless.

Per report from Ken Tanzer; the added regression test case is based on his example. This bug has been there since the PlaceHolderVar mechanism was invented, but has escaped detection because the circumstances that trigger it are fairly narrow. You need a flattenable query underneath an outer join, which contains another flattenable query inside a join of its own, with a dangerous expression (a constant or something else non-strict) in that one's targetlist.

Having seen this, I'm wondering if it wouldn't be prudent to do all alias-variable flattening earlier, perhaps even in the rewriter. But that would probably not be a back-patchable change.
-
Tom Lane authored
The command we're telling people to type needs to include double-quoting around the unfortunately-chosen extension name. Twiddle the textual quoting so that it looks somewhat sane. Per gripe from roadrunner6.
-
Heikki Linnakangas authored
These bugs can cause data loss on standbys started with hot_standby=on, at the moment they start to accept read-only queries, by marking committed transactions as uncommitted. The likelihood of such corruptions is small unless the primary has a high transaction rate.

5a031a55 fixed bugs in HS's startup logic by maintaining less state until at least STANDBY_SNAPSHOT_PENDING state was reached, missing the fact that both clog and subtrans are written to before that. This only failed to fail in common cases because the usage of ExtendCLOG in procarray.c was superfluous, since clog extensions are actually WAL logged.

In f44eedc3f0f347a856eea8590730769125964597 I then tried to fix the missing extensions of pg_subtrans due to the former commit's changes - which are not WAL logged - by performing the extensions when switching to a state > STANDBY_INITIALIZED and not performing xid assignments before that - again missing the fact that ExtendCLOG is unnecessary - but screwed up twice: once because latestObservedXid wasn't updated anymore in that state due to the earlier commit, and once by having an off-by-one error in the loop performing extensions.

This means that whenever a CLOG_XACTS_PER_PAGE (32768 with default settings) boundary was crossed between the start of the checkpoint recovery started from and the first xl_running_xact record, old transactions' commit bits in pg_clog could be overwritten if they started and committed in that window.

Fix this mess by not performing ExtendCLOG() in HS at all anymore, since it's unneeded and evidently dangerous, and by performing subtrans extensions even before reaching STANDBY_SNAPSHOT_PENDING.

Analysis and patch by Andres Freund. Reported by Christophe Pettus. Backpatch down to 9.0, like the previous commit that caused this.
-
Heikki Linnakangas authored
RecoveryIsInProgress() can be called very frequently. During normal operation, it just checks a backend-local variable and returns quickly, but during hot standby, it checks a spinlock-protected shared variable. Those spinlock acquisitions can become a point of contention on a busy hot standby system. Replace the spinlock acquisition with a memory barrier. Per discussion with Andres Freund, Ants Aasma and Merlin Moncure.
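A rough sketch of the pattern (simplified, with stand-in names rather than the actual xlog.c code), assuming PostgreSQL's pg_memory_barrier() from storage/barrier.h:

    #include "postgres.h"
    #include "storage/barrier.h"

    /* Stand-in for a flag that really lives in shared memory and is
     * cleared exactly once by the startup process when recovery ends. */
    static volatile bool shared_recovery_in_progress = true;

    static bool local_recovery_in_progress = true;

    static bool
    recovery_in_progress(void)
    {
        /* Once we've seen recovery end, it can never restart. */
        if (!local_recovery_in_progress)
            return false;

        /* Unlocked read instead of a spinlock acquisition. */
        local_recovery_in_progress = shared_recovery_in_progress;

        /*
         * If we just observed the end of recovery, the barrier keeps later
         * reads of other shared state from being reordered before this one
         * on machines with weak memory ordering.
         */
        if (!local_recovery_in_progress)
            pg_memory_barrier();

        return local_recovery_in_progress;
    }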
-
Peter Eisentraut authored
The previous change added a new scan-build warning about need_password being assigned but not read.
-
Tom Lane authored
This patch adds the ability to write TABLE( function1(), function2(), ... ) as a single FROM-clause entry. The result is the concatenation of the first row from each function, followed by the second row from each function, etc; with NULLs inserted if any function produces fewer rows than others. This is believed to be a much more useful behavior than what Postgres currently does with multiple SRFs in a SELECT list.

This syntax also provides a reasonable way to combine use of column definition lists with WITH ORDINALITY: put the column definition list inside TABLE(), where it's clear that it doesn't control the ordinality column as well.

Also implement SQL-compliant multiple-argument UNNEST(), by turning UNNEST(a,b,c) into TABLE(unnest(a), unnest(b), unnest(c)).

The SQL standard specifies TABLE() with only a single function, not multiple functions, and it seems to require an implicit UNNEST() which is not what this patch does. There may be something wrong with that reading of the spec, though, because if it's right then the spec's TABLE() is just a pointless alternative spelling of UNNEST(). After further review of that, we might choose to adopt a different syntax for what this patch does, but in any case this functionality seems clearly worthwhile.

Andrew Gierth, reviewed by Zoltán Böszörményi and Heikki Linnakangas, and significantly revised by me
-