- 13 Nov, 2014 9 commits
-
-
Tom Lane authored
Fix breakage induced by commits d8d3d2a4 and 463f2625: pg_dumpall has crashed when attempting to dump from pre-8.1 servers since then, due to faulty construction of the query used for dumping roles from older servers. The query was erroneous as of the earlier commit, but it wasn't exposed unless you tried to use --binary-upgrade, which you presumably wouldn't with a pre-8.1 server. However commit 463f2625 made it fail always. In HEAD, also fix additional breakage induced in the same query by commit 491c029d, which evidently wasn't tested against pre-8.1 servers either. The bug is only latent in 9.1 because 463f2625 hadn't landed yet, but it seems best to back-patch all branches containing the faulty query. Gilles Darold
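For illustration, a sketch of pg_dumpall's version-gated query construction; the helper name and column lists here are simplified stand-ins, not the committed code. The point is that pre-8.1 servers lack pg_authid, so the query text must differ by server version:

    /* Hypothetical sketch: build the role-dumping query per server version */
    static void
    build_role_query(PQExpBuffer buf, int server_version)
    {
        if (server_version >= 80100)
            appendPQExpBuffer(buf,
                              "SELECT rolname, rolsuper, rolcreaterole "
                              "FROM pg_authid ORDER BY 1");
        else
            appendPQExpBuffer(buf,
                              "SELECT usename AS rolname, usesuper AS rolsuper "
                              "FROM pg_shadow ORDER BY 1");
    }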
-
Andres Freund authored
There are basically three situations in which logical decoding needs to perform cache invalidation: during/after replaying a transaction with catalog changes, when skipping an uninteresting transaction that performed catalog changes, and when erroring out while replaying a transaction. Unfortunately these three cases were all handled slightly differently, partially because 8de3e410, which greatly simplifies matters, got committed in the midst of the development of logical decoding. The actually problematic case was when logical decoding skipped transaction commits (and thus processed invalidations). When used via the SQL interface, cache invalidation could access the catalog; that's bad, because we didn't set up enough state to allow that correctly. It'd not be hard to set up sufficient state, but the simpler solution is to always perform cache invalidation outside a valid transaction. Also make the different cache invalidation cases look as similar as possible, to ease code review. This fixes the assertion failure reported by Antonin Houska in 53EE02D9.7040702@gmail.com. The presented testcase has been expanded into a regression test. Backpatch to 9.4, where logical decoding was introduced.
-
Andres Freund authored
When building the initial historic catalog snapshot there were scenarios where snapbuild.c would use incorrect xmin/xmax values when starting from an xl_running_xacts record. The values used were always a bit suspect, but happened to be correct in the easy-to-test cases. Notably, the values used when the initial snapshot was computed while no other transactions were running were correct. This is likely to be the cause of the occasional buildfarm failures on animals markhor and tick, but it's quite possible to reproduce problems without CLOBBER_CACHE_ALWAYS. Backpatch to 9.4, where logical decoding was introduced.
-
Heikki Linnakangas authored
There was a window in RestoreBackupBlock where a page would be zeroed out, but not yet locked. If a backend pinned and locked the page in that window, it saw the zeroed page instead of the old page or new page contents, which could lead to missing rows in a result set, or errors. To fix, replace RBM_ZERO with RBM_ZERO_AND_LOCK, which atomically pins, zeroes, and locks the page, if it's not in the buffer cache already. In stable branches, the old RBM_ZERO constant is renamed to RBM_DO_NOT_USE, to avoid breaking any 3rd party extensions that might use RBM_ZERO. More importantly, this avoids renumbering the other enum values, which would cause even bigger confusion in extensions that use ReadBufferExtended, but haven't been recompiled. Backpatch to all supported versions; this has been racy since hot standby was introduced.
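As an illustration (a sketch, not the verbatim bufmgr.h), the stable-branch approach keeps every surviving value at its old position and appends the new modes at the end, so extensions compiled against the old header still see the same numeric values:

    typedef enum
    {
        RBM_NORMAL,                 /* normal read */
        RBM_DO_NOT_USE,             /* was RBM_ZERO; kept so the values
                                     * below keep their old numbers */
        RBM_ZERO_ON_ERROR,          /* read, but zero the page on failure */
        RBM_NORMAL_NO_LOG,
        RBM_ZERO_AND_LOCK,          /* new: pin, zero, and lock atomically */
        RBM_ZERO_AND_CLEANUP_LOCK   /* new: same, but take a cleanup lock */
    } ReadBufferMode;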
-
Alvaro Herrera authored
Move the meat of locking levels to mvcc.sgml, leaving only a link to it in the SELECT reference page. Michael Paquier, with some tweaks by Álvaro
-
Robert Haas authored
The hope is that we can use this to produce better diagnostics in some cases. Peter Geoghegan, reviewed by Michael Paquier, with some further changes by me.
-
Peter Eisentraut authored
-
Heikki Linnakangas authored
-
Fujii Masao authored
Since this parameter applies only to GIN indexes, it's clearer to add "gin" to the parameter name.
-
- 12 Nov, 2014 5 commits
-
-
Tom Lane authored
This only happens if a client issues a Parse message with an empty query string, which is a bit odd; but since it is explicitly called out as legal by our FE/BE protocol spec, we'd probably better continue to allow it. Fix by adding tests everywhere that the raw_parse_tree field is passed to functions that don't or shouldn't accept NULL. Also make it clear in the relevant comments that NULL is an expected case. This reverts commits a73c9dba and 2e9650cb, which fixed specific crash symptoms by hacking things at what now seems to be the wrong end, ie the callee functions. Making the callees allow NULL is superficially more robust, but it's not always true that there is a defensible thing for the callee to do in such cases. The caller has more context and is better able to decide what the empty-query case ought to do. Per followup discussion of bug #11335. Back-patch to 9.2. The code before that is sufficiently different that it would require development of a separate patch, which doesn't seem worthwhile for what is believed to be an essentially cosmetic change.
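A minimal sketch of the caller-side pattern, with a hypothetical helper name; CachedPlanSource (plancache.h) is one of the places the field is carried:

    /* Hypothetical helper: an empty Parse message leaves raw_parse_tree NULL,
     * and each call site decides what that means for it. */
    static bool
    query_is_empty(CachedPlanSource *psrc)
    {
        return psrc->raw_parse_tree == NULL;
    }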
-
Andres Freund authored
Heikki noticed in 544E23C0.8090605@vmware.com that slot.c and snapbuild.c were missing the FIN_CRC32 call when computing/checking checksums of on-disk files. That doesn't lower the error detection capabilities of the checksum, but it is inconsistent with other usages. In a followup mail Heikki also noticed that, contrary to a comment, the 'version' and 'length' struct fields of a replication slot's on-disk data were not covered by the checksum. That's not likely to lead to actually missed corruption, as those fields are cross-checked with the expected version and the actual file length, but it's wrong nonetheless. As fixing these issues makes existing on-disk files unreadable, bump the expected versions of on-disk files for both slots and logical decoding historic catalog snapshots. This means that loading old files will fail with ERROR: "replication slot file ... has unsupported version 1" and ERROR: "snapbuild state file ... has unsupported version 1 instead of 2" respectively. Given the low likelihood of anybody already using these new features in a production setup, that seems acceptable. Fixing these issues made me notice that there's no regression test covering the loading of a historic snapshot from disk - so add one. Backpatch to 9.4 where these features were introduced.
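For reference, the 9.4-era pg_crc.h macro sequence; this sketch of a state-file check (hypothetical function name) shows where the missing finalization step belongs:

    static void
    verify_state_file_checksum(const char *data, size_t len,
                               pg_crc32 stored_checksum)
    {
        pg_crc32 checksum;

        INIT_CRC32(checksum);
        COMP_CRC32(checksum, data, len);   /* accumulate over the payload */
        FIN_CRC32(checksum);               /* the finalization that was missing */

        if (!EQ_CRC32(checksum, stored_checksum))
            elog(ERROR, "checksum mismatch");
    }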
-
Andres Freund authored
The extension's pg_prewarm() function didn't check for interrupts once it started "warming" data. Since individual calls can take a long while, it's important for them to be interruptible. Backpatch to 9.4 where pg_prewarm was introduced.
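The essence of the fix, as a sketch (hypothetical helper; the real function also uses a buffer access strategy): check for interrupts once per block so a long prewarm run can be cancelled:

    static void
    prewarm_block_range(Relation rel, ForkNumber fork,
                        BlockNumber first_block, BlockNumber last_block)
    {
        BlockNumber blkno;

        for (blkno = first_block; blkno <= last_block; ++blkno)
        {
            Buffer  buf;

            CHECK_FOR_INTERRUPTS();    /* make each iteration cancellable */

            buf = ReadBufferExtended(rel, fork, blkno, RBM_NORMAL, NULL);
            ReleaseBuffer(buf);
        }
    }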
-
Noah Misch authored
On Windows, DROP TABLESPACE has a race condition when run concurrently with other processes having opened files in the tablespace. This led to a rare failure on buildfarm member frogmouth. Back-patch to 9.4, where the reconnection was introduced.
-
Peter Eisentraut authored
-
- 11 Nov, 2014 6 commits
-
-
Robert Haas authored
This was introduced by commit 5ea86e6e. Peter Geoghegan
-
Tom Lane authored
This fixes a scenario in which pgp_sym_decrypt() failed with "Wrong key or corrupt data" on messages whose length is 6 less than a power of 2. Per bug #11905 from Connor Penhale. Fix by Marko Tiikkaja, regression test case from Jeff Janes.
-
Tom Lane authored
When the recursive search in dependency.c visits a column and then later visits the whole table containing the column, it needs to propagate the drop-context flags for the table to the existing target-object entry for the column. Otherwise we might refuse the DROP (if not CASCADE) on the incorrect grounds that there was no automatic drop pathway to the column. Remarkably, this has not been reported before, though it's possible at least when an extension creates both a datatype and a table using that datatype. Rather than just marking the column as allowed to be dropped, it might seem good to skip the DROP COLUMN step altogether, since the later DROP of the table will surely get the job done. The problem with that is that the datatype would then be dropped before the table (since the whole situation occurred because we visited the datatype, and then recursed to the dependent column, before visiting the table). That seems pretty risky, and the case is rare enough that it doesn't seem worth expending a lot of effort or risk to make the drops happen in a safe order. So we just play dumb and delete the column separately according to the existing drop ordering rules. Per report from Petr Jelinek, though this is different from his proposed patch. Back-patch to 9.1, where extensions were introduced. There's currently no evidence that such cases can arise before 9.1, and in any case we would also need to back-patch cb5c2ba2 to 9.0 if we wanted to back-patch this.
-
Fujii Masao authored
Платон Малюгин. Reviewed by Michael Paquier, Ali Akbar and Marti Raudsepp.
-
Fujii Masao authored
Previously the maximum size of the GIN pending list was controlled only by work_mem. But the reasonable value of work_mem and the reasonable size of the list are basically not the same, so it was not appropriate to control both of them with a single GUC, i.e., work_mem. This commit separates out a new GUC, pending_list_cleanup_size, from work_mem to allow users to control only the size of the list. This commit also adds pending_list_cleanup_size as a new storage parameter to allow users to specify the size of the list per index. This is useful, for example, when users want to increase the size of the list only for a GIN index which is updated heavily, and decrease it otherwise. Reviewed by Etsuro Fujita.
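A sketch of the intended precedence, with assumed identifiers (GinOptions, pendingListCleanupSize, and the GUC variable name are illustrative, not necessarily the committed names): a per-index storage parameter, when set, overrides the global GUC:

    static int
    effective_cleanup_size(Relation index)
    {
        if (index->rd_options != NULL)
        {
            GinOptions *opts = (GinOptions *) index->rd_options;

            if (opts->pendingListCleanupSize >= 0)   /* -1 means "not set" */
                return opts->pendingListCleanupSize;
        }
        return pending_list_cleanup_size;            /* global GUC value */
    }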
-
Heikki Linnakangas authored
I missed an additional colon in the previous patch. Oops. To make that mistake less likely in the future, add comments as placeholders for unused inputs and outputs in inline assembly.
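For example (illustrative x86 spinlock code, not the exact patch), every colon-separated operand section is written out, with a comment marking the intentionally empty one, so a missing colon stands out:

    static __inline__ int
    tas(volatile char *lock)
    {
        char res = 1;

        __asm__ __volatile__(
            "   lock            \n"
            "   xchgb   %0,%1   \n"
            :   "+q"(res), "+m"(*lock)
            :                           /* no inputs */
            :   "memory", "cc");
        return (int) res;
    }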
-
- 10 Nov, 2014 8 commits
-
-
Heikki Linnakangas authored
Rémi Zara
-
Alvaro Herrera authored
The code that generates the BRIN_XLOG_UPDATE record removes the buffer reference when the page that's the target for the updated tuple is freshly initialized. This is a fairly common optimization, but it was breaking the case where the revmap buffer, which is referenced in the same WAL record, gets a backup block: the replay code was using backup block index 1, which is not valid when the update target buffer gets pruned; the revmap buffer gets assigned 0 instead. Make sure to use the correct backup block index for the revmap when replaying. Bug reported by Fujii Masao.
-
Robert Haas authored
Commit 2781b4be arranged to defer the setup of after-trigger-related data structures, but AfterTriggerPendingOnRel didn't get the memo.
-
Tom Lane authored
At one time it wasn't terribly important what column names were associated with the fields of a composite Datum, but since the introduction of operations like row_to_json(), it's important that looking up the rowtype ID embedded in the Datum returns the column names that users would expect. That did not work terribly well before this patch: you could get the column names of the underlying table, or column aliases from any level of the query, depending on minor details of the plan tree. You could even get totally empty field names, which is disastrous for cases like row_to_json(). To fix this for whole-row Vars, look to the RTE referenced by the Var, and make sure its column aliases are applied to the rowtype associated with the result Datums. This is a tad scary because we might have to return a transient RECORD type even though the Var is declared as having some named rowtype. In principle it should be all right because the record type will still be physically compatible with the named rowtype; but I had to weaken one Assert in ExecEvalConvertRowtype, and there might be third-party code containing similar assumptions. Similarly, RowExprs have to be willing to override the column names coming from a named composite result type and produce a RECORD when the column aliases visible at the site of the RowExpr differ from the underlying table's column names. In passing, revert the decision made in commit 398f70ec to add an alias-list argument to ExecTypeFromExprList: better to provide that functionality in a separate function. This also reverts most of the code changes in d6858148, which we don't need because we're no longer depending on the tupdesc found in the child plan node's result slot to be blessed. Back-patch to 9.4, but not earlier, since this solution changes the results in some cases that users might not have realized were buggy. We'll apply a more restricted form of this patch in older branches.
-
Alvaro Herrera authored
Besides a couple of typo fixes, per David Rowley, Thom Brown, and Amit Langote, and mentions of BRIN in the general CREATE INDEX page again per David, this includes silencing MSVC compiler warnings (thanks Microsoft) and an additional variable initialization per Coverity scanner.
-
Kevin Grittner authored
Reported by Peter Geoghegan. David Rowley.
-
Robert Haas authored
Ian Barwick
-
Bruce Momjian authored
It is invalid because the Gregorian calendar is used for all years.
-
- 08 Nov, 2014 3 commits
-
-
Alvaro Herrera authored
Reported by David Rowley: variadic macros are a problem. Get rid of them using a trick suggested by Tom Lane: add extra parentheses where needed. In the future we might decide we don't need the calls at all and remove them, but it seems appropriate to keep them while this code is still new. Also from David Rowley: brininsert() was trying to use a variable before initializing it. Fix by moving the brin_form_tuple call (which initializes the variable) to within the locked section. Reported by Peter Eisentraut: "new" can't be used as a struct member name, because C++ compilers will choke on it, as revealed by cpluspluscheck.
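The extra-parentheses trick, sketched with illustrative macro names: the macro takes one argument that is itself a parenthesized list, so no variadic macro support is needed:

    #ifdef BRIN_DEBUG
    #define BRIN_elog(args)   elog args
    #else
    #define BRIN_elog(args)   ((void) 0)
    #endif

    /* Call sites double the parentheses: */
    BRIN_elog((DEBUG2, "created tuple for block %u", blkno));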
-
Peter Eisentraut authored
Work around accidental test failures because the working directory path is too long by creating a temporary directory in the (hopefully shorter) system location, symlinking that to the working directory, and creating the tablespaces using the shorter path.
-
Peter Eisentraut authored
The old note about how to use pg_receivexlog as an alternative to archive_command was obsoleted by replication slots.
-
- 07 Nov, 2014 9 commits
-
-
Robert Haas authored
This allows extension modules to define their own methods for scanning a relation, and get the core code to use them. It's unclear as yet how much use this capability will find, but we won't find out if we never commit it. KaiGai Kohei, reviewed at various times and in various levels of detail by Shigeru Hanada, Tom Lane, Andres Freund, Álvaro Herrera, and myself.
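One plausible way an extension might plug in, sketched using the existing set_rel_pathlist_hook; the CustomPath construction itself is elided, and the function names are hypothetical:

    static set_rel_pathlist_hook_type prev_set_rel_pathlist_hook = NULL;

    static void
    my_set_rel_pathlist(PlannerInfo *root, RelOptInfo *rel,
                        Index rti, RangeTblEntry *rte)
    {
        if (prev_set_rel_pathlist_hook)
            prev_set_rel_pathlist_hook(root, rel, rti, rte);

        if (rte->rtekind == RTE_RELATION)
        {
            /* build a CustomPath for our scan method, cost it, and
             * add_path() it so the planner can compare alternatives */
        }
    }

    void
    _PG_init(void)
    {
        prev_set_rel_pathlist_hook = set_rel_pathlist_hook;
        set_rel_pathlist_hook = my_set_rel_pathlist;
    }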
-
Heikki Linnakangas authored
Now that the backup blocks are appended to the WAL record in xloginsert.c, XLogInsert doesn't see them anymore and cannot remove them from the version reconstructed for xlog_outdesc. This makes running with wal_debug=on more expensive, as we now make (unnecessary) temporary copies of the backup blocks, but it doesn't seem worth convoluting the code to keep that optimization. Reported by Alvaro Herrera.
-
Robert Haas authored
This removes some fmgr overhead from cases such as btree index builds. Peter Geoghegan, reviewed by Andreas Karlsson and me.
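To see why this matters, here is an illustrative direct sortsupport comparator (a sketch, not the committed code): it is an ordinary C call per comparison, with no FunctionCallInfo setup:

    static int
    int4_fast_cmp(Datum x, Datum y, SortSupport ssup)
    {
        int32 a = DatumGetInt32(x);
        int32 b = DatumGetInt32(y);

        if (a < b)
            return -1;
        if (a > b)
            return 1;
        return 0;
    }

A sortsupport function would then install it with ssup->comparator = int4_fast_cmp, instead of leaving the sort to reach the SQL-callable comparison function through the fmgr trampoline.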
-
Robert Haas authored
-
Alvaro Herrera authored
Test misc depends on brin, but misc came earlier in the serial schedule file. I didn't notice this because I only run the parallel schedule, but the buildfarm exposed my folly ...
-
Alvaro Herrera authored
BRIN is a new index access method intended to accelerate scans of very large tables, without the maintenance overhead of btrees or other traditional indexes. These indexes work by maintaining "summary" data about block ranges. Bitmap index scans work by reading each summary tuple and comparing it with the query quals; all pages in the range are returned in a lossy TID bitmap if the quals are consistent with the values in the summary tuple, otherwise not. Normal index scans are not supported because these indexes do not store TIDs. As new tuples are added into the index, the summary information is updated (if the block range in which the tuple is added is already summarized) or not; in the latter case, a subsequent pass of VACUUM or the brin_summarize_new_values() function will create the summary information. For data types with natural 1-D sort orders, the summary info consists of the maximum and the minimum values of each indexed column within each page range. We call this type of operator class "Minmax", and we supply a bunch of them for most data types with B-tree opclasses. Since the BRIN code is generalized, other approaches are possible for things such as arrays, geometric types, ranges, etc; even for things such as enum types we could do something different than minmax with better results. In this commit I only include minmax. Catalog version bumped due to new builtin catalog entries. There's more that could be done here, but this is a good step forward. Loosely based on ideas from Simon Riggs; code mostly by Álvaro Herrera, with contribution by Heikki Linnakangas. Patch reviewed by: Amit Kapila, Heikki Linnakangas, Robert Haas. Testing help from Jeff Janes, Erik Rijkers, Emanuel Calvo. PS: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 318633.
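The minmax consistency test reduces to a simple interval check; a sketch (hypothetical function name, with the datatype's comparison operators invoked through fmgr):

    /* May any tuple in this page range match an equality qual?  Skip the
     * range only when the summary proves no match is possible. */
    static bool
    minmax_may_match(FmgrInfo *ge_cmp, FmgrInfo *le_cmp,
                     Datum range_min, Datum range_max, Datum query_val)
    {
        return DatumGetBool(FunctionCall2(ge_cmp, query_val, range_min)) &&
               DatumGetBool(FunctionCall2(le_cmp, query_val, range_max));
    }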
-
Heikki Linnakangas authored
I broke these in 8776faa8. Backpatch to 9.4, where that was done.
-
Heikki Linnakangas authored
The code that generated a record to clear the F_TUPLES_DELETED flag hasn't existed since we got rid of old-style VACUUM FULL. I kept the code that sets the flag, although it's not used for anything anymore, because it might still be interesting information for debugging purposes that some tuples have been deleted from a page. Likewise, the code to turn the root page from non-leaf to leaf page was removed when we got rid of old-style VACUUM FULL. Remove the code to replay that action, too.
-
Tom Lane authored
dict_thesaurus stored phrase IDs in uint16 fields, so it would get confused and even crash if there were more than 64K entries in the configuration file. It turns out to be basically free to widen the phrase IDs to uint32, so let's just do so. This was complained of some time ago by David Boutin (in bug #7793); he later submitted an informal patch but it was never acted on. We now have another complaint (bug #11901 from Luc Ouellette) so it's time to make something happen. This is basically Boutin's patch, but for future-proofing I also added a defense against too many words per phrase. Note that we don't need any explicit defense against overflow of the uint32 counters, since before that happens we'd hit array allocation sizes that repalloc rejects. Back-patch to all supported branches because of the crash risk.
-