- 15 Jan, 2013 3 commits
-
-
Alvaro Herrera authored
When attempting to move an object into the schema in which it already was, for most object classes we were correctly complaining about exactly that ("object is already in schema"); but for some other object classes, such as functions, we were instead complaining of a name collision ("object already exists in schema"). The latter is wrong and misleading, per complaint from Robert Haas in CA+TgmoZ0+gNf7RDKRc3u5rHXffP=QjqPZKGxb4BsPz65k7qnHQ@mail.gmail.com

To fix, refactor the way these checks are done. As a bonus, the resulting code is smaller and can also share some code with Rename cases.

While at it, remove use of getObjectDescriptionOids() in error messages. These are normally disallowed because of translatability considerations, but this one had slipped through since 9.1. (Not sure that this is worth backpatching, though, as it would create some untranslated messages in back branches.)

This is loosely based on a patch by KaiGai Kohei, heavily reworked by me.
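For illustration, a minimal sketch of the corrected check order (the helper and variable names here are assumptions, not the actual refactored code):

    /* first decide whether the object is already in the target schema;
     * only afterwards may a duplicate name be reported as a collision */
    Oid oldNspOid = get_object_namespace(&address);

    if (oldNspOid == nspOid)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("%s is already in schema \"%s\"",
                        object_description,   /* plain text, not getObjectDescriptionOids() */
                        get_namespace_name(nspOid))));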
-
Heikki Linnakangas authored
The WAL streaming message format changed in 9.3, so the 9.3 versions of pg_basebackup and pg_receivexlog won't work against older servers.
-
Tom Lane authored
Original coding would corrupt the hashtable if the item being updated was at the end of its bucket chain and the new hash key hashed to that same bucket. Diagnosis and fix by Heikki Linnakangas.
-
- 14 Jan, 2013 7 commits
-
-
Heikki Linnakangas authored
Because the return value of lseek() was assigned to an unsigned size_t variable, we'd fail to notice an error return code -1. Compiler gave a warning about this. Andres Freund
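The bug pattern, as a minimal self-contained sketch (function and variable names are illustrative):

    #include <sys/types.h>
    #include <unistd.h>

    static int
    seek_to_end_broken(int fd)
    {
        size_t pos = lseek(fd, 0, SEEK_END);   /* -1 wraps to a huge value */
        if (pos < 0)    /* always false for an unsigned type; hence the warning */
            return -1;
        return 0;
    }

    static int
    seek_to_end_fixed(int fd)
    {
        off_t pos = lseek(fd, 0, SEEK_END);    /* off_t is signed, as lseek() returns */
        if (pos < 0)
            return -1;
        return 0;
    }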
-
Tom Lane authored
This was legal back in the days of add_missing_from, though perhaps never good style. It's not legal anymore ... Jan Urbański
-
Tom Lane authored
Dates outside the supported range could be entered, but would not print reasonably, and operations such as conversion to timestamp wouldn't behave sanely either. Since this has the potential to result in undumpable table data, it seems worth back-patching. Hitoshi Harada
-
Tom Lane authored
This seems to have been invented in 2011 to represent GMT+3 with no daylight-savings rule, as now used in Europe/Kaliningrad and Europe/Minsk. There are no conflicts, so we might as well add it to the Default list. Per bug #7804 from Ruslan Izmaylov.
-
Alvaro Herrera authored
Andres Freund
-
Tom Lane authored
The code in PostPrepare_Locks supposed that it could reassign locks to the prepared transaction's dummy PGPROC by deleting the PROCLOCK table entries and immediately creating new ones. This was safe when that code was written, but since we invented partitioning of the shared lock table, it's not safe --- another process could steal away the PROCLOCK entry in the short interval when it's on the freelist. Then, if we were otherwise out of shared memory, PostPrepare_Locks would have to PANIC, since it's too late to back out of the PREPARE at that point.

Fix by inventing a dynahash.c function to atomically update a hashtable entry's key. (This might possibly have other uses in future.)

This is an ancient bug that in principle we ought to back-patch, but the odds of someone hitting it in the field seem really tiny, because (a) the risk window is small, and (b) nobody runs servers with maxed-out lock tables for long, because they'll be getting non-PANIC out-of-memory errors anyway. So fixing it in HEAD seems sufficient, at least until the new code has gotten some testing.
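In outline, the old and new coding compare like this (a simplified sketch; the real code carries more bookkeeping):

    /* unsafe under lock-table partitioning: between HASH_REMOVE and
     * HASH_ENTER the freed entry sits on a shared freelist that another
     * backend can claim, so the re-insert can find the table full */
    hash_search(LockMethodProcLockHash, &proclocktag, HASH_REMOVE, NULL);
    proclocktag.myProc = dummyproc;
    hash_search(LockMethodProcLockHash, &proclocktag, HASH_ENTER, &found);

    /* atomic: rewrite the existing entry's key in place with the new
     * dynahash.c primitive */
    proclocktag.myProc = dummyproc;
    if (!hash_update_hash_key(LockMethodProcLockHash, proclock, &proclocktag))
        elog(PANIC, "duplicate PROCLOCK entry while reassigning prepared transaction's locks");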
-
Peter Eisentraut authored
-
- 13 Jan, 2013 2 commits
-
-
Tom Lane authored
Forgot I was going to do this as part of the previous patch ...
-
Tom Lane authored
In commit 71450d7f, we added code to inform suitably-intelligent compilers that ereport() doesn't return if the elevel is ERROR or higher. This patch extends that to elog(), and also fixes a double-evaluation hazard that the previous commit created in ereport(), as well as reducing the emitted code size.

The elog() improvement requires the compiler to support __VA_ARGS__, which should be available in just about anything nowadays since it's required by C99. But our minimum language baseline is still C89, so add a configure test for that.

The previous commit assumed that ereport's elevel could be evaluated twice, which isn't terribly safe --- there are already counterexamples in xlog.c. On compilers that have __builtin_constant_p, we can use that to protect the second test, since there's no possible optimization gain if the compiler doesn't know the value of elevel. Otherwise, use a local variable inside the macros to prevent double evaluation. The local-variable solution is inferior because (a) it leads to useless code being emitted when elevel isn't constant, and (b) it increases the optimization level needed for the compiler to recognize that subsequent code is unreachable. But it seems better than not teaching non-gcc compilers about unreachability at all.

Lastly, if the compiler has __builtin_unreachable(), we can use that instead of abort(), resulting in a noticeable code savings since no function call is actually emitted. However, it seems wise to do this only in non-assert builds. In an assert build, continue to use abort(), so that the behavior will be predictable and debuggable if the "impossible" happens.

These changes involve making the ereport and elog macros emit do-while statement blocks not just expressions, which forces small changes in a few call sites.

Andres Freund, Tom Lane, Heikki Linnakangas
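In outline, the reworked macro looks roughly like this (a sketch, not the exact header text):

    #define my_ereport(elevel, rest) \
        do { \
            if (errstart(elevel, __FILE__, __LINE__, PG_FUNCNAME_MACRO, TEXTDOMAIN)) \
                rest; \
            if (__builtin_constant_p(elevel) && (elevel) >= ERROR) \
                pg_unreachable(); \
        } while (0)

The __builtin_constant_p guard means elevel is tested a second time only when the compiler already knows its value, so there is no double-evaluation hazard and no dead code; on compilers lacking it, a local variable inside the do-while holds elevel so it is evaluated once. pg_unreachable() would expand to __builtin_unreachable() in non-assert builds and abort() in assert builds.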
-
- 12 Jan, 2013 1 commit
-
-
Andrew Dunstan authored
EXTRA_REGRESS_OPTS is now used by ecpg tests, and not clobbered by pg_upgrade tests. This change won't affect anything that doesn't set this environment variable, but will enable the buildfarm to control exactly what port regression test installs will be running on, and thus to detect possible rogue postmasters more easily. Backpatch to release 9.2, where EXTRA_REGRESS_OPTS was first used.
-
- 11 Jan, 2013 2 commits
-
-
Tom Lane authored
Historically we've used a couple of very ad-hoc fudge factors to try to get the right results when indexes of different sizes would satisfy a query with the same number of index leaf tuples being visited. In commit 21a39de5 I tweaked one of these fudge factors, with results that proved disastrous for larger indexes. Commit bf01e34b fudged it some more, but still with not a lot of principle behind it.

What seems like a better way to address these issues is to explicitly model index-descent costs, since that's what's really at stake when considering different indexes with similar leaf-page-level costs. We tried that once long ago, and found that charging random_page_cost per page descended through was way too much, because upper btree levels tend to stay in cache in real-world workloads. However, there are still CPU costs to think about, and the previous fudge factors can be seen as a crude attempt to account for those costs. So this patch replaces those fudge factors with explicit charges for the number of tuple comparisons needed to descend the index tree, plus a small charge per page touched in the descent. The cost multipliers are chosen so that the resulting charges are in the vicinity of the historical (pre-9.2) fudge factors for indexes of up to about a million tuples, while not ballooning unreasonably beyond that, as the old fudge factor did (even more so in 9.2).

To make this work accurately for btree indexes, add some code that allows extraction of the known root-page height from a btree. There's no equivalent number readily available for other index types, but we can use the log of the number of index pages as an approximate substitute.

This seems like too much of a behavioral change to risk back-patching, but it should improve matters going forward. In 9.2 I'll just revert the fudge-factor change.
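The resulting btree charges look roughly like this (a sketch; the exact multipliers are the sort of values the message says were tuned against the old fudge factors):

    /* CPU cost of the binary-search comparisons needed to descend the
     * tree: about log2(N) comparisons for N index tuples */
    if (index->tuples > 1)
        indexStartupCost += ceil(log(index->tuples) / log(2.0)) * cpu_operator_cost;

    /* plus a small charge per page touched in the descent; btrees expose
     * their true root-page height, other index types approximate it with
     * the log of the number of index pages */
    indexStartupCost += (index->tree_height + 1) * 50.0 * cpu_operator_cost;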
-
Tom Lane authored
I notice that plperl's makefile adds the -I for $perl_archlibexp/CORE at the end of CPPFLAGS, not the beginning. It seems somewhat unlikely that the include search order has anything to do with why buildfarm member okapi is failing, but I'm about out of other ideas.
-
- 10 Jan, 2013 2 commits
-
-
Tom Lane authored
It appears that perl_embed_ldflags should already mention all the libraries that are required by libperl.so itself. So let's try the test link with just those and not the other LIBS we've found up to now. This should more nearly reproduce what will happen when plperl is linked, and perhaps will fix buildfarm member okapi's problem.
-
Tom Lane authored
Although most platforms seem to package Perl in such a way that these files are present even in basic Perl installations, Debian does not. Hence, make an effort to fail during configure rather than build if --with-perl was given and these files are lacking. Per gripe from Josh Berkus.
-
- 09 Jan, 2013 4 commits
-
-
Andrew Dunstan authored
This means we can now construct a configure test for the library presence. Previously these parameters were only figured out at build time in plperl's GNUmakefile.
-
Magnus Hagander authored
JiangGuiqing
-
Magnus Hagander authored
Fixes segmentation fault during regular use. Fujii Masao
-
Bruce Momjian authored
This patch implements parallel copying/linking of files by tablespace using the --jobs option in pg_upgrade.
-
- 08 Jan, 2013 2 commits
-
-
Tom Lane authored
If VirtualXactLock() has to wait for a transaction that holds its VXID lock as a fast-path lock, it must first convert the fast-path lock to a regular lock. It failed to take the required "partition" lock on the main shared-memory lock table while doing so. This is the direct cause of the assert failure in GetLockStatusData() recently observed in the buildfarm, but more worryingly it could result in arbitrary corruption of the shared lock table if some other process were concurrently engaged in modifying the same partition of the lock table.

Fortunately, VirtualXactLock() is only used by CREATE INDEX CONCURRENTLY and DROP INDEX CONCURRENTLY, so the opportunities for failure are fewer than they might have been.

In passing, improve some comments and be a bit more consistent about order of operations.
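The required discipline, sketched with the lock manager's naming (simplified):

    /* converting a fast-path lock to a regular one must happen while
     * holding the partition lock guarding that part of the shared table */
    uint32   hashcode = LockTagHashCode(&locktag);
    LWLockId partitionLock = LockHashPartitionLock(hashcode);

    LWLockAcquire(partitionLock, LW_EXCLUSIVE);   /* the step that was missing */
    /* ... transfer the lock from the holder's PGPROC fast-path slots
     * into the main shared-memory lock table ... */
    LWLockRelease(partitionLock);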
-
Peter Eisentraut authored
-
- 07 Jan, 2013 3 commits
-
-
Andrew Dunstan authored
-
Robert Haas authored
Report by me. Fix by KaiGai Kohei.
-
Tatsuo Ishii authored
Add a new quiet logging option (-q) in initialize mode (-i), producing only one progress message per 5 seconds along with elapsed time and estimated remaining time. Also add elapsed time and estimated remaining time to the default logging (which prints one message every 100000 rows). Patch contributed by Tomas Vondra, reviewed by Jeevan Chalke and Tatsuo Ishii.
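A sketch of the throttling logic (hypothetical variable names):

    /* at most one progress line per 5 seconds, with elapsed time and a
     * simple linear estimate of the remaining time; rows_done > 0 here */
    time_t now = time(NULL);
    if (now - last_report >= 5)
    {
        double elapsed   = difftime(now, start_time);
        double remaining = elapsed * (total_rows - rows_done) / rows_done;

        fprintf(stderr, "%d of %d tuples (%d%%) done (elapsed %.2f s, remaining %.2f s)\n",
                rows_done, total_rows,
                (int) (100.0 * rows_done / total_rows),
                elapsed, remaining);
        last_report = now;
    }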
-
- 06 Jan, 2013 1 commit
-
-
Tom Lane authored
Pre-Lion versions of Apple's linker don't allow space between -F and its argument. (Snow Leopard is nice enough to tell you that in so many words, but older versions just fail with very obscure link errors, as seen on buildfarm member locust for instance.) Oversight in commit fc874507.
-
- 05 Jan, 2013 6 commits
-
-
Magnus Hagander authored
Adds a command-line option -R to pg_basebackup that creates a recovery.conf which enables standby mode using the same parameters that pg_basebackup used to connect to the master, and writes it into the output directory (or injects it into the tar file when tar format is used). Zoltan Boszormenyi, modified by Magnus Hagander, reviewed by Amit Kapila & Fujii Masao
-
Magnus Hagander authored
For code-reuse in upcoming functionality in pg_basebackup. Zoltan Boszormenyi
-
Peter Eisentraut authored
The PL/Python build on OS X was previously hardcoded to use the system installation of Python, ignoring whatever was specified to configure. Except that it would use the header files from configure, which could lead to mismatches. It was not possible to build against a custom Python installation. Now, we check in configure how the specified Python installation was built and use that, supporting framework and non-framework builds.
-
Peter Eisentraut authored
This reverts commit be0dfbad. The previous information that Py_RETURN_TRUE and Py_RETURN_FALSE are supported in Python 2.3 is wrong. They require Python 2.4. Update the comment about that.
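For reference, the macros can be backfilled on a pre-2.4 Python with the same definitions CPython 2.4 introduced (a compatibility sketch; the commit itself only corrects the comment):

    #include <Python.h>

    #ifndef Py_RETURN_TRUE              /* not provided before Python 2.4 */
    #define Py_RETURN_TRUE  return Py_INCREF(Py_True), Py_True
    #endif
    #ifndef Py_RETURN_FALSE
    #define Py_RETURN_FALSE return Py_INCREF(Py_False), Py_False
    #endif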
-
Peter Eisentraut authored
Parameter defaults are actually in the SQL standard, while it was previously claimed they were not.
-
Peter Eisentraut authored
-
- 04 Jan, 2013 1 commit
-
-
Tom Lane authored
SPI_execute() and related functions create a CachedPlan, execute it once, and immediately discard it, so that the functionality offered by plancache.c is of no value in this code path. And performance measurements show that the extra data copying and invalidation checking done by plancache.c slows down simple queries by 10% or more compared to 9.1. However, enough of the SPI code is shared with functions that do need plan caching that it seems impractical to bypass plancache.c altogether. Instead, let's invent a variant version of cached plans that preserves 99% of the API but doesn't offer any of the actual functionality, nor the overhead. This puts SPI_execute() performance back on par, or maybe even slightly better, than it was before.

This change should resolve recent complaints of performance degradation from Dong Ye, Pavel Stehule, and others. By avoiding data copying, this change also reduces the amount of memory needed to execute many-statement SPI_execute() strings, as for instance in a recent complaint from Tomas Vondra.

An additional benefit of this change is that multi-statement SPI_execute() query strings are now processed fully serially, that is we complete execution of earlier statements before running parse analysis and planning on following ones. This eliminates a long-standing POLA violation, in that DDL that affects the behavior of a later statement will now behave as expected.

Back-patch to 9.2, since this was a performance regression compared to 9.1. (In 9.2, place the added struct fields so as to avoid changing the offsets of existing fields.)

Heikki Linnakangas and Tom Lane
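The POLA point is easiest to see with a multi-statement string (a hedged illustration; the table is made up):

    /* with fully serial execution, the UPDATE is parsed and planned only
     * after the ALTER has run, so the new column is already visible */
    int ret = SPI_execute(
        "ALTER TABLE t ADD COLUMN c integer; "
        "UPDATE t SET c = 0;",
        false,      /* read_only */
        0);         /* tcount: no row-count limit */
    if (ret < 0)
        elog(ERROR, "SPI_execute failed: %s", SPI_result_code_string(ret));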
-
- 03 Jan, 2013 4 commits
-
-
Tom Lane authored
On non-Windows machines, we use the Unix socket for connections to test postmasters, so there is no need to create a TCP socket. Furthermore, doing so causes failures due to port conflicts if two builds are carried out concurrently on one machine. (If the builds are done in different chroots, which is standard practice at least in Red Hat distros, there is no risk of conflict on the Unix socket.)

Suppressing the TCP socket by setting listen_addresses to empty has long been standard practice for pg_regress, and pg_upgrade knows about this too ... but pg_upgrade's test.sh didn't get the memo.

Back-patch to 9.2, and also sync the 9.2 version of the script with HEAD as much as practical.
-
Heikki Linnakangas authored
If you take a base backup from a standby server with "pg_basebackup -X fetch", and the timeline switches while the backup is being taken, the backup used to fail with an error "requested WAL segment %s has already been removed". This is because the server-side code that sends over the required WAL files would not construct the WAL filename with the correct timeline after a switch. Fix that by using readdir() to scan pg_xlog for all the WAL segments in the range, regardless of timeline.

Also, include all timeline history files in the backup, if taken with "-X fetch". That fixes another related bug: If a timeline switch happened just before the backup was initiated in a standby, the WAL segment containing the initial checkpoint record contains WAL from the older timeline too. Recovery will not accept that without a timeline history file that lists the older timeline.

Backpatch to 9.2. Versions prior to that were not affected as you could not take a base backup from a standby before 9.2.
-
Heikki Linnakangas authored
Streaming replication can fetch any missing timeline history files from the master, but recovery would read the timeline history file for the target timeline before reading the checkpoint record, and before walreceiver has had a chance to fetch it from the master. Delay reading it, and the sanity checks involving timeline history, until after reading the checkpoint record.

There is at least one scenario where this makes a difference: if you take a base backup from a standby server right after a timeline switch, the WAL segment containing the initial checkpoint record will begin with an older timeline ID. Without the timeline history file, recovering that file will fail as the older timeline ID is not recognized to be an ancestor of the target timeline. If you try to recover from such a backup, using only streaming replication to fetch the WAL, this patch is required for that to work.
-
Bruce Momjian authored
Adjust pg_upgrade page conversion functions (which are not used) to return void so transfer_all_new_dbs can return void.
-
- 02 Jan, 2013 2 commits
-
-
Alvaro Herrera authored
-
Alvaro Herrera authored
... not on auxiliary processes. I managed to overlook the fact that I had disabled assertions on my HEAD checkout long ago. Hopefully this will turn the buildfarm green again, and put an end to today's silliness.
-