- 20 Dec, 2012 2 commits
-
-
Heikki Linnakangas authored
We used to set ThisTimeLineID to the current recovery target timeline, but the recovery target timeline can change during recovery, leaving ThisTimeLineID at an old value. That seems worse than always leaving it at zero to begin with. AFAICS there was no good reason to set it in the first place. ThisTimeLineID is not needed in the checkpointer or bgwriter process until it's time to write the end-of-recovery checkpoint, and at that point ThisTimeLineID is updated anyway.
-
Bruce Momjian authored
Add comment stating that constraint and index names must match.
-
- 19 Dec, 2012 3 commits
-
-
Heikki Linnakangas authored
If you restored from a backup taken from a standby, and the last record in the backup is the checkpoint record, i.e. there is no redo required except for the checkpoint record, we would fail to notice that we've reached the end-of-backup point and that the database is consistent. The result was an error "WAL ends before end of online backup". To fix, move the have-we-reached-end-of-backup check into CheckRecoveryConsistency(), which is already responsible for similar checks with minRecoveryPoint, and is called in the right places. Backpatch to 9.2; this check and bug did not exist before that.
-
Peter Eisentraut authored
In an earlier version of the standard, this was called just "MAX_CARDINALITY".
-
Peter Eisentraut authored
-
- 18 Dec, 2012 5 commits
-
-
Andrew Dunstan authored
Error exposed by recent Assert changes. Complaint from Peter Eisentraut.
-
Tom Lane authored
Some versions of libedit expose bogus definitions of setproctitle(), optreset, and perhaps other symbols that we don't want configure to pick up on. There was a previous report of similar problems with strlcpy(), which we addressed in commit 59cf88da, but the problem has evidently grown in scope since then. In hopes of not having to deal with it again in future, rearrange configure's tests for supplied functions so that we ignore libedit/libreadline except when probing specifically for functions we expect them to provide. Per report from Christoph Berg, though this is slightly more aggressive than his proposed patch.
-
Peter Eisentraut authored
This was used in a time when a shared libperl or libpython was difficult to come by. That is obsolete; the idea behind the flag was never fully portable anyway, and will likely fail on more modern CPU architectures.
-
Peter Eisentraut authored
Karl O. Pinc
-
Tom Lane authored
During crash recovery, we remove disk files belonging to temporary tables, but the system catalog entries for such tables are intentionally not cleaned up right away. Instead, the first backend that uses a temp schema is expected to clean out any leftover objects therein. This approach requires that we be careful to ignore leftover temp tables (since any actual access attempt would fail), *even if their BackendId matches our session*, if we have not yet established use of the session's corresponding temp schema. That worked fine in the past, but was broken by commit debcec7d which incorrectly removed the rd_islocaltemp relcache flag. Put it back, and undo various changes that substituted tests like "rel->rd_backend == MyBackendId" for use of a state-aware flag. Per trouble report from Heikki Linnakangas. Back-patch to 9.1 where the erroneous change was made. In the back branches, be careful to add rd_islocaltemp in a spot in the struct that was alignment padding before, so as not to break existing add-on code.
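A minimal sketch of the distinction, with simplified, hypothetical types (the real RelationData has many more fields): comparing BackendId alone accepts a leftover temp table from a crashed session before our temp schema is set up, while a state-aware flag does not.

```c
#include <stdbool.h>
#include <stdio.h>

typedef int BackendId;

typedef struct RelationData
{
    BackendId rd_backend;      /* owning backend's ID, if a temp rel */
    bool      rd_islocaltemp;  /* set only once THIS session has
                                * established use of its temp schema */
} RelationData;

/* Naive test: a leftover temp table from a crashed session whose
 * BackendId happens to equal ours looks usable -- the bug above. */
static bool
usable_naive(const RelationData *rel, BackendId MyBackendId)
{
    return rel->rd_backend == MyBackendId;
}

/* State-aware test: leftovers are ignored until cleanup happens. */
static bool
usable_flag(const RelationData *rel)
{
    return rel->rd_islocaltemp;
}

int
main(void)
{
    BackendId    MyBackendId = 3;
    RelationData leftover = { 3, false };  /* orphan, same BackendId */

    printf("naive says usable: %d\n", usable_naive(&leftover, MyBackendId));
    printf("flag says usable:  %d\n", usable_flag(&leftover));
    return 0;
}
```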
-
- 16 Dec, 2012 4 commits
-
-
Tom Lane authored
We failed to ever fill the sixth line (LISTEN_ADDR), which caused the attempt to fill the seventh line (SHMEM_KEY) to fail, so that the shared memory key never got added to the file in standalone mode. This has been broken since we added more content to our lock files in 9.1. To fix, tweak the logic in CreateLockFile to add an empty LISTEN_ADDR line in standalone mode. This is a tad grotty, but since that function already knows almost everything there is to know about the contents of lock files, it doesn't seem that it's any better to hack it elsewhere. It's not clear how significant this bug really is, since a standalone backend should never have any children and thus it seems not critical to be able to check the nattch count of the shmem segment externally. But I'm going to back-patch the fix anyway. This problem had escaped notice because of an ancient (and in hindsight pretty dubious) decision to suppress LOG-level messages by default in standalone mode; so that the elog(LOG) complaint in AddToDataDirLockFile that should have warned of the problem didn't do anything. Fixing that is material for a separate patch though.
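A sketch of the fix's shape, with a hypothetical helper and sample values; the point is that line 6 (LISTEN_ADDR) must always be written, even as an empty line in standalone mode, so that line 7 (SHMEM_KEY) lands where readers expect it.

```c
#include <stdio.h>

/* Write the seven lock-file lines in order.  In standalone mode there
 * is no listen address, but the line itself must still be emitted, or
 * SHMEM_KEY becomes line 6 and external readers never find it. */
static void
write_lock_file(FILE *fp, long pid, const char *datadir, long starttime,
                int port, const char *socketdir, const char *listen_addr,
                const char *shmem_key, int standalone)
{
    fprintf(fp, "%ld\n", pid);
    fprintf(fp, "%s\n", datadir);
    fprintf(fp, "%ld\n", starttime);
    fprintf(fp, "%d\n", port);
    fprintf(fp, "%s\n", socketdir);
    fprintf(fp, "%s\n", standalone ? "" : listen_addr);  /* line 6 */
    fprintf(fp, "%s\n", shmem_key);                      /* line 7 */
}

int
main(void)
{
    /* Values are illustrative only */
    write_lock_file(stdout, 12345, "/pgdata", 1355875200L, 5432,
                    "/tmp", "*", "5432001 40042", 1);
    return 0;
}
```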
-
Andrew Dunstan authored
Quiet compiler warnings noted by Peter Eisentraut.
-
Magnus Hagander authored
Craig Ringer
-
Peter Eisentraut authored
Not all system catalog description tables have the same number of columns, and the patch to add oid columns did a bit too much copy-and-pasting.
-
- 15 Dec, 2012 2 commits
-
-
Peter Eisentraut authored
Karl O. Pinc and Jeff Davis
-
Peter Eisentraut authored
-
- 14 Dec, 2012 4 commits
-
-
Andrew Dunstan authored
Per discussion on pgsql-hackers. psql is converted to use the new code. Follows a suggestion from Heikki Linnakangas.
-
Robert Haas authored
Pavan Deolasee, slightly modified by me
-
Peter Eisentraut authored
It provides some additional help to translators.
-
Peter Eisentraut authored
Karl O. Pinc
-
- 13 Dec, 2012 2 commits
-
-
Heikki Linnakangas authored
Before this patch, streaming replication would refuse to start replicating if the timeline in the primary didn't exactly match the standby's. That situation arises when you have a master and two standbys, and you promote one of the standbys to become the new master. Promoting bumps up the timeline ID, and after that bump, the other standby would refuse to continue. There's significantly more timeline-related logic in streaming replication now. First of all, when a standby connects to the primary, it will ask the primary for any timeline history files that are missing from the standby. The missing files are sent using a new replication command TIMELINE_HISTORY, and stored in the standby's pg_xlog directory. Using the timeline history files, the standby can follow the latest timeline present in the primary (recovery_target_timeline='latest'), just as it can follow new timelines appearing in an archive directory. START_REPLICATION now takes a TIMELINE parameter, to specify exactly which timeline to stream WAL from. This allows the standby to request the primary to send over WAL that precedes the promotion. The replication protocol is changed slightly (in a backwards-compatible way, although there's little hope of streaming replication working across major versions anyway), to allow replication to stop when the end of a timeline is reached, returning the walsender to a state where it accepts replication commands. Many thanks to Amit Kapila for testing and reviewing various versions of this patch.
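A hedged sketch of how a client might exercise the new command over a libpq replication connection; the connection string, timeline numbers, and LSN are illustrative, not taken from the patch.

```c
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* "replication=true" opens a walsender connection (host/user here
     * are placeholders) */
    PGconn *conn = PQconnectdb("host=primary user=rep replication=true");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /* Ask the primary for the history file of timeline 2, as a standby
     * missing that file would */
    PGresult *res = PQexec(conn, "TIMELINE_HISTORY 2");
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
        printf("file %s:\n%s\n",
               PQgetvalue(res, 0, 0),   /* filename */
               PQgetvalue(res, 0, 1));  /* content  */
    PQclear(res);

    /* A standby would then stream from an explicit timeline, e.g.
     *   START_REPLICATION 0/3000000 TIMELINE 1
     * which ends (rather than erroring) when timeline 1's WAL runs
     * out, returning the walsender to command mode. */
    PQfinish(conn);
    return 0;
}
```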
-
Heikki Linnakangas authored
This makes unnecessary the ugly hack used to #include postgres.h in pg_basebackup. Based on Alvaro Herrera's patch
-
- 12 Dec, 2012 3 commits
-
-
Heikki Linnakangas authored
If a tuple is larger than page size minus space reserved for fillfactor, heap_multi_insert would never find a page that it fits in, and would repeatedly ask for a new page from RelationGetBufferForTuple. If a tuple is too large to fit on any page, taking fillfactor into account, RelationGetBufferForTuple will always expand the relation. In a normal insert, heap_insert will accept that and put the tuple on the new page. heap_multi_insert, however, does a fillfactor check of its own, and doesn't accept the newly-extended page RelationGetBufferForTuple returns, even though there is no other way to make the tuple fit. Fix that by making the logic in heap_multi_insert more like the heap_insert logic. The first tuple is always put on the page RelationGetBufferForTuple gives us, and the fillfactor check is only applied to the subsequent tuples. Report from David Gould, although I didn't use his patch.
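A simplified simulation of the corrected loop, with made-up page and fillfactor sizes (not the real heapam code): the first tuple unconditionally goes on the page the buffer manager handed back, and only later tuples are held to the fillfactor limit.

```c
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE      8192
#define SAVE_FREESPACE 2048   /* space reserved by fillfactor (made up) */

typedef struct { int free_bytes; } Page;

static bool
fits_with_fillfactor(const Page *page, int tuple_size)
{
    return page->free_bytes - tuple_size >= SAVE_FREESPACE;
}

static void
multi_insert(Page *page, const int *tuple_sizes, int ntuples)
{
    for (int i = 0; i < ntuples; i++)
    {
        /* First tuple: accept the page unconditionally -- the relation
         * was already extended for us if nothing else could fit. */
        if (i > 0 && !fits_with_fillfactor(page, tuple_sizes[i]))
        {
            printf("tuple %d: start a new page\n", i);
            page->free_bytes = PAGE_SIZE;
        }
        page->free_bytes -= tuple_sizes[i];
        printf("tuple %d placed, %d bytes left\n", i, page->free_bytes);
    }
}

int
main(void)
{
    Page page = { PAGE_SIZE };
    int sizes[] = { 7000, 500, 500 };  /* first exceeds fillfactor limit */
    multi_insert(&page, sizes, 3);
    return 0;
}
```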
-
Tom Lane authored
The dynahash code requires the number of buckets in a hash table to fit in an int; but since we calculate the desired hash table size dynamically, there are various scenarios where we might calculate too large a value. The resulting overflow can lead to infinite loops, division-by-zero crashes, etc. I (tgl) had previously installed some defenses against that in commit 299d1716, but that covered only one call path. Moreover it worked by limiting the request size to work_mem, but in a 64-bit machine it's possible to set work_mem high enough that the problem appears anyway. So let's fix the problem at the root by installing limits in the dynahash.c functions themselves. Trouble report and patch by Jeff Davis.
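A sketch of the kind of clamp involved, with an illustrative limit rather than dynahash.c's exact arithmetic: whatever size the caller computes, the value from which the bucket count is derived is capped so it fits in an int.

```c
#include <limits.h>
#include <stdio.h>

/* Clamp a requested element count so the derived bucket count is sure
 * to fit in an int; the margin below INT_MAX is illustrative, leaving
 * room for power-of-2 rounding and doubling. */
static long long
clamp_hash_size(long long nelem)
{
    const long long max_nelem = INT_MAX / 2;
    return nelem > max_nelem ? max_nelem : nelem;
}

int
main(void)
{
    long long requested = 4000000000LL;  /* reachable with big work_mem */
    printf("requested %lld -> using %lld\n",
           requested, clamp_hash_size(requested));
    return 0;
}
```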
-
Tom Lane authored
Per discussion, this seems necessary to allow recovery from broken event triggers, or broken indexes on pg_event_trigger. Dimitri Fontaine
-
- 11 Dec, 2012 6 commits
-
-
Kevin Grittner authored
In situations where there are over 8MB of empty pages at the end of a table, the truncation work for trailing empty pages takes longer than deadlock_timeout, and there is frequent access to the table by processes other than autovacuum, there was a problem with the autovacuum worker process being canceled by the deadlock checking code. The truncation work done by autovacuum up to that point was lost, and the attempt was retried by a later autovacuum worker. The attempts could continue indefinitely without making progress, consuming resources and blocking other processes for up to deadlock_timeout each time. This patch has the autovacuum worker checking whether it is blocking any other process at 20ms intervals. If such a condition develops, the autovacuum worker will persist the work it has done so far, release its lock on the table, and sleep in 50ms intervals for up to 5 seconds, hoping to be able to re-acquire the lock and try again. If it is unable to get the lock in that time, it moves on and a worker will try to continue later from the point this one left off. While this patch doesn't change the rules about when and what to truncate, it does cause the truncation to occur sooner, with less blocking, and with the consumption of fewer resources when there is contention for the table's lock. The only user-visible change other than improved performance is that the table size during truncation may change incrementally instead of just once. This problem exists in all supported versions but is infrequently reported, although some reports of performance problems when autovacuum runs might be caused by this. Initial commit is just the master branch, but this should probably be backpatched once the build farm and general developer usage confirm that there are no surprising effects. Jan Wieck
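A sketch of the retry loop's shape, using the intervals from the description above; the lock-manager and storage calls are stand-ins, not the real API.

```c
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define CHECK_INTERVAL_MS 20    /* test for blocked processes this often */
#define RETRY_INTERVAL_MS 50    /* sleep between lock re-acquisitions    */
#define MAX_WAIT_MS       5000  /* then leave the rest for a later run   */

/* Stand-ins for the real lock-manager and storage calls */
static int  pages_left = 3;
static bool truncate_some_pages(void)    { return --pages_left > 0; }
static bool blocking_other_process(void) { return pages_left == 2; }
static bool try_reacquire_lock(void)     { return true; }

static void
autovacuum_truncate(void)
{
    while (truncate_some_pages())
    {
        usleep(CHECK_INTERVAL_MS * 1000);
        if (!blocking_other_process())
            continue;

        /* Persist the truncation done so far, release the table lock,
         * then try to get it back for a while. */
        printf("yielding lock to a waiting process\n");
        bool reacquired = false;
        for (int waited = 0; waited < MAX_WAIT_MS; waited += RETRY_INTERVAL_MS)
        {
            usleep(RETRY_INTERVAL_MS * 1000);
            if (try_reacquire_lock())
            {
                reacquired = true;
                break;          /* keep truncating from where we were */
            }
        }
        if (!reacquired)
            return;  /* a later autovacuum worker continues from here */
    }
}

int
main(void)
{
    autovacuum_truncate();
    return 0;
}
```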
-
Bruce Momjian authored
All versions of pg_upgrade upgraded invalid indexes caused by CREATE INDEX CONCURRENTLY failures and marked them as valid. The patch adds a check to all pg_upgrade versions and throws an error during upgrade or --check. Backpatch to 9.2, 9.1, 9.0. Patch slightly adjusted.
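A sketch of the kind of catalog check involved; the query text is ours, not pg_upgrade's, but pg_index.indisvalid is the flag that a failed CREATE INDEX CONCURRENTLY leaves false.

```c
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=postgres");  /* placeholder DSN */
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    /* List indexes the old cluster considers invalid */
    PGresult *res = PQexec(conn,
        "SELECT n.nspname, c.relname "
        "FROM pg_catalog.pg_index i "
        "JOIN pg_catalog.pg_class c ON c.oid = i.indexrelid "
        "JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace "
        "WHERE NOT i.indisvalid");

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        for (int i = 0; i < PQntuples(res); i++)
            fprintf(stderr, "invalid index: %s.%s\n",
                    PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```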
-
Heikki Linnakangas authored
EndRecPtr is the last record that we've read, but not necessarily yet replayed. CheckRecoveryConsistency should compare minRecoveryPoint with the last replayed record instead. This caused recovery to think it's reached consistency too early. Now that we do the check in CheckRecoveryConsistency correctly, we have to move the call of that function to after redoing a record. The current place, after reading a record but before replaying it, is wrong. In particular, if there are no more records after the one ending at minRecoveryPoint, we don't enter hot standby until one extra record is generated and read by the standby, and CheckRecoveryConsistency is called. These two bugs conspired to make the code appear to work correctly, except for the small window between reading the last record that reaches minRecoveryPoint, and replaying it. In passing, rename recoveryLastRecPtr, which is the last record replayed, to lastReplayedEndRecPtr. This makes it slightly less confusing with replayEndRecPtr, which is the last record read that we're about to replay. Original report from Kyotaro HORIGUCHI, further diagnosis by Fujii Masao. Backpatch to 9.0, where Hot Standby subtly changed the test from "minRecoveryPoint < EndRecPtr" to "minRecoveryPoint <= EndRecPtr". The former works because where the test is performed, we have always read one more record than we've replayed.
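A toy illustration with simplified LSN values of why the comparison must use the last replayed record rather than the last record read:

```c
#include <stdint.h>
#include <stdio.h>

typedef uint64_t XLogRecPtr;

/* The consistency test must look at what has been *replayed*; the
 * last record *read* (EndRecPtr) may be ahead of replay, and
 * comparing against it declares consistency too early. */
static int
reached_consistency(XLogRecPtr minRecoveryPoint,
                    XLogRecPtr lastReplayedEndRecPtr)
{
    return lastReplayedEndRecPtr >= minRecoveryPoint;
}

int
main(void)
{
    XLogRecPtr minRecoveryPoint      = 0x3000100;
    XLogRecPtr EndRecPtr             = 0x3000100;  /* read, not replayed */
    XLogRecPtr lastReplayedEndRecPtr = 0x3000028;

    printf("wrong (uses EndRecPtr): %d\n",
           EndRecPtr >= minRecoveryPoint);
    printf("right (uses last replayed): %d\n",
           reached_consistency(minRecoveryPoint, lastReplayedEndRecPtr));
    return 0;
}
```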
-
Andrew Dunstan authored
Normally each module is tested in a database named contrib_regression, which is dropped and recreated at the beginning of each pg_regress run. This new mode, enabled by adding USE_MODULE_DB=1 to the make command line, runs most modules in a database with the module name embedded in it. This will make testing pg_upgrade on clusters with the contrib modules a lot easier. Second attempt at this, this time accommodating make versions older than 3.82. Still to be done: adapt to the MSVC build system. Backpatch to 9.0, which is the earliest version it is reasonably possible to test upgrading from.
-
Bruce Momjian authored
Fix previous commit that added synchronous_commit=off, but broke -O/-o due to missing space in argument passing. Backpatch to 9.2.
-
Peter Eisentraut authored
Apparently, this service has been dead since 2008.
-
- 10 Dec, 2012 2 commits
-
-
Heikki Linnakangas authored
If a file is truncated, we must update minRecoveryPoint. Once a file is truncated, there's no going back; it would not be safe to stop recovery at a point earlier than that anymore. Per report from Kyotaro HORIGUCHI. Backpatch to 8.4. Before that, minRecoveryPoint was not updated during recovery at all.
-
Heikki Linnakangas authored
Forgot to update it at the right place. Also, consider a checkpoint record that switches to a new timeline to be on the new timeline. This fixes erroneous "requested timeline 2 does not contain minimum recovery point" errors, pointed out by Amit Kapila while testing another patch.
-
- 09 Dec, 2012 1 commit
-
-
Tom Lane authored
Commit 72920557 added privileges on data types, but there were a number of oversights. The implementation of default privileges for types missed a few places, and pg_dump was utterly innocent of the whole concept. Per bug #7741 from Nathan Alden, and subsequent wider investigation.
-
- 08 Dec, 2012 2 commits
-
-
Tom Lane authored
This patch makes "simple" views automatically updatable, without the need to create either INSTEAD OF triggers or INSTEAD rules. "Simple" views are those classified as updatable according to SQL-92 rules. The rewriter transforms INSERT/UPDATE/DELETE commands on such views directly into an equivalent command on the underlying table, which will generally have noticeably better performance than is possible with either triggers or user-written rules. A view that has INSTEAD OF triggers or INSTEAD rules continues to operate the same as before. For the moment, security_barrier views are not considered simple. Also, we do not support WITH CHECK OPTION. These features may be added in future. Dean Rasheed, reviewed by Amit Kapila
-
Peter Eisentraut authored
The old one is responding with 404.
-
- 07 Dec, 2012 4 commits
-
-
Bruce Momjian authored
Pg_upgrade displays file names during copy and database names during dump/restore. Andrew Dunstan identified three bugs:
* long file names were being truncated to 60 _leading_ characters, which often do not change for long file names
* file names were truncated to 60 characters in log files
* carriage returns were being output to log files
This commit fixes these --- it prints 60 _trailing_ characters to the status display, and full path names without carriage returns to log files. It also suppresses status output to the log file unless verbose mode is used.
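A sketch of the trailing-truncation idea (the width constant and sample path are illustrative): long paths usually differ at the end, not the beginning, so the status line shows the last 60 characters.

```c
#include <stdio.h>
#include <string.h>

#define MESSAGE_WIDTH 60

static void
print_status(const char *path)
{
    size_t len = strlen(path);
    const char *shown = len > MESSAGE_WIDTH ? path + len - MESSAGE_WIDTH : path;

    /* '\r' keeps this on one self-updating terminal line; the log file
     * would instead get the full path with a plain '\n'. */
    printf("%-*s\r", MESSAGE_WIDTH, shown);
    fflush(stdout);
}

int
main(void)
{
    print_status("/data/clusters/production/pg_upgrade_tmp/base/16384/"
                 "pg_largeobject_metadata_loid_index.12345");
    printf("\n");
    return 0;
}
```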
-
Simon Riggs authored
-
Simon Riggs authored
Jeff Davis, additional test by me
-
Simon Riggs authored
Remove message when FREEZE not honoured, clarify reasons in comments and docs.
-