- 29 Nov, 2012 3 commits
-
-
Michael Meskes authored
array datatype which of course is wrong. Applied patch by Muhammad Usama <m.usama@gmail.com> to fix this.
-
Tom Lane authored
This is to see if it will stop intermittent build failures on buildfarm member okapi. We know that gmake 3.82 has some problems with sometimes not honoring dependencies in parallel builds, and it seems likely that this is more of the same. Since the vast bulk of the work in the preproc directory is associated with creating preproc.c and then preproc.o, parallelism buys us hardly anything here anyway.

Also, make both this .NOTPARALLEL and the one previously added in interfaces/ecpg/Makefile be conditional on "ifeq ($(MAKE_VERSION),3.82)". The known bug in gmake is fixed upstream and should not be present in 3.83 and up, and there's no reason to think it affects older releases.
-
Tom Lane authored
Commit 8cb53654, which introduced DROP INDEX CONCURRENTLY, managed to break CREATE INDEX CONCURRENTLY via a poor choice of catalog state representation. The pg_index state for an index that's reached the final pre-drop stage was the same as the state for an index just created by CREATE INDEX CONCURRENTLY. This meant that the (necessary) change to make RelationGetIndexList ignore about-to-die indexes also made it ignore freshly-created indexes; which is catastrophic because the latter do need to be considered in HOT-safety decisions. Failure to do so leads to incorrect index entries and subsequently wrong results from queries depending on the concurrently-created index.

To fix, add an additional boolean column "indislive" to pg_index, so that the freshly-created and about-to-die states can be distinguished. (This change obviously is only possible in HEAD. This patch will need to be back-patched, but in 9.2 we'll use a kluge consisting of overloading the formerly-impossible state of indisvalid = true and indisready = false.)

In addition, change CREATE/DROP INDEX CONCURRENTLY so that the pg_index flag changes they make without exclusive lock on the index are made via heap_inplace_update() rather than a normal transactional update. The latter is not very safe because moving the pg_index tuple could result in concurrent SnapshotNow scans finding it twice or not at all, thus possibly resulting in index corruption. This is a pre-existing bug in CREATE INDEX CONCURRENTLY, which was copied into the DROP code.

In addition, fix various places in the code that ought to check to make sure that the indexes they are manipulating are valid and/or ready as appropriate. These represent bugs that have existed since 8.2, since a failed CREATE INDEX CONCURRENTLY could leave a corrupt or invalid index behind, and we ought not try to do anything that might fail with such an index.

Also fix RelationReloadIndexInfo to ensure it copies all the pg_index columns that are allowed to change after initial creation. Previously we could have been left with stale values of some fields in an index relcache entry. It's not clear whether this actually had any user-visible consequences, but it's at least a bug waiting to happen.

In addition, do some code and docs review for DROP INDEX CONCURRENTLY; some cosmetic code cleanup but mostly addition and revision of comments. This will need to be back-patched, but in a noticeably different form, so I'm committing it to HEAD before working on the back-patch.

Problem reported by Amit Kapila, diagnosis by Pavan Deolasee, fix by Tom Lane and Andres Freund.
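A hedged sketch of the non-transactional pg_index update described above; the function name is hypothetical and the lock level illustrative, not the committed code:

    /*
     * Hypothetical sketch: set pg_index.indisready by overwriting the tuple
     * in place.  Because the tuple cannot move, concurrent SnapshotNow scans
     * cannot see it twice or miss it, unlike a normal transactional update.
     */
    #include "postgres.h"
    #include "access/heapam.h"
    #include "access/htup_details.h"
    #include "catalog/pg_index.h"
    #include "utils/syscache.h"

    static void
    set_index_ready_inplace(Oid indexId)
    {
        Relation    pg_index;
        HeapTuple   indexTuple;
        Form_pg_index indexForm;

        pg_index = heap_open(IndexRelationId, RowExclusiveLock);

        /* get a modifiable copy of the index's pg_index tuple */
        indexTuple = SearchSysCacheCopy1(INDEXRELID,
                                         ObjectIdGetDatum(indexId));
        if (!HeapTupleIsValid(indexTuple))
            elog(ERROR, "cache lookup failed for index %u", indexId);
        indexForm = (Form_pg_index) GETSTRUCT(indexTuple);

        indexForm->indisready = true;

        /* overwrite in place instead of doing a transactional update */
        heap_inplace_update(pg_index, indexTuple);

        heap_close(pg_index, RowExclusiveLock);
    }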
-
- 28 Nov, 2012 2 commits
-
-
Alvaro Herrera authored
This is necessary (but not sufficient) to have them compilable outside of a backend environment.
-
Heikki Linnakangas authored
This was apparently a typo, which caused recovery to think that it immediately reached the end of backup, and allowed the database to start up too early. Reported by Jeff Janes. Backpatch to 9.2, where this code was introduced.
-
- 27 Nov, 2012 2 commits
-
-
Tom Lane authored
Needed to silence C++ errors, per report from Peter Eisentraut. Andres Freund
-
Heikki Linnakangas authored
Files opened with BasicOpenFile or PathNameOpenFile are not automatically cleaned up on error. That puts an unnecessary burden on callers that only want to keep the file open for a short time. There is AllocateFile, but that returns a buffered FILE * stream, which in many cases is not the nicest API to work with. So add a function called OpenTransientFile, which returns an unbuffered fd that's cleaned up like the FILE * returned by AllocateFile().

This plugs a few rare fd leaks in error cases:

1. copy_file() - fixed by using OpenTransientFile instead of BasicOpenFile

2. XLogFileInit() - fixed by adding close() calls to the error cases. Can't use OpenTransientFile here because the fd is supposed to persist over transaction boundaries.

3. lo_import/lo_export - fixed by using OpenTransientFile instead of PathNameOpenFile.

In addition to plugging those leaks, this replaces many BasicOpenFile() calls with OpenTransientFile() that were not leaking, because the code meticulously closed the file on error. That wasn't strictly necessary, but IMHO it's good for robustness.

The same leaks exist in older versions, but given the rarity of the issues, I'm not backpatching this. Not yet, anyway - it might be good to backpatch later, after this mechanism has had some more testing in the master branch.
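A minimal usage sketch, assuming the three-argument OpenTransientFile(name, flags, mode) interface introduced here; the path and error text are illustrative:

    #include "postgres.h"

    #include <fcntl.h>

    #include "storage/fd.h"

    /*
     * Hedged sketch: read a small state file without leaking the fd on
     * error.  If we elog(ERROR) while the file is open, the fd is closed
     * automatically at transaction abort, like AllocateFile's FILE *.
     */
    static void
    read_state_file(char *path)
    {
        int     fd;

        fd = OpenTransientFile(path, O_RDONLY | PG_BINARY, 0);
        if (fd < 0)
            ereport(ERROR,
                    (errcode_for_file_access(),
                     errmsg("could not open file \"%s\": %m", path)));

        /* ... read from fd ... */

        CloseTransientFile(fd);
    }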
-
- 26 Nov, 2012 2 commits
-
-
Tom Lane authored
This reverts commit d573e239, "Take fewer snapshots". While that seemed like a good idea at the time, it caused execution to use a snapshot that had been acquired before locking any of the tables mentioned in the query. This created user-visible anomalies that were not present in any prior release of Postgres, as reported by Tomas Vondra. While this whole area could do with a redesign (since there are related cases that have anomalies anyway), it doesn't seem likely that any future patch would be reasonably back-patchable; and we don't want 9.2 to exhibit a behavior that's subtly unlike either past or future releases. Hence, revert to prior code while we rethink the problem.
-
Tom Lane authored
In a query such as "SELECT DISTINCT min(x) FROM tab", the DISTINCT is pretty useless (there being only one output row), but nonetheless it shouldn't fail. But it could fail if "tab" is an inheritance parent, because planagg.c's code for fixing up equivalence classes after making the index-optimized MIN/MAX transformation wasn't prepared to find child-table versions of the aggregate expression.

The least ugly fix seems to be to add an option to mutate_eclass_expressions() to skip child-table equivalence class members, which aren't used anymore at this stage of planning so it's not really necessary to fix them. Since child members are ignored in many cases already, it seems plausible for mutate_eclass_expressions() to have an option to ignore them too.

Per bug #7703 from Maxim Boguk. Back-patch to 9.1. Although the same code exists before that, it cannot encounter child-table aggregates AFAICS, because the index optimization transformation cannot succeed on inheritance trees before 9.1 (for lack of MergeAppend).
-
- 25 Nov, 2012 2 commits
-
-
Bruce Momjian authored
centralizing error/shutdown code.
-
Bruce Momjian authored
pg_malloc/pg_free.
-
- 23 Nov, 2012 2 commits
-
-
Peter Eisentraut authored
It creates too many warnings with GCC 4.3 and 4.4.
-
Michael Meskes authored
with very long structs.
-
- 22 Nov, 2012 2 commits
-
-
Tom Lane authored
Since we've already chdir'd into the data directory, the file should be referenced as just "postmaster.pid", without prefixing the directory path. This is harmless in the normal case where an absolute PGDATA path is used, but quite dangerous if a relative path is specified, since the program might then fail to notice an active postmaster. Reported by Hari Babu. This got broken in my commit eb5949d1, so patch all active versions.
-
Heikki Linnakangas authored
When the startup process opens a WAL segment after replaying part of it, it validates the first page on the WAL segment, even though the page it's really interested in is later in the file. As part of the validation, it checks that the TLI on the page header is >= the TLI it saw on the last page it read. If the segment contains a timeline switch, and we have already replayed it, and then re-open the WAL segment (because streaming replication got disconnected and reconnected, for example), the TLI check will fail when the first page is validated. Fix that by relaxing the TLI check when re-opening a WAL segment.

Backpatch to 9.0. Earlier versions had the same code, but before standby mode was introduced in 9.0, recovery never tried to re-read a segment after partially replaying it. Reported by Amit Kapila, while testing a new feature.
-
- 21 Nov, 2012 2 commits
-
-
Tom Lane authored
Once we've received a shutdown signal (SIGINT or SIGTERM), we should not launch any more child processes, even if we get signals requesting such. The normal code path for spawning backends has always understood that, but the postmaster's infrastructure for hot standby and autovacuum didn't get the memo. As reported by Hari Babu in bug #7643, this could lead to failure to shut down at all in some cases, such as when SIGINT is received just before the startup process sends PMSIGNAL_RECOVERY_STARTED: we'd launch a bgwriter and checkpointer, and then those processes would have no idea that they ought to quit. Similarly, launching a new autovacuum worker would result in waiting till it finished before shutting down.

Also, switch the order of the code blocks in reaper() that detect startup process crash versus shutdown termination. Once we've sent it a signal, we should not consider that exit(1) is surprising. This is just a cosmetic fix since shutdown occurs correctly anyway, but better not to log a phony complaint about startup process crash.

Back-patch to 9.0. Some parts of this might be applicable before that, but given the lack of prior complaints I'm not going to worry too much about older branches.
-
Heikki Linnakangas authored
In many functions, a NumericVar was initialized from an input Numeric, to be passed as input to a calculation function. When the NumericVar is not modified, the digits array of the NumericVar can point directly to the digits array in the original Numeric, and we can avoid a palloc() and memcpy(). Add an init_var_from_num() function to initialize a var like that.

Remove the dscale argument from get_str_from_var(), as all the callers just passed the dscale of the variable. That means that the rounding it used to do was not actually necessary, and get_str_from_var() no longer scribbles on its input. That makes it safer in general, and allows us to use the new init_var_from_num() function in e.g. numeric_out().

Also modified numericvar_to_int8() to not scribble on its input either. It creates a temporary copy to avoid that. To compensate, the callers no longer need to create a temporary copy, so the net # of pallocs is the same, but this is nicer.

In passing, use a constant for the number 10 in get_str_from_var_sci(), when calculating 10^exponent. Saves a palloc() and some cycles to convert integer 10 to numeric.

Original patch by Kyotaro HORIGUCHI, with further changes by me. Reviewed by Pavel Stehule.
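A sketch of what such an initializer looks like; the field and macro names follow numeric.c's internal NumericVar conventions, and the body should be read as illustrative rather than authoritative:

    /*
     * Illustrative: point a read-only NumericVar directly at the digit
     * buffer inside an existing Numeric, avoiding a palloc() and memcpy().
     * The var must not be modified, since it does not own the digits.
     */
    static void
    init_var_from_num(Numeric num, NumericVar *dest)
    {
        dest->ndigits = NUMERIC_NDIGITS(num);
        dest->weight = NUMERIC_WEIGHT(num);
        dest->sign = NUMERIC_SIGN(num);
        dest->dscale = NUMERIC_DSCALE(num);
        dest->digits = NUMERIC_DIGITS(num);
        dest->buf = NULL;       /* digits array is not palloc'd */
    }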
-
- 19 Nov, 2012 3 commits
-
-
Bruce Momjian authored
error and errno != ENOENT.
-
Tom Lane authored
Some platforms throw an exception for this division, rather than returning a necessarily-overflowed result. Since we were testing for overflow after the fact, an exception isn't nice. We can avoid the problem by treating division by -1 as negation.

Add some regression tests so that we'll find out if any compilers try to optimize away the overflow check conditions.

This ought to be back-patched, but I'm going to see what the buildfarm reports about the regression tests first. Per discussion with Xi Wang, though this is different from the patch he submitted.
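A self-contained sketch of the idea with illustrative names (not the backend's int4div code): a divisor of -1 is handled as a negation with an explicit overflow check, so the hardware never evaluates INT_MIN / -1:

    #include <limits.h>
    #include <stdio.h>

    /* Illustrative: divide, treating division by -1 as negation so the
     * CPU never executes INT_MIN / -1, which traps on some platforms. */
    static int
    safe_div(int arg1, int arg2, int *overflow)
    {
        *overflow = 0;
        if (arg2 == -1)
        {
            if (arg1 == INT_MIN)    /* -INT_MIN does not fit in int */
            {
                *overflow = 1;
                return 0;
            }
            return -arg1;
        }
        return arg1 / arg2;         /* caller rejects arg2 == 0 separately */
    }

    int
    main(void)
    {
        int     ovf;
        int     r = safe_div(INT_MIN, -1, &ovf);

        printf("overflow=%d result=%d\n", ovf, r);
        return 0;
    }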
-
Heikki Linnakangas authored
When I moved ExecuteRecoveryCommand() from xlog.c to xlogarchive.c, I didn't realize that it's called from the checkpoint process, not the startup process. I tried to use the InRedo variable to decide whether or not to attempt cleaning up the archive (we must not do so before we have read the initial checkpoint record), but that variable is only valid within the startup process.

Instead, let ExecuteRecoveryCommand() always clean up the archive, and add an explicit argument to RestoreArchivedFile() to say whether that's allowed or not. The caller knows better.

Reported by Erik Rijkers, diagnosis by Fujii Masao. Only 9.3devel is affected.
-
- 18 Nov, 2012 3 commits
-
-
Tom Lane authored
The previous definitions of these GUC variables allowed them to range up to INT_MAX, but in point of fact the underlying code would suffer overflows or other errors with large values. Reduce the maximum values to something that won't misbehave. There's no apparent value in working harder than this, since very large delays aren't sensible for any of these. (Note: the risk with archive_timeout is that if we're late checking the state, the timestamp difference it's being compared to might overflow. So we need some amount of slop; the choice of INT_MAX/2 is arbitrary.) Per followup investigation of bug #7670. Although this isn't a very significant fix, might as well back-patch.
-
Tom Lane authored
We need to avoid calling WaitLatch with timeouts exceeding INT_MAX. Fortunately a simple clamp will do the trick, since no harm is done if the wait times out before it's really time to rotate the log file. Per bug #7670 (probably bug #7545 is the same thing, too).

In passing, fix bogus definition of log_rotation_age's maximum value in guc.c --- it was numerically right, but only because MINS_PER_HOUR and SECS_PER_MINUTE have the same value.

Back-patch to 9.2. Before that, syslogger wasn't using WaitLatch.
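A sketch of the clamp, assuming the 9.2-era WaitLatch(latch, wakeEvents, timeout-in-milliseconds) interface; the wrapper and latch pointer are placeholders, not the actual syslogger code:

    #include "postgres.h"
    #include "storage/latch.h"

    /* Hypothetical wrapper: wait for a possibly huge delay, clamped so
     * WaitLatch() never sees a timeout above INT_MAX milliseconds. */
    static void
    wait_with_clamp(Latch *latch, long timeout_ms)
    {
        if (timeout_ms > INT_MAX)
            timeout_ms = INT_MAX;

        (void) WaitLatch(latch, WL_LATCH_SET | WL_TIMEOUT, timeout_ms);
    }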
-
Tom Lane authored
The behavior with larger values is unspecified by the Single Unix Spec. It appears that BSD-derived kernels report EINVAL, although Linux does not. If waiting for longer intervals is desired, the calling code has to do something to limit the delay; we can't portably fix it here since "long" may not be any wider than "int" in the first place. Part of response to bug #7670, though this change doesn't fix that (in fact, it converts the problem from an ERROR into an Assert failure). No back-patch since it's just an assertion addition.
-
- 17 Nov, 2012 1 commit
-
-
Peter Eisentraut authored
Karl O. Pinc
-
- 16 Nov, 2012 2 commits
-
-
Peter Eisentraut authored
-
Tom Lane authored
Traditionally check_partial_indexes() has only looked at restriction clauses while trying to prove partial indexes usable in queries. However, join clauses can also be used in some cases; mainly, that a strict operator on "x" proves an "x IS NOT NULL" index predicate, even if the operator is in a join clause rather than a restriction clause.

Adding this code fixes a regression in 9.2, because previously we would take join clauses into account when considering whether a partial index could be used in a nestloop inner indexscan path. 9.2 doesn't handle nestloop inner indexscans in the same way, and this consideration was overlooked in the rewrite.

Moving the work to check_partial_indexes() is a better solution anyway, since the proof applies whether or not we actually use the index in that particular way, and we don't have to do it over again for each possible outer relation.

Per report from Dave Cramer.
-
- 15 Nov, 2012 2 commits
-
-
Peter Eisentraut authored
Karl O. Pinc
-
Bruce Momjian authored
-
- 14 Nov, 2012 4 commits
-
-
Bruce Momjian authored
existence via open(), rather than collecting a directory listing and looking up matching relfilenode files with sequential scans of the array. This speeds up pg_upgrade by 2x for a large number of tables, e.g. 16k. Per observation by Ants Aasma.
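A minimal sketch of the existence probe described above (plain C; the real pg_upgrade code constructs the relfilenode path itself):

    #include <fcntl.h>
    #include <stdbool.h>
    #include <unistd.h>

    /* Illustrative: probe for a relation data file directly instead of
     * scanning a previously collected directory listing. */
    static bool
    file_exists(const char *path)
    {
        int     fd = open(path, O_RDONLY, 0);

        if (fd < 0)
            return false;       /* typically errno == ENOENT */
        close(fd);
        return true;
    }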
-
Tom Lane authored
The correct answer for this (or any other case with arg2 = -1) is zero, but some machines throw a floating-point exception instead of behaving sanely. Commit f9ac414c dealt with this in int4mod, but overlooked the fact that it also happens in int8mod (at least on my Linux x86_64 machine).

Protect int2mod as well; it's not clear whether any machines fail there (mine does not) but since the test is so cheap it seems better safe than sorry.

While at it, simplify the original guard in int4mod: we need only check for arg2 == -1, we don't need to check arg1 explicitly.

Xi Wang, with some editing by me.
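A self-contained sketch of the guard with illustrative names (not the backend's int8mod): a divisor of -1 is answered directly, since the correct result is always zero and evaluating INT64_MIN % -1 can trap:

    #include <stdint.h>

    /* Illustrative: modulo that special-cases arg2 == -1 to avoid a
     * hardware exception on INT64_MIN % -1. */
    static int64_t
    safe_mod(int64_t arg1, int64_t arg2)
    {
        if (arg2 == -1)
            return 0;
        return arg1 % arg2;     /* caller rejects arg2 == 0 separately */
    }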
-
Bruce Momjian authored
-
Bruce Momjian authored
-
- 13 Nov, 2012 6 commits
-
-
Tom Lane authored
record_out() leaks memory: it fails to free the strings returned by the per-column output functions, and also is careless about detoasted values. This results in a query-lifespan memory leakage when returning composite values to the client, because printtup() runs the output functions in the query-lifespan memory context. Fix it to handle these issues the same way printtup() does. Also fix a similar leakage in record_send().

(At some point we might want to try to run output functions in shorter-lived memory contexts, so that we don't need a zero-leakage policy for them. But that would be a significantly more invasive patch, which doesn't seem like material for back-patching.)

In passing, use appendStringInfoCharMacro instead of appendStringInfoChar in the innermost data-copying loop of record_out, to try to shave a few cycles from this function's runtime.

Per trouble report from Carlos Henrique Reimer. Back-patch to all supported versions.
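A hedged fragment showing the per-column cleanup pattern the fix follows, mirroring printtup(); the variable names are placeholders and this is not the committed record_out() loop:

    /*
     * Illustrative per-column cleanup: free the output function's result
     * string, and free any detoasted copy of the attribute.
     */
    Datum   attr = values[i];
    Datum   val;
    char   *outstr;

    if (typisvarlena)
        val = PointerGetDatum(PG_DETOAST_DATUM(attr));  /* may palloc a copy */
    else
        val = attr;

    outstr = OutputFunctionCall(&column_info->proc, val);
    appendStringInfoString(&buf, outstr);

    pfree(outstr);                              /* free output string */
    if (DatumGetPointer(val) != DatumGetPointer(attr))
        pfree(DatumGetPointer(val));            /* free detoasted copy */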
-
Simon Riggs authored
At commit all standby locks are released for the top-level transaction, so searching for locks for each subtransaction is both pointless and costly (N^2) in the presence of many AccessExclusiveLocks.
-
Simon Riggs authored
Andres Freund and Simon Riggs
-
Peter Eisentraut authored
Karl O. Pinc
-
Tom Lane authored
Most of the replay functions for WAL record types that modify more than one page failed to ensure that those pages were locked correctly to ensure that concurrent queries could not see inconsistent page states. This is a hangover from coding decisions made long before Hot Standby was added, when it was hardly necessary to acquire buffer locks during WAL replay at all, let alone hold them for carefully-chosen periods.

The key problem was that RestoreBkpBlocks was written to hold lock on each page restored from a full-page image for only as long as it took to update that page. This was guaranteed to break any WAL replay function in which there was any update-ordering constraint between pages, because even if the nominal order of the pages is the right one, any mixture of full-page and non-full-page updates in the same record would result in out-of-order updates. Moreover, it wouldn't work for situations where there's a requirement to maintain lock on one page while updating another. Failure to honor an update ordering constraint in this way is thought to be the cause of bug #7648 from Daniel Farina: what seems to have happened there is that a btree page being split was rewritten from a full-page image before the new right sibling page was written, and because lock on the original page was not maintained it was possible for hot standby queries to try to traverse the page's right-link to the not-yet-existing sibling page.

To fix, get rid of RestoreBkpBlocks as such, and instead create a new function RestoreBackupBlock that restores just one full-page image at a time. This function can be invoked by WAL replay functions at the points where they would otherwise perform non-full-page updates; in this way, the physical order of page updates remains the same no matter which pages are replaced by full-page images. We can then further adjust the logic in individual replay functions if it is necessary to hold buffer locks for overlapping periods. A side benefit is that we can simplify the handling of concurrency conflict resolution by moving that code into the record-type-specific functions; there's no more need to contort the code layout to keep conflict resolution in front of the RestoreBkpBlocks call.

In connection with that, standardize on zero-based numbering rather than one-based numbering for referencing the full-page images. In HEAD, I removed the macros XLR_BKP_BLOCK_1 through XLR_BKP_BLOCK_4. They are still there in the header files in previous branches, but are no longer used by the code.

In addition, fix some other bugs identified in the course of making these changes: spgRedoAddNode could fail to update the parent downlink at all, if the parent tuple is in the same page as either the old or new split tuple and we're not doing a full-page image: it would get fooled by the LSN having been advanced already. This would result in permanent index corruption, not just transient failure of concurrent queries. Also, ginHeapTupleFastInsert's "merge lists" case failed to mark the old tail page as a candidate for a full-page image; in the worst case this could result in torn-page corruption. heap_xlog_freeze() was inconsistent about using a cleanup lock or plain exclusive lock: it did the former in the normal path but the latter for a full-page image. A plain exclusive lock seems sufficient, so change to that. Also, remove gistRedoPageDeleteRecord(), which has been dead code since VACUUM FULL was rewritten.

Back-patch to 9.0, where hot standby was introduced. Note however that 9.0 had a significantly different WAL-logging scheme for GIST index updates, and it doesn't appear possible to make that scheme safe for concurrent hot standby queries, because it can leave inconsistent states in the index even between WAL records. Given the lack of complaints from the field, we won't work too hard on fixing that branch.
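A hedged skeleton of the redo-function pattern this implies; the XLR_BKP_BLOCK(n) test follows the commit's description of zero-based full-page-image references, but the function itself is illustrative:

    /*
     * Illustrative redo skeleton: restore the full-page image in-line, at
     * the same point where the incremental update would otherwise happen,
     * so page updates keep their original physical order.
     */
    static void
    example_redo(XLogRecPtr lsn, XLogRecord *record)
    {
        if (record->xl_info & XLR_BKP_BLOCK(0))
        {
            /* block 0 was logged as a full-page image; replay it here */
            (void) RestoreBackupBlock(lsn, record, 0, false, false);
        }
        else
        {
            /* ... perform the normal incremental update of block 0 ... */
        }
    }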
-
Peter Eisentraut authored
This way it works more like the DSSSL build, and dependencies are tracked better by make. Also copy the CSS stylesheet to the html directory. This was forgotten when the output directory was changed.
-
- 12 Nov, 2012 2 commits
-
-
Heikki Linnakangas authored
As pointed out by Alvaro.
-
Heikki Linnakangas authored
errcontext() is typically used in an error context callback function, not within an ereport() invocation like e.g. errmsg and errdetail are. That means that the message domain that the TEXTDOMAIN magic in ereport() determines is not the right one for the errcontext() calls. The message domain needs to be determined by the C file containing the errcontext() call, not the file containing the ereport() call.

Fix by turning errcontext() into a macro that passes the TEXTDOMAIN to use for the errcontext message. "errcontext" was used in a few places as a variable or struct field name; I had to rename those out of the way, now that errcontext is a macro.

We've had this problem all along, but it doesn't seem worth backporting. It's a fairly minor issue, and turning errcontext from a function to a macro requires at least a recompile of any external code that calls errcontext().
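A hedged sketch of the macro shape this describes; the helper names are assumptions modeled on elog.h conventions, not confirmed by the message:

    /*
     * Illustrative: expanding errcontext(...) first records the TEXTDOMAIN
     * of the file containing the call, then formats the message, so the
     * context string is translated in the correct message domain.
     */
    #define errcontext  set_errcontext_domain(TEXTDOMAIN), errcontext_msg

    extern int  set_errcontext_domain(const char *domain);
    extern int  errcontext_msg(const char *fmt, ...);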
-