- 18 Nov, 2012 3 commits
-
-
Tom Lane authored
The previous definitions of these GUC variables allowed them to range up to INT_MAX, but in point of fact the underlying code would suffer overflows or other errors with large values. Reduce the maximum values to something that won't misbehave. There's no apparent value in working harder than this, since very large delays aren't sensible for any of these.

(Note: the risk with archive_timeout is that if we're late checking the state, the timestamp difference it's being compared to might overflow. So we need some amount of slop; the choice of INT_MAX/2 is arbitrary.)

Per followup investigation of bug #7670. Although this isn't a very significant fix, might as well back-patch.
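A sketch of the resulting shape of such a guc.c entry (the field layout follows guc.c's config_int convention, but the entry text here is reconstructed from memory, not verbatim):

```c
/* guc.c-style int GUC entry, with the upper bound clamped to INT_MAX/2
 * so that later timestamp-difference arithmetic has slop to spare */
{
    {"archive_timeout", PGC_SIGHUP, WAL_SETTINGS,
        gettext_noop("Forces a switch to the next xlog file if a "
                     "new file has not been started within N seconds."),
        NULL,
        GUC_UNIT_S
    },
    &XLogArchiveTimeout,
    0, 0, INT_MAX / 2,          /* boot value, min, max */
    NULL, NULL, NULL
},
```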
-
Tom Lane authored
We need to avoid calling WaitLatch with timeouts exceeding INT_MAX. Fortunately a simple clamp will do the trick, since no harm is done if the wait times out before it's really time to rotate the log file. Per bug #7670 (probably bug #7545 is the same thing, too).

In passing, fix bogus definition of log_rotation_age's maximum value in guc.c --- it was numerically right, but only because MINS_PER_HOUR and SECS_PER_MINUTE have the same value.

Back-patch to 9.2. Before that, syslogger wasn't using WaitLatch.
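A minimal sketch of the clamp (variable names are illustrative rather than verbatim from syslogger.c); waking up early is harmless because the loop simply rechecks whether rotation is due:

```c
long    cur_timeout = next_rotation_ms;     /* may exceed INT_MAX */

/* WaitLatch must not be asked to wait longer than INT_MAX msec */
if (cur_timeout > (long) INT_MAX)
    cur_timeout = INT_MAX;

(void) WaitLatch(&sysLoggerLatch,
                 WL_LATCH_SET | WL_TIMEOUT,
                 cur_timeout);
```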
-
Tom Lane authored
The behavior with larger values is unspecified by the Single Unix Spec. It appears that BSD-derived kernels report EINVAL, although Linux does not. If waiting for longer intervals is desired, the calling code has to do something to limit the delay; we can't portably fix it here since "long" may not be any wider than "int" in the first place. Part of response to bug #7670, though this change doesn't fix that (in fact, it converts the problem from an ERROR into an Assert failure). No back-patch since it's just an assertion addition.
-
- 17 Nov, 2012 1 commit
-
-
Peter Eisentraut authored
Karl O. Pinc
-
- 16 Nov, 2012 2 commits
-
-
Peter Eisentraut authored
-
Tom Lane authored
Traditionally check_partial_indexes() has only looked at restriction clauses while trying to prove partial indexes usable in queries. However, join clauses can also be used in some cases; mainly, that a strict operator on "x" proves an "x IS NOT NULL" index predicate, even if the operator is in a join clause rather than a restriction clause.

Adding this code fixes a regression in 9.2, because previously we would take join clauses into account when considering whether a partial index could be used in a nestloop inner indexscan path. 9.2 doesn't handle nestloop inner indexscans in the same way, and this consideration was overlooked in the rewrite.

Moving the work to check_partial_indexes() is a better solution anyway, since the proof applies whether or not we actually use the index in that particular way, and we don't have to do it over again for each possible outer relation.

Per report from Dave Cramer.
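The core of the idea, sketched (simplified; `usable_join_clauses` is a hypothetical list standing in for whatever join clauses the real code deems safe to use): test the index predicate against the restriction clauses plus the applicable join clauses.

```c
List   *clauselist;

/* start from the relation's restriction clauses ... */
clauselist = list_copy(rel->baserestrictinfo);
/* ... and add join clauses, which can also prove e.g. "x IS NOT NULL" */
clauselist = list_concat(clauselist, usable_join_clauses);

index->predOK = predicate_implied_by(index->indpred, clauselist);
```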
-
- 15 Nov, 2012 2 commits
-
-
Peter Eisentraut authored
Karl O. Pinc
-
Bruce Momjian authored
-
- 14 Nov, 2012 4 commits
-
-
Bruce Momjian authored
In pg_upgrade, check the existence of relfilenode files directly via open(), rather than collecting a directory listing and looking up matching relfilenode files with sequential scans of the array. This speeds up pg_upgrade by 2x for a large number of tables, e.g. 16k. Per observation by Ants Aasma.
-
Tom Lane authored
The correct answer for this (or any other case with arg2 = -1) is zero, but some machines throw a floating-point exception instead of behaving sanely. Commit f9ac414c dealt with this in int4mod, but overlooked the fact that it also happens in int8mod (at least on my Linux x86_64 machine). Protect int2mod as well; it's not clear whether any machines fail there (mine does not) but since the test is so cheap it seems better safe than sorry.

While at it, simplify the original guard in int4mod: we need only check for arg2 == -1, we don't need to check arg1 explicitly.

Xi Wang, with some editing by me.
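Roughly the shape of the fix in int8mod (a simplified sketch of the function in src/backend/utils/adt/int8.c):

```c
#include "postgres.h"
#include "fmgr.h"

Datum
int8mod(PG_FUNCTION_ARGS)
{
    int64   arg1 = PG_GETARG_INT64(0);
    int64   arg2 = PG_GETARG_INT64(1);

    if (arg2 == 0)
        ereport(ERROR,
                (errcode(ERRCODE_DIVISION_BY_ZERO),
                 errmsg("division by zero")));

    /*
     * Some machines throw a floating-point exception for INT64_MIN % -1;
     * the correct answer for any arg2 == -1 is zero, so short-circuit it.
     */
    if (arg2 == -1)
        PG_RETURN_INT64(0);

    PG_RETURN_INT64(arg1 % arg2);
}
```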
-
Bruce Momjian authored
-
Bruce Momjian authored
-
- 13 Nov, 2012 6 commits
-
-
Tom Lane authored
record_out() leaks memory: it fails to free the strings returned by the per-column output functions, and also is careless about detoasted values. This results in a query-lifespan memory leakage when returning composite values to the client, because printtup() runs the output functions in the query-lifespan memory context. Fix it to handle these issues the same way printtup() does. Also fix a similar leakage in record_send().

(At some point we might want to try to run output functions in shorter-lived memory contexts, so that we don't need a zero-leakage policy for them. But that would be a significantly more invasive patch, which doesn't seem like material for back-patching.)

In passing, use appendStringInfoCharMacro instead of appendStringInfoChar in the innermost data-copying loop of record_out, to try to shave a few cycles from this function's runtime.

Per trouble report from Carlos Henrique Reimer. Back-patch to all supported versions.
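A sketch of the per-column cleanup the fix adds (simplified; variable names are illustrative, following printtup()'s pattern):

```c
/* attr may be a detoasted copy of origattr */
char   *value = OutputFunctionCall(&column_info->proc, attr);

/* ... copy value into the output buffer ... */

/* free the output function's result string */
pfree(value);

/* and free the detoasted copy, if detoasting made one */
if (DatumGetPointer(attr) != DatumGetPointer(origattr))
    pfree(DatumGetPointer(attr));
```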
-
Simon Riggs authored
At commit all standby locks are released for the top-level transaction, so searching for locks for each subtransaction is both pointless and costly (O(N^2)) in the presence of many AccessExclusiveLocks.
-
Simon Riggs authored
Andres Freund and Simon Riggs
-
Peter Eisentraut authored
Karl O. Pinc
-
Tom Lane authored
Most of the replay functions for WAL record types that modify more than one page failed to ensure that those pages were locked correctly to ensure that concurrent queries could not see inconsistent page states. This is a hangover from coding decisions made long before Hot Standby was added, when it was hardly necessary to acquire buffer locks during WAL replay at all, let alone hold them for carefully-chosen periods.

The key problem was that RestoreBkpBlocks was written to hold lock on each page restored from a full-page image for only as long as it took to update that page. This was guaranteed to break any WAL replay function in which there was any update-ordering constraint between pages, because even if the nominal order of the pages is the right one, any mixture of full-page and non-full-page updates in the same record would result in out-of-order updates. Moreover, it wouldn't work for situations where there's a requirement to maintain lock on one page while updating another.

Failure to honor an update ordering constraint in this way is thought to be the cause of bug #7648 from Daniel Farina: what seems to have happened there is that a btree page being split was rewritten from a full-page image before the new right sibling page was written, and because lock on the original page was not maintained it was possible for hot standby queries to try to traverse the page's right-link to the not-yet-existing sibling page.

To fix, get rid of RestoreBkpBlocks as such, and instead create a new function RestoreBackupBlock that restores just one full-page image at a time. This function can be invoked by WAL replay functions at the points where they would otherwise perform non-full-page updates; in this way, the physical order of page updates remains the same no matter which pages are replaced by full-page images. We can then further adjust the logic in individual replay functions if it is necessary to hold buffer locks for overlapping periods. A side benefit is that we can simplify the handling of concurrency conflict resolution by moving that code into the record-type-specific functions; there's no more need to contort the code layout to keep conflict resolution in front of the RestoreBkpBlocks call.

In connection with that, standardize on zero-based numbering rather than one-based numbering for referencing the full-page images. In HEAD, I removed the macros XLR_BKP_BLOCK_1 through XLR_BKP_BLOCK_4. They are still there in the header files in previous branches, but are no longer used by the code.

In addition, fix some other bugs identified in the course of making these changes: spgRedoAddNode could fail to update the parent downlink at all, if the parent tuple is in the same page as either the old or new split tuple and we're not doing a full-page image: it would get fooled by the LSN having been advanced already. This would result in permanent index corruption, not just transient failure of concurrent queries. Also, ginHeapTupleFastInsert's "merge lists" case failed to mark the old tail page as a candidate for a full-page image; in the worst case this could result in torn-page corruption. heap_xlog_freeze() was inconsistent about using a cleanup lock or plain exclusive lock: it did the former in the normal path but the latter for a full-page image. A plain exclusive lock seems sufficient, so change to that. Also, remove gistRedoPageDeleteRecord(), which has been dead code since VACUUM FULL was rewritten.

Back-patch to 9.0, where hot standby was introduced. Note however that 9.0 had a significantly different WAL-logging scheme for GIST index updates, and it doesn't appear possible to make that scheme safe for concurrent hot standby queries, because it can leave inconsistent states in the index even between WAL records. Given the lack of complaints from the field, we won't work too hard on fixing that branch.
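The new replay-function pattern, sketched (flag arguments simplified): each backup block is restored at the point where the equivalent non-full-page update would occur, so the physical update order is identical either way.

```c
if (record->xl_info & XLR_BKP_BLOCK(0))
{
    /* page was logged as a full-page image: restore it here, in order */
    (void) RestoreBackupBlock(lsn, record, 0, false, false);
}
else
{
    /*
     * Normal case: perform the incremental update, holding the buffer
     * lock for as long as the record's ordering constraints require.
     */
}
```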
-
Peter Eisentraut authored
This way it works more like the DSSSL build, and dependencies are tracked better by make. Also copy the CSS stylesheet to the html directory. This was forgotten when the output directory was changed.
-
- 12 Nov, 2012 5 commits
-
-
Heikki Linnakangas authored
As pointed out by Alvaro.
-
Heikki Linnakangas authored
errcontext() is typically used in an error context callback function, not within an ereport() invocation like e.g. errmsg and errdetail are. That means that the message domain that the TEXTDOMAIN magic in ereport() determines is not the right one for the errcontext() calls. The message domain needs to be determined by the C file containing the errcontext() call, not the file containing the ereport() call.

Fix by turning errcontext() into a macro that passes the TEXTDOMAIN to use for the errcontext message. "errcontext" was used in a few places as a variable or struct field name; I had to rename those out of the way, now that errcontext is a macro.

We've had this problem all along, but it doesn't seem worth backporting. It's a fairly minor issue, and turning errcontext from a function to a macro requires at least a recompile of any external code that calls errcontext().
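The macro approach, sketched (helper names recalled from elog.h, so treat as an approximation): expanding errcontext at the call site captures that file's TEXTDOMAIN before calling the real message-formatting function.

```c
/* errcontext must now expand in the caller's file, so that TEXTDOMAIN
 * refers to the translation domain of that file, not elog.c's */
#define errcontext  set_errcontext_domain(TEXTDOMAIN), errcontext_msg
```

The comma expression records the caller's domain first; errcontext_msg() then formats the message the way the old errcontext() function did.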
-
Heikki Linnakangas authored
At least clang 3.1 generates those warnings. Prepend (void) to avoid them, like we have in AssertMacro.
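An illustrative example (the macro here is hypothetical, not the code actually touched): a macro whose expansion is an expression, invoked as a bare statement, makes clang 3.1 complain that the result is unused; casting to (void) silences it.

```c
/* A macro that expands to an expression; invoked as a statement, its
 * result is unused and clang 3.1 warns "expression result unused". */
#define AdvancePointer(p)       ((p) = (p) + 1, (p))

/* Prepending (void) discards the value explicitly, as AssertMacro does. */
#define AdvancePointerQuiet(p)  ((void) ((p) = (p) + 1, (p)))
```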
-
Peter Eisentraut authored
Karl O. Pinc
-
Tom Lane authored
Since transformSetOperationTree() recurses, it can be driven to stack overflow with enough UNION/INTERSECT/EXCEPT clauses in a query. Add a check to ensure it fails cleanly instead of crashing. Per report from Matthew Gerber (though it's not clear whether this is the only thing going wrong for him).

Historical note: I think the reasoning behind not putting a check here in the beginning was that the check in transformExpr() ought to be sufficient to guard the whole parser. However, because transformSetOperationTree() recurses all the way to the bottom of the set-operation tree before doing any analysis of the statement's expressions, that check doesn't save it.
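The guard itself is the backend's standard one-liner; a sketch of its placement (signature recalled from memory, body elided):

```c
static Node *
transformSetOperationTree(ParseState *pstate, SelectStmt *stmt,
                          bool isTopLevel, List **targetlist)
{
    /*
     * Guard against stack overflow from deeply nested set operations;
     * check_stack_depth() raises a clean ERROR instead of crashing.
     */
    check_stack_depth();

    /* ... recurse through the UNION/INTERSECT/EXCEPT tree ... */
}
```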
-
- 09 Nov, 2012 3 commits
-
-
Alvaro Herrera authored
This code was refactored in d5497b95 but an extra LWLockRelease call was left behind. Per report from Erik Rijkers.
-
Peter Eisentraut authored
Some versions of the XSLT stylesheets don't handle the missing slash correctly (they concatenate directory and file name without the slash). This might never have worked correctly.
-
Tom Lane authored
If the sleep is interrupted by a signal, we must recompute the remaining time to wait; otherwise, a steady stream of non-wait-terminating interrupts could delay return from WaitLatch indefinitely. This has been shown to be a problem for the autovacuum launcher, and there may well be other places now or in the future with similar issues. So we'd better make the function robust, even though this'll add at least one gettimeofday call per wait.

Back-patch to 9.2. We might eventually need to fix 9.1 as well, but the code is quite different there, and the usage of WaitLatch in 9.1 is so limited that it's not clearly important to do so.

Reported and diagnosed by Jeff Janes, though I rewrote his patch rather heavily.
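A sketch of the retry logic (simplified; the real latch code has more wake conditions and setup than shown here): on EINTR, subtract the elapsed time from the original timeout instead of restarting the full wait.

```c
instr_time      start_time, cur_time;
long            cur_timeout = timeout;  /* remaining wait, in msec */
struct timeval  tv;
int             rc;

INSTR_TIME_SET_CURRENT(start_time);

for (;;)
{
    tv.tv_sec = cur_timeout / 1000L;
    tv.tv_usec = (cur_timeout % 1000L) * 1000L;

    rc = select(hifd + 1, &input_mask, NULL, NULL, &tv);

    if (rc < 0 && errno == EINTR)
    {
        /* a signal ate part of the wait: recompute what remains */
        INSTR_TIME_SET_CURRENT(cur_time);
        INSTR_TIME_SUBTRACT(cur_time, start_time);
        cur_timeout = timeout - (long) INSTR_TIME_GET_MILLISEC(cur_time);
        if (cur_timeout <= 0)
            break;          /* the timeout elapsed during the interrupts */
        continue;           /* otherwise wait for the remainder */
    }
    break;                  /* latch set, socket ready, timeout, or error */
}
```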
-
- 08 Nov, 2012 3 commits
-
-
Tom Lane authored
This function currently lacks the option to throw error if the provided targetlist doesn't have any matching entry for a Var to be replaced. Two of the four existing call sites would be better off with an error, as would the usage in the pending auto-updatable-views patch, so it seems past time to extend the API to support that.

To do so, replace the "event" parameter (historically of type CmdType, though it was declared plain int) with a special-purpose enum type.

It's unclear whether this function might be called by third-party code. Since many C compilers wouldn't warn about a call site continuing to use the old calling convention, rename the function to forcibly break any such code that hasn't been updated. The old name was none too well chosen anyhow.
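The shape of the new option enum, sketched (names recalled from rewriteManip.h; treat as an approximation):

```c
/* What to do if there's no matching targetlist entry for a Var */
typedef enum ReplaceVarsNoMatchOption
{
    REPLACEVARS_REPORT_ERROR,       /* throw error */
    REPLACEVARS_CHANGE_VARNO,       /* re-point the Var at another rel */
    REPLACEVARS_SUBSTITUTE_NULL     /* replace the Var with a NULL Const */
} ReplaceVarsNoMatchOption;
```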
-
Tom Lane authored
The trigger and rule cases need to split up the input name list, but they mustn't corrupt the passed-in data structure, since it could be part of a cached utility-statement parsetree. Per bug #7641.
-
Heikki Linnakangas authored
Without this, the connection will be killed after timeout if wal_sender_timeout is set in the server. Original patch by Amit Kapila, modified by me to fit recent changes in the code.
-
- 07 Nov, 2012 5 commits
-
-
Tom Lane authored
Some platforms require including <netinet/in.h> and/or <arpa/inet.h> to use htonl() and ntohl(). Per build failure locally.
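A sketch of the include additions the message describes (which of the two headers is actually needed varies by platform):

```c
#include <netinet/in.h>     /* some platforms declare htonl()/ntohl() here */
#include <arpa/inet.h>      /* ... and/or here */
```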
-
Bruce Momjian authored
Per email from Craig Ringer.
-
Alvaro Herrera authored
Commit 4c9d0901 mistakenly introduced a call to TransferPredicateLocksToHeapRelation() on an index relation that had been closed a few lines above. Moving up an index_open() call that's below is enough to fix the problem. Discovered by me while testing an unrelated patch.
-
Bruce Momjian authored
logical replication upgrades. Backpatch to 9.2.
-
Heikki Linnakangas authored
We used to send structs wrapped in CopyData messages, which works as long as the client and server agree on things like endianness, timestamp format and alignment. That's good enough for running a standby server, which has to run on the same platform anyway, but tools like pg_receivexlog ought to be able to work across platforms.

This breaks protocol compatibility of streaming replication, but we never promised that to be compatible across versions, anyway.
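A sketch of the difference (simplified; the 'k' keepalive message type and the exact field order are recalled from the walsender code, so treat the details as approximate):

```c
/* before: byte-copy of a C struct; only safe between identical platforms */
pq_sendbytes(&buf, (char *) &keepalive_msg, sizeof(keepalive_msg));

/* after: each field encoded explicitly, in network byte order */
pq_sendbyte(&buf, 'k');                     /* message type */
pq_sendint64(&buf, sentPtr);                /* end of WAL sent so far */
pq_sendint64(&buf, GetCurrentTimestamp());  /* server's current time */
```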
-
- 06 Nov, 2012 1 commit
-
-
Bruce Momjian authored
improve performance when restoring the schema from the old cluster. Backpatch to 9.2.
-
- 05 Nov, 2012 1 commit
-
-
Tom Lane authored
This case got broken in 8.4 by the addition of an error check that complains if ALTER TABLE ONLY is used on a table that has children. We do use ONLY for this situation, but it's okay because the necessary recursion occurs at a higher level. So we need a separate flag to suppress recursion without triggering the error check.

Reported and patched by Pavan Deolasee, with some editorial adjustments by me. Back-patch to 8.4, since this is a regression of functionality that worked in earlier branches.
-
- 02 Nov, 2012 1 commit
-
-
Peter Eisentraut authored
-
- 01 Nov, 2012 3 commits
-
-
Tom Lane authored
In bug #7626, Brian Dunavant exposes a performance problem created by commit 3b8968f2: that commit attempted to consider *all* possible combinations of indexable join clauses, but if said clauses join to enough different relations, there's an exponential increase in the number of outer-relation sets considered.

In Brian's example, all the clauses come from the same equivalence class, which means it's redundant to use more than one of them in an indexscan anyway. So we can prevent the problem in this class of cases (which is probably the majority of real examples) by rejecting combinations that would only serve to add a known-redundant clause.

But that still leaves us exposed to exponential growth of planning time when the query has a lot of non-equivalence join clauses that are usable with the same index. I chose to prevent such cases by setting an upper limit on the number of relation sets considered, equal to ten times the number of index clauses considered so far. (This sliding limit still allows new relsets to be added on as we move to additional index columns, which is probably more important than considering even more combinations of clauses for the previous column.) This should keep the amount of work done roughly linear rather than exponential in the apparent query complexity.

This part of the fix is pretty ad-hoc; but without a clearer idea of real-world cases for which this would result in markedly inferior plans, it's hard to see how to do better.
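The sliding cap, sketched (simplified from the planner logic; variable names approximate the actual code):

```c
/*
 * Limit the number of outer-relation sets to ten times the number of
 * index clauses considered so far, keeping planning effort roughly
 * linear in apparent query complexity rather than exponential.
 */
if (list_length(*considered_relids) >= 10 * considered_clauses)
    return;     /* don't generate any more combinations */
```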
-
Peter Eisentraut authored
Several hacks in certain modes already thought this was a bad idea, so just disable it globally.