- 30 Dec, 2015 2 commits
-
-
Tom Lane authored
VACUUM can skip heap pages altogether when there's a run of consecutive pages that are all-visible according to the visibility map. This causes it to not update its nonempty_pages count, just as if those pages were empty, which means that at the end we will think they are candidates for deletion. Thus, we may take the table's AccessExclusive lock only to find that no pages are really truncatable. This usually causes no real problems on a master server, thanks to the lock being acquired only conditionally; but on hot-standby servers, the same lock must be acquired unconditionally which can result in unnecessary query cancellations.

To improve matters, force examination of the table's last page whenever we reach there with a nonempty_pages count that would allow a truncation attempt. If it's not empty, we'll advance nonempty_pages and thereby prevent the truncation attempt. If we are unable to acquire cleanup lock on that page, there's no need to force it, unless we're doing an anti-wraparound vacuum. We can just check for tuples with a shared buffer lock and then give up. (When we are doing an anti-wraparound vacuum, and decide it's okay to skip the page because it contains no freezable tuples, this patch still improves matters because nonempty_pages is properly updated, which it was not before.)

Since only the last page is special-cased in this way, we might attempt a truncation that will release many fewer pages than the normal heuristic would suggest; at worst, only one page would be truncated. But that seems all right, because the situation won't repeat during the next vacuum. The real problem with the old logic is that the useless truncation attempt happens every time we vacuum, so long as the state of the last few dozen pages doesn't change.

This is a longstanding deficiency, but since the consequences aren't very severe in most scenarios, I'm not going to risk a back-patch.

Jeff Janes and Tom Lane
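For illustration, a standalone C sketch of the truncation-eligibility test described above, modeled on vacuumlazy.c; the constant values and names here are assumptions, not the committed code:

    #include <stdbool.h>

    typedef unsigned int BlockNumber;

    #define REL_TRUNCATE_MINIMUM  1000
    #define REL_TRUNCATE_FRACTION 16

    /*
     * nonempty_pages is the first block VACUUM knows is not removable,
     * so everything from there to rel_pages is potentially truncatable.
     */
    static bool
    should_attempt_truncation(BlockNumber rel_pages, BlockNumber nonempty_pages)
    {
        BlockNumber possibly_freeable = rel_pages - nonempty_pages;

        return possibly_freeable > 0 &&
            (possibly_freeable >= REL_TRUNCATE_MINIMUM ||
             possibly_freeable >= rel_pages / REL_TRUNCATE_FRACTION);
    }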
-
Tom Lane authored
Improve markup, particularly of the table of functions; add or improve examples for some of the functions; wordsmith some of the function descriptions.
-
- 29 Dec, 2015 2 commits
-
-
Tom Lane authored
The rationale for the way targetlist processing is done wasn't clearly stated anywhere, and I for one had forgotten some of the details. Having just painfully re-learned them, add some breadcrumbs for the next person.
-
Tom Lane authored
Commit 6f8cb1e2 tried to centralize rewriteTargetView's copying of a target view's Query struct. However, it ignored the fact that the jointree->quals field was used twice. This only accidentally failed to fail immediately because the same ChangeVarNodes mutation is applied in both cases, so that we end up with logically identical expression trees for both uses (and, as the code stands, the second ChangeVarNodes call actually does nothing).

However, we end up linking *physically* identical expression trees into both an RTE's securityQuals list and the WithCheckOption list. That's pretty dangerous, mainly because prepsecurity.c is utterly cavalier about further munging such structures without copying them first.

There may be no live bug in HEAD as a consequence of the fact that we apply preprocess_expression in between here and prepsecurity.c, and that will make a copy of the tree anyway. Or it may just be that the regression tests happen to not trip over it. (I noticed this only because things fell over pretty badly when I tried to relocate the planner's call of expand_security_quals to before expression preprocessing.) In any case it's very fragile because if anyone tried to make the securityQuals and WithCheckOption trees diverge before we reach preprocess_expression, it would not work. The fact that the current code will preprocess securityQuals and WithCheckOptions lists at completely different times in different query levels does nothing to increase my trust that that can't happen.

In view of the fact that 9.5.0 is almost upon us and the aforesaid commit has seen exactly zero field testing, the prudent course is to make an extra copy of the quals so that the behavior is not different from what has been in the field during beta.
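The shape of the extra copy, as an illustration (not the literal diff; copyObject() is the usual deep-copy primitive for node trees, and quals_for_wco is an invented name):

    /* Give the second consumer of jointree->quals its own tree, so nothing
     * is physically shared between the RTE's securityQuals and the
     * WithCheckOption list. */
    quals_for_wco = copyObject(viewquery->jointree->quals);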
-
- 28 Dec, 2015 8 commits
-
-
Joe Conway authored
The variables newestCommitTs and oldestCommitTs sound as if they are timestamps, but in fact they are the transaction Ids that correspond to the newest and oldest timestamps rather than the actual timestamps. Rename these variables to reflect that they are actually xids: to wit, newestCommitTsXid and oldestCommitTsXid respectively. Also modify related code in a similar fashion, particularly the user-facing output emitted by pg_controldata and pg_resetxlog.

Complaint and patch by me, review by Tom Lane and Alvaro Herrera. Backpatch to 9.5, where these variables were first introduced.
-
Tom Lane authored
Commit 33bd250f could have done with some more review:

- Adjust coding so that compilers unfamiliar with elog/ereport don't complain about uninitialized values.
- Fix misuse of PG_GETARG_INT16 to retrieve arguments declared as "integer" at the SQL level. (This was evidently copied from cube_ll_coord and cube_ur_coord, but those were wrong too.)
- Fix non-style-guide-conforming error messages.
- Fix underparenthesized if statements, which pgindent would have made a hash of, and remove some unnecessary parens elsewhere.
- Run pgindent over new code.
- Revise documentation: repeated accretion of more operators without any rethinking of the text already there had left things in a bit of a mess. Merge all the cube operators into one table and adjust surrounding text appropriately.

David Rowley and Tom Lane
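To illustrate the argument-fetch bug class (a hedged sketch; cube_coord_example is a made-up function, not the contrib/cube code): a C function whose SQL-level declaration says "integer" must fetch the argument as int4.

    #include "postgres.h"
    #include "fmgr.h"

    PG_FUNCTION_INFO_V1(cube_coord_example);

    /* hypothetical function; its SQL-level declaration is f(cube, integer) */
    Datum
    cube_coord_example(PG_FUNCTION_ARGS)
    {
        int32   n = PG_GETARG_INT32(1);    /* correct: SQL "integer" is int4 */

        /* The buggy form fetched the same argument with PG_GETARG_INT16(),
         * reading the int4 Datum as if it were int2. */

        PG_RETURN_INT32(n);
    }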
-
Alvaro Herrera authored
Pointed out by Jeff Janes
-
Tom Lane authored
Common mathematical convention is that exponentiation associates right to left. We aren't going to change the parser for this, but we could note it in the operator's description. (It's already noted in the operator precedence/associativity table, but users might not look there.) Per bug #13829 from Henrik Pauli.
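A worked example of the difference, as a standalone C program using libm's pow():

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* right-associative (mathematical convention): 2^(3^3) = 2^27 */
        printf("2^(3^3) = %.0f\n", pow(2, pow(3, 3)));   /* 134217728 */

        /* left-associative (how the parser actually groups it): (2^3)^3 */
        printf("(2^3)^3 = %.0f\n", pow(pow(2, 3), 3));   /* 512 */

        return 0;
    }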
-
Tom Lane authored
As of commit d5563d7d, psql -c no longer implies -X, but not all of our regression testing scripts had gotten that memo. To ensure consistency of results across different developers, make sure that *all* invocations of psql in all scripts in our tree use -X, even where this is not what previously happened. Michael Paquier and Tom Lane
-
Alvaro Herrera authored
Reported by: Alain Laporte (#13836)
-
Tom Lane authored
Tone down an overly strong statement about which pseudo-types PLs are likely to allow. Add "event_trigger" to the list, as well as "pg_ddl_command" in 9.5/HEAD. Back-patch to 9.3 where event_trigger was added.
-
Alvaro Herrera authored
For some reason, we've been overlooking the fact that pg_receivexlog and pg_recvlogical are using wrong translation domains all along, so their output hasn't ever been translated. The right domain is pg_basebackup, not their own executable names. Noticed by Ioseph Kim, who's been working on the Korean translation. Backpatch pg_receivexlog to 9.2 and pg_recvlogical to 9.4.
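The shape of the fix, sketched (the exact call site is an assumption; set_pglocale_pgservice() and PG_TEXTDOMAIN() are the standard frontend localization idiom):

    #include "postgres_fe.h"

    int
    main(int argc, char **argv)
    {
        /*
         * Bind message lookup to the pg_basebackup text domain, where these
         * programs' translated strings actually live, instead of a domain
         * named after the executable (which has no catalog of its own).
         */
        set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_basebackup"));

        return 0;               /* the real program continues from here */
    }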
-
- 27 Dec, 2015 1 commit
-
-
Alvaro Herrera authored
Both Blowfish and DES implementations of crypt() can take an arbitrarily long time, depending on the number of rounds specified by the caller; make sure they can be interrupted.

Author: Andreas Karlsson
Reviewer: Jeff Janes

Backpatch to 9.1.
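A hedged sketch of the fix's shape (run_rounds and do_one_round are placeholders, not the real px-crypt code):

    #include "postgres.h"
    #include "miscadmin.h"

    extern void do_one_round(void *state);  /* placeholder: one DES/Blowfish round */

    static void
    run_rounds(long nrounds, void *state)
    {
        long    i;

        for (i = 0; i < nrounds; i++)
        {
            CHECK_FOR_INTERRUPTS();     /* let query cancel take effect */
            do_one_round(state);
        }
    }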
-
- 26 Dec, 2015 2 commits
-
-
Tom Lane authored
MergeAttributes() rejects cases where columns to be merged have the same type but different typmod, which is correct; but the error message it printed didn't show either typmod, which is unhelpful.

Changing this requires using format_type_with_typemod() in place of TypeNameToString(), which will have some minor side effects on the way some type names are printed, but on balance this is an improvement: the old code sometimes printed one type according to one set of rules and the other type according to the other set, which could be confusing in its own way.

Oddly, there were no regression test cases covering any of this behavior, so add some.

Complaint and fix by Amit Langote
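A sketch of what the improved message might look like (variable names are illustrative; format_type_with_typemod() renders the typmod, e.g. numeric(10,2), where TypeNameToString() did not):

    ereport(ERROR,
            (errcode(ERRCODE_DATATYPE_MISMATCH),
             errmsg("column \"%s\" has a type conflict", attname),
             errdetail("%s versus %s",
                       format_type_with_typemod(type1, typmod1),
                       format_type_with_typemod(type2, typmod2))));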
-
Tom Lane authored
brin_summarize_new_values() did not check that the passed OID was for an index at all, much less that it was a BRIN index, and would fail in obscure ways if it wasn't (possibly damaging data first?). It also lacked any permissions test; by analogy to VACUUM, we should only allow the table's owner to summarize. Noted by Jeff Janes, fix by Michael Paquier and me
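A hedged sketch of the kinds of checks involved (not the literal patch; the error text, lock level, and variable names are assumptions):

    Relation    indexRel;

    /* Verify that the OID is an index, and a BRIN index, before going on */
    indexRel = index_open(indexoid, ShareUpdateExclusiveLock);
    if (indexRel->rd_rel->relkind != RELKIND_INDEX ||
        indexRel->rd_rel->relam != BRIN_AM_OID)
        ereport(ERROR,
                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
                 errmsg("\"%s\" is not a BRIN index",
                        RelationGetRelationName(indexRel))));

    /* By analogy to VACUUM, only the table's owner may summarize */
    if (!pg_class_ownercheck(heapoid, GetUserId()))
        aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS,
                       get_rel_name(heapoid));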
-
- 25 Dec, 2015 2 commits
-
-
Fujii Masao authored
Add DATABASE, EVENT TRIGGER, FOREIGN TABLE, ROLE, and TABLESPACE to tab completion for SECURITY LABEL. Kyotaro Horiguchi
-
Teodor Sigaev authored
The previous coding assumed an overly pessimistic upper bound on similarity in GIN's consistent method.

Author: Fornaroli Christophe, with comments by me.
-
- 24 Dec, 2015 5 commits
- 23 Dec, 2015 6 commits
-
-
Tom Lane authored
This reverts most of commit 83dec5a7 in favor of having connectDatabase() store the possibly-reusable password in a static variable, similar to the coding we've had for a long time in pg_dump's version of that function. To avoid possible problems with unwanted password reuse, make callers specify whether it's reasonable to attempt to re-use the password. This is a wash for cases where re-use isn't needed, but it is far simpler for callers that do want that. Functionally there should be no difference.

Even though we're past RC1, it seems like a good idea to back-patch this into 9.5, like the prior commit. Otherwise, if there are any third-party users of connectDatabase(), they'll have to deal with an API change in 9.5 and then another one in 9.6.

Michael Paquier
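A hedged sketch of the arrangement described (the parameter list is heavily simplified; the real connectDatabase() takes several more arguments):

    #include <stdbool.h>
    #include <stdlib.h>
    #include "libpq-fe.h"

    static PGconn *
    connectDatabase(const char *dbname, const char *progname,
                    bool allow_password_reuse)
    {
        static char *password = NULL;   /* retained across calls */

        if (!allow_password_reuse && password)
        {
            free(password);
            password = NULL;            /* force a fresh prompt */
        }

        /* ... prompt for a password if needed, remembering it in
         * "password", then call PQconnectdbParams() as before ... */
        return NULL;                    /* elided in this sketch */
    }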
-
Tom Lane authored
When pg_dump prompts the user for a password, it remembers the password for possible re-use by parallel worker processes. However, libpq might have extracted the password from a connection string originally passed as "dbname". Since we don't record the original form of dbname but break it down to host/port/etc, the password gets lost. Fix that by retrieving the actual password from the PGconn.

(It strikes me that this whole approach is rather broken, as it will also lose other information such as options that might have been present in the connection string. But we'll leave that problem for another day.)

In passing, get rid of rather silly use of malloc() for small fixed-size arrays.

Back-patch to 9.3 where parallel pg_dump was introduced.

Report and fix by Zeus Kronion, adjusted a bit by Michael Paquier and me
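The core of the fix, sketched (PQconnectionUsedPassword() and PQpass() are real libpq calls; savedPassword is an illustrative variable):

    /* After the connection succeeds, capture the password libpq actually
     * used, even one that arrived embedded in a "dbname" conninfo string. */
    if (PQconnectionUsedPassword(conn))
        savedPassword = pg_strdup(PQpass(conn));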
-
Robert Haas authored
The original coding read tuples from workers in round-robin fashion, but performance testing shows that it works much better to read enough to empty one queue before moving on to the next. I believe the reason for this is that, with the old approach, we could easily wake up a worker repeatedly to write only one new tuple into the shm_mq each time. With this approach, by the time the process gets scheduled, it has a decent chance of being able to fill the entire buffer in one go. Patch by me. Dilip Kumar helped with performance testing.
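A standalone sketch contrasting the two strategies (try_read_tuple and process are placeholders for the shm_mq reader machinery):

    #include <stddef.h>

    extern void *try_read_tuple(int queue);    /* placeholder: NULL when empty */
    extern void  process(void *tup);           /* placeholder consumer */

    /* old: one tuple from each queue per pass, which wakes each worker
     * often, frequently for a single tuple's worth of free space */
    static void
    read_round_robin(int nqueues)
    {
        void   *tup;
        int     i;

        for (i = 0; i < nqueues; i++)
            if ((tup = try_read_tuple(i)) != NULL)
                process(tup);
    }

    /* new: drain the current queue before advancing, so a rescheduled
     * worker has a decent chance to refill its whole buffer in one go */
    static void
    read_until_empty(int nqueues)
    {
        void   *tup;
        int     i;

        for (i = 0; i < nqueues; i++)
            while ((tup = try_read_tuple(i)) != NULL)
                process(tup);
    }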
-
Robert Haas authored
This should have been part of the original commit, but was missed. Pushing data between processes is expensive, so we definitely want to project away unneeded columns here, just as we do for other nodes like Sort and Hash that care about the volume of data.
-
Peter Eisentraut authored
'\"' is more commonly written simply as '"'.
-
- 22 Dec, 2015 2 commits
-
-
Robert Haas authored
Peter Geoghegan and Robert Haas
-
Robert Haas authored
When use_remote_estimate is enabled, consider adding ORDER BY to the query we send to the remote server so that we can use that ordered data for a merge join. Commit f18c944b arranges to push down the query pathkeys, which seems like the case most likely to be a win, but testing shows that this can sometimes win, too. For a regular table, we know which indexes are present and therefore test whether the ordering provided by each such index is useful. Here, we take the opposite approach: guess what orderings would be useful if they could be generated cheaply, and then ask the remote side what those will cost. Ashutosh Bapat, with very substantial cosmetic revisions by me. Also reviewed by Rushabh Lathia.
-
- 21 Dec, 2015 2 commits
-
-
Tom Lane authored
Yesterday in commit d854118c, I had a serious brain fade leading me to underestimate the number of words that the tab-completion logic could divide a line into. On input such as "(((((", each character will get seen as a separate word, which means we do indeed sometimes need more space for the words than for the original line. Fix that.
-
Stephen Frost authored
Rather than expect the Query returned by get_view_query() to be read-only and then copy bits and pieces of it out, simply copy the entire structure when we get it. This addresses an issue where AcquireRewriteLocks, which is called by acquireLocksOnSubLinks(), scribbles on the parsetree passed in (which was actually an entry in the relcache), leading to segfaults with certain view definitions. This also future-proofs us a bit for anyone adding more code to this path. acquireLocksOnSubLinks() was added in commit c3e0ddd4. Back-patch to 9.3, as that commit was.
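The essence of the fix, sketched (variable names are illustrative):

    /* Copy the whole Query up front; later AcquireRewriteLocks() calls
     * then scribble on our copy, not on the relcache's parsetree. */
    viewquery = (Query *) copyObject(get_view_query(view));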
-
- 20 Dec, 2015 3 commits
-
-
Tom Lane authored
Up to now, the tab completion logic has only examined the last few words of the current input line; "last few" being originally as few as four words, but lately up to nine words. Furthermore, it only looked at what libreadline considers the current line of input, which made it rather myopic if you split your command across lines. This was tolerable, sort of, so long as the match patterns were only designed to consider the last few words of input; but with the recent addition of HeadMatches() and Matches() matching rules, we really have to do better if we want those to behave sanely.

Hence, change the code to break the entire line down into words, and to include any previous lines in the command buffer along with the active readline input buffer. This will be a little bit slower than the previous coding, but some measurements say that even a query of several thousand characters can be parsed in a hundred or so microseconds on modern machines; so it's really not going to be significant for interactive tab completion. To reduce the cost some, I arranged to avoid the per-word malloc calls that used to occur: all the words are now kept in one malloc'd buffer.
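A standalone sketch of the one-buffer word storage idea (not the psql code itself; splitting here is on whitespace only, for brevity):

    #include <stdlib.h>
    #include <string.h>

    /*
     * Split line into whitespace-separated words.  A single malloc'd copy
     * of the line holds all the word strings; words[] just points into it.
     * Returns the number of words; caller frees *bufp and *wordsp.
     */
    static int
    split_words(const char *line, char ***wordsp, char **bufp)
    {
        char   *buf = strdup(line);
        char  **words = malloc(sizeof(char *) * (strlen(line) / 2 + 2));
        int     n = 0;
        char   *tok;

        for (tok = strtok(buf, " \t\n"); tok; tok = strtok(NULL, " \t\n"))
            words[n++] = tok;

        *wordsp = words;
        *bufp = buf;
        return n;
    }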
-
Peter Eisentraut authored
-
- 19 Dec, 2015 4 commits
-
-
Tom Lane authored
Commit e5e11c8c added a bunch of EXPLAIN statements without COSTS OFF to the regression tests. This is contrary to project policy since it results in unnecessary platform dependencies in the output (it's just luck that we didn't get buildfarm failures from it). Per gripe from Mike Wilson.
-
Tom Lane authored
Replace tests like

    else if (pg_strcasecmp(prev4_wd, "CREATE") == 0 &&
             pg_strcasecmp(prev3_wd, "TRIGGER") == 0 &&
             (pg_strcasecmp(prev_wd, "BEFORE") == 0 ||
              pg_strcasecmp(prev_wd, "AFTER") == 0))

with new notation like this:

    else if (TailMatches4("CREATE", "TRIGGER", MatchAny, "BEFORE|AFTER"))

In addition, provide some macros COMPLETE_WITH_LISTn() to reduce the amount of clutter needed to specify a small number of predetermined completion alternatives.

This makes the code substantially more compact: tab-complete.c gets over a thousand lines shorter in this patch, despite the addition of a couple of hundred lines of infrastructure for the new notations. The new way of specifying match rules seems a whole lot more readable and less error-prone, too.

There's a lot more that could be done now to make matching faster and more reliable; for example I suspect that most of the TailMatches() rules should now be Matches() rules. That would allow them to be skipped after a single integer comparison if there aren't the right number of words on the line, and it would reduce the risk of unintended matches. But for now, (mostly) refrain from reworking any match rules in favor of just converting what we've got into the new notation.

Thomas Munro, reviewed by Michael Paquier, some adjustments by me
-
Peter Eisentraut authored
-
Andres Freund authored
Previously the completion used the wrong word to match 'BY'; it has been broken since it was introduced in b2de2a. While at it, also add completion of IN TABLESPACE ... OWNED BY and fix comments referencing nonexistent syntax.

Reported-By: Michael Paquier
Author: Michael Paquier and Andres Freund
Discussion: CAB7nPqSHDdSwsJqX0d2XzjqOHr==HdWiubCi4L=Zs7YFTUne8w@mail.gmail.com
Backpatch: 9.4, like the commit introducing the bug
-
- 18 Dec, 2015 1 commit
-
-
Teodor Sigaev authored
I missed too much. The patch is returned to the commitfest process.
-