- 28 Dec, 2015 4 commits
-
-
Tom Lane authored
As of commit d5563d7d, psql -c no longer implies -X, but not all of our regression testing scripts had gotten that memo. To ensure consistency of results across different developers, make sure that *all* invocations of psql in all scripts in our tree use -X, even where this is not what previously happened. Michael Paquier and Tom Lane
-
Alvaro Herrera authored
Reported by: Alain Laporte (#13836)
-
Tom Lane authored
Tone down an overly strong statement about which pseudo-types PLs are likely to allow. Add "event_trigger" to the list, as well as "pg_ddl_command" in 9.5/HEAD. Back-patch to 9.3 where event_trigger was added.
-
Alvaro Herrera authored
For some reason, we've been overlooking the fact that pg_receivexlog and pg_recvlogical are using wrong translation domains all along, so their output hasn't ever been translated. The right domain is pg_basebackup, not their own executable names. Noticed by Ioseph Kim, who's been working on the Korean translation. Backpatch pg_receivexlog to 9.2 and pg_recvlogical to 9.4.
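For illustration, a minimal sketch of how a frontend binary living in the pg_basebackup directory can bind to that shared message catalog at startup; the surrounding main() is an assumption for the sketch, not taken from the patch.

    #include "postgres_fe.h"

    int
    main(int argc, char **argv)
    {
        /*
         * Bind to the shared pg_basebackup message catalog rather than a
         * domain named after this executable; otherwise gettext never finds
         * any translations.  (Sketch of the idea, not the literal patch.)
         */
        set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_basebackup"));

        /* ... normal program startup would continue here ... */
        return 0;
    }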
-
- 27 Dec, 2015 1 commit
-
-
Alvaro Herrera authored
Both Blowfish and DES implementations of crypt() can take an arbitrarily long time, depending on the number of rounds specified by the caller; make sure they can be interrupted.
Author: Andreas Karlsson
Reviewer: Jeff Janes
Backpatch to 9.1.
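As a rough sketch of the technique (not the actual pgcrypto code; the function name and loop body are illustrative), the fix amounts to testing for pending interrupts inside the caller-controlled rounds loop:

    #include "postgres.h"
    #include "miscadmin.h"          /* CHECK_FOR_INTERRUPTS() */

    /*
     * Schematic only: the caller controls the number of rounds, which can be
     * enormous, so check for query-cancel/terminate requests inside the loop
     * instead of only before it starts.
     */
    static void
    run_rounds(unsigned long nrounds)
    {
        unsigned long i;

        for (i = 0; i < nrounds; i++)
        {
            CHECK_FOR_INTERRUPTS();     /* allow cancellation mid-computation */
            /* one round of (illustrative) key-setup work would go here */
        }
    }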
-
- 26 Dec, 2015 2 commits
-
-
Tom Lane authored
MergeAttributes() rejects cases where columns to be merged have the same type but different typmod, which is correct; but the error message it printed didn't show either typmod, which is unhelpful. Changing this requires using format_type_with_typemod() in place of TypeNameToString(), which will have some minor side effects on the way some type names are printed, but on balance this is an improvement: the old code sometimes printed one type according to one set of rules and the other type according to the other set, which could be confusing in its own way. Oddly, there were no regression test cases covering any of this behavior, so add some. Complaint and fix by Amit Langote
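A hedged sketch of the kind of report the change enables, with illustrative variable names rather than the actual MergeAttributes() code; the point is that both sides now go through format_type_with_typemod(), so the typmods appear in the message:

    /* Illustrative names; resembles, but is not, the actual error report. */
    ereport(ERROR,
            (errcode(ERRCODE_DATATYPE_MISMATCH),
             errmsg("column \"%s\" has a type conflict", attname),
             errdetail("%s versus %s",
                       format_type_with_typemod(def_typeid, def_typmod),
                       format_type_with_typemod(new_typeid, new_typmod))));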
-
Tom Lane authored
brin_summarize_new_values() did not check that the passed OID was for an index at all, much less that it was a BRIN index, and would fail in obscure ways if it wasn't (possibly damaging data first?). It also lacked any permissions test; by analogy to VACUUM, we should only allow the table's owner to summarize. Noted by Jeff Janes, fix by Michael Paquier and me
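The shape of the added validation might look roughly like this; the lock level, error codes, and variable names are assumptions for illustration, not the literal patch:

    /* Rough sketch of the added checks; names and details are illustrative. */
    Relation    indexRel = index_open(indexoid, ShareUpdateExclusiveLock);

    if (indexRel->rd_rel->relkind != RELKIND_INDEX ||
        indexRel->rd_rel->relam != BRIN_AM_OID)
        ereport(ERROR,
                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
                 errmsg("\"%s\" is not a BRIN index",
                        RelationGetRelationName(indexRel))));

    /* by analogy to VACUUM, only the table's owner may summarize */
    if (!pg_class_ownercheck(RelationGetRelid(heapRel), GetUserId()))
        aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS,
                       RelationGetRelationName(heapRel));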
-
- 25 Dec, 2015 2 commits
-
-
Fujii Masao authored
Add DATABASE, EVENT TRIGGER, FOREIGN TABLE, ROLE, and TABLESPACE to tab completion for SECURITY LABEL. Kyotaro Horiguchi
-
Teodor Sigaev authored
The previous coding assumed a too-pessimistic upper bound on similarity in the GIN consistent method. Author: Fornaroli Christophe, with comments by me.
-
- 24 Dec, 2015 5 commits
- 23 Dec, 2015 6 commits
-
-
Tom Lane authored
This reverts most of commit 83dec5a7 in favor of having connectDatabase() store the possibly-reusable password in a static variable, similar to the coding we've had for a long time in pg_dump's version of that function. To avoid possible problems with unwanted password reuse, make callers specify whether it's reasonable to attempt to re-use the password. This is a wash for cases where re-use isn't needed, but it is far simpler for callers that do want that. Functionally there should be no difference. Even though we're past RC1, it seems like a good idea to back-patch this into 9.5, like the prior commit. Otherwise, if there are any third-party users of connectDatabase(), they'll have to deal with an API change in 9.5 and then another one in 9.6. Michael Paquier
-
Tom Lane authored
When pg_dump prompts the user for a password, it remembers the password for possible re-use by parallel worker processes. However, libpq might have extracted the password from a connection string originally passed as "dbname". Since we don't record the original form of dbname but break it down to host/port/etc, the password gets lost. Fix that by retrieving the actual password from the PGconn. (It strikes me that this whole approach is rather broken, as it will also lose other information such as options that might have been present in the connection string. But we'll leave that problem for another day.) In passing, get rid of rather silly use of malloc() for small fixed-size arrays. Back-patch to 9.3 where parallel pg_dump was introduced. Report and fix by Zeus Kronion, adjusted a bit by Michael Paquier and me
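A minimal sketch of the retrieval step, assuming a hypothetical helper name; libpq's PQpass() does exist and returns the password the connection actually used:

    #include <stdlib.h>
    #include <string.h>
    #include "libpq-fe.h"

    /*
     * After the connection is established, ask libpq which password it
     * actually used; a password buried inside a "dbname=... password=..."
     * connection string is visible here even though the caller never saw
     * it.  (Sketch only; error handling trimmed.)
     */
    static char *
    save_connection_password(PGconn *conn)
    {
        const char *pwd = PQpass(conn);

        if (pwd != NULL && pwd[0] != '\0')
            return strdup(pwd);         /* private copy for later reuse */
        return NULL;
    }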
-
Robert Haas authored
The original coding read tuples from workers in round-robin fashion, but performance testing shows that it works much better to read enough to empty one queue before moving on to the next. I believe the reason for this is that, with the old approach, we could easily wake up a worker repeatedly to write only one new tuple into the shm_mq each time. With this approach, by the time the process gets scheduled, it has a decent chance of being able to fill the entire buffer in one go. Patch by me. Dilip Kumar helped with performance testing.
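Schematically, and assuming a hypothetical process_tuple() helper plus mqh[]/nworkers/cur variables, the reading strategy looks something like this; the real Gather code handles more states than the sketch shows:

    /*
     * Illustrative loop: keep reading from one worker's queue until it has
     * nothing more to give, instead of taking one tuple per queue in turn.
     */
    for (;;)
    {
        Size            nbytes;
        void           *data;
        shm_mq_result   res;

        res = shm_mq_receive(mqh[cur], &nbytes, &data, true /* nowait */);

        if (res == SHM_MQ_SUCCESS)
        {
            process_tuple(data, nbytes);    /* keep draining this queue */
            continue;
        }
        if (res == SHM_MQ_WOULD_BLOCK)
        {
            cur = (cur + 1) % nworkers;     /* queue empty: try the next one */
            continue;
        }
        break;      /* SHM_MQ_DETACHED: the real code drops only this queue */
    }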
-
Robert Haas authored
This should have been part of the original commit, but was missed. Pushing data between processes is expensive, so we definitely want to project away unneeded columns here, just as we do for other nodes like Sort and Hash that care about the volume of data.
-
Peter Eisentraut authored
'\"' is more commonly written simply as '"'.
-
- 22 Dec, 2015 2 commits
-
-
Robert Haas authored
Peter Geoghegan and Robert Haas
-
Robert Haas authored
When use_remote_estimate is enabled, consider adding ORDER BY to the query we send to the remote server so that we can use that ordered data for a merge join. Commit f18c944b arranges to push down the query pathkeys, which seems like the case most likely to be a win, but testing shows this can sometimes win, too. For a regular table, we know which indexes are present and therefore test whether the ordering provided by each such index is useful. Here, we take the opposite approach: guess what orderings would be useful if they could be generated cheaply, and then ask the remote side what those will cost. Ashutosh Bapat, with very substantial cosmetic revisions by me. Also reviewed by Rushabh Lathia.
-
- 21 Dec, 2015 2 commits
-
-
Tom Lane authored
Yesterday in commit d854118c, I had a serious brain fade leading me to underestimate the number of words that the tab-completion logic could divide a line into. On input such as "(((((", each character will get seen as a separate word, which means we do indeed sometimes need more space for the words than for the original line. Fix that.
-
Stephen Frost authored
Rather than expect the Query returned by get_view_query() to be read-only and then copy bits and pieces of it out, simply copy the entire structure when we get it. This addresses an issue where AcquireRewriteLocks, which is called by acquireLocksOnSubLinks(), scribbles on the parsetree passed in, which was actually an entry in the relcache, leading to segfaults with certain view definitions. This also future-proofs us a bit for anyone adding more code to this path. acquireLocksOnSubLinks() was added in commit c3e0ddd4. Back-patch to 9.3 as that commit was.
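The core of the fix can be sketched in one line (illustrative, not the literal patch):

    /*
     * Work on a copy of the view's query, so that later processing (which
     * acquires locks and can modify the tree in place) never scribbles on
     * the relcache entry.
     */
    Query  *viewquery = (Query *) copyObject(get_view_query(view));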
-
- 20 Dec, 2015 3 commits
-
-
Tom Lane authored
Up to now, the tab completion logic has only examined the last few words of the current input line; "last few" being originally as few as four words, but lately up to nine words. Furthermore, it only looked at what libreadline considers the current line of input, which made it rather myopic if you split your command across lines. This was tolerable, sort of, so long as the match patterns were only designed to consider the last few words of input; but with the recent addition of HeadMatches() and Matches() matching rules, we really have to do better if we want those to behave sanely. Hence, change the code to break the entire line down into words, and to include any previous lines in the command buffer along with the active readline input buffer. This will be a little bit slower than the previous coding, but some measurements say that even a query of several thousand characters can be parsed in a hundred or so microseconds on modern machines; so it's really not going to be significant for interactive tab completion. To reduce the cost some, I arranged to avoid the per-word malloc calls that used to occur: all the words are now kept in one malloc'd buffer.
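A toy, standalone illustration of the "one malloc'd buffer, many word pointers" idea; the real psql splitter also honors quoting and parenthesis rules, which this sketch does not:

    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /*
     * Copy the whole line once, terminate each word in place, and collect
     * pointers into that single allocation, so there is no per-word malloc.
     */
    static char **
    split_words(const char *line, int *nwords)
    {
        char       *buf = strdup(line);     /* one allocation holds every word */
        char      **words = malloc(sizeof(char *) * (strlen(line) / 2 + 2));
        int         n = 0;

        for (char *p = buf; *p;)
        {
            while (*p && isspace((unsigned char) *p))
                *p++ = '\0';                /* terminate the previous word */
            if (*p)
                words[n++] = p;             /* remember where a word starts */
            while (*p && !isspace((unsigned char) *p))
                p++;
        }
        *nwords = n;
        return words;                       /* buf stays alive behind the pointers */
    }

    int
    main(void)
    {
        int     n;
        char  **w = split_words("ALTER TABLE  foo \n  RENAME TO", &n);

        for (int i = 0; i < n; i++)
            printf("word %d: %s\n", i, w[i]);
        return 0;
    }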
-
Peter Eisentraut authored
- 19 Dec, 2015 4 commits
-
-
Tom Lane authored
Commit e5e11c8c added a bunch of EXPLAIN statements without COSTS OFF to the regression tests. This is contrary to project policy since it results in unnecessary platform dependencies in the output (it's just luck that we didn't get buildfarm failures from it). Per gripe from Mike Wilson.
-
Tom Lane authored
Replace tests like

    else if (pg_strcasecmp(prev4_wd, "CREATE") == 0 &&
             pg_strcasecmp(prev3_wd, "TRIGGER") == 0 &&
             (pg_strcasecmp(prev_wd, "BEFORE") == 0 ||
              pg_strcasecmp(prev_wd, "AFTER") == 0))

with new notation like this:

    else if (TailMatches4("CREATE", "TRIGGER", MatchAny, "BEFORE|AFTER"))

In addition, provide some macros COMPLETE_WITH_LISTn() to reduce the amount of clutter needed to specify a small number of predetermined completion alternatives. This makes the code substantially more compact: tab-complete.c gets over a thousand lines shorter in this patch, despite the addition of a couple of hundred lines of infrastructure for the new notations. The new way of specifying match rules seems a whole lot more readable and less error-prone, too. There's a lot more that could be done now to make matching faster and more reliable; for example I suspect that most of the TailMatches() rules should now be Matches() rules. That would allow them to be skipped after a single integer comparison if there aren't the right number of words on the line, and it would reduce the risk of unintended matches. But for now, (mostly) refrain from reworking any match rules in favor of just converting what we've got into the new notation. Thomas Munro, reviewed by Michael Paquier, some adjustments by me
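To illustrate the notation, here is one plausible standalone sketch of a matcher that understands "A|B" alternatives and a MatchAny wildcard, plus one way a TailMatchesN macro could be built on it, assuming a hypothetical words[]/word_count pair like the one the completion code maintains; this is not the actual tab-complete.c implementation:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>
    #include <strings.h>                /* strncasecmp (POSIX) */

    #define MatchAny    NULL            /* wildcard pattern (illustrative) */

    /*
     * Does "word" match a pattern such as "BEFORE|AFTER"?  A NULL pattern
     * (MatchAny) matches anything; comparison is case-insensitive.
     */
    static bool
    word_matches(const char *pattern, const char *word)
    {
        if (pattern == NULL)
            return true;

        while (*pattern)
        {
            const char *bar = strchr(pattern, '|');
            size_t      len = bar ? (size_t) (bar - pattern) : strlen(pattern);

            if (strlen(word) == len && strncasecmp(pattern, word, len) == 0)
                return true;
            if (bar == NULL)
                break;
            pattern = bar + 1;          /* try the next alternative */
        }
        return false;
    }

    /*
     * One way TailMatches4() could be spelled, assuming the line's words are
     * kept in words[0..word_count-1] (hypothetical names):
     */
    #define TailMatches4(p1, p2, p3, p4) \
        (word_count >= 4 && \
         word_matches(p1, words[word_count - 4]) && \
         word_matches(p2, words[word_count - 3]) && \
         word_matches(p3, words[word_count - 2]) && \
         word_matches(p4, words[word_count - 1]))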
-
Peter Eisentraut authored
-
Andres Freund authored
Previously the completion used the wrong word to match 'BY'. This was introduced brokenly, in b2de2a. While at it, also add completion of IN TABLESPACE ... OWNED BY and fix comments referencing nonexistent syntax.
Reported-By: Michael Paquier
Author: Michael Paquier and Andres Freund
Discussion: CAB7nPqSHDdSwsJqX0d2XzjqOHr==HdWiubCi4L=Zs7YFTUne8w@mail.gmail.com
Backpatch: 9.4, like the commit introducing the bug
-
- 18 Dec, 2015 9 commits
-
-
Teodor Sigaev authored
I missed too much; the patch is returned to the commitfest process.
-
Robert Haas authored
Per a recommendation from Tomas Vondra, it's more helpful to refer to the value that determines how skewed a Gaussian or exponential distribution is as a parameter rather than a threshold. Since it's not quite too late to get this right in 9.5, where it was introduced, back-patch this. Most of the patch changes only comments and documentation, but a few pgbench messages are altered to match. Fabien Coelho, reviewed by Michael Paquier and by me.
-
Robert Haas authored
Kyotaro Horiguchi
-
Robert Haas authored
This was a silly goof on my (rhaas's) part. Report and fix by Rushabh Lathia.
-
Robert Haas authored
This could result in the error context misidentifying where the error actually occurred. Craig Ringer
-
Robert Haas authored
Amit Langote
-
Teodor Sigaev authored
Allow omitting the lower or upper bound (or both) in an array subscript to select a slice of the array. Author: YUriy Zhuravlev
-
Teodor Sigaev authored
Introduce distance operators over cubes:
    <#>   taxicab distance
    <->   euclidean distance
    <=>   chebyshev distance
Also add kNN support for those distances in the GiST opclass. Author: Stas Kelvich
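For reference, the three metrics behind the new operators, written as a small standalone C program rather than as cube extension code:

    #include <math.h>
    #include <stdio.h>

    /* taxicab (<#>): sum of coordinate-wise absolute differences */
    static double taxicab(const double *a, const double *b, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += fabs(a[i] - b[i]);
        return s;
    }

    /* euclidean (<->): square root of the sum of squared differences */
    static double euclidean(const double *a, const double *b, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += (a[i] - b[i]) * (a[i] - b[i]);
        return sqrt(s);
    }

    /* chebyshev (<=>): largest coordinate-wise absolute difference */
    static double chebyshev(const double *a, const double *b, int n)
    {
        double m = 0.0;
        for (int i = 0; i < n; i++)
        {
            double d = fabs(a[i] - b[i]);
            if (d > m)
                m = d;
        }
        return m;
    }

    int main(void)
    {
        double p[] = {0.0, 0.0, 0.0};
        double q[] = {1.0, 2.0, 2.0};

        printf("taxicab   = %g\n", taxicab(p, q, 3));   /* 5 */
        printf("euclidean = %g\n", euclidean(p, q, 3)); /* 3 */
        printf("chebyshev = %g\n", chebyshev(p, q, 3)); /* 2 */
        return 0;
    }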
-
Tom Lane authored
datapagemap_create() and datapagemap_destroy() were declared extern, but they don't actually exist anywhere. Per YUriy Zhuravlev and Michael Paquier.
-