- 11 Jan, 2019 3 commits
-
Peter Eisentraut authored
Replace the use of lynx with pandoc. Pandoc creates better-looking output and avoids the delicate locale/encoding issues of lynx, because it always uses UTF-8 for both input and output.
Note: requires Pandoc >= 1.13
Discussion: https://www.postgresql.org/message-id/flat/dcfaa74d-8037-bb32-f9e0-3fea7ccf4551@2ndquadrant.com/
Reviewed-by: Mi Tar <mmitar@gmail.com>
-
Peter Eisentraut authored
This value represents the default behavior of using the current timeline. Previously, this was represented by an empty string. (Before the removal of recovery.conf, this setting could not be chosen explicitly but was used when recovery_target_timeline was not mentioned at all.)
Discussion: https://www.postgresql.org/message-id/flat/6dd2c23a-4162-8469-410f-bfe146e28c0c@2ndquadrant.com/
Reviewed-by: David Steele <david@pgmasters.net>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
-
Amit Kapila authored
Extend pg_stat_statements_reset() to discard statistics for a particular user/db/query. The function pg_stat_statements_reset() is extended to accept userid, dbid, and queryid as input parameters. It can now discard the statistics gathered so far by pg_stat_statements for the specified userid, dbid, and queryid. If no parameter is specified, or all the specified parameters have the default value 0, it discards all statistics, matching the old behavior. The new behavior is useful for getting fresh statistics for a specific user/database/query without resetting all the existing statistics.
Author: Haribabu Kommi, with a few additional changes by me
Reviewed-by: Michael Paquier, Amit Kapila and Fujii Masao
Discussion: https://postgr.es/m/CAJrrPGcyh-gkFswyc6C661K6cknL0XkNqVT0sQt2mFNMR4HRKA@mail.gmail.com
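As a usage illustration only (the connection string and the userid/dbid/queryid values below are invented, not taken from the commit), a minimal libpq call of the extended function might look like this:

```c
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	/* Placeholder connection string and ids, for illustration only. */
	PGconn	   *conn = PQconnectdb("dbname=postgres");
	const char *params[3] = {"10", "16384", "12345"};
	PGresult   *res;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		PQfinish(conn);
		return 1;
	}

	/* userid, dbid, queryid; passing 0 (the default) for all three, or
	 * calling the function with no arguments, still discards everything. */
	res = PQexecParams(conn,
					   "SELECT pg_stat_statements_reset($1::oid, $2::oid, $3::bigint)",
					   3, NULL, params, NULL, NULL, 0);
	if (PQresultStatus(res) != PGRES_TUPLES_OK)
		fprintf(stderr, "reset failed: %s", PQerrorMessage(conn));

	PQclear(res);
	PQfinish(conn);
	return 0;
}
```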
-
- 10 Jan, 2019 10 commits
-
Andrew Dunstan authored
This was an oversight in commit 16828d5c. If the table is going to be rewritten, we simply clear all the missing values from all the table's attributes, since there will no longer be any rows with the attributes missing. Otherwise, we repackage the missing value in an array constructed with the new type specifications. Backpatch to release 11.
This fixes bug #15446, reported by Dmitry Molotkov.
Reviewed by Dean Rasheed
-
Tom Lane authored
I chanced to notice that "make distprep" leaves the tree in a state that git complains about. It's been like this for a while, but given the lack of complaints it probably doesn't need back-patching.
-
Tom Lane authored
Avoid using "typeid" as a parameter name in header files, since that is a C++ keyword. These cases were introduced recently, in 04fe805a and 586b98fd. Since I'm an incurable neatnik, also rename these parameters in the underlying function definitions. That's not really necessary per project rules, but I don't like function declarations that don't quite agree with the underlying definitions. Per src/tools/pginclude/cpluspluscheck.
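A contrived illustration of the underlying issue (the function name here is made up, not one of the affected declarations): prototype parameter names are irrelevant to a C compiler, but a C++ compiler rejects any that collide with its keywords, so headers that may be included from C++ have to avoid them.

```c
/* Fine in C, but fails if this header is ever included from C++,
 * because "typeid" is a C++ keyword: */
extern int get_type_category(unsigned int typeid);

/* Renaming the parameter keeps the declaration usable from both
 * languages; C doesn't care what prototype parameters are called: */
extern int get_type_category(unsigned int typid);
```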
-
Tom Lane authored
Discussion: https://postgr.es/m/4380.1547143967@sss.pgh.pa.us
-
Alvaro Herrera authored
This commit moves expand_inherited_tables and its underlings from optimizer/prep/prepunion.c to optimizer/utils/inherit.c. Also, all of the AppendRelInfo-based expression manipulation routines are moved to optimizer/utils/appendinfo.c. No functional code changes. One exception is the introduction of make_append_rel_info, but that's still just moving around code. Also, stop including <limits.h> in prepunion.c, which no longer needs it since 3fc6e2d7. I (Álvaro) noticed this because Amit was copying that to inherit.c, which likewise doesn't need it.
Author: Amit Langote
Discussion: https://postgr.es/m/3be67028-a00a-502c-199a-da00eec8fb6e@lab.ntt.co.jp
-
Alvaro Herrera authored
Per buildfarm
-
Alvaro Herrera authored
These commands allow assignment of values produced by queries to pgbench variables, where they can be used by further commands. \gset terminates a command sequence (just like a bare semicolon); \cset separates multiple queries in a compound command, like an escaped semicolon (\;). A prefix can be provided to the \-command and is prepended to the name of each output column to produce the final variable name. This feature allows pgbench scripts to react meaningfully to the actual database contents, allowing more powerful benchmarks to be written.
Authors: Fabien Coelho, Álvaro Herrera
Reviewed-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
Reviewed-by: Stephen Frost <sfrost@snowman.net>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Tatsuo Ishii <ishii@sraoss.co.jp>
Reviewed-by: Rafia Sabih <rafia.sabih@enterprisedb.com>
Discussion: https://postgr.es/m/alpine.DEB.2.20.1607091005330.3412@sto
-
Michael Paquier authored
This has required an update of the Python script generating the rules, as its format has changed in release 29. This release has also added new punctuation and symbols, and a new set of rules has been generated to include them. How to find the newest versions of Latin-ASCII is also documented more clearly.
Author: Hugh Ranalli, Michael Paquier
Discussion: https://postgr.es/m/15548-cef1b3f8de190d4f@postgresql.org
-
Tom Lane authored
We've been speculating for a long time that hash-based keyword lookup ought to be faster than binary search, but up to now we hadn't found a suitable tool for generating the hash function. Joerg Sonnenberger provided the inspiration, and sample code, to show us that rolling our own generator wasn't a ridiculous idea. Hence, do that. The method used here requires a lookup table of approximately 4 bytes per keyword, but that's less than what we saved in the predecessor commit afb0d071, so it's not a big problem. The time savings is indeed significant: preliminary testing suggests that the total time for raw parsing (flex + bison phases) drops by ~20%. Patch by me, but it owes its existence to Joerg Sonnenberger; thanks also to John Naylor for review. Discussion: https://postgr.es/m/20190103163340.GA15803@britannica.bec.de
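A toy sketch of the hash-then-verify pattern, not the actual generated PostgreSQL hash function: the generator's whole job is to find a hash that maps every keyword to a distinct slot, after which a lookup is one hash computation plus a single strcmp to reject non-keywords that happen to land in an occupied slot.

```c
#include <stdio.h>
#include <string.h>

static const char *const keywords[] = {
	"abort", "alter", "begin", "commit", "select"
};

/* For this toy keyword set, (first char + last char) mod 13 happens to be
 * collision-free; a real generator searches for such a function offline. */
#define HASH_TABLE_SIZE 13
static const int hash_table[HASH_TABLE_SIZE] = {
	2, -1, -1, 1, -1, 0, -1, 3, -1, -1, 4, -1, -1	/* slot -> keyword index */
};

static int
kw_hash(const char *w, size_t len)
{
	return ((unsigned char) w[0] + (unsigned char) w[len - 1]) % HASH_TABLE_SIZE;
}

/* Hash once, then a single strcmp confirms the match: non-keywords also
 * hash somewhere, so the verification step is what makes this correct. */
static int
keyword_lookup(const char *word)
{
	size_t		len = strlen(word);
	int			idx;

	if (len == 0)
		return -1;
	idx = hash_table[kw_hash(word, len)];
	if (idx >= 0 && strcmp(word, keywords[idx]) == 0)
		return idx;
	return -1;
}

int
main(void)
{
	printf("%d %d\n", keyword_lookup("commit"), keyword_lookup("create"));	/* 3 -1 */
	return 0;
}
```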
-
Michael Paquier authored
Author: Kirk Jamison
Discussion: https://postgr.es/m/D09B13F772D2274BB348A310EE3027C640AC54@g01jpexmbkw24
-
- 09 Jan, 2019 2 commits
-
Tom Lane authored
This index array was originally defined to have 10000 entries (ranging up to FirstGenbkiObjectId), but we really only need entries up to the last existing builtin function OID, currently 6121. That saves close to 8K of never-accessed space in the server executable, at the small price of one more fetch in fmgr_isbuiltin(). We could reduce the array size still further by renumbering a few of the highest-numbered builtin functions; but there's a small risk of breaking clients that have chosen to hardwire those function OIDs, so it's not clear if it'd be worth the trouble. (We should, however, discourage future patches from choosing function OIDs above 6K as long as there's still lots of space below that.) Discussion: https://postgr.es/m/12359.1547063064@sss.pgh.pa.us
-
Tom Lane authored
For a long time, plpgsql has allowed trigger functions to parse references to OLD and NEW even if the current trigger event type didn't assign a value to one or the other variable; but actually executing such a reference would fail. The v11 changes to use "expanded records" for DTYPE_REC variables changed the behavior so that the unassigned variable now reads as a null composite value. While this behavioral change was more or less unintentional, it seems that leaving it like this is better than adding code and complexity to be bug-compatible with the old way. The change doesn't break any code that worked before, and it eliminates a gotcha that often required extra code to work around. Hence, update the docs to say that these variables are "null" not "unassigned" when not relevant to the event type. And add a regression test covering the behavior, so that we'll notice if we ever break it again. Per report from Kristjan Tammekivi. Discussion: https://postgr.es/m/CAABK7uL-uC9ZxKBXzo_68pKt7cECfNRv+c35CXZpjq6jCAzYYA@mail.gmail.com
-
- 08 Jan, 2019 3 commits
-
Tom Lane authored
runtime.sgml said that you couldn't change SysV IPC parameters on OpenBSD except by rebuilding the kernel. That's definitely wrong in OpenBSD 6.x, and excavation in their man pages says it changed in OpenBSD 3.3. Update NetBSD and OpenBSD sections to recommend adjustment of the SEMMNI and SEMMNS settings, which are painfully small by default on those platforms. (The discussion thread contemplated recommending that people select POSIX semaphores instead, but the performance consequences of that aren't really clear, so I'll refrain.) Remove pointless discussion of SEMMNU and SEMMAP from the FreeBSD section. Minor other wordsmithing. Discussion: https://postgr.es/m/27582.1546928073@sss.pgh.pa.us
-
Michael Paquier authored
DISABLE_PAGE_SKIPPING is available since v9.6, and SKIP_LOCKED since v12. They lacked equivalents for vacuumdb, so this closes the gap.
Author: Nathan Bossart
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/FFE5373C-E26A-495B-B5C8-911EC4A41C5E@amazon.com
-
Tatsuo Ishii authored
Acronym "btree" better means "multi-way balanced tree" rather than "multi-way binary tree". Discussion: https://postgr.es/m/20190105.183532.1686260542006440682.t-ishii%40sraoss.co.jp
-
- 07 Jan, 2019 4 commits
-
Andrew Gierth authored
In passing add a couple of links to the message severity table. Backpatch because it's always been this way.
Author: Karl O. Pinc <kop@meme.com>
-
Peter Eisentraut authored
Replace exit_nicely() calls with standard exit() and register the cleanup actions using atexit().
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/ec4135ba-84e9-28bf-b584-0e78d47448d5@2ndquadrant.com/
-
Peter Eisentraut authored
Replace exit_nicely() calls with standard exit() and register the cleanup actions using atexit(). The coding pattern used here mirrors existing use in pg_basebackup.c.
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/ec4135ba-84e9-28bf-b584-0e78d47448d5@2ndquadrant.com/
-
Peter Eisentraut authored
Instead of using our custom disconnect_and_exit(), just register the desired cleanup using atexit() and use the standard exit() to leave the program.
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/ec4135ba-84e9-28bf-b584-0e78d47448d5@2ndquadrant.com/
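The three commits above share one mechanism; the following is a minimal standalone sketch of that atexit() pattern (the resource being cleaned up is invented for illustration, not the actual pg_basebackup code):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical resource to release; the real tools clean up things like
 * temporary files, directories, and connections here. */
static FILE *logfile = NULL;

static void
cleanup_atexit(void)
{
	if (logfile)
		fclose(logfile);
}

int
main(void)
{
	atexit(cleanup_atexit);		/* registered once; runs on every exit() */

	logfile = fopen("example.log", "w");
	if (logfile == NULL)
		exit(1);				/* error paths just call exit(); cleanup still runs */

	fprintf(logfile, "doing some work\n");
	exit(0);					/* normal termination goes through the same cleanup */
}
```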
-
- 06 Jan, 2019 1 commit
-
Tom Lane authored
Previously, ScanKeywordLookup was passed an array of string pointers. This had some performance deficiencies: the strings themselves might be scattered all over the place depending on the compiler (and some quick checking shows that at least with gcc-on-Linux, they indeed weren't reliably close together). That led to very cache-unfriendly behavior as the binary search touched strings in many different pages. Also, depending on the platform, the string pointers might need to be adjusted at program start, so that they couldn't be simple constant data. And the ScanKeyword struct had been designed with an eye to 32-bit machines originally; on 64-bit it requires 16 bytes per keyword, making it even more cache-unfriendly.
Redesign so that the keyword strings themselves are allocated consecutively (as part of one big char-string constant), thereby eliminating the touch-lots-of-unrelated-pages syndrome. And get rid of the ScanKeyword array in favor of three separate arrays: uint16 offsets into the keyword array, uint16 token codes, and uint8 keyword categories. That reduces the overhead per keyword to 5 bytes instead of 16 (even less in programs that only need one of the token codes and categories); moreover, the binary search only touches the offsets array, further reducing its cache footprint. This also lets us put the token codes somewhere else than the keyword strings are, which avoids some unpleasant build dependencies.
While we're at it, wrap the data used by ScanKeywordLookup into a struct that can be treated as an opaque type by most callers. That doesn't change things much right now, but it will make it less painful to switch to a hash-based lookup method, as is being discussed in the mailing list thread.
Most of the change here is associated with adding a generator script that can build the new data structure from the same list-of-PG_KEYWORD header representation we used before. The PG_KEYWORD lists that plpgsql and ecpg used to embed in their scanner .c files have to be moved into headers, and the Makefiles have to be taught to invoke the generator script. This work is also necessary if we're to consider hash-based lookup, since the generator script is what would be responsible for constructing a hash table.
Aside from saving a few kilobytes in each program that includes the keyword table, this seems to speed up raw parsing (flex+bison) by a few percent. So it's worth doing even as it stands, though we think we can gain even more with a follow-on patch to switch to hash-based lookup.
John Naylor, with further hacking by me
Discussion: https://postgr.es/m/CAJVSVGXdFVU2sgym89XPL=Lv1zOS5=EHHQ8XWNzFL=mTXkKMLw@mail.gmail.com
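A compact sketch of the packed-strings-plus-offsets idea with a made-up five-keyword vocabulary (the real tables are script-generated and far larger): the binary search compares against positions in one contiguous string blob, so only the small offsets array and the blob itself are touched.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* All keyword strings packed into one constant blob, in sorted order,
 * plus a uint16 offset per keyword; no pointer relocation is needed. */
static const char kw_strings[] =
	"abort\0" "alter\0" "begin\0" "commit\0" "select";
static const uint16_t kw_offsets[] = {0, 6, 12, 18, 25};
#define NUM_KEYWORDS 5

/* Binary search that touches only kw_offsets and kw_strings. */
static int
kw_lookup(const char *word)
{
	int			lo = 0,
				hi = NUM_KEYWORDS - 1;

	while (lo <= hi)
	{
		int			mid = (lo + hi) / 2;
		int			cmp = strcmp(word, kw_strings + kw_offsets[mid]);

		if (cmp == 0)
			return mid;			/* index doubles as the keyword number */
		if (cmp < 0)
			hi = mid - 1;
		else
			lo = mid + 1;
	}
	return -1;
}

int
main(void)
{
	printf("%d %d\n", kw_lookup("commit"), kw_lookup("foo"));	/* 3 -1 */
	return 0;
}
```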
-
- 05 Jan, 2019 1 commit
-
Tom Lane authored
Commit 69ae9dcb added a globally-visible "%: %.o" rule, but we failed to notice that src/bin/scripts/Makefile already had such a rule. Apparently, the later occurrence of the same rule wins in nearly all versions of gmake ... but not in the one used by buildfarm member jacana. jacana is evidently using the global rule, which says to link "$<", ie just the first dependency. But the scripts makefile needs to link "$^", ie all the dependencies listed for the target. There is, fortunately, no good reason not to use "$^" in the global version of the rule, so we can just do that and get rid of the local version.
-
- 04 Jan, 2019 6 commits
-
Alvaro Herrera authored
Some relation kinds had relfilenode set to some non-zero value, but apparently the actual files did not really exist because creation was prevented elsewhere. Get rid of the phony pg_class.relfilenode values. Catversion bumped, but only because the sanity_test check will fail if run in a system initdb'd with the previous version.
Reviewed-by: Kyotaro HORIGUCHI, Michael Paquier
Discussion: https://postgr.es/m/20181206215552.fm2ypuxq6nhpwjuc@alvherre.pgsql
-
Alvaro Herrera authored
The original name was an unfortunate choice. Discussion: https://postgr.es/m/20181218.145600.172055615.horiguchi.kyotaro@lab.ntt.co.jp
-
Tom Lane authored
A variable name matching a statement-introducing keyword, such as "comment" or "update", caused parse failures if one tried to write a statement using that keyword. Commit bb1b8f69 already addressed this scenario for the case of variable names matching unreserved plpgsql keywords, but we didn't think about unreserved core-grammar keywords. The same heuristic (viz, it can't be a variable name unless the next token is assignment or '[') should work fine for that case too, and as a bonus the code gets shorter and less duplicative. Per bug #15555 from Feike Steenbergen. Since this hasn't been complained of before, and is easily worked around anyway, I won't risk a back-patch. Discussion: https://postgr.es/m/15555-149bbd70ddc7b4b6@postgresql.org
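The look-ahead rule itself is tiny; a hedged standalone sketch of it (names invented, not plpgsql's actual scanner code):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Sketch of the disambiguation heuristic: an identifier that collides with
 * a keyword is only taken as a plpgsql variable when the token right after
 * it could start an assignment or a subscript.
 */
static bool
take_as_variable(const char *next_token)
{
	return strcmp(next_token, ":=") == 0 ||
		strcmp(next_token, "=") == 0 ||
		strcmp(next_token, "[") == 0;
}

int
main(void)
{
	/* "comment := 1" -> variable; "comment on table ..." -> statement */
	printf("%d %d\n", take_as_variable(":="), take_as_variable("on"));
	return 0;
}
```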
-
Peter Eisentraut authored
Python 2 is still supported.
-
Peter Eisentraut authored
Python 2 is still supported.
Author: Hugh Ranalli <hugh@whtc.ca>
Discussion: https://www.postgresql.org/message-id/CAAhbUMNyZ+PhNr_mQ=G161K0-hvbq13Tz2is9M3WK+yX9cQOCw@mail.gmail.com
-
Tom Lane authored
Instead of running a SQL script to create the standard conversion functions and pg_conversion entries, put those entries into the initial data in postgres.bki. This shaves a few percent off the runtime of initdb, and also allows accurate comments to be attached to the conversion functions; the previous script labeled them with machine-generated comments that were not quite right for multi-purpose conversion functions. Also, we can get rid of the duplicative Makefile and MSVC perl implementations of the generation code for that SQL script.
A functional change is that these pg_proc and pg_conversion entries are now "pinned" by initdb. Leaving them unpinned was perhaps a good thing back while the conversions feature was under development, but there seems no valid reason for it now. Also, the conversion functions are now marked as immutable, where before they were volatile by virtue of lacking any explicit specification. That seems like it was just an oversight.
To avoid using magic constants in pg_conversion.dat, extend genbki.pl to allow encoding names to be converted, much as it does for language, access method, etc names.
John Naylor
Discussion: https://postgr.es/m/CAJVSVGWtUqxpfAaxS88vEGvi+jKzWZb2EStu5io-UPc4p9rSJg@mail.gmail.com
-
- 03 Jan, 2019 2 commits
-
Tom Lane authored
This patch teaches genbki.pl to replace pg_language names by OIDs in much the same way as it already does for pg_am names etc, and converts pg_proc.dat to use such symbolic references in the prolang column. Aside from getting rid of a few more magic numbers in the initial catalog data, this means that Gen_fmgrtab.pl no longer needs to read pg_language.dat, since it doesn't have to know the OID of the "internal" language; now it's just looking for the string "internal". No need for a catversion bump, since the contents of postgres.bki don't actually change at all.
John Naylor
Discussion: https://postgr.es/m/CAJVSVGWtUqxpfAaxS88vEGvi+jKzWZb2EStu5io-UPc4p9rSJg@mail.gmail.com
-
Tom Lane authored
This patch changes the rule for whether or not a tuple seen by ANALYZE should be included in its sample. When we last touched this logic, in commit 51e1445f, we weren't thinking very hard about tuples being UPDATEd by a long-running concurrent transaction. In such a case, we might see the pre-image as either LIVE or DELETE_IN_PROGRESS depending on timing; and we might see the post-image not at all, or as INSERT_IN_PROGRESS. Since the existing code will not sample either DELETE_IN_PROGRESS or INSERT_IN_PROGRESS tuples, this leads to concurrently-updated rows being omitted from the sample entirely. That's not very helpful, and it's especially the wrong thing if the concurrent transaction ends up rolling back.
The right thing seems to be to sample DELETE_IN_PROGRESS rows just as if they were live. This makes the "sample it" and "count it" decisions the same, which seems good for consistency. It's clearly the right thing if the concurrent transaction ends up rolling back; in effect, we are sampling as though IN_PROGRESS transactions haven't happened yet. Also, this combination of choices ensures maximum robustness against the different combinations of whether and in which state we might see the pre- and post-images of an update.
It's slightly annoying that we end up recording immediately-out-of-date stats in the case where the transaction does commit, but on the other hand the stats are fine for columns that didn't change in the update. And the alternative of sampling INSERT_IN_PROGRESS rows instead seems like a bad idea, because then the sampling would be inconsistent with the way rows are counted for the stats report.
Per report from Mark Chambers; thanks to Jeff Janes for diagnosing what was happening. Back-patch to all supported versions.
Discussion: https://postgr.es/m/CAFh58O_Myr6G3tcH3gcGrF-=OExB08PJdWZcSBcEcovaiPsrHA@mail.gmail.com
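Reduced to its essentials, the new sampling decision can be pictured like this (a simplified sketch with an invented enum; it only captures the rule described above, not every detail of acquire_sample_rows):

```c
#include <stdbool.h>
#include <stdio.h>

/* Invented stand-in for the visibility states mentioned above. */
typedef enum
{
	TUPLE_LIVE,
	TUPLE_DEAD,
	TUPLE_RECENTLY_DEAD,
	TUPLE_INSERT_IN_PROGRESS,
	TUPLE_DELETE_IN_PROGRESS
} TupleState;

/* New rule: a row whose concurrent deletion is still in progress is
 * sampled just as if it were live; in-progress inserts still are not. */
static bool
include_in_sample(TupleState state)
{
	switch (state)
	{
		case TUPLE_LIVE:
		case TUPLE_DELETE_IN_PROGRESS:
			return true;
		default:
			return false;
	}
}

int
main(void)
{
	printf("%d %d %d\n",
		   include_in_sample(TUPLE_LIVE),
		   include_in_sample(TUPLE_DELETE_IN_PROGRESS),
		   include_in_sample(TUPLE_INSERT_IN_PROGRESS));
	return 0;
}
```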
-
- 02 Jan, 2019 5 commits
-
Tom Lane authored
MinMaxExpr invokes the btree comparison function for its input datatype, so it's only leakproof if that function is. Many such functions are indeed leakproof, but others are not, and we should not just assume that they are. Hence, adjust contain_leaked_vars to verify the leakproofness of the referenced function explicitly. I didn't add a regression test because it would need to depend on some particular comparison function being leaky, and that's a moving target, per discussion. This has been wrong all along, so back-patch to supported branches. Discussion: https://postgr.es/m/31042.1546194242@sss.pgh.pa.us
-
Peter Eisentraut authored
Author: Christoph Berg <myon@debian.org>
Discussion: https://www.postgresql.org/message-id/flat/20170406223103.ixihdedf6d6d4kbk@alap3.anarazel.de/
-
Tom Lane authored
It's important for link commands to list *.o input files before -l switches for libraries, as library code may not get pulled into the link unless referenced by an earlier command-line entry. This is certainly necessary for static libraries (.a style). Apparently on some platforms it is also necessary for shared libraries, as reported by Donald Dong.
We often put -l switches for within-tree libraries into LDFLAGS, meaning that link commands that list *.o files after LDFLAGS are hazardous. Most of our link commands got this right, but a few did not. In particular, places that relied on gmake's default implicit link rule failed, because that puts LDFLAGS first. Fix that by overriding the built-in rule with our own.
The implicit link rules in src/makefiles/Makefile.* for single-.o-file shared libraries mostly got this wrong too, so fix them. I also changed the link rules for the backend and a couple of other places for consistency, even though they are not (currently) at risk because they aren't adding any -l switches to LDFLAGS.
Arguably, the real problem here is that we're abusing LDFLAGS by putting -l switches in it and we should stop doing that. But changing that would be quite invasive, so I'm not eager to do so.
Perhaps this is a candidate for back-patching, but so far it seems that problems can only be exhibited in test code we don't normally build, and at least some of the problems are new in HEAD anyway. So I'll refrain for now.
Donald Dong and Tom Lane
Discussion: https://postgr.es/m/CAKABAquXn-BF-vBeRZxhzvPyfMqgGuc74p8BmQZyCFDpyROBJQ@mail.gmail.com
-
Bruce Momjian authored
Backpatch-through: certain files through 9.4
-
Peter Eisentraut authored
This makes it easier to add new tests that are specific to Unicode features. The files were previously in KOI8-R. Discussion: https://www.postgresql.org/message-id/8506.1545111362@sss.pgh.pa.us
-
- 01 Jan, 2019 2 commits
-
Michael Paquier authored
This removes a portion of infrastructure introduced by fe0a0b59 to allow compilation of Postgres in environments where no strong random source is available, meaning that there is no linking to OpenSSL and no /dev/urandom (Windows having its own CryptoAPI). No systems shipped this century lack /dev/urandom, and the buildfarm is actually not testing this switch at all, so just remove it. This simplifies particularly some backend code which included a fallback implementation using shared memory, and removes a set of alternate regression output files from pgcrypto.
Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20181230063219.GG608@paquier.xyz
-
Michael Paquier authored
fe0a0b59, which added a stronger random source to Postgres, introduced a thinko when creating a padding message which gets encrypted for Elgamal. The padding message cannot contain zeros, which are replaced by random bytes. However, if pg_strong_random() failed, the message could end up being treated as correctly formed for encryption while still containing zeros.
Author: Tom Lane
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/20186.1546188423@sss.pgh.pa.us
Backpatch-through: 10
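A minimal standalone sketch of the intended behavior (the helper names are invented; pgcrypto's real code uses pg_strong_random() and its own buffers): zero bytes are redrawn, and a failure of the random source is propagated instead of leaving zeros in place.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Placeholder for a strong random source such as pg_strong_random();
 * returns false when no strong source is available. */
static bool
get_strong_random(unsigned char *buf, size_t len)
{
	FILE	   *f = fopen("/dev/urandom", "rb");
	size_t		got;

	if (f == NULL)
		return false;
	got = fread(buf, 1, len, f);
	fclose(f);
	return got == len;
}

/*
 * Fill an Elgamal-style padding buffer with non-zero random bytes.
 * The important part is that a failure of the random source is reported
 * to the caller rather than leaving zero bytes behind.
 */
static bool
fill_nonzero_padding(unsigned char *pad, size_t len)
{
	size_t		filled = 0;

	while (filled < len)
	{
		unsigned char b;

		if (!get_strong_random(&b, 1))
			return false;		/* don't treat a partly-zero buffer as done */
		if (b != 0)
			pad[filled++] = b;	/* zeros are rejected and redrawn */
	}
	return true;
}
```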
-
- 31 Dec, 2018 1 commit
-
Michael Paquier authored
The function name pg_stop_backup() has been included for ages in some log messages when stopping the backup, which is confusing for base backups taken with the replication protocol, because this function is never called there. Some other comments and messages in this area are improved while at it. The new wording is based on input and suggestions from several people, all listed below.
Author: Michael Paquier
Reviewed-by: Peter Eisentraut, Álvaro Herrera, Tom Lane
Discussion: https://postgr.es/m/20181221040510.GA12599@paquier.xyz
-