- 04 Jan, 2019 2 commits
-
-
Peter Eisentraut authored
Python 2 is still supported. Author: Hugh Ranalli <hugh@whtc.ca> Discussion: https://www.postgresql.org/message-id/CAAhbUMNyZ+PhNr_mQ=G161K0-hvbq13Tz2is9M3WK+yX9cQOCw@mail.gmail.com
-
Tom Lane authored
Instead of running a SQL script to create the standard conversion functions and pg_conversion entries, put those entries into the initial data in postgres.bki. This shaves a few percent off the runtime of initdb, and also allows accurate comments to be attached to the conversion functions; the previous script labeled them with machine-generated comments that were not quite right for multi-purpose conversion functions. Also, we can get rid of the duplicative Makefile and MSVC perl implementations of the generation code for that SQL script. A functional change is that these pg_proc and pg_conversion entries are now "pinned" by initdb. Leaving them unpinned was perhaps a good thing back while the conversions feature was under development, but there seems no valid reason for it now. Also, the conversion functions are now marked as immutable, where before they were volatile by virtue of lacking any explicit specification. That seems like it was just an oversight. To avoid using magic constants in pg_conversion.dat, extend genbki.pl to allow encoding names to be converted, much as it does for language, access method, etc names. John Naylor Discussion: https://postgr.es/m/CAJVSVGWtUqxpfAaxS88vEGvi+jKzWZb2EStu5io-UPc4p9rSJg@mail.gmail.com
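A quick way to observe the new volatility marking is a catalog query; this is a minimal sketch (it only assumes the standard pg_conversion and pg_proc columns), showing that conversion functions now report 'i' (immutable) rather than 'v' (volatile):

```sql
-- List a few built-in conversions with the volatility of their
-- conversion functions; after this change provolatile is 'i'.
SELECT c.conname, p.proname, p.provolatile
FROM pg_conversion c
     JOIN pg_proc p ON p.oid = c.conproc
ORDER BY c.conname
LIMIT 5;
```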
-
- 03 Jan, 2019 2 commits
-
-
Tom Lane authored
This patch teaches genbki.pl to replace pg_language names by OIDs in much the same way as it already does for pg_am names etc, and converts pg_proc.dat to use such symbolic references in the prolang column. Aside from getting rid of a few more magic numbers in the initial catalog data, this means that Gen_fmgrtab.pl no longer needs to read pg_language.dat, since it doesn't have to know the OID of the "internal" language; now it's just looking for the string "internal". No need for a catversion bump, since the contents of postgres.bki don't actually change at all. John Naylor Discussion: https://postgr.es/m/CAJVSVGWtUqxpfAaxS88vEGvi+jKzWZb2EStu5io-UPc4p9rSJg@mail.gmail.com
-
Tom Lane authored
This patch changes the rule for whether or not a tuple seen by ANALYZE should be included in its sample. When we last touched this logic, in commit 51e1445f, we weren't thinking very hard about tuples being UPDATEd by a long-running concurrent transaction. In such a case, we might see the pre-image as either LIVE or DELETE_IN_PROGRESS depending on timing; and we might see the post-image not at all, or as INSERT_IN_PROGRESS. Since the existing code will not sample either DELETE_IN_PROGRESS or INSERT_IN_PROGRESS tuples, this leads to concurrently-updated rows being omitted from the sample entirely. That's not very helpful, and it's especially the wrong thing if the concurrent transaction ends up rolling back. The right thing seems to be to sample DELETE_IN_PROGRESS rows just as if they were live. This makes the "sample it" and "count it" decisions the same, which seems good for consistency. It's clearly the right thing if the concurrent transaction ends up rolling back; in effect, we are sampling as though IN_PROGRESS transactions haven't happened yet. Also, this combination of choices ensures maximum robustness against the different combinations of whether and in which state we might see the pre- and post-images of an update. It's slightly annoying that we end up recording immediately-out-of-date stats in the case where the transaction does commit, but on the other hand the stats are fine for columns that didn't change in the update. And the alternative of sampling INSERT_IN_PROGRESS rows instead seems like a bad idea, because then the sampling would be inconsistent with the way rows are counted for the stats report. Per report from Mark Chambers; thanks to Jeff Janes for diagnosing what was happening. Back-patch to all supported versions. Discussion: https://postgr.es/m/CAFh58O_Myr6G3tcH3gcGrF-=OExB08PJdWZcSBcEcovaiPsrHA@mail.gmail.com
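A hedged two-session sketch of the scenario described, using a hypothetical table t; exact sampling outcomes still depend on timing and are not guaranteed:

```sql
-- Session 1: a long-running transaction updates rows but does not commit.
BEGIN;
UPDATE t SET val = val + 1;   -- pre-images become DELETE_IN_PROGRESS
-- (transaction left open)

-- Session 2: ANALYZE runs concurrently.  With this change the
-- pre-image rows are sampled as if they were live, so the statistics
-- remain reasonable even if session 1 later rolls back.
ANALYZE t;
```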
-
- 02 Jan, 2019 5 commits
-
-
Tom Lane authored
MinMaxExpr invokes the btree comparison function for its input datatype, so it's only leakproof if that function is. Many such functions are indeed leakproof, but others are not, and we should not just assume that they are. Hence, adjust contain_leaked_vars to verify the leakproofness of the referenced function explicitly. I didn't add a regression test because it would need to depend on some particular comparison function being leaky, and that's a moving target, per discussion. This has been wrong all along, so back-patch to supported branches. Discussion: https://postgr.es/m/31042.1546194242@sss.pgh.pa.us
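As an illustration, a hedged catalog query that looks up the btree comparison function (support function 1) for a type's default opclass and shows whether it is marked leakproof; this is roughly the property contain_leaked_vars now has to verify:

```sql
-- Btree support function 1 is the comparison function.  For text it is
-- bttextcmp, which is not leakproof, so GREATEST/LEAST on text cannot
-- be treated as leakproof either; for int4 the comparison is leakproof.
SELECT opc.opcintype::regtype AS type,
       p.proname,
       p.proleakproof
FROM pg_opclass opc
     JOIN pg_am am ON am.oid = opc.opcmethod
     JOIN pg_amproc ap ON ap.amprocfamily = opc.opcfamily
                      AND ap.amproclefttype = opc.opcintype
                      AND ap.amprocrighttype = opc.opcintype
     JOIN pg_proc p ON p.oid = ap.amproc
WHERE am.amname = 'btree'
  AND opc.opcdefault
  AND ap.amprocnum = 1
  AND opc.opcintype IN ('text'::regtype, 'int4'::regtype);
```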
-
Peter Eisentraut authored
Author: Christoph Berg <myon@debian.org> Discussion: https://www.postgresql.org/message-id/flat/20170406223103.ixihdedf6d6d4kbk@alap3.anarazel.de/
-
Tom Lane authored
It's important for link commands to list *.o input files before -l switches for libraries, as library code may not get pulled into the link unless referenced by an earlier command-line entry. This is certainly necessary for static libraries (.a style). Apparently on some platforms it is also necessary for shared libraries, as reported by Donald Dong. We often put -l switches for within-tree libraries into LDFLAGS, meaning that link commands that list *.o files after LDFLAGS are hazardous. Most of our link commands got this right, but a few did not. In particular, places that relied on gmake's default implicit link rule failed, because that puts LDFLAGS first. Fix that by overriding the built-in rule with our own. The implicit link rules in src/makefiles/Makefile.* for single-.o-file shared libraries mostly got this wrong too, so fix them. I also changed the link rules for the backend and a couple of other places for consistency, even though they are not (currently) at risk because they aren't adding any -l switches to LDFLAGS. Arguably, the real problem here is that we're abusing LDFLAGS by putting -l switches in it and we should stop doing that. But changing that would be quite invasive, so I'm not eager to do so. Perhaps this is a candidate for back-patching, but so far it seems that problems can only be exhibited in test code we don't normally build, and at least some of the problems are new in HEAD anyway. So I'll refrain for now. Donald Dong and Tom Lane Discussion: https://postgr.es/m/CAKABAquXn-BF-vBeRZxhzvPyfMqgGuc74p8BmQZyCFDpyROBJQ@mail.gmail.com
-
Bruce Momjian authored
Backpatch-through: certain files through 9.4
-
Peter Eisentraut authored
This makes it easier to add new tests that are specific to Unicode features. The files were previously in KOI8-R. Discussion: https://www.postgresql.org/message-id/8506.1545111362@sss.pgh.pa.us
-
- 01 Jan, 2019 2 commits
-
-
Michael Paquier authored
This removes a portion of infrastructure introduced by fe0a0b59 to allow compilation of Postgres in environments where no strong random source is available, meaning that there is no linking to OpenSSL and no /dev/urandom (Windows having its own CryptoAPI). No systems shipped this century lack /dev/urandom, and the buildfarm is actually not testing this switch at all, so just remove it. This simplifies particularly some backend code which included a fallback implementation using shared memory, and removes a set of alternate regression output files from pgcrypto. Author: Michael Paquier Reviewed-by: Tom Lane Discussion: https://postgr.es/m/20181230063219.GG608@paquier.xyz
-
Michael Paquier authored
fe0a0b59, which added a stronger random source in Postgres, introduced a thinko when creating a padding message to be encrypted for Elgamal. The padding message cannot contain zeros, which are replaced with random bytes. However, if pg_strong_random() failed, the message could end up being treated as correctly formed for encryption even though it still contained zeros. Author: Tom Lane Reviewed-by: Michael Paquier Discussion: https://postgr.es/m/20186.1546188423@sss.pgh.pa.us Backpatch-through: 10
-
- 31 Dec, 2018 7 commits
-
-
Michael Paquier authored
The function name pg_stop_backup() has been included for ages in some log messages when stopping the backup, which is confusing for base backups taken with the replication protocol because this function is never called. Some other comments and messages in this area are improved while at it. The new wording is based on input and suggestions from several people, all listed below. Author: Michael Paquier Reviewed-by: Peter Eisentraut, Álvaro Herrera, Tom Lane Discussion: https://postgr.es/m/20181221040510.GA12599@paquier.xyz
-
Noah Misch authored
This closes a race condition in "make -j check-world"; the symptom was EEXIST errors. Back-patch to v10, before which parallel check-world had worse problems. Discussion: https://postgr.es/m/20181224221601.GA3227827@rfd.leadboat.com
-
Noah Misch authored
We already redirected other temp-install stderr and all temp-install stdout in this way. Back-patch to v10, like the next commit. Discussion: https://postgr.es/m/20181224221601.GA3227827@rfd.leadboat.com
-
Noah Misch authored
Detect it the way pg_ctl's wait_for_postmaster() does. When pg_regress spawned a postmaster that failed startup, we were detecting that only with "pg_regress: postmaster did not respond within 60 seconds". Back-patch to 9.4 (all supported versions). Reviewed by Tom Lane. Discussion: https://postgr.es/m/20181231172922.GA199150@gust.leadboat.com
-
Tom Lane authored
Mark pg_lsn and oidvector comparison functions as leakproof. Per discussion, these clearly are leakproof so we might as well mark them so. On the other hand, remove leakproof markings from name comparison functions other than equal/not-equal. Now that these depend on varstr_cmp, they can't be considered leakproof if text comparison isn't. (This was my error in commit 586b98fd.) While at it, add some opr_sanity queries to catch cases where related functions do not have the same volatility and leakproof markings. This would clearly be bogus for commutator or negator pairs. In the domain of btree comparison functions, we do have some exceptions, because text equality is leakproof but inequality comparisons are not. That's odd on first glance but is reasonable (for now anyway) given the much greater complexity of the inequality code paths. Discussion: https://postgr.es/m/20181231172551.GA206480@gust.leadboat.com
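A hedged sketch of the kind of cross-check the new opr_sanity queries perform: commutator pairs whose underlying functions disagree on volatility or leakproofness are suspect. (The real regression queries also cover negator pairs and btree operator families.)

```sql
-- Operators and their commutators whose implementation functions
-- disagree on volatility or leakproofness; ideally this returns no rows.
SELECT o.oprname, p1.proname, p2.proname
FROM pg_operator o
     JOIN pg_operator c  ON c.oid = o.oprcom
     JOIN pg_proc     p1 ON p1.oid = o.oprcode
     JOIN pg_proc     p2 ON p2.oid = c.oprcode
WHERE p1.provolatile <> p2.provolatile
   OR p1.proleakproof <> p2.proleakproof;
```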
-
Alvaro Herrera authored
In commit 8b08f7d4 I added member relationId to IndexStmt struct. I'm now not sure why; DefineIndex doesn't need it, since the relation OID is passed as a separate argument anyway. Remove it. Also remove a redundant assignment to the relationId argument (it wasn't redundant when added by commit e093dcdd, but should have been removed in commit 5f173040), and use relationId instead of stmt->relation when locking the relation in the second phase of CREATE INDEX CONCURRENTLY, which is not only confusing but also means we resolve the name twice for no reason.
-
Tom Lane authored
While rearranging code in tidpath.c, I overlooked the fact that we ought to check restriction_is_securely_promotable when trying to use a join clause as a TID qual. Since tideq itself is leakproof, this wouldn't really allow any interesting leak AFAICT, but it still seems like we had better check it. For consistency with the corresponding logic in indxpath.c, also check rinfo->pseudoconstant. I'm not sure right now that it's possible for that to be set in a join clause, but if it were, a match couldn't be made anyway.
-
- 30 Dec, 2018 5 commits
-
-
Peter Eisentraut authored
This catches up with the recent renaming of all user-facing mentions of "xlog" to "wal". Discussion: https://www.postgresql.org/message-id/flat/20181129084708.GA9562%40msg.credativ.de
-
Tom Lane authored
Up to now we've not worried much about joins where the join key is a relation's CTID column, reasoning that storing a table's CTIDs in some other table would be pretty useless. However, there are use-cases for this sort of query involving self-joins, so that argument doesn't really hold water. With larger relations, a merge or hash join is desirable. We had a btree opclass for type "tid", allowing merge joins on CTID, but no hash opclass so that hash joins weren't possible. Add the missing infrastructure. This also potentially enables hash aggregation on "tid", though the use-cases for that aren't too clear. Discussion: https://postgr.es/m/1853.1545453106@sss.pgh.pa.us
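A hedged example, using a hypothetical table big_tab, of the kind of CTID self-join that the new hash opclass makes eligible for a hash join:

```sql
-- Hypothetical self-join keyed on CTID; with the new hash opclass the
-- planner may now choose a hash join instead of being limited to a
-- merge join or nestloop.
CREATE TABLE big_tab AS
    SELECT g AS id, md5(g::text) AS payload
    FROM generate_series(1, 100000) AS g;

EXPLAIN (COSTS OFF)
SELECT a.id, b.payload
FROM big_tab a
     JOIN big_tab b ON a.ctid = b.ctid;
```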
-
Tom Lane authored
Up to now we've not worried much about joins where the join key is a relation's CTID column, reasoning that storing a table's CTIDs in some other table would be pretty useless. However, there are use-cases for this sort of query involving self-joins, so that argument doesn't really hold water. This patch allows generating plans for joins on CTID that use a nestloop with inner TidScan, similar to what we might do with an index on the join column. This is the most efficient way to join when the outer side of the nestloop is expected to yield relatively few rows. This change requires upgrading tidpath.c and the generated TidPaths to work with RestrictInfos instead of bare qual clauses, but that's long-postponed technical debt anyway. Discussion: https://postgr.es/m/17443.1545435266@sss.pgh.pa.us
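A hedged sketch, reusing the hypothetical big_tab above, of a join shape that can now produce a nestloop with an inner TID scan when the outer side yields few rows:

```sql
-- When the outer side is expected to produce only a handful of rows,
-- the planner can now drive a nestloop whose inner side is a TID scan
-- using the CTID join clause.
EXPLAIN (COSTS OFF)
SELECT b.*
FROM (SELECT ctid FROM big_tab WHERE id < 10) AS a
     JOIN big_tab b ON b.ctid = a.ctid;
```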
-
Tom Lane authored
Doing this requires an assumption that the invoked btree comparison function is immutable. We could check that explicitly, but in other places such as contain_mutable_functions we just assume that it's true, so we may as well do likewise here. (If the comparison function's behavior isn't immutable, the sort order in indexes built with it would be unstable, so it seems certainly wrong for it not to be so.) Vik Fearing Discussion: https://postgr.es/m/c6e8504c-4c43-35fa-6c8f-3c0b80a912cc@2ndquadrant.com
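Assuming "this" refers to constant-folding MinMaxExpr (GREATEST/LEAST) when all arguments are constants, a hedged illustration of the planner-visible effect:

```sql
-- With all-constant arguments the expression can be folded at plan time,
-- because the type's btree comparison function is assumed immutable;
-- the verbose plan output then shows the folded constant rather than
-- the GREATEST() call.
EXPLAIN (COSTS OFF, VERBOSE)
SELECT GREATEST(1, 2, 3);
```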
-
Michael Paquier authored
PL/pgSQL provides a set of callbacks which can be used for extra instrumentation of functions written in this language, called at function setup, begin and end, as well as statement begin and end. When calling a routine, a trigger, or an event trigger, statement callbacks were not called for the top-level statement block, leading to inconsistent handling compared to the other statements. This inconsistency can potentially complicate extensions doing instrumentation work on top of PL/pgSQL, so this commit makes sure that all statement blocks, including the top-level one, go through the corresponding callbacks. Author: Pavel Stehule Reviewed-by: Michael Paquier Discussion: https://postgr.es/m/CAFj8pRArEANsaUjo5in9_iQt0vKf9ecwDAmsdN_EBwL13ps12A@mail.gmail.com
-
- 29 Dec, 2018 4 commits
-
-
Tom Lane authored
Previously we just set the seed based on process ID and start timestamp. Both those values are directly available within the session, and can be found out or guessed by other users too, making the session's series of random(3) values fairly predictable. Up to now, our backend-internal uses of random(3) haven't seemed security-critical, but commit 88bdbd3f added one that potentially is: when using log_statement_sample_rate, a user might be able to predict which of his SQL statements will get logged. To improve this situation, upgrade the per-process seed initialization method to use pg_strong_random() if available, greatly reducing the predictability of the initial seed value. This adds a few tens of microseconds to process start time, but since backend startup time is at least a couple of milliseconds, that seems an acceptable price. This means that pg_strong_random() needs to be able to run without reliance on any backend infrastructure, since it will be invoked before any of that is up. It was safe for that already, but adjust comments and #include commands to make it clearer. Discussion: https://postgr.es/m/3859.1545849900@sss.pgh.pa.us
-
Tom Lane authored
Previously, the SQL random() function depended on libc's random(3), and setseed() invoked srandom(3). This results in interference between these functions and backend-internal uses of random(3). We'd never paid too much mind to that, but in the wake of commit 88bdbd3f which added log_statement_sample_rate, the interference arguably has a security consequence: if log_statement_sample_rate is active then an unprivileged user could probably control which if any of his SQL commands get logged, by issuing setseed() at the right times. That seems bad. To fix this reliably, we need random() and setseed() to use their own private random state variable. Standard random(3) isn't amenable to such usage, so let's switch to pg_erand48(). It's hard to say whether that's more or less "random" than any particular platform's version of random(3), but it does have a wider seed value and a longer period than are required by POSIX, so we can hope that this isn't a big downgrade. Also, we should now have uniform behavior of random() across platforms, which is worth something. While at it, upgrade the per-process seed initialization method to use pg_strong_random() if available, greatly reducing the predictability of the initial seed value. (I'll separately do something similar for the internal uses of random().) In addition to forestalling the possible security problem, this has a benefit in the other direction, which is that we can now document setseed() as guaranteeing a reproducible sequence of random() values. Previously, because of the possibility of internal calls of random(3), we could not promise any such thing. Discussion: https://postgr.es/m/3859.1545849900@sss.pgh.pa.us
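A hedged example of the behavior now documented as guaranteed: after setseed(), the sequence of random() values is reproducible within the session.

```sql
-- With random()/setseed() backed by a private pg_erand48() state,
-- re-seeding reproduces the same sequence regardless of any internal
-- backend uses of random numbers.
SELECT setseed(0.42);
SELECT random() FROM generate_series(1, 3);

SELECT setseed(0.42);
SELECT random() FROM generate_series(1, 3);   -- same three values again
```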
-
Peter Eisentraut authored
-
Peter Eisentraut authored
psql_error() already handles that itself.
-
- 28 Dec, 2018 7 commits
-
-
Michael Paquier authored
This was incorrectly referring to --walsegsize, and its description is rewritten in a clearer way. Author: Ian Barwick, Tom Lane Reviewed-by: Álvaro Herrera, Michael Paquier Discussion: https://postgr.es/m/08534fc6-119a-c498-254e-d5acc4e6bf85@2ndquadrant.com
-
Tom Lane authored
Get rid of the multiplier and addend variables in favor of hard-wired constants. Do the multiply-and-add using uint64 arithmetic, rather than manually combining several narrower multiplications and additions. Make _dorand48 return the full-width new random value, and have its callers use that directly (after suitable masking) rather than reconstructing what they need from the unsigned short[] representation. On my machine, this is good for a nearly factor-of-2 speedup of pg_erand48(), probably mostly from needing just one call of ldexp() rather than three. The wins for the other functions are smaller but measurable. While none of the existing call sites are really performance-critical, a cycle saved is a cycle earned; and besides the machine code is smaller this way (at least on x86_64). Patch by me, but the original idea to optimize this by switching to int64 arithmetic is from Fabien Coelho. Discussion: https://postgr.es/m/1551.1546018192@sss.pgh.pa.us
-
Tom Lane authored
POSIX specifies that jrand48() returns a signed 32-bit value (in the range [-2^31, 2^31)), but our code was returning an unsigned 32-bit value (in the range [0, 2^32)). This doesn't actually matter to any existing call site, because they all cast the "long" result to int32 or uint32; but it will doubtless bite somebody in the future. To fix, cast the arithmetic result to int32 explicitly before the compiler widens it to long (if widening is needed). While at it, upgrade this file's far-short-of-project-style comments. Had there been some peer pressure to document pg_jrand48() properly, maybe this thinko wouldn't have gotten committed to begin with. Backpatch to v10 where pg_jrand48() was added, just in case somebody back-patches a fix that uses it and depends on the standard behavior. Discussion: https://postgr.es/m/17235.1545951602@sss.pgh.pa.us
-
Alvaro Herrera authored
-
Alvaro Herrera authored
The original was hard to follow and failed to comply with DRY principle. Discussion: https://postgr.es/m/20181206222221.g5witbsklvqthjll@alvherre.pgsql
-
Michael Paquier authored
The documentation of ON DELETE and ON UPDATE uses the term "action", which is also used on the ALTER TABLE page for other purposes. This commit renames the term to "referential_action", which is more consistent with the SQL specification. The new term is now used on the documentation of both CREATE TABLE and ALTER TABLE for consistency. Reported-by: Brigitte Blanc-Lafay Author: Lætitia Avrot Reviewed-by: Álvaro Herrera Discussion: https://postgr.es/m/CAB_COdiHEVVs0uB+uYCjjYUwQ4YFFekppq+Xqv6qAM8+cd42gA@mail.gmail.com
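For illustration, a hedged sketch of where the renamed referential_action term appears in the syntax:

```sql
-- ON DELETE and ON UPDATE each take a referential_action:
-- NO ACTION | RESTRICT | CASCADE | SET NULL | SET DEFAULT
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child (
    parent_id int REFERENCES parent (id)
        ON DELETE CASCADE          -- referential_action
        ON UPDATE RESTRICT         -- referential_action
);
```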
-
Alexander Korotkov authored
The isolation test suite for GIN predicate locking was criticized for being too slow, especially under Valgrind. This commit is intended to accelerate it. Tests are simplified in the following ways. 1) The amount of data is reduced. We're now close to the minimal amount of data which produces at least one posting tree and at least two pages of entry tree. 2) Three isolation tests are merged into one. 3) Only one tuple is queried from the posting tree. So index locking is the same, but tuple locks are not propagated to a relation lock. It is also faster. 4) The test cases themselves are simplified. Now each test case runs just one INSERT and one SELECT involving GIN, which either conflict or not. Discussion: https://postgr.es/m/20181204000740.ok2q53nvkftwu43a%40alap3.anarazel.de Reported-by: Andres Freund Tested-by: Andrew Dunstan Author: Alexander Korotkov Backpatch-through: 11
-
- 27 Dec, 2018 4 commits
-
-
Peter Eisentraut authored
Remove IndexIsValid(), IndexIsReady(), IndexIsLive() in favor of accessing the index structure directly. These macros haven't been used consistently, and the original reason of maintaining source compatibility with PostgreSQL 9.2 is gone. Discussion: https://www.postgresql.org/message-id/flat/d419147c-09d4-6196-5d9d-0234b230880a%402ndquadrant.com
-
Peter Eisentraut authored
-
Alexander Korotkov authored
According to the README, we acquire predicate locks on entry tree leaves and posting tree roots. However, when ginFindLeafPage() is going to lock a leaf in exclusive mode, it checks the root for conflicts regardless of whether it's an entry or posting tree. Assuming that we never place a predicate lock on the entry tree root (excluding the corner case when the root is a leaf), this check is redundant. This commit removes it. Now root conflict checking is controlled by a separate argument of ginFindLeafPage(). Discussion: https://postgr.es/m/CAPpHfdv7rrDyy%3DMgsaK-L9kk0AH7az0B-mdC3w3p0FSb9uoyEg%40mail.gmail.com Author: Alexander Korotkov Backpatch-through: 11
-
Michael Paquier authored
Inheritance trees can include temporary tables if the parent is permanent, which makes it possible to have multiple temporary children from different sessions. Trying to issue a TRUNCATE on the parent in this scenario caused a failure, so, as with any other query, just ignore such cases, which makes TRUNCATE work transparently. This makes truncation behave like any other DML query working on the parent table that needs to operate on its children. A set of isolation tests is added to cover basic cases. Reported-by: Zhou Digoal Author: Amit Langote, Michael Paquier Discussion: https://postgr.es/m/15565-ce67a48d0244436a@postgresql.org Backpatch-through: 9.4
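A hedged two-session sketch of the case that now works transparently:

```sql
-- Session 1: a permanent parent table.
CREATE TABLE parent (a int);

-- Session 2: a temporary child belonging to that other session.
CREATE TEMP TABLE child () INHERITS (parent);

-- Session 1 again: previously this failed; now the other session's
-- temporary child is simply ignored and the parent is truncated.
TRUNCATE parent;
```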
-
- 26 Dec, 2018 2 commits
-
-
Tom Lane authored
While it seems OK to not be concerned about fsync() failure for a pre-existing signal file, it's not OK to not even check for open() failure. This at least causes complaints from static analyzers, and I think on some platforms passing -1 to fsync() or close() might trigger assertion-type failures. Also add (void) casts to make clear that we're ignoring fsync's result intentionally. Oversights in commit 2dedf4d9, noted by Coverity.
-
Tom Lane authored
I made a frontend fprintf() format use %m, forgetting that that's only safe in HEAD not the back branches; prior to 96bf88d5 and d6c55de1, it would work on glibc platforms but not elsewhere. Revert to using %s ... strerror(errno) as the code did before. We could have left HEAD as-is, but for code consistency across branches, I chose to apply this patch there too. Per Coverity and a few buildfarm members.
-