- 25 Nov, 2015 4 commits
-
-
Tom Lane authored
PQhost() can return NULL in non-error situations, namely when a Unix-socket connection has been selected by default. That behavior is a tad debatable perhaps, but for the moment we should make sure that psql copes with it. Unfortunately, do_connect() failed to: it could pass a NULL pointer to strcmp(), resulting in crashes on most platforms.

This was reported as a security issue by ChenQin of Topsec Security Team, but the consensus of the security list is that it's just a garden-variety bug with no security implications.

For paranoia's sake, I made the keep_password test not trust PQuser or PQport either, even though I believe those will never return NULL given a valid PGconn.

Back-patch to all supported branches.
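A minimal sketch of the kind of defensive comparison this fix implies (a hypothetical helper, not the actual do_connect() code): treat a NULL returned by a libpq accessor as distinct from any non-NULL string, so strcmp() is never handed a NULL pointer.

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical helper: compare two connection parameters that may be NULL
 * (as PQhost() can be for a default Unix-socket connection).  Never passes
 * NULL to strcmp(); two NULLs compare as equal. */
static bool
param_equal(const char *a, const char *b)
{
    if (a == NULL || b == NULL)
        return a == b;
    return strcmp(a, b) == 0;
}
```

With a helper like this, the keep_password decision can compare the PQhost(), PQuser(), and PQport() results without risking a crash when any of them is NULL.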
-
Tom Lane authored
The integer overflow situation in div_var_fast() is a great deal more complicated than the pre-existing comments would suggest. Moreover, the comments were also flat out incorrect as to the precise statement of the maxdiv loop invariant. Upon clarifying that, it becomes apparent that the way in which we updated maxdiv after a carry propagation pass was overly slow, complex, and conservative: we can just reset it to one, which is much easier and also reduces the number of times carry propagation occurs. Fix that and improve the relevant comments.

Since this is mostly a comment fix, with only a rather marginal performance boost, no need for back-patch.

Tom Lane and Dean Rasheed
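A hedged sketch of why resetting maxdiv to one is valid (illustrative code, not the actual numeric.c implementation; the NBASE value and digit layout are assumptions): after a full carry-propagation pass, every working digit is back in the range [0, NBASE), so the invariant "each digit is bounded in magnitude by maxdiv * (NBASE - 1)" holds again with maxdiv = 1.

```c
#define NBASE 10000             /* assumed base of the working digits */

/* Normalize a most-significant-first working array so every entry lies in
 * [0, NBASE); afterwards the maxdiv bound trivially holds with maxdiv = 1. */
static void
propagate_carries(int *div, int ndigits, int *maxdiv)
{
    int     carry = 0;

    for (int i = ndigits - 1; i > 0; i--)
    {
        int     newdig = div[i] + carry;

        if (newdig < 0)
        {
            carry = -((-newdig - 1) / NBASE) - 1;
            newdig -= carry * NBASE;
        }
        else if (newdig >= NBASE)
        {
            carry = newdig / NBASE;
            newdig -= carry * NBASE;
        }
        else
            carry = 0;
        div[i] = newdig;
    }
    div[0] += carry;            /* leading digit absorbs any leftover carry */

    *maxdiv = 1;                /* every lower digit is now within [0, NBASE) */
}
```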
-
Teodor Sigaev authored
-
Teodor Sigaev authored
Now pageinspect can show data stored in the heap tuple. Nikolay Shaplov
-
- 24 Nov, 2015 2 commits
-
-
Bruce Momjian authored
Also fix getErrorText() to return the right error string on failure. This behavior now matches that of other operating systems.

Report by Noah Misch

Backpatch through 9.1
-
Peter Eisentraut authored
-
- 23 Nov, 2015 2 commits
-
-
Teodor Sigaev authored
Per http://www.postgresql.org/message-id/flat/564C4CE6.9000509@postgrespro.ru

Pavel Luzanov <p.luzanov@postgrespro.ru>
-
Peter Eisentraut authored
from Michael Paquier
-
- 22 Nov, 2015 1 commit
-
-
Tom Lane authored
The POSIX standard for tar headers requires archive member sizes to be printed in octal with at most 11 digits, limiting the representable file size to 8GB. However, GNU tar and apparently most other modern tars support a convention in which oversized values can be stored in base-256, allowing any practical file to be a tar member. Adopt this convention to remove two limitations:

* pg_dump with -Ft output format failed if the contents of any one table exceeded 8GB.
* pg_basebackup failed if the data directory contained any file exceeding 8GB. (This would be a fatal problem for installations configured with a table segment size of 8GB or more, and it has also been seen to fail when large core dump files exist in the data directory.)

File sizes under 8GB are still printed in octal, so that no compatibility issues are created except in cases that would have failed entirely before.

In addition, this patch fixes several bugs in the same area:

* In 9.3 and later, we'd defined tarCreateHeader's file-size argument as size_t, which meant that on 32-bit machines it would write a corrupt tar header for file sizes between 4GB and 8GB, even though no error was raised. This broke both "pg_dump -Ft" and pg_basebackup for such cases.
* pg_restore from a tar archive would fail on tables of size between 4GB and 8GB, on machines where either "size_t" or "unsigned long" is 32 bits. This happened even with an archive file not affected by the previous bug.
* pg_basebackup would fail if there were files of size between 4GB and 8GB, even on 64-bit machines.
* In 9.3 and later, "pg_basebackup -Ft" failed entirely, for any file size, on 64-bit big-endian machines.

In view of these potential data-loss bugs, back-patch to all supported branches, even though removal of the documented 8GB limit might otherwise be considered a new feature rather than a bug fix.
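A minimal sketch of the encoding convention described above (a hypothetical helper, not PostgreSQL's tarCreateHeader()): sizes below 8GB still go out as 11 octal digits, while larger sizes set the high bit of the field's first byte and store the value in big-endian base-256 in the remaining bytes.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: fill the 12-byte "size" field of a tar header.
 * Values below 8GB (the largest value 11 octal digits can hold) use the
 * POSIX octal format; larger values use the GNU base-256 extension. */
static void
tar_write_size(char field[12], uint64_t size)
{
    if (size < ((uint64_t) 1 << 33))
    {
        /* 11 octal digits plus the terminating NUL exactly fill the field */
        snprintf(field, 12, "%011llo", (unsigned long long) size);
    }
    else
    {
        field[0] = (char) 0x80;         /* flags base-256 encoding */
        for (int i = 11; i >= 1; i--)
        {
            field[i] = (char) (size & 0xFF);
            size >>= 8;
        }
    }
}
```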
-
- 20 Nov, 2015 2 commits
-
-
Tom Lane authored
The previous way of reconstructing check constraints was to do a separate "ALTER TABLE ONLY tab ADD CONSTRAINT" for each table in an inheritance hierarchy. However, that way has no hope of reconstructing the check constraints' own inheritance properties correctly, as pointed out in bug #13779 from Jan Dirk Zijlstra. What we should do instead is to do a regular "ALTER TABLE", allowing recursion, at the topmost table that has a particular constraint, and then suppress the work queue entries for inherited instances of the constraint.

Annoyingly, we'd tried to fix this behavior before, in commit 5ed6546c, but we failed to notice that it wasn't reconstructing the pg_constraint field values correctly.

As long as I'm touching pg_get_constraintdef_worker anyway, tweak it to always schema-qualify the target table name; this seems like useful backup to the protections installed by commit 5f173040.

In HEAD/9.5, get rid of get_constraint_relation_oids, which is now unused. (I could alternatively have modified it to also return conislocal, but that seemed like a pretty single-purpose API, so let's not pretend it has some other use.) It's unused in the back branches as well, but I left it in place just in case some third-party code has decided to use it.

In HEAD/9.5, also rename pg_get_constraintdef_string to pg_get_constraintdef_command, as the previous name did nothing to explain what that entry point did differently from others (and its comment was equally useless). Again, that change doesn't seem like material for back-patching.

I did a bit of re-pgindenting in tablecmds.c in HEAD/9.5, as well.

Otherwise, back-patch to all supported branches.
-
Robert Haas authored
The previous coding attempted to destroy the DSM in this case, but child nodes might have stored data there and still be holding onto pointers to it. So don't do that. Also, free the reader array instead of leaking it.

Extracted from two different patch versions both by Amit Kapila.
-
- 19 Nov, 2015 12 commits
-
-
Robert Haas authored
Amit Langote
-
Robert Haas authored
Reported by Andres Freund.
-
Tom Lane authored
Some versions of Perl export a macro named HS_KEY. This creates a conflict in contrib/hstore_plperl against hstore's macro of the same name. The most future-proof solution seems to be to rename our macro; I chose HSTORE_KEY. For consistency, rename HS_VAL and related macros similarly. Back-patch to 9.5. contrib/hstore_plperl doesn't exist before that so there is no need to worry about the conflict in older releases. Per reports from Marco Atzeri and Mike Blackwell.
-
Peter Eisentraut authored
-
Tom Lane authored
Silly mistake in my commit 09cecdf2, reported by Erik Rijkers. The fact that the buildfarm didn't find this implies that we are not testing Perl builds that lack MULTIPLICITY, which is a bit disturbing from a coverage standpoint. Until today I'd have said nobody cared about such configurations anymore; but maybe not.
-
Robert Haas authored
This was already true for CREATE EXTENSION, but historically has not been true for other object types. Therefore, this is a backward incompatibility. Per discussion on pgsql-hackers, everyone seems to agree that the new behavior is better. Marti Raudsepp, reviewed by Haribabu Kommi and myself
-
Andrew Dunstan authored
-
Andrew Dunstan authored
-
Andrew Dunstan authored
The target is now named 'bincheck' rather than 'tapcheck' so that it reflects what is checked instead of the test mechanism. Some of the logic is improved, making it easier to add further sets of TAP-based tests in future. Also, the environment setting logic is improved.

As discussed on -hackers a couple of months ago.
-
Robert Haas authored
KaiGai Kohei
-
Andres Freund authored
At least one of the names was, due to a function renaming late in the development of ON CONFLICT, wrong. Since including function names in error messages is against the message style guide anyway, remove them from the messages.

Discussion: CAM3SWZT8paz=usgMVHm0XOETkQvzjRtAUthATnmaHQQY0obnGw@mail.gmail.com
Backpatch: 9.5, where ON CONFLICT was introduced
-
Andres Freund authored
Author: Peter Geoghegan and Andres Freund
Discussion: CAM3SWZScpWzQ-7EJC77vwqzZ1GO8GNmURQ1QqDQ3wRn7AbW1Cg@mail.gmail.com
Backpatch: 9.5, where ON CONFLICT was introduced
-
- 18 Nov, 2015 4 commits
-
-
Tom Lane authored
Per buildfarm member anchovy, 2.6.0 exists in the wild now. Hopefully it works with Postgres; if not, we'll have to do something about that, but in any case claiming it's "too old" is pretty silly.
-
Robert Haas authored
Remote expressions now also matter to make_foreignscan().

Noted by Etsuro Fujita.
-
Robert Haas authored
Amit Kapila, per design ideas from me.
-
Robert Haas authored
When I wrote this code originally, the intention was to recompute the remapinfo only when the tupledesc changes. This presumably only happens once per query, but I copied the design pattern from other DestReceivers. However, due to a silly oversight on my part, tqueue->tupledesc never got set, leading to recomputation for every tuple. This should improve the performance of parallel scans that return a significant number of tuples. Report by Amit Kapila; patch by me, reviewed by him.
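An illustrative sketch of the caching pattern described above (the struct and helper names are hypothetical, not the actual tqueue.c API): remap information is rebuilt only when the tuple descriptor actually changes, and the bug was simply forgetting to remember the descriptor after building it.

```c
#include <stddef.h>

/* Hypothetical receiver state for forwarding tuples to a queue. */
typedef struct TupleForwarder
{
    const void *tupledesc;      /* descriptor the cached remap info was built for */
    void       *remapinfo;      /* cached remapping state */
} TupleForwarder;

extern void *build_remap_info(const void *tupledesc);   /* assumed helper */

static void
forward_tuple(TupleForwarder *dest, const void *tupledesc, const void *tuple)
{
    if (dest->tupledesc != tupledesc)
    {
        dest->remapinfo = build_remap_info(tupledesc);
        dest->tupledesc = tupledesc;    /* remembering this was the missing step:
                                         * without it, remap info is rebuilt for
                                         * every single tuple */
    }
    /* ... send "tuple" to the queue using dest->remapinfo ... */
    (void) tuple;
}
```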
-
- 17 Nov, 2015 5 commits
-
-
Tom Lane authored
div_var_fast() postpones propagating carries in the same way as mul_var(), so it has the same corner-case overflow risk we fixed in 246693e5, namely that the size of the carries has to be accounted for when setting the threshold for executing a carry propagation step. We've not devised a test case illustrating the brokenness, but the required fix seems clear enough. Like the previous fix, back-patch to all active branches. Dean Rasheed
-
Peter Eisentraut authored
from Euler Taveira
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
- 16 Nov, 2015 2 commits
-
-
Robert Haas authored
Prior to commit 0709b7ee, access to variables within a spinlock-protected critical section had to be done through a volatile pointer, but that should no longer be necessary. Review by Andres Freund
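A hedged before/after sketch of the idiom change (the shared struct is illustrative, not actual backend code; the spinlock macros are the real ones from storage/spin.h):

```c
#include "postgres.h"
#include "storage/spin.h"       /* SpinLockAcquire(), SpinLockRelease(), slock_t */

/* Hypothetical shared structure protected by a spinlock. */
typedef struct SharedCounter
{
    slock_t     mutex;
    int         count;
} SharedCounter;

/* Old idiom: shared fields had to be touched through a volatile pointer so
 * the compiler could not cache them across the lock calls. */
static void
increment_old_style(SharedCounter *shared)
{
    volatile SharedCounter *vshared = shared;

    SpinLockAcquire(&vshared->mutex);
    vshared->count++;
    SpinLockRelease(&vshared->mutex);
}

/* New idiom: since commit 0709b7ee the spinlock primitives act as compiler
 * barriers, so a plain pointer inside the critical section is safe. */
static void
increment_new_style(SharedCounter *shared)
{
    SpinLockAcquire(&shared->mutex);
    shared->count++;
    SpinLockRelease(&shared->mutex);
}
```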
-
Tom Lane authored
Since commit 11e13185, ruleutils.c has attempted to ensure that each RTE in a query or plan tree has a unique alias name. However, the code that was added for this could be quite slow, even as bad as O(N^3) if N identical RTE names must be replaced, as noted by Jeff Janes. Improve matters by building a transient hash table within set_rtable_names. The hash table in itself reduces the cost of detecting a duplicate from O(N) to O(1), and we can save another factor of N by storing the number of de-duplicated names already created for each entry, so that we don't have to re-try names already created. This way is probably a bit slower overall for small range tables, but almost by definition, such cases should not be a performance problem.

In principle the same problem applies to the column-name-de-duplication code; but in practice that seems to be less of a problem, first because N is limited since we don't support extremely wide tables, and second because duplicate column names within an RTE are fairly rare, so that in practice the cost is more like O(N^2) not O(N^3). It would be very much messier to fix the column-name code, so for now I've left that alone.

An independent problem in the same area was that the de-duplication code paid no attention to the identifier length limit, and would happily produce identifiers that were longer than NAMEDATALEN and wouldn't be unique after truncation to NAMEDATALEN. This could result in dump/reload failures, or perhaps even views that silently behaved differently than before. We can fix that by shortening the base name as needed. Fix it for both the relation and column name cases.

In passing, check for interrupts in set_rtable_names, just in case it's still slow enough to be an issue.

Back-patch to 9.3 where this code was introduced.
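A hedged sketch of the de-duplication strategy (the lookup and counter helpers are hypothetical stand-ins for the transient hash table in set_rtable_names; the real code uses the backend's dynahash facility): resume numbering from a stored per-name counter instead of re-trying "_1", "_2", ... from scratch for each duplicate, and shorten the base so that "name_NN" stays within, and remains unique under, the NAMEDATALEN limit.

```c
#include <stdbool.h>
#include <stdio.h>

#define NAMEDATALEN 64          /* PostgreSQL identifier limit, including the NUL */

/* Hypothetical hash-table accessors. */
extern bool alias_is_taken(const char *name);
extern int *dedup_counter_for(const char *refname);    /* persistent per-name counter */

static void
choose_unique_alias(const char *refname, char *result, size_t resultsize)
{
    int        *counter = dedup_counter_for(refname);

    /* First try the name itself, clipped to the identifier limit. */
    snprintf(result, resultsize, "%.*s", NAMEDATALEN - 1, refname);
    while (alias_is_taken(result))
    {
        char        suffix[16];
        int         suffixlen;

        /* Resume from the stored counter rather than restarting at 1. */
        suffixlen = snprintf(suffix, sizeof(suffix), "_%d", ++(*counter));
        /* Shorten the base so base + suffix still fits in NAMEDATALEN - 1 bytes. */
        snprintf(result, resultsize, "%.*s%s",
                 NAMEDATALEN - 1 - suffixlen, refname, suffix);
    }
}
```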
-
- 15 Nov, 2015 2 commits
-
-
Robert Haas authored
Amit Kapila
-
Tom Lane authored
Normally ruleutils prints a whole-row Var as "foo.*". We already knew that that doesn't work at top level of a SELECT list, because the parser would treat the "*" as a directive to expand the reference into separate columns, not a whole-row Var. However, Joshua Yanovski points out in bug #13776 that the same thing happens at top level of a ROW() construct; and some nosing around in the parser shows that the same is true in VALUES(). Hence, apply the same workaround already devised for the SELECT-list case, namely to add a forced cast to the appropriate rowtype in these cases. (The alternative of just printing "foo" was rejected because it is difficult to avoid ambiguity against plain columns named "foo".) Back-patch to all supported branches.
-
- 14 Nov, 2015 3 commits
-
-
Tom Lane authored
Set the "rscales" for intermediate-result calculations to ensure that suitable numbers of significant digits are maintained throughout. The previous coding hadn't thought this through in any detail, and as a result could deliver results with many inaccurate digits, or in the worst cases even fail with divide-by-zero errors as a result of losing all nonzero digits of intermediate results. In exp_var(), get rid entirely of the logic that separated the calculation into integer and fractional parts: that was neither accurate nor particularly fast. The existing range-reduction method of dividing by 2^n can be applied across the full input range instead of only 0..1, as long as we are careful to set an appropriate rscale for each step. Also fix the logic in mul_var() for shortening the calculation when the caller asks for fewer output digits than an exact calculation would require. This bug doesn't affect simple multiplications since that code path asks for an exact result, but it does contribute to accuracy issues in the transcendental math functions. In passing, improve performance of mul_var() a bit by forcing the shorter input to be on the left, thus reducing the number of iterations of the outer loop and probably also reducing the number of carry-propagation steps needed. This is arguably a bug fix, but in view of the lack of field complaints, it does not seem worth the risk of back-patching. Dean Rasheed
-
Bruce Momjian authored
Report by Greg Clough
-
Bruce Momjian authored
Previously, file copy failures were ignored on Windows due to an incorrect return value check.

Report by Manu Joye

Backpatch through 9.1
-
- 13 Nov, 2015 1 commit
-
-
Stephen Frost authored
The sepgsql docs included a comment that PG doesn't support RLS. That is only true for versions prior to 9.5. Update the docs for 9.5 and master to say that PG supports RLS but that sepgsql does not yet. Pointed out by Heikki. Back-patch to 9.5
-