- 19 Nov, 2015 8 commits
-
-
Tom Lane authored
Silly mistake in my commit 09cecdf2, reported by Erik Rijkers. The fact that the buildfarm didn't find this implies that we are not testing Perl builds that lack MULTIPLICITY, which is a bit disturbing from a coverage standpoint. Until today I'd have said nobody cared about such configurations anymore; but maybe not.
-
Robert Haas authored
This was already true for CREATE EXTENSION, but historically has not been true for other object types. Therefore, this is a backward incompatibility. Per discussion on pgsql-hackers, everyone seems to agree that the new behavior is better. Marti Raudsepp, reviewed by Haribabu Kommi and myself
-
Andrew Dunstan authored
-
Andrew Dunstan authored
-
Andrew Dunstan authored
The target is now named 'bincheck' rather than 'tapcheck' so that it reflects what is checked instead of the test mechanism. Some of the logic is improved, making it easier to add further sets of TAP-based tests in the future. Also, the environment setting logic is improved. As discussed on -hackers a couple of months ago.
-
Robert Haas authored
KaiGai Kohei
-
Andres Freund authored
At least one of the names was wrong, due to a function renaming late in the development of ON CONFLICT. Since including function names in error messages is against the message style guide anyway, remove them from the messages. Discussion: CAM3SWZT8paz=usgMVHm0XOETkQvzjRtAUthATnmaHQQY0obnGw@mail.gmail.com Backpatch: 9.5, where ON CONFLICT was introduced
-
Andres Freund authored
Author: Peter Geoghegan and Andres Freund Discussion: CAM3SWZScpWzQ-7EJC77vwqzZ1GO8GNmURQ1QqDQ3wRn7AbW1Cg@mail.gmail.com Backpatch: 9.5, where ON CONFLICT was introduced
-
- 18 Nov, 2015 4 commits
-
-
Tom Lane authored
Per buildfarm member anchovy, 2.6.0 exists in the wild now. Hopefully it works with Postgres; if not, we'll have to do something about that, but in any case claiming it's "too old" is pretty silly.
-
Robert Haas authored
Remote expressions now also matter to make_foreignscan(). Noted by Etsuro Fujita.
-
Robert Haas authored
Amit Kapila, per design ideas from me.
-
Robert Haas authored
When I wrote this code originally, the intention was to recompute the remapinfo only when the tupledesc changes. This presumably only happens once per query, but I copied the design pattern from other DestReceivers. However, due to a silly oversight on my part, tqueue->tupledesc never got set, leading to recomputation for every tuple. This should improve the performance of parallel scans that return a significant number of tuples. Report by Amit Kapila; patch by me, reviewed by him.
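The intended caching pattern, sketched with stand-in types (the real code is in tqueue.c; build_remapinfo is a hypothetical helper standing in for the actual remap-info construction):

    typedef struct TupleDescData *TupleDesc;    /* stand-in declaration */

    typedef struct
    {
        TupleDesc   tupledesc;      /* last descriptor seen; was never set */
        void       *remapinfo;      /* derived data, expensive to rebuild */
    } TQueueState;

    extern void *build_remapinfo(TupleDesc tupledesc);     /* hypothetical */

    static void
    receive_tuple(TQueueState *tq, TupleDesc tupledesc)
    {
        if (tq->tupledesc != tupledesc)
        {
            /* Descriptor changed -- normally once per query. */
            tq->remapinfo = build_remapinfo(tupledesc);
            tq->tupledesc = tupledesc;  /* the assignment the bug omitted */
        }
        /* ... remap and enqueue the tuple using tq->remapinfo ... */
    }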
-
- 17 Nov, 2015 5 commits
-
-
Tom Lane authored
div_var_fast() postpones propagating carries in the same way as mul_var(), so it has the same corner-case overflow risk we fixed in 246693e5, namely that the size of the carries has to be accounted for when setting the threshold for executing a carry propagation step. We've not devised a test case illustrating the brokenness, but the required fix seems clear enough. Like the previous fix, back-patch to all active branches. Dean Rasheed
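A hedged sketch of the threshold in question (simplified; the real deferred-carry code is in numeric.c): digit products accumulate in int-sized slots, carries are propagated only when another addition might overflow, and the threshold must reserve headroom for the carries themselves:

    #include <limits.h>
    #include <stdbool.h>

    #define NBASE 10000

    /*
     * maxdig tracks the largest value any accumulator slot could hold so
     * far.  The 246693e5 fix, now applied to div_var_fast() as well:
     * compare against INT_MAX - INT_MAX/NBASE rather than plain INT_MAX,
     * leaving room for the carries that propagation itself injects.
     */
    static bool
    must_propagate_carries(int maxdig)
    {
        return maxdig > (INT_MAX - INT_MAX / NBASE) / (NBASE - 1);
    }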
-
Peter Eisentraut authored
from Euler Taveira
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
- 16 Nov, 2015 2 commits
-
-
Robert Haas authored
Prior to commit 0709b7ee, access to variables within a spinlock-protected critical section had to be done through a volatile pointer, but that should no longer be necessary. Review by Andres Freund
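The before/after idiom, sketched (struct and function names here are illustrative, not from any specific call site):

    #include "storage/spin.h"       /* SpinLockAcquire/Release, slock_t */

    typedef struct
    {
        slock_t     mutex;
        long        counter;
    } SharedCounter;

    /* Before 0709b7ee: the critical section had to go through a
     * volatile-qualified pointer to prevent compiler reordering. */
    void
    bump_old(SharedCounter *sc)
    {
        volatile SharedCounter *vsc = sc;

        SpinLockAcquire(&vsc->mutex);
        vsc->counter++;
        SpinLockRelease(&vsc->mutex);
    }

    /* After: the spinlock primitives act as compiler barriers, so plain
     * pointers are safe inside the critical section. */
    void
    bump_new(SharedCounter *sc)
    {
        SpinLockAcquire(&sc->mutex);
        sc->counter++;
        SpinLockRelease(&sc->mutex);
    }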
-
Tom Lane authored
Since commit 11e13185, ruleutils.c has attempted to ensure that each RTE in a query or plan tree has a unique alias name. However, the code that was added for this could be quite slow, even as bad as O(N^3) if N identical RTE names must be replaced, as noted by Jeff Janes. Improve matters by building a transient hash table within set_rtable_names. The hash table in itself reduces the cost of detecting a duplicate from O(N) to O(1), and we can save another factor of N by storing the number of de-duplicated names already created for each entry, so that we don't have to re-try names already created. This way is probably a bit slower overall for small range tables, but almost by definition, such cases should not be a performance problem.

In principle the same problem applies to the column-name-de-duplication code; but in practice that seems to be less of a problem, first because N is limited since we don't support extremely wide tables, and second because duplicate column names within an RTE are fairly rare, so that in practice the cost is more like O(N^2) not O(N^3). It would be very much messier to fix the column-name code, so for now I've left that alone.

An independent problem in the same area was that the de-duplication code paid no attention to the identifier length limit, and would happily produce identifiers that were longer than NAMEDATALEN and wouldn't be unique after truncation to NAMEDATALEN. This could result in dump/reload failures, or perhaps even views that silently behaved differently than before. We can fix that by shortening the base name as needed. Fix it for both the relation and column name cases.

In passing, check for interrupts in set_rtable_names, just in case it's still slow enough to be an issue.

Back-patch to 9.3 where this code was introduced.
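A sketch of the scheme, assuming dynahash (utils/hsearch.h); names and details are illustrative rather than the actual set_rtable_names() code:

    #include "postgres.h"
    #include "utils/hsearch.h"

    typedef struct
    {
        char        refname[NAMEDATALEN];   /* hash key: proposed name */
        int         counter;                /* de-dup names minted so far */
    } NameHashEntry;

    /* Return a unique, NAMEDATALEN-safe version of refname. */
    static char *
    make_unique_name(HTAB *names_hash, const char *refname)
    {
        NameHashEntry *entry;
        bool        found;
        char        base[NAMEDATALEN];

        /* Truncate first, so uniqueness survives NAMEDATALEN clipping. */
        strlcpy(base, refname, sizeof(base));

        entry = (NameHashEntry *) hash_search(names_hash, base,
                                              HASH_ENTER, &found);
        if (!found)
        {
            entry->counter = 0;     /* first use: name is fine as-is */
            return pstrdup(base);
        }

        /* Duplicate: resume numbering where this entry left off, so each
         * new name costs O(1) instead of re-trying name_1, name_2, ... */
        for (;;)
        {
            char        modname[NAMEDATALEN];

            entry->counter++;
            snprintf(modname, sizeof(modname), "%.*s_%d",
                     (int) sizeof(modname) - 13, base, entry->counter);
            if (!hash_search(names_hash, modname, HASH_FIND, NULL))
            {
                NameHashEntry *newent = (NameHashEntry *)
                    hash_search(names_hash, modname, HASH_ENTER, NULL);

                newent->counter = 0;    /* reserve the minted name */
                return pstrdup(modname);
            }
        }
    }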
-
- 15 Nov, 2015 2 commits
-
-
Robert Haas authored
Amit Kapila
-
Tom Lane authored
Normally ruleutils prints a whole-row Var as "foo.*". We already knew that that doesn't work at top level of a SELECT list, because the parser would treat the "*" as a directive to expand the reference into separate columns, not a whole-row Var. However, Joshua Yanovski points out in bug #13776 that the same thing happens at top level of a ROW() construct; and some nosing around in the parser shows that the same is true in VALUES(). Hence, apply the same workaround already devised for the SELECT-list case, namely to add a forced cast to the appropriate rowtype in these cases. (The alternative of just printing "foo" was rejected because it is difficult to avoid ambiguity against plain columns named "foo".) Back-patch to all supported branches.
-
- 14 Nov, 2015 3 commits
-
-
Tom Lane authored
Set the "rscales" for intermediate-result calculations to ensure that suitable numbers of significant digits are maintained throughout. The previous coding hadn't thought this through in any detail, and as a result could deliver results with many inaccurate digits, or in the worst cases even fail with divide-by-zero errors as a result of losing all nonzero digits of intermediate results. In exp_var(), get rid entirely of the logic that separated the calculation into integer and fractional parts: that was neither accurate nor particularly fast. The existing range-reduction method of dividing by 2^n can be applied across the full input range instead of only 0..1, as long as we are careful to set an appropriate rscale for each step. Also fix the logic in mul_var() for shortening the calculation when the caller asks for fewer output digits than an exact calculation would require. This bug doesn't affect simple multiplications since that code path asks for an exact result, but it does contribute to accuracy issues in the transcendental math functions. In passing, improve performance of mul_var() a bit by forcing the shorter input to be on the left, thus reducing the number of iterations of the outer loop and probably also reducing the number of carry-propagation steps needed. This is arguably a bug fix, but in view of the lack of field complaints, it does not seem worth the risk of back-patching. Dean Rasheed
-
Bruce Momjian authored
Report by Greg Clough
-
Bruce Momjian authored
Previously, file copy failures were ignored on Windows due to an incorrect return value check. Report by Manu Joye. Backpatch through 9.1.
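The shape of the corrected check, sketched (the function name is illustrative): CopyFile() reports failure by returning zero, so its result must not be read like a Unix-style zero-on-success code.

    #ifdef WIN32
    #include <windows.h>

    static int
    copy_file_win32(const char *src, const char *dst)
    {
        /* CopyFileA() returns nonzero on success, 0 on failure. */
        if (CopyFileA(src, dst, TRUE) == 0)
            return -1;          /* caller reports GetLastError() */
        return 0;
    }
    #endif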
-
- 13 Nov, 2015 1 commit
-
-
Stephen Frost authored
The sepgsql docs included a comment that PG doesn't support RLS. That is only true for versions prior to 9.5. Update the docs for 9.5 and master to say that PG supports RLS but that sepgsql does not yet. Pointed out by Heikki. Back-patch to 9.5
-
- 12 Nov, 2015 7 commits
-
-
Alvaro Herrera authored
Having the script prompt for passwords over and over was a preexisting problem when it processed multiple databases or when it processed multiple analyze stages, but the parallel mode introduced in commit a1792320 made it worse. Fix the annoyance by keeping a copy of the password used by the first connection that requires one. Since users can (currently) only have a single password, there's no need for more complex arrangements (such as remembering one password per database).

Per bug #13741 reported by Eric Brown. Patch authored and cross-reviewed by Haribabu Kommi and Michael Paquier, slightly tweaked by Álvaro Herrera. Discussion: http://www.postgresql.org/message-id/20151027193919.931.54948@wrigleys.postgresql.org Backpatch to 9.5, where parallel vacuumdb was introduced.
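A minimal sketch of the reuse logic, under assumed helper names (do_connect is hypothetical, and the simple_prompt signature shown is an assumption based on the 9.5-era API):

    static char *saved_password = NULL;     /* password from the first prompt */

    static PGconn *
    connect_one_database(const char *dbname)
    {
        PGconn     *conn;
        char       *password = saved_password;

        for (;;)
        {
            conn = do_connect(dbname, password);    /* hypothetical helper */
            if (PQstatus(conn) == CONNECTION_BAD &&
                PQconnectionNeedsPassword(conn) && password == NULL)
            {
                PQfinish(conn);
                password = simple_prompt("Password: ", 100, false);
                continue;
            }
            break;      /* connected, or failed for some other reason */
        }

        /* Remember the password so later per-database or per-stage
         * connections don't prompt again. */
        if (password != NULL)
            saved_password = password;
        return conn;
    }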
-
Robert Haas authored
This makes it significantly easier to identify these lwlocks in LWLOCK_STATS or Trace_lwlocks output. It's also arguably better from a modularity standpoint, since lwlock.c no longer needs to know anything about the LWLock needs of the higher-level SLRU facility. Ildus Kurbangaliev, reviewed by Álvaro Herrera and by me.
-
Tom Lane authored
In commit 210eb9b7 I centralized libpq's logic for closing down the backend communication socket, and made the new pqDropConnection routine always reset the I/O buffers to empty. Many of the call sites previously had not had such code, and while that amounted to an oversight in some cases, there was one place where it was intentional and necessary *not* to flush the input buffer: pqReadData should never cause that to happen, since we probably still want to process whatever data we read.

This is the true cause of the problem Robert was attempting to fix in c3e7c24a, namely that libpq no longer reported the backend's final ERROR message before reporting "server closed the connection unexpectedly". But that only accidentally fixed it, by invoking parseInput before the input buffer got flushed; and very likely there are timing scenarios where we'd still lose the message before processing it.

To fix, pass a flag to pqDropConnection to tell it whether to flush the input buffer or not. On review I think flushing is actually correct for every other call site.

Back-patch to 9.3 where the problem was introduced. In HEAD, also improve the comments added by c3e7c24a.
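Sketched, the new contract looks roughly like this (the buffer-reset line is illustrative; pqDropConnection lives in fe-connect.c):

    void
    pqDropConnection(PGconn *conn, bool flushInput)
    {
        /* ... close the socket, discard SSL state, etc. ... */

        if (flushInput)
        {
            /* Safe for every call site except pqReadData, which must keep
             * whatever the backend already sent -- often its final ERROR
             * message -- so parseInput can still deliver it. */
            conn->inStart = conn->inCursor = conn->inEnd = 0;
        }
    }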
-
Robert Haas authored
At least since the introduction of Hot Standby, the backend has sometimes sent fatal errors even when no client query was in progress, assuming that the client would receive it. However, pqHandleSendFailure was not in sync with this assumption, and only tried to catch notices and notifies. Add a parseInput call to the loop there to fix. Andres Freund suggested the fix. Comments are by me. Reviewed by Michael Paquier.
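The repaired loop is essentially this (simplified sketch; pqHandleSendFailure and parseInput both live in fe-exec.c):

    static void
    pqHandleSendFailure(PGconn *conn)
    {
        /* Read until EOF or until no more data is available. */
        while (pqReadData(conn) > 0)
        {
            /* Previously only NOTICE/NOTIFY messages were absorbed here;
             * now parseInput also picks up a FATAL error the backend sent
             * while no query was in progress (e.g. under Hot Standby). */
            parseInput(conn);
        }
    }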
-
Robert Haas authored
Letting backends continue to run if the postmaster has exited prevents PostgreSQL from being restarted, which in many environments is catastrophic. Worse, if some other backend crashes, we no longer have any protection against shared memory corruption. So, arrange for them to exit instead. We don't want to expend many cycles on this, but including postmaster death in the set of things that we wait for when a backend is idle seems cheap enough. Rajeev Rastogi and Robert Haas
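Roughly the shape of the new idle wait (a simplified fragment, not the exact patched code; sock stands for the client connection's socket):

    int         rc;

    rc = WaitLatchOrSocket(MyLatch,
                           WL_LATCH_SET | WL_SOCKET_READABLE |
                           WL_POSTMASTER_DEATH,
                           sock, -1L);
    if (rc & WL_POSTMASTER_DEATH)
        ereport(FATAL,
                (errcode(ERRCODE_ADMIN_SHUTDOWN),
                 errmsg("terminating connection due to unexpected postmaster exit")));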
-
Robert Haas authored
Commit a0d9f6e4 added this support for all other plan node types; this fills in the gap. Since TextOutCustomScan complicates this and is pretty well useless, remove it. KaiGai Kohei, with some modifications by me.
-
Tom Lane authored
Also fill in the previously empty "major enhancements" list. YMMV as to which items should make the cut, but it's past time we had something more than a placeholder here. (I meant to get this done before beta2 was wrapped, but got distracted by PDF build problems. Better late than never.)
-
- 11 Nov, 2015 6 commits
-
-
Tom Lane authored
These were discussed in three different sections of the manual, which unsurprisingly had diverged over time; and the descriptions of individual variables lacked stylistic consistency even within each section (and frequently weren't in very good English anyway). Clean up the mess, and remove some of the redundant information in hopes that future additions will be less likely to re-introduce inconsistency. For instance I see no need for maintenance.sgml to include its very own list of all the autovacuum storage parameters, especially since that list was already incomplete.
-
Tom Lane authored
In commit 5d1ff6bd I added some logic to relcache.c to try to ensure that the regression tests would fail if we made a mistake about which relations belong in the relcache init files. I'm quite sure I tested that, but I must have done so only for the non-shared-catalog case, because a report from Adam Brightwell showed that the regression tests still pass just fine if we bollix the shared-catalog init file in the way this code was supposed to catch. The reason is that that file gets loaded before we do client authentication, so the WARNING is not sent to the client, only to the postmaster log, where it's far too easily missed.

The least Rube Goldbergian answer to this is to put an Assert(false) after the elog(WARNING). That will certainly get developers' attention, while not breaking production builds' ability to recover from corner cases with similar symptoms.

Since this is only of interest to developers, there seems no need for a back-patch, even though the previous commit went into all branches.
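The pattern, sketched (variable and constant names are illustrative; the real check is in relcache.c's init file load path):

    if (nailed_rels != NUM_CRITICAL_SHARED_RELS)
    {
        elog(WARNING, "found %d nailed shared rels, expected %d",
             nailed_rels, NUM_CRITICAL_SHARED_RELS);
        /* Make assert-enabled (developer) builds fail hard, while
         * production builds merely log and rebuild the init file. */
        Assert(false);
        return false;           /* force rebuild of the init file */
    }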
-
Robert Haas authored
Add a new flag, consider_parallel, to each RelOptInfo, indicating whether a plan for that relation could conceivably be run inside of a parallel worker. Right now, we're pretty conservative: for example, it might be possible to defer applying a parallel-restricted qual in a worker, and later do it in the leader, but right now we just don't try to parallelize access to that relation. That's probably the right decision in most cases, anyway.

Using the new flag, generate parallel sequential scan plans for plain baserels, meaning that we now have parallel sequential scan in PostgreSQL. The logic here is pretty unsophisticated right now: the costing model probably isn't right in detail, and we can't push joins beneath Gather nodes, so the number of plans that can actually benefit from this is pretty limited right now. Lots more work is needed. Nevertheless, it seems time to enable this functionality so that all this code can actually be tested easily by users and developers.

Note that, if you wish to test this functionality, it will be necessary to set max_parallel_degree to a value greater than the default of 0. Once a few more loose ends have been tidied up here, we might want to consider changing the default value of this GUC, but I'm leaving it alone for now.

Along the way, fix a bug in cost_gather: the previous coding thought that a Gather node's transfer overhead should be costed on the basis of the relation size rather than the number of tuples that actually need to be passed off to the leader.

Patch by me, reviewed in earlier versions by Amit Kapila.
-
Robert Haas authored
In addition, this patch fills in a number of missing bits and pieces in the parallel infrastructure. Paths and plans now have a parallel_aware flag indicating whether whatever parallel-aware logic they have should be engaged. It is believed that we will need this flag for a number of path/plan types, not just sequential scans, which is why the flag is generic rather than part of the SeqScan structures specifically. Also, execParallel.c now gives parallel nodes a chance to initialize their PlanState nodes from the DSM during parallel worker startup. Amit Kapila, with a fair amount of adjustment by me. Review of previous patch versions by Haribabu Kommi and others.
-
Robert Haas authored
I dunno how commit 3bd909b2 missed this, but it evidently did.
-
Tom Lane authored
Commit 8457d0be introduced an example which, while not incorrect, failed to exhibit the behavior it meant to describe, as a result of omitting an E'' prefix that needed to be there. Noticed and fixed by Peter Geoghegan. I (tgl) failed to resist the temptation to wordsmith nearby text a bit while at it.
-
- 10 Nov, 2015 2 commits
-
-
Tom Lane authored
Per buildfarm member pademelon.
-
Tom Lane authored
In commit a5ec86a7 I wrote a quick hack that reduced the number of TeX string pool entries created while converting our documentation to PDF form. That held the fort for awhile, but as of HEAD we're back up against the same limitation. It turns out that the original coding of \FlowObjectSetup actually results in *three* string pool entries being generated for every "flow object" (that is, potential cross-reference target) in the documentation, and my previous hack only got rid of one of them. With a little more care, we can reduce the string count to one per flow object plus one per actually-cross-referenced flow object (about 115000 + 5000 as of current HEAD); that should work until the documentation volume roughly doubles from where it is today.

As a not-incidental side benefit, this change also causes pdfjadetex to stop emitting unreferenced hyperlink anchors (bookmarks) into the PDF file. It had been making one willy-nilly for every flow object; now it's just one per actually-cross-referenced object. This results in close to a 2X savings in PDF file size. We will still want to run the output through "jpdftweak" to get it to be compressed; but we no longer need removal of unreferenced bookmarks, so we might be able to find a quicker tool for that step.

Although the failure only affects HEAD and US-format output at the moment, 9.5 cannot be more than a few pages short of failing likewise, so it will inevitably fail after a few rounds of minor-version release notes. I don't have a lot of faith that we'll never hit the limit in the older branches; and anyway it would be nice to get rid of jpdftweak across the board. Therefore, back-patch to all supported branches.
-