- 11 Dec, 2015 5 commits
-
-
Tom Lane authored
I originally modeled this data structure on SpecialJoinInfo, but after commit acfcd45c that looks like a pretty poor decision. All we really need is relid sets identifying laterally-referenced rels; and most of the time, what we want to know about includes indirect lateral references, a case the LateralJoinInfo data was unsuited to compute with any efficiency. The previous commit redefined RelOptInfo.lateral_relids as the transitive closure of lateral references, so that it easily supports checking indirect references. For the places where we really do want just direct references, add a new RelOptInfo field direct_lateral_relids, which is easily set up as a copy of lateral_relids before we perform the transitive closure calculation. Then we can just drop lateral_info_list and LateralJoinInfo and the supporting code. This makes the planner's handling of lateral references noticeably more efficient, and shorter too. Such a change can't be back-patched into stable branches for fear of breaking extensions that might be looking at the planner's data structures; but it seems not too late to push it into 9.5, so I've done so.
-
Stephen Frost authored
ALTER POLICY hadn't fully considered partial policy alteration (eg: change just the roles on the policy, or just change one of the expressions) when rebuilding the dependencies. Instead, it would happily remove all dependencies which existed for the policy and then only recreate the dependencies for the objects referred to in the specific ALTER POLICY command. Correct that by extracting and building the dependencies for all objects referenced by the policy, regardless of whether they were provided as part of the ALTER POLICY command or were already in place as part of the pre-existing policy.
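A hedged SQL sketch of the kind of partial alteration involved (policy, table, and role names are invented for illustration):

    -- A policy whose expressions and roles both carry recorded dependencies
    CREATE POLICY p_sel ON accounts FOR SELECT TO app_user
        USING (owner = current_user);

    -- Changing only the roles previously caused ALTER POLICY to rebuild
    -- dependencies only for what this command mentions, dropping the ones
    -- recorded for the unchanged USING expression
    ALTER POLICY p_sel ON accounts TO app_user, app_admin;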
-
Tom Lane authored
More fuzz testing by Andreas Seltenreich exposed that the planner did not cope well with chains of lateral references. If relation X references Y laterally, and Y references Z laterally, then we will have to scan X on the inside of a nestloop with Z, so for all intents and purposes X is laterally dependent on Z too. The planner did not understand this and would generate intermediate joins that could not be used. While that was usually harmless except for wasting some planning cycles, under the right circumstances it would lead to "failed to build any N-way joins" or "could not devise a query plan" planner failures. To fix that, convert the existing per-relation lateral_relids and lateral_referencers relid sets into their transitive closures; that is, they now show all relations on which a rel is directly or indirectly laterally dependent. This not only fixes the chained-reference problem but allows some of the relevant tests to be made substantially simpler and faster, since they can be reduced to simple bitmap manipulations instead of searches of the LateralJoinInfo list. Also, when a PlaceHolderVar that is due to be evaluated at a join contains lateral references, we should treat those references as indirect lateral dependencies of each of the join's base relations. This prevents us from trying to join any individual base relations to the lateral reference source before the join is formed, which again cannot work. Andreas' testing also exposed another oversight in the "dangerous PlaceHolderVar" test added in commit 85e5e222. Simply rejecting unsafe join paths in joinpath.c is insufficient, because in some cases we will end up rejecting *all* possible paths for a particular join, again leading to "could not devise a query plan" failures. The restriction has to be known also to join_is_legal and its cohort functions, so that they will not select a join for which that will happen. I chose to move the supporting logic into joinrels.c where the latter functions are. Back-patch to 9.3 where LATERAL support was introduced.
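An illustrative query shape for the chained references described above (table and column names invented): lx references ly laterally, ly references z laterally, so lx is indirectly laterally dependent on z and must end up on the inside of a nestloop with z:

    SELECT *
    FROM z,
         LATERAL (SELECT * FROM y WHERE y.z_id = z.id)  AS ly,
         LATERAL (SELECT * FROM x WHERE x.y_id = ly.id) AS lx;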
-
Alvaro Herrera authored
This module needs explicit initialization in order to replay WAL records in recovery, but we had broken this recently following changes to make other (stranger) scenarios work correctly. To fix, rework the initialization sequence so that it always takes place before WAL replay commences for both master and standby. I could have gone for a more localized fix that just added a "startup" call for the master server, but it seemed better to restructure the existing callers as well so that the whole thing made more sense. As a drawback, there is more control logic in xlog.c now than previously, but doing otherwise meant passing down the ControlFile flag, which seemed uglier as a whole. This also meant adding a check to not re-execute ActivateCommitTs if it had already been called. Reported by Fujii Masao. Backpatch to 9.5.
-
Peter Eisentraut authored
-
- 10 Dec, 2015 5 commits
-
-
Robert Haas authored
Complete "ALTER POLICY" with a policy name, as we do for DROP POLICY. And, complete "ALTER POLICY polname ON" with a table name that has such a policy, as we do for DROP POLICY, rather than with any table name at all. Masahiko Sawada
-
Robert Haas authored
Etsuro Fujita
-
Andres Freund authored
ExecOnConflictUpdate() passed t_ctid of the to-be-updated tuple to ExecUpdate(). That's problematic primarily for two reasons: First and foremost, t_ctid could point to a different tuple. Secondly, and that's what triggered the complaint by Stanislav, t_ctid is changed by heap_update() to point to the new tuple version. The behavior of AFTER UPDATE triggers was therefore broken, with NEW.* and OLD.* tuples spuriously identical within AFTER UPDATE triggers. To fix both issues, pass a pointer to t_self of an on-stack HeapTuple instead. Fixing this bug led to one change in the regression tests, which previously failed due to the first issue mentioned above. One might reasonably expect that test to fail, as it updates one row repeatedly within one INSERT ... ON CONFLICT statement. That is only possible if the second update is triggered via ON CONFLICT ... SET, ON CONFLICT ... WHERE, or by a WITH CHECK expression, as those are executed after ExecOnConflictUpdate() does a visibility check. That could easily be prohibited, but given it's allowed for plain UPDATEs and a rare corner case, it doesn't seem worthwhile. Reported-By: Stanislav Grozev Author: Andres Freund and Peter Geoghegan Discussion: CAA78GVqy1+LisN-8DygekD_Ldfy=BJLarSpjGhytOsgkpMavfQ@mail.gmail.com Backpatch: 9.5, where ON CONFLICT was introduced
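A minimal upsert of the affected shape (names invented); with the bug, an AFTER UPDATE trigger on t fired by the DO UPDATE path could see OLD.* and NEW.* as spuriously identical:

    CREATE TABLE t (k int PRIMARY KEY, v int);
    INSERT INTO t VALUES (1, 1);

    -- the conflicting insert takes the ExecOnConflictUpdate() -> ExecUpdate() path
    INSERT INTO t VALUES (1, 2)
    ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v;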
-
Andres Freund authored
At the end of crash recovery, unlogged relations are reset to the empty state, using their init fork as the template. The init fork is copied to the main fork without going through shared buffers. Unfortunately WAL replay so far has not necessarily flushed writes from shared buffers to disk at that point. In normal crash recovery, and before the introduction of 'fast promotions' in fd4ced52 / 9.3, the END_OF_RECOVERY checkpoint flushes the buffers out in time. But with fast promotions that's not the case anymore. To fix, force WAL writes targeting the init fork to be flushed immediately (using the new FlushOneBuffer() function). In 9.5+ that flush can centrally be triggered from the code dealing with restoring full page writes (XLogReadBufferForRedoExtended), in earlier releases that responsibility is in the hands of XLOG_HEAP_NEWPAGE's replay function. Backpatch to 9.1, even if this currently is only known to trigger in 9.3+. Flushing earlier is more robust, and it is advantageous to keep the branches similar. Typical symptoms of this bug are errors like 'ERROR: index "..." contains unexpected zero page at block 0' shortly after promoting a node. Reported-By: Thom Brown Author: Andres Freund and Michael Paquier Discussion: 20150326175024.GJ451@alap3.anarazel.de Backpatch: 9.1-
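For illustration only (object names invented), the kind of setup and symptom involved:

    CREATE UNLOGGED TABLE events (id int PRIMARY KEY, payload text);

    -- After crash recovery or a fast promotion, each init fork is copied over
    -- the corresponding main fork without going through shared buffers; if the
    -- init fork pages written during WAL replay were never flushed to disk,
    -- the copy can be stale, later surfacing as e.g.:
    --   ERROR:  index "events_pkey" contains unexpected zero page at block 0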
-
- 09 Dec, 2015 2 commits
-
-
Robert Haas authored
Peter Geoghegan
-
Robert Haas authored
The original parallel sequential scan commit included only very limited changes to the EXPLAIN output. Aggregated totals from all workers were displayed, but there was no way to see what each individual worker did or to distinguish the effort made by the workers from the effort made by the leader. Per a gripe by Thom Brown (and maybe others). Patch by me, reviewed by Amit Kapila.
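A hedged example of how the added detail is requested, assuming the per-worker statistics are shown under the VERBOSE option (table name invented):

    -- With a parallel plan, ANALYZE plus VERBOSE reports each worker's rows
    -- and timing separately instead of only the aggregated totals
    EXPLAIN (ANALYZE, VERBOSE)
    SELECT count(*) FROM measurements;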
-
- 08 Dec, 2015 5 commits
-
-
Kevin Grittner authored
The singly linked list of memory contexts could result in O(N^2) performance to free a set of contexts if they were not freed in reverse order of creation. In many cases the reverse order was used, but there were some significant exceptions that caused real-world performance problems. Rather than requiring all callers to care about the order in which contexts were freed, and hunting down and changing all existing cases where the wrong order was used, we add one pointer per memory context so that the implementation details are not so visible. Jan Wieck
-
Tom Lane authored
For unclear reasons, this function doesn't always read the expected data in some old Perl versions. Rewriting it to avoid use of ARGV seems to dodge the problem, and this version is clearer anyway if you ask me. In passing, also improve error message in adjacent append_to_file function.
-
Robert Haas authored
To support this, we must reconcile some historical anomalies in the behavior of -c. In particular, as a backward-incompatibility, -c no longer implies --no-psqlrc. Pavel Stehule (code) and Catalin Iacob (documentation). Review by Michael Paquier and myself. Proposed behavior per Tom Lane.
-
Robert Haas authored
Commit e7cb7ee1 provided basic infrastructure for allowing a foreign data wrapper or custom scan provider to replace a join of one or more tables with a scan. However, this infrastructure failed to take into account the need for possible EvalPlanQual rechecks, and ExecScanFetch would fail an assertion (or just overwrite memory) if such a check was attempted for a plan containing a pushed-down join. To fix, adjust the EPQ machinery to skip some processing steps when scanrelid == 0, making those the responsibility of the scan's recheck method, which also has the responsibility in this case of correctly populating the relevant slot. To allow foreign scans to gain control in the right place to make use of this new facility, add a new, optional RecheckForeignScan method. Also, allow a foreign scan to have a child plan, which can be used to correctly populate the slot (or perhaps for something else, but this is the only use currently envisioned). KaiGai Kohei, reviewed by Robert Haas, Etsuro Fujita, and Kyotaro Horiguchi.
-
- 07 Dec, 2015 4 commits
-
-
Tom Lane authored
While convincing myself that commit 7e19db0c would solve both of the problems recently reported by Andreas Seltenreich, I realized that add_paths_to_joinrel's handling of LATERAL restrictions could be made noticeably simpler and faster if we were to retain the minimum possible parameterization for each joinrel (that is, the set of relids supplying unsatisfied lateral references in it). We already retain that for baserels, in RelOptInfo.lateral_relids, so we can use that field for joinrels too. I re-pgindent'd the files touched here, which affects some unrelated comments. This is, I believe, just a minor optimization not a bug fix, so no back-patch.
-
Alvaro Herrera authored
Per note from Tom Lane
-
Tom Lane authored
It was possible for the planner to decide to join a LATERAL subquery to the outer side of an outer join before the outer join itself is completed. Normally that's fine because of the associativity rules, but it doesn't work if the subquery contains a lateral reference to the inner side of the outer join. In such a situation the outer join *must* be done first. join_is_legal() missed this consideration and would allow the join to be attempted, but the actual path-building code correctly decided that no valid join path could be made, sometimes leading to planner errors such as "failed to build any N-way joins". Per report from Andreas Seltenreich. Back-patch to 9.3 where LATERAL support was added.
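An illustrative query shape (names invented): the lateral subquery lc references b, the nullable side of the LEFT JOIN, so the outer join must be completed before lc can be joined to anything:

    SELECT *
    FROM a
         LEFT JOIN b ON b.a_id = a.id,
         LATERAL (SELECT * FROM c WHERE c.x = b.x) AS lc;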
-
Alvaro Herrera authored
Noted by Tom Lane:
- PostgresNode had a BEGIN block which created files, contrary to perlmod suggestions to do that only in INIT blocks.
- Assign ports randomly rather than starting from 90600.
Noted by Noah Misch:
- Change use of the no-longer-set PGPORT environment variable to $node->port.
- Don't start a server in the pg_controldata test.
- PostgresNode was reading the PID file incorrectly; test the right thing, and chomp the line we read from the PID file.
- Remove an unused $devnull variable.
- Use 'pg_ctl kill' instead of "kill" directly, for Windows portability.
- Make server log names more informative.
Author: Michael Paquier
-
- 06 Dec, 2015 1 commit
-
- 05 Dec, 2015 2 commits
-
-
Tom Lane authored
This way, existing .gitignore entries and makefile clean actions will automatically apply to the tempdir, should it survive a TAP test run (which can happen if the user control-C's out of the run, for example). Michael Paquier, per a complaint from me
-
Noah Misch authored
This should make Coverity deduce that plperl_call_perl_func() does not dereference NULL argtypes. Back-patch to 9.5, where the affected code was introduced. Michael Paquier
-
- 04 Dec, 2015 1 commit
-
-
Tom Lane authored
In commit 1ea0c73c I added a section to user-manag.sgml about how to drop roles that own objects; but as pointed out by Stephen Frost, I neglected that shared objects (databases or tablespaces) may need special treatment. Fix that. Back-patch to supported versions, like the previous patch.
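One possible sequence along the lines the documentation describes (role, database, and object names are placeholders); databases and tablespaces are shared, cluster-wide objects and are not removed by DROP OWNED, so they may need explicit handling:

    -- run in each database where the role owns objects or holds privileges
    REASSIGN OWNED BY doomed_role TO successor_role;
    DROP OWNED BY doomed_role;

    -- shared objects may need to be reassigned (or dropped) explicitly
    ALTER DATABASE app_db OWNER TO successor_role;

    DROP ROLE doomed_role;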
-
- 03 Dec, 2015 6 commits
-
-
Alvaro Herrera authored
As pointed out by Fujii Masao, we weren't quite there on a standby behaving sanely: first because we were failing to acquire the correct state in the case where no XLOG_PARAMETER_CHANGE message was sent (because a checkpoint had already happened after the setting was changed in the master, and then the standby was restarted); and second because promoting the standby with the feature enabled failed to activate it if the master had the feature disabled. This patch fixes both those misbehaviors, hopefully without re-introducing any old problems. Also change the hint emitted in a standby together with the error message about the feature being disabled, to make it point out that the place to change the setting is the master. Otherwise, if the setting is already enabled in the standby, it is very confusing to have it say that the setting must be enabled ... Authors: Álvaro Herrera, Petr Jelínek. Backpatch to 9.5.
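A hedged illustration of the user-visible side, assuming the feature in question is the track_commit_timestamp setting and the pg_xact_commit_timestamp() function (table name invented):

    -- On a standby the hint now points at the master as the place to change
    -- the setting when the feature is reported as disabled
    SHOW track_commit_timestamp;
    SELECT pg_xact_commit_timestamp(xmin) FROM accounts LIMIT 1;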
-
Tom Lane authored
Formerly, if "psql -o foo" failed to open the output file "foo", it would print an error message but then carry on as though -o had not been specified at all. This seems contrary to expectation: a program that cannot open its output file normally fails altogether. Make psql do exit(1) after reporting the error. If "\o foo" failed to open "foo", it would print an error message but then reset the output file to stdout, as if the argument had been omitted. This is likewise pretty surprising behavior. Make it keep the previous output state, instead. psql keeps SIGPIPE interrupts disabled when it is writing to a pipe, either a pipe specified by -o/\o or a transient pipe opened for purposes such as using a pager on query output. The logic for this was too simple and could sometimes re-enable SIGPIPE when a -o pipe was still active, thus possibly leading to an unexpected psql crash later. Fixing the last point required getting rid of the kluge in PrintQueryTuples and ExecQueryUsingCursor whereby they'd transiently change the global queryFout state, but that seems like good cleanup anyway. Back-patch to 9.5 but not further; these are minor-enough issues that changing the behavior in stable branches doesn't seem appropriate.
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
Alvaro Herrera authored
Michael Paquier
-
- 02 Dec, 2015 3 commits
-
-
Tom Lane authored
The formatting modes that depend on knowledge of the terminal window width did not work right when printing a query result that's been fetched in sections (as a result of FETCH_SIZE). ExecQueryUsingCursor() would force use of the pager as soon as there's more than one result section, and then print.c would see an output file pointer that's not stdout and incorrectly conclude that the terminal window width isn't relevant. This has been broken all along for non-expanded "wrapped" output format, and as of 9.5 the issue affects expanded mode as well. The problem also caused "\pset expanded auto" mode to invariably *not* switch to expanded output in a segmented result, which seems to me to be exactly backwards. To fix, we need to pass down an "is_pager" flag to inform the print.c subroutines that some calling level has already replaced stdout with a pager pipe, so they should (a) not do that again and (b) nonetheless honor the window size. (Notably, this makes the first is_pager test in print_aligned_text() not be dead code anymore.) This patch is a bit invasive because there are so many existing calls of printQuery()/printTable(), but fortunately all but a couple can just pass "false" for the added parameter. Back-patch to 9.5 but no further. Given the lack of field complaints, it's not clear that we should change the behavior in stable branches. Also, the API change for printQuery()/printTable() might possibly break third-party code, again something we don't like to do in stable branches. However, it's not quite too late to do this in 9.5, and with the larger scope of the problem there, it seems worth doing.
-
Alvaro Herrera authored
The original code was a bit clunky; make it more amenable for further reuse by creating a new Perl package PostgresNode, which is an object-oriented representation of a single server, with some support routines such as init, start, stop, psql. This serves as a better basis on which to build further test code, and enables writing tests that use more than one server without too much complication. This commit modifies a lot of the existing test files, mostly to remove explicit calls to system commands (pg_ctl) replacing them with method calls of a PostgresNode object. The result is quite a bit more straightforward. Also move some initialization code to BEGIN and INIT blocks instead of having it straight in as top-level code. This commit also introduces package RecursiveCopy so that we can copy whole directories without having to depend on packages that may not be present on vanilla Perl 5.8 installations. I also ran perltidy on the modified files, which changes some code sites that are not otherwise touched by this patch. I tried to avoid this, but it ended up being more trouble than it's worth. Authors: Michael Paquier, Álvaro Herrera Review: Noah Misch
-
Robert Haas authored
Peter Geoghegan
-
- 01 Dec, 2015 5 commits
-
-
Tom Lane authored
We tried to fetch statistics data from the index metapage, which does not work if the index isn't actually present. If the index is hypothetical, instead extrapolate some plausible internal statistics based on the index page count provided by the index-advisor plugin. There was already some code in gincostestimate() to invent internal stats in this way, but since it was only meant as a stopgap for pre-9.1 GIN indexes that hadn't been vacuumed since upgrading, it was pretty crude. If we want it to support index advisors, we should try a little harder. A small amount of testing says that it's better to estimate the entry pages as 90% of the index, not 100%. Also, estimating the number of entries (keys) as equal to the heap tuple count could be wildly wrong in either direction. Instead, let's estimate 100 entries per entry page. Perhaps someday somebody will want the index advisor to be able to provide these numbers more directly, but for the moment this should serve. Problem report and initial patch by Julien Rouhaud; modified by me to invent less-bogus internal statistics. Back-patch to all supported branches, since we've supported index advisors since 9.0.
-
Tom Lane authored
Don't force the data width to extend all the way to the right margin if it doesn't need to. This reverts the behavior in non-wrapping cases to be what it was in 9.4. Also, make the logic that ensures the data line width is at least equal to the record-header line width a little less obscure. In passing, avoid possible calculation of log10(0). Probably that's harmless, given the lack of field complaints, but it seems risky: conversion of NaN to an integer isn't well defined.
-
Tom Lane authored
The previous coding could overrun the provided buffer size for a very large input, or lose precision for a very small input. Adopt the methodology that's been in use in the equivalent backend code for a long time. Per private report from Bas van Schaik. Back-patch to all supported branches.
-
Tom Lane authored
We should ignore output_columns unless it's greater than zero. A zero means we couldn't get any information from ioctl(TIOCGWINSZ); in that case the expected behavior is to print the data at native width, not to wrap it at the smallest possible value. print_aligned_text() gets this consideration right, but print_aligned_vertical() lost track of this detail somewhere along the line.
-
Teodor Sigaev authored
Allow pg_rewind to work when the target timeline was switched. Now a user can return a promoted standby to the old master. The target timeline history becomes a global variable. An index into the target timeline history is used in function interfaces instead of specifying the TLI directly; thus, SimpleXLogPageRead() can easily start reading XLOG from the next timeline when the current timeline ends. Author: Alexander Korotkov Review: Michael Paquier
-
- 30 Nov, 2015 1 commit
-
-
Tom Lane authored
This area was rather heavily whacked around in 6513633b and follow-on commits, and it was showing it, because the logic to calculate the allowable data width in wrapped expanded mode had only the vaguest relationship to the logic that was actually printing the data. It was not very close to being right about the conditions requiring overhead columns to be added. Aside from being wrong, it was pretty unreadable and under-commented. Rewrite it so it corresponds to what the printing code actually does. In passing, remove a couple of dead tests in the printing logic, too. Per a complaint from Jeff Janes, though this doesn't look much like his patch because it fixes a number of other corner-case bogosities too. One such fix that's visible in the regression test results is that although the code was attempting to enforce a minimum data width of 3 columns, it sometimes left less space than that available.
-