- 20 Nov, 2016 1 commit
-
-
Tom Lane authored
Because we use transformTargetList() for UPDATE as well as SELECT tlists, the code accidentally tried to expand a "*" reference into several columns. This is nonsensical, because the UPDATE syntax provides exactly one target column to put the value into. The immediate result was that transformUpdateTargetList() got confused and reported "UPDATE target count mismatch --- internal error". It seems better to treat such a reference as a plain whole-row variable, as it would be in other contexts. (This could produce useful results when the target column is of composite type.)

Fix by tweaking transformTargetList() to perform *-expansion only conditionally, depending on its exprKind parameter.

Back-patch to 9.3. The problem exists further back, but a fix would be much more invasive before that, because transformTargetList() wasn't told what kind of list it was working on. Doesn't seem worth the trouble given the lack of field reports. (I only noticed it because I was checking the code while trying to improve the documentation about how we handle "foo.*".)

Discussion: <4308.1479595330@sss.pgh.pa.us>
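A minimal sketch of the shape this fixes (table and column names invented); before, the UPDATE below drew the internal-error message, whereas now src.* is kept as a single whole-row value:

    CREATE TABLE src (a int, b text);
    CREATE TABLE dst (id int, payload src);   -- target column of composite type

    -- "src.*" is no longer expanded into multiple columns; it is treated as a
    -- whole-row variable, assignable here because payload is of type src:
    UPDATE dst SET payload = src.* FROM src WHERE dst.id = src.a;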
-
- 19 Nov, 2016 2 commits
-
-
Tom Lane authored
create_merge_append_path should use the path rowcount it just computed, not rel->tuples, for costing purposes. Those numbers should always be the same at present, but if we ever support parameterized MergeAppend paths (a case this function is otherwise prepared for), the former would be right and the latter wrong.

No need for back-patch since the problem is only latent.

Ashutosh Bapat

Discussion: <CAFjFpRek+cLCnTo24youuGtsq4zRphEB8EUUPjDxZjnL4n4HYQ@mail.gmail.com>
-
Tom Lane authored
The serialization code dumped core for a string-valued GUC whose value is NULL, which is a legal state. The infrastructure isn't capable of transmitting that state exactly, but fortunately, transmitting an empty string instead should be close enough (compare, eg, commit e45e990e).

The code potentially underestimated the space required to format a real-valued variable, both because it made an unwarranted assumption that %g output would never be longer than %e output, and because it didn't count right even for %e format. In practice this would pretty much always be masked by overestimates for other variables, but it's still wrong.

Also fix boundary-case error in read_gucstate, incorrect handling of the case where guc_sourcefile is non-NULL but zero length (not clear that can happen, but if it did, this code would get totally confused), and confusingly useless check for a NULL result from read_gucstate.

Andreas Seltenreich discovered the core dump; other issues noted while reading nearby code.

Back-patch to 9.5 where this code was introduced.

Michael Paquier and Tom Lane

Discussion: <871sy78wno.fsf@credativ.de>
-
- 18 Nov, 2016 2 commits
-
-
Peter Eisentraut authored
Like pg_tables, pg_views, and others, this view contains information about sequences in a way that is independent of the system catalog layout but more comprehensive than the information schema.

To help implement the view, add a new internal function pg_sequence_last_value() to return the last value of a sequence. This is kept separate from pg_sequence_parameters() to separate querying run-time state from catalog-like information.

Reviewed-by: Andreas Karlsson <andreas@proxel.se>
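For instance (sequence name invented for illustration), the new view and function can be queried like this:

    CREATE SEQUENCE demo_seq START 100;
    SELECT nextval('demo_seq');

    -- catalog-layout-independent overview, analogous to pg_tables/pg_views:
    SELECT schemaname, sequencename, start_value, increment_by, last_value
    FROM pg_sequences
    WHERE sequencename = 'demo_seq';

    -- run-time state only; returns NULL if nextval() has not been called yet:
    SELECT pg_sequence_last_value('demo_seq'::regclass);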
-
Stephen Frost authored
Add a loop to check that each test covers all of the pg_dump runs. We (I) had been a bit sloppy when adding new runs and not making sure to mark if they should be under like or unlike for each test; this loop makes sure that the test system will complain if any are forgotten in the future.

The loop also correctly handles the 'catch all' cases, which are used to avoid running unnecessary specific checks when a single catch-all can be done (eg: a no-acl run should not have any GRANT commands).

Also, re-enable the testing of blobs, but use lo_from_bytea() instead of trying to be cute and writing out to a file and then reading it back in with psql, which proved to be difficult for some buildfarm members. This allows us to add support for testing the --no-blobs option which will be getting added shortly, provided the buildfarm doesn't blow up on this.
-
- 17 Nov, 2016 5 commits
-
-
Robert Haas authored
The debug messages that merely print StartTransactionCommand, CommitTransactionCommand, ProcessUtility, or ProcessQuery with no additional details seem to be useless. Get rid of them.

The transaction status messages produced by ShowTransactionState are occasionally useful, but they are extremely verbose, producing multiple lines of log output every time they fire, which can happen multiple times per transaction. So, reduce the level to DEBUG5; avoid emitting an extra line just to explain which debug point is at issue; and tighten up the rest of the message so it doesn't use quite so much horizontal space.

With these changes, it's possible to run a somewhat busy system with a log level even as high as DEBUG4, whereas previously anything above DEBUG2 would flood the log with output that probably wasn't really all that useful.
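For example (a sketch; adjusting this server-wide requires superuser), one can now run with a much higher level than before without flooding the log:

    -- previously anything above DEBUG2 flooded the log; DEBUG4 is now tolerable:
    ALTER SYSTEM SET log_min_messages = 'debug4';
    SELECT pg_reload_conf();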
-
Tom Lane authored
pg_dump's traditional solution for breaking a circular dependency involving a view was to create the view with CREATE TABLE and then later issue CREATE RULE "_RETURN" ... to convert the table to a view, relying on the backend's very very ancient code that supports making views that way. We've wanted to get rid of that kluge for a long time, but the thing that finally motivates doing something about it is the recognition that this method fails with the --clean option, because it leads to issuing DROP RULE "_RETURN" followed by DROP TABLE --- and the backend won't let you drop a view's _RETURN rule.

Instead, let's break circular dependencies by initially creating the view using CREATE VIEW AS SELECT NULL::columntype AS columnname, ... (so that it has the right column names and types to support external references, but no dependencies beyond the column data types), and then later dumping the ON SELECT rule using the spelling CREATE OR REPLACE VIEW. This method wasn't available when this code was originally written, but it's been possible since PG 7.3, so it seems fine to start relying on it now.

To solve the --clean problem, make the dropStmt for an ON SELECT rule be CREATE OR REPLACE VIEW with the same dummy target list as above. In this way, during the DROP phase, we first reduce the view to have no extra dependencies, and then we can drop it entirely when we've gotten rid of whatever had a circular dependency on it. (Note: this should work adequately well with the --if-exists option, since the CREATE OR REPLACE VIEW will go through whether the view exists or not. It could fail if the view exists with a conflicting column set, but we don't really support --clean against a non-matching database anyway.)

This allows cleaning up some other kluges inside pg_dump, notably that we don't need a notion of reloptions attached to a rule anymore.

Although this is a bug fix, commit to HEAD only for now. The problem's existed for a long time and we've had relatively few complaints, so it doesn't really seem worth taking risks to fix it in the back branches. We might revisit that choice if no problems emerge.

Discussion: <19092.1479325184@sss.pgh.pa.us>
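In SQL terms, the new dump pattern looks roughly like this (a sketch with invented names, not literal pg_dump output):

    -- step 1: create the view with the right column names and types, but with
    -- no dependencies beyond the column data types:
    CREATE VIEW v AS
      SELECT NULL::integer AS id, NULL::text AS name;

    -- ... objects that participate in the circular dependency are created here ...

    -- step 2: install the real definition via CREATE OR REPLACE VIEW:
    CREATE OR REPLACE VIEW v AS
      SELECT t.id, t.name FROM t;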
-
Tom Lane authored
Teach it not to complain if the dropStmt attached to an archive entry is actually spelled CREATE OR REPLACE VIEW, since that will happen due to an upcoming bug fix. Also, if it doesn't recognize a dropStmt, have it print a WARNING and then emit the dropStmt unmodified. That seems like a much saner behavior than Assert'ing or dumping core due to a null-pointer dereference, which is what would happen before :-(.

Back-patch to 9.4 where this option was introduced.

Discussion: <19092.1479325184@sss.pgh.pa.us>
-
Tom Lane authored
Cleanup for recent patches --- it's not much change, but I got annoyed while re-indenting the view-rule fix I'm working on.
-
Alvaro Herrera authored
Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to require complex interlocking that matched the requirements on the master. This required an O(N) operation that became a significant problem with large indexes, causing replication delays of seconds or in some cases minutes while the XLOG_BTREE_VACUUM was replayed.

This commit skips the "pin scan" that was previously required, by observing in detail when and how it is safe to do so, with full documentation. The pin scan is skipped only in replay; the VACUUM code path on master is not touched here.

No tests included. Manual tests using an additional patch to view WAL records and their timing have shown the change in WAL records and their handling has successfully reduced replication delay.

This is a back-patch of commits 687f2cd7, 3e4b7d87, b6028426 by Simon Riggs, to branches 9.4 and 9.5. No further backpatch is possible because this depends on catalog scans being MVCC. I (Álvaro) additionally fixed a slight problem in the README, which explains why this touches the 9.6 and master branches.
-
- 16 Nov, 2016 2 commits
-
-
Peter Eisentraut authored
Cut out some expensive stuff from the HTML head element that we don't really need. This was previously discussed as part of e8306745, but ended up separate because it changes the output contents slightly.
-
Peter Eisentraut authored
The old DSSSL build is still available for a while using the make target "oldhtml".
-
- 15 Nov, 2016 7 commits
-
-
Tom Lane authored
This should always be true, but since we're relying on a tuple descriptor passed from outside pltcl itself, let's check. Per a gripe from Coverity.
-
Robert Haas authored
Previously, the handle for the control segment could not be zero, but some other DSM segment could potentially have a handle value of zero. However, that means that if someone wanted to store a dsm_handle that might or might not be valid, they would need a separate boolean to keep track of whether the associated value is legal. That's annoying, so change things so that no DSM segment can ever have a handle of 0 --- or as we call it here, DSM_HANDLE_INVALID.

Thomas Munro. This was submitted as part of a much larger patch to add a malloc-like allocator for dynamic shared memory, but this part seems like a good idea independently of the rest of the patch.
-
Tom Lane authored
On Windows, libc will mask \r\n line endings for us, since we read the password file in text mode. But that doesn't happen on Unix. People who share password files across both systems might have \r\n line endings in a file they use on Unix, so as a convenience, ignore trailing \r. Per gripe from Josh Berkus.

In passing, put the existing check for empty line somewhere where it's actually useful, ie after stripping the newline not before.

Vik Fearing, adjusted a bit by me

Discussion: <0de37763-5843-b2cc-855e-5d0e5df25807@agliodbs.com>
-
Tom Lane authored
The CatalogSnapshot was not plugged into SnapshotResetXmin()'s accounting for whether MyPgXact->xmin could be cleared or advanced. In normal transactions this was masked by the fact that the transaction snapshot would be older, but during backend startup and certain utility commands it was possible to re-use the CatalogSnapshot after MyPgXact->xmin had been cleared, meaning that recently-deleted rows could be pruned even though this snapshot could still see them, causing unexpected catalog lookup failures. This effect appears to be the explanation for a recent failure on buildfarm member piculet.

To fix, add the CatalogSnapshot to the RegisteredSnapshots heap whenever it is valid.

In the previous logic, it was possible for the CatalogSnapshot to remain valid across waits for client input, but with this change that would mean it delays advance of global xmin in cases where it did not before. To avoid possibly causing new table-bloat problems with clients that sit idle for long intervals, add code to invalidate the CatalogSnapshot before waiting for client input. (When the backend is busy, it's unlikely that the CatalogSnapshot would be the oldest snap for very long, so we don't worry about forcing early invalidation of it otherwise.)

In passing, remove the CatalogSnapshotStale flag in favor of using "CatalogSnapshot != NULL" to represent validity, as we do for the other special snapshots in snapmgr.c. And improve some obsolete comments.

No regression test because I don't know a deterministic way to cause this failure. But the stress test shown in the original discussion provokes "cache lookup failed for relation 1255" within a few dozen seconds for me.

Back-patch to 9.4 where MVCC catalog scans were introduced. (Note: it's quite easy to produce similar failures with the same test case in branches before 9.4. But MVCC catalog scans were supposed to fix that.)

Discussion: <16447.1478818294@sss.pgh.pa.us>
-
Robert Haas authored
Gigantic numbers of tapes don't work out well. Original patch by Peter Geoghegan; comments entirely rewritten by me.
-
Robert Haas authored
This has been wrong for a very long time, and it's puzzling to me how it ever worked for anyone. Kyotaro Horiguchi
-
Robert Haas authored
Commit 41124a91 allowed the transaction log file prefix to be changed but left in place the existing 64-character limit on the total length of a log file name. It's possible that could be inconvenient for somebody, so increase the limit to MAXPGPATH, which ought to be enough for anybody. Per a suggestion from Tom Lane.
-
- 14 Nov, 2016 6 commits
-
-
Andres Freund authored
This provides a less ugly way to avoid running the regression tests in contrib modules like pg_stat_statements.

Discussion: <22432.1478968242@sss.pgh.pa.us>
-
Magnus Hagander authored
The function was renamed in 908e2347, but the comment never learned about it.
-
Peter Eisentraut authored
Add a new optional Makefile variable PROVE_TESTS that, if passed as a space-separated list of paths relative to the Makefile invoking $(prove_check) or $(prove_installcheck), runs just those tests instead of t/*.pl.

From: Craig Ringer <craig@2ndquadrant.com>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
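For example, a hypothetical invocation could be: make check PROVE_TESTS='t/001_basic.pl t/004_extra.pl' (the test file names here are invented; any subset of a suite's t/ scripts works).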
-
Alvaro Herrera authored
Commit 3c4cf080 should have removed SET TABLESPACE from the synopsis of ALTER MATERIALIZE VIEW as a possible "action" when it added a separate line for it in the main command listing, but failed to. Repair. Backpatch to 9.4, like the aforementioned commit.
-
Peter Eisentraut authored
Previously, pg_upgrade migrated sequence data like tables by copying the on-disk file. This does not allow any changes in the on-disk format for sequences. It's simpler to just have pg_dump set the new sequence values as it normally does.

To do that, create a hidden submode in pg_dump that dumps sequence data even when a schema-only dump is requested, and trigger that submode in binary upgrade mode. (This new submode could easily be exposed as a command-line option, but it has limited use outside of pg_dump and would probably cause some confusion, so we don't do that at this time.)

Reviewed-by: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
-
Peter Eisentraut authored
Instead of handling both sequence data and table data internally as "table data", handle sequences separately under a "sequence set" type. We already handled materialized view data differently, so it makes the code somewhat cleaner to handle each relation kind separately at the top level.

This does not change the output format, since there already was a separate "SEQUENCE SET" archive entry type. A noticeable difference is that SEQUENCE SET entries now always appear after TABLE DATA entries. And in parallel mode there is less sorting to do, because the sequence data entries are no longer considered table data.

Reviewed-by: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
-
- 13 Nov, 2016 2 commits
-
-
Tom Lane authored
The documentation for ts_headline() recommends using a sub-select to avoid extra evaluations of ts_headline() in a query with ORDER BY+LIMIT. Since commit 9118d03a this contortionism is unnecessary, so remove the recommendation.

Noted by Oleg Bartunov.

Discussion: <CAF4Au4w6rrH_j1bvVhzpOsRiHCog7sGJ3LSX0tY8ZdwhHT88LQ@mail.gmail.com>
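So a query like the following sketch (table and columns invented) can now call ts_headline() directly in the target list:

    -- ts_headline() is evaluated only for the rows that survive ORDER BY/LIMIT,
    -- so no sub-select wrapper is needed anymore:
    SELECT id, ts_headline(body, q) AS excerpt
    FROM docs, to_tsquery('stored & query') AS q
    WHERE tsv @@ q
    ORDER BY ts_rank(tsv, q) DESC
    LIMIT 10;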
-
Peter Eisentraut authored
-
- 12 Nov, 2016 1 commit
-
-
Andres Freund authored
While the set of covered functionality is fairly small, the added tests still are useful to get some basic buildfarm testing of pg_stat_statements itself, but also to exercise the lwlock tranche code on the buildfarm.

Author: Amit Kapila, slightly editorialized by me
Reviewed-By: Ashutosh Sharma, Andres Freund
Discussion: <CAA4eK1JOjkdXYtHxh=2aDK4VgDtN-LNGKY_YqX0N=YEvuzQVWg@mail.gmail.com>
-
- 11 Nov, 2016 1 commit
-
- 10 Nov, 2016 5 commits
-
-
Tom Lane authored
Be sure to pull up the subquery's hasRowSecurity flag when flattening a subquery in pull_up_simple_subquery(). This isn't a bug today because we don't look at the hasRowSecurity flag during planning, but it could easily be a bug tomorrow.

Likewise, make rewriteRuleAction() pull up the hasRowSecurity flag when absorbing RTEs from a rule action. This isn't a bug either, for the opposite reason: the flag should never be set yet. But again, it seems like good future proofing.

Add a comment explaining why rewriteTargetView() should *not* set hasRowSecurity when adding stuff to securityQuals.

Improve some nearby comments about securityQuals processing, and document that field more completely in parsenodes.h.

Patch by me, analysis by Dean Rasheed.

Discussion: <CAEZATCXZ8tb2DV6f=bkhsMV6u_gRcZ0CZBw2J-qU84RxSukZog@mail.gmail.com>
-
Tom Lane authored
The reloptions stuff allows this option to be set on a matview. While it's questionable whether that is useful or was really intended, it does work, and we shouldn't change that in minor releases. Commit e3e66d8a disabled the option since I didn't realize that it was possible for it to be set on a matview. Tweak the test to re-allow it.

Discussion: <19749.1478711862@sss.pgh.pa.us>
-
Tom Lane authored
We just pass the data to the INSTEAD trigger.

Haribabu Kommi, reviewed by Dilip Kumar

Patch: <CAJrrPGcSQkrNkO+4PhLm4B8UQQQmU9YVUuqmtgM=pmzMfxWaWQ@mail.gmail.com>
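A minimal sketch (names invented) of what now works:

    CREATE TABLE base (a int, b text);
    CREATE VIEW v AS SELECT a, b FROM base;

    CREATE FUNCTION v_ins() RETURNS trigger AS $$
    BEGIN
      INSERT INTO base VALUES (NEW.a, NEW.b);
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER v_instead INSTEAD OF INSERT ON v
      FOR EACH ROW EXECUTE PROCEDURE v_ins();

    -- each input row is handed to the INSTEAD OF INSERT trigger:
    COPY v (a, b) FROM stdin;
    1	one
    2	two
    \.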
-
Tom Lane authored
The plan generated for sorted partial aggregation with "GROUP BY constant" included a Sort node with no sort keys, which the executor does not like.

Per report from Steve Randall. I'd add a regression test case if I could think of a compact one, but it doesn't seem worth expending lots of cycles on.

Report: <CABVd52UAdGXpg_rCk46egpNKYdXOzCjuJ1zG26E2xBe_8bj+Fg@mail.gmail.com>
-
Tom Lane authored
Discussion: <5019.1478790246@sss.pgh.pa.us>
-
- 09 Nov, 2016 2 commits
-
-
Robert Haas authored
Masahiko Sawada, reviewed by Fabien Coelho and Beena Emerson, with a bit of wordsmithing and cosmetic adjustment by me.
-
Peter Eisentraut authored
-
- 08 Nov, 2016 4 commits
-
-
Tom Lane authored
The idea behind SPI_push was to allow transitioning back into an "unconnected" state when a SPI-using procedure calls unrelated code that might or might not invoke SPI. That sounds good, but in practice the only thing it does for us is to catch cases where a called SPI-using function forgets to call SPI_connect --- which is a highly improbable failure mode, since it would be exposed immediately by direct testing of said function. As against that, we've had multiple bugs induced by forgetting to call SPI_push/SPI_pop around code that might invoke SPI-using functions; these are much harder to catch and indeed have gone undetected for years in some cases. And we've had to band-aid around some problems of this ilk by introducing conditional push/pop pairs in some places, which really kind of defeats the purpose altogether; if we can't draw bright lines between connected and unconnected code, what's the point?

Hence, get rid of SPI_push[_conditional], SPI_pop[_conditional], and the underlying state variable _SPI_curid. It turns out SPI_restore_connection can go away too, which is a nice side benefit since it was never more than a kluge. Provide no-op macros for the deleted functions so as to avoid an API break for external modules.

A side effect of this removal is that SPI_palloc and allied functions no longer permit being called when unconnected; they'll throw an error instead. The apparent usefulness of the previous behavior was a mirage as well, because it was depended on by only a few places (which I fixed in preceding commits), and it posed a risk of allocations being unexpectedly long-lived if someone forgot a SPI_push call.

Discussion: <20808.1478481403@sss.pgh.pa.us>
-
Robert Haas authored
For ALTER TYPE .. RENAME, add "VALUE" to the list of possible completions. Complete ALTER TYPE .. RENAME VALUE with possible enum values. After that, complete with "TO".

Dagfinn Ilmari Mannsåker, reviewed by Artur Zakirov.
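The statement being completed looks like this (enum name and labels invented):

    CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');

    -- psql now offers "VALUE", then the existing labels, then "TO":
    ALTER TYPE mood RENAME VALUE 'sad' TO 'unhappy';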
-
Tom Lane authored
Invent a new function heap_modify_tuple_by_cols() that is functionally equivalent to SPI_modifytuple except that it always allocates its result by simple palloc. I chose however to make the API details a bit more like heap_modify_tuple: pass a tupdesc rather than a Relation, and use bool convention for the isnull array.

Use this function in place of SPI_modifytuple at all call sites where the intended behavior is to allocate in current context. (There actually are only two call sites left that depend on the old behavior, which makes me wonder if we should just drop this function rather than keep it.)

This new function is easier to use than heap_modify_tuple() for purposes of replacing a single column (or, really, any fixed number of columns). There are a number of places where it would simplify the code to change over, but I resisted that temptation for the moment ... everywhere except in plpgsql's exec_assign_value(); changing that might offer some small performance benefit, so I did it.

This is on the way to removing SPI_push/SPI_pop, but it seems like good code cleanup in its own right.

Discussion: <9633.1478552022@sss.pgh.pa.us>
-
Robert Haas authored
Michael Paquier
-