- 13 Jan, 2016 (6 commits)
-
Tom Lane authored
pg_dump's original approach to handling extension member objects was to run around and clear (or set) their dump flags rather late in its data collection process. Unfortunately, quite a lot of code expects those flags to be valid before that point, which was an entirely reasonable expectation before we added extensions.

In particular, this explains Karsten Hilbert's recent report of pg_upgrade failing on a database in which an extension has been installed into the pg_catalog schema. Its objects are initially marked as not-to-be-dumped on the strength of their schema, and later we change them to must-dump because we're doing a binary upgrade of their extension; but we've already skipped essential tasks like making associated DO_SHELL_TYPE objects.

To fix, collect extension membership data first, and incorporate it in the initial setting of the dump flags, so that those are once again correct from the get-go. This has the undesirable side effect of slightly lengthening the time taken before pg_dump acquires table locks, but testing suggests that the increase in that window is not very much.

Along the way, get rid of ugly special-case logic for deciding whether to dump procedural languages, FDWs, and foreign servers; dump decisions for those are now correct up-front, too. In 9.3 and up, this also fixes erroneous logic about when to dump event triggers (basically, they were *always* dumped before). In 9.5 and up, transform objects had that problem too.

Since this problem came in with extensions, back-patch to all supported versions.
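A minimal standalone sketch of the ordering change (all names here are invented for illustration, not pg_dump's real ones): extension membership is known before the dump flag is first set, so the flag never has to be corrected after dependent decisions were made.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct
    {
        const char *name;
        bool        in_pg_catalog;  /* schema alone would say "don't dump" */
        bool        ext_member;     /* now collected *before* flags are set */
    } DemoObject;

    static bool
    set_dump_flag(const DemoObject *obj, bool binary_upgrade)
    {
        if (obj->ext_member)
            return binary_upgrade;  /* membership decides; schema is moot */
        return !obj->in_pg_catalog; /* ordinary schema-based rule */
    }

    int
    main(void)
    {
        DemoObject  shell_type = {"ext_shell_type", true, true};

        /* correct from the get-go, so shell-type steps aren't skipped */
        printf("dump %s: %d\n", shell_type.name,
               set_dump_flag(&shell_type, true));
        return 0;
    }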
-
Tom Lane authored
Rather than passing around DumpOptions and RestoreOptions as separate arguments, add fields to struct Archive to carry pointers to these objects, and access them through those fields when needed. There already was a RestoreOptions pointer in Archive, though for no obvious reason it was part of the "private" struct rather than out where pg_dump.c could see it.

Doing this allows reversion of quite a lot of parameter-addition changes made in commit 0eea8047, which is a good thing IMO because this will reduce the code delta between 9.4 and 9.5, probably easing a few future back-patch efforts. Moreover, the previous commit only added a DumpOptions argument to functions that had to have it at the time, which means we could anticipate still more code churn (and more back-patch hazard) as the requirement spread further. I'd hit exactly that problem in my upcoming patch to fix extension membership marking, which is what motivated me to do this.
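A sketch of the arrangement (field and surrounding names assumed from the description above, not copied from pg_backup.h):

    typedef struct DumpOptions DumpOptions;
    typedef struct RestoreOptions RestoreOptions;

    typedef struct Archive
    {
        int             remoteVersion;  /* illustrative pre-existing field */
        DumpOptions    *dopt;           /* set when dumping */
        RestoreOptions *ropt;           /* set when restoring; no longer
                                         * buried in the private struct */
    } Archive;

    /* callers holding an Archive no longer need extra option parameters */
    static int
    uses_dump_options(Archive *AH)
    {
        return AH->dopt != (DumpOptions *) 0;
    }

    int
    main(void)
    {
        Archive     a = {90500, 0, 0};

        return uses_dump_options(&a);
    }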
-
Tom Lane authored
To ease doing indent fixups on a couple of patches I have in progress.
-
Peter Eisentraut authored
The completion of CREATE INDEX CONCURRENTLY was lacking in several ways compared to a plain CREATE INDEX command:

- CREATE INDEX <name> ON completes table names, but didn't with CONCURRENTLY.
- CREATE INDEX completes ON and existing index names, but with CONCURRENTLY it only completed ON.
- CREATE INDEX <name> completes ON, but didn't with CONCURRENTLY.

These are now all fixed.
-
Peter Eisentraut authored
The previous code supported a syntax like CREATE INDEX name CONCURRENTLY, which never existed. Mistake introduced in commit 37ec19a1. Remove the addition of CONCURRENTLY at that point.
-
Peter Eisentraut authored
This just updates a comment to match the code. From Michael Paquier.
-
- 12 Jan, 2016 (5 commits)
-
Simon Riggs authored
Tomas Vondra, reviewed by Michael Paquier and Amit Kapila. Minor edits by me.
-
Simon Riggs authored
Teach GetFlushRecPtr() to update the LogwrtResult cache, as is done by all other functions in xlog.c.
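Roughly the resulting shape of the function (a paraphrase, assuming xlog.c's existing file-statics XLogCtl and LogwrtResult; it compiles only in that context):

    XLogRecPtr
    GetFlushRecPtr(void)
    {
        SpinLockAcquire(&XLogCtl->info_lck);
        LogwrtResult = XLogCtl->LogwrtResult;   /* refresh local cache too */
        SpinLockRelease(&XLogCtl->info_lck);

        return LogwrtResult.Flush;
    }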
-
Tom Lane authored
Commit 866566a6 introduced a new mechanism for incompatible plpythons to detect each other. I left the old mechanism in place, because it seems possible that a plpython predating that commit might be used with one postdating it. (This would require updating plpython3 but not plpython2 or vice versa, but that seems well within the realm of possibility.) However, surely it will not be able to happen in 9.6 or later, so we can delete the old mechanism in HEAD.
-
Tom Lane authored
Commit 866566a6 is insufficient to prevent dump/reload failures when using transform modules in a database with both plpython2 and plpython3 installed. The reason is that the transform extension scripts use DO blocks as a mechanism to pull in the libpython library before creating the transform function. It's necessary to preload the library because the dynamic loader won't do it for us on every platform, leading to "unresolved symbol" failures when the transform library is loaded. But it's *not* necessary to execute Python code, and doing so will provoke a multiple-Pythons-are-loaded error even after the preceding commit.

To fix, use LOAD instead of a DO block. That requires superuser privilege, but creation of a C function does anyway. It also embeds knowledge of the underlying library name for each PL language; but that's wired into the initdb-time contents of pg_pltemplate too, so that doesn't seem like a large problem either. Note that CREATE TRANSFORM as such doesn't call the language module at all.

Per a report from Paul Jones. Back-patch to 9.5 where transform modules were introduced.
-
Tom Lane authored
Commit 80371601 installed a safeguard against loading plpython2 and plpython3 at the same time, but asserted that both could still be used in the same database, just not in the same session. However, that's not actually all that practical, because dumping and reloading will fail (since both libraries necessarily get loaded into the restoring session). pg_upgrade is even worse, because it checks for missing libraries by loading every .so library mentioned in the entire installation into one session, so that you can have only one across the whole cluster.

We can improve matters by not throwing the error immediately in _PG_init, but only when and if we're asked to do something that requires calling into libpython. This ameliorates both of the above situations, since while execution of CREATE LANGUAGE, CREATE FUNCTION, etc will result in loading plpython, it isn't asked to do anything interesting (at least not if check_function_bodies is off, as it will be during a restore).

It's possible that this opens some corner-case holes in which a crash could be provoked with sufficient effort. However, since plpython only exists as an untrusted language, any such crash would require superuser privileges, making it a "don't do that" matter rather than a security issue. To reduce the hazards in this area, the error is still FATAL when it does get thrown.

Per a report from Paul Jones. Back-patch to 9.2, which is as far back as the patch applies without work. (It could be made to work in 9.1, but given the lack of previous complaints, I'm disinclined to expend effort so far back. We've been pretty desultory about support for Python 3 in 9.1 anyway.)
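A standalone demonstration of the pattern (not plpython's actual code): record the conflict at load time, raise it only when a real call into the library is requested.

    #include <stdio.h>
    #include <stdlib.h>

    static int  other_python_loaded = 0;

    static void
    demo_PG_init(void)
    {
        /* e.g. probe for the other major version's interpreter */
        other_python_loaded = 1;    /* pretend we found one */
        /* crucially: no error here, so CREATE FUNCTION and restores work */
    }

    static void
    demo_call_into_libpython(void)
    {
        if (other_python_loaded)
        {
            /* analogue of the (still FATAL) error in the real module */
            fprintf(stderr,
                    "FATAL: Python major versions cannot be mixed in one session\n");
            exit(1);
        }
        /* ... real work would happen here ... */
    }

    int
    main(void)
    {
        demo_PG_init();             /* library load: harmless */
        demo_call_into_libpython(); /* first real use trips the error */
        return 0;
    }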
-
- 11 Jan, 2016 (2 commits)
-
Robert Haas authored
Noted while reviewing a question from Dickson S. Guedes.
-
Peter Eisentraut authored
From: Petr Jelinek <petr@2ndquadrant.com>
-
- 09 Jan, 2016 (7 commits)
-
Tom Lane authored
This loop uselessly fetched the argument after the one it's currently looking at. No real harm is done since we couldn't possibly fetch off the end of memory, but it's confusing to the reader. Also remove a duplicate (and therefore confusing) PG_ARGISNULL check in jsonb_build_object.

I happened to notice these things while trolling for missed null-arg checks earlier today. Back-patch to 9.5, not because there is any real bug, but just because 9.5 and HEAD are still in sync in this file and we might as well keep them so. In passing, re-pgindent.
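For illustration, a demo function (names hypothetical, not jsonb.c's code) showing the corrected loop shape: advance two arguments at a time, touch nothing beyond the current key/value pair, and make exactly one null check per key.

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(demo_count_pairs);

    Datum
    demo_count_pairs(PG_FUNCTION_ARGS)
    {
        int     nargs = PG_NARGS();
        int     i;

        if (nargs % 2 != 0)
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                     errmsg("argument list must have even number of elements")));

        for (i = 0; i < nargs; i += 2)
        {
            if (PG_ARGISNULL(i))    /* one check for the key, not two */
                ereport(ERROR,
                        (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                         errmsg("argument %d cannot be null", i + 1)));
            /* the value, PG_ARGISNULL(i + 1), is inspected only here */
        }

        PG_RETURN_INT32(nargs / 2);
    }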
-
Tom Lane authored
I noticed that the sanity checks in the regression tests omitted to check a couple of "poor man's enum" columns that you'd reasonably expect them to check. There are other "char"-type columns in system catalogs that are not covered by either type_sanity or opr_sanity, e.g. pg_rewrite.ev_type. However, those catalogs are not populated with any manually-created data during bootstrap, so it seems less necessary to check them this way.
-
Tom Lane authored
A scan for missed proisstrict markings in the core code turned up these functions:

brin_summarize_new_values
pg_stat_reset_single_table_counters
pg_stat_reset_single_function_counters
pg_create_logical_replication_slot
pg_create_physical_replication_slot
pg_drop_replication_slot

The first three of these take OID, so a null argument will normally look like a zero to them, resulting in "ERROR: could not open relation with OID 0" for brin_summarize_new_values, and no action for the pg_stat_reset_XXX functions. The other three will dump core on a null argument, though this is mitigated by the fact that they won't do so until after checking that the caller is superuser or has rolreplication privilege.

In addition, the pg_logical_slot_get/peek[_binary]_changes family was intentionally marked nonstrict, but failed to make nullness checks on all the arguments; so again a null-pointer-dereference crash is possible but only for superusers and rolreplication users.

Add the missing ARGISNULL checks to the latter functions, and mark the former functions as strict in pg_proc. Make that change in the back branches too, even though we can't force initdb there, just so that installations initdb'd in future won't have the issue. Since none of these bugs rise to the level of security issues (and indeed the pg_stat_reset_XXX functions hardly misbehave at all), it seems sufficient to do this.

In addition, fix some order-of-operations oddities in the slot_get_changes family, mostly cosmetic, but not the part that moves the function's last few operations into the PG_TRY block. As it stood, there was significant risk for an error to exit without clearing historical information from the system caches.

The slot_get_changes bugs go back to 9.4 where that code was introduced. Back-patch appropriate subsets of the pg_proc changes into all active branches, as well.
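As a hedged demo of the non-strict case (not the real slot functions): a function left non-strict in pg_proc must make its own PG_ARGISNULL() checks before touching any argument, which is the kind of check added to the slot_get_changes family. The alternative fix, used for brin_summarize_new_values and friends, is simply marking the function strict so the executor never calls it with a null argument.

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(demo_nonstrict);

    Datum
    demo_nonstrict(PG_FUNCTION_ARGS)
    {
        if (PG_ARGISNULL(0))
            PG_RETURN_NULL();   /* instead of dereferencing a null Datum */

        PG_RETURN_INT32(PG_GETARG_INT32(0) + 1);
    }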
-
Tom Lane authored
Given syntactically wrong input, widget_in() could call atof() with an indeterminate pointer argument, typically leading to a crash; or if it didn't do that, it might return a NULL pointer, which again would lead to a crash since old-style C functions aren't supposed to do things that way. Fix that by correcting the off-by-one syntax test and throwing a proper error rather than just returning NULL.

Also, since widget_in and widget_out have been marked STRICT for a long time, their tests for null inputs are just dead code; remove 'em. In the oldest branches, also improve widget_out to use snprintf not sprintf, just to be sure. In passing, get rid of a long-since-useless sprintf into a local buffer that nothing further is done with, and make some other minor coding style cleanups.

In the intended regression-testing usage of these functions, none of this is very significant; but if the regression test database were left around in a production installation, these bugs could amount to a minor security hazard.

Piotr Stefaniak, Michael Paquier, and Tom Lane
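A demo of the fix's shape (regress.c's real widget_in differs in detail; the type, field syntax, and names here are simplified stand-ins): count the delimited fields exactly, and report bad input with ereport() rather than handing atof() an unset pointer or returning NULL.

    #include "postgres.h"
    #include "fmgr.h"

    #include <stdlib.h>

    PG_MODULE_MAGIC;

    typedef struct
    {
        double      x, y, radius;
    } demo_widget;

    PG_FUNCTION_INFO_V1(demo_widget_in);

    Datum
    demo_widget_in(PG_FUNCTION_ARGS)
    {
        char        *str = PG_GETARG_CSTRING(0);
        char        *field[3];
        int          nfields = 0;
        char        *p;
        demo_widget *w;

        /* expects "x,y,r"; collect exactly three field starts */
        for (p = str; *p && nfields < 3; p++)
        {
            if (p == str || p[-1] == ',')
                field[nfields++] = p;
        }
        if (nfields < 3)            /* the off-by-one test lived here */
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
                     errmsg("invalid input syntax for type widget: \"%s\"",
                            str)));

        w = (demo_widget *) palloc(sizeof(demo_widget));
        w->x = atof(field[0]);
        w->y = atof(field[1]);
        w->radius = atof(field[2]);
        PG_RETURN_POINTER(w);
    }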
-
Simon Riggs authored
Per discussion with Andres Freund
-
Tom Lane authored
These functions readily crash when passed a NULL input value. The tests themselves do not pass NULL values to them; but when the regression database is used as a basis for fuzz testing, they cause a lot of noise. Also, if someone were to leave a regression database lying about in a production installation, these would create a minor security hazard.

Andreas Seltenreich
-
Simon Riggs authored
Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to require complex interlocking that matched the requirements on the master. This required an O(N) operation that became a significant problem with large indexes, causing replication delays of seconds or in some cases minutes while the XLOG_BTREE_VACUUM was replayed.

This commit skips the "pin scan" that was previously required, by observing in detail when and how it is safe to do so, with full documentation. The pin scan is skipped only in replay; the VACUUM code path on master is not touched here. The current commit still performs the pin scan for toast indexes, though this can also be avoided if we recheck scans on toast indexes. A later patch will address this.

No tests included. Manual tests using an additional patch to view WAL records and their timing have shown the change in WAL records and their handling has successfully reduced replication delay.
-
- 08 Jan, 2016 (6 commits)
-
Alvaro Herrera authored
This reverts commit e9282e95, which blew up in a pretty spectacular way. Re-introduce the original code while we search for a real fix.
-
Alvaro Herrera authored
Further portability fix for a9676139. MinGW- and MSVC-based builds appear to be working fine, but Cygwin needs an extra tweak whereby the new win32security.c file is explicitly added to the list of files to build in pgport, per Cygwin buildfarm members brolga and lorikeet.

Author: Michael Paquier
-
Magnus Hagander authored
Tatsuro Yamada
-
Magnus Hagander authored
Kyotaro HORIGUCHI
-
Tom Lane authored
Improve comments and make it a shade less messy. I think we might want to move all of this somewhere else later, but it needs to be more readable first. In passing, re-pgindent the file, affecting some recently-added comments concerning parallel query planning.
-
Tom Lane authored
Once upon a time it was necessary for grouping_planner() to determine the tlist it wanted from the scan/join plan subtree before it called query_planner(), because query_planner() would actually make a Plan using that. But we refactored things a long time ago to delay construction of the Plan tree till later, so there's no need to build that tlist until (and indeed unless) we're ready to plaster it onto the Plan. The only thing query_planner() cares about is what Vars are going to be needed for the tlist, and it can perfectly well get that by looking at the real tlist rather than some masticated version.

Well, actually, there is one minor glitch in that argument, which is that make_subplanTargetList also adds Vars appearing only in HAVING to the tlist it produces. So now we have to account for HAVING explicitly in build_base_rel_tlists. But that just adds a few lines of code, and I doubt it moves the needle much on processing time; we might be doing pull_var_clause() twice on the havingQual, but before we had it scanning dummy tlist entries instead.

This is a very small down payment on rationalizing grouping_planner enough so it can be refactored.
-
- 07 Jan, 2016 (8 commits)
-
Alvaro Herrera authored
-
Tom Lane authored
Turns out the only reason initdb -X worked is that pg_mkdir_p won't whine if you point it at something that's a symlink to a directory. Otherwise, the attempt to create pg_xlog/ just like all the other subdirectories would have failed. Let's be a little more explicit about what's happening. Oversight in my patch for bug #13853 (mea culpa for not testing -X ...)
-
Alvaro Herrera authored
This seems to fix the MinGW build that was broken in a9676139, as evidenced by the buildfarm.
-
Tom Lane authored
When we're creating subdirectories of PGDATA during initdb, we know darn well that the parent directory exists (or should exist) and that the new subdirectory doesn't (or shouldn't). There is therefore no need to use anything more complicated than mkdir(). Using pg_mkdir_p() just opens us up to unexpected failure modes, such as the one exhibited in bug #13853 from Nuri Boardman. It's not very clear why pg_mkdir_p() went wrong there, but it is clear that we didn't need to be trying to create parent directories in the first place. We're not even saving any code, as proven by the fact that this patch nets out at minus five lines.

Since this is a response to a field bug report, back-patch to all branches.
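A standalone sketch of the simplification (directory name illustrative): when the parent is known to exist, a plain mkdir() suffices, and any failure is immediately meaningful rather than masked by parent-creation logic.

    #include <sys/stat.h>
    #include <sys/types.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        /* under PGDATA this would be e.g. "base" or "pg_twophase" */
        if (mkdir("demo_subdir", 0700) < 0)
        {
            fprintf(stderr, "could not create directory \"%s\": %s\n",
                    "demo_subdir", strerror(errno));
            return 1;
        }
        return 0;
    }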
-
Alvaro Herrera authored
This new view provides insight into the state of a running WAL receiver on a hot standby node. The information returned includes the PID of the WAL receiver process, its status (stopped, starting, streaming, etc.), start LSN and TLI, last received LSN and TLI, timestamp of last message send and receipt, latest end-of-WAL LSN and time, and the name of the slot (if any). Access to the detailed data is only granted to superusers; others only get the PID.

Author: Michael Paquier
Reviewer: Haribabu Kommi
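A hedged example of consuming the view over libpq (the three column names are inferred from the description above; the conninfo string is illustrative; build with -lpq):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=postgres");
        PGresult   *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }
        res = PQexec(conn,
                     "SELECT pid, status, received_lsn"
                     " FROM pg_stat_wal_receiver");
        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
            printf("walreceiver pid=%s status=%s received_lsn=%s\n",
                   PQgetvalue(res, 0, 0),
                   PQgetvalue(res, 0, 1),
                   PQgetvalue(res, 0, 2));
        PQclear(res);
        PQfinish(conn);
        return 0;
    }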
-
Tom Lane authored
Commit e710b65c inserted code in md5_crypt_verify to disable and later re-enable interrupts, with a CHECK_FOR_INTERRUPTS call as part of the second step, to process any interrupts that had been held off. Commit 6647248e removed the interrupt disable/re-enable code, but left behind the CHECK_FOR_INTERRUPTS, even though this is now an entirely random, pointless place for one. md5_crypt_verify doesn't run long enough to need such a check, and if it did, this would still be the wrong place to put one.
-
Tom Lane authored
We tell people to examine the postmaster log if they're unsure why they are getting auth failures, but actually only a few relatively-uncommon failure cases were given their own log detail messages in commit 64e43c59. Expand on that so that every failure case detected within md5_crypt_verify gets a specific log detail message. This should cover pretty much every ordinary password auth failure cause.

So far I've not noticed user demand for a similar level of auth detail for the other auth methods, but sooner or later somebody might want to work on them. This is not that patch, though.
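An illustrative fragment of the kind of call involved (not auth.c verbatim; "username" is a stand-in variable): the server-log-only detail rides along via errdetail_log(), so the client still sees only the generic failure message.

    ereport(FATAL,
            (errcode(ERRCODE_INVALID_PASSWORD),
             errmsg("password authentication failed for user \"%s\"",
                    username),
             errdetail_log("Password does not match for user \"%s\".",
                           username)));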
-
Alvaro Herrera authored
pg_ctl is using isatty() to verify whether the process is running in a terminal, and if not it sends its output to Windows' Event Log ... which does the wrong thing when the output has been redirected to a pipe, as reported in bug #13592.

To fix, make pg_ctl use the code we already have to detect service-ness: in the master branch, move src/backend/port/win32/security.c to src/port (with suitable tweaks so that it runs properly in backend and frontend environments); pg_ctl already has access to pgport so it Just Works. In older branches, that's likely to cause trouble, so instead duplicate the required code in pg_ctl.c.

Author: Michael Paquier
Bug report and diagnosis: Egon Kocjan
Backpatch: all supported branches
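A hedged sketch of the resulting decision (pgwin32_is_service() is the real pgport helper being exposed here; write_eventlog() and the surrounding function stand in for pg_ctl.c's own statics, and Windows headers are assumed on that platform):

    static void
    demo_write_stderr(const char *msg)
    {
    #ifdef WIN32
        if (pgwin32_is_service())   /* a true service, not merely no tty */
            write_eventlog(EVENTLOG_ERROR_TYPE, msg);
        else
            fputs(msg, stderr);     /* console *or* a redirected pipe */
    #else
        fputs(msg, stderr);
    #endif
    }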
-
- 06 Jan, 2016 (2 commits)
-
Tom Lane authored
Although these temp tables will get removed from template1 at the end of the standalone-backend run, that's too late to keep them from getting copied into the template0 and postgres databases, now that we use only a single backend run for the whole sequence. While no real harm is done by the extra copies (since they'd be deleted on first use of the temp schema), it's still unsightly, and it would mean some wasted cycles for every database creation for the life of the installation.

Oversight in commit c4a8812c. Noticed by Amit Langote.
-
Tom Lane authored
Per Amit Langote.
-
- 05 Jan, 2016 (4 commits)
-
Tatsuo Ishii authored
-
Alvaro Herrera authored
Author: Marko Tiikkaja
-
Tom Lane authored
In commit 92119191 I claimed that we weren't testing encoding conversion functions, but further poking around reveals that we did have an equivalent though hard-wired set of tests in conversion.sql. AFAICS there is no advantage to doing it like that as compared to letting the catalog contents drive the test, so let the opr_sanity addition stand and remove the now-redundant tests in conversion.sql.

Also, remove some infrastructure in src/backend/utils/mb/conversion_procs for building conversion.sql's list of tests. That was unmaintained, and had not corresponded to the actual contents of conversion.sql since 2007 or perhaps even further back.
-
Tom Lane authored
The order of inclusion of .o files makes a difference in linker output; not a functional difference, but still a bitwise difference, which annoys some packagers who would like reproducible builds.

Report and patch by Christoph Berg
-