- 19 Jun, 2014 1 commit
-
-
Tom Lane authored
Arrange for postmaster child processes to respond to two environment variables, PG_OOM_ADJUST_FILE and PG_OOM_ADJUST_VALUE, to determine whether they reset their OOM score adjustments and if so to what. This is superior to the previous design involving #ifdef's in several ways. The behavior is now available in a default build, and both ends of the adjustment --- the original adjustment of the postmaster's level and the subsequent readjustment by child processes --- can now be controlled in one place, namely the postmaster launch script. So it's no longer necessary for the launch script to act on faith that the server was compiled with the appropriate options. In addition, if someone wants to use an OOM score other than zero for the child processes, that doesn't take a recompile anymore; and we no longer have to cater separately to the two different historical kernel APIs for this adjustment. Gurjeet Singh, somewhat revised by me
-
- 18 Jun, 2014 5 commits
-
-
Andrew Dunstan authored
The check was confusing and tested for a condition that should never in fact happen. Per gripe from Dmitry Dolgov.
-
Andrew Dunstan authored
-
Tom Lane authored
This SQL-standard feature allows a sub-SELECT yielding multiple columns (but only one row) to be used to compute the new values of several columns to be updated. While the same results can be had with an independent sub-SELECT per column, such a workaround can require a great deal of duplicated computation. The standard actually says that the source for a multi-column assignment could be any row-valued expression. The implementation used here is tightly tied to our existing sub-SELECT support and can't handle other cases; the Bison grammar would have some issues with them too. However, I don't feel too bad about this since other cases can be converted into sub-SELECTs. For instance, "SET (a,b,c) = row_valued_function(x)" could be written "SET (a,b,c) = (SELECT * FROM row_valued_function(x))".
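A minimal sketch of the new syntax, using hypothetical table and column names:

    -- accounts/transactions are hypothetical; both target columns are
    -- computed in a single pass over the sub-SELECT's source
    UPDATE accounts a
    SET (balance, last_txn_at) = (
        SELECT sum(amount), max(created_at)
        FROM transactions t
        WHERE t.account_id = a.id
    )
    WHERE a.id = 42;

Written as two independent sub-SELECTs, one per column, the aggregation over transactions would have to run twice.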
-
Noah Misch authored
Catch up with commit b8cc8f94's introduction of the HAVE_UUID_OSSP symbol to the principal build process. Back-patch to 9.4, where that commit appeared.
-
- 17 Jun, 2014 2 commits
-
-
Bruce Momjian authored
Report by Peter Geoghegan
-
Heikki Linnakangas authored
Oops.
-
- 16 Jun, 2014 2 commits
-
-
Tom Lane authored
Since most of the system thinks AND and OR are N-argument expressions anyway, let's have the grammar generate a representation of that form when dealing with input like "x AND y AND z AND ...", rather than generating a deeply-nested binary tree that just has to be flattened later by the planner. This avoids stack overflow in parse analysis when dealing with queries having more than a few thousand such clauses; and in any case it removes some rather unsightly inconsistencies, since some parts of parse analysis were generating N-argument ANDs/ORs already. It's still possible to get a stack overflow with weirdly parenthesized input, such as "x AND (y AND (z AND ( ... )))", but such cases are not mainstream usage. The maximum depth of parenthesization is already limited by Bison's stack in such cases, anyway, so that the limit is probably fairly platform-independent. Patch originally by Gurjeet Singh, heavily revised by me
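As an illustration, a sketch of the machine-generated input shape that formerly risked a parser stack overflow (the repeated clause is a trivial placeholder):

    DO $$
    DECLARE
        q text;
    BEGIN
        -- build "SELECT 1 WHERE 1 = 1 AND 1 = 1 AND ..." with 5000 clauses
        SELECT 'SELECT 1 WHERE ' || string_agg('1 = 1', ' AND ')
          INTO q
          FROM generate_series(1, 5000);
        EXECUTE q;
    END
    $$;

With the grammar now emitting a single N-argument AND, parse analysis handles input like this without deep recursion.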
-
Bruce Momjian authored
This prevents several compiler warnings on Windows.
-
- 14 Jun, 2014 3 commits
-
-
Noah Misch authored
Any OS user able to access the socket can connect as the bootstrap superuser and proceed to execute arbitrary code as the OS user running the test. Protect against that by placing the socket in a temporary, mode-0700 subdirectory of /tmp. The pg_regress-based test suites and the pg_upgrade test suite were vulnerable; the $(prove_check)-based test suites were already secure. Back-patch to 8.4 (all supported versions). The hazard remains wherever the temporary cluster accepts TCP connections, notably on Windows. As a convenient side effect, this lets testing proceed smoothly in builds that override DEFAULT_PGSOCKET_DIR. Popular non-default values like /var/run/postgresql are often unwritable to the build user. Security: CVE-2014-0067
-
Noah Misch authored
This function is pervasive on free software operating systems; import NetBSD's implementation. Back-patch to 8.4, like the commit that will harness it.
-
Heikki Linnakangas authored
Just feels more natural, and is more consistent with rm_redo.
-
- 13 Jun, 2014 5 commits
-
-
Noah Misch authored
Per buildfarm member prairiedog. Back-patch to 9.4, where the test was introduced. Reviewed by Tom Lane.
-
Noah Misch authored
Back-patch to 9.4.
-
Noah Misch authored
Back-patch to 9.4, where .dir-locals.el was introduced.
-
Tom Lane authored
We have for a long time been able to prove implications and refutations between clauses structured like "expr op const" with the same subexpression and btree-related operators; for example that "x < 4" implies "x <= 5". The implication machinery is needed to detect usability of partial indexes, and the refutation machinery is needed to implement constraint exclusion. This patch extends that machinery to make proofs for operator expressions involving the same two immutable-but-not-necessarily-just-Const input expressions, ie does "expr1 op1 expr2" prove or refute "expr1 op2 expr2" or "expr2 op2 expr1"? An important example is that we can now prove "x = y" given "y = x", which formerly the code could not deduce unless x or y was a constant. We can make use of the system's knowledge of operator commutator and negator pairs, and can also make use of btree opclass relationships, for example "x < y" implies "x <= y" and refutes "x > y" (notice that neither of these could be proven just from commutator or negator links). Inspired by a gripe from Brian Dunavant. This seems more like a new feature than a bug fix, though, so no back-patch.
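A small sketch of what the new proof rules buy (table and index are hypothetical):

    CREATE TABLE pairs (x int, y int);
    CREATE INDEX pairs_le_idx ON pairs (x) WHERE x <= y;

    -- "x < y" implies the index predicate "x <= y", a proof involving
    -- two non-constant expressions that was previously out of reach
    EXPLAIN SELECT * FROM pairs WHERE x < y;

Whether the planner actually chooses the partial index still depends on costs; the point is that the index is now provably usable.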
-
Tom Lane authored
Prior to 9.0, pg_dump handled comments on large objects by dumping a bunch of COMMENT commands into a single BLOB COMMENTS archive object. With sufficiently many such comments, some of the commands would likely get split across bufferloads when restoring, causing failures in direct-to-database restores (though no problem would be evident in text output). This is the same type of issue we have with table data dumped as INSERT commands, and it can be fixed in the same way, by using a mini SQL lexer to figure out where the command boundaries are. Fortunately, the COMMENT commands are no more complex to lex than INSERTs, so we can just re-use the existing lexer for INSERTs. Per bug #10611 from Jacek Zalewski. Back-patch to all active branches.
-
- 12 Jun, 2014 9 commits
-
-
Tom Lane authored
We should report the errno when we get a failure from functions like BufFileWrite. "ERROR: write failed" is unreasonably taciturn for a case that's well within the realm of possibility; I've seen it a couple times in the buildfarm recently, in situations that were probably out-of-disk-space, but it'd be good to see the errno to confirm it. I think this code was originally written without assuming that the buffile.c functions would return useful errno; but most other callers *are* assuming that, and a quick look at the buffile code gives no reason to suppose otherwise. Also, a couple of the old messages were phrased on the assumption that a short read might indicate a logic bug in tuplestore itself; but that code's pretty well tested by now, so a filesystem-level problem seems much more likely.
-
Tom Lane authored
Since we commonly test pg_dump/pg_restore by seeing whether they can dump and restore the regression test database, it behooves us to include some large objects in that test scenario. I tried to include a comment on one of these large objects to improve the test scenario further ... but it turns out that pg_upgrade fails to preserve comments on large objects, and its regression test notices the discrepancy. So uncommenting that COMMENT is a TODO for later.
-
Tom Lane authored
I thought I could get away with hardcoded int4 here, but the buildfarm says differently.
-
Tom Lane authored
Robert Frost is no longer with us, but his copyrights still are, so let's stop using "Stopping by Woods on a Snowy Evening" as test data before somebody decides to sue us. Wordsworth is more safely dead.
-
Tom Lane authored
Memorialize the expected output of the query that libpq has been using for many years to get the OIDs of large-object support functions. Although we really ought to change the way libpq does this, we must expect that this query will remain in use in the field for the foreseeable future, so until we're ready to break compatibility with old libpq versions we'd better check the results stay the same. See the recent lo_create() fiasco.
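The query being memorialized is roughly of this shape (the function list is abbreviated here; lo_initialize() in src/interfaces/libpq/fe-lobj.c has the authoritative version):

    SELECT proname, oid
      FROM pg_catalog.pg_proc
     WHERE proname IN ('lo_open', 'lo_close', 'lo_creat', 'lo_create',
                       'lo_unlink', 'lo_lseek', 'lo_tell', 'loread', 'lowrite');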
-
Tom Lane authored
The previous naming broke the query that libpq's lo_initialize() uses to collect the OIDs of the server-side functions it requires, because that query effectively assumes that there is only one function named lo_create in the pg_catalog schema (and likewise only one lo_open, etc). While we should certainly make libpq more robust about this, the naive query will remain in use in the field for the foreseeable future, so it seems the only workable choice is to use a different name for the new function. lo_from_bytea() won a small straw poll. Back-patch into 9.4 where the new function was introduced.
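Usage of the renamed function; passing 0 as the first argument asks the server to assign the new large object's OID:

    -- create a large object whose initial contents are the given bytea
    SELECT lo_from_bytea(0, '\x48656c6c6f');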
-
Alvaro Herrera authored
-
Tom Lane authored
If a sub-select-in-FROM gets flattened into the upper query, then we naturally get rid of any output columns that are defined in the sub-select text but not actually used in the upper query. However, this doesn't happen when it's not possible to flatten the subquery, for example because it contains GROUP BY, LIMIT, etc. Allowing the subquery to compute useless output columns is often fairly harmless, but sometimes it has significant performance cost: the unused output might be an expensive expression, or it might be a Var from a relation that we could remove entirely (via the join-removal logic) if only we realized that we didn't really need that Var. Situations like this are common when expanding views, so it seems worth taking the trouble to detect and remove unused outputs. Because the upper query's Var numbering for subquery references depends on positions in the subquery targetlist, we don't want to renumber the items we leave behind. Instead, we can implement "removal" by replacing the unwanted expressions with simple NULL constants. This wastes a few cycles at runtime, but not enough to justify more work in the planner.
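A sketch of the case being addressed (all names hypothetical): the subquery can't be flattened because of its GROUP BY, and its expensive output column is never referenced above, so the planner now substitutes a NULL constant for it rather than evaluating it:

    SELECT sub.id
    FROM (
        SELECT id, expensive_func(payload) AS expensive_col  -- never used above
        FROM big_table
        GROUP BY id, payload
    ) AS sub;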
-
Andres Freund authored
Change the order of checks in similar functions to be the same; remove a parameter that's not needed anymore; rename a memory context and expand a couple of comments. Per review comments from Amit Kapila
-
- 11 Jun, 2014 7 commits
-
-
Noah Misch authored
This preserves user-specified LDFLAGS; we already kept user-specified CFLAGS and CPPFLAGS. Given the shortage of complaints and the fact that any problem caused is likely to appear at build time, no back-patch. Dag-Erling Smørgrav and Noah Misch
-
Noah Misch authored
The MSVC build process already did so; this fixes the principal build process to match. Both processes already did likewise for src/common. This lets server builds of src/port reference postgres.exe data symbols.
-
Noah Misch authored
-
Fujii Masao authored
-
Tom Lane authored
When we grabbed this file off the Snowball project's website, we mistakenly supposed that it was in LATIN1 encoding, but evidently it was actually in LATIN2. This resulted in ő (o-double-acute, U+0151, which is code 0xF5 in LATIN2) being misconverted into õ (o-tilde, U+00F5), as complained of in bug #10589 from Zoltán Sörös. We'd have messed up u-double-acute too, but there aren't any of those in the file. Other characters used in the file have the same codes in LATIN1 and LATIN2, which no doubt helped hide the problem for so long. The error is not only ours: the Snowball project also was confused about which encoding is required for Hungarian. But dealing with that will require source-code changes that I'm not at all sure we'll wish to back-patch. Fixing the stopword file seems reasonably safe to back-patch however.
-
Tom Lane authored
-
Tom Lane authored
Let the hacking begin ...
-
- 10 Jun, 2014 1 commit
-
-
Tom Lane authored
Although this bug is already fixed in post-9.2 branches, the case triggering it is quite different from what was under consideration at the time. It seems worth memorializing this example in HEAD just to make sure it doesn't get broken again in future. Extracted from commit 187ae17300776f48b2bd9d0737923b1bf70f606e.
-
- 09 Jun, 2014 2 commits
-
-
Tom Lane authored
Previously, the code used a node label of zero both for strings that contain no bytes beyond the inner tuple's prefix, and for cases where an "allTheSame" inner tuple has to be split to allow a string with a different next byte to be inserted into it. Failing to distinguish these cases meant that if a string ending with the current prefix needed to be inserted into an allTheSame tuple, we got into an infinite loop, because after splitting the tuple we'd descend into the child allTheSame tuple and then find we need to split again. To fix, instead use -1 and -2 as the node labels for these two cases. This requires widening the node label type from "char" to int2, but fortunately SPGiST stores all pass-by-value node label types in their Datum representation, which means that this change is transparently upward compatible so far as the on-disk representation goes. We continue to recognize zero as a dummy node label for reading purposes, but will not attempt to push new index entries down into such a label, so that the loop won't occur even when dealing with an existing index. Per report from Teodor Sigaev. Back-patch to 9.2 where the faulty code was introduced.
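A hedged sketch of data that could reach the problem case (not the exact reproducer from the report): strings sharing a long common prefix, including one that ends exactly at the prefix, loaded into an SP-GiST text index:

    CREATE TABLE spg_test (t text);
    CREATE INDEX spg_test_idx ON spg_test USING spgist (t);
    -- i = 0 yields the bare prefix itself alongside longer strings
    INSERT INTO spg_test
    SELECT 'commonprefix' || repeat('x', i % 50)
    FROM generate_series(0, 10000) AS i;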
-
Alvaro Herrera authored
In a50d9762 I already changed this, but got it wrong for the case where the number of members is larger than the number of entries that fit in the last page of the last segment. As reported by Serge Negodyuck in a followup to bug #8673.
-
- 05 Jun, 2014 3 commits
-
-
Andres Freund authored
A ReorderBufferTransaction's end_lsn, the sentPtr advocated by walsender keepalive messages, and the end location remembered by the decoding get_*changes* SQL functions all use the location of the last read record + 1. I.e. the LSN points to the beginning of the next record. That cannot realistically be changed without changing the replication protocol because that's how keepalive messages have worked since 9.0. The bug is that the logic inside the snapshot builder, which decides whether a transaction's contents should be decoded, assumed the start location would point towards the last byte of the last record. The reason this didn't actually cause visible problems is that currently that decision is only made for commit records. Since interesting transactions always have at least one additional record - containing actual data - we'd never skip a transaction. But if there ever were transactions, or other events, with just one record containing important information, we'd skip them after stopping and restarting logical decoding.
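For reference, the SQL-level interface referred to above, assuming the test_decoding example plugin is available:

    -- create a logical slot, then consume changes through the SQL functions;
    -- the remembered location points just past the last record read
    SELECT pg_create_logical_replication_slot('regression_slot', 'test_decoding');
    SELECT * FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);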
-
Tom Lane authored
It's critical that the backend's idea of LOBLKSIZE match the way data has actually been divided up in pg_largeobject. While we don't provide any direct way to adjust that value, doing so is a one-line source code change and various people have expressed interest recently in changing it. So, just as with TOAST_MAX_CHUNK_SIZE, it seems prudent to record the value in pg_control and cross-check that the backend's compiled-in setting matches the on-disk data. Also tweak the code in inv_api.c so that fetches from pg_largeobject explicitly verify that the length of the data field is not more than LOBLKSIZE. Formerly we just had Asserts for that, which are no protection at all in production builds. In some of the call sites an overlength data value would translate directly to a security-relevant stack clobber, so it seems worth one extra runtime comparison to be sure. In the back branches, we can't change the contents of pg_control; but we can still make the extra checks in inv_api.c, which will offer some amount of protection against running with the wrong value of LOBLKSIZE.
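A quick cross-check of existing data against the compiled-in value (assumes the default LOBLKSIZE of BLCKSZ/4, i.e. 2048 with standard 8K blocks; reading pg_largeobject requires superuser):

    -- no stored chunk should be wider than LOBLKSIZE
    SELECT max(octet_length(data)) FROM pg_largeobject;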
-
Andres Freund authored
Previously there's been a mix between 'slotname' and 'slot_name'. It's not nice to be unnecessarily inconsistent in a new feature. Since a post-beta1 initdb is now required in the wake of eeca4cd3 anyway, fix the inconsistencies. Most of the changes won't affect usage of replication slots, because the majority of them are to function parameter names. The prominent exception is that the recovery.conf parameter 'primary_slotname' is now named 'primary_slot_name'.
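For example, the catalog view and the SQL-level functions now agree on the name, and a standby's recovery.conf says primary_slot_name = 'some_slot' (slot name hypothetical):

    -- column and parameter names now uniformly use slot_name
    SELECT slot_name, plugin, active FROM pg_replication_slots;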
-