- 18 Dec, 2014 4 commits
-
Fujii Masao authored
Etsuro Fujita
-
Andres Freund authored
When starting up from a basebackup taken off a standby, extra logic has to be applied to compute the point where the data directory is consistent. Normal base backups use a WAL record for that purpose, but that isn't possible on a standby. That logic had an error check ensuring that the cluster's control file indicates being in recovery. Unfortunately that check was too strict, disregarding the fact that the control file could also indicate that the cluster was shut down while in recovery. That's possible when a cluster starting from a basebackup is shut down before the backup label has been removed. When everything goes well that's a short window, but when either restore_command or primary_conninfo isn't configured correctly the window can get much wider, because in between reading and unlinking the label we restore the last checkpoint from WAL, which can require additional WAL. To fix, simply also allow starting when the control file indicates "shutdown in recovery". There are nicer fixes imaginable, but they'd be more invasive. Backpatch to 9.2, where support for taking basebackups from standbys was added.
-
Noah Misch authored
Per buildfarm member crake. Back-patch to 9.4, where the TAP suites were introduced.
-
Noah Misch authored
Use SSPI authentication to allow connections exclusively from the OS user that launched the test suite. This closes on Windows the vulnerability that commit be76a6d3 closed on other platforms. Users of "make installcheck" or custom test harnesses can run "pg_regress --config-auth=DATADIR" to activate the same authentication configuration that "make check" would use. Back-patch to 9.0 (all supported versions). Security: CVE-2014-0067
-
- 17 Dec, 2014 7 commits
-
Tom Lane authored
As with NOT NULL constraints, we consider that such constraints are merely reports of constraints that are being enforced by the remote server (or other underlying storage mechanism). Their only real use is to allow planner optimizations, for example in constraint-exclusion checks. Thus, the code changes here amount to little more than removal of the error that was formerly thrown for applying CHECK to a foreign table. (In passing, do a bit of cleanup of the ALTER FOREIGN TABLE reference page, which had accumulated some weird decisions about ordering etc.) Shigeru Hanada and Etsuro Fujita, reviewed by Kyotaro Horiguchi and Ashutosh Bapat.
-
Heikki Linnakangas authored
The old pattern would match files with strange extensions like *.ry or *.lpp. Refactor it to only include files with known extensions, and to make it more readable. Per Andrew Dunstan's suggestion.
-
Tom Lane authored
Spotted by Álvaro Herrera.
-
Tom Lane authored
Adam Brightwell, per report from Martín Marqués.
-
Magnus Hagander authored
Add Windows versions of generated scripts, and make sure we only ignore the scripts in the root directory. Michael Paquier
-
Magnus Hagander authored
Michael Paquier
-
Magnus Hagander authored
Spotted by David Johnston
-
- 16 Dec, 2014 7 commits
-
Tom Lane authored
MapArrayTypeName would copy up to NAMEDATALEN-1 bytes of the base type name, which of course is wrong: after prepending '_' there is only room for NAMEDATALEN-2 bytes. Aside from being the wrong result, this case would lead to overrunning the statically allocated work buffer. This would be a security bug if the function were ever used outside bootstrap mode, but it isn't, at least not in any currently supported branches. Aside from fixing the off-by-one loop logic, this patch gets rid of the static work buffer by having MapArrayTypeName pstrdup its result; the sole caller was already doing that, so this just requires moving the pstrdup call. This saves a few bytes but mainly it makes the API a lot cleaner. Back-patch on the off chance that there is some third-party code using MapArrayTypeName with less-secure input. Pushing pstrdup into the function should not cause any serious problems for such hypothetical code; at worst there might be a short term memory leak. Per Coverity scanning.
-
Tom Lane authored
Code added in 9.4 would attempt to divide by zero in such cases. Noted while testing fix for missing-pclose problem.
-
Tom Lane authored
If the called command fails to return data, runShellCommand forgot to pclose() the pipe before returning. This is fairly harmless in the current code, because pgbench would then abandon further processing of that client thread; so no more than nclients descriptors could be leaked this way. But it's not hard to imagine future improvements whereby that wouldn't be true. In any case, it's sloppy coding, so patch all branches. Found by Coverity.
-
Andrew Dunstan authored
Mostly these issues concern the non-use of function results. These have been changed to use (void) pushJsonbValue(...) instead of assigning the result to a variable that gets overwritten before it is used. There is a larger issue that we should possibly examine the API for pushJsonbValue(), so that instead of returning a value it modifies a state argument. The current idiom is rather clumsy. However, changing that requires quite a bit more work, so this change should do for the moment.
-
Heikki Linnakangas authored
Backpatch the applicable parts, just to make backpatching future patches easier.
-
Heikki Linnakangas authored
It does not include the possible full-page image. While at it, reformat the comment slightly to make it more readable. Reported by Rahila Syed
-
Noah Misch authored
Noticed on a couple of Windows configurations. Petr Jelinek, reviewed by Michael Paquier.
-
- 15 Dec, 2014 6 commits
-
Peter Eisentraut authored
-
Alvaro Herrera authored
-
Tom Lane authored
"PG_RETURN_FLOAT8(x)" is not "return x", except perhaps by accident on some platforms.
-
Heikki Linnakangas authored
Alexander Korotkov, reviewed by Emre Hasegeli.
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
- 14 Dec, 2014 2 commits
-
Tom Lane authored
The ALTER SYSTEM ref page hadn't been held to a very high standard, nor was the feature well integrated into section 18.1 (parameter setting). Also, though commit 4c4654af had improved the structure of 18.1, it also introduced a lot of poor wording, imprecision, and outright falsehoods. Try to clean that up.
-
Tom Lane authored
Set release date, do a final pass of wordsmithing, improve some other new-in-9.4 documentation.
-
- 13 Dec, 2014 5 commits
-
Peter Eisentraut authored
-
Andrew Dunstan authored
Fabrízio de Royes Mello, reviewed by Rushabh Lathia.
-
Tom Lane authored
The code for advancing through the input rows overlooked the case that we might already be past the first row of the row pair now being considered, in case the previous percentile also fell between the same two input rows. Report and patch by Andrew Gierth; logic rewritten a bit for clarity by me.
-
Heikki Linnakangas authored
Mark Dilger
-
- 12 Dec, 2014 9 commits
-
Tom Lane authored
The planner seems to like to do this join query as a hash join, making the output ordering machine-dependent; worse, it's a hash on OIDs, so that it's a bit astonishing that the result doesn't change from run to run even on one machine. Add an ORDER BY to get consistent results. Per buildfarm. I also suppressed output from the final DROP SCHEMA CASCADE, to avoid occasional failures similar to those fixed in commit 81d815dc. That hasn't been observed in the buildfarm yet, but it seems likely to happen in future if we leave it as-is.
-
Andrew Dunstan authored
The functions are: to_jsonb() jsonb_object() jsonb_build_object() jsonb_build_array() jsonb_agg() jsonb_object_agg() Also along the way some better logic is implemented in json_categorize_type() to match that in the newly implemented jsonb_categorize_type(). Andrew Dunstan, reviewed by Pavel Stehule and Alvaro Herrera.
-
Tom Lane authored
In commit 462bd957, I changed postgres_fdw to rely on get_plan_rowmark() instead of get_parse_rowmark(). I still think that's a good idea in the long run, but as Etsuro Fujita pointed out, it doesn't work today because planner.c forces PlanRowMarks to have markType = ROW_MARK_COPY for all foreign tables. There's no urgent reason to change this in the back branches, so let's just revert that part of yesterday's commit rather than trying to design a better solution under time pressure. Also, add a regression test case showing what postgres_fdw does with FOR UPDATE/SHARE. I'd blithely assumed there was one already, else I'd have realized yesterday that this code didn't work.
-
Andrew Dunstan authored
The functions remove object fields, including in nested objects, that have null as a value. In certain cases this can lead to considerably smaller datums, with no loss of semantic information. Andrew Dunstan, reviewed by Pavel Stehule.
-
Heikki Linnakangas authored
This avoids duplicating the code. Michael Paquier, reviewed by Simon Riggs and me.
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
Peter Eisentraut authored
Otherwise the pg_ctl start and stop messages get mixed up with the TAP output, which isn't technically valid.
-
Tom Lane authored
Ordinarily we can omit checking of a WHERE condition that matches a partial index's condition, when we are using an indexscan on that partial index. However, in SELECT FOR UPDATE we must include the "redundant" filter condition in the plan so that it gets checked properly in an EvalPlanQual recheck. The planner got this mostly right, but improperly omitted the filter condition if the index in question was on an inheritance child table. In READ COMMITTED mode, this could result in incorrectly returning just-updated rows that no longer satisfy the filter condition. The cause of the error is using get_parse_rowmark() when get_plan_rowmark() is what should be used during planning. In 9.3 and up, also fix the same mistake in contrib/postgres_fdw. It's currently harmless there (for lack of inheritance support) but wrong is wrong, and the incorrect code might get copied to someplace where it's more significant. Report and fix by Kyotaro Horiguchi. Back-patch to all supported branches.
-