- 02 Oct, 2015 7 commits
-
Tom Lane authored
In cfindloop(), if the initial call to shortest() reports that a zero-length match is possible at the current search start point, but then it is unable to construct any actual match to that, it'll just loop around with the same start point, and thus make no progress. We need to force the start point to be advanced. This is safe because the loop over "begin" points has already tried and failed to match starting at "close", so there is surely no need to try that again.

This bug was introduced in commit e2bd9049, wherein we allowed continued searching after we'd run out of match possibilities, but evidently failed to think hard enough about exactly where we needed to search next. Because of the way this code works, such a match failure is only possible in the presence of backrefs --- otherwise, shortest()'s judgment that a match is possible should always be correct. That probably explains how come the bug has escaped detection for several years.

The actual fix is a one-liner, but I took the trouble to add/improve some comments related to the loop logic.

After fixing that, the submitted test case "()*\1" didn't loop anymore. But it reported failure, though it seems like it ought to match a zero-length string; both Tcl and Perl think it does. That seems to be from overenthusiastic optimization on my part when I rewrote the iteration match logic in commit 173e29aa: we can't just "declare victory" for a zero-length match without bothering to set match data for capturing parens inside the iterator node.

Per fuzz testing by Greg Stark. The first part of this is a bug in all supported branches, and the second part is a bug since 9.2 where the iteration rewrite happened.
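A quick way to check the second fix from psql (a hedged illustration, not part of the patch; it assumes standard_conforming_strings is on so the backslash reaches the regex engine intact): the fuzz-test pattern should now report a zero-length match, agreeing with Tcl and Perl, instead of looping or failing.

    SELECT '' ~ '()*\1' AS matches;  -- expected result: t (a zero-length match)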
-
Tom Lane authored
Commit 9662143f added infrastructure to allow regular-expression operations to be terminated early in the event of SIGINT etc. However, fuzz testing by Greg Stark disclosed that there are still cases where regex compilation could run for a long time without noticing a cancel request. Specifically, the fixempties() phase never adds new states, only new arcs, so it doesn't hit the cancel check I'd put in newstate(). Add one to newarc() as well to cover that.

Some experimentation of my own found that regex execution could also run for a long time despite a pending cancel. We'd put a high-level cancel check into cdissect(), but there was none inside the core text-matching routines longest() and shortest(). Ordinarily those inner loops are very very fast ... but in the presence of lookahead constraints, not so much. As a compromise, stick a cancel check into the stateset cache-miss function, which is enough to guarantee a cancel check at least once per lookahead constraint test.

Making this work required more attention to error handling throughout the regex executor. Henry Spencer had apparently originally intended longest() and shortest() to be incapable of incurring errors while running, so neither they nor their subroutines had well-defined error reporting behaviors. However, that was already broken by the lookahead constraint feature, since lacon() can surely suffer an out-of-memory failure --- which, in the code as it stood, might never be reported to the user at all, but just silently be treated as a non-match of the lookahead constraint. Normalize all that by inserting explicit error tests as needed. I took the opportunity to add some more comments to the code, too.

Back-patch to all supported branches, like the previous patch.
-
Tom Lane authored
It's not terribly hard to devise regular expressions that take large amounts of time and/or memory to process. Recent testing by Greg Stark has also shown that machines with small stack limits can be driven to stack overflow by suitably crafted regexps. While we intend to fix these things as much as possible, it's probably impossible to eliminate slow-execution cases altogether. In any case we don't want to treat such things as security issues. The history of that code should already discourage prudent DBAs from allowing execution of regexp patterns coming from possibly-hostile sources, but it seems like a good idea to warn about the hazard explicitly.

Currently, similar_escape() allows access to enough of the underlying regexp behavior that the warning has to apply to SIMILAR TO as well. We might be able to make it safer if we tightened things up to allow only SQL-mandated capabilities in SIMILAR TO; but that would be a subtly non-backwards-compatible change, so it requires discussion and probably could not be back-patched.

Per discussion among pgsql-security list.
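The change itself is documentation, but a common operational safeguard in the spirit of the new warning (an illustration, not part of the patch; the table and column names are hypothetical) is to cap execution time rather than trust the pattern source:

    -- Bound regexp evaluation with a statement timeout for this session
    SET statement_timeout = '1s';
    SELECT d.val ~ p.pattern
    FROM user_docs d, untrusted_patterns p;  -- hypothetical tables
    RESET statement_timeout;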
-
Tom Lane authored
The "floatrange" example is a bit too simple because float8mi can be used without any additional type conversion. Add an example that does have to account for that, and do some minor other wordsmithing.
-
Alvaro Herrera authored
Bug noticed by Fujii Masao
-
Peter Eisentraut authored
The output of a typical pg_rewind run contained a mix of capitalized and not-capitalized and punctuated and not-punctuated phrases for no apparent reason. Make that consistent. Also fix some problems in other messages.
-
Peter Eisentraut authored
-
- 01 Oct, 2015 8 commits
-
Tom Lane authored
This case seems to have been overlooked when unvalidated check constraints were introduced, in 9.2. The code would attempt to dump such constraints over again for each child table, even though adding them to the parent table is sufficient.

In 9.2 and 9.3, also fix contrib/pg_upgrade/Makefile so that the "make clean" target fully cleans up after a failed test. This evidently got dealt with at some point in 9.4, but it wasn't back-patched. I ran into it while testing this fix ...

Per bug #13656 from Ingmar Brouns.
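A minimal setup in the spirit of the bug report (hypothetical table names): the NOT VALID constraint is inherited by the child, so pg_dump should emit the ADD CONSTRAINT only for the parent.

    CREATE TABLE parent (a int);
    CREATE TABLE child () INHERITS (parent);
    ALTER TABLE parent ADD CONSTRAINT a_positive CHECK (a > 0) NOT VALID;
    -- Before the fix, pg_dump repeated this constraint for child as well.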
-
Alvaro Herrera authored
Module initialization was still not completely correct after commit 6b619551, per crash report from Takashi Ohnishi. To fix, instead of trying to monkey around with the value of the GUC setting directly, add a separate boolean flag that enables the feature on a standby, but only for the startup (recovery) process, when it sees that its master server has the feature enabled.

Discussion: http://www.postgresql.org/message-id/ca44c6c7f9314868bdc521aea4f77cbf@MP-MSGSS-MBX004.msg.nttdata.co.jp

Also change the deactivation routine to delete all segment files rather than leaving the last one around. (This doesn't need separate WAL-logging, because on recovery we execute the same deactivation routine anyway.)

In passing, clean up the code structure somewhat, particularly so that xlog.c doesn't know so much about when to activate/deactivate the feature.

Thanks to Fujii Masao for testing and Petr Jelínek for off-list discussion.

Back-patch to 9.5, where commit_ts was introduced.
-
Fujii Masao authored
Previously "GRANT * ON * TO " was tab-completed to add an extra "TO", rather than with a list of roles. This is the bug that commit 2f888070 introduced unexpectedly. This commit fixes that incorrect tab-completion. Thomas Munro, reviewed by Jeff Janes.
-
Tom Lane authored
Etsuro Fujita spotted a thinko in the README commentary.
-
Fujii Masao authored
Previously it was documented that the details of the HeapTupleHeaderData struct could be found in htup.h. This is no longer correct, because the struct is now defined in htup_details.h.

Back-patch to 9.3, where the definition of the HeapTupleHeaderData struct was moved from htup.h to htup_details.h.

Michael Paquier
-
Robert Haas authored
KaiGai Kohei, with one correction by me.
-
Tom Lane authored
Not a lot of commentary needed here really.
-
Tom Lane authored
If some existing listener is far behind, incoming new listener sessions would start from that session's read pointer and then need to advance over many already-committed notification messages, which they have no interest in. This was expensive in itself and also thrashed the pg_notify SLRU buffers a lot more than necessary.

We can improve matters considerably in typical scenarios, without much added cost, by starting from the furthest-ahead read pointer, not the furthest-behind one. We do have to consider only sessions in our own database when doing this, which requires an extra field in the data structure, but that's a pretty small cost.

Back-patch to 9.0 where the current LISTEN/NOTIFY logic was introduced.

Matt Newell, slightly adjusted by me
-
- 30 Sep, 2015 5 commits
-
Robert Haas authored
A Gather executor node runs any number of copies of a plan in an equal number of workers and merges all of the results into a single tuple stream. It can also run the plan itself, if the workers are unavailable or haven't started up yet.

It is intended to work with the Partial Seq Scan node which will be added in future commits. It could also be used to implement parallel query of a different sort by itself, without help from Partial Seq Scan, if the single_copy mode is used. In that mode, a worker executes the plan, and the parallel leader does not, merely collecting the worker's results. So, a Gather node could be inserted into a plan to split the execution of that plan across two processes. Nested Gather nodes aren't currently supported, but we might want to add support for that in the future.

There's nothing in the planner to actually generate Gather nodes yet, so it's not quite time to break out the champagne. But we're getting close.

Amit Kapila. Some design suggestions were provided by me, and I also reviewed the patch. Single-copy mode, documentation, and other minor changes also by me.
-
Robert Haas authored
If a transaction or subtransaction creates a ParallelContext but ends without calling InitializeParallelDSM, the previous code would seg fault. Fix that.
-
Stephen Frost authored
When considering which policies should be included, rather than look at individual bits of the query (e.g., if a RETURNING clause exists, or if a WHERE clause exists which is referencing the table, or if it's a FOR SHARE/UPDATE query), consider any case where we've determined the user needs SELECT rights on the relation while doing an UPDATE or DELETE to be a case where we apply SELECT policies, and any case where we've determined that the user needs UPDATE rights on the relation while doing a SELECT to be a case where we apply UPDATE policies.

This simplifies the logic and addresses concerns that a user could use UPDATE or DELETE with a WHERE clause to determine if rows exist, or they could use SELECT .. FOR UPDATE to lock rows which they are not actually allowed to modify through UPDATE policies.

Use list_append_unique() to avoid adding the same quals multiple times, as, on balance, the cost of checking when adding the quals will almost always be cheaper than keeping them and doing busywork for each tuple during execution.

Back-patch to 9.5 where RLS was added.
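A sketch of the probing scenario this closes (hypothetical table and policy names, run as a non-owner role): with only an UPDATE policy applied, an UPDATE's WHERE clause could formerly reveal whether rows exist; now that read falls under the table's SELECT policies as well.

    CREATE TABLE accounts (owner text, secret text);
    ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
    CREATE POLICY acct_sel ON accounts FOR SELECT USING (owner = current_user);
    CREATE POLICY acct_upd ON accounts FOR UPDATE USING (owner = current_user);
    -- The WHERE clause reads the rows, so acct_sel now governs this too:
    UPDATE accounts SET secret = secret WHERE secret = 'guess';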
-
Tom Lane authored
We seem to have lost a line somewhere along the way in the comment block that discusses async.c's locks, because it suddenly refers to "both locks" without previously having mentioned more than one. Add a sentence to make that read more sanely. Also, refer to the "pos of the slowest backend" not the "tail of the slowest backend", since we have no per-backend value called "tail".
-
Tatsuo Ishii authored
The tolerance (larger than the actual tps number) increases as the number of threads decreases. The bug has been there since thread support was introduced in 9.0.

Because back-patching would introduce an incompatible behavior change in the reported tps number, the fix is committed to the master and 9.5 stable branches only.

Problem spotted by me and fix proposed by Fabien COELHO. Note that his original patch also included a code refactoring unrelated to the problem, which I omitted.
-
- 29 Sep, 2015 4 commits
-
Alvaro Herrera authored
There are three main changes here:

1. No longer cause a start failure in a standby if the feature is disabled in postgresql.conf but enabled in the master. This reverts one part of commit 4f3924d9; what we keep is the ability of the standby to activate/deactivate the module (which includes creating and removing segments as appropriate) during replay of such actions in the master.

2. Replay WAL records affecting commitTS even if the feature is disabled. This means the standby will always have the same state as the master after replay.

3. Have COMMIT PREPARE record the transaction commit time as well. We were previously only applying it in the normal transaction commit path.

Author: Petr Jelínek
Discussion: http://www.postgresql.org/message-id/CAHGQGwHereDzzzmfxEBYcVQu3oZv6vZcgu1TPeERWbDc+gQ06g@mail.gmail.com
Discussion: http://www.postgresql.org/message-id/CAHGQGwFuzfO4JscM9LCAmCDCxp_MfLvN4QdB+xWsS-FijbjTYQ@mail.gmail.com

Additionally, I cleaned up nearby code related to replication origins, which I found a bit hard to follow, and fixed a couple of typos.

Backpatch to 9.5, where this code was introduced.

Per bug reports from Fujii Masao and subsequent discussion.
-
Tom Lane authored
We were passing error message texts to croak() verbatim, which turns out not to work if the text contains non-ASCII characters; Perl mangles their encoding, as reported in bug #13638 from Michal Leinweber. To fix, convert the text into a UTF8-encoded SV first.

It's hard to test this without risking failures in different database encodings; but we can follow the lead of plpython, which is already assuming that no-break space (U+00A0) has an equivalent in all encodings we care about running the regression tests in (cf commit 2dfa15de).

Back-patch to 9.1. The code is quite different in 9.0, and anyway it seems too risky to put something like this into 9.0's final minor release.

Alex Hunsaker, with suggestions from Tim Bunce and Tom Lane
-
Robert Haas authored
Etsuro Fujita
-
Robert Haas authored
This code provides infrastructure for a parallel leader to start up parallel workers to execute subtrees of the plan tree being executed in the master. User-supplied parameters from ParamListInfo are passed down, but PARAM_EXEC parameters are not. Various other constructs, such as initplans, subplans, and CTEs, are also not currently shared. Nevertheless, there's enough here to support a basic implementation of parallel query, and we can lift some of the current restrictions as needed.

Amit Kapila and Robert Haas
-
- 28 Sep, 2015 10 commits
-
Andrew Dunstan authored
Backpatch to 9.5 where the error was introduced.
-
Andrew Dunstan authored
Backpatch to all live branches to keep the code in sync.
-
Alvaro Herrera authored
It was introduced alongside replication origins, by commit 5aa23504, so backpatch to 9.5. Pointed out by Fujii Masao
-
Tom Lane authored
Thom Brown reported that SSL connections didn't seem to work on Windows in 9.5. Asif Naeem figured out that the cause was my_sock_read() looking at "errno" when it needs to look at "SOCK_ERRNO". This mistake was introduced in commit 680513ab, which cloned the backend's custom SSL BIO code into libpq, and didn't translate the errno handling properly.

Moreover, it introduced unnecessary errno save/restore logic, which was particularly confusing because it was incomplete; and it failed to check for all three of EINTR, EAGAIN, and EWOULDBLOCK in my_sock_write. (That might not be necessary; but since we're copying well-tested backend code that does do that, it seems prudent to copy it faithfully.)
-
Stephen Frost authored
To make sure that pg_dump/pg_restore function properly with RLS policies, arrange to have a few of them left around at the end of the regression tests. Back-patch to 9.5 where RLS was added.
-
Alvaro Herrera authored
While at it, trim the includes list in copy.c. The planner headers cannot be removed, but there are a few others that are not of any use.
-
Andres Freund authored
When taking the UPDATE path in an INSERT .. ON CONFLICT .. UPDATE, tables with oids were not supported. The tuple generated by the update target list was projected without space for an oid - a simple oversight.

Reported-By: Peter Geoghegan
Author: Andres Freund
Backpatch: 9.5, where ON CONFLICT was introduced
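A minimal reproduction under the stated conditions (hypothetical names; WITH OIDS was still valid syntax in 9.5-era releases):

    CREATE TABLE kv (k int PRIMARY KEY, v text) WITH OIDS;
    INSERT INTO kv VALUES (1, 'a');
    -- Taking the UPDATE path previously projected a tuple with no oid space:
    INSERT INTO kv VALUES (1, 'b')
        ON CONFLICT (k) DO UPDATE SET v = excluded.v;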
-
Robert Haas authored
We do this mostly everywhere, so it seems just as well to do it here, too. Thomas Munro
-
Robert Haas authored
Otherwise, we effectively act as if abs_top_builddir were the root directory, which is quite dangerous if the user happens to have permissions to do things there. This can crop up in PGXS builds, for example. Report by Sandro Santilli, patch by me, review by Noah Misch.
-
Peter Eisentraut authored
Make quoting style match existing style. Improve plural support.
-
- 27 Sep, 2015 3 commits
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
Peter Eisentraut authored
With the arrival of the CUBE key word/feature, the index entries for the cube extension and the CUBE feature were collapsed into one. Tweak the entry for the cube extension so they are separate entries.
-
- 26 Sep, 2015 2 commits
-
Andres Freund authored
In 9.5 and master there is no need to support legacy truncation. This is just committed separately to make it easier to backpatch the WAL logged multixact truncation to 9.3 and 9.4 if we later decide to do so.

I bumped master's magic from 0xD086 to 0xD088 and 9.5's from 0xD085 to 0xD087 to avoid 9.5 reusing a value that has been in use on master while keeping the numbers increasing between major versions.

Discussion: 20150621192409.GA4797@alap3.anarazel.de
Backpatch: 9.5
-
Andres Freund authored
The fact that multixact truncations are not WAL logged has caused a fair share of problems. Amongst others it requires doing computations during recovery while the database is not in a consistent state, delaying truncations till checkpoints, and handling members being truncated, but offsets not. We tried to put bandaids on lots of these issues over the last years, but it seems time to change course. Thus this patch introduces WAL logging for multixact truncations.

This allows:
1) to perform the truncation directly during VACUUM, instead of delaying it to the checkpoint;
2) to avoid looking at the offsets SLRU for truncation during recovery -- we can just use the master's values;
3) to simplify a fair amount of logic to keep in-memory limits straight; this has gotten much easier.

During the course of fixing this a bunch of additional bugs had to be fixed:
1) Data was not purged from memory in the members SLRU before deleting segments. This happened to be hard or impossible to hit due to the interlock between checkpoints and truncation.
2) find_multixact_start() relied on SimpleLruDoesPhysicalPageExist - but that doesn't work for offsets that haven't yet been flushed to disk. Add code to flush the SLRUs to fix. Not pretty, but it feels slightly safer to only make decisions based on actual on-disk state.
3) find_multixact_start() could be called concurrently with a truncation and thus fail. Via SetOffsetVacuumLimit() that could lead to a round of emergency vacuuming. The problem remains in pg_get_multixact_members(), but that's quite harmless.

For now this is going to only get applied to 9.5+, leaving the issues in the older branches in place. It is quite possible that we need to backpatch at a later point though.

For the case this gets backpatched we need to handle that an updated standby may be replaying WAL from a not-yet upgraded primary. We have to recognize that situation and use "old style" truncation (i.e. looking at the SLRUs) during WAL replay. In contrast to before, this now happens in the startup process, when replaying a checkpoint record, instead of the checkpointer. Doing truncation in the restartpoint is incorrect: restartpoints can happen much later than the original checkpoint, thereby leading to wraparound. To avoid "multixact_redo: unknown op code 48" errors standbys would have to be upgraded before primaries.

A later patch will bump the WAL page magic, and remove the legacy truncation codepaths. Legacy truncation support is just included to make a possible future backpatch easier.

Discussion: 20150621192409.GA4797@alap3.anarazel.de
Reviewed-By: Robert Haas, Alvaro Herrera, Thomas Munro
Backpatch: 9.5 for now
-
- 25 Sep, 2015 1 commit
-
Tom Lane authored
This replaces ill-fated commit 5ddc7288, which was reverted because it broke active uses of FK cache entries. In this patch, we still do nothing more to invalidatable cache entries than mark them as needing revalidation, so we won't break active uses.

To keep down the overhead of InvalidateConstraintCacheCallBack(), keep a list of just the currently-valid cache entries. (The entries are large enough that some added space for list links doesn't seem like a big problem.) This would still be O(N^2) when there are many valid entries, though, so when the list gets too long, just force the "sinval reset" behavior to remove everything from the list. I set the threshold at 1000 entries, somewhat arbitrarily. Possibly that could be fine-tuned later.

Another item for future study is whether it's worth adding reference counting so that we could safely remove invalidated entries. As-is, problem cases are likely to end up with large and mostly invalid FK caches.

Like the previous attempt, backpatch to 9.3.

Jan Wieck and Tom Lane
-