- 18 Nov, 2013 1 commit
-
-
Heikki Linnakangas authored
Previously, if VACUUM skipped vacuuming a page because it's pinned, it didn't count that page as scanned. However, that meant that relfrozenxid was not bumped up either, which prevented anti-wraparound vacuum from doing its job. Report by Миша Тюрин, analysis and patch by Sergey Burladyn and Jeff Janes. Backpatch to 9.2, where the skip-locked-pages behavior was introduced.
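An illustrative check, not taken from the commit (the table name is a placeholder): with this fix, a whole-table vacuum can again advance relfrozenxid even when some pages were merely pinned, which shows up as a lower age() in the catalogs.

    -- How far each table's relfrozenxid lags behind the current transaction counter.
    SELECT c.relname, age(c.relfrozenxid) AS xid_age
    FROM pg_class c
    WHERE c.relkind = 'r'
    ORDER BY age(c.relfrozenxid) DESC
    LIMIT 10;

    -- A whole-table vacuum should now advance relfrozenxid even if some pages were pinned.
    VACUUM (FREEZE, VERBOSE) my_table;   -- "my_table" is a placeholder name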
-
- 17 Nov, 2013 1 commit
-
-
Tom Lane authored
Pavel Stehule, reviewed by Jeevan Chalke and Atri Sharma
-
- 16 Nov, 2013 4 commits
-
-
Tom Lane authored
This patch improves performance of most built-in aggregates that formerly used a NUMERIC or NUMERIC array as their transition type; this includes not only aggregates on numeric inputs, but some aggregates on integer inputs where overflow of an int8 value is a possibility. The code now uses a special-purpose data structure to avoid array construction and deconstruction overhead, as well as packing and unpacking overhead for numeric values. These aggregates' transition type is now declared as INTERNAL, since it doesn't correspond to any SQL data type. To keep the planner from thinking that that means a lot of storage will be used, we make use of the just-added pg_aggregate.aggtransspace feature. The space estimate is set to 128 bytes, which is at least in the right ballpark. Hadi Moshayedi, reviewed by Pavel Stehule and Tomas Vondra
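For context, a catalog query along these lines (written for a build that includes this change and the aggtransspace column added by the next commit in this list; the aggregate names in the filter are just examples) shows the new INTERNAL transition type and the 128-byte estimate:

    -- Transition type and declared state size for some of the affected aggregates.
    SELECT p.proname, a.aggtranstype::regtype AS transition_type, a.aggtransspace
    FROM pg_aggregate a
    JOIN pg_proc p ON p.oid = a.aggfnoid
    WHERE p.proname IN ('avg', 'sum', 'stddev_samp')
      AND a.aggtranstype = 'internal'::regtype;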
-
Tom Lane authored
Formerly the planner had a hard-wired rule of thumb for guessing the amount of space consumed by an aggregate function's transition state data. This estimate is critical to deciding whether it's OK to use hash aggregation, and in many situations the built-in estimate isn't very good. This patch adds a column to pg_aggregate wherein a per-aggregate estimate can be provided, overriding the planner's default, and infrastructure for setting the column via CREATE AGGREGATE. It may be that additional smarts will be required in future, perhaps even a per-aggregate estimation function. But this is already a step forward. This is extracted from a larger patch to improve the performance of numeric and int8 aggregates. I (tgl) thought it was worth reviewing and committing this infrastructure separately. In this commit, all built-in aggregates are given aggtransspace = 0, so no behavior should change. Hadi Moshayedi, reviewed by Pavel Stehule and Tomas Vondra
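A sketch of the user-facing side, using array_append as a toy transition function; SSPACE is the CREATE AGGREGATE spelling that sets this column in releases that expose it, and the 1024-byte figure is just an assumed estimate:

    -- A toy aggregate that collects its inputs into an array.  SSPACE (backed by
    -- pg_aggregate.aggtransspace) tells the planner roughly how large the transition
    -- state is expected to get; 1024 bytes here is only an assumed figure.
    CREATE AGGREGATE collect(anyelement) (
        SFUNC    = array_append,
        STYPE    = anyarray,
        INITCOND = '{}',
        SSPACE   = 1024
    );

A larger SSPACE makes the planner less inclined to choose hash aggregation for the aggregate, since the per-group state is assumed to be bigger.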
-
Peter Eisentraut authored
-
Tom Lane authored
pgbench formerly failed on lines longer than BUFSIZ, unexpectedly splitting them into multiple commands. Allow it to work with any length of input line. Sawada Masahiko
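A minimal custom script illustrating the setup (file name and contents are hypothetical); it would be run with something like pgbench -n -f bench.sql, and any of its lines may now be arbitrarily long:

    -- bench.sql (hypothetical custom script): pgbench reads one command per line,
    -- so a single very long SELECT used to be split once it exceeded BUFSIZ.
    \setrandom aid 1 100000
    SELECT abalance FROM pgbench_accounts WHERE aid = :aid;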
-
- 15 Nov, 2013 10 commits
-
-
Tom Lane authored
A couple of places that should have been iterating over WORDS_PER_CHUNK words were iterating over WORDS_PER_PAGE words instead. This thinko accidentally failed to fail, because (at least on common architectures with default BLCKSZ) WORDS_PER_CHUNK is a bit less than WORDS_PER_PAGE, and the extra words being looked at were always zero so nothing happened. Still, it's a bug waiting to happen if anybody ever fools with the parameters affecting TIDBitmap sizes, and it's a small waste of cycles too. So back-patch to all active branches. Etsuro Fujita
-
Tom Lane authored
In --inserts and especially --column-inserts mode, we can get a useful speedup by generating the common prefix of all a table's INSERT commands just once, and then printing the prebuilt string for each row. This avoids multiple invocations of fmtId() and other minor fooling around. David Rowley
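For reference, a sketch of the output shape this affects (table and data are made up):

    -- Sample --column-inserts output for a hypothetical table; the text up through
    -- "VALUES" is identical for every row, so pg_dump now builds it only once.
    INSERT INTO accounts (id, name, balance) VALUES (1, 'alice', 10.00);
    INSERT INTO accounts (id, name, balance) VALUES (2, 'bob', 25.50);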
-
Tom Lane authored
The previous coding was fairly unreadable and drew double-free warnings from clang. I believe the double free was actually not reachable, because PQconnectionNeedsPassword is coded to not return true if a password was provided, so that the loop can't iterate more than twice. Nonetheless it seems worth rewriting. No back-patch since this is just cosmetic.
-
Tom Lane authored
Bug #8591 from Claudio Freire demonstrates that get_eclass_for_sort_expr must be able to compute valid em_nullable_relids for any new equivalence class members it creates. I'd worried about this in the commit message for db9f0e1d, but claimed that it wasn't a problem because multi-member ECs should already exist when it runs. That is transparently wrong, though, because this function is also called by initialize_mergeclause_eclasses, which runs during deconstruct_jointree. The example given in the bug report (which the new regression test item is based upon) fails because the COALESCE() expression is first seen by initialize_mergeclause_eclasses rather than process_equivalence. Fixing this requires passing the appropriate nullable_relids set to get_eclass_for_sort_expr, and it requires new code to compute that set for top-level expressions such as ORDER BY, GROUP BY, etc. We store the top-level nullable_relids in a new field in PlannerInfo to avoid computing it many times. In the back branches, I've added the new field at the end of the struct to minimize ABI breakage for planner plugins. There doesn't seem to be a good alternative to changing get_eclass_for_sort_expr's API signature, though. There probably aren't any third-party extensions calling that function directly; moreover, if there are, they probably need to think about what to pass for nullable_relids anyway. Back-patch to 9.2, like the previous patch in this area.
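A heavily hedged illustration, not the reproducer from the bug report: a query of the general shape described, where a COALESCE arising from a FULL JOIN's merged column is first encountered as a mergejoinable join key rather than through process_equivalence.

    -- Hypothetical tables; just the general shape, not the query from bug #8591.
    CREATE TABLE a (id int);
    CREATE TABLE b (id int);
    CREATE TABLE c (id int);

    SELECT *
    FROM (a FULL JOIN b USING (id)) ab
    JOIN c ON c.id = ab.id          -- ab.id is effectively COALESCE(a.id, b.id)
    ORDER BY ab.id;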
-
Tom Lane authored
plpgsql likes to cache query plans and simple-expression execution state trees across calls. This is a considerable win for multiple executions of the same function. However, it's useless for DO blocks, since by definition those are executed only once and discarded. Nonetheless, we were allowing a DO block's expression execution trees to survive until end of transaction, resulting in a significant intra-transaction memory leak, as reported by Yeb Havinga. Worse, if the DO block exited with an error, the compiled form of the block's code was leaked till end of session --- along with subsidiary plancache entries. To fix, make DO blocks keep their expression execution trees in a private EState that's deleted at exit from the block, and add a PG_TRY block to plpgsql_inline_handler to make sure that memory cleanup happens even on error exits. Also add a regression test covering error handling in a DO block, because my first try at this broke that. (The test is not meant to prove that we don't leak memory anymore, though it could be used for that with a much larger loop count.) Ideally we'd back-patch this into all versions supporting DO blocks; but the patch needs to add a field to struct PLpgSQL_execstate, and that would break ABI compatibility for third-party plugins such as the plpgsql debugger. Given the small number of complaints so far, fixing this in HEAD only seems like an acceptable choice.
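A minimal sketch of the leaking pattern (contents arbitrary): repeating one-shot blocks like this many times inside a single transaction used to accumulate memory until commit.

    BEGIN;
    -- Each DO block is one-shot; before this fix its simple-expression execution
    -- trees lived until COMMIT, so issuing many such blocks in one transaction
    -- steadily leaked memory.
    DO $$
    DECLARE n numeric := 0;
    BEGIN
        FOR i IN 1..1000 LOOP
            n := n + 1;   -- a "simple expression"; its execution state is cached
        END LOOP;
    END
    $$;
    COMMIT;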
-
Tom Lane authored
There were enough typos in the comments to annoy me ...
-
Kevin Grittner authored
Commit 061b88c7 saved argv0 to a global buffer without ensuring that it was zero terminated, allowing references to it to overrun the buffer and access other memory. This probably would not have presented any security risk, but could have resulted in very confusing failures if the path to the executable was very long. Reported by David Rowley
-
Robert Haas authored
Colin 't Hart
-
Heikki Linnakangas authored
Andres Freund
-
Heikki Linnakangas authored
This speeds up nextval() and currval() when you touch a lot of different sequences in the same backend. David Rowley
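The pattern that benefits, sketched with placeholder sequence names: one backend touching several distinct sequences, each nextval()/currval() call consulting the backend-local sequence state.

    CREATE SEQUENCE seq_a;
    CREATE SEQUENCE seq_b;
    CREATE SEQUENCE seq_c;

    -- Repeated calls across many different sequences in one session are the case
    -- whose per-backend state lookup got cheaper.
    SELECT nextval('seq_a'), nextval('seq_b'), nextval('seq_c');
    SELECT currval('seq_a'), currval('seq_b'), currval('seq_c');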
-
- 14 Nov, 2013 3 commits
-
-
Tom Lane authored
Previous commit shows the need for this. The coverage isn't really thorough, but it's better than nothing.
-
Peter Eisentraut authored
-
- 13 Nov, 2013 8 commits
-
-
Tom Lane authored
The previous text was a bit misleading, as well as unnecessarily vague about what information would be discarded. Per gripe from Craig Skinner.
-
Andrew Dunstan authored
-
Robert Haas authored
The old code entered a new hash table entry first, then scanned pg_class to determine what value to fill in, and then populated the entry. This fails to work properly if a cache invalidation happens as a result of opening pg_class. Repair. Along the way, get rid of the idea of blowing away the entire hash table as a method of processing invalidations. Instead, just delete all the entries one by one. This is probably not quite as cheap but it's simpler, and shouldn't happen often. Andres Freund
-
Bruce Momjian authored
-
Kevin Grittner authored
It's a trivial amount of RAM held until the end of the regression test run; but it's probably worth fixing to silence future warnings from code analyzers. This was the only memory leak pointed out by clang's static code analysis tool.
-
Heikki Linnakangas authored
The root page is filled with as many items as fit, and the rest are inserted using normal insertions. However, I fumbled the variable names, and the code actually memcpy'd all the items on the page, overflowing the buffer. While at it, rename the variable to make the distinction more clear. Reported by Teodor Sigaev. This bug was introduced by my recent refactorings, so no backpatching required.
-
Peter Eisentraut authored
This avoids an unused variable warning on Windows when building without asserts. From: David Rowley <dgrowleyml@gmail.com>
-
Peter Eisentraut authored
Avoid the use of **, which was only introduced in Git version 1.8.2.
-
- 12 Nov, 2013 4 commits
-
-
Robert Haas authored
We can't search for the isolationtester binary until after we've set up the environment, because otherwise when find_other_exec() tries to invoke it with the -V option, it might fail for inability to locate a working libpq. So postpone that step. Andres Freund
-
Robert Haas authored
Reported by Thom Brown.
-
Magnus Hagander authored
Per report from Colin 't Hart
-
Peter Eisentraut authored
This removes the remaining pieces of the IRIX port that was removed by ea91a6be.
-
- 11 Nov, 2013 4 commits
-
-
Tom Lane authored
The pretty-printing logic in ruleutils.c operates by inserting a newline and some indentation whitespace into strings that are already valid SQL. This naturally results in leaving some trailing whitespace before the newline in many cases; which can be annoying when processing the output with other tools, as complained of by Joe Abbate. We can fix that in a pretty localized fashion by deleting any trailing whitespace before we append a pretty-printing newline. In addition, we have to modify the code inserted by commit 2f582f76 so that we also delete trailing whitespace when transposing items from temporary buffers into the main result string, when a temporary item starts with a newline. This results in rather voluminous changes to the regression test results, but it's easily verified that they are only removal of trailing whitespace. Back-patch to 9.3, because the aforementioned commit resulted in many more cases of trailing whitespace than had occurred in earlier branches.
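An easy way to see the pretty-printed output in question (object names are placeholders):

    CREATE TABLE t (a int, b int);
    CREATE VIEW v AS SELECT a, b FROM t WHERE a > 0;

    -- The second argument requests pretty-printing, which inserts the newlines and
    -- indentation that this commit stops padding with trailing whitespace.
    SELECT pg_get_viewdef('v'::regclass, true);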
-
Tom Lane authored
Although the SQL spec forbids duplicate table aliases, historically we've allowed queries like SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z on the grounds that the aliased join (z) hides the aliases within it, therefore there is no conflict between the two RTEs named "x". The LATERAL patch broke this, on the misguided basis that "x" could be ambiguous if tab3 were a LATERAL subquery. To avoid breaking existing queries, it's better to allow this situation and complain only if tab3 actually does contain an ambiguous reference. We need only remove the check that was throwing an error, because the column lookup code is already prepared to handle ambiguous references. Per bug #8444.
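Concretely (table and column names invented):

    CREATE TABLE tab1 (c1 int);
    CREATE TABLE tab2 (c2 int);
    CREATE TABLE tab3 (c3 int);

    -- Accepted again: the aliased join "z" hides the inner "x", so the two "x"
    -- aliases do not clash.
    SELECT * FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z;
    -- An error is now raised only if something inside z (e.g. a LATERAL subquery
    -- in place of tab3) actually makes an ambiguous reference to "x".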
-
Magnus Hagander authored
This is a fix similar to c6ec8793aa59d1842082e14b4b4aae7d4bd883fd in 9.2. This should never happen in 9.3 and newer, since the special case cannot occur there, but this patch synchronizes the code so there is no confusion about why they differ. An empty block is as harmless in 9.3 as it was in 9.2, and can safely be ignored.
-
- 10 Nov, 2013 1 commit
-
-
Peter Eisentraut authored
Set per-file-type attributes in .gitattributes to fine-tune whitespace checks. With the associated cleanups, the tree is now clean for git.
-
- 09 Nov, 2013 1 commit
-
-
Robert Haas authored
Commit 9b4d52f2 failed to notice that pg_regress_ecpg needed updating. This patch was independently submitted by both David Rowley and Andres Freund.
-
- 08 Nov, 2013 3 commits
-
-
Heikki Linnakangas authored
If a page is deleted, and reused for something else, just as a search is following a rightlink to it from its left sibling, the search would continue scanning whatever the new contents of the page are. That could lead to incorrect query results, or even something more curious if the page is reused for a different kind of page. To fix, modify the search algorithm to lock the next page before releasing the previous one, and refrain from deleting pages from the leftmost branch of the tree. Add a new Concurrency section to the README, explaining why this works. There is a lot more one could say about concurrency in GIN, but that's for another patch. Backpatch to all supported versions.
-
Robert Haas authored
Inspired by, but different from, a patch from Ivan Lezhnjov IV
-
Robert Haas authored
This makes it possible to, for example, use the isolation tester to test a contrib module. Andres Freund
-