- 05 Sep, 2013 3 commits
-
Heikki Linnakangas authored
If the hash table backing a catalog cache becomes too full (fillfactor > 2), enlarge it. A new buckets array, double the size of the old, is allocated, and all entries in the old hash are moved to the right bucket in the new hash.

This has two benefits. First, cache lookups don't get so expensive when there are lots of entries in a cache, like if you access hundreds of thousands of tables. Second, we can make the (initial) sizes of the caches much smaller, which saves memory.

This patch dials down the initial sizes of the catcaches. The new sizes are chosen so that a backend that only runs a few basic queries still won't need to enlarge any of them.
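For illustration, a minimal standalone sketch of the enlarge-and-rehash step described above; the types and names here are simplified stand-ins, not the actual catcache code:

```c
#include <stdlib.h>

/* Simplified chained hash table; illustrative only, not the catcache. */
typedef struct Entry
{
    unsigned int  hashvalue;
    struct Entry *next;
} Entry;

typedef struct HashTable
{
    int     nbuckets;           /* always a power of two */
    int     nentries;
    Entry **buckets;
} HashTable;

/* Double the bucket array and move every entry to its new bucket. */
static void
rehash(HashTable *ht)
{
    int     newnbuckets = ht->nbuckets * 2;
    Entry **newbuckets = calloc(newnbuckets, sizeof(Entry *));

    for (int i = 0; i < ht->nbuckets; i++)
    {
        Entry *e = ht->buckets[i];

        while (e != NULL)
        {
            Entry *next = e->next;
            int    b = e->hashvalue & (newnbuckets - 1);

            e->next = newbuckets[b];
            newbuckets[b] = e;
            e = next;
        }
    }
    free(ht->buckets);
    ht->buckets = newbuckets;
    ht->nbuckets = newnbuckets;
}

/* Insert, enlarging first when fillfactor (nentries/nbuckets) exceeds 2. */
static void
insert(HashTable *ht, Entry *e)
{
    int b;

    if (ht->nentries / ht->nbuckets > 2)
        rehash(ht);
    b = e->hashvalue & (ht->nbuckets - 1);
    e->next = ht->buckets[b];
    ht->buckets[b] = e;
    ht->nentries++;
}
```

Keeping nbuckets a power of two makes the bucket computation a cheap mask rather than a modulo.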
-
Jeff Davis authored
This reverts commit 269e7808 and commit 5b571bb8. Unfortunately, the initial patch had insufficient performance testing, and resulted in a regression. Per report by Thom Brown.
-
Jeff Davis authored
Make the examples self-contained to avoid confusion. Per bug report 8367 from KOIZUMI Satoru.
-
- 04 Sep, 2013 4 commits
-
Bruce Momjian authored
Previous text was "No description available". Tianyin Xu
-
Bruce Momjian authored
Backpatch to 9.3. Per suggestion from Gavan Schneider
-
Heikki Linnakangas authored
Performance testing shows that if the insertpos_lck spinlock and the fields that it protects are on the same cache line with other variables that are frequently accessed, the false sharing can hurt performance a lot. Keep them apart by adding some padding.
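For illustration, a standalone sketch of the padding technique, with hypothetical field names rather than the real WAL-insert structures:

```c
#include <stdint.h>

#define CACHE_LINE_SIZE 64      /* assumed typical line size */

typedef volatile unsigned char slock_t;

/* The spinlock and the fields it protects, kept together. */
typedef struct InsertPosGroup
{
    slock_t  insertpos_lck;     /* protects the two fields below */
    uint64_t cur_pos;
    uint64_t prev_pos;
} InsertPosGroup;

/* Pad the group out to a full cache line, so that other frequently
 * accessed fields land on a different line.  Without the padding,
 * every lock acquisition would invalidate readers' cached copies of
 * the neighboring fields (false sharing). */
typedef struct SharedState
{
    InsertPosGroup insert;
    char           pad[CACHE_LINE_SIZE - sizeof(InsertPosGroup)];

    uint64_t       hot_read_mostly_field;   /* hypothetical neighbor */
} SharedState;
```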
-
Robert Haas authored
Andres Freund
-
- 03 Sep, 2013 10 commits
-
Tom Lane authored
This GUC context value was once only used by ALTER DATABASE SET and ALTER USER SET. That's not true anymore, though, so rewrite the comments to be a bit more general. Patch in HEAD only, since this is just an internal documentation issue.
-
Tom Lane authored
The previous coding attempted to activate all the GUC settings specified in SET clauses, so that the function validator could operate in the GUC environment expected by the function body. However, this is problematic when restoring a dump, since the SET clauses might refer to database objects that don't exist yet. We already have the parameter check_function_bodies that's meant to prevent forward references in function definitions from breaking dumps, so let's change CREATE FUNCTION to not install the SET values if check_function_bodies is off.

Authors of function validators were already advised not to make any "context sensitive" checks when check_function_bodies is off, if indeed they're checking anything at all in that mode. But extend the documentation to point out the GUC issue in particular.

(Note that we still check the SET clauses to some extent; the behavior with !check_function_bodies is now approximately equivalent to what ALTER DATABASE/ROLE have been doing for a while with context-dependent GUCs.)

This problem can be demonstrated in all active branches, so back-patch all the way.
-
Tom Lane authored
There's no inherent reason why an aggregate function can't be variadic (even VARIADIC ANY) if its transition function can handle the case. Indeed, this patch to add the feature touches none of the planner or executor, and little of the parser; the main missing stuff was DDL and pg_dump support.

It is true that variadic aggregates can create the same sort of ambiguity about parameters versus ORDER BY keys that was complained of when we (briefly) had both one- and two-argument forms of string_agg(). However, the policy formed in response to that discussion only said that we'd not create any built-in aggregates with varying numbers of arguments, not that we shouldn't allow users to do it. So the logical extension of that is we can allow users to make variadic aggregates as long as we're wary about shipping any such in core.

In passing, this patch allows aggregate function arguments to be named, to the extent of remembering the names in pg_proc and dumping them in pg_dump. You can't yet call an aggregate using named-parameter notation. That seems like a likely future extension, but it'll take some work, and it's not what this patch is really about. Likewise, there's still some work needed to make window functions handle VARIADIC fully, but I left that for another day.

initdb forced because of new aggvariadic field in Aggref parse nodes.
-
Alvaro Herrera authored
-
Bruce Momjian authored
-
Bruce Momjian authored
Per suggestion from Francisco Olart
-
Robert Haas authored
MauMau
-
Greg Stark authored
-
Heikki Linnakangas authored
Also line-wrap an over-wide line in a comment that's ignored by pgindent.
-
- 02 Sep, 2013 3 commits
-
Tom Lane authored
DST law changes in Israel, Morocco, Palestine, Paraguay. Historical corrections for Macquarie Island.
-
Andrew Dunstan authored
The original query ignored TOAST tables which could result in tables needing a vacuum not being reported. Backpatch to all live branches.
-
Peter Eisentraut authored
-
- 01 Sep, 2013 2 commits
-
Tom Lane authored
It seems like a good idea to update these examples since some fairly basic planner behaviors have changed in 9.3; notably that the startup cost for an indexscan plan node is no longer invariably estimated at 0.00.
-
Tom Lane authored
Some corrections, a lot of copy-editing. Set projected release date as 2013-09-09.
-
- 31 Aug, 2013 1 commit
-
Tom Lane authored
The previous version of the query discarded the MergeAppend's output instead of checking its results. Andres Freund
-
- 30 Aug, 2013 2 commits
-
Tom Lane authored
Per Andres Freund.
-
Tom Lane authored
Failing to do so can cause queries to return wrong data, error out or crash. This requires adding a new binaryheap_reset() method to binaryheap.c, but that probably should have been there anyway. Per bug #8410 from Terje Elde. Diagnosis and patch by Andres Freund.
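The new method is presumably tiny; a hedged guess at its shape (field names assumed, loosely after src/include/lib/binaryheap.h):

```c
#include <stdbool.h>

typedef struct binaryheap
{
    int    bh_size;                 /* nodes currently stored */
    int    bh_space;                /* allocated capacity */
    bool   bh_has_heap_property;    /* true if nodes are heap-ordered */
    void **bh_nodes;
} binaryheap;

/* Empty the heap for reuse without freeing its node array. */
static void
binaryheap_reset(binaryheap *heap)
{
    heap->bh_size = 0;
    heap->bh_has_heap_property = true;  /* an empty heap is trivially ordered */
}
```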
-
- 29 Aug, 2013 2 commits
-
Alvaro Herrera authored
-
Heikki Linnakangas authored
Testing done in 2011 by Tom Lane concluded that this is a win on Intel Xeons and AMD Opterons, but it was not changed back then, because of an old comment in tas() that suggested that it's a huge loss on older Opterons. However, we didn't have separate TAS() and TAS_SPIN() macros back then, so the comment referred to doing a non-locked initial test even on the first access, in the uncontended case. I don't have access to older Opterons, but I'm pretty sure that doing an initial unlocked test is unlikely to be a loss while spinning, even though it might be for the first access.

We probably should do the same on 32-bit x86, but I'm afraid of changing it without any testing. Hence just add a note to the x86 implementation suggesting that we probably should do the same there.
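For illustration, a compilable sketch of the TAS()/TAS_SPIN() distinction using a GCC builtin (PostgreSQL's real s_lock.h differs in detail): the first acquisition attempt goes straight to the atomic, while the spin loop first peeks at the lock word with a plain read.

```c
typedef volatile unsigned char slock_t;

/* Locked test-and-set: returns nonzero if the lock was already held.
 * __sync_lock_test_and_set is a GCC builtin (an atomic exchange). */
static inline int
tas(slock_t *lock)
{
    return __sync_lock_test_and_set(lock, 1) != 0;
}

#define TAS(lock)       tas(lock)

/* While spinning, do a cheap non-locked read first; attempt the
 * expensive locked instruction only when the lock looks free.  This
 * avoids hammering the cache line while another backend holds it. */
#define TAS_SPIN(lock)  (*(lock) ? 1 : TAS(lock))

static void
spin_until_acquired(slock_t *lock)
{
    if (TAS(lock))                  /* first attempt: straight to the atomic */
    {
        while (TAS_SPIN(lock))      /* contended: spin with cheap reads */
            ;                       /* real code would add backoff/delays */
    }
}
```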
-
- 28 Aug, 2013 3 commits
-
Robert Haas authored
Using the infrastructure provided by this patch, it's possible either to wait for the startup of a dynamically-registered background worker, or to poll the status of such a worker without waiting. In either case, the current PID of the worker process can also be obtained. As usual, worker_spi is updated to demonstrate the new functionality. Patch by me. Review by Andres Freund.
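A hedged usage sketch, loosely after the worker_spi pattern (the extension and entry-point names here are hypothetical):

```c
#include "postgres.h"
#include "postmaster/bgworker.h"

static void
launch_and_check(void)
{
    BackgroundWorker        worker;
    BackgroundWorkerHandle *handle;
    BgwHandleStatus         status;
    pid_t                   pid;

    memset(&worker, 0, sizeof(worker));
    worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
    worker.bgw_start_time = BgWorkerStart_ConsistentState;
    worker.bgw_restart_time = BGW_NEVER_RESTART;
    snprintf(worker.bgw_library_name, sizeof(worker.bgw_library_name),
             "my_extension");           /* hypothetical library */
    snprintf(worker.bgw_function_name, sizeof(worker.bgw_function_name),
             "my_worker_main");         /* hypothetical entry point */
    snprintf(worker.bgw_name, sizeof(worker.bgw_name), "sketch worker");

    if (!RegisterDynamicBackgroundWorker(&worker, &handle))
        ereport(ERROR, (errmsg("could not register background worker")));

    /* Either block until the postmaster has started the worker ... */
    status = WaitForBackgroundWorkerStartup(handle, &pid);

    /* ... or poll the current status later, without waiting. */
    status = GetBackgroundWorkerPid(handle, &pid);

    if (status == BGWH_STARTED)
        elog(LOG, "worker is running, pid %d", (int) pid);
}
```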
-
Robert Haas authored
As noted by Tom Lane, commit 813fb031 was overly optimistic about how safe it is to concurrently change enumsortorder values under MVCC catalog scan semantics. Restore some of the previous text, with hopefully-correct adjustments for the new state of play.
-
Heikki Linnakangas authored
We already did this for -t (--table) in 9.3, but missed the other similar options. For consistency, allow all of them to be specified multiple times. Unfortunately it's too late to sneak this into 9.3, so commit to master only.
-
- 27 Aug, 2013 2 commits
-
Alvaro Herrera authored
Andres Freund; bug detected by valgrind
-
Alvaro Herrera authored
-
- 26 Aug, 2013 1 commit
-
Robert Haas authored
Christophe Pettus
-
- 24 Aug, 2013 2 commits
-
Tom Lane authored
The previous coding in plancache.c essentially used 10% of the estimated runtime as its cost estimate for planning. This can be pretty bogus, especially when the estimated runtime is very small, such as in a simple expression plan created by plpgsql, or a simple INSERT ... VALUES. While we don't have a really good handle on how planning time compares to runtime, it seems reasonable to use an estimate based on the number of relations referenced in the query, with a rather large multiplier. This patch uses 1000 * cpu_operator_cost * (nrelations + 1), so that even a trivial query will be charged 1000 * cpu_operator_cost for planning. This should address the problem reported by Marc Cousin and others that 9.2 and up prefer custom plans in cases where the planning time greatly exceeds what can be saved.
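For illustration, the heuristic worked out with PostgreSQL's default cpu_operator_cost of 0.0025:

```c
#include <stdio.h>

int
main(void)
{
    const double cpu_operator_cost = 0.0025;    /* PostgreSQL default */

    /* Planning charge: 1000 * cpu_operator_cost * (nrelations + 1),
     * so even a trivial zero-relation expression plan costs 2.5. */
    for (int nrelations = 0; nrelations <= 3; nrelations++)
        printf("nrelations = %d -> planning cost = %.2f\n",
               nrelations,
               1000.0 * cpu_operator_cost * (nrelations + 1));
    return 0;
}
```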
-
Magnus Hagander authored
The backup would not work in this case anyway (without a log archive, and that's the whole point of -x); this patch just changes it to throw an error instead of crashing when this happens. Noticed and diagnosed by TAKATSUKA Haruka
-
- 23 Aug, 2013 1 commit
-
Tom Lane authored
It's possible that inlining of SQL functions (or perhaps other changes?) has exposed typmod information not known at parse time. In such cases, Vars generated by query_planner might have valid typmod values while the original grouping columns only have typmod -1. This isn't a semantic problem since the behavior of grouping only depends on type not typmod, but it breaks locate_grouping_columns' use of tlist_member to locate the matching entry in query_planner's result tlist.

We can fix this without an excessive amount of new code or complexity by relying on the fact that locate_grouping_columns only gets called when make_subplanTargetList has set need_tlist_eval == false, and that can only happen if all the grouping columns are simple Vars. Therefore we only need to search the sub_tlist for a matching Var, and we can reasonably define a "match" as being a match of the Var identity fields varno/varattno/varlevelsup. The code still Asserts that vartype matches, but ignores vartypmod.

Per bug #8393 from Evan Martin. The added regression test case is basically the same as his example. This has been broken for a very long time, so back-patch to all supported branches.
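For illustration, a hedged standalone sketch of the relaxed matching rule (the Var struct here is abridged; only the identity fields matter):

```c
#include <assert.h>
#include <stdbool.h>

/* Abridged stand-in for the parser's Var node. */
typedef struct Var
{
    int          varno;         /* range-table index */
    int          varattno;      /* column number */
    int          varlevelsup;   /* query nesting depth */
    unsigned int vartype;       /* type OID */
    int          vartypmod;     /* type modifier; -1 means "unknown" */
} Var;

/* Match on the identity fields only: a Var with typmod -1 and one with
 * a known typmod can still be the same grouping column. */
static bool
same_grouping_column(const Var *a, const Var *b)
{
    if (a->varno == b->varno &&
        a->varattno == b->varattno &&
        a->varlevelsup == b->varlevelsup)
    {
        assert(a->vartype == b->vartype);   /* types must agree */
        return true;                        /* vartypmod deliberately ignored */
    }
    return false;
}
```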
-
- 21 Aug, 2013 2 commits
-
Tom Lane authored
We should account for the per-group hashtable entry overhead when considering whether to use a hash aggregate to implement DISTINCT. The comparable logic in choose_hashed_grouping() gets this right, but I think I omitted it here in the mistaken belief that there would be no overhead if there were no aggregate functions to be evaluated. This can result in more than 2X underestimate of the hash table size, if the tuples being aggregated aren't very wide. Per report from Tomas Vondra. This bug is of long standing, but per discussion we'll only back-patch into 9.3. Changing the estimation behavior in stable branches seems to carry too much risk of destabilizing plan choices for already-tuned applications.
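For a rough feel of the magnitude, a sketch with invented but plausible numbers (the constants are illustrative, not PostgreSQL's actual overheads):

```c
#include <stdio.h>

int
main(void)
{
    const double ngroups = 1e6;            /* estimated distinct groups */
    const double tuple_width = 16.0;       /* narrow grouped tuples */
    const double entry_overhead = 56.0;    /* assumed per-entry header cost */

    /* Ignoring the per-entry overhead understates the table size badly
     * when tuples are narrow: here by a factor of 4.5. */
    printf("naive estimate:  %.0f MB\n", ngroups * tuple_width / 1e6);
    printf("with overhead:   %.0f MB\n",
           ngroups * (tuple_width + entry_overhead) / 1e6);
    return 0;
}
```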
-
Bruce Momjian authored
Per suggestion from Vik Fearing
-
- 20 Aug, 2013 2 commits
-
Andrew Dunstan authored
This change will only apply to mingw compilers, and has been found necessary by late versions of the mingw-w64 compiler. It's the same as what is done elsewhere for the Microsoft compilers. If this doesn't upset older compilers in the buildfarm, it will be backpatched to 9.1. Problem reported by Michael Cronenworth, although not his patch.
-
Bruce Momjian authored
Backpatch to 9.3. Pavel Stehule
-