- 19 Nov, 2010 1 commit
-
-
Tom Lane authored
As per the ancient comment for set_rel_width, it really wasn't much good for relations that aren't plain tables: it would never find any stats and would always fall back on datatype-based estimates, which are often pretty silly. Fix that by copying up width estimates from the subquery planning process. At some point we might want to do this for CTEs too, but that would be a significantly more invasive patch because the sub-PlannerInfo is no longer accessible by the time it's needed. I refrained from doing anything about that, partly for fear of breaking the unmerged CTE-related patches. In passing, also generate less bogus width estimates for whole-row Vars. Per a gripe from Jon Nelson.
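A minimal illustration (hypothetical query, not from the commit) of where this matters: the width figure EXPLAIN shows for a subquery scan previously came from datatype defaults rather than from the subquery's own planning.

    EXPLAIN
    SELECT * FROM (SELECT repeat('x', 1000) AS wide_col
                   FROM generate_series(1, 100)) AS sub;
    -- the "width=..." shown for the subquery scan is the estimate that is
    -- now copied up from the subquery planning step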
-
- 18 Nov, 2010 7 commits
-
-
Tom Lane authored
Given a column reference foo.bar, where there is a composite plpgsql variable foo but it doesn't contain a column bar, the pre-9.0 coding would immediately throw a "record foo has no field bar" error. In 9.0 the parser hook instead falls through to let the core parser see if it can resolve the reference. If not, you get a complaint about "missing FROM-clause entry for table foo", which while in some sense correct isn't terribly helpful. Complicate things a bit so that we can throw the old error message if neither the core parser nor the hook are able to resolve the column reference, while not changing the behavior in any other case. Per bug #5757 from Andrey Galkin.
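An illustrative reproduction (hypothetical names, not taken from the bug report):

    DO $$
    DECLARE
      foo record;
    BEGIN
      SELECT 1 AS baz INTO foo;
      RAISE NOTICE '%', foo.bar;   -- foo exists but has no field bar
    END
    $$;
    -- with this fix the error is again: record "foo" has no field "bar"
    -- rather than: missing FROM-clause entry for table "foo"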
-
Alvaro Herrera authored
This function is useful to obtain textual descriptions of objects as stored in pg_depend.
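Presumably this refers to pg_describe_object(); a hedged usage sketch (the function name and argument list are my assumption, not stated in the message above):

    SELECT classid::regclass, objid, objsubid,
           pg_describe_object(classid, objid, objsubid) AS description
    FROM pg_depend
    LIMIT 5;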
-
Tom Lane authored
If we have Limit->Result->Sort, the Result might be projecting a tlist that contains a set-returning function. If so, it's possible for the SRF to sometimes return zero rows, which means we could need to fetch more than N rows from the Sort in order to satisfy LIMIT N. So top-N sorting cannot be used in this scenario.
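A sketch of the kind of query affected (hypothetical data; the SRF emits zero rows for some inputs, so LIMIT 2 may need more than two rows from the Sort):

    SELECT generate_series(1, v.n) AS s
    FROM (VALUES (0), (2), (3)) AS v(n)
    ORDER BY v.n
    LIMIT 2;
    -- rough plan shape: Limit -> Result (projects the SRF) -> Sort;
    -- the Sort must not be a top-N (bounded) sort here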
-
Robert Haas authored
Problems noted by Thom Brown.
-
Heikki Linnakangas authored
-
Tom Lane authored
Fix things so that top-N sorting can be used in child Sort nodes of a MergeAppend node, when there is a LIMIT and no intervening joins or grouping. Actually doing this on the executor side isn't too bad, but it's a bit messier to get the planner to cost it properly. Per gripe from Robert Haas. In passing, fix an oversight in the original top-N-sorting patch: query_planner should not assume that a LIMIT can be used to make an explicit sort cheaper when there will be grouping or aggregation in between. Possibly this should be back-patched, but I'm not sure the mistake is serious enough to be a real problem in practice.
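A hedged example (hypothetical inheritance parent and column names) of a plan that can now use bounded sorts in the MergeAppend children:

    EXPLAIN
    SELECT * FROM parent_table
    ORDER BY created_at
    LIMIT 10;
    -- roughly: Limit -> Merge Append -> Sort (one per child table);
    -- with this change each child Sort may switch to top-N sorting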
-
Robert Haas authored
KaiGai Kohei, with editing and markup fixes by me.
-
- 17 Nov, 2010 3 commits
-
-
Tom Lane authored
In the previous coding, we simply issued ALTER SEQUENCE RESTART commands, which do not roll back on error. This meant that an error between truncating and committing left the sequences out of sync with the table contents, with potentially bad consequences as were noted in a Warning on the TRUNCATE man page. To fix, create a new storage file (relfilenode) for a sequence that is to be reset due to RESTART IDENTITY. If the transaction aborts, we'll automatically revert to the old storage file. This acts just like a rewriting ALTER TABLE operation. A penalty is that we have to take exclusive lock on the sequence, but since we've already got exclusive lock on its owning table, that seems unlikely to be much of a problem. The interaction of this with usual nontransactional behaviors of sequence operations is a bit weird, but it's hard to see what would be completely consistent. Our choice is to discard cached-but-unissued sequence values both when the RESTART is executed, and at rollback if any; but to not touch the currval() state either time. In passing, move the sequence reset operations to happen before not after any AFTER TRUNCATE triggers are fired. The previous ordering was not logically sensible, but was forced by the need to minimize inconsistency if the triggers caused an error. Transactional rollback is a much better solution to that. Patch by Steve Singer, rather heavily adjusted by me.
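A sketch of the behavior this enables (hypothetical table with a serial column):

    BEGIN;
    TRUNCATE my_table RESTART IDENTITY;
    -- an error here, or an explicit ROLLBACK, now also restores the sequence,
    -- because the reset created a new relfilenode that is discarded on abort
    ROLLBACK;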
-
Peter Eisentraut authored
Add some additional dependencies to constrain the build order to prevent parallel make from failing. In the case of src/Makefile, this is likely to be too complicated to be worth maintaining, so just add .NOTPARALLEL to get the old for-loop-like behavior. More fine-tuning might be necessary for some platforms or configurations.
-
Andrew Dunstan authored
-
- 16 Nov, 2010 2 commits
-
-
Magnus Hagander authored
The handle to the shared memory segment containing startup parameters was sent as 32-bit even on 64-bit systems. Since HANDLEs appear to be allocated sequentially, this shouldn't be a problem until we reach 2^32 open handles in the postmaster, but a 64-bit value should be sent across as 64 bits rather than having its top 32 bits zeroed out. Noted by Tom Lane.
-
Heikki Linnakangas authored
... temporary indexes are not WAL-logged. We used a constant LSN for temporary indexes, on the assumption that we don't need to worry about concurrent page splits in temporary indexes because they're only visible to the current session. But that assumption is wrong: it's possible to insert rows and split pages in the same session while a scan is in progress, for example by opening a cursor and fetching some rows, then INSERTing new rows before fetching more. Fix by generating fake increasing LSNs, used in place of real LSNs, in temporary GiST indexes.
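An illustrative reproduction of the scenario described (hypothetical table and data):

    CREATE TEMP TABLE boxes (b box);
    CREATE INDEX boxes_gist ON boxes USING gist (b);
    INSERT INTO boxes SELECT box(point(i, i), point(i + 1, i + 1))
    FROM generate_series(1, 1000) i;
    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM boxes WHERE b && box '((0,0),(1000,1000))';
    FETCH 10 FROM c;
    -- inserting more rows in the same session can split index pages
    -- while the cursor's scan is still in progress
    INSERT INTO boxes SELECT box(point(i, i), point(i + 1, i + 1))
    FROM generate_series(1001, 5000) i;
    FETCH 10 FROM c;
    COMMIT;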
-
- 15 Nov, 2010 10 commits
-
-
Tom Lane authored
We must stay in the function's SPI context until done calling the iterator that returns the set result. Otherwise, any attempt to invoke SPI features in the python code called by the iterator will malfunction. Diagnosis and patch by Jan Urbanski, per bug report from Jean-Baptiste Quenot. Back-patch to 8.2; there was no support for SRFs in previous versions of plpython.
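A hedged sketch of the affected pattern (hypothetical function): a PL/Python set-returning function whose generator issues SPI calls while the result set is being iterated.

    CREATE FUNCTION srf_using_spi() RETURNS SETOF text AS $$
        # each resumption of this generator calls back into SPI; before the fix,
        # such calls could malfunction because the SPI context was exited too early
        for i in range(3):
            r = plpy.execute("SELECT now()::text AS ts")
            yield r[0]["ts"]
    $$ LANGUAGE plpythonu;

    SELECT * FROM srf_using_spi();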
-
Robert Haas authored
This new field counts the number of times that a backend which writes a buffer out to the OS must also fsync() it. This happens when the bgwriter fsync request queue is full, and is generally detrimental to performance, so it's good to know when it's happening. Along the way, log a new message at level DEBUG1 whenever we fail to hand off an fsync, so that the problem can also be seen in examination of log files (if the logging level is cranked up high enough). Greg Smith, with minor tweaks by me.
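If I read this correctly, the counter is exposed through pg_stat_bgwriter (the column name below is my assumption):

    SELECT buffers_backend, buffers_backend_fsync
    FROM pg_stat_bgwriter;
    -- a steadily growing buffers_backend_fsync suggests the bgwriter's
    -- fsync request queue keeps filling up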
-
Robert Haas authored
int2 and int4 operators have detected overflow since 2004, when this was fixed by commit 4171bb86. Extracted from a larger patch by Andres Freund.
-
Robert Haas authored
copydir.c is no longer in src/port
-
Alvaro Herrera authored
-
Simon Riggs authored
Similar conflicts were already avoided for related record types; massive over-caution here resulted in a usability bug. A clear theoretical basis for doing this has now been confirmed by me. The request to remove it came from Heikki (twice); the over-caution was mine.
-
Tom Lane authored
... based on further tracing through that code.
-
Robert Haas authored
-
Robert Haas authored
Alexander Korotkov
-
Robert Haas authored
Itagaki Takahiro, with slight modifications.
-
- 14 Nov, 2010 2 commits
-
-
Tom Lane authored
We must not return any "okay to proceed" result code without having checked for too many children, else we might fail later on when trying to add the new child to one of the per-child state arrays. It's not clear whether this oversight explains Stefan Kaltenbrunner's recent report, but it could certainly produce a similar symptom. Back-patch to 8.4; the logic was not broken before that.
-
Tom Lane authored
GNU make 3.80 breaks if the expansion of $(eval) is long enough to require expansion of its internal variable_buffer. For the purposes of $(recurse) that means it'll work so long as no single evaluation of _create_recursive_target produces more than 195 bytes. We can manage that by looping over subdirectories outside the call instead of complicating the generated rule. This coding is simpler and more readable anyway. Or at least, this works for me. We'll see if the buildfarm likes it.
-
- 13 Nov, 2010 2 commits
-
-
Tom Lane authored
This is needed to support debug_print_parse, per report from Jon Nelson. Cursory testing via the regression tests suggests we aren't missing anything else.
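For reference, a minimal way to exercise this (the GUC itself predates the commit; which node type was missing output support isn't stated above):

    SET debug_print_parse = on;
    SET client_min_messages = log;   -- or check the server log
    SELECT 1;                        -- the parse tree is now printed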
-
Andrew Dunstan authored
-
- 12 Nov, 2010 5 commits
-
-
Robert Haas authored
Having this in src/include/port.h makes no sense, now that copydir.c lives in src/backend/storage rather than src/port. Along the way, remove an obsolete comment from contrib/pg_upgrade that makes reference to the old location.
-
Tom Lane authored
Once we have found a non-null constant argument, there is no need to examine additional arguments of the COALESCE. The previous coding got it right only if the constant was in the first argument position; otherwise it tried to simplify following arguments too, leading to unexpected behavior like this:

    regression=# select coalesce(f1, 42, 1/0) from int4_tbl;
    ERROR:  division by zero

It's a minor corner case, but a bug is a bug, so back-patch all the way.
-
Peter Eisentraut authored
Replace for loops in makefiles with proper dependencies. Parallel make can now span across directories. Also, make -k and make -q work properly. GNU make 3.80 or newer is now required.
-
Peter Eisentraut authored
-
Heikki Linnakangas authored
... belonging to a user at DROP OWNED BY. Foreign data wrappers and servers don't do anything useful yet, which is why no one has noticed, but since we have them, it seems prudent to fix this. Per report from Chetan Suttraway. Backpatch to 9.0; 8.4 has the same problem, but this patch didn't apply there so I'm not going to bother.
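An illustrative sequence (hypothetical names; the role is made a superuser only so it can own the wrapper):

    CREATE ROLE fdw_owner SUPERUSER;
    CREATE FOREIGN DATA WRAPPER dummy_wrapper;
    ALTER FOREIGN DATA WRAPPER dummy_wrapper OWNER TO fdw_owner;
    CREATE SERVER dummy_server FOREIGN DATA WRAPPER dummy_wrapper;
    ALTER SERVER dummy_server OWNER TO fdw_owner;
    DROP OWNED BY fdw_owner;   -- with this fix, the wrapper and server go away too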
-
- 11 Nov, 2010 1 commit
-
-
Heikki Linnakangas authored
... location read from backup label file can be found: wasShutdown was set incorrectly when a backup label file was found. Jeff Davis, with a little tweaking by me.
-
- 10 Nov, 2010 4 commits
-
-
Tom Lane authored
This code was just plain wrong: what you got was not a line through the given point but a line almost indistinguishable from the Y-axis, although not truly vertical. The only caller that tries to use this function with m == DBL_MAX is dist_ps_internal for the case where the lseg is horizontal; it would end up producing the distance from the given point to the place where the lseg's line crosses the Y-axis. That function is used by other operators too, so there are several operators that could compute wrong distances from a line segment to something else. Per bug #5745 from jindiax. Back-patch to all supported branches.
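One affected case, as a hedged example: the distance from a point to a horizontal line segment.

    SELECT point '(0,5)' <-> lseg '[(2,1),(10,1)]';
    -- expected: distance to the nearest endpoint (2,1), i.e. sqrt(4 + 16) ≈ 4.47;
    -- the broken code could instead measure toward where the segment's line
    -- crosses the Y axis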
-
Bruce Momjian authored
-
Robert Haas authored
Fujii Masao, with a little wordsmithing by me.
-
Itagaki Takahiro authored
... by gcc version 4 on mingw and cygwin. We don't use dllexport here because dllexport and dllwrap don't work well together.
-
- 09 Nov, 2010 3 commits
-
-
Alvaro Herrera authored
-
Tom Lane authored
Explicitly document that the -o options of pg_ctl init mode are meant for initdb, not postgres (Euler Taveira de Oliveira). Assorted other copy-editing (Tom).
-
Tom Lane authored
The general design of memory management in Postgres is that intermediate results computed by an expression are not freed until the end of the tuple cycle. For expression indexes, ANALYZE has to re-evaluate each expression for each of its sample rows, and it wasn't bothering to free intermediate results until the end of processing of that index. This could lead to very substantial leakage if the intermediate results were large, as in a recent example from Jakub Ouhrabka. Fix by doing ResetExprContext for each sample row. This necessitates adding a datumCopy step to ensure that the final expression value isn't recycled too. Some quick testing suggests that this change adds at worst about 10% to the time needed to analyze a table with an expression index; which is annoying, but seems a tolerable price to pay to avoid unexpected out-of-memory problems. Back-patch to all supported branches.
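A sketch of the situation (hypothetical table and expression):

    CREATE TABLE docs (body text);
    CREATE INDEX docs_expr_idx ON docs ((md5(body) || repeat(body, 10)));
    -- previously, the intermediate results of evaluating the index expression
    -- for each sample row were not freed until the whole index was processed
    ANALYZE docs;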
-