- 10 Apr, 2012 7 commits
-
-
Bruce Momjian authored
In pg_upgrade, handle the case of a table that is in the default tablespace but part of a database that is in a user-defined tablespace; this caused a "file not found" error during upgrade. Per bug report from Ants Aasma. Backpatch to 9.1 and 9.0.
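A minimal sketch of the layout that failed (tablespace path and names are hypothetical):

    CREATE TABLESPACE userspace LOCATION '/srv/pg_tblspc';
    CREATE DATABASE appdb TABLESPACE userspace;
    -- then, connected to appdb:
    CREATE TABLE t (id int) TABLESPACE pg_default;  -- table back in the default tablespace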
-
Peter Eisentraut authored
Since xgettext provides options to do this now, we might as well use them.
-
Peter Eisentraut authored
Only match when WITH is the first word, as WITH may appear in many other contexts. Josh Kupershmidt
-
Tom Lane authored
This patch reverts commit 191ef2b4 and thereby restores the pre-7.3 behavior of EXTRACT(EPOCH FROM timestamp-without-tz). Per discussion, the more recent behavior was misguided on a couple of grounds: it makes it hard to get a non-timezone-aware epoch value for a timestamp, and it makes this one case dependent on the value of the timezone GUC, which is incompatible with having timestamp_part() labeled as immutable. The other behavior is still available (in all releases) by explicitly casting the timestamp to timestamp with time zone before applying EXTRACT. This will need to be called out as an incompatible change in the 9.2 release notes. Although having mutable behavior in a function marked immutable is clearly a bug, we're not going to back-patch such a change.
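For illustration, a sketch of the two behaviors (the timestamp value is arbitrary):

    -- 9.2 behavior: timezone-independent epoch for timestamp without time zone
    SELECT extract(epoch FROM timestamp '2012-04-10 00:00:00');
    -- Previous behavior, still available via an explicit cast; the result
    -- then depends on the TimeZone setting
    SELECT extract(epoch FROM CAST(timestamp '2012-04-10 00:00:00' AS timestamp with time zone));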
-
Heikki Linnakangas authored
It used to point to a top-level page that contains client-side tools as well. It was hard to find the procedural language there.
-
Heikki Linnakangas authored
Thom Brown
-
Tom Lane authored
It's still non-deterministic in some sense ... but given fixed settings and identical planning problems, it will now always choose the same plan, so we probably shouldn't tar it with that brush. Per bug #6565 from Guillaume Cottenceau. Back-patch to 9.0 where the behavior was fixed.
-
- 09 Apr, 2012 8 commits
-
-
Bruce Momjian authored
storage.
-
Bruce Momjian authored
-
Bruce Momjian authored
because there was only a beta for 9.0 and it does not compile on 9.1.
-
Tom Lane authored
estimate_num_groups() gets unhappy with create table empty(); select * from empty except select * from empty e2; I can't see any actual use-case for such a query (and the table is illegal per SQL spec), but it seems like a good idea that it not cause an assert failure.
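The reproducer, formatted for readability (the zero-column table is illegal per the SQL spec but accepted by PostgreSQL):

    CREATE TABLE empty ();
    SELECT * FROM empty EXCEPT SELECT * FROM empty e2;  -- formerly hit an Assert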
-
Tom Lane authored
The case could not arise when this code was originally written, but it can now (since we made zero-column MergeJoins work for the benefit of FULL JOIN ON TRUE). I don't think there is any actual bug here, but we might as well treat it consistently with other uses of COPY_POINTER_FIELD(). Per comment from Ashutosh Bapat.
-
Tom Lane authored
There's no need to sit there and increment the stats when we know all the increments would be zero anyway. The actual additions might not be very expensive, but skipping acquisition of the spinlock seems like a good thing. Pushing the logic about initialization of the usage count down into entry_alloc() allows us to do that while making the code actually simpler, not more complex. Expansion on a suggestion by Peter Geoghegan.
-
Heikki Linnakangas authored
Thom Brown pointed out that the URL was out of date, and Devrim GÜNDÜZ pointed out that the project isn't maintained anymore.
-
Robert Haas authored
Patch by me; review by Tom Lane and others.
-
- 08 Apr, 2012 3 commits
-
-
Tom Lane authored
This patch addresses a deficiency in the previous pg_stat_statements patch. We want to give sticky entries an initial "usage" factor high enough that they probably will stick around until their query is completed. However, if the query never completes (eg it gets an error during execution), the entry shouldn't persist indefinitely. Manage this by starting out with a usage setting equal to the (approximate) median usage value within the whole hashtable, but decaying the value much more aggressively than we do for normal entries. Peter Geoghegan
-
Heikki Linnakangas authored
This was a thinko in the previous commit. Now that the stack base pointer is set in PostmasterMain and SubPostmasterMain, it doesn't need to be set in PostgresMain anymore.
-
Heikki Linnakangas authored
We used to initialize the stack base pointer only when starting up a regular backend, not in other processes. In particular, autovacuum workers can run arbitrary user code, and without stack-depth checking, infinite recursion in e.g. an index expression will bring down the whole cluster. The comment about PL/Java using set_stack_base() is not yet true. As the code stands, PL/Java still modifies the stack_base_ptr variable directly. However, it's been discussed on the PL/Java mailing list that it should be changed to use the function, because PL/Java is currently oblivious to the register stack used on Itanium. There's another issue with PL/Java, namely that the stack base pointer it sets is not really the base of the stack; it could be something close to the bottom of the stack. That's a separate issue that might need some further changes to this code, but that's a different story. Backpatch to all supported releases.
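A hypothetical reproducer for the hazard described above; all names are made up. With the stack base pointer initialized in every process, evaluating the index expression fails cleanly with "stack depth limit exceeded" rather than overrunning the stack in an autovacuum worker.

    CREATE FUNCTION loops(i int) RETURNS int
        IMMUTABLE LANGUAGE plpgsql
        AS $$ BEGIN RETURN loops(i); END $$;  -- recurses without bound
    CREATE TABLE t (i int);
    CREATE INDEX ON t (loops(i));
    INSERT INTO t VALUES (1);  -- ERROR: stack depth limit exceeded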
-
- 07 Apr, 2012 4 commits
-
-
Tom Lane authored
-
Tom Lane authored
-
Tom Lane authored
-
Bruce Momjian authored
-
- 06 Apr, 2012 10 commits
-
-
Tom Lane authored
Thom Brown
-
Tom Lane authored
XLOG_GIN_UPDATE_META_PAGE and XLOG_GIN_DELETE_LISTPAGE records were printed with a list link field labeled as "blkno", which was confusing, especially when the link was empty (InvalidBlockNumber). Print the metapage block number instead, since that's what's actually being updated. We could include the link values too as a separate field, but not clear it's worth the trouble. Back-patch to 8.4 where the dubious code was added.
-
Peter Eisentraut authored
Thom Brown
-
Peter Eisentraut authored
Thom Brown
-
Tom Lane authored
If we make the initially-called function return the table physical-size estimate, acquire_inherited_sample_rows will be able to use that to allocate numbers of samples among child tables, when the day comes that we want to support foreign tables in inheritance trees.
-
Tom Lane authored
ANALYZE now accepts foreign tables and allows the table's FDW to control how the sample rows are collected. (But only manual ANALYZEs will touch foreign tables, for the moment, since among other things it's not very clear how to handle remote permissions checks in an auto-analyze.) contrib/file_fdw is extended to support this. Etsuro Fujita, reviewed by Shigeru Hanada, some further tweaking by me.
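A usage sketch against contrib/file_fdw (server name and file path are placeholders):

    CREATE EXTENSION file_fdw;
    CREATE SERVER files FOREIGN DATA WRAPPER file_fdw;
    CREATE FOREIGN TABLE words (w text)
        SERVER files OPTIONS (filename '/usr/share/dict/words', format 'text');
    ANALYZE words;  -- the FDW supplies the sample rows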
-
Simon Riggs authored
-
Robert Haas authored
Report by Guillaume Lelarge.
-
Robert Haas authored
Report by Andrew Dunstan.
-
- 05 Apr, 2012 7 commits
-
-
Peter Eisentraut authored
The option --no-wrap prevents wars with (most?) editors about proper line wrapping. --sort-by-file ensures consistent file order, for easier diffing.
-
Robert Haas authored
Greg Smith and Jaime Casanova, reviewed by Alex Shulgin and myself.
-
Robert Haas authored
Greg Smith, Peter Geoghegan, and Robert Haas
-
Tom Lane authored
Somebody didn't bother to fix this comment while adding foreign table support to the code below it. In passing, remove the explicit calling-out of relkind letters, which adds complexity to the comment but doesn't help in understanding the code.
-
Robert Haas authored
The views are in milliseconds, but the raw functions return microseconds.
-
Robert Haas authored
Ants Aasma's original patch to add timing information for buffer I/O requests exposed this data at the relation level, which was judged too costly. I've here exposed it at the database level instead.
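A sketch of querying the new columns (assumes track_io_timing is enabled, so the counters are nonzero; per the preceding commit, the view reports milliseconds while the raw stats function returns microseconds):

    SELECT datname, blk_read_time, blk_write_time  -- milliseconds
    FROM pg_stat_database;
    SELECT pg_stat_get_db_blk_read_time(oid) / 1000.0 AS blk_read_time_ms
    FROM pg_database WHERE datname = current_database();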
-
Tom Lane authored
The parser got confused if a cursor parameter had the same name as a plpgsql variable. Reported and diagnosed by Yeb Havinga, though this isn't exactly his proposed fix. Also, some mostly-but-not-entirely-cosmetic adjustments to the original named-cursor-parameter patch, for code readability and better error diagnostics.
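A hypothetical illustration of the ambiguity that was fixed: the cursor parameter n shares its name with a block variable.

    CREATE FUNCTION f() RETURNS int LANGUAGE plpgsql AS $$
    DECLARE
        n int := 5;                          -- variable named n
        c CURSOR (n int) FOR SELECT n + 1;   -- cursor parameter also named n
        r record;
    BEGIN
        OPEN c (n := n);  -- named notation: the parameter receives the variable's value
        FETCH c INTO r;
        CLOSE c;
        RETURN n;
    END $$;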
-
- 04 Apr, 2012 1 commit
-
-
Tom Lane authored
This patch provides a test case for libpq's row processor API. contrib/dblink can deal with very large result sets by dumping them into a tuplestore (which can spill to disk) --- but until now, the intermediate storage of the query result in a PGresult meant memory bloat for any large result. Now we use a row processor to convert the data to tuple form and dump it directly into the tuplestore. A limitation is that this only works for plain dblink() queries, not dblink_send_query() followed by dblink_get_result(). In the latter case we don't know the desired tuple rowtype soon enough. While hack solutions to that are possible, a different user-level API would probably be a better answer. Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
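Usage is unchanged; only the internal buffering differs. A sketch (connection string and remote table are placeholders):

    CREATE EXTENSION dblink;
    SELECT *
    FROM dblink('dbname=otherdb', 'SELECT id, payload FROM big_table')
         AS t(id int, payload text);  -- rows stream into a tuplestore, not one big PGresult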
-