- 10 Apr, 2012 2 commits
-
-
Heikki Linnakangas authored
Thom Brown
-
Tom Lane authored
It's still non-deterministic in some sense ... but given fixed settings and identical planning problems, it will now always choose the same plan, so we probably shouldn't tar it with that brush. Per bug #6565 from Guillaume Cottenceau. Back-patch to 9.0 where the behavior was fixed.
-
- 09 Apr, 2012 8 commits
-
-
Bruce Momjian authored
storage.
-
Bruce Momjian authored
-
Bruce Momjian authored
because there was only a beta for 9.0 and it does not compile on 9.1.
-
Tom Lane authored
estimate_num_groups() gets unhappy with:

    create table empty();
    select * from empty except select * from empty e2;

I can't see any actual use-case for such a query (and a zero-column table is illegal per the SQL spec anyway), but it seems like a good idea that it not cause an assert failure.
-
Tom Lane authored
The case could not arise when this code was originally written, but it can now (since we made zero-column MergeJoins work for the benefit of FULL JOIN ON TRUE). I don't think there is any actual bug here, but we might as well treat it consistently with other uses of COPY_POINTER_FIELD(). Per comment from Ashutosh Bapat.
-
Tom Lane authored
There's no need to sit there and increment the stats when we know all the increments would be zero anyway. The actual additions might not be very expensive, but skipping acquisition of the spinlock seems like a good thing. Pushing the logic about initialization of the usage count down into entry_alloc() allows us to do that while making the code actually simpler, not more complex. Expansion on a suggestion by Peter Geoghegan.
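A minimal sketch of the pattern, with struct and field names assumed for illustration rather than taken from the actual pg_stat_statements source:

    #include "postgres.h"
    #include "storage/spin.h"

    typedef struct pgssEntry
    {
        slock_t mutex;
        long    calls;
        double  total_time;
    } pgssEntry;

    static void
    entry_add_counters(pgssEntry *entry, long calls, double total_time)
    {
        /* If every increment is zero, skip the spinlock entirely. */
        if (calls == 0 && total_time == 0.0)
            return;

        SpinLockAcquire(&entry->mutex);
        entry->calls += calls;
        entry->total_time += total_time;
        SpinLockRelease(&entry->mutex);
    }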
-
Heikki Linnakangas authored
Thom Brown pointed out that the URL was out of date, and Devrim GÜNDÜZ pointed out that the project isn't maintained anymore.
-
Robert Haas authored
Patch by me; review by Tom Lane and others.
-
- 08 Apr, 2012 3 commits
-
-
Tom Lane authored
This patch addresses a deficiency in the previous pg_stat_statements patch. We want to give sticky entries an initial "usage" factor high enough that they probably will stick around until their query is completed. However, if the query never completes (e.g. it gets an error during execution), the entry shouldn't persist indefinitely. Manage this by starting out with a usage setting equal to the (approximate) median usage value within the whole hashtable, but decaying the value much more aggressively than we do for normal entries. Peter Geoghegan
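Roughly, the scheme amounts to the following sketch; the constants and the calls == 0 test are illustrative assumptions, not code lifted from the patch:

    /* Sticky entries decay much faster than entries that have completed. */
    #define USAGE_DECREASE_FACTOR   0.99    /* normal entries: slow decay */
    #define STICKY_DECREASE_FACTOR  0.50    /* sticky entries: fast decay */

    typedef struct Counters { long calls; double usage; } Counters;

    /* At entry creation: start at the median usage of the hashtable, so
     * the entry will probably survive until its query completes. */
    static void init_sticky(Counters *c, double median_usage)
    {
        c->calls = 0;
        c->usage = median_usage;
    }

    /* During each eviction pass: an entry stays "sticky" until its query
     * has completed at least once (calls > 0). */
    static void decay(Counters *c)
    {
        c->usage *= (c->calls == 0) ? STICKY_DECREASE_FACTOR
                                    : USAGE_DECREASE_FACTOR;
    }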
-
Heikki Linnakangas authored
This was a thinko in the previous commit. Now that the stack base pointer is set in PostmasterMain and SubPostmasterMain, it doesn't need to be set in PostgresMain anymore.
-
Heikki Linnakangas authored
We used to only initialize the stack base pointer when starting up a regular backend, not in other processes. In particular, autovacuum workers can run arbitrary user code, and without stack-depth checking, infinite recursion in e.g. an index expression will bring down the whole cluster.

The comment about PL/Java using set_stack_base() is not yet true. As the code stands, PL/Java still modifies the stack_base_ptr variable directly. However, it's been discussed on the PL/Java mailing list that it should be changed to use the function, because PL/Java is currently oblivious to the register stack used on Itanium.

There's another issue with PL/Java, namely that the stack base pointer it sets is not really the base of the stack; it could be something close to the bottom of the stack. That's a separate issue that might need some further changes to this code, but that's a different story.

Backpatch to all supported releases.
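The underlying technique is simple; here is a minimal self-contained sketch of stack-base recording and depth checking, not PostgreSQL's actual implementation (which also copes portably with stack growth direction and, on Itanium, the register stack):

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_STACK_DEPTH (2 * 1024 * 1024)   /* 2 MB, for illustration */

    static char *stack_base_ptr = NULL;

    /* Call once at process start to remember where the stack begins. */
    void set_stack_base(void)
    {
        char here;

        stack_base_ptr = &here;
    }

    /* Call from recursive code paths to bail out before overflowing. */
    void check_stack_depth(void)
    {
        char here;
        long depth = stack_base_ptr - &here;    /* stack usually grows down */

        if (depth < 0)
            depth = -depth;                     /* tolerate grow-up machines */
        if (depth > MAX_STACK_DEPTH)
        {
            fprintf(stderr, "stack depth limit exceeded\n");
            abort();        /* PostgreSQL raises an ERROR instead */
        }
    }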
-
- 07 Apr, 2012 4 commits
-
-
Tom Lane authored
-
Tom Lane authored
-
Tom Lane authored
-
Bruce Momjian authored
-
- 06 Apr, 2012 10 commits
-
-
Tom Lane authored
Thom Brown
-
Tom Lane authored
XLOG_GIN_UPDATE_META_PAGE and XLOG_GIN_DELETE_LISTPAGE records were printed with a list link field labeled as "blkno", which was confusing, especially when the link was empty (InvalidBlockNumber). Print the metapage block number instead, since that's what's actually being updated. We could include the link values too as a separate field, but not clear it's worth the trouble. Back-patch to 8.4 where the dubious code was added.
-
Peter Eisentraut authored
Thom Brown
-
Peter Eisentraut authored
Thom Brown
-
Tom Lane authored
If we make the initially-called function return the table physical-size estimate, acquire_inherited_sample_rows will be able to use that to allocate numbers of samples among child tables, when the day comes that we want to support foreign tables in inheritance trees.
-
Tom Lane authored
ANALYZE now accepts foreign tables and allows the table's FDW to control how the sample rows are collected. (But only manual ANALYZEs will touch foreign tables, for the moment, since among other things it's not very clear how to handle remote permissions checks in an auto-analyze.) contrib/file_fdw is extended to support this. Etsuro Fujita, reviewed by Shigeru Hanada, some further tweaking by me.
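For reference, the hook as it settled into the 9.2 FDW API has roughly this shape; the estimate_file_pages() helper is hypothetical, and details should be checked against foreign/fdwapi.h:

    /* ANALYZE asks the FDW for a size estimate and a sampling callback. */
    static int
    file_acquire_sample_rows(Relation onerel, int elevel,
                             HeapTuple *rows, int targrows,
                             double *totalrows, double *totaldeadrows);

    static bool
    fileAnalyzeForeignTable(Relation relation,
                            AcquireSampleRowsFunc *func,
                            BlockNumber *totalpages)
    {
        /* Report physical size so ANALYZE can apportion sample rows. */
        *totalpages = estimate_file_pages(relation);    /* hypothetical */

        /* Hand back the callback that fills rows[] with sample tuples. */
        *func = file_acquire_sample_rows;
        return true;
    }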
-
Simon Riggs authored
-
Robert Haas authored
Report by Guillaume Lelarge.
-
Robert Haas authored
Report by Andrew Dunstan.
-
- 05 Apr, 2012 7 commits
-
-
Peter Eisentraut authored
The option --no-wrap prevents wars with (most?) editors about proper line wrapping. --sort-by-file ensures consistent file order, for easier diffing.
-
Robert Haas authored
Greg Smith and Jaime Casanova, reviewed by Alex Shulgin and myself.
-
Robert Haas authored
Greg Smith, Peter Geoghegan, and Robert Haas
-
Tom Lane authored
Somebody didn't bother to fix this comment while adding foreign table support to the code below it. In passing, remove the explicit calling-out of relkind letters, which adds complexity to the comment but doesn't help in understanding the code.
-
Robert Haas authored
The views are in milliseconds, but the raw functions return microseconds.
-
Robert Haas authored
Ants Aasma's original patch to add timing information for buffer I/O requests exposed this data at the relation level, which was judged too costly. I've here exposed it at the database level instead.
-
Tom Lane authored
The parser got confused if a cursor parameter had the same name as a plpgsql variable. Reported and diagnosed by Yeb Havinga, though this isn't exactly his proposed fix. Also, some mostly-but-not-entirely-cosmetic adjustments to the original named-cursor-parameter patch, for code readability and better error diagnostics.
-
- 04 Apr, 2012 5 commits
-
-
Tom Lane authored
This patch provides a test case for libpq's row processor API.

contrib/dblink can deal with very large result sets by dumping them into a tuplestore (which can spill to disk) --- but until now, the intermediate storage of the query result in a PGresult meant memory bloat for any large result. Now we use a row processor to convert the data to tuple form and dump it directly into the tuplestore.

A limitation is that this only works for plain dblink() queries, not dblink_send_query() followed by dblink_get_result(). In the latter case we don't know the desired tuple rowtype soon enough. While hack solutions to that are possible, a different user-level API would probably be a better answer.

Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
-
Tom Lane authored
Traditionally libpq has collected an entire query result before passing it back to the application. That provides a simple and transactional API, but it's pretty inefficient for large result sets. This patch allows the application to process each row on-the-fly instead of accumulating the rows into the PGresult. Error recovery becomes a bit more complex, but often that tradeoff is well worth making. Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
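This row processor API was reworked before the 9.2 release; what ultimately shipped is libpq's single-row mode, which gives the same on-the-fly behavior. A sketch using that released interface (error handling trimmed for brevity):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Stream a result set one row at a time instead of accumulating it. */
    static void
    stream_query(PGconn *conn, const char *query)
    {
        PGresult *res;

        if (!PQsendQuery(conn, query))
            return;
        PQsetSingleRowMode(conn);   /* must directly follow PQsendQuery */

        while ((res = PQgetResult(conn)) != NULL)
        {
            if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
                printf("%s\n", PQgetvalue(res, 0, 0)); /* one row at a time */
            PQclear(res);           /* nothing accumulates, no memory bloat */
        }
    }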
-
Tom Lane authored
There is no existing or foreseeable case in which psql should see a PGRES_COPY_BOTH PQresultStatus; and if such a case ever emerges, it's a pretty good bet that these code fragments wouldn't do the right thing anyway. Remove them, and let the existing default cases do the appropriate thing, namely emit an "unexpected PQresultStatus" bleat. Noted while working on libpq row processor patch, for which I was considering adding a PGRES_SUSPENDED status code --- the same default-case treatment would be appropriate for that.
-
Tom Lane authored
The original coding of the syslogger had an arbitrary limit of 20 large messages concurrently in progress, after which it would just punt and dump message fragments to the output file separately. Our ambitions are a bit higher than that now, so allow the data structure to expand as necessary. Reported and patched by Andrew Dunstan; some editing by Tom
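The general fix, replacing a fixed-size array with one that grows on demand, follows the usual doubling pattern; a generic sketch, not the syslogger's actual code:

    #include <stdlib.h>

    typedef struct
    {
        void  **items;
        int     used;
        int     capacity;
    } GrowableList;

    static void
    list_append(GrowableList *list, void *item)
    {
        if (list->used >= list->capacity)
        {
            /* Double the capacity instead of punting at a fixed limit. */
            int newcap = (list->capacity > 0) ? list->capacity * 2 : 20;

            list->items = realloc(list->items, newcap * sizeof(void *));
            /* (realloc failure handling omitted for brevity) */
            list->capacity = newcap;
        }
        list->items[list->used++] = item;
    }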
-
Tom Lane authored
dblink_exec leaked temporary database connections if any error occurred after connection setup, for example:

    SELECT dblink_exec('...connect string...', 'select 1/0');

Add a PG_TRY block to ensure PQfinish gets done when it is needed. (dblink_record_internal is on the hairy edge of needing similar treatment, but seems not to be actively broken at the moment.)

Also, in 9.0 and up, only one of the three functions using tuplestore return mode was properly checking that the query context would allow a tuplestore result.

Noted while reviewing dblink patch. Back-patch to all supported branches.
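The fix follows the backend's standard error-handling idiom; a simplified sketch of the pattern, assuming backend context, not the exact dblink code:

    #include "postgres.h"
    #include "libpq-fe.h"

    static void
    exec_on_remote(const char *connstr, const char *sql)
    {
        PGconn *conn = PQconnectdb(connstr);

        PG_TRY();
        {
            /* Any elog(ERROR) raised in here longjmps to PG_CATCH. */
            PGresult *res = PQexec(conn, sql);
            int ok = (PQresultStatus(res) == PGRES_COMMAND_OK);

            PQclear(res);
            if (!ok)
                elog(ERROR, "remote command failed: %s",
                     PQerrorMessage(conn));
        }
        PG_CATCH();
        {
            PQfinish(conn);     /* don't leak the temporary connection */
            PG_RE_THROW();
        }
        PG_END_TRY();

        PQfinish(conn);         /* normal-path cleanup */
    }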
-
- 03 Apr, 2012 1 commit
-
-
Robert Haas authored
Extracted from Joachim Wieland's parallel pg_dump patch, with some additional comments by me.
-