- 08 Apr, 2012 1 commit
-
-
Heikki Linnakangas authored
We used to only initialize the stack base pointer when starting up a regular backend, not in other processes. In particular, autovacuum workers can run arbitrary user code, and without stack-depth checking, infinite recursion in e.g. an index expression will bring down the whole cluster.

The comment about PL/Java using set_stack_base() is not yet true. As the code stands, PL/Java still modifies the stack_base_ptr variable directly. However, it's been discussed in the PL/Java mailing list that it should be changed to use the function, because PL/Java is currently oblivious to the register stack used on Itanium. There's another issue with PL/Java, namely that the stack base pointer it sets is not really the base of the stack; it could be something close to the bottom of the stack. That's a separate issue that might need some further changes to this code, but that's a different story.

Backpatch to all supported releases.
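For context, a minimal sketch of the pattern this commit generalizes: a postmaster child records its stack base near the top of its main routine, and deeply recursive code paths call check_stack_depth() so that runaway recursion raises an error instead of crashing the process. set_stack_base() and check_stack_depth() are the backend functions declared in miscadmin.h; the worker_main() wrapper and the recursive helper below are made-up illustrations, not the actual autovacuum code.

    #include "postgres.h"
    #include "miscadmin.h"      /* set_stack_base(), check_stack_depth() */

    /* Hypothetical entry point of a postmaster child (e.g. an autovacuum worker). */
    static void
    worker_main(void)
    {
        /*
         * Record the approximate stack base so that later check_stack_depth()
         * calls can measure how far we have recursed from here.
         */
        set_stack_base();

        /* ... worker initialization and main loop ... */
    }

    /* Hypothetical recursive evaluation guarded against runaway recursion. */
    static void
    evaluate_nested(int level)
    {
        /* errors out with "stack depth limit exceeded" instead of crashing */
        check_stack_depth();

        if (level > 0)
            evaluate_nested(level - 1);
    }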
-
- 07 Apr, 2012 4 commits
-
-
Tom Lane authored
-
Tom Lane authored
-
Tom Lane authored
-
Bruce Momjian authored
-
- 06 Apr, 2012 10 commits
-
-
Tom Lane authored
Thom Brown
-
Tom Lane authored
XLOG_GIN_UPDATE_META_PAGE and XLOG_GIN_DELETE_LISTPAGE records were printed with a list link field labeled as "blkno", which was confusing, especially when the link was empty (InvalidBlockNumber). Print the metapage block number instead, since that's what's actually being updated. We could include the link values too as a separate field, but not clear it's worth the trouble. Back-patch to 8.4 where the dubious code was added.
-
Peter Eisentraut authored
Thom Brown
-
Peter Eisentraut authored
Thom Brown
-
Tom Lane authored
If we make the initially-called function return the table physical-size estimate, acquire_inherited_sample_rows will be able to use that to allocate numbers of samples among child tables, when the day comes that we want to support foreign tables in inheritance trees.
-
Tom Lane authored
ANALYZE now accepts foreign tables and allows the table's FDW to control how the sample rows are collected. (But only manual ANALYZEs will touch foreign tables, for the moment, since among other things it's not very clear how to handle remote permissions checks in an auto-analyze.)

contrib/file_fdw is extended to support this.

Etsuro Fujita, reviewed by Shigeru Hanada, some further tweaking by me.
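A rough sketch of how an FDW might hook into this, assuming the callback shapes match the 9.2-era FDW API (an AnalyzeForeignTable callback that reports a page-count estimate and hands back a sample-acquisition function); the signatures should be treated as my recollection rather than a reference, and my_acquire_sample_rows / my_analyze_foreign_table are invented names.

    #include "postgres.h"
    #include "foreign/fdwapi.h"    /* FdwRoutine, AcquireSampleRowsFunc */

    /* Hypothetical sample collector: fill rows[] with up to targrows tuples. */
    static int
    my_acquire_sample_rows(Relation relation, int elevel,
                           HeapTuple *rows, int targrows,
                           double *totalrows, double *totaldeadrows)
    {
        int numrows = 0;

        /* ... scan the foreign data source, storing sample tuples in rows[] ... */
        *totalrows = 0;        /* estimated live rows in the foreign table */
        *totaldeadrows = 0;    /* foreign sources typically report no dead rows */
        return numrows;
    }

    /*
     * Hypothetical AnalyzeForeignTable: tell ANALYZE roughly how big the table
     * is (used to apportion sample sizes) and which function collects the sample.
     */
    static bool
    my_analyze_foreign_table(Relation relation,
                             AcquireSampleRowsFunc *func,
                             BlockNumber *totalpages)
    {
        *totalpages = 1;                  /* physical-size estimate, in pages */
        *func = my_acquire_sample_rows;
        return true;                      /* true = this table can be analyzed */
    }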
-
Simon Riggs authored
-
Robert Haas authored
Report by Guillaume Lelarge.
-
Robert Haas authored
Report by Andrew Dunstan.
-
- 05 Apr, 2012 7 commits
-
-
Peter Eisentraut authored
The option --no-wrap prevents wars with (most?) editors about proper line wrapping, and --sort-by-file ensures a consistent file order, for easier diffing.
-
Robert Haas authored
Greg Smith and Jaime Casanova, reviewed by Alex Shulgin and myself.
-
Robert Haas authored
Greg Smith, Peter Geoghegan, and Robert Haas
-
Tom Lane authored
Somebody didn't bother to fix this comment while adding foreign table support to the code below it. In passing, remove the explicit calling-out of relkind letters, which adds complexity to the comment but doesn't help in understanding the code.
-
Robert Haas authored
The views are in milliseconds, but the raw functions return microseconds.
-
Robert Haas authored
Ants Aasma's original patch to add timing information for buffer I/O requests exposed this data at the relation level, which was judged too costly. I've here exposed it at the database level instead.
-
Tom Lane authored
The parser got confused if a cursor parameter had the same name as a plpgsql variable. Reported and diagnosed by Yeb Havinga, though this isn't exactly his proposed fix. Also, some mostly-but-not-entirely-cosmetic adjustments to the original named-cursor-parameter patch, for code readability and better error diagnostics.
-
- 04 Apr, 2012 5 commits
-
-
Tom Lane authored
This patch provides a test case for libpq's row processor API. contrib/dblink can deal with very large result sets by dumping them into a tuplestore (which can spill to disk) --- but until now, the intermediate storage of the query result in a PGresult meant memory bloat for any large result. Now we use a row processor to convert the data to tuple form and dump it directly into the tuplestore.

A limitation is that this only works for plain dblink() queries, not dblink_send_query() followed by dblink_get_result(). In the latter case we don't know the desired tuple rowtype soon enough. While hack solutions to that are possible, a different user-level API would probably be a better answer.

Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
-
Tom Lane authored
Traditionally libpq has collected an entire query result before passing it back to the application. That provides a simple and transactional API, but it's pretty inefficient for large result sets. This patch allows the application to process each row on-the-fly instead of accumulating the rows into the PGresult. Error recovery becomes a bit more complex, but often that tradeoff is well worth making.

Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
-
Tom Lane authored
There is no existing or foreseeable case in which psql should see a PGRES_COPY_BOTH PQresultStatus; and if such a case ever emerges, it's a pretty good bet that these code fragments wouldn't do the right thing anyway. Remove them, and let the existing default cases do the appropriate thing, namely emit an "unexpected PQresultStatus" bleat.

Noted while working on libpq row processor patch, for which I was considering adding a PGRES_SUSPENDED status code --- the same default-case treatment would be appropriate for that.
-
Tom Lane authored
The original coding of the syslogger had an arbitrary limit of 20 large messages concurrently in progress, after which it would just punt and dump message fragments to the output file separately. Our ambitions are a bit higher than that now, so allow the data structure to expand as necessary.

Reported and patched by Andrew Dunstan; some editing by Tom
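A generic sketch of the "grow on demand" pattern the commit describes, not the syslogger's actual data structure; SavedMessage, get_free_slot(), and the doubling policy are all assumptions made for illustration.

    #include <stdlib.h>

    /* Hypothetical slot for one partially assembled (chunked) log message. */
    typedef struct
    {
        int     pid;     /* backend whose chunks we are collecting */
        char   *data;    /* accumulated message text */
        size_t  len;
    } SavedMessage;

    static SavedMessage *saved = NULL;
    static int n_saved = 0;      /* slots in use */
    static int max_saved = 0;    /* slots allocated */

    /* Grow the array on demand instead of punting at a fixed limit. */
    static SavedMessage *
    get_free_slot(void)
    {
        if (n_saved >= max_saved)
        {
            int newmax = (max_saved > 0) ? max_saved * 2 : 20;
            SavedMessage *newbuf = realloc(saved, newmax * sizeof(SavedMessage));

            if (newbuf == NULL)
                return NULL;     /* caller must fall back or report out-of-memory */
            saved = newbuf;
            max_saved = newmax;
        }
        return &saved[n_saved++];
    }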
-
Tom Lane authored
dblink_exec leaked temporary database connections if any error occurred after connection setup, for example:

    SELECT dblink_exec('...connect string...', 'select 1/0');

Add a PG_TRY block to ensure PQfinish gets done when it is needed. (dblink_record_internal is on the hairy edge of needing similar treatment, but seems not to be actively broken at the moment.)

Also, in 9.0 and up, only one of the three functions using tuplestore return mode was properly checking that the query context would allow a tuplestore result.

Noted while reviewing dblink patch. Back-patch to all supported branches.
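A minimal sketch of the cleanup pattern described here, assuming the backend's PG_TRY/PG_CATCH machinery and libpq's PQfinish(); the function and variable names are illustrative, not dblink's actual code.

    #include "postgres.h"
    #include "libpq-fe.h"

    /* Hypothetical helper: run a query over a temporary connection. */
    static void
    exec_on_temp_connection(const char *connstr, const char *sql)
    {
        PGconn *conn = PQconnectdb(connstr);

        if (PQstatus(conn) != CONNECTION_OK)
        {
            PQfinish(conn);
            elog(ERROR, "could not establish connection");
        }

        PG_TRY();
        {
            PGresult *res = PQexec(conn, sql);

            /* ... check the result and report failures with ereport() ... */
            PQclear(res);
        }
        PG_CATCH();
        {
            /* release the temporary connection before propagating the error */
            PQfinish(conn);
            PG_RE_THROW();
        }
        PG_END_TRY();

        PQfinish(conn);
    }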
-
- 03 Apr, 2012 2 commits
-
-
Robert Haas authored
Extracted from Joachim Wieland's parallel pg_dump patch, with some additional comments by me.
-
Peter Eisentraut authored
-
- 01 Apr, 2012 2 commits
-
-
Peter Eisentraut authored
Use msgmerge --lang option to seed the Language field, recently introduced by gettext, in the header of the new PO file.
-
Peter Eisentraut authored
-
- 31 Mar, 2012 5 commits
-
-
Tom Lane authored
Combining the loop workspace with the record of already-processed objects might have been a cute trick, but it behaves horridly if there are many dependency loops to repair: the time spent in the first step of findLoop() grows as O(N^2). Instead use a separate flag array indexed by dump ID, which we can check in constant time. The length of the workspace array is now never more than the actual length of a dependency chain, which should be reasonably short in all cases of practical interest. The code is noticeably easier to understand this way, too.

Per gripe from Mike Roest. Since this is a longstanding performance bug, backpatch to all supported versions.
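A schematic illustration of the constant-time membership check the commit describes, assuming dump IDs are small consecutive integers as in pg_dump; processedIdx and the helper functions are made-up stand-ins, not the actual findLoop() code.

    #include <stdbool.h>
    #include <stdlib.h>

    typedef int DumpId;

    /* One flag per possible dump ID: has this object been processed yet? */
    static bool *processedIdx;

    static void
    init_processed_flags(DumpId maxDumpId)
    {
        /* calloc zero-fills, so every object starts out unprocessed */
        processedIdx = calloc(maxDumpId + 1, sizeof(bool));
        if (processedIdx == NULL)
            exit(EXIT_FAILURE);
    }

    /* O(1) check-and-mark, replacing a linear scan over the loop workspace. */
    static bool
    mark_processed(DumpId id)
    {
        if (processedIdx[id])
            return false;          /* already handled */
        processedIdx[id] = true;
        return true;
    }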
-
Tom Lane authored
The loop that matched owned sequences to their owning tables required time proportional to the number of owned sequences times the number of tables; although this work was only expended in selective-dump situations, which is probably why the issue wasn't recognized long since. Refactor slightly so that we can perform this work after the index array for findTableByOid has been set up, reducing the time to O(M log N).

Per gripe from Mike Roest. Since this is a longstanding performance bug, backpatch to all supported versions.
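The kind of sorted-index lookup this refactoring relies on, sketched with standard bsearch(); TableInfo, the comparator, and this findTableByOid are simplified stand-ins for pg_dump's real structures, shown only to illustrate why the cost drops to O(M log N).

    #include <stdlib.h>

    typedef unsigned int Oid;

    typedef struct
    {
        Oid oid;
        /* ... other per-table fields ... */
    } TableInfo;

    /* Index array sorted by OID, built once before the matching loop runs. */
    static TableInfo **tblinfoindex;
    static int numTables;

    static int
    oid_cmp(const void *key, const void *entry)
    {
        Oid keyoid = *(const Oid *) key;
        Oid entoid = (*(TableInfo *const *) entry)->oid;

        return (keyoid > entoid) - (keyoid < entoid);
    }

    /* O(log N) lookup, so matching M sequences to owners costs O(M log N). */
    static TableInfo *
    findTableByOid(Oid oid)
    {
        TableInfo **found = bsearch(&oid, tblinfoindex, numTables,
                                    sizeof(TableInfo *), oid_cmp);

        return found ? *found : NULL;
    }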
-
Tom Lane authored
ecpg and pg_dump each contain keyword arrays with structure similar to the backend's keyword array. Up to now, we actually named those arrays the same as the backend's and relied on parser/keywords.h to declare them. This seems a tad too cute, though, and it breaks now that we need to PGDLLIMPORT-decorate the backend symbols. Rename to avoid the problem. Per buildfarm.

(It strikes me that maybe we should get rid of the separate keywords.c files altogether, and just define these arrays in the modules that use them, but that's a rather more invasive change.)
-
Tom Lane authored
Over-optimization (by me, looks like :-() broke the case of recognizing a word boundary just before a quoted identifier. Reported and diagnosed by Dean Rasheed.
-
Tom Lane authored
Per buildfarm, this is now needed by contrib/pg_stat_statements.
-
- 30 Mar, 2012 4 commits
-
-
Peter Eisentraut authored
Some of these are newly added, some are older and were forgotten, some don't contain any translatable strings right now but look like they could in the future.
-
Peter Eisentraut authored
see also ce8d7bb6
-
Peter Eisentraut authored
Otherwise, the availability of these variables depends on what happened to be available at the time the PostgreSQL build was configured.
-
Robert Haas authored
Fujii Masao, plus a comment by me. While I'm at it, correctly tabify this chunk of code.
-