- 13 Feb, 2014 1 commit
-
-
Tom Lane authored
We have a practice of providing a "bread crumb" trail between the minor versions where the migration section actually tells you to do something. Historically that was just plain text, e.g., "see the release notes for 9.2.4"; but if you're using a browser or PDF reader, it's a lot nicer if it's a live hyperlink. So use "<xref>" instead. Any argument against doing this vanished with the recent decommissioning of plain-text release notes.

Vik Fearing
-
- 12 Feb, 2014 13 commits
-
-
Tom Lane authored
Per Peter Eisentraut.
-
Tom Lane authored
In pqSendSome, if the connection is already closed at entry, discard any queued output data before returning. There is no possibility of ever sending the data, and anyway this corresponds to what we'd do if we'd detected a hard error while trying to send(). This avoids possible indefinite bloat of the output buffer if the application keeps trying to send data (or even just keeps trying to do PQputCopyEnd, as psql indeed will).

Because PQputCopyEnd won't transition out of PGASYNC_COPY_IN state until it's successfully queued the COPY END message, and pqPutMsgEnd doesn't distinguish a queuing failure from a pqSendSome failure, this omission allowed an infinite loop in psql if the connection closure occurred when we had at least 8K queued to send. It might be worth refactoring so that we can make that distinction, but for the moment the other changes made here seem to offer adequate defenses.

To guard against other variants of this scenario, do not allow PQgetResult to return a PGRES_COPY_XXX result if the connection is already known dead. Make sure it returns PGRES_FATAL_ERROR instead.

Per report from Stephen Frost. Back-patch to all active branches.
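A minimal sketch of the entry-point check, assuming libpq's internal PGconn fields (outCount being the number of bytes queued in the output buffer); illustrative only, not the committed diff:

    /* Sketch of the pqSendSome entry check (field names per libpq internals). */
    if (conn->sock < 0)
    {
        printfPQExpBuffer(&conn->errorMessage,
                          libpq_gettext("connection not open\n"));
        /* Discard queued output; there is no way it can ever be sent. */
        conn->outCount = 0;
        return -1;
    }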
-
Bruce Momjian authored
Backbranch release note changes cause merge conflicts.
-
Bruce Momjian authored
This simplifies the docs and makes it easier to cut/paste command lines.
-
Bruce Momjian authored
Report from Jeff Janes
-
Bruce Momjian authored
Report from Marti Raudsepp
-
Tom Lane authored
In a database that's not yet reached consistency, it's possible that some segments of a relation are not full-size but are not the last ones either. Because of the way smgrnblocks() works, asking for a new page with P_NEW will fill in the last not-full-size segment --- and if that makes it full size, the apparent EOF of the relation will increase by more than one page, so that the next P_NEW request will yield a page past the next consecutive one. This breaks the relation-extension logic in XLogReadBufferExtended, possibly allowing a page update to be applied to some page far past where it was intended to go.

This appears to be the explanation for reports of table bloat on replication slaves compared to their masters, and probably explains some corrupted-slave reports as well.

Fix the loop to check the page number it actually got, rather than merely Assert()'ing that dead reckoning got it to the desired place. AFAICT, there are no other places that make assumptions about exactly which page they'll get from P_NEW.

Problem identified by Greg Stark, though this is not the same as his proposed patch. It's been like this for a long time, so back-patch to all supported branches.
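A rough sketch of the corrected extension loop, which checks the block number it actually obtained instead of assuming each P_NEW request advances the EOF by exactly one page (simplified from the surrounding XLogReadBufferExtended code):

    /* Keep extending until the buffer we got really is the target block. */
    buffer = InvalidBuffer;
    do
    {
        if (buffer != InvalidBuffer)
            ReleaseBuffer(buffer);
        buffer = ReadBufferWithoutRelcache(rnode, forknum, P_NEW, mode, NULL);
    }
    while (BufferGetBlockNumber(buffer) < blkno);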
-
Magnus Hagander authored
Noted by the buildfarm and Andres Freund
-
Magnus Hagander authored
If an error occurred in the foreground (backup) process of pg_basebackup and we exited in a controlled way, the background process (the xlog-streaming process) would stay around and keep streaming.
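One plausible shape for such a fix is to remember the background process's pid and signal it before any controlled exit; the names below are hypothetical, not taken from pg_basebackup:

    #include <signal.h>
    #include <sys/types.h>

    static pid_t bgchild = -1;          /* pid of the xlog-streaming child */

    /* Hypothetical atexit() hook: don't let the child outlive the parent. */
    static void
    kill_bgchild_atexit(void)
    {
        if (bgchild > 0)
            kill(bgchild, SIGTERM);
    }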
-
Tom Lane authored
This is evidently the default on buildfarm member narwhal, but that is a pretty ancient Mingw version, and there is reason to think that more recent versions of GNU ld have this feature turned on by default. Since we are trying to achieve consistency of link behavior across all Windows toolchains, let's just make sure here.
-
Tom Lane authored
This is expected to make it start failing when contrib modules reference non-PGDLLIMPORT'ed global variables, as the other Windows build methods do. Aside from the value of consistency, the underlying implementation of this switch is pretty ugly and not really something we want to rely on if we have to use PGDLLIMPORT anyway for MSVC.
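For context, PGDLLIMPORT marking is what lets an MSVC-built module reference a backend global; an illustrative declaration (work_mem is a real backend global, the point is the decoration):

    /* Without the PGDLLIMPORT decoration, MSVC-built contrib modules fail to
     * link against this backend global variable. */
    extern PGDLLIMPORT int work_mem;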
-
Bruce Momjian authored
Backpatch to 9.3. Report from MauMau
-
- 11 Feb, 2014 6 commits
-
-
Tom Lane authored
This should make the MSVC build act more like builds for other platforms, i.e. backend global variables will be automatically available to loadable libraries without need for explicit PGDLLIMPORT marking. Craig Ringer
-
Tom Lane authored
Even if this is needed, it'd be configure's responsibility to set it.
-
Tom Lane authored
We are almost completely out of the dlltool game, if this works. Hiroshi Inoue
-
Tom Lane authored
Get rid of use of dlltool for linking the main postgres executable. dlltool is obsolete and we'd prefer to stop depending on it.

Also, include $(LDAP_LIBS_FE) in $(libpq_pgport). (It's not clear that this is really needed, or why it's not a linker bug if it is needed. But reports are that it's needed on current Cygwin.)

We might want to back-patch this if it works, but first let's see what the buildfarm thinks.

Marco Atzeri
-
Peter Eisentraut authored
This results in spurious empty lines in the server log. Instead, add the newlines only when printing out the --echo output. In some cases, this was already done, leading to two newlines being printed. Clean that up as well. From: Fabrízio de Royes Mello <fabriziomello@gmail.com>
-
Tom Lane authored
Providing this information as plain text was doubtless worth the trouble ten years ago, but it seems likely that hardly anyone reads it in this format anymore. And the effort required to maintain these files (in the form of extra-complex markup rules in the relevant parts of the SGML documentation) is significant. So, let's stop doing that and rely solely on the other documentation formats.

Per discussion, the plain-text INSTALL instructions might still be worth their keep, so we continue to generate that file.

Rather than remove HISTORY and src/test/regress/README from distribution tarballs entirely, replace them with simple stub files that tell the reader where to find the relevant documentation. This is mainly to avoid possibly breaking packaging recipes that expect these files to exist.

Back-patch to all supported branches, because simplifying the markup requirements for release notes won't help much unless we do it in all branches.
-
- 10 Feb, 2014 2 commits
-
-
Heikki Linnakangas authored
WakeupWaiters() is supposed to wake up all LW_WAIT_UNTIL_FREE waiters of the slot, but the loop incorrectly also woke up the first LW_EXCLUSIVE waiter, if there were no LW_WAIT_UNTIL_FREE waiters in the queue.

Noted by Andres Freund. This code is new in 9.4, so no backpatching.
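In rough terms, the fix is for the wakeup loop to stop at the first waiter that is not in LW_WAIT_UNTIL_FREE mode rather than unconditionally waking one more process; a simplified, hypothetical sketch of such a loop:

    /* Wake only LW_WAIT_UNTIL_FREE waiters; leave the first LW_EXCLUSIVE
     * waiter (and everything queued behind it) alone. */
    while (head != NULL && head->lwWaitMode == LW_WAIT_UNTIL_FREE)
    {
        PGPROC *proc = head;

        head = proc->lwWaitLink;
        proc->lwWaiting = false;
        PGSemaphoreUnlock(&proc->sem);
    }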
-
Heikki Linnakangas authored
In commit d2495f27, I fixed this bug in to_tsquery(), but missed the fact that plainto_tsquery() has the same bug.
-
- 09 Feb, 2014 6 commits
-
-
Stephen Frost authored
Make ftello error-checking consistent across all calls and remove a bit of ftello-related code which has been #if 0'd out since 2001.

Note that we are not concerned with the ftello() call under snprintf() failing, as it is just building a string to call exit_horribly() with; printing -1 in such a case is fine.
-
Stephen Frost authored
Rather than resetting errno (or just hoping that it's cleared already), check just the result of ftello() for < 0 to determine whether there was an issue.

Oversight by me, pointed out by Tom.
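The resulting pattern looks roughly like this (a sketch; exit_horribly() is pg_dump's fatal-error helper):

    off_t   pos = ftello(fp);

    if (pos < 0)
        exit_horribly(modulename,
                      "could not determine seek position in file: %s\n",
                      strerror(errno));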
-
Magnus Hagander authored
This prevents pg_basebackup from generating excessive output when dumping large clusters. The status is now updated once per second, still making it possible to see that there is progress happening, but limiting the total bandwidth.

Mika Eloranta, reviewed by Sawada Masahiko and Oskari Saarenmaa
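A simplified sketch of once-per-second throttling; the variable and function names are illustrative rather than the ones used in pg_basebackup:

    #include <stdio.h>
    #include <time.h>

    static time_t last_progress_report = 0;

    /* Print a progress line at most once per second, unless forced. */
    static void
    progress_report(long done_kb, long total_kb, int force)
    {
        time_t  now = time(NULL);

        if (!force && now == last_progress_report)
            return;                     /* too soon since the last update */
        last_progress_report = now;
        fprintf(stderr, "%ld/%ld kB (%d%%) processed\r",
                done_kb, total_kb,
                total_kb > 0 ? (int) (done_kb * 100 / total_kb) : 0);
    }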
-
Magnus Hagander authored
When using verbose mode for pg_basebackup, in tar format sent to stdout, we'd print an uninitialized buffer as the filename.

Reported by Pontus Lundkvist
-
Stephen Frost authored
Improve pg_dump by checking the results of various fgetc() calls which were previously unchecked; ditto for ftello. Also clean up a couple of very minor memory leaks by waiting to allocate structures until after the initial check(s).

Issues spotted by Coverity.
-
Peter Eisentraut authored
Detected by clang's -Wmissing-variable-declarations. From: Andres Freund <andres@anarazel.de>
-
- 07 Feb, 2014 4 commits
-
-
Heikki Linnakangas authored
The shimTriConsistentFn, which calls the opclass's consistent function with all combinations of TRUE/FALSE for any MAYBE argument, modifies the entryRes array passed by the caller. Change startScanKey to re-initialize it between each call to accommodate that. It's actually a bad habit by shimTriConsistentFn to modify its argument. But the only caller that doesn't already re-initialize the entryRes array was startScanKey, and it's easy for startScanKey to do so. Add a comment to shimTriConsistentFn about that.

Note: this does not give a free pass to opclass-provided consistent functions to modify the entryRes argument; shimTriConsistent assumes that they don't, even though it does it itself.

While at it, refactor startScanKey to allocate the requiredEntries and additionalEntries after it knows exactly how large they need to be. Saves a little bit of memory, and looks nicer anyway.

Per complaint by Tom Lane, buildfarm and the pg_trgm regression test.
-
Heikki Linnakangas authored
If you have a GIN query like "rare & frequent", we currently fetch all the items that match either rare or frequent, call the consistent function for each item, and let the consistent function filter out items that only match one of the terms. However, if we can deduce that "rare" must be present for the overall qual to be true, we can scan all the rare items, and for each rare item, skip over to the next frequent item with the same or greater TID. That greatly speeds up "rare & frequent" type queries.

To implement that, introduce the concept of a tri-state consistent function, where the 3rd value is MAYBE, indicating that we don't know if that term is present. Operator classes only provide a boolean consistent function, so we simulate the tri-state consistent function by calling the boolean function several times, with the MAYBE arguments set to all combinations of TRUE and FALSE. Testing all combinations is only feasible for a small number of MAYBE arguments, but it is envisioned that we'll provide a way for operator classes to provide a native tri-state consistent function, which can be much more efficient. But that is not included in this patch.

We were already using that trick for lossy pages, calling the consistent function with the lossy entry set to TRUE and FALSE. Now that we have the tri-state consistent function, use it for lossy pages too.

Alexander Korotkov, with a fair amount of refactoring by me.
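The simulation idea can be sketched apart from the GIN internals: enumerate every TRUE/FALSE assignment of the MAYBE entries, call the boolean consistent function for each, and report a definite answer only if all the calls agree. The types and names below are hypothetical, not the actual shim in the GIN code:

    #include <stdbool.h>

    typedef enum { TS_FALSE, TS_TRUE, TS_MAYBE } TriState;

    #define MAX_MAYBE 6         /* enumeration is only feasible for a few MAYBEs */

    static TriState
    shim_tri_consistent(bool (*consistent)(bool *check, int nkeys),
                        const TriState *check, int nkeys)
    {
        bool        boolcheck[64];      /* assumes nkeys <= 64 for this sketch */
        int         maybeidx[MAX_MAYBE];
        int         nmaybe = 0;
        int         ntrue = 0;
        int         ncalls;
        int         i, combo;

        for (i = 0; i < nkeys; i++)
        {
            boolcheck[i] = (check[i] == TS_TRUE);
            if (check[i] == TS_MAYBE)
            {
                if (nmaybe == MAX_MAYBE)
                    return TS_MAYBE;    /* too many combinations; give up */
                maybeidx[nmaybe++] = i;
            }
        }

        ncalls = 1 << nmaybe;
        for (combo = 0; combo < ncalls; combo++)
        {
            /* set the MAYBE slots according to the bits of this combination */
            for (i = 0; i < nmaybe; i++)
                boolcheck[maybeidx[i]] = ((combo & (1 << i)) != 0);
            if (consistent(boolcheck, nkeys))
                ntrue++;
        }

        if (ntrue == ncalls)
            return TS_TRUE;
        if (ntrue == 0)
            return TS_FALSE;
        return TS_MAYBE;
    }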
-
Heikki Linnakangas authored
Amit Langote
-
Tom Lane authored
We may process relcache flush requests during transaction startup or shutdown. In general it's not terribly safe to do catalog access at those times, so the code's habit of trying to immediately revalidate unflushable relcache entries is risky. Although there are no field trouble reports that are positively traceable to this, we have been able to demonstrate failure of the assertions recently added in RelationIdGetRelation() and SearchCatCache().

On the other hand, it seems safe to just postpone revalidation of the cache entry until we're inside a valid transaction. The one case where this is questionable is where we're exiting a subtransaction and the outer transaction is holding the relcache entry open --- but if we made any significant changes to the rel inside such a subtransaction, we've got problems anyway. There are mechanisms in place to prevent that (to wit, locks for cross-session cases and CheckTableNotInUse() for intra-session cases), so let's trust to those mechanisms to keep us out of trouble.
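Conceptually the change amounts to a guard like this in the invalidation path (heavily simplified; IsTransactionState() and rd_isvalid are real names, but the surrounding logic is abridged):

    /* Not inside a valid transaction: don't revalidate now, just mark the
     * entry invalid and let the next access inside a transaction rebuild it. */
    if (!IsTransactionState())
    {
        relation->rd_isvalid = false;
        return;
    }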
-
- 06 Feb, 2014 4 commits
-
-
Andrew Dunstan authored
-
Peter Eisentraut authored
Indenting the XHTML output can lead to incorrect rendering. This only affects the build via XSLT.
-
Peter Eisentraut authored
-
- 05 Feb, 2014 3 commits
-
-
Tom Lane authored
These flushes were added in my commit d2896a9e, which added the btree logic that keeps a cached copy of the index metapage data in index relcache entries. The idea was to ensure that other backends would promptly update their cached copies after a change. However, this is not really necessary, since _bt_getroot() has adequate defenses against believing a stale root page link, and _bt_getrootheight() doesn't have to be 100% right. Moreover, if it were necessary, a relcache flush would be an unreliable way to do it, since the sinval mechanism believes that relcache flush requests represent transactional updates, and therefore discards them on transaction rollback. Therefore, we might as well drop these flush requests and save the time to rebuild the whole relcache entry after a metapage change.

If we ever try to support in-place truncation of btree indexes, it might be necessary to revisit this issue so that _bt_getroot() can't get caught by trying to follow a metapage link to a page that no longer exists. A possible solution to that is to make use of an smgr, rather than relcache, inval request to force other backends to discard their cached metapages. But for the moment this is not worth pursuing.
-
Robert Haas authored
Fix a thinko pointed out by Jeff Davis, and convert a couple of other references into links.
-
Peter Eisentraut authored
The code was assigning a (Datum) 0 to a void pointer. That creates a warning from clang 3.4. It was probably a thinko to begin with.
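The warning boils down to something like this (illustrative only):

    void   *ptr;

    ptr = (Datum) 0;    /* clang 3.4 warns: integer Datum assigned to a pointer */
    ptr = NULL;         /* the intended spelling */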
-
- 04 Feb, 2014 1 commit
-
-
Tom Lane authored
postgres_fdw tended to say "unknown error" if it tried to execute a command on an already-dead connection, because some paths in libpq just return a null PGresult for such cases. Out-of-memory might result in that, too. To fix, pass the PGconn to pgfdw_report_error, and look at its PQerrorMessage() string if we can't get anything out of the PGresult.

Also, fix the transaction-exit logic to reliably drop a dead connection. It was attempting to do that already, but it assumed that only connection cache entries with xact_depth > 0 needed to be examined. The folly in that is that if we fail while issuing START TRANSACTION, we'll not have bumped xact_depth. (At least for the case I was testing, this fix masks the other problem; but it still seems like a good idea to have the PGconn fallback logic.)

Per investigation of bug #9087 from Craig Lucas. Backpatch to 9.3 where this code was introduced.
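The fallback amounts to roughly this (a sketch; PQresultErrorMessage() and PQerrorMessage() are the real libpq calls, the surrounding error-reporting code is simplified):

    const char *message = res ? PQresultErrorMessage(res) : NULL;

    if (message == NULL || message[0] == '\0')
        message = PQerrorMessage(conn);     /* dead connection, OOM, etc. */
    if (message == NULL || message[0] == '\0')
        message = "unknown error";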
-