- 14 Feb, 2014 3 commits
-
Heikki Linnakangas authored
If there is a WAL segment with the same ID but different TLI present in both the WAL archive and pg_xlog, prefer the one with the higher TLI. Before this patch, the archive was polled first, for all expected TLIs, and only if no file was found was pg_xlog scanned. This was a change in behavior from 9.2, which first scanned the archive and pg_xlog for the highest TLI, then the archive and pg_xlog for the next-highest TLI, and so forth. This patch reverts the behavior back to what it was in 9.2.

The reason for this is that if, for example, you try to do archive recovery to timeline 2, which branched off timeline 1, but the WAL for timeline 2 is not archived yet, we would replay past the timeline switch point on timeline 1 using the archived files, before even looking at timeline 2's files in pg_xlog.

Report and patch by Kyotaro Horiguchi. Backpatch to 9.3, where the behavior was changed.
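A minimal sketch of the restored polling order, assuming hypothetical helpers try_archive() and try_pg_xlog() (the real logic lives in the recovery code):

```c
#include <stdbool.h>

/* hypothetical helpers; the real lookups live in the recovery code */
extern bool try_archive(unsigned tli, unsigned segno);
extern bool try_pg_xlog(unsigned tli, unsigned segno);

/* tlis[] holds the expected timelines, highest (most recent) first */
static bool
find_segment(const unsigned *tlis, int ntlis, unsigned segno)
{
    for (int i = 0; i < ntlis; i++)
    {
        /* exhaust both sources for one TLI before trying a lower one */
        if (try_archive(tlis[i], segno) || try_pg_xlog(tlis[i], segno))
            return true;
    }
    return false;               /* not found on any expected timeline */
}
```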
-
Peter Eisentraut authored
Stefan Kaltenbrunner
-
Bruce Momjian authored
-
- 13 Feb, 2014 8 commits
-
Tom Lane authored
Adjust handleCopyOut() to stop trying to write data once it's failed one time. For typical cases such as out-of-disk-space or broken-pipe, additional attempts aren't going to do anything but waste time, and in any case clean truncation of the output seems like a better behavior than randomly dropping blocks in the middle.

Also remove dubious (and misleadingly documented) attempt to force our way out of COPY_OUT state if libpq didn't do that. If we did have a situation like that, it'd be a bug in libpq and would be better fixed there, IMO. We can hope that commit fa4440f5 took care of any such problems, anyway.

Also fix longstanding bug in handleCopyIn(): PQputCopyEnd() only supports a non-null errormsg parameter in protocol version 3, and will actively fail if one is passed in version 2. This would've made our attempts to get out of COPY_IN state after a failure into infinite loops when talking to pre-7.4 servers.

Back-patch the COPY_OUT state change business back to 9.2 where it was introduced, and the other two fixes into all supported branches.
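A minimal sketch of the write-once-failed behavior, with a simplified copy_out() shape (not psql's actual code):

```c
#include <stdbool.h>
#include <stdio.h>
#include <libpq-fe.h>

static bool
copy_out(PGconn *conn, FILE *dst)
{
    char   *buf;
    int     len;
    bool    write_failed = false;

    while ((len = PQgetCopyData(conn, &buf, 0)) > 0)
    {
        /* after the first failure, keep draining but stop writing */
        if (!write_failed && fwrite(buf, 1, len, dst) != (size_t) len)
            write_failed = true;
        PQfreemem(buf);
    }
    return !write_failed && len == -1;  /* -1 = normal end of COPY */
}
```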
-
Alvaro Herrera authored
Previously we were piggybacking on transaction ID parameters to freeze multixacts; but since there isn't necessarily any relationship between rates of Xid and multixact consumption, this turns out not to be a good idea.

Therefore, we now have multixact-specific freezing parameters:

- vacuum_multixact_freeze_min_age: when to remove multis as we come across them in vacuum (defaults to 5 million, i.e. early in comparison to Xid's default of 50 million)

- vacuum_multixact_freeze_table_age: when to force whole-table scans instead of scanning only the pages marked as not all visible in the visibility map (defaults to 150 million, same as for Xids). Whichever of the two thresholds is reached earlier will cause a whole-table scan.

- autovacuum_multixact_freeze_max_age: when to force emergency, uninterruptible whole-table scans (defaults to 400 million, double that for Xids). This means there shouldn't be more frequent emergency vacuuming than previously, unless multixacts are being used very rapidly.

Backpatch to 9.3, where multixacts were made to persist enough to require freezing. To avoid an ABI break in 9.3, VacuumStmt has a couple of fields in an unnatural place, and StdRdOptions is split in two so that the newly added fields can go at the end.

Patch by me, reviewed by Robert Haas, with additional input from Andres Freund and Tom Lane.
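For reference, a postgresql.conf sketch with the defaults listed above (shown only to illustrate the knobs; omitting them yields the same values):

```
vacuum_multixact_freeze_min_age = 5000000          # 5 million
vacuum_multixact_freeze_table_age = 150000000      # 150 million
autovacuum_multixact_freeze_max_age = 400000000    # 400 million
```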
-
Bruce Momjian authored
Report from Marc Mamin
-
Bruce Momjian authored
Per suggestion from Peter Eisentraut
-
Tom Lane authored
We used the length of the input string, not the de-escaped string, as the trigger for NAMEDATALEN truncation. AFAICS this would only result in sometimes printing a phony truncation warning; but it's just luck that there was no worse problem, since we were violating the API spec for truncate_identifier(). Per bug #9204 from Joshua Yanovski. This has been wrong since the Unicode-identifier support was added, so back-patch to all supported branches.
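The shape of the fix, roughly (truncate_identifier() and its signature are real backend symbols; the surrounding names are illustrative):

```c
#include <string.h>
#include <stdbool.h>

/* real backend helper, declared in scansup.h */
extern void truncate_identifier(char *ident, int len, bool warn);

static void
finish_unicode_ident(char *ident, const char *raw_input)
{
    (void) raw_input;           /* only the buggy version used this */

    /*
     * Buggy: raw_input still contains the \XXXX escape sequences, so its
     * length can exceed NAMEDATALEN even when the de-escaped identifier
     * is short, producing a phony truncation warning:
     *
     *     truncate_identifier(ident, strlen(raw_input), true);
     */

    /* Fixed: judge truncation by the identifier actually produced. */
    truncate_identifier(ident, (int) strlen(ident), true);
}
```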
-
Bruce Momjian authored
-
Bruce Momjian authored
-
Tom Lane authored
We have a practice of providing a "bread crumb" trail between the minor versions where the migration section actually tells you to do something. Historically that was just plain text, eg, "see the release notes for 9.2.4"; but if you're using a browser or PDF reader, it's a lot nicer if it's a live hyperlink. So use "<xref>" instead. Any argument against doing this vanished with the recent decommissioning of plain-text release notes. Vik Fearing
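For example, a bread-crumb sentence changes from literal text to a live link roughly like this (the linkend shown is a guess at the section-id convention):

```sgml
<!-- before: dead text -->
see the release notes for 9.2.4

<!-- after: rendered as a hyperlink in HTML and PDF output -->
see <xref linkend="release-9-2-4">
```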
-
- 12 Feb, 2014 13 commits
-
Tom Lane authored
Per Peter Eisentraut.
-
Tom Lane authored
In pqSendSome, if the connection is already closed at entry, discard any queued output data before returning. There is no possibility of ever sending the data, and anyway this corresponds to what we'd do if we'd detected a hard error while trying to send(). This avoids possible indefinite bloat of the output buffer if the application keeps trying to send data (or even just keeps trying to do PQputCopyEnd, as psql indeed will).

Because PQputCopyEnd won't transition out of PGASYNC_COPY_IN state until it's successfully queued the COPY END message, and pqPutMsgEnd doesn't distinguish a queuing failure from a pqSendSome failure, this omission allowed an infinite loop in psql if the connection closure occurred when we had at least 8K queued to send. It might be worth refactoring so that we can make that distinction, but for the moment the other changes made here seem to offer adequate defenses.

To guard against other variants of this scenario, do not allow PQgetResult to return a PGRES_COPY_XXX result if the connection is already known dead. Make sure it returns PGRES_FATAL_ERROR instead.

Per report from Stephen Frost. Back-patch to all active branches.
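A toy sketch of the buffer-discard defense, using a stand-in struct rather than libpq's real PGconn internals:

```c
#include <stddef.h>

typedef struct Conn
{
    int     sock;       /* -1 once the connection is known dead */
    size_t  out_count;  /* bytes queued but not yet sent */
} Conn;

static int
send_some(Conn *conn)
{
    if (conn->sock < 0)
    {
        /*
         * Already closed: this data can never be sent, so drop it rather
         * than letting the buffer bloat while the application (e.g. psql
         * retrying PQputCopyEnd) keeps queuing more.
         */
        conn->out_count = 0;
        return -1;
    }
    /* ... normal send() loop would go here ... */
    return 0;
}
```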
-
Bruce Momjian authored
Backbranch release note changes cause merge conflicts.
-
Bruce Momjian authored
This simplifies the docs and makes it easier to cut/paste command lines.
-
Bruce Momjian authored
Report from Jeff Janes
-
Bruce Momjian authored
Report from Marti Raudsepp
-
Tom Lane authored
In a database that's not yet reached consistency, it's possible that some segments of a relation are not full-size but are not the last ones either. Because of the way smgrnblocks() works, asking for a new page with P_NEW will fill in the last not-full-size segment --- and if that makes it full size, the apparent EOF of the relation will increase by more than one page, so that the next P_NEW request will yield a page past the next consecutive one. This breaks the relation-extension logic in XLogReadBufferExtended, possibly allowing a page update to be applied to some page far past where it was intended to go.

This appears to be the explanation for reports of table bloat on replication slaves compared to their masters, and probably explains some corrupted-slave reports as well.

Fix the loop to check the page number it actually got, rather than merely Assert()'ing that dead reckoning got it to the desired place. AFAICT, there are no other places that make assumptions about exactly which page they'll get from P_NEW.

Problem identified by Greg Stark, though this is not the same as his proposed patch. It's been like this for a long time, so back-patch to all supported branches.
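A simplified sketch of the fixed extension loop (ReadBufferWithoutRelcache and BufferGetBlockNumber are the real backend calls; backend headers assumed, error handling elided):

```c
/* keep extending with P_NEW until we actually land on blkno */
static Buffer
extend_to_block(RelFileNode rnode, ForkNumber forknum,
                BlockNumber blkno, ReadBufferMode mode)
{
    Buffer      buffer = InvalidBuffer;

    do
    {
        if (buffer != InvalidBuffer)
            ReleaseBuffer(buffer);  /* not the page we wanted yet */
        buffer = ReadBufferWithoutRelcache(rnode, forknum, P_NEW,
                                           mode, NULL);
    }
    while (BufferGetBlockNumber(buffer) < blkno);

    return buffer;
}
```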
-
Magnus Hagander authored
Noted by the buildfarm and Andres Freund
-
Magnus Hagander authored
If an error occurs in the foreground (backup) process of pg_basebackup, and we exit in a controlled way, the background process (streaming xlog process) would stay around and keep streaming.
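The shape of the fix, approximately (pg_basebackup tracks the child's pid in a variable like bgchild; exact names may differ):

```c
#include <signal.h>
#include <stdlib.h>
#include <sys/types.h>

static pid_t bgchild = -1;      /* pid of the WAL-streaming child */

static void
disconnect_and_exit(int code)
{
#ifndef WIN32
    if (bgchild > 0)
        kill(bgchild, SIGTERM); /* take the streamer down with us */
#endif
    exit(code);
}
```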
-
Tom Lane authored
This is evidently the default on buildfarm member narwhal, but that is a pretty ancient Mingw version, and there is reason to think that more recent versions of GNU ld have this feature turned on by default. Since we are trying to achieve consistency of link behavior across all Windows toolchains, let's just make sure here.
-
Tom Lane authored
This is expected to make it start failing when contrib modules reference non-PGDLLIMPORT'ed global variables, as the other Windows build methods do. Aside from the value of consistency, the underlying implementation of this switch is pretty ugly and not really something we want to rely on if we have to use PGDLLIMPORT anyway for MSVC.
-
Bruce Momjian authored
Backpatch to 9.3. Report from MauMau.
-
- 11 Feb, 2014 6 commits
-
Tom Lane authored
This should make the MSVC build act more like builds for other platforms, i.e. backend global variables will be automatically available to loadable libraries without need for explicit PGDLLIMPORT marking. Craig Ringer
-
Tom Lane authored
Even if this is needed, it'd be configure's responsibility to set it.
-
Tom Lane authored
We are almost completely out of the dlltool game, if this works. Hiroshi Inoue
-
Tom Lane authored
Get rid of use of dlltool for linking the main postgres executable. dlltool is obsolete and we'd prefer to stop depending on it.

Also, include $(LDAP_LIBS_FE) in $(libpq_pgport). (It's not clear that this is really needed, or why it's not a linker bug if it is needed. But reports are that it's needed on current Cygwin.)

We might want to back-patch this if it works, but first let's see what the buildfarm thinks.

Marco Atzeri
-
Peter Eisentraut authored
This results in spurious empty lines in the server log. Instead, add the newlines only when printing out the --echo output. In some cases, this was already done, leading to two newlines being printed. Clean that up as well. From: Fabrízio de Royes Mello <fabriziomello@gmail.com>
-
Tom Lane authored
Providing this information as plain text was doubtless worth the trouble ten years ago, but it seems likely that hardly anyone reads it in this format anymore. And the effort required to maintain these files (in the form of extra-complex markup rules in the relevant parts of the SGML documentation) is significant. So, let's stop doing that and rely solely on the other documentation formats.

Per discussion, the plain-text INSTALL instructions might still be worth their keep, so we continue to generate that file.

Rather than remove HISTORY and src/test/regress/README from distribution tarballs entirely, replace them with simple stub files that tell the reader where to find the relevant documentation. This is mainly to avoid possibly breaking packaging recipes that expect these files to exist.

Back-patch to all supported branches, because simplifying the markup requirements for release notes won't help much unless we do it in all branches.
-
- 10 Feb, 2014 2 commits
-
Heikki Linnakangas authored
WakeupWaiters() is supposed to wake up all LW_WAIT_UNTIL_FREE waiters of the slot, but the loop incorrectly also woke up the first LW_EXCLUSIVE waiter if there were no LW_WAIT_UNTIL_FREE waiters in the queue. Noted by Andres Freund. This code is new in 9.4, so no backpatching.
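A sketch of the corrected wait-queue walk (field names follow the 9.4 PGPROC conventions; simplified, backend headers assumed):

```c
/* wake only the LW_WAIT_UNTIL_FREE waiters at the head of the queue */
static void
wakeup_waiters(PGPROC *head)
{
    PGPROC *waiter = head;

    while (waiter != NULL)
    {
        PGPROC *next = waiter->lwWaitLink;

        if (waiter->lwWaitMode != LW_WAIT_UNTIL_FREE)
            break;              /* exclusive waiter: leave it asleep */
        waiter->lwWaiting = false;
        PGSemaphoreUnlock(&waiter->sem);
        waiter = next;
    }
}
```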
-
Heikki Linnakangas authored
In commit d2495f27, I fixed this bug in to_tsquery(), but missed the fact that plainto_tsquery() has the same bug.
-
- 09 Feb, 2014 6 commits
-
Stephen Frost authored
Make ftello error-checking consistent across all calls and remove a bit of ftello-related code which has been #if 0'd out since 2001. Note that we are not concerned with the ftello() call under snprintf() failing, as it is just building a string to call exit_horribly() with; printing -1 in such a case is fine.
-
Stephen Frost authored
Rather than reset errno (or just hope that it's cleared already), check just the result of ftello() for < 0 to determine if there was an issue. Oversight by me, pointed out by Tom.
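The pattern now used throughout, roughly (exit_horribly() is pg_dump's real error-exit helper; the wrapper and its names are illustrative):

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* real pg_dump helper; declared here only so the sketch reads standalone */
extern void exit_horribly(const char *modulename, const char *fmt, ...);

static off_t
current_position(FILE *fp)
{
    off_t   pos = ftello(fp);

    if (pos < 0)                /* check the return value, not errno */
        exit_horribly(NULL, "could not determine seek position: %s\n",
                      strerror(errno));
    return pos;
}
```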
-
Magnus Hagander authored
This prevents pg_basebackup from generating excessive output when dumping large clusters. The status is now updated once per second, still making it possible to see that there is progress happening, but limiting the total bandwidth. Mika Eloranta, reviewed by Sawada Masahiko and Oskari Saarenmaa
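A minimal sketch of the once-per-second throttle (the patch's actual variable names may differ):

```c
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static time_t last_progress_report = 0;

static void
progress_report(long done_kb, long total_kb, bool force)
{
    time_t  now = time(NULL);

    if (!force && now == last_progress_report)
        return;                 /* already reported this second */
    last_progress_report = now;
    fprintf(stderr, "%ld/%ld kB\r", done_kb, total_kb);
}
```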
-
Magnus Hagander authored
When using verbose mode for pg_basebackup, in tar format sent to stdout, we'd print an uninitialized buffer as the filename. Reported by Pontus Lundkvist
-
Stephen Frost authored
Improve pg_dump by checking the results of various fgetc() calls which previously were unchecked; ditto for ftello(). Also clean up a couple of very minor memory leaks by waiting to allocate structures until after the initial check(s). Issues spotted by Coverity.
-
Peter Eisentraut authored
Detected by clang's -Wmissing-variable-declarations. From: Andres Freund <andres@anarazel.de>
-
- 07 Feb, 2014 2 commits
-
Heikki Linnakangas authored
The shimTriConsistentFn, which calls the opclass's consistent function with all combinations of TRUE/FALSE for any MAYBE argument, modifies the entryRes array passed by the caller. Change startScanKey to re-initialize it between each call to accommodate that.

It's actually a bad habit by shimTriConsistentFn to modify its argument. But the only caller that doesn't already re-initialize the entryRes array was startScanKey, and it's easy for startScanKey to do so. Add a comment to shimTriConsistentFn about that.

Note: this does not give a free pass to opclass-provided consistent functions to modify the entryRes argument; shimTriConsistentFn assumes that they don't, even though it does so itself.

While at it, refactor startScanKey to allocate the requiredEntries and additionalEntries after it knows exactly how large they need to be. Saves a little bit of memory, and looks nicer anyway.

Per complaint by Tom Lane, buildfarm and the pg_trgm regression test.
-
Heikki Linnakangas authored
If you have a GIN query like "rare & frequent", we currently fetch all the items that match either rare or frequent, call the consistent function for each item, and let the consistent function filter out items that only match one of the terms. However, if we can deduce that "rare" must be present for the overall qual to be true, we can scan all the rare items, and for each rare item, skip over to the next frequent item with the same or greater TID. That greatly speeds up "rare & frequent" type queries.

To implement that, introduce the concept of a tri-state consistent function, where the 3rd value is MAYBE, indicating that we don't know if that term is present. Operator classes only provide a boolean consistent function, so we simulate the tri-state consistent function by calling the boolean function several times, with the MAYBE arguments set to all combinations of TRUE and FALSE. Testing all combinations is only feasible for a small number of MAYBE arguments, but it is envisioned that we'll provide a way for operator classes to provide a native tri-state consistent function, which can be much more efficient. But that is not included in this patch.

We were already using that trick for lossy pages, calling the consistent function with the lossy entry set to TRUE and FALSE. Now that we have the tri-state consistent function, use it for lossy pages too.

Alexander Korotkov, with a fair amount of refactoring by me.
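A self-contained sketch of how a boolean shim can derive a tri-state answer (the real shimTriConsistentFn lives in the GIN code; this version assumes at most 32 keys and, as the text notes, a small number of MAYBE entries):

```c
#include <stdbool.h>

typedef enum { GIN_FALSE, GIN_TRUE, GIN_MAYBE } GinTernaryValue;

static GinTernaryValue
shim_tri_consistent(bool (*consistent)(bool *check, int nkeys),
                    const GinTernaryValue *check, int nkeys)
{
    bool        boolcheck[32];
    int         maybe[32];
    int         nmaybe = 0;

    /* fix the known entries; remember which ones are MAYBE */
    for (int i = 0; i < nkeys; i++)
    {
        boolcheck[i] = (check[i] == GIN_TRUE);
        if (check[i] == GIN_MAYBE)
            maybe[nmaybe++] = i;
    }

    /* mask 0 (all MAYBEs = FALSE) is what boolcheck holds right now */
    bool        first = consistent(boolcheck, nkeys);

    /* try the remaining 2^nmaybe - 1 TRUE/FALSE assignments */
    for (unsigned mask = 1; mask < (1u << nmaybe); mask++)
    {
        for (int j = 0; j < nmaybe; j++)
            boolcheck[maybe[j]] = (mask >> j) & 1;
        if (consistent(boolcheck, nkeys) != first)
            return GIN_MAYBE;   /* answers disagree: truly unknown */
    }
    return first ? GIN_TRUE : GIN_FALSE;
}
```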
-