- 07 Aug, 2012 3 commits
-
-
Bruce Momjian authored
4741e9af. This was done by adding an optional second log file parameter to exec_prog(), and closing and reopening the log file between system() calls. Backpatch to 9.2.
-
Alvaro Herrera authored
-
Simon Riggs authored
Dave Kerr
-
- 06 Aug, 2012 5 commits
-
-
Robert Haas authored
Noted by Thom Brown.
-
Robert Haas authored
Craig Ringer, edited fairly heavily by me
-
Alvaro Herrera authored
-
Magnus Hagander authored
In particular, with a controlled shutdown of the master, pg_basebackup with streaming log could terminate without an error message, even though the backup is not consistent. In passing, fix a few cases where walfile wasn't properly set to -1 after closing. Fujii Masao
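A sketch of the descriptor-hygiene part of the fix (the classic close-and-reset pattern; walfile is the receiver's WAL file descriptor per the message, surrounding code assumed):

    /* Sketch of the close-and-reset pattern the fix applies: once the
     * descriptor is closed, reset it to -1 so no later code path can
     * mistake the stale value for an open WAL file. */
    if (walfile != -1)
    {
        if (close(walfile) != 0)
            fprintf(stderr, "could not close WAL file: %s\n", strerror(errno));
        walfile = -1;           /* reset in every path that closes it */
    }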
-
Heikki Linnakangas authored
We used to convert the unicode object directly to a string in the server encoding by calling Python's PyUnicode_AsEncodedString function. In other words, we used Python's routines to do the encoding. However, that has a few problems. First of all, it required keeping a mapping table of Python encoding names and PostgreSQL encodings. But the real killer was that Python doesn't support the EUC_TW and MULE_INTERNAL encodings at all.

Instead, convert the Python unicode object to UTF-8, and use PostgreSQL's encoding conversion functions to convert from UTF-8 to the server encoding. We were already doing the same in the other direction in PLyUnicode_FromString, so this is more consistent, too.

Note: This makes SQL_ASCII behave more leniently. We used to map SQL_ASCII to Python's 'ascii', which in Python means strict 7-bit ASCII only, so you got an error if the Python string contained anything but pure ASCII. You no longer get an error; you get the UTF-8 representation of the string instead.

Backpatch to 9.0, where these conversions were introduced.

Jan Urbański
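A condensed backend sketch of the new conversion path (simplified from the committed code; error handling and Python reference counting trimmed):

    /* Sketch: Python performs only the Unicode -> UTF-8 step; PostgreSQL's
     * own routines handle UTF-8 -> server encoding, so encodings Python
     * lacks (EUC_TW, MULE_INTERNAL) now work too. */
    static char *
    unicode_to_server_sketch(PyObject *unicode)
    {
        PyObject   *bytes;
        char       *utf8string;

        bytes = PyUnicode_AsUTF8String(unicode);
        if (bytes == NULL)
            elog(ERROR, "could not convert Python Unicode object to UTF-8");
        utf8string = PyBytes_AsString(bytes);

        /* Returns a palloc'd copy when a conversion is actually performed;
         * the real code must also release the refcount on "bytes". */
        return (char *) pg_do_encoding_conversion((unsigned char *) utf8string,
                                                  strlen(utf8string),
                                                  PG_UTF8,
                                                  GetDatabaseEncoding());
    }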
-
- 04 Aug, 2012 2 commits
-
-
Bruce Momjian authored
instructions to perltidy Perl files that lack Perl file extensions. pgindent Perl coding by Andrew Dunstan, restructured by me.
-
Bruce Momjian authored
Backpatch to 9.1 and 9.2.
-
- 03 Aug, 2012 7 commits
-
-
Tom Lane authored
DecodeInterval() failed to honor the "range" parameter (the special SQL syntax for indicating which fields appear in the literal string) if the time was signed. This seems inappropriate, so make it work like the not-signed case. The inconsistency was introduced in my commit f867339c, which as noted in its log message was only really focused on making SQL-compliant literals work per spec. Including a sign here is not per spec, but if we're going to allow it then it's reasonable to expect it to work like the not-signed case.

Also, remove bogus setting of tmask, which caused subsequent processing to think that what had been given was a timezone and not an hh:mm(:ss) field, thus confusing checks for redundant fields. This seems to be an aboriginal mistake in Lockhart's commit 2cf16424.

Add regression test cases to illustrate the changed behaviors.

Back-patch as far as 8.4, where support for spec-compliant interval literals was added.

Range problem reported and diagnosed by Amit Kapila, tmask problem by me.
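A hedged libpq illustration of the corrected behavior (connection string and expected output are assumptions, not taken from the commit): with an explicit field range, a signed literal should now be read against that range rather than falling back to the default hh:mm interpretation.

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=postgres");
        PGresult *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        /* With the range honored, the signed value is minutes:seconds,
         * so we expect something like -00:12:34 rather than -12:34:00. */
        res = PQexec(conn, "SELECT INTERVAL '-12:34' MINUTE TO SECOND");
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            printf("%s\n", PQgetvalue(res, 0, 0));
        else
            fprintf(stderr, "%s", PQerrorMessage(conn));

        PQclear(res);
        PQfinish(conn);
        return 0;
    }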
-
Bruce Momjian authored
Backpatch to 9.2. Erik Rijkers
-
Tom Lane authored
As noted by Noah Misch, btree_xlog_delete_get_latestRemovedXid is critically dependent on the assumption that it's examining a consistent state of the database. This was undocumented though, so the seemingly-unrelated check for no active HS sessions might be thought to be merely an optional optimization. Improve comments, and add an explicit check of reachedConsistency just to be sure.

This function returns InvalidTransactionId (thereby killing all HS transactions) in several cases that are not nearly unlikely enough for my taste. This commit doesn't attempt to fix those deficiencies, just document them.

Back-patch to 9.2, not from any real functional need but just to keep the branches more closely synced to simplify possible future back-patching.
-
Tom Lane authored
In yesterday's commit 962e0cc7, I added the ResolveRecoveryConflictWithSnapshot call in the wrong place. I correctly put it before spgRedoVacuumRedirect itself would modify the index page --- but not before RestoreBkpBlocks, so replay of a record with a full-page image would modify the page before kicking off any conflicting HS transactions. Oops.
-
Bruce Momjian authored
Backpatch to 9.2.
-
Bruce Momjian authored
returned, per report from Aleksey Tsalolikhin. Backpatch to 9.2 and 9.1.
-
Bruce Momjian authored
newline-terminated messages, per suggestion from Tom. Backpatch to 9.2.
-
- 02 Aug, 2012 3 commits
-
-
Tom Lane authored
The correct test for whether a redirection tuple is removable is whether the tuple's xid < RecentGlobalXmin, not OldestXmin; the previous coding failed to protect index searches being done in concurrent transactions that have no XID. This mirrors the recent fix in btree's page recycling logic made in commit d3abbbeb.

Also, WAL-log the newest XID of any removed redirection tuple on an index page, and apply ResolveRecoveryConflictWithSnapshot during InHotStandby WAL replay. This protects against concurrent Hot Standby transactions possibly needing to see the redirection tuple(s).

Per my query of 2012-03-12 and subsequent discussion.
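In sketch form (field and variable names follow backend conventions but the surrounding code is assumed), the corrected removability test is an xmin-horizon comparison:

    /* Sketch: a redirection tuple can be recycled only when its XID
     * precedes RecentGlobalXmin, i.e. when no running transaction --
     * including read-only ones that never acquire an XID -- could still
     * follow the redirect.  Comparing against OldestXmin missed those. */
    if (TransactionIdPrecedes(dt->xid, RecentGlobalXmin))
    {
        /* safe to reclaim the redirection tuple */
    }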
-
Tom Lane authored
-
Tom Lane authored
After taking awhile to digest the row-processor feature that was added to libpq in commit 92785dac, we've concluded it is over-complicated and too hard to use. Leave the core infrastructure changes in place (that is, there's still a row processor function inside libpq), but remove the exposed API pieces, and instead provide a "single row" mode switch that causes PQgetResult to return one row at a time in separate PGresult objects.

This approach incurs more overhead than proper use of a row processor callback would, since construction of a PGresult per row adds extra cycles. However, it is far easier to use and harder to break. The single-row mode still affords applications the primary benefit that the row processor API was meant to provide, namely not having to accumulate large result sets in memory before processing them. Preliminary testing suggests that we can probably buy back most of the extra cycles by micro-optimizing construction of the extra results, but that task will be left for another day.

Marko Kreen
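A minimal, self-contained sketch of the new mode (connection string and query are placeholders):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=postgres");
        PGresult *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        if (!PQsendQuery(conn, "SELECT generate_series(1, 1000000)"))
        {
            fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
            return 1;
        }

        /* Must be called after PQsendQuery and before the first
         * PQgetResult. */
        if (!PQsetSingleRowMode(conn))
            fprintf(stderr, "could not enter single-row mode\n");

        /* Each row arrives as its own PGresult with status
         * PGRES_SINGLE_TUPLE; the final, zero-row PGresult carries
         * PGRES_TUPLES_OK.  No large result set accumulates in memory. */
        while ((res = PQgetResult(conn)) != NULL)
        {
            if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
                printf("%s\n", PQgetvalue(res, 0, 0));
            PQclear(res);
        }

        PQfinish(conn);
        return 0;
    }

An error can still arrive mid-stream, so production code should also check each result for PGRES_FATAL_ERROR.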
-
- 01 Aug, 2012 1 commit
-
-
Tom Lane authored
Thom Brown
-
- 31 Jul, 2012 3 commits
-
-
Tom Lane authored
Parse analysis neglected to cover the case of a WITH clause attached to an intermediate-level set operation; it only handled WITH at the top level or WITH attached to a leaf-level SELECT. Per report from Adam Mackler.

In HEAD, I rearranged the order of SelectStmt's fields to put withClause with the other fields that can appear on non-leaf SelectStmts. In back branches, leave it alone to avoid a possible ABI break for third-party code.

Back-patch to 8.4 where WITH support was added.
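A hedged illustration of the query shape that was previously mishandled (fragment; "conn" and error handling are assumed, identifiers made up):

    /* A WITH attached below the top level of a set operation; parse
     * analysis previously missed this shape. */
    PGresult *res = PQexec(conn,
        "WITH outermost AS ("
        "  SELECT 1 AS x"
        "  UNION (WITH innermost AS (SELECT 2 AS x)"
        "         SELECT x FROM innermost)"
        ") SELECT x FROM outermost");
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "%s", PQerrorMessage(conn));
    PQclear(res);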
-
Tom Lane authored
In the original coding of the log rotation stuff, we did not bother to make the truncation logic work for the very first rotation after postmaster start (or after a syslogger crash and restart). It just always appended in that case. It did not seem terribly important at the time, but we've recently had two separate complaints from people who expected it to work unsurprisingly. (Both users tend to restart the postmaster about as often as a log rotation is configured to happen, which is maybe not typical use, but still...)

Since the initial log file is opened in the postmaster, fixing this requires passing down some more state to the syslogger child process.

It's always been like this, so back-patch to all supported branches.
-
Alvaro Herrera authored
The most user-visible part of this is to change the long options --statusint and --noloop to --status-interval and --no-loop, respectively, per discussion.

Also, consistently enclose file names in double quotes, per our conventions; and consistently use the term "transaction log file" to talk about WAL segments. (Someday we may need to go over this terminology and make it consistent across the whole source code.)

Finally, reflow the code to better fit in 80 columns, and have pgindent fix it up some more.
-
- 30 Jul, 2012 1 commit
-
-
Bruce Momjian authored
website, revert the separate link to the download git repository. Backpatch from 9.0 to current.
-
- 27 Jul, 2012 2 commits
-
-
Tom Lane authored
This function suppressed any stderr output from the called program, which is unnecessary in the normal case and unhelpful in error cases. It also gave a rather opaque message along the lines of "fgets failure: Success" in case the called program failed to return anything on stdout. Since we've seen multiple reports of people not understanding what's wrong when pg_ctl reports this, improve the message. Back-patch to all active branches.
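A hedged sketch of the improved reporting (identifiers assumed, not the committed code): distinguish "the program produced no output" from a genuine read error, instead of passing a successful errno to the message.

    /* Sketch: avoid the misleading "fgets failure: Success" by checking
     * whether the stream actually hit an error or simply returned no data. */
    char line[1024];

    if (fgets(line, sizeof(line), pipe_stream) == NULL)
    {
        if (ferror(pipe_stream))
            fprintf(stderr, "could not read from \"%s\": %s\n",
                    cmd, strerror(errno));
        else
            fprintf(stderr, "\"%s\" produced no output\n", cmd);
    }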
-
Bruce Momjian authored
for description. Patch to 9.0 and later, where script is mentioned.
-
- 26 Jul, 2012 5 commits
-
-
Bruce Momjian authored
files, like postmaster.pid. Backpatch to 9.2.
-
Tom Lane authored
In the original coding of the autovacuum cancel feature, commit acac68b2, an autovacuum process was considered a target for cancellation if it was found to hard-block any process examined in the deadlock search. This patch tightens the test so that the autovacuum must directly hard-block the current process. This should make the behavior more predictable in general, and in particular it ensures that an autovacuum will not be canceled with less than deadlock_timeout grace period. In the old coding, it was possible for an autovacuum to be canceled almost instantly, given unfortunate timing of two or more other processes' lock attempts.

This also justifies the logging methodology in the recent commit d7318d43; without this restriction, that patch isn't providing enough information to see the connection of the canceling process to the autovacuum. Like that one, patch all the way back.
-
Robert Haas authored
Jeff Janes
-
Robert Haas authored
The old message was at DEBUG2, so typically it didn't show up in the log at all. As a result, in most cases where autovacuum was canceled, the only information that was logged was the table being vacuumed, with no indication as to what problem caused the cancel. Crank up the level to LOG and add some more details to assist with debugging. Back-patch all the way, per discussion on pgsql-hackers.
-
Bruce Momjian authored
Backpatch to 9.2.
-
- 25 Jul, 2012 3 commits
-
-
Tom Lane authored
If a crash occurred immediately after the first nextval() call for a serial column, WAL replay would restore the sequence to a state in which it appeared that no nextval() had been done, thus allowing the first sequence value to be returned again by the next nextval() call; as reported in bug #6748 from Xiangming Mei.

More generally, the problem would occur if an ALTER SEQUENCE was executed on a freshly created or reset sequence. (The manifestation with serial columns was introduced in 8.2 when we added an ALTER SEQUENCE OWNED BY step to serial column creation.) The cause is that sequence creation attempted to save one WAL entry by writing out a WAL record that made it appear that the first nextval() had already happened (viz, with is_called = true), while marking the sequence's in-database state with log_cnt = 1 to show that the first nextval() need not emit a WAL record. However, ALTER SEQUENCE would emit a new WAL entry reflecting the actual in-database state (with is_called = false). Then, nextval would allocate the first sequence value and set is_called = true, but it would trust the log_cnt value and not emit any WAL record. A crash at this point would thus restore the sequence to its post-ALTER state, causing the next nextval() call to return the first sequence value again.

To fix, get rid of the idea of logging an is_called status different from reality. This means that the first nextval-driven WAL record will happen at the first nextval call not the second, but the marginal cost of that is pretty negligible. In addition, make sure that ALTER SEQUENCE resets log_cnt to zero in any case where it touches sequence parameters that affect future nextval results. This will result in some user-visible changes in the contents of a sequence's log_cnt column, as reflected in the patch's regression test changes; but no application should be depending on that anyway, since it was already true that log_cnt changes rather unpredictably depending on checkpoint timing.

In addition, make some basically-cosmetic improvements to get rid of sequence.c's undesirable intimacy with page layout details. It was always really trying to WAL-log the contents of the sequence tuple, so we should have it do that directly using a HeapTuple's t_data and t_len, rather than backing into it with some magic assumptions about where the tuple would be on the sequence's page.

Back-patch to all supported branches.
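A hedged probe of the user-visible log_cnt side effect (fragment; "conn" is assumed and the sequence name is made up):

    /* After this fix, ALTER SEQUENCE resets log_cnt whenever it changes
     * parameters that affect future nextval() results. */
    PQclear(PQexec(conn, "CREATE SEQUENCE s"));
    PQclear(PQexec(conn, "SELECT nextval('s')"));
    PQclear(PQexec(conn, "ALTER SEQUENCE s RESTART WITH 100"));

    /* Pre-v10 sequences are readable as single-row relations. */
    PGresult *res = PQexec(conn, "SELECT log_cnt FROM s");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("log_cnt = %s\n", PQgetvalue(res, 0, 0)); /* expect 0 here */
    PQclear(res);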
-
Peter Eisentraut authored
-
Alvaro Herrera authored
-
- 24 Jul, 2012 1 commit
-
-
Alvaro Herrera authored
The initially implemented syntax, "CHECK NO INHERIT (expr)", was not deemed very good, so switch to "CHECK (expr) NO INHERIT" instead. This way it looks similar to an SQL-standard-compliant constraint attribute.

Backport to 9.2, where the new syntax and feature were introduced. Per discussion.
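A hedged example of the revised spelling (fragment; "conn" is assumed, table and column names made up):

    /* The NO INHERIT attribute now trails the CHECK expression, like a
     * standard constraint attribute. */
    PGresult *res = PQexec(conn,
        "CREATE TABLE parent ("
        "  a int CHECK (a > 0) NO INHERIT"  /* formerly: CHECK NO INHERIT (a > 0) */
        ")");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "%s", PQerrorMessage(conn));
    PQclear(res);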
-
- 23 Jul, 2012 2 commits
-
-
Peter Eisentraut authored
This is just a section renumbering for now. Some details might be filled in later.
-
Robert Haas authored
This is apparently faster than doing things the other way around when the scale factor is large. Along the way, adjust -n to suppress vacuuming during initialization as well as during test runs. Jeff Janes, with some small changes by me.
-
- 22 Jul, 2012 2 commits
-
-
Tom Lane authored
We should avoid calling sync_file_range or posix_fadvise in this case, since (a) we don't really care if the data gets synced, and might as well save the kernel calls; (b) at least on Linux we know that the kernel might block us until it's scheduled the write. Also, avoid making a useless second traversal of the directory tree if we're not actually going to call fsync(2) after all.
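A hypothetical sketch of the short-circuit (the flag and function names are assumptions, not the committed identifiers): skip the pre-sync advice entirely when no fsync will follow.

    /* Sketch: when syncing is disabled, do nothing -- this saves the
     * kernel call, and on Linux avoids being blocked until the kernel
     * has scheduled the write. */
    static void
    pre_sync_fname_sketch(const char *fname)
    {
        int         fd;

        if (!do_sync)           /* assumed flag: fsync disabled */
            return;

        fd = open(fname, O_RDONLY | PG_BINARY);
        if (fd < 0)
            return;
    #if defined(HAVE_SYNC_FILE_RANGE)
        (void) sync_file_range(fd, 0, 0, SYNC_FILE_RANGE_WRITE);
    #elif defined(USE_POSIX_FADVISE) && defined(POSIX_FADV_DONTNEED)
        (void) posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    #endif
        close(fd);
    }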