- 24 May, 2014 1 commit
-
-
Andres Freund authored
Initialize the padding bytes in SharedInvalidationMessage structs so that they are always defined. Otherwise the sinvaladt.c ring buffer, which is accessed by multiple processes, will cause spurious valgrind warnings about undefined memory being used. That's because valgrind remembers the undefined bytes from the last store made by the local process, not realizing that another process has written to them since, filling the previously uninitialized bytes.
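A minimal standalone sketch of the pattern described above, using a hypothetical struct in place of SharedInvalidationMessage: zeroing the whole struct before assigning its fields guarantees that compiler-inserted padding is defined by the time the message lands in the shared ring buffer.

```c
#include <string.h>

/* Hypothetical stand-in for SharedInvalidationMessage: on most ABIs the
 * compiler inserts three padding bytes between "id" and "dbId". */
typedef struct
{
    signed char  id;
    unsigned int dbId;
} ExampleInvalMessage;

static void
fill_message(ExampleInvalMessage *msg, unsigned int dbId)
{
    /* Zero the whole struct first; assigning only the named fields would
     * leave the padding bytes with whatever garbage was there before. */
    memset(msg, 0, sizeof(*msg));
    msg->id = 1;
    msg->dbId = dbId;
}
```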
-
- 23 May, 2014 2 commits
-
-
Bruce Momjian authored
Report by Tomonari Katsumata
-
Heikki Linnakangas authored
-
- 22 May, 2014 4 commits
-
-
Robert Haas authored
This is all inside a block guarded by op == DSM_OP_ATTACH, so it can never be the case that op == DSM_OP_CREATE. Reported by Coverity.
-
Fujii Masao authored
Erik Rijkers
-
Fujii Masao authored
-
Heikki Linnakangas authored
-
- 21 May, 2014 2 commits
-
-
Bruce Momjian authored
Report by Simon Riggs
-
Peter Eisentraut authored
-
- 20 May, 2014 2 commits
-
-
Bruce Momjian authored
Report by David Johnston
-
Tom Lane authored
Commit af7914c6, which introduced the EXPLAIN (TIMING) option, for some reason coded explain.c to look at planstate->instrument->need_timer rather than es->timing to decide whether to print timing info. However, the former flag might get set as a result of contrib/auto_explain wanting timing information. We certainly don't want activation of auto_explain to change user-visible statement behavior, so fix that.

Also fix an independent bug introduced in the same patch: in the code path for a never-executed node with a machine-friendly output format, if timing was selected, it would fail to print the Actual Rows and Actual Loops items.

Per bug #10404 from Tomonari Katsumata. Back-patch to 9.2 where the faulty code was introduced.
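A rough illustration of the distinction drawn here, with hypothetical stand-in types rather than the real ExplainState or Instrumentation structs: the printer keys off the option the user actually requested, not a flag that another consumer (such as an auto-explain hook) may have turned on.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct { bool timing; } ExplainOptsSketch;             /* user request */
typedef struct { bool need_timer; double total_ms; } InstrSketch;

static void
show_timing(const ExplainOptsSketch *opts, const InstrSketch *instr)
{
    /* Decide on output from the user's TIMING option, not from
     * instr->need_timer, which other callers may have enabled. */
    if (opts->timing)
        printf("actual time=%.3f ms\n", instr->total_ms);
}
```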
-
- 19 May, 2014 9 commits
-
-
Tom Lane authored
Peter Geoghegan
-
Fujii Masao authored
-
Heikki Linnakangas authored
Lowercase help statements. Use an existing message to reduce the number of strings to be translated. Euler Taveira
-
Heikki Linnakangas authored
I got the backup block numbers off-by-one in the commit that changed the way incomplete-splits are handled. I blame the comments, which said "backup block 1" and "backup block 2", even though the backup blocks are numbered starting from 0, in the macros and functions used in replay. Fix the comments and the code. Per Jeff Janes' bug report about corruption caused by torn page writes. The incorrect code is new in git master, but backpatch the comment change down to 9.0, where the numbering in the redo-side macros was changed.
-
Fujii Masao authored
Fabrízio de Royes Mello
-
Bruce Momjian authored
Report by Andrew Dunstan
-
Bruce Momjian authored
Text from David G Johnston
-
Tom Lane authored
C89 says that compound initializers may only contain constant expressions, a restriction violated by commit 89d00cbe. While we've had no actual field complaints about this, C89 is still the project standard, and it's not saving all that much code to break compatibility here. So let's adhere to the old restriction.

In passing, replace a bunch of hardwired constants "256" with sizeof(target-variable), just because the latter is more readable and less breakable. And const-ify where possible.

Back-patch to 9.3 where the nonportable code was added.

Andres Freund and Tom Lane
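A small sketch of both points, under the C89 rule that every expression in an aggregate initializer must be constant (names and values are illustrative, not taken from the commit):

```c
#include <stdio.h>
#include <string.h>

struct pair { int a; int b; };

static void
demo(const char *src, int runtime_value)
{
    char buf[256];
    /* Not valid C89: struct pair p = { runtime_value, 0 };
     * Initialize with constants, then assign at runtime instead. */
    struct pair p = { 0, 0 };

    p.a = runtime_value;

    /* Prefer sizeof(target) to repeating the hardwired length 256,
     * so the code stays correct if the buffer declaration changes. */
    strncpy(buf, src, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    printf("%d %s\n", p.a + p.b, buf);
}
```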
-
Bruce Momjian authored
Patch by Andres Freund
-
- 18 May, 2014 2 commits
-
-
Tom Lane authored
That's what I get for not fully retesting the final version of the patch. The replace_allowed cross-check needs an additional special case for bootstrapping.
-
Tom Lane authored
RelationCacheInsert() ignored the possibility that hash_search(HASH_ENTER) might find a hashtable entry already present for the same OID. However, that can in fact occur during recursive relcache load scenarios. When it did happen, we overwrote the pointer to the pre-existing Relation, causing a session-lifespan leakage of that entire structure. As far as is known, the pre-existing Relation would always have reference count zero by the time we arrive back at the outer insertion, so add code that deletes the pre-existing Relation if so. If by some chance its refcount is positive, elog a WARNING and allow the pre-existing Relation to be leaked as before.

Also, AttrDefaultFetch() was sloppy about leaking the cstring form of the pg_attrdef.adbin value it's copying into the relcache structure. This is only a query-lifespan leakage, and normally not very significant, but it adds up during CLOBBER_CACHE testing.

These bugs are of very ancient vintage, but I'll refrain from back-patching since there's no evidence that these leaks amount to anything in ordinary usage.
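A compressed sketch of the insertion pattern described in the first paragraph, with hypothetical types and a plain boolean standing in for the "found" flag that hash_search() reports: check whether the entry already existed instead of blindly overwriting its payload, and reclaim the old payload only when nothing still references it.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct RelDescSketch { int refcount; } RelDescSketch;
typedef struct CacheEntrySketch
{
    unsigned int   relid;
    RelDescSketch *reldesc;     /* payload; NULL if the slot was just created */
} CacheEntrySketch;

static void
cache_insert(CacheEntrySketch *entry, RelDescSketch *newdesc)
{
    bool found = (entry->reldesc != NULL);

    if (found)
    {
        if (entry->reldesc->refcount == 0)
            free(entry->reldesc);       /* reclaim instead of leaking */
        else
            fprintf(stderr, "WARNING: leaking still-referenced entry\n");
    }
    entry->reldesc = newdesc;
}
```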
-
- 17 May, 2014 4 commits
-
-
Tom Lane authored
The fallback implementation involves acquiring and releasing a spinlock variable that is otherwise unreferenced --- not even to the extent of initializing it. This accidentally fails to fail on platforms where spinlocks should be initialized to zeroes, but elsewhere it results in a "stuck spinlock" failure during startup. I griped about this last July, and put in a hack that worked for gcc on HPPA, but didn't get around to fixing the general case. Per the discussion back then, the best thing to do seems to be to initialize dummy_spinlock in main.c.
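As a rough analogue of the fix described above, using POSIX spinlocks rather than PostgreSQL's s_lock layer: a lock that exists only to back a fallback path still needs one explicit initialization early in startup, since relying on zeroed static storage being a valid "unlocked" state is not portable.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t dummy_lock;   /* analogue of dummy_spinlock */

int
main(void)
{
    /* Initialize the otherwise-unreferenced lock up front, in main(). */
    if (pthread_spin_init(&dummy_lock, PTHREAD_PROCESS_PRIVATE) != 0)
    {
        perror("pthread_spin_init");
        return 1;
    }
    return 0;
}
```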
-
Tom Lane authored
Per testing with a compiler that whines about this.
-
Tom Lane authored
The xl_heap_header_len structures in an XLOG_HEAP_UPDATE record aren't necessarily aligned adequately. The regular replay function for these records is aware of that, but decode.c didn't get the memo. I'm not sure why the buildfarm failed to catch this; the test_decoding test certainly blows up real good on my old HPPA box. Also, I'm pretty sure that the address arithmetic was wrong for the case of XLOG_HEAP_CONTAINS_OLD and not XLOG_HEAP_CONTAINS_NEW_TUPLE, though this apparently can't happen when logical decoding is active.
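A minimal sketch of the alignment issue, with a hypothetical header type in place of xl_heap_header_len: data sitting at an arbitrary offset inside a WAL record cannot safely be dereferenced through a struct pointer on strict-alignment machines, so it is copied into a properly aligned local variable first.

```c
#include <stdint.h>
#include <string.h>

typedef struct
{
    uint16_t t_len;
    uint16_t t_infomask;
} ExampleHeaderSketch;

static uint16_t
read_header_len(const char *record_data)
{
    ExampleHeaderSketch hdr;

    /* memcpy is safe regardless of how record_data happens to be aligned. */
    memcpy(&hdr, record_data, sizeof(hdr));
    return hdr.t_len;
}
```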
-
Heikki Linnakangas authored
transam/README explained how B-tree incomplete splits were tracked and fixed after recovery, as an example of handling complex actions that need multiple WAL records, but that's not how it works anymore. Explain the new paradigm.
-
- 16 May, 2014 9 commits
-
-
Tom Lane authored
Several years ago we changed chr(int) so that if the database encoding is UTF8, it would interpret its argument as a Unicode code point and expand it into the appropriate multibyte sequence. However, we weren't sufficiently careful about checking validity of the input. According to RFC3629, UTF8 disallows code points above U+10FFFF (note that the predecessor standard RFC2279 was more liberal). Also, both versions of the UTF8 spec agree that Unicode surrogate-pair codes should never appear in UTF8. Because our encoding validity checks follow RFC3629, our failure to enforce these restrictions in chr() means it could be used to produce text strings that will be rejected when the database is dumped and reloaded.

To ensure consistency with the input functions, let's actually apply pg_utf8_islegal() to the proposed output of chr().

Per discussion, this seems like too much of a behavioral change to back-patch, but it's not too late to squeeze it into 9.4.
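The two restrictions cited from RFC 3629 amount to a simple range check; this standalone sketch (not the actual pg_utf8_islegal() implementation, which validates encoded byte sequences) shows the rule applied to a raw code point before encoding:

```c
#include <stdbool.h>
#include <stdint.h>

static bool
code_point_is_encodable(uint32_t cp)
{
    if (cp > 0x10FFFF)
        return false;   /* beyond the Unicode range allowed by RFC 3629 */
    if (cp >= 0xD800 && cp <= 0xDFFF)
        return false;   /* surrogate-pair code points are never valid UTF8 */
    return true;
}
```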
-
Tom Lane authored
A couple of functions didn't bother to zero out pad bytes in datums that would ultimately go to disk. Harmless, but valgrind doesn't know that.
-
Tom Lane authored
gbt_macad_union also allocated 12-byte structs where we really need 16. Per report from Andres Freund. No back-patch since there's no current risk of a real problem.
-
Tom Lane authored
The macaddr opclass stores two macaddr structs (each of size 6) in an index column that's declared as being of type gbtreekey16, ie 16 bytes. In the original coding this led to passing a palloc'd value of size 12 to the index insertion code, so that data would be fetched past the end of the allocated value during index tuple construction. This makes valgrind unhappy. In principle it could result in a SIGSEGV, though with the current implementation of palloc there's no risk since the 12-byte request size would be rounded up to 16 bytes anyway.

To fix, add a field to struct gbtree_ninfo showing the declared size of the index datums, and use that in the palloc requests; and use palloc0 to be sure that any wasted bytes are cleanly initialized.

Per report from Andres Freund. No back-patch since there's no current risk of a real problem.
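A rough standalone sketch of the fix, using calloc in place of palloc0 and illustrative names: allocate the full declared key size and zero-fill it, so that only 12 of the 16 bytes carry data but all 16 are defined when the datum is copied into an index tuple.

```c
#include <stdlib.h>
#include <string.h>

#define MAC_LEN            6    /* bytes per macaddr */
#define DECLARED_KEYSIZE  16    /* declared size of the gbtreekey16 datum */

static unsigned char *
make_union_key(const unsigned char *lower, const unsigned char *upper)
{
    /* Zero-filled allocation of the declared size, not of the 12 bytes
     * actually used, so the trailing pad bytes are defined. */
    unsigned char *key = calloc(1, DECLARED_KEYSIZE);

    if (key != NULL)
    {
        memcpy(key, lower, MAC_LEN);
        memcpy(key + MAC_LEN, upper, MAC_LEN);
    }
    return key;
}
```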
-
Heikki Linnakangas authored
Andres Freund
-
Heikki Linnakangas authored
pg_stat_replication shows connected replication clients. The ddl test case never has any replication clients connected, so querying pg_stat_replication is pointless. To check that a slot has been dropped correctly, query pg_replication_slots instead. Andres Freund
-
Heikki Linnakangas authored
The decoding of prepared transaction commits accidentally used the XID of the transaction performing the COMMIT PREPARED, not the XID of the prepared transaction. Before bb38fb0d that led to those transactions not being decoded; afterwards, to an assertion failure.
-
Heikki Linnakangas authored
Let's complain about e.g. an invalid path or a permission problem sooner rather than later. Before this patch, we would only try to open the output file after receiving the first decoded message from the server.
-
Heikki Linnakangas authored
Commit dd428c79 added dbId and tsId to the xl_xact_commit struct but missed that prepared transaction commits reuse that struct. Fix that.

Because those fields were left uninitialized, replaying a commit prepared WAL record in a hot standby node would fail to remove the relcache init file. That can lead to "could not open file" errors on the standby. The relcache init file only needs to be removed when a system table/index is rewritten in the transaction using two-phase commit, so that should be rare in practice. In HEAD, the incorrect dbId/tsId values are also used for filtering in logical replication code, causing the transaction to always be filtered out.

Analysis and fix by Andres Freund. Backpatch to 9.0 where hot standby was introduced.
-
- 15 May, 2014 5 commits
-
-
Tom Lane authored
In yesterday's commit 2dc4f011, I tried to force buffering of stdout/stderr in initdb to be what it is by default when the program is run interactively on Unix (since that's how most manual testing is done). This tripped over the fact that Windows doesn't support _IOLBF mode. We dealt with that a long time ago in syslogger.c by falling back to unbuffered mode on Windows. Export that solution in port.h and use it in initdb. Back-patch to 8.4, like the previous commit.
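A sketch of the portability wrinkle described here (the macro name is illustrative; the commit exports its own symbol from port.h): the Windows C runtime treats _IOLBF as full buffering, so the fallback is to run unbuffered there.

```c
#include <stdio.h>

/* Illustrative macro; the real fix exports an equivalent from port.h. */
#ifdef _WIN32
#define LINE_BUFFERED_MODE _IONBF   /* Windows lacks true line buffering */
#else
#define LINE_BUFFERED_MODE _IOLBF
#endif

int
main(void)
{
    setvbuf(stdout, NULL, LINE_BUFFERED_MODE, 0);
    printf("output is now flushed predictably on both platforms\n");
    return 0;
}
```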
-
Peter Eisentraut authored
-
Heikki Linnakangas authored
Don't close stdout on SIGHUP. Also, when a SIGHUP is received, close the file immediately, rather than only after receiving some more data from the server. Rename a variable, to avoid mentally dealing with double negatives (not unsynced means synced).
-
Heikki Linnakangas authored
The proc array can contain duplicate XIDs, when a transaction is just being prepared for two-phase commit. To cope, remove any duplicates in txid_current_snapshot(). Also ignore duplicates in the input functions, so that if e.g. you have an old pg_dump file that already contains duplicates, it will be accepted. Report and fix by Jan Wieck. Backpatch to all supported versions.
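A standalone sketch of removing such duplicates (PostgreSQL's txid values are 64-bit; 32-bit XIDs are used here only to keep the example short): sort the array, then squeeze out adjacent equal values.

```c
#include <stdint.h>
#include <stdlib.h>

static int
cmp_xid(const void *a, const void *b)
{
    uint32_t xa = *(const uint32_t *) a;
    uint32_t xb = *(const uint32_t *) b;

    return (xa > xb) - (xa < xb);
}

/* Sort the XID array and drop duplicates; returns the new count. */
static int
sort_and_unique_xids(uint32_t *xids, int n)
{
    int i, keep = 0;

    if (n <= 1)
        return n;
    qsort(xids, n, sizeof(uint32_t), cmp_xid);
    for (i = 1; i < n; i++)
    {
        if (xids[i] != xids[keep])
            xids[++keep] = xids[i];
    }
    return keep + 1;
}
```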
-
Heikki Linnakangas authored
To lock a prepared transaction's shared memory entry, we used to mark it with the XID of the backend. When the XID was no longer active according to the proc array, the entry was implicitly considered as not locked anymore. However, when preparing a transaction, the backend's proc array entry was cleared before transferring the locks (and some other state) to the prepared transaction's dummy PGPROC entry, so there was a window where another backend could finish the transaction before it was in fact fully prepared.

To fix, rewrite the locking mechanism of global transaction entries. Instead of an XID, just have a simple locked-or-not flag in each entry (we store the locking backend's backend id rather than a simple boolean, but that's just for debugging purposes). The backend is responsible for explicitly unlocking the entry, and to make sure that that happens, install a callback to unlock it on abort or process exit.

Backpatch to all supported versions.
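A minimal single-process sketch of the scheme described in the second paragraph, with hypothetical types (the real code serializes these checks under a shared-memory lock): each entry records which backend holds it, and an abort/exit callback clears the field explicitly instead of the lock lapsing when an XID vanishes from the proc array.

```c
#include <stdbool.h>

#define INVALID_BACKEND_ID (-1)

typedef struct GlobalTxnEntrySketch
{
    int locking_backend;    /* holder's backend id, or INVALID_BACKEND_ID */
} GlobalTxnEntrySketch;

static bool
lock_entry(GlobalTxnEntrySketch *e, int my_backend_id)
{
    if (e->locking_backend != INVALID_BACKEND_ID)
        return false;                   /* somebody else holds it */
    e->locking_backend = my_backend_id;
    return true;
}

/* Registered as an abort/exit callback so the entry cannot stay locked
 * if the backend dies partway through preparing the transaction. */
static void
unlock_entry(GlobalTxnEntrySketch *e)
{
    e->locking_backend = INVALID_BACKEND_ID;
}
```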
-