- 20 Apr, 2015 2 commits
-
-
Bruce Momjian authored
Previously, tables created by CREATE LIKE never had OIDs. Report by Tom Lane
-
Peter Eisentraut authored
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
-
- 18 Apr, 2015 1 commit
-
-
Bruce Momjian authored
Was broken by commit 30982be4. Patch by Jeff Janes
-
- 17 Apr, 2015 2 commits
-
-
Stephen Frost authored
The USING policies were not being checked for differences, as the same policy was being passed in to both sides of the equal(). This could result in backends not realizing that a policy had been changed, if none of the other attributes had been changed. Fix by passing the USING quals from policy1 and policy2 to equal() for comparison. No need to back-patch as this is not yet released. Noticed while testing changes to RLS proposed by Dean Rasheed.
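For illustration, a minimal sketch of the fixed comparison, written against the backend's equal() and RowSecurityPolicy types; the wrapper function is hypothetical, not the actual relcache code:

    #include "postgres.h"
    #include "nodes/nodes.h"
    #include "rewrite/rowsecurity.h"

    /*
     * Illustrative only.  The point of the fix: hand equal() the USING quals
     * from the two different policies.  The buggy pattern compared a policy's
     * qual against itself, which trivially always matches.
     */
    static bool
    policy_using_quals_match(RowSecurityPolicy *policy1, RowSecurityPolicy *policy2)
    {
        /* buggy: equal(policy1->qual, policy1->qual) -- always true */
        return equal(policy1->qual, policy2->qual);
    }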
-
Alvaro Herrera authored
This allows an MSVC build to run regression tests related to modules in src/test/modules. Author: Michael Paquier Reviewed by: Andrew Dunstan
-
- 16 Apr, 2015 5 commits
-
-
Bruce Momjian authored
Report by CJ Estel. Backpatch through 9.4
-
Alvaro Herrera authored
These modules have to be installed so that the testing module can access them. (We don't have that yet, but will soon have it.) Author: Michael Paquier Reviewed by: Andrew Dunstan
-
Heikki Linnakangas authored
Logical decoding set SnapshotData's regd_count field to prevent the snapshot manager from prematurely freeing snapshots that are generated by the decoding system. That was always an abuse of the field, as it was never supposed to be used outside the snapshot manager. Commit 94028691 made the snapshot manager's tracking of the snapshots smarter, and that scheme fell apart. The snapshot manager got confused and hit the assertion when a snapshot that was marked with regd_count==1 was not found in the heap where the snapshot manager tracks the registered snapshots. To fix, don't abuse the regd_count field like that. Logical decoding still abuses the active_count field for similar purposes, but that's currently harmless. The assertion failure was first reported by Michael Paquier
-
Alvaro Herrera authored
commit_ts, being only a module used for test purposes, is ignored in the process for now. Author: Michael Paquier Reviewed by: Andrew Dunstan
-
Heikki Linnakangas authored
-
- 15 Apr, 2015 4 commits
-
-
Heikki Linnakangas authored
A "file not found" is expected if the source server is running, so don't complain about that. But any other error is definitely not expected.
-
Heikki Linnakangas authored
Update comments and function names to use the terms "source" and "target" consistently. Some places were calling them remote and local instead, which was confusing. Fix incorrect comment in extractPageInfo on the database creation record: it was wrong about what happens for databases created in the target that don't exist in the source.
-
Heikki Linnakangas authored
Now that the test servers are initialized twice in each .pl script, the single END block is not enough to stop them. Add a new clean_rewind_test function that is called at the end of each test. Michael Paquier
-
Heikki Linnakangas authored
After the WAL format changes, the calculation of the size of a checkpoint record became incorrect. Instead of trying to fix the math, check that the previous record, i.e. the xl_prev value that we'd write for the next record, matches the last checkpoint's redo pointer. That way it's not dependent on the size of the checkpoint record at all.

The old logic was actually slightly wrong all along: if the previous checkpoint record crossed a page boundary, the page headers threw off the record size calculation, and the checkpoint was not skipped. The new checkpoint would not cross a page boundary, so this only resulted in at most one extra checkpoint after the system became idle. The new logic fixes that. (It's not worth fixing in back-branches.)

However, it makes some sense to try to keep the latest checkpoint contained fully in a page, or at least in a single WAL segment, just on general robustness grounds. If something goes awfully wrong, it's more likely that you can recover the latest WAL segment than the last two WAL segments. So I added an extra check that the checkpoint is not skipped if the previous checkpoint crossed a WAL segment.

Reported by Jeff Janes.
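A rough sketch of the new skip test, with invented names and plain integer types; the real check operates on XLogRecPtr values inside xlog.c's checkpoint code:

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Hypothetical sketch: skip the checkpoint if nothing has been inserted
     * since the last checkpoint's redo pointer, and the previous checkpoint
     * did not cross into another WAL segment.
     */
    static bool
    checkpoint_can_be_skipped(uint64_t prev_record_ptr,  /* would-be xl_prev of the next record */
                              uint64_t last_redo_ptr,    /* last checkpoint's redo pointer */
                              uint64_t insert_ptr,       /* current insert position */
                              uint64_t wal_segment_size)
    {
        if (prev_record_ptr != last_redo_ptr)
            return false;       /* something was written since the last checkpoint */

        if (prev_record_ptr / wal_segment_size != insert_ptr / wal_segment_size)
            return false;       /* robustness check: stay within one segment */

        return true;
    }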
-
- 14 Apr, 2015 8 commits
-
-
Peter Eisentraut authored
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
-
Peter Eisentraut authored
Previously, these functions were created in a schema "binary_upgrade", which was deleted after pg_upgrade was finished. Because we don't want to keep that schema around permanently, move them to pg_catalog but rename them with a binary_upgrade_... prefix. The provided functions are only small wrappers around global variables that were added specifically for pg_upgrade use, so keeping the module separate does not create any modularity. The functions still check that they are only called in binary upgrade mode, so it is not possible to call these during normal operation. Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
-
Heikki Linnakangas authored
Eliminate the separate 'len' variable from the loops, and also use the 4-byte instruction. This shaves off a few more cycles. Even though this routine, which uses the special SSE 4.2 instructions, is much faster than a generic routine, it's still a hot spot, so let's make it as fast as possible. Change the configure test to not test _mm_crc32_u64. That variant is only available on the 64-bit x86-64 architecture, not on 32-bit x86. Modify pg_comp_crc32c_sse42 so that it only uses _mm_crc32_u64 on x86-64. With these changes, the SSE-accelerated CRC-32C implementation can also be used on 32-bit x86 systems. This also fixes the 32-bit MSVC build.
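As a minimal gcc/clang-flavored sketch of such a loop (compile with -msse4.2; this is not the actual pg_comp_crc32c_sse42 code), the 8-byte _mm_crc32_u64 path is compiled only on x86-64 and the remaining bytes go through _mm_crc32_u8:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>
    #include <nmmintrin.h>      /* SSE 4.2 intrinsics */

    static uint32_t
    crc32c_sse42(uint32_t crc, const void *data, size_t len)
    {
        const unsigned char *p = data;
        const unsigned char *end = p + len;

    #ifdef __x86_64__
        /* Eight bytes at a time, driven by pointer comparisons rather than 'len'. */
        while (end - p >= 8)
        {
            uint64_t chunk;

            memcpy(&chunk, p, 8);
            crc = (uint32_t) _mm_crc32_u64(crc, chunk);
            p += 8;
        }
    #endif

        /* Leftover bytes, and the whole buffer on 32-bit x86. */
        while (p < end)
            crc = _mm_crc32_u8(crc, *p++);

        return crc;
    }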
-
Heikki Linnakangas authored
I hope this fixes the Windows buildfarm failures.
-
Heikki Linnakangas authored
On gcc and clang, the _mm_crc32_u8 and _mm_crc32_u64 intrinsics are not defined at all when not building with -msse4.2. But on icc, they are. So we cannot assume that if those intrinsics are defined, we can always use them safely; we might still need the runtime check. To fix, check if the __SSE4_2__ preprocessor symbol is defined. That's supposed to be defined only when the compiler is targeting a processor that has SSE 4.2 support. Per buildfarm members fulmar and okapi.
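A minimal sketch of that guard (the macro names are invented for illustration): with __SSE4_2__ defined the intrinsics may be called unconditionally; otherwise a runtime CPU check is still required, even if the intrinsics happen to be declared, as they are on icc.

    #if defined(__SSE4_2__)
    #define CRC32C_USE_SSE42_DIRECTLY 1     /* compiler targets SSE 4.2 */
    #else
    #define CRC32C_NEEDS_RUNTIME_CHECK 1    /* must test the CPU before dispatching */
    #endif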
-
Alvaro Herrera authored
SLRU_SEGMENTS_PER_PAGE -> SLRU_PAGES_PER_SEGMENT I introduced this ancient typo in subtrans.c and later propagated it to multixact.c. I fixed the latter in f741300c, but only back to 9.3; backpatch to all supported branches for consistency.
-
Heikki Linnakangas authored
Modern x86 and x86-64 processors with SSE 4.2 support have special instructions, crc32b and crc32q, for calculating CRC-32C. They greatly speed up CRC calculation. Whether the instructions can be used or not depends on the compiler and the target architecture. If generation of SSE 4.2 instructions is allowed for the target (-msse4.2 flag on gcc and clang), use them. If they are not allowed by default, but the compiler supports the -msse4.2 flag to enable them, compile just the CRC-32C function with -msse4.2 flag, and check at runtime whether the processor we're running on supports it. If it doesn't, fall back to the slicing-by-8 algorithm. (With the common defaults on current operating systems, the runtime-check variant is what you get in practice.) Abhijit Menon-Sen, heavily modified by me, reviewed by Andres Freund.
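A rough sketch of the runtime-check variant using the gcc/clang <cpuid.h> helpers; the function names and the dispatch-by-function-pointer scheme here are illustrative, not the actual pg_crc32c code:

    #include <stddef.h>
    #include <stdint.h>
    #include <cpuid.h>          /* __get_cpuid(), bit_SSE4_2 */

    /* Assumed to exist elsewhere: the SSE 4.2 loop and the slicing-by-8 fallback. */
    extern uint32_t crc32c_sse42(uint32_t crc, const void *data, size_t len);
    extern uint32_t crc32c_sb8(uint32_t crc, const void *data, size_t len);

    static uint32_t (*crc32c_impl) (uint32_t, const void *, size_t);

    static void
    choose_crc32c_impl(void)
    {
        unsigned int eax, ebx, ecx = 0, edx;

        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & bit_SSE4_2))
            crc32c_impl = crc32c_sse42;     /* CPU has the crc32 instructions */
        else
            crc32c_impl = crc32c_sb8;       /* fall back to slicing-by-8 */
    }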
-
Heikki Linnakangas authored
Now that we use CRC-32C in WAL and the control file, the "traditional" and "legacy" CRC-32 variants are not used in any frontend programs anymore. Move the code for those back from src/common to src/backend/utils/hash. Also move the slicing-by-8 implementation (back) to src/port. This is in preparation for next patch that will add another implementation that uses Intel SSE 4.2 instructions to calculate CRC-32C, where available.
-
- 13 Apr, 2015 8 commits
-
-
Peter Eisentraut authored
-
Alvaro Herrera authored
-
Peter Eisentraut authored
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
-
Heikki Linnakangas authored
Should call just "pg_rewind", instead of "./pg_rewind". The tests are called so that PATH contains the temporariy installation bin dir. Per report from Alvaro Herrera
-
Heikki Linnakangas authored
* Don't pass arguments to prove, since that's not supported on perl 5.8, which is the minimum version supported by the TAP tests. Refactor the test files themselves to run the tests twice, in both local and remote mode.
* Use eq rather than == for string comparison. This thinko caused the remote versions of the tests to never run.
* Add "use strict" and "use warnings", and fix the warnings that this produced.
* Increase the delay after standby promotion, to make the tests more robust.
* In remote mode, the connection string to the promoted standby was incorrect, leading to connection errors.

Patch by Michael Paquier, to address Peter Eisentraut's report.
-
Heikki Linnakangas authored
After a timeline switch, we would leave behind recycled WAL segments that are in the future, but on the old timeline. After promotion, and after they become old enough to be recycled again, we would notice that they don't have a .ready or .done file, create a .ready file for them, and archive them. That's bogus, because the files contain garbage recycled from an older timeline (or preallocated as zeros). We shouldn't archive such files.

This could happen when we're following a timeline switch during replay, or when we switch to a new timeline at end-of-recovery.

To fix, whenever we switch to a new timeline, scan the data directory for WAL segments on the old timeline, but with a higher segment number, and remove them. Those don't belong to our timeline history, and are most likely bogus recycled or preallocated files. They could also be valid files that we streamed from the primary ahead of time, but in any case, they're not needed to recover to the new timeline.
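A simplified sketch of that cleanup (not the actual xlog.c code; the directory argument, the switchseg parameter, and the helper are invented): parse the 24-hex-digit segment file names and unlink those that carry an older timeline ID but lie at or beyond the switch point.

    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void
    remove_stale_segments(const char *xlogdir, unsigned int newTLI,
                          unsigned long long switchseg)
    {
        DIR        *dir = opendir(xlogdir);
        struct dirent *de;

        if (dir == NULL)
            return;

        while ((de = readdir(dir)) != NULL)
        {
            unsigned int tli, log, seg;
            char         path[1024];

            /* WAL segment names are 24 hex digits: TLI, log, seg (8 each). */
            if (strlen(de->d_name) != 24 ||
                sscanf(de->d_name, "%08X%08X%08X", &tli, &log, &seg) != 3)
                continue;

            if (tli < newTLI &&
                (((unsigned long long) log << 32) | seg) >= switchseg)
            {
                /* recycled or preallocated garbage from the old timeline */
                snprintf(path, sizeof(path), "%s/%s", xlogdir, de->d_name);
                unlink(path);
            }
        }
        closedir(dir);
    }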
-
Fujii Masao authored
gettext was unhappy about commit b216ad7b because it revealed that internationalized messages may contain a '\r' escape sequence in pg_rewind. This commit moves the '\r' to a separate printf() call. Michael Paquier, bug reported by Peter Eisentraut
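A tiny sketch of the pattern (the message text and helper are made up, not the actual pg_rewind progress code): keep the carriage return out of the translatable string so that gettext never sees the '\r'.

    #include <stdio.h>
    #include <libintl.h>

    #define _(x) gettext(x)

    static void
    report_progress(long done_kb, long total_kb)
    {
        /* translatable part, without the escape sequence */
        printf(_("%ld/%ld kB copied"), done_kb, total_kb);
        /* carriage return issued separately, outside the translated message */
        printf("\r");
        fflush(stdout);
    }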
-
Peter Eisentraut authored
This matches existing practice, but makes the setup complete and consistent with the C code setup.
-
- 12 Apr, 2015 4 commits
-
-
Heikki Linnakangas authored
It was not significant in practice, just one instance of a small result set, but let's pacify Coverity. Michael Paquier
-
Magnus Hagander authored
This view shows information about all connections, such as if the connection is using SSL, which cipher is used, and which client certificate (if any) is used. Reviews by Alex Shulgin, Heikki Linnakangas, Andres Freund & Michael Paquier
-
Heikki Linnakangas authored
David Rowley
-
Peter Eisentraut authored
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
-
- 10 Apr, 2015 2 commits
-
-
Alvaro Herrera authored
Locking and updating the same tuple repeatedly led to some strange multixacts being created which had several subtransactions of the same parent transaction holding locks of the same strength. However, once a subxact of the current transaction holds a lock of a given strength, it's not necessary to acquire the same lock again. This made some coding patterns much slower than required.

The fix is twofold. First we change HeapTupleSatisfiesUpdate to return HeapTupleBeingUpdated for the case where the current transaction is already a single-xid locker for the given tuple; it used to return HeapTupleMayBeUpdated for that case. The new logic is simpler, and the change to pgrowlocks is a testament to that: previously we needed to check for the single-xid locker separately in a very ugly way. That test is simpler now.

As fallout from the HTSU change, some of its callers need to be amended so that tuple-locked-by-own-transaction is taken into account in the BeingUpdated case rather than the MayBeUpdated case. For many of them there is no difference; but heap_delete() and heap_update() now check explicitly and do not grab the tuple lock in that case.

The HTSU change also means that the routine MultiXactHasRunningRemoteMembers introduced in commit 11ac4c73 is no longer necessary and can be removed; the case that used to require it is now handled naturally as a result of the changes to heap_delete and heap_update.

The second part of the fix to the performance issue is to adjust heap_lock_tuple to avoid the slowness:

1. Previously we checked for the case that our own transaction already held a strong enough lock and returned MayBeUpdated, but only in the multixact case. Now we do it for the plain Xid case as well, which saves having to LockTuple.

2. If the current transaction is the only locker of the tuple (but with a lock not as strong as what we need; otherwise it would have been caught in the check mentioned above), we can skip sleeping on the multixact, and instead go straight to creating an updated multixact with the additional lock strength.

3. Most importantly, make sure that both the single-xid-locker case and the multixact-locker case optimizations are applied always. We do this by checking both in a single place, rather than them appearing in two separate portions of the routine -- something that is made possible by the HeapTupleSatisfiesUpdate API change. Previously we would only check for the single-xid case when HTSU returned MayBeUpdated, and only checked for the multixact case when HTSU returned BeingUpdated. This was at odds with what HTSU actually returned in one case: if our own transaction was locker in a multixact, it returned MayBeUpdated, so the optimization never applied. This is what led to the large multixacts in the first place.

Per bug report #8470 by Oskari Saarenmaa.
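As a tiny illustration of the core idea in point 1 (the enum and helper below are invented, and vastly simpler than heap_lock_tuple itself): if the lock already held by our own transaction is at least as strong as the one requested, return early instead of acquiring anything.

    #include <stdbool.h>

    /* Tuple lock strengths, weakest to strongest (mirrors the heapam ordering). */
    typedef enum
    {
        TUPLOCK_KEYSHARE,
        TUPLOCK_SHARE,
        TUPLOCK_NOKEYEXCLUSIVE,
        TUPLOCK_EXCLUSIVE
    } TupleLockMode;

    static bool
    own_lock_already_sufficient(TupleLockMode held, TupleLockMode requested)
    {
        /* nothing to do: no LockTuple, no multixact sleep, no new member */
        return held >= requested;
    }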
-
Peter Eisentraut authored
If someone else already set the callbacks, don't overwrite them with ours. When unsetting the callbacks, only unset them if they point to ours. Author: Jan Urbański <wulczer@wulczer.org>
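The commit message does not name the callbacks here; as an example of the guard pattern it describes, a sketch using OpenSSL's 1.0-era locking-callback API (the function names are illustrative): install our callback only if none is set, and remove it only if it is still ours.

    #include <openssl/crypto.h>

    static void
    my_locking_cb(int mode, int n, const char *file, int line)
    {
        /* acquire or release mutex number n here */
        (void) mode; (void) n; (void) file; (void) line;
    }

    static void
    install_locking_callback(void)
    {
        if (CRYPTO_get_locking_callback() == NULL)
            CRYPTO_set_locking_callback(my_locking_cb);
    }

    static void
    remove_locking_callback(void)
    {
        if (CRYPTO_get_locking_callback() == my_locking_cb)
            CRYPTO_set_locking_callback(NULL);
    }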
-
- 09 Apr, 2015 4 commits
-
-
Magnus Hagander authored
-
Heikki Linnakangas authored
Use perl 'glob' and File::Copy instead of "cp". This takes us one step closer to running the suite on Windows. Michael Paquier
-
Heikki Linnakangas authored
Michael Paquier
-
Magnus Hagander authored
Michael Paquier
-