- 26 Jun, 2012 7 commits
-
-
Robert Haas authored
When LWLOCK_STATS is *not* defined, the only change is that SpinLockAcquire now returns the number of delays. Patch by me, review by Jeff Janes.
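As a rough illustration of what "number of delays" means here, a test-and-set spin loop can count how many times it found the lock busy. A minimal sketch in portable C11 — not PostgreSQL's actual s_lock.c, whose naming and backoff logic differ:

```c
#include <stdatomic.h>
#include <sched.h>

typedef atomic_flag slock_t;

/* Acquire the spinlock, returning how many times we had to wait. */
static int
spin_lock_counting(slock_t *lock)
{
    int delays = 0;

    while (atomic_flag_test_and_set_explicit(lock, memory_order_acquire))
    {
        delays++;           /* lock was busy: one more delay... */
        sched_yield();      /* ...so back off before retrying */
    }
    return delays;          /* caller can accumulate this in its stats */
}
```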
-
Tom Lane authored
The original coding failed miserably for BLCKSZ of 4K or less, as reported by Josh Kupershmidt. With the present design for text indexes, a given inner tuple could have up to 256 labels (requiring either 3K or 4K bytes depending on MAXALIGN), which means that we can't positively guarantee no failures for smaller blocksizes. But we can at least make it behave sanely so long as there are few enough labels to fit on a page. Considering that btree is also more prone to "index tuple too large" failures when BLCKSZ is small, it's not clear that we should expend more work than this on this case.
-
Robert Haas authored
If you decide you want to take the hint, this gives you something you can paste right back to the server. Dean Rasheed
-
Robert Haas authored
Avoid using LockPage(rel, 0, lockmode) to protect against changes to the bucket mapping. Instead, an exclusive buffer content lock is now viewed as sufficient permission to modify the metapage, and a shared buffer content lock is used when such modifications need to be prevented.

This more relaxed locking regimen makes it possible that, while we're busy getting a heavyweight lock on the bucket we intend to search or insert into, a bucket split might occur underneath us. To compensate for that possibility, we use a loop-and-retry system: release the metapage content lock, acquire the heavyweight lock on the target bucket, and then reacquire the metapage content lock and check that the bucket mapping has not changed. Normally it hasn't, and we're done. But if by chance it has, we simply unlock the metapage, release the heavyweight lock we acquired previously, lock the new bucket, and loop around again. Even in the worst case we cannot loop very many times here, since we don't split the same bucket again until we've split all the other buckets, and 2^N gets big pretty fast.

This results in greatly improved concurrency, because we're effectively replacing two lwlock acquire-and-release cycles in exclusive mode (on one of the lock manager locks) with a single acquire-and-release cycle in shared mode (on the metapage buffer content lock). Testing shows that it's still not quite as good as btree; for that, we'd probably have to find some way of getting rid of the heavyweight bucket locks as well, which does not appear straightforward.

Patch by me, review by Jeff Janes.
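A condensed sketch of the loop-and-retry protocol, adapted from what _hash_first() does after this change; declarations, pin management, and error paths are omitted, so treat it as illustrative rather than the exact committed code:

```c
/* Loop until we hold the heavyweight lock on the correct bucket. */
for (;;)
{
    /* Compute the target bucket under the shared metapage content lock. */
    bucket = _hash_hashkey2bucket(hashkey, metap->hashm_maxbucket,
                                  metap->hashm_highmask,
                                  metap->hashm_lowmask);
    blkno = BUCKET_TO_BLKNO(metap, bucket);

    /* Release the content lock (keeping the pin) before we might sleep. */
    _hash_chgbufaccess(rel, metabuf, HASH_READ, HASH_NOLOCK);

    /*
     * If the bucket we locked on the previous iteration is still the
     * right one, we're done; otherwise drop the stale lock.
     */
    if (retry)
    {
        if (oldblkno == blkno)
            break;
        _hash_droplock(rel, oldblkno, HASH_SHARE);
    }

    /* Heavyweight bucket lock; a split may happen while we wait. */
    _hash_getlock(rel, blkno, HASH_SHARE);

    /* Reacquire the content lock and loop to recheck the mapping. */
    _hash_chgbufaccess(rel, metabuf, HASH_NOLOCK, HASH_READ);
    oldblkno = blkno;
    retry = true;
}
```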
-
Heikki Linnakangas authored
The xlogid + segno representation of a particular WAL segment doesn't make much sense in pg_resetxlog anymore, now that we don't use that anywhere else. Use the WAL filename instead, since that's a convenient way to name a particular WAL segment. I did this partially for pg_resetxlog in the original xlogid/segno -> uint64 patch, but I neglected pg_upgrade and the docs. This should now be more complete.
-
Tom Lane authored
While pg_dump has included dependency information in archive-format output ever since 7.3, it never made any large effort to ensure that that information was actually useful. In particular, in common situations where dependency chains include objects that aren't separately emitted in the dump, the dependencies shown for objects that were emitted would reference the dump IDs of these un-dumped objects, leaving no clue about which other objects the visible objects indirectly depend on. So far, parallel pg_restore has managed to avoid tripping over this misfeature, but only by dint of some crude hacks like not trusting dependency information in the pre-data section of the archive.

It seems prudent to do something about this before it rises up to bite us, so instead of emitting the "raw" dependencies of each dumped object, recursively search for its actual dependencies among the subset of objects that are being dumped.

Back-patch to 9.2, since that code hasn't yet diverged materially from HEAD. At some point we might need to back-patch further, but right now there are no known cases where this is actively necessary. (The one known case, bug #6699, is fixed in a different way by my previous patch.) Since this patch depends on 9.2 changes that made TOC entries be marked before output commences as to whether they'll be dumped, back-patching further would require additional surgery; and as of now there's no evidence that it's worth the risk.
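The recursive search amounts to replacing each reference to an un-dumped object with that object's own dependencies, transitively, until only dumped dump IDs remain. A hypothetical sketch of the idea — findObjectByDumpId() is pg_dump's real lookup helper, the rest of the names are illustrative, and the real code also guards against dependency loops:

```c
static void
find_dumped_deps(DumpableObject *obj, DumpId *out, int *nout)
{
    int         i;

    for (i = 0; i < obj->nDeps; i++)
    {
        DumpableObject *dep = findObjectByDumpId(obj->dependencies[i]);

        if (dep == NULL)
            continue;
        if (dep->dump)
            out[(*nout)++] = dep->dumpId;       /* emitted: reference it */
        else
            find_dumped_deps(dep, out, nout);   /* invisible: look through it */
    }
}
```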
-
Tom Lane authored
As of 9.2, with the --section option, it is very important that the concept of "pre-data", "data", and "post-data" sections of the output be honored strictly; else a dump divided into separate sectional files might be unrestorable. However, the dependency-sorting logic knew nothing of sections and would happily select output orderings that didn't fit that structure. Doing so was mostly harmless before 9.2, but now we need to be sure it doesn't do that.

To fix, create dummy objects representing the section boundaries and add dependencies between them and all the normal objects. (This might sound expensive, but it seems to add only a percent or two to pg_dump's runtime.)

This also fixes a problem introduced in 9.1 by the feature that allows incomplete GROUP BY lists when a primary key is given in GROUP BY: views can now depend on primary key constraints. Previously, pg_dump would deal with that by simply emitting the primary key constraint before the view definition (and hence before the data section of the output). That's survivable for simple serial restores, where creating an index before the data is loaded works though it's undesirable for speed reasons; but it could lead to outright failure of parallel restores, as seen in bug #6699 from Joe Van Dyk. That happened because pg_restore would switch into parallel mode as soon as it reached the constraint, and then very possibly would try to emit the view definition before the primary key was committed (as a consequence of another bug that causes the view not to be correctly marked as depending on the constraint).

Adding the section boundary constraints forces the dependency-sorting code to break the view into separate table and rule declarations, allowing the rule, and hence the primary key constraint it depends on, to revert to their intended location in the post-data section. This also somewhat accidentally works around the bogus-dependency-marking problem, because the rule will be correctly shown as depending on the constraint, so parallel pg_restore will now do the right thing. (We will fix the bogus-dependency problem for real in a separate patch, but that patch is not easily back-portable to 9.1, so the fact that this patch is enough to dodge the only known symptom is fortunate.)

Back-patch to 9.1, except for the hunk that adds verification that the finished archive TOC list is in correct section order; the place where it was convenient to add that doesn't exist in 9.1.
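Conceptually, the boundary trick looks something like this hypothetical sketch: two dummy objects bracket the data section, and every real object gets a dependency placing it on the correct side. addObjectDependency() is pg_dump's real helper (the first argument is made to depend on, i.e. sort after, the second); the other names are made up for illustration:

```c
DumpableObject *preDataBound = createBoundaryObject("PRE-DATA BOUNDARY");
DumpableObject *postDataBound = createBoundaryObject("POST-DATA BOUNDARY");
int         i;

for (i = 0; i < numObjs; i++)
{
    DumpableObject *obj = objs[i];

    if (belongsInPreData(obj))
        addObjectDependency(preDataBound, obj->dumpId);  /* before boundary */
    else if (belongsInData(obj))
    {
        addObjectDependency(obj, preDataBound->dumpId);  /* after pre-data */
        addObjectDependency(postDataBound, obj->dumpId); /* before post-data */
    }
    else
        addObjectDependency(obj, postDataBound->dumpId); /* after post-data */
}
```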
-
- 25 Jun, 2012 6 commits
-
-
Alvaro Herrera authored
Remove proc.h from sinvaladt.h and twophase.h; also replace xlog.h in proc.h with xlogdefs.h.
-
Peter Eisentraut authored
There was a wild mix of calling conventions: Some were declared to return void and didn't return, some returned an int exit code, some claimed to return an exit code, which the callers checked, but actually never returned, and so on. Now all of these functions are declared to return void and decorated with attribute noreturn and don't return. That's easiest, and most code already worked that way.
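The convention being standardized on, in miniature — spelled with GCC's attribute here, where the tree would use its own portability wrapper:

```c
#include <stdio.h>
#include <stdlib.h>

/* Declared void, marked noreturn, and genuinely never returns. */
__attribute__((noreturn))
static void
usage_and_exit(const char *progname, int exitcode)
{
    fprintf(stderr, "Try \"%s --help\" for more information.\n", progname);
    exit(exitcode);
}
```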
-
Robert Haas authored
Fujii Masao
-
Robert Haas authored
Fujii Masao
-
Robert Haas authored
Commit 061e7efb changed the rules for splitting xlog records across pages, but neglected to update this test. It's possible that there's some better action here than just removing the test completely, but this at least appears to get some of the things that are currently broken (like initdb on MacOS X) working again.
-
Kevin Grittner authored
-
- 24 Jun, 2012 9 commits
-
-
Peter Eisentraut authored
The latter was already the dominant use, and it's preferable because in C the convention is that intXX means XX bits. Therefore, allowing mixed use of int2, int4, int8, int16, int32 is obviously confusing. Remove the typedefs for int2 and int4 for now. They don't seem to be widely used outside of the PostgreSQL source tree, and the few uses can probably be cleaned up by the time this ships.
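A minimal illustration of the convention, deriving the types from <stdint.h> here for self-containment (PostgreSQL's c.h derives them from configure-detected platform types, and the struct is a hypothetical example):

```c
#include <stdint.h>

typedef int16_t int16;          /* intXX now always means XX bits... */
typedef int32_t int32;
typedef int64_t int64;

/* ...so the width of every field is apparent at a glance: */
typedef struct ExampleTuplePointer
{
    int32       blockNumber;
    int16       offsetNumber;
    int16       flags;
} ExampleTuplePointer;
```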
-
Heikki Linnakangas authored
-
Heikki Linnakangas authored
Peter Eisentraut advised me that UINT64CONST is the proper way to do that, not an LL suffix.
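The point of UINT64CONST is that the correct constant suffix depends on how uint64 is defined on the platform. A simplified rendition of the idea (PostgreSQL's real definition lives in c.h, and uint64 comes from the tree's headers; the usage macro is hypothetical):

```c
#ifdef HAVE_LONG_INT_64
#define UINT64CONST(x) ((uint64) x##UL)     /* uint64 is unsigned long */
#else
#define UINT64CONST(x) ((uint64) x##ULL)    /* uint64 is unsigned long long */
#endif

/* hypothetical usage: portable on either kind of platform */
#define MAX_UINT64 UINT64CONST(0xFFFFFFFFFFFFFFFF)
```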
-
Heikki Linnakangas authored
I didn't notice this on my laptop as I don't HAVE_FSYNC_WRITETHROUGH.
-
Heikki Linnakangas authored
Per warning from buildfarm member 'locust'. At least I think this is what's making it upset.
-
Heikki Linnakangas authored
This simplifies code that needs to do arithmetic on XLogRecPtrs. To avoid changing the on-disk format of data pages, the LSN on data pages is still stored in the old format. That should keep pg_upgrade happy. However, we have XLogRecPtrs embedded in the control file, and in the structs that are sent over the replication protocol, so this change breaks compatibility between pg_basebackup and the server. I didn't do anything about this in this patch; per discussion on -hackers, the right thing to do would be to change the replication protocol to be architecture-independent, so that you could use a newer version of pg_receivexlog, for example, against an older server version.
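The shape of the change, side by side — field layout as in the old headers, with the struct renamed here to avoid a name clash; the subtraction macro is a hypothetical illustration of the arithmetic that becomes trivial:

```c
/* Before: a struct, so comparisons and arithmetic needed helper macros. */
typedef struct
{
    uint32      xlogid;     /* log file #, 0 based */
    uint32      xrecoff;    /* byte offset of location in log file */
} XLogRecPtrOld;

/* After: a plain 64-bit integer. */
typedef uint64 XLogRecPtr;

/* Distance between two WAL positions is now ordinary subtraction. */
#define XLogBytesBetween(a, b) ((b) - (a))
```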
-
Heikki Linnakangas authored
This saves a few bytes of WAL space, but the real motivation is to make it predictable how much WAL space a record requires, as it no longer depends on whether we need to waste the last few bytes at the end of a WAL page because the header doesn't fit. The total length field of the WAL record, xl_tot_len, is moved to the beginning of the WAL record header, so that it is still always found on the first page where a WAL record begins. Bump WAL version number again, as this is an incompatible change.
-
Heikki Linnakangas authored
The continuation record only contained one field, xl_rem_len, so it makes things simpler to just include it in the WAL page header. This wastes four bytes on pages that don't begin with a continuation from the previous page, plus four more bytes on every page, because of padding. The motivation for this is to make it easier to calculate how much space a WAL record needs. Before this patch, it depended on how many page boundaries the record crosses. The motivation for that, in turn, is to separate the allocation of space in the WAL from the copying of the record data to the allocated space. Keeping that calculation simple helps to keep the critical section in which WAL space is allocated short. But that's not included in this patch yet. Bump WAL version number again, as this is an incompatible change.
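The resulting page header, approximately as it appears in xlog_internal.h after this change (typedefs such as uint16, TimeLineID, and XLogRecPtr come from the tree's own headers):

```c
typedef struct XLogPageHeaderData
{
    uint16      xlp_magic;      /* magic value for correctness checks */
    uint16      xlp_info;       /* flag bits, e.g. XLP_FIRST_IS_CONTRECORD */
    TimeLineID  xlp_tli;        /* TimeLineID of first record on page */
    XLogRecPtr  xlp_pageaddr;   /* XLOG address of this page */
    uint32      xlp_rem_len;    /* bytes remaining of the record that
                                 * continues onto this page; absorbed
                                 * from the old continuation header */
} XLogPageHeaderData;
```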
-
Heikki Linnakangas authored
The comments claimed that wasting the last segment made it easier to do calculations with XLogRecPtrs, because you don't have problems representing last-byte-position-plus-1 that way. In my experience, however, it only made things more complicated, because there were two ways to represent the boundary at the beginning of a logical log file: xlogid = n+1 and xrecoff = 0, or xlogid = n and xrecoff = 4GB - XLOG_SEG_SIZE. Some functions were picky about which representation was used.

Also, use a 64-bit segment number instead of the log/seg combination to point to a particular WAL segment. We assume that all platforms have a working 64-bit integer type nowadays.

This is an incompatible change in WAL format, so bump the WAL version number.
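With a 64-bit segment number, mapping a WAL position to its segment is a single division, and the old log/seg pair needed for file names falls out with one more. Roughly as in xlog_internal.h, with the segment size shown at its default 16MB; the last two macro names are illustrative (the real code inlines those divisions where file names are built):

```c
#define XLOG_SEG_SIZE (16 * 1024 * 1024)
#define XLogSegmentsPerXLogId (UINT64CONST(0x100000000) / XLOG_SEG_SIZE)

/* which segment holds this WAL position? */
#define XLByteToSeg(xlrp, logSegNo) \
    ((logSegNo) = (xlrp) / XLOG_SEG_SIZE)

/* recover the old-style log/seg pair, e.g. for building file names */
#define XLogSegNoToLog(segno) ((uint32) ((segno) / XLogSegmentsPerXLogId))
#define XLogSegNoToSeg(segno) ((uint32) ((segno) % XLogSegmentsPerXLogId))
```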
-
- 22 Jun, 2012 2 commits
-
-
Robert Haas authored
These days, even a wimpy system can insert 10000 tuples in the blink of an eye, so there's no real need for this much verbosity. Per complaint from Tatsuo Ishii.
-
Robert Haas authored
Also, add some cross-links to the indexing documentation, so it's easier to notice that && and other array operators have index support. Ryan Kelly, edited by me.
-
- 21 Jun, 2012 5 commits
-
-
Peter Eisentraut authored
To avoid divergent names on related pages, avoid ambiguities, and reduce translation work a little.
-
Tom Lane authored
Repeated execution of an uncorrelated ARRAY_SUBLINK sub-select (which I think can only happen if the sub-select is embedded in a larger, correlated subquery) would leak memory for the duration of the query, due to not reclaiming the array generated in the previous execution. Per bug #6698 from Armando Miraglia. Diagnosis and fix idea by Heikki, patch itself by me. It has been like this all along, so back-patch to all supported versions.
-
Alvaro Herrera authored
-
Heikki Linnakangas authored
This speeds up reassigning locks to the parent owner, when the transaction holds a lot of locks but only a few of them belong to the current resource owner. This particularly helps pg_dump when dumping a large number of objects. The cache can hold up to 15 locks in each resource owner. After that, the cache is marked as overflowed, and we fall back to the old method of scanning the whole local lock table. The tradeoff here is that the cache has to be scanned whenever a lock is released, so if the cache is too large, lock release becomes more expensive. 15 seems enough to cover pg_dump, and doesn't have much impact on lock release. Jeff Janes, reviewed by Amit Kapila and Heikki Linnakangas.
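The cache itself is just a small fixed array in each resource owner, with a counter that keeps incrementing past the array size to mark overflow. A sketch close to the committed resowner.c logic, with the surrounding bookkeeping fields elided:

```c
#define MAX_RESOWNER_LOCKS 15

typedef struct ResourceOwnerData
{
    /* ... other bookkeeping fields elided ... */
    int         nlocks;                     /* number of owned locks */
    LOCALLOCK  *locks[MAX_RESOWNER_LOCKS];  /* list of owned locks */
} ResourceOwnerData;

/* Remember that a lock belongs to this owner, if there's room. */
static void
ResourceOwnerRememberLock(ResourceOwner owner, LOCALLOCK *locallock)
{
    if (owner->nlocks > MAX_RESOWNER_LOCKS)
        return;                 /* cache already overflowed; do nothing */
    if (owner->nlocks < MAX_RESOWNER_LOCKS)
        owner->locks[owner->nlocks] = locallock;
    owner->nlocks++;            /* nlocks > MAX marks the overflow */
}
```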
-
Tom Lane authored
The original coding in ri_triggers.c had partial support for the concept of zero-column foreign key constraints. But this is not defined in the SQL standard, nor was it ever allowed by any other part of Postgres, nor was it very fully implemented even here (eg there was no support for preventing PK-table deletions that would violate the constraint). Doesn't seem very useful to carry 100-plus lines of code for a corner case that no one is interested in making work. Instead, just add a check that the column list read from pg_constraint is non-empty.
-
- 20 Jun, 2012 3 commits
-
-
Tom Lane authored
By my count there are 18 callers of CacheRegisterSyscacheCallback in the core code in HEAD, so we are potentially leaving as few as 2 slots for any add-on code to use (though possibly not all these callers would actually activate in any particular session). That doesn't seem like a lot of headroom, so let's pump it up a little.
-
Tom Lane authored
Extracting data from pg_constraint turned out to take as much as 10% of the runtime in a bulk-update case where the foreign key column wasn't changing, because we did it over again for each tuple. Fix that by maintaining a backend-local cache of the results. This is really a pretty small patch, but converting the trigger functions to work with pointers rather than local struct variables requires a lot of mechanical changes.
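A hypothetical sketch of the shape of such a backend-local cache, keyed by constraint OID via a dynahash table. hash_search() and HASH_ENTER are the real dynahash API; the initialization and catalog-fetch helpers named here are illustrative, and the real cache is also flushed via syscache invalidation callbacks:

```c
static HTAB *ri_constraint_cache = NULL;

static const RI_ConstraintInfo *
ri_LoadConstraintInfo(Oid constraintOid)
{
    RI_ConstraintInfo *riinfo;
    bool        found;

    if (ri_constraint_cache == NULL)
        ri_InitConstraintCache();       /* create the HTAB on first use */

    riinfo = hash_search(ri_constraint_cache, &constraintOid,
                         HASH_ENTER, &found);
    if (!found)
        ri_FetchConstraintInfo_from_catalog(constraintOid, riinfo);

    return riinfo;
}
```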
-
Tom Lane authored
During an update of a PK row, we can skip firing the RI trigger if any old key value is NULL, because then the row could not have had any matching rows in the FK table. Conversely, during an update of an FK row, the outcome is determined if any new key value is NULL. In either case it becomes unnecessary to compare individual key values. This patch was inspired by discussion of Vik Reykja's patch to use IS NOT DISTINCT semantics for the key comparisons. In the event there was no need for that, so this patch looks nothing like his; but he should still get credit for having re-opened consideration of the trigger skip logic.
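A minimal self-contained sketch of the PK-side rule; the helper is hypothetical, as the real tests are folded into ri_triggers.c's trigger functions:

```c
#include <stdbool.h>

/* For a PK-row update: does the RI trigger need to run at all? */
static bool
pk_update_needs_ri_check(const bool *old_key_isnull, int nkeys)
{
    int         i;

    for (i = 0; i < nkeys; i++)
    {
        if (old_key_isnull[i])
            return false;   /* a NULL old key can't have been referenced,
                             * so no FK row needs rechecking */
    }
    return true;            /* all old keys non-NULL: compare old vs. new */
}
```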
-
- 19 Jun, 2012 5 commits
-
-
Tom Lane authored
The option --foreign-keys, used at initialization time, will create foreign key constraints for the columns that represent references to other tables' primary keys. This can help in benchmarking FK performance. Jeff Janes
-
Alvaro Herrera authored
In passing, reword another instance of the same message that was gratuitously different. Author: Josh Kupershmidt after a bug report by Bosco Rama
-
Peter Eisentraut authored
pointed out by Stefan Kaltenbrunner
-
Tom Lane authored
These triggers are identical except for whether ri_Check_Pk_Match is to be called, so factor out the common code to save a couple hundred lines. Also, eliminate the null-column checks in ri_Check_Pk_Match, since they duplicate checks made by its callers and required unnecessary complication in its API. Simplify the way code is shared between RI_FKey_check_ins and RI_FKey_check_upd, too.
-
Tom Lane authored
I was confused about this, so try to make it clearer for the next person. (This seems like a fairly inefficient way of dealing with a corner case, but I don't have a better idea offhand. Maybe if there were a way to turn off the RI_FKey_keyequal_upd_fk event filter temporarily?)
-
- 18 Jun, 2012 3 commits
-
-
Tom Lane authored
Once upon a time, somebody was worried that cached RI plans wouldn't get remade with new default values after ALTER TABLE ... SET DEFAULT, so they didn't allow caching of plans for ON UPDATE/DELETE SET DEFAULT actions. That time is long gone, though (and even at the time I doubt this was the greatest hazard posed by ALTER TABLE...). So allow these triggers to cache their plans just like the others. The cache_plan argument to ri_PlanCheck is now vestigial, since there are no callers that don't pass "true"; but I left it alone in case there is any future need for it.
-
Tom Lane authored
We really only need the foreign key constraint's OID and the query type code to uniquely identify each plan we are caching for FK checks. The other stuff that was in the struct had no business being used as part of a hash key, and was all just being copied from struct RI_ConstraintInfo anyway. Get rid of the unnecessary fields, and readjust various function APIs to make them use RI_ConstraintInfo not RI_QueryKey as info source. I'd be surprised if this makes any measurable performance difference, but it certainly feels cleaner.
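The slimmed-down key, approximately as it stands in ri_triggers.c after this change:

```c
/* Key identifying one cached plan for FK enforcement. */
typedef struct RI_QueryKey
{
    Oid         constr_id;      /* OID of pg_constraint entry */
    int32       constr_queryno; /* query type code, RI_PLAN_xxx */
} RI_QueryKey;
```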
-
Peter Eisentraut authored
-