1. 27 Jun, 2012 1 commit
  2. 26 Jun, 2012 8 commits
    • Allow pg_terminate_backend() to be used on backends with matching role. · c60ca19d
      Robert Haas authored
      A similar change was made previously for pg_cancel_backend, so now it
      all matches again.
      
      Dan Farina, reviewed by Fujii Masao, Noah Misch, and Jeff Davis,
      with slight kibitzing on the doc changes by me.
    • When LWLOCK_STATS is defined, count spindelays. · b79ab001
      Robert Haas authored
      When LWLOCK_STATS is *not* defined, the only change is that
      SpinLockAcquire now returns the number of delays.
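      A hedged usage sketch of the new return value (the counter variable below is
      hypothetical, not a name from the patch):
      
        /* Illustrative only: a caller can now count spin delays directly. */
        int     delays = SpinLockAcquire(&lwlock->mutex);
        
        #ifdef LWLOCK_STATS
            my_spin_delay_count += delays;      /* hypothetical per-lock counter */
        #endif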
      
      Patch by me, review by Jeff Janes.
    • Cope with smaller-than-normal BLCKSZ setting in SPGiST indexes on text. · 75777360
      Tom Lane authored
      The original coding failed miserably for BLCKSZ of 4K or less, as reported
      by Josh Kupershmidt.  With the present design for text indexes, a given
      inner tuple could have up to 256 labels (requiring either 3K or 4K bytes
      depending on MAXALIGN), which means that we can't positively guarantee no
      failures for smaller blocksizes.  But we can at least make it behave sanely
      so long as there are few enough labels to fit on a page.  Considering that
      btree is also more prone to "index tuple too large" failures when BLCKSZ is
      small, it's not clear that we should expend more work than this on this
      case.
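      Rough arithmetic behind the 3K/4K figure (the per-label cost used here is an
      assumption for illustration, not taken from the commit):
      
        /* Assume, for illustration, that each node label in a text inner tuple
         * costs about 12 bytes with 4-byte MAXALIGN, or 16 bytes with 8-byte
         * MAXALIGN.  With the maximum of 256 labels: */
        enum { MAX_LABELS = 256 };
        int labels_maxalign4 = MAX_LABELS * 12;    /* 3072 bytes, ~3K */
        int labels_maxalign8 = MAX_LABELS * 16;    /* 4096 bytes, ~4K */
        /* Either way the labels alone can fill a 4K page, so BLCKSZ <= 4K cannot
         * be guaranteed to work; larger block sizes leave room to spare. */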
    • Make DROP FUNCTION hint more informative. · 0caa0d04
      Robert Haas authored
      If you decide you want to take the hint, this gives you something you
      can paste right back to the server.
      
      Dean Rasheed
    • Reduce use of heavyweight locking inside hash AM. · 76837c15
      Robert Haas authored
      Avoid using LockPage(rel, 0, lockmode) to protect against changes to
      the bucket mapping.  Instead, an exclusive buffer content lock is now
      viewed as sufficient permission to modify the metapage, and a shared
      buffer content lock is used when such modifications need to be
      prevented.  This more relaxed locking regimen makes it possible that,
      when we're busy acquiring a heavyweight lock on the bucket we intend
      to search or insert into, a bucket split might occur underneath us.
      To compensate for that possibility, we use a loop-and-retry system:
      release the metapage content lock, acquire the heavyweight lock on the
      target bucket, and then reacquire the metapage content lock and check
      that the bucket mapping has not changed.  Normally it hasn't, and
      we're done.  But if by chance it has, we simply unlock the metapage,
      release the heavyweight lock we acquired previously, lock the new
      bucket, and loop around again.  Even in the worst case we cannot loop
      very many times here, since we don't split the same bucket again until
      we've split all the other buckets, and 2^N gets big pretty fast.
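      The loop-and-retry dance, as a simplified pseudocode sketch
      (hash_key_to_bucket, LockBucket and UnlockBucket are illustrative names, not
      the actual hash AM functions):
      
        /* Sketch only: pick and heavyweight-lock the target bucket while
         * tolerating a concurrent bucket split. */
        for (;;)
        {
            LockBuffer(metabuf, BUFFER_LOCK_SHARE);
            bucket = hash_key_to_bucket(metapage, hashkey);   /* read mapping */
            LockBuffer(metabuf, BUFFER_LOCK_UNLOCK);
        
            LockBucket(rel, bucket, access);                  /* heavyweight lock */
        
            LockBuffer(metabuf, BUFFER_LOCK_SHARE);           /* re-check mapping */
            if (hash_key_to_bucket(metapage, hashkey) == bucket)
                break;                                        /* unchanged: done */
        
            LockBuffer(metabuf, BUFFER_LOCK_UNLOCK);          /* a split moved us: */
            UnlockBucket(rel, bucket, access);                /* retry on new bucket */
        }
        LockBuffer(metabuf, BUFFER_LOCK_UNLOCK);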
      
      This results in greatly improved concurrency, because we're
      effectively replacing two lwlock acquire-and-release cycles in
      exclusive mode (on one of the lock manager locks) with a single
      acquire-and-release cycle in shared mode (on the metapage buffer
      content lock).  Testing shows that it's still not quite as good as
      btree; for that, we'd probably have to find some way of getting rid
      of the heavyweight bucket locks as well, which does not appear
      straightforward.
      
      Patch by me, review by Jeff Janes.
    • Fix pg_upgrade, broken by the xlogid/segno -> 64-bit int refactoring. · 038f3a05
      Heikki Linnakangas authored
      The xlogid + segno representation of a particular WAL segment doesn't make
      much sense in pg_resetxlog anymore, now that we don't use that anywhere
      else. Use the WAL filename instead, since that's a convenient way to name a
      particular WAL segment.
      
      I did this partially for pg_resetxlog in the original xlogid/segno -> uint64
      patch, but I neglected pg_upgrade and the docs. This should now be more
      complete.
    • Make pg_dump emit more accurate dependency information. · 8a504a36
      Tom Lane authored
      While pg_dump has included dependency information in archive-format output
      ever since 7.3, it never made any large effort to ensure that that
      information was actually useful.  In particular, in common situations where
      dependency chains include objects that aren't separately emitted in the
      dump, the dependencies shown for objects that were emitted would reference
      the dump IDs of these un-dumped objects, leaving no clue about which other
      objects the visible objects indirectly depend on.  So far, parallel
      pg_restore has managed to avoid tripping over this misfeature, but only
      by dint of some crude hacks like not trusting dependency information in
      the pre-data section of the archive.
      
      It seems prudent to do something about this before it rises up to bite us,
      so instead of emitting the "raw" dependencies of each dumped object,
      recursively search for its actual dependencies among the subset of objects
      that are being dumped.
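      A hedged sketch of that recursive substitution (simplified, not the actual
      pg_dump code; cycle handling is omitted):
      
        /* Sketch only: make 'owner' depend on the nearest objects that are
         * actually emitted, looking through objects that are not dumped. */
        static void
        addNearestDumpedDeps(DumpableObject *owner, DumpableObject *cur)
        {
            for (int i = 0; i < cur->nDeps; i++)
            {
                DumpableObject *dep = findObjectByDumpId(cur->dependencies[i]);
        
                if (dep == NULL)
                    continue;
                if (dep->dump)                              /* emitted in the dump */
                    addObjectDependency(owner, dep->dumpId);
                else                                        /* skipped: look through */
                    addNearestDumpedDeps(owner, dep);
            }
        }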
      
      Back-patch to 9.2, since that code hasn't yet diverged materially from
      HEAD.  At some point we might need to back-patch further, but right now
      there are no known cases where this is actively necessary.  (The one known
      case, bug #6699, is fixed in a different way by my previous patch.)  Since
      this patch depends on 9.2 changes that made TOC entries be marked before
      output commences as to whether they'll be dumped, back-patching further
      would require additional surgery; and as of now there's no evidence that
      it's worth the risk.
    • Improve pg_dump's dependency-sorting logic to enforce section dump order. · a1ef01fe
      Tom Lane authored
      As of 9.2, with the --section option, it is very important that the concept
      of "pre data", "data", and "post data" sections of the output be honored
      strictly; else a dump divided into separate sectional files might be
      unrestorable.  However, the dependency-sorting logic knew nothing of
      sections and would happily select output orderings that didn't fit that
      structure.  Doing so was mostly harmless before 9.2, but now we need to be
      sure it doesn't do that.  To fix, create dummy objects representing the
      section boundaries and add dependencies between them and all the normal
      objects.  (This might sound expensive but it seems to only add a percent or
      two to pg_dump's runtime.)
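      A hedged sketch of the boundary trick (createBoundaryObject and objectSection
      are illustrative names; the SECTION_* labels follow pg_dump's section terms):
      
        /* Sketch only: pin every object to its section by adding dependencies
         * against two dummy boundary objects. */
        DumpableObject *preData = createBoundaryObject("PRE-DATA BOUNDARY");
        DumpableObject *postData = createBoundaryObject("POST-DATA BOUNDARY");
        
        for (int i = 0; i < numObjs; i++)
        {
            DumpableObject *obj = objs[i];
        
            switch (objectSection(obj))
            {
                case SECTION_PRE_DATA:
                    addObjectDependency(preData, obj->dumpId);   /* boundary after it */
                    break;
                case SECTION_DATA:
                    addObjectDependency(obj, preData->dumpId);   /* after pre-data */
                    addObjectDependency(postData, obj->dumpId);  /* before post-data */
                    break;
                case SECTION_POST_DATA:
                    addObjectDependency(obj, postData->dumpId);  /* after data */
                    break;
            }
        }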
      
      This also fixes a problem introduced in 9.1 by the feature that allows
      incomplete GROUP BY lists when a primary key is given in GROUP BY.
      That means that views can depend on primary key constraints.  Previously,
      pg_dump would deal with that by simply emitting the primary key constraint
      before the view definition (and hence before the data section of the
      output).  That's bad enough for simple serial restores, where creating an
      index before the data is loaded works, but is undesirable for speed
      reasons.  But it could lead to outright failure of parallel restores, as
      seen in bug #6699 from Joe Van Dyk.  That happened because pg_restore would
      switch into parallel mode as soon as it reached the constraint, and then
      very possibly would try to emit the view definition before the primary key
      was committed (as a consequence of another bug that causes the view not to
      be correctly marked as depending on the constraint).  Adding the section
      boundary constraints forces the dependency-sorting code to break the view
      into separate table and rule declarations, allowing the rule, and hence the
      primary key constraint it depends on, to revert to their intended location
      in the post-data section.  This also somewhat accidentally works around the
      bogus-dependency-marking problem, because the rule will be correctly shown
      as depending on the constraint, so parallel pg_restore will now do the
      right thing.  (We will fix the bogus-dependency problem for real in a
      separate patch, but that patch is not easily back-portable to 9.1, so the
      fact that this patch is enough to dodge the only known symptom is
      fortunate.)
      
      Back-patch to 9.1, except for the hunk that adds verification that the
      finished archive TOC list is in correct section order; the place where
      it was convenient to add that doesn't exist in 9.1.
  3. 25 Jun, 2012 6 commits
  4. 24 Jun, 2012 9 commits
    • Replace int2/int4 in C code with int16/int32 · b8b2e3b2
      Peter Eisentraut authored
      The latter was already the dominant use, and it's preferable because
      in C the convention is that intXX means XX bits.  Therefore, allowing
      mixed use of int2, int4, int8, int16, int32 is obviously confusing.
      
      Remove the typedefs for int2 and int4 for now.  They don't seem to be
      widely used outside of the PostgreSQL source tree, and the few uses
      can probably be cleaned up by the time this ships.
    • Use UINT64CONST for 64-bit integer constants. · 0687a260
      Heikki Linnakangas authored
      Peter Eisentraut advised me that UINT64CONST is the proper way to do that,
      not LL suffix.
    • Oops. Remove stray paren. · a218e23a
      Heikki Linnakangas authored
      I didn't notice this on my laptop as I don't HAVE_FSYNC_WRITETHROUGH.
    • Use LL suffix for 64-bit constants. · 96ff85e2
      Heikki Linnakangas authored
      Per warning from buildfarm member 'locust'. At least I think this is what's
      making it upset.
    • Replace XLogRecPtr struct with a 64-bit integer. · 0ab9d1c4
      Heikki Linnakangas authored
      This simplifies code that needs to do arithmetic on XLogRecPtrs.
      
      To avoid changing on-disk format of data pages, the LSN on data pages is
      still stored in the old format. That should keep pg_upgrade happy. However,
      we have XLogRecPtrs embedded in the control file, and in the structs that
      are sent over the replication protocol, so this change breaks compatibility
      between pg_basebackup and the server. I didn't do anything about this in
      this patch; per discussion on -hackers, the right thing to do would be to change the
      replication protocol to be architecture-independent, so that you could use
      a newer version of pg_receivexlog, for example, against an older server
      version.
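      The payoff is easiest to see side by side; a small illustrative comparison
      (variable names are made up):
      
        /* Before: XLogRecPtr was a {xlogid, xrecoff} struct, so advancing or
         * comparing positions meant handling the 32-bit wraparound by hand. */
        
        /* After: XLogRecPtr is a plain 64-bit integer, so ordinary arithmetic
         * and comparisons just work: */
        XLogRecPtr  next = cur + rec_len;              /* advance */
        bool        caught_up = (flushed >= target);   /* compare */
        uint64      behind = target - flushed;         /* distance in bytes */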
    • Allow WAL record header to be split across pages. · 061e7efb
      Heikki Linnakangas authored
      This saves a few bytes of WAL space, but the real motivation is to make it
      predictable how much WAL space a record requires, as it no longer depends
      on whether we need to waste the last few bytes at end of WAL page because
      the header doesn't fit.
      
      The total length field of the WAL record, xl_tot_len, is moved to the beginning
      of the WAL record header, so that it is still always found on the first page
      where a WAL record begins.
      
      Bump WAL version number again as this is an incompatible change.
    • Move WAL continuation record information to WAL page header. · 20ba5ca6
      Heikki Linnakangas authored
      The continuation record only contained one field, xl_rem_len, so it makes
      things simpler to just include it in the WAL page header. This wastes four
      bytes on pages that don't begin with a continuation from the previous page, plus
      four bytes on every page, because of padding.
      
      The motivation of this is to make it easier to calculate how much space a
      WAL record needs. Before this patch, it depended on how many page boundaries
      the record crosses. The motivation of that, in turn, is to separate the
      allocation of space in the WAL from the copying of the record data to the
      allocated space. Keeping the calculation of space required simple helps to
      keep the critical section of allocating the space from WAL short. But that's
      not included in this patch yet.
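      With the continuation info in the page header, the WAL space for a record
      follows from its length and starting offset alone; a hedged calculation sketch
      (it ignores the long page headers used at segment boundaries):
      
        /* Sketch only: WAL bytes consumed by a record of total length tot_len
         * that starts at byte offset 'off' within a WAL page.  Each extra page
         * the record spills onto costs one short page header, no matter where
         * the record header itself falls. */
        static uint64
        wal_space_needed(uint32 off, uint32 tot_len)
        {
            uint64 space = tot_len;
            uint32 first_page_room = XLOG_BLCKSZ - off;
        
            if (tot_len > first_page_room)
            {
                uint32 per_page = XLOG_BLCKSZ - SizeOfXLogShortPHD;
                uint32 rest = tot_len - first_page_room;
                uint32 extra_pages = (rest + per_page - 1) / per_page;
        
                space += (uint64) extra_pages * SizeOfXLogShortPHD;
            }
            return space;
        }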
      
      Bump WAL version number again, as this is an incompatible change.
    • Don't waste the last segment of each 4GB logical log file. · dfda6eba
      Heikki Linnakangas authored
      The comments claimed that wasting the last segment made it easier to do
      calculations with XLogRecPtrs, because you don't have problems representing
      last-byte-position-plus-1 that way. In my experience, however, it only made
      things more complicated, because there were two ways to represent the
      boundary at the beginning of a logical log file: xlogid = n+1 and xrecoff = 0,
      or xlogid = n and xrecoff = 4GB - XLOG_SEG_SIZE. Some functions were
      picky about which representation was used.
      
      Also, use a 64-bit segment number instead of the log/seg combination, to
      point to a certain WAL segment. We assume that all platforms have a working
      64-bit integer type nowadays.
      
      This is an incompatible change in WAL format, so bumping WAL version number.
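      The old log/seg pair maps onto the new 64-bit segment number with simple
      arithmetic; a small self-contained illustration (XLOG_SEG_SIZE is shown at its
      default of 16MB):
      
        #include <stdint.h>
        
        #define XLOG_SEG_SIZE      (16 * 1024 * 1024)    /* default segment size */
        #define XLogSegsPerXLogId  (UINT64_C(0x100000000) / XLOG_SEG_SIZE)
        
        /* Old (xlogid, seg) pair -> new 64-bit segment number. */
        static uint64_t
        old_to_segno(uint32_t xlogid, uint32_t seg)
        {
            return (uint64_t) xlogid * XLogSegsPerXLogId + seg;
        }
        
        /* New 64-bit segment number -> old (xlogid, seg) pair. */
        static void
        segno_to_old(uint64_t segno, uint32_t *xlogid, uint32_t *seg)
        {
            *xlogid = (uint32_t) (segno / XLogSegsPerXLogId);
            *seg    = (uint32_t) (segno % XLogSegsPerXLogId);
        }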
  5. 22 Jun, 2012 2 commits
  6. 21 Jun, 2012 5 commits
    • Make placeholders in SQL command help more consistent and precise · 6753ced3
      Peter Eisentraut authored
      To avoid divergent names on related pages, avoid ambiguities, and
      reduce translation work a little.
    • Fix memory leak in ARRAY(SELECT ...) subqueries. · d14241c2
      Tom Lane authored
      Repeated execution of an uncorrelated ARRAY_SUBLINK sub-select (which
      I think can only happen if the sub-select is embedded in a larger,
      correlated subquery) would leak memory for the duration of the query,
      due to not reclaiming the array generated in the previous execution.
      Per bug #6698 from Armando Miraglia.  Diagnosis and fix idea by Heikki,
      patch itself by me.
      
      This has been like this all along, so back-patch to all supported versions.
    • 68d0e3cb
      Alvaro Herrera authored
    • Add a small cache of locks owned by a resource owner in ResourceOwner. · eeb6f37d
      Heikki Linnakangas authored
      This speeds up reassigning locks to the parent owner, when the transaction
      holds a lot of locks, but only a few of them belong to the current resource
      owner. This particularly helps pg_dump when dumping a large number of
      objects.
      
      The cache can hold up to 15 locks in each resource owner. After that, the
      cache is marked as overflowed, and we fall back to the old method of
      scanning the whole local lock table. The tradeoff here is that the cache has
      to be scanned whenever a lock is released, so if the cache is too large,
      lock release becomes more expensive. 15 seems enough to cover pg_dump, and
      doesn't have much impact on lock release.
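      A hedged sketch of the data structure and the overflow rule (field and
      constant names follow the description above but are not guaranteed to match
      the committed code):
      
        #define MAX_RESOWNER_LOCKS 15       /* size of the per-owner cache */
        
        typedef struct ResourceOwnerData
        {
            /* ... other tracked resources ... */
            int         nlocks;                       /* number of owned locks */
            LOCALLOCK  *locks[MAX_RESOWNER_LOCKS];    /* cache of owned locks */
        } ResourceOwnerData;
        
        /* Remember a lock in the owner's cache; once it overflows, stop
         * tracking and fall back to scanning the whole local lock table on
         * release/reassignment. */
        static void
        RememberOwnedLock(ResourceOwnerData *owner, LOCALLOCK *locallock)
        {
            if (owner->nlocks > MAX_RESOWNER_LOCKS)
                return;                               /* already overflowed */
            if (owner->nlocks < MAX_RESOWNER_LOCKS)
                owner->locks[owner->nlocks] = locallock;
            owner->nlocks++;                          /* == MAX+1 means overflowed */
        }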
      
      Jeff Janes, reviewed by Amit Kapila and Heikki Linnakangas.
    • Remove incomplete/incorrect support for zero-column foreign keys. · dfd9c116
      Tom Lane authored
      The original coding in ri_triggers.c had partial support for the concept of
      zero-column foreign key constraints.  But this is not defined in the SQL
      standard, nor was it ever allowed by any other part of Postgres, nor was it
      very fully implemented even here (eg there was no support for preventing
      PK-table deletions that would violate the constraint).  Doesn't seem very
      useful to carry 100-plus lines of code for a corner case that no one is
      interested in making work.  Instead, just add a check that the column list
      read from pg_constraint is non-empty.
  7. 20 Jun, 2012 3 commits
    • Increase MAX_SYSCACHE_CALLBACKS from 20 to 32. · 0ce4459a
      Tom Lane authored
      By my count there are 18 callers of CacheRegisterSyscacheCallback in the
      core code in HEAD, so we are potentially leaving as few as 2 slots for any
      add-on code to use (though possibly not all these callers would actually
      activate in any particular session).  That doesn't seem like a lot of
      headroom, so let's pump it up a little.
    • Cache the results of ri_FetchConstraintInfo in a backend-local cache. · 45ba424f
      Tom Lane authored
      Extracting data from pg_constraint turned out to take as much as 10% of the
      runtime in a bulk-update case where the foreign key column wasn't changing,
      because we did it over again for each tuple.  Fix that by maintaining a
      backend-local cache of the results.  This is really a pretty small patch,
      but converting the trigger functions to work with pointers rather than
      local struct variables requires a lot of mechanical changes.
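      A hedged sketch of such a backend-local cache, keyed by constraint OID (names
      and the fill helper are illustrative; the real patch also needs to invalidate
      entries when pg_constraint changes):
      
        static HTAB *ri_constraint_cache = NULL;
        
        /* Sketch only: return cached constraint info, reading pg_constraint
         * just once per constraint per backend. */
        static const RI_ConstraintInfo *
        ri_LoadConstraintInfo(Oid constraintOid)
        {
            RI_ConstraintInfo *riinfo;
            bool        found;
        
            if (ri_constraint_cache == NULL)
            {
                HASHCTL     ctl;
        
                memset(&ctl, 0, sizeof(ctl));
                ctl.keysize = sizeof(Oid);                /* OID is the hash key */
                ctl.entrysize = sizeof(RI_ConstraintInfo);
                ri_constraint_cache = hash_create("RI constraint cache", 64,
                                                  &ctl, HASH_ELEM | HASH_BLOBS);
            }
        
            riinfo = (RI_ConstraintInfo *) hash_search(ri_constraint_cache,
                                                       &constraintOid,
                                                       HASH_ENTER, &found);
            if (!found)
                ri_FillConstraintInfo(riinfo, constraintOid);   /* hypothetical helper */
            return riinfo;
        }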
    • Improve tests for whether we can skip queueing RI enforcement triggers. · cfa0f425
      Tom Lane authored
      During an update of a PK row, we can skip firing the RI trigger if any old
      key value is NULL, because then the row could not have had any matching
      rows in the FK table.  Conversely, during an update of an FK row, the
      outcome is determined if any new key value is NULL.  In either case it
      becomes unnecessary to compare individual key values.
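      A hedged sketch of the skip test (names are illustrative; MATCH PARTIAL
      refinements are ignored):
      
        /* Sketch only: true if any of the listed key columns is NULL in 'tup'.
         * For a PK-side UPDATE, apply this to the *old* tuple's key: a NULL
         * means no FK row can reference it, so the trigger can be skipped.
         * For an FK-side UPDATE, apply it to the *new* tuple's key: a NULL
         * already determines the outcome, so per-column comparison is moot. */
        static bool
        ri_KeysContainNull(HeapTuple tup, TupleDesc desc,
                           const int16 *keyattnums, int nkeys)
        {
            for (int i = 0; i < nkeys; i++)
            {
                bool    isnull;
        
                (void) heap_getattr(tup, keyattnums[i], desc, &isnull);
                if (isnull)
                    return true;
            }
            return false;
        }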
      
      This patch was inspired by discussion of Vik Reykja's patch to use IS NOT
      DISTINCT semantics for the key comparisons.  In the event there is no need
      for that and so this patch looks nothing like his, but he should still get
      credit for having re-opened consideration of the trigger skip logic.
  8. 19 Jun, 2012 5 commits
  9. 18 Jun, 2012 1 commit
    • Allow ON UPDATE/DELETE SET DEFAULT plans to be cached. · e8c9fd5f
      Tom Lane authored
      Once upon a time, somebody was worried that cached RI plans wouldn't get
      remade with new default values after ALTER TABLE ... SET DEFAULT, so they
      didn't allow caching of plans for ON UPDATE/DELETE SET DEFAULT actions.
      That time is long gone, though (and even at the time I doubt this was the
      greatest hazard posed by ALTER TABLE...).  So allow these triggers to cache
      their plans just like the others.
      
      The cache_plan argument to ri_PlanCheck is now vestigial, since there
      are no callers that don't pass "true"; but I left it alone in case there
      is any future need for it.