1. 29 Jun, 2012 3 commits
    • Make init-po and update-po recursive make targets · b344c651
      Peter Eisentraut authored
      This is for convenience, now that adding recursive targets is much
      easier than it used to be when the NLS stuff was initially added.
    • Fix NOTIFY to cope with I/O problems, such as out-of-disk-space. · ae90128d
      Tom Lane authored
      The LISTEN/NOTIFY subsystem got confused if SimpleLruZeroPage failed,
      which would typically happen as a result of a write() failure while
      attempting to dump a dirty pg_notify page out of memory.  Subsequently,
      all attempts to send more NOTIFY messages would fail with messages like
      "Could not read from file "pg_notify/nnnn" at offset nnnnn: Success".
      Only restarting the server would clear this condition.  Per reports from
      Kevin Grittner and Christoph Berg.
      
      Back-patch to 9.0, where the problem was introduced during the
      LISTEN/NOTIFY rewrite.
    • pg_upgrade: fix off-by-one mistake in snprintf · 9e26326a
      Alvaro Herrera authored
      snprintf counts trailing NUL towards the char limit.  Failing to account
      for that was causing an invalid value to be passed to pg_resetxlog -l,
      aborting the upgrade process.
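      To make the snprintf semantics concrete, here is a small standalone C
      illustration of the pitfall (this is not the pg_upgrade code itself; the
      buffer and string are made up):

        #include <stdio.h>
        #include <string.h>

        int
        main(void)
        {
            const char *xlogfile = "000000010000000000000002";  /* 24 chars */
            char        buf[32];

            /* Wrong: the size limit includes the terminating NUL, so this
             * keeps only the first 23 characters of the string. */
            snprintf(buf, strlen(xlogfile), "%s", xlogfile);
            printf("truncated: %s\n", buf);

            /* Right: pass the whole buffer size (or strlen() + 1). */
            snprintf(buf, sizeof(buf), "%s", xlogfile);
            printf("complete:  %s\n", buf);
            return 0;
        }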
  2. 28 Jun, 2012 7 commits
    • Provide MAP_FAILED if sys/mman.h doesn't. · c1494b73
      Tom Lane authored
      On old HPUX this has to be #defined to -1.  It might be that other values
      are required on other dinosaur systems, but we'll worry about that when
      and if we get reports.
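      A minimal sketch of the kind of guard this implies (the exact header or
      location in the tree may differ):

        #include <sys/mman.h>

        /* Old HP-UX (and possibly other dinosaurs) omit this definition. */
        #ifndef MAP_FAILED
        #define MAP_FAILED ((void *) -1)
        #endif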
    • Update outdated comment; xlp_rem_len field is in page header now. · 8f85667a
      Heikki Linnakangas authored
      Spotted by Amit Kapila
    • Further fix install program detection · dcd5af6c
      Peter Eisentraut authored
      The $(or) make function was introduced in GNU make 3.81, so the
      previous coding didn't work in 3.80.  Write it differently, and
      improve the variable naming to make more sense in the new coding.
    • Fix broken mmap failure-detection code, and improve error message. · 39715af2
      Robert Haas authored
      Per an observation by Thom Brown that my previous commit made an
      overly large shmem allocation crash the server on Linux.
    • Dramatically reduce System V shared memory consumption. · b0fc0df9
      Robert Haas authored
      Except when compiling with EXEC_BACKEND, we'll now allocate only a tiny
      amount of System V shared memory (as an interlock to protect the data
      directory) and allocate the rest as anonymous shared memory via mmap.
      This will hopefully spare most users the hassle of adjusting operating
      system parameters before being able to start PostgreSQL with a
      reasonable value for shared_buffers.
      
      There are a bunch of documentation updates needed here, and we might
      need to adjust some of the HINT messages related to shared memory as
      well.  But it's not 100% clear how portable this is, so before we
      write the documentation, let's give it a spin on the buildfarm and
      see what turns red.
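      The core idea can be sketched with plain mmap(); the function name and
      error handling here are hypothetical, and the real code additionally
      keeps a small System V segment as the data-directory interlock and relies
      on the MAP_FAILED check from the failure-detection fix above:

        #include <sys/mman.h>
        #include <errno.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Allocate size bytes of anonymous memory shared with forked children.
         * (MAP_ANONYMOUS is spelled MAP_ANON on some older BSDs.) */
        void *
        alloc_anon_shmem(size_t size)
        {
            void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);

            if (ptr == MAP_FAILED)
            {
                fprintf(stderr, "could not map anonymous shared memory: %s\n",
                        strerror(errno));
                if (errno == ENOMEM)
                    fprintf(stderr, "HINT: consider reducing shared_buffers.\n");
                exit(1);
            }
            return ptr;
        }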
    • Add missing space in event_source GUC description. · c5b3451a
      Robert Haas authored
      This has apparently been wrong since event_source was added.
      
      Alexander Lakhin
    • Make UtilityContainsQuery recurse until it finds a non-utility Query. · bde689f8
      Tom Lane authored
      The callers of UtilityContainsQuery want it to return a non-utility Query
      if it returns anything at all.  However, since we made CREATE TABLE
      AS/SELECT INTO into a utility command instead of a variant of SELECT,
      a command like "EXPLAIN SELECT INTO" results in two nested utility
      statements.  So what we need UtilityContainsQuery to do is drill down
      to the bottom non-utility Query.
      
      I had thought of this possibility in setrefs.c, and fixed it there by
      looping around the UtilityContainsQuery call; but overlooked that the call
      sites in plancache.c have a similar issue.  In those cases it's
      notationally inconvenient to provide an external loop, so let's redefine
      UtilityContainsQuery as recursing down to a non-utility Query instead.
      
      Noted by Rushabh Lathia.  This is a somewhat cleaned-up version of his
      proposed patch.
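      The recursion itself is small; a simplified sketch of the reworked
      function (backend types such as ExplainStmt and CreateTableAsStmt are
      assumed, and details may differ from the committed code):

        Query *
        UtilityContainsQuery(Node *parsetree)
        {
            Query *qry;

            switch (nodeTag(parsetree))
            {
                case T_ExplainStmt:
                    qry = (Query *) ((ExplainStmt *) parsetree)->query;
                    break;
                case T_CreateTableAsStmt:
                    qry = (Query *) ((CreateTableAsStmt *) parsetree)->query;
                    break;
                default:
                    return NULL;
            }

            Assert(IsA(qry, Query));
            /* e.g. EXPLAIN SELECT INTO: drill down to the planned SELECT */
            if (qry->commandType == CMD_UTILITY)
                return UtilityContainsQuery(qry->utilityStmt);
            return qry;
        }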
  3. 27 Jun, 2012 5 commits
  4. 26 Jun, 2012 8 commits
    • Allow pg_terminate_backend() to be used on backends with matching role. · c60ca19d
      Robert Haas authored
      A similar change was made previously for pg_cancel_backend, so now it
      all matches again.
      
      Dan Farina, reviewed by Fujii Masao, Noah Misch, and Jeff Davis,
      with slight kibitzing on the doc changes by me.
    • When LWLOCK_STATS is defined, count spindelays. · b79ab001
      Robert Haas authored
      When LWLOCK_STATS is *not* defined, the only change is that
      SpinLockAcquire now returns the number of delays.
      
      Patch by me, review by Jeff Janes.
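      The usage pattern this return value enables looks roughly like the
      following fragment (the counter and lock names are illustrative, not
      necessarily the exact identifiers in lwlock.c):

        #ifdef LWLOCK_STATS
            /* SpinLockAcquire now reports how many spin delays it needed */
            spin_delay_counts[lockid] += SpinLockAcquire(&lock->mutex);
        #else
            SpinLockAcquire(&lock->mutex);
        #endif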
    • Cope with smaller-than-normal BLCKSZ setting in SPGiST indexes on text. · 75777360
      Tom Lane authored
      The original coding failed miserably for BLCKSZ of 4K or less, as reported
      by Josh Kupershmidt.  With the present design for text indexes, a given
      inner tuple could have up to 256 labels (requiring either 3K or 4K bytes
      depending on MAXALIGN), which means that we can't positively guarantee no
      failures for smaller blocksizes.  But we can at least make it behave sanely
      so long as there are few enough labels to fit on a page.  Considering that
      btree is also more prone to "index tuple too large" failures when BLCKSZ is
      small, it's not clear that we should expend more work than this on this
      case.
    • Make DROP FUNCTION hint more informative. · 0caa0d04
      Robert Haas authored
      If you decide you want to take the hint, this gives you something you
      can paste right back to the server.
      
      Dean Rasheed
    • Reduce use of heavyweight locking inside hash AM. · 76837c15
      Robert Haas authored
      Avoid using LockPage(rel, 0, lockmode) to protect against changes to
      the bucket mapping.  Instead, an exclusive buffer content lock is now
      viewed as sufficient permission to modify the metapage, and a shared
      buffer content lock is used when such modifications need to be
      prevented.  This more relaxed locking regimen makes it possible that,
      when we're busy getting a heavyweight lock on the bucket we intend
      to search or insert into, a bucket split might occur underneath us.
      To compensate for that possibility, we use a loop-and-retry system:
      release the metapage content lock, acquire the heavyweight lock on the
      target bucket, and then reacquire the metapage content lock and check
      that the bucket mapping has not changed.   Normally it hasn't, and
      we're done.  But if by chance it has, we simply unlock the metapage,
      release the heavyweight lock we acquired previously, lock the new
      bucket, and loop around again.  Even in the worst case we cannot loop
      very many times here, since we don't split the same bucket again until
      we've split all the other buckets, and 2^N gets big pretty fast.
      
      This results in greatly improved concurrency, because we're
      effectively replacing two lwlock acquire-and-release cycles in
      exclusive mode (on one of the lock manager locks) with a single
      acquire-and-release cycle in shared mode (on the metapage buffer
      content lock).  Testing shows that it's still not quite as good as
      btree; for that, we'd probably have to find some way of getting rid
      of the heavyweight bucket locks as well, which does not appear
      straightforward.
      
      Patch by me, review by Jeff Janes.
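      In pseudocode, the loop-and-retry dance described above looks roughly
      like this; the LockBuffer() calls are the real buffer content-lock API,
      but the other helpers are illustrative names, not the actual hash AM
      functions:

        for (;;)
        {
            /* read the bucket mapping with only a shared lock on the metapage */
            LockBuffer(metabuf, BUFFER_LOCK_SHARE);
            bucket = compute_target_bucket(metap, hashkey);
            LockBuffer(metabuf, BUFFER_LOCK_UNLOCK);

            /* now take the heavyweight lock on that bucket */
            acquire_bucket_lock(rel, bucket, access);

            /* recheck: did a concurrent split change the mapping? */
            LockBuffer(metabuf, BUFFER_LOCK_SHARE);
            retry = (bucket != compute_target_bucket(metap, hashkey));
            LockBuffer(metabuf, BUFFER_LOCK_UNLOCK);

            if (!retry)
                break;                      /* normal case: mapping unchanged */

            /* rare case: drop the bucket lock and loop around again */
            release_bucket_lock(rel, bucket, access);
        }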
    • Fix pg_upgrade, broken by the xlogid/segno -> 64-bit int refactoring. · 038f3a05
      Heikki Linnakangas authored
      The xlogid + segno representation of a particular WAL segment doesn't make
      much sense in pg_resetxlog anymore, now that we don't use that anywhere
      else. Use the WAL filename instead, since that's a convenient way to name a
      particular WAL segment.
      
      I did this partially for pg_resetxlog in the original xlogid/segno -> uint64
      patch, but I neglected pg_upgrade and the docs. This should now be more
      complete.
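      A WAL file name is just three 8-digit hex fields: timeline, then the
      64-bit segment number split at the 4 GB boundary. A standalone sketch,
      assuming the default 16 MB segment size (the constant names mirror, but
      may not exactly match, the source):

        #include <stdio.h>
        #include <stdint.h>

        #define XLOG_SEG_SIZE          (16 * 1024 * 1024)           /* 16 MB */
        #define XLogSegmentsPerXLogId  (UINT64_C(0x100000000) / XLOG_SEG_SIZE)

        int
        main(void)
        {
            unsigned  tli = 1;        /* timeline ID */
            uint64_t  segno = 259;    /* flat 64-bit segment number */
            char      fname[32];

            snprintf(fname, sizeof(fname), "%08X%08X%08X",
                     tli,
                     (unsigned) (segno / XLogSegmentsPerXLogId),
                     (unsigned) (segno % XLogSegmentsPerXLogId));
            printf("%s\n", fname);    /* prints 000000010000000100000003 */
            return 0;
        }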
    • Make pg_dump emit more accurate dependency information. · 8a504a36
      Tom Lane authored
      While pg_dump has included dependency information in archive-format output
      ever since 7.3, it never made any large effort to ensure that that
      information was actually useful.  In particular, in common situations where
      dependency chains include objects that aren't separately emitted in the
      dump, the dependencies shown for objects that were emitted would reference
      the dump IDs of these un-dumped objects, leaving no clue about which other
      objects the visible objects indirectly depend on.  So far, parallel
      pg_restore has managed to avoid tripping over this misfeature, but only
      by dint of some crude hacks like not trusting dependency information in
      the pre-data section of the archive.
      
      It seems prudent to do something about this before it rises up to bite us,
      so instead of emitting the "raw" dependencies of each dumped object,
      recursively search for its actual dependencies among the subset of objects
      that are being dumped.
      
      Back-patch to 9.2, since that code hasn't yet diverged materially from
      HEAD.  At some point we might need to back-patch further, but right now
      there are no known cases where this is actively necessary.  (The one known
      case, bug #6699, is fixed in a different way by my previous patch.)  Since
      this patch depends on 9.2 changes that made TOC entries be marked before
      output commences as to whether they'll be dumped, back-patching further
      would require additional surgery; and as of now there's no evidence that
      it's worth the risk.
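      This is not the pg_dump code, but a tiny self-contained illustration of
      the remapping idea: when a raw dependency points at an object that is
      not being dumped, recurse through it and report the nearest dumped
      ancestors instead.

        #include <stdio.h>
        #include <stdbool.h>

        #define MAXDEPS 4

        typedef struct Obj
        {
            const char *name;
            bool        dumped;         /* will this object appear in the dump? */
            int         ndeps;
            int         deps[MAXDEPS];  /* raw dependencies (indexes into objs) */
        } Obj;

        static Obj objs[] = {
            {"extension", true,  0, {0}},   /* dumped, no dependencies */
            {"schema",    false, 1, {0}},   /* NOT dumped; depends on extension */
            {"table",     true,  1, {1}},   /* dumped; raw dep on the schema */
            {"view",      true,  1, {2}},   /* dumped; raw dep on the table */
        };

        /* Emit the nearest dumped objects that 'id' transitively depends on,
         * recursing straight through objects that are not dumped.  (The real
         * code must also defend against dependency loops.) */
        static void
        emit_dumpable_deps(int id)
        {
            for (int i = 0; i < objs[id].ndeps; i++)
            {
                int dep = objs[id].deps[i];

                if (objs[dep].dumped)
                    printf("    -> %s\n", objs[dep].name);
                else
                    emit_dumpable_deps(dep);
            }
        }

        int
        main(void)
        {
            for (int id = 0; id < (int) (sizeof(objs) / sizeof(objs[0])); id++)
            {
                if (!objs[id].dumped)
                    continue;
                printf("%s depends on:\n", objs[id].name);
                emit_dumpable_deps(id);     /* the table now shows "extension" */
            }
            return 0;
        }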
    • Improve pg_dump's dependency-sorting logic to enforce section dump order. · a1ef01fe
      Tom Lane authored
      As of 9.2, with the --section option, it is very important that the concept
      of "pre data", "data", and "post data" sections of the output be honored
      strictly; else a dump divided into separate sectional files might be
      unrestorable.  However, the dependency-sorting logic knew nothing of
      sections and would happily select output orderings that didn't fit that
      structure.  Doing so was mostly harmless before 9.2, but now we need to be
      sure it doesn't do that.  To fix, create dummy objects representing the
      section boundaries and add dependencies between them and all the normal
      objects.  (This might sound expensive but it seems to only add a percent or
      two to pg_dump's runtime.)
      
      This also fixes a problem introduced in 9.1 by the feature that allows
      incomplete GROUP BY lists when a primary key is given in GROUP BY.
      That means that views can depend on primary key constraints.  Previously,
      pg_dump would deal with that by simply emitting the primary key constraint
      before the view definition (and hence before the data section of the
      output).  That's bad enough for simple serial restores, where creating an
      index before the data is loaded works, but is undesirable for speed
      reasons.  But it could lead to outright failure of parallel restores, as
      seen in bug #6699 from Joe Van Dyk.  That happened because pg_restore would
      switch into parallel mode as soon as it reached the constraint, and then
      very possibly would try to emit the view definition before the primary key
      was committed (as a consequence of another bug that causes the view not to
      be correctly marked as depending on the constraint).  Adding the section
      boundary constraints forces the dependency-sorting code to break the view
      into separate table and rule declarations, allowing the rule, and hence the
      primary key constraint it depends on, to revert to their intended location
      in the post-data section.  This also somewhat accidentally works around the
      bogus-dependency-marking problem, because the rule will be correctly shown
      as depending on the constraint, so parallel pg_restore will now do the
      right thing.  (We will fix the bogus-dependency problem for real in a
      separate patch, but that patch is not easily back-portable to 9.1, so the
      fact that this patch is enough to dodge the only known symptom is
      fortunate.)
      
      Back-patch to 9.1, except for the hunk that adds verification that the
      finished archive TOC list is in correct section order; the place where
      it was convenient to add that doesn't exist in 9.1.
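      A schematic fragment of the boundary-object idea; the names are patterned
      on pg_dump's internals but simplified, so the committed code differs in
      detail.  addObjectDependency(a, b) means "a depends on b", i.e. b must
      sort first.

        DumpableObject *preDataBound  = createBoundaryObject();
        DumpableObject *postDataBound = createBoundaryObject();

        /* the post-data boundary itself sorts after the pre-data boundary */
        addObjectDependency(postDataBound, preDataBound->dumpId);

        for (i = 0; i < numObjs; i++)
        {
            DumpableObject *dobj = dobjs[i];

            switch (dobj->section)
            {
                case SECTION_PRE_DATA:      /* sorts before the pre-data boundary */
                    addObjectDependency(preDataBound, dobj->dumpId);
                    break;
                case SECTION_DATA:          /* sandwiched between the boundaries */
                    addObjectDependency(dobj, preDataBound->dumpId);
                    addObjectDependency(postDataBound, dobj->dumpId);
                    break;
                case SECTION_POST_DATA:     /* sorts after the post-data boundary */
                    addObjectDependency(dobj, postDataBound->dumpId);
                    break;
            }
        }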
  5. 25 Jun, 2012 6 commits
  6. 24 Jun, 2012 9 commits
    • Replace int2/int4 in C code with int16/int32 · b8b2e3b2
      Peter Eisentraut authored
      The latter was already the dominant use, and it's preferable because
      in C the convention is that intXX means XX bits.  Therefore, allowing
      mixed use of int2, int4, int8, int16, int32 is obviously confusing.
      
      Remove the typedefs for int2 and int4 for now.  They don't seem to be
      widely used outside of the PostgreSQL source tree, and the few uses
      can probably be cleaned up by the time this ships.
    • Use UINT64CONST for 64-bit integer constants. · 0687a260
      Heikki Linnakangas authored
      Peter Eisentraut advised me that UINT64CONST is the proper way to do that,
      not LL suffix.
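      Usage is simply UINT64CONST(...) around the literal; the macro, defined
      in c.h, appends the suffix appropriate for the platform, so callers never
      hard-code LL or ULL themselves:

        uint64 high32_mask = UINT64CONST(0xFFFFFFFF00000000);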
    • Oops. Remove stray paren. · a218e23a
      Heikki Linnakangas authored
      I didn't notice this on my laptop as I don't HAVE_FSYNC_WRITETHROUGH.
    • Use LL suffix for 64-bit constants. · 96ff85e2
      Heikki Linnakangas authored
      Per warning from buildfarm member 'locust'. At least I think this is what's
      making it upset.
    • Replace XLogRecPtr struct with a 64-bit integer. · 0ab9d1c4
      Heikki Linnakangas authored
      This simplifies code that needs to do arithmetic on XLogRecPtrs.
      
      To avoid changing on-disk format of data pages, the LSN on data pages is
      still stored in the old format. That should keep pg_upgrade happy. However,
      we have XLogRecPtrs embedded in the control file, and in the structs that
      are sent over the replication protocol, so this change breaks compatibility
      between pg_basebackup and the server. I didn't do anything about this in this
      patch; per discussion on -hackers, the right thing to do would be to change the
      replication protocol to be architecture-independent, so that you could use
      a newer version of pg_receivexlog, for example, against an older server
      version.
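      Schematically, the change is the following (old struct shown with
      abridged comments; details simplified):

        /* before: two 32-bit halves, awkward to compare and advance */
        typedef struct XLogRecPtr
        {
            uint32  xlogid;     /* log file #, each covering 4 GB */
            uint32  xrecoff;    /* byte offset within that log file */
        } XLogRecPtr;

        /* after: a plain 64-bit byte position in the WAL stream */
        typedef uint64 XLogRecPtr;

        /* arithmetic and comparisons become ordinary integer operations,
         * e.g. uint64 bytes = end_lsn - start_lsn; no xlogid/xrecoff carry */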
    • Allow WAL record header to be split across pages. · 061e7efb
      Heikki Linnakangas authored
      This saves a few bytes of WAL space, but the real motivation is to make it
      predictable how much WAL space a record requires, as it no longer depends
      on whether we need to waste the last few bytes at end of WAL page because
      the header doesn't fit.
      
      The total length field of WAL record, xl_tot_len, is moved to the beginning
      of the WAL record header, so that it is still always found on the first page
      where a WAL record begins.
      
      Bump WAL version number again as this is an incompatible change.
    • Move WAL continuation record information to WAL page header. · 20ba5ca6
      Heikki Linnakangas authored
      The continuation record only contained one field, xl_rem_len, so it makes
      things simpler to just include it in the WAL page header. This wastes four
      bytes on pages that don't begin with a continuation from the previous page, plus
      four bytes on every page, because of padding.
      
      The motivation of this is to make it easier to calculate how much space a
      WAL record needs. Before this patch, it depended on how many page boundaries
      the record crosses. The motivation of that, in turn, is to separate the
      allocation of space in the WAL from the copying of the record data to the
      allocated space. Keeping the calculation of space required simple helps to
      keep the critical section of allocating the space from WAL short. But that's
      not included in this patch yet.
      
      Bump WAL version number again, as this is an incompatible change.
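      After this change the WAL page header looks roughly like the following
      (comments abridged; see xlog_internal.h for the authoritative layout):

        typedef struct XLogPageHeaderData
        {
            uint16      xlp_magic;      /* magic value for correctness checks */
            uint16      xlp_info;       /* flag bits */
            TimeLineID  xlp_tli;        /* TimeLineID of first record on page */
            XLogRecPtr  xlp_pageaddr;   /* XLOG address of this page */

            /* Number of bytes of the record continued from the previous page,
             * or 0 if the page does not start with a continuation. */
            uint32      xlp_rem_len;
        } XLogPageHeaderData;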
    • Don't waste the last segment of each 4GB logical log file. · dfda6eba
      Heikki Linnakangas authored
      The comments claimed that wasting the last segment made it easier to do
      calculations with XLogRecPtrs, because you don't have problems representing
      last-byte-position-plus-1 that way. In my experience, however, it only made
      things more complicated, because there were two ways to represent the
      boundary at the beginning of a logical log file: xlogid = n+1 and xrecoff = 0,
      or xlogid = n and xrecoff = 4GB - XLOG_SEG_SIZE. Some functions were
      picky about which representation was used.
      
      Also, use a 64-bit segment number instead of the log/seg combination, to
      point to a certain WAL segment. We assume that all platforms have a working
      64-bit integer type nowadays.
      
      This is an incompatible change in WAL format, so bumping WAL version number.
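      With 16 MB segments, the mapping between the old (xlogid, xrecoff) pair
      and the flat 64-bit representation is simple arithmetic; a standalone
      sketch (constant names mirror, but may not exactly match, the source):

        #include <stdio.h>
        #include <stdint.h>

        #define XLOG_SEG_SIZE          (16 * 1024 * 1024)           /* 16 MB */
        #define XLogSegmentsPerXLogId  (UINT64_C(0x100000000) / XLOG_SEG_SIZE)

        int
        main(void)
        {
            uint32_t xlogid = 7, xrecoff = 48 * 1024 * 1024;  /* old-style pair */

            /* one flat 64-bit segment number ... */
            uint64_t segno = (uint64_t) xlogid * XLogSegmentsPerXLogId
                             + xrecoff / XLOG_SEG_SIZE;

            /* ... and a flat 64-bit byte position (the new XLogRecPtr) */
            uint64_t recptr = (uint64_t) xlogid * UINT64_C(0x100000000) + xrecoff;

            printf("segno = %llu, segno from recptr = %llu\n",
                   (unsigned long long) segno,
                   (unsigned long long) (recptr / XLOG_SEG_SIZE));
            return 0;
        }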
  7. 22 Jun, 2012 2 commits