1. 30 Nov, 2013 6 commits
    • Peter Eisentraut's avatar
      doc: Simplify handling of variablelists in XSLT build · 1eafea5d
      Peter Eisentraut authored
      The previously used custom template is no longer necessary because
      parameters provided by the standard style sheet can achieve the same
      outcome.
      1eafea5d
    • Alvaro Herrera's avatar
      Fix a couple of bugs in MultiXactId freezing · 2393c7d1
      Alvaro Herrera authored
      Both heap_freeze_tuple() and heap_tuple_needs_freeze() neglected to look
      into a multixact to check the members against cutoff_xid.  This means
      that a very old Xid could survive hidden within a multi, possibly
      outliving its CLOG storage.  In the distant future, this would cause
      clog lookup failures:
      ERROR:  could not access status of transaction 3883960912
      DETAIL:  Could not open file "pg_clog/0E78": No such file or directory.
      
      This was mostly problematic when the updating transaction aborted, since
      in that case the row wouldn't get pruned away earlier in vacuum and the
      multixact could possibly survive for a long time.  In many cases, data
      that is inaccessible for this reason can be brought back
      heuristically.
      
      As a second bug, heap_freeze_tuple() didn't properly handle multixacts
      that need to be frozen according to cutoff_multi, but whose updater xid
      is still alive.  Instead of preserving the update Xid, it just set Xmax
      invalid, which leads to both old and new tuple versions becoming
      visible.  This is pretty rare in practice, but a real threat
      nonetheless.  Existing corrupted rows, unfortunately, cannot be repaired
      in an automated fashion.
      
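      As a rough sketch (not the committed patch; multi and cutoff_xid are
      illustrative names, and argument lists vary by branch), the kind of
      member check that was missing looks like this, using the real
      GetMultiXactIdMembers() and TransactionIdPrecedes() routines:
      
          /* Does any member of this multi precede the freeze cutoff? */
          MultiXactMember *members;
          int         nmembers = GetMultiXactIdMembers(multi, &members, false);
          int         i;
      
          for (i = 0; i < nmembers; i++)
          {
              if (TransactionIdPrecedes(members[i].xid, cutoff_xid))
              {
                  /*
                   * This member is older than cutoff_xid; it must be frozen
                   * (or, for a still-live updater, preserved as a plain Xmax)
                   * rather than left hidden inside the multi.
                   */
              }
          }
          if (nmembers > 0)
              pfree(members);
      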
      Existing physical replicas might have already incorrectly frozen tuples
      because of different behavior than in master, which might only become
      apparent in the future once pg_multixact/ is truncated; it is
      recommended that all clones be rebuilt after upgrading.
      
      Follows code analysis prompted by a bug report from J Smith in message
      CADFUPgc5bmtv-yg9znxV-vcfkb+JPRqs7m2OesQXaM_4Z1JpdQ@mail.gmail.com
      and by a private report from F-Secure.
      
      Backpatch to 9.3, where freezing of MultiXactIds was introduced.
      
      Analysis and patch by Andres Freund, with some tweaks by Álvaro.
      2393c7d1
    • Alvaro Herrera's avatar
      Don't TransactionIdDidAbort in HeapTupleGetUpdateXid · 1ce150b7
      Alvaro Herrera authored
      It is dangerous to do so, because some code expects to be able to see
      what the true Xmax is even if it is aborted (particularly while
      traversing HOT chains).  So don't do it, and instead rely on the callers
      to check for abortedness, if necessary.
      
      Several race conditions and bugs fixed in the process.  One isolation test
      changes the expected output due to these.
      
      This also reverts commit c235a6a5, which is no longer necessary.
      
      Backpatch to 9.3, where this function was introduced.
      
      Andres Freund
      1ce150b7
    • Alvaro Herrera's avatar
      Truncate pg_multixact/'s contents during crash recovery · 1df0122d
      Alvaro Herrera authored
      Commit 9dc842f0, from the 8.2 era, prevented MultiXact truncation during
      crash recovery, because there was no guarantee that enough state had been
      set up, and because it wasn't deemed a good idea to remove data
      during crash recovery anyway.  Since then, due to Hot-Standby, streaming
      replication and PITR, the amount of time a cluster can spend doing crash
      recovery has increased significantly, to the point that a cluster may
      even never come out of it.  This makes it no longer defensible to skip
      truncating the contents of pg_multixact/.
      
      To fix, take care to set up enough state for multixact truncation before
      crash recovery starts (easy since checkpoints contain the required
      information), and move the current end-of-recovery actions to a new
      TrimMultiXact() function, analogous to TrimCLOG().
      
      At some later point, this should probably be done similarly to the way
      clog.c is doing it, which is to just WAL log truncations, but we can't
      do that for the back branches.
      
      Back-patch to 9.0.  8.4 also has the problem, but since there's no hot
      standby there, it's much less pressing.  In 9.2 and earlier, this patch
      is simpler than in newer branches, because multixact access during
      recovery isn't required.  Add appropriate checks to make sure that's not
      happening.
      
      Andres Freund
      1df0122d
    • Alvaro Herrera's avatar
      Fix full-table-vacuum request mechanism for MultiXactIds · f54106f7
      Alvaro Herrera authored
      While autovacuum dutifully launched anti-multixact-wraparound vacuums
      when the multixact "age" was reached, the vacuum code was not aware that
      it needed to make them be full table vacuums.  As the resulting
      partial-table vacuums aren't capable of actually increasing relminmxid,
      autovacuum continued to launch anti-wraparound vacuums that didn't have
      the intended effect, until the age of relfrozenxid caused the vacuum to
      finally be a full table one via vacuum_freeze_table_age.
      
      To fix, introduce logic for multixacts similar to that for plain
      TransactionIds, using the same GUCs.
      
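      In outline (a simplified sketch, not the committed code;
      xidFullScanLimit and multiFullScanLimit are hypothetical names for the
      thresholds derived from vacuum_freeze_table_age), the full-table
      decision now also considers multixact age:
      
          bool        scan_all;
      
          /* Plain-XID test, as before */
          scan_all = TransactionIdPrecedesOrEquals(onerel->rd_rel->relfrozenxid,
                                                   xidFullScanLimit);
          /* New: the same kind of test against relminmxid */
          if (MultiXactIdPrecedes(onerel->rd_rel->relminmxid, multiFullScanLimit))
              scan_all = true;
      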
      Backpatch to 9.3, where permanent MultiXactIds were introduced.
      
      Andres Freund, some cleanup by Álvaro
      f54106f7
    • Alvaro Herrera's avatar
      Replace hardcoded 200000000 with autovacuum_freeze_max_age · 76a31c68
      Alvaro Herrera authored
      Parts of the code used autovacuum_freeze_max_age to determine whether
      anti-multixact-wraparound vacuums are necessary, while others used a
      hardcoded 200000000 value.  This leads to problems when
      autovacuum_freeze_max_age is set to a non-default value.  Use the GUC
      everywhere.
      
      Backpatch to 9.3, where vacuuming of multixacts was introduced.
      
      Andres Freund
      76a31c68
  2. 29 Nov, 2013 6 commits
    • Tom Lane's avatar
      Fix assorted issues in pg_ctl's pgwin32_CommandLine(). · 79193c75
      Tom Lane authored
      Ensure that the invocation command for postgres or pg_ctl runservice
      double-quotes the executable's pathname; failure to do this leads to
      trouble when the path contains spaces.
      
      Also, ensure that the path ends in ".exe" in both cases and uses
      backslashes rather than slashes as directory separators.  The latter issue
      is reported to confuse some third-party tools such as Symantec Backup Exec.
      
      Also, rewrite the function to avoid buffer overrun issues by using a
      PQExpBuffer instead of a fixed-size static buffer.  Combinations of
      very long executable pathnames and very long data directory pathnames
      could have caused trouble before, for example.
      
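      A hedged sketch of the quoting idea (PQExpBuffer and its append
      functions are real; the variable names and the exact option list are
      illustrative):
      
          PQExpBufferData cmdLine;
      
          initPQExpBuffer(&cmdLine);
          /* Double-quote the executable path so embedded spaces survive;
             the path is assumed to already end in ".exe" with backslashes. */
          appendPQExpBuffer(&cmdLine, "\"%s\"", exec_path);
          appendPQExpBuffer(&cmdLine, " runservice -N \"%s\"", service_name);
      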
      Back-patch to all active branches, since this code has been like this
      for a long while.
      
      Naoya Anzai and Tom Lane, reviewed by Rajeev Rastogi
      79193c75
    • Tom Lane's avatar
      Be sure to release proc->backendLock after SetupLockInTable() failure. · 8b151558
      Tom Lane authored
      The various places that transferred fast-path locks to the main lock table
      neglected to release the PGPROC's backendLock if SetupLockInTable failed
      due to being out of shared memory.  In most cases this is no big deal since
      ensuing error cleanup would release all held LWLocks anyway.  But there are
      some hot-standby functions that don't consider failure of
      FastPathTransferRelationLocks to be a hard error, and in those cases this
      oversight could lead to system lockup.  For consistency, make all of these
      places look the same as FastPathTransferRelationLocks.
      
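      Schematically (a simplified sketch, not the exact committed hunk), each
      transfer site now releases the per-backend lock before reporting the
      failure:
      
          proclock = SetupLockInTable(lockMethodTable, proc, locktag,
                                      hashcode, lockmode);
          if (!proclock)
          {
              /* Out of shared memory: don't leave backendLock held. */
              LWLockRelease(proc->backendLock);
              ereport(ERROR,
                      (errcode(ERRCODE_OUT_OF_MEMORY),
                       errmsg("out of shared memory")));
          }
      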
      Noted while looking for the cause of Dan Wood's bugs --- this wasn't it,
      but it's a bug anyway.
      8b151558
    • Tom Lane's avatar
      Fix assorted race conditions in the new timeout infrastructure. · 16e1b7a1
      Tom Lane authored
      Prevent handle_sig_alarm from losing control partway through due to a query
      cancel (either an asynchronous SIGINT, or a cancel triggered by one of the
      timeout handler functions).  That would at least result in failure to
      schedule any required future interrupt, and might result in actual
      corruption of timeout.c's data structures, if the interrupt happened while
      we were updating those.
      
      We could still lose control if an asynchronous SIGINT arrives just as the
      function is entered.  This wouldn't break any data structures, but it would
      have the same effect as if the SIGALRM interrupt had been silently lost:
      we'd not fire any currently-due handlers, nor schedule any new interrupt.
      To forestall that scenario, forcibly reschedule any pending timer interrupt
      during AbortTransaction and AbortSubTransaction.  We can avoid any extra
      kernel call in most cases by not doing that until we've allowed
      LockErrorCleanup to kill the DEADLOCK_TIMEOUT and LOCK_TIMEOUT events.
      
      Another hazard is that some platforms (at least Linux and *BSD) block a
      signal before calling its handler and then unblock it on return.  When we
      longjmp out of the handler, the unblock doesn't happen, and the signal is
      left blocked indefinitely.  Again, we can fix that by forcibly unblocking
      signals during AbortTransaction and AbortSubTransaction.
      
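      A minimal sketch of the unblocking step (the committed code uses the
      backend's existing signal-mask machinery; this illustrates the idea
      with plain POSIX calls):
      
          sigset_t    unblock;
      
          /* If we longjmp'd out of a handler, SIGALRM/SIGINT may still be
             blocked; clear them explicitly during abort processing. */
          sigemptyset(&unblock);
          sigaddset(&unblock, SIGALRM);
          sigaddset(&unblock, SIGINT);
          sigprocmask(SIG_UNBLOCK, &unblock, NULL);
      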
      These latter two problems do not manifest when the longjmp reaches
      postgres.c, because the error recovery code there kills all pending timeout
      events anyway, and it uses sigsetjmp(..., 1) so that the appropriate signal
      mask is restored.  So errors thrown outside any transaction should be OK
      already, and cleaning up in AbortTransaction and AbortSubTransaction should
      be enough to fix these issues.  (We're assuming that any code that catches
      a query cancel error and doesn't re-throw it will do at least a
      subtransaction abort to clean up; but that was pretty much required already
      by other subsystems.)
      
      Lastly, ProcSleep should not clear the LOCK_TIMEOUT indicator flag when
      disabling that event: if a lock timeout interrupt happened after the lock
      was granted, the ensuing query cancel is still going to happen at the next
      CHECK_FOR_INTERRUPTS, and we want to report it as a lock timeout not a user
      cancel.
      
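      Roughly, the ProcSleep change amounts to keeping the indicator when the
      event is disabled (a sketch; disable_timeout() is the real timeout.c
      entry point):
      
          /* Keep the LOCK_TIMEOUT indicator so a pending cancel is still
             reported as a lock timeout rather than a user cancel. */
          disable_timeout(LOCK_TIMEOUT, true /* keep_indicator */);
      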
      Per reports from Dan Wood.
      
      Back-patch to 9.3 where the new timeout handling infrastructure was
      introduced.  We may at some point decide to back-patch the signal
      unblocking changes further, but I'll desist from that until we hear
      actual field complaints about it.
      16e1b7a1
    • Robert Haas's avatar
      Refine our definition of what constitutes a system relation. · 8e18d04d
      Robert Haas authored
      Although user-defined relations can't be directly created in
      pg_catalog, it's possible for them to end up there, because you can
      create them in some other schema and then use ALTER TABLE .. SET SCHEMA
      to move them there.  Previously, such relations couldn't afterwards
      be manipulated, because IsSystemRelation()/IsSystemClass() rejected
      all attempts to modify objects in the pg_catalog schema, regardless
      of their origin.  With this patch, they now reject only those
      objects in pg_catalog which were created at initdb-time, allowing
      most operations on user-created tables in pg_catalog to proceed
      normally.
      
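      A simplified sketch of the narrower test (is_builtin_catalog is a
      hypothetical helper; PG_CATALOG_NAMESPACE and FirstNormalObjectId are
      real constants):
      
          static bool
          is_builtin_catalog(Oid relid, Oid relnamespace)
          {
              /* Only objects in pg_catalog are candidates at all... */
              if (relnamespace != PG_CATALOG_NAMESPACE)
                  return false;
              /* ...and only those whose OIDs were assigned at initdb time. */
              return relid < FirstNormalObjectId;
          }
      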
      This patch also adds new functions IsCatalogRelation() and
      IsCatalogClass(), which are similar to IsSystemRelation() and
      IsSystemClass() but with a slightly narrower definition: only TOAST
      tables of system catalogs are included, rather than *all* TOAST tables.
      This is currently used only for making decisions about when
      invalidation messages need to be sent, but upcoming logical decoding
      patches will find other uses for this information.
      
      Andres Freund, with some modifications by me.
      8e18d04d
  3. 28 Nov, 2013 11 commits
    • Heikki Linnakangas's avatar
      Another gin_desc fix. · 2fe69cac
      Heikki Linnakangas authored
      The number of items inserted was incorrectly printed as if it was a boolean.
      2fe69cac
    • Heikki Linnakangas's avatar
      Fix gin_desc routine to match the WAL format. · 97c19e6c
      Heikki Linnakangas authored
      In the GIN incomplete-splits patch, I used BlockIdDatas to store the block
      numbers of the left and right children when inserting a downlink into an
      internal page after a posting tree page split.  But gin_desc thought they
      were stored as BlockNumbers.
      97c19e6c
    • Tom Lane's avatar
      Fix latent(?) race condition in LockReleaseAll. · da8a7160
      Tom Lane authored
      We have for a long time checked the head pointer of each of the backend's
      proclock lists and skipped acquiring the corresponding locktable partition
      lock if the head pointer was NULL.  This was safe enough in the days when
      proclock lists were changed only by the owning backend, but it is pretty
      questionable now that the fast-path patch added cases where backends add
      entries to other backends' proclock lists.  However, we don't really wish
      to revert to locking each partition lock every time, because in simple
      transactions that would add a lot of useless lock/unlock cycles on
      already-heavily-contended LWLocks.  Fortunately, the only way that another
      backend could be modifying our proclock list at this point would be if it
      was promoting a formerly fast-path lock of ours; and any such lock must be
      one that we'd decided not to delete in the previous loop over the locallock
      table.  So it's okay if we miss seeing it in this loop; we'd just decide
      not to delete it again.  However, once we've detected a non-empty list,
      we'd better re-fetch the list head pointer after acquiring the partition
      lock.  This guards against possibly fetching a corrupt-but-non-null pointer
      if pointer fetch/store isn't atomic.  It's not clear if any practical
      architectures are like that, but we've never assumed that before and don't
      wish to start here.  In any case, the situation certainly deserves a code
      comment.
      
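      The resulting pattern, roughly (names such as procLocks and
      partitionLock are illustrative):
      
          if (SHMQueueEmpty(procLocks))
              continue;           /* cheap unlocked test: nothing to release */
      
          LWLockAcquire(partitionLock, LW_EXCLUSIVE);
      
          /* Re-test under the lock: the unlocked read above could have seen
             a torn pointer if pointer fetch/store isn't atomic. */
          if (SHMQueueEmpty(procLocks))
          {
              LWLockRelease(partitionLock);
              continue;
          }
      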
      While at it, refactor the partition traversal loop to use a for() construct
      instead of a while() loop with goto's.
      
      Back-patch, just in case the risk is real and not hypothetical.
      da8a7160
    • Alvaro Herrera's avatar
      Unbreak buildfarm · d51a8c52
      Alvaro Herrera authored
      I removed an intermediate commit before pushing and forgot to test the
      resulting tree :-(
      d51a8c52
    • Alvaro Herrera's avatar
      Use a more granular approach to follow update chains · 247c76a9
      Alvaro Herrera authored
      Instead of simply checking the KEYS_UPDATED bit, we need to check
      whether each lock held on the future version of the tuple conflicts with
      the lock we're trying to acquire.
      
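      In outline (a sketch; LOCKMODE_from_mxstatus is the heapam.c macro, and
      the surrounding variables are illustrative), the check walks the
      members of the next version's Xmax multi:
      
          for (i = 0; i < nmembers; i++)
          {
              LOCKMODE    memlockmode = LOCKMODE_from_mxstatus(members[i].status);
      
              /* Compare each held lock against the mode we want, instead of
                 looking only at the KEYS_UPDATED bit. */
              if (DoLockModesConflict(memlockmode,
                                      LOCKMODE_from_mxstatus(wantedstatus)))
              {
                  /* conflict: must wait for (or honour) this locker */
              }
          }
      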
      Per bug report #8434 by Tomonari Katsumata
      247c76a9
    • Alvaro Herrera's avatar
      Compare Xmin to previous Xmax when locking an update chain · e4828e9c
      Alvaro Herrera authored
      Not doing so causes us to traverse an update chain that has been broken
      by concurrent page pruning.  All other code that traverses update chains
      uses this check as one of the cases in which to stop iterating, so
      replicate it here too.  Failure to do so leads to erroneous CLOG,
      subtrans or multixact lookups.
      
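      The guard in question, roughly as it appears elsewhere in the
      chain-following code (a sketch):
      
          /* If the next tuple's Xmin doesn't match the Xmax we followed to
             get here, the chain was broken by pruning: stop walking it. */
          if (TransactionIdIsValid(priorXmax) &&
              !TransactionIdEquals(HeapTupleHeaderGetXmin(tuple->t_data),
                                   priorXmax))
              break;
      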
      Per discussion following the bug report by J Smith in
      CADFUPgc5bmtv-yg9znxV-vcfkb+JPRqs7m2OesQXaM_4Z1JpdQ@mail.gmail.com
      as diagnosed by Andres Freund.
      e4828e9c
    • Alvaro Herrera's avatar
      Don't try to set InvalidXid as page pruning hint · c235a6a5
      Alvaro Herrera authored
      If a transaction updates/deletes a tuple just before aborting, and a
      concurrent transaction tries to prune the page concurrently, the pruner
      may see HeapTupleSatisfiesVacuum return HEAPTUPLE_DELETE_IN_PROGRESS,
      but a later call to HeapTupleGetUpdateXid() may return InvalidXid.  This
      would cause an assertion failure in development builds, but would be
      otherwise Mostly Harmless.
      
      Fix by checking whether the updater Xid is valid before trying to apply
      it as the page prune point.
      
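      Roughly (a sketch of the guard, with htup being the tuple header under
      examination):
      
          xid = HeapTupleGetUpdateXid(htup);
          /* Only a valid updater XID may be used as the prune hint. */
          if (TransactionIdIsValid(xid))
              heap_prune_record_prunable(prstate, xid);
      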
      Reported by Andres in 20131124000203.GA4403@alap2.anarazel.de
      c235a6a5
    • Alvaro Herrera's avatar
      Cope with heap_fetch failure while locking an update chain · e518fa7a
      Alvaro Herrera authored
      The reason for the fetch failure is that the tuple was removed because
      it was dead; so the failure is innocuous and can be ignored.  Moreover,
      there's no need for further work and we can return success to the caller
      immediately.  EvalPlanQualFetch is doing something very similar to this
      already.
      
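      Schematically (a sketch; the actual call site and arguments differ in
      detail):
      
          /* If the tuple has vanished because it was dead and got pruned,
             there is nothing left to lock; treat it as success. */
          if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
              return HeapTupleMayBeUpdated;
      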
      Report and test case from Andres Freund in
      20131124000203.GA4403@alap2.anarazel.de
      e518fa7a
    • Bruce Momjian's avatar
      pg_buffercache docs: adjust order of fields · 9ef780d4
      Bruce Momjian authored
      Adjust order of fields to match view order.
      
      Jaime Casanova
      9ef780d4
    • Peter Eisentraut's avatar
      doc: Put data types in alphabetical order · a607b690
      Peter Eisentraut authored
      From: Andreas Karlsson <andreas@proxel.se>
      a607b690
  4. 27 Nov, 2013 13 commits
    • Tom Lane's avatar
      Fix stale-pointer problem in fast-path locking logic. · 7db285af
      Tom Lane authored
      When acquiring a lock in fast-path mode, we must reset the locallock
      object's lock and proclock fields to NULL.  They are not necessarily that
      way to start with, because the locallock could be left over from a failed
      lock acquisition attempt earlier in the transaction.  Failure to do this
      led to all sorts of interesting misbehaviors when LockRelease tried to
      clean up no-longer-related lock and proclock objects in shared memory.
      Per report from Dan Wood.
      
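      The essential fix, schematically:
      
          /* The locallock may be left over from a failed acquisition earlier
             in the transaction, so clear any stale shared-memory pointers
             before taking the fast path. */
          locallock->lock = NULL;
          locallock->proclock = NULL;
      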
      In passing, modify LockRelease to elog not just Assert if it doesn't find
      lock and proclock objects for a formerly fast-path lock, matching the code
      in FastPathGetRelationLockEntry and LockRefindAndRelease.  This isn't a
      bug but it will help in diagnosing any future bugs in this area.
      
      Also, modify FastPathTransferRelationLocks and FastPathGetRelationLockEntry
      to break out of their loops over the fastpath array once they've found the
      sole matching entry.  This was inconsistently done in some search loops
      and not others.
      
      Improve assorted related comments, too.
      
      Back-patch to 9.2 where the fast-path mechanism was introduced.
      7db285af
    • Kevin Grittner's avatar
      Minor correction of READ COMMITTED isolation level docs. · 89ba8150
      Kevin Grittner authored
      Per report from AK
      89ba8150
    • Tom Lane's avatar
      Minor corrections in lmgr/README. · 8c84803e
      Tom Lane authored
      Correct an obsolete statement that no backend touches another backend's
      PROCLOCK lists.  This was probably wrong even when written (the deadlock
      checker looks at everybody's lists), and it's certainly quite wrong now
      that fast-path locking can require creation of lock and proclock objects
      on behalf of another backend.  Also improve some statements in the hot
      standby explanation, and do one or two other trivial bits of wordsmithing/
      reformatting.
      8c84803e
    • Heikki Linnakangas's avatar
      Get rid of the post-recovery cleanup step of GIN page splits. · 631118fe
      Heikki Linnakangas authored
      Replace it with an approach similar to what GiST uses: when a page is split,
      the left sibling is marked with a flag indicating that the parent hasn't been
      updated yet. When the parent is updated, the flag is cleared. If an insertion
      steps on a page with the flag set, it will finish the split before
      proceeding with the insertion.
      
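      Sketch of the insertion-time check (the flag test and ginFinishSplit()
      come from the new GIN code; the exact call site and arguments here are
      illustrative):
      
          /* If a previous split of this page never got its downlink inserted
             into the parent, finish that split before doing our insertion. */
          if (GinPageIsIncompleteSplit(BufferGetPage(stack->buffer)))
              ginFinishSplit(btree, stack, false, buildStats);
      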
      The post-recovery cleanup mechanism was never totally reliable, as
      insertion into the parent could fail, e.g. because of running out of
      memory or disk space, leaving the tree in an inconsistent state.
      
      This also divides the responsibility of WAL-logging more clearly between
      the generic ginbtree.c code, and the parts specific to entry and posting
      trees. There is now a common WAL record format for insertions and deletions,
      which is written by ginbtree.c, followed by a tree-specific payload, which
      is returned by the placetopage and split callbacks.
      631118fe
    • Heikki Linnakangas's avatar
      More GIN refactoring. · ce5326ee
      Heikki Linnakangas authored
      Separate the insertion payload from the more static portions of GinBtree.
      GinBtree now only contains information related to searching the tree, and
      the information of what to insert is passed separately.
      
      Add the root block number to GinBtree, instead of passing it around to
      all the functions as an argument.
      
      Split off ginFinishSplit() from ginInsertValue(). ginFinishSplit is
      responsible for finding the parent and inserting the downlink to it.
      ce5326ee
    • Heikki Linnakangas's avatar
      Fix plpython3 expected output. · 4118f7e8
      Heikki Linnakangas authored
      I neglected this in the previous commit that updated the plpython2 output,
      which I forgot to "git add" earlier.
      
      As pointed out by Rodolfo Campero and Marko Kreen.
      4118f7e8
    • Heikki Linnakangas's avatar
      Don't update relfrozenxid if any pages were skipped. · 82b43f7d
      Heikki Linnakangas authored
      Vacuum recognizes that it can update relfrozenxid by checking whether it has
      processed all pages of a relation. Unfortunately it performed that check
      after truncating the dead pages at the end of the relation, and used the new
      number of pages to decide whether all pages have been scanned. If the new
      number of pages happened to be smaller than or equal to the number of
      pages scanned, it incorrectly decided that all pages were scanned.
      
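      Schematically, the fix is to make the "did we scan everything" test use
      the relation size recorded before truncation (names such as
      pages_at_start_of_scan are illustrative):
      
          bool        scanned_all;
      
          /* Compare against the pre-truncation page count, not the new,
             possibly smaller one. */
          scanned_all = (vacrelstats->scanned_pages >= pages_at_start_of_scan);
      
          /* Only if every page was scanned is it safe to advance
             relfrozenxid to the freeze cutoff. */
          if (scanned_all)
              new_frozen_xid = FreezeLimit;
      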
      This can lead to relfrozenxid being updated, even though some pages were
      skipped that still contain old XIDs. That can lead to data loss due to xid
      wraparounds with some rows suddenly missing.  This has likely escaped
      notice so far because it takes a large number (~2^31) of XIDs to be used
      before the effect is visible, and a full-table vacuum before that point
      would fix the issue.
      
      The incorrect logic was introduced by commit
      b4b6923e. Backpatch this fix down to 8.4,
      like that commit.
      
      Andres Freund, with some modifications by me.
      82b43f7d
    • Michael Meskes's avatar
      Documentation fix for ecpg. · 2390f2b2
      Michael Meskes authored
      The latest fixes removed a limitation that was still in the docs, so Zoltan updated the docs, too.
      2390f2b2
    • Michael Meskes's avatar
      ECPG: Fix searching for quoted cursor names case-sensitively. · 51867a0f
      Michael Meskes authored
      Patch by Böszörményi Zoltán <zb@cybertec.at>
      51867a0f
    • Fujii Masao's avatar
      Add --xlogdir option to pg_basebackup, for specifying the pg_xlog directory. · d1b88f6b
      Fujii Masao authored
      Haribabu Kommi, slightly modified by me.
      d1b88f6b
    • Fujii Masao's avatar
      Fix typo in release note. · 551c7828
      Fujii Masao authored
      Backpatch to 9.1.
      
      Josh Kupershmidt
      551c7828
    • Peter Eisentraut's avatar
      doc: Add id to index in XSLT build · 3803ff98
      Peter Eisentraut authored
      That way, the HTML file name of the index will be the same as it
      currently is for the DSSSL build.
      3803ff98
  5. 26 Nov, 2013 4 commits