  1. 07 Nov, 2011 3 commits
  2. 06 Nov, 2011 2 commits
  3. 05 Nov, 2011 4 commits
    • Update regression tests for \d+ modification · 3a6e4076
      Magnus Hagander authored
      Noted by Tom
    • ebcadba2
    • Don't assume that a tuple's header size is unchanged during toasting. · 039680af
      Tom Lane authored
      This assumption can be wrong when the toaster is passed a raw on-disk
      tuple, because the tuple might pre-date an ALTER TABLE ADD COLUMN operation
      that added columns without rewriting the table.  In such a case the tuple's
      natts value is smaller than what we expect from the tuple descriptor, and
      so its t_hoff value could be smaller too.  In fact, the tuple might not
      have a null bitmap at all, and yet our current opinion of it is that it
      contains some trailing nulls.
      
      In such a situation, toast_insert_or_update did the wrong thing, because
      to save a few lines of code it would use the old t_hoff value as the offset
      where heap_fill_tuple should start filling data.  This did not leave enough
      room for the new nulls bitmap, with the result that the first few bytes of
      data could be overwritten with null flag bits, as in a recent report from
      Hubert Depesz Lubaczewski.
      
      The particular case reported requires ALTER TABLE ADD COLUMN followed by
      CREATE TABLE AS SELECT * FROM ... or INSERT ... SELECT * FROM ..., and
      further requires that there be some out-of-line toasted fields in one of
      the tuples to be copied; else we'll not reach the troublesome code.
      The problem can only manifest in this form in 8.4 and later, because
      before commit a77eaa6a, CREATE TABLE AS or
      INSERT/SELECT wouldn't result in raw disk tuples getting passed directly
      to heap_insert --- there would always have been at least a junkfilter in
      between, and that would reconstitute the tuple header with an up-to-date
      t_natts and hence t_hoff.  But I'm backpatching the tuptoaster change all
      the way anyway, because I'm not convinced there are no older code paths
      that present a similar risk.
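
      What follows is a minimal, self-contained sketch (simplified macros and a
      stand-in header layout, not the actual tuptoaster or heap_fill_tuple code)
      of why reusing the on-disk t_hoff goes wrong once the tuple descriptor
      expects more attributes than the stored tuple has, and hence a null bitmap
      the stored tuple may lack:

      /* Illustrative only: simplified model of heap tuple header sizing. */
      #include <stdio.h>

      #define MAXALIGN(len)     (((len) + 7) & ~((size_t) 7))
      #define BITMAPLEN(natts)  (((natts) + 7) / 8)
      #define FIXED_HEADER_SIZE 23   /* stand-in for the fixed part of the header */

      /* Header size = fixed part, plus a null bitmap only when nulls are present;
       * the aligned result is what t_hoff records. */
      static size_t header_size(int natts, int has_nulls)
      {
          size_t len = FIXED_HEADER_SIZE;

          if (has_nulls)
              len += BITMAPLEN(natts);
          return MAXALIGN(len);
      }

      int main(void)
      {
          /* Tuple written before ALTER TABLE ADD COLUMN: 8 attributes, no nulls,
           * so no null bitmap at all -- its on-disk t_hoff is just the fixed part. */
          size_t old_hoff = header_size(8, 0);

          /* The current descriptor says 10 attributes; the added columns read as
           * trailing nulls, so the rebuilt tuple needs a null bitmap. */
          size_t new_hoff = header_size(10, 1);

          printf("on-disk t_hoff = %zu, required t_hoff = %zu\n", old_hoff, new_hoff);
          if (new_hoff > old_hoff)
              printf("filling data at the old offset would clobber %zu byte(s)\n",
                     new_hoff - old_hoff);   /* the bug: null-flag bits overwrite data */
          return 0;
      }

      The fix, in essence, is to compute the offset where heap_fill_tuple starts
      filling data from the current attribute count and null bitmap, rather than
      reusing the stored t_hoff value.
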
  4. 04 Nov, 2011 7 commits
  5. 03 Nov, 2011 12 commits
  6. 02 Nov, 2011 11 commits
    • Avoid scanning nulls at the beginning of a btree index scan. · 1a77f8b6
      Tom Lane authored
      If we have an inequality key that constrains the other end of the index,
      it doesn't directly help us in doing the initial positioning ... but it
      does imply a NOT NULL constraint on the index column.  If the index stores
      nulls at this end, we can use the implied NOT NULL condition for initial
      positioning, just as if it had been stated explicitly.  This avoids wasting
      time when there are a lot of nulls in the column.  This is the reverse of
      the examples given in bugs #6278 and #6283, which were about failing to
      stop early when we encounter nulls at the end of the indexscan.
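
      A rough illustration of the positioning idea, using a plain sorted array in
      place of the btree (none of this is nbtree code; the entry layout and search
      are invented for the example): an inequality such as x < 10 constrains the
      far end of the scan, but it also implies x IS NOT NULL, which lets the scan
      start past a leading block of NULLs in a NULLS FIRST index instead of
      walking through them:

      /* Illustrative only: a sorted "index" as an array, NULLs ordered first. */
      #include <stdio.h>

      typedef struct { int is_null; int value; } IndexEntry;

      /* Binary search for the first non-NULL entry: the position an implied
       * "x IS NOT NULL" condition lets us jump to directly. */
      static int first_not_null(const IndexEntry *idx, int n)
      {
          int lo = 0, hi = n;            /* answer lies in [lo, hi] */

          while (lo < hi)
          {
              int mid = (lo + hi) / 2;

              if (idx[mid].is_null)
                  lo = mid + 1;
              else
                  hi = mid;
          }
          return lo;
      }

      int main(void)
      {
          /* x ASC NULLS FIRST: a run of NULLs, then ordinary values */
          IndexEntry idx[] = {
              {1, 0}, {1, 0}, {1, 0}, {1, 0}, {1, 0}, {1, 0},
              {0, 3}, {0, 7}, {0, 9}, {0, 12}, {0, 42}
          };
          int n = (int) (sizeof(idx) / sizeof(idx[0]));

          /* The qual "x < 10" bounds the other end of the scan, but its implied
           * NOT NULL gives us a useful starting position too. */
          int start = first_not_null(idx, n);

          printf("skipped %d NULL entries before starting\n", start);
          for (int i = start; i < n && idx[i].value < 10; i++)
              printf("match: %d\n", idx[i].value);
          return 0;
      }
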
    • Fix btree stop-at-nulls logic properly. · 882368e8
      Tom Lane authored
      As pointed out by Naoya Anzai, my previous try at this was a few bricks
      shy of a load, because I had forgotten that the initial-positioning logic
      might not try to skip over nulls at the end of the index the scan will
      start from.  We ought to fix that, because it represents an unnecessary
      inefficiency, but first let's get the scan-stop logic back to a safe
      state.  With this patch, we preserve the performance benefit requested
      in bug #6278 for the case of scanning forward into NULLs (in a NULLS
      LAST index), but the reverse case of scanning backward across NULLs
      when there's no suitable initial-positioning qual is still inefficient.
    • Reduce checkpoints and WAL traffic on low activity database server · 18fb9d8d
      Simon Riggs authored
      Previously, we skipped a checkpoint if no WAL had been written since
      the last checkpoint, though this does not appear in the user documentation.
      Now we skip a checkpoint until we have written enough WAL to switch to
      the next WAL file. This greatly reduces the level of activity and the
      number of WAL messages generated by a very low activity server. This is
      safe because the purpose of a checkpoint is to act as a starting place
      for recovery in case of a crash. The patch keeps the WAL volume that must
      be replayed after a crash minimal, and thus keeps crash recovery time
      very low.
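
      A hedged sketch of the skip test, with WAL positions modelled as plain byte
      offsets rather than XLogRecPtr values, and the rule reduced to "still inside
      the same 16MB segment as the previous checkpoint" (an assumption based on
      the description above, not the exact xlog.c test):

      /* Illustrative only: simplified checkpoint-skip predicate. */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define WAL_SEGMENT_SIZE (16 * 1024 * 1024)   /* default 16MB segments */

      /* Skip a timed checkpoint while the insert position is still in the WAL
       * segment that held the previous checkpoint, i.e. not enough WAL has been
       * written to force a switch to the next WAL file. */
      static bool skip_checkpoint(uint64_t prev_checkpoint_ptr, uint64_t insert_ptr)
      {
          return insert_ptr / WAL_SEGMENT_SIZE == prev_checkpoint_ptr / WAL_SEGMENT_SIZE;
      }

      int main(void)
      {
          uint64_t prev = 3 * (uint64_t) WAL_SEGMENT_SIZE + 4096;  /* last checkpoint */

          printf("idle server, little new WAL: skip=%d\n",
                 skip_checkpoint(prev, prev + 512));               /* same segment */
          printf("busy server, crossed a segment: skip=%d\n",
                 skip_checkpoint(prev, prev + WAL_SEGMENT_SIZE));  /* next segment */
          return 0;
      }
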
    • Refactor xlog.c to create src/backend/postmaster/startup.c · 9aceb6ab
      Simon Riggs authored
      The startup process now has its own dedicated file, just like all the
      other special/background processes. This reduces the role and size of
      xlog.c.
    • Derive oldestActiveXid at correct time for Hot Standby. · 86e33648
      Simon Riggs authored
      There was a timing window between when oldestActiveXid was derived and
      when it should have been derived, which shows itself only under heavy
      load. Move code around to ensure correct timing of the derivation.
      No change to StartupSUBTRANS() code, which is where this failed.
      
      Bug report by Chris Redekop
    • Start Hot Standby faster when initial snapshot is incomplete. · 10b7c686
      Simon Riggs authored
      If the initial snapshot has overflowed, we can now start whenever the
      latest snapshot is empty or not overflowed, in addition to the existing
      rule of starting once the xmin on the primary is higher than the xmax of
      our starting snapshot, which proves we have full snapshot data.
      
      Bug report by Chris Redekop
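
      A sketch of the start test as a boolean predicate; the type, field and
      function names are hypothetical stand-ins rather than the procarray.c code,
      and transaction-id comparison is simplified to plain integer comparison:

      /* Illustrative only: when may Hot Standby open for queries, given that
       * the initial running-xacts snapshot overflowed? */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      typedef struct
      {
          bool     overflowed;   /* subtransaction list did not fit in the record */
          uint32_t xcnt;         /* number of running transactions listed */
      } RunningXactsInfo;

      static bool can_accept_queries(const RunningXactsInfo *latest,
                                     uint32_t starting_snapshot_xmax,
                                     uint32_t primary_xmin)
      {
          /* New rule: a later snapshot that is empty or not overflowed suffices. */
          if (latest->xcnt == 0 || !latest->overflowed)
              return true;

          /* Old rule: wait until the primary's xmin passes our starting xmax,
           * proving every transaction in the incomplete snapshot has ended. */
          return primary_xmin > starting_snapshot_xmax;
      }

      int main(void)
      {
          RunningXactsInfo later = { .overflowed = false, .xcnt = 12 };

          printf("later snapshot not overflowed: start=%d\n",
                 can_accept_queries(&later, 1500, 1400));   /* can start now */

          later.overflowed = true;
          printf("still overflowed, primary xmin behind: start=%d\n",
                 can_accept_queries(&later, 1500, 1400));   /* must keep waiting */
          return 0;
      }
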
    • Fix timing of Startup CLOG and MultiXact during Hot Standby · f8409b39
      Simon Riggs authored
      Patch by me, bug report by Chris Redekop, analysis by Florian Pflug
    • Initialize myProcLocks queues just once, at postmaster startup. · c2891b46
      Robert Haas authored
      In assert-enabled builds, we assert during the shutdown sequence that
      the queues have been properly emptied, and during process startup that
      we are inheriting empty queues.  In non-assert enabled builds, we just
      save a few cycles.
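
      A small sketch of the pattern in plain C, with a process-local array and
      assert() standing in for the shared-memory SHM_QUEUE headers and
      PostgreSQL's Assert: the queues are initialized once when shared state is
      set up, and afterwards each backend only verifies, in assert-enabled
      builds, that it inherits and hands back empty queues:

      /* Illustrative only: circular list heads standing in for myProcLocks. */
      #include <assert.h>
      #include <stdbool.h>
      #include <stdio.h>

      typedef struct QueueNode { struct QueueNode *prev, *next; } QueueNode;

      static void queue_init(QueueNode *head)  { head->prev = head->next = head; }
      static bool queue_is_empty(const QueueNode *head) { return head->next == head; }

      #define NUM_LOCK_PARTITIONS 16

      static QueueNode my_proc_locks[NUM_LOCK_PARTITIONS];

      /* Done once, when the postmaster sets up shared state. */
      static void postmaster_startup(void)
      {
          for (int i = 0; i < NUM_LOCK_PARTITIONS; i++)
              queue_init(&my_proc_locks[i]);
      }

      /* Per-backend start and exit: no re-initialization, just verification that
       * the queues really are empty -- compiled out when assertions are disabled. */
      static void backend_startup(void)
      {
          for (int i = 0; i < NUM_LOCK_PARTITIONS; i++)
              assert(queue_is_empty(&my_proc_locks[i]));
      }

      static void backend_shutdown(void)
      {
          for (int i = 0; i < NUM_LOCK_PARTITIONS; i++)
              assert(queue_is_empty(&my_proc_locks[i]));
      }

      int main(void)
      {
          postmaster_startup();
          backend_startup();     /* inherits empty queues */
          backend_shutdown();    /* hands them back empty */
          puts("queues verified empty; no per-backend re-initialization needed");
          return 0;
      }
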
    • Preserve Var location information during flatten_join_alias_vars. · 391af9f7
      Tom Lane authored
      This allows us to give correct syntax error pointers when complaining
      about ungrouped variables in a join query with aggregates or GROUP BY.
      It's pretty much irrelevant for the planner's use of the function, though
      perhaps it might aid debugging sometimes.
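
      A minimal sketch of the idea with made-up node structs (the real routine
      can substitute an arbitrary expression for a join alias Var): copy the
      alias Var's parse location onto its replacement, so a later "must appear
      in the GROUP BY clause" error can point its cursor at the token the user
      actually typed:

      /* Illustrative only: tiny stand-ins for parse nodes that carry a
       * "location", a byte offset into the query text used for error cursors. */
      #include <stdio.h>

      typedef struct
      {
          const char *name;       /* column the Var refers to */
          int         location;   /* token position in the query, -1 if unknown */
      } Var;

      /* When a join-alias Var is replaced by the underlying expression, keep the
       * alias Var's location so error messages point at what the user wrote. */
      static Var flatten_alias_var(const Var *alias_var, const Var *underlying)
      {
          Var result = *underlying;

          result.location = alias_var->location;   /* preserve the position */
          return result;
      }

      int main(void)
      {
          Var alias      = { "j.x", 27 };    /* "j.x" as written in the query */
          Var underlying = { "t1.x", -1 };   /* the expansion has no position */

          Var flat = flatten_alias_var(&alias, &underlying);

          printf("error cursor for ungrouped column %s at offset %d\n",
                 flat.name, flat.location);
          return 0;
      }
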
  7. 01 Nov, 2011 1 commit
    • Fix race condition with toast table access from a stale syscache entry. · 08e261cb
      Tom Lane authored
      If a tuple in a syscache contains an out-of-line toasted field, and we
      try to fetch that field shortly after some other transaction has committed
      an update or deletion of the tuple, there is a race condition: vacuum
      could come along and remove the toast tuples before we can fetch them.
      This leads to transient failures like "missing chunk number 0 for toast
      value NNNNN in pg_toast_2619", as seen in recent reports from Andrew
      Hammond and Tim Uckun.
      
      The design idea of syscache is that access to stale syscache entries
      should be prevented by relation-level locks, but that fails for at least
      two cases where toasted fields are possible: ANALYZE updates pg_statistic
      rows without locking out sessions that might want to plan queries on the
      same table, and CREATE OR REPLACE FUNCTION updates pg_proc rows without
      any meaningful lock at all.
      
      The least risky fix seems to be an idea that Heikki suggested when we
      were dealing with a related problem back in August: forcibly detoast any
      out-of-line fields before putting a tuple into syscache in the first place.
      This avoids the problem because at the time we fetch the parent tuple from
      the catalog, we should be holding an MVCC snapshot that will prevent
      removal of the toast tuples, even if the parent tuple is outdated
      immediately after we fetch it.  (Note: I'm not convinced that this
      statement holds true at every instant where we could be fetching a syscache
      entry at all, but it does appear to hold true at the times where we could
      fetch an entry that could have a toasted field.  We will need to be a bit
      wary of adding toast tables to low-level catalogs that don't have them
      already.)  An additional benefit is that subsequent uses of the syscache
      entry should be faster, since they won't have to detoast the field.
      
      Back-patch to all supported versions.  The problem is significantly harder
      to reproduce in pre-9.0 releases, because of their willingness to flush
      every entry in a syscache whenever the underlying catalog is vacuumed
      (cf CatalogCacheFlushRelation); but there is still a window for trouble.
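
      A hedged sketch of the chosen fix, with hypothetical helper names and a toy
      attribute array in place of a heap tuple (the real change detoasts through
      the tuptoaster before the tuple is copied into the catcache): expand any
      out-of-line value while the snapshot that fetched the parent tuple still
      protects the toast rows, so later cache hits never touch the toast table:

      /* Illustrative only: a tuple reduced to an array of attribute values,
       * where an "external" value is a reference to out-of-line toast storage. */
      #include <stdio.h>
      #include <stdlib.h>

      typedef struct
      {
          int   is_external;   /* stored out of line in the toast table? */
          char *data;          /* inline bytes, or NULL when is_external */
          long  toast_id;      /* which toast value to fetch when external */
      } Attr;

      /* Stand-in for fetching a toast value; in the real system this is only
       * safe while our MVCC snapshot keeps vacuum from removing the chunks. */
      static char *fetch_toast_value(long toast_id)
      {
          char *buf = malloc(64);

          snprintf(buf, 64, "<detoasted value %ld>", toast_id);
          return buf;
      }

      /* Expand every external attribute before the tuple goes into the cache. */
      static void flatten_for_cache(Attr *attrs, int natts)
      {
          for (int i = 0; i < natts; i++)
          {
              if (attrs[i].is_external)
              {
                  attrs[i].data = fetch_toast_value(attrs[i].toast_id);
                  attrs[i].is_external = 0;
              }
          }
      }

      int main(void)
      {
          Attr tuple[2] = {
              { 0, "inline value", 0 },
              { 1, NULL, 2619 }          /* e.g. a wide pg_statistic column */
          };

          flatten_for_cache(tuple, 2);
          for (int i = 0; i < 2; i++)
              printf("attr %d: %s\n", i, tuple[i].data);

          free(tuple[1].data);
          return 0;
      }
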