  1. 10 Mar, 2008 6 commits
  2. 09 Mar, 2008 2 commits
    • Remove postmaster.c's check that NBuffers is at least twice MaxBackends. · d9384a4b
      Tom Lane authored
      With the addition of multiple autovacuum workers, our choices were to delete
      the check, document the interaction with autovacuum_max_workers, or complicate
      the check to try to hide that interaction.  Since this restriction has never
      been adequate to ensure backends can't run out of pinnable buffers, it
      doesn't have enough justification to warrant the second or third choice.
      Per discussion of a complaint from Andreas Kling (see also bug #3888).
      
      This commit also removes several documentation references to this restriction,
      but I'm not sure I got them all.
    • Change patternsel() so that instead of switching from a pure · f4230d29
      Tom Lane authored
      pattern-examination heuristic method to purely histogram-driven selectivity at
      histogram size 100, we compute both estimates and use a weighted average.
      The weight put on the heuristic estimate decreases linearly with histogram
      size, dropping to zero for 100 or more histogram entries.
      Likewise in ltreeparentsel().  After a patch by Greg Stark, though I
      reorganized the logic a bit to give the caller of histogram_selectivity()
      more control.
  3. 08 Mar, 2008 5 commits
    • Modify prefix_selectivity() so that it will never estimate the selectivity · 422495d0
      Tom Lane authored
      of the generated range condition var >= 'foo' AND var < 'fop' as being less
      than what eqsel() would estimate for var = 'foo'.  This is intuitively
      reasonable and it gets rid of the need for some entirely ad-hoc coding we
      formerly used to reject bogus estimates.  The basic problem here is that
      if the prefix is more than a few characters long, the two boundary values
      are too close together to be distinguishable by comparison to the column
      histogram, resulting in a selectivity estimate of zero, which is often
      not very sane.  Change motivated by an example from Peter Eisentraut.
      
      Arguably this is a bug fix, but I'll refrain from back-patching it
      for the moment.
    • Refactor heap_page_prune so that instead of changing item states on-the-fly, · 6f10eb21
      Tom Lane authored
      it accumulates the set of changes to be made and then applies them.  It had
      to accumulate the set of changes anyway to prepare a WAL record for the
      pruning action, so this isn't an enormous change; the only new complexity is
      to not doubly mark tuples that are visited twice in the scan.  The main
      advantage is that we can substantially reduce the scope of the critical
      section in which the changes are applied, thus avoiding PANIC in foreseeable
      cases like running out of memory in inval.c.  A nice secondary advantage is
      that it is now far clearer that WAL replay will actually do the same thing
      that the original pruning did.
      
      This commit doesn't do anything about the open problem that
      CacheInvalidateHeapTuple doesn't have the right semantics for a CTID change
      caused by collapsing out a redirect pointer.  But whatever we do about that,
      it'll be a good idea to not do it inside a critical section.
    • Add: · cc05d051
      Bruce Momjian authored
      >
      > * Consider a function-based API for '@@' full text searches
      >
      >   http://archives.postgresql.org/pgsql-hackers/2007-11/msg00511.php
      >
    • Improve efficiency of attribute scanning in CopyReadAttributesCSV. · 95c238d9
      Andrew Dunstan authored
      The loop is split into two parts, one for inside quotes and one for outside, saving some instructions in each.
      
      Heikki Linnakangas
    • Improve pglz_decompress() so that it cannot clobber memory beyond the · 9c767ad5
      Tom Lane authored
      available output buffer when presented with corrupt input.  Some testing
      suggests that this slows the decompression loop about 1%, which seems an
      acceptable price to pay for more robustness.  (Curiously, the penalty
      seems to be *less* on not-very-compressible data, which I didn't expect
      since the overhead per output byte ought to be more in the literal-bytes
      path.)
      
      Patch from Zdenek Kotala.  I fixed a corner case and did some renaming
      of variables to make the routine more readable.
  4. 07 Mar, 2008 17 commits
  5. 06 Mar, 2008 10 commits