1. 03 Sep, 2003 7 commits
  2. 02 Sep, 2003 6 commits
    • 5ac2d7c0 · Tom Lane authored
      In _bt_check_unique() loop, don't bother applying _bt_isequal() to
      killed items; just skip to the next item immediately.  Only check for
      key equality when we reach a non-killed item or the end of the index
      page.  This saves key comparisons when there are lots of killed items,
      as for example in a heavily-updated table that's not been vacuumed lately.
      Seems to be a win for pgbench anyway.
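      The change above is purely about the order of checks inside the scan
      loop.  Below is a minimal, self-contained sketch of that reordering,
      using hypothetical names (Item, count_live_equal); it is not the
      PostgreSQL routine itself, only an illustration of skipping killed
      entries before any key comparison is made.

        #include <stdbool.h>
        #include <stddef.h>

        /* Hypothetical item: the real code walks line pointers on a B-tree
         * page, but only a "killed" flag and a key matter for this sketch. */
        typedef struct { bool killed; int key; } Item;

        /* Scan a run of items for live entries equal to 'key'.  Killed items
         * are skipped immediately, so the (potentially expensive) equality
         * test runs only for non-killed items. */
        static int
        count_live_equal(const Item *items, size_t nitems, int key)
        {
            int nequal = 0;

            for (size_t i = 0; i < nitems; i++)
            {
                if (items[i].killed)
                    continue;          /* skip killed item: no key comparison */
                if (items[i].key != key)
                    break;             /* first live non-equal key ends the run */
                nequal++;
            }
            return nequal;
        }

      The saving the message describes comes from the skip happening before
      the comparison, so a page full of killed entries costs one flag test
      per item instead of one key comparison each.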
    • 30b4abf5 · Peter Eisentraut authored
      Remove outdated CLI things.
    • b916cc43 · Tom Lane authored
      Cause standalone backend (including bootstrap case) to read the GUC
      config file if it exists.  This was already discussed as being a good
      idea, and now seems the cleanest way to deal with initdb-time failures
      on machines with small SHMMAX.  (The submitted patches instead modified
      initdb.sh to pass the correct sizing parameters, but that would still
      leave standalone backends prone to failure later.  An admin who needs
      to use a standalone backend has enough trouble already, he shouldn't
      have to manually configure its shmem settings...)
    • d70610c4 · Tom Lane authored
      Several fixes for hash indexes that involve changing the on-disk index
      layout; therefore, this change forces REINDEX of hash indexes (though
      not a full initdb).  Widen hashm_ntuples to double so that hash space
      management doesn't get confused by more than 4G entries; enlarge the
      allowed number of free-space-bitmap pages; replace the useless bshift
      field with a useful bmshift field; eliminate 4 bytes of wasted space
      in the per-page special area.
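      As a rough illustration of the layout change described above, here is an
      abridged, hypothetical metapage struct; only the two fields the message
      names (the widened tuple count and the bmshift field) are shown
      concretely, and everything else in the real on-disk layout is elided.

        #include <stdint.h>

        /* Illustrative sketch only; not PostgreSQL's actual metapage struct. */
        typedef struct HashMetaPageSketch
        {
            double   hashm_ntuples;  /* widened to double so tuple counts above
                                        4G no longer wrap a 32-bit field */
            uint32_t hashm_bmshift;  /* replaces the unused bshift field; a log2
                                        factor used in free-space bitmap math */
            /* ... bucket masks, overflow and bitmap-page bookkeeping ... */
        } HashMetaPageSketch;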
    • 8b2450c8 · Tom Lane authored
      Fix a couple typos, add some more comments.
    • 39673ca4 · Tom Lane authored
      Rewrite hashbulkdelete() to make it amenable to new bucket locking
      scheme.  A pleasant side effect is that it is *much* faster when deleting
      a large fraction of the indexed tuples, because of elimination of
      redundant hash_step activity induced by hash_adjscans.  Various other
      continuing code cleanup.
  3. 01 Sep, 2003 7 commits
  4. 31 Aug, 2003 3 commits
  5. 30 Aug, 2003 2 commits
  6. 28 Aug, 2003 5 commits
  7. 27 Aug, 2003 7 commits
  8. 26 Aug, 2003 3 commits