1. 05 Jan, 2010 3 commits
    • Verify input in pg_read_file(). · c3a1eae2
      Itagaki Takahiro authored
    • Fix parallel-make timing problem. · 54b47c80
      Tom Lane authored
    • Get rid of the need for manual maintenance of the initial contents of · 64737e93
      Tom Lane authored
      pg_attribute, by having genbki.pl derive the information from the various
      catalog header files.  This greatly simplifies modification of the
      "bootstrapped" catalogs.
      
      This patch finally kills genbki.sh and Gen_fmgrtab.sh; we now rely entirely on
      Perl scripts for those build steps.  To avoid creating a Perl build dependency
      where there was not one before, the output files generated by these scripts
      are now treated as distprep targets, ie, they will be built and shipped in
      tarballs.  But you will need a reasonably modern Perl (probably at least
      5.6) if you want to build from a CVS pull.
      
      The changes to the MSVC build process are untested, and may well break ---
      we'll soon find out from the buildfarm.
      
      John Naylor, based on ideas from Robert Haas and others
  2. 04 Jan, 2010 7 commits
    • Check values passed back from PLPerl to the database, via function return, · 1c4c741e
      Andrew Dunstan authored
      trigger tuple modification or SPI call, to ensure they are valid in the
      server encoding. Along the way, replace uses of SvPV(foo, PL_na)
      with SvPV_nolen(foo) as recommended in the perl docs. Bug report from
      Hannu Krosing.
    • Add a Win64-specific spin_delay() function. · 305e85b0
      Magnus Hagander authored
      We can't use the same as before, since MSVC on Win64 doesn't
      support inline assembly.
    • Improve PGXS makefile system to allow the module's makefile to specify · 4c5b4c8b
      Tom Lane authored
      where to install DATA and DOCS files.  This is mainly intended to allow
      versioned installation, eg, install into contrib/fooM.N/ rather than
      directly into contrib/.
      
      Mark Cave-Ayland
    • Write an end-of-backup WAL record at pg_stop_backup(), and wait for it at · 06f82b29
      Heikki Linnakangas authored
      recovery instead of reading the backup history file. This is more robust,
      as it stops you from prematurely starting up an inconsistent cluster if the
      backup history file is lost for some reason, or if the base backup was
      never finished with pg_stop_backup().
      
      This also paves the way for a simpler streaming replication patch, which
      doesn't need to care about backup history files anymore.
      
      The backup history file is still created and archived as before, but it's
      not used by the system anymore. It's just for informational purposes now.
      
      Bump PG_CONTROL_VERSION, as the location of the backup startpoint is now
      written to a new field in pg_control, and bump catversion because an
      initdb is required.
      
      Original patch by Fujii Masao per Simon's idea, with further fixes by me.
    • When estimating the selectivity of an inequality "column > constant" or · 40608e7f
      Tom Lane authored
      "column < constant", and the comparison value is in the first or last
      histogram bin or outside the histogram entirely, try to fetch the actual
      column min or max value using an index scan (if there is an index on the
      column).  If successful, replace the lower or upper histogram bound with
      that value before carrying on with the estimate.  This limits the
      estimation error caused by moving min/max values when the comparison
      value is close to the min or max.  Per a complaint from Josh Berkus.
      
      It is tempting to consider using this mechanism for mergejoinscansel as well,
      but that would inject index fetches into main-line join estimation not just
      endpoint cases.  I'm refraining from that until we can get a better handle
      on the costs of doing this type of lookup.
  3. 03 Jan, 2010 1 commit
    • Dept of second thoughts: my first cut at supporting "x IS NOT NULL" btree · 5b76bb18
      Tom Lane authored
      indexscans would do the wrong thing if index_rescan() was called with a
      NULL instead of a new set of scankeys and the index was DESC order,
      because sk_strategy would not get flipped a second time.  I think
      that those provisions for a NULL argument are dead code now as far as the
      core backend goes, but possibly somebody somewhere is still using it.
      In any case, this refactoring seems clearer, and it's definitely shorter.
  4. 02 Jan, 2010 10 commits
  5. 01 Jan, 2010 7 commits
  6. 31 Dec, 2009 7 commits
  7. 30 Dec, 2009 5 commits
    • Dept of second thoughts: recursive case in ANALYZE shouldn't emit a · e6df063c
      Tom Lane authored
      pgstats message.  This might need to be done differently later, but
      with the current logic that's what should happen.
    • Revise pgstat's tracking of tuple changes to improve the reliability of · 48c192c1
      Tom Lane authored
      decisions about when to auto-analyze.
      
      The previous code depended on n_live_tuples + n_dead_tuples - last_anl_tuples,
      where all three of these numbers could be bad estimates from ANALYZE itself.
      Even worse, in the presence of a steady flow of HOT updates and matching
      HOT-tuple reclamations, auto-analyze might never trigger at all, even if all
      three numbers are exactly right, because n_dead_tuples could hold steady.
      
      To fix, replace last_anl_tuples with an accurately tracked count of the total
      number of committed tuple inserts + updates + deletes since the last ANALYZE
      on the table.  This can still be compared to the same threshold as before, but
      it's much more trustworthy than the old computation.  Tracking this requires
      one more intra-transaction counter per modified table within backends, but no
      additional memory space in the stats collector.  There probably isn't any
      measurable speed difference; if anything it might be a bit faster than before,
      since I was able to eliminate some per-tuple arithmetic operations in favor of
      adding sums once per (sub)transaction.
      
      Also, simplify the logic around pgstat vacuum and analyze reporting messages
      by not trying to fold VACUUM ANALYZE into a single pgstat message.
      
      The original thought behind this patch was to allow scheduling of analyzes
      on parent tables by artificially inflating their changes_since_analyze count.
      I've left that for a separate patch since this change seems to stand on its
      own merit.
    • Revert makefile refactoring (version 1.123) because it doesn't work · ab1725d5
      Peter Eisentraut authored
      when building several files at once (e.g.,
      gmake postgres-A4.pdf postgres-US.pdf).