  1. 05 Sep, 2013 3 commits
    • Make catalog cache hash tables resizeable. · 20cb18db
      Heikki Linnakangas authored
      If the hash table backing a catalog cache becomes too full (fillfactor > 2),
      enlarge it. A new buckets array, double the size of the old, is allocated,
      and all entries in the old hash are moved to the right bucket in the new
      hash.
      
      This has two benefits. First, cache lookups don't get so expensive when
      there are lots of entries in a cache, like if you access hundreds of
      thousands of tables. Second, we can make the (initial) sizes of the caches
      much smaller, which saves memory.
      
      This patch dials down the initial sizes of the catcaches. The new sizes are
      chosen so that a backend that only runs a few basic queries still won't need
      to enlarge any of them.
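      The enlargement step described above can be illustrated with a small standalone
      sketch; the struct and function names below are invented for illustration and are
      not the actual catcache.c code. Once the number of entries exceeds twice the number
      of buckets, a bucket array of double the size is allocated and every entry is
      re-linked into the bucket chosen by its hash under the new mask.

          #include <stdlib.h>

          /* Standalone sketch of the enlargement strategy; invented names,
           * not the actual catcache.c data structures. Error checking omitted. */
          typedef struct Entry
          {
              unsigned int  hash;     /* full hash value of the cached key */
              struct Entry *next;     /* chain link within a bucket */
          } Entry;

          typedef struct HashTable
          {
              int     nbuckets;       /* always a power of two */
              int     nentries;
              Entry **buckets;
          } HashTable;

          /* Double the bucket array and move every entry to its new bucket. */
          static void
          enlarge(HashTable *ht)
          {
              int     newnbuckets = ht->nbuckets * 2;
              Entry **newbuckets = calloc(newnbuckets, sizeof(Entry *));

              for (int i = 0; i < ht->nbuckets; i++)
              {
                  Entry *e = ht->buckets[i];

                  while (e != NULL)
                  {
                      Entry *next = e->next;
                      int    b = e->hash & (newnbuckets - 1);   /* new bucket index */

                      e->next = newbuckets[b];
                      newbuckets[b] = e;
                      e = next;
                  }
              }
              free(ht->buckets);
              ht->buckets = newbuckets;
              ht->nbuckets = newnbuckets;
          }

          /* Insert a hash value, enlarging first if fillfactor would exceed 2. */
          static void
          insert(HashTable *ht, unsigned int hash)
          {
              if (ht->nentries + 1 > ht->nbuckets * 2)
                  enlarge(ht);

              Entry *e = malloc(sizeof(Entry));
              int    b = hash & (ht->nbuckets - 1);

              e->hash = hash;
              e->next = ht->buckets[b];
              ht->buckets[b] = e;
              ht->nentries++;
          }

      With this scheme the average bucket chain stays short (around two entries at most
      before the next doubling) no matter how many catalog entries get cached, which is
      what makes the smaller initial sizes safe.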
    • Revert WAL posix_fallocate() patches. · b1892aae
      Jeff Davis authored
      This reverts commit 269e7808
      and commit 5b571bb8.
      
      Unfortunately, the initial patch had insufficient performance testing,
      and resulted in a regression.
      
      Per report by Thom Brown.
    • Improve Range Types and Exclusion Constraints example. · be6fcb67
      Jeff Davis authored
      Make the examples self-contained to avoid confusion. Per bug report
      8367 from KOIZUMI Satoru.
  2. 04 Sep, 2013 4 commits
  3. 03 Sep, 2013 10 commits
    • Update comments concerning PGC_S_TEST. · 0c66a223
      Tom Lane authored
      This GUC context value was once only used by ALTER DATABASE SET and
      ALTER USER SET.  That's not true anymore, though, so rewrite the
      comments to be a bit more general.
      
      Patch in HEAD only, since this is just an internal documentation issue.
    • Don't fail for bad GUCs in CREATE FUNCTION with check_function_bodies off. · 546f7c2e
      Tom Lane authored
      The previous coding attempted to activate all the GUC settings specified
      in SET clauses, so that the function validator could operate in the GUC
      environment expected by the function body.  However, this is problematic
      when restoring a dump, since the SET clauses might refer to database
      objects that don't exist yet.  We already have the parameter
      check_function_bodies that's meant to prevent forward references in
      function definitions from breaking dumps, so let's change CREATE FUNCTION
      to not install the SET values if check_function_bodies is off.
      
      Authors of function validators were already advised not to make any
      "context sensitive" checks when check_function_bodies is off, if indeed
      they're checking anything at all in that mode.  But extend the
      documentation to point out the GUC issue in particular.
      
      (Note that we still check the SET clauses to some extent; the behavior
      with !check_function_bodies is now approximately equivalent to what ALTER
      DATABASE/ROLE have been doing for a while with context-dependent GUCs.)
      
      This problem can be demonstrated in all active branches, so back-patch
      all the way.
    • Allow aggregate functions to be VARIADIC. · 0d3f4406
      Tom Lane authored
      There's no inherent reason why an aggregate function can't be variadic
      (even VARIADIC ANY) if its transition function can handle the case.
      Indeed, this patch to add the feature touches none of the planner or
      executor, and little of the parser; the main missing stuff was DDL and
      pg_dump support.
      
      It is true that variadic aggregates can create the same sort of ambiguity
      about parameters versus ORDER BY keys that was complained of when we
      (briefly) had both one- and two-argument forms of string_agg().  However,
      the policy formed in response to that discussion only said that we'd not
      create any built-in aggregates with varying numbers of arguments, not that
      we shouldn't allow users to do it.  So the logical extension of that is
      we can allow users to make variadic aggregates as long as we're wary about
      shipping any such in core.
      
      In passing, this patch allows aggregate function arguments to be named, to
      the extent of remembering the names in pg_proc and dumping them in pg_dump.
      You can't yet call an aggregate using named-parameter notation.  That seems
      like a likely future extension, but it'll take some work, and it's not what
      this patch is really about.  Likewise, there's still some work needed to
      make window functions handle VARIADIC fully, but I left that for another
      day.
      
      initdb forced because of new aggvariadic field in Aggref parse nodes.
    • Update obsolete comment · 8b290f31
      Alvaro Herrera authored
    • Docs: wording improvements in discussion of timestamp arithmetic. · 7489eb4d
      Tom Lane authored
      I started out just to fix the broken markup in commit
      1c208576, but got distracted by
      copy-editing.  I see Bruce already fixed the markup, but I'll
      commit the wordsmithing anyway.
    • doc: Fix SGML markup for date patch · b642bc55
      Bruce Momjian authored
    • Docs: add paragraph about date/timestamp subtraction · 1c208576
      Bruce Momjian authored
      per suggestion from Francisco Olart
    • Robert Haas · 9d323bda
    • Greg Stark
    • Fix typo in comment. · a93bdfc7
      Heikki Linnakangas authored
      Also line-wrap an over-wide line in a comment that's ignored by pgindent.
  4. 02 Sep, 2013 3 commits
  5. 01 Sep, 2013 2 commits
  6. 31 Aug, 2013 1 commit
    • Improve regression test for #8410. · abd3f8ca
      Tom Lane authored
      The previous version of the query disregarded the output of the
      MergeAppend instead of checking it.
      
      Andres Freund
  7. 30 Aug, 2013 2 commits
    • Add test case for bug #8410. · ac2d0e46
      Tom Lane authored
      Per Andres Freund.
    • Reset the binary heap in MergeAppend rescans. · 8e2b71d2
      Tom Lane authored
      Failing to do so can cause queries to return wrong data, error out or crash.
      This requires adding a new binaryheap_reset() method to binaryheap.c,
      but that probably should have been there anyway.
      
      Per bug #8410 from Terje Elde.  Diagnosis and patch by Andres Freund.
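      A reset amounts to emptying the heap while keeping its allocated node array, so
      that a rescan can rebuild the heap from scratch instead of merging against stale
      entries. A minimal sketch, assuming a simple array-backed heap; the field names
      below are illustrative, not necessarily those of binaryheap.c:

          #include <stddef.h>
          #include <stdbool.h>

          /* Simplified array-backed binary heap; field names are illustrative. */
          typedef struct binaryheap
          {
              size_t  bh_size;               /* number of elements currently stored */
              size_t  bh_space;              /* allocated capacity of bh_nodes */
              bool    bh_has_heap_property;  /* true once heapified */
              void  **bh_nodes;              /* element storage, reused across resets */
          } binaryheap;

          /*
           * Empty the heap without releasing its storage; a subsequent rescan
           * re-adds the elements and heapifies again.
           */
          static void
          binaryheap_reset(binaryheap *heap)
          {
              heap->bh_size = 0;
              heap->bh_has_heap_property = true;
          }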
  8. 29 Aug, 2013 2 commits
    • Make error wording more consistent · 9381cb52
      Alvaro Herrera authored
    • Use a non-locking initial test in TAS_SPIN on x86_64. · b03d196b
      Heikki Linnakangas authored
      Testing done in 2011 by Tom Lane concluded that this is a win on Intel Xeons
      and AMD Opterons, but it was not changed back then, because of an old
      comment in tas() that suggested that it's a huge loss on older Opterons.
      However, we didn't have separate TAS() and TAS_SPIN() macros back then, so
      the comment referred to doing a non-locked initial test even on the first
      access, in the uncontended case. I don't have access to older Opterons, but
      I'm pretty sure that doing an initial unlocked test is unlikely to be a loss
      while spinning, even though it might be for the first access.
      
      We probably should do the same on 32-bit x86, but I'm afraid of changing it
      without any testing. Hence just add a note to the x86 implementation
      suggesting that we probably should do the same there.
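      The pattern being described is to read the lock word with a plain load inside the
      spin loop and fall back to the locked exchange only when the lock looks free, which
      avoids bouncing the cache line between waiters. Schematically it looks like the
      sketch below, a simplified rendering of the idea rather than a verbatim copy of
      s_lock.h:

          /* Sketch of the x86_64 spin-wait pattern; simplified, not verbatim s_lock.h. */
          typedef unsigned char slock_t;

          /* Atomically set the lock byte to 1 and return its previous value. */
          static inline int
          tas(volatile slock_t *lock)
          {
              slock_t res = 1;

              __asm__ __volatile__("lock; xchgb %0, %1"
                                   : "+q"(res), "+m"(*lock)
                                   : /* no inputs */
                                   : "memory", "cc");
              return (int) res;
          }

          #define TAS(lock)       tas(lock)

          /*
           * While spinning, test with an ordinary (non-locking) read first and
           * only attempt the locked exchange once the lock appears to be free.
           */
          #define TAS_SPIN(lock)  (*(lock) ? 1 : TAS(lock))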
  9. 28 Aug, 2013 3 commits
    • Allow discovery of whether a dynamic background worker is running. · 090d0f20
      Robert Haas authored
      Using the infrastructure provided by this patch, it's possible either
      to wait for the startup of a dynamically-registered background worker,
      or to poll the status of such a worker without waiting.  In either
      case, the current PID of the worker process can also be obtained.
      As usual, worker_spi is updated to demonstrate the new functionality.
      
      Patch by me.  Review by Andres Freund.
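      Usage would look roughly like the fragment below, which is meant to live inside an
      extension such as worker_spi; the function names and status codes are recalled from
      the bgworker API of this vintage and should be checked against bgworker.h rather
      than taken as authoritative:

          #include "postgres.h"
          #include "postmaster/bgworker.h"

          /*
           * After RegisterDynamicBackgroundWorker() has returned a handle,
           * either block until the worker starts or poll its status.
           */
          static void
          report_worker_state(BackgroundWorkerHandle *handle)
          {
              pid_t           pid;
              BgwHandleStatus status;

              /* Wait for the postmaster to start (or fail to start) the worker. */
              status = WaitForBackgroundWorkerStartup(handle, &pid);
              if (status == BGWH_STARTED)
                  elog(LOG, "worker is running with PID %d", (int) pid);

              /* ...or poll without waiting. */
              status = GetBackgroundWorkerPid(handle, &pid);
              if (status == BGWH_STOPPED)
                  elog(LOG, "worker has exited");
          }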
    • Partially restore comments discussing enum renumbering hazards. · c9e2e2db
      Robert Haas authored
      As noted by Tom Lane, commit 813fb031
      was overly optimistic about how safe it is to concurrently change
      enumsortorder values under MVCC catalog scan semantics.  Restore
      some of the previous text, with hopefully-correct adjustments for
      the new state of play.
    • Accept multiple -I, -P, -T and -n options in pg_restore. · da85fb47
      Heikki Linnakangas authored
      We already did this for -t (--table) in 9.3, but missed the other similar
      options. For consistency, allow all of them to be specified multiple times.
      
      Unfortunately it's too late to sneak this into 9.3, so commit to master
      only.
  10. 27 Aug, 2013 2 commits
  11. 26 Aug, 2013 1 commit
  12. 24 Aug, 2013 2 commits
    • Account better for planning cost when choosing whether to use custom plans. · 2aac3399
      Tom Lane authored
      The previous coding in plancache.c essentially used 10% of the estimated
      runtime as its cost estimate for planning.  This can be pretty bogus,
      especially when the estimated runtime is very small, such as in a simple
      expression plan created by plpgsql, or a simple INSERT ... VALUES.
      
      While we don't have a really good handle on how planning time compares
      to runtime, it seems reasonable to use an estimate based on the number of
      relations referenced in the query, with a rather large multiplier.  This
      patch uses 1000 * cpu_operator_cost * (nrelations + 1), so that even a
      trivial query will be charged 1000 * cpu_operator_cost for planning.
      This should address the problem reported by Marc Cousin and others that
      9.2 and up prefer custom plans in cases where the planning time greatly
      exceeds what can be saved.
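      To make the formula concrete, here is a standalone back-of-the-envelope sketch; the
      helper name is invented and cpu_operator_cost is hard-wired to its 0.0025 default
      purely for illustration:

          #include <stdio.h>

          /* Default value of the cpu_operator_cost GUC, hard-wired for this sketch. */
          static const double cpu_operator_cost = 0.0025;

          /* Planning-cost charge added when weighing generic vs. custom plans. */
          static double
          planning_charge(int nrelations)
          {
              return 1000.0 * cpu_operator_cost * (nrelations + 1);
          }

          int
          main(void)
          {
              /* Even a query referencing no relations is charged 1000 * cpu_operator_cost. */
              printf("0 relations: %.2f cost units\n", planning_charge(0));   /* 2.50 */
              printf("5 relations: %.2f cost units\n", planning_charge(5));   /* 15.00 */
              return 0;
          }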
    • Don't crash when pg_xlog is empty and pg_basebackup -x is used · db4ef737
      Magnus Hagander authored
      The backup will not work in this case (without a log archive, and that's
      the whole point of -x); this patch just changes pg_basebackup to throw an
      error instead of crashing when this happens.
      
      Noticed and diagnosed by TAKATSUKA Haruka
  13. 23 Aug, 2013 1 commit
    • In locate_grouping_columns(), don't expect an exact match of Var typmods. · fcf9ecad
      Tom Lane authored
      It's possible that inlining of SQL functions (or perhaps other changes?)
      has exposed typmod information not known at parse time.  In such cases,
      Vars generated by query_planner might have valid typmod values while the
      original grouping columns only have typmod -1.  This isn't a semantic
      problem since the behavior of grouping only depends on type not typmod,
      but it breaks locate_grouping_columns' use of tlist_member to locate the
      matching entry in query_planner's result tlist.
      
      We can fix this without an excessive amount of new code or complexity by
      relying on the fact that locate_grouping_columns only gets called when
      make_subplanTargetList has set need_tlist_eval == false, and that can only
      happen if all the grouping columns are simple Vars.  Therefore we only need
      to search the sub_tlist for a matching Var, and we can reasonably define a
      "match" as being a match of the Var identity fields
      varno/varattno/varlevelsup.  The code still Asserts that vartype matches,
      but ignores vartypmod.
      
      Per bug #8393 from Evan Martin.  The added regression test case is
      basically the same as his example.  This has been broken for a very long
      time, so back-patch to all supported branches.
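      The relaxed matching rule boils down to comparing the Var identity fields while
      ignoring vartypmod. A simplified standalone sketch follows; the Var struct is
      trimmed to the relevant fields and the helper name is invented, so this is an
      illustration of the rule rather than the planner code itself:

          #include <assert.h>
          #include <stddef.h>

          /* Trimmed-down Var: only the fields relevant to the matching rule. */
          typedef struct Var
          {
              int varno;          /* range table index */
              int varattno;       /* attribute number */
              int varlevelsup;    /* query nesting depth */
              int vartype;        /* type OID */
              int vartypmod;      /* type modifier; may legitimately differ */
          } Var;

          /*
           * Locate a grouping column in the sub_tlist by Var identity
           * (varno/varattno/varlevelsup), ignoring vartypmod.
           */
          static const Var *
          find_matching_var(const Var *groupvar, const Var *sub_tlist, size_t ntlist)
          {
              for (size_t i = 0; i < ntlist; i++)
              {
                  const Var *tv = &sub_tlist[i];

                  if (tv->varno == groupvar->varno &&
                      tv->varattno == groupvar->varattno &&
                      tv->varlevelsup == groupvar->varlevelsup)
                  {
                      /* Types must agree; typmod may be -1 vs. a real value. */
                      assert(tv->vartype == groupvar->vartype);
                      return tv;
                  }
              }
              return NULL;        /* no match found */
          }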
  14. 21 Aug, 2013 2 commits
    • Fix hash table size estimation error in choose_hashed_distinct(). · 34548763
      Tom Lane authored
      We should account for the per-group hashtable entry overhead when
      considering whether to use a hash aggregate to implement DISTINCT.  The
      comparable logic in choose_hashed_grouping() gets this right, but I think
      I omitted it here in the mistaken belief that there would be no overhead
      if there were no aggregate functions to be evaluated.  This can result in
      more than a 2X underestimate of the hash table size, if the tuples being
      aggregated aren't very wide.  Per report from Tomas Vondra.
      
      This bug is of long standing, but per discussion we'll only back-patch into
      9.3.  Changing the estimation behavior in stable branches seems to carry too
      much risk of destabilizing plan choices for already-tuned applications.
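      The fix concerns what goes into the per-entry size estimate that is compared
      against work_mem. A standalone sketch of the arithmetic is below; the constants are
      stand-ins for illustration, not PostgreSQL's actual header and bookkeeping sizes:

          #include <stdio.h>

          /* Round a size up to the next multiple of 8, like MAXALIGN does. */
          #define ALIGN8(x) (((x) + 7) & ~((size_t) 7))

          /*
           * Simplified per-entry estimate: data width plus tuple header plus the
           * hash-table bookkeeping overhead that the original coding left out.
           */
          static size_t
          hash_entry_size(size_t tuple_width)
          {
              const size_t tuple_header = 16;     /* minimal tuple header, illustrative */
              const size_t entry_overhead = 56;   /* per-entry bookkeeping, illustrative */

              return ALIGN8(tuple_width) + ALIGN8(tuple_header) + entry_overhead;
          }

          int
          main(void)
          {
              size_t width = 12;                  /* a narrow tuple */

              /* Ignoring the overhead underestimates narrow entries badly. */
              printf("without overhead: %zu bytes\n",
                     ALIGN8(width) + ALIGN8((size_t) 16));
              printf("with overhead:    %zu bytes\n", hash_entry_size(width));
              return 0;
          }

      Under these stand-in constants a 12-byte tuple comes out at 32 bytes without the
      overhead versus 88 bytes with it, which illustrates the "more than 2X" effect for
      narrow tuples.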
    • docs: Remove second 'trim' index reference · 5dcc48c2
      Bruce Momjian authored
      Per suggestion from Vik Fearing
  15. 20 Aug, 2013 2 commits