1. 15 Mar, 2018 5 commits
  2. 14 Mar, 2018 9 commits
  3. 13 Mar, 2018 11 commits
    • Add Oracle like handling of char arrays. · 3b7ab438
      Michael Meskes authored
      In some cases Oracle Pro*C handles char arrays differently than ECPG
      does.  This patch adds an Oracle compatibility mode to make ECPG
      behave like Pro*C.
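      (The new mode is selected via ecpg's -C ORACLE option, alongside the
      existing INFORMIX compatibility modes.)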
      
      Patch by David Rader <davidr@openscg.com>
    • Fix double frees in ecpg. · db2fc801
      Michael Meskes authored
      Patch by Patrick Krecker <patrick@judicata.com>
    • Expand AND/OR regression tests around NULL handling. · 1e22166e
      Andres Freund authored
      Previously there were no tests verifying that NULL handling in AND/OR
      was correct (i.e. that NULL rather than false is returned if an
      expression doesn't return true).
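
      For instance, under SQL's standard three-valued logic (illustrative
      examples, not taken from the committed tests):

        SELECT (NULL::boolean AND true);   -- NULL, not false
        SELECT (NULL::boolean AND false);  -- false
        SELECT (NULL::boolean OR false);   -- NULL, not false
        SELECT (NULL::boolean OR true);    -- true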
      
      Author: Andres Freund
    • Let Parallel Append over simple UNION ALL have partial subpaths. · 0927d2f4
      Robert Haas authored
      A simple UNION ALL gets flattened into an appendrel of subquery
      RTEs, but up until now it's been impossible for the appendrel to use
      the partial paths for the subqueries, so the appendrel could be
      implemented as a Parallel Append, but only as one with non-partial
      paths as children.
      
      There were three separate obstacles to removing that limitation, all
      addressed here.  First, when planning a subquery, propagate any
      partial paths to the
      final_rel so that they are potentially visible to outer query levels
      (but not if they have initPlans attached, because that wouldn't be
      safe).  Second, after planning a subquery, propagate any partial paths
      for the final_rel to the subquery RTE in the outer query level in the
      same way we do for non-partial paths.  Third, teach finalize_plan() to
      account for the possibility that the fake parameter we use for rescan
      signalling when the plan contains a Gather (Merge) node may be
      propagated from an outer query level.
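
      As an illustration, a query of this shape (table names hypothetical)
      can now be planned as a Parallel Append whose children are partial,
      parallel-aware scans:

        EXPLAIN (COSTS OFF)
        SELECT a FROM big_table1
        UNION ALL
        SELECT a FROM big_table2;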
      
      Patch by me, reviewed and tested by Amit Khandekar, Rajkumar
      Raghuwanshi, and Ashutosh Bapat.  Test cases based on examples by
      Rajkumar Raghuwanshi.
      
      Discussion: http://postgr.es/m/CA+Tgmoa6L9A1nNCk3aTDVZLZ4KkHDn1+tm7mFyFvP+uQPS7bAg@mail.gmail.com
    • When updating reltuples after ANALYZE, just extrapolate from our sample. · d04900de
      Tom Lane authored
      The existing logic for updating pg_class.reltuples trusted the sampling
      results only for the pages ANALYZE actually visited, preferring to
      believe the previous tuple density estimate for all the unvisited pages.
      While there's some rationale for doing that for VACUUM (first that
      VACUUM is likely to visit a very nonrandom subset of pages, and second
      that we know for sure that the unvisited pages did not change), there's
      no such rationale for ANALYZE: by assumption, it's looked at an unbiased
      random sample of the table's pages.  Furthermore, in a very large table
      ANALYZE will have examined only a tiny fraction of the table's pages,
      meaning it cannot slew the overall density estimate very far at all.
      In a table that is physically growing, this causes reltuples to increase
      nearly proportionally to the change in relpages, regardless of what is
      actually happening in the table.  This has been observed to cause reltuples
      to become so much larger than reality that it effectively shuts off
      autovacuum, whose threshold for doing anything is a fraction of reltuples.
      (Getting to the point where that would happen seems to require some
      additional, not well understood, conditions.  But it's undeniable that if
      reltuples is seriously off in a large table, ANALYZE alone will not fix it
      in any reasonable number of iterations, especially not if the table is
      continuing to grow.)
      
      Hence, restrict the use of vac_estimate_reltuples() to VACUUM alone,
      and in ANALYZE, just extrapolate from the sample pages on the assumption
      that they provide an accurate model of the whole table.  If, by very bad
      luck, they don't, at least another ANALYZE will fix it; in the old logic
      a single bad estimate could cause problems indefinitely.
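
      Concretely, the new ANALYZE-side computation is a straight
      extrapolation (descriptive names here, not the actual C identifiers):

        reltuples = (live tuples in sample / pages in sample) * relpages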
      
      In HEAD, let's remove vac_estimate_reltuples' is_analyze argument
      altogether; it was never used for anything and now it's totally pointless.
      But keep it in the back branches, in case any third-party code is calling
      this function.
      
      Per bug #15005.  Back-patch to all supported branches.
      
      David Gould, reviewed by Alexander Kuzmenkov, cosmetic changes by me
      
      Discussion: https://postgr.es/m/20180117164916.3fdcf2e9@engels
    • Avoid holding AutovacuumScheduleLock while rechecking table statistics. · 38f7831d
      Tom Lane authored
      In databases with many tables, re-fetching the statistics while
      holding AutovacuumScheduleLock takes some time, which seriously
      decreases the available concurrency for multiple autovac workers.
      There's discussion afoot about more complete
      fixes, but a simple and back-patchable amelioration is to claim the table
      and release the lock before rechecking stats.  If we find out there's no
      longer a reason to process the table, re-taking the lock to un-claim the
      table is cheap enough.
      
      (This patch is quite old, but got lost amongst a discussion of more
      aggressive fixes.  It's not clear when or if such a fix will be
      accepted, but in any case it'd be unlikely to get back-patched.
      Let's do this now so we have some improvement for the back branches.)
      
      In passing, make the normal un-claim step take AutovacuumScheduleLock
      not AutovacuumLock, since that is what is documented to protect the
      wi_tableoid field.  This wasn't an actual bug in view of the fact that
      readers of that field hold both locks, but it creates some concurrency
      penalty against operations that need only AutovacuumLock.
      
      Back-patch to all supported versions.
      
      Jeff Janes
      
      Discussion: https://postgr.es/m/26118.1520865816@sss.pgh.pa.us
    • Set connection back to NULL after freeing it. · b32fad52
      Michael Meskes authored
      Patch by Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>
    • Move strtoint() to common · 17bb6250
      Peter Eisentraut authored
      Several places used similar code to convert a string to an int, so take
      the function that we already had and make it globally available.
      Reviewed-by: Michael Paquier <michael@paquier.xyz>
    • Change internal integer representation of Value node · 6cf86f43
      Peter Eisentraut authored
      A Value node would store an integer as a long.  This causes needless
      portability risks, as long can be of varying sizes.  Change it to use
      int instead.  All code using this was already careful to only store
      32-bit values anyway.
      Reviewed-by: Michael Paquier <michael@paquier.xyz>
    • Fix CREATE TABLE / LIKE with bigint identity column · 377b5ac4
      Peter Eisentraut authored
      CREATE TABLE / LIKE with a bigint identity column would fail on
      platforms where long is 32 bits.  Copying the sequence values used
      makeInteger(), which would truncate the 64-bit sequence data to 32 bits.
      To fix, use makeFloat() instead, like the parser.  (This does not
      actually make use of floats, but stores the values as strings.)
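
      A sketch of the failing case (names illustrative); copying the
      64-bit MAXVALUE through makeInteger() truncated it to 32 bits:

        CREATE TABLE src (id bigint GENERATED ALWAYS AS IDENTITY
                          (MAXVALUE 9223372036854775807));
        CREATE TABLE dst (LIKE src INCLUDING IDENTITY);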
      
      Bug: #15096
      Reviewed-by: Michael Paquier <michael@paquier.xyz>
  4. 12 Mar, 2018 4 commits
  5. 11 Mar, 2018 2 commits
    • Fix improper uses of canonicalize_qual(). · 4a4e2442
      Tom Lane authored
      One of the things canonicalize_qual() does is to remove constant-NULL
      subexpressions of top-level AND/OR clauses.  It does that on the assumption
      that what it's given is a top-level WHERE clause, so that NULL can be
      treated like FALSE.  Although this is documented down inside a subroutine
      of canonicalize_qual(), it wasn't mentioned in the documentation of that
      function itself, and some callers hadn't gotten that memo.
      
      Notably, commit d007a950 caused get_relation_constraints() to apply
      canonicalize_qual() to CHECK constraints.  That allowed constraint
      exclusion to misoptimize situations in which a CHECK constraint had a
      provably-NULL subclause, as seen in the regression test case added here,
      in which a child table that should be scanned is not.  (Although this
      thinko is ancient, the test case doesn't fail before 9.2, for reasons
      I've not bothered to track down in detail.  There may be related cases
      that do fail before that.)
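
      A sketch along the lines of that test case (names illustrative):
      the CHECK below yields NULL rather than false for rows with f1 <> 1,
      so such rows are accepted and the child must still be scanned:

        CREATE TABLE cnullparent (f1 int);
        CREATE TABLE cnullchild (CHECK (f1 = 1 OR f1 = NULL))
          INHERITS (cnullparent);
        INSERT INTO cnullchild VALUES (2);       -- CHECK yields NULL: accepted
        SELECT * FROM cnullparent WHERE f1 = 2;  -- must scan cnullchild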
      
      More recently, commit f0e44751 added an independent bug by applying
      canonicalize_qual() to index expressions, which is even sillier since
      those might not even be boolean.  If they are, though, I think this
      could lead to making incorrect index entries for affected index
      expressions in v10.  I haven't attempted to prove that though.
      
      To fix, add an "is_check" parameter to canonicalize_qual() to specify
      whether it should assume WHERE or CHECK semantics, and make it perform
      NULL-elimination accordingly.  Adjust the callers to apply the right
      semantics, or remove the call entirely in cases where it's not known
      that the expression has one or the other semantics.  I also removed
      the call in some cases involving partition expressions, where it should
      be a no-op because such expressions should be canonical already ...
      and was a no-op, independently of whether it could in principle have
      done something, because it was being handed the qual in implicit-AND
      format which isn't what it expects.  In HEAD, add an Assert to catch
      that type of mistake in future.
      
      This represents an API break for external callers of canonicalize_qual().
      While that's intentional in HEAD to make such callers think about which
      case applies to them, it seems like something we probably wouldn't be
      thanked for in released branches.  Hence, in released branches, the
      extra parameter is added to a new function canonicalize_qual_ext(),
      and canonicalize_qual() is a wrapper that retains its old behavior.
      
      Patch by me with suggestions from Dean Rasheed.  Back-patch to all
      supported branches.
      
      Discussion: https://postgr.es/m/24475.1520635069@sss.pgh.pa.us
    • Clarify initdb --help message for --wal-segsize · fedabe1f
      Magnus Hagander authored
      Specify that the value is in megabytes. This aligns the message with
      what's in the documentation.
  6. 10 Mar, 2018 1 commit
    • In psql, restore old behavior of Query_for_list_of_functions. · b6e132dd
      Tom Lane authored
      Historically, tab completion for functions has offered the names of
      aggregates as well.  This is essential in at least one context, namely
      GRANT/REVOKE, because there is no GRANT ON AGGREGATE syntax.  There
      are other cases where a command that nominally is for functions will
      allow aggregates as well, though not all do.
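
      For context, privileges on an aggregate are granted through the
      FUNCTION syntax (names here are illustrative):

        CREATE AGGREGATE my_sum (int) (SFUNC = int4pl, STYPE = int);
        GRANT EXECUTE ON FUNCTION my_sum(int) TO some_role;
        -- there is no GRANT ... ON AGGREGATE form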
      
      Commit fd1a421f changed this query to disallow aggregates, but that
      doesn't seem to have been thought through very carefully.  Change it
      to allow aggregates (but still ignore procedures).
      
      We might at some point tighten this up, but it'd require sorting through
      all the uses of this query to see which ones should offer aggregate
      names and which shouldn't.  Given the lack of field complaints about
      the historical laxity here, that's work I'm not eager to do right now.
      
      Discussion: https://postgr.es/m/14268.1520283126@sss.pgh.pa.us
  7. 09 Mar, 2018 2 commits
    • Improve predtest.c's internal docs, and enhance its functionality a bit. · 5748f3a0
      Tom Lane authored
      Commit b08df9ca left things rather poorly documented as far as the
      exact semantics of "clause_is_check" mode went.  Also, that mode did
      not really work correctly for predicate_refuted_by; although given the
      lack of specification as to what it should do, as well as the lack
      of any actual use-case, that's perhaps not surprising.
      
      Rename "clause_is_check" to "weak" proof mode, and provide specifications
      for what it should do.  I defined weak refutation as meaning "truth of A
      implies non-truth of B", which makes it possible to use the mode in the
      part of relation_excluded_by_constraints that checks for mutually
      contradictory WHERE clauses.  Fix up several places that did things wrong
      for that definition.  (As far as I can see, these errors would only lead
      to failure-to-prove, not incorrect claims of proof, making them not
      serious bugs even aside from the fact that v10 contains no use of this
      mode.  So there seems no need for back-patching.)
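
      For reference, the four proof strengths, paraphrasing the new
      predtest.c comments:

        strong implication:  truth of A implies truth of B
        weak implication:    non-falsity of A implies non-falsity of B
        strong refutation:   truth of A implies falsity of B
        weak refutation:     truth of A implies non-truth of B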
      
      In addition, teach predicate_refuted_by_recurse that it can use
      predicate_implied_by_recurse after all when processing a strong NOT-clause,
      so long as it asks for the correct proof strength.  This is an optimization
      that could have been included in commit b08df9ca, but wasn't.
      
      Also, simplify and generalize the logic that checks for whether nullness of
      the argument of IS [NOT] NULL would force overall nullness of the predicate
      or clause.  (This results in a change in the partition_prune test's output,
      as it is now able to prune an all-nulls partition that it did not recognize
      before.)
      
      In passing, in PartConstraintImpliedByRelConstraint, remove bogus
      conversion of the constraint list to explicit-AND form and then right back
      again; that accomplished nothing except forcing a useless extra level of
      recursion inside predicate_implied_by.
      
      Discussion: https://postgr.es/m/5983.1520487191@sss.pgh.pa.us
    • Fix test_predtest's idea of what weak refutation means. · a63c3274
      Tom Lane authored
      I'd initially supposed that predicate_refuted_by(..., true) ought to
      say that "A refutes B" means "non-falsity of A implies non-truth of B".
      But it seems better to define it as "truth of A implies non-truth of B".
      This is more useful in the current system, slightly easier to prove,
      and in closer correspondence to the existing code behavior.
      
      With this change, test_predtest no longer claims that any existing
      test cases show false proof reports, though there still are cases
      where we could prove something and fail to.
      
      Discussion: https://postgr.es/m/5983.1520487191@sss.pgh.pa.us
  8. 08 Mar, 2018 5 commits
    • Correctly assess parallel-safety of tlists when SRFs are used. · 960df2a9
      Robert Haas authored
      Since commit 69f4b9c8, the existing
      code was no longer assessing the parallel-safety of the real tlist
      for each upper rel, but rather the first of possibly several tlists
      created by split_pathtarget_at_srfs().  Repair.
      
      Even though this is clearly wrong, it's not clear that it has any
      user-visible consequences at the moment, so no back-patch for now.  If
      we discover later that it does have user-visible consequences, we
      might need to back-patch this to v10.
      
      Patch by me, per a report from Rajkumar Raghuwanshi.
      
      Discussion: http://postgr.es/m/CA+Tgmoaob_Strkg4Dcx=VyxnyXtrmkV=ofj=pX7gH9hSre-g0Q@mail.gmail.com
    • Add test scaffolding for exercising optimizer's predicate-proof logic. · 44468f49
      Tom Lane authored
      The predicate-proof code in predtest.c is a bit hard to test as-is:
      you have to construct a query whose plan changes depending on the success
      of a test, and in tests that have been around for awhile, it's always
      possible that the plan shape is now being determined by some other factor.
      Our existing regression tests aren't doing real well at providing full
      code coverage of predtest.c, either.  So, let's add a small test module
      that allows directly inspecting the results of predicate_implied_by()
      and predicate_refuted_by() for arbitrary boolean expressions.
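
      Roughly how the module is exercised (a sketch based on its
      regression tests; the query's two boolean columns supply values for
      the predicate and clause, which are cross-checked against what the
      provers claim):

        CREATE EXTENSION test_predtest;
        SELECT * FROM test_predtest($$
          SELECT x IS NOT NULL, x > 5
          FROM (VALUES (1), (6), (NULL::int)) AS v(x)
        $$);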
      
      I chose the set of tests committed here in order to get reasonably
      complete code coverage of predtest.c just from running this test
      module, and to cover some cases called out as being interesting in
      the existing comments.  We might want to add more later.  But this
      set already shows a few cases where perhaps things could be improved.
      
      Indeed, this exercise proves that predicate_refuted_by() is buggy for
      the case of clause_is_check = true, though fortunately we aren't using
      that case anywhere yet.  I'll look into doing something about that in
      a separate commit.  For now, just memorialize the current behavior.
      
      Discussion: https://postgr.es/m/5983.1520487191@sss.pgh.pa.us
    • Revert "Temporarily instrument postgres_fdw test to look for statistics changes." · 04e7ecad
      Tom Lane authored
      This reverts commit c2c537c5.
      It's now clear that whatever is going on there, it can't be blamed
      on unexpected ANALYZE runs, because the statistics are the same
      just before the failing query as they were at the start of the test.
    • In initdb, don't bother trying max_connections = 10. · 6a0b30f0
      Tom Lane authored
      The server won't actually start with that setting anymore, not since
      we raised the default max_wal_senders to 10.  Per discussion, we don't
      wish to back down on that default, so instead raise the effective floor
      for max_connections (to 20).  It's still possible to configure a smaller
      setting manually, but initdb won't set it that way.
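
      (With the default of 3 reserved superuser connections, the new floor
      also satisfies the cross-check introduced alongside this change:
      3 + 10 < 20.)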
      
      Since that change happened in v10, back-patch to v10.
      
      Kyotaro Horiguchi
      
      Discussion: https://postgr.es/m/20180209.170823.42008365.horiguchi.kyotaro@lab.ntt.co.jp
    • Fix cross-checking of ReservedBackends/max_wal_senders/MaxConnections. · 4e0c743c
      Tom Lane authored
      We were independently checking ReservedBackends < MaxConnections and
      max_wal_senders < MaxConnections, but because walsenders aren't allowed
      to use superuser-reserved connections, that's really the wrong thing.
      Correct behavior is to insist on ReservedBackends + max_wal_senders being
      less than MaxConnections.  Fix the code and associated documentation.
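
      For example, with the defaults of superuser_reserved_connections = 3
      and max_wal_senders = 10, max_connections must now be at least 14.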
      
      This has been wrong for a long time, but since the situation probably
      hardly ever arises in the field (especially pre-v10, when the default
      for max_wal_senders was zero), no back-patch.
      
      Discussion: https://postgr.es/m/28271.1520195491@sss.pgh.pa.us
  9. 07 Mar, 2018 1 commit