1. 14 Dec, 2018 1 commit
    • Make pg_statistic and related code account more honestly for collations. · 5e092800
      Tom Lane authored
      When we first put in collations support, we basically punted on teaching
      pg_statistic, ANALYZE, and the planner selectivity functions about that.
      They've just used DEFAULT_COLLATION_OID independently of the actual
      collation of the data.  It's time to improve that, so:
      
      * Add columns to pg_statistic that record the specific collation associated
      with each statistics slot.
      
      * Teach ANALYZE to use the column's actual collation when comparing values
      for statistical purposes, and record this in the appropriate slot.  (Note
      that type-specific typanalyze functions are now expected to fill
      stats->stacoll with the appropriate collation, too.)
      
      * Teach assorted selectivity functions to use the actual collation of
      the stats they are looking at, instead of just assuming it's
      DEFAULT_COLLATION_OID.
      
      This should give noticeably better results in selectivity estimates for
      columns with nondefault collations, at least for query clauses that use
      that same collation (which would be the default behavior in most cases).
      It's still true that comparisons with explicit COLLATE clauses different
      from the stored data's collation won't be well-estimated, but that's no
      worse than before.  Also, this patch does make the first step towards
      doing better with that, which is that it's now theoretically possible to
      collect stats for a collation other than the column's own collation.
      
      Patch by me; thanks to Peter Eisentraut for review.
      
      Discussion: https://postgr.es/m/14706.1544630227@sss.pgh.pa.us
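
      As a rough SQL illustration of the effect (table, data, and collation
      names are made up for this sketch, not taken from the patch):

          CREATE TABLE people (name text COLLATE "C");
          INSERT INTO people
              SELECT 'name_' || g FROM generate_series(1, 10000) AS g;
          ANALYZE people;   -- statistics slots now record COLLATE "C"

          -- Comparison under the column's own collation: estimated from
          -- stats gathered with the same comparison rules.
          EXPLAIN SELECT * FROM people WHERE name > 'name_5';

          -- An explicit, different collation is still not specially
          -- estimated, same as before this change.
          EXPLAIN SELECT * FROM people WHERE name > 'name_5' COLLATE "POSIX";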
  2. 13 Dec, 2018 7 commits
    • Introduce new extended routines for FDW and foreign server lookups · 8fb569e9
      Michael Paquier authored
      The cache lookup routines for foreign-data wrappers and foreign servers
      are extended with an extra argument holding a set of flags.  The only
      flag available for now indicates whether a missing object should
      result in an error or not; the flag set is designed to be extensible
      as needed.  These new routines are added to the existing set of
      user-visible FDW APIs and documented accordingly.  They will be used
      by future patches to improve the SQL interface for object addresses.
      
      Author: Michael Paquier
      Reviewed-by: Álvaro Herrera
      Discussion: https://postgr.es/m/CAB7nPqSZxrSmdHK-rny7z8mi=EAFXJ5J-0RbzDw6aus=wB5azQ@mail.gmail.com
    • Create a separate oid range for oids assigned by genbki.pl. · 09568ec3
      Andres Freund authored
      The changes I made in 578b2297 assigned oids below
      FirstBootstrapObjectId to objects in include/catalog/*.dat files that
      did not have an oid assigned, starting at the max oid explicitly
      assigned.  Tom criticized that for mainly two reasons:
      1) It's not clear which values are manually and which automatically
         assigned.
      2) The space below FirstBootstrapObjectId gets pretty crowded, and
         some PostgreSQL forks have used oids >= 9000 for their own objects,
         to avoid conflicting.
      
      Thus create a new range for objects not assigned explicit oids, but
      assigned by genbki.pl.  For now 1-9999 is for explicitly assigned
      oids, FirstGenbkiObjectId (10000) to FirstBootstrapObjectId (12000) - 1
      is for genbki.pl-assigned oids, and FirstBootstrapObjectId up to
      FirstNormalObjectId (16384) - 1 is for oids assigned during bootstrap.
      It's possible that we'll have to adjust these boundaries, but there's
      some headroom for now.
      
      Add a note suggesting that oids in forks should be assigned in the
      9000-9999 range.
      
      Catversion bump for obvious reasons.
      
      Per complaint from Tom Lane.
      
      Author: Andres Freund
      Discussion: https://postgr.es/m/16845.1544393682@sss.pgh.pa.us
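
      As a rough illustration only (not from the commit; which catalogs end
      up with genbki-assigned oids depends on whether their .dat entries
      carry explicit oids), the resulting layout can be eyeballed on a
      built server:

          -- Rows whose oids were assigned by genbki.pl should fall in the
          -- 10000..11999 range; pg_amop is one catalog whose entries
          -- typically carry no explicit oids.
          SELECT oid, amopopr::regoperator
          FROM pg_amop
          WHERE oid >= 10000 AND oid < 12000
          ORDER BY oid
          LIMIT 10;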
    • Fix bogus logic for skipping unnecessary partcollation dependencies. · 84d51488
      Tom Lane authored
      The idea here is to not call recordDependencyOn for the default collation,
      since we know that's pinned.  But what the code actually did was to record
      the partition key's dependency on the opclass twice, instead.
      
      Evidently introduced by sloppy coding in commit 2186b608.  Back-patch
      to v10 where that came in.
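
      A hedged pg_depend sketch of the intended outcome (table name made
      up): a partition key using a nondefault collation records a
      dependency on that collation, while one using the pinned default
      collation records none.

          CREATE TABLE parts (k text COLLATE "C") PARTITION BY RANGE (k);

          -- objsubid = 0 restricts this to whole-relation dependencies,
          -- i.e. the partition key's reference to the collation rather
          -- than the column's own.
          SELECT c.collname
          FROM pg_depend d
          JOIN pg_collation c ON c.oid = d.refobjid
          WHERE d.classid = 'pg_class'::regclass
            AND d.objid = 'parts'::regclass
            AND d.objsubid = 0
            AND d.refclassid = 'pg_collation'::regclass;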
    • Drop no-op CoerceToDomain nodes from expressions at planning time. · 04fe805a
      Tom Lane authored
      If a domain has no constraints, then CoerceToDomain doesn't really do
      anything and can be simplified to a RelabelType.  This not only saves
      cycles at execution time but also allows the planner to optimize better
      (for instance, match the coerced expression to an index on the underlying
      column).  However, we do have to support invalidating the plan later if
      a constraint gets added to the domain.  That's comparable to the case of
      a change to a SQL function that had been inlined into a plan, so all the
      necessary logic already exists for plans depending on functions.  We
      need only duplicate or share that logic for domains.
      
      ALTER DOMAIN ADD/DROP CONSTRAINT need to be taught to send out sinval
      messages for the domain's pg_type entry, since those operations don't
      update that row.  (ALTER DOMAIN SET/DROP NOT NULL do update that row,
      so no code change is needed for them.)
      
      Testing this revealed what's really a pre-existing bug in plpgsql:
      it caches the SQL-expression-tree expansion of type coercions and
      had no provision for invalidating entries in that cache.  Up to now
      that was only a problem if such an expression had inlined a SQL
      function that got changed, which is unlikely though not impossible.
      But failing to track changes of domain constraints breaks an existing
      regression test case and would likely cause practical problems too.
      
      We could fix that locally in plpgsql, but what seems like a better
      idea is to build some generic infrastructure in plancache.c to store
      standalone expressions and track invalidation events for them.
      (It's tempting to wonder whether plpgsql's "simple expression" stuff
      could use this code with lower overhead than its current use of the
      heavyweight plancache APIs.  But I've left that idea for later.)
      
      Other stuff fixed in passing:
      
      * Allow estimate_expression_value() to drop CoerceToDomain
      unconditionally, effectively assuming that the coercion will succeed.
      This will improve planner selectivity estimates for cases involving
      estimatable expressions that are coerced to domains.  We could have
      done this independently of everything else here, but there wasn't
      previously any need for eval_const_expressions_mutator to know about
      CoerceToDomain at all.
      
      * Use a dlist for plancache.c's list of cached plans, rather than a
      manually threaded singly-linked list.  That eliminates a potential
      performance problem in DropCachedPlan.
      
      * Fix a couple of inconsistencies in typecmds.c about whether
      operations on domains drop RowExclusiveLock on pg_type.  Our common
      practice is that DDL operations do drop catalog locks, so standardize
      on that choice.
      
      Discussion: https://postgr.es/m/19958.1544122124@sss.pgh.pa.us
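
      A minimal SQL sketch of the user-visible effect (names are made up):
      with a constraint-free domain the coercion can be dropped at plan
      time, so an index on the underlying column remains usable, and adding
      a constraint later must invalidate such plans.

          CREATE DOMAIN plain_text AS text;      -- no constraints yet
          CREATE TABLE docs (body text);
          CREATE INDEX docs_body_idx ON docs (body);

          -- The CoerceToDomain over docs.body is simplified to a
          -- RelabelType, so docs_body_idx can still be considered here.
          EXPLAIN SELECT * FROM docs WHERE body::plain_text = 'hello';

          -- Adding a constraint now sends an sinval message for the
          -- domain's pg_type row, forcing dependent plans to be rebuilt.
          ALTER DOMAIN plain_text ADD CONSTRAINT nonempty CHECK (VALUE <> '');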
    • Prevent GIN deleted pages from being reclaimed too early · 52ac6cd2
      Alexander Korotkov authored
      When GIN vacuum deletes a posting tree page, it assumes that no concurrent
      searchers can access it, thanks to ginStepRight() locking two pages at once.
      However, since 9.4, searches can skip parts of posting trees while
      descending from the root.  That leads to the risk that a page is
      deleted and reclaimed before a concurrent search reaches it.

      This commit prevents that risk by waiting for every transaction that
      might still reference the page to finish.  For binary compatibility
      we can't change GinPageOpaqueData to store the corresponding
      transaction id.  Instead we reuse the page header's pd_prune_xid
      field, which is unused in index pages.
      
      Discussion: https://postgr.es/m/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com
      Author: Andrey Borodin, Alexander Korotkov
      Reviewed-by: Alexander Korotkov
      Backpatch-through: 9.4
    • Prevent deadlock in ginRedoDeletePage() · c6ade7a8
      Alexander Korotkov authored
      On a standby, ginRedoDeletePage() can run concurrently with read-only
      queries.  Those queries can traverse the posting tree in two ways.
      1) Using rightlinks via ginStepRight(), which locks the next page
         before unlocking its left sibling.
      2) Using downlinks via ginFindLeafPage(), which locks at most one
         page at a time.

      The original lock order was: page, parent, left sibling.  That lock
      order can deadlock with ginStepRight().  In order to prevent the
      deadlock, this commit changes the lock order to: left sibling, page,
      parent.  Note that the parent's position in the lock order seems
      insignificant, because we lock only one page at a time while
      traversing downlinks.
      
      Reported-by: Chen Huajun
      Diagnosed-by: Chen Huajun, Peter Geoghegan, Andrey Borodin
      Discussion: https://postgr.es/m/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com
      Author: Alexander Korotkov
      Backpatch-through: 9.4
    • Fix deadlock in GIN vacuum introduced by 218f5158 · fd83c83d
      Alexander Korotkov authored
      Before 218f5158, if a posting tree page was about to be deleted, the
      whole posting tree was locked by LockBufferForCleanup() on its root,
      preventing all concurrent inserts.  218f5158 reduced the locking to
      the subtree containing the page to be deleted.  However, due to
      concurrent parent splits, an inserter doesn't always hold pins on all
      the pages constituting the path from the root to the target leaf
      page.  That could cause a deadlock between the GIN vacuum process and
      a GIN inserter, and we didn't find a non-invasive way to fix this.

      This commit reverts VACUUM's behavior to locking the whole posting
      tree before deleting any page.  However, we keep another useful
      change from 218f5158: the tree is locked only if there are pages to
      be deleted.
      
      Reported-by: Chen Huajun
      Diagnosed-by: Chen Huajun, Andrey Borodin, Peter Geoghegan
      Discussion: https://postgr.es/m/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com
      Author: Alexander Korotkov, based on ideas from Andrey Borodin and Peter Geoghegan
      Reviewed-by: Andrey Borodin
      Backpatch-through: 10
  3. 12 Dec, 2018 3 commits
    • Repair bogus EPQ plans generated for postgres_fdw foreign joins. · 77d4d88a
      Tom Lane authored
      postgres_fdw's postgresGetForeignPlan() assumes without checking that the
      outer_plan it's given for a join relation must have a NestLoop, MergeJoin,
      or HashJoin node at the top.  That's been wrong at least since commit
      4bbf6edf (which could cause insertion of a Sort node on top) and it seems
      like a pretty unsafe thing to Just Assume even without that.
      
      Through blind good fortune, this doesn't seem to have any worse
      consequences today than strange EXPLAIN output, but it's clearly trouble
      waiting to happen.
      
      To fix, test the node type explicitly before touching Join-specific
      fields, and avoid jamming the new tlist into a node type that can't
      do projection.  Export a new support function from createplan.c
      to avoid building low-level knowledge about the latter into FDWs.
      
      Back-patch to 9.6 where the faulty coding was added.  Note that the
      associated regression test cases don't show any changes before v11,
      apparently because the tests back-patched with 4bbf6edf don't actually
      exercise the problem case before then (there's no top-level Sort
      in those plans).
      
      Discussion: https://postgr.es/m/8946.1544644803@sss.pgh.pa.us
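
      A hedged sketch of the query shape involved (extension setup, server,
      and table names are made up): a pushed-down foreign join under FOR
      UPDATE needs a locally built EPQ subplan, and an ORDER BY can leave a
      Sort node sitting on top of that local join plan.

          CREATE EXTENSION IF NOT EXISTS postgres_fdw;
          CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw
              OPTIONS (dbname 'postgres');
          CREATE USER MAPPING FOR CURRENT_USER SERVER loopback;

          CREATE TABLE base1 (id int, v text);
          CREATE TABLE base2 (id int, w text);
          CREATE FOREIGN TABLE ft1 (id int, v text)
              SERVER loopback OPTIONS (table_name 'base1');
          CREATE FOREIGN TABLE ft2 (id int, w text)
              SERVER loopback OPTIONS (table_name 'base2');

          EXPLAIN (VERBOSE, COSTS OFF)
          SELECT * FROM ft1 JOIN ft2 USING (id)
          ORDER BY ft1.id FOR UPDATE OF ft1;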
    • Repair bogus handling of multi-assignment Params in upper plan levels. · 0f7ec8d9
      Tom Lane authored
      Our support for multiple-set-clauses in UPDATE assumes that the Params
      referencing a MULTIEXPR_SUBLINK SubPlan will appear before that SubPlan
      in the targetlist of the plan node that calculates the updated row.
      (Yeah, it's a hack...)  In some PG branches it's possible that a Result
      node gets inserted between the primary calculation of the update tlist
      and the ModifyTable node.  setrefs.c did the wrong thing in this case
      and left the upper-level Params as Params, causing a crash at runtime.
      What it should do is replace them with "outer" Vars referencing the child
      plan node's output.  That's a result of careless ordering of operations
      in fix_upper_expr_mutator, so we can fix it just by reordering the code.
      
      Fix fix_join_expr_mutator similarly for consistency, even though join
      nodes could never appear in such a context.  (In general, it seems
      likely to be a bit cheaper to use Vars than Params in such situations
      anyway, so this patch might offer a tiny performance improvement.)
      
      The hazard extends back to 9.5 where the MULTIEXPR_SUBLINK stuff
      was introduced, so back-patch that far.  However, this may be a live
      bug only in 9.6.x and 10.x, as the other branches don't seem to want
      to calculate the final tlist below the Result node.  (That plan shape
      change between branches might be a mini-bug in itself, but I'm not
      really interested in digging into the reasons for that right now.
      Still, add a regression test memorializing what we expect there,
      so we'll notice if it changes again.)
      
      Per bug report from Eduards Bezverhijs.
      
      Discussion: https://postgr.es/m/b6cd572a-3e44-8785-75e9-c512a5a17a73@tieto.com
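
      For reference, the SQL form that produces MULTIEXPR_SUBLINK Params is
      the multiple-set-clause UPDATE (table names here are illustrative):

          CREATE TABLE tgt (id int PRIMARY KEY, a int, b int);
          CREATE TABLE src (id int PRIMARY KEY, x int, y int);

          -- The row-valued sub-select becomes a MULTIEXPR_SUBLINK SubPlan;
          -- its output Params must be resolved at the plan level that
          -- computes the updated row.
          UPDATE tgt
          SET (a, b) = (SELECT x, y FROM src WHERE src.id = tgt.id)
          WHERE id < 100;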
    • Tweak pg_partition_tree for undefined relations and unsupported relkinds · cc53123b
      Michael Paquier authored
      This fixes a crash which happened when calling the function directly
      with a relation OID referring to a non-existing object, and changes the
      behavior so that NULL is returned for unsupported relkinds instead of
      generating an error.  This puts the new function in line with many other
      system functions, and eases actions like full scans of pg_class.
      
      Author: Michael Paquier
      Reviewed-by: Amit Langote, Stephen Frost
      Discussion: https://postgr.es/m/20181207010406.GO2407@paquier.xyz
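
      Hedged examples of the adjusted behavior (object names made up):

          -- An OID that matches no relation no longer causes trouble.
          SELECT * FROM pg_partition_tree('0'::regclass);

          -- An unsupported relkind (here, a sequence) now yields NULL
          -- output instead of an error.
          CREATE SEQUENCE some_seq;
          SELECT * FROM pg_partition_tree('some_seq'::regclass);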
  4. 11 Dec, 2018 3 commits
  5. 10 Dec, 2018 7 commits
  6. 09 Dec, 2018 1 commit
  7. 07 Dec, 2018 6 commits
  8. 06 Dec, 2018 6 commits
  9. 05 Dec, 2018 2 commits
    • Don't mark partitioned indexes invalid unnecessarily · 71a05b22
      Alvaro Herrera authored
      When an index is created on a partitioned table using ONLY (don't
      recurse to partitions), it gets marked invalid until index partitions
      are attached for each table partition.  But there's no reason to do this
      if there are no partitions ... and moreover, there's no way to get the
      index to become valid afterwards, because all partitions that get
      created/attached get their own index partition already attached to the
      parent index, so there's no chance to do ALTER INDEX ... ATTACH PARTITION
      that would make the parent index valid.
      
      Fix by not marking the index as invalid to begin with.
      
      This is very similar to 9139aa19, but the pg_dump aspect does not
      appear to be relevant until we add FKs that can point to PKs on
      partitioned tables.  (I tried to cause the pg_upgrade test to break by
      leaving some of these bogus indexes around, but wasn't able to.)
      
      Making this change means that an index that was supposed to be invalid
      in the insert_conflict regression test is no longer invalid; reorder the
      DDL so that the test continues to verify the behavior we want it to.
      
      Author: Álvaro Herrera
      Reviewed-by: Amit Langote
      Discussion: https://postgr.es/m/20181203225019.2vvdef2ybnkxt364@alvherre.pgsql
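
      A minimal SQL sketch of the case being fixed (names are made up):

          CREATE TABLE parent (a int) PARTITION BY LIST (a);

          -- ONLY: don't recurse to partitions; since there are none, the
          -- index is complete and is no longer marked invalid.
          CREATE INDEX parent_a_idx ON ONLY parent (a);

          SELECT indisvalid FROM pg_index
          WHERE indexrelid = 'parent_a_idx'::regclass;   -- expect: true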
    • Fix invalid value of synchronous_commit in description of flush_lag · 99f9ccee
      Michael Paquier authored
      "remote_flush" has never been a valid user-facing value, but "on" is.
      
      Author: Maksim Milyutin
      Discussion: https://postgr.es/m/27b3b80c-3615-2d76-02c5-44566b53136c@gmail.com
  10. 04 Dec, 2018 2 commits
    • Fix typo · f502fc88
      Stephen Frost authored
      Backends don't typically exist uncleanly, but they can certainly exit
      uncleanly, and it's exiting uncleanly that's being discussed here.
    • postgres_fdw: Improve cost and size estimation for aggregate pushdown. · f8f6e446
      Etsuro Fujita authored
      In commit 7012b132, which added aggregate pushdown to postgres_fdw,
      we didn't account for the evaluation cost and the selectivity of
      HAVING quals attached to ForeignPaths performing aggregate pushdown,
      as core had never accounted for those for AggPaths and GroupPaths.
      We also never set the cost and selectivity of the locally-checked
      quals (ie, fpinfo's local_conds_cost and local_conds_sel), leaving
      them initialized to zeros; since estimate_path_cost_size factors
      these in when estimating the result size and the evaluation cost of
      such a ForeignPath with the use_remote_estimate option enabled, this
      caused it to produce underestimates in that case.

      Commit 7b6c0754 changed core so that it accounts for the evaluation
      cost and the selectivity of HAVING quals in aggregation paths, so
      change postgres_fdw's aggregate pushdown code to match.  This not
      only fixes the underestimation issue mentioned above, but also
      improves the estimation using local statistics in that function when
      that option is disabled.
      
      This would be a bug fix rather than an improvement, but apply it to HEAD
      only to avoid destabilizing existing plan choices.
      
      Author: Etsuro Fujita
      Discussion: https://postgr.es/m/5BFD3EAD.2060301%40lab.ntt.co.jp
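
      A hedged sketch (server and table names made up) of an aggregate
      pushed down with a HAVING qual that has to be checked locally; the
      qual's cost and selectivity now enter the ForeignPath's estimates.

          CREATE EXTENSION IF NOT EXISTS postgres_fdw;
          CREATE SERVER loopback2 FOREIGN DATA WRAPPER postgres_fdw
              OPTIONS (dbname 'postgres', use_remote_estimate 'true');
          CREATE USER MAPPING FOR CURRENT_USER SERVER loopback2;
          CREATE TABLE sales (region text, amount numeric);
          CREATE FOREIGN TABLE ft_sales (region text, amount numeric)
              SERVER loopback2 OPTIONS (table_name 'sales');

          -- random() is not shippable, so this HAVING qual stays in
          -- fpinfo->local_conds and is evaluated locally.
          EXPLAIN VERBOSE
          SELECT region, sum(amount)
          FROM ft_sales
          GROUP BY region
          HAVING sum(amount) > random() * 1000;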
  11. 03 Dec, 2018 2 commits