1. 27 Jul, 2015 2 commits
  2. 26 Jul, 2015 8 commits
    • Fix oversight in flattening of subqueries with empty FROM. · fca8e59c
      Tom Lane authored
      I missed a restriction that commit f4abd024
      should have enforced: we can't pull up an empty-FROM subquery if it's under
      an outer join, because then we'd need to wrap its output columns in
      PlaceHolderVars.  As the code currently stands, the PHVs end up with empty
      relid sets, which doesn't work (and is correctly caught by an Assert).
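
      A minimal sketch of the problematic query shape (table name invented
      for illustration):

          -- an empty-FROM subquery on the nullable side of an outer join
          SELECT * FROM some_table t
              LEFT JOIN (SELECT 1 AS x) ss ON true;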
      
      It's possible that this could be fixed by assigning the PHVs the relid
      sets of the parent FromExpr/JoinExpr, but getting that to work is more
      complication than I care to add right now; indeed it's likely that
      we'll never bother, since pulling up empty-FROM subqueries is a rather
      marginal optimization anyway.
      
      Per report from Andreas Seltenreich.  Back-patch to 9.5 where the faulty
      code was added.
    • Make entirely-dummy appendrels get marked as such in set_append_rel_size. · 358eaa01
      Tom Lane authored
      The planner generally expects that the estimated rowcount of any relation
      is at least one row, *unless* it has been proven empty by constraint
      exclusion or similar mechanisms, which is marked by installing a dummy path
      as the rel's cheapest path (cf. IS_DUMMY_REL).  When I split up
      allpaths.c's processing of base rels into separate set_base_rel_sizes and
      set_base_rel_pathlists steps, the intention was that dummy rels would get
      marked as such during the "set size" step; this is what justifies an Assert
      in indxpath.c's get_loop_count that other relations should either be dummy
      or have positive rowcount.  Unfortunately I didn't get that quite right
      for append relations: if all the child rels have been proven empty then
      set_append_rel_size would come up with a rowcount of zero, which is
      correct, but it didn't then do set_dummy_rel_pathlist.  (We would have
      ended up with the right state after set_append_rel_pathlist, but that's
      too late, if we generate indexpaths for some other rel first.)
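
      As a hedged illustration (schema invented, not from the commit): a
      constant-false qual proves every member of an inheritance appendrel
      empty during the "set size" step:

          CREATE TABLE parent (key int);
          CREATE TABLE child () INHERITS (parent);
          SELECT * FROM parent WHERE false;  -- whole appendrel is dummy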
      
      In addition to fixing the actual bug, I installed an Assert enforcing this
      convention in set_rel_size; that then allows simplification of a couple
      of now-redundant tests for zero rowcount in set_append_rel_size.
      
      Also, to cover the possibility that third-party FDWs have been careless
      about not returning a zero rowcount estimate, apply clamp_row_est to
      whatever an FDW comes up with as the rows estimate.
      
      Per report from Andreas Seltenreich.  Back-patch to 9.2.  Earlier branches
      did not have the separation between set_base_rel_sizes and
      set_base_rel_pathlists steps, so there was no intermediate state where an
      appendrel would have had inconsistent rowcount and pathlist.  It's possible
      that adding the Assert to set_rel_size would be a good idea in older
      branches too; but since they're not under development any more, it's likely
      not worth the trouble.
    • Check the relevant index element in ON CONFLICT unique index inference. · 159cff58
      Andres Freund authored
      ON CONFLICT unique index inference had a thinko that could affect cases
      where the user-supplied inference clause required that an attribute
      match a particular (user specified) collation and/or opclass.
      
      infer_collation_opclass_match() has to check for opclass and/or
      collation matches and that the attribute is in the list of attributes or
      expressions known to be in the definition of the index under
      consideration. The bug was that these two conditions weren't necessarily
      evaluated for the same index attribute.
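
      A sketch of the kind of inference clause affected (names invented):

          CREATE TABLE t (s text);
          CREATE UNIQUE INDEX t_s_idx ON t (s text_pattern_ops);
          -- the inference clause names the attribute and its opclass
          INSERT INTO t VALUES ('x')
              ON CONFLICT (s text_pattern_ops) DO NOTHING;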
      
      Author: Peter Geoghegan
      Discussion: CAM3SWZR4uug=WvmGk7UgsqHn2MkEzy9YU-+8jKGO4JPhesyeWg@mail.gmail.com
      Backpatch: 9.5, where ON CONFLICT was introduced
    • Fix flattening of nested grouping sets. · faab14ec
      Andres Freund authored
      Previously, nested grouping set specifications accidentally weren't
      flattened, but instead contained the nested specification as an
      element in the outer list.
      
      Fix this by, as actually documented in comments, concatenating the
      nested set specification into the outer one. Also add tests to prevent
      this from breaking again.
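
      For illustration (table t and columns a, b invented), these two
      specifications should be equivalent once the nested set is flattened:

          SELECT a, b, count(*) FROM t
              GROUP BY GROUPING SETS (a, GROUPING SETS (b, ()));
          SELECT a, b, count(*) FROM t
              GROUP BY GROUPING SETS (a, b, ());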
      
      Author: Andrew Gierth, with tests from Jeevan Chalke
      Reported-By: Jeevan Chalke
      Discussion: CAM2+6=V5YvuxB+EyN4iH=GbD-XTA435TCNvnDFSD--YvXs+pww@mail.gmail.com
      Backpatch: 9.5, where grouping sets were introduced
    • Allow clauses to be pushed down from HAVING to WHERE when grouping sets are used. · 61444bfb
      Andres Freund authored
      Previously we disallowed pushing down quals to WHERE in the presence of
      grouping sets. That's overly restrictive.
      
      We now instead copy quals to WHERE if applicable, leaving the
      one in HAVING in place. That's because, at that stage of the planning
      process, it's nontrivial to determine if it's safe to remove the one in
      HAVING.
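
      A hedged example of a qual that can now be copied to WHERE (table and
      columns invented; a appears in every grouping set):

          SELECT a, b, count(*) FROM t
              GROUP BY GROUPING SETS ((a), (a, b))
              HAVING a = 1;  -- copied to WHERE, original kept in HAVING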
      
      Author: Andrew Gierth
      Discussion: 874mkt3l59.fsf@news-spur.riddles.org.uk
      Backpatch: 9.5, where grouping sets were introduced. This isn't exactly
          a bugfix, but it seems better to keep the branches in sync at this point.
    • Recognize GROUPING() as an aggregate expression. · e6d8cb77
      Andres Freund authored
      Previously GROUPING() was not recognized as an aggregate expression,
      erroneously allowing the planner to move it from HAVING to WHERE.
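
      For example (table invented), the GROUPING() call below must stay in
      HAVING rather than be moved to WHERE:

          SELECT a, count(*) FROM t
              GROUP BY ROLLUP (a)
              HAVING GROUPING(a) = 0;  -- filters out the grand-total row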
      
      Author: Jeevan Chalke
      Reviewed-By: Andrew Gierth
      Discussion: CAM2+6=WG9omG5rFOMAYBweJxmpTaapvVp5pCeMrE6BfpCwr4Og@mail.gmail.com
      Backpatch: 9.5, where grouping sets were introduced
    • Build column mapping for grouping sets in all required cases. · 144666f6
      Andres Freund authored
      The previous coding frequently failed to fail, partly because rollup
      clauses with only one column are unusual, and partly because the wrong
      mapping sometimes caused no obvious problems.
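
      A minimal query hitting the unusual one-column rollup case (table
      invented):

          SELECT a, count(*) FROM t GROUP BY ROLLUP (a);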
      
      Author: Jeevan Chalke
      Reviewed-By: Andrew Gierth
      Discussion: CAM2+6=W=9=hQOipH0HAPbkun3Z3TFWij_EiHue0_6UX=oR=1kw@mail.gmail.com
      Backpatch: 9.5, where grouping sets were introduced
    • Improve markup for row_security. · cf80ddee
      Joe Conway authored
      Wrap the literals on, off, force, and BYPASSRLS with appropriate
      markup. Per Kevin Grittner.
  3. 25 Jul, 2015 6 commits
    • Dodge portability issue (apparent compiler bug) in new tablesample code. · d9476b83
      Tom Lane authored
      Some of the older OS X critters in the buildfarm are failing regression,
      with symptoms showing that a request for 100% sampling in BERNOULLI or
      SYSTEM methods actually gets only around 50% of the table.  gdb revealed
      that the computation of the "cutoff" number was producing 0x7FFFFFFF
      rather than the expected 0x100000000.  Inspecting the assembly code,
      it looks like gcc is trying to use lrint() instead of rint() and then
      fumbling the conversion from long double to uint64.  This seems like a
      clear compiler bug, but assigning the intermediate result into a plain
      double variable works around it, so let's just do that.  (Another idea
      would be to give up one bit of hash width so that we don't need to use
      a uint64 cutoff, but let's see if this is enough.)
    • Restore use of zlib default compression in pg_dump directory mode. · caef94d5
      Andrew Dunstan authored
      This was broken by commit 0e7e355f and
      friends, which ignored the fact that gzopen() treats "-1" in the mode
      argument as an invalid character (which it ignores) followed by a
      flag for compression level 1. Now, when this value is encountered, no
      compression-level flag is passed to gzopen, leaving it to use the
      zlib default.
      
      Also, enforce the documented allowed range for pg_dump's -Z option,
      namely 0 .. 9, and remove some consequently dead code from
      pg_backup_tar.c.
      
      Problem reported by Marc Mamin.
      
      Backpatch to 9.1, like the patch that introduced the bug.
    • Some platforms now need contrib/tsm_system_time to be linked with libm. · c879d51c
      Tom Lane authored
      Buildfarm member hornet, at least, seems to want -lm in the link command.
      Probably this is due to the just-added use of isnan().
    • In pg_ctl, report unexpected failure to stat() the postmaster.pid file. · b7b5a189
      Tom Lane authored
      Any error other than ENOENT is a bit suspicious here, and perhaps should
      not be grounds for assuming the postmaster has failed.  For the moment
      though, just report it, and don't change the behavior otherwise.  The
      intent is mainly to try to determine why we are seeing intermittent
      failures in this area on some buildfarm members.
      
      Back-patch to 9.5 where some of these failures have happened.
    • Update oidjoins regression test for 9.5. · 158d6153
      Tom Lane authored
      New FK relationships for pg_transform.  Also findoidjoins now detects a few
      relationships it didn't before for pre-existing catalogs, as a result of
      new regression tests leaving entries in those catalogs that weren't there
      before.
    • Redesign tablesample method API, and do extensive code review. · dd7a8f66
      Tom Lane authored
      The original implementation of TABLESAMPLE modeled the tablesample method
      API on index access methods, which wasn't a good choice because, without
      specialized DDL commands, there's no way to build an extension that can
      implement a TSM.  (Raw inserts into system catalogs are not an acceptable
      thing to do, because we can't undo them during DROP EXTENSION, nor will
      pg_upgrade behave sanely.)  Instead adopt an API more like procedural
      language handlers or foreign data wrappers, wherein the only SQL-level
      support object needed is a single handler function identified by having
      a special return type.  This lets us get rid of the supporting catalog
      altogether, so that no custom DDL support is needed for the feature.
      
      Adjust the API so that it can support non-constant tablesample arguments
      (the original coding assumed we could evaluate the argument expressions at
      ExecInitSampleScan time, which is undesirable even if it weren't outright
      unsafe), and discourage sampling methods from looking at invisible tuples.
      Make sure that the BERNOULLI and SYSTEM methods are genuinely repeatable
      within and across queries, as required by the SQL standard, and deal more
      honestly with methods that can't support that requirement.
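
      For instance (table name invented), REPEATABLE sampling should return
      the same rows on every execution:

          SELECT count(*) FROM big_table
              TABLESAMPLE BERNOULLI (10) REPEATABLE (42);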
      
      Make a full code-review pass over the tablesample additions, and fix
      assorted bugs, omissions, infelicities, and cosmetic issues (such as
      failure to put the added code stanzas in a consistent ordering).
      Improve EXPLAIN's output of tablesample plans, too.
      
      Back-patch to 9.5 so that we don't have to support the original API
      in production.
  4. 24 Jul, 2015 3 commits
    • Make RLS work with UPDATE ... WHERE CURRENT OF · b26e3d66
      Joe Conway authored
      UPDATE ... WHERE CURRENT OF would not work in conjunction with
      RLS. Arrange to allow the CURRENT OF expression to be pushed down.
      Issue noted by Peter Geoghegan. Patch by Dean Rasheed. Back patch
      to 9.5 where RLS was introduced.
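
      A sketch of the now-working pattern (table, column, and policy
      assumed to exist):

          BEGIN;
          DECLARE c CURSOR FOR SELECT * FROM rls_tab FOR UPDATE;
          FETCH 1 FROM c;
          UPDATE rls_tab SET val = 0 WHERE CURRENT OF c;
          COMMIT;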
    • Fix treatment of nulls in jsonb_agg and jsonb_object_agg · d9a356ff
      Andrew Dunstan authored
      The wrong is_null flag was being passed to datum_to_json. Also, null
      object key values are not permitted, and this was not being checked
      for. Add regression tests covering these cases, and also add those tests
      to the json set, even though the json functions were already doing the
      right thing.
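
      Illustrative behavior after the fix (output shown as comments; exact
      error text may vary):

          SELECT jsonb_agg(x) FROM (VALUES (1), (NULL)) v(x);
              -- [1, null]
          SELECT jsonb_object_agg(k, v)
              FROM (VALUES (NULL::text, 1)) w(k, v);
              -- error: null object keys are rejected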
      
      Fixes bug #13514, initially diagnosed by Tom Lane.
    • Fix bug around assignment expressions containing indirections. · c1ca3a19
      Andres Freund authored
      Handling of assigned-to expressions with indirection (e.g. set f1[1] =
      3) was broken for ON CONFLICT DO UPDATE.  The problem was that
      ParseState was consulted to determine if an INSERT-appropriate or
      UPDATE-appropriate behavior should be used when transforming expressions
      with indirections. When the wrong path was taken the old row was
      substituted with NULL, leading to wrong results.
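
      The affected statement shape looks like this (schema invented):

          CREATE TABLE t (id int PRIMARY KEY, f1 int[]);
          INSERT INTO t VALUES (1, '{0,0}')
              ON CONFLICT (id) DO UPDATE SET f1[1] = 3;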
      
      To fix, remove p_is_update and only use p_is_insert to decide how to
      transform the assignment expression, and set p_is_insert while parsing
      the ON CONFLICT statement. This isn't particularly pretty, but it's not
      any worse than before.
      
      Author: Peter Geoghegan, slightly edited by me
      Discussion: CAM3SWZS8RPvA=KFxADZWw3wAHnnbxMxDzkEC6fNaFc7zSm411w@mail.gmail.com
      Backpatch: 9.5, where the feature was introduced
  5. 23 Jul, 2015 1 commit
    • Redirect install output of make check into a log file · 16c33c50
      Andrew Dunstan authored
      dbf2ec1a changed make check so that the installation logs get directed
      to stdout and stderr. Per discussion on -hackers, this patch restores
      saving them to a file. The log is now saved in /tmp_install/log, which is
      created once per invocation of any make target doing regression tests.
      
      Along the way, add a missing /log/ entry to test_ddl_deparse's
      .gitignore.
      
      Michael Paquier.
  6. 22 Jul, 2015 2 commits
    • Fix off-by-one error in calculating subtrans/multixact truncation point. · 766dcfb1
      Heikki Linnakangas authored
      If there were no subtransactions (or multixacts) active, we would calculate
      the oldestxid == next xid. That's correct, but if next XID happens to be
      on the next pg_subtrans (pg_multixact) page, the page does not exist yet,
      and SimpleLruTruncate will produce an "apparent wraparound" warning. The
      warning is harmless in this case, but looks very alarming to users.
      
      Backpatch to all supported versions. Patch and analysis by Thomas Munro.
    • Fix add_rte_to_flat_rtable() for recent feature additions. · 46d0a9bf
      Tom Lane authored
      The TABLESAMPLE and row security patches each overlooked this function,
      though their errors of omission were opposite: RLS failed to zero out the
      securityQuals field, leading to wasteful copying of useless expression
      trees in finished plans, while TABLESAMPLE neglected to add a comment
      saying that it intentionally *isn't* deleting the tablesample subtree.
      There probably should be a similar comment about ctename, too.
      
      Back-patch as appropriate.
  7. 21 Jul, 2015 4 commits
  8. 20 Jul, 2015 9 commits
  9. 19 Jul, 2015 1 commit
  10. 18 Jul, 2015 4 commits
    • Make WaitLatchOrSocket's timeout detection more robust. · 576a95b3
      Tom Lane authored
      In the previous coding, timeout would be noticed and reported only when
      poll() or select() returned zero (or the equivalent behavior on Windows).
      Ordinarily that should work well enough, but it seems conceivable that we
      could get into a state where poll() always returns a nonzero value --- for
      example, if it is noticing a condition on one of the file descriptors that
      we do not think is reason to exit the loop.  If that happened, we'd be in a
      busy-wait loop that would fail to terminate even when the timeout expires.
      
      We can make this more robust at essentially no cost, by deciding to exit
      of our own accord if we compute a zero or negative time-remaining-to-wait.
      Previously the code noted this but just clamped the time-remaining to zero,
      expecting that we'd detect timeout on the next loop iteration.
      
      Back-patch to 9.2.  While 9.1 had a version of WaitLatchOrSocket, it was
      primitive compared to later versions, and did not guarantee reliable
      detection of timeouts anyway.  (Essentially, this is a refinement of
      commit 3e7fdcff, which was back-patched only as far as 9.2.)
    • Enable transforms modules to build and test on Cygwin. · 00eff86c
      Andrew Dunstan authored
      This still doesn't work correctly with Python 3, but I am committing
      this so we can get Cygwin buildfarm members building with Python 2.
    • Release note compatibility item · 47386504
      Andrew Dunstan authored
      Note that json and jsonb extraction operators no longer consider a
      negative subscript to be invalid.
    • Support JSON negative array subscripts everywhere · e02d44b8
      Andrew Dunstan authored
      Previously, there was an inconsistency across json/jsonb operators that
      operate on datums containing JSON arrays -- only some operators
      supported negative array count-from-the-end subscripting.  Specifically,
      only a new-to-9.5 jsonb deletion operator had support (the new "jsonb -
      integer" operator).  This inconsistency seemed likely to be
      counter-intuitive to users.  To fix, allow all places where the user can
      supply an integer subscript to accept a negative subscript value,
      including path-orientated operators and functions, as well as other
      extraction operators.  This will need to be called out as an
      incompatibility in the 9.5 release notes, since it's possible that users
      are relying on certain established extraction operators changed here
      yielding NULL in the event of a negative subscript.
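
      Examples of the now-uniform behavior (results shown as comments):

          SELECT '[1,2,3]'::json -> -1;                 -- 3
          SELECT '{"a":[1,2,3]}'::jsonb #> '{a,-1}';    -- 3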
      
      For the json type, this requires adding a way of cheaply getting the
      total JSON array element count ahead of time when parsing arrays with a
      negative subscript involved, necessitating an ad-hoc lex and parse.
      This is followed by a "conversion" from a negative subscript to its
      equivalent positive-wise value using the count.  From there on, it's as
      if a positive-wise value was originally provided.
      
      Note that there is still a minor inconsistency here across jsonb
      deletion operators.  Unlike the aforementioned new "-" deletion operator
      that accepts an integer on its right hand side, the new "#-" path
      orientated deletion variant does not throw an error when what appears
      to be an array subscript (input that could be recognized as an integer
      literal) is used on an object, which is wrong-headed. The reason
      for not being stricter is that it could be the case that an object pair
      happens to have a key value that looks like an integer; in general,
      these two possibilities are impossible to differentiate with rhs path
      text[] argument elements.  However, we still don't allow the "#-"
      path-orientated deletion operator to perform array-style subscripting.
      Rather, we just return the original left operand value in the event of a
      negative subscript (which seems analogous to how the established
      "jsonb/json #> text[]" path-orientated operator may yield NULL in the
      event of an invalid subscript).
      
      In passing, make SetArrayPath() stricter about not accepting cases where
      there are trailing non-numeric garbage bytes rather than a clean NUL
      byte.  This means, for example, that strings like "10e10" are now not
      accepted as an array subscript of 10 by some new-to-9.5 path-orientated
      jsonb operators (e.g. the new #- operator).  Finally, remove dead code
      for jsonb subscript deletion; arguably, this should have been done in
      commit b81c7b40.
      
      Peter Geoghegan and Andrew Dunstan