1. 30 Nov, 2017 1 commit
    • SQL procedures · e4128ee7
      Peter Eisentraut authored
      This adds a new object type "procedure" that is similar to a function
      but does not have a return type and is invoked by the new CALL statement
      instead of SELECT or similar.  This implementation is aligned with the
      SQL standard and compatible with or similar to other SQL implementations.
      
      This commit adds new commands CALL, CREATE/ALTER/DROP PROCEDURE, as well
      as ALTER/DROP ROUTINE that can refer to either a function or a
      procedure (or an aggregate function, as an extension to SQL).  There is
      also support for procedures in various utility commands such as COMMENT
      and GRANT, as well as support in pg_dump and psql.  Support for defining
      procedures is available in all the languages supplied by the core
      distribution.
      
      While this commit is mainly syntax sugar around existing functionality,
      future features will rely on having procedures as a separate object
      type.
      Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
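      A minimal sketch of the new syntax (the table and procedure names are
      illustrative):

      ```sql
      -- A procedure has no return type and is invoked with CALL, not SELECT.
      CREATE PROCEDURE insert_pair(a integer, b integer)
      LANGUAGE SQL
      AS $$
      INSERT INTO tbl VALUES (a);
      INSERT INTO tbl VALUES (b);
      $$;

      CALL insert_pair(1, 2);

      -- ROUTINE works for either a function or a procedure:
      DROP ROUTINE insert_pair(integer, integer);
      ```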
  2. 29 Nov, 2017 1 commit
    • Add extensive tests for partition pruning. · 8d4e70a6
      Robert Haas authored
      Currently, partition pruning happens via constraint exclusion, but
      there are pending patches to replace that with a different and
      hopefully faster mechanism.  To be sure that we don't change behavior
      without realizing it, add extensive test coverage.
      
      Note that not all of these behaviors are optimal; in some cases,
      partitions are not pruned even though it would be safe to do so.
      These tests therefore serve to memorialize the current state rather
      than the ideal state.  Patches that improve things can update the test
      results as appropriate.
      
      Amit Langote, adjusted by me.  Review and testing of the larger patch
      set of which this is a part by Ashutosh Bapat, David Rowley, Dilip
      Kumar, Jesper Pedersen, Rajkumar Raghuwanshi, Beena Emerson, Amul Sul,
      and Kyotaro Horiguchi.
      
      Discussion: http://postgr.es/m/098b9c71-1915-1a2a-8d52-1a7a50ce79e8@lab.ntt.co.jp
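      The kind of behavior these tests memorialize can be observed with
      EXPLAIN on a partitioned table (names illustrative):

      ```sql
      CREATE TABLE measurement (logdate date) PARTITION BY RANGE (logdate);
      CREATE TABLE measurement_y2017 PARTITION OF measurement
          FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
      CREATE TABLE measurement_y2018 PARTITION OF measurement
          FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');

      -- Constraint exclusion should prune measurement_y2018 from this plan:
      EXPLAIN (COSTS OFF)
      SELECT * FROM measurement WHERE logdate = '2017-06-01';
      ```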
  3. 21 Nov, 2017 1 commit
    • Fix multiple problems with satisfies_hash_partition. · f3b0897a
      Robert Haas authored
      Fix the function header comment to describe the actual behavior.
      Check that table OID, modulus, and remainder arguments are not NULL
      before accessing them.  Check that the modulus and remainder are
      sensible.  If the table OID doesn't exist, return NULL instead of
      emitting an internal error, similar to what we do elsewhere.  Check
      that the actual argument types match, or at least are binary coercible
      to, the expected argument types.  Correctly handle invocation of this
      function using the VARIADIC syntax.  Add regression tests.
      
      Robert Haas and Amul Sul, per a report by Andreas Seltenreich and
      subsequent followup investigation.
      
      Discussion: http://postgr.es/m/871sl4sdrv.fsf@ansel.ydns.eu
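      A sketch of how the function is called after these fixes (table name
      illustrative; the arguments are the table OID, the modulus, the
      remainder, and the would-be partition key values):

      ```sql
      CREATE TABLE hashtbl (a int) PARTITION BY HASH (a);

      -- Would a row with a = 42 land in the partition with
      -- modulus 4, remainder 0?
      SELECT satisfies_hash_partition('hashtbl'::regclass, 4, 0, 42);

      -- A nonexistent table OID now yields NULL, not an internal error.
      SELECT satisfies_hash_partition(0::oid, 4, 0, 42);
      ```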
  4. 19 Oct, 2017 1 commit
  5. 07 Oct, 2017 1 commit
  6. 06 Oct, 2017 1 commit
    • Basic partition-wise join functionality. · f49842d1
      Robert Haas authored
      Instead of joining two partitioned tables in their entirety we can, if
      it is an equi-join on the partition keys, join the matching partitions
      individually.  This involves teaching the planner about "other join"
      rels, which are related to regular join rels in the same way that
      other member rels are related to baserels.  This can use significantly
      more CPU time and memory than regular join planning, because there may
      now be a set of "other" rels not only for every base relation but also
      for every join relation.  In most practical cases, this probably
      shouldn't be a problem, because (1) it's probably unusual to join many
      tables each with many partitions using the partition keys for all
      joins and (2) if you do, then you probably have a big
      enough machine to handle the increased memory cost of planning and (3)
      the resulting plan is highly likely to be better, so what you spend in
      planning you'll make up on the execution side.  All the same, for now,
      turn this feature off by default.
      
      Currently, we can only perform joins between two tables whose
      partitioning schemes are absolutely identical.  It would be nice to
      cope with other scenarios, such as extra partitions on one side or the
      other with no match on the other side, but that will have to wait for
      a future patch.
      
      Ashutosh Bapat, reviewed and tested by Rajkumar Raghuwanshi, Amit
      Langote, Rafia Sabih, Thomas Munro, Dilip Kumar, Antonin Houska, Amit
      Khandekar, and by me.  A few final adjustments by me.
      
      Discussion: http://postgr.es/m/CAFjFpRfQ8GrQvzp3jA2wnLqrHmaXna-urjm_UY9BqXj=EaDTSA@mail.gmail.com
      Discussion: http://postgr.es/m/CAFjFpRcitjfrULr5jfuKWRPsGUX0LQ0k8-yG0Qw2+1LBGNpMdw@mail.gmail.com
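      Because the feature defaults to off, it must be enabled to take
      effect.  A sketch (the GUC was spelled enable_partition_wise_join
      when this was committed; it was later renamed
      enable_partitionwise_join):

      ```sql
      SET enable_partition_wise_join = on;

      -- For t1 and t2 partitioned identically on column a, an equi-join
      -- on the partition key can now be planned partition-by-partition:
      EXPLAIN (COSTS OFF)
      SELECT * FROM t1 JOIN t2 ON t1.a = t2.a;
      ```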
  7. 05 Oct, 2017 1 commit
  8. 01 Sep, 2017 1 commit
    • Introduce 64-bit hash functions with a 64-bit seed. · 81c5e46c
      Robert Haas authored
      This will be useful for hash partitioning, which needs a way to seed
      the hash functions to avoid problems such as a hash index on a hash
      partitioned table clumping all values into a small portion of the
      bucket space; it's also useful for anything that wants a 64-bit hash
      value rather than a 32-bit hash value.
      
      Just in case somebody wants a 64-bit hash value that is compatible
      with the existing 32-bit hash values, make the low 32-bits of the
      64-bit hash value match the 32-bit hash value when the seed is 0.
      
      Robert Haas and Amul Sul
      
      Discussion: http://postgr.es/m/CA+Tgmoafx2yoJuhCQQOL5CocEi-w_uG4S2xT0EtgiJnPGcHW3g@mail.gmail.com
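      The extended variants take a bigint seed; for text, for example:

      ```sql
      -- 64-bit hash with an explicit seed.
      SELECT hashtextextended('postgres', 8816);

      -- Per the compatibility guarantee above, with seed 0 the low
      -- 32 bits match the legacy 32-bit hash value:
      SELECT (hashtextextended('postgres', 0) & 4294967295)
           = (hashtext('postgres')::bigint & 4294967295);
      ```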
  9. 17 Jul, 2017 1 commit
    • Merge large_object.sql test into largeobject.source. · a570feaf
      Tom Lane authored
      It seems pretty confusing to have tests named both largeobject and
      large_object.  The latter is of very recent vintage (commit ff992c07),
      so get rid of it in favor of merging into the former.
      
      Also, enable the LO comment test that was added by commit 70ad7ed4,
      since the later commit added the then-missing pg_upgrade functionality.
      The large_object.sql test case is almost completely redundant with that,
      but not quite: it seems like creating a user-defined LO with an OID in
      the system range might be an interesting case for pg_upgrade, so let's
      keep it.
      
      Like the earlier patch, back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/18665.1500306372@sss.pgh.pa.us
  10. 23 Jun, 2017 1 commit
    • Add testing to detect errors of omission in "pin" dependency creation. · 8be8510c
      Tom Lane authored
      It's essential that initdb.c's setup_depend() scan each system catalog
      that could contain objects that need to have "p" (pin) entries in pg_depend
      or pg_shdepend.  Forgetting to add that, either when a catalog is first
      invented or when it first acquires DATA() entries, is an obvious bug
      hazard.  We can detect such omissions at reasonable cost by probing every
      OID-containing system catalog to see whether the lowest-numbered OID in it
      is pinned.  If so, the catalog must have been properly accounted for in
      setup_depend().  If the lowest OID is above FirstNormalObjectId then the
      catalog must have been empty at the end of initdb, so it doesn't matter.
      There are a small number of catalogs whose first entry is made later in
      initdb than setup_depend(), resulting in nonempty expected output of the
      test, but these can be manually inspected to see that they are OK.  Any
      future mistake of this ilk will manifest as a new entry in the test's
      output.
      
      Since pg_conversion is already in the test's output, add it to the set of
      catalogs scanned by setup_depend().  That has no effect today (hence, no
      catversion bump here) but it will protect us if we ever do add pin-worthy
      conversions.
      
      This test is very much like the catalog sanity checks embodied in
      opr_sanity.sql and type_sanity.sql, but testing pg_depend doesn't seem to
      fit naturally into either of those scripts' charters.  Hence, invent a new
      test script misc_sanity.sql, which can be a home for this as well as tests
      on any other catalogs we might want in future.
      
      Discussion: https://postgr.es/m/8068.1498155068@sss.pgh.pa.us
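      The probe can be sketched for a single catalog like this (simplified;
      the actual misc_sanity.sql test covers all OID-bearing catalogs):

      ```sql
      -- Is the lowest-numbered OID in pg_conversion pinned?
      SELECT EXISTS (
          SELECT 1 FROM pg_depend
          WHERE refclassid = 'pg_conversion'::regclass
            AND refobjid = (SELECT min(oid) FROM pg_conversion)
            AND deptype = 'p');
      ```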
  11. 06 Apr, 2017 1 commit
    • Identity columns · 32173270
      Peter Eisentraut authored
      This is the SQL standard-conforming variant of PostgreSQL's serial
      columns.  It fixes a few usability issues that serial columns have:
      
      - CREATE TABLE / LIKE copies default but refers to same sequence
      - cannot add/drop serialness with ALTER TABLE
      - dropping default does not drop sequence
      - need to grant separate privileges to sequence
      - other slight weirdnesses because serial is some kind of special macro
      Reviewed-by: Vitaly Burovoy <vitaly.burovoy@gmail.com>
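      A minimal example of the standard-conforming syntax:

      ```sql
      CREATE TABLE items (
          id   integer GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
          name text
      );

      INSERT INTO items (name) VALUES ('widget');  -- id assigned automatically

      -- Unlike serial, identity can be added and dropped with ALTER TABLE:
      ALTER TABLE items ALTER COLUMN id DROP IDENTITY;
      ```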
  12. 25 Mar, 2017 1 commit
    • Re-adhere to policy of no more than 20 tests per parallel group. · 7d3957e5
      Tom Lane authored
      As explained at the head of parallel_schedule, we place an arbitrary limit
      of 20 test cases per parallel group.  Commit c7a9fa39 overlooked this.
      
      Least messy solution seems to be to move the "comments" test to the next
      group, since it doesn't really belong in a group of datatype tests anyway.
  13. 24 Mar, 2017 1 commit
    • Implement multivariate n-distinct coefficients · 7b504eb2
      Alvaro Herrera authored
      Add support for explicitly declared statistic objects (CREATE
      STATISTICS), allowing collection of statistics on more complex
      combinations than individual table columns.  Companion commands DROP
      STATISTICS and ALTER STATISTICS ... OWNER TO / SET SCHEMA / RENAME are
      added too.  All this DDL has been designed so that more statistic types
      can be added later on, such as multivariate most-common-values and
      multivariate histograms between columns of a single table, leaving room
      for permitting columns on multiple tables, too, as well as expressions.
      
      This commit only adds support for collection of n-distinct coefficient
      on user-specified sets of columns in a single table.  This is useful to
      estimate number of distinct groups in GROUP BY and DISTINCT clauses;
      estimation errors there can cause over-allocation of memory in hashed
      aggregates, for instance, so it's a worthwhile problem to solve.  A new
      special pseudo-type pg_ndistinct is used.
      
      (num-distinct estimation was deemed sufficiently useful by itself that
      this is worthwhile even if no further statistic types are added
      immediately; so much so that another version of essentially the same
      functionality was submitted by Kyotaro Horiguchi:
      https://postgr.es/m/20150828.173334.114731693.horiguchi.kyotaro@lab.ntt.co.jp
      though this commit does not use that code.)
      
      Author: Tomas Vondra.  Some code rework by Álvaro.
      Reviewed-by: Dean Rasheed, David Rowley, Kyotaro Horiguchi, Jeff Janes,
          Ideriha Takeshi
      Discussion: https://postgr.es/m/543AFA15.4080608@fuzzy.cz
          https://postgr.es/m/20170320190220.ixlaueanxegqd5gr@alvherre.pgsql
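      In the syntax as it ultimately shipped (the exact spelling evolved
      during development), collecting an n-distinct coefficient looks like:

      ```sql
      CREATE TABLE t1 (a int, b int);

      -- Record how many distinct (a, b) combinations actually occur, so
      -- GROUP BY a, b estimates stop assuming independence:
      CREATE STATISTICS s1 (ndistinct) ON a, b FROM t1;
      ANALYZE t1;

      ALTER STATISTICS s1 RENAME TO t1_ab_ndistinct;
      ```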
  14. 20 Mar, 2017 1 commit
  15. 15 Mar, 2017 1 commit
    • Add support for EUI-64 MAC addresses as macaddr8 · c7a9fa39
      Stephen Frost authored
      This adds in support for EUI-64 MAC addresses by adding a new data type
      called 'macaddr8' (using our usual convention of indicating the number
      of bytes stored).
      
      This was largely a copy-and-paste from the macaddr data type, with
      appropriate adjustments for having 8 bytes instead of 6 and adding
      support for converting a provided EUI-48 (6 byte format) to the EUI-64
      format.  Conversion from EUI-48 to EUI-64 inserts FFFE as the 4th and
      5th bytes, but does not perform the IPv6 modified-EUI-64 step of
      flipping the 7th bit; instead, a separate function is provided to
      perform that step for users who wish to derive their IPv6 address
      from their network prefix and 48-bit MAC address.
      
      Author: Haribabu Kommi, with a good bit of rework of macaddr8_in by me.
      Reviewed by: Vitaly Burovoy, Kuntal Ghosh
      
      Discussion: https://postgr.es/m/CAJrrPGcUi8ZH+KkK+=TctNQ+EfkeCEHtMU_yo1mvX8hsk_ghNQ@mail.gmail.com
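      For example (macaddr8_set7bit is the helper function mentioned above):

      ```sql
      -- EUI-48 input is promoted to EUI-64 by inserting ff:fe in the middle:
      SELECT '08:00:2b:01:02:03'::macaddr8;
      --  08:00:2b:ff:fe:01:02:03

      -- Flip the 7th bit, as required for an IPv6 modified EUI-64
      -- interface identifier:
      SELECT macaddr8_set7bit('08:00:2b:01:02:03'::macaddr8);
      --  0a:00:2b:ff:fe:01:02:03
      ```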
  16. 11 Mar, 2017 1 commit
    • Improve expression evaluation test coverage. · ce38949b
      Andres Freund authored
      Upcoming patches are revamping expression evaluation significantly. It
      therefore seems prudent to try to ensure that the coverage of the
      existing evaluation code is high.
      
      This commit adds coverage for the cases that can reasonably be
      tested. There's still a bunch of unreachable error messages and such,
      but otherwise this achieves nearly full regression test coverage (with
      the exception of the unused GetAttributeByNum/GetAttributeByName).
      
      Author: Andres Freund
      Discussion: https://postgr.es/m/20170310194021.ek4bs4bl2khxkmll@alap3.anarazel.de
  17. 07 Mar, 2017 1 commit
  18. 06 Mar, 2017 1 commit
    • pg_upgrade: Fix large object COMMENTS, SECURITY LABELS · ff992c07
      Stephen Frost authored
      When performing a pg_upgrade, we copy the files behind pg_largeobject
      and pg_largeobject_metadata, allowing us to avoid having to dump out and
      reload the actual data for large objects and their ACLs.
      
      Unfortunately, that isn't all of the information which can be associated
      with large objects.  Currently, we also support COMMENTs and SECURITY
      LABELs with large objects and these were being silently dropped during a
      pg_upgrade as pg_dump would skip everything having to do with a large
      object and pg_upgrade only copied the tables mentioned to the new
      cluster.
      
      As the file copies happen after the catalog dump and reload, we can't
      simply include the COMMENTs and SECURITY LABELs in pg_dump's binary-mode
      output but we also have to include the actual large object definition as
      well.  With the definition, comments, and security labels in the pg_dump
      output and the file copies performed by pg_upgrade, all of the data and
      metadata associated with large objects is able to be successfully pulled
      forward across a pg_upgrade.
      
      In 9.6 and master, we can simply adjust the dump bitmask to indicate
      which components we don't want.  In 9.5 and earlier, we have to put
      explicit checks in dumpBlob() and dumpBlobs() to not include the ACL
      or the data when in binary-upgrade mode.
      
      Adjustments made to the privileges regression test to allow another test
      (large_object.sql) to be added which explicitly leaves a large object
      with a comment in place to provide coverage of that case with
      pg_upgrade.
      
      Back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/20170221162655.GE9812@tamriel.snowman.net
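      The case exercised by the new large_object.sql test amounts to this
      (the OID is illustrative):

      ```sql
      -- A large object plus its comment; both must survive pg_upgrade.
      SELECT lo_create(2121);
      COMMENT ON LARGE OBJECT 2121 IS 'testing comments';
      ```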
  19. 30 Jan, 2017 1 commit
    • Add a regression test script dedicated to exercising system views. · d002f16c
      Tom Lane authored
      Quite a few of our built-in system views were not exercised anywhere
      in the regression tests.  This is perhaps not so exciting for the ones
      that are simple projections/joins of system catalogs, but for the ones
      that are wrappers for set-returning C functions, the omission translates
      directly to lack of test coverage for those functions.
      
      In many cases, the reason for the omission is that the view doesn't have
      much to do with any specific SQL feature, so there's no natural place to
      test it.  To remedy that, invent a new script sysviews.sql that's dedicated
      to testing SRF-based views.  Move a couple of tests that did fit this
      charter into the new script, and add simple "count(*)" based tests of
      other views within the charter.  That's enough to ensure we at least
      exercise the main code path through the SRF, although it does little to
      prove that the output is sane.
      
      More could be done here, no doubt, and I hope someone will think about
      how we can test these views more thoroughly.  But this is a starting
      point.
      
      Discussion: https://postgr.es/m/19359.1485723741@sss.pgh.pa.us
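      The added tests follow this simple pattern ("count(*) >= 0" always
      holds, so the expected output is stable regardless of contents):

      ```sql
      -- Merely running the query exercises the set-returning function
      -- behind each view:
      SELECT count(*) >= 0 AS ok FROM pg_available_extensions;
      SELECT count(*) >= 0 AS ok FROM pg_prepared_xacts;
      ```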
  20. 20 Jan, 2017 1 commit
  21. 13 Sep, 2016 1 commit
    • Add more tests for targetlist SRFs. · bfe16d1a
      Andres Freund authored
      We're considering changing the implementation of targetlist SRFs
      considerably, and a lot of the current behaviour isn't tested in our
      regression tests. Thus it seems useful to increase coverage to avoid
      accidental behaviour changes.
      
      It's quite possible that some of the plans here will require adjustments
      to avoid falling afoul of ordering differences (e.g. hashed group
      bys). The buildfarm will tell us.
      
      Reviewed-By: Tom Lane
      Discussion: <20160827214829.zo2dfb5jaikii5nw@alap3.anarazel.de>
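      A targetlist SRF is a set-returning function placed in the SELECT
      list rather than in FROM; for example:

      ```sql
      SELECT generate_series(1, 3) AS g;

      -- Two SRFs of different lengths in one targetlist is exactly the
      -- kind of corner case these tests pin down (at the time, the result
      -- ran to the least common multiple of the two row counts):
      SELECT generate_series(1, 2), generate_series(1, 3);
      ```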
  22. 22 Aug, 2016 1 commit
    • Run select_parallel test by itself · f9472d72
      Peter Eisentraut authored
      Remove the plpgsql wrapping that hides the context.  So now the test
      will fail if the work doesn't actually happen in a parallel worker.  Run
      the test in its own test group to ensure it won't run out of resources
      for that.
  23. 13 Aug, 2016 1 commit
    • Add SQL-accessible functions for inspecting index AM properties. · ed0097e4
      Tom Lane authored
      Per discussion, we should provide such functions to replace the lost
      ability to discover AM properties by inspecting pg_am (cf commit
      65c5fcd3).  The added functionality is also meant to displace any code
      that was looking directly at pg_index.indoption, since we'd rather not
      believe that the bit meanings in that field are part of any client API
      contract.
      
      As future-proofing, define the SQL API to not assume that properties that
      are currently AM-wide or index-wide will remain so unless they logically
      must be; instead, expose them only when inquiring about a specific index
      or even specific index column.  Also provide the ability for an index
      AM to override the behavior.
      
      In passing, document pg_am.amtype, overlooked in commit 473b9328.
      
      Andrew Gierth, with kibitzing by me and others
      
      Discussion: <87mvl5on7n.fsf@news-spur.riddles.org.uk>
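      The added functions can be queried at three levels, for example:

      ```sql
      -- Column level: is the first column of this index sorted ascending?
      SELECT pg_index_column_has_property('pg_class_oid_index'::regclass,
                                          1, 'asc');

      -- Index level: does it support ordinary index scans?
      SELECT pg_index_has_property('pg_class_oid_index'::regclass,
                                   'index_scan');

      -- AM level: can btree enforce uniqueness?
      SELECT pg_indexam_has_property(
          (SELECT oid FROM pg_am WHERE amname = 'btree'), 'can_unique');
      ```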
  24. 08 Jun, 2016 1 commit
  25. 09 Apr, 2016 1 commit
  26. 08 Apr, 2016 2 commits
    • Revert CREATE INDEX ... INCLUDING ... · 8b99edef
      Teodor Sigaev authored
      It's not ready yet; revert two commits:
      690c5435 - unstable test output
      386e3d76 - patch itself
    • CREATE INDEX ... INCLUDING (column[, ...]) · 386e3d76
      Teodor Sigaev authored
      Indexes (only B-tree for now) can now contain "extra" columns that
      do not participate in the index structure; they are just stored in
      leaf tuples.  This allows an index-only scan to be satisfied by a
      single index instead of two or more.
      
      Author: Anastasia Lubennikova with minor editorializing by me
      Reviewers: David Rowley, Peter Geoghegan, Jeff Janes
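      The syntax added by this (subsequently reverted) commit; the feature
      eventually shipped in PostgreSQL 11 under the keyword INCLUDE:

      ```sql
      -- Uniqueness is enforced on a only; b is merely carried in the leaf
      -- tuples, enabling index-only scans that read both columns.
      CREATE UNIQUE INDEX t_a_incl_b ON t (a) INCLUDING (b);
      ```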
  27. 07 Apr, 2016 1 commit
    • In pg_dump, include pg_catalog and extension ACLs, if changed · 23f34fa4
      Stephen Frost authored
      Now that all of the infrastructure exists, add in the ability to
      dump out the ACLs of the objects inside of pg_catalog or the ACLs
      for objects which are members of extensions, but only if they have
      been changed from their original values.
      
      The original values are tracked in pg_init_privs.  When pg_dump'ing
      9.6-and-above databases, we will dump out the ACLs for all objects
      in pg_catalog and the ACLs for all extension members, where the ACL
      has been changed from the original value which was set during either
      initdb or CREATE EXTENSION.
      
      This should not change dumps against pre-9.6 databases.
      
      Reviews by Alexander Korotkov, Jose Luis Tallon
  28. 25 Mar, 2016 1 commit
    • Fix DROP OPERATOR to reset oprcom/oprnegate links to the dropped operator. · c94959d4
      Tom Lane authored
      This avoids leaving dangling links in pg_operator, which, while fairly
      harmless, are also unsightly.
      
      While we're at it, simplify OperatorUpd, which went through
      heap_modify_tuple for no very good reason considering it had already made
      a tuple copy it could just scribble on.
      
      Roma Sokolov, reviewed by Tomas Vondra, additional hacking by Robert Haas
      and myself.
  29. 24 Mar, 2016 1 commit
  30. 05 Feb, 2016 1 commit
    • Add num_nulls() and num_nonnulls() to count NULL arguments. · 6819514f
      Tom Lane authored
      An example use-case is "CHECK(num_nonnulls(a,b,c) = 1)" to assert that
      exactly one of a,b,c isn't NULL.  The functions are variadic, so they
      can also be pressed into service to count the number of null or nonnull
      elements in an array.
      
      Marko Tiikkaja, reviewed by Pavel Stehule
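      For example:

      ```sql
      SELECT num_nonnulls(1, NULL, 2);                  -- 2
      SELECT num_nulls(1, NULL, 2);                     -- 1
      SELECT num_nulls(VARIADIC ARRAY[1, NULL, NULL]);  -- 2

      -- The use-case from above: exactly one of a, b, c must be set.
      CREATE TABLE t (a int, b int, c int,
                      CHECK (num_nonnulls(a, b, c) = 1));
      ```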
  31. 27 Nov, 2015 1 commit
  32. 06 Nov, 2015 1 commit
  33. 08 Oct, 2015 1 commit
    • Factor out encoding specific tests for json · b6363772
      Andrew Dunstan authored
      This lets us remove the large alternative results files for the main
      json and jsonb tests, which makes modifying those tests simpler for
      committers and patch submitters.
      
      Backpatch to 9.4 for jsonb and 9.3 for json.
  34. 21 Aug, 2015 1 commit
    • In AlterRole, make bypassrls an int · 7ec8296e
      Stephen Frost authored
      When reworking bypassrls in AlterRole to operate the same way the other
      attribute handling is done, I missed that the variable was incorrectly a
      bool rather than an int.  This meant that on platforms with an unsigned
      char, we could end up with incorrect behavior during ALTER ROLE.
      
      Pointed out by Andres, based on tests he ran after changing our bool
      to the one from stdbool.h, which exposed this and a number of other
      issues.
      
      Add regression tests to test CREATE/ALTER role for the various role
      attributes.  Arrange to leave roles behind for testing pg_dumpall, but
      none which have the LOGIN attribute.
      
      Back-patch to 9.5 where the AlterRole bug exists.
  35. 14 Jul, 2015 1 commit
  36. 16 May, 2015 1 commit
    • Support GROUPING SETS, CUBE and ROLLUP. · f3d31185
      Andres Freund authored
      This SQL-standard functionality allows aggregating data by several
      GROUP BY clauses at once.  Each grouping set returns rows in which
      the columns grouped by only in other sets are set to NULL.
      
      This could previously be achieved by doing each grouping as a separate
      query, conjoined by UNION ALLs. Besides being considerably more concise,
      grouping sets will in many cases be faster, requiring only one scan over
      the underlying data.
      
      The current implementation of grouping sets only supports using sorting
      for input. Individual sets that share a sort order are computed in one
      pass. If there are sets that don't share a sort order, additional sort &
      aggregation steps are performed. These additional passes are sourced by
      the previous sort step; thus avoiding repeated scans of the source data.
      
      The code is structured in a way that adding support for purely using
      hash aggregation or a mix of hashing and sorting is possible. Sorting
      was chosen to be supported first, as it is the most generic method of
      implementation.
      
      Instead of, as in earlier versions of the patch, representing the
      chain of sort and aggregation steps as full blown planner and executor
      nodes, all but the first sort are performed inside the aggregation node
      itself. This avoids the need to do some unusual gymnastics to handle
      having to return aggregated and non-aggregated tuples from underlying
      nodes, as well as having to shut down underlying nodes early to limit
      memory usage.  The optimizer still builds Sort/Agg nodes to describe
      each phase, but they're not part of the plan tree; instead they are
      additional data for the aggregation node.  They're a convenient and preexisting way
      to describe aggregation and sorting.  The first (and possibly only) sort
      step is still performed as a separate execution step. That retains
      similarity with existing group by plans, makes rescans fairly simple,
      avoids very deep plans (leading to slow explains) and makes it easy to
      skip the sort step when the underlying data is already sorted by other
      means.
      
      A somewhat ugly side of this patch is having to deal with a grammar
      ambiguity between the new CUBE keyword and the cube extension/functions
      named cube (and rollup). To avoid breaking existing deployments of the
      cube extension it has not been renamed, neither has cube been made a
      reserved keyword. Instead precedence hacking is used to make GROUP BY
      cube(..) refer to the CUBE grouping sets feature, and not the function
      cube(). To actually group by a function cube(), unlikely as that might
      be, the function name has to be quoted.
      
      Needs a catversion bump because stored rules may change.
      
      Author: Andrew Gierth and Atri Sharma, with contributions from Andres Freund
      Reviewed-By: Andres Freund, Noah Misch, Tom Lane, Svenne Krap, Tomas
          Vondra, Erik Rijkers, Marti Raudsepp, Pavel Stehule
      Discussion: CAOeZVidmVRe2jU6aMk_5qkxnB7dfmPROzM7Ur8JPW5j8Y5X-Lw@mail.gmail.com
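      A compact example:

      ```sql
      -- One scan yields per-(brand, size) rows, per-brand subtotals,
      -- and a grand total:
      SELECT brand, size, sum(sales)
      FROM items_sold
      GROUP BY GROUPING SETS ((brand, size), (brand), ());

      -- ROLLUP (brand, size) is shorthand for exactly those three sets;
      -- CUBE (brand, size) additionally includes the (size) set.
      ```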
  37. 15 May, 2015 1 commit
    • TABLESAMPLE, SQL Standard and extensible · f6d208d6
      Simon Riggs authored
      Add a TABLESAMPLE clause to SELECT statements that allows the user
      to specify random BERNOULLI sampling or block-level SYSTEM sampling.
      The implementation allows extensible sampling functions to be
      written, using a standard API.  The basic version follows the SQL
      standard exactly.  Usable concrete use cases for the sampling API
      follow in later commits.
      
      Petr Jelinek
      
      Reviewed by Michael Paquier and Simon Riggs
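      For example:

      ```sql
      -- Roughly 10 percent of the table's blocks:
      SELECT * FROM orders TABLESAMPLE SYSTEM (10);

      -- Roughly 5 percent of rows, with a reproducible seed:
      SELECT * FROM orders TABLESAMPLE BERNOULLI (5) REPEATABLE (42);
      ```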
  38. 08 May, 2015 1 commit
    • Add support for INSERT ... ON CONFLICT DO NOTHING/UPDATE. · 168d5805
      Andres Freund authored
      The newly added ON CONFLICT clause allows specifying an alternative
      to raising a unique or exclusion constraint violation error when
      inserting.  ON CONFLICT refers to constraints that can either be
      specified using an inference clause (by specifying the columns of a
      unique constraint) or
      by naming a unique or exclusion constraint.  DO NOTHING avoids the
      constraint violation, without touching the pre-existing row.  DO UPDATE
      SET ... [WHERE ...] updates the pre-existing tuple, and has access to
      both the tuple proposed for insertion and the existing tuple; the
      optional WHERE clause can be used to prevent an update from being
      executed.  The UPDATE SET and WHERE clauses have access to the tuple
      proposed for insertion using the "magic" EXCLUDED alias, and to the
      pre-existing tuple using the table name or its alias.
      
      This feature is often referred to as upsert.
      
      This is implemented using a new infrastructure called "speculative
      insertion". It is an optimistic variant of regular insertion that first
      does a pre-check for existing tuples and then attempts an insert.  If a
      violating tuple was inserted concurrently, the speculatively inserted
      tuple is deleted and a new attempt is made.  If the pre-check finds a
      matching tuple the alternative DO NOTHING or DO UPDATE action is taken.
      If the insertion succeeds without detecting a conflict, the tuple is
      deemed inserted.
      
      To handle the possible ambiguity between the excluded alias and a table
      named excluded, and for convenience with long relation names, INSERT
      INTO now can alias its target table.
      
      Bumps catversion as stored rules change.
      
      Author: Peter Geoghegan, with significant contributions from Heikki
          Linnakangas and Andres Freund. Testing infrastructure by Jeff Janes.
      Reviewed-By: Heikki Linnakangas, Andres Freund, Robert Haas, Simon Riggs,
          Dean Rasheed, Stephen Frost and many others.
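      The two forms, on a table with a unique constraint on did (adapted
      from the examples later added to the documentation):

      ```sql
      INSERT INTO distributors AS d (did, dname)
      VALUES (8, 'Anvil Distribution')
      ON CONFLICT (did) DO UPDATE
      SET dname = EXCLUDED.dname || ' (formerly ' || d.dname || ')';

      INSERT INTO distributors (did, dname)
      VALUES (9, 'Antwerp Design')
      ON CONFLICT (did) DO NOTHING;
      ```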
  39. 09 Mar, 2015 1 commit
    • Allow CURRENT/SESSION_USER to be used in certain commands · 31eae602
      Alvaro Herrera authored
      Commands such as ALTER USER, ALTER GROUP, ALTER ROLE, GRANT, and the
      various ALTER OBJECT / OWNER TO, as well as ad-hoc clauses related to
      roles such as the AUTHORIZATION clause of CREATE SCHEMA, the FOR clause
      of CREATE USER MAPPING, and the FOR ROLE clause of ALTER DEFAULT
      PRIVILEGES can now take the keywords CURRENT_USER and SESSION_USER as
      user specifiers in place of an explicit user name.
      
      This commit also fixes some quite ugly handling of special standards-
      mandated syntax in CREATE USER MAPPING, which in particular would fail
      to work in presence of a role named "current_user".
      
      The special role specifiers PUBLIC and NONE also have more consistent
      handling now.
      
      Also take the opportunity to add location tracking to user specifiers.
      
      Authors: Kyotaro Horiguchi.  Heavily reworked by Álvaro Herrera.
      Reviewed by: Rushabh Lathia, Adam Brightwell, Marti Raudsepp.
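      For example, either keyword can now stand in for an explicit role
      name (table and server names illustrative):

      ```sql
      ALTER TABLE accounts OWNER TO CURRENT_USER;
      GRANT SELECT ON accounts TO SESSION_USER;
      CREATE USER MAPPING FOR CURRENT_USER SERVER myserver
          OPTIONS (user 'bob');
      ```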