1. 07 Mar, 2017 24 commits
    • Robert Haas's avatar
      Properly initialize variable. · 506f0542
      Robert Haas authored
      Commit 3bc7dafa forgot to do this.
      
      Noted while experimenting with valgrind.
      506f0542
    • Tom Lane's avatar
      Invent start_proc parameters for PL/Tcl. · 0d2b1f30
      Tom Lane authored
      Define GUCs pltcl.start_proc and pltclu.start_proc.  When set to a
      nonempty value at the time a new Tcl interpreter is created, the
      parameterless pltcl or pltclu function named by the GUC is called to
      allow user-controlled initialization to occur within the interpreter.
      This is modeled on plv8's start_proc parameter, and also has much in
      common with plperl's on_init feature.  It allows users to fully
      replace the "modules" feature that was removed in commit 817f2a58.
      
      Since an initializer function could subvert later Tcl code in nearly
      arbitrary ways, mark both GUCs as SUSET for now.  It would be nice
      to find a way to relax that someday; but the corresponding GUCs in
      plperl are also SUSET, and there's not been much complaint.
      
      Discussion: https://postgr.es/m/22067.1488046447@sss.pgh.pa.us
      0d2b1f30
    • Tom Lane's avatar
      Clean up test_ifaddrs a bit. · 03cf2219
      Tom Lane authored
      We customarily #include <netinet/in.h> before <arpa/inet.h>; according
      to our git history (cf commit 527f8bab) there used to be platform(s)
      where <arpa/inet.h> didn't compile otherwise.  That's probably not
      really an issue anymore, but since test_ifaddrs.c is the one and only
      place in our code that's not following that rule, bring it into line.
      Also remove #include <sys/socket.h>, as that's duplicative given that
      libpq/ifaddr.h does so (via pqcomm.h).
      
      In passing, add a .gitignore file so nobody accidentally commits the
      test_ifaddrs executable, as I nearly did.
      
      I see no particular need to back-patch this, as it's just neatnik-ism
      considering we don't build test_ifaddrs by default, or even document
      it anywhere.
      03cf2219
    • Heikki Linnakangas's avatar
      A collection of small fixes for the SCRAM patch. · 95c1dbcd
      Heikki Linnakangas authored
      * Add required #includes for htonl. Per buildfarm members pademelon/gaur.
      
      * Remove unnecessary #include of "utils/memutils.h".
      
      * Fix checking for empty string in pg_SASL_init. (Reported by Peter
        Eisentraut and his compiler)
      
      * Move code in pg_SASL_init to match the recent changes (commit ba005f19)
        to pg_fe_sendauth() function, where it's copied from.
      
      * Return value of malloc() was not checked for NULL in
        scram_SaltedPassword(). Fix by avoiding the malloc().
      95c1dbcd
    • Robert Haas's avatar
      Consider parallel merge joins. · 3bc7dafa
      Robert Haas authored
      Commit 45be99f8 took the position
      that performing a merge join in parallel was not likely to work out
      well, but this conclusion was greeted with skepticism even at the
      time.  Whether it was true then or not, it's clearly not true any
      more now that we have parallel index scan.
      
      Dilip Kumar, reviewed by Amit Kapila and by me.
      
      Discussion: http://postgr.es/m/CAFiTN-v3=cM6nyFwFGp0fmvY4=kk79Hq9Fgu0u8CSJ-EEq1Tiw@mail.gmail.com
      3bc7dafa
    • Tom Lane's avatar
      Fix pgbench's failure to honor the documented long-form option "--builtin". · ef266239
      Tom Lane authored
      Not only did it not accept --builtin as a synonym for -b, but what it did
      accept as a synonym was --tpc-b (huh?), which it got even further wrong
      by marking as no_argument, so that if you did try that you got a core
      dump.  I suppose this is leftover from some early design for the new
      switches added by commit 8bea3d22, but it's still pretty sloppy work.
      
      Per bug #14580 from Stepan Pesternikov.  Back-patch to 9.6 where the
      error was introduced.
      
      Report: https://postgr.es/m/20170307123347.25054.73207@wrigleys.postgresql.org
      ef266239
    • Robert Haas's avatar
      Give partitioned table "p" in regression tests a less generic name. · aa566718
      Robert Haas authored
      And don't drop it, so that we improve the coverage of the pg_upgrade
      regression tests.
      
      Amit Langote, per a gripe from Tom Lane
      
      Discussion: http://postgr.es/m/9071.1488863082@sss.pgh.pa.us
      aa566718
    • Robert Haas's avatar
      Fix relcache reference leak. · d88d06cd
      Robert Haas authored
      Reported by Kevin Grittner.  Faulty commit identified by Tom Lane.
      Patch by Amit Langote, reviewed by Michael Paquier.
      
      Discussion: http://postgr.es/m/CACjxUsOHbH1=99u8mGxmLHfy5hov4ENEpvM6=3ARjos7wG7rtQ@mail.gmail.com
      d88d06cd
    • Robert Haas's avatar
      Remove duplicated word. · 508dabaf
      Robert Haas authored
      Amit Langote
      508dabaf
    • Robert Haas's avatar
      Document what values postgres_fdw sets for each parameter it sets. · 889a3f48
      Robert Haas authored
      David Rader, reviewed by me.
      889a3f48
    • Robert Haas's avatar
      Fix wrong word in comment. · cd87a5ed
      Robert Haas authored
      Third time's the charm.
      cd87a5ed
    • Tom Lane's avatar
      Remove vestigial grammar support for CHARACTER ... CHARACTER SET option. · 11324e40
      Tom Lane authored
      The SQL standard says that you should be able to write "CHARACTER SET foo"
      as part of the declaration of a char-type column.  We don't implement that,
      but a rough form of support has existed in gram.y since commit f10b6392.
      That's now sat there for nigh 20 years without anyone fleshing it out ---
      and even if someone did, the contemplated approach of having separate data
      type name(s) for every character set certainly isn't what we'd do today.
      Let's just remove the grammar production; if anyone is ever motivated to
      work on this, reinventing the grammar support is a trivial fraction of
      what they'd have to do.  And we've never documented anything about
      supporting such a clause.
      
      Per gripe from Neha Khatri.
      
      Discussion: https://postgr.es/m/CAFO0U+-iOS5oYN5v3SBuZvfhPUTRrkDFEx8w7H17B07Rwg3YUA@mail.gmail.com
      11324e40
    • Robert Haas's avatar
      Preparatory refactoring for parallel merge join support. · a71f1018
      Robert Haas authored
      Extract the logic used by hash_inner_and_outer into a separate
      function, get_cheapest_parallel_safe_total_inner, so that it can
      also be used to plan parallel merge joins.
      
      Also, add a require_parallel_safe argument to the existing function
      get_cheapest_path_for_pathkeys, because parallel merge join needs
      to find the cheapest path for a given set of pathkeys that is
      parallel-safe, not just the cheapest one overall.
      
      Patch by me, reviewed by Dilip Kumar.
      
      Discussion: http://postgr.es/m/CA+TgmoYOv+dFK0MWW6366dFj_xTnohQfoBDrHyB7d1oZhrgPjA@mail.gmail.com
      a71f1018
    • Robert Haas's avatar
      Fix parallel hash join path search. · 655393a0
      Robert Haas authored
      When the very cheapest path is not parallel-safe, we want to instead use
      the cheapest unparameterized path that is.  The old code searched
      innerrel->cheapest_parameterized_paths, but that isn't right, because
      the path we want may not be in that list.  Search innerrel->pathlist
      instead.
      
      Spotted by Dilip Kumar.
      
      Discussion: http://postgr.es/m/CAFiTN-szCEcZrQm0i_w4xqSaRUTOUFstNu32Zn4rxxDcoa8gnA@mail.gmail.com
      655393a0
    • Stephen Frost's avatar
      psql: Add \gx command · b2678efd
      Stephen Frost authored
      It can often be useful to use expanded mode output (\x) for just a
      single query.  Introduce a \gx which acts exactly like \g except that it
      will force expanded output mode for that one \gx call.  This is simpler
      than having to use \x as a toggle and also means that the user doesn't
      have to worry about the current state of the expanded variable, or
      resetting it later, to ensure a given query is always returned in
      expanded mode.
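
      For instance (a sketch; any query works):

          -- send the current query buffer, forcing expanded output for just
          -- this result, without changing the \x setting
          => SELECT name, setting FROM pg_settings WHERE name = 'work_mem' \gx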
      
      Primarily Christoph's patch, though I did tweak the documentation and help
      text a bit, and re-indented the tab completion section.
      
      Author: Christoph Berg
      Reviewed By: Daniel Verite
      Discussion: https://postgr.es/m/20170127132737.6skslelaf4txs6iw%40msg.credativ.de
      b2678efd
    • Simon Riggs's avatar
      Allow pg_dumpall to dump roles w/o user passwords · 9a83d56b
      Simon Riggs authored
      Add new option --no-role-passwords which dumps roles without passwords.
      Since we don't need passwords, we choose to use pg_roles in preference
      to pg_authid, as access to the latter may be restricted for security
      reasons in some configurations.
      
      Robins Tharakan and Simon Riggs
      9a83d56b
    • Heikki Linnakangas's avatar
      Fix comments in SCRAM-SHA-256 patch. · 55acfcbf
      Heikki Linnakangas authored
      Amit Kapila.
      55acfcbf
    • Simon Riggs's avatar
      Ensure ThisTimeLineID is valid before START_REPLICATION · 5ee21977
      Simon Riggs authored
      Craig Ringer
      5ee21977
    • Heikki Linnakangas's avatar
      Add regression tests for passwords. · 1fff35d8
      Heikki Linnakangas authored
      Michael Paquier.
      1fff35d8
    • Heikki Linnakangas's avatar
      Support SCRAM-SHA-256 authentication (RFC 5802 and 7677). · 818fd4a6
      Heikki Linnakangas authored
      This introduces a new generic SASL authentication method, similar to the
      GSS and SSPI methods. The server first tells the client which SASL
      authentication mechanism to use, and then the mechanism-specific SASL
      messages are exchanged in AuthenticationSASLcontinue and PasswordMessage
      messages. Only SCRAM-SHA-256 is supported at the moment, but this allows
      adding more SASL mechanisms in the future, without changing the overall
      protocol.
      
      Support for channel binding, aka SCRAM-SHA-256-PLUS, is left for later.
      
      The SASLPrep algorithm, for pre-processing the password, is not yet
      implemented. That could cause trouble if you use a password with
      non-ASCII characters and a client library that does implement SASLprep.
      That will hopefully be added later.
      
      Authorization identities, as specified in the SCRAM-SHA-256 specification,
      are ignored. SET SESSION AUTHORIZATION provides more or less the same
      functionality, anyway.
      
      If a user doesn't exist, perform a "mock" authentication, by constructing
      an authentic-looking challenge on the fly. The challenge is derived from
      a new system-wide random value, "mock authentication nonce", which is
      created at initdb and stored in the control file.  We go through these
      motions so as not to reveal to unauthenticated users whether the user
      exists.
      
      Bumps PG_CONTROL_VERSION, because of the new field in the control file.
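
      A hedged usage sketch (spellings as in the released feature; the role
      name and password are hypothetical, and the setting names may have
      differed at the time of this commit):

          SET password_encryption = 'scram-sha-256';
          CREATE ROLE app_user LOGIN PASSWORD 'correct horse battery staple';
          -- with a matching scram-sha-256 entry in pg_hba.conf, the server
          -- and client then exchange the SCRAM messages described above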
      
      Patch by Michael Paquier and Heikki Linnakangas, reviewed at different
      stages by Robert Haas, Stephen Frost, David Steele, Aleksander Alekseev,
      and many others.
      
      Discussion: https://www.postgresql.org/message-id/CAB7nPqRbR3GmFYdedCAhzukfKrgBLTLtMvENOmPrVWREsZkF8g%40mail.gmail.com
      Discussion: https://www.postgresql.org/message-id/CAB7nPqSMXU35g%3DW9X74HVeQp0uvgJxvYOuA4A-A3M%2B0wfEBv-w%40mail.gmail.com
      Discussion: https://www.postgresql.org/message-id/55192AFE.6080106@iki.fi
      818fd4a6
    • Heikki Linnakangas's avatar
      Refactor SHA2 functions and move them to src/common/. · 273c458a
      Heikki Linnakangas authored
      This way both frontend and backend code can use them. The functions are taken
      from pgcrypto, which now fetches the source files it needs from
      src/common/.
      
      A new interface is designed for the SHA2 functions, which allows linking
      to either OpenSSL or the in-core stuff taken from KAME as needed.
      
      Michael Paquier, reviewed by Robert Haas.
      
      Discussion: https://www.postgresql.org/message-id/CAB7nPqTGKuTM5jiZriHrNaQeVqp5e_iT3X4BFLWY_HyHxLvySQ%40mail.gmail.com
      273c458a
    • Stephen Frost's avatar
      pg_dump: Properly handle public schema ACLs with --clean · 330b84d8
      Stephen Frost authored
      pg_dump has always handled the public schema in a special way when it
      comes to the "--clean" option.  To wit, we do not drop or recreate the
      public schema in "normal" mode, but when we are run in "--clean" mode
      then we do drop and recreate the public schema.
      
      When running in "--clean" mode, the public schema is dropped and then
      recreated and it is recreated with the normal schema-default privileges
      of "nothing".  This is unlike how the public schema starts life, which
      is to have CREATE and USAGE GRANT'd to the PUBLIC role, and that is what
      is recorded in pg_init_privs.
      
      Due to this, in "--clean" mode, pg_dump would mistakenly only dump out
      the set of privileges required to go from the initdb-time privileges on
      the public schema to whatever the current-state privileges are.  If the
      privileges were not changed from initdb time, then no privileges would
      be dumped out for the public schema, but with the schema being dropped
      and recreated, the result was that the public schema would have no ACLs
      on it instead of what it should have, which is the initdb-time
      privileges.
      
      Practically speaking, this meant that pg_dump with --clean mode dumping
      a database where the ACLs on the public schema were not changed from the
      default would, upon restore, result in a public schema with *no*
      privileges GRANT'd, not matching the state of the existing database
      (where the initdb-time privileges would have been CREATE and USAGE to
      the PUBLIC role for the public schema).
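
      For reference, those initdb-time privileges amount to the following
      grant (a sketch of the effective default ACL, not the literal
      statements pg_dump emits):

          GRANT USAGE, CREATE ON SCHEMA public TO PUBLIC;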
      
      To fix, adjust the query in getNamespaces() to ignore the pg_init_privs
      entry for the public schema when running in "--clean" mode, meaning that
      the privileges for the public schema would be dumped, correctly, as if
      it was going from a newly-created schema to the current state (which is,
      indeed, what will happen during the restore thanks to the DROP/CREATE).
      
      Only the public schema is handled in this special way by pg_dump, no
      other initdb-time objects are dropped/recreated in --clean mode.
      
      Back-patch to 9.6 where the bug was introduced.
      
      Discussion: https://postgr.es/m/3534542.o3cNaKiDID%40techfox
      330b84d8
    • Tom Lane's avatar
      Repair incorrect pg_dump labeling for some comments and security labels. · 299990ba
      Tom Lane authored
      We attached no schema label to comments for procedural languages, casts,
      transforms, operator classes, operator families, or text search objects.
      The first three categories of objects don't really have schemas, but
      pg_dump treats them as if they do, and it seems like the TocEntry fields
      for their comments had better match the TocEntry fields for the parent
      objects.  (As an example of a possible hazard, the type names in a CAST
      will be formatted with the assumption of a particular search_path, so
      failing to ensure that this same path is active for the COMMENT ON command
      could lead to an error or to attaching the comment to the wrong cast.)
      In the last six cases, this was a flat-out error --- possibly mine to
      begin with, but it was a long time ago.
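
      To illustrate the hazard (the type name is hypothetical):

          -- If restored without the right search_path in effect, "mytype"
          -- may fail to resolve, or resolve to some other schema's type:
          COMMENT ON CAST (mytype AS text) IS 'lossy conversion';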
      
      The security label for a procedural language was likewise not correctly
      labeled as to schema, and both the comment and security label for a
      procedural language were not correctly labeled as to owner.
      
      In simple cases the restore would accidentally work correctly anyway, since
      these comments and security labels would normally get emitted right after
      the owning object, and so the search path and active user would be correct
      anyhow.  But it could fail in corner cases; for example a schema-selective
      restore would omit comments it should include.
      
      Giuseppe Broccolo noted the oversight, and proposed the correct fix, for
      text search dictionary objects; I found the rest by cross-checking other
      dumpComment() calls.  These oversights are ancient, so back-patch all
      the way.
      
      Discussion: https://postgr.es/m/CAFzmHiWwwzLjzwM4x5ki5s_PDMR6NrkipZkjNnO3B0xEpBgJaA@mail.gmail.com
      299990ba
  2. 06 Mar, 2017 16 commits
    • Andres Freund's avatar
      Make simplehash.h grow hashtable in additional cases. · d4c62a6b
      Andres Freund authored
      Increase the size when either the distance between actual and optimal
      slot grows too large, or when too many subsequent entries would have
      to be moved.
      
      This addresses reports that the simplehash performed, sometimes
      considerably, worse than dynahash.
      
      The reason turned out to be that insertions into the hashtable were,
      due to the use of parallel query, in effect done from another
      hashtable, in hash-value order.  If the target hashtable, due to
      mis-estimation, was sized a lot smaller than the source table(s), that
      led to very imbalanced tables; a lot of entries in many close-by
      buckets from the source tables were inserted into a single, wider
      bucket on the target table.  As the growth factor was computed solely
      from the fillfactor, the performance of the table decreased
      further and further.
      
      b81b5a96 was an attempt to address this problem for hash
      aggregates (but not for bitmap scans), but it turns out that the
      current method of mixing hash values often actually leaves neighboring
      hash-values close to each other, just in a different value range.  It
      might be worth revisiting that independently of the performance issues
      addressed in this patch.
      
      To address that problem, resize tables in two additional cases: first,
      when the optimal position for an entry would be far from its actual
      position; second, when many entries would have to be moved to make
      space for the new entry (while preserving the robin hood property).
      
      Due to the additional resizing threshold it seems possible, and
      testing so far confirms, that a higher fillfactor doesn't hurt
      performance and saves a bit of memory.  It seems better to increase it
      now, before a release containing any of this code, rather than wonder
      in some later release.
      
      The various boundaries aren't determined in a particularly scientific
      manner; they might need some fine-tuning.
      
      In all my tests the new code now, even with parallelism, performs at
      least as well as the old code, and in several scenarios significantly
      better.
      
      Reported-By: Dilip Kumar, Robert Haas, Kuntal Ghosh
      Discussion:
          https://postgr.es/m/CAFiTN-vagvuAydKG9VnWcoK=ADAhxmOa4ZTrmNsViBBooTnriQ@mail.gmail.com
          https://postgr.es/m/CAGz5QC+=fNTYgzMLTBUNeKt6uaWZFXJbkB5+7oWm-n9DwVxcLA@mail.gmail.com
      d4c62a6b
    • Stephen Frost's avatar
      pg_upgrade: Fix large object COMMENTS, SECURITY LABELS · ff992c07
      Stephen Frost authored
      When performing a pg_upgrade, we copy the files behind pg_largeobject
      and pg_largeobject_metadata, allowing us to avoid having to dump out and
      reload the actual data for large objects and their ACLs.
      
      Unfortunately, that isn't all of the information which can be associated
      with large objects.  Currently, we also support COMMENTs and SECURITY
      LABELs with large objects and these were being silently dropped during a
      pg_upgrade as pg_dump would skip everything having to do with a large
      object and pg_upgrade only copied the tables mentioned to the new
      cluster.
      
      As the file copies happen after the catalog dump and reload, we can't
      simply include the COMMENTs and SECURITY LABELs in pg_dump's binary-mode
      output but we also have to include the actual large object definition as
      well.  With the definition, comments, and security labels in the pg_dump
      output and the file copies performed by pg_upgrade, all of the data and
      metadata associated with large objects is able to be successfully pulled
      forward across a pg_upgrade.
      
      In 9.6 and master, we can simply adjust the dump bitmask to indicate
      which components we don't want.  In 9.5 and earlier, we have to put
      explicit checks in dumpBlob() and dumpBlobs() to not include the ACL
      or the data when in binary-upgrade mode.
      
      Adjustments were made to the privileges regression test to allow another
      test (large_object.sql) to be added, which explicitly leaves a large
      object with a comment in place to provide coverage of that case with
      pg_upgrade.
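
      For example, the kind of state that now survives an upgrade (the OID
      and comment text are arbitrary):

          SELECT lo_create(42);
          COMMENT ON LARGE OBJECT 42 IS 'metadata preserved across pg_upgrade';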
      
      Back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/20170221162655.GE9812@tamriel.snowman.net
      ff992c07
    • Tom Lane's avatar
      Avoid dangling pointer to relation name in RLS code path in DoCopy(). · a8df75b0
      Tom Lane authored
      With RLS active, "COPY tab TO ..." failed under -DRELCACHE_FORCE_RELEASE,
      and would sometimes fail without that, because it used the relation name
      directly from the relcache as part of the parsetree it's building.  That
      becomes a potentially-dangling pointer as soon as the relcache entry is
      closed, a bit further down.  Typical symptom if the relcache entry chanced
      to get cleared would be "relation does not exist" error with a garbage
      relation name, or possibly a core dump; but if you were really truly
      unlucky, the COPY might copy from the wrong table.
      
      Per report from Andrew Dunstan that regression tests fail with
      -DRELCACHE_FORCE_RELEASE.  The core tests now pass for me (but I have
      not tried "make check-world" yet).
      
      Discussion: https://postgr.es/m/7b52f900-0579-cda9-ae2e-de5da17090e6@2ndQuadrant.com
      a8df75b0
    • Peter Eisentraut's avatar
      Combine several DROP variants into generic DropStmt · e6477a81
      Peter Eisentraut authored
      Combine DROP of FOREIGN DATA WRAPPER, SERVER, POLICY, RULE, and TRIGGER
      into generic DropStmt grammar.
      Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
      Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
      e6477a81
    • Peter Eisentraut's avatar
      Allow dropping multiple functions at once · 583f6c41
      Peter Eisentraut authored
      The generic drop support already supported dropping multiple objects of
      the same kind at once.  But the previous representation
      of function signatures across two grammar symbols and structure members
      made this cumbersome to do for functions, so it was not supported.  Now
      that function signatures are represented by a single structure, it's
      trivial to add this support.  Same for aggregates and operators.
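
      For example (hypothetical signatures):

          DROP FUNCTION trim_whitespace(text), array_sum(integer[]);
          DROP AGGREGATE my_min(integer), my_max(integer);
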
      Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
      Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
      583f6c41
    • Peter Eisentraut's avatar
      Replace LookupFuncNameTypeNames() with LookupFuncWithArgs() · 2ca64c6f
      Peter Eisentraut authored
      The old function took function name and function argument list as
      separate arguments.  Now that all function signatures are passed around
      as ObjectWithArgs structs, this is no longer necessary and can be
      replaced by a function that takes ObjectWithArgs directly.  Similarly
      for aggregates and operators.
      Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
      Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
      2ca64c6f
    • Peter Eisentraut's avatar
      Remove objname/objargs split for referring to objects · 8b6d6cf8
      Peter Eisentraut authored
      In simpler times, it might have worked to refer to all kinds of objects
      by a list of name components and an optional argument list.  But this
      doesn't work for all objects, which has resulted in a collection of
      hacks to place various other node types into these fields, which have
      to be unpacked at the other end.  This also makes it weird to represent
      lists of such things in the grammar, because they would have to be lists
      of singleton lists, to make the unpacking work consistently.  The other
      problem is that keeping separate name and args fields makes it awkward
      to deal with lists of functions.
      
      Change that by dropping the objargs field and having objname, renamed to
      object, be a generic Node, which can then be flexibly assigned and
      managed using the normal Node mechanisms.  In many cases it will still
      be a List of names, in some cases it will be a string Value, for types
      it will be the existing Typename, for functions it will now use the
      existing ObjectWithArgs node type.  Some of the more obscure object
      types still use somewhat arbitrary nested lists.
      Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
      Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
      8b6d6cf8
    • Peter Eisentraut's avatar
      Add operator_with_argtypes grammar rule · 550214a4
      Peter Eisentraut authored
      This makes the handling of operators similar to that of functions and
      aggregates.
      
      Rename node FuncWithArgs to ObjectWithArgs, to reflect the expanded use.
      Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
      Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
      550214a4
    • Peter Eisentraut's avatar
      Use class_args field in opclass_drop · 63ebd377
      Peter Eisentraut authored
      This makes it consistent with the usage in opclass_item.
      Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
      Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
      63ebd377
    • Robert Haas's avatar
      Fix incorrect comments. · 12a2544c
      Robert Haas authored
      Commit 19dc233c introduced these
      comments.  Michael Paquier noticed that one of them had a typo, but
      a bigger problem is that they were not an accurate description of
      what the code was doing.
      
      Patch by me.
      12a2544c
    • Robert Haas's avatar
      Mark pg_start_backup and pg_stop_backup as parallel-restricted. · 9fe3c644
      Robert Haas authored
      They depend on backend-private state that will not be synchronized by
      the parallel machinery, so they should not be marked parallel-safe.
      This issue also exists in 9.6, but we obviously can't do anything
      about 9.6 clusters that already exist.  Possibly this could be
      back-patched so that future 9.6 clusters would come out OK, or
      possibly we should back-patch some other fix, but that would need more
      discussion.
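
      On an existing 9.6 cluster the markings could be adjusted manually
      along these lines (a sketch, not the committed catalog change;
      signatures as in 9.6):

          ALTER FUNCTION pg_start_backup(text, boolean, boolean) PARALLEL RESTRICTED;
          ALTER FUNCTION pg_stop_backup() PARALLEL RESTRICTED;
          ALTER FUNCTION pg_stop_backup(boolean) PARALLEL RESTRICTED;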
      
      David Steele, reviewed by Michael Paquier
      
      Discussion: http://postgr.es/m/CA+TgmoYCWfO2UM-t=HUMFJyxJywLDiLL0nAJpx88LKtvBvNECw@mail.gmail.com
      9fe3c644
    • Robert Haas's avatar
      Fix use-after-free bug. · 7f6fa29f
      Robert Haas authored
      Introduced by commit aea5d298.
      
      Patch from Amit Kapila.  Issue discovered independently by Amit Kapila
      and Ashutosh Sharma.
      7f6fa29f
    • Peter Eisentraut's avatar
      Reorder the asynchronous libpq calls for replication connection · e434ad39
      Peter Eisentraut authored
      Per libpq documentation, the initial state must be
      PGRES_POLLING_WRITING.  Failing to do that appears to cause some issues
      on some Windows systems.
      
      From: Petr Jelinek <petr.jelinek@2ndquadrant.com>
      e434ad39
    • Simon Riggs's avatar
      Enhance docs for ALTER TABLE lock levels of storage parms · 6f3a13ff
      Simon Riggs authored
      As requested by Robert Haas
      6f3a13ff
    • Simon Riggs's avatar
      Reduce lock levels for table storage params related to planning · 21d4e2e2
      Simon Riggs authored
      The following parameters are now updateable with ShareUpdateExclusiveLock:
      effective_io_concurrency
      parallel_workers
      seq_page_cost
      random_page_cost
      n_distinct
      n_distinct_inherited
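
      For example (the table and column names are hypothetical):

          -- now takes only ShareUpdateExclusiveLock, so concurrent reads
          -- and writes are not blocked
          ALTER TABLE measurements SET (parallel_workers = 4);
          ALTER TABLE measurements ALTER COLUMN reading SET (n_distinct = 500);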
      
      Simon Riggs and Fabrízio Mello
      21d4e2e2
    • Simon Riggs's avatar
      Allow partitioned tables to be dropped without CASCADE · 8b4d582d
      Simon Riggs authored
      Record partitioned table dependencies as DEPENDENCY_AUTO
      rather than DEPENDENCY_NORMAL, so that DROP TABLE just works.
      
      Remove all the tests for partitioned tables where earlier
      work had deliberately avoided using CASCADE.
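
      For example (hypothetical table names, using the new partitioning
      syntax):

          CREATE TABLE logs (ts timestamptz, msg text) PARTITION BY RANGE (ts);
          CREATE TABLE logs_2017 PARTITION OF logs
              FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
          DROP TABLE logs;  -- now also drops logs_2017, no CASCADE needed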
      
      Amit Langote, reviewed by Ashutosh Bapat and myself
      8b4d582d