1. 09 Aug, 2018 4 commits
  2. 08 Aug, 2018 4 commits
    • Doc: Correct description of amcheck example query. · 313cbdc7
      Peter Geoghegan authored
      The amcheck documentation incorrectly claimed that its example query
      verifies every catalog index in the database.  In fact, the query only
      verifies the 10 largest indexes (as determined by pg_class.relpages).
      Adjust the description accordingly.
      
      Backpatch: 10-, where contrib/amcheck was introduced.
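
      For reference, a minimal libpq sketch of the kind of check the corrected
      documentation describes: verifying only the 10 largest pg_catalog btree
      indexes, as determined by pg_class.relpages.  This is an approximation
      for illustration, not the documented query itself; it assumes the
      amcheck extension is installed, a placeholder connection string, and a
      build linked with -lpq.

        #include <stdio.h>
        #include <stdlib.h>
        #include <libpq-fe.h>

        int
        main(void)
        {
            /* Connection string is a placeholder; adjust for your setup. */
            PGconn     *conn = PQconnectdb("dbname=postgres");
            PGresult   *res;
            const char *sql =
                "SELECT bt_index_check(c.oid), c.relname, c.relpages "
                "FROM pg_class c "
                "JOIN pg_namespace n ON n.oid = c.relnamespace "
                "JOIN pg_am am ON am.oid = c.relam "
                "WHERE n.nspname = 'pg_catalog' AND c.relkind = 'i' "
                "  AND am.amname = 'btree' "
                "ORDER BY c.relpages DESC LIMIT 10";

            if (PQstatus(conn) != CONNECTION_OK)
            {
                fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
                PQfinish(conn);
                return EXIT_FAILURE;
            }

            res = PQexec(conn, sql);
            if (PQresultStatus(res) != PGRES_TUPLES_OK)
                fprintf(stderr, "amcheck query failed: %s", PQerrorMessage(conn));
            else
                for (int i = 0; i < PQntuples(res); i++)
                    printf("checked %s (%s pages)\n",
                           PQgetvalue(res, i, 1), PQgetvalue(res, i, 2));

            PQclear(res);
            PQfinish(conn);
            return EXIT_SUCCESS;
        }
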
    • Remove unwanted "garbage cleanup" logic in Makefiles. · 1eee8d49
      Tom Lane authored
      GNUmakefile.in defined a macro "garbage" that seems to have been meant
      as a suitable target for automatic "rm -rf" treatment, but it isn't
      actually used anywhere (and indeed never was, AFAICT).
      
      Moreover, we have concluded that the Makefiles shouldn't take it upon
      themselves to remove files that aren't expected by-products of building,
      so that doing anything like that would be against project policy anyway.
      Hence, just remove the macro.
      
      Grepping around finds another violation of that policy in ecpg/preproc,
      so clean that up too.
      
      Daniel Gustafsson (ecpg change by me)
      
      Discussion: https://postgr.es/m/AFBEF63E-E19D-4EBB-9F08-4617CDC751ED@yesql.se
    • Don't run atexit callbacks in quickdie signal handlers. · 8e19a826
      Heikki Linnakangas authored
      exit() is not async-signal safe. Even if the libc implementation is,
      third-party libraries might have installed unsafe atexit() callbacks.
      After receiving SIGQUIT, we just want to exit as quickly as possible,
      so we don't want to run the atexit() callbacks anyway.
      
      The original report by Jimmy Yih was a self-deadlock in startup_die().
      However, this patch doesn't address that scenario; the signal handling
      while waiting for the startup packet is more complicated. But at least this
      alleviates similar problems in the SIGQUIT handlers, like that reported
      by Asim R P later in the same thread.
      
      Backpatch to 9.3 (all supported versions).
      
      Discussion: https://www.postgresql.org/message-id/CAOMx_OAuRUHiAuCg2YgicZLzPVv5d9_H4KrL_OFsFP%3DVPekigA%40mail.gmail.com
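
      A minimal standalone sketch of the pattern adopted here, assuming
      nothing about the actual quickdie() code beyond what is described
      above: the SIGQUIT handler calls _exit(), which is async-signal safe
      and skips atexit() callbacks, instead of exit().

        #define _POSIX_C_SOURCE 200809L
        #include <signal.h>
        #include <unistd.h>

        /* Hypothetical SIGQUIT handler: bypass exit() so that atexit()
         * callbacks, which may not be async-signal safe, never run. */
        static void
        quickdie_like_handler(int signo)
        {
            (void) signo;
            _exit(2);           /* _exit() is async-signal safe; exit() is not */
        }

        int
        main(void)
        {
            struct sigaction sa;

            sa.sa_handler = quickdie_like_handler;
            sigemptyset(&sa.sa_mask);
            sa.sa_flags = 0;
            sigaction(SIGQUIT, &sa, NULL);

            for (;;)
                pause();        /* wait for a signal such as SIGQUIT (Ctrl-\) */
        }
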
    • Match RelOptInfos by relids not pointer equality. · 11e22e48
      Tom Lane authored
      Commit 1c2cb274 added some code that tried to detect whether two
      RelOptInfos were the "same" rel by pointer comparison; but it turns
      out that inheritance_planner breaks that, through its shenanigans
      with copying some relations forward into new subproblems.  Compare
      relid sets instead.  Add a regression test case to exercise this
      area.
      
      Problem reported by Rushabh Lathia; diagnosis and fix by Amit Langote,
      modified a bit by me.
      
      Discussion: https://postgr.es/m/CAGPqQf3anJGj65bqAQ9edDr8gF7qig6_avRgwMT9MsZ19COUPw@mail.gmail.com
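
      A toy illustration of the pitfall, not the planner's actual data
      structures: the struct below stands in for a RelOptInfo, with its relid
      set reduced to a plain bitmask.  Once a copy is made, pointer identity
      fails while set equality still holds.

        #include <stdbool.h>
        #include <stdio.h>

        /* Toy stand-in for a RelOptInfo; only the relid set matters here. */
        typedef struct ToyRel
        {
            unsigned long relids;       /* bitmask of range-table indexes */
        } ToyRel;

        /* Pointer comparison breaks once the planner copies the struct... */
        static bool
        same_rel_by_pointer(const ToyRel *a, const ToyRel *b)
        {
            return a == b;
        }

        /* ...so compare the relid sets instead. */
        static bool
        same_rel_by_relids(const ToyRel *a, const ToyRel *b)
        {
            return a->relids == b->relids;
        }

        int
        main(void)
        {
            ToyRel      original = { (1UL << 3) | (1UL << 5) };
            ToyRel      copy = original;    /* inheritance_planner-style copy */

            printf("pointer match: %d\n", same_rel_by_pointer(&original, &copy)); /* 0 */
            printf("relids match:  %d\n", same_rel_by_relids(&original, &copy));  /* 1 */
            return 0;
        }
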
  3. 07 Aug, 2018 4 commits
    • Don't record FDW user mappings as members of extensions. · 9b7c56d6
      Tom Lane authored
      CreateUserMapping has a recordDependencyOnCurrentExtension call that's
      been there since extensions were introduced (very possibly my fault).
      However, there's no support anywhere else for user mappings as members
      of extensions, nor are they listed as a possible member object type in
      the documentation.  Nor does it really seem like a good idea for user
      mappings to belong to extensions when roles don't.  Hence, remove the
      bogus call.
      
      (As we saw in bug #15310, the lack of any pg_dump support for this case
      ensures that any such membership record would silently disappear during
      pg_upgrade.  So there's probably no need for us to do anything else
      about cleaning up after this mistake.)
      
      Discussion: https://postgr.es/m/27952.1533667213@sss.pgh.pa.us
    • Fix incorrect initialization of BackendActivityBuffer. · 41db9739
      Tom Lane authored
      Since commit c8e8b5a6, this has been zeroed out using the wrong length.
      In practice the length would always be too small, leading to not zeroing
      the whole buffer rather than clobbering additional memory; and that's
      pretty harmless, both because shmem would likely start out as zeroes
      and because we'd reinitialize any given entry before use.  Still,
      it's bogus, so fix it.
      
      Reported by Petru-Florin Mihancea (bug #15312)
      
      Discussion: https://postgr.es/m/153363913073.1303.6518849192351268091@wrigleys.postgresql.org
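
      An illustrative sketch of the class of bug being fixed; the names and
      sizes below are invented, not the real shared-memory layout.  Zeroing a
      multi-slot buffer with a single slot's length leaves most of it
      uninitialized.

        #include <stdlib.h>
        #include <string.h>

        #define NUM_BACKENDS        8       /* invented values for illustration */
        #define ACTIVITY_SLOT_SIZE  1024

        int
        main(void)
        {
            /* One fixed-size activity slot per backend, as one flat buffer. */
            char   *buf = malloc((size_t) NUM_BACKENDS * ACTIVITY_SLOT_SIZE);

            if (buf == NULL)
                return 1;

            /* Wrong: zeroes only one slot's worth of bytes.
             * memset(buf, 0, ACTIVITY_SLOT_SIZE);
             */

            /* Right: the length must cover every slot. */
            memset(buf, 0, (size_t) NUM_BACKENDS * ACTIVITY_SLOT_SIZE);

            free(buf);
            return 0;
        }
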
    • Fix pg_upgrade to handle event triggers in extensions correctly. · 03838b80
      Tom Lane authored
      pg_dump with --binary-upgrade must emit ALTER EXTENSION ADD commands
      for all objects that are members of extensions.  It forgot to do so for
      event triggers, as per bug #15310 from Nick Barnes.  Back-patch to 9.3
      where event triggers were introduced.
      
      Haribabu Kommi
      
      Discussion: https://postgr.es/m/153360083872.1395.4593932457718151600@wrigleys.postgresql.org
    • Ensure pg_dump_sort.c sorts null vs non-null namespace consistently. · 5b5ed475
      Tom Lane authored
      The original coding here (which is, I believe, my fault) supposed that
      it didn't need to concern itself with the possibility that one object
      of a given type-priority has a namespace while another doesn't.  But
      that's not reliably true anymore, if it ever was; and if it does happen
      then it's possible that DOTypeNameCompare returns self-inconsistent
      comparison results.  That leads to unspecified behavior in qsort()
      and a resultant weird output order from pg_dump.
      
      This should end up being only a cosmetic problem, because any ordering
      constraints that actually matter should be enforced by the later
      dependency-based sort.  Still, it's a bug, so back-patch.
      
      Report and fix by Jacob Champion, though I editorialized on his
      patch to the extent of making NULL sort after non-NULL, for consistency
      with our usual sorting definitions.
      
      Discussion: https://postgr.es/m/CABAq_6Hw+V-Kj7PNfD5tgOaWT_-qaYkc+SRmJkPLeUjYXLdxwQ@mail.gmail.com
  4. 06 Aug, 2018 2 commits
    • Last-minute updates for release notes. · e0ee9305
      Tom Lane authored
      Security: CVE-2018-10915, CVE-2018-10925
    • Fix failure to reset libpq's state fully between connection attempts. · d1c6a14b
      Tom Lane authored
      The logic in PQconnectPoll() did not take care to ensure that all of
      a PGconn's internal state variables were reset before trying a new
      connection attempt.  If we got far enough in the connection sequence
      to have changed any of these variables, and then decided to try a new
      server address or server name, the new connection might be completed
      with some state that really only applied to the failed connection.
      
      While this has assorted bad consequences, the only one that is clearly
      a security issue is that password_needed didn't get reset, so that
      if the first server asked for a password and the second didn't,
      PQconnectionUsedPassword() would return an incorrect result.  This
      could be leveraged by unprivileged users of dblink or postgres_fdw
      to allow them to use server-side login credentials that they should
      not be able to use.
      
      Other notable problems include the possibility of forcing a v2-protocol
      connection to a server capable of supporting v3, or overriding
      "sslmode=prefer" to cause a non-encrypted connection to a server that
      would have accepted an encrypted one.  Those are certainly bugs but
      it's harder to paint them as security problems in themselves.  However,
      forcing a v2-protocol connection could result in libpq having a wrong
      idea of the server's standard_conforming_strings setting, which opens
      the door to SQL-injection attacks.  The extent to which that's actually
      a problem, given the prerequisite that the attacker needs control of
      the client's connection parameters, is unclear.
      
      These problems have existed for a long time, but became more easily
      exploitable in v10, both because it introduced easy ways to force libpq
      to abandon a connection attempt at a late stage and then try another one
      (rather than just giving up), and because it provided an easy way to
      specify multiple target hosts.
      
      Fix by rearranging PQconnectPoll's state machine to provide centralized
      places to reset state properly when moving to a new target host or when
      dropping and retrying a connection to the same host.
      
      Tom Lane, reviewed by Noah Misch.  Our thanks to Andrew Krasichkov
      for finding and reporting the problem.
      
      Security: CVE-2018-10915
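
      For context, a hedged libpq sketch of the client-side check that the
      stale password_needed state could defeat.  The connection string and
      the policy shown are placeholders, not dblink's actual code; only
      documented libpq calls are used.

        #include <stdio.h>
        #include <libpq-fe.h>

        int
        main(void)
        {
            /* Multiple target hosts, as allowed since v10; placeholders. */
            PGconn     *conn =
                PQconnectdb("host=db1.example.com,db2.example.com dbname=postgres");

            if (PQstatus(conn) != CONNECTION_OK)
            {
                fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
                PQfinish(conn);
                return 1;
            }

            /*
             * Callers such as dblink and postgres_fdw use this check to refuse
             * connections that did not supply a password; the bug could make it
             * report state from an earlier, failed connection attempt.
             */
            if (!PQconnectionUsedPassword(conn))
                fprintf(stderr, "warning: connection did not use a password\n");

            PQfinish(conn);
            return 0;
        }
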
  5. 05 Aug, 2018 5 commits
    • aa291a4c
      Tom Lane authored
    • Doc: fix incorrectly stated argument list for pgcrypto's hmac() function. · a3274e0d
      Tom Lane authored
      The bytea variant takes (bytea, bytea, text).
      Per unsigned report.
      
      Discussion: https://postgr.es/m/153344327294.1404.654155870612982042@wrigleys.postgresql.org
    • Remove now unused check for HAVE_X509_GET_SIGNATURE_NID in test. · 6b9eb503
      Heikki Linnakangas authored
      I removed the code that used this in the previous commit.
      
      Spotted by Michael Paquier.
    • Remove support for tls-unique channel binding. · 77291139
      Heikki Linnakangas authored
      There are some problems with the tls-unique channel binding type. It's not
      supported by all SSL libraries, and strictly speaking it's not defined for
      TLS 1.3 at all, even though at least in OpenSSL, the functions used for it
      still seem to work with TLS 1.3 connections. And since we had no
      mechanism to negotiate what channel binding type to use, there would be
      awkward interoperability issues if a server only supported some channel
      binding types. tls-server-end-point seems feasible to support with any SSL
      library, so let's just stick to that.
      
      This removes the scram_channel_binding libpq option altogether, since there
      is now only one supported channel binding type.
      
      This also removes all the channel binding tests from the SSL test suite.
      They were really just testing the scram_channel_binding option, which
      is now gone. Channel binding is used if both client and server support it,
      so it is exercised by the existing tests. It would be good to have tests
      specifically for channel binding, to make sure it really is used, covering
      the different combinations of client and server that do or do not support
      it. The current set of test settings makes it hard to write such
      tests, but I did test those things manually, by disabling
      HAVE_BE_TLS_GET_CERTIFICATE_HASH and/or
      HAVE_PGTLS_GET_PEER_CERTIFICATE_HASH.
      
      I also removed the SCRAM_CHANNEL_BINDING_TLS_END_POINT constant. This is a
      matter of taste, but IMO it's more readable to just use the
      "tls-server-end-point" string.
      
      Refactor the checks on whether the SSL library supports the functions
      needed for tls-server-end-point channel binding. Now the server won't
      advertise, and the client won't choose, the SCRAM-SHA-256-PLUS variant, if
      compiled with an OpenSSL version too old to support it.
      
      In passing, add some sanity checks to verify that the chosen SASL
      mechanism, SCRAM-SHA-256 or SCRAM-SHA-256-PLUS, matches whether the SCRAM
      exchange actually used channel binding. For example, the client might
      select the non-channel-binding variant SCRAM-SHA-256 but then use channel
      binding in the SCRAM messages anyway. I believe that is harmless from a
      security point of view, and I'm not sure whether any other condition would
      cause the connection to fail, but it seems better to be strict about these
      things and check explicitly.
      
      Discussion: https://www.postgresql.org/message-id/ec787074-2305-c6f4-86aa-6902f98485a4%40iki.fi
    • Update version 11 release notes. · 7a46068f
      Tom Lane authored
      Remove description of commit 1944cdc9, which has now been back-patched
      so it's not relevant to v11 any longer.  Add descriptions of other
      recent commits that seemed worth mentioning.
      
      I marked the update as stopping at 2018-07-30, because it's unclear
      whether d06eebce5 will be allowed to stay in v11, and I didn't feel like
      putting effort into writing a description of it yet.  If it does stay,
      I think it will deserve mention in the Source Code section.
  6. 04 Aug, 2018 3 commits
    • Fix INSERT ON CONFLICT UPDATE through a view that isn't just SELECT *. · b8a1247a
      Tom Lane authored
      When expanding an updatable view that is an INSERT's target, the rewriter
      failed to rewrite Vars in the ON CONFLICT UPDATE clause.  This accidentally
      worked if the view was just "SELECT * FROM ...", as the transformation
      would be a no-op in that case.  With more complicated view targetlists,
      this omission would often lead to "attribute ... has the wrong type" errors
      or even crashes, as reported by Mario De Frutos Dieguez.
      
      Fix by adding code to rewriteTargetView to fix up the data structure
      correctly.  The easiest way to update the exclRelTlist list is to rebuild
      it from scratch looking at the new target relation, so factor the code
      for that out of transformOnConflictClause to make it sharable.
      
      In passing, avoid duplicate permissions checks against the EXCLUDED
      pseudo-relation, and prevent useless view expansion of that relation's
      dummy RTE.  The latter is only known to happen (after this patch) in cases
      where the query would fail later due to not having any INSTEAD OF triggers
      for the view.  But by exactly that token, it would create an unintended
      and very poorly tested state of the query data structure, so it seems like
      a good idea to prevent it from happening at all.
      
      This has been broken since ON CONFLICT was introduced, so back-patch
      to 9.5.
      
      Dean Rasheed, based on an earlier patch by Amit Langote;
      comment-kibitzing and back-patching by me
      
      Discussion: https://postgr.es/m/CAFYwGJ0xfzy8jaK80hVN2eUWr6huce0RU8AgU04MGD00igqkTg@mail.gmail.com
    • Properly reset errno before calling write() · 5a23c74b
      Michael Paquier authored
      Commit 6cb33724 forces errno to ENOSPC when fewer bytes than expected
      have been written and errno is unset, but it forgot to properly reset
      errno before calling write(), so errno could still hold a value left
      over from a previous system call.
      
      Reported-by: Tom Lane
      Author: Michael Paquier
      Reviewed-by: Tom Lane
      Discussion: https://postgr.es/m/31797.1533326676@sss.pgh.pa.us
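
      A self-contained sketch of the convention described above: clear errno
      before write(), and treat a short write that leaves errno unset as
      ENOSPC.  This is an illustration, not the backend's actual file-access
      code.

        #include <errno.h>
        #include <stdio.h>
        #include <unistd.h>

        /* Write len bytes or fail; a short write with errno still 0 becomes
         * ENOSPC. */
        static int
        write_all_or_enospc(int fd, const void *buf, size_t len)
        {
            ssize_t     written;

            errno = 0;              /* don't inherit errno from earlier calls */
            written = write(fd, buf, len);

            if (written != (ssize_t) len)
            {
                if (errno == 0)
                    errno = ENOSPC; /* short write without an error code */
                return -1;
            }
            return 0;
        }

        int
        main(void)
        {
            const char  msg[] = "hello\n";

            if (write_all_or_enospc(STDOUT_FILENO, msg, sizeof(msg) - 1) != 0)
                perror("write");
            return 0;
        }
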
    • Make "kerberos" test suite independent of "localhost" name resolution. · e61f21b9
      Noah Misch authored
      This suite malfunctioned if the canonical name of "localhost" was
      something other than "localhost", such as "localhost.localdomain".  Use
      hostaddr=127.0.0.1 and a fictitious host=, so the resolver's answers for
      "localhost" don't affect the outcome.  Back-patch to v11, which
      introduced this test suite.
      
      Discussion: https://postgr.es/m/20180801050903.GA1392916@rfd.leadboat.com
  7. 03 Aug, 2018 8 commits
    • Add table relcache invalidation to index builds. · b3f919da
      Peter Geoghegan authored
      It's necessary to make sure that owning tables have a relcache
      invalidation prior to advancing the command counter to make
      newly-entered catalog tuples for the index visible.  inval.c must be
      able to maintain the consistency of the local caches in the event of
      transaction abort.  There is usually only a problem when CREATE INDEX
      transactions abort, since there is a generic invalidation once we reach
      index_update_stats().
      
      This bug is of long standing.  Problems were made much more likely by
      the addition of parallel CREATE INDEX (commit 9da0cc35), but it is
      strongly suspected that similar problems can be triggered without
      involving plan_create_index_workers().  (plan_create_index_workers()
      triggers a relcache build or rebuild, which previously only happened in
      rare edge cases.)
      
      Author: Peter Geoghegan
      Reported-By: Luca Ferrari
      Diagnosed-By: Andres Freund
      Reviewed-By: Andres Freund
      Discussion: https://postgr.es/m/CAKoxK+5fVodiCtMsXKV_1YAKXbzwSfp7DgDqUmcUAzeAhf=HEQ@mail.gmail.com
      Backpatch: 9.3-
    • First-draft release notes for 10.5. · c1455de2
      Tom Lane authored
      As usual, the release notes for other branches will be made by cutting
      these down, but put them up for community review first.
    • Add 'n' to list of possible values to pg_default_acl.defaclobjtype · f6f8d55c
      Alvaro Herrera authored
      This was missed in commit ab89e465; backpatch to v10.
      
      Author: Fabien Coelho <coelho@cri.ensmp.fr>
      Discussion: https://postgr.es/m/alpine.DEB.2.21.1807302243001.13230@lancre
    • Fix pg_replication_slot example output · 416db241
      Alvaro Herrera authored
      The example output of pg_replication_slot is wrong.  Correct it and make
      the output stable by explicitly listing columns to output.
      
      Author: Kyotaro Horiguchi <horiguchi.kyotaro@lab.ntt.co.jp>
      Reviewed-by: Michael Paquier <michael@paquier.xyz>
      Discussion: https://postgr.es/m/20180731.190909.42582169.horiguchi.kyotaro@lab.ntt.co.jp
    • Remove no-longer-appropriate special case in psql's \conninfo code. · c7a8f786
      Tom Lane authored
      \conninfo prints the results of PQhost() and some other libpq functions.
      It used to override the PQhost() result with the hostaddr parameter if
      that'd been given, but that's unhelpful when multiple hosts were listed
      in the connection string.  Furthermore, it seems unnecessary in the wake
      of commit 1944cdc9, since PQhost does any useful substitution itself.
      So let's just remove the extra code and print PQhost()'s result without
      any editorialization.
      
      Back-patch to v10, as 1944cdc9 (just) was.
      
      Discussion: https://postgr.es/m/23287.1533227021@sss.pgh.pa.us
    • Change libpq's internal uses of PQhost() to inspect host field directly. · 24986c95
      Tom Lane authored
      Commit 1944cdc9 changed PQhost() to return the hostaddr value when that
      is specified and host isn't.  This is a good idea in general, but
      fe-auth.c and related files contain PQhost() calls for which it isn't.
      Specifically, when we compare SSL certificates or other server identity
      information to the host field, we do not want to use hostaddr instead;
      that's not what's documented, that's not what happened pre-v10, and
      it doesn't seem like a good idea.
      
      Instead, we can just look at connhost[].host directly.  This does what
      we want in v10 and up; in particular, if neither host nor hostaddr
      were given, the host field will be replaced with the default host name.
      That seems useful, and it's likely the reason that these places were
      coded to call PQhost() originally (since pre-v10, the stored field was
      not replaced with the default).
      
      Back-patch to v10, as 1944cdc9 (just) was.
      
      Discussion: https://postgr.es/m/23287.1533227021@sss.pgh.pa.us
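
      A small libpq sketch of the distinction drawn here, using only
      documented API calls; the connection parameters are placeholders.
      PQhost() is fine for display purposes, but certificate and other
      server-identity checks must not silently fall back to hostaddr.

        #include <stdio.h>
        #include <libpq-fe.h>

        int
        main(void)
        {
            /* hostaddr given without host: placeholder parameters. */
            PGconn     *conn = PQconnectdb("hostaddr=127.0.0.1 dbname=postgres");

            if (PQstatus(conn) == CONNECTION_OK)
            {
                /*
                 * Since commit 1944cdc9, PQhost() may report the hostaddr
                 * value here.  That is fine for informational output such as
                 * \conninfo, but libpq's own identity checks now read the
                 * stored host field directly instead.
                 */
                printf("connected to host: %s\n", PQhost(conn));
            }
            else
                fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));

            PQfinish(conn);
            return 0;
        }
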
    • Fix buffer usage stats for parallel nodes. · 85c9d347
      Amit Kapila authored
      Buffer usage stats are accounted only for the execution phase of a
      node.  For Gather and Gather Merge nodes, such stats are accumulated
      when the workers are shut down, which happens after the node's
      execution, so those stats were never accounted for.  Fix this by
      treating the nodes as still running while we shut them down.
      
      We can also miss accounting for a Limit node when a Gather or Gather
      Merge is beneath it, because the Limit node can finish execution before
      those nodes are shut down.  So allow a Limit node to shut down those
      resources before it completes execution.
      
      In passing, fix the Gather node code to allow workers to shut down as
      soon as we find that all the tuples from the workers have been
      retrieved.  The original code used to do that, but it was accidentally
      removed by commit 01edb5c7fc.
      
      Reported-by: Adrien Nayrat
      Author: Amit Kapila and Robert Haas
      Reviewed-by: Robert Haas and Andres Freund
      Backpatch-through: 9.6 where this code was introduced
      Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info
    • Match the buffer usage tracking for leader and worker backends. · ccc84a95
      Amit Kapila authored
      In the leader backend, we don't track buffer usage during the
      ExecutorStart phase, whereas worker backends do.  This leads to
      different buffer usage stats for parallel and non-parallel queries.
      Change the code so that worker backends also start tracking buffer
      usage only after ExecutorStart.
      
      Author: Amit Kapila and Robert Haas
      Reviewed-by: Robert Haas and Andres Freund
      Backpatch-through: 9.6 where this code was introduced
      Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info
  8. 02 Aug, 2018 1 commit
  9. 01 Aug, 2018 7 commits
    • Fix run-time partition pruning for appends with multiple source rels. · 1c2cb274
      Tom Lane authored
      The previous coding here supposed that if run-time partitioning applied to
      a particular Append/MergeAppend plan, then all child plans of that node
      must be members of a single partitioning hierarchy.  This is totally wrong,
      since an Append could be formed from a UNION ALL: we could have multiple
      hierarchies sharing the same Append, or child plans that aren't part of any
      hierarchy.
      
      To fix, restructure the related plan-time and execution-time data
      structures so that we can have a separate list or array for each
      partitioning hierarchy.  Also track subplans that are not part of any
      hierarchy, and make sure they don't get pruned.
      
      Per reports from Phil Florent and others.  Back-patch to v11, since
      the bug originated there.
      
      David Rowley, with a lot of cosmetic adjustments by me; thanks also
      to Amit Langote for review.
      
      Discussion: https://postgr.es/m/HE1PR03MB17068BB27404C90B5B788BCABA7B0@HE1PR03MB1706.eurprd03.prod.outlook.com
    • Fix logical replication slot initialization · c40489e4
      Alvaro Herrera authored
      This was broken in commit 9c7d06d6, which inadvertently gave the
      wrong value to fast_forward in one StartupDecodingContext call.  Fix by
      flipping the value.  Add a test for the obvious error, namely trying to
      initialize a replication slot with a nonexistent output plugin.
      
      While at it, move the CreateDecodingContext call earlier, so that any
      errors are reported before sending the CopyBoth message.
      
      Author: Dave Cramer <davecramer@gmail.com>
      Reviewed-by: Andres Freund <andres@anarazel.de>
      Discussion: https://postgr.es/m/CADK3HHLVkeRe1v4P02-5hj55H3_yJg3AEtpXyEY5T3wuzO2jSg@mail.gmail.com
    • Fix unnoticed variable shadowing in previous commit · 91bc213d
      Alvaro Herrera authored
      Per buildfarm.
    • Fix per-tuple memory leak in partition tuple routing · 1c9bb02d
      Alvaro Herrera authored
      Some operations were being done in a longer-lived memory context,
      causing intra-query leaks.  It's not noticeable unless you're doing a
      large COPY, but if you are, it eats enough memory to cause a problem.
      Co-authored-by: Kohei KaiGai <kaigai@heterodb.com>
      Co-authored-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
      Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
      Discussion: https://postgr.es/m/CAOP8fzYtVFWZADq4c=KoTAqgDrHWfng+AnEPEZccyxqxPVbbWQ@mail.gmail.com
    • Fix libpq's code for searching .pgpass; rationalize empty-list-item cases. · e3f99e03
      Tom Lane authored
      Before v10, we always searched ~/.pgpass using the host parameter,
      and nothing else, to match to the "hostname" field of ~/.pgpass.
      (However, null host or host matching DEFAULT_PGSOCKET_DIR was replaced by
      "localhost".)  In v10, this got broken by commit 274bb2b3, repaired by
      commit bdac9836, and broken again by commit 7b02ba62; in the code
      actually shipped, we'd search with hostaddr if both that and host were
      specified --- though oddly, *not* if only hostaddr were specified.
      Since this is directly contrary to the documentation, and not
      backwards-compatible, it's clearly a bug.
      
      However, the change wasn't totally without justification, even though it
      wasn't done quite right, because the pre-v10 behavior has arguably been
      buggy since we added hostaddr.  If hostaddr is specified and host isn't,
      the pre-v10 code will search ~/.pgpass for "localhost", and ship that
      password off to a server that most likely isn't local at all.  That's
      unhelpful at best, and could be a security breach at worst.
      
      Therefore, rather than just revert to that old behavior, let's define
      the behavior as "search with host if provided, else with hostaddr if
      provided, else search for localhost".  (As before, a host name matching
      DEFAULT_PGSOCKET_DIR is replaced by localhost.)  This matches the
      behavior of the actual connection code, so that we don't pick up an
      inappropriate password; and it allows useful searches to happen when
      only hostaddr is given.
      
      While we're messing around here, ensure that empty elements within a
      host or hostaddr list select the same behavior as a totally-empty
      field would; for instance "host=a,,b" is equivalent to "host=a,/tmp,b"
      if DEFAULT_PGSOCKET_DIR is /tmp.  Things worked that way in some cases
      already, but not consistently so, which contributed to the confusion
      about what key ~/.pgpass would get searched with.
      
      Update documentation accordingly, and also clarify some nearby text.
      
      Back-patch to v10 where the host/hostaddr list functionality was
      introduced.
      
      Discussion: https://postgr.es/m/30805.1532749137@sss.pgh.pa.us
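
      A standalone sketch of the lookup-key rule defined above, not libpq's
      actual implementation: prefer host, then hostaddr, then fall back to
      "localhost", with a host equal to the default socket directory also
      mapping to "localhost".  DEFAULT_PGSOCKET_DIR is assumed to be /tmp
      here.

        #include <stdio.h>
        #include <string.h>

        #define DEFAULT_PGSOCKET_DIR "/tmp"     /* assumed build-time default */

        /* Which "hostname" should a ~/.pgpass search use? */
        static const char *
        pgpass_lookup_host(const char *host, const char *hostaddr)
        {
            const char *key = NULL;

            if (host && host[0] != '\0')
                key = host;
            else if (hostaddr && hostaddr[0] != '\0')
                key = hostaddr;

            if (key == NULL || strcmp(key, DEFAULT_PGSOCKET_DIR) == 0)
                key = "localhost";

            return key;
        }

        int
        main(void)
        {
            printf("%s\n", pgpass_lookup_host("db.example.com", "192.0.2.10")); /* db.example.com */
            printf("%s\n", pgpass_lookup_host(NULL, "192.0.2.10"));             /* 192.0.2.10 */
            printf("%s\n", pgpass_lookup_host(DEFAULT_PGSOCKET_DIR, NULL));     /* localhost */
            printf("%s\n", pgpass_lookup_host(NULL, NULL));                     /* localhost */
            return 0;
        }
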
    • Update parallel.sgml for Parallel Append · e80f2b33
      Robert Haas authored
      Patch by me, reviewed by Thomas Munro, in response to a complaint
      from Adrien Nayrat.
      
      Discussion: http://postgr.es/m/baa0d036-7349-f722-ef88-2d8bb3413045@anayrat.info
    • Allow multi-inserts during COPY into a partitioned table · 0d5f05cd
      Peter Eisentraut authored
      CopyFrom allows multi-inserts to be used for non-partitioned tables, but
      this was disabled for partitioned tables.  The reason for this appeared
      to be that the tuple may not belong to the same partition as the
      previous tuple did.  Not allowing multi-inserts here greatly slowed down
      imports into partitioned tables.  These could take twice as long as a
      copy to an equivalent non-partitioned table.  It seems wise to do
      something about this, so this change allows the multi-inserts by
      flushing the so-far inserted tuples to the partition when the next tuple
      does not belong to the same partition, or when the buffer fills.  This
      improves performance when the next tuple in the stream commonly belongs
      to the same partition as the previous tuple.
      
      In cases where the target partition changes on every tuple, using
      multi-inserts slightly slows the performance.  To get around this we
      track the average size of the batches that have been inserted and
      adaptively enable or disable multi-inserts based on the size of the
      batch.  Some testing was done and the regression only seems to exist
      when the average size of the insert batch is close to 1, so let's just
      enable multi-inserts when the average size is at least 1.3.  More
      performance testing might reveal a better number for this, but since
      the slowdown was only 1-2% it does not seem critical enough to spend too
      much time calculating it.  In any case it may depend on other factors
      rather than just the size of the batch.
      
      Allowing multi-inserts for partitions required a bit of work around the
      per-tuple memory contexts as we must flush the tuples when the next
      tuple does not belong to the same partition, in which case there is no
      good time to reset the per-tuple context, as we've already built the new
      tuple by this time.  In order to work around this we maintain two
      per-tuple contexts and just switch between them every time the partition
      changes and reset the old one.  This does mean that the first of each
      batch of tuples is not allocated in the same memory context as the
      others, but that does not matter since we only reset the context once
      the previous batch has been inserted.
      
      Author: David Rowley <david.rowley@2ndquadrant.com>
      Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
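
      A rough sketch of the adaptive idea described above.  The 1.3 threshold
      comes from the text, but the moving-average formula and the names below
      are invented for illustration and are not the code actually committed.

        #include <stdbool.h>
        #include <stdio.h>

        /* Threshold from the description above: enable multi-inserts once
         * recent batches average at least ~1.3 tuples. */
        #define MULTI_INSERT_MIN_AVG    1.3

        typedef struct BatchStats
        {
            double      avg_tuples_per_batch;   /* decayed running average */
        } BatchStats;

        /* Record the size of the batch just flushed (decay factor invented). */
        static void
        record_batch(BatchStats *stats, int tuples_in_batch)
        {
            stats->avg_tuples_per_batch =
                0.75 * stats->avg_tuples_per_batch + 0.25 * tuples_in_batch;
        }

        static bool
        use_multi_insert(const BatchStats *stats)
        {
            return stats->avg_tuples_per_batch >= MULTI_INSERT_MIN_AVG;
        }

        int
        main(void)
        {
            BatchStats  stats = { 1.0 };
            int         batches[] = {1, 1, 4, 8, 8};

            for (int i = 0; i < 5; i++)
            {
                record_batch(&stats, batches[i]);
                printf("avg %.2f -> multi-insert %s\n",
                       stats.avg_tuples_per_batch,
                       use_multi_insert(&stats) ? "on" : "off");
            }
            return 0;
        }
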
  10. 31 Jul, 2018 2 commits