1. 04 Aug, 2018 1 commit
  2. 03 Aug, 2018 8 commits
    • Add table relcache invalidation to index builds. · b3f919da
      Peter Geoghegan authored
      It's necessary to make sure that owning tables have a relcache
      invalidation prior to advancing the command counter to make
      newly-entered catalog tuples for the index visible.  inval.c must be
      able to maintain the consistency of the local caches in the event of
      transaction abort.  There is usually only a problem when CREATE INDEX
      transactions abort, since there is a generic invalidation once we reach
      index_update_stats().
      
      This bug is of long standing.  Problems were made much more likely by
      the addition of parallel CREATE INDEX (commit 9da0cc35), but it is
      strongly suspected that similar problems can be triggered without
      involving plan_create_index_workers().  (plan_create_index_workers()
      triggers a relcache build or rebuild, which previously only happened in
      rare edge cases.)
      
      Author: Peter Geoghegan
      Reported-By: Luca Ferrari
      Diagnosed-By: Andres Freund
      Reviewed-By: Andres Freund
      Discussion: https://postgr.es/m/CAKoxK+5fVodiCtMsXKV_1YAKXbzwSfp7DgDqUmcUAzeAhf=HEQ@mail.gmail.com
      Backpatch: 9.3-
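      
      A minimal C sketch of the ordering described above, using the existing
      inval.c and xact.c entry points; illustrative only, not the shipped
      diff:
      
          #include "postgres.h"
          #include "access/xact.h"
          #include "utils/inval.h"
          #include "utils/rel.h"
          
          /* Invalidate the owning table's relcache entry *before* advancing
           * the command counter that makes the new index's catalog tuples
           * visible; otherwise an aborted CREATE INDEX can leave the local
           * caches inconsistent with the catalogs. */
          static void
          make_index_catalog_entries_visible(Relation heapRelation)
          {
              CacheInvalidateRelcache(heapRelation);  /* owning table */
              CommandCounterIncrement();
          }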
    • First-draft release notes for 10.5. · c1455de2
      Tom Lane authored
      As usual, the release notes for other branches will be made by cutting
      these down, but put them up for community review first.
    • Add 'n' to list of possible values to pg_default_acl.defaclobjtype · f6f8d55c
      Alvaro Herrera authored
      This was missed in commit ab89e465; backpatch to v10.
      
      Author: Fabien Coelho <coelho@cri.ensmp.fr>
      Discussion: https://postgr.es/m/alpine.DEB.2.21.1807302243001.13230@lancre
    • Fix pg_replication_slot example output · 416db241
      Alvaro Herrera authored
      The example output of pg_replication_slot is wrong.  Correct it, and
      make the output stable by explicitly listing the columns to output.
      
      Author: Kyotaro Horiguchi <horiguchi.kyotaro@lab.ntt.co.jp>
      Reviewed-by: Michael Paquier <michael@paquier.xyz>
      Discussion: https://postgr.es/m/20180731.190909.42582169.horiguchi.kyotaro@lab.ntt.co.jp
    • Remove no-longer-appropriate special case in psql's \conninfo code. · c7a8f786
      Tom Lane authored
      \conninfo prints the results of PQhost() and some other libpq functions.
      It used to override the PQhost() result with the hostaddr parameter if
      that had been given, but that's unhelpful when multiple hosts are listed
      in the connection string.  Furthermore, it seems unnecessary in the wake
      of commit 1944cdc9, since PQhost does any useful substitution itself.
      So let's just remove the extra code and print PQhost()'s result without
      any editorialization.
      
      Back-patch to v10, as 1944cdc9 (just) was.
      
      Discussion: https://postgr.es/m/23287.1533227021@sss.pgh.pa.us
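      
      A minimal libpq client sketch of the resulting behavior; the connection
      string and host names are examples only:
      
          #include <stdio.h>
          #include <libpq-fe.h>
          
          int
          main(void)
          {
              /* Two candidate hosts; libpq connects to whichever works. */
              PGconn *conn =
                  PQconnectdb("host=db1.example.com,db2.example.com dbname=postgres");
          
              if (PQstatus(conn) == CONNECTION_OK)
                  printf("connected to: %s\n", PQhost(conn)); /* no override */
              PQfinish(conn);
              return 0;
          }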
    • Change libpq's internal uses of PQhost() to inspect host field directly. · 24986c95
      Tom Lane authored
      Commit 1944cdc9 changed PQhost() to return the hostaddr value when that
      is specified and host isn't.  This is a good idea in general, but
      fe-auth.c and related files contain PQhost() calls for which it isn't.
      Specifically, when we compare SSL certificates or other server identity
      information to the host field, we do not want to use hostaddr instead;
      that's not what's documented, that's not what happened pre-v10, and
      it doesn't seem like a good idea.
      
      Instead, we can just look at connhost[].host directly.  This does what
      we want in v10 and up; in particular, if neither host nor hostaddr
      were given, the host field will be replaced with the default host name.
      That seems useful, and it's likely the reason that these places were
      coded to call PQhost() originally (since pre-v10, the stored field was
      not replaced with the default).
      
      Back-patch to v10, as 1944cdc9 (just) was.
      
      Discussion: https://postgr.es/m/23287.1533227021@sss.pgh.pa.us
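      
      A sketch of the internal convention, assuming libpq's private pg_conn
      layout from libpq-int.h (a connhost array indexed by whichhost);
      illustrative only:
      
          #include "postgres_fe.h"
          #include "libpq-int.h"   /* private pg_conn definition */
          
          /* For SSL-certificate and auth checks, use the host *name* the
           * user supplied (or the substituted default), never the hostaddr
           * value that PQhost() may now return. */
          static const char *
          server_identity_host(PGconn *conn)
          {
              return conn->connhost[conn->whichhost].host;
          }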
    • Fix buffer usage stats for parallel nodes. · 85c9d347
      Amit Kapila authored
      The buffer usage stats are accounted only for the execution phase of a
      node.  For Gather and Gather Merge nodes, such stats are accumulated
      when the workers are shut down, which happens after the node's
      execution, so we failed to account for them in those nodes.  Fix this
      by treating the nodes as running while we shut them down.
      
      We can also miss the accounting for a Limit node when a Gather or
      Gather Merge node is beneath it, because the Limit node can finish
      execution before shutting down those nodes.  So we allow a Limit node
      to shut down its resources before it completes execution.
      
      In passing, fix the Gather node code to allow workers to shut down as
      soon as we find that all the tuples from the workers have been
      retrieved.  The original code used to do that, but it was accidentally
      removed by commit 01edb5c7fc.
      
      Reported-by: Adrien Nayrat
      Author: Amit Kapila and Robert Haas
      Reviewed-by: Robert Haas and Andres Freund
      Backpatch-through: 9.6 where this code was introduced
      Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info
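      
      A sketch of the shutdown-accounting idea using the executor's
      instrumentation hooks; treat it as illustrative, not the exact shipped
      change:
      
          #include "postgres.h"
          #include "executor/executor.h"
          #include "executor/instrument.h"
          
          /* Treat the node as running while it shuts down, so buffer usage
           * reported by exiting workers is credited to the node. */
          static void
          shutdown_node_with_accounting(PlanState *node)
          {
              if (node->instrument)
                  InstrStartNode(node->instrument);
          
              ExecShutdownNode(node);     /* workers report usage here */
          
              if (node->instrument)
                  InstrStopNode(node->instrument, 0);  /* no tuples produced */
          }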
    • Match the buffer usage tracking for leader and worker backends. · ccc84a95
      Amit Kapila authored
      In the leader backend, we don't track buffer usage for the
      ExecutorStart phase, whereas in worker backends we track it for the
      ExecutorStart phase as well.  This leads to different buffer usage
      stats for parallel and non-parallel queries.  Change the code so that
      worker backends also start tracking buffer usage only after
      ExecutorStart.
      
      Author: Amit Kapila and Robert Haas
      Reviewed-by: Robert Haas and Andres Freund
      Backpatch-through: 9.6 where this code was introduced
      Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info
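      
      A sketch of the worker-side ordering after the fix; the buffer_usage
      array stands in for the real shared-memory reporting area and is an
      assumption here:
      
          #include "postgres.h"
          #include "access/parallel.h"
          #include "executor/executor.h"
          #include "executor/instrument.h"
          
          static void
          run_worker_query(QueryDesc *queryDesc, BufferUsage *buffer_usage)
          {
              ExecutorStart(queryDesc, 0);
          
              /* Start tracking only now, so the worker skips the
               * ExecutorStart phase just as the leader does. */
              InstrStartParallelQuery();
          
              ExecutorRun(queryDesc, ForwardScanDirection, 0L, true);
              ExecutorFinish(queryDesc);
          
              /* Report this worker's buffer usage to the leader. */
              InstrEndParallelQuery(&buffer_usage[ParallelWorkerNumber]);
          }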
  3. 02 Aug, 2018 1 commit
  4. 01 Aug, 2018 7 commits
    • Fix run-time partition pruning for appends with multiple source rels. · 1c2cb274
      Tom Lane authored
      The previous coding here supposed that if run-time partitioning applied to
      a particular Append/MergeAppend plan, then all child plans of that node
      must be members of a single partitioning hierarchy.  This is totally wrong,
      since an Append could be formed from a UNION ALL: we could have multiple
      hierarchies sharing the same Append, or child plans that aren't part of any
      hierarchy.
      
      To fix, restructure the related plan-time and execution-time data
      structures so that we can have a separate list or array for each
      partitioning hierarchy.  Also track subplans that are not part of any
      hierarchy, and make sure they don't get pruned.
      
      Per reports from Phil Florent and others.  Back-patch to v11, since
      the bug originated there.
      
      David Rowley, with a lot of cosmetic adjustments by me; thanks also
      to Amit Langote for review.
      
      Discussion: https://postgr.es/m/HE1PR03MB17068BB27404C90B5B788BCABA7B0@HE1PR03MB1706.eurprd03.prod.outlook.com
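      
      A simplified sketch of the restructured data; the names are
      hypothetical, but the shape follows the description above
      (per-hierarchy pruning data plus a set of never-pruned subplans):
      
          #include "postgres.h"
          #include "nodes/bitmapset.h"
          #include "nodes/pg_list.h"
          
          /* One Append may span several partitioning hierarchies, plus
           * subplans (e.g. from UNION ALL arms) in no hierarchy at all. */
          typedef struct AppendPruneState
          {
              List      *hierarchies;     /* pruning data, one per hierarchy */
              Bitmapset *other_subplans;  /* not in any hierarchy: never prune */
          } AppendPruneState;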
    • Fix logical replication slot initialization · c40489e4
      Alvaro Herrera authored
      This was broken in commit 9c7d06d6, which inadvertently gave the
      wrong value to fast_forward in one StartupDecodingContext call.  Fix by
      flipping the value.  Add a test for the obvious error, namely trying to
      initialize a replication slot with a nonexistent output plugin.
      
      While at it, move the CreateDecodingContext call earlier, so that any
      errors are reported before sending the CopyBoth message.
      
      Author: Dave Cramer <davecramer@gmail.com>
      Reviewed-by: Andres Freund <andres@anarazel.de>
      Discussion: https://postgr.es/m/CADK3HHLVkeRe1v4P02-5hj55H3_yJg3AEtpXyEY5T3wuzO2jSg@mail.gmail.com
    • Fix unnoticed variable shadowing in previous commit · 91bc213d
      Alvaro Herrera authored
      Per buildfarm.
    • Fix per-tuple memory leak in partition tuple routing · 1c9bb02d
      Alvaro Herrera authored
      Some operations were being done in a longer-lived memory context,
      causing intra-query leaks.  It's not noticeable unless you're doing a
      large COPY, but if you are, it eats enough memory to cause a problem.
      
      Co-authored-by: Kohei KaiGai <kaigai@heterodb.com>
      Co-authored-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
      Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
      Discussion: https://postgr.es/m/CAOP8fzYtVFWZADq4c=KoTAqgDrHWfng+AnEPEZccyxqxPVbbWQ@mail.gmail.com
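      
      A sketch of the fix's pattern: do the per-tuple routing work in the
      executor's per-tuple context, which is reset between tuples;
      illustrative only:
      
          #include "postgres.h"
          #include "executor/executor.h"
          
          static void
          route_one_tuple(EState *estate)
          {
              /* Short-lived allocations go in the per-tuple context ... */
              MemoryContext oldcxt =
                  MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
          
              /* ... find the target partition, convert the tuple ... */
          
              /* ... so they are freed when the context is next reset. */
              MemoryContextSwitchTo(oldcxt);
          }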
    • Fix libpq's code for searching .pgpass; rationalize empty-list-item cases. · e3f99e03
      Tom Lane authored
      Before v10, we always searched ~/.pgpass using the host parameter,
      and nothing else, to match to the "hostname" field of ~/.pgpass.
      (However, null host or host matching DEFAULT_PGSOCKET_DIR was replaced by
      "localhost".)  In v10, this got broken by commit 274bb2b3, repaired by
      commit bdac9836, and broken again by commit 7b02ba62; in the code
      actually shipped, we'd search with hostaddr if both that and host were
      specified --- though oddly, *not* if only hostaddr were specified.
      Since this is directly contrary to the documentation, and not
      backwards-compatible, it's clearly a bug.
      
      However, the change wasn't totally without justification, even though it
      wasn't done quite right, because the pre-v10 behavior has arguably been
      buggy since we added hostaddr.  If hostaddr is specified and host isn't,
      the pre-v10 code will search ~/.pgpass for "localhost", and ship that
      password off to a server that most likely isn't local at all.  That's
      unhelpful at best, and could be a security breach at worst.
      
      Therefore, rather than just revert to that old behavior, let's define
      the behavior as "search with host if provided, else with hostaddr if
      provided, else search for localhost".  (As before, a host name matching
      DEFAULT_PGSOCKET_DIR is replaced by localhost.)  This matches the
      behavior of the actual connection code, so that we don't pick up an
      inappropriate password; and it allows useful searches to happen when
      only hostaddr is given.
      
      While we're messing around here, ensure that empty elements within a
      host or hostaddr list select the same behavior as a totally-empty
      field would; for instance "host=a,,b" is equivalent to "host=a,/tmp,b"
      if DEFAULT_PGSOCKET_DIR is /tmp.  Things worked that way in some cases
      already, but not consistently so, which contributed to the confusion
      about what key ~/.pgpass would get searched with.
      
      Update documentation accordingly, and also clarify some nearby text.
      
      Back-patch to v10 where the host/hostaddr list functionality was
      introduced.
      
      Discussion: https://postgr.es/m/30805.1532749137@sss.pgh.pa.us
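      
      A sketch of the lookup-key rule defined above (not libpq's actual
      code); DEFAULT_PGSOCKET_DIR's value is build-dependent and assumed
      here:
      
          #include <string.h>
          
          #ifndef DEFAULT_PGSOCKET_DIR
          #define DEFAULT_PGSOCKET_DIR "/tmp"   /* assumed build-time value */
          #endif
          
          /* Search with host if provided, else hostaddr, else "localhost";
           * a host equal to the socket directory also means "localhost". */
          static const char *
          pgpass_search_key(const char *host, const char *hostaddr)
          {
              if (host && host[0] != '\0')
                  return strcmp(host, DEFAULT_PGSOCKET_DIR) == 0
                      ? "localhost" : host;
              if (hostaddr && hostaddr[0] != '\0')
                  return hostaddr;
              return "localhost";
          }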
    • Update parallel.sgml for Parallel Append · e80f2b33
      Robert Haas authored
      Patch by me, reviewed by Thomas Munro, in response to a complaint
      from Adrien Nayrat.
      
      Discussion: http://postgr.es/m/baa0d036-7349-f722-ef88-2d8bb3413045@anayrat.info
    • Allow multi-inserts during COPY into a partitioned table · 0d5f05cd
      Peter Eisentraut authored
      CopyFrom allows multi-inserts to be used for non-partitioned tables, but
      this was disabled for partitioned tables.  The reason for this appeared
      to be that the tuple may not belong to the same partition as the
      previous tuple.  Not allowing multi-inserts here greatly slowed down
      imports into partitioned tables.  These could take twice as long as a
      copy to an equivalent non-partitioned table.  It seems wise to do
      something about this, so this change allows the multi-inserts by
      flushing the so-far inserted tuples to the partition when the next tuple
      does not belong to the same partition, or when the buffer fills.  This
      improves performance when the next tuple in the stream commonly belongs
      to the same partition as the previous tuple.
      
      In cases where the target partition changes on every tuple, using
      multi-inserts slightly slows the performance.  To get around this we
      track the average size of the batches that have been inserted and
      adaptively enable or disable multi-inserts based on the size of the
      batch.  Some testing was done and the regression only seems to exist
      when the average size of the insert batch is close to 1, so let's just
      enable multi-inserts when the average size is at least 1.3.  More
      performance testing might reveal a better number for this, but since
      the slowdown was only 1-2% it does not seem critical enough to spend too
      much time calculating it.  In any case it may depend on other factors
      rather than just the size of the batch.
      
      Allowing multi-inserts for partitions required a bit of work around the
      per-tuple memory contexts, as we must flush the tuples when the next
      tuple does not belong to the same partition, in which case there is no
      good time to reset the per-tuple context, since we've already built the
      new tuple by then.  To work around this we maintain two
      per-tuple contexts and just switch between them every time the partition
      changes and reset the old one.  This does mean that the first of each
      batch of tuples is not allocated in the same memory context as the
      others, but that does not matter since we only reset the context once
      the previous batch has been inserted.
      
      Author: David Rowley <david.rowley@2ndquadrant.com>
      Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
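      
      A sketch of the alternating-context trick described above; the names
      are hypothetical and context creation is omitted:
      
          #include "postgres.h"
          #include "utils/memutils.h"
          
          static MemoryContext batchcxt[2];   /* two per-tuple contexts */
          static int  curcxt = 0;
          
          /* On a partition change the new tuple has already been built, so
           * there was no safe point to reset the current context.  Instead,
           * flush the old batch, then swap to the other context and reset
           * it; the rest of the new batch is built there. */
          static MemoryContext
          swap_batch_context(void)
          {
              curcxt ^= 1;
              MemoryContextReset(batchcxt[curcxt]);
              return batchcxt[curcxt];    /* allocate the next tuples here */
          }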
  5. 31 Jul, 2018 6 commits
  6. 30 Jul, 2018 9 commits
  7. 29 Jul, 2018 8 commits