1. 03 Aug, 2018 2 commits
    • Fix buffer usage stats for parallel nodes. · 85c9d347
      Amit Kapila authored
      Buffer usage stats are accounted only for the execution phase of a
      node.  For Gather and Gather Merge nodes, such stats are accumulated
      when the workers are shut down, which happens after the node's
      execution, so we failed to account for them for such nodes.  Fix this
      by treating the nodes as running while we shut them down.
      
      We can also miss accounting for a Limit node when a Gather or Gather
      Merge node is beneath it, because the Limit node can finish execution
      before those nodes are shut down.  So allow a Limit node to shut down
      such resources before it completes execution.
      
      In passing, fix the Gather node code to allow workers to shut down as
      soon as we find that all the tuples from the workers have been
      retrieved.  The original code used to do that, but it was accidentally
      removed by commit 01edb5c7fc.
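
      As a sketch of what this means in the executor (simplified from the
      shape of ExecShutdownNode; guards and most node types are elided, so
      this is not the verbatim committed code):

          if (node->instrument)
              InstrStartNode(node->instrument);    /* treat node as running */

          /* shut down children first, then this node */
          planstate_tree_walker(node, ExecShutdownNode, NULL);

          switch (nodeTag(node))
          {
              case T_GatherState:
                  ExecShutdownGather((GatherState *) node);
                  break;
              case T_GatherMergeState:
                  ExecShutdownGatherMerge((GatherMergeState *) node);
                  break;
              default:
                  break;
          }

          if (node->instrument)
              InstrStopNode(node->instrument, 0);  /* stop; 0 tuples returned */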
      
      Reported-by: Adrien Nayrat
      Author: Amit Kapila and Robert Haas
      Reviewed-by: Robert Haas and Andres Freund
      Backpatch-through: 9.6 where this code was introduced
      Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info
    • Match the buffer usage tracking for leader and worker backends. · ccc84a95
      Amit Kapila authored
      In the leader backend, we don't track buffer usage for the
      ExecutorStart phase, whereas in worker backends we track it for that
      phase as well.  This leads to different buffer usage stats for
      parallel and non-parallel queries.  Change the code so that worker
      backends also start tracking buffer usage only after ExecutorStart.
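
      A sketch of the resulting ordering in the worker (modeled on
      ParallelQueryMain in execParallel.c; the fpes/buffer_usage plumbing
      is simplified, not verbatim):

          /* Start the executor first; its startup I/O is intentionally
           * not counted, matching the leader's behavior. */
          ExecutorStart(queryDesc, fpes->eflags);

          /* Only now begin accumulating this worker's buffer usage. */
          InstrStartParallelQuery();

          ExecutorRun(queryDesc, ForwardScanDirection, 0L, true);
          ExecutorFinish(queryDesc);

          /* Hand the totals to the leader through shared memory. */
          InstrEndParallelQuery(&buffer_usage[ParallelWorkerNumber]);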
      
      Author: Amit Kapila and Robert Haas
      Reviewed-by: Robert Haas and Andres Freund
      Backpatch-through: 9.6 where this code was introduced
      Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info
2. 01 Aug, 2018 7 commits
    • Fix run-time partition pruning for appends with multiple source rels. · 1c2cb274
      Tom Lane authored
      The previous coding here supposed that if run-time partitioning applied to
      a particular Append/MergeAppend plan, then all child plans of that node
      must be members of a single partitioning hierarchy.  This is totally wrong,
      since an Append could be formed from a UNION ALL: we could have multiple
      hierarchies sharing the same Append, or child plans that aren't part of any
      hierarchy.
      
      To fix, restructure the related plan-time and execution-time data
      structures so that we can have a separate list or array for each
      partitioning hierarchy.  Also track subplans that are not part of any
      hierarchy, and make sure they don't get pruned.
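
      Schematically, the executor-side state ends up with one entry per
      hierarchy plus a set of subplans that must never be pruned (a sketch
      only; the real structs in execPartition.h carry more fields and
      slightly different names):

          typedef struct PartitionPruneState
          {
              int          num_hierarchies;  /* pruning info per hierarchy */
              Bitmapset   *other_subplans;   /* subplans outside any
                                              * hierarchy; never pruned */
              PartitionPruningData *partprunedata[FLEXIBLE_ARRAY_MEMBER];
          } PartitionPruneState;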
      
      Per reports from Phil Florent and others.  Back-patch to v11, since
      the bug originated there.
      
      David Rowley, with a lot of cosmetic adjustments by me; thanks also
      to Amit Langote for review.
      
      Discussion: https://postgr.es/m/HE1PR03MB17068BB27404C90B5B788BCABA7B0@HE1PR03MB1706.eurprd03.prod.outlook.com
    • Fix logical replication slot initialization · c40489e4
      Alvaro Herrera authored
      This was broken in commit 9c7d06d6, which inadvertently gave the
      wrong value to fast_forward in one StartupDecodingContext call.  Fix by
      flipping the value.  Add a test for the obvious error, namely trying to
      initialize a replication slot with a nonexistent output plugin.
      
      While at it, move the CreateDecodingContext call earlier, so that any
      errors are reported before sending the CopyBoth message.
      
      Author: Dave Cramer <davecramer@gmail.com>
      Reviewed-by: Andres Freund <andres@anarazel.de>
      Discussion: https://postgr.es/m/CADK3HHLVkeRe1v4P02-5hj55H3_yJg3AEtpXyEY5T3wuzO2jSg@mail.gmail.com
    • Fix unnoticed variable shadowing in previous commit · 91bc213d
      Alvaro Herrera authored
      Per buildfarm.
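
      For illustration, the generic hazard is an inner declaration reusing
      an outer name, which compilers flag with -Wshadow (hypothetical code,
      not the committed fix):

          int     n = strlen(name);       /* outer n */

          if (n > 0)
          {
              int     n = 0;              /* shadows the outer n; code below
                                           * silently uses this one instead */
              /* ... */
          }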
    • Fix per-tuple memory leak in partition tuple routing · 1c9bb02d
      Alvaro Herrera authored
      Some operations were being done in a longer-lived memory context,
      causing intra-query leaks.  It's not noticeable unless you're doing a
      large COPY, but if you are, it eats enough memory to cause a problem.
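
      The pattern the fix restores, roughly (MemoryContextSwitchTo and
      GetPerTupleMemoryContext are the real executor APIs; the routing call
      here is schematic):

          MemoryContext oldcxt;

          /* Do per-row routing work in the per-tuple context, which the
           * executor resets for every row, not in a query-lifetime one. */
          oldcxt = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
          slot = ConvertPartitionTupleSlot(/* ... */);
          MemoryContextSwitchTo(oldcxt);
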
      Co-authored-by: Kohei KaiGai <kaigai@heterodb.com>
      Co-authored-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
      Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
      Discussion: https://postgr.es/m/CAOP8fzYtVFWZADq4c=KoTAqgDrHWfng+AnEPEZccyxqxPVbbWQ@mail.gmail.com
    • Fix libpq's code for searching .pgpass; rationalize empty-list-item cases. · e3f99e03
      Tom Lane authored
      Before v10, we always searched ~/.pgpass using the host parameter,
      and nothing else, to match to the "hostname" field of ~/.pgpass.
      (However, null host or host matching DEFAULT_PGSOCKET_DIR was replaced by
      "localhost".)  In v10, this got broken by commit 274bb2b3, repaired by
      commit bdac9836, and broken again by commit 7b02ba62; in the code
      actually shipped, we'd search with hostaddr if both that and host were
      specified --- though oddly, *not* if only hostaddr were specified.
      Since this is directly contrary to the documentation, and not
      backwards-compatible, it's clearly a bug.
      
      However, the change wasn't totally without justification, even though it
      wasn't done quite right, because the pre-v10 behavior has arguably been
      buggy since we added hostaddr.  If hostaddr is specified and host isn't,
      the pre-v10 code will search ~/.pgpass for "localhost", and ship that
      password off to a server that most likely isn't local at all.  That's
      unhelpful at best, and could be a security breach at worst.
      
      Therefore, rather than just revert to that old behavior, let's define
      the behavior as "search with host if provided, else with hostaddr if
      provided, else search for localhost".  (As before, a host name matching
      DEFAULT_PGSOCKET_DIR is replaced by localhost.)  This matches the
      behavior of the actual connection code, so that we don't pick up an
      inappropriate password; and it allows useful searches to happen when
      only hostaddr is given.
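
      As a standalone model of that selection rule (not libpq's actual
      code; is_default_socket_dir is a hypothetical helper):

          static const char *
          pgpass_host_key(const char *host, const char *hostaddr)
          {
              if (host && host[0] != '\0')
                  return is_default_socket_dir(host) ? "localhost" : host;
              if (hostaddr && hostaddr[0] != '\0')
                  return hostaddr;
              return "localhost";
          }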
      
      While we're messing around here, ensure that empty elements within a
      host or hostaddr list select the same behavior as a totally-empty
      field would; for instance "host=a,,b" is equivalent to "host=a,/tmp,b"
      if DEFAULT_PGSOCKET_DIR is /tmp.  Things worked that way in some cases
      already, but not consistently so, which contributed to the confusion
      about what key ~/.pgpass would get searched with.
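
      For example, assuming DEFAULT_PGSOCKET_DIR is /tmp, these two libpq
      calls now behave identically for the .pgpass lookup (illustrative
      fragment):

          #include <libpq-fe.h>

          PGconn *a = PQconnectdb("dbname=test host=a,,b");
          PGconn *b = PQconnectdb("dbname=test host=a,/tmp,b");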
      
      Update documentation accordingly, and also clarify some nearby text.
      
      Back-patch to v10 where the host/hostaddr list functionality was
      introduced.
      
      Discussion: https://postgr.es/m/30805.1532749137@sss.pgh.pa.us
    • Update parallel.sgml for Parallel Append · e80f2b33
      Robert Haas authored
      Patch by me, reviewed by Thomas Munro, in response to a complaint
      from Adrien Nayrat.
      
      Discussion: http://postgr.es/m/baa0d036-7349-f722-ef88-2d8bb3413045@anayrat.info
    • Allow multi-inserts during COPY into a partitioned table · 0d5f05cd
      Peter Eisentraut authored
      CopyFrom allows multi-inserts to be used for non-partitioned tables, but
      this was disabled for partitioned tables.  The reason for this appeared
      to be that the next tuple may not belong to the same partition as the
      previous one.  Not allowing multi-inserts here greatly slowed down
      imports into partitioned tables.  These could take twice as long as a
      copy to an equivalent non-partitioned table.  It seems wise to do
      something about this, so this change allows the multi-inserts by
      flushing the so-far inserted tuples to the partition when the next tuple
      does not belong to the same partition, or when the buffer fills.  This
      improves performance when the next tuple in the stream commonly belongs
      to the same partition as the previous tuple.
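
      Schematically, the per-row flush decision looks like this (names are
      approximate; CopyFromInsertBatch and MAX_BUFFERED_TUPLES exist in
      copy.c, the rest is a sketch):

          /* flush the buffered rows before switching partitions, or when
           * the buffer is full */
          if (nbuffered > 0 &&
              (part != prev_part || nbuffered >= MAX_BUFFERED_TUPLES))
              CopyFromInsertBatch(/* ... */);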
      
      In cases where the target partition changes on every tuple, using
      multi-inserts slightly slows the performance.  To get around this we
      track the average size of the batches that have been inserted and
      adaptively enable or disable multi-inserts based on the size of the
      batch.  Some testing was done and the regression only seems to exist
      when the average size of the insert batch is close to 1, so let's just
      enable multi-inserts when the average size is at least 1.3.  More
      performance testing might reveal a better number for this, but since
      the slowdown was only 1-2% it does not seem critical enough to spend too
      much time calculating it.  In any case it may depend on other factors
      rather than just the size of the batch.
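
      A toy model of the adaptive switch (only the 1.3 cutoff comes from
      the text above; the variable names and smoothing weights are
      illustrative):

          #define AVG_BATCH_CUTOFF   1.3

          /* after flushing a batch of ntuples rows, decay the running
           * average and decide whether multi-inserts stay enabled */
          avg_batch = avg_batch * 0.75 + ntuples * 0.25;
          use_multi_insert = (avg_batch >= AVG_BATCH_CUTOFF);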
      
      Allowing multi-inserts for partitions required a bit of work around the
      per-tuple memory contexts, as we must flush the tuples when the next
      tuple does not belong to the same partition.  At that point there is
      no good time to reset the per-tuple context, because we've already
      built the new tuple in it by then.  To work around this we maintain two
      per-tuple contexts and just switch between them every time the partition
      changes and reset the old one.  This does mean that the first of each
      batch of tuples is not allocated in the same memory context as the
      others, but that does not matter since we only reset the context once
      the previous batch has been inserted.
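
      Sketch of that double-buffering (names hypothetical; MemoryContextReset
      and MemoryContextSwitchTo are the real APIs):

          /* partition changed: flush the rows gathered so far, including
           * the just-built tuple, which lives in tupcxt[cur] */
          CopyFromInsertBatch(/* ... */);

          /* flip to the other per-tuple context and reset it; its batch
           * was flushed the last time we flipped, so this is safe */
          cur ^= 1;
          MemoryContextReset(tupcxt[cur]);
          MemoryContextSwitchTo(tupcxt[cur]);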
      
      Author: David Rowley <david.rowley@2ndquadrant.com>
      Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>