1. 12 Jul, 2017 4 commits
  2. 11 Jul, 2017 1 commit
    • Fix multiple assignments to a column of a domain type. · b1cb32fb
      Tom Lane authored
      We allow INSERT and UPDATE commands to assign to the same column more than
      once, as long as the assignments are to subfields or elements rather than
      the whole column.  However, this failed when the target column was a domain
      over array rather than plain array.  Fix by teaching process_matched_tle()
      to look through CoerceToDomain nodes, and add relevant test cases.
      
      Also add a group of test cases exercising domains over array of composite.
      It's doubtless accidental that CREATE DOMAIN allows this case while not
      allowing straight domain over composite; but it does, so we'd better make
      sure we don't break it.  (I could not find any documentation mentioning
      either side of that, so no doc changes.)
      
      It's been like this for a long time, so back-patch to all supported
      branches.
      
      Discussion: https://postgr.es/m/4206.1499798337@sss.pgh.pa.us
  3. 10 Jul, 2017 10 commits
  4. 09 Jul, 2017 3 commits
  5. 08 Jul, 2017 1 commit
  6. 07 Jul, 2017 2 commits
  7. 06 Jul, 2017 4 commits
    • Fix potential data corruption during freeze · 31b8db8e
      Teodor Sigaev authored
      Fix an oversight in the 3b97e682 bug fix: bitwise AND was used
      instead of OR, which cleared all bits in the heap tuple's
      t_infomask field.
      
      Back-patch to 9.3.
    • Clarify the contract of partition_rbound_cmp(). · f1dae097
      Dean Rasheed authored
      partition_rbound_cmp() is intended to compare range partition bounds
      in a way such that if all the bound values are equal but one is an
      upper bound and one is a lower bound, the upper bound is treated as
      smaller than the lower bound. This particular ordering is required by
      RelationBuildPartitionDesc() when building the PartitionBoundInfoData,
      so that it can consistently keep only the upper bounds when upper and
      lower bounds coincide.
      
      Update the function comment to make that clearer.
      
      Also, fix a (currently unreachable) corner-case bug -- if the bound
      values coincide and they contain unbounded values, fall through to the
      lower-vs-upper comparison code, rather than immediately returning
      0. Currently it is not possible to define coincident upper and lower
      bounds containing unbounded columns, but that may change in the
      future, so code defensively.
      
      Discussion: https://postgr.es/m/CAAJ_b947mowpLdxL3jo3YLKngRjrq9+Ej4ymduQTfYR+8=YAYQ@mail.gmail.com
    • Simplify the logic checking new range partition bounds. · c03911d9
      Dean Rasheed authored
      The previous logic, whilst not actually wrong, was overly complex and
      involved doing two binary searches, where only one was really
      necessary. This simplifies that logic and improves the comments.
      
      One visible change is that if the new partition overlaps multiple
      existing partitions, the error message now always reports the overlap
      with the first existing partition (the one with the lowest
      bounds). The old code would sometimes report the clash with the first
      partition and sometimes with the last one.
      
      Original patch idea from Amit Langote, substantially rewritten by me.
      
      Discussion: https://postgr.es/m/CAAJ_b947mowpLdxL3jo3YLKngRjrq9+Ej4ymduQTfYR+8=YAYQ@mail.gmail.com
    • Fix another race-condition-ish issue in recovery/t/001_stream_rep.pl. · ec86af91
      Tom Lane authored
      Buildfarm members hornet and sungazer have shown multiple instances of
      "Failed test 'xmin of non-cascaded slot with hs feedback has changed'".
      The reason seems to be that the test is checking the current xmin of the
      master server's replication slot against a past xmin of the first slave
      server's replication slot.  Even though the latter slot is downstream of
      the former, it's possible for its reported xmin to be ahead of the former's
      reported xmin, because those numbers are updated whenever the respective
      downstream walreceiver feels like it (see logic in WalReceiverMain).
      Instrumenting this test shows that indeed the slave slot's xmin does often
      advance before the master's does, especially if an autovacuum transaction
      manages to occur during the relevant window.  If we happen to capture such
      an advanced xmin as $xmin, then the subsequent wait_slot_xmins call can
      fall through before the master's xmin has advanced at all, and then if it
      advances before the get_slot_xmins call, we can get the observed failure.
      Yeah, that's a bit of a long chain of deduction, but it's hard to explain
      any other way how the test can get past an "xmin <> '$xmin'" check only
      to have the next query find that xmin does equal $xmin.
      
      Fix by keeping separate images of the master and slave slots' xmins
      and testing their has-xmin-advanced conditions independently.
  8. 05 Jul, 2017 5 commits
  9. 04 Jul, 2017 2 commits
  10. 03 Jul, 2017 4 commits
  11. 02 Jul, 2017 4 commits
    • Fix bug in PostgresNode::query_hash's split() call. · efdb4f29
      Tom Lane authored
      By default, Perl's split() function drops trailing empty fields,
      which is not what we want here.  Oversight in commit fb093e4c.
      We'd managed to miss it thus far thanks to the very limited usage
      of this function.
      
      Discussion: https://postgr.es/m/14837.1499029831@sss.pgh.pa.us
    • Try to improve readability of recovery/t/009_twophase.pl test. · 4e15387d
      Tom Lane authored
      The original coding here was very confusing, because it named the
      two servers it set up "master" and "slave" even though it swapped
      their replication roles multiple times.  At any given point in the
      script it was very unobvious whether "$node_master" actually referred
      to the server named "master" or the other one.  Instead, pick arbitrary
      names for the two servers --- I used "london" and "paris" --- and
      distinguish those permanent names from the nonce references $cur_master
      and $cur_slave.  Add logging to help distinguish which is which at
      any given point.  Also, use distinct data and transaction names to
      make all the prepared transactions easily distinguishable in the
      postmaster logs.  (There was one place where we intentionally tested
      that the server could cope with re-use of a transaction name, but
      it seems like one place is sufficient for that purpose.)
      
      Also, add checks at the end to make sure that all the transactions
      that were supposed to be committed did survive.
      
      Discussion: https://postgr.es/m/28238.1499010855@sss.pgh.pa.us
    • Improve TAP test function PostgresNode::poll_query_until(). · de3de0af
      Tom Lane authored
      Add an optional "expected" argument to override the default assumption
      that we're waiting for the query to return "t".  This allows replacing
      a handwritten polling loop in recovery/t/007_sync_rep.pl with use of
      poll_query_until(); AFAICS that's the only remaining ad-hoc polling
      loop in our TAP tests.
      
      Change poll_query_until() to probe ten times per second not once per
      second.  Like some similar changes I've been making recently, the
      one-second interval seems to be rooted in ancient traditions rather
      than the actual likely wait duration on modern machines.  I'd consider
      reducing it further if there were a convenient way to spawn just one
      psql for the whole loop rather than one per probe attempt.
      
      Discussion: https://postgr.es/m/12486.1498938782@sss.pgh.pa.us
    • doc: Document that logical replication supports synchronous replication · 2dca0343
      Peter Eisentraut authored
      Update the documentation to note that logical replication, as well
      as other third-party replication clients, can participate in
      synchronous replication.