26 Feb, 2019 (6 commits)
    • Change lock acquisition order in expand_inherited_rtentry. · f4b6341d
      Robert Haas authored
      Previously, this function acquired locks using find_all_inheritors(),
      which locks the children of each table that it
      processes in ascending OID order, and which processes the inheritance
      hierarchy as a whole in a breadth-first fashion.  Now, it processes
      the inheritance hierarchy in a depth-first fashion, and at each level
      it proceeds in the order in which tables appear in the PartitionDesc.
      If table inheritance rather than table partitioning is used, the old
      order is preserved.
      
      This change moves the locking of any given partition much closer to
      the code that actually expands that partition.  This seems essential
      if we ever want to allow concurrent DDL to add or remove partitions,
      because if the set of partitions can change, we must use the same data
      to decide which partitions to lock as we do to decide which partitions
      to expand; otherwise, we might expand a partition that we haven't
      locked.  It should hopefully also facilitate efforts to postpone
      inheritance expansion or locking for performance reasons, because
      there's really no way to postpone locking some partitions if
      we're blindly locking them all using find_all_inheritors().
      
      The only downside of this change which is known to me is that it
      further deviates from the principle that we should always lock the
      inheritance hierarchy in find_all_inheritors() order to avoid deadlock
      risk.  However, we've already crossed that bridge in commit
      9eefba18 and there are further patches
      pending that make similar changes, so this isn't really giving up
      anything that we haven't surrendered already -- and it seems entirely
      worth it, given the performance benefits some of those changes seem
      likely to bring.
      
      Patch by me; thanks to David Rowley for discussion of these issues.
      
      Discussion: http://postgr.es/m/CAKJS1f_eEYVEq5tM8sm1k-HOwG0AyCPwX54XG9x4w0zy_N4Q_Q@mail.gmail.com
      Discussion: http://postgr.es/m/CA+TgmoZUwPf_uanjF==gTGBMJrn8uCq52XYvAEorNkLrUdoawg@mail.gmail.com
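      As an aid to reading the above, here is a minimal sketch of the
      depth-first, PartitionDesc-ordered locking the commit message describes.
      It is illustrative only: the function name
      expand_partitioned_rtentry_sketch() is hypothetical, the RTE and
      AppendRelInfo construction is elided, and while the helper calls shown
      (table_open, RelationGetPartitionDesc) are real PostgreSQL internals,
      the actual expand_inherited_rtentry() code differs in detail.

        /* Illustrative sketch only -- not the actual PostgreSQL code. */
        static void
        expand_partitioned_rtentry_sketch(Relation parentrel, LOCKMODE lockmode)
        {
            /* Children appear in PartitionDesc (bound) order, not OID order. */
            PartitionDesc partdesc = RelationGetPartitionDesc(parentrel);

            for (int i = 0; i < partdesc->nparts; i++)
            {
                Relation    childrel;

                /* Lock each partition just before expanding it. */
                childrel = table_open(partdesc->oids[i], lockmode);

                /* Build the child RTE and AppendRelInfo here (omitted). */

                /* Recurse depth-first into sub-partitioned children. */
                if (childrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
                    expand_partitioned_rtentry_sketch(childrel, lockmode);

                /* Keep the lock until end of transaction; drop the relcache ref. */
                table_close(childrel, NoLock);
            }
        }

      By contrast, the old code handed the whole hierarchy to
      find_all_inheritors() up front, which locked every child in ascending
      OID order, breadth-first, before any expansion happened.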
    • Free memory in ecpg bytea regression test. · 42ccbe43
      Michael Meskes authored
      While not really a problem, freeing the memory makes it easier to run
      tools like valgrind against the test.
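      The fix itself is ordinary test housekeeping: free what the test
      allocates before it exits so that valgrind's leak report stays clean.
      A generic, hypothetical illustration (the buffer and its size are not
      taken from the actual ecpg test):

        #include <stdlib.h>
        #include <string.h>

        int
        main(void)
        {
            /* Hypothetical stand-in for a buffer the bytea test allocates. */
            unsigned char *buf = malloc(512);

            if (buf == NULL)
                return 1;
            memset(buf, 0xAB, 512);

            /* Exercise the bytea round-trip here (omitted). */

            /* Freeing the buffer before exit keeps valgrind's output clean. */
            free(buf);
            return 0;
        }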
    • Michael Meskes
    • Simplify some code in pg_rewind when syncing target directory · 6e52209e
      Michael Paquier authored
      Commit 9a4059d4 simplified the flush of the target data folder when
      finishing processing, and could have simplified it a bit further; this
      change does so.
      
      Discussion: https://postgr.es/m/20190131064759.GA13429@paquier.xyz
    • Remove unneeded argument from _bt_getstackbuf(). · 2ab23445
      Peter Geoghegan authored
      _bt_getstackbuf() is called at exactly two points following commit
      efada2b8 (one call site is concerned with page splits, while the
      other is concerned with page deletion).  The parent buffer returned by
      _bt_getstackbuf() is write-locked in both cases.  Remove the 'access'
      argument and make _bt_getstackbuf() assume that callers require a
      write-lock.
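      A hedged sketch of the interface change described above; the actual
      declarations live in src/include/access/nbtree.h and may differ in
      detail from this simplification:

        /* Before: callers passed the lock strength they wanted. */
        Buffer _bt_getstackbuf(Relation rel, BTStack stack, int access);

        /*
         * After: both remaining callers (page split and page deletion) need
         * a write lock, so the argument is gone and the function takes
         * BT_WRITE unconditionally.
         */
        Buffer _bt_getstackbuf(Relation rel, BTStack stack);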
    • Correct obsolete nbtree page deletion comment. · 067786ce
      Peter Geoghegan authored
      Commit efada2b8, which made the nbtree page deletion algorithm more
      robust, removed _bt_getstackbuf() calls from _bt_pagedel().  It failed
      to update a comment that referenced the earlier approach.  Update the
      comment to explain that the _bt_getstackbuf() page deletion call site
      mirrors the only other remaining _bt_getstackbuf() call site, which is
      reached during page splits.