1. 19 Apr, 2019 6 commits
    • Andres Freund's avatar
      Fix two memory leaks around force-storing tuples in slots. · 88e6ad30
      Andres Freund authored
      As reported by Tom, when ExecStoreMinimalTuple() had to perform a
      conversion to store the minimal tuple in the slot, it forgot to
      respect the shouldFree flag, and leaked the tuple into the current
      memory context if the flag was true.  Fix that by freeing the tuple
      in that case.
      
      Looking at the relevant code made me (Andres) realize that not having
      the shouldFree parameter to ExecForceStoreHeapTuple() was a bad
      idea. Some callers had to locally implement the necessary logic, and
      in one case it was missing, creating a potential per-group leak in
      non-hashed aggregation.
      
      The choice to not free the tuple in ExecComputeStoredGenerated() is
      not pretty, but not introduced by this commit - I'll start a separate
      discussion about it.
      
      Reported-By: Tom Lane
      Discussion: https://postgr.es/m/366.1555382816@sss.pgh.pa.us
      88e6ad30
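      The pattern at issue, as a minimal C sketch (the function and variable
      names here are illustrative, not the actual executor code; the real
      fix touches ExecStoreMinimalTuple() and ExecForceStoreHeapTuple()):

          #include "postgres.h"
          #include "access/htup_details.h"
          #include "executor/tuptable.h"

          /* Sketch: a forced store that must convert (copy) the tuple.  The
           * copy belongs to the slot; the original must still be freed when
           * the caller handed over ownership via shouldFree. */
          static void
          force_store_sketch(HeapTuple tuple, TupleTableSlot *slot, bool shouldFree)
          {
              MinimalTuple mtup = minimal_tuple_from_heap_tuple(tuple);

              ExecStoreMinimalTuple(mtup, slot, true);   /* slot owns the copy */

              if (shouldFree)
                  heap_freetuple(tuple);  /* without this, the original leaks */
          }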
    • Tom Lane's avatar
      Fix problems with auto-held portals. · 4d5840ce
      Tom Lane authored
      HoldPinnedPortals() did things in the wrong order: it must not mark
      a portal autoHeld until it's been successfully held.  Otherwise,
      a failure while persisting the portal results in a server crash
      because we think the portal is in a good state when it's not.
      
      Also add a check that portal->status is READY before attempting to
      hold a pinned portal.  We have such a check before the only other
      use of HoldPortal(), so it seems unwise not to check it here.
      
      Lastly, rethink the responsibility for where to call HoldPinnedPortals.
      The comment for it imagined that it was optional for any individual PL
      to call it or not, but that cannot be the case: if some outer level of
      procedure has a pinned portal, failing to persist it when an inner
      procedure commits is going to be trouble.  Let's have SPI do it instead
      of the individual PLs.  That's not a complete solution, since in theory
      a PL might not be using SPI to perform commit/rollback, but such a PL
      is going to have to be aware of lots of related requirements anyway.
      (This change doesn't cause an API break for any external PLs that might
      be calling HoldPinnedPortals per the old regime, because calling it
      twice during a commit or rollback sequence won't hurt.)
      
      Per bug #15703 from Julian Schauder.  Back-patch to v11 where this code
      came in.
      
      Discussion: https://postgr.es/m/15703-c12c5bc0ea34ba26@postgresql.org
      4d5840ce
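      In outline, the corrected ordering looks like this (a simplified sketch
      of HoldPinnedPortals(), not the verbatim code from portalmem.c; the
      error message text is paraphrased):

          if (portal->portalPinned && !portal->autoHeld)
          {
              /* New with this fix: refuse portals that aren't READY. */
              if (portal->status != PORTAL_READY)
                  ereport(ERROR,
                          (errcode(ERRCODE_INVALID_CURSOR_STATE),
                           errmsg("pinned portal is not ready to be auto-held")));

              /* Persist first: HoldPortal() can fail partway through ... */
              HoldPortal(portal);

              /* ... and mark the portal only once that has succeeded. */
              portal->autoHeld = true;
          }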
    • Michael Paquier's avatar
      bc540f98
    • Michael Paquier's avatar
      Remove dependency on pageinspect in recovery tests · 5a9323ea
      Michael Paquier authored
      If contrib/pageinspect is not installed, this causes the test checking
      the minimum recovery point to fail.  The point is that the dependency
      on pageinspect is not really necessary, as the test also performs all
      of its checks with an offline cluster by scanning the on-disk pages
      directly, which is enough for the purpose of the test.
      
      Per complaint from Tom Lane.
      
      Author: Michael Paquier
      Discussion: https://postgr.es/m/17806.1555566345@sss.pgh.pa.us
      5a9323ea
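      For illustration, "scanning the on-disk pages directly" boils down to
      reading each page header's LSN (pd_lsn, the first 8 bytes of a page).
      A self-contained C sketch, with the segment file path supplied on the
      command line (the actual test does the equivalent from Perl):

          #include <stdio.h>
          #include <stdint.h>
          #include <string.h>

          #define BLCKSZ 8192     /* default PostgreSQL block size */

          int
          main(int argc, char **argv)
          {
              unsigned char page[BLCKSZ];
              FILE       *f;
              long        blkno = 0;

              if (argc < 2 || (f = fopen(argv[1], "rb")) == NULL)
                  return 1;
              while (fread(page, 1, BLCKSZ, f) == BLCKSZ)
              {
                  uint32_t    xlogid, xrecoff;

                  memcpy(&xlogid, page, 4);       /* pd_lsn.xlogid */
                  memcpy(&xrecoff, page + 4, 4);  /* pd_lsn.xrecoff */
                  printf("block %ld: LSN %X/%X\n", blkno++, xlogid, xrecoff);
              }
              fclose(f);
              return 0;
          }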
    • Andres Freund's avatar
      Fix potential use-after-free for BEFORE UPDATE row triggers on non-core AMs. · 75e03eab
      Andres Freund authored
      When such a trigger returns the old row version, it naturally gets
      stored in the slot for the trigger result.  When a table AM doesn't
      store HeapTuples internally, ExecBRUpdateTriggers() frees the old row
      version passed to triggers - but before this fix it might still be
      referenced by the slot holding the new tuple.
      
      Noticed when running the out-of-core zheap AM against the in-core
      version of tableam.
      
      Author: Andres Freund
      75e03eab
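      The defensive pattern, sketched (variable names invented; the real fix
      lives in ExecBRUpdateTriggers() in commands/trigger.c): force the slot
      to take its own copy before the memory it may reference is freed.

          /* The new-tuple slot may still point into the old row's memory. */
          ExecMaterializeSlot(newslot);   /* slot copies what it doesn't own */
          heap_freetuple(trigtuple);      /* now safe: no dangling reference */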
  2. 18 Apr, 2019 4 commits
  3. 17 Apr, 2019 11 commits
  4. 16 Apr, 2019 4 commits
  5. 15 Apr, 2019 10 commits
  6. 14 Apr, 2019 2 commits
  7. 13 Apr, 2019 3 commits
    • Noah Misch's avatar
      When Perl "kill(9, ...)" fails, try "pg_ctl kill". · 947a3501
      Noah Misch authored
      Per buildfarm member jacana, the former fails under msys Perl 5.8.8.
      Back-patch to 9.6, like the code in question.
      
      Discussion: https://postgr.es/m/GrdLgAdUK9FdyZg8VIcTDKVOkys122ZINEb3CjjoySfGj2KyPiMKTh1zqtRp0TAD7FJ27G-OBB3eplxIB5GhcQH5o8zzGZfp0MuJaXJxVxk=@yesql.se
      947a3501
    • Tom Lane's avatar
      Prevent memory leaks associated with relcache rd_partcheck structures. · 5f1433ac
      Tom Lane authored
      The original coding of generate_partition_qual() just copied the list
      of predicate expressions into the global CacheMemoryContext, making it
      effectively impossible to clean up when the owning relcache entry is
      destroyed --- the relevant code in RelationDestroyRelation() only managed
      to free the topmost List header :-(.  This resulted in a session-lifespan
      memory leak whenever a table partition's relcache entry is rebuilt.
      Fortunately, that's not normally a large data structure, and rebuilds
      shouldn't occur all that often in production situations; but this is
      still a bug worth fixing back to v10 where the code was introduced.
      
      To fix, put the cached expression tree into its own small memory context,
      as we do with other complicated substructures of relcache entries.
      Also, deal more honestly with the case that a partition has an empty
      partcheck list; while that probably isn't a case that's very interesting
      for production use, it's legal.
      
      In passing, clarify comments about how partitioning-related relcache
      data structures are managed, and add some Asserts that we're not leaking
      old copies when we overwrite these data fields.
      
      Amit Langote and Tom Lane
      
      Discussion: https://postgr.es/m/7961.1552498252@sss.pgh.pa.us
      5f1433ac
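      The fix's pattern, as a hedged sketch (simplified from relcache.c;
      "result" stands for the locally built predicate list): copy the tree
      into a dedicated child of CacheMemoryContext so the whole structure
      can later be freed with a single MemoryContextDelete().

          MemoryContext partcheckcxt;
          MemoryContext oldcxt;

          partcheckcxt = AllocSetContextCreate(CacheMemoryContext,
                                               "partition constraint",
                                               ALLOCSET_SMALL_SIZES);
          oldcxt = MemoryContextSwitchTo(partcheckcxt);
          rel->rd_partcheck = copyObject(result);   /* deep copy, one context */
          MemoryContextSwitchTo(oldcxt);
          rel->rd_partcheckcxt = partcheckcxt;      /* remember it for cleanup */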
    • Noah Misch's avatar
      Consistently test for in-use shared memory. · c0985099
      Noah Misch authored
      postmaster startup scrutinizes any shared memory segment recorded in
      postmaster.pid, exiting if that segment matches the current data
      directory and has an attached process.  When the postmaster.pid file was
      missing, a starting postmaster used weaker checks.  Change to use the
      same checks in both scenarios.  This increases the chance of a startup
      failure, in lieu of data corruption, if the DBA does "kill -9 `head -n1
      postmaster.pid` && rm postmaster.pid && pg_ctl -w start".  A postmaster
      will no longer stop if shmat() of an old segment fails with EACCES.  A
      postmaster will no longer recycle segments pertaining to other data
      directories.  That's good for production, but it's bad for integration
      tests that crash a postmaster and immediately delete its data directory.
      Such a test now leaks a segment indefinitely.  No "make check-world"
      test does that.  win32_shmem.c already avoided all these problems.  In
      9.6 and later, enhance PostgresNode to facilitate testing.  Back-patch
      to 9.4 (all supported versions).
      
      Reviewed (in earlier versions) by Daniel Gustafsson and Kyotaro HORIGUCHI.
      
      Discussion: https://postgr.es/m/20190408064141.GA2016666@rfd.leadboat.com
      c0985099
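      The heart of the "attached process" test, as a self-contained C sketch
      (simplified from the SysV shared memory code; error handling trimmed):

          #include <stdbool.h>
          #include <sys/ipc.h>
          #include <sys/shm.h>

          /* Is anyone still attached to this SysV shared memory segment?
           * A vanished segment, or one with shm_nattch == 0, is safe to
           * ignore; a nonzero attach count means a postmaster or leftover
           * backend may still be using the data directory. */
          static bool
          segment_in_use(int shmid)
          {
              struct shmid_ds shmStat;

              if (shmctl(shmid, IPC_STAT, &shmStat) < 0)
                  return false;           /* segment no longer exists */
              return shmStat.shm_nattch != 0;
          }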