1. 01 Sep, 2022 2 commits
  2. 31 Aug, 2022 5 commits
  3. 30 Aug, 2022 1 commit
    • On NetBSD, force dynamic symbol resolution at postmaster start. · 464db467
      Tom Lane authored
      The default of lazy symbol resolution means that when the postmaster
      first reaches the select() call in ServerLoop, it'll need to resolve
      the link to that libc entry point.  NetBSD's dynamic loader takes
      an internal lock while doing that, and if a signal interrupts the
      operation then there is a risk of self-deadlock should the signal
      handler do anything that requires that lock, as several of the
      postmaster signal handlers do.  The window for this is pretty narrow,
      and timing considerations make it unlikely that a signal would arrive
      right then anyway.  But it's semi-repeatable on slow single-CPU
      machines, and in principle the race could happen with any hardware.
      
      The least messy solution to this is to force binding of dynamic
      symbols at postmaster start, using the "-z now" linker option.
      While we're at it, also use "-z relro" so as to provide a small
      security gain.
      
      It's not entirely clear whether any other platforms share this
      issue, but for now we'll assume it's NetBSD-specific.  (We might
      later try to use "-z now" on more platforms for performance
      reasons, but that would not likely be something to back-patch.)
      
      Report and patch by me; the idea to fix it this way is from
      Andres Freund.
      
      Discussion: https://postgr.es/m/3384826.1661802235@sss.pgh.pa.us
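The run-time analogue of the "-z now" link option can be sketched with dlopen(): RTLD_NOW forces every undefined symbol to be resolved up front, so no lazy resolver (and no dynamic-loader lock) is entered later at call time, which is exactly the window the postmaster fix closes. This helper is purely illustrative; the actual fix is applied at link time, not via dlopen().

```c
#include <dlfcn.h>
#include <stddef.h>

/* Illustrative sketch only: resolve all of the running program's symbols
   eagerly (as "-z now" does at load time), then confirm that a given
   entry point is already bound.  Returns 1 if the symbol is resolvable,
   0 otherwise. */
int symbol_prebound(const char *name)
{
    void *self = dlopen(NULL, RTLD_NOW);   /* the running program itself */

    if (self == NULL)
        return 0;
    return dlsym(self, name) != NULL;
}
```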
  4. 29 Aug, 2022 1 commit
    • Prevent WAL corruption after a standby promotion. · 0e54a5e2
      Robert Haas authored
      When a PostgreSQL instance performing archive recovery but not using
      standby mode is promoted, and the last WAL segment that it attempted
      to read ended in a partial record, the previous code would create
      invalid WAL on the new timeline. The WAL from the previous timeline
      would be copied to the new timeline up until the end of the last valid
      record, but instead of beginning to write WAL immediately afterwards,
      the promoted server would write an overwrite contrecord at
      the beginning of the next segment. The end of the previous segment
      would be left as all-zeroes, resulting in failures if anything tried
      to read WAL from that file.
      
      The root of the issue is that ReadRecord() decides whether to set
      abortedRecPtr and missingContrecPtr based on the value of StandbyMode,
      but ReadRecord() switches to a new timeline based on the value of
      ArchiveRecoveryRequested. We shouldn't try to write an overwrite
      contrecord if we're switching to a new timeline, so change the test in
      ReadRecord() to check ArchiveRecoveryRequested instead.
      
      Code fix by Dilip Kumar. Comments by me incorporating suggested
      language from Álvaro Herrera. Further review from Kyotaro Horiguchi
      and Sami Imseih.
      
      Discussion: http://postgr.es/m/CAFiTN-t7umki=PK8dT1tcPV=mOUe2vNhHML6b3T7W7qqvvajjg@mail.gmail.com
      Discussion: http://postgr.es/m/FB0DEA0B-E14E-43A0-811F-C1AE93D00FF3%40amazon.com
  5. 28 Aug, 2022 1 commit
  6. 27 Aug, 2022 1 commit
  7. 26 Aug, 2022 1 commit
  8. 24 Aug, 2022 1 commit
    • Defend against stack overrun in a few more places. · 444ec169
      Tom Lane authored
      SplitToVariants() in the ispell code, lseg_inside_poly() in geo_ops.c,
      and regex_selectivity_sub() in selectivity estimation could recurse
      until stack overflow; fix by adding check_stack_depth() calls.
      So could next() in the regex compiler, but that case is better fixed by
      converting its tail recursion to a loop.  (We probably get better code
      that way too, since next() can now be inlined into its sole caller.)
      
      There remains a reachable stack overrun in the Turkish stemmer, but
      we'll need some advice from the Snowball people about how to fix that.
      
      Per report from Egor Chindyaskin and Alexander Lakhin.  These mistakes
      are old, so back-patch to all supported branches.
      
      Richard Guo and Tom Lane
      
      Discussion: https://postgr.es/m/1661334672.728714027@f473.i.mail.ru
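The tail-recursion-to-loop rewrite applied to next() can be sketched in plain C. The names here are hypothetical, not the actual regex-compiler code: a scanner that used to call itself once per skipped character is rewritten to consume them iteratively, using constant stack space and becoming easy to inline into its sole caller.

```c
#include <stddef.h>

/* Recursive form -- each skipped character costs one stack frame, so a
   long enough run of spaces can overrun the stack. */
const char *skip_spaces_recursive(const char *s)
{
    if (*s != ' ')
        return s;
    return skip_spaces_recursive(s + 1);   /* tail call */
}

/* Loop form -- same result, no recursion, bounded stack usage. */
const char *skip_spaces_loop(const char *s)
{
    while (*s == ' ')
        s++;
    return s;
}
```

For the non-tail-recursive sites (SplitToVariants(), lseg_inside_poly(), regex_selectivity_sub()), no such rewrite is available, which is why the commit adds explicit check_stack_depth() guards instead.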
  9. 23 Aug, 2022 3 commits
  10. 18 Aug, 2022 1 commit
    • Fix subtly-incorrect matching of parent and child partitioned indexes. · 3bfea5cb
      Tom Lane authored
      When creating a partitioned index, DefineIndex tries to identify
      any existing indexes on the partitions that match the partitioned
      index, so that it can absorb those as child indexes instead of
      building new ones.  Part of the matching is to compare IndexInfo
      structs --- but that wasn't done quite right.  We're comparing
      the IndexInfo built within DefineIndex itself to one made from
      existing catalog contents by BuildIndexInfo.  Notably, while
      BuildIndexInfo will run index expressions and predicates through
      expression preprocessing, that has not happened to DefineIndex's
      struct.  The result is failure to match and subsequent creation
      of duplicate indexes.
      
      The easiest and most bulletproof fix is to build a new IndexInfo
      using BuildIndexInfo, thereby guaranteeing that the processing done
      is identical.
      
      While here, let's also extract the opfamily and collation data
      from the new partitioned index, removing ad-hoc logic that
      duplicated knowledge about how those are constructed.
      
      Per report from Christophe Pettus.  Back-patch to v11 where
      we invented partitioned indexes.
      
      Richard Guo and Tom Lane
      
      Discussion: https://postgr.es/m/8864BFAA-81FD-4BF9-8E06-7DEB8D4164ED@thebuild.com
  11. 16 Aug, 2022 4 commits
  12. 15 Aug, 2022 2 commits
  13. 14 Aug, 2022 1 commit
    • Preserve memory context of VarStringSortSupport buffers. · 06602372
      Tom Lane authored
      When enlarging the work buffers of a VarStringSortSupport object,
      varstrfastcmp_locale was careful to keep them in the ssup_cxt
      memory context; but varstr_abbrev_convert just used palloc().
      The latter creates a hazard that the buffers could be freed out
      from under the VarStringSortSupport object, resulting in stomping
      on whatever gets allocated in that memory later.
      
      In practice, because we only use this code for ICU collations
      (cf. 3df9c374), the problem is confined to use of ICU collations.
      I believe it may have been unreachable before the introduction
      of incremental sort, too, as traditional sorting usually just
      uses one context for the duration of the sort.
      
      We could fix this by making the broken stanzas in varstr_abbrev_convert
      match the non-broken ones in varstrfastcmp_locale.  However, it seems
      like a better idea to dodge the issue altogether by replacing the
      pfree-and-allocate-anew coding with repalloc, which automatically
      preserves the chunk's memory context.  This fix does add a few cycles
      because repalloc will copy the chunk's content, which the existing
      coding assumes is useless.  However, we don't expect that these buffer
      enlargement operations are performance-critical.  Besides that, it's
      far from obvious that copying the buffer contents isn't required, since
      these stanzas make no effort to mark the buffers invalid by resetting
      last_returned, cache_blob, etc.  That seems to be safe upon examination,
      but it's fragile and could easily get broken in future, which wouldn't
      get revealed in testing with short-to-moderate-size strings.
      
      Per bug #17584 from James Inform.  Whether or not the issue is
      reachable in the older branches, this code has been broken on its
      own terms from its introduction, so patch all the way back.
      
      Discussion: https://postgr.es/m/17584-95c79b4a7d771f44@postgresql.org
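The shape of the fix can be sketched with standard C (illustrative names, not the actual varstr_abbrev_convert code): instead of free() followed by a fresh malloc() — which discards the old contents, and in PostgreSQL would also let the new chunk land in whatever memory context happens to be current — grow the buffer with realloc(), which preserves the contents, just as repalloc() preserves both the contents and the chunk's original memory context.

```c
#include <stdlib.h>

/* Hypothetical grow-only string buffer, standing in for the sort-support
   work buffers. */
typedef struct
{
    char   *buf;
    size_t  buflen;
} SortBuf;

/* Enlarge the buffer to at least "needed" bytes, keeping its contents.
   Returns 1 on success, 0 on allocation failure. */
int sortbuf_enlarge(SortBuf *sb, size_t needed)
{
    char   *newbuf;

    if (needed <= sb->buflen)
        return 1;               /* already big enough */
    newbuf = realloc(sb->buf, needed);  /* analogue of repalloc() */
    if (newbuf == NULL)
        return 0;
    sb->buf = newbuf;
    sb->buflen = needed;
    return 1;
}
```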
  14. 13 Aug, 2022 3 commits
    • Avoid misbehavior when hash_table_bytes < bucket_size. · 1dfc9193
      Tom Lane authored
      It's possible to reach this case when work_mem is very small and tupsize
      is (relatively) very large.  In that case ExecChooseHashTableSize would
      get an assertion failure, or with asserts off it'd compute nbuckets = 0,
      which'd likely cause misbehavior later (I've not checked).  To fix,
      clamp the number of buckets to be at least 1.
      
      This is due to faulty conversion of old my_log2() coding in 28d936031.
      Back-patch to v13, as that was.
      
      Zhang Mingli
      
      Discussion: https://postgr.es/m/beb64ca0-91e2-44ac-bf4a-7ea36275ec02@Spark
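The clamp can be sketched as follows (names are illustrative, not the actual ExecChooseHashTableSize() code): derive a power-of-two bucket count from the memory budget and per-bucket size, and never let it fall to zero even when a single bucket is larger than the whole budget.

```c
#include <stddef.h>

/* Smallest k such that 2^k >= n (n >= 1 assumed). */
static int next_pow2_log(size_t n)
{
    int k = 0;

    while (((size_t) 1 << k) < n)
        k++;
    return k;
}

/* Pick a power-of-two number of hash buckets that fits the budget.
   When hash_table_bytes < bucket_size, the raw quotient is zero; the
   clamp guarantees at least one bucket instead of tripping an assert
   (or silently computing nbuckets = 0 with asserts off). */
size_t choose_nbuckets(size_t hash_table_bytes, size_t bucket_size)
{
    size_t raw = hash_table_bytes / bucket_size;    /* may be zero */

    if (raw < 1)
        raw = 1;                /* the fix: clamp to at least 1 */
    return (size_t) 1 << next_pow2_log(raw);
}
```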
    • Catch stack overflow when recursing in transformFromClauseItem(). · 496ab1d6
      Tom Lane authored
      Most parts of the parser can expect that the stack overflow check
      in transformExprRecurse() will trigger before things get desperate.
      However, transformFromClauseItem() can recurse directly to self
      without having analyzed any expressions, so it's possible to drive
      it to a stack-overrun crash.  Add a check to prevent that.
      
      Per bug #17583 from Egor Chindyaskin.  Back-patch to all supported
      branches.
      
      Richard Guo
      
      Discussion: https://postgr.es/m/17583-33be55b9f981f75c@postgresql.org
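The pattern of the added guard can be sketched with a toy recursive parser (illustrative names and an explicit depth counter standing in for check_stack_depth(), which measures the real stack): recursion that can nest arbitrarily deep bails out with an error once a limit is hit, instead of crashing on stack overrun.

```c
#include <stddef.h>

#define MAX_DEPTH 1000          /* illustrative limit */

/* Report the maximum parenthesis nesting depth of a string, or -1 if the
   input nests too deeply.  *s is advanced as input is consumed. */
int paren_depth(const char **s, int depth)
{
    int max = depth;

    if (depth > MAX_DEPTH)
        return -1;              /* the guard: fail cleanly, don't crash */
    while (**s != '\0')
    {
        if (**s == '(')
        {
            int d;

            (*s)++;
            d = paren_depth(s, depth + 1);
            if (d < 0)
                return -1;
            if (d > max)
                max = d;
        }
        else if (**s == ')')
        {
            (*s)++;
            return max;
        }
        else
            (*s)++;
    }
    return max;
}
```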
    • Add missing fields to _outConstraint() · 2db574a2
      Peter Eisentraut authored
      As of 89779524, check constraints can
      be declared invalid.  But that patch didn't update _outConstraint() to
      also show the relevant struct fields (which were only applicable to
      foreign keys before that).  This currently only affects debugging
      output, so no impact in practice.
  15. 12 Aug, 2022 11 commits
  16. 11 Aug, 2022 1 commit
    • Fix catalog lookup with the wrong snapshot during logical decoding. · 68dcce24
      Amit Kapila authored
      Previously, we relied on HEAP2_NEW_CID records and XACT_INVALIDATION
      records to know if the transaction has modified the catalog, and that
      information is not serialized to snapshot. Therefore, after the restart,
      if the logical decoding decodes only the commit record of the transaction
      that has actually modified a catalog, we will miss adding its XID to the
      snapshot. Thus, we will end up looking at catalogs with the wrong
      snapshot.
      
      To fix this problem, this changes the snapshot builder so that it
      remembers the last-running-xacts list of the decoded RUNNING_XACTS record
      after restoring the previously serialized snapshot. Then, we mark the
      transaction as containing catalog changes if it's in the list of initial
      running transactions and its commit record has XACT_XINFO_HAS_INVALS. To
      avoid ABI breakage, we store the array of the initial running transactions
      in the static variables InitialRunningXacts and NInitialRunningXacts,
      instead of storing those in SnapBuild or ReorderBuffer.
      
      This approach has a false positive; we could end up adding the transaction
      that didn't change catalog to the snapshot since we cannot distinguish
      whether the transaction has catalog changes only by checking the COMMIT
      record. It doesn't have the information on which (sub) transaction has
      catalog changes, and XACT_XINFO_HAS_INVALS doesn't necessarily indicate
      that the transaction has catalog changes. But that won't be a problem since
      we use snapshot built during decoding only to read system catalogs.
      
      On the master branch, we took a more future-proof approach by writing
      catalog modifying transactions to the serialized snapshot which avoids the
      above false positive. But we cannot backpatch it because of a change in
      the SnapBuild.
      
      Reported-by: Mike Oh
      Author: Masahiko Sawada
      Reviewed-by: Amit Kapila, Shi yu, Takamichi Osumi, Kyotaro Horiguchi, Bertrand Drouvot, Ahsan Hadi
      Backpatch-through: 10
      Discussion: https://postgr.es/m/81D0D8B0-E7C4-4999-B616-1E5004DBDCD2%40amazon.com
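The back-branch heuristic described above can be sketched as a simple predicate (names simplified; the real code works against SnapBuild state and the commit record): treat a committing transaction as having changed the catalog if it appears in the initial running-transactions list AND its commit record carries invalidation messages. As the message notes, this over-approximates — a transaction with invalidations but no catalog change is also flagged — which is safe because the resulting snapshot is only used to read system catalogs.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Decide whether a committing transaction might have modified the catalog,
   given the list of transactions that were running when the serialized
   snapshot was restored (InitialRunningXacts in the actual fix) and
   whether the commit record has XACT_XINFO_HAS_INVALS set. */
bool might_have_catalog_changes(TransactionId xid,
                                const TransactionId *initial_running,
                                size_t n_initial,
                                bool commit_has_invals)
{
    size_t i;

    if (!commit_has_invals)
        return false;
    for (i = 0; i < n_initial; i++)
        if (initial_running[i] == xid)
            return true;
    return false;
}
```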
  17. 10 Aug, 2022 1 commit