1. 15 Feb, 2019 4 commits
    • Use standard diff separator for regression.diffs · 8f27a14b
      Peter Eisentraut authored
      Instead of ======..., use the standard separator for a multi-file
      diff, which is, per POSIX,
      
          "diff %s %s %s\n", <diff_options>, <filename1>, <filename2>
      
      This makes regression.diffs behave more like a proper diff file, for
      use with other tools.  And it shows the diff options used, for
      clarity.
      
      Discussion: https://www.postgresql.org/message-id/70440c81-37bb-76dd-e48b-b5a9550d5613@2ndquadrant.com
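      As a hedged sketch (the diff options and file names below are
      hypothetical examples, not pg_regress output), the separator now
      printed for each entry follows that POSIX format:

```python
# Sketch of the POSIX multi-file diff separator line; the options and
# file names here are made-up examples.
diff_options = "-U3"
filename1 = "expected/int4.out"
filename2 = "results/int4.out"
separator = "diff %s %s %s\n" % (diff_options, filename1, filename2)
```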
    • Fix support for CREATE TABLE IF NOT EXISTS AS EXECUTE · 331a613e
      Michael Paquier authored
      The IF NOT EXISTS grammar for CTAS has been supported, and documented
      as such, since 9.5; however, the case of using EXECUTE as the query
      has never been covered, because EXECUTE CTAS statements and normal
      CTAS statements are parsed separately.
      
      Author: Andreas Karlsson
      Discussion: https://postgr.es/m/2ddcc188-e37c-a0be-32bf-a56b07c3559e@proxel.se
      Backpatch-through: 9.5
    • Fix race in dsm_attach() when handles are reused. · 6c0fb941
      Thomas Munro authored
      DSM handle values can be reused as soon as the underlying shared memory
      object has been destroyed.  That means that for a brief moment we
      might have two DSM slots with the same handle.  While trying to attach,
      if we encounter a slot with refcnt == 1, meaning that it is currently
      being destroyed, we should continue our search in case the same handle
      exists in another slot.
      
      The race manifested as a rare "dsa_area could not attach to segment"
      error, and was more likely in 10 and 11 due to the lack of distinct
      seed for random() in parallel workers.  It was made very unlikely
      in master by commit 197e4af9, and older releases don't usually
      create new DSM segments in background workers, so it was also
      unlikely there.
      
      This fixes the root cause of bug report #15585, in which the error
      could also sometimes result in a self-deadlock in the error path.
      It's not yet clear if further changes are needed to avoid that failure
      mode.
      
      Back-patch to 9.4, where dsm.c arrived.
      
      Author: Thomas Munro
      Reported-by: Justin Pryzby, Sergei Kornilov
      Discussion: https://postgr.es/m/20190207014719.GJ29720@telsasoft.com
      Discussion: https://postgr.es/m/15585-324ff6a93a18da46@postgresql.org
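      A minimal sketch of the fixed search loop (plain Python, not the
      dsm.c code; the slot layout is simplified for illustration):

```python
# Hedged sketch: refcnt == 1 marks a slot that is being destroyed, so
# the search must keep scanning in case the handle was reused by a
# newer slot instead of failing on the first match.
def find_attachable_slot(slots, handle):
    for slot in slots:
        if slot["handle"] != handle:
            continue
        if slot["refcnt"] == 1:   # being destroyed: continue the search
            continue
        return slot               # live slot for this handle
    return None

slots = [
    {"handle": 42, "refcnt": 1},  # old segment, mid-destroy
    {"handle": 42, "refcnt": 2},  # reused handle, live segment
]
```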
    • Simplify the planner's new representation of indexable clauses a little. · 8fd3fdd8
      Tom Lane authored
      In commit 1a8d5afb, I thought it'd be a good idea to define
      IndexClause.indexquals as NIL in the most common case where the given
      clause (IndexClause.rinfo) is usable exactly as-is.  It'd be more
      consistent to define the indexquals in that case as being a one-element
      list containing IndexClause.rinfo, but I thought saving the palloc
      overhead for making such a list would be worthwhile.
      
      In hindsight, that was a great example of "premature optimization is the
      root of all evil": it's complicated everyplace that needs to deal with
      the indexquals, requiring duplicative code to handle both the simple
      case and the not-simple case.  I'd initially found that tolerable but
      it's getting less so as I mop up some areas that I'd not touched in
      1a8d5afb.  In any case, two more pallocs during a planner run are
      surely at the noise level (a conclusion confirmed by a bit of
      microbenchmarking).  So let's change this decision before it becomes
      set in stone, and insist that IndexClause.indexquals always be a valid
      list of the actual index quals for the clause.
      
      Discussion: https://postgr.es/m/24586.1550106354@sss.pgh.pa.us
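      The effect of the changed invariant can be sketched like this
      (IndexClause modeled as a plain dict purely for illustration):

```python
# Old convention: an empty (NIL) indexquals meant "use rinfo as-is",
# forcing every consumer to handle two cases.
def quals_old(ic):
    return ic["indexquals"] if ic["indexquals"] else [ic["rinfo"]]

# New convention: indexquals is always the actual list of quals.
def quals_new(ic):
    return ic["indexquals"]

old_style = {"rinfo": "x = 1", "indexquals": []}
new_style = {"rinfo": "x = 1", "indexquals": ["x = 1"]}
```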
  2. 14 Feb, 2019 3 commits
  3. 13 Feb, 2019 12 commits
    • Fix portability issues in pg_bitutils · 109de05c
      Alvaro Herrera authored
      We were using uint64 function arguments as "long int" arguments to
      compiler builtins, which fails on machines where long ints are 32 bits:
      the upper half of the uint64 was being ignored.  Fix by using the "ll"
      builtin variants instead, which on those machines take 64 bit arguments.
      
      Also, remove configure tests for __builtin_popcountl() (as well as
      "long" variants for ctz and clz): the theory here is that any compiler
      version will provide all widths or none, so one test suffices.  Were
      this theory to be wrong, we'd have to add tests for
      __builtin_popcountll() and friends, which would be tedious.
      
      Per failures in buildfarm member lapwing and ensuing discussion.
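      The failure mode can be demonstrated with plain integers:
      truncating a 64-bit value to a 32-bit "long" silently discards the
      upper half before the bits are ever counted.

```python
# A popcount routed through a 32-bit path only sees the low word.
x = 0xFFFF000000000000          # 16 set bits, all in the upper half
low_word = x & 0xFFFFFFFF       # what a 32-bit "long" would receive
full_count = bin(x).count("1")
truncated_count = bin(low_word).count("1")
```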
    • Remove a stray subnormal value from float tests. · 80c468b4
      Andrew Gierth authored
      We don't care to assume that input of subnormal float values works,
      but a stray subnormal value from the upstream Ryu regression test had
      been left in the test data by mistake. Remove it.
      
      Per buildfarm member fulmar.
    • Add -mpopcnt to all compiles of pg_bitutils.c · d0b4663c
      Alvaro Herrera authored
      The way this makefile works, we need to specify it three times.
    • More float test and portability fixes. · da6520be
      Andrew Gierth authored
      Avoid assuming exact results in tstypes test; some platforms vary.
      (per buildfarm members eulachon, danio, lapwing)
      
      Avoid dubious usage (inherited from upstream) of bool parameters to
      copy_special_str, to see if this fixes the mac/ppc failures (per
      buildfarm members prairiedog and locust). (Isolated test programs on a
      ppc mac don't seem to show any other cause that would explain them.)
    • Add basic support for using the POPCNT and SSE4.2's LZCNT opcodes · 711bab1e
      Alvaro Herrera authored
      These opcodes have been around in the AMD world since 2007, and 2008 in
      the case of Intel.  They're supported in GCC and Clang via some __builtin
      macros.  The opcodes may be unavailable during runtime, in which case we
      fall back on a C-based implementation of the code.  In order to get the
      POPCNT instruction we must pass the -mpopcnt option to the compiler.  We
      do this only for the pg_bitutils.c file.
      
      David Rowley (with fragments taken from a patch by Thomas Munro)
      
      Discussion: https://postgr.es/m/CAKJS1f9WTAGG1tPeJnD18hiQW5gAk59fQ6WK-vfdAKEHyRg2RA@mail.gmail.com
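      A C-based fallback of the kind described can be sketched with
      Kernighan's trick (illustrative only, not the pg_bitutils.c code):

```python
def popcount_fallback(x: int) -> int:
    # Each iteration clears the lowest set bit, so the loop runs once
    # per set bit rather than once per bit position.
    n = 0
    while x:
        x &= x - 1
        n += 1
    return n
```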
    • Fix an overlooked UINT32_MAX. · 754ca993
      Andrew Gierth authored
      Replace with PG_UINT32_MAX. Per buildfarm members dory and woodlouse.
    • Change floating-point output format for improved performance. · 02ddd499
      Andrew Gierth authored
      Previously, floating-point output was done by rounding to a specific
      decimal precision; by default, to 6 or 15 decimal digits (losing
      information) or as requested using extra_float_digits. Drivers that
      wanted exact float values, and applications like pg_dump that must
      preserve values exactly, set extra_float_digits=3 (or sometimes 2 for
      historical reasons, though this isn't enough for float4).
      
      Unfortunately, decimal rounded output is slow enough to become a
      noticeable bottleneck when dealing with large result sets or COPY of
      large tables when many floating-point values are involved.
      
      Floating-point output can be done much faster when the output is not
      rounded to a specific decimal length, but rather is chosen as the
      shortest decimal representation that is closer to the original float
      value than to any other value representable in the same precision. The
      recently published Ryu algorithm by Ulf Adams is both relatively
      simple and remarkably fast.
      
      Accordingly, change float4out/float8out to output shortest decimal
      representations if extra_float_digits is greater than 0, and make that
      the new default. Applications that need rounded output can set
      extra_float_digits back to 0 or below, and take the resulting
      performance hit.
      
      We make one concession to portability for systems with buggy
      floating-point input: we do not output decimal values that fall
      exactly halfway between adjacent representable binary values (which
      would rely on the reader doing round-to-nearest-even correctly). This
      is known to be a problem at least for VS2013 on Windows.
      
      Our version of the Ryu code originates from
      https://github.com/ulfjack/ryu/ at commit c9c3fb1979, but with the
      following (significant) modifications:
      
       - Output format is changed to use fixed-point notation for small
         exponents, as printf would, and also to use lowercase 'e', a
         minimum of 2 exponent digits, and a mandatory sign on the exponent,
         to keep the formatting as close as possible to previous output.
      
       - The output of exact midpoint values is disabled as noted above.
      
       - The integer fast-path code is changed somewhat (since we have
         fixed-point output and the upstream did not).
      
       - Our project style has been largely applied to the code with the
         exception of C99 declaration-after-statement, which has been
         retained as an exception to our present policy.
      
       - Most of upstream's debugging and conditionals are removed, and we
         use our own configure tests to determine things like uint128
         availability.
      
      Changing the float output format obviously affects a number of
      regression tests. This patch uses an explicit setting of
      extra_float_digits=0 for test output that is not expected to be
      exactly reproducible (e.g. due to numerical instability or differing
      algorithms for transcendental functions).
      
      Conversions from floats to numeric are unchanged by this patch. These
      may appear in index expressions and it is not yet clear whether any
      change should be made, so that can be left for another day.
      
      This patch assumes that the only supported floating point format is
      now IEEE format, and the documentation is updated to reflect that.
      
      Code by me, adapting the work of Ulf Adams and other contributors.
      
      References:
      https://dl.acm.org/citation.cfm?id=3192369
      
      Reviewed-by: Tom Lane, Andres Freund, Donald Dong
      Discussion: https://postgr.es/m/87r2el1bx6.fsf@news-spur.riddles.org.uk
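      Python's float repr has had the same shortest-round-trip property
      since 3.1, which makes for a quick illustration of the output rule:

```python
# Shortest decimal string that still reads back as exactly the same
# binary value -- the property the Ryu-based output provides.
v = 0.1
s = repr(v)
round_trip = float(s)
```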
    • Use strtof() and not strtod() for float4 input. · f397e085
      Andrew Gierth authored
      Using strtod() creates a double-rounding problem; the input decimal
      value is first rounded to the nearest double; rounding that to the
      nearest float may then give an incorrect result.
      
      An example is that 7.038531e-26 when input via strtod and then rounded
      to float4 gives 0xAE43FEp-107 instead of the correct 0xAE43FDp-107.
      
      Values output by earlier PG versions with extra_float_digits=3 should
      all be read in with the same values as previously. However, values
      supplied by other software using shortest representations could be
      mis-read.
      
      On platforms that lack a strtof() entirely, we fall back to the old
      incorrect rounding behavior. (As strtof() is required by C99, such
      platforms are considered of primarily historical interest.) On VS2013,
      some workarounds are used to get correct error handling.
      
      The regression tests now test for the correct input values, so
      platforms that lack strtof() will need resultmap entries. An entry for
      HP-UX 10 is included (more may be needed).
      
      Reviewed-By: Tom Lane
      Discussion: https://postgr.es/m/871s5emitx.fsf@news-spur.riddles.org.uk
      Discussion: https://postgr.es/m/87d0owlqpv.fsf@news-spur.riddles.org.uk
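      The double-rounding path can be reproduced outside C; this sketch
      uses `struct` to emulate a round to float4 and checks the commit's
      own example values:

```python
import math
import struct

def round_to_float4(x: float) -> float:
    # Round a double to the nearest single-precision value and back.
    return struct.unpack("f", struct.pack("f", x))[0]

# decimal -> nearest double (as strtod would) -> nearest float:
double_rounded = round_to_float4(float("7.038531e-26"))
correct = math.ldexp(0xAE43FD, -107)   # what correct strtof() produces
```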
    • More unconstify use · 37d99160
      Peter Eisentraut authored
      Replace casts whose only purpose is to cast away const with the
      unconstify() macro.
      
      Discussion: https://www.postgresql.org/message-id/flat/53a28052-f9f3-1808-fed9-460fd43035ab%402ndquadrant.com
    • Remove useless casts · cf40dc65
      Peter Eisentraut authored
      Some of these were uselessly casting away "const", some were just
      nearby, but they were all unnecessary anyway.
      
      Discussion: https://www.postgresql.org/message-id/flat/53a28052-f9f3-1808-fed9-460fd43035ab%402ndquadrant.com
    • Fix comment related to calculation location of total_table_pages · 6ea95166
      Michael Paquier authored
      As of commit c6e4133f, the calculation happens in make_one_rel() and not
      query_planner().
      
      Author: Amit Langote
      Discussion: https://postgr.es/m/c7a04a90-42e6-28a4-811a-a7e352831ba1@lab.ntt.co.jp
    • Fix rare dsa_allocate() failures due to freepage.c corruption. · 7215efdc
      Thomas Munro authored
      In a corner case, a btree page was allocated during a clean-up operation
      that could cause the tracking of the largest contiguous span of free
      space to get out of whack.  That was supposed to be prevented by the use
      of the "soft" flag to avoid allocating internal pages during incidental
      clean-up work, but the flag was ignored in the case where the FPM was
      promoted from singleton format to btree format.  Repair.
      
      Remove an obsolete comment in passing.
      
      Back-patch to 10, where freepage.c arrived (as support for dsa.c).
      
      Author: Robert Haas
      Diagnosed-by: Thomas Munro and Robert Haas
      Reported-by: Justin Pryzby, Rick Otten, Sand Stone, Arne Roland and others
      Discussion: https://postgr.es/m/CAMAYy4%2Bw3NTBM5JLWFi8twhWK4%3Dk_5L4nV5%2BbYDSPu8r4b97Zg%40mail.gmail.com
  4. 12 Feb, 2019 10 commits
    • Clean up planner confusion between ncolumns and nkeycolumns. · 75c46149
      Tom Lane authored
      We're only going to consider key columns when creating indexquals,
      so there is no point in having the outer loops in indxpath.c iterate
      further than nkeycolumns.
      
      Doing so in match_pathkeys_to_index() is actually wrong, and would have
      caused crashes by now, except that we have no index AMs supporting both
      amcanorderbyop and amcaninclude.
      
      It's also wrong in relation_has_unique_index_for().  The effect there is
      to fail to prove uniqueness even when the index does prove it, if there
      are extra columns.
      
      Also future-proof examine_variable() for the day when extra columns can
      be expressions, and fix what's either a thinko or just an oversight in
      btcostestimate(): we should consider the number of key columns, not the
      total, when deciding whether to derate correlation.
      
      None of these things seemed important enough to risk changing in a
      just-before-wrap patch, but since we're past the release wrap window,
      time to fix 'em.
      
      Discussion: https://postgr.es/m/25526.1549847928@sss.pgh.pa.us
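      The loop-bound issue can be sketched in miniature (the index is
      modeled as a dict and the column names are hypothetical):

```python
# With INCLUDE columns, only the first nkeycolumns entries of the
# index column list can carry index quals; loops must stop there.
index = {"columns": ["a", "b", "payload"], "nkeycolumns": 2}
key_columns = index["columns"][: index["nkeycolumns"]]
all_columns = index["columns"]
```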
    • Relax overly strict assertion · 8c67d29f
      Alvaro Herrera authored
      Ever since its birth, ReorderBufferBuildTupleCidHash() has contained an
      assertion that a catalog tuple cannot change Cmax after acquiring one.  But
      that's wrong: if a subtransaction executes DDL that affects that catalog
      tuple, and later aborts and another DDL affects the same tuple, it will
      change Cmax.  Relax the assertion to merely verify that the Cmax remains
      valid and monotonically increasing, instead.
      
      Add a test that tickles the relevant code.
      
      Diagnosed by, and initial patch submitted by: Arseny Sher
      Co-authored-by: Arseny Sher
      Discussion: https://postgr.es/m/874l9p8hyw.fsf@ars-thinkpad
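      The relaxed check amounts to something like this sketch (the
      constant and names are illustrative, not the reorderbuffer.c code):

```python
INVALID_COMMAND_ID = 0xFFFFFFFF  # illustrative sentinel value

def check_new_cmax(old_cmax, new_cmax):
    # New rule: Cmax may change (e.g. after an aborted subtransaction's
    # DDL), but it must stay valid and must not move backward.
    assert new_cmax != INVALID_COMMAND_ID
    assert old_cmax is None or new_cmax >= old_cmax
    return new_cmax
```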
    • Blind attempt at fixing Windows build · d357a169
      Alvaro Herrera authored
      Broken since fe33a196.
    • Use Getopt::Long for catalog scripts · fe33a196
      Alvaro Herrera authored
      Replace hand-rolled option parsing with the Getopt module. This is
      shorter and easier to read. In passing, make some cosmetic adjustments
      for consistency.
      
      Author: John Naylor
      Reviewed-by: David Fetter
      Discussion: https://postgr.es/m/CACPNZCvRjepXh5b2N50njN+rO_2Nzcf=jhMkKX7=79XWUKJyKA@mail.gmail.com
    • Fix erroneous error reports in snapbuild.c. · 232a8e23
      Tom Lane authored
      It's pretty unhelpful to report the wrong file name in a complaint
      about syscall failure, but SnapBuildSerialize managed to do that twice
      in a span of 50 lines.  Also fix half a dozen missing or poorly-chosen
      errcode assignments; that's mostly cosmetic, but still wrong.
      
      Noted while studying recent failures on buildfarm member nightjar.
      I'm not sure whether those reports are actually giving the wrong
      filename, because there are two places here with identically
      spelled error messages.  The other one is specifically coded not
      to report ENOENT, but if it's this one, how could we be getting
      ENOENT from open() with O_CREAT?  Need to sit back and await results.
      
      However, these ereports are clearly broken from birth, so back-patch.
    • Fix description of WAL record XLOG_PARAMETER_CHANGE · b7ec8205
      Michael Paquier authored
      max_wal_senders and max_worker_processes got reversed in the output
      generated because of ea92368c.
      
      Reported-by: Kevin Hale Boyes
      Discussion: https://postgr.es/m/CADAecHVAD4=26KAx4nj5DBvxqqvJkuwsy+riiiNhQqwnZg2K8Q@mail.gmail.com
    • Fix header inclusion issue. · b07c695d
      Tom Lane authored
      partprune.h failed to compile by itself; needs to include partdefs.h.
      
      I think I must've broken this in fa2cf164, though I'd swear I ran
      the appropriate tests when removing #includes.  Anyway, it's very
      sensible for this file to include partdefs.h, so let's just do that.
      
      Per cpluspluscheck.
    • Clarify docs about limitations of constraint exclusion with partitions · ea05b221
      Michael Paquier authored
      The current wording can confuse the reader about constraint exclusion
      being available at query execution, but this only applies to partition
      pruning.
      
      Reported-by: Shouyu Luo
      Author: David Rowley
      Reviewed-by: Chapman Flack, Amit Langote
      Discussion: https://postgr.es/m/15629-2ef8b22e61f8333f@postgresql.org
    • Allow extensions to generate lossy index conditions. · 74dfe58a
      Tom Lane authored
      For a long time, indxpath.c has had the ability to extract derived (lossy)
      index conditions from certain operators such as LIKE.  For just as long,
      it's been obvious that we really ought to make that capability available
      to extensions.  This commit finally accomplishes that, by adding another
      API for planner support functions that lets them create derived index
      conditions for their functions.  As proof of concept, the hardwired
      "special index operator" code formerly present in indxpath.c is pushed
      out to planner support functions attached to LIKE and other relevant
      operators.
      
      A weak spot in this design is that an extension needs to know OIDs for
      the operators, datatypes, and opfamilies involved in the transformation
      it wants to make.  The core-code prototypes use hard-wired OID references
      but extensions don't have that option for their own operators etc.  It's
      usually possible to look up the required info, but that may be slow and
      inconvenient.  However, improving that situation is a separate task.
      
      I want to do some additional refactorization around selfuncs.c, but
      that also seems like a separate task.
      
      Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us
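      The long-standing LIKE case that this generalizes: a constant
      prefix yields a lossy but indexable range condition. A simplified
      sketch (real code must also cope with collations and bytes that
      cannot be incremented):

```python
def like_prefix_to_range(prefix: str):
    # col LIKE 'abc%'  implies  col >= 'abc' AND col < 'abd'
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return (prefix, upper)
```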
    • Move max_wal_senders out of max_connections for connection slot handling · ea92368c
      Michael Paquier authored
      Since its introduction, max_wal_senders is counted as part of
      max_connections when it comes to define how many connection slots can be
      used for replication connections with a WAL sender context.  This can
      lead to confusion for some users, as it could be possible to block a
      base backup or replication from happening because other backend sessions
      are already taken for other purposes by an application, and
      superuser-only connection slots are not a correct solution to handle
      that case.
      
      This commit makes max_wal_senders independent of max_connections for its
      handling of PGPROC entries in ProcGlobal, meaning that connection slots
      for WAL senders are handled using their own free queue, like autovacuum
      workers and bgworkers.
      
      One compatibility issue that this change creates is that a standby
      now requires a value of max_wal_senders at least equal to its
      primary's.  So, if a standby is set up with a lower value of
      max_wal_senders than its primary, failovers could break.
      Normally this should not be an issue though, as any settings of a
      standby are inherited from its primary, because postgresql.conf is
      normally copied as part of a base backup, so parameters would be
      consistent.
      
      Author: Alexander Kukushkin
      Reviewed-by: Kyotaro Horiguchi, Petr Jelínek, Masahiko Sawada, Oleksii
      Kliukin
      Discussion: https://postgr.es/m/CAFh8B=nBzHQeYAu0b8fjK-AF1X4+_p6GRtwG+cCgs6Vci2uRuQ@mail.gmail.com
  5. 11 Feb, 2019 5 commits
    • Redesign the partition dependency mechanism. · 1d92a0c9
      Tom Lane authored
      The original setup for dependencies of partitioned objects had
      serious problems:
      
      1. It did not verify that a drop cascading to a partition-child object
      also cascaded to at least one of the object's partition parents.  Now,
      normally a child object would share all its dependencies with one or
      another parent (e.g. a child index's opclass dependencies would be shared
      with the parent index), so that this oversight is usually harmless.
      But if some dependency failed to fit this pattern, the child could be
      dropped while all its parents remain, creating a logically broken
      situation.  (It's easy to construct artificial cases that break it,
      such as attaching an unrelated extension dependency to the child object
      and then dropping the extension.  I'm not sure if any less-artificial
      cases exist.)
      
      2. Management of partition dependencies during ATTACH/DETACH PARTITION
      was complicated and buggy; for example, after detaching a partition
      table it was possible to create cases where a formerly-child index
      should be dropped and was not, because the correct set of dependencies
      had not been reconstructed.
      
      Less seriously, because multiple partition relationships were
      represented identically in pg_depend, there was an order-of-traversal
      dependency on which partition parent was cited in error messages.
      We also had some pre-existing order-of-traversal hazards for error
      messages related to internal and extension dependencies.  This is
      cosmetic to users but causes testing problems.
      
      To fix #1, add a check at the end of the partition tree traversal
      to ensure that at least one partition parent got deleted.  To fix #2,
      establish a new policy that partition dependencies are in addition to,
      not instead of, a child object's usual dependencies; in this way
      ATTACH/DETACH PARTITION need not cope with adding or removing the
      usual dependencies.
      
      To fix the cosmetic problem, distinguish between primary and secondary
      partition dependency entries in pg_depend, by giving them different
      deptypes.  (They behave identically except for having different
      priorities for being cited in error messages.)  This means that the
      former 'I' dependency type is replaced with new 'P' and 'S' types.
      
      This also fixes a longstanding bug that after handling an internal
      dependency by recursing to the owning object, findDependentObjects
      did not verify that the current target was now scheduled for deletion,
      and did not apply the current recursion level's objflags to it.
      Perhaps that should be back-patched; but in the back branches it
      would only matter if some concurrent transaction had removed the
      internal-linkage pg_depend entry before the recursive call found it,
      or the recursive call somehow failed to find it, both of which seem
      unlikely.
      
      Catversion bump because the contents of pg_depend change for
      partitioning relationships.
      
      Patch HEAD only.  It's annoying that we're not fixing #2 in v11,
      but there seems no practical way to do so given that the problem
      is exactly a poor choice of what entries to put in pg_depend.
      We can't really fix that while staying compatible with what's
      in pg_depend in existing v11 installations.
      
      Discussion: https://postgr.es/m/CAH2-Wzkypv1R+teZrr71U23J578NnTBt2X8+Y=Odr4pOdW1rXg@mail.gmail.com
    • Fix misleading PG_RE_THROW commentary · c603b392
      Alvaro Herrera authored
      The old verbiage indicated that PG_RE_THROW is optional, which is not
      really true.  This has confused many people, so it seems worth fixing.
      
      Discussion: https://postgr.es/m/20190206160958.GA22304@alvherre.pgsql
    • Peter Eisentraut · 256fc004
    • Remove unused macro · 78b0cac7
      Peter Eisentraut authored
      Last use was removed in 2c66f992.
    • Fix indexable-row-comparison logic to account for covering indexes. · 6bdc3005
      Tom Lane authored
      indxpath.c needs a good deal more attention for covering indexes than
      it's gotten.  But so far as I can tell, the only really awful breakage
      is in expand_indexqual_rowcompare (nee adjust_rowcompare_for_index),
      which was only half fixed in c266ed31.  The other problems aren't
      bad enough to take the risk of a just-before-wrap fix.
      
      The problem here is that if the leading column of a row comparison
      matches an index (allowing this code to be reached), and some later
      column doesn't match the index, it'll nonetheless believe that that
      column matches the first included index column.  Typically that'll
      lead to an error like "operator M is not a member of opfamily N" as
      a result of fetching a garbage opfamily OID.  But with enough bad
      luck, maybe a broken plan would be generated.
      
      Discussion: https://postgr.es/m/25526.1549847928@sss.pgh.pa.us
  6. 10 Feb, 2019 4 commits
    • Add per-test-script runtime display to pg_regress. · 72d71e03
      Tom Lane authored
      It seems useful to have this information available, so that it's
      easier to tell when a test script is taking a disproportionate
      amount of time.
      
      Discussion: https://postgr.es/m/16646.1549770618@sss.pgh.pa.us
    • Fix trigger drop procedure · cb90de1a
      Alvaro Herrera authored
      After commit 123cc697a8eb, we remove redundant FK action triggers during
      partition ATTACH by merely deleting the catalog tuple, but that's wrong:
      it should use performDeletion() instead.  Repair, and make the comments
      more explicit.
      
      Per code review from Tom Lane.
      
      Discussion: https://postgr.es/m/18885.1549642539@sss.pgh.pa.us
    • Solve cross-version-upgrade testing problem induced by 1fb57af9. · 068503c7
      Tom Lane authored
      Renaming varchar_transform to varchar_support had a side effect
      I hadn't foreseen: the core regression tests leave around a
      transform object that relies on that function, so the name
      change breaks cross-version upgrade tests, because the name
      used in the older branches doesn't match.
      
      Since the dependency on varchar_transform was chosen with the
      aid of a dartboard anyway (it would surely not work as a
      language transform support function), fix by just choosing
      a different random builtin function with the right signature.
      Also add some comments explaining why this isn't horribly unsafe.
      
      I chose to make the same substitution in a couple of other
      copied-and-pasted test cases, for consistency, though those
      aren't directly contributing to the testing problem.
      
      Per buildfarm.  Back-patch, else it doesn't fix the problem.
    • Repair unsafe/unportable snprintf usage in pg_restore. · 4dbe1969
      Tom Lane authored
      warn_or_exit_horribly() was blithely passing a potentially-NULL
      string pointer to a %s format specifier.  That works (at least
      to the extent of not crashing) on some platforms, but not all,
      and since we switched to our own snprintf.c it doesn't work
      for us anywhere.
      
      Of the three string fields being handled this way here, I think
      that only "owner" is supposed to be nullable ... but considering
      that this is error-reporting code, it has very little business
      assuming anything, so put in defenses for all three.
      
      Per a crash observed on buildfarm member crake and then
      reproduced here.  Because of the portability aspect,
      back-patch to all supported versions.
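      The defensive pattern, sketched in Python terms (in C, passing
      NULL to a %s specifier is undefined behavior):

```python
def null_safe(s):
    # Substitute a placeholder before formatting instead of trusting
    # the formatter to tolerate a null pointer.
    return s if s is not None else "(null)"

message = "owner: %s" % null_safe(None)
```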
  7. 09 Feb, 2019 2 commits
    • Build out the planner support function infrastructure. · a391ff3c
      Tom Lane authored
      Add support function requests for estimating the selectivity, cost,
      and number of result rows (if a SRF) of the target function.
      
      The lack of a way to estimate selectivity of a boolean-returning
      function in WHERE has been a recognized deficiency of the planner
      since Berkeley days.  This commit finally fixes it.
      
      In addition, non-constant estimates of cost and number of output
      rows are now possible.  We still fall back to looking at procost
      and prorows if the support function doesn't service the request,
      of course.
      
      To make concrete use of the possibility of estimating output rowcount
      for SRFs, this commit adds support functions for array_unnest(anyarray)
      and the integer variants of generate_series; the lack of plausible
      rowcount estimates for those, even when it's obvious to a human,
      has been a repeated subject of complaints.  Obviously, much more
      could now be done in this line, but I'm mostly just trying to get
      the infrastructure in place.
      
      Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us
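      What such a rowcount support function can compute for the integer
      generate_series variants, as a sketch (the function name is
      illustrative, not the actual support function's):

```python
def generate_series_rows(start: int, stop: int, step: int = 1) -> int:
    # Rowcount of generate_series(start, stop, step) with constant
    # integer arguments; a zero step yields no usable estimate here.
    if step == 0:
        return 0
    return max(0, (stop - start) // step + 1)
```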
    • Create the infrastructure for planner support functions. · 1fb57af9
      Tom Lane authored
      Rename/repurpose pg_proc.protransform as "prosupport".  The idea is
      still that it names an internal function that provides knowledge to
      the planner about the behavior of the function it's attached to;
      but redesign the API specification so that it's not limited to doing
      just one thing, but can support an extensible set of requests.
      
      The original purpose of simplifying a function call is handled by
      the first request type to be invented, SupportRequestSimplify.
      Adjust all the existing transform functions to handle this API,
      and rename them from "xxx_transform" to "xxx_support" to reflect
      the potential generalization of what they do.  (Since we never
      previously provided any way for extensions to add transform functions,
      this change doesn't create an API break for them.)
      
      Also add DDL and pg_dump support for attaching a support function to a
      user-defined function.  Unfortunately, DDL access has to be restricted
      to superusers, at least for now; but seeing that support functions
      will pretty much have to be written in C, that limitation is just
      theoretical.  (This support is untested in this patch, but a follow-on
      patch will add cases that exercise it.)
      
      Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us