1. 01 Oct, 2017 3 commits
  2. 30 Sep, 2017 5 commits
    • Fix pg_dump to assign domain array type OIDs during pg_upgrade. · 2632bcce
      Tom Lane authored
      During a binary upgrade, all type OIDs are supposed to be assigned by
      pg_dump based on their values in the old cluster.  But now that domains
      have arrays, there's nothing to base the arrays' type OIDs on, if we're
      upgrading from a pre-v11 cluster.  Make pg_dump search for an unused type
      OID to use for this purpose.  Per buildfarm.
      
      Discussion: https://postgr.es/m/E1dyLlE-0002gT-H5@gemulon.postgresql.org
    • Support arrays over domains. · c12d570f
      Tom Lane authored
      Allowing arrays with a domain type as their element type was left un-done
      in the original domain patch, but not for any very good reason.  This
      omission leads to such surprising results as array_agg() not working on
      a domain column, because the parser can't identify a suitable output type
      for the polymorphic aggregate.
      
      In order to fix this, first clean up the APIs of coerce_to_domain() and
      some internal functions in parse_coerce.c so that we consistently pass
      around a CoercionContext along with CoercionForm.  Previously, we sometimes
      passed an "isExplicit" boolean flag instead, which is strictly less
      information; and coerce_to_domain() didn't even get that, but instead had
      to reverse-engineer isExplicit from CoercionForm.  That's contrary to the
      documentation in primnodes.h that says that CoercionForm only affects
      display and not semantics.  I don't think this change fixes any live bugs,
      but it makes things more consistent.  The main reason for doing it though
      is that now build_coercion_expression() receives ccontext, which it needs
      in order to be able to recursively invoke coerce_to_target_type().
      
      Next, reimplement ArrayCoerceExpr so that the node does not directly know
      any details of what has to be done to the individual array elements while
      performing the array coercion.  Instead, the per-element processing is
      represented by a sub-expression whose input is a source array element and
      whose output is a target array element.  This simplifies life in
      parse_coerce.c, because it can build that sub-expression by a recursive
      invocation of coerce_to_target_type().  The executor now handles the
      per-element processing as a compiled expression instead of hard-wired code.
      The main advantage of this is that we can use a single ArrayCoerceExpr to
      handle as many as three successive steps per element: base type conversion,
      typmod coercion, and domain constraint checking.  The old code used two
      stacked ArrayCoerceExprs to handle type + typmod coercion, which was pretty
      inefficient, and adding yet another array deconstruction to do domain
      constraint checking seemed very unappetizing.
      
      In the case where we just need a single, very simple coercion function,
      doing this straightforwardly leads to a noticeable increase in the
      per-array-element runtime cost.  Hence, add an additional shortcut evalfunc
      in execExprInterp.c that skips unnecessary overhead for that specific form
      of expression.  The runtime speed of simple cases is within 1% or so of
      where it was before, while cases that previously required two levels of
      array processing are significantly faster.
      
      Finally, create an implicit array type for every domain type, as we do for
      base types, enums, etc.  Everything except the array-coercion case seems
      to just work without further effort.
      
      Tom Lane, reviewed by Andrew Dunstan
      
      Discussion: https://postgr.es/m/9852.1499791473@sss.pgh.pa.us
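
      A minimal illustration of the user-visible effect (the domain, table, and
      column names below are invented for the example, not taken from the
      commit):

          -- every domain now gets an implicit array type
          CREATE DOMAIN posint AS int CHECK (VALUE > 0);
          CREATE TABLE t (v posint);
          INSERT INTO t VALUES (1), (2), (3);
          SELECT array_agg(v) FROM t;        -- previously failed: no array type for posint
          SELECT ARRAY[1, 2, 3]::posint[];   -- the domain CHECK is applied per element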
    • Fix copy & pasto in 510b8cbf. · 248e3375
      Andres Freund authored
      Reported-By: Peter Geoghegan
    • Fix typo. · f1424123
      Andres Freund authored
      Reported-By: Thomas Munro and Jesper Pedersen
    • Extend & revamp pg_bswap.h infrastructure. · 510b8cbf
      Andres Freund authored
      Upcoming patches are going to address performance issues that involve
      slow system provided ntohs/htons etc. To address that expand
      pg_bswap.h to provide pg_ntoh{16,32,64}, pg_hton{16,32,64} and
      optimize their respective implementations by using compiler intrinsics
      for gcc compatible compilers and msvc. Fall back to manual
      implementations using shifts etc otherwise.
      
      Additionally remove multiple evaluation hazards from the existing
      BSWAP32/64 macros, by replacing them with inline functions when
      necessary. In the course of that the naming scheme is changed to
      pg_bswap16/32/64.
      
      Author: Andres Freund
      Discussion: https://postgr.es/m/20170927172019.gheidqy6xvlxb325@alap3.anarazel.de
  3. 29 Sep, 2017 10 commits
    • Use Py_RETURN_NONE where suitable · 0008a106
      Peter Eisentraut authored
      This is more idiomatic style and available as of Python 2.4, which is
      our minimum.
    • Fix inadequate locking during get_rel_oids(). · 19de0ab2
      Tom Lane authored
      get_rel_oids used to not take any relation locks at all, but that stopped
      being a good idea with commit 3c3bb993, which inserted a syscache lookup
      into the function.  A concurrent DROP TABLE could now produce "cache lookup
      failed", which we don't want to have happen in normal operation.  The best
      solution seems to be to transiently take a lock on the relation named by
      the RangeVar (which also makes the result of RangeVarGetRelid a lot less
      spongy).  But we shouldn't hold the lock beyond this function, because we
      don't want VACUUM to lock more than one table at a time.  (That would not
      be a big problem right now, but it will become one after the pending
      feature patch to allow multiple tables to be named in VACUUM.)
      
      In passing, adjust vacuum_rel and analyze_rel to document that we don't
      trust the passed RangeVar to be accurate, and allow the RangeVar to
      possibly be NULL --- which it is anyway for a whole-database VACUUM,
      though we accidentally didn't crash for that case.
      
      The passed RangeVar is in fact inaccurate when dealing with a child
      partition, as of v10, and it has been wrong for a whole long time in the
      case of vacuum_rel() recursing to a TOAST table.  None of these things
      present visible bugs up to now, because the passed RangeVar is in fact
      only consulted for autovacuum logging, and in that particular context it's
      always accurate because autovacuum doesn't let vacuum.c expand partitions
      nor recurse to toast tables.  Still, this seems like trouble waiting to
      happen, so let's nail the door at least partly shut.  (Further cleanup
      is planned, in HEAD only, as part of the pending feature patch.)
      
      Fix some sadly inaccurate/obsolete comments too.  Back-patch to v10.
      
      Michael Paquier and Tom Lane
      
      Discussion: https://postgr.es/m/25023.1506107590@sss.pgh.pa.us
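
      Roughly, the race this closes (hypothetical table name; before/after
      behavior summarized from the commit message):

          -- session 1                   -- session 2
          VACUUM some_table;             DROP TABLE some_table;
          -- before: the concurrent drop could surface as "cache lookup failed"
          -- after:  get_rel_oids() holds a transient lock on the named relation
          --         while resolving it, and releases it before vacuuming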
    • psql: Don't try to print a partition constraint we didn't fetch. · 69c16983
      Robert Haas authored
      If \d rather than \d+ is used, then verbose is false and we don't ask
      the server for the partition constraint; so we shouldn't print it in
      that case either.
      
      Maksim Milyutin, per a report from Jesper Pedersen.  Reviewed by
      Jesper Pedersen and Amit Langote.
      
      Discussion: http://postgr.es/m/2af5fc4d-7bcc-daa8-4fe6-86274bea363c@redhat.com
    • pgbench: If we fail to send a command to the server, fail. · e55d9643
      Robert Haas authored
      This beats the old behavior of busy-waiting hands down.
      
      Oversight in commit 12788ae4.
      
      Report by Pavan Deolasee. Patch by Fabien Coelho.  Reviewed by
      Pavan Deolasee.
      
      Discussion: http://postgr.es/m/CABOikdPhfXTypckMC1Ux6Ko+hKBWwUBA=EXsvamXYSg8M9J94w@mail.gmail.com
    • psql: Update \d sequence display · 2a14b960
      Peter Eisentraut authored
      For \d sequencename, the psql code just did SELECT * FROM sequencename
      to get the information to display, but this does not contain much
      interesting information anymore in PostgreSQL 10, because the metadata
      has been moved to a separate system catalog.
      
      This patch creates a newly designed sequence display that is not merely
      an extension of the general relation/table display as it was previously.
      
      Example:
      
      PostgreSQL 9.6:
      
      => \d foobar
                 Sequence "public.foobar"
          Column     |  Type   |        Value
      ---------------+---------+---------------------
       sequence_name | name    | foobar
       last_value    | bigint  | 1
       start_value   | bigint  | 1
       increment_by  | bigint  | 1
       max_value     | bigint  | 9223372036854775807
       min_value     | bigint  | 1
       cache_value   | bigint  | 1
       log_cnt       | bigint  | 0
       is_cycled     | boolean | f
       is_called     | boolean | f
      
      PostgreSQL 10 before this change:
      
      => \d foobar
         Sequence "public.foobar"
         Column   |  Type   | Value
      ------------+---------+-------
       last_value | bigint  | 1
       log_cnt    | bigint  | 0
       is_called  | boolean | f
      
      New:
      
      => \d foobar
                                 Sequence "public.foobar"
        Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache
      --------+-------+---------+---------------------+-----------+---------+-------
       bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1
      Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
    • Marginal improvement for generated code in execExprInterp.c. · 136ab7c5
      Tom Lane authored
      Avoid the coding pattern "*op->resvalue = f();", as some compilers think
      that requires them to evaluate "op->resvalue" before the function call.
      Unless there are lots of free registers, this can lead to a useless
      register spill and reload across the call.
      
      I changed all the cases like this in ExecInterpExpr(), but didn't bother
      in the out-of-line opcode eval subroutines, since those are presumably
      not as performance-critical.
      
      Discussion: https://postgr.es/m/2508.1506630094@sss.pgh.pa.us
    • Add background worker type · 5373bc2a
      Peter Eisentraut authored
      Add bgw_type field to background worker structure.  It is intended to be
      set to the same value for all workers of the same type, so they can be
      grouped in pg_stat_activity, for example.
      
      The backend_type column in pg_stat_activity now shows bgw_type for a
      background worker.  The ps listing also no longer calls out that a
      process is a background worker but just shows the bgw_type.  That way,
      being a background worker is more of an implementation detail now that
      is not shown to the user.  However, most log messages still refer to
      'background worker "%s"'; otherwise constructing sensible and
      translatable log messages would become tricky.
      Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
      Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
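
      For example, grouping workers by type becomes a simple monitoring query
      (not taken from the commit, just an illustration of the new grouping):

          SELECT backend_type, count(*)
          FROM pg_stat_activity
          GROUP BY backend_type
          ORDER BY count(*) DESC;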
    • Remove replacement selection sort. · 8b304b8b
      Robert Haas authored
      At the time replacement_sort_tuples was introduced, there were still
      cases where replacement selection sort noticeably outperformed using
      quicksort even for the first run.  However, those cases seem to have
      evaporated as a result of further improvements made since that time
      (and perhaps also advances in CPU technology).  So remove replacement
      selection and the controlling GUC entirely.  This makes tuplesort.c
      noticeably simpler and probably paves the way for further
      optimizations someone might want to do later.
      
      Peter Geoghegan, with review and testing by Tomas Vondra and me.
      
      Discussion: https://postgr.es/m/CAH2-WzmmNjG_K0R9nqYwMq3zjyJJK+hCbiZYNGhAy-Zyjs64GQ@mail.gmail.com
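
      One visible consequence is that the controlling GUC is gone (illustrative
      session against a server built with this commit):

          SHOW replacement_sort_tuples;
          -- ERROR:  unrecognized configuration parameter "replacement_sort_tuples"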
    • Add PostgreSQL version to coverage output · d2773f9b
      Peter Eisentraut authored
      Also make overriding the title easier.  That helps tell where the
      report came from and label different variants of a report.
      Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
    • Add lcov --initial · 4bb5a253
      Peter Eisentraut authored
      By just running lcov on the produced .gcda data files, we don't account
      for source files that are not touched by tests at all.  To fix that, run
      lcov --initial to create a base line info file with all zero counters,
      and merge that with the actual counters when creating the final report.
      Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
  4. 28 Sep, 2017 4 commits
    • Remove SGML marked sections · 22d97646
      Peter Eisentraut authored
      For XML compatibility, replace marked sections <![IGNORE[ ]]> with
      comments <!-- -->.  In some cases it seemed better to remove the ignored
      text altogether, and in one case the text should not have been ignored.
    • Fix freezing of a dead HOT-updated tuple · 20b65522
      Alvaro Herrera authored
      Vacuum calls page-level HOT prune to remove dead HOT tuples before doing
      liveness checks (HeapTupleSatisfiesVacuum) on the remaining tuples.  But
      concurrent transaction commit/abort may turn DEAD some of the HOT tuples
      that survived the prune, before HeapTupleSatisfiesVacuum tests them.
      This happens to activate the code that decides to freeze the tuple ...
      which resuscitates it, duplicating data.
      
      (This is especially bad if there are any unique constraints, because those
      are now internally violated due to the duplicate entries, though you
      won't know until you try to REINDEX or dump/restore the table.)
      
      One possible fix would be to simply skip doing anything to the tuple,
      and hope that the next HOT prune would remove it.  But there is a
      problem: if the tuple is older than freeze horizon, this would leave an
      unfrozen XID behind, and if no HOT prune happens to clean it up before
      the containing pg_clog segment is truncated away, it'd later cause an
      error when the XID is looked up.
      
      Fix the problem by having the tuple freezing routines cope with the
      situation: don't freeze the tuple (and keep it dead).  In the cases that
      the XID is older than the freeze age, set the HEAP_XMAX_COMMITTED flag
      so that there is no need to look up the XID in pg_clog later on.
      
      An isolation test is included, authored by Michael Paquier, loosely
      based on Daniel Wood's original reproducer.  It only tests one
      particular scenario, though, not all the possible ways for this problem
      to surface; it'd be good to have a more reliable way to test this more
      fully, but it'd require more work.
      In message https://postgr.es/m/20170911140103.5akxptyrwgpc25bw@alvherre.pgsql
      I outlined another test case (more closely matching Dan Wood's) that
      exposed a few more ways for the problem to occur.
      
      Backpatch all the way back to 9.3, where this problem was introduced by
      multixact juggling.  In branches 9.3 and 9.4, this includes a backpatch
      of commit e5ff9fefcd50 (of 9.5 era), since the original is not
      correctable without matching the coding pattern in 9.5 up.
      
      Reported-by: Daniel Wood
      Diagnosed-by: Daniel Wood
      Reviewed-by: Yi Wen Wong, Michaël Paquier
      Discussion: https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
    • Have lcov exclude external files · 66fd86a6
      Peter Eisentraut authored
      Call lcov with --no-external option to exclude external files (for
      example, system headers with inline functions) from output.
      Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
    • Run only top-level recursive lcov · 504923a0
      Peter Eisentraut authored
      This is the way lcov was intended to be used.  It is much faster and
      more robust and makes the makefiles simpler than running it in each
      subdirectory.
      
      The previous coding ran gcov before lcov, but that is useless because
      lcov/geninfo call gcov internally and use that information.  Moreover,
      this led to complications and failures during parallel make.  This
      separates the two targets:  You either use "make coverage" to get
      textual output from gcov or "make coverage-html" to get an HTML report
      via lcov.  (Using both is still problematic because they write the same
      output files.)
      Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
  5. 27 Sep, 2017 8 commits
    • Fix behavior when converting a float infinity to numeric. · 7769fc00
      Tom Lane authored
      float8_numeric() and float4_numeric() failed to consider the possibility
      that the input is an IEEE infinity.  The results depended on the
      platform-specific behavior of sprintf(): on most platforms you'd get
      something like
      
      ERROR:  invalid input syntax for type numeric: "inf"
      
      but at least on Windows it's possible for the conversion to succeed and
      deliver a finite value (typically 1), due to a nonstandard output format
      from sprintf and lack of syntax error checking in these functions.
      
      Since our numeric type lacks the concept of infinity, a suitable conversion
      is impossible; the best thing to do is throw an explicit error before
      letting sprintf do its thing.
      
      While at it, let's use snprintf not sprintf.  Overrunning the buffer
      should be impossible if sprintf does what it's supposed to, but this
      is cheap insurance against a stack smash if it doesn't.
      
      Problem reported by Taiki Kondo.  Patch by me based on fix suggestion
      from KaiGai Kohei.  Back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/12A9442FBAE80D4E8953883E0B84E088C8C7A2@BPXM01GP.gisp.nec.co.jp
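
      With the fix the result is platform-independent; roughly (the exact error
      wording may differ):

          SELECT 'infinity'::float8::numeric;
          -- ERROR:  cannot convert infinity to numeric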
    • Revert to 9.6 treatment of ALTER TYPE enumtype ADD VALUE. · 28e07270
      Tom Lane authored
      This reverts commit 15bc038f, along with the follow-on commits 1635e80d
      and 984c9207 that tried to clean up the problems exposed by bug #14825.
      The result was incomplete because it failed to address parallel-query
      requirements.  With 10.0 release so close upon us, now does not seem like
      the time to be adding more code to fix that.  I hope we can un-revert this
      code and add the missing parallel query support during the v11 cycle.
      
      Back-patch to v10.
      
      Discussion: https://postgr.es/m/20170922185904.1448.16585@wrigleys.postgresql.org
    • Fix plperl build · 65c86562
      Peter Eisentraut authored
      The changes in 639928c9 turned out to
      require Perl 5.9.3, which is newer than our minimum required version.
      So revert back to the old code for the normal case and only use the new
      variant when both coverage and vpath are used.  As the minimum Perl
      version moves forward, we can drop the old code sometime.
    • Improve the CREATE POLICY documentation. · af44cbd5
      Dean Rasheed authored
      Provide a correct description of how multiple policies are combined,
      clarify when SELECT permissions are required, mention SELECT FOR
      UPDATE/SHARE, and do some other more minor tidying up.
      
      Reviewed by Stephen Frost
      
      Discussion: https://postgr.es/m/CAEZATCVrxyYbOFU8XbGHicz%2BmXPYzw%3DhfNL2XTphDt-53TomQQ%40mail.gmail.com
      
      Back-patch to 9.5.
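
      For instance, the combination rule being clarified: multiple permissive
      policies on the same table are OR'ed together (table, policy, and column
      names are hypothetical):

          CREATE POLICY p_own    ON accounts FOR SELECT USING (owner = current_user);
          CREATE POLICY p_public ON accounts FOR SELECT USING (is_public);
          -- a row is visible via SELECT if either USING condition is true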
    • Improve vpath support in plperl build · 639928c9
      Peter Eisentraut authored
      Run xsubpp with the -output option instead of redirecting stdout.  That
      ensures that the #line directives in the output file point to the right
      place in a vpath build.  This in turn fixes an error in coverage builds
      that it can't find the source files.
      
      Refactor the makefile rules while we're here.
      Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
    • Get rid of parameterized marked sections in SGML · 684cf76b
      Peter Eisentraut authored
      Previously, we created a variant of the installation instructions for
      producing the plain-text INSTALL file by marking up certain parts of
      installation.sgml using SGML parameterized marked sections.  Marked
      sections will not work anymore in XML, so before we can convert the
      documentation to XML, we need a new approach.
      
      DocBook provides a "profiling" feature that allows selecting content
      based on attributes, which would work here.  But it imposes a noticeable
      overhead when building the full documentation and causes complications
      when building some output formats, and given that we recently spent a
      fair amount of effort optimizing the documentation build time, it seems
      sad to have to accept that.
      
      So as an alternative, (1) we create our own mini-profiling layer that
      adjusts just the text we want, and (2) assemble the pieces of content
      that we want in the INSTALL file using XInclude.  That way, there is no
      overhead when building the full documentation and most of the "ugly"
      stuff in installation.sgml can be removed and dealt with out of line.
    • pg_basebackup: Add option to create replication slot · 3709ca1c
      Peter Eisentraut authored
      When requesting a particular replication slot, the new pg_basebackup
      option -C/--create-slot creates it before starting to replicate from it.
      
      Further refactor the slot creation logic to include the temporary slot
      creation logic into the same function.  Add new arguments is_temporary
      and preserve_wal to CreateReplicationSlot().  Print in --verbose mode
      that a slot has been created.
      
      Author: Michael Banck <michael.banck@credativ.de>
    • Don't recommend "DROP SCHEMA information_schema CASCADE". · 59597e64
      Noah Misch authored
      It drops objects outside information_schema that depend on objects
      inside information_schema.  For example, it will drop a user-defined
      view if the view query refers to information_schema.
      
      Discussion: https://postgr.es/m/20170831025345.GE3963697@rfd.leadboat.com
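
      An example of the kind of dependent object the CASCADE would silently
      drop (view name and query are made up):

          CREATE VIEW my_tables AS
              SELECT table_schema, table_name
              FROM information_schema.tables
              WHERE table_schema NOT IN ('pg_catalog', 'information_schema');
          -- DROP SCHEMA information_schema CASCADE would drop my_tables as well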
  6. 26 Sep, 2017 10 commits
    • Add some more pg_receivewal tests · fa414612
      Peter Eisentraut authored
      Add some more tests for the --create-slot and --drop-slot options,
      verifying that the right kind of slot was created and that the slot was
      dropped.  While working on an unrelated patch for pg_basebackup, some of
      this was temporarily broken without any tests noticing.
    • Turn on log_replication_commands in PostgresNode · 43588f58
      Peter Eisentraut authored
      This is useful for example for the pg_basebackup and related tests.
    • Improve wording of error message added in commit 71480501. · 9a50a93c
      Tom Lane authored
      Per suggestions from Peter Eisentraut and David Johnston.
      Back-patch, like the previous commit.
      
      Discussion: https://postgr.es/m/E1dv9jI-0006oT-Fn@gemulon.postgresql.org
    • Fix failure-to-read-man-page in commit 899bd785. · 5ea96efa
      Tom Lane authored
      posix_fallocate() is not quite a drop-in replacement for fallocate(),
      because it is defined to return the error code as its function result,
      not in "errno".  I (tgl) missed this because RHEL6's version seems
      to set errno as well.  That is not the case on more modern Linuxen,
      though, as per buildfarm results.
      
      Aside from fixing the return-convention confusion, remove the test
      for ENOSYS; we expect that glibc will mask that for posix_fallocate,
      though it does not for fallocate.  Keep the test for EINTR, because
      POSIX specifies that as a possible result, and buildfarm results
      suggest that it can happen in practice.
      
      Back-patch to 9.4, like the previous commit.
      
      Thomas Munro
      
      Discussion: https://postgr.es/m/1002664500.12301802.1471008223422.JavaMail.yahoo@mail.yahoo.com
    • Remove heuristic same-transaction test from check_safe_enum_use(). · 984c9207
      Tom Lane authored
      The blacklist mechanism added by the preceding commit directly fixes
      most of the practical cases that the same-transaction test was meant
      to cover.  What remains is use-cases like
      
      	begin;
      	create type e as enum('x');
      	alter type e add value 'y';
      	-- use 'y' somehow
      	commit;
      
      However, because the same-transaction test is heuristic, it fails on
      small variants of that, such as renaming the type or changing its
      owner.  Rather than try to explain the behavior to users, let's
      remove it and just have a rule that the newly added value can't be
      used before being committed, full stop.  Perhaps later it will be
      worth the implementation effort and overhead to have a more accurate
      test for type-was-created-in-this-transaction.  We'll wait for some
      field experience with v10 before deciding to do that.
      
      Back-patch to v10.
      
      Discussion: https://postgr.es/m/20170922185904.1448.16585@wrigleys.postgresql.org
    • Use a blacklist to distinguish original from add-on enum values. · 1635e80d
      Tom Lane authored
      Commit 15bc038f allowed ALTER TYPE ADD VALUE to be executed inside
      transaction blocks, by disallowing the use of the added value later
      in the same transaction, except under limited circumstances.  However,
      the test for "limited circumstances" was heuristic and could reject
      references to enum values that were created during CREATE TYPE AS ENUM,
      not just later.  This breaks the use-case of restoring pg_dump scripts
      in a single transaction, as reported in bug #14825 from Balazs Szilfai.
      
      We can improve this by keeping a "blacklist" table of enum value OIDs
      created by ALTER TYPE ADD VALUE during the current transaction.  Any
      visible-but-uncommitted value whose OID is not in the blacklist must
      have been created by CREATE TYPE AS ENUM, and can be used safely
      because it could not have a lifespan shorter than its parent enum type.
      
      This change also removes the restriction that a renamed enum value
      can't be used before being committed (unless it was on the blacklist).
      
      Andrew Dunstan, with cosmetic improvements by me.
      Back-patch to v10.
      
      Discussion: https://postgr.es/m/20170922185904.1448.16585@wrigleys.postgresql.org
    • Sort pg_basebackup options better · 15a8010e
      Peter Eisentraut authored
      The --slot option somehow ended up under options controlling the output,
      and some other options were in a nonsensical place or were not moved
      after recent renamings, so tidy all that up a bit.
    • Handle heap rewrites better in logical replication · ab28feae
      Peter Eisentraut authored
      A FOR ALL TABLES publication naturally considers all base tables to be a
      candidate for replication.  This includes transient heaps that are
      created during a table rewrite during DDL.  This causes failures on the
      subscriber side because it will not have a table like pg_temp_16386 to
      receive data (and if it did, it would be the wrong table).
      
      To prevent this problem, we filter out any tables that match this
      naming pattern and match an actual table from FOR ALL TABLES
      publications.  This is only a heuristic, meaning that user tables that
      match that naming could accidentally be omitted.  A more robust solution
      might require an explicit marking of such tables in pg_class somehow.
      Reported-by: yxq <yxq@o2.pl>
      Bug: #14785
      Reviewed-by: Andres Freund <andres@anarazel.de>
      Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
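
      A sketch of the scenario being filtered out (publication, table, and
      column names are invented for the example):

          CREATE PUBLICATION allpub FOR ALL TABLES;
          -- a rewriting DDL creates a transient pg_temp_<oid> heap:
          ALTER TABLE orders ALTER COLUMN total TYPE numeric(12,2);
          -- such transient heaps are no longer treated as publishable tables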
    • Remove lsn from HashScanPosData. · 22c5e735
      Robert Haas authored
      This was intended as infrastructure for weakening VACUUM's locking
      requirements, similar to what was done for btree indexes in commit
      2ed5b87f.  However, for hash indexes,
      it seems that the improvements which are possible are actually
      extremely marginal.  Furthermore, performing the LSN cross-check will
      end up skipping cleanup far more often than is necessary; we only care
      about page modifications due to a VACUUM, but the LSN check will fail
      if ANY modification has occurred.  So, rather than pressing forward
      with that "optimization", just rip the LSN field out.
      
      Patch by me, reviewed by Ashutosh Sharma and Amit Kapila
      
      Discussion: http://postgr.es/m/CAA4eK1JxqqcuC5Un7YLQVhOYSZBS+t=3xqZuEkt5RyquyuxpwQ@mail.gmail.com
    • Fix trivial mistake in README. · 79a4a665
      Robert Haas authored
      You might think I (Robert) could manage to count to five without
      messing it up, but if you did, you would be wrong.
      
      Amit Kapila
      
      Discussion: http://postgr.es/m/CAA4eK1JxqqcuC5Un7YLQVhOYSZBS+t=3xqZuEkt5RyquyuxpwQ@mail.gmail.com