1. 05 Sep, 2018 2 commits
    • PL/Python: Remove use of simple slicing API · f5a6509b
      Peter Eisentraut authored
      The simple slicing API (sq_slice, sq_ass_slice) has been deprecated
      since Python 2.0 and has been removed altogether in Python 3, so remove
      those functions from the PLyResult class.  Instead, the non-slice
      mapping functions mp_subscript and mp_ass_subscript can take slice
      objects as an index.  Since we just pass the index through to the
      underlying list object, we already support that.  Test coverage was
      already in place.
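
      A minimal Python sketch (illustrative names, not PLyResult itself) of
      why the mapping slots suffice: in Python 3 there is no separate slicing
      protocol; __getitem__ and __setitem__ (the mp_subscript and
      mp_ass_subscript slots at the C level) receive slice objects directly,
      so a wrapper that forwards the index to an underlying list supports
      slicing for free.

```python
class ResultWrapper:
    """Wraps a list and forwards subscripting, slices included."""

    def __init__(self, rows):
        self._rows = list(rows)

    def __getitem__(self, index):
        # 'index' may be an int or a slice object; the list handles both.
        return self._rows[index]

    def __setitem__(self, index, value):
        # Slice assignment is likewise forwarded unchanged.
        self._rows[index] = value


r = ResultWrapper([10, 20, 30, 40])
single = r[1]        # plain subscript
part = r[1:3]        # slice object passed straight through
r[0:2] = [1, 2]      # slice assignment also forwarded
```
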
    • docs: improve AT TIME ZONE description · dd6073f2
      Bruce Momjian authored
      The previous description was unclear.  Also add a third example,
      change the use of time zone acronyms to more verbose descriptions, and
      add a mention that using 'time' with AT TIME ZONE uses the current
      time zone rules.
      
      Backpatch-through: 9.3
  2. 04 Sep, 2018 5 commits
    • Improve some error message strings and errcodes · d6e98ebe
      Michael Paquier authored
      This makes a bit less work for translators by unifying error strings
      a bit more with what the rest of the code does, this time for three
      error strings in autoprewarm and one in the base backup code.
      
      Some code review of slot.c also showed that some file-access failures
      were reported with an incorrect internal errcode where a
      corrupted-data errcode makes the most sense, similarly to the previous
      work done in e41d0a10.  Also, a WARNING was reported after calling
      rmtree(), duplicating what the internal call already reports, so make
      the code more consistent with all other code paths calling this
      function.
      
      Author: Michael Paquier
      Discussion: https://postgr.es/m/20180902200747.GC1343@paquier.xyz
    • Fully enforce uniqueness of constraint names. · 17b7c302
      Tom Lane authored
      It's been true for a long time that we expect names of table and domain
      constraints to be unique among the constraints of that table or domain.
      However, the enforcement of that has been pretty haphazard, and it missed
      some corner cases such as creating a CHECK constraint and then an index
      constraint of the same name (as per recent report from André Hänsel).
      Also, due to the lack of an actual unique index enforcing this, duplicates
      could be created through race conditions.
      
      Moreover, the code that searches pg_constraint has been quite inconsistent
      about how to handle duplicate names if one did occur: some places checked
      and threw errors if there was more than one match, while others just
      processed the first match they came to.
      
      To fix, create a unique index on (conrelid, contypid, conname).  Since
      either conrelid or contypid is zero, this will separately enforce
      uniqueness of constraint names among constraints of any one table and any
      one domain.  (If we ever implement SQL assertions, and put them into this
      catalog, more thought might be needed.  But it'd be at least as reasonable
      to put them into a new catalog; having overloaded this one catalog with
      two kinds of constraints was a mistake already IMO.)  This index can replace
      the existing non-unique index on conrelid, though we need to keep the one
      on contypid for query performance reasons.
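
      A hedged Python sketch of the uniqueness rule the new index enforces
      (catalog and error details simplified, names illustrative): the key
      (conrelid, contypid, conname) is unique, and since exactly one of
      conrelid/contypid is nonzero for any row, table constraints and domain
      constraints get separate per-object namespaces; a preliminary check
      yields a friendlier error than a unique-index violation would.

```python
class DuplicateConstraint(Exception):
    """Friendlier than a raw unique-index violation report."""

# Stand-in for pg_constraint with its new unique index on
# (conrelid, contypid, conname); one of the two ids is always zero.
pg_constraint = {}

def add_constraint(conrelid, contypid, conname, row):
    assert (conrelid == 0) != (contypid == 0), "exactly one parent is set"
    key = (conrelid, contypid, conname)
    # Preliminary check, run before the unique index would trip:
    if key in pg_constraint:
        raise DuplicateConstraint(f'constraint "{conname}" already exists')
    pg_constraint[key] = row

add_constraint(16384, 0, "positive_check", "a CHECK on a table")
add_constraint(0, 16500, "positive_check", "same name on a domain: allowed")
```
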
      
      Having done that, we can simplify the logic in various places that either
      coped with duplicates or neglected to, as well as potentially improve
      lookup performance when searching for a constraint by name.
      
      Also, as per our usual practice, install a preliminary check so that you
      get something more friendly than a unique-index violation report in the
      case complained of by André.  And teach ChooseIndexName to avoid choosing
      autogenerated names that would draw such a failure.
      
      While it's not possible to make such a change in the back branches,
      it doesn't seem quite too late to put this into v11, so do so.
      
      Discussion: https://postgr.es/m/0c1001d4428f$0942b430$1bc81c90$@webkr.de
    • Clean up after TAP tests in oid2name and vacuumlo. · f30c6f52
      Tom Lane authored
      Oversights in commits 1aaf532d and bfea331a.  Unlike the case for
      traditional-style REGRESS tests, pgxs.mk doesn't have any builtin support
      for TAP tests, so it doesn't realize it should remove tmp_check/.
      Maybe we should build some actual pgxs infrastructure for TAP tests ...
      but for the moment, just remove explicitly.
    • Prohibit pushing subqueries containing window function calculation to workers. · 14e9b2a7
      Amit Kapila authored
      Allowing window function calculation in workers leads to inconsistent
      results because if the input row ordering is not fully deterministic, the
      output of window functions might vary across workers.  The fix is to treat
      them as parallel-restricted.
      
      In passing, improve the coding pattern in max_parallel_hazard_walker
      so that it has a chain of mutually-exclusive if ... else if ... else
      if ... else if ... IsA tests.
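
      As a sketch of that pattern (node types and return values invented for
      illustration, not the C walker itself): each node type is tested
      exactly once in a mutually-exclusive chain, and window functions are
      treated as parallel-restricted because their output can depend on
      nondeterministic row ordering across workers.

```python
class Node: ...
class WindowFunc(Node): ...          # output depends on input row order
class SubPlan(Node): ...
class FuncExpr(Node):
    def __init__(self, parallel_safe=True):
        self.parallel_safe = parallel_safe

def max_parallel_hazard(node):
    """One mutually-exclusive isinstance test per node type."""
    if isinstance(node, WindowFunc):
        return "restricted"   # compute only in the leader, never in workers
    elif isinstance(node, SubPlan):
        return "restricted"
    elif isinstance(node, FuncExpr):
        return "safe" if node.parallel_safe else "unsafe"
    else:
        return "safe"
```
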
      
      Reported-by: Marko Tiikkaja
      Bug: 15324
      Author: Amit Kapila
      Reviewed-by: Tom Lane
      Backpatch-through: 9.6
      Discussion: https://postgr.es/m/CAL9smLAnfPJCDUUG4ckX2iznj53V7VSMsYefzZieN93YxTNOcw@mail.gmail.com
    • During the split, set checksum on an empty hash index page. · 7c9e19ca
      Amit Kapila authored
      On a split, we allocate a new splitpoint's worth of bucket pages and
      initialize the last page with zeros, which is fine, but we forgot to
      set the checksum for that last page.
      
      We decided to back-patch this fix only through 10 because we don't
      have an easy way to test it in prior versions.  Another reason is that
      the hash-index code was changed heavily in 10, so it is not advisable
      to push the fix to prior versions without testing it there.
      
      Author: Amit Kapila
      Reviewed-by: Yugo Nagata
      Backpatch-through: 10
      Discussion: https://postgr.es/m/5d03686d-727c-dbf8-0064-bf8b97ffe850@2ndquadrant.com
  3. 03 Sep, 2018 2 commits
    • Remove pg_constraint.conincluding · c076f3d7
      Alvaro Herrera authored
      This column was added in commit 8224de4f ("Indexes with INCLUDE
      columns and their support in B-tree") to ease writing the ruleutils.c
      supporting code for that feature, but it turns out to be unnecessary --
      we can do the same thing with just one more syscache lookup.
      
      Even the documentation for the new column being removed in this commit
      is awkward.
      
      Discussion: https://postgr.es/m/20180902165018.33otxftp3olgtu4t@alvherre.pgsql
    • Fix memory leak in TRUNCATE decoding · 4ddd8f5f
      Tomas Vondra authored
      When decoding a TRUNCATE record, the relids array was being allocated
      in the main ReorderBuffer memory context, but it was not released with
      the change, resulting in a memory leak.

      The array was also ignored when serializing/deserializing the change,
      on the assumption that all the information is stored in the change
      itself.  So when spilling the change to disk, only the pointer to the
      relids array was serialized.  Because the array was never released,
      that pointer happened to remain valid even after loading the change
      back into memory, preventing an actual crash.
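
      The failure mode can be sketched in Python (a loose analogy, not the
      reorderbuffer code): spilling must serialize the array's contents;
      stashing only a reference appears to work solely because the referent
      is never freed.

```python
import pickle

relids = [16384, 16385]                 # array allocated with the change

broken_spill = {"relids_ref": relids}   # keeps a reference, not the data
fixed_spill = pickle.dumps({"relids": list(relids)})  # contents on "disk"

relids.clear()                          # analogous to the memory being freed

# The reference now sees emptied storage; the serialized copy is intact.
assert broken_spill["relids_ref"] == []
assert pickle.loads(fixed_spill)["relids"] == [16384, 16385]
```
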
      
      This fixes both the memory leak and the (de)serialization.  The relids
      array is still allocated in the main ReorderBuffer memory context
      (none of the existing ones seems like a good match, and adding an
      extra context seems like overkill).  The allocation is wrapped in a
      new ReorderBuffer API function, to keep the details within
      reorderbuffer.c, just like the other ReorderBufferGet methods do.
      
      Author: Tomas Vondra
      Discussion: https://www.postgresql.org/message-id/flat/66175a41-9342-2845-652f-1bd4c3ee50aa%402ndquadrant.com
      Backpatch: 11, where decoding of TRUNCATE was introduced
  4. 02 Sep, 2018 1 commit
  5. 01 Sep, 2018 6 commits
  6. 31 Aug, 2018 9 commits
    • Fix psql's \dC command to annotate I/O conversion casts as such. · 115bf1e7
      Tom Lane authored
      A cast declared WITH INOUT was described as '(binary coercible)',
      which seems pretty inaccurate; let's print '(with inout)' instead.
      Per complaint from Jean-Pierre Pelletier.
      
      This definitely seems like a bug fix, but given that it's been wrong
      since 8.4 and nobody complained before, I'm hesitant to back-patch a
      behavior change into stable branches.  It doesn't seem too late for
      v11 though.
      
      Discussion: https://postgr.es/m/5b887023.1c69fb81.ff96e.6a1d@mx.google.com
    • Ensure correct minimum consistent point on standbys · c186ba13
      Michael Paquier authored
      Commit 8d68ee6 improved the startup process's calculation of the
      minimum consistent point, ensuring that all available WAL gets
      replayed when doing crash recovery, but it introduced an incorrect
      calculation of the minimum recovery point for non-startup processes.
      This can cause incorrect page references on a standby when, for
      example, the background writer flushed a couple of pages on-disk but
      did not update the control file to let a subsequent crash recovery
      replay to where it should.
      
      The only case where this has been reported to be a problem is when a
      standby needs to calculate the latest removed xid while replaying a
      btree deletion record, which requires connections on the standby
      arriving just after recovery has decided it reached a consistent
      point.  A background worker started after the consistent point is
      reached and connecting to a database would be the easiest way to run
      into the problem.  Clients attempting to connect periodically could
      also hit it, but the odds of seeing this problem are much lower.
      
      The fix used is pretty simple: give non-startup processes access to
      the minimum recovery point written in the control file so that they
      use it as their reference, while the startup process still initializes
      its own reference of the minimum consistent point, so that the
      original problem of incorrect page references appearing post-promotion
      after a crash does not show up.
      
      Reported-by: Alexander Kukushkin
      Diagnosed-by: Alexander Kukushkin
      Author: Michael Paquier
      Reviewed-by: Kyotaro Horiguchi, Alexander Kukushkin
      Discussion: https://postgr.es/m/153492341830.1368.3936905691758473953@wrigleys.postgresql.org
      Backpatch-through: 9.3
    • Code review for pg_verify_checksums.c. · d9c366f9
      Tom Lane authored
      Use postgres_fe.h, since this is frontend code.  Pretend that we've heard
      of project style guidelines for, eg, #include order.  Use BlockNumber not
      int arithmetic for block numbers, to avoid misbehavior with relations
      exceeding 2^31 blocks.  Avoid an unnecessary strict-aliasing warning
      (per report from Michael Banck).  Const-ify assorted stuff.  Avoid
      scribbling on the output of readdir() -- perhaps that's safe in practice,
      but POSIX forbids it, and this code has so far earned exactly zero
      credibility portability-wise.  Editorialize on an ambiguously-worded
      message.
      
      I did not touch the problem of the "buf" local variable being possibly
      insufficiently aligned; that's not specific to this code, and seems like
      it should be fixed as part of a different, larger patch.
      
      Discussion: https://postgr.es/m/1535618100.1286.3.camel@credativ.de
    • Enforce cube dimension limit in all cube construction functions · f919c165
      Alexander Korotkov authored
      contrib/cube has a limit of 100 dimensions for the cube datatype.
      However, it is not enforced everywhere, so one can actually construct
      a cube with more than 100 dimensions and then have trouble with
      dump/restore.  This commit adds checks for the dimension limit in all
      functions responsible for cube construction.
      Backpatch to all supported versions.
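
      A Python sketch of the idea (function names hypothetical, not the
      contrib/cube C code): centralize the limit check and call it from
      every construction path, including ones that can grow an existing
      cube's dimension count.

```python
CUBE_MAX_DIM = 100   # contrib/cube's dimension limit

class CubeTooBig(ValueError):
    pass

def check_dim(n):
    if n > CUBE_MAX_DIM:
        raise CubeTooBig(
            f"a cube cannot have more than {CUBE_MAX_DIM} dimensions")

def cube_from_point(coords):
    check_dim(len(coords))
    # a point cube: lower and upper corners coincide
    return (tuple(coords), tuple(coords))

def cube_enlarge_dims(cube, extra_dims):
    low, high = cube
    check_dim(len(low) + extra_dims)   # enforced here too, not only above
    low += (0.0,) * extra_dims
    high += (0.0,) * extra_dims
    return (low, high)
```
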
      
      Reported-by: Andrew Gierth
      Discussion: https://postgr.es/m/87va7uybt4.fsf%40news-spur.riddles.org.uk
      Author: Andrey Borodin with small additions by me
      Review: Tom Lane
      Backpatch-through: 9.3
    • Split contrib/cube platform-dependent checks into separate test · 38970ce8
      Alexander Korotkov authored
      We're currently maintaining two output files for the cube regression
      test.  But that appears to be unsuitable, because these outputs differ
      only in a few checks involving scientific notation.  So, split the
      checks involving scientific notation into a separate test, making
      contrib/cube easier to maintain.  Backpatch to all supported versions
      in order to make further backpatching easier.
      
      Discussion: https://postgr.es/m/CAPpHfdvJgWjxHsJTtT%2Bo1tz3OR8EFHcLQjhp-d3%2BUcmJLh-fQA%40mail.gmail.com
      Author: Alexander Korotkov
      Backpatch-through: 9.3
    • Make checksum_impl.h safe to compile with -fstrict-aliasing. · 8c62d9d1
      Tom Lane authored
      In general, Postgres requires -fno-strict-aliasing with compilers that
      implement C99 strict aliasing rules.  There's little hope of getting
      rid of that overall.  But it seems like it would be a good idea if
      storage/checksum_impl.h in particular didn't depend on it, because
      that header is explicitly intended to be included by external programs.
      We don't have a lot of control over the compiler switches that an
      external program might use, as shown by Michael Banck's report of
      failure in a privately-modified version of pg_verify_checksums.
      
      Hence, switch to using a union in place of willy-nilly pointer casting
      inside this file.  I think this makes the code a bit more readable
      anyway.
      
      checksum_impl.h hasn't changed since it was introduced in 9.3,
      so back-patch all the way.
      
      Discussion: https://postgr.es/m/1535618100.1286.3.camel@credativ.de
    • Disable support for partitionwise joins in problematic cases. · 7cfdc770
      Etsuro Fujita authored
      Commit f49842d1, which added support for partitionwise joins, built
      the child's tlist by applying adjust_appendrel_attrs() to the
      parent's.  So in the case where the parent's tlist included a
      whole-row Var for the parent, the child's tlist contained a
      ConvertRowtypeExpr.  To cope with that, that commit added code to the
      planner, such as setrefs.c, but some code paths still assumed that the
      tlist for a scan (or join) rel would only include Vars and
      PlaceHolderVars, which was true before that commit, causing errors:
      
      * When creating an explicit sort node for an input path for a mergejoin
        path for a child join, prepare_sort_from_pathkeys() threw the 'could not
        find pathkey item to sort' error.
      * When deparsing a relation participating in a pushed down child join as a
        subquery in contrib/postgres_fdw, get_relation_column_alias_ids() threw
        the 'unexpected expression in subquery output' error.
      * When performing set_plan_references() on a local join plan generated by
        contrib/postgres_fdw for EvalPlanQual support for a pushed down child
        join, fix_join_expr() threw the 'variable not found in subplan target
        lists' error.
      
      To fix these, two approaches have been proposed: one by Ashutosh Bapat
      and one by me.  While the former keeps building the child's tlist with
      a ConvertRowtypeExpr, the latter builds it with a whole-row Var for
      the child so as not to violate the planner assumption, and tries to
      fix it up later.  But both approaches need more work, so instead
      refuse to generate partitionwise join paths when whole-row Vars are
      involved.  We don't need to handle ConvertRowtypeExprs in the child's
      tlists for now, so this commit also removes the changes to the
      planner.
      
      Previously, partitionwise join computed attr_needed data for each child
      separately, and built the child join's tlist using that data, which also
      required an extra step for adding PlaceHolderVars to that tlist, but it
      would be more efficient to build it from the parent join's tlist through
      the adjust_appendrel_attrs() transformation.  So this commit builds that
      list that way, and simplifies build_joinrel_tlist() and placeholder.c as
      well as part of set_append_rel_size() to basically what they were before
      partitionwise join went in.
      
      Back-patch to PG11 where partitionwise join was introduced.
      
      Report by Rajkumar Raghuwanshi.  Analysis by Ashutosh Bapat, who also
      provided some of regression tests.  Patch by me, reviewed by Robert Haas.
      
      Discussion: https://postgr.es/m/CAKcux6ktu-8tefLWtQuuZBYFaZA83vUzuRd7c1YHC-yEWyYFpg@mail.gmail.com
    • Fix pg_verify_checksums on Windows. · bb60f2cb
      Amit Kapila authored
      To verify the checksums, we opened the file in text mode, which
      doesn't work on Windows, as WIN32 treats Control-Z as EOF in files
      opened in text mode.  This leads to a "short read of block .." error
      in some cases.

      Fix it by opening the files in binary mode.
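
      The idea can be demonstrated in Python (a sketch, not the actual
      tool): read data files with "rb", so a 0x1A (Control-Z) byte inside a
      page cannot be taken for end-of-file the way Windows text-mode reads
      take it.

```python
import os
import tempfile

data = b"page-bytes-\x1a-more-page-bytes"   # block containing Control-Z

path = os.path.join(tempfile.mkdtemp(), "segment")
with open(path, "wb") as f:
    f.write(data)

with open(path, "rb") as f:   # binary mode: no Control-Z/EOF surprise
    got = f.read()

assert got == data            # full block read, no "short read" error
```
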
      
      Author: Amit Kapila
      Reviewed-by: Magnus Hagander
      Backpatch-through: 11
      Discussion: https://postgr.es/m/CAA4eK1+LOnzod+h85FGmyjWzXKy-XV1FYwEyP-Tky2WpD5cxwA@mail.gmail.com
    • Etsuro Fujita · 2e39f69b
  7. 30 Aug, 2018 8 commits
  8. 28 Aug, 2018 7 commits
    • Make pg_restore's identify_locking_dependencies() more bulletproof. · e0a0cc28
      Tom Lane authored
      This function had a blacklist of dump object types that it believed
      needed exclusive lock ... but we hadn't maintained that, so that it
      was missing ROW SECURITY, POLICY, and INDEX ATTACH items, all of
      which need (or should be treated as needing) exclusive lock.
      
      Since the same oversight seems likely in future, let's reverse the
      sense of the test so that the code has a whitelist of safe object
      types; better to wrongly assume a command can't be run in parallel
      than the opposite.  Currently the only POST_DATA object type that's
      safe is CREATE INDEX ... and that list hasn't changed in a long time.
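
      The reversed test can be sketched as follows (item-type strings
      illustrative): a whitelist of types known safe to run in parallel,
      with anything unknown conservatively treated as needing exclusive
      lock.

```python
# Only POST_DATA item type currently known to be safe in parallel restore:
PARALLEL_SAFE_TYPES = {"INDEX"}

def needs_exclusive_lock(item_type):
    # Whitelist: new, unlisted item types default to "needs lock", so a
    # forgotten update errs on the side of caution instead of racing.
    return item_type not in PARALLEL_SAFE_TYPES

assert not needs_exclusive_lock("INDEX")
assert needs_exclusive_lock("POLICY")        # missed by the old blacklist
assert needs_exclusive_lock("INDEX ATTACH")  # likewise
```
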
      
      Back-patch to 9.5 where RLS came in.
      
      Discussion: https://postgr.es/m/11450.1535483506@sss.pgh.pa.us
    • Code review for pg_dump's handling of ALTER INDEX ATTACH PARTITION. · 8cff4f53
      Tom Lane authored
      Ensure the TOC entry is marked with the correct schema, so that its
      name is as unique as the index's is.
      
      Fix the dependencies: we want dependencies from this TOC entry to the
      two indexes it depends on, and we don't care (at least not for this
      purpose) what order the indexes are created in.  Also, add dependencies
      on the indexes' underlying tables.  Those might seem pointless given
      the index dependencies, but they are helpful to cue parallel restore
      to avoid running the ATTACH PARTITION in parallel with other DDL on
      the same tables.
      
      Discussion: https://postgr.es/m/10817.1535494963@sss.pgh.pa.us
    • Include contrib modules in the temp installation even without REGRESS. · 42e61c77
      Tom Lane authored
      Now that we have TAP tests, a contrib module may have something useful
      to do in "make check" even if it has no pg_regress-style regression
      scripts, and hence no REGRESS setting.  But the TAP tests will fail,
      or else test the wrong installed files, unless we install the contrib
      module into the temp installation.  So move the bit about adding to
      EXTRA_INSTALL so that it applies regardless.
      
      We might want this in back branches in future, but for the moment
      I only risked adding it to v11.
      
      Discussion: https://postgr.es/m/12438.1535488750@sss.pgh.pa.us
    • postgres_fdw: don't push ORDER BY with no vars (bug #15352) · bf2d0462
      Andrew Gierth authored
      Commit aa09cd24 changed a condition in find_em_expr_for_rel from a
      bms_equal comparison of relids to bms_is_subset, in order to support
      ORDER BY clauses on foreign joins.  But this also lets through the
      degenerate case of expressions with no Vars at all (and hence empty
      relids), including integer constants, which will be parsed
      unexpectedly on the remote (viz. "ERROR: ORDER BY position 0 is not in
      select list" as in the bug report).
      
      Repair by adding an additional !bms_is_empty test.
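
      With Python sets standing in for bitmapsets, the degenerate case and
      the repair look like this (a conceptual sketch, not the C code):

```python
def em_matches_rel(em_relids, rel_relids):
    # bms_is_subset alone is not enough: the empty set is a subset of
    # everything, so a Var-free expression (empty relids) slips through.
    # The fix adds an explicit non-empty (!bms_is_empty) check first.
    return bool(em_relids) and em_relids <= rel_relids

assert em_matches_rel({1, 2}, {1, 2, 3})   # expression over the join's rels
assert not em_matches_rel(set(), {1, 2})   # bare constant: rejected now
```
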
      
      Backpatch through to 9.6 where the aforementioned change was made.
      
      Per bug #15352 from Maksym Boguk; analysis and patch by me.
      
      Discussion: https://postgr.es/m/153518420278.1478.14875560810251994661@wrigleys.postgresql.org
    • Rework option set of vacuumlo · bfea331a
      Michael Paquier authored
      Like oid2name, vacuumlo has been lacking consistency with other
      utilities in its options:
      - Connection options gain long aliases.
      - Document environment variables which can be used: PGHOST, PGPORT and
      PGUSER.

      Documentation and code are reordered to be more consistent.  A basic
      set of TAP tests has been added along the way.
      
      Author: Tatsuro Yamada
      Reviewed-by: Michael Paquier
      Discussion: https://postgr.es/m/c7e7f25c-1747-cd0f-9335-390bc97b2db5@lab.ntt.co.jp
    • Rework option set of oid2name · 1aaf532d
      Michael Paquier authored
      oid2name has made little effort to keep an interface consistent with
      other binary utilities:
      - -H was used instead of -h/--host.  This option is now marked as
      deprecated, but is still accepted for backward compatibility.
      - -P had been removed from the code, but was still documented.
      - All options gain long aliases, making connection options more
      similar to other binaries.
      - Document environment variables which can be used: PGHOST, PGPORT and
      PGUSER.

      A basic set of TAP tests is added along the way, and documentation is
      cleaned up to be more consistent with other utilities.
      
      Author: Tatsuro Yamada
      Reviewed-by: Michael Paquier
      Discussion: https://postgr.es/m/c7e7f25c-1747-cd0f-9335-390bc97b2db5@lab.ntt.co.jp
    • Avoid quadratic slowdown in regexp match/split functions. · c8ea87e4
      Andrew Gierth authored
      regexp_matches, regexp_split_to_table and regexp_split_to_array all
      work by compiling a list of match positions as character offsets (NOT
      byte positions) in the source string.
      
      Formerly, they then used text_substr to extract the matched text; but
      in a multi-byte encoding, that counts the characters in the string,
      and the characters needed to reach the starting byte position, on
      every call. Accordingly, the performance degraded as the product of
      the input string length and the number of match positions, such that
      splitting a string of a few hundred kbytes could take many minutes.
      
      Repair by keeping the wide-character copy of the input string
      available (only in the case where encoding_max_length is not 1) after
      performing the match operation, and extracting substrings from that
      instead. This reduces the complexity to being linear in the number of
      result bytes, discounting the actual regexp match itself (which is not
      affected by this patch).
      
      In passing, remove cleanup using retail pfree() which was obsoleted by
      commit ff428cde (Feb 2008) which made cleanup of SRF multi-call
      contexts automatic. Also increase (to ~134 million) the maximum number
      of matches and provide an error message when it is reached.
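
      The complexity difference can be sketched in Python, with bytes/str
      standing in for the server-encoded string and its wide-character form
      (this is an analogy, not the C implementation): decoding the whole
      prefix per match is quadratic overall, while one kept-around wide copy
      makes each extraction a direct slice.

```python
def extract_matches_quadratic(raw, positions, encoding="utf-8"):
    # re-decodes (i.e. walks) the full string for every match position,
    # so total work grows with len(raw) * len(positions)
    return [raw.decode(encoding)[start:end] for start, end in positions]

def extract_matches_linear(raw, positions, encoding="utf-8"):
    wide = raw.decode(encoding)   # one wide-character copy, reused
    return [wide[start:end] for start, end in positions]

raw = "héllo wörld".encode("utf-8")
positions = [(0, 5), (6, 11)]     # character offsets, not byte offsets
assert extract_matches_linear(raw, positions) == ["héllo", "wörld"]
```
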
      
      Backpatch all the way because this has been wrong forever.
      
      Analysis and patch by me; review by Kaiting Chen.
      
      Discussion: https://postgr.es/m/87pnyn55qh.fsf@news-spur.riddles.org.uk
      
      see also https://postgr.es/m/87lg996g4r.fsf@news-spur.riddles.org.uk