  1. 02 Dec, 2016 3 commits
    • Permit dump/reload of not-too-large >1GB tuples · fa2fa995
      Alvaro Herrera authored
      Our documentation states that our maximum field size is 1 GB, and that
      our maximum row size is 1.6 TB.  However, while this might be attainable
      in theory with enough contortions, it is not workable in practice; for
      starters, pg_dump fails to dump tables containing rows larger than 1 GB,
      even if individual columns are well below the limit; and even if one
      does manage to manufacture a dump file containing a row that large, the
      server refuses to load it anyway.
      
      This commit enables dumping and reloading of such tuples, provided two
      conditions are met:
      
      1. no single column is larger than 1 GB (in output size -- for bytea
         this includes the formatting overhead)
      2. the whole row is not larger than 2 GB
      
      There are three related changes to enable this:
      
      a. StringInfo's API now has two additional functions that allow creating
      a string that grows beyond the typical 1 GB limit (a "long" string); see
      the sketch after this list.  ABI compatibility is maintained.  We still
      limit these strings to 2 GB, though, for reasons explained below.
      
      b. COPY now uses long StringInfos, so that pg_dump doesn't choke
      trying to emit rows longer than 1GB.
      
      c. heap_form_tuple now uses the MCXT_ALLOC_HUGE flag in its allocation
      for the input tuple, which means that large tuples are accepted on
      input.  Note that at this point we do not apply any further limit to the
      input tuple size.
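
      A minimal standalone sketch of the idea behind (a) and (c): lengths stay
      "int", but a per-string flag selects a ~2 GB cap instead of the usual
      ~1 GB one.  The struct, field, and function names below are hypothetical,
      not the actual additions; on the server side, (c) relies on the real
      MemoryContextAllocExtended()/MCXT_ALLOC_HUGE API to accept the large
      input tuple.

          /* standalone sketch; names are hypothetical, not PostgreSQL's */
          #include <stdlib.h>

          #define MAX_NORMAL 0x3FFFFFFF   /* just under 1 GB (MaxAllocSize) */
          #define MAX_LONG   0x7FFFFFFF   /* just under 2 GB (INT32_MAX) */

          typedef struct
          {
              char   *data;
              int     len;        /* "int" lengths are why 2 GB is the ceiling */
              int     maxlen;
              int     long_ok;    /* hypothetical flag: may grow past 1 GB */
          } SketchStringInfo;

          static void
          sketch_init(SketchStringInfo *str, int long_ok)
          {
              str->maxlen = 1024;
              str->data = malloc(str->maxlen);
              str->data[0] = '\0';
              str->len = 0;
              str->long_ok = long_ok;
          }

          /* Grow so that "needed" more bytes fit, respecting the cap. */
          static int
          sketch_enlarge(SketchStringInfo *str, int needed)
          {
              int limit = str->long_ok ? MAX_LONG : MAX_NORMAL;
              int newlen;

              if (needed > limit - str->len - 1)
                  return 0;                   /* would exceed the cap */
              if (str->len + needed + 1 <= str->maxlen)
                  return 1;                   /* already big enough */
              newlen = str->maxlen;
              while (newlen < str->len + needed + 1)
                  newlen = (newlen <= limit / 2) ? newlen * 2 : limit;
              str->data = realloc(str->data, (size_t) newlen);
              if (str->data == NULL)
                  return 0;
              str->maxlen = newlen;
              return 1;
          }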
      
      The main reason to limit to 2 GB is that the FE/BE protocol uses 32 bit
      length words to describe each row; and because the documentation is
      ambiguous on its signedness and libpq does consider it signed, we cannot
      use the highest-order bit.  Additionally, the StringInfo API uses "int"
      (which is 4 bytes wide on most platforms) in many places, so we'd need
      to change that API too in order to improve matters, which has lots of
      fallout.
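
      The arithmetic behind that ceiling, as a trivial standalone check (not
      PostgreSQL code):

          #include <stdint.h>
          #include <stdio.h>

          int main(void)
          {
              /* The FE/BE row-length word is 32 bits and libpq reads it as
               * signed, so the top bit is unusable: the ceiling is 2^31 - 1. */
              printf("%d\n", INT32_MAX);  /* 2147483647 bytes, just under 2 GB */
              return 0;
          }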
      
      Backpatch to 9.5, which is the oldest that has
      MemoryContextAllocExtended, a necessary piece of infrastructure.  We
      could apply this to 9.4 with very minimal additional effort, but any further
      than that would require backpatching "huge" allocations too.
      
      This is the largest set of changes we could find that can be
      back-patched without breaking compatibility with existing systems.
      Fixing a bigger set of problems (for example, dumping tuples bigger than
      2GB, or dumping fields bigger than 1GB) would require changing the FE/BE
      protocol and/or changing the StringInfo API in an ABI-incompatible way,
      neither of which would be back-patchable.
      
      Authors: Daniel Vérité, Álvaro Herrera
      Reviewed by: Tomas Vondra
      Discussion: https://postgr.es/m/20160229183023.GA286012@alvherre.pgsql
    • Refactor libpqwalreceiver · 78c8c814
      Peter Eisentraut authored
      The whole walreceiver API is now wrapped into a struct, like most of our
      other loadable module APIs.  The libpq connection is no longer a global
      variable in libpqwalreceiver.  Instead, it is encapsulated into a struct
      that is passed around the functions.  This allows multiple walreceivers
      to run at the same time.
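
      The pattern, sketched with hypothetical names (the actual struct members
      and signatures in libpqwalreceiver differ): the module exports a table
      of function pointers, and connection state travels through them as an
      opaque struct rather than living in a global.

          #include <stdbool.h>

          /* opaque to callers; defined inside the loadable module */
          typedef struct WalRcvConnSketch WalRcvConnSketch;

          typedef struct
          {
              WalRcvConnSketch *(*connect) (const char *conninfo, bool logical);
              void (*send) (WalRcvConnSketch *conn, const char *buf, int nbytes);
              void (*disconnect) (WalRcvConnSketch *conn);
          } WalRcvApiSketch;

          /* Each implementation fills in one of these.  With no global
           * connection variable, several walreceivers can coexist. */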
      
      Add some rudimentary support for logical replication connections to
      libpqwalreceiver.
      
      These changes are mostly cosmetic and are going to be useful for the
      future logical replication patches.
      
      From: Petr Jelinek <petr@2ndquadrant.com>
    • Use latch instead of select() in walreceiver · 597a87cc
      Peter Eisentraut authored
      Replace use of poll()/select() by WaitLatchOrSocket(), which is more
      portable and flexible.
      
      Also change walreceiver to use its procLatch instead of a custom latch.
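
      Roughly the shape of such a wait loop, sketched against the 9.6-era
      latch API (later releases add a wait-event argument; the surrounding
      logic here is illustrative, not the walreceiver's actual code):

          #include "postgres.h"
          #include "storage/ipc.h"
          #include "storage/latch.h"
          #include "storage/proc.h"

          /* Sketch: wait for socket data, a latch wakeup, or a timeout. */
          static void
          wait_for_input_sketch(pgsocket sock, long timeout_ms)
          {
              int rc = WaitLatchOrSocket(&MyProc->procLatch,
                                         WL_LATCH_SET | WL_SOCKET_READABLE |
                                         WL_TIMEOUT | WL_POSTMASTER_DEATH,
                                         sock, timeout_ms);

              if (rc & WL_POSTMASTER_DEATH)
                  proc_exit(1);
              if (rc & WL_LATCH_SET)
                  ResetLatch(&MyProc->procLatch);
              /* if (rc & WL_SOCKET_READABLE): consume input, then loop */
          }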
      
      From: Petr Jelinek <petr@2ndquadrant.com>
  2. 01 Dec, 2016 7 commits
  3. 30 Nov, 2016 10 commits
    • Improve hash index bucket split behavior. · 6d46f478
      Robert Haas authored
      Previously, the right to split a bucket was represented by a
      heavyweight lock on the page number of the primary bucket page.
      Unfortunately, this meant that every scan needed to take a heavyweight
      lock on that bucket also, which was bad for concurrency.  Instead, use
      a cleanup lock on the primary bucket page to indicate the right to
      begin a split, so that scans only need to retain a pin on that page,
      which they would have to acquire anyway, and which is also much
      cheaper.
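
      In buffer-manager terms, claiming the right to split now looks roughly
      like this (a sketch only; the real hash AM code is considerably more
      involved):

          #include "postgres.h"
          #include "storage/bufmgr.h"

          /* Sketch: a cleanup lock succeeds only when no one else holds a
           * pin, so a scan's mere pin is enough to hold off a split. */
          static bool
          try_begin_split_sketch(Buffer primary_bucket_buf)
          {
              if (!ConditionalLockBufferForCleanup(primary_bucket_buf))
                  return false;       /* someone is scanning; retry later */

              /* ... flag the bucket as being split, then release the lock
               * (keeping the pin) while populating the new bucket ... */
              LockBuffer(primary_bucket_buf, BUFFER_LOCK_UNLOCK);
              return true;
          }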
      
      In addition to reducing the locking cost, this also avoids locking out
      scans and inserts for the entire lifetime of the split: while the new
      bucket is being populated with copies of the appropriate tuples from
      the old bucket, scans and inserts can happen in parallel.  There are
      minor concurrency improvements for vacuum operations as well, though
      the situation there is still far from ideal.
      
      This patch also removes the unworldly assumption that a split will
      never be interrupted.  With the new code, a split is done in a series
      of small steps and the system can pick up where it left off if it is
      interrupted prior to completion.  While this patch does not itself add
      write-ahead logging for hash indexes, it is clearly a necessary first
      step, since one of the things that could interrupt a split is the
      removal of electrical power from the machine performing it.
      
      Amit Kapila.  I wrote the original design on which this patch is
      based, and did a good bit of work on the comments and README through
      multiple rounds of review, but all of the code is Amit's.  Also
      reviewed by Jesper Pedersen, Jeff Janes, and others.
      
      Discussion: http://postgr.es/m/CAA4eK1LfzcZYxLoXS874Ad0+S-ZM60U9bwcyiUZx9mHZ-KCWhw@mail.gmail.com
    • Doc: improve description of trim() and related functions. · 213c0f2d
      Tom Lane authored
      Per bug #14441 from Mark Pether, the documentation could be misread,
      mainly because some of the examples failed to show what happens with
      a multicharacter "characters to trim" string.  Also, while the text
      description in most of these entries was fairly clear that the
      "characters" argument is a set of characters not a substring to match,
      some of them used variant wording that was a bit less clear.
      trim() itself suffered from both deficiencies and was thus pretty
      misinterpretable.
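
      An illustrative query (not taken from the commit): trim(both 'xyz' from
      'yxTomxx') returns 'Tom', because every leading and trailing character
      contained in the set {x, y, z} is stripped; the argument is not matched
      as the literal substring 'xyz'.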
      
      Also fix failure to explain which of LEADING/TRAILING/BOTH is the
      default.
      
      Discussion: https://postgr.es/m/20161130011710.6539.53657@wrigleys.postgresql.org
    • Make all unicode perl scripts use strict, rearrange logic for clarity. · 021d254d
      Heikki Linnakangas authored
      The loops were a bit difficult to understand, due to breaking out of them
      early. Also fix things that perlcritic complained about.
      
      Daniel Gustafsson
    • doc: Remove claim about large shared_buffers on Windows · 81c52728
      Peter Eisentraut authored
      Testing has shown that it is no longer correct.
      
      From: Tsunakawa, Takayuki <tsunakawa.takay@jp.fujitsu.com>
      Reviewed-by: amul sul <sulamul@gmail.com>
      Discussion: http://www.postgresql.org/message-id/flat/0A3221C70F24FB45833433255569204D1F5EE995@G01JPEXMBYT05/
    • doc: Fix typo · 2f0c7ff4
      Peter Eisentraut authored
      From: Tsunakawa, Takayuki <tsunakawa.takay@jp.fujitsu.com>
    • Rewrite the perl scripts to produce our Unicode conversion tables. · 1de9cc0d
      Heikki Linnakangas authored
      Generate EUC_CN mappings from gb-18030-2000.xml, because GB2312.TXT is no
      longer available.
      
      Get UHC from windows-949-2000.xml, as it's more up-to-date.
      
      Plus tons more small changes. With these changes, the perl scripts
      faithfully produce the *.map files we have in the repository, from the
      external source files.
      
      In passing, fix the Makefile to also download CP932.TXT and CP950.TXT.
      
      Based on patches by Kyotaro Horiguchi, reviewed by Daniel Gustafsson.
      
      Discussion: https://postgr.es/m/08e7892a-d55c-eefe-76e6-7910bc8dd1f3@iki.fi
    • Remove leading zeros, for consistency with other map files. · 6c303223
      Heikki Linnakangas authored
      The common style is to pad to 4 digits.
      
      Running the current perl scripts to generate these map files would override
      this change, but the next commit will rewrite the perl scripts to produce
      this style. I'm doing this as a separate commit, to make it more clear what
      non-cosmetic changes the next commit makes to the map files.
      
      Discussion: https://postgr.es/m/08e7892a-d55c-eefe-76e6-7910bc8dd1f3@iki.fi
    • Remove code points < 0x80 from character conversion tables. · 2c09c93c
      Heikki Linnakangas authored
      PostgreSQL treats characters whose leading byte is < 0x80 as plain ASCII,
      and they are not even passed to the conversion routines. There is no point
      in having them in the conversion tables.
      
      Everything in the tables was a direct ASCII-to-ASCII mapping, except for
      two entries:
      * SHIFT_JIS_2004 code point 0x5C (backslash in ASCII) was mapped to the
        Unicode YEN SIGN character.
      * Unicode 0x5C (backslash again) was mapped to "REVERSE SOLIDUS" in
        SHIFT_JIS_2004.
      
      These mappings never had any effect, so there's no functional change from
      removing them.
      
      Discussion: https://postgr.es/m/08e7892a-d55c-eefe-76e6-7910bc8dd1f3@iki.fi
    • Remove dead stuff from pgcrypto. · b2cc748b
      Heikki Linnakangas authored
      pgp-pubkey-DISABLED test has been unused since 2006, when support for
      built-in bignum math was added (commit 1abf76e8). pgp-encrypt-DISABLED has
      been unused forever, AFAICS.
      
      Also remove a couple of unused error codes.
    • Fix bogus handling of JOIN_UNIQUE_OUTER/INNER cases for parallel joins. · 41e2b84c
      Tom Lane authored
      consider_parallel_nestloop passed the wrong jointype down to its
      subroutines for JOIN_UNIQUE_INNER cases (it should pass JOIN_INNER), and it
      thought that it could pass paths other than innerrel->cheapest_total_path
      to create_unique_path, which create_unique_path is not on board with.
      These bugs would lead to assertion failures or other errors, suggesting
      that this code path hasn't been tested much.
      
      hash_inner_and_outer's code for parallel join effectively treated both
      JOIN_UNIQUE_OUTER and JOIN_UNIQUE_INNER the same as JOIN_INNER (for
      different reasons :-(), leading to incorrect plans that treated a semijoin
      as if it were a plain join.
      
      Michael Day submitted a test case demonstrating that hash_inner_and_outer
      failed for JOIN_UNIQUE_OUTER, and I found the other cases through code
      review.
      
      Report: https://postgr.es/m/D0E8A029-D1AC-42E8-979A-5DE4A77E4413@rcmail.com
  4. 29 Nov, 2016 10 commits
  5. 28 Nov, 2016 3 commits
    • Fix get_relation_info name typo'ed in a comment · eb681416
      Alvaro Herrera authored
      Plus add a missing comment about this in get_relation_info itself.
      
      Author: Amit Langote
      Discussion: https://postgr.es/m/e46c0569-0449-afa0-e2fe-f3776e4b3fd5@lab.ntt.co.jp
    • Fix busted tab-completion pattern for ALTER TABLE t ALTER c DROP ... · 404e6675
      Tom Lane authored
      Evidently a thinko in commit 9b181b03.
      
      Kyotaro Horiguchi
    • Code review for early drop of orphaned temp relations in autovacuum. · dafa0848
      Tom Lane authored
      Commit a734fd5d exposed some race conditions that existed previously
      in the autovac code, but were basically harmless because autovac would
      not try to delete orphaned relations immediately.  Specifically, the test
      for orphaned-ness was made on a pg_class tuple that might be dead by now,
      allowing autovac to try to remove a table that the owning backend had just
      finished deleting.  This resulted in a hard crash due to inadequate caution
      about accessing the table's catalog entries without any lock.  We must take
      a relation lock and then recheck whether the table is still present and
      still looks deletable before we do anything.
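
      The safe ordering is the classic lock-then-recheck pattern.  A sketch
      (these lock and syscache calls are real PostgreSQL APIs, but the flow is
      simplified relative to the committed code):

          #include "postgres.h"
          #include "storage/lmgr.h"
          #include "utils/syscache.h"

          /* Sketch: drop the orphaned temp table only if it still exists,
           * and still looks deletable, once we hold a lock on it. */
          static bool
          orphan_still_deletable_sketch(Oid relid)
          {
              HeapTuple   tuple;

              if (!ConditionalLockRelationOid(relid, AccessExclusiveLock))
                  return false;           /* busy; try again next time */

              /* Recheck now that the lock blocks concurrent drops. */
              tuple = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
              if (!HeapTupleIsValid(tuple))
              {
                  UnlockRelationOid(relid, AccessExclusiveLock);
                  return false;           /* already gone */
              }
              /* ... re-verify orphaned-ness from the fresh tuple ... */
              ReleaseSysCache(tuple);
              return true;
          }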
      
      Also, it seemed to me that deleting multiple tables per transaction, and
      trying to continue after errors, represented unjustifiable complexity.
      We do not expect this code path to be taken often in the field, nor even
      during testing, which means that prioritizing performance over correctness
      is a bad tradeoff.  Rip all that out in favor of just starting a new
      transaction after each successful temp table deletion.  If we're unlucky
      enough to get an error, which shouldn't happen anyway now that we're being
      more cautious, let the autovacuum worker fail as it normally would.
      
      In passing, improve the order of operations in the initial scan loop.
      Now that we don't care about whether a temp table is a wraparound hazard,
      there's no need to perform extract_autovac_opts, get_pgstat_tabentry_relid,
      or relation_needs_vacanalyze for temp tables.
      
      Also, if GetTempNamespaceBackendId returns InvalidBackendId (indicating
      it doesn't recognize the schema as temp), treat that as meaning it's NOT
      an orphaned temp table, not that it IS one, which is what happened before
      because BackendIdGetProc necessarily failed.  The case really shouldn't
      come up for a table that has RELPERSISTENCE_TEMP, but the consequences
      if it did seem undesirable.  (This might represent a back-patchable bug
      fix; not sure if it's worth the trouble.)
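
      The corrected interpretation, in sketch form (GetTempNamespaceBackendId
      is the real function; the wrapper is illustrative):

          #include "postgres.h"
          #include "catalog/namespace.h"
          #include "storage/backendid.h"

          /* Sketch: InvalidBackendId means "not recognizably a temp schema",
           * which must be read as "not orphaned", not the reverse. */
          static bool
          is_orphaned_temp_sketch(Oid relnamespace)
          {
              BackendId   backendID = GetTempNamespaceBackendId(relnamespace);

              if (backendID == InvalidBackendId)
                  return false;   /* unrecognized schema: assume not orphaned */

              /* ... otherwise, check whether that backend still owns it ... */
              return true;
          }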
      
      Discussion: https://postgr.es/m/21299.1480272347@sss.pgh.pa.us
  6. 27 Nov, 2016 1 commit
  7. 26 Nov, 2016 2 commits
    • Fix test about ignoring extension dependencies during extension scripts. · 182db070
      Tom Lane authored
      Commit 08dd23ce introduced an exception to the rule that extension member
      objects can only be dropped as part of dropping the whole extension,
      intending to allow such drops while running the extension's own creation or
      update scripts.  However, the exception was only applied at the outermost
      recursion level, because it was modeled on a pre-existing check to ignore
      dependencies on objects listed in pendingObjects.  Bug #14434 from Philippe
      Beaudoin shows that this is inadequate: in some cases we can reach an
      extension member object by recursion from another one.  (The bug concerns
      the serial-sequence case; I'm not sure if there are other cases, but there
      might well be.)
      
      To fix, revert 08dd23ce's changes to findDependentObjects() and instead
      apply the creating_extension exception regardless of stack level.
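
      The shape of the fix, sketched (creating_extension and
      CurrentExtensionObject are the real globals from extension.h; the helper
      itself is illustrative):

          #include "postgres.h"
          #include "catalog/objectaddress.h"
          #include "catalog/pg_extension.h"
          #include "commands/extension.h"

          /* Sketch: may this extension-membership dependency be bypassed?
           * Applying the test at every recursion level, not just the
           * outermost one, is the substance of the fix. */
          static bool
          ignore_extension_dependency_sketch(const ObjectAddress *ext)
          {
              return creating_extension &&
                     ext->classId == ExtensionRelationId &&
                     ext->objectId == CurrentExtensionObject;
          }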
      
      Having seen this example, I'm a bit suspicious that the pendingObjects
      logic is also wrong and such cases should likewise be allowed at any
      recursion level.  However, changing that would interact in subtle ways
      with the recursion logic (at least it would need to be moved to after the
      recursing-from check).  Given that the code's been like that a long time,
      I'll refrain from touching it without a clear example showing it's wrong.
      
      Back-patch to all active branches.  In HEAD and 9.6, where suitable
      test infrastructure exists, add a regression test case based on the
      bug report.
      
      Report: <20161125151448.6529.33039@wrigleys.postgresql.org>
      Discussion: <13224.1480177514@sss.pgh.pa.us>
    • Mark IsPostmasterEnvironment and IsBackgroundWorker as PGDLLIMPORT. · 27327059
      Robert Haas authored
      Per request from Craig Ringer.
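
      The change itself is a one-word addition per variable in miscadmin.h, so
      that extensions built for Windows can reference them (sketch of the
      resulting declarations):

          extern PGDLLIMPORT bool IsPostmasterEnvironment;
          extern PGDLLIMPORT bool IsBackgroundWorker;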
  8. 25 Nov, 2016 4 commits
    • Bring some clarity to the defaults for the xxx_flush_after parameters. · dbdfd114
      Tom Lane authored
      Instead of confusingly stating platform-dependent defaults for these
      parameters in the comments in postgresql.conf.sample (with the main
      entry being a lie on Linux), teach initdb to install the correct
      platform-dependent value in postgresql.conf, similarly to the way
      we handle other platform-dependent defaults.  This won't do anything
      for existing 9.6 installations, but since it's effectively only a
      documentation improvement, that seems OK.
      
      Since this requires initdb to have access to the default values,
      move the #define's for those to pg_config_manual.h; the original
      placement in bufmgr.h is unworkable because that file can't be
      included by frontend programs.
      
      Adjust the default value for wal_writer_flush_after so that it is 1MB
      regardless of XLOG_BLCKSZ, conforming to what is stated in both the
      SGML docs and postgresql.conf.  (We could alternatively make it scale
      with XLOG_BLCKSZ, but I'm not sure I see the point.)
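
      A sketch of what such platform-dependent defaults look like in
      pg_config_manual.h (illustrative macro values; consult the real header
      for the actual numbers):

          /* sketch: flush defaults expressed in blocks, per platform */
          #ifdef HAVE_SYNC_FILE_RANGE             /* effectively: Linux */
          #define DEFAULT_CHECKPOINT_FLUSH_AFTER  ((256 * 1024) / BLCKSZ)
          #else
          #define DEFAULT_CHECKPOINT_FLUSH_AFTER  0   /* disabled */
          #endif

          /* wal_writer_flush_after: 1MB regardless of XLOG_BLCKSZ */
          #define DEFAULT_WAL_WRITER_FLUSH_AFTER  ((1024 * 1024) / XLOG_BLCKSZ)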
      
      Copy-edit related SGML documentation.
      
      Fabien Coelho and Tom Lane, per a gripe from Tomas Vondra.
      
      Discussion: <30ebc6e3-8358-09cf-44a8-578252938424@2ndquadrant.com>
    • Mark a query's topmost Paths parallel-unsafe if they will have initPlans. · ab77a5a4
      Tom Lane authored
      Andreas Seltenreich found another case where we were being too optimistic
      about allowing a plan to be considered parallelizable despite it containing
      initPlans.  It seems like the real issue here is that if we know we are
      going to tack initPlans onto the topmost Plan node for a subquery, we
      had better mark that subquery's result Paths as not-parallel-safe.  That
      fixes this problem and allows reversion of a kluge (added in commit
      7b67a0a4 and extended in f24cf960) to not trust the parallel_safe flag
      at top level.
      
      Discussion: <874m2w4k5d.fsf@ex.ansel.ydns.eu>
    • Check for pending trigger events on far end when dropping an FK constraint. · 4e026b32
      Tom Lane authored
      When dropping a foreign key constraint with ALTER TABLE DROP CONSTRAINT,
      we refuse the drop if there are any pending trigger events on the named
      table; this ensures that we won't remove the pg_trigger row that will be
      consulted by those events.  But we should make the same check for the
      referenced relation, else we might remove a due-to-be-referenced pg_trigger
      row for that relation too, resulting in "could not find trigger NNN" or
      "relation NNN has no triggers" errors at commit.  Per bug #14431 from
      Benjie Gillam.  Back-patch to all supported branches.
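
      AfterTriggerPendingOnRel() is the real test for queued events; a sketch
      of checking both ends before allowing the drop (the error wording is
      illustrative):

          #include "postgres.h"
          #include "commands/trigger.h"

          /* Sketch: refuse DROP CONSTRAINT if either the constrained table
           * or the referenced table has trigger events queued. */
          static void
          check_fk_droppable_sketch(Oid conrelid, Oid confrelid)
          {
              if (AfterTriggerPendingOnRel(conrelid) ||
                  AfterTriggerPendingOnRel(confrelid))
                  ereport(ERROR,
                          (errcode(ERRCODE_OBJECT_IN_USE),
                           errmsg("cannot drop constraint because it is in use")));
          }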
      
      Report: <20161124114911.6530.31200@wrigleys.postgresql.org>
    • Fix typo in comment · 8afb8110
      Magnus Hagander authored
      Thomas Munro