1. 22 Nov, 2016 1 commit
  2. 21 Nov, 2016 4 commits
    • Fix PGLC_localeconv() to handle errors better. · a4930e7c
      Tom Lane authored
      The code was intentionally not very careful about leaking strdup'd
      strings in case of an error.  That was probably forgivable, but it
      also failed to notice strdup() failures, which could lead to subsequent
      null-pointer-dereference crashes, since many callers unsurprisingly
      didn't check for null pointers in the struct lconv fields.  An even
      worse problem is that it could throw error while we were setlocale'd
      to a non-C locale, causing unwanted behavior in subsequent libc calls.
      
      Rewrite to ensure that we cannot throw elog(ERROR) until after we've
      restored the previous locale settings, or at least attempted to.
      (I'm sorely tempted to make restore failure be a FATAL error, but
      will refrain for the moment.)  Having done that, it's not much more
      work to ensure that we clean up strdup'd storage on the way out, too.
      
      This code is substantially the same in all supported branches, so
      back-patch all the way.
      
      Michael Paquier and Tom Lane
      
      Discussion: <CAB7nPqRMbGqa_mesopcn4MPyTs34eqtVEK7ELYxvvV=oqS00YA@mail.gmail.com>
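The ordering constraint in the commit above (restore the previous locale before reporting any error, and clean up partial copies on the way out) can be sketched as follows. This is an illustrative sketch in Python, not PGLC_localeconv()'s actual C code; the function name `get_decimal_point` is invented for the example.

```python
import locale

# Illustrative sketch (not PostgreSQL's actual C code): switch locale,
# collect values while *deferring* any failure, restore the old locale,
# and only then report the error, so no error escapes while a non-C
# locale is still active.
def get_decimal_point():
    saved = locale.setlocale(locale.LC_NUMERIC)    # remember current setting
    failure = None
    result = None
    try:
        locale.setlocale(locale.LC_NUMERIC, "C")
        result = locale.localeconv()["decimal_point"]
    except Exception as exc:
        failure = exc                              # defer, don't raise yet
    finally:
        locale.setlocale(locale.LC_NUMERIC, saved) # restore first...
    if failure is not None:
        raise RuntimeError("localeconv failed") from failure  # ...then report
    return result

print(get_decimal_point())  # "." in the C locale
```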
    • Fix optimization for skipping searches for parallel-query hazards. · 4324ade9
      Tom Lane authored
      Fix thinko in commit da1c9163: even if the original query was free of
      parallel hazards, we might introduce such a hazard by adding PARAM_EXEC
      Param nodes.  Adjust is_parallel_safe() so that it will scan the given
      expression whenever any such nodes have been created.  Per report from
      Andreas Seltenreich.
      
      Discussion: <878tse6yvf.fsf@credativ.de>
    • autovacuum: Drop orphan temp tables more quickly but with more caution. · a734fd5d
      Robert Haas authored
      Previously, we only dropped an orphan temp table when it became old
      enough to threaten wraparound; now, drop it immediately.  The
      only value of waiting is that someone might be able to examine the
      contents of the orphan temp table for forensic purposes, but it's
      pretty difficult to actually do that and few users will wish to do so.
      On the flip side, not performing the drop immediately generates log
      spam and bloats pg_class.
      
      In addition, per a report from Grigory Smolkin, if a temporary schema
      contains a very large number of temporary tables, a backend attempting
      to clear the temporary schema might fail due to lock table exhaustion.
      It's helpful for autovacuum to clean up after such cases, and we don't
      want it to wait for wraparound to threaten before doing so.  To
      prevent autovacuum from failing in the same manner as a backend trying
      to drop an entire temp schema, remove orphan temp tables in batches of
      50, committing after each batch, so that we don't accumulate an
      unbounded number of locks.  If a drop fails, retry other orphan tables
      that need to be dropped up to 10 times before giving up.  With this
      system, if a backend does fail to clean a temporary schema due to
      lock table exhaustion, autovacuum should hopefully put things right
      the next time it processes the database.
      
      Discussion: <CAB7nPqSbYT6dRwsXVgiKmBdL_ARemfDZMPA+RPeC_ge0GK70hA@mail.gmail.com>
      
      Michael Paquier, with a bunch of comment changes by me.
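The batch-and-retry strategy described above (batches of 50, commit after each batch, up to 10 retry passes) can be sketched like this. This is an illustrative Python sketch with invented names (`drop_orphans`, `drop_one`, `commit`), not autovacuum's actual C implementation.

```python
# Illustrative sketch of the strategy described above: drop orphan
# tables in batches of 50, committing after each batch so locks don't
# accumulate without bound, and retry tables whose drop failed up to
# 10 passes before giving up.
BATCH_SIZE = 50
MAX_RETRIES = 10

def drop_orphans(orphans, drop_one, commit):
    remaining = list(orphans)
    for _ in range(MAX_RETRIES):
        failed = []
        for start in range(0, len(remaining), BATCH_SIZE):
            for table in remaining[start:start + BATCH_SIZE]:
                try:
                    drop_one(table)
                except Exception:
                    failed.append(table)   # retry on a later pass
            commit()                       # release this batch's locks
        if not failed:
            return []                      # everything dropped
        remaining = failed
    return remaining                       # still-undroppable tables

# Simulated usage: one table fails once (e.g. a lock conflict), then
# succeeds on the retry pass.
flaky = {"pg_temp_3.t3": 1}
dropped = []
def drop_one(table):
    if flaky.get(table, 0):
        flaky[table] -= 1
        raise RuntimeError("lock conflict")
    dropped.append(table)

leftover = drop_orphans(["pg_temp_3.t%d" % i for i in range(120)],
                        drop_one, lambda: None)
print(len(dropped), leftover)  # all 120 dropped eventually: 120 []
```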
    • Fix test for subplans in force-parallel mode. · f24cf960
      Tom Lane authored
      We mustn't force parallel mode if the query has any subplans, since
      ExecSerializePlan doesn't transmit them to workers.  Testing
      top_plan->initPlan is inadequate because (1) there might be initPlans
      attached to lower plan nodes, and (2) non-initPlan subplans don't
      work either.  There's certainly room for improvement in those
      restrictions, but for the moment that's what we've got.
      
      Amit Kapila, per report from Andreas Seltenreich
      
      Discussion: <8737im6pmh.fsf@credativ.de>
  3. 20 Nov, 2016 1 commit
    • Prevent multicolumn expansion of "foo.*" in an UPDATE source expression. · c5f365f3
      Tom Lane authored
      Because we use transformTargetList() for UPDATE as well as SELECT
      tlists, the code accidentally tried to expand a "*" reference into
      several columns.  This is nonsensical, because the UPDATE syntax
      provides exactly one target column to put the value into.  The
      immediate result was that transformUpdateTargetList() got confused
      and reported "UPDATE target count mismatch --- internal error".
      It seems better to treat such a reference as a plain whole-row
      variable, as it would be in other contexts.  (This could produce
      useful results when the target column is of composite type.)
      
      Fix by tweaking transformTargetList() to perform *-expansion only
      conditionally, depending on its exprKind parameter.
      
      Back-patch to 9.3.  The problem exists further back, but a fix would be
      much more invasive before that, because transformTargetList() wasn't
      told what kind of list it was working on.  Doesn't seem worth the
      trouble given the lack of field reports.  (I only noticed it because
      I was checking the code while trying to improve the documentation about
      how we handle "foo.*".)
      
      Discussion: <4308.1479595330@sss.pgh.pa.us>
  4. 19 Nov, 2016 2 commits
    • Fix latent costing error in create_merge_append_path. · 0832f2db
      Tom Lane authored
      create_merge_append_path should use the path rowcount it just computed,
      not rel->tuples, for costing purposes.  Those numbers should always be
      the same at present, but if we ever support parameterized MergeAppend
      paths (a case this function is otherwise prepared for), the former would
      be right and the latter wrong.
      
      No need for back-patch since the problem is only latent.
      
      Ashutosh Bapat
      
      Discussion: <CAFjFpRek+cLCnTo24youuGtsq4zRphEB8EUUPjDxZjnL4n4HYQ@mail.gmail.com>
    • Code review for GUC serialization/deserialization code. · 13671b4b
      Tom Lane authored
      The serialization code dumped core for a string-valued GUC whose value
      is NULL, which is a legal state.  The infrastructure isn't capable of
      transmitting that state exactly, but fortunately, transmitting an empty
      string instead should be close enough (compare, eg, commit e45e990e).
      
      The code potentially underestimated the space required to format a
      real-valued variable, both because it made an unwarranted assumption that
      %g output would never be longer than %e output, and because it didn't count
      right even for %e format.  In practice this would pretty much always be
      masked by overestimates for other variables, but it's still wrong.
      
      Also fix boundary-case error in read_gucstate, incorrect handling of the
      case where guc_sourcefile is non-NULL but zero length (not clear that can
      happen, but if it did, this code would get totally confused), and
      confusingly useless check for a NULL result from read_gucstate.
      
      Andreas Seltenreich discovered the core dump; other issues noted while
      reading nearby code.  Back-patch to 9.5 where this code was introduced.
      
      Michael Paquier and Tom Lane
      
      Discussion: <871sy78wno.fsf@credativ.de>
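The NULL-handling part of the fix above can be sketched as follows. This is a hedged illustration in Python with an invented function name, not the actual GUC serialization code: the infrastructure can't transmit the NULL state exactly, so an empty string is substituted instead of dereferencing a null pointer.

```python
# Illustrative sketch of the NULL-handling fix described above: a
# string GUC whose value is NULL can't be transmitted exactly, so send
# an empty string instead of crashing on the null pointer.
def serialize_string_guc(value):
    return "" if value is None else value

print(repr(serialize_string_guc(None)),
      repr(serialize_string_guc("iso, mdy")))  # '' 'iso, mdy'
```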
  5. 18 Nov, 2016 2 commits
    • Add pg_sequences view · 67dc4ccb
      Peter Eisentraut authored
      Like pg_tables, pg_views, and others, this view contains information
      about sequences in a way that is independent of the system catalog
      layout but more comprehensive than the information schema.
      
      To help implement the view, add a new internal function
      pg_sequence_last_value() to return the last value of a sequence.  This
      is kept separate from pg_sequence_parameters() to separate querying
      run-time state from catalog-like information.
      Reviewed-by: Andreas Karlsson <andreas@proxel.se>
    • Clean up pg_dump tests, re-enable BLOB testing · 8f91f323
      Stephen Frost authored
      Add a loop to check that each test covers all of the pg_dump runs.  We
      (I) had been a bit sloppy when adding new runs and not making sure to
      mark whether they should be under "like" or "unlike" for each test;
      this loop makes sure that the test system will complain if any are
      forgotten in the future.
      
      The loop also correctly handles the 'catch all' cases, which are used to
      avoid running unnecessary specific checks when a single catch-all can be
      done (eg: a no-acl run should not have any GRANT commands).
      
      Also, re-enable the testing of blobs, but use lo_from_bytea() instead of
      trying to be cute and writing out to a file and then reading it back in
      with psql, which proved to be difficult for some buildfarm members.
      This allows us to add support for testing the --no-blobs option which
      will be getting added shortly, provided the buildfarm doesn't blow up on
      this.
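The coverage loop described above can be sketched like this. Names and data are invented for illustration; this is not the actual Perl test harness in src/bin/pg_dump.

```python
# Illustrative sketch of the coverage check described above: every test
# must classify each pg_dump run as either "like" or "unlike"; the loop
# reports any run a test forgot to mention.
runs = {"defaults", "clean", "no_acl"}

tests = {
    "CREATE TABLE test_table": {"like": {"defaults", "clean"},
                                "unlike": {"no_acl"}},
    "GRANT SELECT":            {"like": {"defaults"},
                                "unlike": {"no_acl"}},  # forgot "clean"
}

def find_uncovered(tests, runs):
    missing = {}
    for name, spec in tests.items():
        covered = spec.get("like", set()) | spec.get("unlike", set())
        if runs - covered:
            missing[name] = sorted(runs - covered)
    return missing

print(find_uncovered(tests, runs))  # {'GRANT SELECT': ['clean']}
```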
  6. 17 Nov, 2016 5 commits
    • Remove or reduce verbosity of some debug messages. · a43f1939
      Robert Haas authored
      The debug messages that merely print StartTransactionCommand,
      CommitTransactionCommand, ProcessUtility, or ProcessQuery with no
      additional details seem to be useless.  Get rid of them.
      
      The transaction status messages produced by ShowTransactionState are
      occasionally useful, but they are extremely verbose, producing
      multiple lines of log output every time they fire, which can happen
      multiple times per transaction.  So, reduce the level to DEBUG5; avoid
      emitting an extra line just to explain which debug point is at issue;
      and tighten up the rest of the message so it doesn't use quite so much
      horizontal space.
      
      With these changes, it's possible to run a somewhat busy system with a
      log level even as high as DEBUG4, whereas previously anything above
      DEBUG2 would flood the log with output that probably wasn't really all
      that useful.
    • Fix pg_dump's handling of circular dependencies in views. · d8c05aff
      Tom Lane authored
      pg_dump's traditional solution for breaking a circular dependency involving
      a view was to create the view with CREATE TABLE and then later issue CREATE
      RULE "_RETURN" ... to convert the table to a view, relying on the backend's
      very very ancient code that supports making views that way.  We've wanted
      to get rid of that kluge for a long time, but the thing that finally
      motivates doing something about it is the recognition that this method
      fails with the --clean option, because it leads to issuing DROP RULE
      "_RETURN" followed by DROP TABLE --- and the backend won't let you drop a
      view's _RETURN rule.
      
      Instead, let's break circular dependencies by initially creating the view
      using CREATE VIEW AS SELECT NULL::columntype AS columnname, ... (so that
      it has the right column names and types to support external references,
      but no dependencies beyond the column data types), and then later dumping
      the ON SELECT rule using the spelling CREATE OR REPLACE VIEW.  This method
      wasn't available when this code was originally written, but it's been
      possible since PG 7.3, so it seems fine to start relying on it now.
      
      To solve the --clean problem, make the dropStmt for an ON SELECT rule
      be CREATE OR REPLACE VIEW with the same dummy target list as above.
      In this way, during the DROP phase, we first reduce the view to have
      no extra dependencies, and then we can drop it entirely when we've
      gotten rid of whatever had a circular dependency on it.
      
      (Note: this should work adequately well with the --if-exists option, since
      the CREATE OR REPLACE VIEW will go through whether the view exists or not.
      It could fail if the view exists with a conflicting column set, but we
      don't really support --clean against a non-matching database anyway.)
      
      This allows cleaning up some other kluges inside pg_dump, notably that
      we don't need a notion of reloptions attached to a rule anymore.
      
      Although this is a bug fix, commit to HEAD only for now.  The problem's
      existed for a long time and we've had relatively few complaints, so it
      doesn't really seem worth taking risks to fix it in the back branches.
      We might revisit that choice if no problems emerge.
      
      Discussion: <19092.1479325184@sss.pgh.pa.us>
    • Improve pg_dump/pg_restore --create --if-exists logic. · ac888986
      Tom Lane authored
      Teach it not to complain if the dropStmt attached to an archive entry
      is actually spelled CREATE OR REPLACE VIEW, since that will happen due to
      an upcoming bug fix.  Also, if it doesn't recognize a dropStmt, have it
      print a WARNING and then emit the dropStmt unmodified.  That seems like a
      much saner behavior than Assert'ing or dumping core due to a null-pointer
      dereference, which is what would happen before :-(.
      
      Back-patch to 9.4 where this option was introduced.
      
      Discussion: <19092.1479325184@sss.pgh.pa.us>
    • Re-pgindent src/bin/pg_dump/* · fcf70e0d
      Tom Lane authored
      Cleanup for recent patches --- it's not much change, but I got annoyed
      while re-indenting the view-rule fix I'm working on.
    • Avoid pin scan for replay of XLOG_BTREE_VACUUM in all cases · f65b94f6
      Alvaro Herrera authored
      Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to
      require complex interlocking that matched the requirements on the
      master. This required an O(N) operation that became a significant
      problem with large indexes, causing replication delays of seconds or in
      some cases minutes while the XLOG_BTREE_VACUUM was replayed.
      
      This commit skips the “pin scan” that was previously required, by
      observing in detail when and how it is safe to do so, with full
      documentation. The pin scan is skipped only in replay; the VACUUM code
      path on master is not touched here.
      
      No tests included.  Manual tests using an additional patch to view WAL
      records and their timing have shown that the change in WAL records and
      their handling successfully reduces replication delay.
      
      This is a back-patch of commits 687f2cd7, 3e4b7d87, b6028426
      by Simon Riggs, to branches 9.4 and 9.5.  No further backpatch is
      possible because this depends on catalog scans being MVCC.  I (Álvaro)
      additionally updated a slight problem in the README, which explains why
      this touches the 9.6 and master branches.
  7. 16 Nov, 2016 2 commits
  8. 15 Nov, 2016 7 commits
    • Check that result tupdesc has exactly 1 column in return_next scalar case. · 4ecd1974
      Tom Lane authored
      This should always be true, but since we're relying on a tuple descriptor
      passed from outside pltcl itself, let's check.  Per a gripe from Coverity.
    • Reserve zero as an invalid DSM handle. · b40b4dd9
      Robert Haas authored
      Previously, the handle for the control segment could not be zero, but
      some other DSM segment could potentially have a handle value of zero.
      However, that means that if someone wanted to store a dsm_handle that
      might or might not be valid, they would need a separate boolean to
      keep track of whether the associated value is legal.  That's annoying,
      so change things so that no DSM segment can ever have a handle of 0 -
      or as we call it here, DSM_HANDLE_INVALID.
      
      Thomas Munro.  This was submitted as part of a much larger patch to
      add a malloc-like allocator for dynamic shared memory, but this part
      seems like a good idea independently of the rest of the patch.
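The sentinel idea described above can be sketched as follows. This is an illustrative Python sketch, not the actual DSM code; only the constant name DSM_HANDLE_INVALID comes from the commit, the rest is invented.

```python
import itertools

# Illustrative sketch of the sentinel idea described above: with 0
# reserved as DSM_HANDLE_INVALID, a bare handle can mean "no segment"
# without a separate boolean flag alongside it.
DSM_HANDLE_INVALID = 0
_next_handle = itertools.count(1)       # valid handles start at 1

def dsm_create():
    return next(_next_handle)           # never hands out 0

def dsm_handle_is_valid(handle):
    return handle != DSM_HANDLE_INVALID

h = dsm_create()
print(dsm_handle_is_valid(h), dsm_handle_is_valid(DSM_HANDLE_INVALID))  # True False
```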
    • Allow DOS-style line endings in ~/.pgpass files. · 0a748193
      Tom Lane authored
      On Windows, libc will mask \r\n line endings for us, since we read the
      password file in text mode.  But that doesn't happen on Unix.  People
      who share password files across both systems might have \r\n line endings
      in a file they use on Unix, so as a convenience, ignore trailing \r.
      Per gripe from Josh Berkus.
      
      In passing, put the existing check for empty line somewhere where it's
      actually useful, ie after stripping the newline not before.
      
      Vik Fearing, adjusted a bit by me
      
      Discussion: <0de37763-5843-b2cc-855e-5d0e5df25807@agliodbs.com>
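The parsing fix described above, including moving the empty-line check after newline stripping, can be sketched like this. The function name is invented; this is an illustration, not libpq's actual C code.

```python
# Illustrative sketch of the fix described above: strip a trailing \n,
# then a trailing \r left over from DOS-style line endings, and only
# then test whether the line is empty.
def clean_pgpass_line(raw):
    line = raw
    if line.endswith("\n"):
        line = line[:-1]
    if line.endswith("\r"):             # tolerate files edited on Windows
        line = line[:-1]
    return line

for raw in ["localhost:5432:mydb:me:secret\r\n", "\r\n"]:
    line = clean_pgpass_line(raw)
    if line == "":                      # empty-line check *after* stripping
        print("skipped empty line")
    else:
        print(line)
```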
    • Account for catalog snapshot in PGXACT->xmin updates. · ffaa44cb
      Tom Lane authored
      The CatalogSnapshot was not plugged into SnapshotResetXmin()'s accounting
      for whether MyPgXact->xmin could be cleared or advanced.  In normal
      transactions this was masked by the fact that the transaction snapshot
      would be older, but during backend startup and certain utility commands
      it was possible to re-use the CatalogSnapshot after MyPgXact->xmin had
      been cleared, meaning that recently-deleted rows could be pruned even
      though this snapshot could still see them, causing unexpected catalog
      lookup failures.  This effect appears to be the explanation for a recent
      failure on buildfarm member piculet.
      
      To fix, add the CatalogSnapshot to the RegisteredSnapshots heap whenever
      it is valid.
      
      In the previous logic, it was possible for the CatalogSnapshot to remain
      valid across waits for client input, but with this change that would mean
      it delays advance of global xmin in cases where it did not before.  To
      avoid possibly causing new table-bloat problems with clients that sit idle
      for long intervals, add code to invalidate the CatalogSnapshot before
      waiting for client input.  (When the backend is busy, it's unlikely that
      the CatalogSnapshot would be the oldest snap for very long, so we don't
      worry about forcing early invalidation of it otherwise.)
      
      In passing, remove the CatalogSnapshotStale flag in favor of using
      "CatalogSnapshot != NULL" to represent validity, as we do for the other
      special snapshots in snapmgr.c.  And improve some obsolete comments.
      
      No regression test because I don't know a deterministic way to cause this
      failure.  But the stress test shown in the original discussion provokes
      "cache lookup failed for relation 1255" within a few dozen seconds for me.
      
      Back-patch to 9.4 where MVCC catalog scans were introduced.  (Note: it's
      quite easy to produce similar failures with the same test case in branches
      before 9.4.  But MVCC catalog scans were supposed to fix that.)
      
      Discussion: <16447.1478818294@sss.pgh.pa.us>
    • Limit the number of tapes used for a sort to 501. · fc19c180
      Robert Haas authored
      Gigantic numbers of tapes don't work out well.
      
      Original patch by Peter Geoghegan; comments entirely rewritten by me.
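The cap described above amounts to a simple clamp; sketched here in Python with an invented function name (the 501 limit is the only detail taken from the commit).

```python
# Illustrative sketch of the cap described above: clamp the merge-tape
# count so pathological settings can't request a gigantic number of
# tapes, which "don't work out well".
MAX_TAPES = 501

def choose_tapes(requested):
    return min(requested, MAX_TAPES)

print(choose_tapes(7), choose_tapes(100000))  # 7 501
```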
    • Fix broken statement in UCS_to_most.pl. · 00c6d807
      Robert Haas authored
      This has been wrong for a very long time, and it's puzzling to me how
      it ever worked for anyone.
      
      Kyotaro Horiguchi
    • pgbench: Increase maximum size of log filename from 64 to MAXPGPATH. · 56eba9b8
      Robert Haas authored
      Commit 41124a91 allowed the
      transaction log file prefix to be changed but left in place the
      existing 64-character limit on the total length of a log file name.
      It's possible that could be inconvenient for somebody, so increase the
      limit to MAXPGPATH, which ought to be enough for anybody.
      
      Per a suggestion from Tom Lane.
  9. 14 Nov, 2016 6 commits
  10. 13 Nov, 2016 2 commits
  11. 12 Nov, 2016 1 commit
    • Add minimal set of regression tests for pg_stat_statements. · 9be244db
      Andres Freund authored
      While the set of covered functionality is fairly small, the added tests
      are still useful not only to get some basic buildfarm testing of
      pg_stat_statements itself, but also to exercise the lwlock tranche code
      on the buildfarm.
      
      Author: Amit Kapila, slightly editorialized by me
      Reviewed-By: Ashutosh Sharma, Andres Freund
      Discussion: <CAA4eK1JOjkdXYtHxh=2aDK4VgDtN-LNGKY_YqX0N=YEvuzQVWg@mail.gmail.com>
  12. 11 Nov, 2016 1 commit
  13. 10 Nov, 2016 5 commits
    • Cleanup of rewriter and planner handling of Query.hasRowSecurity flag. · 24aef338
      Tom Lane authored
      Be sure to pull up the subquery's hasRowSecurity flag when flattening a
      subquery in pull_up_simple_subquery().  This isn't a bug today because
      we don't look at the hasRowSecurity flag during planning, but it could
      easily be a bug tomorrow.
      
      Likewise, make rewriteRuleAction() pull up the hasRowSecurity flag when
      absorbing RTEs from a rule action.  This isn't a bug either, for the
      opposite reason: the flag should never be set yet.  But again, it seems
      like good future proofing.
      
      Add a comment explaining why rewriteTargetView() should *not* set
      hasRowSecurity when adding stuff to securityQuals.
      
      Improve some nearby comments about securityQuals processing, and document
      that field more completely in parsenodes.h.
      
      Patch by me, analysis by Dean Rasheed.
      
      Discussion: <CAEZATCXZ8tb2DV6f=bkhsMV6u_gRcZ0CZBw2J-qU84RxSukZog@mail.gmail.com>
    • Re-allow user_catalog_table option for materialized views. · 530f8065
      Tom Lane authored
      The reloptions stuff allows this option to be set on a matview.
      While it's questionable whether that is useful or was really intended,
      it does work, and we shouldn't change that in minor releases.  Commit
      e3e66d8a disabled the option since I didn't realize that it was
      possible for it to be set on a matview.  Tweak the test to re-allow it.
      
      Discussion: <19749.1478711862@sss.pgh.pa.us>
    • Support "COPY view FROM" for views with INSTEAD OF INSERT triggers. · 279c439c
      Tom Lane authored
      We just pass the data to the INSTEAD trigger.
      
      Haribabu Kommi, reviewed by Dilip Kumar
      
      Patch: <CAJrrPGcSQkrNkO+4PhLm4B8UQQQmU9YVUuqmtgM=pmzMfxWaWQ@mail.gmail.com>
    • Fix partial aggregation for the case of a degenerate GROUP BY clause. · e1b449be
      Tom Lane authored
      The plan generated for sorted partial aggregation with "GROUP BY constant"
      included a Sort node with no sort keys, which the executor does not like.
      
      Per report from Steve Randall.  I'd add a regression test case if I could
      think of a compact one, but it doesn't seem worth expending lots of cycles
      on.
      
      Report: <CABVd52UAdGXpg_rCk46egpNKYdXOzCjuJ1zG26E2xBe_8bj+Fg@mail.gmail.com>
    • Doc: improve link. · 0b1b5033
      Tom Lane authored
      Discussion: <5019.1478790246@sss.pgh.pa.us>
  14. 09 Nov, 2016 1 commit