1. 13 Dec, 2017 8 commits
  2. 12 Dec, 2017 2 commits
  3. 11 Dec, 2017 3 commits
  4. 10 Dec, 2017 1 commit
    • Stabilize output of new regression test case. · 9edc97b7
      Tom Lane authored
      The test added by commit 390d5813 turns out to have different output
      in CLOBBER_CACHE_ALWAYS builds: there's an extra CONTEXT line in the
      error message as a result of detecting the error at a different place.
      Possibly we should do something to make that more consistent.  But as
      a stopgap measure to make the buildfarm green again, adjust the test
      to suppress CONTEXT entirely.  We can revert this if we do something
      in the backend to eliminate the inconsistency.
      
      Discussion: https://postgr.es/m/31545.1512924904@sss.pgh.pa.us
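      (Illustrative aside, not part of the commit: the customary way a psql-based regression test hides CONTEXT lines, along with DETAIL and HINT, is psql's VERBOSITY variable. A minimal sketch of that kind of adjustment, with a placeholder statement:)

        \set VERBOSITY terse
        -- the failing statement now reports only the primary error message,
        -- so a CONTEXT line that varies between builds cannot show up
        SELECT error_provoking_statement();   -- placeholder for the test query
        \set VERBOSITY default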
  5. 09 Dec, 2017 5 commits
    • Fix plpgsql to reinitialize record variables at block re-entry. · 390d5813
      Tom Lane authored
      If one exits and re-enters a DECLARE ... BEGIN ... END block within a
      single execution of a plpgsql function, perhaps due to a surrounding loop,
      the declared variables are supposed to get re-initialized to null (or
      whatever their initializer is).  But this failed to happen for variables
      of type "record", because while exec_stmt_block() expected such variables
      to be included in the block's initvarnos list, plpgsql_add_initdatums()
      only adds DTYPE_VAR variables to that list.  This bug appears to have
      been there since the aboriginal addition of plpgsql to our tree.
      
      Fix by teaching plpgsql_add_initdatums() to include DTYPE_REC variables
      as well.  (We don't need to consider other DTYPEs because they don't
      represent separately-stored values.)  I failed to resist the temptation
      to make some nearby cosmetic adjustments, too.
      
      No back-patch, because there have not been field complaints, and it
      seems possible that somewhere out there someone has code depending
      on the incorrect behavior.  In any case this change would have no
      impact on correctly-written code.
      
      Discussion: https://postgr.es/m/22994.1512800671@sss.pgh.pa.us
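      (Illustrative aside, not part of the commit: a minimal PL/pgSQL sketch of the behavior in question. With the fix, the record variable no longer carries over its value from the previous pass through the block; the column and variable names here are made up.)

        do $$
        begin
          for i in 1..2 loop
            declare
              r record;          -- should start out unassigned on every entry
            begin
              if i = 1 then
                select 42 as x into r;
              end if;
              begin
                raise notice 'iteration %: r.x = %', i, r.x;
              exception when others then
                -- with the fix, iteration 2 ends up here, because r was
                -- reinitialized instead of retaining iteration 1's row
                raise notice 'iteration %: r is not assigned', i;
              end;
            end;
          end loop;
        end;
        $$;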
    • Fix regression test output · ce1468d0
      Magnus Hagander authored
      Missed this in the last commit.
    • Fix typo · d8f632ca
      Magnus Hagander authored
      Reported by Robins Tharakan
    • MSVC 2012+: Permit linking to 32-bit, MinGW-built libraries. · 7e0c574e
      Noah Misch authored
      Notably, this permits linking to the 32-bit Perl binaries advertised on
      perl.org, namely Strawberry Perl and ActivePerl.  This has a side effect
      of permitting linking to binaries built with obsolete MSVC versions.
      
      By default, MSVC 2012 and later require a "safe exception handler table"
      in each binary.  MinGW-built, 32-bit DLLs lack the relevant exception
      handler metadata, so linking to them failed with error LNK2026.  Restore
      the semantics of MSVC 2010, which omits the table from a given binary if
      some linker input lacks metadata.  This has no effect on 64-bit builds
      or on MSVC 2010 and earlier.  Back-patch to 9.3 (all supported
      versions).
      
      Reported by Victor Wagner.
      
      Discussion: https://postgr.es/m/20160326154321.7754ab8f@wagner.wagner.home
    • MSVC: Test whether 32-bit Perl needs -D_USE_32BIT_TIME_T. · 65a00f30
      Noah Misch authored
      Commits 5a5c2fec and
      b5178c5d introduced support for modern
      MSVC-built, 32-bit Perl, but they broke use of MinGW-built, 32-bit Perl
      distributions like Strawberry Perl and modern ActivePerl.  Perl has no
      robust means to report whether it expects a -D_USE_32BIT_TIME_T ABI, so
      test this.  Back-patch to 9.3 (all supported versions).
      
      The chief alternative was a heuristic of adding -D_USE_32BIT_TIME_T when
      $Config{gccversion} is nonempty.  That banks on every gcc-built Perl
      using the same ABI.  gcc could change its default ABI the way MSVC once
      did, and one could build Perl with gcc and the non-default ABI.
      
      The GNU make build system could benefit from a similar test, without
      which it does not support MSVC-built Perl.  For now, just add a comment.
      Most users taking the special step of building Perl with MSVC probably
      build PostgreSQL with MSVC.
      
      Discussion: https://postgr.es/m/20171130041441.GA3161526@rfd.leadboat.com
  6. 08 Dec, 2017 4 commits
  7. 07 Dec, 2017 1 commit
  8. 06 Dec, 2017 4 commits
    • Report failure to start a background worker. · 28724fd9
      Robert Haas authored
      When a worker is flagged as BGW_NEVER_RESTART and we fail to start it,
      or if it is not marked BGW_NEVER_RESTART but is terminated before
      startup succeeds, what BgwHandleStatus should be reported?  The
      previous code really hadn't considered this possibility (as indicated
      by the comments which ignore it completely) and would typically return
      BGWH_NOT_YET_STARTED, but that's not a good answer, because then
      there's no way for code using GetBackgroundWorkerPid() to tell the
      difference between a worker that has not started but will start
      later and a worker that has not started and will never be started.
      So, when this case happens, return BGWH_STOPPED instead.  Update the
      comments to reflect this.
      
      The preceding fix by itself is insufficient to fix the problem,
      because the old code also didn't send a notification to the process
      identified in bgw_notify_pid when startup failed.  That might've
      been technically correct under the theory that the status of the
      worker was BGWH_NOT_YET_STARTED, because the status would indeed not
      change when the worker failed to start, but now that we're more
      usefully reporting BGWH_STOPPED, a notification is needed.
      
      Without these fixes, code which starts background workers and then
      uses the recommended APIs to wait for those background workers to
      start would hang indefinitely if the postmaster failed to fork a
      worker.
      
      Amit Kapila and Robert Haas
      
      Discussion: http://postgr.es/m/CAA4eK1KDfKkvrjxsKJi3WPyceVi3dH1VCkbTJji2fuwKuB=3uw@mail.gmail.com
    • Fix Parallel Append crash. · 9c64ddd4
      Robert Haas authored
      Reported by Tom Lane and the buildfarm.
      
      Amul Sul and Amit Khandekar
      
      Discussion: http://postgr.es/m/17868.1512519318@sss.pgh.pa.us
      Discussion: http://postgr.es/m/CAJ3gD9cJQ4d-XhmZ6BqM9rMM2KDBfpkdgOAb4+psz56uBuMQ_A@mail.gmail.com
    • Adjust regression test cases added by commit ab727167. · 979a36c3
      Tom Lane authored
      I suppose it is a copy-and-paste error that this test doesn't actually
      test the "Parallel Append with both partial and non-partial subplans"
      case (EXPLAIN alone surely doesn't qualify as a test of executor
      behavior).  Fix that.
      
      Also, add cosmetic aliases to make it possible to tell apart these
      otherwise-identical test cases in log_statement output.
    • doc: Flex is not a GNU package · 51cff91c
      Peter Eisentraut authored
      Remove the designation that Flex is a GNU package.  Even though Bison is
      a GNU package, leave out the designation to not make the sentence
      unnecessarily complicated.
      
      Author: Pavan Maddamsetti <pavan.maddamsetti@gmail.com>
  9. 05 Dec, 2017 11 commits
    • Fix broken markup. · 7404704a
      Tom Lane authored
    • Support Parallel Append plan nodes. · ab727167
      Robert Haas authored
      When we create an Append node, we can spread out the workers over the
      subplans instead of piling on to each subplan one at a time, which
      should typically be a bit more efficient, both because the startup
      cost of any plan executed entirely by one worker is paid only once and
      also because of reduced contention.  We can also construct Append
      plans using a mix of partial and non-partial subplans, which may allow
      for parallelism in places that otherwise couldn't support it.
      Unfortunately, this patch doesn't handle the important case of
      parallelizing UNION ALL by running each branch in a separate worker;
      the executor infrastructure is added here, but more planner work is
      needed.
      
      Amit Khandekar, Robert Haas, Amul Sul, reviewed and tested by
      Ashutosh Bapat, Amit Langote, Rafia Sabih, Amit Kapila, and
      Rajkumar Raghuwanshi.
      
      Discussion: http://postgr.es/m/CAJ3gD9dy0K_E8r727heqXoBmWZ83HwLFwdcaSSmBQ1+S+vRuUQ@mail.gmail.com
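      (Illustrative aside, not part of the commit: a rough SQL sketch of the sort of plan this enables. The table name is hypothetical, and whether the planner actually picks Parallel Append depends on partition sizes, costing, and settings.)

        -- settings that influence the choice (shown only for illustration)
        SET enable_parallel_append = on;
        SET max_parallel_workers_per_gather = 4;

        EXPLAIN (COSTS OFF)
        SELECT count(*) FROM measurements;   -- hypothetical partitioned table
        -- a possible shape: Finalize Aggregate -> Gather -> Partial Aggregate
        --                   -> Parallel Append -> one scan per partition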
    • doc: Update memory requirements for FOP · 8097d189
      Peter Eisentraut authored
      Reported-by: Dave Page <dpage@pgadmin.org>
    • Fix accumulation of parallel worker instrumentation. · 2c09a5c1
      Robert Haas authored
      When a Gather or Gather Merge node is started and stopped multiple
      times, the old code wouldn't reset the shared state between executions,
      potentially resulting in dramatically inflated instrumentation data
      for nodes beneath it.  (The per-worker instrumentation ended up OK,
      I think, but the overall totals were inflated.)
      
      Report by hubert depesz lubaczewski.  Analysis and fix by Amit Kapila,
      reviewed and tweaked a bit by me.
      
      Discussion: http://postgr.es/m/20171127175631.GA405@depesz.com
    • Fix EXPLAIN ANALYZE of hash join when the leader doesn't participate. · 5bcf389e
      Andres Freund authored
      If a hash join appears in a parallel query, there may be no hash table
      available for explain.c to inspect even though a hash table may have
      been built in other processes.  This could happen either because
      parallel_leader_participation was set to off or because the leader
      happened to hit the end of the outer relation immediately (even though
      the complete relation is not empty) and decided not to build the hash
      table.
      
      Commit bf11e7ee introduced a way for workers to exchange
      instrumentation via the DSM segment for Sort nodes even though they
      are not parallel-aware.  This commit does the same for Hash nodes, so
      that explain.c has a way to find instrumentation data from an
      arbitrary participant that actually built the hash table.
      
      Author: Thomas Munro
      Reviewed-By: Andres Freund
      Discussion: https://postgr.es/m/CAEepm%3D3DUQC2-z252N55eOcZBer6DPdM%3DFzrxH9dZc5vYLsjaA%40mail.gmail.com
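      (Illustrative aside, not part of the commit: a sketch of the scenario described above, with hypothetical table names. With the leader opting out, any hash table is built only in workers, and EXPLAIN has to fetch the Hash node's numbers from one of them.)

        SET parallel_leader_participation = off;
        SET max_parallel_workers_per_gather = 2;

        EXPLAIN (ANALYZE, COSTS OFF)
        SELECT count(*)
        FROM big_table b                         -- hypothetical tables
        JOIN small_table s ON s.id = b.small_id;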
    • postgres_fdw: Fix failing regression test. · 82c5c533
      Robert Haas authored
      Commit ab3f008a broke this.
      
      Report by Stephen Frost.
      
      Discussion: http://postgr.es/m/20171205180342.GO4628@tamriel.snowman.net
    • postgres_fdw: Judge password use by run-as user, not session user. · ab3f008a
      Robert Haas authored
      This is a backward incompatibility which should be noted in the
      release notes for PostgreSQL 11.
      
      For security reasons, we require that a postgres_fdw foreign table use
      password authentication when accessing a remote server, so that an
      unprivileged user cannot usurp the server's credentials.  Superusers
      are exempt from this requirement, because we assume they are entitled
      to usurp the server's credentials or, at least, can find some other
      way to do it.
      
      But what should happen when the foreign table is accessed by a view
      owned by a user different from the session user?  Is it the view owner
      that must be a superuser in order to avoid the requirement of using a
      password, or the session user?  Historically it was the latter, but
      this requirement makes it the former instead.  This allows superusers
      to delegate to other users the right to select from a foreign table
      that doesn't use password authentication by creating a view over the
      foreign table and handing out rights to the view.  It is also more
      consistent with the idea that access to a view should use the view
      owner's privileges rather than the session user's privileges.
      
      The upshot of this change is that a superuser selecting from a view
      created by a non-superuser may now get an error complaining that no
      password was used, while a non-superuser selecting from a view
      created by a superuser will no longer receive such an error.
      
      No documentation changes are present in this patch because the
      wording of the documentation already suggests that it works this
      way.  We should perhaps adjust the documentation in the back-branches,
      but that's a task for another patch.
      
      Originally proposed by Jeff Janes, but with different semantics;
      adjusted to work like this by me per discussion.
      
      Discussion: http://postgr.es/m/CA+TgmoaY4HsVZJv5SqEjCKLDwtCTSwXzKpRftgj50wmMMBwciA@mail.gmail.com
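      (Illustrative aside, not part of the commit: a minimal SQL sketch of the delegation pattern this enables. All object and role names are hypothetical, and the user mapping deliberately carries no password.)

        -- run as a superuser
        CREATE EXTENSION IF NOT EXISTS postgres_fdw;
        CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw
          OPTIONS (host 'localhost', dbname 'postgres');
        CREATE USER MAPPING FOR PUBLIC SERVER loopback;        -- no password
        CREATE FOREIGN TABLE remote_t (id int) SERVER loopback
          OPTIONS (table_name 't');

        -- a view owned by the superuser, handed out to an ordinary role
        CREATE VIEW remote_v AS SELECT * FROM remote_t;
        GRANT SELECT ON remote_v TO app_user;

        -- with this change, the password check looks at the view's owner
        -- (a superuser), so app_user may read remote_v even though the
        -- mapping uses no password; previously this failed for app_user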
    • Mark assorted variables PGDLLIMPORT. · c572599c
      Robert Haas authored
      This makes life easier for extension authors who wish to support
      Windows.
      
      Brian Cloutier, slightly amended by me.
      
      Discussion: http://postgr.es/m/CAJCy68fscdNhmzFPS4kyO00CADkvXvEa-28H-OtENk-pa2OTWw@mail.gmail.com
    • doc: Turn on generate.consistent.ids parameter · 28f8896a
      Peter Eisentraut authored
      This ensures that automatically generated HTML anchors don't change in
      every build.
    • Treat directory open failures as hard errors in ResetUnloggedRelations(). · 8dc3c971
      Tom Lane authored
      Previously, this code just reported such problems at LOG level and kept
      going.  The problem with this approach is that transient failures (e.g.,
      ENFILE) could prevent us from resetting unlogged relations to empty,
      yet allow recovery to appear to complete successfully.  That seems like
      a data corruption hazard large enough to treat such problems as reasons
      to fail startup.
      
      For the same reason, treat unlink failures for unlogged files as hard
      errors not just LOG messages.  It's a little odd that we did it like that
      when file-level errors in other steps (copy_file, fsync_fname) are ERRORs.
      
      The sole case that I left alone is that ENOENT failure on a tablespace
      (not database) directory is not an error, though it will now be logged
      rather than just silently ignored.  This is to cover the scenario where
      a previous DROP TABLESPACE removed the tablespace directory but failed
      before removing the pg_tblspc symlink.  I'm not sure that that's very
      likely in practice, but that seems like the only real excuse for the
      old behavior here, so let's allow for it.  (As coded, this will also
      allow ENOENT on $PGDATA/base/.  But since we'll fail soon enough if
      that's gone, I don't think we need to complicate this code by
      distinguishing that from a true tablespace case.)
      
      Discussion: https://postgr.es/m/21040.1512418508@sss.pgh.pa.us
    • Fix warnings from cpluspluscheck · e7cfb26f
      Peter Eisentraut authored
      Fix warnings about "comparison between signed and unsigned integer
      expressions" in inline functions in header files by adding some casts.
  10. 04 Dec, 2017 1 commit
    • Simplify do_pg_start_backup's API by opening pg_tblspc internally. · 066bc21c
      Tom Lane authored
      do_pg_start_backup() expects its callers to pass in an open DIR pointer
      for the pg_tblspc directory, but there's no apparent advantage in that.
      It complicates the callers without adding any flexibility, and there's no
      robustness advantage, since we surely have to be prepared for errors during
      the scan of pg_tblspc anyway.  In fact, by holding an extra kernel resource
      during operations like the preliminary checkpoint, we might be making
      things a fraction more failure-prone, not less.  Hence, remove that argument
      and open the directory just for the duration of the actual scan.
      
      Discussion: https://postgr.es/m/28752.1512413887@sss.pgh.pa.us