1. 13 Dec, 2017 10 commits
    • Rethink MemoryContext creation to improve performance. · 9fa6f00b
      Tom Lane authored
      This patch makes a number of interrelated changes to reduce the overhead
      involved in creating/deleting memory contexts.  The key ideas are:
      
      * Include the AllocSetContext header of an aset.c context in its first
      malloc request, rather than allocating it separately in TopMemoryContext.
      This means that we now always create an initial or "keeper" block in an
      aset, even if it never receives any allocation requests.
      
      * Create freelists in which we can save and recycle recently-destroyed
      asets (this idea is due to Robert Haas).
      
      * In the common case where the name of a context is a constant string,
      just store a pointer to it in the context header, rather than copying
      the string.
      
      The first change eliminates a palloc/pfree cycle per context, and
      also avoids bloat in TopMemoryContext, at the price that creating
      a context now involves a malloc/free cycle even if the context never
      receives any allocations.  That would be a loser for some common
      usage patterns, but recycling short-lived contexts via the freelist
      eliminates that pain.
      
      Avoiding copying constant strings not only saves strlen() and strcpy()
      overhead, but is an essential part of the freelist optimization because
      it makes the context header size constant.  Currently we make no
      attempt to use the freelist for contexts with non-constant names.
      (Perhaps someday we'll need to think harder about that, but in current
      usage, most contexts with custom names are long-lived anyway.)
      
      The freelist management in this initial commit is pretty simplistic,
      and we might want to refine it later --- but in common workloads that
      will never matter because the freelists will never get full anyway.
      
      To create a context with a non-constant name, one is now required to
      call AllocSetContextCreateExtended and specify the MEMCONTEXT_COPY_NAME
      option.  AllocSetContextCreate becomes a wrapper macro, and it includes
      a test that will complain about non-string-literal context name
      parameters on gcc and similar compilers.
      
      An unfortunate side effect of making AllocSetContextCreate a macro is
      that one is now *required* to use the size parameter abstraction macros
      (ALLOCSET_DEFAULT_SIZES and friends) with it; the pre-9.6 habit of
      writing out individual size parameters no longer works unless you
      switch to AllocSetContextCreateExtended.
      
      Internally to the memory-context-related modules, the context creation
      APIs are simplified, removing the rather baroque original design whereby
      a context-type module called mcxt.c which then called back into the
      context-type module.  That saved a bit of code duplication, but not much,
      and it prevented context-type modules from exercising control over the
      allocation of context headers.
      
      In passing, I converted the test-and-elog validation of aset size
      parameters into Asserts to save a few more cycles.  The original thought
      was that callers might compute size parameters on the fly, but in practice
      nobody does that, so it's useless to expend cycles on checking those
      numbers in production builds.
      
      Also, mark the memory context method-pointer structs "const",
      just for cleanliness.
      
      Discussion: https://postgr.es/m/2264.1512870796@sss.pgh.pa.us
    • Start a separate test suite for plpgsql · 632b03da
      Peter Eisentraut authored
      The plpgsql.sql test file in the main regression tests is now by far the
      largest after numeric_big, making editing and managing the test cases
      very cumbersome.  The other PLs have their own test suites split up into
      smaller files by topic.  It would be nice to have that for plpgsql as
      well.  So, to get that started, set up test infrastructure in
      src/pl/plpgsql/src/ and split out the recently added procedure test
      cases into a new file there.  That file now mirrors the test cases added
      to the other PLs, making managing those matching tests a bit easier too.
      
      msvc build system changes with help from Michael Paquier
    • Fix crash when using CALL on an aggregate · 3d887422
      Peter Eisentraut authored
      Author: Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>
      Reported-by: Rushabh Lathia <rushabh.lathia@gmail.com>
    • Add float.h include to int8.c, for isnan(). · 8e211f53
      Andres Freund authored
      port.h redirects isnan() to _isnan() on Windows, which in turn is
      provided by float.h rather than math.h.  Therefore include float.h
      as well.
      
      Per buildfarm.
    • Consistently use PG_INT(16|32|64)_(MIN|MAX). · f512a6e1
      Andres Freund authored
      Per buildfarm animal woodlouse.
    • PL/Python: Fix potential NULL pointer dereference · 4c6744ed
      Peter Eisentraut authored
      After d0aa965c, one error path in
      PLy_spi_execute_fetch_result() could result in the variable "result"
      being dereferenced after being set to NULL.  Rearrange the code a bit to
      fix that.
      
      Also add another SPI_freetuptable() call so that the tuple table is
      cleared in all error paths.
      
      discovered by John Naylor <jcnaylor@gmail.com> via scan-build
      
      ideas and review by Tom Lane
    • Make PGAC_C_BUILTIN_OP_OVERFLOW link instead of just compiling. · 85abb5b2
      Andres Freund authored
      Otherwise the detection can spuriously report the symbol as available,
      because the compiler may just emit a reference to a non-existent symbol.
    • Use new overflow aware integer operations. · 101c7ee3
      Andres Freund authored
      A previous commit added inline functions that provide fast(er) and
      correct overflow checks for signed integer math. Use them in a
      significant portion of backend code.  There's more to touch in both
      backend and frontend code, but these were the easily identifiable
      cases.
      
      The old overflow checks are noticeable in integer heavy workloads.
      
      A secondary benefit is that getting rid of overflow checks that rely
      on signed integer overflow wrapping around will eventually let us drop
      -fwrapv, which in turn slows down other code.
      
      Author: Andres Freund
      Discussion: https://postgr.es/m/20171024103954.ztmatprlglz3rwke@alap3.anarazel.de
    • Provide overflow safe integer math inline functions. · 4d6ad312
      Andres Freund authored
      It's not easy to get signed integer overflow checks correct and
      fast. Therefore abstract the necessary infrastructure into a common
      header providing addition, subtraction and multiplication for 16, 32,
      64 bit signed integers.
      
      The new macros aren't yet used, but a followup commit will convert
      several open coded overflow checks.
      
      Author: Andres Freund, with some code stolen from Greg Stark
      Reviewed-By: Robert Haas
      Discussion: https://postgr.es/m/20171024103954.ztmatprlglz3rwke@alap3.anarazel.de
    • Remove obsolete comment. · 95b52351
      Robert Haas authored
      Commit 8b304b8b removed replacement
      selection, but left behind this comment text.  The optimization to
      which the comment refers is not relevant without replacement
      selection, because if we had so few tuples as to require only one
      tape, we would have just completed the sort in memory.
      
      Peter Geoghegan
      
      Discussion: http://postgr.es/m/CAH2-WznqupLA8CMjp+vqzoe0yXu0DYYbQSNZxmgN76tLnAOZ_w@mail.gmail.com
  2. 12 Dec, 2017 2 commits
  3. 11 Dec, 2017 3 commits
  4. 10 Dec, 2017 1 commit
    • Stabilize output of new regression test case. · 9edc97b7
      Tom Lane authored
      The test added by commit 390d5813 turns out to have different output
      in CLOBBER_CACHE_ALWAYS builds: there's an extra CONTEXT line in the
      error message as a result of detecting the error at a different place.
      Possibly we should do something to make that more consistent.  But as
      a stopgap measure to make the buildfarm green again, adjust the test
      to suppress CONTEXT entirely.  We can revert this if we do something
      in the backend to eliminate the inconsistency.
      
      Discussion: https://postgr.es/m/31545.1512924904@sss.pgh.pa.us
  5. 09 Dec, 2017 5 commits
    • Fix plpgsql to reinitialize record variables at block re-entry. · 390d5813
      Tom Lane authored
      If one exits and re-enters a DECLARE ... BEGIN ... END block within a
      single execution of a plpgsql function, perhaps due to a surrounding loop,
      the declared variables are supposed to get re-initialized to null (or
      whatever their initializer is).  But this failed to happen for variables
      of type "record", because while exec_stmt_block() expected such variables
      to be included in the block's initvarnos list, plpgsql_add_initdatums()
      only adds DTYPE_VAR variables to that list.  This bug appears to have
      been there since the aboriginal addition of plpgsql to our tree.
      
      Fix by teaching plpgsql_add_initdatums() to include DTYPE_REC variables
      as well.  (We don't need to consider other DTYPEs because they don't
      represent separately-stored values.)  I failed to resist the temptation
      to make some nearby cosmetic adjustments, too.
      
      No back-patch, because there have not been field complaints, and it
      seems possible that somewhere out there someone has code depending
      on the incorrect behavior.  In any case this change would have no
      impact on correctly-written code.
      
      Discussion: https://postgr.es/m/22994.1512800671@sss.pgh.pa.us
    • Fix regression test output · ce1468d0
      Magnus Hagander authored
      Missed this in the last commit.
    • Fix typo · d8f632ca
      Magnus Hagander authored
      Reported by Robins Tharakan
    • MSVC 2012+: Permit linking to 32-bit, MinGW-built libraries. · 7e0c574e
      Noah Misch authored
      Notably, this permits linking to the 32-bit Perl binaries advertised on
      perl.org, namely Strawberry Perl and ActivePerl.  This has a side effect
      of permitting linking to binaries built with obsolete MSVC versions.
      
      By default, MSVC 2012 and later require a "safe exception handler table"
      in each binary.  MinGW-built, 32-bit DLLs lack the relevant exception
      handler metadata, so linking to them failed with error LNK2026.  Restore
      the semantics of MSVC 2010, which omits the table from a given binary if
      some linker input lacks metadata.  This has no effect on 64-bit builds
      or on MSVC 2010 and earlier.  Back-patch to 9.3 (all supported
      versions).
      
      Reported by Victor Wagner.
      
      Discussion: https://postgr.es/m/20160326154321.7754ab8f@wagner.wagner.home
    • MSVC: Test whether 32-bit Perl needs -D_USE_32BIT_TIME_T. · 65a00f30
      Noah Misch authored
      Commits 5a5c2fec and
      b5178c5d introduced support for modern
      MSVC-built, 32-bit Perl, but they broke use of MinGW-built, 32-bit Perl
      distributions like Strawberry Perl and modern ActivePerl.  Perl has no
      robust means to report whether it expects a -D_USE_32BIT_TIME_T ABI, so
      test this.  Back-patch to 9.3 (all supported versions).
      
      The chief alternative was a heuristic of adding -D_USE_32BIT_TIME_T when
      $Config{gccversion} is nonempty.  That banks on every gcc-built Perl
      using the same ABI.  gcc could change its default ABI the way MSVC once
      did, and one could build Perl with gcc and the non-default ABI.
      
      The GNU make build system could benefit from a similar test, without
      which it does not support MSVC-built Perl.  For now, just add a comment.
      Most users taking the special step of building Perl with MSVC probably
      build PostgreSQL with MSVC.
      
      Discussion: https://postgr.es/m/20171130041441.GA3161526@rfd.leadboat.com
  6. 08 Dec, 2017 4 commits
  7. 07 Dec, 2017 1 commit
  8. 06 Dec, 2017 4 commits
    • Report failure to start a background worker. · 28724fd9
      Robert Haas authored
      When a worker is flagged as BGW_NEVER_RESTART and we fail to start it,
      or if it is not marked BGW_NEVER_RESTART but is terminated before
      startup succeeds, what BgwHandleStatus should be reported?  The
      previous code really hadn't considered this possibility (as indicated
      by the comments which ignore it completely) and would typically return
      BGWH_NOT_YET_STARTED, but that's not a good answer, because then
      there's no way for code using GetBackgroundWorkerPid() to tell the
      difference between a worker that has not started but will start
      later and a worker that has not started and will never be started.
      So, when this case happens, return BGWH_STOPPED instead.  Update the
      comments to reflect this.
      
      The preceding fix by itself is insufficient to fix the problem,
      because the old code also didn't send a notification to the process
      identified in bgw_notify_pid when startup failed.  That might've
      been technically correct under the theory that the status of the
      worker was BGWH_NOT_YET_STARTED, because the status would indeed not
      change when the worker failed to start, but now that we're more
      usefully reporting BGWH_STOPPED, a notification is needed.
      
      Without these fixes, code which starts background workers and then
      uses the recommended APIs to wait for those background workers to
      start would hang indefinitely if the postmaster failed to fork a
      worker.
      
      Amit Kapila and Robert Haas
      
      Discussion: http://postgr.es/m/CAA4eK1KDfKkvrjxsKJi3WPyceVi3dH1VCkbTJji2fuwKuB=3uw@mail.gmail.com
    • Fix Parallel Append crash. · 9c64ddd4
      Robert Haas authored
      Reported by Tom Lane and the buildfarm.
      
      Amul Sul and Amit Khandekar
      
      Discussion: http://postgr.es/m/17868.1512519318@sss.pgh.pa.us
      Discussion: http://postgr.es/m/CAJ3gD9cJQ4d-XhmZ6BqM9rMM2KDBfpkdgOAb4+psz56uBuMQ_A@mail.gmail.com
    • Adjust regression test cases added by commit ab727167. · 979a36c3
      Tom Lane authored
      I suppose it is a copy-and-paste error that this test doesn't actually
      test the "Parallel Append with both partial and non-partial subplans"
      case (EXPLAIN alone surely doesn't qualify as a test of executor
      behavior).  Fix that.
      
      Also, add cosmetic aliases to make it possible to tell apart these
      otherwise-identical test cases in log_statement output.
    • doc: Flex is not a GNU package · 51cff91c
      Peter Eisentraut authored
      Remove the designation that Flex is a GNU package.  Even though Bison is
      a GNU package, leave out the designation to not make the sentence
      unnecessarily complicated.
      
      Author: Pavan Maddamsetti <pavan.maddamsetti@gmail.com>
  9. 05 Dec, 2017 10 commits
    • Fix broken markup. · 7404704a
      Tom Lane authored
    • Support Parallel Append plan nodes. · ab727167
      Robert Haas authored
      When we create an Append node, we can spread out the workers over the
      subplans instead of piling on to each subplan one at a time, which
      should typically be a bit more efficient, both because the startup
      cost of any plan executed entirely by one worker is paid only once and
      also because of reduced contention.  We can also construct Append
      plans using a mix of partial and non-partial subplans, which may allow
      for parallelism in places that otherwise couldn't support it.
      Unfortunately, this patch doesn't handle the important case of
      parallelizing UNION ALL by running each branch in a separate worker;
      the executor infrastructure is added here, but more planner work is
      needed.
      
      Amit Khandekar, Robert Haas, Amul Sul, reviewed and tested by
      Ashutosh Bapat, Amit Langote, Rafia Sabih, Amit Kapila, and
      Rajkumar Raghuwanshi.
      
      Discussion: http://postgr.es/m/CAJ3gD9dy0K_E8r727heqXoBmWZ83HwLFwdcaSSmBQ1+S+vRuUQ@mail.gmail.com
    • doc: Update memory requirements for FOP · 8097d189
      Peter Eisentraut authored
      Reported-by: Dave Page <dpage@pgadmin.org>
    • Fix accumulation of parallel worker instrumentation. · 2c09a5c1
      Robert Haas authored
      When a Gather or Gather Merge node is started and stopped multiple
      times, the old code wouldn't reset the shared state between executions,
      potentially resulting in dramatically inflated instrumentation data
      for nodes beneath it.  (The per-worker instrumentation ended up OK,
      I think, but the overall totals were inflated.)
      
      Report by hubert depesz lubaczewski.  Analysis and fix by Amit Kapila,
      reviewed and tweaked a bit by me.
      
      Discussion: http://postgr.es/m/20171127175631.GA405@depesz.com
    • Fix EXPLAIN ANALYZE of hash join when the leader doesn't participate. · 5bcf389e
      Andres Freund authored
      If a hash join appears in a parallel query, there may be no hash table
      available for explain.c to inspect even though a hash table may have
      been built in other processes.  This could happen either because
      parallel_leader_participation was set to off or because the leader
      happened to hit the end of the outer relation immediately (even though
      the complete relation is not empty) and decided not to build the hash
      table.
      
      Commit bf11e7ee introduced a way for workers to exchange
      instrumentation via the DSM segment for Sort nodes even though they
      are not parallel-aware.  This commit does the same for Hash nodes, so
      that explain.c has a way to find instrumentation data from an
      arbitrary participant that actually built the hash table.
      
      Author: Thomas Munro
      Reviewed-By: Andres Freund
      Discussion: https://postgr.es/m/CAEepm%3D3DUQC2-z252N55eOcZBer6DPdM%3DFzrxH9dZc5vYLsjaA%40mail.gmail.com
    • postgres_fdw: Fix failing regression test. · 82c5c533
      Robert Haas authored
      Commit ab3f008a broke this.
      
      Report by Stephen Frost.
      
      Discussion: http://postgr.es/m/20171205180342.GO4628@tamriel.snowman.net
    • postgres_fdw: Judge password use by run-as user, not session user. · ab3f008a
      Robert Haas authored
      This is a backward incompatibility which should be noted in the
      release notes for PostgreSQL 11.
      
      For security reasons, we require that a postgres_fdw foreign table use
      password authentication when accessing a remote server, so that an
      unprivileged user cannot usurp the server's credentials.  Superusers
      are exempt from this requirement, because we assume they are entitled
      to usurp the server's credentials or, at least, can find some other
      way to do it.
      
      But what should happen when the foreign table is accessed by a view
      owned by a user different from the session user?  Is it the view owner
      that must be a superuser in order to avoid the requirement of using a
      password, or the session user?  Historically it was the latter, but
      this requirement makes it the former instead.  This allows superusers
      to delegate to other users the right to select from a foreign table
      that doesn't use password authentication by creating a view over the
      foreign table and handing out rights to the view.  It is also more
      consistent with the idea that access to a view should use the view
      owner's privileges rather than the session user's privileges.
      
      The upshot of this change is that a superuser selecting from a view
      created by a non-superuser may now get an error complaining that no
      password was used, while a non-superuser selecting from a view
      created by a superuser will no longer receive such an error.
      
      No documentation changes are present in this patch because the
      wording of the documentation already suggests that it works this
      way.  We should perhaps adjust the documentation in the back-branches,
      but that's a task for another patch.
      
      Originally proposed by Jeff Janes, but with different semantics;
      adjusted to work like this by me per discussion.
      
      Discussion: http://postgr.es/m/CA+TgmoaY4HsVZJv5SqEjCKLDwtCTSwXzKpRftgj50wmMMBwciA@mail.gmail.com
    • Mark assorted variables PGDLLIMPORT. · c572599c
      Robert Haas authored
      This makes life easier for extension authors who wish to support
      Windows.
      
      Brian Cloutier, slightly amended by me.
      
      Discussion: http://postgr.es/m/CAJCy68fscdNhmzFPS4kyO00CADkvXvEa-28H-OtENk-pa2OTWw@mail.gmail.com
    • doc: Turn on generate.consistent.ids parameter · 28f8896a
      Peter Eisentraut authored
      This ensures that automatically generated HTML anchors don't change in
      every build.
    • Treat directory open failures as hard errors in ResetUnloggedRelations(). · 8dc3c971
      Tom Lane authored
      Previously, this code just reported such problems at LOG level and kept
      going.  The problem with this approach is that transient failures (e.g.,
      ENFILE) could prevent us from resetting unlogged relations to empty,
      yet allow recovery to appear to complete successfully.  That seems like
      a data corruption hazard large enough to treat such problems as reasons
      to fail startup.
      
      For the same reason, treat unlink failures for unlogged files as hard
      errors not just LOG messages.  It's a little odd that we did it like that
      when file-level errors in other steps (copy_file, fsync_fname) are ERRORs.
      
      The sole case that I left alone is that ENOENT failure on a tablespace
      (not database) directory is not an error, though it will now be logged
      rather than just silently ignored.  This is to cover the scenario where
      a previous DROP TABLESPACE removed the tablespace directory but failed
      before removing the pg_tblspc symlink.  I'm not sure that that's very
      likely in practice, but that seems like the only real excuse for the
      old behavior here, so let's allow for it.  (As coded, this will also
      allow ENOENT on $PGDATA/base/.  But since we'll fail soon enough if
      that's gone, I don't think we need to complicate this code by
      distinguishing that from a true tablespace case.)
      
      Discussion: https://postgr.es/m/21040.1512418508@sss.pgh.pa.us