1. 12 Jun, 2020 4 commits
    • Add missing extern keyword for a couple of numutils functions · 9a7fccd9
      David Rowley authored
      In passing, also remove a few surplus empty lines from pg_ltoa and
      pg_ulltoa_n in numutils.c.
      
      Reported-by: Andrew Gierth
      Discussion: https://postgr.es/m/87y2ou3xuh.fsf@news-spur.riddles.org.uk
      Backpatch-through: 13, where these changes were introduced
    • Avoid using a cursor in plpgsql's RETURN QUERY statement. · 2f48ede0
      Tom Lane authored
      plpgsql has always executed the query given in a RETURN QUERY command
      by opening it as a cursor and then fetching a few rows at a time,
      which it turns around and dumps into the function's result tuplestore.
      The point of this was to keep from blowing out memory with an oversized
      SPITupleTable result (note that while a tuplestore can spill tuples
      to disk, SPITupleTable cannot).  However, it's rather inefficient, both
      because of extra data copying and because of executor entry/exit
      overhead.  In recent versions, a new performance problem has emerged:
      use of a cursor prevents use of a parallel plan for the executed query.
      
      We can improve matters by skipping use of a cursor and having the
      executor push result tuples directly into the function's result
      tuplestore.  However, a moderate amount of new infrastructure is needed
      to make that idea work:
      
      * We can use the existing tstoreReceiver.c DestReceiver code to funnel
      executor output to the tuplestore, but it has to be extended to support
      plpgsql's requirement for possibly applying a tuple conversion map
      (see the sketch after this list).
      
      * SPI needs to be extended to allow use of a caller-supplied
      DestReceiver instead of its usual receiver that puts tuples into
      a SPITupleTable.  Two new API calls are needed to handle both the
      RETURN QUERY and RETURN QUERY EXECUTE cases.
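
      A minimal sketch of how these two pieces fit together.  The name of
      the new SPI entry point and the two extra SetTuplestoreDestReceiverParams
      arguments are not given above; they are my recollection of this patch
      and should be treated as assumptions:

          #include "postgres.h"
          #include "executor/spi.h"
          #include "executor/tstoreReceiver.h"

          static void
          run_query_into_tuplestore(const char *query, ParamListInfo params,
                                    Tuplestorestate *tstore, MemoryContext cxt)
          {
              /* tstoreReceiver.c receiver, extended by this patch to apply
               * an optional tuple conversion map before storing each row */
              DestReceiver *dest = CreateTuplestoreDestReceiver();

              SetTuplestoreDestReceiverParams(dest, tstore, cxt, false,
                                              NULL,  /* target tupdesc (assumed new arg) */
                                              NULL); /* map-failure errmsg (assumed new arg) */

              /* Executor output goes straight into the tuplestore: no cursor,
               * no intermediate SPITupleTable.  Assumed name for one of the
               * two new SPI calls mentioned above. */
              (void) SPI_execute_with_receiver(query, params,
                                               true,  /* read_only */
                                               0,     /* tcount: no row limit */
                                               dest);
          }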
      
      I also felt that I didn't want these new API calls to use the legacy
      method of specifying query parameter values with "char" null flags
      (the old ' '/'n' convention); rather they should accept ParamListInfo
      objects containing the parameter type and value info.  This required
      a bit of additional new infrastructure since we didn't yet have any
      parse analysis callback that would interpret $N parameter symbols
      according to type data supplied in a ParamListInfo.  There seems to be
      no harm in letting makeParamList install that callback by default,
      rather than leaving a new ParamListInfo's parserSetup hook as NULL.
      (Indeed, as of HEAD, I couldn't find anyplace that was using the
      parserSetup field at all; plpgsql was using parserSetupArg for its
      own purposes, but parserSetup seemed to be write-only.)
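
      A short sketch of the ParamListInfo style these new calls expect, in
      place of the old values/nulls arrays (the helper itself is illustrative;
      only makeParamList and the ParamExternData fields come from the text
      above):

          #include "postgres.h"
          #include "catalog/pg_type.h"
          #include "nodes/params.h"

          static ParamListInfo
          make_one_int4_param(Datum value, bool isnull)
          {
              /* per this commit, makeParamList now also installs a default
               * parserSetup hook that resolves $N symbols from ptype below */
              ParamListInfo params = makeParamList(1);
              ParamExternData *prm = &params->params[0];

              prm->value = value;
              prm->isnull = isnull;           /* a real bool, not ' '/'n' */
              prm->pflags = PARAM_FLAG_CONST;
              prm->ptype = INT4OID;           /* $1 is typed at parse analysis */
              return params;
          }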
      
      We can actually get plpgsql out of the business of using legacy null
      flags altogether, and have it use ParamListInfo instead of its ad-hoc
      PreparedParamsData structure; but this requires inventing one more
      SPI API call that can replace SPI_cursor_open_with_args.  That seems
      worth doing, though.
      
      SPI_execute_with_args and SPI_cursor_open_with_args are now unused
      anywhere in the core PG distribution.  Perhaps someday we could
      deprecate/remove them.  But cleaning up the crufty bits of the SPI
      API is a task for a different patch.
      
      Per bug #16040 from Jeremy Smith.  This is unfortunately too invasive to
      consider back-patching.  Patch by me; thanks to Hamid Akhtar for review.
      
      Discussion: https://postgr.es/m/16040-eaacad11fecfb198@postgresql.org
    • Michael Paquier authored · aaf8c990
    • Make more use of RELKIND_HAS_STORAGE() · ffd25822
      Peter Eisentraut authored
      Make use of RELKIND_HAS_STORAGE() where appropriate, instead of
      listing out the relkinds individually.  No behavior change intended.
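
      The shape of the change, schematically (the particular relkind list
      shown is illustrative, not a specific call site from the patch):

          /* before: enumerate every relkind that has storage */
          if (rel->rd_rel->relkind == RELKIND_RELATION ||
              rel->rd_rel->relkind == RELKIND_MATVIEW ||
              rel->rd_rel->relkind == RELKIND_INDEX ||
              rel->rd_rel->relkind == RELKIND_SEQUENCE ||
              rel->rd_rel->relkind == RELKIND_TOASTVALUE)
              ...

          /* after: one macro from catalog/pg_class.h */
          if (RELKIND_HAS_STORAGE(rel->rd_rel->relkind))
              ...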
      Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
      Discussion: https://www.postgresql.org/message-id/flat/7a22bf51-2480-d999-1794-191ba67ff47c%402ndquadrant.com
  2. 11 Jun, 2020 12 commits
  3. 10 Jun, 2020 4 commits
  4. 09 Jun, 2020 6 commits
  5. 08 Jun, 2020 5 commits
  6. 07 Jun, 2020 9 commits
    • MSVC: Avoid warning when testing a TAP suite without PROVE_FLAGS. · 5a2398b0
      Noah Misch authored
      Commit 7be5d8df surfaced the logic
      error, which had no functional implications, by adding "use warnings".
      The buildfarm always customizes PROVE_FLAGS, so the warning did not
      appear there.  Back-patch to 9.5 (all supported versions).
    • Stamp HEAD as 14devel. · d10b19e2
      Tom Lane authored
      Let the hacking begin ...
    • pgindent run prior to branching v13. · b5d69b7c
      Tom Lane authored
      pgperltidy and reformat-dat-files too, though those didn't
      find anything to change.
    • Try to read data from the socket in pqSendSome's write_failed paths. · 7247e243
      Tom Lane authored
      Even when we've concluded that we have a hard write failure on the
      socket, we should continue to try to read data.  This gives us an
      opportunity to collect any final error message that the backend might
      have sent before closing the connection; moreover it is the job of
      pqReadData not pqSendSome to close the socket once EOF is detected.
      
      Due to an oversight in 1f39a1c0, pqSendSome failed to try to collect
      data in the case where we'd already set write_failed.  The problem was
      masked for ordinary query operations (which really only make one write
      attempt anyway), but COPY to the server would continue to send data
      indefinitely after a mid-COPY connection loss.
      
      Hence, add pqReadData calls into the paths where pqSendSome drops data
      because of write_failed.  If we've lost the connection, this will
      eventually result in closing the socket and setting CONNECTION_BAD,
      which will cause PQputline and siblings to report failure, allowing
      the application to terminate the COPY sooner.  (Basically this restores
      what happened before 1f39a1c0.)
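
      Schematically, the fix looks like this (a sketch of the control flow,
      not the actual fe-misc.c code; the field and function names are from
      libpq's internals as I understand them):

          /* inside pqSendSome: data is being dropped because an earlier
           * write already failed */
          if (conn->write_failed)
          {
              /* discard the pending output */
              conn->outCount = 0;

              /* but still try to read: this lets us absorb any final
               * error message from the backend, and it is pqReadData's
               * job to close the socket and set CONNECTION_BAD on EOF,
               * after which PQputline and siblings report failure */
              if (pqReadData(conn) < 0)
                  return -1;

              return 0;
          }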
      
      There are related issues that this does not solve; for example, if the
      backend sends an error but doesn't drop the connection, we did and
      still will keep pumping COPY data as long as the application sends it.
      Fixing that will require application-visible behavior changes though,
      and anyway it's an ancient behavior that we've had few complaints about.
      For now I'm just trying to fix the regression from 1f39a1c0.
      
      Per a complaint from Andres Freund.  Back-patch into v12 where
      1f39a1c0 came in.
      
      Discussion: https://postgr.es/m/20200603201242.ofvm4jztpqytwfye@alap3.anarazel.de
    • Rethink definition of cancel.c's CancelRequested flag. · 92f33bb7
      Tom Lane authored
      As it stands, this flag is only set when we've successfully sent a
      cancel request, not if we get SIGINT and then fail to send a cancel.
      However, for almost all callers, that's the Wrong Thing: we'd prefer
      to abort processing after control-C even if no cancel could be sent.
      
      As an example, since commit 1d468b9a, "pgbench -i" fails to give up
      sending COPY data even after control-C if the postmaster has been
      stopped, which is clearly not what the code intends and not what anyone
      would want.  (The fact that it keeps going at all is the fault of a
      separate bug in libpq, but not letting CancelRequested become set is
      clearly not what we want here.)
      
      The sole exception, as far as I can find, is that scripts_parallel.c's
      ParallelSlotsGetIdle tries to consume a query result after issuing a
      cancel, which of course might not terminate quickly if no cancel
      happened.  But that behavior was poorly thought out too.  No user of
      ParallelSlotsGetIdle tries to continue processing after a cancel,
      so there is really no point in trying to clear the connection's state.
      Moreover this has the same defect as for other users of cancel.c,
      that if the cancel request fails for some reason then we end up with
      control-C being completely ignored.  (On top of that, select_loop failed
      to distinguish clearly between SIGINT and other reasons for select(2)
      failing, which means that it's possible that the existing code would
      think that a cancel has been sent when it hasn't.)
      
      Hence, redefine CancelRequested as simply meaning that SIGINT was
      received.  We could add a second flag with the other meaning, but
      in the absence of any compelling argument why such a flag is needed,
      I think it would just offer an opportunity for future callers to
      get it wrong.  Also remove the consumeQueryResult call in
      ParallelSlotsGetIdle's failure exit.  In passing, simplify the
      API of select_loop.
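
      A minimal sketch of the revised contract (the flag name comes from
      cancel.c; the handler body here is illustrative, not the real one):

          #include <signal.h>

          volatile sig_atomic_t CancelRequested = false;

          static void
          handle_sigint(int signum)
          {
              /* Set the flag first: it now simply means "SIGINT was
               * received", whether or not a cancel request can be sent. */
              CancelRequested = true;

              /* best effort: forward a cancel request to the server;
               * a send failure no longer hides the control-C */
              (void) signum;
          }

          /* callers can then abort promptly:
           *     while (!CancelRequested && have_work())
           *         do_work();
           */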
      
      It would now be possible to re-unify psql's cancel_pressed with
      CancelRequested, partly undoing 5d43c3c5.  But I'm not really
      convinced that that's worth the trouble, so I left psql alone,
      other than fixing a misleading comment.
      
      This code is new in v13 (cf a4fd3aa7), so no need for back-patch.
      
      Per investigation of a complaint from Andres Freund.
      
      Discussion: https://postgr.es/m/20200603201242.ofvm4jztpqytwfye@alap3.anarazel.de
    • Fix platform-specific performance regression in logtape.c. · 1fbb6c93
      Jeff Davis authored
      Commit 24d85952 made a change that indirectly caused a performance
      regression by triggering a change in the way GCC optimizes memcpy() on
      some platforms.
      
      The behavior seemed to contradict a GCC document, so I filed a report:
      
      https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95556
      
      This patch implements a narrow workaround which eliminates the
      regression I observed. The workaround is benign enough that it seems
      unlikely to cause a different regression on another platform.
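
      For context, one way this class of problem can be worked around (an
      illustration of the general technique only, not the logtape.c patch):

          #include <string.h>

          /* GCC may inline-expand memcpy() when it thinks it knows the
           * size or alignment; hiding the callee behind a volatile
           * function pointer forces a genuine library call. */
          static void *(*volatile memcpy_fn) (void *, const void *, size_t) = memcpy;

          static void
          copy_block(char *dst, const char *src, size_t n)
          {
              memcpy_fn(dst, src, n);
          }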
      
      Discussion: https://postgr.es/m/99b2eab335c1592c925d8143979c8e9e81e1575f.camel@j-davis.com
    • psql: Format \? output a little better · aa792769
      Peter Eisentraut authored
    • Fix message translatability · 1e8ada0c
      Peter Eisentraut authored
      Two parts of the same message shouldn't be split across two function
      calls.
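
      An illustrative example of the problem (a hypothetical call site, not
      the one this commit fixes):

          #include <errno.h>
          #include <stdio.h>
          #include <string.h>
          #include <libintl.h>
          #define _(x) gettext(x)    /* PostgreSQL's usual shorthand */

          static void
          report_open_failure(void)
          {
              /* bad: translators get two independent half-messages */
              printf(_("could not open file"));
              printf(_(": %s\n"), strerror(errno));

              /* good: one complete, translatable message */
              printf(_("could not open file: %s\n"), strerror(errno));
          }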
    • Spelling adjustments · 0fd2a79a
      Peter Eisentraut authored