1. 01 Dec, 2020 7 commits
  2. 30 Nov, 2020 9 commits
  3. 29 Nov, 2020 3 commits
    • Fix recently-introduced breakage in psql's \connect command. · 7e5e1bba
      Tom Lane authored
      Through my misreading of what the existing code actually did,
      commits 85c54287 et al. broke psql's behavior for the case where
      "\c connstring" provides a password in the connstring.  We should
      use that password in such a case, but as of 85c54287 we ignored it
      (and instead, prompted for a password).
      
      Commit 94929f1c fixed that in HEAD, but since I thought it was
      cleaning up a longstanding misbehavior and not one I'd just created,
      I didn't back-patch it.
      
      Hence, back-patch the portions of 94929f1c having to do with
      password management.  In addition to fixing the introduced bug,
      this means that "\c -reuse-previous=on connstring" will allow
      re-use of an existing connection's password if the connstring
      doesn't change user/host/port.  That didn't happen before, but
      it seems like a bug fix, and anyway I'm loath to have significant
      differences in this code across versions.
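      As a rough, self-contained sketch of the back-patched rule (the function name and signature are illustrative, not psql's actual code):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch of the rule described above: a password supplied
 * in the connstring always wins; otherwise the previous connection's
 * password may be reused only when user/host/port are unchanged;
 * otherwise psql must prompt (signalled here by returning NULL). */
const char *
password_to_use(const char *connstring_pw, const char *previous_pw,
                bool same_user, bool same_host, bool same_port)
{
    if (connstring_pw != NULL && connstring_pw[0] != '\0')
        return connstring_pw;   /* the introduced bug ignored this case */
    if (same_user && same_host && same_port)
        return previous_pw;     /* the \c -reuse-previous=on case */
    return NULL;                /* NULL means: prompt the user */
}
```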
      
      Also fix an error with the same root cause about whether or not to
      override a connstring's setting of client_encoding.  As of 85c54287
      we always did so; restore the previous behavior of overriding only
      when stdin/stdout are a terminal and there's no environment setting
      of PGCLIENTENCODING.  (I find that definition a bit surprising, but
      right now doesn't seem like the time to revisit it.)
      
      Per bug #16746 from Krzysztof Gradek.  As with the previous patch,
      back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/16746-44b30e2edf4335d4@postgresql.org
    • Doc: clarify behavior of PQconnectdbParams(). · d5e2bdf7
      Tom Lane authored
      The documentation omitted the critical tidbit that a keyword-array entry
      is simply ignored if its corresponding value-array entry is NULL or an
      empty string; it will *not* override any previously-obtained value for
      the parameter.  (See conninfo_array_parse().)  I'd supposed that would
      force the setting back to default, which is what led me into bug #16746;
      but it doesn't.
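      A minimal model of that per-entry rule (merge_param is a hypothetical name for illustration, not a libpq function):

```c
#include <stddef.h>

/* Hypothetical model of the rule conninfo_array_parse() applies to each
 * keyword/value pair: a NULL or empty-string value leaves any
 * previously-obtained setting untouched; it does NOT reset the
 * parameter back to its default. */
const char *
merge_param(const char *previous, const char *value)
{
    if (value == NULL || value[0] == '\0')
        return previous;        /* entry ignored; prior value survives */
    return value;               /* a non-empty entry overrides */
}
```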
      
      While here, I couldn't resist the temptation to do some copy-editing,
      both in the description of PQconnectdbParams() and in the section
      about connection URI syntax.
      
      Discussion: https://postgr.es/m/931505.1606618746@sss.pgh.pa.us
    • Retry initial slurp_file("current_logfiles"), in test 004_logrotate.pl. · 0f89ca08
      Noah Misch authored
      Buildfarm member topminnow failed when the test script attempted this
      before the syslogger would have created the file.  Back-patch to v12,
      which introduced the test.
  4. 28 Nov, 2020 2 commits
    • Clean up after tests in src/test/locale/. · b90a7fe1
      Tom Lane authored
      Oversight in 257836a7, which added these tests.
    • Fix a recently-introduced race condition in LISTEN/NOTIFY handling. · 9c83b54a
      Tom Lane authored
      Commit 566372b3 fixed some race conditions involving concurrent
      SimpleLruTruncate calls, but it introduced new ones in async.c.
      A newly-listening backend could attempt to read Notify SLRU pages that
      were in process of being truncated, possibly causing an error.  Also,
      the QUEUE_TAIL pointer could become set to a value that's not equal to
      the queue position of any backend.  While that's fairly harmless in
      v13 and up (thanks to commit 51004c71), in older branches it resulted
      in near-permanent disabling of the queue truncation logic, so that
      continued use of NOTIFY led to queue-fill warnings and eventual
      inability to send any more notifies.  (A server restart is enough to
      make that go away, but it's still pretty unpleasant.)
      
      The core of the problem is confusion about whether QUEUE_TAIL
      represents the "logical" tail of the queue (i.e., the oldest
      still-interesting data) or the "physical" tail (the oldest data we've
      not yet truncated away).  To fix, split that into two variables.
      QUEUE_TAIL regains its definition as the logical tail, and we
      introduce a new variable to track the oldest un-truncated page.
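      A toy model of that split (the type, field, and function names here are illustrative, not async.c's actual ones):

```c
/* Toy model of the fix: track the logical tail (the oldest data any
 * listener still needs) separately from the oldest un-truncated page. */
typedef struct
{
    int queue_tail;     /* logical tail: min position over all listeners */
    int tail_page;      /* physical tail: oldest page not yet truncated */
} NotifyQueue;

/* The logical tail advances as soon as every listener has moved on... */
void
advance_tail(NotifyQueue *q, int min_listener_pos)
{
    if (min_listener_pos > q->queue_tail)
        q->queue_tail = min_listener_pos;
}

/* ...while truncation proceeds lazily, never past the logical tail. */
void
truncate_queue(NotifyQueue *q)
{
    while (q->tail_page < q->queue_tail)
        q->tail_page++;         /* stands in for truncating one SLRU page */
}
```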
      
      Per report from Mikael Gustavsson.  Like the previous patch,
      back-patch to all supported branches.
      
      Discussion: https://postgr.es/m/1b8561412e8a4f038d7a491c8b922788@smhi.se
  5. 27 Nov, 2020 4 commits
    • Fix CLUSTER progress reporting of number of blocks scanned. · 3df51ca8
      Fujii Masao authored
      Previously the pg_stat_progress_cluster view reported the current
      block number of the heap scan as the number of heap blocks scanned
      (heap_blks_scanned). That number could be incorrect when
      synchronize_seqscans is enabled, because the heap scan is then
      allowed to start at a block in the middle of the relation, so
      heap_blks_scanned could wrap around when the scan did. This commit
      fixes the bug by counting the blocks from the scan's starting
      block to the current block, and reporting that count in the
      heap_blks_scanned column.
      
      Also, heap_blks_scanned previously could fail to reach
      heap_blks_total at the end of the heap scan phase if the last
      pages scanned were empty. This commit fixes that by explicitly
      setting heap_blks_scanned to heap_blks_total when the heap scan
      phase finishes.
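      The corrected count can be sketched as follows (a hypothetical helper for illustration, not the actual heapam code):

```c
#include <stdint.h>

/* Hypothetical sketch of the corrected computation: with
 * synchronize_seqscans on, the scan may start at start_block rather
 * than block 0 and wrap past the end of the relation, so the number of
 * blocks scanned is counted from start_block to current_block,
 * wrapping modulo nblocks and never exceeding nblocks. */
int64_t
blocks_scanned(int64_t start_block, int64_t current_block, int64_t nblocks)
{
    if (current_block >= start_block)
        return current_block - start_block + 1;
    return nblocks - start_block + current_block + 1;   /* wrapped */
}
```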
      
      Back-patch to v12 where pg_stat_progress_cluster view was introduced.
      
      Reported-by: Matthias van de Meent
      Author: Matthias van de Meent
      Reviewed-by: Fujii Masao
      Discussion: https://postgr.es/m/CAEze2WjCBWSGkVfYag001Rc4+-nNLDpWM7QbyD6yPvuhKs-gYQ@mail.gmail.com
    • Use standard SIGTERM signal handler die() in test_shm_mq worker. · ef848f4a
      Fujii Masao authored
      Previously the test_shm_mq worker used a stripped-down version of
      die() as its SIGTERM signal handler. This commit makes it use
      die() itself, to simplify the code.
      
      The only difference between die() and the stripped-down version
      previously used is whether the signal handler may call
      ProcessInterrupts() directly. That difference does not matter in a
      background worker: in a bgworker the DoingCommandRead flag is
      never true, so die() never calls ProcessInterrupts() directly.
      Therefore the test_shm_mq worker can safely use die(), as other
      bgworker processes (e.g., the logical replication apply launcher
      or the autoprewarm worker) already do.
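      The flag-latching pattern at issue can be modeled in a self-contained way with plain signal()/raise() (not the real pqsignal()/die(), whose behavior this only imitates):

```c
#include <signal.h>

/* Self-contained model: the handler only latches a flag, mirroring how
 * die() in a bgworker never reaches ProcessInterrupts() directly
 * (DoingCommandRead is never true there); the main loop is expected to
 * notice the flag and exit. */
static volatile sig_atomic_t shutdown_requested = 0;

static void
handle_sigterm(int signo)
{
    (void) signo;
    shutdown_requested = 1;     /* latch only; no work in the handler */
}

/* Install the handler, deliver SIGTERM to ourselves, return the flag. */
int
simulate_sigterm(void)
{
    shutdown_requested = 0;
    signal(SIGTERM, handle_sigterm);
    raise(SIGTERM);             /* synchronous: handler runs before return */
    return (int) shutdown_requested;
}
```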
      
      Thanks to Craig Ringer for the report and investigation of the issue.
      
      Author: Bharath Rupireddy
      Reviewed-by: Fujii Masao
      Discussion: https://postgr.es/m/CAGRY4nxsAe_1k_9g5b47orA0S011iBoHsXHFMH7cg7HV0O1bwQ@mail.gmail.com
    • Use standard SIGHUP and SIGTERM signal handlers in worker_spi. · 2a084772
      Fujii Masao authored
      Previously worker_spi used its custom signal handlers for SIGHUP and
      SIGTERM. This commit makes worker_spi use the standard signal handlers,
      to simplify the code.
      
      Note that die() is used as the standard SIGTERM signal handler in
      worker_spi instead of SignalHandlerForShutdownRequest() or bgworker_die().
      Previously the exit handling could only exit from within the main
      loop, not from within the backend code it calls. This is why die()
      is used here: so that worker_spi can respond to SIGTERM while it
      is executing a query.
      
      Arguably it is a bug that worker_spi could not respond to SIGTERM
      during query execution, but since worker_spi is just an example of
      background worker code, this is not back-patched.
      
      Thanks to Craig Ringer for the report and investigation of the issue.
      
      Author: Bharath Rupireddy
      Reviewed-by: Fujii Masao
      Discussion: https://postgr.es/m/CALj2ACXDEZhAFOTDcqO9cFSRvrFLyYOnPKrcA1UG4uZn9hUAVg@mail.gmail.com
      Discussion: https://postgr.es/m/CAGRY4nxsAe_1k_9g5b47orA0S011iBoHsXHFMH7cg7HV0O1bwQ@mail.gmail.com
    • Fix replication of in-progress transactions in tablesync worker. · 0926e96c
      Amit Kapila authored
      The tablesync worker runs under a single transaction, but in
      streaming mode we were committing that transaction on stream_stop,
      stream_abort, and stream_commit. Committing the transaction while
      in streaming mode must be avoided in the tablesync worker.
      
      In passing, move the call to process_syncing_tables() in
      apply_handle_stream_commit() to after the cleanup of stream files.
      This allows the files to be cleaned up before the tablesync worker
      exits, rather than leaving that to one of the proc-exit routines.
      
      Author: Dilip Kumar
      Reviewed-by: Amit Kapila and Peter Smith
      Tested-by: Peter Smith
      Discussion: https://postgr.es/m/CAHut+Pt4PyKQCwqzQ=EFF=bpKKJD7XKt_S23F6L20ayQNxg77A@mail.gmail.com
  6. 26 Nov, 2020 4 commits
  7. 25 Nov, 2020 11 commits