1. 10 Mar, 2021 6 commits
    • Add missing pthread_barrier_t. · 44bf3d50
      Thomas Munro authored
      Supply a simple implementation of the missing pthread_barrier_t type and
      functions, for macOS.
      
      Discussion: https://postgr.es/m/20200227180100.zyvjwzcpiokfsqm2%40alap3.anarazel.de
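      A POSIX barrier can be emulated with a mutex, a condition variable and a
      generation counter.  The stand-alone C sketch below illustrates that
      general approach; it is not the implementation added by this commit, and
      the type and function names are made up for the example.

      /* Minimal pthread_barrier_t-style emulation (illustrative sketch only). */
      #include <pthread.h>
      #include <stdio.h>

      typedef struct
      {
          pthread_mutex_t mutex;
          pthread_cond_t  cond;
          int             count;      /* threads still expected this cycle */
          int             init_count; /* threads per cycle */
          int             cycle;      /* generation, guards against spurious wakeups */
      } barrier_t;

      static void
      barrier_init(barrier_t *b, int n)
      {
          pthread_mutex_init(&b->mutex, NULL);
          pthread_cond_init(&b->cond, NULL);
          b->count = n;
          b->init_count = n;
          b->cycle = 0;
      }

      static int
      barrier_wait(barrier_t *b)
      {
          int last = 0;

          pthread_mutex_lock(&b->mutex);
          if (--b->count == 0)
          {
              /* Last arrival releases everyone and resets for the next cycle. */
              b->cycle++;
              b->count = b->init_count;
              pthread_cond_broadcast(&b->cond);
              last = 1;
          }
          else
          {
              int cycle = b->cycle;

              while (cycle == b->cycle)
                  pthread_cond_wait(&b->cond, &b->mutex);
          }
          pthread_mutex_unlock(&b->mutex);
          return last;
      }

      static barrier_t barrier;

      static void *
      worker(void *arg)
      {
          printf("thread %d before barrier\n", *(int *) arg);
          barrier_wait(&barrier);
          printf("thread %d after barrier\n", *(int *) arg);
          return NULL;
      }

      int
      main(void)
      {
          pthread_t threads[3];
          int       ids[3];

          barrier_init(&barrier, 3);
          for (int i = 0; i < 3; i++)
          {
              ids[i] = i;
              pthread_create(&threads[i], NULL, worker, &ids[i]);
          }
          for (int i = 0; i < 3; i++)
              pthread_join(threads[i], NULL);
          return 0;
      }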
    • pgbench: Improve time logic. · 547f04e7
      Thomas Munro authored
      Instead of instr_time (struct timespec) and the INSTR_XXX macros,
      introduce pg_time_usec_t and use integer arithmetic.  Don't include the
      connection time in TPS unless using -C mode, but report it separately.
      
      Author: Fabien COELHO <coelho@cri.ensmp.fr>
      Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
      Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
      Discussion: https://postgr.es/m/20200227180100.zyvjwzcpiokfsqm2%40alap3.anarazel.de
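      The idea is easy to show outside pgbench: keep timestamps as 64-bit
      microsecond counts and use plain integer arithmetic on them.  The
      stand-alone sketch below assumes a gettimeofday-based clock and made-up
      helper names; it is not pgbench's actual code.

      /* Microsecond timestamps as plain 64-bit integers (illustrative sketch). */
      #include <stdint.h>
      #include <stdio.h>
      #include <sys/time.h>
      #include <unistd.h>

      typedef int64_t pg_time_usec_t;

      static pg_time_usec_t
      now_usec(void)
      {
          struct timeval tv;

          gettimeofday(&tv, NULL);
          return (pg_time_usec_t) tv.tv_sec * 1000000 + tv.tv_usec;
      }

      int
      main(void)
      {
          pg_time_usec_t start = now_usec();
          pg_time_usec_t conn_time;
          pg_time_usec_t total_time;
          int64_t        transactions = 1000;

          usleep(10000);              /* pretend to establish a connection */
          conn_time = now_usec() - start;

          usleep(50000);              /* pretend to run the benchmark */
          total_time = now_usec() - start;

          /* Without -C, report connection time separately and leave it out of TPS. */
          printf("connection time: %.3f ms\n", conn_time / 1000.0);
          printf("tps (excluding connection time): %.2f\n",
                 transactions / ((total_time - conn_time) / 1000000.0));
          return 0;
      }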
    • pgbench: Refactor thread portability support. · b1d6a8f8
      Thomas Munro authored
      Instead of maintaining an incomplete emulation of POSIX threads for
      Windows, let's use an extremely minimalist macro-based abstraction for
      now.  A later patch will extend this, without the need to supply more
      complicated pthread emulation code.  (There may be a need for a more
      serious portable thread abstraction in later projects, but this is not
      it.)
      
      Minor incidental problems fixed: it wasn't OK to use (pthread_t) 0 as a
      special value, it wasn't OK to compare pthread_t values with ==, and we
      incorrectly assumed that pthread functions set errno.
      
      Discussion: https://postgr.es/m/20200227180100.zyvjwzcpiokfsqm2%40alap3.anarazel.de
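      A minimalist macro-based layer of the kind described can look like the
      stand-alone sketch below, which maps to Win32 threads on Windows and to
      pthreads elsewhere.  The macro names are made up for the example (they
      are not pgbench's), and the comments call out the portability pitfalls
      mentioned above.

      /* Minimal thread portability layer (illustrative sketch, not pgbench's). */
      #include <stdio.h>

      #ifdef _WIN32
      #include <windows.h>
      typedef HANDLE THREAD_T;
      #define THREAD_CREATE(handle, func, arg) \
          ((*(handle) = CreateThread(NULL, 0, (func), (arg), 0, NULL)) == NULL)
      #define THREAD_JOIN(handle) \
          (WaitForSingleObject((handle), INFINITE), CloseHandle(handle))
      #define THREAD_FUNC_RETURN_TYPE DWORD WINAPI
      #define THREAD_FUNC_RETURN return 0
      #else
      #include <pthread.h>
      typedef pthread_t THREAD_T;
      #define THREAD_CREATE(handle, func, arg) \
          pthread_create((handle), NULL, (func), (arg))
      #define THREAD_JOIN(handle) \
          pthread_join((handle), NULL)
      #define THREAD_FUNC_RETURN_TYPE void *
      #define THREAD_FUNC_RETURN return NULL
      #endif

      static THREAD_FUNC_RETURN_TYPE
      worker(void *arg)
      {
          printf("hello from worker %d\n", *(int *) arg);
          THREAD_FUNC_RETURN;
      }

      int
      main(void)
      {
          THREAD_T thread;
          int      id = 1;

          /*
           * THREAD_CREATE reports failure through its return value: with
           * pthreads that is an error number (pthread functions do not set
           * errno).  Also note that pthread_t values must be compared with
           * pthread_equal(), never with == or against (pthread_t) 0.
           */
          if (THREAD_CREATE(&thread, worker, &id) != 0)
          {
              fprintf(stderr, "could not create thread\n");
              return 1;
          }
          THREAD_JOIN(thread);
          return 0;
      }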
    • Fix valgrind issue in commit 05c8482f. · e4e87a32
      Amit Kapila authored
      Initialize other newly added variables in max_parallel_hazard_context via
      is_parallel_safe() because we don't check the parallel-safety of target
      relations in that function.
      
      Reported-by: Tom Lane as per buildfarm
      Author: Amit Kapila
      Discussion: https://postgr.es/m/2060179.1615347455@sss.pgh.pa.us
    • Enable parallel SELECT for "INSERT INTO ... SELECT ...". · 05c8482f
      Amit Kapila authored
      Parallel SELECT can't be utilized for INSERT in the following cases:
      - INSERT statement uses the ON CONFLICT DO UPDATE clause
      - Target table has a parallel-unsafe trigger, index expression or
        predicate, column default expression, or check constraint
      - Target table has a parallel-unsafe domain constraint on any column
      - Target table is a partitioned table with a parallel-unsafe partition key
        expression or support function
      
      The planner is updated to perform additional parallel-safety checks for
      the cases listed above, for determining whether it is safe to run INSERT
      in parallel-mode with an underlying parallel SELECT. The planner will
      consider using parallel SELECT for "INSERT INTO ... SELECT ...", provided
      nothing unsafe is found from the additional parallel-safety checks, or
      from the existing parallel-safety checks for SELECT.
      
      While checking parallel-safety, we need to check it for all the partitions
      of the table, which can be costly, especially when we decide not to use a
      parallel plan.  So, in a separate patch, we will introduce a GUC and/or a
      reloption to enable/disable parallelism for INSERT statements.
      
      Prior to entering parallel-mode for the execution of INSERT with parallel
      SELECT, a TransactionId is acquired and assigned to the current
      transaction state. This is necessary to prevent the INSERT from attempting
      to assign the TransactionId whilst in parallel-mode, which is not allowed.
      This approach has the disadvantage that if the underlying SELECT does not
      return any rows, the TransactionId is not used; however, that shouldn't
      happen in practice in many cases.
      
      Author: Greg Nancarrow, Amit Langote, Amit Kapila
      Reviewed-by: Amit Langote, Hou Zhijie, Takayuki Tsunakawa, Antonin Houska, Bharath Rupireddy, Dilip Kumar, Vignesh C, Zhihong Yu, Amit Kapila
      Tested-by: Tang, Haiying
      Discussion: https://postgr.es/m/CAJcOf-cXnB5cnMKqWEp2E2z7Mvcd04iLVmV=qpFJrR3AcrTS3g@mail.gmail.com
      Discussion: https://postgr.es/m/CAJcOf-fAdj=nDKMsRhQzndm-O13NY4dL6xGcEvdX5Xvbbi0V7g@mail.gmail.com
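      A simple way to see whether the planner chooses a parallel SELECT under
      an INSERT is to look at the plan for a Gather node below the Insert node.
      The stand-alone libpq sketch below simply prints EXPLAIN output; the
      connection string and table names are placeholders, and the resulting
      plan shape depends on table sizes, costs and parallel settings.

      /* Print the plan of an INSERT ... SELECT via libpq (illustrative sketch). */
      #include <stdio.h>
      #include <libpq-fe.h>

      int
      main(void)
      {
          PGconn   *conn = PQconnectdb("dbname=postgres");   /* placeholder */
          PGresult *res;

          if (PQstatus(conn) != CONNECTION_OK)
          {
              fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
              PQfinish(conn);
              return 1;
          }

          res = PQexec(conn,
                       "EXPLAIN (COSTS OFF) "
                       "INSERT INTO target_table SELECT * FROM source_table");
          if (PQresultStatus(res) != PGRES_TUPLES_OK)
              fprintf(stderr, "EXPLAIN failed: %s", PQerrorMessage(conn));
          else
          {
              /* Look for a Gather node feeding the Insert in this output. */
              for (int i = 0; i < PQntuples(res); i++)
                  printf("%s\n", PQgetvalue(res, i, 0));
          }

          PQclear(res);
          PQfinish(conn);
          return 0;
      }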
    • Revert changes for SSL compression in libpq · 0ba71107
      Michael Paquier authored
      This partially reverts 096bbf7c and 9d2d4570, undoing the libpq changes,
      as they could cause breakage in distributions that share a single libpq
      version across multiple major versions of Postgres for extensions and
      applications linking to it.
      
      Note that the backend is unchanged here: it still disables SSL
      compression while simplifying the underlying catalogs that tracked
      whether compression was enabled for an SSL connection.
      
      Per discussion with Tom Lane and Daniel Gustafsson.
      
      Discussion: https://postgr.es/m/YEbq15JKJwIX+S6m@paquier.xyz
  2. 09 Mar, 2021 6 commits
    • Fix vague comment in jsonb documentation · 6540cc51
      Alexander Korotkov authored
      The sample query fails because of an attempt to update the key of a
      numeric value, but the comment says it fails just because of the missing
      object key.  That's not correct, because jsonb subscripting automatically
      adds missing keys.
      
      Reported-by: Nikita Konev
    • libpq: Remove deprecated connection parameters authtype and tty · 14d9b376
      Peter Eisentraut authored
      The authtype parameter was deprecated and made inactive in commit
      d5bbe2ac, but the environment variable was left defined and thus
      tested with a getenv call even though the value is of no use.  Also,
      if it existed it would be copied but never freed, as the cleanup
      code had been removed.
      
      tty was deprecated in commit cb7fb3ca but most of the
      infrastructure around it remained in place.
      
      Author: Daniel Gustafsson <daniel@yesql.se>
      Discussion: https://postgr.es/m/DDDF36F3-582A-4C02-8598-9B464CC42B34@yesql.se
    • Switch back sslcompression to be a normal input field in libpq · 096bbf7c
      Michael Paquier authored
      Per buildfarm member crake, any server including a postgres_fdw server
      with this option set would fail to run pg_upgrade properly, as the
      option got hidden in f9264d15 by becoming a debug option, making the
      restore of the FDW server fail.
      
      This changes the option back to being visible in libpq, but still
      inactive, in order to fix this upgrade issue.
      
      Discussion: https://postgr.es/m/YEbq15JKJwIX+S6m@paquier.xyz
    • Track total amounts of times spent writing and syncing WAL data to disk. · ff99918c
      Fujii Masao authored
      This commit adds a new GUC, track_wal_io_timing.  When it is enabled,
      the total amounts of time that XLogWrite spends writing and
      issue_xlog_fsync spends syncing WAL data to disk are counted in
      pg_stat_wal.  This information is useful for checking how much WAL
      writes and syncs affect performance.
      
      Enabling track_wal_io_timing makes the server query the operating
      system for the current time every time WAL is written or synced,
      which may cause significant overhead on some platforms.  To avoid
      imposing that additional overhead on servers that already have
      track_io_timing enabled, this commit introduces track_wal_io_timing
      as a separate parameter from track_io_timing.
      
      Note that WAL write and sync activity by the walreceiver is not tracked yet.
      
      This commit also makes the server track the numbers of times XLogWrite
      writes and issue_xlog_fsync syncs WAL data to disk, in pg_stat_wal,
      regardless of the setting of track_wal_io_timing.  These counters can be
      used to calculate the WAL write and sync time per request, for example.
      
      Bump PGSTAT_FILE_FORMAT_ID.
      
      Bump catalog version.
      
      Author: Masahiro Ikeda
      Reviewed-By: Japin Li, Hayato Kuroda, Masahiko Sawada, David Johnston, Fujii Masao
      Discussion: https://postgr.es/m/0509ad67b585a5b86a83d445dfa75392@oss.nttdata.com
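      The reason for a separate parameter is that reading the clock around
      every write and sync is itself measurable overhead, so the time is only
      queried when the parameter is on, while the counters are kept regardless.
      The stand-alone sketch below illustrates that pattern with ordinary file
      I/O; the variable and function names are made up, and this is not the
      server code.

      /* Conditional I/O timing in the spirit of track_wal_io_timing (sketch). */
      #include <fcntl.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/time.h>
      #include <unistd.h>

      static int     track_io_timing = 1;   /* stands in for the GUC */
      static int64_t write_time_us;         /* total time spent in write() */
      static int64_t write_count;           /* number of write() calls */

      static int64_t
      clock_usec(void)
      {
          struct timeval tv;

          gettimeofday(&tv, NULL);
          return (int64_t) tv.tv_sec * 1000000 + tv.tv_usec;
      }

      static ssize_t
      timed_write(int fd, const void *buf, size_t len)
      {
          int64_t start = 0;
          ssize_t rc;

          if (track_io_timing)
              start = clock_usec();         /* only read the clock if enabled */
          rc = write(fd, buf, len);
          if (track_io_timing)
              write_time_us += clock_usec() - start;
          write_count++;                    /* counts are kept regardless */
          return rc;
      }

      int
      main(void)
      {
          int         fd = open("/tmp/wal_demo", O_CREAT | O_WRONLY | O_TRUNC, 0600);
          const char *data = "some WAL-ish bytes\n";

          timed_write(fd, data, strlen(data));
          fsync(fd);
          close(fd);
          printf("writes: %lld, write time: %lld us\n",
                 (long long) write_count, (long long) write_time_us);
          return 0;
      }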
    • Add support for more progress reporting in COPY · 9d2d4570
      Michael Paquier authored
      The command (TO or FROM), its type (file, pipe, program or callback),
      and the number of tuples excluded by a WHERE clause in COPY FROM are
      added to the progress reporting already available.
      
      The column "lines_processed" is renamed to "tuples_processed" to
      disambiguate the meaning of this column in the cases of CSV and BINARY
      COPY and to be more consistent with the other catalog progress views.
      
      Bump catalog version, again.
      
      Author: Matthias van de Meent
      Reviewed-by: Michael Paquier, Justin Pryzby, Bharath Rupireddy, Josef Šimánek, Tomas Vondra
      Discussion: https://postgr.es/m/CAEze2WiOcgdH4aQA8NtZq-4dgvnJzp8PohdeKchPkhMY-jWZXA@mail.gmail.com
    • Remove support for SSL compression · f9264d15
      Michael Paquier authored
      PostgreSQL disabled compression as of e3bdb2d9 and the documentation has
      recommended against using it since then.  Additionally, SSL compression has
      been disabled in OpenSSL since version 1.1.0, and was disabled in many
      distributions long before that.  The most recent TLS version, TLSv1.3,
      disallows compression at the protocol level.
      
      This commit removes the feature itself, dropping support for the libpq
      parameter sslcompression (the parameter is still listed, for compatibility
      with existing connection strings, but ignored), and removes the
      equivalent field from pg_stat_ssl and, de facto, from PgBackendSSLStatus.
      
      Note that, on top of removing the ability to activate compression by
      configuration, compression is actively disabled in both frontend and
      backend to avoid overrides from local configurations.
      
      A TAP test is added for the deprecated SSL parameters to check backwards
      compatibility.
      
      Bump catalog version.
      
      Author: Daniel Gustafsson
      Reviewed-by: Peter Eisentraut, Magnus Hagander, Michael Paquier
      Discussion: https://postgr.es/m/7E384D48-11C5-441B-9EC3-F7DB1F8518F6@yesql.se
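      Actively refusing TLS-level compression regardless of local OpenSSL
      configuration is normally done when setting up the SSL context.  The
      stand-alone sketch below shows the generic OpenSSL call for that; it is
      an illustration of the mechanism, not the PostgreSQL patch itself.

      /* Generic OpenSSL sketch: refuse TLS-level compression at context setup. */
      #include <stdio.h>
      #include <openssl/ssl.h>

      int
      main(void)
      {
          SSL_CTX *ctx = SSL_CTX_new(TLS_method());

          if (ctx == NULL)
          {
              fprintf(stderr, "could not create SSL context\n");
              return 1;
          }

          /* Disable compression even if a local OpenSSL config would allow it. */
          SSL_CTX_set_options(ctx, SSL_OP_NO_COMPRESSION);

          SSL_CTX_free(ctx);
          return 0;
      }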
  3. 08 Mar, 2021 5 commits
    • Complain if a function-in-FROM returns a set when it shouldn't. · d4545dc1
      Tom Lane authored
      Throw a "function protocol violation" error if a function in FROM
      tries to return a set though it wasn't marked proretset.  Although
      such cases work at the moment, it doesn't seem like something we
      want to guarantee will keep working.  Besides, there are other
      negative consequences of not setting the proretset flag, such as
      potentially bad plans.
      
      No back-patch, since if there is any third-party code violating
      this expectation, people wouldn't appreciate us breaking it in
      a minor release.
      
      Discussion: https://postgr.es/m/1636062.1615141782@sss.pgh.pa.us
    • Properly mark pg_stat_get_subscription() as returning a set. · fed10d4e
      Tom Lane authored
      The initial catalog data for this function failed to set proretset
      or provide a prorows estimate.  It accidentally worked anyway when
      invoked in the FROM clause, because the executor isn't too picky
      about this; but the planner didn't expect the function to return
      multiple rows, which could lead to bad plans.  Also the function
      would fail if invoked in the SELECT list.
      
      We can't easily back-patch this fix, but fortunately the bug's
      consequences aren't awful in most cases.  Getting this right is
      mainly an exercise in future-proofing.
      
      Discussion: https://postgr.es/m/1636062.1615141782@sss.pgh.pa.us
    • Validate the OID argument of pg_import_system_collations(). · 5c06abb9
      Tom Lane authored
      "SELECT pg_import_system_collations(0)" caused an assertion failure.
      With a random nonzero argument --- or indeed with zero, in non-assert
      builds --- it would happily make pg_collation entries with garbage
      values of collnamespace.  These are harmless as far as I can tell
      (unless maybe the OID happens to become used for a schema, later on?).
      In any case this isn't a security issue, since the function is
      superuser-only.  But it seems like a gotcha for unwary DBAs, so let's
      add a check that the given OID belongs to some schema.
      
      Back-patch to v10 where this function was introduced.
    • Further tweak memory management for regex DFAs. · 6c20bdb2
      Tom Lane authored
      Coverity is still unhappy after commit 190c7988, and after looking
      closer I think it might be onto something.  The callers of newdfa()
      typically drop out if v->err has been set nonzero, which newdfa()
      is faithfully doing if it fails.  However, what if v->err was already
      nonzero before we entered newdfa()?  Then newdfa() could succeed and
      the caller would promptly leak its result.
      
      I don't think this scenario can actually happen, but the predicate
      "v->err is always zero when newdfa() is called" seems difficult to be
      entirely sure of; there's a good deal of code that potentially could
      get that wrong.
      
      It seems better to adjust the callers to directly check for a null
      result instead of relying on ISERR() tests.  This is slightly cheaper
      than the previous coding anyway.
      
      Lacking evidence that there's any real bug, no back-patch.
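      The hazard is a general one: when an allocator both returns a pointer
      and reports errors through a shared error field, a caller that tests only
      the error field can leak an allocation made while the field was already
      set.  The stand-alone sketch below shows the safer convention of checking
      the returned pointer directly; it is a generic illustration, not the
      regex code.

      /* Generic sketch: check the returned pointer, not a shared error flag. */
      #include <stdio.h>
      #include <stdlib.h>

      struct vars
      {
          int err;                    /* shared error indicator */
      };

      static double *
      new_thing(struct vars *v)
      {
          double *p = malloc(sizeof(double));

          if (p == NULL)
              v->err = 1;             /* report failure and return NULL */
          return p;
      }

      int
      main(void)
      {
          struct vars v = {1};        /* err already set by an earlier step */
          double     *d = new_thing(&v);

          /* Testing only v.err here would bail out and leak d; test d instead. */
          if (d == NULL)
          {
              fprintf(stderr, "allocation failed\n");
              return 1;
          }
          free(d);
          return 0;
      }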
    • Track replication origin progress for rollbacks. · 8a812e51
      Amit Kapila authored
      Commit 1eb6d652 allowed replication origin replay progress to be tracked
      for 2PC, but it was not complete: it failed to properly track the progress
      for ROLLBACK PREPARED, in particular missing the corresponding update to
      the recovery code.  Additionally, we need to allow tracking it on
      subscriber nodes where wal_level might not be logical.
      
      This is required for tracking the decoding of 2PC, which was committed in
      PG14 (a271a1b5); and since nobody has complained about this till now, it is
      not backpatched.
      
      Author: Amit Kapila
      Reviewed-by: Michael Paquier and Ajin Cherian
      Discussion: https://postgr.es/m/CAA4eK1L-kHmMnSdrRW6UhRbCjR7cgh04c+6psY15qzT6ktcd+g@mail.gmail.com
  4. 06 Mar, 2021 5 commits
  5. 05 Mar, 2021 4 commits
  6. 04 Mar, 2021 5 commits
  7. 03 Mar, 2021 9 commits