1. 24 Mar, 2017 12 commits
    • Robert Haas
      Improve access to parallel query from procedural languages. · 61c2e1a9
      Robert Haas authored
      In SQL, the ability to use parallel query was previously contingent on
      fcache->readonly_func, which is only set for non-volatile functions;
      but the volatility of a function has no bearing on whether queries
      inside it can use parallelism.  Remove that condition.
      
      SPI_execute and SPI_execute_with_args always run the plan just once,
      though not necessarily to completion.  Given the changes in commit
      691b8d59, it's sensible to pass
      CURSOR_OPT_PARALLEL_OK here, so do that.  This improves access to
      parallelism for any caller that uses these functions to execute
      queries.  Such callers include plperl, plpython, pltcl, and plpgsql,
      though it's not the case that they all use these functions
      exclusively.
      
      In plpgsql, allow parallel query for plain SELECT queries (as
      opposed to PERFORM, which already worked) and for plain expressions
      (which probably won't go through the executor at all, because they
      will likely be simple expressions, but if they do then this helps).
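
      As a hedged illustration of the plpgsql change (the table and function
      below are invented for this sketch, not taken from the commit), a plain
      SELECT inside a PL/pgSQL function can now be considered for a parallel
      plan:

      ```sql
      -- Hypothetical example; "big_table" and the function name are invented.
      CREATE FUNCTION count_big_rows() RETURNS bigint AS $$
      DECLARE
        n bigint;
      BEGIN
        -- A plain SELECT like this is now eligible for parallel query,
        -- regardless of the function's volatility.
        SELECT count(*) INTO n FROM big_table;
        RETURN n;
      END;
      $$ LANGUAGE plpgsql;
      ```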
      
      Rafia Sabih and Robert Haas, reviewed by Dilip Kumar and Amit Kapila
      
      Discussion: http://postgr.es/m/CAOGQiiMfJ+4SQwgG=6CVHWoisiU0+7jtXSuiyXBM3y=A=eJzmg@mail.gmail.com
    • Alvaro Herrera
      Fix use-after-free bug · 8082bea2
      Alvaro Herrera authored
      Detected by buildfarm member prion
    • Simon Riggs
      Reverting 42b4b0b2 · 3428ef79
      Simon Riggs authored
      Buildfarm issues and other reported issues
    • Fujii Masao
      Make VACUUM VERBOSE report the number of skipped frozen pages. · 70adf2fb
      Fujii Masao authored
      Previously, manual VACUUM did not report the number of skipped frozen
      pages even when the VERBOSE option was specified.  But this information
      is helpful for monitoring VACUUM activity, and autovacuum already
      reports that number in the log file when the condition of
      log_autovacuum_min_duration is met.
      
      This commit changes VACUUM VERBOSE so that it reports the number
      of frozen pages that it skips.
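
      A minimal sketch of the user-visible effect (the table name is
      invented, and the exact wording of the output line may differ):

      ```sql
      -- Hypothetical example; "mytab" is an invented table name.
      VACUUM VERBOSE mytab;
      -- The INFO output now includes a count of frozen pages that were
      -- skipped, matching what autovacuum logs when
      -- log_autovacuum_min_duration is met.
      ```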
      
      Author: Masahiko Sawada
      Reviewed-by: Yugo Nagata and Jim Nasby
      
      Discussion: http://postgr.es/m/CAD21AoDZQKCxo0L39Mrq08cONNkXQKXuh=2DP1Q8ebmt35SoaA@mail.gmail.com
    • Alvaro Herrera
      Implement multivariate n-distinct coefficients · 7b504eb2
      Alvaro Herrera authored
      Add support for explicitly declared statistic objects (CREATE
      STATISTICS), allowing collection of statistics on more complex
      combinations than individual table columns.  Companion commands DROP
      STATISTICS and ALTER STATISTICS ... OWNER TO / SET SCHEMA / RENAME are
      added too.  All this DDL has been designed so that more statistic types
      can be added later on, such as multivariate most-common-values and
      multivariate histograms between columns of a single table, leaving room
      for permitting columns on multiple tables, too, as well as expressions.
      
      This commit only adds support for collection of n-distinct coefficients
      on user-specified sets of columns in a single table.  This is useful to
      estimate number of distinct groups in GROUP BY and DISTINCT clauses;
      estimation errors there can cause over-allocation of memory in hashed
      aggregates, for instance, so it's a worthwhile problem to solve.  A new
      special pseudo-type pg_ndistinct is used.
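
      A sketch of the new DDL (object names are invented, and the syntax is
      shown as it appears in the PostgreSQL 10 release; the grammar was still
      being adjusted around the time of this commit):

      ```sql
      -- Hypothetical example; table, column, and statistics names invented.
      -- Collect an n-distinct coefficient on a correlated column pair to
      -- improve GROUP BY / DISTINCT group-count estimates.
      CREATE STATISTICS addr_stats (ndistinct) ON city, zip FROM addresses;
      ANALYZE addresses;

      -- The planner can now estimate the number of groups more accurately:
      SELECT city, zip, count(*) FROM addresses GROUP BY city, zip;

      DROP STATISTICS addr_stats;
      ```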
      
      (num-distinct estimation was deemed sufficiently useful by itself that
      this is worthwhile even if no further statistic types are added
      immediately; so much so that another version of essentially the same
      functionality was submitted by Kyotaro Horiguchi:
      https://postgr.es/m/20150828.173334.114731693.horiguchi.kyotaro@lab.ntt.co.jp
      though this commit does not use that code.)
      
      Author: Tomas Vondra.  Some code rework by Álvaro.
      Reviewed-by: Dean Rasheed, David Rowley, Kyotaro Horiguchi, Jeff Janes,
          Ideriha Takeshi
      Discussion: https://postgr.es/m/543AFA15.4080608@fuzzy.cz
          https://postgr.es/m/20170320190220.ixlaueanxegqd5gr@alvherre.pgsql
    • Robert Haas
      plpgsql: Don't generate parallel plans for RETURN QUERY. · f120b614
      Robert Haas authored
      Commit 7aea8e4f allowed a parallel
      plan to be generated for a RETURN QUERY or RETURN QUERY EXECUTE
      statement in a PL/pgSQL block, but that's a bad idea because plpgsql
      asks the executor for 50 rows at a time.  That means that we'll always
      be running serially a plan that was intended for parallel execution,
      which is not a good idea.  Fix by not requesting a parallel plan from
      the outset.
      
      Per discussion, back-patch to 9.6.  There is a slight risk that, due
      to optimizer error, somebody could have a case where the parallel plan
      executed serially is actually faster than the supposedly-best serial
      plan, but the consensus seems to be that that's not sufficient
      justification for leaving 9.6 unpatched.
      
      Discussion: http://postgr.es/m/CA+TgmoZ_ZuH+auEeeWnmtorPsgc_SmP+XWbDsJ+cWvWBSjNwDQ@mail.gmail.com
      Discussion: http://postgr.es/m/CA+TgmobXEhvHbJtWDuPZM9bVSLiTj-kShxQJ2uM5GPDze9fRYA@mail.gmail.com
    • Robert Haas
      Add a txid_status function. · 857ee8e3
      Robert Haas authored
      If your connection to the database server is lost while a COMMIT is
      in progress, it may be difficult to figure out whether the COMMIT was
      successful or not.  This function will tell you, provided that you
      don't wait too long to ask.  It may be useful in other situations,
      too.
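
      The intended usage pattern looks roughly like this (the XID value is
      illustrative):

      ```sql
      SELECT txid_current();      -- note the XID before issuing COMMIT
      -- ... the connection is lost while COMMIT is in progress ...
      -- After reconnecting, ask what became of that transaction:
      SELECT txid_status(1234);   -- 'committed', 'aborted', or 'in progress';
                                  -- NULL if the XID is too old to look up
      ```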
      
      Craig Ringer, reviewed by Simon Riggs and by me
      
      Discussion: http://postgr.es/m/CAMsr+YHQiWNEi0daCTboS40T+V5s_+dst3PYv_8v2wNVH+Xx4g@mail.gmail.com
    • Simon Riggs
      Avoid SnapshotResetXmin() during AtEOXact_Snapshot() · 42b4b0b2
      Simon Riggs authored
      For normal commits and aborts we already reset PgXact->xmin.
      Avoiding touching highly contended shmem improves concurrent
      performance.
      
      Simon Riggs
      
      Discussion: CANP8+jJdXE9b+b9F8CQT-LuxxO0PBCB-SZFfMVAdp+akqo4zfg@mail.gmail.com
    • Peter Eisentraut
      Handle empty result set in libpqrcv_exec · 8398c836
      Peter Eisentraut authored
      Always return tupleslot and tupledesc from libpqrcv_exec.  This avoids
      requiring callers to handle that separately.
      
      Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
      Reported-by: Michael Banck <michael.banck@credativ.de>
    • Heikki Linnakangas
      Allow SCRAM authentication, when pg_hba.conf says 'md5'. · 7ac955b3
      Heikki Linnakangas authored
      If a user has a SCRAM verifier in pg_authid.rolpassword, there's no reason
      we cannot attempt to perform SCRAM authentication instead of MD5. The worst
      that can happen is that the client doesn't support SCRAM, and the
      authentication will fail. But previously, it would fail for sure, because
      we would not even try. SCRAM is strictly more secure than MD5, so there's
      no harm in trying it. This allows for a more graceful transition from MD5
      passwords to SCRAM, as user passwords can be changed to SCRAM verifiers
      incrementally, without changing pg_hba.conf.
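
      A sketch of the incremental transition (the role name is invented, and
      the password_encryption value is spelled as in the PostgreSQL 10
      release, which may differ from this point in development):

      ```sql
      -- pg_hba.conf can keep saying "md5" throughout:
      --   host  all  all  0.0.0.0/0  md5
      -- Roles are migrated one at a time by storing a SCRAM verifier:
      SET password_encryption = 'scram-sha-256';
      ALTER ROLE alice PASSWORD 'new-password';
      -- alice is now challenged with SCRAM; roles that still have MD5
      -- hashes continue to authenticate with MD5.
      ```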
      
      Refactor the code in auth.c to support that better. Notably, we now have to
      look up the user's pg_authid entry before sending the password challenge,
      also when performing MD5 authentication. Also simplify the concept of a
      "doomed" authentication. Previously, if a user had a password, but it had
      expired, we still performed SCRAM authentication (but always returned error
      at the end) using the salt and iteration count from the expired password.
      Now we construct a fake salt, like we do when the user doesn't have a
      password or doesn't exist at all. That simplifies get_role_password(),
      and we don't need to distinguish the "user has expired password" and
      "user does not exist" cases in auth.c.
      
      On second thoughts, also rename uaSASL to uaSCRAM. It refers to the
      mechanism specified in pg_hba.conf, and while we use SASL for SCRAM
      authentication at the protocol level, the mechanism should be called SCRAM,
      not SASL. As a comparison, we have uaLDAP, even though it looks like the
      plain 'password' authentication at the protocol level.
      
      Discussion: https://www.postgresql.org/message-id/6425.1489506016@sss.pgh.pa.us
      Reviewed-by: Michael Paquier
    • Teodor Sigaev
      Fix backup canceling · 78874531
      Teodor Sigaev authored
      An assert-enabled build crashes, and a build without asserts works
      incorrectly: it may fail to reset the forcing of full-page writes, and
      it may prevent starting a new exclusive backup with the same name as
      the cancelled one.  The patch replaces the pair of booleans
      nonexclusive_backup_running/exclusive_backup_running with a single
      enum that correctly describes the backup state.
      
      Backpatch to 9.6, where the bug was introduced.
      
      Reported-by: David Steele
      Authors: Michael Paquier, David Steele
      Reviewed-by: Anastasia Lubennikova
      
      https://commitfest.postgresql.org/13/1068/
    • Tom Lane
      Avoid syntax error on platforms that have neither LOCALE_T nor ICU. · 457a4448
      Tom Lane authored
      Buildfarm member anole sees this union as empty, and doesn't like it.
  2. 23 Mar, 2017 16 commits
    • Bruce Momjian
      218747d2
    • Peter Eisentraut
      2e0c17dc
    • Peter Eisentraut
      Fix crash in ICU patch · 524e0f7a
      Peter Eisentraut authored
      This only happened with single-byte encodings.
    • Robert Haas
      Fix enum definition. · c23b186a
      Robert Haas authored
      Commit 249cf070 assigned to one of
      the labels in the middle of the enum the value that should have been
      assigned to its first member.  Rushabh's patch didn't have that
      defect as submitted, but I managed to mess it up while editing.
      Repair.
    • Peter Eisentraut
      ICU support · eccfef81
      Peter Eisentraut authored
      Add a column collprovider to pg_collation that determines which library
      provides the collation data.  The existing choices are default and libc,
      and this adds an icu choice, which uses the ICU4C library.
      
      The pg_locale_t type is changed to a union that contains the
      provider-specific locale handles.  Users of locale information are
      changed to look into that struct for the appropriate handle to use.
      
      Also add a collversion column that records the version of the collation
      when it is created, and check at run time whether it is still the same.
      This detects potentially incompatible library upgrades that can corrupt
      indexes and other structures.  This is currently only supported by
      ICU-provided collations.
      
      initdb initializes the default collation set as before from the `locale
      -a` output but also adds all available ICU locales with a "-x-icu"
      appended.
      
      Currently, ICU-provided collations can only be explicitly named
      collations.  The global database locales are still always libc-provided.
      
      ICU support is enabled by configure --with-icu.
      Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
      Reviewed-by: Andreas Karlsson <andreas@proxel.se>
    • Robert Haas
      Track the oldest XID that can be safely looked up in CLOG. · ea42cc18
      Robert Haas authored
      This provides infrastructure for looking up arbitrary, user-supplied
      XIDs without a risk of scary-looking failures from within the clog
      module.  Normally, the oldest XID that can be safely looked up in CLOG
      is the same as the oldest XID that can be reused without causing
      wraparound, and the latter is already tracked.  However, while
      truncation is in progress, the values are different, so we must
      keep track of them separately.
      
      Craig Ringer, reviewed by Simon Riggs and by me.
      
      Discussion: http://postgr.es/m/CAMsr+YHQiWNEi0daCTboS40T+V5s_+dst3PYv_8v2wNVH+Xx4g@mail.gmail.com
    • Peter Eisentraut
      Remove createlang and droplang · 50c956ad
      Peter Eisentraut authored
      They have been deprecated since PostgreSQL 9.1.
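
      The replacement, available since procedural languages became
      extensions in 9.1, is the extension DDL:

      ```sql
      -- Instead of "createlang plperl mydb" and "droplang plperl mydb":
      CREATE EXTENSION plperl;
      DROP EXTENSION plperl;
      ```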
      Reviewed-by: Magnus Hagander <magnus@hagander.net>
      Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
    • Robert Haas
      Allow for parallel execution whenever ExecutorRun() is done only once. · 691b8d59
      Robert Haas authored
      Previously, it was unsafe to execute a plan in parallel if
      ExecutorRun() might be called with a non-zero row count.  However,
      it's quite easy to fix things up so that we can support that case,
      provided that it is known that we will never call ExecutorRun() a
      second time for the same QueryDesc.  Add infrastructure to signal
      this, and cross-checks to make sure that a caller who claims this is
      true doesn't later renege.
      
      While that pattern never happens with queries received directly from a
      client -- there's no way to know whether multiple Execute messages
      will be sent unless the first one requests all the rows -- it's pretty
      common for queries originating from procedural languages, which often
      limit the result to a single tuple or to a user-specified number of
      tuples.
      
      This commit doesn't actually enable parallelism in any additional
      cases, because currently none of the places that would be able to
      benefit from this infrastructure pass CURSOR_OPT_PARALLEL_OK in the
      first place, but it makes it much more palatable to pass
      CURSOR_OPT_PARALLEL_OK in places where we currently don't, because it
      eliminates some cases where we'd end up having to run the parallel
      plan serially.
      
      Patch by me, based on some ideas from Rafia Sabih and corrected by
      Rafia Sabih based on feedback from Dilip Kumar and myself.
      
      Discussion: http://postgr.es/m/CA+TgmobXEhvHbJtWDuPZM9bVSLiTj-kShxQJ2uM5GPDze9fRYA@mail.gmail.com
    • Teodor Sigaev
      Reduce page locking in GIN vacuum · 218f5158
      Teodor Sigaev authored
      While cleaning a posting tree, GIN vacuum could lock the whole tree
      for a long time by holding LockBufferForCleanup() on its root.  The
      patch changes this in two ways: first, the cleanup lock is taken only
      if there is an empty page (which should be deleted) and, second, it
      tries to lock only a subtree, not the whole posting tree.
      
      Author: Andrey Borodin, with minor editing by me
      Reviewed-by: Jeff Davis, me
      
      https://commitfest.postgresql.org/13/896/
    • Peter Eisentraut
      Remove trailing comma from enum definition · 73561013
      Peter Eisentraut authored
      Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
    • Peter Eisentraut
      Assorted compilation and test fixes · 128e6ee0
      Peter Eisentraut authored
      Related to 7c4f5240, per buildfarm.
      
      Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
    • Simon Riggs
      Minor spelling correction in comment · 232c5322
      Simon Riggs authored
      Jon Nelson
    • Simon Riggs
      Replication lag tracking for walsenders · 6912acc0
      Simon Riggs authored
      Adds write_lag, flush_lag and replay_lag cols to pg_stat_replication.
      
      Implements a lag tracker module that reports the lag times based upon
      measurements of the time taken for recent WAL to be written, flushed and
      replayed and for the sender to hear about it. These times
      represent the commit lag that was (or would have been) introduced by each
      synchronous commit level, if the remote server was configured as a
      synchronous standby.  For an asynchronous standby, the replay_lag column
      approximates the delay before recent transactions became visible to queries.
      If the standby server has entirely caught up with the sending server and
      there is no more WAL activity, the most recently measured lag times will
      continue to be displayed for a short time and then show NULL.
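
      The new columns can be inspected on the sending server, e.g.:

      ```sql
      SELECT application_name, write_lag, flush_lag, replay_lag
      FROM pg_stat_replication;
      -- Each *_lag column is an interval approximating the commit delay
      -- that the corresponding synchronous_commit level would impose.
      ```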
      
      Physical replication lag tracking is automatic. Logical replication
      tracking is possible but is the responsibility of the logical decoding
      plugin. Tracking is a private module operating within each walsender
      individually, with values reported to shared memory; the module is not
      used outside the walsender.
      
      The design and code are good enough now to commit - kudos to the
      author. This is in many ways a difficult topic, with important and
      subtle behaviour, so it should be expected to generate discussion and
      multiple open items: test now!
      
      Author: Thomas Munro, following designs by Fujii Masao and Simon Riggs
      Review: Simon Riggs, Ian Barwick and Craig Ringer
    • Peter Eisentraut
      Logical replication support for initial data copy · 7c4f5240
      Peter Eisentraut authored
      Add functionality for a new subscription to copy the initial data in the
      tables and then sync with the ongoing apply process.
      
      For the copying, add a new internal COPY option to have the COPY source
      data provided by a callback function.  The initial data copy works on
      the subscriber by receiving COPY data from the publisher and then
      providing it locally into a COPY that writes to the destination table.
      
      A WAL receiver can now execute full SQL commands.  This is used here to
      obtain information about tables and publications.
      
      Several new options were added to CREATE and ALTER SUBSCRIPTION to
      control whether and when initial table syncing happens.
      
      Change pg_dump option --no-create-subscription-slots to
      --no-subscription-connect and use the new CREATE SUBSCRIPTION
      ... NOCONNECT option for that.
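
      A sketch of the new option as named in this commit (the connection
      string and object names are invented; the PostgreSQL 10 release later
      respells this as WITH (connect = false)):

      ```sql
      -- Create the subscription without connecting, so no slot is created
      -- and no initial copy happens yet:
      CREATE SUBSCRIPTION mysub
        CONNECTION 'host=pub.example dbname=src'
        PUBLICATION mypub
        NOCONNECT;
      ```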
      
      Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
      Tested-by: Erik Rijkers <er@xs4all.nl>
    • Magnus Hagander
      Fix grammar in comment · 707576b5
      Magnus Hagander authored
      Author: Emil Iggland
    • Stephen Frost
      Expose waitforarchive option through pg_stop_backup() · 017e4f25
      Stephen Frost authored
      Internally, we have supported the option to either wait for all of the
      WAL associated with a backup to be archived, or to return immediately.
      This option is useful to users of pg_stop_backup() as well, when they
      are reading the stop backup record position and checking that the WAL
      they need has been archived independently.
      
      This patch adds an additional, optional, argument to pg_stop_backup()
      which allows the user to indicate if they wish to wait for the WAL to be
      archived or not.  The default matches current behavior, which is to
      wait.
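
      A sketch of the call (argument names follow the PostgreSQL 10
      documentation for the non-exclusive form):

      ```sql
      SELECT pg_start_backup('mybackup', false, false);
      -- ... copy the data directory ...
      -- Wait for all required WAL to be archived (the default):
      SELECT * FROM pg_stop_backup(false, true);
      -- Passing false as the second argument returns immediately instead,
      -- leaving the caller to verify archiving independently.
      ```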
      
      Author: David Steele, with some minor changes and doc updates by me.
      Reviewed by: Takayuki Tsunakawa, Fujii Masao
      Discussion: https://postgr.es/m/758e3fd1-45b4-5e28-75cd-e9e7f93a4c02@pgmasters.net
  3. 22 Mar, 2017 12 commits