  1. 20 May, 2019 6 commits
    • Make VACUUM accept 1 and 0 as a boolean value. · fc7c281f
      Fujii Masao authored
      Commit 41b54ba7 allowed existing VACUUM options to take a boolean
      argument. The documentation says the boolean values VACUUM accepts
      are true, false, on, off, 1, and 0. But previously the parser
      failed to accept 1 and 0 as boolean values in VACUUM syntax because
      of the lack of a NumericOnly clause for vac_analyze_option_arg in gram.y.
      
      This commit adds such a NumericOnly clause so that VACUUM options
      can also take 1 and 0 as boolean values.
      
      Discussion: https://postgr.es/m/CAHGQGwGYg82A8UCQxZe7Zn9MnyUBGdyB=1CNpKF3jBny+RbyfA@mail.gmail.com
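
      For illustration, a minimal sketch of the syntax this enables (the
      table name vactest is made up):

          -- 1/0 now work the same as true/false and on/off:
          VACUUM (VERBOSE 1, ANALYZE 0) vactest;
          VACUUM (FREEZE true, VERBOSE off) vactest;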
    • Translation updates · 3c439a58
      Peter Eisentraut authored
      Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
      Source-Git-Hash: a20bf6b8a5b4e32450967055eb5b07cee4704edd
    • Remove bug.template file · 8bbb8166
      Peter Eisentraut authored
      It's outdated and not really in use anymore.
      
      Discussion: https://www.postgresql.org/message-id/flat/cf7ed2b1-1ebe-83cf-e05e-d5943f67af2d%402ndquadrant.com
    • Remove outdated comment in copy.c. · fb504c5e
      Andres Freund authored
    • Minimally fix partial aggregation for aggregates that don't have one argument. · 26572832
      Andres Freund authored
      For partial aggregation combine steps,
      AggStatePerTrans->numTransInputs was set to the transition function's
      number of inputs, rather than the combine function's number of
      inputs (always 1).
      
      That led partial aggregates with strict combine functions to perform
      the NOT NULL input check required by strictness incorrectly. When the
      aggregate wasn't passed exactly one argument, the strictness check was
      either omitted (in the 0-argument case) or too many arguments were
      checked. In the latter case we'd read beyond the end of
      FunctionCallInfoData->args (only in master).
      
      AggStatePerTrans->numTransInputs actually has been wrong since 9.6,
      where partial aggregates were added. But it turns out not to be
      an active problem in 9.6 and 10, because numTransInputs wasn't used at
      all for combine functions: before c253b722f6 there simply was no NULL
      check for the input to strict trans functions, and after that the
      check was simply hardcoded for the right offset in fcinfo, as it's
      done by code specific to combine functions.
      
      In bf6c614a (11) the strictness check was generalized, with common
      code doing the strictness checks for both plain and combine transition
      functions, based on numTransInputs. For combine functions this led to
      not emitting an expression step to check for strict input in the
      0-argument case, and in the > 1 argument case, we'd check too many
      arguments. Because the relevant fcinfo->isnull[2..] was always
      zero-initialized (more or less by accident, by being part of the
      AggStatePerTrans struct, which is palloc0'ed), there was no observable
      damage in the latter case before a9c35cf8; we just checked too many
      array elements.
      
      Due to the changes in a9c35cf8, the > 1 argument bug became visible,
      because these days fcinfo is a) dynamically allocated without being
      zeroed and b) exactly the length required for the number of specified
      arguments (hardcoded to 2 in this case).
      
      This commit only contains a fairly minimal fix, setting numTransInputs
      to a hardcoded 1 when building a pertrans for a combine function. It
      seems likely that we'll want to clean this up further (e.g. the
      arguments to build_pertrans_for_aggref() aren't particularly meaningful
      for combine functions). But the wrap date for 12 beta1 is coming up
      fast, so it seems good to have a minimal fix in place.
      
      Backpatch to 11. While AggStatePerTrans->numTransInputs was set
      wrongly before that, the value was not used for combine functions.
      
      Reported-By: Rajkumar Raghuwanshi
      Diagnosed-By: Kyotaro Horiguchi, Jeevan Chalke, Andres Freund, David Rowley
      Author: David Rowley, Kyotaro Horiguchi, Andres Freund
      Discussion: https://postgr.es/m/CAKcux6=uZEyWyLw0N7HtR9OBc-sWEFeByEZC7t-KDf15FKxVew@mail.gmail.com
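
      As a rough illustration (example constructed here, not taken from the
      commit; table and column names are made up), the affected shape of
      query is one where an aggregate with zero or more than one argument is
      evaluated in partial + combine stages, e.g. under parallel query:

          SET max_parallel_workers_per_gather = 2;  -- encourage partial aggregation
          -- count(*) takes no arguments and corr() takes two, so neither has
          -- exactly one transition input; their partial states are merged by
          -- a combine step, which is where numTransInputs was wrong.
          SELECT count(*), corr(height, weight) FROM measurements;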
    • Fix some grammar in documentation of spgist and pgbench · 03310dbe
      Michael Paquier authored
      Discussion: https://postgr.es/m/92961161-9b49-e42f-0a72-d5d47e0ed4de@postgrespro.ru
      Author: Liudmila Mantrova
      Reviewed-by: Jonathan Katz, Tom Lane, Michael Paquier
      Backpatch-through: 9.4
  2. 19 May, 2019 10 commits
  3. 18 May, 2019 5 commits
  4. 17 May, 2019 3 commits
    • Restructure creation of run-time pruning steps. · 6630ccad
      Tom Lane authored
      Previously, gen_partprune_steps() always built executor pruning steps
      using all suitable clauses, including those containing PARAM_EXEC
      Params.  This meant that the pruning steps were only completely safe
      for executor run-time (scan start) pruning.  To prune at executor
      startup, we had to ignore the steps involving exec Params.  But this
      doesn't really work in general, since there may be logic changes
      needed as well --- for example, pruning according to the last operator's
      btree strategy is the wrong thing if we're not applying that operator.
      The rules embodied in gen_partprune_steps() and its minions are
      sufficiently complicated that tracking their incremental effects in
      other logic seems quite impractical.
      
      Short of a complete redesign, the only safe fix seems to be to run
      gen_partprune_steps() twice, once to create executor startup pruning
      steps and then again for run-time pruning steps.  We can save a few
      cycles however by noting during the first scan whether we rejected
      any clauses because they involved exec Params --- if not, we don't
      need to do the second scan.
      
      In support of this, refactor the internal APIs in partprune.c to make
      more use of passing information in the GeneratePruningStepsContext
      struct, rather than as separate arguments.
      
      This is, I hope, the last piece of our response to a bug report from
      Alan Jackson.  Back-patch to v11 where this code came in.
      
      Discussion: https://postgr.es/m/FAD28A83-AC73-489E-A058-2681FA31D648@tvsquared.com
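
      For context, a sketch of the two pruning times involved (example
      constructed here; events and batches are made-up tables, with events
      range-partitioned on event_time):

          -- Startup pruning: $1 is fixed once execution begins.
          PREPARE q(timestamptz) AS
              SELECT count(*) FROM events WHERE event_time >= $1;
          -- Run-time (exec Param) pruning: the comparison value comes from
          -- the outer side of a join, so it can change for every outer row.
          SELECT * FROM batches b JOIN events e ON e.event_time >= b.cutoff_time;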
    • Fix regression test outputs · 6ba500ca
      Michael Paquier authored
      75445c15 updated some error messages, which caused various test
      failures across the tree, so fix the newly-expected output.
      
      Author: Michael Paquier
      Reviewed-by: Tom Lane
      Discussion: https://postgr.es/m/8332.1558048838@sss.pgh.pa.us
  5. 16 May, 2019 5 commits
    • 41998f90
      Michael Paquier authored
    • Remove extra nbtree half-dead internal page check. · 3f58cc6d
      Peter Geoghegan authored
      It's not safe for nbtree VACUUM to attempt to delete a target page whose
      right sibling is already half-dead, since that would fail the
      cross-check when VACUUM attempts to re-find a downlink to the right
      sibling in the parent page.  Logic to prevent this from happening was
      added by commit 8da31837, which addressed a bug in the overhaul of
      page deletion that went into PostgreSQL 9.4 (commit efada2b8).
      VACUUM was made to check the right sibling page, and back off when it
      happened to be half-dead already.
      
      However, it is only truly necessary to do the right sibling check on the
      leaf level, since that transitively determines if the deletion target's
      parent's right sibling page is itself undergoing deletion.  Remove the
      internal page level check, and add a comment explaining why the leaf
      level check alone suffices.
      
      The extra check is also unnecessary due to the fact that internal pages
      that are marked half-dead are generally considered corrupt.  Commit
      efada2b8 established the principle that there should never be
      half-dead internal pages (internal pages pending deletion are possible,
      but that status is never directly represented in the internal page).
      VACUUM will complain about corruption when it encounters half-dead
      internal pages, so VACUUM is bound to raise an error one way or another
      when an nbtree index has a half-dead internal page (contrib/amcheck will
      also report that the page is corrupt).
      
      It's possible that a pg_upgrade'd 9.3 database will still have half-dead
      internal pages, so it may seem like there is an argument for leaving the
      check in place to reliably get a cleaner error message that advises the
      user to REINDEX.  However, leaf pages are also deleted in the first
      phase of deletion prior to PostgreSQL 9.4, so I believe we won't even
      attempt to re-find the parent page anyway (we won't have the fully
      deleted leaf page as the right sibling of our target page, so we won't
      even try to find a downlink for it).
      
      Discussion: https://postgr.es/m/CAH2-Wzm_ntmqJjWLRyKzimFmFvk+BnVAvUpaA4s1h9Ja58woaQ@mail.gmail.com
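
      For reference, contrib/amcheck (mentioned above) can be pointed at a
      suspect index; a minimal sketch, with a made-up index name:

          CREATE EXTENSION IF NOT EXISTS amcheck;
          SELECT bt_index_check('some_btree_index');  -- errors if the index
                                                      -- structure is corrupt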
    • Fix bogus logic for combining range-partitioned columns during pruning. · 3922f106
      Tom Lane authored
      gen_prune_steps_from_opexps's notion of how to do this was overly
      complicated and underly correct.
      
      Per discussion of a report from Alan Jackson (though this fixes only one
      aspect of that problem).  Back-patch to v11 where this code came in.
      
      Amit Langote
      
      Discussion: https://postgr.es/m/FAD28A83-AC73-489E-A058-2681FA31D648@tvsquared.com
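
      For orientation (example constructed here, not from the commit), the
      code in question deals with clauses on a multi-column range partition
      key, where conditions on successive key columns must be combined into
      a single pruning comparison:

          CREATE TABLE rp (a int, b int) PARTITION BY RANGE (a, b);
          CREATE TABLE rp1 PARTITION OF rp FOR VALUES FROM (0, 0) TO (10, 0);
          CREATE TABLE rp2 PARTITION OF rp FOR VALUES FROM (10, 0) TO (20, 0);
          -- Plan-time pruning has to combine the clauses on a and b correctly:
          EXPLAIN SELECT * FROM rp WHERE a = 10 AND b < 5;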
    • Fix partition pruning to treat stable comparison operators properly. · 4b1fcb43
      Tom Lane authored
      Cross-type comparison operators in a btree or hash opclass might be
      only stable not immutable (this is true of timestamp vs. timestamptz
      for example).  partprune.c ignored this possibility and would perform
      plan-time pruning with them anyway, possibly leading to wrong answers
      if the environment changed between planning and execution.
      
      To fix, teach gen_partprune_steps() to do things differently when
      creating plan-time pruning steps vs. run-time pruning steps.
      analyze_partkey_exprs() also needs an extra check, which is rather
      annoying but now is not the time to restructure things enough to
      avoid that.
      
      While at it, simplify the logic for the plan-time case a little
      by insisting that the comparison value be a Const and nothing else.
      This relies on the assumption that eval_const_expressions will have
      reduced any immutable expression to a Const; which is not quite
      100% true, but certainly any case that comes up often enough to be
      interesting should have simplification logic there.
      
      Also improve a bunch of inadequate/obsolete/wrong comments.
      
      Per discussion of a report from Alan Jackson (though this fixes only one
      aspect of that problem).  Back-patch to v11 where this code came in.
      
      David Rowley, with some further hacking by me
      
      Discussion: https://postgr.es/m/FAD28A83-AC73-489E-A058-2681FA31D648@tvsquared.com
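
      A sketch of the kind of query affected (table and column names are
      made up; events is assumed to be range-partitioned on a timestamptz
      column created_at):

          -- timestamptz >= timestamp uses a cross-type operator that depends
          -- on the TimeZone setting, so it is only stable, not immutable;
          -- pruning with it must now happen at execution time, not plan time.
          SELECT count(*) FROM events
          WHERE  created_at >= TIMESTAMP '2019-05-01';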
  6. 15 May, 2019 4 commits
    • Remove obsolete nbtree insertion comment. · 489e431b
      Peter Geoghegan authored
      Remove a Berkeley-era comment above _bt_insertonpg() that admonishes the
      reader to grok Lehman and Yao's paper before making any changes.  This
      made a certain amount of sense back when _bt_insertonpg() was
      responsible for most of the things that are now spread across
      _bt_insertonpg(), _bt_findinsertloc(), _bt_insert_parent(), and
      _bt_split(), but it doesn't work like that anymore.
      
      I believe that this comment alludes to the need to "couple" or "crab"
      buffer locks as we ascend the tree as page splits cascade upwards.  The
      nbtree README already explains this in detail, which seems sufficient.
      Besides, the changes to page splits made by commit 40dae7ec altered
      the exact details of how buffer locks are retained during splits; Lehman
      and Yao's original algorithm seems to release the lock on the left child
      page/buffer slightly earlier than _bt_insertonpg()/_bt_insert_parent()
      can.
    • Remove no-longer-used typedef. · 8a0f0ad5
      Tom Lane authored
      struct ClonedConstraint is no longer needed, so delete it.
      
      Discussion: https://postgr.es/m/18102.1557947143@sss.pgh.pa.us
    • Reverse order of newitem nbtree candidate splits. · 7505da2f
      Peter Geoghegan authored
      Commit fab25024, which taught nbtree to choose candidate split points
      more carefully, had _bt_findsplitloc() record all possible split points
      in an initial pass over a page that is about to be split.  The order
      in which candidate split points were processed and stored was assumed to
      match the offset number order of split points on an imaginary version of
      the page that contains the same items as the original, but also fits
      newitem (the item that provoked the split precisely because it didn't
      fit).
      
      However, the order of split points in the final array was not quite what
      was expected: the split point that makes newitem the firstright item
      came after the split point that makes newitem the lastleft item -- not
      before.  As a result, _bt_findsplitloc() could get confused about the
      leftmost and rightmost tuples among all possible split points recorded
      for the page.  This seems to have no appreciable impact on the quality
      of the final split point chosen by _bt_findsplitloc(), but it's still
      wrong.
      
      To fix, switch the order in which newitem candidate splits are recorded.
      This also makes it possible to describe candidate split points in
      terms of which pair of adjoining tuples enclose the split point within
      _bt_findsplitloc(), making it clearer why it's generally safe for
      _bt_split() to expect lastleft and firstright tuples.
    • docs: properly indent PG 12 release notes · a429164e
      Bruce Momjian authored
  7. 14 May, 2019 7 commits
    • Handle table_complete_speculative's succeeded argument as documented. · aa4b8c61
      Andres Freund authored
      For some reason both the callsite and the heapam implementation had
      the meaning inverted (i.e. succeeded == true was passed in case of
      conflict). That's confusing.
      
      I (Andres) briefly pondered whether it'd be better to rename
      table_complete_speculative's argument to 'bool specConflict' or such,
      but decided not to. The 'complete' in the function name for me makes
      `succeeded` sound a bit better.
      
      Reported-By: Ashwin Agrawal, Melanie Plageman, Heikki Linnakangas
      Discussion:
         https://postgr.es/m/CALfoeitk7-TACwYv3hCw45FNPjkA86RfXg4iQ5kAOPhR+F1Y4w@mail.gmail.com
         https://postgr.es/m/97673451-339f-b21e-a781-998d06b1067c@iki.fi
    • Add isolation test for INSERT ON CONFLICT speculative insertion failure. · 08e2edc0
      Andres Freund authored
      This path previously was not reliably covered. There was some
      heuristic coverage via insert-conflict-toast.spec, but that test is
      not deterministic, and only tested for a somewhat specific bug.
      
      Backpatch, as this is a complicated and otherwise untested code
      path. Unfortunately 9.5 cannot handle two waiting sessions, and thus
      cannot execute this test.
      
      Triggered by a conversation with Melanie Plageman.
      
      Author: Andres Freund
      Discussion: https://postgr.es/m/CAAKRu_a7hbyrk=wveHYhr4LbcRnRCG=yPUVoQYB9YO1CdUBE9Q@mail.gmail.com
      Backpatch: 9.5-
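
      Roughly, the scenario the new test exercises (sketched here in plain
      SQL rather than the isolation tester's step syntax; the table name is
      made up): two sessions speculatively insert the same key, and the
      loser has to abandon its speculative insertion:

          CREATE TABLE upserttest (key text PRIMARY KEY, data text);
          -- run concurrently in two sessions:
          INSERT INTO upserttest VALUES ('k1', 'session 1')
              ON CONFLICT (key) DO NOTHING;
          INSERT INTO upserttest VALUES ('k1', 'session 2')
              ON CONFLICT (key) DO NOTHING;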
    • Fix "make clean" to clean out junk files left behind after ssl tests. · 6d2fba31
      Tom Lane authored
      We .gitignore'd this junk, but we didn't actually remove it.
    • Move logging.h and logging.c from src/fe_utils/ to src/common/. · fc9a62af
      Tom Lane authored
      The original placement of this module in src/fe_utils/ is ill-considered,
      because several src/common/ modules have dependencies on it, meaning that
      libpgcommon and libpgfeutils now have mutual dependencies.  That makes it
      pointless to have distinct libraries at all.  The intended design is that
      libpgcommon is lower-level than libpgfeutils, so only dependencies from
      the latter to the former are acceptable.
      
      We already have the precedent that fe_memutils and a couple of other
      modules in src/common/ are frontend-only, so it's not stretching anything
      out of whack to treat logging.c as a frontend-only module in src/common/.
      To the extent that such modules help provide a common frontend/backend
      environment for the rest of common/ to use, it's a reasonable design.
      (logging.c does not yet provide an ereport() emulation, but one can
      dream.)
      
      Hence, move these files over, and revert basically all of the build-system
      changes made by commit cc8d4151.  There are no places that need to grow
      new dependencies on libpgcommon, further reinforcing the idea that this
      is the right solution.
      
      Discussion: https://postgr.es/m/a912ffff-f6e4-778a-c86a-cf5c47a12933@2ndquadrant.com
    • b71dad22
      Bruce Momjian authored
    • Remove pg_rewind's private logging.h/logging.c files. · 53ddefba
      Tom Lane authored
      The existence of these files became rather confusing with the
      introduction of a widely-known logging.h header in commit cc8d4151.
      (Indeed, there's already some duplicative #includes here, perhaps
      betraying such confusion.)  The only thing left in them, after that
      commit, is a progress-reporting function that's neither general-purpose
      nor tied in any way to other logging infrastructure.  Hence, let's just
      move that function to pg_rewind.c, and get rid of the separate files.
      
      Discussion: https://postgr.es/m/3971.1557787914@sss.pgh.pa.us
    • Fix SQL-style substring() to have spec-compliant greediness behavior. · 7c850320
      Tom Lane authored
      SQL's regular-expression substring() function is defined to have a
      pattern argument that's separated into three subpatterns by escape-
      double-quote markers; the function result is the part of the input
      matching the second subpattern.  The standard makes it clear that
      if there is ambiguity about how to match the input to the subpatterns,
      the first and third subpatterns should be taken to match the smallest
      possible amount of text (i.e., they're "non greedy", in the terms of
      our regex code).  We were not doing it that way: the first subpattern
      would eat the largest possible amount of text, causing the function
      result to be shorter than what the spec requires.
      
      Fix that by attaching explicit greediness quantifiers to the
      subpatterns.  (This depends on the regex fix in commit 8a29ed05;
      before that, this didn't reliably change the regex engine's behavior.)
      
      Also, by adding parentheses around each subpattern, we ensure that
      "|" (OR) in the subpatterns behave sanely.  Previously, "|" in the
      first or third subpatterns didn't work.
      
      This patch also makes the function throw an error if you write more than
      two escape-double-quote markers, and do something sane if you write
      just one, and document that behavior.  Previously, an odd number of
      markers led to a confusing complaint about unbalanced parentheses,
      while extra pairs of markers were just ignored.  (Note that the spec
      requires exactly two markers, but we've historically allowed there
      to be none, and this patch preserves the old behavior for that case.)
      
      In passing, adjust some substring() test cases that didn't really
      prove what they said they were testing for: they used patterns
      that didn't match the data string, so that the output would be
      NULL whether or not the function was really strict.
      
      Although this is certainly a bug fix, changing the behavior in back
      branches seems undesirable: applications could perhaps be depending on
      the old behavior, since it's not obviously wrong unless you read the
      spec very closely.  Hence, no back-patch.
      
      Discussion: https://postgr.es/m/5bb27a41-350d-37bf-901e-9d26f5592dd0@charter.net
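
      To illustrate the behavioral change (example constructed here; the
      results shown follow from the description above rather than from the
      commit itself):

          SELECT substring('abcabc' FROM '%#"b%#"' FOR '#');
          -- spec-compliant (non-greedy first part): the middle part matches
          --   'bcabc'
          -- old behavior (greedy first part): the middle part matched only
          --   'bc'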