- 01 Sep, 2021 4 commits
-
Tomas Vondra authored
Until now, when defining extended statistics, everything except a plain column reference was treated as a complex expression. So for example "a" was a column reference, but "(a)" would be an expression. In most cases this does not matter much, but there were a couple of strange consequences. For example

    CREATE STATISTICS s ON a FROM t;

would fail, because extended stats require at least two columns. But

    CREATE STATISTICS s ON (a) FROM t;

would succeed, because that requirement does not apply to expressions. Moreover, that statistics object is useless - the optimizer will always use the regular statistics collected for attribute "a".

So do a bit more work to identify expressions referencing a single column, and translate them to a simple column reference.

Backpatch to 14, where support for extended statistics on expressions was introduced.

Reported-by: Justin Pryzby
Backpatch-through: 14
Discussion: https://postgr.es/m/20210816013255.GS10479%40telsasoft.com
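A minimal sketch of the behavior after this fix (the table and statistics object names are purely illustrative):

    CREATE TABLE t (a int, b int);

    -- Fails: extended statistics require at least two columns or expressions
    CREATE STATISTICS s ON a FROM t;

    -- Now also fails the same way: "(a)" is recognized as a plain column
    -- reference instead of silently creating a useless statistics object
    CREATE STATISTICS s ON (a) FROM t;

    -- The intended usage
    CREATE STATISTICS s ON a, b FROM t;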
-
Fujii Masao authored
When the -C/--connect option is specified, pgbench establishes and closes the connection for each transaction. In this case pgbench needs to measure the times taken for all those connections and disconnections, to include the average connection time in the benchmark result. But previously pgbench could not measure those disconnection delays.

To fix the bug, this commit makes pgbench measure the disconnection delays whenever the connection is closed at the end of a transaction, if the -C/--connect option is specified.

Back-patch to v14. Per discussion, we concluded not to back-patch to v13 or before because changing that behavior in stable branches would surprise users rather than providing benefits.

Author: Yugo Nagata
Reviewed-by: Fabien COELHO, Tatsuo Ishii, Asif Rehman, Fujii Masao
Discussion: https://postgr.es/m/20210614151155.a393bc7d8fed183e38c9f52a@sraoss.co.jp
-
Amit Kapila authored
The check to test whether the subscription workers were restarting after a change in the subscription was failing. The reason was that the test assumed the walsender would start and reach the 'streaming' state, but the walsender was exiting due to an error before that: it was trying to acquire the replication slot before the previous walsender had exited.

In passing, improve the die messages so that it is easier to investigate any future failures.

Reported-by: Michael Paquier, as per buildfarm
Author: Ajin Cherian
Reviewed-by: Masahiko Sawada, Amit Kapila
Backpatch-through: 10, where this test was introduced
Discussion: https://postgr.es/m/YRnhFxa9bo73wfpV@paquier.xyz
-
Peter Geoghegan authored
It doesn't make any sense to report this information, since VACUUM VERBOSE reports on heap relation truncation directly. This was an oversight in commit 7ab96cf6, which made VACUUM VERBOSE output a little more consistent with nearby autovacuum-specific log output.

In passing, adjust comments that describe how this is supposed to work. Also bring truncation-related VACUUM VERBOSE output in line with the convention established for VACUUM VERBOSE output by commit f4f4a649.

Author: Peter Geoghegan <pg@bowt.ie>
Backpatch: 14-, where VACUUM VERBOSE's output changed.
-
- 31 Aug, 2021 7 commits
-
Tomas Vondra authored
The code printing expressions for extended statistics doubled the parens, producing results like ((a+1)), which is unnecessary and not consistent with how we print expressions elsewhere. Fixed by tweaking the code to produce just a single set of parens.

Reported by Mark Dilger, fix by me. Backpatch to 14, where support for extended statistics on expressions was added.

Reported-by: Mark Dilger
Discussion: https://postgr.es/m/20210122040101.GF27167%40telsasoft.com
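A short illustration of the output in question (names are illustrative):

    CREATE TABLE t (a int);
    CREATE STATISTICS s ON (a + 1), (a * 2) FROM t;

    -- Before the fix this printed the expressions as ((a + 1)), ((a * 2));
    -- it now prints a single set of parens
    SELECT pg_get_statisticsobjdef(oid)
    FROM pg_statistic_ext
    WHERE stxname = 's';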
-
John Naylor authored
Previously, it was immutable by lack of marking. This is not correct, since the time zone could change. Bump catversion.

Backpatch to v14, where this function came in.

Discussion: https://www.postgresql.org/message-id/CAFBsxsG2UHk8mOWL0tca%3D_cg%2B_oA5mVRNLhDF0TBw980iOg5NQ%40mail.gmail.com
-
Tom Lane authored
For no particularly good reason, getPolicies() queried pg_policy separately for each table. We can collect all the policies in a single query instead, and attach them to the correct TableInfo objects using findTableByOid() lookups. On the regression database, this reduces the number of queries substantially, and provides a visible savings even when running against a local server.

Per complaint from Hubert Depesz Lubaczewski. Since this is such a simple fix and can have a visible performance benefit, back-patch to all supported branches.

Discussion: https://postgr.es/m/20210826084430.GA26282@depesz.com
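A sketch of the kind of single-pass collection this implies, gathering every policy at once instead of querying per table (column list abbreviated; not necessarily the exact query pg_dump issues):

    SELECT polrelid, polname, polcmd, polpermissive,
           pg_get_expr(polqual, polrelid) AS qual,
           pg_get_expr(polwithcheck, polrelid) AS with_check
    FROM pg_catalog.pg_policy;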
-
Tom Lane authored
There's long been a "TODO: there might be some value in caching the results" annotation on pg_dump's getFormattedTypeName function; but we hadn't gotten around to checking what it was costing us to repetitively look up type names. It turns out that when dumping the current regression database, about 10% of the total number of queries issued are duplicative format_type() queries. However, Hubert Depesz Lubaczewski reported a not-unusual case where these account for over half of the queries issued by pg_dump. Individually these queries aren't expensive, but when network lag is a factor, they add up to a problem. We can very easily add some caching to getFormattedTypeName to solve it.

Since this is such a simple fix and can have a visible performance benefit, back-patch to all supported branches.

Discussion: https://postgr.es/m/20210826084430.GA26282@depesz.com
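The duplicative queries look roughly like this; identical (type OID, typmod) inputs always yield the same answer, which is what makes caching safe (the values below are illustrative):

    -- 1043 is varchar; typmod 104 encodes varchar(100)
    SELECT pg_catalog.format_type(1043, 104);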
-
Tomas Vondra authored
Commit 5be8ce82e8 added a new role to the stats_ext regression suite, but the role name did not start with regress_, causing failures when running with ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS. Fixed by renaming the role to start with the expected regress_ prefix.

Backpatch-through: 10, same as the new regression test
Discussion: https://postgr.es/m/1F238937-7CC2-4703-A1B1-6DC225B8978A%40enterprisedb.com
-
Tomas Vondra authored
When an ownership check on an extended statistics object failed, the code was calling aclcheck_error_type to report the failure, which is clearly wrong, resulting in cache lookup errors. Fix by calling aclcheck_error.

This issue exists since the introduction of extended statistics, so backpatch all the way back to PostgreSQL 10. It went unnoticed because there were no tests triggering the error, so add one.

Reported-by: Mark Dilger
Backpatch-through: 10, where extended stats were introduced
Discussion: https://postgr.es/m/1F238937-7CC2-4703-A1B1-6DC225B8978A%40enterprisedb.com
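A sketch of how the failed ownership check can be reached, assuming a scratch role and table (before the fix this path produced a cache lookup error rather than an ownership error):

    CREATE ROLE regress_stats_user;
    CREATE TABLE t (a int, b int);
    CREATE STATISTICS s ON a, b FROM t;

    SET ROLE regress_stats_user;
    ALTER STATISTICS s RENAME TO s2;  -- now a proper "must be owner" error
    RESET ROLE;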
-
Tom Lane authored
When starting to use a query parsetree loaded from the catalogs, we must begin by applying AcquireRewriteLocks(), to obtain the same relation locks that the parser would have gotten if the query were entered interactively, and to do some other cleanup such as dealing with later-dropped columns. New-style SQL functions are just as subject to this rule as other stored parsetrees; however, of the places dealing with such functions, only init_sql_fcache had gotten the memo. In particular, if we successfully inlined a new-style set-returning SQL function that contained any relation references, we'd either get an assertion failure or attempt to use those relation(s) sans locks.

I also added AcquireRewriteLocks calls to fmgr_sql_validator and print_function_sqlbody. Desultory experiments didn't demonstrate any failures in those, but I suspect that I just didn't try hard enough. Certainly we don't expect nearby code paths to operate without locks.

On the same logic of it-ought-to-have-the-same-effects-as-the-old-code, call pg_rewrite_query() in fmgr_sql_validator, too. It's possible that neither code path there needs to bother with rewriting, but doing the analysis to prove that is beyond my goals for today.

Per bug #17161 from Alexander Lakhin.

Discussion: https://postgr.es/m/17161-048a1cdff8422800@postgresql.org
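For reference, a minimal new-style (SQL-body) set-returning function of the affected kind, i.e. one whose stored parsetree references a relation (names are illustrative):

    CREATE TABLE t (a int);

    CREATE FUNCTION f() RETURNS SETOF t
    LANGUAGE SQL
    BEGIN ATOMIC
      SELECT * FROM t;
    END;

    SELECT * FROM f();  -- inlining this previously skipped AcquireRewriteLocks()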
-
- 30 Aug, 2021 5 commits
-
Alvaro Herrera authored
Most data-corruption reports mention the location of the problem, but this one failed to. Add it.

Backpatch all the way back. In 12 and older, also assign the ERRCODE_DATA_CORRUPTED error code as was done in commit fd6ec93b for 13 and later.

Discussion: https://postgr.es/m/202108191637.oqyzrdtnheir@alvherre.pgsql
-
Alvaro Herrera authored
Per our message style guidelines, for human consumption we quote qualified names as a whole rather than each part separately; but commit bc085205 introduced a deviation for extended statistics and a4d75c86 copied it. I don't agree with this policy applying to names shown by psql, but that's a poor reason to deviate from the practice in only two obscure corners, so make said corners use the same style as everywhere else.

Backpatch to 14. The first of these is older, but I'm not sure we want to destabilize the psql output in the older branches for such a small thing.

Discussion: https://postgr.es/m/20210828181618.GS26465@telsasoft.com
-
Fujii Masao authored
Commit 547f04e7 changed pgbench so that it used the measurement result of connection delays in its benchmark report only when the -C/--connect option is specified. But previously those delays were unnecessarily measured even when that option was not specified, which was a waste of cycles. This commit improves pgbench so that it avoids such unnecessary measurement.

Back-patch to v14 where commit 547f04e7 first appeared.

Author: Yugo Nagata
Reviewed-by: Fabien COELHO, Asif Rehman, Fujii Masao
Discussion: https://postgr.es/m/20210614151155.a393bc7d8fed183e38c9f52a@sraoss.co.jp
-
Peter Eisentraut authored
This contains all individuals mentioned in the commit messages during PostgreSQL 14 development. Current through ed740b06.
-
Amit Kapila authored
ERRCODE_CONFIGURATION_LIMIT_EXCEEDED was used for checksum failures; use ERRCODE_DATA_CORRUPTED instead.

Reported-by: Tatsuhito Kasahara
Author: Tatsuhito Kasahara
Backpatch-through: 9.6, where it was introduced
Discussion: https://postgr.es/m/CAP0=ZVLHtYffs8SOWcFJWrBGoRzT9QQbk+_aP+E5AHLNXiOorA@mail.gmail.com
-
- 28 Aug, 2021 4 commits
-
Alvaro Herrera authored
In the long-going saga for analyze on partitioned tables, one thing I missed while reverting 0827e8af is the maintenance of the analyze count and last-analyze time for partitioned tables. This is a mostly trivial change that enables users to assess the need for invoking manual ANALYZE on partitioned tables.

This patch, posted by Justin and modified a bit by me (Álvaro), can be mostly traced back to Hosoya-san, though any problems introduced with the scissors are mine.

Backpatch to 14, in line with 6f8127b73901.

Co-authored-by: Yuzuko Hosoya <yuzukohosoya@gmail.com>
Co-authored-by: Justin Pryzby <pryzby@telsasoft.com>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20210816222810.GE10479@telsasoft.com
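With this change, the partitioned table itself reports analyze activity; a quick check might look like this (names are illustrative):

    CREATE TABLE p (a int) PARTITION BY LIST (a);
    CREATE TABLE p1 PARTITION OF p FOR VALUES IN (1);
    ANALYZE p;

    SELECT relname, last_analyze, analyze_count
    FROM pg_stat_user_tables
    WHERE relname = 'p';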
-
Alvaro Herrera authored
Déjà vu of commit fc40ba1296a7, for another backslash command. Strictly speaking this isn't a bug, but since all references to catalog objects are schema-qualified, we might as well be consistent. The omission first appeared in commit ad600bba and was replicated in a4d75c86; backpatch to 14.

Author: Justin Pryzby <pryzbyj@telsasoft.com>
Discussion: https://postgr.es/m/20210827193151.GN26465@telsasoft.com
-
Alvaro Herrera authored
Strictly speaking this isn't a bug, but since all references to catalog objects are schema-qualified, we might as well be consistent. The omission first appeared in commit 1c5d9270, so backpatch to 12.

Author: Justin Pryzby <pryzbyj@telsasoft.com>
Discussion: https://postgr.es/m/20210827193151.GN26465@telsasoft.com
-
Noah Misch authored
If the system crashed between CREATE TABLESPACE and the next checkpoint, the result could be some files in the tablespace unexpectedly containing no rows. Affected files would be those for which the system did not write WAL; see the wal_skip_threshold documentation. Before v13, a different set of conditions governed the writing of WAL; see v12's <sect2 id="populate-pitr">. (The v12 conditions were broader in some ways and narrower in others.) Users may want to audit non-default tablespaces for unexpected short files. The bug could have truncated an index without affecting the associated table, and reindexing the index would fix that particular problem.

This fixes the bug by making create_tablespace_directories() more like TablespaceCreateDbspace(). create_tablespace_directories() was recursively removing tablespace contents, reasoning that WAL redo would recreate everything removed that way. That assumption holds for other wal_level values. Under wal_level=minimal, the old approach could delete files for which no other copy existed.

Back-patch to 9.6 (all supported versions).

Reviewed by Robert Haas and Prabhat Sahu. Reported by Robert Haas.

Discussion: https://postgr.es/m/CA+TgmoaLO9ncuwvr2nN-J4VEP5XyAcy=zKiHxQzBbFRxxGxm0w@mail.gmail.com
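As a starting point for the suggested audit, a rough query listing relations stored in a given non-default tablespace together with their sizes (the tablespace name is a placeholder):

    SELECT c.oid::regclass AS relation, pg_relation_size(c.oid) AS bytes
    FROM pg_class c
    JOIN pg_tablespace t ON t.oid = c.reltablespace
    WHERE t.spcname = 'my_tablespace'
    ORDER BY bytes;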
-
- 27 Aug, 2021 8 commits
-
Tom Lane authored
Somehow, spgist overlooked the need to call pgstat_count_index_scan(). Hence, pg_stat_all_indexes.idx_scan and equivalent columns never became nonzero for an SP-GiST index, although the related per-tuple counters worked fine.

This fix works a bit differently from other index AMs, in that the counter increment occurs in spgrescan not spggettuple/spggetbitmap. It looks like this won't make the user-visible semantics noticeably different, so I won't go to the trouble of introducing an is-this-the-first-call flag just to make the counter bumps happen in the same places.

Per bug #17163 from Christian Quest. Back-patch to all supported versions.

Discussion: https://postgr.es/m/17163-b8c5cc88322a5e92@postgresql.org
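A quick way to observe the fixed counter, assuming a scratch table with an SP-GiST index (statistics reporting is asynchronous, so the counter may take a moment to appear):

    CREATE TABLE pts (p point);
    CREATE INDEX pts_spgist ON pts USING spgist (p);

    SET enable_seqscan = off;
    SELECT count(*) FROM pts WHERE p <@ box '(0,0),(1,1)';

    -- idx_scan was previously stuck at zero for SP-GiST indexes
    SELECT idx_scan FROM pg_stat_all_indexes WHERE indexrelname = 'pts_spgist';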
-
Stephen Frost authored
When prefetching pages for ANALYZE, we should be using maintenance_io_concurrency (by calling get_tablespace_maintenance_io_concurrency(), not get_tablespace_io_concurrency()).

ANALYZE prefetching was introduced in c6fc50cb, so back-patch to 14.

Reported-By: Egor Rogov
Backpatch-through: 14
Discussion: https://postgr.es/m/9beada99-34ce-8c95-fadb-451768d08c64%40postgrespro.ru
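The setting that ANALYZE prefetching now honors can be adjusted per session or per tablespace, for example (the values are illustrative):

    SET maintenance_io_concurrency = 32;
    -- or, for relations in a particular tablespace:
    ALTER TABLESPACE pg_default SET (maintenance_io_concurrency = 32);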
-
Stephen Frost authored
Commit 6c3ffd69 added a couple of new predefined roles but didn't properly wrap the SQL commands mentioned in the description of those roles with command tags, so add them now.

Reported-by: Michael Banck
Backpatch-through: 14
Discussion: https://postgr.es/m/606d8b1c.1c69fb81.3df04.1a99@mx.google.com
-
Daniel Gustafsson authored
Author: Dave Cramer <davecramer@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CADK3HHLZmqAQZ2ByPDQQ9yhGqax36kksq6sDkV0yYzsxw6ipvQ@mail.gmail.com
-
Peter Geoghegan authored
Adjust track_io_timing related logging code added by commit 94d13d47. Make it consistent with other nearby autovacuum and autoanalyze logging code by removing logic that suppressed zero-millisecond outputs. log_autovacuum_min_duration log output now reliably shows "read:" and "write:" millisecond-based values in its report (when track_io_timing is enabled).

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Stephen Frost <sfrost@snowman.net>
Discussion: https://postgr.es/m/CAH2-WznW0FNxSVQMSRazAMYNfZ6DR_gr5WE78hc6E1CBkkJpzw@mail.gmail.com
Backpatch: 14-, where the track_io_timing logging was introduced.
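To see the adjusted report, one plausible setup is to enable both GUCs and reload (these settings are illustrative, not part of this commit):

    ALTER SYSTEM SET track_io_timing = on;
    ALTER SYSTEM SET log_autovacuum_min_duration = 0;
    SELECT pg_reload_conf();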
-
Peter Geoghegan authored
This order seems more natural. It starts with details that are particular to heap and index data structures, and ends with system-level costs incurred during the autovacuum worker's VACUUM/ANALYZE operation.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-WzkzxK6ahA9xxsOftRtBX_R0swuHZsvo4QUbak1Bz7hb7Q@mail.gmail.com
Backpatch: 14-, which enhanced the log output in various ways.
-
Tom Lane authored
Second thoughts about commit 824bf719: we apply makesearch() to an NFA after having determined whether it is a MATCHALL pattern. Prepending ".*" doesn't make it non-MATCHALL, but it does change the maximum possible match length, and makesearch() failed to update that. This has no ill effects given the stylized usage of search NFAs, but it seems like it's better to keep the data structure consistent. In particular, fixing this allows more honest handling of the MATCHALL check in matchuntil(): we can now assert that maxmatchall is infinity, instead of lamely assuming that it should act that way.

In passing, improve the code in dump[c]nfa so that infinite maxmatchall is printed as "inf" not a magic number.
-
Peter Geoghegan authored
Add a CHECK_FOR_INTERRUPTS() call to make heap relation verification responsive to query cancellations.

Author: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/CAH2-Wzk-9RtQgb2QiuLv8j2O0j9tSFKPmmch5nWSZhguUxvbrw%40mail.gmail.com
Backpatch: 14-, where amcheck heap verification was introduced.
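For context, heap verification is invoked like this; with the added check, a long-running call can now be cancelled promptly (the table name is a placeholder):

    CREATE EXTENSION amcheck;
    SELECT * FROM verify_heapam('some_table');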
-
- 25 Aug, 2021 9 commits
-
Tom Lane authored
The condition "context_start < context_end" is strictly weaker than "context_end - context_start >= 50", so we don't need both. Oversight in commit ffd3944a, noted by tanghy.fnst. In passing, line-wrap a nearby test to make it more readable. Discussion: https://postgr.es/m/OS0PR01MB61137C4054774F44E3A9DC89FBC69@OS0PR01MB6113.jpnprd01.prod.outlook.com
-
Robert Haas authored
Pengchengliu reported an assertion failure in a parallel worker while performing a parallel scan using an overflowed snapshot. The proximate cause is that TransactionXmin was set to an incorrect value. The underlying cause is incorrect snapshot handling in parallel.c.

In particular, InitializeParallelDSM() was unconditionally calling GetTransactionSnapshot(), because I (rhaas) mistakenly thought that was always retrieving an existing snapshot whereas, at isolation levels less than REPEATABLE READ, it's actually taking a new one. So instead do this only at higher isolation levels where there actually is a single snapshot for the whole transaction.

By itself, this is not a sufficient fix, because we still need to guarantee that TransactionXmin gets set properly in the workers. The easiest way to do that seems to be to install the leader's active snapshot as the transaction snapshot if the leader did not serialize a transaction snapshot. This doesn't affect the results of future GetTransactionSnapshot() calls since those have to take a new snapshot anyway; what we care about is the side effect of setting TransactionXmin.

Report by Pengchengliu. Patch by Greg Nancarrow, except for some comment text which I supplied.

Discussion: https://postgr.es/m/002f01d748ac$eaa781a0$bff684e0$@tju.edu.cn
-
Peter Eisentraut authored
-
Michael Paquier authored
The same condition was repeated twice when comparing the connection used by an existing declared statement with the one coming from a fresh DECLARE statement. This had no consequences, but let's keep the code clean. Oversight in f576de1.

Author: Shenhao Wang
Discussion: https://postgr.es/m/OSBPR01MB42149653BC0AB0A49D23C1B8F2C69@OSBPR01MB4214.jpnprd01.prod.outlook.com
Backpatch-through: 14
-
Amit Kapila authored
Commit 325f2ec5 introduced pg_class.relrewrite to skip operations on tables created as part of a heap rewrite during DDL. It links such transient heaps to the original relation OID via this new field in pg_class, but forgot to do anything about toast tables. So, logical decoding was not able to skip operations on internally created toast tables. This led to an error when decoding the WAL for the next operation, which appeared to contain toast data when in fact it had none.

To fix this, we set pg_class.relrewrite for internally created toast tables as well, which allows skipping operations on them during logical decoding.

Author: Bertrand Drouvot
Reviewed-by: David Zhang, Amit Kapila
Backpatch-through: 11, where it was introduced
Discussion: https://postgr.es/m/b5146fb1-ad9e-7d6e-f980-98ed68744a7c@amazon.com
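The linkage in question can be inspected directly; transient rewrite heaps (and now their toast tables) point back at the original relation (the result set is normally empty except while a rewrite is in progress):

    SELECT oid, relname, relrewrite
    FROM pg_class
    WHERE relrewrite <> 0;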
-
Etsuro Fujita authored
-
Fujii Masao authored
This commit improves ecpg's error message that commit f576de1db1 updated, so that it gets rid of the trailing period and uppercases the command name in the error message.

Back-patch to v14 where the error message exists.

Author: Kyotaro Horiguchi
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20210819.170315.1413060634876301811.horikyota.ntt@gmail.com
-
Fujii Masao authored
There are two identical error messages about the valid value of the modulus for hash partitions in the PostgreSQL source code. Commit 0e1275fb07 improved only one of them so that the ambiguous word "positive" was avoided there, and forgot to improve the other. This commit improves the other one as well, which also reduces the translation burden.

Back-patch to v11 where the error message exists.

Author: Kyotaro Horiguchi
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20210819.170315.1413060634876301811.horikyota.ntt@gmail.com
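One way to reach the message in question, assuming a scratch hash-partitioned table:

    CREATE TABLE h (a int) PARTITION BY HASH (a);

    -- Rejected: the modulus must be a positive integer
    CREATE TABLE h0 PARTITION OF h FOR VALUES WITH (MODULUS 0, REMAINDER 0);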
-
Fujii Masao authored
The distance in a phrase operator must be an integer value between zero and MAXENTRYPOS inclusive. But previously the error message about its valid value included the information about its upper limit but not its lower limit (i.e., zero). This commit improves the error message so that it also includes the information about the lower limit.

Back-patch to v9.6 where full-text phrase search was supported.

Author: Kyotaro Horiguchi
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20210819.170315.1413060634876301811.horikyota.ntt@gmail.com
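An illustration of the limit being enforced (the oversized distance is just an arbitrary value above MAXENTRYPOS):

    SELECT to_tsvector('fat cat sat') @@ to_tsquery('fat <2> sat');  -- valid
    SELECT to_tsquery('fat <20000> cat');  -- error now states both bounds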
-
- 24 Aug, 2021 2 commits
-
Tom Lane authored
Regexps like "(.){0}...\1" drew an "invalid backreference number". That's not unreasonable on its face, since the capture group will never be matched if it's iterated zero times. However, other engines such as Perl's don't complain about this, nor do we throw an error for related cases such as "(.)|\1", even though that backref can never succeed either. Also, if the zero-iterations case happens at runtime rather than compile time --- say, "(x)*...\1" when there's no "x" to be found --- that's not an error, we just deem the backref to not match. Making this even less defensible, no error was thrown for nested cases such as "((.)){0}...\2"; and to add insult to injury, those cases could result in assertion failures instead. (It seems that nothing especially bad happened in non-assert builds, though.) Let's just fix it so that no error is thrown and instead the backref is deemed to never match, so that compile-time detection of no iterations behaves the same as run-time detection. Per report from Mark Dilger. This appears to be an aboriginal error in Spencer's library, so back-patch to all supported versions. Pre-v14, it turns out to also be necessary to back-patch one aspect of commits cb76fbd7e/00116dee5, namely to create capture-node subREs with the begin/end states of their subexpressions, not the current lp/rp of the outer parseqatom invocation. Otherwise delsub complains that we're trying to disconnect a state from itself. This is a bit scary but code examination shows that it's safe: in the pre-v14 code, if we want to wrap iteration around the subexpression, the first thing we do is overwrite the atom's begin/end fields with new states. So the bogus values didn't survive long enough to be used for anything, except if no iteration is required, in which case it doesn't matter. Discussion: https://postgr.es/m/A099E4A8-4377-4C64-A98C-3DEDDC075502@enterprisedb.com
-
Amit Kapila authored
The current refresh behavior tries to just refresh added/dropped publications, but that leads to removing the wrong tables from the subscription. We can't refresh just the dropped publication because it is quite possible that some of the tables were removed from the publication by that time, and those would now remain part of the subscription. Also, there is a chance that the tables that were part of the publication being dropped are also part of another publication, so we can't remove those.

So, we decided that by default, the add/drop commands will also act like REFRESH PUBLICATION, which means they will refresh all the publications. We could keep the old behavior for "add publication", but it is better to be consistent with "drop publication".

Author: Hou Zhijie
Reviewed-by: Masahiko Sawada, Amit Kapila
Backpatch-through: 14, where it was introduced
Discussion: https://postgr.es/m/OS0PR01MB5716935D4C2CC85A6143073F94EF9@OS0PR01MB5716.jpnprd01.prod.outlook.com
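The commands affected, for reference (the subscription and publication names are placeholders); the refresh can still be skipped explicitly:

    ALTER SUBSCRIPTION mysub ADD PUBLICATION pub2;
    ALTER SUBSCRIPTION mysub DROP PUBLICATION pub2 WITH (refresh = false);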
-
- 23 Aug, 2021 1 commit
-
Tom Lane authored
The recursion in cdissect() was careless about clearing match data for capturing parentheses after rejecting a partial match. This could allow a later back-reference to succeed when by rights it should fail for lack of a defined referent.

To fix, think a little more rigorously about what the contract between different levels of cdissect's recursion needs to be. With the right spec, we can fix this using fewer rather than more resets of the match data; the key decision being that a failed sub-match is now explicitly responsible for clearing any matches it may have set.

There are enough other cross-checks and optimizations in the code that it's not especially easy to exhibit this problem; usually, the match will fail as-expected. Plus, regexps that are even potentially vulnerable are most likely user errors, since there's just not much point in writing a back-ref that doesn't always have a referent. These facts perhaps explain why the issue hasn't been detected, even though it's almost certainly a couple of decades old.

Discussion: https://postgr.es/m/151435.1629733387@sss.pgh.pa.us
-