- 12 May, 2021 9 commits
-
-
Tom Lane authored
Run renumber_oids.pl to move high-numbered OIDs down, as per pre-beta tasks specified by RELEASE_CHANGES. For reference, the command was:

    ./renumber_oids.pl --first-mapped-oid 8000 --target-oid 6150
-
Tom Lane authored
Also "make reformat-dat-files". The only change worthy of note is that pgindent messed up the formatting of launcher.c's struct LogicalRepWorkerId, which led me to notice that that struct wasn't used at all anymore, so I just took it out.
-
Michael Paquier authored
The section of the code in charge of returning all the relations associated with a subscription needs only one ScanKey, but allocated two of them. This code was introduced as a copy-paste from a different area in the same file by 7c4f5240, making the result confusing to follow.

Author: Peter Smith
Reviewed-by: Tom Lane, Julien Rouhaud, Bharath Rupireddy
Discussion: https://postgr.es/m/CAHut+PsLKe+rN3FjchoJsd76rx2aMsFTB7CTFxRgUP05p=kcpQ@mail.gmail.com
-
Peter Eisentraut authored
-
Etsuro Fujita authored
EXPLAIN ANALYZE for an async-capable ForeignScan node associated with postgres_fdw is done just by using instrumentation for ExecProcNode() called from the node's callbacks, causing the following problems:

1) If the remote table to scan is empty, the node is incorrectly considered as "never executed" by the command even if the node is executed, as ExecProcNode() isn't called from the node's callbacks at all in that case.

2) The command fails to collect timings for things other than ExecProcNode() done in the node, such as creating a cursor for the node's remote query.

To fix these problems, add instrumentation for async-capable nodes, and modify postgres_fdw accordingly. My oversight in commit 27e1f145.

While at it, update a comment for the AsyncRequest struct in execnodes.h and the documentation for the ForeignAsyncRequest API in fdwhandler.sgml to match the code in ExecAsyncAppendResponse() in nodeAppend.c, and fix typos in comments in nodeAppend.c.

Per report from Andrey Lepikhov, though I didn't use his patch.

Reviewed-by: Andrey Lepikhov
Discussion: https://postgr.es/m/2eb662bb-105d-fc20-7412-2f027cc3ca72%40postgrespro.ru
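
A minimal sketch of the idea, assuming the PG14 executor API (InstrStartNode/InstrStopNode from executor/instrument.h, AsyncRequest from execnodes.h); the actual commit integrates the timing inside execAsync.c rather than in a wrapper like this:

    #include "postgres.h"
    #include "executor/execAsync.h"
    #include "executor/instrument.h"
    #include "executor/tuptable.h"
    #include "nodes/execnodes.h"

    /* Sketch: time an async request the way ExecProcNode-based
     * instrumentation would, so the node counts as executed even
     * when its callback produces no tuple. */
    static void
    AsyncRequestWithInstr(AsyncRequest *areq)
    {
        PlanState  *requestee = areq->requestee;

        if (requestee->instrument)
            InstrStartNode(requestee->instrument);

        ExecAsyncRequest(areq);     /* dispatch to the FDW callback */

        if (requestee->instrument)
            InstrStopNode(requestee->instrument,
                          TupIsNull(areq->result) ? 0.0 : 1.0);
    }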
-
Tom Lane authored
Several queries in the privileges regression test cause the planner to apply the plpgsql function "leak()" to every element of the histogram for atest12.b. Since commit 0c882e52 increased the size of that histogram to 10000 entries, the test invokes that function over 100000 times, which takes an absolutely unreasonable amount of time in clobber-cache-always mode.

However, there's no real reason why that has to be a plpgsql function: for the purposes of this test, all that matters is that it not be marked leakproof. So we can replace the plpgsql implementation with a direct call of int4lt, which has the same behavior and is orders of magnitude faster.

This is expected to cut several hours off the buildfarm cycle time for CCA animals. It has some positive impact in normal builds too, though that's probably lost in the noise.

Back-patch to v13 where 0c882e52 came in.

Discussion: https://postgr.es/m/575884.1620626638@sss.pgh.pa.us
-
Fujii Masao authored
Previously long was used as the data type for some counters in BufferUsage and WalUsage. But long is only four bytes, e.g., on Windows, and it's entirely possible to wrap a four-byte counter. For example, emitting more than four billion WAL records in one transaction isn't actually particularly rare. To avoid overflows of those counters, this commit changes their data type from long to int64.

Suggested-by: Andres Freund
Author: Masahiro Ikeda
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20201221211650.k7b53tcnadrciqjo@alap3.anarazel.de
Discussion: https://postgr.es/m/af0964ac-7080-1984-dc23-513754987716@oss.nttdata.com
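
For reference, a sketch of the shape of the change for WalUsage (field names per instrument.h; BufferUsage gets the same treatment across its block counters):

    /* Widened counters: "long" is 32 bits on 64-bit Windows (LLP64),
     * so a busy transaction could wrap it; int64 and uint64 are the
     * PostgreSQL typedefs from c.h. */
    typedef struct WalUsage
    {
        int64       wal_records;    /* # of WAL records produced */
        int64       wal_fpi;        /* # of WAL full page images produced */
        uint64      wal_bytes;      /* size of WAL produced in bytes */
    } WalUsage;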
-
Andrew Dunstan authored
Use a static prolog file instead of generating the prolog from the existing perl script. Also, support generation of the file in a vpath build.

Discussion: https://postgr.es/m/700620.1620662868@sss.pgh.pa.us
- 11 May, 2021 8 commits
-
-
Tom Lane authored
I copied the existing spelling of "--max_connections", but that's just wrong :-(. Evidently setting $ENV{MAX_CONNECTIONS} has never actually worked in this script. Given the lack of complaints, it's probably not worth back-patching a fix. Per buildfarm.

Discussion: https://postgr.es/m/899209.1620759506@sss.pgh.pa.us
-
Tom Lane authored
Having to maintain two lists of regression test scripts has proven annoyingly error-prone. We can achieve the effect of the serial_schedule by running the parallel_schedule with "--max_connections=1"; so do that and remove serial_schedule. This causes cosmetic differences in the progress output, but it doesn't seem worth restructuring pg_regress to avoid that.

Discussion: https://postgr.es/m/899209.1620759506@sss.pgh.pa.us
-
Bruce Momjian authored
-
Tom Lane authored
opr_sanity's binary_coercible() function has always been meant to match the parser's notion of binary coercibility, but it also has always been a rather poor approximation of the parser's real rules (as embodied in IsBinaryCoercible()). That hasn't bit us so far, but it's predictable that it will eventually.

It also now emerges that implementing this check in plpgsql performs absolutely horribly in clobber-cache-always testing. (Perhaps we could do something about that, but I suspect it just means that plpgsql is exploiting catalog caching to the hilt.)

Hence, let's replace binary_coercible() with a C shim that directly invokes IsBinaryCoercible(), eliminating both the semantic hazard and the performance issue.

Most of regress.c's C functions are declared in create_function_1, but we can't simply move that to before opr_sanity/type_sanity since those tests would complain about the resulting shell types. I chose to split it into create_function_0 and create_function_1. Since create_function_0 now runs as part of a parallel group while create_function_1 doesn't, reduce the latter to create just those functions that opr_sanity and type_sanity would whine about.

To make room for create_function_0 in the second parallel group of tests, move tstypes to the third parallel group.

In passing, clean up some ordering deviations between parallel_schedule and serial_schedule.

Discussion: https://postgr.es/m/292305.1620503097@sss.pgh.pa.us
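
A sketch of what such a shim can look like (following regress.c's C-function conventions; IsBinaryCoercible is declared in parser/parse_coerce.h):

    #include "postgres.h"
    #include "fmgr.h"
    #include "parser/parse_coerce.h"

    /* SQL-callable wrapper: binary_coercible(oid, oid) returns bool,
     * deferring entirely to the parser's own coercibility test. */
    PG_FUNCTION_INFO_V1(binary_coercible);

    Datum
    binary_coercible(PG_FUNCTION_ARGS)
    {
        Oid     srctype = PG_GETARG_OID(0);
        Oid     targettype = PG_GETARG_OID(1);

        PG_RETURN_BOOL(IsBinaryCoercible(srctype, targettype));
    }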
-
Peter Eisentraut authored
-
Bruce Momjian authored
-
David Rowley authored
The note is no longer true as of 86dc9005, so remove it.

Author: Amit Langote
Discussion: https://postgr.es/m/CA+HiwqFxQn7Hz1wT+wYgnf_9SK0c4BwOOwFFT8jcSZwJrd8HEA@mail.gmail.com
-
Michael Paquier authored
Since its introduction in bbe0a81d, compression of table data supports LZ4, but nothing had been done within the MSVC scripts to allow users to build the code with this library. This commit closes the gap by extending the MSVC scripts to be able to build optionally with LZ4. Getting libraries that can be used for compilation and execution is possible, as LZ4 can be compiled down to MSVC 2010 using its source tarball. MinGW may require extra effort to work, and I have been able to test this only with MSVC; still, this is better than nothing to give users a way to test the feature on Windows.

Author: Dilip Kumar
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/YJPdNeF68XpwDDki@paquier.xyz
-
- 10 May, 2021 11 commits
-
-
Tom Lane authored
It's unusual to have any resjunk columns in an ON CONFLICT ... UPDATE list, but it can happen when MULTIEXPR_SUBLINK SubPlans are present. If it happens, the ON CONFLICT UPDATE code path would end up storing tuples that include the values of the extra resjunk columns. That's fairly harmless in the short run, but if new columns are added to the table then the values would become accessible, possibly leading to malfunctions if they don't match the datatypes of the new columns.

This had escaped notice through a confluence of missing sanity checks, including

* There's no cross-check that a tuple presented to heap_insert or heap_update matches the table rowtype. While it's difficult to check that fully at reasonable cost, we can easily add assertions that there aren't too many columns.

* The output-column-assignment cases in execExprInterp.c lacked any sanity checks on the output column numbers, which seems like an oversight considering there are plenty of assertion checks on input column numbers. Add assertions there too.

* We failed to apply nodeModifyTable's ExecCheckPlanOutput() to the ON CONFLICT UPDATE tlist. That wouldn't have caught this specific error, since that function is chartered to ignore resjunk columns; but it sure seems like a bad omission now that we've seen this bug.

In HEAD, the right way to fix this is to make the processing of ON CONFLICT UPDATE tlists work the same as regular UPDATE tlists now do, that is don't add "SET x = x" entries, and use ExecBuildUpdateProjection to evaluate the tlist and combine it with old values of the not-set columns. This adds a little complication to ExecBuildUpdateProjection, but allows removal of a comparable amount of now-dead code from the planner.

In the back branches, the most expedient solution seems to be to (a) use an output slot for the ON CONFLICT UPDATE projection that actually matches the target table, and then (b) invent a variant of ExecBuildProjectionInfo that can be told to not store values resulting from resjunk columns, so it doesn't try to store into nonexistent columns of the output slot. (We can't simply ignore the resjunk columns altogether; they have to be evaluated for MULTIEXPR_SUBLINK to work.) This works back to v10. In 9.6, projections work much differently and we can't cheaply give them such an option. The 9.6 version of this patch works by inserting a JunkFilter when it's necessary to get rid of resjunk columns.

In addition, v11 and up have the reverse problem when trying to perform ON CONFLICT UPDATE on a partitioned table. Through a further oversight, adjust_partition_tlist() discarded resjunk columns when re-ordering the ON CONFLICT UPDATE tlist to match a partition. This accidentally prevented the storing-bogus-tuples problem, but at the cost that MULTIEXPR_SUBLINK cases didn't work, typically crashing if more than one row has to be updated. Fix by preserving resjunk columns in that routine. (I failed to resist the temptation to add more assertions there too, and to do some minor code beautification.)

Per report from Andres Freund. Back-patch to all supported branches.

Security: CVE-2021-32028
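
For the first bullet above, a sketch of the cheap cross-check that becomes feasible in heap_insert/heap_update (assuming the usual htup_details.h and rel.h macros):

    /* A tuple headed for storage may have fewer attributes than the
     * relation (trailing ones are filled in later), but never more. */
    Assert(HeapTupleHeaderGetNatts(tup->t_data) <=
           RelationGetNumberOfAttributes(relation));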
-
Tom Lane authored
While we were (mostly) careful about ensuring that the dimensions of arrays aren't large enough to cause integer overflow, the lower bound values were generally not checked. This allows situations where lower_bound + dimension overflows an integer. It seems that that's harmless so far as array reading is concerned, except that array elements with subscripts notionally exceeding INT_MAX are inaccessible. However, it confuses various array-assignment logic, resulting in a potential for memory stomps.

Fix by adding checks that array lower bounds aren't large enough to cause lower_bound + dimension to overflow. (Note: this results in disallowing cases where the last subscript position would be exactly INT_MAX. In principle we could probably allow that, but there's a lot of code that computes lower_bound + dimension and would need adjustment. It seems doubtful that it's worth the trouble/risk to allow it.)

Somewhat independently of that, array_set_element() was careless about possible overflow when checking the subscript of a fixed-length array, creating a different route to memory stomps. Fix that too.

Security: CVE-2021-32027
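
A sketch of the kind of guard involved, assuming the overflow helpers in common/int.h (loop placement and message wording are illustrative):

    #include "common/int.h"

    /* Reject lower bounds for which lb + dim would overflow int32,
     * i.e. a last subscript position at or above INT_MAX. */
    for (int i = 0; i < ndim; i++)
    {
        int32   sum;

        if (pg_add_s32_overflow(lb[i], dims[i], &sum))
            ereport(ERROR,
                    (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                     errmsg("array lower bound is too large: %d",
                            lb[i])));
    }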
-
Peter Eisentraut authored
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: 1c361d3ac016b61715d99f2055dee050397e3f13
-
Peter Eisentraut authored
When building without --enable-dtrace, emit dummy "do {} while (0)" statements for the stubbed-out TRACE_POSTGRESQL_foo() macros instead of empty macros that totally elide the original probe statement. This fixes the warning

    suggest braces around empty body in an ‘if’ statement [-Wempty-body]

introduced by b94409a0.

Author: Craig Ringer <craig.ringer@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/20210504221531.cfvpmmdfsou6eitb%40alap3.anarazel.de
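
To illustrate, a sketch using a hypothetical probe name: an empty macro leaves the if-body below empty, while the do/while form is a real statement that also swallows the trailing semicolon.

    /* Before (warns): #define TRACE_POSTGRESQL_FOO(arg) */

    /* After: a do-nothing statement, so -Wempty-body stays quiet. */
    #define TRACE_POSTGRESQL_FOO(arg) do {} while (0)

    if (trace_enabled)
        TRACE_POSTGRESQL_FOO(xid);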
-
Peter Eisentraut authored
Was present in original commit 198b3716 but apparently never used.
-
Michael Paquier authored
Author: Kyotaro Horiguchi, Justin Pryzby
Discussion: https://postgr.es/m/20210428.173633.1525059946206239295.horikyota.ntt@gmail.com
-
Bruce Momjian authored
-
Michael Paquier authored
"make dist", in charge of creating a distribution tarball, failed when attempting to generate ./INSTALL as a new reference added to guc-default-toast-compression on the documentation for the installation details was not getting translated properly to plain text. Like all the other link references on this page, this adds a new entry to standalone-profile.xsl to allow the generation of ./INSTALL to finish properly. Oversight in 02a93e7e, per buildfarm member guaibasaurus.
-
Thomas Munro authored
This set of commits has some bugs with known fixes, but at this late stage in the release cycle it seems best to revert and resubmit next time, along with some new automated test coverage for this whole area.

Commits reverted:

dc88460c: Doc: Review for "Optionally prefetch referenced data in recovery."
1d257577: Optionally prefetch referenced data in recovery.
f003d9f8: Add circular WAL decoding buffer.
323cbe7c: Remove read_page callback from XLogReader.

Remove the new GUC group WAL_RECOVERY recently added by a55a9847, as the corresponding section of config.sgml is now reverted.

Discussion: https://postgr.es/m/CAOuzzgrn7iKnFRsB4MHp3UisEQAGgZMbk_ViTN4HV4-Ksq8zCg%40mail.gmail.com
-
Michael Paquier authored
Some relations with LZ4 used as the toast compression method have been left around in the main regression test suite to stress pg_upgrade, but pg_dump, whose tests are much pickier in terms of the output generated, had no actual coverage with this feature. Similarly to collations, tests only working with LZ4 are tracked with an additional flag, and this uses TestLib::check_pg_config() to check if the build supports LZ4 or not. This stresses more scenarios with tables, materialized views and pg_dump --no-toast-compression.

Author: Dilip Kumar
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAFiTN-twgPmohG7qj1HXhySq16h123y5OowsQR+5h1YeZE9fmQ@mail.gmail.com
-
Michael Paquier authored
The upstream project is officially named "LZ4", and the documentation confused the project name with the option value that can be used with DDLs supporting this option. Documentation related to the configure option --with-lz4 was missing, so add something for that.

Author: Dilip Kumar, Michael Paquier
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/YJaOZQDXBVySq+Cc@paquier.xyz
-
- 09 May, 2021 1 commit
-
-
Tom Lane authored
These comments left the impression that USE_VALGRIND isn't really essential for valgrind testing. But that's wrong, as I learned the hard way today.

Discussion: https://postgr.es/m/512778.1620588546@sss.pgh.pa.us
-
- 08 May, 2021 4 commits
-
-
David Rowley authored
In 9eacee2e, I included some code to verify the cache's memory tracking is correct by counting up the number of entries and the memory they use each time we evict something from the cache. Those values are then compared to the expected values using Assert. The problem is that this requires looping over the entire cache hash table each time we evict an entry from the cache. That can be pretty expensive, as noted by Pavel Stehule.

Here we move this memory accounting checking code so that we only verify it on cassert builds once when shutting down the Result Cache node. Aside from the performance increase, this has two distinct advantages:

1) We do the memory checks at the last possible moment before destroying the cache. This means we'll now catch accounting problems that might sneak in after a cache eviction.

2) We now do the memory Assert checks when there were no cache evictions. This increases the coverage.

One small disadvantage is that we'll now miss any memory tracking issues that somehow managed to resolve themselves by the end of execution. However, it seems to me that such a memory tracking problem would be quite unlikely, and likely somewhat less harmful if one were to exist.

In passing, adjust the loop over the hash table to use the standard simplehash.h method of iteration.

Reported-by: Pavel Stehule
Discussion: https://postgr.es/m/CAFj8pRAzgoSkdEiqrKbT=7yG9FA5fjUAP3jmJywuDqYq6Ki5ug@mail.gmail.com
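
The standard simplehash.h iteration pattern referred to looks roughly like this (names follow nodeResultCache.c's "resultcache" simplehash prefix; the memory tally itself is elided):

    resultcache_iterator i;
    ResultCacheEntry *entry;
    int64       count = 0;

    resultcache_start_iterate(rcstate->hashtable, &i);
    while ((entry = resultcache_iterate(rcstate->hashtable, &i)) != NULL)
        count++;                /* the real check also tallies memory */

    /* simplehash maintains ->members itself, so the counts must agree */
    Assert(count == rcstate->hashtable->members);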
-
Tom Lane authored
It seems that various people have moved GUCs around in the config.sgml listing without bothering to make the code agree. Ensure that the config_group codes assigned to GUCs match where they are listed in config.sgml. Likewise ensure that postgresql.conf.sample lists GUCs in the same sub-section and same ordering as they appear in config.sgml. (I've got some doubts about some of these choices, but for the purposes of this patch, we'll treat config.sgml as gospel.)

Notably, this requires adding a WAL_RECOVERY config_group value, because 1d257577 didn't. As long as we're renumbering that enum anyway, let's take out the values corresponding to major groups that are divided into sub-groups. No GUC should be assigned to the major group itself, so those values just create a temptation to do the wrong thing, while adding work for translators.

In passing, adjust the short_desc strings for PRESET_OPTIONS GUCs to uniformly use the phrasing "Shows XYZ.", removing the impression some of these strings left that you can set the value.

While some of these errors are old, no back-patch, as changing the contents of the pg_settings view in stable branches seems more likely to be seen as a compatibility break than anything helpful.

Bharath Rupireddy, Justin Pryzby, Tom Lane

Discussion: https://postgr.es/m/16997-ff16127f6e0d1390@postgresql.org
Discussion: https://postgr.es/m/20210413123139.GE6091@telsasoft.com
-
Tom Lane authored
I came to fix "useful only useful", but the more I looked at the text the more things I thought could be improved.
-
Michael Paquier authored
Specifying an incorrect value for the compression method of an attribute caused ERRCODE_FEATURE_NOT_SUPPORTED to be raised as an error. Use instead ERRCODE_INVALID_PARAMETER_VALUE to be more consistent.

Author: Dilip Kumar
Discussion: https://postgr.es/m/CAFiTN-vH84fE-8C4zGZw4v0Wyh4Y2v=5JWg2fGE5+LPaDvz1GQ@mail.gmail.com
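
For illustration, the shape of the reworked report (message text and the "compression" variable are illustrative):

    ereport(ERROR,
            (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
             errmsg("invalid compression method \"%s\"", compression)));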
-
- 07 May, 2021 7 commits
-
-
Tomas Vondra authored
When executing the INSERT with batching, we may need to rebuild the query when the batch size changes, in which case we pfree the current string. We must not release the original string, stored in fdw_private, because that may be needed in EXPLAIN ANALYZE. So make a copy of the SQL, but only for INSERT queries.

Reported-by: Pavel Stehule
Discussion: https://postgr.es/m/CAFj8pRCL_Rjw-MCR6J7VX9OF7MR6PA5K8qUbrMvprW_e-aHkfQ%40mail.gmail.com
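
A sketch of the fix's shape when the modify state is set up (assuming postgres_fdw's fmstate/query naming; details are illustrative):

    /* Batching may rebuild and pfree the INSERT statement later, so
     * work on a copy rather than the string kept in fdw_private. */
    fmstate->query = query;
    if (operation == CMD_INSERT)
        fmstate->query = pstrdup(fmstate->query);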
-
Andrew Dunstan authored
Discussion: https://postgr.es/m/20210506035602.3akutfvvojngj3nb@alap3.anarazel.de
-
Peter Eisentraut authored
-
Alvaro Herrera authored
This patch replaces use of the global "wrconn" variable in AlterSubscription_refresh with a local variable of the same name, making it consistent with other functions in subscriptioncmds.c (e.g. DropSubscription).

The global wrconn is only meant to be used for logical apply/tablesync workers. Abusing it this way is known to cause trouble if an apply worker manages to do a subscription refresh, such as reported by Jeremy Finzel and diagnosed by Andres Freund back in November 2020, at https://www.postgresql.org/message-id/20201111215820.qihhrz7fayu6myfi@alap3.anarazel.de

Backpatch to 10. In branch master, also move the connection establishment to occur outside the PG_TRY block; this way we can remove a test for NULL in PG_FINALLY, and it also makes the code more consistent with similar code in the same file.

Author: Peter Smith <peter.b.smith@fujitsu.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/CAHut+Pu7Jv9L2BOEx_Z0UtJxfDevQSAUW2mJqWU+CtmDrEZVAg@mail.gmail.com
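
A minimal sketch of the local-variable approach, assuming the walrcv_connect API from replication/walreceiver.h and a Subscription "sub" in scope; error handling is reduced to the essentials:

    char       *err;
    WalReceiverConn *wrconn;    /* local variable, not the shared global */

    /* connect as a logical replication client, outside any PG_TRY */
    wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);
    if (wrconn == NULL)
        ereport(ERROR,
                (errmsg("could not connect to the publisher: %s", err)));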
-
Andrew Dunstan authored
-
Andrew Dunstan authored
-
Tomas Vondra authored
The docs mentioned expression indexes as a way to improve selectivity estimates for functions, but we have a second option to improve that by creating extended statistics. So mention that too.

Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20210505210947.GA27406%40telsasoft.com
-