- 14 May, 2021 3 commits
-
Peter Eisentraut authored
-
Bruce Momjian authored
-
David Rowley authored
This seems to be leftover from ea15e186 and from when we used to evaluate SRFs at each node. Since there is an unconditional "return" at the end of the loop body, only one iteration is ever possible, so we can just change the loop into an if condition.

There is no actual bug being fixed here, so no back-patch. It seems fine to just fix this anomaly in master only.

Author: Greg Nancarrow
Discussion: https://postgr.es/m/CAJcOf-d7T1q0az-D8evWXnsuBZjigT04WkV5hCAOEJQZRWy28w@mail.gmail.com
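A minimal sketch of the pattern being cleaned up (illustrative only, not the actual executor code): a loop whose body unconditionally returns can run at most once, so it is really just an if.

    #include <stddef.h>

    /* Before: reads as if it could iterate, but the unconditional
     * return means at most one pass ever happens. */
    static const char *
    before_fix(const char *value)
    {
        while (value != NULL)
        {
            return value;       /* loop body always exits here */
        }
        return "default";
    }

    /* After: identical behavior, clearer intent. */
    static const char *
    after_fix(const char *value)
    {
        if (value != NULL)
            return value;
        return "default";
    }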
-
- 13 May, 2021 8 commits
-
Peter Geoghegan authored
The percentage of blocks from the table value reported by autovacuum log output (following commit 5100010e) should never exceed 100%, because it describes the state of the table back when lazy_vacuum() was called. The value could nevertheless exceed 100% in the event of heap relation truncation. We failed to compensate for how truncation affects rel_pages.

Fix the faulty accounting by using the original rel_pages value instead of the current/final rel_pages value.

Reported-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20210423204306.5osfpkt2ggaedyvy@alap3.anarazel.de
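A hedged sketch of the corrected accounting (names are assumptions, not the exact vacuumlazy.c code): divide by the page count captured when lazy_vacuum() was called, so a later truncation cannot push the percentage past 100%.

    #include <stdint.h>

    typedef uint32_t BlockNumber;   /* stand-in for storage/block.h */

    /* Use the pre-truncation page count as the denominator. */
    static double
    scanned_percentage(BlockNumber scanned_pages, BlockNumber orig_rel_pages)
    {
        if (orig_rel_pages == 0)
            return 100.0;           /* empty table: nothing left to scan */
        return 100.0 * scanned_pages / (double) orig_rel_pages;
    }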
-
Bruce Momjian authored
-
Alexander Korotkov authored
Make the sample like_regex match string values of the root object instead of the whole document. The corrected example seems to represent a more relevant use case.

Backpatch to 12, when jsonpath was introduced.

Discussion: https://postgr.es/m/13440f8b-4c1f-5875-c8e3-f3f65606af2f%40xs4all.nl
Author: Erik Rijkers
Reviewed-by: Michael Paquier, Alexander Korotkov
Backpatch-through: 12
-
Etsuro Fujita authored
Commits 27e1f145 and 86dc9005, which were independently discussed, cause a crash when executing an inherited foreign UPDATE/DELETE query with asynchronous execution enabled, in the case where children of an Append node that is the direct/indirect child of the ModifyTable node are rewritten by postgresPlanDirectModify() so as to modify foreign tables directly: the direct modifications are then executed asynchronously, which is not currently supported by asynchronous execution. Fix by disabling asynchronous execution of the direct modifications in that function.

Author: Etsuro Fujita
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/CAPmGK158e9sJOfuWxfn%2B0ynrspXQU3JhNjSCbaoeSzMvnga%2Bbw%40mail.gmail.com
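A plausible shape for the fix, sketched below; the exact field and placement are assumptions, not a verbatim excerpt. When the scan is rewritten into a direct modification, the plan's async flag is cleared so an async-capable Append will run this child synchronously.

    #include "postgres.h"
    #include "nodes/plannodes.h"

    /* Hedged sketch of the assumed fix in the direct-modify path. */
    static void
    disable_async_for_direct_modify(ForeignScan *fscan)
    {
        /* ... after rewriting the scan to push the UPDATE/DELETE to
         * the remote server ... */

        /* Direct modifications cannot currently run asynchronously. */
        fscan->scan.plan.async_capable = false;
    }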
-
Peter Eisentraut authored
-
Amit Kapila authored
Some of the tests were not considering that the slot's spill stats could be received by the stats collector after we have reset the stats. Remove those tests, and don't check the total bytes decoded and sent to the output plugin in the spilled-stats test, as we can send the spilled stats to the stats collector before actually sending the changes to the output plugin.

Reported-by: Tom Lane, as per buildfarm
Author: Vignesh C, Sawada Masahiko
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
-
Bruce Momjian authored
-
Michael Paquier authored
When specified directly as DML queries, INSERT was not always getting completed to "INSERT INTO", and likewise DELETE to "DELETE FROM". This makes the completion behavior more consistent for both commands, saving a few keystrokes. Commands on policies, triggers, grant/revoke, etc. require only DELETE as the completion keyword.

Author: Haiying Tang
Reviewed-by: Dilip Kumar, Julien Rouhaud
Discussion: https://postgr.es/m/OS0PR01MB61135AE2B07CCD1AB8C6A0F6FB549@OS0PR01MB6113.jpnprd01.prod.outlook.com
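A hedged, excerpt-style sketch of what the change plausibly looks like in tab-complete.c's conventions; the surrounding dispatch chain and exact match conditions are assumptions, not a quoted diff.

    /* Standalone DML statements now complete to the full keyword pair,
     * using tab-complete.c's Matches()/COMPLETE_WITH() macros... */
    else if (Matches("INSERT"))
        COMPLETE_WITH("INTO");
    else if (Matches("DELETE"))
        COMPLETE_WITH("FROM");
    /* ...while policy/trigger/GRANT/REVOKE contexts keep completing
     * with the bare DELETE keyword. */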
-
- 12 May, 2021 11 commits
-
Alvaro Herrera authored
The worker.c global wrconn is only meant to be used by logical apply/tablesync workers, but there are other variables with the same name. To reduce future confusion, rename the global from "wrconn" to "LogRepWorkerWalRcvConn". While this is just cosmetic, it seems better to backpatch it all the way back to 10, where this code appeared, to avoid future backpatching issues.

Author: Peter Smith <smithpb2250@gmail.com>
Discussion: https://postgr.es/m/CAHut+Pu7Jv9L2BOEx_Z0UtJxfDevQSAUW2mJqWU+CtmDrEZVAg@mail.gmail.com
-
Tom Lane authored
Previously, any error reported by the backend while reading system_constraints.sql would report the entire file, not just the particular command it was working on. (Ask me how I know.) Likewise, there were chunks of system_functions.sql that would be read as one command, which would be annoying if anything failed there.

The issue for system_constraints.sql is an oversight in commit dfb75e47. I didn't try to trace down where the poor formatting in system_functions.sql started, but it's certainly contrary to the advice at the head of that file.
-
Tom Lane authored
Run renumber_oids.pl to move high-numbered OIDs down, as per pre-beta tasks specified by RELEASE_CHANGES. For reference, the command was:

    ./renumber_oids.pl --first-mapped-oid 8000 --target-oid 6150
-
Tom Lane authored
Also "make reformat-dat-files". The only change worthy of note is that pgindent messed up the formatting of launcher.c's struct LogicalRepWorkerId, which led me to notice that that struct wasn't used at all anymore, so I just took it out.
-
Michael Paquier authored
The section of the code in charge of returning all the relations associated with a subscription only needs one ScanKey, but allocated two of them. This code was introduced as a copy-paste from a different area of the same file by 7c4f5240, making the result confusing to follow.

Author: Peter Smith
Reviewed-by: Tom Lane, Julien Rouhaud, Bharath Rupireddy
Discussion: https://postgr.es/m/CAHut+PsLKe+rN3FjchoJsd76rx2aMsFTB7CTFxRgUP05p=kcpQ@mail.gmail.com
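The simplification boils down to declaring exactly the keys the scan uses; a hedged sketch approximating the pg_subscription.c code (not a verbatim excerpt):

    #include "postgres.h"
    #include "access/genam.h"
    #include "access/skey.h"
    #include "access/stratnum.h"
    #include "catalog/pg_subscription_rel.h"
    #include "utils/fmgroids.h"
    #include "utils/rel.h"

    /* One qual, one ScanKey -- rather than a two-element array copied
     * from a neighboring routine. */
    static SysScanDesc
    begin_subscription_rel_scan(Relation rel, Oid subid)
    {
        ScanKeyData skey;

        ScanKeyInit(&skey,
                    Anum_pg_subscription_rel_srsubid,
                    BTEqualStrategyNumber, F_OIDEQ,
                    ObjectIdGetDatum(subid));

        return systable_beginscan(rel, InvalidOid, false, NULL, 1, &skey);
    }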
-
Peter Eisentraut authored
-
Etsuro Fujita authored
EXPLAIN ANALYZE for an async-capable ForeignScan node associated with postgres_fdw is done just by using instrumentation for ExecProcNode() called from the node's callbacks, causing the following problems:

1) If the remote table to scan is empty, the node is incorrectly considered as "never executed" by the command even if the node is executed, as ExecProcNode() isn't called from the node's callbacks at all in that case.

2) The command fails to collect timings for things other than ExecProcNode() done in the node, such as creating a cursor for the node's remote query.

To fix these problems, add instrumentation for async-capable nodes, and modify postgres_fdw accordingly. My oversight in commit 27e1f145.

While at it, update a comment for the AsyncRequest struct in execnodes.h and the documentation for the ForeignAsyncRequest API in fdwhandler.sgml to match the code in ExecAsyncAppendResponse() in nodeAppend.c, and fix typos in comments in nodeAppend.c.

Per report from Andrey Lepikhov, though I didn't use his patch.

Reviewed-by: Andrey Lepikhov
Discussion: https://postgr.es/m/2eb662bb-105d-fc20-7412-2f027cc3ca72%40postgrespro.ru
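A hedged sketch of the idea: the AsyncRequest fields below match PG 14's execnodes.h, but the wrapper itself is illustrative, not the committed code. The point is to bracket the FDW's async callback with the same instrumentation used for synchronous ExecProcNode() calls, so work such as creating the remote cursor is attributed to the node even when ExecProcNode() itself is never called.

    #include "postgres.h"
    #include "executor/execAsync.h"
    #include "executor/instrument.h"
    #include "nodes/execnodes.h"

    static void
    exec_async_request_with_instr(AsyncRequest *areq)
    {
        PlanState *ps = areq->requestee;

        if (ps->instrument)
            InstrStartNode(ps->instrument);

        ExecAsyncRequest(areq);     /* invokes the FDW's async callback */

        if (ps->instrument)
            InstrStopNode(ps->instrument,
                          areq->request_complete ? 1.0 : 0.0);
    }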
-
Tom Lane authored
Several queries in the privileges regression test cause the planner to apply the plpgsql function "leak()" to every element of the histogram for atest12.b. Since commit 0c882e52 increased the size of that histogram to 10000 entries, the test invokes that function over 100000 times, which takes an absolutely unreasonable amount of time in clobber-cache-always mode.

However, there's no real reason why that has to be a plpgsql function: for the purposes of this test, all that matters is that it not be marked leakproof. So we can replace the plpgsql implementation with a direct call of int4lt, which has the same behavior and is orders of magnitude faster. This is expected to cut several hours off the buildfarm cycle time for CCA animals. It has some positive impact in normal builds too, though that's probably lost in the noise.

Back-patch to v13, where 0c882e52 came in.

Discussion: https://postgr.es/m/575884.1620626638@sss.pgh.pa.us
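The replacement's essential property is just that the function not be marked LEAKPROOF; the committed fix routes it to the built-in int4lt, and a C-language equivalent would be as trivial as this sketch (illustrative, not the committed test change):

    #include "postgres.h"
    #include "fmgr.h"

    PG_FUNCTION_INFO_V1(leak);

    /* A cheap comparator that the planner must still treat as
     * potentially leaky, because it is not declared LEAKPROOF. */
    Datum
    leak(PG_FUNCTION_ARGS)
    {
        int32 a = PG_GETARG_INT32(0);
        int32 b = PG_GETARG_INT32(1);

        PG_RETURN_BOOL(a < b);
    }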
-
Fujii Masao authored
Previously, long was used as the data type for some counters in BufferUsage and WalUsage. But long is only four bytes, e.g., on Windows, and it's entirely possible to wrap a four-byte counter: emitting more than four billion WAL records in one transaction isn't actually particularly rare.

To avoid overflows of those counters, this commit changes their data type from long to int64.

Suggested-by: Andres Freund
Author: Masahiro Ikeda
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20201221211650.k7b53tcnadrciqjo@alap3.anarazel.de
Discussion: https://postgr.es/m/af0964ac-7080-1984-dc23-513754987716@oss.nttdata.com
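An abridged sketch of the change (field list shortened; the typedefs stand in for PostgreSQL's c.h):

    #include <stdint.h>

    typedef int64_t int64;      /* stand-ins for PostgreSQL's c.h */
    typedef uint64_t uint64;

    /* "long" is 32 bits on LLP64 platforms such as 64-bit Windows, so
     * a transaction emitting billions of WAL records can overflow a
     * long counter; int64 is the same width everywhere. */
    typedef struct WalUsage
    {
        int64   wal_records;    /* was: long */
        int64   wal_fpi;        /* was: long */
        uint64  wal_bytes;
    } WalUsage;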
-
Andrew Dunstan authored
Use a static prolog file instead of generating the prolog from the existing perl script. Also, support generation of the file in a vpath build.

Discussion: https://postgr.es/m/700620.1620662868@sss.pgh.pa.us
-
- 11 May, 2021 8 commits
-
Tom Lane authored
I copied the existing spelling of "--max_connections", but that's just wrong :-(. Evidently setting $ENV{MAX_CONNECTIONS} has never actually worked in this script. Given the lack of complaints, it's probably not worth back-patching a fix.

Per buildfarm.

Discussion: https://postgr.es/m/899209.1620759506@sss.pgh.pa.us
-
Tom Lane authored
Having to maintain two lists of regression test scripts has proven annoyingly error-prone. We can achieve the effect of the serial_schedule by running the parallel_schedule with "--max_connections=1"; so do that and remove serial_schedule.

This causes cosmetic differences in the progress output, but it doesn't seem worth restructuring pg_regress to avoid that.

Discussion: https://postgr.es/m/899209.1620759506@sss.pgh.pa.us
-
Bruce Momjian authored
-
Tom Lane authored
opr_sanity's binary_coercible() function has always been meant to match the parser's notion of binary coercibility, but it also has always been a rather poor approximation of the parser's real rules (as embodied in IsBinaryCoercible()). That hasn't bit us so far, but it's predictable that it will eventually.

It also now emerges that implementing this check in plpgsql performs absolutely horribly in clobber-cache-always testing. (Perhaps we could do something about that, but I suspect it just means that plpgsql is exploiting catalog caching to the hilt.)

Hence, let's replace binary_coercible() with a C shim that directly invokes IsBinaryCoercible(), eliminating both the semantic hazard and the performance issue.

Most of regress.c's C functions are declared in create_function_1, but we can't simply move that to before opr_sanity/type_sanity since those tests would complain about the resulting shell types. I chose to split it into create_function_0 and create_function_1. Since create_function_0 now runs as part of a parallel group while create_function_1 doesn't, reduce the latter to create just those functions that opr_sanity and type_sanity would whine about.

To make room for create_function_0 in the second parallel group of tests, move tstypes to the third parallel group.

In passing, clean up some ordering deviations between parallel_schedule and serial_schedule.

Discussion: https://postgr.es/m/292305.1620503097@sss.pgh.pa.us
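The shim is a one-line bridge from SQL to the parser's own test; a sketch of what regress.c gains (close to, but not claiming to be, the committed code):

    #include "postgres.h"
    #include "fmgr.h"
    #include "parser/parse_coerce.h"

    PG_FUNCTION_INFO_V1(binary_coercible);

    /* Defer entirely to the parser's IsBinaryCoercible() instead of
     * re-approximating its rules in plpgsql. */
    Datum
    binary_coercible(PG_FUNCTION_ARGS)
    {
        Oid     srctype = PG_GETARG_OID(0);
        Oid     targettype = PG_GETARG_OID(1);

        PG_RETURN_BOOL(IsBinaryCoercible(srctype, targettype));
    }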
-
Peter Eisentraut authored
-
Bruce Momjian authored
-
David Rowley authored
The note is no longer true as of 86dc9005, so remove it.

Author: Amit Langote
Discussion: https://postgr.es/m/CA+HiwqFxQn7Hz1wT+wYgnf_9SK0c4BwOOwFFT8jcSZwJrd8HEA@mail.gmail.com
-
Michael Paquier authored
Since its introduction in bbe0a81d, compression of table data has supported LZ4, but nothing had been done within the MSVC scripts to allow users to build the code with this library. This commit closes the gap by extending the MSVC scripts to be able to build optionally with LZ4.

Getting libraries that can be used for compilation and execution is possible, as LZ4 can be compiled down to MSVC 2010 using its source tarball. MinGW may require extra effort to work, and I have been able to test this only with MSVC; still, this is better than nothing as it gives users a way to test the feature on Windows.

Author: Dilip Kumar
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/YJPdNeF68XpwDDki@paquier.xyz
-
- 10 May, 2021 10 commits
-
Tom Lane authored
It's unusual to have any resjunk columns in an ON CONFLICT ... UPDATE list, but it can happen when MULTIEXPR_SUBLINK SubPlans are present. If it happens, the ON CONFLICT UPDATE code path would end up storing tuples that include the values of the extra resjunk columns. That's fairly harmless in the short run, but if new columns are added to the table then the values would become accessible, possibly leading to malfunctions if they don't match the datatypes of the new columns.

This had escaped notice through a confluence of missing sanity checks, including

* There's no cross-check that a tuple presented to heap_insert or heap_update matches the table rowtype. While it's difficult to check that fully at reasonable cost, we can easily add assertions that there aren't too many columns.

* The output-column-assignment cases in execExprInterp.c lacked any sanity checks on the output column numbers, which seems like an oversight considering there are plenty of assertion checks on input column numbers. Add assertions there too.

* We failed to apply nodeModifyTable's ExecCheckPlanOutput() to the ON CONFLICT UPDATE tlist. That wouldn't have caught this specific error, since that function is chartered to ignore resjunk columns; but it sure seems like a bad omission now that we've seen this bug.

In HEAD, the right way to fix this is to make the processing of ON CONFLICT UPDATE tlists work the same as regular UPDATE tlists now do, that is don't add "SET x = x" entries, and use ExecBuildUpdateProjection to evaluate the tlist and combine it with old values of the not-set columns. This adds a little complication to ExecBuildUpdateProjection, but allows removal of a comparable amount of now-dead code from the planner.

In the back branches, the most expedient solution seems to be to (a) use an output slot for the ON CONFLICT UPDATE projection that actually matches the target table, and then (b) invent a variant of ExecBuildProjectionInfo that can be told to not store values resulting from resjunk columns, so it doesn't try to store into nonexistent columns of the output slot. (We can't simply ignore the resjunk columns altogether; they have to be evaluated for MULTIEXPR_SUBLINK to work.) This works back to v10. In 9.6, projections work much differently and we can't cheaply give them such an option. The 9.6 version of this patch works by inserting a JunkFilter when it's necessary to get rid of resjunk columns.

In addition, v11 and up have the reverse problem when trying to perform ON CONFLICT UPDATE on a partitioned table. Through a further oversight, adjust_partition_tlist() discarded resjunk columns when re-ordering the ON CONFLICT UPDATE tlist to match a partition. This accidentally prevented the storing-bogus-tuples problem, but at the cost that MULTIEXPR_SUBLINK cases didn't work, typically crashing if more than one row has to be updated. Fix by preserving resjunk columns in that routine. (I failed to resist the temptation to add more assertions there too, and to do some minor code beautification.)

Per report from Andres Freund. Back-patch to all supported branches.

Security: CVE-2021-32028
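The first bullet's cheap cross-check can be sketched like this; placement and exact form are assumptions, not the committed hunk.

    #include "postgres.h"
    #include "access/htup_details.h"
    #include "utils/rel.h"

    /* Verifying the full rowtype on every heap_insert/heap_update
     * would be too expensive, but a tuple carrying more attributes
     * than the relation has columns is certainly malformed. */
    static void
    check_tuple_natts(Relation relation, HeapTuple tup)
    {
        Assert(HeapTupleHeaderGetNatts(tup->t_data) <=
               RelationGetNumberOfAttributes(relation));
    }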
-
Tom Lane authored
While we were (mostly) careful about ensuring that the dimensions of arrays aren't large enough to cause integer overflow, the lower bound values were generally not checked. This allows situations where lower_bound + dimension overflows an integer. It seems that that's harmless so far as array reading is concerned, except that array elements with subscripts notionally exceeding INT_MAX are inaccessible. However, it confuses various array-assignment logic, resulting in a potential for memory stomps.

Fix by adding checks that array lower bounds aren't large enough to cause lower_bound + dimension to overflow. (Note: this results in disallowing cases where the last subscript position would be exactly INT_MAX. In principle we could probably allow that, but there's a lot of code that computes lower_bound + dimension and would need adjustment. It seems doubtful that it's worth the trouble/risk to allow it.)

Somewhat independently of that, array_set_element() was careless about possible overflow when checking the subscript of a fixed-length array, creating a different route to memory stomps. Fix that too.

Security: CVE-2021-32027
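The added guard amounts to rejecting any lower_bound + dimension that overflows int32; a sketch using the tree's checked-arithmetic helper (the loop context is illustrative, not the committed code):

    #include "postgres.h"
    #include "common/int.h"

    /* Error out if lb[i] + dims[i] would overflow; as noted above,
     * this also rejects arrays whose last subscript would be exactly
     * INT_MAX. */
    static void
    check_array_bounds(int ndim, const int *dims, const int *lb)
    {
        for (int i = 0; i < ndim; i++)
        {
            int32   sum;

            if (pg_add_s32_overflow(lb[i], dims[i], &sum))
                ereport(ERROR,
                        (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                         errmsg("array lower bound is too large: %d",
                                lb[i])));
        }
    }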
-
Peter Eisentraut authored
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 1c361d3ac016b61715d99f2055dee050397e3f13
-
Peter Eisentraut authored
When building without --enable-dtrace, emit dummy

    do {} while (0)

statements for the stubbed-out TRACE_POSTGRESQL_foo() macros instead of empty macros that totally elide the original probe statement. This fixes the warning

    suggest braces around empty body in an ‘if’ statement [-Wempty-body]

introduced by b94409a0.

Author: Craig Ringer <craig.ringer@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/20210504221531.cfvpmmdfsou6eitb%40alap3.anarazel.de
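The shape of the stubs, sketched with an illustrative probe name: an empty macro turns "if (cond) TRACE_...();" into "if (cond) ;", which is exactly what -Wempty-body complains about, while the do {} while (0) form is a real statement that still consumes the trailing semicolon.

    /* Sketch of the probes.h stubs generated without --enable-dtrace. */
    #ifdef ENABLE_DTRACE
    /* real probe macros generated by "dtrace -h" from probes.d */
    #else
    #define TRACE_POSTGRESQL_TRANSACTION_START(localxid) \
        do {} while (0)
    #endif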
-
Peter Eisentraut authored
Was present in original commit 198b3716 but apparently never used.
-
Michael Paquier authored
Author: Kyotaro Horiguchi, Justin Pryzby
Discussion: https://postgr.es/m/20210428.173633.1525059946206239295.horikyota.ntt@gmail.com
-
Bruce Momjian authored
-
Michael Paquier authored
"make dist", in charge of creating a distribution tarball, failed when attempting to generate ./INSTALL as a new reference added to guc-default-toast-compression on the documentation for the installation details was not getting translated properly to plain text. Like all the other link references on this page, this adds a new entry to standalone-profile.xsl to allow the generation of ./INSTALL to finish properly. Oversight in 02a93e7e, per buildfarm member guaibasaurus.
-
Thomas Munro authored
This set of commits has some bugs with known fixes, but at this late stage in the release cycle it seems best to revert and resubmit next time, along with some new automated test coverage for this whole area.

Commits reverted:
dc88460c: Doc: Review for "Optionally prefetch referenced data in recovery."
1d257577: Optionally prefetch referenced data in recovery.
f003d9f8: Add circular WAL decoding buffer.
323cbe7c: Remove read_page callback from XLogReader.

Remove the new GUC group WAL_RECOVERY recently added by a55a9847, as the corresponding section of config.sgml is now reverted.

Discussion: https://postgr.es/m/CAOuzzgrn7iKnFRsB4MHp3UisEQAGgZMbk_ViTN4HV4-Ksq8zCg%40mail.gmail.com
-
Michael Paquier authored
Some relations with LZ4 used as the TOAST compression method have been left around in the main regression test suite to stress pg_upgrade, but pg_dump, whose tests are much more picky in terms of output generated, had no actual coverage of this feature.

Similarly to collations, tests only working with LZ4 are tracked with an additional flag, and this uses TestLib::check_pg_config() to check whether the build supports LZ4 or not. This stresses more scenarios with tables, materialized views and pg_dump --no-toast-compression.

Author: Dilip Kumar
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAFiTN-twgPmohG7qj1HXhySq16h123y5OowsQR+5h1YeZE9fmQ@mail.gmail.com
-