- 03 Jun, 2021 1 commit
-
-
Michael Paquier authored
Various environment variables were not getting reset in the TAP tests, which would cause failures depending on the tests or the environment variables involved. For example, PGSSL{MAX,MIN}PROTOCOLVERSION could cause failures in the SSL tests. Even worse, a junk value of PGCLIENTENCODING makes server startup fail. The list of variables reset is adjusted in each stable branch depending on what is supported. While at it, simplify the code a bit per a suggestion from Andrew Dunstan, using a list of variables instead of doing single deletions. A minimal sketch of that "list of variables" idea is shown below.

Reviewed-by: Andrew Dunstan, Daniel Gustafsson
Discussion: https://postgr.es/m/YLbjjRpucIeZ78VQ@paquier.xyz
Backpatch-through: 9.6
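The actual change lives in the Perl TAP infrastructure; purely as an illustration of scrubbing a fixed list of environment variables in one loop, here is a minimal C sketch (the variable names come from the message above, the code itself is not from the tree):

```c
/*
 * Illustrative only: the TAP tests do this in Perl.  The point is to keep
 * the variables to scrub in one list and loop over it, instead of deleting
 * them one by one.
 */
#include <stdlib.h>

static const char *const scrubbed_env[] = {
	"PGCLIENTENCODING",
	"PGSSLMINPROTOCOLVERSION",
	"PGSSLMAXPROTOCOLVERSION",
};

static void
scrub_environment(void)
{
	for (size_t i = 0; i < sizeof(scrubbed_env) / sizeof(scrubbed_env[0]); i++)
		unsetenv(scrubbed_env[i]);
}
```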
-
- 02 Jun, 2021 9 commits
-
-
Tom Lane authored
Commit 3db826bd disallowed this case, but it turns out that some people are depending on it. Since the core grammar has allowed it since 3dc37cd8, it seems like this code should fall in line.

Per bug #17045 from Robert Sosinski.

Discussion: https://postgr.es/m/17045-6a4a9f0d1513f72b@postgresql.org
-
Tomas Vondra authored
Reverts most of commit 39b66a91, which was found to cause a significant regression for REFRESH MATERIALIZED VIEW. This means only rows inserted by heap_multi_insert will benefit from the optimization, implemented in commit 7db0cd21.

Reported-by: Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoA%3D%3Df2VSw3c-Cp_y%3DWLKHMKc1D6s7g3YWsCOvgaYPpJcg%40mail.gmail.com
-
Tom Lane authored
Commit 428b260f broke planning of cases where row marks are needed (SELECT FOR UPDATE, etc) and one of the query's tables is a foreign table that has regular table(s) as inheritance children. We got the reverse case right, but apparently were thinking that foreign tables couldn't be inheritance parents. Not so; so we need to be able to add a CTID junk column while adding a new child, not only a wholerow junk column.

Back-patch to v12 where the faulty code came in.

Amit Langote

Discussion: https://postgr.es/m/CA+HiwqEmo3FV1LAQ4TVyS2h1WM=kMkZUmbNuZSCnfHvMcUcPeA@mail.gmail.com
-
Tom Lane authored
Commit c7aba7c1 missed adding SUBSCRIPT here, and commit 6df7a969 missed adding MULTIRANGE_TYPE_NAME.

Haiying Tang and Tom Lane

Discussion: https://postgr.es/m/OS0PR01MB6113F9EDA46FA53BAA5445BDFB3D9@OS0PR01MB6113.jpnprd01.prod.outlook.com
-
Fujii Masao authored
The regression test 013_crash_restart.pl included "use Time::HiRes qw(usleep)", but usleep was not actually used there.

Author: Fujii Masao
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/63ad1368-18e2-8900-8443-524bdfb1bef5@oss.nttdata.com
-
Fujii Masao authored
Previously there was no regression test for the recovery pause feature. This commit adds a test that checks that:

- recovery can be paused and resumed as expected
- pg_get_wal_replay_pause_state() reports the correct pause state
- the paused state ends and promotion continues if a promotion is triggered while recovery is paused

Suggested-by: Michael Paquier
Author: Fujii Masao
Reviewed-by: Kyotaro Horiguchi, Dilip Kumar
Discussion: https://postgr.es/m/YKNirzqM1HYyk5h4@paquier.xyz
-
Noah Misch authored
-
Noah Misch authored
This avoids a warning under MinGW versions having gettimeofday(), per buildfarm member walleye.
-
- 01 Jun, 2021 2 commits
-
-
Tom Lane authored
This case should be disallowed, just as FOR UPDATE with a plain GROUP BY is disallowed; FOR UPDATE only makes sense when each row of the query result can be identified with a single table row. However, we missed teaching CheckSelectLocking() to check groupingSets as well as groupClause, so that it would allow degenerate grouping sets. That resulted in a bad plan and a null-pointer dereference in the executor.

Looking around for other instances of the same bug, the only one I found was in examine_simple_variable(). That'd just lead to silly estimates, but it should be fixed too.

Per private report from Yaoguang Chen. Back-patch to all supported branches.
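A minimal, self-contained sketch of the kind of check described, with stand-in types and names rather than the actual CheckSelectLocking() code:

```c
/*
 * Hedged sketch: the locking check must reject grouping sets as well as a
 * plain GROUP BY, since FOR UPDATE needs each result row to map to exactly
 * one table row.
 */
#include <stdbool.h>
#include <stdio.h>

struct query_sketch
{
	bool		has_group_clause;	/* stands in for qry->groupClause != NIL */
	bool		has_grouping_sets;	/* stands in for qry->groupingSets != NIL */
};

static bool
locking_allowed(const struct query_sketch *qry)
{
	/* reject both forms of grouping, not just groupClause */
	if (qry->has_group_clause || qry->has_grouping_sets)
	{
		fprintf(stderr, "FOR UPDATE is not allowed with GROUP BY/GROUPING SETS\n");
		return false;
	}
	return true;
}
```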
-
Amit Kapila authored
Release memory allocated when creating the tuple-conversion map and its component TupleDescs when its owning sync entry is invalidated. TupleDescs must also be freed when no map is deemed necessary to begin with.

Reported-by: Andres Freund
Author: Amit Langote
Reviewed-by: Takamichi Osumi, Amit Kapila
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/MEYP282MB166933B1AB02B4FE56E82453B64D9@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
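A sketch of the cleanup described, using the backend's tupconvert API (assumes the PostgreSQL source tree; the wrapper function and its name are illustrative, not the actual patch):

```c
#include "postgres.h"

#include "access/tupconvert.h"
#include "access/tupdesc.h"

/*
 * Release a cached conversion map and the TupleDescs it references.
 * free_conversion_map() does not free the descriptors themselves, so they
 * must be freed explicitly before the map.
 */
static void
cleanup_entry_map(TupleConversionMap **map)
{
	if (*map != NULL)
	{
		FreeTupleDesc((*map)->indesc);
		FreeTupleDesc((*map)->outdesc);
		free_conversion_map(*map);
		*map = NULL;
	}
}
```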
-
- 31 May, 2021 5 commits
-
-
Thomas Munro authored
Commit 44bf3d50 incorrectly used an errno-style interface when supplying missing pthread functionality (i.e. on macOS), but it should check for and return error numbers directly.
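An illustrative shim showing the convention being fixed, returning pthread-style error numbers directly instead of -1 plus errno (stand-in barrier type, not the actual src/port code):

```c
#include <errno.h>
#include <pthread.h>

typedef struct
{
	pthread_mutex_t mutex;
	pthread_cond_t cond;
	int			count;
} barrier_sketch;

static int
barrier_sketch_init(barrier_sketch *b, int count)
{
	int			error;

	if (count <= 0)
		return EINVAL;			/* return the error number... */

	error = pthread_mutex_init(&b->mutex, NULL);
	if (error != 0)
		return error;			/* ...and propagate pthread's own codes */

	error = pthread_cond_init(&b->cond, NULL);
	if (error != 0)
	{
		pthread_mutex_destroy(&b->mutex);
		return error;
	}

	b->count = count;
	return 0;
}
```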
-
Peter Eisentraut authored
The RADIUS-related checks in parse_hba_line() did not respect elevel and did not fill in *err_msg. Also, verify_option_list_length() pasted together error messages in an untranslatable way. To fix the latter, remove the function and do the error checking inline. It's a bit more verbose but only minimally longer, and it makes fixing the first two issues straightforward.

Reviewed-by: Magnus Hagander <magnus@hagander.net>
Discussion: https://www.postgresql.org/message-id/flat/8381e425-8c23-99b3-15ec-3115001db1b2%40enterprisedb.com
-
Tom Lane authored
create_projection_plan contains a hidden assumption (here made explicit by an Assert) that a projection-capable Path will yield a projection-capable Plan. Unfortunately, that assumption is violated only a few lines away, by create_projection_plan itself. This means that two stacked ProjectionPaths can yield an outcome where we try to jam the upper path's tlist into a non-projection-capable child node, resulting in an invalid plan.

There isn't any good reason to have stacked ProjectionPaths; indeed the whole concept is faulty, since the set of Vars/Aggs/etc needed by the upper one wouldn't necessarily be available in the output of the lower one, nor could the lower one create such values if they weren't available from its input. Hence, we can fix this by adjusting create_projection_path to strip any top-level ProjectionPath from the subpath it's given. (This amounts to saying "oh, we changed our minds about what we need to project here".)

The test case added here only fails in v13 and HEAD; before that, we don't attempt to shove the Sort into the parallel part of the plan, for reasons that aren't entirely clear to me. However, all the directly-related code looks generally the same as far back as v11, where the hazard was introduced (by d7c19e62). So I've got no faith that the same type of bug doesn't exist in v11 and v12, given the right test case. Hence, back-patch the code changes, but not the irrelevant test case, into those branches.

Per report from Bas Poot.

Discussion: https://postgr.es/m/534fca83789c4a378c7de379e9067d4f@politie.nl
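A sketch of the approach described, assuming PostgreSQL 12+ planner headers (the helper function is illustrative, not the literal patch):

```c
#include "postgres.h"

#include "nodes/pathnodes.h"

/*
 * Discard any ProjectionPath already sitting on top of the subpath before
 * building a new one; stacking them serves no purpose and can confuse
 * createplan.c.
 */
static Path *
strip_top_projection(Path *subpath)
{
	while (IsA(subpath, ProjectionPath))
		subpath = ((ProjectionPath *) subpath)->subpath;
	return subpath;
}
```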
-
Noah Misch authored
Per buildfarm member hornet. Also, update Pod documentation showing the lower value. Back-patch to v10, where the test first appeared.
-
Michael Paquier authored
Braces were referred to in some error messages as just "brackets" (not "curly brackets" or "curly braces"), which can be confusing as other types of brackets could be meant. While at it, add one test to check for the case of junk characters detected after a right brace.

Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20210514.153153.1814935914483287479.horikyota.ntt@gmail.com
-
- 29 May, 2021 2 commits
-
-
Tom Lane authored
Clarify libpq.sgml's description of service file locations and semantics.

Avoid use of backtick'ed pg_config calls to describe paths; that doesn't work on Windows, and even on Unix it's an idiom that not all readers may be instantly familiar with. Don't overspecify the locations of include files, instead writing only as much as you'd use in #include directives. The previous text in these places was incorrect for some installations, depending on where "postgresql" is in the install path.

Our convention for referencing the user's home directory seems to be "~", so change the one place that spelled it "$HOME".

install-windows.sgml follows the platform convention of spelling file paths with "\", so change the one place that used "/".

Haiying Tang and Tom Lane

Discussion: https://postgr.es/m/162149020918.26174.7150424047314144297@wrigleys.postgresql.org
-
Thomas Munro authored
Parallel query processes that called BlessTupleDesc() for identical tuple descriptors at the same moment could crash. There was code to handle that rare case, but it dereferenced a bogus DSA pointer. Repair.

Back-patch to 11, where commit cc5f8136 added support for sharing tuple descriptors in parallel queries.

Reported-by: Eric Thinnes <e.thinnes@gmx.de>
Discussion: https://postgr.es/m/99aaa2eb-e194-bf07-c29a-1a76b4f2bcf9%40gmx.de
-
- 28 May, 2021 3 commits
-
-
Andrew Dunstan authored
-
Andrew Dunstan authored
This is a long-standing omission, discovered when trying to write code that relied on it. Backpatch to all live branches.
-
Peter Geoghegan authored
Oversight in commit 5100010e.
-
- 27 May, 2021 8 commits
-
-
Tom Lane authored
Commit ab596105 increased FirstBootstrapObjectId from 12000 to 13000, but we've had some push-back about that. It's worrisome to reduce the daylight between there and FirstNormalObjectId, because the number of OIDs consumed during initdb for collation objects is hard to predict.

We can improve the situation by abandoning the assumption that these OIDs must be globally unique. It should be sufficient for them to be unique per-catalog. (Any code that's unhappy about that is broken anyway, since no more than per-catalog uniqueness can be guaranteed once the OID counter wraps around.) With that change, the largest OID assigned during genbki.pl (starting from a base of 10000) is a bit under 11000. This allows reverting FirstBootstrapObjectId to 12000 with reasonable confidence that that will be sufficient for many years to come.

We are not, at this time, abandoning the expectation that hand-assigned OIDs (below 10000) are globally unique. Someday that'll likely be necessary, but the need seems years away still.

This is late for v14, but it seems worth doing it now so that downstream software doesn't have to deal with the consequences of a change in FirstBootstrapObjectId. In any case, we already bought into forcing an initdb for beta2, so another catversion bump won't hurt.

Discussion: https://postgr.es/m/1665197.1622065382@sss.pgh.pa.us
-
Tom Lane authored
Redefine '\0' (InvalidCompressionMethod) as meaning "if we need to compress, use the current setting of default_toast_compression". This allows '\0' to be a suitable default choice regardless of datatype, greatly simplifying code paths that initialize tupledescs and the like. It seems like a more user-friendly approach as well, because now the default compression choice doesn't migrate into table definitions, meaning that changing default_toast_compression is usually sufficient to flip an installation's behavior; one needn't tediously issue per-column ALTER SET COMPRESSION commands.

Along the way, fix a few minor bugs and documentation issues with the per-column-compression feature. Adopt more robust APIs for SetIndexStorageProperties and GetAttributeCompression.

Bump catversion because typical contents of attcompression will now be different. We could get away without doing that, but it seems better to ensure v14 installations all agree on this. (We already forced initdb for beta2, anyway.)

Discussion: https://postgr.es/m/626613.1621787110@sss.pgh.pa.us
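A hedged, self-contained sketch of the resolution rule described above (the names are stand-ins, not the backend's actual code; 'p' and 'l' follow the pglz/lz4 single-character convention):

```c
#include <stdio.h>

#define INVALID_COMPRESSION '\0'	/* stand-in for InvalidCompressionMethod */

static char default_toast_compression = 'p';	/* 'p' = pglz, 'l' = lz4 */

/*
 * A column whose stored compression byte is '\0' has no fixed method, so the
 * effective method is resolved at compression time from the current default.
 */
static char
resolve_compression(char attcompression)
{
	if (attcompression == INVALID_COMPRESSION)
		return default_toast_compression;
	return attcompression;
}

int
main(void)
{
	printf("resolved: %c\n", resolve_compression(INVALID_COMPRESSION));
	return 0;
}
```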
-
Peter Eisentraut authored
The path needs to be set to refer to the build directory, not the current directory, because that's actually the source directory at that point.

Fix for 6abc8c25.
-
Peter Eisentraut authored
Apply this in the libpq_pipeline test makefile, so that the test file is not installed into tmp_install.

Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/cb9d16a6-760f-cd44-28d6-b091d5fb6ca7%40enterprisedb.com
-
Michael Paquier authored
The deliverables of upstream Kerberos on Windows are installed with paths that do not match our MSVC scripts. First, the include folder was named "inc/" in our scripts, but the upstream MSIs use "include/". Second, the build would fail with 64-bit environments as the libraries are named differently.

This commit adjusts the MSVC scripts to be compatible with the latest installations of upstream, and I have checked that the compilation works with both the 32-bit and 64-bit installations.

Special thanks to Kondo Yuta for the help in investigating the situation in hamerkop, which had an incorrect configuration for the GSS compilation.

Reported-by: Brian Ye
Discussion: https://postgr.es/m/162128202219.27274.12616756784952017465@wrigleys.postgresql.org
Backpatch-through: 9.6
-
Peter Eisentraut authored
The error message was checking that the structures returned from the parser matched expectations. That's something we usually use assertions for, not a full user-facing error message. So replace that with an assertion (hidden inside lfirst_node()).

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/452e9df8-ec89-e01b-b64a-8cc6ce830458%40enterprisedb.com
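A sketch of the pattern described, assuming the PostgreSQL source tree: lfirst_node() expands to castNode(), which asserts the node type in assert-enabled builds (the surrounding loop is illustrative, not the actual patch):

```c
#include "postgres.h"

#include "nodes/parsenodes.h"
#include "nodes/pg_list.h"

static void
walk_options(List *options)
{
	ListCell   *lc;

	foreach(lc, options)
	{
		/* previously: check nodeTag(lfirst(lc)) and report an error on mismatch */
		DefElem    *defel = lfirst_node(DefElem, lc);

		(void) defel;			/* ... process the option ... */
	}
}
```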
-
Michael Paquier authored
The description of the following parameters was imprecise or incorrect about their context (PGC_POSTMASTER or PGC_SIGHUP):

- autovacuum_work_mem (docs, as of 9.6~)
- huge_page_size (docs, as of 14~)
- max_logical_replication_workers (docs, as of 10~)
- max_sync_workers_per_subscription (docs, as of 10~)
- min_dynamic_shared_memory (docs, as of 14~)
- recovery_init_sync_method (postgresql.conf.sample, as of 14~)
- remove_temp_files_after_crash (docs, as of 14~)
- restart_after_crash (docs, as of 9.6~)
- ssl_min_protocol_version (docs, as of 12~)
- ssl_max_protocol_version (docs, as of 12~)

This commit adjusts the description of all these parameters to be more consistent with the practice used for the others.

Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/YK2ltuLpe+FbRXzA@paquier.xyz
Backpatch-through: 9.6
-
Amit Kapila authored
While decoding multi-insert WAL we can't clean up the toast data until we get the last insert of that WAL record. Now if we stream the changes before we get the last change, the memory for toast chunks won't be released, and we expect the txn to have streamed all changes after streaming. This restriction is mainly to ensure the correctness of streamed transactions, and it doesn't seem worth lifting such a restriction just to allow this case because anyway we will stream the transaction once such an insert is complete.

Previously we were using two different flags (one for toast tuples and another for speculative inserts) to indicate partial changes. Now we replace both of them with a single flag to indicate partial changes.

Reported-by: Pavan Deolasee
Author: Dilip Kumar
Reviewed-by: Pavan Deolasee, Amit Kapila
Discussion: https://postgr.es/m/CABOikdN-_858zojYN-2tNcHiVTw-nhxPwoQS4quExeweQfG1Ug@mail.gmail.com
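A purely illustrative, self-contained sketch of the "single partial-change flag" idea (the flag and type names are hypothetical, not the reorderbuffer code):

```c
#include <stdbool.h>
#include <stdint.h>

#define TXN_HAS_PARTIAL_CHANGE	0x01u	/* hypothetical flag bit */

typedef struct
{
	uint32_t	flags;
} txn_sketch;

/*
 * Both "toast insert still incomplete" and "speculative insert still
 * unresolved" collapse into one "partial change pending" bit; streaming is
 * skipped while it is set.
 */
static bool
can_stream(const txn_sketch *txn)
{
	return (txn->flags & TXN_HAS_PARTIAL_CHANGE) == 0;
}
```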
-
- 26 May, 2021 1 commit
-
-
Michael Paquier authored
Author: Hou Zhijie Discussion: https://postgr.es/m/OS0PR01MB571612191738540B27A8DE5894249@OS0PR01MB5716.jpnprd01.prod.outlook.com
-
- 25 May, 2021 9 commits
-
-
Alvaro Herrera authored
Same as 5e0b1aeb, for the companion test file. This one seems lower probability (only two failures in a month of runs); I was hardly able to reproduce a failure without a patch, so the fact that I was also unable to reproduce one with it doesn't say anything. We'll have to wait for further buildfarm results to see if we need any further adjustments.

Discussion: https://postgr.es/m/20210524090712.GA3771394@rfd.leadboat.com
-
Tom Lane authored
Commit e717a9a1 introduced a code path that bypassed the call of get_expr_result_type, which is not good because we need its rettupdesc result to pass to check_sql_fn_retval. We'd failed to notice right away because the code path in which check_sql_fn_retval uses that argument is fairly hard to reach in this context. It's not impossible though, and in any case inline_function would have no business assuming that check_sql_fn_retval doesn't need that value.

To fix, move get_expr_result_type out of the if-block, which in turn requires moving the construction of the dummy FuncExpr out of it.

Per report from Ranier Vilela. (I'm bemused by the lack of any compiler complaints...)

Discussion: https://postgr.es/m/CAEudQAqBqQpQ3HruWAGU_7WaMJ7tntpk0T8k_dVtNB46DqdBgw@mail.gmail.com
-
Alvaro Herrera authored
This recently added test has proven to be too sensitive to timing when sending a cancel to a session waiting for a lock.

We fix this by running a no-op query in the blocked session immediately after the cancel; this avoids the session that sent the cancel sending another query immediately before the cancel has been reported. Idea by Noah Misch.

With that fix, we sometimes see that the cancel error report is shown only relative to the step that is cancelled, instead of together with the step that sends the cancel. To increase the probability that both steps are shown together, add a 0.1s sleep to the cancel. In normal conditions this appears sufficient to silence most failures, but we'll see what the slower buildfarm members have to say about it.

Reported-by: Takamichi Osumi <osumi.takamichi@fujitsu.com>
Discussion: https://postgr.es/m/OSBPR01MB4888C4ABA361C7E81094AC66ED269@OSBPR01MB4888.jpnprd01.prod.outlook.com
-
Peter Eisentraut authored
-
Michael Paquier authored
VACUUM FULL and CLUSTER can be used to enforce the use of the existing compression method of a toastable column if a value currently stored is compressed with a method that does not match the column's defined method. The code in charge of decompressing and recompressing toast values at rewrite left the detoasted values around, causing an accumulation of memory allocated in TopTransactionContext. When processing large relations, this could cause the system to run out of memory. The detoasted values are not needed once their tuple is rewritten, and this commit ensures that the necessary cleanup happens.

Issue introduced by bbe0a81d. The comments in this area are reordered a bit while at it.

Reported-by: Andres Freund
Analyzed-by: Andres Freund
Author: Michael Paquier
Reviewed-by: Dilip Kumar
Discussion: https://postgr.es/m/20210521211929.pcehg6f23icwstdb@alap3.anarazel.de
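A simplified sketch of the cleanup described (assumes the PostgreSQL backend environment; the helper and its freeable[] bookkeeping are illustrative, not the actual rewrite code):

```c
#include "postgres.h"

/*
 * Once the new tuple has been formed, free any detoasted or recompressed
 * copies made along the way instead of letting them accumulate in a
 * long-lived memory context.
 */
static void
free_detoasted_copies(Datum *values, bool *freeable, int natts)
{
	for (int i = 0; i < natts; i++)
	{
		/* freeable[i] marks values this code allocated itself */
		if (freeable[i])
			pfree(DatumGetPointer(values[i]));
	}
}
```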
-
Amit Kapila authored
Add information about the pg_stat_replication_slots view along with the other system catalogs related to logical decoding.

Author: Vignesh C
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
-
Amit Kapila authored
The error messages, docs, and one of the options were using 'parallel degree' to indicate the parallelism used by the VACUUM command. We normally use 'parallel workers' elsewhere, so change parallel vacuum accordingly.

Author: Bharath Rupireddy
Reviewed-by: Dilip Kumar, Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/CALj2ACWz=PYrrFXVsEKb9J1aiX4raA+UBe02hdRp_zqDkrWUiw@mail.gmail.com
-
Michael Paquier authored
SSL renegotiation is already disabled as of 48d23c72, but this does not prevent the server from complying with a client willing to use renegotiation. In the last couple of years, renegotiation has had its share of security issues and flaws (like the recent CVE-2021-3449), and it could be possible to crash the backend with a client attempting renegotiation.

This commit takes one extra step by disabling renegotiation in the backend in the same way as SSL compression (f9264d15) or tickets (97d3a0b0). OpenSSL 1.1.0h added an option named SSL_OP_NO_RENEGOTIATION able to achieve that. In older versions there is an option called SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS that was undocumented, and could be set within the SSL object created when the TLS connection opens, but I have decided not to use it, as it feels trickier to rely on, and it is not official. Note that this option is not usable in OpenSSL < 1.1.0h as the internal contents of the *SSL object are hidden from applications.

SSL renegotiation concerns protocols up to TLSv1.2.

Per original report from Robert Haas, with a patch based on a suggestion by Andres Freund.

Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/YKZBXx7RhU74FlTE@paquier.xyz
Backpatch-through: 9.6
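A sketch of the OpenSSL call described, assuming an already-created SSL_CTX (SSL_OP_NO_RENEGOTIATION is available from OpenSSL 1.1.0h on; the wrapper function is illustrative):

```c
#include <openssl/ssl.h>

/*
 * OR the no-renegotiation option into the context, in the same spirit as
 * disabling compression or session tickets.  On older OpenSSL versions the
 * macro is simply not defined, so this becomes a no-op.
 */
static void
disable_renegotiation(SSL_CTX *context)
{
#ifdef SSL_OP_NO_RENEGOTIATION
	SSL_CTX_set_options(context, SSL_OP_NO_RENEGOTIATION);
#endif
}
```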
-
David Rowley authored
Result Cache, added in 9eacee2e, neglected to properly adjust the plan references in setrefs.c. This could lead to the following error during EXPLAIN:

ERROR: cannot decompile join alias var in plan tree

Fix that.

Bug: 17030
Reported-by: Hans Buschmann
Discussion: https://postgr.es/m/17030-5844aecae42fe223@postgresql.org
-