- 31 Jan, 2022 3 commits
-
-
Tom Lane authored
Also apply the changes suggested by running "perl ppport.h --compat-version=5.8.0", and remove some no-longer-required NEED_foo declarations (per Dagfinn Ilmari Mannsåker). Back-patch of commit 05798c9f7 into all supported branches. At the time we thought this update was mostly cosmetic, but the lack of it has caused trouble, while the patch itself hasn't. Discussion: https://postgr.es/m/87y278s6iq.fsf@wibble.ilmari.org Discussion: https://postgr.es/m/20220131015130.shn6wr2fzuymerf6@alap3.anarazel.de
-
Andres Freund authored
ppport.h was only updated in 05798c9f7f0 (master). Unfortunately my commit c89f409749c uses PERL_VERSION_LT, which came in with that update, breaking most buildfarm animals. I should have noticed that... We might want to backpatch the ppport update instead, but for now let's get the buildfarm green again. Discussion: https://postgr.es/m/20220131015130.shn6wr2fzuymerf6@alap3.anarazel.de Backpatch: 10-14, master doesn't need it
-
Andres Freund authored
For older versions we need our own copy of perl's setlocale(), because it was not exposed (why we need the setlocale in the first place is explained in plperl_init_interp). The copy stopped working in 5.28, as some of the used macros are not public anymore. But Perl_setlocale is available in 5.28, so use that. Author: Victor Wagner <vitus@wagner.pp.ru> Reviewed-By: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> Discussion: https://postgr.es/m/20200501134711.08750c5f@antares.wagner.home Backpatch: all versions
-
- 29 Jan, 2022 2 commits
-
-
Tom Lane authored
Although select_common_type() has a failure-return convention, an apparent successful return just provides a type OID that *might* work as a common supertype; we've not validated that the required casts actually exist. In the mainstream use-cases that doesn't matter, because we'll proceed to invoke coerce_to_common_type() on each input, which will fail appropriately if the proposed common type doesn't actually work. However, a few callers didn't read the (nonexistent) fine print, and thought that if they got back a nonzero OID then the coercions were sure to work. This affects in particular the recently-added "anycompatible" polymorphic types; we might think that a function/operator using such types matches cases it really doesn't. A likely end result of that is unexpected "ambiguous operator" errors, as for example in bug #17387 from James Inform. Another, much older, case is that the parser might try to transform an "x IN (list)" construct to a ScalarArrayOpExpr even when the list elements don't actually have a common supertype. It doesn't seem desirable to add more checking to select_common_type itself, as that'd just slow down the mainstream use-cases. Instead, write a separate function verify_common_type that performs the missing checks, and add a call to that where necessary. Likewise add verify_common_type_from_oids to go with select_common_type_from_oids. Back-patch to v13 where the "anycompatible" types came in. (The symptom complained of in bug #17387 doesn't appear till v14, but that's just because we didn't get around to converting || to use anycompatible till then.) In principle the "x IN (list)" fix could go back all the way, but I'm not currently convinced that it makes much difference in real-world cases, so I won't bother for now. Discussion: https://postgr.es/m/17387-5dfe54b988444963@postgresql.org
-
Michael Paquier authored
c532d15d has split the logic of COPY commands into multiple files, one change being to move the internals of BeginCopy() to BeginCopyTo(). Originally the code was written so that we'd switch back-and-forth between the current execution memory context and the dedicated memory context for the COPY command, and this refactoring has introduced an extra switch to the current memory context from the COPY context once BeginCopyTo() is done with the past logic coming from BeginCopy(). The code was correctly doing the analyze, rewrite and planning phases in the COPY context, but it was not assigning "copy_file" (FILE* used when copying to a source file) and "filename" in the COPY context, making the COPY status data inconsistent. Author: Bharath Rupireddy Reviewed-by: Japin Li Discussion: https://postgr.es/m/CALj2ACWvVa69foi9jhHFY=2BuHxAoYboyE+vXQTARwxZcJnVrQ@mail.gmail.com Backpatch-through: 14
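The fix boils down to doing those assignments while the COPY command's own memory context is current. A minimal sketch of that pattern, assuming field names such as copycontext, filename and copy_file on the COPY state struct (the real BeginCopyTo() differs in detail):

```c
/*
 * Backend-code fragment; MemoryContextSwitchTo(), pstrdup() and
 * AllocateFile() are the usual PostgreSQL APIs, the field names are
 * illustrative.
 */
MemoryContext oldcontext;

oldcontext = MemoryContextSwitchTo(cstate->copycontext);

/* these must live as long as the COPY itself, not the caller's context */
cstate->filename = pstrdup(filename);
cstate->copy_file = AllocateFile(cstate->filename, PG_BINARY_W);

MemoryContextSwitchTo(oldcontext);
```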
-
- 28 Jan, 2022 2 commits
-
-
Etsuro Fujita authored
-
Fujii Masao authored
When pg_log_backend_memory_contexts() is executed, the target backend should use LOG_SERVER_ONLY to log its memory contexts, to prevent them from being sent to its connected client regardless of client_min_messages. But previously the backend unexpectedly used LOG to log the message "logging memory contexts of PID %d" and it could be sent to the client. This is a bug in memory context logging. To fix the bug, this commit changes that message so that it's logged with LOG_SERVER_ONLY. Back-patch to v14 where pg_log_backend_memory_contexts() was added. Author: Fujii Masao Reviewed-by: Bharath Rupireddy, Atsushi Torikoshi Discussion: https://postgr.es/m/82c12f36-86f7-5e72-79af-7f5c37f6cad7@oss.nttdata.com
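For illustration, the difference is only the error level passed to ereport(): LOG reaches the connected client when client_min_messages allows it, while LOG_SERVER_ONLY goes to the server log alone. A hedged fragment (the message text and the target_pid variable are illustrative, not the exact committed code):

```c
/* sent to the server log only, never to the connected client */
ereport(LOG_SERVER_ONLY,
        (errmsg("logging memory contexts of PID %d", (int) target_pid)));
```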
-
- 27 Jan, 2022 4 commits
-
-
Tomas Vondra authored
Commit 8431e296 reworked ProcArrayApplyRecoveryInfo to sort XIDs before adding them to KnownAssignedXids. But the XIDs are sorted using xidComparator, which compares the XIDs simply as uint32 values, not logically. KnownAssignedXidsAdd() however expects XIDs in logical order, and calls TransactionIdFollowsOrEquals() to enforce that. If there are XIDs for which the two orderings disagree, an error is raised and the recovery fails/restarts.

Hitting this issue is fairly easy - you just need two transactions, one started before the 4B limit (e.g. XID 4294967290), the other sometime after it (e.g. XID 1000). Logically (4294967290 <= 1000) but when compared using xidComparator we try to add them in the opposite order. Which makes KnownAssignedXidsAdd() fail with an error like this:

    ERROR: out-of-order XID insertion in KnownAssignedXids

This only happens during replica startup, while processing RUNNING_XACTS records to build the snapshot. Once we reach STANDBY_SNAPSHOT_READY, we skip these records. So this does not affect already running replicas, but if you restart (or create) a replica while there are transactions with XIDs for which the two orderings disagree, you may hit this. Long-running transactions and frequent replica restarts increase the likelihood of hitting this issue. Once the replica gets into this state, it can't be started (even if the old transactions are terminated).

Fixed by sorting the XIDs logically - this is fine because we're dealing with normal XIDs (because it's XIDs assigned to backends) and from the same wraparound epoch (otherwise the backends could not be running at the same time on the primary node). So there are no problems with the triangle inequality, which is why xidComparator compares raw values.

Investigation and root cause analysis by Abhijit Menon-Sen. Patch by me. This issue is present in all releases since 9.4, however releases up to 9.6 are EOL already so backpatch to 10 only.

Reviewed-by: Abhijit Menon-Sen Reviewed-by: Alvaro Herrera Backpatch-through: 10 Discussion: https://postgr.es/m/36b8a501-5d73-277c-4972-f58a4dce088a%40enterprisedb.com
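The distinction between the two orderings is easy to demonstrate in plain C. The standalone sketch below (not PostgreSQL code itself) compares the two XIDs from the example above both as raw uint32 values, as xidComparator does, and logically, using the same modulo-2^32 signed-difference trick that the backend's TransactionIdPrecedes() relies on:

```c
#include <stdint.h>
#include <stdio.h>

typedef uint32_t TransactionId;

/* Raw comparison, as xidComparator does: plain uint32 ordering. */
static int
xid_cmp_raw(TransactionId a, TransactionId b)
{
    return (a < b) ? -1 : (a > b) ? 1 : 0;
}

/*
 * Logical comparison for normal XIDs from the same wraparound epoch:
 * the signed difference modulo 2^32 says which XID was assigned first.
 */
static int
xid_cmp_logical(TransactionId a, TransactionId b)
{
    int32_t     diff = (int32_t) (a - b);

    return (diff < 0) ? -1 : (diff > 0) ? 1 : 0;
}

int
main(void)
{
    TransactionId before_wrap = 4294967290u;  /* assigned just before the 4B limit */
    TransactionId after_wrap = 1000u;         /* assigned after the wraparound */

    printf("raw:     %d\n", xid_cmp_raw(before_wrap, after_wrap));     /* prints 1  */
    printf("logical: %d\n", xid_cmp_logical(before_wrap, after_wrap)); /* prints -1 */
    return 0;
}
```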
-
Andrew Dunstan authored
Perl instances on some msys toolchains (e.g. UCRT64) have their configured osname set to 'MSWin32' rather than 'msys'. The test for the msys2 platform is adjusted accordingly. Backpatch to release 14.
-
Etsuro Fujita authored
Commit 27e1f145 failed to process a pending asynchronous request made for a given ForeignScan node in postgresReScanForeignScan() (if any) in cases where we would only reset the next_tuple counter in that function, contradicting the assumption that there should be no pending asynchronous requests that have been made for async-capable subplans for the parent Append node after ReScan. This led to an assert failure in an assert-enabled build. I think this would also lead to mis-rewinding the cursor in that function in the case where we have already fetched one batch for the ForeignScan node and the asynchronous request has been made for the second batch, because even in that case we would just reset the counter when called from that function, so we would fail to execute MOVE BACKWARD ALL. To fix, modify that function to process the asynchronous request before restarting the scan. While at it, add a comment to a function to match other places. Per bug #17344 from Alexander Lakhin. Back-patch to v14 where the aforesaid commit came in. Patch by me. Test case by Alexander Lakhin, adjusted by me. Reviewed and tested by Alexander Lakhin and Dmitry Dolgov. Discussion: https://postgr.es/m/17344-226b78b00de73a7e@postgresql.org
-
Noah Misch authored
Buildfarm members kittiwake, tadarida and snapper began to fail frequently when commits 3cd9c3b921977272e6650a5efbeade4203c4bca2 and f47ed79cc8a0cfa154dc7f01faaf59822552363f added tests of concurrency, but the problem was reachable before those commits. Back-patch to v10 (all supported versions). Discussion: https://postgr.es/m/20220116210241.GC756210@rfd.leadboat.com
-
- 26 Jan, 2022 1 commit
-
-
Magnus Hagander authored
For authentication method cert, clientcert=verify-full is implied. But the pg_hba_file_rules entry would incorrectly show clientcert=verify-ca. Per bug #17354 Reported-By: Feike Steenbergen Reviewed-By: Jonathan Katz Backpatch-through: 12
-
- 25 Jan, 2022 3 commits
-
-
Tom Lane authored
This reverts commits 6051857fc and ed52c3707, but only in the back branches. Further testing has shown that while those changes do fix some things, they also break others; in particular, it looks like walreceivers fail to detect walsender-initiated connection close reliably if the walsender shuts down this way. We'll keep trying to improve matters in HEAD, but it now seems unwise to push these changes into stable releases. Discussion: https://postgr.es/m/CA+hUKG+OeoETZQ=Qw5Ub5h3tmwQhBmDA=nuNO3KG=zWfUypFAw@mail.gmail.com
-
David Rowley authored
8edd0e79 added some code to remove Append and MergeAppend nodes when they contained a single child node. As it turned out, this was unsafe to do when the Append/MergeAppend was parallel_aware and the child node was not. Removing the Append/MergeAppend, in this case, could lead to the child plan being called multiple times by parallel workers when it was unsafe to do so. Here we fix this by just not removing the Append/MergeAppend when the parallel_aware flag of the parent and child node don't match. Reported-by: Yura Sokolov Bug: #17335 Discussion: https://postgr.es/m/b59605fecb20ba9ea94e70ab60098c237c870628.camel%40postgrespro.ru Backpatch-through: 12, where 8edd0e79 was first introduced
-
Michael Paquier authored
This is an extraction of the user-visible changes done in 410aa24, including all the relevant documentation parts. Author: Justin Pryzby Discussion: https://postgr.es/m/20220124030001.GQ23027@telsasoft.com Backpatch-through: 10
-
- 24 Jan, 2022 2 commits
-
-
Tom Lane authored
In logical replication mode, a WalSender is supposed to be able to execute any regular SQL command, as well as the special replication commands. Poor design of the replication-command parser caused it to fail in various cases, notably:

* semicolons embedded in a command, or multiple SQL commands sent in a single message;
* dollar-quoted literals containing odd numbers of single or double quote marks;
* commands starting with a comment.

The basic problem here is that we're trying to run repl_scanner.l across the entire input string even when it's not a replication command. Since repl_scanner.l does not understand all of the token types known to the core lexer, this is doomed to have failure modes.

We certainly don't want to make repl_scanner.l as big as scan.l, so instead rejigger stuff so that we only lex the first token of a non-replication command. That will usually look like an IDENT to repl_scanner.l, though a comment would end up getting reported as a '-' or '/' single-character token. If the token is a replication command keyword, we push it back and proceed normally with repl_gram.y parsing. Otherwise, we can drop out of exec_replication_command() without examining the rest of the string. (It's still theoretically possible for repl_scanner.l to fail on the first token; but that could only happen if it's an unterminated single- or double-quoted string, in which case you'd have gotten largely the same error from the core lexer too.)

In this way, repl_gram.y isn't involved at all in handling general SQL commands, so we can get rid of the SQLCmd node type. (In the back branches, we can't remove it because renumbering enum NodeTag would be an ABI break; so just leave it sit there unused.)

I failed to resist the temptation to clean up some other sloppy coding in repl_scanner.l while at it. The only externally-visible behavior change from that is it now accepts \r and \f as whitespace, same as the core lexer.

Per bug #17379 from Greg Rychlewski. Back-patch to all supported branches. Discussion: https://postgr.es/m/17379-6a5c6cfb3f1f5e77@postgresql.org
-
Tom Lane authored
Without this, we get odd behavior when the previous cycle of lexing exited in a non-default exclusive state. Every other copy of this code is aware that it has to do BEGIN(INITIAL), but repl_scanner.l did not get that memo. The real-world impact of this is probably limited, since most replication clients would abandon their connection after getting a syntax error. Still, it's a bug. This mistake is old, so back-patch to all supported branches. Discussion: https://postgr.es/m/1874781.1643035952@sss.pgh.pa.us
-
- 23 Jan, 2022 4 commits
-
-
Tom Lane authored
Commit 6df7a969 wrote appendPQExpBuffer where it should have written printfPQExpBuffer. This resulted in re-issuing the previous query along with the desired one, which very accidentally had no negative consequences except for some wasted cycles. Back-patch to v14 where that came in. Discussion: https://postgr.es/m/1714711.1642962663@sss.pgh.pa.us
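The two PQExpBuffer calls differ only in whether the buffer is reset first, which is exactly the distinction the bug hinged on. A small standalone illustration using libpq's pqexpbuffer API (build against libpq's internal headers and link with -lpq; the query text is made up):

```c
#include <stdio.h>
#include "pqexpbuffer.h"        /* from libpq's internal/ include directory */

int
main(void)
{
    PQExpBuffer buf = createPQExpBuffer();

    printfPQExpBuffer(buf, "SELECT %d", 1);     /* resets the buffer first */
    appendPQExpBuffer(buf, "; SELECT %d", 2);   /* appends to existing contents */

    /* Using append where a reset was intended re-issues the earlier text. */
    printf("%s\n", buf->data);                  /* prints: SELECT 1; SELECT 2 */

    destroyPQExpBuffer(buf);
    return 0;
}
```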
-
Tom Lane authored
In the normal configuration where GEQO_DEBUG isn't defined, recent clang versions have started to complain that geqo_main.c accumulates the edge_failures count but never does anything with it. As a minimal back-patchable fix, insert a void cast to silence this warning. (I'd speculated about ripping out the GEQO_DEBUG logic altogether, but I don't think we'd wish to back-patch that.) Per recently-established project policy, this is a candidate for back-patching into out-of-support branches: it suppresses an annoying compiler warning but changes no behavior. Hence, back-patch all the way to 9.2. Discussion: https://postgr.es/m/CA+hUKGLTSZQwES8VNPmWO9AO0wSeLt36OCPDAZTccT1h7Q7kTQ@mail.gmail.com
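The whole fix is essentially a one-line cast. A hedged fragment showing the shape of it (the debug branch is paraphrased; only the names mentioned above are taken from the description):

```c
    /* edge_failures is only consumed when GEQO_DEBUG is defined */
#if defined(GEQO_DEBUG)
    if (edge_failures > 0)
        fprintf(stderr, "geqo: %d edge failures\n", edge_failures);
#else
    (void) edge_failures;       /* silence "set but not used" warnings */
#endif
```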
-
Tomas Vondra authored
In sort_inner_and_outer we iterate a list of PathKey elements, but the variable is declared as (List *). This mistake is benign, because we only pass the pointer to lcons() and never dereference it. This has existed since ~2004, but it's confusing. So fix and backpatch to all supported branches. Backpatch-through: 10 Discussion: https://postgr.es/m/bf3a6ea1-a7d8-7211-0669-189d5c169374%40enterprisedb.com
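For reference, the idiomatic shape of such a loop declares the loop variable as the element type, not as List *. A hedged fragment (names loosely modelled on sort_inner_and_outer(); the surrounding planner logic is omitted):

```c
    ListCell   *lc;
    List       *outerkeys = NIL;

    foreach(lc, all_pathkeys)
    {
        /* each list element is a PathKey node, so declare it as such */
        PathKey    *front_pathkey = (PathKey *) lfirst(lc);

        /*
         * lcons() takes a void *, which is why a variable mis-declared as
         * List * could be passed here without any compiler complaint.
         */
        outerkeys = lcons(front_pathkey, NIL);

        /* ... build and cost merge-join paths using outerkeys ... */
    }
```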
-
Tomas Vondra authored
The syscache lookup may return NULL even for valid OID, for example due to a concurrent DROP STATISTICS, so a HeapTupleIsValid is necessary. Without it, it may fail with a segfault. Reported by Alexander Lakhin, patch by me. Backpatch to 13, where ALTER STATISTICS ... SET STATISTICS was introduced. Backpatch-through: 13 Discussion: https://postgr.es/m/17372-bf3b6e947e35ae77%40postgresql.org
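The standard backend idiom is to test the returned tuple with HeapTupleIsValid() before touching it. A sketch of the guarded lookup (stxoid and the error wording are illustrative, not necessarily the committed code):

```c
    HeapTuple   tup;

    tup = SearchSysCache1(STATEXTOID, ObjectIdGetDatum(stxoid));
    if (!HeapTupleIsValid(tup))     /* e.g. concurrent DROP STATISTICS */
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_OBJECT),
                 errmsg("statistics object with OID %u does not exist", stxoid)));

    /* ... work with the tuple ... */
    ReleaseSysCache(tup);
```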
-
- 22 Jan, 2022 1 commit
-
-
Tom Lane authored
Previously, unless we had to add a NOT NULL constraint to the column, this command resulted in updating only the index's relcache entry. That's problematic when replication behavior is being driven off the existence of a primary key: other sessions (and ours too for that matter) failed to recalculate their opinion of whether the table can be replicated. Add a relcache invalidation to fix it. This has been broken since pg_class.relhaspkey was removed in v11. Before that, updating the table's relhaspkey value sufficed to cause a cache flush. Hence, backpatch to v11. Report and patch by Hou Zhijie Discussion: https://postgr.es/m/OS0PR01MB5716EBE01F112C62F8F9B786947B9@OS0PR01MB5716.jpnprd01.prod.outlook.com
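The added invalidation is essentially one call. A fragment under the assumption that rel is the table's open Relation (backend code, utils/inval.h):

```c
    /*
     * Force all sessions, including our own, to rebuild the table's
     * relcache entry so they re-evaluate whether it has a primary key.
     */
    CacheInvalidateRelcache(rel);
```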
-
- 21 Jan, 2022 4 commits
-
-
Tom Lane authored
In libpq and ecpglib, multiple threads can concurrently enter the initialization logic for message localization. Since we set the its-done flag before actually doing the work, it'd be possible for some threads to reach gettext() before anyone has called bindtextdomain(). Barring bugs in libintl itself, this would not result in anything worse than failure to localize some early messages. Nonetheless, it's a bug, and an easy one to fix. Noted while investigating bug #17299 from Clemens Zeidler (much thanks to Liam Bowen for followup investigation on that). It currently appears that that actually *is* a bug in libintl itself, but that doesn't let us off the hook for this bit. Back-patch to all supported versions. Discussion: https://postgr.es/m/17299-7270741958c0b1ab@postgresql.org Discussion: https://postgr.es/m/CAE7q7Eit4Eq2=bxce=Fm8HAStECjaXUE=WBQc-sDDcgJQ7s7eg@mail.gmail.com
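One way to express the required ordering in portable C is POSIX pthread_once(), which makes every thread wait until the init function has finished before any of them proceeds. This is only a sketch of the idea, not the locking mechanism libpq itself uses, and the text domain name is made up:

```c
#include <libintl.h>
#include <pthread.h>

static pthread_once_t message_init_once = PTHREAD_ONCE_INIT;

static void
bind_message_domain(void)
{
    /* Do the work first; the "done" state only becomes visible afterwards. */
    bindtextdomain("libpq-example", "/usr/share/locale");
}

const char *
libpq_gettext_safe(const char *msgid)
{
    /* All threads block here until the one-time init has completed. */
    (void) pthread_once(&message_init_once, bind_message_domain);
    return dgettext("libpq-example", msgid);
}
```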
-
Andres Freund authored
While individual logical rewrite files were synced to disk, the directory was not. On some filesystems that could lead to losing directory entries after a crash. Reported-By: Tom Lane <tgl@sss.pgh.pa.us> Author: Nathan Bossart <bossartn@amazon.com> Discussion: https://postgr.es/m/867F2E29-2782-4869-970E-B984C6D35A8F@amazon.com Backpatch: 10-
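The underlying POSIX requirement is that a created or renamed file only becomes durable once the containing directory itself is fsync'd. A standalone sketch of that step (the backend uses its own fsync_fname() helper; the path is just the directory the message refers to):

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* fsync a directory so its entries (new/renamed files) survive a crash */
static int
fsync_dir(const char *path)
{
    int         fd = open(path, O_RDONLY);

    if (fd < 0)
        return -1;
    if (fsync(fd) != 0)
    {
        int         save_errno = errno;

        close(fd);
        errno = save_errno;
        return -1;
    }
    return close(fd);
}

int
main(void)
{
    if (fsync_dir("pg_logical/mappings") != 0)
        perror("fsync_dir");
    return 0;
}
```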
-
Etsuro Fujita authored
Commit 27e1f145 resets the per-connection states of connections used to scan foreign tables asynchronously during abort cleanup at main transaction end, but it failed to do so during subabort cleanup at subtransaction end, leading to a segmentation fault when re-executing an asynchronous-foreign-table-scan query in a transaction that was cancelled in a subtransaction of it. Fix by modifying pgfdw_abort_cleanup() to reset the per-connection state of a given connection also when called for subabort cleanup. Also, modify that function to do the reset in both the abort-cleanup and subabort-cleanup cases if necessary, to save cycles, and improve a comment on it a little bit. Back-patch to v14 where the aforesaid commit came in. Reviewed by Alexander Pyhalov Discussion: https://postgr.es/m/CAPmGK14cCV-JA7kNsyt2EUTKvZ4xkr2LNRthi1U1C3cqfGppAw@mail.gmail.com
-
Michael Paquier authored
The logic in charge of writing commit timestamps (enabled with track_commit_timestamp) for subtransactions had an off-by-one bug, where it would be possible that commit timestamps go missing for the last subtransaction committed. While on it, simplify a bit the iteration logic in the loop writing the commit timestamps, as per suggestions from Kyotaro Horiguchi and Tom Lane, so that some variable initializations are not part of the loop itself. Issue introduced in 73c986ad. Analyzed-by: Alex Kingsborough Author: Alex Kingsborough, Kyotaro Horiguchi Discussion: https://postgr.es/m/73A66172-4050-4F2A-B7F1-13508EDA2144@amazon.com Backpatch-through: 10
-
- 20 Jan, 2022 4 commits
-
-
Tom Lane authored
Commits 6c4a8903b et al. had a couple of deficiencies:

* The logic I added to Cluster::start to see if a PID file is present could be fooled by a stale PID file left over from a previous postmaster. To fix, if we're not sure whether we expect to find a running postmaster or not, validate the PID using "kill 0".

* 017_shm.pl has a loop in which it just issues repeated Cluster::start calls; this will fail if some invocation fails but leaves self->_pid set. Per buildfarm results, the above fix is not enough to make this safe: we might have "validated" a PID for a postmaster that exits immediately after we look. Hence, match each failed start call with a stop call that will get us back to the self->_pid == undef state. Add a fail_ok option to Cluster::stop to make this work.

Discussion: https://postgr.es/m/CA+hUKGKV6fOHvfiPt8=dOKzvswjAyLoFoJF1iQXMNpi7+hD1JQ@mail.gmail.com
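The "kill 0" trick works the same way from C as from Perl: signal number 0 delivers nothing but still reports whether the PID exists. A standalone illustration (EPERM means the process exists but belongs to another user):

```c
#include <errno.h>
#include <signal.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/types.h>

/* Check for a live process without sending it any signal. */
static bool
pid_is_alive(pid_t pid)
{
    if (kill(pid, 0) == 0)
        return true;
    return errno == EPERM;      /* exists, but owned by someone else */
}

int
main(void)
{
    pid_t       stale_pid = 12345;  /* e.g. a PID read from a leftover pidfile */

    printf("pid %d alive: %s\n", (int) stale_pid,
           pid_is_alive(stale_pid) ? "yes" : "no");
    return 0;
}
```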
-
Andrew Dunstan authored
This was omitted from c3879a7b4c which modified the other msvc .bat files. Per request from Juan José Santamaría Flecha Discussion: https://postgr.es/m/CAC+AXB0_fxYGbQoaYjCA8um7TTbOVP4L9aXnVmHwK8WzaT4gdA@mail.gmail.com Backpatch to all live branches.
-
Thomas Munro authored
Since the test requires reproducible behavior from VACUUM, and since DISABLE_PAGE_SKIPPING doesn't actually disable all forms of page skipping, let's use a temporary table to avoid contention. Back-patch to 12, like commit 3414099c. Discussion: https://postgr.es/m/20220120052404.sonrhq3f3qgplpzj%40alap3.anarazel.de
-
Michael Paquier authored
This information was nowhere to be found. This adds one note to the COMMENT reference page, and one note in the section dedicated to explicit locking, both noting that a SHARE UPDATE EXCLUSIVE lock is taken on the object being commented on. Author: Nikolai Berkoff Reviewed-by: Laurenz Albe Discussion: https://postgr.es/m/_0HDHIGcCdCsUyXn22QwI2FEuNR6Fs71rtgGX6hfyBlUh5rrnE2qMmvIFu9EY4Pijr2gUmJEAXCjuNU2Oxku9TryLp9CdHllpsCfN3gD0-Y=@pm.me Backpatch-through: 10
-
- 19 Jan, 2022 2 commits
-
-
Tom Lane authored
"pg_ctl start" might start a new postmaster and then return failure anyway, for example if PGCTLTIMEOUT is exceeded. If there is a postmaster there, it's still incumbent on us to shut it down at script end, so check for the PID file even though we are about to fail. This has been broken all along, so back-patch to all supported branches. Discussion: https://postgr.es/m/647439.1642622744@sss.pgh.pa.us
-
Michael Paquier authored
This function returns NULL if the replication origin given in input argument does not exist, contrary to what the docs described previously. Author: Ian Barwick Discussion: https://postgr.es/m/CAB8KJ=htJjBL=103URqjOxV2mqb4rjphDpMeKdyKq_QXt6h05w@mail.gmail.com Backpatch-through: 10
-
- 18 Jan, 2022 2 commits
-
-
Thomas Munro authored
Where we test vacuum_truncate's effects, sometimes this is failing to truncate as expected on the build farm. That could be explained by page skipping, so disable it explicitly, with the theory that commit fe246d1c didn't go far enough. Back-patch to 12, where the vacuum_truncate tests were added. Discussion: https://postgr.es/m/CA%2BhUKGLT2UL5_JhmBzUgkdyKfc%3D5J-gJSQJLysMs4rqLUKLAzw%40mail.gmail.com
-
Tom Lane authored
The original coding (from c33869cc) failed with "more than one row returned by a subquery used as an expression" if there were unrelated triggers of the same tgname on parent partitioned tables. (That's possible because statement-level triggers don't get inherited.) Fix by applying LIMIT 1 after sorting the candidates by inheritance level. Also, wrap the subquery in a CASE so that we don't have to execute it at all when the trigger is visibly non-inherited. Aside from saving some cycles, this avoids the need for a confusing and undocumented NULLIF(). While here, tweak the format of the emitted query to look a bit nicer for "psql -E", and add some explanation of this subquery, because it badly needs it. Report and patch by Justin Pryzby (with some editing by me). Back-patch to v13 where the faulty code came in. Discussion: https://postgr.es/m/20211217154356.GJ17618@telsasoft.com
-
- 17 Jan, 2022 2 commits
-
-
Tom Lane authored
It seems highly unlikely that gettext() can be relied on to be async-signal-safe. psql used to understand that, but someone got it wrong long ago in the src/bin/scripts/ version of handle_sigint, and then the bad idea was perpetuated when those two versions were unified into src/fe_utils/cancel.c. I'm unsure why there have not been field complaints about this ... maybe gettext() is signal-safe once it's translated at least one message? But we have no business assuming any such thing. In cancel.c (v13 and up), I preserved our ability to localize "Cancel request sent" messages by invoking gettext() before the signal handler is set up. In earlier branches I just made src/bin/scripts/ not localize those messages, as psql did then. (Just for extra unsafety, the src/bin/scripts/ version was invoking fprintf() from a signal handler. Sigh.) Noted while fixing signal-safety issues in PQcancel() itself. Back-patch to all supported branches. Discussion: https://postgr.es/m/2937814.1641960929@sss.pgh.pa.us
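The shape of the v13+ fix described above is: resolve the translation while still outside any signal handler, then have the handler emit the pre-translated string with write() only. A minimal standalone sketch of that pattern (using signal() for brevity; the real cancel.c differs in detail):

```c
#include <libintl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

/* Resolved once, up front; the handler itself must never call gettext(). */
static const char *cancel_sent_msg = "Cancel request sent\n";

static void
handle_sigint(int signum)
{
    (void) signum;

    /* write() is async-signal-safe; gettext()/fprintf() are not */
    (void) write(STDERR_FILENO, cancel_sent_msg, strlen(cancel_sent_msg));
}

static void
setup_cancel_handler(void)
{
    /* Translate while we are still in normal (non-handler) context. */
    cancel_sent_msg = gettext("Cancel request sent\n");
    signal(SIGINT, handle_sigint);
}

int
main(void)
{
    setup_cancel_handler();
    pause();                    /* press Ctrl-C to exercise the handler */
    return 0;
}
```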
-
Tom Lane authored
PQcancel() is supposed to be safe to call from a signal handler, and indeed psql uses it that way. All of the library functions it uses are specified to be async-signal-safe by POSIX ... except for strerror. Neither plain strerror nor strerror_r are considered safe. When this code was written, back in the dark ages, we probably figured "oh, strerror will just index into a constant array of strings" ... but in any locale except C, that's unlikely to be true. Probably the reason we've not heard complaints is that (a) this error-handling code is unlikely to be reached in normal use, and (b) in many scenarios, localized error strings would already have been loaded, after which maybe it's safe to call strerror here. Still, this is clearly unacceptable. The best we can do without relying on strerror is to print the decimal value of errno, so make it do that instead. (This is probably not much loss of user-friendliness, given that it is hard to get a failure here.) Back-patch to all supported branches. Discussion: https://postgr.es/m/2937814.1641960929@sss.pgh.pa.us
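Printing the decimal errno without strerror() needs nothing beyond async-signal-safe building blocks. A standalone sketch of the idea, with a hand-rolled integer-to-decimal conversion (the actual PQcancel() code is structured differently):

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

/*
 * Append the decimal value of an errno to a fixed buffer without calling
 * strerror(), snprintf(), or anything else that may take locks or consult
 * locale data.
 */
static void
append_errno_decimal(char *buf, size_t bufsize, int errnum)
{
    char        digits[12];
    int         i = 0;
    size_t      len = strlen(buf);

    if (errnum == 0)
        digits[i++] = '0';
    while (errnum > 0 && i < (int) sizeof(digits))
    {
        digits[i++] = (char) ('0' + errnum % 10);
        errnum /= 10;
    }
    /* digits[] holds the number reversed; copy it back in order */
    while (i > 0 && len + 1 < bufsize)
        buf[len++] = digits[--i];
    buf[len] = '\0';
}

int
main(void)
{
    char        msg[64] = "could not send cancel packet: errno=";

    errno = ECONNREFUSED;
    append_errno_decimal(msg, sizeof(msg), errno);
    puts(msg);                  /* fine here; we are not in a signal handler */
    return 0;
}
```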
-
- 16 Jan, 2022 2 commits
-
-
Tom Lane authored
The need for this was foreseen long ago, but when record_eq actually became hashable (in commit 01e658fa), we missed updating this spot. Per bug #17363 from Elvis Pranskevichus. Back-patch to v14 where the faulty commit came in. Discussion: https://postgr.es/m/17363-f6d42fd0d726be02@postgresql.org
-
Tom Lane authored
Since enum labels have to be single-quoted, this part of the tab completion machinery got side-swiped by commit cd69ec66. A side-effect of that commit is that (at least with some versions of Readline) the text string passed for completion will omit the leading quote mark of the enum label literal. Libedit still acts the same as before, though, so adapt COMPLETE_WITH_ENUM_VALUE so that it can cope with either convention. Also, when we fail to find any valid completion, set rl_completion_suppress_quote = 1. Otherwise readline will go ahead and append a closing quote, which is unwanted. Per report from Peter Eisentraut. Back-patch to v13 where cd69ec66 came in. Discussion: https://postgr.es/m/8ca82d89-ec3d-8b28-8291-500efaf23b25@enterprisedb.com
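The Readline side of that fix is a single global-variable assignment. A hedged fragment (GNU Readline declares rl_completion_suppress_quote in readline/readline.h; libedit may not, so real code guards this with a configure check):

```c
    /* no valid completion found: stop Readline from appending a closing quote */
    rl_completion_suppress_quote = 1;
```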
-
- 15 Jan, 2022 2 commits
-
-
Tomas Vondra authored
Commit 859b3003de disabled building of extended stats for inheritance trees, to prevent updating the same catalog row twice. While that resolved the issue, it also means there are no extended stats for declaratively partitioned tables, because there are no data in the non-leaf relations. That also means declaratively partitioned tables were not affected by the issue 859b3003de addressed, which means this is a regression affecting queries that calculate estimates for the inheritance tree as a whole (which includes e.g. GROUP BY queries). But because partitioned tables are empty, we can invert the condition and build statistics only for the case with inheritance, without losing anything. And we can consider them when calculating estimates. It may be necessary to run ANALYZE on partitioned tables, to collect proper statistics. For declarative partitioning there should be no prior statistics, and it might take time before autoanalyze is triggered. For tables partitioned by inheritance the statistics may include data from child relations (if built before 859b3003de), contradicting the current code. Report and patch by Justin Pryzby, minor fixes and cleanup by me. Backpatch all the way back to PostgreSQL 10, where extended statistics were introduced (same as 859b3003de). Author: Justin Pryzby Reported-by: Justin Pryzby Backpatch-through: 10 Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
-
Tomas Vondra authored
Since commit 859b3003de we only build extended statistics for individual relations, ignoring the child relations. This resolved the issue with updating the catalog tuple twice, but we still tried to use the statistics when calculating estimates for the whole inheritance tree. When the relations contain very distinct data, it may produce bogus estimates. This is roughly the same issue 427c6b5b addressed ~15 years ago, and we fix it the same way - by ignoring extended statistics when calculating estimates for the inheritance tree as a whole. We still consider extended statistics when calculating estimates for individual child relations, of course. This may result in plan changes due to different estimates, but if the old statistics were not describing the inheritance tree particularly well it's quite likely the new plans are actually better. Report and patch by Justin Pryzby, minor fixes and cleanup by me. Backpatch all the way back to PostgreSQL 10, where extended statistics were introduced (same as 859b3003de). Author: Justin Pryzby Reported-by: Justin Pryzby Backpatch-through: 10 Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
-