- 11 Apr, 2020 5 commits
-
-
Tom Lane authored
ExecReScanHashJoin will destroy the join's hash table if it expects that the inner relation will produce different rows on rescan. Up to now it's not bothered to clear the additional pointer to that hash table that exists in the child HashState node. However, it's possible for the query to terminate without building a fresh hash table (this happens if the outer relation is found to be empty during the final rescan). So we can end up with a dangling pointer to a deleted hash table. That was harmless originally, but since 9.0 EXPLAIN ANALYZE has used that pointer to print hash table statistics.

In debug builds this reproducibly results in garbage statistics. In non-debug builds there are frequently no ill effects, but in principle one could get wrong EXPLAIN ANALYZE output, or perhaps even a crash if free() has released the hashtable memory back to the OS.

To fix, just make sure we clear the additional pointer when destroying the hash table. In problematic cases, EXPLAIN ANALYZE will then print no hashtable statistics (reverting to its pre-9.0 behavior). This isn't ideal, but since the problem manifests only in unusual corner cases, it's hard to justify taking any risks to do better in the back branches. A follow-on patch will improve matters in HEAD.

Konstantin Knizhnik and Tom Lane, per diagnosis by Thomas Munro of a trouble report from Alvaro Herrera.

Discussion: https://postgr.es/m/20200323165059.GA24950@alvherre.pgsql
-
Peter Eisentraut authored
Introduced by 83fd4532. To fix, the tuple descriptors need to be copied into the current memory context. Discussion: https://www.postgresql.org/message-id/04d78603-edae-9243-9dde-fe3037176a7d@2ndquadrant.com
-
Peter Eisentraut authored
Introduced by 83fd4532
-
Andrew Gierth authored
Commit ac862376 "fixed" a typo in an example of what would happen in the event of a typo. Restore the original typo and add a comment about its intentionality. Backpatch to 12 where the error was introduced. Per report from irc user Nicolás Alvarez.
-
Peter Geoghegan authored
Add a DEBUG1 message indicating that verification of the index structure is underway. Also reduce the severity level of the existing "tree level" debug message to DEBUG1. It should never have been made DEBUG2. Any B-Tree index with more than a couple of levels will generally also have so many pages that the per-page DEBUG2 messages will become completely unmanageable. In passing, add a new "Tip" to the docs that advises users that run into corruption that the debug messages might provide useful additional context.
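As a usage note (not part of the commit), the new message can be observed by raising client_min_messages before running amcheck's verification functions; the index name below is hypothetical:

    CREATE EXTENSION IF NOT EXISTS amcheck;
    SET client_min_messages = debug1;

    -- Emits DEBUG1 progress messages (e.g. that structure verification is
    -- underway) while checking the B-Tree index.
    SELECT bt_index_check('my_index'::regclass);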
-
- 10 Apr, 2020 4 commits
-
-
Tom Lane authored
The docs explained that a SHARE ROW EXCLUSIVE lock is needed on the referenced table, but failed to say the same about the table being altered. Since the page says that ACCESS EXCLUSIVE lock is taken unless otherwise stated, this left readers with the wrong conclusion. Discussion: https://postgr.es/m/834603375.3470346.1586482852542@mail.yahoo.com
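To illustrate the documented behavior (the tables here are invented for the example), the lock level taken on both tables can be observed from pg_locks:

    BEGIN;
    ALTER TABLE orders
        ADD CONSTRAINT orders_customer_fk
        FOREIGN KEY (customer_id) REFERENCES customers (id);

    -- Both 'orders' and the referenced 'customers' show ShareRowExclusiveLock.
    SELECT relation::regclass AS rel, mode
    FROM pg_locks
    WHERE pid = pg_backend_pid() AND relation IS NOT NULL;

    ROLLBACK;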
-
Tom Lane authored
Ashutosh Bapat

Discussion: https://postgr.es/m/CAG-ACPWPB8Lc_aFj25eiPFqi31YB5vmaZnb39mbHSf5Yej=miA@mail.gmail.com
-
Tom Lane authored
CREATE GROUP is an exact alias for CREATE ROLE, and CREATE USER is almost an exact alias, as can easily be confirmed by checking the code. So the man page syntax descriptions ought to match up. The last few additions of role options seem to have forgotten to update create_group.sgml, though. Fix that, and add a naggy reminder to create_role.sgml in hopes of not forgetting again. Discussion: https://postgr.es/m/158647836143.655.9853963229391401576@wrigleys.postgresql.org
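Since CREATE GROUP is an exact alias for CREATE ROLE, the two statements below are equivalent (role names and options chosen arbitrarily for illustration):

    -- Both forms accept the full set of role options.
    CREATE GROUP analysts WITH NOLOGIN CREATEDB;
    CREATE ROLE analysts2 WITH NOLOGIN CREATEDB;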
-
Michael Paquier authored
This fixes some comments and documentation new as of Postgres 13.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20200408165653.GF2228@telsasoft.com
-
- 09 Apr, 2020 10 commits
-
-
Tom Lane authored
Depending on specific values of restart_lsn or pg_current_wal_lsn() is obviously unsafe. The previous coding tried to dodge this issue by rounding off, but that's not good enough, as shown by multiple buildfarm members. Nuke all the uses of these values except for null-ness checks, pending some credible argument why we should think something else could be usefully stable.

Kyotaro Horiguchi, further modified by me

Discussion: https://postgr.es/m/E1jM1Sa-0003mS-99@gemulon.postgresql.org
-
Tom Lane authored
Suppress a probably-meaningless uninitialized-variable warning (induced by my previous patch, I'm sorry to say).

Improve mark_hl_fragments()'s test for overlapping cover strings: it failed to consider the possibility that the current string is strictly within another one. That's unlikely given the preceding splitting into MaxWords fragments, but I don't think it's impossible.

Discussion: https://postgr.es/m/16345-2e0cf5cddbdcd3b4@postgresql.org
-
Tom Lane authored
Now that I've had my nose in that code, I thought the docs about it left something to be desired.
-
Tom Lane authored
This code could produce very poor results when asked to highlight a string based on a query using phrase-match operators. The root cause is that hlCover(), which is supposed to find a minimal substring that matches the query, was written assuming that word position is not significant. I'm only 95% convinced that its algorithm was correct even for plain AND/OR queries; but it definitely fails completely for phrase matches, causing it to possibly not identify a cover string at all.

Hence, rewrite hlCover() with a less-tense algorithm that just tries all the possible substrings, earlier and shorter ones first. (This is not as bad as it sounds performance-wise, because all of the string matching has been done already: the repeated tsquery match checks boil down to pointer comparisons.)

Unfortunately, since that approach produces more candidate cover strings than before, it also exposes that there were bugs in the heuristics in mark_hl_words() for selecting a best cover string. Fixes there include:

* Do not apply the ShortWord filter to words that appear in the query.
* Remove a misguided optimization for quickly rejecting a cover.
* Fix order-of-operation bug that could cause computation of a wrong figure of merit (poslen) when shortening a cover.
* Change the preference rule so that candidate headlines that do not include their whole cover string (after MaxWords trimming) are lowest priority, since they may not actually satisfy the user's query.

This results in some changes in existing regression test cases, but they all seem reasonable. Note in particular that the tests involving strings like "1 2 3" were previously being affected by the ShortWord filter, masking the normal matching behavior.

Per bug #16345 from Augustinas Jokubauskas; the new test cases are based on that example. Back-patch to 9.6 where phrase search was added to tsquery.

Discussion: https://postgr.es/m/16345-2e0cf5cddbdcd3b4@postgresql.org
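For context, a quick way to exercise the affected code path with a phrase-match query (the sample text, configuration, and options here are arbitrary, not taken from the commit):

    -- Highlight a document using a phrase-match query; before this fix,
    -- ts_headline() could fail to pick a cover string for such queries.
    SELECT ts_headline('english',
                       'The quick brown fox jumps over the lazy dog',
                       phraseto_tsquery('english', 'lazy dog'),
                       'MaxWords=6, MinWords=2');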
-
Tom Lane authored
This code was woefully unreadable and under-commented. Try to improve matters by adding comments, using some macros to make complicated if-tests more readable, using boolean type where appropriate, etc. There are a couple of tiny coding improvements too, but this commit includes (I hope) no behavioral change.

Nonetheless, back-patch as far as 9.6, because a followup bug-fixing commit depends on this.

Discussion: https://postgr.es/m/16345-2e0cf5cddbdcd3b4@postgresql.org
-
Peter Eisentraut authored
CREATE TABLE LIKE INCLUDING GENERATED would fail if a generated column referred to a column with a higher attribute number. This is because the column mapping mechanism created the mapping incrementally as columns are added. This was sufficient for previous uses of that mechanism (omitting dropped columns), and it also happened to work if generated columns only referred to columns with lower attribute numbers, but here it failed. The fix is to build the attribute mapping in a separate loop before processing the columns in detail.

Bug: #16342
Reported-by: Ethan Waldo <ewaldo@healthetechs.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
-
Fujii Masao authored
Author: Fujii Masao
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/78f76a3d-1a28-a97d-0394-5c96985dd1c0@oss.nttdata.com
-
Fujii Masao authored
If there is already a backup_manifest file in the database cluster, it belongs to the past backup that was used to start this server. It is not correct for the backup being taken now. So this commit changes pg_basebackup so that it always skips such a backup_manifest file. The backup_manifest file for the current backup will be injected separately if users want it.

Author: Fujii Masao
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/78f76a3d-1a28-a97d-0394-5c96985dd1c0@oss.nttdata.com
-
Amit Kapila authored
Currently, we don't account for buffer usage incurred by parallel workers for parallel create index. This commit allows each worker to record its buffer usage stats and the leader backend to accumulate those stats at the end of the operation. This will allow pg_stat_statements to display correct buffer usage stats for a (parallel) create index command.

Reported-by: Julien Rouhaud
Author: Sawada Masahiko
Reviewed-by: Dilip Kumar, Julien Rouhaud and Amit Kapila
Backpatch-through: 11, where this was introduced
Discussion: https://postgr.es/m/20200328151721.GB12854@nol
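As an illustration (not from the commit itself), the effect is visible through pg_stat_statements' buffer counters, which now include worker activity for a parallel index build; the table and index names below are hypothetical:

    -- Requires the pg_stat_statements extension to be installed and loaded.
    CREATE INDEX idx_big_t_x ON big_t (x);  -- may run with parallel workers

    -- Buffer usage reported for the CREATE INDEX now reflects the workers too.
    SELECT query, shared_blks_hit, shared_blks_read, temp_blks_written
    FROM pg_stat_statements
    WHERE query LIKE 'CREATE INDEX%';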
-
Fujii Masao authored
The added note explains that the counts of planning and execution of a statement are not always expected to match, because planning and execution statistics are updated at their respective end phases, and only for successful operations.

Author: Pascal Legrand, Julien Rouhaud, tweaked a bit by Fujii Masao
Discussion: https://postgr.es/m/1585857868967-0.post@n3.nabble.com
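For example (an illustrative query, not from the patch), comparing the planning and execution counters in pg_stat_statements can legitimately show plans <> calls:

    -- 'plans' is bumped when planning finishes, 'calls' when execution finishes,
    -- and only for successful operations, so the two need not be equal.
    SELECT query, plans, calls, total_plan_time, total_exec_time
    FROM pg_stat_statements
    ORDER BY calls DESC
    LIMIT 5;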
-
- 08 Apr, 2020 21 commits
-
-
Andrew Dunstan authored
1. Tell Msys2 not to mangle the tablespace map parameter.

2. If rmdir doesn't work, fall back to trying unlink on the entry in pg_tblspc.

Discussion: https://postgr.es/m/7330a7c7-ce5f-9769-39a1-bdb0b32bb4a6@2ndQuadrant.com
-
Peter Eisentraut authored
-
Tomas Vondra authored
The test never did ANALYZE on the test table, so the plans depended on various defaults (e.g. number of groups being 200). This worked most of the time, but with CLOBBER_CACHE_ALWAYS the autoanalyze often managed to build accurate stats, changing the plan.

Fixed by increasing the size of test tables a bit, making the Sort a bit more expensive than Incremental Sort. The tests were constructed to test transitions in the Incremental Sort algorithm, and this change does not break that.

Reviewed-by: James Coleman
Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com
-
Tom Lane authored
Repair an oversight in commit 8728b2c7: if we're postponing restore of event triggers to the end, we must also postpone restoring any comments on them, since of course we cannot create the comments first. (This opens yet another opportunity for an event trigger to bollix the restore, but there's no help for that.)

Per bug #16346 from Alexander Lakhin. Like the previous commit, back-patch to all supported branches.

Hamid Akhtar and Tom Lane

Discussion: https://postgr.es/m/16346-6210ad7a0ea81be1@postgresql.org
-
Thomas Munro authored
GetWalRcvWriteRecPtr() previously reported the latest *flushed* location. Adopt the conventional terminology used elsewhere in the tree by renaming it to GetWalRcvFlushRecPtr(), and likewise for some related variables that used the term "received".

Add a new definition of GetWalRcvWriteRecPtr(), which returns the latest *written* value. This will allow later patches to use the value for non-data-integrity purposes, without having to wait for the flush pointer to advance.

Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com
-
Peter Eisentraut authored
To control whether partition changes are replicated using their own identity and schema or an ancestor's, add a new parameter that can be set per publication named 'publish_via_partition_root'. This allows replicating a partitioned table into a different partition structure on the subscriber.

Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Rafia Sabih <rafia.pghackers@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Petr Jelinek <petr@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+HiwqH=Y85vRK3mOdjEkqFK+E=ST=eQiHdpj43L=_eJMOOznQ@mail.gmail.com
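A minimal usage sketch (the publication and table names are made up for illustration):

    -- Publish changes to a partitioned table using the root table's identity
    -- and schema, so the subscriber may use a different partition layout.
    CREATE PUBLICATION pub_measurement FOR TABLE measurement
        WITH (publish_via_partition_root = true);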
-
Alexander Korotkov authored
0f5ca02f introduces 3 new keywords. That appears to be too much for a relatively small feature. Given that we are now past feature freeze, it's already too late for discussion of the new syntax. So, revert.

Discussion: https://postgr.es/m/28209.1586294824%40sss.pgh.pa.us
-
David Rowley authored
2nd pass of modifying various places which obtain the next power of 2 of a number and make them use the new functions added in f0705bb6.

In passing, also modify num_combinations(). This can be implemented using simple bitshifting rather than looping.

Reviewed-by: John Naylor
Discussion: https://postgr.es/m/20200114173553.GE32763%40fetter.org
-
Michael Paquier authored
Attempting to use a COLLATE clause with a type that is not collatable in a partition bound expression could crash the server. This commit fixes the code by adding more checks similar to what is done when computing index or partition attributes, making sure that there is a collation iff the type is collatable.

Backpatch down to 12, as 7c079d74 introduced this problem.

Reported-by: Alexander Lakhin
Author: Dmitry Dolgov
Discussion: https://postgr.es/m/16325-809194cf742313ab@postgresql.org
Backpatch-through: 12
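A sketch of the kind of statement involved (table names invented for illustration); with the fix, the server reports an error about the non-collatable type instead of crashing:

    CREATE TABLE parted (id int) PARTITION BY RANGE (id);

    -- int4 is not collatable, so attaching a COLLATE clause to the bound
    -- expression is now rejected cleanly rather than crashing the backend.
    CREATE TABLE parted_1 PARTITION OF parted
        FOR VALUES FROM (1 COLLATE "POSIX") TO (100);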
-
David Rowley authored
First pass of modifying various places that obtain the next power of 2 of a number and make them use the new functions added in pg_bitutils.h instead.

This also removes the _hash_log2() function. There are no longer any callers in core. Other users can swap their _hash_log2(n) call to make use of pg_ceil_log2_32(n).

Author: David Fetter, with some minor adjustments by me
Reviewed-by: John Naylor, Jesse Zhang
Discussion: https://postgr.es/m/20200114173553.GE32763%40fetter.org
-
Jeff Davis authored
If the memory context's maxBlockSize is too big, a single block allocation can suddenly exceed work_mem. For Hash Aggregation, this can mean spilling to disk too early or reporting a confusing memory usage number for EXPLAIN ANALYZE.

Introduce CreateWorkExprContext(), which is like CreateExprContext(), except that it creates the AllocSet with a maxBlockSize that is reasonable in proportion to work_mem. Right now, CreateWorkExprContext() is only used by Hash Aggregation, but it may be generally useful in the future.

Discussion: https://postgr.es/m/412a3fbf306f84d8d78c4009e11791867e62b87c.camel@j-davis.com
-
David Rowley authored
There are many areas in the code where we need to determine the next highest power of 2 of a given number. We tend to always do that in an ad-hoc way each time, generally with some tight for loop which performs a bitshift left once per loop and goes until it finds a number above the given number.

Here we add two generic functions which make use of the existing pg_leftmost_one_pos* functions which, when available, will allow us to calculate the next power of 2 without any looping.

Here we don't add any code which uses these new functions. That will be done in follow-up commits.

Author: David Fetter, with some minor adjustments by me
Reviewed-by: John Naylor, Jesse Zhang
Discussion: https://postgr.es/m/20200114173553.GE32763%40fetter.org
-
Tom Lane authored
In commit 4dbcb3f8 I removed some code from parse_coerce.c, and also removed some apparently-no-longer-needed #includes. But removing datum.h broke some not-compiled-by-default code. Discussion: https://postgr.es/m/20200407205436.pyjhddw5bn5upvsu@development
-
Alvaro Herrera authored
Trying to ensure that a slot's restart_lsn or amount of reserved bytes exactly match some specific values seems unnecessary, and fragile as shown by failures in multiple buildfarm members. Discussion: https://postgr.es/m/20200407232602.GA21559@alvherre.pgsql
-
Thomas Munro authored
Provide PrefetchSharedBuffer(), a variant that takes SMgrRelation, for use in recovery. Rename LocalPrefetchBuffer() to PrefetchLocalBuffer() for consistency.

Add a return value to all of these. In recovery, tolerate and report missing files, so we can handle relations unlinked before crash recovery began. Also report cache hits and misses, so that callers can do faster buffer lookups and better I/O accounting.

Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com
-
Tom Lane authored
This case didn't work because columns merged by FULL JOIN USING are represented in the parse tree by COALESCE expressions, and the logic for recognizing a partitionable join failed to match upper-level join clauses to such expressions. To fix, synthesize suitable COALESCE expressions and add them to the nullable_partexprs lists. This is pretty ugly and brute-force, but it gets the job done. (I have ambitions of rethinking the way outer-join output Vars are represented, so maybe that will provide a cleaner solution someday. For now, do this.)

Amit Langote, reviewed by Justin Pryzby, Richard Guo, and myself

Discussion: https://postgr.es/m/CA+HiwqG2WVUGmLJqtR0tPFhniO=H=9qQ+Z3L_ZC+Y3-EVQHFGg@mail.gmail.com
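Roughly the kind of query that can now use a partitionwise plan (schema invented for illustration; whether the planner actually picks a per-partition join still depends on costs and on enable_partitionwise_join):

    SET enable_partitionwise_join = on;

    CREATE TABLE ta (id int, a text) PARTITION BY RANGE (id);
    CREATE TABLE ta_1 PARTITION OF ta FOR VALUES FROM (0) TO (100);
    CREATE TABLE tb (id int, b text) PARTITION BY RANGE (id);
    CREATE TABLE tb_1 PARTITION OF tb FOR VALUES FROM (0) TO (100);

    -- The USING column is a COALESCE of the two ids; the join can now be
    -- broken down per partition instead of joining the whole tables.
    EXPLAIN (COSTS OFF)
    SELECT id, a, b FROM ta FULL JOIN tb USING (id);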
-
Etsuro Fujita authored
Previously, the partitionwise join technique only allowed partitionwise join when input partitioned tables had exactly the same partition bounds. This commit extends the technique to some cases when the tables have different partition bounds, by using an advanced partition-matching algorithm introduced by this commit. For both the input partitioned tables, the algorithm checks whether every partition of one input partitioned table only matches one partition of the other input partitioned table at most, and vice versa. In such a case the join between the tables can be broken down into joins between the matching partitions, so the algorithm produces the pairs of the matching partitions, plus the partition bounds for the join relation, to allow partitionwise join for computing the join.

Currently, the algorithm works for list-partitioned and range-partitioned tables, but not hash-partitioned tables. See comments in partition_bounds_merge().

Ashutosh Bapat and Etsuro Fujita, most of regression tests by Rajkumar Raghuwanshi, some of the tests by Mark Dilger and Amul Sul, reviewed by Dmitry Dolgov and Amul Sul, with additional review at various points by Ashutosh Bapat, Mark Dilger, Robert Haas, Antonin Houska, Amit Langote, Justin Pryzby, and Tomas Vondra

Discussion: https://postgr.es/m/CAFjFpRdjQvaUEV5DJX3TW6pU5eq54NCkadtxHX2JiJG_GvbrCA@mail.gmail.com
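A sketch of the newly supported situation (list partitioning with non-identical bounds; all names are illustrative, and the planner's choice still depends on costs):

    SET enable_partitionwise_join = on;

    CREATE TABLE pa (k int, x text) PARTITION BY LIST (k);
    CREATE TABLE pa_12 PARTITION OF pa FOR VALUES IN (1, 2);
    CREATE TABLE pa_3  PARTITION OF pa FOR VALUES IN (3);

    CREATE TABLE pb (k int, y text) PARTITION BY LIST (k);
    CREATE TABLE pb_12 PARTITION OF pb FOR VALUES IN (1, 2);
    CREATE TABLE pb_34 PARTITION OF pb FOR VALUES IN (3, 4);  -- bounds differ from pa

    -- Each partition on one side matches at most one partition on the other,
    -- so the join can still be evaluated partitionwise.
    EXPLAIN (COSTS OFF)
    SELECT * FROM pa JOIN pb USING (k);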
-
Tom Lane authored
Our documentation describes four allowed input syntaxes for circles, but the regression tests tried only three ... with predictable consequences. Remarkably, this has been wrong since the circle datatype was added in 1997, but nobody noticed till now.

David Zhang, with some help from me

Discussion: https://postgr.es/m/332c47fa-d951-7574-b5cc-a8f7f7201202@highgo.ca
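For reference, the four documented input forms (shown here for a circle centered at (1,2) with radius 3):

    SELECT '<(1,2),3>'::circle AS angle_brackets,
           '((1,2),3)'::circle AS nested_parens,
           '(1,2),3'::circle   AS bare_parens,
           '1,2,3'::circle     AS plain_list;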
-
Andres Freund authored
The goal of separating hotly accessed per-backend data from PGPROC into PGXACT is to make accesses fast (GetSnapshotData() in particular). But delayChkpt is not actually accessed frequently; only when starting a checkpoint. As it is frequently modified (multiple times in the course of a single transaction), storing it in the same cacheline as hotly accessed data unnecessarily dirties a contended cacheline. Therefore move delayChkpt to PGPROC.

This is part of a larger series of patches intending to improve GetSnapshotData() scalability. It is committed and pushed separately, as it is independently beneficial (small but measurable win, limited by the other frequent modifications of PGXACT).

Author: Andres Freund
Reviewed-By: Robert Haas, Thomas Munro, David Rowley
Discussion: https://postgr.es/m/20200301083601.ews6hz5dduc3w2se@alap3.anarazel.de
-
Tomas Vondra authored
SLRU page hits were tracked only in SimpleLruReadPage, but that's not enough, because we may hit the page in SimpleLruReadPage_ReadOnly, in which case we don't call SimpleLruReadPage at all.

Reported-by: Kuntal Ghosh
Discussion: https://postgr.es/m/20200119143707.gyinppnigokesjok@development
-
Andres Freund authored
Before the fix every 2PC commit/abort leaked a file descriptor. As the files are opened using BasicOpenFile(), that quickly leads to the backend running out of file descriptors.

Once enough 2PC aborts/commits have caused enough FDs to leak, any IO in the backend will fail with "Too many open files", as BasicOpenFilePerm() will have triggered all open files known to fd.c to be closed.

The leak causing the problem at hand is a consequence of 0dc8ead4, but is only exacerbated by it. Previously most XLogPageReadCB callbacks used static variables to cache one open file, but after the commit the cache is private to each XLogReader instance. There never was infrastructure to close FDs at the time of XLogReaderFree, but the way XLogReader was used limited the leak to one FD.

This commit just closes the FD during XLogReaderFree() if one is still open in XLogReaderState.seg.ws_file. This may not be the way to solve this medium/long term, but at least it unbreaks 2PC.

Discussion: https://postgr.es/m/20200406025651.fpzdb5yyb7qyhqko@alap3.anarazel.de
-