- 16 Jul, 2019 7 commits
-
-
Tom Lane authored
This should lappend the OIDs, not lcons them; the existing code produced a list in reversed order. This is harmless for single-key FKs or FKs where all the key columns are of the same type, which probably explains how it went unnoticed. But if those conditions are not met, ATAddForeignKeyConstraint would make the wrong decision about whether an existing FK needs to be revalidated. I think it would almost always err in the safe direction by revalidating a constraint that didn't need it. You could imagine scenarios where the pfeqop check was fooled by swapping the types of two FK columns in one ALTER TABLE, but that case would probably be rejected by other tests, so it might be impossible to get to the worst-case scenario where an FK should be revalidated and isn't. (And even then, it's likely to be fine, unless there are weird inconsistencies in the equality behavior of the replacement types.) However, this is a performance bug at least.

Noted while poking around to see whether lcons calls could be converted to lappend.

This bug is old, dating to commit cb3a7c2b, so back-patch to all supported branches.
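For illustration, a minimal sketch of the ordering difference, assuming the backend List API from nodes/pg_list.h (oid1..oid3 are hypothetical values):

    List   *forward = NIL;
    List   *reversed = NIL;

    /* lappend_oid adds at the tail: yields (oid1, oid2, oid3) */
    forward = lappend_oid(forward, oid1);
    forward = lappend_oid(forward, oid2);
    forward = lappend_oid(forward, oid3);

    /* lcons_oid adds at the head: yields (oid3, oid2, oid1) */
    reversed = lcons_oid(oid1, reversed);
    reversed = lcons_oid(oid2, reversed);
    reversed = lcons_oid(oid3, reversed);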
-
Tom Lane authored
It seems worth getting rid of these functions because they require the caller to retain a ListCell pointer into a List that it's modifying, which is a dangerous practice with the new List implementation. (The only other List-modifying function that takes a ListCell pointer as input is list_delete_cell, which nowadays is preferentially used via the constrained API foreach_delete_current.)

There was only one remaining caller of these functions after commit 2f5b8eb5, and that was some fairly ugly GEQO code that can be much more clearly expressed using a list-index variable and list_insert_nth. Hence, rewrite that code, and remove the functions.

Discussion: https://postgr.es/m/26193.1563228600@sss.pgh.pa.us
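A minimal sketch of the constrained pattern mentioned above, assuming the backend headers and the foreach_delete_current macro from nodes/pg_list.h (item_is_valid is a hypothetical predicate):

    static List *
    drop_invalid_items(List *items)
    {
        ListCell   *lc;

        foreach(lc, items)
        {
            Node   *item = (Node *) lfirst(lc);

            /* Safe in-loop deletion: the macro fixes up the foreach
             * state so no cell is skipped. */
            if (!item_is_valid(item))
                items = foreach_delete_current(items, lc);
        }
        return items;
    }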
-
Tom Lane authored
heap.c and relcache.c contained nearly identical copies of logic to insert OIDs into an OID list while preserving the list's OID ordering (and rejecting duplicates, in one case but not the other).

The comments argue that this is faster than qsort for small numbers of OIDs, which is at best unproven, and seems even less likely to be true now that lappend_cell_oid has to move data around. In any case it's ugly and hard-to-follow code, and if we do have a lot of OIDs to consider, it's O(N^2).

Hence, replace with simply lappend'ing OIDs to a List, then list_sort the completed List, then remove adjacent duplicates if necessary. This is demonstrably O(N log N) and it's much simpler for the callers. It's possible that this would be somewhat inefficient if there were a very large number of duplicates, but that seems unlikely in the existing usage.

This adds list_deduplicate_oid and list_oid_cmp infrastructure to list.c. I didn't bother with equivalent functionality for integer or pointer Lists, but such could always be added later if we find a use for it.

Discussion: https://postgr.es/m/26193.1563228600@sss.pgh.pa.us
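In outline, the replacement pattern looks like this; a sketch assuming the list_sort, list_oid_cmp, and list_deduplicate_oid infrastructure added here (source is a hypothetical OID list):

    static List *
    collect_sorted_oids(List *source)
    {
        List       *oids = NIL;
        ListCell   *lc;

        /* Gather in arrival order: O(N). */
        foreach(lc, source)
            oids = lappend_oid(oids, lfirst_oid(lc));

        list_sort(oids, list_oid_cmp);   /* O(N log N) */
        list_deduplicate_oid(oids);      /* drops adjacent duplicates */
        return oids;
    }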
-
Tom Lane authored
In the wake of commit 1cff1b95, the obvious way to sort a List is to apply qsort() directly to the array of ListCells. list_qsort was building an intermediate array of pointers-to-ListCells, which we no longer need, but getting rid of it forces an API change: the comparator functions need to do one less level of indirection.

Since we're having to touch the callers anyway, let's do two additional changes: sort the given list in-place rather than making a copy (as none of the existing callers have any use for the copying behavior), and rename list_qsort to list_sort. It was argued that the old name exposes more about the implementation than it should, which I find pretty questionable, but a better reason to rename it is to be sure we get the attention of any external callers about the need to fix their comparator functions.

While we're at it, change four existing callers of qsort() to use list_sort instead; previously, they all had local reinventions of list_qsort, ie build-an-array-from-a-List-and-qsort-it. (There are some other places where changing to list_sort perhaps would be worthwhile, but they're less obviously wins.)

Discussion: https://postgr.es/m/29361.1563220190@sss.pgh.pa.us
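A sketch of what a comparator looks like under the new API: it receives ListCell pointers directly, one less level of indirection than list_qsort expected (my_oid_cmp and oidlist are illustrative names):

    /* Order a list of OIDs ascending. */
    static int
    my_oid_cmp(const ListCell *a, const ListCell *b)
    {
        Oid     oa = lfirst_oid(a);
        Oid     ob = lfirst_oid(b);

        if (oa < ob)
            return -1;
        if (oa > ob)
            return 1;
        return 0;
    }

    /* Sorts in place; no copy is made. */
    list_sort(oidlist, my_oid_cmp);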
-
Michael Paquier authored
This is numbered take 7, and addresses a set of issues around:
- Fixes for typos and incorrect reference names.
- Removal of unneeded comments.
- Removal of unreferenced functions and structures.
- Fixes regarding variable name consistency.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/10bfd4ac-3e7c-40ab-2b2e-355ed15495e8@gmail.com
-
Tom Lane authored
These memory context switches are useless in the wake of commit 1cff1b95. Noted by Jesper Pedersen.

Discussion: https://postgr.es/m/f078ce63-9e04-0f3e-d200-d7ee66279abe@redhat.com
-
Bruce Momjian authored
Reported-by: Ian Barwick
Discussion: https://postgr.es/m/538950ec-b86a-1650-6078-beb7091c09c2@2ndquadrant.com
Backpatch-through: 9.4
-
- 15 Jul, 2019 6 commits
-
-
Thomas Munro authored
Author: Fabien Coelho
Reviewed-by: Ibrar Ahmed
Discussion: https://postgr.es/m/alpine.DEB.2.21.1904081737390.5867%40lancre
-
Thomas Munro authored
Author: Fabien Coelho
Reviewed-by: Ibrar Ahmed
Discussion: https://postgr.es/m/alpine.DEB.2.21.1904061810510.3678%40lancre
-
Peter Geoghegan authored
The logic just added by commit e3899ffd falls back on a 50:50 page split in the event of a new item that's just to the right of our provisional "many duplicates" split point. Fix a comment that incorrectly claimed that the new item had to be just to the left of our provisional split point.

Backpatch: 12-, just like commit e3899ffd.
-
Peter Geoghegan authored
Specific ever-decreasing insertion patterns could cause successive unbalanced nbtree page splits. Problem cases involve a large group of duplicates to the left, and ever-decreasing insertions to the right.

To fix, detect the situation by considering the newitem offset before performing a split using nbtsplitloc.c's "many duplicates" strategy. If the new item was inserted just to the right of our provisional "many duplicates" split point, infer ever-decreasing insertions and fall back on a 50:50 (space delta optimal) split. This seems to barely affect cases that already had acceptable space utilization.

An alternative fix also seems possible. Instead of changing nbtsplitloc.c split choice logic, we could instead teach _bt_truncate() to generate a new value for new high keys by interpolating from the lastleft and firstright key values. That would certainly be a more elegant fix, but it isn't suitable for backpatching.

Discussion: https://postgr.es/m/CAH2-WznCNvhZpxa__GqAa1fgQ9uYdVc=_apArkW2nc-K3O7_NA@mail.gmail.com
Backpatch: 12-, where the nbtree page split enhancements were introduced.
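Schematically, the added check amounts to something like the following; an illustrative sketch only, the variable names are not the actual nbtsplitloc.c code:

    if (strategy == SPLIT_MANY_DUPLICATES &&
        newitemoff == manydups_split_offset + 1)
    {
        /* New item lands just right of the provisional split point:
         * infer ever-decreasing insertions and use the 50:50
         * (space delta optimal) strategy instead. */
        strategy = SPLIT_DEFAULT;
    }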
-
Tom Lane authored
Originally, Postgres Lists were a more or less exact reimplementation of Lisp lists, which consist of chains of separately-allocated cons cells, each having a value and a next-cell link. We'd hacked that once before (commit d0b4399d) to add a separate List header, but the data was still in cons cells. That makes some operations -- notably list_nth() -- O(N), and it's bulky because of the next-cell pointers and per-cell palloc overhead, and it's very cache-unfriendly if the cons cells end up scattered around rather than being adjacent.

In this rewrite, we still have List headers, but the data is in a resizable array of values, with no next-cell links. Now we need at most two palloc's per List, and often only one, since we can allocate some values in the same palloc call as the List header. (Of course, extending an existing List may require repalloc's to enlarge the array. But this involves just O(log N) allocations not O(N).)

Of course this is not without downsides. The key difficulty is that addition or deletion of a list entry may now cause other entries to move, which it did not before.

For example, that breaks foreach() and sister macros, which historically used a pointer to the current cons-cell as loop state. We can repair those macros transparently by making their actual loop state be an integer list index; the exposed "ListCell *" pointer is no longer state carried across loop iterations, but is just a derived value. (In practice, modern compilers can optimize things back to having just one loop state value, at least for simple cases with inline loop bodies.) In principle, this is a semantics change for cases where the loop body inserts or deletes list entries ahead of the current loop index; but I found no such cases in the Postgres code.

The change is not at all transparent for code that doesn't use foreach() but chases lists "by hand" using lnext(). The largest share of such code in the backend is in loops that were maintaining "prev" and "next" variables in addition to the current-cell pointer, in order to delete list cells efficiently using list_delete_cell(). However, we no longer need a previous-cell pointer to delete a list cell efficiently. Keeping a next-cell pointer doesn't work, as explained above, but we can improve matters by changing such code to use a regular foreach() loop and then using the new macro foreach_delete_current() to delete the current cell. (This macro knows how to update the associated foreach loop's state so that no cells will be missed in the traversal.)

There remains a nontrivial risk of code assuming that a ListCell * pointer will remain good over an operation that could now move the list contents. To help catch such errors, list.c can be compiled with a new define symbol DEBUG_LIST_MEMORY_USAGE that forcibly moves list contents whenever that could possibly happen. This makes list operations significantly more expensive so it's not normally turned on (though it is on by default if USE_VALGRIND is on).

There are two notable API differences from the previous code:

* lnext() now requires the List's header pointer in addition to the current cell's address.

* list_delete_cell() no longer requires a previous-cell argument.

These changes are somewhat unfortunate, but on the other hand code using either function needs inspection to see if it is assuming anything it shouldn't, so it's not all bad.

Programmers should be aware of these significant performance changes:

* list_nth() and related functions are now O(1); so there's no major access-speed difference between a list and an array.

* Inserting or deleting a list element now takes time proportional to the distance to the end of the list, due to moving the array elements. (However, it typically *doesn't* require palloc or pfree, so except in long lists it's probably still faster than before.) Notably, lcons() used to be about the same cost as lappend(), but that's no longer true if the list is long. Code that uses lcons() and list_delete_first() to maintain a stack might usefully be rewritten to push and pop at the end of the list rather than the beginning (see the sketch after this message).

* There are now list_insert_nth...() and list_delete_nth...() functions that add or remove a list cell identified by index. These have the data-movement penalty explained above, but there's no search penalty.

* list_concat() and variants now copy the second list's data into storage belonging to the first list, so there is no longer any sharing of cells between the input lists. The second argument is now declared "const List *" to reflect that it isn't changed.

This patch just does the minimum needed to get the new implementation in place and fix bugs exposed by the regression tests. As suggested by the foregoing, there's a fair amount of followup work remaining to do.

Also, the ENABLE_LIST_COMPAT macros are finally removed in this commit. Code using those should have been gone a dozen years ago.

Patch by me; thanks to David Rowley, Jesper Pedersen, and others for review.

Discussion: https://postgr.es/m/11587.1550975080@sss.pgh.pa.us
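To illustrate the stack advice above, a minimal sketch using existing list primitives (llast, list_truncate, list_length); stack and datum are hypothetical:

    /* Push and pop at the tail, where no array elements move. */
    stack = lappend(stack, datum);                          /* push */

    void   *top = llast(stack);                             /* peek */
    stack = list_truncate(stack, list_length(stack) - 1);   /* pop */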
-
Thomas Munro authored
In order to be able to work with FullTransactionId values during replay without increasing the size of the WAL, infer the epoch. In general we can't do that safely, but during replay we can because we know that nextFullXid can't advance concurrently.

Prevent frontend code from seeing this new function, due to the above restriction. Perhaps in future it will be possible to extract the value entirely from independent WAL records, and then this restriction can be lifted.

Author: Thomas Munro, based on earlier code from Andres Freund
Discussion: https://postgr.es/m/CA%2BhUKG%2BmLmuDjMi6o1dxkKvGRL56Y2Rz%2BiXAcrZV03G9ZuFQ8Q%40mail.gmail.com
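The inference can be sketched as follows, using the FullTransactionId helpers from access/transam.h (a sketch, not necessarily the exact committed code):

    /* Reconstruct the full xid for an xid seen during replay, given
     * that nextFullXid cannot advance concurrently. */
    TransactionId next_xid = XidFromFullTransactionId(nextFullXid);
    uint32        epoch = EpochFromFullTransactionId(nextFullXid);

    /* An xid numerically greater than next_xid must have been
     * allocated in the previous epoch. */
    if (xid > next_xid)
        epoch--;

    return FullTransactionIdFromEpochAndXid(epoch, xid);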
-
- 14 Jul, 2019 7 commits
-
-
Peter Eisentraut authored
This adds a built-in function to generate UUIDs.

PostgreSQL hasn't had a built-in function to generate a UUID yet, relying on external modules such as uuid-ossp and pgcrypto to provide one. Now that we have a strong random number generator built-in, we can easily provide a version 4 (random) UUID generation function.

This patch takes the existing function gen_random_uuid() from pgcrypto and makes it a built-in function. The pgcrypto implementation now internally redirects to the built-in one.

Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/6a65610c-46fc-2323-6b78-e8086340a325@2ndquadrant.com
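The core of a version 4 generator is small; a sketch of the idea, close to but not claimed to be the committed code (the error wording is illustrative):

    Datum
    gen_random_uuid(PG_FUNCTION_ARGS)
    {
        pg_uuid_t  *uuid = palloc(UUID_LEN);

        if (!pg_strong_random(uuid, UUID_LEN))
            ereport(ERROR,
                    (errcode(ERRCODE_INTERNAL_ERROR),
                     errmsg("could not generate random values")));

        /* Set the version to 4 (random) and the variant to RFC 4122. */
        uuid->data[6] = (uuid->data[6] & 0x0f) | 0x40;
        uuid->data[8] = (uuid->data[8] & 0x3f) | 0x80;

        PG_RETURN_UUID_P(uuid);
    }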
-
Alexander Korotkov authored
Opclass support functions can already handle this operator; only a catalog adjustment appears to be required.

Discussion: https://postgr.es/m/f71ba19d-d989-63b6-f04a-abf02ad9345d%40postgrespro.ru
Author: Nikita Glukhov
Reviewed-by: Tom Lane, Alexander Korotkov
-
Alexander Korotkov authored
Index-based calculation of this operator is exact. So, the signature of the gist_bbox_distance() function is changed so that the caller is responsible for setting the *recheck flag.

Discussion: https://postgr.es/m/f71ba19d-d989-63b6-f04a-abf02ad9345d%40postgrespro.ru
Author: Nikita Glukhov
Reviewed-by: Tom Lane, Alexander Korotkov
-
Alexander Korotkov authored
Some of the <-> operators between geometric types were missing commutators; this commit adds them. The motivation is upcoming kNN support for some of those operators.

Discussion: https://postgr.es/m/f71ba19d-d989-63b6-f04a-abf02ad9345d%40postgrespro.ru
Author: Nikita Glukhov
Reviewed-by: Tom Lane, Alexander Korotkov
-
Andrew Gierth authored
Performance of a SELECT FOR UPDATE may be quite distinct from the non-UPDATE version of the query, so treat all of the FOR UPDATE clause as being significant for distinguishing queries.

Andrew Gierth and Vik Fearing, reviewed by Sergei Kornilov, Thomas Munro, Tom Lane

Discussion: https://postgr.es/m/87h8e4hfwv.fsf@news-spur.riddles.org.uk
-
Thomas Munro authored
We choose a random value for delta, not balance. Back-patch to 9.6 where the mistake arrived.

Author: Fabien Coelho
Discussion: https://postgr.es/m/alpine.DEB.2.21.1904081752210.5867@lancre
-
- 13 Jul, 2019 7 commits
-
-
Noah Misch authored
Commit 578b2297 replaced it with a concurrent "nextval" test. That version does not detect PostgreSQL's incompatibility with xlc 13.1.3, so bring back an OID-based test that does. Back-patch to v12, where that commit first appeared.

Discussion: https://postgr.es/m/20190707170035.GA1485546@rfd.leadboat.com
-
Michael Paquier authored
In configure scripts, --with-ossp-uuid is obsolete and has been replaced by --with-uuid, and it needs to specify a path to its library builds when building with the MSVC scripts. --with-perl also needs to specify a path.

Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20190712.121529.194600624.horikyota.ntt@gmail.com
-
Michael Paquier authored
This adds some missing markups, fixes a couple of incorrect ones and clarifies some documentation in various places.

Author: Liudmila Mantrova
Discussion: https://postgr.es/m/a068f947-7a51-5df1-b3fd-1a131ae5c044@postgrespro.ru
Backpatch-through: 12
-
Thomas Munro authored
Previously it suggested an extra "=" after "SET x=".

Reported-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/CA%2BhUKGLk%3D0yLDjfviONJLzcHEzygj%3Dx6VbGH43LnXbBUvQb52g%40mail.gmail.com
-
Thomas Munro authored
Author: Thomas Munro
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/CA%2BhUKGLk%3D0yLDjfviONJLzcHEzygj%3Dx6VbGH43LnXbBUvQb52g%40mail.gmail.com
-
Thomas Munro authored
After a process decides not to wait for a condition variable, it can still consume a signal before it reaches ConditionVariableCancelSleep(). In that case, pass the signal on to another waiter if possible, so that a signal doesn't go missing when there is another process ready to receive it.

Author: Thomas Munro
Reviewed-by: Shawn Debnath
Discussion: https://postgr.es/m/CA%2BhUKGLQ_RW%2BXs8znDn36e-%2Bmq2--zrPemBqTQ8eKT-VO1OF4Q%40mail.gmail.com
-
Thomas Munro authored
Provide ConditionVariableTimedSleep(), like ConditionVariableSleep() but with a timeout argument.

Author: Shawn Debnath
Reviewed-by: Kyotaro Horiguchi, Thomas Munro
Discussion: https://postgr.es/m/eeb06007ccfe46e399df6af18bfcd15a@EX13D05UWC002.ant.amazon.com
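Usage follows the standard condition-variable loop; a sketch where cv is the shared ConditionVariable, work_is_ready() is a hypothetical predicate, and the wait event and 10-second timeout are arbitrary choices:

    ConditionVariablePrepareToSleep(cv);
    while (!work_is_ready())
    {
        /* Returns true if the timeout elapsed with no signal. */
        if (ConditionVariableTimedSleep(cv, 10000 /* ms */,
                                        WAIT_EVENT_MQ_INTERNAL))
            break;
    }
    ConditionVariableCancelSleep();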
-
- 12 Jul, 2019 3 commits
-
-
Thomas Munro authored
Provide a hint to users that they need to increase wal_level before subscriptions can work.

Author: Lucas Viecelli, with some adjustments by Thomas Munro
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAPjy-57rn5Y9g4e5u--eSOP-7P4QrE9uOZmT2ZcUebF8qxsYhg%40mail.gmail.com
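Hints like this use the standard ereport() machinery; a sketch of the shape of such a message (the hint wording here is illustrative, not the exact committed text):

    ereport(ERROR,
            (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
             errmsg("logical decoding requires wal_level >= logical"),
             errhint("Set wal_level to \"logical\" before creating subscriptions.")));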
-
Tom Lane authored
Commit 3ca930fc modified get_actual_variable_range() to use a new "SnapshotNonVacuumable" snapshot type for selecting tuples that it would consider valid. However, because that snapshot type can accept recently-dead tuples, this caused a bug when using a recently-created index: we might accept a recently-dead tuple that is an early member of a broken HOT chain and does not actually match the index entry. Then, the data extracted from the heap tuple would not necessarily be an endpoint value of the column; it could even be NULL, leading to get_actual_variable_range() itself reporting "found unexpected null value in index". Even without an error, this could lead to poor plan choices due to an erroneous notion of the endpoint value.

We can improve matters by changing the code to use the index-only scan technique (which didn't exist when get_actual_variable_range was originally written). If any of the tuples in a HOT chain are live enough to satisfy SnapshotNonVacuumable, we take the data from the index entry, ignoring what is in the heap. This fixes the problem without changing the live-vs-dead-tuple behavior from what was intended by commit 3ca930fc. A side benefit is that for static tables we might not have to touch the heap at all (when the extremal value is in an all-visible page).

In addition, we can save some overhead by not having to create a complete ExecutorState, and we don't need to run FormIndexDatum, avoiding more cycles as well as the possibility of failure for indexes on expressions. (I'm not sure that this code would ever be used to determine the extreme value of an expression, in the current state of the planner; but it's definitely possible that lower-order columns of the selected index could be expressions. So one could construct perhaps-artificial examples in which the old code unexpectedly failed due to trying to compute an expression's value for a now-dead row.)

Per report from Manuel Rigger. Back-patch to v11 where commit 3ca930fc came in.

Discussion: https://postgr.es/m/CA+u7OA7W4NWEhCvftdV6_8bbm2vgypi5nuxfnSEJQqVKFSUoMg@mail.gmail.com
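In outline, the index-only technique reads the endpoint from the index entry itself; a simplified sketch using the index AM API (scankey setup, datum extraction, and error handling are elided):

    SnapshotData    SnapshotNonVacuumable;
    IndexScanDesc   scan;
    ItemPointer     tid;

    InitNonVacuumableSnapshot(SnapshotNonVacuumable, RecentGlobalXmin);
    scan = index_beginscan(heapRel, indexRel, &SnapshotNonVacuumable, 1, 0);
    scan->xs_want_itup = true;      /* ask the AM to return index tuples */
    index_rescan(scan, scankeys, 1, NULL, 0);

    if ((tid = index_getnext_tid(scan, indexscandir)) != NULL)
    {
        /* Take the endpoint datum from scan->xs_itup, the index entry,
         * rather than from the possibly-dead heap tuple. */
    }
    index_endscan(scan);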
-
David Rowley authored
match_clause_to_partition_key incorrectly would return PARTCLAUSE_UNSUPPORTED if a bool qual could not be matched to the current partition key. This was a problem, as it caused the calling function to discard the qual and not try to match it to any other partition key. If there was another partition key which did match this qual, then the qual would not be checked again and we could fail to prune some partitions.

The worst this could do was to cause partitions not to be pruned when they could have been, so there was no danger of incorrect query results here.

Fix this by changing match_boolean_partition_clause to have it return a PartClauseMatchStatus rather than a boolean value. This allows it to communicate whether the qual is unsupported or whether it just does not match this particular partition key; previously these two cases were treated the same. Now, if match_clause_to_partition_key is unable to match the qual to any other qual type then we can simply return the value from the match_boolean_partition_clause call so that the calling function properly treats the qual as either unmatched or unsupported.

Reported-by: Rares Salcudean
Reviewed-by: Amit Langote
Backpatch-through: 11 where partition pruning was introduced
Discussion: https://postgr.es/m/CAHp_FN2xwEznH6oyS0hNTuUUZKp5PvegcVv=Co6nBXJ+mC7Y5w@mail.gmail.com
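The tri-state return value lets the caller tell these cases apart; a sketch of the calling pattern (the enum members appear in partprune.c, but the call shape here is illustrative):

    switch (match_boolean_partition_clause(partopfamily, clause,
                                           partkey, &outconst))
    {
        case PARTCLAUSE_MATCH_CLAUSE:
            /* matched this partition key: build a pruning step */
            break;
        case PARTCLAUSE_NOMATCH:
            /* no match here: try the remaining partition keys */
            break;
        case PARTCLAUSE_UNSUPPORTED:
            /* can never be used for pruning: discard the qual */
            break;
        default:
            break;
    }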
-
- 11 Jul, 2019 1 commit
-
-
Alexander Korotkov authored
Reported-by: Thom Brown
Discussion: https://postgr.es/m/CAA-aLv4Tggy6Z3kaG9n%2B3SHwOVGN2Yj_MJXfdfwjH_jBNZzJNA%40mail.gmail.com
Backpatch-through: 12
-
- 10 Jul, 2019 7 commits
-
-
Tom Lane authored
Previously, exec_simple_query always ran parse analysis, rewrite, and planning in MessageContext, allowing all the data generated thereby to persist until the end of processing of the whole query string. That's fine for single-command strings, but if a client sends many commands in a single simple-Query message, this strategy could result in annoying memory bloat, as complained of by Andreas Seltenreich.

To fix, create a child context to do this work in, and reclaim it after each command. But we only do so for parsetrees that are not last in their query string. That avoids adding any memory management overhead for the typical case of a single-command string. Memory allocated for the last parsetree would be freed immediately after finishing the command string anyway.

Similarly, adjust extension.c's execute_sql_string() to reclaim memory after each command. In that usage, multi-command strings are the norm, so it's a bit surprising that no one has yet complained of bloat --- especially since the bloat extended to whatever data ProcessUtility execution might leak.

Amit Langote, reviewed by Julien Rouhaud

Discussion: https://postgr.es/m/87ftp6l2qr.fsf@credativ.de
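The pattern is the usual create/switch/delete dance; a simplified sketch of the idea (the context name is illustrative):

    /* Run parse analysis, rewrite, and planning for one command in a
     * context we can reclaim before moving on to the next command. */
    MemoryContext   cmd_context =
        AllocSetContextCreate(MessageContext,
                              "per-parsetree message context",
                              ALLOCSET_DEFAULT_SIZES);
    MemoryContext   oldcontext = MemoryContextSwitchTo(cmd_context);

    /* ... parse analysis, rewrite, planning ... */

    MemoryContextSwitchTo(oldcontext);
    /* ... once the command has completed: */
    MemoryContextDelete(cmd_context);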
-
Bruce Momjian authored
It is now always displayed in PG 12+.

Discussion: https://postgr.es/m/b6ec6167-5dd5-6347-ac1d-1fd49382019f@2ndquadrant.com
Author: Ian Barwick
Backpatch-through: 12
-
Alvaro Herrera authored
Per gripe from Phil Bayer.

Authors: Amit Langote and others
Discussion: https://postgr.es/m/156236160709.1192.4498528196556144085@wrigleys.postgresql.org
-
Michael Paquier authored
This can cause valgrind to complain, as the flag marking a buffer as a temporary copy was not getting initialized. While on it, fill newly-created buffer pages with zeros. This does not matter when loading a block from a temporary file, but it makes the push of an index tuple into a new buffer page safer.

This was introduced by commit 1d27dcf5, so backpatch all the way down to 9.4.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/15899-0d24fb273b3dd90c@postgresql.org
Backpatch-through: 9.4
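The safer allocation pattern is simply to zero the page up front; a minimal sketch:

    /* palloc0 returns fully-zeroed memory, so no uninitialized bytes
     * can end up in a new buffer page (and valgrind stays quiet). */
    Page    page = (Page) palloc0(BLCKSZ);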
-
Alexander Korotkov authored
This commit contains assorted fixes for the jsonpath documentation, including grammar fixes, corrections of incorrect examples, and wording improvements.

Discussion: https://postgr.es/m/CAA-aLv4VVX%3Db9RK5hkfPXJczqaiTdqO04teW9i0wiQVhdKcqzw%40mail.gmail.com
Author: Liudmila Mantrova
Reviewed-by: Alexander Korotkov
Reported-by: Thom Brown
-
David Rowley authored
86b85044 abstracted calls to heap functions in COPY FROM to support a generic table AM. However, when performing a copy into a partitioned table, this commit neglected to call table_finish_bulk_insert for each partition. Before 86b85044, when we always called the heap functions, there was no need to call heapam_finish_bulk_insert for partitions since it only did any work when performing a copy without WAL. For partitioned tables, this was unsupported anyway, so there was no issue. With pluggable storage, we can't make any assumptions about what the table AM might want to do in its equivalent function, so we'd better ensure we always call table_finish_bulk_insert for each partition that's received a row.

For now, we make the table_finish_bulk_insert call whenever we evict a CopyMultiInsertBuffer out of the CopyMultiInsertInfo. This does mean that it's possible that we call table_finish_bulk_insert multiple times per partition, which is not a problem other than being an inefficiency. Improving this requires a more invasive patch, so let's leave that for another day.

This also changes things so that we no longer needlessly call table_finish_bulk_insert when performing a COPY FROM for a non-partitioned table when not using multi-inserts.

Reported-by: Robert Haas
Backpatch-through: 12
Discussion: https://postgr.es/m/CA+TgmoYK=6BpxiJ0tN-p9wtH0BTAfbdxzHhwou0mdud4+BkYuQ@mail.gmail.com
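A sketch of the eviction-time call, assuming the table AM API from tableam.h (the buffer fields shown are modeled on copy.c's CopyMultiInsertBuffer, and ti_options holds the bulk-insert options):

    /* When evicting a partition's multi-insert buffer, give the table
     * AM a chance to finish up (flush, sync, etc.) for that relation. */
    table_finish_bulk_insert(buffer->resultRelInfo->ri_RelationDesc,
                             ti_options);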
-
Amit Kapila authored
Reported-by: Ashwin Agrawal
Author: Ashwin Agrawal
Reviewed-by: Amit Kapila
Backpatch-through: 12, where it was introduced
Discussion: https://postgr.es/m/CALfoeisgdZhYDrJOukaBzvXfJOK2FQ0szVMK7dzmcy6w93iDUA@mail.gmail.com
-
- 09 Jul, 2019 2 commits
-
-
Thomas Munro authored
Otherwise the executor can't see trigger transition tables during EPQ evaluation. Fixes bug #15900 and almost certainly also #15720. Back-patch to 10, where trigger transition tables landed.

Author: Alex Aktsipetrov
Reviewed-by: Thomas Munro, Tom Lane
Discussion: https://postgr.es/m/15900-bc482754fe8d7415%40postgresql.org
Discussion: https://postgr.es/m/15720-38c2b29e5d720187%40postgresql.org
-
Alvaro Herrera authored
We were creating the cloned triggers with an empty list of arguments, losing the ones that had been specified by the user when creating the trigger in the partitioned table. Repair.

This was forgotten in commit 86f57594.

Author: Patrick McHardy
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/20190709130027.amr2cavjvo7rdvac@access1.trash.net
Discussion: https://postgr.es/m/15752-123bc90287986de4@postgresql.org
-