- 05 May, 2021 2 commits
-
-
Tom Lane authored
Previously, a lot of information about type regclass existed only in the discussion of the sequence functions. Maybe that made sense in the beginning, because I think originally those were the only functions taking regclass. But it doesn't make sense anymore. Move that material to the "Object Identifier Types" section in datatype.sgml, generalize it to talk about the other reg* types as well, and add more examples. Per bug #16991 from Federico Caselli. Discussion: https://postgr.es/m/16991-bcaeaafa17e0a723@postgresql.org
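As a brief illustration of the material being generalized (the sequence name below is hypothetical): a string literal is coerced to regclass, so the explicit cast and the bare name behave identically, and the other reg* alias types work the same way for their catalogs.

```sql
CREATE SEQUENCE serial_seq;

-- nextval() takes a regclass argument; the name is looked up in pg_class.
SELECT nextval('serial_seq');             -- implicit coercion to regclass
SELECT nextval('serial_seq'::regclass);   -- equivalent, with an explicit cast

-- Other object identifier types, e.g. regproc (pg_proc) and regtype (pg_type):
SELECT 'now'::regproc::oid, 'integer'::regtype::oid;
```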
-
Peter Eisentraut authored
-
- 04 May, 2021 6 commits
-
-
Tom Lane authored
Commit 93f41461 improved a pre-existing test case so that it would show whether or not termination of the "remote" worker process happened. This soon exposed that, when debug_invalidate_system_caches_always (nee CLOBBER_CACHE_ALWAYS) is enabled, no such termination occurs. That's because cache invalidation forces postgres_fdw connections to be dropped at end of transaction, so that there's no worker to terminate. There's a race condition as to whether the worker will manage to get out of the BackendStatusArray before we look, but at least on buildfarm member hyrax, it's failed twice in two attempts. Rather than re-lobotomizing the test, let's fix this by transiently disabling debug_invalidate_system_caches_always. (Hooray for that being just a GUC nowadays, rather than a compile-time option.) If this proves not to be enough to make the test stable, we can do the other thing instead. Discussion: https://postgr.es/m/3854538.1620081771@sss.pgh.pa.us
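A hedged sketch of the workaround described above; the exact placement within the postgres_fdw regression test is elided here.

```sql
-- Turn the cache-clobber behavior off just for the steps whose remote
-- connection must survive, then restore the configured value.
SET debug_invalidate_system_caches_always = 0;
-- ... steps that rely on the postgres_fdw connection (and its remote
-- worker) persisting to the end of the transaction ...
RESET debug_invalidate_system_caches_always;
```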
-
Alvaro Herrera authored
The OID of the constraint is used instead of the OID of the trigger -- an easy mistake to make. Apparently the object-alter hooks are not very well tested :-( Backpatch to 12, where this typo was introduced by 578b2297. Discussion: https://postgr.es/m/20210503231633.GA6994@alvherre.pgsql
-
Peter Eisentraut authored
-
Peter Eisentraut authored
The previous fix for dumping of inherited generated columns (0bf83648) must not be applied to partitions, since, unlike normal inherited tables, they are always dumped separately and reattached. Reported-by: Santosh Udupi <email@hitha.net> Discussion: https://www.postgresql.org/message-id/flat/CACLRvHZ4a-%2BSM_159%2BtcrHdEqxFrG%3DW4gwTRnwf7Oj0UNj5R2A%40mail.gmail.com
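A minimal sketch of the affected setup (names are illustrative): the partition is dumped as its own CREATE TABLE and then reattached, which is why the inherited-generated-column rule from 0bf83648 must not apply to it.

```sql
CREATE TABLE measurements (
    a int,
    b int GENERATED ALWAYS AS (a * 2) STORED
) PARTITION BY RANGE (a);

-- Unlike an ordinary inheritance child, this partition is dumped
-- separately and reattached by pg_dump.
CREATE TABLE measurements_p1 PARTITION OF measurements
    FOR VALUES FROM (0) TO (100);
```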
-
Peter Eisentraut authored
When running ALTER TABLE t2 INHERIT t1, we must check that columns in t2 that correspond to a generated column in t1 are also generated and have the same generation expression. Otherwise, this would allow creating setups that a normal CREATE TABLE sequence would not allow. Discussion: https://www.postgresql.org/message-id/22de27f6-7096-8d96-4619-7b882932ca25@2ndquadrant.com
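A minimal example of the new check (table names are illustrative):

```sql
CREATE TABLE t1 (a int, b int GENERATED ALWAYS AS (a * 2) STORED);
CREATE TABLE t2 (a int, b int);   -- b is a plain, non-generated column

-- Now rejected: t2.b would have to be a generated column with the same
-- generation expression as t1.b.
ALTER TABLE t2 INHERIT t1;        -- error expected
```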
-
Alexander Korotkov authored
We don't usually mention the version number in similar situations, so don't mention it here either. Reported-by: Bruce Momjian Discussion: https://postgr.es/m/20210503234914.GO6180%40momjian.us
-
- 03 May, 2021 10 commits
-
-
Bruce Momjian authored
Properly fix:
- the "ONLY" in FROM [ONLY] isn't hashed
- the agglevelsup field in GROUPING isn't hashed
- WITH TIES not being hashed (new in PG 13)
- "DISTINCT" in "GROUP BY [DISTINCT]" isn't hashed (new in PG 14)
Reported-by: Julien Rouhaud Discussion: https://postgr.es/m/20210425081119.ulyzxqz23ueh3wuj@nol
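Illustrative query pairs (table and column names are hypothetical) that should now produce distinct pg_stat_statements entries instead of being conflated:

```sql
-- FROM vs FROM ONLY
SELECT count(*) FROM some_parent;
SELECT count(*) FROM ONLY some_parent;

-- FETCH FIRST ... ONLY vs WITH TIES (new in PG 13)
SELECT a FROM t ORDER BY a FETCH FIRST 3 ROWS ONLY;
SELECT a FROM t ORDER BY a FETCH FIRST 3 ROWS WITH TIES;

-- GROUP BY vs GROUP BY DISTINCT (new in PG 14)
SELECT a, b FROM t GROUP BY ROLLUP (a), ROLLUP (b);
SELECT a, b FROM t GROUP BY DISTINCT ROLLUP (a), ROLLUP (b);
```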
-
Peter Eisentraut authored
Before now, looking up "multirange" in the index only led to the multirange() function. To make this more useful, also add an entry pointing to the range types section.
-
Robert Haas authored
Don't phrase reports in terms of the number of tuples thus-far returned by the index scan, but rather in terms of the chunk_seq values found inside the tuples. Patch by me, reviewed by Mark Dilger. Discussion: http://postgr.es/m/CA+TgmoZUONCkdcdR778EKuE+f1r5Obieu63db2OgMPHaEvEPTQ@mail.gmail.com
-
Tom Lane authored
Commit 824bf719 introduced a new search of the NFAs generated by regex compilation. I failed to think hard about the performance characteristics of that search, with the predictable outcome that it's bad: weird regexes can trigger exponential search time. Worse, there's no check-for-interrupt in that code, so you can't even cancel the query if this happens. Fix by introducing memo-ization of the search results, so that any one NFA state need be examined in detail just once. This potentially uses a lot of memory, but we can bound the memory usage by putting a limit on the number of states for which we'll try to prove match-all-ness. That is sane because we already have a limit (DUPINF) on the maximum finite string length that a matchall regex can match; and patterns that involve much more than DUPINF states would probably exceed that limit anyway. Also, rearrange the logic so that we check the basic is-the-graph-all-RAINBOW-arcs property before we start the recursive search to determine path lengths. This will ensure that we fall out quickly whenever the NFA couldn't possibly be matchall. Also stick in a check-for-interrupt, just in case these measures don't completely eliminate the risk of slowness. Discussion: https://postgr.es/m/3483895.1619898362@sss.pgh.pa.us
-
Peter Eisentraut authored
If dtrace is compiled in but disabled, the lwlock dtrace probes still evaluate their arguments. Since PostgreSQL 13, T_NAME(lock) does nontrivial work, so it should be avoided if not needed. To fix, make these calls conditional on the *_ENABLED() macro corresponding to each probe. Reviewed-by: Craig Ringer <craig.ringer@enterprisedb.com> Discussion: https://www.postgresql.org/message-id/CAGRY4nwxKUS_RvXFW-ugrZBYxPFFM5kjwKT5O+0+Stuga5b4+Q@mail.gmail.com
-
Peter Eisentraut authored
became unused by 04942bff
-
Peter Eisentraut authored
-
Peter Eisentraut authored
One more that ought to have been part of 82c3cd97.
-
Amit Kapila authored
Previously, we were using the size of all the changes present in ReorderBuffer to compute total_bytes after decoding a transaction and that can lead to counting some of the transactions' changes more than once. Fix it by using the size of the changes decoded for a transaction to compute 'total_bytes'. Author: Sawada Masahiko Reviewed-by: Vignesh C, Amit Kapila Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
-
Alexander Korotkov authored
websearch_to_tsquery() splits quoted text into tokens and connects them with the phrase operator on its own. However, that leads to surprising results when a token contains no words. For instance, websearch_to_tsquery('"aaa: bbb"') is 'aaa <2> bbb', because it is equivalent to to_tsquery(E'aaa <-> \':\' <-> bbb'). But websearch_to_tsquery('"aaa: bbb"') has to be 'aaa <-> bbb' in order to match to_tsvector('aaa: bbb'). Since 0c4f355c, we connect the lexemes of complex tokens with phrase operators anyway. Thus, let websearch_to_tsquery() parse the quoted text as a single token, so that it processes it in the same way phraseto_tsquery() does. This is exactly what we need and also simplifies the code. This is an incompatible change, so we don't backpatch it. Reported-by: Valentin Gatien-Baron Discussion: https://postgr.es/m/CA%2B0DEqiZs7gdOd4ikmg%3D0UWG%2BSwWOLxPsk_JW-sx9WNOyrb0KQ%40mail.gmail.com Author: Alexander Korotkov Reviewed-by: Tom Lane, Zhihong Yu
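The change in query form, using the default text search configuration:

```sql
-- Previously returned 'aaa <2> bbb'; with the quoted text parsed as a
-- single token it now returns 'aaa <-> bbb'.
SELECT websearch_to_tsquery('"aaa: bbb"');

-- Which is what is needed so the query matches the corresponding tsvector:
SELECT to_tsvector('aaa: bbb') @@ websearch_to_tsquery('"aaa: bbb"');
```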
-
- 01 May, 2021 1 commit
-
-
Bruce Momjian authored
Turns out you can specify negative values using plurals: https://english.stackexchange.com/questions/9735/is-1-followed-by-a-singular-or-plural-noun so the previous code was correct enough, and consistent with other usage in our code. Also add comments in the two places where this could be confusing. Reported-by: Noah Misch Diagnosed-by: 20210425115726.GA2353095@rfd.leadboat.com
-
- 30 Apr, 2021 6 commits
-
-
Tom Lane authored
While we've always allowed such cases, the documentation didn't say you could do it. Discussion: https://postgr.es/m/161969805833.690.13680986983883602407@wrigleys.postgresql.org
-
Tom Lane authored
Mention specifically that you can't call aggregates, window functions, or procedures this way (the inability to call SRFs was already mentioned). Also, the claim that PQfn doesn't support NULL arguments or results has been a lie since we invented protocol 3.0. Not sure why this text was never updated for that, but do it now. Discussion: https://postgr.es/m/2039442.1615317309@sss.pgh.pa.us
-
Tom Lane authored
Reject aggregates, window functions, and procedures. Aggregates failed anyway, though with a somewhat obscure error message. Window functions would hit an Assert or null-pointer dereference. Procedures seemed to work as long as you didn't try to do transaction control, but (a) transaction control is sort of the point of a procedure, and (b) it's not entirely clear that no bugs lurk in that path. Given the lack of testing of this area, it seems safest to be conservative in what we support. Also reject proretset functions, as the fastpath protocol can't support returning a set. Also remove an easily-triggered assertion that the given OID isn't 0; the subsequent lookups can handle that case themselves. Per report from Theodor-Arsenij Larionov-Trichkin. Back-patch to all supported branches. (The procedure angle only applies in v11+, of course.) Discussion: https://postgr.es/m/2039442.1615317309@sss.pgh.pa.us
-
Amit Kapila authored
There were two problems: a. We were always selecting the next available txn instead of selecting it when it is larger than the previous transaction. b. We were selecting the transactions which haven't made any changes to the database (base snapshot is not set). Later it was hitting an Assert because we don't decode such transactions and the changes in txn remain as they are. It is better not to choose such transactions for streaming in the first place. Reported-by: Haiying Tang Author: Dilip Kumar Reviewed-by: Amit Kapila Discussion: https://postgr.es/m/OS0PR01MB61133B94E63177040F7ECDA1FB429@OS0PR01MB6113.jpnprd01.prod.outlook.com
-
David Rowley authored
Here we adjust the EXPLAIN ANALYZE output for Result Cache so that we don't show any Result Cache stats for parallel workers who don't contribute anything to Result Cache plan nodes. I originally had ideas that workers who don't help could still have their Result Cache stats displayed. The idea with that was so that I could write some parallel Result Cache regression tests that show the EXPLAIN ANALYZE output. However, I realized a little too late that such tests simply could not be run in a stable way on the buildfarm. With that knowledge, before 9eacee2e went in, I had removed all of the tests that were showing the EXPLAIN ANALYZE output of a parallel Result Cache plan; however, I forgot to put back the code that adjusts the EXPLAIN output to hide the Result Cache stats for parallel workers who were not fast enough to help out before query execution was over. All other nodes behave this way and so should Result Cache. Additionally, with this change, it now seems safe enough to remove the SET force_parallel_mode = off that I had added to the regression tests. Also, perform some cleanup in the partition_prune tests. I had adjusted the explain_parallel_append() function to sanitize the Result Cache EXPLAIN ANALYZE output. However, since I didn't actually include any parallel Result Cache tests that show their EXPLAIN ANALYZE output, that code does nothing and can be removed. In passing, move the setting of memPeakKb into the scope where it's used. Reported-by: Amit Khandekar Author: David Rowley, Amit Khandekar Discussion: https://postgr.es/m/CAJ3gD9d8SkfY95GpM1zmsOtX2-Ogx5q-WLsf8f0ykEb0hCRK3w@mail.gmail.com
-
Amit Kapila authored
As per analysis, it appears that the 'drop slot' message from the previous test and the 'create slot' message of the new test are either missed or not yet delivered to the stats collector, so we still see the stats from the old slot. This happens only rarely, which could be why we are seeing occasional random failures in the buildfarm. To avoid that, we use a different slot name for the tests in test_decoding/sql/stats.sql. Reported-by: Tom Lane based on buildfarm reports Author: Sawada Masahiko Reviewed-by: Amit Kapila, Vignesh C Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
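A hedged sketch of the pattern; the slot name below is illustrative, not necessarily the one used in the test.

```sql
-- Use a slot name not shared with earlier tests, so a lost or delayed
-- 'drop slot' message for an old slot cannot pollute this test's stats.
SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot_stats1', 'test_decoding');
-- ... generate changes and check pg_stat_replication_slots ...
SELECT pg_drop_replication_slot('regression_slot_stats1');
```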
-
- 29 Apr, 2021 6 commits
-
-
Tom Lane authored
Don't advocate dropping a whole table when dropping a column would serve. While at it, try to make the layout of these messages a bit cleaner and more consistent. Per suggestion from Daniel Gustafsson. No back-patch, as we generally don't like to churn translatable messages in released branches. Discussion: https://postgr.es/m/2798740.1619622555@sss.pgh.pa.us
-
Tom Lane authored
Commits 29aeda6e et al closed up some oversights involving not checking for non-upgradable types within container types, such as arrays and ranges. However, I only looked at version.c, failing to notice that there were substantially-equivalent tests in check.c. (The division of responsibility between those files is less than clear...) In addition, because genbki.pl does not guarantee that auto-generated rowtype OIDs will hold still across versions, we need to consider that the composite type associated with a system catalog or view is non-upgradable. It seems unlikely that someone would have a user column declared that way, but if they did, trying to read it in another PG version would likely draw "no such pg_type OID" failures, thanks to the type OID embedded in composite Datums. To support the composite and reg*-type cases, extend the recursive query that does the search to allow any base query that returns a column of pg_type OIDs, rather than limiting it to exactly one starting type. As before, back-patch to all supported branches. Discussion: https://postgr.es/m/2798740.1619622555@sss.pgh.pa.us
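A hedged, much-simplified sketch of the kind of recursive search described above (not pg_upgrade's actual query): any base query returning pg_type OIDs seeds the walk, which then follows container types that embed them. 'line' stands in for a hypothetical non-upgradable type, and the real check also follows ranges and composite-type columns.

```sql
WITH RECURSIVE bad_types(typeoid) AS (
    -- Base query: any set of pg_type OIDs works here.
    SELECT 'pg_catalog.line'::regtype::oid
  UNION ALL
    -- Containers over an already-found type: arrays and domains.
    SELECT t.oid
    FROM pg_type t
    JOIN bad_types b ON t.typelem = b.typeoid
                     OR t.typbasetype = b.typeoid
)
SELECT typeoid::regtype FROM bad_types;
```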
-
Alvaro Herrera authored
Backpatch to 12, where 87259588 introduced the current behavior. Per note from Justin Pryzby. Co-authored-by: Justin Pryzby <pryzby@telsasoft.com> Discussion: https://postgr.es/m/20210416143135.GI3315@telsasoft.com
-
Peter Eisentraut authored
This was broken by a silly mistake in e717a9a1. Reported-by: Jeff Janes <jeff.janes@gmail.com> Author: Justin Pryzby <pryzby@telsasoft.com> Discussion: https://www.postgresql.org/message-id/CAMkU=1zKGWEJdBbYKw7Tn7cJmYR_UjgdcXTPDqJj=dNwCETBCQ@mail.gmail.com
-
Peter Eisentraut authored
Improve the wording in the connection type section of pg_hba.conf.sample a bit. After the hostgssenc part was added on, the whole thing became a bit wordy, and it's also a bit inaccurate for example in that the current wording for "host" appears to say that it does not apply to GSS-encrypted connections. Discussion: https://www.postgresql.org/message-id/flat/fc06dcc5-513f-e944-cd07-ba51dd7c6916%40enterprisedb.com
-
Michael Paquier authored
If no standbys can be found in the set of connection points provided, the fallback behavior is "any", not "all", which does not exist. Author: Greg Nancarrow Reviewed-by: Laurenz Albe Discussion: https://postgr.es/m/CAJcOf-fDaCv8qCpWH7k5Yh6zFxZeUwZowu4sCWQ2zFx4CdkHpA@mail.gmail.com
-
- 28 Apr, 2021 5 commits
-
-
Tom Lane authored
We had a report of confusing server behavior caused by a client bug that sent junk to the server: the server thought the junk was a very long message length and waited patiently for data that would never come. We can reduce the risk of that by being less trusting about message lengths. For a long time, libpq has had a heuristic rule that it wouldn't believe large message size words, except for a small number of message types that are expected to be (potentially) long. This provides some defense against loss of message-boundary sync and other corrupted-data cases. The server does something similar, except that up to now it only limited the lengths of messages received during the connection authentication phase. Let's do the same as in libpq and put restrictions on the allowed length of all messages, while distinguishing between message types that are expected to be long and those that aren't. I used a limit of 10000 bytes for non-long messages. (libpq's corresponding limit is 30000 bytes, but given the asymmetry of the FE/BE protocol, there's no good reason why the numbers should be the same.) Experimentation suggests that this is at least a factor of 10, maybe a factor of 100, more than we really need; but plenty of daylight seems desirable to avoid false positives. In any case we can adjust the limit based on beta-test results. For long messages, set a limit of MaxAllocSize - 1, which is the most that we can absorb into the StringInfo buffer that the message is collected in. This just serves to make sure that a bogus message size is reported as such, rather than as a confusing gripe about not being able to enlarge a string buffer. While at it, make sure that non-mainline code paths (such as COPY FROM STDIN) are as paranoid as SocketBackend is, and validate the message type code before believing the message length. This provides an additional guard against getting stuck on corrupted input. Discussion: https://postgr.es/m/2003757.1619373089@sss.pgh.pa.us
-
Alvaro Herrera authored
Makes partition descriptor acquisition faster during the transient period in which a partition is in the process of being detached. This also adds the restriction that only one partition can be in pending-detach state for a partitioned table. While at it, return find_inheritance_children() API to what it was before 71f4c8c6, and create a separate find_inheritance_children_extended() that returns detailed info about detached partitions. (This incidentally fixes a bug in 8aba9322 whereby a memory context holding a transient partdesc is reparented to a NULL PortalContext, leading to permanent leak of that memory. The fix is to no longer rely on reparenting contexts to PortalContext. Reported by Amit Langote.) Per gripe from Amit Langote Discussion: https://postgr.es/m/CA+HiwqFgpP1LxJZOBYGt9rpvTjXXkg5qG2+Xch2Z1Q7KrqZR1A@mail.gmail.com
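The new one-pending-detach-at-a-time restriction, sketched in SQL (object names are illustrative):

```sql
-- Session 1: start a concurrent detach; the partition enters the
-- transient pending-detach state while the operation is in progress.
ALTER TABLE parted DETACH PARTITION part1 CONCURRENTLY;

-- Session 2: while part1 is still pending detach, a second concurrent
-- detach on the same partitioned table is now rejected.
ALTER TABLE parted DETACH PARTITION part2 CONCURRENTLY;  -- error expected
```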
-
Tom Lane authored
Somehow I'd convinced myself that rotating to UTC-12 was the way to do this, but upon further review, it's definitely UTC+12. Discussion: https://postgr.es/m/1197050.1619123213@sss.pgh.pa.us
-
Michael Paquier authored
Spotted by buildfarm member prion, with -DRELCACHE_FORCE_RELEASE. Introduced in f7aab36d. Discussion: https://postgr.es/m/2759018.1619577848@sss.pgh.pa.us Backpatch-through: 9.6
-
Michael Paquier authored
Attempting to use this function with event triggers failed, as, since its introduction in a6762014, this code has never associated an object name with event triggers. This addresses the failure by adding the event trigger name to the set defining its object address. Note that regression tests are added within event_trigger and not object_address to avoid issues with concurrent connections in parallel schedules. Author: Joel Jacobson Discussion: https://postgr.es/m/3c905e77-a026-46ae-8835-c3f6cd1d24c8@www.fastmail.com Backpatch-through: 9.6
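The fixed case as a SQL sketch (the trigger and function names are hypothetical):

```sql
CREATE FUNCTION evt_noop() RETURNS event_trigger
    LANGUAGE plpgsql AS $$ BEGIN NULL; END $$;

CREATE EVENT TRIGGER my_event_trigger ON ddl_command_end
    EXECUTE FUNCTION evt_noop();

-- Previously this failed for event triggers; the trigger name is now
-- included in the object address definition.
SELECT * FROM pg_get_object_address('event trigger', '{my_event_trigger}', '{}');
```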
-
- 27 Apr, 2021 4 commits
-
-
Andrew Dunstan authored
Handle the situation where perl swaps the order of operands of the comparison operator. See `perldoc overload` for details: "The third argument is set to TRUE if (and only if) the two operands have been swapped. Perl may do this to ensure that the first argument ($self) is an object implementing the overloaded operation, in line with general object calling conventions."
-
Fujii Masao authored
Typos, corrections and language improvements in the docs. Author: Justin Pryzby, Fujii Masao Reviewed-by: Bharath Rupireddy, Justin Pryzby, Fujii Masao Discussion: https://postgr.es/m/20210411041658.GB14564@telsasoft.com
-
Fujii Masao authored
Commit 8ff1c946 allowed the TRUNCATE command to truncate foreign tables. Previously, the information about "ONLY" options specified in a TRUNCATE command was passed to the foreign data wrapper, and postgres_fdw constructed the TRUNCATE command to issue to the remote server with "ONLY" options included, based on that information. On the other hand, "ONLY" options specified in SELECT, UPDATE or DELETE have no effect when accessing or modifying the remote table, i.e., they are not passed to the foreign data wrapper. So it's inconsistent for only the TRUNCATE command to pass the "ONLY" options to the foreign data wrapper. Therefore, this commit changes TRUNCATE so that it doesn't pass the "ONLY" options to the foreign data wrapper, for consistency with other statements. This commit also changes postgres_fdw so that it never includes "ONLY" options in the TRUNCATE command that it constructs. Author: Fujii Masao Reviewed-by: Bharath Rupireddy, Kyotaro Horiguchi, Justin Pryzby, Zhihong Yu Discussion: https://postgr.es/m/551ed8c1-f531-818b-664a-2cecdab99cd8@oss.nttdata.com
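The user-visible consequence, sketched (the foreign table name is illustrative):

```sql
-- ONLY is no longer forwarded to the FDW: postgres_fdw issues a plain
-- TRUNCATE on the remote table in both cases, matching how ONLY is
-- ignored for SELECT/UPDATE/DELETE on foreign tables.
TRUNCATE ONLY remote_accounts;
TRUNCATE remote_accounts;
```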
-
Amit Kapila authored
Previously, we used an array of size max_replication_slots to store stats for replication slots. But that had two problems in the cases where a message for dropping a slot gets lost: 1) the stats for the new slot are not recorded if the array is full and 2) we could write beyond the end of the array if the user reduces max_replication_slots. This commit uses HTAB for replication slot statistics, resolving both problems. Now, pgstat_vacuum_stat() searches for all the dead replication slots in the stats hashtable and tells the collector to remove them. To avoid showing the stats for already-dropped slots, the pg_stat_replication_slots view looks up slot stats by the slot name taken from pg_replication_slots. Also, we send a message for creating a slot at slot creation, initializing the stats. This reduces the possibility that the stats are accumulated into the old slot stats when a message for dropping a slot gets lost. Reported-by: Andres Freund Author: Sawada Masahiko, test case by Vignesh C Reviewed-by: Amit Kapila, Vignesh C, Dilip Kumar Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
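From SQL, the visible effect is that slot stats are shown per existing slot only (a small sketch; column list abbreviated):

```sql
-- Stats for dropped slots are no longer listed; pgstat_vacuum_stat()
-- tells the collector to remove their entries from the hash table.
SELECT slot_name, total_txns, total_bytes
FROM pg_stat_replication_slots;
```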
-