- 13 Dec, 2020 1 commit
-
-
Noah Misch authored
-
- 12 Dec, 2020 2 commits
-
-
Bruce Momjian authored
Backpatch-through: 9.5
-
Bruce Momjian authored
Backpatch-through: 9.5
-
- 11 Dec, 2020 4 commits
-
-
Tom Lane authored
This is basically a finger exercise to prove that it's possible for an extension module to add subscripting ability. Subscripted fetch from an hstore is not different from the existing "hstore -> text" operator. Subscripted update does seem to be a little easier to use than the traditional update method using hstore concatenation, but it's not a fundamentally new ability. However, there may be some value in the code as sample code, since it shows what's basically the minimum-complexity way to implement subscripting when one needn't consider nested container objects. Discussion: https://postgr.es/m/3724341.1607551174@sss.pgh.pa.us
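For readers treating this as sample code, a minimal sketch of the handler shape being described; the struct and field names follow the generic-subscripting interface as best understood here and are illustrative, not the actual contrib/hstore code:

    #include "postgres.h"
    #include "fmgr.h"
    #include "nodes/subscripting.h"

    PG_FUNCTION_INFO_V1(hstore_subscript_handler);

    /*
     * Minimal handler skeleton: return a static set of callbacks describing
     * how subscripting on the type is parsed and executed.  The two function
     * pointers would point at the extension's own parse-analysis and
     * execution-setup routines (omitted in this sketch).
     */
    Datum
    hstore_subscript_handler(PG_FUNCTION_ARGS)
    {
        static const SubscriptRoutines sbsroutines = {
            .transform = NULL,          /* parse analysis of h['key'] goes here */
            .exec_setup = NULL,         /* install fetch/assign execution steps */
            .fetch_strict = true,       /* fetch returns NULL on NULL inputs */
            .fetch_leakproof = true,    /* fetch never errors on bad subscripts */
            .store_leakproof = false,   /* assignment may throw on bad subscripts */
        };

        PG_RETURN_POINTER(&sbsroutines);
    }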
-
Tom Lane authored
This is essential if we'd like to allow existing extension data types to support subscripting in future, since dropping and recreating the type isn't a practical thing for an extension upgrade script, and direct manipulation of pg_type isn't a great answer either. There was some discussion about also allowing alteration of typelem, but it's less clear whether that's a good idea or not, so for now I forbore. Discussion: https://postgr.es/m/3724341.1607551174@sss.pgh.pa.us
-
Tom Lane authored
Commit c7aba7c1 didn't add this, but after more fooling with the feature I feel that it'd be useful. To make this possible, refactor getSubscriptingRoutines() so that the caller is responsible for throwing any error. (In clauses.c, I just chose to make the most conservative assumption rather than throwing an error. We don't expect failures there anyway really, so the code space for an error message would be a poor investment.)
-
Peter Eisentraut authored
This usage would mean that values of the enum type are potentially not one of the enum values. Use macros instead, like everywhere else. Discussion: https://www.postgresql.org/message-id/14dde730-1d34-260e-fa9d-7664df2d6313@enterprisedb.com
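As a generic illustration of the problem class (not the specific code touched by this commit): OR-ing two enumerators produces a value that names no enumerator, so a variable of the enum type can hold a "non-value"; plain macros over an integer avoid that.

    /* Problematic: a BadFlags variable can hold 3, which names no enumerator. */
    typedef enum BadFlags
    {
        BADFLAG_A = 1 << 0,
        BADFLAG_B = 1 << 1
    } BadFlags;

    /* Preferred: macros over a plain integer type, as done elsewhere in the tree. */
    #define GOODFLAG_A (1 << 0)
    #define GOODFLAG_B (1 << 1)

    static void
    flags_example(void)
    {
        BadFlags    bad = BADFLAG_A | BADFLAG_B;    /* value 3: outside the enum */
        int         good = GOODFLAG_A | GOODFLAG_B; /* just a bitmask */

        (void) bad;
        (void) good;
    }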
-
- 10 Dec, 2020 2 commits
-
-
Michael Paquier authored
This module had a dependency on pgcrypto's md5.c that got removed by b67b57a9. Instead of the code from pgcrypto, this code can just use the new cryptohash routines for MD5 as a drop-in replacement, so let's just do this switch. This also has the merit of simplifying the compilation of uuid-ossp a bit. Reproducing the failure requires building with --with-uuid; I used e2fs to reproduce it and then to test this commit. Per reports from buildfarm members longfin, florican and sifaka. Discussion: https://postgr.es/m/X9GToVd3QmWeNvj8@paquier.xyz
-
Michael Paquier authored
This commit heavily reorganizes the MD5 implementations that exist in the tree, in several respects. First, MD5 is added to the list of options available in cryptohash.c and cryptohash_openssl.c. This means that if building with OpenSSL, EVP is used for MD5 instead of the fallback implementation that Postgres had for ages. With the recent refactoring work for cryptohash functions, this change is straightforward. If not building with OpenSSL, a fallback implementation internal to src/common/ is used. Second, this reduces the number of MD5 implementations present in the tree from two to one, by moving the KAME implementation from pgcrypto to src/common/, and by removing the implementation that existed in src/common/. KAME was already structured with an init/update/final set of routines by pgcrypto (see original pgcrypto/md5.h) for compatibility with OpenSSL, so moving it to src/common/ proved to be a straightforward move, requiring no actual manipulation of the internals of each routine. Benchmarking has not shown any performance gap between the two implementations. Similarly to the fallback implementation used for SHA2, the fallback implementation of MD5 is moved to src/common/md5.c with an internal header called md5_int.h for the init, update and final routines. This is then consumed by cryptohash.c. The original routines used for MD5-hashed passwords are moved to a separate file called md5_common.c, also in src/common/, aimed at being shared between all MD5 implementations as utility routines to keep compatibility with any code relying on them. Like the SHA2 changes, this commit had its round of tests on both Linux and Windows, across all versions of OpenSSL supported on HEAD, both with and without OpenSSL. Author: Michael Paquier Reviewed-by: Daniel Gustafsson Discussion: https://postgr.es/m/20201106073434.GA4961@paquier.xyz
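A hedged sketch of the call pattern this enables for MD5 consumers (signatures as understood from this refactoring; exact details, such as the arguments of the final step, may differ between versions):

    #include "postgres.h"
    #include "common/cryptohash.h"

    /*
     * Compute a 16-byte MD5 digest using the shared cryptohash API.  With
     * OpenSSL builds this goes through EVP; otherwise the src/common/
     * fallback implementation is used.  Returns false on failure.
     */
    static bool
    compute_md5(const uint8 *data, size_t len, uint8 *digest /* 16 bytes */)
    {
        pg_cryptohash_ctx *ctx = pg_cryptohash_create(PG_MD5);
        bool        ok;

        if (ctx == NULL)
            return false;

        ok = pg_cryptohash_init(ctx) >= 0 &&
             pg_cryptohash_update(ctx, data, len) >= 0 &&
             pg_cryptohash_final(ctx, digest) >= 0;

        pg_cryptohash_free(ctx);
        return ok;
    }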
-
- 09 Dec, 2020 4 commits
-
-
Tom Lane authored
This patch generalizes the subscripting infrastructure so that any data type can be subscripted, if it provides a handler function to define what that means. Traditional variable-length (varlena) arrays all use array_subscript_handler(), while the existing fixed-length types that support subscripting use raw_array_subscript_handler(). It's expected that other types that want to use subscripting notation will define their own handlers. (This patch provides no such new features, though; it only lays the foundation for them.) To do this, move the parser's semantic processing of subscripts (including coercion to whatever data type is required) into a method callback supplied by the handler. On the execution side, replace the ExecEvalSubscriptingRef* layer of functions with direct calls to callback-supplied execution routines. (Thus, essentially no new run-time overhead should be caused by this patch. Indeed, there is room to remove some overhead by supplying specialized execution routines. This patch does a little bit in that line, but more could be done.) Additional work is required here and there to remove formerly hard-wired assumptions about the result type, collation, etc of a SubscriptingRef expression node; and to remove assumptions that the subscript values must be integers. One useful side-effect of this is that we now have a less squishy mechanism for identifying whether a data type is a "true" array: instead of wiring in weird rules about typlen, we can look to see if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this to be bulletproof, we have to forbid user-defined types from using that handler directly; but there seems no good reason for them to do so. This patch also removes assumptions that the number of subscripts is limited to MAXDIM (6), or indeed has any hard-wired limit. That limit still applies to types handled by array_subscript_handler or raw_array_subscript_handler, but to discourage other dependencies on this constant, I've moved it from c.h to utils/array.h. Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov, Peter Eisentraut, Pavel Stehule Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
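To illustrate the "true array" test mentioned above, a hedged sketch of the catalog check (the helper name is illustrative; the tree's own macro may be spelled differently):

    #include "postgres.h"
    #include "access/htup_details.h"
    #include "catalog/pg_type.h"
    #include "utils/fmgroids.h"
    #include "utils/syscache.h"

    /*
     * A type behaves as a "true" array exactly when its subscript handler is
     * the built-in array_subscript_handler, rather than by heuristics about
     * typlen and typelem.
     */
    static bool
    type_is_true_array(Oid typid)
    {
        HeapTuple   tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typid));
        bool        result = false;

        if (HeapTupleIsValid(tup))
        {
            Form_pg_type typform = (Form_pg_type) GETSTRUCT(tup);

            result = (typform->typsubscript == F_ARRAY_SUBSCRIPT_HANDLER);
            ReleaseSysCache(tup);
        }
        return result;
    }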
-
Peter Eisentraut authored
It was still using a scan of pg_depend instead of using the conindid column that has been added since. Since it is now just a catalog lookup wrapper and not related to pg_depend, move from pg_depend.c to lsyscache.c. Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://www.postgresql.org/message-id/flat/4688d55c-9a2e-9a5a-d166-5f24fe0bf8db%40enterprisedb.com
-
Michael Paquier authored
Three places of unicode_norm.c use a similar logic for getting the combining class from a codepoint. Commit 2991ac5f has added the function get_canonical_class() for this purpose, but it was only called by the backend. This commit refactors the code to use this function in all the places where the combining class is retrieved from a given codepoint. Author: John Naylor Discussion: https://postgr.es/m/CAFBsxsHUV7s7YrOm6hFz-Jq8Sc7K_yxTkfNZxsDV-DuM-k-gwg@mail.gmail.com
-
Andres Freund authored
It is error prone (see 5da871bf) and verbose to manually create function types. Add a helper that can reference a function pointer type via llvmjit_types.c, and convert existing instances of manual creation. Author: Andres Freund <andres@anarazel.de> Reviewed-By: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/20201207212142.wz5tnbk2jsaqzogb@alap3.anarazel.de
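A hedged sketch of how the helper is meant to be used; the variable name passed in is illustrative of the llvmjit_types.c reference convention the commit describes, not a verified identifier:

    #include "postgres.h"
    #include "jit/llvmjit.h"

    /*
     * Obtain the LLVM function type for an expression-step callback from a
     * reference variable compiled into llvmjit_types.c, instead of
     * hand-assembling it with LLVMFunctionType() (which 5da871bf showed to
     * be error prone).
     */
    static LLVMTypeRef
    eval_subroutine_type(void)
    {
        return llvm_pg_var_func_type("TypeExecEvalSubroutine");
    }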
-
- 08 Dec, 2020 12 commits
-
-
Tom Lane authored
array_get_element and array_get_slice qualify as leakproof, since they will silently return NULL for bogus subscripts. But array_set_element and array_set_slice throw errors for such cases, making them clearly not leakproof. contain_leaked_vars was evidently written with only the former case in mind, as it gave the wrong answer for assignment SubscriptingRefs (nee ArrayRefs). This would be a live security bug, were it not that assignment SubscriptingRefs can only occur in INSERT and UPDATE target lists, while we only care about leakproofness for qual expressions; so the wrong answer can't occur in practice. Still, that's a rather shaky answer for a security-related question; and maybe in future somebody will want to ask about leakproofness of a tlist. So it seems wise to fix and even back-patch this correction. (We would need some change here anyway for the upcoming generic-subscripting patch, since extensions might make different tradeoffs about whether to throw errors. Commit 558d77f2 attempted to lay groundwork for that by asking check_functions_in_node whether a SubscriptingRef contains leaky functions; but that idea fails now that the implementation methods of a SubscriptingRef are not SQL-visible functions that could be marked leakproof or not.) Back-patch to 9.6. While 9.5 has the same issue, the code's a bit different. It seems quite unlikely that we'd introduce any actual bug in the short time 9.5 has left to live, so the work/risk/reward balance isn't attractive for changing 9.5. Discussion: https://postgr.es/m/3143742.1607368115@sss.pgh.pa.us
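Reduced to its essence, the corrected rule looks roughly like this (a simplified sketch, not the committed contain_leaked_vars_walker hunk):

    #include "postgres.h"
    #include "nodes/primnodes.h"

    /*
     * Fetch-style SubscriptingRefs over built-in arrays are leakproof: a
     * bogus subscript just yields NULL.  Assignment-style ones
     * (refassgnexpr != NULL) throw errors for bad subscripts, so they must
     * be treated as leaky.
     */
    static bool
    subscripting_ref_is_leaky(SubscriptingRef *sbsref)
    {
        return sbsref->refassgnexpr != NULL;
    }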
-
Tom Lane authored
This GUC was always intended as a temporary solution to help with finding 9.4-to-9.5 migration issues. Now that all pre-9.5 branches are out of support, and 9.5 will be too before v14 is released, it seems like it's okay to drop it. Doing so allows removal of several hundred lines of poorly-tested code in parse_expr.c, which have been a fertile source of bugs when people did use this. Discussion: https://postgr.es/m/2234320.1607117945@sss.pgh.pa.us
-
Dean Rasheed authored
Formerly, extended statistics only handled clauses that were RestrictInfos. However, the restrictinfo machinery doesn't create sub-AND RestrictInfos for AND clauses underneath OR clauses. Therefore teach extended statistics to handle bare AND clauses, looking for compatible RestrictInfo clauses underneath them. Dean Rasheed, reviewed by Tomas Vondra. Discussion: https://postgr.es/m/CAEZATCW=J65GUFm50RcPv-iASnS2mTXQbr=CfBvWRVhFLJ_fWA@mail.gmail.com
-
Dean Rasheed authored
When estimating an OR clause using multiple extended statistics objects, treat the estimates for each set of clauses for each statistics object as independent of one another. The overlap estimates produced for each statistics object do not apply to clauses covered by other statistics objects. Dean Rasheed, reviewed by Tomas Vondra. Discussion: https://postgr.es/m/CAEZATCW=J65GUFm50RcPv-iASnS2mTXQbr=CfBvWRVhFLJ_fWA@mail.gmail.com
-
Tom Lane authored
The SQL standard says that redundant unique constraints are disallowed, but we long ago decided that throwing an error would be too user-unfriendly, so we just drop redundant ones. The docs weren't very clear about that though, as this behavior was only explained for PRIMARY KEY vs UNIQUE, not UNIQUE vs UNIQUE. While here, I couldn't resist doing some copy-editing and markup-fixing on the adjacent text about INCLUDE options. Per bug #16767 from Matthias vd Meent. Discussion: https://postgr.es/m/16767-1714a2056ca516d0@postgresql.org
-
Tom Lane authored
This restriction was mentioned in connection with string literals, but it wasn't made clear that it's a general restriction not just a syntactic limitation in query strings. Per unsigned documentation comment. Discussion: https://postgr.es/m/160720552914.710.16625261471128631268@wrigleys.postgresql.org
-
Fujii Masao authored
After autovacuum collects the relations to vacuum or analyze, it rechecks whether each relation still needs to be vacuumed or analyzed before actually doing that. Previously this recheck could be a significant overhead, especially when there were a very large number of relations. This was because each recheck forced the statistics to be refreshed, and refreshing the statistics for a very large number of relations could cause heavy overhead. There was a report that this issue caused autovacuum workers to get “stuck” in a tight loop of table_recheck_autovac(), which rechecks whether a relation needs to be vacuumed or analyzed. This commit speeds up the recheck by making the autovacuum worker reuse the previously-read statistics for the recheck if possible. Then, if those "stale" statistics say that a relation still needs to be vacuumed or analyzed, autovacuum refreshes the statistics and does the recheck again. The benchmark shows that the more relations there are and the more autovacuum workers are running concurrently, the more this change reduces the autovacuum execution time. For example, with 20,000 tables and 10 autovacuum workers running, the benchmark showed that the change improved autovacuum performance by more than a factor of three. On the other hand, with only 1000 tables and a single autovacuum worker running, the benchmark didn't show any big performance regression from the change. A POC patch was first proposed by Jim Nasby. As a result of the discussion, we used Tatsuhito Kasahara's version of the patch, which follows the approach suggested by Tom Lane. Reported-by: Jim Nasby Author: Tatsuhito Kasahara Reviewed-by: Masahiko Sawada, Fujii Masao Discussion: https://postgr.es/m/3FC6C2F2-8A47-44C0-B997-28830B5716D0@amazon.com
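The recheck flow described above amounts to roughly the following (a sketch with illustrative helper names, not the actual table_recheck_autovac() code):

    #include "postgres.h"

    /* illustrative helpers, not actual autovacuum.c routines */
    extern bool relation_needs_work_per_current_stats(Oid relid);
    extern void refresh_all_statistics(void);

    /*
     * Recheck cheaply first: reuse whatever statistics were read earlier.
     * Only if those "stale" stats still say the relation needs work do we
     * pay for a refresh and recheck once more.
     */
    static bool
    recheck_relation(Oid relid)
    {
        if (!relation_needs_work_per_current_stats(relid))
            return false;

        refresh_all_statistics();
        return relation_needs_work_per_current_stats(relid);
    }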
-
Fujii Masao authored
Oversight in 01469241. Reported-by: Andres Freund Discussion: https://postgr.es/m/20201207185614.zzf63vggm5r4sozg@alap3.anarazel.de
-
Michael Paquier authored
The following routines are called within pgcrypto when handling digests, but there were no checks for failures:
- EVP_MD_CTX_size (can fail with -1 as of 3.0.0)
- EVP_MD_CTX_block_size (can fail with -1 as of 3.0.0)
- EVP_DigestInit_ex
- EVP_DigestUpdate
- EVP_DigestFinal_ex
A set of elog(ERROR) calls is added by this commit to detect such failures, which should never happen except in the event of a processing failure internal to OpenSSL. Note that it would be possible to use ERR_reason_error_string() to get more context about such errors, but these refer mainly to the internals of OpenSSL, so it is not really obvious how useful that would be. This is left out for simplicity. Per report from Coverity. Thanks to Tom Lane for the discussion. Backpatch-through: 9.5
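The added checks follow this general shape (a sketch; pgcrypto's actual wrappers and error messages differ):

    #include "postgres.h"
    #include <openssl/evp.h>

    /*
     * OpenSSL's EVP digest routines report success with 1; anything else is
     * an internal failure that should never happen, so just error out.
     */
    static void
    digest_update_checked(EVP_MD_CTX *ctx, const unsigned char *data, size_t len)
    {
        if (EVP_DigestUpdate(ctx, data, len) != 1)
            elog(ERROR, "could not update EVP digest context");
    }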
-
Andres Freund authored
clang only uses the 'i1' type for scalar booleans, not for pointers to booleans (as the pointer might be pointing into a larger memory allocation). Therefore a pointer-to-bool needs to use the "storage" boolean type. There's no known case of wrong code generation due to this, but it seems quite possible that it could cause problems (see e.g. 72559438). Author: Andres Freund Discussion: https://postgr.es/m/20201207212142.wz5tnbk2jsaqzogb@alap3.anarazel.de Backpatch: 11-, where jit support was added
-
Andres Freund authored
Until recently 'native' was implicitly included via 'orcjit', but a change included in LLVM 11 (not yet released) removed a number of such indirect component references. Reported-By: Fabien COELHO <coelho@cri.ensmp.fr> Reported-By: Andres Freund <andres@anarazel.de> Reported-By: Thomas Munro <thomas.munro@gmail.com> Author: Andres Freund <andres@anarazel.de> Discussion: https://postgr.es/m/20201201064949.mex6kvi2kygby3ni@alap3.anarazel.de Backpatch: 11-, where jit support was added
-
Michael Paquier authored
pg_database.datfrozenxid gets updated using an in-place update at the end of vacuum or autovacuum. Since 96cdeae0, as pg_database has a toast relation, it is possible for a pg_database tuple to have toast values if there is a large set of ACLs in place. In such a case, the in-place update would fail because of the flattening of the toast values done for the catcache entry fetched. Instead of using a copy from the catcache, this changes the logic to fetch the copy of the tuple by directly scanning pg_database. Per the lack of complaints on the matter, no backpatch is done. Note that before 96cdeae0, attempting to insert such a tuple to pg_database would cause a "row is too big" error, so the end-of-vacuum problem was not reachable. Author: Ashwin Agrawal, Junfeng Yang Discussion: https://postgr.es/m/DM5PR0501MB38800D9E4605BCA72DD35557CCE10@DM5PR0501MB3880.namprd05.prod.outlook.com
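The scan-and-copy pattern described above looks roughly like this (a sketch, not the committed code):

    #include "postgres.h"
    #include "access/genam.h"
    #include "access/htup_details.h"
    #include "access/skey.h"
    #include "access/stratnum.h"
    #include "catalog/indexing.h"
    #include "catalog/pg_database.h"
    #include "utils/fmgroids.h"
    #include "utils/rel.h"

    /*
     * Fetch a copy of the pg_database row for datid by scanning the catalog
     * directly.  Unlike a catcache copy, this preserves any toasted values
     * as-is, so a subsequent in-place update of datfrozenxid can succeed.
     */
    static HeapTuple
    fetch_pg_database_tuple(Relation rel, Oid datid)
    {
        ScanKeyData key[1];
        SysScanDesc scan;
        HeapTuple   tup;
        HeapTuple   copy = NULL;

        ScanKeyInit(&key[0],
                    Anum_pg_database_oid,
                    BTEqualStrategyNumber, F_OIDEQ,
                    ObjectIdGetDatum(datid));

        scan = systable_beginscan(rel, DatabaseOidIndexId, true, NULL, 1, key);
        tup = systable_getnext(scan);
        if (HeapTupleIsValid(tup))
            copy = heap_copytuple(tup);
        systable_endscan(scan);

        return copy;
    }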
-
- 07 Dec, 2020 4 commits
-
-
Tom Lane authored
Exercise some error cases that were never reached in the existing regression tests. This is partly for code-coverage reasons, and partly to memorialize the current behavior in advance of planned changes for generic subscripting. Also, I noticed that type_sanity's check to verify that all standard types have array types was never extended when we added arrays for all system catalog rowtypes (f7f70d5e), nor when we added arrays over domain types (c12d570f). So do that. Also, since the query's expected output isn't empty, it seems like a good idea to add an ORDER BY to make sure the result stays stable.
-
Heikki Linnakangas authored
pg_rewind looks at the control file to check what timeline a server is on. But promotion doesn't immediately write a checkpoint; it merely writes an end-of-recovery WAL record. If pg_rewind runs immediately after promotion, before the checkpoint has completed, it will think that the server is still on the earlier timeline. We ran into this issue a long time ago already, see commit 484a848a. It's a bit bogus that pg_rewind doesn't determine the timeline correctly until the end-of-recovery checkpoint has completed. We probably should fix that. But for now work around it by waiting for the checkpoint to complete before running pg_rewind, like we did in commit 484a848a. In passing, tidy up the new test a little bit. Reorder the INSERTs so that the comments make more sense, remove a spurious CHECKPOINT call after pg_rewind has already run, and add the --debug option, so that if this fails again, we'll have more data. Per buildfarm failure at https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=rorqual&dt=2020-12-06%2018%3A32%3A19&stg=pg_rewind-check. Backpatch to all supported versions. Discussion: https://www.postgresql.org/message-id/1713707e-e318-761c-d287-5b6a4aa807e8@iki.fi
-
Tom Lane authored
Along the same lines as ed2c7f65 and daa9fe8a, reduce code duplication by having just one copy of the parts of the query that are the same across all server versions; and make the conditionals control the smallest possible amount of code. This is in preparation for adding another dumpable field to pg_type.
-
Michael Paquier authored
An error code path newly-introduced by 87ae9691 forgot to close a file descriptor when verifying a file's checksum. Per report from Coverity, via Tom Lane.
-
- 05 Dec, 2020 1 commit
-
-
Tom Lane authored
Commit 4be058fe forgot that the append_rel_list would already be populated at the time we remove useless result RTEs, and it might contain PlaceHolderVars that need to be adjusted like the ones in the main parse tree. This could lead to "no relation entry for relid N" failures later on, when the planner tries to do something with an unadjusted PHV. Per report from Tom Ellis. Back-patch to v12 where the bug came in. Discussion: https://postgr.es/m/20201205173056.GF30712@cloudinit-builder
-
- 04 Dec, 2020 6 commits
-
-
Heikki Linnakangas authored
The buildfarm has been failing sporadically on the new test. I was able to reproduce this by adding a random 0-10 s delay in the walreceiver, just before it connects to the primary. There's a race condition where node_3 is promoted before it has fully caught up with node_1, leading to diverged timelines. When node_1 is later reconfigured as a standby following node_3, it fails to catch up:
LOG: primary server contains no more WAL on requested timeline 1
LOG: new timeline 2 forked off current database system timeline 1 before current recovery point 0/30000A0
That's the situation where you'd need to use pg_rewind, but in this case it happens already when we are just setting up the actual pg_rewind scenario we want to test, so change the test to wait until node_3 is connected and fully caught up before promoting it, so that we get a clean, controlled failover. Also rewrite some of the comments, for clarity. The existing comments detailed what each step in the test did, but didn't give a good overview of the situation the steps were trying to create. For reasons I don't understand, the test setup had to be written slightly differently in 9.6 and 9.5 than in later versions. The 9.5/9.6 version needed node 1 to be reinitialized from backup, whereas in later versions it could be shut down and reconfigured to be a standby. But even 9.5 should support "clean switchover", where the primary makes sure that pending WAL is replicated to the standby on shutdown. It would be nice to figure out what's going on there, but that's independent of pg_rewind and the scenario that this test tests. Discussion: https://www.postgresql.org/message-id/b0a3b95b-82d2-6089-6892-40570f8c5e60%40iki.fi
-
Peter Eisentraut authored
User-visible log messages should go through ereport(), so they are subject to translation. Many remaining elog(LOG) calls are really debugging calls. Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org> Reviewed-by: Michael Paquier <michael@paquier.xyz> Reviewed-by: Noah Misch <noah@leadboat.com> Discussion: https://www.postgresql.org/message-id/flat/92d6f545-5102-65d8-3c87-489f71ea0a37%40enterprisedb.com
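The conversion pattern is simple (illustrative message text):

    #include "postgres.h"

    static void
    report_connection(const char *host, const char *port)
    {
        /* Before: elog() messages are not picked up for translation. */
        elog(LOG, "connection received: host=%s port=%s", host, port);

        /* After: user-visible, so use ereport() with errmsg() for translation. */
        ereport(LOG,
                (errmsg("connection received: host=%s port=%s", host, port)));
    }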
-
Peter Eisentraut authored
Instead of publication_name_list, we can use name_list. We already refer to publications everywhere else by the 'name' or 'name_list' symbols, so this only improves consistency. Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://www.postgresql.org/message-id/flat/3e3ccddb-41bd-ecd8-29fe-195e34d9886f%40enterprisedb.com
-
Amit Kapila authored
We start recording changes in ReorderBufferTXN even before we reach the SNAPBUILD_CONSISTENT state, so that if the commit is encountered after reaching that state we are able to send the changes of the entire transaction. Now, while recording changes, if the reorder buffer memory has exceeded logical_decoding_work_mem then we can start streaming, provided it is allowed and we haven't yet streamed that data. However, we must not allow streaming to start unless the snapshot has reached the SNAPBUILD_CONSISTENT state. In passing, improve the comments atop ReorderBufferResetTXN to mention the case when we need to continue streaming after getting an error. Author: Amit Kapila Reviewed-by: Dilip Kumar Discussion: https://postgr.es/m/CAA4eK1KoOH0byboyYY40NBcC7Fe812trwTa+WY3jQF7WQWZbQg@mail.gmail.com
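Conceptually the added guard has this shape (a simplified sketch, not the actual reorderbuffer.c logic):

    #include "postgres.h"
    #include "replication/snapbuild.h"

    /*
     * Streaming of an in-progress transaction may only begin once the
     * snapshot builder has reached SNAPBUILD_CONSISTENT; before that point
     * we cannot be sure the decoded changes form a complete, sendable
     * picture of the transaction.
     */
    static bool
    can_start_streaming(SnapBuild *builder, bool streaming_enabled,
                        bool already_streamed)
    {
        if (!streaming_enabled || already_streamed)
            return false;

        return SnapBuildCurrentState(builder) >= SNAPBUILD_CONSISTENT;
    }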
-
Michael Paquier authored
87ae9691 has created two new files called cryptohash{_openssl}.c in src/common/, whose names overlap with the existing backend file called cryptohashes.c dedicated to the SQL wrappers for SHA2 and MD5. This file is renamed to cryptohashfuncs.c to be more consistent with the surroundings and reduce the confusion with the new cryptohash interface of src/common/. Author: Michael Paquier Reviewed-by: Daniel Gustafsson Discussion: https://postgr.es/m/X8hHhaQgbMbW+aGU@paquier.xyz
-
Michael Paquier authored
The use of low-level hash routines is not recommended by upstream OpenSSL since 2000, and pgcrypto already switched to EVP as of 5ff4a67f. This takes advantage of the refactoring done in 87ae9691 that has introduced the allocation and free routines for cryptographic hashes. Since 1.1.0, OpenSSL does not publish the contents of the cryptohash contexts, forcing any consumers to rely on OpenSSL for all allocations. Hence, the resource owner callback mechanism gains a new set of routines to track and free cryptohash contexts when using OpenSSL, preventing any risks of leaks in the backend. Nothing is needed in the frontend thanks to the refactoring of 87ae9691, and the resowner knowledge is isolated into cryptohash_openssl.c. Note that this also fixes a failure with SCRAM authentication when using FIPS in OpenSSL, but as there have been few complaints about this problem and as this causes an ABI breakage, no backpatch is done. Author: Michael Paquier Reviewed-by: Daniel Gustafsson, Heikki Linnakangas Discussion: https://postgr.es/m/20200924025314.GE7405@paquier.xyz Discussion: https://postgr.es/m/20180911030250.GA27115@paquier.xyz
-
- 03 Dec, 2020 4 commits
-
-
Bruce Momjian authored
Backpatch-through: 11
-
Bruce Momjian authored
In a few places, the long-version options were listed before the single-letter ones in the command summary of a few commands. This didn't match other commands, and didn't match the option ordering later in the same reference page. Backpatch-through: 9.5
-
Heikki Linnakangas authored
If the target is a standby server, its WAL doesn't end at the last checkpoint record, but at minRecoveryPoint. We must scan all the WAL from the last common checkpoint all the way up to minRecoveryPoint for modified pages, and also consider that portion when determining whether the server needs rewinding. Backpatch to all supported versions. Author: Ian Barwick and me Discussion: https://www.postgresql.org/message-id/CABvVfJU-LDWvoz4-Yow3Ay5LZYTuPD7eSjjE4kGyNZpXC6FrVQ%40mail.gmail.com
-
Peter Eisentraut authored
strVal() can be used in a couple of places instead of coding the same thing by hand.
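For reference, the simplification looks like this (a generic example, not the specific call sites touched):

    #include "postgres.h"
    #include "nodes/value.h"

    /* Before: open-coding the access to a string Value node. */
    static char *
    get_defname_old(Node *arg)
    {
        return ((Value *) arg)->val.str;
    }

    /* After: strVal() expresses the same thing. */
    static char *
    get_defname_new(Node *arg)
    {
        return strVal(arg);
    }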
-