- 03 Nov, 2020 5 commits
-
-
Magnus Hagander authored
Commit d94c36a4 introduced error handling to sslinfo to handle OpenSSL errors gracefully. This ports that error handling to the backend TLS implementation. Author: Daniel Gustafsson <daniel@yesql.se>
-
Magnus Hagander authored
sslinfo was passing the Port->ssl member directly to OpenSSL in order to extract information regarding the connection. This breaks the API provided by the backend TLS implementation, as well as duplicates code for no benefit. Rewrite to make use of the backend API as much as possible. Author: Daniel Gustafsson <daniel@yesql.se>
-
Peter Eisentraut authored
Switch @ to <@ and ~ to @> in gist-related tests. The old operator names have been deprecated and will eventually (possibly soon) be removed. Author: Justin Pryzby <pryzby@telsasoft.com> Discussion: https://www.postgresql.org/message-id/flat/20201027032511.GF9241@telsasoft.com
-
Amit Kapila authored
Author: Hou Zhijie Discussion: https://postgr.es/m/855a9421839d402b8b351d273c89a8f8@G08CNEXMBPEKD05.g08.fujitsu.local
-
Tom Lane authored
fill_hba_line() thought it could get away with passing sizeof(struct sockaddr_storage) rather than the actual addrlen previously returned by getaddrinfo(). While that appears to work on many platforms, it does not work on FreeBSD 11: you get back a failure, which leads to the view showing NULL for the address and netmask columns in all rows. The POSIX spec for getnameinfo() is pretty clearly on FreeBSD's side here: you should pass the actual address length. So it seems plausible that there are other platforms where this coding also fails, and we just hadn't noticed. Also, IMO the fact that getnameinfo() failure leads to a NULL output is pretty bogus in itself. Our pg_getnameinfo_all() wrapper is careful to emit "???" on failure, and we should use that in such cases. NULL should only be emitted in rows that don't have IP addresses. Per bug #16695 from Peter Vandivier. Back-patch to v10 where this code was added. Discussion: https://postgr.es/m/16695-a665558e2f630be7@postgresql.org
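A minimal sketch of the rule the fix enforces, assuming a standalone helper rather than the actual fill_hba_line() code: pass getnameinfo() the exact ai_addrlen reported by getaddrinfo(), and fall back to "???" on failure as pg_getnameinfo_all() does.

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

static void
print_numeric_addr(const char *host)
{
	struct addrinfo hints;
	struct addrinfo *res = NULL;
	char		buf[1024];

	memset(&hints, 0, sizeof(hints));
	hints.ai_socktype = SOCK_STREAM;

	if (getaddrinfo(host, NULL, &hints, &res) != 0)
		return;

	/* correct: the real address length, not sizeof(struct sockaddr_storage) */
	if (getnameinfo(res->ai_addr, res->ai_addrlen,
					buf, sizeof(buf), NULL, 0, NI_NUMERICHOST) != 0)
		snprintf(buf, sizeof(buf), "???");	/* mirror pg_getnameinfo_all() */

	printf("%s\n", buf);
	freeaddrinfo(res);
}
```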
-
- 02 Nov, 2020 13 commits
-
-
Tom Lane authored
make_ruledef() and make_viewdef() were coded to cope with possible null-ness of these columns, but they've been marked BKI_FORCE_NOT_NULL for some time. So there's not really any need to do more than what we do for the other columns of pg_rewrite, i.e. just Assert that we got non-null results. (There is a school of thought that says Asserts aren't the thing to do to check for corrupt data, but surely here is not the place to start if we want such a policy.) Also, remove long-dead-if-indeed-it-ever-wasn't-dead handling of an empty actions list in make_ruledef(). That's an error case and should be treated as such. (DO INSTEAD NOTHING is represented by a CMD_NOTHING Query, not an empty list; cf transformRuleStmt.) Kyotaro Horiguchi, some changes by me Discussion: https://postgr.es/m/CAEudQApoA=tMTic6xEPYP_hsNZ8XtToVThK_0x7D_aFQYowq3w@mail.gmail.com
-
Tom Lane authored
Traditionally, the names of fmgroids.h macros for pg_proc OIDs have been constructed from the prosrc field. But sometimes the same C function underlies multiple pg_proc entries, forcing us to make an arbitrary choice of which OID to reference; the other entries are then not namable via fmgroids.h. Moreover, we could not have macros at all for pg_proc entries that aren't for C-coded functions. Instead, use the proname field, and append the proargtypes field (replacing inter-argument spaces with underscores) if proname is not unique. Special-casing unique entries such as F_OIDEQ removes the need to change a lot of code. Indeed, I can only find two places in the tree that need to be adjusted; while this changes quite a few existing entries in fmgroids.h, few of them are referenced from C code. With this patch, all entries in pg_proc.dat have macros in fmgroids.h. Discussion: https://postgr.es/m/472274.1604258384@sss.pgh.pa.us
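A small sketch of how fmgroids.h macros are consumed from backend C code; oids_equal() is a hypothetical helper, not code from the tree. F_OIDEQ keeps its familiar name because "oideq" is a unique proname; per the commit message, overloaded pronames get their argument types appended to form the macro name.

```c
#include "postgres.h"
#include "fmgr.h"
#include "utils/fmgroids.h"

/* Hypothetical helper: call the built-in oideq() by its fmgroids.h macro. */
static bool
oids_equal(Oid a, Oid b)
{
	return DatumGetBool(OidFunctionCall2(F_OIDEQ,
										 ObjectIdGetDatum(a),
										 ObjectIdGetDatum(b)));
}
```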
-
Tom Lane authored
On detecting a corrupted match tag, pglz_decompress() should just summarily return -1. Breaking out of the loop, as I did in dfc79773, doesn't quite guarantee that will happen. Also, we can use unlikely() on that check, just in case it helps. Backpatch to v13, like the previous patch.
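A simplified sketch of the pattern described above, not the actual pglz.c code: while expanding one match tag, a zero offset (or a length that would overrun the output buffer) means the input is corrupt, and the decompressor should summarily return -1 rather than merely break out of its loop.

```c
#include <stddef.h>
#include <stdint.h>

#ifndef unlikely
#define unlikely(x) __builtin_expect((x) != 0, 0)
#endif

/* Expand one match tag; names and structure are illustrative only. */
static int32_t
expand_match(unsigned char *dp, unsigned char *dend, size_t off, size_t len)
{
	if (unlikely(off == 0 || len > (size_t) (dend - dp)))
		return -1;				/* corrupt compressed data: fail immediately */

	while (len-- > 0)
	{
		*dp = dp[-(ptrdiff_t) off];	/* matches copy from earlier output */
		dp++;
	}
	return 0;
}
```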
-
Peter Eisentraut authored
Some places were using PG_GETARG_UINT32 where PG_GETARG_TRANSACTIONID would be more appropriate. (Of course, they are the same internally, so there is no externally visible effect.) To do that, export PG_GETARG_TRANSACTIONID outside of xid.c. We also export PG_RETURN_TRANSACTIONID for symmetry, even though there are currently no external users. Author: Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com> Discussion: https://www.postgresql.org/message-id/flat/d8f6bdd536df403b9b33816e9f7e0b9d@G08CNEXMBPEKD05.g08.fujitsu.local
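A hypothetical SQL-callable function just to illustrate the macro's use; the header that exports PG_GETARG_TRANSACTIONID after this change is an assumption here, so include whichever header your branch provides it from. The macro reads the same bits as PG_GETARG_UINT32 but states the intended type.

```c
#include "postgres.h"
#include "access/transam.h"
#include "fmgr.h"

PG_FUNCTION_INFO_V1(xid_is_normal);

Datum
xid_is_normal(PG_FUNCTION_ARGS)
{
	/* same representation as PG_GETARG_UINT32, but documents the type */
	TransactionId xid = PG_GETARG_TRANSACTIONID(0);

	PG_RETURN_BOOL(TransactionIdIsNormal(xid));
}
```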
-
Magnus Hagander authored
Author: David Johnston
-
Thomas Munro authored
Record the current version of dependent collations in pg_depend when creating or rebuilding an index. When accessing the index later, warn that the index may be corrupted if the current version doesn't match. Thanks to Douglas Doole, Peter Eisentraut, Christoph Berg, Laurenz Albe, Michael Paquier, Robert Haas, Tom Lane and others for very helpful discussion. Author: Thomas Munro <thomas.munro@gmail.com> Author: Julien Rouhaud <rjuju123@gmail.com> Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com> (earlier versions) Discussion: https://postgr.es/m/CAEepm%3D0uEQCpfq_%2BLYFBdArCe4Ot98t1aR4eYiYTe%3DyavQygiQ%40mail.gmail.com
-
Thomas Munro authored
Provide a place for the version of referenced database objects to be recorded. A follow-up commit will use this to record dependencies on collation versions for indexes, but similar ideas for other kinds of objects have also been mooted. Author: Thomas Munro <thomas.munro@gmail.com> Reviewed-by: Julien Rouhaud <rjuju123@gmail.com> Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com> Discussion: https://postgr.es/m/CAEepm%3D0uEQCpfq_%2BLYFBdArCe4Ot98t1aR4eYiYTe%3DyavQygiQ%40mail.gmail.com
-
Thomas Munro authored
This model couldn't be extended to cover the default collation, and didn't have any information about the affected database objects when the version changed. Remove, in preparation for a follow-up commit that will add a new mechanism. Author: Thomas Munro <thomas.munro@gmail.com> Reviewed-by: Julien Rouhaud <rjuju123@gmail.com> Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com> Discussion: https://postgr.es/m/CAEepm%3D0uEQCpfq_%2BLYFBdArCe4Ot98t1aR4eYiYTe%3DyavQygiQ%40mail.gmail.com
-
Heikki Linnakangas authored
Most of the section and sub-section headers in the Queries chapter have the keywords literally stated, but neither "Sorting Rows" nor "Combining Rows" did. There's no rule that they must be, but it seems like a good practice. The keywords will ring a bell to anyone with even a little bit of SQL experience. David G. Johnston, per suggestion by bilge@scriptfusion.com Discussion: https://www.postgresql.org/message-id/159981394174.31338.7014519396749859167%40wrigleys.postgresql.org
-
David Rowley authored
This was broken recently by a929e17e. I'd failed to remember that parallel tests should have their EXPLAIN output run through the explain_parallel_append function so that the output is stable when parallel workers fail to start. fairywren was first to notice. Reported-by: Michael Paquier Discussion: https://postgr.es/m/20201102062951.GB15770@paquier.xyz
-
Michael Paquier authored
The documentation fixes are backpatched down to where they apply. Author: Justin Pryzby Discussion: https://postgr.es/m/20201031020801.GD3080@telsasoft.com Backpatch-through: 9.6
-
Amit Kapila authored
The logical replication protocol uses a single-byte character to identify each message type, and the code used string literals for these. Use an enum instead, so that: 1. All the literals used can be found in a single place, which makes it easy to add more types without the risk of conflicts. 2. It's easy to locate the code handling a given message type. 3. When used with switch statements, it is easy to identify missing cases using -Wswitch. Author: Ashutosh Bapat Reviewed-by: Kyotaro Horiguchi, Andres Freund, Peter Smith and Amit Kapila Discussion: https://postgr.es/m/CAExHW5uPzQ7L0oAd_ENyvaiYMOPgkrAoJpE+ZY5-obdcVT6NPg@mail.gmail.com
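An illustrative sketch of the approach, not the exact enum added by this commit (member names and the covered message set are abbreviated): each member's value is the protocol's single-byte identifier, and a switch without a default lets -Wswitch report any unhandled type.

```c
typedef enum LogicalRepMsgType
{
	LOGICAL_REP_MSG_BEGIN = 'B',
	LOGICAL_REP_MSG_COMMIT = 'C',
	LOGICAL_REP_MSG_INSERT = 'I',
	LOGICAL_REP_MSG_UPDATE = 'U',
	LOGICAL_REP_MSG_DELETE = 'D'
} LogicalRepMsgType;

static const char *
logicalrep_message_type(LogicalRepMsgType type)
{
	switch (type)
	{
		case LOGICAL_REP_MSG_BEGIN:
			return "BEGIN";
		case LOGICAL_REP_MSG_COMMIT:
			return "COMMIT";
		case LOGICAL_REP_MSG_INSERT:
			return "INSERT";
		case LOGICAL_REP_MSG_UPDATE:
			return "UPDATE";
		case LOGICAL_REP_MSG_DELETE:
			return "DELETE";
			/* no default: -Wswitch flags any enum member missing a case */
	}
	return "???";
}
```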
-
David Rowley authored
Previously, we only tagged on the required information to allow the executor to perform run-time partition pruning for Append/MergeAppend nodes belonging to base relations. It was thought that nested Append/MergeAppend nodes were just about always pulled up into the top-level Append/MergeAppend and that making the run-time pruning info for any sub Append/MergeAppend nodes was a waste of time. However, that was likely badly thought through. Some examples of cases where we're unable to pull up nested Append/MergeAppends are: 1) Parallel Append nodes with a mix of parallel and non-parallel paths into a Parallel Append. 2) When planning an ordered Append scan, a sub-partition which is unordered may require a nested MergeAppend path to ensure sub-partitions don't mix up the order of tuples being fed into the top-level Append. Unfortunately, it was not just as simple as removing the lines in createplan.c which were purposefully not building the run-time pruning info for anything but RELOPT_BASEREL relations. The code in add_paths_to_append_rel() was far too sloppy about which partitioned_rels it included for the Append/MergeAppend paths. The original code there would always assume accumulate_append_subpath() would pull each sub-Append and sub-MergeAppend path into the top-level path. While it does not appear that there were any actual bugs caused by having the additional partitioned table RT indexes recorded, what it did mean is that later in planning, when we built the run-time pruning info, we wasted effort building PartitionedRelPruneInfos for partitioned tables for which there were no subpaths for the executor to run-time prune. Here we tighten that up so that partitioned_rels only ever contains the RT index for partitioned tables which actually have subpaths in the given Append/MergeAppend. We can now Assert that every PartitionedRelPruneInfo has a non-empty present_parts. That should allow us to catch any weird corner cases that have been missed. In passing, it seems there is no longer a good reason to have the AppendPath's and MergeAppendPath's partitioned_rels fields be a List of IntLists. We can simply have a List of Relids instead. This is more compact in memory and faster to add new members to. We still know which is the root level partition as these always have a lower relid than their children. Previously this field was used for more things, but run-time partition pruning now remains the only user of it and it has no need for a List of IntLists. Here we also get rid of the RelOptInfo partitioned_child_rels field. This is what was previously used to (sometimes incorrectly) set the Append/MergeAppend path's partitioned_rels field. That was the only usage of that field, so we can happily just remove it. I also couldn't resist changing some nearby code to make use of the newly added for_each_from macro so we can skip the first element in the list without checking if the current item was the first one on each iteration. A bug report from Andreas Kretschmer prompted all this work; however, after some consideration, I'm not personally classing this as a bug fix. So no backpatch. In Andreas' test case, it just wasn't that clear that there was a nested Append since the top-level Append just had a single sub-path which was pulled up a level, per 8edd0e79. Author: David Rowley Reviewed-by: Amit Langote Discussion: https://postgr.es/m/flat/CAApHDvqSchs%2BubdybcfFaSPB%2B%2BEA7kqMaoqajtP0GtZvzOOR3g%40mail.gmail.com
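A small sketch of the for_each_from usage mentioned above; the function and its list contents are hypothetical. Starting the iteration at index 1 skips the first subpath without an "is this the first cell?" test on every loop iteration.

```c
#include "postgres.h"
#include "nodes/pathnodes.h"
#include "nodes/pg_list.h"

/* Hypothetical walker over every subpath after the first one. */
static void
walk_remaining_subpaths(List *subpaths)
{
	ListCell   *lc;

	for_each_from(lc, subpaths, 1)
	{
		Path	   *subpath = (Path *) lfirst(lc);

		/* ... process each subpath after the first ... */
		(void) subpath;
	}
}
```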
-
- 01 Nov, 2020 4 commits
-
-
Tom Lane authored
pglz_maximum_compressed_size() potentially underestimated the amount of compressed data required to produce N bytes of decompressed data; this is a fault in commit 11a078cf. Separately from that, pglz_decompress() failed to protect itself against corrupt compressed data, particularly off == 0 in a match tag. Commit c60e520f turned such a situation into an infinite loop, where before it'd just have resulted in garbage output. The combination of these two bugs seems like it may explain bug #16694 from Tom Vijlbrief, though it's impossible to be quite sure without direct inspection of the failing session. (One needs to assume that the pglz_maximum_compressed_size() bug caused us to fail to fetch the second byte of a match tag, and what happened to be there instead was a zero. The reported infinite loop is hard to explain without off == 0, though.) Aside from fixing the bugs, rewrite associated comments for more clarity. Back-patch to v13 where both these commits landed. Discussion: https://postgr.es/m/16694-f107871e499ec114@postgresql.org
-
Michael Paquier authored
Statistics associated with an index got lost after running REINDEX CONCURRENTLY, while the non-concurrent case preserves them correctly. The concurrent and non-concurrent operations need to be consistent for the end-user, and missing statistics would force the user to wait for a new ANALYZE to happen, which could take some time depending on the activity of the existing autovacuum workers. This issue is fixed by copying any existing entries in pg_statistic associated with the old index to the new one. Note that this copy is already done for the index's data in the stats collector. Reported-by: Fabrízio de Royes Mello Author: Michael Paquier, Fabrízio de Royes Mello Reviewed-by: Justin Pryzby Discussion: https://postgr.es/m/CAFcNs+qpFPmiHd1oTXvcPdvAHicJDA9qBUSujgAhUMJyUMb+SA@mail.gmail.com Backpatch-through: 12
-
Michael Paquier authored
PXE_DECRYPT_FAILED already exists for decryption errors, but there was no equivalent for encryption. One code path that deals with such failures for OpenSSL used PXE_ERR_GENERIC, which was inconsistent. Switch that code path to the new error PXE_ENCRYPT_FAILED instead of PXE_ERR_GENERIC, making the code used for encryption consistent with decryption. Author: Daniel Gustafsson Discussion: https://postgr.es/m/03049139-CB7A-436E-B71B-42696D3E2EF7@yesql.se
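A rough sketch of the pattern, not the actual pgcrypto code (the wrapper function and its arguments are illustrative): when the OpenSSL encryption call fails, report the dedicated error code instead of the catch-all one.

```c
#include <openssl/evp.h>

#include "px.h"					/* pgcrypto PXE_* error codes */

/* Hypothetical wrapper around one encryption step. */
static int
encrypt_block(EVP_CIPHER_CTX *ctx, unsigned char *out, int *outlen,
			  const unsigned char *in, int inlen)
{
	if (EVP_EncryptUpdate(ctx, out, outlen, in, inlen) != 1)
		return PXE_ENCRYPT_FAILED;	/* previously reported as PXE_ERR_GENERIC */
	return 0;
}
```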
-
- 31 Oct, 2020 2 commits
-
-
Noah Misch authored
This makes elog.c emit the string, which is good practice for a background worker that executes SQL strings. Reviewed by Tom Lane. Discussion: https://postgr.es/m/20201014022636.GA1962668@rfd.leadboat.com
-
Noah Misch authored
Certain background workers initiate parallel queries while debug_query_string==NULL, at which point they attempted strlen(NULL) and died to SIGSEGV. Older debug_query_string observers allow NULL, so do likewise in these newer ones. Back-patch to v11, where commit 7de4a1bc introduced the first of these. Discussion: https://postgr.es/m/20201014022636.GA1962668@rfd.leadboat.com
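A minimal sketch of the guard described above, not the actual parallel-query serialization code: treat a NULL debug_query_string as an empty string before measuring or copying it, instead of letting a background worker call strlen(NULL).

```c
#include <string.h>

/* Hypothetical helper that copies the active query string into a buffer. */
static size_t
copy_query_string(const char *debug_query_string, char *dest, size_t destlen)
{
	const char *src = debug_query_string ? debug_query_string : "";
	size_t		len = strlen(src) + 1;	/* no strlen(NULL) here */

	if (len > destlen)
		len = destlen;
	memcpy(dest, src, len);
	dest[len - 1] = '\0';
	return len;
}
```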
-
- 30 Oct, 2020 3 commits
-
-
Tom Lane authored
Commit 6b2c4e59 was overly confident about not being able to see a negative cmpval result from partition_range_bsearch(). Adjust the code to cope with that. Report and patch by Amul Sul; some additional cosmetic changes by me Discussion: https://postgr.es/m/CAAJ_b97WCO=EyVA7fKzc86kKfojHXLU04_zs7-7+yVzm=-1QkQ@mail.gmail.com
-
Heikki Linnakangas authored
Because of this, if you tried to create an operator family with the new sortsupport function, you got an error: ERROR: support function number 11 is invalid for access method gist We missed this in commit 16fa9b2b that added the sortsupport function, because it only added sortsupport to a built-in operator family. Author: Andrey Borodin Discussion: https://www.postgresql.org/message-id/3520A18A-5C38-4697-A2E3-F3BDE3496CD5%40yandex-team.ru
-
Tom Lane authored
Jimmy Angelakos Discussion: https://postgr.es/m/CABgVKCW_zPnvFXn24FTF0299_yU6+1p6JRUc0xpiZFWEXH1_jg@mail.gmail.com
-
- 29 Oct, 2020 6 commits
-
-
Tom Lane authored
The timetz test cases I added in commit a9632830 were unintentionally sensitive to whether or not DST is active in the PST8PDT time zone. Thus, they'll start failing this coming weekend, as reported by Bernhard M. Wiedemann in bug #16689. Fortunately, DST-awareness is not significant to the purpose of these test cases, so we can just force them all to PDT (DST hours) to preserve stability of the results. Back-patch to v10, as the prior patch was. Discussion: https://postgr.es/m/16689-57701daa23b377bf@postgresql.org
-
Tom Lane authored
On the same reasoning as in commit 36b93121, forbid using custom oid_symbol macros in pg_type as well as pg_proc, so that we always rely on the predictable macro names generated by genbki.pl. We do continue to grant grandfather status to the names CASHOID and LSNOID, although those are now considered deprecated aliases for the preferred names MONEYOID and PG_LSNOID. This is because there's likely to be client-side code using the old names, and this bout of neatnik-ism doesn't quite seem worth breaking client code. There might be a case for grandfathering EVTTRIGGEROID, too, since externally-maintained PLs may reference that symbol. But renaming such references to EVENT_TRIGGEROID doesn't seem like a particularly heavy lift --- we make far more significant backend API changes in every major release. For now I didn't add that, but we could reconsider if there's pushback. The other names changed here seem pretty unlikely to have any outside uses. Again, we could add alias macros if there are complaints, but for now I didn't. As before, no need for a catversion bump. John Naylor Discussion: https://postgr.es/m/CAFBsxsHpCbjfoddNGpnnnY5pHwckWfiYkMYSF74PmP1su0+ZOw@mail.gmail.com
-
Andres Freund authored
When ComputeXidHorizons() was called before MyDatabaseId is set, e.g. because a dead row in a shared relation is encountered during InitPostgres(), the horizon for normal tables was computed too aggressively, ignoring all backends connected to a database. During subsequent pruning in a data table, the too-aggressive horizon could still end up being used, possibly leading to still-needed tuples being removed. Not good. This is a bug in dc7420c2, which the test added in 94bc27b5 made visible, if run with force_parallel_mode set to regress. In that case the bug is reliably triggered, because "pruning_query" is run in a parallel worker and the start of that parallel worker is likely to encounter a dead row in pg_database. The fix is trivial: Compute a more pessimistic data table horizon if MyDatabaseId is not yet known. Author: Andres Freund Discussion: https://postgr.es/m/20201029040030.p4osrmaywhqaesd4@alap3.anarazel.de
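A simplified sketch of the idea with hypothetical names rather than the real procarray.c fields: while scanning backends, count a backend's xmin against the data-table horizon either when it is in our database or when MyDatabaseId is not yet known, so the horizon stays pessimistic during early connection startup.

```c
#include "postgres.h"
#include "access/transam.h"
#include "miscadmin.h"			/* MyDatabaseId */

/* Hypothetical per-backend step of a horizon computation. */
static void
fold_in_backend_xmin(TransactionId *data_horizon,
					 Oid backend_dbid, TransactionId backend_xmin)
{
	if (!TransactionIdIsNormal(backend_xmin))
		return;

	/*
	 * If we don't yet know which database we're connected to, we cannot
	 * safely ignore backends attached to other databases.
	 */
	if (!OidIsValid(MyDatabaseId) || backend_dbid == MyDatabaseId)
	{
		if (TransactionIdPrecedes(backend_xmin, *data_horizon))
			*data_horizon = backend_xmin;
	}
}
```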
-
Amit Kapila authored
This adds the statistics about transactions streamed to the decoding output plugin from ReorderBuffer. Users can query the pg_stat_replication_slots view to check these stats and call pg_stat_reset_replication_slot to reset the stats of a particular slot. Users can pass NULL in pg_stat_reset_replication_slot to reset stats of all the slots. Commit 98681675 has added the basic infrastructure to capture the stats of slot and this commit extends the statistics collector to track additional information about slots. Bump the catversion as we have added new columns in the catalog entry. Author: Ajin Cherian and Amit Kapila Reviewed-by: Sawada Masahiko and Dilip Kumar Discussion: https://postgr.es/m/CAA4eK1+chpEomLzgSoky-D31qev19AmECNiEAietPQUGEFhtVA@mail.gmail.com
-
Andres Freund authored
This fixes a bug in the edge case where, for a temp table, heap_page_prune() can end up with a different horizon than heap_vacuum_rel(), which can trigger errors like "ERROR: cannot freeze committed xmax ...". The bug was introduced due to interaction of a7212be8 "Set cutoff xmin more aggressively when vacuuming a temporary table." with dc7420c2 "snapshot scalability: Don't compute global horizons while building snapshots.". The problem is caused by lazy_scan_heap() assuming that the only reason its HeapTupleSatisfiesVacuum() call would return HEAPTUPLE_DEAD is if the tuple is a HOT tuple, or if the tuple's inserting transaction has aborted since the heap_page_prune() call. But after a7212be8 that was also possible in other cases for temp tables, because heap_page_prune() uses a different visibility test after dc7420c2. The fix is fairly simple: Move the special case logic for temp tables from vacuum_set_xid_limits() to the infrastructure introduced in dc7420c2. That ensures that the horizon used for pruning is at least as aggressive as the one used by lazy_scan_heap(). The concrete horizon used for temp tables is slightly different from the logic in dc7420c2, but should always be as aggressive as before (see comments). A significant benefit of centralizing the logic in procarray.c is that the more aggressive horizons for temp tables now apply not just to VACUUM but also to e.g. HOT pruning and the nbtree killtuples logic. Because isTopLevel is not needed by vacuum_set_xid_limits() anymore, I undid the related changes from a7212be8. This commit also adds an isolation test ensuring that the more aggressive vacuuming and pruning of temp tables keeps working. Debugged-By: Amit Kapila <amit.kapila16@gmail.com> Debugged-By: Tom Lane <tgl@sss.pgh.pa.us> Debugged-By: Ashutosh Sharma <ashu.coek88@gmail.com> Author: Andres Freund <andres@anarazel.de> Discussion: https://postgr.es/m/20201014203103.72oke6hqywcyhx7s@alap3.anarazel.de Discussion: https://postgr.es/m/20201015083735.derdzysdtqdvxshp@alap3.anarazel.de
-
Michael Paquier authored
This would cause the function to crash when more than one page is considered as broken and reported in the SRF. Reported-by: Noriyoshi Shinoda Discussion: https://postgr.es/m/TU4PR8401MB11523D42C315AAF822E74275EE170@TU4PR8401MB1152.NAMPRD84.PROD.OUTLOOK.COM
-
- 28 Oct, 2020 7 commits
-
-
Tom Lane authored
I started with the intention of just suppressing a PDF build warning by removing the example output, but ended up doing more: correcting factual errors in the function's signature, moving a bunch of generalized handwaving into the "Using amcheck Effectively" section which seemed a better place for it, and improving wording and markup a little bit. Discussion: https://postgr.es/m/732904.1603728748@sss.pgh.pa.us
-
Tom Lane authored
In almost all other places, we use plain "r" or "w" mode in popen() calls (the exceptions being for COPY data). This one has been overlooked (possibly because it's buried in a ".l" flex file?), but it's using PG_BINARY_R. Kensuke Okamura complained in bug #16688 that we fail to strip \r when stripping the trailing newline from a backtick result string. That's true enough, but we'd also fail to convert embedded \r\n cleanly, which also seems undesirable. Fixing the popen() mode seems like the best way to deal with this. It's been like this for a long time, so back-patch to all supported branches. Discussion: https://postgr.es/m/16688-c649c7b69cd7e6f8@postgresql.org
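A simplified sketch of the behavior after the fix, not psql's actual flex-based code: read a backtick command's output through popen() in text mode ("r"), so that on Windows \r\n is translated before the trailing newline is stripped.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper that captures a backtick command's output. */
static void
read_backtick_output(const char *cmd, char *buf, size_t buflen)
{
	FILE	   *fh = popen(cmd, "r");	/* text mode, not PG_BINARY_R */
	size_t		len;

	if (fh == NULL)
	{
		buf[0] = '\0';
		return;
	}

	len = fread(buf, 1, buflen - 1, fh);
	buf[len] = '\0';
	pclose(fh);

	/* strip the trailing newline from the command's result */
	len = strlen(buf);
	if (len > 0 && buf[len - 1] == '\n')
		buf[len - 1] = '\0';
}
```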
-
Tom Lane authored
It's unsafe to do this at parse time because addition of generated columns to a table would not invalidate stored rules containing UPDATEs on the table ... but there might now be dependent generated columns that were not there when the rule was made. This also fixes an oversight that rewriteTargetView failed to update extraUpdatedCols when transforming an UPDATE on an updatable view. (Since the new calculation is downstream of that, rewriteTargetView doesn't actually need to do anything; but before, there was a demonstrable bug there.) In v13 and HEAD, this leads to easily-visible bugs because (since commit c6679e4f) we won't recalculate generated columns that aren't listed in extraUpdatedCols. In v12 this bitmap is mostly just used for trigger-firing decisions, so you'd only notice a problem if a trigger cared whether a generated column had been updated. I'd complained about this back in May, but then forgot about it until bug #16671 from Michael Paul Killian revived the issue. Back-patch to v12 where this field was introduced. If existing stored rules contain any extraUpdatedCols values, they'll be ignored because the rewriter will overwrite them, so the bug will be fixed even for existing rules. (But note that if someone were to update to 13.1 or 12.5, store some rules with UPDATEs on tables having generated columns, and then downgrade to a prior minor version, they might observe issues similar to what this patch fixes. That seems unlikely enough to not be worth going to a lot of effort to fix.) Discussion: https://postgr.es/m/10206.1588964727@sss.pgh.pa.us Discussion: https://postgr.es/m/16671-2fa55851859fb166@postgresql.org
-
Tom Lane authored
We have a perfectly good convention for OID macros for built-in functions already, so making custom symbols is just introducing unnecessary deviation from the convention. Remove the one case that had snuck in, and add an error check in genbki.pl to discourage future instances. Although this touches pg_proc.dat, there's no need for a catversion bump since the actual catalog data isn't changed. John Naylor Discussion: https://postgr.es/m/CAFBsxsHpCbjfoddNGpnnnY5pHwckWfiYkMYSF74PmP1su0+ZOw@mail.gmail.com
-
Tom Lane authored
get_foreign_key_join_selectivity() looks for join clauses that equate the two sides of the FK constraint. However, if we have a query like "WHERE fktab.a = pktab.a and fktab.a = 1", it won't find any such join clause, because equivclass.c replaces the given clauses with "fktab.a = 1 and pktab.a = 1", which can be enforced at the scan level, leaving nothing to be done for column "a" at the join level. We can fix that expectation without much trouble, but then a new problem arises: applying the foreign-key-based selectivity rule produces a rowcount underestimate, because we're effectively double-counting the selectivity of the "fktab.a = 1" clause. So we have to cancel that selectivity out of the estimate. To fix, refactor process_implied_equality() so that it can pass back the new RestrictInfo to its callers in equivclass.c, allowing the generated "fktab.a = 1" clause to be saved in the EquivalenceClass's ec_derives list. Then it's not much trouble to dig out the relevant RestrictInfo when we need to adjust an FK selectivity estimate. (While at it, we can also remove the expensive use of initialize_mergeclause_eclasses() to set up the new RestrictInfo's left_ec and right_ec pointers. The equivclass.c code can set those basically for free.) This seems like clearly a bug fix, but I'm hesitant to back-patch it, first because there's some API/ABI risk for extensions and second because we're usually loath to destabilize plan choices in stable branches. Per report from Sigrid Ehrenreich. Discussion: https://postgr.es/m/1019549.1603770457@sss.pgh.pa.us Discussion: https://postgr.es/m/AM6PR02MB5287A0ADD936C1FA80973E72AB190@AM6PR02MB5287.eurprd02.prod.outlook.com
-
Michael Paquier authored
UInt32GetDatum() was getting used, while the result needs Int64GetDatum(). Oversight in f2b88396. Per buildfarm member florican. Discussion: https://postgr.es/m/1226629.1603859189@sss.pgh.pa.us
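A tiny sketch of the class of mistake fixed here; the helper function is hypothetical. A column declared as int64 in the result tuple descriptor must be converted with Int64GetDatum(); UInt32GetDatum() produces a Datum of the wrong width on some platforms, as the buildfarm member showed.

```c
#include "postgres.h"

/* Hypothetical helper filling one int64 output column of an SRF row. */
static void
store_block_count(Datum *values, int col, int64 nblocks)
{
	values[col] = Int64GetDatum(nblocks);
	/* not: values[col] = UInt32GetDatum(nblocks); -- wrong Datum width */
}
```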