- 02 Dec, 2020 6 commits
-
-
Stephen Frost authored
GSS information, such as whether the connection was authorized using GSS, whether it was encrypted using GSS and, perhaps most importantly, which GSS principal was used for authentication, is extremely useful but wasn't being included in the connection authorized log message. Therefore, add that information to the connection authorized log message, in a similar manner to how we log SSL information when SSL is used for a connection. Author: Vignesh C Reviewed-by: Bharath Rupireddy Discussion: https://www.postgresql.org/message-id/CALDaNm2N1385_Ltoo%3DS7VGT-ESu_bRQa-sC1wg6ikrM2L2Z49w%40mail.gmail.com
-
Fujii Masao authored
Commit 6b466bf5 allowed pg_stat_statements to track the number of WAL records, full page images and bytes that each statement generated. Similarly, this commit allows us to track the cluster-wide WAL statistics counters. New columns wal_records, wal_fpi and wal_bytes are added to the pg_stat_wal view, reporting the total number of WAL records, full page images and bytes generated in the cluster, respectively. Author: Masahiro Ikeda Reviewed-by: Amit Kapila, Movead Li, Kyotaro Horiguchi, Fujii Masao Discussion: https://postgr.es/m/35ef960128b90bfae3b3fdf60a3a860f@oss.nttdata.com
-
Michael Paquier authored
These warnings showed up with -O2. Oversight in 87ae9691. Author: Fujii Masao Discussion: https://postgr.es/m/cee3df00-566a-400c-1252-67c3701f918a@oss.nttdata.com
-
Fujii Masao authored
This commit changes restore_command from PGC_POSTMASTER to PGC_SIGHUP. As a side effect of this commit, restore_command can be reset to empty during archive recovery. In this setting, archive recovery tries to replay only the WAL files available in the pg_wal directory. This is the same behavior as when a command that always fails is specified in restore_command. Note that restore_command still must be specified (not empty) when starting archive recovery, even after applying this commit. This is necessary as a safeguard to prevent users from forgetting to specify restore_command and starting archive recovery. Thanks to Peter Eisentraut, Michael Paquier, Andres Freund, Robert Haas and Anastasia Lubennikova for discussion. Author: Sergei Kornilov Reviewed-by: Kyotaro Horiguchi, Fujii Masao Discussion: https://postgr.es/m/2317771549527294@sas2-985f744271ca.qloud-c.yandex.net
-
Michael Paquier authored
Two new routines to allocate a hash context and to free it are created, as these become necessary for the goal behind this refactoring: switch all the cryptohash implementations for OpenSSL to use EVP (for FIPS, and also because upstream has recommended against using the low-level cryptohash functions for 20 years). Note that OpenSSL hides the internals of cryptohash contexts since 1.1.0, so it is necessary to leave the allocation to OpenSSL itself, explaining the need for those two new routines. This part is going to require more work to properly track hash contexts with resource owners, but that is not introduced here. Still, this refactoring makes the move possible. This reduces the number of routines for all SHA2 implementations from twelve (SHA{224,256,384,512} with init, update and final calls) to five (create, free, init, update and final calls) by incorporating the hash type directly into the hash context data. The new cryptohash routines are moved to a new file, called cryptohash.c for the fallback implementations, with the SHA2 specifics becoming internal to src/common/. OpenSSL specifics are part of cryptohash_openssl.c. This infrastructure is usable for more hash types, like MD5 or HMAC. Any code paths using the internal SHA2 routines are adapted to report errors correctly, which accounts for most of the changes in this commit. The areas most impacted are checksum manifests, libpq and SCRAM. Note that e21cbb4b was a first attempt to switch SHA2 to EVP, but it lacked the refactoring needed for libpq, as done here. This patch has been tested on Linux and Windows, with and without OpenSSL, and down to 1.0.1, the oldest version supported on HEAD. Author: Michael Paquier Reviewed-by: Daniel Gustafsson Discussion: https://postgr.es/m/20200924025314.GE7405@paquier.xyz
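As a minimal sketch of how the consolidated five-call API fits together (the helper function here is hypothetical; the pg_cryptohash_* names and the zero-on-success convention are from this commit):

    #include "postgres.h"
    #include "common/cryptohash.h"
    #include "common/sha2.h"

    /* Hypothetical helper: SHA-256 of a buffer via the new API. */
    static bool
    sha256_buffer(const uint8 *data, size_t len,
                  uint8 digest[PG_SHA256_DIGEST_LENGTH])
    {
        pg_cryptohash_ctx *ctx = pg_cryptohash_create(PG_SHA256);

        /* With OpenSSL >= 1.1.0 the context is allocated by OpenSSL. */
        if (ctx == NULL)
            return false;
        if (pg_cryptohash_init(ctx) < 0 ||
            pg_cryptohash_update(ctx, data, len) < 0 ||
            pg_cryptohash_final(ctx, digest) < 0)
        {
            pg_cryptohash_free(ctx);
            return false;
        }
        pg_cryptohash_free(ctx);
        return true;
    }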
-
Bruce Momjian authored
While the previous behavior didn't generate a warning, we might as well use an accurate *printf specification. Backpatch-through: 12
-
- 01 Dec, 2020 7 commits
-
-
Tom Lane authored
As it stood, expandTableLikeClause() re-did the same relation_openrv call that transformTableLikeClause() had done. However there are scenarios where this would not find the same table as expected. We hold lock on the LIKE source table, so it can't be renamed or dropped, but another table could appear before it in the search path. This explains the odd behavior reported in bug #16758 when cloning a table as a temp table of the same name. This case worked as expected before commit 50289819 introduced the need to open the source table twice, so we should fix it. To make really sure we get the same table, let's re-open it by OID not name. That requires adding an OID field to struct TableLikeClause, which is a little nervous-making from an ABI standpoint, but as long as it's at the end I don't think there's any serious risk. Per bug #16758 from Marc Boeren. Like the previous patch, back-patch to all supported branches. Discussion: https://postgr.es/m/16758-840e84a6cfab276d@postgresql.org
-
Alvaro Herrera authored
When memcpy() is called on a pointer, the compiler is entitled to assume that the pointer is not null, which can lead to optimizing nearby code in potentially undesirable ways. We still want such optimizations (gcc's -fdelete-null-pointer-checks) in cases where they're valid. Related: commit 13bba022. Backpatch to pg11, where this particular instance appeared. Reported-by: Ranier Vilela <ranier.vf@gmail.com> Reported-by: Zhihong Yu <zyu@yugabyte.com> Discussion: https://postgr.es/m/CAEudQApUndmQkr5fLrCKXQ7+ib44i7S+Kk93pyVThS85PnG3bQ@mail.gmail.com Discussion: https://postgr.es/m/CALNJ-vSdhwSM5f4tnNn1cdLHvXMVe_S+V3nR5GwNrmCPNB2VtQ@mail.gmail.com
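For illustration only (not the code touched by this commit), the hazard and the guarded pattern look like this in C:

    #include <string.h>

    void
    copy_blob(char *dst, const char *src, size_t len)
    {
        memcpy(dst, src, len);  /* UB if src is NULL, even when len == 0 */
        if (src == NULL)        /* ...so the compiler may delete this check */
            return;
        /* ... */
    }

    /* Guarded version: keeps the null case well-defined. */
    void
    copy_blob_safe(char *dst, const char *src, size_t len)
    {
        if (len > 0)
            memcpy(dst, src, len);
    }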
-
Heikki Linnakangas authored
Make sure that the first mention of each RFC is ulinked to its ietf.org entry, and that subsequent mentions are marked up as acronyms. This makes references to RFCs consistent across the documentation. Author: Daniel Gustafsson Discussion: https://www.postgresql.org/message-id/2C697878-4D01-4F06-8312-2FEDE931E973%40yesql.se
-
Fujii Masao authored
In the docs, the index entries for progress reporting views link to the "Viewing Statistics" section, but previously they did not link to the dedicated section (e.g., "ANALYZE Progress Reporting") for each view. This made it inconvenient to find, starting from the index, the section describing the details of each view. This commit adds additional index entries linking to those dedicated sections. Author: Fujii Masao Reviewed-by: Shinya Kato Discussion: https://postgr.es/m/e49c2768-65d2-188a-5424-270fa29ccc84@oss.nttdata.com
-
Michael Paquier authored
This is a follow-up of the work done in fa42c2ec, that did not include all the fixes previously agreed on. The contents removed here can be confusing to the reader as they refer to rather old server versions. Author: Stephen Frost, Tom Lane, Heikki Linnakangas, Yaroslav Schekin Discussion: https://postgr.es/m/CAB8KJ=jYHgnxLLZSNJz7gBTck4TxomngCmGkw3nEMSNF0yL6wA@mail.gmail.com Discussion: https://postgr.es/m/1599765595731-0.post@n3.nabble.com
-
Thomas Munro authored
When truncating files by name, use truncate(2). Windows hasn't got it, so keep our previous coding based on ftruncate(2) as a fallback. Discussion: https://postgr.es/m/16663-fe97ccf9932fc800%40postgresql.org
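A hedged sketch of the pattern (the actual wrapper's name in the tree may differ):

    #include <fcntl.h>
    #include <unistd.h>

    /* Truncate a file by name, emulating truncate(2) where it is missing. */
    static int
    truncate_file(const char *path, off_t length)
    {
    #ifndef WIN32
        return truncate(path, length);
    #else
        int fd = open(path, O_RDWR, 0);
        int ret;

        if (fd < 0)
            return -1;
        ret = ftruncate(fd, length);
        close(fd);
        return ret;
    #endif
    }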
-
Thomas Munro authored
When committing a transaction that dropped a relation, we previously truncated only the first segment file to free up disk space (the one that won't be unlinked until the next checkpoint). Truncate higher numbered segments too, even though we unlink them on commit. This frees the disk space immediately, even if other backends have open file descriptors and might take a long time to get around to handling shared invalidation events and closing them. Also extend the same behavior to the first segment, in recovery. Back-patch to all supported releases. Bug: #16663 Reported-by: Denis Patron <denis.patron@previnet.it> Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com> Reviewed-by: Neil Chen <carpenter.nail.cz@gmail.com> Reviewed-by: David Zhang <david.zhang@highgo.ca> Discussion: https://postgr.es/m/16663-fe97ccf9932fc800%40postgresql.org
-
- 30 Nov, 2020 9 commits
-
-
Tom Lane authored
For debugging purposes, Path nodes are supposed to have outfuncs support, but this was overlooked in the original incremental sort patch. While at it, clean up a couple of other minor oversights, as well as the bizarre choice of return type for create_incremental_sort_path(). (All the existing callers just cast it to "Path *" immediately, so they don't care, but some future caller might.) outfuncs.c fix by Zhijie Hou, the rest by me Discussion: https://postgr.es/m/324c4d81d8134117972a5b1f6cdf9560@G08CNEXMBPEKD05.g08.fujitsu.local
-
Alvaro Herrera authored
Because regular CREATE INDEX commands are independent, and there's no logical data dependency, it's not immediately obvious that transactions held by concurrent index builds on one table will block the second phase of concurrent index creation on an unrelated table, so document this caveat. Backpatch this all the way back. In branch master, mention that only some indexes are involved. Author: James Coleman <jtc331@gmail.com> Reviewed-by: David Johnston <david.g.johnston@gmail.com> Discussion: https://postgr.es/m/CAAaqYe994=PUrn8CJZ4UEo_S-FfRr_3ogERyhtdgHAb2WG_Ufg@mail.gmail.com
-
Tom Lane authored
Checking for DocBook being installed was valuable when we were on the OpenSP docs toolchain, because that was rather hard to get installed fully. Nowadays, as long as you have xmllint and xsltproc installed, you're good, because those programs will fetch the DocBook files off the net at need. Moreover, testing this at configure time means that a network access may well occur whether or not you have any interest in building the docs later. That can be slow (typically 2 or 3 seconds, though much higher delays have been reported), and it seems not very nice to be doing an off-machine access without warning, too. Hence, drop the PGAC_CHECK_DOCBOOK probe, and adjust related documentation. Without that macro, there's not much left of config/docbook.m4 at all, so I just removed it. Back-patch to v11, where we started to use xmllint in the PGAC_CHECK_DOCBOOK probe. Discussion: https://postgr.es/m/E2EE6B76-2D96-408A-B961-CAE47D1A86F0@yesql.se Discussion: https://postgr.es/m/A55A7FC9-FA60-47FE-98B5-139CDC57CE6E@gmail.com
-
Tom Lane authored
This can't work if there's no postmaster, and indeed the code got an assertion failure trying. There should be a check on IsUnderPostmaster gating the use of parallelism, as the planner has for ordinary parallel queries. Commit 40d964ec got this right, so follow its model of checking IsUnderPostmaster at the same place where we check for max_parallel_maintenance_workers == 0. In general, new code implementing parallel utility operations should do the same. Report and patch by Yulin Pei, cosmetically adjusted by me. Back-patch to v11 where this code came in. Discussion: https://postgr.es/m/HK0PR01MB22747D839F77142D7E76A45DF4F50@HK0PR01MB2274.apcprd01.prod.exchangelabs.com
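Condensed to its essence, the gate looks like this (a sketch; the surrounding planner function is not shown):

    /* No parallel index build without a postmaster (standalone backend),
     * nor when parallel maintenance workers are disabled. */
    if (!IsUnderPostmaster || max_parallel_maintenance_workers == 0)
        return 0;           /* i.e., plan a serial build */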
-
Tom Lane authored
If a PlaceHolderVar is to be evaluated at a join relation, but its value is only needed there and not at higher levels, we neglected to update the joinrel's direct_lateral_relids to include the PHV's source rel. This causes problems because join_is_legal() then won't allow joining the joinrel to the PHV's source rel at all, leading to "failed to build any N-way joins" planner failures. Per report from Andreas Seltenreich. Back-patch to 9.5 where the problem originated. Discussion: https://postgr.es/m/87blfgqa4t.fsf@aurora.ydns.eu
-
Michael Paquier authored
Those three commands have been using the same grammar rules to handle a list of parenthesized options. This refactors the code so that they share the same parsing rules, shaving some code. A future commit will make use of those option parsing rules for more utility commands, like REINDEX and CLUSTER. Author: Alexey Kondratov, Justin Pryzby Discussion: https://postgr.es/m/8a8f5f73-00d3-55f8-7583-1375ca8f6a91@postgrespro.ru
-
Heikki Linnakangas authored
Author: Amit Langote Discussion: https://www.postgresql.org/message-id/CA%2BHiwqGaRoF3XrhPW-Y7P%2BG7bKo84Z_h%3DkQHvMh-80%3Dav3wmOw%40mail.gmail.com
-
Fujii Masao authored
Author: Haiying Tang <tanghy.fnst@cn.fujitsu.com> Discussion: https://postgr.es/m/48a0928ac94b497d9c40acf1de394c15@G08CNEXMBPEKD05.g08.fujitsu.local
-
Fujii Masao authored
Previously the shutdown of a background worker that uses die() as its SIGTERM signal handler produced the log message "terminating connection due to administrator command". This log message was confusing because a background worker is not a connection. This commit improves that log message to "terminating background worker XXX due to administrator command" (XXX is replaced with the name of the background worker). This is the same log message that bgworker_die(), another SIGTERM signal handler for background workers, reports. Author: Bharath Rupireddy Reviewed-by: Fujii Masao Discussion: https://postgr.es/m/3f292fbb-f155-9a01-7cb2-7ccc9007ab3f@oss.nttdata.com
-
- 29 Nov, 2020 3 commits
-
-
Tom Lane authored
Through my misreading of what the existing code actually did, commits 85c54287 et al. broke psql's behavior for the case where "\c connstring" provides a password in the connstring. We should use that password in such a case, but as of 85c54287 we ignored it (and instead, prompted for a password). Commit 94929f1c fixed that in HEAD, but since I thought it was cleaning up a longstanding misbehavior and not one I'd just created, I didn't back-patch it. Hence, back-patch the portions of 94929f1c having to do with password management. In addition to fixing the introduced bug, this means that "\c -reuse-previous=on connstring" will allow re-use of an existing connection's password if the connstring doesn't change user/host/port. That didn't happen before, but it seems like a bug fix, and anyway I'm loath to have significant differences in this code across versions. Also fix an error with the same root cause about whether or not to override a connstring's setting of client_encoding. As of 85c54287 we always did so; restore the previous behavior of overriding only when stdin/stdout are a terminal and there's no environment setting of PGCLIENTENCODING. (I find that definition a bit surprising, but right now doesn't seem like the time to revisit it.) Per bug #16746 from Krzysztof Gradek. As with the previous patch, back-patch to all supported branches. Discussion: https://postgr.es/m/16746-44b30e2edf4335d4@postgresql.org
-
Tom Lane authored
The documentation omitted the critical tidbit that a keyword-array entry is simply ignored if its corresponding value-array entry is NULL or an empty string; it will *not* override any previously-obtained value for the parameter. (See conninfo_array_parse().) I'd supposed that would force the setting back to default, which is what led me into bug #16746; but it doesn't. While here, I couldn't resist the temptation to do some copy-editing, both in the description of PQconnectdbParams() and in the section about connection URI syntax. Discussion: https://postgr.es/m/931505.1606618746@sss.pgh.pa.us
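A self-contained libpq illustration of that rule (the connection parameters here are hypothetical):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* The empty "password" entry is simply ignored; it does NOT
         * clear a password obtained from, say, the environment or a
         * service file. */
        const char *const keywords[] = {"host", "dbname", "password", NULL};
        const char *const values[]   = {"localhost", "postgres", "", NULL};

        PGconn *conn = PQconnectdbParams(keywords, values, 0);

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 0;
    }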
-
Noah Misch authored
Buildfarm member topminnow failed when the test script attempted this before the syslogger would have created the file. Back-patch to v12, which introduced the test.
-
- 28 Nov, 2020 2 commits
-
-
Tom Lane authored
Commit 566372b3 fixed some race conditions involving concurrent SimpleLruTruncate calls, but it introduced new ones in async.c. A newly-listening backend could attempt to read Notify SLRU pages that were in process of being truncated, possibly causing an error. Also, the QUEUE_TAIL pointer could become set to a value that's not equal to the queue position of any backend. While that's fairly harmless in v13 and up (thanks to commit 51004c71), in older branches it resulted in near-permanent disabling of the queue truncation logic, so that continued use of NOTIFY led to queue-fill warnings and eventual inability to send any more notifies. (A server restart is enough to make that go away, but it's still pretty unpleasant.) The core of the problem is confusion about whether QUEUE_TAIL represents the "logical" tail of the queue (i.e., the oldest still-interesting data) or the "physical" tail (the oldest data we've not yet truncated away). To fix, split that into two variables. QUEUE_TAIL regains its definition as the logical tail, and we introduce a new variable to track the oldest un-truncated page. Per report from Mikael Gustavsson. Like the previous patch, back-patch to all supported branches. Discussion: https://postgr.es/m/1b8561412e8a4f038d7a491c8b922788@smhi.se
-
- 27 Nov, 2020 4 commits
-
-
Fujii Masao authored
Previously the pg_stat_progress_cluster view reported the current block number in the heap scan as the number of heap blocks scanned (i.e., heap_blks_scanned). This reported number could be incorrect when synchronize_seqscans is enabled, because that allows the heap scan to start at a block in the middle of the table, so the heap_blks_scanned column could wrap around when the heap scan did. This commit fixes the bug by calculating the number of blocks from the block at which the heap scan started to the current block, and reporting that number in the heap_blks_scanned column. Also, in the pg_stat_progress_cluster view, heap_blks_scanned previously could not reach heap_blks_total at the end of the heap scan phase if the last pages scanned were empty. This commit fixes that by manually updating heap_blks_scanned to the same value as heap_blks_total when the heap scan phase finishes. Back-patch to v12 where the pg_stat_progress_cluster view was introduced. Reported-by: Matthias van de Meent Author: Matthias van de Meent Reviewed-by: Fujii Masao Discussion: https://postgr.es/m/CAEze2WjCBWSGkVfYag001Rc4+-nNLDpWM7QbyD6yPvuhKs-gYQ@mail.gmail.com
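The corrected accounting, reduced to a standalone sketch (the names are illustrative, not the actual variables):

    #include <stdint.h>

    typedef uint32_t BlockNumber;   /* stand-in for the PostgreSQL typedef */

    /* Blocks scanned so far by a seqscan that started at startblock and
     * is now at curblock, in a table of nblocks blocks; handles the
     * wraparound that synchronize_seqscans makes possible. */
    static BlockNumber
    blocks_scanned(BlockNumber startblock, BlockNumber curblock,
                   BlockNumber nblocks)
    {
        if (curblock >= startblock)
            return curblock - startblock + 1;
        return (nblocks - startblock) + curblock + 1;
    }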
-
Fujii Masao authored
Previously the test_shm_mq worker used a stripped-down version of die() as its SIGTERM signal handler. This commit makes it use die() instead, to simplify the code. In terms of the code, the difference between die() and the stripped-down version previously used is whether the signal handler may call ProcessInterrupts() directly. But this difference doesn't matter in a background worker because, in a bgworker, the DoingCommandRead flag will never be true and die() will never call ProcessInterrupts() directly. Therefore the test_shm_mq worker can safely use die(), like other bgworker processes (e.g., the logical replication apply launcher or the autoprewarm worker) currently do. Thanks to Craig Ringer for the report and investigation of the issue. Author: Bharath Rupireddy Reviewed-by: Fujii Masao Discussion: https://postgr.es/m/CAGRY4nxsAe_1k_9g5b47orA0S011iBoHsXHFMH7cg7HV0O1bwQ@mail.gmail.com
-
Fujii Masao authored
Previously worker_spi used its own custom signal handlers for SIGHUP and SIGTERM. This commit makes worker_spi use the standard signal handlers, to simplify the code. Note that die() is used as the standard SIGTERM signal handler in worker_spi instead of SignalHandlerForShutdownRequest() or bgworker_die(). With the previous handlers, the worker could only exit from within its main loop, and not from within the backend code it calls. This is why die() needs to be used here: it lets worker_spi respond to SIGTERM while it's executing a query. Maybe we can say that it's a bug that worker_spi could not respond to SIGTERM during query execution. But since worker_spi is just an example of background worker code, we don't back-patch this. Thanks to Craig Ringer for the report and investigation of the issue. Author: Bharath Rupireddy Reviewed-by: Fujii Masao Discussion: https://postgr.es/m/CALj2ACXDEZhAFOTDcqO9cFSRvrFLyYOnPKrcA1UG4uZn9hUAVg@mail.gmail.com Discussion: https://postgr.es/m/CAGRY4nxsAe_1k_9g5b47orA0S011iBoHsXHFMH7cg7HV0O1bwQ@mail.gmail.com
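Condensed, the resulting setup looks roughly like this (a sketch of the pattern, not the full worker_spi code):

    #include "postgres.h"
    #include "miscadmin.h"
    #include "postmaster/bgworker.h"
    #include "postmaster/interrupt.h"
    #include "tcop/tcopprot.h"

    void
    worker_spi_main(Datum main_arg)
    {
        /* Standard handlers: reload config on SIGHUP; die() makes
         * SIGTERM effective even inside query execution, via
         * CHECK_FOR_INTERRUPTS(). */
        pqsignal(SIGHUP, SignalHandlerForConfigReload);
        pqsignal(SIGTERM, die);
        BackgroundWorkerUnblockSignals();

        for (;;)
        {
            CHECK_FOR_INTERRUPTS();
            /* ... connect and run queries with SPI ... */
        }
    }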
-
Amit Kapila authored
The tablesync worker runs inside a single transaction, but in streaming mode we were committing that transaction on stream_stop, stream_abort, and stream_commit. We need to avoid committing the transaction in streaming mode in the tablesync worker. In passing, move the call to process_syncing_tables in apply_handle_stream_commit after the cleanup of stream files. This allows the cleanup of files to happen before the exit of the tablesync worker, which would otherwise be handled by one of the proc exit routines. Author: Dilip Kumar Reviewed-by: Amit Kapila and Peter Smith Tested-by: Peter Smith Discussion: https://postgr.es/m/CAHut+Pt4PyKQCwqzQ=EFF=bpKKJD7XKt_S23F6L20ayQNxg77A@mail.gmail.com
-
- 26 Nov, 2020 4 commits
-
-
Alvaro Herrera authored
Reverts 27838981 (some comments are kept). Per discussion, it does not seem safe to relax the lock level used for this; in order for it to be safe, there would have to be memory barriers between the point we set the flag and the point we set the transaction Xid, which perhaps would not be so bad; but there would also have to be barriers at the readers' side, which from a performance perspective might be bad. Now maybe this analysis is wrong and it *is* safe for some reason, but proof of that is not trivial. Discussion: https://postgr.es/m/20201118190928.vnztes7c2sldu43a@alap3.anarazel.de
-
Fujii Masao authored
If more distinct statements than pg_stat_statements.max are observed, the pg_stat_statements entries for the least-executed statements are deallocated. This commit enables us to track the total number of times those entries were deallocated. That number can be viewed in the pg_stat_statements_info view that this commit adds, and it's useful when tuning the pg_stat_statements.max parameter: if the count is high, i.e., entries are deallocated very frequently, which might cause a performance regression, pg_stat_statements.max can be increased to avoid those frequent deallocations. The pg_stat_statements_info view is intended to display the statistics of the pg_stat_statements module itself. Currently it has only one column, "dealloc", indicating the number of times entries were deallocated, but an upcoming patch will add other columns (for example, the time at which pg_stat_statements statistics were last reset). Author: Katsuragi Yuta, Yuki Seino Reviewed-by: Fujii Masao Discussion: https://postgr.es/m/0d9f1107772cf5c3f954e985464c7298@oss.nttdata.com
-
Fujii Masao authored
A prepared statement is re-analyzed and re-planned whenever database objects used in the statement have undergone definitional changes or have had their planner statistics updated. The former was already documented, but the latter was not. This commit adds a description of the latter case to the docs. Author: Atsushi Torikoshi Reviewed-by: Andy Fan, Fujii Masao Discussion: https://postgr.es/m/3ac82f4817c9fe274a905c8a38d87bd9@oss.nttdata.com
-
Amit Kapila authored
Commit 644f0d7c added logical replication message type enums to use instead of character literals but some char substitutions were overlooked. Author: Peter Smith Reviewed-by: Amit Kapila Discussion: https://postgr.es/m/CAHut+PsTG=Vrv8hgrvOnAvCNR21jhqMdPk2n0a1uJPoW0p+UfQ@mail.gmail.com
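The shape of the substitution (enum name per the LogicalRepMsgType additions in 644f0d7c; surrounding code abbreviated):

    /* Before: opaque character literal */
    case 'I':
        apply_handle_insert(s);
        break;

    /* After: self-documenting enum value (same byte on the wire) */
    case LOGICAL_REP_MSG_INSERT:
        apply_handle_insert(s);
        break;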
-
- 25 Nov, 2020 5 commits
-
-
Alvaro Herrera authored
In the various waiting phases of CREATE INDEX CONCURRENTLY (CIC) and REINDEX CONCURRENTLY (RC), we wait for other processes to release their snapshots; this is necessary in general for correctness. However, processes doing CIC in other tables cannot possibly affect CIC or RC done in "this" table, so we don't need to wait for those. This commit adds a flag in MyProc->statusFlags to indicate that the current process is doing CIC, so that other processes doing CIC or RC can ignore it when waiting. Note that this logic is only valid if the index does not access other tables. For simplicity we avoid setting the flag if the index has a column that's an expression, or has a WHERE predicate. (It is possible to have expressional or partial indexes that do not access other tables, but figuring that out would require more work.) This flag could potentially also be used so that processes doing REINDEX CONCURRENTLY are skipped, and so that VACUUM ignores processes in CIC or RC for the purposes of computing an Xmin. That's left for future commits. Author: Álvaro Herrera <alvherre@alvh.no-ip.org> Author: Dimitry Dolgov <9erthalion6@gmail.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/20200810233815.GA18970@alvherre.pgsql
-
Tom Lane authored
Historically, psql has truncated the text of a column's default expression at 128 characters. This is unlike any other behavior in describe.c, and it's become particularly confusing now that the limit is only applied to the expression proper and not to the "generated always as (...) stored" text that may get wrapped around it. Excavation in our git history suggests that the original motivation for this limit was not really to limit the display width (as I'd long supposed), but to make it safe to use a fixed-width output buffer to store the result. That implementation restriction is long gone of course, but the limit remained. Let's just get rid of it. While here, rearrange the logic about when to free the output string so that it's not so dependent on unstated assumptions about the possible values of attidentity and attgenerated. Per bug #16743 from David Turon. Back-patch to v12 where GENERATED came in. (Arguably we could take it back further, but I'm hesitant to change the behavior of long-stable branches for this.) Discussion: https://postgr.es/m/16743-7b1bacc4af76e7ad@postgresql.org
-
Tom Lane authored
Break the per-index-type discussions into <sect2>'s so as to make them more visually separate and easier to find. Improve the markup, and make a couple of small wording adjustments. This also fixes one stray reference to the now-deprecated point operators <^ and >^. Dagfinn Ilmari Mannsåker, reviewed by David Johnston and Jürgen Purtz Discussion: https://postgr.es/m/877dukhvzg.fsf@wibble.ilmari.org
-
Tom Lane authored
Up to now, we sent a ParameterStatus message to the client immediately upon any change in the active value of any GUC_REPORT variable. This was only barely okay when the feature was designed; now that we have things like function SET clauses, there are very plausible use-cases where a GUC_REPORT variable might change many times within a query --- and even end up back at its original value, perhaps. Fortunately most of our GUC_REPORT variables are unlikely to be changed often; but there are proposals in play to enlarge that set, or even make it user-configurable. Hence, let's fix things to not generate more than one ParameterStatus message per variable per query, and to not send any message at all unless the end-of-query value is different from what we last reported. Discussion: https://postgr.es/m/5708.1601145259@sss.pgh.pa.us
-
Peter Eisentraut authored
The function converted its first argument, i.e. the number of tuples to return, into an unsigned integer, which turns out to be a huge number when a negative value is passed. This causes the function to take a much longer time to execute. Instead, reject a negative value. (If someone really wants to generate many more result rows, they should consider adding a bigint or numeric variant.) While at it, improve the SQL test to check the number of tuples returned by this function. Author: Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com> Discussion: https://www.postgresql.org/message-id/CAG-ACPW3PUUmSnM6cLa9Rw4BEC5cEMKjX8Gogc8gvQcT3cYA1A@mail.gmail.com
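The bug class, reduced to a standalone illustration (not the actual function, whose name is omitted above):

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int32_t ntuples = -1;

        /* The old coding effectively did this: */
        uint32_t count = (uint32_t) ntuples;    /* 4294967295 */
        printf("would try to generate %u tuples\n", count);

        /* The fix: reject negative input up front
         * (an ereport(ERROR, ...) in the real code). */
        if (ntuples < 0)
            fprintf(stderr, "number of rows cannot be negative\n");
        return 0;
    }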
-