- 06 Oct, 2020 4 commits
-
-
Magnus Hagander authored
Reviewed-By: David G. Johnston, Daniel Gustafsson
-
Michael Paquier authored
Oversight in 9d0bd95. Reported-by: Andres Freund Discussion: https://postgr.es/m/20201006023802.qqfi6m5bw5y77zql@alap3.anarazel.de
-
Andres Freund authored
Thanks to Andrew for proposing and testing this fix. It's possible that we should address this on a more fundamental basis, e.g. by configuring PerlIO to do the CR/LF conversion for us, but this approach already exists in other places. And it's nice to unbreak the BF. Proposed-By: Andrew Dunstan <andrew.dunstan@2ndquadrant.com> Discussion: https://postgr.es/m/2355d1f0-0244-da9c-ef0c-7542b944e1ac@2ndQuadrant.com
-
Fujii Masao authored
In postgres_fdw, once remote connections are established, they are cached and re-used for subsequent queries and transactions. There are cases where those cached connections become unavailable, for example after a restart of the remote server. Previously, if a new remote transaction failed to start because the cached connection was broken, an error was reported and the query accessing the remote server failed. This commit improves postgres_fdw so that a new connection is remade if a broken connection is detected when starting a new remote transaction. This is useful to avoid unnecessary query failures when a connection is broken but can be re-established. Author: Bharath Rupireddy, tweaked a bit by Fujii Masao Reviewed-by: Ashutosh Bapat, Tatsuhito Kasahara, Fujii Masao Discussion: https://postgr.es/m/CALj2ACUAi23vf1WiHNar_LksM9EDOWXcbHCo-fD4Mbr1d=78YQ@mail.gmail.com
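A hedged sketch of the scenario this addresses; the server, table, and option names below are made up for illustration:

    CREATE EXTENSION postgres_fdw;
    CREATE SERVER remote_srv FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'remote.example.com', dbname 'appdb');
    CREATE USER MAPPING FOR CURRENT_USER SERVER remote_srv OPTIONS (user 'app');
    CREATE FOREIGN TABLE remote_tbl (id int, val text)
        SERVER remote_srv OPTIONS (table_name 'tbl');
    SELECT count(*) FROM remote_tbl;   -- establishes and caches a remote connection
    -- ... the remote server is restarted, breaking the cached connection ...
    SELECT count(*) FROM remote_tbl;   -- previously failed with an error; now the connection is remade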
-
- 05 Oct, 2020 8 commits
-
-
Bruce Momjian authored
Previously it was unclear exactly how ROWS FROM behaved and how to cast the data types of columns returned by FROM functions. Also document that only non-OUT record functions can have their columns cast to data types. Reported-by: guyren@gmail.com Discussion: https://postgr.es/m/158638264419.662.2482095087061084020@wrigleys.postgresql.org Backpatch-through: 9.5
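As an illustration of the syntax being documented (values and aliases are made up), a record-returning function needs a column definition list, while a function with a known result type does not:

    SELECT *
    FROM ROWS FROM (
        json_to_recordset('[{"a":40,"b":"x"},{"a":42,"b":"y"}]') AS (a int, b text),
        generate_series(1, 3)
    ) AS t (a, b, n);
    -- the functions are paired positionally; the shorter one is padded with NULLs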
-
Bruce Momjian authored
Since PG 12, clientcert is no longer limited to on/off, so remove 1/0 as possible values and instead support only the text strings 'verify-ca' and 'verify-full'. Remove support for 'no-verify' since that is possible by simply not specifying clientcert. Also, throw an error if 'verify-ca' is specified together with 'cert' authentication, since cert authentication requires verify-full. Also improve the docs. THIS IS A BACKWARD INCOMPATIBLE API CHANGE. Reported-by: Kyotaro Horiguchi Discussion: https://postgr.es/m/20200716.093012.1627751694396009053.horikyota.ntt@gmail.com Author: Kyotaro Horiguchi Backpatch-through: master
-
Tom Lane authored
This should help to identify what happened when studying the postmaster log after-the-fact. While here, clean up some old comments in the same function. Discussion: https://postgr.es/m/1568983.1601845687@sss.pgh.pa.us
-
Tom Lane authored
get_eclass_for_sort_expr() computes expr_relids and nullable_relids early on, even though they won't be needed unless we make a new EquivalenceClass, which we often don't. Aside from the probably-minor inefficiency, there's a memory management problem: these bitmapsets will be built in the caller's context, leading to dangling pointers if that is shorter-lived than root->planner_cxt. This would be a live bug if get_eclass_for_sort_expr() could be called with create_it = true during GEQO join planning. So far as I can find, the core code never does that, but it's hard to be sure that no extensions do, especially since the comments make it clear that that's supposed to be a supported case. Fix by not computing these values until we've switched into planner_cxt to build the new EquivalenceClass. generate_join_implied_equalities() uses inner_rel->relids to look up relevant eclasses, but it ought to be using nominal_inner_relids. This is presently harmless because a child RelOptInfo will always have exactly the same eclass_indexes as its topmost parent; but that might not be true forever, and anyway it makes the code confusing. The first of these is old (introduced by me in f3b3b8d5), so back-patch to all supported branches. The second only dates to v13, but we might as well back-patch it to keep the code looking similar across branches. Discussion: https://postgr.es/m/1508010.1601832581@sss.pgh.pa.us
-
Tom Lane authored
The descriptions of make_interval() and pg_options_to_table() were randomly different from the reality embedded in pg_proc. (These are not all the discrepancies I found in a quick search, but the others perhaps require more discussion, since there's at least a case to be made for changing pg_proc not the docs.) make_interval issue noted by Thomas Kellerer. Discussion: https://postgr.es/m/7b154ef0-9f22-90b9-7734-4bf23686695b@gmx.net
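For reference, the two functions can be exercised like this (a sketch with arbitrary values):

    SELECT make_interval(years => 1, months => 2, days => 3, hours => 4);
    SELECT * FROM pg_options_to_table(ARRAY['fillfactor=70', 'autovacuum_enabled=false']);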
-
Peter Eisentraut authored
Unlike for functions, OUT parameters for procedures are part of the signature. Therefore, they have to be listed in pg_proc.proargtypes as well as mentioned in ALTER PROCEDURE and DROP PROCEDURE. Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com> Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/2b8490fe-51af-e671-c504-47359dc453c5@2ndquadrant.com
-
Tom Lane authored
I noticed while trying to run the regression tests under a low geqo_threshold that one query on information_schema.columns had unstable (as in, variable from one run to the next) output order. This is pretty unsurprising given the complexity of the underlying plan. Interestingly, of this test's three nigh-identical queries on information_schema.columns, the other two already had ORDER BY clauses guaranteeing stable output. Let's make this one look the same. Back-patch to v10 where this test was added. We've not heard field reports of the test failing, but this experience shows that it can happen when testing under even slightly unusual conditions.
-
Michael Paquier authored
The handling of those options was inconsistent, as the processing directly used the value assigned to the option to check whether it was redundant, allowing patterns like this one to succeed (note that false is specified first): COPY hoge to '/path/to/file/' (header off, header on); And the opposite would fail correctly (note that true is first here): COPY hoge to '/path/to/file/' (header on, header off); While on it, add some tests to check for all redundant patterns with the options of COPY. I have gone through the code and did not notice similar mistakes for other commands. "header" got it wrong since b63990c6, and "freeze" was wrong from the start as of 8de72b66. No backpatch is done per the lack of complaints. Reported-by: Rémi Lapeyre Discussion: https://postgr.es/m/20200929072433.GA15570@paquier.xyz Discussion: https://postgr.es/m/0B55BD07-83E4-439F-AACC-FA2D7CF50532@lenstra.fr
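After this change both orderings are rejected the same way (a sketch; the exact error wording may differ):

    COPY hoge TO '/path/to/file/' (header off, header on);   -- now an error: conflicting or redundant options
    COPY hoge TO '/path/to/file/' (header on, header off);   -- error, as before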
-
- 04 Oct, 2020 1 commit
-
-
Tom Lane authored
The BKI file's string quoting conventions were previously quite weird, perhaps as a result of repurposing a function built to scan single-quoted strings to scan double-quoted ones. Change to use the same rules as we use in GUC files, allowing some simplifications in genbki.pl and initdb.c. While at it, completely remove the backend's scanstr() function, which was essentially a duplicate of the string dequoting code in guc-file.l. Instead export that one (under a less generic name than it had) and let bootscanner.l use it. Now we can clarify that scansup.c exists only to support the main lexer. We could alternatively have removed GUC_scanstr, but this way seems better since the previous arrangement could mislead a reader into thinking that scanstr() had something to do with the main lexer's handling of string literals. Maybe it did once, but if so it was a long time ago. This patch does not bump catversion, since the initially-installed catalog contents don't change. Note however that successful initdb after applying this patch will require up-to-date postgres.bki as well as postgres and initdb executables. In passing, remove a bunch of very-long-obsolete #include's in bootparse.y and bootscanner.l. John Naylor Discussion: https://postgr.es/m/CACPNZCtDpd18T0KATTmCggO2GdVC4ow86ypiq5ENff1VnauL8g@mail.gmail.com
-
- 03 Oct, 2020 3 commits
-
-
Peter Eisentraut authored
SQL commands are generally marked up as <command>, except when a link to a reference page is used using <xref>. But the latter doesn't create monospace markup, so this looks strange especially when a paragraph contains a mix of links and non-links. We considered putting <command> in the <refentrytitle> on the target side, but that creates some formatting side effects elsewhere. Generally, it seems safer to solve this on the link source side. We can't put the <xref> inside the <command>; the DTD doesn't allow this. DocBook 5 would allow the <command> to have the linkend attribute itself, but we are not there yet. So to solve this for now, convert the <xref>s to <link> plus <command>. This gives the correct look and also gives some more flexibility in what we can put into the link text (e.g., subcommands or other clauses). In the future, these could then be converted to DocBook 5 style. I haven't converted absolutely all xrefs to SQL command reference pages, only those where we care about the appearance of the link text or where it was otherwise appropriate to make the appearance match a bit better. Also in some cases, the links were repetitive, so in those cases the links were just removed and replaced by a plain <command>. In cases where we just want the link and don't specifically care about the generated link text (typically phrased "for further information see <xref ...>") the xref is kept. Reported-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> Discussion: https://www.postgresql.org/message-id/flat/87o8pco34z.fsf@wibble.ilmari.org
-
Bruce Momjian authored
Reported-by: Alexander Lakhin Discussion: https://postgr.es/m/16486-b9c93d71c02c4907@postgresql.org Backpatch-through: 9.5
-
Bruce Momjian authored
Reported-by: karimelghazouly@gmail.com Discussion: https://postgr.es/m/159854511172.24991.4373145230066586863@wrigleys.postgresql.org Backpatch-through: 9.5
-
- 02 Oct, 2020 4 commits
-
-
Heikki Linnakangas authored
Use PLy_elog() only when a call to a Python C API function failed, and ereport() for other errors. Add an error code to the "wrong length of inner sequence" ereport(). Reviewed-by: Daniel Gustafsson Discussion: https://www.postgresql.org/message-id/B8B72889-D6D7-48FF-B782-D670A6CA4D37%40yesql.se
-
Michael Paquier authored
This clarifies some wording in the description of the options available as replication solutions. While on it, this replaces some instances of "master" with "primary", for consistency with recent changes like 9e101cf6. Author: Robert Treat Reviewed-by: Magnus Hagander, Michael Paquier Discussion: https://postgr.es/m/CAJSLCQ2TPaK_K8raofCamrqELCxY-H6mJrpDNRzc-LKpPY7c+g@mail.gmail.com
-
Fujii Masao authored
This view shows statistics about WAL activity. Currently it has only two columns: wal_buffers_full and stats_reset. The wal_buffers_full column indicates the number of times WAL data was written to disk because the WAL buffers got full. This information is useful when tuning wal_buffers. The stats_reset column indicates the time at which these statistics were last reset. The pg_stat_wal view is also the basic infrastructure for exposing other statistics about WAL activity later. Bump PGSTAT_FILE_FORMAT_ID due to the change in pgstat format. Bump catalog version. Author: Masahiro Ikeda Reviewed-by: Takayuki Tsunakawa, Kyotaro Horiguchi, Amit Kapila, Fujii Masao Discussion: https://postgr.es/m/188bd3f2d2233cf97753b5ced02bb050@oss.nttdata.com
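A quick way to look at the new view (a sketch):

    SELECT wal_buffers_full, stats_reset FROM pg_stat_wal;
    -- a steadily growing wal_buffers_full suggests that wal_buffers may be too small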
-
Michael Paquier authored
Providing this information can be useful for example when diagnosing problems related to recovery conflicts or for recovery issues without having to go through the output generated by pg_waldump to get some information about the blocks a WAL record works on. The block information is printed in the same format as pg_waldump. This already existed in xlog.c for debugging purposes with -DWAL_DEBUG, so adding the block information in the callback has required just a small refactoring. Author: Bertrand Drouvot Reviewed-by: Michael Paquier, Masahiko Sawada Discussion: https://postgr.es/m/c31e2cba-efda-762c-f4ad-5c25e5dac3d0@amazon.com
-
- 01 Oct, 2020 5 commits
-
-
Tom Lane authored
Commit 151c0c5f neglected the possibility that a TEMP_CONFIG file would explicitly set max_wal_senders=0; as indeed buildfarm member thorntail does, so that it can test wal_level=minimal in other test suites. Hence, rather than assuming that max_wal_senders=10 will prevail if we say nothing, set it explicitly. Set max_replication_slots=10 explicitly too, just to be safe. Back-patch to v10, like the previous patch. Discussion: https://postgr.es/m/723911.1601417626@sss.pgh.pa.us
-
Heikki Linnakangas authored
This has been wrong ever since the support for multi-dimensional arrays as PL/python function arguments and return values was introduced in commit 94aceed3. Backpatch-through: 10 Discussion: https://www.postgresql.org/message-id/61647b8e-961c-0362-d5d3-c8a18f4a7ec6%40iki.fi
-
Heikki Linnakangas authored
This is not strictly necessary, as the right-links are only needed by scans that are concurrent with page splits, and neither scans nor page splits can happen during a sorted index build. But it seems like a good idea to set them anyway, if we e.g. want to add a check to amcheck in the future to verify that the chain of right-links is complete. Author: Andrey Borodin Discussion: https://www.postgresql.org/message-id/4D68C21F-9FB9-41DA-B663-FDFC8D143788%40yandex-team.ru
-
Michael Paquier authored
As fe0a1dc5 has proved, it is not a good idea to add to libpq dependencies that would force error output to a central logging facility, because it breaks the promise of reporting errors back to an application in a consistent way, with the application potentially exit()'ing suddenly if using pieces from, for example, jsonapi.c. prairiedog allowed us to detect an actual design problem with fe0a1dc5, but it will not be around forever, so removing logging.c from libpgcommon_shlib is a simple and much better long-term way to prevent any attempt to load the central logging facility into general-purpose libraries. Author: Michael Paquier Reviewed-by: Tom Lane Discussion: https://postgr.es/m/20200928073330.GC2316@paquier.xyz
-
Andres Freund authored
I (Andres) broke this in 623a9ba79b, because I didn't think sufficiently about the way snapshots are built on standbys. Unfortunately our existing tests did not catch this, as they are all just querying with psql (therefore ending up with fresh snapshots). The fix is trivial: we just need to increment the transaction completion counter in ExpireTreeKnownAssignedTransactionIds(), which is the equivalent of ProcArrayEndTransaction() during recovery. This commit also adds a new test doing some basic testing of the correctness of snapshots built on standbys. To avoid the aforementioned issue of one-shot psql invocations not exercising the snapshot caching, the test uses long-lived psql sessions, similar to 013_crash_restart.pl. It'd be good to extend the test further. Reported-By: Ian Barwick <ian.barwick@2ndquadrant.com> Author: Andres Freund <andres@anarazel.de> Author: Ian Barwick <ian.barwick@2ndquadrant.com> Discussion: https://postgr.es/m/61291ffe-d611-f889-68b5-c298da9fb18f@2ndquadrant.com
-
- 30 Sep, 2020 6 commits
-
-
Alvaro Herrera authored
The error message about columns in the primary key not including all of the partition key was unclear; reword it. Backpatch all the way to pg11, where it appeared. Reported-by: Nagaraj Raj <nagaraj.sf@yahoo.com> Discussion: https://postgr.es/m/64062533.78364.1601415362244@mail.yahoo.com
-
Tom Lane authored
Previously, a conversion such as to_date('-44-02-01','YYYY-MM-DD') would result in '0045-02-01 BC', as the code attempted to interpret the negative year as BC, but failed to apply the correction needed for our internal handling of BC years. Fix the off-by-one problem. Also, arrange for the combination of a negative year and an explicit "BC" marker to cancel out and produce AD. This is how the negative-century case works, so it seems sane to do likewise. Continue to read "year 0000" as 1 BC. Oracle would throw an error, but we've accepted that case for a long time so I'm hesitant to change it in a back-patch. Per bug #16419 from Saeed Hubaishan. Back-patch to all supported branches. Dar Alathar-Yemen and Tom Lane Discussion: https://postgr.es/m/16419-d8d9db0a7553f01b@postgresql.org
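A sketch of the resulting behavior, with outputs as described above:

    SELECT to_date('-44-02-01', 'YYYY-MM-DD');        -- 0044-02-01 BC (previously 0045-02-01 BC)
    SELECT to_date('-44-02-01 BC', 'YYYY-MM-DD BC');  -- negative year and BC cancel out: 0044-02-01 (AD)
    SELECT to_date('0000-02-01', 'YYYY-MM-DD');       -- year 0000 still reads as 0001-02-01 BC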
-
Heikki Linnakangas authored
Author: Fabien Coelho Reviewed-by: Jeevan Ladhe Discussion: https://www.postgresql.org/message-id/alpine.DEB.2.21.1910220826570.15559%40lancre
-
Peter Eisentraut authored
For some reason, the id of the description of max_parallel_maintenance_workers has been guc-max-parallel-workers-maintenance since the beginning. Flip that around to make it consistent.
-
Tom Lane authored
PostgresNode.pm set "max_wal_senders = 5" for replication testing, but this seems to be slightly too low for our current test suite. Slower buildfarm members frequently report "number of requested standby connections exceeds max_wal_senders" failures, due to old walsenders not exiting instantaneously. Usually, the test does not fail overall because of automatic walreceiver restart, but sometimes the failure becomes visible; and in any case such retries slow down the test. That value came in with commit 89ac7004, but was soon obsoleted by f6d6d292, which raised the built-in default from zero to 10; so that PostgresNode.pm is actually setting it to less than the conservative built-in default. That seems pretty pointless, so let's remove the special setting and let the default prevail, in hopes of making the TAP tests more robust. Likewise, the setting "max_replication_slots = 5" is obsolete and can be removed. While here, reverse-engineer a comment about why we're choosing less-than-default values for some other settings. (Note: before v12, max_wal_senders counted against max_connections so that the latter setting also needs some fiddling with.) Back-patch to v10 where the subscription tests were added. It's likely that the older branches aren't pushing the boundaries of max_wal_senders, but I'm disinclined to spend time trying to figure out exactly when it started to be a problem. Discussion: https://postgr.es/m/723911.1601417626@sss.pgh.pa.us
-
David Rowley authored
Explicitly mention that primary key constraints are also included in the limitation that the constraint columns must be a superset of the partition key columns. Wording suggestion from Tom Lane. Discussion: https://postgr.es/m/64062533.78364.1601415362244@mail.yahoo.com Backpatch-through: 11, where unique constraints on partitioned tables were added
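A sketch of the limitation being documented (table names are made up):

    -- OK: the primary key includes the partition key column
    CREATE TABLE m_ok (city int, logdate date, peak int,
        PRIMARY KEY (city, logdate)) PARTITION BY RANGE (logdate);
    -- fails: the primary key does not include the partition key column "logdate"
    CREATE TABLE m_bad (city int, logdate date, peak int,
        PRIMARY KEY (city)) PARTITION BY RANGE (logdate);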
-
- 29 Sep, 2020 8 commits
-
-
Tom Lane authored
Previously we threw an error. But make_date already allowed the case, so it is inconsistent as well as unhelpful for make_timestamp not to. Both functions continue to reject year zero. Code and test fixes by Peter Eisentraut, doc changes by me Discussion: https://postgr.es/m/13c0992c-f15a-a0ca-d839-91d3efd965d9@2ndquadrant.com
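A sketch of the new behavior described above:

    SELECT make_date(-44, 3, 15);                    -- 0044-03-15 BC (already accepted)
    SELECT make_timestamp(-44, 3, 15, 12, 30, 0);    -- now 0044-03-15 12:30:00 BC instead of an error
    SELECT make_timestamp(0, 1, 1, 0, 0, 0);         -- still an error: year zero is rejected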
-
Tom Lane authored
When executing a CALL or DO in a non-atomic context (i.e., not inside a function or query), plpgsql creates a new plan each time through, as a rather hacky solution to some resource management issues. But it failed to free this plan until exit of the current procedure or DO block, resulting in serious memory bloat in procedures that called other procedures many times. Fix by remembering to free the plan, and by being more honest about restoring the previous state (otherwise, recursive procedure calls have a problem). There was also a smaller leak associated with recalculation of the "target" list of output variables. Fix that by using the statement- lifespan context to hold non-permanent values. Back-patch to v11 where procedures were introduced. Pavel Stehule and Tom Lane Discussion: https://postgr.es/m/CAFj8pRDiiU1dqym+_P4_GuTWm76knJu7z9opWayBJTC0nQGUUA@mail.gmail.com
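A sketch of the kind of workload that previously bloated memory (procedure names are hypothetical):

    CREATE PROCEDURE leaf() LANGUAGE plpgsql AS $$ BEGIN NULL; END $$;
    CREATE PROCEDURE driver() LANGUAGE plpgsql AS $$
    BEGIN
      FOR i IN 1 .. 100000 LOOP
        CALL leaf();  -- each CALL in a non-atomic context built a plan that was not freed until driver() exited
      END LOOP;
    END $$;
    CALL driver();    -- invoked at top level, so the nested CALLs run in a non-atomic context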
-
Alexander Korotkov authored
The SQL standard doesn't require the jsonpath .datetime() method to support the ISO 8601 format. But our to_json[b]() functions convert timestamps to text in the ISO 8601 format for the sake of compatibility with JavaScript. So, we add support for ISO 8601 to the jsonpath .datetime() method for the sake of compatibility with to_json[b](). The standard mode of datetime parsing currently supports just template patterns and separators in the format string. In order to implement ISO 8601, we have to add support for double quotes in the format string to the standard parsing mode. Discussion: https://postgr.es/m/94321be0-cc96-1a81-b6df-796f437f7c66%40postgrespro.ru Author: Nikita Glukhov, revised by me Backpatch-through: 13
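A sketch of the round trip this enables (the offset shown depends on the TimeZone setting):

    SELECT to_jsonb(timestamptz '2020-09-29 12:34:56+03');
    -- e.g. "2020-09-29T12:34:56+03:00", i.e. ISO 8601
    SELECT jsonb_path_query('"2020-09-29T12:34:56+03:00"', '$.datetime()');
    -- now parsed by .datetime() instead of being rejected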
-
Alexander Korotkov authored
bffe1bd6 introduced the jsonpath .datetime() method, but the default formats for time and timestamp contained an excess space between the time and the timezone. This commit removes this excess space, making the behavior of the .datetime() method standard-compliant. Discussion: https://postgr.es/m/94321be0-cc96-1a81-b6df-796f437f7c66%40postgrespro.ru Author: Nikita Glukhov Backpatch-through: 13
-
Fujii Masao authored
Previously the standby server didn't archive timeline history files streamed from the primary even when archive_mode was set to "always", even though it archived the streamed WAL files. This could cause PITR to fail because the required timeline history file was missing from the archive. The cause of this issue was that walreceiver didn't mark those files as ready for archiving. This commit makes walreceiver mark those streamed timeline history files as ready for archiving if archive_mode=always. The archiver process then archives the marked timeline history files. Back-patch to all supported versions. Reported-by: Grigory Smolkin Author: Grigory Smolkin, Fujii Masao Reviewed-by: David Zhang, Anastasia Lubennikova Discussion: https://postgr.es/m/54b059d4-2b48-13a4-6f43-95a087c92367@postgrespro.ru
-
Michael Paquier authored
This addresses a couple of issues with progress reporting for REINDEX CONCURRENTLY: - Report the correct parent relation with the index actually being rebuilt or validated. Previously, the command status remained set to the last index created for the progress of the index build and validation, which would be incorrect when working on a table that has more than one index. - Use the correct phase when waiting before the drop of the old indexes. Previously, this was reported with the same status as when waiting before the old indexes are marked as dead. Author: Matthias van de Meent, Michael Paquier Discussion: https://postgr.es/m/CAEze2WhqFgcwe1_tv=sFYhLWV2AdpfukumotJ6JNcAOQs3jufg@mail.gmail.com Backpatch-through: 12
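While a REINDEX CONCURRENTLY is running, the corrected information can be observed like this (a sketch):

    SELECT pid, relid::regclass, index_relid::regclass, command, phase
    FROM pg_stat_progress_create_index;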
-
Tom Lane authored
We have a dozen or so places that need to iterate over all but the first cell of a List. Prior to v13 this was typically written as for_each_cell(lc, lnext(list_head(list))) Commit 1cff1b95 changed these to for_each_cell(lc, list, list_second_cell(list)) This patch introduces a new macro for_each_from() which expresses the start point as a list index, allowing these to be written as for_each_from(lc, list, 1) This is marginally more efficient, since ForEachState.i can be initialized directly instead of backing into it from a ListCell address. It also seems clearer and less typo-prone. Some of the remaining uses of for_each_cell() look like they could profitably be changed to for_each_from(), but here I confined myself to changing uses of list_second_cell(). Also, fix for_each_cell_setup() and for_both_cell_setup() to const-ify their arguments; that's a simple oversight in 1cff1b95. Back-patch into v13, on the grounds that (1) the const-ification is a minor bug fix, and (2) it's better for back-patching purposes if we only have two ways to write these loops rather than three. In HEAD, also remove list_third_cell() and list_fourth_cell(), which were also introduced in 1cff1b95, and are unused as of cc99baa4. It seems unlikely that any third-party code would have started to use them already; anyone who has can be directed to list_nth_cell instead. Discussion: https://postgr.es/m/CAApHDvpo1zj9KhEpU2cCRZfSM3Q6XGdhzuAS2v79PH7WJBkYVA@mail.gmail.com
-
Michael Paquier authored
This reverts commit e21cbb4b, as the switch to EVP routines requires a more careful design: at a minimum, our wrapper routines would need to return a status instead of issuing an error by themselves, so that the caller can do the error handling. The memory handling was also incorrect and could cause leaks in the backend if a failure happened, most likely requiring a callback to do the necessary cleanup, since the only clean way to allocate an EVP context requires an allocation within OpenSSL. The potential rework of the wrappers also impacts the fallback implementation used when not building with OpenSSL. Originally, prairiedog reported a compilation failure, but after discussion with Tom Lane this needs a better design. Discussion: https://postgr.es/m/20200928073330.GC2316@paquier.xyz
-
- 28 Sep, 2020 1 commit
-