- 31 Mar, 2021 23 commits
-
-
Alvaro Herrera authored
Failing to do this can cause a crash, and I suspect that is what has happened with a buildfarm member reporting mysterious failures. This is an ancient bug, but I'm not backpatching since evidently nobody cares about PQtrace in older releases.

Discussion: https://postgr.es/m/3333908.1617227066@sss.pgh.pa.us
-
Alvaro Herrera authored
Some buildfarm animals with force_parallel_mode=regress were failing this test because the error is reported in a parallel worker quicker than the rows that succeed. Take the opportunity to move the SET of lc_messages out of the traced section, because it's not very interesting.

Diagnosed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/3304521.1617221724@sss.pgh.pa.us
-
Tom Lane authored
We must cast the arguments of <ctype.h> functions to unsigned char to avoid problems where char is signed. Speaking of which, considering that this *is* a <ctype.h> function, it's rather remarkable that we aren't seeing more complaints about not having included that header. Per buildfarm.
-
Tom Lane authored
Remove confusion between time_t and pg_time_t; neither gettimeofday() nor localtime() deal in the latter. libpq indeed has no business using <pgtime.h> at all. Use snprintf not sprintf, to ensure we can't overrun the supplied buffer. (Unlikely, but let's be safe.) Per buildfarm.
-
Tom Lane authored
Per buildfarm.
-
Tom Lane authored
Since a4d75c86, some buildfarm members have been warning that Assert(attnum <= MaxAttrNumber); is useless if attnum is an AttrNumber. I'm not certain how plausible it is that the value coming out of the bitmap could actually exceed MaxAttrNumber, but we seem to have thought that that was possible back in 7300a699. Revert the intermediate variable to int so that we have the same overflow protection as before.
-
Stephen Frost authored
The new appendix groups information on renamed or removed settings, commands, etc. into an out-of-the-way part of the docs.

The original id elements are retained in each subsection to ensure that the same filenames are produced for HTML docs. This prevents /current/ links on the web from breaking, and allows users of the web docs to follow links from old version pages to info on the changes in the new version. Prior to this change, a link to /current/ for renamed sections like the recovery.conf docs would just 404. Similarly, if someone searched for recovery.conf they would find the pg11 docs, but there would be no /12/ or /current/ link, so they couldn't easily find out that it was removed in pg12 or how to adapt.

Index entries are also added so that there's a breadcrumb trail for users to follow when they know the old name, but not what we changed it to. So a user who is trying to find out how to set standby_mode in PostgreSQL 12+, or where pg_resetxlog went, now has more chance of finding that information.

Craig Ringer and Stephen Frost
Reviewed-by: Euler Taveira
Discussion: https://postgr.es/m/CAGRY4nzPNOyYQ_1-pWYToUVqQ0ThqP5jdURnJMZPm539fdizOg%40mail.gmail.com
Backpatch-through: 10
-
Tom Lane authored
Put the remote end's error message into the primary error string, instead of relegating it to errdetail(). Although this could end up being awkward if the remote sends us a really long error message, it seems more in keeping with our message style guidelines, and more helpful in situations where the errdetail could get dropped.

Peter Smith
Discussion: https://postgr.es/m/CAHut+Ps-Qv2yQceCwobQDP0aJOkfDzRFrOaR6+2Op2K=WHGeWg@mail.gmail.com
-
Alvaro Herrera authored
Test pipeline_abort was not checking that it got the rows it expected in one mode; make it do so. This doesn't fix the actual problem (no idea what that is, yet) but at least it should make it more obvious rather than being visible only as a difference in the trace output.

While at it, fix other infelicities in the test:

* I reversed the order of result vs. expected in like().

* The output traces from -t are being put in the log dir, which means the buildfarm script uselessly captures them. Put them in a separate dir tmp_check/traces instead, to avoid cluttering the buildfarm results.

* Test pipelined_insert was using too large a row count. Reduce that a tad and add a filler column to make each insert a little bulkier, while still keeping enough that a buffer is filled and we have to switch mode.
-
Joe Conway authored
According to the comments, when an invalid or dropped column oid is passed to has_column_privilege(), the intention has always been to return NULL. However, when the caller had table level privilege the invalid/missing column was never discovered, because table permissions were checked first.

Fix that by introducing extended versions of pg_attribute_acl(check|mask) and pg_class_acl(check|mask) which take a new argument, is_missing. When is_missing is NULL, the old behavior is preserved. But when is_missing is passed by the caller, no ERROR is thrown for dropped or missing columns/relations, and is_missing is flipped to true. This in turn allows has_column_privilege to check for column privileges first, providing the desired semantics.

Not backpatched since it is a user visible behavioral change with no previous complaints, and the fix is a bit on the invasive side.

Author: Joe Conway
Reviewed-By: Tom Lane
Reported by: Ian Barwick
Discussion: https://postgr.es/m/flat/9b5f4311-157b-4164-7fe7-077b4fe8ed84%40joeconway.com
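A minimal sketch of the intended behavior (table and column names are hypothetical, and this assumes the by-attribute-number variant of the function):

    CREATE TABLE t (a int, b int);
    ALTER TABLE t DROP COLUMN b;

    -- Attribute number 2 now refers to a dropped column.  With this change,
    -- the result is NULL even when the caller has table-level privilege;
    -- previously the table-level check short-circuited the column lookup.
    SELECT has_column_privilege('t'::regclass, 2::smallint, 'SELECT');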
-
Tom Lane authored
This patch makes two closely related sets of changes:

1. For UPDATE, the subplan of the ModifyTable node now only delivers the new values of the changed columns (i.e., the expressions computed in the query's SET clause) plus row identity information such as CTID. ModifyTable must re-fetch the original tuple to merge in the old values of any unchanged columns. The core advantage of this is that the changed columns are uniform across all tables of an inherited or partitioned target relation, whereas the other columns might not be. A secondary advantage, when the UPDATE involves joins, is that less data needs to pass through the plan tree. The disadvantage of course is an extra fetch of each tuple to be updated. However, that seems to be very nearly free in context; even worst-case tests don't show it to add more than a couple percent to the total query cost. At some point it might be interesting to combine the re-fetch with the tuple access that ModifyTable must do anyway to mark the old tuple dead; but that would require a good deal of refactoring and it seems it wouldn't buy all that much, so this patch doesn't attempt it.

2. For inherited UPDATE/DELETE, instead of generating a separate subplan for each target relation, we now generate a single subplan that is just exactly like a SELECT's plan, then stick ModifyTable on top of that. To let ModifyTable know which target relation a given incoming row refers to, a tableoid junk column is added to the row identity information. This gets rid of the horrid hack that was inheritance_planner(), eliminating O(N^2) planning cost and memory consumption in cases where there were many unprunable target relations.

Point 2 of course requires point 1, so that there is a uniform definition of the non-junk columns to be returned by the subplan. We can't insist on uniform definition of the row identity junk columns however, if we want to keep the ability to have both plain and foreign tables in a partitioning hierarchy. Since it wouldn't scale very far to have every child table have its own row identity column, this patch includes provisions to merge similar row identity columns into one column of the subplan result. In particular, we can merge the whole-row Vars typically used as row identity by FDWs into one column by pretending they are type RECORD. (It's still okay for the actual composite Datums to be labeled with the table's rowtype OID, though.)

There is more that can be done to file down residual inefficiencies in this patch, but it seems to be committable now.

FDW authors should note several API changes:

* The argument list for AddForeignUpdateTargets() has changed, and so has the method it must use for adding junk columns to the query. Call add_row_identity_var() instead of manipulating the parse tree directly. You might want to reconsider exactly what you're adding, too.

* PlanDirectModify() must now work a little harder to find the ForeignScan plan node; if the foreign table is part of a partitioning hierarchy then the ForeignScan might not be the direct child of ModifyTable. See postgres_fdw for sample code.

* To check whether a relation is a target relation, it's no longer sufficient to compare its relid to root->parse->resultRelation. Instead, check it against all_result_relids or leaf_result_relids, as appropriate.

Amit Langote and Tom Lane
Discussion: https://postgr.es/m/CA+HiwqHpHdqdDn48yCEhynnniahH78rwcrv1rEX65-fsZGBOLQ@mail.gmail.com
-
Peter Eisentraut authored
This allows something like

    SELECT ... FROM t1 JOIN t2 USING (a, b, c) AS x

where x has the columns a, b, c and, unlike a regular alias, it does not hide the range variables of the tables being joined (t1 and t2).

Per SQL:2016 feature F404 "Range variable for common column names".

Reviewed-by: Vik Fearing <vik.fearing@2ndquadrant.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/454638cf-d563-ab76-a585-2564428062af@2ndquadrant.com
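A slightly fuller sketch of the feature (table and column names are made up):

    CREATE TABLE t1 (a int, b int, c int, d int);
    CREATE TABLE t2 (a int, b int, c int, e int);

    -- x exposes the merged join columns a, b, c; unlike a plain alias,
    -- t1 and t2 remain visible, so t1.d and t2.e can still be referenced.
    SELECT x.a, x.b, x.c, t1.d, t2.e
    FROM t1 JOIN t2 USING (a, b, c) AS x;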
-
Etsuro Fujita authored
This implements asynchronous execution, which runs multiple parts of a non-parallel-aware Append concurrently rather than serially to improve performance when possible. Currently, the only node type that can be run concurrently is a ForeignScan that is an immediate child of such an Append. Where such ForeignScans access data on different remote servers, they are run concurrently and the remote operations overlap, which improves performance especially when the operations are time-consuming ones such as remote joins and remote aggregations. We may extend this to other node types, such as joins or aggregates over ForeignScans, in the future.

This also adds support for this to postgres_fdw, enabled by the table-level/server-level option "async_capable". The default is false.

Robert Haas, Kyotaro Horiguchi, Thomas Munro, and myself. This commit is mostly based on the patch proposed by Robert Haas, but also uses stuff from the patch proposed by Kyotaro Horiguchi and from the patch proposed by Thomas Munro.

Reviewed by Kyotaro Horiguchi, Konstantin Knizhnik, Andrey Lepikhov, Movead Li, Thomas Munro, Justin Pryzby, and others.

Discussion: https://postgr.es/m/CA%2BTgmoaXQEt4tZ03FtQhnzeDEMzBck%2BLrni0UWHVVgOTnA6C1w%40mail.gmail.com
Discussion: https://postgr.es/m/CA%2BhUKGLBRyu0rHrDCMC4%3DRn3252gogyp1SjOgG8SEKKZv%3DFwfQ%40mail.gmail.com
Discussion: https://postgr.es/m/20200228.170650.667613673625155850.horikyota.ntt%40gmail.com
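As a rough illustration (object names are hypothetical), the option can be enabled at either level:

    -- enable asynchronous execution for every foreign table of a server
    ALTER SERVER remote1 OPTIONS (ADD async_capable 'true');

    -- or for an individual foreign table
    ALTER FOREIGN TABLE remote_part1 OPTIONS (ADD async_capable 'true');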
-
Peter Eisentraut authored
ParseNamespaceItem had a wired-in assumption that p_rte->eref describes the table and column aliases exposed by the nsitem. This commit relaxes that assumption by creating a separate p_names field in the nsitem. This is mainly preparation for a patch for JOIN USING aliases, but it saves one indirection in common code paths, so it's possibly a win on its own.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/785329.1616455091@sss.pgh.pa.us
-
Peter Eisentraut authored
Similar to existing errmsg_plural() and errdetail_plural(). Some errhint() calls hadn't received the proper plural treatment yet.
-
Peter Eisentraut authored
Not supported by PDF build right now, so let's do without it.
-
Amit Kapila authored
Author: Suraj Kharage
Reviewed-by: Bharath Rupireddy, Vignesh C
Discussion: https://postgr.es/m/CAF1DzPWRfxUeH-wShz7P_pK5Tx6M_nEK+TkS8gn5ngvg07Q5=g@mail.gmail.com
-
Amit Kapila authored
At some places in the docs, we refer to them as tablesync slots and at other places as table synchronization slots. For consistency, we refer to them as table synchronization slots at all places.

Author: Peter Smith
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CAHut+PvzYNKCeZ=kKBDkh3dw-r=2D3fk=nNc9SXSW=CZGk69xg@mail.gmail.com
-
Noah Misch authored
We always inserted a larger-than-fillfactor tuple into a newly-extended page, even when existing pages were empty or contained nothing but an unused line pointer. This was unnecessary relation extension. Start tolerating page usage up to 1/8 the maximum space that could be taken up by line pointers. This is somewhat arbitrary, but it should allow more cases to reuse pages. This has no effect on tables with fillfactor=100 (the default).

John Naylor and Floris van Nee. Reviewed by Matthias van de Meent. Reported by Floris van Nee.

Discussion: https://postgr.es/m/6e263217180649339720afe2176c50aa@opammb0562.comp.optiver.com
-
Michael Paquier authored
CreateStmt->inhRelations is a list of RangeVars, but a comment was incorrect about that.

Author: Julien Rouhaud
Discussion: https://postgr.es/m/20210330123015.yzekhz5sweqbgxdr@nol
-
Michael Paquier authored
When specified, only extensions matching the given pattern are included in dumps. Similarly to --table and --schema, when --strict-names is used, a perfect match is required. Also, like the two other options, this new option offers no guarantee that dependent objects have been dumped, so a restore may fail on a clean database.

Tests are added in test_pg_dump/, checking a set of positive and negative cases, with and without an extension's contents added to the generated dump.

Author: Guillaume Lelarge
Reviewed-by: David Fetter, Tom Lane, Michael Paquier, Asif Rehman, Julien Rouhaud
Discussion: https://postgr.es/m/CAECtzeXOt4cnMU5+XMZzxBPJ_wu76pNy6HZKPRBL-j7yj1E4+g@mail.gmail.com
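Hedged usage sketch (extension and database names are made up):

    # dump mydb, limiting extension member objects to the named extension
    pg_dump --extension=hstore --file=mydb.sql mydb

    # with --strict-names, pg_dump errors out if the pattern matches no extension
    pg_dump --extension=hstore --strict-names --file=mydb.sql mydb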
-
Tom Lane authored
Whilst poking at nodeModifyTable.c, I chanced to notice that while its calls to ExecBR*Triggers and ExecIR*Triggers are protected by tests to see if there are any relevant triggers to fire, its calls to ExecAR*Triggers are not; the latter functions do the equivalent tests themselves. This seems possibly reasonable given the more complex conditions involved, but what's less reasonable is that the ExecAR* functions aren't careful to do no work when there is no work to be done. ExecARInsertTriggers gets this right, but the other two will both force creation of a slot that the query may have no use for. ExecARUpdateTriggers additionally performed a usually-useless ExecClearTuple() on that slot. This is probably all pretty microscopic in real workloads, but a cycle shaved is a cycle earned.
-
- 30 Mar, 2021 13 commits
-
-
Bruce Momjian authored
Seems the -1/singular output is used in the dblink regression tests.

Reported-by: Álvaro Herrera
Discussion: https://postgr.es/m/20210330231506.GA10666@alvherre.pgsql
-
Alvaro Herrera authored
The libpq_pipeline program recently introduced by commit acb7e4eb is well equipped to test the PQtrace() functionality, so let's make it do that.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20210327192812.GA25115@alvherre.pgsql
-
Alvaro Herrera authored
Transform the PQtrace output format from its ancient (and mostly useless) byte-level output format to a logical-message-level output, making it much more usable. This implementation allows the printing code to be written (as it indeed was) by looking at the protocol documentation, which gives more confidence that the three (docs, trace code and actual code) actually match.

Author: 岩田 彩 (Aya Iwata) <iwata.aya@fujitsu.com>
Reviewed-by: 綱川 貴之 (Takayuki Tsunakawa) <tsunakawa.takay@fujitsu.com>
Reviewed-by: Kirk Jamison <k.jamison@fujitsu.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: 黒田 隼人 (Hayato Kuroda) <kuroda.hayato@fujitsu.com>
Reviewed-by: "Nagaura, Ryohei" <nagaura.ryohei@jp.fujitsu.com>
Reviewed-by: Ryo Matsumura <matsumura.ryo@fujitsu.com>
Reviewed-by: Greg Nancarrow <gregn4422@gmail.com>
Reviewed-by: Jim Doty <jdoty@pivotal.io>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/71E660EB361DF14299875B198D4CE5423DE3FBA4@g01jpexmbkw25
-
Bruce Momjian authored
This outputs "-1 year", not "-1 years".

Reported-by: neverov.max@gmail.com
Bug: 16939
Discussion: https://postgr.es/m/16939-cceeb03fb72736ee@postgresql.org
-
Peter Eisentraut authored
This exercises a special case in the implementations of date_part('epoch', timestamp[tz]) that was previously not tested.
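For reference, a hedged example of the expression being exercised (the exact values used by the new test may differ):

    -- 'epoch' of a timestamp is the nominal number of seconds since
    -- 1970-01-01 00:00:00, so this yields 946684800
    SELECT date_part('epoch', timestamp '2000-01-01 00:00:00');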
-
Stephen Frost authored
Instead of using pg_usleep() in vacuum_delay_point(), use a WaitLatch. This has the advantage that we will realize if the postmaster has been killed since the last time we decided to sleep while vacuuming.

Reviewed-by: Thomas Munro
Discussion: https://postgr.es/m/CAFh8B=kcdk8k-Y21RfXPu5dX=bgPqJ8TC3p_qxR_ygdBS=JN5w@mail.gmail.com
-
Tom Lane authored
As committed in bbe0a81d, pg_dump from a pre-v14 server effectively acts as though you'd said --no-toast-compression. I think the right thing is for it to act as though default_toast_compression is set to "pglz", instead, so that the tables' toast compression behavior is preserved. You can always get the other behavior, if you want that, by giving the switch.

Discussion: https://postgr.es/m/1112852.1616609702@sss.pgh.pa.us
-
David Rowley authored
Here we add a new output parameter to estimate_num_groups() to allow it to inform the caller of additional, possibly useful information about the estimation.

The new output parameter is a struct that currently contains just a single field with a set of flags. This was done rather than having the flags as an output parameter to allow future fields to be added without having to change the signature of the function at a later date when we want to pass back further information that might not be suitable to store in the flags field.

It seems reasonable that one day in the future the planner would want to know more about the estimation. For example, how many individual sets of statistics was the estimation generated from? The planner may want to take that into account if we ever want to consider risks as well as costs when generating plans.

For now, there's only 1 flag we set in the flags field. This is to indicate if the estimation fell back on using the hard-coded constants in any part of the estimation. Callers may like to change their behavior if this is set, and this gives them the ability to do so. Callers may pass the flag pointer as NULL if they have no interest in obtaining any additional information about the estimate.

We're not adding any actual usages of these flags here. Some follow-up commits will make use of this feature. Additionally, we're also not making any changes to add support for clauselist_selectivity() and clauselist_selectivity_ext(). However, if this is required in the future then the same struct being added here should be fine to use as a new output argument for those functions too.

Author: David Rowley
Discussion: https://postgr.es/m/CAApHDvqQqpk=1W-G_ds7A9CsXX3BggWj_7okinzkLVhDubQzjA@mail.gmail.com
-
David Rowley authored
Some compilers are not aware that elog/ereport ERROR does not return.
-
David Rowley authored
Previously simplehash.h only exposed a method to perform a hash table delete using the hash table key. This meant that the delete function had to perform a hash lookup in order to find the entry to delete. Here we add a new function so that users of simplehash.h can perform a hash delete directly using the entry pointer, thus saving the hash lookup.

An upcoming patch that uses simplehash.h already has performed the hash lookup so already has the entry pointer. This change will allow the code in that patch to perform the hash delete without the code in simplehash.h having to perform an additional hash lookup.

Author: David Rowley
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CAApHDvqFLXXge153WmPsjke5VGOSt7Ez0yD0c7eBXLfmWxs3Kw@mail.gmail.com
-
Peter Eisentraut authored
The existing regression tests only tested the lower boundary of the range supported by the timestamp and timestamptz types because "The upper boundary differs between integer and float timestamps, so no check". Since this is obsolete, add similar tests for the upper boundary.
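A hedged sketch of the kind of probe such a test might use (the values here are assumptions based on the documented range, not necessarily what the test adds):

    -- near the documented upper bound of the timestamp range (294276 AD)
    SELECT timestamp '294276-12-31 23:59:59';

    -- just beyond it, this should fail with "timestamp out of range"
    SELECT timestamp '294277-01-01 00:00:00';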
-
Amit Kapila authored
Along with the gid, this provides a different way to identify the transaction. Users that make use of the xid when preparing transactions can use it to filter prepared transactions. The later commands COMMIT PREPARED or ROLLBACK PREPARED carry both identifiers, providing an output plugin the choice of which to use.

Author: Markus Wanner
Reviewed-by: Vignesh C, Amit Kapila
Discussion: https://postgr.es/m/ee280000-7355-c4dc-e47b-2436e7be959c@enterprisedb.com
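For context, the gid is the identifier chosen by the client at prepare time, while the xid is the transaction's internal identifier that is now also passed to output plugins. A small illustration (table and identifier names are made up):

    BEGIN;
    INSERT INTO accounts VALUES (1, 100.00);
    PREPARE TRANSACTION 'txn_0001';   -- 'txn_0001' is the gid

    -- later, possibly from another session; both gid and xid are carried
    -- through to the decoded COMMIT PREPARED
    COMMIT PREPARED 'txn_0001';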
-
Etsuro Fujita authored
Back-patch to all supported branches.

Author: Etsuro Fujita
Discussion: https://postgr.es/m/CAPmGK17DwzaSf%2BB71dhL2apXdtG-OmD6u2AL9Cq2ZmAR0%2BzapQ%40mail.gmail.com
-
- 29 Mar, 2021 4 commits
-
-
Alvaro Herrera authored
We were never doing clearerr() on the output stream, which results in a message being printed after each result once an EOF is seen:

    could not print result table: Success

This message was added by commit b0343699 (in the pg13 era); before that, the error indicator would never be examined. So backpatch only that far back, even though the actual bug (to wit: the fact that the error indicator is never cleared) is older.
-
David Rowley authored
The design of the data structures which allow storage of the per-worker memory during parallel seq scans was not ideal. The work done in 56788d21 required an additional data structure to allow workers to remember the range of pages that had been allocated to them for processing during a parallel seqscan. That commit added a void pointer field to TableScanDescData to allow heapam to store the per-worker allocation information. However, putting the field there made very little sense given that we have AM specific structs for that, e.g. HeapScanDescData. Here we remove the void pointer field from TableScanDescData and add a dedicated field for this purpose to HeapScanDescData.

Previously we also allocated memory for this parallel per-worker data for all scans, regardless of whether it was a parallel scan or not. This was just a wasted allocation for non-parallel scans, so here we make the allocation conditional on the scan being parallel.

Also, add a previously missing pfree() to free the per-worker data in heap_endscan().

Reported-by: Andres Freund
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/20210317023101.anvejcfotwka6gaa@alap3.anarazel.de
-
Andrew Dunstan authored
Currently we only recognize the Common Name (CN) of a certificate's subject to be matched against the user name. Thus certificates with subjects '/OU=eng/CN=fred' and '/OU=sales/CN=fred' will have the same connection rights. This patch provides an option to match the whole Distinguished Name (DN) instead of just the CN.

On any hba line using client certificate identity, there is an option 'clientname' which can have values of 'DN' or 'CN'. The default is 'CN', the current procedure. The DN is matched against the RFC 2253 formatted DN, which looks like 'CN=fred,OU=eng'.

This facility is probably best used in conjunction with an ident map.

Discussion: https://postgr.es/m/92e70110-9273-d93c-5913-0bccb6562740@dunslane.net
Reviewed-By: Michael Paquier, Daniel Gustafsson, Jacob Champion
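A hedged sketch of what such an hba entry might look like (addresses and map name are illustrative):

    # pg_hba.conf: authenticate by client certificate and match the full
    # RFC 2253 DN, typically resolved to a role via a pg_ident.conf map
    hostssl all all 192.0.2.0/24 cert clientname=DN map=dnmap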
-
Peter Eisentraut authored
Some tests for timestamp and timestamptz were in the date.sql test file. Move them to their appropriate files, or drop tests cases that were already present there.
-