- 13 Jan, 2020 6 commits
-
-
Amit Kapila authored
Reported-by: Antonin Houska Author: Antonin Houska Backpatch-through: 11, where it was introduced Discussion: https://postgr.es/m/2246.1578900133@antos
-
Michael Paquier authored
Improvement per suggestion from Tom Lane. Author: Daniel Gustafsson Discussion: https://postgr.es/m/FED18699-4270-4778-8DA8-10F119A5ECF3@yesql.se
-
Andrew Dunstan authored
Otherwise there is a security risk. Discussion: https://postgr.es/m/20200109103014.GA4192@msg.df7cb.de
-
Amit Kapila authored
Earlier, we used to postpone deleting empty pages until the second stage of vacuum to amortize the cost of scanning internal pages. However, that could sometimes delay the recycling of pages (say, if vacuum was canceled or errored out between the first and second stages). Another problem is that to facilitate deleting empty pages in the second stage, we need to share the information about internal and empty pages between the different stages of vacuum; sharing this information via DSM, as required for the upcoming parallel vacuum patch, would be quite tricky. Deleting empty pages in each pass also brings the logic to reclaim deleted pages closer to nbtree, where we delete empty pages in each pass. Overall, the advantages of deleting empty pages in each pass outweigh the advantages of postponing it. Author: Dilip Kumar, with changes by Amit Kapila Reviewed-by: Sawada Masahiko and Amit Kapila Discussion: https://postgr.es/m/CAA4eK1LGr+MN0xHZpJ2dfS8QNQ1a_aROKowZB+MPNep8FVtwAA@mail.gmail.com
-
Tomas Vondra authored
Until now we've only used a single multivariate MCV list per relation, covering the largest number of clauses. So for example, given a query SELECT * FROM t WHERE a = 1 AND b = 1 AND c = 1 AND d = 1 and extended statistics on (a,b) and (c,d), we'd only pick and use one of them. This commit improves on this by repeatedly picking and applying the best statistics object (the one matching the largest number of remaining clauses) until no additional statistics object is applicable. This greedy algorithm is simple, but may not be optimal: a different choice of statistics may leave fewer clauses unestimated and/or give better estimates for some other reason. That can happen, however, only when there are overlapping statistics, and selecting one makes it impossible to use the other. E.g. with statistics on (a,b), (c,d), (b,c,d), we may pick either (a,b) and (c,d) or (b,c,d), and it's not clear which option is the best one. We assume cases like this are rare, and that the easiest solution is to define statistics covering the whole group of correlated columns. In the future we might support overlapping stats, using some of the clauses as conditions (in the conditional-probability sense). Author: Tomas Vondra Reviewed-by: Mark Dilger, Kyotaro Horiguchi Discussion: https://postgr.es/m/20191028152048.jc6pqv5hb7j77ocp@development
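For illustration, a minimal sketch of the scenario described above (table and statistics names are hypothetical):

    CREATE TABLE t (a int, b int, c int, d int);
    CREATE STATISTICS s_ab (mcv) ON a, b FROM t;
    CREATE STATISTICS s_cd (mcv) ON c, d FROM t;
    ANALYZE t;
    -- The planner can now apply both s_ab and s_cd when estimating
    -- this WHERE clause, instead of picking just one of them.
    EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 1 AND c = 1 AND d = 1;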
-
Tomas Vondra authored
When considering functional dependencies during selectivity estimation, it's not necessary to bother with selecting the best extended statistics object and then using just the dependencies from it. We can simply consider all applicable functional dependencies at once. This means we need to deserialize all (applicable) dependencies before applying them to the clauses, which is a bit more expensive than picking the best statistics object and deserializing only its dependencies. To minimize the additional cost, we ignore statistics that are not applicable. Author: Tomas Vondra Reviewed-by: Mark Dilger Discussion: https://postgr.es/m/20191028152048.jc6pqv5hb7j77ocp@development
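A minimal sketch with hypothetical names, using two dependency statistics that are now considered together:

    CREATE TABLE addr (city text, state text, zipcode text);
    CREATE STATISTICS s_dep1 (dependencies) ON city, zipcode FROM addr;
    CREATE STATISTICS s_dep2 (dependencies) ON state, zipcode FROM addr;
    ANALYZE addr;
    -- Both dependency statistics can be applied at once when estimating:
    EXPLAIN SELECT * FROM addr
    WHERE city = 'Boston' AND state = 'MA' AND zipcode = '02108';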
-
- 12 Jan, 2020 2 commits
-
-
Tom Lane authored
When estimating the selectivity of "range_var <@ range_constant" or "range_var @> range_constant", if the upper (or respectively lower) bound of the range_constant was above the last bin of the range_var's histogram, the code would access uninitialized memory and potentially crash (though it seems the probability of a crash is quite low). Handle the endpoint cases explicitly to fix that. While at it, be more paranoid about the possibility of getting NaN or other silly results from the range type's subdiff function. And improve some comments. Ordinarily we'd probably add a regression test case demonstrating the bug in unpatched code. But it's too hard to get it to crash reliably because of the uninitialized-memory dependence, so skip that. Per bug #16122 from Adam Scott. It's been broken from the beginning, apparently, so backpatch to all supported branches. Diagnosis by Michael Paquier, patch by Andrey Borodin and Tom Lane. Discussion: https://postgr.es/m/16122-eb35bc248c806c15@postgresql.org
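A hypothetical reproduction of the query shape involved (a regression test was skipped because the crash depends on uninitialized memory):

    CREATE TABLE r (rng int4range);
    INSERT INTO r SELECT int4range(i, i + 10) FROM generate_series(1, 1000) i;
    ANALYZE r;
    -- A constant whose upper bound lies beyond the last bin of rng's
    -- histogram previously made the estimator read uninitialized memory.
    EXPLAIN SELECT * FROM r WHERE rng <@ int4range(0, 100000);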
-
Michael Paquier authored
On the publisher, it was assumed that an INSERT change cannot happen for a relation with no replica identity. However, this is true only for a change that needs references to old rows, i.e. UPDATE or DELETE, so trying to use logical replication with a relation that has no replica identity led to an assertion failure in the publisher when issuing an INSERT. This commit removes the incorrect assertion, and adds more regression tests to provide coverage for relations without replica identity. Reported-by: Neha Sharma Author: Dilip Kumar, Michael Paquier Reviewed-by: Andres Freund Discussion: https://postgr.es/m/CANiYTQsL1Hb8_Km08qd32svrqNumXLJeoGo014O7VZymgOhZEA@mail.gmail.com Backpatch-through: 10
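A minimal sketch of the previously-asserting case (hypothetical names, along the lines of the new regression tests):

    CREATE TABLE no_ri (a int);
    ALTER TABLE no_ri REPLICA IDENTITY NOTHING;
    CREATE PUBLICATION pub FOR TABLE no_ri;
    -- INSERT needs no references to old rows, so it must be allowed even
    -- without a replica identity; UPDATE or DELETE would be rejected.
    INSERT INTO no_ri VALUES (1);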
-
- 11 Jan, 2020 4 commits
-
-
Tom Lane authored
Fix assorted bugs in handling of non-blocking I/O when using GSSAPI encryption. The encryption layer could return the wrong status information to its caller, resulting in effectively dropping some data (or possibly in aborting a not-broken connection), or in a "livelock" situation where data remains to be sent but the upper layers think transmission is done and just go to sleep. There were multiple small thinkos contributing to that, as well as one big one (failure to think through what to do when a send fails after having already transmitted data). Note that these errors could cause failures whether the client application asked for non-blocking I/O or not, since both libpq and the backend always run things in non-block mode at this level. Also get rid of use of static variables for GSSAPI inside libpq; that's entirely not okay given that multiple connections could be open at once inside a single client process. Also adjust a bunch of random small discrepancies between the frontend and backend versions of the send/receive functions -- except for error handling, they should be identical, and now they are. Also extend the Kerberos TAP tests to exercise cases where nontrivial amounts of data need to be pushed through encryption. Before, those tests didn't provide any useful coverage at all for the cases of interest here. (They still might not, depending on timing, but at least there's a chance.) Per complaint from pmc@citylink and subsequent investigation. Back-patch to v12 where this code was introduced. Discussion: https://postgr.es/m/20200109181822.GA74698@gate.oper.dinoex.org
-
Peter Eisentraut authored
Some callers are not using it, so it's wasteful to have to specify it. Reviewed-by: Masahiko Sawada <masahiko.sawada@2ndquadrant.com> Discussion: https://www.postgresql.org/message-id/CA+fd4k4BcYrYucNfTnK-CQX3+jsG+PRPEhHAUSo-W4P0Lec57A@mail.gmail.com
-
Peter Eisentraut authored
Replace the solitary use with a bool. Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://www.postgresql.org/message-id/flat/a6f91ead-0ce4-2a34-062b-7ab9813ea308%402ndquadrant.com
-
Noah Misch authored
FileClose() failure ordinarily causes a PANIC. Suppose the user disables that PANIC via data_sync_retry=on. After mdclose() issued a FileClose() that failed, calls into md.c raised SIGSEGV. This fix adds repalloc() calls during mdclose() and updates a comment about ignoring repalloc() cost. The rate of relation segment count change is a minor factor; more relevant to overall performance is the rate of mdclose() and subsequent re-opening of segments. Back-patch to v10, where commit 45e191e3 introduced the bug. Reviewed by Kyotaro Horiguchi. Discussion: https://postgr.es/m/20191222091930.GA1280238@rfd.leadboat.com
-
- 10 Jan, 2020 5 commits
-
-
Peter Geoghegan authored
Author: Peter Geoghegan Reviewed-By: Heikki Linnakangas
-
Peter Geoghegan authored
Author: Peter Geoghegan Reviewed-By: Heikki Linnakangas
-
Alvaro Herrera authored
This simplifies addition of further flags. Author: Nikhil Sontakke Discussion: https://postgr.es/m/CAMGcDxeViP+R-OL7QhzUV9eKCVjURobuY1Zijik4Ay_Ddwo4Cg@mail.gmail.com
-
Michael Paquier authored
Reported-by: Tham Nguyen Discussion: https://postgr.es/m/157851402876.29175.12977878383183540468@wrigleys.postgresql.org Backpatch-through: 9.4
-
Michael Paquier authored
This is a follow-up of 30a3e772, making the output more consistent when using --debug for meta-command execution. Author: Michael Paquier Reviewed-by: Fabien Coelho Discussion: https://postgr.es/m/alpine.DEB.2.21.1912241100390.3339@pseudo
-
- 09 Jan, 2020 7 commits
-
-
Tom Lane authored
Experience so far suggests that getting these tests to pass on all libedit versions that are out there may be impossible, or require dumbing down the tests to the point of uselessness. So we need to provide a way to skip them when the user knows they'll fail. An environment variable is probably the most convenient way to deal with this; it's easy for, e.g., a buildfarm animal's configuration to set up. Discussion: https://postgr.es/m/9594.1578586797@sss.pgh.pa.us
-
Tom Lane authored
The core idea of this patch is to make the parser generate join alias Vars (that is, ones with varno pointing to a JOIN RTE) only when the alias Var is actually different from any raw join input, that is, a type coercion and/or COALESCE is necessary to generate the join output value. Otherwise just generate varno/varattno pointing to the relevant join input column.

In effect, this means that the planner's flatten_join_alias_vars() transformation is already done in the parser, for all cases except (a) columns that are merged by JOIN USING and are transformed in the process, and (b) whole-row join Vars. In principle that would allow us to skip doing flatten_join_alias_vars() in many more queries than we do now, but we don't have quite enough infrastructure to know that we can do so --- in particular there's no cheap way to know whether there are any whole-row join Vars. I'm not sure if it's worth the trouble to add a Query-level flag for that, and in any case it seems like fit material for a separate patch. But even without skipping the work entirely, this should make flatten_join_alias_vars() faster, particularly where there are nested joins that it previously had to flatten recursively.

An essential part of this change is to replace Var nodes' varnoold/varoattno fields with varnosyn/varattnosyn, which have considerably more tightly-defined meanings than the old fields: when they differ from varno/varattno, they identify the Var's position in an aliased JOIN RTE, and the join alias is what ruleutils.c should print for the Var. This is necessary because the varno change destroyed ruleutils.c's ability to find the JOIN RTE from the Var's varno.

Another way in which this change broke ruleutils.c is that it's no longer feasible to determine, from a JOIN RTE's joinaliasvars list, which join columns correspond to which columns of the join's immediate input relations. (If those are sub-joins, the joinaliasvars entries may point to columns of their base relations, not the sub-joins.) But that was a horrid mess requiring a lot of fragile assumptions already, so let's just bite the bullet and add some more JOIN RTE fields to make it more straightforward to figure that out. I added two integer-List fields containing the relevant column numbers from the left and right input rels, plus a count of how many merged columns there are.

This patch depends on the ParseNamespaceColumn infrastructure that I added in commit 5815696b. The biggest bit of code change is restructuring transformFromClauseItem's handling of JOINs so that the ParseNamespaceColumn data is propagated upward correctly.

Other than that and the ruleutils fixes, everything pretty much just works, though some processing is now inessential. I grabbed two pieces of low-hanging fruit in that line:

1. In find_expr_references, we don't need to recurse into join alias Vars anymore. There aren't any except for references to merged USING columns, which are more properly handled when we scan the join's RTE. This change actually fixes an edge-case issue: we will now record a dependency on any type-coercion function present in a USING column's joinaliasvar, even if that join column has no references in the query text. The odds of the missing dependency causing a problem seem quite small: you'd have to posit somebody dropping an implicit cast between two data types, without removing the types themselves, and then having a stored rule containing a whole-row Var for a join whose USING merge depends on that cast. So I don't feel a great need to change this in the back branches. But in theory this way is more correct.

2. markRTEForSelectPriv and markTargetListOrigin don't need to recurse into join alias Vars either, because the cases they care about don't apply to alias Vars for USING columns that are semantically distinct from the underlying columns. This removes the only case in which markVarForSelectPriv could be called with NULL for the RTE, so adjust the comments to describe that hack as being strictly internal to markRTEForSelectPriv.

catversion bump required due to changes in stored rules. Discussion: https://postgr.es/m/7115.1577986646@sss.pgh.pa.us
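As a sketch of the distinction (hypothetical tables t1 and t2, each with a column a of the same type):

    -- The merged USING column of a FULL JOIN still needs a true join alias
    -- Var, since the output is effectively COALESCE(t1.a, t2.a):
    SELECT a FROM t1 FULL JOIN t2 USING (a);
    -- In a plain inner join the parser can now point straight at the input:
    SELECT a FROM t1 JOIN t2 USING (a);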
-
Robert Haas authored
This tells you about allocations that have been made from the main shared memory segment. The original patch also tried to show information about dynamic shared memory allocation as well, but I decided to leave that problem for another time. Andres Freund and Robert Haas, reviewed by Michael Paquier, Marti Raudsepp, Tom Lane, Álvaro Herrera, and Kyotaro Horiguchi. Discussion: http://postgr.es/m/20140504114417.GM12715@awork2.anarazel.de
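Assuming this is the commit adding the pg_shmem_allocations view, inspecting the largest allocations would look like:

    SELECT name, off, size, allocated_size
    FROM pg_shmem_allocations
    ORDER BY allocated_size DESC
    LIMIT 5;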
-
Robert Haas authored
Per the buildfarm, via Michael Paquier. Discussion: http://postgr.es/m/20200108032648.GE3413@paquier.xyz
-
Magnus Hagander authored
Reported-by: Octopus ZHANG Author: Daniel Gustafsson
-
Peter Eisentraut authored
We currently have several sets of files generated from data provided by Unicode. These all have ad hoc rules and instructions for updating when new Unicode versions appear, and it's not done consistently. This patch centralizes and automates the process and makes it part of the release checklist. The Unicode and CLDR versions are specified in Makefile.global.in. There is a new make target "update-unicode" that downloads all the relevant files and runs the generation script. There is also a new script for generating the table of combining characters for ucs_wcwidth(). That table is now in a separate include file rather than hardcoded into the middle of other code. This is based on the script that was used for generating d8594d12, but the script itself wasn't committed at that time. Reviewed-by: John Naylor <john.naylor@2ndquadrant.com> Discussion: https://www.postgresql.org/message-id/flat/c8d05f42-443e-6c23-819b-05b31759a37c@2ndquadrant.com
-
Andrew Dunstan authored
This allows different users to authenticate with different certificates. Author: Craig Ringer
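The message doesn't name the mechanism; assuming it is the change allowing sslcert/sslkey options in postgres_fdw user mappings, a sketch with hypothetical names:

    CREATE USER MAPPING FOR alice SERVER remote_srv
        OPTIONS (user 'alice',
                 sslcert '/etc/certs/alice.crt',
                 sslkey '/etc/certs/alice.key');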
-
- 08 Jan, 2020 9 commits
-
-
Peter Eisentraut authored
Change the exception syntax used in the tests to use the more current "except Exception as ex:" rather than the old "except Exception, ex:". Since support for Python <2.6 has been removed, all supported versions now support the new style, and we can save one step in the Python 3 compatibility conversion. Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://www.postgresql.org/message-id/flat/98b69261-298c-13d2-f34d-836fd9c29b21%402ndquadrant.com
-
Peter Eisentraut authored
Supporting very old Python versions is a maintenance burden, especially with the several variant test files to maintain for Python <2.6. Since we have dropped support for older OpenSSL versions in 7b283d0e, RHEL 5 is now effectively desupported, and that was also the only mainstream operating system still using Python versions before 2.6, so it's a good time to drop those as well. Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://www.postgresql.org/message-id/flat/98b69261-298c-13d2-f34d-836fd9c29b21%402ndquadrant.com
-
Alvaro Herrera authored
Make the value null only at pg_stat_activity-output time, as suggested by Tom Lane, instead of messing with the internal state. This should appease buildfarm members with force_parallel_mode=regress, which are running parallel queries on logical replication walsenders. The fact that walsenders can run parallel queries should perhaps be studied more carefully, but for the moment let's get rid of the red blots in the buildfarm. Backpatch to pg10, like the previous commit. Discussion: https://postgr.es/m/30804.1578438763@sss.pgh.pa.us
-
Tom Lane authored
Use the parser's standard type coercion machinery to convert the output column(s) of a SQL function's final SELECT or RETURNING to the type(s) they should have according to the function's declared result type. We'll allow any case where an assignment-level coercion is available. Previously, we failed unless the required coercion was a binary-compatible one (and the documentation ignored this, falsely claiming that the types must match exactly). Notably, the coercion now accounts for typmods, so that cases where a SQL function is declared to return a composite type whose columns are typmod-constrained now behave as one would expect. Arguably this aspect is a bug fix, but the overall behavioral change here seems too large to consider back-patching. A nice side-effect is that functions can now be inlined in a few cases where we previously failed to do so because of type mismatches. Discussion: https://postgr.es/m/18929.1574895430@sss.pgh.pa.us
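A minimal sketch of the new behavior (hypothetical function):

    -- int is not binary-compatible with numeric, so this declaration used
    -- to be rejected; an assignment-level coercion now applies, including
    -- the typmod, so the result is coerced to numeric(8,2).
    CREATE FUNCTION price() RETURNS numeric(8,2)
        LANGUAGE sql AS 'SELECT 42';
    SELECT price();  -- 42.00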
-
Stephen Frost authored
The original comment was a bit confusing, pointed out by Alvaro Herrera. Thread: https://postgr.es/m/20191224151520.GA16435%40alvherre.pgsql
-
Tom Lane authored
ALTER TABLE failed if a column referenced in a GENERATED expression had been added or changed in type earlier in the ALTER command. That's because the GENERATED expression needs to be evaluated against the table's updated tuples, but it was being evaluated against the original tuples. (Fortunately the executor has adequate cross-checks to notice the mismatch, so we just got an obscure error message and not anything more dangerous.) Per report from Andreas Joseph Krogh. Back-patch to v12 where GENERATED was added. Discussion: https://postgr.es/m/VisenaEmail.200.231b0a41523275d0.16ea7f800c7@tc7-visena
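A hypothetical reproduction of the failing pattern:

    CREATE TABLE gtest (a int);
    -- Referencing a column whose type was changed earlier in the same
    -- ALTER TABLE previously evaluated the GENERATED expression against
    -- the original tuples and failed with an obscure error.
    ALTER TABLE gtest
        ALTER COLUMN a TYPE numeric,
        ADD COLUMN b numeric GENERATED ALWAYS AS (a * 2) STORED;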
-
Peter Eisentraut authored
Author: Fabien COELHO <coelho@cri.ensmp.fr> Reviewed-by: Michael Paquier <michael@paquier.xyz> Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com> Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1912241100390.3339@pseudo
-
Michael Paquier authored
This reverts commit a052f6cb, following complaints from Robert Haas and Tom Lane. Backpatch down to 9.4, like the previous commit. Discussion: https://postgr.es/m/CA+TgmobL4npEX5=E5h=5Jm_9mZun3MT39Kq2suJFVeamc9skSQ@mail.gmail.com Backpatch-through: 9.4
-
Michael Paquier authored
Failures in allocations could lead to crashes with NULL pointer dereferences. The memory context TopMemoryContext is used instead to keep the plans allocated in the session alive. A more specific context could be used here, but this is left for later. Reported-by: Jian Zhang Author: Michael Paquier Reviewed-by: Tom Lane, Andres Freund Discussion: https://postgr.es/m/16190-70181c803641c3dc@postgresql.org
-
- 07 Jan, 2020 5 commits
-
-
Alvaro Herrera authored
Returning a non-NULL time is pointless, since a walsender is not a process that would be running normal transactions anyway, but the code was unintentionally exposing the process start time intermittently, which was not only bogus but also confused monitoring systems looking for idle transactions. Fix by avoiding all updates in walsenders. Backpatch to 11, where walsenders started appearing in pg_stat_activity. Reported-by: Tomas Vondra Discussion: https://postgr.es/m/20191209234409.exe7osmyalwkt5j4@development
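For instance, a monitoring query of the kind that was being confused:

    SELECT pid, backend_type, xact_start
    FROM pg_stat_activity
    WHERE backend_type = 'walsender';  -- xact_start should now be NULL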
-
Robert Haas authored
Instead of always calling heap_fetch_toast_slice during detoasting, invoke a table AM callback which, when the toast table is a heap table, will be heap_fetch_toast_slice. This makes it possible for a table AM other than heap to be used as a TOAST table. It also completes the series of commits intended to improve the interaction of tableam with TOAST that began with commit 8b94dab0; detoast.c is now, hopefully, fully AM-independent. Patch by me, reviewed by Andres Freund and Peter Eisentraut. Discussion: http://postgr.es/m/CA+TgmoZv-=2iWM4jcw5ZhJeL18HF96+W1yJeYrnGMYdkFFnEpQ@mail.gmail.com
-
Robert Haas authored
Previously, the toast table had to be implemented by the same AM that was used for the main table, which was bad, because the detoasting code won't work with anything but heap. This commit doesn't fix the latter problem, although there's another patch coming which does, but it does let you pick something that works (i.e. heap, right now). Patch by me, reviewed by Andres Freund. Discussion: http://postgr.es/m/CA+TgmoZv-=2iWM4jcw5ZhJeL18HF96+W1yJeYrnGMYdkFFnEpQ@mail.gmail.com
-
Robert Haas authored
This one-line change provoked a lot of discussion, but ultimately the consensus seems to be that allowing a larger value might be useful to somebody, and probably won't hurt anyone who chooses not to take advantage of the higher maximum limit. Vyacheslav Makarov, reviewed by many people. Discussion: http://postgr.es/m/7b5ecc5a9991045e2f13c84e3047541d@postgrespro.ru
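The parameter isn't named here; assuming this is the raised maximum for track_activity_query_size, opting in would look like:

    -- Hypothetical example; the setting requires a server restart.
    ALTER SYSTEM SET track_activity_query_size = '1MB';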
-
Tom Lane authored
Instead of hard-wiring the netmask as /32, allow it to be specified where we specify the server address. This will ease changing the test to use IPv6, when/if somebody wants to do that. Also remove the hard-wired pg_hba.conf entries for IPv6 (::1/128). These have never had any usefulness, because the client side of the tests has always explicitly connected to $SERVERHOSTADDR which has always been set to IPv4 (127.0.0.1). All they accomplish is to break the test on non-IPv6-supporting hosts, and besides that they violate the express intent of the code to minimize the server's range of allowed connections. This could be back-patched, perhaps, but for now I don't see a need to. Discussion: https://postgr.es/m/1899.1578356089@sss.pgh.pa.us
-
- 06 Jan, 2020 2 commits
-
-
Tom Lane authored
Since the WAL flush position only moves forward, it's safe to cache its previous value within each walsender process, and update from shared memory only once we've caught up to the previously-seen value. When there are many active walsenders, this makes for a very significant reduction in the amount of contention on the XLogCtl->info_lck spinlock. This patch also adjusts the logic so that we update our idea of the flush position after processing a WAL record, rather than beforehand. This may cause us to realize we're not caught up when the preceding coding would've thought that we were, but that seems all to the good; it may avoid a useless sleep-and-wakeup cycle. Back-patch to v12. The contention problem exists in prior branches, but it's much less severe (due to inefficiencies elsewhere) so there seems no need to take any risk of back-patching further. Pierre Ducroquet, reviewed by Julien Rouhaud Discussion: https://postgr.es/m/2931018.Vxl9zapr77@pierred-pdoc
-
Tom Lane authored
These allow better control of trailing zeroes in numeric values. Pavel Stehule, based on an old proposal of Marko Tiikkaja's; review by Karl Pinc Discussion: https://postgr.es/m/CAFj8pRDjs-navGASeF0Wk74N36YGFJ+v=Ok9_knRa7vDc-qugg@mail.gmail.com
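Presumably these are the min_scale() and trim_scale() functions; if so, usage looks like:

    SELECT min_scale(8.4100);   -- 2: smallest scale that preserves the value
    SELECT trim_scale(8.4100);  -- 8.41: trailing zeroes removed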
-