- 20 Jan, 2017 10 commits
-
-
Robert Haas authored
If either bound is infinite, then we shouldn't even try to perform a comparison of the values themselves. Rearrange the logic so that we don't. Per buildfarm member skink and Tom Lane.
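For illustration, a sketch of the kind of partitions whose bounds trigger these comparisons (using the UNBOUNDED range-bound syntax of this development cycle; table and partition names are hypothetical):

```sql
-- partitions with an infinite lower or upper bound
CREATE TABLE events (at timestamptz) PARTITION BY RANGE (at);
CREATE TABLE events_old PARTITION OF events
    FOR VALUES FROM (UNBOUNDED) TO ('2017-01-01');
CREATE TABLE events_new PARTITION OF events
    FOR VALUES FROM ('2017-01-01') TO (UNBOUNDED);
```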
-
Alvaro Herrera authored
This was forgotten in 665d1fad and caused the whole buildfarm to become red for a little while. Also fix a typo in a nearby error message. Author: Petr Jelínek
-
Alvaro Herrera authored
We were using != to compare strings, for which "ne" is the right thing. It's not clear why it works everywhere except on Pavan's machine, but it's clearly bogus anyway. Author and reporter: Pavan Deolasee Discussion: https://postgr.es/m/CABOikdPhsHM+pX8skoEY1_T0OtKdO1udzUj4VCjU5VEt+bj4eA@mail.gmail.com
-
Tom Lane authored
pgoutput evidently needs to be built without -DBUILDING_DLL. (It seems like a pretty bad idea that these makefiles need to know exactly where all the shlibs are in the tree, or maybe what's bad is putting them under src/backend/. But right now is not the time to redesign that.) Also, remove "override CPPFLAGS" in pgoutput's Makefile. I don't think that that actually has any bad consequences, but it's certainly useless in a directory that has no .h files, and it might be contributing to the failure somehow. Per buildfarm.
-
Tom Lane authored
A pgbench meta command can now be continued onto additional line(s) of a script file by writing backslash-return. The continuation marker is equivalent to white space in that it separates tokens. Eventually it'd be nice to have the same thing in psql, but that will be a much larger project. Fabien Coelho, reviewed by Rafia Sabih Discussion: https://postgr.es/m/alpine.DEB.2.20.1610031049310.19411@lancre
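A minimal sketch of the new continuation syntax (script contents assumed, not taken from the commit):

```sql
-- pgbench script: a \set meta command continued across lines with
-- backslash-return; the continuation acts as token-separating white space
\set aid random(1, 100000 * :scale) \
    + 42
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
```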
-
Fujii Masao authored
Ayumi Ishii
-
Peter Eisentraut authored
The publication test didn't drop all the publications it created, although it apparently intended to. There is still a bug with dependency tracking in there, but this should at least quiet down the buildfarm.
-
Peter Eisentraut authored
-
Peter Eisentraut authored
- Add PUBLICATION catalogs and DDL
- Add SUBSCRIPTION catalog and DDL
- Define logical replication protocol and output plugin
- Add logical replication workers

From: Petr Jelinek <petr@2ndquadrant.com>
Reviewed-by: Steve Singer <steve@ssinger.info>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Erik Rijkers <er@xs4all.nl>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
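A minimal sketch of the new DDL (names and connection string are hypothetical):

```sql
-- on the publisher
CREATE PUBLICATION mypub FOR TABLE users;

-- on the subscriber
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION mypub;
```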
-
Tom Lane authored
Brown-paper-bag bug in commit ab1f0c82: the old code here coped with null CachedPlanSource.raw_parse_tree, the new code not so much. Per report from Dave Cramer. No regression test, because our core testing infrastructure doesn't provide any easy way to exercise this path. Fortunately, the JDBC crew test it regularly. Discussion: https://postgr.es/m/CADK3HH+Ug3xCysKqw_dZOnaNnytZ1Rh5yP05hjO-e4NoyRxVvA@mail.gmail.com
-
- 19 Jan, 2017 13 commits
-
-
Tom Lane authored
I'd somehow talked myself into believing that set_append_rel_size doesn't need to worry about getting back an AND clause when it applies eval_const_expressions to the result of adjust_appendrel_attrs (that is, transposing the appendrel parent's restriction clauses for one child). But that is nonsense, and Andreas Seltenreich's fuzz tester soon turned up a counterexample. Put back the make_ands_implicit step that was there before, and add a regression test covering the case. Report: https://postgr.es/m/878tq6vja6.fsf@ansel.ydns.eu
-
Andres Freund authored
Due to the changed costing in that commit, hash aggregates started to be used, which results in big-endian vs. little-endian output differences. Disable hash-aggs for those tests. Author: Andres Freund, with input from Tom Lane Discussion: https://postgr.es/m/22891.1484791792@sss.pgh.pa.us
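Presumably along these lines in the affected tests:

```sql
-- force a group aggregate; hash-aggregate output order differs
-- between big- and little-endian machines
SET enable_hashagg TO off;
```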
-
Andres Freund authored
Since 69f4b9c8 plain expression evaluation (and thus normal projection) can't return sets of tuples anymore. Thus remove code dealing with that possibility. This will require adjustments in external code using ExecEvalExpr()/ExecProject() - that should neither be hard nor very common. Author: Andres Freund and Tom Lane Discussion: https://postgr.es/m/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de
-
Alvaro Herrera authored
If a user requests the commit timestamp for a transaction old enough that its data is concurrently being truncated away by vacuum at just the right time, they would receive an ugly internal file-not-found error message from slru.c rather than the expected NULL return value. In a primary server, the window for the race is very small: the lookup has to occur exactly between the two calls by vacuum, and there's not a lot that happens between them (mostly just a multixact truncate). In a standby server, however, the window is larger because the truncation is executed as soon as the WAL record for it is replayed, but the advance of the oldest-Xid is not executed until the next checkpoint record. To fix in the primary, simply reverse the order of operations in vac_truncate_clog. To fix in the standby, augment the WAL truncation record so that the standby is aware of the new oldest-XID value and can apply the update immediately. WAL version bumped because of this. No backpatch, because of the low importance of the bug and its rarity. Author: Craig Ringer Reviewed-By: Petr Jelínek, Peter Eisentraut Discussion: https://postgr.es/m/CAMsr+YFhVtRQT1VAwC+WGbbxZZRzNou=N9Ed-FrCqkwQ8H8oJQ@mail.gmail.com
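For example (a sketch, assuming track_commit_timestamp is enabled):

```sql
-- for a transaction whose commit-timestamp data has already been
-- truncated away, this now returns NULL rather than an internal
-- file-not-found error from slru.c
SELECT pg_xact_commit_timestamp('12345'::xid);
```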
-
Peter Eisentraut authored
The previous coding did not properly quote the user name before casting it to regrole. To avoid all that, just pass in BOOTSTRAP_SUPERUSERID numerically. Also fix one place where the BOOTSTRAP_SUPERUSERID was hardcoded as 10.
-
Robert Haas authored
This occasionally causes failures; the order in which the affected objects are listed is not 100% consistent. Amit Langote
-
Robert Haas authored
Code to map attribute numbers in map_partition_varattnos() duplicates what convert_tuples_by_name_map() does. Avoid that. Amit Langote, per a report from Álvaro Herrera. Discussion: http://postgr.es/m/9ce97382-54c8-deb3-9ee9-a2ec271d866b%40lab.ntt.co.jp
-
Robert Haas authored
Account for the fact that the highest bound less than or equal to the upper bound might be either the lower or the upper bound of the overlapping partition, depending on whether the proposed partition completely contains the existing partition or merely overlaps it. Also, we need not continue searching for an even greater bound in partition_bound_bsearch() once we find the first bound that is *equal* to the probe, because we don't have duplicate datums. That spends cycles needlessly. Amit Langote, per a report from Amul Sul. Cosmetic changes by me. Discussion: http://postgr.es/m/CAAJ_b94XgbqVoXMyxxs63CaqWoMS1o2gpHiU0F7yGnJBnvDc_A%40mail.gmail.com
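A sketch of the two overlap cases the fix distinguishes (names hypothetical):

```sql
CREATE TABLE t (a int) PARTITION BY RANGE (a);
CREATE TABLE t1 PARTITION OF t FOR VALUES FROM (10) TO (20);
-- merely overlaps t1: must be rejected
CREATE TABLE t2 PARTITION OF t FOR VALUES FROM (15) TO (30);  -- ERROR
-- completely contains t1: must also be rejected
CREATE TABLE t3 PARTITION OF t FOR VALUES FROM (0) TO (30);   -- ERROR
```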
-
Robert Haas authored
In ExecInsert(), do not switch back to the root partitioned table ResultRelInfo until after we finish ExecProcessReturning(), so that RETURNING projection is done using the partition's descriptor. For the projection to work correctly, we must initialize it for each leaf partition during ModifyTableState initialization. Amit Langote
-
Robert Haas authored
When a tuple is inherited into a partitioning root, no partition constraints need to be enforced; when it is inserted into a leaf, the parent's partitioning quals need to be enforced. The previous coding got both of those cases right. When a tuple is inserted into an intermediate level of the partitioning hierarchy (i.e. a table which is both a partition itself and in turn partitioned), it must enforce the partitioning qual inherited from its parent. That case got overlooked; repair. Amit Langote
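A sketch of the previously overlooked case (names hypothetical):

```sql
CREATE TABLE root (a int, b int) PARTITION BY RANGE (a);
CREATE TABLE mid PARTITION OF root
    FOR VALUES FROM (1) TO (10) PARTITION BY RANGE (b);
CREATE TABLE leaf PARTITION OF mid FOR VALUES FROM (1) TO (10);
-- inserting directly into the intermediate level must now enforce the
-- partitioning qual mid inherits from root (a >= 1 AND a < 10)
INSERT INTO mid VALUES (100, 5);  -- ERROR
```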
-
Stephen Frost authored
When considering a sequence's Data entry in dumpSequenceData, we were actually looking at the sequence definition's dump flag to decide if we should dump the data or not. That's generally fine, except when the sequence data entry was created by processExtensionTables() because it's a config sequence. In that case, the sequence itself won't be marked as dumping data because it's part of an extension, leading to the need for processExtensionTables() to create the sequence data entry. This leads to extension config sequence data not being included in the dump when it should be. Fix this by looking at the sequence data's dump flag instead, just as dumpTableData() was doing for tables (which is why config tables were correctly being handled), and add a regression test to make sure we don't break it moving forward. All of this is a bit roundabout since we can now represent which components of a given dump item should be dumped out through the dump flag. A future improvement might be to change checkExtensionMembership() to check for config sequences/tables and set the dump flag based on that directly, possibly removing the need for processExtensionTables(). Bug found by Daniele Varrazzo. Discussion: https://postgr.es/m/CA+mi_8ZmxQM7+nZ7pJ8uyfxc9V3o=UAG14dVqvftdmvw8OJ3gQ@mail.gmail.com Patch by Michael Paquier, with some tweaking of the regression tests by me. Back-patch to 9.6 where the bug was introduced.
-
Alvaro Herrera authored
There doesn't seem to be any reason not to allow negative years to be interpreted as BC, so do that. The documentation is pretty vague on the details of this function, so nothing needs to change there. Reported-by: Andy Abelisto, in bug #14446
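For instance:

```sql
-- a negative year is now interpreted as BC
SELECT make_date(-44, 3, 15);  -- 0044-03-15 BC
```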
-
Andres Freund authored
Hopefully this'll unbreak the buildfarm.
-
- 18 Jan, 2017 15 commits
-
-
Tom Lane authored
Correct a misstatement about how things used to work: we did allow nested SRFs before, as long as no function had more than one set-returning input. Also, attempt to document the fact that the new implementation changes the behavior for SRFs within conditional constructs (eg CASE): the conditional construct no longer gates whether the SRF is run, and thus cannot affect the number of rows emitted. We might want to change this behavior, but first it behooves us to see if we can explain it. Minor other wordsmithing on what I wrote yesterday, too. Discussion: https://postgr.es/m/20170118214702.54b2mdbxce5piwv5@alap3.anarazel.de
-
Andres Freund authored
Evaluation of set returning functions (SRFs) in the targetlist (like SELECT generate_series(1,5)) so far was done in the expression evaluation (i.e. ExecEvalExpr()) and projection (i.e. ExecProject/ExecTargetList) code. This meant that most executor nodes performing projection, and most expression evaluation functions, had to deal with the possibility that an evaluated expression could return a set of return values. That's bad because it leads to repeated code in a lot of places. It also, and that's my (Andres's) motivation, made it a lot harder to implement a more efficient way of doing expression evaluation. To fix this, introduce a new executor node (ProjectSet) that can evaluate targetlists containing one or more SRFs. To avoid the complexity of the old way of handling nested expressions returning sets (e.g. having to pass up ExprDoneCond, and dealing with arguments to functions returning sets etc.), those SRFs can only be at the top level of the node's targetlist. The planner makes sure (via split_pathtarget_at_srfs()) that SRF evaluation is only necessary in ProjectSet nodes and that SRFs are only present at the top level of the node's targetlist. If there are nested SRFs, the planner creates multiple stacked ProjectSet nodes. The ProjectSet nodes always get input from an underlying node. We also discussed and prototyped evaluating targetlist SRFs using ROWS FROM(), but that turned out to be more complicated than we'd hoped. While moving SRF evaluation to ProjectSet would allow retaining the old "least common multiple" behavior when multiple SRFs are present in one targetlist (i.e. continue returning rows until all SRFs are at the end of their input at the same time), we decided to instead only return rows till all SRFs are exhausted, returning NULL for already exhausted ones. We deemed the previous behavior to be too confusing, unexpected and actually not particularly useful. As a side effect, the previously prohibited case of multiple set returning arguments to a function is now allowed. Not because it's particularly desirable, but because it ends up working and there seems to be no argument for adding code to prohibit it. Currently the behavior for COALESCE and CASE containing SRFs has changed, returning multiple rows from the expression, even when the SRF-containing "arm" of the expression is not evaluated. That's because the SRFs are evaluated in a separate ProjectSet node. As that's quite confusing, we're likely to instead prohibit SRFs in those places. But that's still being discussed, and the code would reside in places not touched here, so that's a task for later. There's a lot of, now superfluous, code dealing with set return expressions around. But as the changes to get rid of those are verbose and largely boring, it seems better for readability to keep the cleanup as a separate commit. Author: Tom Lane and Andres Freund Discussion: https://postgr.es/m/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de
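A sketch of the user-visible changes described above:

```sql
-- SRFs of different lengths: rows are returned until all SRFs are
-- exhausted, padding with NULLs, instead of running to the least
-- common multiple of the periods
SELECT generate_series(1, 2) AS a, generate_series(1, 3) AS b;
--  a | b
-- ---+---
--  1 | 1
--  2 | 2
--    | 3

-- per the note above, an SRF inside CASE is evaluated in a separate
-- ProjectSet node, so the CASE no longer gates whether it runs; this
-- behavior is still under discussion and may be prohibited later
SELECT CASE WHEN false THEN generate_series(1, 3) ELSE 0 END;
```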
-
Robert Haas authored
Typo fix from Mithun Cy; other improvements by me.
-
Tom Lane authored
Thinko in commit a4523c5a. It doesn't really affect anything at present, but it would be a problem if any tests added later in this file ought to get index-only-scan plans. Back-patch, like the previous commit, just to avoid surprises in case we add such a test and then back-patch it. Nikita Glukhov Discussion: https://postgr.es/m/8b70135d-ad38-bdd8-ac92-71e2b3c273cf@postgrespro.ru
-
Alvaro Herrera authored
These macros work fine when they are used directly in an "if" test or similar, but as soon as the return values are assigned to boolean variables (or passed as boolean arguments to some function), they become bugs, hopefully caught by compiler warnings. To avoid future problems, fix the definitions so that they return actual booleans. To further minimize the risk that somebody uses them in back-patched fixes that only work correctly in branches starting from the current master and not in old ones, back-patch the change to supported branches as appropriate. See also commit af4472bc, and the long discussion (and larger patch) in the thread mentioned in its commit message. Discussion: https://postgr.es/m/18672.1483022414@sss.pgh.pa.us
-
Magnus Hagander authored
This makes it possible to delete multiple keys from a jsonb value by passing in an array of text values, which makes the operation much faster than individually deleting the keys (which would require copying the jsonb structure over and over again). Reviewed by Dmitry Dolgov and Michael Paquier
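For example:

```sql
-- delete several keys in one pass with the new jsonb - text[] operator
SELECT '{"a": 1, "b": 2, "c": 3}'::jsonb - ARRAY['a', 'b'];
-- result: {"c": 3}
```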
-
Tom Lane authored
These resulted in wrong answers if the relabeled argument could be matched to an index column, as shown in bug #14504 from Evgeniy Kozlov. We might be able to resurrect these optimizations by adjusting the planner's treatment of RelabelType, or by adjusting btree's rules for selecting comparison functions, but either solution will take careful analysis and does not sound like a fit candidate for backpatching. I left the catalog infrastructure in place and just reduced the transform functions to always-return-NULL. This would be necessary anyway in the back branches, and it doesn't seem important to be more invasive in HEAD. Bug introduced by commit b8a18ad4. Back-patch to 9.5 where that came in. Report: https://postgr.es/m/20170118144828.1432.52823@wrigleys.postgresql.org Discussion: https://postgr.es/m/18771.1484759439@sss.pgh.pa.us
-
Robert Haas authored
This isn't really guaranteed to always produce exactly the same output; the order can change from run to run. See related cleanup in 257d8157.
-
Robert Haas authored
Commit a2566508 fixed some issues with how PartitionDispatch related code handled multi-level partitioned tables, but didn't add any tests. Discussion: http://postgr.es/m/CA%2BTgmoZ86v1G%2Bzx9etMiSQaBBvYMKfU-iitqZArSh5z0n8Q4cA%40mail.gmail.com Amit Langote, per a complaint from me.
-
Robert Haas authored
The original table partitioning patch overlooked this. Discussion: http://postgr.es/m/CAG1_KcDJiZB=L6yOUO_bVufj2q2851_xdkfhw0JdcD_2VtKssw@mail.gmail.com Keith Fiske and Amit Langote, adjusted by me.
-
Alvaro Herrera authored
This avoids additional translatable strings for each distinct type, as well as making our quoting style around type names more consistent (namely, that we don't quote type names). This continues what started as f402b995. Discussion: https://postgr.es/m/20160401170642.GA57509@alvherre.pgsql
-
Robert Haas authored
Forthcoming patches to allow other types of parallel scans will need this logic, or something like it. Dilip Kumar
-
Tom Lane authored
This resulted in failures depending on the order of "locale -a" output. The original coding in initdb sorted the results, but that should be unnecessary as long as "locale -a" doesn't print duplicate names. The original entries will then all be non-dups, and while we might generate duplicate aliases by stripping, they should be for different encodings and thus not conflict. Even if the latter assumption fails somehow, it won't be fatal because we're using if_not_exists mode for the aliases. Discussion: https://postgr.es/m/26116.1484751196%40sss.pgh.pa.us
-
Tom Lane authored
In an RLS query, we must ensure that security filter quals are evaluated before ordinary query quals, in case the latter contain "leaky" functions that could expose the contents of sensitive rows. The original implementation of RLS planning ensured this by pushing the scan of a secured table into a sub-query that it marked as a security-barrier view. Unfortunately this results in very inefficient plans in many cases, because the sub-query cannot be flattened and gets planned independently of the rest of the query. To fix, drop the use of sub-queries to enforce RLS qual order, and instead mark each qual (RestrictInfo) with a security_level field establishing its priority for evaluation. Quals must be evaluated in security_level order, except that "leakproof" quals can be allowed to go ahead of quals of lower security_level, if it's helpful to do so. This has to be enforced within the ordering of any one list of quals to be evaluated at a table scan node, and we also have to ensure that quals are not chosen for early evaluation (i.e., use as an index qual or TID scan qual) if they're not allowed to go ahead of other quals at the scan node. This is sufficient to fix the problem for RLS quals, since we only support RLS policies on simple tables and thus RLS quals will always exist at the table scan level only. Eventually these qual ordering rules should be enforced for join quals as well, which would permit improving planning for explicit security-barrier views; but that's a task for another patch. Note that FDWs would need to be aware of these rules --- and not, for example, send an insecure qual for remote execution --- but since we do not yet allow RLS policies on foreign tables, the case doesn't arise. This will need to be addressed before we can allow such policies. Patch by me, reviewed by Stephen Frost and Dean Rasheed. Discussion: https://postgr.es/m/8185.1477432701@sss.pgh.pa.us
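A sketch of the ordering rule (the table, policy, and f_leaky function are hypothetical):

```sql
CREATE TABLE accounts (id int, owner text, balance int);
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
CREATE POLICY own_rows ON accounts USING (owner = current_user);
-- the policy qual (owner = current_user) must be evaluated before the
-- user-supplied qual f_leaky(balance) unless f_leaky is marked
-- LEAKPROOF; previously this was enforced by wrapping the scan in a
-- security-barrier subquery
SELECT * FROM accounts WHERE f_leaky(balance);
```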
-
Peter Eisentraut authored
Move this logic out of initdb into a user-callable function. This simplifies the code and makes it possible to update the standard collations later on if additional operating system collations appear. Reviewed-by: Andres Freund <andres@anarazel.de> Reviewed-by: Euler Taveira <euler@timbira.com.br>
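Presumably callable along these lines (the exact signature is an assumption based on this and the neighboring commit messages):

```sql
-- re-import operating system collations after an OS update;
-- the arguments (if_not_exists, target schema) are assumed here
SELECT pg_import_system_collations(true, 'pg_catalog');
```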
-
- 17 Jan, 2017 2 commits
-
-
Alvaro Herrera authored
The bootstrap scanner/parser contains code to parse floating point values, but this is not exercised anywhere, so remove it. Reviewed-by: Jim Nasby Discussion: https://postgr.es/m/20170110051119.b5h7i3z5qagy35rb@alvherre.pgsql
-
Alvaro Herrera authored
-