- 22 Jan, 2017 3 commits
-
-
Tom Lane authored
Project style is to put things in this order, for the good and sufficient reason that you often need the typedefs in the function declarations. There already was one function declaration that needed a typedef, which was randomly placed away from all the other static function declarations in consequence. And the submitted patch for better json_populate_record functionality jumped through even more hoops in order to preserve this bad idea. This patch only moves lines from point A to point B, no other changes.
-
Tom Lane authored
Coverity complained quite properly that commit ea15e186 had introduced unreachable code into ExecGather(); to wit, it was no longer possible to iterate the final for-loop more or less than once. So remove the for(). In passing, clean up a couple of comments, and make better use of a local variable.
-
Peter Eisentraut authored
-
- 21 Jan, 2017 3 commits
-
-
Peter Eisentraut authored
-
Tom Lane authored
Turns out this has been broken for years and we'd not noticed. The one case that was getting exercised in the buildfarm, or probably anywhere else, was postgres_fdw.sl's reference to libpq.sl; and it turns out that that was always going to libpq.sl in the actual installation directory, not the temporary install. We'd not noticed because the buildfarm script does "make install" before it tests contrib. However, the recent addition of a logical-replication test to the core regression scripts resulted in trying to use libpqwalreceiver.sl before "make install" happens, and that failed for lack of finding libpq.sl, as shown by failures on buildfarm members gaur and pademelon.

There are two changes needed to fix it: the magic environment variable to specify shlib search path at runtime is SHLIB_PATH not LD_LIBRARY_PATH, and the shlib link command needs to specify the +s switch, else the library will not honor SHLIB_PATH.

I'm not quite sure why buildfarm members anole and gharial (HPUX 11) didn't show the same failure. Consulting man pages on the web says that HPUX 11 honors both LD_LIBRARY_PATH and SHLIB_PATH, which would explain half of it, and the rather confusing wording I've been able to find suggests that +s might effectively be the default in HPUX 11. But it seems at least as likely that there's just a libpq.so installed in /usr/lib on that machine; as long as it's not too ancient, that would satisfy the test. In any case I do not think this patch will break HPUX 11.

At the moment I don't see a need to back-patch this, since it only matters for testing purposes, not to mention that HPUX 10 is probably dead in the real world anyway.
-
Peter Eisentraut authored
This avoids builtins.h having to include additional header files.
-
- 20 Jan, 2017 11 commits
-
-
Robert Haas authored
When (1) autovacuum = off and (2) there's at least one database with an XID age greater than autovacuum_freeze_max_age and (3) all tables in that database that need vacuuming are already being processed by a worker and (4) the autovacuum launcher is started, a kind of infinite loop occurs. The launcher starts a worker and immediately exits. The worker, finding no work to do, immediately starts the launcher, supposedly so that the next database can be processed. But because datfrozenxid for that database hasn't been advanced yet, the new worker gets put right back into the same database as the old one, where it once again starts the launcher and exits. High-speed ping pong ensues.

There are several possible ways to break the cycle; this seems like the safest one.

Amit Khandekar (code) and Robert Haas (comments), reviewed by Álvaro Herrera.

Discussion: http://postgr.es/m/CAJ3gD9eWejf72HKquKSzax0r+epS=nAbQKNnykkMA0E8c+rMDg@mail.gmail.com
-
Robert Haas authored
If either bound is infinite, then we shouldn't even try to perform a comparison of the values themselves. Rearrange the logic so that we don't. Per buildfarm member skink and Tom Lane.
-
Alvaro Herrera authored
This was forgotten in 665d1fad and caused the whole buildfarm to become red for a little while. Also fix a typo in a nearby error message.

Author: Petr Jelínek
-
Alvaro Herrera authored
We were using != to compare strings, for which "ne" is the right thing. It's not clear why it works everywhere except on Pavan's machine, but it's clearly bogus anyway.

Author and reporter: Pavan Deolasee
Discussion: https://postgr.es/m/CABOikdPhsHM+pX8skoEY1_T0OtKdO1udzUj4VCjU5VEt+bj4eA@mail.gmail.com
-
Tom Lane authored
pgoutput evidently needs to be built without -DBUILDING_DLL. (It seems like a pretty bad idea that these makefiles need to know exactly where all the shlibs are in the tree, or maybe what's bad is putting them under src/backend/. But right now is not the time to redesign that.) Also, remove "override CPPFLAGS" in pgoutput's Makefile. I don't think that that actually has any bad consequences, but it's certainly useless in a directory that has no .h files, and it might be contributing to the failure somehow. Per buildfarm.
-
Tom Lane authored
A pgbench meta command can now be continued onto additional line(s) of a script file by writing backslash-return. The continuation marker is equivalent to white space in that it separates tokens. Eventually it'd be nice to have the same thing in psql, but that will be a much larger project.

Fabien Coelho, reviewed by Rafia Sabih
Discussion: https://postgr.es/m/alpine.DEB.2.20.1610031049310.19411@lancre
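For illustration, a hypothetical pgbench script fragment using the new syntax: the trailing backslash continues the \set meta command onto the next line (the expression itself is invented; \set function expressions like random() require 9.6 or later).

```
\set aid random(1, 100000 * :scale) \
         + 42
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
```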
-
Fujii Masao authored
Ayumi Ishii
-
Peter Eisentraut authored
The publication test didn't drop all of the publications it created, though it probably intended to. There is still a bug with dependency tracking in there, but this should at least quiet down the build farm.
-
Peter Eisentraut authored
-
Peter Eisentraut authored
- Add PUBLICATION catalogs and DDL
- Add SUBSCRIPTION catalog and DDL
- Define logical replication protocol and output plugin
- Add logical replication workers

From: Petr Jelinek <petr@2ndquadrant.com>
Reviewed-by: Steve Singer <steve@ssinger.info>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Erik Rijkers <er@xs4all.nl>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
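A minimal sketch of the new DDL, with placeholder object names and connection string:

```sql
-- On the publishing server:
CREATE PUBLICATION mypub FOR TABLE some_table;

-- On the subscribing server:
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=publisher.example.com dbname=appdb'
    PUBLICATION mypub;
```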
-
Tom Lane authored
Brown-paper-bag bug in commit ab1f0c82: the old code here coped with null CachedPlanSource.raw_parse_tree, the new code not so much. Per report from Dave Cramer. No regression test, because our core testing infrastructure doesn't provide any easy way to exercise this path. Fortunately, the JDBC crew test it regularly. Discussion: https://postgr.es/m/CADK3HH+Ug3xCysKqw_dZOnaNnytZ1Rh5yP05hjO-e4NoyRxVvA@mail.gmail.com
-
- 19 Jan, 2017 13 commits
-
-
Tom Lane authored
I'd somehow talked myself into believing that set_append_rel_size doesn't need to worry about getting back an AND clause when it applies eval_const_expressions to the result of adjust_appendrel_attrs (that is, transposing the appendrel parent's restriction clauses for one child). But that is nonsense, and Andreas Seltenreich's fuzz tester soon turned up a counterexample. Put back the make_ands_implicit step that was there before, and add a regression test covering the case. Report: https://postgr.es/m/878tq6vja6.fsf@ansel.ydns.eu
-
Andres Freund authored
Due to the changed costing in that commit, hash aggregates started to be used, which results in big-endian vs. little-endian output differences. Disable hash-aggs for those tests.

Author: Andres Freund, with input from Tom Lane
Discussion: https://postgr.es/m/22891.1484791792@sss.pgh.pa.us
-
Andres Freund authored
Since 69f4b9c8 plain expression evaluation (and thus normal projection) can't return sets of tuples anymore. Thus remove code dealing with that possibility. This will require adjustments in external code using ExecEvalExpr()/ExecProject() - that should neither be hard nor very common.

Author: Andres Freund and Tom Lane
Discussion: https://postgr.es/m/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de
-
Alvaro Herrera authored
If a user requests the commit timestamp for a transaction old enough that its data is concurrently being truncated away by vacuum at just the right time, they would receive an ugly internal file-not-found error message from slru.c rather than the expected NULL return value.

In a primary server, the window for the race is very small: the lookup has to occur exactly between the two calls by vacuum, and there's not a lot that happens between them (mostly just a multixact truncate). In a standby server, however, the window is larger because the truncation is executed as soon as the WAL record for it is replayed, but the advance of the oldest-Xid is not executed until the next checkpoint record.

To fix in the primary, simply reverse the order of operations in vac_truncate_clog. To fix in the standby, augment the WAL truncation record so that the standby is aware of the new oldest-XID value and can apply the update immediately. WAL version bumped because of this.

No backpatch, because of the low importance of the bug and its rarity.

Author: Craig Ringer
Reviewed-By: Petr Jelínek, Peter Eisentraut
Discussion: https://postgr.es/m/CAMsr+YFhVtRQT1VAwC+WGbbxZZRzNou=N9Ed-FrCqkwQ8H8oJQ@mail.gmail.com
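A sketch of the user-visible symptom, assuming track_commit_timestamp = on in postgresql.conf (the xid below is arbitrary):

```sql
SELECT pg_xact_commit_timestamp('12345'::xid);
-- For a transaction old enough that its timestamp data has been
-- truncated away, the expected result is NULL; before this fix, a
-- concurrently running truncation could instead surface an internal
-- file-not-found error from slru.c.
```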
-
Peter Eisentraut authored
The previous coding did not properly quote the user name before casting it to regrole. To avoid all that, just pass in BOOTSTRAP_SUPERUSERID numerically. Also fix one place where the BOOTSTRAP_SUPERUSERID was hardcoded as 10.
-
Robert Haas authored
This occasionally causes failures; the order in which the affected objects are listed is not 100% consistent.

Amit Langote
-
Robert Haas authored
Code to map attribute numbers in map_partition_varattnos() duplicates what convert_tuples_by_name_map() does. Avoid that. Amit Langote, per a report from Álvaro Herrera. Discussion: http://postgr.es/m/9ce97382-54c8-deb3-9ee9-a2ec271d866b%40lab.ntt.co.jp
-
Robert Haas authored
Account for the fact that the highest bound less than or equal to the upper bound might be either the lower or the upper bound of the overlapping partition, depending on whether the proposed partition completely contains the existing partition or merely overlaps it.

Also, we need not continue searching for an even greater bound in partition_bound_bsearch() once we find the first bound that is *equal* to the probe, because we don't have duplicate datums. That spends cycles needlessly.

Amit Langote, per a report from Amul Sul. Cosmetic changes by me.

Discussion: http://postgr.es/m/CAAJ_b94XgbqVoXMyxxs63CaqWoMS1o2gpHiU0F7yGnJBnvDc_A%40mail.gmail.com
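As an illustration (table and partition names are invented), here is the containment case the corrected bound search must report as an overlap:

```sql
CREATE TABLE measurements (logdate date, val int)
    PARTITION BY RANGE (logdate);

CREATE TABLE m2017 PARTITION OF measurements
    FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');

-- Entirely contained within m2017's range; must be rejected as overlapping:
CREATE TABLE m2017q2 PARTITION OF measurements
    FOR VALUES FROM ('2017-04-01') TO ('2017-07-01');
```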
-
Robert Haas authored
In ExecInsert(), do not switch back to the root partitioned table's ResultRelInfo until after we finish ExecProcessReturning(), so that RETURNING projection is done using the partition's descriptor. For the projection to work correctly, we must initialize the same for each leaf partition during ModifyTableState initialization.

Amit Langote
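A sketch of why the partition's descriptor matters here (names are invented): a partition attached with a different physical column order forces the RETURNING projection to map attributes through the partition's own tuple descriptor rather than the root's.

```sql
CREATE TABLE events (id int, payload text) PARTITION BY LIST (id);

-- A standalone table with the same columns in a different physical order:
CREATE TABLE events_1 (payload text, id int);
ALTER TABLE events ATTACH PARTITION events_1 FOR VALUES IN (1);

-- The RETURNING projection must use events_1's descriptor:
INSERT INTO events VALUES (1, 'hello') RETURNING id, payload;
```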
-
Robert Haas authored
When a tuple is inserted into a partitioning root, no partition constraints need to be enforced; when it is inserted into a leaf, the parent's partitioning quals need to be enforced. The previous coding got both of those cases right. When a tuple is inserted into an intermediate level of the partitioning hierarchy (i.e. a table which is both a partition itself and in turn partitioned), it must enforce the partitioning qual inherited from its parent. That case got overlooked; repair.

Amit Langote
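A minimal sketch of the overlooked case, with invented names: a tuple inserted directly into an intermediate partition must still satisfy the qual that table inherits from its own parent.

```sql
CREATE TABLE root (a int, b int) PARTITION BY RANGE (a);
CREATE TABLE mid PARTITION OF root
    FOR VALUES FROM (1) TO (100) PARTITION BY RANGE (b);
CREATE TABLE leaf PARTITION OF mid FOR VALUES FROM (1) TO (100);

-- a = 500 violates the qual mid inherits from root (1 <= a < 100),
-- so with this fix the insert is rejected:
INSERT INTO mid VALUES (500, 10);
```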
-
Stephen Frost authored
When considering a sequence's Data entry in dumpSequenceData, we were actually looking at the sequence definition's dump flag to decide if we should dump the data or not. That's generally fine, except for when the sequence data entry was created by processExtensionTables() because it's a config sequence. In that case, the sequence itself won't be marked as dumping data because it's part of an extension, leading to the need for processExtensionTables() to create the sequence data entry. This leads to extension config sequence data not being included in the dump when it should be.

Fix this by looking at the sequence data's dump flag instead, just as dumpTableData() was doing for tables (which is why config tables were correctly being handled), and add a regression test to make sure we don't break it moving forward.

All of this is a bit round-about since we can now represent which components of a given dump item should be dumped out through the dump flag. A future improvement might be to change checkExtensionMembership() to check for config sequences/tables and set the dump flag based on that directly, possibly removing the need for processExtensionTables().

Bug found by Daniele Varrazzo.

Discussion: https://postgr.es/m/CA+mi_8ZmxQM7+nZ7pJ8uyfxc9V3o=UAG14dVqvftdmvw8OJ3gQ@mail.gmail.com

Patch by Michael Paquier, with some tweaking of the regression tests by me. Back-patch to 9.6 where the bug was introduced.
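For background, a sketch of how an extension marks a sequence as configuration data in the first place (object names are placeholders); such sequences are the ones whose data entries processExtensionTables() creates:

```sql
-- In an extension's install script:
CREATE SEQUENCE my_ext_seq;
SELECT pg_extension_config_dump('my_ext_seq', '');
```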
-
Alvaro Herrera authored
There doesn't seem to be any reason not to allow negative years to be interpreted as BC, so do that. The documentation is pretty vague on the details of this function, so nothing needs to change there.

Reported-by: Andy Abelisto, in bug #14446
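For example, assuming a negative year -N maps directly to N BC:

```sql
SELECT make_date(-44, 3, 15);
-- Expected output: 0044-03-15 BC
```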
-
Andres Freund authored
Hopefully this'll unbreak the buildfarm.
-
- 18 Jan, 2017 10 commits
-
-
Tom Lane authored
Correct a misstatement about how things used to work: we did allow nested SRFs before, as long as no function had more than one set-returning input. Also, attempt to document the fact that the new implementation changes the behavior for SRFs within conditional constructs (eg CASE): the conditional construct no longer gates whether the SRF is run, and thus cannot affect the number of rows emitted. We might want to change this behavior, but first it behooves us to see if we can explain it. Minor other wordsmithing on what I wrote yesterday, too.

Discussion: https://postgr.es/m/20170118214702.54b2mdbxce5piwv5@alap3.anarazel.de
-
Andres Freund authored
Evaluation of set-returning functions (SRFs) in the targetlist (like SELECT generate_series(1,5)) so far was done in the expression evaluation (i.e. ExecEvalExpr()) and projection (i.e. ExecProject/ExecTargetList) code. This meant that most executor nodes performing projection, and most expression evaluation functions, had to deal with the possibility that an evaluated expression could return a set of return values. That's bad because it leads to repeated code in a lot of places. It also, and that's my (Andres's) motivation, made it a lot harder to implement a more efficient way of doing expression evaluation.

To fix this, introduce a new executor node (ProjectSet) that can evaluate targetlists containing one or more SRFs. To avoid the complexity of the old way of handling nested expressions returning sets (e.g. having to pass up ExprDoneCond, and dealing with arguments to functions returning sets etc.), those SRFs can only be at the top level of the node's targetlist. The planner makes sure (via split_pathtarget_at_srfs()) that SRF evaluation is only necessary in ProjectSet nodes and that SRFs are only present at the top level of the node's targetlist. If there are nested SRFs the planner creates multiple stacked ProjectSet nodes. The ProjectSet nodes always get input from an underlying node.

We also discussed and prototyped evaluating targetlist SRFs using ROWS FROM(), but that turned out to be more complicated than we'd hoped.

While moving SRF evaluation to ProjectSet would allow retaining the old "least common multiple" behavior when multiple SRFs are present in one targetlist (i.e. continue returning rows until all SRFs are at the end of their input at the same time), we decided to instead only return rows till all SRFs are exhausted, returning NULL for already exhausted ones. We deemed the previous behavior to be too confusing, unexpected and actually not particularly useful.

As a side effect, the previously prohibited case of multiple set-returning arguments to a function is now allowed. Not because it's particularly desirable, but because it ends up working and there seems to be no argument for adding code to prohibit it.

Currently the behavior for COALESCE and CASE containing SRFs has changed, returning multiple rows from the expression, even when the SRF-containing "arm" of the expression is not evaluated. That's because the SRFs are evaluated in a separate ProjectSet node. As that's quite confusing, we're likely to instead prohibit SRFs in those places. But that's still being discussed, and the code would reside in places not touched here, so that's a task for later.

There's a lot of, now superfluous, code dealing with set return expressions around. But as the changes to get rid of those are verbose and largely boring, it seems better for readability to keep the cleanup as a separate commit.

Author: Tom Lane and Andres Freund
Discussion: https://postgr.es/m/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de
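A short example of the semantics change described above (the output is sketched from the description, not from a test run):

```sql
SELECT generate_series(1, 3) AS a, generate_series(1, 2) AS b;

-- Old "least common multiple" behavior: 6 rows, with each series restarting.
-- New behavior: 3 rows, padding the exhausted SRF with NULL:
--  a | b
-- ---+---
--  1 | 1
--  2 | 2
--  3 |
```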
-
Robert Haas authored
Typo fix from Mithun Cy; other improvements by me.
-
Tom Lane authored
Thinko in commit a4523c5a. It doesn't really affect anything at present, but it would be a problem if any tests added later in this file ought to get index-only-scan plans. Back-patch, like the previous commit, just to avoid surprises in case we add such a test and then back-patch it.

Nikita Glukhov
Discussion: https://postgr.es/m/8b70135d-ad38-bdd8-ac92-71e2b3c273cf@postgrespro.ru
-
Alvaro Herrera authored
These macros work fine when they are used directly in an "if" test or similar, but as soon as the return values are assigned to boolean variables (or passed as boolean arguments to some function), they become bugs, hopefully caught by compiler warnings. To avoid future problems, fix the definitions so that they return actual booleans.

To further minimize the risk that somebody uses them in back-patched fixes that only work correctly in branches starting from the current master and not in old ones, back-patch the change to supported branches as appropriate.

See also commit af4472bc, and the long discussion (and larger patch) in the thread mentioned in its commit message.

Discussion: https://postgr.es/m/18672.1483022414@sss.pgh.pa.us
-
Magnus Hagander authored
This makes it possible to delete multiple keys from a jsonb value by passing in an array of text values, which makes the operation much faster than deleting the keys one by one (which would require copying the jsonb structure over and over again).

Reviewed by Dmitry Dolgov and Michael Paquier
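A quick sketch of the new operator (the jsonb literal is arbitrary):

```sql
SELECT '{"a": 1, "b": 2, "c": 3}'::jsonb - ARRAY['a', 'b'];
-- Result: {"c": 3}
```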
-
Tom Lane authored
These resulted in wrong answers if the relabeled argument could be matched to an index column, as shown in bug #14504 from Evgeniy Kozlov. We might be able to resurrect these optimizations by adjusting the planner's treatment of RelabelType, or by adjusting btree's rules for selecting comparison functions, but either solution will take careful analysis and does not sound like a fit candidate for backpatching.

I left the catalog infrastructure in place and just reduced the transform functions to always-return-NULL. This would be necessary anyway in the back branches, and it doesn't seem important to be more invasive in HEAD.

Bug introduced by commit b8a18ad4. Back-patch to 9.5 where that came in.

Report: https://postgr.es/m/20170118144828.1432.52823@wrigleys.postgresql.org
Discussion: https://postgr.es/m/18771.1484759439@sss.pgh.pa.us
-
Robert Haas authored
This isn't really guaranteed to always produce exactly the same output; the order can change from run to run. See related cleanup in 257d8157.
-
Robert Haas authored
Commit a2566508 fixed some issues with how PartitionDispatch related code handled multi-level partitioned tables, but didn't add any tests.

Amit Langote, per a complaint from me.

Discussion: http://postgr.es/m/CA%2BTgmoZ86v1G%2Bzx9etMiSQaBBvYMKfU-iitqZArSh5z0n8Q4cA%40mail.gmail.com
-
Robert Haas authored
The original table partitioning patch overlooked this.

Keith Fiske and Amit Langote, adjusted by me.

Discussion: http://postgr.es/m/CAG1_KcDJiZB=L6yOUO_bVufj2q2851_xdkfhw0JdcD_2VtKssw@mail.gmail.com
-