- 22 Mar, 2018 5 commits
-
-
Andres Freund authored
This commit introduces:

1) A JIT provider abstraction, which allows JIT functionality to be implemented in separate shared libraries. That is desirable because it allows JIT support to be installed as a separate package, and because it allows experimentation with different forms of JITing.

2) JITContexts, which, using functions introduced in follow-up commits, can be used to emit JITed functions and have them cleaned up on error.

3) The outline of an LLVM JIT provider, which will be fleshed out in subsequent commits.

Documentation for the GUCs added here, and for JIT in general, will be added in later commits.

Author: Andres Freund, with architectural input from Jeff Davis
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
-
Andres Freund authored
Typoed-In: 5b2526c8 Reported-By: Catalin Iacob
-
Andres Freund authored
We do the same for CFLAGS. This was an omission in 6869b4f2. Reported-By: Catalin Iacob
-
Tom Lane authored
Pending some solution for the problems noted in commit 74286994, disallow dynamic creation of GUC_LIST_QUOTE variables. If there are any extensions out there using this feature, they'd not be happy for us to start enforcing this rule in minor releases, so this is a HEAD-only change. The previous commit didn't make things any worse than they already were for such cases. Discussion: https://postgr.es/m/20180111064900.GA51030@paquier.xyz
-
Tom Lane authored
Code that prints out the contents of setconfig or proconfig arrays in SQL format needs to handle GUC_LIST_QUOTE variables differently from other ones, because for those variables, flatten_set_variable_args() already applied a layer of quoting. The value can therefore safely be printed as-is, and indeed must be, or flatten_set_variable_args() will muck it up completely on reload. For all other GUC variables, it's necessary and sufficient to quote the value as a SQL literal.

We'd recognized the need for this long ago, but mis-analyzed the need slightly, thinking that all GUC_LIST_INPUT variables needed the special treatment. That's actually wrong, since a valid value of a LIST variable might include characters that need quoting, although no existing variables accept such values. More to the point, we hadn't made any particular effort to keep the various places that deal with this up to date with the set of variables that actually need special treatment, meaning that we'd do the wrong thing with, for example, temp_tablespaces values. This affects dumping of SET clauses attached to functions, as well as ALTER DATABASE/ROLE SET commands.

In ruleutils.c we can fix it reasonably honestly by exporting a guc.c function that allows discovering the flags for a given GUC variable. But pg_dump doesn't have easy access to that, so continue the old method of having a hard-wired list of affected variable names. At least we can fix it to have just one list, not two, and update the list to match current reality.

A remaining problem is that this only works for built-in GUC variables. pg_dump's list obviously knows nothing of third-party extensions, and even the "ask guc.c" method isn't bulletproof, since the relevant extension might not be loaded. There's no obvious solution to that, so for now we'll just have to discourage extension authors from inventing custom GUCs that need GUC_LIST_QUOTE.

This has been busted for a long time, so back-patch to all supported branches.

Michael Paquier and Tom Lane, reviewed by Kyotaro Horiguchi and Pavel Stehule
Discussion: https://postgr.es/m/20180111064900.GA51030@paquier.xyz
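A hedged SQL sketch of the distinction (the function, schema, and tablespace names are hypothetical):

```sql
-- search_path is GUC_LIST_QUOTE: flatten_set_variable_args() has already
-- quoted each list element, so pg_dump and ruleutils.c must emit the stored
-- value as-is, e.g.  SET search_path TO "$user", public, "Weird Schema"
CREATE FUNCTION set_demo() RETURNS int
  LANGUAGE sql
  SET search_path = "$user", public, "Weird Schema"
  AS 'SELECT 1';

-- temp_tablespaces is a plain list GUC: its value must instead be dumped as a
-- single SQL string literal, e.g.  SET temp_tablespaces TO 'spc_a, spc_b'
```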
-
- 21 Mar, 2018 14 commits
-
-
Tom Lane authored
Currently, if operator_predicate_proof() is given an operator clause like "something op NULL", it just throws up its hands and reports it can't prove anything. But we can often do better than that, if the operator is strict, because then we know that the clause returns NULL overall. Depending on whether we're trying to prove or refute something, and whether we need weak or strong semantics for NULL, this may be enough to prove the implication, especially when we rely on the standard rule that "false implies anything". In particular, this lets us do something useful with questions like "does X IN (1,3,5,NULL) imply X <= 5?" The null entry in the IN list can effectively be ignored for this purpose, but the proof rules were not previously smart enough to deduce that. This patch is by me, but it owes something to previous work by Amit Langote to try to solve problems of the form mentioned. Thanks also to Emre Hasegeli and Ashutosh Bapat for review. Discussion: https://postgr.es/m/3bad48fc-f257-c445-feeb-8a2b2fb622ba@lab.ntt.co.jp
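As an illustrative sketch (table and index names are hypothetical), the commit's own example question maps onto partial-index applicability like this:

```sql
CREATE TABLE t (x int);
CREATE INDEX t_x_small ON t (x) WHERE x <= 5;

-- x IN (1, 3, 5, NULL) can only evaluate to true when x is 1, 3, or 5, so the
-- predicate x <= 5 is now provably implied and the partial index becomes
-- applicable (whether it is actually chosen still depends on costs).
EXPLAIN SELECT * FROM t WHERE x IN (1, 3, 5, NULL);
```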
-
Tom Lane authored
I noticed while fooling with John Naylor's bootstrap-data patch that we had one high-numbered manually assigned OID, 8888, which evidently came from a submission that the committer didn't bother to bring into line with usual OID allocation practices before committing. That's a bad idea, because it creates a hazard for other patches that may be temporarily using high OID numbers. Change it to something more in line with what we usually do. This evidently dates to commit abb17339. It's too late to change it in released branches, but we can fix it in HEAD.
-
Peter Eisentraut authored
If the control file is corrupted and specifies the WAL segment size to be 0 bytes, calculating the latest checkpoint's REDO WAL file will fail with a division-by-zero error. Show it as "???" instead. Also reword the warning message a bit and send it to stdout, like the other pre-existing warning messages. Add some tests for dealing with a corrupted pg_control file. Author: Nathan Bossart <bossartn@amazon.com>, tests by me
-
Alvaro Herrera authored
My commit 4dba331c, which moved around CommandCounterIncrement calls in partitioning DDL code, unearthed a problem with the relcache handling for the 'default' partition: the construction of a correct relcache entry for the partitioned table was at the mercy of there being no CCI calls across a non-trivial amount of code. This was prone to create problems later on as the code evolves, and was visible as a test failure in a build with RELCACHE_FORCE_RELEASE (buildfarm member prion). The problem is that after the mentioned commit it was possible to create a relcache entry with incomplete information about the default partition, because I introduced a CCI between adding the catalog entries for the default partition (StorePartitionBound) and the update of the pg_partitioned_table entry for its parent partitioned table (update_default_partition_oid). It seems the best fix is to move the latter so that it occurs inside the former; the purposeful lack of an intervening CCI should be more obvious, and harder to break. I also remove a check in RelationBuildPartitionDesc that returns NULL if the key is not set. I couldn't find any place that needs this hack anymore; probably it was required because of bugs that have since been fixed. Also fix a few typos I noticed while reviewing the code involved. Discussion: https://postgr.es/m/20180320182659.nyzn3vqtjbbtfgwq@alvherre.pgsql
-
Teodor Sigaev authored
Hashing functions are useful for simulating real-world workloads in tests, such as web workloads (for example, the YCSB benchmarks). Author: Ildar Musin, with minor editing by me Reviewed by: Fabien Coelho, me Discussion: https://www.postgresql.org/message-id/flat/0e8bd39e-dfcd-2879-f88f-272799ad7ef2@postgrespro.ru
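A minimal pgbench script sketch in the YCSB style (the key ranges and the final query are just placeholders), combining a skewed random value with one of the new hash functions:

```
-- pick a zipfian-distributed value, then hash it to spread the hot keys
\set r random_zipfian(1, 100000000, 1.07)
\set k abs(hash(:r)) % 1000000
-- a real benchmark script would use :k as the lookup key of a SELECT or UPDATE
SELECT :k AS hot_key;
```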
-
Tatsuo Ishii authored
Patch by me.
-
Peter Eisentraut authored
Logical decoding should not publish anything about tables created as part of a heap rewrite during DDL. Those tables don't exist externally, so consumers of logical decoding cannot do anything sensible with that information. In ab28feae, we worked around this for built-in logical replication, but that was a hack. This is a more proper fix: we mark such transient heaps using the new field pg_class.relwrite, linking to the original relation OID. By default, we ignore them in logical decoding before they get to the output plugin. Optionally, a plugin can register its interest in getting such changes, if it handles DDL specially, in which case the new field will help it get information about the actual table. Reviewed-by: Craig Ringer <craig@2ndquadrant.com>
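As a small, hedged illustration of the new catalog field (transient rewrite heaps are short-lived, so this query will normally return no rows):

```sql
-- transient heaps created during a table rewrite link back to the original
-- relation through the new pg_class.relwrite column
SELECT oid, relname, relwrite
  FROM pg_class
 WHERE relwrite <> 0;
```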
-
Teodor Sigaev authored
strict_word_similarity is similar to the existing word_similarity function, but it takes word boundaries into account when computing the similarity. Author: Alexander Korotkov Review by: David Steele, Liudmila Mantrova, me Discussion: https://www.postgresql.org/message-id/flat/CY4PR17MB13207ED8310F847CF117EED0D85A0@CY4PR17MB1320.namprd17.prod.outlook.com
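A brief SQL sketch of how the two functions can differ (the strings are arbitrary; exact values depend on the trigram sets):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- word_similarity may match any substring of the second argument, while
-- strict_word_similarity only considers extents bounded by word boundaries
SELECT word_similarity('word', 'two words') AS substring_based,
       strict_word_similarity('word', 'two words') AS word_bounded;
```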
-
Peter Eisentraut authored
This will allow us to assess how many platforms have bool with a size other than 1, which will help us decide how to go forward with using stdbool.h. Discussion: https://www.postgresql.org/message-id/flat/3a0fe7e1-5ed1-414b-9230-53bbc0ed1f49@2ndquadrant.com
-
Andrew Gierth authored
If there were multiple grouping sets, none of them empty, all of which were unsortable, then an oversight in consider_groupingsets_paths led to a null pointer dereference. Fix, and add a regression test for this case. Per report from Dang Minh Huong, though I didn't use their patch. Backpatch to 10.x where hashed grouping sets were added.
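A hedged sketch of the kind of query involved, assuming xid as an example of a type with a hash opclass but no btree sort opclass (so every grouping set is non-empty and unsortable, forcing hashed grouping):

```sql
CREATE TABLE gs_unsortable (a xid, b xid, v int);

-- both grouping sets are non-empty and neither column can be sorted,
-- so the planner must hash; this code path previously dereferenced NULL
SELECT a, b, count(*)
  FROM gs_unsortable
 GROUP BY GROUPING SETS ((a), (b));
```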
-
Teodor Sigaev authored
word_similarity was previously documented as returning the similarity of the closest word in the string, but it actually returns the similarity of the closest substring. Also fix mistyped comments. Author: Alexander Korotkov Review by: David Steele, Liudmila Mantrova Discussion: https://www.postgresql.org/message-id/flat/CY4PR17MB13207ED8310F847CF117EED0D85A0@CY4PR17MB1320.namprd17.prod.outlook.com https://www.postgresql.org/message-id/flat/f43b242d-000c-f4c8-cb8b-d37e9752cd93%40postgrespro.ru
-
Peter Eisentraut authored
-
Andres Freund authored
This isn't a very common op, and it doesn't seem worth duplicating for JIT. Author: Andres Freund
-
Andres Freund authored
LLVM will be used for *optional* Just-in-time compilation support. This commit just adds the configure infrastructure that detects LLVM. No documentation has been added for the --with-llvm flag, that'll be added after the actual supporting code has been added. Author: Andres Freund Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
-
- 20 Mar, 2018 10 commits
-
-
Andres Freund authored
This is an optional dependency. It'll be used for the upcoming LLVM-based just-in-time compilation support, which needs to wrap a few LLVM C++ APIs so they're accessible from C. For now, test for C++ compilers unconditionally, without failing if none is present, to ensure wide buildfarm coverage. If we're bothered by the additional test times (which are quite short) or verbosity, we can later make the tests conditional on --with-llvm. Author: Andres Freund Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
-
Peter Eisentraut authored
Since e3bdb2d9, libpq failed to build on some platforms because they did not have SSL_clear_options(). Mainline OpenSSL introduced SSL_clear_options() before SSL_OP_NO_COMPRESSION, so the code should have built fine; but at least one old NetBSD version (build farm "coypu", NetBSD 5.1, gcc 4.1.3 PR-20080704, powerpc) has SSL_OP_NO_COMPRESSION but no SSL_clear_options(). So add a configure check for SSL_clear_options(). If we don't find it, skip the call. That means that on such a platform one cannot *enable* SSL compression if the built-in default is off, but that seems an unlikely combination anyway and not very interesting in practice.
-
Andres Freund authored
The new macro allows testing flags for different compilers and storing them in different CFLAGS-like variables. The existing PGAC_PROG_CC_CFLAGS_OPT and PGAC_PROG_CC_VAR_OPT are changed to be just wrappers around the new macro. This will be used by the upcoming LLVM support to separately detect capabilities used by clang when generating bitcode. Author: Andres Freund Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
-
Tom Lane authored
Red Hat's notion of a basic Perl installation doesn't include Test::More or Time::HiRes, and reportedly some Debian installs also omit Time::HiRes. Check for those during configure to spare the user the pain of digging through check-world output to find out what went wrong. While we're at it, we should also check the version of Test::More, since TestLib.pm requires at least 0.87. In principle this could be back-patched, but it's probably not necessary. Discussion: https://postgr.es/m/516.1521475003@sss.pgh.pa.us
-
Robert Haas authored
Since commit 4f15e5d0 made grouped_rel set reltarget, a variety of other functions can just get it from grouped_rel instead of having to pass it around explicitly. Simplify accordingly. Patch by me, reviewed by Ashutosh Bapat. Discussion: http://postgr.es/m/CA+TgmoZ+ZJTVad-=vEq393N99KTooxv9k7M+z73qnTAqkb49BQ@mail.gmail.com
-
Tom Lane authored
Too much PG on the brain in commit 769159fd, evidently. Noted by marcelhuberfoo@gmail.com. Discussion: https://postgr.es/m/152154834496.11957.17112112802418832865@wrigleys.postgresql.org
-
Robert Haas authored
Partition-wise aggregate will call create_ordinary_grouping_paths multiple times and we don't want to redo this work every time; have the caller do it instead and pass the details down. Patch by me, reviewed by Ashutosh Bapat. Discussion: http://postgr.es/m/CA+TgmoY7VYYn9a7YHj1nJL6zj6BkHmt4K-un9LRmXkyqRZyynA@mail.gmail.com
-
Robert Haas authored
This avoids unnecessarily creating a RelOptInfo for which we have no actual need. This idea is from Ashutosh Bapat, who wrote a very different patch to accomplish a similar goal. It will be more important if and when we get partition-wise aggregate, since then there could be many partially grouped relations, all of which could potentially be unnecessary. In passing, this sets the grouping relation's reltarget, which wasn't done previously but makes things simpler for this refactoring.

Along the way, adjust things so that add_paths_to_partial_grouping_rel, now renamed create_partial_grouping_paths, does not perform the Gather or Gather Merge steps to generate non-partial paths from partial paths; have the caller do it instead. This is again for the convenience of partition-wise aggregate, which wants to inject additional partial paths after the regular ones are created and before we decide which ones to Gather/Gather Merge. This might seem like a separate change, but it's actually pretty closely entangled; I couldn't really see much value in separating it and having to change some things twice.

Patch by me, reviewed by Ashutosh Bapat.
Discussion: http://postgr.es/m/CA+TgmoZ+ZJTVad-=vEq393N99KTooxv9k7M+z73qnTAqkb49BQ@mail.gmail.com
-
Alvaro Herrera authored
It makes sense to do the CCIs in the places that do catalog updates, rather than just before the places that would otherwise error out because the catalog-updating code failed to do so. In particular, it looks like StorePartitionBound() and IndexSetParentIndex() ought to make their own CCIs. Per review comments from Peter Eisentraut for row-level triggers on partitioned tables. Discussion: https://postgr.es/m/20171229225319.ajltgss2ojkfd3kp@alvherre.pgsql
-
Tom Lane authored
The original coding of the SP-GiST scan traversalValue feature (commit ccd6eb49) arranged for traversal values to be stored in the query's main executor context. That's fine if there's only one index scan per query, but if there are many, we have a memory leak as successive scans create new traversal values. Fix it by creating a separate memory context for traversal values, which we can reset during spgrescan(). Back-patch to 9.6 where this code was introduced. In principle, adding the traversalCxt field to SpGistScanOpaqueData creates an ABI break in the back branches. But I (tgl) have little sympathy for extensions including spgist_private.h, so I'm not very worried about that. Alternatively we could stick the new field at the end of the struct in back branches, but that has its own downsides. Anton Dignös, reviewed by Alexander Kuzmenkov Discussion: https://postgr.es/m/CALNdv1jb6y2Te-m8xHLxLX12RsBmZJ1f4hESX7J0HjgyOhA9eA@mail.gmail.com
-
- 19 Mar, 2018 10 commits
-
-
Peter Eisentraut authored
-
Tom Lane authored
refresh_by_match_merge() has some issues in the way it builds a SQL query to construct the "diff" table:

1. It doesn't require the selected unique index(es) to be indimmediate.
2. It doesn't pay attention to the particular equality semantics enforced by a given index, but just assumes that they must be those of the column datatype's default btree opclass.
3. It doesn't check that the indexes are btrees.
4. It's insufficiently careful to ensure that the parser will pick the intended operator when parsing the query. (This would have been a security bug before CVE-2018-1058.)
5. It's not careful about indexes on system columns.

The way to fix #4 is to make use of the existing code in ri_triggers.c for generating an arbitrary binary operator clause. I chose to move that to ruleutils.c, since that seems a more reasonable place to be exporting such functionality from than ri_triggers.c.

While #1, #3, and #5 are just latent given existing feature restrictions, and #2 doesn't arise in the core system for lack of alternate opclasses with different equality behaviors, #4 seems like an issue worth back-patching. That's the bulk of the change anyway, so just back-patch the whole thing to 9.4 where this code was introduced.

Discussion: https://postgr.es/m/13836.1521413227@sss.pgh.pa.us
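For context, a minimal sketch of the operation whose internals are being fixed (all names hypothetical): REFRESH MATERIALIZED VIEW CONCURRENTLY requires a unique index on the materialized view, and refresh_by_match_merge() builds its "diff" query around that index:

```sql
CREATE TABLE src (id int PRIMARY KEY, val text);

CREATE MATERIALIZED VIEW mv AS
  SELECT id, val FROM src;

-- CONCURRENTLY requires a unique index covering all rows of the matview
CREATE UNIQUE INDEX mv_id_idx ON mv (id);

REFRESH MATERIALIZED VIEW CONCURRENTLY mv;
```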
-
Andrew Dunstan authored
The new unlogged_reinit recovery tests create a new tablespace using TestLib.pm's tempdir. However, on msys that function returns a virtual path that isn't understood by Postgres. Here we add a new function to TestLib.pm to turn such a path into a real path on the underlying file system, and use it in the new test to create the tablespace. The new function is essentially a NOOP everywhere but msys.
-
Tom Lane authored
Jeff Janes discovered that commit 7ca25b7d made one of the queries run by REFRESH MATERIALIZED VIEW CONCURRENTLY perform badly. The root cause is bad cardinality estimation for correlated quals, but a principled solution to that problem is some way off, especially since the planner lacks any statistics about whole-row variables. Moreover, in non-error cases this query produces no rows, meaning it must be run to completion; but use of LIMIT 1 encourages the planner to pick a fast-start, slow-completion plan, exactly not what we want. Remove the LIMIT clause, and instead rely on the count parameter we pass to SPI_execute() to prevent excess work if the query does return some rows. While we've heard no field reports of planner misbehavior with this query, it could be that people are having performance issues that haven't reached the level of pain needed to cause a bug report. In any case, that LIMIT clause can't possibly do anything helpful with any existing version of the planner, and it demonstrably can cause bad choices in some cases, so back-patch to 9.4 where the code was introduced. Thomas Munro Discussion: https://postgr.es/m/CAMkU=1z-JoGymHneGHar1cru4F1XDfHqJDzxP_CtK5cL3DOfmg@mail.gmail.com
-
Alvaro Herrera authored
These values can be obtained from the ModifyTable node which is already a part of both the ModifyTableState and ExecInsert. Author: Álvaro Herrera, Amit Langote Reviewed-by: Peter Geoghegan Discussion: https://postgr.es/m/20180316151303.rml2p5wffn3o6qy6@alvherre.pgsql
-
Alvaro Herrera authored
The previous commit removed a comment that was a bit more verbose than its replacement.
-
Alvaro Herrera authored
We make some changes to ModifyTableState and the EState it uses whenever we route tuples to partitions; but we weren't restoring them properly in all cases, possibly causing crashes when partitions with different tuple descriptors are targeted by tuples inserted in the same command. Refactor some code, creating ExecPrepareTupleRouting, to encapsulate the needed state-changing logic, and have it invoked one level above its current place (i.e. put it in ExecModifyTable instead of ExecInsert); this makes it all more readable. Add a test case to exercise this. We don't support having views as partitions; and since only views can have INSTEAD OF triggers, there is no point in testing for INSTEAD OF when processing insertions into a partitioned table. Remove code that appears to support this (but which is actually never relevant). In passing, fix the location of some very confusing comments in ModifyTableState. Reported-by: Amit Langote Author: Etsuro Fujita, Amit Langote Discussion: https://postgr.es/m/0473bf5c-57b1-f1f7-3d58-455c2230bc5f@lab.ntt.co.jp
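A minimal SQL sketch of the scenario described (table names hypothetical): two partitions with different tuple descriptors receiving rows from a single command:

```sql
CREATE TABLE parted (a int, b text) PARTITION BY LIST (a);

CREATE TABLE part1 PARTITION OF parted FOR VALUES IN (1);

-- same columns, different physical order, hence a different tuple descriptor
CREATE TABLE part2 (b text, a int);
ALTER TABLE parted ATTACH PARTITION part2 FOR VALUES IN (2);

-- one INSERT routes tuples to both partitions
INSERT INTO parted VALUES (1, 'one'), (2, 'two');
```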
-
Robert Haas authored
Commit 3fc6e2d7 made setop planning stages return paths rather than plans, but all such paths were loosely associated with a single RelOptInfo, and only the final path was added to the RelOptInfo. Even at the time, it was foreseen that this should be changed, because there is otherwise no good way for a single stage of setop planning to return multiple paths. With this patch, each stage of set operation planning now creates a separate RelOptInfo; these are distinguished by using appropriate relid sets. Note that this patch does nothing whatsoever about actually returning multiple paths for the same set operation; it just makes it possible for a future patch to do so. Along the way, adjust things so that create_upper_paths_hook is called for each of these new RelOptInfos rather than just once, since that might be useful to extensions using that hook. It might be a good idea to provide an FDW API here as well, but I didn't try to do that for now. Patch by me, reviewed and tested by Ashutosh Bapat and Rajkumar Raghuwanshi. Discussion: http://postgr.es/m/CA+TgmoaLRAOqHmMZx=ESM3VDEPceg+-XXZsRXQ8GtFJO_zbMSw@mail.gmail.com
-
Robert Haas authored
Also, rename it to plan_union_children, since the old name wasn't very descriptive. This results in a small net reduction in code, seems at least to me to be easier to understand, and saves space on the process stack. Patch by me, reviewed and tested by Ashutosh Bapat and Rajkumar Raghuwanshi. Discussion: http://postgr.es/m/CA+TgmoaLRAOqHmMZx=ESM3VDEPceg+-XXZsRXQ8GtFJO_zbMSw@mail.gmail.com
-
Magnus Hagander authored
Author: Daniel Gustafsson <daniel@yesql.se>
-
- 18 Mar, 2018 1 commit
-
-
Tom Lane authored
If a view lacks an INSTEAD OF trigger, DML on it can only work by rewriting the command into a command on the underlying base table(s). Then we will fire triggers attached to those table(s), not those for the view. This seems appropriate from a consistency standpoint, but nowhere was the behavior explicitly documented, so let's do that. There was some discussion of throwing an error or warning if a statement trigger is created on a view without creating a row INSTEAD OF trigger. But a simple implementation of that would result in dump/restore ordering hazards. Given that it's been like this all along, and we hadn't heard a complaint till now, a documentation improvement seems sufficient. Per bug #15106 from Pu Qun. Back-patch to all supported branches. Discussion: https://postgr.es/m/152083391168.1215.16892140713507052796@wrigleys.postgresql.org
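A short SQL sketch of the documented behavior (all names hypothetical): with no INSTEAD OF trigger, an INSERT on the view is rewritten to target the base table, so only the base table's triggers fire:

```sql
CREATE TABLE base_tbl (a int);
CREATE VIEW v AS SELECT a FROM base_tbl;

CREATE FUNCTION note_insert() RETURNS trigger LANGUAGE plpgsql AS
$$BEGIN RAISE NOTICE 'statement trigger fired on %', TG_TABLE_NAME; RETURN NULL; END$$;

-- fires: the rewritten command targets base_tbl
CREATE TRIGGER base_ins AFTER INSERT ON base_tbl
  FOR EACH STATEMENT EXECUTE PROCEDURE note_insert();

-- never fires: the view has no INSTEAD OF trigger, so it never sees the DML
CREATE TRIGGER view_ins AFTER INSERT ON v
  FOR EACH STATEMENT EXECUTE PROCEDURE note_insert();

INSERT INTO v VALUES (1);   -- NOTICE mentions base_tbl only
```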
-