- 14 Feb, 2018 8 commits
-
-
Tom Lane authored
The buildfarm's CLOBBER_CACHE_ALWAYS animals aren't happy with some of the test cases added in commit 4b93f579. There are two different problems:

* In two places, a different CONTEXT stack is shown because the error is detected in a different place, due to recompiling an expression from scratch rather than re-using a previously cached plan for it. I fixed these via the expedient of hiding the CONTEXT stack altogether.

* In one place, a test expected to fail (because a cached plan hadn't been updated) actually succeeds (because the forced recompile makes it good). I couldn't think of a simple workaround for this, so I've just commented out that test step altogether.

I have hopes of improving things enough that both of these kluges can be reverted eventually. The first one is the same kind of problem previously discussed at https://postgr.es/m/31545.1512924904@sss.pgh.pa.us but there was insufficient agreement about how to fix it, so we just hacked around the output instability (commit 9edc97b7). The second issue should be fixed by allowing the plan to be rebuilt when a type conflict is detected. But for today, let's just make the buildfarm green again.
-
Andres Freund authored
Some older compilers otherwise sometimes complain about undefined values, even though the return value should not be used in the overflow case. We assume that any decent compiler will optimize away the unnecessary assignment in performance-critical cases.

We do not want to constrain the returned value to a specific value, e.g. 0 or the wrapped-around value, because some fast ways to implement overflow-detecting math do not easily allow for that (e.g. msvc intrinsics). As the function documentation already describes the returned value in the overflow case as implementation defined, no documentation has to be updated.

Per complaint from Tom Lane and his buildfarm member prairiedog.

Author: Andres Freund
Discussion: https://postgr.es/m/18169.1513958454@sss.pgh.pa.us
-
Tom Lane authored
All of these are false positives, but in each case a fair amount of analysis is needed to see that, and it's not too surprising that not all compilers are smart enough. (In particular, in the logtape.c case, a compiler lacking the knowledge provided by the Assert would almost surely complain, so that this warning will be seen in any non-assert build.)

Some of these are of long standing while others are pretty recent, but it only seems worth fixing them in HEAD.

Jaime Casanova, tweaked a bit by me

Discussion: https://postgr.es/m/CAJGNTeMcYAMJdPAom52dppLMtF-UnEZi0dooj==75OEv1EoBZA@mail.gmail.com
-
Tom Lane authored
Per commit e748e902, we appear to have little or no coverage in the buildfarm of machines that will dump core when asked to printf a null string pointer. Let's try to improve that situation by adding an assertion that will make src/port/snprintf.c behave that way. Since it's just an assertion, it won't break anything in production builds, but it will help developers find this type of oversight.

Note that while our buildfarm coverage of machines that use that snprintf implementation is pretty thin on the Unix side (apparently amounting only to gaur/pademelon), all of the MSVC critters use it.

Discussion: https://postgr.es/m/156b989dbc6fe7c4d3223cf51da61195@postgrespro.ru
-
Tom Lane authored
plpython_error_callback() reported the name of the function associated with the topmost PL/Python execution context. This was not merely wrong if there were nested PL/Python contexts, but it risked a core dump if the topmost one is an inline code block rather than a named function. That will have proname = NULL, and so we were passing a NULL pointer to snprintf("%s"). It seems that none of the PL/Python-testing machines in the buildfarm will dump core for that, but some platforms do, as reported by Marina Polyakova.

Investigation finds that there actually is an existing regression test that used to prove that the behavior was wrong, though apparently no one had noticed that it was printing the wrong function name. It stopped showing the problem in 9.6 when we adjusted psql to not print CONTEXT by default for NOTICE messages. The problem is masked (if your platform avoids the core dump) in error cases, because PL/Python will throw away the originally generated error info in favor of a new traceback produced at the outer level.

Repair by using ErrorContextCallback.arg to pass the correct context to the error callback. Add a regression test illustrating correct behavior.

Back-patch to all supported branches, since they're all broken this way.

Discussion: https://postgr.es/m/156b989dbc6fe7c4d3223cf51da61195@postgrespro.ru
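For illustration, this is the shape of an inline PL/Python block (an anonymous DO block, which has no pg_proc entry and hence proname = NULL) of the kind that could trigger the problem; the body here is invented for the example:

    DO $$
    plpy.notice("inside an anonymous PL/Python block")
    plpy.execute("SELECT 1/0")  # error raised here; the CONTEXT line cannot assume a function name
    $$ LANGUAGE plpythonu;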
-
Tom Lane authored
These features were never implemented previously for composite or record variables ... not that the documentation admitted it, so there are no doc updates here.

This also fixes some issues concerning enforcing DOMAIN NOT NULL constraints against plpgsql variables, although I'm not sure that that topic is completely dealt with.

I created a new plpgsql test file for these features, and moved the one relevant existing test case into that file.

Tom Lane, reviewed by Daniel Gustafsson

Discussion: https://postgr.es/m/18362.1514605650@sss.pgh.pa.us
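A minimal sketch (type and function names are invented) of the kind of declaration this enables for composite-typed PL/pgSQL variables, combining an initializer with CONSTANT and NOT NULL:

    CREATE TYPE pair AS (x int, y int);

    CREATE FUNCTION pair_sum() RETURNS int AS $$
    DECLARE
        p CONSTANT pair NOT NULL := ROW(1, 2)::pair;  -- initial value, CONSTANT, NOT NULL
    BEGIN
        RETURN p.x + p.y;
    END
    $$ LANGUAGE plpgsql;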
-
Tom Lane authored
Over the years we've accreted quite a few special variables that are predefined in plpgsql trigger functions. The cost of initializing these variables to their defined values turns out to be a significant part of the runtime of simple triggers; but, undoubtedly, most real-world triggers never examine the values of most of these variables.

To improve matters, invent the notion of a variable that has a "promise" attached to it, specifying which of the predetermined values should be assigned to the variable if anything ever reads it. This eliminates all the unneeded startup overhead, in return for a small penalty on accesses to these variables.

Tom Lane, reviewed by Pavel Stehule

Discussion: https://postgr.es/m/11986.1514407114@sss.pgh.pa.us
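As an illustration (table and trigger names are assumed), a simple trigger like the one below reads only NEW; with the "promise" mechanism, the other predefined variables such as TG_OP, TG_NAME, and TG_TABLE_NAME are no longer computed at function entry, but only if the body actually reads them:

    CREATE FUNCTION stamp_row() RETURNS trigger AS $$
    BEGIN
        NEW.updated_at := now();   -- only NEW is touched; the promise variables stay unevaluated
        RETURN NEW;
    END
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER stamp BEFORE UPDATE ON my_table
        FOR EACH ROW EXECUTE PROCEDURE stamp_row();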
-
Tom Lane authored
Previously, copy_plpgsql_datum did a separate palloc for each variable needing instance-local storage. In simple benchmarks this made for a noticeable fraction of the total runtime. Improve it by precalculating the space needed for all of a function's variables and doing just one palloc for all of them.

In passing, remove PLPGSQL_DTYPE_EXPR from the list of plpgsql "datum" types, since in fact it has nothing in common with the others, and there is no place that needs to discriminate on the basis of dtype between an expression and any type of datum. And add comments clarifying which datum struct fields are generic and which aren't.

Tom Lane, reviewed by Pavel Stehule

Discussion: https://postgr.es/m/11986.1514407114@sss.pgh.pa.us
-
- 13 Feb, 2018 7 commits
-
-
Tom Lane authored
Formerly, DTYPE_REC was used only for variables declared as "record"; variables of named composite types used DTYPE_ROW, which is faster for some purposes but much less flexible. In particular, the ROW code paths are entirely incapable of dealing with DDL-caused changes to the number or data types of the columns of a row variable, once a particular plpgsql function has been parsed for the first time in a session. And, since the stored representation of a ROW isn't a tuple, there wasn't any easy way to deal with variables of domain-over-composite types, since the domain constraint checking code would expect the value to be checked to be a tuple. A lesser, but still real, annoyance is that ROW format cannot represent a true NULL composite value, only a row of per-field NULL values, which is not exactly the same thing.

Hence, switch to using DTYPE_REC for all composite-typed variables, whether "record", named composite type, or domain over named composite type. DTYPE_ROW remains but is used only for its native purpose, to represent a fixed-at-compile-time list of variables, for instance the targets of an INTO clause.

To accomplish this without taking significant performance losses, introduce infrastructure that allows storing composite-type variables as "expanded objects", similar to the "expanded array" infrastructure introduced in commit 1dc5ebc9. A composite variable's value is thereby kept (most of the time) in the form of separate Datums, so that field accesses and updates are not much more expensive than they were in the ROW format. This holds the line, more or less, on performance of variables of named composite types in field-access-intensive microbenchmarks, and makes variables declared "record" perform much better than before in similar tests.

In addition, the logic involved with enforcing composite-domain constraints against updates of individual fields is in the expanded record infrastructure, not plpgsql proper, so that it might be reusable for other purposes.

In further support of this, introduce a typcache feature for assigning a unique-within-process identifier to each distinct tuple descriptor of interest; in particular, DDL alterations on composite types result in a new identifier for that type. This allows very cheap detection of the need to refresh tupdesc-dependent data. This improves on the "tupDescSeqNo" idea I had in commit 687f096e: that assigned identifying sequence numbers to successive versions of individual composite types, but the numbers were not unique across different types, nor was there support for assigning numbers to registered record types.

In passing, allow plpgsql functions to accept as well as return type "record". There was no good reason for the old restriction, and it was out of step with most of the other PLs.

Tom Lane, reviewed by Pavel Stehule

Discussion: https://postgr.es/m/8962.1514399547@sss.pgh.pa.us
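A hedged sketch (type, domain, and function names are invented) of the three kinds of composite-typed variables that now share the DTYPE_REC representation, including a domain over a composite type whose constraint is checked on assignment:

    CREATE TYPE complex AS (r float8, i float8);
    CREATE DOMAIN checked_complex AS complex CHECK ((VALUE).r IS NOT NULL);

    CREATE FUNCTION add_real(a complex, b complex) RETURNS float8 AS $$
    DECLARE
        c   complex;           -- named composite type
        d   checked_complex;   -- domain over a composite type
        rec record;            -- "record", shape fixed at first assignment
    BEGIN
        c := a;
        d := b;                -- the domain constraint is enforced here
        rec := c;              -- the record variable adopts c's row type
        RETURN rec.r + d.r;
    END
    $$ LANGUAGE plpgsql;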
-
Peter Eisentraut authored
-
Peter Eisentraut authored
This also makes procedures work in psql's \ef and \sf commands.

Reported-by: Pavel Stehule <pavel.stehule@gmail.com>
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
Peter Eisentraut authored
Instead of issuing a reload after pg_hba.conf changes between test cases, run a full restart. With a reload, an error in the new pg_hba.conf is ignored and the tests will continue to run with the old settings, invalidating the subsequent test cases. With a restart, a faulty pg_hba.conf will lead to the test being aborted, which is what we'd rather want.
-
Peter Eisentraut authored
Author: Masahiko Sawada <sawada.mshk@gmail.com>
-
- 12 Feb, 2018 4 commits
-
-
Alvaro Herrera authored
The modern way is to use a missing_ok argument instead of two separate almost-identical routines, so do that.

Author: Michaël Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20180201063212.GE6398@paquier.xyz
-
Robert Haas authored
The previous code failed to realize that this setting effectively disables parallelism, and would crash if it decided to attempt parallelism anyway. Instead, treat it as a disabling condition.

Kyotaro Horiguchi, who also reported the issue. Reviewed by Michael Paquier and Peter Geoghegan.

Discussion: http://postgr.es/m/20180209.170635.256350357.horiguchi.kyotaro@lab.ntt.co.jp
-
Alvaro Herrera authored
Noticed while reviewing nearby text.
-
Bruce Momjian authored
Also add comment on why exit/quit are not documented.

Discussion: https://postgr.es/m/20180202053928.GA13472@momjian.us
-
- 11 Feb, 2018 1 commit
-
-
Tom Lane authored
pg_dump supposed that a stats object necessarily shares the same schema as its underlying table, and that it doesn't have a separate owner. These things may have been true during early development of the feature, but they are not true as of v10 release.

Failure to track the object's schema separately turns out to have only limited consequences, because pg_get_statisticsobjdef() always schema-qualifies the target object name in the generated CREATE STATISTICS command (a decision out of step with the rest of ruleutils.c, but I digress). Therefore the restored object would be in the right schema, so that the only problem is that the TOC entry would be mislabeled as to schema. That could lead to wrong decisions for schema-selective restores, for example.

The ownership issue is a bit more serious: not only was the TOC entry potentially mislabeled as to owner, but pg_dump didn't bother to issue an ALTER OWNER command at all, so that after restore the stats object would continue to be owned by the restoring superuser.

A final point is that decisions as to whether to dump a stats object or not were driven by whether the underlying table was dumped or not. While that's not wrong on its face, it won't scale nicely to the planned future extension to cross-table statistics. Moreover, that design decision comes out of the view of stats objects as being auxiliary to a particular table, like a rule or trigger, which is exactly where the above problems came from. Since we're now treating stats objects more like independent objects in their own right, they ought to behave like standalone objects for this purpose too. So change to using the generic selectDumpableObject() logic for them (which presently amounts to "dump if containing schema is to be dumped").

Along the way to fixing this, restructure so that getExtendedStatistics collects the identity info (only) for all extended stats objects in one query, and then for each object actually being dumped, we retrieve the definition in dumpStatisticsExt. This is necessary to ensure that schema-qualification in the generated CREATE STATISTICS command happens with respect to the search path that pg_dump will now be using at restore time (ie, the schema the stats object is in, not that of the underlying table). It's probably also significantly faster in the typical scenario where only a minority of tables have extended stats.

Back-patch to v10 where extended stats were introduced.

Discussion: https://postgr.es/m/18272.1518328606@sss.pgh.pa.us
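For context, a sketch of the situation being fixed (schema, table, and role names are illustrative): an extended statistics object may live in a different schema, and have a different owner, than the table it is defined on, and pg_dump has to track both independently.

    CREATE SCHEMA stats;
    CREATE TABLE public.orders (customer_id int, region text);

    CREATE STATISTICS stats.orders_dep (dependencies)
        ON customer_id, region FROM public.orders;

    ALTER STATISTICS stats.orders_dep OWNER TO analytics_admin;  -- role name is illustrative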
-
- 10 Feb, 2018 3 commits
-
-
Tom Lane authored
Prematurely freeing the EState used to evaluate CALL arguments led, in some cases, to passing dangling pointers to the procedure. This was masked in trivial cases because the argument pointers would point to Const nodes in the original expression tree, and in some other cases because the result value would end up in the standalone ExprContext rather than in memory belonging to the EState --- but that wasn't exactly high quality programming either, because the standalone ExprContext was never explicitly freed, breaking assorted API contracts.

In addition, using a separate EState for each argument was just silly. So let's use just one EState, and one ExprContext, and make the latter belong to the former rather than be standalone, and clean up the EState (and hence the ExprContext) post-call.

While at it, improve the function's commentary a bit.

Discussion: https://postgr.es/m/29173.1518282748@sss.pgh.pa.us
-
Tom Lane authored
CALL statements cannot support sub-SELECTs in the arguments of the called procedure, since they just use ExecEvalExpr to evaluate such arguments. Teach transformSubLink() to reject the case, as it already does for other contexts in which subqueries are not supported.

In passing, s/EXPR_KIND_CALL/EXPR_KIND_CALL_ARGUMENT/ to make that enum symbol line up more closely with the phrasing of the error messages it is associated with. And fix someone's weak grasp of English grammar in the preceding EXPR_KIND_PARTITION_EXPRESSION addition. Also update an incorrect comment in resolve_unique_index_expr (possibly it was correct when written, but nowadays transformExpr definitely does reject SRFs here).

Per report from Pavel Stehule --- but this resolves only one of the bugs he mentions.

Discussion: https://postgr.es/m/CAFj8pRDxOwPPzpA8i+AQeDQFj7bhVw-dR2==rfWZ3zMGkm568Q@mail.gmail.com
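For illustration (the procedure is invented), a sub-SELECT used as a CALL argument is now rejected with a clean parse-time error instead of being evaluated incorrectly:

    CREATE PROCEDURE log_count(n int) LANGUAGE plpgsql AS $$
    BEGIN
        RAISE NOTICE 'count is %', n;
    END
    $$;

    CALL log_count(42);                                    -- works
    CALL log_count((SELECT count(*)::int FROM pg_class));  -- now raises an error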
-
Alvaro Herrera authored
We can now create indexes more easily than before, so update this chapter to use the simpler instructions. After an idea of Amit Langote. I (Álvaro) opted to do more invasive surgery and remove the previous suggestion to create per-partition indexes, which his patch left in place.

Discussion: https://postgr.es/m/eafaaeb1-f0fd-d010-dd45-07db0300f645@lab.ntt.co.jp
Author: Amit Langote, Álvaro Herrera
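The simpler instructions amount to creating the index once on the partitioned parent rather than on each partition individually; the table name below follows the documentation's measurement example and is illustrative only:

    CREATE INDEX ON measurement (logdate);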
-
- 09 Feb, 2018 3 commits
-
-
Robert Haas authored
This makes life easier for extension authors.

Metin Doslu

Discussion: http://postgr.es/m/CAL1dPcfa45o1dC-c4t-48v0OZE6oy4ChJhObrtkK8mzNfXqDTA@mail.gmail.com
-
Robert Haas authored
Otherwise, we can end up with the flag set when the timeout is actually disabled, leading to misbehavior. Commit f8e5f156 introduced this bug.

Reported by Peter Eisentraut. Analysis and fix by Thomas Munro, tweaked by me.

Discussion: http://postgr.es/m/6a909374-2602-7136-8c70-397330a418f3@2ndquadrant.com
-
Robert Haas authored
Even after commit 882ea509, some buildfarm members are still failing in the postgres_fdw tests. Try to fix that by disabling use of remote statistics for some test cases.

Etsuro Fujita

Discussion: http://postgr.es/m/5A7D76CF.8080601@lab.ntt.co.jp
-
- 08 Feb, 2018 5 commits
-
-
Robert Haas authored
Atsushi Torikoshi

Discussion: http://postgr.es/m/1b056262-4bc0-a982-c899-bb67a0a7fd52@lab.ntt.co.jp
-
Robert Haas authored
Doing so causes EXPLAIN ANALYZE to show trigger statistics multiple times. Commit 2f178441 seems to be to blame for this.

Amit Langote, reviewed by Amit Khandekar, Etsuro Fujita, and me.
-
Robert Haas authored
When the previously-chosen plan was non-partial, all pa_finished flags for partial plans are now set, and pa_next_plan has not yet been set to INVALID_SUBPLAN_INDEX, the previous code could go into an infinite loop.

Report by Rajkumar Raghuwanshi. Patch by Amit Khandekar and me. Review by Kyotaro Horiguchi.

Discussion: http://postgr.es/m/CAJ3gD9cf43z78qY=U=H0HvOEN341qfRO-vLpnKPSviHeWgJQ5w@mail.gmail.com
-
Peter Eisentraut authored
Instead of using the psql/libpq connection string as the displayed test name and relying on "notes" and source code comments to explain the tests, give the tests self-explanatory names, like we do elsewhere.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
-
Robert Haas authored
Commit 1bc0100d added these tests, but they're not stable enough to survive in the buildfarm. Remove CTIDs from the output in the hopes of fixing that.
-
- 07 Feb, 2018 7 commits
-
-
Robert Haas authored
Commit 0bf3ae88 allowed direct foreign table modification; instead of fetching each row, updating it locally, and then pushing the modification back to the remote side, we would instead do all the work on the remote server via a single remote UPDATE or DELETE command. However, that commit only enabled this optimization when the join tree consisted only of the target table.

This change allows the same optimization when an UPDATE statement has a FROM clause or a DELETE statement has a USING clause. This works much like ordinary foreign join pushdown, in that the tables must be on the same remote server, relevant parts of the query must be pushdown-safe, and so forth.

Etsuro Fujita, reviewed by Ashutosh Bapat, Rushabh Lathia, and me. Some formatting corrections by me.

Discussion: http://postgr.es/m/5A57193A.2080003@lab.ntt.co.jp
Discussion: http://postgr.es/m/b9cee735-62f8-6c07-7528-6364ce9347d0@lab.ntt.co.jp
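A sketch with invented table names: when orders and customers are both postgres_fdw foreign tables on the same remote server and the relevant quals are pushdown-safe, statements like these can now be executed as a single remote UPDATE or DELETE instead of row by row:

    UPDATE orders o
       SET status = 'archived'
      FROM customers c
     WHERE c.id = o.customer_id AND c.inactive;

    DELETE FROM orders o
     USING customers c
     WHERE c.id = o.customer_id AND c.inactive;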
-
Peter Eisentraut authored
Add explicit collation on the trigger name to avoid locale dependencies. Also restrict the tables selected, to avoid interference from concurrently running tests.
-
Peter Eisentraut authored
- table_constraints.enforced
- triggers.action_order
- triggers.action_reference_old_table
- triggers.action_reference_new_table

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
-
Robert Haas authored
Commit 4b0d28de should have updated this comment, but did not.

Thomas Munro

Discussion: http://postgr.es/m/CAEepm=0iJ8aqQcF9ij2KerAkuHF3SwrVTzjMdm1H4w++nfBf9A@mail.gmail.com
-
Robert Haas authored
Commit 5ded4bd2 removed the code for this function, but neglected to remove the prototype and associated comments.

Dagfinn Ilmari Mannsåker

Discussion: http://postgr.es/m/d8j4lmuxjzk.fsf@dalvik.ping.uio.no
-
Magnus Hagander authored
Since we now support the server-side handler for git over https (so we're no longer using the "dumb protocol"), make https the primary choice for cloning the repository, and the git protocol the secondary choice. In passing, also change the links to git-scm.com from http to https.

Reviewed by Stefan Kaltenbrunner and David G. Johnston
-
Tom Lane authored
This patch adds the ability to use "RANGE offset PRECEDING/FOLLOWING" frame boundaries in window functions. We'd punted on that back in the original patch to add window functions, because it was not clear how to do it in a reasonably data-type-extensible fashion. That problem is resolved here by adding the ability for btree operator classes to provide an "in_range" support function that defines how to add or subtract the RANGE offset value. Factoring it this way also allows the operator class to avoid overflow problems near the ends of the datatype's range, if it wishes to expend effort on that. (In the committed patch, the integer opclasses handle that issue, but it did not seem worth the trouble to avoid overflow failures for datetime types.)

The patch includes in_range support for the integer_ops opfamily (int2/int4/int8) as well as the standard datetime types. Support for other numeric types has been requested, but that seems like suitable material for a follow-on patch.

In addition, the patch adds GROUPS mode which counts the offset in ORDER-BY peer groups rather than rows, and it adds the frame_exclusion options specified by SQL:2011. As far as I can see, we are now fully up to spec on window framing options.

Existing behaviors remain unchanged, except that I changed the errcode for a couple of existing error reports to meet the SQL spec's expectation that negative "offset" values should be reported as SQLSTATE 22013.

Internally and in relevant parts of the documentation, we now consistently use the terminology "offset PRECEDING/FOLLOWING" rather than "value PRECEDING/FOLLOWING", since the term "value" is confusingly vague.

Oliver Ford, reviewed and whacked around some by me

Discussion: https://postgr.es/m/CAGMVOdu9sivPAxbNN0X+q19Sfv9edEPv=HibOJhB14TJv_RCQg@mail.gmail.com
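For illustration (table and column names are assumed), the new frame options look like this: a RANGE frame with an offset over a timestamp ordering, and a GROUPS frame combined with one of the SQL:2011 frame-exclusion clauses:

    SELECT ts, amount,
           sum(amount) OVER (ORDER BY ts
                             RANGE BETWEEN '1 hour'::interval PRECEDING
                                       AND CURRENT ROW)           AS last_hour_sum,
           avg(amount) OVER (ORDER BY amount
                             GROUPS BETWEEN 1 PRECEDING AND 1 FOLLOWING
                             EXCLUDE CURRENT ROW)                  AS peer_group_avg
      FROM payments;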
-
- 06 Feb, 2018 2 commits
-
-
Robert Haas authored
Etsuro Fujita

Discussion: http://postgr.es/m/5A7981EA.8020201@lab.ntt.co.jp
-
Robert Haas authored
LogicalTapeFreeze() may write out its first block when it is dirty but not full, and then immediately read the first block back in from its BufFile as a BLCKSZ-width block. This can only occur in rare cases where very few tuples were written out, which is currently only possible with parallel external tuplesorts. To avoid valgrind complaints, tell it to treat the tail of logtape.c's buffer as defined.

Commit 9da0cc35 exposed this problem but did not create it. LogicalTapeFreeze() has always tended to write out some amount of garbage bytes, but previously never wrote less than one block of data in total, so the problem was masked.

Per buildfarm members lousyjack and skink.

Peter Geoghegan, based on a suggestion from Tom Lane and me. Some comment revisions by me.
-