- 29 Jan, 2020 1 commit
-
-
Amit Kapila authored
Commit 40d964ec allowed the vacuum command to leverage multiple CPUs by invoking parallel workers to process indexes. This commit provides a '--parallel' option to specify the parallel degree used by the vacuum command. Author: Masahiko Sawada, with a few modifications by me Reviewed-by: Mahendra Singh and Amit Kapila Discussion: https://postgr.es/m/CAD21AoDTPMgzSkV4E3SFo1CH_x50bf5PqZFQf4jmqjk-C03BWg@mail.gmail.com
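As a rough illustration (the table name is hypothetical), the server-side PARALLEL option from commit 40d964ec, which the new command-line switch maps onto, can be exercised like this:

    -- Ask for up to 4 parallel workers to process this table's indexes.
    VACUUM (PARALLEL 4, VERBOSE) pgbench_accounts;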
-
- 28 Jan, 2020 7 commits
-
-
Tom Lane authored
EvalPlanQualStart() supposed that it could re-use the relsubs_rowmark and relsubs_done arrays from a prior instantiation. But since they are allocated in the es_query_cxt of the recheckestate, that's just wrong; EvalPlanQualEnd() will blow away that storage. Therefore we were using storage that could have been reallocated to something else, causing all sorts of havoc. I think this was modeled on the old code's handling of es_epqTupleSlot, but since the code was anyway clearing the arrays at re-use, there's clearly no expectation of importing any outside state. So it's just a dubious savings of a couple of pallocs, which is negligible compared to setting up a new planstate tree. Therefore, just allocate the arrays always. (I moved the allocations slightly for readability.) In principle this bug could cause a problem whenever EPQ rechecks are needed in more than one target table of a ModifyTable plan node. In practice it seems not quite so easy to trigger as that; I couldn't readily duplicate a crash with a partitioned target table, for instance. That's probably down to incidental choices about when to free or reallocate stuff. The added isolation test case does seem to reliably show an assertion failure, though. Per report from Oleksii Kliukin. Back-patch to v12 where the bug was introduced (evidently by commit 3fb307bc4). Discussion: https://postgr.es/m/EEF05F66-2871-4786-992B-5F45C92FEE2E@hintbits.com
-
Heikki Linnakangas authored
Commit 38a95731 got this backwards. Author: Kyotaro Horiguchi Discussion: https://www.postgresql.org/message-id/20200128.194408.2260703306774646445.horikyota.ntt@gmail.com
-
Thomas Munro authored
Per build farm animal anole, after commit 6f38d4da.
-
Thomas Munro authored
It's not OK to do that without calling CHECK_FOR_INTERRUPTS(). Let the next wait loop deal with it, following the usual pattern. One consequence of this bug was that a SIGTERM delivered in a very narrow timing window could leave a parallel worker process waiting forever for a condition variable that will never be signaled, after an error was raised in another process. The code is a bit different in the stable branches due to commit 1321509f, making problems less likely there. No back-patch for now, but we may finish up deciding to make a similar change after more discussion. Author: Thomas Munro Reviewed-by: Shawn Debnath Reported-by: Tomas Vondra Discussion: https://postgr.es/m/CA%2BhUKGJOm8zZHjVA8svoNT3tHY0XdqmaC_kHitmgXDQM49m1dA%40mail.gmail.com
-
Amit Kapila authored
This gives more information to the user about the error and it makes such messages consistent with the other similar messages in the code. Reported-by: Simon Riggs Author: Mahendra Singh and Simon Riggs Reviewed-by: Beena Emerson and Amit Kapila Discussion: https://postgr.es/m/CANP8+j+7YUvQvGxTrCiw77R23enMJ7DFmyA3buR+fa2pKs4XhA@mail.gmail.com
-
Michael Paquier authored
These two new parameters, named sslminprotocolversion and sslmaxprotocolversion, allow controlling, respectively, the minimum and the maximum version of the SSL protocol used for an SSL connection attempt. The default setting is to allow any version for both the minimum and the maximum bounds, causing libpq to rely on the bounds set by the backend when negotiating the protocol to use for an SSL connection. The bounds are checked as early as possible when the values are set, which makes the checks independent of any SSL implementation. Author: Daniel Gustafsson Reviewed-by: Michael Paquier, Cary Huang Discussion: https://postgr.es/m/4F246AE3-A7AE-471E-BD3D-C799D3748E03@yesql.se
-
Thomas Munro authored
The following changes make the predicate locking functions more generic and suitable for use by future access methods:
- PredicateLockTuple() is renamed to PredicateLockTID(). It takes ItemPointer and inserting transaction ID instead of HeapTuple.
- CheckForSerializableConflictIn() takes blocknum instead of buffer.
- CheckForSerializableConflictOut() no longer takes HeapTuple or buffer.
Author: Ashwin Agrawal Reviewed-by: Andres Freund, Kuntal Ghosh, Thomas Munro Discussion: https://postgr.es/m/CALfoeiv0k3hkEb3Oqk%3DziWqtyk2Jys1UOK5hwRBNeANT_yX%2Bng%40mail.gmail.com
-
- 27 Jan, 2020 5 commits
-
-
Tom Lane authored
In the wake of 1f3a0217, assorted buildfarm members were warning about "control reaches end of non-void function" or the like. Do what we've done elsewhere: in place of a "default" switch case that will prevent the compiler from warning about unhandled enum values, put a catchall elog() after the switch. And return a dummy value to satisfy compilers that don't know elog() doesn't return.
-
Robert Haas authored
Specifically, move those functions that depend on ereport() from jsonapi.c to jsonfuncs.c, in preparation for allowing jsonapi.c to be used from frontend code. A few cases where elog(ERROR, ...) is used for can't-happen conditions are left alone; we can handle those in some other way in frontend code. Reviewed by Mark Dilger and Andrew Dunstan. Discussion: http://postgr.es/m/CA+TgmoYfOXhd27MUDGioVh6QtpD0C1K-f6ObSA10AWiHBAL5bA@mail.gmail.com
-
Robert Haas authored
Instead, it now returns a value indicating either success or the type of error which occurred. The old behavior is still available by calling pg_parse_json_or_ereport(). If the new interface is used, an error can be thrown by passing the return value of pg_parse_json() to json_ereport_error(). pg_parse_json() can still elog() in can't-happen cases, but it seems like that issue is best handled separately. Adjust json_lex() and json_count_array_elements() to return an error code, too. This is all in preparation for making the backend's json parser available to frontend code. Reviewed and/or tested by Mark Dilger and Andrew Dunstan. Discussion: http://postgr.es/m/CA+TgmoYfOXhd27MUDGioVh6QtpD0C1K-f6ObSA10AWiHBAL5bA@mail.gmail.com
-
Thomas Munro authored
Currently, Parallel Hash Join cannot be used for full/right joins, so there is no point in setting the match flag. It turns out that the cache coherence traffic generated by those writes slows down large systems running many-core joins, so let's stop doing that. In future, if we need to use match bits in parallel joins, we might want to consider setting them only if not already set. Back-patch to 11, where Parallel Hash Join arrived. Reported-by: Deng, Gang Discussion: https://postgr.es/m/0F44E799048C4849BAE4B91012DB910462E9897A%40SHSMSX103.ccr.corp.intel.com
-
Michael Paquier authored
The leaks have been detected by a Coverity run on Windows. No backpatch is done as the leaks are minor. While on it, make restricted token creation more consistent in its error handling by logging an error instead of a warning if advapi32.dll is missing, something that could only happen back in the NT4 days. Any modern platform should have this DLL around. Now, if the library is not there, an error is still reported back to the caller and nothing is done, so there is no behavior change in this commit. Author: Ranier Vilela Discussion: https://postgr.es/m/CAEudQApa9MG0foPkgPX87fipk=vhnF2Xfg+CfUyR08h4R7Mywg@mail.gmail.com
-
- 26 Jan, 2020 4 commits
-
-
Tom Lane authored
In non-TEXT output formats, the "Settings" field should appear when requested, even if it would be empty. Also, get rid of the premature optimization of counting all the GUC_EXPLAIN variables at startup. Since there was no provision for adjusting that count later, all it'd take would be some extension marking a parameter as GUC_EXPLAIN to risk an assertion failure or memory stomp. We could make get_explain_guc_options() count those variables on-the-fly, or dynamically resize its array ... but TBH I do not think that making a transient array of pointers a bit smaller is worth any extra complication, especially when you consider all the other transient space EXPLAIN eats. So just allocate that array at the max possible size. In HEAD, also add some regression test coverage for this feature. Because of the memory-stomp hazard, back-patch to v12 where this feature was added. Discussion: https://postgr.es/m/19416.1580069629@sss.pgh.pa.us
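A minimal way to see the field in a structured format (the query and parameter choice are illustrative only):

    -- With SETTINGS requested, non-text formats now always emit a "Settings" key,
    -- even when no parameter marked GUC_EXPLAIN differs from its default.
    SET work_mem = '64MB';
    EXPLAIN (SETTINGS, FORMAT JSON) SELECT 1;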
-
Thomas Munro authored
As reported independently by a couple of people, _mdfd_openseg() is coded in a way that seems to imply that the segments could be opened in an order that isn't strictly sequential. Even if that were true, it's also using the wrong comparison. It's not an active bug, since the condition is always true anyway, but it's confusing, so replace it with an assertion. Author: Thomas Munro Reviewed-by: Andres Freund, Kyotaro Horiguchi, Noah Misch Discussion: https://postgr.es/m/CA%2BhUKG%2BNBw%2BuSzxF1os-SO6gUuw%3DcqO5DAybk6KnHKzgGvxhxA%40mail.gmail.com Discussion: https://postgr.es/m/20191222091930.GA1280238%40rfd.leadboat.com
-
Tom Lane authored
In a statement like "UPDATE remote_tab SET (x,y) = (SELECT ...)", we'd conclude that the statement could be directly executed remotely, because the sub-SELECT is in a resjunk tlist item that's not examined for shippability. Currently that ends up crashing if the sub-SELECT contains any remote Vars. Prevent the crash by deeming MULTIEXPR Params to be unshippable. This is a bit of a brute-force solution, since if the sub-SELECT *doesn't* contain any remote Vars, the current execution technology would work; but that's not a terribly common use-case for this syntax, I think. In any case, we generally don't try to ship sub-SELECTs, so it won't surprise anybody that this doesn't end up as a remote direct update. I'd be inclined to see if that general limitation can be fixed before worrying about this case further. Per report from Lukáš Sobotka. Back-patch to 9.6. 9.5 had MULTIEXPR, but we didn't try to perform remote direct updates then, so the case didn't arise anyway. Discussion: https://postgr.es/m/CAJif3k+iA_ekBB5Zw2hDBaE1wtiQa4LH4_JUXrrMGwTrH0J01Q@mail.gmail.com
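A sketch of the shape of statement in question, against a hypothetical postgres_fdw foreign table; because the sub-SELECT references remote columns, the MULTIEXPR Param is now deemed unshippable and the update is no longer attempted as a remote direct update:

    -- remote_tab is assumed to be a foreign table; the sub-SELECT swaps x and y.
    UPDATE remote_tab SET (x, y) = (SELECT y, x) WHERE id = 1;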
-
Heikki Linnakangas authored
The signature of XLogReadRecord() required the caller to pass the starting WAL position as argument, or InvalidXLogRecPtr to continue reading at the end of the previous record. That's slightly awkward for the callers, as most of them don't want to randomly jump around in the WAL stream, but start reading at one position and then read everything from that point onwards. Remove the 'RecPtr' argument and add a new function XLogBeginRead() to specify the starting position instead. That's more convenient for the callers. Also, xlogreader holds state that is reset when you change the starting position, so having a separate function for doing that feels like a more natural fit. This changes the XLogFindNextRecord() function so that it doesn't reset the xlogreader's state to what it was before the call anymore. Instead, it positions the xlogreader at the found record, like XLogBeginRead(). Reviewed-by: Kyotaro Horiguchi, Alvaro Herrera Discussion: https://www.postgresql.org/message-id/5382a7a3-debe-be31-c860-cb810c08f366%40iki.fi
-
- 25 Jan, 2020 2 commits
-
-
Tom Lane authored
Previously, it was possible for EXPLAIN ANALYZE of a parallel query to produce several different "Workers" fields for a single plan node, because different portions of explain.c independently generated per-worker data and wrapped that output in separate fields. This is pretty bogus, especially for the structured output formats: even if it's not technically illegal, most programs would have a hard time dealing with such data. To improve matters, add infrastructure that allows redirecting per-worker values into a side data structure, and then collect that data into a single "Workers" field after we've finished running all the relevant code for a given plan node. There are a few visible side-effects:

* In text format, instead of something like

    Sort Method: external merge Disk: 4920kB
    Worker 0: Sort Method: external merge Disk: 5880kB
    Worker 1: Sort Method: external merge Disk: 5920kB
    Buffers: shared hit=682 read=10188, temp read=1415 written=2101
    Worker 0: actual time=130.058..130.324 rows=1324 loops=1
      Buffers: shared hit=337 read=3489, temp read=505 written=739
    Worker 1: actual time=130.273..130.512 rows=1297 loops=1
      Buffers: shared hit=345 read=3507, temp read=505 written=744

  you get

    Sort Method: external merge Disk: 4920kB
    Buffers: shared hit=682 read=10188, temp read=1415 written=2101
    Worker 0: actual time=130.058..130.324 rows=1324 loops=1
      Sort Method: external merge Disk: 5880kB
      Buffers: shared hit=337 read=3489, temp read=505 written=739
    Worker 1: actual time=130.273..130.512 rows=1297 loops=1
      Sort Method: external merge Disk: 5920kB
      Buffers: shared hit=345 read=3507, temp read=505 written=744

* When JIT is enabled, any relevant per-worker JIT stats are attached to the child node of the Gather or Gather Merge node, which is where the other per-worker output has always been. Previously, that info was attached directly to a Gather node, or missed entirely for Gather Merge.

* A query's summary JIT data no longer includes a bogus "Worker Number: -1" field.

A notable code-level change is that indenting for lines of text-format output should now be handled by calling "ExplainIndentText(es)", instead of hard-wiring how much space to emit. This seems a good deal cleaner anyway. This patch also adds a new "explain.sql" regression test script that's dedicated to testing EXPLAIN. There is more that can be done in that line, certainly, but for now it just adds some coverage of the XML and YAML output formats, which had been completely untested. Although this is surely a bug fix, it's not clear that people would be happy with rearranging EXPLAIN output in a minor release, so apply to HEAD only. Maciek Sakrejda and Tom Lane, based on an idea of Andres Freund's; reviewed by Georgios Kokolatos Discussion: https://postgr.es/m/CAOtHd0AvAA8CLB9Xz0wnxu1U=zJCKrr1r4QwwXi_kcQsHDVU=Q@mail.gmail.com
-
Dean Rasheed authored
These compute the greatest common divisor and least common multiple of a pair of numbers using the Euclidean algorithm. Vik Fearing, reviewed by Fabien Coelho. Discussion: https://postgr.es/m/adbd3e0b-e3f1-5bbc-21db-03caf1cef0f7@2ndquadrant.com
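A quick illustration of the new functions:

    SELECT gcd(12, 18);   -- 6
    SELECT lcm(4, 6);     -- 12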
-
- 24 Jan, 2020 7 commits
-
-
Robert Haas authored
At first glance, this function seems useful, but it actually increases the amount of code required rather than decreasing it. Inline the logic into the callers instead; most callers don't use the 'lexeme' argument for anything and as a result considerable simplification is possible. Along the way, fix the header comment for the nearby function lex_expect(), which mislabeled it as lex_accept(). Patch by me, reviewed by David Steele, Mark Dilger, and Andrew Dunstan. Discussion: http://postgr.es/m/CA+TgmoYfOXhd27MUDGioVh6QtpD0C1K-f6ObSA10AWiHBAL5bA@mail.gmail.com
-
Robert Haas authored
Keep the code that pertains to the 'json' data type in json.c, but move the lexing and parsing code to a new file jsonapi.c, a name I chose because the corresponding prototypes are in jsonapi.h. This seems like a logical division, because the JSON lexer and parser are also used by the 'jsonb' data type, but the SQL-callable functions in json.c are a separate thing. Also, the new jsonapi.c file needs to include far fewer header files than json.c, which seems like a good sign that this is an appropriate place to insert an abstraction boundary. I took the opportunity to remove a few apparently-unneeded includes from json.c at the same time. Patch by me, reviewed by David Steele, Mark Dilger, and Andrew Dunstan. The previous commit was, too, but I forgot to note it in the commit message. Discussion: http://postgr.es/m/CA+TgmoYfOXhd27MUDGioVh6QtpD0C1K-f6ObSA10AWiHBAL5bA@mail.gmail.com
-
Robert Haas authored
The major change here is that we no longer include jsonb.h into jsonapi.h. The reason that was necessary is that jsonapi.h included prototypes for several functions in jsonfuncs.c that depend on the Jsonb type. Move those prototypes to a new header, jsonfuncs.h, and include it where needed. The other change is that JsonEncodeDateTime is now declared in json.h rather than jsonapi.h. Taken together, these steps eliminate all dependencies of jsonapi.h on backend-only data types and header files, so that it can potentially be included in frontend code.
-
Fujii Masao authored
This function allows us to fsync the specified file or directory. It's useful, for example, when we want to sync the file that pg_file_write() writes out or that COPY TO exports the data into, for durability. Author: Fujii Masao Reviewed-By: Julien Rouhaud, Arthur Zakirov, Michael Paquier, Atsushi Torikoshi Discussion: https://www.postgresql.org/message-id/CAHGQGwGY8uzZ_k8dHRoW1zDcy1Z7=5GQ+So4ZkVy2u=nLsk=hA@mail.gmail.com
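A hedged sketch of how this might be combined with adminpack's pg_file_write() (file name and contents are made up; check the adminpack documentation for the exact signatures):

    -- Assumes the adminpack extension is installed and the caller is a superuser.
    CREATE EXTENSION IF NOT EXISTS adminpack;
    -- Write a file under the data directory, then fsync it for durability.
    SELECT pg_file_write('report.txt', 'some exported data', false);
    SELECT pg_file_sync('report.txt');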
-
Peter Eisentraut authored
src/include/common/unicode_combining_table.h is currently not meant to be included standalone. Things could be refactored to allow it, but that would be beyond the present purpose. So adding an exclusion here seems best. Discussion: https://www.postgresql.org/message-id/10754.1579535012@sss.pgh.pa.us
-
Peter Eisentraut authored
-
Michael Paquier authored
Only the parameter parallel_workers can be used directly with ALTER TABLE. Issue introduced in 6f3a13ff, so backpatch down to 10. Author: Justin Pryzby Discussion: https://postgr.es/m/20200106025623.GA12066@telsasoft.com Backpatch-through: 10
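For reference, the supported form (table name and value are arbitrary):

    ALTER TABLE measurement SET (parallel_workers = 4);
    ALTER TABLE measurement RESET (parallel_workers);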
-
- 23 Jan, 2020 6 commits
-
-
Tom Lane authored
I had supposed that all versions of Readline that have filename quoting hooks also have the rl_completion_suppress_quote variable. But it seems OpenBSD managed to find a version someplace that does not, so we'll have to expend a separate configure probe for that. (Light testing suggests that this version also lacks the bugs that make it necessary to frob that variable. Hooray!) Per buildfarm.
-
Tom Lane authored
I had supposed that the from_char_seq_search() call sites were all passing the constant arrays you'd expect them to pass ... but on looking closer, the one for DY format was passing the days[] array not days_short[]. This accidentally worked because the day abbreviations in English are all the same as the first three letters of the full day names. However, once we took out the "maximum comparison length" logic, it stopped working. As penance for that oversight, add regression test cases covering this, as well as every other switch case in DCH_from_char() that was not reached according to the code coverage report. Also, fold the DCH_RM and DCH_rm cases into one --- now that seq_search is case independent, there's no need to pass different comparison arrays for those cases. Back-patch, as the previous commit was.
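A small example of the DY pattern the new tests cover (values are illustrative; day-of-week fields are parsed but do not affect the computed date):

    SELECT to_char(date '2020-01-26', 'DY DD Mon YYYY');   -- SUN 26 Jan 2020
    SELECT to_date('SUN 26 01 2020', 'DY DD MM YYYY');     -- 2020-01-26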
-
Tom Lane authored
seq_search(), which is used to match input substrings to constants such as month and day names, had a lot of bizarre and unnecessary behaviors. It was mostly possible to avert our eyes from that before, but we don't want to duplicate those behaviors in the upcoming patch to allow recognition of non-English month and day names. So it's time to clean this up. In particular:

* seq_search scribbled on the input string, which is a pretty dangerous thing to do, especially in the badly underdocumented way it was done here. Fortunately the input string is a temporary copy, but that was being made three subroutine levels away, making it something easy to break accidentally. The behavior is externally visible nonetheless, in the form of odd case-folding in error reports about unrecognized month/day names. The scribbling is evidently being done to save a few calls to pg_tolower, but that's such a cheap function (at least for ASCII data) that it's pretty pointless to worry about. In HEAD I switched it to be pg_ascii_tolower to ensure it is cheap in all cases; but there are corner cases in Turkish where this'd change behavior, so leave it as pg_tolower in the back branches.

* seq_search insisted on knowing the case form (all-upper, all-lower, or initcap) of the constant strings, so that it didn't have to case-fold them to perform case-insensitive comparisons. This likewise seems like excessive micro-optimization, given that pg_tolower is certainly very cheap for ASCII data. It seems unsafe to assume that we know the case form that will come out of pg_locale.c for localized month/day names, so it's better just to define the comparison rule as "downcase all strings before comparing". (The choice between downcasing and upcasing is arbitrary so far as English is concerned, but it might not be in other locales, so follow citext's lead here.)

* seq_search also had a parameter that'd cause it to report a match after a maximum number of characters, even if the constant string were longer than that. This was not actually used because no caller passed a value small enough to cut off a comparison. Replicating that behavior for localized month/day names seems expensive as well as useless, so let's get rid of that too.

* from_char_seq_search used the maximum-length parameter to truncate the input string in error reports about not finding a matching name. This leads to rather confusing reports in many cases. Worse, it is outright dangerous if the input string isn't all-ASCII, because we risk truncating the string in the middle of a multibyte character. That'd lead either to delivering an illegible error message to the client, or to encoding-conversion failures that obscure the actual data problem. Get rid of that in favor of truncating at whitespace if any (a suggestion due to Alvaro Herrera).

In addition to fixing these things, I const-ified the input string pointers of DCH_from_char and its subroutines, to make sure there aren't any other scribbling-on-input problems. The risk of generating a badly-encoded error message seems like enough of a bug to justify back-patching, so patch all supported branches. Discussion: https://postgr.es/m/29432.1579731087@sss.pgh.pa.us
-
Tom Lane authored
The Readline library contains a fair amount of knowledge about how to tab-complete filenames, but it turns out that that doesn't work too well unless we follow its expectation that we use its filename quoting hooks to quote and de-quote filenames. We were trying to do such quote handling within complete_from_files(), and that's still what we have to do if we're using libedit, which lacks those hooks. But for Readline, it works a lot better if we tell Readline that single-quote is a quoting character and then provide hooks that know the details of the quoting rules for SQL and psql meta-commands. Hence, resurrect the quoting hook functions that existed in the original version of tab-complete.c (and were disabled by commit f6689a32 because they "didn't work so well yet"), and whack on them until they do seem to work well. Notably, this fixes bug #16059 from Steven Winfield, who pointed out that the previous coding would strip quote marks from filenames in SQL COPY commands, even though they're syntactically necessary there. Now, we not only don't do that, but we'll add a quote mark when you tab-complete, even if you didn't type one. Getting this to work across a range of libedit versions (and, to a lesser extent, libreadline versions) was depressingly difficult. It will be interesting to see whether the new regression test cases pass everywhere in the buildfarm. Some future patch might try to handle quoted SQL identifiers with similar explicit quoting/dequoting logic, but that's for another day. Patch by me, reviewed by Peter Eisentraut. Discussion: https://postgr.es/m/16059-8836946734c02b84@postgresql.org
-
Michael Paquier authored
The docs had some typos and grammar mistakes, and their indentation was inconsistent. Author: Amit Langote, Justin Pryzby Discussion: https://postgr.es/m/20200116151930.GM26045@telsasoft.com
-
Michael Paquier authored
Author: Justin Pryzby Discussion: https://postgr.es/m/20200113004542.GA26045@telsasoft.com
-
- 22 Jan, 2020 4 commits
-
-
Alvaro Herrera authored
This test case was sketched in commit message 4c870109 to explain an ancient bug; it translates to a coverage increase, so add it to the BRIN regression tests.
-
Fujii Masao authored
Detection of WAL records having references to invalid pages during recovery causes PostgreSQL to raise a PANIC-level error, aborting the recovery. Setting ignore_invalid_pages to on causes the system to ignore those WAL records (but still report a warning), and continue recovery. This behavior may cause crashes or data loss, propagate or hide corruption, or lead to other serious problems. However, it may allow you to get past the PANIC-level error, to finish the recovery, and to cause the server to start up. Author: Fujii Masao Reviewed-by: Michael Paquier Discussion: https://www.postgresql.org/message-id/CAHGQGwHCK6f77yeZD4MHOnN+PaTf6XiJfEB+Ce7SksSHjeAWtg@mail.gmail.com
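A hedged sketch of enabling it as a last resort; this is a postmaster-level developer option, so it could equally be placed directly in postgresql.conf, and a server restart is required either way (the warnings above apply in full):

    -- Emergency use only: let recovery continue past WAL references to invalid pages.
    ALTER SYSTEM SET ignore_invalid_pages = on;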
-
Amit Kapila authored
In commit 40d964ec, we changed the way memory is allocated for dead tuples but forgot to update the place where we compute the maximum number of dead tuples. This could lead to invalid memory requests. Reported-by: Andres Freund Diagnosed-by: Andres Freund Author: Masahiko Sawada Reviewed-by: Amit Kapila and Dilip Kumar Discussion: https://postgr.es/m/20200121060020.e3cr7s7fj5rw4lok@alap3.anarazel.de
-
Michael Paquier authored
Attempting to use CREATE INDEX, DROP INDEX or REINDEX with CONCURRENTLY on a temporary relation with ON COMMIT actions triggered unexpected errors because those operations use multiple transactions internally to complete their work. Here is, for example, one confusing error when using ON COMMIT DELETE ROWS:

    ERROR: index "foo" already contains data

Issues related to temporary relations and concurrent indexing are fixed in this commit by enforcing the non-concurrent path to be taken for temporary relations even if using CONCURRENTLY, transparently to the user. Using a non-concurrent path does not matter in practice as locks cannot be taken on a temporary relation by a session different than the one owning the relation, and the non-concurrent operation is more effective. The problem exists with REINDEX since v12 with the introduction of CONCURRENTLY, and with CREATE/DROP INDEX since CONCURRENTLY exists for those commands. In all supported versions, this caused only confusing error messages to be generated. Note that with REINDEX, it was also possible to issue a REINDEX CONCURRENTLY for a temporary relation owned by a different session, leading to a server crash. The idea to enforce transparently the non-concurrent code path for temporary relations comes originally from Andres Freund. Reported-by: Manuel Rigger Author: Michael Paquier, Heikki Linnakangas Reviewed-by: Andres Freund, Álvaro Herrera, Heikki Linnakangas Discussion: https://postgr.es/m/CA+u7OA6gP7YAeCguyseusYcc=uR8+ypjCcgDDCTzjQ+k6S9ksQ@mail.gmail.com Backpatch-through: 9.4
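A minimal sketch of the kind of sequence that used to trigger such errors (names are made up); after this commit, the CONCURRENTLY request on a temporary relation silently falls back to the non-concurrent path:

    CREATE TEMP TABLE foo (a int) ON COMMIT DELETE ROWS;
    INSERT INTO foo VALUES (1);
    -- Previously this could fail in confusing ways; now it transparently
    -- uses the non-concurrent code path.
    CREATE INDEX CONCURRENTLY foo_a_idx ON foo (a);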
-
- 21 Jan, 2020 3 commits
-
-
Tom Lane authored
The behavior of something like

    ALTER TABLE transactions
      ADD COLUMN status varchar(30) DEFAULT 'old',
      ALTER COLUMN status SET default 'current';

is to fill existing table rows with 'old', not 'current'. That's intentional and desirable for a couple of reasons:

* It makes the behavior the same whether you merge the sub-commands into one ALTER command or give them separately;

* If we applied the new default while filling the table, there would be no way to get the existing behavior in one SQL command.

The same reasoning applies in cases that add a column and then manipulate its GENERATED/IDENTITY status in a second sub-command, since the generation expression is really just a kind of default. However, that wasn't very obvious (at least not to me; earlier in the referenced discussion thread I'd thought it was a bug to be fixed). And it certainly wasn't documented. Hence, add documentation, code comments, and a test case to clarify that this behavior is all intentional. In passing, adjust ATExecAddColumn's defaults-related relkind check so that it matches up exactly with ATRewriteTables, instead of being effectively (though not literally) the negated inverse condition. The reasoning can be explained a lot more concisely that way, too (not to mention that the comment now matches the code, which it did not before). Discussion: https://postgr.es/m/10365.1558909428@sss.pgh.pa.us
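The documented behavior, spelled out as a runnable sequence (table contents are illustrative):

    CREATE TABLE transactions (id int);
    INSERT INTO transactions VALUES (1);
    ALTER TABLE transactions
      ADD COLUMN status varchar(30) DEFAULT 'old',
      ALTER COLUMN status SET DEFAULT 'current';
    SELECT id, status FROM transactions;        -- pre-existing row 1 shows 'old'
    INSERT INTO transactions (id) VALUES (2);
    SELECT id, status FROM transactions;        -- new row 2 shows 'current'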
-
Andres Freund authored
The code checking whether an aggregate transition value needs to be reparented into the current context has always only compared the transition return value with the previous transition value by datum, i.e. without regard for NULLness. This normally works, because when the transition function returns NULL (via fcinfo->isnull), it'll return a value that won't be the same as its input value. But there's no hard requirement that that's the case. And it turns out, it's possible to hit this case (see discussion or reproducers), leading to a non-null transition value not being reparented, followed by a crash caused by that. Instead of adding another comparison of NULLness, instead have ExecAggTransReparent() ensure that pergroup->transValue ends up as 0 when the new transition value is NULL. That avoids having to add an additional branch to the much more common cases of the transition function returning the old transition value (which is a pointer in this case), and when the new value is different, but not NULL. In branches since 69c3936a, also deduplicate the reparenting code between the expression evaluation based transitions, and the path for ordered aggregates. Reported-By: Teodor Sigaev, Nikita Glukhov Author: Andres Freund Discussion: https://postgr.es/m/bd34e930-cfec-ea9b-3827-a8bc50891393@sigaev.ru Backpatch: 9.4-, this issue has existed since at least 7.4
-
Michael Paquier authored
This helps integration of extensions with Windows. The following parameters are changed:
- idle_in_transaction_session_timeout (9.6 and newer versions)
- lock_timeout
- statement_timeout
- track_activities
- track_counts
- track_functions
Author: Pascal Legrand Reviewed-by: Amit Kapila, Julien Rouhaud, Michael Paquier Discussion: https://postgr.es/m/1579298868581-0.post@n3.nabble.com Backpatch-through: 9.4
-
- 20 Jan, 2020 1 commit
-