- 23 Oct, 2020 7 commits
-
-
Tom Lane authored
The ExplainCloseGroup arguments for incremental sort usage data didn't match the corresponding ExplainOpenGroup. This only matters for XML-format output, which is probably why we'd not noticed.

Daniel Gustafsson, per bug #16683 from Frits Jalvingh
Discussion: https://postgr.es/m/16683-8005033324ad34e9@postgresql.org
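
For context, the structured EXPLAIN formats require each ExplainOpenGroup() call to be paired with an ExplainCloseGroup() call carrying the same objtype/labelname arguments; XML is the format where a mismatch shows up as unbalanced tags. A minimal sketch (the label strings and the property emitted are illustrative, not the actual explain.c code):

    /* Open and close must pass identical objtype/labelname arguments. */
    ExplainOpenGroup("Incremental Sort Groups", "Incremental Sort Groups", true, es);
    ExplainPropertyInteger("Group Count", NULL, group_count, es);
    ExplainCloseGroup("Incremental Sort Groups", "Incremental Sort Groups", true, es);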
-
Peter Eisentraut authored
The order of AuthenticationGSSContinue and AuthenticationSSPI was swapped, based on the other Authentication* protocol messages being listed in subcode order.
-
Heikki Linnakangas authored
The behavioural change in the -t/--table option happened around 15 years ago, and there seems little point in keeping it around.

Author: Ian Barwick
Discussion: https://www.postgresql.org/message-id/CAB8KJ%3Dh-XALik4M7gv-pX48%3D%2BSPWexfaYwa%2ByTnPwD3DxceXrg%40mail.gmail.com
-
Heikki Linnakangas authored
Thinko in commit 1375422c. EvalPlanQualStart() was mistakenly resetting the parent EState's es_result_relations, when it should have initialized the field in the child EPQ EState it had just created. That was clearly wrong, but it didn't cause any ill effects, because es_result_relations is currently not used after the ExecInit* phase.

Author: Amit Langote
Discussion: https://www.postgresql.org/message-id/CA%2BHiwqFEuq8AAAmxXsTDVZ1r38cHbfYuiPQx_%3DYyKe2DC-6q4A%40mail.gmail.com
-
Michael Paquier authored
This replaces the existing binary search with two perfect hash functions, one for composition and one for decomposition, in the backend code, at the cost of slightly larger binaries there (35kB in libpgcommon_srv.a). Per the measurements done, this improves the speed of recomposition and decomposition by up to 30~40 times for the NFC and NFKC conversions, while all other operations get at least 40% faster. This is not as "good" as what libicu has, but it closes the gap a lot, as per the feedback from Daniel Verite.

The decomposition table remains the same and is still used for the binary search in the frontend code, where we care more about the size of libraries like libpq than about performance, as this gets involved only in code paths related to SCRAM authentication. In consequence, note that the perfect hash function for the recomposition needs to use a new inverse lookup array back to the existing decomposition table. The size of all frontend deliverables remains unchanged, even with --enable-debug, including libpq.

Author: John Naylor
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/CAFBsxsHUuMFCt6-pU+oG-F1==CmEp8wR+O+bRouXWu6i8kXuqA@mail.gmail.com
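
The general shape of such a generated lookup is worth noting: the perfect hash maps a key to a single candidate slot, and because collision-freedom only holds for keys that are in the set, the caller still re-checks the stored key before trusting the value. A hedged sketch with invented names (PerfectHash.pm emits its own):

    #include <stdint.h>

    /* All names here are illustrative, not the generated ones. */
    typedef struct
    {
        uint32_t    codepoint;
        uint32_t    data;
    } norm_entry;

    extern const norm_entry norm_table[];       /* generated table */
    extern int  norm_table_size;
    extern int  norm_hash(uint32_t cp);         /* generated perfect hash: one probe */

    static const norm_entry *
    norm_lookup(uint32_t cp)
    {
        int         h = norm_hash(cp);

        if (h < 0 || h >= norm_table_size)
            return NULL;
        if (norm_table[h].codepoint != cp)      /* keys outside the set can hash anywhere */
            return NULL;
        return &norm_table[h];
    }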
-
Tom Lane authored
DST law changes in Palestine, with a whopping 120 hours' notice. Also some historical corrections for Palestine.
-
Tom Lane authored
There's no functional change at all here, but I'm curious to see whether this change successfully shuts up Coverity's warning about a useless strcmp(), which appeared with the previous update. Discussion: http://mm.icann.org/pipermail/tz/2020-October/029370.html
-
- 22 Oct, 2020 7 commits
-
-
Tom Lane authored
ECPG's PREPARE ... FROM and EXECUTE IMMEDIATE can optionally take the target query as a simple literal, rather than the more usual string-variable reference. This was previously documented as being a C string literal, but that's a lie in one critical respect: you can't write a data double quote as \" in such literals. That's because the lexer is in SQL mode at this point, so it'll parse double-quoted strings as SQL identifiers, within which backslash is not special, so \" ends the literal.

I looked into making this work as documented, but getting the lexer to switch behaviors at just the right point is somewhere between very difficult and impossible. It's not really worth the trouble, because these cases are next to useless: if you have a fixed SQL statement to execute or prepare, you might as well write it as a direct EXEC SQL, saving the messiness of converting it into a string literal and gaining the opportunity for compile-time SQL syntax checking. Instead, let's just document (and test) the workaround of writing a double quote as an octal escape (\042) in such cases.

There's no code behavioral change here, so in principle this could be back-patched, but it's such a niche case I doubt it's worth the trouble.

Per report from 1250kv.
Discussion: https://postgr.es/m/673825.1603223178@sss.pgh.pa.us
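
A minimal ECPG sketch of that workaround (statement and table names invented): the literal is parsed in SQL mode, so a double quote meant as data is written as the octal escape \042 rather than \".

    /* Illustrative only: \042 stands for a double quote inside the SQL-mode literal. */
    EXEC SQL PREPARE stmt FROM "SELECT \042MixedCaseColumn\042 FROM some_table";
    EXEC SQL EXECUTE IMMEDIATE "CREATE TABLE \042Quoted Name\042 (id int)";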
-
Tom Lane authored
If you write the literal 'abc''def' in an EXEC SQL command, that will come out the other end as 'abc'def', triggering a syntax error in the backend. Likewise, "abc""def" is reduced to "abc"def" which is wrong syntax for a quoted identifier. The cause is that the lexer thinks it should emit just one quote mark, whereas what it really should do is keep the string as-is. Add some docs and test cases, too.

Although this seems clearly a bug, I fear users wouldn't appreciate changing it in minor releases. Some may well be working around it by applying an extra doubling of affected quotes, as for example sql/dyntest.pgc has been doing.

Per investigation of a report from 1250kv, although this isn't exactly what he/she was on about.
Discussion: https://postgr.es/m/673825.1603223178@sss.pgh.pa.us
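
To make the failure mode concrete, a hedged sketch (table and column invented) of the kind of statement that was mangled before this fix:

    /* Before the fix, the doubled quote collapsed to one, so the server saw
     * VALUES ('O'Reilly') and reported a syntax error. */
    EXEC SQL INSERT INTO authors (name) VALUES ('O''Reilly');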
-
Robert Haas authored
Mark Dilger, with a couple of stray semicolons removed by me. Discussion: http://postgr.es/m/2A7DA1A8-C4AA-43DF-A985-3CA52F4DC775@enterprisedb.com
-
Tom Lane authored
The check for whether to complain about not having an old connection to get parameters from was seriously out of date: it had not been rethought when we invented connstrings, nor when we invented the -reuse-previous option. Replace it with a check that throws an error if reuse-previous is active and we lack an old connection to reuse. While that doesn't move the goalposts very far in terms of easing reconnection after a server crash, at least it's consistent.

If the user specifies a connstring plus additional parameters (which is invalid per the documentation), the extra parameters were silently ignored. That seems like it could be really confusing, so let's throw a syntax error instead.

Teach the connstring code path to re-use the old connection's password in the same cases as the old-style-syntax code path would, i.e., if we are reusing parameters and the values of username, host/hostaddr, and port are not being changed. Document this behavior, too, since it was unmentioned before. Also simplify the implementation a bit, giving rise to two new and useful properties: if there's a "password=xxx" in the connstring, we'll use it rather than ignore it, and by default (i.e., except with --no-password) we will prompt for a password if the re-used password or connstring password doesn't work. The previous code just failed if the re-used password didn't work.

Given the paucity of field complaints about these issues, I don't think that they rise to the level of back-patchable bug fixes, and in any case they might represent undesirable behavior changes in minor releases. So no back-patch.

Discussion: https://postgr.es/m/235210.1603321144@sss.pgh.pa.us
-
Robert Haas authored
Mark Dilger, reviewed by Peter Geoghegan, Andres Freund, Álvaro Herrera, Michael Paquier, Amul Sul, and by me. Some last-minute cosmetic revisions by me. Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com
-
Peter Eisentraut authored
-
David Rowley authored
There are a handful of places where we called list_delete_ptr() to remove some element from a List. In many of these places we know, or can know with very little additional effort, the index of the ListCell that we need to remove.

Here we change all of those places to instead use one of list_delete_nth_cell(), foreach_delete_current() or list_delete_last(). Each of these saves us from having to iterate over the list to search for the element to remove by its pointer value.

There are some small performance gains to be had by doing this, but in the general case none of these lists are likely to be very large, so the lookup was probably never that expensive anyway. However, some of the calls are in fairly hot code paths, e.g. process_equivalence(), so any small gains there are useful.

Author: Zhijie Hou and David Rowley
Discussion: https://postgr.es/m/b3517353ec7c4f87aa560678fbb1034b@G08CNEXMBPEKD05.g08.fujitsu.local
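
A hedged sketch of the before/after pattern, using invented list contents ("rels", RelInfo, target_relid are hypothetical) around the List API routines named above:

    ListCell   *lc;

    /* Before: we already hold the cell, yet list_delete_ptr() rescans the list. */
    foreach(lc, rels)
    {
        RelInfo    *rel = (RelInfo *) lfirst(lc);

        if (rel->relid == target_relid)
        {
            rels = list_delete_ptr(rels, rel);   /* second O(n) walk to find 'rel' again */
            break;
        }
    }

    /* After: delete the cell we are standing on, no extra search. */
    foreach(lc, rels)
    {
        RelInfo    *rel = (RelInfo *) lfirst(lc);

        if (rel->relid == target_relid)
        {
            rels = foreach_delete_current(rels, lc);
            break;
        }
    }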
-
- 21 Oct, 2020 6 commits
-
-
Tom Lane authored
psql's \connect claims to be able to re-use previous connection parameters, but in fact it only re-uses the database name, user name, host name (and possibly hostaddr, depending on version), and port. This is problematic for assorted use cases. Notably, pg_dump[all] emits "\connect databasename" commands which we would like to have re-use all other parameters. If such a script is loaded in a psql run that initially had "-d connstring" with some non-default parameters, those other parameters would be lost, potentially causing connection failure. (Thus, this is the same kind of bug addressed in commits a45bc8a4 and 8e5793ab, although the details are much different.)

To fix, redesign do_connect() so that it pulls out all properties of the old PGconn using PQconninfo(), and then replaces individual properties in that array. In the case where we don't wish to re-use anything, get libpq's default settings using PQconndefaults() and replace entries in that, so that we don't need different code paths for the two cases.

This does result in an additional behavioral change for cases where the original connection parameters allowed multiple hosts, say "psql -h host1,host2", and the \connect request allows re-use of the host setting. Because the previous coding relied on PQhost(), it would only permit reconnection to the same host originally selected. Although one can think of scenarios where that's a good thing, there are others where it is not. Moreover, that behavior doesn't seem to meet the principle of least surprise, nor was it documented; nor is it even clear it was intended, since that coding long pre-dates the addition of multi-host support to libpq. Hence, this patch is content to drop it and re-use the host list as given.

Per Peter Eisentraut's comments on bug #16604. Back-patch to all supported branches.

Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org
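
A hedged sketch of the PQconninfo()-based approach described above, using only public libpq calls; the function name, the single overridden parameter, and the fixed-size arrays (assumed large enough for libpq's option list) are illustrative simplifications:

    #include <string.h>
    #include <libpq-fe.h>

    /* Copy the old connection's parameters, overriding just the database name. */
    static PGconn *
    reconnect_with_dbname(PGconn *oldconn, const char *new_dbname)
    {
        PQconninfoOption *opts = oldconn ? PQconninfo(oldconn) : PQconndefaults();
        const char *keywords[64];
        const char *values[64];
        int         n = 0;
        PGconn     *newconn;

        if (opts == NULL)
            return NULL;

        for (PQconninfoOption *o = opts; o->keyword != NULL; o++)
        {
            const char *val = o->val;

            if (strcmp(o->keyword, "dbname") == 0)
                val = new_dbname;           /* the one property we replace here */
            if (val == NULL || val[0] == '\0')
                continue;                   /* skip unset options */
            keywords[n] = o->keyword;
            values[n] = val;
            n++;
        }
        keywords[n] = NULL;
        values[n] = NULL;

        newconn = PQconnectdbParams(keywords, values, 0);
        PQconninfoFree(opts);
        return newconn;
    }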
-
Alvaro Herrera authored
Should cause tests to be a bit faster
-
Tom Lane authored
Theoretically one could go into src/test/thread and build/run this program there. In practice, that hasn't worked since 96bf88d5, and probably much longer on some platforms (likely including just the sort of hoary leftovers where this test might be of interest). While it wouldn't be too hard to repair the breakage, the fact that nobody has noticed for two years shows that there is zero usefulness in maintaining this build pathway. Let's get rid of it and decree that thread_test.c is *only* meant to be built/used in configure.

Given that decision, it makes sense to put thread_test.c under config/ and get rid of src/test/thread altogether, so that's what I did.

In passing, update src/test/README, which had been ignored by some not-so-recent additions of subdirectories.

Discussion: https://postgr.es/m/227659.1603041612@sss.pgh.pa.us
-
Peter Eisentraut authored
Commit 8dace66e added #ifdefs for a number of errno symbols because they were not present on Windows. Later, commit 125ad539 added replacement #defines for some of those symbols. So some of the changes from the first commit are made dead code by the second commit and can now be removed. Discussion: https://www.postgresql.org/message-id/flat/6dee8574-b0ad-fc49-9c8c-2edc796f0033@2ndquadrant.com
-
Peter Eisentraut authored
After de8feb1f, some warnings remained that were only visible when using GCC on Windows. Fix those as well. Note that the ecpg test source files don't use the full pg_config.h, so we can't use pg_funcptr_t there but have to do it the long way.
-
Michael Paquier authored
80f8eb79 has added to the normalization quick check headers some code generated by PerfectHash.pm that is incompatible with the settings of gitattributes for this repository, as whitespace followed a set of tabs for the first element of a line in the table. Instead of adding a new exception to gitattributes, rework the generated format so that right padding with spaces is used instead of left padding. This keeps the generated table in a readable shape with its set of columns, making an update of gitattributes unnecessary.

Reported-by: Peter Eisentraut
Author: John Naylor
Discussion: https://postgr.es/m/d601b3b5-a3c7-5457-2f84-3d6513d690fc@2ndquadrant.com
-
- 20 Oct, 2020 2 commits
-
-
Alvaro Herrera authored
More precisely, correctly handle the ONLY flag indicating not to recurse. This was implemented in 86f57594 by recursing in trigger.c, but that's the wrong place; use ATSimpleRecursion instead, which behaves properly. However, because legacy inheritance has never recursed in that situation, make sure to do that only for new-style partitioning. I noticed this problem while testing a fix for another bug in the vicinity. This has been wrong all along, so backpatch to 11. Discussion: https://postgr.es/m/20201016235925.GA29829@alvherre.pgsql
-
Amit Kapila authored
Change the attribute 'name' to 'slot_name' in the pg_stat_replication_slots view to make it clearer; that way we will be consistent with other places like the pg_stat_wal_receiver view where we display the same attribute. In passing, fix a typo in one of the macros in the related code.

Bump the catversion as we have modified the name in the catalog as well.

Reported-by: Noriyoshi Shinoda
Author: Noriyoshi Shinoda
Reviewed-by: Sawada Masahiko and Amit Kapila
Discussion: https://postgr.es/m/CA+fd4k5_pPAYRTDrO2PbtTOe0eHQpBvuqmCr8ic39uTNmR49Eg@mail.gmail.com
-
- 19 Oct, 2020 14 commits
-
-
Tom Lane authored
When told to process all databases, clusterdb, reindexdb, and vacuumdb would reconnect by replacing their --maintenance-db parameter with the name of the target database. If that parameter is a connstring (which has been allowed for a long time, though we failed to document that before this patch), we'd lose any other options it might specify, for example SSL or GSS parameters, possibly resulting in failure to connect. Thus, this is the same bug as commit a45bc8a4 fixed in pg_dump and pg_restore. We can fix it in the same way, by using libpq's rules for handling multiple "dbname" parameters to add the target database name separately. I chose to apply the same refactoring approach as in that patch, with a struct to handle the command line parameters that need to be passed through to connectDatabase. (Maybe someday we can unify the very similar functions here and in pg_dump/pg_restore.)

Per Peter Eisentraut's comments on bug #16604. Back-patch to all supported branches.

Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org
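
The libpq rule being relied on: with expand_dbname set, only the first "dbname" entry is expanded as a connection string, and a later "dbname" entry simply overrides the database name. A hedged sketch (function name and the trimmed-down option handling are illustrative):

    #include <libpq-fe.h>

    /* Connect using a maintenance-db connstring, but to a specific target database. */
    static PGconn *
    connect_to_target(const char *maintenance_conninfo, const char *target_db)
    {
        const char *keywords[4];
        const char *values[4];
        int         n = 0;

        keywords[n] = "dbname";                 /* first dbname: expanded as a connstring */
        values[n++] = maintenance_conninfo;     /* e.g. "sslmode=require host=db1" */
        keywords[n] = "dbname";                 /* later dbname: overrides only the database name */
        values[n++] = target_db;
        keywords[n] = NULL;
        values[n] = NULL;

        return PQconnectdbParams(keywords, values, 1 /* expand_dbname */);
    }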
-
Tom Lane authored
Since commit 913bbd88, check_sql_fn_retval() can either insert type coercion steps in-line in the Query that produces the SQL function's results, or generate a new top-level Query to perform the coercions, if modifying the Query's output in-place wouldn't be safe. However, it appears that the latter case has never actually worked, because the code tried to inject the new Query back into the query list it was passed ... which is not the list that will be used for later processing when we execute the SQL function "normally" (without inlining it). So we ended up with no coercion happening at run-time, leading to wrong results or crashes depending on the datatypes involved.

While the regression tests look like they cover this area well enough, through a huge bit of bad luck all the test cases that exercise the separate-Query path were checking either inline-able cases (which accidentally didn't have the bug) or cases that are no-ops at runtime (e.g., varchar to text), so that the failure to perform the coercion wasn't obvious. The fact that the cases that don't work weren't allowed at all before v13 probably contributed to not noticing the problem sooner, too.

To fix, get rid of the separate "flat" list of Query nodes and instead pass the real two-level list that is going to be used later. I chose to make the same change in check_sql_fn_statements(), although that has no actual bug, just so that we don't need that data structure at all.

This is an API change, as evidenced by the adjustments needed to callers outside functions.c. That's a bit scary to be doing in a released branch, but so far as I can tell from a quick search, there are no outside callers of these functions (and they are sufficiently specific to our semantics for SQL-language functions that it's not apparent why any extension would need to call them). In any case, v13 already changed the API of check_sql_fn_retval() compared to prior branches.

Per report from pinker. Back-patch to v13 where this code came in.

Discussion: https://postgr.es/m/1603050466566-0.post@n3.nabble.com
-
Heikki Linnakangas authored
- Misc grammar and punctuation fixes.
- Stylistic cleanup: use spaces between function arguments and JSON fields in examples. For example "foo(a,b)" -> "foo(a, b)". Add semicolon after last END in a few PL/pgSQL examples that were missing them.
- Make the sentence that talked about the "..." and ".." operators clearer, by not ending the sentence with "..", which made it look the same as "...".
- Fix syntax description for HAVING: HAVING conditions cannot be repeated.

Patch by Justin Pryzby, per Yaroslav Schekin's report. Backpatch to all supported versions, to the extent that the patch applies easily.

Discussion: https://www.postgresql.org/message-id/20201005191922.GE17626%40telsasoft.com
-
Heikki Linnakangas authored
ALTER SEQUENCE RESTART was made transactional in commit 3d79013b. Backpatch to v10, where that was introduced. Patch by Justin Pryzby, per Yaroslav Schekin's report. Discussion: https://www.postgresql.org/message-id/20201005191922.GE17626%40telsasoft.com
-
Heikki Linnakangas authored
The output for this query changed in commit 4e2477b7b8. Backport to 9.6 like that commit. Patch by Justin Pryzby, per Yaroslav Schekin's report. Discussion: https://www.postgresql.org/message-id/20201005191922.GE17626%40telsasoft.com
-
Heikki Linnakangas authored
Commit 028350f6 changed its behavior from "at most" to "exactly", but forgot to update the documentation. Backpatch to 9.6. Patch by Justin Pryzby, per Yaroslav Schekin's report. Discussion: https://www.postgresql.org/message-id/20201005191922.GE17626%40telsasoft.com
-
Magnus Hagander authored
Author: Daniel Gustafsson <daniel@yesql.se> Discussion: https://postgr.es/m/A05874AE-8771-4C61-A24E-0B6249B8F3C2@yesql.se
-
Heikki Linnakangas authored
The extra indirection needed to access its members via its enclosing ResultRelInfo seems pointless. Move all the fields from PartitionRoutingInfo to ResultRelInfo.

Author: Amit Langote
Reviewed-by: Alvaro Herrera
Discussion: https://www.postgresql.org/message-id/CA%2BHiwqFViT47Zbr_ASBejiK7iDG8%3DQ1swQ-tjM6caRPQ67pT%3Dw%40mail.gmail.com
-
Heikki Linnakangas authored
Store the tuple conversion map to convert a tuple from a child table's format to the root format in a new ri_ChildToRootMap field in ResultRelInfo. It is initialized if transition tuple capture for FOR STATEMENT triggers or INSERT tuple routing on a partitioned table is needed.

Previously, ModifyTable kept the maps in the per-subplan ModifyTableState->mt_per_subplan_tupconv_maps array, or when tuple routing was used, in ResultRelInfo->ri_Partitioninfo->pi_PartitionToRootMap. The new field replaces both of those.

Now that the child-to-root tuple conversion map is always available in ResultRelInfo (when needed), remove the TransitionCaptureState.tcs_map field. The callers of Exec*Trigger() functions no longer need to set or save it, which is much less confusing and bug-prone. Also, as a future optimization, this will allow us to delay creating the map for a given result relation until the relation is actually processed during execution.

Author: Amit Langote
Discussion: https://www.postgresql.org/message-id/CA%2BHiwqHtCWLdK-LO%3DNEsvOdHx%2B7yv4mE_zYK0i3BH7dXb-wxog%40mail.gmail.com
-
Heikki Linnakangas authored
When executing DML on a partitioned table or on a table with inheritance children, statement-level triggers must be fired against the table given in the original statement. The code to look that up was a bit messy and duplicative. Commit 501ed02c added a helper function, getASTriggerResultRelInfo() (later renamed to getTargetResultRelInfo()), for it, but for some reason it was only used when firing AFTER STATEMENT triggers, and the code to fire BEFORE STATEMENT triggers duplicated the logic.

Determine the target relation in ExecInitModifyTable(), and set it always in ModifyTableState. Code that used to call getTargetResultRelInfo() can now use ModifyTableState->rootResultRelInfo directly.

Discussion: https://www.postgresql.org/message-id/CA%2BHiwqFViT47Zbr_ASBejiK7iDG8%3DQ1swQ-tjM6caRPQ67pT%3Dw%40mail.gmail.com
-
Peter Eisentraut authored
In shm_mq_receive(), a huge payload could trigger an unjustified "invalid memory alloc request size" error due to the way the buffer size is increased. Add error checks (documenting the upper limit) and avoid the error by limiting the allocation size to MaxAllocSize. Author: Markus Wanner <markus.wanner@2ndquadrant.com> Discussion: https://www.postgresql.org/message-id/flat/3bb363e7-ac04-0ac4-9fe8-db1148755bfa%402ndquadrant.com
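
A hedged sketch of the growth-and-clamp idea described above; the names, the starting size, and the stand-in limit constant are illustrative rather than the actual shm_mq.c code (which also adds explicit checks for payloads exceeding the limit):

    #include <stddef.h>

    #define MAX_ALLOC_SIZE ((size_t) 0x3fffffff)    /* illustrative stand-in for MaxAllocSize */

    /* Double the buffer length until it covers 'needed', clamped to the allocator limit. */
    static size_t
    grow_buflen(size_t buflen, size_t needed)
    {
        if (buflen == 0)
            buflen = 8192;                  /* arbitrary starting size for the sketch */
        while (buflen < needed && buflen <= MAX_ALLOC_SIZE)
            buflen *= 2;
        if (buflen > MAX_ALLOC_SIZE)
            buflen = MAX_ALLOC_SIZE;        /* avoid "invalid memory alloc request size" */
        return buflen;
    }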
-
Peter Eisentraut authored
The SQL file was using a mix of tabs and spaces inconsistently. Convert all to spaces and adjust indentation accordingly.
-
Amit Kapila authored
The rules for choosing the number of parallel workers for a parallel vacuum operation were not clearly specified.

Reported-by: Peter Eisentraut
Author: Amit Kapila
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/36aa8aea-61b7-eb3c-263b-648e0cb117b7@2ndquadrant.com
-
Michael Paquier authored
When allocating an EVP context, it was possible to leak some memory allocated directly by OpenSSL, which PostgreSQL lost track of if initialization of the newly-allocated context failed. The cleanup can be done with EVP_MD_CTX_destroy().

Note that EVP APIs exist since OpenSSL 0.9.7 and we have in the tree equivalent implementations for older versions since ce9b75db (code removed with 9b7cd59a as of 10~). However, in 9.5 and 9.6, the existing code makes use of EVP_MD_CTX_destroy() and EVP_MD_CTX_create() without an equivalent implementation when building the tree with OpenSSL 0.9.6 or older, meaning that this code is in reality broken with such versions since it got introduced in e2838c5. As we have heard no complaints about that, it does not seem worth bothering with in 9.5 and 9.6, so I have left that out for simplicity.

Author: Michael Paquier
Discussion: https://postgr.es/m/20201015072212.GC2305@paquier.xyz
Backpatch-through: 9.5
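
A hedged sketch of the leak-free pattern using the generic OpenSSL EVP digest API; the surrounding function and the choice of SHA-256 are illustrative, not the PostgreSQL code itself:

    #include <openssl/evp.h>

    /* Create and initialize a digest context, freeing it if initialization fails. */
    static EVP_MD_CTX *
    create_sha256_ctx(void)
    {
        EVP_MD_CTX *ctx = EVP_MD_CTX_create();

        if (ctx == NULL)
            return NULL;

        if (EVP_DigestInit_ex(ctx, EVP_sha256(), NULL) <= 0)
        {
            /* Without this, memory allocated by OpenSSL for the context leaks. */
            EVP_MD_CTX_destroy(ctx);
            return NULL;
        }
        return ctx;
    }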
-
- 18 Oct, 2020 3 commits
-
-
David Rowley authored
Given a query with enough joins, it was possible that the query planner, after multiplying the row estimates by the join selectivity, would produce an estimated number of rows exceeding the limits of the double data type and becoming infinite. To give an indication of how extreme a case is required to hit this, the particular example case reported required 379 joins to a table without any statistics, which resulted in 1.0/DEFAULT_NUM_DISTINCT being used for the join selectivity. This eventually caused the row estimates to go infinite and resulted in an assert failure in initial_cost_mergejoin(), where the infinite row estimate was multiplied by an outerstartsel of 0.0, resulting in NaN. The failing assert verified that NaN <= Inf, which is false.

To get around this we use clamp_row_est() to cap row estimates at a maximum of 1e100. This value is thought to be low enough that costs derived from it would remain within the bounds of what the double type can represent.

Aside from fixing the failing Assert, this also has the added benefit of making it so add_path() will still receive proper numerical values as costs, which will allow it to make more sane choices when determining the cheaper path in extreme cases such as the one described above.

Additionally, we also get rid of the isnan() checks in the join costing functions. The actual case which originally triggered those checks to be added in the first place never made it to the mailing lists. It seems likely that the new code being added to clamp_row_est() will make those checks redundant, so just remove them.

The fairly harmless assert failure problem does also exist in the backbranches; however, a more minimalistic fix will be applied there.

Reported-by: Onder Kalaci
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/DM6PR21MB1211FF360183BCA901B27F04D80B0@DM6PR21MB1211.namprd21.prod.outlook.com
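
A hedged sketch of what capping a row estimate at 1e100 looks like; this mirrors the description above rather than the exact code in costsize.c, and the function name is deliberately different to signal that:

    #include <math.h>

    #define MAXIMUM_ROWCOUNT 1e100

    /* Force a row-count estimate into a sane range: finite, at least 1, integral. */
    static double
    clamp_row_estimate(double nrows)
    {
        if (isnan(nrows) || nrows > MAXIMUM_ROWCOUNT)
            nrows = MAXIMUM_ROWCOUNT;       /* also catches +Infinity */
        else if (nrows <= 1.0)
            nrows = 1.0;
        else
            nrows = rint(nrows);
        return nrows;
    }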
-
Tom Lane authored
According to Microsoft's documentation, 2.2 has been the current version since Windows 98 or so. Moreover, that's what the Postgres backend has been requesting since 2004 (cf commit 4cdf51e6). So there seems no reason for libpq to keep asking for 1.1. Bring thread_test along, too, so that we're uniformly asking for 2.2 in all our WSAStartup calls. It's not clear whether there's any point in back-patching this, so for now I didn't. Discussion: https://postgr.es/m/132799.1602960277@sss.pgh.pa.us
-
Tom Lane authored
Failure to write data to a large object during restore led to an ugly and uninformative error message. To add insult to injury, it then fatal'd out, where other SQL-level errors usually result in pressing on.

Report the underlying error condition, rather than just giving not-very-useful byte counts, and use warn_or_exit_horribly() so as to adhere to pg_restore's general policy about whether to continue or not. Also recognize that lo_write() returns int not size_t.

Per report from Justin Pryzby, though I didn't use his patch. Given the lack of comparable complaints, I'm not sure this is worth back-patching.

Discussion: https://postgr.es/m/20201018010232.GF9241@telsasoft.com
-
- 17 Oct, 2020 1 commit
-
-
Tom Lane authored
The Windows documentation insists that every WSAStartup call should have a matching WSACleanup call. However, if that ever had actual relevance, it wasn't in this century. Every remotely-modern Windows kernel is capable of cleaning up when a process exits without doing that, and must be so to avoid resource leaks in case of a process crash. Moreover, Postgres backends have done WSAStartup without WSACleanup since commit 4cdf51e6 in 2004, and we've never seen any indication of a problem with that.

libpq's habit of doing WSAStartup during connection start and WSACleanup during shutdown is also rather inefficient, since a series of non-overlapping connection requests leads to repeated, quite expensive DLL unload/reload cycles. We document a workaround for that (having the application call WSAStartup for itself), but that's just a kluge. It's also worth noting that it's far from uncommon for applications to exit without doing PQfinish, and we've not heard reports of trouble from that either.

However, the real reason for acting on this is that recent experiments by Alexander Lakhin suggest that calling WSACleanup during PQfinish might be triggering the symptom we occasionally see that a process using libpq fails to emit expected stdio output. Therefore, let's change libpq so that it calls WSAStartup only once per process, during the first connection attempt, and never calls WSACleanup at all.

While at it, get rid of the only other WSACleanup call in our code tree, in pg_dump/parallel.c; that presumably is equally useless.

If this proves to suppress the fairly-common ecpg test failures we see on Windows, I'll back-patch, but for now let's just do it in HEAD and see what happens.

Discussion: https://postgr.es/m/ac976d8c-03df-d6b8-025c-15a2de8d9af1@postgrespro.ru
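
A hedged sketch of the once-per-process pattern described here, also requesting Winsock 2.2 per the commit two days earlier; the function and flag names are invented, and the thread-safety that real code would need is omitted for brevity:

    #ifdef WIN32
    #include <stdbool.h>
    #include <winsock2.h>

    static bool wsa_started = false;

    /* Called at the start of each connection attempt; initializes Winsock once. */
    static bool
    ensure_wsa_startup(void)
    {
        if (!wsa_started)
        {
            WSADATA     wsaData;

            if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0)  /* ask for Winsock 2.2 */
                return false;
            wsa_started = true;
        }
        /* Deliberately no WSACleanup anywhere: the OS reclaims it at process exit. */
        return true;
    }
    #endif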
-