- 04 Jul, 2019 4 commits
-
-
Peter Eisentraut authored
-
Michael Paquier authored
This is a follow-up refactoring after 09ec55b9 and b6742117, which have shown that the encoding and decoding routines used by SCRAM have a poor interface when it comes to checking for buffer overflows. This adds an extra argument to each routine, the length of the result buffer, which is used for overflow checks when encoding or decoding an input string. The original idea comes from Tom Lane. As a result, the encoding routine can now fail, so all its callers are adjusted to generate proper error messages in case of problems. On failure, the result buffer gets zeroed. Author: Michael Paquier Reviewed-by: Daniel Gustafsson Discussion: https://postgr.es/m/20190623132535.GB1628@paquier.xyz
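A rough caller-side sketch of the length-aware interface described above. The prototypes are modeled on src/common/base64.h as I understand it after this change (a return value of -1 signals failure); encode_salt_sketch is a hypothetical helper, not the committed code.

```c
#include <stddef.h>

/* assumed prototypes, modeled on src/common/base64.h */
extern int pg_b64_enc_len(int srclen);
extern int pg_b64_encode(const char *src, int len, char *dst, int dstlen);

static char *
encode_salt_sketch(const char *salt, int saltlen, char *dst, int dstlen)
{
    int         encoded_len;

    /* refuse up front if the result cannot possibly fit */
    if (pg_b64_enc_len(saltlen) >= dstlen)
        return NULL;

    encoded_len = pg_b64_encode(salt, saltlen, dst, dstlen);
    if (encoded_len < 0)
        return NULL;            /* encoding failed; caller raises an error */

    dst[encoded_len] = '\0';
    return dst;
}
```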
-
Michael Paquier authored
The last set of scenarios did an initialization of nodes followed by an extra command to set up the authentication policy with pg_regress --config-auth. This configuration step can be integrated directly using the option auth_extra from PostgresNode::init when initializing the node, saving one extra command. On Windows, this also tightens pg_ident.conf for the SSPI user mapping by removing the entry for the OS user running the test, which is not needed anyway. Note that IPC::Run mishandles double quotes, so the restore user name is changed to accommodate that. This was already done in the test as a later step, but not in a consistent way, causing the switch to auth_extra to fail. Found while reviewing ca129e58. Discussion: https://postgr.es/m/20190703062024.GD3084@paquier.xyz
-
David Rowley authored
This changes various places that used appendPQExpBuffer where appendPQExpBufferStr would suffice, and likewise for appendStringInfo and appendStringInfoString. This is really just a stylistic improvement, but there are also small performance gains to be had from doing this. Discussion: http://postgr.es/m/CAKJS1f9P=M-3ULmPvr8iCno8yvfDViHibJjpriHU8+SXUgeZ=w@mail.gmail.com
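A small illustration of the substitution, using the libpq PQExpBuffer API from pqexpbuffer.h; the query text and function here are made up for the example.

```c
#include "pqexpbuffer.h"

static void
build_query(PQExpBuffer buf, const char *relname)
{
    /*
     * Formerly: appendPQExpBuffer(buf, "SELECT pg_relation_size(");
     * With no % escapes the format-string machinery is pure overhead,
     * so a plain string append is both clearer and a little faster.
     */
    appendPQExpBufferStr(buf, "SELECT pg_relation_size(");

    /* Keep appendPQExpBuffer where formatting is actually needed. */
    appendPQExpBuffer(buf, "'%s')", relname);
}
```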
-
- 03 Jul, 2019 8 commits
-
-
Tom Lane authored
A function that is declared to return a named composite type must return tuple datums that are physically marked as having that type. The plpgsql code path that allowed directly returning an expanded-record datum forgot to check that, so that an expanded record marked as type RECORDOID could be returned if it had a physically-compatible tupdesc. This'd be harmless, I think, if the record value never escaped the current session --- but it's possible for it to get stored into a table, and then subsequent sessions can't interpret the anonymous record type. Fix by flattening the record into a tuple datum and overwriting its type/typmod fields, if its declared type doesn't match the function's declared type. (In principle it might be possible to just change the expanded record's stored type ID info, but there are enough tricky consequences that I didn't want to mess with that, especially not in a back-patched bug fix.) Per bug report from Steve Rogerson. Back-patch to v11 where the bug was introduced. Discussion: https://postgr.es/m/cbaecae6-7b87-584e-45f6-4d047b92ca2a@yewtc.demon.co.uk
-
Tom Lane authored
In verbose mode, listTables() now emits a "Persistence" column showing whether the table/index/view/etc is permanent, temporary, or unlogged. David Fetter, reviewed by Fabien Coelho and Rafia Sabih Discussion: https://postgr.es/m/20190423005642.GZ28936@fetter.org
-
David Rowley authored
d4c3a156 added code to remove columns that were not part of a table's PRIMARY KEY constraint from the GROUP BY clause when all the primary key columns were present in the group by. This is fine to do since we know that there will only be one row per group coming from this relation. However, the logic failed to consider inheritance parent relations. These can have child relations without a primary key, but even if they did, they could duplicate one of the parent's rows or one from another child relation. In this case, those additional GROUP BY columns are required. Fix this by disabling the optimization for inheritance parent tables. In v11 and beyond, partitioned tables are fine since partitions cannot overlap and before v11 partitioned tables could not have a primary key. Reported-by: Manuel Rigger Discussion: http://postgr.es/m/CA+u7OA7VLKf_vEr6kLF3MnWSA9LToJYncgpNX2tQ-oWzYCBQAw@mail.gmail.com Backpatch-through: 9.6
-
Etsuro Fujita authored
Previously, in the loop in postgresAcquireSampleRowsFunc() to iterate fetching rows from a given remote table, we redundantly 1) determined the fetch size by parsing the table's server/table-level options and then 2) constructed the fetch command; remove that redundancy. Author: Etsuro Fujita Reviewed-by: Julien Rouhaud Discussion: https://postgr.es/m/CAPmGK17_urk9qkLV65_iYMFw64z5qhdfhY=tMVV6Jg4KNYx8+w@mail.gmail.com
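A minimal sketch of the shape of the reorganization, using hypothetical helper names (lookup_fetch_size, fetch_next_batch) rather than the actual postgres_fdw code: the option lookup and the FETCH command construction happen once, before the row-fetching loop.

```c
#include <stdio.h>
#include <stdbool.h>

extern int  lookup_fetch_size(void *server, void *table);   /* hypothetical */
extern bool fetch_next_batch(void *conn, const char *sql);  /* hypothetical */

static void
acquire_sample_rows_sketch(void *conn, void *server, void *table)
{
    char        fetch_sql[64];
    int         fetch_size = lookup_fetch_size(server, table); /* once, not per iteration */

    snprintf(fetch_sql, sizeof(fetch_sql), "FETCH %d FROM c1", fetch_size);

    while (fetch_next_batch(conn, fetch_sql))
    {
        /* process the fetched batch ... */
    }
}
```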
-
Michael Meskes authored
second time. Author: "Zhang, Jie" <zhangjie2@cn.fujitsu.com>
-
Michael Meskes authored
an int variable. Author: Yang Xiao <YangX92@hotmail.com>
-
Michael Meskes authored
-
- 02 Jul, 2019 8 commits
-
-
Peter Eisentraut authored
Author: Alexey Kondratov <a.kondratov@postgrespro.ru>
-
Peter Eisentraut authored
These are no longer needed/allowed with the new logging API.
-
Tom Lane authored
Commit 4f3b38fe supposed that complete_from_const() is equivalent to the one-element-list case of complete_from_list(), but that's not really true at all. complete_from_const() supposes that the completion is certain enough to justify wiping out whatever the user typed, while complete_from_list() will only provide completions that match the word-so-far. In practice, given the lame parsing technology used by tab-complete.c, it's fairly hard to believe that we're *ever* certain enough about a completion to justify auto-correcting user input that doesn't match. Hence, remove the inappropriate unification of the two cases. As things now stand, complete_from_const() is used only for the situation where we have no matches and we need to keep readline from applying its default complete-with-file-names behavior. This (mis?) behavior actually exists much further back, but I'm hesitant to change it in released branches. It's not too late for v12, though, especially seeing that the aforesaid commit is new in v12. Per gripe from Ken Tanzer. Discussion: https://postgr.es/m/CAD3a31XpXzrZA9TT3BqLSHghdTK+=cXjNCE+oL2Zn4+oWoc=qA@mail.gmail.com
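A standalone sketch (not the actual tab-complete.c code) of the semantic difference the commit restores: a list completer only offers candidates that match what has been typed so far, while a "const" completer returns its text unconditionally, effectively overriding the user's input.

```c
#include <string.h>
#include <stdlib.h>

static char *
complete_from_list_sketch(const char *text, const char *const *candidates)
{
    for (; *candidates; candidates++)
    {
        if (strncmp(*candidates, text, strlen(text)) == 0)
            return strdup(*candidates); /* only prefix matches qualify */
    }
    return NULL;                        /* no match: offer nothing */
}

static char *
complete_from_const_sketch(const char *text, const char *constant)
{
    (void) text;                        /* the input is ignored entirely */
    return strdup(constant);
}
```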
-
Tom Lane authored
Don't think that the context "UPDATE tab SET var =" is a GUC-setting command. If we have "SET var =" but the "var" is not a known GUC variable, don't offer any completions. The most likely explanation is that we've misparsed the context and it's not really a GUC-setting command. Per gripe from Ken Tanzer. Back-patch to 9.6. The issue exists further back, but before 9.6 the code looks very different and it doesn't actually know whether the "var" name matches anything, so I desisted from trying to fix it. Discussion: https://postgr.es/m/CAD3a31XpXzrZA9TT3BqLSHghdTK+=cXjNCE+oL2Zn4+oWoc=qA@mail.gmail.com
-
Tom Lane authored
The previous rule was "primary key (if any) first, then other unique indexes in name order, then all other indexes in name order". But the preference for unique indexes seems a bit obsolete since the introduction of exclusion constraints. It's no longer the case that unique indexes are the only ones that constrain what data can be in the table, and it's hard to see what other rationale there is for separating out unique indexes. Other new features like the possibility for some indexes to be INVALID (hence, not constraining anything) make this even shakier. Hence, simplify the sort order to be "primary key (if any) first, then all other indexes in name order". No documentation change, since this was never documented anyway. A couple of existing regression test cases change output, though. Discussion: https://postgr.es/m/14422.1561474929@sss.pgh.pa.us
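The ordering change itself lives in the catalog query that psql's describe.c issues (an ORDER BY clause), but the new rule can be restated as a comparator; the following is purely illustrative, not the committed code.

```c
#include <string.h>
#include <stdbool.h>

typedef struct IndexEntrySketch
{
    const char *name;
    bool        is_primary;
} IndexEntrySketch;

static int
compare_indexes_sketch(const IndexEntrySketch *a, const IndexEntrySketch *b)
{
    if (a->is_primary != b->is_primary)
        return a->is_primary ? -1 : 1;  /* primary key (if any) sorts first */
    return strcmp(a->name, b->name);    /* then plain name order */
}
```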
-
Peter Geoghegan authored
Remove a very old Berkeley era comment that doesn't seem to have anything to do with the current locking considerations within _bt_getroot(). Discussion: https://postgr.es/m/CAH2-WzmA2H+rL-xxF5o6QhMD+9x6cJTnz2Mr3Li_pbPBmqoTBQ@mail.gmail.com
-
Michael Paquier authored
At the same time, this fixes a set of inconsistencies in the documentation and the scripts related to the supported versions of the Windows SDK. Author: Haribabu Kommi Reviewed-by: Andrew Dunstan, Juan José Santamaría Flecha, Michael Paquier Discussion: https://postgr.es/m/CAJrrPGcfqXhfPyMrny9apoDU7M1t59dzVAvoJ9AeAh5BJi+UzA@mail.gmail.com
-
Michael Paquier authored
This merges the portion related to REINDEX SYSTEM into the routine already available for all the other reindex types, making the query generation cleaner. While on it, change the handling of the reindex types to use an enum, which makes it possible to get rid of the hardcoded strings previously used directly in the query generation for the same purpose (aka "TABLE", "DATABASE", etc.). Per discussion with Julien Rouhaud, Tom Lane, Alvaro Herrera and me. Author: Julien Rouhaud Discussion: https://postgr.es/m/CAOBaU_bSmSik_WRK9niDnm-3NkNZky6+uKxkmQwvthZvMWpS5A@mail.gmail.com
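A sketch of the enum-driven query generation described above; the enum and function names are assumptions chosen for illustration, not necessarily those used in reindexdb.

```c
typedef enum ReindexTypeSketch
{
    REINDEX_KIND_DATABASE,
    REINDEX_KIND_INDEX,
    REINDEX_KIND_SCHEMA,
    REINDEX_KIND_SYSTEM,
    REINDEX_KIND_TABLE
} ReindexTypeSketch;

static const char *
reindex_keyword(ReindexTypeSketch type)
{
    switch (type)
    {
        case REINDEX_KIND_DATABASE:
            return "DATABASE";
        case REINDEX_KIND_INDEX:
            return "INDEX";
        case REINDEX_KIND_SCHEMA:
            return "SCHEMA";
        case REINDEX_KIND_SYSTEM:
            return "SYSTEM";
        case REINDEX_KIND_TABLE:
            return "TABLE";
    }
    return NULL;                /* keep compilers quiet */
}
```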
-
- 01 Jul, 2019 11 commits
-
-
Peter Eisentraut authored
This is long obsolete. Discussion: https://www.postgresql.org/message-id/8eacdc0d-123f-dbca-bacf-0a68766a4889@2ndquadrant.com
-
Tom Lane authored
Let the hacking begin ...
-
Tom Lane authored
pgperltidy and reformat-dat-files too, though the latter didn't find anything to change.
-
Tom Lane authored
Looks like one copy of this text didn't get updated. Justin Pryzby Discussion: https://postgr.es/m/20190701155932.GA22866@telsasoft.com
-
David Rowley authored
This reverts commits 4de60244 and b2d69806. Further thought is required to make this work properly.
-
David Rowley authored
4de60244 added the call to table_finish_bulk_insert to the CopyMultiInsertBufferCleanup function. We use a CopyMultiInsertBuffer even for non-partitioned tables, so having the cleanup do that meant we would call table_finish_bulk_insert twice when performing COPY FROM with a non-partitioned table. Here we can just remove the direct call in CopyFrom and let CopyMultiInsertBufferCleanup handle the call instead.
-
David Rowley authored
86b85044 abstracted calls to heap functions in COPY FROM to support a generic table AM. However, when performing a copy into a partitioned table, this commit neglected to call table_finish_bulk_insert for each partition. Before 86b85044, when we always called the heap functions, there was no need to call heapam_finish_bulk_insert for partitions since it only did any work when performing a copy without WAL. For partitioned tables, this was unsupported anyway, so there was no issue. With pluggable storage, we can't make any assumptions about what the table AM might want to do in its equivalent function, so we'd better ensure we always call table_finish_bulk_insert for each partition that's received a row. For now, we make the table_finish_bulk_insert call whenever we evict a CopyMultiInsertBuffer out of the CopyMultiInsertInfo. This does mean that it's possible that we call table_finish_bulk_insert multiple times per partition, which is not a problem other than being an inefficiency. Improving this requires a more invasive patch, so let's leave that for another day. In passing, move the table_finish_bulk_insert for the target of the COPY command so that it's only called when we're actually performing bulk inserts. We don't need to call this when inserting 1 row at a time. Reported-by: Robert Haas Discussion: https://postgr.es/m/CA+TgmoYK=6BpxiJ0tN-p9wtH0BTAfbdxzHhwou0mdud4+BkYuQ@mail.gmail.com
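A simplified sketch of where the call now happens, assuming the tableam interface table_finish_bulk_insert(Relation, int); this is the shape of the buffer-cleanup path, not the actual copy.c code.

```c
#include "postgres.h"
#include "access/tableam.h"
#include "nodes/execnodes.h"

static void
multi_insert_buffer_cleanup_sketch(ResultRelInfo *rri, int ti_options)
{
    /* ... flush any tuples still held in this partition's buffer ... */

    /* Let the table AM finish whatever its bulk-insert path deferred. */
    table_finish_bulk_insert(rri->ri_RelationDesc, ti_options);
}
```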
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
Michael Paquier authored
Author: Alexander Lakhin Discussion: https://postgr.es/m/af27d1b3-a128-9d62-46e0-88f424397f44@gmail.com
-
Noah Misch authored
UBSan complains about this. Instead, cast to a suitable type requiring only 4-byte alignment. DatumGetAnyArrayP() already assumes one can cast between AnyArrayType and ArrayType, so this doesn't introduce a new assumption. Back-patch to 9.5, where AnyArrayType was introduced. Reviewed by Tom Lane. Discussion: https://postgr.es/m/20190629210334.GA1244217@rfd.leadboat.com
-
- 30 Jun, 2019 7 commits
-
-
Andrew Gierth authored
The logic in reorder_grouping_sets to order grouping set elements to match a pre-specified sort ordering was defective, resulting in unnecessary sort nodes (though the query output would still be correct). Repair, simplifying the code a little, and add a test. Per report from Richard Guo, though I didn't use their patch. Original bug seems to have been my fault. Backpatch back to 9.5 where grouping sets were introduced. Discussion: https://postgr.es/m/CAN_9JTzyjGcUjiBHxLsgqfk7PkdLGXiM=pwM+=ph2LsWw0WO1A@mail.gmail.com
-
Tom Lane authored
There's nothing to build here, and that was confusing AddContrib(). Per buildfarm.
-
Tom Lane authored
Up to now, pg_regress --config-auth had a hard-wired assumption that the target cluster uses the default bootstrap superuser name. pg_dump's 010_dump_connstr.pl TAP test uses non-default superuser names, and was klugily getting around the restriction by listing the desired superuser name as a role to "create". This is pretty confusing (or at least, it confused me). Let's make it clearer by allowing --config-auth mode to be told the bootstrap superuser name. Repurpose the existing --user switch for that, since it has no other function in --config-auth mode. Per buildfarm. I don't have an environment at hand in which I can test this fix, but the buildfarm should soon show if it works. Discussion: https://postgr.es/m/3142.1561840611@sss.pgh.pa.us
-
Tom Lane authored
This test script is unsafe to run in "make installcheck" mode for (at least) two reasons: it creates and destroys some role names that don't follow the "regress_xxx" naming convention, and it sets and then resets the application_name GUC attached to every existing role. While we've not had complaints, these surely are not good things to do within a production installation, and regress.sgml pretty clearly implies that we won't do them. Rather than lose test coverage altogether, let's just move this script somewhere where it will get run by "make check" but not "make installcheck". src/test/modules/ already has that property. Since it seems likely that we'll want other regression tests in future that also exceed the constraints of "make installcheck", create a generically-named src/test/modules/unsafe_tests/ directory to hold them. Discussion: https://postgr.es/m/16638.1468620817@sss.pgh.pa.us
-
Peter Eisentraut authored
Using PG_RETURN_LSN() from non-fmgr pg_lsn_in_internal() happened to work on some platforms, but should just be a plain "return".
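A sketch of the distinction, with a simplified signature standing in for pg_lsn_in_internal(): PG_RETURN_LSN() produces a Datum and is only meaningful for fmgr-callable functions, so an internal function declared to return XLogRecPtr should just return the value.

```c
#include "postgres.h"
#include "access/xlogdefs.h"

static XLogRecPtr
parse_lsn_internal_sketch(const char *str, bool *have_error)
{
    XLogRecPtr  result = InvalidXLogRecPtr;

    /* ... parse str into result, setting *have_error on bad input ... */
    (void) str;
    (void) have_error;

    return result;              /* plain return, not PG_RETURN_LSN(result) */
}
```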
-
Peter Eisentraut authored
Instead of calling pg_lsn_in() in check_recovery_target_lsn and timestamptz_in() in check_recovery_target_time, reorganize the respective code so that we don't raise any errors in the check hooks. The previous code tried to use PG_TRY/PG_CATCH to handle errors in a way that is not safe, so now the code contains no ereport() calls and can operate safely within the GUC error handling system. Moreover, since the interpretation of the recovery_target_time string may depend on the time zone, we cannot do the final processing of that string until all the GUC processing is done. Instead, check_recovery_target_time() now does some parsing for syntax checking, but the actual conversion to a timestamptz value is done later in the recovery code that uses it. Reported-by: Andres Freund <andres@anarazel.de> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://www.postgresql.org/message-id/flat/20190611061115.njjwkagvxp4qujhp%40alap3.anarazel.de
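A hedged sketch of the pattern described above, not the committed code: the check hook avoids ereport() entirely, reporting problems via GUC_check_errdetail() and returning false. parse_lsn_no_error is a hypothetical stand-in for the ereport-free parser (pg_lsn_in_internal plays that role in the actual change).

```c
#include "postgres.h"
#include "access/xlogdefs.h"
#include "utils/guc.h"

/* hypothetical ereport-free parser */
extern XLogRecPtr parse_lsn_no_error(const char *str, bool *have_error);

static bool
check_recovery_target_lsn_sketch(char **newval, void **extra, GucSource source)
{
    bool        have_error = false;

    (void) extra;
    (void) source;
    (void) parse_lsn_no_error(*newval, &have_error);

    if (have_error)
    {
        GUC_check_errdetail("\"%s\" is not a valid LSN.", *newval);
        return false;           /* let the GUC machinery report the problem */
    }
    return true;
}
```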
-
Peter Eisentraut authored
The date/time values 'current', 'invalid', and 'undefined' were removed a long time ago, but the code still contains explicit error handling for the transition. To simplify the code and avoid having to handle these values everywhere, just remove the recognition of these tokens altogether now. Reviewed-by: Michael Paquier <michael@paquier.xyz>
-
- 29 Jun, 2019 2 commits
-
-
Tom Lane authored
In commit 18555b13 we tentatively established a rule that regression tests should use names containing "regression" for databases, and names starting with "regress_" for all other globally-visible object names, so as to circumscribe the side-effects that "make installcheck" could have on an existing installation. This commit adds a simple enforcement mechanism for that rule: if the code is compiled with ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS defined, it will emit a warning (not an error) whenever a database, role, tablespace, subscription, or replication origin name is created that doesn't obey the rule. Running one or more buildfarm members with that symbol defined should be enough to catch new violations, at least in the regular regression tests. Most TAP tests wouldn't notice such warnings, but that's actually fine because TAP tests don't execute against an existing server anyway. Since it's already the case that running src/test/modules/ tests in installcheck mode is deprecated, we can use that as a home for tests that seem unsafe to run against an existing server, such as tests that might have side-effects on existing roles. Document that (though this commit doesn't in itself make it any less safe than before). Update regress.sgml to define these restrictions more clearly, and to clean up assorted lack-of-up-to-date-ness in its descriptions of the available regression tests. Discussion: https://postgr.es/m/16638.1468620817@sss.pgh.pa.us
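A minimal sketch of the enforcement idea, with the function name and message wording assumed rather than taken from the commit: when built with the symbol defined, emit a WARNING (not an error) for globally-visible names outside the sanctioned pattern.

```c
#include "postgres.h"

static void
check_regression_name_sketch(const char *objtype, const char *name)
{
#ifdef ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS
    if (strncmp(name, "regress_", 8) != 0)
        ereport(WARNING,
                (errmsg("%s name \"%s\" does not start with \"regress_\"",
                        objtype, name)));
#else
    (void) objtype;
    (void) name;
#endif
}
```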
-
Tom Lane authored
In commit 18555b13 we tentatively established a rule that regression tests should use names containing "regression" for databases, and names starting with "regress_" for all other globally-visible object names, so as to circumscribe the side-effects that "make installcheck" could have on an existing installation. However, no enforcement mechanism was created, so it's unsurprising that some new violations have crept in since then. In fact, a whole new *category* of violations has crept in, to wit we now also have globally-visible subscription and replication origin names, and "make installcheck" could very easily clobber user-created objects of those types. So it's past time to do something about this. This commit sanitizes the tests enough that they will pass (i.e. not generate any visible warnings) with the enforcement mechanism I'll add in the next commit. There are some TAP tests that still trigger the warnings, but the warnings do not cause test failure. Since these tests do not actually run against a pre-existing installation, there's no need to worry whether they could conflict with user-created objects. The problem with rolenames.sql testing special role names like "user" is still there, and is dealt with only very cosmetically in this patch (by hiding the warnings :-(). What we actually need to do to be safe is to take that test script out of "make installcheck" altogether, but that seems like material for a separate patch. Discussion: https://postgr.es/m/16638.1468620817@sss.pgh.pa.us
-