- 28 Aug, 2018 3 commits
-
-
Andrew Gierth authored
regexp_matches, regexp_split_to_table and regexp_split_to_array all work by compiling a list of match positions as character offsets (NOT byte positions) in the source string. Formerly, they then used text_substr to extract the matched text; but in a multi-byte encoding, that counts the characters in the string, and the characters needed to reach the starting byte position, on every call. Accordingly, the performance degraded as the product of the input string length and the number of match positions, such that splitting a string of a few hundred kbytes could take many minutes.

Repair by keeping the wide-character copy of the input string available (only in the case where encoding_max_length is not 1) after performing the match operation, and extracting substrings from that instead. This reduces the complexity to being linear in the number of result bytes, discounting the actual regexp match itself (which is not affected by this patch).

In passing, remove cleanup using retail pfree() which was obsoleted by commit ff428cde (Feb 2008) which made cleanup of SRF multi-call contexts automatic. Also increase (to ~134 million) the maximum number of matches and provide an error message when it is reached.

Backpatch all the way because this has been wrong forever.

Analysis and patch by me; review by Kaiting Chen.

Discussion: https://postgr.es/m/87pnyn55qh.fsf@news-spur.riddles.org.uk
see also https://postgr.es/m/87lg996g4r.fsf@news-spur.riddles.org.uk
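The kind of workload this affects can be sketched as follows (a hypothetical benchmark, not taken from the commit; the input string is made up):

```sql
-- Hypothetical benchmark: split a large multi-byte string into many pieces.
-- In multi-byte encodings the old code re-counted characters from the start
-- of the string for every match, so runtime grew roughly with
-- (input length x number of matches); the fix makes it linear in the output.
\timing on
SELECT count(*)
FROM regexp_split_to_table(repeat('málaga,', 100000), ',');
```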
-
Peter Eisentraut authored
The source code was already set up for NLS support, so just a nls.mk file needed to be added. Also, fix the old problem of putting the int64 format specifier right into the string, which breaks NLS.
-
Thomas Munro authored
Fix reference to non-existent file in comment. Add SH_ prefix to the EMPTY and IN_USE tokens, to reduce likelihood of collisions with unrelated macros. Add include guards around the function definitions that are not "parameterized", so the header can be used again in the same translation unit. Undefine SH_EQUAL macro where other "parameter" macros are undefined, for the same reason.

Author: Thomas Munro
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAEepm%3D1LdXZ3mMTM8tHt_b%3DK1kREit%3Dp8sikesak%3DkzHHM07Nw%40mail.gmail.com
-
- 27 Aug, 2018 4 commits
-
-
Peter Eisentraut authored
The problem arises with the combination of CALL with output parameters and doing a COMMIT inside the procedure. When a CALL has output parameters, the portal uses the strategy PORTAL_UTIL_SELECT instead of PORTAL_MULTI_QUERY. Using PORTAL_UTIL_SELECT causes the portal's snapshot to be registered with the current resource owner (portal->holdSnapshot); see 9ee1cf04 for the reason.

Normally, PortalDrop() unregisters the snapshot. If not, then ResourceOwnerRelease() will print a warning about a snapshot leak on transaction commit. A transaction commit normally drops all portals (PreCommit_Portals()), except the active portal. So in case of the active portal, we need to manually release the snapshot to avoid the warning.

Reported-by: Prabhat Sahu <prabhat.sahu@enterprisedb.com>
Reviewed-by: Jonathan S. Katz <jkatz@postgresql.org>
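A minimal sketch of the problematic combination (hypothetical procedure and names; the warning itself only showed up in the server log):

```sql
-- Hypothetical reproducer: a CALL with an output (INOUT) parameter combined
-- with a COMMIT inside the procedure, the case described above.
CREATE PROCEDURE set_answer(INOUT result int)
LANGUAGE plpgsql
AS $$
BEGIN
    result := 42;
    COMMIT;   -- commit inside the procedure while the CALL portal is active
END;
$$;

CALL set_answer(NULL);   -- previously left the portal's snapshot registered,
                         -- producing the snapshot leak warning at commit
```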
-
Tom Lane authored
The archive should show a dependency on the item's table, but it failed to include one. This could cause failures in parallel restore due to emitting ALTER TABLE ... ENABLE ROW LEVEL SECURITY before restoring the table's data.

In practice the odds of a problem seem low, since you would typically need to have set FORCE ROW LEVEL SECURITY as well, and you'd also need a very high --jobs count to have any chance of this happening. That probably explains the lack of field reports. Still, it's a bug, so back-patch to 9.5 where RLS was introduced.

Discussion: https://postgr.es/m/19784.1535390902@sss.pgh.pa.us
-
Peter Eisentraut authored
Use BKI_FORCE_NOT_NULL on some catalog field declarations that are never null (according to the source code that accesses them).
-
Michael Paquier authored
A caller of VACUUM could acquire locks too early, causing other sessions to block on those lock requests. This opens the door to denial-of-service attacks, as even a non-privileged user can attempt a VACUUM FULL of a critical catalog table to block even all incoming connection attempts. Contrary to TRUNCATE, a system-wide VACUUM first builds the list of relations to process, after which vacuum_rel() or analyze_rel() tries to lock each relation, and the operation would just block. When the client specifies a list of relations and a relation needs to be skipped, ownership checks are done when building the list of relations to work on, preventing a later lock attempt.

vacuum_rel() already had the sanity checks needed, except that those were applied too late. This commit refactors the code so that relation skips are checked beforehand, making it safer to avoid too-early locks, for manual VACUUM both with and without a list of relations specified. An isolation test is added emulating the fact that early locks do not happen anymore, with a WARNING message now issued earlier if the user calling VACUUM is not a relation owner.

When a partitioned table is listed in a manual VACUUM or ANALYZE command, its full list of partitions is fetched, all partitions get added to the list to work on, and then each one of them is processed one by one, with ownership checks happening in the later phase of vacuum_rel() or analyze_rel(). Doing early ownership checks for each partition proved tedious, as it would create deadlock risks with lock upgrades, and skipping all partitions when the listed partitioned table is not owned would change behavior compared to how Postgres 10 implemented vacuum for partitioned tables. The original problem reported, early lock queuing on critical relations, is fixed anyway, so priority is given to avoiding a backward-incompatible behavior change.

Reported-by: Lloyd Albin, Jeremy Schneider
Author: Michael Paquier
Reviewed-by: Nathan Bossart, Kyotaro Horiguchi
Discussion: https://postgr.es/m/152512087100.19803.12733865831237526317@wrigleys.postgresql.org
Discussion: https://postgr.es/m/20180812222142.GA6097@paquier.xyz
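As a rough illustration of the user-visible effect (hypothetical role name; the exact warning wording depends on the relation being skipped):

```sql
-- A non-owner asking to vacuum a relation it does not own is now skipped
-- with a WARNING before any lock on the relation is requested.
SET ROLE regular_user;           -- assumed unprivileged role
VACUUM pg_catalog.pg_authid;     -- shared catalog owned by the bootstrap superuser
-- WARNING:  skipping "pg_authid" --- only superuser can vacuum it
RESET ROLE;
```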
-
- 26 Aug, 2018 3 commits
-
-
Thomas Munro authored
Author: David Rowley Discussion: https://postgr.es/m/CAKJS1f8du35u5DprpykWvgNEScxapbWYJdHq%2Bz06Wj3Y2KFPbw%40mail.gmail.com
-
Tom Lane authored
The previous coding figured it'd be good enough to postpone opening the first CSV log file until we got a message we needed to write there. This is unsafe, though, because if the open fails we end up in infinite recursion trying to report the failure. Instead make the CSV log file management code look as nearly as possible like the longstanding logic for the stderr log file. In particular, open it immediately at postmaster startup (if enabled), or when we get a SIGHUP in which we find that log_destination has been changed to enable CSV logging.

It seems OK to fail if a postmaster-start-time open attempt fails, as we've long done for the stderr log file. But we can't die if we fail to open a CSV log file during SIGHUP, so we're still left with a problem. In that case, write any output meant for the CSV log file to the stderr log file. (This will also cover race-condition cases in which backends send CSV log data before or after we have the CSV log file open.)

This patch also fixes an ancient oversight that, if CSV logging was turned off during a SIGHUP, we never actually closed the last CSV log file.

In passing, remember to reset whereToSendOutput = DestNone during syslogger start, since (unlike all other postmaster children) it's forked before the postmaster has done that. This made for a platform-dependent difference in error reporting behavior between the syslogger and other children: except on Windows, it'd report problems to the original postmaster stderr as well as the normal error log file(s). It's barely possible that that was intentional at some point; but it doesn't seem likely to be desirable in production, and the platform dependency definitely isn't desirable.

Per report from Alexander Kukushkin. It's been like this for a long time, so back-patch to all supported branches.

Discussion: https://postgr.es/m/CAFh8B==iLUD_gqC-dAENS0V+kVrCeGiKujtKqSQ7++S-caaChw@mail.gmail.com
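For context, the configuration that sends the syslogger down the CSV-logging path looks roughly like this (a hedged sketch using standard settings; it is not part of the commit):

```sql
-- Enabling CSV logging is what makes the syslogger open a CSV log file at
-- postmaster startup or on SIGHUP, the two cases discussed above.
ALTER SYSTEM SET logging_collector = on;              -- requires a server restart
ALTER SYSTEM SET log_destination   = 'stderr,csvlog';
SELECT pg_reload_conf();                              -- delivers SIGHUP to the postmaster
```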
-
Jeff Davis authored
Andres and Tom objected to the choice of the ".tmp" extension. Changing to Andres's suggestion of ".spill". Discussion: https://postgr.es/m/88092095-3348-49D8-8746-EB574B1D30EA%40anarazel.de
-
- 25 Aug, 2018 6 commits
-
-
Bruce Momjian authored
Mention that "Latest checkpoint location" will not match in pg_upgrade if the standby server is still running during the upgrade, which is possible. "Match" text first appeared in PG 9.5. Reported-by: Paul Bonaud Discussion: https://postgr.es/m/c7268794-edb4-1772-3bfd-04c54585c24e@trainline.com Backpatch-through: 9.5
-
Bruce Momjian authored
Reported-by: Ashutosh Sharma Discussion: https://postgr.es/m/CAE9k0PnhnL6MNDLuvkk8USzOa_DpzDzFQPAM_uaGuXbh9HMKYw@mail.gmail.com Author: Ashutosh Sharma Backpatch-through: 9.3
-
Jeff Davis authored
The previous extension, ".snap", was chosen for historical reasons and became confusing. Discussion: https://postgr.es/m/CAMp0ubd_P8vBGx8=MfDXQJZxHA5D_Zarw5cCkDxJ_63+pWRJ9w@mail.gmail.com
-
Jeff Davis authored
The description of the filename for mapping files did not match the code.
-
Bruce Momjian authored
The options string appeared in PG 10. Reported-by: pgsql-kr@postgresql.kr Discussion: https://postgr.es/m/153500377658.1378.6587007319641704057@wrigleys.postgresql.org Backpatch-through: 10
-
Bruce Momjian authored
Reported-by: Adam Bielański Discussion: https://postgr.es/m/153484305538.1370.7605856225879294548@wrigleys.postgresql.org Backpatch-through: 9.3
-
- 24 Aug, 2018 7 commits
-
-
Andres Freund authored
This simplifies logic / reduces duplication in a few headers. Author: Andres Freund Discussion: https://postgr.es/m/97d4b165-192d-3605-749c-f614a0c4e783@2ndquadrant.com
-
Andres Freund authored
Noticed thanks to buildfarm animal seawasp.

Author: Andres Freund
Backpatch: v11-, where LLVM-based JIT compilation was introduced
-
Tom Lane authored
While we generally don't sweat too much about "may be used uninitialized" warnings from older compilers, I noticed that there's a fair number of buildfarm animals that are producing such a warning *only* for this variable. So it seems worth silencing.
-
Michael Paquier authored
Since 5220bb75, not only Append but also MergeAppend supports the operation.

Author: Amit Langote
Discussion: https://postgr.es/m/59d8eb92-4536-c44e-54e2-305b9b3d8eb7@lab.ntt.co.jp
-
Andres Freund authored
This just converts a few for loops in postgres.c to declare variables in the loop initializer, and uses designated initializers in smgr.c's definition of smgr callbacks. Author: Andres Freund Discussion: https://postgr.es/m/97d4b165-192d-3605-749c-f614a0c4e783@2ndquadrant.com
-
Andres Freund authored
In 86d78ef5 I enabled configure to check for C99 support, with the goal of checking which platforms support C99. While there are a few machines without C99 support among our buildfarm animals, de-supporting them for v12 was deemed acceptable.

While not tested in the aforementioned commit, the biggest increase in minimum compiler version comes from MSVC, which gained C99 support fairly late. The subset in MSVC 2013 is sufficient for our needs, at this point. While that is a significant increase in minimum version, the existing Windows binaries are already built with a new enough version.

Make configure error out if C99 support could not be detected. For MSVC builds, increase the minimum version to 2013. The increase to MSVC 2013 allows us to get rid of VCBuildProject.pm, as that was only required for MSVC 2005/2008.

Author: Andres Freund
Discussion: https://postgr.es/m/97d4b165-192d-3605-749c-f614a0c4e783@2ndquadrant.com
-
Michael Paquier authored
A VACUUM or ANALYZE command that directly lists a partitioned table expands it to its partitions, so all elements of the tree are processed with individual ownership checks. This results in relations being skipped differently depending on who owns which parts of the tree, which may not be consistent across a partition tree. This commit adds more tests to ensure that any future refactoring keeps the behavior consistent, or at least that any changes are easily identified and checked. The current behavior of VACUUM with partitioned tables has been present since PostgreSQL 10.

Author: Nathan Bossart
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/DC186201-B01F-4A66-9EC4-F855A957C1F9@amazon.com
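A minimal sketch of the expansion being tested (hypothetical table names):

```sql
-- Vacuuming the partitioned parent expands to every partition; ownership
-- is checked for each partition individually when it is processed.
CREATE TABLE measurements (ts timestamptz, v int) PARTITION BY RANGE (ts);
CREATE TABLE measurements_2018 PARTITION OF measurements
    FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');

VACUUM measurements;   -- processes measurements_2018 (and any other partitions)
```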
-
- 23 Aug, 2018 7 commits
-
-
Andres Freund authored
Code in slot_getallattrs() is the same as if slot_getsomeattrs() is called with the number of attributes specified in the tuple descriptor. Implement it that way instead of duplicating the code between those two functions.

This is part of a patchseries abstracting TupleTableSlots so they can store arbitrary forms of tuples, but is a nice enough cleanup on its own.

Author: Ashutosh Bapat
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/20180220224318.gw4oe5jadhpmcdnm@alap3.anarazel.de
-
Andrew Gierth authored
Commits c6b3c939 (which fixed the precedence of >=, <=, <> operators) and 865f14a2 (which added support for the standard => notation for named arguments) created a class of lexer tokens which look like multi-character operators but which have their own token IDs distinct from Op. However, longest-match rules meant that following any of these tokens with another operator character, as in (1<>-1), would cause them to be incorrectly returned as Op.

The error here isn't immediately obvious, because the parser would usually still find the correct operator via the Op token, but there were more subtle problems:

1. If immediately followed by a comment or +-, >= <= <> would be given the old precedence of Op rather than the correct new precedence;

2. If followed by a comment, != would be returned as Op rather than as NOT_EQUAL, causing it not to be found at all;

3. If followed by a comment or +-, the => token for named arguments would be lexed as Op, causing the argument to be mis-parsed as a simple expression, usually causing an error.

Fix by explicitly checking for the operators in the {operator} code block in addition to all the existing special cases there.

Backpatch to 9.5 where the problem was introduced.

Analysis and patch by me; review by Tom Lane.

Discussion: https://postgr.es/m/87va851ppl.fsf@news-spur.riddles.org.uk
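A few queries of the shape discussed above (hypothetical examples; the helper function is made up, and all of these should lex with the intended special tokens after the fix):

```sql
-- Operator tokens immediately followed by another operator character,
-- a comment, or a sign: the cases enumerated above.
SELECT 1<>-1;                    -- <> directly followed by a minus sign
SELECT 2 >=/* comment */ 1;      -- >= immediately followed by a comment
SELECT 2 !=/* comment */ 1;      -- != must still be recognized as not-equal

CREATE FUNCTION add_one(x int) RETURNS int
    LANGUAGE sql AS 'SELECT x + 1';
SELECT add_one(x =>-1);          -- => (named argument) followed by a minus sign
```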
-
Andrew Gierth authored
The lexer's handling of operators contained an O(N^3) hazard when dealing with long strings of + or - characters; it seems hard to prevent this case from being O(N^2), but the additional N multiplier was not needed. Backpatch all the way since this has been there since 7.x, and it presents at least a mild hazard in that trying to do Bind, PREPARE or EXPLAIN on a hostile query could take excessive time (without honouring cancels or timeouts) even if the query was never executed.
-
Tom Lane authored
Historically, we looked up the target hostname in connectDBStart, so that PQconnectPoll did not need to do DNS name resolution. The patches that added multiple-target-host support to libpq preserved this division of labor; but it's really nonsensical now, because it means that if any one of the target hosts fails to resolve in DNS, the connection fails. That negates the no-single-point-of-failure goal of the feature. Additionally, DNS lookups aren't exactly cheap, but the code did them all even if the first connection attempt succeeds.

Hence, rearrange so that PQconnectPoll does the lookups, and only looks up a hostname when it's time to try that host. This does mean that PQconnectPoll could block on a DNS lookup --- but if you wanted to avoid that, you should be using hostaddr, as the documentation has always specified. It seems fairly unlikely that any applications would really care whether the lookup occurs inside PQconnectStart or PQconnectPoll.

In addition to calling out that fact explicitly, do some other minor wordsmithing in the docs around the multiple-target-host feature.

Since this seems like a bug in the multiple-target-host feature, backpatch to v10 where that was introduced. In the back branches, avoid moving any existing fields of struct pg_conn, just in case any third-party code is looking into that struct.

Tom Lane, reviewed by Fabien Coelho

Discussion: https://postgr.es/m/4913.1533827102@sss.pgh.pa.us
-
Peter Eisentraut authored
Reformat synopsis, put options into better order, make the description line a bit shorter, and put more details into the description.
-
Peter Eisentraut authored
This test was supposed to check the interaction of INOUT and default parameters in a procedure call, but it only checked the case where the parameter was not supplied. Now it also checks the case where the parameter was supplied. It was already working correctly, so no code changes required.
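The two call shapes now being covered look roughly like this (a hypothetical procedure, not the one in the regression test):

```sql
-- An INOUT parameter with a default: both CALL forms are now exercised.
CREATE PROCEDURE double_it(INOUT x int DEFAULT 10)
LANGUAGE plpgsql
AS $$
BEGIN
    x := x * 2;
END;
$$;

CALL double_it();      -- parameter not supplied: default is used, returns 20
CALL double_it(21);    -- parameter supplied: returns 42
```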
-
Alvaro Herrera authored
Thinko in commit 857ee8e3. Discovered-by: Gianni Ciolli
-
- 22 Aug, 2018 8 commits
-
-
Peter Eisentraut authored
Some text was still claiming that committing transactions was not possible in PL/pgSQL.
-
Peter Eisentraut authored
Since procedures are now a different thing from functions, change the CREATE TRIGGER and CREATE EVENT TRIGGER syntax to use FUNCTION in the clause that specifies the function. PROCEDURE is still accepted for compatibility.

pg_dump and ruleutils.c output is not changed yet, because that would require a change in information_schema.sql and thus a catversion change.

Reported-by: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Jonathan S. Katz <jonathan.katz@excoventures.com>
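A sketch of the new spelling (hypothetical trigger, function, and table names):

```sql
-- EXECUTE FUNCTION is the new syntax; EXECUTE PROCEDURE is still accepted.
CREATE FUNCTION log_update() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    RAISE NOTICE 'row updated';
    RETURN NEW;
END;
$$;

CREATE TRIGGER accounts_log_update
    AFTER UPDATE ON accounts
    FOR EACH ROW
    EXECUTE FUNCTION log_update();   -- formerly: EXECUTE PROCEDURE log_update()
```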
-
Peter Eisentraut authored
Since procedures are now a different thing from functions, change the CREATE OPERATOR syntax to use FUNCTION in the clause that specifies the function. PROCEDURE is still accepted for compatibility.

Reported-by: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Jonathan S. Katz <jonathan.katz@excoventures.com>
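And the corresponding change for CREATE OPERATOR (a hypothetical operator used only for illustration):

```sql
-- FUNCTION is the new keyword for the supporting function; PROCEDURE still works.
CREATE FUNCTION int4_abs_eq(a int, b int) RETURNS boolean
    LANGUAGE sql AS 'SELECT abs(a) = abs(b)';

CREATE OPERATOR ==~ (
    LEFTARG  = int,
    RIGHTARG = int,
    FUNCTION = int4_abs_eq           -- formerly: PROCEDURE = int4_abs_eq
);

SELECT 4 ==~ -4;   -- true
```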
-
Peter Eisentraut authored
Historically, the term procedure was used as a synonym for function in Postgres/PostgreSQL. Now we have procedures as separate objects from functions, so we need to clean up the documentation to not mix those terms.

In particular, mentions of "trigger procedures" are changed to "trigger functions", and access method "support procedures" are changed to "support functions". (The latter already used FUNCTION in the SQL syntax anyway.) Also, the terminology in the SPI chapter has been cleaned up.

A few tests, examples, and code comments are also adjusted to be consistent with documentation changes, but not everything.

Reported-by: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Jonathan S. Katz <jonathan.katz@excoventures.com>
-
Thomas Munro authored
Per complaint from Michael Paquier.
-
Thomas Munro authored
Author: David Rowley Discussion: https://postgr.es/m/CAKJS1f8YkwojSTSg8YjNYCLCXzx0fR7wBR3Gf%2BrA9_52eoPZKg%40mail.gmail.com
-
Michael Paquier authored
This commit prevents a crash of pg_dump caused by the exclusion of a table which has identity columns, as the table would be correctly excluded but not its identity sequence. In order to fix that, identity sequences are excluded if the parent table is defined as such. Knowing about such sequences has no meaning without their parent table anyway.

Reported-by: Andy Abelisto
Author: David Rowley
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/153479393218.1316.8472285660264976457@wrigleys.postgresql.org
Backpatch-through: 10
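The scenario being fixed can be sketched like this (hypothetical table and database names; the pg_dump invocation is shown only in a comment for context):

```sql
-- A table with an identity column implicitly owns an identity sequence.
CREATE TABLE excluded_tab (
    id   int GENERATED ALWAYS AS IDENTITY,
    data text
);

-- Excluding the table from a dump, e.g.:
--     pg_dump --exclude-table=excluded_tab mydb
-- now also excludes its identity sequence instead of crashing.
```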
-
Michael Paquier authored
When a user does not have ownership of a relation, specific log messages are generated. This new test suite adds coverage for all the possible log messages generated, which will be useful to check the consistency of any refactoring related to ownership checks for relations being vacuumed or analyzed.

Author: Michael Paquier
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/20180812222142.GA6097@paquier.xyz
-
- 21 Aug, 2018 2 commits
-
-
Alvaro Herrera authored
-
Alvaro Herrera authored
-