- 28 Sep, 2020 3 commits
-
-
Fujii Masao authored
Author: Naoki Nakamichi Reviewed-by: Fujii Masao Discussion: https://postgr.es/m/ec1a45b06edfce13706f2c765778d8c2@oss.nttdata.com
-
David Rowley authored
Prior to this commit, the linitial(), lsecond(), lthird(), lfourth() macros and their int and Oid list cousins would call their corresponding inlined function to fetch the cell of interest. Those inline functions were kind enough to return NULL if the particular cell did not exist. Unfortunately, the care that these functions took was of no relevance to the calling macros as they proceeded to directly dereference the returned value without any regard to whether that value was NULL or not. If it had been, we'd have segfaulted.

Of course, the fact that we would have segfaulted on misuse of these macros just goes to prove that nobody is relying on the empty or list too small checks. So here we just get rid of those checks completely.

The existing inline functions have been left alone as someone may be using those directly. We just replace the call within each macro to use list_nth_cell(). For the llast*() case we require a new list_last_cell() inline function to get away from the multiple evaluation hazard that we'd get if we fetched ->length on the macro's parameter.

Author: David Rowley
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvpo1zj9KhEpU2cCRZfSM3Q6XGdhzuAS2v79PH7WJBkYVA@mail.gmail.com
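A minimal sketch, assuming the array-based List representation used since v13, of what the rewritten macros and the new helper might look like; the exact definitions live in pg_list.h and may differ in detail:

```c
/*
 * Illustrative only: the accessor macros fetch the cell via
 * list_nth_cell(), which asserts on out-of-range access instead of
 * returning NULL.
 */
#define linitial(l)  lfirst(list_nth_cell(l, 0))
#define lsecond(l)   lfirst(list_nth_cell(l, 1))
#define lthird(l)    lfirst(list_nth_cell(l, 2))
#define lfourth(l)   lfirst(list_nth_cell(l, 3))

/*
 * Sketch of the new inline helper described above; it avoids evaluating
 * the macro argument more than once when locating the last cell.
 */
static inline ListCell *
list_last_cell(const List *list)
{
	Assert(list->length > 0);
	return &list->elements[list->length - 1];
}

#define llast(l)     lfirst(list_last_cell(l))
```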
-
Michael Paquier authored
Both tools never had safeguard checks for the options provided, and it was possible to make pg_test_fsync run for an infinite amount of time or to pass down buggy values to pg_test_timing. These behaviors have existed for a long time, with no actual complaints, so no backpatch is done. Basic TAP tests are introduced for both tools. Author: Michael Paquier Reviewed-by: Peter Eisentraut Discussion: https://postgr.es/m/20200806062759.GE16470@paquier.xyz
-
- 27 Sep, 2020 1 commit
-
-
Tom Lane authored
When commit bd3dadda introduced AlternativeSubPlans, I had some ambitions towards allowing the choice of subplan to change during execution. That has not happened, or even been thought about, in the ensuing twelve years; so it seems like a failed experiment. So let's rip that out and resolve the choice of subplan at the end of planning (in setrefs.c) rather than during executor startup. This has a number of positive benefits:

* Removal of a few hundred lines of executor code, since AlternativeSubPlans need no longer be supported there.

* Removal of executor-startup overhead (particularly, initialization of subplans that won't be used).

* Removal of incidental costs of having a larger plan tree, such as tree-scanning and copying costs in the plancache; not to mention setrefs.c's own costs of processing the discarded subplans.

* EXPLAIN no longer has to print a weird (and undocumented) representation of an AlternativeSubPlan choice; it sees only the subplan actually used. This should mean less confusion for users.

* Since setrefs.c knows which subexpression of a plan node it's working on at any instant, it's possible to adjust the estimated number of executions of the subplan based on that. For example, we should usually estimate more executions of a qual expression than a targetlist expression. The implementation used here is pretty simplistic, because we don't want to expend a lot of cycles on the issue; but it's better than ignoring the point entirely, as the executor had to.

That last point might possibly result in shifting the choice between hashed and non-hashed EXISTS subplans in a few cases, but in general this patch isn't meant to change planner choices. Since we're doing the resolution so late, it's really impossible to change any plan choices outside the AlternativeSubPlan itself.

Patch by me; thanks to David Rowley for review.

Discussion: https://postgr.es/m/1992952.1592785225@sss.pgh.pa.us
-
- 26 Sep, 2020 3 commits
-
-
Tom Lane authored
Commit e5209bf3 didn't quite get the job done, as I failed to notice that chksetconfig() also needed to have its ORDER BY extended. Per buildfarm member dory. Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dory&dt=2020-09-26%2020%3A10%3A13
-
Tom Lane authored
This function leaked some memory while loading qual clauses for an RLS policy. While ordinarily negligible, that could build up in some repeated-reload cases, as reported by Konstantin Knizhnik. We can improve matters by borrowing the coding long used in RelationBuildRuleLock: build stringToNode's result directly in the target context, and remember to explicitly pfree the input string.

This patch by no means completely guarantees zero leaks within this function, since we have no real guarantee that the catalog-reading subroutines it calls don't leak anything. However, practical tests suggest that this is enough to resolve the issue. In any case, any remaining leaks are similar to those risked by RelationBuildRuleLock and other relcache-loading subroutines. If we need to fix them, we should adopt a more global approach such as that used by the RECOVER_RELATION_BUILD_MEMORY hack.

While here, let's remove the need for an expensive PG_TRY block by using MemoryContextSetParent to reparent an initially-short-lived context for the RLS data.

Back-patch to all supported branches.

Discussion: https://postgr.es/m/21356c12-8917-8249-b35f-1c447231922b@postgrespro.ru
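A hedged sketch of the reparenting pattern mentioned above; the context name and the target parent are illustrative assumptions, not the actual policy.c code:

```c
/* Build the RLS data in a context that starts out short-lived... */
MemoryContext tmpcxt = AllocSetContextCreate(CurrentMemoryContext,
											 "rls temp context", /* hypothetical name */
											 ALLOCSET_SMALL_SIZES);
MemoryContext oldcxt = MemoryContextSwitchTo(tmpcxt);

/* ... stringToNode() the qual clauses, etc. ... */

MemoryContextSwitchTo(oldcxt);

/*
 * ... then, once everything has succeeded, attach the context to
 * long-lived cache memory instead of wrapping the work in PG_TRY.
 */
MemoryContextSetParent(tmpcxt, CacheMemoryContext);
```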
-
Amit Kapila authored
Commit 46482432 changed the logical replication protocol to allow the streaming of in-progress transactions and used the new version of the protocol irrespective of the server version. Use the appropriate version of the protocol based on the server version. Reported-by: Ashutosh Sharma Author: Dilip Kumar Reviewed-by: Ashutosh Sharma and Amit Kapila Discussion: https://postgr.es/m/CAE9k0P=9OpXcNrcU5Gsvd5MZ8GFpiN833vNHzX6Uc=8+h1ft1Q@mail.gmail.com
-
- 25 Sep, 2020 2 commits
-
-
Thomas Munro authored
Previously, we called fsync() after writing out individual pg_xact, pg_multixact and pg_commit_ts pages due to cache pressure, leading to regular I/O stalls in user backends and recovery. Collapse requests for the same file into a single system call as part of the next checkpoint, as we already did for relation files, using the infrastructure developed by commit 3eb77eba. This can cause a significant improvement to recovery performance, especially when it's otherwise CPU-bound.

Hoist ProcessSyncRequests() up into CheckPointGuts() to make it clearer that it applies to all the SLRU mini-buffer-pools as well as the main buffer pool. Rearrange things so that data collected in CheckpointStats includes SLRU activity.

Also remove the Shutdown{CLOG,CommitTS,SUBTRANS,MultiXact}() functions, because they were redundant after the shutdown checkpoint that immediately precedes them. (I'm not sure if they were ever needed, but they aren't now.)

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (parts)
Tested-by: Jakub Wartak <Jakub.Wartak@tomtom.com>
Discussion: https://postgr.es/m/CA+hUKGLJ=84YT+NvhkEEDAuUtVHMfQ9i-N7k_o50JmQ6Rpj_OQ@mail.gmail.com
-
Michael Paquier authored
PX_OWN_ALLOC was intended as a way to disable the use of palloc(), but over time new palloc() or equivalent calls have been added, like in 32984d8f, making this extra layer lose its original purpose. This also simplifies some code paths on the way, using palloc0() rather than palloc() followed by memset(0). Author: Daniel Gustafsson Discussion: https://postgr.es/m/A5BFAA1A-B2E8-4CBC-895E-7B1B9475A527@yesql.se
-
- 24 Sep, 2020 7 commits
-
-
Tom Lane authored
Parallel pg_dump failed if its -d parameter was a connection string containing any essential information other than host, port, or username. The same was true for pg_restore with --create.

The reason is that these scenarios failed to preserve the connection string from the command line; the code felt free to replace that with just the database name when reconnecting from a pg_dump parallel worker or after creating the target database. By chance, parallel pg_restore did not suffer this defect, as long as you didn't say --create.

In practice it seems that the error would be obvious only if the connstring included essential, non-default SSL or GSS parameters. This may explain why it took us so long to notice. (It also makes it very difficult to craft a regression test case illustrating the problem, since the test would fail in builds without those options.)

Fix by refactoring so that ConnectDatabase always receives all the relevant options directly from the command line, rather than reconstructed values. Inject a different database name, when necessary, by relying on libpq's rules for handling multiple "dbname" parameters.

While here, let's get rid of the essentially duplicate _connectDB function, as well as some obsolete nearby cruft.

Per bug #16604 from Zsolt Ero. Back-patch to all supported branches.

Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org
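A hedged illustration of the libpq rule the fix leans on: with expand_dbname set, a "dbname" value that looks like a connection string is expanded in place, and a later plain "dbname" overrides only the database name. The connection values here are made up for the example:

```c
/*
 * Illustration only (hypothetical values): pass the original command-line
 * connection string as the first "dbname", then append a second "dbname"
 * carrying just the database to switch to.  Later keywords win, so the
 * SSL/GSS settings from the original string are preserved.
 */
const char *keywords[] = {"dbname", "dbname", NULL};
const char *values[] = {
	"host=example.net sslmode=verify-full dbname=postgres",
	"target_db",
	NULL
};

PGconn *conn = PQconnectdbParams(keywords, values, /* expand_dbname */ 1);
```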
-
Robert Haas authored
The previous coding was confused about whether head_timestamp was intended to represent the timestamp for the newest bucket in the mapping or the oldest timestamp for the oldest bucket in the mapping. Decide that it's intended to be the oldest one, and repair accordingly.

To do that, we need to do two things. First, when advancing to a new bucket, don't categorically set head_timestamp to the new timestamp. Do this only if we're blowing out the map completely because a lot of time has passed since we last maintained it. If we're replacing entries one by one, advance head_timestamp by 1 minute for each; if we're filling in unused entries, don't advance head_timestamp at all.

Second, fix the computation of how many buckets we need to advance. The previous formula would be correct if head_timestamp were the timestamp for the new bucket, but we're now making all the code agree that it's the timestamp for the oldest bucket, so adjust the formula accordingly.

This is certainly a bug fix, but I don't feel good about back-patching it without the introspection tools added by commit aecf5ee2, and perhaps also some actual tests. Since back-patching the introspection tools might not attract sufficient support and since there are no automated tests of these fixes yet, I'm just committing this to master for now.

Patch by me, reviewed by Thomas Munro, Dilip Kumar, Hamid Akhtar.

Discussion: http://postgr.es/m/CA+TgmoY=aqf0zjTD+3dUWYkgMiNDegDLFjo+6ze=Wtpik+3XqA@mail.gmail.com
-
Peter Eisentraut authored
Existing code used various inconsistent ways to printf struct stat's st_size member. The type of that is off_t, which is in most cases a signed 64-bit integer, so use the long long int format for it.
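A small example, assuming the convention described, of printing st_size portably:

```c
#include <stdio.h>
#include <sys/stat.h>

/*
 * Example of the convention described above: off_t's width varies by
 * platform, so cast to long long int and print with %lld.
 */
static void
print_file_size(const char *path)
{
	struct stat st;

	if (stat(path, &st) == 0)
		printf("%s: %lld bytes\n", path, (long long int) st.st_size);
}
```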
-
Robert Haas authored
You can use this to view the contents of the time to XID mapping which the server maintains when old_snapshot_threshold != -1. Being able to view that information may be interesting for users, and it's definitely useful for figuring out whether the mapping is being maintained correctly. It isn't, so that will need to be fixed in a subsequent commit. Patch by me, reviewed by Thomas Munro, Dilip Kumar, Hamid Akhtar. Discussion: http://postgr.es/m/CA+TgmoY=aqf0zjTD+3dUWYkgMiNDegDLFjo+6ze=Wtpik+3XqA@mail.gmail.com
-
Robert Haas authored
This makes it possible for code outside snapmgr.c to examine the contents of this data structure. This commit does not add any code which actually does so; a subsequent commit will make that change. Patch by me, reviewed by Thomas Munro, Dilip Kumar, Hamid Akhtar. Discussion: http://postgr.es/m/CA+TgmoY=aqf0zjTD+3dUWYkgMiNDegDLFjo+6ze=Wtpik+3XqA@mail.gmail.com
-
Tom Lane authored
Zhijie Hou Discussion: https://postgr.es/m/ce2cd951fe9b448a9cda99dc1a871fb9@G08CNEXMBPEKD05.g08.fujitsu.local
-
Tom Lane authored
Commit fbeb9da2, which added the tsearch_readline APIs, left t_readline() in place as a compatibility measure. But that function has been unused and deprecated for twelve years now, so that seems like enough time to remove it. Doing so, and merging t_readline's code into tsearch_readline, aids in making several useful improvements:

* The hard-wired 4K limit on line length in tsearch data files is removed, by using a StringInfo buffer instead of a fixed-size buffer.

* We can buy back the per-line palloc/pfree added by 3ea7e955 in the common case where encoding conversion is not required.

* We no longer need a separate pg_verify_mbstr call, as that functionality was folded into encoding conversion some time ago.

(We could have done some of this stuff while keeping t_readline as a separate API, but there seems little point, since there's no reason for anyone to still be using t_readline directly.)

Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
-
- 23 Sep, 2020 4 commits
-
-
Thomas Munro authored
Harmonize behavior by moving responsibility for fsyncing directories down into slru.c. In 10 and later, only the multixact directories were missed (see commit 1b02be21), and in older branches all SLRUs were missed. Back-patch to all supported releases. Reviewed-by: Andres Freund <andres@anarazel.de> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/CA%2BhUKGLtsTUOScnNoSMZ-2ZLv%2BwGh01J6kAo_DM8mTRq1sKdSQ%40mail.gmail.com
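A hedged illustration (not the actual slru.c change) of what fsyncing an SLRU directory looks like, using the backend's existing fd.c helper:

```c
/*
 * Illustration only: fsync an SLRU directory so that newly created
 * segment files can't be lost after a crash.  The directory path is an
 * example; the second argument says the path is a directory.
 */
fsync_fname("pg_multixact/offsets", true);
```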
-
Tom Lane authored
We failed to pass down the query string to check_new_partition_bound, so that its attempts to provide error cursor positions were for naught; one must have the query string to get parser_errposition to do anything. Adjust its API to require a ParseState to be passed down.

Also, improve the logic inside check_new_partition_bound so that the cursor points at the partition bound for the specific column causing the issue, when one can be identified. That part is also for naught if we can't determine the query position of the column with the problem. Improve transformPartitionBoundValue so that it makes sure that const-simplified partition expressions will be properly labeled with positions. In passing, skip calling evaluate_expr if the value is already a Const, which is surely the most common case.

Alexandra Wang, Ashwin Agrawal, Amit Langote; reviewed by Ashutosh Bapat

Discussion: https://postgr.es/m/CACiyaSopZoqssfMzgHk6fAkp01cL6vnqBdmTw2C5_KJaFR_aMg@mail.gmail.com
Discussion: https://postgr.es/m/CAJV4CdrZ5mKuaEsRSbLf2URQ3h6iMtKD=hik8MaF5WwdmC9uZw@mail.gmail.com
-
Tom Lane authored
tsearch_readline() saves the string pointer it returns to the caller for possible use in the associated error context callback. However, the caller will usually pfree that string sometime before it next calls tsearch_readline(), so that there is a window where an ereport will try to print an already-freed string.

The built-in users of tsearch_readline() happen to all do that pfree at the bottoms of their loops, so that the window is effectively empty for them. However, this is not documented as a requirement, and contrib/dict_xsyn doesn't do it like that, so it seems likely that third-party dictionaries might have live bugs here.

The practical consequences of this seem pretty limited in any case, since production builds wouldn't clobber the freed string immediately, besides which you'd not expect syntax errors in dictionary files being used in production. Still, it's clearly a bug waiting to bite somebody.

Fix by pstrdup'ing the string to be saved for the error callback, and then pfree'ing it next time through. It's been like this for a long time, so back-patch to all supported branches.

Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
-
Thomas Munro authored
Due to flaws in commit 3347c982, using WaitLatch() without WL_LATCH_SET could cause an assertion failure or crash. Repair. While here, also add a check that the latch we're switching to belongs to this backend, when changing from one latch to another. Discussion: https://postgr.es/m/CA%2BhUKGK1607VmtrDUHQXrsooU%3Dap4g4R2yaoByWOOA3m8xevUQ%40mail.gmail.com
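A hedged example of the usage pattern the fix repairs: calling WaitLatch() without WL_LATCH_SET, e.g. to wait on a timeout only. The flags follow the existing WaitLatch() API; the wait-event name is just a plausible choice:

```c
/*
 * Illustrative only: a pure timeout wait that does not include
 * WL_LATCH_SET but still exits promptly if the postmaster dies.
 */
(void) WaitLatch(MyLatch,
				 WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
				 1000L,					/* timeout in milliseconds */
				 WAIT_EVENT_PG_SLEEP);
```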
-
- 22 Sep, 2020 5 commits
-
-
Tom Lane authored
pg_restore previously coped with overlength TOC-file lines using some complicated logic to ignore additional bufferloads. While this isn't wrong, since we don't expect that the interesting part of a line would run to more than a dozen or so bytes, it's more complex than it needs to be. Use a StringInfo instead of a fixed-size buffer so that we can process long lines as single entities and thus not need the extra logic. Daniel Gustafsson Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
-
Tom Lane authored
Use a StringInfo instead of a fixed-size buffer in parseServiceInfo(). While we've not heard complaints about the existing 255-byte limit, it certainly seems possible that complex cases could run afoul of it. Daniel Gustafsson Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
-
Tom Lane authored
Further experience says that the appending behavior offered by pg_get_line_append is useful to only a very small minority of callers. For most, the requirement to reset the buffer after each line is just an error-prone nuisance. Hence, invent another alternative call pg_get_line_buf, which takes care of that detail. Noted while reviewing a patch from Daniel Gustafsson. Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
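A sketch, assuming the APIs as described in the commit message, of the difference for callers; fp is an open FILE * and process_line() is a hypothetical helper:

```c
StringInfoData buf;

initStringInfo(&buf);

/* With pg_get_line_append(), each iteration must remember to reset: */
while (pg_get_line_append(fp, &buf))
{
	process_line(buf.data);		/* hypothetical helper */
	resetStringInfo(&buf);		/* easy to forget */
}

/* With pg_get_line_buf(), the buffer is reset for the caller: */
while (pg_get_line_buf(fp, &buf))
	process_line(buf.data);
```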
-
Tom Lane authored
pgindent messes up entries in this file if their names match typedef names. While there's reason to avoid choosing conflicting names, we have some historical exceptions, and there's no guarantee that more duplicates won't appear in future. Since this is a derived file anyway, there's little harm in just excluding it. I said yesterday that all our derived files are pgindent-clean, or else explicitly excluded from indentation, but I'd forgotten about this one. Now that project is really done, as confirmed by a test run. Discussion: https://postgr.es/m/79ed5348-be7a-b647-dd40-742207186a22@2ndquadrant.com
-
Tom Lane authored
The existing message about "a column definition list is only allowed for functions returning "record"" could be given in some cases where it was fairly confusing; in particular, a function with multiple OUT parameters *does* return record according to pg_proc. Break it down into a couple more cases to deliver a more on-point complaint. Per complaint from Bruce Momjian. Discussion: https://postgr.es/m/798909.1600562993@sss.pgh.pa.us
-
- 21 Sep, 2020 4 commits
-
-
Tom Lane authored
This completes the project of making all our derived files be pgindent-clean (or else explicitly excluded from indentation), so that no surprises result when running pgindent in a built-out development tree. Discussion: https://postgr.es/m/79ed5348-be7a-b647-dd40-742207186a22@2ndquadrant.com
-
Tom Lane authored
99% of this is docs, but also a couple of comments. No code changes. Justin Pryzby Discussion: https://postgr.es/m/20200919175804.GE30557@telsasoft.com
-
Peter Eisentraut authored
The standard order in PostgreSQL and other code is to put use strict first, but some code was uselessly inconsistent about this.
-
Heikki Linnakangas authored
Since we're bypassing the buffer manager, we need to call PageSetChecksumInplace() directly. As reported by Justin Pryzby. In passing, add RelationOpenSmgr() calls before all smgrwrite() and smgrextend() calls. Tom added one before the first smgrextend() call in commit c2bb2870, which seems to be enough, but let's play it safe and do it before each one. That's how it's done in the similar code in nbtsort.c, too. Discussion: https://www.postgresql.org/message-id/20200920224446.GF30557@telsasoft.com
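A hedged sketch of the pattern described, for code that writes pages through smgr directly rather than via the buffer manager; rel, page and blkno are assumed to be set up by the caller:

```c
/*
 * Sketch only: compute the checksum into the page itself, make sure the
 * smgr handle is open, then extend the relation with the new page.
 */
PageSetChecksumInplace(page, blkno);
RelationOpenSmgr(rel);
smgrextend(rel->rd_smgr, MAIN_FORKNUM, blkno, (char *) page, true);
```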
-
- 20 Sep, 2020 2 commits
-
-
Tom Lane authored
Can't say if this fixes *all* cases, but at least we get through the "point" regression test now, which hyrax's last run did not. Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2020-09-19%2021%3A27%3A23
-
Peter Eisentraut authored
-
- 19 Sep, 2020 3 commits
-
-
Tom Lane authored
It's no longer necessary to assign explicit precedences to GENERATED, NULL_P, PRESERVE, or STRIP_P. Actually, we don't need to assign precedence to IDENT either; that was really just there to govern the behavior of target_el's "a_expr IDENT" production, which no longer ends with that terminal. However, it seems like a good idea to continue to do so, because it provides a reference point for a precedence level that we can assign to other unreserved keywords that lack a natural precedence level. Research by Peter Eisentraut and John Naylor; comment rewrite by me. Discussion: https://postgr.es/m/38ca86db-42ab-9b48-2902-337a0d6b8311@2ndquadrant.com
-
Peter Eisentraut authored
Remove various unused parameters in pg_dump code. These have all become unused over time or were never used. Discussion: https://www.postgresql.org/message-id/flat/511bb100-f829-ba21-2f10-9f952ec06ead%402ndquadrant.com
-
Thomas Munro authored
Commit be0a6666 left behind a comment about the order of some tests that didn't make sense without the expensive division, and in fact we might as well change the order to one that fails more cheaply most of the time as a micro-optimization. Also, remove the "+ 1" applied to max_bucket, to drop an instruction and match the original behavior. Per review from Tom Lane. Discussion: https://postgr.es/m/VI1PR0701MB696044FC35013A96FECC7AC8F62D0%40VI1PR0701MB6960.eurprd07.prod.outlook.com
-
- 18 Sep, 2020 6 commits
-
-
Thomas Munro authored
Since ancient times we have had support for a fill factor (maximum load factor) to be set for a dynahash hash table, but:

1. It was an integer, whereas for in-memory hash tables interesting load factor targets are probably somewhere near the 0.75-1.0 range.

2. It was implemented in a way that performed an expensive division operation that regularly showed up in profiles.

3. We are not aware of anyone ever having used a non-default value.

Therefore, remove support, effectively fixing it at 1.

Author: Jakub Wartak <Jakub.Wartak@tomtom.com>
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/VI1PR0701MB696044FC35013A96FECC7AC8F62D0%40VI1PR0701MB6960.eurprd07.prod.outlook.com
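A purely conceptual sketch (not dynahash's actual code) of why fixing the fill factor at 1 removes the division from the insertion hot path:

```c
/*
 * Conceptual only: with a configurable fill factor, deciding whether to
 * add a bucket needs a division on every insertion ...
 */
static bool
needs_expansion_old(long nentries, long nbuckets, int ffactor)
{
	return nentries / nbuckets >= ffactor;
}

/* ... whereas with the factor fixed at 1, a comparison suffices. */
static bool
needs_expansion_new(long nentries, long nbuckets)
{
	return nentries > nbuckets;
}
```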
-
Tom Lane authored
Up to now, if you tried to omit "AS" before a column label in a SELECT list, it would only work if the column label was an IDENT, that is not any known keyword. This is rather unfriendly considering that we have so many keywords and are constantly growing more. In the wake of commit 1ed6b895 it's possible to improve matters quite a bit.

We'd originally tried to make this work by having some of the existing keyword categories be allowed without AS, but that didn't work too well, because each category contains a few special cases that don't work without AS. Instead, invent an entirely orthogonal keyword property "can be bare column label", and mark all keywords that way for which we don't get shift/reduce errors by doing so.

It turns out that of our 450 current keywords, all but 39 can be made bare column labels, improving the situation by over 90%. This number might move around a little depending on future grammar work, but it's a pretty nice improvement.

Mark Dilger, based on work by myself and Robert Haas; review by John Naylor

Discussion: https://postgr.es/m/38ca86db-42ab-9b48-2902-337a0d6b8311@2ndquadrant.com
-
Robert Haas authored
According to buildfarm member sungazer, the behavior of VACUUM can be unstable in these tests even if we prevent autovacuum from running on the tables in question, apparently because even a manual vacuum can behave differently depending on whether anything else is running that holds back the global xmin. So use a temporary table instead, which as of commit a7212be8 enables vacuuming using a more aggressive cutoff. This approach can't be used for the regression test that involves a materialized view, but that test doesn't run vacuum, so it shouldn't be prone to this particular failure mode. Analysis by Tom Lane. Patch by Ashutosh Sharma and me. Discussion: http://postgr.es/m/665524.1599948007@sss.pgh.pa.us
-
Amit Kapila authored
Author: Amit Langote Reviewed-by: Amit Kapila Discussion: https://postgr.es/m/CA+HiwqE20oZoix13JyCeALpTf_SmjarZWtBFe5sND6zz+iupAw@mail.gmail.com
-
Amit Kapila authored
After commits 85f6b49c and 3ba59ccc, parallel group members no longer conflict over relation extension and page locks, so parallel inserts can be allowed, which was not possible earlier. Those commits forgot to update comments in a few places. Author: Amit Kapila Reviewed-by: Robert Haas and Dilip Kumar Backpatch-through: 13 Discussion: https://postgr.es/m/CAFiTN-tMrQh5FFMPx5aWJ+1gi1H6JxktEhq5mDwCHgnEO5oBkA@mail.gmail.com
-
Tom Lane authored
It's not quite clear why commit 45b98057 has resulted in some instability here, though interference from concurrent autovacuum runs seems like a reasonable guess. What is clear is that the output ordering of the test queries is underdetermined for no very good reason. Extend the ORDER BY keys in hopes of fixing the buildfarm. Discussion: https://postgr.es/m/147499.1600351924@sss.pgh.pa.us
-