- 17 May, 2017 1 commit
-
-
Tom Lane authored
Reformat various places in which pgindent will make a mess, and fix a few small violations of coding style that I happened to notice while perusing the diffs from a pgindent dry run. There is one actual bug fix here: the need-to-enlarge-the-buffer code path in icu_convert_case was obviously broken. Perhaps it's unreachable in our usage? Or maybe this is just sadly undertested.
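For context, ICU's enlarge-the-buffer convention (preflighting) looks roughly like this; a minimal sketch, not the actual icu_convert_case code, with the helper name and plain malloc being illustrative:

    #include <stdlib.h>
    #include <unicode/ustring.h>

    /* Sketch: upper-case a UTF-16 string, growing the buffer on overflow. */
    static UChar *
    to_upper_with_retry(const UChar *src, int32_t srclen, int32_t *destlen)
    {
        UErrorCode status = U_ZERO_ERROR;
        int32_t    capacity = srclen + 1;
        UChar     *buf = malloc(capacity * sizeof(UChar));
        int32_t    needed;

        needed = u_strToUpper(buf, capacity, src, srclen, NULL, &status);
        if (status == U_BUFFER_OVERFLOW_ERROR)
        {
            /* ICU reported the required length; retry with a larger buffer. */
            free(buf);
            capacity = needed + 1;
            buf = malloc(capacity * sizeof(UChar));
            status = U_ZERO_ERROR;      /* must reset before the retry */
            needed = u_strToUpper(buf, capacity, src, srclen, NULL, &status);
        }
        if (U_FAILURE(status))
        {
            free(buf);
            return NULL;
        }
        *destlen = needed;
        return buf;
    }

Forgetting to reset the UErrorCode, or reusing the old capacity on the retry, is the classic way this path breaks.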
-
- 16 May, 2017 8 commits
-
-
Tom Lane authored
Commit 827d6f97 contained the same misunderstanding of hash_create's API as commit 090010f2. As in 5d00b764, remove the unnecessary layer of memory context. (This bug is less significant than the other one, since the extra context would be under a relatively short-lived context, but it's still a bug.)
-
Kevin Grittner authored
This reverts commit 4a03f935.
-
Kevin Grittner authored
-
Tom Lane authored
Since commit 4e37b3e1, buildfarm member frogmouth has been failing occasionally with symptoms indicating that some expected stats data is getting dropped. The reason that commit changed the behavior is probably that more data is getting shoved at the collector in a short span of time.

In current sources, the stats test's first session sends about 9KB of data while exiting, which is probably the same as what was sent just before wait_for_stats() in the previous test design. But now, the test's second session is starting up concurrently, and it sends another 2KB (presumably reflecting its initial catalog accesses). Since frogmouth is running on Windows XP, which reputedly has a default socket receive buffer size of only 8KB, it is not very surprising if this has put us over the threshold where the receive buffer can overflow and drop messages.

The same mechanism could very easily explain the intermittent stats test failures we've been seeing for years, since background processes such as the bgwriter will sometimes send data concurrently with all this, and could thus cause occasional buffer overflows.

Hence, insert some code into pgstat_init() to increase the stats socket's receive buffer size to 100KB if it's less than that. (On failure, emit a LOG message, but keep going.) Modern systems seem to have default sizes in the range of 100KB-250KB, but older platforms don't. I couldn't find any platforms that wouldn't accept 100KB, so in theory this won't cause any portability problems.

If this is successful at reducing the buildfarm failure rate in HEAD, we should back-patch it, because it's certain that similar buffer overflows happen in the field on platforms with small buffer sizes. Going forward, there might be an argument for trying to increase the buffer size even more, but let's take a baby step first.

Discussion: https://postgr.es/m/22173.1494788088@sss.pgh.pa.us
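The mechanism is just a getsockopt()/setsockopt() pair on SO_RCVBUF; a minimal sketch of the idea (not the actual pgstat_init() code; the 100KB constant matches the commit's description, but the names here are illustrative):

    #include <stdio.h>
    #include <sys/socket.h>

    #define STATS_MIN_RCVBUF (100 * 1024)   /* 100KB target, per the commit */

    static void
    enlarge_stats_rcvbuf(int sock)
    {
        int        old_rcvbuf;
        int        new_rcvbuf = STATS_MIN_RCVBUF;
        socklen_t  optlen = sizeof(old_rcvbuf);

        if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                       (void *) &old_rcvbuf, &optlen) < 0)
        {
            perror("getsockopt(SO_RCVBUF)");    /* real code would LOG */
            return;
        }
        if (old_rcvbuf >= new_rcvbuf)
            return;                 /* already big enough, leave it alone */
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                       (void *) &new_rcvbuf, sizeof(new_rcvbuf)) < 0)
            perror("setsockopt(SO_RCVBUF)");    /* on failure, log but keep going */
    }

Note that the kernel is free to clamp the requested size, which is why the commit describes the 100KB figure as a floor rather than a guarantee.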
-
Robert Haas authored
Thomas Munro, reviewed by Amit Langote

Discussion: http://postgr.es/m/CAEepm=15Jss-yhFApuKzxcoCuFnb8TR8iQiWMjG=CLYPx48QLw@mail.gmail.com
-
Robert Haas authored
Paul Jungwirth, reviewed by Ashutosh Bapat.

Discussion: http://postgr.es/m/e24a6a6d-5670-739b-00f3-41a226a80f25@illuminatedcomputing.com
-
Tom Lane authored
Modifying the permissions of a persistent file isn't really much nicer than modifying its contents, even if git doesn't currently notice it. Adjust the test script to make a copy and set the permissions of that instead.

Michael Paquier, per a gripe from me. Back-patch to 9.5 where these tests were introduced.

Discussion: https://postgr.es/m/14836.1494885946@sss.pgh.pa.us
-
Tom Lane authored
Masahiko Sawada
-
- 15 May, 2017 17 commits
-
-
Tom Lane authored
-
Tom Lane authored
Building PDFs with the new toolchain creates *.fo temporary files.
-
Robert Haas authored
Masahiko Sawada

Discussion: http://postgr.es/m/CAD21AoAzaR_XV7j7Wk9-QYXaFoT8H4egKwXvFY63wc8Lw2C9cg@mail.gmail.com
-
Tom Lane authored
-
Peter Eisentraut authored
-
Peter Eisentraut authored
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 398beeef4921df0956f917becd7b5669d2a8a5c4
-
Peter Eisentraut authored
sql.sgml has not been part of the documentation since forever, so it's pointless to keep it around.
-
Tom Lane authored
Original coding was careless about where to insert commas.

Masahiko Sawada

Discussion: https://postgr.es/m/3427593a-61aa-b17e-64ef-383b7742d6d9@enterprisedb.com
-
Tom Lane authored
The CommentStmt made by RebuildConstraintComment() has to pstrdup the relation name, else it will contain a dangling pointer after that relcache entry is flushed. (I'm less sure that pstrdup'ing conname is necessary, but let's be safe.) Failure to do this leads to weird errors or crashes, as reported by Marko Elezovic.

Bug introduced by commit e42375fc, so back-patch to 9.5 as that was.

Fix by David Rowley, regression test by Michael Paquier

Discussion: https://postgr.es/m/DB6PR03MB30775D58E732D4EB0C13725B9AE00@DB6PR03MB3077.eurprd03.prod.outlook.com
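The shape of the fix is the usual copy-before-caching pattern; a rough sketch, not the exact RebuildConstraintComment() code (the object list here is simplified):

    #include "postgres.h"
    #include "nodes/parsenodes.h"
    #include "nodes/pg_list.h"
    #include "nodes/value.h"
    #include "utils/rel.h"

    /* Sketch: build a CommentStmt without retaining relcache pointers. */
    static CommentStmt *
    make_constraint_comment(Relation rel, const char *conname, char *comment)
    {
        CommentStmt *cmd = makeNode(CommentStmt);

        cmd->objtype = OBJECT_TABCONSTRAINT;

        /*
         * RelationGetRelationName() points into the relcache entry; once
         * that entry is flushed, the pointer dangles.  pstrdup() the strings
         * so the statement owns its own copies.
         */
        cmd->object = (Node *) list_make2(makeString(pstrdup(RelationGetRelationName(rel))),
                                          makeString(pstrdup(conname)));
        cmd->comment = comment;
        return cmd;
    }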
-
Peter Eisentraut authored
In 1753b1b0, the pg_sequence system catalog was introduced. This made sequence metadata changes transactional, while the actual sequence values are still behaving nontransactionally. This requires some refinement in how ALTER SEQUENCE, which operates on both, locks the sequence and the catalog.

The main problems were:

- Concurrent ALTER SEQUENCE causes "tuple concurrently updated" error, caused by updates to the pg_sequence catalog.
- Sequence WAL writes and catalog updates are not protected by the same lock, which could lead to inconsistent recovery order.
- nextval() disregarding uncommitted ALTER SEQUENCE changes.

To fix, nextval() and friends now lock the sequence using RowExclusiveLock instead of AccessShareLock. ALTER SEQUENCE locks the sequence using ShareRowExclusiveLock. This means that nextval() and ALTER SEQUENCE block each other, and ALTER SEQUENCE on the same sequence blocks itself. (This was already the case previously for the OWNER TO, RENAME, and SET SCHEMA variants.) Also, rearrange some code so that the entire AlterSequence is protected by the lock on the sequence.

As an exception, use reduced locking for ALTER SEQUENCE ... RESTART. Since that is basically a setval(), it does not require the full locking of other ALTER SEQUENCE actions. So check whether we are only running a RESTART and run with less locking if so.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reported-by: Jason Petersen <jason@citusdata.com>
Reported-by: Andres Freund <andres@anarazel.de>
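In terms of lock calls the new scheme amounts to this (a sketch of the lock levels only, not the actual sequence.c code; function names are illustrative):

    #include "postgres.h"
    #include "access/heapam.h"
    #include "storage/lockdefs.h"

    /* nextval(), currval(), setval(): modify sequence data */
    static Relation
    open_sequence_for_update(Oid seq_relid)
    {
        /* was AccessShareLock; RowExclusiveLock makes ALTER SEQUENCE wait */
        return heap_open(seq_relid, RowExclusiveLock);
    }

    /* ALTER SEQUENCE (other than a bare RESTART) */
    static Relation
    open_sequence_for_alter(Oid seq_relid)
    {
        /* conflicts with RowExclusiveLock above, and with itself */
        return heap_open(seq_relid, ShareRowExclusiveLock);
    }

ShareRowExclusiveLock is the weakest level that is self-conflicting and conflicts with RowExclusiveLock, which is exactly the blocking behavior the commit describes.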
-
Magnus Hagander authored
Michael Paquier
-
Tom Lane authored
pg_stat_get_snapshot_timestamp() returns the timestamp seen in the "global" stats file. Because pgstat_write_statsfiles() writes per-DB stats files before the global file (or at least before renaming it into place), there is a window where the test backend can see all the stats updates that wait_for_stats() was checking for (all of which come from the per-DB file) but also see the same global stats file it had seen at the start of the test script. This results in a failure in only the "snapshot_newer" query, as reported by a couple of buildfarm members recently.

I suspect that this ought to be back-patched. Commit 4e37b3e1 has evidently increased the probability of this window getting hit, but it's not apparent why it could not have been hit before. I'll refrain for the moment though.
-
Bruce Momjian authored
-
Tom Lane authored
Code review for commit 090010f2.

Fix cases where an elog(ERROR) partway through a function would leave the persistent data structures in a corrupt state. pgstat_report_stat got this wrong by invalidating PgStat_TableEntry structs before removing hashtable entries pointing to them, and get_tabstat_entry got it wrong by ignoring the possibility of palloc failure after it had already created a hashtable entry.

Also, avoid leaking a memory context per transaction, which the previous code did through misunderstanding hash_create's API. We do not need to create a context to hold the hash table; hash_create will do that. (The leak wasn't that large, amounting to only a memory context header per iteration, but it's still surprising that nobody noticed it yet.)
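The hash_create() point is worth spelling out; a minimal sketch of the corrected usage (the table name, sizes, and entry layout here are illustrative, not the actual pgstat.c code):

    #include "postgres.h"
    #include "utils/hsearch.h"

    typedef struct TabStatHashEntry
    {
        Oid   t_id;         /* hash key: table OID */
        void *tsa_entry;    /* pointer into the per-table stats array */
    } TabStatHashEntry;

    static HTAB *
    create_tabstat_hash(void)
    {
        HASHCTL ctl;

        MemSet(&ctl, 0, sizeof(ctl));
        ctl.keysize = sizeof(Oid);
        ctl.entrysize = sizeof(TabStatHashEntry);

        /*
         * Without the HASH_CONTEXT flag, hash_create() builds its own
         * private memory context for the table, so wrapping it in a
         * manually created context just leaks a context header per table.
         */
        return hash_create("tabstat entry lookup hash", 128, &ctl,
                           HASH_ELEM | HASH_BLOBS);
    }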
-
Bruce Momjian authored
-
Tom Lane authored
Commit 60690a6f attempted to fix the wait_for_stats() function in this test so that it would wait properly if the tenk2 scans were done in parallel workers instead of the main session (typically as a consequence of force_parallel_mode being turned on). However, we made it test for whether the main session's actions had been reported by looking for inserts on 'trunc_stats_test'. This is the Wrong Thing, because those aren't the last updates we expect the main session to do.

As shown by recent failures on buildfarm member frogmouth, it's entirely likely that the trunc_stats_test updates will be reported in a separate message from later updates, which means there can be a window in which wait_for_stats() will exit but not all the updates we are expecting to see will have arrived. We should test for the last updates we're expecting, namely those on 'trunc_stats_test4'.

Unfortunately, I doubt that this explains frogmouth's failures, because there's no reason to believe that it's running the tenk2 queries in parallel. Still, the test is wrong on its own terms, so fix and back-patch to 9.6 where parallel query came in.
-
Robert Haas authored
Per a report from Tom Lane, newer versions of gcc apparently think that partexprs_item_saved can be used uninitialized. Try to convince them otherwise.
-
- 14 May, 2017 7 commits
-
-
Tom Lane authored
Use the "statistics object" terminology uniformly here too. Assorted copy-editing. Put new catalogs.sgml sections into alphabetical order.
-
Tom Lane authored
Remove default cases from assorted switches over ObjectClass and some related enum types, so that we'll get compiler warnings when someone adds a new enum value without accounting for it in all these places.

In passing, re-order some switch cases as needed to match the declaration of enum ObjectClass. OK, that's just neatnik-ism, but I dislike code that looks like it was assembled with the help of a dartboard.

Discussion: https://postgr.es/m/20170512221010.nglatgt5azzdxjlj@alvherre.pgsql
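The payoff comes from -Wswitch: with no default:, a switch over the enum draws a warning as soon as a member is added without a matching case. Schematically (the enum and cases here are illustrative, not the real ObjectClass):

    #include <stdio.h>

    typedef enum ObjectClassExample
    {
        OCLASS_EXAMPLE_TABLE,
        OCLASS_EXAMPLE_INDEX,
        OCLASS_EXAMPLE_STATISTIC_EXT    /* newly added member */
    } ObjectClassExample;

    static const char *
    describe(ObjectClassExample oclass)
    {
        /*
         * No default: gcc/clang's -Wswitch warns here until a case is
         * added for OCLASS_EXAMPLE_STATISTIC_EXT.  A default: would
         * silently swallow the new member instead.
         */
        switch (oclass)
        {
            case OCLASS_EXAMPLE_TABLE:
                return "table";
            case OCLASS_EXAMPLE_INDEX:
                return "index";
            case OCLASS_EXAMPLE_STATISTIC_EXT:
                return "statistics object";
        }
        return "???";           /* unreachable; placates the compiler */
    }

    int
    main(void)
    {
        printf("%s\n", describe(OCLASS_EXAMPLE_STATISTIC_EXT));
        return 0;
    }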
-
Tom Lane authored
ALTER COLUMN TYPE on a column used by a statistics object fails since commit 928c4de3, because the relevant switch in ATExecAlterColumnType is unprepared for columns to have dependencies from OCLASS_STATISTIC_EXT objects.

Although the existing types of extended statistics don't actually need us to do any work for a column type change, it seems completely indefensible that that assumption is hidden behind the failure of an unrelated module to contain any code for the case. Hence, create and call an API function in statscmds.c where the assumption can be explained, and where we could add code to deal with the problem when it inevitably becomes real.

Also, the reason this wasn't handled before, neither for extended stats nor for the last half-dozen new OCLASS kinds :-(, is that the default: in that switch suppresses compiler warnings, allowing people to miss the need to consider it when adding an OCLASS. We don't really need a default because surely getObjectClass should only return valid values of the enum; so remove it, and add the missed OCLASS entries where they should be.

Discussion: https://postgr.es/m/20170512221010.nglatgt5azzdxjlj@alvherre.pgsql
-
Peter Eisentraut authored
-
Tom Lane authored
skewColType/skewColTypmod are no longer used in the wake of commit 9aab83fc, and seem unlikely to be wanted in future, so let's drop 'em.

Discussion: https://postgr.es/m/16364.1494520862@sss.pgh.pa.us
-
Tom Lane authored
Consistently refer to such an entry as a "statistics object", not just "statistics" or "extended statistics". Previously we had a mishmash of terms, accompanied by utter confusion as to whether the term was singular or plural. That's not only grating (at least to the ear of a native English speaker) but could be outright misleading, eg in error messages that seemed to be referring to multiple objects where only one could be meant.

This commit fixes the code and a lot of comments (though I may have missed a few). I also renamed two new SQL functions,

    pg_get_statisticsextdef -> pg_get_statisticsobjdef
    pg_statistic_ext_is_visible -> pg_statistics_obj_is_visible

to conform better with this terminology.

I have not touched the SGML docs other than fixing those function names; the docs certainly need work but it seems like a separable task.

Discussion: https://postgr.es/m/22676.1494557205@sss.pgh.pa.us
-
Andrew Dunstan authored
Ultra-modern versions of the Perl Data::Dumper module have apparently changed how they indent output. Instead of trying to keep up, we choose to tell it to suppress all indentation in the hstore_plperl regression tests.

Backpatch to 9.5 where this feature was introduced.
-
- 13 May, 2017 7 commits
-
-
Andres Freund authored
This should probably have been part of 60f826c5.

Reported-By: Andrew Gierth
-
Andres Freund authored
Before 955a684e logical decoding snapshot maintenance needed to cope with transactions it might not have seen in their entirety. For such transactions we had to assume they modified the catalog (could have happened before we were watching), and thus a new snapshot had to be built and distributed to concurrently running transactions.

That's problematic because building a new snapshot isn't that cheap, especially as the array of committed transactions needs to be sorted. When creating a slot on a server with a lot of transactions, this could make logical slot creation infeasibly expensive.

After 955a684e there's no need to deal with transactions that aren't guaranteed to be fully observable. That allows us to avoid building snapshots for transactions that haven't modified the catalog, even before reaching consistency.

While this isn't necessarily a bugfix, slot creation being impossible in some production workloads is severe enough to warrant backpatching.

Author: Andres Freund, based on a quite different patch from Petr Jelinek
Analyzed-By: Petr Jelinek
Reviewed-By: Petr Jelinek
Discussion: https://postgr.es/m/f37e975c-908f-858e-707f-058d3b1eb214@2ndquadrant.com
Backpatch: 9.4-, where logical decoding has been introduced
-
Andres Freund authored
The snapshot assembly during the creation of logical slots relied on waiting for transactions in xl_running_xacts to end, by checking for their commit/abort records. Unfortunately, despite locking, it is possible to see an xl_running_xacts record listing transactions as running that have already WAL-logged a commit/abort record, as the locking only prevents the ProcArray from being adjusted, and the commit record has to be logged first. That led to either delayed or hanging snapshot creation, because snapbuild.c would wait "forever" to see commit/abort records for some transactions. That hang resolved only if an xl_running_xacts record without any running transactions happened to be logged, far from certain on a busy server.

It's impractical to prevent that via more heavyweight locking; the likelihood of deadlocks and significantly increased contention would be too big.

Instead change the initial snapshot creation to be solely based on tracking the oldest running transaction via xl_running_xacts->oldestRunningXid - that actually ends up significantly simplifying the code. That has two disadvantages:

1) Because we cannot fully "trust" the contents of xl_running_xacts, we cannot use it to build the initial snapshot. Instead we have to wait twice for all running transactions to finish.

2) Previously a slot, unless the race occurred, could be created as soon as all transactions perceived as running (based on commit/abort records) had finished; now we have to wait for the next xl_running_xacts record.

To address that, trigger logging of a new xl_running_xacts record from within snapbuild.c exactly when necessary.

Unfortunately snapbuild.c's SnapBuild is stored on disk, one of the stupider ideas of a certain Mr Freund, so we can't change it in a minor release. As this is going to be backpatched, we have to hack around a bit to keep on-disk compatibility. A later commit will rejigger that on master.

Author: Andres Freund, based on a quite different patch from Petr Jelinek
Analyzed-By: Petr Jelinek
Reviewed-By: Petr Jelinek
Discussion: https://postgr.es/m/f37e975c-908f-858e-707f-058d3b1eb214@2ndquadrant.com
Backpatch: 9.4-, where logical decoding has been introduced
-
Tom Lane authored
The mess cleaned up in commit da075960 is clear evidence that it's a bug hazard to expect the caller of get_attstatsslot()/free_attstatsslot() to provide the correct type OID for the array elements in the slot. Moreover, we weren't even getting any performance benefit from that, since get_attstatsslot() was extracting the real type OID from the array anyway. So we ought to get rid of that requirement; indeed, it would make more sense for get_attstatsslot() to pass back the type OID it found, in case the caller isn't sure what to expect, which is likely in binary-compatible-operator cases.

Another problem with the current implementation is that if the stats array element type is pass-by-reference, we incur a palloc/memcpy/pfree cycle for each element. That seemed acceptable when the code was written because we were targeting O(10) array sizes --- but these days, stats arrays are almost always bigger than that, sometimes much bigger. We can save a significant number of cycles by doing one palloc/memcpy/pfree of the whole array. Indeed, in the now-probably-common case where the array is toasted, that happens anyway so this method is basically free. (Note: although the catcache code will inline any out-of-line toasted values, it doesn't decompress them. At the other end of the size range, it doesn't expand short-header datums either. In either case, DatumGetArrayTypeP would have to make a copy. We do end up using an extra array copy step if the element type is pass-by-value and the array length is neither small enough for a short header nor large enough to have suffered compression. But that seems like a very acceptable price for winning in pass-by-ref cases.)

Hence, redesign to take these insights into account. While at it, convert to an API in which we fill a struct rather than passing a bunch of pointers to individual output arguments. That will make it less painful if we ever want further expansion of what get_attstatsslot can pass back.

It's certainly arguable that this is new development and not something to push post-feature-freeze. However, I view it as primarily bug-proofing and therefore something that's better to have sooner not later. Since we aren't quite at beta phase yet, let's put it in.

Discussion: https://postgr.es/m/16364.1494520862@sss.pgh.pa.us
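Under the new API, callers fill a struct and free it in one go; a sketch of the caller pattern (flag and field names follow the redesigned API as described, but details may differ):

    #include "postgres.h"
    #include "catalog/pg_statistic.h"
    #include "utils/lsyscache.h"

    /* Sketch: examine a column's most-common-values slot. */
    static void
    scan_mcv_slot(HeapTuple statstuple)
    {
        AttStatsSlot sslot;
        int          i;

        if (get_attstatsslot(&sslot, statstuple,
                             STATISTIC_KIND_MCV, InvalidOid,
                             ATTSTATSSLOT_VALUES | ATTSTATSSLOT_NUMBERS))
        {
            /*
             * sslot.valuetype reports the element type actually found,
             * so the caller no longer has to guess it.
             */
            for (i = 0; i < sslot.nvalues; i++)
            {
                Datum  value = sslot.values[i];
                float4 freq = sslot.numbers[i];

                (void) value;       /* selectivity math would go here */
                (void) freq;
            }
            free_attstatsslot(&sslot);  /* one call releases everything */
        }
    }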
-
Robert Haas authored
The fact that we didn't have this in the first place is likely why the problem fixed by f8bffe9e escaped detection.

Patch by Amit Langote, reviewed and slightly adjusted by me.

Discussion: http://postgr.es/m/CA+TgmoYWnV2GMnYLG-Czsix-E1WGAbo4D+0tx7t9NdfYBDMFsA@mail.gmail.com
-
Robert Haas authored
The old logic was just plain wrong.

Report by Olaf Gawenda. Patch by Amit Langote, reviewed by Beena Emerson and by me. Minor adjustments by me also.
-
Tom Lane authored
On faster machines, the overall runtime for running the core regression tests is under twenty seconds these days, of which the hard-wired delays in the stats test are a significant fraction. But on closer inspection, it seems like we shouldn't need those.

The initial 2-second delay is there only to reduce the risk of the test's stats messages not getting sent due to contention. But analysis of the last ten years' worth of buildfarm runs shows no evidence that such failures actually occur. (We do see failures that look like stats messages not getting sent, particularly on Windows; but there is little reason to believe that the initial delay reduces their frequency.)

The later 1-second delay is there to ensure that our session's stats will have gotten sent. But we could also do that by starting a fresh session, which takes well under 1 second even on very slow machines.

Hence, let's remove both delays and see what happens. The first delay was the only test of pg_sleep_for() in the regression tests, but we can move that responsibility into wait_for_stats().

Discussion: https://postgr.es/m/17795.1493869423@sss.pgh.pa.us
-