- 26 Nov, 2018 4 commits
-
Michael Paquier authored
Author: Nawaz Ahmed
Discussion: https://postgr.es/m/154319327168.1315.1846953598601966513@wrigleys.postgresql.org
-
Michael Paquier authored
A set of failures on buildfarm machines is proving that this is not quite ready yet because of another set of issues:
- MSVC scripts assume that REGRESS_OPTS can only use top_builddir. Some test suites actually finish by using top_srcdir, like pg_stat_statements, which causes the regression tests to never run.
- Trying to enforce top_builddir does not work either when using VPATH, as this is not recognized properly.
- TAP tests of bloom are unstable on various platforms, causing various failures.
-
Michael Paquier authored
The set of scripts in charge of running the regression tests for MSVC currently runs under the assumption that only $(top_builddir) can be used in option values defined in REGRESS_OPTS, and those options need to have a specific format as well to be parsed correctly, so fix the Makefile values so that they are set correctly.

Per complaints from buildfarm members dory and whelk, with some extra testing done on my side with MSVC to check this patch.
-
Michael Paquier authored
The recent commit 03faa4a8 has enabled those tests; however, several buildfarm members are complaining about their stability on Windows and macOS. This will keep the buildfarm green while the root problem is investigated.

Discussion: https://postgr.es/m/20181126003351.GE1776@paquier.xyz
-
- 25 Nov, 2018 3 commits
-
Michael Paquier authored
The following options are added for extensions:
- TAP_TESTS, to allow an extension to run TAP tests, which are the ones present in t/*.pl. A subset of tests can always be run with the existing PROVE_TESTS for developers.
- ISOLATION, to define a list of isolation tests.
- ISOLATION_OPTS, to pass custom options to isolation_tester.

A couple of custom Makefile targets have accumulated across the tree to cover the lack of facility in PGXS for a couple of releases when using those test suites; these are all now replaced with the new flags, without reducing the test coverage. This also fixes an issue with contrib/bloom/, which had a custom target to trigger its own TAP tests that was not part of the main check runs.

Author: Michael Paquier
Reviewed-by: Adam Berlin, Álvaro Herrera, Tom Lane, Nikolay Shaplov, Arthur Zakirov
Discussion: https://postgr.es/m/20180906014849.GG2726@paquier.xyz
-
Peter Eisentraut authored
recovery.conf settings are now set in postgresql.conf (or other GUC sources). Currently, all the affected settings are PGC_POSTMASTER; this could be refined in the future case by case.

Recovery is now initiated by a file recovery.signal. Standby mode is initiated by a file standby.signal. The standby_mode setting is gone. If a recovery.conf file is found, an error is issued.

The trigger_file setting has been renamed to promote_trigger_file as part of the move.

The documentation chapter "Recovery Configuration" has been integrated into "Server Configuration".

pg_basebackup -R now appends settings to postgresql.auto.conf and creates a standby.signal file.

Author: Fujii Masao <masao.fujii@gmail.com>
Author: Simon Riggs <simon@2ndquadrant.com>
Author: Abhijit Menon-Sen <ams@2ndquadrant.com>
Author: Sergei Kornilov <sk@zsrv.org>
Discussion: https://www.postgresql.org/message-id/flat/607741529606767@web3g.yandex.ru/
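A minimal sketch of the resulting workflow; the connection string and trigger path are hypothetical, while the setting names come from this commit:

    -- Former recovery.conf settings are now ordinary GUCs, so they can be
    -- written to postgresql.auto.conf via ALTER SYSTEM (they remain
    -- PGC_POSTMASTER, so a restart is still required):
    ALTER SYSTEM SET primary_conninfo = 'host=primary.example.com port=5432';
    ALTER SYSTEM SET promote_trigger_file = '/tmp/promote';
    -- Standby mode is then requested by creating an empty standby.signal
    -- file in the data directory before restarting the server.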
-
Thomas Munro authored
Commit cfdf4dc4 added an assertion that every WaitLatch() or similar handles postmaster death. One place did not, but was missed in review and testing due to the need for an SSL connection. Fix, by asking for WL_EXIT_ON_PM_DEATH.

Reported-by: Christoph Berg
Discussion: https://postgr.es/m/20181124143845.GA15039%40msg.df7cb.de
-
- 24 Nov, 2018 4 commits
-
Andrew Gierth authored
Hstore data generated on pg 8.4 and pg_upgraded to current versions remains in its original on-disk format unless modified. The same goes for values generated by the addon hstore-new module on pre-9.0 versions. (The hstoreUpgrade function converts old values on the fly when read in, but the on-disk value is not modified by this.)

Since old-format empty hstores (and hstore-new hstores) have representations compatible with the new format, hstoreUpgrade thought it could get away without modifying such values; but this breaks hstore_hash (and the new hstore_hash_extended), which assumes bit-perfect matching between semantically identical hstore values. Only one bit actually differs (the "new version" flag in the count field), but that of course is enough to break the hash.

Fix by making hstoreUpgrade unconditionally convert all old values to new format.

Backpatch all the way, even though this changes a hash value in some cases, because in those cases the hash value is already failing - for example, a hash join between old- and new-format empty hstores will be failing to match, or a hash index on an hstore column containing an old-format empty value will be failing to find the value since it will be searching for a hash derived from a new-format datum. (There are no known field reports of this happening, probably because hashing of hstores has only been useful in limited circumstances and there probably isn't much upgraded data being used this way.)

Per concerns arising from discussion of commit eb6f2914. Original bug is my fault.

Discussion: https://postgr.es/m/60b1fd3b-7332-40f0-7e7f-f2f04f777747%402ndquadrant.com
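A hedged illustration of the failure mode; the table names are hypothetical, and it assumes the hstore extension is installed:

    CREATE EXTENSION IF NOT EXISTS hstore;
    -- Imagine upgraded_data came through pg_upgrade from 8.4, so its empty
    -- hstores are still stored in the old on-disk format:
    CREATE TABLE upgraded_data (h hstore);
    CREATE TABLE fresh_data (h hstore);
    INSERT INTO fresh_data VALUES (''::hstore);
    -- Before this fix, a hash join here could fail to match semantically
    -- equal empty hstores, because old- and new-format values hashed
    -- differently (the "new version" flag bit differs):
    EXPLAIN (COSTS OFF)
    SELECT * FROM upgraded_data u JOIN fresh_data f ON u.h = f.h;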
-
Andrew Gierth authored
1. Integer overflow in internal_size could result in memory corruption in decompression since a zero-length array would be allocated and then written to. This leads to crashes or corruption when traversing an index which has been populated with sufficiently sparse values. Fix by using int64 for computations and checking for overflow.

2. Integer overflow in g_int_compress could cause pessimal merge choices, resulting in unnecessarily large ranges (which would in turn trigger issue 1 above). Fix by using int64 again.

3. Even without overflow, array sizes could become large enough to cause unexplained memory allocation errors. Fix by capping the sizes to a safe limit and reporting actual errors pointing at gist__intbig_ops as needed.

4. Large inputs to the compression function always consist of large runs of consecutive integers, and the compression loop was processing these one at a time in an O(N^2) manner with a lot of overhead. The expected runtime of this function could easily exceed 6 months for a single call as a result. Fix by performing a linear-time first pass, which reduces the worst case to something on the order of seconds.

Backpatch all the way, since this has been wrong forever.

Per bug #15518 from IRC user "dymk"; analysis and patch by me.

Discussion: https://postgr.es/m/15518-799e426c3b4f8358@postgresql.org
-
Tom Lane authored
Early returns from the buildfarm say that most critters are good with commit cbdb8b4c, but gaur gives unexpected results with the test case involving a float8 that's one-ULP-less-than-2^63. It appears that that platform's version of rint() rounds that value up to 2^63 instead of leaving it be. This is possibly a bug, and it's also possible that no other platform anybody is using anywhere behaves likewise. Still, the point of the test is not to insist that everybody's rint() behaves exactly the same. Let's use two-ULPs-less-than-2^63 instead, which I've tested to act the same on gaur as on more modern hardware.

(This is, more or less, exactly the portability issue I'd feared might arise...)

Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
-
Tom Lane authored
ftoi4 and its sibling coercion functions did their overflow checks in a way that looked superficially plausible, but actually depended on an assumption that the MIN and MAX comparison constants can be represented exactly in the float4 or float8 domain. That fails in ftoi4, ftoi8, and dtoi8, resulting in a possibility that values near the MAX limit will be wrongly converted (to negative values) when they need to be rejected.

Also, because we compared before rounding off the fractional part, the other three functions threw errors for values that really ought to get rounded to the min or max integer value.

Fix by doing rint() first (requiring an assumption that it handles NaN and Inf correctly; but dtoi8 and ftoi8 were assuming that already), and by comparing to values that should coerce to float exactly, namely INTxx_MIN and -INTxx_MIN. Also remove some random cosmetic discrepancies between these six functions.

Per bug #15519 from Victor Petrovykh. This should get back-patched, but first let's see what the buildfarm thinks of it --- I'm not too sure about portability of some of the regression test cases.

Patch by me; thanks to Andrew Gierth for analysis and discussion.

Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
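A quick SQL check of the corrected behavior, with values chosen to straddle the int4 maximum:

    -- rint() now runs before the range check, so a value that rounds to a
    -- representable integer succeeds instead of raising an error:
    SELECT 2147483646.6::float8::int4;   -- rounds to 2147483647 (INT_MAX)
    -- while a value that rounds past the limit is still rejected:
    SELECT 2147483647.6::float8::int4;   -- ERROR: integer out of range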
-
- 23 Nov, 2018 9 commits
-
Tom Lane authored
There's some question about the correctness of the hash function, but if it's wrong, the 32-bit version is also wrong.

Amul Sul, reviewed by Hironobu Suzuki

Discussion: https://postgr.es/m/CAAJ_b947JjnNr9Cp45iNjSqKf6PA5mCTmKsRwPjows93YwQrmw@mail.gmail.com
-
Tom Lane authored
Amul Sul, reviewed by Hironobu Suzuki

Discussion: https://postgr.es/m/CAAJ_b947JjnNr9Cp45iNjSqKf6PA5mCTmKsRwPjows93YwQrmw@mail.gmail.com
-
Tom Lane authored
We should never estimate the output of a semijoin to be more rows than we estimate for an inner join with the same input rels and join condition; it's obviously impossible for that to happen. However, given the relatively poor quality of our semijoin selectivity estimates --- particularly, but not only, in cases where we punt and return a default estimate --- we did often deliver such estimates. To improve matters, calculate both estimates inside eqjoinsel() and take the smaller one.

The bulk of this patch is just mechanical refactoring to avoid repetitive information lookup when we call both eqjoinsel_semi and eqjoinsel_inner. The actual new behavior is just

    selec = Min(selec, inner_rel->rows * selec_inner);

which looks a bit odd but is correct because of our different definitions for inner and semi join selectivity.

There is one ensuing plan change in the regression tests, but it looks reasonable enough (and checking the actual row counts shows that the estimate moved closer to reality, not further away).

Per bug #15160 from Alexey Ermakov. Although this is arguably a bug fix, I won't risk destabilizing plan choices in stable branches by back-patching.

Tom Lane, reviewed by Melanie Plageman

Discussion: https://postgr.es/m/152395805004.19366.3107109716821067806@wrigleys.postgresql.org
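A hedged way to observe the clamp, using hypothetical tables; exact row estimates depend on statistics:

    CREATE TABLE ta (x int);
    CREATE TABLE tb (x int);
    -- The estimated row count for the semijoin plan should now never
    -- exceed the estimate for the inner join on the same condition:
    EXPLAIN SELECT * FROM ta WHERE x IN (SELECT x FROM tb);  -- semijoin
    EXPLAIN SELECT * FROM ta JOIN tb USING (x);              -- inner join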
-
Alvaro Herrera authored
Commit cfdf4dc4 left a few unnecessary assignments, one of which caused compiler warnings, as reported by Erik Rijkers. Remove them all.

Discussion: https://postgr.es/m/df0dcca2025b3d90d946ecc508ca9678@xs4all.nl
-
Tom Lane authored
-
Alvaro Herrera authored
Missing in dfa60814.

Author: David Rowley
Discussion: https://postgr.es/m/CAKJS1f-M3NMTCpv=vDfkoqHbMPFf=3-Z1ud=+1DHH00tC+zLaQ@mail.gmail.com
-
Peter Eisentraut authored
-
Thomas Munro authored
Users of the WaitEventSet and WaitLatch() APIs can now choose between asking for WL_POSTMASTER_DEATH and then handling it explicitly, or asking for WL_EXIT_ON_PM_DEATH to trigger immediate exit on postmaster death. This reduces code duplication, since almost all callers want the latter.

Repair all code that was previously ignoring postmaster death completely, or requesting the event but ignoring it, or requesting the event but then doing an unconditional PostmasterIsAlive() call every time through its event loop (which is an expensive syscall on platforms for which we don't have USE_POSTMASTER_DEATH_SIGNAL support).

Assert that callers of WaitLatchXXX() under the postmaster remember to ask for either WL_POSTMASTER_DEATH or WL_EXIT_ON_PM_DEATH, to prevent future bugs.

The only process that doesn't handle postmaster death is syslogger. It waits until all backends holding the write end of the syslog pipe (including the postmaster) have closed it by exiting, to be sure to capture any parting messages. By using the WaitEventSet API directly it avoids the new assertion, and as a by-product it may be slightly more efficient on platforms that have epoll().

Author: Thomas Munro
Reviewed-by: Kyotaro Horiguchi, Heikki Linnakangas, Tom Lane
Discussion: https://postgr.es/m/CAEepm%3D1TCviRykkUb69ppWLr_V697rzd1j3eZsRMmbXvETfqbQ%40mail.gmail.com, https://postgr.es/m/CAEepm=2LqHzizbe7muD7-2yHUbTOoF7Q+qkSD5Q41kuhttRTwA@mail.gmail.com
-
Michael Paquier authored
The documentation of CREATE/ALTER ROLE has been missing two things related to PASSWORD:
- The password value provided needs to be quoted; some places of the documentation marked the field with quotes, but not others, which led to confusion.
- PASSWORD NULL was not documented consistently, with ENCRYPTED not being compatible with it.

Reported-by: Steven Winfield
Author: Michael Paquier
Reviewed-by: David G. Johnston
Discussion: https://postgr.es/m/154282901979.1316.7418475422120496802@wrigleys.postgresql.org
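The two documented points, shown as short examples; the role name is hypothetical:

    -- The password value is given as a quoted string literal:
    CREATE ROLE app_user LOGIN PASSWORD 'secret';
    -- PASSWORD NULL removes the password; it cannot be combined with ENCRYPTED:
    ALTER ROLE app_user PASSWORD NULL;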
-
- 22 Nov, 2018 3 commits
-
Tom Lane authored
populate_recordset_worker() failed to consider the possibility that the supplied JSON data contains no rows, so that update_cached_tupdesc never got called. This led to a null-pointer dereference since commit 9a5e8ed28; before that it led to a bogus "set-valued function called in context that cannot accept a set" error. Fix by forcing the update to happen.

Per bug #15514. Back-patch to v11 as 9a5e8ed28 was. (If we were excited about the bogus error, we could perhaps go back further, but it'd take more work to figure out how to fix it in older branches. Given the lack of field complaints about that aspect, I'm not excited.)

Discussion: https://postgr.es/m/15514-59d5b4c4065b178b@postgresql.org
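A minimal sketch of the no-rows shape described above; the type name is hypothetical:

    CREATE TYPE two_ints AS (a int, b int);
    -- An empty JSON array yields no rows; before the fix this crashed on v11
    -- or raised the bogus "set-valued function" error on older branches:
    SELECT * FROM json_populate_recordset(NULL::two_ints, '[]');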
-
Tom Lane authored
Documenting INCLUDE in the section about unique indexes is confusing, as complained of by Emilio Platzer. Furthermore, it entirely failed to explain why you might want to use the feature. The section about index-only scans is really the right place; it already talked about making such things the hard way. Rewrite that text to describe INCLUDE as the normal way to make a covering index.

Also, move that section up a couple of places, as it now seems more important than some of the stuff we had before it. It still has to be after expression and partial indexes, since otherwise some of it would involve forward references.

Discussion: https://postgr.es/m/154031939560.30897.14677735588262722042@wrigleys.postgresql.org
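For reference, a covering index in the sense described; the names are hypothetical:

    CREATE TABLE items (id int, val text);
    -- Unique on id alone, but index-only scans can also return val:
    CREATE UNIQUE INDEX items_covering ON items (id) INCLUDE (val);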
-
Michael Paquier authored
Author: Amit Langote
Discussion: https://postgr.es/m/CA+HiwqHg0=UL+Dhh3gpiwYNA=ufk9Lb7GQ2c=5rs=ZmVTP7xAw@mail.gmail.com
-
- 21 Nov, 2018 9 commits
-
Bruce Momjian authored
Removed one too many words. Fix for 7906de84.

Reported-by: Thomas Munro
Backpatch-through: 9.4
-
Bruce Momjian authored
Reported-by: Kevin <kcolagio@gmail.com>
Discussion: https://postgr.es/m/154082462281.30897.14043119084654378035@wrigleys.postgresql.org
Backpatch-through: 9.4
-
Alvaro Herrera authored
This commit continues the code improvements started by commit 12788ae4. With this commit, state machine transitions are better contained in the routine that was called doCustom() and is now called advanceConnectionState -- the resulting code is easier to reason about, since there are no state changes occurring in the outer layer.

This change is prompted by future patches to add more features to pgbench, which will need to effect some more surgery to this code.

Fabien's original had all the machine state changes inside one routine, but I (Álvaro) thought that a subroutine to handle command execution is more straightforward to review, so this commit does not match Fabien's submission closely. If something is broken, it's probably my fault.

Author: Fabien Coelho, Álvaro Herrera
Reviewed-by: Kirk Jamison
Discussion: https://postgr.es/m/alpine.DEB.2.21.1808111104320.1705@lancre
-
Alvaro Herrera authored
Per pink buildfarm.
-
Alvaro Herrera authored
Per David Rowley

Discussion: https://postgr.es/m/CAKJS1f-MstvBWdkOzACsOHyBgj2oXcBM8kfv+NhVe-Ux-wq9Sg@mail.gmail.com
-
Alvaro Herrera authored
Sets the timestamp to current if not already set. Will acquire more callers momentarily.

Author: Fabien Coelho
Discussion: https://postgr.es/m/alpine.DEB.2.21.1808111104320.1705@lancre
-
Andres Freund authored
-
Andres Freund authored
Per buildfarm animal rhinoceros. I (Andres) missed replacing a few uses of ObjectIdAttributeNumber in sepgsql. It's quite probable that the sepgsql test output will need more adapting than done in 578b22...

Author: Thomas Munro
Discussion: https://postgr.es/m/CAEepm=2Sk+66HJV8FLDfm_sKTn22j7cWTY_Y1Rok3RxeWL_Y0w@mail.gmail.com
-
Andres Freund authored
Previously tables declared WITH OIDS, including a significant fraction of the catalog tables, stored the oid column not as a normal column, but as part of the tuple header. This special column was not shown by default, which was somewhat odd, as it's often (consider e.g. pg_class.oid) one of the more important parts of a row. Neither pg_dump nor COPY included the contents of the oid column by default.

The fact that the oid column was not an ordinary column necessitated a significant amount of special case code to support oid columns. That already was painful for the existing code, but upcoming work aiming to make table storage pluggable would have required expanding and duplicating that "specialness" significantly.

WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0). Remove it. Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load binary dumps that include oids.
- pg_upgrade will error out when encountering tables declared WITH OIDS; they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.

The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false) for CREATE TABLE) is still supported. While that requires a bit of support code, it seems unnecessary to break applications / dumps that do not use oids, and are explicit about not using them.

The biggest user of WITH OID columns was postgres' catalog. This commit changes all 'magic' oid columns to be columns that are normally declared and stored. To reduce unnecessary query breakage, all the newly added columns are still named 'oid', even if a table's column naming scheme would indicate 'reloid' or such. This obviously requires adapting a lot of code, mostly replacing oid access via HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.

The bootstrap process now assigns oids for all oid columns in genbki.pl that do not have an explicit value (starting at the largest oid previously used); only oids assigned later will be above FirstBootstrapObjectId. As the oid column now is a normal column, the special bootstrap syntax for oids has been removed.

Oids are not automatically assigned during insertion anymore; all backend code explicitly assigns oids with GetNewOidWithIndex(). For the rare case that insertions into the catalog via SQL are called for, the new pg_nextoid() function can be used (which only works on catalog tables).

The fact that oid columns on system tables are now normal columns means that they will be included in the set of columns expanded by * (i.e. SELECT * FROM pg_class will now include the table's oid, previously it did not). It'd not technically be hard to hide the oid column by default, but that'd mean confusing behavior would either have to be carried forward forever, or it'd cause breakage down the line.

While it's not unlikely that further adjustments are needed, the scope/invasiveness of the patch makes it worthwhile to merge this now. It's painful to maintain externally, too complicated to commit after the code freeze, and a dependency of a number of other patches.

Catversion bump, for obvious reasons.

Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
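A short recap of the user-visible changes in SQL terms, as described above:

    -- Declaring a table WITH OIDS now errors out:
    CREATE TABLE t (a int) WITH (oids = true);   -- ERROR
    -- WITHOUT OIDS remains accepted, to avoid breaking existing dumps:
    CREATE TABLE t (a int) WITHOUT OIDS;
    -- Catalog oid columns are now ordinary columns, so they are expanded
    -- by * and can be selected explicitly:
    SELECT oid, relname FROM pg_class LIMIT 1;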
-
- 20 Nov, 2018 7 commits
-
Michael Paquier authored
The dedicated private buffer to store records is used only for those crossing a page boundary since 285bd0ac, but its description did not completely match reality.

Reported-by: Andrey Lepikhov
Author: Michael Paquier
Discussion: https://postgr.es/m/49518b48-2036-5e43-1818-0f594e375e76@postgrespro.ru
-
Peter Eisentraut authored
As already explained in configure.in, using the OpenSSL version number to detect presence of functions doesn't work, because LibreSSL reports incompatible version numbers. Fortunately, the functions we need here are actually macros, so we can just test for them directly.
-
Peter Eisentraut authored
For example:

    ssl_min_protocol_version = 'TLSv1.1'
    ssl_max_protocol_version = 'TLSv1.2'

Reviewed-by: Steve Singer <steve@ssinger.info>
Discussion: https://www.postgresql.org/message-id/flat/1822da87-b862-041a-9fc2-d0310c3da173@2ndquadrant.com
-
Peter Eisentraut authored
The output for record types XLOG_DBASE_CREATE and XLOG_DBASE_DROP used the order dbid/tablespaceid, whereas elsewhere the order is tablespaceid/dbid[/relfilenodeid]. Flip the order for those two types to make it consistent.

Author: Jean-Christophe Arnu <jcarnu@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAHZmTm18Ln62KW-G8NYvO1wbBL3QU1E76Zep=DuHmg-zS2XFAg@mail.gmail.com/
-
Peter Eisentraut authored
The documentation claimed that an enum type requires "one or more" labels, but since 1fd9883f, zero labels are also allowed.

Reported-by: Lukas Eder <lukas.eder@gmail.com>
Bug: #15356
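A short example of the now-documented behavior; the type name is hypothetical:

    -- Zero labels are allowed since 1fd9883f; labels can be added later:
    CREATE TYPE bug_status AS ENUM ();
    ALTER TYPE bug_status ADD VALUE 'open';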
-
Peter Eisentraut authored
These settings apply to communication with the sending server, which is not necessarily a primary.

Author: Sergei Kornilov <sk@zsrv.org>
-
Michael Paquier authored
Two issues have been spotted and get fixed here:
- When checking for corrupted files, make sure that pg_verify_checksums complains about the correct file. In order to make the logic more robust, all files created are immediately removed once checks on them are done. The error message generated by pg_verify_checksums also now includes the name of the file it sees as corrupted.
- Before running corruption-related tests, empty files were generated with names mapping to those of the corrupted files, potentially leading to conflicts. So use a different set of names for both.

Author: Michael Banck
Discussion: https://postgr.es/m/20181119181119.GC23740@nighthawk.caipicrew.dd-dns.de
-
- 19 Nov, 2018 1 commit
-