- 06 May, 2016 25 commits
-
Tom Lane authored
Fabien Coelho
-
Tom Lane authored
Describe compat_realm = 0 as "disabled" not "enabled", per discussion with Christian Ullrich. I failed to resist the temptation to do some other minor copy-editing in the same area.
-
Stephen Frost authored
The test_pg_dump extension doesn't have a C component, so we need to exclude it from the MSVC build system trying to figure out how to build it. Also add a "MODULES" line to the Makefile, as test_extensions has. Might not be necessary, but seems good to keep things consistent. Lastly, remove the 'installcheck' line from test_pg_dump, as that was causing redefinition errors, at least on my box. This also makes test_pg_dump consistent with how commit_ts is set up.
-
Tom Lane authored
Corrections per Jeff Janes, Christian Ullrich, and Daniel Vérité.
-
Stephen Frost authored
We need to use a new branch due to the 9.5 addition of bypassrls when adding in the clause to exclude pg_* roles from being dumped by pg_dumpall. Pointed out by Noah, patch by me.
-
Stephen Frost authored
The Makefile for test_pg_dump shouldn't have a MODULES_big line because there's no actual compiled bit for that extension. Hopefully this will fix the Windows buildfarm members which were complaining. In passing, also add the 'prove_installcheck' bit to the pg_dump and test_pg_dump Makefiles, to get the buildfarm members to actually run those tests.
-
Robert Haas authored
Discussion is still underway as to whether to revert the entire patch that added this function, but that discussion may not conclude before beta1. So, in the meantime, let's do at least this much. David Rowley
-
Robert Haas authored
This new limit affects both the max_parallel_degree GUC and the parallel_degree reloption. There may some day be a use case for using more than 1024 CPUs for a single query, but that's surely not the case right now. Not only do very few people have that many CPUs, but the code hasn't been tested at that kind of scale and is very unlikely to perform well, or even work at all, without a lot more work. The issue addressed by commit 06bd458c is probably just one problem of many. The idea of a more reasonable limit here was suggested by Tom Lane; the value of 1024 was suggested by Amit Kapila.
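A minimal sketch of the kind of cap the commit describes, assuming an illustrative constant and function name (the actual PostgreSQL GUC machinery rejects out-of-range values through its own validation path):

```c
#include <assert.h>

/* Illustrative cap on the parallel degree; 1024 matches the limit the
 * commit describes, but the constant name here is hypothetical. */
#define PARALLEL_DEGREE_LIMIT 1024

/* Clamp a requested parallel degree into the accepted range. */
static int
clamp_parallel_degree(int requested)
{
    if (requested < 0)
        return 0;
    if (requested > PARALLEL_DEGREE_LIMIT)
        return PARALLEL_DEGREE_LIMIT;
    return requested;
}
```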
-
Tom Lane authored
Ordinarily, pg_upgrade shouldn't have any difficulty in matching up all the relations it sees in the old and new databases. If it does, however, it just goes belly-up with a pretty unhelpful error message. That seemed fine as long as we expected the case never to occur in the wild, but Alvaro reported that it had been seen in a database whose pg_largeobject table had somehow acquired a TOAST table. That doesn't quite seem like a case that pg_upgrade actually needs to handle, but it would be good if the report were more diagnosable. Hence, extend the logic to print out as much information as we can about the mismatch(es) before we quit. In passing, improve the readability of get_rel_infos()'s data collection query, which had suffered seriously from lets-not-bother-to-update-comments syndrome, and generally was unnecessarily disrespectful to readers. It could be argued that this is a bug fix, but given that we have so few reports, I don't feel a need to back-patch; at least not before this has baked awhile in HEAD.
-
Robert Haas authored
That way, if the result overflows size_t, you'll get an error instead of undefined behavior, which seems like a plus. This also has the effect of casting the number of workers from int to Size, which is better because it's easier to overflow int than size_t. Dilip Kumar reported this issue and provided a patch upon which this patch is based, but his version did use mul_size.
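The idea behind mul_size() can be sketched in a few lines: test for overflow before multiplying, and fail loudly instead of wrapping. This standalone version exits where the real function reports through ereport(); the name is borrowed for illustration only.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Overflow-checked multiply in the spirit of mul_size(): error out if
 * a * b would exceed SIZE_MAX instead of silently wrapping. */
static size_t
mul_size_checked(size_t a, size_t b)
{
    if (a != 0 && b > SIZE_MAX / a)
    {
        fprintf(stderr, "requested size overflows size_t\n");
        exit(EXIT_FAILURE);
    }
    return a * b;
}
```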
-
Stephen Frost authored
Default roles really should be like regular roles, for the most part. This removes a number of checks that were trying to make default roles extra special by not allowing them to be used as regular roles. We still prevent users from creating roles in the "pg_" namespace or from altering roles which exist in that namespace via ALTER ROLE, as we can't preserve such changes, but otherwise the roles are very much like regular roles. Based on discussion with Robert and Tom.
-
Stephen Frost authored
This TAP test suite will create a new cluster, populate it based on the 'create_sql' values in the '%tests' hash, run all of the runs defined in the '%pgdump_runs' hash, and then for each test in the '%tests' hash, compare each run's output to the regular expression defined for the test under the 'like' and 'unlike' functions, as appropriate. While this test suite covers a fair bit of ground (67% of pg_dump.c and quite a bit of the other files in src/bin/pg_dump), there is still quite a bit which remains to be added to provide better code coverage. Still, this is quite a bit better than we had, and has found a few bugs already (note that the CREATE TRANSFORM test is commented out, as it is currently failing). Idea for using the TAP system from Tom, though all of the code is mine.
-
Stephen Frost authored
Reviewing the cases where we need to LOCK a given table during a dump, it was pointed out by Tom that we really don't need to LOCK a table if we are only looking to dump the ACL for it, or certain other components. After reviewing the queries run for all of the component pieces, a list of components were determined to not require LOCK'ing of the table. This implements a check to avoid LOCK'ing those tables. Initial complaint from Rushabh Lathia, discussed with Robert and Tom, the patch is mine.
-
Stephen Frost authored
Do not try to dump objects which do not have ACLs when only ACLs are being requested. This results in a significant performance improvement as we can avoid querying for further information on these objects when we don't need to. When limiting the components to dump for an extension, consider what components have been requested. Initially, we incorrectly hard-coded the components of the extension objects to dump, which would mean that we wouldn't dump some components even when they were asked for and in other cases we would dump components which weren't requested. Correct defaultACLs to use 'dump_contains' instead of 'dump'. The defaultACL is considered a member of the namespace and should be dumped based on the same set of components that the other objects in the schema are, not based on what we're dumping for the namespace itself (which might not include ACLs, if the namespace has just the default or initial ACL). Use DUMP_COMPONENT_ACL for from-initdb objects, to allow users to change their ACLs, should they wish to. This just extends what we are doing for the pg_catalog namespace to objects which are not members of namespaces. Due to column ACLs being treated a bit differently from other ACLs (they are actually reset to NULL when all privileges are revoked), adjust the query which gathers column-level ACLs to consider all of the ACL-relevant columns.
-
Stephen Frost authored
The query to grab the function/aggregate information is now joining to pg_init_privs, so we can simplify (and correct) the WHERE clause used to determine if a given function's ACL has changed from the initial ACL on the function. Bug found by Noah, patch by me.
-
Peter Eisentraut authored
-
Tom Lane authored
to_timestamp() handles the TH/th format codes by advancing over two input characters, whatever those are. It failed to notice whether there were two characters available to be skipped, making it possible to advance the pointer past the end of the input string and keep on parsing. A similar risk existed in the handling of "Y,YYY" format: it would advance over three characters after the "," whether or not three characters were available. In principle this might be exploitable to disclose contents of server memory. But the security team concluded that it would be very hard to use that way, because the parsing loop would stop upon hitting any zero byte, and TH/th format codes can't be consecutive --- they have to follow some other format code, which would have to match whatever data is there. So it seems impractical to examine memory very much beyond the end of the input string via this bug; and the input string will always be in local memory not in disk buffers, making it unlikely that anything very interesting is close to it in a predictable way. So this doesn't quite rise to the level of needing a CVE. Thanks to Wolf Roediger for reporting this bug.
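The shape of the fix can be sketched as a bounds-checked skip: advance over up to n input characters, but stop at the terminating NUL instead of running past the end of the string. The actual formatting.c code is structured differently; this function is illustrative only.

```c
#include <assert.h>

/* Skip at most n characters of input, never advancing past the
 * string's terminating NUL. The unfixed code advanced unconditionally,
 * which could walk off the end of the input. */
static const char *
skip_chars_bounded(const char *s, int n)
{
    while (n-- > 0 && *s != '\0')
        s++;
    return s;
}
```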
-
Tom Lane authored
Noted by Fabien Coelho, though this isn't exactly his proposed patch. (The technique used here is borrowed from the zic sources.)
-
Tom Lane authored
The previous coding always stored variable values as strings, doing conversion on-the-fly when a numeric value was needed or a number was to be assigned. This was a bit inefficient and risked loss of precision for floating-point values. The precision aspect had been hacked around by printing doubles in "%.18e" format, which is ugly and has machine-dependent results. Instead, arrange to preserve an assigned numeric value in the original binary numeric format, converting to string only when and if needed. When we do need to convert a double to string, convert in "%g" format with DBL_DIG precision, which is the standard way to do it and produces the least surprising results in most cases. The implementation supports storing both a string value and a numeric value for any one variable, with lazy conversion between them. I also arranged for lazy re-sorting of the variable array when new variables are added. That was mainly to allow a clean refactoring of putVariable() into two levels of subroutine, but it may allow us to save a few sorts. Discussion: <9188.1462475559@sss.pgh.pa.us>
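The double-to-string conversion the commit describes can be shown directly: print with %g at DBL_DIG precision, the standard portable approach, rather than the old machine-dependent "%.18e". The helper name here is illustrative, not pgbench's.

```c
#include <assert.h>
#include <float.h>
#include <stdio.h>
#include <string.h>

/* Format a double the standard way: %g with DBL_DIG significant
 * digits, which gives the least surprising output in most cases. */
static void
double_to_string(char *buf, size_t buflen, double v)
{
    snprintf(buf, buflen, "%.*g", DBL_DIG, v);
}
```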
-
Tom Lane authored
This example missed being updated when we redefined \crosstabview's argument processing. Daniel Vérité
-
Kevin Grittner authored
Hash indexes are not WAL-logged, and so do not maintain the LSN of index pages. Since the "snapshot too old" feature counts on detecting error conditions using the LSN of a table and all indexes on it, this makes it impossible to safely do early vacuuming on any table with a hash index, so add this to the tests for whether the xid used to vacuum a table can be adjusted based on old_snapshot_threshold. While at it, add a paragraph to the docs for old_snapshot_threshold which specifically mentions this and other aspects of the feature which may otherwise surprise users. Problem reported and patch reviewed by Amit Kapila
-
Dean Rasheed authored
Commit 8eb6407a added support for editing and showing view definitions, but neglected to account for view options such as security_barrier and WITH CHECK OPTION which are not returned by pg_get_viewdef() and so need special handling. Author: Dean Rasheed Reviewed-by: Peter Eisentraut Discussion: http://www.postgresql.org/message-id/CAEZATCWZjCgKRyM-agE0p8ax15j9uyQoF=qew7D2xB6cF76T8A@mail.gmail.com
-
Dean Rasheed authored
Move fmtReloptionsArray() from pg_dump.c to string_utils.c so that it is available to other frontend code. In particular psql's \ev and \sv commands need it to handle view reloptions. Also rename the function to appendReloptionsArray(), which is a more accurate description of what it does. Author: Dean Rasheed Reviewed-by: Peter Eisentraut Discussion: http://www.postgresql.org/message-id/CAEZATCWZjCgKRyM-agE0p8ax15j9uyQoF=qew7D2xB6cF76T8A@mail.gmail.com
-
Tom Lane authored
Call out the major enhancements in this release as identified by pgsql-advocacy discussion, and rearrange some of the entries to make those items more prominent. Other minor improvements per advice from Vitaly Burovoy, Masahiko Sawada, Peter Geoghegan, and Andres Freund.
-
Tom Lane authored
DST law changes in Russia (Magadan, Tomsk regions) and Venezuela. Historical corrections for Russia. There are new zone names Europe/Kirov and Asia/Tomsk reflecting the fact that these regions now have different time zone histories from adjacent regions.
-
- 05 May, 2016 6 commits
-
Tom Lane authored
The similarity of the original names to SQL keywords seems like a bad idea. Rename them before we're stuck with 'em forever. In passing, minor code and docs cleanup. Discussion: <4875.1462210058@sss.pgh.pa.us>
-
Tom Lane authored
Sync release notes through today, and incorporate some suggestions from Robert Haas.
-
Tom Lane authored
These functions behave like the backend's least/greatest functions, not like min/max, so the originally-chosen names invite confusion. Per discussion, rename to least/greatest. I also took it upon myself to make them return double if any input is double. The previous behavior of silently coercing all inputs to int surely does not meet the principle of least astonishment. Copy-edit some of the other new functions' documentation, too.
-
Tom Lane authored
These are just of beta quality, but we're only at beta ... the section about parallel query, in particular, could doubtless use more work.
-
Tom Lane authored
Somebody added pg_replication_origin, pg_replication_origin_status and pg_replication_slots to catalogs.sgml without a whole lot of concern for either alphabetical order or the difference between a table and a view. Clean up the mess. Back-patch to 9.5, not so much because this is critical as because if I don't it will result in a cross-branch divergence in release-9.5.sgml, which would be a maintenance hazard.
-
Dean Rasheed authored
Commit 7d9a4737 greatly improved the accuracy of the numeric transcendental functions, however it failed to consider the case where the result from pow() is close to the overflow threshold, for example 0.12 ^ -2345.6. For such inputs, where the result has more than 2000 digits before the decimal point, the decimal result weight estimate was being clamped to 2000, leading to a loss of precision in the final calculation. Fix this by replacing the clamping code with an overflow test that aborts the calculation early if the final result is sure to overflow, based on the overflow limit in exp_var(). This provides the same protection against integer overflow in the subsequent result scale computation as the original clamping code, but it also ensures that precision is never lost and saves compute cycles in cases that are sure to overflow. The new early overflow test works with the initial low-precision result (expected to be accurate to around 8 significant digits) and includes a small fuzz factor to ensure that it doesn't kick in for values that would not overflow exp_var(), so the overall overflow threshold of pow() is unchanged and consistent for all inputs with non-integer exponents. Author: Dean Rasheed Reviewed-by: Tom Lane Discussion: http://www.postgresql.org/message-id/CAEZATCUj3U-cQj0jjoia=qgs0SjE3auroxh8swvNKvZWUqegrg@mail.gmail.com See-also: http://www.postgresql.org/message-id/CAEZATCV7w+8iB=07dJ8Q0zihXQT1semcQuTeK+4_rogC_zq5Hw@mail.gmail.com
-
- 04 May, 2016 5 commits
-
Alvaro Herrera authored
This reverts commits f07d18b6, 82c83b33, 3a3b3090, and 24c5f1a1. This feature has shown enough immaturity that it was deemed better to rip it out before rushing some more fixes at the last minute. There are discussions on larger changes in this area for the next release.
-
Peter Eisentraut authored
From: Alexander Law <exclusion@gmail.com>
-
Teodor Sigaev authored
The variable storing a lexeme's position had the wrong type: char, which is obviously not enough to store 2^14 possible positions. Stas Kelvich
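The bug class is easy to demonstrate: storing a 14-bit position in a char silently truncates it, while a 16-bit type preserves it. These helpers are illustrative, not the tsearch code itself.

```c
#include <assert.h>
#include <stdint.h>

/* Buggy: a char keeps only the low 8 bits of the position. */
static unsigned int
store_in_char(unsigned int pos)
{
    return (unsigned char) pos;
}

/* Fixed: 16 bits comfortably covers all 2^14 positions. */
static unsigned int
store_in_uint16(unsigned int pos)
{
    return (uint16_t) pos;
}
```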
-
Andres Freund authored
Unfortunately the segment size checks from 72a98a63 had the negative side-effect of breaking a corner case in mdsync(): When processing a fsync request for a truncated away segment mdsync() could fail with "could not fsync file" (if previous segment < RELSEG_SIZE) because _mdfd_getseg() now wouldn't return the relevant segment anymore. The cleanest fix seems to be to allow the caller of _mdfd_getseg() to specify whether checks for RELSEG_SIZE are performed. To allow doing so, change the ExtensionBehavior enum into a bitmask. Besides allowing for the addition of EXTENSION_DONT_CHECK_SIZE, this makes for a nicer implementation of EXTENSION_REALLY_RETURN_NULL. Besides mdsync() the only callsite that should change behaviour due to this is mdprefetch() which now doesn't create segments anymore, even in recovery. Given the uses of mdprefetch() that seems better. Reported-By: Thom Brown Discussion: CAA-aLv72QazLvPdKZYpVn4a_Eh+i4_cxuB03k+iCuZM_xjc+6Q@mail.gmail.com
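The enum-to-bitmask change can be sketched as follows: each behavior becomes a single-bit flag, so a caller can combine a base behavior with EXTENSION_DONT_CHECK_SIZE. The flag names follow the commit message, but the exact set and values here are illustrative rather than copied from md.c.

```c
#include <assert.h>

/* Behaviors as single-bit flags so they can be OR'd together. */
typedef enum
{
    EXTENSION_FAIL = 1 << 0,
    EXTENSION_RETURN_NULL = 1 << 1,
    EXTENSION_REALLY_RETURN_NULL = 1 << 2,
    EXTENSION_CREATE = 1 << 3,
    EXTENSION_DONT_CHECK_SIZE = 1 << 4
} ExtensionBehavior;

/* Should the RELSEG_SIZE check be enforced for this call? */
static int
size_check_enabled(int behavior)
{
    return (behavior & EXTENSION_DONT_CHECK_SIZE) == 0;
}
```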
-
Peter Eisentraut authored
From: Alexander Law <exclusion@gmail.com>
-
- 03 May, 2016 3 commits
-
Robert Haas authored
Conversion functions were previously marked as parallel-unsafe, since that is the default, but in fact they are safe. Parallel-safe functions defined in pg_proc.h and redefined in system_views.sql were ending up as parallel-unsafe because the redeclarations were not marked PARALLEL SAFE. While editing system_views.sql, mark ts_debug() parallel safe also. Andreas Karlsson
-
Robert Haas authored
These changes adjust code and comments in minor ways to prevent pgindent from mangling them. Among other things, I tried to avoid situations where pgindent would emit "a +b" instead of "a + b", and I tried to avoid having it break up inline comments across multiple lines.
-
Robert Haas authored
Since this is a minor issue, no back-patch. Julien Rouhaud
-
- 02 May, 2016 1 commit
-
Alvaro Herrera authored
Pointed out by Andres Freund
-