- 06 Oct, 2018 2 commits
-
-
Dean Rasheed authored
When computing statistical aggregates like variance, the common schoolbook algorithm which computes the sum of the squares of the values and subtracts the square of the mean can lead to a large loss of precision when using floating point arithmetic, because the difference between the two terms is often very small relative to the terms themselves. To avoid this, re-work these aggregates to use the Youngs-Cramer algorithm, which is a proven, numerically stable algorithm that directly aggregates the sum of the squares of the differences of the values from the mean in a single pass over the data. While at it, improve the test coverage to test the aggregate combine functions used during parallel aggregation. Per report and suggested algorithm from Erich Schubert. Patch by me, reviewed by Madeleine Thompson. Discussion: https://postgr.es/m/153313051300.1397.9594490737341194671@wrigleys.postgresql.org
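A minimal standalone C sketch of the Youngs-Cramer style single-pass accumulation described above (illustrative only: the struct layout and names here are assumptions, not the actual float8_accum transition code):

    #include <math.h>
    #include <stdio.h>

    typedef struct
    {
        double  N;      /* number of values seen so far */
        double  Sx;     /* sum of the values */
        double  Sxx;    /* sum of squared deviations from the running mean */
    } VarAccumState;

    /* Fold one new value X into the state, Youngs-Cramer style. */
    static void
    accum_value(VarAccumState *state, double X)
    {
        state->N += 1.0;
        state->Sx += X;

        /*
         * The first value contributes nothing to Sxx; afterwards, accumulate
         * the squared deviation of X from the running mean in a numerically
         * stable way: (X*N - Sx) is N times (X - mean).
         */
        if (state->N > 1.0)
        {
            double  tmp = X * state->N - state->Sx;

            state->Sxx += tmp * tmp / (state->N * (state->N - 1.0));
        }
    }

    /* Sample variance once all values have been accumulated. */
    static double
    sample_variance(const VarAccumState *state)
    {
        if (state->N < 2.0)
            return NAN;
        return state->Sxx / (state->N - 1.0);
    }

    int
    main(void)
    {
        VarAccumState   st = {0.0, 0.0, 0.0};
        const double    vals[] = {1e9 + 4, 1e9 + 7, 1e9 + 13, 1e9 + 16};

        /* Large common offset: the schoolbook formula would lose precision here. */
        for (int i = 0; i < 4; i++)
            accum_value(&st, vals[i]);
        printf("variance = %g\n", sample_variance(&st));    /* 30 */
        return 0;
    }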
-
Michael Paquier authored
This is used, for example, when attaching a partition to a partitioned table which includes foreign keys; in this case the constraint name was missing from the cloned data. This could lead to hard crashes, as the constraint name is always expected when validating the foreign key constraint. In particular, when using log_min_messages >= DEBUG1, a log message would be generated with this unassigned constraint name, leading to an assertion failure on HEAD. While at it, rename a variable in ATExecAttachPartition which was declared twice with the same name. Author: Michael Paquier Reviewed-by: Álvaro Herrera Discussion: https://postgr.es/m/20181005042236.GG1629@paquier.xyz Backpatch-through: 11
-
- 05 Oct, 2018 5 commits
-
-
Bruce Momjian authored
Discussion: https://postgr.es/m/1f5b2e66-7ba8-98ec-c06a-aee9ff33f050@postgresql.org Author: Jonathan S. Katz Backpatch-through: 11
-
Tom Lane authored
Historically we forbade datatype-specific comparison functions from returning INT_MIN, so that it would be safe to invert the sort order just by negating the comparison result. However, this was never really safe for comparison functions that directly return the result of memcmp(), strcmp(), etc, as POSIX doesn't place any such restriction on those library functions. Buildfarm results show that at least on recent Linux on s390x, memcmp() actually does return INT_MIN sometimes, causing sort failures. The agreed-on answer is to remove this restriction and fix relevant call sites to not make such an assumption; code such as "res = -res" should be replaced by "INVERT_COMPARE_RESULT(res)". The same is needed in a few places that just directly negated the result of memcmp or strcmp. To help find places having this problem, I've also added a compile option to nbtcompare.c that causes some of the commonly used comparators to return INT_MIN/INT_MAX instead of their usual -1/+1. It'd likely be a good idea to have at least one buildfarm member running with "-DSTRESS_SORT_INT_MIN". That's far from a complete test of course, but it should help to prevent fresh introductions of such bugs. This is a longstanding portability hazard, so back-patch to all supported branches. Discussion: https://postgr.es/m/20180928185215.ffoq2xrq5d3pafna@alap3.anarazel.de
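A hedged C illustration of the safe inversion pattern; the macro body below is written for this sketch and may not match the definition in c.h exactly:

    #include <stdio.h>
    #include <string.h>

    /*
     * Invert a qsort-style comparison result without plain negation, which
     * is wrong when the result can be INT_MIN (-INT_MIN overflows).
     * Illustrative definition; see c.h for the real macro.
     */
    #define INVERT_COMPARE_RESULT(var) \
        ((var) = ((var) < 0) ? 1 : -((var) > 0))

    /* Descending comparator built on memcmp(), safe even if memcmp() returns INT_MIN. */
    static int
    compare_bytes_desc(const void *a, const void *b, size_t len)
    {
        int     res = memcmp(a, b, len);

        INVERT_COMPARE_RESULT(res);     /* instead of the unsafe "res = -res" */
        return res;
    }

    int
    main(void)
    {
        /* "abc" < "abd", so the descending comparator reports it as greater. */
        printf("%d\n", compare_bytes_desc("abc", "abd", 3));
        return 0;
    }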
-
Tom Lane authored
Without this, the syntax-tree-dumping functions in pl_funcs.c crash, and there are other places that might be at risk too. Per report from Pavel Stehule. Looks like I broke this in commit f9263006, so back-patch to v11. Discussion: https://postgr.es/m/CAFj8pRA+3f5n4642q2g8BXCKjbTd7yU9JMYAgDyHgozk6cQ-VA@mail.gmail.com
-
Peter Eisentraut authored
Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
-
Michael Paquier authored
This lists the contents of the temporary directory associated with a given tablespace, which is useful for getting information about on-disk consumption caused by temporary files used by session queries. By default, pg_default is scanned, and a tablespace can be specified as an argument. This function is intended to be used by monitoring tools, and, unlike pg_ls_dir(), access to it can be granted to non-superusers so that those monitoring tools can observe the principle of least privilege. Access is also given by default to members of pg_monitor. Author: Nathan Bossart Reviewed-by: Laurenz Albe Discussion: https://postgr.es/m/92F458A2-6459-44B8-A7F2-2ADD3225046A@amazon.com
-
- 04 Oct, 2018 5 commits
-
-
Tom Lane authored
Instead of doing a lot of list_nth() accesses to es_range_table, create a flattened pointer array during executor startup and index into that to get at individual RangeTblEntrys. This eliminates one source of O(N^2) behavior with lots of partitions. (I'm not exactly convinced that it's the most important source, but it's an easy one to fix.) Amit Langote and David Rowley Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
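As a hedged, generic C sketch of the idea (names invented; not the executor's actual data structures), converting the list to a pointer array once turns each later lookup into O(1); in the executor, range-table indexes are 1-based, so lookups become array[rti - 1]:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct ListCell
    {
        void            *ptr;
        struct ListCell *next;
    } ListCell;

    /*
     * Walking the list for every lookup is O(N) per access, O(N^2) overall.
     * Flattening it into an array once at startup makes each lookup O(1).
     */
    static void **
    flatten_list(const ListCell *head, int nitems)
    {
        void      **array = malloc(sizeof(void *) * nitems);
        int         i = 0;

        for (const ListCell *lc = head; lc != NULL && i < nitems; lc = lc->next)
            array[i++] = lc->ptr;
        return array;
    }

    int
    main(void)
    {
        ListCell    c3 = {(void *) "three", NULL};
        ListCell    c2 = {(void *) "two", &c3};
        ListCell    c1 = {(void *) "one", &c2};
        void      **flat = flatten_list(&c1, 3);

        printf("%s\n", (const char *) flat[2]);     /* direct O(1) access */
        free(flat);
        return 0;
    }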
-
Tom Lane authored
Create an array estate->es_relations[] paralleling the es_range_table, and store references to Relations (relcache entries) there, so that any given RT entry is opened and closed just once per executor run. Scan nodes typically still call ExecOpenScanRelation, but ExecCloseScanRelation is no more; relation closing is now done centrally in ExecEndPlan. This is slightly more complex than one would expect because of the interactions with relcache references held in ResultRelInfo nodes. The general convention is now that ResultRelInfo->ri_RelationDesc does not represent a separate relcache reference and so does not need to be explicitly closed; but there is an exception for ResultRelInfos in the es_trig_target_relations list, which are manufactured by ExecGetTriggerResultRel and have to be cleaned up by ExecCleanUpTriggerState. (That much was true all along, but these ResultRelInfos are now more different from others than they used to be.) To allow the partition pruning logic to make use of es_relations[] rather than having its own relcache references, adjust PartitionedRelPruneInfo to store an RT index rather than a relation OID. Amit Langote, reviewed by David Rowley and Jesper Pedersen, some mods by me Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
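A hedged generic sketch of the open-once/close-centrally pattern; all names below are invented stand-ins (including the assumed helpers open_relation_for_rte and close_relation), not the real executor fields or APIs:

    #include <stddef.h>

    /* Stand-in for the real relcache handle type; invented for this sketch. */
    typedef struct Relation *Relation;

    extern Relation open_relation_for_rte(int rti);     /* assumed helper */
    extern void     close_relation(Relation rel);       /* assumed helper */

    typedef struct
    {
        int         num_rtes;
        Relation   *relations;      /* zero-initialized: NULL means "not opened yet" */
    } ExecState;

    /* Open each range-table entry's relation at most once per executor run. */
    static Relation
    get_rt_relation(ExecState *estate, int rti)
    {
        Relation    rel = estate->relations[rti - 1];   /* RT indexes are 1-based */

        if (rel == NULL)
        {
            rel = open_relation_for_rte(rti);
            estate->relations[rti - 1] = rel;
        }
        return rel;
    }

    /* Closing happens centrally at end of plan, not in each scan node. */
    static void
    end_plan(ExecState *estate)
    {
        for (int i = 0; i < estate->num_rtes; i++)
            if (estate->relations[i] != NULL)
                close_relation(estate->relations[i]);
    }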
-
Alvaro Herrera authored
When using the CREATE TABLE .. PARTITION OF syntax, it's possible to cause a partition to get two primary keys if the parent already has one. Tighten the check to disallow that. Reported-by: Rajkumar Raghuwanshi Author: Amul Sul Discussion: https://postgr.es/m/CAKcux6=OnSV3-qd8Gb6W=KPPwcCz6Fe_O_MQYjTa24__Xn8XxA@mail.gmail.com
-
Michael Paquier authored
This moves the system administration functions for signalling backends from backend/utils/adt/misc.c into a separate file dedicated to backend signalling. No new functionality is introduced in this commit. Author: Daniel Gustafsson Reviewed-by: Michael Paquier, Álvaro Herrera Discussion: https://postgr.es/m/C2C7C3EC-CC5F-44B6-9C78-637C88BD7D14@yesql.se
-
Michael Paquier authored
When specified, this option allows VACUUM to skip the work on a relation if there is a conflicting lock on it when trying to open it at the beginning of its processing. Similarly to autovacuum, this comes with a couple of limitations while the relation is processed which can still cause the process to block:
- when opening the relation's indexes;
- when acquiring row samples for table inheritance trees, partition trees or certain types of foreign tables, if a lock is taken on some leaves of such trees.
Author: Nathan Bossart Reviewed-by: Michael Paquier, Andres Freund, Masahiko Sawada Discussion: https://postgr.es/m/9EF7EBE4-720D-4CF1-9D0E-4403D7E92990@amazon.com Discussion: https://postgr.es/m/20171201160907.27110.74730@wrigleys.postgresql.org
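A hedged sketch of the try-lock-or-skip pattern underlying the option; LockRelationOid(), ConditionalLockRelationOid() and ShareUpdateExclusiveLock are standard lmgr facilities, but the wrapper function below is invented for illustration and omits the real code path's logging and error handling:

    #include "postgres.h"
    #include "storage/lmgr.h"
    #include "storage/lockdefs.h"

    /* Returns true if the relation was locked and should be processed. */
    static bool
    try_lock_for_vacuum(Oid relid, bool skip_locked)
    {
        if (!skip_locked)
        {
            LockRelationOid(relid, ShareUpdateExclusiveLock);   /* may block */
            return true;
        }

        /* With SKIP_LOCKED, give up immediately if a conflicting lock is held. */
        return ConditionalLockRelationOid(relid, ShareUpdateExclusiveLock);
    }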
-
- 03 Oct, 2018 8 commits
-
-
Andres Freund authored
Reported-By: Tom Lane Discussion: https://postgr.es/m/527.1538598263@sss.pgh.pa.us
-
Andres Freund authored
This isn't actually a live bug, as the output happens to be the same. But it upsets tools like UBSan, which makes it worthwhile to fix. As it's an issue without practical consequences, don't backpatch. Author: Andres Freund Discussion: https://postgr.es/m/20180928001121.hhx5n6dsygqxr5wu@alap3.anarazel.de
-
Tom Lane authored
Instead of locking tables during executor startup, just Assert that suitable locks were obtained already during the parse/plan pipeline (or re-obtained by the plan cache). This must be so, else we have a hazard that concurrent DDL has invalidated the plan. This is pretty inefficient as well as undercommented, but it's all going to go away shortly, so I didn't try hard. This commit is just another attempt to use the buildfarm to see if we've missed anything in the plan to simplify the executor's table management. Note that the change needed here in relation_open() exposes that parallel workers now really are accessing tables without holding any lock of their own, whereas they were not doing that before this commit. This does not give me a warm fuzzy feeling about that aspect of parallel query; it does not seem like a good design, and we now know that it's had exactly no actual testing. I think that we should modify parallel query so that that change can be reverted. Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
-
Andres Freund authored
I (Andres) was more than a bit hasty in committing 33001fd7 after last-minute changes, leading to a number of problems (JIT output was only shown for JIT in parallel workers, and plain EXPLAIN without ANALYZE didn't work). Lukas luckily found these issues quickly. Instead of combining instrumentation in standard_ExecutorEnd(), do so on demand in the new ExplainPrintJITSummary(). Also update a documentation example of the JIT output, changed in 52050ad8. Author: Lukas Fittl, with minor changes by me Discussion: https://postgr.es/m/CAP53PkxmgJht69pabxBXJBM+0oc6kf3KHMborLP7H2ouJ0CCtQ@mail.gmail.com Backpatch: 11, where JIT compilation was introduced
-
Tom Lane authored
Although all known platforms define "long long" as 64 bits, it still feels a bit shaky to be using "va_arg(args, int64)" to pull out an argument that the caller thought was declared "long long". The reason it was coded like this, way back in commit 3311c766, was to work around the possibility that the compiler had no type named "long long" --- and, at the time, that it maybe didn't have 64-bit ints at all. Now that we're requiring compilers to support C99, those concerns are moot. Let's make the code clearer and more bulletproof by writing "long long" where we mean "long long". This does introduce a hazard that we'd inefficiently use 128-bit arithmetic to convert plain old integers. The way to tackle that would be to provide two versions of fmtint(), one for "long long" and one for narrower types. Since, as of today, no platforms require that, we won't bother with the extra code for now. Discussion: https://postgr.es/m/1680.1538587115@sss.pgh.pa.us
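A small standalone C illustration of the point (not the actual snprintf.c code): pull the argument back out with the type the caller actually passed.

    #include <stdarg.h>
    #include <stdio.h>

    /*
     * When the caller pushes a "long long" argument, va_arg(args, long long)
     * matches that type exactly.  Using a typedef such as int64 only happens
     * to work because the two types share a representation on supported
     * platforms.
     */
    static long long
    first_longlong(const char *fmt, ...)
    {
        va_list     args;
        long long   val;

        va_start(args, fmt);
        val = va_arg(args, long long);      /* clearer than va_arg(args, int64) */
        va_end(args);
        return val;
    }

    int
    main(void)
    {
        printf("%lld\n", first_longlong("%lld", 1234567890123LL));
        return 0;
    }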
-
Tom Lane authored
This case occurs often enough (around 45% of conversion specs executed in our regression tests are just "%s") that it's worth an extra test per conversion spec to allow skipping all the logic associated with field widths and padding when it happens. Discussion: https://postgr.es/m/26193.1538582367@sss.pgh.pa.us
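A hedged standalone sketch of such a fast path (invented names; not the actual dopr() logic):

    #include <string.h>

    /*
     * If the conversion spec is exactly "%s" -- no flags, width, precision
     * or length modifier -- copy the string out directly, skipping all of
     * the width/padding machinery.  Sketch only; buffer handling simplified.
     */
    static int
    emit_conversion(char *dst, size_t dstlen, const char *spec, const char *value)
    {
        if (dstlen == 0)
            return -1;

        if (spec[0] == '%' && spec[1] == 's' && spec[2] == '\0')
        {
            size_t  len = strlen(value);

            if (len >= dstlen)
                len = dstlen - 1;
            memcpy(dst, value, len);
            dst[len] = '\0';
            return (int) len;
        }

        /* ... otherwise fall through to the general width/padding logic ... */
        return -1;
    }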
-
Tom Lane authored
In combination, these changes make our version of snprintf as fast or faster than most platforms' native snprintf, except for cases involving floating-point conversion (which we still delegate to the native sprintf). The speed penalty for a float conversion is down to around 10% though, much better than before. Notable changes:
* Rather than always parsing the format twice to see if it contains instances of %n$, do the extra scan only if we actually find a $. This obviously wins for non-localized formats, and even when there is use of %n$, we can avoid scanning text before the first % twice.
* Use strchrnul() if available to find the next %, and emit the literal text between % escapes as strings rather than char-by-char.
* Create a bespoke function (dopr_outchmulti) for the common case of emitting N copies of the same character, in place of writing loops around dopr_outch.
* Simplify construction of the format string for invocations of sprintf for floats.
* Const-ify some internal functions, and avoid unnecessary use of pass-by-reference arguments.
Patch by me, reviewed by Andres Freund Discussion: https://postgr.es/m/11787.1534530779@sss.pgh.pa.us
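A hedged standalone C sketch of two of the techniques listed above: finding the next '%' with a strchrnul()-style helper and emitting literal runs and repeated pad characters in chunks rather than one character at a time. Names are invented; this is not the actual snprintf.c code.

    #include <stdio.h>
    #include <string.h>

    /* Portable stand-in for the GNU strchrnul() extension. */
    static const char *
    my_strchrnul(const char *s, int c)
    {
        const char *p = strchr(s, c);

        return p ? p : s + strlen(s);
    }

    /* Emit the literal text before the next '%' as one chunk, not char-by-char. */
    static const char *
    emit_literal_run(const char *format, FILE *out)
    {
        const char *next = my_strchrnul(format, '%');

        if (next > format)
            fwrite(format, 1, (size_t) (next - format), out);
        return next;
    }

    /* Emit N copies of the same character (e.g. padding) in bounded chunks. */
    static void
    emit_char_multi(char c, int count, FILE *out)
    {
        char    buf[32];

        memset(buf, c, sizeof(buf));
        while (count > 0)
        {
            int     chunk = count > (int) sizeof(buf) ? (int) sizeof(buf) : count;

            fwrite(buf, 1, (size_t) chunk, out);
            count -= chunk;
        }
    }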
-
Amit Kapila authored
The API (EOH_flatten_into) that flattens the expanded value representation expects the target address to be maxaligned. All of its callers adhere to that principle except when serializing datums for parallel query. Fix that usage. Diagnosed-by: Tom Lane Author: Tom Lane and Amit Kapila Backpatch-through: 9.6 Discussion: https://postgr.es/m/11629.1536550032@sss.pgh.pa.us
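For reference, a standalone C sketch of what "maxaligned" means here: round a size (or address offset) up to the platform's strictest alignment boundary. The macro below mirrors the idea of MAXALIGN but is a generic stand-in that assumes 8-byte maximum alignment.

    #include <stdint.h>
    #include <stdio.h>

    #define MAXIMUM_ALIGNOF 8       /* assumed for this sketch */
    #define MAXALIGN(LEN) \
        (((uintptr_t) (LEN) + (MAXIMUM_ALIGNOF - 1)) & ~((uintptr_t) (MAXIMUM_ALIGNOF - 1)))

    int
    main(void)
    {
        /* A target address for flattening must satisfy this rounding. */
        printf("%lu %lu %lu\n",
               (unsigned long) MAXALIGN(1),     /* 8 */
               (unsigned long) MAXALIGN(8),     /* 8 */
               (unsigned long) MAXALIGN(13));   /* 16 */
        return 0;
    }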
-
- 02 Oct, 2018 7 commits
-
-
Andrew Dunstan authored
Cygwin has been building and linking against static libraries. Although a bug, this has been relatively harmless until now, when it has caused errors due to changes in the way we build certain libraries. So this patch makes things work the way we always intended, namely that we would link against the dynamic libraries (cygpq.dll etc.) and simply not build the static libraries. The downstream packagers have been doing this for some time, so this just aligns with their practice. Extracted from a patch by Marco Atzeri, with a suggestion from Tom Lane. Discussion: https://postgr.es/m/1056.1538235347@sss.pgh.pa.us
-
Tom Lane authored
Instead of recomputing the required lock levels in all these places, just use what commit fdba460a made the parser store in the RTE fields. This already simplifies the code measurably in these places, and follow-on changes will remove a bunch of no-longer-needed infrastructure. In a few cases, this change causes us to acquire a higher lock level than we did before. This is OK primarily because said higher lock level should've been acquired already at query parse time; thus, we're saving a useless extra trip through the shared lock manager to acquire a lesser lock alongside the original lock. The only known exception to this is that re-execution of a previously planned SELECT FOR UPDATE/SHARE query, for a table that uses ROW_MARK_REFERENCE or ROW_MARK_COPY methods, might have gotten only AccessShareLock before. Now it will get RowShareLock like the first execution did, which seems fine. While there's more to do, push it in this state anyway, to let the buildfarm help verify that nothing bad happened. Amit Langote, reviewed by David Rowley and Jesper Pedersen, and whacked around a bit more by me Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
-
Andres Freund authored
It's inefficient to use a single slot for mapping between tuple descriptors for multiple tuples, as previously done when using ConvertPartitionTupleSlot(), because that means the slot's tuple descriptor changes for every tuple. Previously we also, via ConvertPartitionTupleSlot(), built new tuples after the mapping even in cases where we, immediately afterwards, accessed individual columns again. Refactor the code so that one slot, created on demand, is used for each partition. That avoids having to change the descriptor (and allows the use of the more efficient "fixed" tuple slots). Then use slot-to-slot mapping, to avoid unnecessarily forming a tuple. As the naming between the tuple and slot mapping functions wasn't consistent, rename them to execute_attr_map_{tuple,slot}. It's likely that we'll also rename convert_tuples_by_* to denote that these functions "only" build a map, but that's left for later. Author: Amit Khandekar and Amit Langote, editorialized by me Reviewed-By: Amit Langote, Amit Khandekar, Andres Freund Discussion: https://postgr.es/m/CAJ3gD9fR0wRNeAE8VqffNTyONS_UfFPRpqxhnD9Q42vZB+Jvpg@mail.gmail.com https://postgr.es/m/e4f9d743-cd4b-efb0-7574-da21d86a7f36%40lab.ntt.co.jp Backpatch: -
-
Tom Lane authored
Previously, we used the platform's NL_ARGMAX if any, otherwise 16. The trouble with this is that the platform value is hugely variable, ranging from the POSIX-minimum 9 to as much as 64K on recent FreeBSD. Values of more than a dozen or two have no practical use and slow down the initialization of the argtypes array. Worse, they cause snprintf.c to consume far more stack space than was the design intention, possibly resulting in stack-overflow crashes. Standardize on 31, which is comfortably more than we need (it looks like no existing translatable message has more than about 10 parameters). I chose that, not 32, to make the array sizes powers of 2, for some possible small gain in speed of the memset. The lack of reported crashes suggests that the set of platforms we use snprintf.c on (in released branches) may have no overlap with the set where NL_ARGMAX has unreasonably large values. But that's not entirely clear, so back-patch to all supported branches. Per report from Mateusz Guzik (via Thomas Munro). Discussion: https://postgr.es/m/CAEepm=3VF=PUp2f8gU8fgZB22yPE_KBS0+e1AHAtQ=09schTHg@mail.gmail.com
-
Tom Lane authored
The variants of these functions that take numeric inputs (OIDs or column numbers) are supposed to return NULL rather than failing on bad input; this rule reduces problems with snapshot skew when queries apply the functions to all rows of a catalog. has_column_privilege() had careless handling of the case where the table OID didn't exist. You might get something like this:
    select has_column_privilege(9999,'nosuchcol','select');
    ERROR: column "nosuchcol" of relation "(null)" does not exist
or you might get a crash, depending on the platform's printf's response to a null string pointer. In addition, while applying the column-number variant to a dropped column returned NULL as desired, applying the column-name variant did not:
    select has_column_privilege('mytable','........pg.dropped.2........','select');
    ERROR: column "........pg.dropped.2........" of relation "mytable" does not exist
It seems better to make this case return NULL as well. Also, the OID-accepting variants of has_foreign_data_wrapper_privilege, has_server_privilege, and has_tablespace_privilege didn't follow the principle of returning NULL for nonexistent OIDs. Superusers got TRUE, everybody else got an error. Per investigation of Jaime Casanova's report of a new crash in HEAD. These behaviors have been like this for a long time, so back-patch to all supported branches. Patch by me; thanks to Stephen Frost for discussion and review Discussion: https://postgr.es/m/CAJGNTeP=-6Gyqq5TN9OvYEydi7Fv1oGyYj650LGTnW44oAzYCg@mail.gmail.com
-
Michael Paquier authored
The example used in the documentation is outdated as well. This is an oversight from 0ac5ad51, which bumped up pgrowlocks but forgot some bits of the documentation. Reported-by: Chris Wilson Discussion: https://postgr.es/m/153838692816.2950.12001142346234155699@wrigleys.postgresql.org Backpatch-through: 9.3
-
Amit Kapila authored
Currently, we don't have an explicit test to pass expanded-value representations to workers, so we don't know whether it works on all kinds of platforms. We suspect that the current code won't work on alignment-sensitive hardware. This commit will test that aspect and can lead to failures on some of the buildfarm machines, which we will fix in a later commit. Author: Tom Lane and Amit Kapila Discussion: https://postgr.es/m/11629.1536550032@sss.pgh.pa.us
-
- 01 Oct, 2018 6 commits
-
-
Michael Paquier authored
VACUUM and ANALYZE share similar logic when it comes to opening a relation to work on, in terms of how the relation is opened, the order in which locks are tried, and how logs should be generated when something does not work as expected. This commit refactors things so that both use the same code path to handle how a relation is opened, making the integration of new options easier. Author: Michael Paquier Reviewed-by: Nathan Bossart Discussion: https://postgr.es/m/20180927075152.GT1659@paquier.xyz
-
Peter Eisentraut authored
This was claimed to have been done in 0a63f996, but that actually only changed the documentation and not the grammar. (That commit did fully change it for CREATE TRIGGER.)
-
Tom Lane authored
Opening a relation with no lock at all is unsafe; there's no guarantee that we'll see a consistent state of the relevant catalog entries. While use of MVCC scans to read the catalogs partially addresses that complaint, it's still possible to switch to a new catalog snapshot partway through loading the relcache entry. Moreover, whether or not you trust the reasoning behind sometimes using less than AccessExclusiveLock for ALTER TABLE, that reasoning is certainly not valid if concurrent users of the table don't hold a lock corresponding to the operation they want to perform. Hence, add some assertion-build-only checks that require any caller of relation_open(x, NoLock) to hold at least AccessShareLock. This isn't a full solution, since we can't verify that the lock level is semantically appropriate for the action --- but it's definitely of some use, because it's already caught two bugs. We can also assert that callers of addRangeTableEntryForRelation() hold at least the lock level specified for the new RTE. Amit Langote and Tom Lane Discussion: https://postgr.es/m/16565.1538327894@sss.pgh.pa.us
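A hedged sketch of the shape of such a check inside a relation_open()-style function. The CheckRelationLockedByMe() helper name and signature below are assumptions based on this description, the header list is approximate, and the surrounding code is simplified:

    #include "postgres.h"
    #include "storage/lmgr.h"
    #include "utils/rel.h"
    #include "utils/relcache.h"

    static Relation
    open_relation_checked(Oid relationId, LOCKMODE lockmode)
    {
        Relation    r;

        /* Take the requested lock, if any. */
        if (lockmode != NoLock)
            LockRelationOid(relationId, lockmode);

        r = RelationIdGetRelation(relationId);
        if (!RelationIsValid(r))
            elog(ERROR, "could not open relation with OID %u", relationId);

    #ifdef USE_ASSERT_CHECKING
        /*
         * Assertion-build-only sanity check: callers passing NoLock must
         * already hold at least AccessShareLock on the relation.
         * Helper name and signature are assumptions for this sketch.
         */
        if (lockmode == NoLock)
            Assert(CheckRelationLockedByMe(r, AccessShareLock, true));
    #endif

        return r;
    }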
-
Tom Lane authored
contrib/pageinspect's tuple_data_split() function thought it could get away with opening the referenced relation with NoLock. In practice there's no guarantee that the current session holds any lock on that rel (even if we just read a page from it), so that this is unsafe. Switch to using AccessShareLock. Also, postpone closing the relation, so that we needn't copy its tupdesc. Also, fix unsafe use of att_isnull() for attributes past the end of the tuple. Per testing with a patch that complains if we open a relation without holding any lock on it. I don't plan to back-patch that patch, but we should close the holes it identifies in all supported branches. Discussion: https://postgr.es/m/2038.1538335244@sss.pgh.pa.us
-
Tom Lane authored
If the column being modified is referenced by a foreign key constraint of another table, ALTER TABLE would open the other table (to re-parse the constraint's definition) without having first obtained a lock on it. This was evidently intentional, but that doesn't mean it's really safe. It's especially not safe in 9.3, which pre-dates use of MVCC scans for catalog reads, but even in current releases it doesn't seem like a good idea. We know we'll need AccessExclusiveLock shortly to drop the obsoleted constraint, so just get that a little sooner to close the hole. Per testing with a patch that complains if we open a relation without holding any lock on it. I don't plan to back-patch that patch, but we should close the holes it identifies in all supported branches. Discussion: https://postgr.es/m/2038.1538335244@sss.pgh.pa.us
-
Peter Eisentraut authored
Be more specific about when and how to create the directory and what permissions it should have. Discussion: https://www.postgresql.org/message-id/flat/5ca60e1a-26f9-89fd-e912-021dd2b8afe2%40gmail.com
-
- 30 Sep, 2018 1 commit
-
-
Tom Lane authored
Add RangeTblEntry.rellockmode, which records the appropriate lock mode for each RTE_RELATION rangetable entry (either AccessShareLock, RowShareLock, or RowExclusiveLock depending on the RTE's role in the query). This patch creates the field and makes all creators of RTE nodes fill it in reasonably, but for the moment nothing much is done with it. The plan is to replace assorted post-parser logic that re-determines the right lockmode to use with simple uses of rte->rellockmode. For now, just add Asserts in each of those places that the rellockmode matches what they are computing today. (In some cases the match isn't perfect, so the Asserts are weaker than you might expect; but this seems OK, as per discussion.) This passes check-world for me, but it seems worth pushing in this state to see if the buildfarm finds any problems in cases I failed to test. catversion bump due to change of stored rules. Amit Langote, reviewed by David Rowley and Jesper Pedersen, and whacked around a bit more by me Discussion: https://postgr.es/m/468c85d9-540e-66a2-1dde-fec2b741e688@lab.ntt.co.jp
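A hedged C sketch of the lock-mode choice described above, reduced to a simple decision function (the function and its arguments are invented for illustration; the real logic lives in the parser and plan cache):

    #include "postgres.h"
    #include "storage/lockdefs.h"

    /* Pick the rellockmode for an RTE_RELATION entry from its role in the query. */
    static LOCKMODE
    choose_rellockmode(bool is_target_of_modification,
                       bool is_locked_for_update_or_share)
    {
        if (is_target_of_modification)
            return RowExclusiveLock;        /* INSERT/UPDATE/DELETE target */
        if (is_locked_for_update_or_share)
            return RowShareLock;            /* SELECT ... FOR UPDATE/SHARE */
        return AccessShareLock;             /* plain read */
    }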
-
- 28 Sep, 2018 6 commits
-
-
Stephen Frost authored
The connection authorized message has quite a bit of useful information in it, but didn't include the application_name (when provided), so let's add that, as it can be very useful. Note that at the point where we're emitting the connection authorized message, we haven't processed GUCs, so it's not possible to get this by using log_line_prefix (which pulls from the GUC). There's also something to be said for having this included in the connection authorized message and then not needing to repeat it for every line, as having it in log_line_prefix would do. The GUC cleans the application name to pure ASCII, so do that here too, but pull the logic for cleaning up a string out into its own function in common and re-use it from those places, as well as from check_cluster_name, which was doing the same thing. Author: Don Seiler <don@seiler.us> Discussion: https://postgr.es/m/CAHJZqBB_Pxv8HRfoh%2BAB4KxSQQuPVvtYCzMg7woNR3r7dfmopw%40mail.gmail.com
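A hedged standalone C sketch of the pure-ASCII cleanup idea (replace anything outside printable ASCII with '?'); this mirrors the described behavior but is not the actual shared helper:

    #include <stdio.h>

    /* Replace any byte outside the printable ASCII range (32..126) with '?', in place. */
    static void
    clean_ascii(char *str)
    {
        for (char *p = str; *p != '\0'; p++)
        {
            unsigned char c = (unsigned char) *p;

            if (c < 32 || c > 126)
                *p = '?';
        }
    }

    int
    main(void)
    {
        char    appname[] = "py\xC3\xA4app\t1";

        clean_ascii(appname);
        printf("%s\n", appname);    /* prints "py??app?1" */
        return 0;
    }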
-
Tom Lane authored
Give a specific error complaining about lack of posix_fadvise() when someone tries to set effective_io_concurrency > 0 on platforms without that. This probably isn't worth extensive back-patching, but I (tgl) felt cramming it into v11 was reasonable. James Robinson Discussion: https://postgr.es/m/153771876450.14994.560017943128223619@wrigleys.postgresql.org Discussion: https://postgr.es/m/A3942987-5BC7-4F05-B54D-2A0EC2914B33@jlr-photo.com
-
Tom Lane authored
Looks like we need to pull in $libpgcommon in a couple more places than before. Per buildfarm.
-
Tom Lane authored
Also try to make the comment suggesting that this might be needed more intelligible. Per buildfarm.
-
Tom Lane authored
Build a third version of libpgcommon.a, with -fPIC and -DFRONTEND, as commit ea53100d did for src/port. Use that in libpq to avoid symlinking+rebuilding source files retail. Also adjust ecpg to use the new src/port and src/common libraries. Arrange to install these libraries, too, to simplify out-of-tree builds of shared libraries that need any of these modules. Discussion: https://postgr.es/m/13022.1538003440@sss.pgh.pa.us Discussion: https://postgr.es/m/E1g5Y8r-0006vs-QA@gemulon.postgresql.org
-
Tom Lane authored
Client applications should get this function, if they need it, from libpgport. The fact that it's exported from libpq is a hack left over from before we set up libpgport. It's never been documented, and there's no good reason for non-PG code to be calling it anyway, so hopefully this won't cause any problems. Moreover, with the previous setup it was not real clear whether our clients that use the function were getting it from libpgport or libpq, so this might actually prevent problems. The reason for changing it now is that in the wake of commit ea53100d, some linkers won't export the symbol, apparently because it's coming from a .a library instead of a .o file. We could get around that by continuing to symlink pqsignal.c into libpq as before; but unless somebody complains very hard, I don't want to adopt such a kluge. Discussion: https://postgr.es/m/13022.1538003440@sss.pgh.pa.us Discussion: https://postgr.es/m/E1g5Y8r-0006vs-QA@gemulon.postgresql.org
-