- 05 Apr, 2018 5 commits
-
-
Teodor Sigaev authored
Masahiko Sawada
-
Teodor Sigaev authored
Masahiko Sawada
-
Simon Riggs authored
Review comments from Andres Freund

* Consolidate code into AfterTriggerGetTransitionTable()
* Rename nodeMerge.c to execMerge.c
* Rename nodeMerge.h to execMerge.h
* Move MERGE handling in ExecInitModifyTable() into ExecInitMerge() in execMerge.c
* Move mt_merge_subcommands flags into execMerge.h
* Rename opt_and_condition to opt_merge_when_and_condition
* Wordsmith various comments

Author: Pavan Deolasee
Reviewer: Simon Riggs
-
Andrew Gierth authored
Maintainers of out-of-tree PLs typically need access to the set of error codes. To avoid the need to duplicate that information in some form in PL source trees, provide errcodes.txt as part of a server installation.

Thomas Munro, based on a suggestion from Andrew Gierth

Discussion: https://postgr.es/m/87woykk7mu.fsf%40news-spur.riddles.org.uk
-
Peter Eisentraut authored
Some of these were indented using 8 spaces whereas the rest use 4 spaces, probably owing to an original difference in tab size.
-
- 04 Apr, 2018 13 commits
-
-
Alvaro Herrera authored
This is a blind fix, since I don't have SE-Linux to verify it. Per unwanted change in rhinoceros, running sepgsql tests. Noted by Tom Lane. Discussion: https://postgr.es/m/32347.1522865050@sss.pgh.pa.us
-
Stephen Frost authored
This reworks how the tests to run are defined. Instead of having to define all runs for all tests, we define those tests which should pass (generally using one of the defined broad hashes), add in any which should be specific for this test, and exclude any specific runs that shouldn't pass for this test.

This ends up removing some 4k+ lines (more than half the file) but, more importantly, greatly simplifies the way runs-to-be-tested are defined.

As discussed in the updated comments, for example, take the test which does CREATE TABLE test_table. That CREATE TABLE should show up in all 'full' runs of pg_dump, except those cases where 'test_table' is excluded, of course, and that's exactly how the test gets defined now (modulo a few other related cases, like where we dump only that table, or we dump the schema it's in, or we exclude the schema it's in):

    like => {
        %full_runs,
        %dump_test_schema_runs,
        only_dump_test_table => 1,
        section_pre_data => 1, },
    unlike => {
        exclude_dump_test_schema => 1,
        exclude_test_table => 1, },

Next, we no longer expect every run to be listed for every test. If a run is listed in 'like' (directly or through a hash) then it's a 'like', unless it's listed in 'unlike', in which case it's an 'unlike'. If it isn't listed in either, then it's considered an 'unlike' automatically.

Lastly, this changes the code to no longer use like/unlike but rather to use 'ok()' with 'diag()', which allows much more control over what gets spit out to the screen. Gone are the days of the entire dump being sent to the console; now you'll just get a couple of lines for each failing test which say the test that failed and the run that it failed on.

This covers both the pg_dump TAP tests in src/bin/pg_dump and those in src/test/modules/test_pg_dump.
-
Bruce Momjian authored
Reported-by: bbrincat@gmail.com
Discussion: https://postgr.es/m/152283596377.1441.11672249301622760943@wrigleys.postgresql.org
Author: Oleg Bartunov
Backpatch-through: 9.3
-
Tom Lane authored
BRIN indexes like to propagate additions of free space into the upper pages of their free space maps as soon as the new space is known, even when it's just on one individual index page. Previously this required calling FreeSpaceMapVacuum, which is quite an expensive thing if the map is large. Use the FreeSpaceMapVacuumRange function recently added by commit c79f6df7 to reduce the amount of work done for this purpose.

Fix a couple of places that neglected to do the upper-page vacuuming at all after recording new free space. If the policy is to be that BRIN should do that, it should do it everywhere.

Do RecordPageWithFreeSpace unconditionally in brin_page_cleanup, and do FreeSpaceMapVacuum unconditionally in brin_vacuum_scan. Because of the FSM's imprecise storage of free space, the old complications here seldom bought anything; they just slowed things down. This approach also provides a predictable path for FSM corruption to be repaired.

Remove premature RecordPageWithFreeSpace call in brin_getinsertbuffer where it's about to return an extended page to the caller. The caller should do that, instead, after it's inserted its new tuple. Fix the one caller that forgot to do so.

Simplify logic in brin_doupdate's same-page-update case by postponing brin_initialize_empty_new_buffer to after the critical section; I see little point in doing it before.

Avoid repeat calls of RelationGetNumberOfBlocks in brin_vacuum_scan. Avoid duplicate BufferGetBlockNumber and BufferGetPage calls in a couple of places where we already had the right values.

Move a BRIN_elog debug logging call out of a critical section; that's pretty unsafe and I don't think it buys us anything to not wait till after the critical section.

Move the "*extended = false" step in brin_getinsertbuffer into the routine's main loop. There's no actual bug there, since the loop can't iterate with *extended still true, but it doesn't seem very future-proof as coded; and it's certainly not documented as a loop invariant.

This is all from follow-on investigation inspired by commit c79f6df7.

Discussion: https://postgr.es/m/5801.1522429460@sss.pgh.pa.us
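As a hedged sketch of the pattern this describes (illustrative, not the commit's actual code): each recording of new free space is paired with an upper-page propagation limited to just that block, rather than a vacuum of the whole map. The helper name and the end-exclusive range convention here are assumptions.

    /*
     * Minimal sketch, assuming PostgreSQL's freespace API as of c79f6df7.
     * The end-exclusive argument convention is an assumption.
     */
    #include "storage/freespace.h"

    static void
    record_and_propagate(Relation idxrel, BlockNumber blkno, Size freespace)
    {
        /* record the page's new free space in the FSM leaf ... */
        RecordPageWithFreeSpace(idxrel, blkno, freespace);
        /* ... then update only the FSM upper pages covering this one block */
        FreeSpaceMapVacuumRange(idxrel, blkno, blkno + 1);
    }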
-
Alvaro Herrera authored
Author: Álvaro Herrera
Discussion: https://postgr.es/m/20171231194359.cvojcour423ulha4@alvherre.pgsql
Reviewed-by: Peter Eisentraut
-
Teodor Sigaev authored
Vacuum of an index consists of two stages: multiple (zero or more) ambulkdelete calls and one amvacuumcleanup call. When the workload on a particular table is append-only, autovacuum isn't intended to touch this table. However, the user may run vacuum manually in order to fill the visibility map and get the benefit of index-only scans. Then ambulkdelete wouldn't be called for the indexes of such a table (because no heap tuples were deleted); only amvacuumcleanup would be called. In this case, amvacuumcleanup would perform a full index scan for two objectives: put recyclable pages into the free space map and update index statistics.

This patch allows btvacuumcleanup to skip the full index scan when two conditions are satisfied: no pages are going to be put into the free space map, and the index statistics aren't stale. To check the first condition, we store the oldest btpo_xact in the meta-page; when it precedes RecentGlobalXmin, there are some recyclable pages. To check the second condition, we store the number of heap tuples observed during the previous full index scan by cleanup. If the fraction of newly inserted tuples is less than vacuum_cleanup_index_scale_factor, the statistics aren't considered stale. vacuum_cleanup_index_scale_factor can be set both as a reloption and as a GUC (the default).

This patch bumps the B-tree meta-page version. Upgrade of the meta-page is performed "on the fly": during VACUUM the meta-page is rewritten with the new version. No special handling in pg_upgrade is required.

Author: Masahiko Sawada, Alexander Korotkov
Review by: Peter Geoghegan, Kyotaro Horiguchi, Alexander Korotkov, Yura Sokolov
Discussion: https://www.postgresql.org/message-id/flat/CAD21AoAX+d2oD_nrd9O2YkpzHaFr=uQeGr9s1rKC3O4ENc568g@mail.gmail.com
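A minimal sketch of the two-condition skip decision, assuming the metapage field names (btm_oldest_btpo_xact, btm_last_cleanup_num_heap_tuples) and the -1 "never scanned" sentinel based on a reading of the commit; the real logic lives in nbtree's vacuum-cleanup path.

    /* Sketch only; PostgreSQL-internal context (nbtree, snapmgr) assumed. */
    #include "postgres.h"
    #include "access/nbtree.h"

    static bool
    cleanup_needs_full_scan(BTMetaPageData *metad, double num_heap_tuples)
    {
        /* Condition 1: some deleted page has become recyclable. */
        if (TransactionIdIsValid(metad->btm_oldest_btpo_xact) &&
            TransactionIdPrecedes(metad->btm_oldest_btpo_xact,
                                  RecentGlobalXmin))
            return true;

        /* Condition 2: statistics are stale. */
        if (metad->btm_last_cleanup_num_heap_tuples < 0)
            return true;        /* e.g. metapage only just upgraded */
        return num_heap_tuples >
            metad->btm_last_cleanup_num_heap_tuples *
            (1.0 + vacuum_cleanup_index_scale_factor);
    }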
-
Tom Lane authored
In commit 331b2369 I added a test to see what jsonb_plperl would do with a qr{} result. Turns out the answer is Perl version dependent. That fact doesn't bother me particularly, but coping with multiple result possibilities is way more work than this test seems worth. So remove it again. Discussion: https://postgr.es/m/E1f3MMJ-0006bf-B0@gemulon.postgresql.org
-
Tom Lane authored
Testing SvTYPE() directly is more fraught with problems than one might think, because depending on context Perl might be storing a scalar value in one of several forms, eg both numeric and string values. This resulted in Perl-version-dependent buildfarm test failures. Instead use the SvTYPE test only to distinguish non-scalar cases (AV, HV, NULL). Disambiguate scalars by testing SvIOK, SvNOK, then SvPOK. This creates a preference order for how we will resolve cases where the value is available in more than one form, which seems fine to me.

Furthermore, because we're now dealing directly with a "double" value in the SvNOK case, we can get rid of an inadequate and unportable string-comparison test for infinities, and use isinf() instead. (We do need some additional #include and "-lm" infrastructure to use that in a contrib module, per prior experiences.)

In passing, prevent the regression test results from depending on DROP CASCADE order; I've not seen that malfunction, but it's trouble waiting to happen.

Discussion: https://postgr.es/m/E1f3MMJ-0006bf-B0@gemulon.postgresql.org
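A hedged illustration of that preference order, written against the public perlguts macros; the jsonb_from_* helper names are hypothetical stand-ins for the module's real conversion code, not its API.

    #include <EXTERN.h>
    #include <perl.h>
    #include <math.h>          /* isinf(), hence the new "-lm" requirement */

    /* Sketch: classify a scalar SV in the preference order IV, NV, PV. */
    static JsonbValue *
    sv_to_jsonb_scalar(SV *sv)                      /* hypothetical name */
    {
        if (SvIOK(sv))
            return jsonb_from_integer(SvIV(sv));    /* hypothetical */

        if (SvNOK(sv))
        {
            double nv = SvNV(sv);                   /* a real C double... */

            if (isinf(nv))                          /* ...so isinf() works */
                elog(ERROR, "cannot convert infinity to jsonb");
            return jsonb_from_double(nv);           /* hypothetical */
        }

        if (SvPOK(sv))
        {
            STRLEN len;
            char *str = SvPV(sv, len);

            return jsonb_from_string(str, len);     /* hypothetical */
        }

        return jsonb_null();    /* undef maps to JSON null; hypothetical */
    }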
-
Heikki Linnakangas authored
The code before the main loop, which handles the possible 1-7 unaligned bytes at the beginning of the input, was broken and read past the end of the input if the input was very short.
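To illustrate the bug class with a generic byte-at-a-time CRC prefix (a sketch, not the actual pg_crc32c code), the fix amounts to testing the remaining length before the alignment condition, so a buffer shorter than the alignment gap is never overrun:

    #include <stdint.h>
    #include <stddef.h>

    /* Sketch: consume bytes until p is 8-byte aligned OR input runs out. */
    static uint32_t
    crc_unaligned_prefix(uint32_t crc, const unsigned char **p, size_t *len,
                         const uint32_t table[256])
    {
        /* the length check must come first */
        while (*len > 0 && ((uintptr_t) *p & 7) != 0)
        {
            crc = table[(crc ^ *(*p)++) & 0xFF] ^ (crc >> 8);
            (*len)--;
        }
        /* the 8-bytes-at-a-time main loop would follow */
        return crc;
    }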
-
Magnus Hagander authored
Hopefully make these checks stable by introducing the corruption in a table separate from pg_class, and by explicitly disabling autovacuum on those tables. Also make sure PostgreSQL is stopped while the corruption is introduced, to avoid possible caching effects.

Author: Michael Banck
-
Heikki Linnakangas authored
ARMv8 introduced special CPU instructions for calculating CRC-32C. Use them, when available, for speed.

As with the similar Intel CRC instructions, several factors affect whether the instructions can be used. The compiler intrinsics for them must be supported by the compiler, and the instructions must be supported by the target architecture. If the compilation target architecture does not support the instructions, but adding "-march=armv8-a+crc" makes them available, then we compile the code with a runtime check to determine whether the host we're running on supports them.

For the runtime check, use the glibc getauxval() function. Unfortunately, that's not very portable, but I couldn't find any more portable way to do it. If getauxval() is not available, the CRC instructions will still be used if the target architecture supports them without any additional compiler flags, but the runtime check will not be available.

Original patch by Yuqi Gu, heavily modified by me. Reviewed by Andres Freund, Thomas Munro.

Discussion: https://www.postgresql.org/message-id/HE1PR0801MB1323D171938EABC04FFE7FA9E3110%40HE1PR0801MB1323.eurprd08.prod.outlook.com
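A minimal sketch of the kind of runtime capability check described here, assuming Linux/glibc (getauxval() and the HWCAP_CRC32 bit are Linux-specific); illustrative, not PostgreSQL's actual code:

    #include <sys/auxv.h>    /* getauxval(), AT_HWCAP */
    #include <asm/hwcap.h>   /* HWCAP_CRC32 (Linux on ARMv8) */
    #include <stdbool.h>

    /* Does the CPU we are running on implement the ARMv8 CRC32 extension? */
    static bool
    crc32c_armv8_available(void)
    {
        return (getauxval(AT_HWCAP) & HWCAP_CRC32) != 0;
    }

When the check succeeds, a hardware path compiled separately with "-march=armv8-a+crc" (so the __crc32c* intrinsics from <arm_acle.h> are usable) would presumably be selected; otherwise the software fallback is used.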
-
Heikki Linnakangas authored
I missed pg_config.h.win32 in the previous commit that fixed these in pg_config.h.in.
-
Heikki Linnakangas authored
And a typo in the description of USE_SSE42_CRC32C_WITH_RUNTIME_CHECK, spotted by Daniel Gustafsson.
-
- 03 Apr, 2018 16 commits
-
-
Alvaro Herrera authored
Trigger cloning to partitions was supposed to occur for user-visible triggers only, but during development the protection that prevented it from occurring to internal triggers was lost. Reinstate it, as well as add a test case to ensure internal triggers (in the tested case, triggers implementing a deferred unique constraint) are not cloned. Without the code fix, the partitions in the test end up with different numbers of triggers, which is clearly wrong ... Bug in 86f57594. Discussion: https://postgr.es/m/20180403214903.ozfagwjcpk337uw7@alvherre.pgsql
-
Andres Freund authored
Make buffer 1 byte larger to fit a sign. It's actually impossible for there to be a sign in practice, but this is still required to keep GCC 7 happy. Cleanup from commit 51bc2717. Based on a suggestion from Peter Eisentraut.

Author: Peter Geoghegan
Reported-By: Peter Eisentraut
Discussion: https://postgr.es/m/d1cc82ed-d07d-cef2-7c00-2e987f121648@2ndquadrant.com
-
Alvaro Herrera authored
The previous coding was passing the wrong table's tuple descriptor, which accidentally failed to fail because no existing test case exercised a foreign key in which the referenced attributes are further to the right than the referencing attributes. Add a test so that further breakage is visible. This got broken in 16828d5c. Discussion: https://postgr.es/m/20180403204723.fqte755nukgm42uf@alvherre.pgsql
-
Tom Lane authored
We were being careless in some places about the order of -L switches in link command lines, such that -L switches referring to external directories could come before those referring to directories within the build tree. This made it possible to accidentally link a system-supplied library, for example /usr/lib/libpq.so, in place of the one built in the build tree. Hilarity ensued, the more so the older the system-supplied library is.

To fix, break LDFLAGS into two parts, a sub-variable LDFLAGS_INTERNAL and the main LDFLAGS variable, both of which are "recursively expanded" so that they can be incrementally adjusted by different makefiles. Establish a policy that -L switches for directories in the build tree must always be added to LDFLAGS_INTERNAL, while -L switches for external directories must always be added to LDFLAGS. This is sufficient to ensure a safe search order. For simplicity, we typically also put -l switches for the respective libraries into those same variables. (Traditional make usage would have us put -l switches into LIBS, but cleaning that up is a project for another day, as there's no clear need for it.)

This turns out to also require separating SHLIB_LINK into two variables, SHLIB_LINK and SHLIB_LINK_INTERNAL, with a similar rule about which switches go into which variable. And likewise for PG_LIBS.

Although this change might appear to affect external users of pgxs.mk, I think it doesn't; they shouldn't have any need to touch the _INTERNAL variables.

In passing, tweak src/common/Makefile so that the value of CPPFLAGS recorded in pg_config lacks "-DFRONTEND" and the recorded value of LDFLAGS lacks "-L../../../src/common". Both of those things are mistakes, apparently introduced during prior code rearrangements, as old versions of pg_config don't print them. In general we don't want anything that's specific to the src/common subdirectory to appear in those outputs.

This is certainly a bug fix, but in view of the lack of field complaints, I'm unsure whether it's worth the risk of back-patching. In any case it seems wise to see what the buildfarm makes of it first.

Discussion: https://postgr.es/m/25214.1522604295@sss.pgh.pa.us
-
Tom Lane authored
Some compilers are evidently pickier than others about whether Perl's I32 typedef should be considered equivalent to int. Dodge the problem by using a separate variable; the prior coding was a bit confusing anyway. Per buildfarm. Note this does nothing to fix the test failures due to SV_to_JsonbValue not covering enough variable types.
-
Bruce Momjian authored
Discussion: https://postgr.es/m/CAFjFpRcF-wNbe0w-m3NpkEwr9shmOZ=GoESOzd2Wog9h55J8sA@mail.gmail.com
Author: Ashutosh Bapat
-
Teodor Sigaev authored
The prefix operator, along with SP-GiST indexes, can be used as an alternative to LIKE 'word%' queries, and it doesn't have the limitation on string/prefix length that B-tree has.

Bump catalog version.

Author: Ildus Kurbangaliev, with some editing by me
Review by: Arthur Zakirov, Alexander Korotkov, and me
Discussion: https://www.postgresql.org/message-id/flat/20180202180327.222b04b3@wp.localdomain
-
Peter Eisentraut authored
-
Magnus Hagander authored
Per buildfarm animal prairiedog; solution suggested by Tom.
-
Peter Eisentraut authored
Add a new contrib module jsonb_plperl that provides a transform between jsonb and PL/Perl. jsonb values are converted to appropriate Perl types such as arrays and hashes, and vice versa.

Author: Anthony Bykov <a.bykov@postgrespro.ru>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Aleksander Alekseev <a.alekseev@postgrespro.ru>
Reviewed-by: Nikita Glukhov <n.gluhov@postgrespro.ru>
-
Magnus Hagander authored
Reorder the check for non-BLCKSZ size reads to make sure we don't abort sending the file in this case. Missed in the previous commit.
-
Magnus Hagander authored
When base backups are run over the replication protocol (for example using pg_basebackup), verify the checksums of all data blocks if checksums are enabled. If checksum failures are encountered, log them as warnings but don't abort the backup. This becomes the default behaviour in pg_basebackup (provided checksums are enabled on the server), so add a switch (-k) to disable the checks if necessary.

Author: Michael Banck
Reviewed-By: Magnus Hagander, David Steele
Discussion: https://postgr.es/m/20180228180856.GE13784@nighthawk.caipicrew.dd-dns.de
-
Simon Riggs authored
Author: Pavan Deolasee
-
Simon Riggs authored
Author: Peter Geoghegan
Recursive support removed, no tests
Docs added by me
-
Simon Riggs authored
-
Simon Riggs authored
MERGE performs actions that modify rows in the target table using a source table or query. MERGE provides a single SQL statement that can conditionally INSERT, UPDATE, or DELETE rows, a task that would otherwise require multiple PL statements. For example:

    MERGE INTO target AS t
    USING source AS s
    ON t.tid = s.sid
    WHEN MATCHED AND t.balance > s.delta THEN
      UPDATE SET balance = t.balance - s.delta
    WHEN MATCHED THEN
      DELETE
    WHEN NOT MATCHED AND s.delta > 0 THEN
      INSERT VALUES (s.sid, s.delta)
    WHEN NOT MATCHED THEN
      DO NOTHING;

MERGE works with regular and partitioned tables, including column and row security enforcement, as well as support for row, statement and transition triggers.

MERGE is optimized for OLTP and is parameterizable, though also useful for large scale ETL/ELT. MERGE is not intended to be used in preference to the existing single SQL commands for INSERT, UPDATE or DELETE, since there is some overhead. MERGE can be used statically from PL/pgSQL.

MERGE does not yet support inheritance, write rules, RETURNING clauses, updatable views or foreign tables. MERGE follows the SQL Standard per the most recent SQL:2016.

Includes full tests and documentation, including full isolation tests to demonstrate the concurrent behavior.

This version was written from scratch in 2017 by Simon Riggs, using docs and tests originally written in 2009. Later work from Pavan Deolasee has been both complex and deep, leaving the lead author credit now in his hands. Extensive discussion of concurrency from Peter Geoghegan, with thanks for the time and effort contributed.

Various issues reported via sqlsmith by Andreas Seltenreich.

Authors: Pavan Deolasee, Simon Riggs
Reviewer: Peter Geoghegan, Amit Langote, Tomas Vondra, Simon Riggs
Discussion: https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
-
- 02 Apr, 2018 6 commits
-
-
Simon Riggs authored
This reverts commit e6597dc3.
-
Simon Riggs authored
This reverts commit 354f1385.
-
Simon Riggs authored
-
Simon Riggs authored
MERGE performs actions that modify rows in the target table using a source table or query. MERGE provides a single SQL statement that can conditionally INSERT, UPDATE, or DELETE rows, a task that would otherwise require multiple PL statements. For example:

    MERGE INTO target AS t
    USING source AS s
    ON t.tid = s.sid
    WHEN MATCHED AND t.balance > s.delta THEN
      UPDATE SET balance = t.balance - s.delta
    WHEN MATCHED THEN
      DELETE
    WHEN NOT MATCHED AND s.delta > 0 THEN
      INSERT VALUES (s.sid, s.delta)
    WHEN NOT MATCHED THEN
      DO NOTHING;

MERGE works with regular and partitioned tables, including column and row security enforcement, as well as support for row, statement and transition triggers.

MERGE is optimized for OLTP and is parameterizable, though also useful for large scale ETL/ELT. MERGE is not intended to be used in preference to the existing single SQL commands for INSERT, UPDATE or DELETE, since there is some overhead. MERGE can be used statically from PL/pgSQL.

MERGE does not yet support inheritance, write rules, RETURNING clauses, updatable views or foreign tables. MERGE follows the SQL Standard per the most recent SQL:2016.

Includes full tests and documentation, including full isolation tests to demonstrate the concurrent behavior.

This version was written from scratch in 2017 by Simon Riggs, using docs and tests originally written in 2009. Later work from Pavan Deolasee has been both complex and deep, leaving the lead author credit now in his hands. Extensive discussion of concurrency from Peter Geoghegan, with thanks for the time and effort contributed.

Various issues reported via sqlsmith by Andreas Seltenreich.

Authors: Pavan Deolasee, Simon Riggs
Reviewers: Peter Geoghegan, Amit Langote, Tomas Vondra, Simon Riggs
Discussion: https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
-
Peter Eisentraut authored
-
Tom Lane authored
Coverity complained about possible buffer overrun in two places added by commit 1eb6d652, and AFAICS it's reasonable to worry: even granting that the WAL originator properly truncated the commit GID to GIDSIZE, we should not really bet our lives on that having the same value as it does in the current build. Hence, use strlcpy() not strcpy(), and adjust the pointer advancement logic to be sure we skip over the whole source string even if strlcpy() truncated it.
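A minimal sketch of the safe pattern described here, assuming PostgreSQL's GIDSIZE constant and the BSD strlcpy() (which PostgreSQL ports where the platform lacks it); the function and variable names are illustrative, not the commit's actual code:

    #include <string.h>

    #define GIDSIZE 200         /* assumption: matches PostgreSQL's limit */

    /* Copy a NUL-terminated GID out of a WAL record buffer. */
    static const char *
    read_gid(char gid[GIDSIZE], const char *bufptr)
    {
        /* strlcpy() never overruns the destination and always terminates */
        strlcpy(gid, bufptr, GIDSIZE);

        /* advance past the whole source string, even if it was truncated */
        return bufptr + strlen(bufptr) + 1;
    }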
-