- 09 Apr, 2018 15 commits
-
-
Tom Lane authored
Commit 372728b0 created some problems for usages like building a subdirectory without having first done "make all" at the top level, or for proceeding directly to "make install" without "make all". The only reasonably clean way to fix this seems to be to force the submake-generated-headers rule to fire in *any* "make all" or "make install" command anywhere in the tree.

To avoid lots of redundant work, as well as parallel make jobs possibly clobbering each others' output, we still need to be sure that the rule fires only once in a recursive build. For that, adopt the same MAKELEVEL hack previously used for "temp-install". But try to document it a bit better.

The submake-errcodes mechanism previously used in src/port/ and src/common/ is subsumed by this, so we can get rid of those special cases. It was inadequate for src/common/ anyway after the aforesaid commit, and it always risked parallel attempts to build errcodes.h.

Discussion: https://postgr.es/m/E1f5FAB-0006LU-MB@gemulon.postgresql.org
-
Alvaro Herrera authored
In 499be013 support for pruning unneeded Append subnodes was added. The logic in that commit was not correctly checking if the next subplan was in fact a valid subplan. This could cause parallel worker processes to be given a subplan to work on which didn't require any work.

Per code review following an otherwise unexplained regression failure in buildfarm member Pademelon. (We haven't been able to reproduce the failure, so this is a bit of a blind fix in terms of whether it'll actually fix it; but it is a clear bug nonetheless.)

In passing, also add a comment to explain what first_partial_plan means.

Author: David Rowley
Discussion: https://postgr.es/m/CAKJS1f_E5r05hHUVG3UmCQJ49DGKKHtN=SHybD44LdzBn+CJng@mail.gmail.com
-
Magnus Hagander authored
Author: Michael Paquier
-
Magnus Hagander authored
Previously a warning was printed, but the tool actually kept running even when invoked as root. This is something we definitely want to prevent, but since it means a behavior change, not backpatching.

Author: Michael Paquier
-
Tom Lane authored
Make these scripts emit just one log message when they run, not one per output file. The latter is way too verbose in the wake of commit 372728b0. The specific wording used is what already existed in the MSVC scripts.

John Naylor

Discussion: https://postgr.es/m/11103.1523208822@sss.pgh.pa.us
-
Tom Lane authored
In its original form, reformat_dat_file.pl smashed consecutive blank lines to a single blank line, which was helpful for mopping up excess whitespace during the bootstrap data format conversion. But going forward, there seems little reason to do that; if developers want to put in multiple blank lines, let 'em. This makes it conform to the documentation I (tgl) wrote, too.

In passing, clean up some sloppy markup choices in bki.sgml.

John Naylor

Discussion: https://postgr.es/m/28827.1523039259@sss.pgh.pa.us
-
Tom Lane authored
In commit 9c0a0de4, I'd failed to notice that catalog/catalog.h should also be considered a frontend-unsafe header, because it includes (and needs) the full form of pg_class.h, not to mention relcache.h. However, various frontend code was depending on it to get TABLESPACE_VERSION_DIRECTORY, so refactoring of some sort is called for.

The cleanest answer seems to be to move TABLESPACE_VERSION_DIRECTORY, as well as the OIDCHARS symbol, to common/relpath.h. Do that, and mop up inclusions as necessary. (I found that quite a few current users of catalog/catalog.h don't seem to need it at all anymore, apparently as a result of the refactorings that created common/relpath.[hc]. And initdb.c needed it only as a route to pg_class_d.h.)

Discussion: https://postgr.es/m/6629.1523294509@sss.pgh.pa.us
-
Magnus Hagander authored
Lack thereof pointed out by Tom Lane.
-
Magnus Hagander authored
This reverts the backend sides of commit 1fde38be. I have, at least for now, left the pg_verify_checksums tool in place, as this tool can be very valuable without the rest of the patch as well, and since it's a read-only tool that only runs when the cluster is down it should be a lot safer.
-
Teodor Sigaev authored
Add missing description of pg_constraint.conincluding.

Shinoda, Noriyoshi and Alexander Korotkov
-
Alvaro Herrera authored
Fix a couple of typos, and update a comment about why we set a BMS to NULL.

Author: David Rowley
Discussion: http://postgr.es/m/CAKJS1f-tux=KdUz6ENJ9GHM_V2qgxysadYiOyQS9Ko9PTteVhQ@mail.gmail.com
-
Alvaro Herrera authored
We were initializing a BMS to merely reference an existing one, which would cause a double-free (and a crash) when the recursive algorithm tried to intersect it with an empty one. Fix it by creating a copy at initialization time.

Reported-by: sqlsmith (by way of Andreas Seltenreich)
Author: Amit Langote
Discussion: https://postgr.es/m/87in923lyw.fsf@ansel.ydns.eu
-
Heikki Linnakangas authored
Author: Kyotaro Horiguchi
-
Teodor Sigaev authored
Repeating these tests adds unnecessary cycles, since no improvement in test coverage is expected. Cleanup from commit 8224de4f.

Peter Geoghegan
-
Stephen Frost authored
We don't support the same kind of permissions tests on Windows/MINGW, so these tests really shouldn't be getting run on that platform. Per buildfarm.
-
- 08 Apr, 2018 13 commits
-
-
Tom Lane authored
CheckIndexCompatible() misused ComputeIndexAttrs() by not bothering to fill ii_NumIndexAttrs and ii_NumIndexKeyAttrs in the passed IndexInfo. Omission of ii_NumIndexAttrs was previously unimportant, but now this matters because ComputeIndexAttrs depends on ii_NumIndexKeyAttrs to decide how many columns it needs to report on.

(BTW, the fact that this oversight wasn't detected earlier implies that we have no regression test verifying whether CheckIndexCompatible ever succeeds. Bad dog. Not the job of this patch to fix it, though.)

Also, change the API of ComputeIndexAttrs so that it fills the opclass output array for all column positions, as it does for the options output array; positions for non-key index columns are filled with zeroes. This isn't directly fixing any bug, but it seems like a good idea.

Per valgrind failure reports from buildfarm.

Alexander Korotkov, tweaked a bit by me

Discussion: https://postgr.es/m/CAPpHfduWrysrT-qAhn+3Ea5+Mg6Vhc-oA6o2Z-hRCPRdvf3tiw@mail.gmail.com
-
Tom Lane authored
This section confusingly used both "infile" and "outfile" to refer to the same file, i.e. the textual output of pg_dump. Use "dumpfile" for both cases, per suggestion from Jonathan Katz.

Discussion: https://postgr.es/m/152311295239.31235.6487236091906987117@wrigleys.postgresql.org
-
Tom Lane authored
Write ',' and ';' for typdelim values instead of the obscurantist ASCII octal equivalents. Not sure why anybody ever thought the latter were better; maybe it had something to do with lack of a better quoting convention, twenty-plus years ago?

Reassign a couple of high-numbered OIDs that were left in during yesterday's mad rush to commit stuff of uncertain internal temperature.

The latter requires a catversion bump, though the former wouldn't since the end-result catalog data is unchanged.
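For reference, the two delimiters in question can be seen with a quick catalog query (illustrative; box is the classic case of a ';' array delimiter):

    SELECT typname, typdelim FROM pg_type WHERE typname IN ('point', 'box');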
-
Tom Lane authored
Addition of the catalog/pg_foo_d.h headers seems to have pushed us over the brink of the maximum command line length for some older platforms during "make install" for our header files. The main culprit here is repetition of the target directory path, which could be long. Rearrange so that we don't repeat that once per file, but only once per subdirectory.

Per buildfarm.

Discussion: https://postgr.es/m/E1f5Dwm-0004n5-7O@gemulon.postgresql.org
-
Tom Lane authored
Traditionally, include/catalog/pg_foo.h contains extern declarations for functions in backend/catalog/pg_foo.c, in addition to its function as the authoritative definition of the pg_foo catalog's rowtype. In some cases, we'd been forced to split out those extern declarations into separate pg_foo_fn.h headers so that the catalog definitions could be #include'd by frontend code. That problem is gone as of commit 9c0a0de4, so let's undo the splits to make things less confusing.

Discussion: https://postgr.es/m/23690.1523031777@sss.pgh.pa.us
-
Tom Lane authored
Everything of use to frontend code should now appear in the _d.h files, and making this change frees us from needing to worry about whether the catalog header files proper are frontend-safe.

Remove src/interfaces/ecpg/ecpglib/pg_type.h entirely, as the previous commit reduced it to a confusingly-named wrapper around pg_type_d.h.

In passing, make test_rls_hooks.c follow project convention of including our own files with #include "" not <>.

Discussion: https://postgr.es/m/23690.1523031777@sss.pgh.pa.us
-
Tom Lane authored
Historically, the initial catalog data to be installed during bootstrap has been written in DATA() lines in the catalog header files. This had lots of disadvantages: the format was badly underdocumented, it was very difficult to edit the data in any mechanized way, and due to the lack of any abstraction the data was verbose, hard to read/understand, and easy to get wrong.

Hence, move this data into separate ".dat" files and represent it in a way that can easily be read and rewritten by Perl scripts. The new format is essentially "key => value" for each column; while it's a bit repetitive, explicit labeling of each value makes the data far more readable and less error-prone. Provide a way to abbreviate entries by omitting field values that match a specified default value for their column. This allows removal of a large amount of repetitive boilerplate and also lowers the barrier to adding new columns.

Also teach genbki.pl how to translate symbolic OID references into numeric OIDs for more cases than just "regproc"-like pg_proc references. It can now do that for regprocedure-like references (thus solving the problem that regproc is ambiguous for overloaded functions), operators, types, opfamilies, opclasses, and access methods. Use this to turn nearly all OID cross-references in the initial data into symbolic form. This represents a very large step forward in readability and error resistance of the initial catalog data. It should also reduce the difficulty of renumbering OID assignments in uncommitted patches.

Also, solve the longstanding problem that frontend code that would like to use OID macros and other information from the catalog headers often had difficulty with backend-only code in the headers. To do this, arrange for all generated macros, plus such other declarations as we deem fit, to be placed in "derived" header files that are safe for frontend inclusion. (Once clients migrate to using these pg_*_d.h headers, it will be possible to get rid of the pg_*_fn.h headers, which only exist to quarantine code away from clients. That is left for follow-on patches, however.)

The now-automatically-generated macros include the Anum_xxx and Natts_xxx constants that we used to have to update by hand when adding or removing catalog columns.

Replace the former manual method of generating OID macros for pg_type entries with an automatic method, ensuring that all built-in types have OID macros. (But note that this patch does not change the way that OID macros for pg_proc entries are built and used. It's not clear that making that match the other catalogs would be worth extra code churn.)

Add SGML documentation explaining what the new data format is and how to work with it.

Despite being a very large change in the catalog headers, there is no catversion bump here, because postgres.bki and related output files haven't changed at all.

John Naylor, based on ideas from various people; review and minor additional coding by me; previous review by Alvaro Herrera

Discussion: https://postgr.es/m/CAJVSVGWO48JbbwXkJz_yBFyGYW-M9YWxnPdxJBUosDC9ou_F0Q@mail.gmail.com
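As a concrete illustration of the "key => value" format described above, a pg_type.dat entry might look roughly like this (a hedged sketch with the field list heavily abbreviated; the real entries carry the full set of non-defaulted columns):

    { oid => '16',
      typname => 'bool', typlen => '1', typbyval => 't', typcategory => 'B',
      typinput => 'boolin', typoutput => 'boolout' },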
-
Teodor Sigaev authored
Alexander Korotkov, per gripe from Tom Lane, who noticed it on valgrind-enabled buildfarm members.
-
Teodor Sigaev authored
Use field of structure in Assert directly.

Jeff Janes
-
Andrew Gierth authored
Disable index-only scan for tests that might report variable results for "Heap Fetches" statistic due to concurrent transactions affecting whether all-visible flags can be set.

Author: David Rowley
Discussion: https://postgr.es/m/CAKJS1f_yjtHDJnDzx1uuR_3D7beDVAkNQfWJhRLA1gvPCzkAhg@mail.gmail.com
-
Andrew Gierth authored
This rectifies an oversight in commit 8224de4f, by adding a new property 'can_include' for pg_indexam_has_property, and adjusting the results of pg_index_column_has_property to give more appropriate results for INCLUDEd columns.
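A minimal way to observe the new property from SQL (an illustrative sketch; as of this commit only btree reports true):

    -- does the btree access method support INCLUDE columns?
    SELECT pg_indexam_has_property(oid, 'can_include')
    FROM pg_am WHERE amname = 'btree';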
-
Andres Freund authored
The atomics code asserts proper alignment in various places. That's mainly because the alignment of 64bit integers is not sufficient for atomic operations on all platforms. Some ABIs only have four byte alignment, but don't have atomic behavior when crossing page boundaries.

The flags code isn't affected by that however, as the type alignment always is sufficient for atomic operations. Nevertheless the code asserted alignment requirements. Before 8c3debbb it was only broken on hppa; afterwards it probably affects further platforms. Thus remove the assertions for pg_atomic_flag operators.

Per buildfarm animal pademelon.

Discussion: https://postgr.es/m/7223.1523124425@sss.pgh.pa.us
Backpatch: 9.5-
-
- 07 Apr, 2018 12 commits
-
-
Stephen Frost authored
Under EXEC_BACKEND we also need to go through the group privileges setup, since we do support that on Unixy systems, so add that to SubPostmasterMain(). Under Windows, we need to simply return true from GetDataDirectoryCreatePerm(), but that wasn't happening due to a missing #else clause.

Per buildfarm.
-
Stephen Frost authored
Allow the cluster to be optionally init'd with read access for the group. This means a relatively non-privileged user can perform a backup of the cluster without requiring write privileges, which enhances security.

The mode of PGDATA is used to determine whether group permissions are enabled for directory and file creates. This method was chosen as it's simple and works well for the various utilities that write into PGDATA.

Changing the mode of PGDATA manually will not automatically change the mode of all the files contained therein. If the user would like to enable group access on an existing cluster then changing the mode of all the existing files will be required. Note that pg_upgrade will automatically change the mode of all migrated files if the new cluster is init'd with the -g option.

Tests are included for the backend and all the utilities which operate on the PG data directory to ensure that the correct mode is set based on the data directory permissions.

Author: David Steele <david@pgmasters.net>
Reviewed-By: Michael Paquier, with discussion amongst many others.
Discussion: https://postgr.es/m/ad346fe6-b23e-59f1-ecb7-0e08390ad629%40pgmasters.net
-
Stephen Frost authored
Consolidate directory and file create permissions for tools which work with the PG data directory by adding a new module (common/file_perm.c) that contains variables (pg_file_create_mode, pg_dir_create_mode) and constants to initialize them (0600 for files and 0700 for directories).

Convert mkdir() calls in the backend to MakePGDirectory() if the original call used default permissions (always the case for regular PG directories).

Add tests to make sure permissions in PGDATA are set correctly by the tools which modify the PG data directory.

Authors: David Steele <david@pgmasters.net>, Adam Brightwell <adam.brightwell@crunchydata.com>
Reviewed-By: Michael Paquier, with discussion amongst many others.
Discussion: https://postgr.es/m/ad346fe6-b23e-59f1-ecb7-0e08390ad629%40pgmasters.net
-
Alvaro Herrera authored
Existing partition pruning is only able to work at plan time, for query quals that appear in the parsed query. This is good but limiting, as there can be parameters that appear later that can be usefully used to further prune partitions.

This commit adds support for pruning subnodes of Append which cannot possibly contain any matching tuples, during execution, by evaluating Params to determine the minimum set of subnodes that can possibly match. We support more than just simple Params in WHERE clauses. Support additionally includes:

1. Parameterized Nested Loop Joins: The parameter from the outer side of the join can be used to determine the minimum set of inner side partitions to scan.

2. Initplans: Once an initplan has been executed we can then determine which partitions match the value from the initplan.

Partition pruning is performed in two ways. When Params external to the plan are found to match the partition key, we attempt to prune away unneeded Append subplans during the initialization of the executor. This allows us to bypass the initialization of non-matching subplans, meaning they won't appear in the EXPLAIN or EXPLAIN ANALYZE output.

For parameters whose value is only known during the actual execution, the pruning of these subplans must wait. Subplans which are eliminated during this stage of pruning are still visible in the EXPLAIN output. In order to determine if pruning has actually taken place, the EXPLAIN ANALYZE must be viewed. If a certain Append subplan was never executed due to the elimination of the partition then the execution timing area will state "(never executed)". In the case of parameterized nested loops, on the other hand, the number of loops stated in the EXPLAIN ANALYZE output for certain subplans may appear lower than for others, because the list of matching subnodes is re-evaluated whenever a parameter which was found to match the partition key changes, so a given subplan may be scanned fewer times. See the sketch after this paragraph.

This commit required some additional infrastructure that permits the building of a data structure which is able to perform the translation of the matching partition IDs, as returned by get_matching_partitions, into the list index of a subpaths list, as exists in node types such as Append, MergeAppend and ModifyTable. This allows us to translate a list of clauses into a Bitmapset of all the subpath indexes which must be included to satisfy the clause list.

Author: David Rowley, based on an earlier effort by Beena Emerson
Reviewers: Amit Langote, Robert Haas, Amul Sul, Rajkumar Raghuwanshi, Jesper Pedersen
Discussion: https://postgr.es/m/CAOG9ApE16ac-_VVZVvv0gePSgkg_BwYEV1NBqZFqDR2bBE0X0A@mail.gmail.com
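A hedged sketch of what run-time pruning looks like in practice (table and partition names are hypothetical):

    CREATE TABLE parted (a int) PARTITION BY LIST (a);
    CREATE TABLE parted_1 PARTITION OF parted FOR VALUES IN (1);
    CREATE TABLE parted_2 PARTITION OF parted FOR VALUES IN (2);

    -- the initplan's value is only known at execution time, so both
    -- partitions are planned; per the description above, EXPLAIN ANALYZE
    -- should then show the non-matching subplan as "(never executed)"
    EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF)
    SELECT * FROM parted WHERE a = (SELECT 1);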
-
Alvaro Herrera authored
This works very much like the existing bms_last_member function, only it traverses through the Bitmapset in the opposite direction, from the most significant bit down to the least significant bit. A special prevbit value of -1 may be used to have the function determine the most significant bit. This is useful for starting a loop. When there are no members less than prevbit, the function returns -2 to indicate there are no more members.

Author: David Rowley
Discussion: https://postgr.es/m/CAKJS1f-K=3d5MDASNYFJpUpc20xcBnAwNC1-AOeunhn0OtkWbQ@mail.gmail.com
-
Andres Freund authored
When an update moves a row between partitions (supported since 2f178441), our normal logic for following update chains in READ COMMITTED mode doesn't work anymore. Cross-partition updates are modeled as a delete from the old partition and an insert into the new one. No ctid chain exists across partitions, and there's no convenient space to introduce that link. Not throwing an error in a partitioned context when one would have been thrown without partitioning is obviously problematic.

This commit introduces infrastructure to detect when a tuple has been moved, not just plainly deleted. That allows us to throw an error when encountering a deletion that's actually a move, while attempting to follow a ctid chain.

The row deleted as part of a cross-partition update is marked by pointing its t_ctid to an invalid block, instead of to itself as a normal update would. That was deemed to be the least invasive and most future-proof way to represent the knowledge, given how few infomask bits are there to be recycled (there are also some locking issues with using infomask bits).

External code following ctid chains should be updated to check for moved tuples. The most likely consequence of not doing so is a missed error.

Author: Amul Sul, editorialized by me
Reviewed-By: Amit Kapila, Pavan Deolasee, Andres Freund, Robert Haas
Discussion: http://postgr.es/m/CAAJ_b95PkwojoYfz0bzXU8OokcTVGzN6vYGCNVUukeUDrnF3dw@mail.gmail.com
-
Teodor Sigaev authored
This patch introduces the INCLUDE clause to index definitions. This clause specifies a list of columns which will be included as a non-key part in the index. The INCLUDE columns exist solely to allow more queries to benefit from index-only scans. Also, such columns don't need to have appropriate operator classes. Expressions are not supported as INCLUDE columns since they cannot be used in index-only scans.

Index access methods supporting INCLUDE are indicated by the amcaninclude flag in IndexAmRoutine. For now, only B-tree indexes support the INCLUDE clause.

In B-tree indexes INCLUDE columns are truncated from pivot index tuples (tuples located in non-leaf pages and high keys). Therefore, B-tree indexes now might have a variable number of attributes. This patch also provides a generic facility to support that: pivot tuples contain the number of their attributes in t_tid.ip_posid, and the free 13th bit of t_info is used to indicate that. This facility will simplify further support of index suffix truncation. The above changes are backward-compatible; pg_upgrade doesn't need special handling of B-tree indexes for that.

Bump catalog version.

Author: Anastasia Lubennikova with contribution by Alexander Korotkov and me
Reviewed by: Peter Geoghegan, Tomas Vondra, Antonin Houska, Jeff Janes, David Rowley, Alexander Korotkov
Discussion: https://www.postgresql.org/message-id/flat/56168952.4010101@postgrespro.ru
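In SQL terms, the new clause looks like this (a sketch; table and index names are hypothetical):

    CREATE TABLE t (a int, b text);
    -- b is carried in the leaf index tuples but is not part of the key,
    -- so "SELECT a, b FROM t WHERE a = 1" can use an index-only scan
    CREATE UNIQUE INDEX t_a_idx ON t (a) INCLUDE (b);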
-
Teodor Sigaev authored
Missed in commit 1c1791e0.
-
Teodor Sigaev authored
Jsonb has a complex nature, so there isn't a best-for-everything way to convert it to a tsvector for full text search. The current to_tsvector(json(b)) converts only string values, but it's possible to index keys, numeric and even boolean values. To solve that, json(b)_to_tsvector has a second required argument containing a list of the desired types of json fields. The second argument is currently a jsonb scalar or array, with the possibility of adding new options in the future.

Bump catalog version.

Author: Dmitry Dolgov with some editorialization by me
Reviewed by: Teodor Sigaev
Discussion: https://www.postgresql.org/message-id/CA+q6zcXJQbS1b4kJ_HeAOoOc=unfnOrUEL=KGgE32QKDww7d8g@mail.gmail.com
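For example (a sketch; the type names "string" and "numeric" in the filter follow the description above and are assumed rather than an exhaustive list):

    SELECT jsonb_to_tsvector('english',
                             '{"a": "flea", "b": 100, "c": true}'::jsonb,
                             '["string", "numeric"]');
    -- indexes the string and numeric values, skipping the boolean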
-
Peter Eisentraut authored
We need to wait for the initial sync of all subscriptions. On some (faster?) machines, this didn't make a difference, but the (slower?) buildfarm machines are upset.
-
Andres Freund authored
They've been broken for days, and prevent other tests from being run. The plan is to revert their addition later. Discussion: https://postgr.es/m/20180407162252.wfo5aorjrjw2n5ws@alap3.anarazel.de
-
Peter Eisentraut authored
Update the built-in logical replication system to make use of the previously added logical decoding for TRUNCATE support. Add the required truncate callback to pgoutput and a new logical replication protocol message.

Publications get a new attribute to determine whether to replicate truncate actions. When updating a publication via pg_dump from an older version, this is not set, thus preserving the previous behavior.

Author: Simon Riggs <simon@2ndquadrant.com>
Author: Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Author: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
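The new attribute is exposed through the publish parameter; a hedged sketch (publication and table names hypothetical):

    -- replicate truncations along with the other DML actions
    CREATE PUBLICATION mypub FOR TABLE mytab
        WITH (publish = 'insert, update, delete, truncate');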
-