- 04 Apr, 2020 7 commits
-
-
Tom Lane authored
Commit 77ec5aff added an assertion to enforce_generic_type_consistency that boils down to "if the function result is polymorphic, there must be at least one polymorphic argument". This should be true for user-created functions, but there are built-in functions for which it's not true, as pointed out by Jaime Casanova. Hence, go back to the old behavior of leaving the return type alone. There's only a limited amount of stuff you can do with such a function result, but it does work to some extent; add some regression test cases to ensure we don't break that again. Discussion: https://postgr.es/m/CAJGNTeMbhtsCUZgJJ8h8XxAJbK7U2ipsX8wkHRtZRz-NieT8RA@mail.gmail.com
-
Noah Misch authored
Until now, only selected bulk operations (e.g. COPY) did this. If a given relfilenode received both a WAL-skipping COPY and a WAL-logged operation (e.g. INSERT), recovery could lose tuples from the COPY. See src/backend/access/transam/README section "Skipping WAL for New RelFileNode" for the new coding rules. Maintainers of table access methods should examine that section.

To maintain data durability, just before commit, we choose between an fsync of the relfilenode and copying its contents to WAL. A new GUC, wal_skip_threshold, guides that choice. If this change slows a workload that creates small, permanent relfilenodes under wal_level=minimal, try adjusting wal_skip_threshold. Users setting a timeout on COMMIT may need to adjust that timeout, and log_min_duration_statement analysis will reflect time consumption moving to COMMIT from commands like COPY.

Internally, this requires a reliable determination of whether RollbackAndReleaseCurrentSubTransaction() would unlink a relation's current relfilenode. Introduce rd_firstRelfilenodeSubid. Amend the specification of rd_createSubid such that the field is zero when a new rel has an old rd_node. Make relcache.c retain entries for certain dropped relations until end of transaction.

Bump XLOG_PAGE_MAGIC, since this introduces XLOG_GIST_ASSIGN_LSN. Future servers accept older WAL, so this bump is discretionary.

Kyotaro Horiguchi, reviewed (in earlier, similar versions) by Robert Haas. Heikki Linnakangas and Michael Paquier implemented earlier designs that materially clarified the problem. Reviewed, in earlier designs, by Andrew Dunstan, Andres Freund, Alvaro Herrera, Tom Lane, Fujii Masao, and Simon Riggs. Reported by Martijn van Oosterhout. Discussion: https://postgr.es/m/20150702220524.GA9392@svana.org
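A minimal tuning sketch, assuming a wal_level=minimal setup; the 4MB value is purely illustrative:

    -- wal_skip_threshold only matters under wal_level = minimal; relations
    -- whose pending data at commit exceeds it are fsync'd instead of being
    -- copied into WAL.
    ALTER SYSTEM SET wal_skip_threshold = '4MB';  -- illustrative value
    SELECT pg_reload_conf();
    SHOW wal_skip_threshold;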
-
Peter Eisentraut authored
This reverts commit 246f136e. That patch wasn't quite complete enough. Discussion: https://www.postgresql.org/message-id/flat/E1jIpJu-0007Ql-CL%40gemulon.postgresql.org
-
Amit Kapila authored
This allows gathering WAL generation statistics for each statement execution. The three statistics collected are the number of WAL records, the number of full page writes, and the number of WAL bytes generated. This helps users with write-intensive workloads see the impact of I/O due to WAL, and also shows approximately what percentage of overall WAL is due to full page writes. In the future, this functionality could be extended to compute the exact amount of WAL data due to full page writes. This patch in itself is just infrastructure to compute WAL usage data. Upcoming patches will expose this data via EXPLAIN, auto_explain, pg_stat_statements and verbose (auto)vacuum output. Author: Kirill Bychik, Julien Rouhaud Reviewed-by: Dilip Kumar, Fujii Masao and Amit Kapila Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com
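A sketch of what the follow-up patches enable, assuming the EXPLAIN WAL option those patches add; table t is hypothetical:

    -- Counts WAL records, full page images (fpi), and bytes per statement.
    CREATE TABLE t (a int);
    EXPLAIN (ANALYZE, WAL)
    INSERT INTO t SELECT generate_series(1, 1000);
    -- The plan output is expected to carry a line roughly like:
    --   WAL: records=1000 fpi=0 bytes=59000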
-
Jeff Davis authored
Don't try to be precise about it, just use a constant 16 bytes of chunk overhead. Being smarter would require knowing the memory context where the chunk will be allocated, which is not known by all callers. Discussion: https://postgr.es/m/20200325220936.il3ni2fj2j2b45y5@alap3.anarazel.de
-
Robert Haas authored
Commit 0d8c9c12 allowed BASE_BACKUP to acquire a ResourceOwner without a transaction so that the backup manifest functionality could use a BufFile, but it overlooked the fact that when a walsender is used with replication=database, it might have a transaction in progress, because in that mode, SQL and replication commands can be mixed. Try to fix things up so that the two cleanup mechanisms don't conflict. Per buildfarm member serinus, which triggered the problem when CREATE_REPLICATION_SLOT failed from inside a transaction. It passed on the subsequent run, so evidently the failure doesn't happen every time.
-
Robert Haas authored
lapwing is complaining about a call to pg_gmtime, saying that it "expected 'const pg_time_t *' but argument is of type 'time_t *'". I at first thought that the problem had something to do with const, but Thomas Munro suggested that it might be just because time_t and pg_time_t are different identifiers. lapwing is i686 rather than x86_64, and pg_time_t is always int64, so that seems like a good guess. There is other code that just casts time_t to pg_time_t without any conversion function, so try that approach here. Introduced in commit 0d8c9c12.
-
- 03 Apr, 2020 19 commits
-
-
Robert Haas authored
Report by Tom Lane. Discussion: http://postgr.es/m/22394.1585951968@sss.pgh.pa.us
-
Robert Haas authored
It may be necessary to go further and remove this test altogether, but I'm going to try this fix first. It's not clear, at least to me, exactly how this is breaking buildfarm members, but it appears to be doing so.
-
Robert Haas authored
Second try at getting the buildfarm to be happy with 003_corruption.pl as added by commit 0d8c9c12. Per suggestion from Álvaro Herrera. Discussion: http://postgr.es/m/20200403205412.GA8279@alvherre.pgsql
-
Tom Lane authored
Move have_partkey_equi_join and match_expr_to_partition_keys to relnode.c, since they're used only there. Refactor build_joinrel_partition_info to split out the code that fills the joinrel's partition key lists; this doesn't have any non-cosmetic impact, but it seems like a useful separation of concerns. Improve assorted nearby comments. Amit Langote, with a little further editorialization by me Discussion: https://postgr.es/m/CA+HiwqG2WVUGmLJqtR0tPFhniO=H=9qQ+Z3L_ZC+Y3-EVQHFGg@mail.gmail.com
-
Robert Haas authored
The buildfarm is very unhappy right now because TAP test 003_corruption.pl uses TestLib::tempdir to generate the name of a temporary directory that is used as a tablespace name, and this results in a 'symbolic link target too long' error message on many of the buildfarm machines, but not on my machine. It appears that other people have run into similar problems in the past and that TestLib::tempdir_short was the solution, so let's try using that instead.
-
Robert Haas authored
It seems that we have a policy that every Perl subroutine should end with an explicit "return", so add explicit "return" statements to all the new subroutines added by my prior commit 0d8c9c12. Per buildfarm.
-
Robert Haas authored
A manifest is a JSON document which includes (1) the file name, size, last modification time, and an optional checksum for each file backed up, (2) timelines and LSNs for whatever WAL will need to be replayed to make the backup consistent, and (3) a checksum for the manifest itself.

By default, we use CRC-32C when checksumming data files, because we are trying to detect corruption and user error, not foil an adversary. However, pg_basebackup and the server-side BASE_BACKUP command now have options to select a different algorithm, so users wanting a cryptographic hash function can select SHA-224, SHA-256, SHA-384, or SHA-512. Users not wanting file checksums at all can disable them, or disable generation of the backup manifest altogether. Using a cryptographic hash function in place of CRC-32C consumes significantly more CPU cycles, which may slow down backups in some cases.

A new tool called pg_validatebackup can validate a backup against the manifest. If no checksums are present, it can still check that the right files exist and that they have the expected sizes. If checksums are present, it can also verify that each file has the expected checksum. Additionally, it calls pg_waldump to verify that the expected WAL files are present and parseable. Only plain format backups can be validated directly, but tar format backups can be validated after extracting them.

Robert Haas, with help, ideas, review, and testing from David Steele, Stephen Frost, Andrew Dunstan, Rushabh Lathia, Suraj Kharage, Tushar Ahuja, Rajkumar Raghuwanshi, Mark Dilger, Davinder Singh, Jeevan Chalke, Amit Kapila, Andres Freund, and Noah Misch. Discussion: http://postgr.es/m/CA+TgmoZV8dw1H2bzZ9xkKwdrk8+XYa+DC9H=F7heO2zna5T6qg@mail.gmail.com
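A hedged sketch of requesting a manifest with a cryptographic checksum via the server-side command (normally issued for you by pg_basebackup over a replication connection; option spellings follow this era's grammar and should be checked against the documentation). The resulting plain-format backup directory can then be handed to pg_validatebackup:

    -- Replication-protocol command, as sent by e.g. pg_basebackup:
    BASE_BACKUP LABEL 'nightly' MANIFEST 'yes' MANIFEST_CHECKSUMS 'SHA256';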
-
Fujii Masao authored
When the BUFFERS option is enabled, EXPLAIN includes information on buffer usage for each plan node in its output. In addition, this commit makes EXPLAIN also include information on buffer usage during the planning phase. This makes it easier to discern cases where lots of buffer accesses happen during planning. This commit revives the original commit ed7a5095 that was reverted by commit 19db23bc. The original commit had to be reverted because it caused a regression test failure on the buildfarm members prion and dory. But since commit c0885c4c got rid of the cause of the test failure, the original commit can safely be introduced again. Author: Julien Rouhaud, slightly revised by Fujii Masao Reviewed-by: Justin Pryzby Discussion: https://postgr.es/m/16109-26a1a88651e90608@postgresql.org
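A sketch of the new output; the numbers are illustrative only:

    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM pg_class;
    -- Besides the per-node Buffers lines, the output now also reports
    -- planning-phase buffer usage, roughly:
    --   Planning:
    --     Buffers: shared hit=103
    --   Planning Time: 0.533 ms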
-
Tom Lane authored
entryGetItem()'s three code paths each contained bugs associated with filtering the entries for gin_fuzzy_search_limit.

The posting-tree path failed to advance "advancePast" after having decided to filter an item. If we ran out of items on the current page and needed to advance to the next, what would actually happen is that entryLoadMoreItems() would re-load the same page. Eventually, the random dropItem() test would accept one of the same items it'd previously rejected, and we'd move on --- but it could take a while with small gin_fuzzy_search_limit. To add insult to injury, this case would inevitably cause entryLoadMoreItems() to decide it needed to re-descend from the root, making things even slower.

The posting-list path failed to implement gin_fuzzy_search_limit filtering at all, so that all entries in the posting list would be returned.

The bitmap-result path used a "gotitem" variable that it failed to update in the one place where it'd actually make a difference, ie at the one "continue" statement. I think this was unreachable in practice, because if we'd looped around then it shouldn't be the case that the entries on the new page are before advancePast. Still, the "gotitem" variable was contributing nothing to either clarity or correctness, so get rid of it.

Refactor all three loops so that the termination conditions are more alike and less unreadable.

The code coverage report showed that we had no coverage at all for the re-descend-from-root code path in entryLoadMoreItems(), which seems like a very bad thing, so add a test case that exercises it. We also had exactly no coverage for gin_fuzzy_search_limit, so add a simplistic test case that at least hits those code paths a little bit.

Back-patch to all supported branches. Adé Heyward and Tom Lane Discussion: https://postgr.es/m/CAEknJCdS-dE1Heddptm7ay2xTbSeADbkaQ8bU2AXRCVC2LdtKQ@mail.gmail.com
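For context, a minimal sketch of the GUC these paths implement; the table, column, and search term are hypothetical:

    CREATE TABLE docs (body tsvector);
    CREATE INDEX docs_gin ON docs USING gin (body);
    -- Ask GIN to return roughly this many matches at most, dropping the
    -- rest at random.
    SET gin_fuzzy_search_limit = 100;
    SELECT count(*) FROM docs WHERE body @@ to_tsquery('common_term');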
-
Fujii Masao authored
The explain regression test runs EXPLAIN commands via a function that filters unstable outputs. To produce more stable test output, this commit improves the function so that it also filters out text-mode Buffers lines. This is necessary because text-mode Buffers lines vary depending on the system state. This improvement will get rid of the regression test failure that commit ed7a5095 caused on the buildfarm members prion and dory because of the instability of Buffers lines. Author: Fujii Masao Reviewed-by: Tom Lane Discussion: https://postgr.es/m/20200403025751.GB1759@paquier.xyz
-
Alvaro Herrera authored
More work is still needed, but this is a good start. Co-authored-by: Corey Huinker <corey.huinker@gmail.com> Co-authored-by: Jürgen Purtz <juergen@purtz.de> Co-authored-by: Roger Harkavy <rogerharkavy@gmail.com> Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org> Reviewed-by: Justin Pryzby <pryzby@telsasoft.com> Discussion: https://postgr.es/m/CADkLM=eP6HOeqDjn0FdXuGRusQu4oWH_LFsKjjafmhvWD=aSpQ@mail.gmail.com
-
Robert Haas authored
Commit ac44367e introduced this problem. Report and fix by Fujii Masao. Discussion: http://postgr.es/m/d332b8f0-0c72-3cd6-6945-7a86a503662a@oss.nttdata.com
-
Robert Haas authored
These functions make it easier to write code that wants to compute a checksum for some data while allowing the user to configure the type of checksum that gets used. This is another piece of infrastructure for the upcoming patch to add backup manifests. Patch written from scratch by me, but it is similar to previous work by Rushabh Lathia and Suraj Kharage. Suraj also reviewed this version off-list. Advice on how not to break Windows from Davinder Singh. Discussion: http://postgr.es/m/CA+TgmoZV8dw1H2bzZ9xkKwdrk8+XYa+DC9H=F7heO2zna5T6qg@mail.gmail.com Discussion: http://postgr.es/m/CA+TgmoZRTBiPyvQEwV79PU1ePTtSEo2UeVncrkJMbn1sU1gnRA@mail.gmail.com
-
Tom Lane authored
contrib/lo's lo_manage() thought it could use trigdata->tg_trigger->tgname in its error message about not being called as a trigger. That naturally led to a core dump. unique_key_recheck() figured it could Assert that fcinfo->context is a TriggerData node in advance of having checked that it's being called as a trigger. That's harmless in production builds, and perhaps not that easy to reach in any case, but it's logically wrong. The first of these per bug #16340 from William Crowell; the second from manual inspection of other CALLED_AS_TRIGGER call sites. Back-patch the lo.c change to all supported branches, the other to v10 where the thinko crept in. Discussion: https://postgr.es/m/16340-591c7449dc7c8c47@postgresql.org
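For contrast with the broken non-trigger path, the supported pattern from the contrib/lo documentation (the table and column names are the documentation's example):

    CREATE EXTENSION lo;
    CREATE TABLE image (title text, raster lo);
    -- lo_manage must be invoked as a trigger; the fixed error path covers
    -- the case where it is reached in a non-trigger context.
    CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON image
        FOR EACH ROW EXECUTE FUNCTION lo_manage(raster);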
-
Fujii Masao authored
This reverts commit ed7a5095. Per buildfarm member prion.
-
Fujii Masao authored
This commit introduces new wait events RecoveryConflictSnapshot and RecoveryConflictTablespace. The former is reported while waiting for recovery conflict resolution during a vacuum cleanup; the latter is reported while waiting for recovery conflict resolution when dropping a tablespace. This commit also changes the code so that the wait event Lock is reported while waiting in ResolveRecoveryConflictWithVirtualXIDs() for recovery conflict resolution on a lock. Previously, the wait event Lock was reported for such waits everywhere except when the wait happened in ResolveRecoveryConflictWithVirtualXIDs(). Author: Masahiko Sawada Reviewed-by: Fujii Masao Discussion: https://postgr.es/m/CA+fd4k4mXWTwfQLS3RPwGr4xnfAEs1ysFfgYHvmmoUgv6Zxvmg@mail.gmail.com
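A sketch of watching for the new wait events on a standby; the event names are those introduced above:

    SELECT pid, wait_event_type, wait_event
    FROM pg_stat_activity
    WHERE wait_event IN ('RecoveryConflictSnapshot',
                         'RecoveryConflictTablespace');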
-
Michael Paquier authored
This option is similar to \gset, except that it is able to store all results from combined SQL queries into separate variables. If a query returns multiple rows, the last result is stored, and if a query returns no rows, nothing is stored. While at it, add a TAP test for \gset to check that it fails when a query returns multiple rows. Author: Fabien Coelho Reviewed-by: Ibrar Ahmed, Michael Paquier Discussion: https://postgr.es/m/alpine.DEB.2.21.1904081914200.2529@lancre
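A sketch of how such a script could look, assuming the option in question is pgbench's \aset (the text above does not name it); per the description, variables from every query of a combined statement become usable:

    -- custom pgbench script (pgbench -f script.sql)
    SELECT 1 AS one \; SELECT 2 AS two \aset
    -- both :one and :two are now set
    SELECT :one + :two AS three;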
-
Fujii Masao authored
When BUFFERS option is enabled, EXPLAIN command includes the information on buffer usage during each plan node, in its output. In addition to that, this commit makes EXPLAIN command include also the information on buffer usage during planning phase, in its output. This feature makes it easier to discern the cases where lots of buffer access happen during planning. Author: Julien Rouhaud, slightly revised by Fujii Masao Reviewed-by: Justin Pryzby Discussion: https://postgr.es/m/16109-26a1a88651e90608@postgresql.org
-
Robert Haas authored
The primary motivation for this change is that it will be used by the upcoming patch to add backup manifests, but it also seems to have some potential more general use. Andres Freund and Robert Haas Discussion: http://postgr.es/m/20200330020814.nspra4mvby42yoa4@alap3.anarazel.de
-
- 02 Apr, 2020 14 commits
-
-
Tom Lane authored
Instead of disabling autovacuum on these test tables, adjust the partition boundaries so that the child partitions are not all the same size. That should cause the planner to use a predictable ordering of the per-partition scan nodes even in cases where autovacuum causes the rowcount estimates to be off a bit. Moreover, this also lets these tests show that the planner does properly order the tables in descending size order, something that wasn't being proven before. The pagg_tab1 and pagg_tab2 partitions are still all the same size, but that should be fine, because those tables are so small that (1) autovacuum won't fire on them, and (2) even if it did, it couldn't change the reltuples value --- with only one page, it can't see just part of the relation. Discussion: https://postgr.es/m/24467.1585838693@sss.pgh.pa.us
-
Bruce Momjian authored
A join that was added in commit 9b2009c4 that did not use the INNER keyword but the existing query used it. It was cleaner to remove the existing INNER keyword. Reported-by: Peter Eisentraut Discussion: https://postgr.es/m/a1ffbfda-59d2-5732-e5fb-3df8582b6434@2ndquadrant.com Backpatch-through: 9.5
-
Bruce Momjian authored
Reported-by: Peter Eisentraut Discussion: https://postgr.es/m/750b8832-d123-7f9b-931e-43ce8321b2d7@2ndquadrant.com Backpatch-through: 9.5
-
Tom Lane authored
This patch replaces the boolean GUC log_parameters_on_error introduced by commit ba79cb5d with an integer log_parameter_max_length_on_error, adding the ability to specify how many bytes to trim each logged parameter value to. (The previous coding hard-wired that choice at 64 bytes.) In addition, add a new parameter log_parameter_max_length that provides similar control over truncation of query parameters that are logged in response to statement-logging options, as opposed to errors. Previous releases always logged such parameters in full, possibly causing log bloat. For backwards compatibility with prior releases, log_parameter_max_length defaults to -1 (log in full), while log_parameter_max_length_on_error defaults to 0 (no logging). Per discussion, log_parameter_max_length is SUSET since the DBA should control routine logging behavior, but log_parameter_max_length_on_error is USERSET because it also affects errcontext data sent back to the client. Alexey Bashtanov, editorialized a little by me Discussion: https://postgr.es/m/b10493cc-a399-a03a-67c7-068f2791ee50@imap.cc
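An illustrative configuration; the byte values are arbitrary:

    -- Trim parameters logged by statement logging to 64 bytes (SUSET),
    -- and log up to 512 bytes of each parameter on error (USERSET).
    SET log_parameter_max_length = 64;
    SET log_parameter_max_length_on_error = 512;
    -- -1 logs values in full; 0 disables that logging entirely.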
-
Tomas Vondra authored
Author: Noriyoshi Shinoda Discussion: https://www.postgresql.org/message-id/flat/20200119143707.gyinppnigokesjok@development
-
David Rowley authored
In b07642db, we added code to trigger autovacuums based on the number of INSERTs into a table. This seems to have caused some destabilization of the regression tests. Likely this is due to an autovacuum triggering mid-test and (per theory from Tom Lane) one of the test's queries causing autovacuum to skip some number of pages, resulting in the reltuples estimate changing. The failure this is attempting to fix is around the order of subnodes in an Append. Since the planner orders these according to subnode cost, it's possible that a small change in the reltuples value changes a subnode's cost enough that it swaps position with one of its fellow subnodes. The failure here only seems to occur on slower buildfarm machines; in this case lousyjack, which seems to have taken over 8 minutes to run just the partitionwise_aggregate test. Such a slow run would increase the chances of the autovacuum launcher triggering a vacuum mid-test. Faster machines run this test in sub-second time, and so have a much smaller window for an autovacuum to trigger. Here we fix this by disabling autovacuum on all tables created in the test, as sketched below. Additionally, this reverts the change made in the partitionwise_aggregate test in 2dc16efe. Discussion: https://postgr.es/m/22297.1585797192@sss.pgh.pa.us
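A sketch of the per-table switch used for this; the table name is hypothetical:

    -- The storage parameter pins down reltuples, and hence plan shape,
    -- by keeping autovacuum away from the test table.
    CREATE TABLE pagg_tab_demo (x int, y int)
        WITH (autovacuum_enabled = off);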
-
Peter Eisentraut authored
This adds SQL expressions NORMALIZE() and IS NORMALIZED to convert and check Unicode normal forms, per SQL standard. To support fast IS NORMALIZED tests, we pull in a new data file DerivedNormalizationProps.txt from Unicode and build a lookup table from that, using techniques similar to ones already used for other Unicode data. make update-unicode will keep it up to date. We only build and use these tables for the NFC and NFKC forms, because they are too big for NFD and NFKD and the improvement is not significant enough there. Reviewed-by: Daniel Verite <daniel@manitou-mail.org> Reviewed-by: Andreas Karlsson <andreas@proxel.se> Discussion: https://www.postgresql.org/message-id/flat/c1909f27-c269-2ed9-12f8-3ab72c8caf7a@2ndquadrant.com
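A quick sketch of both new constructs; assumes a UTF8 database, and U&'\0061\0301' is 'a' plus a combining acute accent:

    SELECT normalize(U&'\0061\0301', NFC) = U&'\00E1' AS composed;  -- true
    SELECT U&'\00E1' IS NFC NORMALIZED AS already_nfc;              -- true
    SELECT U&'\0061\0301' IS NFC NORMALIZED AS not_yet;             -- false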
-
Peter Eisentraut authored
-
Peter Eisentraut authored
Update the documentation to reflect that Unix-domain sockets are now usable on Windows.
-
Peter Eisentraut authored
Otherwise, it could be confusing to a reader that some of these well-publicized features are simply listed as unsupported without further explanation.
-
Thomas Munro authored
New GUC from commit fc34b0d9.
-
Amit Kapila authored
Commit 40d964ec allowed the VACUUM command to process indexes in parallel but forgot to accumulate the buffer usage stats of parallel workers. This change allows the leader backend to accumulate the buffer usage stats of all the parallel workers. Reported-by: Julien Rouhaud Author: Sawada Masahiko Reviewed-by: Dilip Kumar, Amit Kapila and Julien Rouhaud Discussion: https://postgr.es/m/20200328151721.GB12854@nol
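A sketch of an affected command; the table name is hypothetical:

    -- Index cleanup may run in up to two parallel workers here; with this
    -- fix their buffer usage is folded into the leader's totals.
    VACUUM (PARALLEL 2) big_table;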
-
Fujii Masao authored
This commit makes pg_stat_statements support the new GUC pg_stat_statements.track_planning. If this option is enabled, pg_stat_statements tracks planning statistics for statements, e.g., the number of times a statement was planned and the total time spent planning it. This feature is useful for identifying statements that take a long time to plan. Previously, since pg_stat_statements tracked only execution statistics, it could not be used for that purpose.

The planning and execution statistics are stored at the end of each phase separately, so there is not always a one-to-one relationship between them. For example, if a statement is planned successfully but fails in the execution phase, only its planning statistics are stored. This may cause users to see pg_stat_statements results that differ from those of the previous version. To avoid this, pg_stat_statements.track_planning can be left disabled.

This commit bumps the version of pg_stat_statements to 1.8, since it changes the definition of the pg_stat_statements function.

Author: Julien Rouhaud, Pascal Legrand, Thomas Munro, Fujii Masao Reviewed-by: Sergei Kornilov, Tomas Vondra, Yoshikazu Imai, Haribabu Kommi, Tom Lane Discussion: https://postgr.es/m/CAHGQGwFx_=DO-Gu-MfPW3VQ4qC7TfVdH2zHmvZfrGv6fQ3D-Tw@mail.gmail.com Discussion: https://postgr.es/m/CAEepm=0e59Y_6Q_YXYCTHZkqOc6H2pJ54C_Xe=VFu50Aqqp_sA@mail.gmail.com Discussion: https://postgr.es/m/DB6PR0301MB21352F6210E3B11934B0DCC790B00@DB6PR0301MB2135.eurprd03.prod.outlook.com
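A sketch of enabling and consuming the new data, with column names per pg_stat_statements 1.8; requires the module in shared_preload_libraries:

    ALTER SYSTEM SET pg_stat_statements.track_planning = on;
    SELECT pg_reload_conf();
    -- Five statements with the costliest planning:
    SELECT query, plans, total_plan_time, calls, total_exec_time
    FROM pg_stat_statements
    ORDER BY total_plan_time DESC
    LIMIT 5;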
-
Tomas Vondra authored
There are a number of SLRU caches used to access important data like clog, commit timestamps, multixacts, asynchronous notifications, etc. Until now we had no easy way to monitor these shared caches or to compute hit ratios, numbers of reads/writes, etc. This commit extends the statistics collector to track this information for a predefined list of SLRUs, and also introduces a new system view pg_stat_slru displaying the data.

The list of built-in SLRUs is fixed, but additional SLRUs may be defined in extensions. Unfortunately, there's no suitable registry of SLRUs, so this patch simply defines a fixed list of SLRUs with entries for the built-in ones and one entry for all additional SLRUs. Extensions adding their own SLRU are fairly rare, so this seems acceptable.

This patch only allows monitoring of SLRUs, not tuning. The SLRU sizes are still fixed (hard-coded) and it's not entirely clear which of the SLRUs might need a GUC to tune size. In a way, allowing us to determine that is one of the goals of this patch.

Bump catversion, as the patch introduces new functions and a system view. Author: Tomas Vondra Reviewed-by: Alvaro Herrera Discussion: https://www.postgresql.org/message-id/flat/20200119143707.gyinppnigokesjok@development
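A monitoring sketch against the new view, using a subset of its columns from this era's definition (consult the view itself for the exact set):

    SELECT name, blks_hit, blks_read, blks_written, flushes, truncates
    FROM pg_stat_slru
    ORDER BY blks_read DESC;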
-