- 06 Apr, 2020 9 commits
-
-
Tomas Vondra authored
Incremental Sort is an optimized variant of multikey sort for cases when the input is already sorted by a prefix of the requested sort keys. For example, when the relation is already sorted by (key1, key2) and we need to sort it by (key1, key2, key3), we can simply split the input rows into groups having equal values in (key1, key2), and only sort/compare the remaining column key3.

This has a number of benefits:

- Reduced memory consumption, because only a single group (determined by values in the sorted prefix) needs to be kept in memory. This may also eliminate the need to spill to disk.

- Lower startup cost, because Incremental Sort produces results after each prefix group, which is beneficial for plans where startup cost matters (for example, queries with a LIMIT clause).

We consider both Sort and Incremental Sort, and decide based on costing.

The implemented algorithm operates in two different modes:

- Fetching a minimum number of tuples without checking equality on the prefix keys, and sorting on all columns when safe.

- Fetching all tuples for a single prefix group and then sorting by comparing only the remaining (non-prefix) keys.

We always start in the first mode, and employ a heuristic to switch into the second mode if we believe it's beneficial - the goal is to minimize the number of unnecessary comparisons while keeping memory consumption below work_mem.

This is a very old patch series. The idea was originally proposed by Alexander Korotkov back in 2013, and then revived in 2017. In 2018 the patch was taken over by James Coleman, who wrote and rewrote most of the current code. There were many reviewers/contributors since 2013 - I've done my best to pick the most active ones, and listed them in this commit message.

Author: James Coleman, Alexander Korotkov
Reviewed-by: Tomas Vondra, Andreas Karlsson, Marti Raudsepp, Peter Geoghegan, Robert Haas, Thomas Munro, Antonin Houska, Andres Freund, Alexander Kuzmenkov
Discussion: https://postgr.es/m/CAPpHfdscOX5an71nHd8WSUH6GNOCf=V7wgDaTXdDd9=goN-gfA@mail.gmail.com
Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com
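
For illustration, a hedged sketch of the kind of plan this enables (hypothetical table and index names; whether Incremental Sort is chosen depends on costing):

    CREATE INDEX t_key1_key2_idx ON t (key1, key2);   -- input presorted on the prefix
    EXPLAIN (COSTS OFF) SELECT * FROM t ORDER BY key1, key2, key3;
    --  Incremental Sort
    --    Sort Key: key1, key2, key3
    --    Presorted Key: key1, key2
    --    ->  Index Scan using t_key1_key2_idx on t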
-
Tom Lane authored
Since commit 8f59f6b9, CLOBBER_CACHE_ALWAYS buildfarm members have been failing this test case because the error message now sometimes includes an error cursor position. It seems largely just luck that that never happened before, and there are likely to be more ways it could happen in future. Hence, rather than trying to prevent it, adjust the test script to suppress that component of the report. At some point we might need to back-patch this, but refrain until there's a demonstrated need. (We'd need a different fix before v12, anyway, since VERBOSITY=sqlstate is a recent thing.) Tom Lane and Andres Freund Discussion: https://postgr.es/m/30675.1586111599@sss.pgh.pa.us
-
Peter Eisentraut authored
Mainly, this adds support code in logical/worker.c for applying replicated operations whose target is a partitioned table to its relevant partitions. Author: Amit Langote <amitlangote09@gmail.com> Reviewed-by: Rafia Sabih <rafia.pghackers@gmail.com> Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com> Reviewed-by: Petr Jelinek <petr@2ndquadrant.com> Discussion: https://www.postgresql.org/message-id/flat/CA+HiwqH=Y85vRK3mOdjEkqFK+E=ST=eQiHdpj43L=_eJMOOznQ@mail.gmail.com
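
A hedged sketch of the setup this enables (hypothetical names and connection string):

    -- publisher: an ordinary table
    CREATE TABLE orders (id int PRIMARY KEY, note text);
    CREATE PUBLICATION pub_orders FOR TABLE orders;

    -- subscriber: the target may now be a partitioned table; logical/worker.c
    -- routes each replicated row to the matching partition
    CREATE TABLE orders (id int PRIMARY KEY, note text) PARTITION BY HASH (id);
    CREATE TABLE orders_p0 PARTITION OF orders FOR VALUES WITH (MODULUS 2, REMAINDER 0);
    CREATE TABLE orders_p1 PARTITION OF orders FOR VALUES WITH (MODULUS 2, REMAINDER 1);
    CREATE SUBSCRIPTION sub_orders CONNECTION 'host=pub dbname=src' PUBLICATION pub_orders;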
-
Amit Kapila authored
This commit allows autovacuum to log WAL usage statistics added by commit df3b1814. Author: Julien Rouhaud Reviewed-by: Dilip Kumar and Amit Kapila Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com
-
Michael Paquier authored
This new cache lookup routine has been introduced in a40caf5f, and more code paths can directly use it. Note that in cluster_rel(), the code was returning immediately if the tuple's entry in pg_index for the clustered index was not valid. This commit changes the code so that a lookup error is raised instead, something that could not happen from the start as we check for the existence of the index beforehand, while holding an exclusive lock on the parent table. Author: Justin Pryzby Reviewed-by: Álvaro Herrera, Michael Paquier Discussion: https://postgr.es/m/20200202161718.GI13621@telsasoft.com
-
Amit Kapila authored
This commit adds a new option WAL, similar to the existing option BUFFERS, to the EXPLAIN command. This option allows information on WAL record generation (added by commit df3b1814) to be included in EXPLAIN output. This also allows the WAL usage information to be displayed via the auto_explain module. A new parameter auto_explain.log_wal controls whether WAL usage statistics are printed when an execution plan is logged. This parameter has no effect unless auto_explain.log_analyze is enabled. Author: Julien Rouhaud Reviewed-by: Dilip Kumar and Amit Kapila Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com
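
A hedged sketch of how this might be used (hypothetical table; counter values vary by workload):

    EXPLAIN (ANALYZE, WAL, COSTS OFF)
        INSERT INTO t SELECT g FROM generate_series(1, 1000) AS g;
    -- each node's output gains a line of the form:
    --   WAL: records=... fpi=... bytes=...

    -- and via auto_explain (assumes the module is loaded; only effective with log_analyze):
    SET auto_explain.log_analyze = on;
    SET auto_explain.log_wal = on;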
-
Michael Paquier authored
A table rewritten by ALTER TABLE would lose tracking of an index usable for CLUSTER. This setting is tracked by pg_index.indisclustered and is controlled by ALTER TABLE, so some extra work was needed to restore it properly. Note that ALTER TABLE only marks the index that can be used for clustering, and does not do the actual operation. Author: Amit Langote, Justin Pryzby Reviewed-by: Ibrar Ahmed, Michael Paquier Discussion: https://postgr.es/m/20200202161718.GI13621@telsasoft.com Backpatch-through: 9.5
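
A hedged sketch of the scenario being fixed (hypothetical names; assumes the type change forces a rewrite):

    CREATE TABLE t (a int PRIMARY KEY, b int);
    ALTER TABLE t CLUSTER ON t_pkey;            -- sets pg_index.indisclustered for t_pkey
    ALTER TABLE t ALTER COLUMN b TYPE bigint;   -- table rewrite
    SELECT indisclustered FROM pg_index WHERE indexrelid = 't_pkey'::regclass;
    -- previously returned false after the rewrite; the marking is now preserved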
-
Andres Freund authored
This is for the benefit of running postgres under the rr debugger. When using rr, signal handlers running while a syscall is active use an alternative stack. As e.g. bgworkers are started from within signal handlers, the forked backend then has a different stack base than postmaster. Previously that subsequently led to those processes triggering spurious "stack depth limit exceeded" errors. Discussion: https://postgr.es/m/20200327182217.ubrrl32lyfhxfwk5@alap3.anarazel.de
-
Andres Freund authored
There's a very low risk that RecentGlobalXmin could be far enough in the past to be older than relfrozenxid, or even wrapped around. Luckily the consequences of that having happened wouldn't be too bad - the page wouldn't be pruned for a while. Avoid that risk by using TransactionXmin instead. As that's announced via MyPgXact->xmin, it is protected against wrapping around (see code comments for details around relfrozenxid). Author: Andres Freund Discussion: https://postgr.es/m/20200328213023.s4eyijhdosuc4vcj@alap3.anarazel.de Backpatch: 9.5-
-
- 05 Apr, 2020 4 commits
-
-
Andres Freund authored
Reported-By: David Rowley
-
Peter Eisentraut authored
Fixup for "Drop slot's LWLock before returning from SaveSlotToPath()" Reported-by: Michael Paquier <michael@paquier.xyz>
-
Tom Lane authored
Commit 7cb0a423 overlooked that the multi-level partition test table pagg_tab_ml still had an exactly even row split at its upper level of partitioning, so that some of the sub-aggregation plan steps still had exactly equal costs, leading to plan instability. Tweak the partition boundaries some more to make the row distribution unequal at both levels. This leads to more changes in the "expected" plan order than the previous round, but it seems fine. (Actually, I'm surprised that this didn't affect even more plans in this test: looking at the underlying costs shows that some of the parallel plan groups are *not* getting sorted by cost. Bug?) Per buildfarm member lousyjack, https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-04-04%2021%3A03%3A04 Discussion: https://postgr.es/m/24467.1585838693@sss.pgh.pa.us
-
Amit Kapila authored
This commit adds three new columns in pg_stat_statements output to display WAL usage statistics added by commit df3b1814. This commit doesn't bump the version of pg_stat_statements as the same is done for this release in commit 17e03282. Author: Kirill Bychik and Julien Rouhaud Reviewed-by: Julien Rouhaud, Fujii Masao, Dilip Kumar and Amit Kapila Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com
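
A hedged example of inspecting the new counters to find the statements generating the most WAL (output values are workload-dependent):

    SELECT query, calls, wal_records, wal_fpi, wal_bytes
      FROM pg_stat_statements
     ORDER BY wal_bytes DESC
     LIMIT 5;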
-
- 04 Apr, 2020 8 commits
-
-
Noah Misch authored
Oversight in today's commit c6b92041. Per buildfarm member jacana. Discussion: http://postgr.es/m/20200404223212.GC3442685@rfd.leadboat.com
-
Tom Lane authored
Commit 77ec5aff added an assertion to enforce_generic_type_consistency that boils down to "if the function result is polymorphic, there must be at least one polymorphic argument". This should be true for user-created functions, but there are built-in functions for which it's not true, as pointed out by Jaime Casanova. Hence, go back to the old behavior of leaving the return type alone. There's only a limited amount of stuff you can do with such a function result, but it does work to some extent; add some regression test cases to ensure we don't break that again. Discussion: https://postgr.es/m/CAJGNTeMbhtsCUZgJJ8h8XxAJbK7U2ipsX8wkHRtZRz-NieT8RA@mail.gmail.com
-
Noah Misch authored
Until now, only selected bulk operations (e.g. COPY) did this. If a given relfilenode received both a WAL-skipping COPY and a WAL-logged operation (e.g. INSERT), recovery could lose tuples from the COPY. See src/backend/access/transam/README section "Skipping WAL for New RelFileNode" for the new coding rules. Maintainers of table access methods should examine that section.

To maintain data durability, just before commit, we choose between an fsync of the relfilenode and copying its contents to WAL. A new GUC, wal_skip_threshold, guides that choice. If this change slows a workload that creates small, permanent relfilenodes under wal_level=minimal, try adjusting wal_skip_threshold. Users setting a timeout on COMMIT may need to adjust that timeout, and log_min_duration_statement analysis will reflect time consumption moving to COMMIT from commands like COPY.

Internally, this requires a reliable determination of whether RollbackAndReleaseCurrentSubTransaction() would unlink a relation's current relfilenode. Introduce rd_firstRelfilenodeSubid. Amend the specification of rd_createSubid such that the field is zero when a new rel has an old rd_node. Make relcache.c retain entries for certain dropped relations until end of transaction.

Bump XLOG_PAGE_MAGIC, since this introduces XLOG_GIST_ASSIGN_LSN. Future servers accept older WAL, so this bump is discretionary.

Kyotaro Horiguchi, reviewed (in earlier, similar versions) by Robert Haas. Heikki Linnakangas and Michael Paquier implemented earlier designs that materially clarified the problem. Reviewed, in earlier designs, by Andrew Dunstan, Andres Freund, Alvaro Herrera, Tom Lane, Fujii Masao, and Simon Riggs. Reported by Martijn van Oosterhout.

Discussion: https://postgr.es/m/20150702220524.GA9392@svana.org
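
A hedged sketch of the pattern this governs under wal_level = minimal (hypothetical table and file path; the threshold value is only an example):

    SET wal_skip_threshold = '2MB';  -- below this size, fsync the new relfilenode at commit
                                     -- instead of copying its contents to WAL
    BEGIN;
    CREATE TABLE t (a int, b text);
    COPY t FROM '/tmp/data.csv' WITH (FORMAT csv);  -- may skip WAL for the new relfilenode
    INSERT INTO t VALUES (1, 'x');                  -- WAL-logged; now safe to mix with the COPY above
    COMMIT;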
-
Peter Eisentraut authored
This reverts commit 246f136e. That patch wasn't quite complete enough. Discussion: https://www.postgresql.org/message-id/flat/E1jIpJu-0007Ql-CL%40gemulon.postgresql.org
-
Amit Kapila authored
This allows gathering the WAL generation statistics for each statement execution. The three statistics that we collect are the number of WAL records, the number of full page writes and the amount of WAL bytes generated. This helps users who have write-intensive workloads to see the impact of I/O due to WAL. This further enables us to see approximately what percentage of overall WAL is due to full page writes. In the future, we can extend this functionality to allow us to compute the exact amount of WAL data due to full page writes. This patch in itself is just an infrastructure to compute WAL usage data. The upcoming patches will expose this data via explain, auto_explain, pg_stat_statements and verbose (auto)vacuum output. Author: Kirill Bychik, Julien Rouhaud Reviewed-by: Dilip Kumar, Fujii Masao and Amit Kapila Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com
-
Jeff Davis authored
Don't try to be precise about it, just use a constant 16 bytes of chunk overhead. Being smarter would require knowing the memory context where the chunk will be allocated, which is not known by all callers. Discussion: https://postgr.es/m/20200325220936.il3ni2fj2j2b45y5@alap3.anarazel.de
-
Robert Haas authored
Commit 0d8c9c12 allowed BASE_BACKUP to acquire a ResourceOwner without a transaction so that the backup manifest functionality could use a BufFile, but it overlooked the fact that when a walsender is used with replication=database, it might have a transaction in progress, because in that mode, SQL and replication commands can be mixed. Try to fix things up so that the two cleanup mechanisms don't conflict. Per buildfarm member serinus, which triggered the problem when CREATE_REPLICATION_SLOT failed from inside a transaction. It passed on the subsequent run, so evidently the failure doesn't happen every time.
-
Robert Haas authored
lapwing is complaining about a call to pg_gmtime, saying that it "expected 'const pg_time_t *' but argument is of type 'time_t *'". I at first thought that the problem had something to do with const, but Thomas Munro suggested that it might be just because time_t and pg_time_t are different identifiers. lapwing is i686 rather than x86_64, and pg_time_t is always int64, so that seems like a good guess. There is other code that just casts time_t to pg_time_t without any conversion function, so try that approach here. Introduced in commit 0d8c9c12.
-
- 03 Apr, 2020 19 commits
-
-
Robert Haas authored
Report by Tom Lane. Discussion: http://postgr.es/m/22394.1585951968@sss.pgh.pa.us
-
Robert Haas authored
It may be necessary to go further and remove this test altogether, but I'm going to try this fix first. It's not clear, at least to me, exactly how this is breaking buildfarm members, but it appears to be doing so.
-
Robert Haas authored
Second try at getting the buildfarm to be happy with 003_corruption.pl as added by commit 0d8c9c12. Per suggestion from Álvaro Herrera. Discussion: http://postgr.es/m/20200403205412.GA8279@alvherre.pgsql
-
Tom Lane authored
Move have_partkey_equi_join and match_expr_to_partition_keys to relnode.c, since they're used only there. Refactor build_joinrel_partition_info to split out the code that fills the joinrel's partition key lists; this doesn't have any non-cosmetic impact, but it seems like a useful separation of concerns. Improve assorted nearby comments. Amit Langote, with a little further editorialization by me Discussion: https://postgr.es/m/CA+HiwqG2WVUGmLJqtR0tPFhniO=H=9qQ+Z3L_ZC+Y3-EVQHFGg@mail.gmail.com
-
Robert Haas authored
The buildfarm is very unhappy right now because TAP test 003_corruption.pl uses TestLib::tempdir to generate the name of a temporary directory that is used as a tablespace name, and this results in a 'symbolic link target too long' error message on many of the buildfarm machines, but not on my machine. It appears that other people have run into similar problems in the past and that TestLib::tempdir_short was the solution, so let's try using that instead.
-
Robert Haas authored
It seems that we have a policy that every Perl subroutine should end with an explicit "return", so add explicit "return" statements to all the new subroutines added by my prior commit 0d8c9c12. Per buildfarm.
-
Robert Haas authored
A manifest is a JSON document which includes (1) the file name, size, last modification time, and an optional checksum for each file backed up, (2) timelines and LSNs for whatever WAL will need to be replayed to make the backup consistent, and (3) a checksum for the manifest itself.

By default, we use CRC-32C when checksumming data files, because we are trying to detect corruption and user error, not foil an adversary. However, pg_basebackup and the server-side BASE_BACKUP command now have options to select a different algorithm, so users wanting a cryptographic hash function can select SHA-224, SHA-256, SHA-384, or SHA-512. Users not wanting file checksums at all can disable them, or disable generation of the backup manifest altogether. Using a cryptographic hash function in place of CRC-32C consumes significantly more CPU cycles, which may slow down backups in some cases.

A new tool called pg_validatebackup can validate a backup against the manifest. If no checksums are present, it can still check that the right files exist and that they have the expected sizes. If checksums are present, it can also verify that each file has the expected checksum. Additionally, it calls pg_waldump to verify that the expected WAL files are present and parseable. Only plain format backups can be validated directly, but tar format backups can be validated after extracting them.

Robert Haas, with help, ideas, review, and testing from David Steele, Stephen Frost, Andrew Dunstan, Rushabh Lathia, Suraj Kharage, Tushar Ahuja, Rajkumar Raghuwanshi, Mark Dilger, Davinder Singh, Jeevan Chalke, Amit Kapila, Andres Freund, and Noah Misch.

Discussion: http://postgr.es/m/CA+TgmoZV8dw1H2bzZ9xkKwdrk8+XYa+DC9H=F7heO2zna5T6qg@mail.gmail.com
-
Fujii Masao authored
When BUFFERS option is enabled, EXPLAIN command includes the information on buffer usage during each plan node, in its output. In addition to that, this commit makes EXPLAIN command include also the information on buffer usage during planning phase, in its output. This feature makes it easier to discern the cases where lots of buffer access happen during planning. This commit revives the original commit ed7a5095 that was reverted by commit 19db23bc. The original commit had to be reverted because it caused the regression test failure on the buildfarm members prion and dory. But since commit c0885c4c got rid of the cause of the test failure, the original commit can be safely introduced again. Author: Julien Rouhaud, slightly revised by Fujii Masao Reviewed-by: Justin Pryzby Discussion: https://postgr.es/m/16109-26a1a88651e90608@postgresql.org
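
A hedged sketch of the added output (hypothetical query; the counters depend on cache state):

    EXPLAIN (ANALYZE, BUFFERS, COSTS OFF) SELECT * FROM t WHERE a = 1;
    --  Seq Scan on t (actual time=... rows=... loops=1)
    --    Filter: (a = 1)
    --    Buffers: shared hit=...
    --  Planning:
    --    Buffers: shared hit=... read=...
    --  Planning Time: ... ms
    --  Execution Time: ... ms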
-
Tom Lane authored
entryGetItem()'s three code paths each contained bugs associated with filtering the entries for gin_fuzzy_search_limit.

The posting-tree path failed to advance "advancePast" after having decided to filter an item. If we ran out of items on the current page and needed to advance to the next, what would actually happen is that entryLoadMoreItems() would re-load the same page. Eventually, the random dropItem() test would accept one of the same items it'd previously rejected, and we'd move on --- but it could take awhile with small gin_fuzzy_search_limit. To add insult to injury, this case would inevitably cause entryLoadMoreItems() to decide it needed to re-descend from the root, making things even slower.

The posting-list path failed to implement gin_fuzzy_search_limit filtering at all, so that all entries in the posting list would be returned.

The bitmap-result path used a "gotitem" variable that it failed to update in the one place where it'd actually make a difference, ie at the one "continue" statement. I think this was unreachable in practice, because if we'd looped around then it shouldn't be the case that the entries on the new page are before advancePast. Still, the "gotitem" variable was contributing nothing to either clarity or correctness, so get rid of it.

Refactor all three loops so that the termination conditions are more alike and less unreadable.

The code coverage report showed that we had no coverage at all for the re-descend-from-root code path in entryLoadMoreItems(), which seems like a very bad thing, so add a test case that exercises it. We also had exactly no coverage for gin_fuzzy_search_limit, so add a simplistic test case that at least hits those code paths a little bit.

Back-patch to all supported branches.

Adé Heyward and Tom Lane

Discussion: https://postgr.es/m/CAEknJCdS-dE1Heddptm7ay2xTbSeADbkaQ8bU2AXRCVC2LdtKQ@mail.gmail.com
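
For context, a hedged sketch of the setting these code paths implement (hypothetical table; with the limit set, the result is intentionally approximate):

    CREATE TABLE docs (body tsvector);
    CREATE INDEX docs_gin_idx ON docs USING gin (body);
    SET gin_fuzzy_search_limit = 1000;  -- return roughly this many matches, dropping the rest at random
    SELECT count(*) FROM docs WHERE body @@ to_tsquery('common & word');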
-
Fujii Masao authored
The explain regression test runs EXPLAIN commands via the function that filters unstable outputs. To produce more stable test output, this commit improves the function so that it also filters out text-mode Buffers lines. This is necessary because text-mode Buffers lines vary depending on the system state. This improvement will get rid of the regression test failure that the commit ed7a5095 caused on the buildfarm members prion and dory because of the instability of Buffers lines. Author: Fujii Masao Reviewed-by: Tom Lane Discussion: https://postgr.es/m/20200403025751.GB1759@paquier.xyz
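
A hedged, simplified sketch of such a filter function (the actual regression-test helper also normalizes other unstable fields):

    CREATE OR REPLACE FUNCTION explain_filter(query text) RETURNS SETOF text
    LANGUAGE plpgsql AS
    $$
    DECLARE
        ln text;
    BEGIN
        FOR ln IN EXECUTE query LOOP
            -- drop text-mode Buffers lines, whose counts depend on system state
            CONTINUE WHEN ln ~ '^\s*Buffers:';
            RETURN NEXT ln;
        END LOOP;
    END;
    $$;

    SELECT explain_filter('EXPLAIN (ANALYZE, BUFFERS) SELECT 1');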
-
Alvaro Herrera authored
More work is still needed, but this is a good start. Co-authored-by: Corey Huinker <corey.huinker@gmail.com> Co-authored-by: Jürgen Purtz <juergen@purtz.de> Co-authored-by: Roger Harkavy <rogerharkavy@gmail.com> Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org> Reviewed-by: Justin Pryzby <pryzby@telsasoft.com> Discussion: https://postgr.es/m/CADkLM=eP6HOeqDjn0FdXuGRusQu4oWH_LFsKjjafmhvWD=aSpQ@mail.gmail.com
-
Robert Haas authored
Commit ac44367e introduced this problem. Report and fix by Fujii Masao. Discussion: http://postgr.es/m/d332b8f0-0c72-3cd6-6945-7a86a503662a@oss.nttdata.com
-
Robert Haas authored
These functions make it easier to write code that wants to compute a checksum for some data while allowing the user to configure the type of checksum that gets used. This is another piece of infrastructure for the upcoming patch to add backup manifests. Patch written from scratch by me, but it is similar to previous work by Rushabh Lathia and Suraj Kharage. Suraj also reviewed this version off-list. Advice on how not to break Windows from Davinder Singh. Discussion: http://postgr.es/m/CA+TgmoZV8dw1H2bzZ9xkKwdrk8+XYa+DC9H=F7heO2zna5T6qg@mail.gmail.com Discussion: http://postgr.es/m/CA+TgmoZRTBiPyvQEwV79PU1ePTtSEo2UeVncrkJMbn1sU1gnRA@mail.gmail.com
-
Tom Lane authored
contrib/lo's lo_manage() thought it could use trigdata->tg_trigger->tgname in its error message about not being called as a trigger. That naturally led to a core dump. unique_key_recheck() figured it could Assert that fcinfo->context is a TriggerData node in advance of having checked that it's being called as a trigger. That's harmless in production builds, and perhaps not that easy to reach in any case, but it's logically wrong. The first of these per bug #16340 from William Crowell; the second from manual inspection of other CALLED_AS_TRIGGER call sites. Back-patch the lo.c change to all supported branches, the other to v10 where the thinko crept in. Discussion: https://postgr.es/m/16340-591c7449dc7c8c47@postgresql.org
-
Fujii Masao authored
This reverts commit ed7a5095. Per buildfarm member prion.
-
Fujii Masao authored
This commit introduces new wait events RecoveryConflictSnapshot and RecoveryConflictTablespace. The former is reported while waiting for recovery conflict resolution on a vacuum cleanup. The latter is reported while waiting for recovery conflict resolution on dropping a tablespace. Also this commit changes the code so that the wait event Lock is reported while waiting in ResolveRecoveryConflictWithVirtualXIDs() for recovery conflict resolution on a lock. The wait event Lock is normally reported during such a wait, but previously it was not reported when the wait happened in ResolveRecoveryConflictWithVirtualXIDs(). Author: Masahiko Sawada Reviewed-by: Fujii Masao Discussion: https://postgr.es/m/CA+fd4k4mXWTwfQLS3RPwGr4xnfAEs1ysFfgYHvmmoUgv6Zxvmg@mail.gmail.com
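
A hedged example of observing these wait events on a standby (illustrative; the events appear only while a conflict is being resolved):

    SELECT pid, wait_event_type, wait_event
      FROM pg_stat_activity
     WHERE wait_event IN ('RecoveryConflictSnapshot', 'RecoveryConflictTablespace');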
-
Michael Paquier authored
This option is similar to \gset, except that it is able to store all results from combined SQL queries into separate variables. If a query returns multiple rows, the last result is stored and if a query returns no rows, nothing is stored. While on it, add a TAP test for \gset to check for a failure when a query returns multiple rows. Author: Fabien Coelho Reviewed-by: Ibrar Ahmed, Michael Paquier Discussion: https://postgr.es/m/alpine.DEB.2.21.1904081914200.2529@lancre
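
A hedged sketch of a pgbench script using \aset, assuming it is saved to a file and run with pgbench -f (variable names are hypothetical):

    SELECT 1 AS one \; SELECT 2 AS two \aset
    SELECT :one + :two AS total \aset
    SELECT :total;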
-
Fujii Masao authored
When BUFFERS option is enabled, EXPLAIN command includes the information on buffer usage during each plan node, in its output. In addition to that, this commit makes EXPLAIN command include also the information on buffer usage during planning phase, in its output. This feature makes it easier to discern the cases where lots of buffer access happen during planning. Author: Julien Rouhaud, slightly revised by Fujii Masao Reviewed-by: Justin Pryzby Discussion: https://postgr.es/m/16109-26a1a88651e90608@postgresql.org
-
Robert Haas authored
The primary motivation for this change is that it will be used by the upcoming patch to add backup manifests, but it also seems to have some potential more general use. Andres Freund and Robert Haas Discussion: http://postgr.es/m/20200330020814.nspra4mvby42yoa4@alap3.anarazel.de
-