- 15 Mar, 2021 2 commits
-
-
Fujii Masao authored
This commit changes the WAL archiver process so that it's treated as an auxiliary process and can use shared memory. This is an infrastructure patch required for the upcoming shared-memory based stats collector patch series. That patch series basically needs every process that reports statistics, including the archiver, to have access to shared memory. Since this patch is useful on its own, both to simplify the code and to let users monitor the status of the archiver, it's committed separately in advance. This commit simplifies the code for WAL archiving. For example, previously backends needed to signal the archiver via the postmaster to notify it that there are WAL files to archive. Instead, this commit removes that signal to the postmaster and enables backends to notify the archiver directly using a shared latch. Also, as a side effect of this change, information about the archiver process becomes visible in the pg_stat_activity view. Author: Kyotaro Horiguchi Reviewed-by: Andres Freund, Álvaro Herrera, Julien Rouhaud, Tomas Vondra, Arthur Zakirov, Fujii Masao Discussion: https://postgr.es/m/20180629.173418.190173462.horiguchi.kyotaro@lab.ntt.co.jp
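A minimal C sketch (not the committed code) of the shared-latch wakeup path this describes, assuming an illustrative shared-memory struct: a backend sets the archiver's latch directly instead of signalling the postmaster, and the archiver waits on its own latch.

```c
/*
 * Hedged sketch only -- "ArchiverShmemExample" and its field are
 * illustrative assumptions; SetLatch/WaitLatch/ResetLatch are the real API.
 */
#include "postgres.h"
#include "miscadmin.h"
#include "storage/latch.h"

typedef struct ArchiverShmemExample
{
	Latch	   *archiverLatch;	/* registered by the archiver at startup */
} ArchiverShmemExample;

extern ArchiverShmemExample *ArchShmemExample;	/* assumed shmem pointer */

/* Backend side: tell the archiver there is a WAL file ready to archive. */
static void
notify_archiver_example(void)
{
	if (ArchShmemExample->archiverLatch)
		SetLatch(ArchShmemExample->archiverLatch);
}

/* Archiver side: sleep until woken, a timeout elapses, or postmaster dies. */
static void
archiver_wait_example(long timeout_ms)
{
	(void) WaitLatch(MyLatch,
					 WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
					 timeout_ms,
					 0 /* wait event info omitted in this sketch */ );
	ResetLatch(MyLatch);
}
```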
-
Peter Geoghegan authored
Consistently set a flag variable that tracks whether the current heap page has a dead item during lazy vacuum's heap scan. We missed the common case where there is a preexisting (or even a new) LP_DEAD heap line pointer. Also make it clear that the variable might be affected by an existing line pointer, say from an earlier opportunistic pruning operation. This distinction is important because it's the main reason why we can't just use the nearby tups_vacuumed variable instead. No backpatch. In theory failing to set the page level flag variable had no consequences. Currently it is only used to defensively check if a page marked all visible has dead items, which should never happen anyway (if it does then the table must be corrupt). Author: Masahiko Sawada <sawada.mshk@gmail.com> Diagnosed-By: Masahiko Sawada <sawada.mshk@gmail.com> Discussion: https://postgr.es/m/CAD21AoAtZb4+HJT_8RoOXvu4HM-Zd4HKS3YSMCH6+-W=bDyh-w@mail.gmail.com
-
- 13 Mar, 2021 8 commits
-
-
Tom Lane authored
It's not immediately obvious what you have to do to get "make installcheck" to work here, so document that along the same lines as we've used elsewhere.
-
Robert Haas authored
It does not work on all versions of perl across all platforms. To avoid endian-ness issues, pick a new value for column a that has the same upper 4 bytes as lower 4 bytes. Try to make it something that isn't likely to occur anywhere nearby in the page. Discussion: http://postgr.es/m/29DA079B-0658-4E66-BDAA-0EFD7B64D9C6@enterprisedb.com
-
Tom Lane authored
Fix another example of non-portable option ordering in the tests. Oversight in 24189277. Mark Dilger Discussion: https://postgr.es/m/C37D28BA-3BA3-4776-B812-17F05F3472D8@enterprisedb.com
-
Thomas Munro authored
Don't try to compile src/port/pthread_barrier_wait.c if we opted out of threads at configure time. Revealed by build farm member gaur, which can't compile this code because of problems with its pthread implementation. It shouldn't be trying to, because it's using --disable-thread-safety. Defect in commit 44bf3d50. Reported-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/2568537.1615603606%40sss.pgh.pa.us
-
Amit Kapila authored
Commit 05c8482f added special logic related to parallel-safety of FK triggers. This is a bit of a hack and should have instead been done by simply setting appropriate proparallel values on those trigger functions themselves. Suggested-by: Tom Lane Author: Greg Nancarrow Reviewed-by: Amit Kapila Discussion: https://postgr.es/m/2309260.1615485644@sss.pgh.pa.us
-
Robert Haas authored
Commit 24189277 managed to remove one of the two places where we were checking for a "no such user" error while leaving the other one right next to it. So remove that too. In fact, remove the entire test, because the whole point of this test was to see which message we got on a failure.
-
Robert Haas authored
Avoid use of non-portable option ordering in command_checks_all(). The use of bare command line arguments before switches doesn't work everywhere. Per buildfarm members drongo and hoverfly. Avoid testing for the message "role \"%s\" does not exist", because some buildfarm machines report a different error. fairywren complains about "SSPI authentication failed for user \"%s\"", for example. Mark Dilger Discussion: http://postgr.es/m/9E76E46A-48B2-4869-BD0C-422204C1F767@enterprisedb.com Discussion: http://postgr.es/m/F0A1FD70-A2F4-4528-8A03-8650CAEC0554%40enterprisedb.com
-
Robert Haas authored
It's hard to believe, but buildfarm results from the new pg_amcheck suggest that command_checks_all() performs shell expansion on some machines but not others, apparently due to an underlying behavior difference in IPC::Run. Let's see if we can work around that - and confirm that it is the real problem - by passing '-S*' as a single argument rather than '-S' and '*' as two separate ones. Failures were observed on jacana and hoverfly. Mark Dilger Discussion: http://postgr.es/m/9E76E46A-48B2-4869-BD0C-422204C1F767@enterprisedb.com
-
- 12 Mar, 2021 17 commits
-
-
Robert Haas authored
Test #12 overwrote a 1-byte varlena header to make it look like the initial byte of a 4-byte varlena header, but the results were endian-dependent. Also, the bytes "abc" that followed the overwritten byte would be interpreted differently depending on endianness. Overwrite 4 bytes instead, in an endian-aware manner. Test #13 accidentally managed to depend on TOAST_MAX_CHUNK_SIZE, which varies slightly depending on MAXIMUM_ALIGNOF. That's not the point anyway, so make the regexp insensitive to the expected number of chunks. Mark Dilger Discussion: http://postgr.es/m/A80D68F6-E38F-482D-9522-E2FB6AAFE8A1@enterprisedb.com
-
Peter Geoghegan authored
Simplify _bt_vacuum_needs_cleanup()'s signature (it only needs a single 'rel' argument now), and move it next to its sibling function in nbtpage.c. I believe that _bt_vacuum_needs_cleanup() was originally located in nbtree.c due to an include dependency issue. That's no longer an issue. Follow-up to commit 9f3665fb.
-
Robert Haas authored
Erik Rijkers reported a compile failure, and I think this is probably the reason.
-
Robert Haas authored
Per buildfarm member crake.
-
Robert Haas authored
Per report from Peter Geoghegan. Discussion: http://postgr.es/m/CAH2-WznpwULZ3uJ1_6WXvNMXYbOy8k8tYs3r=qSdGmZeRd6tDw@mail.gmail.com
-
Robert Haas authored
This makes it a lot easier to run the corruption checks that are implemented by contrib/amcheck against lots of relations and get the result in an easily understandable format. It has a wide variety of options for choosing which relations to check and which checks to perform, and it can run checks in parallel if you want. Mark Dilger, reviewed by Peter Geoghegan and by me. Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com Discussion: http://postgr.es/m/BA592F2D-F928-46FF-9516-2B827F067F57@enterprisedb.com
-
Tom Lane authored
psql's editing commands decide whether the user has edited the file by checking for change of modification timestamp. This is probably fine for a pre-existing file, but with a temporary file that is created within the command, it's possible for a fast typist to save-and-exit in less than the one-second granularity of stat(2) timestamps. On Windows FAT filesystems the granularity is even worse, 2 seconds, making the race a bit easier to hit. To fix, try to set the temp file's mod time to be two seconds ago. It's unlikely this would fail, but then again the race condition itself is unlikely, so just ignore any error. Also, we might as well check the file size as well as its mod time. While this is a difficult bug to hit, it still seems worth back-patching, to ensure that users' edits aren't lost. Laurenz Albe, per gripe from Jacob Champion; based on fix suggestions from Jacob and myself Discussion: https://postgr.es/m/0ba3f2a658bac6546d9934ab6ba63a805d46a49b.camel@cybertec.at
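A standalone POSIX C sketch of the idea, not psql's code (the file name and the "invoke editor" step are illustrative): back-date the temp file's modification time by two seconds, then compare both mtime and size to decide whether the file was edited.

```c
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>
#include <utime.h>

int
main(void)
{
	const char *fname = "/tmp/psql_edit_example.sql";	/* illustrative */
	struct stat before;
	struct stat after;
	struct utimbuf ut;
	FILE	   *f = fopen(fname, "w");

	if (f == NULL)
		return 1;
	fputs("select 1;\n", f);
	fclose(f);

	/* Pretend the file was last written two seconds ago; ignore failure. */
	ut.actime = ut.modtime = time(NULL) - 2;
	(void) utime(fname, &ut);

	if (stat(fname, &before) != 0)
		return 1;

	/* ... launch the editor on fname here ... */

	if (stat(fname, &after) != 0)
		return 1;
	if (after.st_mtime != before.st_mtime || after.st_size != before.st_size)
		printf("file was edited\n");
	else
		printf("no edit detected\n");
	return 0;
}
```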
-
Tom Lane authored
GENERATED ALWAYS AS IDENTITY implies NOT NULL, but the code failed to complain if you overrode that with "GENERATED ALWAYS AS IDENTITY NULL". One might think the old behavior was a feature, but it was inconsistent because the outcome varied depending on the order of the clauses, so it seems to have been just an oversight. Per bug #16913 from Pavel Boev. Back-patch to v10 where identity columns were introduced. Vik Fearing (minor tweaks by me) Discussion: https://postgr.es/m/16913-3b5198410f67d8c6@postgresql.org
-
Thomas Munro authored
When sorting a potentially large number of dirty buffers, the checkpointer can benefit from a faster sort routine. One reported improvement on a large buffer pool system was 1.4s -> 0.6s. Reviewed-by: Andres Freund <andres@anarazel.de> Discussion: https://postgr.es/m/CA%2BhUKGJ2-eaDqAum5bxhpMNhvuJmRDZxB_Tow0n-gse%2BHG0Yig%40mail.gmail.com
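For illustration only, the ordering the checkpointer wants for dirty buffers, shown with generic qsort() and an assumed struct; the committed change instead generates a specialized, inlined sort for this comparator, which is where the speedup comes from.

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative stand-in for the checkpointer's per-buffer sort item. */
typedef struct DirtyBufExample
{
	uint32_t	tablespace;
	uint32_t	relation;
	uint32_t	fork;
	uint32_t	block;
} DirtyBufExample;

static int
ckpt_buforder_cmp_example(const void *pa, const void *pb)
{
	const DirtyBufExample *a = pa;
	const DirtyBufExample *b = pb;

	/* Group writes by tablespace, relation and fork, then block order. */
	if (a->tablespace != b->tablespace)
		return (a->tablespace < b->tablespace) ? -1 : 1;
	if (a->relation != b->relation)
		return (a->relation < b->relation) ? -1 : 1;
	if (a->fork != b->fork)
		return (a->fork < b->fork) ? -1 : 1;
	if (a->block != b->block)
		return (a->block < b->block) ? -1 : 1;
	return 0;
}

static void
sort_dirty_buffers_example(DirtyBufExample *items, size_t n)
{
	qsort(items, n, sizeof(DirtyBufExample), ckpt_buforder_cmp_example);
}
```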
-
Amit Kapila authored
Reported-by: Thomas Munro Author: Takayuki Tsunakawa Reviewed-by: Kyotaro Horiguchi Discussion: https://postgr.es/m/CA+hUKG+oPoFizjABt=GXZWTEHx3oev5rAe2scjW2r6F1rguo5w@mail.gmail.com
-
Amit Kapila authored
The commit added code which used a relcache TriggerDesc field across another cache access, which it shouldn't because the relcache doesn't guarantee it won't get moved. Diagnosed-by: Tom Lane Author: Greg Nancarrow Reviewed-by: Hou Zhijie, Amit Kapila Discussion: https://postgr.es/m/2309260.1615485644@sss.pgh.pa.us
-
Thomas Munro authored
Since commits 9f095299 and f98b8476 we don't poll the postmaster pipe at all during crash recovery on Linux and FreeBSD, but on other operating systems we were still doing it for every WAL record. Do it less frequently on operating systems where system calls are required, at the cost of delaying exit a bit after postmaster death. This avoids expensive system calls reported to slow down CPU-bound recovery by as much as 10-30%. Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi> Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/CA%2BhUKGK1607VmtrDUHQXrsooU%3Dap4g4R2yaoByWOOA3m8xevUQ%40mail.gmail.com Discussion: https://postgr.es/m/7261eb39-0369-f2f4-1bb5-62f3b6083b5e@iki.fi
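A hedged C sketch of the rate-limiting idea; the interval value and function placement are assumptions, but PostmasterIsAlive() is the real (syscall-backed on some platforms) check being rationed here.

```c
#include "postgres.h"
#include "storage/pmsignal.h"

#define PM_DEATH_CHECK_INTERVAL_EXAMPLE 1024	/* assumed value */

static void
maybe_check_postmaster_death_example(void)
{
	static uint32 records_since_check = 0;

	/* Only pay for the liveness check every so many WAL records. */
	if (++records_since_check < PM_DEATH_CHECK_INTERVAL_EXAMPLE)
		return;
	records_since_check = 0;

	if (!PostmasterIsAlive())
		ereport(FATAL,
				(errmsg("postmaster exited during crash recovery")));
}
```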
-
Thomas Munro authored
Use this new CV to wait for walreceiver shutdown without a sleep/poll loop, while also benefiting from standard postmaster death handling. Discussion: https://postgr.es/m/CA%2BhUKGK1607VmtrDUHQXrsooU%3Dap4g4R2yaoByWOOA3m8xevUQ%40mail.gmail.com
-
Thomas Munro authored
Replace a sleep loop with a CV, to get a fast reaction time when recovery is resumed or the postmaster exits via standard infrastructure. Unfortunately we still need to wake up every second to perform extra polling during the recovery pause loop. Discussion: https://postgr.es/m/CA%2BhUKGK1607VmtrDUHQXrsooU%3Dap4g4R2yaoByWOOA3m8xevUQ%40mail.gmail.com
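A hedged sketch of the condition-variable pattern adopted here, using PostgreSQL's real ConditionVariable API; the shared struct and field names are illustrative assumptions. The waiter still wakes once per second for the extra polling mentioned above, and the waker broadcasts when recovery is resumed.

```c
#include "postgres.h"
#include "storage/condition_variable.h"

typedef struct RecoveryWaitShmemExample
{
	ConditionVariable cv;
	bool		paused;
} RecoveryWaitShmemExample;

extern RecoveryWaitShmemExample *RecoveryWaitExample;	/* assumed shmem pointer */

/* Waiter: block until recovery resumes, waking once per second to poll. */
static void
wait_until_resumed_example(void)
{
	ConditionVariablePrepareToSleep(&RecoveryWaitExample->cv);
	while (RecoveryWaitExample->paused)
	{
		(void) ConditionVariableTimedSleep(&RecoveryWaitExample->cv,
										   1000L,	/* ms */
										   0 /* wait event info omitted */ );
	}
	ConditionVariableCancelSleep();
}

/* Waker: resume recovery and wake everyone sleeping on the CV. */
static void
resume_recovery_example(void)
{
	RecoveryWaitExample->paused = false;
	ConditionVariableBroadcast(&RecoveryWaitExample->cv);
}
```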
-
Fujii Masao authored
When shutdown is requested, checkpointer performs a checkpoint or restartpoint, and updates the statistics, before it exits. But previously checkpointer didn't send those statistics to the stats collector. Shutdown checkpoints and restartpoints are treated as requested ones instead of scheduled ones, so they are counted in the pg_stat_bgwriter.checkpoints_req column. Author: Masahiro Ikeda Reviewed-by: Fujii Masao Discussion: https://postgr.es/m/0509ad67b585a5b86a83d445dfa75392@oss.nttdata.com
-
Fujii Masao authored
In walwriter's main loop, a WAL stats message is only sent if enough time has passed since the last one to reach PGSTAT_STAT_INTERVAL msecs. This is necessary to avoid overloading the stats collector. But it can cause recent WAL stats to go unsent when walwriter exits. To ensure that all the WAL stats are sent, this commit makes walwriter forcibly send the remaining WAL stats to the collector when it exits because of a shutdown request. Note that those remaining WAL stats can still go unsent when walwriter exits with a non-zero exit code (e.g., FATAL error). This is OK because such a walwriter exit leads to a server crash, and the subsequent recovery discards all the stats. So there is no need to send the remaining stats in that case. Author: Masahiro Ikeda Reviewed-by: Fujii Masao Discussion: https://postgr.es/m/0509ad67b585a5b86a83d445dfa75392@oss.nttdata.com
-
Thomas Munro authored
Itanium is very uncommon and being discontinued. ARM is everywhere. Prefer ARM as an example of an architecture with weak memory ordering.
-
- 11 Mar, 2021 11 commits
-
-
Peter Geoghegan authored
Avoid calling RelationGetNumberOfBlocks() unnecessarily in the common case where there are no deleted but not yet recycled pages to recycle during a cleanup-only nbtree VACUUM operation. Follow-up to commit e5d8a999, which (among other things) taught the "skip full scan" nbtree VACUUM mechanism to only trigger a full index scan when the absolute number of deleted pages in the index is considered excessive.
-
Peter Geoghegan authored
Commit 9f3665fb removed the vacuum_cleanup_index_scale_factor storage parameter. However, that creates dump/reload hazards when moving across major versions. Add back the vacuum_cleanup_index_scale_factor parameter (though not the GUC of the same name) purely to avoid problems when using tools like pg_upgrade. The parameter remains disabled and undocumented. No backpatch to Postgres 13, since vacuum_cleanup_index_scale_factor was only disabled by REL_13_STABLE's version of master branch commit 9f3665fb in the first place -- the parameter already looks like this on REL_13_STABLE. Discussion: https://postgr.es/m/YEm/a3Ko3nKnBuVq@paquier.xyz
-
Robert Haas authored
Previously, the code and documentation seem to have essentially assumed that a call to pg_wal_replay_pause() would take effect immediately, but that's not the case, because we only check for a pause in certain places. This means that a tool that uses this function and then wants to do something else afterward that depends on the pause having taken effect doesn't know how long it needs to wait to be sure that no more WAL is going to be replayed. To avoid that, add a new function pg_get_wal_replay_pause_state() which returns either 'not paused', 'pause requested', or 'paused'. After calling pg_wal_replay_pause() the status will immediately change from 'not paused' to 'pause requested'; when the startup process has noticed this, the status will change to 'paused'. For backward compatibility, pg_is_wal_replay_paused() still exists and returns the same thing as before: true if a pause has been requested, whether or not it has taken effect yet; and false if not. The documentation is updated to clarify. To improve the chances that a pause request is quickly confirmed effective, adjust things so that WaitForWALToBecomeAvailable will swiftly reach a call to recoveryPausesHere() when a pause request is made. Dilip Kumar, reviewed by Simon Riggs, Kyotaro Horiguchi, Yugo Nagata, Masahiko Sawada, and Bharath Rupireddy. Discussion: http://postgr.es/m/CAFiTN-vcLLWEm8Zr%3DYK83rgYrT9pbC8VJCfa1kY9vL3AUPfu6g%40mail.gmail.com
-
Tom Lane authored
Commit 92785dac copied some logic related to advancement of inStart from pqParseInput3 into getRowDescriptions and getAnotherTuple, because it wanted to allow user-defined row processor callbacks to potentially longjmp out of the library, and inStart would have to be updated before that happened to avoid an infinite loop. We later decided that that API was impossibly fragile and reverted it, but we didn't undo all of the related code changes, and this bit of messiness survived. Undo it now so that there's just one place in pqParseInput3's processing where inStart is advanced; this will simplify addition of better tracing support. getParamDescriptions had grown similar processing somewhere along the way (not in 92785dac; I didn't track down just when), but it's actually buggy because its handling of corrupt-message cases seems to have been copied from the v2 logic where we lacked a known message length. The cases where we "goto not_enough_data" should not simply return EOF, because then we won't consume the message, potentially creating an infinite loop. That situation now represents a definitively corrupt message, and we should report it as such. Although no field reports of getParamDescriptions getting stuck in a loop have been seen, it seems appropriate to back-patch that fix. I chose to back-patch all of this to keep the logic looking more alike in supported branches. Discussion: https://postgr.es/m/2217283.1615411989@sss.pgh.pa.us
-
Robert Haas authored
Create a wrapper object, ParallelSlotArray, to encapsulate the number of slots and the slot array itself, plus some other relevant bits of information. This reduces the number of parameters we have to pass around all over the place. Allow for a ParallelSlotArray to contain slots connected to different databases within a single cluster. The current clients of this mechanism don't need this, but it is expected to be used by future patches. Defer connecting to databases until we actually need the connection for something. This is a slight behavior change for vacuumdb and reindexdb. If you specify a number of jobs that is larger than the number of objects, the extra connections will now not be used. But, on the other hand, if you specify a number of jobs that is so large that it's going to fail, the failure would previously have happened before any operations were actually started, and now it won't. Mark Dilger, reviewed by me. Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com Discussion: http://postgr.es/m/BA592F2D-F928-46FF-9516-2B827F067F57@enterprisedb.com
-
Michael Paquier authored
Based on an analysis of the OpenSSL code with Jacob, moving to EVP for the cryptohash computations makes it necessary to set up the libcrypto callbacks that were getting set only for SSL connections, but not for connections without SSL. Not setting the callbacks makes the use of threads potentially unsafe for connections that compute cryptohashes during authentication, like MD5 or SCRAM, if a failure happens during a cryptohash computation. The logic setting the libssl and libcrypto states is then split into two parts, both using the same locking, with libcrypto being set up for SSL and non-SSL connections, while SSL connections set any libssl state afterwards as needed. Prior to this commit, only SSL connections would have set the libcrypto callbacks that are necessary to ensure proper thread locking when using multiple concurrent threads in libpq (ENABLE_THREAD_SAFETY). Note that this is only required for OpenSSL 1.0.2 and 1.0.1 (the oldest version supported on HEAD), as 1.1.0 has its own internal locking and has dropped support for CRYPTO_set_locking_callback(). Tests with up to 300 threads with OpenSSL 1.0.1 and 1.0.2, mixing SSL and non-SSL connection threads, did not show any performance impact after some micro-benchmarking. pgbench can be used here with -C and a mostly-empty script (with one \set meta-command for example) to stress authentication requests, and we have mixed that with some custom programs for testing. Reported-by: Jacob Champion Author: Michael Paquier Reviewed-by: Jacob Champion Discussion: https://postgr.es/m/fd3ba610085f1ff54623478cf2f7adf5af193cbb.camel@vmware.com
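A standalone C illustration, not libpq's actual code, of the OpenSSL 1.0.x thread-safety setup this extends to non-SSL connections: registering per-lock mutexes via CRYPTO_set_locking_callback(), which newer OpenSSL versions no longer need.

```c
#include <stdlib.h>
#include <pthread.h>
#include <openssl/crypto.h>

static pthread_mutex_t *crypto_locks;

/* Called by libcrypto whenever it takes or releases lock number n. */
static void
locking_cb_example(int mode, int n, const char *file, int line)
{
	if (mode & CRYPTO_LOCK)
		pthread_mutex_lock(&crypto_locks[n]);
	else
		pthread_mutex_unlock(&crypto_locks[n]);
}

static void
init_libcrypto_threading_example(void)
{
	int			nlocks = CRYPTO_num_locks();
	int			i;

	crypto_locks = malloc(sizeof(pthread_mutex_t) * nlocks);
	if (crypto_locks == NULL)
		return;
	for (i = 0; i < nlocks; i++)
		pthread_mutex_init(&crypto_locks[i], NULL);

	/* A no-op macro on OpenSSL >= 1.1.0, which locks internally. */
	CRYPTO_set_locking_callback(locking_cb_example);
}
```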
-
Peter Geoghegan authored
Oversight in commit 9f3665fb. Backpatch: 13-, just like commit 9f3665fb.
-
Thomas Munro authored
Add a note that per-buffer I/O condition variables currently live outside the BufferDesc struct. Follow-up for commit d8725104. Reported-by: Julien Rouhaud <rjuju123@gmail.com> Discussion: https://postgr.es/m/20210311031118.hucytmrgwlktjxgq%40nol
-
Bruce Momjian authored
This is a follow-on patch to 92c12e46. In that patch, we renamed "altitude" to "elevation" in the docs, based on these details: https://mapscaping.com/blogs/geo-candy/what-is-the-difference-between-elevation-relief-and-altitude This renames the tutorial SQL files to match the documentation. Reported-by: max1@inbox.ru Discussion: https://postgr.es/m/161512392887.1046.3137472627109459518@wrigleys.postgresql.org Backpatch-through: 9.6
-
Peter Geoghegan authored
vacuumlazy.c sometimes fails to update pg_class entries for each index (to ensure that pg_class.reltuples is current), even though analyze.c assumed that that must have happened during VACUUM ANALYZE. There are at least a couple of reasons for this. For example, vacuumlazy.c could fail to update pg_class when the index AM indicated that its statistics are merely an estimate, per the contract for amvacuumcleanup() routines established by commit e5734597 back in 2006. Stop assuming that pg_class must have been updated with accurate statistics within VACUUM ANALYZE -- update pg_class for indexes at the same time as the table relation in all cases. That way VACUUM ANALYZE will never fail to keep pg_class.reltuples reasonably accurate. The only downside of this approach (compared to the old approach) is that it might inaccurately set pg_class.reltuples for indexes whose heap relation ends up with the same inaccurate value anyway. This doesn't seem too bad. We already consistently called vac_update_relstats() (to update pg_class) for the heap/table relation twice during any VACUUM ANALYZE -- once in vacuumlazy.c, and once in analyze.c. We now make sure that we call vac_update_relstats() at least once (though often twice) for each index. This is follow up work to commit 9f3665fb, which dealt with issues in btvacuumcleanup(). Technically this fixes an unrelated issue, though. btvacuumcleanup() no longer provides an accurate num_index_tuples value following commit 9f3665fb (when there was no btbulkdelete() call during the VACUUM operation in question), but hashvacuumcleanup() has worked in the same way for many years now. Author: Peter Geoghegan <pg@bowt.ie> Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com> Discussion: https://postgr.es/m/CAH2-WzknxdComjhqo4SUxVFk_Q1171GJO2ZgHZ1Y6pion6u8rA@mail.gmail.com Backpatch: 13-, just like commit 9f3665fb.
-
Peter Geoghegan authored
Remove the entire idea of "stale stats" within nbtree VACUUM (stop caring about stats involving the number of inserted tuples). Also remove the vacuum_cleanup_index_scale_factor GUC/param on the master branch (though just disable them on postgres 13). The vacuum_cleanup_index_scale_factor/stats interface made the nbtree AM partially responsible for deciding when pg_class.reltuples stats needed to be updated. This seems contrary to the spirit of the index AM API, though -- it is not actually necessary for an index AM's bulk delete and cleanup callbacks to provide accurate stats when it happens to be inconvenient. The core code owns that. (Index AMs have the authority to perform or not perform certain kinds of deferred cleanup based on their own considerations, such as page deletion and recycling, but that has little to do with pg_class.reltuples/num_index_tuples.) This issue was fairly harmless until the introduction of the autovacuum_vacuum_insert_threshold feature by commit b07642db, which had an undesirable interaction with the vacuum_cleanup_index_scale_factor mechanism: it made insert-driven autovacuums perform full index scans, even though there is no real benefit to doing so. This has been tied to a regression with an append-only insert benchmark [1]. Also have remaining cases that perform a full scan of an index during a cleanup-only nbtree VACUUM indicate that the final tuple count is only an estimate. This prevents vacuumlazy.c from setting the index's pg_class.reltuples in those cases (it will now only update pg_class when vacuumlazy.c had TIDs for nbtree to bulk delete). This arguably fixes an oversight in deduplication-related bugfix commit 48e12913. [1] https://smalldatum.blogspot.com/2021/01/insert-benchmark-postgres-is-still.html Author: Peter Geoghegan <pg@bowt.ie> Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com> Discussion: https://postgr.es/m/CAD21AoA4WHthN5uU6+WScZ7+J_RcEjmcuH94qcoUPuB42ShXzg@mail.gmail.com Backpatch: 13-, where autovacuum_vacuum_insert_threshold was added.
-
- 10 Mar, 2021 2 commits
-
-
Bruce Momjian authored
GiST indexes are complex, so adding more details in the code might help someone. Discussion: https://postgr.es/m/20210302164021.GA364@momjian.us
-
Thomas Munro authored
1. Backends waiting for buffer I/O are now interruptible. 2. If something goes wrong in a backend that is currently performing I/O, waiting backends no longer wake up until that backend reaches AbortBufferIO() and broadcasts on the CV. Previously, any waiters would wake up (because the I/O lock was automatically released) and then busy-loop until AbortBufferIO() cleared BM_IO_IN_PROGRESS. 3. LWLockMinimallyPadded is removed, as it would now be unused. Author: Robert Haas <robertmhaas@gmail.com> Reviewed-by: Thomas Munro <thomas.munro@gmail.com> Reviewed-by: Julien Rouhaud <rjuju123@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (earlier version, 2016) Discussion: https://postgr.es/m/CA%2BhUKGJ8nBFrjLuCTuqKN0pd2PQOwj9b_jnsiGFFMDvUxahj_A%40mail.gmail.com Discussion: https://postgr.es/m/CA+Tgmoaj2aPti0yho7FeEf2qt-JgQPRWb0gci_o1Hfr=C56Xng@mail.gmail.com
-