- 26 Feb, 2020 3 commits
-
-
Peter Geoghegan authored
Deduplication reduces the storage overhead of duplicates in indexes that use the standard nbtree index access method. The deduplication process is applied lazily, after the point where opportunistic deletion of LP_DEAD-marked index tuples occurs. Deduplication is only applied at the point where a leaf page split would otherwise be required. New posting list tuples are formed by merging together existing duplicate tuples. The physical representation of the items on an nbtree leaf page is made more space efficient by deduplication, but the logical contents of the page are not changed. Even unique indexes make use of deduplication as a way of controlling bloat from duplicates whose TIDs point to different versions of the same logical table row.

The lazy approach taken by nbtree has significant advantages over a GIN-style eager approach. Most individual inserts of index tuples have exactly the same overhead as before. The extra overhead of deduplication is amortized across insertions, just like the overhead of page splits. The key space of indexes works in the same way as it has since commit dd299df8 (the commit that made heap TID a tiebreaker column).

Testing has shown that nbtree deduplication can generally make indexes with about 10 or 15 tuples for each distinct key value about 2.5X - 4X smaller, even with single column integer indexes (e.g., an index on a referencing column that accompanies a foreign key). The final size of single column nbtree indexes comes close to the final size of a similar contrib/btree_gin index, at least in cases where GIN's posting list compression isn't very effective. This can significantly improve transaction throughput, and significantly reduce the cost of vacuuming indexes.

A new index storage parameter (deduplicate_items) controls the use of deduplication. The default setting is 'on', so all new B-Tree indexes automatically use deduplication where possible. This decision will be reviewed at the end of the Postgres 13 beta period.

There is a regression of approximately 2% of transaction throughput with synthetic workloads that consist of append-only inserts into a table with several non-unique indexes, where all indexes have few or no repeated values. The underlying issue is that cycles are wasted on unsuccessful attempts at deduplicating items in non-unique indexes. There doesn't seem to be a way around it short of disabling deduplication entirely. Note that deduplication of items in unique indexes is fairly well targeted in general, which avoids the problem there (we can use a special heuristic to trigger deduplication passes in unique indexes, since we're specifically targeting "version bloat").

Bump XLOG_PAGE_MAGIC because xl_btree_vacuum changed. No bump in BTREE_VERSION, since the representation of posting list tuples works in a way that's backwards compatible with version 4 indexes (i.e. indexes built on PostgreSQL 12). However, users must still REINDEX a pg_upgrade'd index to use deduplication, regardless of the Postgres version they've upgraded from. This is the only way to set the new nbtree metapage flag indicating that deduplication is generally safe.

Author: Anastasia Lubennikova, Peter Geoghegan
Reviewed-By: Peter Geoghegan, Heikki Linnakangas
Discussion: https://postgr.es/m/55E4051B.7020209@postgrespro.ru
    https://postgr.es/m/4ab6e2db-bcee-f4cf-0916-3a06e6ccbb55@postgrespro.ru
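A minimal sketch of the user-facing knob described above, assuming PostgreSQL 13 or later; the table and index names are hypothetical:

```sql
-- Hypothetical table and index; assumes PostgreSQL 13 or later.
CREATE TABLE orders_fk_demo (customer_id integer NOT NULL);

-- deduplicate_items defaults to 'on'; spelled out here for clarity.
CREATE INDEX orders_customer_idx
    ON orders_fk_demo (customer_id)
    WITH (deduplicate_items = on);

-- Turn deduplication off for an existing index, if desired.
ALTER INDEX orders_customer_idx SET (deduplicate_items = off);

-- A pg_upgrade'd index must be rebuilt before it can use deduplication.
REINDEX INDEX orders_customer_idx;
```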
-
Peter Geoghegan authored
Invent the concept of a B-Tree equalimage ("equality implies image equality") support function, registered as support function 4. This indicates whether it is safe (or not safe) to apply optimizations that assume that any two datums considered equal by an operator class's order method must be interchangeable without any loss of semantic information. This is static information about an operator class and a collation.

Register an equalimage routine for almost all of the existing B-Tree opclasses. We only need two trivial routines for all of the opclasses that are included with the core distribution. There is one routine for opclasses that index non-collatable types (which returns 'true' unconditionally), plus another routine for collatable types (which returns 'true' when the collation is a deterministic collation).

This patch is infrastructure for an upcoming patch that adds B-Tree deduplication.

Author: Peter Geoghegan, Anastasia Lubennikova
Discussion: https://postgr.es/m/CAH2-Wzn3Ee49Gmxb7V1VJ3-AC8fWn-Fr8pfWQebHe8rYRxt5OQ@mail.gmail.com
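As an illustrative way to see the effect of this change, the catalog query below lists the equalimage routines registered as B-Tree support function 4 (a sketch assuming PostgreSQL 13 or later; column aliases are mine):

```sql
-- List the equalimage routine, if any, registered as B-Tree support
-- function 4 for each operator family and indexed type.
SELECT opf.opfname                   AS opfamily,
       amp.amproclefttype::regtype   AS indexed_type,
       amp.amproc                    AS equalimage_function
FROM pg_amproc amp
JOIN pg_opfamily opf ON opf.oid = amp.amprocfamily
JOIN pg_am am        ON am.oid  = opf.opfmethod
WHERE am.amname = 'btree'
  AND amp.amprocnum = 4
ORDER BY 1, 2;
```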
-
Magnus Hagander authored
In passing, also quote the filename in one message where it wasn't. Author: Dagfinn Ilmari Mannsåker Discussion: https://postgr.es/m/87pne2w98h.fsf@wibble.ilmari.org
-
- 25 Feb, 2020 1 commit
-
-
Michael Paquier authored
GenerateConfigHeader() in Solution.pm was complaining about unused define symbols even if a newer config header was not generated, causing successive build attempts with MSVC to fail. Oversight in commit 8f4fb4c6. Author: Kyotaro Horiguchi Reviewed-by: Juan José Santamaría Flecha Discussion: https://postgr.es/m/20200218.160500.44393633318853097.horikyota.ntt@gmail.com
-
- 24 Feb, 2020 9 commits
-
-
Tom Lane authored
I forgot that some compilers won't handle #if constructs within ereport() calls. Duplicating most of the call is annoying but simple. Per buildfarm.
-
Andres Freund authored
Do so by combining the various steps that are part of aggregate transition function invocation into one larger step. As some of the current steps are only necessary for some aggregates, have one variant of the aggregate transition step for each possible combination.

To avoid further manual copies of code in the different transition step implementations, move most of the code into helper functions marked as "always inline".

The benefit of this change is an increase in performance when aggregating lots of rows. This comes in part from the reduced number of indirect jumps due to the reduced number of steps, and in part from reducing redundant setup code across steps. This mainly benefits interpreted execution, but the code generated by JIT is also improved a bit. As a nice side-effect it also ends up making the code a bit simpler.

A small additional optimization is removing the need to set aggstate->curaggcontext before calling ExecAggInitGroup, instead passing curaggcontext as an argument. In contrast to other aggregate-related functions, it was only needed there to fetch a memory context to copy the transition value into.

Author: Andres Freund
Discussion: https://postgr.es/m/20191023163849.sosqbfs5yenocez3@alap3.anarazel.de
    https://postgr.es/m/5c371df7cee903e8cd4c685f90c6c72086d3a2dc.camel@j-davis.com
-
Michael Paquier authored
Multi-insert for heap is not yet used actively for catalogs, but the code to support this case is in place for logical decoding. The existing code forgot to issue an XLOG_HEAP2_NEW_CID record for the first tuple inserted, leading to failures when attempting to use multiple inserts for catalogs at decoding time. This commit fixes the problem by WAL-logging the needed CID. This is not an active bug, so no back-patch is done.

Author: Daniel Gustafsson
Discussion: https://postgr.es/m/E0D4CC67-A1CF-4DF4-991D-B3AC2EB5FAE9@yesql.se
-
Tom Lane authored
The comments in fd.c have long claimed that all file allocations should go through that module, but in reality that's not always practical. fd.c doesn't supply APIs for invoking some FD-producing syscalls like pipe() or epoll_create(); and the APIs it does supply for non-virtual FDs are mostly insistent on releasing those FDs at transaction end; and in some cases the actual open() call is in code that can't be made to use fd.c, such as libpq.

This has led to a situation where, in a modern server, there are likely to be seven or so long-lived FDs per backend process that are not known to fd.c. Since NUM_RESERVED_FDS is only 10, that meant we had *very* few spare FDs if max_files_per_process is >= the system ulimit and fd.c had opened all the files it thought it safely could. The contrib/postgres_fdw regression test, in particular, could easily be made to fall over by running it under a restrictive ulimit.

To improve matters, invent functions Acquire/Reserve/ReleaseExternalFD that allow outside callers to tell fd.c that they have or want to allocate a FD that's not directly managed by fd.c. Add calls to track all the fixed FDs in a standard backend session, so that we are honestly guaranteeing that NUM_RESERVED_FDS FDs remain unused below the EMFILE limit in a backend's idle state. The coding rules for these functions say that there's no need to call them in code that just allocates one FD over a fairly short interval; we can dip into NUM_RESERVED_FDS for such cases. That means that there aren't all that many places where we need to worry. But postgres_fdw and dblink must use this facility to account for long-lived FDs consumed by libpq connections. There may be other places where it's worth doing such accounting, too, but this seems like enough to solve the immediate problem.

Internally to fd.c, "external" FDs are limited to max_safe_fds/3 FDs. (Callers can choose to ignore this limit, but of course it's unwise to do so except for fixed file allocations.) I also reduced the limit on "allocated" files to max_safe_fds/3 FDs (it had been max_safe_fds/2). Conceivably a smarter rule could be used here --- but in practice, on reasonable systems, max_safe_fds should be large enough that this isn't much of an issue, so KISS for now.

To avoid possible regression in the number of external or allocated files that can be opened, increase FD_MINFREE and the lower limit on max_files_per_process a little bit; we now insist that the effective "ulimit -n" be at least 64.

This seems like pretty clearly a bug fix, but in view of the lack of field complaints, I'll refrain from risking a back-patch.

Discussion: https://postgr.es/m/E1izCmM-0005pV-Co@gemulon.postgresql.org
-
Peter Eisentraut authored
Given all we have learned about fsync() error handling in the last few years, reporting an fsync() error non-fatally is not useful, unless you don't care much about the file, in which case you probably don't need to use fsync() in the first place.

Change fsync_fname() and durable_rename() to exit(1) on fsync() errors other than those that we specifically chose to ignore. This affects initdb, pg_basebackup, pg_checksums, pg_dump, pg_dumpall, and pg_rewind.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/d239d1bd-aef0-ca7c-dc0a-da14bdcf0392%402ndquadrant.com
-
Robert Haas authored
hash_any() and its various variants are defined to return Datum, which is a backend-only concept, but the underlying functions actually want to return uint32 and uint64, and only return Datum because it's convenient for callers who are using them to implement a hash function for some SQL datatype.

However, changing these functions to return uint32 and uint64 seems like it might lead to programming errors or back-patching difficulties, both because they are widely used and because failure to use UInt{32,64}GetDatum() might not provoke a compilation error. Instead, rename the existing functions as well as changing the return type, and add static inline wrappers for those callers that need the previous behavior.

Although this commit adapts hashutils.h and hashfn.c so that they can be compiled as frontend code, it does not actually do anything that would cause them to be so compiled. That is left for another commit.

Patch by me, reviewed by Suraj Kharage and Mark Dilger.

Discussion: http://postgr.es/m/CA+TgmoaRiG4TXND8QuM6JXFRkM_1wL2ZNhzaUKsuec9-4yrkgw@mail.gmail.com
-
Robert Haas authored
Previously, some of the prototypes for functions in hashfn.c were in utils/hashutils.h and others were in utils/hsearch.h, but that is confusing and has no particular benefit. Patch by me, reviewed by Suraj Kharage and Mark Dilger. Discussion: http://postgr.es/m/CA+TgmoaRiG4TXND8QuM6JXFRkM_1wL2ZNhzaUKsuec9-4yrkgw@mail.gmail.com
-
Robert Haas authored
The closely-related function bms_hash_value is already defined in that file, and this change means that hashfn.c no longer needs to depend on nodes/bitmapset.h. That gets us closer to allowing use of the hash functions in hashfn.c in frontend code. Patch by me, reviewed by Suraj Kharage and Mark Dilger. Discussion: http://postgr.es/m/CA+TgmoaRiG4TXND8QuM6JXFRkM_1wL2ZNhzaUKsuec9-4yrkgw@mail.gmail.com
-
Michael Paquier authored
An instance of PostgreSQL crashing with a bad timing could leave behind temporary pg_internal.init files, potentially causing failures when verifying checksums. As the same exclusion lists are used between pg_rewind, pg_checksums and basebackup.c, all those tools are extended with prefix checks to keep everything in sync, with dedicated checks added for pg_internal.init.

Backpatch down to 11, where pg_checksums (pg_verify_checksums in 11) and checksum verification for base backups have been introduced.

Reported-by: Michael Banck
Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi, David Steele
Discussion: https://postgr.es/m/62031974fd8e941dd8351fbc8c7eff60d59c5338.camel@credativ.de
Backpatch-through: 11
-
- 22 Feb, 2020 3 commits
-
-
Peter Eisentraut authored
Right now this only makes BootStrapXLOG() a bit more manageable, but in the future there may be external callers. Discussion: https://www.postgresql.org/message-id/e8f86ba5-48f1-a80a-7f1d-b76bcb9c5c47@2ndquadrant.com
-
Peter Eisentraut authored
ReadControlFile() here conflicts with a function of the same name in xlog.c. There is no actual conflict right now, but since pg_resetwal.c reaches deep inside backend headers, it's possible in the future. Discussion: https://www.postgresql.org/message-id/e8f86ba5-48f1-a80a-7f1d-b76bcb9c5c47@2ndquadrant.com
-
- 21 Feb, 2020 16 commits
-
-
Tom Lane authored
We're not testing that anywhere anymore, but for consistency, it should get defined.
-
Peter Eisentraut authored
-
Peter Eisentraut authored
This to allow verifying the MSVC build file generation without having to have Windows. To do this, we avoid Windows-specific Perl modules and don't run the "cl" compiler or "nmake". The resulting build files won't actually be completely correct, but it's useful enough. Reviewed-by: Michael Paquier <michael@paquier.xyz> Reviewed-by: Julien Rouhaud <rjuju123@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/d73b2c7b-f081-8357-8422-7564d55f1aac%402ndquadrant.com
-
Tom Lane authored
These compiler features are required by C99, so remove the configure probes for them. This is part of a series of commits to get rid of no-longer-relevant configure checks and dead src/port/ code. I'm committing them separately to make it easier to back out individual changes if they prove less portable than I expect. Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
-
Tom Lane authored
Windows has this, and so do all other live platforms according to the buildfarm; it's been required by POSIX since SUSv2. So remove the configure probe and tests of HAVE_WCHAR_H. This is part of a series of commits to get rid of no-longer-relevant configure checks and dead src/port/ code. I'm committing them separately to make it easier to back out individual changes if they prove less portable than I expect. Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
-
Tom Lane authored
These are required by POSIX since SUSv2, and no live platforms fail to provide them. On Windows, utime() exists and we bring our own <utime.h>, so we're good there too. So remove the configure probes and ad-hoc substitute code. We don't need to check for utimes() anymore either, since that was only used as a substitute. In passing, make the Windows build include <sys/utime.h> only where we need it, not everywhere. This is part of a series of commits to get rid of no-longer-relevant configure checks and dead src/port/ code. I'm committing them separately to make it easier to back out individual changes if they prove less portable than I expect. Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
-
Tom Lane authored
Windows has this since _MSC_VER >= 1200, and so do all other live platforms according to the buildfarm, so remove the configure probe and src/port/ substitution. This is part of a series of commits to get rid of no-longer-relevant configure checks and dead src/port/ code. I'm committing them separately to make it easier to back out individual changes if they prove less portable than I expect. Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
-
Tom Lane authored
Windows has this, and so do all other live platforms according to the buildfarm, so remove the configure probe and c.h's substitute code. This is part of a series of commits to get rid of no-longer-relevant configure checks and dead src/port/ code. I'm committing them separately to make it easier to back out individual changes if they prove less portable than I expect. Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
-
Tom Lane authored
Windows has this, and so do all other live platforms according to the buildfarm, so remove the configure probe and float.c's substitute code. This is part of a series of commits to get rid of no-longer-relevant configure checks and dead src/port/ code. I'm committing them separately to make it easier to back out individual changes if they prove less portable than I expect. Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
-
Tom Lane authored
Windows has this, and so do all other live platforms according to the buildfarm, so remove the configure probe and src/port/ substitution. This also lets us get rid of some configure probes that existed only to support src/port/isinf.c. I kept the port.h hack to force using __builtin_isinf() on clang, though. This is part of a series of commits to get rid of no-longer-relevant configure checks and dead src/port/ code. I'm committing them separately to make it easier to back out individual changes if they prove less portable than I expect. Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
-
Tom Lane authored
Windows has this, and so do all other live platforms according to the buildfarm, so remove the configure probe and src/port/ substitution. Keep the probe that detects whether _LARGEFILE_SOURCE has to be defined to get that, though ... that seems to be still relevant in some places. This is part of a series of commits to get rid of no-longer-relevant configure checks and dead src/port/ code. I'm committing them separately to make it easier to back out individual changes if they prove less portable than I expect. Discussion: https://postgr.es/m/15379.1582221614@sss.pgh.pa.us
-
Peter Eisentraut authored
GCC reports various instances of

    warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
    warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]

and MSVC equivalently

    warning C4312: 'type cast': conversion from 'int' to 'void *' of greater size
    warning C4311: 'type cast': pointer truncation from 'void *' to 'long'

in ECPG test files. This is because void* and long are cast back and forth, but on 64-bit Windows, these have different sizes. Fix by using intptr_t instead. The code actually worked fine because the integer values in use are all small. So this is just to get the test code to compile warning-free.

This change is simplified by having made stdint.h required (commit 95733841). Before this it would have been more complicated because the ecpg test source files don't use the full pg_config.h.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/5d398bbb-262a-5fed-d839-d0e5cff3c0d7%402ndquadrant.com
-
Jeff Davis authored
Commit 5b618e1f made an unintended behavior change.
-
Etsuro Fujita authored
Previously, partition_bounds_copy() checked whether the strategy for the given partition bounds was hash or not, and then determined the number of elements in each datum of the datums array for the partition bounds, on every iteration of the loop that copies the datums array, but there is no need to do that. Perform the checks only once, before the loop.

Author: Etsuro Fujita
Reported-by: Amit Langote and Julien Rouhaud
Discussion: https://postgr.es/m/CAPmGK14Rvxrm8DHWvCjdoks6nwZuHBPvMnWZ6rkEx2KhFeEoPQ@mail.gmail.com
-
Peter Eisentraut authored
stdint.h belongs to the compiler (as opposed to inttypes.h), so by requiring a C99 compiler we can also require stdint.h unconditionally. Remove configure checks and other workarounds for it.

This also removes a few steps in the required portability adjustments to the imported time zone code, which can be applied on the next import.

When using GCC on a platform that is otherwise pre-C99, this will now require at least GCC 4.5, which is the first release that supplied a standard-conforming stdint.h if the native platform didn't have it.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/5d398bbb-262a-5fed-d839-d0e5cff3c0d7%402ndquadrant.com
-
Michael Paquier authored
The documentation included some outdated instructions to change the architecture, build type or target OS of a build done with MSVC. This commit updates the documentation to include the modern options available, down to Visual Studio 2013. Reported-by: Kyotaro Horiguchi Author: Juan José Santamaría Flecha Discussion: https://postgr.es/m/CAC+AXB0J7tAqW_2F1fCE4Dh2=Ccz96TcLpsGXOCvka7VvWG9Qw@mail.gmail.com Backpatch-through: 12
-
- 20 Feb, 2020 3 commits
-
-
Alvaro Herrera authored
Avoid using a join between the relations involved in the FK to detect an FK violation: the planner might, at some point, optimize such a join on the assumption that the PK must exist on the referenced side, effectively masking the bug this test tries to detect.

Tom Lane and Jehan-Guillaume de Rorthais

Discussion: https://postgr.es/m/467.1581270529@sss.pgh.pa.us
-
Etsuro Fujita authored
-
Michael Paquier authored
e2e02191 has removed a code path for Windows 2000 that attempts to load wship6.dll as fallback if ws2_32.dll is found but not getaddrinfo(), leaving behind a dangling pointer as the library is freed. However, there is no point in this check as ws2_32.dll exists since Windows XP, so just remove the duplicated check. Reported-by: Tom Lane Discussion: https://postgr.es/m/9781.1582146114@sss.pgh.pa.us
-
- 19 Feb, 2020 5 commits
-
-
Tom Lane authored
Creating a bunch of non-overlapping partial indexes is generally a bad idea, so add an example saying not to do that. Back-patch to v10. Before that, the alternative of using (real) partitioning wasn't available, so that the tradeoff isn't quite so clear cut. Discussion: https://postgr.es/m/CAKVFrvFY-f7kgwMRMiPLbPYMmgjc8Y2jjUGK_Y0HVcYAmU6ymg@mail.gmail.com
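A minimal sketch of the anti-pattern the new documentation example warns against, with hypothetical table and column names (not the docs' exact wording):

```sql
-- Hypothetical schema, for illustration only.
CREATE TABLE measurements (id bigint, bucket int, payload text);

-- Generally a bad idea: a bunch of non-overlapping partial indexes.
CREATE INDEX measurements_bucket_0 ON measurements (id) WHERE bucket = 0;
CREATE INDEX measurements_bucket_1 ON measurements (id) WHERE bucket = 1;
CREATE INDEX measurements_bucket_2 ON measurements (id) WHERE bucket = 2;

-- Usually better: one plain index (or, if warranted, real partitioning).
DROP INDEX measurements_bucket_0, measurements_bucket_1, measurements_bucket_2;
CREATE INDEX measurements_bucket_id ON measurements (bucket, id);
```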
-
Tom Lane authored
Andres Freund pointed out that allowing non-superusers to run "CREATE EXTENSION ... FROM unpackaged" has security risks, since the unpackaged-to-1.0 scripts don't try to verify that the existing objects they're modifying are what they expect. Just attaching such objects to an extension doesn't seem too dangerous, but some of them do more than that.

We could have resolved this, perhaps, by still requiring superuser privilege to use the FROM option. However, it's fair to ask just what we're accomplishing by continuing to lug the unpackaged-to-1.0 scripts forward. None of them have received any real testing since 9.1 days, so they may not even work anymore (even assuming that one could still load the previous "loose" object definitions into a v13 database). And an installation that's trying to go from pre-9.1 to v13 or later in one jump is going to have worse compatibility problems than whether there's a trivial way to convert their contrib modules into extension style.

Hence, let's just drop both those scripts and the core-code support for "CREATE EXTENSION ... FROM".

Discussion: https://postgr.es/m/20200213233015.r6rnubcvl4egdh5r@alap3.anarazel.de
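For illustration only (hstore is used here merely as an example extension), the removed upgrade path versus the only form that remains supported:

```sql
-- Removed: the unpackaged upgrade path is no longer accepted on v13+.
CREATE EXTENSION hstore FROM unpackaged;  -- now an error

-- Supported: install the packaged extension directly.
CREATE EXTENSION hstore;
```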
-
Peter Eisentraut authored
Reported-by: Daniel Verite <daniel@manitou-mail.org>
-
Tom Lane authored
The function hash table keys made by compute_function_hashkey() failed to distinguish event-trigger call context from regular call context. This meant that once we'd successfully made a hash entry for an event trigger (either by validation, or by normal use as an event trigger), an attempt to call the trigger function as a plain function would find this hash entry and thereby bypass the you-can't-do-that check in do_compile(). Thus we'd attempt to execute the function, leading to strange errors or even crashes, depending on function contents and server version.

To fix, add an isEventTrigger field to PLpgSQL_func_hashkey, paralleling the longstanding infrastructure for regular triggers. This fits into what had been pad space, so there's no risk of an ABI break, even assuming that any third-party code is looking at these hash keys. (I considered replacing isTrigger with a PLpgSQL_trigtype enum field, but felt that that carried some API/ABI risk. Maybe we should change it in HEAD though.)

Per bug #16266 from Alexander Lakhin. This has been broken since event triggers were invented, so back-patch to all supported branches.

Discussion: https://postgr.es/m/16266-fcd7f838e97ba5d4@postgresql.org
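A hedged sketch of the scenario described above, with hypothetical object names; the exact error text for the rejected call depends on the server version:

```sql
-- Hypothetical event trigger function (requires superuser to attach).
CREATE FUNCTION log_ddl() RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
    RAISE NOTICE 'command tag: %', tg_tag;
END;
$$;

CREATE EVENT TRIGGER log_ddl_trigger ON ddl_command_end
    EXECUTE FUNCTION log_ddl();

-- Calling the event trigger function as a plain function must be rejected;
-- with the fix this fails cleanly instead of misbehaving.
SELECT log_ddl();
```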
-
Peter Eisentraut authored
It was set to immutable. This was a mistake in the initial commit (5925e554). Reported-by: hubert depesz lubaczewski <depesz@depesz.com> Discussion: https://www.postgresql.org/message-id/flat/20200218185452.GA8710%40depesz.com
-