- 30 Mar, 2018 2 commits
-
-
Bruce Momjian authored
Reported-by: Fabrízio de Royes Mello Discussion: https://postgr.es/m/CAFcNs+qDD+QKcF8YCPQnjAxoWN61qY_YdFLB3iQqbWCLSCyY0g@mail.gmail.com Author: Fabrízio de Royes Mello
-
Tatsuo Ishii authored
Now all tables used in view definitions can be recursively locked by a LOCK command. Author: Yugo Nagata Reviewed by Robert Haas, Thomas Munro and me. Discussion: https://postgr.es/m/20171011183629.eb2817b3.nagata%40sraoss.co.jp
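A minimal illustration of the new behavior (table and view names are hypothetical):

```sql
CREATE TABLE orders (id int, total numeric);
CREATE VIEW big_orders AS SELECT * FROM orders WHERE total > 100;

BEGIN;
-- Locking the view now recursively locks "orders" as well.
LOCK TABLE big_orders IN ACCESS EXCLUSIVE MODE;
COMMIT;
```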
-
- 29 Mar, 2018 16 commits
-
-
Andres Freund authored
Author: John Naylor and Andres Freund Discussion: https://postgr.es/m/CAJVSVGUs-VcwSY7-Kx-GQe__8hvWuA4Uhyf3gxoMXeiZqebE9g@mail.gmail.com
-
Robert Haas authored
It's not needed. Jeevan Chalke Discussion: http://postgr.es/m/CAM2+6=XPWujjmj5zUaBTGDoB38CemwcPmjkRy0qOcsQj_V+2sQ@mail.gmail.com
-
Robert Haas authored
If the toplevel scan/join target list is parallel-safe, postpone generating Gather (or Gather Merge) paths until after the toplevel has been adjusted to return it. This (correctly) makes queries with expensive functions in the target list more likely to choose a parallel plan, since the cost of the plan now reflects the fact that the evaluation will happen in the workers rather than the leader. The original complaint about this problem was from Jeff Janes. If the toplevel scan/join relation is partitioned, recursively apply the changes to all partitions. This sometimes allows us to get rid of Result nodes, because Append is not projection-capable but its children may be. It also cleans up what appears to be incorrect SRF handling from commit e2f1eb0e: the old code had no knowledge of SRFs for child scan/join rels. Because we now use create_projection_path() in some cases where we formerly used apply_projection_to_path(), this changes the ordering of columns in some queries generated by postgres_fdw. Update regression outputs accordingly. Patch by me, reviewed by Amit Kapila and by Ashutosh Bapat. Other fixes for this problem (substantially different from this version) were reviewed by Dilip Kumar, Amit Khandekar, and Marina Polyakova. Discussion: http://postgr.es/m/CAMkU=1ycXNipvhWuweUVpKuyu6SpNjF=yHWu4c4US5JgVGxtZQ@mail.gmail.com
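A hedged sketch of the kind of query this helps (function, table, and cost are invented): an expensive, parallel-safe function in the target list is now costed as if evaluated in the workers, making a Gather plan more attractive.

```sql
-- Hypothetical: an expensive but parallel-safe function.
CREATE FUNCTION expensive_fn(x int) RETURNS int
    LANGUAGE sql IMMUTABLE PARALLEL SAFE COST 10000
AS 'SELECT x * 2';

-- With this commit, the cost of evaluating expensive_fn() in the
-- target list counts toward the benefit of parallelism.
EXPLAIN SELECT expensive_fn(id) FROM some_large_table;
```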
-
Robert Haas authored
Don't call generate_gather_paths for the topmost scan/join relation when it is initially populated with paths. Instead, do the work in grouping_planner. By itself, this gains nothing; in fact it loses slightly because we end up calling set_cheapest() for the topmost scan/join rel twice rather than once. However, it paves the way for a future commit which will postpone generate_gather_paths for the topmost scan/join relation even further, allowing more accurate costing of parallel paths. Amit Kapila and Robert Haas. Earlier versions of this patch (which differed substantially) were reviewed by Dilip Kumar, Amit Khandekar, Marina Polyakova, and Ashutosh Bapat.
-
Robert Haas authored
We sometimes insert a ProjectionPath into a plan tree when projection is not strictly required. The existing code already arranges to avoid emitting a Result node when the ProjectionPath's subpath can perform the projection itself, but previously it didn't consider the possibility that the parent node might not actually require the projection to be performed at all. Skipping projection when it's not required can not only avoid Result nodes that aren't needed, but also avoid losing the "physical tlist" optimization unnecessarily. Patch by me, reviewed by Amit Kapila. Discussion: http://postgr.es/m/CA+TgmoakT5gmahbPWGqrR2nAdFOMAOnOXYoWHRdVfGWs34t6_A@mail.gmail.com
-
Bruce Momjian authored
Reported-by: Michael Paquier, Abhijit Menon-Sen Discussion: https://postgr.es/m/20180305045854.GB2266@paquier.xyz Author: Michael Paquier, Abhijit Menon-Sen, me
-
Bruce Momjian authored
Reported-by: Thomas Munro, Justin Pryzby Discussion: https://postgr.es/m/CAEepm=1_682z-09DNHj4GkCJAqWK-D6h9Oq5ea84T1oqq1-Utg@mail.gmail.com
-
Magnus Hagander authored
Author: Alexander Korotkov <a.korotkov@postgrespro.ru>
-
Magnus Hagander authored
Author: Daniel Gustafsson <daniel@yesql.se>
-
Tom Lane authored
Just noticed that these were quite redundant, since we're holding the page address in a local variable anyway, and we have pin on the buffer throughout. Also improve a comment.
-
Tom Lane authored
FreeSpaceMapVacuumRange has the same effect, is more efficient if many pages are involved, and makes fewer assumptions about how it's used. Notably, Claudio Freire pointed out that UpdateFreeSpaceMap could fail if the specified freespace value isn't the maximum possible. This isn't a problem for the single existing user, but the function represents an attractive nuisance IMO, because it's named as though it were a general-purpose update function and its limitations are undocumented. In any case we don't need multiple ways to get the same result. In passing, do some code review and cleanup in RelationAddExtraBlocks. In particular, I see no excuse for it to omit the PageIsNew safety check that's done in the mainline extension path in RelationGetBufferForTuple. Discussion: https://postgr.es/m/CAGTBQpYR0uJCNTt3M5GOzBRHo+-GccNO1nCaQ8yEJmZKSW5q1A@mail.gmail.com
-
Bruce Momjian authored
Reported-by: Tels Discussion: https://postgr.es/m/e66e05bc55f5ce904e361ad17a3395ae.squirrel@sm.webmail.pair.com
-
Tom Lane authored
VACUUM updates leaf-level FSM entries immediately after cleaning the corresponding heap blocks. fsmpage.c updates the intra-page search trees on the leaf-level FSM pages when this happens, but it does not touch the upper-level FSM pages, so that the released space might not actually be findable by searchers. Previously, updating the upper-level pages happened only at the conclusion of the VACUUM run, in a single FreeSpaceMapVacuum() call. This is bad because the VACUUM might get canceled before ever reaching that point, so that from the point of view of searchers no space has been freed at all, leading to table bloat. We can improve matters by updating the upper pages immediately after each cycle of index-cleaning and heap-cleaning, processing just the FSM pages corresponding to the range of heap blocks we have now fully cleaned. This adds a small amount of extra work, since the FSM pages leading down to each range boundary will be touched twice, but it's pretty negligible compared to everything else going on in a large VACUUM. If there are no indexes, VACUUM doesn't work in cycles but just cleans each heap page on first visit. In that case we just arbitrarily update upper FSM pages after each 8GB of heap. That maintains the goal of not letting all this work slide until the very end, and it doesn't seem worth expending extra complexity on a case that so seldom occurs in practice. In either case, the FSM is fully up to date before any attempt is made to truncate the relation, so that the most likely scenario for VACUUM cancellation no longer results in out-of-date upper FSM pages. When we do successfully truncate, adjusting the FSM to reflect that is now fully handled within FreeSpaceMapTruncateRel. Claudio Freire, reviewed by Masahiko Sawada and Jing Wang, some additional tweaks by me Discussion: https://postgr.es/m/CAGTBQpYR0uJCNTt3M5GOzBRHo+-GccNO1nCaQ8yEJmZKSW5q1A@mail.gmail.com
-
Teodor Sigaev authored
Add explicit casts from scalar jsonb to all numeric and bool types. It would be better to have a cast from scalar jsonb to text too, but there is already a cast from jsonb to text that simply returns the text representation of the json, and there is no way to have two different casts for the same pair of types. Bump catalog version. Author: Anastasia Lubennikova with editing by Nikita Glukhov and me Review by: Aleksander Alekseev, Nikita Glukhov, Darafei Praliaskouski Discussion: https://www.postgresql.org/message-id/flat/0154d35a-24ae-f063-5273-9ffcdf1c7f2e@postgrespro.ru
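For example (the results in comments are what I'd expect from these casts):

```sql
SELECT '1.5'::jsonb::numeric;  -- 1.5
SELECT '2'::jsonb::int4;       -- 2
SELECT 'true'::jsonb::bool;    -- t
-- The pre-existing cast to text still returns the json text
-- representation, quotes included:
SELECT '"foo"'::jsonb::text;   -- "foo"
```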
-
Bruce Momjian authored
Was broken by "jit" link.
-
Magnus Hagander authored
Arthur Zakirov, confirmed by Thomas Munro
-
- 28 Mar, 2018 20 commits
-
-
Peter Eisentraut authored
Previously, committing or aborting inside a cursor loop was prohibited because that would close and remove the cursor. To allow that, automatically convert such cursors to holdable cursors so they survive commits or rollbacks. Portals now have a new state "auto-held", which means they have been converted automatically from the pinned state. An auto-held portal is kept on transaction commit or rollback, but is still removed when returning to the main loop on error. This supports all languages that have cursor loop constructs: PL/pgSQL, PL/Python, PL/Perl. Reviewed-by: Ildus Kurbangaliev <i.kurbangaliev@postgrespro.ru>
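A minimal sketch of the newly allowed pattern (the table name is made up):

```sql
CREATE TABLE progress_log (n int);

DO $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT generate_series(1, 3) AS n LOOP
        INSERT INTO progress_log VALUES (r.n);
        COMMIT;  -- previously an error; the cursor is now auto-held
    END LOOP;
END;
$$;
```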
-
Bruce Momjian authored
Reported-by: atorikoshi Discussion: https://postgr.es/m/b61f2ab9-c0e0-d33d-ce3f-42a228025681@lab.ntt.co.jp Author: atorikoshi
-
Andres Freund authored
Reported-By: Lukas Fittl Author: Andres Freund
-
Andres Freund authored
As promised in earlier commits, this adds documentation about the new build options, the new GUCs, the planner logic when JIT is used, and the benefits of JIT in general. Also adds a more implementation-oriented README. I'm sure we're going to want to expand this further, but I think this is a reasonable start. Author: Andres Freund, with contributions by Thomas Munro Reviewed-By: Thomas Munro Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
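A hedged sketch of the GUCs this documents (the thresholds shown are the shipped defaults, as far as I recall):

```sql
SET jit = on;                          -- allow JIT compilation
SET jit_above_cost = 100000;           -- JIT queries above this planner cost
SET jit_inline_above_cost = 500000;    -- additionally inline functions
SET jit_optimize_above_cost = 500000;  -- additionally run expensive optimizations
```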
-
Andres Freund authored
This just shows a few details about JITing, e.g. how many functions have been JITed, and how long that took. To avoid noise in regression tests with functions sometimes being JITed in --with-llvm builds, disable display when COSTS OFF is specified. Author: Andres Freund Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
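For example (the table name is hypothetical; assumes a --with-llvm build with jit enabled):

```sql
-- EXPLAIN ANALYZE output gains a "JIT:" section with function counts
-- and compile/optimization timings.
EXPLAIN (ANALYZE) SELECT sum(x) FROM big_table;

-- Regression tests can suppress the display:
EXPLAIN (ANALYZE, COSTS OFF) SELECT sum(x) FROM big_table;
```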
-
Andres Freund authored
This provides infrastructure to allow JITed code to inline code implemented in C, e.g. Postgres-internal functions or extension code. This already speeds up long-running queries by allowing the LLVM optimizer to optimize across function boundaries. The optimization currently doesn't reach its full potential because LLVM cannot optimize the FunctionCallInfoData argument fully away, since it's allocated on the heap rather than the stack. Fixing that is beyond what's realistic for v11. To be able to do this, use clang to convert C code to LLVM bitcode, and have LLVM build a summary for it. That bitcode can then be used to inline functions at runtime. For that the bitcode needs to be installed: Postgres bitcode goes into $pkglibdir/bitcode/postgres, and extensions go into equivalent directories. PGXS has been modified so that this happens automatically if Postgres has been compiled with LLVM support. Currently this isn't the fastest inline implementation: modules are reloaded from disk during inlining. That works around an apparent LLVM bug, which triggers a seemingly spurious error in LLVM assertion-enabled builds. Once that is resolved we can remove the superfluous read from disk. Docs will follow in a later commit containing docs for the whole JIT feature. Author: Andres Freund Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
-
Andres Freund authored
When compiling with clang, glibc's definition of isinf() ends up leading to an external libc function call. That's because there was a bug in the builtin in an old gcc version, and clang claims compatibility with that older gcc. This causes clang-built binaries to be measurably slower than gcc-built ones for floating-point-heavy workloads. To fix, simply redirect isinf() to the builtin when compiling with clang and clang confirms it has __builtin_isinf().
-
Fujii Masao authored
The target cluster that was rewound needs to perform recovery from the checkpoint created at failover, which leads it to remove or recreate some files and directories that may have been copied from the source cluster. So pg_rewind can skip synchronizing such files and directories, which reduces the amount of data transferred during a rewind without changing the usefulness of the operation. Author: Michael Paquier Reviewed-by: Anastasia Lubennikova, Stephen Frost and me Discussion: https://postgr.es/m/20180205071022.GA17337@paquier.xyz
-
Fujii Masao authored
After processing the filemap to build the list of chunks that will be fetched from the source to rewind the target server, it is possible that a file which was previously processed is removed from the source. A simple example of such an occurrence is a WAL segment which gets recycled on the target in between. When the filemap is processed, files not categorized as relation files are first truncated to prepare for the full copy which is going to be taken from the source, divided into a set of chunks. However, for a recycled WAL segment, this would result in a segment which has a zero-byte size. With such an empty file, post-rewind recovery thinks that records are saved, but they actually are not, because of the truncation which happened when processing the filemap, resulting in data loss. To fix the problem, make sure that files which are found to have been removed on the source while receiving chunks of them are deleted on the target server as well, for consistency. Back-patch to 9.5 where pg_rewind was added. Author: Tsunakawa Takayuki Reviewed-by: Michael Paquier Reported-by: Tsunakawa Takayuki Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F8DAAA2%40G01JPEXMBYT05
-
Peter Eisentraut authored
So far, a nested CALL or DO in PL/pgSQL would not establish a context where transaction control statements were allowed. This fixes that by handling CALL and DO specially in PL/pgSQL, passing the atomic/nonatomic execution context through and doing the required management around transaction boundaries. Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
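A minimal sketch (table and procedure names are invented):

```sql
CREATE TABLE steps (i int);

CREATE PROCEDURE note_step(i int)
LANGUAGE plpgsql
AS $$
BEGIN
    INSERT INTO steps VALUES (i);
    COMMIT;  -- transaction control inside a nested CALL now works
END;
$$;

DO $$
BEGIN
    CALL note_step(1);
    CALL note_step(2);
END;
$$;
```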
-
Tom Lane authored
tuplesort_gettupleslot() passed back tuples allocated in the tuplesort's own memory context, even when the caller was responsible to free them. This created a double-free hazard, because some callers might destroy the tuplesort object (via tuplesort_end) before trying to clean up the last returned tuple. To avoid this, change the API to specify that the tuple is allocated in the caller's memory context. v10 and HEAD already did things that way, but in 9.5 and 9.6 this is a live bug that can demonstrably cause crashes with some grouping-set usages. In 9.5 and 9.6, this requires doing an extra tuple copy in some cases, which is unfortunate. But the amount of refactoring needed to avoid it seems excessive for a back-patched change, especially since the cases where an extra copy happens are less performance-critical. Likewise change tuplesort_getdatum() to return pass-by-reference Datums in the caller's context not the tuplesort's context. There seem to be no live bugs among its callers, but clearly the same sort of situation could happen in future. For other tuplesort fetch routines, continue to allocate the memory in the tuplesort's context. This is a little inconsistent with what we now do for tuplesort_gettupleslot() and tuplesort_getdatum(), but that's preferable to adding new copy overhead in the back branches where it's clearly unnecessary. These other fetch routines provide the weakest possible guarantees about tuple memory lifespan from v10 on, anyway, so this actually seems more consistent overall. Adjust relevant comments to reflect these API redefinitions. Arguably, we should change the pre-9.5 branches as well, but since there are no known failure cases there, it seems not worth the risk. Peter Geoghegan, per report from Bernd Helmle. Reviewed by Kyotaro Horiguchi; thanks also to Andreas Seltenreich for extracting a self-contained test case. Discussion: https://postgr.es/m/1512661638.9720.34.camel@oopsware.de
-
Simon Riggs authored
Store the GID of a 2PC transaction in commit/abort WAL records when wal_level = logical. This allows logical decoding to send the same GID to subscribers across restarts of logical replication. Also track replica origin replay progress for 2PC. (Edited from patch 0003 in the logical decoding 2PC series.) Authors: Nikhil Sontakke, Stas Kelvich Reviewed-by: Simon Riggs, Andres Freund
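A hedged illustration (table and GID are invented): with wal_level = logical, the GID below is now carried in the commit WAL record, so logical decoding can report it consistently.

```sql
CREATE TABLE accounts (id int);

-- Requires max_prepared_transactions > 0.
BEGIN;
INSERT INTO accounts VALUES (1);
PREPARE TRANSACTION 'txn-2018-001';
COMMIT PREPARED 'txn-2018-001';
```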
-
Peter Eisentraut authored
-
Peter Eisentraut authored
Rewrite one test to avoid a case where some Python versions have output format differences (Decimal('1') vs Decimal("1")).
-
Peter Eisentraut authored
Add a new contrib module jsonb_plpython that provides a transform between jsonb and PL/Python. jsonb values are converted to appropriate Python types such as dicts and lists, and vice versa. Author: Anthony Bykov <a.bykov@postgrespro.ru> Reviewed-by: Aleksander Alekseev <a.alekseev@postgrespro.ru> Reviewed-by: Nikita Glukhov <n.gluhov@postgrespro.ru>
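A minimal usage sketch (the function name is invented; assumes the Python 3 variant is installed):

```sql
CREATE EXTENSION jsonb_plpython3u CASCADE;

-- With the transform, "val" arrives as a Python dict, not a string.
CREATE FUNCTION jsonb_keys(val jsonb) RETURNS text[]
TRANSFORM FOR TYPE jsonb
LANGUAGE plpython3u
AS $$
    return list(val.keys())
$$;

SELECT jsonb_keys('{"a": 1, "b": 2}');  -- {a,b}
```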
-
Andrew Dunstan authored
-
Simon Riggs authored
Resolve build farm failures from c203d6cf, diagnosed by Tom Lane. The output of pg_stat_get_xact_tuples_hot_updated() and friends is not guaranteed to show anything after the transaction completes. Data is flushed to the stats collector slowly, so relying on those functions can cause timing issues.
-
Andres Freund authored
Instead of using memset to set tts_isnull, call the new slot_getmissingattrs(). Also fix a bug (= instead of >=) in the code generation. Normally = is correct, but when repeatedly deforming fields not in a tuple (e.g. deforming up to natts + 1 and then natts + 2) >= is needed. Discussion: https://postgr.es/m/20180328010053.i2qvsuuusst4lgmc@alap3.anarazel.de
-
Andres Freund authored
Given that pg_attribute changed its layout...
-
Andrew Dunstan authored
Currently adding a column to a table with a non-NULL default results in a rewrite of the table. For large tables this can be both expensive and disruptive. This patch removes the need for the rewrite as long as the default value is not volatile. The default expression is evaluated at the time of the ALTER TABLE and the result stored in a new column (attmissingval) in pg_attribute, and a new column (atthasmissing) is set to true. Any existing row when fetched will be supplied with the attmissingval. New rows will have the supplied value or the default and so will never need the attmissingval. Any time the table is rewritten all the atthasmissing and attmissingval settings for the attributes are cleared, as they are no longer needed. The most visible code change from this is in heap_attisnull, which acquires a third TupleDesc argument, allowing it to detect a missing value if there is one. In many cases where it is known that there will not be any (e.g. catalog relations) NULL can be passed for this argument. Andrew Dunstan, heavily modified from an original patch from Serge Rielau. Reviewed by Tom Lane, Andres Freund, Tomas Vondra and David Rowley. Discussion: https://postgr.es/m/31e2e921-7002-4c27-59f5-51f08404c858@2ndQuadrant.com
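A quick sketch of the user-visible effect (the table is hypothetical):

```sql
CREATE TABLE t (id int);
INSERT INTO t SELECT generate_series(1, 1000000);

-- Fast: no table rewrite; the evaluated default is stored in
-- pg_attribute.attmissingval and supplied when old rows are fetched.
ALTER TABLE t ADD COLUMN flag boolean DEFAULT false;

-- A volatile default still forces a table rewrite:
ALTER TABLE t ADD COLUMN created timestamptz DEFAULT clock_timestamp();
```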
-
- 27 Mar, 2018 2 commits
-
-
Tom Lane authored
It seems that all buildfarm members are now using the <stdbool.h> code path, so that none of them report "bool" as a typedef. We still need it to be treated that way, so adjust pgindent to force that whether or not it's in the given list. Also, the recent introduction of LLVM infrastructure has caused the appearance of some typedef names that we definitely *don't* want treated as typedefs, such as "string" and "abs". Extend the existing blacklist to include these. (Additions based on comparing v10's typedefs list to what the buildfarm is currently emitting.) Rearrange the code so that the lists of whitelisted/blacklisted names are a bit easier to find and modify. Andrew Dunstan and Tom Lane Discussion: https://postgr.es/m/28690.1521912334@sss.pgh.pa.us
-
Tom Lane authored
Originally, we treated memory context names as potentially variable in all cases, and therefore always copied them into the context header. Commit 9fa6f00b rethought this a little bit and invented a distinction between fixed and variable names, skipping the copy step for the former. But we can make things both simpler and more useful by instead allowing there to be two parts to a context's identification, a fixed "name" and an optional, variable "ident". The name supplied in the context create call is now required to be a compile-time-constant string in all cases, as it is never copied but just pointed to. The "ident" string, if wanted, is supplied later. This is needed because typically we want the ident to be stored inside the context so that it's cleaned up automatically on context deletion; that means it has to be copied into the context before we can set the pointer. The cost of this approach is basically just an additional pointer field in struct MemoryContextData, which isn't much overhead, and is bought back entirely in the AllocSet case by not needing a headerSize field anymore, since we no longer have to cope with variable header length. In addition, we can simplify the internal interfaces for memory context creation still further, saving a few cycles there. And it's no longer true that a custom identifier disqualifies a context from participating in aset.c's freelist scheme, so possibly there's some win on that end. All the places that were using non-compile-time-constant context names are adjusted to put the variable info into the "ident" instead. This allows more effective identification of those contexts in many cases; for example, subsidiary contexts of relcache entries are now identified by both type (e.g. "index info") and relname, where before you got only one or the other. Contexts associated with PL function cache entries are now identified more fully and uniformly, too. I also arranged for plancache contexts to use the query source string as their identifier. This is basically free for CachedPlanSources, as they contained a copy of that string already. We pay an extra pstrdup to do it for CachedPlans. That could perhaps be avoided, but it would make things more fragile (since the CachedPlanSource is sometimes destroyed first). I suspect future improvements in error reporting will require CachedPlans to have a copy of that string anyway, so it's not clear that it's worth moving mountains to avoid it now. This also changes the APIs for context statistics routines so that the context-specific routines no longer assume that output goes straight to stderr, nor do they know all details of the output format. This is useful immediately to reduce code duplication, and it also allows for external code to do something with stats output that's different from printing to stderr. The reason for pushing this now rather than waiting for v12 is that it rethinks some of the API changes made by commit 9fa6f00b. Seems better for extension authors to endure just one round of API changes not two. Discussion: https://postgr.es/m/CAB=Je-FdtmFZ9y9REHD7VsSrnCkiBhsA4mdsLKSPauwXtQBeNA@mail.gmail.com
-