- 01 Feb, 2019 3 commits
-
-
Alvaro Herrera authored
An upcoming patch which changes how inheritance planning works requires adding a new function that does a similar job to set_append_rel_size() but for child target relations. To save it from having to duplicate the qual building code, move that to a separate function first. Here we also change things so that we never attempt to build security quals after detecting some const false child quals. We needlessly used to do this just before we marked the child relation as a dummy rel. In passing, this also moves the partition pruned check to before the qual building code. We don't need to build the child quals before we check if the partition has been pruned. Author: David Rowley Discussion: https://postgr.es/m/CAKJS1f_i+jrrD+if8qC7KPuTAAWsd=dtepgY_7u=P86GDEwm7A@mail.gmail.com
-
Michael Paquier authored
A timeout of 5s is used when waiting for WAL to become available at recovery so that the startup process is able to react promptly if a trigger file shows up. However, this missed the fact that the startup process also relies on the timeout to periodically check the status of any active WAL receiver. Discussion: https://postgr.es/m/20190131070956.GE13429@paquier.xyz
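For illustration, the kind of loop this describes might look like the following sketch (assuming a backend context; the wait-event value, the flag set and the two checks are placeholders, not the actual xlog.c code):

    #include "postgres.h"
    #include "miscadmin.h"
    #include "storage/latch.h"

    /* Sketch only: the real loop lives in the startup process. */
    static void
    wait_for_wal_sketch(void)
    {
        for (;;)
        {
            /* Sleep at most 5s so that both checks below happen promptly. */
            (void) WaitLatch(MyLatch,
                             WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                             5000L,
                             0 /* wait_event_info: placeholder */);
            ResetLatch(MyLatch);

            /* 1. React if a trigger file has shown up. */
            /* 2. Re-check the status of any active WAL receiver. */
        }
    }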
-
Michael Paquier authored
When logging the replica identity of a deleted tuple, XLOG_HEAP_DELETE records include references to the old tuple. Its data is stored in an intermediate variable used to register this information for the WAL record, but this variable goes out of scope on the stack before the record actually gets inserted. Spotted by clang's AddressSanitizer. Author: Stas Kelvich Discussion: https://postgr.es/m/085C8825-AD86-4E93-AF80-E26CDF03D1EA@postgrespro.ru Backpatch-through: 9.4
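A generic C illustration of the lifetime rule the fix restores - data handed to a "register now, use later" interface must outlive the deferred use. Every name below is invented for the example; none of it is PostgreSQL code:

    #include <stdio.h>

    static const int *registered;                  /* consumed later, like WAL record data */

    static void register_for_later(const int *p) { registered = p; }
    static void insert_record(void)              { printf("%d\n", *registered); }

    int
    main(void)
    {
        int     old_tuple_info = 42;               /* must outlive insert_record() */

        register_for_later(&old_tuple_info);
        /*
         * Had old_tuple_info been declared in an inner block that closed
         * before this point, insert_record() would dereference a dangling
         * stack pointer -- the kind of bug AddressSanitizer catches.
         */
        insert_record();
        return 0;
    }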
-
- 31 Jan, 2019 3 commits
-
-
Peter Eisentraut authored
Add columns client_serial and issuer_dn to pg_stat_ssl. These allow uniquely identifying the client certificate. Rename the existing column clientdn to client_dn, to make the naming more consistent and easier to read. Discussion: https://www.postgresql.org/message-id/flat/398754d8-6bb5-c5cf-e7b8-22e5f0983caf@2ndquadrant.com/
-
Michael Paquier authored
These two new options can be used to further narrow the selection of relations to vacuum or analyze based on the age of, respectively, their transaction ID or multixact ID, so that it is possible to prioritize tables and prevent wraparound of one or the other. Combined with --table, a subset of tables can be targeted as potential processing candidates. Author: Nathan Bossart Reviewed-by: Michael Paquier, Masahiko Sawada Discussion: https://postgr.es/m/FFE5373C-E26A-495B-B5C8-911EC4A41C5E@amazon.com
-
Tom Lane authored
We can't allow these pseudo-types to be used as table column types, because storing an anonymous record value in a table would result in data that couldn't be understood by other sessions. However, it seems like there's no harm in allowing the case in a column definition list that's specifying what a function-returning-record returns. The data involved is all local to the current session, so we should be just as able to resolve its actual tuple type as we are for the function-returning-record's top-level tuple output. Elvis Pranskevichus, with cosmetic changes by me Discussion: https://postgr.es/m/11038447.kQ5A9Uj5xi@hammer.magicstack.net
-
- 30 Jan, 2019 7 commits
-
-
Peter Eisentraut authored
Logging the PostgreSQL version on startup is useful for two reasons: There is a clear marker in the log file that a new postmaster is beginning, and it's useful for tracking the server version across startup while upgrading. Author: Christoph Berg <christoph.berg@credativ.de> Discussion: https://www.postgresql.org/message-id/flat/20181121144611.GJ15795@msg.credativ.de/
-
Peter Eisentraut authored
When the syslogger was originally added (bdf8ef69), nothing was normally logged before the point where it was started. But since f9dfa5c9, the creation of sockets causes messages of level LOG to be written routinely, so those don't go to the syslogger now. To improve that, arrange the sequence in PostmasterMain() slightly so that the syslogger is started early enough to capture those messages. Discussion: https://www.postgresql.org/message-id/d5d50936-20b9-85f1-06bc-94a01c5040c1%402ndquadrant.com Reviewed-by: Christoph Berg <christoph.berg@credativ.de>
-
Michael Meskes authored
The function called can result in an out-of-memory error that was subsequently disregarded. Instead it should set the appropriate SQL error variables so that the error can be caught by whichever WHENEVER statement is defined.
-
Michael Meskes authored
during development. Test cases themselves should not hang or segfault.
-
Michael Meskes authored
-
Peter Eisentraut authored
The bug was that determining which columns are part of the replica identity index using RelationGetIndexAttrBitmap() would run eval_const_expressions() on index expressions and predicates across all indexes of the table, which in turn might require a snapshot, but there wasn't one set, so it crashed. There were actually two separate bugs, one on the publisher and one on the subscriber. To trigger the bug, a table that is part of a publication or subscription needs to have an index with a predicate or expression that lends itself to constant expression simplification. The fix is to avoid the constant expression simplification in RelationGetIndexAttrBitmap(), so that it becomes safe to call in these contexts. The constant expression simplification comes from the calls to RelationGetIndexExpressions()/RelationGetIndexPredicate() via BuildIndexInfo(). But RelationGetIndexAttrBitmap() calling BuildIndexInfo() is overkill. The latter just takes pg_index catalog information and packs it into the IndexInfo structure, which the former then just unpacks again and throws away. We can do this directly with less overhead and skip the troublesome calls to eval_const_expressions(). This also removes the awkward cross-dependency between relcache.c and index.c. Bug: #15114 Reported-by: Петър Славов <pet.slavov@gmail.com> Reviewed-by: Noah Misch <noah@leadboat.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://www.postgresql.org/message-id/flat/152110589574.1223.17983600132321618383@wrigleys.postgresql.org/
-
Michael Paquier authored
If a user specifies a relation name which cannot be processed, then the backend can warn directly about what is wrong with it. This fixes an oversight from e0c2933a. Author: Nathan Bossart Discussion: https://postgr.es/m/32049A78-C429-4742-AEC1-941C9ABDE7B8@amazon.com
-
- 29 Jan, 2019 11 commits
-
-
Tom Lane authored
The old name of this file was never a very good indication of what it was for. Now that there's also access/relation.h, we have a potential confusion hazard as well, so let's rename it to something more apropos. Per discussion, "pathnodes.h" is reasonable, since a good fraction of the file is Path node definitions. While at it, tweak a couple of other headers that were gratuitously importing relation.h into modules that don't need it. Discussion: https://postgr.es/m/7719.1548688728@sss.pgh.pa.us
-
Tom Lane authored
Create a new header optimizer/optimizer.h, which exposes just the planner functions that can be used "at arm's length", without need to access Paths or the other planner-internal data structures defined in nodes/relation.h. This is intended to provide the whole planner API seen by most of the rest of the system; although FDWs still need to use additional stuff, and more thought is also needed about just what selfuncs.c should rely on. The main point of doing this now is to limit the amount of new #include baggage that will be needed by "planner support functions", which I expect to introduce later, and which will be in relevant datatype modules rather than anywhere near the planner. This commit just moves relevant declarations into optimizer.h from other header files (a couple of which go away because everything got moved), and adjusts #include lists to match. There's further cleanup that could be done if we want to decide that some stuff being exposed by optimizer.h doesn't belong in the planner at all, but I'll leave that for another day. Discussion: https://postgr.es/m/11460.1548706639@sss.pgh.pa.us
-
Tom Lane authored
Move a few very simple node-creation and node-type-testing functions from the planner's clauses.c to nodes/makefuncs and nodes/nodeFuncs. There's nothing planner-specific about them, as evidenced by the number of other places that were using them. While at it, rename and_clause() etc to is_andclause() etc, to clarify that they are node-type-testing functions not node-creation functions. And use "static inline" implementations for the shortest ones. Also, modify flatten_join_alias_vars() and some subsidiary functions to take a Query not a PlannerInfo to define the join structure that Vars should be translated according to. They were only using the "parse" field of the PlannerInfo anyway, so this just requires removing one level of indirection. The advantage is that now parse_agg.c can use flatten_join_alias_vars() without the horrid kluge of creating an incomplete PlannerInfo, which will allow that file to be decoupled from relation.h in a subsequent patch. Discussion: https://postgr.es/m/11460.1548706639@sss.pgh.pa.us
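As an illustration of what such a static-inline node-type test looks like, here is a sketch approximating one of the helpers (assuming the usual backend headers; the real definition lives in nodes/nodeFuncs.h and may differ in detail):

    #include "postgres.h"
    #include "nodes/primnodes.h"

    /* Sketch: true iff the given clause is a boolean AND expression node. */
    static inline bool
    is_andclause(const void *clause)
    {
        return IsA(clause, BoolExpr) &&
               ((const BoolExpr *) clause)->boolop == AND_EXPR;
    }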
-
Peter Eisentraut authored
Return null if there is no client certificate. This is how it has always been documented, but in reality it returned an empty string. Reviewed-by: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> Discussion: https://www.postgresql.org/message-id/flat/398754d8-6bb5-c5cf-e7b8-22e5f0983caf@2ndquadrant.com/
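From a client's point of view the distinction matters because SQL NULL and an empty string are reported differently by libpq. A minimal sketch; the connection string and the queried column are chosen purely for illustration:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn   *conn = PQconnectdb("");   /* connection settings from environment/defaults */
        PGresult *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* Illustrative query: any certificate-derived column would do. */
        res = PQexec(conn, "SELECT client_dn FROM pg_stat_ssl WHERE pid = pg_backend_pid()");
        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
        {
            if (PQgetisnull(res, 0, 0))
                puts("no client certificate (SQL NULL, per the fix above)");
            else
                printf("client_dn: %s\n", PQgetvalue(res, 0, 0));
        }

        PQclear(res);
        PQfinish(conn);
        return 0;
    }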
-
Peter Eisentraut authored
Someone running these tests could have key or certificate files in their ~/.postgresql/, which would interfere with the tests. The way to override that is to specify sslcert=invalid and/or sslrootcert=invalid if no actual certificate is used for a particular test. Document that and fix up one test that had a risk of failing in these circumstances. Discussion: https://www.postgresql.org/message-id/flat/398754d8-6bb5-c5cf-e7b8-22e5f0983caf@2ndquadrant.com/
-
Magnus Hagander authored
Author: Alex Kliukin Reviewed-By: Michael Paquier, Magnus Hagander
-
Etsuro Fujita authored
estimate_path_cost_size() failed to re-use cached costs when the cached startup/total cost was 0, so it calculated the costs redundantly. This is an oversight in commit aa09cd24, but apply the patch to HEAD only because there are no reports of actual trouble from it. Author: Etsuro Fujita Discussion: https://postgr.es/m/5C4AF3F3.4060409%40lab.ntt.co.jp
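A generic illustration of the caching pitfall involved (not the postgres_fdw code): since 0 is a legitimate cost, "is this cached?" needs its own flag instead of being inferred from a zero value. All names below are made up for the example:

    #include <stdbool.h>

    /* Illustrative cache entry; field names are invented. */
    typedef struct CostCache
    {
        bool    costs_valid;        /* explicit flag instead of "cost != 0" */
        double  startup_cost;
        double  total_cost;
    } CostCache;

    static void
    estimate_costs(CostCache *cache, double *startup, double *total)
    {
        if (!cache->costs_valid)            /* recompute only when truly uncached */
        {
            cache->startup_cost = 0.0;      /* a zero cost is perfectly valid */
            cache->total_cost = 123.45;
            cache->costs_valid = true;
        }
        *startup = cache->startup_cost;
        *total = cache->total_cost;
    }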
-
Michael Paquier authored
vacuumdb would use a catalog query only when the command caller does not define a list of tables. Switching to a catalog query in all cases brings two advantages: - Relation existence checks can happen before running any VACUUM or ANALYZE query. Before this change, if multiple relations were defined using --table, the utility would fail only after processing the first ones defined, which may take a long time depending on the size of the relations. This adds checks for the relation names, and does nothing, at least yet, for the attribute names. - More filtering options can become available for the utility user. These options, which may be introduced later on, are based on the relation size or the relation age, and need to be made available even if the user does not list any specific table with --table. Author: Nathan Bossart Reviewed-by: Michael Paquier, Masahiko Sawada Discussion: https://postgr.es/m/FFE5373C-E26A-495B-B5C8-911EC4A41C5E@amazon.com
-
Andres Freund authored
Previously llvmjit.h #error'ed when USE_LLVM was not defined, to prevent it from being included from code lacking #ifdef USE_LLVM guards - but that stopped being all that useful once, during the development of JIT support, LLVM-related code was moved into a separately compiled .so. Having that #error means cpluspluscheck doesn't work when LLVM support isn't enabled, which isn't great. Similarly, add USE_LLVM guards to llvmjit_emit.h, and additionally make sure it compiles standalone. Per complaint from Tom Lane. Author: Andres Freund Discussion: https://postgr.es/m/19808.1548692361@sss.pgh.pa.us Backpatch: 11, where JIT support was added
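A sketch of the guard pattern this refers to, with placeholder contents; only the #ifdef USE_LLVM shape is the point:

    /* llvmjit_emit.h -- sketch, contents abbreviated */
    #ifndef LLVMJIT_EMIT_H
    #define LLVMJIT_EMIT_H

    #ifdef USE_LLVM                /* expose LLVM declarations only when built with LLVM */
    #include <llvm-c/Core.h>
    /* ... inline helpers emitting LLVM IR would go here ... */
    #endif                         /* USE_LLVM */

    #endif                         /* LLVMJIT_EMIT_H */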
-
Andres Freund authored
This reverts commits fc02e672 and e6799d5a. Parts of the buildfarm error out with errors like: ERROR: page %u of relation "%s" should be empty but is not. So far I/we do not know why, and fc02e672 didn't fix the issue. As I cannot reproduce the issue locally, it seems best to get the buildfarm green again, and reproduce the issue without time pressure.
-
- 28 Jan, 2019 11 commits
-
-
Andres Freund authored
In e6799d5a I removed vacuumlazy.c trickery around re-checking whether a page is actually empty after acquiring an extension lock on the relation, because the page is not PageInit()ed anymore, and entries in the FSM ought not to lead to user-visible errors. As reported by various buildfarm animals, that is not correct, given the way the code currently stands: If vacuum processes a page that's just been newly added by either RelationGetBufferForTuple() or RelationAddExtraBlocks(), it could add that page to the FSM, and the page could be reused by other backends before those two functions check whether the newly added page is actually new. That's a relatively narrow race, but several buildfarm machines appear to be able to hit it. While it seems wrong that the FSM, given its lack of durability and approximative nature, can trigger errors like this, that seems better fixed in a separate commit. Especially given that a good portion of the buildfarm is red, and this is just re-introducing logic that existed a few hours ago. Author: Andres Freund Discussion: https://postgr.es/m/20190128222259.zhi7ovzgtkft6em6@alap3.anarazel.de
-
Tomas Vondra authored
In batching mode, COPY was using the same (per-tuple) memory context for allocations with longer lifetime. This was confusing but harmless, until commit 31f38174 added the COPY FROM ... WHERE feature, introducing a risk of memory leak. The "per-tuple" memory context was reset only when starting a new batch, but as rows may be filtered out by the WHERE clause, that may not happen at all. The WHERE clause, however, has to be evaluated for all rows before filtering them out. This commit separates the per-tuple and per-batch contexts, removing the ambiguity. Expressions (both defaults and the WHERE clause) are evaluated in the per-tuple context, while tuples are formed in the batch context. This allows resetting the contexts at appropriate times. The main complexity is related to partitioning, in which case we need to reset the batch context after forming the tuple (which happens before routing to the leaf partition). Instead of switching between two contexts as before, we simply copy the last tuple aside, reset the context and then copy the tuple back. The performance impact is negligible, and juggling two contexts is not free either. Discussion: https://www.postgresql.org/message-id/flat/CALAY4q_DdpWDuB5-Zyi-oTtO2uSk8pmy+dupiRe3AvAc++1imA@mail.gmail.com
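A minimal sketch of the two-context pattern described, using the backend memory-context API; the function, the variable names and the loop body are illustrative, not the actual copy.c code:

    #include "postgres.h"
    #include "utils/memutils.h"

    /* Sketch only: one short-lived context per row, one per batch. */
    static void
    copy_context_sketch(void)
    {
        MemoryContext oldcxt = CurrentMemoryContext;
        MemoryContext batchCxt = AllocSetContextCreate(CurrentMemoryContext,
                                                       "COPY batch (sketch)",
                                                       ALLOCSET_DEFAULT_SIZES);
        MemoryContext perTupleCxt = AllocSetContextCreate(CurrentMemoryContext,
                                                          "COPY per-tuple (sketch)",
                                                          ALLOCSET_DEFAULT_SIZES);
        bool        more_rows = true;           /* placeholder for "read next row" */

        while (more_rows)
        {
            MemoryContextReset(perTupleCxt);    /* freed for every row, kept or filtered */
            MemoryContextSwitchTo(perTupleCxt);
            /* evaluate column defaults and the WHERE clause here */

            MemoryContextSwitchTo(batchCxt);
            /* form the tuple to be queued; batchCxt is reset once per batch */

            more_rows = false;                  /* keep the sketch finite */
        }

        MemoryContextSwitchTo(oldcxt);
        MemoryContextDelete(perTupleCxt);
        MemoryContextDelete(batchCxt);
    }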
-
Tom Lane authored
The fact that "SELECT expression" has no base relations has long been a thorn in the side of the planner. It makes it hard to flatten a sub-query that looks like that, or is a trivial VALUES() item, because the planner generally uses relid sets to identify sub-relations, and such a sub-query would have an empty relid set if we flattened it. prepjointree.c contains some baroque logic that works around this in certain special cases --- but there is a much better answer. We can replace an empty FROM clause with a dummy RTE that acts like a table of one row and no columns, and then there are no such corner cases to worry about. Instead we need some logic to get rid of useless dummy RTEs, but that's simpler and covers more cases than what was there before. For really trivial cases, where the query is just "SELECT expression" and nothing else, there's a hazard that adding the extra RTE makes for a noticeable slowdown; even though it's not much processing, there's not that much for the planner to do overall. However testing says that the penalty is very small, close to the noise level. In more complex queries, this is able to find optimizations that we could not find before. The new RTE type is called RTE_RESULT, since the "scan" plan type it gives rise to is a Result node (the same plan we produced for a "SELECT expression" query before). To avoid confusion, rename the old ResultPath path type to GroupResultPath, reflecting that it's only used in degenerate grouping cases where we know the query produces just one grouped row. (It wouldn't work to unify the two cases, because there are different rules about where the associated quals live during query_planner.) Note: although this touches readfuncs.c, I don't think a catversion bump is required, because the added case can't occur in stored rules, only plans. Patch by me, reviewed by David Rowley and Mark Dilger Discussion: https://postgr.es/m/15944.1521127664@sss.pgh.pa.us
-
Andres Freund authored
There's no reason not to install these, and jit.h can be useful for users of e.g. planner hooks. Author: Donald Dong Reviewed-By: Andres Freund Discussion: https://postgr.es/m/296D405F-7F95-49F1-B565-389D6AA78505@csumb.edu Backpatch: 11-, where JIT compilation was introduced
-
Andres Freund authored
Previously we initialized pages when bulk extending in RelationAddExtraBlocks(). That has a major disadvantage: It ties RelationAddExtraBlocks() to heap, as other types of storage are likely to need different amounts of special space and have different amounts of free space (previously determined by PageGetHeapFreeSpace()). That we're relying on initializing pages, but not WAL logging the initialization, also means the risk of getting "WARNING: relation \"%s\" page %u is uninitialized --- fixing" style warnings in vacuums after crashes/immediate shutdowns is considerably higher. Those warnings sound much more serious than they are. Fix those two issues together by not initializing pages in RelationAddExtraBlocks() (but continue to do so in RelationGetBufferForTuple(), which is linked much more closely to heap), and accepting uninitialized pages as normal in vacuumlazy.c. When vacuumlazy encounters an empty page it now adds it to the FSM, but does nothing else. We chose to not issue a debug message, much less a warning, in that case - it seems rarely useful, and quite likely to scare people unnecessarily. For now empty pages aren't added to the VM, because standbys would not re-discover such pages after a promotion. In contrast to other sources of empty pages, there are no corresponding WAL records triggering FSM updates during replay. Author: Andres Freund Reviewed-By: Tom Lane Discussion: https://postgr.es/m/20181219083945.6khtgm36mivonhva@alap3.anarazel.de
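A hedged sketch of the empty-page handling described, assuming a backend context; the helper is invented for illustration and the real vacuumlazy.c logic is more involved:

    #include "postgres.h"
    #include "storage/bufmgr.h"
    #include "storage/bufpage.h"

    /*
     * Sketch: on an empty page, lazy vacuum just measures the free space
     * (to be recorded in the FSM by the caller) and moves on -- no WARNING,
     * no visibility-map bit.
     */
    static Size
    empty_page_freespace_sketch(Buffer buf, Page page)
    {
        Size    freespace = 0;

        if (PageIsEmpty(page))
            freespace = PageGetHeapFreeSpace(page);

        UnlockReleaseBuffer(buf);
        return freespace;           /* caller records this in the FSM */
    }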
-
Peter Eisentraut authored
This was used for the old CLUSTER syntax and has been unused since e55c8e36.
-
Michael Paquier authored
The completion here consists of attribute numbers, which is specific to this grammar. Author: Tatsuro Yamada Reviewed-by: Peter Eisentraut Discussion: https://postgr.es/m/b58a78fa-81ce-186f-f0bc-c1aa93c46cbf@lab.ntt.co.jp
-
Amit Kapila authored
This reverts commit ac88d296.
-
Amit Kapila authored
Previously, all heaps had FSMs. For very small tables, this means that the FSM took up more space than the heap did. This is wasteful, so now we refrain from creating the FSM for heaps with 4 pages or fewer. If the last known target block has insufficient space, we still try to insert into some other page before giving up and extending the relation, since doing otherwise leads to table bloat. Testing showed that trying every page penalized performance slightly, so we compromise and try every other page. This way, we visit at most two pages. Any pages with wasted free space become visible at the next relation extension, so we still control table bloat. As a bonus, directly attempting one or two pages can even be faster than consulting the FSM would have been. Once the FSM is created for a heap, we don't remove it even if somebody deletes all the rows from the corresponding relation; we don't think that would be a useful optimization, as it is quite likely that the relation will again grow to the same size. Author: John Naylor, with design inputs and some code contribution by Amit Kapila Reviewed-by: Amit Kapila Tested-by: Mithun C Y Discussion: https://www.postgresql.org/message-id/CAJVSVGWvB13PzpbLEecFuGFc5V2fsO736BsdTakPiPAcdMM5tQ@mail.gmail.com
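A conceptual sketch of the policy, assuming a backend context; the constant and helper below are invented for illustration and merely mirror the four-page cutoff described, while the actual probing and FSM code is considerably more involved:

    #include "postgres.h"
    #include "storage/bufmgr.h"
    #include "utils/rel.h"

    /* Four heap pages or fewer: skip the FSM (mirrors the cutoff above). */
    #define SMALL_HEAP_FSM_THRESHOLD    4

    static bool
    heap_should_have_fsm(Relation rel)
    {
        BlockNumber nblocks = RelationGetNumberOfBlocks(rel);

        /*
         * Small heaps get no FSM: insertion instead probes every other page
         * directly (at most two pages) before extending the relation.
         */
        return nblocks > SMALL_HEAP_FSM_THRESHOLD;
    }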
-
Amit Kapila authored
exist. In commits b9d01fe2 and 3908473c, we added code that allowed the creation of files during mdopen even if they didn't exist, in bootstrap mode. The later commit obviates the need for it. This was harmless code till now, but with an upcoming feature where we don't create the FSM for small tables, it would needlessly create FSM files. Author: John Naylor Reviewed-by: Amit Kapila Discussion: https://www.postgresql.org/message-id/CAJVSVGWvB13PzpbLEecFuGFc5V2fsO736BsdTakPiPAcdMM5tQ@mail.gmail.com https://www.postgresql.org/message-id/CAA4eK1KsET6sotf+rzOTQfb83pzVEzVhbQi1nxGFYVstVWXUGw@mail.gmail.com
-
- 27 Jan, 2019 2 commits
-
-
Michael Paquier authored
vacuumdb generates SQL queries by itself to run ANALYZE or VACUUM on the backend, but we never actually checked the query patterns generated when column lists are defined. Author: Michael Paquier Reviewed-by: Nathan Bossart Discussion: https://postgr.es/m/FFE5373C-E26A-495B-B5C8-911EC4A41C5E@amazon.com
-
Tom Lane authored
Given the right timing, psql could emit "connection to server was lost" rather than one of the other messages that this test script checked for. It looks like commit 4247db62 may have made this more likely, but I don't really believe it was impossible before then. Rather than stress about it, just add that spelling as one of the crash-successfully-detected cases. Discussion: https://postgr.es/m/19344.1548554028@sss.pgh.pa.us
-
- 26 Jan, 2019 3 commits
-
-
Andres Freund authored
Before this change FunctionCallInfoData, the struct that arguments etc. for V1 function calls are stored in, always had space for FUNC_MAX_ARGS/100 arguments, storing datums and their nullness in two arrays. For nearly every function call 100 arguments is far more than needed, therefore wasting memory. Arg and argnull being two separate arrays also guarantees that to access a single argument, two cachelines have to be touched. Change the layout so there's a single variable-length array with pairs of value / isnull. That drastically reduces memory consumption for most function calls (on x86-64 a two-argument function now uses 64 bytes, previously 936 bytes), and makes it very likely that an argument value and its nullness are on the same cacheline. Arguments are stored in a new NullableDatum struct, which, due to padding, needs more memory per argument than before. But as usually far fewer arguments are stored, and individual arguments are cheaper to access, that's still a clear win. It's likely that there are other places where conversion to NullableDatum arrays would make sense, e.g. TupleTableSlots, but that's for another commit. Because the function call information is now variable-length, allocations have to take the number of arguments into account. For heap allocations that can be done with SizeForFunctionCallInfoData(); for on-stack allocations there's a new LOCAL_FCINFO(name, nargs) macro that helps to allocate an appropriately sized and aligned variable. Some places with stack-allocated function call information don't know the number of arguments at compile time, and currently variably sized stack allocations aren't allowed in postgres. Therefore allow for FUNC_MAX_ARGS space in these cases. They're not that common, so for now that seems acceptable. Because of the need to allocate FunctionCallInfo of the appropriate size, older extensions may need to update their code. To avoid subtle breakages, the FunctionCallInfoData struct has been renamed to FunctionCallInfoBaseData. Most code only references FunctionCallInfo, so that shouldn't cause much collateral damage. This change is also a prerequisite for more efficient expression JIT compilation (by allocating the function call information on the stack, allowing LLVM to optimize it away); previously the size of the call information caused problems inside LLVM's optimizer. Author: Andres Freund Reviewed-By: Tom Lane Discussion: https://postgr.es/m/20180605172952.x34m5uz6ju6enaem@alap3.anarazel.de
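A hedged sketch of the new calling convention against PostgreSQL 12-era fmgr.h; the wrapper function itself is invented for illustration, and passing a NULL flinfo is only acceptable for functions that don't use it:

    #include "postgres.h"
    #include "fmgr.h"

    /* Illustrative helper: call a two-argument V1 function with non-null args. */
    static Datum
    call_two_arg_function(PGFunction fn, Datum arg0, Datum arg1)
    {
        LOCAL_FCINFO(fcinfo, 2);        /* on-stack allocation sized for exactly 2 args */

        InitFunctionCallInfoData(*fcinfo, NULL, 2, InvalidOid, NULL, NULL);

        /* Each argument is a NullableDatum: value and isnull sit side by side. */
        fcinfo->args[0].value = arg0;
        fcinfo->args[0].isnull = false;
        fcinfo->args[1].value = arg1;
        fcinfo->args[1].isnull = false;

        return (*fn) (fcinfo);
    }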
-
Tom Lane authored
Previously, \g would successfully execute the COPY command, but the target specification if any was ignored, so that the data was always dumped to the regular query output target. This seems like a clear bug, so let's not just fix it but back-patch it. While at it, adjust the documentation for \copy to recommend "COPY ... TO STDOUT \g foo" as a plausible alternative. Back-patch to 9.5. The problem exists much further back, but the code associated with \g was refactored enough in 9.5 that we'd need a significantly different patch for 9.4, and it doesn't seem worth the trouble. Daniel Vérité, reviewed by Fabien Coelho Discussion: https://postgr.es/m/15dadc39-e050-4d46-956b-dcc4ed098753@manitou-mail.org
-
Peter Eisentraut authored
In some locales, letters sort before numbers, so change the object naming to not depend on that. Introduced by commit 7c079d74.
-