- 04 Oct, 2017 1 commit
-
-
Tom Lane authored
It wasn't on board with REL_n_n format.
-
- 03 Oct, 2017 3 commits
-
-
Tom Lane authored
Not much to say about this; does what it says on the tin.

However, formerly, if there was a column list then the ANALYZE action was implied; now it must be specified, or you get an error. This is because it would otherwise be a bit unclear what the user meant if some tables have column lists and some don't.

Nathan Bossart, reviewed by Michael Paquier and Masahiko Sawada, with some editorialization by me

Discussion: https://postgr.es/m/E061A8E3-5E3D-494D-94F0-E8A9B312BBFC@amazon.com
-
Tom Lane authored
Commit 597a87cc introduced a latch pointer variable to replace use of a long-lived shared latch in the shared WalRcvData structure. This was not well thought out, because there are now hazards of the pointer variable changing while it's being inspected by another process. This could obviously lead to a core dump in code like

    if (WalRcv->latch)
        SetLatch(WalRcv->latch);

and there's a more remote risk of a torn read, if we have any platforms where reading/writing a pointer is not atomic. An actual problem would occur only if the walreceiver process exits (gracefully) while the startup process is trying to signal it, but that seems well within the realm of possibility.

To fix, treat the pointer variable (not the referenced latch) as being protected by the WalRcv->mutex spinlock. There remains a race condition that we could apply SetLatch to a process latch that no longer belongs to the walreceiver, but I believe that's harmless: at worst it'd cause an extra wakeup of the next process to use that PGPROC structure.

Back-patch to v10 where the faulty code was added.

Discussion: https://postgr.es/m/22735.1507048202@sss.pgh.pa.us
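For illustration, the safe access pattern amounts to something like the minimal sketch below, assuming the WalRcv->mutex and latch fields mentioned above; the function name is invented and this is not the literal patch:

    #include "postgres.h"

    #include "replication/walreceiver.h"
    #include "storage/latch.h"
    #include "storage/spin.h"

    /*
     * Sketch: copy the latch pointer while holding the spinlock, then
     * signal outside the lock.  A stale pointer at worst wakes the next
     * process that inherits the PGPROC, which is harmless.
     */
    static void
    wakeup_walreceiver_sketch(void)
    {
        Latch      *latch;

        SpinLockAcquire(&WalRcv->mutex);
        latch = WalRcv->latch;
        SpinLockRelease(&WalRcv->mutex);

        if (latch != NULL)
            SetLatch(latch);
    }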
-
Alvaro Herrera authored
1. Since commit b1a9bad9 we had pstrdup() inside a spinlock-protected critical section; reported by Andreas Seltenreich. Turn those into strlcpy() to stack-allocated variables instead. Backpatch to 9.6.

2. Since commit 9ed551e0 we had a pfree() uselessly inside a spinlock-protected critical section. Tom Lane noticed in code review. Move down. Backpatch to 9.6.

3. Since commit 64233902 we had GetCurrentTimestamp() (a kernel call) inside a spinlock-protected critical section. Tom Lane noticed in code review. Move it up. Backpatch to 9.2.

4. Since commit 1bb25580 we did elog(PANIC) while holding spinlock. Tom Lane noticed in code review. Release spinlock before dying. Backpatch to 9.2.

Discussion: https://postgr.es/m/87h8vhtgj2.fsf@ansel.ydns.eu
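As a rough illustration of item 1 (struct and field names below are hypothetical): strlcpy() into a stack buffer neither allocates nor can elog(), so it is acceptable while a spinlock is held, unlike pstrdup():

    #include "postgres.h"

    #include "storage/spin.h"

    /* hypothetical shared state protected by a spinlock */
    typedef struct SharedStateSketch
    {
        slock_t     mutex;
        char        name[NAMEDATALEN];
    } SharedStateSketch;

    static void
    copy_name_sketch(SharedStateSketch *shared, char *out, size_t outlen)
    {
        /*
         * strlcpy() into caller-provided stack memory cannot allocate or
         * elog(), so it is safe inside the critical section.
         */
        SpinLockAcquire(&shared->mutex);
        strlcpy(out, shared->name, outlen);
        SpinLockRelease(&shared->mutex);
    }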
-
- 02 Oct, 2017 4 commits
-
-
Peter Eisentraut authored
Document better how to create custom collations and what locale strings ICU accepts. Explain the ICU examples in more detail. Also update the text on the CREATE COLLATION reference page a bit to take ICU more into account.
-
Simon Riggs authored
-
Andres Freund authored
Per buildfarm animal frogmouth.

Brown-Paper-Bagged-By: Andres Freund
-
Andres Freund authored
Per buildfarm animal frogmouth.

Author: Andres Freund
-
- 01 Oct, 2017 8 commits
-
-
Andres Freund authored
All postgres internal usages are replaced; it's just libpq example usages that haven't been converted. External users of libpq can't generally rely on including postgres internal headers.

Note that this includes replacing open-coded byte swapping of 64bit integers (using two 32 bit swaps) with a single 64bit swap.

Where it looked applicable, I have removed netinet/in.h and arpa/inet.h usage, which previously provided the relevant functionality. It's perfectly possible that I missed other reasons for including those; the buildfarm will tell.

Author: Andres Freund
Discussion: https://postgr.es/m/20170927172019.gheidqy6xvlxb325@alap3.anarazel.de
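For the 64-bit case, the change is roughly as sketched below (illustrative only; the helper names are invented, and both variants write a value into a buffer in network byte order):

    #include "postgres.h"

    #include "port/pg_bswap.h"

    /* old open-coded style: two 32-bit swaps */
    static void
    put_uint64_old(char *buf, uint64 value)
    {
        uint32      hi = pg_hton32((uint32) (value >> 32));
        uint32      lo = pg_hton32((uint32) value);

        memcpy(buf, &hi, 4);
        memcpy(buf + 4, &lo, 4);
    }

    /* new style: a single 64-bit swap */
    static void
    put_uint64_new(char *buf, uint64 value)
    {
        uint64      n = pg_hton64(value);

        memcpy(buf, &n, 8);
    }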
-
Andres Freund authored
Discussion: https://postgr.es/m/31674.1506788226@sss.pgh.pa.us
-
Andres Freund authored
Author: Andres Freund
Tested-By: Andrew Dunstan
Discussion: https://postgr.es/m/20170930224424.ud5ilchmclbl5y5n@alap3.anarazel.de
-
Andres Freund authored
Previously that was disallowed out of an abundance of caution. Providing KILL support, however, is helpful to make the 013_crash_restart.pl test portable, and there's no actual issue with allowing it. SIGABRT, which has similar consequences except it also dumps core, was already allowed.

Author: Andres Freund
Discussion: https://postgr.es/m/45d42d41-6145-9be1-7261-84acf6d9e344@2ndQuadrant.com
-
Tom Lane authored
Last(?) round of changes for 10.0.
-
Tom Lane authored
Buildfarm members skink and sungazer have both recently failed this test, with symptoms indicating that the default 3-second timeout isn't quite enough for those very slow systems. There's no reason to be miserly with this timeout, so boost it to 60 seconds. Back-patch to all versions containing this test. That may be overkill, because the failure has only been observed in the v10 branch, but I don't feel like having to revisit this later.
-
Peter Eisentraut authored
This contains all individuals mentioned in the commit messages during PostgreSQL 10 development.

current through babf18579455e85269ad75e1ddb03f34138f77b6

Discussion: https://www.postgresql.org/message-id/flat/54ad0e42-770e-dfe1-123e-bce9361ad452%402ndquadrant.com
-
Heikki Linnakangas authored
If --rate was used to throttle pgbench, it failed to sleep when it had nothing to do, leading to a busy-wait with 100% CPU usage. This bug was introduced in the refactoring in v10. Before that, sleep() was called with a timeout, even when there were no file descriptors to wait for.

Reported by Jeff Janes, patch by Fabien COELHO. Backpatch to v10.

Discussion: https://www.postgresql.org/message-id/CAMkU%3D1x5hoX0pLLKPRnXCy0T8uHoDvXdq%2B7kAM9eoC9_z72ucw%40mail.gmail.com
-
- 30 Sep, 2017 5 commits
-
-
Tom Lane authored
During a binary upgrade, all type OIDs are supposed to be assigned by pg_dump based on their values in the old cluster. But now that domains have arrays, there's nothing to base the arrays' type OIDs on, if we're upgrading from a pre-v11 cluster. Make pg_dump search for an unused type OID to use for this purpose.

Per buildfarm.

Discussion: https://postgr.es/m/E1dyLlE-0002gT-H5@gemulon.postgresql.org
-
Tom Lane authored
Allowing arrays with a domain type as their element type was left un-done in the original domain patch, but not for any very good reason. This omission leads to such surprising results as array_agg() not working on a domain column, because the parser can't identify a suitable output type for the polymorphic aggregate.

In order to fix this, first clean up the APIs of coerce_to_domain() and some internal functions in parse_coerce.c so that we consistently pass around a CoercionContext along with CoercionForm. Previously, we sometimes passed an "isExplicit" boolean flag instead, which is strictly less information; and coerce_to_domain() didn't even get that, but instead had to reverse-engineer isExplicit from CoercionForm. That's contrary to the documentation in primnodes.h that says that CoercionForm only affects display and not semantics. I don't think this change fixes any live bugs, but it makes things more consistent. The main reason for doing it though is that now build_coercion_expression() receives ccontext, which it needs in order to be able to recursively invoke coerce_to_target_type().

Next, reimplement ArrayCoerceExpr so that the node does not directly know any details of what has to be done to the individual array elements while performing the array coercion. Instead, the per-element processing is represented by a sub-expression whose input is a source array element and whose output is a target array element. This simplifies life in parse_coerce.c, because it can build that sub-expression by a recursive invocation of coerce_to_target_type(). The executor now handles the per-element processing as a compiled expression instead of hard-wired code. The main advantage of this is that we can use a single ArrayCoerceExpr to handle as many as three successive steps per element: base type conversion, typmod coercion, and domain constraint checking. The old code used two stacked ArrayCoerceExprs to handle type + typmod coercion, which was pretty inefficient, and adding yet another array deconstruction to do domain constraint checking seemed very unappetizing.

In the case where we just need a single, very simple coercion function, doing this straightforwardly leads to a noticeable increase in the per-array-element runtime cost. Hence, add an additional shortcut evalfunc in execExprInterp.c that skips unnecessary overhead for that specific form of expression. The runtime speed of simple cases is within 1% or so of where it was before, while cases that previously required two levels of array processing are significantly faster.

Finally, create an implicit array type for every domain type, as we do for base types, enums, etc. Everything except the array-coercion case seems to just work without further effort.

Tom Lane, reviewed by Andrew Dunstan

Discussion: https://postgr.es/m/9852.1499791473@sss.pgh.pa.us
-
Andres Freund authored
Reported-By: Peter Geoghegan
-
Andres Freund authored
Reported-By: Thomas Munro and Jesper Pedersen
-
Andres Freund authored
Upcoming patches are going to address performance issues that involve slow system provided ntohs/htons etc. To address that expand pg_bswap.h to provide pg_ntoh{16,32,64}, pg_hton{16,32,64} and optimize their respective implementations by using compiler intrinsics for gcc compatible compilers and msvc. Fall back to manual implementations using shifts etc otherwise.

Additionally remove multiple evaluation hazards from the existing BSWAP32/64 macros, by replacing them with inline functions when necessary. In the course of that the naming scheme is changed to pg_bswap16/32/64.

Author: Andres Freund
Discussion: https://postgr.es/m/20170927172019.gheidqy6xvlxb325@alap3.anarazel.de
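A simplified sketch of the intrinsics-with-fallback approach described above (names here are made up; the real header differs in detail):

    #include <stdint.h>

    /* intrinsic where available, portable shift-based fallback otherwise */
    #if defined(__GNUC__) || defined(__clang__)
    #define my_bswap64(x) __builtin_bswap64(x)
    #elif defined(_MSC_VER)
    #include <stdlib.h>
    #define my_bswap64(x) _byteswap_uint64(x)
    #else
    static inline uint64_t
    my_bswap64(uint64_t x)
    {
        return ((x << 56) & UINT64_C(0xff00000000000000)) |
               ((x << 40) & UINT64_C(0x00ff000000000000)) |
               ((x << 24) & UINT64_C(0x0000ff0000000000)) |
               ((x << 8)  & UINT64_C(0x000000ff00000000)) |
               ((x >> 8)  & UINT64_C(0x00000000ff000000)) |
               ((x >> 24) & UINT64_C(0x0000000000ff0000)) |
               ((x >> 40) & UINT64_C(0x000000000000ff00)) |
               ((x >> 56) & UINT64_C(0x00000000000000ff));
    }
    #endif

    /* pg_hton64/pg_ntoh64 are then the identity on big-endian machines
     * and the byte swap on little-endian ones. */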
-
- 29 Sep, 2017 10 commits
-
-
Peter Eisentraut authored
This is more idiomatic style and available as of Python 2.4, which is our minimum.
-
Tom Lane authored
get_rel_oids used to not take any relation locks at all, but that stopped being a good idea with commit 3c3bb993, which inserted a syscache lookup into the function. A concurrent DROP TABLE could now produce "cache lookup failed", which we don't want to have happen in normal operation. The best solution seems to be to transiently take a lock on the relation named by the RangeVar (which also makes the result of RangeVarGetRelid a lot less spongy). But we shouldn't hold the lock beyond this function, because we don't want VACUUM to lock more than one table at a time. (That would not be a big problem right now, but it will become one after the pending feature patch to allow multiple tables to be named in VACUUM.)

In passing, adjust vacuum_rel and analyze_rel to document that we don't trust the passed RangeVar to be accurate, and allow the RangeVar to possibly be NULL --- which it is anyway for a whole-database VACUUM, though we accidentally didn't crash for that case. The passed RangeVar is in fact inaccurate when dealing with a child partition, as of v10, and it has been wrong for a whole long time in the case of vacuum_rel() recursing to a TOAST table. None of these things present visible bugs up to now, because the passed RangeVar is in fact only consulted for autovacuum logging, and in that particular context it's always accurate because autovacuum doesn't let vacuum.c expand partitions nor recurse to toast tables. Still, this seems like trouble waiting to happen, so let's nail the door at least partly shut. (Further cleanup is planned, in HEAD only, as part of the pending feature patch.)

Fix some sadly inaccurate/obsolete comments too. Back-patch to v10.

Michael Paquier and Tom Lane

Discussion: https://postgr.es/m/25023.1506107590@sss.pgh.pa.us
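In code terms, the transient-lock approach is roughly the following sketch (the function name is invented and this is not the literal patch):

    #include "postgres.h"

    #include "catalog/namespace.h"
    #include "storage/lmgr.h"

    /*
     * Sketch: resolve the RangeVar while holding a transient lock so a
     * concurrent DROP TABLE cannot invalidate the subsequent syscache
     * lookups, then release the lock before returning.
     */
    static Oid
    resolve_vacuum_target_sketch(RangeVar *rv)
    {
        Oid         relid;

        relid = RangeVarGetRelid(rv, AccessShareLock, false);

        /* ... syscache lookups on relid are safe here ... */

        /* don't hold the lock beyond this function */
        UnlockRelationOid(relid, AccessShareLock);

        return relid;
    }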
-
Robert Haas authored
If \d rather than \d+ is used, then verbose is false and we don't ask the server for the partition constraint; so we shouldn't print it in that case either.

Maksim Milyutin, per a report from Jesper Pedersen. Reviewed by Jesper Pedersen and Amit Langote.

Discussion: http://postgr.es/m/2af5fc4d-7bcc-daa8-4fe6-86274bea363c@redhat.com
-
Robert Haas authored
This beats the old behavior of busy-waiting hands down. Oversight in commit 12788ae4.

Report by Pavan Deolasee. Patch by Fabien Coelho. Reviewed by Pavan Deolasee.

Discussion: http://postgr.es/m/CABOikdPhfXTypckMC1Ux6Ko+hKBWwUBA=EXsvamXYSg8M9J94w@mail.gmail.com
-
Peter Eisentraut authored
For \d sequencename, the psql code just did SELECT * FROM sequencename to get the information to display, but this does not contain much interesting information anymore in PostgreSQL 10, because the metadata has been moved to a separate system catalog.

This patch creates a newly designed sequence display that is not merely an extension of the general relation/table display as it was previously.

Example:

PostgreSQL 9.6:

    => \d foobar
               Sequence "public.foobar"
        Column     |  Type   |        Value
    ---------------+---------+---------------------
     sequence_name | name    | foobar
     last_value    | bigint  | 1
     start_value   | bigint  | 1
     increment_by  | bigint  | 1
     max_value     | bigint  | 9223372036854775807
     min_value     | bigint  | 1
     cache_value   | bigint  | 1
     log_cnt       | bigint  | 0
     is_cycled     | boolean | f
     is_called     | boolean | f

PostgreSQL 10 before this change:

    => \d foobar
       Sequence "public.foobar"
       Column   |  Type   | Value
    ------------+---------+-------
     last_value | bigint  | 1
     log_cnt    | bigint  | 0
     is_called  | boolean | f

New:

    => \d foobar
                               Sequence "public.foobar"
      Type  | Start | Minimum |       Maximum       | Increment | Cycles? | Cache
    --------+-------+---------+---------------------+-----------+---------+-------
     bigint |     1 |       1 | 9223372036854775807 |         1 | no      |     1

Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
-
Tom Lane authored
Avoid the coding pattern "*op->resvalue = f();", as some compilers think that requires them to evaluate "op->resvalue" before the function call. Unless there are lots of free registers, this can lead to a useless register spill and reload across the call.

I changed all the cases like this in ExecInterpExpr(), but didn't bother in the out-of-line opcode eval subroutines, since those are presumably not as performance-critical.

Discussion: https://postgr.es/m/2508.1506630094@sss.pgh.pa.us
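Schematically, the pattern change looks like this (placeholder struct and function names, not the real executor code):

    #include "postgres.h"

    /* placeholder for the real expression-step struct */
    typedef struct ExprEvalStepSketch
    {
        Datum      *resvalue;
    } ExprEvalStepSketch;

    extern Datum slow_function(Datum arg);   /* stand-in for an opcode's work */

    /* Before: some compilers evaluate op->resvalue before calling
     * slow_function(), spilling and reloading a register across the call. */
    static void
    eval_old(ExprEvalStepSketch *op, Datum arg)
    {
        *op->resvalue = slow_function(arg);
    }

    /* After: compute into a local, then store through the pointer. */
    static void
    eval_new(ExprEvalStepSketch *op, Datum arg)
    {
        Datum       d;

        d = slow_function(arg);
        *op->resvalue = d;
    }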
-
Peter Eisentraut authored
Add bgw_type field to background worker structure. It is intended to be set to the same value for all workers of the same type, so they can be grouped in pg_stat_activity, for example.

The backend_type column in pg_stat_activity now shows bgw_type for a background worker. The ps listing also no longer calls out that a process is a background worker but just shows the bgw_type. That way, being a background worker is more of an implementation detail now that is not shown to the user. However, most log messages still refer to 'background worker "%s"'; otherwise constructing sensible and translatable log messages would become tricky.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
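For extension authors, filling in the new field looks roughly like the sketch below (the extension, library, and function names are invented; the rest follows the usual background-worker registration API):

    #include "postgres.h"

    #include "postmaster/bgworker.h"

    /*
     * Sketch: register a worker that sets bgw_type so that all workers of
     * this kind show the same backend_type in pg_stat_activity.
     */
    void
    register_my_workers(void)
    {
        BackgroundWorker worker;

        memset(&worker, 0, sizeof(worker));
        worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
        worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
        worker.bgw_restart_time = BGW_NEVER_RESTART;
        snprintf(worker.bgw_name, BGW_MAXLEN, "my extension worker #1");
        snprintf(worker.bgw_type, BGW_MAXLEN, "my extension worker");
        snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_extension");
        snprintf(worker.bgw_function_name, BGW_MAXLEN, "my_worker_main");

        RegisterBackgroundWorker(&worker);
    }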
-
Robert Haas authored
At the time replacement_sort_tuples was introduced, there were still cases where replacement selection sort noticeably outperformed using quicksort even for the first run. However, those cases seem to have evaporated as a result of further improvements made since that time (and perhaps also advances in CPU technology). So remove replacement selection and the controlling GUC entirely. This makes tuplesort.c noticeably simpler and probably paves the way for further optimizations someone might want to do later.

Peter Geoghegan, with review and testing by Tomas Vondra and me.

Discussion: https://postgr.es/m/CAH2-WzmmNjG_K0R9nqYwMq3zjyJJK+hCbiZYNGhAy-Zyjs64GQ@mail.gmail.com
-
Peter Eisentraut authored
Also make overriding the title easier. That helps tell where the report came from and label different variants of a report.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
-
Peter Eisentraut authored
By just running lcov on the produced .gcda data files, we don't account for source files that are not touched by tests at all. To fix that, run lcov --initial to create a base line info file with all zero counters, and merge that with the actual counters when creating the final report.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
-
- 28 Sep, 2017 4 commits
-
-
Peter Eisentraut authored
For XML compatibility, replace marked sections <![IGNORE[ ]]> with comments <!-- -->. In some cases it seemed better to remove the ignored text altogether, and in one case the text should not have been ignored.
-
Alvaro Herrera authored
Vacuum calls page-level HOT prune to remove dead HOT tuples before doing liveness checks (HeapTupleSatisfiesVacuum) on the remaining tuples. But concurrent transaction commit/abort may turn DEAD some of the HOT tuples that survived the prune, before HeapTupleSatisfiesVacuum tests them. This happens to activate the code that decides to freeze the tuple ... which resuscitates it, duplicating data. (This is especially bad if there are any unique constraints, because those are now internally violated due to the duplicate entries, though you won't know until you try to REINDEX or dump/restore the table.)

One possible fix would be to simply skip doing anything to the tuple, and hope that the next HOT prune would remove it. But there is a problem: if the tuple is older than the freeze horizon, this would leave an unfrozen XID behind, and if no HOT prune happens to clean it up before the containing pg_clog segment is truncated away, it'd later cause an error when the XID is looked up.

Fix the problem by having the tuple freezing routines cope with the situation: don't freeze the tuple (and keep it dead). In the cases that the XID is older than the freeze age, set the HEAP_XMAX_COMMITTED flag so that there is no need to look up the XID in pg_clog later on.

An isolation test is included, authored by Michael Paquier, loosely based on Daniel Wood's original reproducer. It only tests one particular scenario, though, not all the possible ways for this problem to surface; it'd be good to have a more reliable way to test this more fully, but it'd require more work. In message https://postgr.es/m/20170911140103.5akxptyrwgpc25bw@alvherre.pgsql I outlined another test case (more closely matching Dan Wood's) that exposed a few more ways for the problem to occur.

Backpatch all the way back to 9.3, where this problem was introduced by multixact juggling. In branches 9.3 and 9.4, this includes a backpatch of commit e5ff9fefcd50 (of 9.5 era), since the original is not correctable without matching the coding pattern in 9.5 up.

Reported-by: Daniel Wood
Diagnosed-by: Daniel Wood
Reviewed-by: Yi Wen Wong, Michaël Paquier
Discussion: https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
-
Peter Eisentraut authored
Call lcov with --no-external option to exclude external files (for example, system headers with inline functions) from output.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
-
Peter Eisentraut authored
This is the way lcov was intended to be used. It is much faster and more robust and makes the makefiles simpler than running it in each subdirectory.

The previous coding ran gcov before lcov, but that is useless because lcov/geninfo call gcov internally and use that information. Moreover, this led to complications and failures during parallel make.

This separates the two targets: You either use "make coverage" to get textual output from gcov or "make coverage-html" to get an HTML report via lcov. (Using both is still problematic because they write the same output files.)

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
-
- 27 Sep, 2017 5 commits
-
-
Tom Lane authored
float8_numeric() and float4_numeric() failed to consider the possibility that the input is an IEEE infinity. The results depended on the platform-specific behavior of sprintf(): on most platforms you'd get something like

    ERROR: invalid input syntax for type numeric: "inf"

but at least on Windows it's possible for the conversion to succeed and deliver a finite value (typically 1), due to a nonstandard output format from sprintf and lack of syntax error checking in these functions.

Since our numeric type lacks the concept of infinity, a suitable conversion is impossible; the best thing to do is throw an explicit error before letting sprintf do its thing.

While at it, let's use snprintf not sprintf. Overrunning the buffer should be impossible if sprintf does what it's supposed to, but this is cheap insurance against a stack smash if it doesn't.

Problem reported by Taiki Kondo. Patch by me based on fix suggestion from KaiGai Kohei. Back-patch to all supported branches.

Discussion: https://postgr.es/m/12A9442FBAE80D4E8953883E0B84E088C8C7A2@BPXM01GP.gisp.nec.co.jp
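The guard amounts to roughly the following sketch (illustrative only; the exact error wording and code layout may differ from the actual fix):

    #include "postgres.h"

    #include <float.h>
    #include <math.h>

    /*
     * Sketch of the float8 -> numeric guard described above: reject
     * infinities explicitly, then format with snprintf rather than sprintf.
     */
    static void
    float8_to_numeric_buf_sketch(float8 val, char *buf, size_t buflen)
    {
        if (isinf(val))
            ereport(ERROR,
                    (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                     errmsg("cannot convert infinity to numeric")));

        snprintf(buf, buflen, "%.*g", DBL_DIG, val);
        /* ... buf is then parsed by the normal numeric input code ... */
    }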
-
Tom Lane authored
This reverts commit 15bc038f, along with the followon commits 1635e80d and 984c9207 that tried to clean up the problems exposed by bug #14825. The result was incomplete because it failed to address parallel-query requirements. With the 10.0 release so close upon us, now does not seem like the time to be adding more code to fix that.

I hope we can un-revert this code and add the missing parallel query support during the v11 cycle.

Back-patch to v10.

Discussion: https://postgr.es/m/20170922185904.1448.16585@wrigleys.postgresql.org
-
Peter Eisentraut authored
The changes in 639928c9 turned out to require Perl 5.9.3, which is newer than our minimum required version. So revert back to the old code for the normal case and only use the new variant when both coverage and vpath are used. As the minimum Perl version moves forward, we can drop the old code sometime.
-
Dean Rasheed authored
Provide a correct description of how multiple policies are combined, clarify when SELECT permissions are required, mention SELECT FOR UPDATE/SHARE, and do some other more minor tidying up.

Reviewed by Stephen Frost

Discussion: https://postgr.es/m/CAEZATCVrxyYbOFU8XbGHicz%2BmXPYzw%3DhfNL2XTphDt-53TomQQ%40mail.gmail.com

Back-patch to 9.5.
-
Peter Eisentraut authored
Run xsubpp with the -output option instead of redirecting stdout. That ensures that the #line directives in the output file point to the right place in a vpath build. This in turn fixes an error in coverage builds that it can't find the source files.

Refactor the makefile rules while we're here.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
-