- 02 Dec, 2016 8 commits
-
-
Robert Haas authored
C doesn't have any sort of built-in understanding of a pointer relative to some arbitrary base address, but dynamic shared memory segments can be mapped at different addresses in different processes, so any sort of shared data structure stored within a dynamic shared memory segment can't use absolute pointers. We could use something like Size to represent a relative pointer, but then the compiler provides no type-checking. Use stupid macro tricks to get some type-checking. Patch originally by me. Concept suggested by Andres Freund. Recently resubmitted as part of Thomas Munro's work on dynamic shared memory allocation. Discussion: 20131205144434.GG12398@alap2.anarazel.de Discussion: CAEepm=1z5WLuNoJ80PaCvz6EtG9dN0j-KuHcHtU6QEfcPP5-qA@mail.gmail.com
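The trick is easier to see in a standalone sketch. Everything below is illustrative rather than the in-tree API: the macro names are invented, the type check leans on the GNU __typeof__ extension, and the malloc'd buffer merely stands in for a mapped DSM segment.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

/* A "relative pointer" is just an offset, but the union drags the pointed-to
 * type along so the compiler can complain about mismatches. */
#define rel_ptr(type)  union { type *typecheck_only; size_t off; }

#define rel_ptr_store(base, rp, val) \
    do { \
        (rp).typecheck_only = (val);   /* compile-time type check only */ \
        (rp).off = (size_t) ((char *) (val) - (char *) (base)); \
    } while (0)

#define rel_ptr_access(base, rp) \
    ((__typeof__((rp).typecheck_only)) ((char *) (base) + (rp).off))

typedef struct { int x; } Widget;

int
main(void)
{
    char   *segment = malloc(1024);     /* stands in for a DSM mapping */
    Widget *w = (Widget *) segment;

    w->x = 42;

    rel_ptr(Widget) wp;
    rel_ptr_store(segment, wp, w);      /* a non-Widget pointer here draws a diagnostic */
    printf("%d\n", rel_ptr_access(segment, wp)->x);   /* prints 42 */

    free(segment);
    return 0;
}
```

With -Werror the pointer-type mismatch becomes a hard error, which is all the type-checking the macros aim for.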
-
Robert Haas authored
There was always documentation of the GUC that controlled what the limit actually was, but previously the documentation of the field itself made no mention of that limit. Ian Barwick
-
Alvaro Herrera authored
Commit 597a87cc neglected to update some comments; fix. Report and patch by Thomas Munro. Reviewed by Petr Jelínek.
-
Robert Haas authored
Increase the default value of the existing max_worker_processes GUC from 8 to 16, and add a new max_parallel_workers GUC with a maximum of 8. This way, even if the maximum amount of parallel query is happening, there is still room for background workers that do other things, as originally envisioned when max_worker_processes was added. Julien Rouhaud, reviewed by Amit Kapila and revised by me.
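A quick way to see the two settings side by side; the values in the comments are the defaults as described in this commit, not necessarily those of later releases.

```sql
SHOW max_worker_processes;   -- 16 after this commit
SHOW max_parallel_workers;   -- 8; caps the workers used by parallel query
```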
-
Alvaro Herrera authored
Author: Petr Jelínek
-
Alvaro Herrera authored
Our documentation states that our maximum field size is 1 GB, and that our maximum row size is 1.6 TB. However, while this might be attainable in theory with enough contortions, it is not workable in practice; for starters, pg_dump fails to dump tables containing rows larger than 1 GB, even if individual columns are well below the limit; and even if one does manage to manufacture a dump file containing a row that large, the server refuses to load it anyway.

This commit enables dumping and reloading of such tuples, provided two conditions are met:

1. no single column is larger than 1 GB (in output size -- for bytea this includes the formatting overhead)
2. the whole row is not larger than 2 GB

There are three related changes to enable this:

a. StringInfo's API now has two additional functions that allow creating a "long" string that grows beyond the typical 1 GB limit. ABI compatibility is maintained. We still limit these strings to 2 GB, though, for reasons explained below.
b. COPY now uses long StringInfos, so that pg_dump doesn't choke trying to emit rows longer than 1 GB.
c. heap_form_tuple now uses the MCXT_ALLOW_HUGE flag in its allocation for the input tuple, which means that large tuples are accepted on input. Note that at this point we do not apply any further limit to the input tuple size.

The main reason to limit to 2 GB is that the FE/BE protocol uses 32-bit length words to describe each row; and because the documentation is ambiguous on its signedness and libpq does consider it signed, we cannot use the highest-order bit. Additionally, the StringInfo API uses "int" (which is 4 bytes wide on most platforms) in many places, so we'd need to change that API too in order to improve matters, which would have lots of fallout.

Backpatch to 9.5, which is the oldest that has MemoryContextAllocExtended, a necessary piece of infrastructure. We could apply this to 9.4 with very minimal additional effort, but any further than that would require backpatching "huge" allocations too.

This is the largest set of changes we could find that can be back-patched without breaking compatibility with existing systems. Fixing a bigger set of problems (for example, dumping tuples bigger than 2 GB, or dumping fields bigger than 1 GB) would require changing the FE/BE protocol and/or changing the StringInfo API in an ABI-incompatible way, neither of which would be back-patchable.

Authors: Daniel Vérité, Álvaro Herrera Reviewed by: Tomas Vondra Discussion: https://postgr.es/m/20160229183023.GA286012@alvherre.pgsql
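For context, a minimal backend-side sketch of the "huge allocation" escape hatch this change leans on; the flag appears in the headers as MCXT_ALLOC_HUGE, and the helper name below is invented.

```c
#include "postgres.h"
#include "utils/memutils.h"

/* Sketch only: request a buffer past the normal ~1 GB MaxAllocSize cap,
 * which a plain palloc() would refuse. */
static void *
alloc_big_row_buffer(Size len)
{
    return MemoryContextAllocExtended(CurrentMemoryContext, len,
                                      MCXT_ALLOC_HUGE);
}
```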
-
Peter Eisentraut authored
The whole walreceiver API is now wrapped into a struct, like most of our other loadable module APIs. The libpq connection is no longer a global variable in libpqwalreceiver. Instead, it is encapsulated into a struct that is passed around the functions. This allows multiple walreceivers to run at the same time. Add some rudimentary support for logical replication connections to libpqwalreceiver. These changes are mostly cosmetic and are going to be useful for the future logical replication patches. From: Petr Jelinek <petr@2ndquadrant.com>
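Very roughly, the shape of the change is the usual "struct of callbacks plus an opaque handle" pattern; the names below are illustrative, not the actual libpqwalreceiver API.

```c
#include <stdbool.h>

/* Illustrative sketch only -- not the real walreceiver API. */
typedef struct WalRcvConnHandle WalRcvConnHandle;   /* opaque; wraps the libpq connection */

typedef struct WalRcvFunctions
{
    WalRcvConnHandle *(*walrcv_connect)(const char *conninfo, bool logical);
    int   (*walrcv_receive)(WalRcvConnHandle *handle, char **buffer);
    void  (*walrcv_send)(WalRcvConnHandle *handle, const char *buffer, int len);
    void  (*walrcv_disconnect)(WalRcvConnHandle *handle);
} WalRcvFunctions;
```

Because every call carries its own handle instead of touching a global PGconn, several walreceivers can be active at the same time.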
-
Peter Eisentraut authored
Replace use of poll()/select() by WaitLatchOrSocket(), which is more portable and flexible. Also change walreceiver to use its procLatch instead of a custom latch. From: Petr Jelinek <petr@2ndquadrant.com>
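The resulting wait loop follows the usual latch pattern, roughly as in this fragment; it assumes backend context, and WaitLatchOrSocket()'s exact argument list has varied across releases, so treat the call signature as an approximation.

```c
#include "postgres.h"
#include "miscadmin.h"
#include "storage/latch.h"

/* Sketch of the latch + socket wait pattern; not the literal walreceiver code. */
static void
wait_for_wal_data(pgsocket conn_socket)
{
    for (;;)
    {
        int rc = WaitLatchOrSocket(MyLatch,
                                   WL_LATCH_SET | WL_SOCKET_READABLE | WL_TIMEOUT,
                                   conn_socket,
                                   10000L);    /* timeout in ms */

        if (rc & WL_LATCH_SET)
        {
            ResetLatch(MyLatch);
            /* react to interrupts, config reloads, shutdown requests, ... */
        }

        if (rc & WL_SOCKET_READABLE)
            return;             /* data is waiting on the replication connection */

        /* WL_TIMEOUT: a natural spot to send a keepalive/status message */
    }
}
```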
-
- 01 Dec, 2016 7 commits
-
-
Peter Eisentraut authored
This works like function_with_argtypes, but aggregates allow slightly different arguments. Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
-
Peter Eisentraut authored
It was apparently added for use by GRANT/REVOKE, but move it closer to where other function signature related things are kept. Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
-
Peter Eisentraut authored
Instead of sometimes referring to a function signature like func_name func_args, use the existing function_with_argtypes symbol, which combines the two. Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
-
Robert Haas authored
Commit 274bb2b3 caused PQhost() to return the value of the hostaddr parameter rather than the relevant host when the latter parameter was specified. That's wrong. Commit 9a1d0af4 then amplified the damage by using PQhost() in more places, so that the SSL test suite started failing. Report by Andreas Karlsson; patch by me.
-
Michael Meskes authored
-
Andres Freund authored
So far the hashtable stored representative tuples in the form of its input slot, with all columns in the hashtable that are not needed (i.e. not grouped upon or functionally dependent) set to NULL. That's good for saving memory, but it turns out that having tuples full of NULLs isn't free. slot_deform_tuple is faster if there's no NULL bitmap even if no NULLs are encountered, and skipping over leading NULLs isn't free. So compute a separate tuple descriptor that only contains the needed columns. As columns have already been moved in/out of the slot for the hashtable, that does not imply additional per-row overhead. Author: Andres Freund Reviewed-By: Heikki Linnakangas Discussion: https://postgr.es/m/20161103110721.h5i5t5saxfk5eeik@alap3.anarazel.de
-
Andres Freund authored
Previously we did an ExecProject() for each individual aggregate argument. That turned out to be a performance bottleneck in queries with multiple aggregates. Doing all the argument computations in one ExecProject() is quite a bit cheaper because ExecProject's fastpath can do the work at once in a relatively tight loop, and because it can get all the required columns with a single slot_getsomeattrs() call and save some other redundant setup costs. Author: Andres Freund Reviewed-By: Heikki Linnakangas Discussion: https://postgr.es/m/20161103110721.h5i5t5saxfk5eeik@alap3.anarazel.de
-
- 30 Nov, 2016 10 commits
-
-
Robert Haas authored
Previously, the right to split a bucket was represented by a heavyweight lock on the page number of the primary bucket page. Unfortunately, this meant that every scan needed to take a heavyweight lock on that bucket also, which was bad for concurrency. Instead, use a cleanup lock on the primary bucket page to indicate the right to begin a split, so that scans only need to retain a pin on that page, which they would have to acquire anyway, and which is also much cheaper. In addition to reducing the locking cost, this also avoids locking out scans and inserts for the entire lifetime of the split: while the new bucket is being populated with copies of the appropriate tuples from the old bucket, scans and inserts can happen in parallel. There are minor concurrency improvements for vacuum operations as well, though the situation there is still far from ideal. This patch also removes the unworldly assumption that a split will never be interrupted. With the new code, a split is done in a series of small steps and the system can pick up where it left off if it is interrupted prior to completion. While this patch does not itself add write-ahead logging for hash indexes, it is clearly a necessary first step, since one of the things that could interrupt a split is the removal of electrical power from the machine performing it. Amit Kapila. I wrote the original design on which this patch is based, and did a good bit of work on the comments and README through multiple rounds of review, but all of the code is Amit's. Also reviewed by Jesper Pedersen, Jeff Janes, and others. Discussion: http://postgr.es/m/CAA4eK1LfzcZYxLoXS874Ad0+S-ZM60U9bwcyiUZx9mHZ-KCWhw@mail.gmail.com
-
Tom Lane authored
Per bug #14441 from Mark Pether, the documentation could be misread, mainly because some of the examples failed to show what happens with a multicharacter "characters to trim" string. Also, while the text description in most of these entries was fairly clear that the "characters" argument is a set of characters not a substring to match, some of them used variant wording that was a bit less clear. trim() itself suffered from both deficiencies and was thus pretty misinterpretable. Also fix failure to explain which of LEADING/TRAILING/BOTH is the default. Discussion: https://postgr.es/m/20161130011710.6539.53657@wrigleys.postgresql.org
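The sort of example the rewritten docs call for, showing that the second argument is a set of characters rather than a substring, and that BOTH is the default:

```sql
-- 'xyz' is a set of characters to strip, not a literal substring
SELECT trim(both 'xyz' from 'yxTomxx');   -- returns 'Tom'
SELECT trim('xyz' from 'yxTomxx');        -- same result: BOTH is implied
```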
-
Heikki Linnakangas authored
The loops were a bit difficult to understand, due to breaking out of them early. Also fix things that perlcritic complained about. Daniel Gustafsson
-
Peter Eisentraut authored
Testing has shown that it is no longer correct. From: Tsunakawa, Takayuki <tsunakawa.takay@jp.fujitsu.com> Reviewed-by: amul sul <sulamul@gmail.com> Discussion: http://www.postgresql.org/message-id/flat/0A3221C70F24FB45833433255569204D1F5EE995@G01JPEXMBYT05/
-
Peter Eisentraut authored
From: Tsunakawa, Takayuki <tsunakawa.takay@jp.fujitsu.com>
-
Heikki Linnakangas authored
Generate EUC_CN mappings from gb-18030-2000.xml, because GB2312.TXT is no longer available. Get UHC from windows-949-2000.xml, which is more up-to-date. Plus tons more small changes. With these changes, the perl scripts faithfully produce the *.map files we have in the repository, from the external source files. In passing, fix the Makefile to also download CP932.TXT and CP950.TXT. Based on patches by Kyotaro Horiguchi, reviewed by Daniel Gustafsson. Discussion: https://postgr.es/m/08e7892a-d55c-eefe-76e6-7910bc8dd1f3@iki.fi
-
Heikki Linnakangas authored
The common style is to pad to 4 digits. Running the current perl scripts to generate these map files would override this change, but the next commit will rewrite the perl scripts to produce this style. I'm doing this as a separate commit, to make it more clear what non-cosmetic changes the next commit makes to the map files. Discussion: https://postgr.es/m/08e7892a-d55c-eefe-76e6-7910bc8dd1f3@iki.fi
-
Heikki Linnakangas authored
PostgreSQL treats characters with < 0x80 leading byte as plain ASCII, and they are not even passed to the conversion routines. There is no point in having them in the conversion tables. Everything in the tables was a direct ASCII-ASCII mapping, except for two entries:

* SHIFT_JIS_2004 code point 0x5C (backslash in ASCII) was mapped to the Unicode YEN SIGN character.
* Unicode 0x5C (backslash again) was mapped to "REVERSE SOLIDUS" in SHIFT_JIS_2004.

These mappings never had any effect, so there's no functional change from removing them. Discussion: https://postgr.es/m/08e7892a-d55c-eefe-76e6-7910bc8dd1f3@iki.fi
-
Heikki Linnakangas authored
pgp-pubkey-DISABLED test has been unused since 2006, when support for built-in bignum math was added (commit 1abf76e8). pgp-encrypt-DISABLED has been unused forever, AFAICS. Also remove a couple of unused error codes.
-
Tom Lane authored
consider_parallel_nestloop passed the wrong jointype down to its subroutines for JOIN_UNIQUE_INNER cases (it should pass JOIN_INNER), and it thought that it could pass paths other than innerrel->cheapest_total_path to create_unique_path, which create_unique_path is not on board with. These bugs would lead to assertion failures or other errors, suggesting that this code path hasn't been tested much. hash_inner_and_outer's code for parallel join effectively treated both JOIN_UNIQUE_OUTER and JOIN_UNIQUE_INNER the same as JOIN_INNER (for different reasons :-(), leading to incorrect plans that treated a semijoin as if it were a plain join. Michael Day submitted a test case demonstrating that hash_inner_and_outer failed for JOIN_UNIQUE_OUTER, and I found the other cases through code review. Report: https://postgr.es/m/D0E8A029-D1AC-42E8-979A-5DE4A77E4413@rcmail.com
-
- 29 Nov, 2016 10 commits
-
-
Tom Lane authored
If we don't have any MCV statistics for the inner relation, and we don't trust its numdistinct estimate either, eqjoinsel_semi falls back to a very conservative estimate (that 50% of the outer rows have matches). This is particularly problematic if the inner relation is completely empty, since then even an explicit ANALYZE won't produce any pg_statistic entries, so there's no way to budge the planner off the bad estimate. We'd produce a better estimate in such cases if we used the nd2/nd1 selectivity heuristic, so an easy fix is to treat the nd2 estimate as non-default if we derive it from clamping to the inner rel's rowcount estimate. This won't fix every related case (mainly because the rowcount estimate might be larger than DEFAULT_NUM_DISTINCT), but it seems like a sane extension of the existing logic, so let's apply the change in HEAD and see if anyone complains. Per bug #14438 from Nikolay Nikitin. Report: https://postgr.es/m/20161128182113.6527.58926@wrigleys.postgresql.org Discussion: https://postgr.es/m/31089.1480384713@sss.pgh.pa.us
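A hedged sketch of the kind of case this targets (table and column names invented):

```sql
CREATE TABLE big AS SELECT g AS id FROM generate_series(1, 100000) g;
CREATE TABLE empty_inner (id int);
ANALYZE big;
ANALYZE empty_inner;       -- an empty table yields no pg_statistic entries

EXPLAIN SELECT *
FROM big b
WHERE EXISTS (SELECT 1 FROM empty_inner e WHERE e.id = b.id);
-- previously eqjoinsel_semi fell back to assuming ~50% of "big" has a match;
-- treating the clamped ndistinct as non-default lets it estimate far fewer rows
```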
-
Peter Eisentraut authored
-
Tom Lane authored
I'd supposed that people would do this manually when creating new operator classes, but the folly of that was exposed today. The tests seem fast enough that we can just apply them during the normal regression tests. contrib/isn fails the checks for lack of complete sets of cross-type operators. That's a nice-to-have policy rather than a functional requirement, so leave it as-is, but insert ORDER BY in the query to ensure consistent cross-platform output. Discussion: https://postgr.es/m/7076.1480446837@sss.pgh.pa.us
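Roughly the kind of amvalidate sweep the sanity test can now run; this is a sketch, the in-tree query may differ, and the ORDER BY is the bit added for contrib/isn's sake.

```sql
SELECT oc.oid, oc.opcname
FROM pg_opclass AS oc
WHERE NOT amvalidate(oc.oid)
ORDER BY 1;
```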
-
Tom Lane authored
Paul Jungwirth, reviewed and hacked on by Teodor Sigaev, Ildus Kurbangaliev, Adam Brusselback, Chris Bandy, and myself. Discussion: https://postgr.es/m/CA+renyUEE29=X01JXdz8_TQvo6n9=2XoEBBRnQ8rkLyr+kjPxQ@mail.gmail.com Discussion: https://postgr.es/m/55F6EE82.8080209@sigaev.ru
-
Robert Haas authored
Commit 274bb2b3 made it possible to specify multiple IPs in a connection string, but that's not good enough for the case where you have a read-write master and a bunch of read-only standbys and want to connect to whichever server is the master at the current time. This commit allows that, by making it possible to specify target_session_attrs=read-write as a connection parameter. There was extensive discussion of the best name for the connection parameter and its values as well as the best way to distinguish master and standbys. For now, adopt the same solution as JDBC: if the user wants a read-write connection, issue 'show transaction_read_only' and reject the connection if the result is 'on'. In the future, we could add additional values of this new target_session_attrs parameter that issue different queries; or we might have some way of distinguishing the server type without resorting to an SQL query; but right now, we have this, and that's (hopefully) a good start. Victor Wagner and Mithun Cy. Design review by Álvaro Herrera, Catalin Iacob, Takayuki Tsunakawa, and Craig Ringer; code review by me. I changed Mithun's patch to skip all remaining IPs for a host if we reject a connection based on this new parameter, rewrote the documentation, and did some other cosmetic cleanup. Discussion: http://postgr.es/m/CAD__OuhqPRGpcsfwPHz_PDqAGkoqS1UvnUnOnAB-LBWBW=wu4A@mail.gmail.com
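In libpq terms, the feature looks roughly like this; the host names and database are placeholders.

```c
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* Try each listed host in turn, keeping only a connection to a server
     * where "show transaction_read_only" reports 'off', i.e. the primary. */
    PGconn *conn = PQconnectdb(
        "host=pg1.example.com,pg2.example.com port=5432 dbname=appdb "
        "target_session_attrs=read-write");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "no writable server found: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    printf("connected to primary: %s\n", PQhost(conn));
    PQfinish(conn);
    return 0;
}
```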
-
Stephen Frost authored
Add an option to exclude blobs when running pg_dump. By default, blobs are included but this option can be used to exclude them while keeping the rest of the dump. Comment updates and regression tests from me. Author: Guillaume Lelarge Reviewed-by: Amul Sul Discussion: https://postgr.es/m/VisenaEmail.48.49926ea6f91dceb6.15355a48249@tc7-visena
-
Tom Lane authored
func_parallel() returns char not Oid. Harmless, but still wrong. Amit Langote
-
Stephen Frost authored
The documentation around the -b/--blobs option to pg_dump seemed to imply that it might be possible to add blobs to a "schema-only" dump or similar. Clarify that blobs are data and therefore will only be included in dumps where data is being included, even when -b is used to request blobs be included. The -b option has been around since before 9.2, so back-patch to all supported branches. Discussion: https://postgr.es/m/20161119173316.GA13284@tamriel.snowman.net
-
Stephen Frost authored
An example in the psql documentation showed a field name different from what the command actually produced. Pointed out by Fabien COELHO. Back-patch to 9.6, where the example was added. Discussion: https://postgr.es/m/alpine.DEB.2.20.1611291349400.19314@lancre
-
- 28 Nov, 2016 3 commits
-
-
Alvaro Herrera authored
Plus add a missing comment about this in get_relation_info itself. Author: Amit Langote Discussion: https://postgr.es/m/e46c0569-0449-afa0-e2fe-f3776e4b3fd5@lab.ntt.co.jp
-
Tom Lane authored
Commit a734fd5d exposed some race conditions that existed previously in the autovac code, but were basically harmless because autovac would not try to delete orphaned relations immediately. Specifically, the test for orphaned-ness was made on a pg_class tuple that might be dead by now, allowing autovac to try to remove a table that the owning backend had just finished deleting. This resulted in a hard crash due to inadequate caution about accessing the table's catalog entries without any lock. We must take a relation lock and then recheck whether the table is still present and still looks deletable before we do anything. Also, it seemed to me that deleting multiple tables per transaction, and trying to continue after errors, represented unjustifiable complexity. We do not expect this code path to be taken often in the field, nor even during testing, which means that prioritizing performance over correctness is a bad tradeoff. Rip all that out in favor of just starting a new transaction after each successful temp table deletion. If we're unlucky enough to get an error, which shouldn't happen anyway now that we're being more cautious, let the autovacuum worker fail as it normally would. In passing, improve the order of operations in the initial scan loop. Now that we don't care about whether a temp table is a wraparound hazard, there's no need to perform extract_autovac_opts, get_pgstat_tabentry_relid, or relation_needs_vacanalyze for temp tables. Also, if GetTempNamespaceBackendId returns InvalidBackendId (indicating it doesn't recognize the schema as temp), treat that as meaning it's NOT an orphaned temp table, not that it IS one, which is what happened before because BackendIdGetProc necessarily failed. The case really shouldn't come up for a table that has RELPERSISTENCE_TEMP, but the consequences if it did seem undesirable. (This might represent a back-patchable bug fix; not sure if it's worth the trouble.) Discussion: https://postgr.es/m/21299.1480272347@sss.pgh.pa.us
-
- 27 Nov, 2016 1 commit
-
-
Magnus Hagander authored
Fix that the documentation for three ssl related parameters did not specify that they can only be changed at server start. Michael Paquier
-
- 26 Nov, 2016 1 commit
-
-
Tom Lane authored
Commit 08dd23ce introduced an exception to the rule that extension member objects can only be dropped as part of dropping the whole extension, intending to allow such drops while running the extension's own creation or update scripts. However, the exception was only applied at the outermost recursion level, because it was modeled on a pre-existing check to ignore dependencies on objects listed in pendingObjects. Bug #14434 from Philippe Beaudoin shows that this is inadequate: in some cases we can reach an extension member object by recursion from another one. (The bug concerns the serial-sequence case; I'm not sure if there are other cases, but there might well be.) To fix, revert 08dd23ce's changes to findDependentObjects() and instead apply the creating_extension exception regardless of stack level. Having seen this example, I'm a bit suspicious that the pendingObjects logic is also wrong and such cases should likewise be allowed at any recursion level. However, changing that would interact in subtle ways with the recursion logic (at least it would need to be moved to after the recursing-from check). Given that the code's been like that a long time, I'll refrain from touching it without a clear example showing it's wrong. Back-patch to all active branches. In HEAD and 9.6, where suitable test infrastructure exists, add a regression test case based on the bug report. Report: <20161125151448.6529.33039@wrigleys.postgresql.org> Discussion: <13224.1480177514@sss.pgh.pa.us>
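A sketch of the serial-sequence shape of the problem, with invented object names; inside the extension's own script both objects are extension members, and the sequence is reached only by recursing from the table.

```sql
-- imagine this inside an extension's own script, so creating_extension is true
CREATE TABLE ext_tab (id serial PRIMARY KEY);   -- also creates sequence ext_tab_id_seq
DROP TABLE ext_tab;   -- the owned sequence is dropped via recursion, one level down,
                      -- which is where the old code failed to apply the exception
```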
-