- 25 Mar, 2017 14 commits
-
-
Andres Freund authored
This replaces the old recursive tree-walk based evaluation with non-recursive, opcode-dispatch based expression evaluation. Projection is now implemented as part of expression evaluation. This both leads to significant performance improvements and makes future just-in-time compilation of expressions easier.

The speed gains primarily come from:
- the non-recursive implementation reduces stack usage and overhead
- simple sub-expressions are implemented with a single jump, without function calls
- some state is shared between different sub-expressions
- the amount of indirect, hard-to-predict memory accesses is reduced by laying out operation metadata sequentially; this includes avoiding nearly all of the previously used linked lists
- more code has been moved to expression initialization, avoiding constant re-checks at evaluation time

Future just-in-time compilation (JIT) has become easier, as demonstrated by released patches intended to be merged in a later release, for primarily two reasons: firstly, due to a stricter split between expression initialization and evaluation, less code has to be handled by the JIT. Secondly, due to the non-recursive nature of the generated "instructions", less performance-critical code paths can easily be shared between interpreted and compiled evaluation.

The new framework allows for significant future optimizations, e.g.:
- basic infrastructure to later reduce the per-executor-startup overhead of expression evaluation, by caching state in prepared statements. That would be helpful in OLTP-ish scenarios where initialization overhead is measurable.
- optimizing the generated "code". A number of proposals for potential work have already been made.
- optimizing the interpreter. Similarly, a number of proposals have been made here too.

The move of logic into the expression initialization step leads to some backward-incompatible changes:
- Function permission checks are now done during expression initialization, whereas previously they were done during execution. In edge cases this can lead to errors being raised that previously wouldn't have been, e.g. a NULL array being coerced to a different array type previously didn't perform permission checks.
- The set of domain constraints to be checked is now evaluated once during expression initialization; previously it was re-built every time a domain check was evaluated. For normal queries this doesn't change much, but e.g. for plpgsql functions, which cache ExprStates, the old set could stick around longer. The behavior around this might still change.

Author: Andres Freund, with significant changes by Tom Lane, changes by Heikki Linnakangas
Reviewed-By: Tom Lane, Heikki Linnakangas
Discussion: https://postgr.es/m/20161206034955.bh33paeralxbtluv@alap3.anarazel.de
-
Tom Lane authored
As explained at the head of parallel_schedule, we place an arbitrary limit of 20 test cases per parallel group. Commit c7a9fa39 overlooked this. Least messy solution seems to be to move the "comments" test to the next group, since it doesn't really belong in a group of datatype tests anyway.
-
Peter Eisentraut authored
-
Simon Riggs authored
If the upstream walsender is using a physical replication slot, store the catalog_xmin in the slot's catalog_xmin field. If the upstream doesn't use a slot and has only a PGPROC entry, behaviour doesn't change, as we store the combined xmin and catalog_xmin in the PGPROC entry.

Author: Craig Ringer
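A minimal way to observe the effect, assuming a physical slot already exists (the slot name below is hypothetical):

    -- The catalog_xmin maintained for a physical slot is visible in
    -- the pg_replication_slots view.
    SELECT slot_name, slot_type, xmin, catalog_xmin
    FROM pg_replication_slots
    WHERE slot_name = 'standby_slot';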
-
Peter Eisentraut authored
Reported-by: Mark Kirkwood <mark.kirkwood@catalyst.net.nz>
-
Peter Eisentraut authored
-
Peter Eisentraut authored
Author: David Rowley <david.rowley@2ndquadrant.com>
-
Peter Eisentraut authored
These tests require the test database to be in UTF8 encoding. Until there is a better solution, take them out of the default test set and treat them like the existing collate.linux.utf8 test, meaning they have to be selected manually.
-
Peter Eisentraut authored
The test could hang if a ~/.psqlrc with certain contents was present. Fix by using psql -X, so the startup file is not read.
-
Peter Eisentraut authored
-
Peter Eisentraut authored
Add necessary include files for things used in the header.
-
Peter Eisentraut authored
Add more tests for various variants of subscription DDL commands, based on the code coverage report. Also fix a small bug discovered in the process.
- 24 Mar, 2017 15 commits
-
-
Alvaro Herrera authored
-
Alvaro Herrera authored
Because tuple packing differs (due to the MAXALIGN difference), the expected costs of a seqscan are different. The commonly used trick of eliding costs in EXPLAIN output (COSTS OFF) would make the tests completely pointless. Instead, add an alternative expected file.
-
Peter Eisentraut authored
Author: Petr Jelinek <pjmodos@pjmodos.net>
-
Robert Haas authored
In SQL, the ability to use parallel query was previously contingent on fcache->readonly_func, which is only set for non-volatile functions; but the volatility of a function has no bearing on whether queries inside it can use parallelism. Remove that condition.

SPI_execute and SPI_execute_with_args always run the plan just once, though not necessarily to completion. Given the changes in commit 691b8d59, it's sensible to pass CURSOR_OPT_PARALLEL_OK here, so do that. This improves access to parallelism for any caller that uses these functions to execute queries. Such callers include plperl, plpython, pltcl, and plpgsql, though it's not the case that they all use these functions exclusively.

In plpgsql, allow parallel query for plain SELECT queries (as opposed to PERFORM, which already worked) and for plain expressions (which probably won't go through the executor at all, because they will likely be simple expressions, but if they do then this helps).

Rafia Sabih and Robert Haas, reviewed by Dilip Kumar and Amit Kapila
Discussion: http://postgr.es/m/CAOGQiiMfJ+4SQwgG=6CVHWoisiU0+7jtXSuiyXBM3y=A=eJzmg@mail.gmail.com
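A sketch of the newly covered plpgsql case (the function and the table big_table are hypothetical):

    CREATE FUNCTION count_rows() RETURNS bigint AS $$
    DECLARE
        n bigint;
    BEGIN
        -- A plain SELECT like this may now be planned with parallelism
        -- (PERFORM already could).
        SELECT count(*) INTO n FROM big_table;
        RETURN n;
    END;
    $$ LANGUAGE plpgsql;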
-
Alvaro Herrera authored
Detected by buildfarm member prion
-
Simon Riggs authored
Buildfarm issues and other reported issues
-
Fujii Masao authored
Previously, manual VACUUM did not report the number of skipped frozen pages even when the VERBOSE option was specified. But this information is helpful for monitoring VACUUM activity, and autovacuum already reports that number in the log file when the log_autovacuum_min_duration condition is met. This commit changes VACUUM VERBOSE so that it reports the number of frozen pages that it skips.

Author: Masahiko Sawada
Reviewed-by: Yugo Nagata and Jim Nasby
Discussion: http://postgr.es/m/CAD21AoDZQKCxo0L39Mrq08cONNkXQKXuh=2DP1Q8ebmt35SoaA@mail.gmail.com
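For example (the table name is hypothetical):

    -- The per-table VERBOSE output now includes, roughly, a count of
    -- skipped frozen pages alongside the existing removed/remaining
    -- page figures.
    VACUUM VERBOSE my_table;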
-
Alvaro Herrera authored
Add support for explicitly declared statistics objects (CREATE STATISTICS), allowing collection of statistics on more complex combinations than individual table columns. Companion commands DROP STATISTICS and ALTER STATISTICS ... OWNER TO / SET SCHEMA / RENAME are added too.

All this DDL has been designed so that more statistics types can be added later on, such as multivariate most-common-values and multivariate histograms between columns of a single table, leaving room for permitting columns on multiple tables, too, as well as expressions.

This commit only adds support for collection of the n-distinct coefficient on user-specified sets of columns in a single table. This is useful for estimating the number of distinct groups in GROUP BY and DISTINCT clauses; estimation errors there can cause over-allocation of memory in hashed aggregates, for instance, so it's a worthwhile problem to solve. A new special pseudo-type pg_ndistinct is used.

(num-distinct estimation was deemed sufficiently useful by itself that this is worthwhile even if no further statistics types are added immediately; so much so that another version of essentially the same functionality was submitted by Kyotaro Horiguchi: https://postgr.es/m/20150828.173334.114731693.horiguchi.kyotaro@lab.ntt.co.jp though this commit does not use that code.)

Author: Tomas Vondra. Some code rework by Álvaro.
Reviewed-by: Dean Rasheed, David Rowley, Kyotaro Horiguchi, Jeff Janes, Ideriha Takeshi
Discussion: https://postgr.es/m/543AFA15.4080608@fuzzy.cz
    https://postgr.es/m/20170320190220.ixlaueanxegqd5gr@alvherre.pgsql
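A short sketch, using the CREATE STATISTICS syntax as it shipped in the PostgreSQL 10 release (the grammar was adjusted during development); the table and its columns are hypothetical:

    CREATE TABLE t (a int, b int);
    INSERT INTO t SELECT i / 100, i / 100
    FROM generate_series(1, 10000) s(i);   -- a and b fully correlated

    CREATE STATISTICS s (ndistinct) ON a, b FROM t;
    ANALYZE t;  -- computes the n-distinct coefficients

    -- The row-count estimate for this grouping can now use the
    -- multivariate n-distinct statistics instead of multiplying
    -- per-column estimates.
    EXPLAIN SELECT a, b FROM t GROUP BY a, b;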
-
Robert Haas authored
Commit 7aea8e4f allowed a parallel plan to be generated for a RETURN QUERY or RETURN QUERY EXECUTE statement in a PL/pgsql block, but that's a bad idea because plpgsql asks the executor for 50 rows at a time. That means we'll always be running a plan that was intended for parallel execution serially, which is not a good idea. Fix by not requesting a parallel plan from the outset.

Per discussion, back-patch to 9.6. There is a slight risk that, due to optimizer error, somebody could have a case where the parallel plan executed serially is actually faster than the supposedly-best serial plan, but the consensus seems to be that that's not sufficient justification for leaving 9.6 unpatched.

Discussion: http://postgr.es/m/CA+TgmoZ_ZuH+auEeeWnmtorPsgc_SmP+XWbDsJ+cWvWBSjNwDQ@mail.gmail.com
Discussion: http://postgr.es/m/CA+TgmobXEhvHbJtWDuPZM9bVSLiTj-kShxQJ2uM5GPDze9fRYA@mail.gmail.com
-
Robert Haas authored
If your connection to the database server is lost while a COMMIT is in progress, it may be difficult to figure out whether the COMMIT was successful or not. This function will tell you, provided that you don't wait too long to ask. It may be useful in other situations, too.

Craig Ringer, reviewed by Simon Riggs and by me
Discussion: http://postgr.es/m/CAMsr+YHQiWNEi0daCTboS40T+V5s_+dst3PYv_8v2wNVH+Xx4g@mail.gmail.com
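The function in question shipped in PostgreSQL 10 as txid_status(); a minimal usage sketch (the XID in the final call is hypothetical):

    BEGIN;
    SELECT txid_current();  -- note the XID assigned to this transaction
    -- ... work ...
    COMMIT;                 -- suppose the connection drops right here

    -- After reconnecting, ask about the in-doubt transaction;
    -- txid_status() returns 'committed', 'aborted', 'in progress', or
    -- NULL if the XID is too old for its status to be known.
    SELECT txid_status(12345);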
-
Simon Riggs authored
For normal commits and aborts we already reset PgXact->xmin. Avoiding touching highly contended shmem improves concurrent performance.

Simon Riggs
Discussion: CANP8+jJdXE9b+b9F8CQT-LuxxO0PBCB-SZFfMVAdp+akqo4zfg@mail.gmail.com
-
Peter Eisentraut authored
Always return tupleslot and tupledesc from libpqrcv_exec. This avoids requiring callers to handle that separately. Author: Petr Jelinek <petr.jelinek@2ndquadrant.com> Reported-by: Michael Banck <michael.banck@credativ.de>
-
Heikki Linnakangas authored
If a user has a SCRAM verifier in pg_authid.rolpassword, there's no reason we cannot attempt to perform SCRAM authentication instead of MD5. The worst that can happen is that the client doesn't support SCRAM, and the authentication will fail. But previously, it would fail for sure, because we would not even try. SCRAM is strictly more secure than MD5, so there's no harm in trying it. This allows for a more graceful transition from MD5 passwords to SCRAM, as user passwords can be changed to SCRAM verifiers incrementally, without changing pg_hba.conf.

Refactor the code in auth.c to support that better. Notably, we now have to look up the user's pg_authid entry before sending the password challenge, also when performing MD5 authentication.

Also simplify the concept of a "doomed" authentication. Previously, if a user had a password, but it had expired, we still performed SCRAM authentication (but always returned an error at the end) using the salt and iteration count from the expired password. Now we construct a fake salt, like we do when the user doesn't have a password or doesn't exist at all. That simplifies get_role_password(), and we no longer need to distinguish the "user has expired password" and "user does not exist" cases in auth.c.

On second thoughts, also rename uaSASL to uaSCRAM. It refers to the mechanism specified in pg_hba.conf, and while we use SASL for SCRAM authentication at the protocol level, the mechanism should be called SCRAM, not SASL. As a comparison, we have uaLDAP, even though it looks like plain 'password' authentication at the protocol level.

Discussion: https://www.postgresql.org/message-id/6425.1489506016@sss.pgh.pa.us
Reviewed-by: Michael Paquier
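A hedged sketch of the incremental transition this enables (the role name is hypothetical; the setting value is spelled 'scram-sha-256' in the released PostgreSQL 10):

    -- Newly set passwords are stored as SCRAM verifiers; the server can
    -- then attempt SCRAM for such users, while users whose stored
    -- password is still an MD5 hash keep authenticating as before,
    -- without any pg_hba.conf change.
    SET password_encryption = 'scram-sha-256';
    ALTER ROLE alice PASSWORD 'correct horse battery staple';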
-
Teodor Sigaev authored
An assert-enabled build crashes, but without asserts it works incorrectly: it may fail to reset the forcing of full page writes, and may prevent starting a new exclusive backup with the same name as the cancelled one. The patch replaces the pair of booleans nonexclusive_backup_running/exclusive_backup_running with a single enum that correctly describes the backup state.

Backpatch to 9.6, where the bug was introduced.

Reported-by: David Steele
Authors: Michael Paquier, David Steele
Reviewed-by: Anastasia Lubennikova
https://commitfest.postgresql.org/13/1068/
-
Tom Lane authored
Buildfarm member anole sees this union as empty, and doesn't like it.
-
- 23 Mar, 2017 11 commits
-
-
Bruce Momjian authored
-
Peter Eisentraut authored
Reported-by: Thomas Munro <thomas.munro@enterprisedb.com>
-
Peter Eisentraut authored
This only happened with single-byte encodings.
-
Robert Haas authored
Commit 249cf070 assigned to one of the labels in the middle of the enum the value that should have been assigned to its first member. Rushabh's patch didn't have that defect as submitted, but I managed to mess it up while editing. Repair.
-
Peter Eisentraut authored
Add a column collprovider to pg_collation that determines which library provides the collation data. The existing choices are default and libc, and this adds an icu choice, which uses the ICU4C library.

The pg_locale_t type is changed to a union that contains the provider-specific locale handles. Users of locale information are changed to look into that struct for the appropriate handle to use.

Also add a collversion column that records the version of the collation when it is created, and check at run time whether it is still the same. This detects potentially incompatible library upgrades that can corrupt indexes and other structures. This is currently only supported by ICU-provided collations.

initdb initializes the default collation set as before from the `locale -a` output but also adds all available ICU locales with "-x-icu" appended. Currently, ICU-provided collations can only be explicitly named collations. The global database locales are still always libc-provided.

ICU support is enabled by configure --with-icu.

Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
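A brief illustration, assuming a server built with --with-icu (the collation names follow the "-x-icu" convention initdb uses, and the created collation is an example):

    -- ICU-provided collations carry a provider flag and a recorded
    -- version in pg_collation.
    SELECT collname, collprovider, collversion
    FROM pg_collation
    WHERE collprovider = 'i'
    LIMIT 5;

    -- Explicitly named ICU collations can also be created by hand:
    CREATE COLLATION german_phonebook
        (provider = icu, locale = 'de-u-co-phonebk');

    SELECT * FROM (VALUES ('bar'), ('bär')) v(s)
    ORDER BY s COLLATE "de-x-icu";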
-
Robert Haas authored
This provides infrastructure for looking up arbitrary, user-supplied XIDs without a risk of scary-looking failures from within the clog module. Normally, the oldest XID that can be safely looked up in CLOG is the same as the oldest XID that can be reused without causing wraparound, and the latter is already tracked. However, while truncation is in progress, the values are different, so we must keep track of them separately.

Craig Ringer, reviewed by Simon Riggs and by me.
Discussion: http://postgr.es/m/CAMsr+YHQiWNEi0daCTboS40T+V5s_+dst3PYv_8v2wNVH+Xx4g@mail.gmail.com
-
Peter Eisentraut authored
They have been deprecated since PostgreSQL 9.1. Reviewed-by: Magnus Hagander <magnus@hagander.net> Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
-
Robert Haas authored
Previously, it was unsafe to execute a plan in parallel if ExecutorRun() might be called with a non-zero row count. However, it's quite easy to fix things up so that we can support that case, provided that it is known that we will never call ExecutorRun() a second time for the same QueryDesc. Add infrastructure to signal this, and cross-checks to make sure that a caller who claims this is true doesn't later renege.

While that pattern never happens with queries received directly from a client -- there's no way to know whether multiple Execute messages will be sent unless the first one requests all the rows -- it's pretty common for queries originating from procedural languages, which often limit the result to a single tuple or to a user-specified number of tuples.

This commit doesn't actually enable parallelism in any additional cases, because currently none of the places that would be able to benefit from this infrastructure pass CURSOR_OPT_PARALLEL_OK in the first place, but it makes it much more palatable to pass CURSOR_OPT_PARALLEL_OK in places where we currently don't, because it eliminates some cases where we'd end up having to run the parallel plan serially.

Patch by me, based on some ideas from Rafia Sabih and corrected by Rafia Sabih based on feedback from Dilip Kumar and myself.
Discussion: http://postgr.es/m/CA+TgmobXEhvHbJtWDuPZM9bVSLiTj-kShxQJ2uM5GPDze9fRYA@mail.gmail.com
-
Teodor Sigaev authored
While cleaning a posting tree, GIN vacuum can lock the whole tree for a long time by holding LockBufferForCleanup() on its root. The patch changes this in two ways: first, the cleanup lock is taken only if there is an empty page (which should be deleted), and second, vacuum tries to lock only a subtree, not the whole posting tree.

Author: Andrey Borodin, with minor editorialization by me
Reviewed-by: Jeff Davis, me
https://commitfest.postgresql.org/13/896/
-
Peter Eisentraut authored
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
-
Peter Eisentraut authored
related to 7c4f5240, per build farm Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
-