- 29 Mar, 2021 2 commits
-
-
Peter Geoghegan authored
Assert that LP_UNUSED items have no storage. If it's worth having defensive code in non-assert builds then it's worth having an assertion as well.
-
David Rowley authored
Here we aim to reduce duplicate work done by contain_volatile_functions() by caching whether PathTargets and RestrictInfos contain any volatile functions the first time contain_volatile_functions() is called for them. Any future calls for these nodes just use the cached value rather than going to the trouble of recursively checking the sub-node all over again. Thanks to Tom Lane for the idea. Any locations in the code which make changes to a PathTarget or RestrictInfo which could change the outcome of the volatility check must change the cached value back to VOLATILITY_UNKNOWN again. contain_volatile_functions() is the only code in charge of setting the cache value to either VOLATILITY_VOLATILE or VOLATILITY_NOVOLATILE. Some existing code does benefit from this additional caching; however, this change is mainly aimed at an upcoming patch that must check for volatility during the join search. Repeated volatility checks in that case can become very expensive when the join search contains more than a few relations. Author: David Rowley Discussion: https://postgr.es/m/3795226.1614059027@sss.pgh.pa.us
-
- 28 Mar, 2021 1 commit
-
-
Stephen Frost authored
Commit c6763156 added an acronym reference for "TLS" but the definition was never added. Author: Daniel Gustafsson Reviewed-by: Michael Paquier Backpatch-through: 9.6 Discussion: https://postgr.es/m/27109504-82DB-41A8-8E63-C0498314F5B0@yesql.se
-
- 27 Mar, 2021 2 commits
-
-
Tomas Vondra authored
The tests used string concatenation to test statistics on expressions, but that made the tests locale-dependent, e.g. because the ordering of '11' and '1X' depends on the collation. This affected both the estimated and actual row counts, breaking some of the tests. Fixed by replacing the string concatenation with an upper() function call, so that the text values contain only digits. Discussion: https://postgr.es/m/b650920b-2767-fbc3-c87a-cb8b5d693cbf%40enterprisedb.com
-
Peter Eisentraut authored
-
- 26 Mar, 2021 10 commits
-
-
Tomas Vondra authored
Allow defining extended statistics on expressions, not just on simple column references. With this commit, expressions are supported by all existing extended statistics kinds, improving the same types of estimates. A simple example may look like this: CREATE TABLE t (a int); CREATE STATISTICS s ON mod(a,10), mod(a,20) FROM t; ANALYZE t; The collected statistics are useful e.g. to estimate queries with those expressions in WHERE or GROUP BY clauses: SELECT * FROM t WHERE mod(a,10) = 0 AND mod(a,20) = 0; SELECT 1 FROM t GROUP BY mod(a,10), mod(a,20); This introduces a new internal statistics kind 'e' (expressions) which is built automatically when the statistics object definition includes any expressions. This represents single-expression statistics, as if there were an expression index (but without the index maintenance overhead). The statistics are stored in pg_statistic_ext_data as an array of composite types, which is possible thanks to 79f6a942. CREATE STATISTICS allows building statistics on a single expression, in which case it's not possible to specify statistics kinds. A new system view pg_stats_ext_exprs can be used to display expression statistics, similarly to the pg_stats and pg_stats_ext views. ALTER TABLE ... ALTER COLUMN ... TYPE now treats extended statistics the same way it treats indexes, i.e. it drops and recreates the statistics. This means all statistics are reset, and we no longer try to preserve at least the functional dependencies. This should not be a major issue in practice, as the functional dependencies actually rely on per-column statistics, which were always reset anyway. Author: Tomas Vondra Reviewed-by: Justin Pryzby, Dean Rasheed, Zhihong Yu Discussion: https://postgr.es/m/ad7891d2-e90c-b446-9fe2-7419143847d7%40enterprisedb.com
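A hedged sketch of the single-expression form and of the new view mentioned above (the statistics object name below is illustrative, not taken from the commit):

    -- a single expression: no statistics kinds may be specified in this form
    CREATE STATISTICS s_mod ON mod(a,10) FROM t;
    ANALYZE t;
    -- per-expression statistics are exposed here, similarly to pg_stats
    SELECT * FROM pg_stats_ext_exprs;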
-
Tomas Vondra authored
The regression tests of extended statistics were taking a fair amount of time, due to using fairly large data sets with a couple thousand rows. So far this was fine, but with tests for statistics on expressions the duration would get a bit excessive. So reduce the size of some of the tests that will be used to test expressions, to keep the duration under control. Done in a separate commit before adding the statistics on expressions, to make it clear which estimates are expected to change. Author: Tomas Vondra Discussion: https://postgr.es/m/ad7891d2-e90c-b446-9fe2-7419143847d7%40enterprisedb.com
-
Tomas Vondra authored
When estimating the number of groups using extended statistics, the code was discarding information about system attributes. This led to the strange situation that SELECT 1 FROM t GROUP BY ctid; could produce a higher estimate (equal to pg_class.reltuples) than SELECT 1 FROM t GROUP BY a, b, ctid; with extended statistics on (a,b). Fixed by retaining the information about system attributes. Backpatch all the way to 10, where extended statistics were introduced. Author: Tomas Vondra Backpatch-through: 10
-
Noah Misch authored
Membership consists, implicitly, of the current database owner. Expect use in template databases. Once pg_database_owner has rights within a template, each owner of a database instantiated from that template will exercise those rights. Reviewed by John Naylor. Discussion: https://postgr.es/m/20201228043148.GA1053024@rfd.leadboat.com
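A hedged sketch of the intended template-database usage (the particular GRANT is an illustrative assumption, not part of the commit):

    -- run inside a template database: the owner of every database later
    -- created from this template will implicitly exercise this right there
    GRANT CREATE ON SCHEMA public TO pg_database_owner;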
-
Noah Misch authored
The next commit would have complicated two or three algorithms, so take this opportunity to consolidate. No functional changes. Reviewed by John Naylor. Discussion: https://postgr.es/m/20201228043148.GA1053024@rfd.leadboat.com
-
Tomas Vondra authored
The deserialization failed to ensure correct alignment, as it assumed it could simply point into the serialized value. The serialization however ignores alignment and copies just the significant bytes in order to make the result as small as possible. This caused failures on systems that are sensitive to misaligned addresses, like sparc, or with address sanitizer enabled. Fixed by copying the serialized data to ensure proper alignment. While at it, fix an issue with serialization on big endian machines, using the same store_att_byval/fetch_att trick as extended statistics. Discussion: https://postgr.es/0c8c3304-d3dd-5e29-d5ac-b50589a23c8c%40enterprisedb.com
-
Tomas Vondra authored
Adds BRIN opclasses similar to the existing minmax, except that instead of summarizing the page range into a single [min,max] range, the summary consists of multiple ranges and/or points, allowing gaps. This allows more efficient handling of data with poor correlation to physical location within the table and/or outlier values, for which the regular minmax opclasses tend to work poorly. It's possible to specify the number of values kept for each page range, either as a single point or an interval boundary. CREATE TABLE t (a int); CREATE INDEX ON t USING brin (a int4_minmax_multi_ops(values_per_range=16)); When building the summary, the values are combined into intervals with the goal of minimizing the "covering" (sum of interval lengths), using a support procedure computing the distance between two values. Bump catversion, due to various catalog changes. Author: Tomas Vondra <tomas.vondra@postgresql.org> Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org> Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com> Reviewed-by: Sokolov Yura <y.sokolov@postgrespro.ru> Reviewed-by: John Naylor <john.naylor@enterprisedb.com> Discussion: https://postgr.es/m/c1138ead-7668-f0e1-0638-c3be3237e812@2ndquadrant.com Discussion: https://postgr.es/m/5d78b774-7e9c-c94e-12cf-fef51cc89b1a%402ndquadrant.com
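A brief, hedged illustration of the kind of query these summaries are meant to help (whether the index is actually chosen depends on the data):

    -- range predicates can be satisfied from the multi-range summaries even
    -- when values correlate poorly with physical location or contain outliers
    EXPLAIN SELECT * FROM t WHERE a BETWEEN 1000 AND 2000;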
-
Tomas Vondra authored
Adds a BRIN opclass using a Bloom filter to summarize the range. Indexes using the new opclasses allow only equality queries (similar to hash indexes), but that works fine for data like UUIDs, MAC addresses, etc., for which range queries are not very common. This also means the indexes work for data that is not well correlated to physical location within the table, or perhaps even entirely random (which is a common issue with existing BRIN minmax opclasses). It's possible to specify opclass parameters with the usual Bloom filter parameters, i.e. the desired false-positive rate and the expected number of distinct values per page range. CREATE TABLE t (a int); CREATE INDEX ON t USING brin (a int4_bloom_ops(false_positive_rate = 0.05, n_distinct_per_range = 100)); The opclasses do not operate on the indexed values directly, but compute a 32-bit hash first, and the Bloom filter is built on the hash value. Collisions should not be a huge issue though, as the number of distinct values in a page range is usually fairly small. Bump catversion, due to various catalog changes. Author: Tomas Vondra <tomas.vondra@postgresql.org> Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org> Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com> Reviewed-by: Sokolov Yura <y.sokolov@postgrespro.ru> Reviewed-by: Nico Williams <nico@cryptonector.com> Reviewed-by: John Naylor <john.naylor@enterprisedb.com> Discussion: https://postgr.es/m/c1138ead-7668-f0e1-0638-c3be3237e812@2ndquadrant.com Discussion: https://postgr.es/m/5d78b774-7e9c-c94e-12cf-fef51cc89b1a%402ndquadrant.com
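A hedged sketch of the query shape these opclasses target (equality only, as noted above):

    -- only equality predicates can be answered from the Bloom summaries
    EXPLAIN SELECT * FROM t WHERE a = 12345;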
-
Tomas Vondra authored
Commit a1c649d8 changed the signature of the BRIN consistent function by adding a new required parameter. Treating the parameter as optional, which would make the change backward compatible, was rejected with the justification that there are few out-of-core extensions, so it's not worth making the code more complex, and it's better to deal with that in the extension. But after further thought, that would be rather problematic, because pg_upgrade simply dumps catalog contents and the same version of an extension needs to work on both PostgreSQL versions. Supporting both variants of the consistent function (with 3 or 4 arguments) makes that possible. The signature is not the only thing that changed, as commit 72ccf55c moved handling of IS [NOT] NULL keys from the support procedures. But this change is backward compatible - handling the keys in the extension is unnecessary, but harmless. The consistent function will do a bit of unnecessary work, but it should be very cheap. This also undoes most of the changes to the existing opclasses (minmax and inclusion), making them use the old signature again. This should make backpatching simpler. Catversion bump, because of changes in pg_amproc. Author: Tomas Vondra <tomas.vondra@postgresql.org> Author: Nikita Glukhov <n.gluhov@postgrespro.ru> Reviewed-by: Mark Dilger <hornschnorter@gmail.com> Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com> Reviewed-by: Masahiko Sawada <masahiko.sawada@enterprisedb.com> Reviewed-by: John Naylor <john.naylor@enterprisedb.com> Discussion: https://postgr.es/m/c1138ead-7668-f0e1-0638-c3be3237e812@2ndquadrant.com
-
Tomas Vondra authored
The BRIN minmax opclasses included amproc entries with mismatching left and right types, but those happen to be unnecessary. The opclasses only need cross-type operators, not cross-type support procedures. Discovered when trying to define equivalent BRIN operator families in an extension. Catversion bump, because of pg_amproc changes. Author: Tomas Vondra Reviewed-by: Alvaro Herrera Discussion: https://postgr.es/m/78c357ab-3395-8433-e7b3-b2cfcc9fdc23%40enterprisedb.com
-
- 25 Mar, 2021 12 commits
-
-
Robert Haas authored
Before, trying to compress a value for insertion into an expression index would crash. Dilip Kumar, with some editing by me. Report by Jaime Casanova. Discussion: http://postgr.es/m/CAJKUy5gcs0zGOp6JXU2mMVdthYhuQpFk=S3V8DOKT=LZC1L36Q@mail.gmail.com
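A hypothetical sketch of the previously-crashing scenario (table, column, and expression are illustrative assumptions, not taken from the report):

    CREATE TABLE docs (body text);
    CREATE INDEX ON docs ((body || ''));             -- expression index over a toastable value
    INSERT INTO docs VALUES (repeat('x', 1000000));  -- compressing the computed value for the
                                                     -- index entry reportedly crashed before the fix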
-
Alvaro Herrera authored
Allow a partition to be detached from its partitioned table without blocking concurrent queries, by running in two transactions and only requiring ShareUpdateExclusive on the partitioned table. Because it runs in two transactions, it cannot be used in a transaction block. This is the main reason to use dedicated syntax: so that users can choose to use the original mode if they need it. But also, it doesn't work when a default partition exists (because an exclusive lock would still need to be obtained on it, in order to change its partition constraint). In case the second transaction is cancelled or a crash occurs, there's ALTER TABLE .. DETACH PARTITION .. FINALIZE, which executes the final steps. The main trick to make this work is the addition of column pg_inherits.inhdetachpending, initially false; it can only be set true in the first part of this command. Once that is committed, concurrent transactions that use a PartitionDirectory will include or ignore partitions so marked: in the optimizer they are ignored if the row is marked committed for the snapshot; in the executor they are always included. As a result, and because of the way PartitionDirectory caches partition descriptors, queries that were planned before the detach will see the rows in the detached partition and queries that are planned after the detach won't. A CHECK constraint is created that duplicates the partition constraint. This is probably not strictly necessary, and some users will prefer to remove it afterwards, but if the partition is re-attached to a partitioned table, the constraint needn't be rechecked. Author: Álvaro Herrera <alvherre@alvh.no-ip.org> Reviewed-by: Amit Langote <amitlangote09@gmail.com> Reviewed-by: Justin Pryzby <pryzby@telsasoft.com> Discussion: https://postgr.es/m/20200803234854.GA24158@alvherre.pgsql
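A hedged sketch of the new syntax described above (table and partition names are placeholders):

    -- runs in two transactions; only ShareUpdateExclusive on the parent
    ALTER TABLE parent DETACH PARTITION part1 CONCURRENTLY;
    -- if the command was cancelled or a crash occurred partway through:
    ALTER TABLE parent DETACH PARTITION part1 FINALIZE;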
-
Alvaro Herrera authored
On partition detach, we acquire a SHARE lock on all tables that reference the partitioned table that we're detaching a partition from, but failed to document this fact. My oversight in commit f56f8f8d. Repair. Backpatch to 12. Author: Álvaro Herrera <alvherre@alvh.no-ip.org> Discussion: https://postgr.es/m/20210325180244.GA12738@alvherre.pgsql
-
Alvaro Herrera authored
The prior commit which introduced it was pretty squalid in terms of code documentation, so add some comments.
-
Alvaro Herrera authored
Struct AlteredTableInfo gains a new Relation member, to be used only by Phase 2 (ATRewriteCatalogs); this allows ATExecCmd() subroutines to open and close the relation internally. A future commit will use this facility to implement an ALTER TABLE subcommand that closes and reopens the relation across transaction boundaries. (It is possible to keep the relation open past phase 2 to be used by phase 3 instead of having to reopen it at that point, but there are some minor complications with that; it's not clear that there is much to be won from doing that, though.) Author: Álvaro Herrera <alvherre@alvh.no-ip.org> Discussion: https://postgr.es/m/20200803234854.GA24158@alvherre.pgsql
-
Alvaro Herrera authored
The original definitions pointlessly disregarded existing ItemPointer macros that do the same thing. Reported-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/20210222201557.GA32655@alvherre.pgsql
-
Alvaro Herrera authored
I introduced this duplicate code in commit 8b08f7d4 for no good reason. Remove it, and backpatch to 11 where it was introduced. Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
-
Peter Eisentraut authored
-
Peter Eisentraut authored
A WHERE clause will be used for row filtering in logical replication. We already have a similar node: 'WHERE (condition here)'. Let's rename the node to a generic name and use it for row filtering too. Author: Euler Taveira <euler.taveira@enterprisedb.com> Discussion: https://www.postgresql.org/message-id/flat/CAHE3wggb715X+mK_DitLXF25B=jE6xyNCH4YOwM860JR7HarGQ@mail.gmail.com
-
Michael Paquier authored
Combo CIDs were referred to in code comments using different terms across various places in the code, so unify the terminology with what is currently in use in some of the READMEs. Author: "Hou, Zhijie" Discussion: https://postgr.es/m/1d42865c91404f46af4562532fdbea31@G08CNEXMBPEKD05.g08.fujitsu.local
-
Fujii Masao authored
Previously the WAL replay of a COMMIT_TS_SETTS record called TransactionTreeSetCommitTsData() with the argument write_xlog=true, which generated and wrote a new COMMIT_TS_SETTS record. This is not acceptable because it happens during recovery. This commit fixes the WAL replay of COMMIT_TS_SETTS records so that it calls TransactionTreeSetCommitTsData() with write_xlog=false and doesn't generate new WAL during recovery. Back-patch to all supported branches. Reported-by: lx zou <zoulx1982@163.com> Author: Fujii Masao Reviewed-by: Alvaro Herrera Discussion: https://postgr.es/m/16931-620d0f2fdc6108f1@postgresql.org
-
Fujii Masao authored
Previously when an archive recovery or a standby was starting and had reached the consistent recovery state but hot_standby was configured to off, the error message when a client connected was "the database system is starting up", which was needlessly confusing and not really all that accurate either. This commit improves the connection denied error message during recovery, as follows, so that users immediately know that their servers are configured to deny those connections. - If hot_standby is disabled, the error message "the database system is not accepting connections" and the detail message "Hot standby mode is disabled." are output when clients connect while an archive recovery or a standby is running. - If hot_standby is enabled, the error message "the database system is not yet accepting connections" and the detail message "Consistent recovery state has not been yet reached." are output when clients connect until the consistent recovery state is reached and the postmaster starts accepting read only connections. This commit doesn't change the connection denied error message of "the database system is starting up" during normal server startup and crash recovery, because that message is still suitable for those situations. Author: James Coleman Reviewed-by: Alvaro Herrera, Andres Freund, David Zhang, Tom Lane, Fujii Masao Discussion: https://postgr.es/m/CAAaqYe8h5ES_B=F_zDT+Nj9XU7YEwNhKhHA2RE4CFhAQ93hfig@mail.gmail.com
-
- 24 Mar, 2021 13 commits
-
-
Andrew Dunstan authored
Currently instances of PostgresNode find their Postgres executables in the PATH of the caller. This modification allows for instances that know the installation path they are supposed to use, and the module adjusts the environment of methods that call Postgres executables appropriately. This facility is activated by passing the installation path to the constructor: my $node = PostgresNode->get_new_node('mynode', installation_path => '/path/to/installation'); This makes a number of things substantially easier, including testing third party modules, testing different versions of postgres together, and testing different builds of postgres together. Discussion: https://postgr.es/m/a94c74f9-6b71-1957-7973-a734ea3cbef1@dunslane.net Reviewed-By: Alvaro Herrera, Michael Paquier, Dagfinn Ilmari Mannsåker
-
Michael Meskes authored
-
Michael Meskes authored
This command declares a SQL identifier for a SQL statement to be used in other embedded SQL statements. The identifier is linked to a connection. Author: Hayato Kuroda <kuroda.hayato@fujitsu.com> Reviewed-by: Shawn Wang <shawn.wang.pg@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/TY2PR01MB24438A52DB04E71D0E501452F5630@TY2PR01MB2443.jpnprd01.prod.outlook.com
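A hedged sketch of how this might look in an ECPG program (connection and identifier names are illustrative; :query is a host variable holding the statement text):

    EXEC SQL AT con1 DECLARE stmt STATEMENT;   -- bind the identifier "stmt" to connection con1
    EXEC SQL PREPARE stmt FROM :query;         -- later statements referring to stmt use con1
    EXEC SQL EXECUTE stmt;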
-
Peter Eisentraut authored
Reported-by: John Naylor <john.naylor@enterprisedb.com>
-
Peter Eisentraut authored
Reported-by: Erik Rijkers <er@xs4all.nl>
-
Stephen Frost authored
Common recommendations are that the checkpoint should be spread out as much as possible, provided we avoid having it take too long. This change updates the default to 0.9 (from 0.5) to match that recommendation. There was some debate about possibly removing the option entirely, but it seems there may be some corner cases where having it set much lower, to try to force the checkpoint to be as fast as possible, could result in fewer periods of reduced performance due to kernel flushing. General agreement is that "spread more" is the preferred approach, though, and those who need to tune away from that value are much less common. Reviewed-By: Michael Paquier, Peter Eisentraut, Tom Lane, David Steele, Nathan Bossart Discussion: https://postgr.es/m/20201207175329.GM16415%40tamriel.snowman.net
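For reference, a minimal sketch of overriding the new default, assuming the GUC in question is checkpoint_completion_target:

    -- revert to the previous behaviour, or tune further, if needed
    ALTER SYSTEM SET checkpoint_completion_target = 0.5;
    SELECT pg_reload_conf();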
-
Robert Haas authored
Change the default_toast_compression GUC to be an enum rather than a string. Earlier, uncommitted versions of the patch supported using CREATE ACCESS METHOD to add new compression methods to a running system, but that idea was dropped before commit. So, we can simplify the GUC handling as well, which has the nice side effect of improving the error messages. While updating the documentation to reflect the new GUC type, also move it back to the right place in the list. I moved this while revising what became commit 24f0e395, but apparently the intended ordering is "alphabetical" rather than "whatever Robert thinks looks nice." Rejigger things to avoid having access/toast_compression.h depend on utils/guc.h, so that we don't end up with every file that includes it also depending on something largely unrelated. Move a few inline functions back into the C source file, partly to help reduce dependencies and partly just to avoid clutter. A few very minor cosmetic fixes. Original patch by Justin Pryzby, but very heavily edited by me, and reverse reviewed by him and also reviewed by Tom Lane. Discussion: http://postgr.es/m/CA+TgmoYp=GT_ztUCeZg2i4hkHAQv8o=-nVJ1-TKWTG1zQOmOpg@mail.gmail.com
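A hedged usage sketch; the accepted values are expected to be 'pglz' and, in builds with LZ4 support, 'lz4':

    SET default_toast_compression = 'lz4';
    -- an unrecognized value is now rejected as an invalid enum setting,
    -- with a hint listing the valid choices
    SET default_toast_compression = 'bogus';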
-
Peter Eisentraut authored
Similar to date_trunc, but allows binning by an arbitrary interval rather than just full units. Author: John Naylor <john.naylor@enterprisedb.com> Reviewed-by: David Fetter <david@fetter.org> Reviewed-by: Isaac Morland <isaac.morland@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Artur Zakirov <zaartur@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/CACPNZCt4buQFRgy6DyjuZS-2aPDpccRkrJBmgUfwYc1KiaXYxg@mail.gmail.com
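A hedged usage sketch, assuming the signature is date_bin(stride, source, origin):

    -- bin a timestamp into 15-minute buckets aligned to the given origin;
    -- unlike date_trunc, the stride need not be a full unit
    SELECT date_bin('15 minutes', timestamp '2021-03-24 10:38:00',
                    timestamp '2021-01-01 00:00:00');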
-
Peter Eisentraut authored
Make it the same as another nearby message.
-
Amit Kapila authored
To allow inserts in parallel mode, this feature has to ensure that all the constraints, triggers, etc. are parallel-safe for the partition hierarchy, which is costly, and we need to find a better way to do that. Additionally, we could have used existing cached information in some cases like indexes, domains, etc. to determine the parallel-safety. List of commits reverted, in reverse chronological order:
ed62d373 Doc: Update description for parallel insert reloption.
c8f78b61 Add a new GUC and a reloption to enable inserts in parallel-mode.
c5be48f0 Improve FK trigger parallel-safety check added by 05c8482f.
e2cda3c2 Fix use of relcache TriggerDesc field introduced by commit 05c8482f.
e4e87a32 Fix valgrind issue in commit 05c8482f.
05c8482f Enable parallel SELECT for "INSERT INTO ... SELECT ...".
Discussion: https://postgr.es/m/E1lMiB9-0001c3-SY@gemulon.postgresql.org
-
Fujii Masao authored
Commit de829ddf added wait event WalrcvExit. But its name is not consistent with other wait events like WalReceiverMain or WalReceiverWaitStart, etc. So this commit renames WalrcvExit to WalReceiverExit. Author: Fujii Masao Reviewed-by: Thomas Munro Discussion: https://postgr.es/m/cced9995-8fa2-7b22-9d91-3f22a2b8c23c@oss.nttdata.com
-
Fujii Masao authored
GetNewOidWithIndex() generates new OIDs one by one until it finds one not in the relation. If there are very long runs of consecutive existing OIDs, GetNewOidWithIndex() needs to iterate many times in the loop to find an unused OID. Since a TOAST table can have a large number of entries and there can be such long runs of OIDs, there are cases where it takes many iterations to find a new OID not in the TOAST table. Furthermore, if all (i.e., 2^32) OIDs are already used, GetNewOidWithIndex() enters something like a busy loop and repeats the iterations until at least one OID is marked as unused. There are some reported troubles caused by a large number of iterations in GetNewOidWithIndex(). For example, when inserting a billion records into a table, all the backends doing that insertion operation got stuck with 100% CPU usage at some point. Previously there was no easy way to detect that GetNewOidWithIndex() had failed to find an unused OID many times. So, for example, a full gdb backtrace of the hung backends needed to be taken in order to investigate that trouble. This is inconvenient and may not be available in some production environments. To provide an easy way for that, this commit makes GetNewOidWithIndex() log when it has iterated more than GETNEWOID_LOG_THRESHOLD times without finding an OID unused in the relation. It also makes it repeat logging with exponentially increasing intervals until it iterates more than GETNEWOID_LOG_MAX_INTERVAL times, and then finally repeat logging every GETNEWOID_LOG_MAX_INTERVAL iterations unless an unused OID is found. Those macro variables are used so as not to fill up the server log with similar messages. In the discussion at pgsql-hackers, there was another idea to report the large number of iterations in GetNewOidWithIndex() via a wait event. But since GetNewOidWithIndex() traverses indexes to find an unused OID, which will do I/O, acquire locks, etc., that would overwrite the wait event and reset it to nothing once done. So that idea doesn't work well, and we didn't adopt it. Author: Tomohiro Hiramitsu Reviewed-by: Tatsuhito Kasahara, Kyotaro Horiguchi, Tom Lane, Fujii Masao Discussion: https://postgr.es/m/16722-93043fb459a41073@postgresql.org
-
Michael Paquier authored
Using "remain" is confusing, as it implies that the index file can shrink. Instead, use "in total". Per discussion with Peter Geoghegan. Discussion: https://postgr.es/m/CAH2-WzkYgHZzpGOwR14CScJsjaQpvJrEkEfkh_=wGhzLb=yVdQ@mail.gmail.com
-