- 21 Sep, 2017 4 commits
-
-
Andrew Dunstan authored
"\if :{?variable_name}" will be translated to "\if TRUE" if the variable exists and "\if FALSE" otherwise. Thus it will be possible to execute code conditionally on the existence of the variable, regardless of its value. Fabien Coelho, with some review by Robins Tharakan and some light text editing by me. Discussion: https://postgr.es/m/alpine.DEB.2.20.1708260835520.3627@lancre
-
Tom Lane authored
Previously, the code didn't think about this case and would just try to analyze such a column twice. That would fail at the point of inserting the second version of the pg_statistic row, with obscure error messages like "duplicate key value violates unique constraint" or "tuple already updated by self", depending on context and PG version. We could allow the case by ignoring duplicate column specifications, but it seems better to reject it explicitly. The bogus error messages arguably amount to a bug, so back-patch to all supported versions. Nathan Bossart, per a report from Michael Paquier, and whacked around a bit by me. Discussion: https://postgr.es/m/E061A8E3-5E3D-494D-94F0-E8A9B312BBFC@amazon.com
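A hedged sketch of the now-rejected case (table and column names are illustrative):

```psql
CREATE TABLE demo (a int, b int);
-- naming the same column twice now draws an explicit error instead of the
-- obscure duplicate-key / tuple-updated failures described above
ANALYZE demo (a, a);
-- distinct columns are analyzed as before
ANALYZE demo (a, b);
```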
-
Andrew Dunstan authored
These variables are only ever written to in assertion-enabled builds, and the latest Microsoft compilers complain about such variables in non-assertion-enabled builds. Apparently they don't worry so much about variables that are written to but not read from, so most of our PG_USED_FOR_ASSERTS_ONLY variables don't cause the problem. Discussion: https://postgr.es/m/7800.1505950322@sss.pgh.pa.us
-
Robert Haas authored
This is not used for anything yet, but it is necessary infrastructure for partition-wise join and for partition pruning without constraint exclusion. Ashutosh Bapat, reviewed by Amit Langote and with quite a few changes, mostly cosmetic, by me. Additional review and testing of this patch series by Antonin Houska, Amit Khandekar, Rafia Sabih, Rajkumar Raghuwanshi, Thomas Munro, and Dilip Kumar. Discussion: http://postgr.es/m/CAFjFpRfneFG3H+F6BaiXemMrKF+FY-POpx3Ocy+RiH3yBmXSNw@mail.gmail.com
-
- 20 Sep, 2017 13 commits
-
-
Tom Lane authored
pg_newlocale_from_collation() used malloc() and strdup() directly, which is generally not per backend coding style, and it didn't bother to check for failure results, but would just SIGSEGV instead. Also, if one of the numerous error checks in the middle of the function failed, the already-allocated memory would be leaked permanently. Admittedly, it's not a lot of memory, but it could build up if this function were called repeatedly for a bad collation. The first two problems are easily cured by palloc'ing in TopMemoryContext instead of calling libc directly. We can fairly easily dodge the leakage problem for the struct pg_locale_struct by filling in a temporary variable and allocating permanent storage only once we reach the bottom of the function. It's harder to get rid of the potential leakage for ICU's copy of the collcollate string, but at least that's only allocated after most of the error checks; so live with that aspect. Back-patch to v10 where this code came in, with one or another of the ICU patches.
-
Tom Lane authored
005_encoding.pl neglected to wait for the subscriber's initial synchronization to happen. While we have not seen this fail in the buildfarm, it's pretty easy to demonstrate there's an issue by hacking logicalrep_worker_launch() to fail most of the time. Michael Paquier Discussion: https://postgr.es/m/27032.1505749806@sss.pgh.pa.us
-
Tom Lane authored
GRANT, REVOKE, and some allied commands allow the noise word GROUP before a role name (cf. grantee production in gram.y). This option does not exist elsewhere, but it had nonetheless snuck into the documentation for ALTER ROLE, ALTER USER, and CREATE SCHEMA. Seems to be a copy-and-pasteo in commit 31eae602, which did expand the syntax choices here, but not in that way. Back-patch to 9.5 where that came in. Discussion: https://postgr.es/m/20170916123750.8885.66941@wrigleys.postgresql.org
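For reference, a hedged sketch of where the noise word is and is not accepted (role and table names are illustrative):

```psql
-- GROUP is accepted (and ignored) in the grantee list of GRANT and REVOKE
GRANT SELECT ON TABLE accounts TO GROUP analysts;
REVOKE SELECT ON TABLE accounts FROM GROUP analysts;
-- ALTER ROLE, ALTER USER and CREATE SCHEMA take a plain role name instead
ALTER ROLE analysts SET search_path = public;
```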
-
Robert Haas authored
Commit 0a480502 broke it. Report by Andreas Seltenreich. Fix by Ashutosh Bapat. Discussion: http://postgr.es/m/874ls2vrnx.fsf@ansel.ydns.eu
-
Robert Haas authored
Thomas Munro Discussion: http://postgr.es/m/CAEepm=2j-HAgnBUrAazwS0ry7Z_ihk+d7g+Ye3u99+6WbiGt_Q@mail.gmail.com
-
Bruce Momjian authored
This step was erroneously removed four days ago by me. Reported-by: Magnus via IM Backpatch-through: 9.5
-
Peter Eisentraut authored
Addresses compiler warnings from gcc-7's -Wformat-truncation (enabled via -Wall).
-
Peter Eisentraut authored
Remove gratuitous differences in the process names shown in pg_stat_activity.backend_type and the ps output. Reviewed-by: Takayuki Tsunakawa <tsunakawa.takay@jp.fujitsu.com>
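For context, a simple illustrative query showing where these names surface on the SQL side:

```psql
-- backend_type now uses the same spelling as the corresponding ps title
SELECT pid, backend_type, state
FROM pg_stat_activity
ORDER BY backend_type;
```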
-
Magnus Hagander authored
Rsync 3.0.0 (released in 2008) changed the default way of enumerating changes in a way that makes the hardlink sync mode less likely to work. Since the whole point of the documented procedure is for the hardlinks to work, change our docs to suggest using the backwards-compatibility switch.
-
Andres Freund authored
For performance reasons a larger segment size than the default 16MB can be useful. A larger segment size has two main benefits: first, in setups using archiving, it makes it easier to write scripts that can keep up with higher volumes of WAL; second, the WAL has to be written and synced to disk less frequently. At the same time, large segment sizes are disadvantageous for smaller databases. So far the segment size had to be configured at compile time, often making it unrealistic to choose one that fits a particular load. Therefore change it to an initdb-time setting. This includes a breaking change to the xlogreader.h API, which now requires the current segment size to be configured. For that and similar reasons a number of binaries had to be taught how to recognize the current segment size. Author: Beena Emerson, editorialized by Andres Freund Reviewed-By: Andres Freund, David Steele, Kuntal Ghosh, Michael Paquier, Peter Eisentraut, Robert Haas, Tushar Ahuja Discussion: https://postgr.es/m/CAOG9ApEAcQ--1ieKbhFzXSQPw_YLmepaa4hNdnY5+ZULpt81Mw@mail.gmail.com
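The segment size chosen at cluster creation time can be inspected from SQL afterwards, for example:

```psql
-- wal_segment_size is a read-only preset parameter reporting the size
-- the cluster was initialized with
SHOW wal_segment_size;
SELECT name, setting, unit FROM pg_settings WHERE name = 'wal_segment_size';
```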
-
Andres Freund authored
As it turns out, we can't rely on the script's monitoring session being terminated with a proper error by the server, because the session might be terminated while it is already trying to send data. Also improve the robustness and error-reporting facilities of the test, developed while debugging this issue. Discussion: https://postgr.es/m/20170920020038.kllxgilo7xzwmtto@alap3.anarazel.de
-
Tom Lane authored
The preceding patch allowed us to remove useless GiST support functions. This patch actually does that for all the no-op cases in the core GiST code. This buys us whatever performance gain is to be had, and more importantly exercises the preceding patch. There remain no-op functions in the contrib GiST opclasses, but those will take more work to remove. Discussion: https://postgr.es/m/CAJEAwVELVx9gYscpE=Be6iJxvdW5unZ_LkcAaVNSeOwvdwtD=A@mail.gmail.com
-
Tom Lane authored
There are common use-cases in which the compress and/or decompress functions can be omitted, with the result being that we make no data transformation when storing or retrieving index values. Previously, you had to provide a no-op function anyway, but this patch allows such opclass support functions to be omitted. Furthermore, if the compress function is omitted, then the core code knows that the stored representation is the same as the original data. This means we can allow index-only scans without requiring a fetch function to be provided either. Previously you had to provide a no-op fetch function if you wanted IOS to work. This reportedly provides a small performance benefit in such cases, but IMO the real reason for doing it is just to reduce the amount of useless boilerplate code that has to be written for GiST opclasses. Andrey Borodin, reviewed by Dmitriy Sarafannikov Discussion: https://postgr.es/m/CAJEAwVELVx9gYscpE=Be6iJxvdW5unZ_LkcAaVNSeOwvdwtD=A@mail.gmail.com
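A hedged sketch of what this allows in an opclass definition; every function and operator name below is hypothetical, the point being that support functions 3 (compress), 4 (decompress) and 9 (fetch) can simply be left out:

```psql
-- hypothetical GiST opclass with no compress, decompress or fetch functions:
-- the stored representation equals the indexed type and index-only scans
-- still work without a no-op fetch function
CREATE OPERATOR CLASS my_gist_ops
    FOR TYPE point USING gist AS
        OPERATOR 1 <<,
        FUNCTION 1 my_consistent(internal, point, smallint, oid, internal),
        FUNCTION 2 my_union(internal, internal),
        FUNCTION 5 my_penalty(internal, internal, internal),
        FUNCTION 6 my_picksplit(internal, internal),
        FUNCTION 7 my_same(point, point, internal);
```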
-
- 19 Sep, 2017 14 commits
-
-
Andres Freund authored
Reported-By: Robert Haas Discussion: https://postgr.es/m/CA+Tgmoa+YBvWgFST2NVoeXjVSohEpK=vqnVCsoCkhTVVxfLcVQ@mail.gmail.com
-
Peter Eisentraut authored
The plan is to convert the current pg_upgrade test to the TAP framework. This commit just puts a basic TAP test in place so that we can see how the build farm behaves, since the build farm client has some special knowledge of the pg_upgrade tests. Author: Michael Paquier <michael.paquier@gmail.com>
-
Andres Freund authored
The use of strnlen rather than strlen was just paranoia. Instead of giving up on the paranoia, just implement the safeguard differently. And add a comment explaining why we're careful. Author: Andres Freund Discussion: https://postgr.es/m/E1duOkJ-0001Mc-U5@gemulon.postgresql.org
-
Andres Freund authored
Previously multi-byte-aware truncation was done on every pgstat_report_activity() call, proving to be a bottleneck for workloads with long query strings that execute quickly. Instead, move the truncation to the read side, which is commonly executed far less frequently. That's possible because all server encodings allow the length of a multi-byte character to be determined from its first byte. Rename PgBackendStatus.st_activity to st_activity_raw so that existing extension users of the field break visibly; their code has to be adjusted to use pgstat_clip_activity(). Author: Andres Freund Tested-By: Kuntal Ghosh Reviewed-By: Robert Haas, Tom Lane Discussion: https://postgr.es/m/20170912071948.pa7igbpkkkviecpz@alap3.anarazel.de
-
Andrew Dunstan authored
This reverts commit 890faaf1 which attempted unsuccessfully to deal with the problem, and instead just comments out these tests like other similar tests elsewhere in the script.
-
Tom Lane authored
Testing indicates this can save a third to a half of the runtime of the function. Pavel Stehule, reviewed by Alexander Kuzmenkov Discussion: https://postgr.es/m/CAFj8pRAT62pRgjoHbgTfJUc2uLmeQ4saUj+yVJAEZUiMwNCmdg@mail.gmail.com
-
Andrew Dunstan authored
Problem introduced with non-ASCII characters in commit f2464997 and discovered on various buildfarm animals.
-
Andres Freund authored
Add timeouts in case psql doesn't deliver the expected output, and try to cause the monitoring psql to be fully connected to a backend. This isn't necessarily everything needed, but at least the timeouts should reduce the pain for buildfarm owners. Author: Andres Freund Reported-By: Tom Lane, BF animals prairiedog and calliphoridae Discussion: https://postgr.es/m/E1du6ZT-00043I-91@gemulon.postgresql.org
-
Bruce Momjian authored
Reported-by: Zhou Digoal Discussion: https://postgr.es/m/20170912133722.25637.91@wrigleys.postgresql.org Backpatch-through: 10
-
Andrew Dunstan authored
This is similar to text_pattern_ops. Alexey Chernyshov, reviewed by Jacob Champion.
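As with text_pattern_ops, the point of such an opclass is to let left-anchored LIKE use a btree index under a non-C locale; a hedged example using the existing text opclass (table and column names illustrative):

```psql
CREATE TABLE customers (name text);
-- with a pattern_ops opclass the planner can use this index for
-- left-anchored LIKE even when the collation is not C
CREATE INDEX customers_name_pattern_idx ON customers (name text_pattern_ops);
SELECT * FROM customers WHERE name LIKE 'Smi%';
```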
-
Andres Freund authored
Previously, in the extended protocol, statement_timeout applied to all messages up to a Sync message. For clients that pipeline/batch query execution that's problematic. Instead, disable the timeout after each Execute message, and re-enable the timer, if necessary, in start_xact_command(). As that's done only for Execute and not for Parse / Bind, pipelining the latter two could still cause undesirable timeouts. But a survey of protocol implementations shows that all drivers issue Sync messages when preparing, and adding timeout rearming to both would be fairly expensive for the common parse / bind / execute sequence. Author: Tatsuo Ishii, editorialized by Andres Freund Reviewed-By: Takayuki Tsunakawa, Andres Freund Discussion: https://postgr.es/m/20170222.115044.1665674502985097185.t-ishii@sraoss.co.jp
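For reference, the setting involved is the ordinary statement_timeout GUC; a minimal illustration:

```psql
-- with this change, each Execute in a pipelined batch is timed individually
-- rather than one timer running from the first message until the Sync
SET statement_timeout = '2s';
```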
-
Andres Freund authored
A bugfix for commit 8c0d7baf. The code would have crashed if hashtable->size_log2 ever had the same value as hashtable->control->size_log2 by coincidence. Per Valgrind. Author: Thomas Munro Reported-By: Tomas Vondra Discussion: https://postgr.es/m/e72fb33c-4f31-f276-e972-263d9b59554d%402ndquadrant.com
-
Andres Freund authored
Given that I managed to break this... We probably should extend the tests to also cover other sub-processes dying, but that's something for later. Author: Andres Freund Discussion: https://postgr.es/m/20170917080752.rcmihzfmgbeuqjk2@alap3.anarazel.de
-
Andres Freund authored
The bug was caused by not re-reading the control file during crash recovery restarts, which led to an attempt to pfree() shared memory contents. The fix is to re-read the control file, which seems like a good idea anyway. It's unclear at the moment whether we want to keep the refactoring introduced in the commit referenced above, or come up with an alternative approach. But fixing the bug in the meantime seems like a good idea regardless. A followup commit will introduce regression test coverage for crash restarts. Reported-By: Tom Lane Discussion: https://postgr.es/m/14134.1505572349@sss.pgh.pa.us
-
- 18 Sep, 2017 7 commits
-
-
Tom Lane authored
Make the btree page-flags test macros (P_ISLEAF and friends) return clean boolean values, rather than values that might not fit in a bool. Use them in a few places that were randomly referencing the flag bits directly. In passing, change access/nbtree/'s only direct use of BUFFER_LOCK_SHARE to BT_READ. (Some think we should go the other way, but as long as we have BT_READ/BT_WRITE, let's use them consistently.) Masahiko Sawada, reviewed by Doug Doole Discussion: https://postgr.es/m/CAD21AoBmWPeN=WBB5Jvyz_Nt3rmW1ebUyAnk3ZbJP3RMXALJog@mail.gmail.com
-
Tom Lane authored
Extensions with custom plan nodes might like to use these in their EXPLAIN output. Hadi Moshayedi Discussion: https://postgr.es/m/CA+_kT_dU-rHCN0u6pjA6bN5CZniMfD=-wVqPY4QLrKUY_uJq5w@mail.gmail.com
-
Tom Lane authored
By project convention, these names should include "P" when dealing with a pointer type; that is, if the result of a GETARG macro is of type FOO *, it should be called PG_GETARG_FOO_P not just PG_GETARG_FOO. Some newer types such as JSONB and ranges had not followed the convention, and a number of contrib modules hadn't gotten that memo either. Rename the offending macros to improve consistency. In passing, fix a few places that thought PG_DETOAST_DATUM() returns a Datum; it does not, it returns "struct varlena *". Applying DatumGetPointer to that happens not to cause any bad effects today, but it's formally wrong. Also, adjust an ltree macro that was designed without any thought for what pgindent would do with it. This is all cosmetic and shouldn't have any impact on generated code. Mark Dilger, some further tweaks by me Discussion: https://postgr.es/m/EA5676F4-766F-4F38-8348-ECC7DB427C6A@gmail.com
-
Tom Lane authored
If we failed to get a background worker slot, the code just walked away from the logicalrep-worker slot it already had, leaving that looking like the worker is still starting up. This led to an indefinite hang in subscription startup, as reported by Thomas Munro. We must release the slot on failure. Also fix a thinko: we must capture the worker slot's generation before releasing LogicalRepWorkerLock the first time, else testing to see if it's changed is pretty meaningless. BTW, the CHECK_FOR_INTERRUPTS() in WaitForReplicationWorkerAttach is a ticking time bomb, even without considering the possibility of elog(ERROR) in one of the other functions it calls. Really, this entire business needs a redesign with some actual thought about error recovery. But for now I'm just band-aiding the case observed in testing. Back-patch to v10 where this code was added. Discussion: https://postgr.es/m/CAEepm=2bP3TBMFBArP6o20AZaRduWjMnjCjt22hSdnA-EvrtCw@mail.gmail.com
-
Peter Eisentraut authored
-
Peter Eisentraut authored
-
Peter Eisentraut authored
When ALTER SUBSCRIPTION DISABLE is run in the same transaction before DROP SUBSCRIPTION, the latter will hang because workers will still be running, not having seen the DISABLE committed, and DROP SUBSCRIPTION will wait until the workers have vacated the replication origin slots. Previously, DROP SUBSCRIPTION killed the logical replication workers immediately only if it was going to drop the replication slot, otherwise it scheduled the worker killing for the end of the transaction, as a result of 7e174fa7. This, however, causes the present problem. To fix, kill the workers immediately in all cases. This covers all cases: A subscription that doesn't have a replication slot must be disabled. It was either disabled in the same transaction, or it was already disabled before the current transaction, but then there shouldn't be any workers left and this won't make a difference. Reported-by: Arseny Sher <a.sher@postgrespro.ru> Discussion: https://www.postgresql.org/message-id/flat/87mv6av84w.fsf%40ars-thinkpad
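A hedged sketch of the sequence that previously hung (subscription name illustrative; setting slot_name to NONE is what makes DROP SUBSCRIPTION legal inside a transaction block):

```psql
BEGIN;
ALTER SUBSCRIPTION mysub DISABLE;
ALTER SUBSCRIPTION mysub SET (slot_name = NONE);
-- the workers had not yet seen the DISABLE committed, so this used to wait
-- for them indefinitely; they are now killed immediately
DROP SUBSCRIPTION mysub;
COMMIT;
```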
-
- 17 Sep, 2017 2 commits
-
-
Tom Lane authored
Add item about number of times statement-level triggers will be fired. Rearrange the compatibility items into (what seems to me) a less random ordering.
-
Tom Lane authored
This lets it do the right thing for, eg, varchar columns. Back-patch to 9.5 where this logic appeared. David Rowley, per report from Kim Rose Carlsen Discussion: https://postgr.es/m/VI1PR05MB17091F9A9876528055D6A827C76D0@VI1PR05MB1709.eurprd05.prod.outlook.com
-