- 19 Jul, 2021 1 commit
Amit Kapila authored
We don't allow creating a replication slot with an empty name ('') via the SQL API pg_create_logical_replication_slot(), but slot_name could still be set to '' via the ALTER SUBSCRIPTION command. That left the apply worker repeatedly trying to stream data via slot_name '', a slot the user is not even allowed to create. Author: Japin Li Reviewed-By: Ranier Vilela, Amit Kapila Backpatch-through: 10, where it was introduced Discussion: https://postgr.es/m/MEYP282MB1669CBD98E721C77CA696499B61A9@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
- 28 Jun, 2021 1 commit
Michael Paquier authored
copy_data is not a supported option for this sub-command of ALTER SUBSCRIPTION, so the variable tracking it was left uninitialized after parsing the option set in DefElems, and a subsequent refresh could then read the uninitialized value. Author: Ranier Vilela Reviewed-by: Peter Smith Discussion: https://postgr.es/m/CAEudQAp5P8nr=ze2Gv=BMj=DJFZnrvendZCZcC-fos3QiDe2sg@mail.gmail.com
- 25 Jun, 2021 1 commit
Peter Eisentraut authored
ALTER SUBSCRIPTION DROP PUBLICATION does not actually support the copy_data option, so remove it from tab completion. Also, reword the error message that is thrown when all of a subscription's publications are specified to be dropped, and make a few doc and cosmetic adjustments.

Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddy@enterprisedb.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CALDaNm21RwsDzs4xj14ApteAF7auyyomHNnp+NEL-sH8m-jMvQ@mail.gmail.com
- 16 Jun, 2021 1 commit
Tom Lane authored
I started out with the goal of reporting ERRCODE_CONNECTION_FAILURE when walrcv_connect() fails, but as I looked around I realized that whoever wrote this code was of the opinion that errcodes are purely optional. That's not my understanding of our project policy. Hence, make sure that an errcode is provided in each ereport that (a) is ERROR or higher level and (b) isn't arguably an internal logic error. Also fix some very dubious existing errcode assignments. While this is not per policy, it's also largely cosmetic, since few of these cases could get reported to applications. So I don't feel a need to back-patch. Discussion: https://postgr.es/m/2189704.1623512522@sss.pgh.pa.us
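For illustration, a minimal C sketch of the policy being applied here -- an explicit errcode on a connection failure instead of falling back to the internal-error default. The wrapper function, variable names, and message wording are illustrative, not taken from the actual patch:

    #include "postgres.h"

    #include "replication/walreceiver.h"

    /* Hypothetical check after a walrcv_connect() attempt. */
    static void
    check_publisher_connection(WalReceiverConn *wrconn, const char *err)
    {
        if (wrconn == NULL)
            ereport(ERROR,
                    (errcode(ERRCODE_CONNECTION_FAILURE),
                     errmsg("could not connect to the publisher: %s", err)));
    }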
- 12 May, 2021 1 commit
Tom Lane authored
Also "make reformat-dat-files". The only change worthy of note is that pgindent messed up the formatting of launcher.c's struct LogicalRepWorkerId, which led me to notice that that struct wasn't used at all anymore, so I just took it out.
- 07 May, 2021 1 commit
Alvaro Herrera authored
This patch replaces use of the global "wrconn" variable in AlterSubscription_refresh with a local variable of the same name, making it consistent with other functions in subscriptioncmds.c (e.g. DropSubscription). The global wrconn is only meant to be used by logical apply/tablesync workers. Abusing it this way is known to cause trouble if an apply worker manages to do a subscription refresh, such as reported by Jeremy Finzel and diagnosed by Andres Freund back in November 2020, at https://www.postgresql.org/message-id/20201111215820.qihhrz7fayu6myfi@alap3.anarazel.de

Backpatch to 10. In branch master, also move the connection establishment to occur outside the PG_TRY block; this way we can remove a test for NULL in PG_FINALLY, and it also makes the code more consistent with similar code in the same file.

Author: Peter Smith <peter.b.smith@fujitsu.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/CAHut+Pu7Jv9L2BOEx_Z0UtJxfDevQSAUW2mJqWU+CtmDrEZVAg@mail.gmail.com
- 06 Apr, 2021 1 commit
Peter Eisentraut authored
At present, if we want to update the publications in a subscription, we can use SET PUBLICATION. However, it requires supplying all the existing publications along with the new ones, which is inconvenient when we only want to add publications. The new syntax supplies just the new publications. When refresh is true, it only refreshes the new publications. Author: Japin Li <japinli@hotmail.com> Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/MEYP282MB166939D0D6C480B7FBE7EFFBB6BC0@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
- 31 Mar, 2021 1 commit
Tom Lane authored
Put the remote end's error message into the primary error string, instead of relegating it to errdetail(). Although this could end up being awkward if the remote sends us a really long error message, it seems more in keeping with our message style guidelines, and more helpful in situations where the errdetail could get dropped. Peter Smith Discussion: https://postgr.es/m/CAHut+Ps-Qv2yQceCwobQDP0aJOkfDzRFrOaR6+2Op2K=WHGeWg@mail.gmail.com
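A minimal sketch of the adopted message style, with the remote server's error text in the primary message rather than in errdetail(); the surrounding function, the slot-drop wording, and the variable names are assumptions for illustration only:

    #include "postgres.h"

    #include "replication/walreceiver.h"

    /* Hypothetical reporting helper; "res" would come from walrcv_exec(). */
    static void
    report_remote_failure(const char *slotname, WalRcvExecResult *res)
    {
        ereport(ERROR,
                (errcode(ERRCODE_CONNECTION_FAILURE),
                 errmsg("could not drop replication slot \"%s\" on publisher: %s",
                        slotname, res->err)));
    }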
- 17 Feb, 2021 1 commit
Peter Eisentraut authored
An inconsistent set of debug-level messages was not using errmsg_internal(), thus uselessly exposing the messages to translation work. Fix those.
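As a sketch of the pattern, a debug-level message that is not meant for end users goes through errmsg_internal() so it never reaches the translation catalogs (the message text and helper below are made up):

    #include "postgres.h"

    /* Hypothetical developer-only progress message. */
    static void
    report_worker_progress(const char *relname)
    {
        ereport(DEBUG1,
                (errmsg_internal("table synchronization for \"%s\" still in progress",
                                 relname)));
    }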
- 15 Feb, 2021 1 commit
Amit Kapila authored
Author: Amit Kapila Reviewed-by: Tom Lane Discussion: https://postgr.es/m/1610789.1613170207@sss.pgh.pa.us
- 12 Feb, 2021 1 commit
Amit Kapila authored
For the initial table data synchronization in logical replication, we use a single transaction to copy the entire table and then synchronize the position in the stream with the main apply worker. There are multiple downsides of this approach: (a) We have to perform the entire copy operation again if there is any error (network breakdown, error in the database operation, etc.) while we synchronize the WAL position between tablesync worker and apply worker; this will be onerous especially for large copies, (b) Using a single transaction in the synchronization phase (where we can receive WAL from multiple transactions) will have the risk of exceeding the CID limit, (c) The slot will hold the WAL till the entire sync is complete because we never commit till the end. This patch solves all the above downsides by allowing multiple transactions during the tablesync phase. The initial copy is done in a single transaction and after that, we commit each transaction as we receive it. To allow recovery after any error or crash, we use a permanent slot and origin to track the progress. The slot and origin are removed once we finish the synchronization of the table. We also remove the slot and origin of tablesync workers if the user performs DROP SUBSCRIPTION ... or ALTER SUBSCRIPTION ... REFRESH while some of the table syncs are still not finished. The commands ALTER SUBSCRIPTION ... REFRESH PUBLICATION and ALTER SUBSCRIPTION ... SET PUBLICATION ... with the refresh option as true cannot be executed inside a transaction block because they can now drop the slots, for which we have no provision to roll back. This also opens up the path for logical replication of 2PC transactions on the subscriber side. Previously, we couldn't do that because of the requirement of maintaining a single transaction in tablesync workers. Bump catalog version due to change of state in the catalog (pg_subscription_rel). Author: Peter Smith, Amit Kapila, and Takamichi Osumi Reviewed-by: Ajin Cherian, Petr Jelinek, Hou Zhijie and Amit Kapila Discussion: https://postgr.es/m/CAA4eK1KHJxaZS-fod-0fey=0tq3=Gkn4ho=8N4-5HWiCfu0H1A@mail.gmail.com
- 10 Feb, 2021 1 commit
Amit Kapila authored
Currently, we get the origin id from the name and then drop the origin by taking ExclusiveLock on ReplicationOriginRelationId. So, two concurrent sessions can get the id from the name at the same time and then, when they try to drop the origin, one of the sessions will get either "tuple concurrently deleted" or "cache lookup failed for replication origin ..". To prevent this race condition we do the entire operation under lock. This obviates the need for the replorigin_drop() API and we have removed it, so if any extension authors are using it they need to use replorigin_drop_by_name() instead. See its usage in pg_replication_origin_drop(). Author: Peter Smith Reviewed-by: Amit Kapila, Euler Taveira, Petr Jelinek, and Alvaro Herrera Discussion: https://www.postgresql.org/message-id/CAHut%2BPuW8DWV5fskkMWWMqzt-x7RPcNQOtJQBp6SdwyRghCk7A%40mail.gmail.com
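For extension authors, a minimal sketch of the replacement call; the origin name is hypothetical, and the (name, missing_ok, nowait) argument order shown is my reading of the new API, so treat it as an assumption:

    #include "postgres.h"

    #include "replication/origin.h"

    static void
    drop_my_origin(void)
    {
        /* missing_ok = true, nowait = false: wait for concurrent users to let go */
        replorigin_drop_by_name("my_extension_origin", true, false);
    }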
- 16 Jan, 2021 1 commit
Amit Kapila authored
The result of TextDatumGetCString is already palloc'ed so we don't need to allocate memory for it again. We decide not to backpatch it as there doesn't seem to be any case where it can create a meaningful leak. Author: Zhijie Hou Reviewed-by: Daniel Gustafsson Discussion: https://postgr.es/m/229fed2eb8c54c71a96ccb99e516eb12@G08CNEXMBPEKD05.g08.fujitsu.local
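A minimal sketch of the point being made: TextDatumGetCString() already hands back a freshly palloc'd string, so wrapping it in another copy only leaks the first allocation (the helper name below is made up):

    #include "postgres.h"

    #include "utils/builtins.h"

    static char *
    option_as_cstring(Datum textdatum)
    {
        /* Redundant: return pstrdup(TextDatumGetCString(textdatum)); */
        return TextDatumGetCString(textdatum);
    }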
- 02 Jan, 2021 1 commit
Bruce Momjian authored
Backpatch-through: 9.5
- 03 Sep, 2020 1 commit
Amit Kapila authored
To add support for streaming of in-progress transactions into the built-in logical replication, we need to do three things:
* Extend the logical replication protocol to identify in-progress transactions, and allow adding additional bits of information (e.g. XID of subtransactions).
* Modify the output plugin (pgoutput) to implement the new stream API callbacks, by leveraging the extended replication protocol.
* Modify the replication apply worker to properly handle streamed in-progress transactions by spilling the data to disk and then replaying them on commit.
We must, however, explicitly disable streaming replication during replication slot creation, even if the plugin supports it. We don't need to replicate the changes accumulated during this phase, and moreover we don't have a replication connection open, so we have nowhere to send the data anyway. Author: Tomas Vondra, Dilip Kumar and Amit Kapila Reviewed-by: Amit Kapila, Kuntal Ghosh and Ajin Cherian Tested-by: Neha Sharma, Mahendra Singh Thalor and Ajin Cherian Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
- 18 Jul, 2020 1 commit
Tom Lane authored
This patch adds a "binary" option to CREATE/ALTER SUBSCRIPTION. When that's set, the publisher will send data using the data type's typsend function if any, rather than typoutput. This is generally faster, if slightly less robust. As committed, we won't try to transfer user-defined array or composite types in binary, for fear that type OIDs won't match at the subscriber. This might be changed later, but it seems like fit material for a follow-on patch. Dave Cramer, reviewed by Daniel Gustafsson, Petr Jelinek, and others; adjusted some by me Discussion: https://postgr.es/m/CADK3HH+R3xMn=8t3Ct+uD+qJ1KD=Hbif5NFMJ+d5DkoCzp6Vgw@mail.gmail.com
- 06 May, 2020 1 commit
Alvaro Herrera authored
We were acquiring object locks then deleting objects one by one, instead of acquiring all object locks first, ignoring those that did not exist, and then deleting all objects together. The latter is the correct protocol to use, and that is what this commit changes the code to do. Failing to follow that protocol leads to "cache lookup failed for relation XYZ" error reports when DROP OWNED runs concurrently with other DDL -- for example, a session termination that removes some temp tables. Author: Álvaro Herrera Reported-by: Mithun Chicklore Yogendra (Mithun CY) Reviewed-by: Ahsan Hadi, Tom Lane Discussion: https://postgr.es/m/CADq3xVZTbzK4ZLKq+dn_vB4QafXXbmMgDP3trY-GuLnib2Ai1w@mail.gmail.com
- 10 Mar, 2020 1 commit
Peter Eisentraut authored
The need for this was removed by 8b9e9644. A number of files now need to include utils/acl.h or parser/parse_node.h explicitly where they previously got it indirectly somehow. Since parser/parse_node.h already includes nodes/parsenodes.h, the latter is then removed where the former was added. Also, remove nodes/pg_list.h from objectaddress.h, since that's included via nodes/parsenodes.h.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/7601e258-26b2-8481-36d0-dc9dca6f28f1%402ndquadrant.com
- 04 Mar, 2020 1 commit
Tom Lane authored
Our usual practice for "poor man's enum" catalog columns is to define macros for the possible values and use those, not literal constants, in C code. But for some reason lost in the mists of time, this was never done for typalign/attalign or typstorage/attstorage. It's never too late to make it better though, so let's do that. The reason I got interested in this right now is the need to duplicate some uses of the TYPSTORAGE constants in an upcoming ALTER TYPE patch. But in general, this sort of change aids greppability and readability, so it's a good idea even without any specific motivation. I may have missed a few places that could be converted, and it's even more likely that pending patches will re-introduce some hard-coded references. But that's not fatal --- there's no expectation that we'd actually change any of these values. We can clean up stragglers over time. Discussion: https://postgr.es/m/16457.1583189537@sss.pgh.pa.us
- 30 Jan, 2020 1 commit
Alvaro Herrera authored
Cosmetic cleanup, not worth backpatching. Discussion: https://postgr.es/m/20200129200401.GA6303@alvherre.pgsql Reviewed-by: Tom Lane, Michael Paquier
- 11 Jan, 2020 1 commit
Peter Eisentraut authored
Some callers are not using it, so it's wasteful to have to specify it.

Reviewed-by: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/CA+fd4k4BcYrYucNfTnK-CQX3+jsG+PRPEhHAUSo-W4P0Lec57A@mail.gmail.com
- 01 Jan, 2020 1 commit
Bruce Momjian authored
Backpatch-through: update all files in master, backpatch legal files through 9.4
- 12 Nov, 2019 1 commit
Amit Kapila authored
Similar to commits 7e735035 and dddf4cdc, this commit makes the order of header file inclusion consistent for backend modules. In passing, remove a couple of duplicate inclusions. Author: Vignesh C Reviewed-by: Kuntal Ghosh and Amit Kapila Discussion: https://postgr.es/m/CALDaNm2Sznv8RR6Ex-iJO6xAdsxgWhCoETkaYX=+9DW3q0QCfA@mail.gmail.com
- 01 Nov, 2019 1 commit
Peter Eisentraut authored
This gives an alternative way of catching exceptions, for the common case where the cleanup code is the same in the error and non-error cases. So instead of

    PG_TRY();
    {
        ... code that might throw ereport(ERROR) ...
    }
    PG_CATCH();
    {
        cleanup();
        PG_RE_THROW();
    }
    PG_END_TRY();
    cleanup();

one can write

    PG_TRY();
    {
        ... code that might throw ereport(ERROR) ...
    }
    PG_FINALLY();
    {
        cleanup();
    }
    PG_END_TRY();

Discussion: https://www.postgresql.org/message-id/flat/95a822c3-728b-af0e-d7e5-71890507ae0c%402ndquadrant.com
- 29 Jun, 2019 1 commit
Tom Lane authored
In commit 18555b13 we tentatively established a rule that regression tests should use names containing "regression" for databases, and names starting with "regress_" for all other globally-visible object names, so as to circumscribe the side-effects that "make installcheck" could have on an existing installation. This commit adds a simple enforcement mechanism for that rule: if the code is compiled with ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS defined, it will emit a warning (not an error) whenever a database, role, tablespace, subscription, or replication origin name is created that doesn't obey the rule. Running one or more buildfarm members with that symbol defined should be enough to catch new violations, at least in the regular regression tests. Most TAP tests wouldn't notice such warnings, but that's actually fine because TAP tests don't execute against an existing server anyway. Since it's already the case that running src/test/modules/ tests in installcheck mode is deprecated, we can use that as a home for tests that seem unsafe to run against an existing server, such as tests that might have side-effects on existing roles. Document that (though this commit doesn't in itself make it any less safe than before). Update regress.sgml to define these restrictions more clearly, and to clean up assorted lack-of-up-to-date-ness in its descriptions of the available regression tests. Discussion: https://postgr.es/m/16638.1468620817@sss.pgh.pa.us
- 28 May, 2019 1 commit
Alvaro Herrera authored
I introduced the typo in source code in the course of 75445c15. Repair.
- 16 May, 2019 1 commit
Alvaro Herrera authored
Discussion: https://postgr.es/m/20190515183005.GA26486@alvherre.pgsql
- 21 Jan, 2019 2 commits
Andres Freund authored
Author: Andres Freund Discussion: https://postgr.es/m/20190111000539.xbv7s6w7ilcvm7dp@alap3.anarazel.de
Andres Freund authored
A lot of files only included heapam.h for relation_open, heap_open, etc.; replace the heapam.h include in those files with the narrower header. Author: Andres Freund Discussion: https://postgr.es/m/20190111000539.xbv7s6w7ilcvm7dp@alap3.anarazel.de
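A sketch of what a slimmed-down caller looks like, assuming it only needs to open and close a relation; table_open()/table_close() come from the narrower access/table.h rather than all of heapam.h (the helper and lock level here are illustrative):

    #include "postgres.h"

    #include "access/table.h"
    #include "storage/lockdefs.h"
    #include "utils/rel.h"

    static void
    touch_relation(Oid relid)
    {
        Relation    rel = table_open(relid, AccessShareLock);

        /* ... look at rel->rd_rel, set up a scan, etc. ... */

        table_close(rel, AccessShareLock);
    }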
- 02 Jan, 2019 1 commit
Bruce Momjian authored
Backpatch-through: certain files through 9.4
- 21 Nov, 2018 1 commit
Andres Freund authored
Previously tables declared WITH OIDS, including a significant fraction of the catalog tables, stored the oid column not as a normal column, but as part of the tuple header. This special column was not shown by default, which was somewhat odd, as it's often (consider e.g. pg_class.oid) one of the more important parts of a row. Neither pg_dump nor COPY included the contents of the oid column by default. The fact that the oid column was not an ordinary column necessitated a significant amount of special case code to support oid columns. That already was painful for the existing code, but upcoming work aiming to make table storage pluggable would have required expanding and duplicating that "specialness" significantly. WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0). Remove it. Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load a binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH OIDS; they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false) for CREATE TABLE) is still supported. While that requires a bit of support code, it seems unnecessary to break applications / dumps that do not use oids, and are explicit about not using them. The biggest user of WITH OID columns was postgres' catalog. This commit changes all 'magic' oid columns to be columns that are normally declared and stored. To reduce unnecessary query breakage all the newly added columns are still named 'oid', even if a table's column naming scheme would indicate 'reloid' or such. This obviously requires adapting a lot of code, mostly replacing oid access via HeapTupleGetOid() with access to the underlying Form_pg_*->oid column. The bootstrap process now assigns oids for all oid columns in genbki.pl that do not have an explicit value (starting at the largest oid previously used); only oids assigned later will be above FirstBootstrapObjectId. As the oid column now is a normal column, the special bootstrap syntax for oids has been removed. Oids are not automatically assigned during insertion anymore; all backend code explicitly assigns oids with GetNewOidWithIndex(). For the rare case that insertions into the catalog via SQL are called for, the new pg_nextoid() function can be used (which only works on catalog tables). The fact that oid columns on system tables are now normal columns means that they will be included in the set of columns expanded by * (i.e. SELECT * FROM pg_class will now include the table's oid, previously it did not). It'd not technically be hard to hide the oid column by default, but that'd mean confusing behavior would either have to be carried forward forever, or it'd cause breakage down the line. While it's not unlikely that further adjustments are needed, the scope/invasiveness of the patch makes it worthwhile to merge this now. It's painful to maintain externally, too complicated to commit after the code freeze, and a dependency of a number of other patches. Catversion bump, for obvious reasons. Author: Andres Freund, with contributions by John Naylor Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
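A minimal sketch of the access-pattern change described above, assuming "tuple" is a valid pg_class heap tuple; the helper itself is illustrative:

    #include "postgres.h"

    #include "access/htup_details.h"
    #include "catalog/pg_class.h"

    static Oid
    relation_oid_from_tuple(HeapTuple tuple)
    {
        /* Before: return HeapTupleGetOid(tuple); */
        Form_pg_class classform = (Form_pg_class) GETSTRUCT(tuple);

        return classform->oid;
    }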
- 16 Nov, 2018 1 commit
Andres Freund authored
Upcoming work intends to allow pluggable ways to introduce new ways of storing table data. Accessing those table access methods from the executor requires TupleTableSlots to carry tuples in the native format of such storage methods; otherwise there'll be a significant conversion overhead. Different access methods will require different data to store tuples efficiently (just like virtual, minimal, heap already require fields in TupleTableSlot). To allow that without requiring additional pointer indirections, we want to have different structs (embedding TupleTableSlot) for different types of slots. Thus different types of slots are needed, which requires adapting creators of slots. The slot that most efficiently can represent a type of tuple in an executor node will often depend on the type of slot a child node uses. Therefore we need to track the type of slot returned by nodes, so parent nodes can create slots based on that. Relatedly, JIT compilation of tuple deforming needs to know which type of slot a certain expression refers to, so it can create an appropriate deforming function for the type of tuple in the slot. But not all nodes will only return one type of slot; e.g. an append node will potentially return different types of slots for each of its subplans. Therefore add a function that allows querying the type of a node's result slot, and whether it'll always be the same type (whether it's fixed). This can be queried using ExecGetResultSlotOps(). The scan, result, inner, outer type of slots are automatically inferred from ExecInitScanTupleSlot(), ExecInitResultSlot(), left/right subtrees respectively. If that's not correct for a node, that can be overwritten using new fields in PlanState. This commit does not introduce the actually abstracted implementation of different kinds of TupleTableSlots; that will be left for a followup commit. The different types of slots introduced will, for now, still use the same backing implementation. While this already partially invalidates the big comment in tuptable.h, it seems to make more sense to update it later, when the different TupleTableSlot implementations actually exist. Author: Ashutosh Bapat and Andres Freund, with changes by Amit Khandekar Discussion: https://postgr.es/m/20181105210039.hh4vvi4vwoq5ba2q@alap3.anarazel.de
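A sketch of how a parent node might use the new query function, based on how the API looks in the current tree; the fallback to a virtual slot and the helper name are assumptions, not code from the patch:

    #include "postgres.h"

    #include "executor/executor.h"
    #include "executor/tuptable.h"

    static void
    init_parent_result_slot(PlanState *parent)
    {
        bool        isfixed;
        const TupleTableSlotOps *ops;

        /* Ask the outer child what kind of slot it will return. */
        ops = ExecGetResultSlotOps(outerPlanState(parent), &isfixed);

        /* If the child's slot type can vary at runtime, fall back to virtual. */
        if (!isfixed)
            ops = &TTSOpsVirtual;

        ExecInitResultTupleSlotTL(parent, ops);
    }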
- 06 Apr, 2018 1 commit
Peter Eisentraut authored
We don't actually need the insert-or-update logic, so it's clearer to have separate functions for the inserting and updating.

Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
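If memory serves, the split produced AddSubscriptionRelState() and UpdateSubscriptionRelState(); treat those names and the sketch below as an assumption rather than a quote from the patch:

    #include "postgres.h"

    #include "access/xlogdefs.h"
    #include "catalog/pg_subscription_rel.h"

    /* Hypothetical tablesync bookkeeping for one table of a subscription. */
    static void
    register_and_finish_sync(Oid subid, Oid relid, XLogRecPtr lsn)
    {
        /* Insert the initial state row... */
        AddSubscriptionRelState(subid, relid, SUBREL_STATE_INIT, InvalidXLogRecPtr);

        /* ...and later update it, instead of one insert-or-update call. */
        UpdateSubscriptionRelState(subid, relid, SUBREL_STATE_READY, lsn);
    }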
- 16 Mar, 2018 1 commit
Peter Eisentraut authored
We call this thing a "transaction block" everywhere except in a few functions, where it is mysteriously called a "transaction chain". In the SQL standard, a transaction chain is something different. So rename these functions to match the common terminology.

Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
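As an example of the renamed terminology in use, a sketch of a utility command refusing to run inside a transaction block; the command name is made up, and isTopLevel would come from ProcessUtility in real code:

    #include "postgres.h"

    #include "access/xact.h"

    static void
    my_utility_command(bool isTopLevel)
    {
        /* Post-rename spelling; the old name was PreventTransactionChain(). */
        PreventInTransactionBlock(isTopLevel, "MY COMMAND");

        /* ... perform work that could not be rolled back ... */
    }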
- 19 Jan, 2018 1 commit
Peter Eisentraut authored
AclObjectKind was basically just another enumeration for object types, and we already have a preferred one for that. It's only used in aclcheck_error. By using ObjectType instead, we can also give some more precise error messages, for example "index" instead of "relation".

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
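A small sketch of a call site under the new signature, assuming "subid" and "subname" describe a subscription the current user must own; the wrapper function is illustrative:

    #include "postgres.h"

    #include "miscadmin.h"
    #include "utils/acl.h"

    static void
    check_subscription_owner(Oid subid, const char *subname)
    {
        /* aclcheck_error() now takes an ObjectType instead of AclObjectKind. */
        if (!pg_subscription_ownercheck(subid, GetUserId()))
            aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_SUBSCRIPTION, subname);
    }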
- 03 Jan, 2018 1 commit
Bruce Momjian authored
Backpatch-through: certain files through 9.3
- 13 Dec, 2017 1 commit
Tom Lane authored
This patch makes a number of interrelated changes to reduce the overhead involved in creating/deleting memory contexts. The key ideas are: * Include the AllocSetContext header of an aset.c context in its first malloc request, rather than allocating it separately in TopMemoryContext. This means that we now always create an initial or "keeper" block in an aset, even if it never receives any allocation requests. * Create freelists in which we can save and recycle recently-destroyed asets (this idea is due to Robert Haas). * In the common case where the name of a context is a constant string, just store a pointer to it in the context header, rather than copying the string. The first change eliminates a palloc/pfree cycle per context, and also avoids bloat in TopMemoryContext, at the price that creating a context now involves a malloc/free cycle even if the context never receives any allocations. That would be a loser for some common usage patterns, but recycling short-lived contexts via the freelist eliminates that pain. Avoiding copying constant strings not only saves strlen() and strcpy() overhead, but is an essential part of the freelist optimization because it makes the context header size constant. Currently we make no attempt to use the freelist for contexts with non-constant names. (Perhaps someday we'll need to think harder about that, but in current usage, most contexts with custom names are long-lived anyway.) The freelist management in this initial commit is pretty simplistic, and we might want to refine it later --- but in common workloads that will never matter because the freelists will never get full anyway. To create a context with a non-constant name, one is now required to call AllocSetContextCreateExtended and specify the MEMCONTEXT_COPY_NAME option. AllocSetContextCreate becomes a wrapper macro, and it includes a test that will complain about non-string-literal context name parameters on gcc and similar compilers. An unfortunate side effect of making AllocSetContextCreate a macro is that one is now *required* to use the size parameter abstraction macros (ALLOCSET_DEFAULT_SIZES and friends) with it; the pre-9.6 habit of writing out individual size parameters no longer works unless you switch to AllocSetContextCreateExtended. Internally to the memory-context-related modules, the context creation APIs are simplified, removing the rather baroque original design whereby a context-type module called mcxt.c which then called back into the context-type module. That saved a bit of code duplication, but not much, and it prevented context-type modules from exercising control over the allocation of context headers. In passing, I converted the test-and-elog validation of aset size parameters into Asserts to save a few more cycles. The original thought was that callers might compute size parameters on the fly, but in practice nobody does that, so it's useless to expend cycles on checking those numbers in production builds. Also, mark the memory context method-pointer structs "const", just for cleanliness. Discussion: https://postgr.es/m/2264.1512870796@sss.pgh.pa.us
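A minimal sketch of the post-patch usage, assuming a constant context name so the plain AllocSetContextCreate() macro (which now requires the size-abstraction macros such as ALLOCSET_DEFAULT_SIZES) applies; the function and context name are made up:

    #include "postgres.h"

    #include "utils/memutils.h"

    static void
    do_transient_work(void)
    {
        MemoryContext tmpcxt;
        MemoryContext oldcxt;

        tmpcxt = AllocSetContextCreate(CurrentMemoryContext,
                                       "transient work context",
                                       ALLOCSET_DEFAULT_SIZES);
        oldcxt = MemoryContextSwitchTo(tmpcxt);

        /* ... allocations here live only as long as tmpcxt ... */

        MemoryContextSwitchTo(oldcxt);
        MemoryContextDelete(tmpcxt);    /* context may be recycled via the freelist */
    }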
- 23 Sep, 2017 1 commit
Tom Lane authored
Sigh.
- 18 Sep, 2017 1 commit
Peter Eisentraut authored
When ALTER SUBSCRIPTION DISABLE is run in the same transaction before DROP SUBSCRIPTION, the latter will hang because workers will still be running, not having seen the DISABLE committed, and DROP SUBSCRIPTION will wait until the workers have vacated the replication origin slots. Previously, DROP SUBSCRIPTION killed the logical replication workers immediately only if it was going to drop the replication slot, otherwise it scheduled the worker killing for the end of the transaction, as a result of 7e174fa7. This, however, causes the present problem. To fix, kill the workers immediately in all cases. This covers all cases: A subscription that doesn't have a replication slot must be disabled. It was either disabled in the same transaction, or it was already disabled before the current transaction, but then there shouldn't be any workers left and this won't make a difference.

Reported-by: Arseny Sher <a.sher@postgrespro.ru>
Discussion: https://www.postgresql.org/message-id/flat/87mv6av84w.fsf%40ars-thinkpad
- 01 Sep, 2017 1 commit
Alvaro Herrera authored
Commit 9915de6c changed the default behavior of DROP_REPLICATION_SLOT so that it would wait until any session holding the slot active would release it, instead of raising an error. But users are already depending on the original behavior, so revert to it by default and add a WAIT option to invoke the new behavior. Per complaint from Simone Gotti, in Discussion: https://postgr.es/m/CAEvsy6Wgdf90O6pUvg2wSVXL2omH5OPC-38OD4Zzgk-FXavj3Q@mail.gmail.com