Commit 5db6df0c authored by Andres Freund

tableam: Add tuple_{insert, delete, update, lock} and use.

This adds new, required table AM callbacks for insert/delete/update
and lock_tuple. To be able to reasonably use those, the EvalPlanQual
mechanism had to be adapted, moving more logic into the AM.

Previously both delete/update/lock call-sites and the EPQ mechanism had
to have awareness of the specific tuple format to be able to fetch the
latest version of a tuple. Obviously that needs to be abstracted
away. To do so, move the logic that finds the latest row version into
the AM. lock_tuple has a new flag argument,
TUPLE_LOCK_FLAG_FIND_LAST_VERSION, which forces it to lock the last
version rather than the current one.  It'd have been possible to do
so via a separate callback as well, but finding the last version
usually also necessitates locking the newest version, making it
sensible to combine the two. This replaces the previous use of
EvalPlanQualFetch().  Additionally HeapTupleUpdated, which previously
signaled either a concurrent update or delete, is now split into two,
to avoid callers needing AM specific knowledge to differentiate.
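
For illustration, here is a minimal sketch of the new locking
convention, modeled on the trigger.c and nodeLockRows.c changes below
(the surrounding variables rel, tid, slot and estate are assumed, not
part of this commit's API):

    TM_Result   res;
    TM_FailureData tmfd;
    int         lockflags = 0;

    /* in READ COMMITTED, also find and lock the newest tuple version */
    if (!IsolationUsesXactSnapshot())
        lockflags |= TUPLE_LOCK_FLAG_FIND_LAST_VERSION;

    res = table_lock_tuple(rel, tid, estate->es_snapshot, slot,
                           estate->es_output_cid, LockTupleExclusive,
                           LockWaitBlock, lockflags, &tmfd);
    switch (res)
    {
        case TM_Ok:
            /* tmfd.traversed says whether a newer version was locked */
            break;
        case TM_Updated:        /* concurrent update ... */
        case TM_Deleted:        /* ... now distinguishable from delete */
            /* e.g. raise a serialization failure under REPEATABLE READ */
            break;
        case TM_Invisible:
            elog(ERROR, "attempted to lock invisible tuple");
            break;
        default:
            elog(ERROR, "unexpected table_lock_tuple status: %u", res);
    }

When TM_Ok is returned with tmfd.traversed set, callers re-check the
plan quals against the newer version via EvalPlanQual(), which now
receives the already-locked tuple in a slot.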

The move of finding the latest row version into tuple_lock means that
encountering a row concurrently moved into another partition will now
raise an error about "tuple to be locked" rather than "tuple to be
updated/deleted" - which is accurate, as that always happens when
locking rows. While possibly slightly less helpful for users, it seems
like an acceptable trade-off.

As part of this commit HTSU_Result has been renamed to TM_Result, and
its members have been expanded to differentiate between updating and
deleting. HeapUpdateFailureData has been renamed to TM_FailureData.
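
For reference, the new status enum as defined in access/tableam.h looks
roughly like this (comments paraphrased here):

    typedef enum TM_Result
    {
        TM_Ok,              /* operation performed */
        TM_Invisible,       /* tuple not visible to the snapshot */
        TM_SelfModified,    /* modified by the current transaction */
        TM_Updated,         /* concurrently updated (was part of
                             * HeapTupleUpdated) */
        TM_Deleted,         /* concurrently deleted (was part of
                             * HeapTupleUpdated) */
        TM_BeingModified,   /* modification by another xact in progress */
        TM_WouldBlock       /* lock not acquirable without waiting */
    } TM_Result;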

The interface to speculative insertion is changed so nodeModifyTable.c
does not have to set the speculative token itself anymore. Instead
there's a version of tuple_insert, tuple_insert_speculative, that
performs the speculative insertion (without requiring a flag to signal
that fact), and the speculative insertion is then either made permanent
with table_complete_speculative(succeeded = true) or aborted with
table_complete_speculative(succeeded = false).
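
A rough sketch of the resulting flow in an executor-side caller (the
conflict probe is elided and 'conflicted' is a hypothetical placeholder;
see nodeModifyTable.c for the real ON CONFLICT logic):

    uint32      specToken;
    bool        conflicted = false;

    /* reserve a speculative token other backends can wait on */
    specToken = SpeculativeInsertionLockAcquire(GetCurrentTransactionId());
    table_insert_speculative(rel, slot, estate->es_output_cid,
                             0 /* options */ , NULL /* bistate */ ,
                             specToken);

    /* ... probe arbiter indexes, setting 'conflicted' on a duplicate ... */

    /* make the insertion permanent, or kill the inserted tuple */
    table_complete_speculative(rel, slot, specToken, !conflicted);
    SpeculativeInsertionLockRelease(GetCurrentTransactionId());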

Note that multi_insert is not yet routed through tableam, nor is
COPY. Changing multi_insert requires changes to copy.c that are large
enough that they are better done separately.

Similarly, CREATE TABLE AS and CREATE MATERIALIZED VIEW, though simpler
cases, will also only be adjusted in a later commit.

Author: Andres Freund and Haribabu Kommi
Discussion:
    https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
    https://postgr.es/m/20190313003903.nwvrxi7rw3ywhdel@alap3.anarazel.de
    https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
parent f778e537
@@ -146,7 +146,7 @@ pgrowlocks(PG_FUNCTION_ARGS)
 	/* scan the relation */
 	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
 	{
-		HTSU_Result htsu;
+		TM_Result	htsu;
 		TransactionId xmax;
 		uint16		infomask;
@@ -160,9 +160,9 @@ pgrowlocks(PG_FUNCTION_ARGS)
 		infomask = tuple->t_data->t_infomask;
 
 		/*
-		 * A tuple is locked if HTSU returns BeingUpdated.
+		 * A tuple is locked if HTSU returns BeingModified.
 		 */
-		if (htsu == HeapTupleBeingUpdated)
+		if (htsu == TM_BeingModified)
 		{
 			char	  **values;
......
@@ -1763,7 +1763,7 @@ toast_delete_datum(Relation rel, Datum value, bool is_speculative)
 		 * Have a chunk, delete it
 		 */
 		if (is_speculative)
-			heap_abort_speculative(toastrel, toasttup);
+			heap_abort_speculative(toastrel, &toasttup->t_self);
 		else
 			simple_heap_delete(toastrel, &toasttup->t_self);
 	}
......
@@ -176,6 +176,119 @@ table_beginscan_parallel(Relation relation, ParallelTableScanDesc parallel_scan)
 }
 
+
+/* ----------------------------------------------------------------------------
+ * Functions to make modifications a bit simpler.
+ * ----------------------------------------------------------------------------
+ */
+
+/*
+ * simple_table_insert - insert a tuple
+ *
+ * Currently, this routine differs from table_insert only in supplying a
+ * default command ID and not allowing access to the speedup options.
+ */
+void
+simple_table_insert(Relation rel, TupleTableSlot *slot)
+{
+	table_insert(rel, slot, GetCurrentCommandId(true), 0, NULL);
+}
+
+/*
+ * simple_table_delete - delete a tuple
+ *
+ * This routine may be used to delete a tuple when concurrent updates of
+ * the target tuple are not expected (for example, because we have a lock
+ * on the relation associated with the tuple).  Any failure is reported
+ * via ereport().
+ */
+void
+simple_table_delete(Relation rel, ItemPointer tid, Snapshot snapshot)
+{
+	TM_Result	result;
+	TM_FailureData tmfd;
+
+	result = table_delete(rel, tid,
+						  GetCurrentCommandId(true),
+						  snapshot, InvalidSnapshot,
+						  true /* wait for commit */ ,
+						  &tmfd, false /* changingPart */ );
+
+	switch (result)
+	{
+		case TM_SelfModified:
+			/* Tuple was already updated in current command? */
+			elog(ERROR, "tuple already updated by self");
+			break;
+
+		case TM_Ok:
+			/* done successfully */
+			break;
+
+		case TM_Updated:
+			elog(ERROR, "tuple concurrently updated");
+			break;
+
+		case TM_Deleted:
+			elog(ERROR, "tuple concurrently deleted");
+			break;
+
+		default:
+			elog(ERROR, "unrecognized table_delete status: %u", result);
+			break;
+	}
+}
+
+/*
+ * simple_table_update - replace a tuple
+ *
+ * This routine may be used to update a tuple when concurrent updates of
+ * the target tuple are not expected (for example, because we have a lock
+ * on the relation associated with the tuple).  Any failure is reported
+ * via ereport().
+ */
+void
+simple_table_update(Relation rel, ItemPointer otid,
+					TupleTableSlot *slot,
+					Snapshot snapshot,
+					bool *update_indexes)
+{
+	TM_Result	result;
+	TM_FailureData tmfd;
+	LockTupleMode lockmode;
+
+	result = table_update(rel, otid, slot,
+						  GetCurrentCommandId(true),
+						  snapshot, InvalidSnapshot,
+						  true /* wait for commit */ ,
+						  &tmfd, &lockmode, update_indexes);
+
+	switch (result)
+	{
+		case TM_SelfModified:
+			/* Tuple was already updated in current command? */
+			elog(ERROR, "tuple already updated by self");
+			break;
+
+		case TM_Ok:
+			/* done successfully */
+			break;
+
+		case TM_Updated:
+			elog(ERROR, "tuple concurrently updated");
+			break;
+
+		case TM_Deleted:
+			elog(ERROR, "tuple concurrently deleted");
+			break;
+
+		default:
+			elog(ERROR, "unrecognized table_update status: %u", result);
+			break;
+	}
+}
+
+
 /* ----------------------------------------------------------------------------
  * Helper functions to implement parallel scans for block oriented AMs.
  * ----------------------------------------------------------------------------
......
@@ -64,6 +64,19 @@ GetTableAmRoutine(Oid amhandler)
 	Assert(routine->tuple_satisfies_snapshot != NULL);
 
+	Assert(routine->tuple_insert != NULL);
+
+	/*
+	 * Could be made optional, but would require throwing error during
+	 * parse-analysis.
+	 */
+	Assert(routine->tuple_insert_speculative != NULL);
+	Assert(routine->tuple_complete_speculative != NULL);
+
+	Assert(routine->tuple_delete != NULL);
+	Assert(routine->tuple_update != NULL);
+	Assert(routine->tuple_lock != NULL);
+
 	return routine;
 }
......
@@ -3007,7 +3007,6 @@ CopyFrom(CopyState cstate)
 					/* And create index entries for it */
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
-															   &(tuple->t_self),
 															   estate,
 															   false,
 															   NULL,
@@ -3151,7 +3150,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 		cstate->cur_lineno = firstBufferedLineNo + i;
 		ExecStoreHeapTuple(bufferedTuples[i], myslot, false);
 		recheckIndexes =
-			ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+			ExecInsertIndexTuples(myslot,
								  estate, false, NULL, NIL);
 		ExecARInsertTriggers(estate, resultRelInfo,
							 myslot,
......
@@ -15,6 +15,7 @@
 #include "access/genam.h"
 #include "access/heapam.h"
+#include "access/tableam.h"
 #include "access/sysattr.h"
 #include "access/htup_details.h"
 #include "access/xact.h"
@@ -3285,19 +3286,12 @@ GetTupleForTrigger(EState *estate,
 				   TupleTableSlot **newSlot)
 {
 	Relation	relation = relinfo->ri_RelationDesc;
-	HeapTuple	tuple;
-	Buffer		buffer;
-	BufferHeapTupleTableSlot *boldslot;
-
-	Assert(TTS_IS_BUFFERTUPLE(oldslot));
-	ExecClearTuple(oldslot);
-	boldslot = (BufferHeapTupleTableSlot *) oldslot;
-	tuple = &boldslot->base.tupdata;
 
 	if (newSlot != NULL)
 	{
-		HTSU_Result test;
-		HeapUpdateFailureData hufd;
+		TM_Result	test;
+		TM_FailureData tmfd;
+		int			lockflags = 0;
 
 		*newSlot = NULL;
@@ -3307,15 +3301,17 @@ GetTupleForTrigger(EState *estate,
 		/*
 		 * lock tuple for update
 		 */
-ltrmark:;
-		tuple->t_self = *tid;
-		test = heap_lock_tuple(relation, tuple,
-							   estate->es_output_cid,
-							   lockmode, LockWaitBlock,
-							   false, &buffer, &hufd);
+		if (!IsolationUsesXactSnapshot())
+			lockflags |= TUPLE_LOCK_FLAG_FIND_LAST_VERSION;
+		test = table_lock_tuple(relation, tid, estate->es_snapshot, oldslot,
+								estate->es_output_cid,
+								lockmode, LockWaitBlock,
+								lockflags,
+								&tmfd);
 		switch (test)
 		{
-			case HeapTupleSelfUpdated:
+			case TM_SelfModified:
 
 				/*
 				 * The target tuple was already updated or deleted by the
@@ -3325,73 +3321,59 @@ ltrmark:;
 				 * enumerated in ExecUpdate and ExecDelete in
 				 * nodeModifyTable.c.
 				 */
-				if (hufd.cmax != estate->es_output_cid)
+				if (tmfd.cmax != estate->es_output_cid)
 					ereport(ERROR,
 							(errcode(ERRCODE_TRIGGERED_DATA_CHANGE_VIOLATION),
 							 errmsg("tuple to be updated was already modified by an operation triggered by the current command"),
 							 errhint("Consider using an AFTER trigger instead of a BEFORE trigger to propagate changes to other rows.")));
 
 				/* treat it as deleted; do not process */
-				ReleaseBuffer(buffer);
 				return false;
 
-			case HeapTupleMayBeUpdated:
-				ExecStorePinnedBufferHeapTuple(tuple, oldslot, buffer);
-				break;
-
-			case HeapTupleUpdated:
-				ReleaseBuffer(buffer);
-				if (IsolationUsesXactSnapshot())
-					ereport(ERROR,
-							(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
-							 errmsg("could not serialize access due to concurrent update")));
-				if (ItemPointerIndicatesMovedPartitions(&hufd.ctid))
-					ereport(ERROR,
-							(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
-							 errmsg("tuple to be locked was already moved to another partition due to concurrent update")));
-
-				if (!ItemPointerEquals(&hufd.ctid, &tuple->t_self))
+			case TM_Ok:
+				if (tmfd.traversed)
 				{
-					/* it was updated, so look at the updated version */
 					TupleTableSlot *epqslot;
 
 					epqslot = EvalPlanQual(estate,
 										   epqstate,
 										   relation,
 										   relinfo->ri_RangeTableIndex,
-										   lockmode,
-										   &hufd.ctid,
-										   hufd.xmax);
-					if (!TupIsNull(epqslot))
-					{
-						*tid = hufd.ctid;
-						*newSlot = epqslot;
-
-						/*
-						 * EvalPlanQual already locked the tuple, but we
-						 * re-call heap_lock_tuple anyway as an easy way of
-						 * re-fetching the correct tuple.  Speed is hardly a
-						 * criterion in this path anyhow.
-						 */
-						goto ltrmark;
-					}
+										   oldslot);
+
+					/*
+					 * If PlanQual failed for updated tuple - we must not
+					 * process this tuple!
+					 */
+					if (TupIsNull(epqslot))
+						return false;
+
+					*newSlot = epqslot;
 				}
+				break;
 
-				/*
-				 * if tuple was deleted or PlanQual failed for updated tuple -
-				 * we must not process this tuple!
-				 */
+			case TM_Updated:
+				if (IsolationUsesXactSnapshot())
+					ereport(ERROR,
+							(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
+							 errmsg("could not serialize access due to concurrent update")));
+				elog(ERROR, "unexpected table_lock_tuple status: %u", test);
+				break;
+
+			case TM_Deleted:
+				if (IsolationUsesXactSnapshot())
+					ereport(ERROR,
+							(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
+							 errmsg("could not serialize access due to concurrent delete")));
+				/* tuple was deleted */
 				return false;
 
-			case HeapTupleInvisible:
+			case TM_Invisible:
 				elog(ERROR, "attempted to lock invisible tuple");
 				break;
 
 			default:
-				ReleaseBuffer(buffer);
-				elog(ERROR, "unrecognized heap_lock_tuple status: %u", test);
+				elog(ERROR, "unrecognized table_lock_tuple status: %u", test);
 				return false;	/* keep compiler quiet */
 		}
 	}
@@ -3399,6 +3381,14 @@ ltrmark:;
 	{
 		Page		page;
 		ItemId		lp;
+		Buffer		buffer;
+		BufferHeapTupleTableSlot *boldslot;
+		HeapTuple	tuple;
+
+		Assert(TTS_IS_BUFFERTUPLE(oldslot));
+		ExecClearTuple(oldslot);
+		boldslot = (BufferHeapTupleTableSlot *) oldslot;
+		tuple = &boldslot->base.tupdata;
 
 		buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
@@ -4286,7 +4276,7 @@ AfterTriggerExecute(EState *estate,
 				LocTriggerData.tg_trigslot = ExecGetTriggerOldSlot(estate, relInfo);
 				ItemPointerCopy(&(event->ate_ctid1), &(tuple1.t_self));
-				if (!heap_fetch(rel, SnapshotAny, &tuple1, &buffer, false, NULL))
+				if (!heap_fetch(rel, SnapshotAny, &tuple1, &buffer, NULL))
 					elog(ERROR, "failed to fetch tuple1 for AFTER trigger");
 				ExecStorePinnedBufferHeapTuple(&tuple1,
 											   LocTriggerData.tg_trigslot,
@@ -4310,7 +4300,7 @@ AfterTriggerExecute(EState *estate,
 				LocTriggerData.tg_newslot = ExecGetTriggerNewSlot(estate, relInfo);
 				ItemPointerCopy(&(event->ate_ctid2), &(tuple2.t_self));
-				if (!heap_fetch(rel, SnapshotAny, &tuple2, &buffer, false, NULL))
+				if (!heap_fetch(rel, SnapshotAny, &tuple2, &buffer, NULL))
 					elog(ERROR, "failed to fetch tuple2 for AFTER trigger");
 				ExecStorePinnedBufferHeapTuple(&tuple2,
 											   LocTriggerData.tg_newslot,
......
@@ -271,12 +271,12 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
  */
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
-					  ItemPointer tupleid,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
 					  List *arbiterIndexes)
 {
+	ItemPointer tupleid = &slot->tts_tid;
 	List	   *result = NIL;
 	ResultRelInfo *resultRelInfo;
 	int			i;
@@ -288,6 +288,8 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
 
+	Assert(ItemPointerIsValid(tupleid));
+
 	/*
 	 * Get information from the result relation info structure.
 	 */
......
@@ -15,7 +15,6 @@
 #include "postgres.h"
 
 #include "access/genam.h"
 #include "access/heapam.h"
-#include "access/relscan.h"
 #include "access/tableam.h"
 #include "access/transam.h"
@@ -168,35 +167,28 @@ retry:
 	/* Found tuple, try to lock it in the lockmode. */
 	if (found)
 	{
-		Buffer		buf;
-		HeapUpdateFailureData hufd;
-		HTSU_Result res;
-		HeapTupleData locktup;
-		HeapTupleTableSlot *hslot = (HeapTupleTableSlot *) outslot;
-
-		/* Only a heap tuple has item pointers. */
-		Assert(TTS_IS_HEAPTUPLE(outslot) || TTS_IS_BUFFERTUPLE(outslot));
-		ItemPointerCopy(&hslot->tuple->t_self, &locktup.t_self);
+		TM_FailureData tmfd;
+		TM_Result	res;
 
 		PushActiveSnapshot(GetLatestSnapshot());
 
-		res = heap_lock_tuple(rel, &locktup, GetCurrentCommandId(false),
-							  lockmode,
-							  LockWaitBlock,
-							  false /* don't follow updates */ ,
-							  &buf, &hufd);
-		/* the tuple slot already has the buffer pinned */
-		ReleaseBuffer(buf);
+		res = table_lock_tuple(rel, &(outslot->tts_tid), GetLatestSnapshot(),
+							   outslot,
+							   GetCurrentCommandId(false),
+							   lockmode,
+							   LockWaitBlock,
+							   0 /* don't follow updates */ ,
+							   &tmfd);
 
 		PopActiveSnapshot();
 
 		switch (res)
 		{
-			case HeapTupleMayBeUpdated:
+			case TM_Ok:
 				break;
-			case HeapTupleUpdated:
+			case TM_Updated:
 				/* XXX: Improve handling here */
-				if (ItemPointerIndicatesMovedPartitions(&hufd.ctid))
+				if (ItemPointerIndicatesMovedPartitions(&tmfd.ctid))
 					ereport(LOG,
 							(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
 							 errmsg("tuple to be locked was already moved to another partition due to concurrent update, retrying")));
@@ -205,11 +197,17 @@ retry:
 							(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
 							 errmsg("concurrent update, retrying")));
 				goto retry;
-			case HeapTupleInvisible:
+			case TM_Deleted:
+				/* XXX: Improve handling here */
+				ereport(LOG,
+						(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
+						 errmsg("concurrent delete, retrying")));
+				goto retry;
+			case TM_Invisible:
 				elog(ERROR, "attempted to lock invisible tuple");
 				break;
 			default:
-				elog(ERROR, "unexpected heap_lock_tuple status: %u", res);
+				elog(ERROR, "unexpected table_lock_tuple status: %u", res);
 				break;
 		}
 	}
@@ -333,35 +331,28 @@ retry:
 	/* Found tuple, try to lock it in the lockmode. */
 	if (found)
 	{
-		Buffer		buf;
-		HeapUpdateFailureData hufd;
-		HTSU_Result res;
-		HeapTupleData locktup;
-		HeapTupleTableSlot *hslot = (HeapTupleTableSlot *) outslot;
-
-		/* Only a heap tuple has item pointers. */
-		Assert(TTS_IS_HEAPTUPLE(outslot) || TTS_IS_BUFFERTUPLE(outslot));
-		ItemPointerCopy(&hslot->tuple->t_self, &locktup.t_self);
+		TM_FailureData tmfd;
+		TM_Result	res;
 
 		PushActiveSnapshot(GetLatestSnapshot());
 
-		res = heap_lock_tuple(rel, &locktup, GetCurrentCommandId(false),
-							  lockmode,
-							  LockWaitBlock,
-							  false /* don't follow updates */ ,
-							  &buf, &hufd);
-		/* the tuple slot already has the buffer pinned */
-		ReleaseBuffer(buf);
+		res = table_lock_tuple(rel, &(outslot->tts_tid), GetLatestSnapshot(),
+							   outslot,
+							   GetCurrentCommandId(false),
+							   lockmode,
+							   LockWaitBlock,
+							   0 /* don't follow updates */ ,
+							   &tmfd);
 
 		PopActiveSnapshot();
 
 		switch (res)
 		{
-			case HeapTupleMayBeUpdated:
+			case TM_Ok:
 				break;
-			case HeapTupleUpdated:
+			case TM_Updated:
 				/* XXX: Improve handling here */
-				if (ItemPointerIndicatesMovedPartitions(&hufd.ctid))
+				if (ItemPointerIndicatesMovedPartitions(&tmfd.ctid))
 					ereport(LOG,
 							(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
 							 errmsg("tuple to be locked was already moved to another partition due to concurrent update, retrying")));
@@ -370,11 +361,17 @@ retry:
 							(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
 							 errmsg("concurrent update, retrying")));
 				goto retry;
-			case HeapTupleInvisible:
+			case TM_Deleted:
+				/* XXX: Improve handling here */
+				ereport(LOG,
+						(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
+						 errmsg("concurrent delete, retrying")));
+				goto retry;
+			case TM_Invisible:
 				elog(ERROR, "attempted to lock invisible tuple");
 				break;
 			default:
-				elog(ERROR, "unexpected heap_lock_tuple status: %u", res);
+				elog(ERROR, "unexpected table_lock_tuple status: %u", res);
 				break;
 		}
 	}
@@ -395,7 +392,6 @@ void
 ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 {
 	bool		skip_tuple = false;
-	HeapTuple	tuple;
 	ResultRelInfo *resultRelInfo = estate->es_result_relation_info;
 	Relation	rel = resultRelInfo->ri_RelationDesc;
@@ -422,16 +418,11 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 		if (resultRelInfo->ri_PartitionCheck)
 			ExecPartitionCheck(resultRelInfo, slot, estate, true);
 
-		/* Materialize slot into a tuple that we can scribble upon. */
-		tuple = ExecFetchSlotHeapTuple(slot, true, NULL);
-
 		/* OK, store the tuple and create index entries for it */
-		simple_heap_insert(rel, tuple);
-		ItemPointerCopy(&tuple->t_self, &slot->tts_tid);
+		simple_table_insert(resultRelInfo->ri_RelationDesc, slot);
 
 		if (resultRelInfo->ri_NumIndices > 0)
-			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
-												   estate, false, NULL,
+			recheckIndexes = ExecInsertIndexTuples(slot, estate, false, NULL,
 												   NIL);
 
 		/* AFTER ROW INSERT Triggers */
@@ -459,13 +450,9 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 						 TupleTableSlot *searchslot, TupleTableSlot *slot)
 {
 	bool		skip_tuple = false;
-	HeapTuple	tuple;
 	ResultRelInfo *resultRelInfo = estate->es_result_relation_info;
 	Relation	rel = resultRelInfo->ri_RelationDesc;
-	HeapTupleTableSlot *hsearchslot = (HeapTupleTableSlot *) searchslot;
-
-	/* We expect the searchslot to contain a heap tuple. */
-	Assert(TTS_IS_HEAPTUPLE(searchslot) || TTS_IS_BUFFERTUPLE(searchslot));
+	ItemPointer tid = &(searchslot->tts_tid);
 
 	/* For now we support only tables. */
 	Assert(rel->rd_rel->relkind == RELKIND_RELATION);
@@ -477,14 +464,14 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 		resultRelInfo->ri_TrigDesc->trig_update_before_row)
 	{
 		if (!ExecBRUpdateTriggers(estate, epqstate, resultRelInfo,
-								  &hsearchslot->tuple->t_self,
-								  NULL, slot))
+								  tid, NULL, slot))
 			skip_tuple = true;	/* "do nothing" */
 	}
 
 	if (!skip_tuple)
 	{
 		List	   *recheckIndexes = NIL;
+		bool		update_indexes;
 
 		/* Check the constraints of the tuple */
 		if (rel->rd_att->constr)
@@ -492,23 +479,16 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 		if (resultRelInfo->ri_PartitionCheck)
 			ExecPartitionCheck(resultRelInfo, slot, estate, true);
 
-		/* Materialize slot into a tuple that we can scribble upon. */
-		tuple = ExecFetchSlotHeapTuple(slot, true, NULL);
+		simple_table_update(rel, tid, slot, estate->es_snapshot,
+							&update_indexes);
 
-		/* OK, update the tuple and index entries for it */
-		simple_heap_update(rel, &hsearchslot->tuple->t_self, tuple);
-		ItemPointerCopy(&tuple->t_self, &slot->tts_tid);
-
-		if (resultRelInfo->ri_NumIndices > 0 &&
-			!HeapTupleIsHeapOnly(tuple))
-			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
-												   estate, false, NULL,
+		if (resultRelInfo->ri_NumIndices > 0 && update_indexes)
+			recheckIndexes = ExecInsertIndexTuples(slot, estate, false, NULL,
 												   NIL);
 
 		/* AFTER ROW UPDATE Triggers */
 		ExecARUpdateTriggers(estate, resultRelInfo,
-							 &(tuple->t_self),
-							 NULL, slot,
+							 tid, NULL, slot,
 							 recheckIndexes, NULL);
 
 		list_free(recheckIndexes);
@@ -528,11 +508,7 @@ ExecSimpleRelationDelete(EState *estate, EPQState *epqstate,
 	bool		skip_tuple = false;
 	ResultRelInfo *resultRelInfo = estate->es_result_relation_info;
 	Relation	rel = resultRelInfo->ri_RelationDesc;
-	HeapTupleTableSlot *hsearchslot = (HeapTupleTableSlot *) searchslot;
-
-	/* For now we support only tables and heap tuples. */
-	Assert(rel->rd_rel->relkind == RELKIND_RELATION);
-	Assert(TTS_IS_HEAPTUPLE(searchslot) || TTS_IS_BUFFERTUPLE(searchslot));
+	ItemPointer tid = &searchslot->tts_tid;
 
 	CheckCmdReplicaIdentity(rel, CMD_DELETE);
@@ -541,23 +517,18 @@ ExecSimpleRelationDelete(EState *estate, EPQState *epqstate,
 		resultRelInfo->ri_TrigDesc->trig_delete_before_row)
 	{
 		skip_tuple = !ExecBRDeleteTriggers(estate, epqstate, resultRelInfo,
-										   &hsearchslot->tuple->t_self,
-										   NULL, NULL);
+										   tid, NULL, NULL);
 	}
 
 	if (!skip_tuple)
 	{
-		List	   *recheckIndexes = NIL;
-
 		/* OK, delete the tuple */
-		simple_heap_delete(rel, &hsearchslot->tuple->t_self);
+		simple_table_delete(rel, tid, estate->es_snapshot);
 
 		/* AFTER ROW DELETE Triggers */
 		ExecARDeleteTriggers(estate, resultRelInfo,
-							 &hsearchslot->tuple->t_self, NULL, NULL);
-
-		list_free(recheckIndexes);
+							 tid, NULL, NULL);
 	}
 }
......
@@ -21,14 +21,12 @@
 #include "postgres.h"
 
-#include "access/heapam.h"
-#include "access/htup_details.h"
+#include "access/tableam.h"
 #include "access/xact.h"
 #include "executor/executor.h"
 #include "executor/nodeLockRows.h"
 #include "foreign/fdwapi.h"
 #include "miscadmin.h"
-#include "storage/bufmgr.h"
 #include "utils/rel.h"
@@ -82,11 +80,11 @@ lnext:
 		ExecRowMark *erm = aerm->rowmark;
 		Datum		datum;
 		bool		isNull;
-		HeapTupleData tuple;
-		Buffer		buffer;
-		HeapUpdateFailureData hufd;
+		ItemPointerData tid;
+		TM_FailureData tmfd;
 		LockTupleMode lockmode;
-		HTSU_Result test;
+		int			lockflags = 0;
+		TM_Result	test;
 		TupleTableSlot *markSlot;
 
 		/* clear any leftover test tuple for this rel */
@@ -112,6 +110,7 @@ lnext:
 				/* this child is inactive right now */
 				erm->ermActive = false;
 				ItemPointerSetInvalid(&(erm->curCtid));
+				ExecClearTuple(markSlot);
 				continue;
 			}
 		}
@@ -160,8 +159,8 @@ lnext:
 			continue;
 		}
 
-		/* okay, try to lock the tuple */
-		tuple.t_self = *((ItemPointer) DatumGetPointer(datum));
+		/* okay, try to lock (and fetch) the tuple */
+		tid = *((ItemPointer) DatumGetPointer(datum));
 		switch (erm->markType)
 		{
 			case ROW_MARK_EXCLUSIVE:
@@ -182,18 +181,23 @@ lnext:
 				break;
 		}
 
-		test = heap_lock_tuple(erm->relation, &tuple,
-							   estate->es_output_cid,
-							   lockmode, erm->waitPolicy, true,
-							   &buffer, &hufd);
-		ReleaseBuffer(buffer);
+		lockflags = TUPLE_LOCK_FLAG_LOCK_UPDATE_IN_PROGRESS;
+		if (!IsolationUsesXactSnapshot())
+			lockflags |= TUPLE_LOCK_FLAG_FIND_LAST_VERSION;
+
+		test = table_lock_tuple(erm->relation, &tid, estate->es_snapshot,
+								markSlot, estate->es_output_cid,
+								lockmode, erm->waitPolicy,
+								lockflags,
+								&tmfd);
+
 		switch (test)
 		{
-			case HeapTupleWouldBlock:
+			case TM_WouldBlock:
 				/* couldn't lock tuple in SKIP LOCKED mode */
 				goto lnext;
 
-			case HeapTupleSelfUpdated:
+			case TM_SelfModified:
 
 				/*
 				 * The target tuple was already updated or deleted by the
@@ -204,65 +208,50 @@ lnext:
 				 * to fetch the updated tuple instead, but doing so would
 				 * require changing heap_update and heap_delete to not
 				 * complain about updating "invisible" tuples, which seems
-				 * pretty scary (heap_lock_tuple will not complain, but few
-				 * callers expect HeapTupleInvisible, and we're not one of
-				 * them).  So for now, treat the tuple as deleted and do not
-				 * process.
+				 * pretty scary (table_lock_tuple will not complain, but few
+				 * callers expect TM_Invisible, and we're not one of them). So
+				 * for now, treat the tuple as deleted and do not process.
 				 */
 				goto lnext;
 
-			case HeapTupleMayBeUpdated:
-				/* got the lock successfully */
+			case TM_Ok:
+
+				/*
+				 * Got the lock successfully, the locked tuple saved in
+				 * markSlot for, if needed, EvalPlanQual testing below.
+				 */
+				if (tmfd.traversed)
+					epq_needed = true;
 				break;
 
-			case HeapTupleUpdated:
+			case TM_Updated:
 				if (IsolationUsesXactSnapshot())
 					ereport(ERROR,
 							(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
 							 errmsg("could not serialize access due to concurrent update")));
-				if (ItemPointerIndicatesMovedPartitions(&hufd.ctid))
-					ereport(ERROR,
-							(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
-							 errmsg("tuple to be locked was already moved to another partition due to concurrent update")));
-
-				if (ItemPointerEquals(&hufd.ctid, &tuple.t_self))
-				{
-					/* Tuple was deleted, so don't return it */
-					goto lnext;
-				}
-
-				/* updated, so fetch and lock the updated version */
-				if (!EvalPlanQualFetch(estate, erm->relation,
-									   lockmode, erm->waitPolicy,
-									   &hufd.ctid, hufd.xmax,
-									   markSlot))
-				{
-					/*
-					 * Tuple was deleted; or it's locked and we're under SKIP
-					 * LOCKED policy, so don't return it
-					 */
-					goto lnext;
-				}
-				/* remember the actually locked tuple's TID */
-				tuple.t_self = markSlot->tts_tid;
-
-				/* Remember we need to do EPQ testing */
-				epq_needed = true;
-
-				/* Continue loop until we have all target tuples */
+				elog(ERROR, "unexpected table_lock_tuple status: %u",
+					 test);
 				break;
 
-			case HeapTupleInvisible:
+			case TM_Deleted:
+				if (IsolationUsesXactSnapshot())
+					ereport(ERROR,
+							(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
+							 errmsg("could not serialize access due to concurrent update")));
+				/* tuple was deleted so don't return it */
+				goto lnext;
+
+			case TM_Invisible:
 				elog(ERROR, "attempted to lock invisible tuple");
 				break;
 
 			default:
-				elog(ERROR, "unrecognized heap_lock_tuple status: %u",
+				elog(ERROR, "unrecognized table_lock_tuple status: %u",
 					 test);
 		}
 
 		/* Remember locked tuple's TID for EPQ testing and WHERE CURRENT OF */
-		erm->curCtid = tuple.t_self;
+		erm->curCtid = tid;
 	}
 
 	/*
@@ -270,49 +259,6 @@ lnext:
 	 */
 	if (epq_needed)
 	{
-		/*
-		 * Fetch a copy of any rows that were successfully locked without any
-		 * update having occurred.  (We do this in a separate pass so as to
-		 * avoid overhead in the common case where there are no concurrent
-		 * updates.)  Make sure any inactive child rels have NULL test tuples
-		 * in EPQ.
-		 */
-		foreach(lc, node->lr_arowMarks)
-		{
-			ExecAuxRowMark *aerm = (ExecAuxRowMark *) lfirst(lc);
-			ExecRowMark *erm = aerm->rowmark;
-			TupleTableSlot *markSlot;
-			HeapTupleData tuple;
-			Buffer		buffer;
-
-			markSlot = EvalPlanQualSlot(&node->lr_epqstate, erm->relation, erm->rti);
-
-			/* skip non-active child tables, but clear their test tuples */
-			if (!erm->ermActive)
-			{
-				Assert(erm->rti != erm->prti);	/* check it's child table */
-				ExecClearTuple(markSlot);
-				continue;
-			}
-
-			/* was tuple updated and fetched above? */
-			if (!TupIsNull(markSlot))
-				continue;
-
-			/* foreign tables should have been fetched above */
-			Assert(erm->relation->rd_rel->relkind != RELKIND_FOREIGN_TABLE);
-			Assert(ItemPointerIsValid(&(erm->curCtid)));
-
-			/* okay, fetch the tuple */
-			tuple.t_self = erm->curCtid;
-			if (!heap_fetch(erm->relation, SnapshotAny, &tuple, &buffer,
-							false, NULL))
-				elog(ERROR, "failed to fetch tuple for EvalPlanQual recheck");
-			ExecStorePinnedBufferHeapTuple(&tuple, markSlot, buffer);
-			ExecMaterializeSlot(markSlot);
-			/* successful, use tuple in slot */
-		}
-
 		/*
 		 * Now fetch any non-locked source rows --- the EPQ logic knows how to
 		 * do that.
......
@@ -376,7 +376,7 @@ TidNext(TidScanState *node)
 		if (node->tss_isCurrentOf)
 			heap_get_latest_tid(heapRelation, snapshot, &tuple->t_self);
 
-		if (heap_fetch(heapRelation, snapshot, tuple, &buffer, false, NULL))
+		if (heap_fetch(heapRelation, snapshot, tuple, &buffer, NULL))
 		{
 			/*
 			 * Store the scanned tuple in the scan tuple slot of the scan
......
@@ -19,6 +19,7 @@
 #include "access/sdir.h"
 #include "access/skey.h"
 #include "access/table.h"		/* for backward compatibility */
+#include "access/tableam.h"
 #include "nodes/lockoptions.h"
 #include "nodes/primnodes.h"
 #include "storage/bufpage.h"
@@ -28,39 +29,16 @@
 /* "options" flag bits for heap_insert */
-#define HEAP_INSERT_SKIP_WAL	0x0001
-#define HEAP_INSERT_SKIP_FSM	0x0002
-#define HEAP_INSERT_FROZEN		0x0004
-#define HEAP_INSERT_SPECULATIVE 0x0008
-#define HEAP_INSERT_NO_LOGICAL	0x0010
+#define HEAP_INSERT_SKIP_WAL	TABLE_INSERT_SKIP_WAL
+#define HEAP_INSERT_SKIP_FSM	TABLE_INSERT_SKIP_FSM
+#define HEAP_INSERT_FROZEN		TABLE_INSERT_FROZEN
+#define HEAP_INSERT_NO_LOGICAL	TABLE_INSERT_NO_LOGICAL
+#define HEAP_INSERT_SPECULATIVE 0x0010
 
 typedef struct BulkInsertStateData *BulkInsertState;
 
 #define MaxLockTupleMode	LockTupleExclusive
 
-/*
- * When heap_update, heap_delete, or heap_lock_tuple fail because the target
- * tuple is already outdated, they fill in this struct to provide information
- * to the caller about what happened.
- *
- * ctid is the target's ctid link: it is the same as the target's TID if the
- * target was deleted, or the location of the replacement tuple if the target
- * was updated.
- *
- * xmax is the outdating transaction's XID.  If the caller wants to visit the
- * replacement tuple, it must check that this matches before believing the
- * replacement is really a match.
- *
- * cmax is the outdating command's CID, but only when the failure code is
- * HeapTupleSelfUpdated (i.e., something in the current transaction outdated
- * the tuple); otherwise cmax is zero.  (We make this restriction because
- * HeapTupleHeaderGetCmax doesn't work for tuples outdated in other
- * transactions.)
- */
-typedef struct HeapUpdateFailureData
-{
-	ItemPointerData ctid;
-	TransactionId xmax;
-	CommandId	cmax;
-} HeapUpdateFailureData;
-
 /*
  * Descriptor for heap table scans.
  */
@@ -150,8 +128,7 @@ extern bool heap_getnextslot(TableScanDesc sscan,
 							 ScanDirection direction, struct TupleTableSlot *slot);
 extern bool heap_fetch(Relation relation, Snapshot snapshot,
-					   HeapTuple tuple, Buffer *userbuf, bool keep_buf,
-					   Relation stats_relation);
+					   HeapTuple tuple, Buffer *userbuf, Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 								   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
 								   bool *all_dead, bool first_call);
@@ -170,19 +147,20 @@ extern void heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 						int options, BulkInsertState bistate);
 extern void heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 							  CommandId cid, int options, BulkInsertState bistate);
-extern HTSU_Result heap_delete(Relation relation, ItemPointer tid,
+extern TM_Result heap_delete(Relation relation, ItemPointer tid,
 							 CommandId cid, Snapshot crosscheck, bool wait,
-							 HeapUpdateFailureData *hufd, bool changingPart);
-extern void heap_finish_speculative(Relation relation, HeapTuple tuple);
-extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
-extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
+							 struct TM_FailureData *tmfd, bool changingPart);
+extern void heap_finish_speculative(Relation relation, ItemPointer tid);
+extern void heap_abort_speculative(Relation relation, ItemPointer tid);
+extern TM_Result heap_update(Relation relation, ItemPointer otid,
							 HeapTuple newtup,
							 CommandId cid, Snapshot crosscheck, bool wait,
-							 HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
-extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
+							 struct TM_FailureData *tmfd, LockTupleMode *lockmode);
+extern TM_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
								 CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
								 bool follow_update,
-								 Buffer *buffer, HeapUpdateFailureData *hufd);
+								 Buffer *buffer, struct TM_FailureData *tmfd);
 extern void heap_inplace_update(Relation relation, HeapTuple tuple);
 extern bool heap_freeze_tuple(HeapTupleHeader tuple,
							  TransactionId relfrozenxid, TransactionId relminmxid,
@@ -223,7 +201,7 @@ extern void heap_vacuum_rel(Relation onerel,
 /* in heap/heapam_visibility.c */
 extern bool HeapTupleSatisfiesVisibility(HeapTuple stup, Snapshot snapshot,
										 Buffer buffer);
-extern HTSU_Result HeapTupleSatisfiesUpdate(HeapTuple stup, CommandId curcid,
+extern TM_Result HeapTupleSatisfiesUpdate(HeapTuple stup, CommandId curcid,
										  Buffer buffer);
 extern HTSV_Result HeapTupleSatisfiesVacuum(HeapTuple stup, TransactionId OldestXmin,
											Buffer buffer);
......
@@ -198,12 +198,7 @@ extern LockTupleMode ExecUpdateLockMode(EState *estate, ResultRelInfo *relinfo);
 extern ExecRowMark *ExecFindRowMark(EState *estate, Index rti, bool missing_ok);
 extern ExecAuxRowMark *ExecBuildAuxRowMark(ExecRowMark *erm, List *targetlist);
 extern TupleTableSlot *EvalPlanQual(EState *estate, EPQState *epqstate,
-									Relation relation, Index rti, LockTupleMode lockmode,
-									ItemPointer tid, TransactionId priorXmax);
-extern bool EvalPlanQualFetch(EState *estate, Relation relation,
-							  LockTupleMode lockmode, LockWaitPolicy wait_policy,
-							  ItemPointer tid, TransactionId priorXmax,
-							  TupleTableSlot *slot);
+									Relation relation, Index rti, TupleTableSlot *testslot);
 extern void EvalPlanQualInit(EPQState *epqstate, EState *estate,
							 Plan *subplan, List *auxrowmarks, int epqParam);
 extern void EvalPlanQualSetPlan(EPQState *epqstate,
@@ -573,9 +568,8 @@ extern TupleTableSlot *ExecGetReturningSlot(EState *estate, ResultRelInfo *relIn
  */
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
-extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
-								   EState *estate, bool noDupErr, bool *specConflict,
-								   List *arbiterIndexes);
+extern List *ExecInsertIndexTuples(TupleTableSlot *slot, EState *estate, bool noDupErr,
+								   bool *specConflict, List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
									  ItemPointer conflictTid, List *arbiterIndexes);
 extern void check_exclusion_constraint(Relation heap, Relation index,
......
@@ -184,17 +184,4 @@ typedef struct SnapshotData
 	XLogRecPtr	lsn;			/* position in the WAL stream when taken */
 } SnapshotData;
 
-/*
- * Result codes for HeapTupleSatisfiesUpdate.
- */
-typedef enum
-{
-	HeapTupleMayBeUpdated,
-	HeapTupleInvisible,
-	HeapTupleSelfUpdated,
-	HeapTupleUpdated,
-	HeapTupleBeingUpdated,
-	HeapTupleWouldBlock		/* can be returned by heap_tuple_lock */
-} HTSU_Result;
-
 #endif							/* SNAPSHOT_H */
@@ -15,7 +15,7 @@ step s1u: UPDATE foo SET a=2 WHERE a=1;
 step s2d: DELETE FROM foo WHERE a=1; <waiting ...>
 step s1c: COMMIT;
 step s2d: <... completed>
-error in steps s1c s2d: ERROR: tuple to be deleted was already moved to another partition due to concurrent update
+error in steps s1c s2d: ERROR: tuple to be locked was already moved to another partition due to concurrent update
 step s2c: COMMIT;
 
 starting permutation: s1b s2b s1u s2u s1c s2c
@@ -25,7 +25,7 @@ step s1u: UPDATE foo SET a=2 WHERE a=1;
 step s2u: UPDATE foo SET b='EFG' WHERE a=1; <waiting ...>
 step s1c: COMMIT;
 step s2u: <... completed>
-error in steps s1c s2u: ERROR: tuple to be updated was already moved to another partition due to concurrent update
+error in steps s1c s2u: ERROR: tuple to be locked was already moved to another partition due to concurrent update
 step s2c: COMMIT;
 
 starting permutation: s1b s2b s2d s1u s2c s1c
......
@@ -943,7 +943,6 @@ HSParser
 HSpool
 HStore
 HTAB
-HTSU_Result
 HTSV_Result
 HV
 Hash
@@ -982,7 +981,6 @@ HeapTupleData
 HeapTupleFields
 HeapTupleHeader
 HeapTupleHeaderData
-HeapUpdateFailureData
 HistControl
 HotStandbyState
 I32
@@ -2283,6 +2281,8 @@ TBMSharedIteratorState
 TBMStatus
 TBlockState
 TIDBitmap
+TM_FailureData
+TM_Result
 TOKEN_DEFAULT_DACL
 TOKEN_INFORMATION_CLASS
 TOKEN_PRIVILEGES
......