Commit 3761fe3c authored by Robert Haas

Simplify LWLock tranche machinery by removing array_base/array_stride.

array_base and array_stride were added so that we could identify the
offset of an LWLock within a tranche, but this facility is only very
marginally used apart from the main tranche.  So, give every lock in
the main tranche its own tranche ID and get rid of array_base,
array_stride, and all that's attached.  For debugging facilities
(Trace_lwlocks and LWLOCK_STATS) print the pointer address of the
LWLock using %p instead of the offset.  This is arguably more useful,
and certainly a lot cheaper.  Drop the offset-within-tranche from
the information reported to dtrace and from one can't-happen message
inside lwlock.c.

The main user-visible impact of this change is that pg_stat_activity
will now report all waits for LWLocks as "LWLock" rather than
reporting some as "LWLockTranche" and others as "LWLockNamed".

The main motivation for this change is that the need to specify an
array_base and an array_stride is awkward for parallel query.  There
is only a very limited supply of tranche IDs so we can't just keep
allocating new ones, and if we try to use the same tranche IDs every
time then we run into trouble when multiple parallel contexts are
in use simultaneously.  So if we didn't get rid of this mechanism we'd
have to make it even more complicated.  By simplifying it in this
way, we instead reduce the size of the generated code for lwlock.c
by about 5%.

Discussion: http://postgr.es/m/CA+TgmoYsFn6NUW1x0AZtupJGUAs1UDY4dJtCN47_Q6D0sP80PA@mail.gmail.com
parent 4e344c2c
@@ -646,18 +646,11 @@ postgres  27093  0.0  0.0  30096  2752 ?        Ss   11:34   0:00 postgres: ser
     <itemizedlist>
      <listitem>
       <para>
-       <literal>LWLockNamed</>: The backend is waiting for a specific named
-       lightweight lock.  Each such lock protects a particular data
-       structure in shared memory.  <literal>wait_event</> will contain
-       the name of the lightweight lock.
-      </para>
-     </listitem>
-     <listitem>
-      <para>
-       <literal>LWLockTranche</>: The backend is waiting for one of a
-       group of related lightweight locks.  All locks in the group perform
-       a similar function; <literal>wait_event</> will identify the general
-       purpose of locks in that group.
+       <literal>LWLock</>: The backend is waiting for a lightweight lock.
+       Each such lock protects a particular data structure in shared memory.
+       <literal>wait_event</> will contain a name identifying the purpose
+       of the lightweight lock.  (Some locks have specific names; others
+       are part of a group of locks each with a similar purpose.)
       </para>
      </listitem>
      <listitem>
@@ -825,7 +818,7 @@ postgres  27093  0.0  0.0  30096  2752 ?        Ss   11:34   0:00 postgres: ser
     <tbody>
      <row>
-      <entry morerows="41"><literal>LWLockNamed</></entry>
+      <entry morerows="57"><literal>LWLock</></entry>
       <entry><literal>ShmemIndexLock</></entry>
       <entry>Waiting to find or allocate space in shared memory.</entry>
      </row>
@@ -1011,7 +1004,6 @@ postgres  27093  0.0  0.0  30096  2752 ?        Ss   11:34   0:00 postgres: ser
       <entry>Waiting to read or update old snapshot control information.</entry>
      </row>
      <row>
-      <entry morerows="15"><literal>LWLockTranche</></entry>
       <entry><literal>clog</></entry>
       <entry>Waiting for I/O on a clog (transaction status) buffer.</entry>
      </row>
@@ -1279,7 +1271,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
  pid  | wait_event_type |  wait_event
 ------+-----------------+---------------
  2540 | Lock            | relation
- 6644 | LWLockNamed     | ProcArrayLock
+ 6644 | LWLock          | ProcArrayLock
 (2 rows)
 </programlisting>
    </para>
@@ -3347,55 +3339,49 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid,
      </row>
      <row>
       <entry><literal>lwlock-acquire</literal></entry>
-      <entry><literal>(char *, int, LWLockMode)</literal></entry>
+      <entry><literal>(char *, LWLockMode)</literal></entry>
       <entry>Probe that fires when an LWLock has been acquired.
       arg0 is the LWLock's tranche.
-      arg1 is the LWLock's offset within its tranche.
-      arg2 is the requested lock mode, either exclusive or shared.</entry>
+      arg1 is the requested lock mode, either exclusive or shared.</entry>
      </row>
      <row>
       <entry><literal>lwlock-release</literal></entry>
-      <entry><literal>(char *, int)</literal></entry>
+      <entry><literal>(char *)</literal></entry>
       <entry>Probe that fires when an LWLock has been released (but note
       that any released waiters have not yet been awakened).
-      arg0 is the LWLock's tranche.
-      arg1 is the LWLock's offset within its tranche.</entry>
+      arg0 is the LWLock's tranche.</entry>
      </row>
      <row>
       <entry><literal>lwlock-wait-start</literal></entry>
-      <entry><literal>(char *, int, LWLockMode)</literal></entry>
+      <entry><literal>(char *, LWLockMode)</literal></entry>
       <entry>Probe that fires when an LWLock was not immediately available and
       a server process has begun to wait for the lock to become available.
       arg0 is the LWLock's tranche.
-      arg1 is the LWLock's offset within its tranche.
-      arg2 is the requested lock mode, either exclusive or shared.</entry>
+      arg1 is the requested lock mode, either exclusive or shared.</entry>
      </row>
      <row>
       <entry><literal>lwlock-wait-done</literal></entry>
-      <entry><literal>(char *, int, LWLockMode)</literal></entry>
+      <entry><literal>(char *, LWLockMode)</literal></entry>
       <entry>Probe that fires when a server process has been released from its
       wait for an LWLock (it does not actually have the lock yet).
       arg0 is the LWLock's tranche.
-      arg1 is the LWLock's offset within its tranche.
-      arg2 is the requested lock mode, either exclusive or shared.</entry>
+      arg1 is the requested lock mode, either exclusive or shared.</entry>
      </row>
      <row>
       <entry><literal>lwlock-condacquire</literal></entry>
-      <entry><literal>(char *, int, LWLockMode)</literal></entry>
+      <entry><literal>(char *, LWLockMode)</literal></entry>
       <entry>Probe that fires when an LWLock was successfully acquired when the
       caller specified no waiting.
       arg0 is the LWLock's tranche.
-      arg1 is the LWLock's offset within its tranche.
-      arg2 is the requested lock mode, either exclusive or shared.</entry>
+      arg1 is the requested lock mode, either exclusive or shared.</entry>
      </row>
      <row>
       <entry><literal>lwlock-condacquire-fail</literal></entry>
-      <entry><literal>(char *, int, LWLockMode)</literal></entry>
+      <entry><literal>(char *, LWLockMode)</literal></entry>
       <entry>Probe that fires when an LWLock was not successfully acquired when
       the caller specified no waiting.
       arg0 is the LWLock's tranche.
-      arg1 is the LWLock's offset within its tranche.
-      arg2 is the requested lock mode, either exclusive or shared.</entry>
+      arg1 is the requested lock mode, either exclusive or shared.</entry>
      </row>
      <row>
       <entry><literal>lock-wait-start</literal></entry>
......
@@ -216,9 +216,6 @@ SimpleLruInit(SlruCtl ctl, const char *name, int nslots, int nlsns,
 	Assert(strlen(name) + 1 < SLRU_MAX_NAME_LENGTH);
 	strlcpy(shared->lwlock_tranche_name, name, SLRU_MAX_NAME_LENGTH);
 	shared->lwlock_tranche_id = tranche_id;
-	shared->lwlock_tranche.name = shared->lwlock_tranche_name;
-	shared->lwlock_tranche.array_base = shared->buffer_locks;
-	shared->lwlock_tranche.array_stride = sizeof(LWLockPadded);
 	ptr += BUFFERALIGN(offset);
 	for (slotno = 0; slotno < nslots; slotno++)
@@ -237,7 +234,8 @@ SimpleLruInit(SlruCtl ctl, const char *name, int nslots, int nlsns,
 	Assert(found);
 	/* Register SLRU tranche in the main tranches array */
-	LWLockRegisterTranche(shared->lwlock_tranche_id, &shared->lwlock_tranche);
+	LWLockRegisterTranche(shared->lwlock_tranche_id,
+						  shared->lwlock_tranche_name);
 	/*
 	 * Initialize the unshared control struct, including directory path.  We
......
@@ -517,7 +517,6 @@ typedef struct XLogCtlInsert
 	 * WAL insertion locks.
 	 */
 	WALInsertLockPadded *WALInsertLocks;
-	LWLockTranche WALInsertLockTranche;
 } XLogCtlInsert;
 /*
@@ -4688,7 +4687,7 @@ XLOGShmemInit(void)
 		/* Initialize local copy of WALInsertLocks and register the tranche */
 		WALInsertLocks = XLogCtl->Insert.WALInsertLocks;
 		LWLockRegisterTranche(LWTRANCHE_WAL_INSERT,
-							  &XLogCtl->Insert.WALInsertLockTranche);
+							  "wal_insert");
 		return;
 	}
 	memset(XLogCtl, 0, sizeof(XLogCtlData));
@@ -4711,11 +4710,7 @@ XLOGShmemInit(void)
 		(WALInsertLockPadded *) allocptr;
 	allocptr += sizeof(WALInsertLockPadded) * NUM_XLOGINSERT_LOCKS;
-	XLogCtl->Insert.WALInsertLockTranche.name = "wal_insert";
-	XLogCtl->Insert.WALInsertLockTranche.array_base = WALInsertLocks;
-	XLogCtl->Insert.WALInsertLockTranche.array_stride = sizeof(WALInsertLockPadded);
-	LWLockRegisterTranche(LWTRANCHE_WAL_INSERT, &XLogCtl->Insert.WALInsertLockTranche);
+	LWLockRegisterTranche(LWTRANCHE_WAL_INSERT, "wal_insert");
 	for (i = 0; i < NUM_XLOGINSERT_LOCKS; i++)
 	{
 		LWLockInitialize(&WALInsertLocks[i].l.lock, LWTRANCHE_WAL_INSERT);
......
@@ -3152,11 +3152,8 @@ pgstat_get_wait_event_type(uint32 wait_event_info)
 	switch (classId)
 	{
-		case PG_WAIT_LWLOCK_NAMED:
-			event_type = "LWLockNamed";
-			break;
-		case PG_WAIT_LWLOCK_TRANCHE:
-			event_type = "LWLockTranche";
+		case PG_WAIT_LWLOCK:
+			event_type = "LWLock";
 			break;
 		case PG_WAIT_LOCK:
 			event_type = "Lock";
@@ -3209,8 +3206,7 @@ pgstat_get_wait_event(uint32 wait_event_info)
 	switch (classId)
 	{
-		case PG_WAIT_LWLOCK_NAMED:
-		case PG_WAIT_LWLOCK_TRANCHE:
+		case PG_WAIT_LWLOCK:
 			event_name = GetLWLockIdentifier(classId, eventId);
 			break;
 		case PG_WAIT_LOCK:
......
@@ -143,7 +143,6 @@ typedef struct ReplicationStateOnDisk
 typedef struct ReplicationStateCtl
 {
 	int			tranche_id;
-	LWLockTranche tranche;
 	ReplicationState states[FLEXIBLE_ARRAY_MEMBER];
 } ReplicationStateCtl;
@@ -474,11 +473,6 @@ ReplicationOriginShmemInit(void)
 		int			i;
 		replication_states_ctl->tranche_id = LWTRANCHE_REPLICATION_ORIGIN;
-		replication_states_ctl->tranche.name = "replication_origin";
-		replication_states_ctl->tranche.array_base =
-			&replication_states[0].lock;
-		replication_states_ctl->tranche.array_stride =
-			sizeof(ReplicationState);
 		MemSet(replication_states, 0, ReplicationOriginShmemSize());
@@ -488,7 +482,7 @@ ReplicationOriginShmemInit(void)
 	}
 	LWLockRegisterTranche(replication_states_ctl->tranche_id,
-						  &replication_states_ctl->tranche);
+						  "replication_origin");
 }
 /* ---------------------------------------------------------------------------
......
@@ -98,8 +98,6 @@ ReplicationSlot *MyReplicationSlot = NULL;
 int			max_replication_slots = 0;	/* the maximum number of replication
 										 * slots */
-static LWLockTranche ReplSlotIOLWLockTranche;
 static void ReplicationSlotDropAcquired(void);
 static void ReplicationSlotDropPtr(ReplicationSlot *slot);
@@ -141,12 +139,8 @@ ReplicationSlotsShmemInit(void)
 		ShmemInitStruct("ReplicationSlot Ctl", ReplicationSlotsShmemSize(),
 						&found);
-	ReplSlotIOLWLockTranche.name = "replication_slot_io";
-	ReplSlotIOLWLockTranche.array_base =
-		((char *) ReplicationSlotCtl) + offsetof(ReplicationSlotCtlData, replication_slots) +offsetof(ReplicationSlot, io_in_progress_lock);
-	ReplSlotIOLWLockTranche.array_stride = sizeof(ReplicationSlot);
 	LWLockRegisterTranche(LWTRANCHE_REPLICATION_SLOT_IO_IN_PROGRESS,
-						  &ReplSlotIOLWLockTranche);
+						  "replication_slot_io");
 	if (!found)
 	{
......
@@ -21,8 +21,6 @@
 BufferDescPadded *BufferDescriptors;
 char	   *BufferBlocks;
 LWLockMinimallyPadded *BufferIOLWLockArray = NULL;
-LWLockTranche BufferIOLWLockTranche;
-LWLockTranche BufferContentLWLockTranche;
 WritebackContext BackendWritebackContext;
 CkptSortItem *CkptBufferIds;
@@ -90,18 +88,8 @@ InitBufferPool(void)
 						NBuffers * (Size) sizeof(LWLockMinimallyPadded),
 						&foundIOLocks);
-	BufferIOLWLockTranche.name = "buffer_io";
-	BufferIOLWLockTranche.array_base = BufferIOLWLockArray;
-	BufferIOLWLockTranche.array_stride = sizeof(LWLockMinimallyPadded);
-	LWLockRegisterTranche(LWTRANCHE_BUFFER_IO_IN_PROGRESS,
-						  &BufferIOLWLockTranche);
-	BufferContentLWLockTranche.name = "buffer_content";
-	BufferContentLWLockTranche.array_base =
-		((char *) BufferDescriptors) + offsetof(BufferDesc, content_lock);
-	BufferContentLWLockTranche.array_stride = sizeof(BufferDescPadded);
-	LWLockRegisterTranche(LWTRANCHE_BUFFER_CONTENT,
-						  &BufferContentLWLockTranche);
+	LWLockRegisterTranche(LWTRANCHE_BUFFER_IO_IN_PROGRESS, "buffer_io");
+	LWLockRegisterTranche(LWTRANCHE_BUFFER_CONTENT, "buffer_content");
 	/*
 	 * The array used to sort to-be-checkpointed buffer ids is located in
......
@@ -106,9 +106,6 @@ static TransactionId *KnownAssignedXids;
 static bool *KnownAssignedXidsValid;
 static TransactionId latestObservedXid = InvalidTransactionId;
-/* LWLock tranche for backend locks */
-static LWLockTranche ProcLWLockTranche;
 /*
  * If we're in STANDBY_SNAPSHOT_PENDING state, standbySnapshotPendingXmin is
  * the highest xid that might still be running that we don't have in
@@ -266,11 +263,7 @@ CreateSharedProcArray(void)
 	}
 	/* Register and initialize fields of ProcLWLockTranche */
-	ProcLWLockTranche.name = "proc";
-	ProcLWLockTranche.array_base = (char *) (ProcGlobal->allProcs) +
-		offsetof(PGPROC, backendLock);
-	ProcLWLockTranche.array_stride = sizeof(PGPROC);
-	LWLockRegisterTranche(LWTRANCHE_PROC, &ProcLWLockTranche);
+	LWLockRegisterTranche(LWTRANCHE_PROC, "proc");
 }
 /*
......
This diff is collapsed.
@@ -362,9 +362,6 @@ struct dsa_area
 	/* Pointer to the control object in shared memory. */
 	dsa_area_control *control;
-	/* The lock tranche for this process. */
-	LWLockTranche lwlock_tranche;
 	/* Has the mapping been pinned? */
 	bool		mapping_pinned;
@@ -1207,10 +1204,8 @@ create_internal(void *place, size_t size,
 	area->mapping_pinned = false;
 	memset(area->segment_maps, 0, sizeof(dsa_segment_map) * DSA_MAX_SEGMENTS);
 	area->high_segment_index = 0;
-	area->lwlock_tranche.array_base = &area->control->pools[0];
-	area->lwlock_tranche.array_stride = sizeof(dsa_area_pool);
-	area->lwlock_tranche.name = control->lwlock_tranche_name;
-	LWLockRegisterTranche(control->lwlock_tranche_id, &area->lwlock_tranche);
+	LWLockRegisterTranche(control->lwlock_tranche_id,
+						  control->lwlock_tranche_name);
 	LWLockInitialize(&control->lock, control->lwlock_tranche_id);
 	for (i = 0; i < DSA_NUM_SIZE_CLASSES; ++i)
 		LWLockInitialize(DSA_SCLASS_LOCK(area, i),
@@ -1267,10 +1262,8 @@ attach_internal(void *place, dsm_segment *segment, dsa_handle handle)
 	memset(&area->segment_maps[0], 0,
 		   sizeof(dsa_segment_map) * DSA_MAX_SEGMENTS);
 	area->high_segment_index = 0;
-	area->lwlock_tranche.array_base = &area->control->pools[0];
-	area->lwlock_tranche.array_stride = sizeof(dsa_area_pool);
-	area->lwlock_tranche.name = control->lwlock_tranche_name;
-	LWLockRegisterTranche(control->lwlock_tranche_id, &area->lwlock_tranche);
+	LWLockRegisterTranche(control->lwlock_tranche_id,
+						  control->lwlock_tranche_name);
 	/* Set up the segment map for this process's mapping. */
 	segment_map = &area->segment_maps[0];
......
@@ -28,14 +28,14 @@ provider postgresql {
 	probe transaction__commit(LocalTransactionId);
 	probe transaction__abort(LocalTransactionId);
-	probe lwlock__acquire(const char *, int, LWLockMode);
-	probe lwlock__release(const char *, int);
-	probe lwlock__wait__start(const char *, int, LWLockMode);
-	probe lwlock__wait__done(const char *, int, LWLockMode);
-	probe lwlock__condacquire(const char *, int, LWLockMode);
-	probe lwlock__condacquire__fail(const char *, int, LWLockMode);
-	probe lwlock__acquire__or__wait(const char *, int, LWLockMode);
-	probe lwlock__acquire__or__wait__fail(const char *, int, LWLockMode);
+	probe lwlock__acquire(const char *, LWLockMode);
+	probe lwlock__release(const char *);
+	probe lwlock__wait__start(const char *, LWLockMode);
+	probe lwlock__wait__done(const char *, LWLockMode);
+	probe lwlock__condacquire(const char *, LWLockMode);
+	probe lwlock__condacquire__fail(const char *, LWLockMode);
+	probe lwlock__acquire__or__wait(const char *, LWLockMode);
+	probe lwlock__acquire__or__wait__fail(const char *, LWLockMode);
 	probe lock__wait__start(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE);
 	probe lock__wait__done(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE);
......
@@ -104,7 +104,6 @@ typedef struct SlruSharedData
 	/* LWLocks */
 	int			lwlock_tranche_id;
-	LWLockTranche lwlock_tranche;
 	char		lwlock_tranche_name[SLRU_MAX_NAME_LENGTH];
 	LWLockPadded *buffer_locks;
 } SlruSharedData;
......
@@ -715,8 +715,7 @@ typedef enum BackendState
  * Wait Classes
  * ----------
  */
-#define PG_WAIT_LWLOCK_NAMED		0x01000000U
-#define PG_WAIT_LWLOCK_TRANCHE		0x02000000U
+#define PG_WAIT_LWLOCK				0x01000000U
 #define PG_WAIT_LOCK				0x03000000U
 #define PG_WAIT_BUFFER_PIN			0x04000000U
 #define PG_WAIT_ACTIVITY			0x05000000U
......
@@ -24,32 +24,6 @@
 struct PGPROC;
-/*
- * Prior to PostgreSQL 9.4, every lightweight lock in the system was stored
- * in a single array.  For convenience and for compatibility with past
- * releases, we still have a main array, but it's now also permissible to
- * store LWLocks elsewhere in the main shared memory segment or in a dynamic
- * shared memory segment.  Each array of lwlocks forms a separate "tranche".
- *
- * It's occasionally necessary to identify a particular LWLock "by name"; e.g.
- * because we wish to report the lock to dtrace.  We could store a name or
- * other identifying information in the lock itself, but since it's common
- * to have many nearly-identical locks (e.g. one per buffer) this would end
- * up wasting significant amounts of memory.  Instead, each lwlock stores a
- * tranche ID which tells us which array it's part of.  Based on that, we can
- * figure out where the lwlock lies within the array using the data structure
- * shown below; the lock is then identified based on the tranche name and
- * computed array index.  We need the array stride because the array might not
- * be an array of lwlocks, but rather some larger data structure that includes
- * one or more lwlocks per element.
- */
-typedef struct LWLockTranche
-{
-	const char *name;
-	void	   *array_base;
-	Size		array_stride;
-} LWLockTranche;
 /*
  * Code outside of lwlock.c should not manipulate the contents of this
  * structure directly, but we have to declare it here to allow LWLocks to be
@@ -118,8 +92,8 @@ extern char *MainLWLockNames[];
 /* struct for storing named tranche information */
 typedef struct NamedLWLockTranche
 {
-	LWLockTranche lwLockTranche;
 	int			trancheId;
+	char	   *trancheName;
 } NamedLWLockTranche;
 extern PGDLLIMPORT NamedLWLockTranche *NamedLWLockTrancheArray;
@@ -199,9 +173,9 @@ extern LWLockPadded *GetNamedLWLockTranche(const char *tranche_name);
  * There is another, more flexible method of obtaining lwlocks.  First, call
  * LWLockNewTrancheId just once to obtain a tranche ID; this allocates from
  * a shared counter.  Next, each individual process using the tranche should
- * call LWLockRegisterTranche() to associate that tranche ID with appropriate
- * metadata.  Finally, LWLockInitialize should be called just once per lwlock,
- * passing the tranche ID as an argument.
+ * call LWLockRegisterTranche() to associate that tranche ID with a name.
+ * Finally, LWLockInitialize should be called just once per lwlock, passing
+ * the tranche ID as an argument.
  *
  * It may seem strange that each process using the tranche must register it
  * separately, but dynamic shared memory segments aren't guaranteed to be
@@ -209,17 +183,18 @@ extern LWLockPadded *GetNamedLWLockTranche(const char *tranche_name);
  * registration in the main shared memory segment wouldn't work for that case.
  */
 extern int	LWLockNewTrancheId(void);
-extern void LWLockRegisterTranche(int tranche_id, LWLockTranche *tranche);
+extern void LWLockRegisterTranche(int tranche_id, char *tranche_name);
 extern void LWLockInitialize(LWLock *lock, int tranche_id);
 /*
- * We reserve a few predefined tranche IDs.  A call to LWLockNewTrancheId
- * will never return a value less than LWTRANCHE_FIRST_USER_DEFINED.
+ * Every tranche ID less than NUM_INDIVIDUAL_LWLOCKS is reserved; also,
+ * we reserve additional tranche IDs for builtin tranches not included in
+ * the set of individual LWLocks.  A call to LWLockNewTrancheId will never
+ * return a value less than LWTRANCHE_FIRST_USER_DEFINED.
  */
 typedef enum BuiltinTrancheIds
 {
-	LWTRANCHE_MAIN,
-	LWTRANCHE_CLOG_BUFFERS,
+	LWTRANCHE_CLOG_BUFFERS = NUM_INDIVIDUAL_LWLOCKS,
 	LWTRANCHE_COMMITTS_BUFFERS,
 	LWTRANCHE_SUBTRANS_BUFFERS,
 	LWTRANCHE_MXACTOFFSET_BUFFERS,
......