Commit 5da14938 authored by Tom Lane

Rename SLRU structures and associated LWLocks.

Originally, the names assigned to SLRUs had no purpose other than
being shmem lookup keys, so not a lot of thought went into them.
As of v13, though, we're exposing them in the pg_stat_slru view and
the pg_stat_reset_slru function, so it seems advisable to take a bit
more care.  Rename them to names based on the associated on-disk
storage directories (which fortunately we *did* think about, to some
extent; since those are also visible to DBAs, consistency seems like
a good thing).  Also rename the associated LWLocks, since those names
are likewise user-exposed now as wait event names.

For the most part I only touched symbols used in the respective modules'
SimpleLruInit() calls, not the names of other related objects.  This
renaming could have been taken further, and maybe someday we will do so.
But for now it seems undesirable to change the names of any globally
visible functions or structs, so some inconsistency is unavoidable.

(But I *did* terminate "oldserxid" with prejudice, as I found that
name both unreadable and not descriptive of the SLRU's contents.)

Table 27.12 needs re-alphabetization now, but I'll leave that till
after the other LWLock renamings I have in mind.

Discussion: https://postgr.es/m/28683.1589405363@sss.pgh.pa.us
parent 756abe2b
@@ -1754,12 +1754,13 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser
      <entry>Waiting to manage space allocation in shared memory.</entry>
     </row>
     <row>
-     <entry><literal>AsyncCtlLock</literal></entry>
-     <entry>Waiting to read or update shared notification state.</entry>
+     <entry><literal>NotifySLRULock</literal></entry>
+     <entry>Waiting to access the <command>NOTIFY</command> message SLRU
+      cache.</entry>
     </row>
     <row>
-     <entry><literal>AsyncQueueLock</literal></entry>
-     <entry>Waiting to read or update notification messages.</entry>
+     <entry><literal>NotifyQueueLock</literal></entry>
+     <entry>Waiting to read or update <command>NOTIFY</command> messages.</entry>
     </row>
     <row>
      <entry><literal>AutoFileLock</literal></entry>
@@ -1785,13 +1786,13 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser
      B-tree index.</entry>
     </row>
     <row>
-     <entry><literal>CLogControlLock</literal></entry>
-     <entry>Waiting to read or update transaction status.</entry>
+     <entry><literal>XactSLRULock</literal></entry>
+     <entry>Waiting to access the transaction status SLRU cache.</entry>
     </row>
     <row>
-     <entry><literal>CLogTruncationLock</literal></entry>
+     <entry><literal>XactTruncationLock</literal></entry>
      <entry>Waiting to execute <function>pg_xact_status</function> or update
-      the oldest transaction id available to it.</entry>
+      the oldest transaction ID available to it.</entry>
     </row>
     <row>
      <entry><literal>CheckpointLock</literal></entry>
@@ -1802,8 +1803,8 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser
      <entry>Waiting to manage fsync requests.</entry>
     </row>
     <row>
-     <entry><literal>CommitTsControlLock</literal></entry>
-     <entry>Waiting to read or update transaction commit timestamps.</entry>
+     <entry><literal>CommitTsSLRULock</literal></entry>
+     <entry>Waiting to access the commit timestamp SLRU cache.</entry>
     </row>
     <row>
      <entry><literal>CommitTsLock</literal></entry>
@@ -1828,12 +1829,12 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser
      <entry>Waiting to read or update shared multixact state.</entry>
     </row>
     <row>
-     <entry><literal>MultiXactMemberControlLock</literal></entry>
-     <entry>Waiting to read or update multixact member mappings.</entry>
+     <entry><literal>MultiXactMemberSLRULock</literal></entry>
+     <entry>Waiting to access the multixact member SLRU cache.</entry>
     </row>
     <row>
-     <entry><literal>MultiXactOffsetControlLock</literal></entry>
-     <entry>Waiting to read or update multixact offset mappings.</entry>
+     <entry><literal>MultiXactOffsetSLRULock</literal></entry>
+     <entry>Waiting to access the multixact offset SLRU cache.</entry>
     </row>
     <row>
      <entry><literal>MultiXactTruncationLock</literal></entry>
@@ -1844,9 +1845,9 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser
      <entry>Waiting to allocate or assign an OID.</entry>
     </row>
     <row>
-     <entry><literal>OldSerXidLock</literal></entry>
-     <entry>Waiting to read or record conflicting serializable
-      transactions.</entry>
+     <entry><literal>SerialSLRULock</literal></entry>
+     <entry>Waiting to access the serializable transaction conflict SLRU
+      cache.</entry>
     </row>
     <row>
      <entry><literal>OldSnapshotTimeMapLock</literal></entry>
@@ -1907,8 +1908,8 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser
      <entry>Waiting to find or allocate space in shared memory.</entry>
     </row>
     <row>
-     <entry><literal>SubtransControlLock</literal></entry>
-     <entry>Waiting to read or update sub-transaction information.</entry>
+     <entry><literal>SubtransSLRULock</literal></entry>
+     <entry>Waiting to access the sub-transaction SLRU cache.</entry>
     </row>
     <row>
      <entry><literal>SyncRepLock</literal></entry>
@@ -1941,8 +1942,9 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser
      <entry>Waiting to allocate or assign a transaction id.</entry>
     </row>
     <row>
-     <entry><literal>async</literal></entry>
-     <entry>Waiting for I/O on an async (notify) buffer.</entry>
+     <entry><literal>NotifyBuffer</literal></entry>
+     <entry>Waiting for I/O on a <command>NOTIFY</command> message SLRU
+      buffer.</entry>
     </row>
     <row>
      <entry><literal>buffer_content</literal></entry>
@@ -1958,12 +1960,12 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser
      pool.</entry>
     </row>
     <row>
-     <entry><literal>clog</literal></entry>
-     <entry>Waiting for I/O on a clog (transaction status) buffer.</entry>
+     <entry><literal>XactBuffer</literal></entry>
+     <entry>Waiting for I/O on a transaction status SLRU buffer.</entry>
     </row>
     <row>
-     <entry><literal>commit_timestamp</literal></entry>
-     <entry>Waiting for I/O on commit timestamp buffer.</entry>
+     <entry><literal>CommitTsBuffer</literal></entry>
+     <entry>Waiting for I/O on a commit timestamp SLRU buffer.</entry>
     </row>
     <row>
      <entry><literal>lock_manager</literal></entry>
@@ -1971,16 +1973,17 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser
      join or exit a locking group (used by parallel query).</entry>
     </row>
     <row>
-     <entry><literal>multixact_member</literal></entry>
-     <entry>Waiting for I/O on a multixact_member buffer.</entry>
+     <entry><literal>MultiXactMemberBuffer</literal></entry>
+     <entry>Waiting for I/O on a multixact member SLRU buffer.</entry>
     </row>
     <row>
-     <entry><literal>multixact_offset</literal></entry>
-     <entry>Waiting for I/O on a multixact offset buffer.</entry>
+     <entry><literal>MultiXactOffsetBuffer</literal></entry>
+     <entry>Waiting for I/O on a multixact offset SLRU buffer.</entry>
     </row>
     <row>
-     <entry><literal>oldserxid</literal></entry>
-     <entry>Waiting for I/O on an oldserxid buffer.</entry>
+     <entry><literal>SerialBuffer</literal></entry>
+     <entry>Waiting for I/O on a serializable transaction conflict SLRU
+      buffer.</entry>
     </row>
     <row>
      <entry><literal>parallel_append</literal></entry>
@@ -2018,8 +2021,8 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser
      in a parallel query.</entry>
     </row>
     <row>
-     <entry><literal>subtrans</literal></entry>
-     <entry>Waiting for I/O on a subtransaction buffer.</entry>
+     <entry><literal>SubtransBuffer</literal></entry>
+     <entry>Waiting for I/O on a sub-transaction SLRU buffer.</entry>
     </row>
     <row>
      <entry><literal>tbm</literal></entry>
@@ -4190,7 +4193,13 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
   </tgroup>
  </table>

+ <indexterm>
+  <primary>SLRU</primary>
+ </indexterm>
+
  <para>
+  <productname>PostgreSQL</productname> accesses certain on-disk information
+  via <firstterm>SLRU</firstterm> (simple least-recently-used) caches.
   The <structname>pg_stat_slru</structname> view will contain
   one row for each tracked SLRU cache, showing statistics about access
   to cached pages.
@@ -4484,11 +4493,15 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
   Resets statistics to zero for a single SLRU cache, or for all SLRUs in
   the cluster.  If the argument is NULL, all counters shown in
   the <structname>pg_stat_slru</structname> view for all SLRU caches are
-  reset.  The argument can be one of <literal>async</literal>,
-  <literal>clog</literal>, <literal>commit_timestamp</literal>,
-  <literal>multixact_offset</literal>,
-  <literal>multixact_member</literal>, <literal>oldserxid</literal>, or
-  <literal>subtrans</literal> to reset the counters for only that entry.
+  reset.  The argument can be one of
+  <literal>CommitTs</literal>,
+  <literal>MultiXactMember</literal>,
+  <literal>MultiXactOffset</literal>,
+  <literal>Notify</literal>,
+  <literal>Serial</literal>,
+  <literal>Subtrans</literal>, or
+  <literal>Xact</literal>
+  to reset the counters for only that entry.
   If the argument is <literal>other</literal> (or indeed, any
   unrecognized name), then the counters for all other SLRU caches, such
   as extension-defined caches, are reset.
@@ -83,9 +83,9 @@
 /*
  * Link to shared-memory data structures for CLOG control
  */
-static SlruCtlData ClogCtlData;
+static SlruCtlData XactCtlData;

-#define ClogCtl (&ClogCtlData)
+#define XactCtl (&XactCtlData)

 static int	ZeroCLOGPage(int pageno, bool writeXlog);
@@ -280,10 +280,10 @@ TransactionIdSetPageStatus(TransactionId xid, int nsubxids,
 					 "group clog threshold less than PGPROC cached subxids");

 	/*
-	 * When there is contention on CLogControlLock, we try to group multiple
+	 * When there is contention on XactSLRULock, we try to group multiple
 	 * updates; a single leader process will perform transaction status
-	 * updates for multiple backends so that the number of times
-	 * CLogControlLock needs to be acquired is reduced.
+	 * updates for multiple backends so that the number of times XactSLRULock
+	 * needs to be acquired is reduced.
 	 *
 	 * For this optimization to be safe, the XID in MyPgXact and the subxids
 	 * in MyProc must be the same as the ones for which we're setting the
@@ -300,17 +300,17 @@ TransactionIdSetPageStatus(TransactionId xid, int nsubxids,
 		nsubxids * sizeof(TransactionId)) == 0)
 	{
 		/*
-		 * If we can immediately acquire CLogControlLock, we update the status
-		 * of our own XID and release the lock.  If not, try use group XID
+		 * If we can immediately acquire XactSLRULock, we update the status of
+		 * our own XID and release the lock.  If not, try use group XID
 		 * update.  If that doesn't work out, fall back to waiting for the
 		 * lock to perform an update for this transaction only.
 		 */
-		if (LWLockConditionalAcquire(CLogControlLock, LW_EXCLUSIVE))
+		if (LWLockConditionalAcquire(XactSLRULock, LW_EXCLUSIVE))
 		{
 			/* Got the lock without waiting!  Do the update. */
 			TransactionIdSetPageStatusInternal(xid, nsubxids, subxids, status,
 											   lsn, pageno);
-			LWLockRelease(CLogControlLock);
+			LWLockRelease(XactSLRULock);
 			return;
 		}
 		else if (TransactionGroupUpdateXidStatus(xid, status, lsn, pageno))
@@ -323,10 +323,10 @@ TransactionIdSetPageStatus(TransactionId xid, int nsubxids,
 	}

 	/* Group update not applicable, or couldn't accept this page number. */
-	LWLockAcquire(CLogControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(XactSLRULock, LW_EXCLUSIVE);
 	TransactionIdSetPageStatusInternal(xid, nsubxids, subxids, status,
 									   lsn, pageno);
-	LWLockRelease(CLogControlLock);
+	LWLockRelease(XactSLRULock);
 }
/* /*
...@@ -345,7 +345,7 @@ TransactionIdSetPageStatusInternal(TransactionId xid, int nsubxids, ...@@ -345,7 +345,7 @@ TransactionIdSetPageStatusInternal(TransactionId xid, int nsubxids,
Assert(status == TRANSACTION_STATUS_COMMITTED || Assert(status == TRANSACTION_STATUS_COMMITTED ||
status == TRANSACTION_STATUS_ABORTED || status == TRANSACTION_STATUS_ABORTED ||
(status == TRANSACTION_STATUS_SUB_COMMITTED && !TransactionIdIsValid(xid))); (status == TRANSACTION_STATUS_SUB_COMMITTED && !TransactionIdIsValid(xid)));
Assert(LWLockHeldByMeInMode(CLogControlLock, LW_EXCLUSIVE)); Assert(LWLockHeldByMeInMode(XactSLRULock, LW_EXCLUSIVE));
/* /*
* If we're doing an async commit (ie, lsn is valid), then we must wait * If we're doing an async commit (ie, lsn is valid), then we must wait
...@@ -356,7 +356,7 @@ TransactionIdSetPageStatusInternal(TransactionId xid, int nsubxids, ...@@ -356,7 +356,7 @@ TransactionIdSetPageStatusInternal(TransactionId xid, int nsubxids,
* write-busy, since we don't care if the update reaches disk sooner than * write-busy, since we don't care if the update reaches disk sooner than
* we think. * we think.
*/ */
slotno = SimpleLruReadPage(ClogCtl, pageno, XLogRecPtrIsInvalid(lsn), xid); slotno = SimpleLruReadPage(XactCtl, pageno, XLogRecPtrIsInvalid(lsn), xid);
/* /*
* Set the main transaction id, if any. * Set the main transaction id, if any.
...@@ -374,7 +374,7 @@ TransactionIdSetPageStatusInternal(TransactionId xid, int nsubxids, ...@@ -374,7 +374,7 @@ TransactionIdSetPageStatusInternal(TransactionId xid, int nsubxids,
{ {
for (i = 0; i < nsubxids; i++) for (i = 0; i < nsubxids; i++)
{ {
Assert(ClogCtl->shared->page_number[slotno] == TransactionIdToPage(subxids[i])); Assert(XactCtl->shared->page_number[slotno] == TransactionIdToPage(subxids[i]));
TransactionIdSetStatusBit(subxids[i], TransactionIdSetStatusBit(subxids[i],
TRANSACTION_STATUS_SUB_COMMITTED, TRANSACTION_STATUS_SUB_COMMITTED,
lsn, slotno); lsn, slotno);
@@ -388,20 +388,20 @@ TransactionIdSetPageStatusInternal(TransactionId xid, int nsubxids,
 		/* Set the subtransactions */
 		for (i = 0; i < nsubxids; i++)
 		{
-			Assert(ClogCtl->shared->page_number[slotno] == TransactionIdToPage(subxids[i]));
+			Assert(XactCtl->shared->page_number[slotno] == TransactionIdToPage(subxids[i]));
 			TransactionIdSetStatusBit(subxids[i], status, lsn, slotno);
 		}

-	ClogCtl->shared->page_dirty[slotno] = true;
+	XactCtl->shared->page_dirty[slotno] = true;
 }

 /*
- * When we cannot immediately acquire CLogControlLock in exclusive mode at
+ * When we cannot immediately acquire XactSLRULock in exclusive mode at
  * commit time, add ourselves to a list of processes that need their XIDs
  * status update.  The first process to add itself to the list will acquire
- * CLogControlLock in exclusive mode and set transaction status as required
+ * XactSLRULock in exclusive mode and set transaction status as required
  * on behalf of all group members.  This avoids a great deal of contention
- * around CLogControlLock when many processes are trying to commit at once,
+ * around XactSLRULock when many processes are trying to commit at once,
  * since the lock need not be repeatedly handed off from one committing
  * process to the next.
 *
@@ -493,7 +493,7 @@ TransactionGroupUpdateXidStatus(TransactionId xid, XidStatus status,
 	}

 	/* We are the leader.  Acquire the lock on behalf of everyone. */
-	LWLockAcquire(CLogControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(XactSLRULock, LW_EXCLUSIVE);

 	/*
 	 * Now that we've got the lock, clear the list of processes waiting for
@@ -530,7 +530,7 @@ TransactionGroupUpdateXidStatus(TransactionId xid, XidStatus status,
 	}

 	/* We're done with the lock now. */
-	LWLockRelease(CLogControlLock);
+	LWLockRelease(XactSLRULock);

 	/*
 	 * Now that we've released the lock, go back and wake everybody up.  We
@@ -559,7 +559,7 @@ TransactionGroupUpdateXidStatus(TransactionId xid, XidStatus status,
 /*
  * Sets the commit status of a single transaction.
  *
- * Must be called with CLogControlLock held
+ * Must be called with XactSLRULock held
  */
 static void
 TransactionIdSetStatusBit(TransactionId xid, XidStatus status, XLogRecPtr lsn, int slotno)
@@ -570,7 +570,7 @@ TransactionIdSetStatusBit(TransactionId xid, XidStatus status, XLogRecPtr lsn, i
 	char		byteval;
 	char		curval;

-	byteptr = ClogCtl->shared->page_buffer[slotno] + byteno;
+	byteptr = XactCtl->shared->page_buffer[slotno] + byteno;
 	curval = (*byteptr >> bshift) & CLOG_XACT_BITMASK;

 	/*
@@ -610,8 +610,8 @@ TransactionIdSetStatusBit(TransactionId xid, XidStatus status, XLogRecPtr lsn, i
 	{
 		int			lsnindex = GetLSNIndex(slotno, xid);

-		if (ClogCtl->shared->group_lsn[lsnindex] < lsn)
-			ClogCtl->shared->group_lsn[lsnindex] = lsn;
+		if (XactCtl->shared->group_lsn[lsnindex] < lsn)
+			XactCtl->shared->group_lsn[lsnindex] = lsn;
 	}
 }

@@ -643,15 +643,15 @@ TransactionIdGetStatus(TransactionId xid, XLogRecPtr *lsn)

 	/* lock is acquired by SimpleLruReadPage_ReadOnly */

-	slotno = SimpleLruReadPage_ReadOnly(ClogCtl, pageno, xid);
-	byteptr = ClogCtl->shared->page_buffer[slotno] + byteno;
+	slotno = SimpleLruReadPage_ReadOnly(XactCtl, pageno, xid);
+	byteptr = XactCtl->shared->page_buffer[slotno] + byteno;

 	status = (*byteptr >> bshift) & CLOG_XACT_BITMASK;

 	lsnindex = GetLSNIndex(slotno, xid);
-	*lsn = ClogCtl->shared->group_lsn[lsnindex];
+	*lsn = XactCtl->shared->group_lsn[lsnindex];

-	LWLockRelease(CLogControlLock);
+	LWLockRelease(XactSLRULock);

 	return status;
 }
@@ -690,9 +690,9 @@ CLOGShmemSize(void)
 void
 CLOGShmemInit(void)
 {
-	ClogCtl->PagePrecedes = CLOGPagePrecedes;
-	SimpleLruInit(ClogCtl, "clog", CLOGShmemBuffers(), CLOG_LSNS_PER_PAGE,
-				  CLogControlLock, "pg_xact", LWTRANCHE_CLOG_BUFFERS);
+	XactCtl->PagePrecedes = CLOGPagePrecedes;
+	SimpleLruInit(XactCtl, "Xact", CLOGShmemBuffers(), CLOG_LSNS_PER_PAGE,
+				  XactSLRULock, "pg_xact", LWTRANCHE_XACT_BUFFER);
 }

 /*
@@ -706,16 +706,16 @@ BootStrapCLOG(void)
 {
 	int			slotno;

-	LWLockAcquire(CLogControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(XactSLRULock, LW_EXCLUSIVE);

 	/* Create and zero the first page of the commit log */
 	slotno = ZeroCLOGPage(0, false);

 	/* Make sure it's written out */
-	SimpleLruWritePage(ClogCtl, slotno);
-	Assert(!ClogCtl->shared->page_dirty[slotno]);
+	SimpleLruWritePage(XactCtl, slotno);
+	Assert(!XactCtl->shared->page_dirty[slotno]);

-	LWLockRelease(CLogControlLock);
+	LWLockRelease(XactSLRULock);
 }

 /*
@@ -732,7 +732,7 @@ ZeroCLOGPage(int pageno, bool writeXlog)
 {
 	int			slotno;

-	slotno = SimpleLruZeroPage(ClogCtl, pageno);
+	slotno = SimpleLruZeroPage(XactCtl, pageno);

 	if (writeXlog)
 		WriteZeroPageXlogRec(pageno);
@@ -750,14 +750,14 @@ StartupCLOG(void)
 	TransactionId xid = XidFromFullTransactionId(ShmemVariableCache->nextFullXid);
 	int			pageno = TransactionIdToPage(xid);

-	LWLockAcquire(CLogControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(XactSLRULock, LW_EXCLUSIVE);

 	/*
 	 * Initialize our idea of the latest page number.
 	 */
-	ClogCtl->shared->latest_page_number = pageno;
+	XactCtl->shared->latest_page_number = pageno;

-	LWLockRelease(CLogControlLock);
+	LWLockRelease(XactSLRULock);
 }

 /*
@@ -769,12 +769,12 @@ TrimCLOG(void)
 	TransactionId xid = XidFromFullTransactionId(ShmemVariableCache->nextFullXid);
 	int			pageno = TransactionIdToPage(xid);

-	LWLockAcquire(CLogControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(XactSLRULock, LW_EXCLUSIVE);

 	/*
 	 * Re-Initialize our idea of the latest page number.
 	 */
-	ClogCtl->shared->latest_page_number = pageno;
+	XactCtl->shared->latest_page_number = pageno;

 	/*
 	 * Zero out the remainder of the current clog page.  Under normal
@@ -795,18 +795,18 @@ TrimCLOG(void)
 		int			slotno;
 		char	   *byteptr;

-		slotno = SimpleLruReadPage(ClogCtl, pageno, false, xid);
-		byteptr = ClogCtl->shared->page_buffer[slotno] + byteno;
+		slotno = SimpleLruReadPage(XactCtl, pageno, false, xid);
+		byteptr = XactCtl->shared->page_buffer[slotno] + byteno;

 		/* Zero so-far-unused positions in the current byte */
 		*byteptr &= (1 << bshift) - 1;

 		/* Zero the rest of the page */
 		MemSet(byteptr + 1, 0, BLCKSZ - byteno - 1);

-		ClogCtl->shared->page_dirty[slotno] = true;
+		XactCtl->shared->page_dirty[slotno] = true;
 	}

-	LWLockRelease(CLogControlLock);
+	LWLockRelease(XactSLRULock);
 }

 /*
@@ -817,7 +817,7 @@ ShutdownCLOG(void)
 {
 	/* Flush dirty CLOG pages to disk */
 	TRACE_POSTGRESQL_CLOG_CHECKPOINT_START(false);
-	SimpleLruFlush(ClogCtl, false);
+	SimpleLruFlush(XactCtl, false);

 	/*
 	 * fsync pg_xact to ensure that any files flushed previously are durably
@@ -836,7 +836,7 @@ CheckPointCLOG(void)
 {
 	/* Flush dirty CLOG pages to disk */
 	TRACE_POSTGRESQL_CLOG_CHECKPOINT_START(true);
-	SimpleLruFlush(ClogCtl, true);
+	SimpleLruFlush(XactCtl, true);

 	/*
 	 * fsync pg_xact to ensure that any files flushed previously are durably
@@ -871,12 +871,12 @@ ExtendCLOG(TransactionId newestXact)
 	pageno = TransactionIdToPage(newestXact);

-	LWLockAcquire(CLogControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(XactSLRULock, LW_EXCLUSIVE);

 	/* Zero the page and make an XLOG entry about it */
 	ZeroCLOGPage(pageno, true);

-	LWLockRelease(CLogControlLock);
+	LWLockRelease(XactSLRULock);
 }

@@ -907,7 +907,7 @@ TruncateCLOG(TransactionId oldestXact, Oid oldestxid_datoid)
 	cutoffPage = TransactionIdToPage(oldestXact);

 	/* Check to see if there's any files that could be removed */
-	if (!SlruScanDirectory(ClogCtl, SlruScanDirCbReportPresence, &cutoffPage))
+	if (!SlruScanDirectory(XactCtl, SlruScanDirCbReportPresence, &cutoffPage))
 		return;					/* nothing to remove */

 	/*
@@ -928,7 +928,7 @@ TruncateCLOG(TransactionId oldestXact, Oid oldestxid_datoid)
 	WriteTruncateXlogRec(cutoffPage, oldestXact, oldestxid_datoid);

 	/* Now we can remove the old CLOG segment(s) */
-	SimpleLruTruncate(ClogCtl, cutoffPage);
+	SimpleLruTruncate(XactCtl, cutoffPage);
 }
@@ -1007,13 +1007,13 @@ clog_redo(XLogReaderState *record)
 		memcpy(&pageno, XLogRecGetData(record), sizeof(int));

-		LWLockAcquire(CLogControlLock, LW_EXCLUSIVE);
+		LWLockAcquire(XactSLRULock, LW_EXCLUSIVE);

 		slotno = ZeroCLOGPage(pageno, false);
-		SimpleLruWritePage(ClogCtl, slotno);
-		Assert(!ClogCtl->shared->page_dirty[slotno]);
+		SimpleLruWritePage(XactCtl, slotno);
+		Assert(!XactCtl->shared->page_dirty[slotno]);

-		LWLockRelease(CLogControlLock);
+		LWLockRelease(XactSLRULock);
 	}
 	else if (info == CLOG_TRUNCATE)
 	{
@@ -1025,11 +1025,11 @@ clog_redo(XLogReaderState *record)
 		 * During XLOG replay, latest_page_number isn't set up yet; insert a
 		 * suitable value to bypass the sanity test in SimpleLruTruncate.
 		 */
-		ClogCtl->shared->latest_page_number = xlrec.pageno;
+		XactCtl->shared->latest_page_number = xlrec.pageno;

 		AdvanceOldestClogXid(xlrec.oldestXact);

-		SimpleLruTruncate(ClogCtl, xlrec.pageno);
+		SimpleLruTruncate(XactCtl, xlrec.pageno);
 	}
 	else
 		elog(PANIC, "clog_redo: unknown op code %u", info);
@@ -235,7 +235,7 @@ SetXidCommitTsInPage(TransactionId xid, int nsubxids,
 	int		slotno;
 	int		i;
-	LWLockAcquire(CommitTsControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(CommitTsSLRULock, LW_EXCLUSIVE);
 	slotno = SimpleLruReadPage(CommitTsCtl, pageno, true, xid);
@@ -245,13 +245,13 @@ SetXidCommitTsInPage(TransactionId xid, int nsubxids,
 	CommitTsCtl->shared->page_dirty[slotno] = true;
-	LWLockRelease(CommitTsControlLock);
+	LWLockRelease(CommitTsSLRULock);
 }
 /*
  * Sets the commit timestamp of a single transaction.
  *
- * Must be called with CommitTsControlLock held
+ * Must be called with CommitTsSLRULock held
  */
 static void
 TransactionIdSetCommitTs(TransactionId xid, TimestampTz ts,
@@ -352,7 +352,7 @@ TransactionIdGetCommitTsData(TransactionId xid, TimestampTz *ts,
 	if (nodeid)
 		*nodeid = entry.nodeid;
-	LWLockRelease(CommitTsControlLock);
+	LWLockRelease(CommitTsSLRULock);
 	return *ts != 0;
 }
@@ -492,9 +492,9 @@ CommitTsShmemInit(void)
 	bool		found;
 	CommitTsCtl->PagePrecedes = CommitTsPagePrecedes;
-	SimpleLruInit(CommitTsCtl, "commit_timestamp", CommitTsShmemBuffers(), 0,
-				  CommitTsControlLock, "pg_commit_ts",
-				  LWTRANCHE_COMMITTS_BUFFERS);
+	SimpleLruInit(CommitTsCtl, "CommitTs", CommitTsShmemBuffers(), 0,
+				  CommitTsSLRULock, "pg_commit_ts",
+				  LWTRANCHE_COMMITTS_BUFFER);
 	commitTsShared = ShmemInitStruct("CommitTs shared",
 									 sizeof(CommitTimestampShared),
@@ -649,9 +649,9 @@ ActivateCommitTs(void)
 	/*
 	 * Re-Initialize our idea of the latest page number.
 	 */
-	LWLockAcquire(CommitTsControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(CommitTsSLRULock, LW_EXCLUSIVE);
 	CommitTsCtl->shared->latest_page_number = pageno;
-	LWLockRelease(CommitTsControlLock);
+	LWLockRelease(CommitTsSLRULock);
 	/*
 	 * If CommitTs is enabled, but it wasn't in the previous server run, we
@@ -679,11 +679,11 @@ ActivateCommitTs(void)
 	{
 		int		slotno;
-		LWLockAcquire(CommitTsControlLock, LW_EXCLUSIVE);
+		LWLockAcquire(CommitTsSLRULock, LW_EXCLUSIVE);
 		slotno = ZeroCommitTsPage(pageno, false);
 		SimpleLruWritePage(CommitTsCtl, slotno);
 		Assert(!CommitTsCtl->shared->page_dirty[slotno]);
-		LWLockRelease(CommitTsControlLock);
+		LWLockRelease(CommitTsSLRULock);
 	}
 	/* Change the activation status in shared memory. */
@@ -732,9 +732,9 @@ DeactivateCommitTs(void)
 	 * be overwritten anyway when we wrap around, but it seems better to be
 	 * tidy.)
 	 */
-	LWLockAcquire(CommitTsControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(CommitTsSLRULock, LW_EXCLUSIVE);
 	(void) SlruScanDirectory(CommitTsCtl, SlruScanDirCbDeleteAll, NULL);
-	LWLockRelease(CommitTsControlLock);
+	LWLockRelease(CommitTsSLRULock);
 }
 /*
@@ -804,12 +804,12 @@ ExtendCommitTs(TransactionId newestXact)
 	pageno = TransactionIdToCTsPage(newestXact);
-	LWLockAcquire(CommitTsControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(CommitTsSLRULock, LW_EXCLUSIVE);
 	/* Zero the page and make an XLOG entry about it */
 	ZeroCommitTsPage(pageno, !InRecovery);
-	LWLockRelease(CommitTsControlLock);
+	LWLockRelease(CommitTsSLRULock);
 }
 /*
@@ -974,13 +974,13 @@ commit_ts_redo(XLogReaderState *record)
 	memcpy(&pageno, XLogRecGetData(record), sizeof(int));
-	LWLockAcquire(CommitTsControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(CommitTsSLRULock, LW_EXCLUSIVE);
 	slotno = ZeroCommitTsPage(pageno, false);
 	SimpleLruWritePage(CommitTsCtl, slotno);
 	Assert(!CommitTsCtl->shared->page_dirty[slotno]);
-	LWLockRelease(CommitTsControlLock);
+	LWLockRelease(CommitTsSLRULock);
 }
 else if (info == COMMIT_TS_TRUNCATE)
 {
...
@@ -192,8 +192,8 @@ static SlruCtlData MultiXactMemberCtlData;
 /*
  * MultiXact state shared across all backends.  All this state is protected
- * by MultiXactGenLock.  (We also use MultiXactOffsetControlLock and
- * MultiXactMemberControlLock to guard accesses to the two sets of SLRU
+ * by MultiXactGenLock.  (We also use MultiXactOffsetSLRULock and
+ * MultiXactMemberSLRULock to guard accesses to the two sets of SLRU
  * buffers.  For concurrency's sake, we avoid holding more than one of these
  * locks at a time.)
  */
@@ -850,7 +850,7 @@ RecordNewMultiXact(MultiXactId multi, MultiXactOffset offset,
 	MultiXactOffset *offptr;
 	int		i;
-	LWLockAcquire(MultiXactOffsetControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(MultiXactOffsetSLRULock, LW_EXCLUSIVE);
 	pageno = MultiXactIdToOffsetPage(multi);
 	entryno = MultiXactIdToOffsetEntry(multi);
@@ -871,9 +871,9 @@ RecordNewMultiXact(MultiXactId multi, MultiXactOffset offset,
 	MultiXactOffsetCtl->shared->page_dirty[slotno] = true;
 	/* Exchange our lock */
-	LWLockRelease(MultiXactOffsetControlLock);
-	LWLockAcquire(MultiXactMemberControlLock, LW_EXCLUSIVE);
+	LWLockRelease(MultiXactOffsetSLRULock);
+	LWLockAcquire(MultiXactMemberSLRULock, LW_EXCLUSIVE);
 	prev_pageno = -1;
@@ -915,7 +915,7 @@ RecordNewMultiXact(MultiXactId multi, MultiXactOffset offset,
 	MultiXactMemberCtl->shared->page_dirty[slotno] = true;
 	}
-	LWLockRelease(MultiXactMemberControlLock);
+	LWLockRelease(MultiXactMemberSLRULock);
 }
 /*
@@ -1321,7 +1321,7 @@ GetMultiXactIdMembers(MultiXactId multi, MultiXactMember **members,
 	 * time on every multixact creation.
 	 */
 retry:
-	LWLockAcquire(MultiXactOffsetControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(MultiXactOffsetSLRULock, LW_EXCLUSIVE);
 	pageno = MultiXactIdToOffsetPage(multi);
 	entryno = MultiXactIdToOffsetEntry(multi);
@@ -1367,7 +1367,7 @@ retry:
 	if (nextMXOffset == 0)
 	{
 		/* Corner case 2: next multixact is still being filled in */
-		LWLockRelease(MultiXactOffsetControlLock);
+		LWLockRelease(MultiXactOffsetSLRULock);
 		CHECK_FOR_INTERRUPTS();
 		pg_usleep(1000L);
 		goto retry;
@@ -1376,13 +1376,13 @@ retry:
 		length = nextMXOffset - offset;
 	}
-	LWLockRelease(MultiXactOffsetControlLock);
+	LWLockRelease(MultiXactOffsetSLRULock);
 	ptr = (MultiXactMember *) palloc(length * sizeof(MultiXactMember));
 	*members = ptr;
 	/* Now get the members themselves. */
-	LWLockAcquire(MultiXactMemberControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(MultiXactMemberSLRULock, LW_EXCLUSIVE);
 	truelength = 0;
 	prev_pageno = -1;
@@ -1422,7 +1422,7 @@ retry:
 		truelength++;
 	}
-	LWLockRelease(MultiXactMemberControlLock);
+	LWLockRelease(MultiXactMemberSLRULock);
 	/*
 	 * Copy the result into the local cache.
@@ -1812,8 +1812,8 @@ MultiXactShmemSize(void)
 			 mul_size(sizeof(MultiXactId) * 2, MaxOldestSlot))
 	size = SHARED_MULTIXACT_STATE_SIZE;
-	size = add_size(size, SimpleLruShmemSize(NUM_MXACTOFFSET_BUFFERS, 0));
-	size = add_size(size, SimpleLruShmemSize(NUM_MXACTMEMBER_BUFFERS, 0));
+	size = add_size(size, SimpleLruShmemSize(NUM_MULTIXACTOFFSET_BUFFERS, 0));
+	size = add_size(size, SimpleLruShmemSize(NUM_MULTIXACTMEMBER_BUFFERS, 0));
 	return size;
 }
@@ -1829,13 +1829,13 @@ MultiXactShmemInit(void)
 	MultiXactMemberCtl->PagePrecedes = MultiXactMemberPagePrecedes;
 	SimpleLruInit(MultiXactOffsetCtl,
-				  "multixact_offset", NUM_MXACTOFFSET_BUFFERS, 0,
-				  MultiXactOffsetControlLock, "pg_multixact/offsets",
-				  LWTRANCHE_MXACTOFFSET_BUFFERS);
+				  "MultiXactOffset", NUM_MULTIXACTOFFSET_BUFFERS, 0,
+				  MultiXactOffsetSLRULock, "pg_multixact/offsets",
+				  LWTRANCHE_MULTIXACTOFFSET_BUFFER);
 	SimpleLruInit(MultiXactMemberCtl,
-				  "multixact_member", NUM_MXACTMEMBER_BUFFERS, 0,
-				  MultiXactMemberControlLock, "pg_multixact/members",
-				  LWTRANCHE_MXACTMEMBER_BUFFERS);
+				  "MultiXactMember", NUM_MULTIXACTMEMBER_BUFFERS, 0,
+				  MultiXactMemberSLRULock, "pg_multixact/members",
+				  LWTRANCHE_MULTIXACTMEMBER_BUFFER);
 	/* Initialize our shared state struct */
 	MultiXactState = ShmemInitStruct("Shared MultiXact State",
@@ -1869,7 +1869,7 @@ BootStrapMultiXact(void)
 {
 	int		slotno;
-	LWLockAcquire(MultiXactOffsetControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(MultiXactOffsetSLRULock, LW_EXCLUSIVE);
 	/* Create and zero the first page of the offsets log */
 	slotno = ZeroMultiXactOffsetPage(0, false);
@@ -1878,9 +1878,9 @@ BootStrapMultiXact(void)
 	SimpleLruWritePage(MultiXactOffsetCtl, slotno);
 	Assert(!MultiXactOffsetCtl->shared->page_dirty[slotno]);
-	LWLockRelease(MultiXactOffsetControlLock);
-	LWLockAcquire(MultiXactMemberControlLock, LW_EXCLUSIVE);
+	LWLockRelease(MultiXactOffsetSLRULock);
+	LWLockAcquire(MultiXactMemberSLRULock, LW_EXCLUSIVE);
 	/* Create and zero the first page of the members log */
 	slotno = ZeroMultiXactMemberPage(0, false);
@@ -1889,7 +1889,7 @@ BootStrapMultiXact(void)
 	SimpleLruWritePage(MultiXactMemberCtl, slotno);
 	Assert(!MultiXactMemberCtl->shared->page_dirty[slotno]);
-	LWLockRelease(MultiXactMemberControlLock);
+	LWLockRelease(MultiXactMemberSLRULock);
 }
 /*
@@ -1952,7 +1952,7 @@ MaybeExtendOffsetSlru(void)
 	pageno = MultiXactIdToOffsetPage(MultiXactState->nextMXact);
-	LWLockAcquire(MultiXactOffsetControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(MultiXactOffsetSLRULock, LW_EXCLUSIVE);
 	if (!SimpleLruDoesPhysicalPageExist(MultiXactOffsetCtl, pageno))
 	{
@@ -1967,7 +1967,7 @@ MaybeExtendOffsetSlru(void)
 		SimpleLruWritePage(MultiXactOffsetCtl, slotno);
 	}
-	LWLockRelease(MultiXactOffsetControlLock);
+	LWLockRelease(MultiXactOffsetSLRULock);
 }
 /*
@@ -2020,7 +2020,7 @@ TrimMultiXact(void)
 	LWLockRelease(MultiXactGenLock);
 	/* Clean up offsets state */
-	LWLockAcquire(MultiXactOffsetControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(MultiXactOffsetSLRULock, LW_EXCLUSIVE);
 	/*
 	 * (Re-)Initialize our idea of the latest page number for offsets.
@@ -2051,10 +2051,10 @@ TrimMultiXact(void)
 		MultiXactOffsetCtl->shared->page_dirty[slotno] = true;
 	}
-	LWLockRelease(MultiXactOffsetControlLock);
+	LWLockRelease(MultiXactOffsetSLRULock);
 	/* And the same for members */
-	LWLockAcquire(MultiXactMemberControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(MultiXactMemberSLRULock, LW_EXCLUSIVE);
 	/*
 	 * (Re-)Initialize our idea of the latest page number for members.
@@ -2089,7 +2089,7 @@ TrimMultiXact(void)
 		MultiXactMemberCtl->shared->page_dirty[slotno] = true;
 	}
-	LWLockRelease(MultiXactMemberControlLock);
+	LWLockRelease(MultiXactMemberSLRULock);
 	/* signal that we're officially up */
 	LWLockAcquire(MultiXactGenLock, LW_EXCLUSIVE);
@@ -2402,12 +2402,12 @@ ExtendMultiXactOffset(MultiXactId multi)
 	pageno = MultiXactIdToOffsetPage(multi);
-	LWLockAcquire(MultiXactOffsetControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(MultiXactOffsetSLRULock, LW_EXCLUSIVE);
 	/* Zero the page and make an XLOG entry about it */
 	ZeroMultiXactOffsetPage(pageno, true);
-	LWLockRelease(MultiXactOffsetControlLock);
+	LWLockRelease(MultiXactOffsetSLRULock);
 }
 /*
@@ -2443,12 +2443,12 @@ ExtendMultiXactMember(MultiXactOffset offset, int nmembers)
 	pageno = MXOffsetToMemberPage(offset);
-	LWLockAcquire(MultiXactMemberControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(MultiXactMemberSLRULock, LW_EXCLUSIVE);
 	/* Zero the page and make an XLOG entry about it */
 	ZeroMultiXactMemberPage(pageno, true);
-	LWLockRelease(MultiXactMemberControlLock);
+	LWLockRelease(MultiXactMemberSLRULock);
 }
 /*
@@ -2749,7 +2749,7 @@ find_multixact_start(MultiXactId multi, MultiXactOffset *result)
 	offptr = (MultiXactOffset *) MultiXactOffsetCtl->shared->page_buffer[slotno];
 	offptr += entryno;
 	offset = *offptr;
-	LWLockRelease(MultiXactOffsetControlLock);
+	LWLockRelease(MultiXactOffsetSLRULock);
 	*result = offset;
 	return true;
@@ -3230,13 +3230,13 @@ multixact_redo(XLogReaderState *record)
 	memcpy(&pageno, XLogRecGetData(record), sizeof(int));
-	LWLockAcquire(MultiXactOffsetControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(MultiXactOffsetSLRULock, LW_EXCLUSIVE);
 	slotno = ZeroMultiXactOffsetPage(pageno, false);
 	SimpleLruWritePage(MultiXactOffsetCtl, slotno);
 	Assert(!MultiXactOffsetCtl->shared->page_dirty[slotno]);
-	LWLockRelease(MultiXactOffsetControlLock);
+	LWLockRelease(MultiXactOffsetSLRULock);
 }
 else if (info == XLOG_MULTIXACT_ZERO_MEM_PAGE)
 {
@@ -3245,13 +3245,13 @@ multixact_redo(XLogReaderState *record)
 	memcpy(&pageno, XLogRecGetData(record), sizeof(int));
-	LWLockAcquire(MultiXactMemberControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(MultiXactMemberSLRULock, LW_EXCLUSIVE);
 	slotno = ZeroMultiXactMemberPage(pageno, false);
 	SimpleLruWritePage(MultiXactMemberCtl, slotno);
 	Assert(!MultiXactMemberCtl->shared->page_dirty[slotno]);
-	LWLockRelease(MultiXactMemberControlLock);
+	LWLockRelease(MultiXactMemberSLRULock);
 }
 else if (info == XLOG_MULTIXACT_CREATE_ID)
 {
...
@@ -160,6 +160,17 @@ SimpleLruShmemSize(int nslots, int nlsns)
 	return BUFFERALIGN(sz) + BLCKSZ * nslots;
 }
+/*
+ * Initialize, or attach to, a simple LRU cache in shared memory.
+ *
+ * ctl: address of local (unshared) control structure.
+ * name: name of SLRU.  (This is user-visible, pick with care!)
+ * nslots: number of page slots to use.
+ * nlsns: number of LSN groups per page (set to zero if not relevant).
+ * ctllock: LWLock to use to control access to the shared control structure.
+ * subdir: PGDATA-relative subdirectory that will contain the files.
+ * tranche_id: LWLock tranche ID to use for the SLRU's per-buffer LWLocks.
+ */
 void
 SimpleLruInit(SlruCtl ctl, const char *name, int nslots, int nlsns,
 			  LWLock *ctllock, const char *subdir, int tranche_id)
...
@@ -81,7 +81,7 @@ SubTransSetParent(TransactionId xid, TransactionId parent)
 	Assert(TransactionIdIsValid(parent));
 	Assert(TransactionIdFollows(xid, parent));
-	LWLockAcquire(SubtransControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(SubtransSLRULock, LW_EXCLUSIVE);
 	slotno = SimpleLruReadPage(SubTransCtl, pageno, true, xid);
 	ptr = (TransactionId *) SubTransCtl->shared->page_buffer[slotno];
@@ -99,7 +99,7 @@ SubTransSetParent(TransactionId xid, TransactionId parent)
 		SubTransCtl->shared->page_dirty[slotno] = true;
 	}
-	LWLockRelease(SubtransControlLock);
+	LWLockRelease(SubtransSLRULock);
 }
 /*
@@ -129,7 +129,7 @@ SubTransGetParent(TransactionId xid)
 	parent = *ptr;
-	LWLockRelease(SubtransControlLock);
+	LWLockRelease(SubtransSLRULock);
 	return parent;
 }
@@ -191,9 +191,9 @@ void
 SUBTRANSShmemInit(void)
 {
 	SubTransCtl->PagePrecedes = SubTransPagePrecedes;
-	SimpleLruInit(SubTransCtl, "subtrans", NUM_SUBTRANS_BUFFERS, 0,
-				  SubtransControlLock, "pg_subtrans",
-				  LWTRANCHE_SUBTRANS_BUFFERS);
+	SimpleLruInit(SubTransCtl, "Subtrans", NUM_SUBTRANS_BUFFERS, 0,
+				  SubtransSLRULock, "pg_subtrans",
+				  LWTRANCHE_SUBTRANS_BUFFER);
 	/* Override default assumption that writes should be fsync'd */
 	SubTransCtl->do_fsync = false;
 }
@@ -213,7 +213,7 @@ BootStrapSUBTRANS(void)
 {
 	int		slotno;
-	LWLockAcquire(SubtransControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(SubtransSLRULock, LW_EXCLUSIVE);
 	/* Create and zero the first page of the subtrans log */
 	slotno = ZeroSUBTRANSPage(0);
@@ -222,7 +222,7 @@ BootStrapSUBTRANS(void)
 	SimpleLruWritePage(SubTransCtl, slotno);
 	Assert(!SubTransCtl->shared->page_dirty[slotno]);
-	LWLockRelease(SubtransControlLock);
+	LWLockRelease(SubtransSLRULock);
 }
 /*
@@ -259,7 +259,7 @@ StartupSUBTRANS(TransactionId oldestActiveXID)
 	 * Whenever we advance into a new page, ExtendSUBTRANS will likewise zero
 	 * the new page without regard to whatever was previously on disk.
 	 */
-	LWLockAcquire(SubtransControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(SubtransSLRULock, LW_EXCLUSIVE);
 	startPage = TransactionIdToPage(oldestActiveXID);
 	nextFullXid = ShmemVariableCache->nextFullXid;
@@ -275,7 +275,7 @@ StartupSUBTRANS(TransactionId oldestActiveXID)
 	}
 	(void) ZeroSUBTRANSPage(startPage);
-	LWLockRelease(SubtransControlLock);
+	LWLockRelease(SubtransSLRULock);
 }
 /*
@@ -337,12 +337,12 @@ ExtendSUBTRANS(TransactionId newestXact)
 	pageno = TransactionIdToPage(newestXact);
-	LWLockAcquire(SubtransControlLock, LW_EXCLUSIVE);
+	LWLockAcquire(SubtransSLRULock, LW_EXCLUSIVE);
 	/* Zero the page */
 	ZeroSUBTRANSPage(pageno);
-	LWLockRelease(SubtransControlLock);
+	LWLockRelease(SubtransSLRULock);
}
...
@@ -303,22 +303,22 @@ AdvanceNextFullTransactionIdPastXid(TransactionId xid)
 /*
  * Advance the cluster-wide value for the oldest valid clog entry.
  *
- * We must acquire CLogTruncationLock to advance the oldestClogXid. It's not
+ * We must acquire XactTruncationLock to advance the oldestClogXid. It's not
  * necessary to hold the lock during the actual clog truncation, only when we
  * advance the limit, as code looking up arbitrary xids is required to hold
- * CLogTruncationLock from when it tests oldestClogXid through to when it
+ * XactTruncationLock from when it tests oldestClogXid through to when it
  * completes the clog lookup.
  */
 void
 AdvanceOldestClogXid(TransactionId oldest_datfrozenxid)
 {
-	LWLockAcquire(CLogTruncationLock, LW_EXCLUSIVE);
+	LWLockAcquire(XactTruncationLock, LW_EXCLUSIVE);
 	if (TransactionIdPrecedes(ShmemVariableCache->oldestClogXid,
 							  oldest_datfrozenxid))
 	{
 		ShmemVariableCache->oldestClogXid = oldest_datfrozenxid;
 	}
-	LWLockRelease(CLogTruncationLock);
+	LWLockRelease(XactTruncationLock);
 }
 /*
...
@@ -107,7 +107,7 @@
  * frontend during startup.)  The above design guarantees that notifies from
  * other backends will never be missed by ignoring self-notifies.
  *
- * The amount of shared memory used for notify management (NUM_ASYNC_BUFFERS)
+ * The amount of shared memory used for notify management (NUM_NOTIFY_BUFFERS)
  * can be varied without affecting anything but performance.  The maximum
  * amount of notification data that can be queued at one time is determined
  * by slru.c's wraparound limit; see QUEUE_MAX_PAGE below.
@@ -225,7 +225,7 @@ typedef struct QueuePosition
  *
  * Resist the temptation to make this really large.  While that would save
  * work in some places, it would add cost in others.  In particular, this
- * should likely be less than NUM_ASYNC_BUFFERS, to ensure that backends
+ * should likely be less than NUM_NOTIFY_BUFFERS, to ensure that backends
  * catch up before the pages they'll need to read fall out of SLRU cache.
  */
 #define QUEUE_CLEANUP_DELAY 4
@@ -244,7 +244,7 @@ typedef struct QueueBackendStatus
 /*
  * Shared memory state for LISTEN/NOTIFY (excluding its SLRU stuff)
  *
- * The AsyncQueueControl structure is protected by the AsyncQueueLock.
+ * The AsyncQueueControl structure is protected by the NotifyQueueLock.
  *
  * When holding the lock in SHARED mode, backends may only inspect their own
  * entries as well as the head and tail pointers.  Consequently we can allow a
@@ -254,9 +254,9 @@ typedef struct QueueBackendStatus
  * When holding the lock in EXCLUSIVE mode, backends can inspect the entries
  * of other backends and also change the head and tail pointers.
  *
- * AsyncCtlLock is used as the control lock for the pg_notify SLRU buffers.
+ * NotifySLRULock is used as the control lock for the pg_notify SLRU buffers.
  * In order to avoid deadlocks, whenever we need both locks, we always first
- * get AsyncQueueLock and then AsyncCtlLock.
+ * get NotifyQueueLock and then NotifySLRULock.
  *
  * Each backend uses the backend[] array entry with index equal to its
  * BackendId (which can range from 1 to MaxBackends).  We rely on this to make
@@ -292,9 +292,9 @@ static AsyncQueueControl *asyncQueueControl;
 /*
  * The SLRU buffer area through which we access the notification queue
  */
-static SlruCtlData AsyncCtlData;
-#define AsyncCtl (&AsyncCtlData)
+static SlruCtlData NotifyCtlData;
+#define NotifyCtl (&NotifyCtlData)
 #define QUEUE_PAGESIZE BLCKSZ
 #define QUEUE_FULL_WARN_INTERVAL 5000 /* warn at most once every 5s */
@@ -506,7 +506,7 @@ AsyncShmemSize(void)
 	size = mul_size(MaxBackends + 1, sizeof(QueueBackendStatus));
 	size = add_size(size, offsetof(AsyncQueueControl, backend));
-	size = add_size(size, SimpleLruShmemSize(NUM_ASYNC_BUFFERS, 0));
+	size = add_size(size, SimpleLruShmemSize(NUM_NOTIFY_BUFFERS, 0));
 	return size;
 }
@@ -552,18 +552,18 @@ AsyncShmemInit(void)
 	/*
 	 * Set up SLRU management of the pg_notify data.
 	 */
-	AsyncCtl->PagePrecedes = asyncQueuePagePrecedes;
-	SimpleLruInit(AsyncCtl, "async", NUM_ASYNC_BUFFERS, 0,
-				  AsyncCtlLock, "pg_notify", LWTRANCHE_ASYNC_BUFFERS);
+	NotifyCtl->PagePrecedes = asyncQueuePagePrecedes;
+	SimpleLruInit(NotifyCtl, "Notify", NUM_NOTIFY_BUFFERS, 0,
+				  NotifySLRULock, "pg_notify", LWTRANCHE_NOTIFY_BUFFER);
 	/* Override default assumption that writes should be fsync'd */
-	AsyncCtl->do_fsync = false;
+	NotifyCtl->do_fsync = false;
 	if (!found)
 	{
 		/*
 		 * During start or reboot, clean out the pg_notify directory.
 		 */
-		(void) SlruScanDirectory(AsyncCtl, SlruScanDirCbDeleteAll, NULL);
+		(void) SlruScanDirectory(NotifyCtl, SlruScanDirCbDeleteAll, NULL);
 	}
 }
@@ -918,7 +918,7 @@ PreCommit_Notify(void)
 	 * Make sure that we have an XID assigned to the current transaction.
 	 * GetCurrentTransactionId is cheap if we already have an XID, but not
 	 * so cheap if we don't, and we'd prefer not to do that work while
-	 * holding AsyncQueueLock.
+	 * holding NotifyQueueLock.
 	 */
 	(void) GetCurrentTransactionId();
@@ -949,7 +949,7 @@ PreCommit_Notify(void)
 	{
 		/*
 		 * Add the pending notifications to the queue.  We acquire and
-		 * release AsyncQueueLock once per page, which might be overkill
+		 * release NotifyQueueLock once per page, which might be overkill
 		 * but it does allow readers to get in while we're doing this.
 		 *
 		 * A full queue is very uncommon and should really not happen,
@@ -959,14 +959,14 @@ PreCommit_Notify(void)
 		 * transaction, but we have not yet committed to clog, so at this
 		 * point in time we can still roll the transaction back.
 		 */
-		LWLockAcquire(AsyncQueueLock, LW_EXCLUSIVE);
+		LWLockAcquire(NotifyQueueLock, LW_EXCLUSIVE);
 		asyncQueueFillWarning();
 		if (asyncQueueIsFull())
 			ereport(ERROR,
 					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
 					 errmsg("too many notifications in the NOTIFY queue")));
 		nextNotify = asyncQueueAddEntries(nextNotify);
-		LWLockRelease(AsyncQueueLock);
+		LWLockRelease(NotifyQueueLock);
 		}
 	}
 }
@@ -1075,7 +1075,7 @@ Exec_ListenPreCommit(void)
 	 * We need exclusive lock here so we can look at other backends' entries
 	 * and manipulate the list links.
 	 */
-	LWLockAcquire(AsyncQueueLock, LW_EXCLUSIVE);
+	LWLockAcquire(NotifyQueueLock, LW_EXCLUSIVE);
 	head = QUEUE_HEAD;
 	max = QUEUE_TAIL;
 	prevListener = InvalidBackendId;
@@ -1101,7 +1101,7 @@ Exec_ListenPreCommit(void)
 		QUEUE_NEXT_LISTENER(MyBackendId) = QUEUE_FIRST_LISTENER;
 		QUEUE_FIRST_LISTENER = MyBackendId;
 	}
-	LWLockRelease(AsyncQueueLock);
+	LWLockRelease(NotifyQueueLock);
 	/* Now we are listed in the global array, so remember we're listening */
 	amRegisteredListener = true;
@@ -1308,7 +1308,7 @@ asyncQueueUnregister(void)
 	/*
 	 * Need exclusive lock here to manipulate list links.
 	 */
-	LWLockAcquire(AsyncQueueLock, LW_EXCLUSIVE);
+	LWLockAcquire(NotifyQueueLock, LW_EXCLUSIVE);
 	/* Mark our entry as invalid */
 	QUEUE_BACKEND_PID(MyBackendId) = InvalidPid;
 	QUEUE_BACKEND_DBOID(MyBackendId) = InvalidOid;
@@ -1327,7 +1327,7 @@ asyncQueueUnregister(void)
 		}
 	}
 	QUEUE_NEXT_LISTENER(MyBackendId) = InvalidBackendId;
-	LWLockRelease(AsyncQueueLock);
+	LWLockRelease(NotifyQueueLock);
 	/* mark ourselves as no longer listed in the global array */
 	amRegisteredListener = false;
@@ -1336,7 +1336,7 @@ asyncQueueUnregister(void)
 /*
  * Test whether there is room to insert more notification messages.
  *
- * Caller must hold at least shared AsyncQueueLock.
+ * Caller must hold at least shared NotifyQueueLock.
  */
 static bool
 asyncQueueIsFull(void)
@@ -1437,8 +1437,8 @@ asyncQueueNotificationToEntry(Notification *n, AsyncQueueEntry *qe)
  * notification to write and return the first still-unwritten cell back.
* Eventually we will return NULL indicating all is done. * Eventually we will return NULL indicating all is done.
* *
* We are holding AsyncQueueLock already from the caller and grab AsyncCtlLock * We are holding NotifyQueueLock already from the caller and grab
* locally in this function. * NotifySLRULock locally in this function.
*/ */
static ListCell * static ListCell *
asyncQueueAddEntries(ListCell *nextNotify) asyncQueueAddEntries(ListCell *nextNotify)
...@@ -1449,8 +1449,8 @@ asyncQueueAddEntries(ListCell *nextNotify) ...@@ -1449,8 +1449,8 @@ asyncQueueAddEntries(ListCell *nextNotify)
int offset; int offset;
int slotno; int slotno;
/* We hold both AsyncQueueLock and AsyncCtlLock during this operation */ /* We hold both NotifyQueueLock and NotifySLRULock during this operation */
LWLockAcquire(AsyncCtlLock, LW_EXCLUSIVE); LWLockAcquire(NotifySLRULock, LW_EXCLUSIVE);
/* /*
* We work with a local copy of QUEUE_HEAD, which we write back to shared * We work with a local copy of QUEUE_HEAD, which we write back to shared
...@@ -1475,13 +1475,13 @@ asyncQueueAddEntries(ListCell *nextNotify) ...@@ -1475,13 +1475,13 @@ asyncQueueAddEntries(ListCell *nextNotify)
*/ */
pageno = QUEUE_POS_PAGE(queue_head); pageno = QUEUE_POS_PAGE(queue_head);
if (QUEUE_POS_IS_ZERO(queue_head)) if (QUEUE_POS_IS_ZERO(queue_head))
slotno = SimpleLruZeroPage(AsyncCtl, pageno); slotno = SimpleLruZeroPage(NotifyCtl, pageno);
else else
slotno = SimpleLruReadPage(AsyncCtl, pageno, true, slotno = SimpleLruReadPage(NotifyCtl, pageno, true,
InvalidTransactionId); InvalidTransactionId);
/* Note we mark the page dirty before writing in it */ /* Note we mark the page dirty before writing in it */
AsyncCtl->shared->page_dirty[slotno] = true; NotifyCtl->shared->page_dirty[slotno] = true;
while (nextNotify != NULL) while (nextNotify != NULL)
{ {
...@@ -1512,7 +1512,7 @@ asyncQueueAddEntries(ListCell *nextNotify) ...@@ -1512,7 +1512,7 @@ asyncQueueAddEntries(ListCell *nextNotify)
} }
/* Now copy qe into the shared buffer page */ /* Now copy qe into the shared buffer page */
memcpy(AsyncCtl->shared->page_buffer[slotno] + offset, memcpy(NotifyCtl->shared->page_buffer[slotno] + offset,
&qe, &qe,
qe.length); qe.length);
...@@ -1527,7 +1527,7 @@ asyncQueueAddEntries(ListCell *nextNotify) ...@@ -1527,7 +1527,7 @@ asyncQueueAddEntries(ListCell *nextNotify)
* asyncQueueIsFull() ensured that there is room to create this * asyncQueueIsFull() ensured that there is room to create this
* page without overrunning the queue. * page without overrunning the queue.
*/ */
slotno = SimpleLruZeroPage(AsyncCtl, QUEUE_POS_PAGE(queue_head)); slotno = SimpleLruZeroPage(NotifyCtl, QUEUE_POS_PAGE(queue_head));
/* /*
* If the new page address is a multiple of QUEUE_CLEANUP_DELAY, * If the new page address is a multiple of QUEUE_CLEANUP_DELAY,
...@@ -1545,7 +1545,7 @@ asyncQueueAddEntries(ListCell *nextNotify) ...@@ -1545,7 +1545,7 @@ asyncQueueAddEntries(ListCell *nextNotify)
/* Success, so update the global QUEUE_HEAD */ /* Success, so update the global QUEUE_HEAD */
QUEUE_HEAD = queue_head; QUEUE_HEAD = queue_head;
LWLockRelease(AsyncCtlLock); LWLockRelease(NotifySLRULock);
return nextNotify; return nextNotify;
} }
...@@ -1562,9 +1562,9 @@ pg_notification_queue_usage(PG_FUNCTION_ARGS) ...@@ -1562,9 +1562,9 @@ pg_notification_queue_usage(PG_FUNCTION_ARGS)
/* Advance the queue tail so we don't report a too-large result */ /* Advance the queue tail so we don't report a too-large result */
asyncQueueAdvanceTail(); asyncQueueAdvanceTail();
LWLockAcquire(AsyncQueueLock, LW_SHARED); LWLockAcquire(NotifyQueueLock, LW_SHARED);
usage = asyncQueueUsage(); usage = asyncQueueUsage();
LWLockRelease(AsyncQueueLock); LWLockRelease(NotifyQueueLock);
PG_RETURN_FLOAT8(usage); PG_RETURN_FLOAT8(usage);
} }
...@@ -1572,7 +1572,7 @@ pg_notification_queue_usage(PG_FUNCTION_ARGS) ...@@ -1572,7 +1572,7 @@ pg_notification_queue_usage(PG_FUNCTION_ARGS)
/* /*
* Return the fraction of the queue that is currently occupied. * Return the fraction of the queue that is currently occupied.
* *
* The caller must hold AsyncQueueLock in (at least) shared mode. * The caller must hold NotifyQueueLock in (at least) shared mode.
*/ */
static double static double
asyncQueueUsage(void) asyncQueueUsage(void)
...@@ -1601,7 +1601,7 @@ asyncQueueUsage(void) ...@@ -1601,7 +1601,7 @@ asyncQueueUsage(void)
* This is unlikely given the size of the queue, but possible. * This is unlikely given the size of the queue, but possible.
* The warnings show up at most once every QUEUE_FULL_WARN_INTERVAL. * The warnings show up at most once every QUEUE_FULL_WARN_INTERVAL.
* *
* Caller must hold exclusive AsyncQueueLock. * Caller must hold exclusive NotifyQueueLock.
*/ */
static void static void
asyncQueueFillWarning(void) asyncQueueFillWarning(void)
...@@ -1665,7 +1665,7 @@ SignalBackends(void) ...@@ -1665,7 +1665,7 @@ SignalBackends(void)
/* /*
* Identify backends that we need to signal. We don't want to send * Identify backends that we need to signal. We don't want to send
* signals while holding the AsyncQueueLock, so this loop just builds a * signals while holding the NotifyQueueLock, so this loop just builds a
* list of target PIDs. * list of target PIDs.
* *
* XXX in principle these pallocs could fail, which would be bad. Maybe * XXX in principle these pallocs could fail, which would be bad. Maybe
...@@ -1676,7 +1676,7 @@ SignalBackends(void) ...@@ -1676,7 +1676,7 @@ SignalBackends(void)
ids = (BackendId *) palloc(MaxBackends * sizeof(BackendId)); ids = (BackendId *) palloc(MaxBackends * sizeof(BackendId));
count = 0; count = 0;
LWLockAcquire(AsyncQueueLock, LW_EXCLUSIVE); LWLockAcquire(NotifyQueueLock, LW_EXCLUSIVE);
for (BackendId i = QUEUE_FIRST_LISTENER; i > 0; i = QUEUE_NEXT_LISTENER(i)) for (BackendId i = QUEUE_FIRST_LISTENER; i > 0; i = QUEUE_NEXT_LISTENER(i))
{ {
int32 pid = QUEUE_BACKEND_PID(i); int32 pid = QUEUE_BACKEND_PID(i);
...@@ -1710,7 +1710,7 @@ SignalBackends(void) ...@@ -1710,7 +1710,7 @@ SignalBackends(void)
ids[count] = i; ids[count] = i;
count++; count++;
} }
LWLockRelease(AsyncQueueLock); LWLockRelease(NotifyQueueLock);
/* Now send signals */ /* Now send signals */
for (int i = 0; i < count; i++) for (int i = 0; i < count; i++)
...@@ -1720,7 +1720,7 @@ SignalBackends(void) ...@@ -1720,7 +1720,7 @@ SignalBackends(void)
/* /*
* Note: assuming things aren't broken, a signal failure here could * Note: assuming things aren't broken, a signal failure here could
* only occur if the target backend exited since we released * only occur if the target backend exited since we released
* AsyncQueueLock; which is unlikely but certainly possible. So we * NotifyQueueLock; which is unlikely but certainly possible. So we
* just log a low-level debug message if it happens. * just log a low-level debug message if it happens.
*/ */
if (SendProcSignal(pid, PROCSIG_NOTIFY_INTERRUPT, ids[i]) < 0) if (SendProcSignal(pid, PROCSIG_NOTIFY_INTERRUPT, ids[i]) < 0)
...@@ -1930,12 +1930,12 @@ asyncQueueReadAllNotifications(void) ...@@ -1930,12 +1930,12 @@ asyncQueueReadAllNotifications(void)
} page_buffer; } page_buffer;
/* Fetch current state */ /* Fetch current state */
LWLockAcquire(AsyncQueueLock, LW_SHARED); LWLockAcquire(NotifyQueueLock, LW_SHARED);
/* Assert checks that we have a valid state entry */ /* Assert checks that we have a valid state entry */
Assert(MyProcPid == QUEUE_BACKEND_PID(MyBackendId)); Assert(MyProcPid == QUEUE_BACKEND_PID(MyBackendId));
pos = oldpos = QUEUE_BACKEND_POS(MyBackendId); pos = oldpos = QUEUE_BACKEND_POS(MyBackendId);
head = QUEUE_HEAD; head = QUEUE_HEAD;
LWLockRelease(AsyncQueueLock); LWLockRelease(NotifyQueueLock);
if (QUEUE_POS_EQUAL(pos, head)) if (QUEUE_POS_EQUAL(pos, head))
{ {
...@@ -1990,7 +1990,7 @@ asyncQueueReadAllNotifications(void) ...@@ -1990,7 +1990,7 @@ asyncQueueReadAllNotifications(void)
* that happens it is critical that we not try to send the same message * that happens it is critical that we not try to send the same message
* over and over again. Therefore, we place a PG_TRY block here that will * over and over again. Therefore, we place a PG_TRY block here that will
* forcibly advance our queue position before we lose control to an error. * forcibly advance our queue position before we lose control to an error.
* (We could alternatively retake AsyncQueueLock and move the position * (We could alternatively retake NotifyQueueLock and move the position
* before handling each individual message, but that seems like too much * before handling each individual message, but that seems like too much
* lock traffic.) * lock traffic.)
*/ */
...@@ -2007,11 +2007,11 @@ asyncQueueReadAllNotifications(void) ...@@ -2007,11 +2007,11 @@ asyncQueueReadAllNotifications(void)
/* /*
* We copy the data from SLRU into a local buffer, so as to avoid * We copy the data from SLRU into a local buffer, so as to avoid
* holding the AsyncCtlLock while we are examining the entries and * holding the NotifySLRULock while we are examining the entries
* possibly transmitting them to our frontend. Copy only the part * and possibly transmitting them to our frontend. Copy only the
* of the page we will actually inspect. * part of the page we will actually inspect.
*/ */
slotno = SimpleLruReadPage_ReadOnly(AsyncCtl, curpage, slotno = SimpleLruReadPage_ReadOnly(NotifyCtl, curpage,
InvalidTransactionId); InvalidTransactionId);
if (curpage == QUEUE_POS_PAGE(head)) if (curpage == QUEUE_POS_PAGE(head))
{ {
...@@ -2026,10 +2026,10 @@ asyncQueueReadAllNotifications(void) ...@@ -2026,10 +2026,10 @@ asyncQueueReadAllNotifications(void)
copysize = QUEUE_PAGESIZE - curoffset; copysize = QUEUE_PAGESIZE - curoffset;
} }
memcpy(page_buffer.buf + curoffset, memcpy(page_buffer.buf + curoffset,
AsyncCtl->shared->page_buffer[slotno] + curoffset, NotifyCtl->shared->page_buffer[slotno] + curoffset,
copysize); copysize);
/* Release lock that we got from SimpleLruReadPage_ReadOnly() */ /* Release lock that we got from SimpleLruReadPage_ReadOnly() */
LWLockRelease(AsyncCtlLock); LWLockRelease(NotifySLRULock);
/* /*
* Process messages up to the stop position, end of page, or an * Process messages up to the stop position, end of page, or an
...@@ -2040,7 +2040,7 @@ asyncQueueReadAllNotifications(void) ...@@ -2040,7 +2040,7 @@ asyncQueueReadAllNotifications(void)
* But if it has, we will receive (or have already received and * But if it has, we will receive (or have already received and
* queued) another signal and come here again. * queued) another signal and come here again.
* *
* We are not holding AsyncQueueLock here! The queue can only * We are not holding NotifyQueueLock here! The queue can only
* extend beyond the head pointer (see above) and we leave our * extend beyond the head pointer (see above) and we leave our
* backend's pointer where it is so nobody will truncate or * backend's pointer where it is so nobody will truncate or
* rewrite pages under us. Especially we don't want to hold a lock * rewrite pages under us. Especially we don't want to hold a lock
...@@ -2054,9 +2054,9 @@ asyncQueueReadAllNotifications(void) ...@@ -2054,9 +2054,9 @@ asyncQueueReadAllNotifications(void)
PG_FINALLY(); PG_FINALLY();
{ {
/* Update shared state */ /* Update shared state */
LWLockAcquire(AsyncQueueLock, LW_SHARED); LWLockAcquire(NotifyQueueLock, LW_SHARED);
QUEUE_BACKEND_POS(MyBackendId) = pos; QUEUE_BACKEND_POS(MyBackendId) = pos;
LWLockRelease(AsyncQueueLock); LWLockRelease(NotifyQueueLock);
} }
PG_END_TRY(); PG_END_TRY();
...@@ -2070,7 +2070,7 @@ asyncQueueReadAllNotifications(void) ...@@ -2070,7 +2070,7 @@ asyncQueueReadAllNotifications(void)
* *
* The current page must have been fetched into page_buffer from shared * The current page must have been fetched into page_buffer from shared
* memory. (We could access the page right in shared memory, but that * memory. (We could access the page right in shared memory, but that
* would imply holding the AsyncCtlLock throughout this routine.) * would imply holding the NotifySLRULock throughout this routine.)
* *
* We stop if we reach the "stop" position, or reach a notification from an * We stop if we reach the "stop" position, or reach a notification from an
* uncommitted transaction, or reach the end of the page. * uncommitted transaction, or reach the end of the page.
...@@ -2177,7 +2177,7 @@ asyncQueueAdvanceTail(void) ...@@ -2177,7 +2177,7 @@ asyncQueueAdvanceTail(void)
int newtailpage; int newtailpage;
int boundary; int boundary;
LWLockAcquire(AsyncQueueLock, LW_EXCLUSIVE); LWLockAcquire(NotifyQueueLock, LW_EXCLUSIVE);
min = QUEUE_HEAD; min = QUEUE_HEAD;
for (BackendId i = QUEUE_FIRST_LISTENER; i > 0; i = QUEUE_NEXT_LISTENER(i)) for (BackendId i = QUEUE_FIRST_LISTENER; i > 0; i = QUEUE_NEXT_LISTENER(i))
{ {
...@@ -2186,7 +2186,7 @@ asyncQueueAdvanceTail(void) ...@@ -2186,7 +2186,7 @@ asyncQueueAdvanceTail(void)
} }
oldtailpage = QUEUE_POS_PAGE(QUEUE_TAIL); oldtailpage = QUEUE_POS_PAGE(QUEUE_TAIL);
QUEUE_TAIL = min; QUEUE_TAIL = min;
LWLockRelease(AsyncQueueLock); LWLockRelease(NotifyQueueLock);
/* /*
* We can truncate something if the global tail advanced across an SLRU * We can truncate something if the global tail advanced across an SLRU
...@@ -2200,10 +2200,10 @@ asyncQueueAdvanceTail(void) ...@@ -2200,10 +2200,10 @@ asyncQueueAdvanceTail(void)
if (asyncQueuePagePrecedes(oldtailpage, boundary)) if (asyncQueuePagePrecedes(oldtailpage, boundary))
{ {
/* /*
* SimpleLruTruncate() will ask for AsyncCtlLock but will also release * SimpleLruTruncate() will ask for NotifySLRULock but will also
* the lock again. * release the lock again.
*/ */
SimpleLruTruncate(AsyncCtl, newtailpage); SimpleLruTruncate(NotifyCtl, newtailpage);
} }
} }
......
@@ -147,13 +147,13 @@ PgStat_MsgBgWriter BgWriterStats;
  * all SLRUs without an explicit entry (e.g. SLRUs in extensions).
  */
 static const char *const slru_names[] = {
-    "async",
-    "clog",
-    "commit_timestamp",
-    "multixact_offset",
-    "multixact_member",
-    "oldserxid",
-    "subtrans",
+    "CommitTs",
+    "MultiXactMember",
+    "MultiXactOffset",
+    "Notify",
+    "Serial",
+    "Subtrans",
+    "Xact",
    "other"                     /* has to be last */
 };
...
@@ -182,7 +182,7 @@ static const char *const excludeDirContents[] =
    /*
     * Old contents are loaded for possible debugging but are not required for
-     * normal operation, see OldSerXidInit().
+     * normal operation, see SerialInit().
     */
    "pg_serial",
...
@@ -124,20 +124,20 @@ extern slock_t *ShmemLock;
  */
 static const char *const BuiltinTrancheNames[] = {
-    /* LWTRANCHE_CLOG_BUFFERS: */
-    "clog",
-    /* LWTRANCHE_COMMITTS_BUFFERS: */
-    "commit_timestamp",
-    /* LWTRANCHE_SUBTRANS_BUFFERS: */
-    "subtrans",
-    /* LWTRANCHE_MXACTOFFSET_BUFFERS: */
-    "multixact_offset",
-    /* LWTRANCHE_MXACTMEMBER_BUFFERS: */
-    "multixact_member",
-    /* LWTRANCHE_ASYNC_BUFFERS: */
-    "async",
-    /* LWTRANCHE_OLDSERXID_BUFFERS: */
-    "oldserxid",
+    /* LWTRANCHE_XACT_BUFFER: */
+    "XactBuffer",
+    /* LWTRANCHE_COMMITTS_BUFFER: */
+    "CommitTSBuffer",
+    /* LWTRANCHE_SUBTRANS_BUFFER: */
+    "SubtransBuffer",
+    /* LWTRANCHE_MULTIXACTOFFSET_BUFFER: */
+    "MultiXactOffsetBuffer",
+    /* LWTRANCHE_MULTIXACTMEMBER_BUFFER: */
+    "MultiXactMemberBuffer",
+    /* LWTRANCHE_NOTIFY_BUFFER: */
+    "NotifyBuffer",
+    /* LWTRANCHE_SERIAL_BUFFER: */
+    "SerialBuffer",
    /* LWTRANCHE_WAL_INSERT: */
    "wal_insert",
    /* LWTRANCHE_BUFFER_CONTENT: */
...
@@ -15,11 +15,11 @@ WALBufMappingLock 7
 WALWriteLock                        8
 ControlFileLock                     9
 CheckpointLock                      10
-CLogControlLock                     11
-SubtransControlLock                 12
+XactSLRULock                        11
+SubtransSLRULock                    12
 MultiXactGenLock                    13
-MultiXactOffsetControlLock          14
-MultiXactMemberControlLock          15
+MultiXactOffsetSLRULock             14
+MultiXactMemberSLRULock             15
 RelCacheInitLock                    16
 CheckpointerCommLock                17
 TwoPhaseStateLock                   18
@@ -30,22 +30,22 @@ AutovacuumLock 22
 AutovacuumScheduleLock              23
 SyncScanLock                        24
 RelationMappingLock                 25
-AsyncCtlLock                        26
-AsyncQueueLock                      27
+NotifySLRULock                      26
+NotifyQueueLock                     27
 SerializableXactHashLock            28
 SerializableFinishedListLock        29
 SerializablePredicateLockListLock   30
-OldSerXidLock                       31
+SerialSLRULock                      31
 SyncRepLock                         32
 BackgroundWorkerLock                33
 DynamicSharedMemoryControlLock      34
 AutoFileLock                        35
 ReplicationSlotAllocationLock       36
 ReplicationSlotControlLock          37
-CommitTsControlLock                 38
+CommitTsSLRULock                    38
 CommitTsLock                        39
 ReplicationOriginLock               40
 MultiXactTruncationLock             41
 OldSnapshotTimeMapLock              42
 LogicalRepWorkerLock                43
-CLogTruncationLock                  44
+XactTruncationLock                  44
@@ -211,7 +211,7 @@
 #include "utils/snapmgr.h"
 /* Uncomment the next line to test the graceful degradation code. */
-/* #define TEST_OLDSERXID */
+/* #define TEST_SUMMARIZE_SERIAL */
 /*
  * Test the most selective fields first, for performance.
@@ -316,37 +316,37 @@
 /*
  * The SLRU buffer area through which we access the old xids.
  */
-static SlruCtlData OldSerXidSlruCtlData;
-#define OldSerXidSlruCtl            (&OldSerXidSlruCtlData)
-#define OLDSERXID_PAGESIZE          BLCKSZ
-#define OLDSERXID_ENTRYSIZE         sizeof(SerCommitSeqNo)
-#define OLDSERXID_ENTRIESPERPAGE    (OLDSERXID_PAGESIZE / OLDSERXID_ENTRYSIZE)
+static SlruCtlData SerialSlruCtlData;
+#define SerialSlruCtl           (&SerialSlruCtlData)
+#define SERIAL_PAGESIZE         BLCKSZ
+#define SERIAL_ENTRYSIZE        sizeof(SerCommitSeqNo)
+#define SERIAL_ENTRIESPERPAGE   (SERIAL_PAGESIZE / SERIAL_ENTRYSIZE)
 /*
  * Set maximum pages based on the number needed to track all transactions.
  */
-#define OLDSERXID_MAX_PAGE          (MaxTransactionId / OLDSERXID_ENTRIESPERPAGE)
-#define OldSerXidNextPage(page) (((page) >= OLDSERXID_MAX_PAGE) ? 0 : (page) + 1)
-#define OldSerXidValue(slotno, xid) (*((SerCommitSeqNo *) \
-    (OldSerXidSlruCtl->shared->page_buffer[slotno] + \
-     ((((uint32) (xid)) % OLDSERXID_ENTRIESPERPAGE) * OLDSERXID_ENTRYSIZE))))
-#define OldSerXidPage(xid)  (((uint32) (xid)) / OLDSERXID_ENTRIESPERPAGE)
+#define SERIAL_MAX_PAGE         (MaxTransactionId / SERIAL_ENTRIESPERPAGE)
+#define SerialNextPage(page) (((page) >= SERIAL_MAX_PAGE) ? 0 : (page) + 1)
+#define SerialValue(slotno, xid) (*((SerCommitSeqNo *) \
+    (SerialSlruCtl->shared->page_buffer[slotno] + \
+     ((((uint32) (xid)) % SERIAL_ENTRIESPERPAGE) * SERIAL_ENTRYSIZE))))
+#define SerialPage(xid)     (((uint32) (xid)) / SERIAL_ENTRIESPERPAGE)
-typedef struct OldSerXidControlData
+typedef struct SerialControlData
 {
    int         headPage;       /* newest initialized page */
    TransactionId headXid;      /* newest valid Xid in the SLRU */
    TransactionId tailXid;      /* oldest xmin we might be interested in */
-} OldSerXidControlData;
-typedef struct OldSerXidControlData *OldSerXidControl;
-static OldSerXidControl oldSerXidControl;
+} SerialControlData;
+typedef struct SerialControlData *SerialControl;
+static SerialControl serialControl;
 /*
  * When the oldest committed transaction on the "finished" list is moved to
@@ -438,11 +438,11 @@ static void SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact, SERIALIZABLEXACT
 static void ReleaseRWConflict(RWConflict conflict);
 static void FlagSxactUnsafe(SERIALIZABLEXACT *sxact);
-static bool OldSerXidPagePrecedesLogically(int p, int q);
-static void OldSerXidInit(void);
-static void OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo);
-static SerCommitSeqNo OldSerXidGetMinConflictCommitSeqNo(TransactionId xid);
-static void OldSerXidSetActiveSerXmin(TransactionId xid);
+static bool SerialPagePrecedesLogically(int p, int q);
+static void SerialInit(void);
+static void SerialAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo);
+static SerCommitSeqNo SerialGetMinConflictCommitSeqNo(TransactionId xid);
+static void SerialSetActiveSerXmin(TransactionId xid);
 static uint32 predicatelock_hash(const void *key, Size keysize);
 static void SummarizeOldestCommittedSxact(void);
@@ -784,26 +784,26 @@ FlagSxactUnsafe(SERIALIZABLEXACT *sxact)
 /*------------------------------------------------------------------------*/
 /*
- * We will work on the page range of 0..OLDSERXID_MAX_PAGE.
+ * We will work on the page range of 0..SERIAL_MAX_PAGE.
  * Compares using wraparound logic, as is required by slru.c.
  */
 static bool
-OldSerXidPagePrecedesLogically(int p, int q)
+SerialPagePrecedesLogically(int p, int q)
 {
    int         diff;
    /*
-     * We have to compare modulo (OLDSERXID_MAX_PAGE+1)/2.  Both inputs should
-     * be in the range 0..OLDSERXID_MAX_PAGE.
+     * We have to compare modulo (SERIAL_MAX_PAGE+1)/2.  Both inputs should be
+     * in the range 0..SERIAL_MAX_PAGE.
     */
-    Assert(p >= 0 && p <= OLDSERXID_MAX_PAGE);
-    Assert(q >= 0 && q <= OLDSERXID_MAX_PAGE);
+    Assert(p >= 0 && p <= SERIAL_MAX_PAGE);
+    Assert(q >= 0 && q <= SERIAL_MAX_PAGE);
    diff = p - q;
-    if (diff >= ((OLDSERXID_MAX_PAGE + 1) / 2))
-        diff -= OLDSERXID_MAX_PAGE + 1;
-    else if (diff < -((int) (OLDSERXID_MAX_PAGE + 1) / 2))
-        diff += OLDSERXID_MAX_PAGE + 1;
+    if (diff >= ((SERIAL_MAX_PAGE + 1) / 2))
+        diff -= SERIAL_MAX_PAGE + 1;
+    else if (diff < -((int) (SERIAL_MAX_PAGE + 1) / 2))
+        diff += SERIAL_MAX_PAGE + 1;
    return diff < 0;
 }
@@ -811,25 +811,25 @@ OldSerXidPagePrecedesLogically(int p, int q)
  * Initialize for the tracking of old serializable committed xids.
  */
 static void
-OldSerXidInit(void)
+SerialInit(void)
 {
    bool        found;
    /*
     * Set up SLRU management of the pg_serial data.
     */
-    OldSerXidSlruCtl->PagePrecedes = OldSerXidPagePrecedesLogically;
-    SimpleLruInit(OldSerXidSlruCtl, "oldserxid",
-                  NUM_OLDSERXID_BUFFERS, 0, OldSerXidLock, "pg_serial",
-                  LWTRANCHE_OLDSERXID_BUFFERS);
+    SerialSlruCtl->PagePrecedes = SerialPagePrecedesLogically;
+    SimpleLruInit(SerialSlruCtl, "Serial",
+                  NUM_SERIAL_BUFFERS, 0, SerialSLRULock, "pg_serial",
+                  LWTRANCHE_SERIAL_BUFFER);
    /* Override default assumption that writes should be fsync'd */
-    OldSerXidSlruCtl->do_fsync = false;
+    SerialSlruCtl->do_fsync = false;
    /*
-     * Create or attach to the OldSerXidControl structure.
+     * Create or attach to the SerialControl structure.
     */
-    oldSerXidControl = (OldSerXidControl)
-        ShmemInitStruct("OldSerXidControlData", sizeof(OldSerXidControlData), &found);
+    serialControl = (SerialControl)
+        ShmemInitStruct("SerialControlData", sizeof(SerialControlData), &found);
    Assert(found == IsUnderPostmaster);
    if (!found)
@@ -837,9 +837,9 @@ OldSerXidInit(void)
        /*
         * Set control information to reflect empty SLRU.
         */
-        oldSerXidControl->headPage = -1;
-        oldSerXidControl->headXid = InvalidTransactionId;
-        oldSerXidControl->tailXid = InvalidTransactionId;
+        serialControl->headPage = -1;
+        serialControl->headXid = InvalidTransactionId;
+        serialControl->tailXid = InvalidTransactionId;
    }
 }
@@ -849,7 +849,7 @@ OldSerXidInit(void)
  * An invalid commitSeqNo means that there were no conflicts out from xid.
  */
 static void
-OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo)
+SerialAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo)
 {
    TransactionId tailXid;
    int         targetPage;
@@ -859,16 +859,16 @@ OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo)
    Assert(TransactionIdIsValid(xid));
-    targetPage = OldSerXidPage(xid);
-    LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);
+    targetPage = SerialPage(xid);
+    LWLockAcquire(SerialSLRULock, LW_EXCLUSIVE);
    /*
     * If no serializable transactions are active, there shouldn't be anything
     * to push out to the SLRU.  Hitting this assert would mean there's
     * something wrong with the earlier cleanup logic.
     */
-    tailXid = oldSerXidControl->tailXid;
+    tailXid = serialControl->tailXid;
    Assert(TransactionIdIsValid(tailXid));
    /*
@@ -877,41 +877,41 @@ OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo)
     * any new pages that enter the tailXid-headXid range as we advance
     * headXid.
     */
-    if (oldSerXidControl->headPage < 0)
+    if (serialControl->headPage < 0)
    {
-        firstZeroPage = OldSerXidPage(tailXid);
+        firstZeroPage = SerialPage(tailXid);
        isNewPage = true;
    }
    else
    {
-        firstZeroPage = OldSerXidNextPage(oldSerXidControl->headPage);
-        isNewPage = OldSerXidPagePrecedesLogically(oldSerXidControl->headPage,
+        firstZeroPage = SerialNextPage(serialControl->headPage);
+        isNewPage = SerialPagePrecedesLogically(serialControl->headPage,
                                                targetPage);
    }
-    if (!TransactionIdIsValid(oldSerXidControl->headXid)
-        || TransactionIdFollows(xid, oldSerXidControl->headXid))
-        oldSerXidControl->headXid = xid;
+    if (!TransactionIdIsValid(serialControl->headXid)
+        || TransactionIdFollows(xid, serialControl->headXid))
+        serialControl->headXid = xid;
    if (isNewPage)
-        oldSerXidControl->headPage = targetPage;
+        serialControl->headPage = targetPage;
    if (isNewPage)
    {
        /* Initialize intervening pages. */
        while (firstZeroPage != targetPage)
        {
-            (void) SimpleLruZeroPage(OldSerXidSlruCtl, firstZeroPage);
-            firstZeroPage = OldSerXidNextPage(firstZeroPage);
+            (void) SimpleLruZeroPage(SerialSlruCtl, firstZeroPage);
+            firstZeroPage = SerialNextPage(firstZeroPage);
        }
-        slotno = SimpleLruZeroPage(OldSerXidSlruCtl, targetPage);
+        slotno = SimpleLruZeroPage(SerialSlruCtl, targetPage);
    }
    else
-        slotno = SimpleLruReadPage(OldSerXidSlruCtl, targetPage, true, xid);
-    OldSerXidValue(slotno, xid) = minConflictCommitSeqNo;
-    OldSerXidSlruCtl->shared->page_dirty[slotno] = true;
-    LWLockRelease(OldSerXidLock);
+        slotno = SimpleLruReadPage(SerialSlruCtl, targetPage, true, xid);
+    SerialValue(slotno, xid) = minConflictCommitSeqNo;
+    SerialSlruCtl->shared->page_dirty[slotno] = true;
+    LWLockRelease(SerialSLRULock);
 }
 /*
@@ -920,7 +920,7 @@ OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo)
  * will be returned.
  */
 static SerCommitSeqNo
-OldSerXidGetMinConflictCommitSeqNo(TransactionId xid)
+SerialGetMinConflictCommitSeqNo(TransactionId xid)
 {
    TransactionId headXid;
    TransactionId tailXid;
@@ -929,10 +929,10 @@ OldSerXidGetMinConflictCommitSeqNo(TransactionId xid)
    Assert(TransactionIdIsValid(xid));
-    LWLockAcquire(OldSerXidLock, LW_SHARED);
-    headXid = oldSerXidControl->headXid;
-    tailXid = oldSerXidControl->tailXid;
-    LWLockRelease(OldSerXidLock);
+    LWLockAcquire(SerialSLRULock, LW_SHARED);
+    headXid = serialControl->headXid;
+    tailXid = serialControl->tailXid;
+    LWLockRelease(SerialSLRULock);
    if (!TransactionIdIsValid(headXid))
        return 0;
@@ -944,13 +944,13 @@ OldSerXidGetMinConflictCommitSeqNo(TransactionId xid)
        return 0;
    /*
-     * The following function must be called without holding OldSerXidLock,
+     * The following function must be called without holding SerialSLRULock,
     * but will return with that lock held, which must then be released.
     */
-    slotno = SimpleLruReadPage_ReadOnly(OldSerXidSlruCtl,
-                                        OldSerXidPage(xid), xid);
-    val = OldSerXidValue(slotno, xid);
-    LWLockRelease(OldSerXidLock);
+    slotno = SimpleLruReadPage_ReadOnly(SerialSlruCtl,
+                                        SerialPage(xid), xid);
+    val = SerialValue(slotno, xid);
+    LWLockRelease(SerialSLRULock);
    return val;
 }
@@ -961,9 +961,9 @@ OldSerXidGetMinConflictCommitSeqNo(TransactionId xid)
  * the SLRU can be discarded.
  */
 static void
-OldSerXidSetActiveSerXmin(TransactionId xid)
+SerialSetActiveSerXmin(TransactionId xid)
 {
-    LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);
+    LWLockAcquire(SerialSLRULock, LW_EXCLUSIVE);
    /*
     * When no sxacts are active, nothing overlaps, set the xid values to
@@ -973,9 +973,9 @@ OldSerXidSetActiveSerXmin(TransactionId xid)
     */
    if (!TransactionIdIsValid(xid))
    {
-        oldSerXidControl->tailXid = InvalidTransactionId;
-        oldSerXidControl->headXid = InvalidTransactionId;
-        LWLockRelease(OldSerXidLock);
+        serialControl->tailXid = InvalidTransactionId;
+        serialControl->headXid = InvalidTransactionId;
+        LWLockRelease(SerialSLRULock);
        return;
    }
@@ -987,22 +987,22 @@ OldSerXidSetActiveSerXmin(TransactionId xid)
     */
    if (RecoveryInProgress())
    {
-        Assert(oldSerXidControl->headPage < 0);
-        if (!TransactionIdIsValid(oldSerXidControl->tailXid)
-            || TransactionIdPrecedes(xid, oldSerXidControl->tailXid))
+        Assert(serialControl->headPage < 0);
+        if (!TransactionIdIsValid(serialControl->tailXid)
+            || TransactionIdPrecedes(xid, serialControl->tailXid))
        {
oldSerXidControl->tailXid = xid; serialControl->tailXid = xid;
} }
LWLockRelease(OldSerXidLock); LWLockRelease(SerialSLRULock);
return; return;
} }
Assert(!TransactionIdIsValid(oldSerXidControl->tailXid) Assert(!TransactionIdIsValid(serialControl->tailXid)
|| TransactionIdFollows(xid, oldSerXidControl->tailXid)); || TransactionIdFollows(xid, serialControl->tailXid));
oldSerXidControl->tailXid = xid; serialControl->tailXid = xid;
LWLockRelease(OldSerXidLock); LWLockRelease(SerialSLRULock);
} }
/* /*
...@@ -1016,19 +1016,19 @@ CheckPointPredicate(void) ...@@ -1016,19 +1016,19 @@ CheckPointPredicate(void)
{ {
int tailPage; int tailPage;
LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE); LWLockAcquire(SerialSLRULock, LW_EXCLUSIVE);
/* Exit quickly if the SLRU is currently not in use. */ /* Exit quickly if the SLRU is currently not in use. */
if (oldSerXidControl->headPage < 0) if (serialControl->headPage < 0)
{ {
LWLockRelease(OldSerXidLock); LWLockRelease(SerialSLRULock);
return; return;
} }
if (TransactionIdIsValid(oldSerXidControl->tailXid)) if (TransactionIdIsValid(serialControl->tailXid))
{ {
/* We can truncate the SLRU up to the page containing tailXid */ /* We can truncate the SLRU up to the page containing tailXid */
tailPage = OldSerXidPage(oldSerXidControl->tailXid); tailPage = SerialPage(serialControl->tailXid);
} }
else else
{ {
...@@ -1042,14 +1042,14 @@ CheckPointPredicate(void) ...@@ -1042,14 +1042,14 @@ CheckPointPredicate(void)
* won't be removed until XID horizon advances enough to make it * won't be removed until XID horizon advances enough to make it
* current again. * current again.
*/ */
tailPage = oldSerXidControl->headPage; tailPage = serialControl->headPage;
oldSerXidControl->headPage = -1; serialControl->headPage = -1;
} }
LWLockRelease(OldSerXidLock); LWLockRelease(SerialSLRULock);
/* Truncate away pages that are no longer required */ /* Truncate away pages that are no longer required */
SimpleLruTruncate(OldSerXidSlruCtl, tailPage); SimpleLruTruncate(SerialSlruCtl, tailPage);
/* /*
* Flush dirty SLRU pages to disk * Flush dirty SLRU pages to disk
...@@ -1061,7 +1061,7 @@ CheckPointPredicate(void) ...@@ -1061,7 +1061,7 @@ CheckPointPredicate(void)
* before deleting the file in which they sit, which would be completely * before deleting the file in which they sit, which would be completely
* pointless. * pointless.
*/ */
SimpleLruFlush(OldSerXidSlruCtl, true); SimpleLruFlush(SerialSlruCtl, true);
} }
/*------------------------------------------------------------------------*/ /*------------------------------------------------------------------------*/
...@@ -1275,7 +1275,7 @@ InitPredicateLocks(void) ...@@ -1275,7 +1275,7 @@ InitPredicateLocks(void)
* Initialize the SLRU storage for old committed serializable * Initialize the SLRU storage for old committed serializable
* transactions. * transactions.
*/ */
OldSerXidInit(); SerialInit();
} }
/* /*
...@@ -1324,8 +1324,8 @@ PredicateLockShmemSize(void) ...@@ -1324,8 +1324,8 @@ PredicateLockShmemSize(void)
size = add_size(size, sizeof(SHM_QUEUE)); size = add_size(size, sizeof(SHM_QUEUE));
/* Shared memory structures for SLRU tracking of old committed xids. */ /* Shared memory structures for SLRU tracking of old committed xids. */
size = add_size(size, sizeof(OldSerXidControlData)); size = add_size(size, sizeof(SerialControlData));
size = add_size(size, SimpleLruShmemSize(NUM_OLDSERXID_BUFFERS, 0)); size = add_size(size, SimpleLruShmemSize(NUM_SERIAL_BUFFERS, 0));
return size; return size;
} }
...@@ -1462,7 +1462,7 @@ SummarizeOldestCommittedSxact(void) ...@@ -1462,7 +1462,7 @@ SummarizeOldestCommittedSxact(void)
/* Add to SLRU summary information. */ /* Add to SLRU summary information. */
if (TransactionIdIsValid(sxact->topXid) && !SxactIsReadOnly(sxact)) if (TransactionIdIsValid(sxact->topXid) && !SxactIsReadOnly(sxact))
OldSerXidAdd(sxact->topXid, SxactHasConflictOut(sxact) SerialAdd(sxact->topXid, SxactHasConflictOut(sxact)
? sxact->SeqNo.earliestOutConflictCommit : InvalidSerCommitSeqNo); ? sxact->SeqNo.earliestOutConflictCommit : InvalidSerCommitSeqNo);
/* Summarize and release the detail. */ /* Summarize and release the detail. */
...@@ -1727,7 +1727,7 @@ GetSerializableTransactionSnapshotInt(Snapshot snapshot, ...@@ -1727,7 +1727,7 @@ GetSerializableTransactionSnapshotInt(Snapshot snapshot,
* (in particular, an elog(ERROR) in procarray.c would cause us to leak * (in particular, an elog(ERROR) in procarray.c would cause us to leak
* the sxact). Consider refactoring to avoid this. * the sxact). Consider refactoring to avoid this.
*/ */
#ifdef TEST_OLDSERXID #ifdef TEST_SUMMARIZE_SERIAL
SummarizeOldestCommittedSxact(); SummarizeOldestCommittedSxact();
#endif #endif
LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE); LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
...@@ -1782,7 +1782,7 @@ GetSerializableTransactionSnapshotInt(Snapshot snapshot, ...@@ -1782,7 +1782,7 @@ GetSerializableTransactionSnapshotInt(Snapshot snapshot,
Assert(PredXact->SxactGlobalXminCount == 0); Assert(PredXact->SxactGlobalXminCount == 0);
PredXact->SxactGlobalXmin = snapshot->xmin; PredXact->SxactGlobalXmin = snapshot->xmin;
PredXact->SxactGlobalXminCount = 1; PredXact->SxactGlobalXminCount = 1;
OldSerXidSetActiveSerXmin(snapshot->xmin); SerialSetActiveSerXmin(snapshot->xmin);
} }
else if (TransactionIdEquals(snapshot->xmin, PredXact->SxactGlobalXmin)) else if (TransactionIdEquals(snapshot->xmin, PredXact->SxactGlobalXmin))
{ {
...@@ -3231,7 +3231,7 @@ SetNewSxactGlobalXmin(void) ...@@ -3231,7 +3231,7 @@ SetNewSxactGlobalXmin(void)
} }
} }
OldSerXidSetActiveSerXmin(PredXact->SxactGlobalXmin); SerialSetActiveSerXmin(PredXact->SxactGlobalXmin);
} }
/* /*
...@@ -4084,7 +4084,7 @@ CheckForSerializableConflictOut(Relation relation, TransactionId xid, Snapshot s ...@@ -4084,7 +4084,7 @@ CheckForSerializableConflictOut(Relation relation, TransactionId xid, Snapshot s
*/ */
SerCommitSeqNo conflictCommitSeqNo; SerCommitSeqNo conflictCommitSeqNo;
conflictCommitSeqNo = OldSerXidGetMinConflictCommitSeqNo(xid); conflictCommitSeqNo = SerialGetMinConflictCommitSeqNo(xid);
if (conflictCommitSeqNo != 0) if (conflictCommitSeqNo != 0)
{ {
if (conflictCommitSeqNo != InvalidSerCommitSeqNo if (conflictCommitSeqNo != InvalidSerCommitSeqNo
...@@ -5069,7 +5069,7 @@ predicatelock_twophase_recover(TransactionId xid, uint16 info, ...@@ -5069,7 +5069,7 @@ predicatelock_twophase_recover(TransactionId xid, uint16 info,
{ {
PredXact->SxactGlobalXmin = sxact->xmin; PredXact->SxactGlobalXmin = sxact->xmin;
PredXact->SxactGlobalXminCount = 1; PredXact->SxactGlobalXminCount = 1;
OldSerXidSetActiveSerXmin(sxact->xmin); SerialSetActiveSerXmin(sxact->xmin);
} }
else if (TransactionIdEquals(sxact->xmin, PredXact->SxactGlobalXmin)) else if (TransactionIdEquals(sxact->xmin, PredXact->SxactGlobalXmin))
{ {
......
@@ -82,7 +82,7 @@ typedef struct
  * to the low 32 bits of the transaction ID (i.e. the actual XID, without the
  * epoch).
  *
- * The caller must hold CLogTruncationLock since it's dealing with arbitrary
+ * The caller must hold XactTruncationLock since it's dealing with arbitrary
  * XIDs, and must continue to hold it until it's done with any clog lookups
  * relating to those XIDs.
  */
@@ -118,13 +118,13 @@ TransactionIdInRecentPast(FullTransactionId fxid, TransactionId *extracted_xid)
 								  U64FromFullTransactionId(fxid)))));

 	/*
-	 * ShmemVariableCache->oldestClogXid is protected by CLogTruncationLock,
+	 * ShmemVariableCache->oldestClogXid is protected by XactTruncationLock,
 	 * but we don't acquire that lock here.  Instead, we require the caller to
 	 * acquire it, because the caller is presumably going to look up the
 	 * returned XID.  If we took and released the lock within this function, a
 	 * CLOG truncation could occur before the caller finished with the XID.
 	 */
-	Assert(LWLockHeldByMe(CLogTruncationLock));
+	Assert(LWLockHeldByMe(XactTruncationLock));

 	/*
 	 * If the transaction ID has wrapped around, it's definitely too old to
@@ -672,7 +672,7 @@ pg_xact_status(PG_FUNCTION_ARGS)
 	 * We must protect against concurrent truncation of clog entries to avoid
 	 * an I/O error on SLRU lookup.
 	 */
-	LWLockAcquire(CLogTruncationLock, LW_SHARED);
+	LWLockAcquire(XactTruncationLock, LW_SHARED);
 	if (TransactionIdInRecentPast(fxid, &xid))
 	{
 		Assert(TransactionIdIsValid(xid));
@@ -706,7 +706,7 @@ pg_xact_status(PG_FUNCTION_ARGS)
 	{
 		status = NULL;
 	}
-	LWLockRelease(CLogTruncationLock);
+	LWLockRelease(XactTruncationLock);

 	if (status == NULL)
 		PG_RETURN_NULL();
...
@@ -75,7 +75,7 @@ static const char *excludeDirContents[] =
 	/*
 	 * Old contents are loaded for possible debugging but are not required for
-	 * normal operation, see OldSerXidInit().
+	 * normal operation, see SerialInit().
 	 */
 	"pg_serial",
...
@@ -29,8 +29,8 @@
 #define MaxMultiXactOffset	((MultiXactOffset) 0xFFFFFFFF)

 /* Number of SLRU buffers to use for multixact */
-#define NUM_MXACTOFFSET_BUFFERS		8
-#define NUM_MXACTMEMBER_BUFFERS		16
+#define NUM_MULTIXACTOFFSET_BUFFERS		8
+#define NUM_MULTIXACTMEMBER_BUFFERS		16

 /*
  * Possible multixact lock modes ("status").  The first four modes are for
...
@@ -197,7 +197,7 @@ typedef struct VariableCacheData
 									 * aborted */

 	/*
-	 * These fields are protected by CLogTruncationLock
+	 * These fields are protected by XactTruncationLock
 	 */
 	TransactionId oldestClogXid;	/* oldest it's safe to look up in clog */
...
@@ -18,7 +18,7 @@
 /*
  * The number of SLRU page buffers we use for the notification queue.
  */
-#define NUM_ASYNC_BUFFERS	8
+#define NUM_NOTIFY_BUFFERS	8

 extern bool Trace_notify;
 extern volatile sig_atomic_t notifyInterruptPending;
...
@@ -195,13 +195,13 @@ extern void LWLockInitialize(LWLock *lock, int tranche_id);
  */
 typedef enum BuiltinTrancheIds
 {
-	LWTRANCHE_CLOG_BUFFERS = NUM_INDIVIDUAL_LWLOCKS,
-	LWTRANCHE_COMMITTS_BUFFERS,
-	LWTRANCHE_SUBTRANS_BUFFERS,
-	LWTRANCHE_MXACTOFFSET_BUFFERS,
-	LWTRANCHE_MXACTMEMBER_BUFFERS,
-	LWTRANCHE_ASYNC_BUFFERS,
-	LWTRANCHE_OLDSERXID_BUFFERS,
+	LWTRANCHE_XACT_BUFFER = NUM_INDIVIDUAL_LWLOCKS,
+	LWTRANCHE_COMMITTS_BUFFER,
+	LWTRANCHE_SUBTRANS_BUFFER,
+	LWTRANCHE_MULTIXACTOFFSET_BUFFER,
+	LWTRANCHE_MULTIXACTMEMBER_BUFFER,
+	LWTRANCHE_NOTIFY_BUFFER,
+	LWTRANCHE_SERIAL_BUFFER,
 	LWTRANCHE_WAL_INSERT,
 	LWTRANCHE_BUFFER_CONTENT,
 	LWTRANCHE_BUFFER_IO_IN_PROGRESS,
...
@@ -27,8 +27,8 @@ extern int	max_predicate_locks_per_relation;
 extern int	max_predicate_locks_per_page;

-/* Number of SLRU buffers to use for predicate locking */
-#define NUM_OLDSERXID_BUFFERS	16
+/* Number of SLRU buffers to use for Serial SLRU */
+#define NUM_SERIAL_BUFFERS	16

 /*
  * A handle used for sharing SERIALIZABLEXACT objects between the participants
...