Commit 4e15a4db authored by Tom Lane

Documentation improvement and minor code cleanups for the latch facility.

Improve the documentation around weak-memory-ordering risks, and do a pass
of general editorialization on the comments in the latch code.  Make the
Windows latch code more like the Unix latch code where feasible; in
particular provide the same Assert checks in both implementations.
Fix poorly-placed WaitLatch call in syncrep.c.

This patch resolves, for the moment, concerns around weak-memory-ordering
bugs in latch-related code: we have documented the restrictions and checked
that existing calls meet them.  In 9.2 I hope that we will install suitable
memory barrier instructions in SetLatch/ResetLatch, so that their callers
don't need to be quite so careful.
parent cff60f2d
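To make the documented restriction concrete, a rough sketch of the coding discipline the message refers to follows (illustrative only, not code from this commit; MyWorkerLatch, WorkRequestLock, work_available, and request_work are invented names, and the flag lives under a spinlock because SetLatch/ResetLatch do not yet contain memory barriers):

#include "postgres.h"
#include "storage/latch.h"
#include "storage/spin.h"

/*
 * In a real backend these would live in a shared-memory struct; the latch
 * would be set up with InitSharedLatch() at postmaster start and claimed
 * with OwnLatch() by the waiting process.  Declared statically here only
 * to keep the sketch short.
 */
static Latch MyWorkerLatch;
static slock_t WorkRequestLock;
static bool work_available = false;

/* Waiter: reset the latch *before* checking the flag; wait at loop bottom. */
static void
worker_wait_loop(void)
{
    for (;;)
    {
        bool        have_work;

        ResetLatch(&MyWorkerLatch);

        SpinLockAcquire(&WorkRequestLock);
        have_work = work_available;
        work_available = false;
        SpinLockRelease(&WorkRequestLock);

        if (have_work)
        {
            /* ... do the work here ... */
        }

        WaitLatch(&MyWorkerLatch, WL_LATCH_SET, 0L);
    }
}

/* Setter (possibly another process): update the flag first, then set the latch. */
static void
request_work(void)
{
    SpinLockAcquire(&WorkRequestLock);
    work_available = true;
    SpinLockRelease(&WorkRequestLock);

    SetLatch(&MyWorkerLatch);
}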
@@ -3,60 +3,6 @@
 * unix_latch.c
 *    Routines for inter-process latches
 *
- * A latch is a boolean variable, with operations that let you to sleep
- * until it is set. A latch can be set from another process, or a signal
- * handler within the same process.
- *
- * The latch interface is a reliable replacement for the common pattern of
- * using pg_usleep() or select() to wait until a signal arrives, where the
- * signal handler sets a global variable. Because on some platforms, an
- * incoming signal doesn't interrupt sleep, and even on platforms where it
- * does there is a race condition if the signal arrives just before
- * entering the sleep, the common pattern must periodically wake up and
- * poll the global variable. pselect() system call was invented to solve
- * the problem, but it is not portable enough. Latches are designed to
- * overcome these limitations, allowing you to sleep without polling and
- * ensuring a quick response to signals from other processes.
- *
- * There are two kinds of latches: local and shared. A local latch is
- * initialized by InitLatch, and can only be set from the same process.
- * A local latch can be used to wait for a signal to arrive, by calling
- * SetLatch in the signal handler. A shared latch resides in shared memory,
- * and must be initialized at postmaster startup by InitSharedLatch. Before
- * a shared latch can be waited on, it must be associated with a process
- * with OwnLatch. Only the process owning the latch can wait on it, but any
- * process can set it.
- *
- * There are three basic operations on a latch:
- *
- * SetLatch - Sets the latch
- * ResetLatch - Clears the latch, allowing it to be set again
- * WaitLatch - Waits for the latch to become set
- *
- * The correct pattern to wait for an event is:
- *
- * for (;;)
- * {
- *     ResetLatch();
- *     if (work to do)
- *         Do Stuff();
- *
- *     WaitLatch();
- * }
- *
- * It's important to reset the latch *before* checking if there's work to
- * do. Otherwise, if someone sets the latch between the check and the
- * ResetLatch call, you will miss it and Wait will block.
- *
- * To wake up the waiter, you must first set a global flag or something
- * else that the main loop tests in the "if (work to do)" part, and call
- * SetLatch *after* that. SetLatch is designed to return quickly if the
- * latch is already set.
- *
- *
- * Implementation
- * --------------
- *
 * The Unix implementation uses the so-called self-pipe trick to overcome
 * the race condition involved with select() and setting a global flag
 * in the signal handler. When a latch is set and the current process
@@ -65,8 +11,8 @@
 * interrupt select() on all platforms, and even on platforms where it
 * does, a signal that arrives just before the select() call does not
 * prevent the select() from entering sleep. An incoming byte on a pipe
- * however reliably interrupts the sleep, and makes select() to return
- * immediately if the signal arrives just before select() begins.
+ * however reliably interrupts the sleep, and causes select() to return
+ * immediately even if the signal arrives before select() begins.
 *
 * When SetLatch is called from the same process that owns the latch,
 * SetLatch writes the byte directly to the pipe. If it's owned by another
 * process, send a signal.
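For readers unfamiliar with the technique, here is a minimal, generic illustration of the self-pipe trick in plain POSIX C (not the PostgreSQL implementation; error handling and flag-preserving fcntl() handling are omitted, and the names are invented):

#include <fcntl.h>
#include <signal.h>
#include <unistd.h>
#include <sys/select.h>

static int pipefds[2];              /* [0] = read end, [1] = write end */

static void
wakeup_handler(int signo)
{
    char        c = 0;

    /*
     * write() is async-signal-safe.  The pipe is non-blocking, so if it is
     * already full a wakeup byte is pending anyway and we can ignore EAGAIN.
     */
    (void) write(pipefds[1], &c, 1);
}

int
main(void)
{
    char        buf[16];

    pipe(pipefds);
    fcntl(pipefds[0], F_SETFL, O_NONBLOCK);
    fcntl(pipefds[1], F_SETFL, O_NONBLOCK);
    signal(SIGUSR1, wakeup_handler);

    for (;;)
    {
        fd_set      rfds;

        FD_ZERO(&rfds);
        FD_SET(pipefds[0], &rfds);

        /*
         * A signal that arrived before this point has already queued a byte,
         * so select() returns immediately instead of sleeping forever.
         */
        select(pipefds[0] + 1, &rfds, NULL, NULL, NULL);

        /* Drain the pipe, then go handle whatever the signal announced. */
        while (read(pipefds[0], buf, sizeof(buf)) > 0)
            ;
    }
}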
@@ -100,7 +46,7 @@
 /* Are we currently in WaitLatch? The signal handler would like to know. */
 static volatile sig_atomic_t waiting = false;

-/* Read and write end of the self-pipe */
+/* Read and write ends of the self-pipe */
 static int selfpipe_readfd = -1;
 static int selfpipe_writefd = -1;

@@ -116,7 +62,7 @@ static void sendSelfPipeByte(void);
 void
 InitLatch(volatile Latch *latch)
 {
-    /* Initialize the self pipe if this is our first latch in the process */
+    /* Initialize the self-pipe if this is our first latch in the process */
     if (selfpipe_readfd == -1)
         initSelfPipe();

@@ -127,13 +73,14 @@ InitLatch(volatile Latch *latch)
 /*
 * Initialize a shared latch that can be set from other processes. The latch
- * is initially owned by no-one, use OwnLatch to associate it with the
+ * is initially owned by no-one; use OwnLatch to associate it with the
 * current process.
 *
 * InitSharedLatch needs to be called in postmaster before forking child
 * processes, usually right after allocating the shared memory block
- * containing the latch with ShmemInitStruct. The Unix implementation
- * doesn't actually require that, but the Windows one does.
+ * containing the latch with ShmemInitStruct. (The Unix implementation
+ * doesn't actually require that, but the Windows one does.) Because of
+ * this restriction, we have no concurrency issues to worry about here.
 */
 void
 InitSharedLatch(volatile Latch *latch)
@@ -145,23 +92,30 @@ InitSharedLatch(volatile Latch *latch)
 /*
 * Associate a shared latch with the current process, allowing it to
- * wait on it.
+ * wait on the latch.
 *
- * Make sure that latch_sigusr1_handler() is called from the SIGUSR1 signal
- * handler, as shared latches use SIGUSR1 to for inter-process communication.
+ * Although there is a sanity check for latch-already-owned, we don't do
+ * any sort of locking here, meaning that we could fail to detect the error
+ * if two processes try to own the same latch at about the same time. If
+ * there is any risk of that, caller must provide an interlock to prevent it.
+ *
+ * In any process that calls OwnLatch(), make sure that
+ * latch_sigusr1_handler() is called from the SIGUSR1 signal handler,
+ * as shared latches use SIGUSR1 for inter-process communication.
 */
 void
 OwnLatch(volatile Latch *latch)
 {
    Assert(latch->is_shared);

-   /* Initialize the self pipe if this is our first latch in the process */
+   /* Initialize the self-pipe if this is our first latch in this process */
    if (selfpipe_readfd == -1)
        initSelfPipe();

    /* sanity check */
    if (latch->owner_pid != 0)
        elog(ERROR, "latch already owned");
    latch->owner_pid = MyProcPid;
 }
@@ -173,25 +127,26 @@ DisownLatch(volatile Latch *latch)
 {
    Assert(latch->is_shared);
    Assert(latch->owner_pid == MyProcPid);
    latch->owner_pid = 0;
 }

 /*
- * Wait for a given latch to be set, postmaster death, or until timeout is
- * exceeded. 'wakeEvents' is a bitmask that specifies which of those events
+ * Wait for a given latch to be set, or for postmaster death, or until timeout
+ * is exceeded. 'wakeEvents' is a bitmask that specifies which of those events
 * to wait for. If the latch is already set (and WL_LATCH_SET is given), the
 * function returns immediately.
 *
- * The 'timeout' is given in microseconds. It must be >= 0 if WL_TIMEOUT
- * event is given, otherwise it is ignored. On some platforms, signals cause
- * the timeout to be restarted, so beware that the function can sleep for
- * several times longer than the specified timeout.
+ * The 'timeout' is given in microseconds. It must be >= 0 if WL_TIMEOUT flag
+ * is given. On some platforms, signals cause the timeout to be restarted,
+ * so beware that the function can sleep for several times longer than the
+ * specified timeout.
 *
 * The latch must be owned by the current process, ie. it must be a
 * backend-local latch initialized with InitLatch, or a shared latch
 * associated with the current process by calling OwnLatch.
 *
- * Returns bit field indicating which condition(s) caused the wake-up. Note
+ * Returns bit mask indicating which condition(s) caused the wake-up. Note
 * that if multiple wake-up conditions are true, there is no guarantee that
 * we return all of them in one call, but we will return at least one. Also,
 * according to the select(2) man page on Linux, select(2) may spuriously
@@ -200,7 +155,7 @@ DisownLatch(volatile Latch *latch)
 * readable, or postmaster has died, even when none of the wake conditions
 * have been satisfied. That should be rare in practice, but the caller
 * should not use the return value for anything critical, re-checking the
- * situation with PostmasterIsAlive() or read() on a socket if necessary.
+ * situation with PostmasterIsAlive() or read() on a socket as necessary.
 */
 int
 WaitLatch(volatile Latch *latch, int wakeEvents, long timeout)
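A hypothetical caller might consume that advisory result mask like this (sketch only; mylatch and shutdown_requested are invented names, and the latch is assumed to be owned by this process):

for (;;)
{
    int         rc;

    ResetLatch(mylatch);

    if (shutdown_requested)
        break;

    /* Wake on the latch, on postmaster death, or at least every 10 seconds. */
    rc = WaitLatch(mylatch, WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                   10000000L);

    if (rc & WL_POSTMASTER_DEATH)
        proc_exit(1);           /* parent is gone; don't linger */

    /*
     * WL_LATCH_SET and WL_TIMEOUT simply fall through: the loop re-checks
     * shared state directly rather than trusting the mask to be complete.
     */
}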
@@ -247,12 +202,18 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
        int hifd;

        /*
-        * Clear the pipe, and check if the latch is set already. If someone
+        * Clear the pipe, then check if the latch is set already. If someone
         * sets the latch between this and the select() below, the setter will
         * write a byte to the pipe (or signal us and the signal handler will
         * do that), and the select() will return immediately.
+        *
+        * Note: we assume that the kernel calls involved in drainSelfPipe()
+        * and SetLatch() will provide adequate synchronization on machines
+        * with weak memory ordering, so that we cannot miss seeing is_set
+        * if the signal byte is already in the pipe when we drain it.
         */
        drainSelfPipe();

        if ((wakeEvents & WL_LATCH_SET) && latch->is_set)
        {
            result |= WL_LATCH_SET;
@@ -263,7 +224,10 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
            break;
        }

+       /* Must wait ... set up the event masks for select() */
        FD_ZERO(&input_mask);
+       FD_ZERO(&output_mask);
        FD_SET(selfpipe_readfd, &input_mask);
        hifd = selfpipe_readfd;
@@ -281,7 +245,6 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
            hifd = sock;
        }

-       FD_ZERO(&output_mask);
        if (wakeEvents & WL_SOCKET_WRITEABLE)
        {
            FD_SET(sock, &output_mask);
@@ -320,21 +283,30 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
        {
            result |= WL_POSTMASTER_DEATH;
        }
-   } while(result == 0);
+   } while (result == 0);

    waiting = false;

    return result;
 }

 /*
- * Sets a latch and wakes up anyone waiting on it. Returns quickly if the
- * latch is already set.
+ * Sets a latch and wakes up anyone waiting on it.
+ *
+ * This is cheap if the latch is already set, otherwise not so much.
 */
 void
 SetLatch(volatile Latch *latch)
 {
    pid_t owner_pid;

+   /*
+    * XXX there really ought to be a memory barrier operation right here,
+    * to ensure that any flag variables we might have changed get flushed
+    * to main memory before we check/set is_set. Without that, we have to
+    * require that callers provide their own synchronization for machines
+    * with weak memory ordering (see latch.h).
+    */

    /* Quick exit if already set */
    if (latch->is_set)
        return;
@@ -346,13 +318,21 @@ SetLatch(volatile Latch *latch)
     * we're in a signal handler. We use the self-pipe to wake up the select()
     * in that case. If it's another process, send a signal.
     *
-    * Fetch owner_pid only once, in case the owner simultaneously disowns the
-    * latch and clears owner_pid. XXX: This assumes that pid_t is atomic,
-    * which isn't guaranteed to be true! In practice, the effective range of
-    * pid_t fits in a 32 bit integer, and so should be atomic. In the worst
-    * case, we might end up signaling wrong process if the right one disowns
-    * the latch just as we fetch owner_pid. Even then, you're very unlucky if
-    * a process with that bogus pid exists.
+    * Fetch owner_pid only once, in case the latch is concurrently getting
+    * owned or disowned. XXX: This assumes that pid_t is atomic, which isn't
+    * guaranteed to be true! In practice, the effective range of pid_t fits
+    * in a 32 bit integer, and so should be atomic. In the worst case, we
+    * might end up signaling the wrong process. Even then, you're very
+    * unlucky if a process with that bogus pid exists and belongs to
+    * Postgres; and PG database processes should handle excess SIGUSR1
+    * interrupts without a problem anyhow.
+    *
+    * Another sort of race condition that's possible here is for a new process
+    * to own the latch immediately after we look, so we don't signal it.
+    * This is okay so long as all callers of ResetLatch/WaitLatch follow the
+    * standard coding convention of waiting at the bottom of their loops,
+    * not the top, so that they'll correctly process latch-setting events that
+    * happen before they enter the loop.
     */
    owner_pid = latch->owner_pid;
    if (owner_pid == 0)
@@ -374,11 +354,23 @@ ResetLatch(volatile Latch *latch)
    Assert(latch->owner_pid == MyProcPid);

    latch->is_set = false;
+
+   /*
+    * XXX there really ought to be a memory barrier operation right here, to
+    * ensure that the write to is_set gets flushed to main memory before we
+    * examine any flag variables. Otherwise a concurrent SetLatch might
+    * falsely conclude that it needn't signal us, even though we have missed
+    * seeing some flag updates that SetLatch was supposed to inform us of.
+    * For the moment, callers must supply their own synchronization of flag
+    * variables (see latch.h).
+    */
 }

 /*
- * SetLatch uses SIGUSR1 to wake up the process waiting on the latch. Wake
- * up WaitLatch.
+ * SetLatch uses SIGUSR1 to wake up the process waiting on the latch.
+ *
+ * Wake up WaitLatch, if we're waiting. (We might not be, since SIGUSR1 is
+ * overloaded for multiple purposes.)
 */
 void
 latch_sigusr1_handler(void)
...
/*-------------------------------------------------------------------------
 *
 * win32_latch.c
- *    Windows implementation of latches.
+ *    Routines for inter-process latches
 *
- * See unix_latch.c for information on usage.
+ * See unix_latch.c for header comments for the exported functions;
+ * the API presented here is supposed to be the same as there.
 *
 * The Windows implementation uses Windows events that are inherited by
 * all postmaster child processes.
@@ -24,7 +25,6 @@
 #include "miscadmin.h"
 #include "postmaster/postmaster.h"
-#include "replication/walsender.h"
 #include "storage/latch.h"
 #include "storage/shmem.h"
@@ -89,7 +89,7 @@ WaitLatch(volatile Latch *latch, int wakeEvents, long timeout)
 }

 int
-WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, SOCKET sock,
+WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
                  long timeout)
 {
    DWORD rc;
@@ -101,12 +101,15 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, SOCKET sock,
    int pmdeath_eventno = 0;
    long timeout_ms;

-   Assert(wakeEvents != 0);

    /* Ignore WL_SOCKET_* events if no valid socket is given */
    if (sock == PGINVALID_SOCKET)
        wakeEvents &= ~(WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE);

+   Assert(wakeEvents != 0);    /* must have at least one wake event */
+
+   if ((wakeEvents & WL_LATCH_SET) && latch->owner_pid != MyProcPid)
+       elog(ERROR, "cannot wait on a latch owned by another process");

    /* Convert timeout to milliseconds for WaitForMultipleObjects() */
    if (wakeEvents & WL_TIMEOUT)
    {
@@ -122,8 +125,8 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, SOCKET sock,
    events[0] = latchevent;
    events[1] = pgwin32_signal_event;
    numevents = 2;
-   if (((wakeEvents & WL_SOCKET_READABLE) ||
-        (wakeEvents & WL_SOCKET_WRITEABLE)))
+   if ((wakeEvents & WL_SOCKET_READABLE) ||
+       (wakeEvents & WL_SOCKET_WRITEABLE))
    {
        int flags = 0;
@@ -152,7 +155,7 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, SOCKET sock,
         */
        if (!ResetEvent(latchevent))
            elog(ERROR, "ResetEvent failed: error code %d", (int) GetLastError());
-       if (latch->is_set && (wakeEvents & WL_LATCH_SET))
+       if ((wakeEvents & WL_LATCH_SET) && latch->is_set)
        {
            result |= WL_LATCH_SET;

            /*
@@ -171,17 +174,17 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, SOCKET sock,
        else if (rc == WAIT_OBJECT_0 + 1)
            pgwin32_dispatch_queued_signals();
+       else if (rc == WAIT_TIMEOUT)
+       {
+           result |= WL_TIMEOUT;
+       }
        else if ((wakeEvents & WL_POSTMASTER_DEATH) &&
                 rc == WAIT_OBJECT_0 + pmdeath_eventno)
        {
            /* Postmaster died */
            result |= WL_POSTMASTER_DEATH;
        }
-       else if (rc == WAIT_TIMEOUT)
-       {
-           result |= WL_TIMEOUT;
-       }
-       else if ((wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) != 0 &&
+       else if ((wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) &&
                 rc == WAIT_OBJECT_0 + 2)    /* socket is at event slot 2 */
        {
            WSANETWORKEVENTS resEvents;
@@ -206,7 +209,7 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, SOCKET sock,
        else if (rc != WAIT_OBJECT_0)
            elog(ERROR, "unexpected return code from WaitForMultipleObjects(): %d", (int) rc);
    }
-   while(result == 0);
+   while (result == 0);

    /* Clean up the handle we created for the socket */
    if (sockevent != WSA_INVALID_EVENT)
@@ -231,15 +234,10 @@ SetLatch(volatile Latch *latch)
    /*
     * See if anyone's waiting for the latch. It can be the current process if
-    * we're in a signal handler. Use a local variable here in case the latch
-    * is just disowned between the test and the SetEvent call, and event
-    * field set to NULL.
+    * we're in a signal handler.
     *
-    * Fetch handle field only once, in case the owner simultaneously disowns
-    * the latch and clears handle. This assumes that HANDLE is atomic, which
-    * isn't guaranteed to be true! In practice, it should be, and in the
-    * worst case we end up calling SetEvent with a bogus handle, and SetEvent
-    * will return an error with no harm done.
+    * Use a local variable here just in case somebody changes the event field
+    * concurrently (which really should not happen).
     */
    handle = latch->event;
    if (handle)
@@ -256,5 +254,8 @@ SetLatch(volatile Latch *latch)
 void
 ResetLatch(volatile Latch *latch)
 {
+   /* Only the owner should reset the latch */
+   Assert(latch->owner_pid == MyProcPid);
+
    latch->is_set = false;
 }
@@ -166,13 +166,6 @@ SyncRepWaitForLSN(XLogRecPtr XactCommitLSN)
    {
        int syncRepState;

-       /*
-        * Wait on latch for up to 60 seconds. This allows us to check for
-        * postmaster death regularly while waiting. Note that timeout here
-        * does not necessarily release from loop.
-        */
-       WaitLatch(&MyProc->waitLatch, WL_LATCH_SET | WL_TIMEOUT, 60000000L);
-
        /* Must reset the latch before testing state. */
        ResetLatch(&MyProc->waitLatch);
@@ -184,6 +177,12 @@ SyncRepWaitForLSN(XLogRecPtr XactCommitLSN)
         * walsender changes the state to SYNC_REP_WAIT_COMPLETE, it will
         * never update it again, so we can't be seeing a stale value in that
         * case.
+        *
+        * Note: on machines with weak memory ordering, the acquisition of
+        * the lock is essential to avoid race conditions: we cannot be sure
+        * the sender's state update has reached main memory until we acquire
+        * the lock. We could get rid of this dance if SetLatch/ResetLatch
+        * contained memory barriers.
         */
        syncRepState = MyProc->syncRepState;
        if (syncRepState == SYNC_REP_WAITING)
@@ -246,6 +245,13 @@ SyncRepWaitForLSN(XLogRecPtr XactCommitLSN)
            SyncRepCancelWait();
            break;
        }

+       /*
+        * Wait on latch for up to 60 seconds. This allows us to check for
+        * cancel/die signal or postmaster death regularly while waiting. Note
+        * that timeout here does not necessarily release from loop.
+        */
+       WaitLatch(&MyProc->waitLatch, WL_LATCH_SET | WL_TIMEOUT, 60000000L);
    }

    /*
...
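For context, the other half of the dance described in the new comment happens on the walsender side. Schematically (a simplified sketch, not the actual SyncRepWakeQueue code; proc stands for the waiting backend's PGPROC entry), the state change is made while holding SyncRepLock and only then is the waiter's latch set:

LWLockAcquire(SyncRepLock, LW_EXCLUSIVE);
proc->syncRepState = SYNC_REP_WAIT_COMPLETE;    /* update under the lock */
LWLockRelease(SyncRepLock);

SetLatch(&proc->waitLatch);                     /* wake the waiting backend */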
@@ -332,7 +332,7 @@ InitProcess(void)
    MyProc->waitLSN.xrecoff = 0;
    MyProc->syncRepState = SYNC_REP_NOT_WAITING;
    SHMQueueElemInit(&(MyProc->syncRepLinks));
-   OwnLatch((Latch *) &MyProc->waitLatch);
+   OwnLatch(&MyProc->waitLatch);

    /*
     * We might be reusing a semaphore that belonged to a failed process. So
...
@@ -3,6 +3,67 @@
 * latch.h
 *    Routines for interprocess latches
 *
+ * A latch is a boolean variable, with operations that let processes sleep
+ * until it is set. A latch can be set from another process, or a signal
+ * handler within the same process.
+ *
+ * The latch interface is a reliable replacement for the common pattern of
+ * using pg_usleep() or select() to wait until a signal arrives, where the
+ * signal handler sets a flag variable. Because on some platforms an
+ * incoming signal doesn't interrupt sleep, and even on platforms where it
+ * does there is a race condition if the signal arrives just before
+ * entering the sleep, the common pattern must periodically wake up and
+ * poll the flag variable. The pselect() system call was invented to solve
+ * this problem, but it is not portable enough. Latches are designed to
+ * overcome these limitations, allowing you to sleep without polling and
+ * ensuring quick response to signals from other processes.
+ *
+ * There are two kinds of latches: local and shared. A local latch is
+ * initialized by InitLatch, and can only be set from the same process.
+ * A local latch can be used to wait for a signal to arrive, by calling
+ * SetLatch in the signal handler. A shared latch resides in shared memory,
+ * and must be initialized at postmaster startup by InitSharedLatch. Before
+ * a shared latch can be waited on, it must be associated with a process
+ * with OwnLatch. Only the process owning the latch can wait on it, but any
+ * process can set it.
+ *
+ * There are three basic operations on a latch:
+ *
+ * SetLatch - Sets the latch
+ * ResetLatch - Clears the latch, allowing it to be set again
+ * WaitLatch - Waits for the latch to become set
+ *
+ * WaitLatch includes a provision for timeouts (which should hopefully not
+ * be necessary once the code is fully latch-ified) and a provision for
+ * postmaster child processes to wake up immediately on postmaster death.
+ * See unix_latch.c for detailed specifications for the exported functions.
+ *
+ * The correct pattern to wait for event(s) is:
+ *
+ * for (;;)
+ * {
+ *     ResetLatch();
+ *     if (work to do)
+ *         Do Stuff();
+ *     WaitLatch();
+ * }
+ *
+ * It's important to reset the latch *before* checking if there's work to
+ * do. Otherwise, if someone sets the latch between the check and the
+ * ResetLatch call, you will miss it and Wait will incorrectly block.
+ *
+ * To wake up the waiter, you must first set a global flag or something
+ * else that the wait loop tests in the "if (work to do)" part, and call
+ * SetLatch *after* that. SetLatch is designed to return quickly if the
+ * latch is already set.
+ *
+ * Presently, when using a shared latch for interprocess signalling, the
+ * flag variable(s) set by senders and inspected by the wait loop must
+ * be protected by spinlocks or LWLocks, else it is possible to miss events
+ * on machines with weak memory ordering (such as PPC). This restriction
+ * will be lifted in future by inserting suitable memory barriers into
+ * SetLatch and ResetLatch.
+ *
 *
 * Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
@@ -51,16 +112,17 @@ extern int WaitLatchOrSocket(volatile Latch *latch, int wakeEvents,
 extern void SetLatch(volatile Latch *latch);
 extern void ResetLatch(volatile Latch *latch);
-#define TestLatch(latch) (((volatile Latch *) latch)->is_set)
+/* beware of memory ordering issues if you use this macro! */
+#define TestLatch(latch) (((volatile Latch *) (latch))->is_set)

 /*
- * Unix implementation uses SIGUSR1 for inter-process signaling, Win32 doesn't
- * need this.
+ * Unix implementation uses SIGUSR1 for inter-process signaling.
+ * Win32 doesn't need this.
 */
 #ifndef WIN32
 extern void latch_sigusr1_handler(void);
 #else
-#define latch_sigusr1_handler()
+#define latch_sigusr1_handler() ((void) 0)
 #endif

 #endif   /* LATCH_H */
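As a complement to the shared-latch rules in the new header comment, here is a sketch of the local-latch case it describes (illustrative only; my_latch, got_request, my_sigusr2_handler, and service_loop are invented names; since the latch is set only from a signal handler in the same process, a plain sig_atomic_t flag needs no locking):

#include "postgres.h"

#include <signal.h>

#include "libpq/pqsignal.h"
#include "storage/latch.h"

static Latch my_latch;
static volatile sig_atomic_t got_request = false;

static void
my_sigusr2_handler(SIGNAL_ARGS)
{
    got_request = true;
    SetLatch(&my_latch);        /* safe to call from a signal handler */
}

static void
service_loop(void)
{
    InitLatch(&my_latch);
    pqsignal(SIGUSR2, my_sigusr2_handler);

    for (;;)
    {
        ResetLatch(&my_latch);

        if (got_request)
        {
            got_request = false;
            /* ... handle the request here ... */
        }

        WaitLatch(&my_latch, WL_LATCH_SET, 0L);
    }
}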