mirror of https://github.com/postgres/postgres.git, synced 2025-06-07 11:02:12 +03:00
Documentation improvement and minor code cleanups for the latch facility.

Improve the documentation around weak-memory-ordering risks, and do a pass of general editorialization on the comments in the latch code. Make the Windows latch code more like the Unix latch code where feasible; in particular, provide the same Assert checks in both implementations.

Fix poorly-placed WaitLatch call in syncrep.c.

This patch resolves, for the moment, concerns around weak-memory-ordering bugs in latch-related code: we have documented the restrictions and checked that existing calls meet them. In 9.2 I hope that we will install suitable memory barrier instructions in SetLatch/ResetLatch, so that their callers don't need to be quite so careful.

This commit is contained in:
parent 028a0c5a29
commit 6760a4d402
src/backend/port/unix_latch.c
@@ -3,60 +3,6 @@
  * unix_latch.c
  *	  Routines for inter-process latches
  *
- * A latch is a boolean variable, with operations that let you to sleep
- * until it is set. A latch can be set from another process, or a signal
- * handler within the same process.
- *
- * The latch interface is a reliable replacement for the common pattern of
- * using pg_usleep() or select() to wait until a signal arrives, where the
- * signal handler sets a global variable. Because on some platforms, an
- * incoming signal doesn't interrupt sleep, and even on platforms where it
- * does there is a race condition if the signal arrives just before
- * entering the sleep, the common pattern must periodically wake up and
- * poll the global variable. pselect() system call was invented to solve
- * the problem, but it is not portable enough. Latches are designed to
- * overcome these limitations, allowing you to sleep without polling and
- * ensuring a quick response to signals from other processes.
- *
- * There are two kinds of latches: local and shared. A local latch is
- * initialized by InitLatch, and can only be set from the same process.
- * A local latch can be used to wait for a signal to arrive, by calling
- * SetLatch in the signal handler. A shared latch resides in shared memory,
- * and must be initialized at postmaster startup by InitSharedLatch. Before
- * a shared latch can be waited on, it must be associated with a process
- * with OwnLatch. Only the process owning the latch can wait on it, but any
- * process can set it.
- *
- * There are three basic operations on a latch:
- *
- * SetLatch		- Sets the latch
- * ResetLatch	- Clears the latch, allowing it to be set again
- * WaitLatch	- Waits for the latch to become set
- *
- * The correct pattern to wait for an event is:
- *
- * for (;;)
- * {
- *	   ResetLatch();
- *	   if (work to do)
- *		   Do Stuff();
- *
- *	   WaitLatch();
- * }
- *
- * It's important to reset the latch *before* checking if there's work to
- * do. Otherwise, if someone sets the latch between the check and the
- * ResetLatch call, you will miss it and Wait will block.
- *
- * To wake up the waiter, you must first set a global flag or something
- * else that the main loop tests in the "if (work to do)" part, and call
- * SetLatch *after* that. SetLatch is designed to return quickly if the
- * latch is already set.
- *
- *
- * Implementation
- * --------------
- *
  * The Unix implementation uses the so-called self-pipe trick to overcome
  * the race condition involved with select() and setting a global flag
  * in the signal handler. When a latch is set and the current process
@@ -65,8 +11,8 @@
  * interrupt select() on all platforms, and even on platforms where it
  * does, a signal that arrives just before the select() call does not
  * prevent the select() from entering sleep. An incoming byte on a pipe
- * however reliably interrupts the sleep, and makes select() to return
- * immediately if the signal arrives just before select() begins.
+ * however reliably interrupts the sleep, and causes select() to return
+ * immediately even if the signal arrives before select() begins.
  *
  * When SetLatch is called from the same process that owns the latch,
  * SetLatch writes the byte directly to the pipe. If it's owned by another
@@ -99,7 +45,7 @@
 /* Are we currently in WaitLatch? The signal handler would like to know. */
 static volatile sig_atomic_t waiting = false;
 
-/* Read and write end of the self-pipe */
+/* Read and write ends of the self-pipe */
 static int	selfpipe_readfd = -1;
 static int	selfpipe_writefd = -1;
 
@@ -115,7 +61,7 @@ static void sendSelfPipeByte(void);
 void
 InitLatch(volatile Latch *latch)
 {
-	/* Initialize the self pipe if this is our first latch in the process */
+	/* Initialize the self-pipe if this is our first latch in the process */
 	if (selfpipe_readfd == -1)
 		initSelfPipe();
 
@@ -126,13 +72,14 @@ InitLatch(volatile Latch *latch)
 
 /*
  * Initialize a shared latch that can be set from other processes. The latch
- * is initially owned by no-one, use OwnLatch to associate it with the
+ * is initially owned by no-one; use OwnLatch to associate it with the
  * current process.
  *
  * InitSharedLatch needs to be called in postmaster before forking child
  * processes, usually right after allocating the shared memory block
- * containing the latch with ShmemInitStruct. The Unix implementation
- * doesn't actually require that, but the Windows one does.
+ * containing the latch with ShmemInitStruct. (The Unix implementation
+ * doesn't actually require that, but the Windows one does.) Because of
+ * this restriction, we have no concurrency issues to worry about here.
  */
 void
 InitSharedLatch(volatile Latch *latch)
@@ -144,23 +91,30 @@ InitSharedLatch(volatile Latch *latch)
 
 /*
  * Associate a shared latch with the current process, allowing it to
- * wait on it.
+ * wait on the latch.
  *
- * Make sure that latch_sigusr1_handler() is called from the SIGUSR1 signal
- * handler, as shared latches use SIGUSR1 to for inter-process communication.
+ * Although there is a sanity check for latch-already-owned, we don't do
+ * any sort of locking here, meaning that we could fail to detect the error
+ * if two processes try to own the same latch at about the same time. If
+ * there is any risk of that, caller must provide an interlock to prevent it.
+ *
+ * In any process that calls OwnLatch(), make sure that
+ * latch_sigusr1_handler() is called from the SIGUSR1 signal handler,
+ * as shared latches use SIGUSR1 for inter-process communication.
  */
 void
 OwnLatch(volatile Latch *latch)
 {
 	Assert(latch->is_shared);
 
-	/* Initialize the self pipe if this is our first latch in the process */
+	/* Initialize the self-pipe if this is our first latch in this process */
 	if (selfpipe_readfd == -1)
 		initSelfPipe();
 
 	/* sanity check */
 	if (latch->owner_pid != 0)
 		elog(ERROR, "latch already owned");
 
 	latch->owner_pid = MyProcPid;
 }
 
@@ -172,6 +126,7 @@ DisownLatch(volatile Latch *latch)
 {
 	Assert(latch->is_shared);
 	Assert(latch->owner_pid == MyProcPid);
 
 	latch->owner_pid = 0;
 }
 
@@ -229,21 +184,31 @@ WaitLatchOrSocket(volatile Latch *latch, pgsocket sock, bool forRead,
 	int			hifd;
 
 		/*
-		 * Clear the pipe, and check if the latch is set already. If someone
+		 * Clear the pipe, then check if the latch is set already. If someone
 		 * sets the latch between this and the select() below, the setter will
 		 * write a byte to the pipe (or signal us and the signal handler will
 		 * do that), and the select() will return immediately.
+		 *
+		 * Note: we assume that the kernel calls involved in drainSelfPipe()
+		 * and SetLatch() will provide adequate synchronization on machines
+		 * with weak memory ordering, so that we cannot miss seeing is_set
+		 * if the signal byte is already in the pipe when we drain it.
 		 */
 		drainSelfPipe();
 
 		if (latch->is_set)
 		{
 			result = 1;
 			break;
 		}
 
+		/* Must wait ... set up the event masks for select() */
 		FD_ZERO(&input_mask);
+		FD_ZERO(&output_mask);
 
 		FD_SET(selfpipe_readfd, &input_mask);
 		hifd = selfpipe_readfd;
 
 		if (sock != PGINVALID_SOCKET && forRead)
 		{
 			FD_SET(sock, &input_mask);
@@ -251,7 +216,6 @@ WaitLatchOrSocket(volatile Latch *latch, pgsocket sock, bool forRead,
 			hifd = sock;
 		}
 
-		FD_ZERO(&output_mask);
 		if (sock != PGINVALID_SOCKET && forWrite)
 		{
 			FD_SET(sock, &output_mask);
@@ -288,14 +252,23 @@ WaitLatchOrSocket(volatile Latch *latch, pgsocket sock, bool forRead,
 }
 
 /*
- * Sets a latch and wakes up anyone waiting on it. Returns quickly if the
- * latch is already set.
+ * Sets a latch and wakes up anyone waiting on it.
+ *
+ * This is cheap if the latch is already set, otherwise not so much.
  */
 void
 SetLatch(volatile Latch *latch)
 {
 	pid_t		owner_pid;
 
+	/*
+	 * XXX there really ought to be a memory barrier operation right here,
+	 * to ensure that any flag variables we might have changed get flushed
+	 * to main memory before we check/set is_set. Without that, we have to
+	 * require that callers provide their own synchronization for machines
+	 * with weak memory ordering (see latch.h).
+	 */
+
 	/* Quick exit if already set */
 	if (latch->is_set)
 		return;
@@ -307,13 +280,21 @@ SetLatch(volatile Latch *latch)
	 * we're in a signal handler. We use the self-pipe to wake up the select()
	 * in that case. If it's another process, send a signal.
	 *
-	 * Fetch owner_pid only once, in case the owner simultaneously disowns the
-	 * latch and clears owner_pid. XXX: This assumes that pid_t is atomic,
-	 * which isn't guaranteed to be true! In practice, the effective range of
-	 * pid_t fits in a 32 bit integer, and so should be atomic. In the worst
-	 * case, we might end up signaling wrong process if the right one disowns
-	 * the latch just as we fetch owner_pid. Even then, you're very unlucky if
-	 * a process with that bogus pid exists.
+	 * Fetch owner_pid only once, in case the latch is concurrently getting
+	 * owned or disowned. XXX: This assumes that pid_t is atomic, which isn't
+	 * guaranteed to be true! In practice, the effective range of pid_t fits
+	 * in a 32 bit integer, and so should be atomic. In the worst case, we
+	 * might end up signaling the wrong process. Even then, you're very
+	 * unlucky if a process with that bogus pid exists and belongs to
+	 * Postgres; and PG database processes should handle excess SIGUSR1
+	 * interrupts without a problem anyhow.
+	 *
+	 * Another sort of race condition that's possible here is for a new process
+	 * to own the latch immediately after we look, so we don't signal it.
+	 * This is okay so long as all callers of ResetLatch/WaitLatch follow the
+	 * standard coding convention of waiting at the bottom of their loops,
+	 * not the top, so that they'll correctly process latch-setting events that
+	 * happen before they enter the loop.
	 */
	owner_pid = latch->owner_pid;
	if (owner_pid == 0)
@@ -335,11 +316,23 @@ ResetLatch(volatile Latch *latch)
 	Assert(latch->owner_pid == MyProcPid);
 
 	latch->is_set = false;
 
+	/*
+	 * XXX there really ought to be a memory barrier operation right here, to
+	 * ensure that the write to is_set gets flushed to main memory before we
+	 * examine any flag variables. Otherwise a concurrent SetLatch might
+	 * falsely conclude that it needn't signal us, even though we have missed
+	 * seeing some flag updates that SetLatch was supposed to inform us of.
+	 * For the moment, callers must supply their own synchronization of flag
+	 * variables (see latch.h).
+	 */
 }
 
 /*
- * SetLatch uses SIGUSR1 to wake up the process waiting on the latch. Wake
- * up WaitLatch.
+ * SetLatch uses SIGUSR1 to wake up the process waiting on the latch.
+ *
+ * Wake up WaitLatch, if we're waiting. (We might not be, since SIGUSR1 is
+ * overloaded for multiple purposes.)
  */
 void
 latch_sigusr1_handler(void)
src/backend/port/win32_latch.c
@@ -1,9 +1,10 @@
 /*-------------------------------------------------------------------------
  *
  * win32_latch.c
- *	  Windows implementation of latches.
+ *	  Routines for inter-process latches
  *
- * See unix_latch.c for information on usage.
+ * See unix_latch.c for header comments for the exported functions;
+ * the API presented here is supposed to be the same as there.
  *
  * The Windows implementation uses Windows events that are inherited by
  * all postmaster child processes.
@@ -23,7 +24,6 @@
 #include <unistd.h>
 
 #include "miscadmin.h"
-#include "replication/walsender.h"
 #include "storage/latch.h"
 #include "storage/shmem.h"
 
@@ -88,7 +88,7 @@ WaitLatch(volatile Latch *latch, long timeout)
 }
 
 int
-WaitLatchOrSocket(volatile Latch *latch, SOCKET sock, bool forRead,
+WaitLatchOrSocket(volatile Latch *latch, pgsocket sock, bool forRead,
 				  bool forWrite, long timeout)
 {
 	DWORD		rc;
@@ -98,6 +98,9 @@ WaitLatchOrSocket(volatile Latch *latch, SOCKET sock, bool forRead,
 	int			numevents;
 	int			result = 0;
 
+	if (latch->owner_pid != MyProcPid)
+		elog(ERROR, "cannot wait on a latch owned by another process");
+
 	latchevent = latch->event;
 
 	events[0] = latchevent;
@@ -187,15 +190,10 @@ SetLatch(volatile Latch *latch)
 
 	/*
 	 * See if anyone's waiting for the latch. It can be the current process if
-	 * we're in a signal handler. Use a local variable here in case the latch
-	 * is just disowned between the test and the SetEvent call, and event
-	 * field set to NULL.
+	 * we're in a signal handler.
 	 *
-	 * Fetch handle field only once, in case the owner simultaneously disowns
-	 * the latch and clears handle. This assumes that HANDLE is atomic, which
-	 * isn't guaranteed to be true! In practice, it should be, and in the
-	 * worst case we end up calling SetEvent with a bogus handle, and SetEvent
-	 * will return an error with no harm done.
+	 * Use a local variable here just in case somebody changes the event field
+	 * concurrently (which really should not happen).
 	 */
 	handle = latch->event;
 	if (handle)
@@ -212,5 +210,8 @@ SetLatch(volatile Latch *latch)
 void
 ResetLatch(volatile Latch *latch)
 {
+	/* Only the owner should reset the latch */
+	Assert(latch->owner_pid == MyProcPid);
+
 	latch->is_set = false;
 }
src/backend/replication/syncrep.c
@@ -166,13 +166,6 @@ SyncRepWaitForLSN(XLogRecPtr XactCommitLSN)
 	{
 		int			syncRepState;
 
-		/*
-		 * Wait on latch for up to 60 seconds. This allows us to check for
-		 * postmaster death regularly while waiting. Note that timeout here
-		 * does not necessarily release from loop.
-		 */
-		WaitLatch(&MyProc->waitLatch, 60000000L);
-
 		/* Must reset the latch before testing state. */
 		ResetLatch(&MyProc->waitLatch);
 
@@ -184,6 +177,12 @@ SyncRepWaitForLSN(XLogRecPtr XactCommitLSN)
 		 * walsender changes the state to SYNC_REP_WAIT_COMPLETE, it will
 		 * never update it again, so we can't be seeing a stale value in that
 		 * case.
+		 *
+		 * Note: on machines with weak memory ordering, the acquisition of
+		 * the lock is essential to avoid race conditions: we cannot be sure
+		 * the sender's state update has reached main memory until we acquire
+		 * the lock. We could get rid of this dance if SetLatch/ResetLatch
+		 * contained memory barriers.
 		 */
 		syncRepState = MyProc->syncRepState;
 		if (syncRepState == SYNC_REP_WAITING)
@@ -246,6 +245,13 @@ SyncRepWaitForLSN(XLogRecPtr XactCommitLSN)
 			SyncRepCancelWait();
 			break;
 		}
 
+		/*
+		 * Wait on latch for up to 60 seconds. This allows us to check for
+		 * cancel/die signal or postmaster death regularly while waiting. Note
+		 * that timeout here does not necessarily release from loop.
+		 */
+		WaitLatch(&MyProc->waitLatch, 60000000L);
 	}
 
 /*
src/backend/storage/lmgr/proc.c
@@ -338,7 +338,7 @@ InitProcess(void)
 	MyProc->waitLSN.xrecoff = 0;
 	MyProc->syncRepState = SYNC_REP_NOT_WAITING;
 	SHMQueueElemInit(&(MyProc->syncRepLinks));
-	OwnLatch((Latch *) &MyProc->waitLatch);
+	OwnLatch(&MyProc->waitLatch);
 
 	/*
 	 * We might be reusing a semaphore that belonged to a failed process. So
src/include/storage/latch.h
@@ -3,6 +3,66 @@
  * latch.h
  *	  Routines for interprocess latches
  *
+ * A latch is a boolean variable, with operations that let processes sleep
+ * until it is set. A latch can be set from another process, or a signal
+ * handler within the same process.
+ *
+ * The latch interface is a reliable replacement for the common pattern of
+ * using pg_usleep() or select() to wait until a signal arrives, where the
+ * signal handler sets a flag variable. Because on some platforms an
+ * incoming signal doesn't interrupt sleep, and even on platforms where it
+ * does there is a race condition if the signal arrives just before
+ * entering the sleep, the common pattern must periodically wake up and
+ * poll the flag variable. The pselect() system call was invented to solve
+ * this problem, but it is not portable enough. Latches are designed to
+ * overcome these limitations, allowing you to sleep without polling and
+ * ensuring quick response to signals from other processes.
+ *
+ * There are two kinds of latches: local and shared. A local latch is
+ * initialized by InitLatch, and can only be set from the same process.
+ * A local latch can be used to wait for a signal to arrive, by calling
+ * SetLatch in the signal handler. A shared latch resides in shared memory,
+ * and must be initialized at postmaster startup by InitSharedLatch. Before
+ * a shared latch can be waited on, it must be associated with a process
+ * with OwnLatch. Only the process owning the latch can wait on it, but any
+ * process can set it.
+ *
+ * There are three basic operations on a latch:
+ *
+ * SetLatch		- Sets the latch
+ * ResetLatch	- Clears the latch, allowing it to be set again
+ * WaitLatch	- Waits for the latch to become set
+ *
+ * WaitLatch includes a provision for timeouts (which should hopefully not
+ * be necessary once the code is fully latch-ified).
+ * See unix_latch.c for detailed specifications for the exported functions.
+ *
+ * The correct pattern to wait for event(s) is:
+ *
+ * for (;;)
+ * {
+ *	   ResetLatch();
+ *	   if (work to do)
+ *		   Do Stuff();
+ *	   WaitLatch();
+ * }
+ *
+ * It's important to reset the latch *before* checking if there's work to
+ * do. Otherwise, if someone sets the latch between the check and the
+ * ResetLatch call, you will miss it and Wait will incorrectly block.
+ *
+ * To wake up the waiter, you must first set a global flag or something
+ * else that the wait loop tests in the "if (work to do)" part, and call
+ * SetLatch *after* that. SetLatch is designed to return quickly if the
+ * latch is already set.
+ *
+ * Presently, when using a shared latch for interprocess signalling, the
+ * flag variable(s) set by senders and inspected by the wait loop must
+ * be protected by spinlocks or LWLocks, else it is possible to miss events
+ * on machines with weak memory ordering (such as PPC). This restriction
+ * will be lifted in future by inserting suitable memory barriers into
+ * SetLatch and ResetLatch.
+ *
  *
  * Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
@@ -44,16 +104,17 @@ extern int WaitLatchOrSocket(volatile Latch *latch, pgsocket sock,
 extern void SetLatch(volatile Latch *latch);
 extern void ResetLatch(volatile Latch *latch);
 
-#define TestLatch(latch) (((volatile Latch *) latch)->is_set)
+/* beware of memory ordering issues if you use this macro! */
+#define TestLatch(latch) (((volatile Latch *) (latch))->is_set)
 
 /*
- * Unix implementation uses SIGUSR1 for inter-process signaling, Win32 doesn't
- * need this.
+ * Unix implementation uses SIGUSR1 for inter-process signaling.
+ * Win32 doesn't need this.
  */
 #ifndef WIN32
 extern void latch_sigusr1_handler(void);
 #else
-#define latch_sigusr1_handler()
+#define latch_sigusr1_handler()  ((void) 0)
 #endif
 
 #endif	 /* LATCH_H */