mirror of https://github.com/postgres/postgres.git
Complete TODO item:

* -HOLDER/HOLDERTAG rename to PROCLOCK/PROCLOCKTAG

This commit is contained in:
parent 97377048b4
commit b75fcf9326
diff --git a/src/backend/storage/lmgr/README b/src/backend/storage/lmgr/README
@@ -1,4 +1,4 @@
-$Header: /cvsroot/pgsql/src/backend/storage/lmgr/README,v 1.10 2002/04/15 23:46:13 momjian Exp $
+$Header: /cvsroot/pgsql/src/backend/storage/lmgr/README,v 1.11 2002/07/19 00:17:40 momjian Exp $
 
 LOCKING OVERVIEW
 
@@ -7,38 +7,40 @@ Postgres uses three types of interprocess locks:
 
 * Spinlocks.  These are intended for *very* short-term locks.  If a lock
 is to be held more than a few dozen instructions, or across any sort of
-kernel call (or even a call to a nontrivial subroutine), don't use a spinlock.
-Spinlocks are primarily used as infrastructure for lightweight locks.
-They are implemented using a hardware atomic-test-and-set instruction,
-if available.  Waiting processes busy-loop until they can get the lock.
-There is no provision for deadlock detection, automatic release on error,
-or any other nicety.  There is a timeout if the lock cannot be gotten after
-a minute or so (which is approximately forever in comparison to the intended
-lock hold time, so this is certainly an error condition).
+kernel call (or even a call to a nontrivial subroutine), don't use a
+spinlock.  Spinlocks are primarily used as infrastructure for lightweight
+locks.  They are implemented using a hardware atomic-test-and-set
+instruction, if available.  Waiting processes busy-loop until they can
+get the lock.  There is no provision for deadlock detection, automatic
+release on error, or any other nicety.  There is a timeout if the lock
+cannot be gotten after a minute or so (which is approximately forever in
+comparison to the intended lock hold time, so this is certainly an error
+condition).
 
-* Lightweight locks (LWLocks).  These locks are typically used to interlock
-access to datastructures in shared memory.  LWLocks support both exclusive
-and shared lock modes (for read/write and read-only access to a shared object).
-There is no provision for deadlock detection, but the LWLock manager will
-automatically release held LWLocks during elog() recovery, so it is safe to
-raise an error while holding LWLocks.  Obtaining or releasing an LWLock is
-quite fast (a few dozen instructions) when there is no contention for the
-lock.  When a process has to wait for an LWLock, it blocks on a SysV semaphore
-so as to not consume CPU time.  Waiting processes will be granted the lock
-in arrival order.  There is no timeout.
+* Lightweight locks (LWLocks).  These locks are typically used to
+interlock access to datastructures in shared memory.  LWLocks support
+both exclusive and shared lock modes (for read/write and read-only
+access to a shared object).  There is no provision for deadlock
+detection, but the LWLock manager will automatically release held
+LWLocks during elog() recovery, so it is safe to raise an error while
+holding LWLocks.  Obtaining or releasing an LWLock is quite fast (a few
+dozen instructions) when there is no contention for the lock.  When a
+process has to wait for an LWLock, it blocks on a SysV semaphore so as
+to not consume CPU time.  Waiting processes will be granted the lock in
+arrival order.  There is no timeout.
 
-* Regular locks (a/k/a heavyweight locks).  The regular lock manager supports
-a variety of lock modes with table-driven semantics, and it has full deadlock
-detection and automatic release at transaction end.  Regular locks should be
-used for all user-driven lock requests.
+* Regular locks (a/k/a heavyweight locks).  The regular lock manager
+supports a variety of lock modes with table-driven semantics, and it has
+full deadlock detection and automatic release at transaction end.
+Regular locks should be used for all user-driven lock requests.
 
-Acquisition of either a spinlock or a lightweight lock causes query cancel
-and die() interrupts to be held off until all such locks are released.
-No such restriction exists for regular locks, however.  Also note that we
-can accept query cancel and die() interrupts while waiting for a regular
-lock, but we will not accept them while waiting for spinlocks or LW locks.
-It is therefore not a good idea to use LW locks when the wait time might
-exceed a few seconds.
+Acquisition of either a spinlock or a lightweight lock causes query
+cancel and die() interrupts to be held off until all such locks are
+released.  No such restriction exists for regular locks, however.  Also
+note that we can accept query cancel and die() interrupts while waiting
+for a regular lock, but we will not accept them while waiting for
+spinlocks or LW locks.  It is therefore not a good idea to use LW locks
+when the wait time might exceed a few seconds.
 
 The rest of this README file discusses the regular lock manager in detail.
 
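
The spinlock paragraph above describes the mechanism in prose; as a point of reference, here is a minimal, self-contained C11 sketch of a test-and-set spinlock. The type and function names, and the bounded spin count standing in for the minute-long timeout, are illustrative simplifications, not the actual s_lock implementation.

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-in for the hardware TAS word; not the real slock_t. */
    typedef atomic_flag demo_slock_t;

    static void
    DemoSpinAcquire(demo_slock_t *lock)
    {
        /* Busy-loop on an atomic test-and-set, as the README describes. */
        long spins = 0;

        while (atomic_flag_test_and_set_explicit(lock, memory_order_acquire))
        {
            /* Stand-in for the "minute or so" timeout: unbounded spinning
             * is treated as a definite error condition. */
            if (++spins > 100000000L)
            {
                fprintf(stderr, "stuck spinlock detected\n");
                abort();
            }
        }
    }

    static void
    DemoSpinRelease(demo_slock_t *lock)
    {
        atomic_flag_clear_explicit(lock, memory_order_release);
    }

    int
    main(void)
    {
        demo_slock_t lock = ATOMIC_FLAG_INIT;

        DemoSpinAcquire(&lock);
        /* ... at most a few dozen instructions belong here ... */
        DemoSpinRelease(&lock);
        printf("acquired and released once\n");
        return 0;
    }
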
@@ -46,9 +48,9 @@ The rest of this README file discusses the regular lock manager in detail.
 LOCK DATA STRUCTURES
 
 There are two fundamental lock structures: the per-lockable-object LOCK
-struct, and the per-lock-holder HOLDER struct.  A LOCK object exists
+struct, and the per-lock-holder PROCLOCK struct.  A LOCK object exists
 for each lockable object that currently has locks held or requested on it.
-A HOLDER struct exists for each transaction that is holding or requesting
+A PROCLOCK struct exists for each transaction that is holding or requesting
 lock(s) on each LOCK object.
 
 Lock methods describe the overall locking behavior.  Currently there are
@@ -102,9 +104,9 @@ waitMask -
 is 1 if and only if requested[i] > granted[i].
 
 lockHolders -
-    This is a shared memory queue of all the HOLDER structs associated with
-    the lock object.  Note that both granted and waiting HOLDERs are in this
-    list (indeed, the same HOLDER might have some already-granted locks and
+    This is a shared memory queue of all the PROCLOCK structs associated with
+    the lock object.  Note that both granted and waiting PROCLOCKs are in this
+    list (indeed, the same PROCLOCK might have some already-granted locks and
     be waiting for more!).
 
 waitProcs -
@@ -144,22 +146,22 @@ zero, the lock object is no longer needed and can be freed.
 
 ---------------------------------------------------------------------------
 
-The lock manager's HOLDER objects contain:
+The lock manager's PROCLOCK objects contain:
 
 tag -
     The key fields that are used for hashing entries in the shared memory
-    holder hash table.  This is declared as a separate struct to ensure that
+    PROCLOCK hash table.  This is declared as a separate struct to ensure that
     we always zero out the correct number of bytes.
 
     tag.lock
-        SHMEM offset of the LOCK object this holder is for.
+        SHMEM offset of the LOCK object this PROCLOCK is for.
 
     tag.proc
-        SHMEM offset of PROC of backend process that owns this holder.
+        SHMEM offset of PROC of backend process that owns this PROCLOCK.
 
     tag.xid
-        XID of transaction this holder is for, or InvalidTransactionId
-        if the holder is for session-level locking.
+        XID of transaction this PROCLOCK is for, or InvalidTransactionId
+        if the PROCLOCK is for session-level locking.
 
     Note that this structure will support multiple transactions running
     concurrently in one backend, which may be handy if we someday decide
@@ -169,18 +171,18 @@ tag -
 transaction operations like VACUUM.
 
 holding -
-    The number of successfully acquired locks of each type for this holder.
+    The number of successfully acquired locks of each type for this PROCLOCK.
     This should be <= the corresponding granted[] value of the lock object!
 
 nHolding -
     Sum of the holding[] array.
 
 lockLink -
-    List link for shared memory queue of all the HOLDER objects for the
+    List link for shared memory queue of all the PROCLOCK objects for the
     same LOCK.
 
 procLink -
-    List link for shared memory queue of all the HOLDER objects for the
+    List link for shared memory queue of all the PROCLOCK objects for the
     same backend.
 
 ---------------------------------------------------------------------------
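
The lockLink and procLink fields just described are intrusive list links: a single PROCLOCK is threaded onto both the lock's list and the owning process's list, and readers recover the containing struct from a link pointer with offsetof arithmetic, as the SHMQueueNext calls in the code hunks below do. Here is a small stand-alone sketch of that pattern, using plain pointers in place of SHMEM offsets and a hypothetical QueueNext helper rather than the real SHMQueueNext:

    #include <stddef.h>
    #include <stdio.h>

    typedef struct QueueLink
    {
        struct QueueLink *next;     /* circular list; last link points at head */
    } QueueLink;

    typedef struct ProcLockDemo
    {
        int        holding;         /* payload */
        QueueLink  lockLink;        /* link in the lock's list */
        QueueLink  procLink;        /* link in the process's list */
    } ProcLockDemo;

    /* Return the struct containing the next link, or NULL at end of queue. */
    static void *
    QueueNext(QueueLink *head, QueueLink *cur, size_t linkOffset)
    {
        if (cur->next == head)
            return NULL;
        return (char *) cur->next - linkOffset;
    }

    int
    main(void)
    {
        QueueLink    head = {&head};
        ProcLockDemo a = {1, {NULL}, {NULL}};
        ProcLockDemo b = {2, {NULL}, {NULL}};

        /* Insert a then b on the lock's list, via their lockLink fields. */
        head.next = &a.lockLink;
        a.lockLink.next = &b.lockLink;
        b.lockLink.next = &head;

        for (ProcLockDemo *p = QueueNext(&head, &head, offsetof(ProcLockDemo, lockLink));
             p != NULL;
             p = QueueNext(&head, &p->lockLink, offsetof(ProcLockDemo, lockLink)))
            printf("holding = %d\n", p->holding);
        return 0;
    }
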
@@ -193,47 +195,48 @@ fairly standard in essence, but there are many special considerations
 needed to deal with Postgres' generalized locking model.
 
 A key design consideration is that we want to make routine operations
-(lock grant and release) run quickly when there is no deadlock, and avoid
-the overhead of deadlock handling as much as possible.  We do this using
-an "optimistic waiting" approach: if a process cannot acquire the lock
-it wants immediately, it goes to sleep without any deadlock check.  But
-it also sets a delay timer, with a delay of DeadlockTimeout milliseconds
-(typically set to one second).  If the delay expires before the process is
-granted the lock it wants, it runs the deadlock detection/breaking code.
-Normally this code will determine that there is no deadlock condition,
-and then the process will go back to sleep and wait quietly until it is
-granted the lock.  But if a deadlock condition does exist, it will be
-resolved, usually by aborting the detecting process' transaction.  In this
-way, we avoid deadlock handling overhead whenever the wait time for a lock
-is less than DeadlockTimeout, while not imposing an unreasonable delay of
-detection when there is an error.
+(lock grant and release) run quickly when there is no deadlock, and
+avoid the overhead of deadlock handling as much as possible.  We do this
+using an "optimistic waiting" approach: if a process cannot acquire the
+lock it wants immediately, it goes to sleep without any deadlock check.
+But it also sets a delay timer, with a delay of DeadlockTimeout
+milliseconds (typically set to one second).  If the delay expires before
+the process is granted the lock it wants, it runs the deadlock
+detection/breaking code.  Normally this code will determine that there is
+no deadlock condition, and then the process will go back to sleep and
+wait quietly until it is granted the lock.  But if a deadlock condition
+does exist, it will be resolved, usually by aborting the detecting
+process' transaction.  In this way, we avoid deadlock handling overhead
+whenever the wait time for a lock is less than DeadlockTimeout, while
+not imposing an unreasonable delay of detection when there is an error.
 
 Lock acquisition (routines LockAcquire and ProcSleep) follows these rules:
 
-1. A lock request is granted immediately if it does not conflict with any
-existing or waiting lock request, or if the process already holds an
+1. A lock request is granted immediately if it does not conflict with
+any existing or waiting lock request, or if the process already holds an
 instance of the same lock type (eg, there's no penalty to acquire a read
-lock twice).  Note that a process never conflicts with itself, eg one can
-obtain read lock when one already holds exclusive lock.
+lock twice).  Note that a process never conflicts with itself, eg one
+can obtain read lock when one already holds exclusive lock.
 
-2. Otherwise the process joins the lock's wait queue.  Normally it will be
-added to the end of the queue, but there is an exception: if the process
-already holds locks on this same lockable object that conflict with the
-request of any pending waiter, then the process will be inserted in the
-wait queue just ahead of the first such waiter.  (If we did not make this
-check, the deadlock detection code would adjust the queue order to resolve
-the conflict, but it's relatively cheap to make the check in ProcSleep and
-avoid a deadlock timeout delay in this case.)  Note special case when
-inserting before the end of the queue: if the process's request does not
-conflict with any existing lock nor any waiting request before its insertion
-point, then go ahead and grant the lock without waiting.
+2. Otherwise the process joins the lock's wait queue.  Normally it will
+be added to the end of the queue, but there is an exception: if the
+process already holds locks on this same lockable object that conflict
+with the request of any pending waiter, then the process will be
+inserted in the wait queue just ahead of the first such waiter.  (If we
+did not make this check, the deadlock detection code would adjust the
+queue order to resolve the conflict, but it's relatively cheap to make
+the check in ProcSleep and avoid a deadlock timeout delay in this case.)
+Note special case when inserting before the end of the queue: if the
+process's request does not conflict with any existing lock nor any
+waiting request before its insertion point, then go ahead and grant the
+lock without waiting.
 
 When a lock is released, the lock release routine (ProcLockWakeup) scans
 the lock object's wait queue.  Each waiter is awoken if (a) its request
 does not conflict with already-granted locks, and (b) its request does
 not conflict with the requests of prior un-wakable waiters.  Rule (b)
-ensures that conflicting requests are granted in order of arrival.
-There are cases where a later waiter must be allowed to go in front of
+ensures that conflicting requests are granted in order of arrival.  There
+are cases where a later waiter must be allowed to go in front of
 conflicting earlier waiters to avoid deadlock, but it is not
 ProcLockWakeup's responsibility to recognize these cases; instead, the
 deadlock detection code will re-order the wait queue when necessary.
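
The grant rules above reduce to bitmask tests against a conflict table. The following stand-alone sketch models the ProcLockWakeup scan with an invented two-mode conflict table: rule (a) is the test against already-granted modes, rule (b) the test against modes requested by prior still-blocked waiters. It illustrates the policy only; it does not reuse the real lock.c data structures.

    #include <stdio.h>

    /* Two toy lock modes: bit 0 = SHARED, bit 1 = EXCLUSIVE. */
    enum { SHARED = 0, EXCLUSIVE = 1, NMODES = 2 };
    #define LOCKBIT(mode) (1 << (mode))

    /* conflictTab[m] = bitmask of modes that conflict with mode m. */
    static const int conflictTab[NMODES] = {
        [SHARED]    = LOCKBIT(EXCLUSIVE),
        [EXCLUSIVE] = LOCKBIT(SHARED) | LOCKBIT(EXCLUSIVE),
    };

    int
    main(void)
    {
        int grantMask = LOCKBIT(SHARED);             /* one shared lock held */
        int waiters[] = {EXCLUSIVE, SHARED, SHARED}; /* queue, front first */
        int aheadMask = 0;  /* modes requested by prior un-wakable waiters */

        for (int i = 0; i < 3; i++)
        {
            int mode = waiters[i];

            /* (a) no conflict with already-granted locks, and
             * (b) no conflict with prior un-wakable waiters' requests.
             * Rule (b) is why the two shared waiters below stay queued
             * behind the exclusive waiter: arrival order is preserved. */
            if ((conflictTab[mode] & grantMask) == 0 &&
                (conflictTab[mode] & aheadMask) == 0)
            {
                grantMask |= LOCKBIT(mode);          /* wake and grant */
                printf("waiter %d (mode %d): granted\n", i, mode);
            }
            else
            {
                aheadMask |= LOCKBIT(mode);          /* stays blocked */
                printf("waiter %d (mode %d): still waiting\n", i, mode);
            }
        }
        return 0;
    }
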
@@ -242,35 +245,36 @@ To perform deadlock checking, we use the standard method of viewing the
 various processes as nodes in a directed graph (the waits-for graph or
 WFG).  There is a graph edge leading from process A to process B if A
 waits for B, ie, A is waiting for some lock and B holds a conflicting
-lock.  There is a deadlock condition if and only if the WFG contains
-a cycle.  We detect cycles by searching outward along waits-for edges
-to see if we return to our starting point.  There are three possible
+lock.  There is a deadlock condition if and only if the WFG contains a
+cycle.  We detect cycles by searching outward along waits-for edges to
+see if we return to our starting point.  There are three possible
 outcomes:
 
 1. All outgoing paths terminate at a running process (which has no
 outgoing edge).
 
-2. A deadlock is detected by looping back to the start point.  We resolve
-such a deadlock by canceling the start point's lock request and reporting
-an error in that transaction, which normally leads to transaction abort
-and release of that transaction's held locks.  Note that it's sufficient
-to cancel one request to remove the cycle; we don't need to kill all the
-transactions involved.
+2. A deadlock is detected by looping back to the start point.  We
+resolve such a deadlock by canceling the start point's lock request and
+reporting an error in that transaction, which normally leads to
+transaction abort and release of that transaction's held locks.  Note
+that it's sufficient to cancel one request to remove the cycle; we don't
+need to kill all the transactions involved.
 
 3. Some path(s) loop back to a node other than the start point.  This
-indicates a deadlock, but one that does not involve our starting process.
-We ignore this condition on the grounds that resolving such a deadlock
-is the responsibility of the processes involved --- killing our start-
-point process would not resolve the deadlock.  So, cases 1 and 3 both
-report "no deadlock".
+indicates a deadlock, but one that does not involve our starting
+process.  We ignore this condition on the grounds that resolving such a
+deadlock is the responsibility of the processes involved --- killing our
+start-point process would not resolve the deadlock.  So, cases 1 and 3
+both report "no deadlock".
 
 Postgres' situation is a little more complex than the standard discussion
 of deadlock detection, for two reasons:
 
 1. A process can be waiting for more than one other process, since there
-might be multiple holders of (non-conflicting) lock types that all conflict
-with the waiter's request.  This creates no real difficulty however; we
-simply need to be prepared to trace more than one outgoing edge.
+might be multiple PROCLOCKs of (non-conflicting) lock types that all
+conflict with the waiter's request.  This creates no real difficulty
+however; we simply need to be prepared to trace more than one outgoing
+edge.
 
 2. If a process A is behind a process B in some lock's wait queue, and
 their requested locks conflict, then we must say that A waits for B, since
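
The three outcomes can be made concrete with a toy waits-for graph. The sketch below uses an invented adjacency-matrix representation rather than the real FindLockCycle machinery, and only reports whether the start point itself is on a cycle (outcome 2); termination elsewhere (outcome 1) or a cycle not through the start point (outcome 3) both come out as "no deadlock".

    #include <stdbool.h>
    #include <stdio.h>

    #define NPROCS 4

    /* edge[a][b] is true if process a waits for process b. */
    static bool edge[NPROCS][NPROCS];
    static bool visited[NPROCS];

    /* Search outward along waits-for edges; true only if some path
     * returns to the start point. */
    static bool
    WaitsForCycle(int start, int cur)
    {
        for (int next = 0; next < NPROCS; next++)
        {
            if (!edge[cur][next])
                continue;
            if (next == start)
                return true;        /* looped back to the start point */
            if (!visited[next])
            {
                visited[next] = true;
                if (WaitsForCycle(start, next))
                    return true;
            }
        }
        return false;
    }

    int
    main(void)
    {
        /* 0 waits for 1, 1 waits for 2, 2 waits for 0: a cycle. */
        edge[0][1] = edge[1][2] = edge[2][0] = true;
        /* 3 waits for 1, but is not itself on that cycle (outcome 3). */
        edge[3][1] = true;

        printf("deadlock involving proc 0: %s\n",
               WaitsForCycle(0, 0) ? "yes" : "no");

        for (int i = 0; i < NPROCS; i++)
            visited[i] = false;
        printf("deadlock involving proc 3: %s\n",
               WaitsForCycle(3, 3) ? "yes" : "no");
        return 0;
    }
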
@@ -409,16 +413,18 @@ LockWaitCancel (abort a waiter due to outside factors) must run
 ProcLockWakeup, in case the canceled waiter was soft-blocking other
 waiters.
 
-4. We can minimize excess rearrangement-trial work by being careful to scan
-the wait queue from the front when looking for soft edges.  For example,
-if we have queue order A,B,C and C has deadlock conflicts with both A and B,
-we want to generate the "C before A" constraint first, rather than wasting
-time with "C before B", which won't move C far enough up.  So we look for
-soft edges outgoing from C starting at the front of the wait queue.
+4. We can minimize excess rearrangement-trial work by being careful to
+scan the wait queue from the front when looking for soft edges.  For
+example, if we have queue order A,B,C and C has deadlock conflicts with
+both A and B, we want to generate the "C before A" constraint first,
+rather than wasting time with "C before B", which won't move C far
+enough up.  So we look for soft edges outgoing from C starting at the
+front of the wait queue.
 
 5. The working data structures needed by the deadlock detection code can
 be limited to numbers of entries computed from MaxBackends.  Therefore,
-we can allocate the worst-case space needed during backend startup.
-This seems a safer approach than trying to allocate workspace on the fly;
-we don't want to risk having the deadlock detector run out of memory,
-else we really have no guarantees at all that deadlock will be detected.
+we can allocate the worst-case space needed during backend startup.  This
+seems a safer approach than trying to allocate workspace on the fly; we
+don't want to risk having the deadlock detector run out of memory, else
+we really have no guarantees at all that deadlock will be detected.
 
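
Point 5 is the usual rule that an error-recovery path must not allocate. A hedged sketch of the startup-time worst-case allocation it describes, with invented names; the real counterpart is the preallocation done in deadlock.c:

    #include <stdio.h>
    #include <stdlib.h>

    /* Invented stand-ins: worst-case workspace sized from MaxBackends. */
    static int  MaxBackends = 32;
    static int *visitedProcs;       /* one slot per backend */
    static int *workQueue;          /* traversal queue, one slot per backend */

    /* Allocate the worst case up front, so the deadlock checker itself
     * can never fail for lack of memory at detection time. */
    static void
    InitDeadlockWorkspace(void)
    {
        visitedProcs = malloc(MaxBackends * sizeof(int));
        workQueue = malloc(MaxBackends * sizeof(int));
        if (visitedProcs == NULL || workQueue == NULL)
        {
            fprintf(stderr, "out of memory during startup\n");
            exit(1);
        }
    }

    int
    main(void)
    {
        InitDeadlockWorkspace();
        printf("workspace for %d backends preallocated\n", MaxBackends);
        free(visitedProcs);
        free(workQueue);
        return 0;
    }
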
diff --git a/src/backend/storage/lmgr/deadlock.c b/src/backend/storage/lmgr/deadlock.c
@@ -12,7 +12,7 @@
  *
  *
  * IDENTIFICATION
- *    $Header: /cvsroot/pgsql/src/backend/storage/lmgr/deadlock.c,v 1.11 2002/07/18 23:06:19 momjian Exp $
+ *    $Header: /cvsroot/pgsql/src/backend/storage/lmgr/deadlock.c,v 1.12 2002/07/19 00:17:40 momjian Exp $
  *
  * Interface:
  *
@@ -377,7 +377,7 @@ FindLockCycleRecurse(PGPROC *checkProc,
 {
     PGPROC     *proc;
     LOCK       *lock;
-    HOLDER     *holder;
+    PROCLOCK   *holder;
     SHM_QUEUE  *lockHolders;
     LOCKMETHODTABLE *lockMethodTable;
     PROC_QUEUE *waitQueue;
@@ -427,8 +427,8 @@ FindLockCycleRecurse(PGPROC *checkProc,
      */
     lockHolders = &(lock->lockHolders);
 
-    holder = (HOLDER *) SHMQueueNext(lockHolders, lockHolders,
-                                     offsetof(HOLDER, lockLink));
+    holder = (PROCLOCK *) SHMQueueNext(lockHolders, lockHolders,
+                                       offsetof(PROCLOCK, lockLink));
 
     while (holder)
     {
@@ -451,8 +451,8 @@ FindLockCycleRecurse(PGPROC *checkProc,
             }
         }
 
-        holder = (HOLDER *) SHMQueueNext(lockHolders, &holder->lockLink,
-                                         offsetof(HOLDER, lockLink));
+        holder = (PROCLOCK *) SHMQueueNext(lockHolders, &holder->lockLink,
+                                           offsetof(PROCLOCK, lockLink));
     }
 
     /*
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *    $Header: /cvsroot/pgsql/src/backend/storage/lmgr/lock.c,v 1.109 2002/07/18 23:06:19 momjian Exp $
+ *    $Header: /cvsroot/pgsql/src/backend/storage/lmgr/lock.c,v 1.110 2002/07/19 00:17:40 momjian Exp $
  *
  * NOTES
  *    Outside modules can create a lock table and acquire/release
@@ -48,7 +48,7 @@ int         max_locks_per_xact; /* set by guc.c */
 
 
 static int WaitOnLock(LOCKMETHOD lockmethod, LOCKMODE lockmode,
-           LOCK *lock, HOLDER *holder);
+           LOCK *lock, PROCLOCK *holder);
 static void LockCountMyLocks(SHMEM_OFFSET lockOffset, PGPROC *proc,
                  int *myHolding);
 
@@ -125,18 +125,18 @@ LOCK_PRINT(const char *where, const LOCK *lock, LOCKMODE type)
 
 
 inline static void
-HOLDER_PRINT(const char *where, const HOLDER *holderP)
+PROCLOCK_PRINT(const char *where, const PROCLOCK *holderP)
 {
     if (
-        (((HOLDER_LOCKMETHOD(*holderP) == DEFAULT_LOCKMETHOD && Trace_locks)
-          || (HOLDER_LOCKMETHOD(*holderP) == USER_LOCKMETHOD && Trace_userlocks))
+        (((PROCLOCK_LOCKMETHOD(*holderP) == DEFAULT_LOCKMETHOD && Trace_locks)
+          || (PROCLOCK_LOCKMETHOD(*holderP) == USER_LOCKMETHOD && Trace_userlocks))
          && (((LOCK *) MAKE_PTR(holderP->tag.lock))->tag.relId >= (Oid) Trace_lock_oidmin))
         || (Trace_lock_table && (((LOCK *) MAKE_PTR(holderP->tag.lock))->tag.relId == Trace_lock_table))
         )
         elog(LOG,
              "%s: holder(%lx) lock(%lx) tbl(%d) proc(%lx) xid(%u) hold(%d,%d,%d,%d,%d,%d,%d)=%d",
              where, MAKE_OFFSET(holderP), holderP->tag.lock,
-             HOLDER_LOCKMETHOD(*(holderP)),
+             PROCLOCK_LOCKMETHOD(*(holderP)),
              holderP->tag.proc, holderP->tag.xid,
              holderP->holding[1], holderP->holding[2], holderP->holding[3],
             holderP->holding[4], holderP->holding[5], holderP->holding[6],
@@ -146,7 +146,7 @@ HOLDER_PRINT(const char *where, const HOLDER *holderP)
 #else                           /* not LOCK_DEBUG */
 
 #define LOCK_PRINT(where, lock, type)
-#define HOLDER_PRINT(where, holderP)
+#define PROCLOCK_PRINT(where, holderP)
 #endif   /* not LOCK_DEBUG */
 
 
@@ -316,11 +316,11 @@ LockMethodTableInit(char *tabName,
     Assert(lockMethodTable->lockHash->hash == tag_hash);
 
     /*
-     * allocate a hash table for HOLDER structs.  This is used to store
+     * allocate a hash table for PROCLOCK structs.  This is used to store
      * per-lock-holder information.
      */
-    info.keysize = sizeof(HOLDERTAG);
-    info.entrysize = sizeof(HOLDER);
+    info.keysize = sizeof(PROCLOCKTAG);
+    info.entrysize = sizeof(PROCLOCK);
     info.hash = tag_hash;
     hash_flags = (HASH_ELEM | HASH_FUNCTION);
 
@@ -440,8 +440,8 @@ bool
 LockAcquire(LOCKMETHOD lockmethod, LOCKTAG *locktag,
             TransactionId xid, LOCKMODE lockmode, bool dontWait)
 {
-    HOLDER     *holder;
-    HOLDERTAG   holdertag;
+    PROCLOCK   *holder;
+    PROCLOCKTAG holdertag;
     HTAB       *holderTable;
     bool        found;
     LOCK       *lock;
@@ -513,7 +513,7 @@ LockAcquire(LOCKMETHOD lockmethod, LOCKTAG *locktag,
     /*
      * Create the hash key for the holder table.
      */
-    MemSet(&holdertag, 0, sizeof(HOLDERTAG));   /* must clear padding,
+    MemSet(&holdertag, 0, sizeof(PROCLOCKTAG)); /* must clear padding,
                                                  * needed */
     holdertag.lock = MAKE_OFFSET(lock);
     holdertag.proc = MAKE_OFFSET(MyProc);
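
The "must clear padding" comment in the hunk above is load-bearing: the tag is hashed and compared as raw bytes, so compiler-inserted padding must be zeroed or two logically equal keys can compare unequal. A stand-alone demonstration with a hypothetical tag layout (the 4 bytes of padding assumed here appear on most 64-bit ABIs):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical tag: most ABIs insert 4 bytes of padding after 'proc'
     * so that 'lock' is 8-byte aligned. */
    typedef struct DemoTag
    {
        uint32_t proc;
        uint64_t lock;
    } DemoTag;

    int
    main(void)
    {
        DemoTag a, b;

        /* Fill with differing garbage, then set only the named fields. */
        memset(&a, 0xAA, sizeof(a));
        memset(&b, 0x55, sizeof(b));
        a.proc = b.proc = 42;
        a.lock = b.lock = 1234;

        /* Byte-wise comparison (what a hash table keyed on raw bytes does)
         * typically sees the differing padding... */
        printf("without clearing: %s\n",
               memcmp(&a, &b, sizeof(DemoTag)) == 0 ? "equal" : "NOT equal");

        /* ...so, as in LockAcquire, zero the whole struct before assigning. */
        memset(&a, 0, sizeof(DemoTag));
        memset(&b, 0, sizeof(DemoTag));
        a.proc = b.proc = 42;
        a.lock = b.lock = 1234;
        printf("after clearing:   %s\n",
               memcmp(&a, &b, sizeof(DemoTag)) == 0 ? "equal" : "NOT equal");
        return 0;
    }
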
@@ -523,7 +523,7 @@ LockAcquire(LOCKMETHOD lockmethod, LOCKTAG *locktag,
      * Find or create a holder entry with this tag
      */
     holderTable = lockMethodTable->holderHash;
-    holder = (HOLDER *) hash_search(holderTable,
+    holder = (PROCLOCK *) hash_search(holderTable,
                                     (void *) &holdertag,
                                     HASH_ENTER, &found);
     if (!holder)
@@ -543,11 +543,11 @@ LockAcquire(LOCKMETHOD lockmethod, LOCKTAG *locktag,
         /* Add holder to appropriate lists */
         SHMQueueInsertBefore(&lock->lockHolders, &holder->lockLink);
         SHMQueueInsertBefore(&MyProc->procHolders, &holder->procLink);
-        HOLDER_PRINT("LockAcquire: new", holder);
+        PROCLOCK_PRINT("LockAcquire: new", holder);
     }
     else
     {
-        HOLDER_PRINT("LockAcquire: found", holder);
+        PROCLOCK_PRINT("LockAcquire: found", holder);
         Assert((holder->nHolding >= 0) && (holder->holding[lockmode] >= 0));
         Assert(holder->nHolding <= lock->nGranted);
 
@@ -600,7 +600,7 @@ LockAcquire(LOCKMETHOD lockmethod, LOCKTAG *locktag,
     if (holder->holding[lockmode] > 0)
     {
         GrantLock(lock, holder, lockmode);
-        HOLDER_PRINT("LockAcquire: owning", holder);
+        PROCLOCK_PRINT("LockAcquire: owning", holder);
         LWLockRelease(masterLock);
         return TRUE;
     }
@@ -613,7 +613,7 @@ LockAcquire(LOCKMETHOD lockmethod, LOCKTAG *locktag,
     if (myHolding[lockmode] > 0)
     {
         GrantLock(lock, holder, lockmode);
-        HOLDER_PRINT("LockAcquire: my other XID owning", holder);
+        PROCLOCK_PRINT("LockAcquire: my other XID owning", holder);
         LWLockRelease(masterLock);
         return TRUE;
     }
@@ -650,14 +650,14 @@ LockAcquire(LOCKMETHOD lockmethod, LOCKTAG *locktag,
         {
             SHMQueueDelete(&holder->lockLink);
             SHMQueueDelete(&holder->procLink);
-            holder = (HOLDER *) hash_search(holderTable,
+            holder = (PROCLOCK *) hash_search(holderTable,
                                             (void *) holder,
                                             HASH_REMOVE, NULL);
             if (!holder)
                 elog(WARNING, "LockAcquire: remove holder, table corrupted");
         }
         else
-            HOLDER_PRINT("LockAcquire: NHOLDING", holder);
+            PROCLOCK_PRINT("LockAcquire: NHOLDING", holder);
         lock->nRequested--;
         lock->requested[lockmode]--;
         LOCK_PRINT("LockAcquire: conditional lock failed", lock, lockmode);
@@ -702,13 +702,13 @@ LockAcquire(LOCKMETHOD lockmethod, LOCKTAG *locktag,
          */
         if (!((holder->nHolding > 0) && (holder->holding[lockmode] > 0)))
         {
-            HOLDER_PRINT("LockAcquire: INCONSISTENT", holder);
+            PROCLOCK_PRINT("LockAcquire: INCONSISTENT", holder);
             LOCK_PRINT("LockAcquire: INCONSISTENT", lock, lockmode);
             /* Should we retry ? */
             LWLockRelease(masterLock);
             return FALSE;
         }
-        HOLDER_PRINT("LockAcquire: granted", holder);
+        PROCLOCK_PRINT("LockAcquire: granted", holder);
         LOCK_PRINT("LockAcquire: granted", lock, lockmode);
     }
 
@@ -737,7 +737,7 @@ int
 LockCheckConflicts(LOCKMETHODTABLE *lockMethodTable,
                    LOCKMODE lockmode,
                    LOCK *lock,
-                   HOLDER *holder,
+                   PROCLOCK *holder,
                    PGPROC *proc,
                    int *myHolding)      /* myHolding[] array or NULL */
 {
@@ -758,7 +758,7 @@ LockCheckConflicts(LOCKMETHODTABLE *lockMethodTable,
      */
     if (!(lockMethodTable->conflictTab[lockmode] & lock->grantMask))
     {
-        HOLDER_PRINT("LockCheckConflicts: no conflict", holder);
+        PROCLOCK_PRINT("LockCheckConflicts: no conflict", holder);
         return STATUS_OK;
     }
 
@@ -792,11 +792,11 @@ LockCheckConflicts(LOCKMETHODTABLE *lockMethodTable,
     if (!(lockMethodTable->conflictTab[lockmode] & bitmask))
     {
         /* no conflict. OK to get the lock */
-        HOLDER_PRINT("LockCheckConflicts: resolved", holder);
+        PROCLOCK_PRINT("LockCheckConflicts: resolved", holder);
         return STATUS_OK;
     }
 
-    HOLDER_PRINT("LockCheckConflicts: conflicting", holder);
+    PROCLOCK_PRINT("LockCheckConflicts: conflicting", holder);
     return STATUS_FOUND;
 }
 
@@ -814,13 +814,13 @@ static void
 LockCountMyLocks(SHMEM_OFFSET lockOffset, PGPROC *proc, int *myHolding)
 {
     SHM_QUEUE  *procHolders = &(proc->procHolders);
-    HOLDER     *holder;
+    PROCLOCK   *holder;
     int         i;
 
     MemSet(myHolding, 0, MAX_LOCKMODES * sizeof(int));
 
-    holder = (HOLDER *) SHMQueueNext(procHolders, procHolders,
-                                     offsetof(HOLDER, procLink));
+    holder = (PROCLOCK *) SHMQueueNext(procHolders, procHolders,
+                                       offsetof(PROCLOCK, procLink));
 
     while (holder)
     {
@@ -830,8 +830,8 @@ LockCountMyLocks(SHMEM_OFFSET lockOffset, PGPROC *proc, int *myHolding)
                 myHolding[i] += holder->holding[i];
         }
 
-        holder = (HOLDER *) SHMQueueNext(procHolders, &holder->procLink,
-                                         offsetof(HOLDER, procLink));
+        holder = (PROCLOCK *) SHMQueueNext(procHolders, &holder->procLink,
+                                           offsetof(PROCLOCK, procLink));
     }
 }
 
@@ -843,7 +843,7 @@ LockCountMyLocks(SHMEM_OFFSET lockOffset, PGPROC *proc, int *myHolding)
  * and have its waitLock/waitHolder fields cleared.  That's not done here.
  */
 void
-GrantLock(LOCK *lock, HOLDER *holder, LOCKMODE lockmode)
+GrantLock(LOCK *lock, PROCLOCK *holder, LOCKMODE lockmode)
 {
     lock->nGranted++;
     lock->granted[lockmode]++;
@@ -868,7 +868,7 @@ GrantLock(LOCK *lock, HOLDER *holder, LOCKMODE lockmode)
  */
 static int
 WaitOnLock(LOCKMETHOD lockmethod, LOCKMODE lockmode,
-           LOCK *lock, HOLDER *holder)
+           LOCK *lock, PROCLOCK *holder)
 {
     LOCKMETHODTABLE *lockMethodTable = LockMethodTable[lockmethod];
     char       *new_status,
@@ -984,8 +984,8 @@ LockRelease(LOCKMETHOD lockmethod, LOCKTAG *locktag,
     LOCK       *lock;
     LWLockId    masterLock;
     LOCKMETHODTABLE *lockMethodTable;
-    HOLDER     *holder;
-    HOLDERTAG   holdertag;
+    PROCLOCK   *holder;
+    PROCLOCKTAG holdertag;
     HTAB       *holderTable;
     bool        wakeupNeeded = false;
 
@@ -1031,14 +1031,14 @@ LockRelease(LOCKMETHOD lockmethod, LOCKTAG *locktag,
     /*
      * Find the holder entry for this holder.
      */
-    MemSet(&holdertag, 0, sizeof(HOLDERTAG));   /* must clear padding,
+    MemSet(&holdertag, 0, sizeof(PROCLOCKTAG)); /* must clear padding,
                                                  * needed */
     holdertag.lock = MAKE_OFFSET(lock);
     holdertag.proc = MAKE_OFFSET(MyProc);
     TransactionIdStore(xid, &holdertag.xid);
 
     holderTable = lockMethodTable->holderHash;
-    holder = (HOLDER *) hash_search(holderTable,
+    holder = (PROCLOCK *) hash_search(holderTable,
                                     (void *) &holdertag,
                                     HASH_FIND_SAVE, NULL);
     if (!holder)
@@ -1052,7 +1052,7 @@ LockRelease(LOCKMETHOD lockmethod, LOCKTAG *locktag,
         elog(WARNING, "LockRelease: holder table corrupted");
         return FALSE;
     }
-    HOLDER_PRINT("LockRelease: found", holder);
+    PROCLOCK_PRINT("LockRelease: found", holder);
 
     /*
      * Check that we are actually holding a lock of the type we want to
@@ -1060,7 +1060,7 @@ LockRelease(LOCKMETHOD lockmethod, LOCKTAG *locktag,
      */
     if (!(holder->holding[lockmode] > 0))
     {
-        HOLDER_PRINT("LockRelease: WRONGTYPE", holder);
+        PROCLOCK_PRINT("LockRelease: WRONGTYPE", holder);
         Assert(holder->holding[lockmode] >= 0);
         LWLockRelease(masterLock);
         elog(WARNING, "LockRelease: you don't own a lock of type %s",
@@ -1128,7 +1128,7 @@ LockRelease(LOCKMETHOD lockmethod, LOCKTAG *locktag,
      */
     holder->holding[lockmode]--;
     holder->nHolding--;
-    HOLDER_PRINT("LockRelease: updated", holder);
+    PROCLOCK_PRINT("LockRelease: updated", holder);
     Assert((holder->nHolding >= 0) && (holder->holding[lockmode] >= 0));
 
     /*
@@ -1137,10 +1137,10 @@ LockRelease(LOCKMETHOD lockmethod, LOCKTAG *locktag,
      */
     if (holder->nHolding == 0)
     {
-        HOLDER_PRINT("LockRelease: deleting", holder);
+        PROCLOCK_PRINT("LockRelease: deleting", holder);
         SHMQueueDelete(&holder->lockLink);
         SHMQueueDelete(&holder->procLink);
-        holder = (HOLDER *) hash_search(holderTable,
+        holder = (PROCLOCK *) hash_search(holderTable,
                                         (void *) &holder,
                                         HASH_REMOVE_SAVED, NULL);
         if (!holder)
@@ -1177,8 +1177,8 @@ LockReleaseAll(LOCKMETHOD lockmethod, PGPROC *proc,
                bool allxids, TransactionId xid)
 {
     SHM_QUEUE  *procHolders = &(proc->procHolders);
-    HOLDER     *holder;
-    HOLDER     *nextHolder;
+    PROCLOCK   *holder;
+    PROCLOCK   *nextHolder;
     LWLockId    masterLock;
     LOCKMETHODTABLE *lockMethodTable;
     int         i,
@@ -1204,16 +1204,16 @@ LockReleaseAll(LOCKMETHOD lockmethod, PGPROC *proc,
 
     LWLockAcquire(masterLock, LW_EXCLUSIVE);
 
-    holder = (HOLDER *) SHMQueueNext(procHolders, procHolders,
-                                     offsetof(HOLDER, procLink));
+    holder = (PROCLOCK *) SHMQueueNext(procHolders, procHolders,
+                                       offsetof(PROCLOCK, procLink));
 
     while (holder)
     {
         bool        wakeupNeeded = false;
 
         /* Get link first, since we may unlink/delete this holder */
-        nextHolder = (HOLDER *) SHMQueueNext(procHolders, &holder->procLink,
-                                             offsetof(HOLDER, procLink));
+        nextHolder = (PROCLOCK *) SHMQueueNext(procHolders, &holder->procLink,
+                                               offsetof(PROCLOCK, procLink));
 
         Assert(holder->tag.proc == MAKE_OFFSET(proc));
 
@@ -1227,7 +1227,7 @@ LockReleaseAll(LOCKMETHOD lockmethod, PGPROC *proc,
         if (!allxids && !TransactionIdEquals(xid, holder->tag.xid))
             goto next_item;
 
-        HOLDER_PRINT("LockReleaseAll", holder);
+        PROCLOCK_PRINT("LockReleaseAll", holder);
         LOCK_PRINT("LockReleaseAll", lock, 0);
         Assert(lock->nRequested >= 0);
         Assert(lock->nGranted >= 0);
@@ -1281,7 +1281,7 @@ LockReleaseAll(LOCKMETHOD lockmethod, PGPROC *proc,
         }
         LOCK_PRINT("LockReleaseAll: updated", lock, 0);
 
-        HOLDER_PRINT("LockReleaseAll: deleting", holder);
+        PROCLOCK_PRINT("LockReleaseAll: deleting", holder);
 
         /*
          * Remove the holder entry from the linked lists
@@ -1292,7 +1292,7 @@ LockReleaseAll(LOCKMETHOD lockmethod, PGPROC *proc,
         /*
          * remove the holder entry from the hashtable
          */
-        holder = (HOLDER *) hash_search(lockMethodTable->holderHash,
+        holder = (PROCLOCK *) hash_search(lockMethodTable->holderHash,
                                         (void *) holder,
                                         HASH_REMOVE,
                                         NULL);
@@ -1353,7 +1353,7 @@ LockShmemSize(int maxBackends)
     size += hash_estimate_size(max_table_size, sizeof(LOCK));
 
     /* holderHash table */
-    size += hash_estimate_size(max_table_size, sizeof(HOLDER));
+    size += hash_estimate_size(max_table_size, sizeof(PROCLOCK));
 
     /*
      * Since the lockHash entry count above is only an estimate, add 10%
@@ -1376,7 +1376,7 @@ DumpLocks(void)
 {
     PGPROC     *proc;
     SHM_QUEUE  *procHolders;
-    HOLDER     *holder;
+    PROCLOCK   *holder;
     LOCK       *lock;
     int         lockmethod = DEFAULT_LOCKMETHOD;
     LOCKMETHODTABLE *lockMethodTable;
@@ -1395,8 +1395,8 @@ DumpLocks(void)
     if (proc->waitLock)
         LOCK_PRINT("DumpLocks: waiting on", proc->waitLock, 0);
 
-    holder = (HOLDER *) SHMQueueNext(procHolders, procHolders,
-                                     offsetof(HOLDER, procLink));
+    holder = (PROCLOCK *) SHMQueueNext(procHolders, procHolders,
+                                       offsetof(PROCLOCK, procLink));
 
     while (holder)
     {
@@ -1404,11 +1404,11 @@ DumpLocks(void)
 
         lock = (LOCK *) MAKE_PTR(holder->tag.lock);
 
-        HOLDER_PRINT("DumpLocks", holder);
+        PROCLOCK_PRINT("DumpLocks", holder);
         LOCK_PRINT("DumpLocks", lock, 0);
 
-        holder = (HOLDER *) SHMQueueNext(procHolders, &holder->procLink,
-                                         offsetof(HOLDER, procLink));
+        holder = (PROCLOCK *) SHMQueueNext(procHolders, &holder->procLink,
+                                           offsetof(PROCLOCK, procLink));
     }
 }
 
@@ -1419,7 +1419,7 @@ void
 DumpAllLocks(void)
 {
     PGPROC     *proc;
-    HOLDER     *holder;
+    PROCLOCK   *holder;
     LOCK       *lock;
     int         lockmethod = DEFAULT_LOCKMETHOD;
     LOCKMETHODTABLE *lockMethodTable;
@@ -1441,9 +1441,9 @@ DumpAllLocks(void)
         LOCK_PRINT("DumpAllLocks: waiting on", proc->waitLock, 0);
 
     hash_seq_init(&status, holderTable);
-    while ((holder = (HOLDER *) hash_seq_search(&status)) != NULL)
+    while ((holder = (PROCLOCK *) hash_seq_search(&status)) != NULL)
     {
-        HOLDER_PRINT("DumpAllLocks", holder);
+        PROCLOCK_PRINT("DumpAllLocks", holder);
 
         if (holder->tag.lock)
         {
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *    $Header: /cvsroot/pgsql/src/backend/storage/lmgr/proc.c,v 1.123 2002/07/18 23:06:20 momjian Exp $
+ *    $Header: /cvsroot/pgsql/src/backend/storage/lmgr/proc.c,v 1.124 2002/07/19 00:17:40 momjian Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -501,7 +501,7 @@ int
 ProcSleep(LOCKMETHODTABLE *lockMethodTable,
           LOCKMODE lockmode,
           LOCK *lock,
-          HOLDER *holder)
+          PROCLOCK *holder)
 {
     LWLockId    masterLock = lockMethodTable->masterLock;
     PROC_QUEUE *waitQueue = &(lock->waitProcs);
diff --git a/src/include/storage/lock.h b/src/include/storage/lock.h
@@ -7,7 +7,7 @@
  * Portions Copyright (c) 1996-2002, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
- * $Id: lock.h,v 1.62 2002/07/18 23:06:20 momjian Exp $
+ * $Id: lock.h,v 1.63 2002/07/19 00:17:40 momjian Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -130,7 +130,7 @@ typedef struct LOCKTAG
  * tag -- uniquely identifies the object being locked
  * grantMask -- bitmask for all lock types currently granted on this object.
  * waitMask -- bitmask for all lock types currently awaited on this object.
- * lockHolders -- list of HOLDER objects for this lock.
+ * lockHolders -- list of PROCLOCK objects for this lock.
  * waitProcs -- queue of processes waiting for this lock.
  * requested -- count of each lock type currently requested on the lock
  *      (includes requests already granted!!).
@@ -146,7 +146,7 @@ typedef struct LOCK
     /* data */
     int         grantMask;      /* bitmask for lock types already granted */
     int         waitMask;       /* bitmask for lock types awaited */
-    SHM_QUEUE   lockHolders;    /* list of HOLDER objects assoc. with lock */
+    SHM_QUEUE   lockHolders;    /* list of PROCLOCK objects assoc. with lock */
     PROC_QUEUE  waitProcs;      /* list of PGPROC objects waiting on lock */
     int         requested[MAX_LOCKMODES];       /* counts of requested
                                                  * locks */
@@ -163,8 +163,8 @@ typedef struct LOCK
  * on the same lockable object.  We need to store some per-holder information
  * for each such holder (or would-be holder).
  *
- * HOLDERTAG is the key information needed to look up a HOLDER item in the
- * holder hashtable.  A HOLDERTAG value uniquely identifies a lock holder.
+ * PROCLOCKTAG is the key information needed to look up a PROCLOCK item in the
+ * holder hashtable.  A PROCLOCKTAG value uniquely identifies a lock holder.
  *
  * There are two possible kinds of holder tags: a transaction (identified
  * both by the PGPROC of the backend running it, and the xact's own ID) and
@@ -180,32 +180,32 @@ typedef struct LOCK
  * Otherwise, holder objects whose counts have gone to zero are recycled
  * as soon as convenient.
  *
- * Each HOLDER object is linked into lists for both the associated LOCK object
- * and the owning PGPROC object.  Note that the HOLDER is entered into these
+ * Each PROCLOCK object is linked into lists for both the associated LOCK object
+ * and the owning PGPROC object.  Note that the PROCLOCK is entered into these
  * lists as soon as it is created, even if no lock has yet been granted.
  * A PGPROC that is waiting for a lock to be granted will also be linked into
  * the lock's waitProcs queue.
  */
-typedef struct HOLDERTAG
+typedef struct PROCLOCKTAG
 {
     SHMEM_OFFSET lock;          /* link to per-lockable-object information */
     SHMEM_OFFSET proc;          /* link to PGPROC of owning backend */
     TransactionId xid;          /* xact ID, or InvalidTransactionId */
-} HOLDERTAG;
+} PROCLOCKTAG;
 
-typedef struct HOLDER
+typedef struct PROCLOCK
 {
     /* tag */
-    HOLDERTAG   tag;            /* unique identifier of holder object */
+    PROCLOCKTAG tag;            /* unique identifier of holder object */
 
     /* data */
     int         holding[MAX_LOCKMODES]; /* count of locks currently held */
     int         nHolding;       /* total of holding[] array */
     SHM_QUEUE   lockLink;       /* list link for lock's list of holders */
     SHM_QUEUE   procLink;       /* list link for process's list of holders */
-} HOLDER;
+} PROCLOCK;
 
-#define HOLDER_LOCKMETHOD(holder) \
+#define PROCLOCK_LOCKMETHOD(holder) \
     (((LOCK *) MAKE_PTR((holder).tag.lock))->tag.lockmethod)
 
 
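
To see the renamed types in use, here is a stand-alone sketch of the lookup idiom after this patch: zero a PROCLOCKTAG, fill in its fields, and probe the holder table. The linear probe and the LookupHolder name are inventions for illustration; the real code keys a shared-memory hash table with the tag via hash_search, as in the lock.c hunks above, and its fields are SHMEM offsets rather than plain integers.

    #include <stdio.h>
    #include <string.h>

    /* Simplified shapes of the renamed structs; the real definitions are
     * in the lock.h hunk above. */
    typedef struct PROCLOCKTAG
    {
        unsigned long lock;     /* which lockable object */
        unsigned long proc;     /* which backend */
        unsigned long xid;      /* which transaction, or 0 for session */
    } PROCLOCKTAG;

    typedef struct PROCLOCK
    {
        PROCLOCKTAG tag;
        int         nHolding;
    } PROCLOCK;

    /* Illustrative linear probe; the real code uses hash_search(). */
    static PROCLOCK *
    LookupHolder(PROCLOCK *table, int n, const PROCLOCKTAG *key)
    {
        for (int i = 0; i < n; i++)
            if (memcmp(&table[i].tag, key, sizeof(PROCLOCKTAG)) == 0)
                return &table[i];
        return NULL;
    }

    int
    main(void)
    {
        PROCLOCK    table[2] = {{{7, 100, 555}, 1}, {{7, 101, 556}, 2}};
        PROCLOCKTAG key;

        /* This layout happens to have no padding, but zero the tag anyway,
         * exactly as LockAcquire/LockRelease do before hashing it. */
        memset(&key, 0, sizeof(key));
        key.lock = 7;
        key.proc = 101;
        key.xid = 556;

        PROCLOCK *pl = LookupHolder(table, 2, &key);
        if (pl)
            printf("found PROCLOCK, nHolding = %d\n", pl->nHolding);
        return 0;
    }
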
@@ -225,9 +225,9 @@ extern bool LockReleaseAll(LOCKMETHOD lockmethod, PGPROC *proc,
                bool allxids, TransactionId xid);
 extern int LockCheckConflicts(LOCKMETHODTABLE *lockMethodTable,
                    LOCKMODE lockmode,
-                   LOCK *lock, HOLDER *holder, PGPROC *proc,
+                   LOCK *lock, PROCLOCK *holder, PGPROC *proc,
                    int *myHolding);
-extern void GrantLock(LOCK *lock, HOLDER *holder, LOCKMODE lockmode);
+extern void GrantLock(LOCK *lock, PROCLOCK *holder, LOCKMODE lockmode);
 extern void RemoveFromWaitQueue(PGPROC *proc);
 extern int  LockShmemSize(int maxBackends);
 extern bool DeadLockCheck(PGPROC *proc);
diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h
@@ -7,7 +7,7 @@
  * Portions Copyright (c) 1996-2002, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
- * $Id: proc.h,v 1.58 2002/07/13 01:02:14 momjian Exp $
+ * $Id: proc.h,v 1.59 2002/07/19 00:17:40 momjian Exp $
 *
 *-------------------------------------------------------------------------
 */
@@ -61,12 +61,12 @@ struct PGPROC
     /* Info about lock the process is currently waiting for, if any. */
     /* waitLock and waitHolder are NULL if not currently waiting. */
     LOCK       *waitLock;       /* Lock object we're sleeping on ... */
-    HOLDER     *waitHolder;     /* Per-holder info for awaited lock */
+    PROCLOCK   *waitHolder;     /* Per-holder info for awaited lock */
     LOCKMODE    waitLockMode;   /* type of lock we're waiting for */
     LOCKMASK    heldLocks;      /* bitmask for lock types already held on
                                  * this lock object by this backend */
 
-    SHM_QUEUE   procHolders;    /* list of HOLDER objects for locks held
+    SHM_QUEUE   procHolders;    /* list of PROCLOCK objects for locks held
                                  * or awaited by this backend */
 };
 
|
||||
@ -101,7 +101,7 @@ extern void ProcReleaseLocks(bool isCommit);
|
||||
|
||||
extern void ProcQueueInit(PROC_QUEUE *queue);
|
||||
extern int ProcSleep(LOCKMETHODTABLE *lockMethodTable, LOCKMODE lockmode,
|
||||
LOCK *lock, HOLDER *holder);
|
||||
LOCK *lock, PROCLOCK *holder);
|
||||
extern PGPROC *ProcWakeup(PGPROC *proc, int errType);
|
||||
extern void ProcLockWakeup(LOCKMETHODTABLE *lockMethodTable, LOCK *lock);
|
||||
extern bool LockWaitCancel(void);
|
||||
|