mirror of https://github.com/postgres/postgres.git synced 2025-11-07 19:06:32 +03:00

Tighten up error recovery for fast-path locking.

The previous code could cause a backend crash after BEGIN; SAVEPOINT a;
LOCK TABLE foo (interrupted by ^C or statement timeout); ROLLBACK TO
SAVEPOINT a; LOCK TABLE foo, and might have leaked strong-lock counts
in other situations.
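The failing sequence from the report can be written out as a psql session (here `foo` stands in for any table; the interrupt can come from ^C or from `statement_timeout`):

```sql
BEGIN;
SAVEPOINT a;
-- This LOCK TABLE is interrupted (^C or statement timeout) before the
-- lock is granted; before this fix, the strong-lock count taken for the
-- fast-path upgrade was not reverted on error.
LOCK TABLE foo;
ROLLBACK TO SAVEPOINT a;
-- Retrying the same lock after the rollback could then crash the backend.
LOCK TABLE foo;
```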

Report by Zoltán Böszörményi; patch review by Jeff Davis.
This commit is contained in:
Robert Haas
2012-04-18 11:17:30 -04:00
parent ab77b2da8b
commit 53c5b869b4
7 changed files with 94 additions and 31 deletions


@@ -635,17 +635,20 @@ IsWaitingForLock(void)
 }
 
 /*
- * Cancel any pending wait for lock, when aborting a transaction.
+ * Cancel any pending wait for lock, when aborting a transaction, and revert
+ * any strong lock count acquisition for a lock being acquired.
  *
  * (Normally, this would only happen if we accept a cancel/die
- * interrupt while waiting; but an ereport(ERROR) while waiting is
- * within the realm of possibility, too.)
+ * interrupt while waiting; but an ereport(ERROR) before or during the lock
+ * wait is within the realm of possibility, too.)
  */
 void
-LockWaitCancel(void)
+LockErrorCleanup(void)
 {
 	LWLockId	partitionLock;
 
+	AbortStrongLockAcquire();
+
 	/* Nothing to do if we weren't waiting for a lock */
 	if (lockAwaited == NULL)
 		return;
@@ -709,7 +712,7 @@ ProcReleaseLocks(bool isCommit)
 	if (!MyProc)
 		return;
 
 	/* If waiting, get off wait queue (should only be needed after error) */
-	LockWaitCancel();
+	LockErrorCleanup();
 
 	/* Release locks */
 	LockReleaseAll(DEFAULT_LOCKMETHOD, !isCommit);
@@ -1019,7 +1022,7 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)
 	 * NOTE: this may also cause us to exit critical-section state, possibly
 	 * allowing a cancel/die interrupt to be accepted. This is OK because we
 	 * have recorded the fact that we are waiting for a lock, and so
-	 * LockWaitCancel will clean up if cancel/die happens.
+	 * LockErrorCleanup will clean up if cancel/die happens.
 	 */
 	LWLockRelease(partitionLock);
@@ -1062,7 +1065,7 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)
 	 * don't, because we have no shared-state-change work to do after being
 	 * granted the lock (the grantor did it all). We do have to worry about
 	 * updating the locallock table, but if we lose control to an error,
-	 * LockWaitCancel will fix that up.
+	 * LockErrorCleanup will fix that up.
 	 */
 	do
 	{
@@ -1207,7 +1210,7 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)
 	LWLockAcquire(partitionLock, LW_EXCLUSIVE);
 
 	/*
-	 * We no longer want LockWaitCancel to do anything.
+	 * We no longer want LockErrorCleanup to do anything.
 	 */
 	lockAwaited = NULL;