mirror of https://github.com/postgres/postgres.git
pgindent run for 9.4
This includes removing tabs after periods in C comments, which was applied to back branches, so this change should not affect backpatching.
@@ -187,7 +187,7 @@ static int FastPathLocalUseCount = 0;
 
 /*
  * The fast-path lock mechanism is concerned only with relation locks on
- * unshared relations by backends bound to a database.	The fast-path
+ * unshared relations by backends bound to a database.  The fast-path
  * mechanism exists mostly to accelerate acquisition and release of locks
  * that rarely conflict.  Because ShareUpdateExclusiveLock is
  * self-conflicting, it can't use the fast-path mechanism; but it also does
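For orientation while reading these hunks: whether a lock can use the fast-path at all is decided by a cheap eligibility test on the locktag. The following is a paraphrase of the 9.4-era EligibleForRelationFastPath macro in lock.c (the macro exists; its exact expansion here is reconstructed, not quoted):

    /* Paraphrase: only the default lock method, only relation locks, only
     * in the database this backend is bound to, and only the weak lock
     * modes that rarely conflict. */
    #define EligibleForRelationFastPath(locktag, mode) \
        ((locktag)->locktag_lockmethodid == DEFAULT_LOCKMETHOD && \
         (locktag)->locktag_type == LOCKTAG_RELATION && \
         (locktag)->locktag_field1 == MyDatabaseId && \
         MyDatabaseId != InvalidOid && \
         (mode) < ShareUpdateExclusiveLock)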
@@ -914,7 +914,7 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/*
 	 * If lock requested conflicts with locks requested by waiters, must join
-	 * wait queue.	Otherwise, check for conflict with already-held locks.
+	 * wait queue.  Otherwise, check for conflict with already-held locks.
 	 * (That's last because most complex check.)
 	 */
 	if (lockMethodTable->conflictTab[lockmode] & lock->waitMask)
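The decision this comment describes reduces to two bitmask tests against the fields visible in the hunk; a sketch of the surrounding control flow (status handling abridged):

    /* A conflict with some waiter's requested mode means we must queue
     * behind it; otherwise only the already-granted modes matter. */
    if (lockMethodTable->conflictTab[lockmode] & lock->waitMask)
        status = STATUS_FOUND;      /* join the wait queue */
    else
        status = LockCheckConflicts(lockMethodTable, lockmode,
                                    lock, proclock);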
@@ -995,7 +995,7 @@ LockAcquireExtended(const LOCKTAG *locktag,
 
 	/*
 	 * NOTE: do not do any material change of state between here and
-	 * return.	All required changes in locktable state must have been
+	 * return.  All required changes in locktable state must have been
 	 * done when the lock was granted to us --- see notes in WaitOnLock.
 	 */
 
@@ -1032,7 +1032,7 @@ LockAcquireExtended(const LOCKTAG *locktag,
 	{
 		/*
 		 * Decode the locktag back to the original values, to avoid sending
-		 * lots of empty bytes with every message.	See lock.h to check how a
+		 * lots of empty bytes with every message.  See lock.h to check how a
 		 * locktag is defined for LOCKTAG_RELATION
 		 */
 		LogAccessExclusiveLock(locktag->locktag_field1,
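The "decode the locktag" remark works because lock.h packs a LOCKTAG_RELATION tag from exactly two OIDs. A minimal sketch using the real SET_LOCKTAG_RELATION macro (relid stands in for any relation OID):

    LOCKTAG     tag;

    /* For LOCKTAG_RELATION, field1 is the database OID and field2 the
     * relation OID, so both originals can be read back out of the tag. */
    SET_LOCKTAG_RELATION(tag, MyDatabaseId, relid);
    Assert(tag.locktag_field1 == MyDatabaseId);
    Assert(tag.locktag_field2 == relid);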
@@ -1289,7 +1289,7 @@ LockCheckConflicts(LockMethod lockMethodTable,
 	}
 
 	/*
-	 * Rats.	Something conflicts.  But it could still be my own lock.  We have
+	 * Rats.  Something conflicts.  But it could still be my own lock.  We have
 	 * to construct a conflict mask that does not reflect our own locks, but
 	 * only lock types held by other processes.
 	 */
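The conflict mask the comment promises is built by subtracting this backend's own holdings mode by mode; a paraphrase of that loop (myLocks is the backend's holdMask, as in lock.c):

    otherLocks = 0;
    for (i = 1; i <= numLockModes; i++)
    {
        int         myHolding = (myLocks & LOCKBIT_ON(i)) ? 1 : 0;

        /* Count a mode only if someone besides us has it granted. */
        if (lock->granted[i] > myHolding)
            otherLocks |= LOCKBIT_ON(i);
    }
    if (!(lockMethodTable->conflictTab[lockmode] & otherLocks))
        return STATUS_OK;       /* the only conflict was with ourselves */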
@@ -1381,7 +1381,7 @@ UnGrantLock(LOCK *lock, LOCKMODE lockmode,
 
 	/*
 	 * We need only run ProcLockWakeup if the released lock conflicts with at
-	 * least one of the lock types requested by waiter(s).	Otherwise whatever
+	 * least one of the lock types requested by waiter(s).  Otherwise whatever
 	 * conflict made them wait must still exist.  NOTE: before MVCC, we could
 	 * skip wakeup if lock->granted[lockmode] was still positive.  But that's
 	 * not true anymore, because the remaining granted locks might belong to
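For context, the bookkeeping UnGrantLock performs before this test is a few counter updates; paraphrased with field names from lock.h:

    /* Retract the grant, and clear the mode's bit once nobody holds it. */
    lock->nGranted--;
    lock->granted[lockmode]--;
    if (lock->granted[lockmode] == 0)
        lock->grantMask &= LOCKBIT_OFF(lockmode);

    /* Wake waiters only if the released mode could have blocked them. */
    if (lockMethodTable->conflictTab[lockmode] & lock->waitMask)
        wakeupNeeded = true;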
@@ -1401,7 +1401,7 @@ UnGrantLock(LOCK *lock, LOCKMODE lockmode,
 }
 
 /*
- * CleanUpLock -- clean up after releasing a lock.	We garbage-collect the
+ * CleanUpLock -- clean up after releasing a lock.  We garbage-collect the
  * proclock and lock objects if possible, and call ProcLockWakeup if there
  * are remaining requests and the caller says it's OK.  (Normally, this
  * should be called after UnGrantLock, and wakeupNeeded is the result from
@@ -1823,7 +1823,7 @@ LockRelease(const LOCKTAG *locktag, LOCKMODE lockmode, bool sessionLock)
 	}
 
 	/*
-	 * Decrease the total local count.	If we're still holding the lock, we're
+	 * Decrease the total local count.  If we're still holding the lock, we're
 	 * done.
 	 */
 	locallock->nLocks--;
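The early exit the comment announces follows immediately after the decrement; with a line of context the logic reads roughly:

    /* Drop one local reference; if other ResourceOwners in this backend
     * still hold the lock, nothing changes at the shared level. */
    locallock->nLocks--;
    if (locallock->nLocks > 0)
        return TRUE;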
@@ -1955,7 +1955,7 @@ LockReleaseAll(LOCKMETHODID lockmethodid, bool allLocks)
 #endif
 
 	/*
-	 * Get rid of our fast-path VXID lock, if appropriate.	Note that this is
+	 * Get rid of our fast-path VXID lock, if appropriate.  Note that this is
 	 * the only way that the lock we hold on our own VXID can ever get
 	 * released: it is always and only released when a toplevel transaction
 	 * ends.
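The "always and only released" claim is enforced by doing the cleanup right here in LockReleaseAll; in 9.4 the call is gated on the default lock method, roughly:

    /* The fast-path VXID lock is dropped only at top-level transaction
     * end, via this cleanup. */
    if (lockmethodid == DEFAULT_LOCKMETHOD)
        VirtualXactLockTableCleanup();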
@@ -2042,7 +2042,7 @@ LockReleaseAll(LOCKMETHODID lockmethodid, bool allLocks)
 			 * fast-path data structures, we must acquire it before attempting
 			 * to release the lock via the fast-path.  We will continue to
 			 * hold the LWLock until we're done scanning the locallock table,
-			 * unless we hit a transferred fast-path lock.	(XXX is this
+			 * unless we hit a transferred fast-path lock.  (XXX is this
 			 * really such a good idea?  There could be a lot of entries ...)
 			 */
 			if (!have_fast_path_lwlock)
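The guard mentioned here keeps our backendLock held across loop iterations instead of re-acquiring it per lock; sketched:

    /* Take our own backendLock once and keep it while scanning the
     * locallock table. */
    if (!have_fast_path_lwlock)
    {
        LWLockAcquire(MyProc->backendLock, LW_EXCLUSIVE);
        have_fast_path_lwlock = true;
    }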
@@ -2061,7 +2061,7 @@ LockReleaseAll(LOCKMETHODID lockmethodid, bool allLocks)
 
 			/*
 			 * Our lock, originally taken via the fast path, has been
-			 * transferred to the main lock table.	That's going to require
+			 * transferred to the main lock table.  That's going to require
 			 * some extra work, so release our fast-path lock before starting.
 			 */
 			LWLockRelease(MyProc->backendLock);
@@ -2070,7 +2070,7 @@ LockReleaseAll(LOCKMETHODID lockmethodid, bool allLocks)
 			/*
 			 * Now dump the lock.  We haven't got a pointer to the LOCK or
 			 * PROCLOCK in this case, so we have to handle this a bit
-			 * differently than a normal lock release.	Unfortunately, this
+			 * differently than a normal lock release.  Unfortunately, this
 			 * requires an extra LWLock acquire-and-release cycle on the
 			 * partitionLock, but hopefully it shouldn't happen often.
 			 */
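Dumping a transferred lock without LOCK/PROCLOCK pointers means re-finding the lock under its partition lock, which is the extra acquire-and-release cycle the comment laments. A paraphrase (locating our PROCLOCK and error handling elided):

    partitionLock = LockHashPartitionLock(locallock->hashcode);

    LWLockAcquire(partitionLock, LW_EXCLUSIVE);
    lock = (LOCK *) hash_search_with_hash_value(LockMethodLockHash,
                                                (void *) &locallock->tag.lock,
                                                locallock->hashcode,
                                                HASH_FIND, NULL);
    /* ... locate our PROCLOCK, then UnGrantLock + CleanUpLock ... */
    LWLockRelease(partitionLock);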
@@ -2505,9 +2505,9 @@ FastPathTransferRelationLocks(LockMethod lockMethodTable, const LOCKTAG *locktag
 		 * acquiring proc->backendLock.  In particular, it's certainly safe to
 		 * assume that if the target backend holds any fast-path locks, it
 		 * must have performed a memory-fencing operation (in particular, an
-		 * LWLock acquisition) since setting proc->databaseId.	However, it's
+		 * LWLock acquisition) since setting proc->databaseId.  However, it's
 		 * less clear that our backend is certain to have performed a memory
-		 * fencing operation since the other backend set proc->databaseId.	So
+		 * fencing operation since the other backend set proc->databaseId.  So
 		 * for now, we test it after acquiring the LWLock just to be safe.
 		 */
 		if (proc->databaseId != locktag->locktag_field1)
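The belt-and-suspenders re-test the comment settles on is the line at the bottom of the hunk; with its surrounding control flow it is roughly:

    LWLockAcquire(proc->backendLock, LW_EXCLUSIVE);

    /* The acquisition above acts as a memory barrier, so re-testing
     * databaseId is now safe regardless of earlier fencing. */
    if (proc->databaseId != locktag->locktag_field1)
    {
        LWLockRelease(proc->backendLock);
        continue;           /* wrong database: nothing to transfer */
    }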
@@ -3021,7 +3021,7 @@ AtPrepare_Locks(void)
 			continue;
 
 		/*
-		 * If we have both session- and transaction-level locks, fail.	This
+		 * If we have both session- and transaction-level locks, fail.  This
 		 * should never happen with regular locks, since we only take those at
 		 * session level in some special operations like VACUUM.  It's
 		 * possible to hit this with advisory locks, though.
@@ -3030,7 +3030,7 @@ AtPrepare_Locks(void)
 		 * the transactional hold to the prepared xact.  However, that would
 		 * require two PROCLOCK objects, and we cannot be sure that another
 		 * PROCLOCK will be available when it comes time for PostPrepare_Locks
-		 * to do the deed.	So for now, we error out while we can still do so
+		 * to do the deed.  So for now, we error out while we can still do so
 		 * safely.
 		 */
 		if (haveSessionLock)
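The error path taken when both lock levels are present is an ordinary ereport; in 9.4 it reads approximately (message text reconstructed from memory of the tree, so treat it as approximate):

    if (haveSessionLock)
        ereport(ERROR,
                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                 errmsg("cannot PREPARE while holding both "
                        "session-level and transaction-level locks "
                        "on the same object")));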
@@ -3219,7 +3219,7 @@ PostPrepare_Locks(TransactionId xid)
 			/*
 			 * We cannot simply modify proclock->tag.myProc to reassign
 			 * ownership of the lock, because that's part of the hash key and
-			 * the proclock would then be in the wrong hash chain.	Instead
+			 * the proclock would then be in the wrong hash chain.  Instead
 			 * use hash_update_hash_key.  (We used to create a new hash entry,
 			 * but that risks out-of-memory failure if other processes are
 			 * busy making proclocks too.)  We must unlink the proclock from
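hash_update_hash_key re-chains the existing entry instead of creating a new one; a paraphrase of its use here (newproc is the dummy PGPROC obtained for the prepared xact):

    PROCLOCKTAG newproclocktag;

    newproclocktag.myLock = proclock->tag.myLock;
    newproclocktag.myProc = newproc;

    if (!hash_update_hash_key(LockMethodProcLockHash,
                              (void *) proclock,
                              (void *) &newproclocktag))
        elog(PANIC, "duplicate entry found while reassigning a prepared transaction's locks");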
@@ -3319,7 +3319,7 @@ GetLockStatusData(void)
 
 	/*
 	 * First, we iterate through the per-backend fast-path arrays, locking
-	 * them one at a time.	This might produce an inconsistent picture of the
+	 * them one at a time.  This might produce an inconsistent picture of the
 	 * system state, but taking all of those LWLocks at the same time seems
 	 * impractical (in particular, note MAX_SIMUL_LWLOCKS).  It shouldn't
 	 * matter too much, because none of these locks can be involved in lock
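That one-at-a-time iteration looks roughly like this (the copying of fast-path entries is elided):

    for (i = 0; i < ProcGlobal->allProcCount; ++i)
    {
        PGPROC     *proc = &ProcGlobal->allProcs[i];

        /* One backend's fast-path array at a time: bounded LWLock use,
         * at the price of a possibly inconsistent global picture. */
        LWLockAcquire(proc->backendLock, LW_SHARED);
        /* ... copy this backend's fast-path lock entries ... */
        LWLockRelease(proc->backendLock);
    }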
@@ -3398,7 +3398,7 @@ GetLockStatusData(void)
 	 * will be self-consistent.
 	 *
 	 * Since this is a read-only operation, we take shared instead of
-	 * exclusive lock.	There's not a whole lot of point to this, because all
+	 * exclusive lock.  There's not a whole lot of point to this, because all
 	 * the normal operations require exclusive lock, but it doesn't hurt
 	 * anything either.  It will at least allow two backends to do
 	 * GetLockStatusData in parallel.
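Taking "shared instead of exclusive lock" here means acquiring every main lock-table partition in shared mode, in order:

    /* Shared mode suffices for a read-only scan, and lets two
     * GetLockStatusData calls proceed in parallel. */
    for (i = 0; i < NUM_LOCK_PARTITIONS; i++)
        LWLockAcquire(LockHashPartitionLockByIndex(i), LW_SHARED);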
@@ -3917,7 +3917,7 @@ lock_twophase_postabort(TransactionId xid, uint16 info,
  * as MyProc->lxid, you might wonder if we really need both.  The
  * difference is that MyProc->lxid is set and cleared unlocked, and
  * examined by procarray.c, while fpLocalTransactionId is protected by
- * backendLock and is used only by the locking subsystem.	Doing it this
+ * backendLock and is used only by the locking subsystem.  Doing it this
  * way makes it easier to verify that there are no funny race conditions.
  *
  * We don't bother recording this lock in the local lock table, since it's
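The discipline described here (fpLocalTransactionId is touched only while holding backendLock) can be seen where the VXID lock is first published; a paraphrase from VirtualXactLockTableInsert (vxid is that function's parameter):

    LWLockAcquire(MyProc->backendLock, LW_EXCLUSIVE);

    Assert(MyProc->fpLocalTransactionId == InvalidLocalTransactionId);

    /* Published under the lock, unlike MyProc->lxid. */
    MyProc->fpVXIDLock = true;
    MyProc->fpLocalTransactionId = vxid.localTransactionId;

    LWLockRelease(MyProc->backendLock);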