
Replace buffer I/O locks with condition variables.

1.  Backends waiting for buffer I/O are now interruptible.

2.  If something goes wrong in a backend that is currently performing
I/O, waiting backends no longer wake up until that backend reaches
AbortBufferIO() and broadcasts on the CV.  Previously, any waiters would
wake up (because the I/O lock was automatically released) and then
busy-loop until AbortBufferIO() cleared BM_IO_IN_PROGRESS.

3.  LWLockMinimallyPadded is removed, as it would now be unused.

Author: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (earlier version, 2016)
Discussion: https://postgr.es/m/CA%2BhUKGJ8nBFrjLuCTuqKN0pd2PQOwj9b_jnsiGFFMDvUxahj_A%40mail.gmail.com
Discussion: https://postgr.es/m/CA+Tgmoaj2aPti0yho7FeEf2qt-JgQPRWb0gci_o1Hfr=C56Xng@mail.gmail.com
Thomas Munro committed 2021-03-11 10:05:58 +13:00
parent c3ffe34863
commit d87251048a
11 changed files with 61 additions and 116 deletions
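
For context, the handshake described in points 1 and 2 of the commit message uses PostgreSQL's generic condition-variable API (storage/condition_variable.h). Below is a minimal sketch of the two sides, not the actual bufmgr.c code from this commit; the accessor BufferDescriptorGetIOCV and the exact flag handling are simplified assumptions.

#include "postgres.h"
#include "storage/buf_internals.h"
#include "storage/condition_variable.h"

/*
 * Waiter side (cf. point 1): sleeping on a CV goes through the normal
 * latch machinery, so the wait is interruptible.
 */
static void
WaitIOSketch(BufferDesc *buf)
{
	/* Assumption: a per-buffer CV reachable from the descriptor. */
	ConditionVariable *cv = BufferDescriptorGetIOCV(buf);

	ConditionVariablePrepareToSleep(cv);
	for (;;)
	{
		uint32		buf_state;

		/* Re-read the flag under the buffer header spinlock. */
		buf_state = LockBufHdr(buf);
		UnlockBufHdr(buf, buf_state);

		if (!(buf_state & BM_IO_IN_PROGRESS))
			break;
		ConditionVariableSleep(cv, WAIT_EVENT_BUFFER_IO);
	}
	ConditionVariableCancelSleep();
}

/*
 * Error path (cf. point 2): clear the flag first, then broadcast, so a
 * woken waiter sees BM_IO_IN_PROGRESS already clear and exits its loop,
 * instead of busy-looping as it did under the old I/O lock.
 */
static void
AbortBufferIOSketch(BufferDesc *buf)
{
	uint32		buf_state = LockBufHdr(buf);

	buf_state &= ~BM_IO_IN_PROGRESS;
	UnlockBufHdr(buf, buf_state);

	ConditionVariableBroadcast(BufferDescriptorGetIOCV(buf));
}

Because the flag is cleared before the broadcast, waiters are only woken once the aborting backend has actually finished cleaning up, which is exactly the improvement point 2 describes.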

src/include/storage/lwlock.h

@@ -48,29 +48,8 @@ typedef struct LWLock
  * even more padding so that each LWLock takes up an entire cache line; this is
  * useful, for example, in the main LWLock array, where the overall number of
  * locks is small but some are heavily contended.
- *
- * When allocating a tranche that contains data other than LWLocks, it is
- * probably best to include a bare LWLock and then pad the resulting structure
- * as necessary for performance.  For an array that contains only LWLocks,
- * LWLockMinimallyPadded can be used for cases where we just want to ensure
- * that we don't cross cache line boundaries within a single lock, while
- * LWLockPadded can be used for cases where we want each lock to be an entire
- * cache line.
- *
- * An LWLockMinimallyPadded might contain more than the absolute minimum amount
- * of padding required to keep a lock from crossing a cache line boundary,
- * because an unpadded LWLock will normally fit into 16 bytes.  We ignore that
- * possibility when determining the minimal amount of padding.  Older releases
- * had larger LWLocks, so 32 really was the minimum, and packing them in
- * tighter might hurt performance.
- *
- * LWLOCK_MINIMAL_SIZE should be 32 on basically all common platforms, but
- * because pg_atomic_uint32 is more than 4 bytes on some obscure platforms, we
- * allow for the possibility that it might be 64.  Even on those platforms,
- * we probably won't exceed 32 bytes unless LOCK_DEBUG is defined.
  */
 #define LWLOCK_PADDED_SIZE	PG_CACHE_LINE_SIZE
-#define LWLOCK_MINIMAL_SIZE (sizeof(LWLock) <= 32 ? 32 : 64)
 
 /* LWLock, padded to a full cache line size */
 typedef union LWLockPadded
@@ -79,13 +58,6 @@ typedef union LWLockPadded
 	char		pad[LWLOCK_PADDED_SIZE];
 } LWLockPadded;
 
-/* LWLock, minimally padded */
-typedef union LWLockMinimallyPadded
-{
-	LWLock		lock;
-	char		pad[LWLOCK_MINIMAL_SIZE];
-} LWLockMinimallyPadded;
-
 extern PGDLLIMPORT LWLockPadded *MainLWLockArray;
 
 /* struct for storing named tranche information */
@@ -202,7 +174,6 @@ typedef enum BuiltinTrancheIds
 	LWTRANCHE_SERIAL_BUFFER,
 	LWTRANCHE_WAL_INSERT,
 	LWTRANCHE_BUFFER_CONTENT,
-	LWTRANCHE_BUFFER_IO,
 	LWTRANCHE_REPLICATION_ORIGIN_STATE,
 	LWTRANCHE_REPLICATION_SLOT_IO,
 	LWTRANCHE_LOCK_FASTPATH,
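
Although LWLockMinimallyPadded goes away, the padding idea the removed comment explains is still how LWLockPadded works: each element of a lock array is padded to a full cache line so that neighbouring locks never share one, trading memory for less false sharing. A standalone illustration of the pattern, with hypothetical names and a hard-coded 64-byte line where PostgreSQL would use PG_CACHE_LINE_SIZE:

#include <assert.h>

#define MY_CACHE_LINE_SIZE 64	/* stand-in for PG_CACHE_LINE_SIZE */

typedef struct MyLock
{
	unsigned int state;		/* the actual lock word */
} MyLock;

/* Same shape as LWLockPadded: the char array forces the union's size. */
typedef union MyLockPadded
{
	MyLock		lock;
	char		pad[MY_CACHE_LINE_SIZE];
} MyLockPadded;

/* Each array element now occupies exactly one cache line. */
static_assert(sizeof(MyLockPadded) == MY_CACHE_LINE_SIZE,
			  "padded lock must be exactly one cache line");

For the padding to pay off, the array itself must also start on a cache-line boundary; PostgreSQL arranges that when it allocates MainLWLockArray in shared memory.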