commit 2e57790836 (Michael Paquier, 2025-04-11 10:00:21 +09:00)

Fix race with synchronous_standby_names at startup

synchronous_standby_names cannot be reloaded safely by backends, and the
checkpointer is in charge of updating a state in shared memory if the
GUC is enabled in WalSndCtl, to let the backends know if they should
wait or not for a given LSN.  This provides strict control over the
timing of the waiting queues if the GUC is enabled or disabled, then
reloaded.  The checkpointer is also in charge of waking up the backends
that could be waiting for an LSN when the GUC is disabled.

This logic had a race condition at startup, where it was possible for
backends not to wait for an LSN even if synchronous_standby_names is
enabled.  This caused visibility issues with transactions that should
have been waited for but were not.  The problem lasts until the
checkpointer does its initial update of the shared memory state when it
loads synchronous_standby_names.

In order to take care of this problem, the shared memory state in
WalSndCtl is extended to detect if it has been initialized by the
checkpointer, and not only check if synchronous_standby_names is
defined.  In WalSndCtlData, sync_standbys_defined is renamed to
sync_standbys_status, a bits8 tracking two states:
- If the shared memory state has been initialized.  This flag is set by
the checkpointer at startup once, and never removed.
- If synchronous_standby_names is known as defined in the shared memory
state.  This is the same as the previous sync_standbys_defined in
WalSndCtl.

This gives backends a way to decide what to do until the shared memory
area is initialized: in that case they now ultimately fall back to a
check on the GUC value, which is the best that can be done.
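
As a rough sketch of the resulting backend-side check (the flag names
follow the description above; the helper function is illustrative and
not the exact committed code):

/* Bits stored in WalSndCtl->sync_standbys_status (names per the above) */
#define SYNC_STANDBY_INIT		(1 << 0)	/* set once by the checkpointer */
#define SYNC_STANDBY_DEFINED	(1 << 1)	/* synchronous_standby_names set */

/* Illustrative helper: should this backend wait for sync replication? */
static bool
backend_should_wait_for_sync_rep(void)
{
	bits8		status = WalSndCtl->sync_standbys_status;

	/* Once the checkpointer has initialized the shared state, trust it. */
	if (status & SYNC_STANDBY_INIT)
		return (status & SYNC_STANDBY_DEFINED) != 0;

	/*
	 * Shared state not initialized yet (early startup): fall back to this
	 * backend's own view of the synchronous_standby_names GUC.
	 */
	return SyncStandbysDefined();
}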

Fortunately, SyncRepUpdateSyncStandbysDefined() is called immediately by
the checkpointer when this process starts, so the window is very narrow.
It is possible to enlarge the problematic window by making the
checkpointer wait at the beginning of SyncRepUpdateSyncStandbysDefined(),
with a hardcoded sleep for example; doing so has shown that a 2PC
visibility test does indeed fail.  On machines slow enough, this bug
would cause spurious failures.

In 17~, we have looked at the possibility of adding an injection point
to have a reproducible test, but as the problematic window happens at
early startup, we would need to invent a way to make an injection point
optionally persistent across restarts when attached, something that
would be fine for this case as it would involve the checkpointer.  This
issue is quite old, and can be reproduced on all the stable branches.

Author: Melnikov Maksim <m.melnikov@postgrespro.ru>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/163fcbec-900b-4b07-beaa-d2ead8634bec@postgrespro.ru
Backpatch-through: 13

src/backend/replication/README

Walreceiver - libpqwalreceiver API
----------------------------------

The transport-specific part of walreceiver, responsible for connecting to
the primary server, receiving WAL and sending messages, is loaded
dynamically to avoid having to link the main server binary with libpq.
The dynamically loaded module is in the libpqwalreceiver subdirectory.

The dynamically loaded module implements a set of functions; details
about each of them are provided in src/include/replication/walreceiver.h.

This API should be considered internal at the moment, but we could open it
up for 3rd party replacements of libpqwalreceiver in the future, allowing
pluggable methods for receiving WAL.
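
As a rough sketch of how the module is hooked up (simplified from the
walreceiver startup code; the wrapper function name below is
illustrative, while the global pointer and the wrapper macros are
declared in walreceiver.h):

/* The core holds a pointer to the module's table of callbacks. */
extern PGDLLIMPORT WalReceiverFunctionsType *WalReceiverFunctions;

static void
load_walreceiver_module(void)		/* illustrative wrapper name */
{
	/*
	 * Load the shared library; its _PG_init() is expected to point
	 * WalReceiverFunctions at the module's own callback table.
	 */
	load_file("libpqwalreceiver", false);
	if (WalReceiverFunctions == NULL)
		elog(ERROR, "libpqwalreceiver didn't initialize correctly");

	/*
	 * From here on, transport operations go through wrapper macros such
	 * as walrcv_connect(), walrcv_receive() and walrcv_disconnect(),
	 * which dispatch via the WalReceiverFunctions table.
	 */
}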

Walreceiver IPC
---------------

When WAL replay in the startup process has reached the end of archived
WAL, restorable using restore_command, it starts up the walreceiver
process to fetch more WAL (if streaming replication is configured).

Walreceiver is a postmaster subprocess, so the startup process can't fork
it directly. Instead, it sends a signal to the postmaster, asking it to
launch walreceiver. Before that, however, the startup process fills in
WalRcvData->conninfo and WalRcvData->slotname, and initializes the
starting point in WalRcvData->receiveStart.
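
A condensed sketch of that hand-off, modeled on RequestXLogStreaming()
in walreceiverfuncs.c (the wrapper name is illustrative; timeline
handling and other bookkeeping are omitted):

static void
request_walreceiver_start(XLogRecPtr recptr, const char *conninfo,
						  const char *slotname)	/* illustrative wrapper */
{
	WalRcvData *walrcv = WalRcv;

	SpinLockAcquire(&walrcv->mutex);

	/* Tell the future walreceiver how to connect and where to start */
	strlcpy((char *) walrcv->conninfo, conninfo, MAXCONNINFO);
	strlcpy((char *) walrcv->slotname, slotname ? slotname : "", NAMEDATALEN);
	walrcv->receiveStart = recptr;
	walrcv->walRcvState = WALRCV_STARTING;

	SpinLockRelease(&walrcv->mutex);

	/* Startup can't fork; ask the postmaster to launch walreceiver */
	SendPostmasterSignal(PMSIGNAL_START_WALRECEIVER);
}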

As walreceiver receives WAL from the primary server, and writes and
flushes it to disk (in pg_wal), it updates WalRcvData->flushedUpto and
signals the startup process, so that it knows how far WAL replay can
advance.

Walreceiver sends information about replication progress to the primary server
whenever it either writes or flushes new WAL, or the specified interval elapses.
This is used for reporting purposes.
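
A condensed sketch of the walreceiver side of this hand-off (simplified
from the flush and reply paths in walreceiver.c; the helper name is
illustrative):

static void
after_wal_flushed(XLogRecPtr flushed)	/* illustrative helper name */
{
	/* Advertise how far WAL has been durably written to pg_wal */
	SpinLockAcquire(&WalRcv->mutex);
	if (flushed > WalRcv->flushedUpto)
		WalRcv->flushedUpto = flushed;
	SpinLockRelease(&WalRcv->mutex);

	/* Wake the startup process so replay can advance up to this point */
	WakeupRecovery();

	/* Report the new write/flush positions back to the primary */
	XLogWalRcvSendReply(false, false);
}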

Walsender IPC
-------------

At shutdown, postmaster handles walsender processes differently from regular
backends. It waits for regular backends to die before writing the
shutdown checkpoint and terminating pgarch and other auxiliary processes, but
that's not desirable for walsenders, because we want the standby servers to
receive all the WAL, including the shutdown checkpoint, before the primary
is shut down. Therefore postmaster treats walsenders like the pgarch process,
and instructs them to terminate at the PM_WAIT_XLOG_ARCHIVAL phase, after all
regular backends have died and checkpointer has issued the shutdown checkpoint.
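
The effect on the walsender side can be sketched as follows (condensed
from the shutdown handling in walsender.c; variable names are
approximate):

/*
 * Once told to stop, a walsender keeps streaming and only exits when it
 * has nothing left to send and the standby has confirmed receipt of
 * everything generated so far, including the shutdown checkpoint.
 */
if (got_STOPPING && WalSndCaughtUp &&
	sentPtr == MyWalSnd->flush && !pq_is_send_pending())
	proc_exit(0);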

When postmaster accepts a connection, it immediately forks a new process
to handle the handshake and authentication, and the process initializes to
become a backend. Postmaster doesn't know if the process becomes a regular
backend or a walsender process at that time - that's indicated in the
connection handshake - so we need some extra signaling to let postmaster
identify walsender processes.

When walsender process starts up, it marks itself as a walsender process in
the PMSignal array. That way postmaster can tell it apart from regular
backends.
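
A minimal sketch of that step (the real call is made during walsender
initialization in walsender.c; the wrapper name here is illustrative):

static void
announce_as_walsender(void)		/* illustrative wrapper name */
{
	/*
	 * Flip this child's PMSignal slot from "regular backend" to
	 * "walsender", so postmaster keeps it alive until the shutdown
	 * checkpoint has been streamed out.
	 */
	MarkPostmasterChildWalSender();
}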

Note that no big harm is done if postmaster thinks that a walsender is a
regular backend; it will just terminate the walsender earlier in the shutdown
phase. A walsender will look like a regular backend until it's done with its
initialization and has marked itself in the PMSignal array, and again at
process termination, after it has unmarked its PMSignal slot.

Each walsender allocates an entry from the WalSndCtl array, and tracks
information about replication progress. Users can monitor them via
statistics views.


Walsender - walreceiver protocol
--------------------------------

See manual.