Most of the time, the last replayed record comes from the recovery target
timeline, but there is a corner case where it makes a difference. When the
startup process scans for a new timeline, and decides to change the recovery
target timeline, there is a window where the recovery target TLI has already
been bumped, but there are no WAL segments from the new timeline in pg_xlog
yet. For example, if we have just replayed up to point 0/30002D8 on timeline
1, there is a WAL file called 000000010000000000000003 in pg_xlog that
contains the WAL up to that point. When recovery switches the recovery target
timeline to 2, a walsender can immediately try to read WAL from 0/30002D8 on
timeline 2, so it will try to open WAL file 000000020000000000000003.
However, that doesn't exist yet - the startup process hasn't copied that file
from the archive yet, nor has the walreceiver streamed it yet - so the
walsender fails with the error "requested WAL segment
000000020000000000000003 has already been removed". That's harmless, in that
the standby will try to reconnect later and by that time the segment has been
created, but error messages that should be ignored are not good.

To fix that, have walsender track the TLI of the last replayed record,
instead of the recovery target timeline. That way walsender will not try to
read anything from timeline 2 until the WAL segment has been created and at
least one record has been replayed from it. The recovery target timeline is
now xlog.c's internal affair; it doesn't need to be exposed in shared memory
anymore.

This fixes the error reported by Thom Brown. depesz saw the same error
message, but I'm not sure if this fixes his scenario.
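The file names in the example follow the WAL segment naming convention:
timeline ID, high half of the LSN, and segment number within that half, each
as eight hex digits. The following is a minimal standalone sketch of that
convention, not PostgreSQL source; wal_segment_name is an illustrative
helper, and the 16 MB segment size is the default build-time value.

    /* Sketch: derive a WAL segment file name from a timeline ID and an LSN. */
    #include <stdio.h>

    #define XLOG_SEG_SIZE  (16 * 1024 * 1024)   /* default 16 MB WAL segments */

    static void
    wal_segment_name(char *buf, size_t len,
                     unsigned int tli, unsigned int xlogid, unsigned int xrecoff)
    {
        /* 8 hex digits each: timeline, log id (high half of LSN), segment no */
        snprintf(buf, len, "%08X%08X%08X", tli, xlogid, xrecoff / XLOG_SEG_SIZE);
    }

    int
    main(void)
    {
        char    fname[25];

        /* Last replayed record at 0/30002D8 on timeline 1 ... */
        wal_segment_name(fname, sizeof(fname), 1, 0, 0x30002D8);
        printf("timeline 1: %s\n", fname);      /* 000000010000000000000003 */

        /* ... after the recovery target timeline is bumped to 2, a walsender
         * would look for the same segment on the new timeline: */
        wal_segment_name(fname, sizeof(fname), 2, 0, 0x30002D8);
        printf("timeline 2: %s\n", fname);      /* 000000020000000000000003 */

        return 0;
    }

Until that second file exists in pg_xlog and at least one record from it has
been replayed, there is nothing a walsender can legitimately serve from
timeline 2, which is why tracking the last-replayed TLI avoids the spurious
error.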
src/backend/replication/README

Walreceiver - libpqwalreceiver API
----------------------------------

The transport-specific part of walreceiver, responsible for connecting to
the primary server, receiving WAL files and sending messages, is loaded
dynamically to avoid having to link the main server binary with libpq.
The dynamically loaded module is in the libpqwalreceiver subdirectory.

The dynamically loaded module implements four functions:


bool walrcv_connect(char *conninfo, XLogRecPtr startpoint)

Establishes a connection to the primary, and starts streaming from
'startpoint'. Returns true on success.


bool walrcv_receive(int timeout, unsigned char *type, char **buffer, int *len)

Retrieves any message available through the connection, blocking for a
maximum of 'timeout' ms. If a message was successfully read, returns true,
otherwise false. On success, a pointer to the message payload is stored in
*buffer, its length in *len, and the type of message received in *type. The
returned buffer is valid until the next call to a walrcv_* function; the
caller should not attempt to free it.


void walrcv_send(const char *buffer, int nbytes)

Sends a message to the XLOG stream.


void walrcv_disconnect(void);

Disconnects.


This API should be considered internal at the moment, but we could open it
up for 3rd party replacements of libpqwalreceiver in the future, allowing
pluggable methods for receiving WAL.

Walreceiver IPC
---------------

When the WAL replay in the startup process has reached the end of archived
WAL, restorable using restore_command, it starts up the walreceiver process
to fetch more WAL (if streaming replication is configured).

Walreceiver is a postmaster subprocess, so the startup process can't fork it
directly. Instead, it sends a signal to postmaster, asking postmaster to
launch it. Before that, however, the startup process fills in
WalRcvData->conninfo, and initializes the starting point in
WalRcvData->receiveStart.

As walreceiver receives WAL from the master server, and writes and flushes
it to disk (in pg_xlog), it updates WalRcvData->receivedUpto and signals the
startup process, so that it knows how far WAL replay can advance.

Walreceiver sends information about replication progress to the master
server whenever it either writes or flushes new WAL, or the specified
interval elapses. This is used for reporting purposes.

Walsender IPC
-------------

At shutdown, postmaster handles walsender processes differently from regular
backends. It waits for regular backends to die before writing the shutdown
checkpoint and terminating pgarch and other auxiliary processes, but that's
not desirable for walsenders, because we want the standby servers to receive
all the WAL, including the shutdown checkpoint, before the master is shut
down. Therefore postmaster treats walsenders like the pgarch process, and
instructs them to terminate at the PM_SHUTDOWN_2 phase, after all regular
backends have died and the checkpointer has issued the shutdown checkpoint.

When postmaster accepts a connection, it immediately forks a new process to
handle the handshake and authentication, and the process initializes to
become a backend. Postmaster doesn't know if the process will become a
regular backend or a walsender process at that time - that's indicated in
the connection handshake - so we need some extra signaling to let postmaster
identify walsender processes.

When a walsender process starts up, it marks itself as a walsender process
in the PMSignal array. That way postmaster can tell it apart from regular
backends.
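The marking step can be pictured with a small standalone sketch. This is not
PostgreSQL's actual implementation (the real per-child slot array lives in
src/backend/storage/ipc/pmsignal.c and in shared memory); the array, states,
and helper names below are simplified assumptions to show the idea of a
per-child slot that the postmaster can inspect.

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_CHILD_SLOTS 64

    typedef enum
    {
        SLOT_UNUSED,
        SLOT_ACTIVE_BACKEND,    /* regular backend, or walsender still initializing */
        SLOT_ACTIVE_WALSENDER   /* child has marked itself as a walsender */
    } ChildSlotState;

    /* In PostgreSQL the equivalent array lives in shared memory. */
    static ChildSlotState child_slots[MAX_CHILD_SLOTS];

    /* Child side: claim the slot as an ordinary backend right after startup. */
    static void
    mark_child_active(int slot)
    {
        child_slots[slot] = SLOT_ACTIVE_BACKEND;
    }

    /* Child side: re-mark the slot once the startup packet says "walsender". */
    static void
    mark_child_walsender(int slot)
    {
        child_slots[slot] = SLOT_ACTIVE_WALSENDER;
    }

    /* Postmaster side: used when deciding which children to terminate in
     * which shutdown phase; walsenders are kept alive until after the
     * shutdown checkpoint has been written. */
    static bool
    child_is_walsender(int slot)
    {
        return child_slots[slot] == SLOT_ACTIVE_WALSENDER;
    }

    int
    main(void)
    {
        mark_child_active(0);       /* new connection: looks like a backend */
        mark_child_walsender(0);    /* handshake revealed it is a walsender */
        printf("slot 0 is walsender: %s\n",
               child_is_walsender(0) ? "yes" : "no");
        return 0;
    }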
Note that no big harm is done if postmaster thinks that a walsender is a
regular backend; it will just terminate the walsender earlier in the
shutdown phase. A walsender will look like a regular backend until it's done
with the initialization and has marked itself in the PMSignal array, and
again at process termination, after it has unmarked its PMSignal slot.

Each walsender allocates an entry from the WalSndCtl array, and tracks
information about replication progress. Users can monitor them via
statistics views.

Walsender - walreceiver protocol
--------------------------------

See manual.
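To make "tracks information about replication progress" concrete, here is a
minimal sketch of what a per-walsender progress entry could look like. The
type and field names (WalSenderProgress, sent_ptr, and so on) are
illustrative assumptions, not the actual definitions; the real structs live
in src/include/replication/walsender_private.h.

    #include <stdint.h>

    typedef uint64_t WalLocation;       /* stand-in for XLogRecPtr */

    typedef struct WalSenderProgress
    {
        int         pid;                /* walsender's PID, or 0 if slot is free */
        WalLocation sent_ptr;           /* last WAL position sent to the standby */
        WalLocation standby_written;    /* standby has written WAL up to here */
        WalLocation standby_flushed;    /* standby has flushed WAL up to here */
        WalLocation standby_applied;    /* standby has replayed WAL up to here */
    } WalSenderProgress;

    /* One entry per allowed walsender; in PostgreSQL the array lives in
     * shared memory and is sized by max_wal_senders. */
    #define MAX_WAL_SENDERS 10
    static WalSenderProgress wal_senders[MAX_WAL_SENDERS];

The statistics views mentioned above, notably pg_stat_replication, expose
essentially this information, one row per connected walsender.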