Mirror of https://github.com/postgres/postgres.git (synced 2025-07-27 12:41:57 +03:00)
Add a slot synchronization function.
This commit introduces a new SQL function pg_sync_replication_slots() which is used to synchronize the logical replication slots from the primary server to the physical standby so that logical replication can be resumed after a failover or planned switchover.

A new 'synced' flag is introduced in the pg_replication_slots view, indicating whether the slot has been synchronized from the primary server. On a standby, synced slots cannot be dropped or consumed, and any attempt to perform logical decoding on them will result in an error.

The logical replication slots on the primary can be synchronized to the hot standby by using the 'failover' parameter of pg_create_logical_replication_slot(), or by using the 'failover' option of CREATE SUBSCRIPTION during slot creation, and then calling pg_sync_replication_slots() on the standby. For the synchronization to work, it is mandatory to have a physical replication slot between the primary and the standby, i.e., 'primary_slot_name' should be configured on the standby, and 'hot_standby_feedback' must be enabled on the standby. It is also necessary to specify a valid 'dbname' in 'primary_conninfo'.

If a logical slot is invalidated on the primary, then that slot on the standby is also invalidated.

If a logical slot on the primary is valid but is invalidated on the standby, then that slot is dropped but will be recreated on the standby in the next pg_sync_replication_slots() call, provided the slot still exists on the primary server. It is okay to recreate such slots as long as they are not consumable on the standby (which is the case currently). This situation may occur for the following reasons:

- The 'max_slot_wal_keep_size' on the standby is insufficient to retain WAL records from the restart_lsn of the slot.
- 'primary_slot_name' is temporarily reset to null and the physical slot is removed.

The slot synchronization status on the standby can be monitored using the 'synced' column of the pg_replication_slots view.

Functionality to automatically synchronize slots via a background worker, and to allow logical walsenders to wait for the physical standby, will be added in subsequent commits.

Author: Hou Zhijie, Shveta Malik, Ajin Cherian, based on an earlier version by Peter Eisentraut
Reviewed-by: Masahiko Sawada, Bertrand Drouvot, Peter Smith, Dilip Kumar, Nisha Moond, Kuroda Hayato, Amit Kapila
Discussion: https://postgr.es/m/514f6f2f-6833-4539-39f1-96cd1e011f23@enterprisedb.com
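A minimal sketch of the workflow described above, assuming a primary reachable as 'primary.example.com', a physical slot named 'physical_slot', and a logical slot named 'failover_slot' (all host names, slot names, and subscription/publication names here are illustrative, not taken from the commit):

-- On the primary: create a logical slot that is allowed to be synchronized,
-- using the new 'failover' parameter.
SELECT pg_create_logical_replication_slot('failover_slot', 'pgoutput',
                                          false,   -- temporary
                                          false,   -- two_phase
                                          true);   -- failover

-- Alternatively, have CREATE SUBSCRIPTION (run on the subscriber) create the
-- slot with the failover option:
--   CREATE SUBSCRIPTION sub1
--       CONNECTION 'host=primary.example.com dbname=postgres'
--       PUBLICATION pub1 WITH (failover = true);

-- On the standby, the prerequisites described above must be in place
-- (postgresql.conf):
--   primary_slot_name    = 'physical_slot'
--   hot_standby_feedback = on
--   primary_conninfo     = 'host=primary.example.com dbname=postgres ...'

-- On the standby: synchronize the failover-enabled logical slots from the
-- primary.
SELECT pg_sync_replication_slots();

-- On the standby: check the result; synchronized slots report synced = true
-- and cannot be dropped or consumed while the server is in recovery.
SELECT slot_name, failover, synced FROM pg_replication_slots;

If a synced slot is later invalidated on the standby (for example because 'max_slot_wal_keep_size' was too small), a subsequent pg_sync_replication_slots() call recreates it as long as the slot still exists and is valid on the primary.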
@@ -72,6 +72,7 @@
 #include "postmaster/interrupt.h"
 #include "replication/decode.h"
 #include "replication/logical.h"
+#include "replication/slotsync.h"
 #include "replication/slot.h"
 #include "replication/snapbuild.h"
 #include "replication/syncrep.h"
@@ -243,7 +244,6 @@ static void WalSndShutdown(void) pg_attribute_noreturn();
 static void XLogSendPhysical(void);
 static void XLogSendLogical(void);
 static void WalSndDone(WalSndSendDataCallback send_data);
-static XLogRecPtr GetStandbyFlushRecPtr(TimeLineID *tli);
 static void IdentifySystem(void);
 static void UploadManifest(void);
 static bool HandleUploadManifestPacket(StringInfo buf, off_t *offset,
@@ -1224,7 +1224,7 @@ CreateReplicationSlot(CreateReplicationSlotCmd *cmd)
 	{
 		ReplicationSlotCreate(cmd->slotname, false,
 							  cmd->temporary ? RS_TEMPORARY : RS_PERSISTENT,
-							  false, false);
+							  false, false, false);
 
 		if (reserve_wal)
 		{
@@ -1255,7 +1255,7 @@ CreateReplicationSlot(CreateReplicationSlotCmd *cmd)
 		 */
 		ReplicationSlotCreate(cmd->slotname, true,
 							  cmd->temporary ? RS_TEMPORARY : RS_EPHEMERAL,
-							  two_phase, failover);
+							  two_phase, failover, false);
 
 		/*
 		 * Do options check early so that we can bail before calling the
@@ -3385,14 +3385,17 @@ WalSndDone(WalSndSendDataCallback send_data)
 }
 
 /*
- * Returns the latest point in WAL that has been safely flushed to disk, and
- * can be sent to the standby. This should only be called when in recovery,
- * ie. we're streaming to a cascaded standby.
+ * Returns the latest point in WAL that has been safely flushed to disk.
+ * This should only be called when in recovery.
+ *
+ * This is called either by the cascading walsender to find the WAL position
+ * to be sent to a cascaded standby, or by the slot synchronization function
+ * to validate a remote slot's lsn before syncing it locally.
  *
  * As a side-effect, *tli is updated to the TLI of the last
  * replayed WAL record.
  */
-static XLogRecPtr
+XLogRecPtr
 GetStandbyFlushRecPtr(TimeLineID *tli)
 {
 	XLogRecPtr	replayPtr;
@@ -3401,6 +3404,8 @@ GetStandbyFlushRecPtr(TimeLineID *tli)
 	TimeLineID	receiveTLI;
 	XLogRecPtr	result;
 
+	Assert(am_cascading_walsender || IsSyncingReplicationSlots());
+
 	/*
 	 * We can safely send what's already been replayed. Also, if walreceiver
 	 * is streaming WAL from the same timeline, we can send anything that it