Allow read only connections during recovery, known as Hot Standby.
Enabled by recovery_connections = on (default) and forcing archive recovery
using a recovery.conf. Recovery processing now emulates the original
transactions as they are replayed, providing full locking and MVCC behaviour
for read only queries. Recovery must enter consistent state before
connections are allowed, so there is a delay, typically short, before
connections succeed. Replay of recovering transactions can conflict and in
some cases deadlock with queries during recovery; these result in query
cancellation after max_standby_delay seconds have expired.

Infrastructure changes have minor effects on normal running, though they
introduce four new types of WAL record.

New test mode "make standbycheck" allows regression tests of static command
behaviour on a standby server while in recovery. Typical and extreme dynamic
behaviours have been checked via code inspection and manual testing. Few
port specific behaviours have been utilised, though primary testing has been
on Linux only so far.

This commit is the basic patch. Additional changes will follow in this
release to enhance some aspects of behaviour, notably improved handling of
conflicts, deadlock detection and query cancellation. Changes to VACUUM FULL
are also required.

Simon Riggs, with significant and lengthy review by Heikki Linnakangas,
including streamlined redesign of snapshot creation and two-phase commit.
Important contributions from Florian Pflug, Mark Kirkwood, Merlin Moncure,
Greg Stark, Gianni Ciolli, Gabriele Bartolini, Hannu Krosing, Robert Haas,
Tatsuo Ishii, Hiroyuki Yamada plus support and feedback from many other
community members.
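A minimal sketch of the standby setup this message describes.
recovery_connections and max_standby_delay are the settings named above;
restore_command is the usual recovery.conf parameter, and the archive path
is purely illustrative:

    # postgresql.conf on the standby
    recovery_connections = on    # default; allow read only connections in recovery
    max_standby_delay = 30s      # cancel conflicting queries after this long

    # recovery.conf in the standby's data directory (forces archive recovery)
    restore_command = 'cp /path/to/archive/%f "%p"'    # illustrative archive location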
--- a/src/backend/access/nbtree/README
+++ b/src/backend/access/nbtree/README
@@ -1,4 +1,4 @@
-$PostgreSQL: pgsql/src/backend/access/nbtree/README,v 1.20 2008/03/21 13:23:27 momjian Exp $
+$PostgreSQL: pgsql/src/backend/access/nbtree/README,v 1.21 2009/12/19 01:32:32 sriggs Exp $
 
 Btree Indexing
 ==============
@@ -401,6 +401,33 @@ of the WAL entry.) If the parent page becomes half-dead but is not
 immediately deleted due to a subsequent crash, there is no loss of
 consistency, and the empty page will be picked up by the next VACUUM.
 
+Scans during Recovery
+---------------------
+
+The btree index type can be safely used during recovery. During recovery
+we have at most one writer and potentially many readers. In that
+situation the locking requirements can be relaxed and we do not need
+double locking during block splits. Each WAL record makes changes to a
+single level of the btree using the correct locking sequence and so
+is safe for concurrent readers. Some readers may observe a block split
+in progress as they descend the tree, but they will simply move right
+onto the correct page.
+
+During recovery all index scans start with ignore_killed_tuples = false
+and we never set kill_prior_tuple. We do this because the oldest xmin
+on the standby server can be older than the oldest xmin on the master
+server, which means tuples can be marked as killed even when they are
+still visible on the standby. We don't WAL log tuple killed bits, but
+they can still appear in the standby because of full page writes. So
+we must always ignore them in standby, and that means it's not worth
+setting them either.
+
+Note that we talk about scans that are started during recovery. We go to
+a little trouble to allow a scan to start during recovery and end during
+normal running after recovery has completed. This is a key capability
+because it allows running applications to continue while the standby
+changes state into a normally running server.
+
 Other Things That Are Handy to Know
 -----------------------------------
 
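As a sketch of how the scan-setup rule described in the new README section
could be wired up. The field names follow IndexScanDesc usage, but
xactStartedInRecovery is illustrative here; the real logic lives in the
generic index scan setup code, which is not part of this diff:

    /*
     * A scan that *starts* during recovery must neither trust nor set
     * LP_DEAD killed bits, because the standby's xmin horizon can be
     * older than the master's.  Capture the decision at scan start so a
     * scan that spans the end of recovery keeps behaving consistently.
     */
    scan->xactStartedInRecovery = RecoveryInProgress();    /* illustrative field */
    scan->ignore_killed_tuples = !scan->xactStartedInRecovery;
    scan->kill_prior_tuple = false;    /* never request killed-bit setting */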
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *    $PostgreSQL: pgsql/src/backend/access/nbtree/nbtinsert.c,v 1.174 2009/10/02 21:14:04 tgl Exp $
+ *    $PostgreSQL: pgsql/src/backend/access/nbtree/nbtinsert.c,v 1.175 2009/12/19 01:32:32 sriggs Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -2025,7 +2025,7 @@ _bt_vacuum_one_page(Relation rel, Buffer buffer)
     }
 
     if (ndeletable > 0)
-        _bt_delitems(rel, buffer, deletable, ndeletable);
+        _bt_delitems(rel, buffer, deletable, ndeletable, false, 0);
 
     /*
      * Note: if we didn't find any LP_DEAD items, then the page's
--- a/src/backend/access/nbtree/nbtpage.c
+++ b/src/backend/access/nbtree/nbtpage.c
@@ -9,7 +9,7 @@
  *
  *
  * IDENTIFICATION
- *    $PostgreSQL: pgsql/src/backend/access/nbtree/nbtpage.c,v 1.113 2009/05/05 19:02:22 tgl Exp $
+ *    $PostgreSQL: pgsql/src/backend/access/nbtree/nbtpage.c,v 1.114 2009/12/19 01:32:33 sriggs Exp $
  *
  * NOTES
  *    Postgres btree pages look like ordinary relation pages. The opaque
@@ -653,19 +653,33 @@ _bt_page_recyclable(Page page)
  *
  * This routine assumes that the caller has pinned and locked the buffer.
  * Also, the given itemnos *must* appear in increasing order in the array.
+ *
+ * We record VACUUMs and b-tree deletes differently in WAL. InHotStandby
+ * we need to be able to pin all of the blocks in the btree in physical
+ * order when replaying the effects of a VACUUM, just as we do for the
+ * original VACUUM itself. lastBlockVacuumed allows us to tell whether an
+ * intermediate range of blocks has had no changes at all by VACUUM,
+ * and so must be scanned anyway during replay. We always write a WAL record
+ * for the last block in the index, whether or not it contained any items
+ * to be removed. This allows us to scan right up to end of index to
+ * ensure correct locking.
  */
 void
 _bt_delitems(Relation rel, Buffer buf,
-             OffsetNumber *itemnos, int nitems)
+             OffsetNumber *itemnos, int nitems, bool isVacuum,
+             BlockNumber lastBlockVacuumed)
 {
     Page        page = BufferGetPage(buf);
     BTPageOpaque opaque;
 
+    Assert(isVacuum || lastBlockVacuumed == 0);
+
     /* No ereport(ERROR) until changes are logged */
     START_CRIT_SECTION();
 
     /* Fix the page */
-    PageIndexMultiDelete(page, itemnos, nitems);
+    if (nitems > 0)
+        PageIndexMultiDelete(page, itemnos, nitems);
 
     /*
      * We can clear the vacuum cycle ID since this page has certainly been
@@ -688,15 +702,36 @@ _bt_delitems(Relation rel, Buffer buf,
     /* XLOG stuff */
     if (!rel->rd_istemp)
     {
-        xl_btree_delete xlrec;
         XLogRecPtr  recptr;
         XLogRecData rdata[2];
 
-        xlrec.node = rel->rd_node;
-        xlrec.block = BufferGetBlockNumber(buf);
+        if (isVacuum)
+        {
+            xl_btree_vacuum xlrec_vacuum;
+            xlrec_vacuum.node = rel->rd_node;
+            xlrec_vacuum.block = BufferGetBlockNumber(buf);
+
+            xlrec_vacuum.lastBlockVacuumed = lastBlockVacuumed;
+            rdata[0].data = (char *) &xlrec_vacuum;
+            rdata[0].len = SizeOfBtreeVacuum;
+        }
+        else
+        {
+            xl_btree_delete xlrec_delete;
+            xlrec_delete.node = rel->rd_node;
+            xlrec_delete.block = BufferGetBlockNumber(buf);
+
+            /*
+             * XXX: We would like to set an accurate latestRemovedXid, but
+             * there is no easy way of obtaining a useful value. So we punt
+             * and store InvalidTransactionId, which forces the standby to
+             * wait for/cancel all currently running transactions.
+             */
+            xlrec_delete.latestRemovedXid = InvalidTransactionId;
+            rdata[0].data = (char *) &xlrec_delete;
+            rdata[0].len = SizeOfBtreeDelete;
+        }
 
-        rdata[0].data = (char *) &xlrec;
-        rdata[0].len = SizeOfBtreeDelete;
         rdata[0].buffer = InvalidBuffer;
         rdata[0].next = &(rdata[1]);
 
@@ -719,7 +754,10 @@ _bt_delitems(Relation rel, Buffer buf,
         rdata[1].buffer_std = true;
         rdata[1].next = NULL;
 
-        recptr = XLogInsert(RM_BTREE_ID, XLOG_BTREE_DELETE, rdata);
+        if (isVacuum)
+            recptr = XLogInsert(RM_BTREE_ID, XLOG_BTREE_VACUUM, rdata);
+        else
+            recptr = XLogInsert(RM_BTREE_ID, XLOG_BTREE_DELETE, rdata);
 
         PageSetLSN(page, recptr);
         PageSetTLI(page, ThisTimeLineID);
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -12,7 +12,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *    $PostgreSQL: pgsql/src/backend/access/nbtree/nbtree.c,v 1.172 2009/07/29 20:56:18 tgl Exp $
+ *    $PostgreSQL: pgsql/src/backend/access/nbtree/nbtree.c,v 1.173 2009/12/19 01:32:33 sriggs Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -57,7 +57,8 @@ typedef struct
     IndexBulkDeleteCallback callback;
     void       *callback_state;
     BTCycleId   cycleid;
-    BlockNumber lastUsedPage;
+    BlockNumber lastBlockVacuumed;  /* last blkno reached by Vacuum scan */
+    BlockNumber lastUsedPage;       /* blkno of last non-recyclable page */
     BlockNumber totFreePages;       /* true total # of free pages */
     MemoryContext pagedelcontext;
 } BTVacState;
@@ -629,6 +630,7 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
     vstate.callback = callback;
     vstate.callback_state = callback_state;
     vstate.cycleid = cycleid;
+    vstate.lastBlockVacuumed = BTREE_METAPAGE;  /* Initialise at first block */
     vstate.lastUsedPage = BTREE_METAPAGE;
     vstate.totFreePages = 0;
 
@@ -705,6 +707,32 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
         num_pages = new_pages;
     }
 
+    /*
+     * InHotStandby we need to scan right up to the end of the index for
+     * correct locking, so we may need to write a WAL record for the final
+     * block in the index if it was not vacuumed. It's possible that VACUUMing
+     * has actually removed zeroed pages at the end of the index so we need to
+     * take care to issue the record for last actual block and not for the
+     * last block that was scanned. Ignore empty indexes.
+     */
+    if (XLogStandbyInfoActive() &&
+        num_pages > 1 && vstate.lastBlockVacuumed < (num_pages - 1))
+    {
+        Buffer      buf;
+
+        /*
+         * We can't use _bt_getbuf() here because it always applies
+         * _bt_checkpage(), which will barf on an all-zero page. We want to
+         * recycle all-zero pages, not fail. Also, we want to use a nondefault
+         * buffer access strategy.
+         */
+        buf = ReadBufferExtended(rel, MAIN_FORKNUM, num_pages - 1, RBM_NORMAL,
+                                 info->strategy);
+        LockBufferForCleanup(buf);
+        _bt_delitems(rel, buf, NULL, 0, true, vstate.lastBlockVacuumed);
+        _bt_relbuf(rel, buf);
+    }
+
     MemoryContextDelete(vstate.pagedelcontext);
 
     /* update statistics */
@@ -847,6 +875,26 @@ restart:
             itup = (IndexTuple) PageGetItem(page,
                                             PageGetItemId(page, offnum));
             htup = &(itup->t_tid);
+
+            /*
+             * During Hot Standby we currently assume that XLOG_BTREE_VACUUM
+             * records do not produce conflicts. That is only true as long
+             * as the callback function depends only upon whether the index
+             * tuple refers to heap tuples removed in the initial heap scan.
+             * When vacuum starts it derives a value of OldestXmin. Backends
+             * taking later snapshots could have a RecentGlobalXmin with a
+             * later xid than the vacuum's OldestXmin, so it is possible that
+             * row versions deleted after OldestXmin could be marked as killed
+             * by other backends. The callback function *could* look at the
+             * index tuple state in isolation and decide to delete the index
+             * tuple, though currently it does not. If it ever did, we would
+             * need to reconsider whether XLOG_BTREE_VACUUM records should
+             * cause conflicts. If they did cause conflicts they would be
+             * fairly harsh conflicts, since we haven't yet worked out a way
+             * to pass a useful value for latestRemovedXid on the
+             * XLOG_BTREE_VACUUM records. This applies to *any* type of index
+             * that marks index tuples as killed.
+             */
             if (callback(htup, callback_state))
                 deletable[ndeletable++] = offnum;
         }
@@ -858,7 +906,19 @@ restart:
          */
         if (ndeletable > 0)
         {
-            _bt_delitems(rel, buf, deletable, ndeletable);
+            BlockNumber lastBlockVacuumed = BufferGetBlockNumber(buf);
+
+            _bt_delitems(rel, buf, deletable, ndeletable, true, vstate->lastBlockVacuumed);
+
+            /*
+             * Keep track of the block number of the lastBlockVacuumed, so
+             * we can scan those blocks as well during WAL replay. This then
+             * provides concurrency protection and allows btrees to be used
+             * while in recovery.
+             */
+            if (lastBlockVacuumed > vstate->lastBlockVacuumed)
+                vstate->lastBlockVacuumed = lastBlockVacuumed;
+
             stats->tuples_removed += ndeletable;
             /* must recompute maxoff */
             maxoff = PageGetMaxOffsetNumber(page);
--- a/src/backend/access/nbtree/nbtxlog.c
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -8,7 +8,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *    $PostgreSQL: pgsql/src/backend/access/nbtree/nbtxlog.c,v 1.55 2009/06/11 14:48:54 momjian Exp $
+ *    $PostgreSQL: pgsql/src/backend/access/nbtree/nbtxlog.c,v 1.56 2009/12/19 01:32:33 sriggs Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -16,7 +16,11 @@
 
 #include "access/nbtree.h"
 #include "access/transam.h"
+#include "access/xact.h"
 #include "storage/bufmgr.h"
+#include "storage/procarray.h"
+#include "storage/standby.h"
+#include "miscadmin.h"
 
 /*
  * We must keep track of expected insertions due to page splits, and apply
@@ -458,6 +462,97 @@ btree_xlog_split(bool onleft, bool isroot,
                  xlrec->leftsib, xlrec->rightsib, isroot);
 }
 
+static void
+btree_xlog_vacuum(XLogRecPtr lsn, XLogRecord *record)
+{
+    xl_btree_vacuum *xlrec;
+    Buffer      buffer;
+    Page        page;
+    BTPageOpaque opaque;
+
+    xlrec = (xl_btree_vacuum *) XLogRecGetData(record);
+
+    /*
+     * If queries might be active then we need to ensure every block is unpinned
+     * between the lastBlockVacuumed and the current block, if there are any.
+     * This ensures that every block in the index is touched during VACUUM as
+     * required to ensure scans work correctly.
+     */
+    if (standbyState == STANDBY_SNAPSHOT_READY &&
+        (xlrec->lastBlockVacuumed + 1) != xlrec->block)
+    {
+        BlockNumber blkno = xlrec->lastBlockVacuumed + 1;
+
+        for (; blkno < xlrec->block; blkno++)
+        {
+            /*
+             * XXX we don't actually need to read the block, we
+             * just need to confirm it is unpinned. If we had a special call
+             * into the buffer manager we could optimise this so that
+             * if the block is not in shared_buffers we confirm it as unpinned.
+             *
+             * Another simple optimization would be to check if there's any
+             * backends running; if not, we could just skip this.
+             */
+            buffer = XLogReadBufferExtended(xlrec->node, MAIN_FORKNUM, blkno, RBM_NORMAL);
+            if (BufferIsValid(buffer))
+            {
+                LockBufferForCleanup(buffer);
+                UnlockReleaseBuffer(buffer);
+            }
+        }
+    }
+
+    /*
+     * If the block was restored from a full page image, nothing more to do.
+     * The RestoreBkpBlocks() call already pinned and took cleanup lock on
+     * it. XXX: Perhaps we should call RestoreBkpBlocks() *after* the loop
+     * above, to make the disk access more sequential.
+     */
+    if (record->xl_info & XLR_BKP_BLOCK_1)
+        return;
+
+    /*
+     * Like in btvacuumpage(), we need to take a cleanup lock on every leaf
+     * page. See nbtree/README for details.
+     */
+    buffer = XLogReadBufferExtended(xlrec->node, MAIN_FORKNUM, xlrec->block, RBM_NORMAL);
+    if (!BufferIsValid(buffer))
+        return;
+    LockBufferForCleanup(buffer);
+    page = (Page) BufferGetPage(buffer);
+
+    if (XLByteLE(lsn, PageGetLSN(page)))
+    {
+        UnlockReleaseBuffer(buffer);
+        return;
+    }
+
+    if (record->xl_len > SizeOfBtreeVacuum)
+    {
+        OffsetNumber *unused;
+        OffsetNumber *unend;
+
+        unused = (OffsetNumber *) ((char *) xlrec + SizeOfBtreeVacuum);
+        unend = (OffsetNumber *) ((char *) xlrec + record->xl_len);
+
+        if ((unend - unused) > 0)
+            PageIndexMultiDelete(page, unused, unend - unused);
+    }
+
+    /*
+     * Mark the page as not containing any LP_DEAD items --- see comments in
+     * _bt_delitems().
+     */
+    opaque = (BTPageOpaque) PageGetSpecialPointer(page);
+    opaque->btpo_flags &= ~BTP_HAS_GARBAGE;
+
+    PageSetLSN(page, lsn);
+    PageSetTLI(page, ThisTimeLineID);
+    MarkBufferDirty(buffer);
+    UnlockReleaseBuffer(buffer);
+}
+
 static void
 btree_xlog_delete(XLogRecPtr lsn, XLogRecord *record)
 {
@@ -470,6 +565,11 @@ btree_xlog_delete(XLogRecPtr lsn, XLogRecord *record)
         return;
 
     xlrec = (xl_btree_delete *) XLogRecGetData(record);
 
+    /*
+     * We don't need to take a cleanup lock to apply these changes.
+     * See nbtree/README for details.
+     */
     buffer = XLogReadBuffer(xlrec->node, xlrec->block, false);
     if (!BufferIsValid(buffer))
         return;
@@ -714,7 +814,43 @@ btree_redo(XLogRecPtr lsn, XLogRecord *record)
 {
     uint8       info = record->xl_info & ~XLR_INFO_MASK;
 
-    RestoreBkpBlocks(lsn, record, false);
+    /*
+     * Btree delete records can conflict with standby queries. You might
+     * think that vacuum records would conflict as well, but we've handled
+     * that already. XLOG_HEAP2_CLEANUP_INFO records provide the highest xid
+     * cleaned by the vacuum of the heap and so we can resolve any conflicts
+     * just once when that arrives. After that we know that no conflicts
+     * exist from individual btree vacuum records on that index.
+     */
+    if (InHotStandby)
+    {
+        if (info == XLOG_BTREE_DELETE)
+        {
+            xl_btree_delete *xlrec = (xl_btree_delete *) XLogRecGetData(record);
+            VirtualTransactionId *backends;
+
+            /*
+             * XXX Currently we put everybody on death row, because
+             * currently _bt_delitems() supplies InvalidTransactionId.
+             * This can be fairly painful, so providing a better value
+             * here is worth some thought and possibly some effort to
+             * improve.
+             */
+            backends = GetConflictingVirtualXIDs(xlrec->latestRemovedXid,
+                                                 InvalidOid,
+                                                 true);
+
+            ResolveRecoveryConflictWithVirtualXIDs(backends,
+                                                   "b-tree delete",
+                                                   CONFLICT_MODE_ERROR);
+        }
+    }
+
+    /*
+     * Vacuum needs to pin and take cleanup lock on every leaf page,
+     * a regular exclusive lock is enough for all other purposes.
+     */
+    RestoreBkpBlocks(lsn, record, (info == XLOG_BTREE_VACUUM));
 
     switch (info)
     {
@@ -739,6 +875,9 @@ btree_redo(XLogRecPtr lsn, XLogRecord *record)
         case XLOG_BTREE_SPLIT_R_ROOT:
             btree_xlog_split(false, true, lsn, record);
             break;
+        case XLOG_BTREE_VACUUM:
+            btree_xlog_vacuum(lsn, record);
+            break;
         case XLOG_BTREE_DELETE:
             btree_xlog_delete(lsn, record);
             break;
@@ -843,13 +982,24 @@ btree_desc(StringInfo buf, uint8 xl_info, char *rec)
                              xlrec->level, xlrec->firstright);
                 break;
             }
+        case XLOG_BTREE_VACUUM:
+            {
+                xl_btree_vacuum *xlrec = (xl_btree_vacuum *) rec;
+
+                appendStringInfo(buf, "vacuum: rel %u/%u/%u; blk %u, lastBlockVacuumed %u",
+                                 xlrec->node.spcNode, xlrec->node.dbNode,
+                                 xlrec->node.relNode, xlrec->block,
+                                 xlrec->lastBlockVacuumed);
+                break;
+            }
         case XLOG_BTREE_DELETE:
             {
                 xl_btree_delete *xlrec = (xl_btree_delete *) rec;
 
-                appendStringInfo(buf, "delete: rel %u/%u/%u; blk %u",
+                appendStringInfo(buf, "delete: rel %u/%u/%u; blk %u, latestRemovedXid %u",
                                  xlrec->node.spcNode, xlrec->node.dbNode,
-                                 xlrec->node.relNode, xlrec->block);
+                                 xlrec->node.relNode, xlrec->block,
+                                 xlrec->latestRemovedXid);
                 break;
             }
         case XLOG_BTREE_DELETE_PAGE: