
Refactor how VACUUM passes around its XID cutoffs.

Use a dedicated struct for the XID/MXID cutoffs used by VACUUM, such as
FreezeLimit and OldestXmin.  This state is initialized in vacuum.c, and
then passed around by code from vacuumlazy.c to heapam.c's
freezing-related routines.  The new convention is that everybody works off
of the same cutoff state, which is passed around via pointers to const.
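
For reference, the cutoff state in question is the struct VacuumCutoffs
declared in vacuum.h.  The following is a sketch of its shape as best
recalled from the committed header; treat the field comments as
approximate rather than a quote of the file:

    struct VacuumCutoffs
    {
        TransactionId relfrozenxid;     /* target rel's pg_class.relfrozenxid */
        MultiXactId   relminmxid;       /* target rel's pg_class.relminmxid */
        TransactionId OldestXmin;       /* tuples deleted before this are removable */
        MultiXactId   OldestMxact;      /* MultiXactId counterpart of OldestXmin */
        TransactionId FreezeLimit;      /* XIDs older than this must be frozen */
        MultiXactId   MultiXactCutoff;  /* MultiXactId counterpart of FreezeLimit */
    };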

Also simplify some of the logic for dealing with frozen xmin in
heap_prepare_freeze_tuple: add a dedicated "xmin_already_frozen" state to
clearly distinguish xmin XIDs that we're going to freeze from those that
were already frozen.  That way the routine's xmin handling
code is symmetrical with the existing xmax handling code.  This is
preparation for an upcoming commit that will add page level freezing.
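
To make the symmetry concrete, the xmin side now has roughly this shape
(an illustrative sketch against the new cutoff struct, not a quote of the
routine; error and commit-status checks are omitted):

    xid = HeapTupleHeaderGetXmin(tuple);
    if (!TransactionIdIsNormal(xid))
    {
        /* raw xmin is already frozen (or invalid): nothing left to do */
        xmin_already_frozen = true;
    }
    else if (TransactionIdPrecedes(xid, cutoffs->FreezeLimit))
    {
        /* xmin will be frozen now; record that in the freeze plan */
        frz->t_infomask |= HEAP_XMIN_FROZEN;
    }

which mirrors the existing xmax_already_frozen handling on the xmax side.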

Also refactor the control flow within FreezeMultiXactId(), while adding
stricter sanity checks.  We now test OldestXmin directly, instead of
using FreezeLimit as an inexact proxy for OldestXmin.  This is further
preparation for the page level freezing work, which will make the
function's caller cede control of page level freezing to the function
where appropriate (where heap_prepare_freeze_tuple sees a tuple that
happens to contain a MultiXactId in its xmax).
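
The proxy-versus-direct distinction amounts to something like the
following (a hypothetical fragment, shown only to make the before/after
concrete; it is not an excerpt of FreezeMultiXactId):

    /* Before: FreezeLimit stood in for the question actually being asked */
    if (TransactionIdPrecedes(update_xid, FreezeLimit))
        ...     /* member XID assumed to no longer matter */

    /* After: test OldestXmin itself, backed by stricter sanity checks */
    if (TransactionIdPrecedes(update_xid, cutoffs->OldestXmin))
        ...     /* member XID precedes OldestXmin, so it cannot still be running */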

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Jeff Davis <pgsql@j-davis.com>
Discussion: https://postgr.es/m/CAH2-WznS9TxXmz2_=SY+SyJyDFbiOftKofM9=aDo68BbXNBUMA@mail.gmail.com
Peter Geoghegan
2022-12-22 09:37:59 -08:00
parent e42e312430
commit 4ce3afb82e
8 changed files with 438 additions and 465 deletions

src/include/access/heapam.h

@@ -38,6 +38,7 @@
 typedef struct BulkInsertStateData *BulkInsertState;
 struct TupleTableSlot;
+struct VacuumCutoffs;
 
 #define MaxLockTupleMode	LockTupleExclusive
@@ -178,8 +179,7 @@ extern TM_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 extern void heap_inplace_update(Relation relation, HeapTuple tuple);
 extern bool heap_prepare_freeze_tuple(HeapTupleHeader tuple,
-				TransactionId relfrozenxid, TransactionId relminmxid,
-				TransactionId cutoff_xid, TransactionId cutoff_multi,
+				const struct VacuumCutoffs *cutoffs,
 				HeapTupleFreeze *frz, bool *totally_frozen,
 				TransactionId *relfrozenxid_out,
 				MultiXactId *relminmxid_out);
@@ -188,9 +188,9 @@ extern void heap_freeze_execute_prepared(Relation rel, Buffer buffer,
 				HeapTupleFreeze *tuples, int ntuples);
 extern bool heap_freeze_tuple(HeapTupleHeader tuple,
 				TransactionId relfrozenxid, TransactionId relminmxid,
-				TransactionId cutoff_xid, TransactionId cutoff_multi);
-extern bool heap_tuple_would_freeze(HeapTupleHeader tuple, TransactionId cutoff_xid,
-				MultiXactId cutoff_multi,
+				TransactionId FreezeLimit, TransactionId MultiXactCutoff);
+extern bool heap_tuple_would_freeze(HeapTupleHeader tuple,
+				const struct VacuumCutoffs *cutoffs,
 				TransactionId *relfrozenxid_out,
 				MultiXactId *relminmxid_out);
 extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
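
Under the new prototype a vacuumlazy.c call site hands over the shared
cutoff state by pointer instead of unpacking individual XID/MXID values;
roughly like this (local variable names are approximate, not a quote of
the file):

    if (heap_prepare_freeze_tuple(tuple.t_data, &vacrel->cutoffs,
                                  &frozen[tuples_frozen], &totally_frozen,
                                  &NewRelfrozenXid, &NewRelminMxid))
        frozen[tuples_frozen++].offset = offnum;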

src/include/access/tableam.h

@@ -1634,7 +1634,7 @@ table_relation_copy_data(Relation rel, const RelFileLocator *newrlocator)
  * in that index's order; if false and OldIndex is InvalidOid, no sorting is
  * performed
  * - OldIndex - see use_sort
- * - OldestXmin - computed by vacuum_set_xid_limits(), even when
+ * - OldestXmin - computed by vacuum_get_cutoffs(), even when
  * not needed for the relation's AM
  * - *xid_cutoff - ditto
  * - *multi_cutoff - ditto
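
A CLUSTER/VACUUM FULL caller obtains the values documented above from the
renamed routine before handing them to the table AM; a sketch of the
vacuum_get_cutoffs() usage, with assumed local variable names rather than
an exact excerpt of cluster.c:

    struct VacuumCutoffs cutoffs;

    vacuum_get_cutoffs(OldHeap, &params, &cutoffs);

    OldestXmin = cutoffs.OldestXmin;
    FreezeXid = cutoffs.FreezeLimit;            /* becomes *xid_cutoff */
    MultiXactCutoff = cutoffs.MultiXactCutoff;  /* becomes *multi_cutoff */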