mirror of https://github.com/postgres/postgres.git synced 2025-11-21 00:42:43 +03:00

Combine freezing and pruning steps in VACUUM

Execute both freezing and pruning of tuples in the same
heap_page_prune() function, now called heap_page_prune_and_freeze(),
and emit a single WAL record containing all changes. That reduces the
overall amount of WAL generated.

This moves the freezing logic from vacuumlazy.c to the
heap_page_prune_and_freeze() function. The main difference in the
coding is that in vacuumlazy.c, we looked at the tuples after the
pruning had already happened, but in heap_page_prune_and_freeze() we
operate on the tuples before pruning. The heap_prepare_freeze_tuple()
function is now invoked after we have determined that a tuple is not
going to be pruned away.

VACUUM no longer needs to loop through the items on the page after
pruning. heap_page_prune_and_freeze() does all the work. It now
returns the list of dead offsets, including existing LP_DEAD items, to
the caller. Similarly it's now responsible for tracking 'all_visible',
'all_frozen', and 'hastup' on the caller's behalf.

Author: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://www.postgresql.org/message-id/20240330055710.kqg6ii2cdojsxgje@liskov
Committed by Heikki Linnakangas, 2024-04-03 19:32:28 +03:00
parent 26d138f644
commit 6dbb490261
7 changed files with 813 additions and 545 deletions


@@ -6447,9 +6447,9 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask,
  * XIDs or MultiXactIds that will need to be processed by a future VACUUM.
  *
  * VACUUM caller must assemble HeapTupleFreeze freeze plan entries for every
- * tuple that we returned true for, and call heap_freeze_execute_prepared to
- * execute freezing.  Caller must initialize pagefrz fields for page as a
- * whole before first call here for each heap page.
+ * tuple that we returned true for, and then execute freezing.  Caller must
+ * initialize pagefrz fields for page as a whole before first call here for
+ * each heap page.
  *
  * VACUUM caller decides on whether or not to freeze the page as a whole.
  * We'll often prepare freeze plans for a page that caller just discards.
@@ -6765,35 +6765,19 @@ heap_execute_freeze_tuple(HeapTupleHeader tuple, HeapTupleFreeze *frz)
 }
 
 /*
- * heap_freeze_execute_prepared
+ * Perform xmin/xmax XID status sanity checks before actually executing freeze
+ * plans.
  *
- * Executes freezing of one or more heap tuples on a page on behalf of caller.
- * Caller passes an array of tuple plans from heap_prepare_freeze_tuple.
- * Caller must set 'offset' in each plan for us.  Note that we destructively
- * sort caller's tuples array in-place, so caller had better be done with it.
- *
- * WAL-logs the changes so that VACUUM can advance the rel's relfrozenxid
- * later on without any risk of unsafe pg_xact lookups, even following a hard
- * crash (or when querying from a standby).  We represent freezing by setting
- * infomask bits in tuple headers, but this shouldn't be thought of as a hint.
- * See section on buffer access rules in src/backend/storage/buffer/README.
+ * heap_prepare_freeze_tuple doesn't perform these checks directly because
+ * pg_xact lookups are relatively expensive.  They shouldn't be repeated by
+ * successive VACUUMs that each decide against freezing the same page.
 */
 void
-heap_freeze_execute_prepared(Relation rel, Buffer buffer,
-							 TransactionId snapshotConflictHorizon,
-							 HeapTupleFreeze *tuples, int ntuples)
+heap_pre_freeze_checks(Buffer buffer,
+					   HeapTupleFreeze *tuples, int ntuples)
 {
 	Page		page = BufferGetPage(buffer);
 
-	Assert(ntuples > 0);
-
-	/*
-	 * Perform xmin/xmax XID status sanity checks before critical section.
-	 *
-	 * heap_prepare_freeze_tuple doesn't perform these checks directly because
-	 * pg_xact lookups are relatively expensive.  They shouldn't be repeated
-	 * by successive VACUUMs that each decide against freezing the same page.
-	 */
 	for (int i = 0; i < ntuples; i++)
 	{
 		HeapTupleFreeze *frz = tuples + i;
@@ -6832,8 +6816,19 @@ heap_freeze_execute_prepared(Relation rel, Buffer buffer,
 							 xmax)));
 		}
 	}
+}
 
-	START_CRIT_SECTION();
+/*
+ * Helper which executes freezing of one or more heap tuples on a page on
+ * behalf of caller.  Caller passes an array of tuple plans from
+ * heap_prepare_freeze_tuple.  Caller must set 'offset' in each plan for us.
+ * Must be called in a critical section that also marks the buffer dirty and,
+ * if needed, emits WAL.
+ */
+void
+heap_freeze_prepared_tuples(Buffer buffer, HeapTupleFreeze *tuples, int ntuples)
+{
+	Page		page = BufferGetPage(buffer);
 
 	for (int i = 0; i < ntuples; i++)
 	{
@@ -6844,22 +6839,6 @@ heap_freeze_execute_prepared(Relation rel, Buffer buffer,
 		htup = (HeapTupleHeader) PageGetItem(page, itemid);
 		heap_execute_freeze_tuple(htup, frz);
 	}
-
-	MarkBufferDirty(buffer);
-
-	/* Now WAL-log freezing if necessary */
-	if (RelationNeedsWAL(rel))
-	{
-		log_heap_prune_and_freeze(rel, buffer, snapshotConflictHorizon,
-								  false,	/* no cleanup lock required */
-								  PRUNE_VACUUM_SCAN,
-								  tuples, ntuples,
-								  NULL, 0,	/* redirected */
-								  NULL, 0,	/* dead */
-								  NULL, 0); /* unused */
-	}
-
-	END_CRIT_SECTION();
 }
 
 /*