
Make large sequential scans and VACUUMs work in a limited-size "ring" of
buffers, rather than blowing out the whole shared-buffer arena.  Aside from
avoiding cache spoliation, this fixes the problem that VACUUM formerly tended
to cause a WAL flush for every page it modified, because we had it hacked to
use only a single buffer.  Those flushes will now occur only once per
ring-ful.  The exact ring size, and the threshold for seqscans to switch into
the ring usage pattern, remain under debate; but the infrastructure seems
done.  The key bit of infrastructure is a new optional BufferAccessStrategy
object that can be passed to ReadBuffer operations; this replaces the former
StrategyHintVacuum API.
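
[Not part of the commit message: a minimal sketch of the usage pattern this
enables, using the GetAccessStrategy/FreeAccessStrategy/ReadBufferWithStrategy
entry points added to bufmgr by this commit; "rel" and "nblocks" are assumed
to be supplied by the caller.]

    /* Allocate a small ring of buffers dedicated to this bulk operation. */
    BufferAccessStrategy strategy = GetAccessStrategy(BAS_VACUUM);
    BlockNumber blkno;

    for (blkno = 0; blkno < nblocks; blkno++)
    {
        /* Reads recycle buffers within the ring instead of evicting
         * arbitrary pages from the shared-buffer arena. */
        Buffer      buf = ReadBufferWithStrategy(rel, blkno, strategy);

        LockBuffer(buf, BUFFER_LOCK_SHARE);
        /* ... examine the page ... */
        LockBuffer(buf, BUFFER_LOCK_UNLOCK);
        ReleaseBuffer(buf);
    }

    FreeAccessStrategy(strategy);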

This patch also changes the buffer usage-count methodology a bit: we now
advance usage_count when first pinning a buffer, rather than when last
unpinning it.  To preserve the behavior that a buffer's lifetime starts to
decrease when it's released, the clock sweep code is modified to not decrement
usage_count of pinned buffers.
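
[Illustrative only, not the actual freelist.c code: a simplified view of the
adjusted clock-sweep rule, where NextVictimBuffer() is a hypothetical stand-in
for advancing the clock hand, and buffer-header locking is omitted.]

    for (;;)
    {
        volatile BufferDesc *buf = NextVictimBuffer();  /* advance clock hand */

        if (buf->refcount == 0)
        {
            if (buf->usage_count > 0)
                buf->usage_count--;     /* unpinned: age it toward eviction */
            else
                return buf;             /* unpinned and cold: the victim */
        }
        /* Pinned buffers are skipped without touching usage_count, so a
         * buffer's lifetime only starts to decrease once it is released. */
    }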

Work not done in this commit: teach GiST and GIN indexes to use the vacuum
BufferAccessStrategy for vacuum-driven fetches.

Original patch by Simon, reworked by Heikki and again by Tom.
Author: Tom Lane
Date:   2007-05-30 20:12:03 +00:00
parent 0a6f2ee84d
commit d526575f89
24 changed files with 722 additions and 262 deletions

src/backend/commands/analyze.c

@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/commands/analyze.c,v 1.107 2007/04/30 03:23:48 tgl Exp $
+ *	  $PostgreSQL: pgsql/src/backend/commands/analyze.c,v 1.108 2007/05/30 20:11:56 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -63,10 +63,13 @@ typedef struct AnlIndexData
 /* Default statistics target (GUC parameter) */
 int			default_statistics_target = 10;
 
+/* A few variables that don't seem worth passing around as parameters */
 static int	elevel = -1;
 
 static MemoryContext anl_context = NULL;
 
+static BufferAccessStrategy vac_strategy;
+
 
 static void BlockSampler_Init(BlockSampler bs, BlockNumber nblocks,
 			  int samplesize);
@@ -94,7 +97,8 @@ static bool std_typanalyze(VacAttrStats *stats);
  * analyze_rel() -- analyze one relation
  */
 void
-analyze_rel(Oid relid, VacuumStmt *vacstmt)
+analyze_rel(Oid relid, VacuumStmt *vacstmt,
+			BufferAccessStrategy bstrategy)
 {
 	Relation	onerel;
 	int			attr_cnt,
@@ -120,6 +124,8 @@ analyze_rel(Oid relid, VacuumStmt *vacstmt)
 	else
 		elevel = DEBUG2;
 
+	vac_strategy = bstrategy;
+
 	/*
 	 * Use the current context for storing analysis info.  vacuum.c ensures
 	 * that this context will be cleared when I return, thus releasing the
@@ -845,7 +851,7 @@ acquire_sample_rows(Relation onerel, HeapTuple *rows, int targrows,
 		 * looking at it.  We don't maintain a lock on the page, so tuples
 		 * could get added to it, but we ignore such tuples.
 		 */
-		targbuffer = ReadBuffer(onerel, targblock);
+		targbuffer = ReadBufferWithStrategy(onerel, targblock, vac_strategy);
 		LockBuffer(targbuffer, BUFFER_LOCK_SHARE);
 		targpage = BufferGetPage(targbuffer);
 		maxoffset = PageGetMaxOffsetNumber(targpage);
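
[For context, a hypothetical sketch of how the new parameter is expected to
flow at the call site; the vacuum.c side of this commit is not excerpted
here.  The caller builds one strategy, analyze_rel() stashes it in the static
vac_strategy, and acquire_sample_rows() then reads every sample block through
the same ring.]

    /* Sketch only; the real wiring lives elsewhere in this commit. */
    BufferAccessStrategy bstrategy = GetAccessStrategy(BAS_VACUUM);

    analyze_rel(relid, vacstmt, bstrategy);     /* saved into vac_strategy */

    FreeAccessStrategy(bstrategy);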