mirror of https://github.com/postgres/postgres.git
Make large sequential scans and VACUUMs work in a limited-size "ring" of buffers, rather than blowing out the whole shared-buffer arena.  Aside from avoiding cache spoliation, this fixes the problem that VACUUM formerly tended to cause a WAL flush for every page it modified, because we had it hacked to use only a single buffer.  Those flushes will now occur only once per ring-ful.  The exact ring size, and the threshold for seqscans to switch into the ring usage pattern, remain under debate; but the infrastructure seems done.  The key bit of infrastructure is a new optional BufferAccessStrategy object that can be passed to ReadBuffer operations; this replaces the former StrategyHintVacuum API.

This patch also changes the buffer usage-count methodology a bit: we now advance usage_count when first pinning a buffer, rather than when last unpinning it.  To preserve the behavior that a buffer's lifetime starts to decrease when it's released, the clock sweep code is modified to not decrement usage_count of pinned buffers.

Work not done in this commit: teach GiST and GIN indexes to use the vacuum BufferAccessStrategy for vacuum-driven fetches.

Original patch by Simon, reworked by Heikki and again by Tom.
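To make the ring idea concrete, here is a minimal, self-contained C sketch of a strategy ring: a scan that would otherwise churn through the whole buffer pool instead recycles a small fixed set of buffers. The names RingStrategy, ring_next_buffer, and RING_SIZE are invented for this sketch and are not the BufferAccessStrategy API the commit actually adds.

/* ring_sketch.c -- illustrative toy, not PostgreSQL's implementation. */
#include <stdio.h>
#include <stdlib.h>

#define RING_SIZE 8				/* hypothetical; the real size was still under debate */

typedef struct RingStrategy
{
	int		buffers[RING_SIZE];	/* buffer IDs owned by this ring; -1 = unclaimed */
	int		current;			/* next slot to reuse */
} RingStrategy;

static RingStrategy *
ring_create(void)
{
	RingStrategy *s = malloc(sizeof(RingStrategy));

	for (int i = 0; i < RING_SIZE; i++)
		s->buffers[i] = -1;
	s->current = 0;
	return s;
}

/*
 * Return the buffer the scan should (re)use next.  On the first lap the
 * ring claims fresh buffers (simulated here by a counter); after that the
 * scan keeps cycling through the same RING_SIZE buffers instead of
 * evicting pages all over the shared-buffer arena.
 */
static int
ring_next_buffer(RingStrategy *s, int *next_free)
{
	int		slot = s->current;

	s->current = (s->current + 1) % RING_SIZE;
	if (s->buffers[slot] == -1)
		s->buffers[slot] = (*next_free)++;
	return s->buffers[slot];
}

int
main(void)
{
	RingStrategy *s = ring_create();
	int		next_free = 100;	/* pretend shared buffer IDs start at 100 */

	/* A 20-page "sequential scan" touches only RING_SIZE distinct buffers. */
	for (int page = 0; page < 20; page++)
		printf("page %2d -> buffer %d\n", page, ring_next_buffer(s, &next_free));
	free(s);
	return 0;
}

This also shows why VACUUM's WAL flushes now happen once per ring-ful rather than once per page: a dirty buffer is only written out when the ring wraps back around to it.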
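The usage-count change can be modeled just as compactly. In this toy clock sweep (BufDesc and its fields are simplifications, not the real buffer descriptor), usage_count advances at first pin rather than last unpin, and the sweep skips pinned buffers without decrementing them, so a buffer's lifetime only starts to tick down once it is released.

/* clocksweep_sketch.c -- toy model of the revised usage_count rules. */
#include <stdio.h>

#define NBUFFERS	4
#define MAX_USAGE	5

typedef struct BufDesc
{
	int		refcount;			/* pins currently held */
	int		usage_count;		/* popularity; advanced on first pin */
} BufDesc;

static BufDesc pool[NBUFFERS];	/* zero-initialized: all free, count 0 */
static int	sweep_hand = 0;

static void
pin(int id)
{
	/* Advance usage_count when pinning, per this commit. */
	if (pool[id].usage_count < MAX_USAGE)
		pool[id].usage_count++;
	pool[id].refcount++;
}

static void
unpin(int id)
{
	pool[id].refcount--;		/* no usage_count change here any more */
}

/*
 * Find a victim with usage_count == 0, decrementing counts as we pass.
 * Pinned buffers are neither decremented nor evicted, preserving the old
 * behavior that a buffer's count only decays after it is released.
 * (Assumes at least one buffer is unpinned, or this loops forever.)
 */
static int
clock_sweep(void)
{
	for (;;)
	{
		BufDesc    *buf = &pool[sweep_hand];
		int			victim = sweep_hand;

		sweep_hand = (sweep_hand + 1) % NBUFFERS;
		if (buf->refcount > 0)
			continue;			/* pinned: skip untouched */
		if (buf->usage_count == 0)
			return victim;
		buf->usage_count--;
	}
}

int
main(void)
{
	pin(0);
	unpin(0);					/* buffer 0: used once, now free */
	pin(1);						/* buffer 1: still pinned */
	printf("victim = %d\n", clock_sweep());		/* picks 2: unpinned, count 0 */
	return 0;
}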
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -7,7 +7,7 @@
  * Portions Copyright (c) 1996-2007, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
- * $PostgreSQL: pgsql/src/backend/access/transam/xlog.c,v 1.269 2007/05/20 21:08:19 tgl Exp $
+ * $PostgreSQL: pgsql/src/backend/access/transam/xlog.c,v 1.270 2007/05/30 20:11:55 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -1799,6 +1799,36 @@ XLogFlush(XLogRecPtr record)
 			LogwrtResult.Flush.xlogid, LogwrtResult.Flush.xrecoff);
 }
 
+/*
+ * Test whether XLOG data has been flushed up to (at least) the given position.
+ *
+ * Returns true if a flush is still needed.  (It may be that someone else
+ * is already in process of flushing that far, however.)
+ */
+bool
+XLogNeedsFlush(XLogRecPtr record)
+{
+	/* Quick exit if already known flushed */
+	if (XLByteLE(record, LogwrtResult.Flush))
+		return false;
+
+	/* read LogwrtResult and update local state */
+	{
+		/* use volatile pointer to prevent code rearrangement */
+		volatile XLogCtlData *xlogctl = XLogCtl;
+
+		SpinLockAcquire(&xlogctl->info_lck);
+		LogwrtResult = xlogctl->LogwrtResult;
+		SpinLockRelease(&xlogctl->info_lck);
+	}
+
+	/* check again */
+	if (XLByteLE(record, LogwrtResult.Flush))
+		return false;
+
+	return true;
+}
+
 /*
  * Create a new XLOG file segment, or open a pre-existing one.
  *
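The new XLogNeedsFlush above follows a common shared-memory pattern: a cheap check against a locally cached copy of the flush position, a refresh of that copy under the spinlock, then a re-check. Here is a standalone sketch of the same pattern, with a pthread mutex and plain integers standing in for the real spinlock, XLogCtlData, and XLByteLE comparisons.

/* flushcheck_sketch.c -- toy model of the check/refresh/re-check pattern.
 * Build with: cc -pthread flushcheck_sketch.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static pthread_mutex_t info_lck = PTHREAD_MUTEX_INITIALIZER;
static uint64_t shared_flushed_upto = 0;	/* protected by info_lck */
static uint64_t local_flushed_upto = 0;		/* this "backend's" cached copy */

/* Return true if a flush is still needed to reach 'record'. */
static bool
needs_flush(uint64_t record)
{
	/* Quick exit if our cached copy already proves it's flushed. */
	if (record <= local_flushed_upto)
		return false;

	/* Refresh the cached copy from shared state under the lock. */
	pthread_mutex_lock(&info_lck);
	local_flushed_upto = shared_flushed_upto;
	pthread_mutex_unlock(&info_lck);

	/* Check again with up-to-date information. */
	return record > local_flushed_upto;
}

int
main(void)
{
	shared_flushed_upto = 500;	/* pretend WAL is flushed up to 500 */
	printf("needs_flush(400) = %d\n", needs_flush(400));	/* 0 */
	printf("needs_flush(600) = %d\n", needs_flush(600));	/* 1 */
	return 0;
}

The quick exit is safe because the cached copy can only lag the shared value, so an "already flushed" answer never needs the lock to be trusted.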