Commit 31966b15 invented a way for functions dealing with relation
extension to accept a Relation in online code and an SMgrRelation in
recovery code. It seems highly likely that future bufmgr.c interfaces
will face the same problem, and need to do something similar.
Generalize the names so that each interface doesn't have to re-invent
the wheel.
Back-patch to 16. Since extension AM authors might start using the
constructor macros once 16 ships, we agreed to do the rename in 16
rather than waiting for 17.
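For illustration, callers might use the generalized constructor macros
roughly like this (a sketch only; the macro names BMR_REL()/BMR_SMGR()
and the EB_LOCK_FIRST flag are assumed here):

    #include "postgres.h"

    #include "catalog/pg_class.h"
    #include "storage/bufmgr.h"
    #include "storage/smgr.h"
    #include "utils/rel.h"

    static Buffer
    extend_by_one_block(Relation rel, SMgrRelation smgr, bool in_recovery)
    {
        if (!in_recovery)
            /* online code: a full Relation is available */
            return ExtendBufferedRel(BMR_REL(rel), MAIN_FORKNUM, NULL,
                                     EB_LOCK_FIRST);
        else
            /* recovery code: only an SMgrRelation is available */
            return ExtendBufferedRel(BMR_SMGR(smgr, RELPERSISTENCE_PERMANENT),
                                     MAIN_FORKNUM, NULL, EB_LOCK_FIRST);
    }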
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKG%2B6tLD2BhpRWycEoti6LVLyQq457UL4ticP5xd8LqHySA%40mail.gmail.com
Commit b91dd9de was concerned with a theoretical problem with our
non-atomic condition variable operations. If you stop sleeping, and
then cancel the sleep in a separate step, you might be signaled in
between, and that could be lost. That doesn't matter for callers of
ConditionVariableBroadcast(), but callers of ConditionVariableSignal()
might be upset if a signal went missing like this.
Commit bc971f4025 interacted badly with that logic, because it doesn't
use ConditionVariableSleep(), which would normally put us back in the
wait list. ConditionVariableCancelSleep() would be confused and think
we'd received an extra signal, and try to forward it to another backend,
resulting in wakeup storms.
New idea: ConditionVariableCancelSleep() can just return true if we've
been signaled. Hypothetical users of ConditionVariableSignal() would
then still have a way to deal with rare lost signals if they are
concerned about that problem.
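To illustrate, a sleeper could use the return value roughly like this
(a sketch with a hypothetical shared structure; the wait event below is
only a placeholder):

    #include "postgres.h"

    #include "storage/condition_variable.h"
    #include "utils/wait_event.h"

    typedef struct MySharedState
    {
        ConditionVariable cv;
        bool        done;
    } MySharedState;

    static void
    wait_until_done(MySharedState *shared)
    {
        ConditionVariablePrepareToSleep(&shared->cv);
        while (!shared->done)
            ConditionVariableSleep(&shared->cv, WAIT_EVENT_MQ_RECEIVE);

        if (ConditionVariableCancelSleep())
        {
            /*
             * We were signaled after deciding to stop sleeping.  A user of
             * ConditionVariableSignal() worried about losing that wakeup
             * could choose to pass it along here instead of dropping it.
             */
            ConditionVariableSignal(&shared->cv);
        }
    }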
Back-patch to 16, where bc971f4025 arrived.
Reported-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/2840876b-4cfe-240f-0a7e-29ffd66711e7%40enterprisedb.com
Currently, the names of custom wait events have to be registered in
each backend, requiring every backend to attach to an extension's
shared memory area even when the extension is not loaded with
shared_preload_libraries.
This patch relaxes that constraint by storing the wait events and their
names in two dynamic hash tables in shared memory. This has the
advantage of simplifying the registration of custom wait events down to
a single routine call that returns an event ID ready for use:
uint32 WaitEventExtensionNew(const char *wait_event_name);
The caller of this routine can then cache the returned ID locally and
use it for pgstat_report_wait_start(), WaitLatch() or a similar routine.
The implementation uses two hash tables: one keyed by event name to
avoid duplicates, and a second keyed by event ID for event lookups, as
done for pg_stat_activity. These tables can hold a minimum of 16
entries and a maximum of 128 entries, which should be plenty.
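For illustration, an extension might use the new routine roughly like
this (a sketch only; the event name and function are hypothetical):

    #include "postgres.h"

    #include "miscadmin.h"
    #include "storage/latch.h"
    #include "utils/wait_event.h"

    /* allocated lazily, then cached for the life of the backend */
    static uint32 my_wait_event = 0;

    static void
    wait_a_bit(long naptime_ms)
    {
        if (my_wait_event == 0)
            my_wait_event = WaitEventExtensionNew("MyExtensionMainLoop");

        (void) WaitLatch(MyLatch,
                         WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                         naptime_ms,
                         my_wait_event);
        ResetLatch(MyLatch);
    }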
The code changes done in worker_spi show how things are simplified (most
of the code removed in this commit comes from there):
- worker_spi_init() is gone.
- No more shared memory hooks required (size requested and
initialization).
- The custom wait event ID is cached in the process that needs to set
it, with a single call to WaitEventExtensionNew() to retrieve it.
Per suggestion from Andres Freund.
Author: Masahiro Ikeda, with a few tweaks from me.
Discussion: https://postgr.es/m/20230801032349.aaiuvhtrcvvcwzcx@awork3.anarazel.de
Two backend routines are added to allow extensions to allocate and
define custom wait events, all of them allocated under the wait event
type "Extension":
* WaitEventExtensionNew(), which allocates a wait event ID computed
from a counter in shared memory.
* WaitEventExtensionRegisterName(), to associate a custom string with
the allocated wait event ID.
Note that this commit includes an example of how to use this new
facility in worker_spi, TAP tests covering various scenarios, and some
documentation about how to use it.
Any code in the tree that currently uses WAIT_EVENT_EXTENSION could
switch to this new facility to define custom wait events. This is left
as work for future patches.
Author: Masahiro Ikeda
Reviewed-by: Andres Freund, Michael Paquier, Tristan Partin, Bharath
Rupireddy
Discussion: https://postgr.es/m/b9f5411acda0cf15c8fbb767702ff43e@oss.nttdata.com
Commit 5764f611e used dclist_delete_from() to remove the proc from the
wait queue. However, since that doesn't reset the dlist_node's
next/prev to NULL, we could end up calling RemoveFromWaitQueue() twice:
once when the process detects a deadlock, and again when cleaning up
locks on aborting the transaction. The waiting lock information is
cleared in the first call, so the second call led to a crash.
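A sketch of the underlying list-state issue (illustrative only; the
"thorough" delete variant and the detached-node check are assumed to be
the ilist.h facilities involved):

    #include "postgres.h"

    #include "lib/ilist.h"

    static void
    remove_waiter(dclist_head *wait_queue, dlist_node *node)
    {
        /*
         * dclist_delete_from() unlinks the node but leaves its next/prev
         * pointing at stale neighbors, so later code cannot tell that the
         * node is no longer queued.  The "thorough" variant NULLs the
         * pointers, which makes that state testable.
         */
        dclist_delete_from_thoroughly(wait_queue, node);

        Assert(dlist_node_is_detached(node));
    }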
Backpatch to v16, where the change was introduced.
Bug: #18031
Reported-by: Justin Pryzby, Alexander Lakhin
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/ZKy4AdrLEfbqrxGJ%40telsasoft.com
Discussion: https://postgr.es/m/18031-ebe2d08cb405f6cc@postgresql.org
Backpatch-through: 16
This commit adds a few comments about what LWLockWaitForVar() relies on
when a backend waits for a variable update on its LWLocks for WAL
insertions up to an expected LSN.
First, LWLockWaitForVar() does not include a memory barrier, relying on
a spinlock taken at the beginning of WaitXLogInsertionsToFinish(). This
was hidden behind two layers of routines in lwlock.c. This assumption
is now documented at the top of LWLockWaitForVar(), and detailed a bit
more within LWLockConflictsWithVar().
Second, document why WaitXLogInsertionsToFinish() does not include
memory barriers, relying on a spinlock at its top, which is, per Andres'
input, fine for two different reasons, both depending on the fact that
the caller of WaitXLogInsertionsToFinish() is waiting for an LSN up to a
certain value.
This area's documentation and assumptions could be improved more in the
future, but at least that's a beginning.
Author: Bharath Rupireddy, Andres Freund
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CALj2ACVF+6jLvqKe6xhDzCCkr=rfd6upaGc3477Pji1Ke9G7Bg@mail.gmail.com
The WAL insertion lock variable insertingAt is currently being read
and written with the help of the LWLock wait list lock to avoid any read
of torn values. This wait list lock can become a point of contention on
highly concurrent write workloads.
This commit switches insertingAt to a 64b atomic variable that provides
torn-free reads/writes. On platforms without 64b atomic support, the
fallback implementation uses spinlocks to provide the same guarantees
for the values read. LWLockWaitForVar(), through
LWLockConflictsWithVar(), reads the new value to check if it still needs
to wait with a u64 atomic operation. LWLockUpdateVar() updates the
variable before waking up the waiters with an exchange_u64 (full memory
barrier). LWLockReleaseClearVar() now also uses an exchange_u64 to
reset the variable. Before this commit, all these steps relied on
LWLockWaitListLock() and LWLockWaitListUnlock().
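A sketch of the resulting access pattern (illustrative only; the struct
and function names below are placeholders, not the exact lwlock.c or
xlog.c code):

    #include "postgres.h"

    #include "port/atomics.h"

    typedef struct InsertLockSketch
    {
        pg_atomic_uint64 insertingAt;
    } InsertLockSketch;

    /* reader side, as in LWLockConflictsWithVar(): a plain atomic read */
    static bool
    still_needs_to_wait(InsertLockSketch *lock, uint64 waitfor)
    {
        return pg_atomic_read_u64(&lock->insertingAt) < waitfor;
    }

    /* writer side, as in LWLockUpdateVar() and LWLockReleaseClearVar():
     * the exchange acts as a full memory barrier before waking waiters */
    static void
    publish_progress(InsertLockSketch *lock, uint64 new_value)
    {
        pg_atomic_exchange_u64(&lock->insertingAt, new_value);
    }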
This reduces contention on the LWLock wait list lock and improves the
performance of highly concurrent write workloads. Here are some
numbers using pg_logical_emit_message() (HEAD at d6677b93) with various
arbitrary record lengths and up to 4k clients on a rather large machine
(64 vCPUs, 512GB of RAM, 16 cores per socket, 2 sockets), in terms of
TPS numbers coming from pgbench:
 message_size_b     |     16 |     64 |    256 |   1024
--------------------+--------+--------+--------+--------
 patch_4_clients    |  83830 |  82929 |  80478 |  73131
 patch_16_clients   | 267655 | 264973 | 250566 | 213985
 patch_64_clients   | 380423 | 378318 | 356907 | 294248
 patch_256_clients  | 360915 | 354436 | 326209 | 263664
 patch_512_clients  | 332654 | 321199 | 287521 | 240128
 patch_1024_clients | 288263 | 276614 | 258220 | 217063
 patch_2048_clients | 252280 | 243558 | 230062 | 192429
 patch_4096_clients | 212566 | 213654 | 205951 | 166955
 head_4_clients     |  83686 |  83766 |  81233 |  73749
 head_16_clients    | 266503 | 265546 | 249261 | 213645
 head_64_clients    | 366122 | 363462 | 341078 | 261707
 head_256_clients   | 132600 | 132573 | 134392 | 165799
 head_512_clients   | 118937 | 114332 | 116860 | 150672
 head_1024_clients  | 133546 | 115256 | 125236 | 151390
 head_2048_clients  | 137877 | 117802 | 120909 | 138165
 head_4096_clients  | 113440 | 115611 | 120635 | 114361
Bharath has been measuring similar improvements, where the limit of the
WAL insertion lock begins to be felt when more than 256 concurrent
clients are involved in this specific workload.
An extra patch has been discussed to introduce a fast-exit path in
LWLockUpdateVar() when there are no waiters, but this does not
influence the write-heavy workloads discussed here, as there are always
waiters. This will be considered separately.
Author: Bharath Rupireddy
Reviewed-by: Nathan Bossart, Andres Freund, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACVF+6jLvqKe6xhDzCCkr=rfd6upaGc3477Pji1Ke9G7Bg@mail.gmail.com
Due to the bug, LimitAdditionalPins() could return 0, violating its API
contract ("One additional pin is always allowed"). This could be hit
when shared_buffers is set very low and a fair amount of concurrency is
used.
This bug was introduced in 31966b151e.
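The guarantee boils down to a clamp along these lines (a sketch with
hypothetical names, not the exact bufmgr.c code):

    #include "postgres.h"

    static uint32
    limit_additional_pins_sketch(uint32 wanted, uint32 proportional_share)
    {
        /* one additional pin is always allowed, even with tiny settings */
        if (proportional_share == 0)
            proportional_share = 1;

        return Min(wanted, proportional_share);
    }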
Author: "Anton A. Melnikov" <aamelnikov@inbox.ru>
Reported-by: "Anton A. Melnikov" <aamelnikov@inbox.ru>
Reported-by: Victoria Shepard
Discussion: https://postgr.es/m/ae46f2fb-5586-3de0-b54b-1bb0f6410ebd@inbox.ru
Backpatch: 16-
This is useful to show the allocation state of huge pages when setting
up a server with "huge_pages = try", where allocating huge pages would
be attempted but the server would continue its startup sequence even if
the allocation fails. The effective status of huge pages is not easily
visible without OS-level tools (or for instance, a lookup at
/proc/N/smaps), and the environments where Postgres runs may not
authorize that. Like the other GUCs related to huge pages, this works
for Linux and Windows.
This GUC can report the following values:
- "on", if huge pages were allocated.
- "off", if huge pages were not allocated.
- "unknown", a special state that can only be seen when using, for
example, postgres -C, because it is only possible to know whether the
shared memory allocation worked after the point where GUC values can be
checked, even for a runtime-computed GUC. This value should never be
seen when querying the GUC on a running server; an assertion is added
to check that.
The discussion also covered adding a new function to retrieve this
status, but that would have required more tricks with -DEXEC_BACKEND,
something that GUCs already handle.
Noriyoshi Shinoda initiated the thread that led to this commit.
Author: Justin Pryzby
Reviewed-by: Nathan Bossart, Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/TU4PR8401MB1152EBB0D271F827E2E37A01EECC9@TU4PR8401MB1152.NAMPRD84.PROD.OUTLOOK.COM
This commit reverts the work done by commits 3ba59ccc89 and 72e78d831a.
Those commits were incorrect in asserting that we never acquire any
other heavy-weight lock after acquiring a page lock, other than a
relation extension lock. We can acquire a lock on catalogs while doing
catalog lookups after acquiring a page lock.
This won't impact any existing feature, but we need to find some other
way to achieve this before parallelizing other write operations or even
improving the parallelism in vacuum (like allowing multiple workers for
an index).
Reported-by: Jaime Casanova
Author: Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/CAJKUy5jffnRKNvRHKQ0LynRb0RJC-o4P8Ku3x9vGAVLwDBWumQ@mail.gmail.com
The documentation and the code are generated automatically from a new
file called wait_event_names.txt, formatted as sections dedicated to
each wait event class (Timeout, Lock, IO, etc.), with three
tab-separated fields per entry:
- C symbol in enums
- Format in the system views
- Description in the docs
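For illustration, an entry could look like the following (a
hypothetical line; the exact quoting and section syntax are defined by
the file itself):

    WAIT_EVENT_MY_EXAMPLE	"MyExample"	"Waiting for an example operation to finish."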
Using this approach has several advantages, as we have proved to be
rather bad at maintaining this area of the tree over the years:
- The order of the items in the documentation and the code, which
should be alphabetical, has become incorrect multiple times; the script
generating the code and documentation now has a few rules to enforce
that, making maintenance a no-brainer.
- Some wait events were added to the code but not documented; that can
no longer be missed.
- The order of the tables for each wait event class is enforced in the
documentation (the input .txt file does so as well for clarity, though
this is not mandatory).
- Less code, shaving 1.2k lines from the tree, with 1/3 of the savings
coming from the code, the rest from the documentation.
The wait event types "Lock" and "LWLock" still have their own code
paths, hence only the documentation is generated for them. These
classes are listed with a special marker called WAIT_EVENT_DOCONLY in
the input file.
Adding a new wait event now requires only an update of
wait_event_names.txt, with "Lock" and "LWLock" treated as exceptions.
This commit has been tested with configure/Makefile, the CI and VPATH
builds; clean, distclean and maintainer-clean work fine.
Author: Bertrand Drouvot, Michael Paquier
Discussion: https://postgr.es/m/77a86b3a-c4a8-5f5d-69b9-d70bbf2e9b98@gmail.com
If you create a table and don't insert any data into it, the relation
file is never fsync'd. You don't lose data, because an empty table
doesn't have any data to begin with, but if you crash and lose the
file, subsequent operations on the table will fail with a "could not
open file" error.
To fix, register an fsync request in mdcreate(), like we do for mdwrite().
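The change boils down to something like this inside mdcreate(),
mirroring mdwrite() (a sketch only, assuming md.c's internal
register_dirty_segment() helper and the SmgrIsTemp() test):

    /* in mdcreate(), once the first segment has been created/opened */
    if (!SmgrIsTemp(reln))
        register_dirty_segment(reln, forknum, mdfd);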
Per discussion, we probably should also fsync the containing directory
after creating a new file. But that's a separate and much wider issue.
Backpatch to all supported versions.
Reviewed-by: Andres Freund, Thomas Munro
Discussion: https://www.postgresql.org/message-id/d47d8122-415e-425c-d0a2-e0160829702d%40iki.fi
The following changes are done:
- Addition of WaitEventBufferPin and WaitEventExtension, which hold a
list of wait events related to each category.
- Addition of two functions that encapsulate the list of wait events for
each category.
- Rename BUFFER_PIN to BUFFERPIN (only this wait event class used an
underscore, requiring a specific rule in the automation script).
These changes make the automatic generation of all the code and
documentation related to wait events a bit easier, as all the wait
event categories are now controlled by consistent structures and
functions.
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/c6f35117-4b20-4c78-1df5-d3056010dcf5@gmail.com
Discussion: https://postgr.es/m/77a86b3a-c4a8-5f5d-69b9-d70bbf2e9b98@gmail.com
Run pgindent and pgperltidy. It seems we're still some ways
away from all committers doing this automatically. Now that
we have a buildfarm animal that will whine about poorly-indented
code, we'll try to keep the tree more tidy.
Discussion: https://postgr.es/m/3156045.1687208823@sss.pgh.pa.us
Starting with 4d330a61bb we can use posix_fallocate() to extend
files. Unfortunately, in some situations, e.g. on tmpfs filesystems,
EINTR may be returned. See also 4518c798b2.
To fix, add a retry path to FileFallocate(). In contrast to 4518c798b2,
the amount we extend by is limited and the extending may happen at a
high frequency, so disabling signals does not appear to be the correct
path here.
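The retry amounts to a loop along these lines (a sketch, not the exact
fd.c code; note that posix_fallocate() reports errors through its
return value rather than errno):

    #include <errno.h>
    #include <fcntl.h>

    static int
    fallocate_with_retry(int fd, off_t offset, off_t amount)
    {
        int         rc;

        do
        {
            rc = posix_fallocate(fd, offset, amount);
        } while (rc == EINTR);

        if (rc != 0)
        {
            errno = rc;         /* so callers can keep reporting via %m */
            return -1;
        }
        return 0;
    }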
Also add retry paths to other file operations currently lacking them (around
fdatasync(), fsync(), ftruncate(), posix_fadvise(), sync_file_range(),
truncate()) - they are all documented or have been observed to return EINTR.
Even though most of these functions are used in the back branches, it
does not seem worth the risk to backpatch - outside of the new-to-16
case of posix_fallocate(), I am not aware of problem reports due to the
lack of retries.
Reported-by: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/ZEZDj1H61ryrmY9o@msg.df7cb.de
Backpatch: -
Previously stats in the startup process would only get reported during
shutdown of the startup process. It has been that way for a long time, but
became a lot more noticeable with the new pg_stat_io view, which separates out
IO done by different backend types...
While reporting stats after replaying every XLOG_RUNNING_XACTS record
isn't the prettiest approach, it has the advantage of being quite easy.
Given that we're well past feature freeze...
It's not a problem that we don't report stats more frequently with
wal_level=minimal; in that case stats can't be read before the startup
process has shut down.
Besides the above, this commit also changes pgstat_report_stat() to acquire
the timestamp with GetCurrentTimestamp() instead of
GetCurrentTransactionStopTimestamp().
Thanks to Melih Mutlu, Kyotaro Horiguchi for prototypes of other approaches to
solving this issue.
Reported-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Discussion: https://postgr.es/m/5315aedc-fbca-1556-c5de-dc2e00b23a14@oss.nttdata.com
- Commit 3eb77eba5a, which moved the pending ops queue from md.c to
sync.c, introduced a duplicate, unused 'pendingOpsCxt'
variable. (I'm surprised none of the compilers or static analysis
tools have complained about that.)
- Commit c2fe139c20 moved the 'synchronize_seqscans' variable and
introduced an extern declaration in tableam.h, making the one in
guc_tables.c unnecessary.
- Commit 6f0cf87872 removed the 'pgstat_temp_directory' GUC, but
forgot to remove the corresponding global variable.
- Commit 1b4e729eaa removed the 'pg_krb_realm' GUC, and its global
variable, but forgot the declaration in auth.h.
Spotted all these by reading the code.
The assertion checked that the size of the relation is not "too large" - but
the code is explicitly dealing with the possibility of another backend
extending the relation concurrently. In that case the new relation size could
be bigger than what the current backend needs, wrongly triggering an assertion
failure.
Unfortunately it is hard to write a reliable and affordable regression
test for this, as a lot of concurrency is needed to encounter the bug.
Introduced in 31966b151e.
Reported-by: Melanie Plageman <melanieplageman@gmail.com>
Run pgindent, pgperltidy, and reformat-dat-files.
This set of diffs is a bit larger than typical. We've updated to
pg_bsd_indent 2.1.2, which properly indents variable declarations that
have multi-line initialization expressions (the continuation lines are
now indented one tab stop). We've also updated to perltidy version
20230309 and changed some of its settings, which reduces its desire to
add whitespace to lines to make assignments etc. line up. Going
forward, that should make for fewer random-seeming changes to existing
code.
Discussion: https://postgr.es/m/20230428092545.qfb3y5wcu4cm75ur@alvherre.pgsql
Most (older) comments in md.c and smgr.c are indented with a leading
tab on all lines, which isn't the current style and makes updating the
comments a bit annoying. This reindents all these lines with a single
space, as is the normal style. This issue exists in various shapes
throughout the code but it's pretty consistent here, and since there
is a patch pending to refresh some of the comments in these files, it
seems sensible to clean this up here separately.
Discussion: https://www.postgresql.org/message-id/flat/22fed8ba-01c3-2008-a256-4ea912d68fab%40enterprisedb.com
Give the new GUC introduced by d4e71df6 a name that is clearly not
intended for mainstream use quite yet.
Future proposals would drop the prefix only after adding infrastructure
to make it efficient. Having the switch in the tree sooner is good
because it might lead to new discoveries about the hazards awaiting us
on a wide range of systems, but that name was too enticing and could
lead to cross-version confusion in future, per complaints from Noah and
Justin.
Suggested-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com> (the idea, not the patch)
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (ditto)
Discussion: https://postgr.es/m/20230430041106.GA2268796%40rfd.leadboat.com
This wait event was documented as "CommitTsBuffer" since its
introduction, but the code named it "CommitTSBuffer". This commit fixes
the code to follow the term documented, which is also more consistent
with the naming of the other wait events used for commit timestamps.
Introduced by 5da1493.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/e8c38840-596a-83d6-bd8d-cebc51111572@gmail.com
Backpatch-through: 13
vacuum_defer_cleanup_age was introduced before hot_standby_feedback and
replication slots existed. It is hard to use reasonably - commonly it will
either be set too low (not preventing recovery conflicts, while still causing
some bloat), or too high (causing a lot of bloat). The alternatives do not
have that issue.
That on its own might not be sufficient reason to remove
vacuum_defer_cleanup_age, but it also complicates computation of xid
horizons. See e.g. the bug fixed in be504a3e97. It also is untested.
This commit removes TransactionIdRetreatSafely(), as there are no users
anymore. There might be potential future users, hence noting that here.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20230317230930.nhsgk3qfk7f4axls@awork3.anarazel.de
This fixes many spelling mistakes in comments, plus a few references to
invalid parameter names, function names and option names in comments
and also some in string constants.
Also, fix an #undef that was undefining the incorrect definition.
Author: Alexander Lakhin
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/d5f68d19-c0fc-91a9-118d-7c6a5a3f5fad@gmail.com
For some reason I had not implemented RBM_ZERO_AND_CLEANUP_LOCK support
in ExtendBufferedRelTo(), likely thinking it was not reachable. But it
is reachable, e.g. when replaying a WAL record for a page in a relation
that subsequently is truncated (likely only reachable when doing crash
recovery or PITR, not during ongoing streaming replication).
Now that all of the RBM_* modes are supported, remove the assertions
checking the mode.
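A sketch of the kind of call that is now accepted (illustrative
fragment only; the flag and macro names reflect the bufmgr.h API as
assumed here):

    /* during replay, extend the relation up to the target block and
     * return that block zero-filled and with a cleanup lock */
    buffer = ExtendBufferedRelTo(BMR_SMGR(smgr, RELPERSISTENCE_PERMANENT),
                                 forknum,
                                 NULL,
                                 EB_PERFORMING_RECOVERY |
                                 EB_SKIP_EXTENSION_LOCK,
                                 blkno + 1,
                                 RBM_ZERO_AND_CLEANUP_LOCK);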
As we had no test coverage for this scenario, add a new TAP test. There's
plenty more that ought to be tested in this area...
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/392271.1681238924%40sss.pgh.pa.us
Discussion: https://postgr.es/m/0b5eb82b-cb99-e0a4-b932-3dc60e2e3926@gmail.com
Make sure that function declarations use names that exactly match the
corresponding names from function definitions in a few places. These
inconsistencies were all introduced relatively recently, after the code
base had parameter name mismatches fixed in bulk (see commits starting
with commits 4274dc22 and 035ce1fe).
pg_bsd_indent still has a couple of similar inconsistencies, which I
(pgeoghegan) have left untouched for now.
Like all earlier commits that cleaned up function parameter names, this
commit was written with help from clang-tidy.
During WAL replay on the standby, when a conflict with a logical slot is
identified, invalidate such slots. There are two sources of conflicts:
1) Using the information added in 6af1793954, logical slots are invalidated if
required rows are removed
2) wal_level on the primary server is reduced to below logical
Uses the infrastructure introduced in the prior commit. FIXME: add commit
reference.
Change InvalidatePossiblyObsoleteSlot() to use a recovery conflict to
interrupt use of a slot, if called in the startup process. The new recovery
conflict is added to pg_stat_database_conflicts, as confl_active_logicalslot.
See 6af1793954 for an overall design of logical decoding on a standby.
Bumps catversion for the addition of the pg_stat_database_conflicts column.
Bumps PGSTAT_FILE_FORMAT_ID for the same reason.
Author: "Drouvot, Bertrand" <bertranddrouvot.pg@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Author: Amit Khandekar <amitdkhan.pg@gmail.com> (in an older version)
Reviewed-by: "Drouvot, Bertrand" <bertranddrouvot.pg@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20230407075009.igg7be27ha2htkbt@awork3.anarazel.de
Provide a way to ask the kernel to use O_DIRECT (or local equivalent)
where available for data and WAL files, to avoid or minimize kernel
caching. This hurts performance currently and is not intended for end
users yet. Later proposed work would introduce our own I/O clustering,
read-ahead, etc to replace the facilities the kernel disables with this
option.
The only user-visible change, if the developer-only GUC is not used, is
that this commit also removes the obscure logic that would activate
O_DIRECT for the WAL when wal_sync_method=open_[data]sync and
wal_level=minimal (which also requires max_wal_senders=0). Those are
non-default and unlikely settings, and this behavior wasn't (correctly)
documented. The same effect can be achieved with io_direct=wal.
Author: Thomas Munro <thomas.munro@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKGK1X532hYqJ_MzFWt0n1zt8trz980D79WbjwnT-yYLZpg%40mail.gmail.com
In order to have the option to use O_DIRECT/FILE_FLAG_NO_BUFFERING in a
later commit, we need the addresses of user space buffers to be well
aligned. The exact requirements vary by OS and file system (typically
sectors and/or memory pages). The address alignment size is set to
4096, which is enough for currently known systems: it matches modern
sectors and common memory page size. There is no standard governing
O_DIRECT's requirements so we might eventually have to reconsider this
with more information from the field or future systems.
Aligning I/O buffers on memory pages is also known to improve regular
buffered I/O performance.
Three classes of I/O buffers for regular data pages are adjusted:
(1) Heap buffers are now allocated with the new palloc_aligned() or
MemoryContextAllocAligned() functions introduced by commit 439f6175.
(2) Stack buffers now use a new struct PGIOAlignedBlock to respect
PG_IO_ALIGN_SIZE, if possible with this compiler. (3) The buffer
pool is also aligned in shared memory.
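For illustration, the first two styles look roughly like this (a
sketch; the variable names are placeholders):

    #include "postgres.h"

    static void
    aligned_buffer_sketch(void)
    {
        /* (1) heap buffer aligned to PG_IO_ALIGN_SIZE */
        char       *copybuf = palloc_aligned(BLCKSZ, PG_IO_ALIGN_SIZE, 0);

        /* (2) stack buffer through the new PGIOAlignedBlock type */
        PGIOAlignedBlock block;

        memset(block.data, 0, BLCKSZ);
        memcpy(copybuf, block.data, BLCKSZ);
        pfree(copybuf);
    }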
WAL buffers were already aligned on XLOG_BLCKSZ. It's possible for
XLOG_BLCKSZ to be configured smaller than PG_IO_ALIGN_SIZE and thus
for O_DIRECT WAL writes to fail to be well aligned, but that's a
pre-existing condition and will be addressed by a later commit.
BufFiles are not yet addressed (there's no current plan to use O_DIRECT
for those, but they could potentially get some incidental speedup even
in plain buffered I/O operations through better alignment).
If we can't align stack objects suitably using the compiler extensions
we know about, we disable the use of O_DIRECT by setting PG_O_DIRECT to
0. This avoids the need to consider systems that have O_DIRECT but
can't align stack objects the way we want; such systems could in theory
be supported with more work but we don't currently know of any such
machines, so it's easier to pretend there is no O_DIRECT support
instead. That's an existing and tested class of system.
Add assertions that all buffers passed into smgrread(), smgrwrite() and
smgrextend() are correctly aligned, unless PG_O_DIRECT is 0 (= stack
alignment tricks may be unavailable) or the block size has been set too
small to allow arrays of buffers to be all aligned.
Author: Thomas Munro <thomas.munro@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/CA+hUKGK1X532hYqJ_MzFWt0n1zt8trz980D79WbjwnT-yYLZpg@mail.gmail.com
Both pgstat_database and pgBufferUsage count IO timing for reads of temporary
relation blocks into local buffers. However, both failed to count write IO
timing for flushes of dirty local buffers. Fix.
Additionally, FlushRelationBuffers() seems to have omitted counting write
IO (both count and timing) stats for both pgstat_database and
pgBufferUsage. Fix.
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20230321023451.7rzy4kjj2iktrg2r%40awork3.anarazel.de
The same condition was accidentally copied to both branches. Manual testing
confirms that otherwise the error recovery path works fine.
Found while reviewing the logical-decoding-on-standby patch.
Add new options to the VACUUM and ANALYZE commands called
BUFFER_USAGE_LIMIT to allow users more control over how large to make the
buffer access strategy that is used to limit the usage of buffers in
shared buffers. Larger rings can allow VACUUM to run more quickly but
have the drawback of VACUUM possibly evicting more buffers from shared
buffers that might be useful for other queries running on the database.
Here we also add a new GUC named vacuum_buffer_usage_limit which controls
how large to make the access strategy when it's not specified in the
VACUUM/ANALYZE command. This defaults to 256KB, which is the same size as
the access strategy was prior to this change. This setting also
controls how large to make the buffer access strategy for autovacuum.
Per idea by Andres Freund.
Author: Melanie Plageman
Reviewed-by: David Rowley
Reviewed-by: Andres Freund
Reviewed-by: Justin Pryzby
Reviewed-by: Bharath Rupireddy
Discussion: https://postgr.es/m/20230111182720.ejifsclfwymw2reb@awork3.anarazel.de