aio: Add core asynchronous I/O infrastructure
The main motivations to use AIO in PostgreSQL are:

a) Reduce the time spent waiting for IO by issuing IO sufficiently early.

   In a few places we have approximated this using posix_fadvise() based
   prefetching, but that is fairly limited (no completion feedback, double the
   syscalls, only works with buffered IO, only works on some OSs).

b) Allow the use of Direct-I/O (DIO).

   DIO can offload most of the work for IO to hardware and thus increase
   throughput / decrease CPU utilization, as well as reduce latency. While we
   have gained the ability to configure DIO in d4e71df6, it is not yet usable
   for real world workloads, as every IO is executed synchronously.

For portability, the new AIO infrastructure allows AIO to be implemented using
different methods. The choice of the AIO method is controlled by the new
io_method GUC. As of this commit, the only implemented method is "sync", i.e.
AIO is not actually executed asynchronously. The "sync" method exists to allow
most of the new code to be bypassed initially. Subsequent commits will
introduce additional IO methods, including a cross-platform method implemented
using worker processes and a Linux-specific method using io_uring.

To allow different parts of postgres to use AIO, the core AIO infrastructure
does not need to know what kind of files it is operating on. The necessary
behavioral differences for different files are abstracted as "AIO Targets".
One example target would be smgr. For boring portability reasons, all targets
currently need to be added to an array in aio_target.c. This commit does not
implement any AIO targets, just the infrastructure for them. The smgr target
will be added in a later commit.

Completion (and other events) of IOs for one type of file (i.e. one AIO
target) need to be reacted to differently, based on the IO operation and the
callsite. This is made possible by callbacks that can be registered on IOs.
E.g. an smgr read into a local buffer does not need to update the
corresponding BufferDesc (as there is none), but a read into shared buffers
does. This commit does not contain any callbacks; they will be added in
subsequent commits.

For now the AIO infrastructure only understands READV and WRITEV operations,
but it is expected that more operations will be added, e.g. fsync/fdatasync,
flush_range and network operations like send/recv.

As of this commit, nothing uses the AIO infrastructure. Later commits will add
an smgr target, md.c and bufmgr.c callbacks, and then finally use AIO for
read_stream.c IO, which, in one fell swoop, will convert all read stream users
to AIO. The goal is to use AIO in many more places. There are patches to use
AIO for checkpointer and bgwriter that are reasonably close to being ready.
There also are prototypes to use it for WAL, relation extension, backend
writes and many more. Those prototypes were important to ensure the design of
the AIO subsystem is not too limiting (e.g. WAL writes need to happen in
critical sections, which influenced a lot of the design).

A future commit will add an AIO README explaining the AIO architecture and how
to use the AIO subsystem. The README is added later, as it references details
only added in later commits.

Many, many more people than the folks named below have contributed with
feedback, work on semi-independent patches etc. E.g. various folks have
contributed patches to use the read stream infrastructure (added by Thomas in
b5a9b18cd0) in more places. Similarly, a *lot* of folks have contributed to
the CI infrastructure, which I had started to work on to make adding AIO
feasible.

Some of the work by contributors has gone into the "v1" prototype of AIO,
which heavily influenced the current design of the AIO subsystem. None of that
code survives directly, but without the prototype, the current version of the
AIO infrastructure would not exist.

Similarly, the reviewers below have not necessarily looked at the current
design or the whole infrastructure, but have provided very valuable input. I
am to blame for problems, not they.

Author: Andres Freund <andres@anarazel.de>
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Co-authored-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Co-authored-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://postgr.es/m/uvrtrknj4kdytuboidbhwclo4gxhswwcpgadptsjvjqcluzmah%40brqs62irg4dt
Discussion: https://postgr.es/m/20210223100344.llw5an2aklengrmn@alap3.anarazel.de
Discussion: https://postgr.es/m/stj36ea6yyhoxtqkhpieia2z4krnam7qyetc57rfezgk4zgapf@gcnactj4z56m
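For orientation, the caller-side flow through the handle state machine documented
in the header below looks roughly like the following sketch. This is an
illustration only: the functions it uses (pgaio_io_acquire(), pgaio_io_get_wref(),
pgaio_io_get_iovec(), pgaio_io_prep_readv(), pgaio_wref_wait()) belong to the
public aio.h API that is not shown in this excerpt, and their exact signatures
here are assumptions rather than quotes from the patch.

#include "postgres.h"

#include "port/pg_iovec.h"
#include "storage/aio.h"
#include "utils/resowner.h"

/*
 * Illustrative sketch only: issue one asynchronous read and wait for it.
 * All aio.h signatures used here are assumed, not copied from the commit.
 */
static void
example_readv_and_wait(int fd, char *buffer, size_t nbytes, uint64 offset)
{
    PgAioReturn ioret;
    PgAioWaitRef iow;
    PgAioHandle *ioh;
    struct iovec *iov;

    /* HANDED_OUT: at most one handle may be held in this state per backend */
    ioh = pgaio_io_acquire(CurrentResourceOwner, &ioret);

    /* remember how to wait for the IO once it is out of our hands */
    pgaio_io_get_wref(ioh, &iow);

    /* point the handle's iovec at the memory the read should fill */
    pgaio_io_get_iovec(ioh, &iov);
    iov[0].iov_base = buffer;
    iov[0].iov_len = nbytes;

    /*
     * DEFINED -> STAGED -> SUBMITTED: outside of batchmode (see
     * pgaio_enter_batchmode()) the IO is handed to the configured io_method
     * right away; in batchmode up to PGAIO_SUBMIT_BATCH_SIZE staged IOs are
     * submitted together.
     */
    pgaio_io_prep_readv(ioh, fd, 1, offset);

    /* COMPLETED_*: block until the IO's completion callbacks have run */
    pgaio_wref_wait(&iow);
}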
src/include/storage/aio_internal.h (new file, 395 lines added)
@@ -0,0 +1,395 @@
/*-------------------------------------------------------------------------
 *
 * aio_internal.h
 *    AIO related declarations that should only be used by the AIO subsystem
 *    internally.
 *
 *
 * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * src/include/storage/aio_internal.h
 *
 *-------------------------------------------------------------------------
 */
#ifndef AIO_INTERNAL_H
#define AIO_INTERNAL_H


#include "lib/ilist.h"
#include "port/pg_iovec.h"
#include "storage/aio.h"
#include "storage/condition_variable.h"


/*
 * The maximum number of IOs that can be batch submitted at once.
 */
#define PGAIO_SUBMIT_BATCH_SIZE 32



/*
 * State machine for handles. With some exceptions, noted below, handles move
 * linearly through all states.
 *
 * State changes should all go through pgaio_io_update_state().
 */
typedef enum PgAioHandleState
{
    /* not in use */
    PGAIO_HS_IDLE = 0,

    /*
     * Returned by pgaio_io_acquire(). The next state is either DEFINED (if
     * pgaio_io_prep_*() is called), or IDLE (if pgaio_io_release() is
     * called).
     */
    PGAIO_HS_HANDED_OUT,

    /*
     * pgaio_io_prep_*() has been called, but the IO is not yet staged. At
     * this point the handle has all the information for the IO to be
     * executed.
     */
    PGAIO_HS_DEFINED,

    /*
     * stage() callbacks have been called, handle ready to be submitted for
     * execution. Unless in batchmode (cf. pgaio_enter_batchmode()), the IO
     * will be submitted immediately afterwards.
     */
    PGAIO_HS_STAGED,

    /* IO has been submitted to the IO method for execution */
    PGAIO_HS_SUBMITTED,

    /* IO finished, but result has not yet been processed */
    PGAIO_HS_COMPLETED_IO,

    /*
     * IO completed, shared completion has been called.
     *
     * If the IO completion occurs in the issuing backend, local callbacks
     * will immediately be called. Otherwise the handle stays in
     * COMPLETED_SHARED until the issuing backend waits for the completion of
     * the IO.
     */
    PGAIO_HS_COMPLETED_SHARED,

    /*
     * IO completed, local completion has been called.
     *
     * After this the handle will be made reusable and go into IDLE state.
     */
    PGAIO_HS_COMPLETED_LOCAL,
} PgAioHandleState;


struct ResourceOwnerData;

/* typedef is in aio_types.h */
struct PgAioHandle
{
    /* all state updates should go through pgaio_io_update_state() */
    PgAioHandleState state:8;

    /* what are we operating on */
    PgAioTargetID target:8;

    /* which IO operation */
    PgAioOp     op:8;

    /* bitfield of PgAioHandleFlags */
    uint8       flags;

    uint8       num_callbacks;

    /* using the proper type here would use more space */
    uint8       callbacks[PGAIO_HANDLE_MAX_CALLBACKS];

    /* data forwarded to each callback */
    uint8       callbacks_data[PGAIO_HANDLE_MAX_CALLBACKS];

    /*
     * Length of data associated with handle using
     * pgaio_io_set_handle_data_*().
     */
    uint8       handle_data_len;

    /* XXX: could be optimized out with some pointer math */
    int32       owner_procno;

    /* raw result of the IO operation */
    int32       result;

    /*
     * In which list the handle is registered, depends on the state:
     * - IDLE - in per-backend list
     * - HANDED_OUT - not in a list
     * - DEFINED - not in a list
     * - STAGED - in per-backend staged array
     * - SUBMITTED - in issuer's in_flight list
     * - COMPLETED_IO - in issuer's in_flight list
     * - COMPLETED_SHARED - in issuer's in_flight list
     */
    dlist_node  node;

    struct ResourceOwnerData *resowner;
    dlist_node  resowner_node;

    /* incremented every time the IO handle is reused */
    uint64      generation;

    /*
     * To wait for the IO to complete, other backends can wait on this CV.
     * Note that, if in SUBMITTED state, a waiter first needs to check if it
     * needs to do work via IoMethodOps->wait_one().
     */
    ConditionVariable cv;

    /* result of shared callback, passed to issuer callback */
    PgAioResult distilled_result;

    /*
     * Index into PgAioCtl->iovecs and PgAioCtl->handle_data.
     *
     * At the moment there's no need to differentiate between the two, but
     * that won't necessarily stay that way.
     */
    uint32      iovec_off;

    /*
     * If not NULL, this memory location will be updated with information
     * about the IO's completion, iff the issuing backend learns about the
     * IO's completion.
     */
    PgAioReturn *report_return;

    /* Data necessary for the IO to be performed */
    PgAioOpData op_data;

    /*
     * Data necessary to identify the object undergoing IO to higher-level
     * code. Needs to be sufficient to allow another backend to reopen the
     * file.
     */
    PgAioTargetData target_data;
};


typedef struct PgAioBackend
{
    /* index into PgAioCtl->io_handles */
    uint32      io_handle_off;

    /* IO Handles that currently are not used */
    dclist_head idle_ios;

    /*
     * Only one IO may be returned by pgaio_io_acquire()/pgaio_io_acquire_nb()
     * without having been either defined (by actually associating it with an
     * IO) or released (with pgaio_io_release()). This restriction is
     * necessary to guarantee that we can always acquire an IO.
     * ->handed_out_io is used to enforce that rule.
     */
    PgAioHandle *handed_out_io;

    /* Are we currently in batchmode? See pgaio_enter_batchmode(). */
    bool        in_batchmode;

    /*
     * IOs that are defined, but not yet submitted.
     */
    uint16      num_staged_ios;
    PgAioHandle *staged_ios[PGAIO_SUBMIT_BATCH_SIZE];

    /*
     * List of in-flight IOs. Also contains IOs that aren't strictly speaking
     * in-flight anymore, but have been waited-for and completed by another
     * backend. Once this backend sees such an IO it'll be reclaimed.
     *
     * The list is ordered by submission time, with more recently submitted
     * IOs being appended at the end.
     */
    dclist_head in_flight_ios;
} PgAioBackend;


typedef struct PgAioCtl
{
    int         backend_state_count;
    PgAioBackend *backend_state;

    /*
     * Array of iovec structs. Each iovec is owned by a specific backend. The
     * allocation is in PgAioCtl to allow the maximum number of iovecs for
     * individual IOs to be configurable with a PGC_POSTMASTER GUC.
     */
    uint32      iovec_count;
    struct iovec *iovecs;

    /*
     * For, e.g., an IO covering multiple buffers in shared / temp buffers, we
     * need to get Buffer IDs during completion to be able to change the
     * BufferDesc state accordingly. This space can be used to store e.g.
     * Buffer IDs. Note that the actual iovec might be shorter than this,
     * because we combine neighboring pages into one larger iovec entry.
     */
    uint64     *handle_data;

    uint32      io_handle_count;
    PgAioHandle *io_handles;
} PgAioCtl;



/*
 * Callbacks used to implement an IO method.
 */
typedef struct IoMethodOps
{
    /* global initialization */

    /*
     * Amount of additional shared memory to reserve for the io_method.
     * Called just like a normal ipci.c style *Size() function. Optional.
     */
    size_t      (*shmem_size) (void);

    /*
     * Initialize shared memory. first_time is true if AIO's shared memory
     * was just initialized, false otherwise. Optional.
     */
    void        (*shmem_init) (bool first_time);

    /*
     * Per-backend initialization. Optional.
     */
    void        (*init_backend) (void);


    /* handling of IOs */

    /* optional */
    bool        (*needs_synchronous_execution) (PgAioHandle *ioh);

    /*
     * Start executing the passed-in IOs.
     *
     * Will not be called if ->needs_synchronous_execution() returned true.
     *
     * num_staged_ios is <= PGAIO_SUBMIT_BATCH_SIZE.
     *
     * Always called in a critical section.
     */
    int         (*submit) (uint16 num_staged_ios, PgAioHandle **staged_ios);

    /*
     * Wait for the IO to complete. Optional.
     *
     * If not provided, it needs to be guaranteed that the IO method calls
     * pgaio_io_process_completion() without further interaction by the
     * issuing backend.
     */
    void        (*wait_one) (PgAioHandle *ioh,
                             uint64 ref_generation);
} IoMethodOps;


/* aio.c */
extern bool pgaio_io_was_recycled(PgAioHandle *ioh, uint64 ref_generation, PgAioHandleState *state);
extern void pgaio_io_stage(PgAioHandle *ioh, PgAioOp op);
extern void pgaio_io_process_completion(PgAioHandle *ioh, int result);
extern void pgaio_io_prepare_submit(PgAioHandle *ioh);
extern bool pgaio_io_needs_synchronous_execution(PgAioHandle *ioh);
extern const char *pgaio_io_get_state_name(PgAioHandle *ioh);
extern const char *pgaio_result_status_string(PgAioResultStatus rs);
extern void pgaio_shutdown(int code, Datum arg);

/* aio_callback.c */
extern void pgaio_io_call_stage(PgAioHandle *ioh);
extern void pgaio_io_call_complete_shared(PgAioHandle *ioh);
extern void pgaio_io_call_complete_local(PgAioHandle *ioh);

/* aio_io.c */
extern void pgaio_io_perform_synchronously(PgAioHandle *ioh);
extern const char *pgaio_io_get_op_name(PgAioHandle *ioh);

/* aio_target.c */
extern bool pgaio_io_can_reopen(PgAioHandle *ioh);
extern void pgaio_io_reopen(PgAioHandle *ioh);
extern const char *pgaio_io_get_target_name(PgAioHandle *ioh);


/*
 * The AIO subsystem has fairly verbose debug logging support. This can be
 * enabled/disabled at build time. The reason for this is that
 * a) the verbosity can make debugging things on higher levels hard
 * b) even if logging can be skipped due to elevel checks, it still causes a
 *    measurable slowdown
 *
 * XXX: This should likely eventually be disabled by default, at least in
 * non-assert builds.
 */
#define PGAIO_VERBOSE 1

/*
 * Simple ereport() wrapper that only logs if PGAIO_VERBOSE is defined.
 *
 * This intentionally still compiles the code, guarded by a constant if (0),
 * if verbose logging is disabled, to make it less likely that debug logging
 * is silently broken.
 *
 * The current definition requires passing at least one argument.
 */
#define pgaio_debug(elevel, msg, ...) \
    do { \
        if (PGAIO_VERBOSE) \
            ereport(elevel, \
                    errhidestmt(true), errhidecontext(true), \
                    errmsg_internal(msg, \
                                    __VA_ARGS__)); \
    } while(0)

/*
 * Simple ereport() wrapper. Note that the definition requires passing at
 * least one argument.
 */
#define pgaio_debug_io(elevel, ioh, msg, ...) \
    pgaio_debug(elevel, "io %-10d|op %-5s|target %-4s|state %-16s: " msg, \
                pgaio_io_get_id(ioh), \
                pgaio_io_get_op_name(ioh), \
                pgaio_io_get_target_name(ioh), \
                pgaio_io_get_state_name(ioh), \
                __VA_ARGS__)


#ifdef USE_INJECTION_POINTS

extern void pgaio_io_call_inj(PgAioHandle *ioh, const char *injection_point);

/* just for use in tests, from within injection points */
extern PgAioHandle *pgaio_inj_io_get(void);

#else

#define pgaio_io_call_inj(ioh, injection_point) (void) 0

/*
 * no fallback for pgaio_inj_io_get, all code using injection points better be
 * guarded by USE_INJECTION_POINTS.
 */

#endif


/* Declarations for the tables of function pointers exposed by each IO method. */
extern PGDLLIMPORT const IoMethodOps pgaio_sync_ops;

extern PGDLLIMPORT const IoMethodOps *pgaio_method_ops;
extern PGDLLIMPORT PgAioCtl *pgaio_ctl;
extern PGDLLIMPORT PgAioBackend *pgaio_my_backend;



#endif                          /* AIO_INTERNAL_H */
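To make the IoMethodOps contract above more concrete, here is a minimal sketch
of an IO method in the spirit of the built-in "sync" method: every staged IO is
executed synchronously at submit time. It only relies on declarations from this
header (IoMethodOps, pgaio_io_prepare_submit(), pgaio_io_perform_synchronously()),
but it is an illustration, not a copy of the method_sync.c added by this commit;
the table and helper names are hypothetical.

#include "postgres.h"

#include "storage/aio_internal.h"

/* everything is executed synchronously, so nothing ever needs to be awaited */
static bool
sketch_needs_synchronous_execution(PgAioHandle *ioh)
{
    return true;
}

/* execute each staged IO immediately; per the contract above, this runs in a critical section */
static int
sketch_submit(uint16 num_staged_ios, PgAioHandle **staged_ios)
{
    for (int i = 0; i < num_staged_ios; i++)
    {
        PgAioHandle *ioh = staged_ios[i];

        /* move the handle from STAGED towards SUBMITTED */
        pgaio_io_prepare_submit(ioh);

        /* perform the IO; completion processing happens right here */
        pgaio_io_perform_synchronously(ioh);
    }

    return num_staged_ios;
}

/* hypothetical ops table; the real one exported by this commit is pgaio_sync_ops */
const IoMethodOps sketch_method_ops = {
    .needs_synchronous_execution = sketch_needs_synchronous_execution,
    .submit = sketch_submit,
    /* wait_one is omitted, as no IO remains in flight after submit returns */
};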