
aio: Add core asynchronous I/O infrastructure

The main motivations to use AIO in PostgreSQL are:

a) Reduce the time spent waiting for IO by issuing IO sufficiently early.

   In a few places we have approximated this using posix_fadvise() based
   prefetching, but that is fairly limited (no completion feedback, double the
   syscalls, only works with buffered IO, only works on some OSs).

b) Allow the use of direct I/O (DIO).

   DIO can offload most of the work for IO to hardware and thus increase
   throughput / decrease CPU utilization, as well as reduce latency.  While we
   have gained the ability to configure DIO in d4e71df6, it is not yet usable
   for real world workloads, as every IO is executed synchronously.

For portability, the new AIO infrastructure allows AIO to be implemented using
different methods. The choice of AIO method is controlled by the new
io_method GUC. As of this commit, the only implemented method is "sync",
i.e. AIO is not actually executed asynchronously. The "sync" method exists to
allow most of the new code to be bypassed initially.

Subsequent commits will introduce additional IO methods, including a
cross-platform method implemented using worker processes and a Linux-specific
method using io_uring.

To allow different parts of postgres to use AIO, the core AIO infrastructure
does not need to know what kind of files it is operating on. The necessary
behavioral differences for different files are abstracted as "AIO
Targets". One example target would be smgr. For boring portability reasons,
all targets currently need to be added to an array in aio_target.c.  This
commit does not implement any AIO targets, just the infrastructure for
them. The smgr target will be added in a later commit.

The completion of (and other events for) IOs on one type of file (i.e. one AIO
target) needs to be handled differently depending on the IO operation and the
call site. This is made possible by callbacks that can be registered on
IOs. E.g. an smgr read into a local buffer does not need to update the
corresponding BufferDesc (as there is none), but a read into shared buffers
does.  This commit does not contain any callbacks; they will be added in
subsequent commits.

For now the AIO infrastructure only understands READV and WRITEV operations,
but it is expected that more operations will be added. E.g. fsync/fdatasync,
flush_range and network operations like send/recv.
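
To illustrate how these operations are intended to be driven, here is a rough
sketch of issuing a vectored read with the new API. Nothing in this commit can
actually execute it yet: PGAIO_TID_EXAMPLE and PGAIO_HCB_EXAMPLE are
placeholder target/callback IDs, and fd, offset and buffer stand in for caller
state.

    static void
    example_readv_and_wait(int fd, uint64 offset, char *buffer)
    {
        PgAioHandle *ioh;
        PgAioReturn ioret;
        PgAioWaitRef iow;
        struct iovec *iov;

        ioh = pgaio_io_acquire(CurrentResourceOwner, &ioret);

        /* completion behavior is attached via callback IDs, not pointers */
        pgaio_io_register_callbacks(ioh, PGAIO_HCB_EXAMPLE, 0);
        pgaio_io_set_target(ioh, PGAIO_TID_EXAMPLE);

        /* remember how to wait for the IO, even after the handle is reused */
        pgaio_io_get_wref(ioh, &iow);

        /* fill the iovec owned by the handle */
        pgaio_io_get_iovec(ioh, &iov);
        iov[0].iov_base = buffer;
        iov[0].iov_len = BLCKSZ;

        /* define the IO; outside of batchmode it is submitted soon after */
        pgaio_io_prep_readv(ioh, fd, 1, offset);

        /* ... do other work while the IO executes ... */

        pgaio_wref_wait(&iow);
        if (ioret.result.status == ARS_ERROR)
            pgaio_result_report(ioret.result, &ioret.target_data, ERROR);
    }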

As of this commit, nothing uses the AIO infrastructure. Later commits will add
an smgr target, md.c and bufmgr.c callbacks and then finally use AIO for
read_stream.c IO, which, in one fell swoop, will convert all read stream users
to AIO.

The goal is to use AIO in many more places. There are patches to use AIO for
checkpointer and bgwriter that are reasonably close to being ready. There also
are prototypes to use it for WAL, relation extension, backend writes and many
more. Those prototypes were important to ensure the design of the AIO
subsystem is not too limiting (e.g. WAL writes need to happen in critical
sections, which influenced a lot of the design).

A future commit will add an AIO README explaining the AIO architecture and how
to use the AIO subsystem. The README is added later, as it references details
only added in later commits.

Many, many more people than the folks named below have contributed
feedback, work on semi-independent patches, etc. E.g. various folks have
contributed patches to use the read stream infrastructure (added by Thomas in
b5a9b18cd0) in more places. Similarly, a *lot* of folks have contributed to
the CI infrastructure, which I had started to work on to make adding AIO
feasible.

Some of the work by contributors has gone into the "v1" prototype of AIO,
which heavily influenced the current design of the AIO subsystem. None of the
code from that directly survives, but without the prototype, the current
version of the AIO infrastructure would not exist.

Similarly, the reviewers below have not necessarily looked at the current
design or the whole infrastructure, but have provided very valuable input. I
am to blame for problems, not they.

Author: Andres Freund <andres@anarazel.de>
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Co-authored-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Co-authored-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://postgr.es/m/uvrtrknj4kdytuboidbhwclo4gxhswwcpgadptsjvjqcluzmah%40brqs62irg4dt
Discussion: https://postgr.es/m/20210223100344.llw5an2aklengrmn@alap3.anarazel.de
Discussion: https://postgr.es/m/stj36ea6yyhoxtqkhpieia2z4krnam7qyetc57rfezgk4zgapf@gcnactj4z56m
Andres Freund
2025-03-17 18:51:33 -04:00
parent 02844012b3
commit da7226993f
13 changed files with 2834 additions and 0 deletions

src/include/storage/aio.h

@@ -3,6 +3,10 @@
* aio.h
* Main AIO interface
*
* This is the header to include when actually issuing AIO. When just
* declaring functions involving an AIO related type, it might suffice to
* include aio_types.h. Initialization related functions are in the dedicated
* aio_init.h.
*
* Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
@@ -14,6 +18,9 @@
#ifndef AIO_H
#define AIO_H
#include "storage/aio_types.h"
#include "storage/procnumber.h"
/* Enum for io_method GUC. */
@@ -26,9 +33,313 @@ typedef enum IoMethod
#define DEFAULT_IO_METHOD IOMETHOD_SYNC
/*
* Flags for an IO that can be set with pgaio_io_set_flag().
*/
typedef enum PgAioHandleFlags
{
/*
* The IO references backend local memory.
*
* This needs to be set on an IO whenever the IO references process-local
* memory. Some IO methods do not support executing IO that references
* process local memory and thus need to fall back to executing IO
* synchronously for IOs with this flag set.
*
* Required for correctness.
*/
PGAIO_HF_REFERENCES_LOCAL = 1 << 0,
/*
* Hint that IO will be executed synchronously.
*
* This can make it a bit cheaper to execute synchronous IO via the AIO
* interface, to avoid needing an AIO and non-AIO version of code.
*
* Advantageous to set, if applicable, but not required for correctness.
*/
PGAIO_HF_SYNCHRONOUS = 1 << 1,
/*
* IO is using buffered IO, used to control heuristic in some IO methods.
*
* Advantageous to set, if applicable, but not required for correctness.
*/
PGAIO_HF_BUFFERED = 1 << 2,
} PgAioHandleFlags;
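/*
 * Illustrative sketch (not part of this commit): a read via the AIO interface
 * into backend-local memory that the caller knows will execute synchronously
 * might, with "ioh" being a handle returned by pgaio_io_acquire(), set its
 * flags like this:
 *
 *     pgaio_io_set_flag(ioh, PGAIO_HF_REFERENCES_LOCAL);
 *     pgaio_io_set_flag(ioh, PGAIO_HF_SYNCHRONOUS);
 */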
/*
* The IO operations supported by the AIO subsystem.
*
* This could be in aio_internal.h, as it is not publicly referenced, but
* PgAioOpData currently *does* need to be public, therefore keeping this
* public seems to make sense.
*/
typedef enum PgAioOp
{
/* intentionally the zero value, to help catch zeroed memory etc */
PGAIO_OP_INVALID = 0,
PGAIO_OP_READV,
PGAIO_OP_WRITEV,
/**
* In the near term we'll need at least:
* - fsync / fdatasync
* - flush_range
*
* Eventually we'll additionally want at least:
* - send
* - recv
* - accept
**/
} PgAioOp;
#define PGAIO_OP_COUNT (PGAIO_OP_WRITEV + 1)
/*
* On what is IO being performed?
*
* PgAioTargetID specific behaviour should be implemented in
* aio_target.c.
*/
typedef enum PgAioTargetID
{
/* intentionally the zero value, to help catch zeroed memory etc */
PGAIO_TID_INVALID = 0,
} PgAioTargetID;
#define PGAIO_TID_COUNT (PGAIO_TID_INVALID + 1)
/*
* Data necessary for the supported IO operations (see PgAioOp).
*
* NB: Note that the FDs in here may *not* be relied upon for re-issuing
* requests (e.g. for partial reads/writes or in an IO worker) - the FD might
* be from another process, or closed since. That's not a problem for staged
* IOs, as all staged IOs are submitted when closing an FD.
*/
typedef union
{
struct
{
int fd;
uint16 iov_length;
uint64 offset;
} read;
struct
{
int fd;
uint16 iov_length;
uint64 offset;
} write;
} PgAioOpData;
/*
* Information about the object that IO is executed on. Mostly callbacks that
* operate on PgAioTargetData.
*
* typedef is in aio_types.h
*/
struct PgAioTargetInfo
{
/*
* To support executing using worker processes, the file descriptor for an
* IO may need to be reopened in a different process.
*/
void (*reopen) (PgAioHandle *ioh);
/* describe the target of the IO, used for log messages and views */
char *(*describe_identity) (const PgAioTargetData *sd);
/* name of the target, used in log messages / views */
const char *name;
};
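/*
 * Illustrative sketch (this commit adds no targets): a later commit adding
 * e.g. an smgr target is expected to provide a PgAioTargetInfo along these
 * lines and add it to the PgAioTargetID-indexed array in aio_target.c; all
 * names below are placeholders:
 *
 *     static const PgAioTargetInfo example_target_info = {
 *         .name = "example",
 *         .reopen = example_aio_reopen,
 *         .describe_identity = example_aio_describe_identity,
 *     };
 */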
/*
* IDs for callbacks that can be registered on an IO.
*
* Callbacks are identified by an ID rather than a function pointer. There are
* two main reasons:
*
* 1) Memory within PgAioHandle is precious, due to the number of PgAioHandle
* structs in pre-allocated shared memory.
*
* 2) Due to EXEC_BACKEND, function pointers are not necessarily stable between
* different backends, therefore function pointers cannot directly be stored in
* shared memory.
*
* Without 2), we could fairly easily allow new callbacks to be added, by
* filling an ID->pointer mapping table on demand. In the presence of 2) that's
* still doable, but harder, because every process has to re-register the
* pointers so that a local ID->"backend local pointer" mapping can be
* maintained.
*/
typedef enum PgAioHandleCallbackID
{
PGAIO_HCB_INVALID,
} PgAioHandleCallbackID;
typedef void (*PgAioHandleCallbackStage) (PgAioHandle *ioh, uint8 cb_flags);
typedef PgAioResult (*PgAioHandleCallbackComplete) (PgAioHandle *ioh, PgAioResult prior_result, uint8 cb_flags);
typedef void (*PgAioHandleCallbackReport) (PgAioResult result, const PgAioTargetData *target_data, int elevel);
/* typedef is in aio_types.h */
struct PgAioHandleCallbacks
{
/*
* Prepare resources affected by the IO for execution. This could e.g.
* include moving ownership of buffer pins to the AIO subsystem.
*/
PgAioHandleCallbackStage stage;
/*
* Update the state of resources affected by the IO to reflect completion
* of the IO. This could e.g. include updating shared buffer state to
* signal the IO has finished.
*
* The _shared suffix indicates that this is executed by the backend that
* completed the IO, which may or may not be the backend that issued the
* IO. Obviously the callback thus can only modify resources in shared
* memory.
*
* The most recently registered callback is called first. This allows
* higher-level code to register callbacks that can rely on callbacks
* registered by lower-level code having already been executed.
*
* NB: This is called in a critical section. Errors can be signalled by
* the callback's return value; it is the responsibility of the IO's issuer
* to react appropriately.
*/
PgAioHandleCallbackComplete complete_shared;
/*
* Like complete_shared, except called in the issuing backend.
*
* This variant of the completion callback is useful when backend-local
* state has to be updated to reflect the IO's completion. E.g. a
* temporary buffer's BufferDesc isn't accessible in complete_shared.
*
* Local callbacks are only called after complete_shared for all
* registered callbacks has been called.
*/
PgAioHandleCallbackComplete complete_local;
/*
* Report the result of an IO operation. This is e.g. used to raise an
* error after an IO failed at the appropriate time (i.e. not when the IO
* failed, but under control of the code that issued the IO).
*/
PgAioHandleCallbackReport report;
};
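/*
 * Illustrative sketch (this commit adds no callbacks): later commits are
 * expected to define callback sets like the below and map them to
 * PgAioHandleCallbackID values in aio_callback.c, so they can be attached to
 * an IO with pgaio_io_register_callbacks(); all names are placeholders:
 *
 *     static const PgAioHandleCallbacks example_readv_cb = {
 *         .stage = example_readv_stage,
 *         .complete_shared = example_readv_complete,
 *         .report = example_readv_report,
 *     };
 */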
/*
* How many callbacks can be registered for one IO handle. Currently we only
* need two, but it's not hard to imagine needing a few more.
*/
#define PGAIO_HANDLE_MAX_CALLBACKS 4
/* --------------------------------------------------------------------------------
* IO Handles
* --------------------------------------------------------------------------------
*/
/* functions in aio.c */
struct ResourceOwnerData;
extern PgAioHandle *pgaio_io_acquire(struct ResourceOwnerData *resowner, PgAioReturn *ret);
extern PgAioHandle *pgaio_io_acquire_nb(struct ResourceOwnerData *resowner, PgAioReturn *ret);
extern void pgaio_io_release(PgAioHandle *ioh);
struct dlist_node;
extern void pgaio_io_release_resowner(struct dlist_node *ioh_node, bool on_error);
extern void pgaio_io_set_flag(PgAioHandle *ioh, PgAioHandleFlags flag);
extern int pgaio_io_get_id(PgAioHandle *ioh);
extern ProcNumber pgaio_io_get_owner(PgAioHandle *ioh);
extern void pgaio_io_get_wref(PgAioHandle *ioh, PgAioWaitRef *iow);
/* functions in aio_io.c */
struct iovec;
extern int pgaio_io_get_iovec(PgAioHandle *ioh, struct iovec **iov);
extern PgAioOp pgaio_io_get_op(PgAioHandle *ioh);
extern PgAioOpData *pgaio_io_get_op_data(PgAioHandle *ioh);
extern void pgaio_io_prep_readv(PgAioHandle *ioh,
int fd, int iovcnt, uint64 offset);
extern void pgaio_io_prep_writev(PgAioHandle *ioh,
int fd, int iovcnt, uint64 offset);
/* functions in aio_target.c */
extern void pgaio_io_set_target(PgAioHandle *ioh, PgAioTargetID targetid);
extern bool pgaio_io_has_target(PgAioHandle *ioh);
extern PgAioTargetData *pgaio_io_get_target_data(PgAioHandle *ioh);
extern char *pgaio_io_get_target_description(PgAioHandle *ioh);
/* functions in aio_callback.c */
extern void pgaio_io_register_callbacks(PgAioHandle *ioh, PgAioHandleCallbackID cb_id,
uint8 cb_data);
extern void pgaio_io_set_handle_data_64(PgAioHandle *ioh, uint64 *data, uint8 len);
extern void pgaio_io_set_handle_data_32(PgAioHandle *ioh, uint32 *data, uint8 len);
extern uint64 *pgaio_io_get_handle_data(PgAioHandle *ioh, uint8 *len);
/* --------------------------------------------------------------------------------
* IO Wait References
* --------------------------------------------------------------------------------
*/
extern void pgaio_wref_clear(PgAioWaitRef *iow);
extern bool pgaio_wref_valid(PgAioWaitRef *iow);
extern int pgaio_wref_get_id(PgAioWaitRef *iow);
extern void pgaio_wref_wait(PgAioWaitRef *iow);
extern bool pgaio_wref_check_done(PgAioWaitRef *iow);
/* --------------------------------------------------------------------------------
* IO Result
* --------------------------------------------------------------------------------
*/
extern void pgaio_result_report(PgAioResult result, const PgAioTargetData *target_data,
int elevel);
/* --------------------------------------------------------------------------------
* Actions on multiple IOs.
* --------------------------------------------------------------------------------
*/
extern void pgaio_enter_batchmode(void);
extern void pgaio_exit_batchmode(void);
extern void pgaio_submit_staged(void);
extern bool pgaio_have_staged(void);
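/*
 * Hypothetical usage sketch: to give the IO method a chance to submit several
 * IOs at once, a caller can define multiple IOs inside a batch:
 *
 *     pgaio_enter_batchmode();
 *     ... acquire and prep several IOs ...
 *     pgaio_exit_batchmode();
 *
 * Staged-but-unsubmitted IOs can also be flushed explicitly with
 * pgaio_submit_staged().
 */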
/* --------------------------------------------------------------------------------
* Other
* --------------------------------------------------------------------------------
*/
extern void pgaio_closing_fd(int fd);
/* GUCs */
extern PGDLLIMPORT int io_method;

src/include/storage/aio_internal.h

@@ -0,0 +1,395 @@
/*-------------------------------------------------------------------------
*
* aio_internal.h
* AIO related declarations that should only be used by the AIO subsystem
* internally.
*
*
* Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* src/include/storage/aio_internal.h
*
*-------------------------------------------------------------------------
*/
#ifndef AIO_INTERNAL_H
#define AIO_INTERNAL_H
#include "lib/ilist.h"
#include "port/pg_iovec.h"
#include "storage/aio.h"
#include "storage/condition_variable.h"
/*
* The maximum number of IOs that can be batch submitted at once.
*/
#define PGAIO_SUBMIT_BATCH_SIZE 32
/*
* State machine for handles. With some exceptions, noted below, handles move
* linearly through all states.
*
* State changes should all go through pgaio_io_update_state().
*/
typedef enum PgAioHandleState
{
/* not in use */
PGAIO_HS_IDLE = 0,
/*
* Returned by pgaio_io_acquire(). The next state is either DEFINED (if
* pgaio_io_prep_*() is called), or IDLE (if pgaio_io_release() is
* called).
*/
PGAIO_HS_HANDED_OUT,
/*
* pgaio_io_prep_*() has been called, but IO is not yet staged. At this
* point the handle has all the information for the IO to be executed.
*/
PGAIO_HS_DEFINED,
/*
* stage() callbacks have been called; the handle is ready to be submitted for
* execution. Unless in batchmode (cf. pgaio_enter_batchmode()), the
* IO will be submitted immediately afterwards.
*/
PGAIO_HS_STAGED,
/* IO has been submitted to the IO method for execution */
PGAIO_HS_SUBMITTED,
/* IO finished, but result has not yet been processed */
PGAIO_HS_COMPLETED_IO,
/*
* IO completed, shared completion has been called.
*
* If the IO completion occurs in the issuing backend, local callbacks
* will immediately be called. Otherwise the handle stays in
* COMPLETED_SHARED until the issuing backend waits for the completion of
* the IO.
*/
PGAIO_HS_COMPLETED_SHARED,
/*
* IO completed, local completion has been called.
*
* After this the handle will be made reusable and go into IDLE state.
*/
PGAIO_HS_COMPLETED_LOCAL,
} PgAioHandleState;
struct ResourceOwnerData;
/* typedef is in aio_types.h */
struct PgAioHandle
{
/* all state updates should go through pgaio_io_update_state() */
PgAioHandleState state:8;
/* what are we operating on */
PgAioTargetID target:8;
/* which IO operation */
PgAioOp op:8;
/* bitfield of PgAioHandleFlags */
uint8 flags;
uint8 num_callbacks;
/* using the proper type here would use more space */
uint8 callbacks[PGAIO_HANDLE_MAX_CALLBACKS];
/* data forwarded to each callback */
uint8 callbacks_data[PGAIO_HANDLE_MAX_CALLBACKS];
/*
* Length of data associated with handle using
* pgaio_io_set_handle_data_*().
*/
uint8 handle_data_len;
/* XXX: could be optimized out with some pointer math */
int32 owner_procno;
/* raw result of the IO operation */
int32 result;
/**
* In which list the handle is registered, depends on the state:
* - IDLE, in per-backend list
* - HANDED_OUT - not in a list
* - DEFINED - not in a list
* - STAGED - in per-backend staged array
* - SUBMITTED - in issuer's in_flight list
* - COMPLETED_IO - in issuer's in_flight list
* - COMPLETED_SHARED - in issuer's in_flight list
**/
dlist_node node;
struct ResourceOwnerData *resowner;
dlist_node resowner_node;
/* incremented every time the IO handle is reused */
uint64 generation;
/*
* To wait for the IO to complete, other backends can wait on this CV. Note
* that, if in SUBMITTED state, a waiter first needs to check if it needs
* to do work via IoMethodOps->wait_one().
*/
ConditionVariable cv;
/* result of shared callback, passed to issuer callback */
PgAioResult distilled_result;
/*
* Index into PgAioCtl->iovecs and PgAioCtl->handle_data.
*
* At the moment there's no need to differentiate between the two, but
* that won't necessarily stay that way.
*/
uint32 iovec_off;
/*
* If not NULL, this memory location will be updated with information
* about the IO's completion iff the issuing backend learns about the IO's
* completion.
*/
PgAioReturn *report_return;
/* Data necessary for the IO to be performed */
PgAioOpData op_data;
/*
* Data necessary to identify the object undergoing IO to higher-level
* code. Needs to be sufficient to allow another backend to reopen the
* file.
*/
PgAioTargetData target_data;
};
typedef struct PgAioBackend
{
/* index into PgAioCtl->io_handles */
uint32 io_handle_off;
/* IO Handles that currently are not used */
dclist_head idle_ios;
/*
* Only one IO may be returned by pgaio_io_acquire()/pgaio_io_acquire_nb()
* without having been either defined (by actually associating it with IO)
* or released (with pgaio_io_release()). This restriction is necessary to
* guarantee that we can always acquire an IO. ->handed_out_io is used to
* enforce that rule.
*/
PgAioHandle *handed_out_io;
/* Are we currently in batchmode? See pgaio_enter_batchmode(). */
bool in_batchmode;
/*
* IOs that are defined, but not yet submitted.
*/
uint16 num_staged_ios;
PgAioHandle *staged_ios[PGAIO_SUBMIT_BATCH_SIZE];
/*
* List of in-flight IOs. Also contains IOs that aren't strictly speaking
* in-flight anymore, but have been waited-for and completed by another
* backend. Once this backend sees such an IO it'll be reclaimed.
*
* The list is ordered by submission time, with more recently submitted
* IOs being appended at the end.
*/
dclist_head in_flight_ios;
} PgAioBackend;
typedef struct PgAioCtl
{
int backend_state_count;
PgAioBackend *backend_state;
/*
* Array of iovec structs. Each iovec is owned by a specific backend. The
* allocation is in PgAioCtl to allow the maximum number of iovecs for
* individual IOs to be configurable with a PGC_POSTMASTER GUC.
*/
uint32 iovec_count;
struct iovec *iovecs;
/*
* For, e.g., an IO covering multiple buffers in shared / temp buffers, we
* need to get Buffer IDs during completion to be able to change the
* BufferDesc state accordingly. This space can be used to store e.g.
* Buffer IDs. Note that the actual iovec might be shorter than this,
* because we combine neighboring pages into one larger iovec entry.
*/
uint64 *handle_data;
uint32 io_handle_count;
PgAioHandle *io_handles;
} PgAioCtl;
/*
* Callbacks used to implement an IO method.
*/
typedef struct IoMethodOps
{
/* global initialization */
/*
* Amount of additional shared memory to reserve for the io_method. Called
* just like a normal ipci.c style *Size() function. Optional.
*/
size_t (*shmem_size) (void);
/*
* Initialize shared memory. first_time is true if AIO's shared memory was
* just initialized, false otherwise. Optional.
*/
void (*shmem_init) (bool first_time);
/*
* Per-backend initialization. Optional.
*/
void (*init_backend) (void);
/* handling of IOs */
/* optional */
bool (*needs_synchronous_execution) (PgAioHandle *ioh);
/*
* Start executing passed in IOs.
*
* Will not be called if ->needs_synchronous_execution() returned true.
*
* num_staged_ios is <= PGAIO_SUBMIT_BATCH_SIZE.
*
* Always called in a critical section.
*/
int (*submit) (uint16 num_staged_ios, PgAioHandle **staged_ios);
/*
* Wait for the IO to complete. Optional.
*
* If not provided, it needs to be guaranteed that the IO method calls
* pgaio_io_process_completion() without further interaction by the
* issuing backend.
*/
void (*wait_one) (PgAioHandle *ioh,
uint64 ref_generation);
} IoMethodOps;
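/*
 * Illustrative sketch (all function names are placeholders): an IO method is
 * implemented by exporting a filled-in IoMethodOps, as the "sync" method does
 * with pgaio_sync_ops declared below:
 *
 *     const IoMethodOps pgaio_example_ops = {
 *         .shmem_size = pgaio_example_shmem_size,
 *         .shmem_init = pgaio_example_shmem_init,
 *         .submit = pgaio_example_submit,
 *         .wait_one = pgaio_example_wait_one,
 *     };
 */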
/* aio.c */
extern bool pgaio_io_was_recycled(PgAioHandle *ioh, uint64 ref_generation, PgAioHandleState *state);
extern void pgaio_io_stage(PgAioHandle *ioh, PgAioOp op);
extern void pgaio_io_process_completion(PgAioHandle *ioh, int result);
extern void pgaio_io_prepare_submit(PgAioHandle *ioh);
extern bool pgaio_io_needs_synchronous_execution(PgAioHandle *ioh);
extern const char *pgaio_io_get_state_name(PgAioHandle *ioh);
extern const char *pgaio_result_status_string(PgAioResultStatus rs);
extern void pgaio_shutdown(int code, Datum arg);
/* aio_callback.c */
extern void pgaio_io_call_stage(PgAioHandle *ioh);
extern void pgaio_io_call_complete_shared(PgAioHandle *ioh);
extern void pgaio_io_call_complete_local(PgAioHandle *ioh);
/* aio_io.c */
extern void pgaio_io_perform_synchronously(PgAioHandle *ioh);
extern const char *pgaio_io_get_op_name(PgAioHandle *ioh);
/* aio_target.c */
extern bool pgaio_io_can_reopen(PgAioHandle *ioh);
extern void pgaio_io_reopen(PgAioHandle *ioh);
extern const char *pgaio_io_get_target_name(PgAioHandle *ioh);
/*
* The AIO subsystem has fairly verbose debug logging support. This can be
* enabled/disabled at build time. The reason for this is that
* a) the verbosity can make debugging things on higher levels hard
* b) even if logging can be skipped due to elevel checks, it still causes a
* measurable slowdown
*
* XXX: This likely should eventually be disabled by default, at least in
* non-assert builds.
*/
#define PGAIO_VERBOSE 1
/*
* Simple ereport() wrapper that only logs if PGAIO_VERBOSE is defined.
*
* This intentionally still compiles the code, guarded by a constant if (0),
* if verbose logging is disabled, to make it less likely that debug logging
* is silently broken.
*
* The current definition requires passing at least one argument.
*/
#define pgaio_debug(elevel, msg, ...) \
do { \
if (PGAIO_VERBOSE) \
ereport(elevel, \
errhidestmt(true), errhidecontext(true), \
errmsg_internal(msg, \
__VA_ARGS__)); \
} while(0)
/*
* Simple ereport() wrapper. Note that the definition requires passing at
* least one argument.
*/
#define pgaio_debug_io(elevel, ioh, msg, ...) \
pgaio_debug(elevel, "io %-10d|op %-5s|target %-4s|state %-16s: " msg, \
pgaio_io_get_id(ioh), \
pgaio_io_get_op_name(ioh), \
pgaio_io_get_target_name(ioh), \
pgaio_io_get_state_name(ioh), \
__VA_ARGS__)
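/*
 * Example use (with a hypothetical message): the io/op/target/state prefix is
 * filled in automatically, only the trailing format string and its arguments
 * need to be supplied:
 *
 *     pgaio_debug_io(DEBUG3, ioh, "waiting for IO, result: %d", ioh->result);
 */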
#ifdef USE_INJECTION_POINTS
extern void pgaio_io_call_inj(PgAioHandle *ioh, const char *injection_point);
/* just for use in tests, from within injection points */
extern PgAioHandle *pgaio_inj_io_get(void);
#else
#define pgaio_io_call_inj(ioh, injection_point) (void) 0
/*
* no fallback for pgaio_inj_io_get, all code using injection points better be
* guarded by USE_INJECTION_POINTS.
*/
#endif
/* Declarations for the tables of function pointers exposed by each IO method. */
extern PGDLLIMPORT const IoMethodOps pgaio_sync_ops;
extern PGDLLIMPORT const IoMethodOps *pgaio_method_ops;
extern PGDLLIMPORT PgAioCtl *pgaio_ctl;
extern PGDLLIMPORT PgAioBackend *pgaio_my_backend;
#endif /* AIO_INTERNAL_H */

src/include/storage/aio_types.h

@@ -0,0 +1,117 @@
/*-------------------------------------------------------------------------
*
* aio_types.h
* AIO related types that are useful to include separately, to reduce the
* "include burden".
*
*
* Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* src/include/storage/aio_types.h
*
*-------------------------------------------------------------------------
*/
#ifndef AIO_TYPES_H
#define AIO_TYPES_H
#include "storage/block.h"
#include "storage/relfilelocator.h"
typedef struct PgAioHandle PgAioHandle;
typedef struct PgAioHandleCallbacks PgAioHandleCallbacks;
typedef struct PgAioTargetInfo PgAioTargetInfo;
/*
* A reference to an IO that can be used to wait for the IO (using
* pgaio_wref_wait()) to complete.
*
* These can be passed across process boundaries.
*/
typedef struct PgAioWaitRef
{
/* internal ID identifying the specific PgAioHandle */
uint32 aio_index;
/*
* IO handles are reused. To detect if a handle was reused, and thereby
* avoid unnecessarily waiting for a newer IO, each time the handle is
* reused a generation number is increased.
*
* To avoid requiring alignment sufficient for an int64, split the
* generation into two.
*/
uint32 generation_upper;
uint32 generation_lower;
} PgAioWaitRef;
/*
* Information identifying what the IO is being performed on.
*
* This needs sufficient information to
*
* a) Reopen the file for the IO if the IO is executed in a context that
* cannot use the FD provided initially (e.g. because the IO is executed in
* a worker process).
*
* b) Describe the object the IO is performed on in log / error messages.
*/
typedef union PgAioTargetData
{
/* just as an example placeholder for later */
struct
{
uint32 queue_id;
} wal;
} PgAioTargetData;
/*
* The status of an AIO operation.
*/
typedef enum PgAioResultStatus
{
ARS_UNKNOWN, /* not yet completed / uninitialized */
ARS_OK,
ARS_PARTIAL, /* did not fully succeed, but no error */
ARS_ERROR,
} PgAioResultStatus;
/*
* Result of IO operation, visible only to the initiator of IO.
*/
typedef struct PgAioResult
{
/*
* This is of type PgAioHandleCallbackID, but can't use a bitfield of an
* enum, because some compilers treat enums as signed.
*/
uint32 id:8;
/* of type PgAioResultStatus, see above */
uint32 status:2;
/* meaning defined by callback->error */
uint32 error_data:22;
int32 result;
} PgAioResult;
/*
* Combination of PgAioResult with minimal metadata about the IO.
*
* Contains sufficient information to be able, in case the IO [partially]
* fails, to log/raise an error under control of the IO issuing code.
*/
typedef struct PgAioReturn
{
PgAioResult result;
PgAioTargetData target_data;
} PgAioReturn;
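/*
 * Illustrative sketch: the issuing code typically hands a PgAioReturn it owns
 * to pgaio_io_acquire() (declared in aio.h) and, once it knows the IO has
 * completed, raises any error at a severity of its choosing:
 *
 *     if (ioret.result.status == ARS_ERROR)
 *         pgaio_result_report(ioret.result, &ioret.target_data, ERROR);
 */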
#endif /* AIO_TYPES_H */