Mirror of https://github.com/postgres/postgres.git, synced 2025-11-09 06:21:09 +03:00
Perform apply of large transactions by parallel workers.
Currently, for large transactions, the publisher sends the data in multiple streams (changes divided into chunks depending upon logical_decoding_work_mem), and then on the subscriber side, the apply worker writes the changes into temporary files and, once it receives the commit, it reads from those files and applies the entire transaction. To improve the performance of such transactions, we can instead allow them to be applied via parallel workers.

In this approach, we assign a new parallel apply worker (if available) as soon as the xact's first stream is received, and the leader apply worker sends changes to this new worker via shared memory. The parallel apply worker applies the change directly instead of writing it to temporary files. However, if the leader apply worker times out while attempting to send a message to the parallel apply worker, it switches to "partial serialize" mode - in this mode, the leader serializes all remaining changes to a file and notifies the parallel apply worker to read and apply them at the end of the transaction. We use a non-blocking way to send the messages from the leader apply worker to the parallel apply worker to avoid deadlocks. We keep the parallel apply worker assigned until the transaction commit is received and also wait for the worker to finish at commit. This preserves commit ordering and avoids writing to and reading from files in most cases. We still need to spill if there is no worker available.

This patch also extends the SUBSCRIPTION 'streaming' parameter so that the user can control whether to apply a streaming transaction in a parallel apply worker or spill the changes to disk. The user can set the streaming parameter to 'on', 'off', or 'parallel'. The value 'parallel' means the streaming transaction will be applied via a parallel apply worker, if available. The value 'on' means the streaming transaction will be spilled to disk. The default value is 'off' (same as the current behaviour).

In addition, the patch extends the logical replication STREAM_ABORT message so that abort_lsn and abort_time can also be sent, which can be used to update the replication origin in the parallel apply worker when the streaming transaction is aborted. Because this message extension is needed to support parallel streaming, parallel streaming is not supported for publications on servers < PG16.

Author: Hou Zhijie, Wang wei, Amit Kapila with design inputs from Sawada Masahiko
Reviewed-by: Sawada Masahiko, Peter Smith, Dilip Kumar, Shi yu, Kuroda Hayato, Shveta Mallik
Discussion: https://postgr.es/m/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com
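For illustration, a minimal SQL sketch of how the extended 'streaming' parameter described above would be used (the subscription, publication, and connection names are placeholders, not part of the patch); per the commit message, 'parallel' requests a parallel apply worker if one is available, and changes are still spilled to disk when no worker can be assigned:

    -- Request that large (streamed) transactions be applied by a parallel
    -- apply worker when one is available.
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher port=5432 dbname=postgres'
        PUBLICATION mypub
        WITH (streaming = parallel);

    -- An existing subscription can be switched the same way.
    ALTER SUBSCRIPTION mysub SET (streaming = parallel);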
@@ -1075,12 +1075,20 @@ ReplicationOriginExitCleanup(int code, Datum arg)
  * array doesn't have to be searched when calling
  * replorigin_session_advance().
  *
- * Obviously only one such cached origin can exist per process and the current
- * cached value can only be set again after the previous value is torn down
- * with replorigin_session_reset().
+ * Normally only one such cached origin can exist per process so the cached
+ * value can only be set again after the previous value is torn down with
+ * replorigin_session_reset(). For this normal case pass acquired_by = 0
+ * (meaning the slot is not allowed to be already acquired by another process).
+ *
+ * However, sometimes multiple processes can safely re-use the same origin slot
+ * (for example, multiple parallel apply processes can safely use the same
+ * origin, provided they maintain commit order by allowing only one process to
+ * commit at a time). For this case the first process must pass acquired_by =
+ * 0, and then the other processes sharing that same origin can pass
+ * acquired_by = PID of the first process.
  */
 void
-replorigin_session_setup(RepOriginId node)
+replorigin_session_setup(RepOriginId node, int acquired_by)
 {
 	static bool registered_cleanup;
 	int			i;
@@ -1122,7 +1130,7 @@ replorigin_session_setup(RepOriginId node)
 		if (curstate->roident != node)
 			continue;
 
-		else if (curstate->acquired_by != 0)
+		else if (curstate->acquired_by != 0 && acquired_by == 0)
 		{
 			ereport(ERROR,
 					(errcode(ERRCODE_OBJECT_IN_USE),
@@ -1153,7 +1161,11 @@ replorigin_session_setup(RepOriginId node)
 
 	Assert(session_replication_state->roident != InvalidRepOriginId);
 
-	session_replication_state->acquired_by = MyProcPid;
+	if (acquired_by == 0)
+		session_replication_state->acquired_by = MyProcPid;
+	else if (session_replication_state->acquired_by != acquired_by)
+		elog(ERROR, "could not find replication state slot for replication origin with OID %u which was acquired by %d",
+			 node, acquired_by);
 
 	LWLockRelease(ReplicationOriginLock);
 
@@ -1337,7 +1349,7 @@ pg_replication_origin_session_setup(PG_FUNCTION_ARGS)
 
 	name = text_to_cstring((text *) DatumGetPointer(PG_GETARG_DATUM(0)));
 	origin = replorigin_by_name(name, false);
-	replorigin_session_setup(origin);
+	replorigin_session_setup(origin, 0);
 
 	replorigin_session_origin = origin;
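A minimal C sketch (illustrative only, not code from the patch) of the calling convention described in the new comment above replorigin_session_setup(): the first process passes acquired_by = 0 to acquire the origin slot, and a process sharing that origin passes the first process's PID. The leader_pid value here is assumed to come from whatever shared state the leader publishes; it is not defined by this hunk.

    #include "postgres.h"
    #include "replication/origin.h"

    /* Leader: acquire the origin slot; it must not already be acquired. */
    static void
    leader_setup_origin(RepOriginId originid)
    {
    	replorigin_session_setup(originid, 0);
    }

    /*
     * Sharing process (e.g. a parallel apply worker): reuse the slot the
     * leader already acquired by passing the leader's PID, so the
     * ERRCODE_OBJECT_IN_USE check is not raised for that slot.
     */
    static void
    sharing_process_setup_origin(RepOriginId originid, int leader_pid)
    {
    	replorigin_session_setup(originid, leader_pid);
    }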