Add parallel pg_dump option.
New infrastructure is added which creates a set number of workers (threads on Windows, forked processes on Unix). Jobs are then handed out to these workers by the master process as needed. pg_restore is adjusted to use this new infrastructure in place of the old setup which created a new worker for each step on the fly. Parallel dumps acquire a snapshot clone in order to stay consistent, if available.

The parallel option is selected by the -j / --jobs command line parameter of pg_dump.

Joachim Wieland, lightly editorialized by Andrew Dunstan.
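As a usage illustration (not part of the commit text): a parallel dump requires the directory archive format, since that is the only format several worker processes can write at once; the database name and output path below are placeholders.

	pg_dump --jobs=4 --format=directory --file=/path/to/dumpdir mydb

-j 4 is equivalent to --jobs=4, and pg_restore accepts the same flag to run a restore with multiple workers.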
@@ -158,6 +158,12 @@ InitArchiveFmt_Tar(ArchiveHandle *AH)
 	AH->ClonePtr = NULL;
 	AH->DeClonePtr = NULL;
 
+	AH->MasterStartParallelItemPtr = NULL;
+	AH->MasterEndParallelItemPtr = NULL;
+
+	AH->WorkerJobDumpPtr = NULL;
+	AH->WorkerJobRestorePtr = NULL;
+
 	/*
 	 * Set up some special context used in compressing data.
 	 */
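The hunk above is from the tar archive format (pg_backup_tar.c), which does not support parallel operation, so it leaves every one of the new parallel hooks NULL. The sketch below, with hypothetical names throughout (it is not pg_dump code), only illustrates that convention: a dispatcher can treat a NULL hook as "this format is serial only" and run the step inline.

	#include <stdio.h>
	#include <stddef.h>

	/* Hypothetical, cut-down stand-in for ArchiveHandle's format hooks. */
	typedef struct FormatHooks
	{
		/* NULL means the format cannot run dump jobs in a worker */
		void		(*WorkerJobDumpPtr) (const char *item);
	} FormatHooks;

	static void
	worker_dump(const char *item)
	{
		printf("worker dumps %s\n", item);
	}

	static void
	dispatch_dump_item(const FormatHooks *hooks, const char *item)
	{
		if (hooks->WorkerJobDumpPtr == NULL)
		{
			/* serial fallback: run the step in the master process */
			printf("master dumps %s serially\n", item);
			return;
		}
		hooks->WorkerJobDumpPtr(item);	/* hand the job to a worker */
	}

	int
	main(void)
	{
		FormatHooks directory = {worker_dump};	/* parallel-capable format */
		FormatHooks tar = {NULL};	/* tar, as initialized in the hunk above */

		dispatch_dump_item(&directory, "table_a");
		dispatch_dump_item(&tar, "table_b");
		return 0;
	}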
@@ -828,7 +834,7 @@ _CloseArchive(ArchiveHandle *AH)
 	/*
 	 * Now send the data (tables & blobs)
 	 */
-	WriteDataChunks(AH);
+	WriteDataChunks(AH, NULL);
 
 	/*
 	 * Now this format wants to append a script which does a full restore
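Beyond the NULL hooks, the only change the tar format needs is mechanical: WriteDataChunks now takes a second argument carrying the parallel state, and passing NULL requests the old single-process behaviour. A sketch of that calling convention, again with hypothetical type and function names:

	#include <stdio.h>
	#include <stddef.h>

	/* Hypothetical stand-in for the parallel state handed to WriteDataChunks. */
	typedef struct ParallelStateSketch
	{
		int			numWorkers;
	} ParallelStateSketch;

	static void
	write_data_chunks(const char *fmt, const ParallelStateSketch *pstate)
	{
		if (pstate == NULL)
		{
			/* NULL state: behave exactly like the old one-argument call */
			printf("%s: writing table and blob data serially\n", fmt);
			return;
		}
		printf("%s: distributing data chunks across %d workers\n",
			   fmt, pstate->numWorkers);
	}

	int
	main(void)
	{
		ParallelStateSketch pstate = {4};

		write_data_chunks("directory", &pstate);	/* parallel-capable caller */
		write_data_chunks("tar", NULL);	/* tar's _CloseArchive, as in the hunk */
		return 0;
	}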