Mirror of https://github.com/postgres/postgres.git

Add parallel pg_dump option.

New infrastructure is added which creates a set number of workers
(threads on Windows, forked processes on Unix). Jobs are then
handed out to these workers by the master process as needed.
pg_restore is adjusted to use this new infrastructure in place of the
old setup which created a new worker for each step on the fly. Parallel
dumps acquire a snapshot clone, where available, in order to stay
consistent.
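
For illustration only, here is a minimal, self-contained sketch of the
pool-plus-dispatch pattern described above (hypothetical names, Unix
fork/pipe only, no snapshot handling). It is not the code added by this
commit, which also covers Windows threads, idle-worker tracking, and
error propagation.

/*
 * Illustrative sketch: a master process forks a fixed pool of workers
 * and hands out jobs over pipes, instead of forking a new worker for
 * every step.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

#define NWORKERS 3
#define NJOBS    9

static void
worker_loop(int jobfd)
{
	char		buf[256];
	ssize_t		n;

	/* Consume job messages until the master closes its end of the pipe. */
	while ((n = read(jobfd, buf, sizeof(buf) - 1)) > 0)
	{
		buf[n] = '\0';
		printf("worker %d: processing %s\n", (int) getpid(), buf);
	}
	_exit(0);
}

int
main(void)
{
	int			pipes[NWORKERS][2];
	int			i;

	for (i = 0; i < NWORKERS; i++)
	{
		if (pipe(pipes[i]) < 0)
		{
			perror("pipe");
			exit(1);
		}
		if (fork() == 0)
		{
			int			j;

			/* Worker keeps only the read end of its own pipe. */
			for (j = 0; j <= i; j++)
				close(pipes[j][1]);
			worker_loop(pipes[i][0]);
		}
		close(pipes[i][0]);		/* master keeps only the write ends */
	}

	/*
	 * Hand out jobs round-robin.  The real master instead tracks idle
	 * workers and assigns the next pending dump or restore item; pipe
	 * messages can also coalesce here, which a real protocol would frame.
	 */
	for (i = 0; i < NJOBS; i++)
	{
		char		msg[32];

		snprintf(msg, sizeof(msg), "job-%d ", i);
		if (write(pipes[i % NWORKERS][1], msg, strlen(msg)) < 0)
			perror("write");
	}

	for (i = 0; i < NWORKERS; i++)
		close(pipes[i][1]);		/* workers see EOF and exit */
	while (wait(NULL) > 0)
		;
	return 0;
}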

The parallel option is selected by the -j / --jobs command line
parameter of pg_dump.
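
As a usage illustration (database name and output path are hypothetical),
a four-way parallel dump to a directory-format archive, the archive format
that supports parallel dumps, could look like:

    pg_dump --jobs=4 --format=directory --file=/path/to/dumpdir mydb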

Joachim Wieland, lightly editorialized by Andrew Dunstan.
Committed by Andrew Dunstan on 2013-03-24 11:27:20 -04:00
commit 9e257a181c, parent 3b91fe185a
22 changed files with 2776 additions and 830 deletions

src/bin/pg_dump/pg_backup_tar.c

@@ -158,6 +158,12 @@ InitArchiveFmt_Tar(ArchiveHandle *AH)
 	AH->ClonePtr = NULL;
 	AH->DeClonePtr = NULL;
 
+	AH->MasterStartParallelItemPtr = NULL;
+	AH->MasterEndParallelItemPtr = NULL;
+
+	AH->WorkerJobDumpPtr = NULL;
+	AH->WorkerJobRestorePtr = NULL;
+
 	/*
 	 * Set up some special context used in compressing data.
 	 */
@@ -828,7 +834,7 @@ _CloseArchive(ArchiveHandle *AH)
 	/*
 	 * Now send the data (tables & blobs)
 	 */
-	WriteDataChunks(AH);
+	WriteDataChunks(AH, NULL);
 
 	/*
 	 * Now this format wants to append a script which does a full restore