Add parallel pg_dump option.
New infrastructure is added which creates a set number of workers (threads on Windows, forked processes on Unix). Jobs are then handed out to these workers by the master process as needed. pg_restore is adjusted to use this new infrastructure in place of the old setup which created a new worker for each step on the fly. Parallel dumps acquire a snapshot clone in order to stay consistent, if available.

The parallel option is selected by the -j / --jobs command line parameter of pg_dump.

Joachim Wieland, lightly editorialized by Andrew Dunstan.
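The master/worker scheme described above can be illustrated with a short, self-contained sketch. This is only an illustration of the Unix-side pattern (fork a fixed pool once, then hand each job to whichever worker reports itself idle), not the actual pg_dump code, which also covers the Windows thread case. The names NWORKERS, NJOBS, worker_main, the pipe layout, and the fake "dump an object" work are all invented for the example; it assumes NJOBS >= NWORKERS.

/*
 * Sketch only: a master process that forks a fixed pool of workers and
 * dispatches jobs to idle workers over pipes.  Not the pg_dump source.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

#define NWORKERS 3				/* plays the role of the -j / --jobs setting */
#define NJOBS	 8				/* pretend there are 8 objects to dump */

static void
worker_main(int id, int job_fd, int done_fd)
{
	int			job;

	/* Receive job numbers until the master closes our pipe. */
	while (read(job_fd, &job, sizeof(job)) == (ssize_t) sizeof(job))
	{
		unsigned char idbyte = (unsigned char) id;

		printf("worker %d dumping object %d\n", id, job);
		usleep(100 * 1000);		/* pretend to do the work */
		(void) write(done_fd, &idbyte, 1);	/* report idle; 1-byte write is atomic */
	}
	_exit(0);
}

int
main(void)
{
	int			job_pipe[NWORKERS][2];	/* master -> each worker */
	int			done_pipe[2];			/* all workers -> master */
	int			next_job = 0;

	(void) pipe(done_pipe);

	/* Create the fixed-size pool up front: one pipe and one fork per worker. */
	for (int i = 0; i < NWORKERS; i++)
	{
		(void) pipe(job_pipe[i]);
		if (fork() == 0)
		{
			/* child: keep only its own job-pipe read end and the done pipe */
			for (int j = 0; j <= i; j++)
				close(job_pipe[j][1]);
			close(done_pipe[0]);
			worker_main(i, job_pipe[i][0], done_pipe[1]);
		}
		close(job_pipe[i][0]);
	}
	close(done_pipe[1]);

	/* Prime every worker with one job. */
	for (int i = 0; i < NWORKERS && next_job < NJOBS; i++)
	{
		(void) write(job_pipe[i][1], &next_job, sizeof(next_job));
		next_job++;
	}

	/* Hand out the remaining jobs as workers report themselves idle. */
	for (int finished = 0; finished < NJOBS; finished++)
	{
		unsigned char idle;

		if (read(done_pipe[0], &idle, 1) != 1)
			break;
		if (next_job < NJOBS)
		{
			(void) write(job_pipe[idle][1], &next_job, sizeof(next_job));
			next_job++;
		}
		else
			close(job_pipe[idle][1]);	/* no more work: let that worker exit */
	}

	while (wait(NULL) > 0)		/* reap the pool */
		;
	return 0;
}

The point of the fixed pool is the one the commit message makes: workers are created once and reused, rather than forking a fresh process for every step, and the master decides dynamically which idle worker gets the next object.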
@@ -1433,6 +1433,15 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
     base backup.
    </para>
   </listitem>
+  <listitem>
+   <para>
+    Experiment with the parallel dump and restore modes of both
+    <application>pg_dump</> and <application>pg_restore</> and find the
+    optimal number of concurrent jobs to use. Dumping and restoring in
+    parallel by means of the <option>-j</> option should give you a
+    significantly higher performance over the serial mode.
+   </para>
+  </listitem>
   <listitem>
    <para>
     Consider whether the whole dump should be restored as a single
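For reference, a typical pair of invocations using the option documented in the hunk above might look like the following; the database names, the dump directory, and the job count are placeholders, and a parallel dump requires the directory archive format (-Fd):

    pg_dump -Fd -j 4 -f /path/to/dumpdir mydb
    pg_restore -j 4 -d newdb /path/to/dumpdir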