
Revert Non text modes for pg_dumpall, and pg_restore support

Recent discussions of the mechanisms used to manage global data have
raised concerns about their robustness and security. Rather than try
to deal with those concerns at a very late stage of the release cycle,
the conclusion is to revert these features and work on them for the
next release.

This reverts parts or all of the following commits:

1495eff7bd Non text modes for pg_dumpall, correspondingly change pg_restore
5db3bf7391 Clean up from commit 1495eff7bd
289f74d0cb Add more TAP tests for pg_dumpall
2ef5790806 Fix a couple of error messages and tests for them
b52a4a5f28 Clean up error messages from 1495eff7bd
4170298b6e Further cleanup for directory creation on pg_dump/pg_dumpall
22cb6d2895 Fix memory leak in pg_restore.c
928394b664 Improve various new-to-v18 appendStringInfo calls
39729ec01d Fix fat fingering in 22cb6d2895
5822bf21d5 Add missing space in pg_restore documentation.
f09088a01d Free memory properly in pg_restore.c
40b9c27014 pg_restore cleanups
4aad2cb770 Portability fix: isdigit() must be passed an unsigned char.
88e947136b Fix typos and grammar in the code
f60420cff6 doc: Alphabetize long options for pg_dump[all].
bc35adee8d doc: Put new options in consistent order on man pages
a876464abc Message style improvements
dec6643487 Improve pg_dump/pg_dumpall help synopses and terminology
0ebd242555 Run pgperltidy

Discussion: https://postgr.es/m/20250708212819.09.nmisch@google.com

Backpatch-to: 18
Reviewed-by: Noah Misch <noah@leadboat.com>
Andrew Dunstan
2025-07-30 11:04:05 -04:00
parent cd2d52cc6b
commit 4a9ee867bf
13 changed files with 85 additions and 1540 deletions

View File

@@ -16,10 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
<refpurpose>
export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
</refpurpose>
<refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -36,7 +33,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
of a cluster into an SQL script file or an archive. The output contains
of a cluster into one script file. The script file contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -55,16 +52,11 @@ PostgreSQL documentation
</para>
<para>
Plain text SQL scripts will be written to the standard output. Use the
The SQL script will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
<para>
Archives in other formats will be placed in a directory named using the
<option>-f</option>/<option>--file</option>, which is required in this case.
</para>
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -129,81 +121,6 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
Note: This option can only be omitted when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><option>-F <replaceable class="parameter">format</replaceable></option></term>
<term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
<listitem>
<para>
Specify the format of dump files. In plain format, all the dump data is
sent in a single text stream. This is the default.
In all other modes, <application>pg_dumpall</application> first creates two files:
<filename>global.dat</filename> and <filename>map.dat</filename>, in the directory
specified by <option>--file</option>.
The first file contains global data, such as roles and tablespaces. The second
contains a mapping between database OIDs and names. These files are used by
<application>pg_restore</application>. Data for individual databases is placed in
the <filename>databases</filename> subdirectory, with each dump named using the database's <type>oid</type>.
<variablelist>
<varlistentry>
<term><literal>d</literal></term>
<term><literal>directory</literal></term>
<listitem>
<para>
Output directory-format archives for each database,
suitable for input into pg_restore. The directory
will have database <type>oid</type> as its name.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>p</literal></term>
<term><literal>plain</literal></term>
<listitem>
<para>
Output a plain-text SQL script file (the default).
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>c</literal></term>
<term><literal>custom</literal></term>
<listitem>
<para>
Output a custom-format archive for each database,
suitable for input into pg_restore. The archive
will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
<type>oid</type> of the database.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>t</literal></term>
<term><literal>tar</literal></term>
<listitem>
<para>
Output a tar-format archive for each database,
suitable for input into pg_restore. The archive
will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
<type>oid</type> of the database.
</para>
</listitem>
</varlistentry>
</variablelist>
Note: see <xref linkend="app-pgdump"/> for details
of how the various non-plain-text archive formats work.
</para>
</listitem>
</varlistentry>
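
For context, the archive layout described above can be illustrated with a short sketch in the style of the removed TAP test; the temporary path is illustrative and pg_dumpall is assumed to be on PATH.

use strict;
use warnings;

# Illustrative path; the removed TAP test used PostgreSQL::Test::Utils::tempdir.
my $tempdir = '/tmp/pg_dumpall_demo';

# Dump the whole cluster in the (now reverted) directory format.
system('pg_dumpall', '--format' => 'directory', '--file' => "$tempdir/cluster_dump") == 0
    or die "pg_dumpall failed: $?";

# The reverted code laid the archive out roughly as:
#   cluster_dump/global.dat      -- SQL commands for roles, tablespaces and other globals
#   cluster_dump/map.dat         -- one "<oid> <dbname>" line per database
#   cluster_dump/databases/<oid> -- a pg_dump archive per database, named by its OID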

View File

@@ -18,9 +18,8 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
restore <productname>PostgreSQL</productname> databases from archives
created by <application>pg_dump</application> or
<application>pg_dumpall</application>
restore a <productname>PostgreSQL</productname> database from an
archive file created by <application>pg_dump</application>
</refpurpose>
</refnamediv>
@@ -39,14 +38,13 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
<productname>PostgreSQL</productname> database or cluster from an archive
created by <xref linkend="app-pgdump"/> or
<xref linkend="app-pg-dumpall"/> in one of the non-plain-text
<productname>PostgreSQL</productname> database from an archive
created by <xref linkend="app-pgdump"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
database or cluster to the state it was in at the time it was saved. The
archives also allow <application>pg_restore</application> to
database to the state it was in at the time it was saved. The
archive files also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
prior to being restored. The archive formats are designed to be
prior to being restored. The archive files are designed to be
portable across architectures.
</para>
@@ -54,17 +52,10 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
the database.
When restoring from a dump made by <application>pg_dumpall</application>,
each database will be created and then the restoration will be run in that
database.
Otherwise, when a database name is not specified, a script containing the SQL
commands necessary to rebuild the database or cluster is created and written
the database. Otherwise, a script containing the SQL
commands necessary to rebuild the database is created and written
to a file or standard output. This script output is equivalent to
the plain text output format of <application>pg_dump</application> or
<application>pg_dumpall</application>.
the plain text output format of <application>pg_dump</application>.
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -149,8 +140,6 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
<option>--create</option> is required when restoring multiple databases
from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -246,19 +235,6 @@ PostgreSQL documentation
</listitem>
</varlistentry>
<varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
<listitem>
<para>
Restore only global objects (roles and tablespaces), no databases.
</para>
<para>
This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -603,28 +579,6 @@ PostgreSQL documentation
</listitem>
</varlistentry>
<varlistentry>
<term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
<listitem>
<para>
Do not restore databases whose name matches
<replaceable class="parameter">pattern</replaceable>.
Multiple patterns can be excluded by writing multiple
<option>--exclude-database</option> switches. The
<replaceable class="parameter">pattern</replaceable> parameter is
interpreted as a pattern according to the same rules used by
<application>psql</application>'s <literal>\d</literal>
commands (see <xref linkend="app-psql-patterns"/>),
so multiple databases can also be excluded by writing wildcard
characters in the pattern. When using wildcards, be careful to
quote the pattern if needed to prevent shell wildcard expansion.
</para>
<para>
This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>

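A companion sketch of the restore side described above, again following the removed TAP test's conventions (the connection database and exclude pattern are illustrative): restoring a pg_dumpall archive required -C/--create, while -g/--globals-only and --exclude-database applied only to such archives.

use strict;
use warnings;

my $dumpdir = '/tmp/pg_dumpall_demo/cluster_dump';    # illustrative path

# Restore every database from the pg_dumpall archive; -C/--create was required
# so that each database could be created before its contents were restored.
system('pg_restore', '-C', '-d' => 'postgres',
       '--exclude-database' => 'skipme*',             # skip databases matching the pattern
       $dumpdir) == 0
    or die "pg_restore failed: $?";

# Restore only the global objects (roles, tablespaces) from global.dat.
system('pg_restore', '--globals-only', '-d' => 'postgres', $dumpdir) == 0
    or die "pg_restore failed: $?";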
View File

@@ -102,7 +102,6 @@ tests += {
't/003_pg_dump_with_server.pl',
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
't/006_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},

View File

@@ -333,16 +333,6 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
/*
* When pg_restore restores multiple databases, update the entry already added
* to the array used for cleanup.
*/
void
replace_on_exit_close_archive(Archive *AHX)
{
shutdown_info.AHX = AHX;
}
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.

View File

@@ -308,7 +308,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
extern void RestoreArchive(Archive *AHX, bool append_data);
extern void RestoreArchive(Archive *AHX);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);

View File

@@ -85,7 +85,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
const pg_compress_specification compression_spec, bool append_data);
const pg_compress_specification compression_spec);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -337,14 +337,9 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
/*
* RestoreArchive
*
* If append_data is set, append data to the output file, as we are restoring a
* dump of multiple databases that was taken by pg_dumpall.
*/
/* Public */
void
RestoreArchive(Archive *AHX, bool append_data)
RestoreArchive(Archive *AHX)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -461,7 +456,7 @@ RestoreArchive(Archive *AHX, bool append_data)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
SetOutput(AH, ropt->filename, ropt->compression_spec);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1300,7 +1295,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
SetOutput(AH, ropt->filename, out_compression_spec, false);
SetOutput(AH, ropt->filename, out_compression_spec);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1679,8 +1674,7 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
const pg_compress_specification compression_spec,
bool append_data)
const pg_compress_specification compression_spec)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1700,7 +1694,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
if (append_data || AH->mode == archModeAppend)
if (AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;

View File

@@ -394,7 +394,6 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);

View File

@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
RestoreArchive((Archive *) AH, false);
RestoreArchive((Archive *) AH);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);

View File

@@ -1227,7 +1227,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
RestoreArchive(fout, false);
RestoreArchive(fout);
CloseArchive(fout);

View File

@@ -65,10 +65,9 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpDatabases(PGconn *conn);
static void dumpTimestamp(const char *msg);
static int runPgDump(const char *dbname, const char *create_opts,
char *dbfile, ArchiveFormat archDumpFormat);
static int runPgDump(const char *dbname, const char *create_opts);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -77,7 +76,6 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
@@ -150,7 +148,6 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
{"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -201,8 +198,6 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
ArchiveFormat archDumpFormat = archNull;
const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -252,7 +247,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -280,9 +275,7 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
case 'F':
formatName = pg_strdup(optarg);
break;
case 'g':
globals_only = true;
break;
@@ -431,21 +424,6 @@ main(int argc, char *argv[])
exit_nicely(1);
}
/* Get format for dump. */
archDumpFormat = parseDumpFormat(formatName);
/*
* If a non-plain format is specified, a file name is also required as the
* path to the main directory.
*/
if (archDumpFormat != archNull &&
(!filename || strcmp(filename, "") == 0))
{
pg_log_error("option -F/--format=d|c|t requires option -f/--file");
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
exit_nicely(1);
}
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -510,33 +488,6 @@ main(int argc, char *argv[])
if (sequence_data)
appendPQExpBufferStr(pgdumpopts, " --sequence-data");
/*
* Open the output file if required, otherwise use stdout. If required,
* then create new directory and global.dat file.
*/
if (archDumpFormat != archNull)
{
char global_path[MAXPGPATH];
/* Create new directory or accept the empty existing directory. */
create_or_open_dir(filename);
snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
OPF = fopen(global_path, PG_BINARY_W);
if (!OPF)
pg_fatal("could not open file \"%s\": %m", global_path);
}
else if (filename)
{
OPF = fopen(filename, PG_BINARY_W);
if (!OPF)
pg_fatal("could not open output file \"%s\": %m",
filename);
}
else
OPF = stdout;
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -576,6 +527,19 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
/*
* Open the output file if required, otherwise use stdout
*/
if (filename)
{
OPF = fopen(filename, PG_BINARY_W);
if (!OPF)
pg_fatal("could not open output file \"%s\": %m",
filename);
}
else
OPF = stdout;
/*
* Set the client encoding if requested.
*/
@@ -675,7 +639,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
dumpDatabases(conn, archDumpFormat);
dumpDatabases(conn);
PQfinish(conn);
@@ -688,7 +652,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
if (dosync && (archDumpFormat == archNull))
if (dosync)
(void) fsync_fname(filename, false);
}
@@ -699,14 +663,12 @@ main(int argc, char *argv[])
static void
help(void)
{
printf(_("%s exports a PostgreSQL database cluster as an SQL script or to other formats.\n\n"), progname);
printf(_("%s exports a PostgreSQL database cluster as an SQL script.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
" plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1013,6 +975,9 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
if (PQntuples(res) > 0)
fprintf(OPF, "\n--\n-- User Configurations\n--\n");
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
@@ -1526,7 +1491,6 @@ dumpUserConfig(PGconn *conn, const char *username)
{
PQExpBuffer buf = createPQExpBuffer();
PGresult *res;
static bool header_done = false;
printfPQExpBuffer(buf, "SELECT unnest(setconfig) FROM pg_db_role_setting "
"WHERE setdatabase = 0 AND setrole = "
@@ -1538,13 +1502,7 @@ dumpUserConfig(PGconn *conn, const char *username)
res = executeQuery(conn, buf->data);
if (PQntuples(res) > 0)
{
if (!header_done)
fprintf(OPF, "\n--\n-- User Configurations\n--\n");
header_done = true;
fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", username);
}
for (int i = 0; i < PQntuples(res); i++)
{
@@ -1618,13 +1576,10 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
dumpDatabases(PGconn *conn)
{
PGresult *res;
int i;
char db_subdir[MAXPGPATH];
char dbfilepath[MAXPGPATH];
FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1638,42 +1593,18 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
"SELECT datname, oid "
"SELECT datname "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
if (archDumpFormat == archNull && PQntuples(res) > 0)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
/*
* If directory/tar/custom format is specified, create a subdirectory
* under the main directory and each database dump file or subdirectory
* will be created in that subdirectory by pg_dump.
*/
if (archDumpFormat != archNull)
{
char map_file_path[MAXPGPATH];
snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
/* Create a subdirectory with 'databases' name under main directory. */
if (mkdir(db_subdir, pg_dir_create_mode) != 0)
pg_fatal("could not create directory \"%s\": %m", db_subdir);
snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
/* Create a map file (to store dboid and dbname) */
map_file = fopen(map_file_path, PG_BINARY_W);
if (!map_file)
pg_fatal("could not open file \"%s\": %m", map_file_path);
}
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
char *oid = PQgetvalue(res, i, 1);
const char *create_opts = "";
const char *create_opts;
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1687,26 +1618,8 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
continue;
}
/*
* If this is not a plain format dump, then append dboid and dbname to
* the map.dat file.
*/
if (archDumpFormat != archNull)
{
if (archDumpFormat == archCustom)
snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
else if (archDumpFormat == archTar)
snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
else
snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
/* Put one line entry for dboid and dbname in map file. */
fprintf(map_file, "%s %s\n", oid, dbname);
}
pg_log_info("dumping database \"%s\"", dbname);
if (archDumpFormat == archNull)
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
@@ -1721,40 +1634,32 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
if (output_clean)
create_opts = "--clean --create";
else
{
create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
else if (archDumpFormat == archNull)
fprintf(OPF, "\\connect %s\n\n", dbname);
}
}
else
create_opts = "--create";
if (filename)
fclose(OPF);
ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
ret = runPgDump(dbname, create_opts);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
char global_path[MAXPGPATH];
if (archDumpFormat != archNull)
snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
else
snprintf(global_path, MAXPGPATH, "%s", filename);
OPF = fopen(global_path, PG_BINARY_A);
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
global_path);
filename);
}
}
/* Close map file */
if (archDumpFormat != archNull)
fclose(map_file);
PQclear(res);
}
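
The map.dat file written above contains one "<oid> <dbname>" line per dumped database; a minimal sketch of reading it back (a hypothetical Perl helper, mirroring what get_dbname_oid_list_from_mfile in the reverted pg_restore.c does in C):

use strict;
use warnings;

# Hypothetical helper: parse map.dat into oid => dbname pairs.
sub read_map_file
{
    my ($path) = @_;
    my %db_by_oid;

    open my $fh, '<', $path or die "could not open $path: $!";
    while (my $line = <$fh>)
    {
        chomp $line;

        # Each line is "<oid> <dbname>"; the name is the rest of the line.
        my ($oid, $dbname) = $line =~ /^(\d+) (.+)$/
            or die "invalid entry in $path: $line";
        $db_by_oid{$oid} = $dbname;
    }
    close $fh;
    return \%db_by_oid;
}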
@@ -1764,8 +1669,7 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
* Run pg_dump on dbname, with specified options.
*/
static int
runPgDump(const char *dbname, const char *create_opts, char *dbfile,
ArchiveFormat archDumpFormat)
runPgDump(const char *dbname, const char *create_opts)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1774,24 +1678,6 @@ runPgDump(const char *dbname, const char *create_opts, char *dbfile,
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
/*
* If this is not a plain-format dump, append the output file name and dump
* format to the pg_dump command so that it produces an archive.
*/
if (archDumpFormat != archNull)
{
printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
dbfile, create_opts);
if (archDumpFormat == archDirectory)
appendPQExpBufferStr(&cmd, " --format=directory ");
else if (archDumpFormat == archCustom)
appendPQExpBufferStr(&cmd, " --format=custom ");
else if (archDumpFormat == archTar)
appendPQExpBufferStr(&cmd, " --format=tar ");
}
else
{
printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
pgdumpopts->data, create_opts);
@@ -1803,7 +1689,6 @@ runPgDump(const char *dbname, const char *create_opts, char *dbfile,
appendPQExpBufferStr(&cmd, " -Fa ");
else
appendPQExpBufferStr(&cmd, " -Fp ");
}
/*
* Append the database name to the already-constructed stem of connection
@@ -1948,36 +1833,3 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
/*
* parseDumpFormat
*
* This will validate dump formats.
*/
static ArchiveFormat
parseDumpFormat(const char *format)
{
ArchiveFormat archDumpFormat;
if (pg_strcasecmp(format, "c") == 0)
archDumpFormat = archCustom;
else if (pg_strcasecmp(format, "custom") == 0)
archDumpFormat = archCustom;
else if (pg_strcasecmp(format, "d") == 0)
archDumpFormat = archDirectory;
else if (pg_strcasecmp(format, "directory") == 0)
archDumpFormat = archDirectory;
else if (pg_strcasecmp(format, "p") == 0)
archDumpFormat = archNull;
else if (pg_strcasecmp(format, "plain") == 0)
archDumpFormat = archNull;
else if (pg_strcasecmp(format, "t") == 0)
archDumpFormat = archTar;
else if (pg_strcasecmp(format, "tar") == 0)
archDumpFormat = archTar;
else
pg_fatal("unrecognized output format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
format);
return archDumpFormat;
}

View File

@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is a utility extracting postgres database definitions
* from a backup archive created by pg_dump/pg_dumpall using the archiver
* from a backup archive created by pg_dump using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,15 +41,11 @@
#include "postgres_fe.h"
#include <ctype.h>
#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
#include "common/string.h"
#include "connectdb.h"
#include "fe_utils/option_utils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
@@ -57,43 +53,18 @@
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
static bool file_exists_in_directory(const char *dir, const char *filename);
static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
int numWorkers, bool append_data, int num);
static int read_one_statement(StringInfo inBuf, FILE *pfile);
static int restore_all_databases(PGconn *conn, const char *dumpdirpath,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
const char *outfile);
static void copy_or_print_global_file(const char *outfile, FILE *pfile);
static int get_dbnames_list_to_restore(PGconn *conn,
SimplePtrList *dbname_oid_list,
SimpleStringList db_exclude_patterns);
static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
SimplePtrList *dbname_oid_list);
/*
* Stores a database OID and the corresponding name.
*/
typedef struct DbOidName
{
Oid oid;
char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
} DbOidName;
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
int exit_code;
int numWorkers = 1;
Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
int n_errors = 0;
bool globals_only = false;
SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -119,7 +90,6 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
{"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -174,7 +144,6 @@ main(int argc, char **argv)
{"with-statistics", no_argument, &with_statistics, 1},
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
{"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -203,7 +172,7 @@ main(int argc, char **argv)
}
}
while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -230,14 +199,11 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
case 'g':
/* restore only global.dat file from directory */
globals_only = true;
break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -352,9 +318,6 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
case 6: /* database patterns to skip */
simple_string_list_append(&db_exclude_patterns, optarg);
break;
default:
/* getopt_long already emitted a complaint */
@@ -382,13 +345,6 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
if (db_exclude_patterns.head != NULL && globals_only)
{
pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
exit_nicely(1);
}
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -496,114 +452,6 @@ main(int argc, char **argv)
opts->formatName);
}
/*
* If toc.dat file is not present in the current path, then check for
* global.dat. If global.dat file is present, then restore all the
* databases from map.dat (if it exists), but skip restoring those
* matching --exclude-database patterns.
*/
if (inputFileSpec != NULL && !file_exists_in_directory(inputFileSpec, "toc.dat") &&
file_exists_in_directory(inputFileSpec, "global.dat"))
{
PGconn *conn = NULL; /* Connection to restore global sql
* commands. */
/*
* Can only use --list or --use-list options with a single database
* dump.
*/
if (opts->tocSummary)
pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
else if (opts->tocFile)
pg_fatal("option -L/--use-list cannot be used when restoring an archive created by pg_dumpall");
/*
* To restore from a pg_dumpall archive, -C (create database) option
* must be specified unless we are only restoring globals.
*/
if (!globals_only && opts->createDB != 1)
{
pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
pg_log_error_hint("Individual databases can be restored using their specific archives.");
exit_nicely(1);
}
/*
* Connect to the database to execute global sql commands from
* global.dat file.
*/
if (opts->cparams.dbname)
{
conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
false, progname, NULL, NULL, NULL, NULL);
if (!conn)
pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
}
/* If globals-only, then return from here. */
if (globals_only)
{
/*
* Open global.dat file and execute/append all the global sql
* commands.
*/
n_errors = process_global_sql_commands(conn, inputFileSpec,
opts->filename);
if (conn)
PQfinish(conn);
pg_log_info("database restoring skipped because option -g/--globals-only was specified");
}
else
{
/* Now restore all the databases from map.dat */
n_errors = restore_all_databases(conn, inputFileSpec, db_exclude_patterns,
opts, numWorkers);
}
/* Free db pattern list. */
simple_string_list_destroy(&db_exclude_patterns);
}
else /* process if global.dat file does not exist. */
{
if (db_exclude_patterns.head != NULL)
pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
if (globals_only)
pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
n_errors = restore_one_database(inputFileSpec, opts, numWorkers, false, 0);
}
/* Done, print a summary of ignored errors during restore. */
if (n_errors)
{
pg_log_warning("errors ignored on restore: %d", n_errors);
return 1;
}
return 0;
}
/*
* restore_one_database
*
* This will restore one database using toc.dat file.
*
* returns the number of errors while doing restore.
*/
static int
restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
int numWorkers, bool append_data, int num)
{
Archive *AH;
int n_errors;
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -611,15 +459,9 @@ restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
* it's still NULL, the cleanup function will just be a no-op. If we are
* restoring multiple databases, then only update AX handle for cleanup as
* the previous entry was already in the array and we had closed previous
* connection, so we can use the same array slot.
* it's still NULL, the cleanup function will just be a no-op.
*/
if (!append_data || num == 0)
on_exit_close_archive(AH);
else
replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -639,21 +481,25 @@ restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
else
{
ProcessArchiveRestoreOptions(AH);
RestoreArchive(AH, append_data);
RestoreArchive(AH);
}
n_errors = AH->n_errors;
/* done, print a summary of ignored errors */
if (AH->n_errors)
pg_log_warning("errors ignored on restore: %d", AH->n_errors);
/* AH may be freed in CloseArchive? */
exit_code = AH->n_errors ? 1 : 0;
CloseArchive(AH);
return n_errors;
return exit_code;
}
static void
usage(const char *progname)
{
printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -671,7 +517,6 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -688,7 +533,6 @@ usage(const char *progname)
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --enable-row-security enable row security\n"));
printf(_(" --exclude-database=PATTERN do not restore the specified database(s)\n"));
printf(_(" --filter=FILENAME restore or skip objects based on expressions\n"
" in FILENAME\n"));
printf(_(" --if-exists use IF EXISTS when dropping objects\n"));
@@ -725,8 +569,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
"The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be\n"
"combined and specified multiple times to select multiple objects.\n"));
"The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
"multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -831,585 +675,3 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
/*
* file_exists_in_directory
*
* Returns true if the file exists in the given directory.
*/
static bool
file_exists_in_directory(const char *dir, const char *filename)
{
struct stat st;
char buf[MAXPGPATH];
if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
pg_fatal("directory name too long: \"%s\"", dir);
return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
}
/*
* read_one_statement
*
* This reads from the passed file pointer using fgetc() up to a semicolon
* (the SQL statement terminator used in the global.dat file).
*
* EOF is returned if end-of-file input is seen; time to shut down.
*/
static int
read_one_statement(StringInfo inBuf, FILE *pfile)
{
int c; /* character read from getc() */
int m;
StringInfoData q;
initStringInfo(&q);
resetStringInfo(inBuf);
/*
* Read characters until EOF or the appropriate delimiter is seen.
*/
while ((c = fgetc(pfile)) != EOF)
{
if (c != '\'' && c != '"' && c != '\n' && c != ';')
{
appendStringInfoChar(inBuf, (char) c);
while ((c = fgetc(pfile)) != EOF)
{
if (c != '\'' && c != '"' && c != ';' && c != '\n')
appendStringInfoChar(inBuf, (char) c);
else
break;
}
}
if (c == '\'' || c == '"')
{
appendStringInfoChar(&q, (char) c);
m = c;
while ((c = fgetc(pfile)) != EOF)
{
appendStringInfoChar(&q, (char) c);
if (c == m)
{
appendStringInfoString(inBuf, q.data);
resetStringInfo(&q);
break;
}
}
}
if (c == ';')
{
appendStringInfoChar(inBuf, (char) ';');
break;
}
if (c == '\n')
appendStringInfoChar(inBuf, (char) '\n');
}
pg_free(q.data);
/* No input before EOF signal means time to quit. */
if (c == EOF && inBuf->len == 0)
return EOF;
/* return something that's not EOF */
return 'Q';
}
/*
* get_dbnames_list_to_restore
*
* This will mark for skipping any entries from dbname_oid_list that pattern match an
* entry in the db_exclude_patterns list.
*
* Returns the number of databases to be restored.
*
*/
static int
get_dbnames_list_to_restore(PGconn *conn,
SimplePtrList *dbname_oid_list,
SimpleStringList db_exclude_patterns)
{
int count_db = 0;
PQExpBuffer query;
PGresult *res;
query = createPQExpBuffer();
if (!conn)
pg_log_info("considering PATTERN as NAME for --exclude-database option as no database connection while doing pg_restore");
/*
* Process one by one all dbnames and if specified to skip restoring, then
* remove dbname from list.
*/
for (SimplePtrListCell *db_cell = dbname_oid_list->head;
db_cell; db_cell = db_cell->next)
{
DbOidName *dbidname = (DbOidName *) db_cell->ptr;
bool skip_db_restore = false;
PQExpBuffer db_lit = createPQExpBuffer();
appendStringLiteralConn(db_lit, dbidname->str, conn);
for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
{
/*
* If there is an exact match then we don't need to try a pattern
* match
*/
if (pg_strcasecmp(dbidname->str, pat_cell->val) == 0)
skip_db_restore = true;
/* Otherwise, try a pattern match if there is a connection */
else if (conn)
{
int dotcnt;
appendPQExpBufferStr(query, "SELECT 1 ");
processSQLNamePattern(conn, query, pat_cell->val, false,
false, NULL, db_lit->data,
NULL, NULL, NULL, &dotcnt);
if (dotcnt > 0)
{
pg_log_error("improper qualified name (too many dotted names): %s",
dbidname->str);
PQfinish(conn);
exit_nicely(1);
}
res = executeQuery(conn, query->data);
if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
{
skip_db_restore = true;
pg_log_info("database name \"%s\" matches exclude pattern \"%s\"", dbidname->str, pat_cell->val);
}
PQclear(res);
resetPQExpBuffer(query);
}
if (skip_db_restore)
break;
}
destroyPQExpBuffer(db_lit);
/*
* Mark db to be skipped or increment the counter of dbs to be
* restored
*/
if (skip_db_restore)
{
pg_log_info("excluding database \"%s\"", dbidname->str);
dbidname->oid = InvalidOid;
}
else
{
count_db++;
}
}
destroyPQExpBuffer(query);
return count_db;
}
/*
* get_dbname_oid_list_from_mfile
*
* Open the map.dat file, read it line by line, and build a list of database
* names and their corresponding OIDs.
*
* Returns the total number of database names found in the map.dat file.
*/
static int
get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oid_list)
{
StringInfoData linebuf;
FILE *pfile;
char map_file_path[MAXPGPATH];
int count = 0;
/*
* If there is only global.dat file in dump, then return from here as
* there is no database to restore.
*/
if (!file_exists_in_directory(dumpdirpath, "map.dat"))
{
pg_log_info("database restoring is skipped because file \"%s\" does not exist in directory \"%s\"", "map.dat", dumpdirpath);
return 0;
}
snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
/* Open map.dat file. */
pfile = fopen(map_file_path, PG_BINARY_R);
if (pfile == NULL)
pg_fatal("could not open file \"%s\": %m", map_file_path);
initStringInfo(&linebuf);
/* Append all the dbname/db_oid combinations to the list. */
while (pg_get_line_buf(pfile, &linebuf))
{
Oid db_oid = InvalidOid;
char *dbname;
DbOidName *dbidname;
int namelen;
char *p = linebuf.data;
/* Extract dboid. */
while (isdigit((unsigned char) *p))
p++;
if (p > linebuf.data && *p == ' ')
{
sscanf(linebuf.data, "%u", &db_oid);
p++;
}
/* dbname is the rest of the line */
dbname = p;
namelen = strlen(dbname);
/* Report error and exit if the file has any corrupted data. */
if (!OidIsValid(db_oid) || namelen <= 1)
pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
count + 1);
pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
dbname, db_oid, map_file_path);
dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
dbidname->oid = db_oid;
strlcpy(dbidname->str, dbname, namelen);
simple_ptr_list_append(dbname_oid_list, dbidname);
count++;
}
/* Close map.dat file. */
fclose(pfile);
return count;
}
/*
* restore_all_databases
*
* This restores the databases whose dumps are present in the directory,
* based on the map.dat file mapping.
*
* Restoring is skipped for databases that match an --exclude-database
* pattern.
*
* Returns the number of errors encountered during restore.
*/
static int
restore_all_databases(PGconn *conn, const char *dumpdirpath,
SimpleStringList db_exclude_patterns, RestoreOptions *opts,
int numWorkers)
{
SimplePtrList dbname_oid_list = {NULL, NULL};
int num_db_restore = 0;
int num_total_db;
int n_errors_total;
int count = 0;
char *connected_db = NULL;
bool dumpData = opts->dumpData;
bool dumpSchema = opts->dumpSchema;
bool dumpStatistics = opts->dumpSchema;
/* Save db name to reuse it for all the database. */
if (opts->cparams.dbname)
connected_db = opts->cparams.dbname;
num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
/* If map.dat has no entries, return after processing global.dat */
if (dbname_oid_list.head == NULL)
return process_global_sql_commands(conn, dumpdirpath, opts->filename);
pg_log_info(ngettext("found %d database name in \"%s\"",
"found %d database names in \"%s\"",
num_total_db),
num_total_db, "map.dat");
if (!conn)
{
pg_log_info("trying to connect to database \"%s\"", "postgres");
conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
false, progname, NULL, NULL, NULL, NULL);
/* Try with template1. */
if (!conn)
{
pg_log_info("trying to connect to database \"%s\"", "template1");
conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
false, progname, NULL, NULL, NULL, NULL);
}
}
/*
* filter the db list according to the exclude patterns
*/
num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
db_exclude_patterns);
/* Open global.dat file and execute/append all the global sql commands. */
n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename);
/* Close the db connection as we are done with globals and patterns. */
if (conn)
PQfinish(conn);
/* Exit if no db needs to be restored. */
if (dbname_oid_list.head == NULL || num_db_restore == 0)
{
pg_log_info(ngettext("no database needs restoring out of %d database",
"no database needs restoring out of %d databases", num_total_db),
num_total_db);
return n_errors_total;
}
pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
/*
* We have a list of databases to restore after processing the
* exclude-database switch(es). Now we can restore them one by one.
*/
for (SimplePtrListCell *db_cell = dbname_oid_list.head;
db_cell; db_cell = db_cell->next)
{
DbOidName *dbidname = (DbOidName *) db_cell->ptr;
char subdirpath[MAXPGPATH];
char subdirdbpath[MAXPGPATH];
char dbfilename[MAXPGPATH];
int n_errors;
/* ignore dbs marked for skipping */
if (dbidname->oid == InvalidOid)
continue;
/*
* We need to reset override_dbname so that objects can be restored
* into an already created database. (used with -d/--dbname option)
*/
if (opts->cparams.override_dbname)
{
pfree(opts->cparams.override_dbname);
opts->cparams.override_dbname = NULL;
}
snprintf(subdirdbpath, MAXPGPATH, "%s/databases", dumpdirpath);
/*
* Look for the database dump file/dir. If there is an {oid}.tar or
* {oid}.dmp file, use it. Otherwise try to use a directory called
* {oid}
*/
snprintf(dbfilename, MAXPGPATH, "%u.tar", dbidname->oid);
if (file_exists_in_directory(subdirdbpath, dbfilename))
snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", dumpdirpath, dbidname->oid);
else
{
snprintf(dbfilename, MAXPGPATH, "%u.dmp", dbidname->oid);
if (file_exists_in_directory(subdirdbpath, dbfilename))
snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", dumpdirpath, dbidname->oid);
else
snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dbidname->oid);
}
pg_log_info("restoring database \"%s\"", dbidname->str);
/* If database is already created, then don't set createDB flag. */
if (opts->cparams.dbname)
{
PGconn *test_conn;
test_conn = ConnectDatabase(dbidname->str, NULL, opts->cparams.pghost,
opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
false, progname, NULL, NULL, NULL, NULL);
if (test_conn)
{
PQfinish(test_conn);
/* Use already created database for connection. */
opts->createDB = 0;
opts->cparams.dbname = dbidname->str;
}
else
{
/* we'll have to create it */
opts->createDB = 1;
opts->cparams.dbname = connected_db;
}
}
/*
* Reset flags - might have been reset in pg_backup_archiver.c by the
* previous restore.
*/
opts->dumpData = dumpData;
opts->dumpSchema = dumpSchema;
opts->dumpStatistics = dumpStatistics;
/* Restore the single database. */
n_errors = restore_one_database(subdirpath, opts, numWorkers, true, count);
/* Print a summary of ignored errors during single database restore. */
if (n_errors)
{
n_errors_total += n_errors;
pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
}
count++;
}
/* Log number of processed databases. */
pg_log_info("number of restored databases is %d", num_db_restore);
/* Free dbname and dboid list. */
simple_ptr_list_destroy(&dbname_oid_list);
return n_errors_total;
}
/*
* process_global_sql_commands
*
* Open global.dat and execute or copy the sql commands one by one.
*
* If outfile is not NULL, copy all sql commands into outfile rather than
* executing them.
*
* Returns the number of errors while processing global.dat
*/
static int
process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
{
char global_file_path[MAXPGPATH];
PGresult *result;
StringInfoData sqlstatement,
user_create;
FILE *pfile;
int n_errors = 0;
snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
/* Open global.dat file. */
pfile = fopen(global_file_path, PG_BINARY_R);
if (pfile == NULL)
pg_fatal("could not open file \"%s\": %m", global_file_path);
/*
* If outfile is given, then just copy all global.dat file data into
* outfile.
*/
if (outfile)
{
copy_or_print_global_file(outfile, pfile);
return 0;
}
/* Init sqlstatement to append commands. */
initStringInfo(&sqlstatement);
/* creation statement for our current role */
initStringInfo(&user_create);
appendStringInfoString(&user_create, "CREATE ROLE ");
/* should use fmtId here, but we don't know the encoding */
appendStringInfoString(&user_create, PQuser(conn));
appendStringInfoChar(&user_create, ';');
/* Process file till EOF and execute sql statements. */
while (read_one_statement(&sqlstatement, pfile) != EOF)
{
/* don't try to create the role we are connected as */
if (strstr(sqlstatement.data, user_create.data))
continue;
pg_log_info("executing query: %s", sqlstatement.data);
result = PQexec(conn, sqlstatement.data);
switch (PQresultStatus(result))
{
case PGRES_COMMAND_OK:
case PGRES_TUPLES_OK:
case PGRES_EMPTY_QUERY:
break;
default:
n_errors++;
pg_log_error("could not execute query: %s", PQerrorMessage(conn));
pg_log_error_detail("Command was: %s", sqlstatement.data);
}
PQclear(result);
}
/* Print a summary of ignored errors during global.dat. */
if (n_errors)
pg_log_warning(ngettext("ignored %d error in file \"%s\"",
"ignored %d errors in file \"%s\"", n_errors),
n_errors, global_file_path);
fclose(pfile);
return n_errors;
}
/*
* copy_or_print_global_file
*
* Copy global.dat into the output file. If "-" is used as outfile,
* then print commands to stdout.
*/
static void
copy_or_print_global_file(const char *outfile, FILE *pfile)
{
char out_file_path[MAXPGPATH];
FILE *OPF;
int c;
/* "-" is used for stdout. */
if (strcmp(outfile, "-") == 0)
OPF = stdout;
else
{
snprintf(out_file_path, MAXPGPATH, "%s", outfile);
OPF = fopen(out_file_path, PG_BINARY_W);
if (OPF == NULL)
{
fclose(pfile);
pg_fatal("could not open file: \"%s\"", outfile);
}
}
/* Append global.dat into output file or print to stdout. */
while ((c = fgetc(pfile)) != EOF)
fputc(c, OPF);
fclose(pfile);
/* Close output file. */
if (strcmp(outfile, "-") != 0)
fclose(OPF);
}

View File

@@ -237,24 +237,6 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
command_fails_like(
[ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'
);
command_fails_like(
[ 'pg_restore', '--exclude-database=foo', '-d', 'xxx', 'dumpdir' ],
qr/\Qpg_restore: error: option --exclude-database can be used only when restoring an archive created by pg_dumpall\E/,
'When option --exclude-database is used in pg_restore with dump of pg_dump'
);
command_fails_like(
[ 'pg_restore', '--globals-only', '-d', 'xxx', 'dumpdir' ],
qr/\Qpg_restore: error: option -g\/--globals-only can be used only when restoring an archive created by pg_dumpall\E/,
'When option --globals-only is not used in pg_restore with dump of pg_dump'
);
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -262,8 +244,4 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
command_fails_like(
[ 'pg_dumpall', '--format', 'x' ],
qr/\Qpg_dumpall: error: unrecognized output format "x";\E/,
'pg_dumpall: unrecognized output format');
done_testing();

View File

@@ -1,400 +0,0 @@
# Copyright (c) 2021-2025, PostgreSQL Global Development Group
use strict;
use warnings FATAL => 'all';
use PostgreSQL::Test::Cluster;
use PostgreSQL::Test::Utils;
use Test::More;
my $tempdir = PostgreSQL::Test::Utils::tempdir;
my $run_db = 'postgres';
my $sep = $windows_os ? "\\" : "/";
# Tablespace locations used by "restore_tablespace" test case.
my $tablespace1 = "${tempdir}${sep}tbl1";
my $tablespace2 = "${tempdir}${sep}tbl2";
mkdir($tablespace1) || die "mkdir $tablespace1 $!";
mkdir($tablespace2) || die "mkdir $tablespace2 $!";
# Escape tablespace locations on Windows.
$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
# Where pg_dumpall will be executed.
my $node = PostgreSQL::Test::Cluster->new('node');
$node->init;
$node->start;
###############################################################
# Definition of the pg_dumpall test cases to run.
#
# Each of these test cases is named, and those names are used for failure
# reporting and also to save the dump and restore information needed for the
# test to assert.
#
# The "setup_sql" is a valid psql script containing SQL commands to execute
# before the tests themselves run. All of the setups are executed before any
# test execution.
#
# The "dump_cmd" and "restore_cmd" are the commands that will be executed. The
# "restore_cmd" must have the --file flag to save the restore output so that we
# can assert on it.
#
# The "like" and "unlike" entries are regexps used to match the pg_restore
# output. At least one of them must be filled in per test case, but a test can
# also have both. See the "excluding_databases" test case for an example.
my %pgdumpall_runs = (
restore_roles => {
setup_sql => '
CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
dump_cmd => [
'pg_dumpall',
'--format' => 'directory',
'--file' => "$tempdir/restore_roles",
],
restore_cmd => [
'pg_restore', '-C',
'--format' => 'directory',
'--file' => "$tempdir/restore_roles.sql",
"$tempdir/restore_roles",
],
like => qr/
^\s*\QCREATE ROLE dumpall;\E\s*\n
\s*\QALTER ROLE dumpall WITH SUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN NOREPLICATION NOBYPASSRLS PASSWORD 'SCRAM-SHA-256\E
[^']+';\s*\n
\s*\QCREATE ROLE dumpall2;\E
\s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
/xm
},
restore_tablespace => {
setup_sql => "
CREATE ROLE tap;
CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
dump_cmd => [
'pg_dumpall',
'--format' => 'directory',
'--file' => "$tempdir/restore_tablespace",
],
restore_cmd => [
'pg_restore', '-C',
'--format' => 'directory',
'--file' => "$tempdir/restore_tablespace.sql",
"$tempdir/restore_tablespace",
],
# Match "E" as optional since it is added on LOCATION when running on
# Windows.
like => qr/^
\n\QCREATE TABLESPACE tbl1 OWNER tap LOCATION \E(?:E)?\Q'$tablespace1';\E
\n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2';\E
\n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
/xm,
},
restore_grants => {
setup_sql => "
CREATE DATABASE tapgrantsdb;
CREATE SCHEMA private;
CREATE SEQUENCE serial START 101;
CREATE FUNCTION fn() RETURNS void AS \$\$
BEGIN
END;
\$\$ LANGUAGE plpgsql;
CREATE ROLE super;
CREATE ROLE grant1;
CREATE ROLE grant2;
CREATE ROLE grant3;
CREATE ROLE grant4;
CREATE ROLE grant5;
CREATE ROLE grant6;
CREATE ROLE grant7;
CREATE ROLE grant8;
CREATE TABLE t (id int);
INSERT INTO t VALUES (1), (2), (3), (4);
GRANT SELECT ON TABLE t TO grant1;
GRANT INSERT ON TABLE t TO grant2;
GRANT ALL PRIVILEGES ON TABLE t to grant3;
GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
GRANT USAGE, CREATE ON SCHEMA private TO grant5;
GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
GRANT super TO grant7;
GRANT EXECUTE ON FUNCTION fn() TO grant8;
",
dump_cmd => [
'pg_dumpall',
'--format' => 'directory',
'--file' => "$tempdir/restore_grants",
],
restore_cmd => [
'pg_restore', '-C',
'--format' => 'directory',
'--file' => "$tempdir/restore_grants.sql",
"$tempdir/restore_grants",
],
like => qr/^
\n\QGRANT super TO grant7 WITH INHERIT TRUE GRANTED BY\E
(.*\n)*
\n\QGRANT ALL ON SCHEMA private TO grant5;\E
(.*\n)*
\n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
(.*\n)*
\n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
(.*\n)*
\n\QGRANT SELECT ON TABLE public.t TO grant1;\E
\n\QGRANT INSERT ON TABLE public.t TO grant2;\E
\n\QGRANT ALL ON TABLE public.t TO grant3;\E
(.*\n)*
\n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
/xm,
},
excluding_databases => {
setup_sql => 'CREATE DATABASE db1;
\c db1
CREATE TABLE t1 (id int);
INSERT INTO t1 VALUES (1), (2), (3), (4);
CREATE TABLE t2 (id int);
INSERT INTO t2 VALUES (1), (2), (3), (4);
CREATE DATABASE db2;
\c db2
CREATE TABLE t3 (id int);
INSERT INTO t3 VALUES (1), (2), (3), (4);
CREATE TABLE t4 (id int);
INSERT INTO t4 VALUES (1), (2), (3), (4);
CREATE DATABASE dbex3;
\c dbex3
CREATE TABLE t5 (id int);
INSERT INTO t5 VALUES (1), (2), (3), (4);
CREATE TABLE t6 (id int);
INSERT INTO t6 VALUES (1), (2), (3), (4);
CREATE DATABASE dbex4;
\c dbex4
CREATE TABLE t7 (id int);
INSERT INTO t7 VALUES (1), (2), (3), (4);
CREATE TABLE t8 (id int);
INSERT INTO t8 VALUES (1), (2), (3), (4);
CREATE DATABASE db5;
\c db5
CREATE TABLE t9 (id int);
INSERT INTO t9 VALUES (1), (2), (3), (4);
CREATE TABLE t10 (id int);
INSERT INTO t10 VALUES (1), (2), (3), (4);
',
dump_cmd => [
'pg_dumpall',
'--format' => 'directory',
'--file' => "$tempdir/excluding_databases",
'--exclude-database' => 'dbex*',
],
restore_cmd => [
'pg_restore', '-C',
'--format' => 'directory',
'--file' => "$tempdir/excluding_databases.sql",
'--exclude-database' => 'db5',
"$tempdir/excluding_databases",
],
like => qr/^
\n\QCREATE DATABASE db1\E
(.*\n)*
\n\QCREATE TABLE public.t1 (\E
(.*\n)*
\n\QCREATE TABLE public.t2 (\E
(.*\n)*
\n\QCREATE DATABASE db2\E
(.*\n)*
\n\QCREATE TABLE public.t3 (\E
(.*\n)*
\n\QCREATE TABLE public.t4 (/xm,
unlike => qr/^
\n\QCREATE DATABASE db3\E
(.*\n)*
\n\QCREATE TABLE public.t5 (\E
(.*\n)*
\n\QCREATE TABLE public.t6 (\E
(.*\n)*
\n\QCREATE DATABASE db4\E
(.*\n)*
\n\QCREATE TABLE public.t7 (\E
(.*\n)*
\n\QCREATE TABLE public.t8 (\E
\n\QCREATE DATABASE db5\E
(.*\n)*
\n\QCREATE TABLE public.t9 (\E
(.*\n)*
\n\QCREATE TABLE public.t10 (\E
/xm,
},
format_directory => {
setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
dump_cmd => [
'pg_dumpall',
'--format' => 'directory',
'--file' => "$tempdir/format_directory",
],
restore_cmd => [
'pg_restore', '-C',
'--format' => 'directory',
'--file' => "$tempdir/format_directory.sql",
"$tempdir/format_directory",
],
like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
},
format_tar => {
setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
dump_cmd => [
'pg_dumpall',
'--format' => 'tar',
'--file' => "$tempdir/format_tar",
],
restore_cmd => [
'pg_restore', '-C',
'--format' => 'tar',
'--file' => "$tempdir/format_tar.sql",
"$tempdir/format_tar",
],
like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
},
format_custom => {
setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
dump_cmd => [
'pg_dumpall',
'--format' => 'custom',
'--file' => "$tempdir/format_custom",
],
restore_cmd => [
'pg_restore', '-C',
'--format' => 'custom',
'--file' => "$tempdir/format_custom.sql",
"$tempdir/format_custom",
],
like => qr/^ \n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
},
dump_globals_only => {
setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
dump_cmd => [
'pg_dumpall',
'--format' => 'directory',
'--globals-only',
'--file' => "$tempdir/dump_globals_only",
],
restore_cmd => [
'pg_restore', '-C', '--globals-only',
'--format' => 'directory',
'--file' => "$tempdir/dump_globals_only.sql",
"$tempdir/dump_globals_only",
],
like => qr/
^\s*\QCREATE ROLE dumpall;\E\s*\n
/xm
},);
# First execute the setup_sql
foreach my $run (sort keys %pgdumpall_runs)
{
if ($pgdumpall_runs{$run}->{setup_sql})
{
$node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
}
}
# Execute the tests
foreach my $run (sort keys %pgdumpall_runs)
{
# Create a new target cluster to pg_restore each test case run so that we
# don't need to take care of the cleanup from the target cluster after each
# run.
my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
$target_node->init;
$target_node->start;
# Dumpall from node cluster.
$node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
"$run: pg_dumpall runs");
# Restore the dump on "target_node" cluster.
my @restore_cmd = (
@{ $pgdumpall_runs{$run}->{restore_cmd} },
'--host', $target_node->host, '--port', $target_node->port);
my ($stdout, $stderr) = run_command(\@restore_cmd);
# pg_restore --file output file.
my $output_file = slurp_file("$tempdir/${run}.sql");
if ( !($pgdumpall_runs{$run}->{like})
&& !($pgdumpall_runs{$run}->{unlike}))
{
die "missing \"like\" or \"unlike\" in test \"$run\"";
}
if ($pgdumpall_runs{$run}->{like})
{
like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
}
if ($pgdumpall_runs{$run}->{unlike})
{
unlike(
$output_file,
$pgdumpall_runs{$run}->{unlike},
"should not dump $run");
}
}
# Some negative test cases with a dump made by pg_dumpall and restored using pg_restore
# test case 1: when -C is not used in pg_restore with dump of pg_dumpall
$node->command_fails_like(
[
'pg_restore',
"$tempdir/format_custom",
'--format' => 'custom',
'--file' => "$tempdir/error_test.sql",
],
qr/\Qpg_restore: error: option -C\/--create must be specified when restoring an archive created by pg_dumpall\E/,
'When -C is not used in pg_restore with dump of pg_dumpall');
# test case 2: When --list option is used with dump of pg_dumpall
$node->command_fails_like(
[
'pg_restore',
"$tempdir/format_custom", '-C',
'--format' => 'custom',
'--list',
'--file' => "$tempdir/error_test.sql",
],
qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring an archive created by pg_dumpall\E/,
'When --list is used in pg_restore with dump of pg_dumpall');
# test case 3: When a non-existent database is given with the -d option
$node->command_fails_like(
[
'pg_restore',
"$tempdir/format_custom", '-C',
'--format' => 'custom',
'-d' => 'dbpq',
],
qr/\Qpg_restore: error: could not connect to database "dbpq"\E/,
'When non-existent database is given with -d option in pg_restore with dump of pg_dumpall'
);
$node->stop('fast');
done_testing();