
Fix pg_dump to do the right thing when escaping the contents of large objects.

The previous implementation got it right in most cases but failed in one:
if you pg_dump into an archive with standard_conforming_strings enabled, then
pg_restore to a script file (not directly to a database), the script will set
standard_conforming_strings = on but then emit large object data as
nonstandardly-escaped strings.

At the moment the code is made to emit hex-format bytea strings when dumping
to a script file.  We might want to change to old-style escaping for backwards
compatibility, but that would be slower and bulkier.  If we do, it's just a
matter of reimplementing appendByteaLiteral().
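
For orientation only (this is not the pg_dump source): the sketch below shows roughly what a hex-format bytea literal of the kind appendByteaLiteral() produces looks like. The function name bytea_hex_literal, the memory handling, and the use of an explicit E'' prefix when standard_conforming_strings is off are assumptions of this example, not the actual dumputils.c code.

    /*
     * Minimal sketch of hex-format bytea escaping, for illustration only.
     * With standard_conforming_strings on it emits '\xdeadbeef'; otherwise it
     * emits E'\\xdeadbeef' so the backslash survives string parsing.  The real
     * pg_dump code may differ in these details.
     */
    #include <stdio.h>
    #include <stdlib.h>

    static char *
    bytea_hex_literal(const unsigned char *data, size_t len, int std_strings)
    {
        static const char hextbl[] = "0123456789abcdef";
        /* worst case: E'\\x prefix, two hex chars per byte, closing quote, NUL */
        char       *result = malloc(2 * len + 8);
        char       *p = result;

        if (result == NULL)
            return NULL;

        if (!std_strings)
            *p++ = 'E';         /* explicit escape-string syntax (a choice of this sketch) */
        *p++ = '\'';
        *p++ = '\\';
        if (!std_strings)
            *p++ = '\\';        /* backslash must be doubled inside E'...' */
        *p++ = 'x';

        while (len-- > 0)
        {
            unsigned char c = *data++;

            *p++ = hextbl[(c >> 4) & 0xF];
            *p++ = hextbl[c & 0xF];
        }
        *p++ = '\'';
        *p = '\0';
        return result;
    }

    int
    main(void)
    {
        const unsigned char blob[] = {0xDE, 0xAD, 0xBE, 0xEF};
        char       *lit = bytea_hex_literal(blob, sizeof(blob), 1);

        /* prints: SELECT pg_catalog.lowrite(0, '\xdeadbeef'); */
        printf("SELECT pg_catalog.lowrite(0, %s);\n", lit);
        free(lit);
        return 0;
    }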

This has been broken for a long time, but given the lack of field complaints
I'm not going to worry about back-patching.
Tom Lane
2009-08-04 21:56:09 +00:00
parent 50d08346f3
commit b1732111f2
5 changed files with 75 additions and 20 deletions


@@ -15,7 +15,7 @@
  *
  *
  * IDENTIFICATION
- *		$PostgreSQL: pgsql/src/bin/pg_dump/pg_backup_archiver.c,v 1.173 2009/07/21 21:46:10 tgl Exp $
+ *		$PostgreSQL: pgsql/src/bin/pg_dump/pg_backup_archiver.c,v 1.174 2009/08/04 21:56:08 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -1249,20 +1249,19 @@ dump_lo_buf(ArchiveHandle *AH)
 	}
 	else
 	{
-		unsigned char *str;
-		size_t		len;
+		PQExpBuffer buf = createPQExpBuffer();
 
-		str = PQescapeBytea((const unsigned char *) AH->lo_buf,
-							AH->lo_buf_used, &len);
-		if (!str)
-			die_horribly(AH, modulename, "out of memory\n");
+		appendByteaLiteralAHX(buf,
+							  (const unsigned char *) AH->lo_buf,
+							  AH->lo_buf_used,
+							  AH);
 
 		/* Hack: turn off writingBlob so ahwrite doesn't recurse to here */
 		AH->writingBlob = 0;
-		ahprintf(AH, "SELECT pg_catalog.lowrite(0, '%s');\n", str);
+		ahprintf(AH, "SELECT pg_catalog.lowrite(0, %s);\n", buf->data);
 		AH->writingBlob = 1;
 
-		free(str);
+		destroyPQExpBuffer(buf);
 	}
 	AH->lo_buf_used = 0;
 }
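
With the change above, a script-file restore writes each chunk of large-object data as a hex-format bytea literal, so the emitted line takes a form such as

    SELECT pg_catalog.lowrite(0, '\x0a0b0c');

(with the backslash doubled when the script runs under standard_conforming_strings = off), meaning the literal is parsed consistently under the standard_conforming_strings setting the script itself establishes. The byte values shown here are illustrative only.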