mirror of https://github.com/postgres/postgres.git synced 2025-11-16 15:02:33 +03:00

Fix pg_dump to do the right thing when escaping the contents of large objects.

The previous implementation got it right in most cases but failed in one:
if you pg_dump into an archive with standard_conforming_strings enabled, then
pg_restore to a script file (not directly to a database), the script will set
standard_conforming_strings = on but then emit large object data as
nonstandardly-escaped strings.

At the moment the code emits hex-format bytea strings when dumping
to a script file.  We might want to change to old-style escaping for backwards
compatibility, but that would be slower and bulkier.  If we do, it's just a
matter of reimplementing appendByteaLiteral().

This has been broken for a long time, but given the lack of field complaints
I'm not going to worry about back-patching.
Tom Lane
2009-08-04 21:56:09 +00:00
parent 50d08346f3
commit b1732111f2
5 changed files with 75 additions and 20 deletions


@@ -17,12 +17,13 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/bin/pg_dump/pg_backup_null.c,v 1.21 2009/07/21 21:46:10 tgl Exp $
+ *	  $PostgreSQL: pgsql/src/bin/pg_dump/pg_backup_null.c,v 1.22 2009/08/04 21:56:09 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
 
 #include "pg_backup_archiver.h"
+#include "dumputils.h"
 
 #include <unistd.h>			/* for dup */
@@ -101,16 +102,16 @@ _WriteBlobData(ArchiveHandle *AH, const void *data, size_t dLen)
 {
 	if (dLen > 0)
 	{
-		unsigned char *str;
-		size_t		len;
+		PQExpBuffer buf = createPQExpBuffer();
 
-		str = PQescapeBytea((const unsigned char *) data, dLen, &len);
-		if (!str)
-			die_horribly(AH, NULL, "out of memory\n");
+		appendByteaLiteralAHX(buf,
+							  (const unsigned char *) data,
+							  dLen,
+							  AH);
 
-		ahprintf(AH, "SELECT pg_catalog.lowrite(0, '%s');\n", str);
+		ahprintf(AH, "SELECT pg_catalog.lowrite(0, %s);\n", buf->data);
 
-		free(str);
+		destroyPQExpBuffer(buf);
 	}
 
 	return dLen;
 }