The POSIX standard for tar headers requires archive member sizes to be
printed in octal with at most 11 digits, limiting the representable file
size to 8GB.  However, GNU tar and apparently most other modern tars
support a convention in which oversized values can be stored in base-256,
allowing any practical file to be a tar member.  Adopt this convention
to remove two limitations:

* pg_dump with -Ft output format failed if the contents of any one table
  exceeded 8GB.

* pg_basebackup failed if the data directory contained any file exceeding
  8GB.  (This would be a fatal problem for installations configured with
  a table segment size of 8GB or more, and it has also been seen to fail
  when large core dump files exist in the data directory.)

File sizes under 8GB are still printed in octal, so that no compatibility
issues are created except in cases that would have failed entirely before.

In addition, this patch fixes several bugs in the same area:

* In 9.3 and later, we'd defined tarCreateHeader's file-size argument as
  size_t, which meant that on 32-bit machines it would write a corrupt tar
  header for file sizes between 4GB and 8GB, even though no error was
  raised.  This broke both "pg_dump -Ft" and pg_basebackup for such cases.

* pg_restore from a tar archive would fail on tables of size between 4GB
  and 8GB, on machines where either "size_t" or "unsigned long" is 32
  bits.  This happened even with an archive file not affected by the
  previous bug.

* pg_basebackup would fail if there were files of size between 4GB and
  8GB, even on 64-bit machines.

* In 9.3 and later, "pg_basebackup -Ft" failed entirely, for any file
  size, on 64-bit big-endian machines.

In view of these potential data-loss bugs, back-patch to all supported
branches, even though removal of the documented 8GB limit might otherwise
be considered a new feature rather than a bug fix.
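For reference, the base-256 convention works like this: a value that fits
in the field is still written as an octal string, while a larger value is
flagged by setting the high bit of the field's first byte and storing the
value in big-endian binary in the remaining bytes.  The following is a
minimal, self-contained C sketch of that encoding; the helper name and
all details are illustrative only, not necessarily identical to
PostgreSQL's actual implementation.

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Sketch of the GNU tar base-256 convention for numeric header
     * fields.  A tar numeric field of "len" bytes normally holds a
     * NUL-terminated octal string of len - 1 digits, which for the
     * 12-byte size field caps values at 8^11 - 1, i.e. just under 8GB.
     * A value too large for octal is flagged by setting the high bit
     * of the first byte and stored big-endian in the remaining bytes.
     */
    static void
    print_tar_number(char *s, int len, uint64_t val)
    {
        if (val < (((uint64_t) 1) << ((len - 1) * 3)))
        {
            /* Fits in len - 1 octal digits plus a terminating NUL. */
            snprintf(s, len, "%0*llo", len - 1, (unsigned long long) val);
        }
        else
        {
            /* Base-256: flag byte, then the value in big-endian binary. */
            s[0] = (char) 0x80;
            for (int i = len - 1; i > 0; i--)
            {
                s[i] = (char) (val & 0xFF);
                val >>= 8;
            }
        }
    }

    int
    main(void)
    {
        char      field[12];                       /* tar size-field width */
        uint64_t  size = (uint64_t) 10 << 30;      /* 10GB: too big for octal */

        print_tar_number(field, sizeof(field), size);
        for (int i = 0; i < (int) sizeof(field); i++)
            printf("%02x ", (unsigned char) field[i]);
        printf("\n");   /* 80 00 00 00 00 00 00 02 80 00 00 00 */
        return 0;
    }

A decoder correspondingly checks the high bit of the field's first byte
to choose between octal parsing and base-256, which is why tars that
predate the convention cannot read such archives; encoding small values
in octal, as above, keeps them readable everywhere.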
src/port/README

libpgport
=========

libpgport must have special behavior.  It supplies functions to both
libraries and applications.  However, there are two complexities:

1) Libraries need to use object files that are compiled with exactly
the same flags as the library.  libpgport might not use the same flags,
so it is necessary to recompile the object files for individual
libraries.  This is done by removing -lpgport from the link line:

        # Need to recompile any libpgport object files
        LIBS := $(filter-out -lpgport, $(LIBS))

and adding infrastructure to recompile the object files:

        OBJS= execute.o typename.o descriptor.o data.o error.o prepare.o memory.o \
                connect.o misc.o path.o exec.o \
                $(filter snprintf.o, $(LIBOBJS))

The problem is that there is no testing of which object files need to
be added, but missing functions usually show up when linking user
applications.

2) For applications, we use -lpgport before -lpq, so the static files
from libpgport are linked first.  This avoids having applications
dependent on symbols that are _used_ by libpq, but not intended to be
exported by libpq.  libpq's libpgport usage changes over time, so such
a dependency is a problem.  Win32, Linux, and Darwin use an export
list to control the symbols exported by libpq.