pglz_maximum_compressed_size() potentially underestimated the amount
of compressed data required to produce N bytes of decompressed data;
this is a fault in commit 11a078cf8.
Separately from that, pglz_decompress() failed to protect itself
against corrupt compressed data, particularly off == 0 in a match
tag. Commit c60e520f6 turned such a situation into an infinite loop,
where before it'd just have resulted in garbage output.
The combination of these two bugs seems like it may explain bug #16694
from Tom Vijlbrief, though it's impossible to be quite sure without
direct inspection of the failing session. (One needs to assume that
the pglz_maximum_compressed_size() bug caused us to fail to fetch the
second byte of a match tag, and what happened to be there instead was
a zero. The reported infinite loop is hard to explain without off == 0,
though.)
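For illustration, a minimal sketch of the kind of defense involved,
following the shape of pglz_decompress()'s match-tag handling (the
exact placement of the check in the real code may differ):

    /* A match tag is two bytes: 4 length bits plus 12 offset bits;
     * len == 18 signals an extension byte. */
    int32       len = (sp[0] & 0x0f) + 3;
    int32       off = ((sp[0] & 0xf0) << 4) | sp[1];

    sp += 2;
    if (len == 18)
        len += *sp++;

    /* off == 0 can only arise from corrupt data; without a check it
     * makes the copy loop spin forever without advancing. */
    if (off == 0)
        return -1;              /* report corrupt input to the caller */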
Aside from fixing the bugs, rewrite associated comments for more
clarity.
Back-patch to v13 where both these commits landed.
Discussion: https://postgr.es/m/16694-f107871e499ec114@postgresql.org
genhtml has been generating the following warning with this new code:
WARNING: function data mismatch at /path/src/common/unicode_norm.c:102
HTML coverage reports require functions defined in a source file to be
unique, regardless of any CFLAGS-dependent compilation. 783f0cc
introduced a duplicated definition of get_code_entry(), leading to the
warning and potentially to some incorrect data in the reports. This
refactors the code so that there is only one definition of the
function, fixing the warning.
Oversight in 783f0cc.
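A sketch of the shape of the refactoring (the lookup helpers shown
here are illustrative, not the real ones in unicode_norm.c):

    /* One definition only; the backend/frontend difference lives
     * inside the function body instead of duplicating the function. */
    static const pg_unicode_decomposition *
    get_code_entry(pg_wchar code)
    {
    #ifndef FRONTEND
        return perfect_hash_lookup(code);   /* backend (illustrative) */
    #else
        return bsearch_lookup(code);        /* frontend (illustrative) */
    #endif
    }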
Reported-by: Tom Lane
Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/207789.1603469272@sss.pgh.pa.us
This replaces the existing binary search with two perfect hash functions
for the composition and the decomposition in the backend code, at the
cost of slightly-larger binaries there (35kB in libpgcommon_srv.a). Per
the measurements done, this improves the speed of the recomposition and
decomposition by up to 30~40 times for the NFC and NFKC conversions,
while all other operations get at least 40% faster. This is not as
"good" as what libicu has, but it closes the gap a lot as per the
feedback from Daniel Verite.
The decomposition table remains the same and is still used for the
binary search in the frontend code, where we care more about the size
of libraries like libpq than about performance, as this is involved
only in code paths related to SCRAM authentication. As a consequence,
note that the perfect hash function for the recomposition needs a new
inverse lookup array pointing back to the existing decomposition table.
The size of all frontend deliverables remains unchanged, even with
--enable-debug, including libpq.
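The lookup pattern for a generated perfect hash function looks roughly
like this (function, table, and field names here are illustrative, not
the generated ones):

    /* Perfect hash lookup: hash the key, then verify it, since keys
     * outside the set still hash to some slot. */
    static int
    recompose_lookup(uint32 start, uint32 code)
    {
        uint32      key[2] = {start, code}; /* code point pair */
        int         h = recomp_hash((const unsigned char *) key);

        if (h < 0 || h >= RECOMP_TABLE_SIZE ||
            recomp_table[h].start != start ||
            recomp_table[h].code != code)
            return -1;          /* not a composable pair */
        return recomp_table[h].composed;
    }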
Author: John Naylor
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/CAFBsxsHUuMFCt6-pU+oG-F1==CmEp8wR+O+bRouXWu6i8kXuqA@mail.gmail.com
After de8feb1f3a, some warnings remained
that were only visible when using GCC on Windows. Fix those as well.
Note that the ecpg test source files don't use the full pg_config.h,
so we can't use pg_funcptr_t there but have to do it the long way.
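For context, the warning in question is GCC's -Wcast-function-type,
which is silenced by casting through a generic function pointer type.
A sketch (other_func and target_fn are invented for the example):

    extern void other_func(long);       /* some unrelated signature */
    typedef int (*target_fn) (int);

    /* with c.h available, cast via the pg_funcptr_t typedef: */
    target_fn   fn = (target_fn) (pg_funcptr_t) other_func;

    /* "the long way", without pg_config.h/c.h, as in the ecpg tests: */
    target_fn   fn2 = (target_fn) (void (*) (void)) other_func;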
80f8eb7 introduced some new code in unicode_norm.c that uses htonl().
On at least some FreeBSD environments, this function can be found to
be undeclared, causing a compilation warning. It is worth noting that
no buildfarm member has reported this issue.
Instead of adding a new inclusion of arpa/inet.h, switch to the
equivalent defined in pg_bswap.h, to benefit from any built-in
byte-swap function the compiler may have.
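A sketch of the substitution:

    #include "port/pg_bswap.h"

    uint32      code = 0x00E9;

    /* before: uint32 be = htonl(code), needing arpa/inet.h */
    uint32      be = pg_hton32(code);   /* same result, no socket headers */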
Reported-by: Masahiko Sawada
Discussion: https://postgr.es/m/CA+fd4k7D4b12ShywWj=AbcHZzV1-OqMjNe7RZAu+tgz5rd_11A@mail.gmail.com
This makes the normalization quick check about 30% faster for NFC and
50% faster for NFKC than the binary search used previously. The hash
lookup reuses the existing array of bit fields used for the binary
search to get the quick check property and is generated as part of "make
update-unicode" in src/common/unicode/.
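In outline, the UAX #15 quick-check loop that this speeds up (names
are illustrative; the per-code-point property lookup is what moved
from binary search to a perfect hash):

    result = QC_YES;
    prev_ccc = 0;
    for (i = 0; i < size; i++)
    {
        int         ccc = get_combining_class(input[i]);
        int         qc = qc_lookup(input[i]);   /* perfect hash */

        if (ccc != 0 && prev_ccc > ccc)
            return QC_NO;       /* combining marks out of order */
        if (qc == QC_NO)
            return QC_NO;
        if (qc == QC_MAYBE)
            result = QC_MAYBE;  /* full normalization needed to decide */
        prev_ccc = ccc;
    }
    return result;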
Author: John Naylor
Reviewed-by: Mark Dilger, Michael Paquier
Discussion: https://postgr.es/m/CACPNZCt4fbJ0_bGrN5QPt34N4whv=mszM0LMVQdoa2rC9UMRXA@mail.gmail.com
As fe0a1dc has proved, it is a bad idea to add to libpq dependencies
that would force error output into a central logging facility, because
that breaks the promise of reporting errors back to the application in
a consistent way, with the application potentially doing a sudden
exit() if it uses pieces from, for example, jsonapi.c. Buildfarm
member prairiedog has exposed an actual design problem with fe0a1dc,
but it will not be around forever, so removing logging.c from
libpgcommon_shlib is a simple and much better long-term way to prevent
any attempt to load the central logging facility into general-purpose
libraries.
Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20200928073330.GC2316@paquier.xyz
This reverts commit e21cbb4, as the switch to EVP routines requires a
more careful design: at a minimum, our wrapper routines would need to
return a status instead of issuing an error themselves, letting the
caller do the error handling. The memory handling was also incorrect
and could cause leaks in the backend on failure, most likely requiring
a callback to do the necessary cleanup, since the only clean way to
allocate an EVP context is an allocation made within OpenSSL. The
potential rework of the wrappers also impacts the fallback
implementation used when not building with OpenSSL.
prairiedog originally reported a compilation failure, but after
discussion with Tom Lane it became clear that this needs a better
design.
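For reference, the kind of interface such a rework implies, sketched
with invented names (this is not code from the tree):

    /* wrappers report status; the caller decides how to error out */
    pg_sha256_ctx *pg_sha256_create(void);  /* NULL if OpenSSL allocation fails */
    int         pg_sha256_init(pg_sha256_ctx *ctx);     /* 0 or -1 */
    int         pg_sha256_update(pg_sha256_ctx *ctx,
                                 const uint8 *data, size_t len);
    int         pg_sha256_final(pg_sha256_ctx *ctx, uint8 *dest);
    void        pg_sha256_free(pg_sha256_ctx *ctx);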
Discussion: https://postgr.es/m/20200928073330.GC2316@paquier.xyz
Use of the low-level hash routines has been discouraged by upstream
OpenSSL since 2000, and pgcrypto already switched to EVP as of 5ff4a67.
Note that this also fixes a failure with SCRAM authentication when using
FIPS in OpenSSL, but as there have been few complaints about this
problem and as this causes an ABI breakage, no backpatch is done.
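For comparison, the EVP interface amounts to this (standard OpenSSL
calls; error checks elided for brevity, and EVP_MD_CTX_create() is the
pre-1.1.0 spelling of EVP_MD_CTX_new()):

    #include <openssl/evp.h>

    static void
    sha256_example(const unsigned char *data, size_t len,
                   unsigned char digest[EVP_MAX_MD_SIZE])
    {
        EVP_MD_CTX *ctx = EVP_MD_CTX_create();
        unsigned int digest_len = 0;

        EVP_DigestInit_ex(ctx, EVP_sha256(), NULL); /* honors FIPS mode */
        EVP_DigestUpdate(ctx, data, len);
        EVP_DigestFinal_ex(ctx, digest, &digest_len);
        EVP_MD_CTX_destroy(ctx);
    }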
Author: Michael Paquier, Alessandro Gherardi
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20200924025314.GE7405@paquier.xyz
Discussion: https://postgr.es/m/20180911030250.GA27115@paquier.xyz
Further experience says that the appending behavior offered by
pg_get_line_append is useful to only a very small minority of callers.
For most, the requirement to reset the buffer after each line is just
an error-prone nuisance. Hence, invent another alternative call
pg_get_line_buf, which takes care of that detail.
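A sketch of the intended calling pattern, assuming the signature
bool pg_get_line_buf(FILE *, StringInfo); process_line() is a stand-in
for the caller's work:

    StringInfoData buf;

    initStringInfo(&buf);
    while (pg_get_line_buf(fp, &buf))
    {
        /* buf.data holds the current line; the buffer is reset for
         * us, so there is no per-iteration resetStringInfo() call to
         * forget */
        process_line(buf.data);
    }
    pfree(buf.data);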
Noted while reviewing a patch from Daniel Gustafsson.
Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
Up to now, if you tried to omit "AS" before a column label in a SELECT
list, it would only work if the column label was an IDENT, that is not
any known keyword. This is rather unfriendly considering that we have
so many keywords and are constantly growing more. In the wake of commit
1ed6b8956 it's possible to improve matters quite a bit.
We'd originally tried to make this work by having some of the existing
keyword categories be allowed without AS, but that didn't work too well,
because each category contains a few special cases that don't work
without AS. Instead, invent an entirely orthogonal keyword property
"can be bare column label", and mark that way every keyword for which
doing so introduces no shift/reduce conflicts.
It turns out that of our 450 current keywords, all but 39 can be made
bare column labels, improving the situation by over 90%. This number
might move around a little depending on future grammar work, but it's
a pretty nice improvement.
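The property is recorded as a new column in parser/kwlist.h, roughly
like this (the entries quoted here are illustrative):

    /* name, token, category, bare-label status */
    PG_KEYWORD("abort", ABORT_P, UNRESERVED_KEYWORD, BARE_LABEL)
    PG_KEYWORD("day", DAY_P, UNRESERVED_KEYWORD, AS_LABEL) /* needs AS */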
Mark Dilger, based on work by myself and Robert Haas;
review by John Naylor
Discussion: https://postgr.es/m/38ca86db-42ab-9b48-2902-337a0d6b8311@2ndquadrant.com
Instead of hard-wiring specific verbosity levels into the option
processing of client applications, invent pg_logging_increase_verbosity()
and encourage clients to implement --verbose by calling that. Then,
the common convention that more -v's gets you more verbosity just works.
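The client-side pattern then reduces to a sketch like:

    case 'v':                   /* each -v bumps the level one notch */
        pg_logging_increase_verbosity();
        break;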
In particular, this allows resurrection of the debug-grade messages that
have long existed in pg_dump and its siblings. They were unreachable
before this commit due to lack of a way to select PG_LOG_DEBUG logging
level. (It appears that they may have been unreachable for some time
before common/logging.c was introduced, too, so I'm not specifically
blaming cc8d41511 for the oversight. One reason for thinking that is
that it's now apparent that _allocAH()'s message needs a null-pointer
guard. Testing might have failed to reveal that before 96bf88d52.)
Discussion: https://postgr.es/m/1173106.1600116625@sss.pgh.pa.us
Letting the caller provide a StringInfo to read into is helpful when
the caller needs to merge lines or otherwise modify the data after
it's been read. Notably, now the code added by commit 8f8154a50
can use pg_get_line_append() instead of having its own copy of that
logic. A follow-on commit will also make use of this.
Also, since StringInfo buffers are a minimum of 1KB long, blindly
using pg_get_line() in a loop can eat a lot more memory than one would
expect. I discovered for instance that commit e0f05cd5b caused initdb
to consume circa 10MB to read postgres.bki, even though that's under
1MB worth of data. A less memory-hungry alternative is to re-use the
same StringInfo for all lines and pg_strdup the results.
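A sketch of that less memory-hungry pattern:

    StringInfoData line;

    initStringInfo(&line);
    while (pg_get_line_append(fp, &line))
    {
        char       *copy = pg_strdup(line.data);    /* keep what we need */

        /* ... stash "copy" somewhere ... */
        resetStringInfo(&line); /* reuse the same allocation */
    }
    pfree(line.data);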
Discussion: https://postgr.es/m/1315832.1599345736@sss.pgh.pa.us
This patch started out with the goal of harmonizing various arbitrary
limits on password length, but after awhile a better idea emerged:
let's just get rid of those fixed limits.
recv_password_packet() has an arbitrary limit on the packet size,
which we don't really need, so just drop it. (Note that this doesn't
really affect anything for MD5 or SCRAM password verification, since
those will hash the user's password to something shorter anyway.
It does matter for auth methods that require a cleartext password.)
Likewise remove the arbitrary error condition in pg_saslprep().
The remaining limits are mostly in client-side code that prompts
for passwords. To improve those, refactor simple_prompt() so that
it allocates its own result buffer that can be made as big as
necessary. Actually, it proves best to make a separate routine
pg_get_line() that has essentially the semantics of fgets(), except
that it allocates a suitable result buffer and hence will never
return a truncated line. (pg_get_line has a lot of potential
applications to replace randomly-sized fgets buffers elsewhere,
but I'll leave that for another patch.)
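After this change, prompting looks roughly like this (the result is
malloc'd and of whatever length is needed; the caller frees it):

    char       *password = simple_prompt("Password: ", false); /* no echo */

    /* ... use it ... */
    free(password);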
I built pg_get_line() atop stringinfo.c, which requires moving
that code to src/common/; but that seems fine since it was a poor
fit for src/port/ anyway.
This patch is mostly mine, but it owes a good deal to Nathan Bossart
who pressed for a solution to the password length problem and
created a predecessor patch. Also thanks to Peter Eisentraut and
Stephen Frost for ideas and discussion.
Discussion: https://postgr.es/m/09512C4F-8CB9-4021-B455-EF4C4F0D55A0@amazon.com
strlcpy() and StrNCpy() are equivalent, except that StrNCpy() zero-fills the entire
destination buffer instead of providing just one trailing zero. For
all but a tiny number of callers, that's just overhead rather than
being desirable.
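A sketch of the difference:

    char        buf[64];
    const char *src = "example";

    /* StrNCpy(buf, src, sizeof(buf)) zero-filled all 64 bytes every
     * time; strlcpy() copies the string plus one terminating NUL and
     * stops. */
    strlcpy(buf, src, sizeof(buf));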
Remove StrNCpy() as it is now unused.
In some cases, namestrcpy() is the more appropriate function to use.
While we're here, simplify the API of namestrcpy(): Remove the return
value, don't check for NULL input. Nothing was using that anyway.
Also, remove a few unused name-related functions.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/44f5e198-36f6-6cdb-7fa9-60e34784daae%402ndquadrant.com
Any frontend-only file of src/common/ should include a guard to
prevent such code from being compiled into the backend.
fe_memutils.c and restricted_token.c have been doing that, while
file_utils.c (since bf5bb2e) and logging.c (since fc9a62a) forgot it.
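The guard in question is the usual few lines at the top of the file:

    #ifndef FRONTEND
    #error "This file is not expected to be compiled for backend code"
    #endif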
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20200625080757.GI130132@paquier.xyz
fe_archive.c was compiled only for the frontend in src/common/, but as
it will never share anything with the backend, it makes the most sense
to move this file to src/fe_utils/.
Reported-by: Peter Eisentraut
Discussion: https://postgr.es/m/e9766d71-8655-ac86-bdf6-77e0e7169977@2ndquadrant.com
Backpatch-through: 13
Thomas Munro fixed a longstanding annoyance in pg_bsd_indent, that
it would misformat lines containing IsA() macros on the assumption
that the IsA() call should be treated like a cast. This improves
some other cases involving field/variable names that match typedefs,
too. The only places that get worse are a couple of uses of the
OpenSSL macro STACK_OF(); we'll gladly take that trade-off.
Discussion: https://postgr.es/m/20200114221814.GA19630@alvherre.pgsql
Includes some manual cleanup of places that pgindent messed up,
most of which weren't per project style anyway.
Notably, it seems some people didn't absorb the style rules of
commit c9d297751, because there were a bunch of new occurrences
of function calls with a newline just after the left paren, all
with faulty expectations about how the rest of the call would get
indented.
The additional pain from level 4 is excessive for the gain.
Also revert all the source annotation changes to their original
wordings, to avoid back-patching pain.
Discussion: https://postgr.es/m/31166.1589378554@sss.pgh.pa.us
The normalization-check target needs to be run last, after moving the
newly generated files into place. Also, we need an additional
dependency so that unicode_norm.o is rebuilt first. Otherwise,
norm_test will still test the old files but against the new expected
results, which will probably fail.
This adds SQL expressions NORMALIZE() and IS NORMALIZED to convert and
check Unicode normal forms, per SQL standard.
To support fast IS NORMALIZED tests, we pull in a new data file
DerivedNormalizationProps.txt from Unicode and build a lookup table
from that, using techniques similar to ones already used for other
Unicode data. make update-unicode will keep it up to date. We only
build and use these tables for the NFC and NFKC forms, because they
are too big for NFD and NFKD and the improvement is not significant
enough there.
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/c1909f27-c269-2ed9-12f8-3ab72c8caf7a@2ndquadrant.com
pg_rewind needs to copy from the source cluster to the target cluster
the set of relation blocks changed between the last checkpoint before
WAL forked and the end of WAL on the target. Building this list of
relation blocks requires a range of WAL segments that may no longer be
present in the target's pg_wal, causing pg_rewind to fail. It is
possible to work around this issue by manually copying the needed WAL
segments, but this may lead to extra and ultimately useless work.
This commit introduces a new option allowing pg_rewind to use a
restore_command while doing the rewind, grabbing the parameter value
of restore_command from the target cluster configuration. This makes
the rewind operation more reliable, since only the WAL segments needed
by the rewind are restored from the archives.
In order to do that, a new routine is added to src/common/ to allow
frontend tools to restore files from archives using an already-built
restore command. This version is simpler than the backend equivalent,
as there is no need to handle the non-recovery case.
Author: Alexey Kondratov
Reviewed-by: Andrey Borodin, Andres Freund, Alvaro Herrera, Alexander
Korotkov, Michael Paquier
Discussion: https://postgr.es/m/a3acff50-5a0d-9a2c-b3b2-ee36168955c1@postgrespro.ru
Until now, restore_command has been used only by the backend, but
there is a pending patch for pg_rewind to make use of it in the
frontend.
Author: Alexey Kondratov
Reviewed-by: Andrey Borodin, Andres Freund, Alvaro Herrera, Alexander
Korotkov, Michael Paquier
Discussion: https://postgr.es/m/a3acff50-5a0d-9a2c-b3b2-ee36168955c1@postgrespro.ru
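A sketch of how a frontend caller can use the moved routine, assuming
the BuildRestoreCommand() interface (%f and %p are the usual
restore_command placeholders for the file name and destination path;
the values shown here are made up):

    const char *restore_command = "cp /archive/%f \"%p\"";
    char       *cmd;

    cmd = BuildRestoreCommand(restore_command,
                              "pg_wal/RECOVERYXLOG",        /* %p */
                              "000000010000000000000002",   /* %f */
                              NULL);                        /* %r */
    if (cmd == NULL)
        pg_log_error("invalid value for restore_command");
    else
    {
        (void) system(cmd);
        pfree(cmd);
    }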
SQL includes provisions for numeric Unicode escapes in string
literals and identifiers. Previously we only accepted those
if they represented ASCII characters or the server encoding
was UTF-8, making the conversion to internal form trivial.
This patch adjusts things so that we'll call the appropriate
encoding conversion function in less-trivial cases, allowing
the escape sequence to be accepted so long as it corresponds
to some character available in the server encoding.
This also applies to processing of Unicode escapes in JSONB.
However, the old restriction still applies to client-side
JSON processing, since that hasn't got access to the server's
encoding conversion infrastructure.
This patch includes some lexer infrastructure that simplifies
throwing errors with error cursors pointing into the middle of
a string (or other complex token). For the moment I only used
it for errors relating to Unicode escapes, but we might later
expand the usage to some other cases.
Patch by me, reviewed by John Naylor.
Discussion: https://postgr.es/m/2393.1578958316@sss.pgh.pa.us
When setting PG_COLOR to "always" or "auto" in a VT100-compatible
Windows terminal, the colorized output was not showing up correctly,
because it is necessary to update the console's output handling mode.
This fix detects automatically whether the environment is
VT100-compatible, so PG_COLOR=auto is able to handle both compatible
and non-compatible environments. The behavior of PG_COLOR=always
remains unchanged, as it enforces the use of colorized output even if
the environment does not allow it.
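The detection boils down to asking the console to enable VT100
processing (a sketch; older SDKs may need the fallback #define):

    #ifndef ENABLE_VIRTUAL_TERMINAL_PROCESSING
    #define ENABLE_VIRTUAL_TERMINAL_PROCESSING 0x0004
    #endif

    HANDLE      hOut = GetStdHandle(STD_ERROR_HANDLE);
    DWORD       mode = 0;
    bool        vt100_ok = false;

    if (hOut != INVALID_HANDLE_VALUE && GetConsoleMode(hOut, &mode) &&
        SetConsoleMode(hOut, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING))
        vt100_ok = true;        /* console accepted VT100 sequences */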
This fix is based on an initial suggestion from Thomas Munro.
Reported-by: Haiying Tang
Author: Juan José Santamaría Flecha
Reviewed-by: Michail Nikolaev, Michael Paquier, Haiying Tang
Discussion: https://postgr.es/m/16108-134692e97146b7bc@postgresql.org
Backpatch-through: 12
This also involves renaming src/include/utils/hashutils.h, which
becomes src/include/common/hashfn.h. Perhaps an argument can be
made for keeping the hashutils.h name, but it seemed more
consistent to make it match the name of the file, and also more
descriptive of what is actually going on here.
Patch by me, reviewed by Suraj Kharage and Mark Dilger. Off-list
advice on how not to break the Windows build from Davinder Singh
and Amit Kapila.
Discussion: http://postgr.es/m/CA+TgmoaRiG4TXND8QuM6JXFRkM_1wL2ZNhzaUKsuec9-4yrkgw@mail.gmail.com
Given all we have learned about fsync() error handling in the last few
years, reporting an fsync() error non-fatally is not useful,
unless you don't care much about the file, in which case you probably
don't need to use fsync() in the first place.
Change fsync_fname() and durable_rename() to exit(1) on fsync() errors
other than those that we specifically chose to ignore.
This affects initdb, pg_basebackup, pg_checksums, pg_dump, pg_dumpall,
and pg_rewind.
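The frontend error path now looks roughly like this inside
fsync_fname():

    if (fsync(fd) != 0)
    {
        pg_log_fatal("could not fsync file \"%s\": %m", fname);
        (void) close(fd);
        exit(EXIT_FAILURE);
    }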
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/d239d1bd-aef0-ca7c-dc0a-da14bdcf0392%402ndquadrant.com
This fixes and updates a couple of comments related to outdated
Windows versions. In particular, src/common/exec.c had a fallback
implementation to read a file's line from a pipe, needed because
stdin/stdout/stderr do not exist in Windows 2000; this fallback is
removed to simplify src/common/, as it is unlikely that any version of
Postgres still runs on such platforms.
Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi, Juan José Santamaría Flecha
Discussion: https://postgr.es/m/20191219021526.GC4202@paquier.xyz
The previous system had configure put the values into the makefiles
and then had the makefiles pass them to the build of pg_config. That
was put in place when pg_config was a shell script. We can simplify
that by having configure put the values into pg_config.h directly.
This also makes the standard build system match how the MSVC build
system already does it.
Discussion: https://www.postgresql.org/message-id/flat/6e457870-cef5-5f1d-b57c-fc89cfb8a788%402ndquadrant.com
These new assertions can be used at file scope, outside of any
function, for compile-time checks. This commit provides
implementations for C and C++, plus fallback implementations.
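Usage, and the C fallback when _Static_assert is not available, look
roughly like this:

    /* legal at file scope, unlike StaticAssertStmt() */
    StaticAssertDecl(sizeof(uint64) == 8, "uint64 must be 8 bytes");

    /* fallback: a negative array size in an extern declaration forces
     * a compile-time error when the condition is false */
    #define StaticAssertDecl(condition, errmessage) \
        extern void static_assert_func(int static_assert_failure[(condition) ? 1 : -1])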
Author: Peter Smith
Reviewed-by: Andres Freund, Kyotaro Horiguchi, Dagfinn Ilmari Mannsåker,
Michael Paquier
Discussion: https://postgr.es/m/201DD0641B056142AC8C6645EC1B5F62014B8E8030@SYD1217
To make this work, (1) makeJsonLexContextCstringLen now takes the
encoding to be used as an argument; (2) check_stack_depth() is made to
do nothing in frontend code, and (3) elog(ERROR, ...) is changed to
pg_log_fatal + exit in frontend code.
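A frontend caller then looks roughly like this (my_sem_actions stands
in for the caller's JsonSemAction callbacks, json_str for the input):

    JsonLexContext *lex;

    /* the encoding is now an explicit argument */
    lex = makeJsonLexContextCstringLen(json_str, strlen(json_str),
                                       PG_UTF8, true /* need_escapes */);
    pg_parse_json(lex, &my_sem_actions);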
Mark Dilger, reviewed and slightly revised by me.
Discussion: http://postgr.es/m/CA+TgmoYfOXhd27MUDGioVh6QtpD0C1K-f6ObSA10AWiHBAL5bA@mail.gmail.com
The leaks have been detected by a Coverity run on Windows. No
backpatch is done as the leaks are minor.
While at it, make restricted token creation more consistent in its
error handling, by logging an error instead of a warning if
advapi32.dll cannot be loaded. The DLL was missing in the NT4 days,
but any modern platform should have it. If the library is not there,
an error is still reported back to the caller and nothing else is
done, so there is no behavior change in this commit.
Author: Ranier Vilela
Discussion: https://postgr.es/m/CAEudQApa9MG0foPkgPX87fipk=vhnF2Xfg+CfUyR08h4R7Mywg@mail.gmail.com
Two routines have been added in OpenSSL 1.1.0 to set the protocol bounds
allowed within a given SSL context:
- SSL_CTX_set_min_proto_version
- SSL_CTX_set_max_proto_version
As Postgres supports OpenSSL down to 1.0.1 (as of HEAD), equivalent
replacements exist in the tree, but they are only available for the
backend. A follow-up patch is planned to add control of the SSL
protocol bounds for libpq, so move those routines to src/common/ so
that libpq can use them.
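For reference, on OpenSSL 1.1.0 and newer the native calls look like
this (they are macros, which is what makes the #ifdef probe work; the
src/common/ copies emulate them on 1.0.x):

    #ifdef SSL_CTX_set_min_proto_version
        SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION);
        SSL_CTX_set_max_proto_version(ctx, 0);  /* 0 = no upper bound */
    #endif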
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/4F246AE3-A7AE-471E-BD3D-C799D3748E03@yesql.se
Move all the backend-only code that'd crept into wchar.c and encnames.c
into mbutils.c.
To remove the last few #ifdef dependencies from wchar.c and encnames.c,
also make the following changes:
* Adjust get_encoding_name_for_icu to return NULL, not throw an error,
for unsupported encodings. Its sole caller can perfectly well throw an
error instead. (While at it, I also made this function and its sibling
is_encoding_supported_by_icu proof against out-of-range encoding IDs.)
* Remove the overlength-name error condition from pg_char_to_encoding.
It's completely silly not to treat that just like any other
the-name-is-not-in-the-table case.
Also, get rid of pg_mic_mblen --- there's no obvious reason why
conv.c shouldn't call pg_mule_mblen instead.
Other than that, this is just code movement and comment-polishing with
no functional changes. Notably, I reordered declarations in pg_wchar.h
to show which functions are frontend-accessible and which are not.
Discussion: https://postgr.es/m/CA+TgmoYO8oq-iy8E02rD8eX25T-9SmyxKWqqks5OMHxKvGXpXQ@mail.gmail.com
Formerly, various frontend directories symlinked these two sources
and then built them locally. That's an ancient, ugly hack, and
we now have a much better way: put them into libpgcommon.
So do that. (The immediate motivation for this is the prospect
of having to introduce still more symlinking if we don't.)
This commit moves these two files absolutely verbatim, for ease of
reviewing the git history. There's some follow-on work to be done
that will modify them a bit.
Robert Haas, Tom Lane
Discussion: https://postgr.es/m/CA+TgmoYO8oq-iy8E02rD8eX25T-9SmyxKWqqks5OMHxKvGXpXQ@mail.gmail.com
We currently have several sets of files generated from data provided
by Unicode. These all have ad hoc rules and instructions for updating
when new Unicode versions appear, and it's not done consistently.
This patch centralizes and automates the process and makes it part of
the release checklist. The Unicode and CLDR versions are specified in
Makefile.global.in. There is a new make target "update-unicode" that
downloads all the relevant files and runs the generation script.
There is also a new script for generating the table of combining
characters for ucs_wcwidth(). That table is now in a separate include
file rather than hardcoded into the middle of other code. This is
based on the script that was used for generating
d8594d123c, but the script itself wasn't
committed at that time.
Reviewed-by: John Naylor <john.naylor@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/c8d05f42-443e-6c23-819b-05b31759a37c@2ndquadrant.com
The byte-by-byte loop used in pglz_decompress() because of possible
overlap can be quite inefficient, so this commit replaces it with
memcpy(). The gains depend on the data (compressibility) and hardware,
but they appear to be quite significant.
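The replacement must respect the overlap semantics of LZ matches (a
match may refer into bytes it is itself producing), which leads to
roughly this doubling pattern:

    /* copy "len" bytes from "off" bytes behind the output pointer dp;
     * when off < len the source overlaps the destination, so copy in
     * chunks, doubling the copied distance each time */
    while (off < len)
    {
        memcpy(dp, dp - off, off);
        len -= off;
        dp += off;
        off += off;     /* twice as much history is now available */
    }
    memcpy(dp, dp - off, len);
    dp += len;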
Author: Andrey Borodin
Reviewed-by: Michael Paquier, Konstantin Knizhnik, Tels
Discussion: https://postgr.es/m/469C9ED9-348C-4FE7-A7A7-B0FA671BEE4C@yandex-team.ru
There are plenty of places in frontend code that could benefit from a
string buffer implementation. Some would get simpler and faster code;
others want to share code between backend and frontend.
While there is a string buffer implementation available to frontend
code, libpq's PQExpBuffer, it is clunkier than stringinfo, it
introduces a libpq dependency, doesn't allow for sharing between
frontend and backend code, and has a higher API/ABI stability
requirement due to being exposed via libpq.
Therefore it seems best to just make StringInfo usable by frontend
code. There's not much to do for that, except rewriting two subsequent
elog/ereport calls into other types of error reporting, and deciding
on a maximum string length.
For the maximum string size I decided to privately define MaxAllocSize
to the same value as used in the backend. It seems likely that we'll
want to reconsider this for both backend and frontend code in the not
too far away future.
For now I've left stringinfo.h in lib/, rather than common/, to reduce
the likelihood of unnecessary breakage. We could alternatively decide
to provide a redirecting stringinfo.h in lib/, or just not provide
compatibility.
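Frontend code can then use the familiar interface directly (sketch):

    #include "lib/stringinfo.h"

    StringInfoData buf;

    initStringInfo(&buf);
    appendStringInfoString(&buf, "frontend says: ");
    appendStringInfo(&buf, "%d items", 42);
    /* buf.data and buf.len work as usual; in the frontend, allocation
     * failure exits via the fe_memutils wrappers instead of ereport() */
    pfree(buf.data);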
Author: Andres Freund
Reviewed-By: Kyotaro Horiguchi, Daniel Gustafsson
Discussion: https://postgr.es/m/20190920051857.2fhnvhvx4qdddviz@alap3.anarazel.de
When maintaining or merging patches, one of the most common sources of
conflicts is the list of objects in makefiles. Especially when the
split across lines has been changed on both sides, which is somewhat
common due to attempts to stay below 80 columns, those conflicts are
unnecessarily laborious to resolve.
By splitting, and alphabetically sorting, OBJS style lines into one
object per line, conflicts should be less frequent, and easier to
resolve when they still occur.
Author: Andres Freund
Discussion: https://postgr.es/m/20191029200901.vww4idgcxv74cwes@alap3.anarazel.de