
psql: Fix assertion failures with pipeline mode

The right cocktail of COPY FROM, SELECT and/or DML queries and
\syncpipeline was able to break the logic in charge of discarding the
results of a pipeline, done in discardAbortedPipelineResults().  Such a
sequence makes the backend generate a FATAL error, due to a loss of
protocol synchronization.

This problem comes down to the fact that we did not consider the case of
libpq returning a PGRES_FATAL_ERROR when discarding the results of an
aborted pipeline.  The discarding code is changed so that this result
status is handled as a special case, with the caller of
discardAbortedPipelineResults() being responsible for consuming the
result.
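
To illustrate the shape of that handling, here is a minimal,
self-contained libpq sketch.  This is not psql's actual code:
discard_until_sync_or_fatal() is a hypothetical helper invented for the
example, and it assumes a connection already in pipeline mode with a
synchronization point queued so that a PGRES_PIPELINE_SYNC eventually
terminates the loop.

#include <libpq-fe.h>

/*
 * Hypothetical helper: discard the results of an aborted pipeline until
 * its synchronization point, but stop as soon as a FATAL error shows up,
 * since the connection cannot be recovered past that point.
 */
static PGresult *
discard_until_sync_or_fatal(PGconn *conn)
{
	for (;;)
	{
		PGresult   *res = PQgetResult(conn);

		if (res == NULL)
			continue;			/* end of one query's results, keep going */

		switch (PQresultStatus(res))
		{
			case PGRES_PIPELINE_SYNC:
				/* Reached the end of the aborted pipeline. */
				return res;

			case PGRES_FATAL_ERROR:
				{
					/*
					 * FATAL error from the backend, for instance after a
					 * loss of protocol synchronization; consume the NULL
					 * terminating this query's results, then hand the
					 * error result back to the caller.
					 */
					PGresult   *trailing = PQgetResult(conn);

					if (trailing != NULL)
						PQclear(trailing);
					return res;
				}

			default:
				/* Results of queries aborted by the failure, discard. */
				PQclear(res);
				break;
		}
	}
}

psql's real discardAbortedPipelineResults() additionally decrements its
pending-result counters each time PQgetResult() returns NULL, which the
sketch above glosses over by simply looping.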

A couple of tests are added to cover the problems reported, bringing an
interesting gain in coverage as there were no tests in the tree covering
the case of protocol synchronization loss.

Issue introduced by 41625ab8ea3d.

Reported-by: Alexander Kozhemyakin <a.kozhemyakin@postgrespro.ru>
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/ebf6ce77-b180-4d6b-8eab-71f641499ddf@postgrespro.ru
commit 3631612eae (parent 923ae50cf5)
Michael Paquier, 2025-04-24 12:22:53 +09:00
2 changed files with 58 additions and 0 deletions


@@ -1478,6 +1478,23 @@ discardAbortedPipelineResults(void)
		 */
		return res;
	}
	else if (res != NULL && result_status == PGRES_FATAL_ERROR)
	{
		/*
		 * Found a FATAL error sent by the backend, and we cannot recover
		 * from this state. Instead, return the last result and let the
		 * outer loop handle it.
		 */
		PGresult   *fatal_res PG_USED_FOR_ASSERTS_ONLY;

		/*
		 * Fetch result to consume the end of the current query being
		 * processed.
		 */
		fatal_res = PQgetResult(pset.db);
		Assert(fatal_res == NULL);
		return res;
	}
	else if (res == NULL)
	{
		/* A query was processed, decrement the counters */


@@ -483,4 +483,45 @@ psql_like($node, "copy (values ('foo'),('bar')) to stdout \\g | $pipe_cmd",
my $c4 = slurp_file($g_file);
like($c4, qr/foo.*bar/s);

# Tests with pipelines. These trigger FATAL failures in the backend,
# so they cannot be tested via SQL.
$node->safe_psql('postgres', 'CREATE TABLE psql_pipeline()');
my $log_location = -s $node->logfile;
psql_fails_like(
	$node,
	qq{\\startpipeline
COPY psql_pipeline FROM STDIN;
SELECT 'val1';
\\syncpipeline
\\getresults
\\endpipeline},
	qr/server closed the connection unexpectedly/,
	'protocol sync loss in pipeline: direct COPY, SELECT, sync and getresult');
$node->wait_for_log(
	qr/FATAL: .*terminating connection because protocol synchronization was lost/,
	$log_location);
psql_fails_like(
	$node,
	qq{\\startpipeline
COPY psql_pipeline FROM STDIN \\bind \\sendpipeline
SELECT 'val1' \\bind \\sendpipeline
\\syncpipeline
\\getresults
\\endpipeline},
	qr/server closed the connection unexpectedly/,
	'protocol sync loss in pipeline: bind COPY, SELECT, sync and getresult');

# This time, test without the \getresults.
psql_fails_like(
	$node,
	qq{\\startpipeline
COPY psql_pipeline FROM STDIN;
SELECT 'val1';
\\syncpipeline
\\endpipeline},
	qr/server closed the connection unexpectedly/,
	'protocol sync loss in pipeline: COPY, SELECT and sync');
done_testing();