
Flush the IO statistics of active WAL senders more frequently

WAL senders do not flush their statistics until they exit, limiting the
monitoring possible for live processes.  This is penalizing when WAL
senders are running for a long time, like in streaming or logical
replication setups, because it is not possible to know the amount of IO
they generate while running.

This commit makes WAL senders more aggressive with their statistics
flush, using an interval of 1 second, with the flush timing calculated
based on the GetCurrentTimestamp() already taken before the sleeps that
wait for activity.  Note that logical and physical WAL senders sleep in
two different code paths, so the stats flushes need to happen in both
places.

One test is added for the physical WAL sender case, and one for the
logical WAL sender case.  This can be done in a stable fashion by
relying on the WAL generated by the TAP tests in combination with a
stats reset while a server is running, but only on HEAD as WAL data has
been added to pg_stat_io in a051e71e28.

This issue exists since a9c70b46db and the introduction of pg_stat_io,
so backpatch down to v16.

Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com>
Discussion: https://postgr.es/m/Z73IsKBceoVd4t55@ip-10-97-1-34.eu-west-3.compute.internal
Backpatch-through: 16
This commit is contained in:
Michael Paquier
2025-04-08 07:57:19 +09:00
parent ba2a3c2302
commit 039549d70f
3 changed files with 66 additions and 2 deletions


@@ -42,6 +42,9 @@ $node_standby_2->init_from_backup($node_standby_1, $backup_name,
 	has_streaming => 1);
 $node_standby_2->start;
 
+# Reset IO statistics, for the WAL sender check with pg_stat_io.
+$node_primary->safe_psql('postgres', "SELECT pg_stat_reset_shared('io')");
+
 # Create some content on primary and check its presence in standby nodes
 $node_primary->safe_psql('postgres',
 	"CREATE TABLE tab_int AS SELECT generate_series(1,1002) AS a");
@@ -333,6 +336,19 @@ $node_primary->psql(
 
 note "switching to physical replication slot";
 
+# Wait for the physical WAL sender to update its IO statistics. This is
+# done before the next restart, which would force a flush of its stats, and
+# far enough from the reset done above to not impact the run time.
+$node_primary->poll_query_until(
+	'postgres',
+	qq[SELECT sum(reads) > 0
+	FROM pg_catalog.pg_stat_io
+	WHERE backend_type = 'walsender'
+	AND object = 'wal']
+  )
+  or die
+  "Timed out while waiting for the walsender to update its IO statistics";
+
 # Switch to using a physical replication slot. We can do this without a new
 # backup since physical slots can go backwards if needed. Do so on both
 # standbys. Since we're going to be testing things that affect the slot state,


@@ -113,6 +113,9 @@ $node_subscriber->safe_psql('postgres',
 # Wait for initial table sync to finish
 $node_subscriber->wait_for_subscription_sync($node_publisher, 'tap_sub');
 
+# Reset IO statistics, for the WAL sender check with pg_stat_io.
+$node_publisher->safe_psql('postgres', "SELECT pg_stat_reset_shared('io')");
+
 my $result =
 	$node_subscriber->safe_psql('postgres', "SELECT count(*) FROM tab_notrep");
 is($result, qq(0), 'check non-replicated table is empty on subscriber');
@@ -184,6 +187,19 @@ $result =
 	$node_subscriber->safe_psql('postgres', "SELECT count(*) FROM tab_no_col");
 is($result, qq(2), 'check replicated changes for table having no columns');
 
+# Wait for the logical WAL sender to update its IO statistics. This is
+# done before the next restart, which would force a flush of its stats, and
+# far enough from the reset done above to not impact the run time.
+$node_publisher->poll_query_until(
+	'postgres',
+	qq[SELECT sum(reads) > 0
+	FROM pg_catalog.pg_stat_io
+	WHERE backend_type = 'walsender'
+	AND object = 'wal']
+  )
+  or die
+  "Timed out while waiting for the walsender to update its IO statistics";
+
 # insert some duplicate rows
 $node_publisher->safe_psql('postgres',
 	"INSERT INTO tab_full SELECT generate_series(1,10)");