mirror of https://github.com/postgres/postgres.git synced 2025-07-24 14:22:24 +03:00

read_stream: Fix overflow hazard with large shared buffers

If the limit returned by GetAdditionalPinLimit() is large, the buffer_limit
variable in read_stream_start_pending_read() can overflow. While the code is
careful to limit buffer_limit to PG_INT16_MAX, we subsequently add the number
of forwarded buffers.

The overflow can lead to assertion failures, crashes or wrong query results
when using large shared buffers.

It seems easier to avoid this if we make the buffer_limit variable an int,
instead of an int16.  Do so, and clamp buffer_limit after adding the number of
forwarded buffers.

It's possible we might want to address this and related issues more broadly by
changing from int16 to int in more places, but since the consequences of this
bug can be confusing, it seems better to fix it now.

This bug was introduced in ed0b87caac.

Discussion: https://postgr.es/m/ewvz3cbtlhrwqk7h6ca6cctiqh7r64ol3pzb3iyjycn2r5nxk5@tnhw3a5zatlr
Andres Freund
2025-04-07 08:47:30 -04:00
parent 717d0e8dd9
commit 8ce79483dc

@@ -237,7 +237,7 @@ read_stream_start_pending_read(ReadStream *stream)
 	int16		io_index;
 	int16		overflow;
 	int16		buffer_index;
-	int16		buffer_limit;
+	int			buffer_limit;
 
 	/* This should only be called with a pending read. */
 	Assert(stream->pending_read_nblocks > 0);
@@ -294,7 +294,10 @@ read_stream_start_pending_read(ReadStream *stream)
 	else
 		buffer_limit = Min(GetAdditionalPinLimit(), PG_INT16_MAX);
 
+	Assert(stream->forwarded_buffers <= stream->pending_read_nblocks);
+	buffer_limit += stream->forwarded_buffers;
+	buffer_limit = Min(buffer_limit, PG_INT16_MAX);
 	if (buffer_limit == 0 && stream->pinned_buffers == 0)
 		buffer_limit = 1;		/* guarantee progress */