Fix hashjoin memory balancing logic

Commit a1b4f289be improved the hashjoin sizing to also consider the
memory used by BufFiles for batches. The code however had multiple
issues, making it ineffective or incorrect in some cases.

* The amount of memory needed by buffers was calculated using uint32,
  so it would overflow for nbatch >= 262144. If this happened the loop
  would exit prematurely and the memory usage would not be reduced.

  The nbatch overflow is fixed by reworking the condition so that it
  performs no multiplication at all, leaving no risk of overflow. An
  explicit cast was added to a similar calculation in
  ExecHashIncreaseBatchSize. (The first sketch after this list
  demonstrates the wraparound.)

* The loop adjusting the nbatch value used hash_table_bytes to calculate
  the old/new size, but then updated only space_allowed. The consequence
  is the total memory usage was not reduced, but all the memory saved by
  reducing the number of batches was used for the internal hash table.

  This was fixed by using only space_allowed. This is also more correct,
  because hash_table_bytes does not account for skew buckets. (See the
  second sketch after this list.)

* The code was also doubling multiple parameters (e.g. the number of
  buckets for the hash table), but was missing overflow protections.

  The loop now checks for overflow, and terminates if needed. It'd be
  possible to cap the value and continue the loop, but it's not worth
  the complexity. And the overflow implies the in-memory hash table is
  already very large anyway.
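
A minimal, self-contained sketch of the wraparound behind the first
issue (illustrative standalone C, not an excerpt from nodeHash.c, and
assuming the default BLCKSZ of 8192):

    #include <stdint.h>
    #include <stdio.h>

    #define BLCKSZ 8192	/* default PostgreSQL block size, 2^13 */

    int
    main(void)
    {
        uint32_t    nbatch = 262144;	/* 2^18 -- smallest value that wraps */

        /* buggy: 2 * nbatch * BLCKSZ = 2^32, which wraps to 0 in 32 bits */
        uint32_t    wrapped = 2 * nbatch * BLCKSZ;

        /* widening first avoids the wrap (the ExecHashIncreaseBatchSize fix) */
        uint64_t    widened = 2 * (uint64_t) nbatch * BLCKSZ;

        printf("wrapped = %u, widened = %llu\n",
               (unsigned) wrapped, (unsigned long long) widened);
        return 0;
    }

The balancing loop itself goes further and avoids the multiplication
entirely, comparing nbatch against space_allowed / BLCKSZ instead.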
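
And a condensed view of the second and third issues, quoting the
pre-fix loop body from the diff below with annotations added:

    while (nbatch > 0)
    {
        /* both estimates are derived from hash_table_bytes ... */
        size_t	current_space = hash_table_bytes + (2 * nbatch * BLCKSZ);
        size_t	new_space = hash_table_bytes * 2 + (nbatch * BLCKSZ);

        if (current_space < new_space)
            break;

        /*
         * ... but only space_allowed is updated below. hash_table_bytes
         * never changes, so the memory saved on batch buffers is simply
         * handed to the in-memory hash table instead of being released.
         * Note also that nbuckets and space_allowed are doubled without
         * any overflow check.
         */
        nbatch /= 2;
        nbuckets *= 2;
        *space_allowed = (*space_allowed) * 2;
    }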

While at it, rework the comment explaining how the memory balancing
works, to make it more concise and easier to understand.

The initial nbatch overflow issue was reported by Vaibhav Jain. The
other issues were noticed by me and Melanie Plageman. Fix by me, with a
lot of review and feedback by Melanie.

Backpatch to 18, where the hashjoin memory balancing was introduced.

Reported-by: Vaibhav Jain <jainva@google.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Backpatch-through: 18
Discussion: https://postgr.es/m/CABa-Az174YvfFq7rLS+VNKaQyg7inA2exvPWmPWqnEn6Ditr_Q@mail.gmail.com
Author: Tomas Vondra
Date: 2025-10-17 21:44:42 +02:00
Parent: fd53065013
Commit: b85c4700fc

@@ -850,85 +850,91 @@ ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew,
 	/*
 	 * Optimize the total amount of memory consumed by the hash node.
 	 *
-	 * The nbatch calculation above focuses on the size of the in-memory hash
-	 * table, assuming no per-batch overhead. Now adjust the number of batches
-	 * and the size of the hash table to minimize total memory consumed by the
-	 * hash node.
-	 *
-	 * Each batch file has a BLCKSZ buffer, and we may need two files per
-	 * batch (inner and outer side). So with enough batches this can be
-	 * significantly more memory than the hashtable itself.
+	 * The nbatch calculation above focuses on the in-memory hash table,
+	 * assuming no per-batch overhead. But each batch may have two files, each
+	 * with a BLCKSZ buffer. For large nbatch values these buffers may use
+	 * significantly more memory than the hash table.
 	 *
 	 * The total memory usage may be expressed by this formula:
 	 *
-	 * (inner_rel_bytes / nbatch) + (2 * nbatch * BLCKSZ) <= hash_table_bytes
+	 * (inner_rel_bytes / nbatch) + (2 * nbatch * BLCKSZ)
 	 *
 	 * where (inner_rel_bytes / nbatch) is the size of the in-memory hash
 	 * table and (2 * nbatch * BLCKSZ) is the amount of memory used by file
-	 * buffers. But for sufficiently large values of inner_rel_bytes value
-	 * there may not be a nbatch value that would make both parts fit into
-	 * hash_table_bytes.
+	 * buffers.
 	 *
-	 * In this case we can't enforce the memory limit - we're going to exceed
-	 * it. We can however minimize the impact and use as little memory as
-	 * possible. (We haven't really enforced it before either, as we simply
-	 * ignored the batch files.)
+	 * The nbatch calculation however ignores the second part. And for very
+	 * large inner_rel_bytes, there may be no nbatch that keeps total memory
+	 * usage under the budget (work_mem * hash_mem_multiplier). To deal with
+	 * that, we will adjust nbatch to minimize total memory consumption across
+	 * both the hashtable and file buffers.
 	 *
-	 * The formula for total memory usage says that given an inner relation of
-	 * size inner_rel_bytes, we may divide it into an arbitrary number of
-	 * batches. This determines both the size of the in-memory hash table and
-	 * the amount of memory needed for batch files. These two terms work in
-	 * opposite ways - when one decreases, the other increases.
+	 * As we increase the size of the hashtable, the number of batches
+	 * decreases, and the total memory usage follows a U-shaped curve. We find
+	 * the minimum nbatch by "walking back" -- checking if halving nbatch
+	 * would lower the total memory usage. We stop when it no longer helps.
 	 *
-	 * For low nbatch values, the hash table takes most of the memory, but at
-	 * some point the batch files start to dominate. If you combine these two
-	 * terms, the memory consumption (for a fixed size of the inner relation)
-	 * has a u-shape, with a minimum at some nbatch value.
+	 * We only reduce the number of batches. Adding batches reduces memory
+	 * usage only when most of the memory is used by the hash table, with
+	 * total memory usage within the limit or not far from it. We don't want
+	 * to start batching when not needed, even if that would reduce memory
+	 * usage.
 	 *
-	 * Our goal is to find this nbatch value, minimizing the memory usage. We
-	 * calculate the memory usage with half the batches (i.e. nbatch/2), and
-	 * if it's lower than the current memory usage we know it's better to use
-	 * fewer batches. We repeat this until reducing the number of batches does
-	 * not reduce the memory usage - we found the optimum. We know the optimum
-	 * exists, thanks to the u-shape.
+	 * While growing the hashtable, we also adjust the number of buckets to
+	 * maintain a load factor of NTUP_PER_BUCKET while squeezing tuples back
+	 * from batches into the hashtable.
 	 *
-	 * We only want to do this when exceeding the memory limit, not every
-	 * time. The goal is not to minimize memory usage in every case, but to
-	 * minimize the memory usage when we can't stay within the memory limit.
+	 * Note that we can only change nbuckets during initial hashtable sizing.
+	 * Once we start building the hash, nbuckets is fixed (we may still grow
+	 * the hash table).
 	 *
-	 * For this reason we only consider reducing the number of batches. We
-	 * could try the opposite direction too, but that would save memory only
-	 * when most of the memory is used by the hash table. And the hash table
-	 * was used for the initial sizing, so we shouldn't be exceeding the
-	 * memory limit too much. We might save memory by using more batches, but
-	 * it would result in spilling more batch files, which does not seem like
-	 * a great trade off.
-	 *
-	 * While growing the hashtable, we also adjust the number of buckets, to
-	 * not have more than one tuple per bucket (load factor 1). We can only do
-	 * this during the initial sizing - once we start building the hash,
-	 * nbucket is fixed.
+	 * We double several parameters (space_allowed, nbuckets, num_skew_mcvs),
+	 * which introduces a risk of overflow. We avoid this by exiting the loop.
+	 * We could do something smarter (e.g. capping nbuckets and continuing),
+	 * but the complexity is not worth it. Such cases are extremely rare, and
+	 * this is a best-effort attempt to reduce memory usage.
 	 */
-	while (nbatch > 0)
+	while (nbatch > 1)
 	{
-		/* how much memory are we using with current nbatch value */
-		size_t		current_space = hash_table_bytes + (2 * nbatch * BLCKSZ);
+		/* Check that buckets won't overflow MaxAllocSize */
+		if (nbuckets > (MaxAllocSize / sizeof(HashJoinTuple) / 2))
+			break;
 
-		/* how much memory would we use with half the batches */
-		size_t		new_space = hash_table_bytes * 2 + (nbatch * BLCKSZ);
+		/* num_skew_mcvs should be less than nbuckets */
+		Assert((*num_skew_mcvs) < (INT_MAX / 2));
 
-		/* If the memory usage would not decrease, we found the optimum. */
-		if (current_space < new_space)
+		/*
+		 * Check that space_allowed won't overflow SIZE_MAX.
+		 *
+		 * We don't use hash_table_bytes here, because it does not include the
+		 * skew buckets. And we want to limit the overall memory usage.
+		 */
+		if ((*space_allowed) > (SIZE_MAX / 2))
 			break;
 
 		/*
-		 * It's better to use half the batches, so do that and adjust the
-		 * nbucket in the opposite direction, and double the allowance.
+		 * Will halving the number of batches and doubling the size of the
+		 * hashtable reduce overall memory usage?
+		 *
+		 * This is the same as (S = space_allowed):
+		 *
+		 * (S + 2 * nbatch * BLCKSZ) < (S * 2 + nbatch * BLCKSZ)
+		 *
+		 * but avoiding intermediate overflow.
+		 */
+		if (nbatch < (*space_allowed) / BLCKSZ)
+			break;
+
+		/*
+		 * MaxAllocSize is sufficiently small that we are not worried about
+		 * overflowing nbuckets.
 		 */
-		nbatch /= 2;
 		nbuckets *= 2;
+		*num_skew_mcvs = (*num_skew_mcvs) * 2;
 		*space_allowed = (*space_allowed) * 2;
+		nbatch /= 2;
 	}
 
 	Assert(nbuckets > 0);
@@ -994,14 +1000,14 @@ ExecHashIncreaseBatchSize(HashJoinTable hashtable)
 	 * How much additional memory would doubling nbatch use? Each batch may
 	 * require two buffered files (inner/outer), with a BLCKSZ buffer.
 	 */
-	size_t		batchSpace = (hashtable->nbatch * 2 * BLCKSZ);
+	size_t		batchSpace = (hashtable->nbatch * 2 * (size_t) BLCKSZ);
 
 	/*
 	 * Compare the new space needed for doubling nbatch and for enlarging the
 	 * in-memory hash table. If doubling the hash table needs less memory,
 	 * just do that. Otherwise, continue with doubling the nbatch.
 	 *
-	 * We're either doubling spaceAllowed of batchSpace, so which of those
+	 * We're either doubling spaceAllowed or batchSpace, so which of those
 	 * increases the memory usage the least is the same as comparing the
 	 * values directly.
 	 */
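
For the record, the algebra behind the new break condition (with S =
space_allowed): halving nbatch doubles S, so halving has stopped
helping once

    (S + 2 * nbatch * BLCKSZ) < (S * 2 + nbatch * BLCKSZ)
        <=>  nbatch * BLCKSZ < S
        <=>  nbatch < S / BLCKSZ

which is exactly the test the loop performs. Dividing by BLCKSZ rather
than multiplying by it means no intermediate product can overflow, and
the slight boundary slack from the integer division is harmless.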