From aa7d0b6cf954c53e9e7f446c9cd75761013d6f35 Mon Sep 17 00:00:00 2001
From: Bruce Momjian
Date: Tue, 26 Sep 2023 19:44:21 -0400
Subject: [PATCH] doc: clarify the effect of concurrent work_mem allocations

Reported-by: Sami Imseih

Discussion: https://postgr.es/m/66590882-F48C-4A25-83E3-73792CF8C51F@amazon.com

Backpatch-through: 11
---
 doc/src/sgml/config.sgml | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index ddd78beec8d..186f870fc75 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1815,9 +1815,10 @@ include_dir 'conf.d'
         (such as a sort or hash table) before writing to temporary disk files.
         If this value is specified without units, it is taken as kilobytes.
         The default value is four megabytes (4MB).
-        Note that for a complex query, several sort or hash operations might be
-        running in parallel; each operation will generally be allowed
-        to use as much memory as this value specifies before it starts
+        Note that a complex query might perform several sort and hash
+        operations at the same time, with each operation generally being
+        allowed to use as much memory as this value specifies before
+        it starts
         to write data into temporary files.  Also, several running
         sessions could be doing such operations concurrently.
         Therefore, the total memory used could be many times the value
@@ -1831,7 +1832,7 @@ include_dir 'conf.d'
         Hash-based operations are generally more sensitive to memory
         availability than equivalent sort-based operations.  The
-        memory available for hash tables is computed by multiplying
+        memory limit for a hash table is computed by multiplying
         work_mem by hash_mem_multiplier.  This makes it possible for
         hash-based operations to use an amount of memory
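As a rough illustration of the arithmetic the patched paragraphs describe, the sketch below (not part of the patch) works through the two rules: the hash-table limit is work_mem times hash_mem_multiplier, and total usage can be many times work_mem when several operations and sessions run concurrently. The hash_mem_multiplier default and the operation/session counts shown are assumptions for illustration; the multiplier's default varies by PostgreSQL version.

```python
# Sketch of the memory accounting described in the patched doc text.
# All concrete values besides work_mem's documented 4MB default are
# illustrative assumptions, not values taken from the patch.

work_mem_kb = 4 * 1024          # work_mem default: four megabytes (4MB)
hash_mem_multiplier = 2.0       # assumed default; check your server version

# Second hunk: the memory limit for a hash table is
# work_mem multiplied by hash_mem_multiplier.
hash_table_limit_kb = work_mem_kb * hash_mem_multiplier

# First hunk: a complex query might perform several sort and hash
# operations at the same time, and several sessions could be doing such
# operations concurrently, so total memory used can be many times work_mem.
ops_per_query = 3               # hypothetical plan with three sorts
concurrent_sessions = 10        # hypothetical concurrent sessions
worst_case_sort_kb = ops_per_query * concurrent_sessions * work_mem_kb

print(hash_table_limit_kb)      # 8192.0  (8MB per hash table)
print(worst_case_sort_kb)       # 122880  (120MB across all sorts)
```

The point of the patch's wording change is visible here: work_mem is a per-operation limit, not a per-session or per-server budget, so the worst case scales with both the number of operations in a plan and the number of active sessions.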