We had a report from Stefan Kaltenbrunner of a case in which postmaster
log files overran available disk space because multiple backends spewed
enormous context stats dumps upon hitting an out-of-memory condition.
Given the lack of similar reports, this isn't a common problem, but it
still seems worth doing something about. However, we don't want to just
blindly truncate the output, because that might prevent diagnosis of OOM
problems. What seems like a workable compromise is to limit the dump to
100 child contexts per parent, and summarize the space used within any
additional child contexts. That should help because practical cases where
the dump gets long will typically involve huge numbers of siblings under
the same parent context, while the additional debugging value from seeing
details about individual siblings beyond the first 100 will not be large,
we hope.
Anyway it doesn't take much code or memory space to do this, so let's try
it like this and see how things go.
Since the summarization mechanism requires passing totals back up anyway,
I took the opportunity to add a "grand total" line to the end of the
printout.
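
A sketch of the shape this takes (illustrative only, assuming the usual
MemoryContext firstchild/nextchild links; print_context_stats and
accumulate_only are hypothetical helper names, not the committed code):

    #include <stdio.h>

    #define MAX_CHILDREN_PER_LEVEL 100

    /* hypothetical helpers: print one context's stats line, or merely
     * fold a context's subtree into the running totals without printing */
    static void print_context_stats(MemoryContext c, int level,
                                    size_t *total, size_t *freed);
    static size_t accumulate_only(MemoryContext c,
                                  size_t *total, size_t *freed);

    static void
    stats_recurse(MemoryContext context, int level,
                  size_t *total, size_t *freed)
    {
        MemoryContext child;
        int         nchildren = 0;
        size_t      skipped = 0;

        print_context_stats(context, level, total, freed);

        for (child = context->firstchild; child != NULL;
             child = child->nextchild)
        {
            if (nchildren++ < MAX_CHILDREN_PER_LEVEL)
                stats_recurse(child, level + 1, total, freed);
            else
                skipped += accumulate_only(child, total, freed);
        }
        if (skipped > 0)
            fprintf(stderr, "%*s%d more child contexts containing %zu bytes\n",
                    (level + 1) * 2, "",
                    nchildren - MAX_CHILDREN_PER_LEVEL, skipped);
    }

Since the totals ride along in every call, emitting the grand-total line
at the top level costs nothing extra.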
The tuplesort/tuplestore memory management logic assumed that the chunk
allocation overhead for its memtuples array could not increase when
increasing the array size. This is and always was true for tuplesort,
but we (I, I think) blindly copied that logic into tuplestore.c without
noticing that the assumption failed to hold for the much smaller array
elements used by tuplestore. Given rather small work_mem, this could
result in an improper complaint about "unexpected out-of-memory situation",
as reported by Brent DeSpain in bug #13530.
The easiest way to fix this is just to increase tuplestore's initial
array size so that the assumption holds. Rather than relying on magic
constants, though, let's export a #define from aset.c that represents
the safe allocation threshold, and make tuplestore's calculation depend
on that.
Do the same in tuplesort.c to keep the logic looking parallel, even though
tuplesort.c isn't actually at risk at present. This will keep us from
breaking it if we ever muck with the allocation parameters in aset.c.
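
Roughly the shape of the fix, assuming the exported threshold is named
ALLOCSET_SEPARATE_THRESHOLD (requests at least that large are given their
own dedicated blocks, so their allocation overhead cannot grow when the
array is enlarged); the value shown is an assumption, not gospel:

    /* memutils.h: requests above this size are certain to be allocated
     * as separate blocks rather than carved out of a shared block */
    #define ALLOCSET_SEPARATE_THRESHOLD  8192

    /* tuplestore.c: make the initial memtuples array big enough that
     * its allocation already exceeds the threshold, so the assumption
     * holds from the very first allocation */
    state->memtupsize = Max(16384 / sizeof(void *),
                            ALLOCSET_SEPARATE_THRESHOLD / sizeof(void *) + 1);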
Back-patch to all supported versions. The error message doesn't occur
pre-9.3, not so much because the problem can't happen as because the
pre-9.3 tuplestore code neglected to check for it. (The chance of
trouble is a great deal larger as of 9.3, though, due to changes in the
array-size-increasing strategy.) However, allowing LACKMEM() to become
true unexpectedly could still result in less-than-desirable behavior,
so let's patch it all the way back.
Everywhere else in the file, "context" is of type MemoryContext and
"set" is of type AllocSet. AllocSetContextCreate uses a variable of
type AllocSet, so rename it from "context" to "set".
This potentially allows us to add mcxt.c interfaces that do something
other than throw an error when memory cannot be allocated. We'll
handle adding those interfaces in a separate commit.
Michael Paquier, with minor changes by me
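
One plausible shape for such an interface (purely a hypothetical sketch;
the name MemoryContextAllocNoOOM is invented here, and the actual
interface was added in the separate commit mentioned above):

    /* return NULL rather than ereport(ERROR) when allocation fails */
    void *
    MemoryContextAllocNoOOM(MemoryContext context, Size size)
    {
        AssertArg(MemoryContextIsValid(context));

        context->isReset = false;
        return (*context->methods->alloc) (context, size);
    }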
Since C99, it's been standard for printf and friends to accept a "z" size
modifier, meaning "whatever size size_t has". Up to now we've generally
dealt with printing size_t values by explicitly casting them to unsigned
long and using the "l" modifier; but this is really the wrong thing on
platforms where pointers are wider than longs (such as Win64). So let's
start using "z" instead. To ensure we can do that on all platforms, teach
src/port/snprintf.c to understand "z", and add a configure test to force
use of that implementation when the platform's version doesn't handle "z".
Having done that, modify a bunch of places that were using the
unsigned-long hack to use "z" instead. This patch doesn't pretend to have
gotten everyplace that could benefit, but it catches many of them. I made
an effort in particular to ensure that all uses of the same error message
text were updated together, so as not to increase the number of
translatable strings.
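
For example (a before/after fragment; the message text is one of the real
strings touched, shown here just to illustrate the pattern):

    Size    size = 1024;

    /* old style: cast so the value matches the "l" length modifier;
     * this silently truncates on Win64, where long is 32 bits but
     * size_t is 64 bits */
    elog(ERROR, "invalid memory alloc request size %lu",
         (unsigned long) size);

    /* new style: "z" matches size_t exactly on every platform */
    elog(ERROR, "invalid memory alloc request size %zu", size);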
It's possible that this change will result in format-string warnings from
pre-C99 compilers. We might have to reconsider if there are any popular
compilers that will warn about this; but let's start by seeing what the
buildfarm thinks.
Andres Freund, with a little additional work by me
The MaxAllocSize guard is convenient for most callers, because it
reduces the need for careful attention to overflow, data type selection,
and the SET_VARSIZE() limit. A handful of callers are happy to navigate
those hazards in exchange for the ability to allocate a larger chunk.
Introduce MemoryContextAllocHuge() and repalloc_huge(). Use this in
tuplesort.c and tuplestore.c, enabling internal sorts of up to INT_MAX
tuples, a factor-of-48 increase. In particular, B-tree index builds can
now benefit from much-larger maintenance_work_mem settings.
Reviewed by Stephen Frost, Simon Riggs and Jeff Janes.
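
A usage sketch (variable names illustrative; assumes a 64-bit build where
Size is wide enough to express the request):

    /* allocate past the 1 GB MaxAllocSize ceiling; the caller now owns
     * the overflow checking that ordinary palloc would have done */
    Size        bigsize = (Size) 2 * 1024 * 1024 * 1024;   /* 2 GB */
    SortTuple  *memtuples;

    memtuples = (SortTuple *)
        MemoryContextAllocHuge(CurrentMemoryContext, bigsize);

    /* later growth likewise bypasses the MaxAllocSize check */
    memtuples = (SortTuple *)
        repalloc_huge(memtuples, bigsize + bigsize / 2);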
Valgrind "client requests" in aset.c and mcxt.c teach Valgrind and its
Memcheck tool about the PostgreSQL allocator. This makes Valgrind
roughly as sensitive to memory errors involving palloc chunks as it is
to memory errors involving malloc chunks. Further client requests in
PageAddItem() and printtup() verify that all bits being added to a
buffer page or furnished to an output function are predictably-defined.
Those tests catch failures of C-language functions to fully initialize
the bits of a Datum, which in turn stymie optimizations that rely on
_equalConst(). Define the USE_VALGRIND symbol in pg_config_manual.h to
enable these additions. An included "suppression file" silences nominal
errors we don't plan to fix.
Reviewed in earlier versions by Peter Geoghegan and Korry Douglas.
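
The Memcheck client requests involved look roughly like this (simplified;
the real calls are scattered through aset.c, mcxt.c, and the page/output
code, and the wrapper functions here are invented for illustration):

    #include <valgrind/memcheck.h>

    static void
    track_chunk(void *chunk, size_t size)
    {
        /* a freshly palloc'd chunk is addressable but undefined */
        VALGRIND_MAKE_MEM_UNDEFINED(chunk, size);

        /* a pfree'd chunk must not be touched at all */
        VALGRIND_MAKE_MEM_NOACCESS(chunk, size);
    }

    static void
    check_datum(const void *datum, size_t len)
    {
        /* bits handed to PageAddItem() or an output function must be
         * fully defined, or Memcheck reports an error */
        VALGRIND_CHECK_MEM_IS_DEFINED(datum, len);
    }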
Move some repeated debugging code into functions and store intermediates
in variables where not presently necessary. No code-generation changes
in a production build, and no functional changes. This simplifies and
focuses the main patch.
Clear isReset before, not after, calling the context-specific alloc method,
so as to preserve the option to do a tail call in MemoryContextAlloc
(and also so this code isn't assuming that a failed alloc call won't have
changed the context's state before failing). Fix missed direct invocation
of reset method. Reformat a comment.
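
With that ordering, the allocation path can reduce to a sketch like this
(simplified from the mcxt.c of that era):

    void *
    MemoryContextAlloc(MemoryContext context, Size size)
    {
        AssertArg(MemoryContextIsValid(context));

        /* clear the flag first: a failed alloc then can't leave stale
         * state behind, and the call below is eligible for a tail call */
        context->isReset = false;

        return (*context->methods->alloc) (context, size);
    }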
Keeping the isReset flag in the common MemoryContextData header avoids the
overhead of one function call when calling MemoryContextReset(), and it
seems like the isReset optimization would be applicable to any new memory
context type we might invent in the future anyway.
This buys back the overhead I just added in the previous patch to always
call MemoryContextReset() in ExecScan, even when there are no quals or
projections.
The previous coding would allow requests up to half of maxBlockSize to be
treated as "chunks", but when that actually did happen, we'd waste nearly
half of the space in the malloc block containing the chunk, if no smaller
requests came along to fill it. Avoid this scenario by limiting the
maximum size of a chunk to 1/8th maxBlockSize, so that we can waste no more
than 1/8th of the allocated space. This will not change the behavior at
all for the default context size parameters (with large maxBlockSize),
but it will change the behavior when using ALLOCSET_SMALL_MAXSIZE.
In particular, there's no longer a need for spell.c to be overly concerned
about the request size parameters it uses, so remove a rather unhelpful
comment about that.
Merlin Moncure, per an idea of Tom Lane's
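
The sizing rule amounts to something like this (a sketch, not the
committed code; set and maxBlockSize are the locals of
AllocSetContextCreate, and ALLOC_CHUNK_LIMIT, ALLOC_CHUNKHDRSZ, and
ALLOC_BLOCKHDRSZ are aset.c's existing constants):

    /* start from the absolute chunk ceiling, then halve until a chunk
     * plus its header can claim at most 1/8th of a block */
    set->allocChunkLimit = ALLOC_CHUNK_LIMIT;
    while ((Size) (set->allocChunkLimit + ALLOC_CHUNKHDRSZ) >
           (Size) ((maxBlockSize - ALLOC_BLOCKHDRSZ) / 8))
        set->allocChunkLimit >>= 1;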
Speed up AllocSetFreeIndex, which is a significant cost in palloc, by
using a lookup table instead of a naive shift-and-count loop. Based on
code originally posted by Sean Eron Anderson at
http://graphics.stanford.edu/%7eseander/bithacks.html.
Greg Stark did the research and benchmarking to show that this is what
we should use. Jeremy Kerr first noticed that this is a hotspot that
could be optimized, though we ended up not using his suggestion of
platform-specific bit-searching code.
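
The table is the standard construction from the bithacks page; the hot
path then becomes roughly this (log2_small is a hypothetical helper name,
and the sketch assumes the argument fits in 16 bits, as freelist-bounded
sizes do):

    #define LT(n) n, n, n, n, n, n, n, n, n, n, n, n, n, n, n, n

    static const char LogTable256[256] =
    {
        -1, 0, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3,
        LT(4), LT(5), LT(5), LT(6), LT(6), LT(6), LT(6),
        LT(7), LT(7), LT(7), LT(7), LT(7), LT(7), LT(7), LT(7)
    };

    static inline int
    log2_small(unsigned int v)      /* requires v < 65536 */
    {
        unsigned int t = v >> 8;

        /* one lookup plus a branch replaces up to ~11 loop iterations */
        return t ? 8 + LogTable256[t] : LogTable256[v];
    }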
Add a debugging aid to catch data-type input functions that include
garbage bytes in their results. Provide a
compile-time option RANDOMIZE_ALLOCATED_MEMORY to make palloc fill returned
blocks with variable contents. This option also makes the parser perform
conversions of literal constants twice and compare the results, emitting a
WARNING if they don't match. (This is the code I used to catch the input
function bugs fixed in the previous commit.) For the moment, I've set it
to be activated automatically by --enable-cassert.
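
The filling logic can be as simple as this (close in spirit to what
aset.c does under the option, though details may differ):

    #ifdef RANDOMIZE_ALLOCATED_MEMORY
    /* fill freshly returned memory with a byte pattern that varies from
     * call to call, so garbage bytes can't accidentally stay stable */
    static void
    randomize_mem(char *ptr, size_t size)
    {
        static int  save_ctr = 1;
        int         ctr = save_ctr;

        while (size-- > 0)
        {
            *ptr++ = ctr;
            if (++ctr > 251)
                ctr = 1;
        }
        save_ctr = ctr;
    }
    #endif   /* RANDOMIZE_ALLOCATED_MEMORY */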
Remove the recently-added logic that made repalloc try to enlarge the
memory chunk in-place when it was feasible to do so. This turns
out to not work well at all for scenarios involving repeated cycles of
palloc/repalloc/pfree: the eventually freed chunks go into the wrong freelist
for the next initial palloc request, and so we consume memory indefinitely.
While that could be defended against, the number of cases where the
optimization can still be applied drops significantly, and adjusting the
initial sizes of StringInfo buffers makes it drop to almost nothing.
Seems better to just remove the extra complexity.
Per recent discussion and testing.
Change the memory context statistics printout so that each level of
child memory contexts is indented two spaces to the right of its
parent context. This should make it easier to deduce the memory
context hierarchy from the output of MemoryContextStats().
Remove the loop in AllocSetAlloc that would look through a freelist for
a chunk of adequate size. For a long time
now, all elements of a given freelist have been exactly the same
allocated size, so we don't need a loop. Since the loop never iterated
more than once, you'd think this wouldn't matter much, but it makes a
noticeable savings in a simple test --- perhaps because the compiler
isn't optimizing on a mistaken assumption that the loop would repeat.
AllocSetAlloc is called often enough that saving even a couple of
instructions is worthwhile.
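
The change, in sketch form (simplified; while a chunk sits on a freelist
its aset field doubles as the next-link):

    /* before: walk the list looking for a big-enough chunk */
    for (chunk = set->freelist[fidx]; chunk != NULL;
         chunk = (AllocChunk) chunk->aset)
    {
        if (chunk->size >= size)
            break;      /* plus unlinking bookkeeping, omitted here */
    }

    /* after: every chunk on freelist[fidx] is the same size, so pop */
    chunk = set->freelist[fidx];
    if (chunk != NULL)
    {
        Assert(chunk->size >= size);
        set->freelist[fidx] = (AllocChunk) chunk->aset;
        chunk->aset = (void *) set;     /* reclaim the link field */
    }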
Fix aset.c to avoid wasting space when a memory context has a small
maxBlockSize: the maximum request size that we will treat as a
"chunk" needs to be limited to fit in maxBlockSize. Otherwise we will round
up the request size to the next power of 2, wasting space, which is a bit
pointless if we aren't going to make the blocks big enough to fit additional
stuff in them. The example motivating this is local buffer management, which
makes repeated allocations of 8K (one BLCKSZ buffer) in TopMemoryContext,
which has maxBlockSize = 8K because for the most part allocations there are
small. This leads to each local buffer actually eating 16K of space, which
adds up when there are thousands of them. I intend to change localbuf.c to
aggregate its requests, which will prevent this particular misbehavior, but
it seems likely that similar scenarios could arise elsewhere, so fixing the
core problem seems wise as well.
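
Roughly the arithmetic behind the 16K figure (approximate; exact header
sizes omitted):

    request    = 8192 bytes (one BLCKSZ buffer)
    old rule   : 8192 <= chunk limit, so treat the request as a chunk
    rounding   : chunk size = next power of 2 = 8192
    block fit  : 8192 + chunk/block headers > the 8K maxBlockSize,
                 so the chunk lands in a 16K block
    net effect : nearly half of each 16K block sits idle, i.e. each
                 8K local buffer eats about 16K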
The former coding relied on the actual allocated size of the last block,
which made it behave strangely if the first allocation in a context was
larger than ALLOC_CHUNK_LIMIT: subsequent allocations would be referenced
to that and not to the intended series of block sizes. Noted while
studying a memory wastage gripe from Tatsuo.
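
The fix keeps the intended progression in the context itself; in sketch
form (nextBlockSize being a field of the AllocSet):

    /* derive the next block size from the stored progression, never
     * from the size of whatever block happens to be newest (which may
     * be an oversized, single-chunk block) */
    blksize = set->nextBlockSize;
    set->nextBlockSize <<= 1;
    if (set->nextBlockSize > set->maxBlockSize)
        set->nextBlockSize = set->maxBlockSize;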
---------------------------------------------------------------------------
Tom Lane <tgl@sss.pgh.pa.us> writes:
> a_ogawa <a_ogawa@hi-ho.ne.jp> writes:
> > It is a reasonable idea. However, most of the MemSet overhead could
> > not be avoided by this idea, because the per-tuple contexts are used
> > at an early stage in the executor.
>
> Drat. Well, what about changing that? We could introduce additional
> contexts or change the startup behavior so that the ones that are
> frequently reset don't have any data in them unless you are working
> with pass-by-ref values inside the inner loop.
That might be possible. However, I think we should confine this change
to aset.c.
I thought further: we can check whether the context has been used since
the last reset even when the blocks list is not empty. Please see the
attached patch.
The measured effect of the patch is as follows:
o Execution time for running the SQL ten times:
(1) Linux (CPU: Pentium III, compiler option: -O2)
- original: 24.960s
- patched : 23.114s
(2) Linux (CPU: Pentium 4, compiler option: -O2)
- original: 8.730s
- patched : 7.962s
(3) Solaris (CPU: UltraSPARC III, compiler option: -O2)
- original: 37.0s
- patched : 33.7s
Atsushi Ogawa (a_ogawa)
Make AllocSetReset skip clearing the freelist headers when the blocks
list is empty (there can surely be no freelist items if
the context contains no memory), and use MemSetAligned not MemSet to
clear the headers (we assume alignof(pointer) >= alignof(int32)).
Per discussion with Atsushi Ogawa. He proposes some further hacking
that I'm not yet sold on, but these two changes are unconditional wins
since there is no case in which they make things slower.
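
In sketch form, the two changes (simplified; MemSetAligned is the MemSet
variant that assumes suitably aligned storage):

    static void
    AllocSetReset(MemoryContext context)
    {
        AllocSet    set = (AllocSet) context;

        /* no blocks => nothing was ever allocated => the freelists are
         * already empty, so don't bother clearing them */
        if (set->blocks != NULL)
        {
            /* the pointer array is aligned, so the cheaper variant is
             * safe (we assume alignof(pointer) >= alignof(int32)) */
            MemSetAligned(set->freelist, 0, sizeof(set->freelist));
        }

        /* ... block release/reset proceeds as before ... */
    }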
Also performed an initial run through of upgrading our Copyright date to
extend to 2005 ... first run here was very simple ... change everything
where: grep 1996-2004 && the word 'Copyright' ... scanned through the
generated list with 'less' first, and after, to make sure that I only
picked up the right entries ...
Now that ListCells are only 8 bytes instead of 12 (on 4-byte-pointer
machines
anyway), it's worth maintaining a separate freelist for 8-byte objects.
Remembering that alloc chunks carry 8 bytes of overhead, this should
reduce the net storage requirement for a long List by about a third.
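
Roughly the arithmetic (on a 4-byte-pointer machine):

    old: 12-byte ListCell -> rounds up to a 16-byte chunk + 8-byte header = 24
    new:  8-byte ListCell -> fits the new 8-byte chunk    + 8-byte header = 16
    16/24 = 2/3, i.e. about a one-third reduction for a long List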