malloc: Add Huge Page support to arenas

It is enabled by default when glibc.malloc.hugetlb is set to 2 or higher.
It also uses non-configurable minimum and maximum values, currently set
to 1 and 4 times the selected huge page size, respectively.
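For example, huge page arenas can then be requested at run time through
the tunables environment variable, e.g. GLIBC_TUNABLES=glibc.malloc.hugetlb=2
(the invocation is given here for illustration and is not part of this patch).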

Arena allocation with huge pages does not use MAP_NORESERVE.  As
indicated by the kernel internal documentation [1], the flag might
trigger a SIGBUS on a soft page fault if no pages are left in the pool
at the time of the memory access.
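
For illustration only (this is not code from the patch), a minimal sketch
of an explicit MAP_HUGETLB mapping without MAP_NORESERVE; it assumes a
2 MiB huge page size.  Because the pages are reserved at mmap time, a
shortage in the pool shows up as an mmap failure rather than a later
SIGBUS:

#define _GNU_SOURCE
#include <stdio.h>
#include <stddef.h>
#include <sys/mman.h>

int
main (void)
{
  /* Assumption: 2 MiB huge pages; the real size should be queried from
     the system rather than hard-coded.  */
  size_t len = 2 * 1024 * 1024;
  void *p = mmap (NULL, len, PROT_READ | PROT_WRITE,
		  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
  if (p == MAP_FAILED)
    {
      /* Without MAP_NORESERVE a shortage of pool pages is reported
	 here, instead of as a SIGBUS when the page is first touched.  */
      perror ("mmap (MAP_HUGETLB)");
      return 1;
    }
  *(char *) p = 1;  /* Safe: the page was reserved at mmap time.  */
  munmap (p, len);
  return 0;
}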

On systems without a reserved huge page pool, this only stresses the
mmap(MAP_HUGETLB) allocation failure path.  To improve test coverage it
is necessary to create a pool with some allocated pages.
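
(For reference, and not part of the patch itself: on Linux a pool can
typically be pre-allocated by writing the desired page count to
/proc/sys/vm/nr_hugepages, e.g. 256 to match the configuration below.)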

Checked on x86_64-linux-gnu with no reserved pages, with 10 reserved
pages (which trigger mmap(MAP_HUGETLB) failures), and with 256 reserved
pages (which do not trigger mmap(MAP_HUGETLB) failures).

[1] https://www.kernel.org/doc/html/v4.18/vm/hugetlbfs_reserv.html#resv-map-modifications

Reviewed-by: DJ Delorie <dj@redhat.com>
Author: Adhemerval Zanella
Date:   2021-08-20 13:22:35 -03:00
Commit: c1beb51d08 (parent 98d5fcb8d0)
3 changed files with 99 additions and 44 deletions


@@ -5302,7 +5302,7 @@ static __always_inline int
 do_set_mmap_threshold (size_t value)
 {
   /* Forbid setting the threshold too high.  */
-  if (value <= HEAP_MAX_SIZE / 2)
+  if (value <= heap_max_size () / 2)
     {
       LIBC_PROBE (memory_mallopt_mmap_threshold, 3, value, mp_.mmap_threshold,
		   mp_.no_dyn_threshold);
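
The hunk above (in malloc/malloc.c) replaces the compile-time
HEAP_MAX_SIZE bound with a runtime query, since the maximum heap size
now depends on the selected huge page size.  A minimal sketch of what
such a helper could look like, assuming the mp_.hp_pagesize field and
the factor of 4 from the commit message; the exact glibc implementation
may differ:

static inline size_t
heap_max_size (void)
{
  /* Sketch only: with huge pages enabled the per-arena heap limit
     scales with the selected huge page size (up to 4 pages per the
     commit message); otherwise the historic compile-time constant
     applies.  */
  return mp_.hp_pagesize == 0 ? HEAP_MAX_SIZE : mp_.hp_pagesize * 4;
}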