mirror of https://sourceware.org/git/glibc.git synced 2025-10-12 19:04:54 +03:00
Commit Graph

16796 Commits

Author SHA1 Message Date
gfleury
8f842ce13e htl: move __pthread_default_rwlockattr into libc.
Signed-off-by: gfleury <gfleury@disroot.org>
Message-ID: <20250216145434.7089-2-gfleury@disroot.org>
2025-02-16 22:59:00 +01:00
Aurelien Jarno
60f2d6be65 Fix tst-aarch64-pkey to handle ENOSPC as not supported
The syscall pkey_alloc can return ENOSPC to indicate either that all
keys are in use or that the system runs in a mode in which memory
protection keys are disabled. In such a case the test should not fail
but simply return unsupported.

This matches the behaviour of the generic tst-pkey.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-02-15 11:08:43 +01:00
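
A minimal sketch of the check this commit describes, treating ENOSPC
(and ENOSYS) from pkey_alloc as "unsupported" rather than a failure;
the structure and the exit-status convention are illustrative, not the
actual tst-aarch64-pkey code:

  #define _GNU_SOURCE
  #include <errno.h>
  #include <stdio.h>
  #include <sys/mman.h>

  int
  main (void)
  {
    int key = pkey_alloc (0, 0);
    if (key < 0)
      {
        /* ENOSPC: all keys in use, or the system runs with memory
           protection keys disabled.  ENOSYS: no kernel support.  */
        if (errno == ENOSPC || errno == ENOSYS)
          {
            puts ("UNSUPPORTED: memory protection keys not usable");
            return 77;  /* Conventional "unsupported" exit status.  */
          }
        perror ("pkey_alloc");
        return 1;
      }
    pkey_free (key);
    return 0;
  }
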
Yat Long Poon
95e807209b AArch64: Improve codegen for SVE powf
Improve memory access with indexed/unpredicated instructions.
Eliminate register spills.  Speedup on Neoverse V1: 3%.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-02-13 18:16:54 +00:00
Yat Long Poon
0b195651db AArch64: Improve codegen for SVE pow
Move constants to struct.  Improve memory access with indexed/unpredicated
instructions.  Eliminate register spills.  Speedup on Neoverse V1: 24%.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-02-13 18:16:54 +00:00
Yat Long Poon
f5ff34cb3c AArch64: Improve codegen for SVE erfcf
Reduce number of MOV/MOVPRFXs and use unpredicated FMUL.
Replace MUL with LSL.  Speedup on Neoverse V1: 6%.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-02-13 18:16:54 +00:00
Luna Lamb
c0ff447edf Aarch64: Improve codegen in SVE exp and users, and update expf_inline
Use unpredicated muls, and improve memory access.
7%, 3% and 1% improvement in throughput microbenchmark on Neoverse V1,
for exp, exp2 and cosh respectively.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-02-13 18:16:54 +00:00
Luna Lamb
8f0e7fe61e Aarch64: Improve codegen in SVE asinh
Use unpredicated muls, use lanewise MLAs, and improve memory access.
1% regression in throughput microbenchmark on Neoverse V1.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-02-13 18:16:54 +00:00
Wilco Dijkstra
5afaf99edb math: Improve layout of exp/exp10 data
GCC aligns global data to 16 bytes if its size is >= 16 bytes.  This patch
changes the exp_data struct slightly so that the fields are better aligned
and without gaps.  As a result on targets that support them, more load-pair
instructions are used in exp.  Exp10 is improved by moving invlog10_2N later
so that neglog10_2hiN and neglog10_2loN can be loaded using load-pair.

The exp benchmark improves by 2.5%, "144bits" by 7.2%, and "768bits" by
12.7% on Neoverse V2.  Exp10 improves by 1.5%.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-13 18:16:54 +00:00
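
The layout idea, schematically (field names follow glibc's exp_data,
but this ordering is an illustration of the principle, not the exact
struct): keeping each hi/lo constant pair adjacent and the struct
gap-free lets targets with load-pair support fetch both halves with a
single LDP.

  struct exp_data
  {
    double invln2N;
    double negln2hiN;      /* Adjacent hi/lo pair: one LDP loads both.  */
    double negln2loN;
    double shift;
    double poly[4];        /* Polynomial coefficients, gap-free.  */
    double neglog10_2hiN;  /* Placing invlog10_2N after this pair ...  */
    double neglog10_2loN;  /* ... keeps these two load-pairable.  */
    double invlog10_2N;
  };
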
Adhemerval Zanella
b81252c4b9 math: Consolidate coshf and sinhf internal tables
The libm size improvement when built with "--enable-stack-protector=strong
--enable-bind-now=yes --enable-fortify-source=2":

Before:

   text    data     bss     dec     hex filename
 585192     860      12  586064   8f150 aarch64-linux-gnu/math/libm.so
 960775    1068      12  961855   ead3f x86_64-linux-gnu/math/libm.so
1189174    5544     368 1195086  123c4e powerpc64le-linux-gnu/math/libm.so

After:

   text    data     bss     dec     hex filename
 584952     860      12  585824   8f060 aarch64-linux-gnu/math/libm.so
 960615    1068      12  961695   eac9f x86_64-linux-gnu/math/libm.so
1189078    5544     368 1194990  123bee powerpc64le-linux-gnu/math/libm.so

There are small code changes for x86_64 and powerpc64le, which do not
affect performance; but on aarch64 with gcc-14 I see slightly better
code generation due to the use of ldq for floating-point constant loading.
Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
2025-02-12 16:31:57 -03:00
Adhemerval Zanella
994007ff29 math: Consolidate acoshf and asinhf internal tables
The libm size improvement when built with "--enable-stack-protector=strong
--enable-bind-now=yes --enable-fortify-source=2":

Before:

   text    data     bss     dec     hex filename
 587304     860      12  588176   8f990 aarch64-linux-gnu-master/math/libm.so
 962855    1068      12  963935   eb55f x86_64-linux-gnu-master/math/libm.so
1191222    5544     368 1197134  12444e powerpc64le-linux-gnu-master/math/libm.so

After:

   text    data     bss     dec     hex filename
 585192     860      12  586064   8f150 aarch64-linux-gnu/math/libm.so
 960775    1068      12  961855   ead3f x86_64-linux-gnu/math/libm.so
1189174    5544     368 1195086  123c4e powerpc64le-linux-gnu/math/libm.so

There are small code changes for x86_64 and powerpc64le, which do not
affect performance; but on aarch64 with gcc-14 I see slightly better
code generation due to the use of ldq for floating-point constant loading.
Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
2025-02-12 16:31:57 -03:00
Adhemerval Zanella
8f170dc819 math: Use tanpif from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic tanpif.

The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                      master        patched   improvement
x86_64                      85.1683        47.7990        43.88%
x86_64v2                    76.8219        41.4679        46.02%
x86_64v3                    73.7775        37.7734        48.80%
aarch64 (Neoverse)          35.4514        18.0742        49.02%
power8                      22.7604        10.1054        55.60%
power10                     22.1358         9.9553        55.03%

reciprocal-throughput        master        patched   improvement
x86_64                      41.0174        19.4718        52.53%
x86_64v2                    34.8565        11.3761        67.36%
x86_64v3                    34.0325         9.6989        71.50%
aarch64 (Neoverse)          25.4349         9.2017        63.82%
power8                      13.8626         3.8486        72.24%
power10                     11.7933         3.6420        69.12%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12 16:31:57 -03:00
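
The adaptation mentioned in this and the neighboring CORE-MATH commits
typically routes special cases through glibc's math_config.h helpers so
that errno and floating-point exceptions are handled the glibc way.  A
simplified sketch of that plumbing (illustrative, not the actual
tanpif.c; cr_tanpif stands for the imported CORE-MATH core path):

  #include <stdint.h>
  #include "math_config.h"          /* asuint, __math_invalidf.  */

  extern float cr_tanpif (float);   /* Imported CORE-MATH core path.  */

  float
  __tanpif (float x)
  {
    uint32_t ix = asuint (x) & 0x7fffffff;
    if (ix >= 0x7f800000)           /* Inf or NaN.  */
      {
        if (ix == 0x7f800000)       /* +/-Inf: domain error; the helper
                                       raises invalid and sets errno.  */
          return __math_invalidf (x);
        return x + x;               /* NaN propagates quietly.  */
      }
    return cr_tanpif (x);           /* Correctly rounded for any mode.  */
  }
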
Adhemerval Zanella
de2fca9fe2 math: Use sinpif from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic sinpif.

The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                      master        patched   improvement
x86_64                      47.5710        38.4455        19.18%
x86_64v2                    46.8828        40.7563        13.07%
x86_64v3                    44.0034        34.1497        22.39%
aarch64 (Neoverse)          19.2493        14.1968        26.25%
power8                      23.5312        16.3854        30.37%
power10                     22.6485        10.2888        54.57%

reciprocal-throughput        master        patched   improvement
x86_64                      21.8858        11.6717        46.67%
x86_64v2                    22.0620        11.9853        45.67%
x86_64v3                    21.5653        11.3291        47.47%
aarch64 (Neoverse)          13.0615         6.5499        49.85%
power8                      16.2030         6.9580        57.06%
power10                     12.8911         4.2858        66.75%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12 16:31:57 -03:00
Adhemerval Zanella
be85208b9f math: Use cospif from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic cospif.

The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                    master        patched   improvement
x86_64                    47.4679        38.4157        19.07%
x86_64v2                  46.9686        38.3329        18.39%
x86_64v3                  43.8929        31.8510        27.43%
aarch64 (Neoverse)        18.8867        13.2089        30.06%
power8                    22.9435         7.8023        65.99%
power10                   15.4472        7.77505        49.67%

reciprocal-throughput      master        patched   improvement
x86_64                    20.9518        11.4991        45.12%
x86_64v2                  19.8699        10.5921        46.69%
x86_64v3                  19.3475         9.3998        51.42%
aarch64 (Neoverse)        12.5767         6.2158        50.58%
power8                    15.0566         3.2654        78.31%
power10                    9.2866         3.1147        66.46%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12 16:31:57 -03:00
Adhemerval Zanella
95a01ea955 math: Use atanpif from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic atanpif.

The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                     master        patched   improvement
x86_64                     66.3296        52.7558        20.46%
x86_64v2                   66.0429        51.4007        22.17%
x86_64v3                   60.6294        48.7876        19.53%
aarch64 (Neoverse)         24.3163        20.9110        14.00%
power8                     16.5766        13.3620        19.39%
power10                    16.5115        13.4072        18.80%

reciprocal-throughput       master        patched   improvement
x86_64                     30.8599        16.0866        47.87%
x86_64v2                   29.2286        15.4688        47.08%
x86_64v3                   23.0960        12.8510        44.36%
aarch64 (Neoverse)         15.4619        10.6752        30.96%
power8                      7.9200         5.2483        33.73%
power10                     6.8539         4.6262        32.50%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12 16:31:57 -03:00
Adhemerval Zanella
1cd9ccd8c0 math: Use atan2pif from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic atan2pif.

The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                 master        patched   improvement
x86_64                 79.4006        70.8726        10.74%
x86_64v2               77.5136        69.1424        10.80%
x86_64v3               71.8050        68.1637         5.07%
aarch64 (Neoverse)     27.8363        24.7700        11.02%
power8                 39.3893        17.2929        56.10%
power10                19.7200        16.8187        14.71%

reciprocal-throughput   master        patched   improvement
x86_64                 38.3457        30.9471        19.29%
x86_64v2               37.4023        30.3112        18.96%
x86_64v3               33.0713        24.4891        25.95%
aarch64 (Neoverse)     19.3683        15.3259        20.87%
power8                 19.5507        8.27165        57.69%
power10                9.05331        7.63775        15.64%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12 16:31:57 -03:00
Adhemerval Zanella
ae679a0aca math: Use asinpif from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic asinpif.

The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                 master        patched   improvement
x86_64                 46.4996        41.6126        10.51%
x86_64v2               46.7551        38.8235        16.96%
x86_64v3               42.6235        33.7603        20.79%
aarch64 (Neoverse)     17.4161        14.3604        17.55%
power8                 10.7347         9.0193        15.98%
power10                10.6420         9.0362        15.09%

reciprocal-throughput   master        patched   improvement
x86_64                 24.7208        16.5544        33.03%
x86_64v2               24.2177        14.8938        38.50%
x86_64v3               20.5617        10.5452        48.71%
aarch64 (Neoverse)     13.4827        7.17613        46.78%
power8                 6.46134        3.56089        44.89%
power10                5.79007        3.49544        39.63%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12 16:31:57 -03:00
Adhemerval Zanella
edb2a8f0ae math: Use acospif from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance than the generic acospif.

The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                  master        patched   improvement
x86_64                  54.8281        42.9070        21.74%
x86_64v2                54.1717        42.7497        21.08%
x86_64v3                49.3552        34.1512        30.81%
aarch64 (Neoverse)      17.9395        14.3733        19.88%
power8                  20.3110         8.8609        56.37%
power10                 11.3113        8.84067        21.84%

reciprocal-throughput    master        patched   improvement
x86_64                  21.2301        14.4803        31.79%
x86_64v2                20.6858        13.9506        32.56%
x86_64v3                16.1944        11.3377        29.99%
aarch64 (Neoverse)      11.4474        7.13282        37.69%
power8                  10.6916        3.57547        66.56%
power10                 4.64269        3.54145        23.72%

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-02-12 16:31:57 -03:00
Samuel Thibault
392261a2b6 hurd: Replace char foo[1024] with string_t
As already done in various other places, and as advised by Roland in

https://lists.gnu.org/archive/html/bug-hurd/2012-04/msg00124.html
2025-02-10 20:10:59 +01:00
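
The substitution itself is mechanical; string_t is the fixed-size
character-array typedef used by the Hurd/Mach RPC interfaces.  A sketch
of the before/after (the variable name is illustrative):

  /* Before: an open-coded buffer size.  */
  char name[1024];

  /* After: the RPC interfaces' own string type carries the size.  */
  string_t name;
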
Samuel Thibault
659fa18dde hurd: Drop useless buffer initialization in ttyname*
The RPC stub will write a string anyway.
2025-02-10 20:10:59 +01:00
gfleury
6bcd7bf100 htl: stop exporting __pthread_default_barrierattr.
since all symbols that use it are now in libc.
Message-ID: <20250209200108.865599-9-gfleury@disroot.org>
2025-02-10 01:39:17 +01:00
gfleury
710bbc9659 htl: move pthread_barrier_wait into libc.
Message-ID: <20250209200108.865599-8-gfleury@disroot.org>
2025-02-10 01:39:17 +01:00
gfleury
2789003489 htl: move pthread_barrier_init into libc.
Message-ID: <20250209200108.865599-7-gfleury@disroot.org>
2025-02-10 01:39:17 +01:00
gfleury
735c9b73d6 htl: move pthread_barrier_destroy into libc.
Message-ID: <20250209200108.865599-6-gfleury@disroot.org>
2025-02-10 01:39:17 +01:00
gfleury
ccf19a68ab htl: move pthread_barrierattr_getpshared, pthread_barrierattr_setpshared into libc.
Message-ID: <20250209200108.865599-5-gfleury@disroot.org>
2025-02-10 01:39:17 +01:00
gfleury
ca2a95ee67 htl: move pthread_barrierattr_init into libc.
Message-ID: <20250209200108.865599-4-gfleury@disroot.org>
2025-02-10 01:18:56 +01:00
gfleury
40cbd3c361 htl: move pthread_barrierattr_destroy into libc.
Message-ID: <20250209200108.865599-3-gfleury@disroot.org>
2025-02-10 01:18:17 +01:00
gfleury
7d799d85e8 htl: move __pthread_default_barrierattr into libc.
Message-ID: <20250209200108.865599-2-gfleury@disroot.org>
2025-02-10 01:17:50 +01:00
Florian Weimer
3755ffb665 powerpc64le: Also avoid IFUNC for __mempcpy
Code used during early static startup in elf/dl-tls.c uses
__mempcpy.

Fixes commit cbd9fd2369 ("Consolidate
TLS block allocation for static binaries with ld.so").

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-02-05 09:53:11 +01:00
Adhemerval Zanella
09e7f4d594 math: Fix tanf for some inputs (BZ 32630)
The logic was copied wrong from CORE-MATH.
2025-02-03 09:40:39 -03:00
Florian Weimer
749310c61b elf: Add l_soname accessor function for DT_SONAME values
It's not necessary to introduce temporaries because the compiler
is able to evaluate l_soname just once in constructs like:

  l_soname (l) != NULL && strcmp (l_soname (l), LIBC_SO) != 0
2025-02-02 20:10:09 +01:00
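
A plausible shape for such an accessor (a sketch based on how ld.so
resolves DT_SONAME elsewhere; the actual helper may differ):

  /* Inside ld.so, where <ldsodefs.h> provides D_PTR.  */
  static inline const char *
  l_soname (struct link_map *l)
  {
    if (l->l_info[DT_SONAME] == NULL)
      return NULL;   /* The object carries no DT_SONAME.  */
    return ((const char *) D_PTR (l, l_info[DT_STRTAB])
            + l->l_info[DT_SONAME]->d_un.d_val);
  }
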
Florian Weimer
aa1bf89039 elf: Split _dl_lookup_map, _dl_map_new_object from _dl_map_object
So that they can eventually be called separately from dlopen.
2025-02-02 20:10:08 +01:00
Sergey Bugaev
a7aad6e2b7 hurd: Use the new __proc_reauthenticate_complete protocol
2025-02-01 18:20:42 +01:00
Florian Weimer
96429bcc91 elf: Do not add a copy of _dl_find_object to libc.so
This reduces code size and dependencies on ld.so internals from
libc.so.

Fixes commit f4c142bb9f
("arm: Use _dl_find_object on __gnu_Unwind_Find_exidx (BZ 31405)").

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-02-01 12:37:58 +01:00
gfleury
cf51d18b9d htl: move pthread_setcancelstate into libc.
sysdeps/pthread/sem_open.c: call pthread_setcancelstate directly,
since the forward declaration is gone on hurd too.
Message-ID: <20250201080202.494671-1-gfleury@disroot.org>
2025-02-01 11:24:14 +01:00
Adhemerval Zanella
04588633cf math: Fix sinhf for some inputs (BZ 32627)
The logic was copied wrong from CORE-MATH.
2025-01-31 13:05:41 -03:00
Adhemerval Zanella
c79277a167 math: Fix log10p1f internal table value (BZ 32626)
It was copied wrong from CORE-MATH.
2025-01-31 13:05:41 -03:00
Adhemerval Zanella
22a11aa1c3 sh: Fix tst-guard1 build
The test uses ARCH_MIN_GUARD_SIZE, and the sysdep.h include is not
required.
2025-01-31 09:34:36 -03:00
Petr Malat
4c43173eba ld.so: Decorate BSS mappings
Decorate BSS mappings with [anon: glibc: .bss <file>], for example
[anon: glibc: .bss /lib/libc.so.6]. The string ".bss" is already used
by bionic so use the same, but add the filename as well. If the name
would be longer than what the kernel allows, drop the directory part
of the path.

Refactor the glibc.mem.decorate_maps check into a separate function and
use it to avoid assembling a name that would not be used later.

Signed-off-by: Petr Malat <oss@malat.biz>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-01-30 10:16:37 -03:00
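
The decoration relies on Linux's PR_SET_VMA_ANON_NAME prctl, which
labels an anonymous mapping so it shows up in /proc/<pid>/maps.  A
best-effort sketch (illustrative; the function name is hypothetical):

  #include <stddef.h>
  #include <sys/prctl.h>

  #ifndef PR_SET_VMA                /* uapi values, for older headers.  */
  # define PR_SET_VMA 0x53564d41
  # define PR_SET_VMA_ANON_NAME 0
  #endif

  static void
  name_mapping (void *start, size_t len, const char *name)
  {
    /* Best effort: kernels built without CONFIG_ANON_VMA_NAME fail
       with EINVAL, and the decoration is simply skipped.  */
    (void) prctl (PR_SET_VMA, PR_SET_VMA_ANON_NAME,
                  (unsigned long) start, len, (unsigned long) name);
  }
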
Adhemerval Zanella
a6fbe36b7f nptl: Add support for setup guard pages with MADV_GUARD_INSTALL
Linux 6.13 (662df3e5c3766) added a lightweight way to define guard areas
through the madvise syscall.  Instead of mapping the guard region
PROT_NONE through mprotect, userland can madvise the same area with a
special flag, and the kernel ensures that accessing the area will
trigger a SIGSEGV (as for a PROT_NONE mapping).

The madvise approach has the advantage of lower kernel memory
consumption for the process page table (one less VMA per guard area)
and slightly less kernel contention (also due to fewer VMAs being
tracked).

pthread_create allocates a new thread stack in one of two ways: if a
guard area is set (the default), it allocates the required memory range
with PROT_NONE and then mprotects the usable stack area.  Otherwise, if
no guard page is set, it allocates the region with the required flags.

For the MADV_GUARD_INSTALL support, the stack area region is allocated
with required flags and then the guard region is installed.  If the
kernel does not support it, the usual way is used instead (and
MADV_GUARD_INSTALL is disabled for future stack creations).

The stack allocation strategy is recorded in the pthread struct, and it
is used in case the guard region needs to be resized.  To avoid needing
an extra field, 'user_stack' is repurposed and renamed to 'stack_mode'.

This patch also adds a proper test for the pthread guard.

I checked on x86_64, aarch64, powerpc64le, and hppa with kernel 6.13.0-rc7.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-01-30 10:16:37 -03:00
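
A condensed sketch of the strategy this commit describes (illustrative,
not the nptl code): allocate the whole range read-write, try the
madvise-based guard first, and fall back to the classic PROT_NONE
scheme on kernels older than 6.13.

  #include <stddef.h>
  #include <sys/mman.h>

  #ifndef MADV_GUARD_INSTALL
  # define MADV_GUARD_INSTALL 102   /* Linux 6.13 uapi value.  */
  #endif

  static void *
  allocate_stack_with_guard (size_t size, size_t guardsize)
  {
    void *mem = mmap (NULL, size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
    if (mem == MAP_FAILED)
      return NULL;

    /* Guard at the low end; the usable stack sits above it.  */
    if (madvise (mem, guardsize, MADV_GUARD_INSTALL) != 0)
      {
        /* Older kernel: fall back to a PROT_NONE guard mapping.  */
        if (mprotect (mem, guardsize, PROT_NONE) != 0)
          {
            munmap (mem, size);
            return NULL;
          }
      }
    return mem;
  }
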
gfleury
9a31eb64db htl: move pthread_setcanceltype into libc.
Message-ID: <20250103103750.870897-7-gfleury@disroot.org>
2025-01-29 02:32:36 +01:00
gfleury
265c5991af htl: move pthread_mutex_consistent, pthread_mutex_consistent_np into libc.
Message-ID: <20250103103750.870897-6-gfleury@disroot.org>
2025-01-29 02:32:36 +01:00
gfleury
8bfabe7a92 htl: move pthread_mutex_destroy into libc.
Message-ID: <20250103103750.870897-5-gfleury@disroot.org>
2025-01-29 02:32:36 +01:00
gfleury
be9f0e7681 htl: move pthread_mutex_getprioceiling, pthread_mutex_setprioceiling into libc
Message-ID: <20250103103750.870897-4-gfleury@disroot.org>
2025-01-29 02:32:36 +01:00
gfleury
2ebc2d8e24 htl: move pthread_mutex_{lock, unlock, trylock, timedlock, clocklock}
I haven't exposed _pthread_mutex_lock, _pthread_mutex_trylock and
_pthread_mutex_unlock in GLIBC_PRIVATE since they aren't used in any
code in libpthread.
Message-ID: <20250103103750.870897-3-gfleury@disroot.org>
2025-01-29 02:32:36 +01:00
gfleury
e892a93073 htl: move pthread_mutex_init into libc.
Message-ID: <20250103103750.870897-2-gfleury@disroot.org>
2025-01-29 02:32:36 +01:00
gfleury
56b25bfd60 htl: remove leftover for pthread_mutexattr_settype
from b386295727
2025-01-29 02:32:36 +01:00
Martin Coufal
45c42b65c2 Add new tests for fopen
Add some basic tests for fopen, testing different modes, stream
positioning, and concurrent read/write operations on files.
Reviewed-by: DJ Delorie <dj@redhat.com>
2025-01-28 12:50:50 -05:00
Siddhesh Poyarekar
68ee0f704c Fix underallocation of abort_msg_s struct (CVE-2025-0395)
Include the space needed to store the length of the message itself, in
addition to the message string.  This resolves BZ #32582.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-01-22 08:17:17 -05:00
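
The bug in miniature (a sketch; abort_msg_s is the real struct, while
ALIGN_UP, pagesize, len, and total are simplified stand-ins for the
glibc internals the commit touches): the mapping for the message must
cover the header that stores the length, not just the string.

  struct abort_msg_s
  {
    unsigned int size;   /* Length of the message that follows.  */
    char msg[];
  };

  /* Before (underallocated): only the string was counted.  */
  total = ALIGN_UP (len + 1, pagesize);

  /* After: the header that stores the length is counted too.  */
  total = ALIGN_UP (sizeof (struct abort_msg_s) + len + 1, pagesize);
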
Yury Khrustalev
50eaf54883 aarch64: Add HWCAP_GCS
Use the upper 32 bits of HWCAP.

Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
2025-01-21 11:45:14 +00:00
Florian Weimer
b3a6bd625c Linux: Do not check unused bytes after sched_getattr in tst-sched_setattr
Linux 6.13 was released with a change that overwrites those bytes.
This means that the check_unused subtest fails.

Update the manual accordingly.

Tested-by: Xi Ruoyao <xry111@xry111.site>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-01-20 15:20:57 +01:00