mirror of https://sourceware.org/git/glibc.git synced 2025-08-08 17:42:12 +03:00
Commit Graph

16972 Commits

Author SHA1 Message Date
H. Peter Anvin (Intel)
edf7328db2 termios: make __tcsetattr() the internal interface
There is a prototype for an internal __tcsetattr() function in
include/termios.h, but tcsetattr without __ was still declared as the
actual function.

Make this match the comment and make __tcsetattr() an internal
interface. This will be required to version struct termios for Linux on
MIPS and SPARC.
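
A minimal sketch of the usual glibc internal-interface pattern this
implies (macro usage assumed, not taken from the patch):

  #include <termios.h>

  /* The implementation now defines the double-underscore name...  */
  int
  __tcsetattr (int fd, int optional_actions,
               const struct termios *termios_p)
  {
    /* ... validate arguments and set the terminal attributes ... */
    return 0;
  }
  libc_hidden_def (__tcsetattr)

  /* ... and the public name is an alias, so internal callers can use
     __tcsetattr without going through the PLT.  */
  weak_alias (__tcsetattr, tcsetattr)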

Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-06-17 09:11:38 -03:00
Carlos O'Donell
15808c77b3 ppc64le: Revert "powerpc: Optimized strcmp for power10" (CVE-2025-5702)
This reverts commit 3367d8e180

Reason for revert: Power10 strcmp clobbers non-volatile vector
registers (Bug 33056)

Tested on ppc64le without regression.
2025-06-16 18:02:58 -04:00
Carlos O'Donell
a7877bb668 ppc64le: Revert "powerpc : Add optimized memchr for POWER10" (Bug 33059)
This reverts commit b9182c793c

Reason for revert: Power10 memchr clobbers v20 vector register
(Bug 33059)

This is not a security issue, unlike CVE-2025-5745 and
CVE-2025-5702.

Tested on ppc64le without regression.
2025-06-16 18:02:58 -04:00
Carlos O'Donell
c22de63588 ppc64le: Revert "powerpc: Fix performance issues of strcmp power10" (CVE-2025-5702)
This reverts commit 90bcc8721e

This change is in the chain of the final revert that fixes the CVE,
i.e. 3367d8e180

Reason for revert: Power10 strcmp clobbers non-volatile vector
registers (Bug 33056)

Tested on ppc64le with no regressions.
2025-06-16 18:02:58 -04:00
Carlos O'Donell
63c60101ce ppc64le: Revert "powerpc: Optimized strncmp for power10" (CVE-2025-5745)
This reverts commit 23f0d81608

Reason for revert: Power10 strncmp clobbers non-volatile vector
registers (Bug 33060)

Tested on ppc64le with no regressions.
2025-06-16 18:02:58 -04:00
H.J. Lu
81467d4b61 elf: Add optimization barrier for __ehdr_start and _end
rtld.c has

extern const ElfW(Ehdr) __ehdr_start attribute_hidden;
...
  _dl_rtld_map.l_map_start = (ElfW(Addr)) &__ehdr_start;
  _dl_rtld_map.l_map_end = (ElfW(Addr)) _end;

As

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=120653

shows, the compiler may generate a run-time relocation on __ehdr_start with

	movq	.LC0(%rip), %xmm0
...
	.section	.data.rel.ro.local,"aw"
	.align 8
.LC0:
	.quad	__ehdr_start

This won't work before run-time relocation is finished in rtld.c.  Add
an optimization barrier to prevent run-time relocations against
__ehdr_start and _end.
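
A sketch of such a barrier: an empty asm forces the address through a
register, so the compiler cannot materialize it from a relocated
constant-pool entry (illustrative, not necessarily the exact fix):

  const ElfW(Ehdr) *ehdr = &__ehdr_start;
  /* Empty asm: the address must be computed into a register, and the
     compiler may not assume anything about the value afterwards.  */
  __asm__ ("" : "+r" (ehdr));
  _dl_rtld_map.l_map_start = (ElfW(Addr)) ehdr;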

Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Sam James <sam@gentoo.org>
2025-06-16 08:43:40 +08:00
gfleury
27360ab9ea htl: move pthread_key_*, pthread_get/setspecific
Signed-off-by: gfleury <gfleury@disroot.org>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Message-ID: <20250613184440.1660335-1-gfleury@disroot.org>
2025-06-15 21:21:12 +02:00
Mark Harris
8af8beb1c4 riscv: Correct __riscv_hwprobe function prototype [BZ #32932]
The third argument to __riscv_hwprobe is the size in bytes of the
cpu bitmask pointed to by the fourth argument, however in the access
attribute (read_only, 4, 3) it is used as an element count (i.e., the
number of unsigned longs that make up the bitmask), resulting in a
false compiler warning:

$ gcc -c hwprobe1.c
hwprobe1.c: In function 'main':
hwprobe1.c:15:11: warning: '__riscv_hwprobe' reading 1024 bytes from a region of size 128 [-Wstringop-overread]
   15 |     ret = __riscv_hwprobe (pairs, 1, sizeof(cpus), cpus, 0);
      |           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
hwprobe1.c:9:23: note: source object 'cpus' of size 128
    9 |     unsigned long int cpus[16];
      |                       ^~~~
In file included from hwprobe1.c:1:
/usr/include/riscv64-linux-gnu/sys/hwprobe.h:66:12: note: in a call to function '__riscv_hwprobe' declared with attribute 'access (read_only, 4, 3)'
   66 | extern int __riscv_hwprobe (struct riscv_hwprobe *__pairs, size_t __pair_count,
      |            ^~~~~~~~~~~~~~~
$

The documentation (https://docs.kernel.org/arch/riscv/hwprobe.html)
claims that the cpu bitmask has the type cpu_set_t *, which would be
consistent with other functions that take a cpu bitmask such as
sched_setaffinity and sched_getaffinity.  It also uses the name
cpusetsize for the third argument, which is much more accurate than
cpu_count since it is a size in bytes and not a cpu count.  The
(read_only, 4, 3) access attribute in the glibc prototype claims
that the cpu bitmask is only read, however when flags is
RISCV_HWPROBE_WHICH_CPUS it is both read and written.

Therefore, in the glibc prototype the type of the fourth argument is
changed to cpu_set_t * to match the documentation, the name of the
third argument is changed to cpusetsize as in the documentation, and the
incorrect access attribute that applies to these arguments is removed.
Almost all existing callers pass a null pointer for the fourth
argument, however a transparent union is introduced for compatibility
with callers that cast a pointer to the old argument type, and a
macro is introduced allowing callers the ability to distinguish
between the old and new prototype when needed.
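
A hypothetical shape for that compatibility union (type and member
names assumed here):

  #include <sched.h>
  #include <stddef.h>

  /* Transparent union: callers may pass either cpu_set_t * (the
     documented type) or unsigned long int * (the old glibc type).  */
  typedef union
    {
      cpu_set_t *cpuset;
      unsigned long int *ulongs;
    } riscv_hwprobe_cpus_arg_t
    __attribute__ ((__transparent_union__));

  extern int __riscv_hwprobe (struct riscv_hwprobe *pairs,
                              size_t pair_count, size_t cpusetsize,
                              riscv_hwprobe_cpus_arg_t cpus,
                              unsigned int flags) __THROW;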

The access attributes are being specified with __fortified_attr_access,
however this macro is for fortified functions; the regular
__attr_access macro is for non-fortified functions such as this one.
Using the incorrect macro results in no access checks at fortify level
3, because it is assumed that the fortified function will be doing the
checking.  It is changed to use the correct macro so that the access
checks will work regardless of fortify level.

Also because __riscv_hwprobe is not a cancellation point, __THROW
is added, consistent with similar functions.  (However, it is omitted
from the typedef because GCC does not accept it there.)

The __wur (warn_unused_result) attribute is helpful for functions that
cannot be used safely without checking the result, however code such
as the following does not require the result to be checked and should
not produce a warning:
    struct riscv_hwprobe pair = { RISCV_HWPROBE_KEY_IMA_EXT_0, 0 };
    __riscv_hwprobe (&pair, 1, 0, NULL, 0);
    if (pair.value & RISCV_HWPROBE_EXT_ZBB) ...
Therefore this attribute is omitted.

The comment claiming that the second argument to the ifunc selector
is a pointer to the vDSO function is corrected.  It is a pointer to
the regular glibc function (which returns errors as positive values),
not the vDSO function (which returns errors as negative values).

Fixes commit 426d0e1aa8 ("riscv: Add
Linux hwprobe syscall support").

Fixes: BZ #32932
Signed-off-by: Mark Harris <mark.hsj@gmail.com>
Reviewed-by: Palmer Dabbelt <palmer@dabbelt.com>
Acked-by: Palmer Dabbelt <palmer@dabbelt.com>
2025-06-13 11:25:12 -03:00
Yury Khrustalev
b15ed85c86 aarch64: fix typo in sysdeps/aarch64/Makefile 2025-06-10 10:48:07 +01:00
Samuel Thibault
5fdc693d95 hurd: Make __getrandom_early_init call __mach_init
25d37948c9 ("malloc: Improve malloc initialization") moved malloc
initialization earlier, within _dl_sysdep_start's call to dl_main, before
__mach_init is called by _dl_init_first.  But malloc initialization uses
getrandom, which needs to make RPCs.

This adds __getrandom_early_init on hurd to express that getrandom needs
__mach_init too.  It also adds a guard to avoid creating the task and
host ports several times.
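
A sketch of what that can look like (only __getrandom_early_init and
__mach_init are real names here; the guard variable is assumed):

  static int mach_ports_initialized;

  void
  __getrandom_early_init (_Bool initial)
  {
    /* getrandom performs RPCs, so the Mach task/host ports must
       exist; initialize them at most once.  */
    if (!mach_ports_initialized)
      {
        __mach_init ();
        mach_ports_initialized = 1;
      }
  }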

Fixes: 25d37948c9 ("malloc: Improve malloc initialization")
Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
2025-06-09 08:34:06 +00:00
H.J. Lu
0a027674a1 x86: Avoid GLRO(dl_x86_cpu_features)
In init_cpu_features, replace GLRO(dl_x86_cpu_features) with
cpu_features to avoid an extra load.

Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-06-09 13:03:13 +08:00
Wilco Dijkstra
09795c5612 AArch64: Fix build error with GCC 12.1/12.2
Early versions of GCC 12 didn't support -mtune=neoverse-v2, so use
-mtune=neoverse-v1 instead.

Reported-by: Yury Khrustalev <yury.khrustalev@arm.com>
2025-06-06 13:22:27 +00:00
Maciej W. Rozycki
7a751ce39c Linux: Drop obsolete kernel support with 'if_nameindex' and 'if_nametoindex'
Support for the SIOCGIFINDEX ioctl(2) Linux ABI (0x8933 command, called
SIOGIFINDEX in the API originally) was added with kernel version 2.1.14
for AF_INET6 sockets, followed by general support with version 2.1.22.
The Linux API was then updated by adding the current SIOCGIFINDEX name
with kernel version 2.1.68, back in Nov 1997.

All these kernel versions are well below our current default required
minimum of 3.2.0, let alone some platform higher version requirements.

Drop support for the absence of the SIOCGIFINDEX ioctl(2) in the API or
ABI, by removing arrangements for the ENOSYS error condition.  Discard
the indirection from '__if_nameindex' to 'if_nameindex_netlink' and
adjust the implementation of '__if_nametoindex' accordingly for a better
code flow.
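
For reference, the ioctl that can now be assumed to be available
everywhere (standard usage, not code from the patch):

  #include <net/if.h>
  #include <string.h>
  #include <sys/ioctl.h>

  /* Resolve an interface name to its index; fd is any open socket.
     With SIOCGIFINDEX always present, no ENOSYS fallback is needed.  */
  static unsigned int
  name_to_index (int fd, const char *name)
  {
    struct ifreq ifr;
    strncpy (ifr.ifr_name, name, IFNAMSIZ - 1);
    ifr.ifr_name[IFNAMSIZ - 1] = '\0';
    if (ioctl (fd, SIOCGIFINDEX, &ifr) < 0)
      return 0;   /* if_nametoindex reports errors by returning 0.  */
    return ifr.ifr_ifindex;
  }
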
2025-06-05 19:04:46 +01:00
Yury Khrustalev
fcd6a8b5c5 aarch64: add __ifunc_hwcap function to be used in ifunc resolvers
Add a new helper function __ifunc_hwcap() as a portable way to
access HWCAP elements via the parameters passed to an ifunc
resolver: it checks the _IFUNC_ARG_HWCAP bit in the first parameter
and the size of the buffer in the second parameter.

Note that 0 is returned when the requested element is not available
or does not correspond to a valid AT_HWCAP{,2,...} value.

Also add relevant tests.
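
A hypothetical resolver using the helper (the element-index convention
of __ifunc_hwcap is assumed here; HWCAP_SVE comes from <sys/auxv.h>):

  #include <stdint.h>
  #include <sys/auxv.h>
  #include <sys/ifunc.h>

  static void impl_default (void) { }
  static void impl_sve (void) { }

  /* arg0 and arg1 are the two parameters passed to the resolver;
     element 1 is assumed to select AT_HWCAP.  */
  static __typeof (impl_default) *
  resolve_foo (uint64_t arg0, const uint64_t arg1[])
  {
    if (__ifunc_hwcap (1, arg0, arg1) & HWCAP_SVE)
      return impl_sve;
    return impl_default;
  }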

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-06-05 14:38:51 +01:00
Yury Khrustalev
ea14d04e9a aarch64: add support for hwcap3,4
Add basic support for hwcap3 and hwcap4 in dynamic loader and
ifunc resolvers.

Describe the new backward-compatible prototype for GNU indirect
function resolvers that use a pointer to a uint64_t array instead
of a pointer to the __ifunc_arg_t struct.

This patch also adds the macro _IFUNC_HWCAP_MAX to specify the
current number of hwcap elements.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-06-05 14:38:03 +01:00
Adhemerval Zanella
404526ee2e sparc: Fix argument passing to __libc_start_main (BZ 32981)
sparc start.S does not provide the final argument for
__libc_start_main, which is the highest stack address used to
update __libc_stack_end.

This fixes elf/tst-execstack-prog-static-tunable on sparc64.
On sparcv9 this does not happen because the kernel provides an
auxv value that happens to point into the stack itself.

Checked on sparc64-linux-gnu.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-06-03 09:11:46 -03:00
наб
6945ce4a6f sigaction: don't sign-extend sa_flags
Before:
  rt_sigaction(SIGBUS, {sa_handler=0x55abb9960139, sa_mask=[], sa_flags=SA_RESTORER|SA_RESETHAND|SA_SIGINFO|0xffffffff00000000, sa_restorer=0x7fb1b2a82050}, NULL, 8) = 0

After:
  rt_sigaction(SIGBUS, {sa_handler=0x7f6a70dce139, sa_mask=[], sa_flags=SA_RESTORER|SA_RESETHAND|SA_SIGINFO, sa_restorer=0x7f6a70c28f60}, NULL, 8) = 0
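
The underlying C issue, reduced to a minimal illustration (not code
from the patch):

  #include <signal.h>

  /* SA_RESETHAND is 0x80000000 on Linux, so an int holding it is
     negative.  */
  int flags = SA_RESETHAND | SA_SIGINFO;

  unsigned long bad  = flags;                 /* sign-extends: high 32 bits set */
  unsigned long good = (unsigned int) flags;  /* zero-extends, as intended */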

Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-06-03 10:53:12 +02:00
Adhemerval Zanella
7c00a20397 math: Remove i386 ilogb/ilogbf/llogb/llogbf
The new float and double implementations do not require an extra
function call, and error handling uses the math_err function, which
results in better performance on i386 as well.

With gcc-14 on an AMD Ryzen 9 5900X, master shows:

$ ./benchtests/bench-ilogb
  "ilogb": {
   "subnormal": {
    "duration": 3.68863e+09,
    "iterations": 1.72228e+08,
    "max": 89.2995,
    "min": 21.016,
    "mean": 21.4171
   },
   "normal": {
    "duration": 3.68878e+09,
    "iterations": 1.72948e+08,
    "max": 78.6065,
    "min": 21.127,
    "mean": 21.3288
   }
  }
$ ./benchtests/bench-ilogbf
  "ilogbf": {
   "subnormal": {
    "duration": 3.68835e+09,
    "iterations": 1.66716e+08,
    "max": 46.953,
    "min": 21.793,
    "mean": 22.1236
   },
   "normal": {
    "duration": 3.68784e+09,
    "iterations": 1.66168e+08,
    "max": 46.9715,
    "min": 21.904,
    "mean": 22.1935
   }
  }

While with this patch:

$ ./benchtests/bench-ilogb
  "ilogb": {
   "subnormal": {
    "duration": 3.68134e+09,
    "iterations": 4.17516e+08,
    "max": 32.5045,
    "min": 8.3245,
    "mean": 8.81723
   },
   "normal": {
    "duration": 3.6677e+09,
    "iterations": 6.79468e+08,
    "max": 50.9305,
    "min": 5.3465,
    "mean": 5.3979
   }
}
$ ./benchtests/bench-ilogbf
  "ilogbf": {
   "subnormal": {
    "duration": 3.67553e+09,
    "iterations": 5.11032e+08,
    "max": 35.927,
    "min": 7.0485,
    "mean": 7.19237
   },
   "normal": {
    "duration": 3.66877e+09,
    "iterations": 6.556e+08,
    "max": 26.3625,
    "min": 5.5315,
    "mean": 5.59605
   }
 }

Checked on i686-linux-gnu.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-06-02 13:32:19 -03:00
Adhemerval Zanella
39775f00b1 math: Optimize float ilogb/llogb
It removes the wrapper by moving the error/EDOM handling to an
out-of-line implementation (__math_invalidf_i/__math_invalidf_li).
Also, __glibc_unlikely is used on the error paths, since it helps
code generation on recent gcc.

With gcc-14 on aarch64, the code now builds to:

0000000000000000 <__ilogbf>:
   0:   1e260000        fmov    w0, s0
   4:   d3577801        ubfx    x1, x0, #23, #8
   8:   340000e1        cbz     w1, 24 <__ilogbf+0x24>
   c:   5101fc20        sub     w0, w1, #0x7f
  10:   7103fc3f        cmp     w1, #0xff
  14:   54000040        b.eq    1c <__ilogbf+0x1c>  // b.none
  18:   d65f03c0        ret
  1c:   12b00000        mov     w0, #0x7fffffff                 // #2147483647
  20:   14000000        b       0 <__math_invalidf_i>
  24:   53175800        lsl     w0, w0, #9
  28:   340000a0        cbz     w0, 3c <__ilogbf+0x3c>
  2c:   5ac01000        clz     w0, w0
  30:   12800fc1        mov     w1, #0xffffff81                 // #-127
  34:   4b000020        sub     w0, w1, w0
  38:   d65f03c0        ret
  3c:   320107e0        mov     w0, #0x80000001                 // #-2147483647
  40:   14000000        b       0 <__math_invalidf_i>

Some ABIs require additional adjustments:

  * i386 and m68k require the template version, since both
    provide __ieee754_ilogb implementations.

  * loongarch uses a custom implementation as well.

  * powerpc64le also has a custom implementation for POWER9, which
    is also used for the float and float128 versions.  The generic
    e_ilogb.c implementation is moved to powerpc to keep the
    current code as-is.

Checked on aarch64-linux-gnu and x86_64-linux-gnu.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-06-02 13:32:19 -03:00
Adhemerval Zanella
afe09d44f3 math: Remove UB and optimize float ilogbf
The subnormal exponent calculation invokes UB by left shifting the
signed exponent to find the first leading bit.

The patch reimplements ilogb using the math_config.h macros and
uses the new stdbit.h function to simplify the subnormal handling.
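
The subnormal path now has this shape (a sketch of the idea matching
the generated code below, not the exact glibc source):

  #include <stdbit.h>
  #include <stdint.h>

  /* For a subnormal float, shift out the sign and (all-zero) exponent
     bits, then count leading zeros on an unsigned value - no UB.  */
  static int
  subnormal_ilogbf (uint32_t bits)
  {
    return -127 - (int) stdc_leading_zeros_ui (bits << 9);
  }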

On aarch64 it generates better code:

* master:

0000000000000000 <__ieee754_ilogbf>:
   0:   1e260000        fmov    w0, s0
   4:   12007801        and     w1, w0, #0x7fffffff
   8:   72091c1f        tst     w0, #0x7f800000
   c:   54000141        b.ne    34 <__ieee754_ilogbf+0x34>  // b.any
  10:   34000201        cbz     w1, 50 <__ieee754_ilogbf+0x50>
  14:   53185c21        lsl     w1, w1, #8
  18:   12800fa0        mov     w0, #0xffffff82                 // #-126
  1c:   d503201f        nop
  20:   531f7821        lsl     w1, w1, #1
  24:   51000400        sub     w0, w0, #0x1
  28:   7100003f        cmp     w1, #0x0
  2c:   54ffffac        b.gt    20 <__ieee754_ilogbf+0x20>
  30:   d65f03c0        ret
  34:   13177c20        asr     w0, w1, #23
  38:   12b01002        mov     w2, #0x7f7fffff                 // #2139095039
  3c:   5101fc00        sub     w0, w0, #0x7f
  40:   6b02003f        cmp     w1, w2
  44:   12b00001        mov     w1, #0x7fffffff                 // #2147483647
  48:   1a819000        csel    w0, w0, w1, ls  // ls = plast
  4c:   d65f03c0        ret
  50:   320107e0        mov     w0, #0x80000001                 // #-2147483647
  54:   d65f03c0        ret

* patch:

0000000000000000 <__ieee754_ilogbf>:
   0:   1e260001        fmov    w1, s0
   4:   d3577820        ubfx    x0, x1, #23, #8
   8:   350000e0        cbnz    w0, 24 <__ieee754_ilogbf+0x24>
   c:   53175821        lsl     w1, w1, #9
  10:   34000141        cbz     w1, 38 <__ieee754_ilogbf+0x38>
  14:   5ac01021        clz     w1, w1
  18:   12800fc0        mov     w0, #0xffffff81                 // #-127
  1c:   4b010000        sub     w0, w0, w1
  20:   d65f03c0        ret
  24:   7103fc1f        cmp     w0, #0xff
  28:   5101fc00        sub     w0, w0, #0x7f
  2c:   12b00001        mov     w1, #0x7fffffff                 // #2147483647
  30:   1a811000        csel    w0, w0, w1, ne  // ne = any
  34:   d65f03c0        ret
  38:   320107e0        mov     w0, #0x80000001                 // #-2147483647
  3c:   d65f03c0        ret

Other architectures with support for stdc_leading_zeros and/or
__builtin_clzll should see similar improvements.

Checked on aarch64-linux-gnu and x86_64-linux-gnu.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-06-02 13:32:19 -03:00
Adhemerval Zanella
c4be334400 math: Optimize double ilogb/llogb
It removes the wrapper by moving the error/EDOM handling to an
out-of-line implementation (__math_invalid_i/__math_invalid_li).
Also, __glibc_unlikely is used on the error paths, since it helps
code generation on recent gcc.

With gcc-14 on aarch64, the code now builds to:

0000000000000000 <__ilogb>:
   0:   9e660000        fmov    x0, d0
   4:   d374f801        ubfx    x1, x0, #52, #11
   8:   340000e1        cbz     w1, 24 <__ilogb+0x24>
   c:   510ffc20        sub     w0, w1, #0x3ff
  10:   711ffc3f        cmp     w1, #0x7ff
  14:   54000040        b.eq    1c <__ilogb+0x1c>  // b.none
  18:   d65f03c0        ret
  1c:   12b00000        mov     w0, #0x7fffffff                 // #2147483647
  20:   14000000        b       0 <__math_invalid_i>
  24:   d374cc00        lsl     x0, x0, #12
  28:   b40000a0        cbz     x0, 3c <__ilogb+0x3c>
  2c:   dac01000        clz     x0, x0
  30:   12807fc1        mov     w1, #0xfffffc01                 // #-1023
  34:   4b000020        sub     w0, w1, w0
  38:   d65f03c0        ret
  3c:   320107e0        mov     w0, #0x80000001                 // #-2147483647
  40:   14000000        b       0 <__math_invalid_i>

Some ABIs require additional adjustments:

  * i386 and m68k require the template version, since both
    provide __ieee754_ilogb implementations.

  * loongarch uses a custom implementation as well.

  * powerpc64le also has a custom implementation for POWER9, which
    is also used for the float and float128 versions.  The generic
    e_ilogb.c implementation is moved to powerpc to keep the
    current code as-is.

Checked on aarch64-linux-gnu and x86_64-linux-gnu.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-06-02 13:32:19 -03:00
Adhemerval Zanella
eb1e9194fa math: Remove UB and optimize double ilogb
The subnormal exponent calculation invokes UB by left shifting the
signed exponent to find the first leading bit.  The implementation
also uses 32-bit operations, which generate suboptimal code on
64-bit architectures.

The patch reimplements ilogb using the math_config.h macros and
uses the new stdbit function to simplify the subnormal handling.

On aarch64 it generates better code:

* master:

0000000000000000 <__ieee754_ilogb>:
   0:   9e660000        fmov    x0, d0
   4:   d360fc02        lsr     x2, x0, #32
   8:   d360f801        ubfx    x1, x0, #32, #31
   c:   f26c285f        tst     x2, #0x7ff00000
  10:   540001a1        b.ne    44 <__ieee754_ilogb+0x44>  // b.any
  14:   2a000022        orr     w2, w1, w0
  18:   34000322        cbz     w2, 7c <__ieee754_ilogb+0x7c>
  1c:   35000221        cbnz    w1, 60 <__ieee754_ilogb+0x60>
  20:   2a0003e1        mov     w1, w0
  24:   7100001f        cmp     w0, #0x0
  28:   12808240        mov     w0, #0xfffffbed                 // #-1043
  2c:   540000ad        b.le    40 <__ieee754_ilogb+0x40>
  30:   531f7821        lsl     w1, w1, #1
  34:   51000400        sub     w0, w0, #0x1
  38:   7100003f        cmp     w1, #0x0
  3c:   54ffffac        b.gt    30 <__ieee754_ilogb+0x30>
  40:   d65f03c0        ret
  44:   13147c20        asr     w0, w1, #20
  48:   12b00202        mov     w2, #0x7fefffff                 // #2146435071
  4c:   510ffc00        sub     w0, w0, #0x3ff
  50:   6b02003f        cmp     w1, w2
  54:   12b00001        mov     w1, #0x7fffffff                 // #2147483647
  58:   1a819000        csel    w0, w0, w1, ls  // ls = plast
  5c:   d65f03c0        ret
  60:   53155021        lsl     w1, w1, #11
  64:   12807fa0        mov     w0, #0xfffffc02                 // #-1022
  68:   531f7821        lsl     w1, w1, #1
  6c:   51000400        sub     w0, w0, #0x1
  70:   7100003f        cmp     w1, #0x0
  74:   54ffffac        b.gt    68 <__ieee754_ilogb+0x68>
  78:   d65f03c0        ret
  7c:   320107e0        mov     w0, #0x80000001                 // #-2147483647
  80:   d65f03c0        ret

* patch:

0000000000000000 <__ieee754_ilogb>:
   0:   9e660001        fmov    x1, d0
   4:   d374f820        ubfx    x0, x1, #52, #11
   8:   350000e0        cbnz    w0, 24 <__ieee754_ilogb+0x24>
   c:   d374cc21        lsl     x1, x1, #12
  10:   b4000141        cbz     x1, 38 <__ieee754_ilogb+0x38>
  14:   dac01021        clz     x1, x1
  18:   12807fc0        mov     w0, #0xfffffc01                 // #-1023
  1c:   4b010000        sub     w0, w0, w1
  20:   d65f03c0        ret
  24:   711ffc1f        cmp     w0, #0x7ff
  28:   510ffc00        sub     w0, w0, #0x3ff
  2c:   12b00001        mov     w1, #0x7fffffff                 // #2147483647
  30:   1a811000        csel    w0, w0, w1, ne  // ne = any
  34:   d65f03c0        ret
  38:   320107e0        mov     w0, #0x80000001                 // #-2147483647
  3c:   d65f03c0        ret

Other architectures with support for stdc_leading_zeros and/or
__builtin_clzll should see similar improvements.

Checked on aarch64-linux-gnu and x86_64-linux-gnu.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-06-02 13:32:19 -03:00
Joseph Myers
eaf88c1025 Update syscall lists for Linux 6.15
Linux 6.15 adds the new syscall open_tree_attr.  Update
syscall-names.list and regenerate the arch-syscall.h headers with
build-many-glibcs.py update-syscalls.

Tested with build-many-glibcs.py.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-05-29 19:21:46 +00:00
Wilco Dijkstra
aa18367c11 AArch64: Improve enabling of SVE for libmvec
When using a -mcpu option in CFLAGS, GCC can report errors when
building libmvec.  Fix this by overriding both -mcpu and -march with
a generic variant with SVE added.  Also tune for a modern SVE core.

Reviewed-by: Yury Khrustalev <yury.khrustalev@arm.com>
2025-05-29 16:58:49 +00:00
Luna Lamb
da196e6134 AArch64: Improve codegen in SVE log1p
Improve memory access and reformat the evaluation scheme to pack
coefficients, giving a 5% throughput improvement in the
microbenchmark on Neoverse V1.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-05-29 15:25:35 +00:00
Yury Khrustalev
22419a2b60 linux: use PKEY_UNRESTRICTED macro in tst-pkey
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-05-28 10:59:26 +01:00
Yury Khrustalev
01bb997ef5 misc: add PKEY_UNRESTRICTED macro
A corresponding macro has been added to Linux UAPI headers in 6.15.
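
Typical use (an illustration; PKEY_UNRESTRICTED simply names the
"no restrictions" access-rights value that callers previously spelled
as a literal 0):

  #define _GNU_SOURCE
  #include <sys/mman.h>

  /* Allocate a protection key that restricts no access.  */
  int key = pkey_alloc (0, PKEY_UNRESTRICTED);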

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-05-28 10:58:40 +01:00
Florian Weimer
27cc947dce generic: Add missing parameter name to __getrandom_early_init
This is required after commit 03da41d47d
("Turn on -Wmissing-parameter-name by default if available").

Reviewed-by: Sam James <sam@gentoo.org>
2025-05-28 10:00:41 +02:00
Stefan Liebler
319f94dea2 S390: Use cfi_val_offset instead of cfi_escape. 31bit part
Due to raising the minimum binutils version to >= 2.28, the
cfi_escape previously used in place of cfi_val_offset can now be
omitted.

Commit 0fc76d8762 has already adjusted the 64-bit part of mcount;
this patch also adjusts the 31-bit part.

Checked with "objdump -WF" / "objdump -Wf" that the previous
cfi_escape and the new cfi_val_offset are equal.
2025-05-23 15:05:56 +02:00
Andreas Schwab
d3e0f63fb9 ldbl-128: also disable lgammaf128_r builtin when building lgammal_r 2025-05-21 14:11:50 +02:00
Sunil K Pandey
f2aeb6ff94 x86_64: Fix typo in ifunc-impl-list.c.
Fix wcsncpy and wcpncpy typo in ifunc-impl-list.c.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2025-05-20 16:31:52 -07:00
Wilco Dijkstra
2071666d03 AArch64: Fix typo in math-vector.h
Fix typo atanpi2->atan2pi in math-vector.h.
2025-05-20 13:44:16 +00:00
Andreas Schwab
b1f33b2eeb Fix typos in ldbl-opt makefile
The -fno-builtin options need to disable the long double builtins.
2025-05-20 12:42:54 +02:00
Wilco Dijkstra
b990b0aee2 AArch64: Cleanup SVE config and defines
Now that we finally support modern GCC and binutils, it's time for a
cleanup.  Remove the HAVE_AARCH64_SVE_ASM define and conditional
compilation.  Remove the configure checks for SVE, ACLE and
variant-PCS support.

Reviewed-by: Yury Khrustalev <yury.khrustalev@arm.com>
2025-05-20 10:33:55 +00:00
Wilco Dijkstra
2c421fc430 AArch64: Cleanup PAC and BTI
Now that we finally support modern GCC and binutils, it's time for a cleanup.
Use PAC and BTI instructions unconditionally and use proper assembler syntax.
Remove the PR target/94791 strip_pac workarounds for buggy GCCs.  Remove the
PAC/BTI configure checks - always emit GNU property notes on assembly files.
Change cfi_window_save to the correct cfi_negate_ra_state unwind directive.

Reviewed-by: Matthieu Longo <matthieu.longo@arm.com>
2025-05-19 15:35:32 +00:00
Dylan Fleming
96abd59bf2 AArch64: Implement AdvSIMD and SVE atan2pi/f
Implement double and single precision variants of the C23 routine atan2pi
for both AdvSIMD and SVE.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-05-19 15:35:25 +00:00
Dylan Fleming
edf6202815 AArch64: Implement AdvSIMD and SVE atanpi/f
Implement double and single precision variants of the C23 routine atanpi
for both AdvSIMD and SVE.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-05-19 15:34:40 +00:00
Dylan Fleming
0ef2cf44e7 AArch64: Implement AdvSIMD and SVE asinpi/f
Implement double and single precision variants of the C23 routine asinpi
for both AdvSIMD and SVE.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-05-19 15:33:50 +00:00
Dylan Fleming
993997ca1b AArch64: Implement AdvSIMD and SVE acospi/f
Implement double and single precision variants of the C23 routine acospi
for both AdvSIMD and SVE.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-05-19 15:31:59 +00:00
Dylan Fleming
1e84509e00 AArch64: Optimize inverse trig functions
Improve performance of the inverse trig functions by altering how
coefficients are loaded.

Performance improvement on Neoverse V1:
SVE     acos   14%
AdvSIMD acos   6%

AdvSIMD asin   6%
SVE     asin   5%
AdvSIMD asinf  2%

AdvSIMD atanf  22%
SVE     atanf  20%
SVE     atan   11%
AdvSIMD atan   5%

SVE     atan2  7%
SVE     atan2f 4%
AdvSIMD atan2f 3%
AdvSIMD atan2  2%

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-05-19 14:54:32 +00:00
Florian Weimer
10a66a8e42 Remove <libc-tsd.h>
Use __thread variables directly instead.  The macros do not save any
typing.  It seems unlikely that a future port will lack __thread
variable support.

Some of the __libc_tsd_* variables are referenced from assembler
files, so keep their names.  Previously, <libc-tsd.h> included
<tls.h>, which in turn included <errno.h>, so a few direct includes
of <errno.h> are now required.
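
The replacement pattern, roughly (a sketch; the macro spelling comes
from the removed header, the variable here is illustrative):

  #include <locale.h>

  /* Before, via <libc-tsd.h>:
       __libc_tsd_define (static, locale_t, LOCALE)
     After, a plain TLS variable (names referenced from assembler keep
     the __libc_tsd_ prefix):  */
  static __thread locale_t __libc_tsd_LOCALE;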

Reviewed-by: Frédéric Bérat <fberat@redhat.com>
2025-05-16 19:53:09 +02:00
Andreas Schwab
eb7a681b82 powerpc: Remove check for -mabi=ibmlongdouble
The -mabi=ibmlongdouble option was added in gcc 4.2 and thus can be
assumed to always exist.
2025-05-15 15:54:18 +02:00
Yury Khrustalev
251f932624 aarch64: update tests for SME
Add a test that checks that ZA state is disabled after setjmp and
sigsetjmp.  Update the existing SME test that uses setjmp.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-05-15 14:23:35 +01:00
Yury Khrustalev
a7f6fd976c aarch64: Disable ZA state of SME in setjmp and sigsetjmp
Due to the nature of the ZA state, setjmp() should clear it in the
same manner as longjmp() already does.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2025-05-15 14:23:03 +01:00
Joseph Myers
06caf53adf Implement C23 rootn.
C23 adds various <math.h> function families originally defined in TS
18661-4.  Add the rootn functions, which compute the Yth root of X for
integer Y (with a domain error if Y is 0, even if X is a NaN).  The
integer exponent has type long long int in C23; it was intmax_t in TS
18661-4, and as with other interfaces changed after their initial
appearance in the TS, I don't think we need to support the original
version of the interface.
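
For example, with the semantics described above:

  #include <math.h>

  double a = rootn (8.0, 3);    /* cube root of 8 -> 2.0 */
  double b = rootn (2.0, -1);   /* -1st root of 2 -> 0.5 */
  /* rootn (x, 0) raises a domain error, even when x is a NaN.  */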

As with pown and compoundn, I strongly encourage searching for worst
cases for ulps error for these implementations (necessarily
non-exhaustively, given the size of the input space).  I also expect a
custom implementation for a given format could be much faster as well
as more accurate, although the implementation is simpler than those
for pown and compoundn.

This completes adding to glibc those TS 18661-4 functions (ignoring
DFP) that are included in C23.  See
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118592 regarding the C23
mathematical functions (not just the TS 18661-4 ones) missing built-in
functions in GCC, where such functions might usefully be added.

Tested for x86_64 and x86, and with build-many-glibcs.py.
2025-05-14 10:51:46 +00:00
Stefan Liebler
0fc76d8762 S390: Use cfi_val_offset instead of cfi_escape.
Due to raising the minimum binutils version to >= 2.28, the
cfi_escape previously used in place of cfi_val_offset can now be
omitted.

Checked with "objdump -WF" / "objdump -Wf" that the previous
cfi_escape and the new cfi_val_offset are equal.
2025-05-14 10:35:55 +02:00
Stefan Liebler
4b1ffb828c powerpc64le: Remove configure check for objcopy >= 2.26.
Due to raising the minimum binutils version to >= 2.26, the configure
check for testing support of --update-section is not needed anymore.
Reviewed-by: Peter Bergner <bergner@tenstorrent.com>
2025-05-14 10:35:55 +02:00
Yury Khrustalev
691edbdf77 aarch64: fix unwinding in longjmp
Previously, longjmp() on aarch64 placed CFI directives around the
call to __libc_arm_za_disable() after the CFA had been redefined at
the start of longjmp(), which may result in unwinding issues.  Move
the call and the surrounding CFI directives to the beginning of
longjmp().

Suggested-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
2025-05-13 13:00:57 +01:00
Samuel Thibault
2ae4ec56c2 hurd: Make rename refuse trailing slashes [BZ #32570]
As tested by Gnulib's renameatu module.

Reported by Collin Funk on
https://sourceware.org/bugzilla/show_bug.cgi?id=32570
2025-05-12 01:54:34 +02:00
Joseph Myers
ae31254432 Implement C23 compoundn
C23 adds various <math.h> function families originally defined in TS
18661-4.  Add the compoundn functions, which compute (1+X) to the
power Y for integer Y (and X at least -1).  The integer exponent has
type long long int in C23; it was intmax_t in TS 18661-4, and as with
other interfaces changed after their initial appearance in the TS, I
don't think we need to support the original version of the interface.
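
For example, with the semantics described above:

  #include <math.h>

  /* (1 + 0.01)**12, e.g. twelve periods of 1% compound interest.  */
  double v = compoundn (0.01, 12);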

Note that these functions are "compoundn" with a trailing "n", *not*
"compound" (CORE-MATH has the wrong name, for example).

As with pown, I strongly encourage searching for worst cases for ulps
error for these implementations (necessarily non-exhaustively, given
the size of the input space).  I also expect a custom implementation
for a given format could be much faster as well as more accurate (I
haven't tested or benchmarked the CORE-MATH implementation for
binary32); this is one of the more complicated and less efficient
functions to implement in a type-generic way.

As with exp2m1 and exp10m1, this showed up places where the
powerpc64le IFUNC setup is not as self-contained as one might hope (in
this case, without the changes specific to powerpc64le, there were
undefined references to __GI___expf128).

Tested for x86_64 and x86, and with build-many-glibcs.py.
2025-05-09 15:17:27 +00:00