mirror of https://sourceware.org/git/glibc.git synced 2025-06-25 12:42:04 +03:00
Commit Graph

6670 Commits

Author SHA1 Message Date
ed6a68bac7 debug: Improve '%n' fortify detection (BZ 30932)
The 7bb8045ec0 patch made the '%n' fortify check ignore EMFILE errors
while trying to open /proc/self/maps.  This introduced a security
issue, since EMFILE can be attacker-controlled and thus render the
check ineffective in some cases.

The EMFILE failure is reinstated, but with a different error
message.  Also, to reduce false positives of the hardening in the
cases where no new files can be opened, _dl_readonly_area now uses
_dl_find_object to check whether the memory area is within a writable
ELF segment.  The procfs method is still used as a fallback.

Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: Arjun Shankar <arjun@redhat.com>
2025-03-21 15:46:48 -03:00
090dfa40a5 Add _FORTIFY_SOURCE support for inet_ntop
- Create the __inet_ntop_chk routine that verifies that the builtin size
of the destination buffer is at least as big as the size given by the
user.
- Redirect calls from inet_ntop to __inet_ntop_chk or __inet_ntop_warn
- Update the abilist for this new routine
- Update the manual to mention the new fortification
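
A rough sketch of the general _FORTIFY_SOURCE pattern this follows (with
hypothetical helper names, not glibc's actual headers): an inline wrapper
passes the compiler-known destination size to a checking routine that
aborts when the caller claims more room than the buffer really has.

  #include <arpa/inet.h>
  #include <stdlib.h>

  const char *
  my_inet_ntop_chk (int af, const void *src, char *dst, socklen_t size,
                    size_t dst_objsize)
  {
    if (dst_objsize != (size_t) -1 && size > dst_objsize)
      abort ();                /* destination buffer is provably too small */
    return inet_ntop (af, src, dst, size);
  }

  static inline const char *
  my_inet_ntop (int af, const void *src, char *dst, socklen_t size)
  {
    /* __builtin_object_size yields the known buffer size, or (size_t) -1
       when the compiler cannot determine it.  */
    return my_inet_ntop_chk (af, src, dst, size,
                             __builtin_object_size (dst, 1));
  }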

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-03-21 09:35:42 +01:00
409668f6e8 Implement C23 powr
C23 adds various <math.h> function families originally defined in TS
18661-4.  Add the powr functions, which are like pow, but with simpler
handling of special cases (based on exp(y*log(x)), so negative x and
0^0 are domain errors, powers of -0 are always +0 or +Inf, never -0 or
-Inf, and 1^+-Inf and Inf^0 are also domain errors, while NaN^0 and
1^NaN are NaN).  The test inputs are taken from those for pow, with
appropriate adjustments (including removing all tests that would be
domain errors from those in auto-libm-test-in and adding some more
such tests in libm-test-powr.inc).
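
A minimal sketch of the special-case behaviour described above (an
illustration only, not the glibc implementation), deferring to pow for
the ordinary cases:

  #include <errno.h>
  #include <math.h>

  static double
  my_powr (double x, double y)
  {
    if (isnan (x) || isnan (y))
      return x + y;                  /* NaN^0 and 1^NaN are NaN for powr */
    if (x < 0.0                      /* negative x is a domain error */
        || (x == 0.0 && y == 0.0)    /* 0^0 */
        || (x == 1.0 && isinf (y))   /* 1^+-Inf */
        || (isinf (x) && y == 0.0))  /* Inf^0 */
      {
        errno = EDOM;
        return NAN;
      }
    if (x == 0.0)                    /* powers of -0: +0 or +Inf, never -0/-Inf */
      return y < 0.0 ? HUGE_VAL : 0.0;
    return pow (x, y);               /* otherwise the same computation as pow */
  }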

The underlying implementation uses __ieee754_pow functions after
dealing with all special cases that need to be handled differently.
It might be a little faster (avoiding a wrapper and redundant checks
for special cases) to have an underlying implementation built
separately for both pow and powr with compile-time conditionals for
special-case handling, but I expect the benefit of that would be
limited given that both functions will end up needing to use the same
logic for computing pow outside of special cases.

My understanding is that powr(negative, qNaN) should raise "invalid":
that the rule on "invalid" for an argument outside the domain of the
function takes precedence over a quiet NaN argument producing a quiet
NaN result with no exceptions raised (for rootn it's explicit that the
0th root of qNaN raises "invalid").  I've raised this on the WG14
reflector to confirm the intent.

Tested for x86_64 and x86, and with build-many-glibcs.py.
2025-03-14 15:58:11 +00:00
9b646f5dc9 elf: Canonicalize $ORIGIN in an explicit ld.so invocation [BZ 25263]
When an executable is invoked directly, we calculate $ORIGIN by calling
readlink on /proc/self/exe, which the Linux kernel resolves to the
target of any symlinks.  However, if an executable is run through ld.so,
we cannot use /proc/self/exe and instead use the path given as an
argument.  This leads to a different calculation of $ORIGIN, which is
most notable in that it causes ldd to behave differently (e.g., by not
finding a library) from directly running the program.

To make the behavior consistent, take advantage of the fact that the
kernel also resolves /proc/self/fd/ symlinks to the target of any
symlinks in the same manner, so once we have opened the main executable
in order to load it, replace the user-provided path with the result of
calling readlink("/proc/self/fd/N").
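
A minimal user-space sketch of the technique (not the actual ld.so code,
which uses __fd_to_filename): open the file, then read back the
/proc/self/fd/N link, which the kernel resolves to the fully
symlink-resolved path.

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  static ssize_t
  canonical_via_proc_fd (const char *path, char *buf, size_t buflen)
  {
    int fd = open (path, O_RDONLY | O_CLOEXEC);
    if (fd < 0)
      return -1;
    char link[64];
    snprintf (link, sizeof link, "/proc/self/fd/%d", fd);
    ssize_t n = readlink (link, buf, buflen - 1);  /* kernel resolves symlinks */
    if (n >= 0)
      buf[n] = '\0';
    close (fd);
    return n;
  }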

(On non-Linux platforms this resolution does not happen and so no
behavior change is needed.)

The __fd_to_filename function requires _fitoa_word and _itoa_word,
which for 32-bit targets pulls in a lot of definitions from _itoa.c
(due to _ITOA_NEEDED being defined).  To simplify the build, move the
required function to a new file, _fitoa_word.c.

Checked on x86_64-linux-gnu and i686-linux-gnu.

Co-authored-by: Geoffrey Thomas <geofft@ldpreload.com>
Reviewed-by: Geoffrey Thomas <geofft@ldpreload.com>
Tested-by: Geoffrey Thomas <geofft@ldpreload.com>
2025-03-13 16:50:16 -03:00
eea6f1e079 Update syscall lists for Linux 6.13
Linux 6.13 adds four new syscalls.  Update syscall-names.list and
regenerate the arch-syscall.h headers with build-many-glibcs.py
update-syscalls.

Tested with build-many-glibcs.py.
2025-03-12 12:51:54 +00:00
1ec411f7ae Linux: Add new test misc/tst-sched_setattr-thread
The straightforward sched_getattr call serves as a test for
bug 32781, too.

Reviewed-by: Joseph Myers <josmyers@redhat.com>
2025-03-12 10:23:59 +01:00
74c68fa61b Linux: Remove attribute access from sched_getattr (bug 32781)
The GCC attribute expects an element count, not bytes.
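
An illustration of the distinction (a hypothetical declaration, not the
glibc one): for a typed pointer parameter, the size-index argument of
GCC's access attribute counts elements of the pointee type.

  #include <stddef.h>

  __attribute__ ((access (write_only, 1, 2)))
  void fill_ints (int *buf, size_t n);   /* n is a count of ints, not bytes */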
2025-03-12 10:23:47 +01:00
74d463c50b Linux: Add the pthread_gettid_np function (bug 27880)
Current Bionic has this function, with enhanced error checking
(the undefined case terminates the process).

Reviewed-by: Joseph Myers <josmyers@redhat.com>
2025-03-12 10:23:35 +01:00
a9017caff3 nptl: extend test coverage for sched_yield
We add sched_yield() API testing to the existing thread affinity
test case because it allows us to test sched_yield() operation
in the following scenarios:

  * On a main thread.
  * On multiple threads simultaneously.
  * On every CPU the system reports simultaneously.

This ensures we exercise sched_yield() in as many scenarios as
we would exercise calls to the affinity functions.

Additionally, the test is improved by adding a semaphore to coordinate
the running threads, so that an early-starting thread does not consume
CPU resources that could be used to start the other threads.
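
A minimal sketch of the start-coordination idea (simplified, with assumed
names, not the actual test code): every thread blocks on a semaphore until
all threads have been created, then all are released at once.

  #include <pthread.h>
  #include <sched.h>
  #include <semaphore.h>

  #define NTHREADS 8
  static sem_t start_gate;

  static void *
  worker (void *arg)
  {
    (void) arg;
    sem_wait (&start_gate);      /* wait until every thread exists */
    sched_yield ();              /* exercise sched_yield under contention */
    return NULL;
  }

  int
  main (void)
  {
    pthread_t t[NTHREADS];
    sem_init (&start_gate, 0, 0);
    for (int i = 0; i < NTHREADS; i++)
      pthread_create (&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
      sem_post (&start_gate);    /* release all workers together */
    for (int i = 0; i < NTHREADS; i++)
      pthread_join (t[i], NULL);
    return 0;
  }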

Co-authored-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2025-03-07 17:50:44 -05:00
77261698b4 Implement C23 rsqrt
C23 adds various <math.h> function families originally defined in TS
18661-4.  Add the rsqrt functions (1/sqrt(x)).  The test inputs are
taken from those for sqrt.

Tested for x86_64 and x86, and with build-many-glibcs.py.
2025-03-07 19:15:26 +00:00
50351e0570 sysdeps: linux: Add BTRFS_SUPER_MAGIC to pathconf
btrfs has a maximum link count of 65535.  Include this value in pathconf
to report the real maximum link count for this filesystem.
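
A usage sketch (with an illustrative mount path, not part of the change
itself): querying the per-filesystem link limit that pathconf reports.

  #include <stdio.h>
  #include <unistd.h>

  int
  main (void)
  {
    long max_links = pathconf ("/mnt/btrfs-volume", _PC_LINK_MAX);
    if (max_links < 0)
      perror ("pathconf");
    else
      printf ("LINK_MAX: %ld\n", max_links);   /* 65535 on btrfs */
    return 0;
  }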

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-03-05 15:28:31 -03:00
6cb703b81d linux: Prefix AT_HWCAP with 0x on LD_SHOW_AUXV
Suggested-by: Stefan Liebler <stli@linux.ibm.com>
Reviewed-by: Stefan Liebler <stli@linux.ibm.com>
2025-03-05 11:22:09 -03:00
1d60b9dfda Remove dl-procinfo.h
powerpc was the only architecture with arch-specific hooks for
LD_SHOW_AUXV, and with the information moved to ld diagnostics there
is no need to keep the _dl_procinfo hook.

Checked with a build for all affected ABIs.

Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2025-03-05 11:22:09 -03:00
2fd580ea46 powerpc: Remove unused dl-procinfo.h
The _dl_string_platform function is moved to hwcapinfo.h, since it is
only used by hwcapinfo.c and the test-get_hwcap internal test.

Checked on powerpc64le-linux-gnu.

Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2025-03-05 11:22:09 -03:00
8a995670a8 powerpc: Move AT_HWCAP descriptions to ld diagnostics
The ld.so diagnostics already print AT_HWCAP values, but only in
hexadecimal.  To avoid duplicating the strings, consolidate the
hwcap_names from cpu-features.h in a new file, dl-hwcap-info.h
(this also improves the hwcap string descriptions with more
values).

For future AT_HWCAP3/AT_HWCAP4 extensions, it is just a matter of
adding them to dl-hwcap-info.c so that both the ld diagnostics and
tunable filtering will parse the new values.

Checked on powerpc64le-linux-gnu.

Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2025-03-05 11:22:09 -03:00
e5893e6349 Remove unused dl-procinfo.h
Remove the unused _dl_hwcap_string defines.  As a result, many dl-procinfo.h
headers can be removed.  This also removes target-specific _dl_procinfo
implementations which only printed HWCAP strings using dl_hwcap_string.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-28 16:55:18 +00:00
935563754b AArch64: Remove LP64 and ILP32 ifdefs
Remove LP64 and ILP32 ifdefs.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-24 14:20:29 +00:00
eb7ac024d9 AArch64: Cleanup pointer mangling
Cleanup pointer mangling.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-24 14:17:57 +00:00
19860fd42e AArch64: Remove PTR_REG defines
Remove PTR_REG defines.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-24 14:16:55 +00:00
ce2f26a22e AArch64: Remove PTR_ARG/SIZE_ARG defines
This series removes various ILP32 defines that are
no longer needed.

Remove PTR_ARG/SIZE_ARG.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-24 14:15:15 +00:00
689a62a421 nptl: clear the whole rseq area before registration
Due to the extensible nature of the rseq area we can't explicitly
initialize fields that are not part of the ABI yet. It was agreed with
upstream that all new fields will be documented as zero initialized by
userspace. Future kernels configured with CONFIG_DEBUG_RSEQ will
validate the content of all fields during registration.

Replace the explicit field initialization with a memset of the whole
rseq area which will cover fields as they are added to future kernels.
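
A sketch of the idea (with assumed identifiers, not the nptl code): zero
the whole extensible area instead of naming individual fields, so fields
added by future kernels start out zero-initialized.

  #include <string.h>
  #include <sys/rseq.h>

  static void
  prepare_rseq_area (struct rseq *area, size_t area_size)
  {
    memset (area, 0, area_size);               /* covers current and future fields */
    area->cpu_id = RSEQ_CPU_ID_UNINITIALIZED;  /* then set the few required values */
  }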

Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-02-21 22:21:25 +00:00
41f6684557 aarch64: Add GCS test with signal handler
Test that when we return from a function that enabled GCS at runtime
we get SIGSEGV.  Also test that the ucontext contains a GCS block with
the GCS pointer.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-21 16:23:44 +00:00
15afd01e80 aarch64: Add GCS tests for dlopen
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-21 16:10:44 +00:00
57ee1deb1f aarch64: Add GCS tests for transitive dependencies
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-21 16:09:06 +00:00
82decb59bc aarch64: Add tests for Guarded Control Stack
These tests validate that the GCS tunable works as expected depending
on the GCS markings in the test binaries.

Tests validate both static and dynamically linked binaries.

These new tests are AArch64 specific. Moreover, they are included only
if the linker supports the "-z gcs=<value>" option. If built, these tests
will run on systems with and without HWCAP_GCS. In the latter case the
tests will be reported as UNSUPPORTED.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-02-21 16:08:00 +00:00
60f2d6be65 Fix tst-aarch64-pkey to handle ENOSPC as not supported
The syscall pkey_alloc can return ENOSPC to indicate either that all
keys are in use or that the system runs in a mode in which memory
protection keys are disabled.  In such a case the test should not fail
but just report unsupported.

This matches the behaviour of the generic tst-pkey.
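
A simplified sketch of the check (assumed, not the test source; 77 is the
exit status glibc's test framework treats as UNSUPPORTED):

  #define _GNU_SOURCE
  #include <errno.h>
  #include <sys/mman.h>

  static int
  check_pkey_support (void)
  {
    int key = pkey_alloc (0, 0);
    if (key < 0)
      {
        if (errno == ENOSPC || errno == ENOSYS)
          return 77;             /* no keys, or pkeys disabled: UNSUPPORTED */
        return 1;                /* any other error is a real failure */
      }
    pkey_free (key);
    return 0;
  }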

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-02-15 11:08:43 +01:00
4c43173eba ld.so: Decorate BSS mappings
Decorate BSS mappings with [anon: glibc: .bss <file>], for example
[anon: glibc: .bss /lib/libc.so.6].  The string ".bss" is already used
by bionic, so use the same string but add the filename as well.  If the
name would be longer than what the kernel allows, drop the directory
part of the path.
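
A sketch of the mechanism behind such decorations (an assumption about how
it is done, not the ld.so code): Linux's PR_SET_VMA_ANON_NAME prctl names
an anonymous mapping so the label shows up in /proc/<pid>/maps.

  #include <stddef.h>
  #include <sys/prctl.h>
  #ifndef PR_SET_VMA
  # define PR_SET_VMA 0x53564d41
  # define PR_SET_VMA_ANON_NAME 0
  #endif

  static void
  name_bss_mapping (void *addr, size_t len, const char *name)
  {
    /* e.g. name = "glibc: .bss /lib/libc.so.6"; silently ignored if the
       kernel lacks CONFIG_ANON_VMA_NAME.  */
    prctl (PR_SET_VMA, PR_SET_VMA_ANON_NAME, (unsigned long) addr, len, name);
  }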

Refactor the glibc.mem.decorate_maps check into a separate function and
use it to avoid assembling a name that would not be used later.

Signed-off-by: Petr Malat <oss@malat.biz>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-01-30 10:16:37 -03:00
a6fbe36b7f nptl: Add support for setup guard pages with MADV_GUARD_INSTALL
Linux 6.13 (662df3e5c3766) added a lightweight way to define guard areas
through the madvise syscall.  Instead of setting the guard region to
PROT_NONE through mprotect, userland can madvise the same area with a
special flag, and the kernel ensures that accessing the area will trigger
a SIGSEGV (as for a PROT_NONE mapping).

The madvise way has the advantage of lower kernel memory consumption for
the process page table (one fewer VMA per guard area), and slightly less
contention in the kernel (also due to fewer VMAs being tracked).

pthread_create allocates a new thread stack in one of two ways: if a
guard area is set (the default), it allocates the required memory range
with PROT_NONE and then mprotects the usable stack area; otherwise, it
allocates the region with the required flags.

For the MADV_GUARD_INSTALL support, the stack region is allocated
with the required flags and then the guard region is installed.  If the
kernel does not support it, the usual way is used instead (and
MADV_GUARD_INSTALL is disabled for future stack creations).
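
A sketch of the two strategies (with an assumed MADV_GUARD_INSTALL value
for older headers; not the actual pthread_create code):

  #include <sys/mman.h>
  #ifndef MADV_GUARD_INSTALL
  # define MADV_GUARD_INSTALL 102        /* added in Linux 6.13 */
  #endif

  static int
  install_guard (void *guard, size_t guardsize)
  {
    /* Lightweight guard: no separate PROT_NONE VMA is created.  */
    if (madvise (guard, guardsize, MADV_GUARD_INSTALL) == 0)
      return 0;
    /* Kernel too old: fall back to the classic PROT_NONE protection.  */
    return mprotect (guard, guardsize, PROT_NONE);
  }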

The stack allocation strategy is recorded in the pthread struct, and it
is used in case the guard region needs to be resized.  To avoid needing
an extra field, 'user_stack' is repurposed and renamed to 'stack_mode'.

This patch also adds a proper test for the pthread guard.

I checked on x86_64, aarch64, powerpc64le, and hppa with kernel 6.13.0-rc7.

Reviewed-by: DJ Delorie <dj@redhat.com>
2025-01-30 10:16:37 -03:00
50eaf54883 aarch64: Add HWCAP_GCS
Use upper 32 bits of HWCAP.

Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
2025-01-21 11:45:14 +00:00
b3a6bd625c Linux: Do not check unused bytes after sched_getattr in tst-sched_setattr
Linux 6.13 was released with a change that overwrites those bytes.
This means that the check_unused subtest fails.

Update the manual accordingly.

Tested-by: Xi Ruoyao <xry111@xry111.site>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-01-20 15:20:57 +01:00
a335acb8b8 aarch64: Use __alloc_gcs in makecontext
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-01-20 09:36:19 +00:00
d3df351338 aarch64: Process gnu properties in static exe
Unlike for BTI, the kernel does not process GCS properties, so update
GL(dl_aarch64_gcs) before the GCS status is set.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-01-20 09:36:19 +00:00
b81ee54bc9 aarch64: Enable GCS in static linked exe
Use the ARCH_SETUP_TLS hook to enable GCS in the statically linked case.
The system call must be inlined, and then GCS is enabled on a top-level
stack frame that does not return and has no exception handlers
above it.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-01-20 09:31:47 +00:00
9ad3d9267d aarch64: Add glibc.cpu.aarch64_gcs tunable
This tunable controls Guarded Control Stack (GCS) for the process.

0 = disabled: do not enable GCS
1 = enforced: check markings and fail if any binary is not marked
2 = optional: check markings but keep GCS off if a binary is unmarked
3 = override: enable GCS, markings are ignored

By default it is 0, so GCS is disabled; value 1 will enable GCS.

The status is stored in GL(dl_aarch64_gcs) early and only applied
later, since enabling GCS is tricky: it must happen on a top-level
stack frame.  GL is used instead of GLRO because the value may need
updates depending on libraries loaded after read-only protection is
applied; however, library-marking-based GCS setting is not yet
implemented.

Describe new tunable in the manual.

Co-authored-by: Yury Khrustalev <yury.khrustalev@arm.com>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-01-20 09:31:33 +00:00
3ac237fb71 aarch64: Add GCS support for makecontext
Changed the makecontext logic: previously the first setcontext jumped
straight to the user callback function and the return address was set
to __startcontext.  This does not work when GCS is enabled, as the
integrity of the return address is protected, so instead the context
is set up such that setcontext jumps to __startcontext, which calls the
user callback (passed in x20).

The map_shadow_stack syscall is used to allocate a suitably sized GCS
(which includes some reserved area to account for altstack signal
handlers and otherwise supports the maximum number of 16-byte-aligned
stack frames on the given stack); however, the GCS is never freed, as
the lifetime of the ucontext and the related stack is user-managed.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-01-20 09:22:41 +00:00
9885d13b66 aarch64: Add GCS support for setcontext
The userspace ucontext needs to store GCSPR; it does not have to be
compatible with the kernel ucontext.  For now we use the Linux
struct gcs_context layout but only use the gcspr field from it.

The implementation is similar to the longjmp code: it supports switching
GCS if the target GCS is capped, and unwinding a continuous GCS to a
previous state.

Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2025-01-20 09:22:41 +00:00
1cf59c2603 aarch64: Add GCS support to vfork
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2025-01-20 09:22:41 +00:00
37b9a5aacc Linux: Add tests that check that TLS and rseq area are separate
The new test elf/tst-rseq-tls-range-4096-static reliably detected
the extra TLS allocation problem (tcb_offset was dropped from
the allocation size) on aarch64.  It also failed with a crash
in dlopen *before* the extra TLS changes, so TLS alignment with
static dlopen was already broken.

Reviewed-by: Michael Jeanson <mjeanson@efficios.com>
2025-01-16 20:02:42 +01:00
abeae3c006 Linux: Fixes for getrandom fork handling
Careful updates of grnd_alloc.len are required to ensure that
after fork, grnd_alloc.states does not contain entries that
are also encountered by __getrandom_reset_state in TCBs.
For the same reason, it is necessary to overwrite the TCB state
pointer with NULL before updating grnd_alloc.states in
__getrandom_vdso_release.

Before this change, different TCBs could share the same getrandom
state after multi-threaded fork.  This would be a critical security
bug (predictable randomness) if not caught during development.

The additional check in stdlib/tst-arc4random-thread makes it more
likely that the test fails due to the bugs mentioned above.

Both __getrandom_reset_state and __getrandom_vdso_release could
put reserved NULL pointers into the states array.  This is also
fixed with this commit.  After these changes, no null pointers were
observed in the states array during testing.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-01-16 19:58:09 +01:00
09ea1afec7 affinity-inheritance: Overallocate CPU sets
Some kernels on S390 appear to return a CPU affinity mask based on
configured processors rather than the ones online.  Overallocate the CPU
set to match that, but operate only on the ones online.
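
A sketch of the over-allocation idea (assumed sizing, not the test code):
size the dynamically allocated CPU set from the configured processor
count, with headroom, then inspect only the online CPUs.

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stddef.h>
  #include <unistd.h>

  static cpu_set_t *
  alloc_affinity_set (size_t *size_out)
  {
    long configured = sysconf (_SC_NPROCESSORS_CONF);
    long count = (configured > 0 ? configured : 1) * 8;  /* generous headroom */
    cpu_set_t *set = CPU_ALLOC (count);
    *size_out = CPU_ALLOC_SIZE (count);
    return set;                   /* pass *size_out to sched_getaffinity */
  }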

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Co-authored-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2025-01-14 09:23:36 -05:00
f42634f824 sh4: ensure FPSCR.PR==0 when executing FRCHG [BZ #27543]
If the bit is not 0, the operations FRCHG and FSCHG are
undefined and cause a trap; qemu now checks for this as
well, so we set it to 0 temporarily and restore the old
value in getcontext afterwards (setcontext/swapcontext
already do so).

From the discussion in the bug report, this can probably
be optimised in one place, but none of the people involved
are SH4 assembly experts, this patch is field-tested, and
it’s not a code path that is run often.  The other question,
what happens if a signal occurs while the bit is temporarily 0,
is also still unsolved, but to fix that a kernel change is
most likely needed; this patch trades a certain trap on
many CPUs for a hard-to-hit trap in a signal handler if a
signal is delivered during the few instructions the PR bit
is temporarily set to 0, so it’s not a regression for most
users.

See BZ and https://bugs.launchpad.net/qemu/+bug/1796520 for
related discussion, references and review comments.

Signed-off-by: mirabilos <tg@debian.org>
Reviewed-by: Oleg Endo <olegendo@gcc.gnu.org>
Tested-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-01-13 11:25:23 -03:00
6c575d835e aarch64: Use 64-bit variable to access the special registers
clang issues:

  error: value size does not match register size specified by the
  constraint and modifier [-Werror,-Wasm-operand-widths]

while trying to use 32-bit variables with 'mrs' to get/set the
fpsr, dczid_el0, and ctr.
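
An illustration of the fix (a sketch, not the glibc code): read a system
register into a 64-bit variable so the operand width matches what 'mrs'
requires.

  #include <stdint.h>

  static inline uint64_t
  read_fpsr (void)
  {
    uint64_t val;
    __asm__ __volatile__ ("mrs %0, fpsr" : "=r" (val));
    return val;
  }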
2025-01-13 10:17:38 -03:00
072795229c Linux: Update internal copy of '<sys/rseq.h>'
Sync the internal copy of '<sys/rseq.h>' with the latest Linux kernel
'include/uapi/linux/rseq.h'.

Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-01-10 20:20:48 +00:00
93d0bfbe8f nptl: Move the rseq area to the 'extra TLS' block
Move the rseq area to the newly added 'extra TLS' block; this is the
last step in adding support for the rseq extended ABI.  The size of the
rseq area is now dynamic and depends on the rseq features reported by
the kernel through the ELF auxiliary vector.  This will allow
applications to use rseq features past the 32 bytes of the original rseq
ABI as they become available in future kernels.
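
A sketch of how an application can locate its rseq area under this ABI,
using the public __rseq_offset and __rseq_size symbols (assumes a compiler
that provides __builtin_thread_pointer):

  #include <stdio.h>
  #include <sys/rseq.h>

  int
  main (void)
  {
    struct rseq *r = (struct rseq *) ((char *) __builtin_thread_pointer ()
                                      + __rseq_offset);
    printf ("rseq area at %p, size %u bytes\n", (void *) r, __rseq_size);
    return 0;
  }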

Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-01-10 20:20:27 +00:00
494d65129e nptl: Introduce <rseq-access.h> for RSEQ_* accessors
In preparation to move the rseq area to the 'extra TLS' block, we need
accessors based on the thread pointer and the rseq offset. The ONCE
variant of the accessors ensures single-copy atomicity for loads and
stores which is required for all fields once the registration is active.

A separate header is required to allow including <atomic.h>, which
results in an include loop when added to <tcb-access.h>.

Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-01-10 20:20:17 +00:00
be440f6c38 nptl: add rtld_hidden_proto to __rseq_size and __rseq_offset
This allows accessing the internal aliases of __rseq_size and
__rseq_offset from ld.so without ifdefs and avoids dynamic symbol
binding at run time for both variables.

Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-01-10 20:19:53 +00:00
304221775c Add Linux 'extra TLS'
Add the Linux implementation of 'extra TLS' which will allocate space
for the rseq area at the end of the TLS blocks in allocation order.

Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-01-10 20:19:40 +00:00
c813c1490d nptl: Add rseq auxvals
Get the rseq feature size and alignment requirement from the auxiliary
vector for use inside the dynamic loader. Use '__rseq_size' directly to
store the feature size. If the main thread registration fails or is
disabled by tunable, reset the value to 0.
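
A user-space sketch of reading the same auxiliary-vector entries (assuming
the AT_RSEQ_FEATURE_SIZE and AT_RSEQ_ALIGN constants are available from
<elf.h>):

  #include <stdio.h>
  #include <sys/auxv.h>

  int
  main (void)
  {
    unsigned long feature_size = getauxval (AT_RSEQ_FEATURE_SIZE);
    unsigned long align = getauxval (AT_RSEQ_ALIGN);
    printf ("rseq feature size: %lu, alignment: %lu\n", feature_size, align);
    return 0;
  }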

This will be used in the TLS block allocator to compute the size and
alignment of the rseq area block for the extended ABI support.

Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2025-01-10 20:19:07 +00:00
e41aabcc93 tests: Verify inheritance of cpu affinity
Add a couple of tests to verify that CPU affinity set using
sched_setaffinity and pthread_setaffinity_np is inherited by a child
process and a child thread.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2025-01-09 10:51:38 -05:00
0bba6c29a1 Revert "configure: default to --prefix=/usr on GNU/Linux"
This reverts commit 81439a116c.
2025-01-08 16:55:05 -05:00