The third argument to __riscv_hwprobe is the size in bytes of the
cpu bitmask pointed to by the fourth argument; however, in the access
attribute (read_only, 4, 3) it is treated as an element count (i.e., the
number of unsigned longs that make up the bitmask), resulting in a
spurious compiler warning:
$ gcc -c hwprobe1.c
hwprobe1.c: In function 'main':
hwprobe1.c:15:11: warning: '__riscv_hwprobe' reading 1024 bytes from a region of size 128 [-Wstringop-overread]
15 | ret = __riscv_hwprobe (pairs, 1, sizeof(cpus), cpus, 0);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
hwprobe1.c:9:23: note: source object 'cpus' of size 128
9 | unsigned long int cpus[16];
| ^~~~
In file included from hwprobe1.c:1:
/usr/include/riscv64-linux-gnu/sys/hwprobe.h:66:12: note: in a call to function '__riscv_hwprobe' declared with attribute 'access (read_only, 4, 3)'
66 | extern int __riscv_hwprobe (struct riscv_hwprobe *__pairs, size_t __pair_count,
| ^~~~~~~~~~~~~~~
$
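A program along these lines reproduces the warning (reconstructed for
illustration from the diagnostic above; not the verbatim hwprobe1.c):

  #include <string.h>
  #include <sys/hwprobe.h>

  int
  main (void)
  {
    struct riscv_hwprobe pairs[1] = { { RISCV_HWPROBE_KEY_IMA_EXT_0, 0 } };
    unsigned long int cpus[16];
    int ret;

    /* Select every cpu representable in the 128-byte bitmask.  */
    memset (cpus, 0xff, sizeof (cpus));
    ret = __riscv_hwprobe (pairs, 1, sizeof (cpus), cpus, 0);
    return ret;
  }

sizeof (cpus) is 128 bytes, but because of the (read_only, 4, 3)
attribute GCC interprets the third argument as a number of elements of
the pointed-to type, i.e. 128 unsigned longs = 1024 bytes.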
The documentation (https://docs.kernel.org/arch/riscv/hwprobe.html)
claims that the cpu bitmask has the type cpu_set_t *, which would be
consistent with other functions that take a cpu bitmask such as
sched_setaffinity and sched_getaffinity. It also uses the name
cpusetsize for the third argument, which is much more accurate than
cpu_count since it is a size in bytes and not a cpu count. The
(read_only, 4, 3) access attribute in the glibc prototype claims
that the cpu bitmask is only read; however, when flags is
RISCV_HWPROBE_WHICH_CPUS it is both read and written.
Therefore, in the glibc prototype the type of the fourth argument is
changed to cpu_set_t * to match the documentation, the name of the
third argument is changed to cpusetsize as in the documentation, and the
incorrect access attribute that applies to these arguments is removed.
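The shape of the change is roughly the following (illustrative, not the
verbatim header; the actual declaration uses the transparent union
described below rather than a plain cpu_set_t *):

  /* Before: element count and unsigned long int *, with the bogus
     read-only access attribute.  */
  extern int __riscv_hwprobe (struct riscv_hwprobe *__pairs,
                              size_t __pair_count, size_t __cpu_count,
                              unsigned long int *__cpus,
                              unsigned int __flags)
       __fortified_attr_access (__read_only__, 4, 3);

  /* After: byte count and cpu_set_t *, no access attribute on the cpu
     bitmask, and __THROW (discussed further below).  */
  extern int __riscv_hwprobe (struct riscv_hwprobe *__pairs,
                              size_t __pair_count, size_t __cpusetsize,
                              cpu_set_t *__cpus, unsigned int __flags)
       __THROW;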
Almost all existing callers pass a null pointer for the fourth
argument; however, a transparent union is introduced for compatibility
with callers that cast a pointer to the old argument type, and a
macro is introduced that allows callers to distinguish between the
old and new prototypes when needed.
The access attributes are specified with __fortified_attr_access,
but that macro is for fortified functions; the regular
__attr_access macro is for non-fortified functions such as this one.
Using the incorrect macro results in no access checks at fortify level
3, because it is assumed that the fortified function will be doing the
checking. It is changed to use the correct macro so that the access
checks will work regardless of fortify level.
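Simplified, the two macros in <sys/cdefs.h> behave roughly like this:

  #define __attr_access(x) __attribute__ ((__access__ x))
  #if __USE_FORTIFY_LEVEL == 3
  /* The fortified wrapper is assumed to do the checking itself.  */
  # define __fortified_attr_access(a, o, s)
  #else
  # define __fortified_attr_access(a, o, s) __attr_access ((a, o, s))
  #endif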
Also because __riscv_hwprobe is not a cancellation point, __THROW
is added, consistent with similar functions. (However, it is omitted
from the typedef because GCC does not accept it there.)
The __wur (warn_unused_result) attribute is helpful for functions that
cannot be used safely without checking the result; however, code such
as the following does not require the result to be checked and should
not produce a warning:
struct riscv_hwprobe pair = { RISCV_HWPROBE_KEY_IMA_EXT_0, 0 };
__riscv_hwprobe (&pair, 1, 0, NULL, 0);
if (pair.value & RISCV_HWPROBE_EXT_ZBB) ...
Therefore this attribute is omitted.
The comment claiming that the second argument to the ifunc selector
is a pointer to the vDSO function is corrected. It is a pointer to
the regular glibc function (which returns errors as positive values),
not the vDSO function (which returns errors as negative values).
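For example, an ifunc selector might use that second argument as
follows (illustrative names; the function-pointer typedef is the one
provided by <sys/hwprobe.h>, here assumed to be __riscv_hwprobe_t):

  #include <stddef.h>
  #include <stdint.h>
  #include <sys/hwprobe.h>

  extern size_t strlen_generic (const char *);
  extern size_t strlen_zbb (const char *);
  typedef size_t (*strlen_fn) (const char *);

  static strlen_fn
  select_strlen (uint64_t hwcap __attribute__ ((__unused__)),
                 __riscv_hwprobe_t hwprobe_func)
  {
    struct riscv_hwprobe pair = { RISCV_HWPROBE_KEY_IMA_EXT_0, 0 };
    /* The pointer is null on kernels without the syscall; the glibc
       function returns 0 on success, positive values on error.  */
    if (hwprobe_func != NULL
        && hwprobe_func (&pair, 1, 0, NULL, 0) == 0
        && (pair.value & RISCV_HWPROBE_EXT_ZBB) != 0)
      return strlen_zbb;
    return strlen_generic;
  }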
Fixes commit 426d0e1aa8 ("riscv: Add
Linux hwprobe syscall support").
Fixes: BZ #32932
Signed-off-by: Mark Harris <mark.hsj@gmail.com>
Reviewed-by: Palmer Dabbelt <palmer@dabbelt.com>
Acked-by: Palmer Dabbelt <palmer@dabbelt.com>
The memcpy optimization (commit 587a1290a1) has a series
of mistakes:
- The implementation is broken: the chunk size calculation is wrong,
leading to invalid memory accesses.
- It adds ifunc support by default, so --disable-multi-arch does
not work as expected for riscv.
- It mixes Linux files (memcpy ifunc selection which requires the
vDSO/syscall mechanism) with generic support (the memcpy
optimization itself).
- There is no __libc_ifunc_impl_list, so testing only checks the
selected implementation instead of all implementations supported
by the system.
This patch also simplifies the bits required to enable ifunc: there
is no need for memcopy.h, nor for Linux-specific files.
The __memcpy_noalignment tail handling now uses a branchless strategy
similar to aarch64 (overlapping 32-bit copies for sizes 4..7 and
overlapping byte copies for sizes 1..3).
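The idea, sketched here in C (the actual implementation is in assembly,
per the v1 changes below): rather than a byte-copy loop, the remaining
4..7 bytes are handled with two possibly overlapping 32-bit copies, and
1..3 bytes with three possibly overlapping byte copies:

  #include <stdint.h>
  #include <string.h>

  static inline void
  copy_tail (unsigned char *dst, const unsigned char *src, size_t n)
  {
    if (n >= 4)        /* 4 <= n <= 7: two overlapping 32-bit copies.  */
      {
        uint32_t head, tail;
        memcpy (&head, src, sizeof head);
        memcpy (&tail, src + n - 4, sizeof tail);
        memcpy (dst, &head, sizeof head);
        memcpy (dst + n - 4, &tail, sizeof tail);
      }
    else if (n > 0)    /* 1 <= n <= 3: three overlapping byte copies.  */
      {
        dst[0] = src[0];
        dst[n / 2] = src[n / 2];
        dst[n - 1] = src[n - 1];
      }
  }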
Checked on riscv64 and riscv32 by explicitly enabling the function
on __libc_ifunc_impl_list on qemu-system.
Changes from v1:
* Implement the memcpy in assembly to correctly handle RISC-V
strict alignment.
Reviewed-by: Evan Green <evan@rivosinc.com>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
The new riscv_hwprobe syscall also comes with a vDSO for faster answers
to your most common questions. Call in today to speak with a kernel
representative near you!
Signed-off-by: Evan Green <evan@rivosinc.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Add awareness and a thin wrapper function around a new Linux system call
that allows callers to get architecture and microarchitecture
information about the CPUs from the kernel. This can be used to
do things like dynamically choose a memcpy implementation.
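For instance (illustrative only; the key and value constants come from
the kernel's hwprobe interface), a caller could ask whether misaligned
accesses are fast before choosing an alignment-ignorant memcpy:

  #include <sys/hwprobe.h>

  static int
  misaligned_access_is_fast (void)
  {
    struct riscv_hwprobe pair = { RISCV_HWPROBE_KEY_CPUPERF_0, 0 };
    /* A null cpu set with zero size asks about all online cpus; the
       wrapper returns 0 on success.  */
    if (__riscv_hwprobe (&pair, 1, 0, NULL, 0) != 0)
      return 0;
    return (pair.value & RISCV_HWPROBE_MISALIGNED_MASK)
           == RISCV_HWPROBE_MISALIGNED_FAST;
  }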
Signed-off-by: Evan Green <evan@rivosinc.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>