Without this increase the mtr test case pre/post conditions will
fail as the stack usage has increased under MSAN with clang-20.1.
ASAN takes an 11M stack; however, there was no obvious gain in MSAN
test success beyond 2M.
With smaller stack sizes, the observed failure was normally a SEGV.
Hide the default stack size from the sysvar tests that expose
thread-stack as a variable with its default value.
my_time_fraction_remainder(): Remove a DBUG_ASSERT, because there is
none in sec_part_shift() or sec_part_unshift() either. A buffer
overflow should be caught by cmake -DWITH_ASAN=ON in all three.
This fixes a build with GCC 14.2 and
cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_CXX_FLAGS=-Og.
Reviewed by: Daniel Black
page_is_corrupted(): Do not allocate the buffers from stack,
but from the heap, in xb_fil_cur_open().
row_quiesce_write_cfg(): Issue one type of message when we
fail to create the .cfg file.
update_statistics_for_table(), read_statistics_for_table(),
delete_statistics_for_table(), rename_table_in_stat_tables():
Use a common stack buffer for Index_stat, Column_stat, Table_stat.
ha_connect::FileExists(): Invoke push_warning_printf() so that
we can avoid allocating a buffer for snprintf().
translog_init_with_table(): Do not duplicate TRANSLOG_PAGE_SIZE_BUFF.
Let us also globally enable the GCC 4.4 and clang 3.0 option
-Wframe-larger-than=16384 to reduce the possibility of introducing
such stack overflow in the future. For RocksDB and Mroonga we relax
these limits.
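As a rough illustration (the function name and buffer below are hypothetical,
not from the tree), -Wframe-larger-than makes GCC and clang flag any function
whose stack frame exceeds the given limit:

  /* Hypothetical example: a 32 KiB local buffer exceeds the 16 KiB limit.
     Compiling with: cc -c -Wframe-larger-than=16384 example.c
     produces a warning that the frame size is larger than 16384 bytes. */
  #include <string.h>

  void copy_name(char *dst, size_t dst_size, const char *name)
  {
    char buf[32768];                      /* allocated on the stack */
    strncpy(buf, name, sizeof buf - 1);
    buf[sizeof buf - 1]= '\0';
    strncpy(dst, buf, dst_size);
  }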
Reviewed by: Vladislav Lesin
In commit b6923420f3 (MDEV-29445)
we started to specify the MAP_POPULATE flag for allocating the
InnoDB buffer pool. This would cause a lot of time to be spent
on __mm_populate() inside the Linux kernel, such as 16 seconds
to pre-fault or commit innodb_buffer_pool_size=64G.
Let us revert to the previous way of allocating the buffer pool
at startup. Note: An attempt to increase the buffer pool size by
SET GLOBAL innodb_buffer_pool_size (up to innodb_buffer_pool_size_max)
will invoke my_virtual_mem_commit(), which will use MAP_POPULATE
to zero-fill and prefault the requested additional memory area, blocking
buf_pool.mutex.
Before MDEV-29445 we allocated the InnoDB buffer pool by invoking
mmap(2) once (via my_large_malloc()). After the change, we would
invoke mmap(2) twice, first via my_virtual_mem_reserve() and then
via my_virtual_mem_commit(). Outside Microsoft Windows, we are
reverting to my_large_malloc()-like allocation.
my_virtual_mem_reserve(): Define only for Microsoft Windows.
Other platforms should invoke my_large_virtual_alloc() and
update_malloc_size() instead of my_virtual_mem_reserve() and
my_virtual_mem_commit().
my_large_virtual_alloc(): Define only outside Microsoft Windows.
Do not specify MAP_NORESERVE nor MAP_POPULATE, to preserve compatibility
with my_large_malloc(). Were MAP_POPULATE specified, the mmap()
system call would be significantly slower, for example 18 seconds
to reserve 64 GiB upfront.
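For illustration, a minimal sketch (not the actual my_large_malloc() code;
the function names are hypothetical) of the two mmap(2) strategies on Linux,
with error handling omitted:

  #include <stddef.h>
  #include <sys/mman.h>

  static void *reserve_lazy(size_t size)
  {
    /* Pages are faulted in on first access; mmap() itself returns quickly. */
    return mmap(NULL, size, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  }

  static void *reserve_prefaulted(size_t size)
  {
    /* MAP_POPULATE asks the kernel to pre-fault every page up front;
       this is where the multi-second __mm_populate() time is spent. */
    return mmap(NULL, size, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
  }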
Problem:
========
- After commit cc8eefb0dc (MDEV-33087),
InnoDB uses the bulk insert operation for ALTER TABLE..ALGORITHM=COPY
and CREATE TABLE..SELECT as well. InnoDB fails to clear the bulk
buffer when it encounters an error during CREATE..SELECT. The problem
is that during transaction cleanup, InnoDB fails to identify
the bulk insert for a DDL operation.
Fix:
====
- Represent bulk_insert in trx by 2 bits (see the sketch after this
list). By doing that, InnoDB can distinguish between TRX_DML_BULK and
TRX_DDL_BULK. During DDL, set the bulk insert value of the transaction
to TRX_DDL_BULK.
- Introduce a parameter HA_EXTRA_ABORT_ALTER_COPY which rolls back
only TRX_DDL_BULK transactions.
- bulk_insert_apply() for a TRX_DDL_BULK transaction happens
only during the HA_EXTRA_END_ALTER_COPY extra() call.
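A minimal sketch of the 2-bit state; only TRX_DML_BULK and TRX_DDL_BULK are
named above, so the TRX_NO_BULK name, the numeric values and the field layout
are assumptions for illustration:

  /* assumed names/values; stored in trx_t as a 2-bit field, e.g.
     unsigned bulk_insert:2; */
  enum trx_bulk_insert_state
  {
    TRX_NO_BULK=  0, /* no bulk insert in progress (assumed name)       */
    TRX_DML_BULK= 1, /* bulk insert during ordinary DML                 */
    TRX_DDL_BULK= 2  /* bulk insert during DDL, e.g.
                        ALTER TABLE..ALGORITHM=COPY                     */
  };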
Fix the code that adds MySQL _0900_ collations as _uca1400_ aliases
so that it does not perform deep initialization of the corresponding
_uca1400_ collations.
Only basic initialization is now performed, which allows these
collations (both _0900_ and _uca1400_) to be seen in queries to
INFORMATION_SCHEMA tables COLLATIONS and
COLLATION_CHARACTER_SET_APPLICABILITY,
as well as in SHOW COLLATION statements.
Deep initialization is now performed only when a collation
(either the _0900_ alias or the corresponding _uca1400_ collation)
is used for the very first time after the server startup.
Refactoring was done to make the code easier to maintain:
- most of the _uca1400_ code was moved from ctype-uca.c
to a new file ctype-uca1400.c
- most of the _0900_ code was moved from ctype-uca.c
to a new file ctype-uca0900.c
Change details:
- The original function add_alias_for_collation() added by the patch for
"MDEV-20912 Add support for utf8mb4_0900_* collations in MariaDB Server"
was removed from mysys/charset.c, as it had two problems:
a. it forced deep initialization of the _uca1400_ collations
when adding _0900_ aliases for them at the server startup
(the main reported problem)
b. the collation initialization code in add_alias_for_collation()
was related more to collations than to memory management,
so /strings is a better place for it than /mysys.
The code from add_alias_for_collation() was split into separate functions.
Cyclic dependency was removed. `#include <my_sys.h>` was removed
from /strings/ctype-uca.c. Collations are now added using a callback
function MY_CHARSET_LOADER::add_collation, like it is done for
user collations defined in Index.xml. The code in /mysys sets
MY_CHARSET_LOADER::add_collation to add_compiled_collation().
- The function compare_collations() was removed.
A new virtual function was added into my_collation_handler_st instead:
my_bool (*eq_collation)(CHARSET_INFO *self, CHARSET_INFO *other);
because it is the collation handler that knows how to detect equal
collations by comparing only some of CHARSET_INFO members without
their deep initialization.
Three implementations were added:
- my_ci_eq_collation_uca() for UCA collations, it compares
_0900_ collations as equal to their corresponding _uca1400_ collations.
- my_ci_eq_collation_utf8mb4_bin(), it compares
utf8mb4_nopad_bin and utf8mb4_0900_bin as equal.
- my_ci_eq_collation_generic() - the default implementation,
which compares all collations as not equal.
A C++ wrapper CHARSET_INFO::eq_collations() was added.
The code in /sql was changed to use the wrapper instead of the former
calls to the removed function compare_collations() (see the sketch below).
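A minimal sketch of the default implementation, based on the signature quoted
above; the body is illustrative, not the real my_ci_eq_collation_generic():

  #include <my_global.h>   /* my_bool */
  #include <m_ctype.h>     /* CHARSET_INFO */

  static my_bool my_ci_eq_collation_generic(CHARSET_INFO *self,
                                            CHARSET_INFO *other)
  {
    /* Illustrative default: distinct collations never compare as equal.
       The UCA and utf8mb4_bin handlers override this so that _0900_
       collations and their _uca1400_ counterparts compare as equal. */
    return self == other ? 1 : 0;
  }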
- A part of add_alias_for_collation() was moved into a new function
my_ci_alloc(). It allocates memory for a new charset_info_st
instance together with the collation name and the comment using a single
MY_CHARSET_LOADER::once_alloc call, which points to my_once_alloc()
in the server.
- A part of add_alias_for_collation() was moved into a new function
my_ci_make_comment_for_alias(). It makes an "Alias for xxx" string,
e.g. "Alias for utf8mb4_uca1400_swedish_ai_ci" in case of
utf8mb4_sv_0900_ai_ci.
- A part of the code in create_tailoring() was moved to
a new function my_uca1400_collation_get_initialized_shared_uca(),
to reuse the code between _uca1400_ and _0900_ collations.
- A new function my_collation_id_is_mysql_uca0900() was added
in addition to my_collation_id_is_mysql_uca1400().
- Functions to build collation names were added:
my_uca0900_collation_build_name()
my_uca1400_collation_build_name()
- A shared function was added:
my_bool
my_uca1400_collation_alloc_and_init(MY_CHARSET_LOADER *loader,
                                    LEX_CSTRING name,
                                    LEX_CSTRING comment,
                                    const uca_collation_def_param_t *param,
                                    uint id)
It's reused to add _uca1400_ and _0900_ collations, with basic
initialization (without deep initialization).
- The function add_compiled_collation() changed its return type from
void to int, to make it compatible with MY_CHARSET_LOADER::add_collation.
- Functions mysql_uca0900_collation_definition_add(),
mysql_uca0900_utf8mb4_collation_definitions_add(),
mysql_utf8mb4_0900_bin_add() were added into ctype-uca0900.c.
They get MY_CHARSET_LOADER as a parameter.
- Functions my_uca1400_collation_definition_add(),
my_uca1400_collation_definitions_add() were moved from
charset-def.c to strings/ctype-uca1400.c.
The latter now accepts MY_CHARSET_LOADER as the first parameter
instead of initializing a MY_CHARSET_LOADER inside.
- init_compiled_charsets() now initializes a MY_CHARSET_LOADER
variable and passes it to all functions adding collations:
- mysql_utf8mb4_0900_collation_definitions_add()
- mysql_uca0900_utf8mb4_collation_definitions_add()
- mysql_utf8mb4_0900_bin_add()
- A new structure was added into ctype-uca.h:
typedef struct uca_collation_def_param
{
  my_cs_encoding_t cs_id;
  uint tailoring_id;
  uint nopad_flags;
  uint level_flags;
} uca_collation_def_param_t;
It simplifies reusing the code for _uca1400_ and _0900_ collations.
- The definition of MY_UCA1400_COLLATION_DEFINITION was
moved from ctype-uca.c to ctype-uca1400.h, to reuse
the code for _uca1400_ and _0900_ collations.
- The definitions of "MY_UCA_INFO my_uca_v1400" and
"MY_UCA_INFO my_uca1400_info_tailored[][]" were moved from
ctype-uca.c to ctype-uca1400.c.
- The definitions/declarations of:
- mysql_0900_collation_start,
- struct mysql_0900_to_mariadb_1400_mapping
- mysql_0900_to_mariadb_1400_mapping
- mysql_utf8mb4_0900_collation_definitions_add()
were moved from ctype-uca.c to ctype-uca0900.c
- Functions
my_uca1400_make_builtin_collation_id()
my_uca1400_collation_definition_init()
my_uca1400_collation_id_uca400_compat()
my_ci_get_collation_name_uca1400_context()
were moved from ctype-uca.c to ctype-uca1400.c and ctype-uca1400.h
- A part of my_uca1400_collation_definition_init()
was moved into my_uca0520_builtin_collation_by_id(),
to make functions smaller.
We deprecate and ignore the parameter innodb_buffer_pool_chunk_size
and let the buffer pool size be changed in arbitrary 1-megabyte
increments.
innodb_buffer_pool_size_max: A new read-only startup parameter
that specifies the maximum innodb_buffer_pool_size. If 0 or
unspecified, it will default to the specified innodb_buffer_pool_size
rounded up to the allocation unit (2 MiB or 8 MiB). The maximum value
is 4GiB-2MiB on 32-bit systems and 16EiB-8MiB on 64-bit systems.
This maximum is very likely to be limited further by the operating system.
The status variable Innodb_buffer_pool_resize_status will reflect
the status of shrinking the buffer pool. When no shrinking is in
progress, the string will be empty.
Unlike before, the execution of SET GLOBAL innodb_buffer_pool_size
will block until the requested buffer pool size change has been
implemented, or the execution is interrupted by a KILL statement,
a client disconnect, or server shutdown. If the
buf_flush_page_cleaner() thread notices that we are running out of
memory, the operation may fail with ER_WRONG_USAGE.
SET GLOBAL innodb_buffer_pool_size will be refused
if the server was started with --large-pages (even if
no HugeTLB pages were successfully allocated). This functionality
is somewhat exercised by the test main.large_pages, which now runs
also on Microsoft Windows. On Linux, explicit HugeTLB mappings are
apparently excluded from the reported Resident Set Size (RSS), and
apparently unshrinkable between mmap(2) and munmap(2).
The buffer pool will be mapped to a contiguous virtual memory area
that will be aligned and partitioned into extents of 8 MiB on
64-bit systems and 2 MiB on 32-bit systems.
Within an extent, the first few innodb_page_size blocks contain
buf_block_t objects that will cover the page frames in the rest
of the extent. The number of such frames is precomputed in the
array first_page_in_extent[] for each innodb_page_size.
In this way, there is a trivial mapping between
page frames and block descriptors and we do not need any
lookup tables like buf_pool.zip_hash or buf_pool_t::chunk_t::map.
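A simplified sketch of that mapping, assuming a 64-bit system with 8 MiB
extents; the helper, its parameters (first_frame_in_extent, descriptor_size)
and the macro are illustrative, and the real code uses the precomputed
first_page_in_extent[] table instead of dividing at runtime:

  #include <stddef.h>
  #include <stdint.h>

  struct buf_block_t;                        /* block descriptor, opaque here */
  #define EXTENT_SIZE ((uintptr_t) 8 << 20)  /* 8 MiB extents on 64-bit       */

  static inline struct buf_block_t *
  block_for_frame(const void *frame, size_t page_size,
                  size_t first_frame_in_extent, size_t descriptor_size)
  {
    uintptr_t addr=    (uintptr_t) frame;
    uintptr_t extent=  addr & ~(EXTENT_SIZE - 1);            /* extent start */
    size_t    ordinal= (addr - extent) / page_size - first_frame_in_extent;
    /* Descriptors occupy the first frames of the extent, one for each page
       frame that follows, so the lookup is plain pointer arithmetic. */
    return (struct buf_block_t *)(extent + ordinal * descriptor_size);
  }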
We will always allocate the same number of block descriptors for
an extent, even if we do not need all the buf_block_t in the last
extent in case the innodb_buffer_pool_size is not an integer multiple
of the extent size.
The minimum innodb_buffer_pool_size is 256*5/4 pages. At the default
innodb_page_size=16k this corresponds to 5 MiB. However, now that the
innodb_buffer_pool_size includes the memory allocated for the block
descriptors, the minimum would be innodb_buffer_pool_size=6m.
my_large_virtual_alloc(): A new function, similar to my_large_malloc().
my_virtual_mem_reserve(), my_virtual_mem_commit(),
my_virtual_mem_decommit(), my_virtual_mem_release():
New interface mostly by Vladislav Vaintroub, to separately
reserve and release virtual address space, as well as to
commit and decommit memory within it.
After my_virtual_mem_decommit(), the virtual memory range will be
read-only or inaccessible, depending on whether the build option
cmake -DHAVE_UNACCESSIBLE_AFTER_MEM_DECOMMIT=1
has been specified. This option is hard-coded on Microsoft Windows,
where VirtualFree(MEM_DECOMMIT) will make the memory inaccessible.
On IBM AIX, Linux, Illumos and possibly Apple macOS, the virtual memory
will be zeroed out immediately. On other POSIX-like systems,
madvise(MADV_FREE) will be used if available, to give the operating
system kernel a permission to zero out the virtual memory range.
We prefer immediate freeing so that the reported
resident set size (RSS) of the process will reflect the current
innodb_buffer_pool_size. Shrinking the buffer pool is a rarely
executed, resource-intensive operation, and the immediate configuration
of the MMU mappings should not incur significant additional penalty.
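A hedged sketch of the POSIX side of this decommit path; the actual
my_virtual_mem_decommit() differs in detail and also adjusts page protection
depending on HAVE_UNACCESSIBLE_AFTER_MEM_DECOMMIT, and the function name here
is illustrative:

  #include <stddef.h>
  #include <sys/mman.h>

  static void decommit_sketch(void *ptr, size_t size)
  {
  #ifdef __linux__
    /* Frees anonymous pages immediately: the reported RSS drops right away
       and later reads return zero-filled pages. */
    madvise(ptr, size, MADV_DONTNEED);
  #elif defined MADV_FREE
    /* Give the kernel permission to reclaim and zero the range lazily. */
    madvise(ptr, size, MADV_FREE);
  #endif
  }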
opt_super_large_pages: Declare only on Solaris. Actually, this is
specific to the SPARC implementation of Solaris, but because we
lack access to a Solaris development environment, we will not revise
this for other MMU and ISA.
buf_pool_t::chunk_t::create(): Remove.
buf_pool_t::create(): Initialize all n_blocks of the buf_pool.free list.
buf_pool_t::allocate(): Renamed from buf_LRU_get_free_only().
buf_pool_t::LRU_warned: Changed to Atomic_relaxed<bool>,
only to be modified by the buf_flush_page_cleaner() thread.
buf_pool_t::shrink(): Attempt to shrink the buffer pool.
There are 3 possible outcomes: SHRINK_DONE (success),
SHRINK_IN_PROGRESS (the caller may keep trying),
and SHRINK_ABORT (we seem to be running out of buffer pool).
While traversing buf_pool.LRU, release the contended
buf_pool.mutex once in every 32 iterations in order to
reduce starvation. Use lru_scan_itr for efficient traversal,
similar to buf_LRU_free_from_common_LRU_list().
buf_pool_t::shrunk(): Update the reduced size of the buffer pool
in a way that is compatible with buf_pool_t::page_guess(),
and invoke my_virtual_mem_decommit().
buf_pool_t::resize(): Before invoking shrink(), run one batch of
buf_flush_page_cleaner() in order to prevent LRU_warn().
Abort if shrink() recommends it, or no blocks were withdrawn in
the past 15 seconds, or the execution of the statement
SET GLOBAL innodb_buffer_pool_size was interrupted.
buf_pool_t::first_to_withdraw: The first block descriptor that is
out of the bounds of the shrunk buffer pool.
buf_pool_t::withdrawn: The list of withdrawn blocks.
If buf_pool_t::resize() is aborted before shrink() completes,
we must be able to resurrect the withdrawn blocks in the free list.
buf_pool_t::contains_zip(): Added a parameter for the
number of least significant pointer bits to disregard,
so that we can find any pointers to within a block
that is supposed to be free.
buf_pool_t::is_shrinking(): Return the total number of blocks that
were withdrawn or are to be withdrawn.
buf_pool_t::to_withdraw(): Return the number of blocks that will need to
be withdrawn.
buf_pool_t::usable_size(): Number of usable pages, considering possible
in-progress attempt at shrinking the buffer pool.
buf_pool_t::page_guess(): Try to buffer-fix a guessed block pointer.
If HAVE_UNACCESSIBLE_AFTER_MEM_DECOMMIT is set, the pointer will
be validated before being dereferenced.
buf_pool_t::get_info(): Replaces buf_stats_get_pool_info().
innodb_init_param(): Refactored. We must first compute
srv_page_size_shift and then determine the valid bounds of
innodb_buffer_pool_size.
buf_buddy_shrink(): Replaces buf_buddy_realloc().
Part of the work is deferred to buf_buddy_condense_free(),
which is being executed when we are not holding any
buf_pool.page_hash latch.
buf_buddy_condense_free(): Do not relocate blocks.
buf_buddy_free_low(): Do not care about buffer pool shrinking.
This will be handled by buf_buddy_shrink() and
buf_buddy_condense_free().
buf_buddy_alloc_zip(): Assert !buf_pool.contains_zip()
when we are allocating from the binary buddy system.
Previously we were asserting this on multiple recursion levels.
buf_buddy_block_free(), buf_buddy_free_low():
Assert !buf_pool.contains_zip().
buf_buddy_alloc_from(): Remove the redundant parameter j.
buf_flush_LRU_list_batch(): Add the parameter to_withdraw
to keep track of buf_pool.n_blocks_to_withdraw.
buf_do_LRU_batch(): Skip buf_free_from_unzip_LRU_list_batch()
if we are shrinking the buffer pool. In that case, we want
to minimize the page relocations and just finish as quickly
as possible.
trx_purge_attach_undo_recs(): Limit purge_sys.n_pages_handled()
in every iteration, in case the buffer pool is being shrunk
in the middle of a purge batch.
Reviewed by: Debarun Banerjee
Although the `my_thread_id` type is 64 bits, binlog format specs
limit it to 32 bits in practice. (See also: MDEV-35706)
The writable SQL variable `pseudo_thread_id` didn't account for this
and had a range of `ULONGLONG_MAX` (at least `UINT64_MAX` in C/C++).
It consequently accepted larger values silently, but only their lower
32 bits were binlogged; this could lead to inconsistency.
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
The problem was that get_collation_number_internal() loops over all
collations to find a collation by name. Looking up the
utf8mb4_0900_ aliases used 22633 string comparisons at startup.
Fixed by adding the MariaDB internal collation number to the "0900" alias
lookup array (see the sketch below). This is fine as collation numbers
never change.
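An illustrative sketch of the idea; the struct name, field names and the
sample entries are assumptions, not the exact table in the tree:

  /* Storing the MariaDB collation number in the alias entry lets the alias
     be resolved directly, without scanning all collation names. */
  struct mysql_0900_alias_sketch
  {
    const char *mysql_name;    /* e.g. "utf8mb4_sv_0900_ai_ci"           */
    const char *mariadb_name;  /* e.g. "utf8mb4_uca1400_swedish_ai_ci"   */
    unsigned    mariadb_id;    /* MariaDB collation number; safe to
                                  hard-code because numbers never change */
  };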
Discussed-with: serg@mariadb.com
- Needless engaged_ removed;
- SCOPE_VALUE, SCOPE_SET, SCOPE_CLEAR macros for neater declaration;
- IF_CLASS / IF_NOT_CLASS SFINAE checkers to pass arg by value or
reference;
- inline keyword;
- couple of refactorings of temporary free_list.
Example:
{
  auto _= make_scope_value(var, tmp_value);
}
make_scope_value(): a function which returns an RAII object that
temporarily changes the value of a variable
detail::Scope_value: the actual implementation of this RAII class.
It shouldn't be used directly! That's why it's inside the namespace detail.
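A hedged sketch of what such a helper can look like; the real
detail::Scope_value, the SCOPE_VALUE/SCOPE_SET/SCOPE_CLEAR macros and the
SFINAE checkers are not reproduced here:

  #include <utility>

  namespace detail
  {
    template <typename T>
    class Scope_value
    {
      T &var_;
      T old_;
    public:
      Scope_value(T &var, T tmp)
        : var_(var), old_(std::move(var))
      { var_= std::move(tmp); }
      ~Scope_value() { var_= std::move(old_); }
    };
  }

  /* Returns an RAII object that restores var when it goes out of scope,
     matching the usage in the Example block above. */
  template <typename T>
  detail::Scope_value<T> make_scope_value(T &var, T tmp)
  { return detail::Scope_value<T>(var, std::move(tmp)); }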
The fix in MDEV-34825/#3484/dff354e7df2f originally just took the
FreeBSD-carried inline ASM patch, as that environment didn't have the
__ppc_get_timebase function.
What clang does have is __builtin_ppc_get_timebase, which was
substituted in the same commit; that was the fix taken from Alpine.
To reduce complexity, we only need one working function rather
than an equivalent asm implementation.
Noted by Marko, thanks!
MY_RELAX_CPU(): On GCC and compatible compilers (including clang and
its derivatives), let us use a null inline assembler block as the
fallback. This should benefit s390x and LoongArch, for example.
Also, let us remove the generic fallback block that does exactly the
opposite of what this function aims to achieve: avoid hogging the
memory bus so that other threads will have a chance to let our spin
loop proceed.
On RISC-V, we will use __builtin_riscv_pause() which is a valid
instruction encoding in all ISA versions according to
https://gcc.gnu.org/pipermail/gcc-patches/2021-January/562936.html
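A sketch of these fallbacks in the shape of a MY_RELAX_CPU()-style helper;
the function name and the exact conditional structure in the source differ:

  static inline void relax_cpu_sketch(void)
  {
  #if defined __riscv
    __builtin_riscv_pause();        /* a valid encoding on all RISC-V ISAs */
  #elif defined __GNUC__
    /* Null inline assembler block: a compiler barrier only, so the spin
       loop does not get optimized into a memory-bus-hogging tight loop. */
    __asm__ __volatile__ ("" ::: "memory");
  #endif
  }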
storage/maria/ma_open.c:352:7: runtime error: call to function debug_sync(THD*, char const*, unsigned long)
through pointer to incorrect function type 'void (*)(void *, const char *, unsigned long)'
The THD argument is a void *. Because MyISAM consists of .c files, the
function prototype is mismatched.
As Marko pointed out, MYSQL_THD is declared as void * in C.
Thanks to Jimmy Hú for noting that struct THD is the equivalent in C of
the class THD. The C NULL was also different from the C++ nullptr.
Corrected the definitions of MYSQL_THD and DEBUG_SYNC_C to be C and C++
compatible.
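A hedged sketch of the C/C++-compatible shape of MYSQL_THD; the actual
declarations in the headers may differ in detail:

  #ifdef __cplusplus
  class THD;
  typedef THD *MYSQL_THD;          /* C++ sees the real class */
  #else
  struct THD;                      /* in C, struct THD names the same type */
  typedef struct THD *MYSQL_THD;
  #endif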
This commit updates default memory allocations size used with MEM_ROOT
objects to minimize the number of calls to malloc().
Changes:
- Updated MEM_ROOT block sizes in sql_const.h
- Updated MALLOC_OVERHEAD to also take into account the extra memory
allocated by my_malloc()
- Updated init_alloc_root() to only take MALLOC_OVERHEAD into account as
buffer size, not MALLOC_OVERHEAD + sizeof(USED_MEM).
- Reset mem_root->first_block_usage if and only if first block was used.
- Increase MEM_ROOT buffer sizes used by my_load_defaults, plugin_init,
Create_tmp_table, allocate_table_share, TABLE and TABLE_SHARE.
This decreases number of malloc calls during queries.
- Use a small buffer for THD->main_mem_root in THD::THD. This avoids
multiple malloc() call for new connections.
I tried the above changes on a complex select query with 12 tables.
The following shows the number of extra allocations that were used
to increase the size of the MEM_ROOT buffers.
Original code:
- Connection to MariaDB: 9 allocations
- First query run: 146 allocations
- Second query run: 24 allocations
Max memory allocated for thd when using a heap table: 61,262,408
Max memory allocated for thd when using an Aria tmp table: 419,464
After changes:
- Connection to MariaDB: 0 allocations
- First run: 25 allocations
- Second run: 7 allocations
Max memory allocated for thd when using a heap table: 61,347,424
Max memory allocated for thd when using an Aria tmp table: 529,168
The new code uses slightly more memory, but avoids memory fragmentation
and is slightly faster thanks to far fewer calls to malloc().
Reviewed-by: Sergei Golubchik <serg@mariadb.org>
Heap tables are allocated blocks to store rows according to
my_default_record_cache (mapped to the server global variable
read_buffer_size).
This causes performance issues when the record length is big
(> 1000 bytes) and the my_default_record_cache is small.
Changed to instead split the default heap allocation to 1/16 of the
allowed space and not use my_default_record_cache anymore when creating
the heap. The allocation is also aligned to be just under a power of 2.
For a test that I have been running with record length=633,
the query speed doubled thanks to this change.
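A rough illustration of the sizing idea described above; the constant and the
formula are illustrative, not the exact heap engine code:

  #include <stddef.h>

  #define MALLOC_OVERHEAD_SKETCH 32       /* assumed bookkeeping overhead */

  static size_t heap_block_size_sketch(size_t max_table_size)
  {
    size_t target= max_table_size / 16;   /* 1/16 of the allowed space */
    size_t power= 1024;
    while (power * 2 <= target)
      power*= 2;                          /* largest power of two <= target */
    /* Stay just under the power of two so that the allocation plus
       malloc() overhead does not spill into the next size class. */
    return power - MALLOC_OVERHEAD_SKETCH;
  }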
Other things:
- Fixed calculation of max_records passed to hp_create() to take
into account padding between records.
- Updated calculation of memory needed by heap tables. Before we
did not take into account internal structures needed to access rows.
- Changed block size for memory_table from 1 to 16384 to get less
fragmentation. This also avoids a problem where we need 1K
to manage index and row storage, which was not accounted for before.
- Moved heap memory usage to a separate test for 32 bit.
- Allocate all data blocks in heap in powers of 2. Change reported
memory usage for heap to reflect this.
Reviewed-by: Sergei Golubchik <serg@mariadb.org>
This is done by mapping most of the existing MySQL unicode 0900 collations
to MariaDB 1400 unicode collations. The assumption is that 1400 is a
superset of 0900 for all practical purposes.
I also added a new function 'compare_collations()' and changed most code
to use this instead of comparing character sets directly.
This enables one to seamlessly mix-and-match the corresponding 0900 and
1400 sets. Field comparison and ALTER TABLE treat the character sets
as identical.
All MySQL 8.0 0900 collations are supported except:
- utf8mb4_ja_0900_as_cs
- utf8mb4_ja_0900_as_cs_ks
- utf8mb4_ru_0900_as_cs
- utf8mb4_zh_0900_as_cs
These do not have corresponding entries in the MariaDB 1400 collations.
Other things:
- Added a COMMENT column to information_schema.collations. For utf8mb4_0900
collations it contains the corresponding alias collation.
Under unknown circumstances, the SQL layer may wrongly disregard an
invocation of thd_mark_transaction_to_rollback() when an InnoDB
transaction had been aborted (rolled back) due to one of the following errors:
* HA_ERR_LOCK_DEADLOCK
* HA_ERR_RECORD_CHANGED (if innodb_snapshot_isolation=ON)
* HA_ERR_LOCK_WAIT_TIMEOUT (if innodb_rollback_on_timeout=ON)
Such an error used to cause a crash of InnoDB during transaction commit.
These changes aim to catch and report the error earlier, so that not only
this crash can be avoided but also the original root cause be found and
fixed more easily later.
The idea of this fix is from Michael 'Monty' Widenius.
HA_ERR_ROLLBACK: A new error code that will be translated into
ER_ROLLBACK_ONLY, signalling that the current transaction
has been aborted and the only allowed action is ROLLBACK.
trx_t::state: Add TRX_STATE_ABORTED that is like
TRX_STATE_NOT_STARTED, but noting that the transaction had been
rolled back and aborted.
trx_t::is_started(): Replaces trx_is_started().
ha_innobase: Check the transaction state in various places.
Simplify the logic around SAVEPOINT.
ha_innobase::is_valid_trx(): Replaces ha_innobase::is_read_only().
The InnoDB logic around transaction savepoints, commit, and rollback
was unnecessarily complex and might have contributed to this
inconsistency. So, we are simplifying that logic as well.
trx_savept_t: Replace with const undo_no_t*. When we rollback to
a savepoint, all we need to know is the number of undo log records
that must survive.
trx_named_savept_t, DB_NO_SAVEPOINT: Remove. We can store undo_no_t
directly in the space allocated at innobase_hton->savepoint_offset.
fts_trx_create(): Do not copy previous savepoints.
fts_savepoint_rollback(): If a savepoint was not found, roll back
everything after the default savepoint of fts_trx_create().
The test innodb_fts.savepoint is extended to cover this code.
Reviewed by: Vladislav Lesin
Tested by: Matthias Leich
os_innodb_umask was of the incorrect type resulting in warnings
in clang-19. The correct type is mode_t.
As os_innodb_umask was set during innodb_init from my_umask,
corrected the type there along with its companion my_umask_dir.
Because of this, the default umask values in InnoDB never
had an effect.
The resulting change exposed signedness differences in
my_create{,_nosymlink} and open_nosymlinks:
mysys/my_create.c:47:20: error: operand of ?: changes signedness from ‘int’ to ‘mode_t’ {aka ‘unsigned int’} due to unsignedness of other operand [-Werror=sign-compare]
47 | CreateFlags ? CreateFlags : my_umask);
Ref: clang-19 warnings:
[55/123] Building CXX object storage/innobase/CMakeFiles/innobase.dir/os/os0file.cc.o
storage/innobase/os/os0file.cc:1075:46: warning: implicit conversion loses integer precision: 'ulint' (aka 'unsigned long') to 'mode_t' (aka 'unsigned int') [-Wshorten-64-to-32]
1075 | file = open(name, create_flag | O_CLOEXEC, os_innodb_umask);
| ~~~~ ^~~~~~~~~~~~~~~
storage/innobase/os/os0file.cc:1249:46: warning: implicit conversion loses integer precision: 'ulint' (aka 'unsigned long') to 'mode_t' (aka 'unsigned int') [-Wshorten-64-to-32]
1249 | file = open(name, create_flag | O_CLOEXEC, os_innodb_umask);
| ~~~~ ^~~~~~~~~~~~~~~
storage/innobase/os/os0file.cc:1381:45: warning: implicit conversion loses integer precision: 'ulint' (aka 'unsigned long') to 'mode_t' (aka 'unsigned int') [-Wshorten-64-to-32]
1381 | file = open(name, create_flag | O_CLOEXEC, os_innodb_umask);
| ~~~~ ^~~~~~~~~~~~~~~
RISC-V and Clang produce rdcycle for __builtin_readcyclecounter.
Since Linux kernel 6.6 this is a privileged instruction not available
to userspace programs.
The use of __builtin_readcyclecounter is excluded on RISC-V, falling
back to the rdtime/rdtimeh instructions provided in MDEV-33435.
Thanks to Alexander Richardson for noting that the exclusion should be
Linux-only in the code, and that FreeBSD RISC-V permits rdcycle.
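For illustration, an unprivileged RV64 timer read using the rdtime
instruction; this is a sketch with a hypothetical function name, and the
actual MDEV-33435 fallback also handles 32-bit RISC-V with rdtime/rdtimeh:

  #include <stdint.h>

  static inline uint64_t rdtime_sketch(void)
  {
  #if defined __riscv && __riscv_xlen == 64
    uint64_t t;
    __asm__ __volatile__ ("rdtime %0" : "=r" (t));
    return t;
  #else
    return 0;   /* RV32 needs rdtime/rdtimeh read twice for consistency */
  #endif
  }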
Author: BINSZ on JIRA