ha_innobase::statistics_init(), ha_innobase::info_low():
Correctly handle a DB_READ_ONLY return value from dict_stats_save().
Fixes up commit 6e6a1b316ca8df5116613fbe4ca2dc37b3c73bd1 (MDEV-35000)
Fixed a compilation error caused by MDEV-27861 that
occurs when building with cmake -DPLUGIN_PARTITION=NO,
because the default_part_plugin variable is only
visible when WITH_PARTITION_STORAGE_ENGINE is defined.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Add a check for the MARIADB_GROUP_SUFFIX environment variable when the
--defaults-group-suffix argument is not passed. This environment variable
will take precedence over the MYSQL_GROUP_SUFFIX environment variable if
both are set.
All new code of the whole pull request, including one or several files that are
either new files or modified ones, are contributed under the BSD-new license. I
am contributing on behalf of my employer Amazon Web Services, Inc.
MDEV-34171 stopped removing indirect routines/tables after
recover_from_failed_open() for the auto-create partition case. Now we
go further and keep them for any failed table reopen.
MDEV-34171 did not handle open_and_process_routine() correctly after
skipping sp_remove_not_own_routines(). This is now fixed by correct
usage of sroutine_to_open.
An attempt to run a cursor after a change in the metadata of tables it
depends on resulted in firing an assertion on allocating memory from a
memory root marked as read-only.
On every execution of a cursor its Query_arena is set up as a statement
arena (see sp_lex_keeper::cursor_reset_lex_and_exec_core()).
As a consequence, any memory allocations that happen on execution of
the cursor's query should be taken from the cursor's memory root.
The reason for allocating memory from a memory root marked as read-only
is that the cursor's memory root points to the memory root of the
sp_head, which may already be marked as read-only after the first
successful execution, and this relation isn't changed on re-parsing of
the cursor's query.
To fix the issue, the cursor's memory root is adjusted in the method
sp_lex_instr::parse_expr() to point to the new memory root just
created for re-parsing of the failed query.
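For illustration only, a minimal sketch of that re-pointing, using
hypothetical stand-in types (the real code works with sp_head,
sp_lex_instr and the server's MEM_ROOT):
```c++
// Hypothetical stand-ins; not the actual server types.
struct MemRoot { bool read_only= false; };
struct Cursor  { MemRoot *mem_root= nullptr; };

// On re-parsing a failed cursor query, point the cursor's memory root at the
// freshly created root instead of the sp_head root, which may already be
// read-only after the first successful execution.
void repoint_cursor_mem_root(Cursor &cur, MemRoot &reparse_root)
{
  cur.mem_root= &reparse_root;  // subsequent allocations use the writable root
}
```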
Item_func_sp::execute() was called two times per row in this scenario:
SELECT ROW(f1(),1) = ROW(1,1), @counter FROM seq_1_to_5;
- the first time from Item_func_sp::bring_value()
- the second time from Item_func_sp::val_int()
Fix:
Changing Item_func_sp::bring_value() to call execute() only
when the result type is ROW_RESULT.
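A minimal sketch of that change, with a simplified stand-in for the Item
hierarchy (not the actual server classes):
```c++
// Simplified stand-in for the Item class hierarchy.
enum Item_result { INT_RESULT, ROW_RESULT };

struct Item_func_sp_like
{
  Item_result type= INT_RESULT;

  Item_result result_type() const { return type; }
  void execute() { /* run the stored function and cache its result */ }

  // Before the fix, bring_value() executed unconditionally, so a scalar
  // result was evaluated here and then again in val_int()/val_str().
  void bring_value()
  {
    if (result_type() == ROW_RESULT)
      execute();
  }
};
```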
- This issue was caused by commit a032f14b342c782b82dfcd9235805bee446e6fe8 (MDEV-33559).
In MDEV-33559, matched_rec::block was changed to a pointer
and assigned with the help of buf_block_alloc(). But the patch
failed to check that the block can be nullptr in
rtr_check_discard_page().
rtr_cur_search_with_match(): Acquire rtr_match_mutex before
creating a shadow block for the matched records.
rtr_pcur_move_to_next(): Copy the shadow block to the page cursor
block under rtr_match_mutex.
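For illustration only, the shape of the missing nullptr check (simplified
types, not the actual InnoDB code):
```c++
// Simplified stand-ins for buf_block_t and matched_rec.
struct buf_block_t { /* buffer pool block */ };
struct matched_rec { buf_block_t *block= nullptr; };

// rtr_check_discard_page()-style handling: the shadow block may never have
// been allocated by buf_block_alloc(), so bail out before dereferencing it.
void check_discard_page(const matched_rec *match)
{
  if (match == nullptr || match->block == nullptr)
    return;                     // nothing to discard
  // ... compare the discarded page against match->block ...
}
```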
mariadb-slap crashes with a floating point exception when run with
--iterations=0, due to division by zero in generate_stats(). The division
happens when calculating the average timing by dividing by the number of
iterations.
The fix checks whether iterations == 0 before performing the division and
returns early from the function in that case.
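A minimal sketch of the guard, assuming a simplified stats structure (the
real generate_stats() works on mysqlslap's own bookkeeping types):
```c++
// Hypothetical simplified stats; the real code aggregates per-query timings.
struct stats
{
  unsigned long long total_time= 0;   // summed wall-clock time
  unsigned long long avg_time= 0;     // average per iteration
};

void generate_stats(stats *s, unsigned long long iterations)
{
  if (iterations == 0)
    return;                           // avoid SIGFPE from division by zero
  s->avg_time= s->total_time / iterations;
}
```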
An attempt to create a procedure with the DEFINER clause resulted in
abnormal server termination in case the server was run with the option
--skip-grant-tables=1.
The reason for the abnormal termination is that on handling of the DEFINER
clause, uninitialized data members of acl_cache are accessed, which led
to a server crash.
The behaviour of the server for the considered use case must be the same
as for the embedded server. That means, if the security subsystem wasn't
initialized (the server is started with the option --skip-grant-tables=1),
return success from get_current_user() without further access to the
acl_cache, which is obviously not initialized.
Additionally, AUTHID::is_role was modified to handle the case when
the host part of the user name isn't provided. Treat this case as if
an empty host name were provided.
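A hedged sketch of the early return (simplified; the real get_current_user()
deals with THD, LEX_USER and more cases):
```c++
// Simplified stand-in for the definer descriptor.
struct LEX_USER_like { const char *user; const char *host; };

bool grant_tables_initialized= false;   // false under --skip-grant-tables=1

// When the privilege subsystem is not initialized, succeed without touching
// acl_cache at all, mirroring the embedded-server behaviour.
LEX_USER_like *get_current_user_sketch(LEX_USER_like *definer)
{
  if (!grant_tables_initialized)
    return definer;                     // success, no acl_cache access
  // ... normal path: consult acl_cache to resolve/validate the definer ...
  return definer;
}
```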
The fix for MDEV-34413 added support for Index Condition Pushdown with reverse
ordered scans. This makes Rowid filtering work with reverse-ordered scans, too,
so enable it. For example, InnoDB can now check the pushed index condition and
then check the rowid filter on success, in the ORDER BY ... DESC case.
On WSREP state transfer the status in service manager changes to:
WSREP state transfer (role ...) ongoing...
This status was not changed after the state transfer was complete. Let's
change it again to reflect the new situation:
WSREP state transfer (role ...) completed.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Use reclength, because rec_buff_length is the actual reclength plus
padding; using the latter could cause an ASAN unknown-crash, presumably
caused by a memory violation.
Allows index condition pushdown for reverse ordered scans, a feature
previously disabled due to poor performance. This patch adds a new
API to the handler class called set_end_range which allows callers to
tell the handler what the end of the index range will be when scanning.
Combined with a pushed index condition, the handler can scan the index
efficiently and not read beyond the end of the given range. When
checking if the pushed index condition matches, the handler will also
check if scanning has reached the end of the provided range and stop if
so.
If we instead only enabled ICP for reverse ordered scans without
also calling this new API, then the handler would perform unnecessary
index condition checks. In fact this would continue until the end of
the index is reached.
These changes are agnostic of storage engine. That is, any storage
engine that supports index condition pushdown will inherit this new
behavior as it is implemented in the SQL and storage engine
API layers.
The partitioned tables storage meta-engine (ha_partition) adds an
override of set_end_range which recursively calls set_end_range on its
child storage engine (handler) implementations.
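For illustration, a hedged sketch of that forwarding, with simplified
stand-ins for the handler hierarchy and key_range (not the real
set_end_range signature):
```c++
#include <vector>

struct key_range_like { /* key image, length, comparison flag */ };

struct handler_like
{
  virtual ~handler_like() = default;
  // Tell the handler where the current index range ends, so the pushed index
  // condition check can also stop the scan at the range boundary.
  virtual void set_end_range(const key_range_like *end, bool descending) {}
};

struct ha_partition_like : handler_like
{
  std::vector<handler_like *> partitions;   // per-partition child handlers

  // Forward the end-of-range information to every child handler.
  void set_end_range(const key_range_like *end, bool descending) override
  {
    for (handler_like *child : partitions)
      child->set_end_range(end, descending);
  }
};
```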
This commit updates the test made in an earlier commit to show that
ICP matches happen for the reverse ordered case.
This patch is based on changes written by Olav Sandstaa in
MySQL commit da1d92fd46071cd86de61058b6ea39fd9affcd87
Adds tests which show that ICP was not enabled for reverse-ordered scans
prior to this MDEV. A later commit for this same MDEV re-records
these same tests, showing that ICP for reverse-ordered scans is
enabled and working.
sp_head::execute_procedure() and sp_head::execute_function() did not
check whether an Item_param was being passed as an actual parameter to a
ROW type formal parameter of a stored routine. Example:
CREATE PROCEDURE p0(OUT a ROW(a INT,b INT)) ...;
PREPARE s0 FROM 'CALL p0(?)';
EXECUTE s0 USING @a;
In case of passing a user variable as an OUT parameter it led to
a crash after executing routine instructions, when copying formal
OUT parameters to the bound actual parameters.
Fix:
Check cases when Item_param is being bound to a ROW type formal parameter.
Raise an error if so. The new check is done for all parameter modes:
IN, OUT, INOUT, for a consistent error message.
The new check is done before executing the routine instructions.
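A minimal sketch of the shape of that check (hypothetical names; the real
check lives in sp_head's parameter binding code):
```c++
// Simplified stand-ins for the item hierarchy and a formal parameter.
struct Item_like
{
  virtual ~Item_like() = default;
  virtual bool is_param() const { return false; }
};
struct Item_param_like : Item_like
{
  bool is_param() const override { return true; }
};

enum class Param_type { SCALAR, ROW };
struct Formal_param { Param_type type; };

// Return true (error) when a '?' placeholder is bound to a ROW formal
// parameter, for any of IN/OUT/INOUT, before any instruction is executed.
bool check_actual_parameter(const Item_like *actual, const Formal_param &formal)
{
  if (formal.type == Param_type::ROW && actual->is_param())
  {
    // my_error(...): report a consistent error message here
    return true;
  }
  return false;
}
```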
Add MTR tests in the `multi_source` suite to validate that future changes
do not affect the function of these two `mariadbd` options:
```
master_info_file
rpl_show_slave_auth_info
```
Reviewed-by: Susil Kumar Behera <susil.behera@mariadb.com>
Let us integrate the test case with innodb.page_cleaner so that there
will be less interference from log writes due to checkpoints.
Also, make the test compatible with ./mtr --cursor-protocol.
The syntax error produced on running the test engines/iuds.insert_time
in PS-mode was caused by the presence of a C-style comment containing
a single quote at the end of an SQL statement, something like
the following one:
/* doesn't throw error */;
The single quote was interpreted by the mysqltest utility as the start
of a real string literal, which resulted in consuming every character
following that mark as part of the literal, including the delimiter
character ';'. That led to concatenation of lines into a single
multi-statement that was sent from mysqltest to the MariaDB server for
processing. In case mysqltest is run in regular mode (that is,
not PS-mode), the multi-statement is handled successfully on the server
side, but when PS-mode is on, the multi-statement is supplied in
COM_STMT_PREPARE, which causes a parsing error since multi-statements
are not supported by the statement prepare command.
To fix the issue, when mysqltest encounters a C-style comment,
it should switch to reading the following characters without any
processing until it hits the closing C-style comment mark '*/',
with one exception: if the sequence of characters '/*' is followed by
an exclamation mark, the hint introducer has been read.
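A hedged sketch of that skipping logic (standalone code, not the actual
mysqltest reader):
```c++
#include <cstddef>
#include <string>

// Advance 'pos' past a C-style comment starting at buf[pos] == '/', '*'.
// Returns false for "/*!" (the hint introducer), so the hint body still goes
// through normal quote and delimiter processing.
bool skip_c_style_comment(const std::string &buf, size_t &pos)
{
  if (pos + 1 >= buf.size() || buf[pos] != '/' || buf[pos + 1] != '*')
    return false;                                   // not a comment start
  if (pos + 2 < buf.size() && buf[pos + 2] == '!')
    return false;                                   // hint introducer "/*!"
  size_t end= buf.find("*/", pos + 2);
  pos= (end == std::string::npos) ? buf.size() : end + 2;
  return true;                                      // quotes inside are ignored
}
```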
Currently it is allowed to set innodb_io_capacity to a very large value,
up to the unsigned 8-byte maximum 18446744073709551615. While
calculating the number of pages to flush, we could sometimes go beyond
innodb_io_capacity. Specifically, MDEV-24369 introduced logic
for aggressive flushing when the dirty page percentage in the buffer pool
exceeds innodb_max_dirty_pages_pct. So, when innodb_io_capacity is
set to a very large value and the dirty page percentage exceeds the
threshold, there is a multiplication overflow in the InnoDB page cleaner.
Fix: We should prevent setting io_capacity to unrealistic values and
define a practical limit for it. The patch limits both
innodb_io_capacity_max and innodb_io_capacity to the maximum of a 4-byte
unsigned integer, i.e. 4294967295 (2^32-1). For a 16k page size this limit
translates to a 64 TiB/s write IO speed, which looks sufficient.
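A quick sanity check of that throughput claim, assuming 16 KiB pages
(standalone arithmetic only):
```c++
#include <cstdint>
#include <iostream>

int main()
{
  const uint64_t max_io_capacity= 4294967295ULL;   // new limit: 2^32-1 pages/s
  const uint64_t page_size= 16384;                 // 16 KiB InnoDB page
  const double   tib= 1024.0 * 1024 * 1024 * 1024;
  // (2^32-1) * 16384 bytes/s is just under 64 TiB/s.
  std::cout << (max_io_capacity * page_size) / tib << " TiB/s\n";
  return 0;
}
```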
Reviewed by: Marko Mäkelä
Per https://github.com/systemd/systemd/issues/36529, OOM counts
as an on-abnormal condition. To ensure that MariaDB restarts on
OOM, Restart= is changed to on-abnormal, which is an extension
of the current on-abort condition.
In SHOW SLAVE STATUS, do not access members of the SQL thread's THD without
holding mi->run_lock. Otherwise the THD can go away in case of concurrent
STOP SLAVE, leading to invalid memory references and server crash.
Reviewed-by: Monty <monty@mariadb.org>
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
This will make mysql-test-run.pl try to schedule these long-running
(> 60 seconds) tests early in --parallel runs, which helps avoid that
the testsuite gets stuck with a few long-running tests at the end
while most other test workers are idle.
This speeds up mtr --parallel=96 by 25 seconds for me.
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
- Suppress a couple of errors the slave can get as the master crashes.
- mysql-test-run occasionally takes 120 seconds between crashing
  the master and starting it back up, for some (unknown) reason. For
  now, work around that by letting the slave try for 500 seconds to
  connect to the master before giving up, instead of only 100 seconds.
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
ha_innobase::extra(): Conditionally avoid a log write that had been
added in commit e5b9dc15368c7597a70569048eebfc5e05e98ef4 (MDEV-25910)
because it may be invoked as part of select_insert::prepare_eof()
and not only during DDL operations.
Reviewed by: Sergei Golubchik
The test case set debug_sync=RESET without waiting for the server thread to
receive the prior signal. This can cause the signal to be lost, the thread to
not wake up, and thus the test to time out.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
wait_for_prior_commit() can be called multiple times per event group;
only do my_error() the first time the call fails.
Remove redundant set_overwrite_status() calls.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Reviewed-by: Monty <monty@mariadb.org>
Loose index scan currently only supports ASC keys. When searching for
the next MAX value, the search starts from the rightmost range and
moves towards the left. If the key has "moved" as the result of a
successful ha_index_read_map() call when handling a previous range, we
check if the key is to the left of the current range, and if so, we
can skip to the next range.
The existing check on whether the loop has iterated at least once is
not sufficient, as an unsuccessful ha_index_read_map() often (always?)
does not "move" the key.
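A hedged sketch of that idea (hypothetical names, not the actual
group-min-max code): track whether a previous read actually succeeded,
instead of merely whether the loop has iterated:
```c++
#include <cstddef>
#include <cstring>

// Hypothetical stand-ins for the key image and per-range bookkeeping.
struct Range_like { const unsigned char *min_key; };

// Stub standing in for ha_index_read_map(); returns true when a key was
// successfully read into key_buf.
static bool read_max_in_range(const Range_like &, unsigned char *) { return false; }

// Scan ranges right-to-left looking for the next MAX value.
void find_next_max(Range_like *ranges, int range_count,
                   unsigned char *key_buf, size_t key_len)
{
  bool key_moved= false;   // set only by a successful read, not by iterating
  for (int i= range_count - 1; i >= 0; i--)
  {
    // Skip this range if a previously read key already lies to its left.
    if (key_moved && memcmp(key_buf, ranges[i].min_key, key_len) < 0)
      continue;
    if (read_max_in_range(ranges[i], key_buf))
      key_moved= true;     // an unsuccessful read does not "move" the key
  }
}
```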
At the start of mariadb-backup --backup, trigger a flush of the
InnoDB buffer pool, so that as little log as possible will have
to be copied.
The previously debug-build-only interface
SET GLOBAL innodb_log_checkpoint_now=ON;
will be made available on all builds, and
mariadb-backup --backup will invoke it, unless the option
--skip-innodb-log-checkpoint-now is specified.
Reviewed by: Vladislav Vaintroub
- Remove the redundant check of TRX_SYS page change in
wf_incremental_process()
- Remove the double casting of srv_undo_tablespaces
in write_backup_config_file()
- Remove the unused variables like checkpoint_lsn_start
and checkpoint_no_start.
This is a regression caused by commit 1c55b845e0fe337e647ba230288ed13e966cb7c7.
If the connect engine is not able to allocate connect_work_space memory for
GetUser(), it will call free() twice with the same value (g).
g was freed first in user_connect::user_init(), which calls PlugExit() on
errors, and then again in ~user_connect(), which also calls PlugExit().
Fixed by setting g to 0 in user_init() after calling PlugExit().
This code was tested 'by hand' by setting connect.work_space=600G.
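A minimal sketch of the double-free avoidance (hypothetical stand-ins; the
real code uses PGLOBAL and PlugExit()):
```c++
#include <cstdlib>

// Stand-ins for the CONNECT global work area and its cleanup routine.
struct GLOBAL_like { void *work_space; };

static void PlugExit_like(GLOBAL_like *g)
{
  if (g)
  {
    free(g->work_space);
    free(g);
  }
}

struct user_connect_like
{
  GLOBAL_like *g= nullptr;

  bool user_init()
  {
    g= static_cast<GLOBAL_like *>(calloc(1, sizeof *g));   // zero-initialized
    // ... allocating connect_work_space fails (e.g. work_space=600G) ...
    PlugExit_like(g);
    g= nullptr;        // the fix: the destructor must not free g again
    return true;       // report the error to the caller
  }

  // Without the fix, this second PlugExit_like(g) caused a double free.
  ~user_connect_like() { PlugExit_like(g); }
};
```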
Other things:
- Removed some very old, no longer relevant comments in touched code
- Added comments to clarify how some memory was freed
- Fixed indentation in changed functions.