The --skip-write-binlog help text was confusing: it suggested the
option only had an effect if galera was enabled. There are uses
beyond galera, so we now apply SET SESSION SQL_LOG_BIN=0, as implied
by the option, without making it conditional on the wsrep status.
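A minimal sketch of the effect, assuming the standard mysql time zone
tables; this is illustrative, not the literal tool output:

    -- With --skip-write-binlog the generated script now always starts by
    -- disabling binary logging for the session, regardless of wsrep status:
    SET SESSION SQL_LOG_BIN=0;
    -- ... followed by the usual time zone TRUNCATE/INSERT statements ...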
Remove the wsrep.mysql_tzinfo_to_sql_symlink{,_skip} tests, as they
offered no additional coverage beyond main.mysql_tzinfo_to_sql_symlink
since no server testing was done.
Introduced a variant of galera.mariadb_tzinfo_to_sql as
galera.mysql_tzinfo_to_sql, which tests using the mysql client
rather than importing directly into the server via mysqltest.
Update the man page and mysql_tzinfo_to_sql itself to document the
--skip-write-binlog option.
merge notes:
10.4:
- conflicts in tztime.cc can be resolved by taking this version of the --help text.
- tztime.cc - merge execute immediate @prep1, and leave %s%s trunc_tables, lock_tables
after that.
10.6:
- Need to remove the not_embedded.inc in mysql_tzinfo_to_sql.test and
replace it with no_protocol.inc
- leave both mysql_tzinfo_to_sql.test and mariadb_tzinfo_to_sql.sql
tests.
- sql/tztime.cc - keep entirely 10.6 version.
The --skip-write-binlog option added to mysql_tzinfo_to_sql in
MDEV-18778 was only effective if galera was enabled on the server.
This is because it tied together three concepts under one option:
1. binary logging,
2. wsrep replication, and
3. using InnoDB as a transitional table type.
Change 1: a small change to the help text to reflect this.
To solve the performance problem with Aria tables, LOCK TABLES WRITE
is used to eliminate the need to fdatasync until the UNLOCK TABLES.
If galera isn't enabled, then we also want to use the LOCK TABLES
WRITE mechanism.
The START TRANSACTION added in MDEV-23440 needed to be moved to
before LOCK TABLES, otherwise it would cancel their effect.
The TRUNCATE TABLE statements also need to come before the LOCK TABLES.
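A minimal sketch of the resulting statement order, assuming the
standard mysql.time_zone* tables (illustrative only, not the literal
generated output):

    TRUNCATE TABLE time_zone;
    TRUNCATE TABLE time_zone_name;
    TRUNCATE TABLE time_zone_transition;
    TRUNCATE TABLE time_zone_transition_type;
    START TRANSACTION;
    LOCK TABLES time_zone WRITE,
                time_zone_name WRITE,
                time_zone_transition WRITE,
                time_zone_transition_type WRITE;
    -- ... generated INSERT statements for each zone file ...
    COMMIT;
    UNLOCK TABLES;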
When changing back from InnoDB to Aria, include the ORDER BY that
was originally there in 6aaccbcbf7, matching the final ALTER TABLE
in the timezonedir branch.
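For illustration, a hedged sketch of the kind of final statement meant
here (the ORDER BY columns are assumed from the tables' primary keys;
the exact statements in the tool may differ):

    ALTER TABLE time_zone_transition
      ENGINE=Aria, ORDER BY Time_zone_id, Transition_time;
    ALTER TABLE time_zone_transition_type
      ENGINE=Aria, ORDER BY Time_zone_id, Transition_type_id;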
Running mariadb-tzinfo-to-sql --skip-write-binlog /usr/share/zoneinfo
now generates 16 Aria_transaction_log_syncs, down from 7053.
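One hedged way to observe the difference from a client (the counts
will vary with the zoneinfo data installed):

    SHOW GLOBAL STATUS LIKE 'Aria_transaction_log_syncs';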
Mutex order violation when wsrep bf thread kills a conflicting trx,
the stack is
wsrep_thd_LOCK()
wsrep_kill_victim()
lock_rec_other_has_conflicting()
lock_clust_rec_read_check_and_lock()
row_search_mvcc()
ha_innobase::index_read()
ha_innobase::rnd_pos()
handler::ha_rnd_pos()
handler::rnd_pos_by_record()
handler::ha_rnd_pos_by_record()
Rows_log_event::find_row()
Update_rows_log_event::do_exec_row()
Rows_log_event::do_apply_event()
Log_event::apply_event()
wsrep_apply_events()
and mutexes are taken in the order
lock_sys->mutex -> victim_trx->mutex -> victim_thread->LOCK_thd_data
When a normal KILL statement is executed, the stack is
innobase_kill_query()
kill_handlerton()
plugin_foreach_with_mask()
ha_kill_query()
THD::awake()
kill_one_thread()
and mutexes are
victim_thread->LOCK_thd_data -> lock_sys->mutex -> victim_trx->mutex
This patch is the plan D variant for fixing the potential mutex
locking order violation exercised by BF aborting and KILL command
execution.
In this approach, the KILL command is replicated as a TOI operation.
This guarantees total isolation for the KILL command execution
in the first node: there is no concurrent replication applying
and no concurrent DDL executing. Therefore there is no risk of
BF aborting happening in parallel with KILL command execution
either. Potential mutex deadlocks between the different mutex
access paths of KILL command execution and BF aborting therefore
cannot happen.
TOI replication is used, in this approach, purely as a means
to provide isolated KILL command execution in the first node.
The KILL command should not (and must not) be applied in secondary
nodes. In this patch, we make sure of this by skipping KILL
execution in secondary nodes, in the applying phase, where we
bail out if an applier thread is trying to execute a KILL command.
This is effective, but skipping the applying of the KILL command
could happen much earlier as well.
This also fixes unprotected calls to wsrep_thd_abort that will use
wsrep_abort_transaction; these are now protected by holding
THD::LOCK_thd_data while we abort the transaction.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
This patch is the plan D variant for fixing the potential mutex
locking order violation exercised by BF aborting and KILL command
execution.
In this approach, the KILL command is replicated as a TOI operation.
This guarantees total isolation for the KILL command execution
in the first node: there is no concurrent replication applying
and no concurrent DDL executing. Therefore there is no risk of
BF aborting happening in parallel with KILL command execution
either. Potential mutex deadlocks between the different mutex
access paths of KILL command execution and BF aborting therefore
cannot happen.
TOI replication is used, in this approach, purely as a means
to provide isolated KILL command execution in the first node.
The KILL command should not (and must not) be applied in secondary
nodes. In this patch, we make sure of this by skipping KILL
execution in secondary nodes, in the applying phase, where we
bail out if an applier thread is trying to execute a KILL command.
This is effective, but skipping the applying of the KILL command
could happen much earlier as well.
This patch also fixes the mutex locking order and unprotected THD
member accesses in the BF aborting case. We try to hold
THD::LOCK_thd_data during BF aborting. The only case where this is
not possible is in wsrep_abort_transaction before the call to
wsrep_innobase_kill_one_trx, where we take the InnoDB mutexes first
and then THD::LOCK_thd_data.
This also fixes a possible race condition during close_connection
and while wsrep is disconnecting connections.
Added the wsrep_bf_kill_debug test case.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
galera_var_wsrep_on_off : Add wait conditions to make sure DDL is
replicated before continuing.
wsrep.[variables|variables_debug] : Remove unnecessary parts and add
a check for the correct number of variables, or skip.
galera_ssl_reload : Add version check and SSL checks.
* Disallow setting wsrep_on = 1 if wsrep_provider is unset. Also, move
wsrep_on_basic from the sys_vars suite to the wsrep suite: this test
now requires wsrep_provider to be set (see the sketch after this list).
* Disallow setting @@session.wsrep_on = 1 when @@global.wsrep_on = 0.
* Handle the case where a new connection turns @@global.wsrep_on from
off to on. In this case we would miss a call to wsrep_open, causing
unexpected states in wsrep::client_state (causing assertions).
* Disable wsrep.MDEV-22443 because it is no longer possible to enable
wsrep_on if the server is started with wsrep_provider='none'.
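A hedged sketch of the tightened behaviour (the exact error messages
may differ):

    -- With wsrep_provider unset, enabling wsrep_on is rejected:
    SET GLOBAL wsrep_on = 1;    -- now fails
    -- With @@global.wsrep_on = 0, the session scope is rejected too:
    SET SESSION wsrep_on = 1;   -- now fails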
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
There were multiple problems here (see the sketch after this list):
* wsrep_trx_fragment_size should not be settable when wsrep is disabled or the provider is not loaded
* wsrep_trx_fragment_unit should not be settable when wsrep is disabled or the provider is not loaded
* wsrep_debug has no effect if wsrep is disabled or the provider is not loaded
* wsrep_start_position should not be settable to any value other than the default when wsrep is disabled or the provider is not loaded
* wsrep_start_position should be changed only when we are a joiner or initialized
* wsrep_start_position should only accept a value that exists, thus we need to add error handling to wsrep_sst_complete
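A hedged illustration of the restrictions (values are examples only;
the exact errors may differ):

    -- With wsrep disabled or no provider loaded, these are now rejected:
    SET GLOBAL wsrep_trx_fragment_size = 10000;
    SET GLOBAL wsrep_trx_fragment_unit = 'statements';
    -- wsrep_start_position only accepts a position that exists, and a
    -- non-default value is rejected when wsrep is off:
    SET GLOBAL wsrep_start_position = '12345678-1234-1234-1234-123456789abc:100';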
A wsrep transaction was started for EXECUTE IMMEDIATE, which
caused an assertion failure when the executed statement was
CREATE TABLE, which should be executed in TOI mode.
As a fix, don't start a wsrep transaction for EXECUTE IMMEDIATE,
letting the wsrep state logic be handled from inside the stored
procedure codepath.
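A minimal example of the failing pattern (table name is illustrative):

    -- This could previously hit the assertion, since CREATE TABLE must
    -- run in TOI mode rather than inside a wsrep transaction:
    EXECUTE IMMEDIATE 'CREATE TABLE t1 (a INT) ENGINE=InnoDB';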
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Change xarecover_handlerton so that transactions with WSREP-prefixed
XIDs are rolled back when Galera is disabled.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Some invalid wsrep_provider paths may be interpreted as a valid
directory. For example, '/invalid/libgalera_smm.so' with a UTF
character set is interpreted as '/', which is a valid directory.
An early check that wsrep_provider must not be a directory fixes
this.
Since MDEV-18778, the time zone tables are changed to InnoDB to
allow them to be replicated to other Galera nodes. Even without
Galera, the time zone tables could be declared as InnoDB.
With standalone InnoDB tables, mysql_tzinfo_to_sql takes
approximately 27 seconds to load; with the transactions enabled by
this patch, the load time is approximately 1.2 seconds.
While explicit checks for the engine of the time zone tables could
be done, or checks against !opt_skip_write_binlog, non-transactional
storage engines simply ignore the transactional state without even
a warning, so it is safe to enact this globally.
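A small hedged illustration of why this is safe (hypothetical table;
non-transactional engines apply changes immediately and simply ignore
the surrounding transaction):

    CREATE TABLE tz_demo (a INT) ENGINE=Aria;
    START TRANSACTION;
    INSERT INTO tz_demo VALUES (1);
    COMMIT;   -- no warning; for Aria the row was already written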
Leap seconds are largely ignored as they are a single INSERT
statement, and they have gone out of favour since they have caused
MariaDB stalls in the past.