Problem:
========
Truncate operation holds MDL on the table (t1) and tries to
acquire InnoDB dict_operation_lock. Purge holds dict_operation_lock
and tries to acquire MDL on the table (t1) to evaluate virtual
column expressions for indexed virtual columns.
This leads to a deadlock between purge and TRUNCATE TABLE (DDL).
Solution:
=========
If purge tries to acquire MDL on the table then it should do the following:
i) Purge should release all innodb latches (including dict_operation_lock)
before acquiring metadata lock on the table.
ii) After acquiring metadata lock on the table, it should check whether the
table was dropped or renamed. If the table is dropped then purge should
ignore the undo log record. If the table is renamed then it should
release the old MDL and acquire MDL on the new name.
iii) Once purge acquires the MDL, it should use the SQL table handle for all
the remaining virtual indexes of the purge record.
purge_node_t: Introduce new virtual column information that records whether
the MDL was acquired successfully.
This is joint work with Marko Mäkelä.
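For illustration, here is a minimal self-contained C++ sketch of the
release-latches / acquire-MDL / re-check ordering described in steps (i)-(iii);
every name in it is a hypothetical stand-in, not the actual purge_node_t,
dict_operation_lock or MDL code:
{noformat}
// Minimal sketch of the ordering rule; all types and helpers are stand-ins.
#include <iostream>
#include <optional>
#include <string>

enum class table_state { OK, DROPPED, RENAMED };

struct latches_t {
  void release()   { std::cout << "release dict_operation_lock\n"; }
  void reacquire() { std::cout << "reacquire dict_operation_lock\n"; }
};

// Stand-ins for acquiring the MDL and for re-checking the table
// after the MDL has been granted.
static std::string acquire_mdl(const std::string& name) { return name; }
static table_state check_after_mdl(const std::string&, std::string* new_name)
{
  new_name->clear();            // pretend the table still exists unchanged
  return table_state::OK;
}

// Returns the MDL-protected table name to use for evaluating virtual-column
// expressions, or nothing if the undo log record should be ignored.
static std::optional<std::string>
purge_get_mdl(latches_t& latches, std::string name)
{
  // (i) never wait for an MDL while holding InnoDB latches: that is the
  //     lock-order inversion that deadlocks against TRUNCATE
  latches.release();

  for (;;) {
    std::string mdl = acquire_mdl(name);

    // (ii) the table may have been dropped or renamed while no locks were held
    std::string new_name;
    const table_state state = check_after_mdl(name, &new_name);

    if (state == table_state::DROPPED) {
      latches.reacquire();
      return std::nullopt;      // table is gone: ignore the undo log record
    }
    if (state == table_state::RENAMED) {
      name = new_name;          // old MDL released, retry under the new name
      continue;
    }

    // (iii) keep this MDL (and the SQL table handle opened under it) for the
    //       remaining virtual indexes of the purge record
    latches.reacquire();
    return mdl;
  }
}

int main()
{
  latches_t latches;
  if (auto mdl = purge_get_mdl(latches, "test/t1"))
    std::cout << "purging with MDL on " << *mdl << "\n";
}
{noformat}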
specific temporary errors
The optimistic parallel slave's worker thread could face a run-time error because
the optimistic scheduling algorithm allows conflicts, such as the reported
"Can't find record in 'table'".
A typical stack looks like this:
{noformat}
#0 handler::print_error (this=0x61c00008f8a0, error=149, errflag=0) at handler.cc:3650
#1 0x0000555555e95361 in write_record (thd=thd@entry=0x62a0000a2208, table=table@entry=0x61f00008ce88, info=info@entry=0x7fffdee356d0) at sql_insert.cc:1944
#2 0x0000555555ea7767 in mysql_insert (thd=thd@entry=0x62a0000a2208, table_list=0x61b00012ada0, fields=..., values_list=..., update_fields=..., update_values=..., duplic=<optimized out>, ignore=<optimized out>) at sql_insert.cc:1039
#3 0x0000555555efda90 in mysql_execute_command (thd=thd@entry=0x62a0000a2208) at sql_parse.cc:3927
#4 0x0000555555f0cc50 in mysql_parse (thd=0x62a0000a2208, rawbuf=<optimized out>, length=<optimized out>, parser_state=<optimized out>) at sql_parse.cc:7449
#5 0x00005555566d4444 in Query_log_event::do_apply_event (this=0x61200005b9c8, rgi=<optimized out>, query_arg=<optimized out>, q_len_arg=<optimized out>) at log_event.cc:4508
#6 0x00005555566d639e in Query_log_event::do_apply_event (this=<optimized out>, rgi=<optimized out>) at log_event.cc:4185
#7 0x0000555555d738cf in Log_event::apply_event (rgi=0x61d0001ea080, this=0x61200005b9c8) at log_event.h:1343
#8 apply_event_and_update_pos_apply (ev=ev@entry=0x61200005b9c8, thd=thd@entry=0x62a0000a2208, rgi=rgi@entry=0x61d0001ea080, reason=<optimized out>) at slave.cc:3479
#9 0x0000555555d8596b in apply_event_and_update_pos_for_parallel (ev=ev@entry=0x61200005b9c8, thd=thd@entry=0x62a0000a2208, rgi=rgi@entry=0x61d0001ea080) at slave.cc:3623
#10 0x00005555562aca83 in rpt_handle_event (qev=qev@entry=0x6190000fa088, rpt=rpt@entry=0x62200002bd68) at rpl_parallel.cc:50
#11 0x00005555562bd04e in handle_rpl_parallel_thread (arg=arg@entry=0x62200002bd68) at rpl_parallel.cc:1258
{noformat}
Here {{handler::print_error}} decides whether to write the current error to the
error log when --log-warnings > 1. The decision flag is consulted
by {{my_message_sql()}}, which may eventually be called.
In the bug case the decision is to log.
However, in the optimistic-mode slave applier case any conflict is
resolved by rolling back and retrying until success. Hence the
logging is at least extraneous.
The case is fixed by adding a new flag {{ME_LOG_AS_WARN}}, which
{{handler::print_error}} may propagate further through {{my_error}}
when the error comes from an optimistically running slave worker thread.
The new flag effectively requests the warning level for the error log record,
while the thread's DA records the actual error (which the parallel slave
error handler regards as a temporary one).
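As a rough, hedged illustration of the intended flag flow (only the name
{{ME_LOG_AS_WARN}} is taken from the fix; the types, signatures and the other
constant below are simplified stand-ins, not the server's actual
error-reporting API):
{noformat}
// Simplified model of a flag propagating from print_error to the errlog writer.
#include <cstdio>

using myf = unsigned;
constexpr myf ME_ERROR_LOG   = 1U << 0;  // "write this to the error log"
constexpr myf ME_LOG_AS_WARN = 1U << 1;  // new: use warning level in the error log

// Stand-in for the optimistic parallel slave worker check on the current THD.
static bool thd_is_optimistic_slave_worker() { return true; }

// Stand-in for my_message_sql(): consults the flags when writing the error log,
// while the thread's diagnostics area still records the real (temporary) error.
static void report(unsigned code, const char* msg, myf flags)
{
  if (flags & ME_ERROR_LOG)
    std::fprintf(stderr, "[%s] %u: %s\n",
                 (flags & ME_LOG_AS_WARN) ? "Warning" : "ERROR", code, msg);
}

// Stand-in for handler::print_error(): the worker's conflict will be rolled
// back and retried, so request only a warning-level error log record.
static void print_error(unsigned code, const char* msg)
{
  myf flags = ME_ERROR_LOG;
  if (thd_is_optimistic_slave_worker())
    flags |= ME_LOG_AS_WARN;
  report(code, msg, flags);
}

int main() { print_error(149, "simulated temporary conflict"); }
{noformat}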
The cause of this was several different bugs:
- When using binary logging with binlog_row_image=FULL,
  all bits in read_set were set, which caused a
  different (wrong) pattern for marking vcol_set.
- TABLE::mark_virtual_columns_for_write() didn't mark
  vcol_set with the vcol_field in all cases.
- TABLE::update_virtual_fields() has to update all
  vcol fields on REPLACE if binary logging with FULL
  is used.
- VCOL_UPDATE_INDEXED should update all vcol fields that are
  part of an index and were not updated by VCOL_UPDATE_FOR_READ.
- max_row_length() calculated the length of NULL and
  unused fields. This didn't cause any crash, but used
  more memory than needed.
These tests can sporadically show mutex deadlock warnings between the LOCK_wsrep_thd
and LOCK_thd_data mutexes. This means that these mutexes can be locked in opposite
order by different threads, and thus result in a deadlock situation.
To fix this issue, the locking policy of these mutexes should be revised and
enforced to be uniform. However, a quick code review shows that the number of
lock/unlock operations for these mutexes combined is between 100 and 200, and all
these mutex invocations would have to be checked/fixed.
On the other hand, it turns out that LOCK_wsrep_thd is used for protecting access to
wsrep variables of THD (wsrep_conflict_state, wsrep_query_state), whereas LOCK_thd_data
protects the query, db and mysys_var variables in THD. Extending LOCK_thd_data to also
protect the wsrep variables looks like a viable solution, as there should not be a use
case where separate threads need simultaneous access to wsrep variables and THD data
variables.
In this commit the LOCK_wsrep_thd mutex is refactored away and replaced by LOCK_thd_data.
Bluntly replacing LOCK_wsrep_thd with LOCK_thd_data would result in double locking
of LOCK_thd_data, so some adjustments have been made to fix such situations.
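For illustration, a stand-alone C++ sketch of the deadlock-prone ordering and
of the single-mutex arrangement after the refactoring (plain std::mutex and a
THD_like stand-in, not the real server classes):
{noformat}
// Two mutexes taken in opposite orders can deadlock; one mutex cannot.
#include <mutex>

struct THD_like {
  std::mutex LOCK_thd_data;   // protects query, db, mysys_var, ...
  std::mutex LOCK_wsrep_thd;  // protected wsrep_conflict_state, wsrep_query_state
  int wsrep_conflict_state = 0;
  int query_id = 0;
};

// Deadlock-prone: one thread takes LOCK_thd_data then LOCK_wsrep_thd, another
// takes them in the opposite order.
void thread_a(THD_like& thd)
{
  std::lock_guard<std::mutex> a(thd.LOCK_thd_data);
  std::lock_guard<std::mutex> b(thd.LOCK_wsrep_thd);
  thd.wsrep_conflict_state = 1;
}
void thread_b(THD_like& thd)
{
  std::lock_guard<std::mutex> b(thd.LOCK_wsrep_thd);
  std::lock_guard<std::mutex> a(thd.LOCK_thd_data);   // opposite order: may deadlock
  thd.query_id++;
}

// After the refactoring both kinds of state are protected by LOCK_thd_data
// alone, so there is only one lock and no ordering to violate; the remaining
// risk is locking it twice on the same code path, hence the adjustments.
void thread_fixed(THD_like& thd)
{
  std::lock_guard<std::mutex> g(thd.LOCK_thd_data);
  thd.wsrep_conflict_state = 1;
  thd.query_id++;
}

int main() { THD_like thd; thread_fixed(thd); }
{noformat}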
The counter for select numbering is now stored with the statement (previously it
was global). As a result it always has an accurate value that does not depend on
the interruption of statement prepare by errors such as a missing table in
a view definition.
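A tiny illustrative sketch (hypothetical Lex_like class, not the actual LEX) of
why a per-statement counter is immune to an interrupted prepare of some other
statement:
{noformat}
// Per-statement counter: numbering restarts cleanly for each statement.
#include <cassert>

struct Lex_like {
  unsigned select_number = 0;                 // stored with the statement
  unsigned next_select_number() { return ++select_number; }
};

int main()
{
  Lex_like stmt1, stmt2;
  stmt1.next_select_number();                 // prepare of stmt1 fails half-way
  assert(stmt2.next_select_number() == 1);    // stmt2 numbering is unaffected
}
{noformat}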
1. Moving the following methods from THD to Item_change_list:
nocheck_register_item_tree_change()
check_and_register_item_tree_change()
rollback_item_tree_changes()
as they work only with the "change_list" member and don't
require anything else from THD.
2. Deriving THD from Item_change_list
This change will make it easier to fix "MDEV-14603 signal 11 with short stacktrace".
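A simplified sketch of the resulting structure; the container and the method
bodies below are stand-ins for the real change-list machinery:
{noformat}
// The rollback machinery lives in Item_change_list; THD inherits it.
#include <utility>
#include <vector>

struct Item;                                   // opaque here

class Item_change_list
{
  std::vector<std::pair<Item**, Item*>> change_list;  // place + original item
public:
  void nocheck_register_item_tree_change(Item** place, Item* old_value)
  { change_list.emplace_back(place, old_value); }

  void check_and_register_item_tree_change(Item** place, Item** new_value)
  { if (*place != *new_value) nocheck_register_item_tree_change(place, *place); }

  void rollback_item_tree_changes()
  {
    for (auto it = change_list.rbegin(); it != change_list.rend(); ++it)
      *it->first = it->second;                 // restore the original item
    change_list.clear();
  }
};

class THD_like : public Item_change_list       // THD now inherits the list
{
  // ... the rest of THD; no change-list bookkeeping left here ...
};

int main()
{
  THD_like thd;
  Item* item = nullptr;
  thd.nocheck_register_item_tree_change(&item, nullptr);
  thd.rollback_item_tree_changes();
}
{noformat}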
trx_undo_page_report_modify(): For SPATIAL INDEX, keep logging
updated off-page columns twice, so that
the minimum bounding rectangle (MBR) will be logged.
Avoiding the redundant logging would require larger changes
to the undo log format.
row_build_index_entry_low(): Handle SPATIAL_UNKNOWN more robustly,
by refusing to purge the record from the spatial index.
We can encounter this status code when processing old undo log records from
10.2.10 or 10.2.11 (the releases affected by MDEV-14799, which was a regression
from MDEV-14051).
If a translation table is present when we materialize the derived table, then
change it to point to the materialized table.
Added debug info to see what really happens with which derived table.
* changed thd_binlog_format to return the configured binlog format during wsrep execution,
regardless of the binlogging setting (i.e. with or without binlogging)
* thd_binlog_format is used in innobase::write_row(), and would return a confusing
result there when log_bin==OFF
- Fix win64 pointer truncation warnings
(usually coming from misusing 0x%lx and a long cast in DBUG)
- Also fix printf-format warnings.
Make the above-mentioned warnings fatal.
- Fix pthread_join on Windows to set the return value.
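A generic example of the pointer-format warning class and its fix (not an
actual diff from this change):
{noformat}
// On win64, long is 32-bit, so casting a pointer to long truncates it;
// %p with a void* cast is portable.
#include <cstdio>

int main()
{
  int object = 0;
  int* ptr = &object;

  // old, truncating pattern:
  //   printf("ptr: 0x%lx\n", (long) ptr);

  // portable replacement:
  std::printf("ptr: %p\n", static_cast<void*>(ptr));
  return 0;
}
{noformat}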
Was reported as 'rpl.rpl_gtid_stop_start fails with Valgrind in buildbot and outside'.
The 'Conditional jump or move depends on uninitialized value' valgrind complaint is valid
and means that THD::m_current_stage_key is indeed not initialized by the constructor.
Fixed by adding its initialization to the initializer list of
THD::THD. Double-checked with MySQL, which has a similar fix for
the very same issue, discovered after the P_S merge to Maria
(60589aeee, where the member is introduced) had been done.
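The shape of the fix, shown on a hypothetical cut-down class (only the member
name m_current_stage_key comes from the real THD):
{noformat}
// A member left out of the constructor's initializer list stays uninitialized,
// which is what Valgrind flags when the value is later branched on.
#include <iostream>

struct THD_like
{
  unsigned m_current_stage_key;
  bool     some_other_member;

  THD_like()
    : m_current_stage_key(0),     // the added initialization
      some_other_member(false)
  {}
};

int main()
{
  THD_like thd;
  if (thd.m_current_stage_key == 0)   // now well-defined, no Valgrind complaint
    std::cout << "initialized\n";
}
{noformat}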
For running the Galera tests, the variable my_disable_leak_check
was set to true in order to avoid assertions due to memory leaks
at shutdown.
Some adjustments due to MDEV-13625 (merge InnoDB tests from MySQL 5.6)
were performed. The most notable behaviour changes from 10.0 and 10.1
are the following:
* innodb.innodb-table-online: adjustments for the DROP COLUMN
behaviour change (MDEV-11114, MDEV-13613)
* innodb.innodb-index-online-fk: the removal of a (1,NULL) record
from the result; originally removed in MySQL 5.7 in the
Oracle Bug #16244691 fix
377774689b
* innodb.create-index-debug: disabled due to MDEV-13680
(the MySQL Bug #77497 fix was not merged from 5.6 to 5.7.10)
* innodb.innodb-alter-autoinc: MariaDB 10.2 behaves like MySQL 5.6/5.7,
while MariaDB 10.0 and 10.1 assign different values when
auto_increment_increment or auto_increment_offset are used.
Also MySQL 5.6/5.7 exhibit different behaviour between
ALGORITHM=INPLACE and ALGORITHM=COPY, so something needs to be tested
and fixed in both MariaDB 10.0 and 10.2.
* innodb.innodb-wl5980-alter: disabled because it would trigger an
InnoDB assertion failure (MDEV-13668 may need additional effort in 10.2)
Merged the following from mysql-wsrep-bugs:
GCF-1058 MTR test galera.MW-86 fails on repeated runs
Wait for the sync point sync.wsrep_apply_cb to be reached before
executing the test and clearing the debug flag sync.wsrep_apply_cb.
The race scenario:
Intended behavior:
node2: set sync.wsrep_apply_cb in order to start waiting in the background INSERT
node1: INSERT start
node2 (background): INSERT start
node1: INSERT end
node2: send signal to background INSERT: "stop waiting and continue executing"
node2: clear sync.wsrep_apply_cb as no longer needed
node2 (background): consume the signal
node2 (background): INSERT end
node2: DROP TABLE
node2: check no pending signals are left - ok
What happens occasionally (unexpected):
node2: set sync.wsrep_apply_cb in order to start waiting in the background INSERT
node1: INSERT start
node2 (background): INSERT start
node1: INSERT end
// The background INSERT still has _not_ reached the place where it starts
// waiting for the signal:
// DBUG_EXECUTE_IF("sync.wsrep_apply_cb", "now wait_for...");
node2: send signal to background INSERT: "stop waiting and continue executing"
node2: clear sync.wsrep_apply_cb as no longer needed
// The background INSERT reaches DBUG_EXECUTE_IF("sync.wsrep_apply_cb", ...)
// but sync.wsrep_apply_cb has already been cleared and the "wait" code is not
// executed. The signal remains unconsumed.
node2 (background): INSERT end
node2: DROP TABLE
node2: check no pending signals are left - failure, signal.wsrep_apply_cb is
pending (not consumed)
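For illustration, a stand-alone model of the corrected handshake using plain
C++ threads; the real code uses DBUG_EXECUTE_IF and the debug_sync signals
named above. The point is that the controlling connection must wait for the
background INSERT to reach the sync point before signalling and clearing it:
{noformat}
// Model of the fixed handshake: wait for "reached" before signalling/clearing.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool sync_point_enabled = true;   // "sync.wsrep_apply_cb" is set
bool reached = false;             // background INSERT got to the sync point
bool signalled = false;           // "signal.wsrep_apply_cb" was sent

void background_insert()
{
  std::unique_lock<std::mutex> lk(m);
  if (sync_point_enabled) {       // DBUG_EXECUTE_IF("sync.wsrep_apply_cb", ...)
    reached = true;
    cv.notify_all();              // let the test know the sync point is reached
    cv.wait(lk, [] { return signalled; });   // consume the signal
  }
  std::cout << "INSERT end\n";
}

int main()
{
  std::thread applier(background_insert);
  {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [] { return reached; });  // the fix: wait until it is reached
    signalled = true;                     // then send the signal...
    sync_point_enabled = false;           // ...and clear the debug flag
    cv.notify_all();
  }
  applier.join();                         // no unconsumed signal remains
}
{noformat}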
Remove MW-360 test case as it is not intended for MariaDB (uses
MySQL GTID).