MDEV-22456 Dropping the adaptive hash index may cause DDL to lock up InnoDB

If the InnoDB buffer pool contains many pages for a table or index
that is being dropped or rebuilt, and if many of those pages are
pointed to by the adaptive hash index, dropping the adaptive hash index
can consume a lot of time.

This time-consuming drop of the adaptive hash index entries is
executed while the InnoDB data dictionary cache dict_sys is
exclusively locked.

It is not actually necessary to drop all adaptive hash index entries
at the time a table or index is being dropped or rebuilt. We can let
the LRU replacement policy of the buffer pool take care of this gradually.
For this to work, we must detach the dict_table_t and dict_index_t
objects from the main dict_sys cache, and once the last
adaptive hash index entry for the detached table is removed
(when the garbage page is evicted from the buffer pool) we can free
the dict_table_t and dict_index_t objects.
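
This lifetime rule is small enough to sketch. The following is a
minimal, self-contained model, not the actual InnoDB code; only
btr_search_lazy_free(), freed_indexes, set_freed(), freed() and
n_ahi_pages() are names from this change, everything else is
illustrative:

    #include <cstddef>
    #include <set>

    struct dict_table_t;

    struct dict_index_t {
      dict_table_t* table;
      std::size_t ahi_ref_pages = 0;  // buffer pool pages hashed for this index
      bool is_freed = false;
      void set_freed() { is_freed = true; }
      bool freed() const { return is_freed; }
      std::size_t n_ahi_pages() const { return ahi_ref_pages; }
    };

    struct dict_table_t {
      std::set<dict_index_t*> freed_indexes;  // detached indexes awaiting free
    };

    // Called when the last adaptive hash index entry for a detached index
    // goes away, that is, when its last hashed page leaves the buffer pool.
    void btr_search_lazy_free(dict_index_t* index)
    {
      dict_table_t* table = index->table;
      table->freed_indexes.erase(index);
      delete index;
      // Simplified: the real code frees the detached table object only once
      // it is no longer in dict_sys and this set drains empty.
      if (table->freed_indexes.empty())
        delete table;
    }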

Related to this, in MDEV-16283, we made ALTER TABLE...DISCARD TABLESPACE
skip both the buffer pool eviction and the drop of the adaptive hash index.
We shifted the burden to ALTER TABLE...IMPORT TABLESPACE or DROP TABLE.
We can remove the eviction from DROP TABLE. We must retain the eviction
in the ALTER TABLE...IMPORT TABLESPACE code path, so that in case the
discarded table is being re-imported with the same tablespace identifier,
the fresh data from the imported tablespace will replace any stale pages
in the buffer pool.

rpl.rpl_failed_drop_tbl_binlog: Remove the test. DROP TABLE can
no longer be interrupted inside InnoDB.

fseg_free_page(), fseg_free_step(), fseg_free_step_not_header(),
fseg_free_page_low(), fseg_free_extent(): Remove the parameter
that specifies whether the adaptive hash index should be dropped.
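
Schematically, with placeholder types standing in for the real
declarations in fsp0fsp.h (the actual parameter lists and return types
differ in detail):

    struct fseg_header_t;  // placeholder for the real type
    struct mtr_t;          // placeholder for the real type

    // Before: every caller chose whether the adaptive hash index entries
    // for the freed page were dropped synchronously.
    void fseg_free_page(fseg_header_t* seg_header, unsigned space,
                        unsigned page, bool drop_ahi, mtr_t* mtr);

    // After: the flag is gone; stale entries are dropped lazily when the
    // garbage page is evicted from the buffer pool.
    void fseg_free_page(fseg_header_t* seg_header, unsigned space,
                        unsigned page, mtr_t* mtr);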

btr_search_lazy_free(): Lazily free an index when the last
reference to it is dropped from the adaptive hash index.

buf_pool_clear_hash_index(): Declare static, and move to the
same compilation unit as the bulk of the adaptive hash index
code.

dict_index_t::clone(), dict_index_t::clone_if_needed():
Clone an index that is being rebuilt while adaptive hash index
entries exist. The original index will be inserted into
dict_table_t::freed_indexes and dict_index_t::set_freed()
will be called.
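
Continuing the simplified model from the first sketch (the real
clone() and clone_if_needed() are dict_index_t member functions that
copy far more state):

    dict_index_t* clone_if_needed(dict_index_t* index)
    {
      if (!index->n_ahi_pages())
        return index;  // no AHI references: the index can be rebuilt in place
      dict_index_t* copy = new dict_index_t(*index);  // stands in for clone()
      index->set_freed();  // the original is now logically dead
      index->table->freed_indexes.insert(index);  // parked until its pages go
      return copy;
    }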

dict_index_t::set_freed(), dict_index_t::freed(): Record that the
index has been freed, or check whether it has been. We use the
impossible page number 1 to denote this condition.
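
Instead of the boolean in the earlier sketch, this encoding reuses the
root page number field: page 1 of any InnoDB tablespace is a change
buffer bitmap page, so no index root can ever live there. Schematic
member definitions, assuming a page field on dict_index_t:

    void dict_index_t::set_freed() { page = 1; }
    bool dict_index_t::freed() const { return page == 1; }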

dict_index_t::n_ahi_pages(): Replaces btr_search_info_get_ref_count().

dict_index_t::detach_columns(): Move the assignment n_fields=0
to ha_innobase_inplace_ctx::clear_added_indexes().
We must have access to the columns when freeing the
adaptive hash index. Note: dict_table_t::v_cols[] will remain
valid. If virtual columns are dropped or added, the table
definition will be reloaded in ha_innobase::commit_inplace_alter_table().
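
Schematically (the member layouts here are simplified stand-ins; the
point is only where the zeroing now happens):

    void dict_index_t::detach_columns()
    {
      for (unsigned i = 0; i < n_fields; i++)
        fields[i].col = nullptr;  // sever the column pointers only
      // no longer: n_fields = 0;
    }

    void ha_innobase_inplace_ctx::clear_added_indexes()
    {
      for (unsigned i = 0; i < num_to_add_index; i++) {
        add_index[i]->detach_columns();
        add_index[i]->n_fields = 0;  // moved here from detach_columns()
      }
    }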

buf_page_mtr_lock(): Drop a stale adaptive hash index if needed.
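
Sketched below; buf_block_t::index and btr_search_drop_page_hash_index()
are real names from the source tree, while the wrapper function and the
free-standing predicate are hypothetical:

    struct dict_index_t;                       // as in the first sketch
    bool index_is_freed(const dict_index_t*);  // stands in for index->freed()
    struct buf_block_t { dict_index_t* index; /* AHI index, or nullptr */ };
    void btr_search_drop_page_hash_index(buf_block_t* block);

    // Inside buf_page_mtr_lock(), once the block is latched (schematic):
    void drop_stale_hash_if_needed(buf_block_t* block)
    {
      if (dict_index_t* index = block->index)  // the block is hashed by the AHI
        if (index_is_freed(index))             // ...for an already-freed index
          btr_search_drop_page_hash_index(block);
    }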

We will also reduce the number of btr_get_search_latch() calls
and enclose some more code inside #ifdef BTR_CUR_HASH_ADAPT
in order to benefit cmake -DWITH_INNODB_AHI=OFF.
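
The guard pattern, as a fragment (types as in the previous sketch;
buf_pool_evict_block() is a hypothetical caller, while the macro, the
header and the cmake option are the real ones):

    #ifdef BTR_CUR_HASH_ADAPT
    # include "btr0sea.h"  // AHI declarations are only needed here
    #endif

    void buf_pool_evict_block(buf_block_t* block)
    {
    #ifdef BTR_CUR_HASH_ADAPT
      if (block->index)                          // hashed by the AHI?
        btr_search_drop_page_hash_index(block);  // drop its entries first
    #endif /* BTR_CUR_HASH_ADAPT */
      // ... eviction work that is independent of the AHI ...
    }
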
Author: Marko Mäkelä
Date: 2020-05-15 17:10:59 +03:00
Parent: ff66d65a09
Commit: ad6171b91c
34 changed files with 552 additions and 903 deletions

mysql-test/suite/rpl/r/rpl_failed_drop_tbl_binlog.result (file deleted)

@@ -1,32 +0,0 @@
include/master-slave.inc
[connection master]
create table t1 (a int) engine=innodb;
create table t2 (b longblob) engine=innodb;
create table t3 (c int) engine=innodb;
insert into t2 values (repeat('b',1024*1024));
insert into t2 select * from t2;
insert into t2 select * from t2;
insert into t2 select * from t2;
insert into t2 select * from t2;
set debug_sync='rm_table_no_locks_before_delete_table SIGNAL nogo WAIT_FOR go EXECUTE 2';
drop table t1, t2, t3;
connect foo,localhost,root;
set debug_sync='now SIGNAL go';
kill query CONNECTION_ID;
connection master;
ERROR 70100: Query execution was interrupted
"Tables t2 and t3 should be listed"
SHOW TABLES;
Tables_in_test
t2
t3
include/show_binlog_events.inc
Log_name Pos Event_type Server_id End_log_pos Info
master-bin.000001 # Gtid # # GTID #-#-#
master-bin.000001 # Query # # use `test`; DROP TABLE `t1` /* generated by server */
connection slave;
drop table t2, t3;
connection master;
set debug_sync='RESET';
drop table t2, t3;
include/rpl_end.inc

mysql-test/suite/rpl/t/rpl_failed_drop_tbl_binlog.test (file deleted)

@@ -1,64 +0,0 @@
# ==== Purpose ====
#
# Check that when the execution of a DROP TABLE command with single table
# fails it should not be written to the binary log. Also test that when the
# execution of DROP TABLE command with multiple tables fails the command
# should be written into the binary log.
#
# ==== Implementation ====
#
# Steps:
# 0 - Create tables named t1, t2, t3
# 1 - Execute DROP TABLE t1,t2,t3 command.
# 2 - Kill the DROP TABLE command while it is trying to drop table 't2'.
# 3 - Verify that tables t2,t3 are present after the DROP command execution
# was interrupted.
# 4 - Check that table 't1' is present in binary log as part of DROP
# command.
#
# ==== References ====
#
# MDEV-20348: DROP TABLE IF EXISTS killed on master but was replicated.
#
--source include/have_innodb.inc
--source include/have_debug_sync.inc
--source include/have_binlog_format_statement.inc
--source include/master-slave.inc
create table t1 (a int) engine=innodb;
create table t2 (b longblob) engine=innodb;
create table t3 (c int) engine=innodb;
insert into t2 values (repeat('b',1024*1024));
insert into t2 select * from t2;
insert into t2 select * from t2;
insert into t2 select * from t2;
insert into t2 select * from t2;
let $binlog_start= query_get_value(SHOW MASTER STATUS, Position, 1);
let $id=`select connection_id()`;
set debug_sync='rm_table_no_locks_before_delete_table SIGNAL nogo WAIT_FOR go EXECUTE 2';
send drop table t1, t2, t3;
connect foo,localhost,root;
set debug_sync='now SIGNAL go';
let $wait_condition=select 1 from information_schema.processlist where state like 'debug sync point:%';
source include/wait_condition.inc;
--replace_result $id CONNECTION_ID
eval kill query $id;
connection master;
error ER_QUERY_INTERRUPTED;
reap;
--echo "Tables t2 and t3 should be listed"
SHOW TABLES;
--source include/show_binlog_events.inc
--sync_slave_with_master
drop table t2, t3;
connection master;
set debug_sync='RESET';
drop table t2, t3;
source include/rpl_end.inc;