Patch that refactors the global read lock implementation and fixes

bug #57006 "Deadlock between HANDLER and FLUSH TABLES WITH READ
LOCK" and bug #54673 "It takes too long to get readlock for
'FLUSH TABLES WITH READ LOCK'".

The first bug manifested itself as a deadlock which occurred
when a connection that had some table open through a HANDLER
statement tried to update some data through a DML statement
while another connection concurrently tried to execute FLUSH
TABLES WITH READ LOCK.

What happened was that FTWRL in the second connection managed
to perform the first step of GRL acquisition and thus blocked
all upcoming DML. After that it started to wait for the table
opened through the HANDLER statement to be flushed. When the
first connection tried to execute its DML statement, it started
to wait for the GRL, i.e. for the second connection, creating
a deadlock.

The second bug manifested itself as starvation of FLUSH TABLES
WITH READ LOCK statements when there was a constant stream of
concurrent DML statements (from two or more connections).

This happened because the requests for protection against GRL
which were acquired by DML statements ignored the presence of a
pending GRL, and thus the latter was starved.

This patch solves both these problems by re-implementing GRL
using metadata locks.

As in the old implementation, acquisition of the GRL in the new
implementation is a two-step process. During the first step we
block all concurrent DML and DDL statements by acquiring a
global S metadata lock (each DML and DDL statement acquires a
global IX lock for its duration). During the second step we
block commits by acquiring a global S lock in the COMMIT
namespace (the commit code acquires a global IX lock in this
namespace).
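
Schematically, the observable effect of the two steps is the
following (a sketch, not actual test code; t1 stands for an
arbitrary base table):

  # connection 1
  FLUSH TABLES WITH READ LOCK;  # step 1: global S lock,
                                # step 2: S lock in COMMIT namespace
  # connection 2
  INSERT INTO t1 VALUES (1);    # blocked: DML needs a global IX lock
  # connection 3, inside a transaction that has modified data
  COMMIT;                       # blocked: commit needs an IX lock
                                # in the COMMIT namespace
  # connection 1
  UNLOCK TABLES;                # releases both S locks,
                                # connections 2 and 3 proceed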

Note that unlike in the old implementation, acquisition of
protection against GRL in DML and DDL is semi-automatic. We
assume that any statement which should be blocked by GRL will
either open and write-lock tables or acquire metadata locks on
the objects it is going to modify. For any such statement a
global IX metadata lock is automatically acquired for its
duration.
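
For example (a sketch; t1 is again a placeholder table), while
FLUSH TABLES WITH READ LOCK is active in another connection:

  SELECT * FROM t1;         # not blocked: takes no write locks,
                            # so no global IX lock is requested
  UPDATE t1 SET a = a + 1;  # blocked: write-locks t1 and therefore
                            # requests the global IX lock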

The first problem is solved because waits for the GRL become
visible to the deadlock detector in the metadata locking
subsystem, and thus deadlocks like the one in the first bug
become impossible.

The second problem is solved because the global S locks which
are used for the GRL implementation are given preference over
the IX locks acquired by concurrent DML (and we can switch to
fair scheduling in the future if needed).
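
An illustrative timeline of the new behavior (a sketch; t1 is a
placeholder table):

  # connection 1: a long-running DML statement holds a global IX lock
  UPDATE t1 SET a = a + 1;
  # connection 2
  FLUSH TABLES WITH READ LOCK;  # pending global S lock,
                                # waits for connection 1
  # connection 3
  INSERT INTO t1 VALUES (1);    # its new IX request queues behind the
                                # pending S lock instead of starving FTWRL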

Important change:
FTWRL/GRL no longer blocks DML and DDL on temporary tables.
Before this patch the behavior was not consistent in this
respect: in some cases DML/DDL statements on temporary tables
were blocked while in others they were not. Since the main use
cases for FTWRL are various forms of backups, and temporary
tables are not preserved during backups, we have opted for
consistently allowing DML/DDL on temporary tables during
FTWRL/GRL.
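
For example (a sketch; table names are placeholders):

  # connection 1
  FLUSH TABLES WITH READ LOCK;
  # connection 2
  CREATE TEMPORARY TABLE tmp1 (a INT);  # allowed
  INSERT INTO tmp1 VALUES (1);          # allowed
  INSERT INTO t1 VALUES (1);            # still blocked (base table)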

Important change:
This patch changes the thread state names which are used when
DML/DDL or FTWRL is waiting for the global read lock. The state
is now either "Waiting for global read lock" or "Waiting for
commit lock", depending on which stage FTWRL is at.

Incompatible change:
To solve a deadlock in the events code which was exposed by
this patch, we had to replace the LOCK_event_metadata mutex
with metadata locks on events. As a result, we have to prohibit
DDL on events under LOCK TABLES.
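
For example (a sketch; t1 and ev1 are placeholder names):

  LOCK TABLES t1 READ;
  DROP EVENT ev1;       # now rejected while LOCK TABLES is active
  UNLOCK TABLES;
  DROP EVENT ev1;       # allowed again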

This patch also adds extensive test coverage for the
interaction of DML/DDL and FTWRL.

The performance of the new and old global read lock
implementations was compared in sysbench tests. There was no
significant difference between the two implementations.
Author: Dmitry Lenev
Date: 2010-11-11 20:11:05 +03:00
parent aee8fce233
commit 378cdc58c1
75 changed files with 5492 additions and 1243 deletions


@@ -37,7 +37,6 @@ where name like 'Wait/Synch/Cond/sql/%'
order by name limit 10;
NAME ENABLED TIMED
wait/synch/cond/sql/COND_flush_thread_cache YES YES
-wait/synch/cond/sql/COND_global_read_lock YES YES
wait/synch/cond/sql/COND_manager YES YES
wait/synch/cond/sql/COND_queue_state YES YES
wait/synch/cond/sql/COND_rpl_status YES YES
@@ -46,6 +45,7 @@ wait/synch/cond/sql/COND_thread_cache YES YES
wait/synch/cond/sql/COND_thread_count YES YES
wait/synch/cond/sql/Delayed_insert::cond YES YES
wait/synch/cond/sql/Delayed_insert::cond_client YES YES
+wait/synch/cond/sql/Event_scheduler::COND_state YES YES
select * from performance_schema.SETUP_INSTRUMENTS
where name='Wait';
select * from performance_schema.SETUP_INSTRUMENTS


@@ -100,3 +100,4 @@ INNER JOIN performance_schema.THREADS p USING (THREAD_ID)
WHERE p.PROCESSLIST_ID = 1
GROUP BY h.EVENT_NAME
HAVING TOTAL_WAIT > 0;
+UPDATE performance_schema.SETUP_INSTRUMENTS SET enabled = 'YES';


@@ -110,4 +110,5 @@ WHERE (EVENT_NAME = 'wait/synch/rwlock/sql/LOCK_grant'));
SELECT IF((COALESCE(@after_count, 0) - COALESCE(@before_count, 0)) = 0, 'Success', 'Failure') test_fm2_rw_timed;
test_fm2_rw_timed
Success
+UPDATE performance_schema.SETUP_INSTRUMENTS SET enabled = 'YES';
DROP TABLE t1;


@@ -1,4 +1,5 @@
use performance_schema;
+update performance_schema.SETUP_INSTRUMENTS set enabled='YES';
grant SELECT, UPDATE, LOCK TABLES on performance_schema.* to pfsuser@localhost;
flush privileges;
connect (con1, localhost, pfsuser, , test);
@@ -21,9 +22,9 @@ select event_name,
left(source, locate(":", source)) as short_source,
timer_end, timer_wait, operation
from performance_schema.EVENTS_WAITS_CURRENT
where event_name like "wait/synch/cond/sql/COND_global_read_lock";
where event_name like "wait/synch/cond/sql/MDL_context::COND_wait_status";
event_name short_source timer_end timer_wait operation
-wait/synch/cond/sql/COND_global_read_lock lock.cc: NULL NULL wait
+wait/synch/cond/sql/MDL_context::COND_wait_status mdl.cc: NULL NULL timed_wait
unlock tables;
update performance_schema.SETUP_INSTRUMENTS set enabled='NO';
update performance_schema.SETUP_INSTRUMENTS set enabled='YES';


@@ -88,10 +88,6 @@ where name like "wait/synch/mutex/sql/LOCK_manager";
count(name)
1
-select count(name) from MUTEX_INSTANCES
-where name like "wait/synch/mutex/sql/LOCK_global_read_lock";
-count(name)
-1
select count(name) from MUTEX_INSTANCES
where name like "wait/synch/mutex/sql/LOCK_global_system_variables";
count(name)
1
@@ -120,10 +116,6 @@ where name like "wait/synch/mutex/sql/Query_cache::structure_guard_mutex";
count(name)
1
-select count(name) from MUTEX_INSTANCES
-where name like "wait/synch/mutex/sql/LOCK_event_metadata";
-count(name)
-1
select count(name) from MUTEX_INSTANCES
where name like "wait/synch/mutex/sql/LOCK_event_queue";
count(name)
1
@@ -184,10 +176,6 @@ where name like "wait/synch/cond/sql/COND_manager";
count(name)
1
-select count(name) from COND_INSTANCES
-where name like "wait/synch/cond/sql/COND_global_read_lock";
-count(name)
-1
select count(name) from COND_INSTANCES
where name like "wait/synch/cond/sql/COND_thread_cache";
count(name)
1