From the very beginning, the default InnoDB transaction isolation level
REPEATABLE READ has never corresponded to any well-formed definition.
Its main problem is the lack of write/write conflict detection.
To fix that and to make REPEATABLE READ correspond to Snapshot Isolation,
commit b8a671988954870b7db22e20d1a1409fd40f8e3d introduced the Boolean
session variable innodb_snapshot_isolation. It was disabled by default
so as not to break any existing user applications.
In a new major version of MariaDB Server, it is better to enable this
parameter by default.
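To illustrate the semantics of the write/write conflict check, here is a
minimal self-contained sketch (read_view, lock_row_for_update and the error
handling below are illustrative stand-ins, not the actual InnoDB code
paths): with the variable enabled, a locking write against a row whose
latest committed version is not visible in the transaction's read view
fails instead of silently updating data that the snapshot never saw.

  #include <cassert>
  #include <cstdint>

  // Simplified read view: transaction ids >= low_limit_id are invisible.
  struct read_view { uint64_t low_limit_id; };

  static bool sees(const read_view &view, uint64_t row_trx_id) {
    return row_trx_id < view.low_limit_id;
  }

  enum dberr { DB_SUCCESS, DB_RECORD_CHANGED };

  static dberr lock_row_for_update(const read_view &view,
                                   uint64_t row_trx_id,
                                   bool snapshot_isolation) {
    if (snapshot_isolation && !sees(view, row_trx_id))
      return DB_RECORD_CHANGED;  // write/write conflict detected
    return DB_SUCCESS;           // legacy behaviour: update latest version
  }

  int main() {
    read_view view{100};         // snapshot taken before trx 150 committed
    assert(lock_row_for_update(view, 150, true) == DB_RECORD_CHANGED);
    assert(lock_row_for_update(view, 150, false) == DB_SUCCESS);
    return 0;
  }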
1. The merge aeccbbd926e759a5c3b9818d9948a35918404478 overwrote
lock0lock.cc, and the changes of MDEV-29622 and MDEV-29635 were
partially lost; this commit restores those changes.
2. innodb.deadlock_wait_thr_race test:
The following hang was found during testing.
There is a deadlock_report_before_lock_releasing sync point in
Deadlock::report(), which waits for the sel_cont signal while holding the
lock_sys latch. The signal must be issued after the "UPDATE t SET b = 100"
rollback, but that rollback is executing an undo record and is blocked on
a dict_sys latch request. dict_sys is latched by the statistics update
thread (dict_stats_save()), and during that update the lock_sys latch is
requested and cannot be acquired, because Deadlock::report() holds it. We
have to disable statistics updates to make the test stable.
But even if the statistics update is disabled, and a transaction with a
consistent snapshot is started at the very beginning of the test to
prevent purging, purge can still be invoked for system tables, and it
tries to open a system table by id, which causes a dict_sys.freeze() call
and dict_sys latching. In combination with lock_sys::xx_lock() this causes
the same deadlock as described above. We need to disable purge globally
for the test as well.
All of the above also applies to the innodb.deadlock_wait_lock_race test.
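The dependency chain above is a classic wait-for cycle. The toy program
below (purely illustrative, not server code) encodes the three waits and
checks that they close into a cycle, which is exactly the hang observed:

  #include <cassert>
  #include <map>
  #include <set>
  #include <string>

  int main() {
    // waiter -> holder of the resource it needs, as described above
    std::map<std::string, std::string> waits_for = {
      {"Deadlock::report (holds lock_sys)", "UPDATE rollback (sel_cont)"},
      {"UPDATE rollback (sel_cont)",        "dict_stats_save (dict_sys)"},
      {"dict_stats_save (dict_sys)",        "Deadlock::report (holds lock_sys)"},
    };
    // Follow the chain from any thread; revisiting a node proves a cycle.
    std::set<std::string> seen;
    std::string node = waits_for.begin()->first;
    while (!seen.count(node)) {
      seen.insert(node);
      node = waits_for.at(node);
    }
    assert(seen.count(node));  // the three threads wait on each other: a hang
    return 0;
  }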
Suppose we have two transactions, trx 1 and trx 2.
trx 2 performs deadlock resolution from lock_wait(): it sets
victim->lock.was_chosen_as_deadlock_victim=true for trx 1, but has not
yet invoked lock_cancel_waiting_and_release().
trx 1 checks the flag in lock_trx_handle_wait() and starts rollback
from row_mysql_handle_errors(). It can change trx->lock.wait_thr and
trx->state because it holds trx_t::mutex, and trx 2 has not yet requested
that mutex, as lock_cancel_waiting_and_release() has not been called yet.
After that, trx 1 tries to release its locks in trx_t::rollback_low() by
invoking trx_t::rollback_finish(). lock_release() blocks trying to
acquire lock_sys.rd_lock(SRW_LOCK_CALL) in lock_release_try(), because
lock_sys is held by trx 2: deadlock resolution works under
lock_sys.wr_lock(SRW_LOCK_CALL), see Deadlock::report() for details.
trx 2 then executes lock_cancel_waiting_and_release() for the deadlock
victim, i.e. for trx 1. lock_cancel_waiting_and_release() contains
assertions on trx->lock.wait_thr and trx->state, which will fail because
trx 1 has already changed those fields during its rollback.
So, according to the above scenario, it is legal to have
trx->lock.wait_thr==0 and trx->state!=TRX_STATE_ACTIVE in
lock_cancel_waiting_and_release() if it was invoked from
Deadlock::report(); the fix is simply to relax the assertion conditions.
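A minimal model of the relaxed check (hypothetical names and a toy struct;
the real code asserts on trx_t fields with ut_ad()): the cancel path must
also accept a victim whose wait_thr was already cleared and whose state
has already left TRX_STATE_ACTIVE, because the victim's own thread has
started the rollback.

  #include <cassert>

  enum trx_state { TRX_STATE_ACTIVE, TRX_STATE_COMMITTED_IN_MEMORY };

  struct toy_trx {
    void *wait_thr;        // stands in for trx->lock.wait_thr
    trx_state state;       // stands in for trx->state
    bool deadlock_victim;  // stands in for was_chosen_as_deadlock_victim
  };

  // Old condition: always require a waiting, active transaction.
  // Relaxed condition: a deadlock victim may already be rolling back.
  void cancel_waiting_and_release(const toy_trx &trx) {
    assert(trx.wait_thr || trx.deadlock_victim);
    assert(trx.state == TRX_STATE_ACTIVE || trx.deadlock_victim);
  }

  int main() {
    toy_trx victim{nullptr, TRX_STATE_COMMITTED_IN_MEMORY, true};
    cancel_waiting_and_release(victim);  // passes only with the relaxed check
    return 0;
  }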
There is also a lock_wait() cleanup around trx->error_state.
If trx->error_state can be changed by a thread other than the owning one,
it must be protected by lock_sys.wait_mutex, because lock_wait() uses
trx->lock.cond together with that mutex.
Also, if trx->error_state was changed before the lock_sys.wait_mutex
acquisition, it could be reset by the code that follows, which is wrong.
We also need to check trx->error_state before entering the wait loop;
otherwise trx->error_state could have been set before the
lock_sys.wait_mutex acquisition, yet the thread would still be left
waiting on trx->lock.cond.
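A minimal stand-in for this pattern, using std::mutex and
std::condition_variable in place of lock_sys.wait_mutex and trx->lock.cond
(illustrative only, not the actual lock_wait() code): the error flag is
read and written only under the mutex, and it is re-checked before the
thread goes to sleep, so a victim flagged just before the mutex was taken
cannot end up waiting forever on the condition variable.

  #include <condition_variable>
  #include <mutex>
  #include <thread>

  enum dberr { DB_SUCCESS, DB_DEADLOCK };

  std::mutex wait_mutex;              // stands in for lock_sys.wait_mutex
  std::condition_variable wait_cond;  // stands in for trx->lock.cond
  dberr error_state = DB_SUCCESS;     // stands in for trx->error_state
  bool lock_granted = false;

  // Waiter: check error_state under wait_mutex before sleeping.
  dberr wait_for_lock() {
    std::unique_lock<std::mutex> lk(wait_mutex);
    while (!lock_granted && error_state == DB_SUCCESS)
      wait_cond.wait(lk);
    return error_state;
  }

  // Another thread (e.g. deadlock resolution): set the error and signal
  // under the same mutex, never outside it.
  void report_deadlock() {
    std::lock_guard<std::mutex> lk(wait_mutex);
    error_state = DB_DEADLOCK;
    wait_cond.notify_one();
  }

  int main() {
    std::thread killer(report_deadlock);
    dberr err = wait_for_lock();  // no lost wakeup in either interleaving
    killer.join();
    return err == DB_DEADLOCK ? 0 : 1;
  }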