STATUS OF ROLLBACKED TRANSACTION" and bug #17054007 - "TRANSACTION
IS NOT FULLY ROLLED BACK IN CASE OF INNODB DEADLOCK".
The problem in the first bug report was that, although a deadlock involving
metadata locks was reported using the same error code and message as an
InnoDB deadlock, it did not roll back the transaction the way the latter
does. This confused users, since after ER_LOCK_DEADLOCK the transaction
could sometimes be restarted immediately, while in other cases an explicit
rollback was required.
The problem in the second bug report was that, although an InnoDB deadlock
rolled back the transaction in all storage engines, it did not release
metadata locks. So concurrent DDL on the tables used in the transaction was
blocked until an implicit or explicit COMMIT or ROLLBACK was issued in the
connection that hit the InnoDB deadlock.
The former issue stemmed from the fact that, when support for detecting and
reporting metadata lock deadlocks was added, we erroneously assumed that on
deadlock InnoDB rolls back only the last statement and not the whole
transaction (which is actually what happens on an InnoDB lock timeout), and
so we did not implement rollback of transactions on MDL deadlocks.
The latter issue was caused by the fact that rollback of a transaction due
to deadlock is carried out by setting the THD::transaction_rollback_request
flag at the point where the deadlock is detected and then performing the
rollback inside the trans_rollback_stmt() call when this flag is set. Since
trans_rollback_stmt() is not aware of MDL locks, no MDL locks were
released.
This patch solves these two problems in the following way (a sketch of the
resulting control flow follows the list):
- When an MDL deadlock is detected, transaction rollback is requested by
  setting the THD::transaction_rollback_request flag.
- The code that performs transaction rollback when
  THD::transaction_rollback_request is set is moved out of
  trans_rollback_stmt(). The rollback request is now handled at the same
  level where we call trans_rollback_stmt() and release
  statement/transaction MDL locks.
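
As a rough illustration of this pattern, here is a minimal self-contained
sketch; Session, MetadataLocks and the helper names are hypothetical
stand-ins for THD and the MDL subsystem, not the actual server code:

  #include <cstdio>

  // Hypothetical stand-ins for THD and the MDL subsystem.
  struct MetadataLocks {
      void release_statement_locks()   { std::puts("statement MDL released"); }
      void release_transaction_locks() { std::puts("transaction MDL released"); }
  };

  struct Session {                                // stand-in for THD
      bool transaction_rollback_request = false;  // THD::transaction_rollback_request
      MetadataLocks mdl;
      void rollback_transaction() { std::puts("whole transaction rolled back"); }
  };

  // Deadlock detection site: only *request* the rollback here.
  void on_mdl_deadlock_detected(Session &s) {
      s.transaction_rollback_request = true;  // ER_LOCK_DEADLOCK is reported
  }

  // Statement-end path: after the fix the request is handled at the same
  // level where trans_rollback_stmt() is called and MDL locks are released,
  // instead of inside trans_rollback_stmt() itself.
  void end_of_statement(Session &s) {
      if (s.transaction_rollback_request) {
          s.rollback_transaction();           // roll back the whole transaction
          s.mdl.release_transaction_locks();  // unblocks concurrent DDL
          s.transaction_rollback_request = false;
      } else {
          s.mdl.release_statement_locks();    // normal end of statement
      }
  }

  int main() {
      Session s;
      on_mdl_deadlock_detected(s);
      end_of_statement(s);
  }

The point is that the detection site only sets the flag; the rollback and
the MDL release now happen together at statement end, which is what
unblocks concurrent DDL.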
After single-row substitutions, new equalities may appear. They should be
properly propagated to all AND/OR levels of the WHERE condition. This is
now done with an additional call of remove_eq_conds(). A toy sketch of such
propagation follows.
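
As a toy illustration of what "propagating to all AND/OR levels" means
(the tree type and the textual substitution below are invented for the
sketch; the real remove_eq_conds() operates on Item trees and also removes
trivially true conditions):

  #include <iostream>
  #include <memory>
  #include <string>
  #include <vector>

  // Toy condition tree standing in for the optimizer's condition items.
  struct Cond {
      enum Kind { AND_NODE, OR_NODE, LEAF } kind;
      std::string text;                             // leaf predicate text
      std::vector<std::unique_ptr<Cond>> children;  // AND/OR operands
  };

  std::unique_ptr<Cond> leaf(std::string t) {
      auto c = std::make_unique<Cond>();
      c->kind = Cond::LEAF;
      c->text = std::move(t);
      return c;
  }

  // Push a newly discovered equality (field == constant) down through every
  // AND/OR level, rewriting each leaf that mentions the field.
  void propagate_equality(Cond &c, const std::string &field,
                          const std::string &constant) {
      if (c.kind == Cond::LEAF) {
          for (std::size_t pos = c.text.find(field); pos != std::string::npos;
               pos = c.text.find(field, pos + constant.size()))
              c.text.replace(pos, field.size(), constant);
          return;
      }
      for (auto &child : c.children)
          propagate_equality(*child, field, constant);
  }

  void print(const Cond &c) {
      if (c.kind == Cond::LEAF) { std::cout << c.text; return; }
      const char *op = c.kind == Cond::AND_NODE ? " AND " : " OR ";
      std::cout << '(';
      for (std::size_t i = 0; i < c.children.size(); ++i) {
          if (i) std::cout << op;
          print(*c.children[i]);
      }
      std::cout << ')';
  }

  int main() {
      // WHERE (t2.b = t1.a) OR (t2.c = t1.a AND t2.d > 0), where t1 turned
      // out to be a single-row table whose t1.a is the constant 5.
      auto andNode = std::make_unique<Cond>();
      andNode->kind = Cond::AND_NODE;
      andNode->children.push_back(leaf("t2.c = t1.a"));
      andNode->children.push_back(leaf("t2.d > 0"));

      Cond where;
      where.kind = Cond::OR_NODE;
      where.children.push_back(leaf("t2.b = t1.a"));
      where.children.push_back(std::move(andNode));

      propagate_equality(where, "t1.a", "5");
      print(where);   // (t2.b = 5 OR (t2.c = 5 AND t2.d > 0))
      std::cout << '\n';
  }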
The main bug here was the following situation:
Suppose we set up a completely new master2 as an extra multi-master to an
existing slave that already has a different master1 for domain_id=0. When
the slave tries to connect to master2, master2 will not have anything that
the slave requests in domain_id=0, but that is fine, as master2 is
presumably meant to serve e.g. domain_id=1. (This is MDEV-4485.)
But suppose that master2 then actually starts sending events from
domain_id=0. In this case, the fix for MDEV-4485 was incomplete, and the code
would fail to give the error that the position requested by the slave in
domain_id=0 was missing from the binlogs of master2. This could lead to lost
events or completely wrong replication.
The patch for this bug fixes the issue. In addition, it cleans up the code
a bit, getting rid of the fake_gtid_hash. The error message issued when
slave and master have diverged into alternate futures is also clarified, as
requested in the bug description. A toy sketch of the per-domain check
follows.
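
A minimal sketch of the kind of per-domain check involved, under the
simplifying assumption that a position is a single sequence number per
domain (the real code works with full GTID state; all names here are
hypothetical):

  #include <cstdint>
  #include <iostream>
  #include <map>
  #include <stdexcept>
  #include <string>

  // Simplified position: one sequence number per replication domain.
  using DomainPos = std::map<uint32_t, uint64_t>;  // domain_id -> seq_no

  // When the master starts streaming an event in some domain, verify that
  // the position the slave requested in that domain actually exists in the
  // master's binlogs. Merely connecting with an extra domain is fine
  // (MDEV-4485); sending events from a domain where the requested position
  // is missing must raise an error.
  void check_domain(uint32_t domain_id, const DomainPos &slave_requested,
                    const DomainPos &master_has) {
      auto req = slave_requested.find(domain_id);
      if (req == slave_requested.end())
          return;  // slave asked for nothing in this domain
      auto have = master_has.find(domain_id);
      if (have == master_has.end() || have->second < req->second)
          throw std::runtime_error("slave position in domain " +
                                   std::to_string(domain_id) +
                                   " is missing from the master's binlogs");
  }

  int main() {
      DomainPos slave{{0, 100}};   // slave already at seq 100 in domain 0
      DomainPos master{{1, 50}};   // new master2 only has domain 1
      try {
          check_domain(1, slave, master);  // ok: slave requested nothing here
          check_domain(0, slave, master);  // error: domain 0 position missing
      } catch (const std::exception &e) {
          std::cerr << e.what() << '\n';
      }
  }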
This patch almost totally revised the patch for bug mdev-4177. The latter
had too many defects. In particular, it did not propagate multiple
equalities formed when merging a degenerate disjunct into the underlying
AND formula.
Added partition_exchange test.
Do not set HA_OPTION_PACK_RECORD for InnoDB-specific row formats
(e.g. COMPACT, REDUNDANT). Adjusted mysql_compare_tables() accordingly.
The following variables no longer require LOCK_open protection (see the
sketch after this list):
- table_def_cache (renamed to tdc_hash) is protected by rw-lock
LOCK_tdc_hash;
- table_def_shutdown_in_progress doesn't need LOCK_open protection;
- last_table_id uses atomics;
- TABLE_SHARE::ref_count (renamed to TABLE_SHARE::tdc.ref_count)
is protected by TABLE_SHARE::tdc.LOCK_table_share;
- TABLE_SHARE::next, ::prev (renamed to tdc.next and tdc.prev),
oldest_unused_share, end_of_unused_share are protected by
LOCK_unused_shares;
- TABLE_SHARE::m_flush_tickets (renamed to tdc.m_flush_tickets)
is protected by TABLE_SHARE::tdc.LOCK_table_share;
- refresh_version (renamed to tdc_version) uses atomics.
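
An illustrative sketch of these protection patterns, using standard C++
primitives as stand-ins for the server's own (the names below mirror the
list above, but this is not the actual implementation):

  #include <atomic>
  #include <cstdint>
  #include <memory>
  #include <mutex>
  #include <shared_mutex>
  #include <string>
  #include <unordered_map>

  // Stand-in for TABLE_SHARE with tdc.LOCK_table_share / tdc.ref_count.
  struct TableShare {
      std::mutex LOCK_table_share;
      unsigned ref_count = 0;
      void acquire() { std::lock_guard<std::mutex> g(LOCK_table_share); ++ref_count; }
      void release() { std::lock_guard<std::mutex> g(LOCK_table_share); --ref_count; }
  };

  // tdc_hash protected by its own rw-lock instead of the global LOCK_open.
  std::shared_mutex LOCK_tdc_hash;
  std::unordered_map<std::string, std::shared_ptr<TableShare>> tdc_hash;

  // Monotonic counters need no lock at all, just atomics.
  std::atomic<uint64_t> last_table_id{0};
  std::atomic<uint64_t> tdc_version{1};

  std::shared_ptr<TableShare> tdc_acquire(const std::string &key) {
      {
          std::shared_lock<std::shared_mutex> rd(LOCK_tdc_hash);  // parallel readers
          auto it = tdc_hash.find(key);
          if (it != tdc_hash.end()) { it->second->acquire(); return it->second; }
      }
      std::unique_lock<std::shared_mutex> wr(LOCK_tdc_hash);  // exclusive insert
      auto &share = tdc_hash[key];
      if (!share) share = std::make_shared<TableShare>();     // recheck under lock
      share->acquire();
      return share;
  }

  int main() {
      ++last_table_id;                  // atomic id generation, no LOCK_open
      auto s1 = tdc_acquire("test.t1");
      auto s2 = tdc_acquire("test.t1"); // same share, ref_count == 2
      s1->release();
      s2->release();
      ++tdc_version;                    // bumped e.g. on a flush of the cache
  }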
This is an old legacy performance bug.
When a very selective range scan existed for the second table in a join,
and, at the same time, there was another range condition depending on the
fields of the first table, the optimizer chose a plan with
'Range checked for each record'. This plan was extremely inefficient in
comparison with the regular selective range scan.
As a matter of fact, the range scan chosen for each record was the same as
that selective range scan.
Changed the test case for bug 24776 to preserve the old EXPLAIN output.