an INFORMATION_SCHEMA table
When a prepared statement using a merged view containing an
information schema table was executed, a metadata lock on the view
was not taken.
This meant that it was possible for concurrent view DDL to execute,
thereby breaking the binary log. For example, it was possible
for DROP VIEW to appear in the binary log before a query using the view.
This also happened when a statement in a stored routine was executed a
second time.
For such views, the information schema table is merged into the view
during the prepare phase (or the first execution of a statement in a
routine). The problem was that we took a shortcut and did not execute
full-blown view opening during subsequent executions of the statement.
As a result, a metadata lock on the view was not taken to protect the
view definition.
This patch resolves the problem by making sure a metadata lock is taken
for views even after information schema tables are merged into them.
Test case added to view.test.
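As an illustration, a rough sketch of the scenario (the object
names are hypothetical, not taken from view.test):
  # connection 1
  CREATE VIEW v1 AS SELECT table_name FROM information_schema.tables;
  PREPARE stmt FROM 'SELECT * FROM v1';
  EXECUTE stmt;    # I_S table merged into the view during prepare
  # connection 2, concurrently
  DROP VIEW v1;    # no MDL on v1 blocked this before the fix, so it
                   # could be binlogged ahead of the query using v1
  # connection 1
  EXECUTE stmt;    # re-execution took the shortcut without locking v1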
The problem was that a shared InnoDB row lock was taken when executing
SELECT statements inside a stored function as a part of a transaction
using REPEATABLE READ. This prevented other transactions from updating
the row.
InnoDB uses multi-versioning and consistent nonlocking reads. SELECTs
should therefore not acquire locks and block other transactions
wishing to do updates.
This bug is no longer repeatable after the changes introduced as
part of the metadata locking work.
Test case added to innodb_mysql.test.
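A sketch of the scenario (table and function names hypothetical):
  CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1);
  CREATE FUNCTION f1() RETURNS INT READS SQL DATA
    RETURN (SELECT a FROM t1 WHERE a = 1);
  # connection 1
  SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  START TRANSACTION;
  SELECT f1();          # formerly took a shared row lock inside InnoDB
  # connection 2
  UPDATE t1 SET a = 2;  # was blocked by that row lock; should not be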
If a prepared statement used both a MyISAMMRG table and a stored
function or trigger, execution could fail with a "No such table"
error or crash.
The error came from a failure of the MyISAMMRG engine
to meet the expectations of the prelocking algorithm,
in particular to keep the lex->query_tables_own_last pointer
in sync with the lex->query_tables_last pointer and the contents
of lex->query_tables. When adding merge children, the merge
engine would extend the table list. Then, when adding
prelocked tables, the prelocking algorithm would use a pointer
to the last merge child to assign to lex->query_tables_own_last.
Then, when merge children were removed at the end of
open_tables(), lex->query_tables_own_last
was not updated, and kept pointing
to a removed merge child.
The fix ensures that query_tables_own_last is always in
sync with lex->query_tables_last.
This is a regression introduced by WL#4144 and is present only
in the next-4284 tree and 6.0.
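A hypothetical sketch of a statement mix exercising this path:
  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1);
  CREATE FUNCTION f1() RETURNS INT
    RETURN (SELECT COUNT(*) FROM t1);
  PREPARE stmt FROM 'SELECT a, f1() FROM m1';
  EXECUTE stmt;  # merge children extend the table list; prelocked
                 # tables for f1 are appended after them
  EXECUTE stmt;  # with query_tables_own_last left pointing to a
                 # removed merge child, this could fail or crash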
TEMPORARY + HANDLER + LOCK + SP".
Server crashed when one:
1) Opened a HANDLER or acquired the global read lock,
2) then locked one or several temporary tables with a
   LOCK TABLES statement (but no base tables),
3) then issued any statement causing a commit (explicit
   or implicit),
4) and then issued a statement which should have closed the
   HANDLER or released the global read lock.
The problem was that when entering LOCK TABLES mode in the
scenario described above we incorrectly set the transactional
MDL sentinel to zero. As a result, all metadata locks were
released during commit (including the lock for the open HANDLER
or the global shared metadata lock). The subsequent attempt to
release the same metadata lock a second time, during HANDLER
CLOSE or release of the global read lock, caused a crash.
This patch fixes the problem by changing the MDL_context's
set_trans_sentinel() method to set the sentinel to the correct
value (it should point to the most recent ticket).
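Following the four steps above, a sketch (object names
hypothetical):
  CREATE TABLE t1 (a INT);
  CREATE TEMPORARY TABLE tmp1 (a INT);
  HANDLER t1 OPEN;          # 1) open a HANDLER
  LOCK TABLES tmp1 WRITE;   # 2) lock only temporary tables
  COMMIT;                   # 3) with the sentinel zeroed, this also
                            #    released the HANDLER's metadata lock
  HANDLER t1 CLOSE;         # 4) released the same lock again -> crash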
DDL workload".
When a RENAME TABLE or LOCK TABLE ... WRITE statement which
mentioned the same table several times was aborted during the
process of acquiring metadata locks (due to a discovered deadlock
or a KILL statement), the server might have crashed.
When the attempt to acquire all requested locks failed, we went
through the list of requests and released, one by one, the locks
we had managed to acquire by that moment. Since in the scenario
described above the list of requests contained duplicates, this
led to releasing the same ticket twice and, as a result, a crash.
This patch solves the problem by employing a different approach
to releasing locks after a failure to acquire all of the requested
locks: we now take an MDL savepoint before starting to acquire
locks and simply roll back to it if things go bad.
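For example (table names hypothetical), each of these statements
mentions the same table twice, so its request list contained
duplicate tickets:
  LOCK TABLES t1 WRITE, t1 AS a1 WRITE;
  RENAME TABLE t1 TO t2, t2 TO t3;
  # aborting lock acquisition midway (a discovered deadlock or KILL)
  # made the old code release the duplicated ticket twice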
function with distinct.
Loose index scan finds MIN/MAX values using an appropriate index
and thus avoids grouping. For each row found, it updates
non-aggregated fields with values from the row containing the
MIN/MAX value.
Without loose index scan, non-aggregated fields are copied by the
end_send_group function. With loose index scan there is no need
for end_send_group, and end_send is used instead. Non-aggregated
fields still need to be copied, and this was incorrectly
implemented in QUICK_GROUP_MIN_MAX_SELECT::get_next.
WL#3220 added a case where loose index scan can be used with
end_send_group to optimize the calculation of aggregate functions
with DISTINCT. In this case the row found by
QUICK_GROUP_MIN_MAX_SELECT::get_next might belong to the next
group, and copying it would produce a wrong result.
The update of non-aggregated fields is therefore moved from
QUICK_GROUP_MIN_MAX_SELECT::get_next to the end_send function.
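A sketch of an affected query shape (schema hypothetical):
  CREATE TABLE t1 (a INT, b INT, KEY k1 (a, b));
  INSERT INTO t1 VALUES (1,1),(1,2),(2,1),(2,2);
  # loose index scan over k1 together with end_send_group; get_next
  # may stop on a row of the next group, so copying non-aggregated
  # fields there gave wrong results
  SELECT a, COUNT(DISTINCT b) FROM t1 GROUP BY a;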
failed in enter_locked_tables_mode".
The server aborted due to an assertion failure when one tried to
execute a statement requiring prelocking (i.e. one firing triggers
or using stored functions) while having open HANDLERs.
The problem was that the THD::enter_locked_tables_mode() method,
called at the beginning of execution of a prelocked statement,
assumed there were no open HANDLERs. It had to do so because the
corresponding THD::leave_locked_tables_mode() method was unable to
properly restore the MDL sentinel when leaving LOCK
TABLES/prelocked mode in the presence of open HANDLERs.
This patch solves the problem by changing the latter method to
properly restore the MDL sentinel, thus removing the need for this
assumption. As a side effect, it lifts an unjustified limitation
by allowing HANDLERs to be kept open when entering LOCK TABLES
mode.
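A minimal sketch of the failing scenario (names hypothetical):
  CREATE TABLE t1 (a INT);
  CREATE FUNCTION f1() RETURNS INT RETURN 1;
  HANDLER t1 OPEN;
  SELECT f1();  # a prelocked statement with an open HANDLER hit the
                # assert in THD::enter_locked_tables_mode()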
causing crashes!
Adding a SPATIAL INDEX on a non-geometrical column caused a
segmentation fault when the table was subsequently
inserted into.
A check was added to mysql_prepare_create_table() to explicitly
verify whether non-geometrical columns are used in a spatial
index, and to throw an error if so.
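For example, a statement like the following (schema hypothetical)
is now rejected at CREATE time instead of crashing on a later
insert:
  CREATE TABLE t1 (a INT NOT NULL, SPATIAL INDEX (a)) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (1);  # previously segfaulted here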
corruption and crash results
An index creation statement where the index key
is larger/wider than the column it references
should throw an error.
A statement like:
CREATE TABLE t1 (a CHAR(1), PRIMARY KEY (A(255)))
did not error, but a segmentation fault followed when an insertion
was attempted on the table.
The partial key validation clause has been restructured to
(hopefully) better document which uses of partial keys are valid.
This patch introduces timeouts for metadata locks.
The timeout is specified in seconds using the new dynamic system
variable "lock_wait_timeout" which has both GLOBAL and SESSION
scopes. Allowed values range from 1 to 31536000 seconds (= 1 year).
The default value is 1 year.
The new server parameter "lock-wait-timeout" can be used to set
the default value upon server startup.
"lock_wait_timeout" applies to all statements that use metadata locks.
These include DML and DDL operations on tables, views, stored procedures
and stored functions. They also include LOCK TABLES, FLUSH TABLES WITH
READ LOCK and HANDLER statements.
The patch also changes thr_lock.c code (table data locks used by MyISAM
and other simplistic engines) to use the same system variable.
InnoDB row locks are unaffected.
One exception to the handling of the "lock_wait_timeout" variable
is delayed inserts. All delayed inserts are executed with a timeout
of 1 year regardless of the setting for the global variable. As the
connection issuing the delayed insert gets no notification of
delayed insert timeouts, we want to avoid unnecessary timeouts.
It's important to note that the timeout value is used for each lock
acquired and that one statement can take more than one lock.
A statement can therefore block for longer than the lock_wait_timeout
value before reporting a timeout error. When a lock timeout
occurs, ER_LOCK_WAIT_TIMEOUT is reported.
Test case added to lock_multi.test.
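Usage sketch:
  # per-session timeout for subsequent statements, in seconds
  SET SESSION lock_wait_timeout = 10;
  # default for new connections (range 1 .. 31536000)
  SET GLOBAL lock_wait_timeout = 3600;
  # a statement that cannot get a metadata lock within the timeout
  # fails with ER_LOCK_WAIT_TIMEOUT instead of waiting indefinitely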
rqg_mdl_stability".
When the start of a statement's wait on a metadata lock created
more than one loop in the waiters graph, the server might have
entered a deadlock condition.
The problem was that in the case described above the MDL deadlock
detector had to perform several searches for a deadlock but forgot
to reset the Deadlock_detection_context before performing a new
search.
Failure to do so broke the assumption, in the code responsible for
choosing a victim, that if Deadlock_detection_context::victim is
set we also hold a read lock on m_waiting_for_lock for this
context. As a result this lock could be unlocked more times than
it was acquired, which corrupted the rwlock's state and led to a
server deadlock.
This fix ensures that such a reset is done before each attempt to
find a deadlock.
and MDL".
Concurrent execution of a multi-DELETE statement and an ALTER
TABLE statement affecting one of the tables used in the
multi-DELETE sometimes led to a deadlock.
Similar deadlocks might have occurred when one performed
INSERT/UPDATE/DELETE on a view and concurrently executed ALTER
TABLE on the view's underlying table, or when one concurrently
executed TRUNCATE TABLE on an InnoDB table and ALTER TABLE on the
same table.
These deadlocks were caused by a discrepancy between the types of
metadata and thr_lock.c locks acquired by those statements.
What happened was that the multi-DELETE/TRUNCATE/DML-through-the-
view statement in the first connection acquired an SR metadata
lock on a table; ALTER TABLE would then come in from the second
connection, acquire an SNW metadata lock and a TL_WRITE_ALLOW_READ
thr_lock.c lock, and start waiting for the first connection during
lock upgrade. After that the statement in the first connection
would try to acquire a TL_WRITE lock on the table and would start
waiting for the second connection, creating a deadlock.
This patch solves the problem by ensuring that we acquire an SW
metadata lock in all cases in which we acquire a write thr_lock.c
lock. This guarantees that deadlocks like the one described above
won't occur, since all lock conflicts in such situations are
resolved within the MDL subsystem.
The patch also adds an assert which should guarantee that such
situations won't arise in the future.
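A sketch of the multi-DELETE variant (tables hypothetical; the
interleaving happens inside statement execution):
  # connection 1: acquires SR metadata locks first, thr_lock.c
  # write locks later
  DELETE t1 FROM t1, t2 WHERE t1.a = t2.a;
  # connection 2, in between: takes SNW + TL_WRITE_ALLOW_READ, then
  # waits for connection 1 while upgrading to exclusive
  ALTER TABLE t1 ADD COLUMN b INT;
  # connection 1 then waits for connection 2's TL_WRITE_ALLOW_READ
  # when requesting TL_WRITE on t1 -> deadlock before the fix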
failed on HANDLER + I_S
This assert was triggered when an I_S query tried to acquire a
metadata lock on a table which was already locked by a HANDLER
statement in the same connection.
First the HANDLER took an MDL_SHARED lock. Afterwards, the I_S query
requested an MDL_SHARED_HIGH_PRIO lock. The existing MDL_SHARED ticket
is found by find_ticket() since it satisfies
ticket->has_stronger_or_equal_type(mdl_request->type), as MDL_SHARED
and MDL_SHARED_HIGH_PRIO have equal strength, just different priorities.
However, two later asserts check lock type strength using relational
operators (>= and <=) rather than MDL_ticket::has_stronger_or_equal_type().
These asserts were triggered since MDL_SHARED >= MDL_SHARED_HIGH_PRIO
is false (the two types are mapped to 1 and 2 respectively).
This patch updates the asserts to use MDL_ticket::has_stronger_or_equal_type()
rather than relational operators to check lock type strength.
Test case added to include/handler.inc.
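A minimal sketch of the scenario (table name hypothetical):
  CREATE TABLE t1 (a INT);
  HANDLER t1 OPEN;  # takes MDL_SHARED on t1
  # same connection: the I_S query requests MDL_SHARED_HIGH_PRIO on
  # t1 and finds the existing MDL_SHARED ticket
  SELECT table_name FROM information_schema.tables
    WHERE table_schema = 'test' AND table_name = 't1';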
Change the error code for ER_WARN_I_S_SKIPPED_TABLE so as not to
upset the tests that rely on ER_SLAVE_CONVERSION_ERROR having
error code 1667.
Fix a merge bug with binlogging of CREATE TABLE (temporary tables).
HANDLER OPEN
The problem was an overly restrictive assert in the code for
HANDLER ... OPEN and HANDLER ... READ that checked table->next
to verify that we didn't open views or merge tables.
This pointer is also used to link temporary tables together
(see thd->temporary_tables). In this case TABLE::next can be
set even if we're trying to open a single table.
This patch adjusts the two asserts to also check for the presence
of temporary tables.
Test case added to handler_myisam.test.
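A sketch of the case (table names hypothetical):
  CREATE TEMPORARY TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TEMPORARY TABLE t2 (a INT) ENGINE=MyISAM;
  # with several temporary tables open, TABLE::next links them in
  # thd->temporary_tables, which tripped the old assert
  HANDLER t2 OPEN;
  HANDLER t2 READ FIRST;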
This was a deadlock between ALTER TABLE and another DML statement
(or LOCK TABLES ... READ). ALTER TABLE would wait trying to upgrade
its lock to MDL_EXCLUSIVE and the DML statement would wait trying
to acquire a TL_READ_NO_INSERT table level lock.
This could happen if one connection first acquired an
MDL_SHARED_READ lock on a table. ALTER TABLE was then started in
another connection; it eventually blocks trying to upgrade to
MDL_EXCLUSIVE while holding a TL_WRITE_ALLOW_READ table level
lock.
If the first connection then tries to acquire TL_READ_NO_INSERT,
it will block and we have a deadlock since neither connection can
proceed.
This patch fixes the problem by allowing TL_READ_NO_INSERT
locks to be granted if another connection holds TL_WRITE_ALLOW_READ
on the same table. This will allow the DML statement to proceed
such that it eventually can release its MDL lock which in turn
makes ALTER TABLE able to proceed.
Note that TL_READ_NO_INSERT was already partially compatible with
TL_WRITE_ALLOW_READ as the latter would be granted if the former
lock was held. This patch just makes the opposite true as well.
Also note that since ALTER TABLE takes an upgradable MDL lock,
there will be no starvation of ALTER TABLE statements by
statements acquiring TL_READ or TL_READ_NO_INSERT.
Test case added to lock_sync.test.
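A sketch of the deadlock (tables hypothetical):
  # connection 1: the transaction keeps MDL_SHARED_READ on t1
  START TRANSACTION;
  SELECT * FROM t1;
  # connection 2: holds TL_WRITE_ALLOW_READ while blocked upgrading
  # to MDL_EXCLUSIVE behind connection 1's metadata lock
  ALTER TABLE t1 ADD COLUMN b INT;
  # connection 1: needs TL_READ_NO_INSERT on t1; before the fix this
  # blocked on connection 2, completing the deadlock
  INSERT INTO t2 SELECT * FROM t1;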
failed in open_ltable()
The problem was overly restrictive asserts that enforced that
open_ltable() was called without any active HANDLERs, LOCK TABLES
or global read locks.
However, this can happen in several cases when opening system
tables. The assert would, for example, be triggered when DROP
FUNCTION was called from a connection with active HANDLERs, as
this would cause open_ltable() to be called for mysql.proc.
The assert could also be triggered when using the table-based
general log (mysql.general_log).
This patch removes the asserts since they will be triggered in
several legitimate cases and because the asserts are no longer
relevant due to changes in how locks are released.
The patch also fixes set_needs_thr_lock_abort(), which previously
ignored its parameter and always set the member variable to TRUE.
Test case added to mdl_sync.test.
Thanks to Dmitry Lenev for help with this bug!
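One of the legitimate cases, sketched with hypothetical names:
  CREATE TABLE t1 (a INT);
  CREATE FUNCTION f1() RETURNS INT RETURN 1;
  HANDLER t1 OPEN;
  DROP FUNCTION f1;  # opens the mysql.proc system table through
                     # open_ltable() while a HANDLER is active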
m_tickets.front() == m_trans_sentinel'".
A debug build of the server crashed due to an assert failure in
the MDL subsystem when one tried to execute a multi-table REPAIR
or OPTIMIZE in autocommit=0 mode.
The assert failure occurred when the multi-table REPAIR or
OPTIMIZE started processing the second table from its table list
and tried to acquire an upgradable metadata lock on it. The cause
of the failure was MDL locks left over from processing of the
previous table. It turned out that in autocommit=0 mode,
close_thread_tables(), which happens at the end of table
processing, doesn't release metadata locks.
This fix solves the problem by releasing the locks explicitly with
an MDL_context::release_trans_locks() call.
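A reproduction sketch (table names hypothetical):
  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE t2 (a INT) ENGINE=MyISAM;
  SET autocommit = 0;
  REPAIR TABLE t1, t2;  # locks left over from t1 hit the assert
                        # when acquiring the upgradable lock on t2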
fulltext search and row op.
The code that searches for usable fulltext indexes looks for
certain special predicate layouts. While doing so, it did not
check the number of columns of the expressions it tried to
evaluate, and since row expressions can't return a single scalar
value, this caused a crash.
Fixed by checking that the expressions are scalar (in addition to
being constant) before calling the Item::val_xxx() methods.
Fix Bug#50555 "handler commands crash server in my_hash_first()"
as a post-merge fix (the new handler tests are not passing
otherwise).
- in hash.c, don't call calc_hash() if !my_hash_inited().
- add tests and results for the test case for Bug#50555