To 5.x Release Notes
====================
This is a backport of BUG#23300 into 5.1 GA.
Original cset revid (in betony):
luis.soares@sun.com-20090929140901-s4kjtl3iiyy4ls2h
Description
===========
When using replication, the slave will not write to the slow query
log any queries replicated from the master, even if the option
"--log-slow-slave-statements" is set and these queries take more
than "long_query_time" to execute.
In order to log slow queries from the replication thread one needs to
set --log-slow-slave-statements, so that the SQL thread is
initialized with the correct switch. Although setting this flag
correctly configures the slave thread option to log slow queries,
there is an issue with the condition that is used to check
whether to log the slow query or not. When replaying binlog
events the statement contains a SET TIMESTAMP clause, which forces
the slow-logging condition check to fail. Consequently, slow query
logging does not take place.
This patch addresses this issue by removing the second condition
from log_slow_statements, as it prevents slow queries from being
logged and seems to be deprecated.
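For illustration, here is a minimal, self-contained sketch of the failure
mode described above. The structure and names (ThreadState, exec_usec,
should_log_slow_*) are assumptions made for the example, not the actual
server code; the point is only that an extra "no user-set timestamp"
operand defeats the check for every replicated statement.

  #include <cstdint>
  #include <cstdio>

  // Toy model of the relevant per-thread state (illustrative names only).
  struct ThreadState {
    bool     enable_slow_log;   // set from --log-slow-slave-statements for the SQL thread
    uint64_t user_time;         // non-zero when the statement ran SET TIMESTAMP
    uint64_t exec_usec;         // measured statement execution time
    uint64_t long_query_time;   // slow-log threshold
  };

  // Before the fix: the "user_time == 0" operand makes the check fail for
  // every replicated statement, because replayed binlog events carry
  // SET TIMESTAMP and therefore always set user_time.
  static bool should_log_slow_old(const ThreadState &thd) {
    return thd.enable_slow_log && thd.user_time == 0 &&
           thd.exec_usec > thd.long_query_time;
  }

  // After the fix: the timestamp-based operand is dropped, so slow
  // replicated statements are logged as long as the switch is on.
  static bool should_log_slow_new(const ThreadState &thd) {
    return thd.enable_slow_log && thd.exec_usec > thd.long_query_time;
  }

  int main() {
    ThreadState replicated = {true, 1262304000u, 5000000u, 1000000u};
    std::printf("old check logs it: %d, new check logs it: %d\n",
                should_log_slow_old(replicated), should_log_slow_new(replicated));
    return 0;
  }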
Changed the metadata locking subsystem to use a mutex per lock and a
condition variable per context instead of one mutex and one condition
variable for the whole subsystem.
This should increase concurrency in this subsystem.
It also opens the way for further changes which are necessary to solve
such bugs as bug #46272 "MySQL 5.4.4, new MDL: unnecessary deadlock"
and bug #37346 "innodb does not detect deadlock between update and alter
table".
Two other notable changes done by this patch:
- MDL subsystem no longer implicitly acquires global intention exclusive
metadata lock when per-object metadata lock is acquired. Now this has
to be done by explicit calls outside of MDL subsystem.
- Instead of using separate MDL_context for opening system tables/tables
for purposes of I_S we now create MDL savepoint in the main context
before opening tables and rollback to this savepoint after closing
them. This means that it is now possible to get an ER_LOCK_DEADLOCK
error even outside of a transaction. This might happen in the unlikely
case when one runs DDL on one of the system tables while also running
DDL on some other tables. Cases when this ER_LOCK_DEADLOCK error is not
justified will be addressed by the advanced deadlock detector for the
MDL subsystem which we plan to implement.
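A rough structural sketch of the layout described above: one mutex per
lock object and one condition variable per waiting context, instead of a
single LOCK_mdl/COND_mdl pair. The types and members below are
illustrative assumptions, not the real MDL classes.

  #include <condition_variable>
  #include <list>
  #include <mutex>

  struct MdlContext;

  // One mutex per lock object, replacing the old subsystem-wide LOCK_mdl.
  struct MdlLock {
    std::mutex m_mutex;
    std::list<MdlContext*> granted;
    std::list<MdlContext*> waiting;
  };

  // One condition variable per context, replacing the shared COND_mdl.
  struct MdlContext {
    std::mutex              m_wakeup_mutex;
    std::condition_variable m_ctx_wakeup_cond;
    bool                    m_signalled = false;

    void wait() {
      std::unique_lock<std::mutex> lk(m_wakeup_mutex);
      m_ctx_wakeup_cond.wait(lk, [this] { return m_signalled; });
      m_signalled = false;
    }
    void awake() {
      std::lock_guard<std::mutex> lk(m_wakeup_mutex);
      m_signalled = true;
      m_ctx_wakeup_cond.notify_one();
    }
  };

  // Releasing a lock touches only that lock's mutex and wakes the contexts
  // queued on it, instead of broadcasting to every waiter in the subsystem.
  inline void release_and_wake(MdlLock &lock, MdlContext &owner) {
    std::lock_guard<std::mutex> lk(lock.m_mutex);
    lock.granted.remove(&owner);
    for (MdlContext *waiter : lock.waiting)
      waiter->awake();
  }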
mysql-test/include/handler.inc:
Adjusted handler_myisam.test and handler_innodb.test to the fact that
exclusive metadata locks on tables are now acquired according to
alphabetical order of fully qualified table names instead of order
in which tables are mentioned in statement.
mysql-test/r/handler_innodb.result:
Adjusted handler_myisam.test and handler_innodb.test to the fact that
exclusive metadata locks on tables are now acquired according to
alphabetical order of fully qualified table names instead of order
in which tables are mentioned in statement.
mysql-test/r/handler_myisam.result:
Adjusted handler_myisam.test and handler_innodb.test to the fact that
exclusive metadata locks on tables are now acquired according to
alphabetical order of fully qualified table names instead of order
in which tables are mentioned in statement.
mysql-test/r/mdl_sync.result:
Adjusted mdl_sync.test to the fact that exclusive metadata locks on
tables are now acquired according to alphabetical order of fully
qualified table names instead of order in which tables are mentioned
in statement.
mysql-test/t/mdl_sync.test:
Adjusted mdl_sync.test to the fact that exclusive metadata locks on
tables are now acquired according to alphabetical order of fully
qualified table names instead of order in which tables are mentioned
in statement.
sql/events.cc:
Instead of using separate MDL_context for opening system tables we now
create MDL savepoint in the main context before opening such tables
and rollback to this savepoint after closing them. To support this
change methods of THD responsible for saving/restoring open table
state were changed to use Open_tables_backup class which in addition
to Open_table_state has a member for this savepoint. As result code
opening/closing system tables was changed to use Open_tables_backup
instead of Open_table_state class as well.
sql/ha_ndbcluster.cc:
Since manipulations with the open table state no longer install a proxy
MDL_context, it no longer makes sense to perform them in order to
satisfy the assert in mysql_rm_tables_part2(). Removed them per
agreement with the Cluster team. This does not break the test suite,
since the scenario in which the deadlock can occur and the assertion
fails is not covered by tests.
sql/lock.cc:
MDL subsystem no longer implicitly acquires global intention exclusive
metadata lock when per-object exclusive metadata lock is acquired.
Now this has to be done by explicit calls outside of MDL subsystem.
sql/log.cc:
Instead of using separate MDL_context for opening system tables we now
create MDL savepoint in the main context before opening such tables
and rollback to this savepoint after closing them. To support this
change methods of THD responsible for saving/restoring open table
state were changed to use Open_tables_backup class which in addition
to Open_table_state has a member for this savepoint. As result code
opening/closing system tables was changed to use Open_tables_backup
instead of Open_table_state class as well.
sql/mdl.cc:
Changed metadata locking subsystem to use mutex per lock and condition
variable per context instead of one mutex and one conditional variable
for the whole subsystem.
Changed approach to handling of global metadata locks. Instead of
implicitly acquiring intention exclusive locks when user requests
per-object upgradeable or exclusive locks now we require them to be
acquired explicitly in the same way as ordinary metadata locks.
In fact global lock are now ordinary metadata locks in new GLOBAL
namespace.
To implement these changes:
- Removed LOCK_mdl mutex and COND_mdl condition variable.
- Introduced MDL_lock::m_mutex mutexes which protect individual lock
objects.
- Replaced mdl_locks hash with MDL_map class, which has hash for
MDL_lock objects as a member and separate mutex which protects this
hash. Methods of this class allow to find(), find_or_create() or
remove() MDL_lock objects in concurrency-friendly fashion (i.e.
for most common operation, find_or_create(), we don't acquire
MDL_lock::m_mutex while holding MDL_map::m_mutex. Thanks to MikaelR
for this idea and benchmarks!). Added three auxiliary members to
MDL_lock class (m_is_destroyed, m_ref_usage, m_ref_release) to
support this concurrency-friendly behavior.
- Introduced MDL_context::m_ctx_wakeup_cond condition variable to be
used for waiting until this context's pending request can be
satisfied or its thread has to perform actions to resolve potential
deadlock. Contexts that want to wait add a ticket corresponding to the
request to the appropriate queue of waiters in the MDL_lock object, so
they can be noticed when other contexts change the state of the lock
and be woken up by them through signalling on
MDL_context::m_ctx_wakeup_cond.
As a consequence, MDL_ticket objects have to be used for any waiting
in the metadata locking subsystem, including the waiting which happens
in the MDL_context::wait_for_locks() method.
Another consequence is that MDL_context is no longer copyable and
can't be saved/restored when working with system tables.
- Made MDL_lock an abstract class, which delegates specifying exact
compatibility matrix to its descendants. Added MDL_global_lock child
class for global lock (The old is_lock_type_compatible() method
became can_grant_lock() method of this class). Added MDL_object_lock
class to represent per-object lock (The old MDL_lock::can_grant_lock()
became its method). Choice between two classes happens based on MDL
namespace in MDL_lock::create() method.
- Got rid of the MDL_lock::type member as its meaning became ambiguous
for global locks.
- To simplify waking up of contexts waiting for a lock, the waiting
queue in the MDL_lock class was split into two queues: one for pending
requests for exclusive (including intention exclusive) locks and
another for requests for shared locks.
- Added virtual wake_up_waiters() method to MDL_lock, MDL_global_lock and
MDL_object_lock classes which allows to wake up waiting contexts after
state of lock changes. Replaced old duplicated code with calls to this
method.
- Adjusted MDL_context::try_acquire_shared_lock()/exclusive_lock()/
global_shared_lock(), MDL_ticket::upgrade_shared_lock_to_exclusive_lock()
and MDL_context::release_ticket() methods to use MDL_map and
MDL_lock::m_mutex instead of single LOCK_mdl mutex and wake up
waiters according to the approach described above. The latter method
also was renamed to MDL_context::release_lock().
- Changed MDL_context::try_acquire_shared_lock()/exclusive_lock() and
release_lock() not to handle global locks. They are now supposed to
be taken explicitly like ordinary metadata locks.
- Added helper MDL_context::try_acquire_global_intention_exclusive_lock()
and acquire_global_intention_exclusive_lock() methods.
- Moved common code from MDL_context::acquire_global_shared_lock() and
acquire_global_intention_exclusive_lock() to new method -
MDL_context::acquire_lock_impl().
- Moved common code from MDL_context::try_acquire_shared_lock(),
try_acquire_global_intention_exclusive_lock()/exclusive_lock()
to MDL_context::try_acquire_lock_impl().
- Since acquiring several exclusive locks can no longer happen under a
single LOCK_mdl mutex, the approach to it had to be changed. Now we
acquire them one by one, in alphabetical order, to avoid deadlocks
(see the sketch after this list). Changed
MDL_context::acquire_exclusive_locks() accordingly (as part of this
change, moved the code responsible for acquiring a single exclusive
lock to the new MDL_context::acquire_exclusive_lock_impl() method).
- Since we no longer have a single LOCK_mdl mutex protecting all
MDL_context::m_is_waiting_in_mdl members, using these members to
determine whether we have really woken up a context holding a
conflicting shared lock became inconvenient. Got rid of this member
and changed the notify_shared_lock() helper function and the process
of acquiring/upgrading to an exclusive lock not to rely on such
information. Now in MDL_context::acquire_exclusive_lock_impl() and
MDL_ticket::upgrade_shared_lock_to_exclusive_lock() we simply re-try
to wake up threads holding conflicting shared locks after a small
timeout.
- Adjusted MDL_context::can_wait_lead_to_deadlock() and
MDL_ticket::has_pending_conflicting_lock() to use per-lock
mutexes instead of LOCK_mdl. To do this introduced
MDL_lock::has_pending_exclusive_lock() method.
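The sketch referenced in the list above: acquiring exclusive locks one by
one in a fixed (alphabetical) key order means every context uses the same
global order, so two statements taking overlapping sets of exclusive locks
cannot deadlock against each other. The types are illustrative stand-ins,
not the real MDL API.

  #include <algorithm>
  #include <cstddef>
  #include <string>
  #include <vector>

  // Illustrative stand-in for an MDL_request with its MDL_key.
  struct Request {
    std::string key;                 // fully qualified table name, e.g. "db.table"
    bool acquire() { /* take the per-lock mutex, enqueue, wait... */ return true; }
    void release() {}
  };

  // Acquire all requested exclusive locks one by one in alphabetical key
  // order. Because every context uses the same order, lock-order deadlocks
  // between two such contexts are impossible.
  inline bool acquire_exclusive_locks(std::vector<Request*> &requests) {
    std::sort(requests.begin(), requests.end(),
              [](const Request *a, const Request *b) { return a->key < b->key; });
    for (std::size_t i = 0; i < requests.size(); ++i) {
      if (!requests[i]->acquire()) {
        while (i > 0) requests[--i]->release();   // roll back on failure
        return false;
      }
    }
    return true;
  }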
sql/mdl.h:
Changed metadata locking subsystem to use mutex per lock and condition
variable per context instead of one mutex and one conditional variable
for the whole subsystem. In order to implement this change:
- Added the MDL_key::cmp() method to be able to sort MDL_key objects
alphabetically. Changed the length fields in the MDL_key class to
uint16, as 16 bits are enough for the length of any key.
- Changed MDL_ticket::get_ctx() to return pointer to non-const
object in order to be able to use MDL_context::awake() method
for such contexts.
- Got rid of unlocked versions of can_wait_lead_to_deadlock()/
has_pending_conflicting_lock() methods in MDL_context and
MDL_ticket. We no longer have a single mutex which protects all
locks, thus one always has to use the versions of these methods
which acquire per-lock mutexes.
- MDL_request_list type of list now counts its elements.
- Added MDL_context::m_ctx_wakeup_cond condition variable to be used
for waiting until this context's pending request can be satisfied
or its thread has to perform actions to resolve potential deadlock.
Added awake() method to wake up context from such wait.
Addition of the condition variable made MDL_context uncopyable.
As a result we can no longer save/restore the MDL_context when working
with system tables. Instead we create an MDL savepoint before opening
those tables and roll back to it once they are closed (see the sketch
after this list).
- MDL_context::release_ticket() became release_lock() method.
- Added auxiliary MDL_context::acquire_exclusive_lock_impl() method
which does all necessary work to acquire exclusive lock on one object
but should not be used directly as it does not enforce any asserts
ensuring that no deadlocks are possible.
- Since we no longer need to know whether a thread trying to acquire an
exclusive lock managed to wake up any threads holding conflicting
shared locks (as, anyway, we will try to wake up such threads again
shortly), the MDL_context::m_is_waiting_in_mdl member became
unnecessary and notify_shared_lock() no longer needs to be a friend
of MDL_context.
Changed approach to handling of global metadata locks. Instead of
implicitly acquiring intention exclusive locks when user requests
per-object upgradeable or exclusive locks now we require them to be
acquired explicitly in the same way as ordinary metadata locks.
- Added new GLOBAL namespace for such locks.
- Added a new type of lock to be requested: MDL_INTENTION_EXCLUSIVE.
- Added MDL_context::try_acquire_global_intention_exclusive_lock()
and acquire_global_intention_exclusive_lock() methods.
- Moved common code from MDL_context::acquire_global_shared_lock()
and acquire_global_intention_exclusive_lock() to new method -
MDL_context::acquire_lock_impl().
- Moved common code from MDL_context::try_acquire_shared_lock(),
try_acquire_global_intention_exclusive_lock()/exclusive_lock()
to MDL_context::try_acquire_lock_impl().
- Added helper MDL_context::is_global_lock_owner() method to be
able easily to find what kind of global lock this context holds.
- MDL_context::m_has_global_shared_lock became unnecessary as
global read lock is now represented by ordinary ticket.
- Removed assert in MDL_context::set_lt_or_ha_sentinel() which became
false for cases when we execute LOCK TABLES under global read lock
mode.
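The savepoint sketch referenced in the list above: since MDL_context can
no longer be copied, the system-table code paths remember how many locks
were held before opening the tables and release only the ones acquired
after that point. The names below are illustrative, not the real API.

  #include <cstddef>
  #include <vector>

  struct Ticket { /* one acquired metadata lock (illustrative) */ };

  class Context {
    std::vector<Ticket*> m_tickets;          // locks acquired so far, in order
  public:
    typedef std::size_t Savepoint;

    Savepoint savepoint() const { return m_tickets.size(); }

    void acquire(Ticket *t) { m_tickets.push_back(t); }

    // Release only the locks taken after the savepoint (e.g. those acquired
    // while a system table was open), keeping earlier transactional locks.
    void rollback_to_savepoint(Savepoint sp) {
      while (m_tickets.size() > sp) {
        delete m_tickets.back();               // stands in for releasing the lock
        m_tickets.pop_back();
      }
    }
  };

  // Usage pattern mirroring the open/close of a system table:
  //   Context::Savepoint sp = ctx.savepoint();
  //   ... open the table, acquiring metadata locks into ctx ...
  //   ctx.rollback_to_savepoint(sp);          // on close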
sql/mysql_priv.h:
Instead of using separate MDL_context for opening system tables we now
create MDL savepoint in the main context before opening such tables
and rollback to this savepoint after closing them. To support this
change methods of THD responsible for saving/restoring open table
state were changed to use Open_tables_backup class which in addition
to Open_table_state has a member for this savepoint. As result calls
opening/closing system tables were changed to use Open_tables_backup
instead of Open_table_state class as well.
sql/sp.cc:
Instead of using separate MDL_context for opening system tables we now
create MDL savepoint in the main context before opening such tables
and rollback to this savepoint after closing them. To support this
change methods of THD responsible for saving/restoring open table
state were changed to use Open_tables_backup class which in addition
to Open_table_state has a member for this savepoint. As result code
opening/closing system tables was changed to use Open_tables_backup
instead of Open_table_state class as well.
sql/sp.h:
Instead of using separate MDL_context for opening system tables we now
create MDL savepoint in the main context before opening such tables
and rollback to this savepoint after closing them. To support this
change methods of THD responsible for saving/restoring open table
state were changed to use Open_tables_backup class which in addition
to Open_table_state has a member for this savepoint. As result code
opening/closing system tables was changed to use Open_tables_backup
instead of Open_table_state class as well.
sql/sql_base.cc:
close_thread_tables():
Since we no longer use separate MDL_context for opening system
tables we need to avoid releasing all transaction locks when
closing system table. Releasing metadata lock on system table
is now responsibility of THD::restore_backup_open_tables_state().
open_table_get_mdl_lock(),
Open_table_context::recover_from_failed_open():
MDL subsystem no longer implicitly acquires global intention exclusive
metadata lock when per-object upgradable or exclusive metadata lock is
acquired. So this has to be done explicitly from these calls.
Changed Open_table_context class to store MDL_request object for
global intention exclusive lock acquired when opening tables.
open_table():
Do not release metadata lock if we have failed to open table as
this lock might have been acquired by one of previous statements
in transaction, and therefore should not be released.
open_system_tables_for_read()/close_system_tables()/
open_performance_schema_table():
Instead of using separate MDL_context for opening system tables we now
create MDL savepoint in the main context before opening such tables
and rollback to this savepoint after closing them. To support this
change methods of THD responsible for saving/restoring open table
state were changed to use Open_tables_backup class which in addition
to Open_table_state has a member for this savepoint. As result code
opening/closing system tables was changed to use Open_tables_backup
instead of Open_table_state class as well.
close_performance_schema_table():
Got rid of duplicated code.
sql/sql_class.cc:
Instead of using separate MDL_context for opening system tables we now
create MDL savepoint in the main context before opening such tables
and rollback to this savepoint after closing them. To support this
change methods of THD responsible for saving/restoring open table
state were changed to use Open_tables_backup class which in addition
to Open_table_state has a member for this savepoint. Also releasing
metadata lock on system table is now responsibility of
THD::restore_backup_open_tables_state().
Adjusted assert in THD::cleanup() to take into account fact that now
we also use MDL sentinel for global read lock.
sql/sql_class.h:
Instead of using separate MDL_context for opening system tables we now
create MDL savepoint in the main context before opening such tables
and rollback to this savepoint after closing them. As result:
- 'mdl_context' member was moved out of Open_tables_state to THD class.
enter_locked_tables_mode()/leave_locked_tables_mode() had to follow.
- Methods of THD responsible for saving/restoring open table state were
changed to use Open_tables_backup class which in addition to
Open_table_state has a member for this savepoint.
Changed Open_table_context class to store MDL_request object for
global intention exclusive lock acquired when opening tables.
sql/sql_delete.cc:
MDL subsystem no longer implicitly acquires global intention exclusive
metadata lock when per-object exclusive metadata lock is acquired.
Now this has to be done by explicit calls outside of MDL subsystem.
sql/sql_help.cc:
Instead of using separate MDL_context for opening system tables we now
create MDL savepoint in the main context before opening such tables
and rollback to this savepoint after closing them. To support this
change methods of THD responsible for saving/restoring open table
state were changed to use Open_tables_backup class which in addition
to Open_table_state has a member for this savepoint. As result code
opening/closing system tables was changed to use Open_tables_backup
instead of Open_table_state class as well.
sql/sql_parse.cc:
Adjusted the assert in reload_acl_and_cache() to the fact that the
global read lock now takes a full-blown metadata lock.
sql/sql_plist.h:
Added support for element counting to the I_P_List list template.
One can use policy classes to specify whether such counting is needed
for a particular list.
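A compact sketch of the policy-class idea: the counting behaviour is
chosen at compile time, so lists that do not need a count pay nothing for
it. The template below is an illustration, not the actual I_P_List
implementation.

  #include <cassert>
  #include <cstddef>

  // Counting policy: maintains an element count.
  struct Counted {
    std::size_t m_count = 0;
    void on_insert() { ++m_count; }
    void on_remove() { --m_count; }
    std::size_t count() const { return m_count; }
  };

  // No-op policy: the hooks compile away entirely.
  struct Uncounted {
    void on_insert() {}
    void on_remove() {}
  };

  template <class T, class CountPolicy = Uncounted>
  class IntrusiveList : public CountPolicy {
    T *m_head = nullptr;
  public:
    void push_front(T *elem) { elem->next = m_head; m_head = elem; this->on_insert(); }
    T *pop_front() {
      T *elem = m_head;
      if (elem) { m_head = elem->next; this->on_remove(); }
      return elem;
    }
  };

  // Example element and usage: only the counted variant exposes count().
  struct Node { Node *next = nullptr; };

  inline void example() {
    IntrusiveList<Node, Counted> waiters;      // e.g. a queue of MDL waiters
    Node a, b;
    waiters.push_front(&a);
    waiters.push_front(&b);
    assert(waiters.count() == 2);
  }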
sql/sql_show.cc:
Instead of using separate MDL_context for opening tables for I_S
purposes we now create MDL savepoint in the main context before
opening tables and rollback to this savepoint after closing them.
To support this and similar change for system tables methods of
THD responsible for saving/restoring open table state were changed
to use Open_tables_backup class which in addition to Open_table_state
has a member for this savepoint. As result code opening/closing tables
for I_S purposes was changed to use Open_tables_backup instead of
Open_table_state class as well.
sql/sql_table.cc:
mysql_rm_tables_part2():
Since the global intention exclusive metadata lock is now an ordinary
metadata lock, we can no longer rely on the fact that releasing MDL
locks on all tables releases all locks acquired by this routine.
So in non-LOCK-TABLES mode we have to explicitly release all the locks
we have acquired.
prepare_for_repair(), mysql_alter_table():
MDL subsystem no longer implicitly acquires global intention
exclusive metadata lock when per-object exclusive metadata lock
is acquired. Now this has to be done by explicit calls outside of
MDL subsystem.
sql/tztime.cc:
Instead of using separate MDL_context for opening system tables we now
create MDL savepoint in the main context before opening such tables
and rollback to this savepoint after closing them. To support this
change methods of THD responsible for saving/restoring open table
state were changed to use Open_tables_backup class which in addition
to Open_table_state has a member for this savepoint. As result code
opening/closing system tables was changed to use Open_tables_backup
instead of Open_table_state class as well.
Also changed code not to use special mechanism for open system tables
when it is not really necessary.
Bug#47343: InnoDB fails to clean-up after lock wait timeout on
REORGANIZE PARTITION
There were several problems which led to this, all related to bad
error handling.
1) There were several bugs preventing the ddl-log from being used for
cleaning up created files on error.
2) The error handling after copying the partition rows did not close
and unlock the tables, resulting in deletion of partitions
which were still in use, which led InnoDB to put the partitions to
be dropped into a background queue.
sql/ha_partition.cc:
Bug#47343: InnoDB fails to clean-up after lock wait timeout on
REORGANIZE PARTITION
Better error handling: if a partition has been created/opened/locked,
make sure it is unlocked and closed before returning an error.
Deletion of the newly created partition is handled by the ddl-log.
sql/sql_parse.cc:
Bug#47343: InnoDB fails to clean-up after lock wait timeout on
REORGANIZE PARTITION
Fix a bug found when experimenting, thd could really be NULL here,
as mentioned in the function header.
sql/sql_partition.cc:
Bug#47343: InnoDB fails to clean-up after lock wait timeout on
REORGANIZE PARTITION
Used the correct .frm shadow name to put into the ddl-log.
Really use the ddl-log to handle errors.
sql/sql_table.cc:
Bug#47343: InnoDB fails to clean-up after lock wait timeout on
REORGANIZE PARTITION
Fixes of the ddl-log when used as error recovery (no crash).
When executing an entry from memory (not read from disk)
the name_len was not set correctly.
A 'CREATE TABLE IF NOT EXISTS ... SELECT' statement was causing 'CREATE
TEMPORARY TABLE ...' to be written to the binary log in row-based
mode (a.k.a. RBR) when there was a temporary table with the same name,
because the 'CREATE TABLE ... SELECT' statement was executed as
'INSERT ... SELECT' into the temporary table. Since in RBR mode no
other statements related to temporary tables are written into the
binary log, this sometimes broke replication.
This patch changes the behavior of
'CREATE TABLE [IF NOT EXISTS] ... SELECT ...': it now ignores the
existence of a temporary table with the same name as the table being
created, and the statement is interpreted as an attempt to
create/insert into the base table. This makes the behavior of
'CREATE TABLE [IF NOT EXISTS] ... SELECT' consistent with
how ordinary 'CREATE TABLE' and 'CREATE TABLE ... LIKE' behave.
- Marked a couple of tests with --big
- Fixed xtradb/handler/ha_innodb.cc to call explain_filename()
storage/xtradb/handler/ha_innodb.cc:
Call explain_filename() to get proper names for partitioned tables
Conflicts:
Text conflict in .bzr-mysql/default.conf
Text conflict in mysql-test/suite/rpl/r/rpl_loaddata_fatal.result
Text conflict in mysql-test/suite/rpl/r/rpl_stm_log.result
Text conflict in mysql-test/t/mysqlbinlog.test
Text conflict in sql/sql_acl.cc
Text conflict in sql/sql_servers.cc
Text conflict in sql/sql_update.cc
Text conflict in support-files/mysql.spec.sh
In certain rare cases when a process was interrupted
during a FLUSH PRIVILEGES operation the diagnostic
area would be set to an error state but the function
responsible for the operation would still signal
success. This would lead to a debug assertion error
later on when the server would attempt to reset the
DA before sending the error message.
This patch fixes the issue by ensuring that
reload_acl_and_cache() always fails if an error
condition is raised.
The second issue was that a KILL could cause
a console error message which referred to a DA
state without first making sure that such a
state existed.
This patch fixes this issue in two different
places by first checking the DA state before
fetching the error message.
sql/sql_acl.cc:
* Make sure that there is an error to print before attempting to do so.
* Minor style change: change 1 to TRUE for clarity.
sql/sql_parse.cc:
* Always fail reload_acl_and_cache() if the query was killed.
sql/sql_servers.cc:
* Make sure that there is an error to print before attempting to do so.
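An illustrative sketch of the "check before printing" pattern applied in
sql_acl.cc and sql_servers.cc; the member names are assumptions for the
example rather than exact server code.

  #include <cstdio>

  // Toy diagnostics-area and session state (illustrative only).
  struct Diag {
    bool        has_error;
    const char *message;
  };
  struct Session {
    bool killed;
    Diag da;
  };

  // Before the fix the message was printed unconditionally, which is invalid
  // when the statement was killed but no error message was recorded.
  inline void report_reload_failure(const Session &thd) {
    if (thd.da.has_error)
      std::fprintf(stderr, "Fatal error: %s\n", thd.da.message);
    else if (thd.killed)
      std::fprintf(stderr, "Fatal error: statement was killed\n");
  }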
This was a deadlock between LOCK TABLES/CREATE DATABASE in one connection
and DROP DATABASE in another. It only happened if the table locked by
LOCK TABLES was in the database to be dropped. The deadlock is similar
to the one in Bug#48940, but with LOCK TABLES instead of an active
transaction.
The order of events needed to trigger the deadlock was:
1) Connection 1 locks table db1.t1 using LOCK TABLES. It will now
have a metadata lock on the table name.
2) Connection 2 issues DROP DATABASE db1. This will wait inside
the MDL subsystem for the lock on db1.t1 to go away. While waiting, it
will hold the LOCK_mysql_create_db mutex.
3) Connection 1 issues CREATE DATABASE (database name irrelevant).
This will hang trying to lock the same mutex. Since this is the connection
holding the metadata lock blocking Connection 2, we have a deadlock.
This deadlock would also happen for earlier trees without MDL, but
there DROP DATABASE would wait for a table to be removed from the
table definition cache.
This patch fixes the problem by prohibiting CREATE DATABASE in LOCK TABLES
mode. In the example above, this prevents Connection 1 from hanging trying
to get the LOCK_mysql_create_db mutex. Note that other commands that use
LOCK_mysql_create_db (ALTER/DROP DATABASE) are already prohibited in
LOCK TABLES mode.
Incompatible change: CREATE DATABASE is now disallowed in LOCK TABLES mode.
Test case added to schema.test.
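A minimal sketch of the kind of guard this fix adds (illustrative, not
the exact server code): refuse CREATE DATABASE while the connection is in
LOCK TABLES mode, before any attempt to take LOCK_mysql_create_db.

  #include <cstdio>

  // Toy connection state; the real check lives in the CREATE DATABASE code
  // path and raises an error such as ER_LOCK_OR_ACTIVE_TRANSACTION.
  struct Connection {
    bool locked_tables_mode;   // true between LOCK TABLES and UNLOCK TABLES
  };

  inline bool create_database(Connection &conn, const char *name) {
    if (conn.locked_tables_mode) {
      // Refusing here means the connection never blocks on
      // LOCK_mysql_create_db while holding metadata locks, which is what
      // produced the deadlock described above.
      std::fprintf(stderr,
                   "Cannot execute CREATE DATABASE %s in LOCK TABLES mode\n",
                   name);
      return false;
    }
    // ... acquire LOCK_mysql_create_db and create the database ...
    return true;
  }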
mysql-test/t/drop.test:
Updates the test for Bug#21216 by swapping the order of CREATE DATABASE
and LOCK TABLES. This is now needed as CREATE DATABASE is prohibited in
LOCK TABLES mode.
mysql-test/t/schema.test:
Test case for Bug#49988 added.
Also fixes a problem with the test for Bug#48940 where the result
would differ for embedded server.
MySQL handles the join syntax "JOIN ... USING( field1,
... )" and natural joins by building the same parse tree as
a corresponding join with an "ON t1.field1 = t2.field1 ..."
expression would produce. This parse tree was not cleaned up
properly in the following scenario. If a thread tries to
lock some tables and finds that the tables were dropped and
re-created while waiting for the lock, it cleans up column
references in the statement by means of a per-statement free
list. But if the statement was part of a stored procedure,
column references on the stored procedure's free list weren't
cleaned up and thus contained pointers to freed objects.
Fixed by adding a call to clean up the current prepared
statement's free list.
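An illustrative model of the leak described above, with names invented
for the example: items created while resolving the USING columns live on
a per-statement free list, and when the tables are re-created while the
statement waited for the lock, both the ad-hoc statement's list and the
enclosing stored program's list have to be cleaned.

  #include <vector>

  // Toy item/arena model (illustrative only).
  struct Item {
    bool fixed = true;                    // refers to a resolved, possibly stale field
    void cleanup() { fixed = false; }     // drop references to freed objects
  };

  struct Arena {
    std::vector<Item*> free_list;         // items created for this statement
  };

  // When the tables a statement waited for were dropped and re-created, every
  // item that may reference the old table objects must be reset. The bug was
  // cleaning only the ad-hoc statement arena and skipping the arena of the
  // enclosing stored program / prepared statement.
  inline void cleanup_on_reopen(Arena &runtime_arena, Arena *stmt_arena) {
    for (Item *item : runtime_arena.free_list) item->cleanup();
    if (stmt_arena && stmt_arena != &runtime_arena)
      for (Item *item : stmt_arena->free_list) item->cleanup();   // the added call
  }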
mysql-test/r/sp_sync.result:
Bug#48157: Test result
mysql-test/t/sp_sync.test:
Bug#48157: Test case
sql/item.h:
Bug#48157: Commented field.
sql/sql_parse.cc:
Bug#48157: Commented function.
sql/sql_update.cc:
Bug#48157: fix
Conflicts:
Text conflict in .bzr-mysql/default.conf
Text conflict in mysql-test/extra/rpl_tests/rpl_loaddata.test
Text conflict in mysql-test/r/mysqlbinlog2.result
Text conflict in mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result
Text conflict in mysql-test/suite/binlog/r/binlog_unsafe.result
Text conflict in mysql-test/suite/rpl/r/rpl_insert_id.result
Text conflict in mysql-test/suite/rpl/r/rpl_loaddata.result
Text conflict in mysql-test/suite/rpl/r/rpl_stm_auto_increment_bug33029.result
Text conflict in mysql-test/suite/rpl/r/rpl_udf.result
Text conflict in mysql-test/suite/rpl/t/rpl_slow_query_log.test
Text conflict in sql/field.h
Text conflict in sql/log.cc
Text conflict in sql/log_event.cc
Text conflict in sql/log_event_old.cc
Text conflict in sql/mysql_priv.h
Text conflict in sql/share/errmsg.txt
Text conflict in sql/sp.cc
Text conflict in sql/sql_acl.cc
Text conflict in sql/sql_base.cc
Text conflict in sql/sql_class.h
Text conflict in sql/sql_db.cc
Text conflict in sql/sql_delete.cc
Text conflict in sql/sql_insert.cc
Text conflict in sql/sql_lex.cc
Text conflict in sql/sql_lex.h
Text conflict in sql/sql_load.cc
Text conflict in sql/sql_table.cc
Text conflict in sql/sql_update.cc
Text conflict in sql/sql_view.cc
Conflict adding files to storage/innobase. Created directory.
Conflict because storage/innobase is not versioned, but has versioned children. Versioned directory.
Conflict adding file storage/innobase. Moved existing file to storage/innobase.moved.
Conflict adding files to storage/innobase/handler. Created directory.
Conflict because storage/innobase/handler is not versioned, but has versioned children. Versioned directory.
Contents conflict in storage/innobase/handler/ha_innodb.cc