INFILE".
Attempts to execute an INSERT statement on a MEMORY table, where the statement
invoked a trigger or called a stored function that tried to perform a
LOW_PRIORITY update on the table being inserted into, caused debug servers to
abort with an assertion failure. On non-debug servers such INSERTs failed with
"Can't update table t1 in stored function/trigger because it is already used
by statement which invoked this stored function/trigger", as expected.
The problem was that, in the above scenario, TL_WRITE_CONCURRENT_INSERT
is converted to TL_WRITE inside the thr_lock() function, since the MEMORY
engine does not support concurrent inserts. This triggered an assertion
which assumed that, for the same table, one thread always requests locks with
a higher thr_lock_type value first. When TL_WRITE_CONCURRENT_INSERT is
upgraded to TL_WRITE after the locks have been sorted, this is no longer true:
here, TL_WRITE was requested after a TL_WRITE_LOW_PRIORITY lock had already
been acquired on the table, triggering the assert.
This fix adjusts the assert to take this scenario into account.
An alternative approach, changing the handler::store_lock() methods of all
engines which do not support concurrent inserts so that
TL_WRITE_CONCURRENT_INSERT is upgraded to TL_WRITE there instead,
was considered too intrusive.
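A minimal, stand-alone sketch of the relaxed check that was adopted
(illustrative only: the real assert lives in mysys/thr_lock.c, and the enum
below lists just the relevant lock types in their actual relative order):

  #include <cassert>

  enum thr_lock_type {            /* relative order matches mysys */
    TL_WRITE_CONCURRENT_INSERT,
    TL_WRITE_LOW_PRIORITY,
    TL_WRITE
  };

  /*
    Old assumption: for the same table a thread requests locks in
    non-increasing thr_lock_type order (prev >= next). The relaxed check
    additionally allows a TL_WRITE that results from upgrading
    TL_WRITE_CONCURRENT_INSERT to follow a TL_WRITE_LOW_PRIORITY lock.
  */
  void check_lock_order(thr_lock_type prev, thr_lock_type next,
                        bool next_was_upgraded)
  {
    assert(prev >= next ||
           (next == TL_WRITE && next_was_upgraded &&
            prev == TL_WRITE_LOW_PRIORITY));
  }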
Commit on behalf of Dmitry Lenev.
Manually deleting one or more entries from 'master-bin.index' could cause
the master to loop infinitely, resending the same binlog file.
When starting a dump session, the master opens the index file and searches for
the binlog file requested by the slave. The position of that binlog file within
the index file is recorded; it is later used to find the next binlog file once
the current one has been dumped completely. Because only the position is used,
the lookup may not find the correct file if entries have been removed manually
from the index file: the master reopens the current binlog file, which has
already been dumped completely, and dumps it again whenever it cannot get the
next binlog file's name from the index file. This is clearly a logic error.
Manually changing the index file is allowed but not recommended. After this
patch, the master sends a fatal error to the slave and closes the dump session
if a new binlog file has been generated but the master cannot find it in the
index file.
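A toy model of the stale-offset lookup, runnable stand-alone (illustrative
only; the real master code walks master-bin.index via the log-index helpers
in sql/log.cc, and the file names below are made up):

  #include <cstdio>
  #include <string>

  int main()
  {
    /* Index as the dump thread saw it when it located bin.000002; the
       thread remembers the byte offset where the next entry begins. */
    std::string index = "bin.000001\nbin.000002\nbin.000003\n";
    size_t next_off = index.find("bin.000002\n") + sizeof("bin.000002\n") - 1;

    /* A DBA manually deletes the first entry; the saved offset is stale. */
    index = "bin.000002\nbin.000003\n";

    if (next_off >= index.size())
      /* EOF at the stale offset: the pre-patch master concluded there was
         no next file, reopened bin.000002 and dumped it again, forever. */
      std::printf("stale offset %zu is past EOF (%zu)\n",
                  next_off, index.size());
    return 0;
  }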
Conflicts:
Text conflict in .bzr-mysql/default.conf
Text conflict in mysql-test/extra/rpl_tests/rpl_loaddata.test
Text conflict in mysql-test/r/mysqlbinlog2.result
Text conflict in mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result
Text conflict in mysql-test/suite/binlog/r/binlog_unsafe.result
Text conflict in mysql-test/suite/rpl/r/rpl_insert_id.result
Text conflict in mysql-test/suite/rpl/r/rpl_loaddata.result
Text conflict in mysql-test/suite/rpl/r/rpl_stm_auto_increment_bug33029.result
Text conflict in mysql-test/suite/rpl/r/rpl_udf.result
Text conflict in mysql-test/suite/rpl/t/rpl_slow_query_log.test
Text conflict in sql/field.h
Text conflict in sql/log.cc
Text conflict in sql/log_event.cc
Text conflict in sql/log_event_old.cc
Text conflict in sql/mysql_priv.h
Text conflict in sql/share/errmsg.txt
Text conflict in sql/sp.cc
Text conflict in sql/sql_acl.cc
Text conflict in sql/sql_base.cc
Text conflict in sql/sql_class.h
Text conflict in sql/sql_db.cc
Text conflict in sql/sql_delete.cc
Text conflict in sql/sql_insert.cc
Text conflict in sql/sql_lex.cc
Text conflict in sql/sql_lex.h
Text conflict in sql/sql_load.cc
Text conflict in sql/sql_table.cc
Text conflict in sql/sql_update.cc
Text conflict in sql/sql_view.cc
Conflict adding files to storage/innobase. Created directory.
Conflict because storage/innobase is not versioned, but has versioned children. Versioned directory.
Conflict adding file storage/innobase. Moved existing file to storage/innobase.moved.
Conflict adding files to storage/innobase/handler. Created directory.
Conflict because storage/innobase/handler is not versioned, but has versioned children. Versioned directory.
Contents conflict in storage/innobase/handler/ha_innodb.cc
Subtraction of two unsigned months yielded a (very large) positive value.
Conversion of this to a signed value was not necessarily well defined.
Solution: do the subtraction on signed values.
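A stand-alone demonstration of the pitfall (variable names are made up; the
affected code subtracted month counts in the server's date arithmetic):

  #include <cstdio>

  int main()
  {
    unsigned months_a = 5, months_b = 7;
    unsigned diff_u = months_a - months_b;  /* wraps: 4294967294 with
                                               32-bit unsigned */
    /* Converting a value that large back to a signed type is not portably
       defined; do the subtraction on signed values instead: */
    long diff_s = (long) months_a - (long) months_b;  /* -2 */
    std::printf("unsigned: %u  signed: %ld\n", diff_u, diff_s);
    return 0;
  }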
"set engine_condition_pushdown" is deprecated, engine condition pushdown is controlled
by a new "set optimizer_switch=engine_condition_pushdown=on|off".
For tables with metadata sizes ranging from 251 to 255, the size
of the event data (m_data_size) was being improperly calculated
in the Table_map_log_event constructor. This was due to the fact
that when writing the Table_map_log_event body (in
Table_map_log_event::write_data_body) a call to net_store_length
is made to pack m_field_metadata_size. net_store_length uses *one*
byte for storing m_field_metadata_size when it is smaller than 251,
but *three* bytes when it reaches or exceeds that value. BUG 42749
had already pinpointed and fixed this fact, but the fix was
incomplete, as the calculation in the Table_map_log_event
constructor used 255 instead of 251 as the threshold for
incrementing m_data_size by three. Thus, the window for a mismatch
between the number of bytes written and the number of bytes
accounted for in the event length (m_data_size) was left open for
m_field_metadata_size values between 251 and 255.
We fix this by changing the condition in the Table_map_log_event
constructor to match the one in net_store_length, i.e., increment
by one byte if m_field_metadata_size < 251 and by three bytes
otherwise.
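A stand-alone model of the mismatch (the 1-byte/3-byte split matches the
packed-integer encoding net_store_length uses for values up to 65535; the
old-threshold accounting below assumes values up to 255 were counted as one
byte, per the 251-255 window described above):

  #include <cstdio>

  /* Bytes net_store_length() writes for a value in [0, 65535]. */
  static unsigned packed_length(unsigned long v)
  {
    return v < 251 ? 1 : 3;   /* 3 = 0xfc marker + 2-byte value */
  }

  int main()
  {
    for (unsigned long m = 249; m <= 256; m++)
    {
      unsigned written   = packed_length(m);
      unsigned accounted = m <= 255 ? 1 : 3;  /* old constructor threshold */
      std::printf("m_field_metadata_size=%lu written=%u accounted=%u%s\n",
                  m, written, accounted,
                  written != accounted ? "  <-- event length mismatch" : "");
    }
    return 0;
  }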
In auto-commit mode, updating both trx and non-trx tables (i.e. issuing a mixed
statement) causes the following sequence of events:
1 - "Flush trx changes" (MYSQL_BIN_LOG::write) - T1:
1.1 - mutex_lock (&LOCK_log)
1.2 - mutex_lock (&LOCK_prep_xids)
1.3 - increase prepared_xids
1.4 - mutex_unlock (&LOCK_prep_xids)
1.5 - mutex_unlock (&LOCK_log)
2 - "Flush non-trx changes" (MYSQL_BIN_LOG::write) - T1:
2.1 - mutex_lock (&LOCK_log)
2.2 - mutex_unlock (&LOCK_log)
3. "unlog" - T1
3.1 - mutex_lock (&LOCK_prep_xids)
3.2 - decrease prepared xids
3.3 - pthread_cond_signal(&COND_prep_xids);
3.4 - mutex_unlock (&LOCK_prep_xids)
The "FLUSH logs" command produces the following sequence of events:
1 - "FLUSH logs" command (MYSQL_BIN_LOG::new_file_impl) - user thread:
1.1 - mutex_lock (&LOCK_log)
1.2 - mutex_lock (&LOCK_prep_xids)
1.3 - while (prepared_xids) pthread_cond_wait(..., &LOCK_prep_xids);
1.4 - mutex_unlock (&LOCK_prep_xids)
1.5 - mutex_unlock (&LOCK_log)
A deadlock will arise if T1 flushes the trx changes, thus increasing
prepared_xids, but before it is able to continue execution and flush the
non-trx changes, a user thread issues the "FLUSH logs" command and waits for
prepared_xids to be decreased to zero. T1 cannot proceed with the call to
"Flush non-trx changes" because it will block on the mutex "LOCK_log", and as
a consequence it cannot complete execution and call unlog to decrease
prepared_xids.
To fix the problem, we ensure that the non-trx changes are always flushed
before the trx changes.
Note that if "Flush non-trx changes" is called while a concurrent "FLUSH logs"
is issued, the "Flush non-trx changes" may block, but a deadlock will never
happen because prepared_xids will eventually get to zero. Bottom line: no
transaction will be able to increase prepared_xids, because it will block on
the mutex "LOCK_log" (MYSQL_BIN_LOG::write), and those that already increased
prepared_xids will eventually commit and decrease it.
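A compact, compilable model of the two code paths (names mirror the commit
text above, not the actual MYSQL_BIN_LOG implementation; the concurrent
threads themselves are omitted for brevity):

  #include <mutex>
  #include <condition_variable>

  std::mutex LOCK_log, LOCK_prep_xids;
  std::condition_variable COND_prep_xids;
  int prepared_xids = 0;

  void flush_trx_changes()        /* raises prepared_xids */
  {
    std::lock_guard<std::mutex> log(LOCK_log);
    std::lock_guard<std::mutex> prep(LOCK_prep_xids);
    ++prepared_xids;
  }

  void flush_non_trx_changes()    /* only needs LOCK_log */
  {
    std::lock_guard<std::mutex> log(LOCK_log);
  }

  void unlog()                    /* note: LOCK_log is NOT needed here */
  {
    std::lock_guard<std::mutex> prep(LOCK_prep_xids);
    --prepared_xids;
    COND_prep_xids.notify_one();
  }

  void flush_logs()               /* holds LOCK_log while waiting */
  {
    std::lock_guard<std::mutex> log(LOCK_log);
    std::unique_lock<std::mutex> prep(LOCK_prep_xids);
    COND_prep_xids.wait(prep, [] { return prepared_xids == 0; });
  }

  int main()
  {
    /* Fixed ordering: once prepared_xids is raised, T1 can reach unlog()
       without re-taking LOCK_log, so flush_logs() cannot strand it. */
    flush_non_trx_changes();
    flush_trx_changes();
    unlog();
    return 0;
  }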
subquery in the select list
When a dependent subquery with count(distinct <col>) was
evaluated multiple times, the Aggregator_distinct was reused.
However, the aggregator was not reset, so when the subquery was
evaluated for the next record of the outer select, stale dependent
data was used.
The fix is to clear() the existing aggregator in
Item_sum::set_aggregator(). This ensures that the aggregator is
re-evaluated with the new dependent information.
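A minimal model of the fix (deliberately simplified, hypothetical classes;
the real change is in Item_sum::set_aggregator in sql/item_sum.cc):

  struct Aggregator
  {
    virtual void clear() = 0;     /* reset accumulated/distinct state */
    virtual ~Aggregator() {}
  };

  struct Item_sum_model
  {
    Aggregator *aggr;
    Item_sum_model() : aggr(0) {}

    /* Called each time the dependent subquery is set up for evaluation. */
    bool set_aggregator(Aggregator *candidate)
    {
      if (aggr)                   /* aggregator reused on re-evaluation */
      {
        delete candidate;
        aggr->clear();            /* the fix: drop state left over from
                                     the previous outer-select record */
        return false;
      }
      aggr = candidate;
      return false;
    }
  };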
In statement-based or mixed-mode replication, using DROP TEMPORARY TABLE
to drop multiple tables caused different errors on master and slave
when one or more of these tables did not exist. This is because, when executed
on the slave, the statement would automatically have IF EXISTS added to it,
suppressing all ER_BAD_TABLE_ERROR errors.
To fix the problem, do not add IF EXISTS when executing DROP TEMPORARY
TABLE on the slave, and clear the ER_BAD_TABLE_ERROR error after
execution if the query does not expect any errors.
This change is supposed to reduce the number of ER_LOCK_DEADLOCK
errors which occur when a multi-statement transaction encounters a
conflicting metadata lock in cases where waiting is possible.
The idea is not to fail with an ER_LOCK_DEADLOCK error immediately when
we encounter a conflicting metadata lock. Instead, we release all
metadata locks acquired by the current statement and start to wait
until the conflicting lock goes away. To avoid deadlocks we use a simple
heuristic which aborts waiting with an ER_LOCK_DEADLOCK error if it
turns out that somebody is waiting for metadata locks owned by
this transaction.
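An abstract model of that heuristic (hypothetical names and data structure;
the real logic lives in the metadata-locking subsystem):

  #include <vector>

  struct Wait
  {
    int lock_owner;               /* transaction that holds the lock */
    int waiter;                   /* transaction blocked on it */
  };

  /*
    Abort our own wait if anybody is already waiting for a metadata lock
    this transaction owns: a cheap cycle-avoidance rule (with possible
    false positives), not a full deadlock detector.
  */
  bool can_wait_for_conflicting_lock(int self, const std::vector<Wait> &graph)
  {
    for (size_t i = 0; i < graph.size(); i++)
      if (graph[i].lock_owner == self)
        return false;             /* caller reports ER_LOCK_DEADLOCK */
    return true;
  }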
This patch also fixes bug #46273 "MySQL 5.4.4 new MDL: Bug#989
is not fully fixed in case of ALTER".
The bug was that concurrent execution of an UPDATE or MULTI-UPDATE
statement, as part of a multi-statement transaction that had already
used the table being updated, and an ALTER TABLE statement might
result in a loss of isolation between this transaction and the ALTER
TABLE statement. This manifested itself as changes performed by
ALTER TABLE becoming visible inside the transaction and, as a
consequence, wrong binary log order.
This problem occurred when the UPDATE or MULTI-UPDATE's wait in the
mysql_lock_tables() call was aborted due to a metadata lock
upgrade performed by a concurrent ALTER TABLE. After such an abort, all
metadata locks held by the transaction were released, but the
transaction silently continued to execute as if nothing had happened.
We solve this problem by changing our code not to release all
locks in such a case. Instead, we release only the locks which were
acquired by the current statement and then try to reacquire them
by restarting the open/lock tables process. We piggyback on the simple
deadlock detector implementation, since this change has to be
done for it anyway.
3655 Jon Olav Hauglid 2009-10-19
Bug #30977 Concurrent statement using stored function and DROP FUNCTION
breaks SBR
Bug #48246 assert in close_thread_table
Implement a fix for:
Bug #41804 purge stored procedure cache causes mysterious hang for many
minutes
Bug #49972 Crash in prepared statements
The problem was that concurrent execution of DML statements that
use stored functions and DDL statements that drop/modify the same
function might result in incorrect binary log in statement (and
mixed) mode and therefore break replication.
This patch fixes the problem by introducing metadata locking for
stored procedures and functions. This is similar to what is done
in Bug#25144 for views. Procedures and functions are now
locked using metadata locks until the transaction is either
committed or rolled back. This prevents other statements from
modifying the procedure/function while it is being executed. This
provides commit ordering - guaranteeing serializability across
multiple transactions and thus fixes the reported binlog problem.
Note that we do not take locks for top-level CALLs. This means
that procedures called directly are not protected from changes by
simultaneous DDL operations so they are executed at the state they
had at the time of the CALL. By not taking locks for top-level
CALLs, we still allow transactions to be started inside
procedures.
This patch also changes stored procedure cache invalidation.
Upon a change of the cache version, we no longer invalidate the entire
cache; instead, each routine we use is invalidated lazily, only when a
statement that uses it is executed.
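A toy model of the lazy scheme (hypothetical structure; the real cache is
the stored-routine cache in sql/sp_cache.cc, keyed against a version
counter):

  #include <map>
  #include <string>

  struct Routine { long cached_version; /* parsed body, ... */ };

  long cache_version = 1;         /* bumped by CREATE/ALTER/DROP routine */
  std::map<std::string, Routine> sp_cache;

  /*
    A routine is checked only when a statement actually uses it; a stale
    entry is flushed individually instead of emptying the whole cache
    whenever the version changes.
  */
  Routine *sp_cache_lookup(const std::string &name)
  {
    std::map<std::string, Routine>::iterator it = sp_cache.find(name);
    if (it == sp_cache.end())
      return 0;                   /* caller re-parses and re-caches */
    if (it->second.cached_version != cache_version)
    {
      sp_cache.erase(it);         /* invalidate only this routine */
      return 0;
    }
    return &it->second;
  }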
This patch also changes the logic of prepared statement validation.
A stored procedure used by a prepared statement is now validated
only once a metadata lock has been acquired. A version mismatch
causes a flush of the obsolete routine from the cache and
statement reprepare.
Incompatible changes:
1) ER_LOCK_DEADLOCK is reported for a transaction trying to access
a procedure/function that is locked by a DDL operation in
another connection.
2) Procedure/function DDL operations are now prohibited in LOCK
TABLES mode, as exclusive locks must be taken all at once and
LOCK TABLES provides no way to specify the procedures/functions to
be locked.
Test cases have been added to sp-lock.test and rpl_sp.test.
Work on this bug has very much been a team effort and this patch
includes and is based on contributions from Davi Arnaut, Dmitry
Lenev, Magne Mæhre and Konstantin Osipov.
- backported code that handles %f/%g arguments in
my_vsnprintf.c from 6.0
- backported %f/%g tests in unittest/mysys/my_vsnprintf-t.c
from 6.0
- replaced snprintf("%g") in sql/set_var.cc with my_gcvt()
- removed unnecessary "--replace-result"s for Windows in
mysql-test/suite/sys_vars/t/long_query_time_basic.test
- some test results adjustments
The 'rpl_cross_version' test fails on mysql-next-mr-bugfixing as follows:
mysqltest: In included file "./include/setup_fake_relay_log.inc": At line 80: query
'select './$_fake_filename-fake.000001\n' into dumpfile '$_fake_relay_index'' failed:
1290: The MySQL server is running with the --secure-file-priv option so it cannot execute
this statement.
Fix the problem by removing the --secure-file-priv option so that the
updated 'setup_fake_relay_log.inc' can run.
Metadata for geometry fields was not being properly stored by
the slave in its table definition. This happened because
MYSQL_TYPE_GEOMETRY was not included in the 'switch... case' that
handles field metadata according to the field type. It would
therefore default to 0, leading to a permanent mismatch between
the master's and the slave's field metadata.
We fix this by adding the missing 'case MYSQL_TYPE_GEOMETRY:'.
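A schematic version of the fix (stand-alone sketch; returning the BLOB-style
pack-length byte for geometry is an assumption here, and the surrounding
switch is heavily simplified):

  enum field_type { MYSQL_TYPE_LONG, MYSQL_TYPE_BLOB, MYSQL_TYPE_GEOMETRY };

  unsigned char save_field_metadata(field_type type, unsigned char pack_len)
  {
    switch (type)
    {
    case MYSQL_TYPE_BLOB:
      return pack_len;
    case MYSQL_TYPE_GEOMETRY:     /* the missing case: without it, geometry
                                     fell through to the default and the
                                     metadata was always 0 */
      return pack_len;
    default:
      return 0;
    }
  }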
subselect_single_select_engine::exec()
When a subquery did not need to be evaluated, because it returned
only aggregate functions and these aggregates could be calculated
from the metadata about the table, not all the relevant members of
the JOIN structure were updated to reflect that this is a constant
query. This caused problems for the enclosing subquery
('<> SOME' in the test case above) when it tried to read some
data about the tables.
Fixed by setting const_tables to the number of tables
when the SELECT is optimized away.
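A minimal model of the change (the field names mirror the JOIN structure in
sql/sql_select.h; everything else is simplified):

  struct JOIN_model
  {
    unsigned tables;              /* tables in the join */
    unsigned const_tables;        /* tables fully resolved at optimize time */

    void optimize_away()
    {
      /* The fix: mark every table constant so that readers, such as the
         enclosing '<> SOME' subquery, see consistent JOIN metadata. */
      const_tables = tables;
    }
  };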
Conflicts:
Conflict adding files to server-tools. Created directory.
Conflict because server-tools is not versioned, but has versioned children. Versioned directory.
Conflict adding files to server-tools/instance-manager. Created directory.
Conflict because server-tools/instance-manager is not versioned, but has versioned children. Versioned directory.
Contents conflict in server-tools/instance-manager/instance_map.cc
Contents conflict in server-tools/instance-manager/listener.cc
Contents conflict in server-tools/instance-manager/options.cc
Contents conflict in server-tools/instance-manager/user_map.cc