Rows events were wrongly applied to a temporary table that had the same
name as the base table they were generated for. Rows events are
generated only for base tables, since a temporary table's data is never
binlogged in row mode. Normally a base table cannot be updated while a
temporary table shadows its name, but there are two cases which can
generate rows events on a base table of the same name.
Case 1: a 'CREATE TABLE ... SELECT' statement.
In mixed format, it generates rows events if the statement is unsafe.
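A minimal SQL sketch of Case 1 (table name illustrative):
SET SESSION binlog_format = 'MIXED';
CREATE TABLE t1 SELECT UUID() AS u;  # UUID() makes the statement unsafe,
                                     # so the inserted rows are binlogged
                                     # as rows events for base table t1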
Case 2: dropping a transactional temporary table inside a transaction
(happens only on 5.5+).
BEGIN;
DROP TEMPORARY TABLE t1; # t1 is an InnoDB table
INSERT INTO t1 VALUES(rand()); # t1 is a MyISAM table
COMMIT;
The 'DROP TEMPORARY TABLE' is put into the transaction cache and
binlogged after the rows events generated by the 'INSERT' statement.
After this patch, the slave opens only the base table when applying a rows event.
Additional fixes in 5.5:
ibuf_set_del_mark(): Add diagnostics when setting a buffered delete-mark fails.
ibuf_delete(): Correct a misleading comment about non-found records.
rec_print(): Add a const qualifier to the index parameter.
Bug #56680 wrong InnoDB results from a case-insensitive covering index
row_search_for_mysql(): When a secondary index record might not be
visible in the current transaction's read view and we consult the
clustered index and optionally some undo log records, return the
relevant columns of the clustered index record to MySQL instead of the
secondary index record.
ibuf_insert_to_index_page_low(): New function, refactored from
ibuf_insert_to_index_page().
ibuf_insert_to_index_page(): When we are inserting a record in place
of a delete-marked record and some fields of the record differ, update
that record just like row_ins_sec_index_entry_by_modify() would do.
btr_cur_update_alloc_zip(): Make the function public.
mysql_row_templ_t: Add clust_rec_field_no.
row_sel_store_mysql_rec(), row_sel_push_cache_row_for_mysql(): Add the
flag rec_clust, for returning data at clust_rec_field_no instead of
rec_field_no. Resurrect the debug assertion that the record not be
marked for deletion. (Bug #55626)
[UNIV_DEBUG || UNIV_IBUF_DEBUG] ibuf_debug, buf_page_get_gen(),
buf_flush_page_try():
Implement innodb_change_buffering_debug=1 for evicting pages from the
buffer pool, so that change buffering will be attempted more
frequently.
REC_INFO_DELETED_FLAG: Move the definition from rem0rec.ic to rem0rec.h.
buf_LRU_free_block(): Refactored from
buf_LRU_search_and_free_block(). This is needed for the
innodb_change_buffering_debug diagnostics.
This is a merge from 5.1/builtin to 5.1/plugin of:
--------------
revision-id: vasil.dimov@oracle.com-20101018104811-nwqhg9vav17kl5s1
committer: Vasil Dimov <vasil.dimov@oracle.com>
timestamp: Mon 2010-10-18 13:48:11 +0300
message:
Fix Bug#57252 disabling innobase_stats_on_metadata disables ANALYZE
In order to fix this bug we need to distinguish whether ha_innobase::info()
has been called from ::analyze() or not. Rename ::info() to ::info_low()
and add a boolean parameter that tells whether the call is from ::analyze()
or not. Create a new simple ::info() that just calls
::info_low(false => not called from analyze). From ::analyze() instead of
::info() call ::info_low(true => called from analyze).
Approved by: Jimmy (rb://487)
--------------
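A sketch of the user-visible effect, assuming the server variable
innodb_stats_on_metadata (table name illustrative):
SET GLOBAL innodb_stats_on_metadata = OFF;
ANALYZE TABLE t1;             # must still refresh InnoDB statistics
SHOW TABLE STATUS LIKE 't1';  # metadata access alone no longer does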
replication aborts
When receiving a 'STOP SLAVE' command, the slave SQL thread rolls back the
transaction and stops immediately if only transactional tables were updated,
even though 'CREATE|DROP TEMPORARY TABLE' statements are part of it. But these
statements can never be rolled back. Because the temporary-table-to-user-session
mapping remains until 'RESET SLAVE', the SQL thread aborts with an error that
the table already exists or doesn't exist when it restarts and executes the
whole transaction again.
After this patch, the SQL thread always waits until the transaction ends and
then stops, if 'CREATE|DROP TEMPORARY TABLE' statements are part of it.
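A sketch of the scenario (table names illustrative):
BEGIN;
INSERT INTO t_innodb VALUES (1);      # transactional, can be rolled back
CREATE TEMPORARY TABLE tmp1 (a INT);  # cannot be rolled back
# 'STOP SLAVE' received here used to roll back the INSERT and stop,
# leaving tmp1 behind; re-executing the transaction on restart then
# failed because tmp1 already existed.
COMMIT;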
This is a port of the following changeset from
5.1/storage/innobase to 5.1/storage/innodb_plugin:
------------------------------------------------------------
revno: 3626
revision-id: vasil.dimov@oracle.com-20101013171859-gi9n558yj89x9v3w
parent: klewis@mysql.com-20101012175933-ce9kkgl0z8oeqffa
committer: Vasil Dimov <vasil.dimov@oracle.com>
branch nick: mysql-5.1-innodb
timestamp: Wed 2010-10-13 20:18:59 +0300
message:
Fix Bug#56143 too many foreign keys causes output of show create table to become invalid
Just remove the check whether the file is "too big".
Similar code exists in ha_innobase::update_table_comment() but that
method does not seem to be used.
Also use a CREATE statement with all the FKs instead of ALTERing the
table 550 times, because it is faster.
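A sketch of the faster test setup (names and counts illustrative):
CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
CREATE TABLE child (
  c1 INT, c2 INT,
  FOREIGN KEY (c1) REFERENCES parent (id),
  FOREIGN KEY (c2) REFERENCES parent (id)
  # ... the actual test declares 550 such constraints in one statement
) ENGINE=InnoDB;
SHOW CREATE TABLE child;  # output must stay valid however many FKs exist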
The root of the problem is that to interrupt a slave SQL thread
wait, the STOP SLAVE implementation uses thd->awake(THD::NOT_KILLED).
This appears as a spurious wakeup (e.g. from a sleep on a
condition variable) to the code that the slave SQL thread is
executing at the time of the STOP. If the code is not written
to be spurious-wakeup safe, unexpected behavior can occur. For
the reported case, this problem led to an infinite loop around
the interruptible_wait() function in item_func.cc (SLEEP()
function implementation). The loop was not being properly
restarted and, consequently, would not come to an end. Since the
SLEEP function sleeps on a timed event in order to be killable
and to perform periodic checks until the requested time has
elapsed, the spurious wakeup was causing the requested sleep
time to be reset every two seconds.
The solution is to calculate the requested absolute time only
once and to ensure that the thread only sleeps until this
time has elapsed. In case of a spurious wakeup, the sleep is
restarted using the previously calculated absolute time. This
restores the behavior present in previous releases. If a slave
thread is executing a SLEEP function, a STOP SLAVE statement
will wait until the time requested in the sleep function
has elapsed.
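For illustration, if the slave SQL thread is executing a replicated
statement that invokes SLEEP() (statement hypothetical):
INSERT INTO t1 VALUES (SLEEP(60));  # replicated statement in progress
STOP SLAVE;  # issued on the slave: now waits until the 60 seconds
             # elapse, instead of the sleep restarting on every
             # spurious wakeup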
heavy load".
rpl_row_sp003.test has sporadically failed when run on a machine
under heavy load or on slow hardware.
This patch fixes races in the test which were causing these
failures and also removes an unnecessary 100-second wait from it.
After an ALTER TABLE which changed only the table's metadata, the
row-based binlog sometimes got corrupted since the table map was
unexpectedly set to 0 for subsequent updates to the same table.
An ALTER TABLE which changed only the table's metadata always reset
table_map_id for the table share to 0. Despite the fact that
0 is a valid value for table_map_id, this step caused problems
as it could have created a situation in which we had more than
one table share with table_map_id equal to 0. If more than one
table with table_map_id equal to 0 was updated in the same statement,
the updates to these different tables were written into the same
rows event. This caused the slave server to crash.
This bug happens only on 5.1. It doesn't affect 5.5+.
This patch solves the problem by ensuring that ALTER TABLE
statements which change metadata only never reset table_map_id
to 0. To do this it changes reopen_table() to correctly use the
refreshed table_map_id value instead of using the old one or
resetting it.
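A sketch of a statement sequence that could trigger the bug
(table and column names illustrative):
ALTER TABLE t1 COMMENT = 'metadata only';  # used to reset table_map_id to 0
ALTER TABLE t2 COMMENT = 'metadata only';  # a second share also reset to 0
UPDATE t1, t2 SET t1.a = 1, t2.a = 2;      # rows for both tables ended up
                                           # in the same rows event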
When a slave executes a transaction bigger than the slave's
max_binlog_cache_size, the slave crashes. The crash is caused by an assert
that the server should roll back only the statement, not the whole
transaction, if the error ER_TRANS_CACHE_FULL happens; but the slave SQL
thread always rolls back the whole transaction when an error happens.
After this patch, we always clear any error set in the SQL thread (it is
different from the error in 'SHOW SLAVE STATUS'), and it is cleared before
rolling back the transaction.
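A sketch of the scenario (value illustrative):
SET GLOBAL max_binlog_cache_size = 4096;  # on the slave, smaller than
                                          # the master's setting
# A replicated transaction whose changes exceed this limit now rolls
# back cleanly and stops the SQL thread with ER_TRANS_CACHE_FULL,
# instead of hitting the assert and crashing.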
discover its existence".
The problem was that a user without any privileges on a
routine was able to find out whether it existed or not.
The DROP FUNCTION and DROP PROCEDURE statements were
checking if the routine being dropped existed and reported the
ER_SP_DOES_NOT_EXIST error/warning before checking
if the user had enough privileges to drop it.
This patch solves the problem by changing the code not to
check if the routine exists before checking if the user has
enough privileges to drop it. Moreover, we no longer perform
this check using a separate call; instead we rely on
sp_drop_routine() returning SP_KEY_NOT_FOUND if the routine
doesn't exist.
This change also simplifies one of the upcoming patches
refactoring the global read lock implementation.
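A sketch of the fixed behavior for a user with no privileges on
routines (names hypothetical):
DROP FUNCTION db1.f_exists;   # privilege error, not ER_SP_DOES_NOT_EXIST
DROP FUNCTION db1.f_missing;  # the same privilege error, so the response
                              # no longer reveals whether a routine exists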
The error message for ER_SLAVE_HEARTBEAT_VALUE_OUT_OF_RANGE was
hard-coded. Additionally, the same error was used for three
separate symptoms: 1. when the heartbeat period exceeds the
value of slave_net_timeout, 2. when it is smaller than 1
millisecond, and 3. when it is not in range, i.e., either negative
or greater than the maximum allowed.
We fix this by splitting it into three distinct errors and by
removing the message from the source code and moving it to the
errmsg-utf8.txt file.
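For illustration, the three symptoms (values illustrative):
CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 0.0005;   # below 1 millisecond
CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 5000000;  # above the allowed maximum
# a period larger than slave_net_timeout draws the third, distinct message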
This change is to align the 5.5 performance_schema.THREADS
table definition with the 5.6 performance_schema.THREADS table,
to facilitate the 5.5 -> 5.6 migration later.
In the table performance_schema.THREADS:
- renamed ID to PROCESSLIST_ID, removed NOT NULL
- changed NAME from varchar(64) to varchar(128)
to match the column definitions from 5.6
Adjusted the test cases accordingly.
Note: this fix is for 5.5 only, to null merge into 5.6
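After the change, a query written against the 5.6 definition works
unchanged on 5.5:
SELECT PROCESSLIST_ID, NAME FROM performance_schema.THREADS;
# PROCESSLIST_ID is nullable and NAME is varchar(128), as in 5.6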
Bug#54678: InnoDB, TRUNCATE, ALTER, I_S SELECT, crash or deadlock
- Incompatible change: truncate no longer resorts to a row by
row delete if the storage engine does not support the truncate
method. Consequently, the count of affected rows does not, in
any case, reflect the actual number of rows.
- Incompatible change: it is no longer possible to truncate a
table that participates as a parent in a foreign key constraint,
unless it is a self-referencing constraint (both parent and child
are in the same table). To work around this incompatible change
and still be able to truncate such tables, disable foreign key checks
with SET foreign_key_checks=0 before the truncate. Alternatively, if
foreign key checks are necessary, please use a DELETE statement
without a WHERE condition.
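A sketch of the documented workaround (table name illustrative):
SET foreign_key_checks = 0;
TRUNCATE TABLE parent;       # allowed again with checks disabled
SET foreign_key_checks = 1;
# or, when the checks must remain active:
DELETE FROM parent;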
Problem description:
The problem was that for storage engines that do not support
truncate table via an external drop and recreate, such as InnoDB,
which implements truncate via an internal drop and recreate, the
delete_all_rows method could be invoked with a shared metadata
lock, causing problems if the engine needed exclusive access
to some internal metadata. This problem originated with the
fact that there is no truncate-specific handler method, which
ended up leading to an abuse of the delete_all_rows method that
is primarily used for delete operations without a condition.
Solution:
The solution is to introduce a truncate handler method that is
invoked when the engine does not support truncation via a table
drop and recreate. This method is invoked under an exclusive
metadata lock, so that there is only a single instance of the
table when the method is invoked.
Also, the method is not invoked and an error is thrown if
the table is a parent in a non-self-referencing foreign key
relationship. This was necessary to avoid inconsistency as
some integrity checks are bypassed. This is in line with the
fact that truncate is primarily a DDL operation that was
designed to quickly remove all data from a table.
The binlog_cache_use counter was incremented twice when changes to a
transactional table were committed, i.e., when TC_LOG_BINLOG::log_xid is
called. The problem happens because log_xid calls both
binlog_flush_stmt_cache and binlog_flush_trx_cache without checking if
such caches are empty, thus unintentionally increasing the
binlog_cache_use value twice.
To fix the problem, we avoid incrementing binlog_cache_use if the cache
is empty. We also decided to increment binlog_cache_use when the cache is
truncated, as the cache is used although its content is discarded and is
not written to the binary log.
Note that binlog_cache_use is incremented for both types of cache,
transactional and non-transactional, and that the behavior presented in
this patch also applies to binlog_cache_disk_use.
Finally, we reorganized the code around the functions
binlog_flush_trx_cache and binlog_flush_stmt_cache.
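A sketch of how the miscount was visible (table name illustrative):
FLUSH STATUS;
BEGIN;
INSERT INTO t_innodb VALUES (1);
COMMIT;
SHOW GLOBAL STATUS LIKE 'Binlog_cache_use';  # incremented twice before
                                             # the fix, once after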
parameters don't match
Revert the changes of the default values of innodb_file_per_table
and innobase_file_format in 5.5, until WL#5135 is implemented.