After WL#2687, binlog_cache_size and max_binlog_cache_size affect both the
stmt-cache and the trx-cache. This means that the resources used can be twice the
amount expected/defined by the user.
binlog_cache_use is incremented when either the stmt-cache or the trx-cache is used,
and binlog_cache_disk_use is incremented when either the stmt-cache or the trx-cache
spills to disk. This behavior makes it impossible to distinguish which cache is harming
performance through extra disk accesses and therefore needs its in-memory size
increased.
To fix the problem, we introduced two new options and two new status variables related
to the stmt-cache:
Options:
. binlog_stmt_cache_size
. max_binlog_stmt_cache_size
Status Variables:
. binlog_stmt_cache_use
. binlog_stmt_cache_disk_use
So there are now:
. binlog_cache_size that defines the size of the transactional cache for
updates to transactional engines for the binary log.
. binlog_stmt_cache_size that defines the size of the statement cache for
updates to non-transactional engines for the binary log.
. max_binlog_cache_size that sets the upper limit on the total size of the
transactional cache.
. max_binlog_stmt_cache_size that sets the upper limit on the total size of the
statement cache.
. binlog_cache_use that identifies the number of transactions that used the
temporary transactional binary log cache.
. binlog_cache_disk_use that identifies the number of transactions that used
the temporary transactional binary log cache but that exceeded the value of
binlog_cache_size.
. binlog_stmt_cache_use that identifies the number of statements that used the
temporary non-transactional binary log cache.
. binlog_stmt_cache_disk_use that identifies the number of statements that used
the temporary non-transactional binary log cache but that exceeded the value of
binlog_stmt_cache_size.
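For illustration, the new and existing variables can be sized and inspected as follows; the
values below are arbitrary examples, not recommendations:
SET GLOBAL binlog_cache_size = 32768;             # trx-cache, updates to transactional engines
SET GLOBAL binlog_stmt_cache_size = 32768;        # stmt-cache, updates to non-transactional engines
SET GLOBAL max_binlog_cache_size = 2147483648;
SET GLOBAL max_binlog_stmt_cache_size = 2147483648;
SHOW GLOBAL STATUS LIKE 'Binlog_cache%';          # Binlog_cache_use, Binlog_cache_disk_use
SHOW GLOBAL STATUS LIKE 'Binlog_stmt_cache%';     # Binlog_stmt_cache_use, Binlog_stmt_cache_disk_use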
There were actually more problems in this area:
Slaves (if any) were unconditionally restarted; this appears unnecessary.
Sort criteria were suboptimal and included the test name.
Added logic to "reserve" a sequence of tests with the same config for one thread.
Got rid of the sort_criteria hash and put it into the test case itself.
Added a small sanity check that the expected worker picks up the test.
Fixed some tests that may fail when started on an already running server.
Some of these fail only if the *same* test is repeated.
Finally, added special sorting of tests that do --force-restart.
The problem was that the warnings raised by a trigger were not cleared upon
successful completion. The warnings should be cleared if the trigger completes
successfully.
The fix is to skip merging warnings into caller's Warning Info for triggers.
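A hypothetical illustration (table, trigger, and column names are not from the original
report; a non-strict sql_mode is assumed so the out-of-range INSERT only warns):
CREATE TABLE t1 (a TINYINT);
CREATE TABLE t2 (b INT);
CREATE TRIGGER trg1 BEFORE INSERT ON t2 FOR EACH ROW
  INSERT INTO t1 VALUES (1000);   # out of range for TINYINT, raises a truncation warning
INSERT INTO t2 VALUES (1);        # the trigger completes successfully
SHOW WARNINGS;                    # with the fix, the trigger's warnings are not merged into the caller's Warning Info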
"Grantor" columns' data is lost when replicating mysql.tables_priv.
The slave SQL thread used its default user ''@'' as the grantor of GRANT|REVOKE
statements executed on it.
With this patch, the current user is put in the query log event for all GRANT and REVOKE
statements, and the SQL thread uses the user from the query log event as the grantor.
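For illustration only (user, database, and table names are hypothetical):
# on the master, executed while connected as 'admin'@'localhost'
GRANT SELECT ON db1.t1 TO 'bob'@'%';
# on the slave, after the fix, the grantor recorded by the SQL thread matches
# the user stored in the query log event instead of ''@''
SELECT Grantor FROM mysql.tables_priv WHERE Db = 'db1' AND Table_name = 't1';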
Rows events were wrongly applied to a temporary table with the same name as a base
table. However, rows events are generated only for base tables, since a temporary
table's data is never binlogged in row mode. Normally, a base table cannot be updated
if a temporary table has the same name, but there are two cases which can generate
rows events for a base table of the same name.
Case 1: a 'CREATE TABLE ... SELECT' statement.
In mixed format, it generates rows events if the statement is unsafe (see the sketch below).
Case 2: dropping a transactional temporary table in a transaction
(happens only on 5.5+).
BEGIN;
DROP TEMPORARY TABLE t1; # t1 is an InnoDB table
INSERT INTO t1 VALUES(rand()); # t1 is a MyISAM table
COMMIT;
'DROP TEMPORARY TABLE' will be put in the transaction cache and
binlogged after the rows events generated by the 'INSERT' statement.
After this patch, the slave opens only the base table when applying a rows event.
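A hypothetical sketch of Case 1 (table names and engines are illustrative; MIXED binlog
format is assumed):
CREATE TEMPORARY TABLE t1 (a INT) ENGINE=MyISAM;  # replicated as a statement, so the slave also has a temporary t1
CREATE TABLE t1 ENGINE=InnoDB SELECT UUID() AS u; # unsafe in MIXED format, so the inserted rows are logged as rows events
# before the patch the slave could apply those rows events to its temporary t1;
# after the patch the slave opens only the base table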
replication aborts
When receiving a 'STOP SLAVE' command, the slave SQL thread rolls back the
transaction and stops immediately if only transactional tables were updated,
even though 'CREATE|DROP TEMPORARY TABLE' statements are part of it. But these
statements can never be rolled back, because the temporary-table-to-session
mapping remains until 'RESET SLAVE'. Therefore the SQL thread aborts with an
error that the table already exists or does not exist when it restarts and
executes the whole transaction again.
After this patch, the SQL thread always waits until the transaction ends and only then
stops, if 'CREATE|DROP TEMPORARY TABLE' statements are part of it.
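A hypothetical sketch of the scenario (table names are illustrative):
# executed on the master in a single transaction
BEGIN;
INSERT INTO inno_t VALUES (1);                       # inno_t is an InnoDB table
CREATE TEMPORARY TABLE tmp1 (a INT) ENGINE=InnoDB;
COMMIT;
# if STOP SLAVE arrives on the slave while this transaction is being applied,
# the SQL thread used to roll back and stop in the middle of it; the CREATE
# TEMPORARY TABLE could not be rolled back, so re-executing the transaction
# after restart aborted with a "table already exists" error.
# After the patch the SQL thread finishes the transaction before stopping.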
The root of the problem is that to interrupt a slave SQL thread
wait, the STOP SLAVE implementation uses thd->awake(THD::NOT_KILLED).
This appears as a spurious wakeup (e.g. from a sleep on a
condition variable) to the code that the slave SQL thread is
executing at the time of the STOP. If the code is not written
to be spurious-wakeup safe, unexpected behavior can occur. For
the reported case, this problem led to an infinite loop around
the interruptible_wait() function in item_func.cc (SLEEP()
function implementation). The loop was not being properly
restarted and, consequently, would not come to an end. Since the
SLEEP function sleeps on a timed event in order to be killable
and to perform periodic checks until the requested time has
elapsed, the spurious wake up was causing the requested sleep
time to be reset every two seconds.
The solution is to calculate the requested absolute time only
once and to ensure that the thread only sleeps until this
time is elapsed. In case of a spurious wake up, the sleep is
restarted using the previously calculated absolute time. This
restores the behavior present in previous releases. If a slave
thread is executing a SLEEP function, a STOP SLAVE statement
will wait until the time requested in the sleep function
has elapsed.
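A rough sketch of the reported symptom (how the SLEEP() call ends up running in the
slave SQL thread is immaterial here; the 600-second value is arbitrary):
SELECT SLEEP(600);   # a SLEEP() call in progress in the slave SQL thread
STOP SLAVE;          # issued from another connection on the slave
# before the fix, the spurious wakeup reset the remaining sleep time roughly
# every two seconds, so STOP SLAVE appeared to hang; after the fix it waits
# only until the originally requested 600 seconds have elapsed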
heavy load".
rpl_row_sp003.test has sporadically failed when run on a machine under heavy load
or on slow hardware.
This patch fixes races in the test which were causing these failures and also
removes an unnecessary 100 second wait from it.
After an ALTER TABLE which changed only the table's metadata, the row-based
binlog sometimes got corrupted, since the table map was unexpectedly
set to 0 for subsequent updates to the same table.
An ALTER TABLE which changed only the table's metadata always reset
table_map_id for the table share to 0. Although 0 is a valid value
for table_map_id, this step caused problems because it could create
a situation in which more than one table share had table_map_id equal
to 0. If more than one table with table_map_id equal to 0 was updated
in the same statement, the updates to these different tables were
written into the same rows event. This caused the slave server to crash.
This bug happens only on 5.1. It doesn't affect 5.5+.
This patch solves the problem by ensuring that ALTER TABLE statements
which change only metadata never reset table_map_id to 0. To do this,
it changes reopen_table() to correctly use the refreshed table_map_id
value instead of using the old one or resetting it.
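A hypothetical sequence that hits the 5.1 problem (table and column names are
illustrative; a table comment change is assumed to take the metadata-only path):
ALTER TABLE t1 COMMENT = 'metadata-only change';   # no data copy
ALTER TABLE t2 COMMENT = 'metadata-only change';
# before the patch both shares could end up with table_map_id = 0, so a
# statement updating both tables wrote their rows into the same rows event
UPDATE t1, t2 SET t1.a = 1, t2.a = 2 WHERE t1.id = t2.id;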
When a slave executes a transaction bigger than the slave's max_binlog_cache_size,
the slave will crash. This is caused by an assertion that the server should roll back
only the statement, not the whole transaction, when the error ER_TRANS_CACHE_FULL
happens; but the slave SQL thread always rolls back the whole transaction when
an error happens.
After this patch, we always clear any error set in the SQL thread (it is different
from the error shown in 'SHOW SLAVE STATUS'), and it is cleared before rolling back
the transaction.
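For illustration (the cache size and table name are arbitrary):
# on the slave
SET GLOBAL max_binlog_cache_size = 4096;   # deliberately tiny, for illustration
# replicated from the master: a transaction needing more than 4096 bytes of cache
BEGIN;
INSERT INTO big_t SELECT * FROM big_t;     # big_t is a large InnoDB table
COMMIT;
# before the patch the slave hit the assertion and crashed; after the patch the
# whole transaction is rolled back and ER_TRANS_CACHE_FULL is reported instead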
The error message for ER_SLAVE_HEARTBEAT_VALUE_OUT_OF_RANGE was
hard-coded. Additionally, the same error was used for three
separate error symptoms: 1. when the heartbeat period exceeds the
value of slave_net_timeout, 2. when it is smaller than 1
millisecond, and 3. when it is not in range, i.e., either negative
or greater than the maximum allowed.
We fix this by splitting it into three distinct errors and by
removing the message from the source code and moving it to the
errmsg-utf8.txt file.
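For example (assuming slave_net_timeout is 30 seconds here):
CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 60;       # exceeds slave_net_timeout
CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 0.0001;   # smaller than 1 millisecond
CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = -1;       # negative, out of range
# before the fix all three cases produced the same hard-coded message;
# after the fix each case reports its own text from errmsg-utf8.txt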
On Solaris with GCC 3.4.6, the ha_example.so shared library is built with DTrace
support while the server is built without it. This occurs because dtrace.cmake
disables DTrace support for GCC 3.4.6 but still sets HAVE_DTRACE, which causes
probes_mysql.h to include probes_mysql_dtrace.h instead of probes_mysql_nodtrace.h.
This patch fixes this by not setting HAVE_DTRACE on Solaris for GCC 3.4.6.
temp table
This patch introduces two key changes in replication's behavior.
Firstly, it reverts the part of BUG#51894 which put any update to temporary tables
into the trx-cache. Now, updates to temporary tables are handled according to
their engine type, just like regular tables.
Secondly, an unsafe mixed statement (i.e. a statement that accesses a transactional
table as well as a non-transactional or temporary table, and writes to any of them)
is written into the trx-cache in order to minimize execution errors when the
statement logging format is in use.
These changes have a direct impact on which statements are classified as unsafe
statements, and thus part of BUG#53259 is reverted.
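A hypothetical example of such an unsafe mixed statement (table names and engines are
illustrative):
# t_inno is InnoDB (transactional), t_myisam is MyISAM (non-transactional)
BEGIN;
UPDATE t_inno, t_myisam SET t_inno.a = 1, t_myisam.a = 1 WHERE t_inno.id = t_myisam.id;
COMMIT;
# the statement reads and writes both engine types, so with this change it is
# written into the trx-cache when statement-based logging is in use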