This allows one to run the test suite even if any of the following
options are changed:
- character-set-server
- collation-server
- join-cache-level
- log-basename
- max-allowed-packet
- optimizer-switch
- query-cache-size and query-cache-type
- skip-name-resolve
- table-definition-cache
- table-open-cache
- Some InnoDB options
etc.
Changes:
- Don't print out the values of system variables, as one can't depend on
  them being constant.
- Don't set global variables to 'default', as the default may not be the
  same value the test was started with if there was an additional option
  file. Instead, save the original value and restore it at the end of the
  test.
- Tests that depend on the latin1 character set should include
  default_charset.inc or set the character set to latin1.
- Tests that depend on the original optimizer switch should include
  default_optimizer_switch.inc.
- Tests that depend on the value of a specific system variable should
  set it in the test (like optimizer_use_condition_selectivity).
- Split subselect3.test into subselect3.test and subselect3.inc to
make it easier to set and reset system variables.
- Added .opt files for tests that require specific options that could
  be changed by external configuration files.
- Fixed result files in rocksdb & tokudb that had not been updated for
  a while.
Analysis:
========
As part of the BUG#28642318 fix, two new test cases were added. The first
test case tests a scenario where two sessions are present: the first
session has a regular table named 't1' and the other session has a
temporary table named 't1'. The test executes a DELETE statement on the
regular table. These statements are captured from the binary log and
replayed on a new client connection to prove that the DELETE statement is
applied successfully. Note that the binlog contains only the CREATE
TEMPORARY TABLE part, hence a temporary table gets created in the new
connection. This replaying logic is implemented using the '--exec $MYSQL'
command. If the new connection gets disconnected within the scope of the
first test case, the test passes, i.e. the temporary table gets dropped
as part of thread cleanup. But on slow platforms the connection gets
closed only during the execution of test case 2. When the temporary table
is dropped as part of thread cleanup, a "DROP TEMPORARY TABLE t1" is
written into the binary log. In test case two the same sessions continue
to exist and table names are reused to test a new bug scenario. The
additional "DROP TEMPORARY TABLE" command drops the second test's tables,
which results in an "Unknown table" error.
Fix:
====
Rename the table specific to the second test case to 't2'. Even if the
connection close from test case one happens late, the resulting
'DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `t1`' will not result in an
error.
The MDEV-5589 commit set up a policy to skip DROP TEMPORARY TABLE binary
logging in case the target table has not been "CREATEed" in the binlog
(no CREATE Query-log-event was logged into the binary log).
It turns out that
1. the rule did not cover a non-existing table DROPped with the IF-EXISTS
   clause. Tracking the logged-create status of a non-existing table does
   not even need the MDEV-5589 patch, and
2. connection close disobeys the rule and triggers automatic
   DROP-IF-EXISTS binlogging.
Either 1 or 2, or even both, are also responsible for the unexpected
binlog records observed in MDEV-17863, actually rendering the referred
@@global.read_only irrelevant as far as the described stored procedure
definition *and* the ROW binlog-format are concerned.
Analysis
========
Point in time recovery using mysqlbinlog containing queries
operating on temporary tables results in an error.
While writing the query log event to the binary log, the
thread id used for the execution of the DROP TABLE and DELETE
commands was incorrect. The THD variable 'thread_specific_used'
determines whether a specific thread id is to be used
while executing the statements, i.e. whether to emit 'SET
@@session.pseudo_thread_id'. This variable was not set
correctly for the DROP TABLE query and was never set for the DELETE
query. The thread id is important for temporary tables
since the tables are session specific. DROP TABLE and DELETE
queries executed with a wrong thread id resulted in errors
while applying the queries generated by the mysqlbinlog utility.
Fix
===
Set the 'thread_specific_used' THD variable for DROP TABLE and
DELETE queries.
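A minimal sketch of the idea (the code path in the server differs in
detail; 'thd->temporary_tables' is used here just for illustration):

/* Sketch: flag DROP TABLE / DELETE statements that touch temporary
   tables as thread specific, so that the query log event emits
   'SET @@session.pseudo_thread_id' before the query text. */
if (thd->temporary_tables)
  thd->thread_specific_used= TRUE;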
ReviewBoard: 21833
Problem:
=======
When executing the command "mysqlbinlog --read-from-remote-server
--host='xx.xx.xx.xx' --port=3306 --user=xxx --password=xxx
--database=mysql --to-last-log mysql-bin.000001 --start-position=1098699
--stop-never | mysql -uxxx -pxxx", we found that the last data read from
the remote server couldn't be committed.
Analysis:
========
The purpose of 'Write_on_release_cache' is that the contents of the cache
will automatically be written to a dedicated result file on destruction.
Flush operations on the result file are controlled by the flag 'FLUSH_F'.
Events that require a forced flush upon destruction have to enable
'Write_on_release_cache::FLUSH_F'. At present the 'FLUSH_F' flag is
defined in an enum as shown below.
enum flag
{
  FLUSH_F
};
Since 'FLUSH_F' is the first enum member and has no initializer, it gets
the default value '0'. Because of this, the following flush condition
never succeeds:
if (m_flags & FLUSH_F)
  fflush(m_file);
At present the file gets flushed only during the my_fclose(result_file)
operation. When continuous streaming is enabled through the --stop-never
option, the file never gets flushed and hence events are not replicated.
Fix:
===
Initialize the enum value to a non-zero value.
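For example, a minimal sketch of such an initialization:

enum flag
{
  FLUSH_F= 1   /* non-zero, so that (m_flags & FLUSH_F) can succeed */
};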
fix MDEV-18750: failed to flashback large-size binlog file
fix mysqlbinlog flashback failure caused by reading io_cache without MY_FULL_IO flag
fix MDEV-18750: mysqlbinlog flashback failure on large binlog
Problem:
========
The mysqlbinlog tool is leaking memory, causing failures in various tests when
compiling and testing with AddressSanitizer or LeakSanitizer like this:
cmake -DCMAKE_BUILD_TYPE=Debug -DWITH_ASAN:BOOL=ON /path/to/source
make -j$(nproc)
cd mysql-test
ASAN_OPTIONS=abort_on_error=1 ./mtr --parallel=auto
Analysis:
=========
Two types of leaks were observed during the above execution.
1) Leak in Log_event::read_log_event(char const*, unsigned int, char const**,
Format_description_log_event const*, char)
File: sql/log_event.cc:2150
For all row-based replication events, the memory allocated during
read_log_event is not freed after the event is processed. The
event-specific memory has to be retained only when the flashback option
is enabled in the mysqlbinlog tool; in that case all events are retained
until the end statement is received, then processed in reverse order and
destroyed. But in the existing code all events are retained irrespective
of flashback mode, hence the observed memory leaks.
2) read_remote_annotate_event(unsigned char*, unsigned long, char const**)
File: client/mysqlbinlog.cc:194
In general the Annotate event is not printed immediately, because all
subsequent rbr-events can be filtered away. Instead it will be printed
together with the first Table map event that is not filtered away, or
when the last rbr-event is processed. While reading remote Annotate
events, memory is allocated for the event buffer, and the event's
temp_buf is made to point to the allocated buffer as shown below. The
TRUE flag requests proper cleanup via free_temp_buf(), i.e. at the time
of deletion of the Annotate event its destructor takes care of freeing
the temp_buf.
/*
  Ensure the event->temp_buf is pointing to the allocated buffer.
  (TRUE = free temp_buf on the event deletion)
*/
event->register_temp_buf((char*)event_buf, TRUE);
But the existing code does the following when it receives a remote
Annotate event:
if (remote_opt)
  ev->temp_buf= 0;
That is, the code immediately sets temp_buf= 0, because of which the
free_temp_buf() call returns empty-handed, having lost its reference to
the allocated temporary buffer. This results in a memory leak.
Fix:
====
1) If not in flashback mode, destroy the events (and their memory) once
   they are processed.
2) Remove the ev->temp_buf= 0 code for the remote option, and let the
   proper cleanup be done as part of free_temp_buf().
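Sketched in code (simplified; 'opt_flashback' stands for mysqlbinlog's
--flashback flag, and the surrounding event loop is omitted):

/* 1) Without --flashback there is no need to buffer row events for
      reverse processing: destroy each event once it is handled. The
      destructor calls free_temp_buf(), releasing the buffer handed
      over by register_temp_buf(..., TRUE). */
if (!opt_flashback)
{
  delete ev;
  ev= NULL;
}
/* 2) The 'if (remote_opt) ev->temp_buf= 0;' lines are simply removed,
      so the destructor keeps its reference to the allocated buffer. */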
On some systems with 10,000+ binlogs, show binary logs could block
log rotation for more than 10 seconds.
This patch fixes this by first caching all binary log names and then
releasing all mutexes while calculating the sizes of the binary logs.
Other things:
- Ensure that reinit_io_cache() sets end_of_file when moving to read_cache.
  This ensures that external changes to the underlying file are known to
  the cache.
- get_binlog_list() is made more efficient, and show_binlogs() is changed
  to call get_binlog_list().
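The shape of the change, as a self-contained sketch in standard C++
(the server's actual types, locks and helpers differ):

#include <mutex>
#include <string>
#include <sys/stat.h>
#include <utility>
#include <vector>

std::mutex log_index_mutex;  /* stands in for the binlog index lock */

/* Copy the (cheap) list of names under the lock, then do the per-file
   stat() calls with no lock held, so that SHOW BINARY LOGS can no
   longer block a concurrent log rotation. */
std::vector<std::pair<std::string, off_t>>
list_binlogs(const std::vector<std::string> &index)
{
  std::vector<std::string> names;
  {
    std::lock_guard<std::mutex> guard(log_index_mutex);
    names= index;                      /* no file I/O under the lock */
  }
  std::vector<std::pair<std::string, off_t>> result;
  for (const std::string &name : names)
  {
    struct stat st;
    result.emplace_back(name,
                        ::stat(name.c_str(), &st) == 0 ? st.st_size : 0);
  }
  return result;
}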
Reviewed by Andrei Elkin
Ignore FK-prelocked tables when looking for write-prelocked tables with
auto-increment columns, i.e. when deciding whether to warn "Statement is
unsafe because it invokes a trigger or a stored function that inserts
into an AUTO_INCREMENT column".
The problem was originally stated in
http://bugs.mysql.com/bug.php?id=82212
The size of a base64-encoded Rows_log_event exceeds its vanilla byte
representation by a factor of 4/3.
When a binlogged event's size is about 1GB, mysqlbinlog generates a
BINLOG query that can't be sent out due to its size.
This is fixed by fragmenting the BINLOG argument C-string into
(approximate) halves when the base64-encoded event is over 1GB in size.
In such a case mysqlbinlog puts out
SET @binlog_fragment_0='base64-encoded-fragment_0';
SET @binlog_fragment_1='base64-encoded-fragment_1';
BINLOG @binlog_fragment_0, @binlog_fragment_1;
to represent a big BINLOG.
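A standalone sketch of the splitting step (not the server's code; the
1GB threshold comes from the description above, and it assumes, as the
statement form above suggests, that the server concatenates the
fragments before decoding, so any split point works):

#include <string>
#include <utility>

/* Split an oversized base64 payload into two approximate halves. */
std::pair<std::string, std::string>
split_binlog_argument(const std::string &b64,
                      std::string::size_type limit= 1UL << 30)
{
  if (b64.size() <= limit)
    return std::make_pair(b64, std::string()); /* one fragment suffices */
  std::string::size_type half= b64.size() / 2;
  return std::make_pair(b64.substr(0, half), b64.substr(half));
}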
For prompt memory release, the BINLOG handler is made to reset the BINLOG
argument user variables in the middle of processing, as if
@binlog_fragment_{0,1}= NULL had been assigned.
Notice that two fragments are enough, though the client and server may
still need to tweak their @@max_allowed_packet to accommodate the
fragment size (which they would have to do anyway with a greater number
of fragments, should that be desired).
On the lower level the following changes are made:
Log_event::print_base64()
still calls the encoder and stores the encoded data into a cache, but
now *without* doing any formatting. The latter is left for the time when
the cache is copied to an output file (e.g. the mysqlbinlog output).
The no-formatting behavior is also reflected by the change in the meaning
of the last argument, which specifies whether to cache the encoded data.
Rows_log_event::print_helper()
is made to invoke a specialized fragmented cache-to-file copying
function,
copy_cache_to_file_wrapped(),
which takes care of the fragmenting and optionally wraps the encoded
strings (fragments) into SQL stanzas.
my_b_copy_to_file()
is refactored, with my_b_copy_all_to_file() split out of it. The former
function is generalized to accept a limit argument that constrains the
copying, and it no longer reinitializes the cache into reading mode.
The limit has no effect on a fully read cache.
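The resulting split, sketched as declarations (signatures approximated
from the description above):

/* Copy at most 'count' bytes from the cache to 'file'. The cache is
   expected to already be in read mode; this function no longer calls
   reinit_io_cache() itself. */
int my_b_copy_to_file(IO_CACHE *cache, FILE *file, size_t count);

/* Previous behavior: reinitialize the cache for reading, then copy
   its entire contents to 'file'. */
int my_b_copy_all_to_file(IO_CACHE *cache, FILE *file);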