Relates to MDEV-17863 DROP TEMPORARY TABLE creates a transaction in
binary log on read only server
Other things:
- Fixed that insert into normal_table select from tmp_table is
  replicated as row events if tmp_table doesn't exist on the slave.
- Any temporary tables created under read-only mode will never be logged
  to the binary log. Any usage of these tables to update normal tables, even
  after read-only has been disabled, will use row-based logging (as the
  temporary table will not be on the slave).
- ANALYZE, CHECK and REPAIR TABLE will not be logged in read-only mode.
Other things:
- Removed unused variables in
  MYSQL_BIN_LOG::flush_and_set_pending_rows_event.
- Set table_share->table_creation_was_logged for all normal tables.
- THD::binlog_query() now returns -1 if the statement was not logged. This
  is used to update table_share->table_creation_was_logged.
- Don't log admin statements if opt_readonly is set.
- Tables that don't have table_creation_was_logged set will force the binlog
  format to row logging (see the sketch after this list).
- Removed unneeded/incorrect setting of table->s->table_creation_was_logged
  in create_table_from_items().
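A minimal sketch of that row-logging fallback (illustrative, not the literal
patch; the check belongs in the statement's logging-format decision):
if (!table->s->table_creation_was_logged)
{
  /* The table's CREATE never reached the binlog (for example it was
     created under read-only), so the slave cannot have it; use row events. */
  thd->set_current_stmt_binlog_format_row();
}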
calc_field_event_length should accurately calculate the size of BLOB type
fields. Instead of returning just the bytes taken by the length prefix, it
should return the length bytes + the actual data length.
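A minimal standalone sketch of the corrected calculation (the function name is
illustrative; the server uses its own length-reading macros):
#include <cstddef>
static size_t blob_event_length(const unsigned char *ptr, unsigned packlength)
{
  size_t data_length= 0;
  /* The first 'packlength' bytes hold the data length (little-endian). */
  for (unsigned i= 0; i < packlength; i++)
    data_length|= static_cast<size_t>(ptr[i]) << (8 * i);
  return packlength + data_length;   /* length bytes + actual length */
}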
Cherry-pick the commits from MySQL, with some changes.
WL#4618 RBR: extended table metadata in the binary log
This patch extends the Table Map Event by appending new fields for
more metadata. The new metadata includes:
- Signedness of Numeric Columns
- Character Set of Character Columns and Binary Columns
- Column Name
- String Value of SET Columns
- String Value of ENUM Columns
- Primary Key
- Character Set of SET Columns and ENUM Columns
- Geometry Type
Some of them are optional; the patch introduces a GLOBAL system
variable, binlog_row_metadata, to control them.
- Scope: GLOBAL
- Dynamic: Yes
- Type: ENUM
- Values: {NO_LOG, MINIMAL, FULL}
- Default: NO_LOG
With MINIMAL only signedness, character set and geometry type are logged;
with FULL all of them are logged.
Also add a binlog_type_info() function to Field, so that we can extract the
relevant binlog info from a field.
This allows one to run the test suite even if any of the following
options are changed:
- character-set-server
- collation-server
- join-cache-level
- log-basename
- max-allowed-packet
- optimizer-switch
- query-cache-size and query-cache-type
- skip-name-resolve
- table-definition-cache
- table-open-cache
- Some innodb options
etc
Changes:
- Don't print out the value of system variables as one can't depend on
  them being constant.
- Don't set global variables to 'default' as the default may not
be the same as the test was started with if there was an additional
option file. Instead save original value and reset it at end of test.
- Tests that depend on the latin1 character set should include
  default_charset.inc or set the character set to latin1.
- Tests that depend on the original optimizer switch should include
  default_optimizer_switch.inc.
- Tests that depend on the value of a specific system variable should
  set it in the test (like optimizer_use_condition_selectivity).
- Split subselect3.test into subselect3.test and subselect3.inc to
make it easier to set and reset system variables.
- Added .opt files for tests that require specific options that could
  be changed by external configuration files.
- Fixed result files in rocksdb & tokudb that had not been updated for
  a while.
Analysis:
========
As part of the BUG#28642318 fix, two new test cases were added. The first test
case tests a scenario with two sessions, in which the first session has a
regular table named 't1' and the other session has a temporary table named
't1'. The test executes a DELETE statement on the regular table. These
statements are captured from the binary log and replayed on a new client
connection to prove that the DELETE statement is applied successfully. Note
that the binlog contains only the CREATE TEMPORARY TABLE part, hence a
temporary table gets created in the new connection. This replaying logic is
implemented using the '--exec $MYSQL' command. If the new connection gets
disconnected within the scope of the first test case, the test passes, i.e.
the temporary table gets dropped as part of thread cleanup. But on slow
platforms the connection gets closed only during the execution of test case 2.
When the temporary table is dropped as part of thread cleanup, a
"DROP TEMPORARY TABLE t1" is written into the binary log. In test case two the
same sessions continue to exist and table names are reused to test a new bug
scenario. The additional "DROP TEMPORARY TABLE" command drops the second
test's tables, which results in an "Unknown table" error.
Fix:
====
Rename the table specific to the second test case to 't2'. Even if the
connection close from test case one happens later, the resulting
'DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `t1`' will not result in an error.
The MDEV-5589 commit set up a policy to skip DROP TEMPORARY TABLE binary
logging in case the target table has not been "CREATE"d in the binlog (no
CREATE Query-log-event was logged into the binary log).
It turns out that
1. the rule did not cover a non-existing table DROPped with the IF-EXISTS
clause. The logged-create knowledge for the non-existing one does not even
need the MDEV-5589 patch, and
2. connection close disobeys it and triggers automatic DROP-IF-EXISTS
binlogging.
Either 1 or 2, or even both, is also responsible for the unexpected binlog
records observed in MDEV-17863, actually rendering the referred
@@global.read_only irrelevant as far as the described stored procedure
definition *and* the ROW binlog-format are concerned.
Analysis:
========
Point in time recovery using mysqlbinlog output containing queries
operating on temporary tables results in an error.
While writing the query log event to the binary log, the
thread id used for execution of DROP TABLE and DELETE commands
was incorrect. The thread variable 'thread_specific_used'
is used to determine whether a specific thread id is to be used
while executing the statements, i.e. using 'SET
@@session.pseudo_thread_id'. This variable was not set
correctly for the DROP TABLE query and was never set for the DELETE
query. The thread id is important for temporary tables
since the tables are session specific. DROP TABLE and DELETE
queries executed using a wrong thread id resulted in errors
while applying the queries generated by the mysqlbinlog utility.
Fix:
===
Set the 'thread_specific_used' THD variable for DROP TABLE and
DELETE queries.
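A sketch of the essence of the fix (placement simplified and illustrative;
thread_specific_used is the existing THD member):
/* In the code paths that write DROP TABLE and DELETE on temporary tables
   to the binlog, mark the statement as thread specific so the resulting
   Query_log_event is preceded by SET @@session.pseudo_thread_id. */
thd->thread_specific_used= true;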
ReviewBoard: 21833
Problem:
=======
Executing the command "mysqlbinlog --read-from-remote-server --host='xx.xx.xx.xx'
--port=3306 --user=xxx --password=xxx --database=mysql --to-last-log
mysql-bin.000001 --start-position=1098699 --stop-never |mysql -uxxx -pxxx", we
found that the last data read from the remote server could not be committed.
Analysis:
========
The purpose of 'Write_on_release_cache' is that the contents of the cache will
automatically be written to a dedicated result file on destruction. The flush
operation on the result file is controlled by a flag 'FLUSH_F'. Events which
require a forced flush upon their destruction have to enable
'Write_on_release_cache::FLUSH_F'. At present the 'FLUSH_F' flag is defined in
an enum as shown below.
enum flag
{
FLUSH_F
};
Since 'FLUSH_F' is the first member without initialization, it gets the default
value '0'. Because of this the following flush condition never succeeds.
if (m_flags & FLUSH_F)
fflush(m_file);
At present the file gets flushed only during the my_fclose(result_file)
operation. When continuous streaming is enabled through the --stop-never
option it never gets flushed, and hence events are not replicated.
Fix:
===
Initialize the enum value to a non-zero value.
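A sketch of the corrected definition (the exact constant used in the source
may differ):
enum flag
{
  FLUSH_F= 1   /* non-zero, so that (m_flags & FLUSH_F) can become true */
};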
fix MDEV-18750: mysqlbinlog flashback failure on large binlog files, caused
by reading the io_cache without the MY_FULL_IO flag
Problem:
========
The mysqlbinlog tool is leaking memory, causing failures in various tests when
compiling and testing with AddressSanitizer or LeakSanitizer like this:
cmake -DCMAKE_BUILD_TYPE=Debug -DWITH_ASAN:BOOL=ON /path/to/source
make -j$(nproc)
cd mysql-test
ASAN_OPTIONS=abort_on_error=1 ./mtr --parallel=auto
Analysis:
=========
Two types of leaks were observed during the above execution.
1) Leak in Log_event::read_log_event(char const*, unsigned int, char const**,
Format_description_log_event const*, char)
File: sql/log_event.cc:2150
For all row-based replication events the memory which is allocated during
read_log_event is not freed after the event is processed. The event-specific
memory has to be retained only when the flashback option is enabled in the
mysqlbinlog tool. In that case all events are retained until the end of the
statement is received; they are then processed in reverse order and
destroyed. But in the existing code all events are retained irrespective of
flashback mode. Hence the memory leaks are observed.
2) read_remote_annotate_event(unsigned char*, unsigned long, char const**)
File: client/mysqlbinlog.cc:194
In general the Annotate event is not printed immediately because all
subsequent rbr events can be filtered away. Instead it will be printed
together with the first Table map event that is not filtered away, or when
the last rbr event is processed. While reading remote annotate events, memory
is allocated for the event buffer and the event's temp_buf is made to point to
the allocated buffer as shown below. The TRUE flag is used for doing proper
cleanup using free_temp_buf(), i.e. at the time of deletion of the annotate
event its destructor takes care of clearing the temp_buf.
/*
Ensure the event->temp_buf is pointing to the allocated buffer.
(TRUE = free temp_buf on the event deletion)
*/
event->register_temp_buf((char*)event_buf, TRUE);
But the existing code does the following when it receives a remote annotate event.
if (remote_opt)
ev->temp_buf= 0;
That is, the code immediately sets temp_buf=0, because of which the
free_temp_buf() call returns empty-handed as it has lost the reference to the
allocated temporary buffer. This results in a memory leak.
Fix:
====
1) If not in flashback mode, destroy the memory for events once they are
processed.
2) Remove the ev->temp_buf=0 code for the remote option. Let the proper
cleanup be done as part of free_temp_buf().
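A sketch of the intended event lifetime handling in mysqlbinlog (simplified;
the container name events_in_stmt is illustrative, opt_flashback stands for
the --flashback option):
if (opt_flashback)
  events_in_stmt.push_back(ev);  /* kept until the end of the statement, then
                                    replayed in reverse order and destroyed */
else
  delete ev;                     /* the destructor also frees temp_buf via
                                    free_temp_buf() */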
On some systems with 10,000+ binlogs, SHOW BINARY LOGS could block
log rotation for more than 10 seconds.
This patch fixes this by first caching all binary log names and then
releasing all mutexes while calculating the sizes of the binary logs.
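A pseudo-code sketch of the pattern (the helpers get_binlog_file_names(),
binlog_file_size() and send_row() are made up for this example; the real code
works on the binlog index file):
/* 1) Copy the binlog names while holding the binlog mutexes. */
mysql_mutex_lock(&LOCK_log);
std::vector<std::string> names= get_binlog_file_names();
mysql_mutex_unlock(&LOCK_log);
/* 2) Compute the file sizes without holding any binlog mutex, so that
      log rotation is not blocked while thousands of files are stat'ed. */
for (const std::string &name : names)
  send_row(name, binlog_file_size(name));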
Other things:
- Ensure that reinit_io_cache() sets end_of_file when moving to read_cache.
  This ensures that external changes to the underlying file are known to
  the cache.
- get_binlog_list() is made more efficient and show_binlogs() is changed
  to call get_binlog_list().
Reviewed by Andrei Elkin