mysqld maintains in THD a list of TABLE objects for all temporary
tables created within a session, with each table represented by
one TABLE object.
A query referencing a particular temporary table more than once,
however, failed with an ER_CANT_REOPEN_TABLE error, because a
TABLE_SHARE was allocated together with the TABLE, so temporary
tables always had exactly one TABLE per TABLE_SHARE.
This patch lifts the restriction by separating the TABLE and
TABLE_SHARE objects: TABLE_SHAREs for temporary tables are now
stored in a list in THD, and TABLEs in a list within their
respective TABLE_SHAREs.
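As a rough illustration of the resulting ownership model
(simplified, hypothetical types; the real THD and TABLE_SHARE
carry far more state):

    #include <list>
    #include <string>

    struct TABLE;                     // one open instance of a table

    struct TABLE_SHARE {              // shared definition, one per temp table
      std::string table_name;
      std::list<TABLE*> tables;       // all open TABLEs using this share
    };

    struct TABLE {
      TABLE_SHARE *s;                 // back-pointer to the shared definition
    };

    struct THD {                      // session object
      // Before: a flat list of TABLEs, one share per TABLE.
      // After: a list of shares, each owning its open instances, so
      // one temporary table can be opened several times in a query.
      std::list<TABLE_SHARE*> temporary_tables;
    };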
Bug#14294223: CHANGES NOT ALLOWED TO TEMPORARY TABLES ON
READ-ONLY SERVERS (INSERTS/UPDATES ON TEMPORARY TABLES)
Problem:
========
Running 5.5.14 in read-only mode, we can create temporary tables
but cannot insert or update records in them. When we try, we get
Error 1290: The MySQL server is running with the --read-only
option so it cannot execute this statement.
Analysis:
=========
This bug is specific to the binary log being enabled with
binlog-format set to STATEMENT or MIXED. A standalone server
without the binary log enabled, or one using row-based binlog
format, works fine.
How standalone servers and row-based replication work:
=====================================================
A standalone server, and row-based replication, mark a
transaction as read_write only when it modifies non-temporary
tables. Because of this, when the code enters the commit phase it
checks whether the transaction is read_write; if it is and global
read-only mode is enabled, the transaction fails with a 'server
is running in read-only mode' error.
In statement-based mode, a binlog handler is created at the time
of writing to the binary log, and it is always marked as
read_write. So for temporary tables, even though the engine does
not mark the transaction as read_write, the new transaction
started by the binlog handler is considered read_write.
Hence, when the code enters the commit phase, it finds one
handler with a read_write transaction even though we are only
modifying a temporary table, and the server throws an error when
global read-only mode is enabled.
Fix:
====
At commit time in ha_commit_trans(), if a read_write transaction
is found, check whether it comes from a handler other than the
binlog handler. This ensures the commit is blocked only when an
engine other than the binlog handler reports a genuine read_write
transaction, as sketched below.
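A minimal sketch of that check, assuming simplified stand-ins for
Ha_trx_info and handlerton (the real ha_commit_trans() logic is
considerably more involved):

    struct handlerton {};
    static handlerton binlog_engine;
    static handlerton *binlog_hton= &binlog_engine; // binlog pseudo-engine

    struct Ha_trx_info {
      Ha_trx_info *next;
      handlerton  *ht;
      bool         is_read_write;
    };

    // Enforce --read-only only when an engine other than the binlog
    // handler registered a read_write transaction.
    static bool genuine_write_transaction(const Ha_trx_info *ha_list)
    {
      for (const Ha_trx_info *ha= ha_list; ha; ha= ha->next)
        if (ha->is_read_write && ha->ht != binlog_hton)
          return true;
      return false;
    }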
This is done by splitting variables.errmsg and locale.errmsg into
variables.errmsg_extra and locale.errmsg_extra.
The ER() macros in unireg.h now look more complex than before,
but this isn't critical, as most uses of them are with constants
and the compiler will remove most of the test code.
LOG-SLOW-SLAVE-STATEMENTS NOT DISPLAYED.
These parameters were moved from the command-line options to the
system variables section. The handling of
opt_log_slow_slave_statements was changed to allow the variable
to be changed dynamically.
delete deferred events after they're executed
(otherwise they can be executed again for a sub-statement)
See also
commit 0e78d1d
Author: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
Date: Wed Mar 20 11:20:47 2013 +0530
BUG#15850951-DUPLICATE ERROR IN REPLICATION WITH SLAVE
TRIGGERS AND AUTO INCREMENT
REPLICATION
Problem: In RBR mode, merge table updates are not successfully
applied in a cascading replication setup.
Analysis & Fix: Every type of row event is preceded by one or more
Table_map_log_events that give information about all the tables
involved in the row event. The server maintains this list in
RPL_TABLE_LIST, goes through all the tables, and checks
compatibility between master and slave. Before checking
compatibility, it calls open_tables(), which takes the list of
all tables that need to be locked and opened. In RBR, because of
the Table_map_log_event, we already have all the tables,
including base tables, in the list; but open_tables(), being a
generic call, takes care of appending base tables if the list
contains merge tables. The current replication-layer logic
assumes that these tables (TABLE_LIST objects) are always added
at the end of the list: the replication layer maintains a count
of tables (tables_to_lock_count) that need the compatibility
check, runs through only that many tables from the list, and
skips the rest of the objects in the linked list. But this
assumption is wrong: open_tables()->..->add_children_to_list()
adds base tables to the list immediately after seeing the merge
table in the list.
For example, if the list passed to open_tables() is t1->t2->t3,
where t3 is a merge table (and t1 and t2 are base tables), it
adds t1'->t2' to the list after t3, so the new table list is
t1->t2->t3->t1'->t2'. This looks as if the children were added at
the end of the list, but that is not true in general: if the list
passed to open_tables() is t3->t1->t2, the new prepared list will
be t3->t1'->t2'->t1->t2, where t1' and t2' are TABLE_LIST objects
added by the add_children_to_list() call that the replication
layer should not look into. Here tables_to_lock_count does not
help, as the objects are added in the middle of the list.
Fix: After investigating the add_children_to_list() logic (which
is called from open_tables()), there is no flag/logic in it to
skip adding the children to the list even when the children are
already included in the table list. Hence, to fix the issue,
logic is added in the replication layer to skip children in the
list by checking whether 'parent_l' is non-NULL: if a table is a
child, we skip the compatibility check for it, as shown in the
sketch below.
This patch does not remove the 'tables_to_lock_count' logic, for
performance reasons: if there are children at the end of the
list, they can still be skipped cheaply by stopping the loop with
the tables_to_lock_count check.
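A hedged sketch of the skip, using simplified stand-in types (the
real check lives in the slave's table-map handling and works on
the server's TABLE_LIST):

    struct TABLE_LIST {
      TABLE_LIST *next_global;
      TABLE_LIST *parent_l;     // non-NULL for merge-table children
    };

    // Run the master/slave compatibility check only on tables the
    // replication layer itself put on the list, skipping children
    // that add_children_to_list() inserted in the middle.
    void check_table_map_compatibility(TABLE_LIST *tables,
                                       int tables_to_lock_count)
    {
      int checked= 0;
      for (TABLE_LIST *tbl= tables;
           tbl && checked < tables_to_lock_count;
           tbl= tbl->next_global)
      {
        if (tbl->parent_l)
          continue;             // child of a merge table: skip it
        // ... verify master/slave table compatibility for tbl ...
        checked++;
      }
    }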
The reason for the assertion failure is that, with the minimal
row image, the update statement sets only the PK column to true
in the write_set of the table, while the trigger aims to update a
different column.
Make sure that the columns used by triggers are marked in the
write_set when the triggers are processed (see the sketch below).
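A sketch of the idea, with stand-in types (the server's write_set
is a MY_BITMAP, not a std::set):

    #include <set>
    #include <vector>

    struct Field   { unsigned field_index; };
    struct TABLE   { std::set<unsigned> write_set; };
    struct Trigger { std::vector<Field*> fields_updated; };

    // With binlog-row-image=minimal only the PK is marked, so add
    // every column the trigger assigns to before it runs.
    void mark_trigger_columns(TABLE *table, const Trigger &trg)
    {
      for (const Field *f : trg.fields_updated)
        table->write_set.insert(f->field_index);
    }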
Fix remaining issues with wsrep_sync_wait and query cache.
- Fixes a misplaced call to invalidate the query cache in
Rows_log_event::do_apply_event(). The query cache was invalidated
too early, which allowed stale entries to be inserted into the
cache.
- Reset thd->wsrep_sync_wait_gtid on a query cache hit (see the
sketch below). THD::cleanup_after_query() is not called in such
cases, so thd->wsrep_sync_wait_gtid remained set.
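Conceptually, the reset amounts to something like the following
(stand-in types; the actual wsrep fields and constants differ):

    #include <cstdint>

    struct wsrep_gtid { int64_t seqno; };
    static const wsrep_gtid WSREP_GTID_UNDEFINED= { -1 };
    struct THD { wsrep_gtid wsrep_sync_wait_gtid; };

    // On a query cache hit the statement short-circuits and
    // THD::cleanup_after_query() never runs, so reset explicitly.
    void on_query_cache_hit(THD *thd)
    {
      thd->wsrep_sync_wait_gtid= WSREP_GTID_UNDEFINED;
    }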
Problem:
========
1) DROP TABLE queries are re-generated by the server before
writing the events (queries) into the binlog, for various
reasons. If the table name/db name contains non-regular
characters (such as Latin characters), the generated query is
wrong and breaks replication.
2) In the edge case where the table name/db name contains 64
characters, the server throws an assertion failure:
assert(M_TBLLEN < 128)
3) In the edge case where the db name contains 64 Latin
characters, the binlog content is interpreted badly, which leads
to replication failure.
Analysis & Fix:
================
1) The parser reads the table name from the query, converts it to
the standard charset (utf8), and stores it in the table_name
variable. When the DROP TABLE query is regenerated from the same
table_name variable, the name must be converted back from the
standard charset (utf8) to the original charset.
2) A Latin character takes two bytes in utf8, the identifier
limit is 64 characters, and SYSTEM_CHARSET_MBMAXLEN is set to 3,
so a table name/db name can take up to 3 * 64 bytes. Hence the
assert is changed to
assert(M_TBLLEN <= NAME_CHAR_LEN * SYSTEM_CHARSET_MBMAXLEN)
3) db_len in the binlog event header takes 1 byte and ranges from
0 to 192 bytes (3 * 64). While reading db_len from the event, the
server was casting to uint instead of uchar, which led to a bad
db_len. This is fixed by changing the cast type to uchar, as the
demonstration below shows.
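A small self-contained demonstration of why the cast matters
(hypothetical buffer layout; the real read happens in the
event-header parsing code):

    #include <cstdio>

    int main()
    {
      // db_len occupies one byte in the event header. If the buffer
      // is plain (signed) char, casting straight to uint
      // sign-extends values >= 128, e.g. a 192-byte db name.
      char buf[1]= { (char) 192 };
      unsigned int bad = (unsigned int)  buf[0]; // huge bogus length
      unsigned int good= (unsigned char) buf[0]; // 192, as intended
      printf("bad=%u good=%u\n", bad, good);
      return 0;
    }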
This includes fixing all utilities to not have any memory leaks,
as safemalloc warnings stopped tests from passing on Mac OS X.
- Ensure that all clients take character-set-dir, as the
libmysqlclient library will use it.
- mysql-test-run now passes character-set-dir to all external
clients.
- Changed dynstr_free() so that it can be called twice (made
freeing code easier).
- Changed rpl_global_gtid_slave_state to be allocated dynamically,
as it includes a mutex that needs to be initialized/destroyed
before my_end() is called.
- Removed rpl_slave_state::init() and rpl_slave_state::deinit(),
as their jobs are better handled by the constructor and
destructor.
- Print alias instead of table_name in check_duplicate_key, as
table_name may have been converted to lower case.
Other things:
- Fixed a case in time_to_datetime_with_warn() where we were
using && instead of & in tests.
Not printing the value" with binlog-row-image=minimal"
Merged Rows_log_event::print_verbose_one_row() and
log_event_print_value() with MySQL 5.7.
Added a flush after writing Table_map_log_event() to fix the
wrong order of lines in the output. This causes a lot of changes
in some test results, as well as some changes in query execution
plans.
Fixed by introducing table->rpl_write_set, which holds the
columns that should be stored in the binary log.
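A rough sketch of the separation, with stand-in types (the server
uses MY_BITMAP column bitmaps, not std::set):

    #include <set>

    struct TABLE {
      std::set<unsigned> read_set;      // columns the statement reads
      std::set<unsigned> write_set;     // columns the statement changes
      std::set<unsigned> rpl_write_set; // columns to put in the binlog
    };

    // Row-based logging consults rpl_write_set, so the optimizer can
    // keep read_set/write_set minimal without changing what gets
    // written to the binary log.
    bool column_is_logged(const TABLE &t, unsigned field_index)
    {
      return t.rpl_write_set.count(field_index) != 0;
    }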
Other things:
- Removed some unneeded references to read_set and write_set to
make code that really changes read_set and write_set easier to
read (in opt_range.cc).
- Added error handling for a failed unpack_current_row().
- Added a missing call to mark_columns_needed_for_insert() for
DELAYED INSERT.
- Removed the unused functions in_read_set() and in_write_set().
- In rpl_record.cc, removed the unused variable 'error'.
MDEV-8685 MariaDB fails to decode Anonymous_GTID entries
MDEV-5705 Replication testing: 5.6->10.0
- Ignoring GTID events from MySQL 5.6+ (allows replication from
MySQL 5.6+ with GTID enabled).
- Added ignorable events from MySQL 5.6.
- mysqlbinlog now writes information about GTID and ignorable
events.
- Added more information to the error message when replication
stops because of wrong information in the binary log.
- Fixed an incorrect test of when write_on_release() should flush
the cache.
Introduce Log_event_writer() that encapsulates
writing data to an IO_CACHE with automatic checksum calculation.
Now all events properly checksum themselves as needed.
Use Log_event_writer in MYSQL_BIN_LOG::write_cache() instead
of copy-pasting its logic all over.
Later Log_event_writer will also do encryption.
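In spirit, the writer looks something like this sketch (a toy
checksum and an in-memory buffer stand in for CRC32 and the
IO_CACHE):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    class Log_event_writer_sketch {
    public:
      explicit Log_event_writer_sketch(std::vector<unsigned char> *out)
        : m_out(out), m_crc(0) {}

      // Every event byte passes through one place, so the checksum
      // can never fall out of sync with what was actually written.
      void write_data(const unsigned char *buf, size_t len)
      {
        for (size_t i= 0; i < len; i++)
          m_crc= (m_crc << 1) ^ buf[i];       // toy checksum only
        m_out->insert(m_out->end(), buf, buf + len);
      }

      uint32_t checksum() const { return m_crc; }

    private:
      std::vector<unsigned char> *m_out;      // stands in for IO_CACHE
      uint32_t m_crc;
    };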