Problem:
=======
Test assertion fails on slave.
Assertion text: 'Last_Seen_Transaction should show .'
Assertion condition: '"0-1-1" = ""'
Assertion condition, interpolated: '"0-1-1" = ""'
Assertion result: '0'
Analysis:
========
The test case creates a table on the master and waits for it to be
replicated to the slave and applied by the slave applier. On completion, the
'Last_Seen_Transaction' value from the
'performance_schema.replication_applier_status_by_worker' table is compared
with '@@gtid_slave_pos' to verify its correctness. The test should ensure
that the user table and the 'gtid_slave_pos' table use the same engine type,
'InnoDB', to get consistent test results. This guarantees that the
'gtid_slave_pos' table is updated as part of the transaction commit. In the
absence of such engine consistency, the user table gets created in the
default MyISAM storage engine while the 'mysql.gtid_slave_pos' table gets
created in the Aria storage engine. When the test code reaches the above
assert, the update to the 'gtid_slave_pos' table may still be pending, which
leads to the test assertion failure.
Fix:
===
Use InnoDB engine for both user table and 'mysql.gtid_slave_pos' table.
Problem:
========
180511 11:07:58 [ERROR] Slave I/O: Unexpected master's heartbeat data:
heartbeat is not compatible with local info;the event's data: log_file_name
mysql-bin.000009 log_pos 1054262041, Error_code: 1623
Analysis:
=========
In a replication setup, when the master server doesn't have any events to
send to the slave server, it sends a 'Heartbeat_log_event'. This event
carries the current binary log filename and offset details. The offset value
is stored within 4 bytes of the event header. When the size of the binary
log exceeds UINT32_MAX, the log_pos value no longer fits in 4 bytes of
memory; it overflows, and hence the slave stops with an error.
Fix:
===
Since we cannot extend the common header of the Log_event class, a
Log_event::log_pos value greater than 4GB is transported in a Heartbeat
event's sub-header. Log_event::log_pos is set to zero in that case to
indicate that the 8-byte sub-header is allocated in the event.
In case of cross-version replication, the following behaviour is expected:
OLD - Server without fix
NEW - Server with fix
OLD<->NEW : works bidirectionally as long as the binlog offset is
(normally) within 4GB.
When log_pos > UINT32_MAX
OLD->NEW : The 'log_pos' is bound to overflow, and the NEW slave may report
an invalid event/incompatible heartbeat event error.
NEW->OLD : Since the patched server sets log_pos=0 on overflow, the OLD
slave will report an invalid event error.
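A minimal sketch of the decoding idea on the slave side; the helper name and
the raw memcpy are illustrative, not the exact patch (the server would use
its own byte-order macros):

    #include <stdint.h>
    #include <string.h>

    /* A zero 4-byte log_pos in the common header signals that the real,
       possibly > 4GB, offset follows as an 8-byte value in the heartbeat
       event's sub-header. */
    static uint64_t heartbeat_log_pos(uint32_t header_log_pos,
                                      const unsigned char *sub_header)
    {
      if (header_log_pos != 0)
        return header_log_pos;              /* offset fits in 4 bytes */
      uint64_t pos;
      memcpy(&pos, sub_header, sizeof pos); /* 8-byte offset */
      return pos;
    }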
After dfb41fddf6, tables that failed to drop are excluded from the
binlogged DROP TABLE statement. This means that the slave should not
expect any errors when executing DROP TABLE, and the binlog should
report that no error has happened, even if one did.
Do not write the error code into the binlogged DROP TABLE,
and remove all the code that was needed to compute it.
Backport upstream fix
commit 1800b015a1d487330f7b15f2020b887be348a66b
Author: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
Date: Fri Sep 8 20:29:22 2017 +0530
Bug#26027024 SLAVE_COMPRESSED_PROTOCOL DOESN'T WORK WITH
SEMI-SYNC REPLICATION IN MYSQL-5.7
Analysis: In mysql-5.6, the dump thread (the thread that is created
on the Master after the Slave requests a binlog dump) is also used
to receive acknowledgements from the Slave and act on them accordingly.
For performance reasons, a special thread called the Ack Receiver thread
was added in the mysql-5.7 semi-synchronous replication plugin.
This thread has no special handling to receive acknowledgements
when the Slave has enabled compression in the protocol. Hence the Master
is unable to handle any slave that has Slave_compressed_protocol enabled.
Fix: Enable compress flag on the communication channels if the Slave
has Slave_compressed_protocol ON.
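A minimal sketch of the idea; the function is hypothetical, while
NET::compress is the flag the protocol layer checks before compressing and
decompressing packets:

    /* Mirror the dump thread's behaviour on the Ack Receiver's channel
       when the slave connected with Slave_compressed_protocol=ON. */
    static void enable_ack_channel_compression(NET *net,
                                               bool slave_compressed)
    {
      if (slave_compressed)
        net->compress= 1;
    }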
Merge 'replication_applier_status_by_coordinator' table.
This table captures the SQL_THREAD status in both single-threaded and
multi-threaded slave configurations. When multi-source replication is
enabled, this table displays each source's SQL_THREAD status.
Added new columns for:
- LAST_SEEN_TRANSACTION
- LAST_TRANS_RETRY_COUNT
Merge 'replication_connection_configuration' table.
Replaced the following column:
- AUTO_POSITION with USING_GTID
Added new columns for:
- IGNORE_SERVER_IDS
- DO_DOMAIN_IDS
- IGNORE_DOMAIN_IDS
Removed the following columns as they are not part of MariaDB replication
connection configuration:
- NETWORK_INTERFACE
- TLS_VERSION
@sql/mysqld.cc
Changed "master-retry-count" default value to 100000.
Step 3:
======
Preserve worker pool information on either STOP SLAVE or error. When STOP
SLAVE is executed, the worker threads are gone, hence the worker data is
unavailable; querying the table at this stage would give empty rows. To
address this, when worker threads are about to stop, due to an error or a
forced stop, create a backup pool and preserve the data that is relevant for
populating the performance schema table, as sketched below. Clear the backup
pool upon slave start.
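A minimal sketch of such a backup entry; all names and sizes here are
hypothetical:

    #include <time.h>

    /* Snapshot of the fields the performance schema table needs,
       taken just before a worker thread exits on error or forced stop.
       The pool of these entries is cleared on the next slave start. */
    struct Worker_backup
    {
      unsigned long long thread_id;      /* reported as NULL once gone */
      char last_seen_transaction[64];    /* last GTID, in text form */
      unsigned int last_error_number;
      char last_error_message[512];
      time_t last_error_timestamp;
    };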
Step 2:
======
Add two extra columns mentioned below.
---------------------------------------------------------------------------
|Column Name:           | Description:                                    |
|-------------------------------------------------------------------------|
|                       |                                                 |
|WORKER_IDLE_TIME       | Total idle time in seconds that the worker      |
|                       | thread has spent waiting for work from the      |
|                       | coordinator thread                              |
|                       |                                                 |
|LAST_TRANS_RETRY_COUNT | Total number of retries attempted by the last   |
|                       | transaction                                     |
---------------------------------------------------------------------------
Step 1:
======
Backport 'replication_applier_status_by_worker' from upstream.
Iterate through rpl_parallel_thread_pool and display slave worker thread
specific information as part of 'replication_applier_status_by_worker'
table.
---------------------------------------------------------------------------
|Column Name:           | Description:                                    |
|-------------------------------------------------------------------------|
|                       |                                                 |
|CHANNEL_NAME           | Name of the replication channel through which   |
|                       | the transaction is received.                    |
|                       |                                                 |
|THREAD_ID              | Thread_Id as displayed in the                   |
|                       | 'performance_schema.threads' table for the      |
|                       | thread with name                                |
|                       | 'thread/sql/rpl_parallel_thread'                |
|                       |                                                 |
|                       | THREAD_ID will be NULL when worker threads are  |
|                       | stopped due to an error/forced stop             |
|                       |                                                 |
|SERVICE_STATE          | Whether the thread is running or not            |
|                       |                                                 |
|LAST_SEEN_TRANSACTION  | Last GTID executed by the worker                |
|                       |                                                 |
|LAST_ERROR_NUMBER      | Last error that occurred on a particular        |
|                       | worker                                          |
|                       |                                                 |
|LAST_ERROR_MESSAGE     | Message of the last error                       |
|                       |                                                 |
|LAST_ERROR_TIMESTAMP   | Timestamp of the last error                     |
|                       |                                                 |
---------------------------------------------------------------------------
CHANNEL_NAME will be empty when the worker has not processed any
transaction. CHANNEL_NAME points to a valid source channel name while the
worker is processing a transaction/event group.
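A minimal sketch of the fill routine; the types and the emit_row callback
are hypothetical stand-ins for rpl_parallel_thread_pool and the performance
schema fill interface:

    /* Walk the worker pool and emit one table row per worker thread;
       when the workers have stopped, the backup pool from Step 3 is
       walked instead. */
    struct Worker { /* per-thread state: id, last GTID, last error... */ };
    struct Pool   { Worker **threads; unsigned count; };

    static void fill_applier_status_by_worker(
        const Pool *pool, void (*emit_row)(const Worker *))
    {
      for (unsigned i= 0; i < pool->count; i++)
        emit_row(pool->threads[i]);
    }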
Adds an implementation for SELECT ... FOR UPDATE SKIP LOCKED /
SELECT ... LOCK IN SHARE MODE SKIP LOCKED
This is implemented only in InnoDB at the moment, not in RocksDB yet.
This adds a new handler flag, HA_CAN_SKIP_LOCKED, which a storage
engine advertises when it supports the feature.
When a storage engine indicates this flag, it will get the
TL_WRITE_SKIP_LOCKED and TL_READ_SKIP_LOCKED lock types, as in the
sketch below.
The Lex structure has been updated to store both the FOR UPDATE/LOCK IN
SHARE MODE as well as the SKIP LOCKED, so the SHOW CREATE VIEW
implementation is simpler.
"SELECT FOR UPDATE ... SKIP LOCKED" combined with CREATE TABLE AS or
INSERT ... SELECT on the result set is not safe for STATEMENT-based
replication. MIXED replication will replicate this as row-based events.
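A minimal sketch of the lock-type selection; the helper and the fallback
lock types are illustrative, only the HA_CAN_SKIP_LOCKED and
TL_*_SKIP_LOCKED names come from this change:

    /* Engines that do not advertise HA_CAN_SKIP_LOCKED keep the ordinary
       lock types, so SKIP LOCKED degrades to a plain locking read. */
    static thr_lock_type pick_lock_type(unsigned long long table_flags,
                                        bool for_update, bool skip_locked)
    {
      if (skip_locked && (table_flags & HA_CAN_SKIP_LOCKED))
        return for_update ? TL_WRITE_SKIP_LOCKED : TL_READ_SKIP_LOCKED;
      return for_update ? TL_WRITE : TL_READ_WITH_SHARED_LOCKS;
    }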
Thanks to guidance from Facebook commit
193896c466
which helped verify the basic test case and the components that needed
implementing (even though every part was implemented differently).
Thanks to Marko for guidance on a simpler InnoDB implementation.
Reviewers: Marko, Monty
This feature adds the functionality of ignorability for indexes.
Indexes are not ignored by default.
To control index ignorability explicitly for a new index,
use IGNORE or NOT IGNORE as part of the index definition for
CREATE TABLE, CREATE INDEX, or ALTER TABLE.
Primary keys (explicit or implicit) cannot be made ignorable.
The table INFORMATION_SCHEMA.STATISTICS gets a new column named IGNORED
that stores whether an index is to be ignored or not.
Problem:
========
CHANGE MASTER TO MASTER_USER='root', MASTER_SSL=0, MASTER_SSL_CA='',
MASTER_SSL_CERT='', MASTER_SSL_KEY='', MASTER_SSL_CRL='',
MASTER_SSL_CRLPATH='';
CHANGE MASTER TO MASTER_USER='root', MASTER_PASSWORD='', MASTER_SSL=0;
use-after-poison is reported for lex_mi->ssl_crl
File: sql_repl.cc
  if (lex_mi->ssl_crl)
    strmake_buf(mi->ssl_crl, lex_mi->ssl_crl);
Analysis:
========
At the end of CHANGE MASTER statement execution, the LEX_MASTER_INFO
parameters are reset so that the next query will have a clean state. But the
'ssl_crl' and 'ssl_crl_path' members of the LEX_MASTER_INFO object are not
cleared during 'LEX_MASTER_INFO::reset'. Hence, when a new CHANGE MASTER
statement is executed, the stale value of lex_mi->ssl_crl is used, so ASAN
reports use-after-poison.
Fix:
===
Clear 'ssl_crl' and 'ssl_crl_path' as part of 'reset'.
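A minimal sketch of the fix; member names as used above, the rest of the
function is elided:

    /* In LEX_MASTER_INFO::reset() (signature abbreviated): */
    ssl_crl= NULL;        /* was left stale before this fix */
    ssl_crl_path= NULL;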
We implement an idea that was suggested by Michael 'Monty' Widenius
in October 2017: When InnoDB is inserting into an empty table or partition,
we can write a single undo log record TRX_UNDO_EMPTY, which will cause
ROLLBACK to clear the table.
For this to work, the insert into an empty table or partition must be
covered by an exclusive table lock that will be held until the transaction
has been committed or rolled back, or the INSERT operation has been
rolled back (and the table is empty again), in lock_table_x_unlock().
Clustered index records that are covered by the TRX_UNDO_EMPTY record
will carry DB_TRX_ID=0 and DB_ROLL_PTR=1<<55, and thus they cannot
be distinguished from what MDEV-12288 leaves behind after purging the
history of row-logged operations.
Concurrent non-locking reads must be adjusted: If the read view was
created before the INSERT into an empty table, then we must continue
to imagine that the table is empty, and not try to read any records.
If the read view was created after the INSERT was committed, then
all records must be visible normally. To implement this, we introduce
the field dict_table_t::bulk_trx_id.
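A minimal sketch of that check; the helper name is illustrative, while the
field and accessor are the ones introduced here:

    /* A reader whose view cannot see table->bulk_trx_id must treat the
       table as empty: every row was created by that one bulk
       transaction. */
    static bool table_appears_empty(const dict_table_t *table,
                                    const ReadView *view)
    {
      return table->bulk_trx_id != 0
             && !view->changes_visible(table->bulk_trx_id);
    }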
This special handling only applies to the very first INSERT statement
of a transaction for the empty table or partition. If a subsequent
statement in the transaction is modifying the initially empty table again,
we must enable row-level undo logging, so that we will be able to
roll back to the start of the statement in case of an error (such as
duplicate key).
INSERT IGNORE will continue to use row-level logging and locking, because
implementing it would require the ability to roll back the latest row.
Since the undo log that we write only allows us to roll back the entire
statement, we cannot support INSERT IGNORE. We will introduce a
handler::extra() parameter HA_EXTRA_IGNORE_INSERT to indicate to storage
engines that INSERT IGNORE is being executed.
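A minimal sketch of the engine-side hook, heavily simplified; the real
handler method handles many more operations:

    /* When the SQL layer announces INSERT IGNORE via handler::extra(),
       leave bulk-insert mode so that the rows get ordinary row-level
       undo logging and locking. */
    int ha_innobase::extra(enum ha_extra_function operation)
    {
      if (operation == HA_EXTRA_IGNORE_INSERT)
        m_prebuilt->trx->end_bulk_insert();
      return 0;   /* handling of other operations elided */
    }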
In many test cases, we add an extra record to the table, so that during
the 'interesting' part of the test, row-level locking and logging will
be used.
Replicas will continue to use row-level logging and locking until
MDEV-24622 has been addressed. Likewise, this optimization will be
disabled in Galera cluster until MDEV-24623 enables it.
dict_table_t::bulk_trx_id: The latest active or committed transaction
that initiated an insert into an empty table or partition.
Protected by exclusive table lock and a clustered index leaf page latch.
ins_node_t::bulk_insert: Whether bulk insert was initiated.
trx_t::mod_tables: Use C++11 style accessors (emplace instead of insert).
Unlike earlier, this collection will cover also temporary tables.
trx_mod_table_time_t: Add start_bulk_insert(), end_bulk_insert(),
is_bulk_insert(), was_bulk_insert().
trx_undo_report_row_operation(): Before accessing any undo log pages,
invoke trx->mod_tables.emplace() in order to determine whether undo
logging was disabled, or whether this is the first INSERT and we are
supposed to write a TRX_UNDO_EMPTY record.
row_ins_clust_index_entry_low(): If we are inserting into an empty
clustered index leaf page, set the ins_node_t::bulk_insert flag for
the subsequent trx_undo_report_row_operation() call.
lock_rec_insert_check_and_lock(), lock_prdt_insert_check_and_lock():
Remove the redundant parameter 'flags' that can be checked in the caller.
btr_cur_ins_lock_and_undo(): Simplify the logic. Correctly write
DB_TRX_ID,DB_ROLL_PTR after invoking trx_undo_report_row_operation().
trx_mark_sql_stat_end(), ha_innobase::extra(HA_EXTRA_IGNORE_INSERT),
ha_innobase::external_lock(): Invoke trx_t::end_bulk_insert() so that
the next statement will not be covered by table-level undo logging.
ReadView::changes_visible(trx_id_t) const: New accessor for the case
where the trx_id_t is not read from a potentially corrupted index page
but directly from the memory. In this case, we can skip a sanity check.
row_sel(), row_sel_try_search_shortcut(), row_search_mvcc(),
row_sel_try_search_shortcut_for_mysql(),
row_merge_read_clustered_index(): Check dict_table_t::bulk_trx_id.
row_sel_clust_sees(): Replaces lock_clust_rec_cons_read_sees().
lock_sec_rec_cons_read_sees(): Replaced with lower-level code.
btr_root_page_init(): Refactored from btr_create().
dict_index_t::clear(), dict_table_t::clear(): Empty an index or table,
for the ROLLBACK of an INSERT operation.
ROW_T_EMPTY, ROW_OP_EMPTY: Note a concurrent ROLLBACK of an INSERT
into an empty table.
This is joint work with Thirunarayanan Balathandayuthapani,
who created a working prototype.
Thanks to Matthias Leich for extensive testing.
Problem:
========
Automatic purging of relay logs stops when the relay log file is
'slave-relay-log.999999' and slave_parallel_threads is enabled.
Analysis:
=========
The problem is that the Relay_log_info::inc_group_relay_log_pos() function
compares two log names via strcmp(). This gives the correct result when the
log name sequence numbers have the same number of digits (6 digits), but
when the number grows to 7 digits, '999999' compares greater than
'1000000', which is wrong; hence the bug.
Fix:
====
Extract the numeric extension part of the file name, convert it to an
unsigned long, and compare, as sketched below.
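A minimal sketch of the comparison; the helpers are illustrative, not the
exact patch:

    #include <stdlib.h>
    #include <string.h>

    /* Sequence number from a log name such as "slave-relay-log.999999". */
    static unsigned long log_seq_no(const char *name)
    {
      const char *ext= strrchr(name, '.');
      return ext ? strtoul(ext + 1, NULL, 10) : 0;
    }

    /* Numeric comparison: "...999999" now sorts before "...1000000". */
    static int cmp_log_names(const char *a, const char *b)
    {
      unsigned long sa= log_seq_no(a), sb= log_seq_no(b);
      return (sa < sb) ? -1 : (sa > sb);
    }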
Thanks to David Zhao for the contribution.