mirror of
https://github.com/MariaDB/server.git
synced 2025-09-03 20:43:11 +03:00
replication causing replication to fail.

Remove the temporary fix for MDEV-5914, which used READ COMMITTED for parallel replication worker threads. Replace it with a better, more selective solution.

The issue is with certain edge cases of InnoDB gap locks, for example between INSERT and ranged DELETE. It is possible for the gap lock set by the DELETE to block the INSERT, if the DELETE runs first, while the record lock set by the INSERT does not block the DELETE, if the INSERT runs first. This can cause a conflict between the two in parallel replication on the slave, even though they ran without conflicts on the master.

With this patch, InnoDB will ask the server layer about the two involved transactions before blocking on a gap lock. If the server layer tells InnoDB that the transactions are already fixed wrt. commit order, as they are in parallel replication, InnoDB will ignore the gap lock and allow the two transactions to proceed in parallel, avoiding the conflict.

Improve the fix for MDEV-6020. When InnoDB itself detects a deadlock, it now asks the server layer for any preferences about which transaction to roll back. In case of parallel replication with two transactions T1 and T2 fixed to commit T1 before T2, the server layer will ask InnoDB to roll back T2 as the deadlock victim, not T1. This helps in some cases to avoid excessive deadlock rollback, as T2 will in any case need to wait for T1 to complete before it can itself commit.

Also some misc. fixes found during development and testing:

- Remove thd_rpl_is_parallel(); it is not used or needed.

- Use KILL_CONNECTION instead of KILL_QUERY when a parallel replication worker thread is killed to resolve a deadlock with fixed commit ordering. There are some cases, e.g. in sql/sql_parse.cc, where a KILL_QUERY can be ignored if the query otherwise completed successfully, and this could cause the deadlock kill to be lost, so that the deadlock was not correctly resolved.

- Fix random test failure due to missing wait_for_binlog_checkpoint.inc.

- Make sure that deadlock or other temporary errors during parallel replication are not printed to the error log; there were some places around the replication code with extra error logging. These conditions can occur occasionally and are handled automatically without breaking replication, so they should not pollute the error log.

- Fix handling of rgi->gtid_sub_id. We need to be able to access this also at the end of a transaction, to be able to detect and resolve deadlocks due to commit ordering. But this value was also used as a flag to mark whether record_gtid() had been called, by being set to zero, losing the value. Now introduce a separate flag rgi->gtid_pending, so rgi->gtid_sub_id remains valid for the entire duration of the transaction.

- Fix one place where the code to handle ignored errors called reset_killed() unconditionally, even if no error was caught that should be ignored. This could cause loss of a deadlock kill signal, breaking deadlock detection and resolution.

- Fix a couple of missing mysql_reset_thd_for_next_command() calls. This could cause a prior error condition to remain for the next event executed, causing assertions about errors already being set and possibly giving incorrect error handling for following event executions.

- Fix code that cleared thd->rgi_slave in the parallel replication worker threads after each event execution; this caused the deadlock detection and handling code to not be able to correctly process the associated transactions as belonging to replication worker threads.

- Remove useless error code in slave_background_kill_request().

- Fix bug where wfc->wakeup_error was not cleared at wait_for_commit::unregister_wait_for_prior_commit(). This could cause the error condition to wrongly propagate to a later wait_for_prior_commit(), causing spurious ER_PRIOR_COMMIT_FAILED errors.

- Do not put the binlog background thread into the processlist. It causes too many result differences in mtr, but also it probably is not useful for users to pollute the process list with a system thread that does not really perform any user-visible tasks.
246 lines
8.3 KiB
C++
#ifndef RPL_PARALLEL_H
#define RPL_PARALLEL_H

#include "log_event.h"


struct rpl_parallel;
struct rpl_parallel_entry;
struct rpl_parallel_thread_pool;

class Relay_log_info;
struct inuse_relaylog;


/*
  Structure used to keep track of the parallel replication of a batch of
  event-groups that group-committed together on the master.

  It is used to ensure that every event group in one batch has reached the
  commit stage before the next batch starts executing.

  Note the lifetime of this structure:

   - It is allocated when the first event in a new batch of group commits
     is queued, from the free list rpl_parallel_entry::gco_free_list.

   - The gco for the batch currently being queued is owned by
     rpl_parallel_entry::current_gco. The gco for a previous batch that has
     been fully queued is owned by the gco->prev_gco pointer of the gco for
     the following batch.

   - The worker thread waits on gco->COND_group_commit_orderer for
     rpl_parallel_entry::count_committing_event_groups to reach wait_count
     before starting; the first waiter links the gco into the next_gco
     pointer of the gco of the previous batch for signalling.

   - When an event group reaches the commit stage, it signals the
     COND_group_commit_orderer if its gco->next_gco pointer is non-NULL and
     rpl_parallel_entry::count_committing_event_groups has reached
     gco->next_gco->wait_count.

   - When gco->wait_count is reached for a worker and the wait completes,
     the worker frees gco->prev_gco; at this point it is guaranteed not to
     be needed any longer.
*/
struct group_commit_orderer {
  /* Wakeup condition, used with rpl_parallel_entry::LOCK_parallel_entry. */
  mysql_cond_t COND_group_commit_orderer;
  uint64 wait_count;
  group_commit_orderer *prev_gco;
  group_commit_orderer *next_gco;
  bool installed;
};


struct rpl_parallel_thread {
  bool delay_start;
  bool running;
  bool stop;
  mysql_mutex_t LOCK_rpl_thread;
  mysql_cond_t COND_rpl_thread;
  mysql_cond_t COND_rpl_thread_queue;
  struct rpl_parallel_thread *next; /* For free list. */
  struct rpl_parallel_thread_pool *pool;
  THD *thd;
  /*
    Who owns the thread, if any (it's a pointer into the
    rpl_parallel_entry::rpl_threads array).
  */
  struct rpl_parallel_thread **current_owner;
  /* The rpl_parallel_entry of the owner. */
  rpl_parallel_entry *current_entry;
  struct queued_event {
    queued_event *next;
    Log_event *ev;
    rpl_group_info *rgi;
    inuse_relaylog *ir;
    ulonglong future_event_relay_log_pos;
    char event_relay_log_name[FN_REFLEN];
    char future_event_master_log_name[FN_REFLEN];
    ulonglong event_relay_log_pos;
    my_off_t future_event_master_log_pos;
    size_t event_size;
  } *event_queue, *last_in_queue;
  uint64 queued_size;
  queued_event *qev_free_list;
  rpl_group_info *rgi_free_list;
  group_commit_orderer *gco_free_list;

  void enqueue(queued_event *qev)
  {
    if (last_in_queue)
      last_in_queue->next= qev;
    else
      event_queue= qev;
    last_in_queue= qev;
    queued_size+= qev->event_size;
  }

  void dequeue1(queued_event *list)
  {
    DBUG_ASSERT(list == event_queue);
    event_queue= last_in_queue= NULL;
  }

  void dequeue2(size_t dequeue_size)
  {
    queued_size-= dequeue_size;
  }

  queued_event *get_qev_common(Log_event *ev, ulonglong event_size);
  queued_event *get_qev(Log_event *ev, ulonglong event_size,
                        Relay_log_info *rli);
  queued_event *retry_get_qev(Log_event *ev, queued_event *orig_qev,
                              const char *relay_log_name,
                              ulonglong event_pos, ulonglong event_size);
  void free_qev(queued_event *qev);
  rpl_group_info *get_rgi(Relay_log_info *rli, Gtid_log_event *gtid_ev,
                          rpl_parallel_entry *e, ulonglong event_size);
  void free_rgi(rpl_group_info *rgi);
  group_commit_orderer *get_gco(uint64 wait_count, group_commit_orderer *prev);
  void free_gco(group_commit_orderer *gco);
};


struct rpl_parallel_thread_pool {
  uint32 count;
  struct rpl_parallel_thread **threads;
  struct rpl_parallel_thread *free_list;
  mysql_mutex_t LOCK_rpl_thread_pool;
  mysql_cond_t COND_rpl_thread_pool;
  bool changing;
  bool inited;

  rpl_parallel_thread_pool();
  int init(uint32 size);
  void destroy();
  struct rpl_parallel_thread *get_thread(rpl_parallel_thread **owner,
                                         rpl_parallel_entry *entry);
  void release_thread(rpl_parallel_thread *rpt);
};


struct rpl_parallel_entry {
  mysql_mutex_t LOCK_parallel_entry;
  mysql_cond_t COND_parallel_entry;
  uint32 domain_id;
  uint64 last_commit_id;
  bool active;
  /*
    Set when SQL thread is shutting down, and no more events can be processed,
    so worker threads must force abort any current transactions without
    waiting for event groups to complete.
  */
  bool force_abort;
  /*
    At STOP SLAVE (force_abort=true), we do not want to process all events in
    the queue (which could unnecessarily delay stop, if a lot of events happen
    to be queued). The stop_count provides a safe point at which to stop, so
    that everything before becomes committed and nothing after does. The value
    corresponds to group_commit_orderer::wait_count; if wait_count is less than
    or equal to stop_count, we execute the associated event group, else we
    skip it (and all following) and stop.
  */
  uint64 stop_count;

  /*
    Cyclic array recording the last rpl_thread_max worker threads that we
    queued event for. This is used to limit how many workers a single domain
    can occupy (--slave-domain-parallel-threads).

    Note that workers are never explicitly deleted from the array. Instead,
    we need to check (under LOCK_rpl_thread) that the thread still belongs
    to us before re-using (rpl_thread::current_owner).
  */
  rpl_parallel_thread **rpl_threads;
  uint32 rpl_thread_max;
  uint32 rpl_thread_idx;
  /*
    The sub_id of the last transaction to commit within this domain_id.
    Must be accessed under LOCK_parallel_entry protection.

    Event groups commit in order, so the rpl_group_info for an event group
    will be alive (at least) as long as
    rpl_group_info::gtid_sub_id > last_committed_sub_id. This can be used to
    safely refer back to previous event groups if they are still executing,
    and ignore them if they completed, without requiring explicit
    synchronisation between the threads.
  */
  uint64 last_committed_sub_id;
  /*
    The sub_id of the last event group in this replication domain that was
    queued for execution by a worker thread.
  */
  uint64 current_sub_id;
  rpl_group_info *current_group_info;
  /*
    If we get an error in some event group, we set the sub_id of that event
    group here. Then later event groups (with higher sub_id) can know not to
    try to start (event groups that already started will be rolled back when
    wait_for_prior_commit() returns error).
    The value is ULONGLONG_MAX when no error occurred.
  */
  uint64 stop_on_error_sub_id;
  /* Total count of event groups queued so far. */
  uint64 count_queued_event_groups;
  /*
    Count of event groups that have started (but not necessarily completed)
    the commit phase. We use this to know when every event group in a previous
    batch of master group commits have started committing on the slave, so
    that it is safe to start executing the events in the following batch.
  */
  uint64 count_committing_event_groups;
  /* The group_commit_orderer object for the events currently being queued. */
  group_commit_orderer *current_gco;

  rpl_parallel_thread *choose_thread(rpl_group_info *rgi, bool *did_enter_cond,
                                     PSI_stage_info *old_stage, bool reuse);
  group_commit_orderer *get_gco();
  void free_gco(group_commit_orderer *gco);
};


struct rpl_parallel {
  HASH domain_hash;
  rpl_parallel_entry *current;
  bool sql_thread_stopping;

  rpl_parallel();
  ~rpl_parallel();
  void reset();
  rpl_parallel_entry *find(uint32 domain_id);
  void wait_for_done(THD *thd, Relay_log_info *rli);
  void stop_during_until();
  bool workers_idle();
  int do_event(rpl_group_info *serial_rgi, Log_event *ev, ulonglong event_size);
};


extern struct rpl_parallel_thread_pool global_rpl_thread_pool;


extern int rpl_parallel_change_thread_count(rpl_parallel_thread_pool *pool,
                                            uint32 new_count,
                                            bool skip_check= false);

#endif  /* RPL_PARALLEL_H */