The problem was that the controlling connection, i.e. the connection that
executed the query SET GLOBAL wsrep_reject_queries = ALL_KILL;
was also killed, but the server would still try to send the result of that
query to the controlling connection, causing the assertion
mysqld: /home/jan/mysql/10.2-sst/include/mysql/psi/mysql_socket.h:738: inline_mysql_socket_send: Assertion `mysql_socket.fd != -1' failed.
because the socket had already been closed when the controlling connection was closed.
wsrep_close_client_connections()
Do not close the controlling connection, and instead of
wsrep_close_thread() do a soft kill via THD::awake().
wsrep_reject_queries_update()
Call wsrep_close_client_connections() using the current thd.
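A minimal, self-contained sketch of the idea, using simplified stand-in types
(THD here is not the real server class, and awake_soft_kill() merely stands in
for THD::awake()): skip the controlling connection and soft-kill the rest
instead of closing their sockets.

    #include <vector>

    struct THD
    {
      unsigned long thread_id;
      bool killed= false;
      void awake_soft_kill() { killed= true; }   // stand-in for THD::awake()
    };

    static std::vector<THD*> client_threads;     // stand-in for the server thread list

    // 'caller' is the controlling connection that executed
    //   SET GLOBAL wsrep_reject_queries = ALL_KILL;
    void wsrep_close_client_connections_sketch(THD *caller)
    {
      for (THD *thd : client_threads)
      {
        if (thd == caller)
          continue;               // keep the controlling connection alive so the
                                  // result of the SET statement can still be sent
        thd->awake_soft_kill();   // soft kill: the thread notices the kill flag and
                                  // cleans up itself instead of having its socket closed
      }
    }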
MDEV-17058: Test failure on wsrep.variables
MDEV-17060: Test failure on galera.galera_var_slave_threads
Fix incorrect calculation of the number of increased applier (slave) threads.
Note that an increase takes effect "immediately", but the test should
use a proper wait condition to wait for it. Reducing the number of
slave threads is not immediate, as a thread exits only after processing a
replication event.
This is the 10.1 version, where no merge error exists.
wsrep_on_check
New check function. Galera can't be enabled
if innodb-lock-schedule-algorithm=VATS.
innobase_kill_query
In a Galera asynchronous kill we could already own the lock mutex.
innobase_init
If the Variance-Aware Transaction Scheduling (VATS) algorithm is
used with Galera, we refuse to start InnoDB.
Changed innodb-lock-schedule-algorithm to a read-only parameter,
as it was designed to be.
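A minimal sketch of the startup-time compatibility check; the names and types
below are stand-ins, not the actual wsrep_on_check() / innobase_init() code.

    #include <cstdio>

    enum lock_sched_alg { LOCK_SCHED_FCFS, LOCK_SCHED_VATS };

    // Returns false so the caller aborts InnoDB initialization (or refuses to
    // enable wsrep) when Galera and VATS are configured together.
    bool galera_vats_compatible(bool wsrep_on, lock_sched_alg alg)
    {
      if (wsrep_on && alg == LOCK_SCHED_VATS)
      {
        std::fprintf(stderr,
          "InnoDB: innodb_lock_schedule_algorithm=VATS is not supported with "
          "Galera; refusing to start.\n");
        return false;
      }
      return true;
    }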
lock_rec_other_has_expl_req,
lock_rec_other_has_conflicting,
lock_rec_lock_slow,
lock_table_other_has_incompatible,
lock_rec_insert_check_and_lock
Change the pointer to the conflicting lock to a plain pointer, as the
contents this pointer refers to could be changed later.
Merged the following from mysql-wsrep-bugs:
GCF-1058 MTR test galera.MW-86 fails on repeated runs
Wait for the sync point sync.wsrep_apply_cb to be reached before
proceeding with the test and clearing the debug flag sync.wsrep_apply_cb.
The race scenario:
Intended behavior:
node2: set sync.wsrep_apply_cb in order to start waiting in the background INSERT
node1: INSERT start
node2 (background): INSERT start
node1: INSERT end
node2: send signal to background INSERT: "stop waiting and continue executing"
node2: clear sync.wsrep_apply_cb as no longer needed
node2 (background): consume the signal
node2 (background): INSERT end
node2: DROP TABLE
node2: check no pending signals are left - ok
What happens occasionally (unexpected):
node2: set sync.wsrep_apply_cb in order to start waiting in the background INSERT
node1: INSERT start
node2 (background): INSERT start
node1: INSERT end
// The background INSERT still has _not_ reached the place where it starts
// waiting for the signal:
// DBUG_EXECUTE_IF("sync.wsrep_apply_cb", "now wait_for...");
node2: send signal to background INSERT: "stop waiting and continue executing"
node2: clear sync.wsrep_apply_cb as no longer needed
// The background INSERT reaches DBUG_EXECUTE_IF("sync.wsrep_apply_cb", ...)
// but sync.wsrep_apply_cb has already been cleared and the "wait" code is not
// executed. The signal remains unconsumed.
node2 (background): INSERT end
node2: DROP TABLE
node2: check no pending signals are left - failure, signal.wsrep_apply_cb is
pending (not consumed)
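The fix can be illustrated with a self-contained sketch that models the
signal / wait_for semantics with a condition variable (this is not the
dbug / DEBUG_SYNC API itself): the background applier first announces that it
reached the sync point, and the test waits for that announcement before
sending the continue signal and clearing the flag, so the signal can never be
left unconsumed.

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <thread>

    struct sync_point
    {
      std::mutex m;
      std::condition_variable cv;
      bool flag_set= true;       // sync.wsrep_apply_cb is active
      bool reached= false;       // background INSERT reached the sync point
      bool continue_sig= false;  // "stop waiting and continue executing"
    };

    // node2 (background): the applied INSERT
    void applier_insert(sync_point &sp)
    {
      std::unique_lock<std::mutex> lk(sp.m);
      if (sp.flag_set)                       // models DBUG_EXECUTE_IF("sync.wsrep_apply_cb", ...)
      {
        sp.reached= true;                    // announce "sync point reached"
        sp.cv.notify_all();
        sp.cv.wait(lk, [&]{ return sp.continue_sig; });  // wait for the continue signal
        sp.continue_sig= false;              // consume the signal
      }
      // ... apply the INSERT ...
    }

    // node2 (foreground): the MTR test
    void test_controller(sync_point &sp)
    {
      std::unique_lock<std::mutex> lk(sp.m);
      sp.cv.wait(lk, [&]{ return sp.reached; });  // FIX: wait until the applier has
                                                  // really reached the sync point
      sp.continue_sig= true;                      // now send the continue signal
      sp.cv.notify_all();
      sp.flag_set= false;                         // and only then clear the flag
    }

    int main()
    {
      sync_point sp;
      std::thread applier(applier_insert, std::ref(sp));
      test_controller(sp);
      applier.join();
    }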
Remove the MW-360 test case, as it is not intended for MariaDB (it uses
MySQL GTID).
This patch fixes two problems that may arise when changing the
value of wsrep_slave_threads:
1) Threads may be leaked if wsrep_slave_threads is changed
repeatedly. Specifically, when changing the number of slave
threads, we keep track of wsrep_slave_count_change, the
number of slaves to start / stop. The problem arises when
wsrep_slave_count_change is updated before the slaves have had a
chance to exit (threads may take some time to exit, as they
exit only after committing one more replication event).
The fix is to update wsrep_slave_count_change so that it
reflects the number of threads that have already exited or are
scheduled to exit.
2) Attempting to set an out-of-range value for wsrep_slave_threads
(below 1 / above 512) results in wsrep_slave_count_change being
computed based on the out-of-range value, even though a
warning is generated and wsrep_slave_threads is set to a
truncated value. wsrep_slave_count_change is computed in
wsrep_slave_threads_check(), which is called before the server
checks for a valid range. The fix is to update wsrep_slave_count_change
whenever wsrep_slave_threads is updated with a valid value
(see the sketch below).
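A simplified sketch of the intended bookkeeping (stand-in variables, not the
actual sys_vars code): the delta is computed only from range-checked values
and accumulated against the previously scheduled target, so repeated changes
do not leak threads.

    #include <algorithm>

    static long wsrep_slave_threads= 1;        // current (validated) value
    static long wsrep_slave_count_change= 0;   // threads still to start (>0) / stop (<0)

    // Called from the variable's update hook, i.e. only after the server has
    // truncated out-of-range input into the valid range [1, 512].
    void wsrep_slave_threads_update_sketch(long new_value)
    {
      new_value= std::max(1L, std::min(new_value, 512L));  // mirror server-side truncation
      // Accumulate against the previously scheduled target so threads that have
      // already exited or are already scheduled to exit are not counted twice.
      wsrep_slave_count_change+= new_value - wsrep_slave_threads;
      wsrep_slave_threads= new_value;
    }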
This changes variable wsrep_max_ws_size so that its value
is linked to the value of provider option repl.max_ws_size.
That is, changing the value of variable wsrep_max_ws_size
will change the value of provider option repl.max_ws_size,
and vice versa.
The writeset size limit is always enforced in the provider,
regardless of which option is used.
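A sketch of the two-way link, with a stand-in provider type instead of the
real wsrep provider options API: setting the variable pushes repl.max_ws_size
to the provider and reads the effective value back, so the provider is the
single enforcement point.

    #include <string>

    struct provider_stub                 // stand-in for the loaded wsrep provider
    {
      long max_ws_size= 2147483647L;
      bool set_option(const std::string &key, long value)
      {
        if (key != "repl.max_ws_size") return false;
        max_ws_size= value;
        return true;
      }
      long get_option(const std::string &key) const
      {
        return key == "repl.max_ws_size" ? max_ws_size : -1;
      }
    };

    static long wsrep_max_ws_size= 2147483647L;

    // Update hook for @@wsrep_max_ws_size: push the value to the provider and
    // read it back, so the provider always enforces the effective limit.
    bool wsrep_max_ws_size_update_sketch(provider_stub *provider, long new_value)
    {
      if (!provider) return false;                         // no provider loaded
      if (!provider->set_option("repl.max_ws_size", new_value))
        return false;
      wsrep_max_ws_size= provider->get_option("repl.max_ws_size");
      return true;
    }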
On wsrep_cluster_address update, the node restarts replication
and attempts to connect to the new address. In this process it
calls the wsrep provider's connect API, which could lead
to a segfault if no wsrep provider is loaded (wsrep_on=OFF).
Fixed by making sure that it proceeds only if a provider is
loaded.
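A minimal sketch of the guard (stand-in types, not the actual update
function): the new address is stored, but the provider's connect API is only
called when a provider is actually loaded.

    #include <string>

    struct wsrep_provider_stub           // stand-in for the loaded galera library
    {
      bool connect(const std::string &cluster_address)
      {
        (void) cluster_address;
        return true;
      }
    };

    static wsrep_provider_stub *wsrep_provider_handle= nullptr;  // nullptr when no provider is loaded
    static std::string wsrep_cluster_address;

    bool wsrep_cluster_address_update_sketch(const std::string &new_address)
    {
      wsrep_cluster_address= new_address;
      if (!wsrep_provider_handle)
        return true;           // no provider loaded: nothing to reconnect, no segfault
      return wsrep_provider_handle->connect(new_address);   // restart replication
    }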
* mysqld_safe: Since the wsrep_on variable is mandatory in 10.1, skip wsrep
position recovery if it is OFF.
* mysqld: Remove "-wsrep" from server version
* mysqld: Remove wsrep patch version from @@version_comment
* mysqld: Introduce @@wsrep_patch_version
1. Factored XID-related functions into a separate wsrep_xid.cc unit.
2. Refactored them to take references instead of pointers where appropriate.
3. Implemented wsrep_get/set_SE_position to take wsrep_uuid_t and wsrep_seqno_t instead of XID.
4. Call wsrep_set_SE_position() in wsrep_sst_received() to reinitialize the SE checkpoint after an SST has been received; avoid the assert() in the setting code by first checking the current position (see the sketch below).
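A self-contained sketch of the reworked interface under these assumptions (the
type definitions and storage below are stand-ins; the real declarations live
in wsrep_xid.cc): positions are passed as (uuid, seqno) instead of an XID, and
the SST handler checks the current position before overwriting it.

    #include <cstdint>
    #include <cstring>

    struct wsrep_uuid_t { unsigned char data[16]; };   // stand-in definitions
    typedef int64_t wsrep_seqno_t;

    static wsrep_uuid_t  se_uuid=  {{0}};   // stand-in for the SE checkpoint storage
    static wsrep_seqno_t se_seqno= -1;

    void wsrep_set_SE_position(const wsrep_uuid_t &uuid, wsrep_seqno_t seqno)
    {
      se_uuid=  uuid;
      se_seqno= seqno;
    }

    void wsrep_get_SE_position(wsrep_uuid_t &uuid, wsrep_seqno_t &seqno)
    {
      uuid=  se_uuid;
      seqno= se_seqno;
    }

    // Called once the SST has been received.
    void wsrep_sst_received_sketch(const wsrep_uuid_t &uuid, wsrep_seqno_t seqno)
    {
      wsrep_uuid_t  cur_uuid;
      wsrep_seqno_t cur_seqno;
      wsrep_get_SE_position(cur_uuid, cur_seqno);
      // Reinitialize the checkpoint only if it differs from the current one,
      // which avoids tripping the assert() in the setting code.
      if (std::memcmp(&cur_uuid, &uuid, sizeof uuid) != 0 || cur_seqno != seqno)
        wsrep_set_SE_position(uuid, seqno);
    }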
Added a SESSION-only system variable "wsrep_dirty_reads" to allow SELECT
queries to pass even when the node is not prepared to accept queries
(wsrep_ready=OFF). Added a test case.
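A sketch of the admission logic this enables (stand-in types, not the actual
dispatch code): when wsrep_ready=OFF, only SELECTs from sessions with
wsrep_dirty_reads=ON are let through.

    enum command_type { CMD_SELECT, CMD_OTHER };

    struct session_stub
    {
      bool wsrep_dirty_reads= false;   // the new SESSION-only variable
    };

    // Returns true if the query may proceed.
    bool wsrep_allow_query_sketch(const session_stub &s, bool wsrep_ready,
                                  command_type cmd)
    {
      if (wsrep_ready)
        return true;                                    // node accepts all queries
      return cmd == CMD_SELECT && s.wsrep_dirty_reads;  // dirty read explicitly allowed
    }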
Merged lp:maria/maria-10.0-galera up to revision 3879.
Added new functions to the handler API: abort_transaction to forcefully
abort a transaction, fake_trx_id to produce a fake transaction id, and
get_checkpoint and set_checkpoint for XA. These were added for the future
possibility of adding more storage engines that could use Galera replication.
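A sketch of what the extended interface looks like, using a simplified
stand-in struct rather than the real handlerton from handler.h; the callback
names follow the commit, while the exact signatures are assumptions.

    struct XID_stub { long long id; };   // stand-in for the XA XID type
    struct THD_stub;                     // opaque session handle

    struct handlerton_sketch
    {
      // Forcefully abort a transaction running in victim_thd (e.g. when the
      // provider detects a replication conflict).
      int  (*abort_transaction)(handlerton_sketch *hton, THD_stub *bf_thd,
                                THD_stub *victim_thd, bool signal);
      // Produce a fake transaction id when ordering is needed before a real
      // transaction id exists.
      void (*fake_trx_id)(handlerton_sketch *hton, THD_stub *thd);
      // Read / write the XA checkpoint (replication position) kept by the engine.
      int  (*get_checkpoint)(handlerton_sketch *hton, XID_stub *xid);
      int  (*set_checkpoint)(handlerton_sketch *hton, const XID_stub *xid);
    };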