Caused by MDEV-31857, which enabled --ssl-verify-server-cert by default in
the internal client. Fixed by disabling master_ssl_verify_server_cert,
because the account is passwordless.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
page_encrypt_thread_key: The key for fil_crypt_thread().
All other InnoDB threads should already have been registered for
performance_schema ever since
commit a2f510fccf
mysql_prepare_create_table: Extract a Key initialization part that
relates to length calculation and long unique index designation.
The append_system_key_parts call also moves there.
Move this initialization before the duplicate elimination.
Extract the WITHOUT OVERLAPS check into a separate function. It had to be moved
earlier in the code to preserve the order of the error checks, as in the tests.
table->move_fields has some limitations:
1. It cannot be used in cascade
2. It should always have a restoring pair
In this case, an error occurred before the field pointers were restored, and
the function returned in that state. Even in the case of an error, the table
can be reused afterwards, and table->field[i]->ptr is not reset in between.
The solution is to restore the field pointers immediately whenever they have
been deviated.
Also add an assertion in close_thread_tables that ensures that table fields
have been restored after use.
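A minimal self-contained sketch of that restore discipline, with TABLE and
move_fields() reduced to stand-ins (the real server types are not reproduced
here):

#include <cstdio>

// Stand-ins for TABLE and TABLE::move_fields(); illustrative only.
struct Table
{
  const char *ptr = "record[0]";
  void move_fields(const char *to) { ptr = to; }  // deviate or restore
};

static int with_moved_fields(Table &t, bool fail)
{
  t.move_fields("other buffer");   // deviate the field pointers
  int err = fail ? 1 : 0;          // work that may fail half-way
  t.move_fields("record[0]");      // restore on every path, error or not
  return err;
}

int main()
{
  Table t;
  with_moved_fields(t, /*fail=*/true);
  std::printf("fields point at: %s\n", t.ptr);  // back to record[0]
}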
The test rpl.rpl_parallel_gco_wait_kill has a race condition: it
identifies the SQL thread by its state
"Slave has read all relay log", and immediately tries to save
the thread id of the SQL thread by querying for threads with
that same state. However, the state of the SQL thread may
change in the meantime, so the query that saves
the SQL thread id finds no thread matching that state.
Commit 3ee6f69d49 aimed to fix
this in 10.11+ by simplifying the query string to
"%relay log%"; however, it could still fail (though less
often).
This commit instead changes the query to find the SQL thread
from using a state to using the command "Slave_SQL", which
will not change.
MDEV-35477 incorrectly reverted the original test fix of MDEV-34355,
thinking it superseded that fix. It is still needed though. Pasting
the content of the original patch’s commit message, as it is still
relevant (and the method to reproduce the test failure still works).
"""
The problem is that the test could query the status variable
Rpl_semi_sync_slave_send_ack before the slave actually updated it.
This would result in an immediate --die assertion killing the rest
of the test. The bottom of this commit message has a small patch
that can be applied to reproduce the test failure.
This patch fixes the test failure by waiting for the variable to be
updated before querying its value.
diff --git a/sql/semisync_slave.cc b/sql/semisync_slave.cc
index 9ddd4c5c8d7..60538079fce 100644
--- a/sql/semisync_slave.cc
+++ b/sql/semisync_slave.cc
@@ -303,7 +303,10 @@ int Repl_semi_sync_slave::slave_reply(Master_info *mi)
reply_res= DBUG_EVALUATE_IF("semislave_failed_net_flush", 1,
net_flush(net));
if (!reply_res)
+ {
+ sleep(1);
rpl_semi_sync_slave_send_ack++;
+ }
}
DBUG_RETURN(reply_res);
}
"""
When client connections use the threadpool, i.e. the configuration has:
thread_handling = pool-of-threads
it turned out that during wsrep replication shutdown, not all client
connections could be closed. The reason was that some client threads
had stmt_da in state DA_EOF, and this state was previously used to
detect whether a client connection was issuing the SHUTDOWN command.
To fix this, the connection executing SHUTDOWN is now detected by
looking at the actual command being executed:
thd->get_command() == COM_SHUTDOWN
During replication shutdown, all connections but the
SHUTDOWN executor are terminated.
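A minimal self-contained model of that detection (THD, COM_SHUTDOWN and the
connection list are simplified stand-ins for the server's bookkeeping):

#include <vector>
#include <cstdio>

enum Command { COM_QUERY, COM_SHUTDOWN };
struct THD { int id; Command cmd; Command get_command() const { return cmd; } };

int main()
{
  std::vector<THD> clients{{1, COM_QUERY}, {2, COM_SHUTDOWN}, {3, COM_QUERY}};
  for (const THD &thd : clients)
    if (thd.get_command() != COM_SHUTDOWN)   // spare only the SHUTDOWN executor
      std::printf("terminating connection %d\n", thd.id);
}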
This commit adds a new mtr test, galera.galera_threadpool, which
opens a number of threadpool client connections, and then
restarts the node to verify that the connections in the threadpool are
terminated during shutdown.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
DROP TABLE on child and UPDATE of parent table can cause an MDL BF-BF
conflict when applied concurrently.
DROP TABLE takes MDL locks on both the child and its parent table; however,
it did not add certification keys for the parent table.
This patch adds the following:
* Append certification keys corresponding to all parent tables
before DROP TABLE replication.
* Fix wsrep_append_fk_parent_table() so that it works when it is
given a table list containing temporary tables.
* Make sure the function wsrep_append_fk_parent_table() is only called
  for local transactions. That was not the case for ALTER TABLE.
* Add a test case that verifies that UPDATE of the parent depends on a
  preceding DROP TABLE of the child.
* Adapt galera_ddl_fk_conflict test to work with DROP TABLE as well.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
In Log_event::read_log_event(), don't use IO_CACHE::error of the relay log's
IO_CACHE to signal an error back to the caller. When reading the active
relay log, this flag is also used by the IO thread, and setting it can
randomly cause the IO thread to wrongly detect an IO error on writing and
permanently disable the relay log.
This was seen sporadically in the test case rpl.rpl_from_mysql80. The read
error set by the SQL thread in the IO_CACHE would be interpreted as a
write error by the IO thread, which would cause it to throw a fatal
error and close the relay log. And this would later cause CHANGE
MASTER to try to purge a closed relay log, resulting in a nullptr crash.
The underlying situation is that the SQL thread is not able to parse an
event read from the relay log. This can happen, as here, when replicating
unknown events from a MySQL master, and potentially for other reasons as well.
Also fix a mistake in my_b_flush_io_cache(), introduced back in 2001
(fa09f2cd7e), where it could wrongly return an error set in
IO_CACHE::error even if the flush operation itself succeeded.
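The essence of that mistake, as a hedged stand-alone sketch (IO_CACHE is
reduced to the one field that matters here; this is not the real mysys code):

#include <cstdio>

struct IoCache { int error; };   // stand-in for IO_CACHE

// Before the fix (sketch): a stale error, e.g. one left behind by a reader,
// was returned even though this flush itself succeeded.
static int flush_old(IoCache *c) { /* write succeeds */ return c->error; }

// After the fix (sketch): only the outcome of this flush is reported.
static int flush_new(IoCache *) { /* write succeeds */ return 0; }

int main()
{
  IoCache c{-1};                                // leftover read error
  std::printf("old: %d, new: %d\n", flush_old(&c), flush_new(&c));
}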
Also fix another sporadic failure in rpl.rpl_from_mysql80 where the output
of MASTER_POS_WAIT() depended on the timing of the SQL and IO threads.
Reviewed-by: Monty <monty@mariadb.org>
Reviewed-by: Andrei Elkin <andrei.elkin@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
This bug affected queries containing degenerate single-value subqueries
with window functions. The bug mostly led to wrong results for such
queries.
A subquery is called degenerate if it is of the form (SELECT <expr>...).
For degenerate subqueries of the form (SELECT <expr>) the transformation
(SELECT <expr>) => <expr>
is usually applied. However, if <expr> contains set functions or window
functions, such rewriting is not valid, for an obvious reason. Before
this patch, the code erroneously applied the transformation when <expr>
contained window functions and did not contain set functions.
Approved by Rex Johnston <rex.johnston@mariadb.com>
This commit implements
mysql/mysql-server@7037a0bdc8
functionality.
If some transaction 't' requests a not-gap X-lock 'Xt' on record 'r', and
the lock list of record 'r' contains a granted not-gap S-lock 'St' of
transaction 't', followed by not-gap waiting locks WB={Wb1,
Wb2, ..., Wbn} conflicting with 'Xt', and 'Xt' does not conflict with any
other lock located in the list after 'St', then 'Xt' is granted. Note that
"not-gap" also excludes insert-intention locks, as insert-intention locks
are gap locks.
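The rule can be illustrated with a self-contained model (a sketch only:
lock_t, the queue and the compatibility matrix are drastically simplified,
and gap/insert-intention locks are merely skipped):

#include <vector>
#include <cstdio>

enum Mode { S, X };
struct Lock { int trx; Mode mode; bool gap; bool waiting; };

static bool conflicts(const Lock &a, const Lock &b)
{
  if (a.trx == b.trx) return false;   // own locks never conflict
  if (a.gap || b.gap) return false;   // simplified: skip gap and II locks
  return a.mode == X || b.mode == X;  // only S-S is compatible
}

// May trx t be granted a not-gap X-lock by bypassing waiters behind its
// own granted not-gap S-lock?
static bool can_bypass(const std::vector<Lock> &queue, int t)
{
  const Lock xt{t, X, false, false};
  bool own_s_seen = false;
  for (const Lock &l : queue)
  {
    if (l.trx == t && l.mode == S && !l.gap && !l.waiting)
      { own_s_seen = true; continue; }
    if (!conflicts(l, xt)) continue;
    if (!own_s_seen || !l.waiting)
      return false;                   // conflicting granted lock: must wait
  }
  return own_s_seen;
}

int main()
{
  // S1(granted, not-gap) followed by X2(waiting): trx 1 may bypass.
  std::vector<Lock> q{{1, S, false, false}, {2, X, false, true}};
  std::printf("trx 1 gets X by bypass: %s\n", can_bypass(q, 1) ? "yes" : "no");
}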
MySQL's commit contains the following explanation of why insert-intention
locks must not overtake waiting ordinary or gap locks:
"It is important that this decission rule doesn't allow
INSERT_INTENTION locks to overtake WAITING locks on gaps (`S`, `S|GAP`,
`X`, `X|GAP`), as inserting a record into a gap would split such WAITING
lock, violating the invariant that each transaction can have at most
single WAITING lock at any time."
I would add the following to the explanation. Suppose trx 1 holds an
ordinary X-lock on some record, and trx 2 executes "DELETE FROM t"
or "SELECT * FOR UPDATE" in RR (see lock_delete_updated.test and
MDEV-27992), i.e. it creates a waiting ordinary X-lock on the same record.
Then trx 1 wants to insert some record just before the locked record.
It requests an insert-intention lock, and if that lock overtook the trx 2
lock, there would be phantom records for trx 2 in RR. lock_delete_updated.test
shows how "DELETE" could then insert some records into an already scanned
gap and miss some records it should delete.
The current implementation differs from the MySQL implementation. There are
two key differences:
1. Lock queue ordering. In MySQL, all waiting locks precede all granted
locks: a new waiting lock is added to the head of the queue, a new
granted lock is added to the end of the queue, and when a waiting lock
is granted, it is moved to the end of the queue. In MariaDB, any new
lock is added to the end of the queue, and a waiting lock does not change
its position in the queue when the lock is granted. The rule is that a
blocking lock must be located before the blocked lock in the lock queue. We
maintain this rule by inserting the bypassing lock just before the bypassed
one.
2. The MySQL implementation uses an object (locksys::Trx_locks_cache) which
can be passed to consecutive calls to rec_lock_has_to_wait() for the
same trx and heap_no, to cache the result of checking whether the trx has a
granted lock which is blocking the waiting lock (see
locksys::Trx_locks_cache::has_granted_blocker()). The current
implementation does not use such an object, because it looks for such a
granted lock at the level of lock_rec_other_has_conflicting() and
lock_rec_has_to_wait_in_queue(). I.e., there is no need for the additional
lock queue iteration in
locksys::Trx_locks_cache::has_granted_blocker(), as we already iterate
the queue in lock_rec_other_has_conflicting() and
lock_rec_has_to_wait_in_queue().
During testing, the following case was found. Suppose we have a
delete-marked record and are going to do an inplace insert into
that delete-marked record. Usually we don't create an explicit lock if
there are no locks conflicting with a not-gap X-lock (see
lock_clust_rec_modify_check_and_lock(), btr_cur_update_in_place()); the
implicit lock will be converted to an explicit one on demand.
That can happen during INSERT: a not-gap S-lock can
be acquired while searching for duplicates (see
row_ins_duplicate_error_in_clust()), and, if a delete-marked record is
found, the inplace insert (see btr_cur_upd_rec_in_place()) modifies the
record, which is treated as an implicit lock.
But there can be a case when some transaction trx1 holds a not-gap S-lock,
another transaction trx2 creates a waiting X-lock, and then trx1 tries to
do the inplace insert. Before the fix, the waiting X-lock of trx2 would be a
conflicting lock, and trx1 would try to create an explicit X-lock, which
would cause a deadlock, and one of the transactions would be rolled back.
After the fix, the waiting X-lock of trx2 is no longer treated as conflicting
with the trx1 X-lock, as trx1 already holds the S-lock. If we don't create
an explicit lock, then some other transaction trx3 can create it during
implicit-to-explicit lock conversion and place it at the end of the
queue. So there can be the following lock order in the queue:
S1(granted) X2(waiting) X1(granted)
The above queue is not valid, because all granted trx1 locks must be
placed before the waiting trx2 lock. Besides, lock_rec_release_try() can
remove the S(granted, trx1) lock and grant the X lock to trx2, and then
there can be two granted X-locks on the same record:
X2(granted) X1(granted)
Taking into account that lock_rec_release_try() can release cell and
lock_sys latches leaving some locks unreleased, the queue validation
function can fail in any unexpected place.
This can be fixed in two ways:
1) Place the explicit X(granted, trx1) lock before the X(waiting, trx2) lock
during implicit-to-explicit lock conversion. This option is implemented
in MySQL, where a granted lock is always placed at the top of the lock
queue and waiting locks are placed at the bottom. MariaDB does
not do this, and implementing this variant would require a conflicting-lock
search before converting an implicit lock to an explicit one, which, in
turn, would require acquiring cell and/or lock_sys latches.
2) Create and place the X(granted, trx1) lock before X(waiting, trx2) during
the inplace INSERT, i.e. when lock_rec_lock() is invoked from
lock_clust_rec_modify_check_and_lock() or
lock_sec_rec_modify_check_and_lock(), if X(waiting, trx2) is
bypassed. This way we don't need an additional conflicting-lock
search, as conflicting locks are searched for anyway in lock_rec_low().
This fix implements the second variant (see the changes around
c_lock_info.insert_after in lock_rec_lock). I.e., if some record was
delete-marked and we do an inplace insert into such a record, and some lock
to bypass was found, create an explicit lock to avoid a conflicting-lock
search on each implicit-to-explicit lock conversion. We can remove this once
MDEV-35624 is implemented.
lock_rec_other_has_conflicting(), lock_rec_has_to_wait_in_queue():
search for locks to bypass along with conflicting locks in the
same loop. The result is returned in a conflicting_lock_info object.
There can be several locks to bypass, but only the first one is returned,
to limit lock_rec_find_similar_on_page() to the first bypassed lock and so
preserve the "blocking before blocked" invariant. conflicting_lock_info also
contains a pointer to the lock after which we can insert the bypassing
lock; this lock precedes the bypassed one.
A bypassing lock can be a next-key lock, and the following cases are
possible:
1. S1(not-gap, granted) II2(granted) X3(waiting for S1)
When a new X1(ordinary) lock is acquired, the lock queue will be:
S1(not-gap, granted) II2(granted) X1(ordinary, granted) X3(waiting for
S1)
If we had inserted the new X1 lock just after S1, and S1 had been released
on transaction commit or rollback, we would have the following
sequence in the lock queue:
X1(ordinary, granted) II2(granted) X3(waiting for X1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is not a real issue, as an II lock, once granted, can be
ignored, but it could possibly hit some assertion (taking into account
that lock_release_try() can release the lock_sys latch, and other threads
can acquire the latch and validate the lock queue), as it breaks our design
constraint that any granted lock in the queue must not conflict
with locks ahead of it in the queue. But lock_rec_queue_validate() does not
check the above constraint. We place the new bypassing lock just before the
bypassed one, but there can still be a case when a lock bitmap is used
instead of creating a new lock object (see lock_rec_add_to_queue() and
lock_rec_find_similar_on_page()), and the lock which owns the
bitmap can precede II2(granted). We can either disable the
lock_rec_find_similar_on_page() space optimization for bypassing locks,
or treat the "X1(ordinary, granted) II2(granted)" sequence as valid. As
we currently don't have a function which would fail on the above
sequence, let's treat it as valid for the case when lock_release()
execution is in progress.
2. S1(ordinary, granted) II2(waiting for S1) X3(waiting for S1)
When a new X1(ordinary) lock is acquired, the lock queue will be:
S1(ordinary, granted) II2(waiting for S1) X1(ordinary, granted)
X3(waiting for S1).
After S1 is released there will be:
II2(granted) X1(ordinary, granted) X3(waiting for X1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The above queue is valid, because an ordinary lock does not conflict with
an II-lock (see lock_rec_has_to_wait()).
lock_rec_create_low(): insert the new lock at the position that
lock_rec_other_has_conflicting() or lock_rec_has_to_wait_in_queue()
returned, if the lock is bypassing.
lock_rec_find_similar_on_page(): add the ability to limit the similar-lock
search to a certain lock, to preserve the "blocking before blocked"
invariant for all bypassed locks.
lock_rec_add_to_queue(): don't treat bypassed locks as waiting ones, to
allow lock bitmap reuse for bypassing locks.
lock_rec_lock(): fix the inplace insert case explained above.
lock_rec_dequeue_from_page(), lock_rec_rebuild_waiting_queue(): move the
bypassing lock to the correct place to preserve the "blocking before
blocked" invariant.
Reviewed by: Debarun Banerjee, Marko Mäkelä.
innodb_convert_name(): Convert a schema or table name to
my_charset_filename compatible format.
dict_table_lookup(): Replaces dict_get_referenced_table().
Make the callers responsible for invoking innodb_convert_name().
innobase_casedn_str(): Remove. Let us invoke my_casedn_str() directly.
dict_table_rename_in_cache(): Do not duplicate a call to
dict_mem_foreign_table_name_lookup_set().
innobase_convert_to_filename_charset(): Defined static in the only
compilation unit that needs it.
dict_scan_id(): Remove the constant parameters
table_id=FALSE, accept_also_dot=TRUE. Invoke strconvert() directly.
innobase_convert_from_id(): Remove; only called from dict_scan_id().
innobase_convert_from_table_id(): Remove (dead code).
table_name_t::dblen(), table_name_t::basename(): In non-debug builds,
tolerate names that may miss a '/' separator.
Reviewed by: Debarun Banerjee
row_ins_cascade_calc_update_vec(): Skip any virtual columns in the
update vector of the parent table.
Based on mysql/mysql-server@0ac176453b
Reviewed by: Debarun Banerjee
It is not clear that the three Compare_keys enum values form a hierarchy
the way alter algorithms do. So, to be on the safe side (in case MDEV-22168
gets implemented without looking at this code), we should return NotEqual
if the partition engines do not return the same enum value. There is a
static merge function that merges these enum values, but it is used to
merge the Compare_keys of different key parts (horizontally), rather than
of different partitions (vertically).
- MDEV-34392 (commit cc810e64d4) added
the check for the nullability of a foreign key column when the foreign key
relation is UPDATE CASCADE or UPDATE SET NULL. This check
makes DDL fail when it violates foreign key nullability.
This patch applies the nullability check for foreign key
columns only in strict sql_mode.
The functions MY_CHARSET_HANDLER::caseup() and MY_CHARSET_HANDLER::casedn()
in their virtual implementations do "const char *end= src + srclen"
at the very beginning. Therefore src must not be NULL, to avoid
"UBSAN: SUMMARY: UndefinedBehaviorSanitizer: nullptr-with-offset".
Adding DBUG_ASSERT(src != NULL) into all virtual implementations,
to catch this problem in regular debug builds (without UBSAN).
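A minimal sketch of the guard (the function is an illustrative stand-in for
a virtual caseup()/casedn() implementation, not the actual strings/ code):

#include <cassert>
#include <cstddef>
#include <cstdio>

static size_t casedn_sketch(const char *src, size_t srclen, char *dst)
{
  assert(src != NULL);             // catches NULL before the arithmetic below
  const char *end = src + srclen;  // UB if src == NULL, even with srclen == 0
  char *d = dst;
  for (const char *p = src; p != end; p++)
    *d++ = (*p >= 'A' && *p <= 'Z') ? char(*p + 32) : *p;
  return srclen;
}

int main()
{
  char out[4];
  casedn_sketch("AbC", 3, out);
  std::printf("%.3s\n", out);      // abc
}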
Fixing Master_info_index::get_master_info() to check connection_name->str.
If it is NULL then passing empty_clex_str into IdentBufferCasedn
instead of *connection_name.
The problem was that, in the case of INSERT DELAYED, thd->query() was
freed before we called trans_rollback, where WSREP_DEBUG
could access thd->query() via wsrep_thd_query().
The fix is to reset thd->query() to NULL in the delayed_insert
destructor after it is freed; there is already
a NULL guard in wsrep_thd_query().
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Although the `my_thread_id` type is 64 bits, the binlog format spec
limits it to 32 bits in practice. (See also: MDEV-35706)
The writable SQL variable `pseudo_thread_id` didn't account for this,
and had a range of `ULONGLONG_MAX` (at least `UINT64_MAX` in C/C++).
It consequently accepted larger values silently, but only their lower
32 bits get binlogged; this could lead to inconsistency.
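The truncation can be demonstrated stand-alone (the variable name mirrors
the SQL variable; the masking models what the binlog event stores):

#include <cstdint>
#include <cstdio>

int main()
{
  uint64_t pseudo_thread_id = 0x100000001ULL;        // accepted silently
  uint32_t binlogged = (uint32_t) pseudo_thread_id;  // what reaches the binlog
  std::printf("set: %llu, binlogged: %u\n",          // 4294967297 vs 1
              (unsigned long long) pseudo_thread_id, binlogged);
}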
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Item::print_for_table_def() uses QT_TO_SYSTEM_CHARSET to print
the DEFAULT expression into the FRM file during CREATE TABLE.
Therefore, the expression is encoded in utf8 in the FRM.
get_field_default_value() erroneously used field->charset() to
print the DEFAULT expression at SHOW CREATE TABLE time.
Fixing get_field_default_value() to use &my_charset_utf8mb4_general_ci instead.
This makes DEFAULT work the same way as:
- virtual column expressions:
if (field->vcol_info)
{
StringBuffer<MAX_FIELD_WIDTH> str(&my_charset_utf8mb4_general_ci);
field->vcol_info->print(&str);
- check constraint expressions:
if (field->check_constraint)
{
StringBuffer<MAX_FIELD_WIDTH> str(&my_charset_utf8mb4_general_ci);
field->check_constraint->print(&str);
Additional cleanup:
Replacing system_charset_info with &my_charset_utf8mb4_general_ci in a few
places, to make non-BMP characters work in DEFAULT, virtual column, and
check constraint expressions.
Problem:
========
- During a LOAD statement, the InnoDB bulk operation relies on the
temporary directory and crashed when tmpdir was exhausted.
Solution:
=========
During bulk insert, the LOAD statement builds the clustered
index one record at a time instead of one page at a time. By doing this,
InnoDB:
1) Avoids creating a temporary file for the clustered index.
2) Writes the undo log only for the first insert operation.
trx_t::autoinc_locks: Use small_vector<lock_t*,4> in order to avoid any
dynamic memory allocation in the most common case (a statement is
holding AUTO_INCREMENT locks on at most 4 tables or partitions).
lock_cancel_waiting_and_release(): Instead of removing elements from
the middle, simply assign nullptr, like lock_table_remove_autoinc_lock().
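A sketch of that release pattern (std::vector stands in for
small_vector<lock_t*,4>, and lock_t is reduced to a tag):

#include <vector>
#include <algorithm>
#include <cstdio>

struct lock_t { int table_id; };

int main()
{
  lock_t a{1}, b{2}, c{3};
  std::vector<lock_t*> autoinc_locks{&a, &b, &c};  // trx_t::autoinc_locks model
  // Release the middle lock: assign nullptr instead of erasing, so no
  // elements are shifted and no iterators are invalidated.
  *std::find(autoinc_locks.begin(), autoinc_locks.end(), &b) = nullptr;
  for (lock_t *l : autoinc_locks)
    if (l) std::printf("still held: table %d\n", l->table_id);
}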
The added test innodb.auto_increment_lock_mode covers the dynamic memory
allocation, and nondeterministically (on occasional runs) also covers
the out-of-order lock release in lock_table_remove_autoinc_lock().
Reviewed by: Debarun Banerjee
In the error case, last_pos points to null_element and there are no
other children. tree_search_next() walks the children from last_pos
down to the leaves (null_element), ignoring the case where the topmost
parent in the search state is itself a leaf.
The quick read record uses a different handler (H1) for finding records. It
cannot use the ha_delete_row() handler (H2), as that is in a different
search mode: inited == INDEX for H1, inited == RND for H2. So the read
handler H1 uses the index, while the write handler H2 uses random access.
For stepping to the next record, H1 relies on the info->last_pos
optimization, which steps through the index via tree_search_next(). This
optimization can work with deleted rows only if the delete is conducted
in the same handler; there is:
67 int hp_rb_delete_key(HP_INFO *info, register HP_KEYDEF *keyinfo,
68 const uchar *record, uchar *recpos, int flag)
69 {
...
74 if (flag)
75 info->last_pos= NULL; /* For heap_rnext/heap_rprev */
But this cannot work across handlers. So, after the delete in H2, last_pos
in H1 refers to a stale info->parents array, and last_pos points into
those parents. In the specific test case, the parent of last_pos is an
already-freed node, and tree_search_next() steps into it.
The fix invalidates the locally saved info->parents and info->last_pos
based on key_version. Record deletion increments share->key_version in
H2, so in H1 we know that the tree might have changed.
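A self-contained model of the invalidation (field names follow the text
above; the structs are illustrative, not heap's real ones):

#include <cstdio>

struct Share   { unsigned key_version = 0; };
struct Handler { Share *share; unsigned seen_version = 0; void *last_pos = nullptr; };

static void h2_delete(Handler &h2) { h2.share->key_version++; }  // delete in H2

static void h1_before_next(Handler &h1)
{
  if (h1.seen_version != h1.share->key_version)  // tree may have changed
  {
    h1.last_pos = nullptr;                       // drop stale parents/position
    h1.seen_version = h1.share->key_version;
  }
}

int main()
{
  Share s; Handler h1{&s}, h2{&s};
  h1.last_pos = &s;        // pretend H1 cached a tree position
  h2_delete(h2);
  h1_before_next(h1);
  std::printf("stale last_pos dropped: %s\n", h1.last_pos ? "no" : "yes");
}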
Another good measure would be to use H1 for the delete as well, but that
would be a bigger refactoring than just a bug fix.
system versioned table
For a versioned table, REPLACE first tries to insert a row; if it gets a
duplicate key error and the optimization is possible, it does UPDATE +
INSERT history. If the optimization is not possible, it takes the normal
branch of UPDATE to history and repeats the cycle with INSERT.
The failure was in the normal branch, when we tried UPDATE to history but
such history already existed from previous cycles. There is no such
failure in the optimized branch, because vers_insert_history_row() already
ignores duplicates.
The fix ignores duplicate errors for UPDATE to history and does DELETE
instead.
DELAYED with virtual columns
The segfault was caused by two different copies of the same Field instance
in a prepared delayed insert. One was made by
Delayed_insert::get_local_table() (see make_new_field()). That copy
went through parse_vcol_defs() and received a new vcol_info->expr.
The other one was made by copy_keys_from_share() in this code:
/*
We are using only a prefix of the column as a key:
Create a new field for the key part that matches the index
*/
field= key_part->field=field->make_new_field(root, outparam, 0);
field->field_length= key_part->length;
So, key_part and table got different objects for the same field, and the
crash happened because key_part->field->vcol_info->expr was NULL.
The fix adds update_keypart_vcol_info(), which updates vcol_info->expr in
key_part->field.
Cleanup: memdup_vcol() is now a static inline function instead of a macro,
and it checks for OOM.
InnoDB transactions may be reused after being committed:
- when taken from the transaction pool
- during a DDL operation execution
In this case the wsrep flag on the trx object is cleared, which may cause
wrong execution logic afterwards (wsrep-related hooks are not run).
Make the trx->wsrep flag be initialized from the THD object only once, at
InnoDB transaction start, and don't change it throughout the transaction's
lifetime (see the sketch below). The flag is reset at commit time as before.
Unconditionally set wsrep=OFF for THD objects that represent InnoDB
background threads.
Make Wsrep_schema::store_view() operate in its own transaction.
Fix the rollback of streaming replication transactions' fragments so that
it does not switch the THD->wsrep value during the transaction's execution
(use THD->wsrep_ignore_table as a workaround).
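The intended lifetime of the flag, as a hedged sketch (trx_t and THD are
stand-ins; the real initialization sites differ):

struct THD   { bool wsrep_on; };
struct trx_t { bool wsrep = false; };

static void trx_start(trx_t &trx, const THD &thd)
{
  trx.wsrep = thd.wsrep_on;  // initialized exactly once, at transaction start
}

static void trx_commit(trx_t &trx)
{
  trx.wsrep = false;         // reset only at commit time, as before
}

// Nothing between trx_start() and trx_commit() may flip trx.wsrep, so a
// reused (pooled or DDL-reused) trx cannot carry a stale flag into the
// wsrep hooks.
int main()
{
  THD thd{true};
  trx_t trx;
  trx_start(trx, thd);       // flag set once here
  trx_commit(trx);           // and cleared only here
}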
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Row-injection updates don’t correctly set the historical partition
for tables with system versioning and system_time partitions. This
results in inconsistencies between the master and slave when
replicating transactions that target such tables (i.e. the primary
server would correctly distribute archived rows amongst its
partitions, whereas the replica would have all archived rows in a
single partition). The function
partition_info::vers_set_hist_part(THD*) is used to set the
partition; however, its initial check for
vers_require_hist_part(THD*) returns false, bypassing the rest of
the function (which sets up the partition to use). This is because
the actual check uses the LEX sql_command (via
LEX::vers_history_generating()) to determine if the command is valid
to generate history. Row injections don’t have sql_commands though.
This patch provides a fix which extends the check in
vers_history_generating() to additionally allow row injections to be
history generating (via the function LEX::is_stmt_row_injection()).
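A hedged sketch of the extended check (the method names follow the text
above, but the class body is illustrative):

#include <cstdio>

struct Lex_sketch
{
  bool sql_command_generates_history;  // result of the existing LEX check
  bool row_injection;                  // set while applying row events

  bool is_stmt_row_injection() const { return row_injection; }

  bool vers_history_generating() const
  {
    // Row injections carry no sql_command, so allow them explicitly.
    return sql_command_generates_history || is_stmt_row_injection();
  }
};

int main()
{
  Lex_sketch row_event{false, true};   // a row injection
  std::printf("history generating: %s\n",
              row_event.vers_history_generating() ? "yes" : "no");
}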
Special thanks to Jan Lindstrom <jan.lindstrom@galeracluster.com>
for his work in reproducing the bug, and providing an initial test
case.
Reviewed By
============
Kristian Nielsen <knielsen@knielsen-hq.org>
Aleksey Midenkov <midenok@mariadb.com>
The problem was caused by MDEV-31413 (commit 277968aa), where the
mysql.gtid_slave_pos table was replicated by Galera.
However, as not all nodes in a Galera cluster are replica
nodes, rows were not deleted from the table.
This fix corrects that, so that the mysql.gtid_slave_pos
table is not replicated by Galera. Instead, when a Galera
node receives a GTID event and wsrep_gtid_mode=1, the event
is stored in the mysql.gtid_slave_pos table.
Added test case galera_2primary_replica for two async
primaries replicating to a Galera cluster.
Added test case galera_circular_replication, where an
async primary replicates to a Galera cluster and
one of the Galera cluster nodes is the master
of an async replica.
Modified test case galera_restart_replica to monitor
gtid positions and rows in mysql.gtid_pos_table.
This is the test case from commit 46aaf328ce
(MDEV-35830) to cover backup_undo_trunc() in the regression tests of
earlier major versions of mariadb-backup.
recv_sys_t::parse(): Correctly handle the storing==BACKUP case,
and simplify some logic around storing==YES as well.
The added test mariabackup.undo_truncate is based on an idea of
Thirunarayanan Balathandayuthapani. It nondeterministically (not on
every run) covers this logic, including the function backup_undo_trunc(),
for both innodb_encrypt_log=ON and innodb_encrypt_log=OFF.
Reviewed by: Debarun Banerjee
log_t::resize_start(): If the ib_logfile101 cannot be created,
be sure to reset log_sys.resize_lsn.
log_t::resize_abort(): In case SET GLOBAL innodb_log_file_size is
aborted, delete the ib_logfile101.
innodb_log_file_mmap: Use a constant documentation string that
refers to persistent memory even when it is not available in the build.
HAVE_INNODB_MMAP: Remove, and unconditionally enable this code.
log_mmap(): On 32-bit systems, ensure that the size fits in 32 bits.
log_t::resize_start(), log_t::resize_abort(): Only handle memory-mapping
if HAVE_PMEM is defined. The generic memory-mapped interface is only for
reading the log in recovery. Writable memory mappings are only for
persistent memory, that is, Linux file systems with mount -o dax.
Reviewed by: Debarun Banerjee, Otto Kekäläinen
Fix a regression introduced by commit 9588526, which attempted to address
MDEV-27037. With the regression, mariadb-binlog cannot process multiple
log files when --stop-datetime is specified.
The change is to keep recording the timestamp of the last processed event,
and, after all log files are processed, emit a warning if the last recorded
timestamp has not reached the specified --stop-datetime. This applies both
when processing local log files and log files from remote servers.
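A hedged sketch of that warning logic (variable and function names are
illustrative, not mariadb-binlog's actual ones):

#include <ctime>
#include <cstdio>

static time_t last_processed_ts = 0;  // updated for every event read

static void warn_if_stop_datetime_not_reached(time_t stop_datetime)
{
  // Called once, after local or remote log files have all been processed.
  if (stop_datetime && last_processed_ts < stop_datetime)
    std::fprintf(stderr, "WARNING: --stop-datetime is beyond the timestamp "
                         "of the last processed event.\n");
}

int main()
{
  last_processed_ts = 1000;
  warn_if_stop_datetime_not_reached(2000);  // emits the warning
}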
All new code of the whole pull request, including one or several files that are
either new files or modified ones, are contributed under the BSD-new license. I
am contributing on behalf of my employer Amazon Web Services, Inc.
Co-authored-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
recv_sys_t::parse<storing=NO>(): Do invoke
fil_space_set_recv_size_and_flags() and do parse enough of page 0
to facilitate that.
This fixes a regression that had been introduced in
commit b249a059da (MDEV-34850).
In a multi-batch crash recovery, we would fail to invoke
fil_space_set_recv_size_and_flags() while parsing the remaining log,
before starting the first recovery batch.
Reviewed by: Debarun Banerjee
Tested by: Matthias Leich