The partitioning error handling code was looking at
thd->lex->alter_info.partition_flags in non-ALTER TABLE cases, where
the value is stale and contains whatever was set by an earlier ALTER TABLE.
This could cause the wrong error code to be generated, which in some cases
can break replication with a "different errorcode" error.
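A minimal sketch of the implied guard, using simplified stand-ins for the
server's LEX/Alter_info structures (this is an illustration, not the actual
patch):

  #include <cstdint>

  enum sql_command_t { SQLCOM_ALTER_TABLE, SQLCOM_OTHER };

  struct Alter_info_model { uint32_t partition_flags = 0; };
  struct LEX_model {
    sql_command_t sql_command = SQLCOM_OTHER;
    Alter_info_model alter_info;   // reused across statements, not reset per command
  };

  // Only trust partition_flags when the current statement really is an
  // ALTER TABLE; for any other statement the field may still hold flags
  // left behind by an earlier ALTER TABLE on the same connection.
  static uint32_t effective_partition_flags(const LEX_model &lex)
  {
    return lex.sql_command == SQLCOM_ALTER_TABLE
             ? lex.alter_info.partition_flags : 0;
  }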
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Parallel slave failed to retry in retry_event_group() with error
WSREP: Parallel slave worker failed at wsrep_before_command() hook
Fix wsrep transaction cleanup/restart in retry_event_group() to properly
clean up the previous transaction by calling wsrep_after_statement().
Also move the call that resets the error to after the call to
wsrep_after_statement(), so that the reset remains effective.
Add an MTR test galera_as_slave_parallel_retry to reproduce the error
when the fix is not present.
Other issues which were detected when testing with sysbench:
Check if the parallel slave is killed for retry before waiting for prior
commits in THD::wsrep_parallel_slave_wait_for_prior_commit(). This
is required with slave-parallel-mode=optimistic to avoid a deadlock
when a slave worker later in commit order manages to reach the prepare
phase before a lock conflict is detected (see the sketch below).
Suppress the wsrep-applier-specific warning for slave threads.
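A rough sketch of the kill-check, with hypothetical types standing in for
the worker state (not the actual THD/wsrep code):

  #include <condition_variable>
  #include <mutex>

  struct worker_model {
    std::mutex mtx;
    std::condition_variable cond;
    bool prior_committed = false;
    bool killed_for_retry = false;   // set when this worker is aborted for retry
  };

  // Model of the wait-for-prior-commit step: bail out if this worker has
  // already been killed for retry instead of blocking, so that a transaction
  // later in commit order that reached the prepare phase first cannot leave
  // both sides waiting on each other.
  static bool wait_for_prior_commit(worker_model &w)
  {
    std::unique_lock<std::mutex> lk(w.mtx);
    if (w.killed_for_retry)
      return false;                  // skip the wait and go to retry
    w.cond.wait(lk, [&] { return w.prior_committed || w.killed_for_retry; });
    return !w.killed_for_retry;
  }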
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Test failed sporadically when --ps-protocol was enabled:
a transaction that was BF aborted on COMMIT would succeed
instead of reporting the expected deadlock error.
The reason for the failure was that, depending on timing,
the transaction was BF aborted while the COMMIT statement
was being prepared through a COM_STMT_PREPARE command.
In the failing cases, the transaction was BF aborted
after COM_STMT_PREPARE had already disabled the diagnostics
area of the client. The attempt to override the deadlock error
towards the end of dispatch_command() would therefore be skipped,
resulting in a successful COMMIT even though the transaction
was aborted.
This bug affected the following MTR tests:
- galera_insert_multi
- galera_nopk_unicode
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
I could not reproduce the reported assertion. However, I could
reproduce the test failure, which was caused by a missing wait_condition
and an error in the test case. Added the missing wait_condition and
fixed the error in the test case to make the test stable.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
For some reason InnoDB was disabled during the --wsrep-recover call.
Added --loose-innodb to the command line to make sure InnoDB is
enabled.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
row_purge_remove_sec_if_poss_leaf(): If there is an active transaction
that is not newer than PAGE_MAX_TRX_ID, return the bogus value 1
so that row_purge_remove_sec_if_poss_tree() is guaranteed to recheck if
the record needs to be purged. It could be the case that an active
transaction would insert this record between the time this check
completed and row_purge_remove_sec_if_poss_tree() acquired a latch
on the secondary index leaf page again.
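A minimal sketch of the recheck pattern, with simplified names and return
values (the real function returns a page number, with 1 acting as the
sentinel):

  // Simplified outcome of the unlatched leaf pass.
  enum leaf_outcome { LEAF_HANDLED, LEAF_RECHECK_IN_TREE };

  // Stub standing in for trx_sys_t::find_same_or_older_in_purge().
  static bool active_trx_not_newer_than(unsigned long long trx_id)
  { (void) trx_id; return false; }

  // If any active transaction is not newer than the page's PAGE_MAX_TRX_ID,
  // it could still insert a matching record between this unlatched check and
  // the moment the tree pass re-latches the leaf page, so the decision must
  // be deferred to the latched tree pass.
  static leaf_outcome purge_sec_leaf_check(unsigned long long page_max_trx_id)
  {
    if (active_trx_not_newer_than(page_max_trx_id))
      return LEAF_RECHECK_IN_TREE;
    /* ... attempt the purge on the leaf page ... */
    return LEAF_HANDLED;
  }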
row_purge_del_mark_error(), row_purge_check(): Some unlikely code
refactored into separate non-inline functions.
trx_sys_t::find_same_or_older_low(): Move the unlikely and bulky
part of trx_sys_t::find_same_or_older() to a non-inline function.
trx_sys_t::find_same_or_older_in_purge(): A variant of
trx_sys_t::find_same_or_older() for use in the purge subsystem,
with potential concurrent access of the same trx_t object from
multiple threads.
trx_t::max_inactive_id_atomic: An Atomic_relaxed alias of the
regular data field trx_t::max_inactive_id, which we
use on systems that have native 64-bit loads or stores.
On any 64-bit system that seems to be supported by GCC, Clang or MSVC,
relaxed atomic loads and stores use the regular load and store
instructions. On -march=i686 the 64-bit atomic loads and stores
would use an XMM register.
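As an illustration only, a relaxed 64-bit read of a plain field can be
modelled with C++20 std::atomic_ref; the actual Atomic_relaxed alias in
trx_t differs, but on 64-bit targets both compile to an ordinary load:

  #include <atomic>
  #include <cstdint>

  struct trx_model {
    uint64_t max_inactive_id = 0;   // regular field, written by the owning thread
  };

  // Purge-side reader: view the same storage through a relaxed atomic
  // reference; on 64-bit targets this compiles to a plain load instruction.
  inline uint64_t read_max_inactive_id(trx_model &trx)
  {
    return std::atomic_ref<uint64_t>(trx.max_inactive_id)
             .load(std::memory_order_relaxed);
  }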
This fixes a regression that had been introduced in
commit b7b9f3ce82 (MDEV-34515).
There would be messages
[ERROR] InnoDB: tried to purge non-delete-marked record in index
in the server error log, and an assertion ut_ad(0) would cause a
crash of debug instrumented builds. This could also cause incorrect
results for MVCC reads and corrupted secondary indexes.
The debug instrumented test case was written by Debarun Banerjee.
Reviewed by: Debarun Banerjee
Make galera_sr.GCF-572 behave the same with and without option
innodb-snapshot-isolation. It is sufficient to remove a SELECT
statement from a transaction to delay the creation of the read
view in InnoDB, thus avoiding the detection of a write-write conflict
under innodb-snapshot-isolation.
Ignore snapshot isolation conflicts during fragment removal, before
the streaming transaction commits. This happens when a streaming
transaction creates a read view that precedes the INSERTion of
fragments into the streaming_log table. Fragments are INSERTed
using a different transaction. These fragments are then removed
as part of the COMMIT of the streaming transaction. This fragment
removal operation could fail when the fragments were not part of
the transaction's read view, thus violating snapshot isolation.
Make sure that node_1 remains in the primary view by increasing its
weight. Add suppressions for expected warnings because we kill
node_2.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
- Innochecksum misinterprets freed pages as active ones.
This leads the user to think that too many valid
pages exist.
- To avoid this confusion, innochecksum introduces one
more option, --skip-freed-pages / -r, to skip the freed
pages while dumping or printing the summary of the tablespace.
- Innochecksum can safely assume a page is freed if
the respective extent doesn't belong to a segment and the page is
marked as freed in the XDES_BITMAP of the extent descriptor page
(see the sketch after this list).
- Innochecksum shouldn't treat a zero-filled page as an extent
descriptor page.
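An abstract model of that decision, with boolean stand-ins for the on-disk
XDES fields (not the real extent descriptor layout):

  struct xdes_model {
    bool belongs_to_segment;   // extent is allocated to a file segment
    bool bitmap_free_bit;      // XDES_BITMAP free bit for this page
  };

  static bool page_is_freed(const xdes_model &x)
  {
    // Treat the page as freed only when the extent is not owned by any
    // segment and the descriptor bitmap marks the page as free.
    return !x.belongs_to_segment && x.bitmap_free_bit;
  }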
Reviewed-by: Marko Mäkelä
MTR test rpl_semi_sync_no_missed_ack_after_add_slave fails on
buildbot after the preparatory commit for MDEV-35109 (5290fa043b)
which changed a sleep to a debug_sync point. The problem is that the
debug_sync point would time out on a slave while waiting to enter
the logic to send an ACK reply. More specifically, the test
configuration is a primary with two replicas, and the test waits for
one of the replicas to start sending an ACK. If the other replica was
able to receive the event and respond with an ACK before the binlog
dump thread of the timing-out server prepared to send the event, it
wouldn't set the SEMI_SYNC_NEED_ACK flag, and that replica wouldn't
even try to respond with an ACK.
The fix is to use debug_sync for both replicas such that both replicas
are held before sending their ACK, so one can't temporarily disable
semi-sync for the other before it receives the transaction.
The cursor protocol cannot handle SELECT ... INTO.
Disable it for the loaddata test.
For grant_plugin/innodb_fts.fulltext, the tests were
changed to use a temporary table rather than a
user variable.
Problem:
========
Since version 10.6.13 (86767bcc0f),
the InnoDB purge thread does free the TRX_UNDO_CACHED undo segment.
A data directory from a pre-10.6.13 version can contain a TRX_UNDO_CACHED
undo segment in the system tablespace even though it
has an external undo tablespace.
During a slow shutdown, InnoDB collects the segments of tables
that exist in the system tablespace, and the cached undo segments in
the system tablespace, as used segments existing in the system tablespace.
While shrinking the system tablespace, the last used extent can be
one used by a cached undo segment. This extent effectively blocks the
shrinking of the system tablespace.
Solution:
========
While freeing the unused segment, InnoDB should free the cached
undo segment header page that exists in the system tablespace and
reset TRX_RSEG_UNDO_SLOTS to 0xff in the rollback segment
header page that exists in the system tablespace. This could further
improve the shrinking of the system tablespace.
slave_applier_reset_xa_trans() should clear THD::pseudo_thread_id when called,
to reset the XA transaction state completely. Clearing pseudo_thread_id models
the binlog applier that handles BASE64-encoded events, which possibly contain the
pseudo_thread_id, allowing us to restore the pre-event state of the
connection's respective session variable.
The test innodb.log_file_size_online was killing the server shortly
after the completion of SET GLOBAL innodb_log_file_size.
It is possible that log_t::write_checkpoint() would let mysqltest.cc
proceed to kill the server before the message had been written to
the server error log.
Let us remove the occasionally failing check of that message, and
instead ensure that SET GLOBAL innodb_log_file_size to a different
size will result in an error log message when the server is not being
killed.
MDEV-35350 consolidated two methods that MTR tests
use to wait until a file has certain content
written to it; these were only available in 10.6+.
This patch only backports the functionality to
10.5 in case some test wants to use it (nothing
uses it in 10.5 at present).
The cleanup bc46f1a7d9 from 10.6 is also
backported so SEARCH_TYPE doesn't need to be
accounted for in the new search_pattern_in_file.inc
logic.
- A REPLACE statement fails with a duplicate key error when multiple
unique key conflicts happen. The reason is that the server expects the
InnoDB engine to store the conflicting keys in ascending order,
but InnoDB doesn't store the conflicting keys
in ascending order.
Fix:
===
- Enable HA_DUPLICATE_KEY_NOT_IN_ORDER for the InnoDB storage engine
only when the unique index order differs between the .frm file and the
InnoDB dictionary.
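A rough sketch of the condition, using an illustrative flag value and a
simplified representation of the key order (not the server's actual handler
definitions):

  #include <vector>

  // Illustrative flag value; the real HA_DUPLICATE_KEY_NOT_IN_ORDER comes
  // from the server's handler interface.
  constexpr unsigned long DUPLICATE_KEY_NOT_IN_ORDER = 1UL << 0;

  // Advertise out-of-order duplicate-key reporting only when the unique-key
  // order in the .frm differs from the order in the InnoDB dictionary.
  static unsigned long adjust_table_flags(unsigned long flags,
                                          const std::vector<unsigned> &frm_unique_order,
                                          const std::vector<unsigned> &dict_unique_order)
  {
    if (frm_unique_order != dict_unique_order)
      flags |= DUPLICATE_KEY_NOT_IN_ORDER;
    return flags;
  }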
The merge f00711bba2 included a change
of the test innodb.log_file_name, which would try to ensure that
in the presence of the code fix decdd4bf49
we would get an error on Linux when invoking lseek() on a directory.
It turns out that this is not the case in at least one Linux based
cloud environment.
Problem:
========
- dict_stats_table_clone_create() does not initialize the
flag stats_error_printed in either dict_table_t or dict_index_t.
Because dict_stats_save_index_stat() is operating on a copy
of a dict_index_t object, it appears that
dict_index_t::stats_error_printed will always be false
for actual metadata objects, and uninitialized in
dict_stats_save_index_stat().
Solution:
=========
dict_stats_table_clone_create(): Assign stats_error_printed
for the table and its indexes while copying the statistics.
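A minimal model of the omission and the fix, with simplified fields standing
in for dict_index_t (not the actual definitions):

  struct index_stats_model {
    double stat_n_diff_key_vals;   // example of the statistics being copied
    bool   stats_error_printed;    // must also be assigned when cloning (the fix)
  };

  static index_stats_model clone_for_stats(const index_stats_model &src)
  {
    index_stats_model dst{};
    dst.stat_n_diff_key_vals = src.stat_n_diff_key_vals;
    dst.stats_error_printed  = src.stats_error_printed;  // previously left uninitialized
    return dst;
  }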
Replace wait_for_pattern_in_file.inc, and all of its uses,
with search_pattern_in_file.inc using SEARCH_WAIT.
Reviewed By:
============
Kristian Nielsen <knielsen@knielsen-hq.org>
Sergei Golubchik <serg@mariadb.org>
Ignoring the configured server_id should not be a warning, because
the correct configuration is documented. Changed the message to info
level, with a more detailed message about what was configured and
what will actually be used.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
While applying a CTAS log event, we peek at the relay log to see if the CTAS
contains inserted rows or if it's empty.
The peek function didn't check for the end-of-file condition when trying to
get the next event from the log, and thus it hung.
The fix includes checking for end-of-file while peeking at log events
and treating a returned XID_EVENT value as a sign of an empty CTAS.
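A sketch of the corrected peek loop, with hypothetical event kinds standing
in for the replication event classes (how the real code reacts to each case
may differ in detail):

  enum peek_event { EV_ROW, EV_XID, EV_EOF, EV_OTHER };

  // Returns true when the CTAS carries no inserted rows. The EV_EOF branch is
  // the added check: without it the loop would keep asking for the next event
  // and hang at the end of the relay log.
  static bool ctas_is_empty(peek_event (*next_event)())
  {
    for (;;) {
      switch (next_event()) {
      case EV_ROW:   return false;   // at least one inserted row
      case EV_XID:   return true;    // commit seen before any rows: empty CTAS
      case EV_EOF:   return true;    // end of relay log: stop peeking instead of hanging
      case EV_OTHER: break;          // skip unrelated events and keep peeking
      }
    }
  }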
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
For a primary configured with wait_point=AFTER_SYNC, if two threads
T1 (binlogging through MYSQL_BIN_LOG::write()) and T2 were
binlogging at the same time, T1 could accidentally wait for its
semi-sync ACK using the binlog coordinates of T2. Prior to
MDEV-33551, this only resulted in delayed transactions, because all
transactions shared the same condition variable for ACK signaling.
However, with the MDEV-33551 changes, each thread has its own
condition variable to signal. So T1 could wait indefinitely when
either:
1) T1's ACK is received but not T2's when T1 goes into
wait_after_sync(), because the ACK receiver thread has already
notified about the T1 ACK, but T1 was _actually_ waiting on T2's
ACK, and therefore tries to wait (in vain).
2) T1 goes to wait_after_sync() before any ACKs have arrived. When
T1's ACK comes in, T1 is woken up; however, it sees that it needs to wait
more (because it was actually waiting on T2's ACK), and goes to wait
again (this time, in vain).
Note that the actual cause of T1 waiting on T2's binlog coordinates
is when MYSQL_BIN_LOG::write() would call
Repl_semisync_master::wait_after_sync(), the binlog offset parameter
was read as the end of MYSQL_BIN_LOG::log_file, which is shared
among transactions. So if T2 had updated the binary log _after_ T1
had released LOCK_log, but not yet invoked wait_after_sync(), it
would use the end of the binary log file as the binlog offset, which
was that of T2 (or any future transaction).
The fix in this patch ensures consistency between the binary log
coordinates a transaction uses between report_binlog_update() and
wait_after_sync().
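A sketch of the coordinate-consistency idea with simplified types (not the
actual Repl_semisync_master interface):

  #include <string>

  struct binlog_coord { std::string file; unsigned long long pos; };

  // Stubs standing in for the semi-sync calls.
  static void report_binlog_update(const binlog_coord &) {}
  static void wait_after_sync(const binlog_coord &) {}

  // Capture the transaction's own end coordinates once, while still holding
  // LOCK_log, and use the same pair for both the report and the wait.
  // Re-reading the shared end of MYSQL_BIN_LOG::log_file at wait time could
  // instead pick up the coordinates of a later transaction (T2).
  static void commit_with_semisync(const binlog_coord &trx_end)
  {
    report_binlog_update(trx_end);   // register what we will be waiting on
    /* ... LOCK_log released; other transactions may extend the binlog ... */
    wait_after_sync(trx_end);        // wait on the SAME coordinates, not the log end
  }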
Reviewed By
============
Kristian Nielsen <knielsen@knielsen-hq.org>
Andrei Elkin <andrei.elkin@mariadb.com>
This is a preparatory commit for MDEV-35109 to make its
testing code cleaner (and harden other tests too).
The DEBUG_DBUG point simulate_delay_semisync_slave_reply
up to this patch used my_sleep() to delay an ACK response,
but sleeps are prone to causing test failures on machines that
are already under heavy load when running tests (e.g. on
buildbot).
This patch changes this DEBUG_DBUG sleep to use DEBUG_SYNC
to coordinate exactly when a slave should send its reply,
which is safer and faster.
As DEBUG_SYNC can't be used while a server is shutting
down, to synchronize threads with SHUTDOWN WAIT FOR SLAVES
logic, we use and extend wait_for_pattern_in_file.inc to
wait for an informational message in the error log
indicating that the shutdown process has reached the
intended state (i.e. indicating that the shutdown has
been delayed to await semi-sync ACKs). Specifically, the
extensions are as follows:
1. wait_for_pattern_in_file.inc is extended with parameter
wait_for_pattern_count as a number that indicates the
number of times a pattern should occur in the file before
return control back to the calling script.
2. search_for_pattern_in_file.inc is extended with parameter
SEARCH_ABORT_IS_SUCCESS to inverse the error/success
logic, so the SEARCH_ABORT condition can be used to
indicate success, rather than error.
Problem:
=======
- InnoDB fails to write the buffered insert operation during a
CREATE..SELECT operation. This happens when bulk_insert
in the transaction is reset to false while unlocking a source table.
Fix:
===
- InnoDB should apply the previous buffered changes to
all tables if we encounter any statement other than a
pure INSERT or INSERT..SELECT statement in
ha_innobase::external_lock() and start_stmt()
(see the sketch below).
- Remove the function bulk_insert_apply_for_table().
start_stmt(), external_lock(): Assert that trx->duplicates
is enabled during a bulk insert operation.
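A simplified model of the flush rule, with illustrative statement kinds and
transaction state (not the actual InnoDB structures):

  enum stmt_kind { STMT_INSERT, STMT_INSERT_SELECT, STMT_OTHER };

  struct bulk_trx_model {
    bool has_buffered_bulk_insert = false;
  };

  // Model of the check done at statement start / table lock time: any
  // statement other than a plain INSERT / INSERT..SELECT forces the buffered
  // bulk-insert changes to be applied to all tables first.
  static void on_statement_begin(bulk_trx_model &trx, stmt_kind stmt)
  {
    if (trx.has_buffered_bulk_insert &&
        stmt != STMT_INSERT && stmt != STMT_INSERT_SELECT)
    {
      /* ... apply buffered rows to all tables here ... */
      trx.has_buffered_bulk_insert = false;
    }
  }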
This is an extension of MDEV-30423 "Deadlock on Replica during BACKUP
STAGE BLOCK_COMMIT on XA transactions"
The original commit in MDEV-30423 was not complete, as some usage of
MDL_BACKUP_COMMIT locks in XA did not set thd->backup_commit_lock.
This is required to be set when using parallel replication.
Fixed by ensuring that all usage of the BACKUP_COMMIT lock in XA is uniform and
that all of it sets thd->backup_commit_lock. I also changed all locks to be
MDL_EXPLICIT to keep that part uniform as well.
A regression test is added.