row_upd_build_difference_binary(): Correctly handle the
case where columns (or clustered index fields) have been added
since the 'entry' was originally created. In this case,
the update vector must replace any missing columns with the
default values of the instantly added columns.
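A minimal self-contained sketch of the idea; Column and complete_entry()
are illustrative stand-ins, not the actual InnoDB types or API:

    #include <cstddef>
    #include <string>
    #include <vector>

    // Illustrative model: 'Column' stands in for InnoDB's dict_col_t,
    // carrying the default value that was recorded when the column
    // was instantly added.
    struct Column {
        std::string name;
        std::string instant_default;
    };

    // If 'entry' predates some instantly added columns, the update
    // vector must treat the missing trailing fields as having their
    // instant default values rather than skipping them.
    std::vector<std::string> complete_entry(
        const std::vector<std::string>& entry,
        const std::vector<Column>& index_columns)
    {
        std::vector<std::string> full = entry;
        for (std::size_t i = entry.size(); i < index_columns.size(); i++)
            full.push_back(index_columns[i].instant_default);
        return full;
    }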
The test occasionally fails with a different table reference count
due to purge activity after INSERT operations (MDEV-12288).
Wait for purge before accessing dict_table_t::n_ref_count.
- Note that some issues were also fixed in 10.2 and 10.4. I also fixed them
here to be able to continue making 10.5 Valgrind-safe again
- Disable connection thread warnings during shutdown
The MDEV-20265 commit e746f451d5
introduces DBUG_ASSERT(right_op == r_tbl) in
st_select_lex::add_cross_joined_table(), and that assertion would
fail in several tests that exercise joins. That commit was skipped
in this merge, and a separate fix of MDEV-20265 will be necessary in 10.4.
ha_innobase::open(): Always ignore problems with FOREIGN KEY constraints
(pass DICT_ERR_IGNORE_FK_NOKEY), no matter whether foreign_key_checks
is enabled. Instead, we must report errors when enforcing the FOREIGN KEY
constraints. As a result of ignoring these errors, the tables will be
loaded with dict_foreign_t objects whose foreign_index or referenced_index
will be NULL.
Also, pass DICT_ERR_IGNORE_FK_NOKEY instead of DICT_ERR_IGNORE_NONE
to dict_table_open_on_id_low() in many other cases. Notably, on
CREATE TABLE and ALTER TABLE, we will keep validating the FOREIGN KEY
constraints as before.
dict_table_open_on_name(): If no other flags than
DICT_ERR_IGNORE_FK_NOKEY are set, refuse access to unreadable tables.
Some encryption tests rely on this code path.
For the DML code path, we used to have the problem that when
one of the indexes was missing in dict_foreign_t, we would ignore
the FOREIGN KEY constraint altogether. The following changes
address that.
row_ins_check_foreign_constraints(): Add the parameter pk.
For the primary key, consider also foreign key constraints for which
foreign->foreign_index=NULL (no underlying index is available).
row_ins_check_foreign_constraint(): Report errors also for !check_ref.
Remove a redundant check for srv_read_only_mode.
row_ins_foreign_report_add_err(): Tolerate foreign->foreign_index=NULL.
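A self-contained sketch of the changed check, assuming simplified
stand-in types (Foreign mirrors dict_foreign_t; the real function is
row_ins_check_foreign_constraint()):

    // Illustrative model: when the underlying index could not be
    // loaded (a nullptr in 'Foreign'), the constraint must still be
    // enforced by reporting an error, instead of being silently
    // ignored as before.
    struct Index {};
    struct Foreign {
        const Index* foreign_index;     // may be nullptr
        const Index* referenced_index;  // may be nullptr
    };

    enum class db_err { SUCCESS, NO_INDEX };

    db_err check_foreign(const Foreign& foreign, bool check_ref)
    {
        const Index* index = check_ref ? foreign.referenced_index
                                       : foreign.foreign_index;
        if (index == nullptr)
            return db_err::NO_INDEX;  // report also for !check_ref
        // ... the actual row lookup would go here ...
        return db_err::SUCCESS;
    }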
Let us invoke wait_all_purged.inc right before the workload.
Starting with MDEV-12288 in MariaDB Server 10.3, also INSERT
generates purge workload. If we do not ensure that purge has
run to completion, the results on 10.3 and later could be
nondeterministic.
Stabilize the test:
- Replace the Rows column in the EXPLAIN output for one query
- Use EITS statistics for another query (in that test case, the
query must use LooseScan)
Optimize the test by dropping the table early and by using only
one undo log thread, so that purge will be doing more useful work
and less busy work of suspending and resuming the worker threads.
The test used to cause shutdown timeout on 10.4 on buildbot, and
for me locally when using --mysqld=--innodb-sync-debug.
With these tweaks, it passes for me with --mysqld=--innodb-sync-debug.
If there are multiple row versions in InnoDB, reading one row via the
primary key may have O(N) complexity, and reading via secondary keys
may have O(N^2) complexity.
The problem occurs when there are many pending versions of the same
row, meaning that the primary key is the same, but a secondary key is
different. The slowdown occurs when the secondary index is
traversed. This patch creates a helper class for the function
row_sel_get_clust_rec_for_mysql(), which can remember and reuse
cached_clust_rec and cached_old_vers so that rec_get_offsets() does not
need to be called over and over for the clustered record.
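A self-contained sketch of the caching idea; Rec and Offsets are
hypothetical stand-ins for rec_t and the offsets array computed by
rec_get_offsets():

    // Illustrative model: remember the last clustered-index record
    // seen, so that repeated lookups for the same record (as happens
    // when many versions are traversed through a secondary index) can
    // reuse the previously computed offsets.
    struct Rec {};
    struct Offsets {};

    // Stand-in for rec_get_offsets().
    Offsets compute_offsets(const Rec*) { return Offsets{}; }

    class ClustRecCache {
    public:
        const Offsets* offsets_for(const Rec* clust_rec)
        {
            if (clust_rec != cached_clust_rec) {
                cached_clust_rec = clust_rec;
                cached_offsets = compute_offsets(clust_rec);
            }
            return &cached_offsets;
        }
    private:
        const Rec* cached_clust_rec = nullptr;
        Offsets cached_offsets{};
    };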
Corrections by Kevin Lewis <kevin.lewis@oracle.com>
MDEV-20341 Unstable innodb.innodb_bug14704286
Removed a test that checked the ability to interrupt a long-running
query, because the query is not long anymore.
Starting with MDEV-12288 in MariaDB Server 10.3,
the transaction identifiers on records will be reset on purge.
Because purge might or might not run to completion before shutdown,
it could happen that the bogus transaction identifier that our
test is writing will be reset by purge after restart, and the
expected warning message on SELECT will fail to appear.
We resolve the race condition by ensuring that purge runs to
completion before the shutdown.
Use DEBUG_SYNC to hang the execution at the interesting point,
and then kill and restart the server externally. This will work
also with Valgrind. DBUG_SUICIDE() causes Valgrind to hang,
and it could also cause uninteresting reports about memory leaks.
While we are at it, let us clean up innodb.innodb_bulk_create_index_debug
so that it will actually test the desired functionality also in future
versions (with instant ADD COLUMN and DROP COLUMN) and avoid
some unnecessary restarts.
We are adding two DEBUG_SYNC points for ALTER TABLE, because there were
none that would be executed right before ha_commit_trans().
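A self-contained model of what such a sync point provides (the real
facility is the server's DEBUG_SYNC macro together with the
SET DEBUG_SYNC statement; this only sketches the concept):

    #include <condition_variable>
    #include <mutex>
    #include <set>
    #include <string>

    // Illustrative model: a thread reaching a named point blocks until
    // the test signals that name, which is how execution can be hung
    // at the interesting point before the server is killed externally.
    class SyncPoints {
    public:
        void reach(const std::string& name)   // called by server code
        {
            std::unique_lock<std::mutex> lock(mutex);
            cond.wait(lock, [&] { return signaled.count(name) != 0; });
        }
        void signal(const std::string& name)  // called by the test
        {
            { std::lock_guard<std::mutex> lock(mutex); signaled.insert(name); }
            cond.notify_all();
        }
    private:
        std::mutex mutex;
        std::condition_variable cond;
        std::set<std::string> signaled;
    };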
Skip the test on big-endian systems.
In MariaDB Server 10.0 and 10.1 (as well as MySQL 5.6),
the implementation of innodb_checksum_algorithm=crc32
wrongly assumes little-endian byte order.
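A short self-contained illustration of the pitfall: feeding a checksum
32-bit words read straight from memory makes the result depend on the
host byte order, whereas reading the bytes in one defined order does
not (load_le32() and load_host32() are illustrative helpers, not
InnoDB code):

    #include <cstdint>
    #include <cstring>

    // Portable: the same answer on every platform.
    uint32_t load_le32(const unsigned char* p)
    {
        return uint32_t(p[0]) | uint32_t(p[1]) << 8
             | uint32_t(p[2]) << 16 | uint32_t(p[3]) << 24;
    }

    // Host byte order: differs between little- and big-endian systems,
    // so a checksum computed over such words is not portable.
    uint32_t load_host32(const unsigned char* p)
    {
        uint32_t v;
        std::memcpy(&v, p, 4);
        return v;
    }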
MDEV-17614 flags INSERT…ON DUPLICATE KEY UPDATE unsafe for statement-based
replication when there are multiple unique indexes. This correctly fixes
something whose attempted fix in MySQL 5.7
in mysql/mysql-server@c93b0d9a97
caused lock conflicts. That change was reverted in MySQL 5.7.26
in mysql/mysql-server@066b6fdd43
(with a substantial amount of other changes).
In MDEV-17073 we already disabled the unfortunate MySQL change when
statement-based replication was not being used. Now, thanks to MDEV-17614,
we can actually remove the change altogether.
This reverts commit 8a346f31b9 (MDEV-17073)
and mysql/mysql-server@c93b0d9a97 while
keeping the test cases.
Problem: Clients running with different values of auto_increment_increment
and doing concurrent inserts lead to a "Duplicate key error" in one of them.
Analysis:
When the auto_increment_increment value is reduced in a session,
InnoDB uses the last auto_increment_increment value
to recalculate the autoinc value.
If some other session has inserted a value
with a different auto_increment_increment, InnoDB recalculates the
autoinc value based on the current session's previous
auto_increment_increment instead of the auto_increment_increment
that was used for the last insert across all sessions.
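An illustrative formula for the recalculation (not the exact InnoDB
code): the next autoinc value is the smallest v >= current with
(v - offset) divisible by the step. If the step plugged in here is the
session's previous one rather than the step that actually produced
'current', two sessions can be handed the same value:

    #include <cstdint>

    // Smallest v >= current such that (v - offset) % step == 0.
    uint64_t next_autoinc(uint64_t current, uint64_t step, uint64_t offset)
    {
        if (current <= offset)
            return offset;
        return offset + ((current - offset + step - 1) / step) * step;
    }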
Fix:
revert 7acdf29cb4
a.k.a. 7c12a9e5c3
as it is causing the bug.
Reviewed By:
Bin <bin.x.su@oracle.com>
Kevin <kevin.lewis@oracle.com>
RB#21777
Note: In MariaDB Server, earlier changes in
ae5bc05988
for MDEV-533 require that the original test in
mysql/mysql-server@1ccd472d63
be adjusted for MariaDB.
Also, ef47b62551 (MDEV-8827)
had to be reverted after the upstream fix had been backported.
Problem:
=======
The autoincrement value yields duplicates for the following reasons:
(1) In the InnoDB handler function, the current autoincrement value is not
changed based on a newly set auto_increment_increment or
auto_increment_offset variable.
(2) The handler function does the rounding logic and changes the current
autoincrement value, and InnoDB is not aware of the change in the
current autoincrement value.
Solution:
========
To fix problem (1), InnoDB now always respects the auto_increment_increment
and auto_increment_offset values when computing the current autoincrement
value. By fixing problem (2), the handler layer no longer changes the
current autoincrement value.
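A small numeric illustration of the intended behaviour after the fix,
reusing the illustrative next_autoinc() formula from the sketch above:
with auto_increment_offset=1 and auto_increment_increment=3, the
counter advances 1, 4, 7, ..., and a changed increment takes effect in
InnoDB itself:

    #include <cassert>
    #include <cstdint>

    uint64_t next_autoinc(uint64_t current, uint64_t step, uint64_t offset)
    {
        return current <= offset
            ? offset
            : offset + ((current - offset + step - 1) / step) * step;
    }

    int main()
    {
        assert(next_autoinc(1, 3, 1) == 1);
        assert(next_autoinc(2, 3, 1) == 4);   // 1 + 1*3
        assert(next_autoinc(5, 3, 1) == 7);   // 1 + 2*3
        assert(next_autoinc(8, 5, 1) == 11);  // increment now 5: 1 + 2*5
    }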
Reviewed-by: Jimmy Yang <jimmy.yang@oracle.com>
RB: 13748
This is a regression due to MDEV-16515 that affects some versions in
the MariaDB 10.1 server series starting with 10.1.35, and possibly
all versions starting with 10.2.17, 10.3.8, and 10.4.0.
The idea of MDEV-16515 is to allow DROP TABLE to be interrupted,
in case it was stuck due to some concurrent activity. We already
made some cases of internal DROP TABLE immune to kill in MDEV-18237,
MDEV-16647, MDEV-17470. We must include the cleanup of
CREATE TABLE...SELECT in the list of such internal DROP TABLE.
ha_innobase::delete_table(): Pass create_failed=true if the current
SQL statement is CREATE, so that the table will be dropped.
row_drop_table_for_mysql(): If create_failed=true, do not allow
the operation to be interrupted.
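A self-contained sketch of the resulting control flow; 'create_failed'
mirrors the new parameter, and is_interrupted() is a trivial stand-in
for the real kill check:

    // Illustrative model: only a user-issued DROP TABLE may be aborted
    // by KILL; the internal cleanup after a failed CREATE TABLE...SELECT
    // must always run to completion.
    bool is_interrupted() { return false; }  // stand-in for the kill flag

    enum class db_err { SUCCESS, INTERRUPTED };

    db_err drop_table(bool create_failed)
    {
        if (!create_failed && is_interrupted())
            return db_err::INTERRUPTED;
        // ... actually drop the table ...
        return db_err::SUCCESS;
    }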
btr_push_update_extern_fields(): Add a parameter for the original number
of fields in the record before btr_cur_trim(). Assume that this function
will only be called for the clustered index, which is the only index
that can contain off-page columns.
trx_undo_prev_version_build(), btr_cur_pessimistic_update():
Only invoke btr_push_update_extern_fields() for the clustered index.
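A self-contained sketch of the invariant (Index and the function names
are illustrative stand-ins): off-page columns can exist only in the
clustered index, so the call is made only there, carrying the original
field count from before the trim:

    #include <cstddef>

    struct Index { bool is_clustered; };

    // Stand-in for btr_push_update_extern_fields(): collects off-page
    // columns among the first n_fields_before_trim fields.
    void push_update_extern_fields(const Index&,
                                   std::size_t n_fields_before_trim)
    {
        (void)n_fields_before_trim;
    }

    void apply_update(const Index& index, std::size_t n_fields_before_trim)
    {
        // Only the clustered index can contain off-page columns.
        if (index.is_clustered)
            push_update_extern_fields(index, n_fields_before_trim);
    }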