This was done in, among other things:
- thd->db and thd->db_length
- TABLE_LIST tablename, db, alias and schema_name
- Audit plugin database name
- lex->db
- All db and table names in Alter_table_ctx
- st_select_lex db
Other things:
- Changed a lot of functions to take const LEX_CSTRING* as argument
for db, table_name and alias. See init_one_table() as an example.
- Changed some function arguments from LEX_CSTRING to const LEX_CSTRING
- Changed some lists from LEX_STRING to LEX_CSTRING
- threads_mysql.result changed because the processlist db value wasn't
always correctly updated
- New append_identifier() function that takes LEX_CSTRING* as arguments
- Added new element tmp_buff to Alter_table_ctx to separate temp name
handling from temporary space
- Ensure we store the length of table/db names after my_casedn_str()
- Removed an unused version of rename_table_in_stat_tables()
- Changed Natural_join_column::table_name and db_name() to never return
NULL (used for print)
- thd->get_db() now returns db as a printable string (thd->db.str or "")
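A minimal sketch of the new calling convention, using only the LEX_CSTRING
layout described above (Example_session and init_one_table_like() are
hypothetical stand-ins, not the server's actual declarations):

  #include <cstddef>

  // LEX_CSTRING as described above: a pointer plus an explicit length.
  struct LEX_CSTRING
  {
    const char *str;
    size_t length;
  };

  struct Example_session              // hypothetical stand-in for THD
  {
    LEX_CSTRING db;                   // replaces the db + db_length pair

    // In the spirit of thd->get_db(): always printable, never NULL.
    const char *get_db() const { return db.str ? db.str : ""; }
  };

  // Functions now take const LEX_CSTRING* for db, table name and alias,
  // in the style of init_one_table():
  void init_one_table_like(const LEX_CSTRING *db,
                           const LEX_CSTRING *table_name,
                           const LEX_CSTRING *alias);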
MDEV-11415 Remove excessive undo logging during ALTER TABLE…ALGORITHM=COPY
Move a test from innodb.rename_table_debug to innodb.alter_copy.
ha_innobase::extra(HA_EXTRA_BEGIN_ALTER_COPY): Register id-versioned
tables so that mysql.transaction_registry will be updated, even for
empty tables that are subjected to ALTER TABLE…ALGORITHM=COPY.
If a crash occurs during ALTER TABLE…ALGORITHM=COPY, InnoDB would spend
a lot of time rolling back writes to the intermediate copy of the table.
To reduce the amount of busy work done, a work-around was introduced in
commit fd069e2bb3 in MySQL 4.1.8 and 5.0.2,
to commit the transaction after every 10,000 inserted rows.
A proper fix would have been to disable the undo logging altogether and
to simply drop the intermediate copy of the table on subsequent server
startup. This is what happens in MariaDB 10.3 with MDEV-14717, MDEV-14585.
In MariaDB 10.2, the intermediate copy of the table would be left behind
with a name starting with the string #sql.
This is a backport of a bug fix from MySQL 8.0.0 to MariaDB,
contributed by jixianliang <271365745@qq.com>.
Unlike recent MySQL, MariaDB supports ALTER IGNORE. For that operation
InnoDB must for now keep the undo logging enabled, so that the latest
row can be rolled back in case of an error.
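Schematically, the copy-phase hinting could look like the sketch below. This
is only an illustration: the Copy_alter_state type and the is_alter_ignore
parameter are made up, HA_EXTRA_BEGIN_ALTER_COPY is the hint named above, and
a matching end-of-copy hint is assumed.

  // Sketch only: how a storage engine could react to the copy-ALTER hints.
  enum ha_extra_hint
  {
    HA_EXTRA_BEGIN_ALTER_COPY,        // named above
    HA_EXTRA_END_ALTER_COPY           // assumed matching end-of-copy hint
  };

  struct Copy_alter_state
  {
    bool skip_undo_logging= false;

    void extra(ha_extra_hint hint, bool is_alter_ignore)
    {
      switch (hint)
      {
      case HA_EXTRA_BEGIN_ALTER_COPY:
        // Plain ALTER...ALGORITHM=COPY: the intermediate table can be
        // filled without undo logging and simply dropped after a crash.
        // ALTER IGNORE keeps undo logging so the latest row can be
        // rolled back on error.
        skip_undo_logging= !is_alter_ignore;
        break;
      case HA_EXTRA_END_ALTER_COPY:
        skip_undo_logging= false;     // back to normal logging
        break;
      }
    }
  };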
In Galera cluster, the LOAD DATA statement will retain the existing
behaviour and commit the transaction after every 10,000 rows if
the parameter wsrep_load_data_splitting=ON is set. The logic to do
so (the wsrep_load_data_split() function and the call to
handler::extra(HA_EXTRA_FAKE_START_STMT)) is joint work
by Ji Xianliang and Marko Mäkelä.
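The splitting logic amounts to roughly the following (an illustrative sketch
only; the helper name, the callbacks and the row counter are made up, while
wsrep_load_data_splitting and HA_EXTRA_FAKE_START_STMT are the pieces named
above):

  #include <functional>

  static const unsigned long long LOAD_DATA_SPLIT_ROWS= 10000;

  // Hypothetical helper called once per row inserted by LOAD DATA.
  void maybe_split_load_data(bool wsrep_load_data_splitting,
                             unsigned long long rows_inserted,
                             const std::function<void()> &commit_transaction,
                             const std::function<void()> &fake_start_stmt)
  {
    if (wsrep_load_data_splitting &&
        rows_inserted > 0 &&
        rows_inserted % LOAD_DATA_SPLIT_ROWS == 0)
    {
      commit_transaction();   // commit the rows inserted so far
      fake_start_stmt();      // e.g. handler::extra(HA_EXTRA_FAKE_START_STMT)
    }
  }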
The original fix:
Author: Thirunarayanan Balathandayuthapani <thirunarayanan.balathandayuth@oracle.com>
Date: Wed Dec 2 16:09:15 2015 +0530
Bug#17479594 AVOID INTERMEDIATE COMMIT WHILE DOING ALTER TABLE ALGORITHM=COPY
Problem:
During ALTER TABLE, we commit and restart the transaction for every
10,000 rows, so that the rollback after recovery would not take so long.
Fix:
Suppress the undo logging during copy alter operation. If fts_index is
present then insert directly into fts auxiliary table rather
than doing at commit time.
ha_innobase::num_write_row: Remove the variable.
ha_innobase::write_row(): Remove the hack for committing every 10000 rows.
row_lock_table_for_mysql(): Remove the extra 2 parameters.
lock_get_src_table(), lock_is_table_exclusive(): Remove.
Reviewed-by: Marko Mäkelä <marko.makela@oracle.com>
Reviewed-by: Shaohua Wang <shaohua.wang@oracle.com>
Reviewed-by: Jon Olav Hauglid <jon.hauglid@oracle.com>
- Galera tests that were not updated with connection change
messages
- Tests where the out-of-memory error message was changed (we are now
using the standard out-of-memory error in most places)
- Removed TokuDB tests that use include files that don't exist
in MariaDB
- Removed an unsupported MariaDB startup option from an option file
- Disabled some TokuDB tests that always timed out.
These should be enabled again when we have an option to
specify timeouts per test.
Do not SET DEBUG_DBUG=-d,... in tests. To disable debug instrumentation,
save and restore the original value of the variable DEBUG_DBUG.
Assigning -d,... will enable the output of a lot of unrelated DBUG
messages to the server error log.
Tests started to fail after a merge of MDEV-15107 (from bb-10.2-ext to 10.3),
because MDEV-15107 additionally fixed this problem:
MDEV-15112 Inconsistent evaluation of spvariable=0 in strict mode
Modifying tests not to rely on the pre-MDEV-15112 behavior.
Now we don't open all partitions if the partitions to use were explicitly
specified.
The ha_partition::m_opened_partitions bitmap was added to track the
partitions that were actually opened.
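Conceptually, the bitmap works as in the simplified sketch below
(Partition_set is a stand-in, not the actual ha_partition member):

  #include <vector>
  #include <cstddef>

  // Simplified illustration of tracking which partitions were actually
  // opened, so that later operations only touch those partitions.
  struct Partition_set
  {
    std::vector<bool> opened;                     // one flag per partition

    explicit Partition_set(size_t n_parts) : opened(n_parts, false) {}

    void mark_opened(size_t part_id)      { opened[part_id]= true; }
    bool is_opened(size_t part_id) const  { return opened[part_id]; }

    // Close only what was opened instead of iterating over all partitions.
    template <typename CloseFn>
    void close_opened(CloseFn close_one)
    {
      for (size_t i= 0; i < opened.size(); i++)
        if (opened[i])
        {
          close_one(i);
          opened[i]= false;
        }
    }
  };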
replicate_events_marked_for_skip=FILTER_ON_MASTER
When the events of a big transaction are binlogged at an offset of more
than 2GB from the beginning of the log, the semisync master's dump thread
lost such events.
The events were skipped by the dump thread, which determined their skipping
status erroneously.
The current fixes make sure the skipping status is computed correctly.
The test verifies them by simulating the 2GB offset.
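The 2GB figure strongly suggests a signed 32-bit quantity in the position
arithmetic; the snippet below only demonstrates that generic hazard under
this assumption and is not the dump thread's actual code:

  #include <cstdint>
  #include <cstdio>

  int main()
  {
    // An event ending just past 2GB from the start of the binlog file.
    const uint64_t event_end_pos= (1ULL << 31) + 4096;

    // Stored in a 32-bit signed type, the position wraps negative and any
    // skip/no-skip comparison based on it goes wrong; a 64-bit type keeps
    // the comparison correct.
    const int32_t  narrow= static_cast<int32_t>(event_end_pos);
    const uint64_t wide=   event_end_pos;

    std::printf("32-bit view: %d, 64-bit view: %llu\n",
                narrow, static_cast<unsigned long long>(wide));
    return 0;
  }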
The InnoDB RNG maintains global state, causing otherwise unnecessary bus
traffic. Even worse, this is cross-mutex traffic: threads spinning on
different mutexes contend on the shared RNG state.
A fixed delay of 4 was verified to give the best throughput by OLTP update
index and read-write benchmarks on Intel Broadwell (2/20/40) and
ARM (1/46/46).
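In effect the change replaces "spin for a random number of iterations drawn
from a shared RNG" with "spin for a fixed number of iterations". A sketch
under assumed names (this is not InnoDB's exact helper):

  static const unsigned FIXED_SPIN_DELAY= 4;   // the value the benchmarks favoured

  inline void cpu_relax()
  {
  #if defined(__x86_64__) || defined(__i386__)
    __builtin_ia32_pause();                    // x86 PAUSE instruction
  #else
    __asm__ __volatile__("" ::: "memory");     // fallback: compiler barrier only
  #endif
  }

  // Spin for a fixed delay; no thread touches any shared RNG state.
  inline void spin_backoff(unsigned delay= FIXED_SPIN_DELAY)
  {
    for (unsigned i= 0; i < delay * 50; i++)   // arbitrary iterations per unit
      cpu_relax();
  }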
MDEV-14511 tried to avoid some consistency problems related to InnoDB
persistent statistics. The persistent statistics are being written by
an InnoDB internal SQL interpreter that requires the InnoDB data dictionary
cache to be locked.
Before MDEV-14511, the statistics were written during DDL in separate
transactions, which could unnecessarily reduce performance (each commit
would require a redo log flush) and break atomicity, because the statistics
would be updated separately from the dictionary transaction.
However, because it is unacceptable to hold the InnoDB data dictionary
cache locked while suspending execution to wait for a transactional lock
(in the mysql.innodb_index_stats or mysql.innodb_table_stats tables)
to be released, any lock conflict was immediately reported as
"lock wait timeout".
To fix MDEV-14941, an attempt was made to reduce these lock conflicts by
acquiring transactional locks on the user tables in both the statistics
and the DDL operations, but this would still not entirely prevent lock
conflicts on the mysql.innodb_index_stats and mysql.innodb_table_stats
tables.
Fixing the remaining problems would require a change that is too intrusive
for a GA release series, such as MariaDB 10.2.
Therefore, we revert the change MDEV-14511. To silence the
MDEV-13201 assertion, we use the pre-existing flag trx_t::internal.
The problem was that max_size was accidentally set to 1 in some
cases.
Other things:
- Adjust max_rows if min_rows > max_rows.
- Removed the unused variable varchar_length
- Adjusted max_pack_length (safety fix)
If InnoDB is killed while ALTER TABLE...ALGORITHM=COPY is in progress,
after recovery there could be undo log records for some rows that were
inserted into an intermediate copy of the table. Due to these undo log
records, InnoDB would resurrect locks at recovery, and the intermediate
table would be locked while we are trying to drop it. This would cause
a call to row_rename_table_for_mysql(), either from
row_mysql_drop_garbage_tables() or from the rollback of a RENAME
operation that was part of the ALTER TABLE.
row_rename_table_for_mysql(): Do not attempt to parse FOREIGN KEY
constraints when renaming from #sql-something to #sql-something-else,
because it does not make any sense.
row_drop_table_for_mysql(): When deferring DROP TABLE due to locks,
do not rename the table if its name already starts with the #sql-
prefix, which is what row_mysql_drop_garbage_tables() uses.
Previously, the too strict prefix #sql-ib was used, and some
tables were renamed unnecessarily.
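The name tests described above boil down to simple prefix checks, roughly as
in this sketch (the helper names are made up; only the #sql prefix itself
comes from the text):

  #include <cstring>

  // True if the table name already carries the generic #sql prefix that
  // row_mysql_drop_garbage_tables() looks for, so no extra rename is needed.
  static bool has_sql_prefix(const char *table_name)
  {
    return std::strncmp(table_name, "#sql", 4) == 0;
  }

  // True for a rename between two #sql names; in that case parsing
  // FOREIGN KEY constraints can be skipped.
  static bool is_garbage_to_garbage_rename(const char *old_name,
                                           const char *new_name)
  {
    return has_sql_prefix(old_name) && has_sql_prefix(new_name);
  }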
innodb.truncate_inject: Replacement for innodb_zip.wl6501_error_1
Note: unlike MySQL, in some cases TRUNCATE does not return
an error in MariaDB. This should be fixed in the scope of
MDEV-13564 or similar.
MariaDB inherits the MySQL limitation that ALGORITHM=INPLACE cannot
create more than one FULLTEXT INDEX at a time. As part of the MDEV-11369
Instant ADD COLUMN refactoring, MariaDB 10.3.2 accidentally stopped
enforcing the restriction.
Actually, it is a bug in MySQL 5.6 and MariaDB 10.0 that an ALTER TABLE
statement with multiple ADD FULLTEXT INDEX but without an explicit
ALGORITHM=INPLACE would result in an error, rather than
executing the operation with ALGORITHM=COPY.
ha_innobase::check_if_supported_inplace_alter(): Enforce the restriction
on multiple FULLTEXT INDEX.
prepare_inplace_alter_table_dict(): Replace some code with debug
assertions. A "goto error_handled" at this point would result in
another error, because the reference count of ctx->new_table would be 0.
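In spirit, the re-instated check simply counts the FULLTEXT indexes being
added and refuses the in-place path when there is more than one; a sketch
with made-up types (the real decision is taken in
ha_innobase::check_if_supported_inplace_alter()):

  #include <cstddef>

  enum inplace_support { INPLACE_SUPPORTED, INPLACE_NOT_SUPPORTED };

  struct Key_to_add { bool is_fulltext; };

  inplace_support check_fulltext_restriction(const Key_to_add *keys,
                                             size_t n_keys)
  {
    size_t n_fulltext= 0;
    for (size_t i= 0; i < n_keys; i++)
      if (keys[i].is_fulltext && ++n_fulltext > 1)
        return INPLACE_NOT_SUPPORTED;   // fall back to ALGORITHM=COPY
    return INPLACE_SUPPORTED;
  }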
and the system_versioning_transaction_registry variable.
The user enables transaction registry by specifying BIGINT for
row_start/row_end columns.
Check mysql.transaction_registry structure on the first open,
not on startup. Avoid warnings unless transaction_registry
is actually used.
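The "first open" behaviour is the usual lazy, once-only validation pattern; a
sketch with hypothetical names:

  // Illustration only: defer the structure check of mysql.transaction_registry
  // from server startup to the first time it is actually needed.
  struct Transaction_registry_check
  {
    bool checked= false;
    bool valid= false;

    // validate stands in for the real structure check; it runs at most once.
    template <typename ValidateFn>
    bool ensure(ValidateFn validate)
    {
      if (!checked)
      {
        valid= validate();
        checked= true;
      }
      return valid;
    }
  };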
Problems
--------
The slave IO thread did not conduct an integrity check
for a group of row-based events. Specifically, it tolerated a missing
terminal block event, which must be flagged with STMT_END. Failure to
react to its loss can confuse the applier thread in various ways.
Another potential issue was that there was no check for an impossible
second-in-a-row Gtid_log_event while the slave IO thread is receiving
to-be-skipped events after a reconnect.
Fixes
-----
This patch makes the slave IO thread track the STMT_END status of
Rows events.
Whenever, on reading the next event, the IO thread finds out that the
preceding Rows event did not actually have the flag, an
explicit error is issued.
Replication can be resumed after the source of failure is eliminated,
see a provided test.
Note that currently the row-based group integrity check excludes
the compressed version 2 Rows events (which are not generated by a MariaDB
master).
Their uncompressed counterparts are manually tested.
The second issue is covered by producing an error in case the IO thread
receives a successive Gtid_log_event while it is skipping events after
a reconnect.
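A rough model of the two added checks (event kinds and the tracker are
simplified stand-ins, not the real replication classes):

  enum class Event_kind { GTID, ROWS, ROWS_STMT_END, OTHER };

  struct Rows_group_tracker
  {
    bool in_rows_group= false;            // Rows seen, STMT_END not yet seen
    bool skipping_after_reconnect= false; // skipping events after reconnect
    bool prev_was_gtid= false;

    // Returns false when an integrity error must be raised.
    bool on_event(Event_kind kind)
    {
      switch (kind)
      {
      case Event_kind::GTID:
        if (in_rows_group)
          return false;                   // previous Rows group lost its STMT_END
        if (skipping_after_reconnect && prev_was_gtid)
          return false;                   // impossible second Gtid_log_event in a row
        prev_was_gtid= true;
        return true;
      case Event_kind::ROWS:
        in_rows_group= true;
        prev_was_gtid= false;
        return true;
      case Event_kind::ROWS_STMT_END:
        in_rows_group= false;             // the group is properly terminated
        prev_was_gtid= false;
        return true;
      default:
        prev_was_gtid= false;
        return true;
      }
    }
  };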