Example of what causes the problem:
T1: ANALYZE TABLE starts to collect statistics
T2: ALTER TABLE starts by deleting statistics for all changed fields,
then creates a temp table and copies data to it.
T1: ANALYZE ends and writes to the statistics tables.
T2: ALTER TABLE renames temp table in place of the old table.
Now the statistics from ANALYZE match the old, deleted table.
Fixed by waiting to delete the old statistics until ALTER TABLE is
the only user of the old table, and by ensuring that the rename of columns
can handle swapping of column names.
rename_columns_in_stat_table() (former rename_column_in_stat_tables())
now takes a list of columns to rename. It uses the following algorithm
to update column_stats to be able to handle circular renames
(see the sketch after this list):
- While there are columns to be renamed, and either this is the first loop
  or the last rename loop changed something:
  - Loop over all columns to be renamed:
    - Change the column name in column_stat.
    - If this fails because of a duplicate key:
      - If this is the first change attempt for this column:
        - Change the column name to a temporary column name.
        - If there was a conflicting row, replace it with the current row.
      - else:
        - Remove the entry from the column list.
- Loop over all remaining columns in the list:
  - Remove the conflicting row.
  - Change the column from the temporary name to the final name in column_stat.
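A minimal standalone sketch of this temporary-name scheme (illustrative only,
not the server code: a std::map keyed by column name stands in for the
column_stat rows, and the temporary-name prefix is made up):

  // Toy model: column_stat rows keyed by (unique) column name.
  #include <cstdio>
  #include <map>
  #include <string>
  #include <vector>

  struct Rename { std::string from, to; bool retried= false; };

  int main()
  {
    std::map<std::string, std::string> column_stat=
      {{"a", "stats of a"}, {"b", "stats of b"}};
    std::vector<Rename> renames= {{"a", "b"}, {"b", "a"}};  // circular rename
    std::vector<Rename> leftovers;                          // parked under temp names

    bool changed= true;
    while (!renames.empty() && changed)        // first loop, or last loop changed something
    {
      changed= false;
      for (auto it= renames.begin(); it != renames.end(); )
      {
        auto row= column_stat.find(it->from);
        if (row == column_stat.end())          // nothing to rename
        {
          it= renames.erase(it);
          continue;
        }
        if (!column_stat.count(it->to))        // no duplicate key: plain rename
        {
          column_stat[it->to]= row->second;
          column_stat.erase(row);
          it= renames.erase(it);
          changed= true;
        }
        else if (!it->retried)                 // first failure: move to a temporary name
        {
          std::string tmp= "#tmp#" + it->from;
          column_stat[tmp]= row->second;
          column_stat.erase(row);
          it->from= tmp;
          it->retried= true;
          changed= true;
          ++it;
        }
        else                                   // still conflicting: defer to the final pass
        {
          leftovers.push_back(*it);
          it= renames.erase(it);
        }
      }
    }
    for (const Rename &r : leftovers)          // final pass over remaining columns
    {
      column_stat.erase(r.to);                 // remove the conflicting row
      column_stat[r.to]= column_stat[r.from];  // temporary name -> final name
      column_stat.erase(r.from);
    }
    for (const auto &row : column_stat)
      std::printf("%s: %s\n", row.first.c_str(), row.second.c_str());
    return 0;
  }

With the circular rename a -> b, b -> a, the first pass parks "a" under a
temporary name, the second pass completes both renames, and the rows end up
swapped without ever violating the unique key.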
Other things:
- Don't flush tables for every operation. Only flush when all updates
are done.
- Rename of columns was not handled in case of ALGORITHM=copy (old bug).
- Fixed that we do not collect statistics for hidden hash columns
used by UNIQUE constraint on long values.
- Fixed that we do not collect statistics for blob columns referred to by
generated virtual columns. This was achieved by storing the fields for
which we want to have statistics in table->has_value_set instead of
in table->read_set.
- Rename of indexes was not handled for persistent statistics.
- This is now handled similarly to the rename of columns. Renamed indexes
are now stored in 'rename_stat_indexes' and handled in
Alter_info::delete_statistics() together with dropped indexes.
- ALTER TABLE .. ADD INDEX may instead of creating a new index rename
an existing generated foreign key index. This was not reflected in
the index_stats table because this was handled in
mysql_prepare_create_table instead of in the mysql_alter() code.
Fixed by adding a call in mysql_prepare_create_table() to drop the
changed index.
I also had to replace the code that 'marked the index' to be ignored
with code that does not destroy the original index name.
Reviewer: Sergei Petrunia <sergey@mariadb.com>
At the moment we cannot support
wsrep_forced_binlog_format=[MIXED|STATEMENT] during CREATE TABLE AS SELECT.
The statement will use ROW instead and give a warning.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Problem:
Under the terms of MDEV-27490, we'll update the Unicode version used
to compare identifiers to 14.0.0. Unlike in the old Unicode version,
in the new version a string can grow during lower-casing. We cannot
perform check_db_name() in place any more.
Change summary:
- Allocate memory to store lower-cased identifiers in memory root
- Removing check_db_name(), which performed both in-place lower-casing and
validation at the same time, and splitting it into two separate stages:
* creating a memory-root lower-cased copy of an identifier
(using new MEM_ROOT functions and Query_arena wrapper methods)
* performing validation on a constant string
(using Lex_ident_fs methods)
Implementation details:
- Adding a mysys helper function to allocate lower-cased strings on MEM_ROOT:
lex_string_casedn_root()
and Query_arena wrappers for it:
make_ident_casedn()
make_ident_opt_casedn()
- Adding a Query_arena method to perform both MEM_ROOT lower-casing and
database name validation at the same time:
to_ident_db_internal_with_error()
This method is very close to the old (pre-11.3) check_db_name(),
but performs lower-casing into newly allocated MEM_ROOT memory
(instead of lower-casing the original string in place).
- Adding a Table_ident method which additionally handles derived table names:
to_ident_db_internal_with_error()
- Removing the old check_db_name()
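A minimal self-contained sketch of the two-stage idea described above (not the
server API: the arena helper and the validation check below are simplified
stand-ins for MEM_ROOT allocation and the Lex_ident_fs checks):

  #include <cctype>
  #include <cstddef>
  #include <cstring>
  #include <deque>
  #include <string>

  // Stand-in for a MEM_ROOT: owns all allocated copies until it is destroyed.
  struct ToyArena
  {
    std::deque<std::string> storage;
    // Stage 1: allocate a lower-cased copy (never touches the source string).
    const char *strmake_casedn(const char *str, size_t len)
    {
      std::string copy(str, len);
      for (char &c : copy)
        c= (char) std::tolower((unsigned char) c); // real Unicode casefolding may grow the string
      storage.push_back(std::move(copy));
      return storage.back().c_str();
    }
  };

  // Stage 2: validation on a constant string, with no in-place modification.
  static bool toy_check_db_name(const char *name, size_t len)
  {
    return len > 0 && len <= 64 && name[len - 1] != ' ';
  }

  int main()
  {
    ToyArena arena;                              // lives as long as the "statement"
    const char *given= "TestDB";
    const char *db= arena.strmake_casedn(given, std::strlen(given));
    return toy_check_db_name(db, std::strlen(db)) ? 0 : 1;
  }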
Replacing my_casedn_str() called on local char[] buffer variables
with CharBuffer::copy_casedn() calls.
This is a sub-task for MDEV-31531 Remove my_casedn_str()
Details:
- Adding a helper template class IdentBuffer (a CharBuffer descendant),
which assumes utf8 data. Like CharBuffer, it's initialized to an empty
string in the constructor, but can be populated with lower-cased data
later.
- Adding a helper template class IdentBufferCasedn, which initializes
to lower case right in the constructor.
- Removing char[] buffers, replacing them with IdentBuffer and IdentBufferCasedn.
- Changing the data type of "db" and "table" parameters from
"const char*" to LEX_CSTRING in the following functions:
find_field_in_table_ref()
insert_fields()
set_thd_db()
mysql_grant()
to reuse IdentBuffer more easily.
- This commit is different from 10.6 commit c438284863db2ccba8a04437c941a5c8a2d9225b.
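A toy sketch of the buffer-class idea above (hypothetical code, not the actual
CharBuffer/IdentBuffer classes, using byte-wise tolower() instead of
charset-aware lower-casing):

  #include <cctype>
  #include <cstddef>
  #include <cstring>

  template <size_t SIZE>
  class ToyIdentBuffer
  {
    char m_buf[SIZE + 1];
    size_t m_len;
  public:
    ToyIdentBuffer() : m_len(0) { m_buf[0]= '\0'; }   // starts as an empty string
    ToyIdentBuffer &copy_casedn(const char *str, size_t len)
    {
      m_len= len < SIZE ? len : SIZE;                 // truncate instead of overflowing
      for (size_t i= 0; i < m_len; i++)
        m_buf[i]= (char) std::tolower((unsigned char) str[i]);
      m_buf[m_len]= '\0';
      return *this;
    }
    const char *ptr() const { return m_buf; }
    size_t length() const { return m_len; }
  };

  // Variant that lower-cases right in the constructor.
  template <size_t SIZE>
  class ToyIdentBufferCasedn : public ToyIdentBuffer<SIZE>
  {
  public:
    explicit ToyIdentBufferCasedn(const char *str)
    { this->copy_casedn(str, std::strlen(str)); }
  };

  // Usage, replacing the "local char[] + casedn in place" pattern:
  //   ToyIdentBufferCasedn<64> db("TestDB");   // db.ptr() == "testdb"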
Due to commit 045757af4c301757ba449269351cc27b1691a7d6 (MDEV-24621),
InnoDB buffers and pre-sorts the records for each index, and builds
the indexes one page at a time.
Multiple large INSERT IGNORE statements abort the server during the bulk
insert operation. The problem is that an InnoDB merge record exceeds
the page size. To avoid this scenario, InnoDB should catch the
too-big record while buffering the insert operation itself.
row_merge_buf_encode(): returns the length of the encoded index record
row_merge_buf_write(): catches DB_TOO_BIG_RECORD earlier and
returns an error
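A conceptual sketch of that flow (the names below are made up, standing in for
row_merge_buf_encode()/row_merge_buf_write()): the encode step reports the
record length, and the buffering step rejects a record that cannot fit on a
page instead of failing later:

  #include <cstddef>
  #include <vector>

  enum toy_err { TOY_SUCCESS, TOY_TOO_BIG_RECORD };

  // Stand-in for the encode step: writes the encoded record, returns its length.
  static size_t toy_encode(const std::vector<unsigned char> &rec,
                           std::vector<unsigned char> &out)
  {
    out= rec;                        // pretend encoding is a plain copy
    return out.size();
  }

  // Stand-in for the buffering step: catch the too-big record here,
  // while buffering, instead of aborting later when the page is built.
  static toy_err toy_buffer_record(const std::vector<unsigned char> &rec,
                                   std::vector<unsigned char> &page,
                                   size_t page_limit)
  {
    std::vector<unsigned char> encoded;
    if (toy_encode(rec, encoded) > page_limit)
      return TOY_TOO_BIG_RECORD;     // reported to the caller as an error
    page.insert(page.end(), encoded.begin(), encoded.end());
    return TOY_SUCCESS;
  }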
- The HA_EXTRA_IGNORE_INSERT call is made for every inserted row,
and on partitioned tables for every row * every partition.
This leads to slowness during the LOAD DATA operation.
- Under a bulk operation, the error handling for a multiple-insert statement
will end up emptying the table. This behaviour was introduced by
commit 8ea923f55b7666a359ac2c54f6c10e8609d16846 (MDEV-24818).
This makes the HA_EXTRA_IGNORE_INSERT call redundant. We can
use the same behavior for the insert..ignore statement as well.
- Removed the extra HA_EXTRA_IGNORE_INSERT call as the solution
to improve the performance of the LOAD DATA command.
... upon replicating online ALTER
When an online event is applied and slave_exec_mode is idempotent,
Write_rows_log_event::do_before_row_operations had reset
thd->lex->sql_command to SQLCOM_REPLACE.
This led to the statement being detected as row-type during binlogging,
and being logged as not standalone.
So the corresponding Gtid_log_event, when applied on the replica, did not
exit early and created a new PSI transaction. Hence the difference with
non-online ALTER.
Adding an auto_increment column online leads to undefined behavior.
Basically, any DEFAULT that depends on the row order in the table, or on
a function that is non-deterministic (in the scope of the ALTER TABLE
statement), is UB.
For example, NOW() is considered generally non-deterministic
(Item_func_now_utc is marked with VCOL_NON_DETERMINISTIC), but it's fixed
within the scope of a single statement.
The same holds for any other function that depends only on the session/status
vars apart from its arguments.
Only two UB cases are known:
* adding a new AUTO_INCREMENT column. Modifying the existing column may be
fine under certain circumstances, see MDEV-31058.
* adding a new column with DEFAULT(nextval(...)). Modifying the existing
column is possible, since its value will always be present in the online
event, except for the NULL -> NOT NULL modification.
Add a new virtual function that will increase the inserted rows count
for the insert log event and decrease it for the delete event.
Reuses Rows_log_event::m_row_count on the replication side, which was only
set on the logging side.
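A hypothetical illustration of that hook (not the server's Rows_log_event
hierarchy): a virtual function returns the per-event delta, so applying a
write event increases the counter and applying a delete event decreases it:

  struct ToyRowsEvent
  {
    virtual ~ToyRowsEvent() {}
    // Delta applied to the inserted-rows counter when this event is applied.
    virtual long inserted_rows_delta() const { return 0; }
  };

  struct ToyWriteRowsEvent : ToyRowsEvent
  {
    long inserted_rows_delta() const override { return +1; }  // insert adds a row
  };

  struct ToyDeleteRowsEvent : ToyRowsEvent
  {
    long inserted_rows_delta() const override { return -1; }  // delete removes a row
  };

  // The applier keeps one counter (the role m_row_count plays on the
  // replication side) and adds each event's delta to it.
  static long toy_apply(const ToyRowsEvent &ev, long row_count)
  {
    return row_count + ev.inserted_rows_delta();
  }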
The deadlock was caused by too strong MDL acquired by the start ALTER.
Replica's ALTER TABLE replication consists of two phases:
1. Start ALTER (SA) -- the event is emitted at the very beginning,
allowing the replica to start the ALTER in parallel
2. Commit ALTER (CA) -- ensures that the master finished successfully.
CA is normally received by the wait_for_master call.
If parallel DML was run, the following sequence will take place:
|- SA
|- DML
|- CA
If CA is handled after the MDL upgrade, it will deadlock with the DML.
While the MDL taken by the start ALTER is still shared, wait for its 2nd part
to allow concurrent DMLs to grab the lock.
The fix uses wait_for_master reentrancy -- there is no need to avoid a second
call at the end of mysql_alter_table.
Since SA and CA are marked with FL_DDL, the DML issued in-between cannot be
rescheduled before or after them. However, SA "commits" (by the call of
write_bin_log_start_alter and, subsequently,
thd->wakeup_subsequent_commits) before the copy stage begins, unlocking
the DMLs to run on this table. That is, these DMLs will be executed
concurrently with the copy stage, making online ALTER effective on replicas
as well.
Co-authored-by: Nikita Malyavin (nikitamalyavin@gmail.com)
1. Make online disk writes unlimited, same as filesort does.
2. Add proper error handling -- in a 32-bit build the IO_CACHE capacity limit
is 4GB, so it is quite possible to overfill it.
3. Event_log::write_cache was complicated by event reparsing and, as was
proven by QA, contained some mistakes. The rewrite introduces a simpler and
much faster version that does not do reparsing and therefore copies a whole
buffer at once. This also disables checksums and crypto.
4. Handle read_log_event errors correctly: the error returned is -1 (EOF
signal for ALTER TABLE), and my_error is not called. Call my_error and
always return 1. There's no test for this, since it shouldn't happen,
see the next bullet.
5. An event could be written partially in case of an error, if it's bigger
than the IO_CACHE buffer. Restore the position to where it was before the
error was emitted (see the sketch below).
As a result, online alter is untied from several binlog variables, which was
a second aim of this patch.
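A standalone sketch of the save-and-restore idea from bullet 5 (assumed types:
a std::vector with a capacity cap stands in for the IO_CACHE):

  #include <cstddef>
  #include <string>
  #include <vector>

  // Try to append one whole event; on overflow, roll back to the position the
  // cache had before this event so no partially written event is left behind.
  static bool toy_write_event(std::vector<char> &cache, const std::string &event,
                              size_t capacity)
  {
    size_t saved_pos= cache.size();          // position before this event
    for (char c : event)
    {
      if (cache.size() >= capacity)          // simulated mid-event write error
      {
        cache.resize(saved_pos);             // restore the pre-error position
        return false;                        // let the caller report the error
      }
      cache.push_back(c);
    }
    return true;
  }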
Group all the checks in online_alter_check_supported().
There are now two groups of checks:
1. The technical availability of online, which is checked before open_tables
and affects table_list->lock_type. It's supposed to be safe to make it
TL_READ even if the COPY algorithm falls back to not-online, since MDL is
SHARED_UPGRADEABLE anyway.
2. The 'online' availability for the COPY algorithm. It can be checked as late
as just before the copy_data_between_tables call. The lock_type influence is
disclosed above, so the only other place it affects is
Alter_info::supports_lock, where the `online` flag is only used to decide
whether to report the error at the inplace preparation stage. We'd want to
do that as a last resort, which is COPY preparation, if no algorithm is
chosen by the user. So it's even better now.
Some changes are required to the autoinc support detection, as the check
now happens after mysql_prepare_alter_table:
* alter_info->drop_list is empty
* instead, dropped columns are in tmp_set
* alter_info->create_list now has every field that's in the new table.
* the column definition's change.str will be non-null when the column
remains in the new table (instead of when it was changed, as before).
It also has the `field` field set.
* IF EXISTS doesn't have to be dealt with anymore
This implies that the changes are now checked in more detail: a field's
definition shouldn't be changed, rather than a field shouldn't be mentioned
in the CHANGE list, as it was before. This is reflected by the line 193 test.
When a column is changed to autoinc, ALTER TABLE may update zero/NULL values
if the NO_AUTO_VALUE_ON_ZERO mode is not enabled.
Forbid this for LOCK=NONE for the unreliable cases.
The cases are described in online_alter_check_autoinc.
Assertion `!table->versioned(VERS_TRX_ID)' failed in
Write_rows_log_event::binlog_row_logging_function during ONLINE ALTER.
trxid-versioned tables can't be replicated.
ONLINE ALTER will also be forbidden for these tables.
1. ER_KEY_NOT_FOUND
A general replication problem, already fixed earlier.
Test added.
2. ER_LOCK_WAIT_TIMEOUT
This is a long-unique-specific problem.
Sometimes, a lookup_handler is created for to->file. To properly free it,
ha_reset should be called. This is usually done by calling
close_thread_table, but ALTER TABLE does it differently. Hence, a single
ha_reset call is added to mysql_alter_table.
Also, event_mem_root is removed. Normally, no per-event data should be
allocated on thd->mem_root, as that would mean a leak. Otherwise,
lookup_handler is lazily allocated, but its lifetime matches the statement,
not the event.
because online means we'll apply events from the binlog, and
ignore means that bad rows will be skipped. So a bad Write_row_log_event
will be skipped and a following Update_row_log_event will fail to
apply.
if ALTER TABLE ... LOCK=xxx is executed under LOCK TABLES,
ignore the LOCK clause, because ALTER should not downgrade an
already taken EXCLUSIVE table lock to SHARED or NONE.
This commit preserves the existing behavior (LOCK was de facto ignored),
but makes it explicit.
ALTER ONLINE TABLE acquires the table with TL_READ. MyISAM normally acquires
TL_WRITE for DML, which makes it hang until the table is freed.
We deadlock once ALTER upgrades its MDL lock.
Solution:
Unlock the table earlier. We don't need to hold TL_READ once we have finished
copying. Relay log replication requires no data locks on the `from` table.
in the catch-up phase of the online alter we apply row events; they're
unpacked into `from->record[0]` and then converted
to `to->record[0]`.
This needs all fields of `from` to be in the `write_set`.
Although practically `Field::unpack()` does not assert the `write_set`,
and `Field::reset()` - used when a field value is not present in the
after-image - also doesn't assert the `write_set` for many types,
`Field_new_decimal::reset()` does.
If online alter fails, TABLE_SHARE can be freed while concurrent
transactions still have row events in their online_alter_cache_data.
On commit they'll try to flush them, writing to the TABLE_SHARE's
Cache_flip_event_log, which is already freed.
This causes a crash in the main.alter_table_online_debug test.
Don't simply set tdc->flushed; use flush_unused(1), which removes opened
but unused TABLE instances (that would otherwise prevent the TABLE_SHARE from
being closed by keeping ref_count > 0).