THD::close_temporary_tables(): Revert the change.
ha_innobase::delete_table(): Move the work-around inside
a debug assertion, and check thd_kill_level() instead of thd_killed(),
because the latter would not hold for KILL_CONNECTION.
Background: The encryption key_id in use is stored in the encryption metadata,
i.e. the crypt_data that is stored on page 0 of the table's tablespace.
crypt_data is created only if encryption is explicitly requested or refused,
i.e. the ENCRYPTED=[YES|NO] table option is used; see
fil_create_new_single_table_tablespace() in fil0fil.cc.
Later, if encryption is enabled, all tables that use the default encryption
mode (i.e. no encryption table option is set) are encrypted with the default
encryption key_id, which is 1. See fil_crypt_start_encrypting_space() in
fil0crypt.cc.
ha_innobase::check_table_options(): If the default encryption mode is used
and encryption is disabled, a non-default ENCRYPTION_KEY_ID must not be used,
because it would not be stored anywhere.
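To illustrate the rule only, a minimal standalone sketch (the function
check_encryption_options() and its parameters are hypothetical stand-ins for
the logic in ha_innobase::check_table_options()):

    // Hypothetical sketch: when the table uses the default encryption mode
    // and encryption is disabled, a non-default ENCRYPTION_KEY_ID cannot be
    // accepted, because there is no crypt_data in which to store it.
    #include <cstdio>

    enum class encrypted_opt { DEFAULT, YES, NO };  // ENCRYPTED table option
    constexpr unsigned DEFAULT_KEY_ID = 1;

    bool check_encryption_options(encrypted_opt encrypted, unsigned key_id,
                                  bool srv_encrypt_tables)
    {
      if (encrypted == encrypted_opt::DEFAULT && !srv_encrypt_tables
          && key_id != DEFAULT_KEY_ID)
      {
        std::fprintf(stderr, "InnoDB: ENCRYPTION_KEY_ID=%u cannot be used: "
                     "encryption is disabled and the key_id would not be "
                     "stored anywhere\n", key_id);
        return false;
      }
      return true;
    }

    int main()
    {
      // Rejected: default encryption mode, encryption disabled, key_id != 1.
      std::printf("%d\n", check_encryption_options(encrypted_opt::DEFAULT, 4, false));
      // Accepted: with ENCRYPTED=YES the key_id is stored in crypt_data.
      std::printf("%d\n", check_encryption_options(encrypted_opt::YES, 4, true));
    }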
main.derived_cond_pushdown: Move all 10.3 tests to the end,
trim trailing white space, and add an "End of 10.3 tests" marker.
Add --sorted_result to tests where the ordering is not deterministic.
main.win_percentile: Add --sorted_result to tests where the
ordering is no longer deterministic.
The relevant InnoDB/XtraDB fixes up to 5.6.42 had already
been applied to MariaDB in commit 30c3d6db32.
Revert some changes that appeared in
the merge commit 87d852f102.
Fixed by adding the table flag HA_WANTS_PRIMARY_KEY, which is like
HA_REQUIRE_PRIMARY_KEY but tells the upper SQL layer that the storage engine
can internally handle tables without primary keys (for example for
sequences or through user variables).
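As a rough illustration only (the flag values and the example_handler class
below are made up; the real flags live in handler.h), a storage engine would
advertise the capability from table_flags():

    // Hypothetical sketch: an engine advertising HA_WANTS_PRIMARY_KEY.
    #include <cstdint>
    #include <cstdio>

    constexpr uint64_t HA_REQUIRE_PRIMARY_KEY = 1ULL << 0; // reject keyless tables
    constexpr uint64_t HA_WANTS_PRIMARY_KEY   = 1ULL << 1; // prefer a PK, but cope without

    struct example_handler {
      // The upper SQL layer queries the engine capabilities through this.
      uint64_t table_flags() const {
        // "Wants" a primary key (the SQL layer should add one when it can),
        // but can internally handle tables without one, e.g. for SEQUENCEs.
        return HA_WANTS_PRIMARY_KEY;
      }
    };

    int main() {
      example_handler h;
      std::printf("requires PK: %d, wants PK: %d\n",
                  int(bool(h.table_flags() & HA_REQUIRE_PRIMARY_KEY)),
                  int(bool(h.table_flags() & HA_WANTS_PRIMARY_KEY)));
    }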
Allow ADD COLUMN anywhere in a table, not only adding as the
last column.
Allow instant DROP COLUMN and instant changing the order of columns.
The added columns will always be added last in clustered index records.
In new records, instantly dropped columns will be stored as NULL or
empty when possible.
Information about dropped and reordered columns will be written in
a metadata BLOB (mblob), which is stored before the first 'user' field
in the hidden metadata record at the start of the clustered index.
The presence of mblob is indicated by setting the delete-mark flag in
the metadata record.
The metadata BLOB stores the number of clustered index fields,
followed by an array of column information for each field.
For dropped columns, we store the NOT NULL flag, the fixed length,
and for variable-length columns, whether the maximum length exceeded
255 bytes. For non-dropped columns, we store the column position.
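A rough sketch of that layout (field widths, flag bits, and helper names below
are invented for illustration; the real encoding is in
dict_table_t::serialise_columns()):

    // Hypothetical sketch of the metadata BLOB layout described above:
    // a field count, followed by per-field column information.
    #include <cstdint>
    #include <vector>

    struct field_meta {
      bool     dropped;     // column was instantly dropped
      bool     not_null;    // NOT NULL flag (dropped columns only)
      bool     long_varlen; // variable-length with maximum length > 255 bytes
      uint16_t fixed_len;   // fixed length; 0 for variable-length (dropped columns)
      uint16_t col_pos;     // column position (surviving columns only)
    };

    static void put16(std::vector<uint8_t>& out, uint16_t v) {
      out.push_back(uint8_t(v >> 8)); // big-endian, as InnoDB stores integers
      out.push_back(uint8_t(v));
    }

    // Serialize the clustered index field array into an mblob-like buffer.
    std::vector<uint8_t> serialise_fields(const std::vector<field_meta>& fields) {
      std::vector<uint8_t> blob;
      put16(blob, uint16_t(fields.size())); // number of clustered index fields
      for (const field_meta& f : fields) {
        if (f.dropped) {
          uint8_t flags = 0x80;             // marker: dropped column
          if (f.not_null)    flags |= 0x01;
          if (f.long_varlen) flags |= 0x02;
          blob.push_back(flags);
          put16(blob, f.fixed_len);
        } else {
          blob.push_back(0x00);             // marker: surviving column
          put16(blob, f.col_pos);           // position in the current table
        }
      }
      return blob;
    }

    int main() {
      // One surviving column at position 0, one dropped NOT NULL column of 4 bytes.
      std::vector<uint8_t> b = serialise_fields({{false, false, false, 0, 0},
                                                 {true,  true,  false, 4, 0}});
      return int(b.size()); // 2 + 3 + 3 bytes for this example
    }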
Unlike with MDEV-11369, when a table becomes empty, it cannot
be converted back to the canonical format. The reason for this is
that other threads may hold cached objects such as
row_prebuilt_t::ins_node that could refer to dropped or reordered
index fields.
For instant DROP COLUMN and ROW_FORMAT=COMPACT or ROW_FORMAT=DYNAMIC,
we must store the n_core_null_bytes in the root page, so that the
chain of node pointer records can be followed in order to reach the
leftmost leaf page where the metadata record is located.
If the mblob is present, we will zero-initialize the strings
"infimum" and "supremum" in the root page, and use the last byte of
"supremum" for storing the number of null bytes (which are allocated
but useless on node pointer pages). This is necessary for
btr_cur_instant_init_metadata() to be able to navigate to the mblob.
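A toy sketch of that trick (the offsets below are invented; the real ones are
the fixed infimum/supremum positions of a ROW_FORMAT=COMPACT page):

    // Hypothetical sketch: wipe the dummy "infimum"/"supremum" strings in the
    // root page and stash n_core_null_bytes in the last byte of "supremum".
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    constexpr size_t TOY_INFIMUM  = 99;  // illustrative offset of "infimum\0"
    constexpr size_t TOY_SUPREMUM = 112; // illustrative offset of "supremum"

    void set_mblob_marker(uint8_t* page, uint8_t n_core_null_bytes) {
      std::memset(page + TOY_INFIMUM, 0, 8);
      std::memset(page + TOY_SUPREMUM, 0, 8);
      page[TOY_SUPREMUM + 7] = n_core_null_bytes;
    }

    // Recover the value when navigating node pointer records down to the
    // leftmost leaf page where the metadata record lives.
    uint8_t get_n_core_null_bytes(const uint8_t* page) {
      return page[TOY_SUPREMUM + 7];
    }

    int main() {
      uint8_t page[256] = {};
      std::memcpy(page + TOY_INFIMUM,  "infimum\0", 8);
      std::memcpy(page + TOY_SUPREMUM, "supremum", 8);
      set_mblob_marker(page, 3);
      std::printf("n_core_null_bytes=%u\n", unsigned(get_n_core_null_bytes(page)));
    }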
If the PRIMARY KEY contains any variable-length column and some
nullable columns were instantly dropped, the dict_index_t::n_nullable
in the data dictionary could be smaller than it actually is in the
non-leaf pages. Because of this, the non-leaf pages could use more
bytes for the null flags than the data dictionary expects, and we
could be reading the lengths of the variable-length columns from the
wrong offset, and thus reading the child page number from the wrong place.
This is the result of two design mistakes that involve unnecessary
storage of data: First, it is nonsense to store any data fields for
the leftmost node pointer records, because the comparisons would be
resolved by the MIN_REC_FLAG alone. Second, there cannot be any null
fields in the clustered index node pointer fields, but we nevertheless
reserve space for all the null flags.
Limitations (future work):
MDEV-17459 Allow instant ALTER TABLE even if FULLTEXT INDEX exists
MDEV-17468 Avoid table rebuild on operations on generated columns
MDEV-17494 Refuse ALGORITHM=INSTANT when the row size is too large
btr_page_reorganize_low(): Preserve any metadata in the root page.
Call lock_move_reorganize_page() only after restoring the "infimum"
and "supremum" records, to avoid a memcmp() assertion failure.
dict_col_t::DROPPED: Magic value for dict_col_t::ind.
dict_col_t::clear_instant(): Renamed from dict_col_t::remove_instant().
Do not assert that the column was instantly added, because we
sometimes call this unconditionally for all columns.
Convert an instantly added column to a "core column". The old name
remove_instant() could be mistaken to refer to "instant DROP COLUMN".
dict_col_t::is_added(): Renamed from dict_col_t::is_instant().
dtype_t::metadata_blob_init(): Initialize the mblob data type.
dtuple_t::is_metadata(), dtuple_t::is_alter_metadata(),
upd_t::is_metadata(), upd_t::is_alter_metadata(): Check if info_bits
refer to a metadata record.
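In the abstract, the checks look like this (a sketch; treat the flag values as
illustrative even though they mirror the REC_INFO_* constants):

    // Hypothetical sketch: a metadata record is recognised from its info bits,
    // and the "alter" variant (the one carrying the metadata BLOB) additionally
    // has the delete-mark flag set, as described above.
    #include <cstdio>

    constexpr unsigned REC_INFO_MIN_REC_FLAG = 0x10; // first record on its level
    constexpr unsigned REC_INFO_DELETED_FLAG = 0x20; // delete-mark, repurposed here

    bool is_metadata(unsigned info_bits) {
      return info_bits & REC_INFO_MIN_REC_FLAG;
    }

    bool is_alter_metadata(unsigned info_bits) {
      // The mblob is present only when the delete-mark flag is also set.
      return is_metadata(info_bits) && (info_bits & REC_INFO_DELETED_FLAG);
    }

    int main() {
      unsigned add_only   = REC_INFO_MIN_REC_FLAG;                         // MDEV-11369 ADD COLUMN
      unsigned alter_blob = REC_INFO_MIN_REC_FLAG | REC_INFO_DELETED_FLAG; // generic ALTER TABLE
      std::printf("%d %d %d %d\n",
                  int(is_metadata(add_only)),   int(is_alter_metadata(add_only)),
                  int(is_metadata(alter_blob)), int(is_alter_metadata(alter_blob)));
    }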
dict_table_t::instant: Metadata about dropped or reordered columns.
dict_table_t::prepare_instant(): Prepare
ha_innobase_inplace_ctx::instant_table for instant ALTER TABLE.
innobase_instant_try() will pass this to dict_table_t::instant_column().
On rollback, dict_table_t::rollback_instant() will be called.
dict_table_t::instant_column(): Renamed from instant_add_column().
Add the parameter col_map so that columns can be reordered.
Copy and adjust v_cols[] as well.
dict_table_t::find(): Find an old column based on a new column number.
dict_table_t::serialise_columns(), dict_table_t::deserialise_columns():
Convert the mblob.
dict_index_t::instant_metadata(): Create the metadata record
for instant ALTER TABLE. Invoke dict_table_t::serialise_columns().
dict_index_t::reconstruct_fields(): Invoked by
dict_table_t::deserialise_columns().
dict_index_t::clear_instant_alter(): Move the fields for the
dropped columns to the end, and sort the surviving index fields
in ascending order of column position.
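A minimal sketch of that reordering on a plain field array (the index_field
struct is a stand-in, not the dict_field_t layout):

    // Hypothetical sketch: move the fields of dropped columns to the end and
    // sort the surviving fields in ascending order of column position.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct index_field {
      unsigned col_pos; // position of the column in the table
      bool     dropped; // the underlying column was instantly dropped
    };

    void clear_instant_alter(std::vector<index_field>& fields) {
      // stable_partition keeps the relative order within both groups.
      auto mid = std::stable_partition(fields.begin(), fields.end(),
                                       [](const index_field& f) { return !f.dropped; });
      std::sort(fields.begin(), mid,
                [](const index_field& a, const index_field& b) {
                  return a.col_pos < b.col_pos;
                });
    }

    int main() {
      std::vector<index_field> f{{2, false}, {5, true}, {0, false}, {1, true}};
      clear_instant_alter(f);
      for (const index_field& x : f)
        std::printf("col=%u dropped=%d\n", x.col_pos, int(x.dropped));
    }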
ha_innobase::check_if_supported_inplace_alter(): Do not allow
adding a FTS_DOC_ID column if a hidden FTS_DOC_ID column exists
due to FULLTEXT INDEX. (This always required ALGORITHM=COPY.)
instant_alter_column_possible(): Add a parameter for InnoDB table,
to check for additional conditions, such as the maximum number of
index fields.
ha_innobase_inplace_ctx::first_alter_pos: The first column whose position
is affected by instant ADD, DROP, or changing the order of columns.
innobase_build_col_map(): Skip added virtual columns.
prepare_inplace_add_virtual(): Correctly compute num_to_add_vcol.
Remove some unnecessary code. Note that the call to
innodb_base_col_setup() should be executed later.
commit_try_norebuild(): If ctx->is_instant(), let the virtual
columns be added or dropped by innobase_instant_try().
innobase_instant_try(): Fill in a zero default value for the
hidden column FTS_DOC_ID (to reduce the work needed in MDEV-17459).
If any columns were dropped or reordered (or added not last),
delete any SYS_COLUMNS records for the following columns, and
insert SYS_COLUMNS records for all subsequent stored columns as well
as for all virtual columns. If any virtual column is dropped, rewrite
all virtual column metadata. Use a shortcut only for adding
virtual columns. This is because innobase_drop_virtual_try()
assumes that the dropped virtual columns still exist in ctx->old_table.
innodb_update_cols(): Renamed from innodb_update_n_cols().
innobase_add_one_virtual(), innobase_insert_sys_virtual(): Change
the return type to bool, and invoke my_error() when detecting an error.
innodb_insert_sys_columns(): Insert a record into SYS_COLUMNS.
Refactored from innobase_add_one_virtual() and innobase_instant_add_col().
innobase_instant_add_col(): Replace the parameter dfield with type.
innobase_instant_drop_cols(): Drop matching columns from SYS_COLUMNS
and all columns from SYS_VIRTUAL.
innobase_add_virtual_try(), innobase_drop_virtual_try(): Let
the caller invoke innodb_update_cols().
innobase_rename_column_try(): Skip dropped columns.
commit_cache_norebuild(): Update table->fts->doc_col.
dict_mem_table_col_rename_low(): Skip dropped columns.
trx_undo_rec_get_partial_row(): Skip dropped columns.
trx_undo_update_rec_get_update(): Handle the metadata BLOB correctly.
trx_undo_page_report_modify(): Avoid out-of-bounds access to record fields.
Log metadata records consistently.
Apparently, the first fields of a clustered index may be updated
in an update_undo vector when the index is ID_IND of SYS_FOREIGN,
as part of renaming the table during ALTER TABLE. Normally, updates of
the PRIMARY KEY should be logged as a delete-mark followed by an insert.
row_undo_mod_parse_undo_rec(), row_purge_parse_undo_rec():
Use trx_undo_metadata.
row_undo_mod_clust_low(): On metadata rollback, roll back the root page too.
row_undo_mod_clust(): Relax an assertion. The delete-mark flag was
repurposed for ALTER TABLE metadata records.
row_rec_to_index_entry_impl(): Add the template parameter mblob
and the optional parameter info_bits for specifying the desired new
info bits. For the metadata tuple, allow conversion between the original
format (ADD COLUMN only) and the generic format (with hidden BLOB).
Add the optional parameter "pad" to determine whether the tuple should
be padded to the index fields (on ALTER TABLE it should), or whether
it should remain at its original size (on rollback).
row_build_index_entry_low(): Clean up the code, removing
redundant variables and conditions. For instantly dropped columns,
generate a dummy value that is NULL, the empty string, or a
fixed length of NUL bytes, depending on the type of the dropped column.
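The choice of dummy value, in isolation (a sketch with simplified types; the
real code builds dfield_t values):

    // Hypothetical sketch: the dummy value for an instantly dropped column is
    // NULL if the column was nullable, otherwise the empty string for
    // variable-length types, otherwise fixed_len NUL bytes.
    #include <cstdio>
    #include <optional>
    #include <string>

    struct dropped_col {
      bool   nullable;
      bool   variable_length;
      size_t fixed_len; // meaningful only for fixed-length columns
    };

    std::optional<std::string> dummy_value(const dropped_col& col) {
      if (col.nullable)
        return std::nullopt;                   // SQL NULL
      if (col.variable_length)
        return std::string();                  // empty string
      return std::string(col.fixed_len, '\0'); // fixed length of NUL bytes
    }

    int main() {
      dropped_col c{false, false, 4};
      std::printf("len=%zu\n", dummy_value(c)->size()); // 4 NUL bytes
    }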
row_upd_clust_rec_by_insert_inherit_func(): On the update of PRIMARY KEY
of a record that contained a dropped column whose value was stored
externally, we will be inserting a dummy NULL or empty string value
to the field of the dropped column. The externally stored column would
eventually be dropped when purge removes the delete-marked record for
the old PRIMARY KEY value.
btr_index_rec_validate(): Recognize the metadata record.
btr_discard_only_page_on_level(): Preserve the generic instant
ALTER TABLE metadata.
btr_set_instant(): Replaces page_set_instant(). This sets a clustered
index root page to the appropriate format, or upgrades from
the MDEV-11369 instant ADD COLUMN to generic ALTER TABLE format.
btr_cur_instant_init_low(): Read and validate the metadata BLOB page
before reconstructing the dictionary information based on it.
btr_cur_instant_init_metadata(): Do not read any lengths from the
metadata record header before reading the BLOB. At this point, we
would not actually know how many nullable fields the metadata record
contains.
btr_cur_instant_root_init(): Initialize n_core_null_bytes in one
of two possible ways.
btr_cur_trim(): Handle the mblob record.
row_metadata_to_tuple(): Convert a metadata record to a data tuple,
based on the new info_bits of the metadata record.
btr_cur_pessimistic_update(): Invoke row_metadata_to_tuple() if needed.
Invoke dtuple_convert_big_rec() for metadata records if the record is
too large, or if the mblob is not yet marked as externally stored.
btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete():
When the last user record is deleted, do not delete the
generic instant ALTER TABLE metadata record. Only delete
MDEV-11369 instant ADD COLUMN metadata records.
btr_cur_optimistic_insert(): Avoid unnecessary computation of rec_size.
btr_pcur_store_position(): Allow a logically empty page to contain
a metadata record for generic ALTER TABLE.
REC_INFO_DEFAULT_ROW_ADD: Renamed from REC_INFO_DEFAULT_ROW.
This is for the old instant ADD COLUMN (MDEV-11369) only.
REC_INFO_DEFAULT_ROW_ALTER: The more generic metadata record,
with additional information for dropped or reordered columns.
rec_info_bits_valid(): Remove. The only case when this would fail
is when the record is the generic ALTER TABLE metadata record.
rec_is_alter_metadata(): Check if a record is the metadata record
for instant ALTER TABLE (other than ADD COLUMN). NOTE: This function
must not be invoked on node pointer records, because the delete-mark
flag in those records may be set (it is garbage), and then a debug
assertion could fail because index->is_instant() does not necessarily
hold.
rec_is_add_metadata(): Check if a record is MDEV-11369 ADD COLUMN metadata
record (not more generic instant ALTER TABLE).
rec_get_converted_size_comp_prefix_low(): Assume that the metadata
field will be stored externally. In dtuple_convert_big_rec() during
the rec_get_converted_size() call, it would not be there yet.
rec_get_converted_size_comp(): Replace the parameters status, fields, n_fields with tuple.
rec_init_offsets_comp_ordinary(), rec_get_converted_size_comp_prefix_low(),
rec_convert_dtuple_to_rec_comp(): Add template<bool mblob = false>.
With mblob=true, process a record with a metadata BLOB.
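The template pattern, in the abstract (a sketch; the real functions take the
record, offsets and index as arguments):

    // Hypothetical sketch of the template<bool mblob = false> pattern: only
    // callers dealing with a metadata record that carries the BLOB instantiate
    // the mblob=true variant; everything else keeps the old code path.
    #include <cstdio>

    template<bool mblob = false>
    unsigned converted_size(unsigned n_fields, unsigned blob_field_size) {
      unsigned size = n_fields * 4; // stand-in for the per-field size computation
      if (mblob)
        size += blob_field_size;    // account for the metadata BLOB field
      return size;
    }

    int main() {
      std::printf("%u %u\n",
                  converted_size(3, 20),        // ordinary record
                  converted_size<true>(3, 20)); // alter-metadata record with mblob
    }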
rec_copy_prefix_to_buf(): Assert that no fields beyond the key and
system columns are being copied. Exclude the metadata BLOB field.
rec_convert_dtuple_to_metadata_comp(): Convert an alter metadata tuple
into a record.
row_upd_index_replace_metadata(): Apply an update vector to an
alter_metadata tuple.
row_log_allocate(): Replace dict_index_t::is_instant()
with a more appropriate condition that ignores dict_table_t::instant.
Only a table on which the MDEV-11369 ADD COLUMN was performed
can "lose its instantness" when it becomes empty. After
instant DROP COLUMN or reordering columns, we cannot simply
convert the table to the canonical format, because the data
dictionary cache and all possibly existing references to it
from other client connection threads would have to be adjusted.
row_quiesce_write_index_fields(): Do not crash when the table contains
an instantly dropped column.
Thanks to Thirunarayanan Balathandayuthapani for discussing the design
and implementing an initial prototype of this.
Thanks to Matthias Leich for testing.
The setting innodb_safe_truncate=ON reduces compatibility with older
versions of MariaDB and backup tools in two ways.
First, we will be writing TRX_UNDO_RENAME_TABLE records, which older
versions do not know about. These records could be misinterpreted if
a DDL transaction was recovered and would be rolled back.
Such rollback is only possible if the server was killed while
an incomplete DDL transaction was persisted. On transaction completion,
the insert_undo log pages would only be repurposed for new undo log
allocations, and their contents would not matter. So, older versions
will not have a problem with innodb_safe_truncate=ON if the server was
shut down cleanly.
Second, to prevent such recovery failure, innodb_safe_truncate=ON will
cause a modification of the redo log format identifier, which will
prevent older versions from starting up after a crash. MariaDB Server
versions older than 10.2.13 will refuse to start up altogether, even
after clean shutdown.
A server restart with innodb_safe_truncate=OFF will restore compatibility
with older server and backup versions.
- Backported MYSQL_SYSVAR_SIZE_T to 10.0.
- The parameter innodb_ft_result_cache_limit was only 32 bits wide even on
64-bit systems. Make it size_t, so that it will be 64 bits on 64-bit systems
(see the sketch after this list).
- Added a test case that shows how the innodb_ft_result_cache_limit variable
behaves on 32-bit and 64-bit systems.
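The effect of the too-narrow type, in isolation (a sketch; this is not the
plugin variable machinery itself):

    // Hypothetical sketch of the bug fixed here: a limit kept in a 32-bit
    // variable silently truncates values above 4 GiB, while a size_t on a
    // 64-bit system keeps the full value.
    #include <cstdint>
    #include <cstdio>

    int main() {
      const unsigned long long requested = 8ULL << 30; // 8 GiB cache limit

      uint32_t narrow_limit = uint32_t(requested);     // old 32-bit storage
      size_t   wide_limit   = size_t(requested);       // size_t storage

      std::printf("requested=%llu narrow=%u wide=%zu\n",
                  requested, narrow_limit, wide_limit);
      // On a 64-bit system: narrow=0 (truncated), wide=8589934592.
    }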
Rename the 10.2-specific configuration option innodb_unsafe_truncate
to innodb_safe_truncate, and invert its value.
The default (for now) is innodb_safe_truncate=OFF, to avoid
disrupting users with an undo and redo log format change within
a Generally Available (GA) release series.
While MariaDB Server 10.2 is not really guaranteed to be compatible
with Percona XtraBackup 2.4 (for example, the MySQL 5.7 undo log format
change that could be present in XtraBackup, but was reverted from
MariaDB in MDEV-12289), we do not want to disrupt users who have
deployed xtrabackup and MariaDB Server 10.2 in their environments.
With this change, MariaDB 10.2 will continue to use the backup-unsafe
TRUNCATE TABLE code, so that neither the undo log nor the redo log
formats will change in an incompatible way.
Undo tablespace truncation will keep using the redo log only. Recovery
or backup with old code will fail to shrink the undo tablespace files,
but the contents will be recovered just fine.
In the MariaDB Server 10.2 series only, we introduce the configuration
parameter innodb_unsafe_truncate and make it ON by default. To allow
MariaDB Backup (mariabackup) to work properly with TRUNCATE TABLE
operations, use loose_innodb_unsafe_truncate=OFF.
MariaDB Server 10.3.10 and later releases will always use the
backup-safe TRUNCATE TABLE, and this parameter will not be
added there.
recv_recovery_rollback_active(): Skip row_mysql_drop_garbage_tables()
unless innodb_unsafe_truncate=OFF. It is too unsafe to drop orphan
tables if RENAME operations are not transactional within InnoDB.
LOG_HEADER_FORMAT_10_3: Replaces LOG_HEADER_FORMAT_CURRENT.
log_init(), log_group_file_header_flush(),
srv_prepare_to_delete_redo_log_files(),
innobase_start_or_create_for_mysql(): Choose the redo log format
and subformat based on the value of innodb_unsafe_truncate.
wsrep_append_foreign_key() and wsrep_append_key() used to take a boolean
argument denoting whether the relevant certification key type is shared
(assuming it is exclusive if the argument is false). Change that
argument to the enum wsrep_key_type from wsrep_api.h, so that eventually
other types can also be passed (like WSREP_KEY_SEMI).
This is a non-functional change.
(cherry picked from commit 360bf36dbb9378b36ef57921c725a9505e19e0d9)
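The shape of the change (a sketch; only the argument type matters here, and
the enum values merely mirror wsrep_api.h):

    // Hypothetical sketch: the boolean "shared key" argument is replaced by
    // an enum, so that key types other than shared/exclusive (e.g.
    // WSREP_KEY_SEMI) can be passed later without another interface change.
    #include <cstdio>

    enum wsrep_key_type {  // illustrative copy of the enum in wsrep_api.h
      WSREP_KEY_SHARED = 0,
      WSREP_KEY_SEMI,
      WSREP_KEY_EXCLUSIVE
    };

    // Before: void append_key(const char* key, bool shared);
    // After:
    void append_key(const char* key, wsrep_key_type type) {
      std::printf("appending key %s with type %d\n", key, int(type));
    }

    int main() {
      append_key("db1/t1/pk", WSREP_KEY_SHARED);
      append_key("db1/t2/fk", WSREP_KEY_EXCLUSIVE);
    }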
On the rollback of changes to SYS_COLUMNS, MDEV-15562 will
break the assumption that the only instant changes to columns
are additions at the end of the column list.
The function dict_table_t::rollback_instant(unsigned n)
is inherently incompatible with instantly dropping or reordering
columns.
When a change to SYS_COLUMNS is rolled back, we must simply evict
the affected table definition, at the end of the rollback. We cannot
free the table object immediately, because the current transaction
that is being rolled back may be holding a lock on the table and
its metadata record.
dict_table_remove_from_cache_low(): Replaced
by dict_table_remove_from_cache().
dict_table_remove_from_cache(): Add a third parameter keep=false,
so that the table can be freed by the caller.
trx_lock_t::evicted_tables: List of tables on which trx_t::evict_table()
was invoked.
trx_t::evict_table(): Evict a table definition during rollback.
trx_commit_in_memory(): Empty the trx->lock.evicted_tables list
after the locks were released, by freeing the table objects.
row_undo_ins_remove_clust_rec(), row_undo_mod_clust_low():
Invoke trx_t::evict_table() on the affected table if a change to
SYS_COLUMNS is being rolled back.
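In the abstract, the eviction bookkeeping works as sketched below (toy types;
the real list hangs off trx_t::lock and the cache is dict_sys):

    // Hypothetical sketch: rolling back a SYS_COLUMNS change only queues the
    // table for eviction; the table objects are freed after the transaction's
    // locks have been released.
    #include <cstdio>
    #include <memory>
    #include <string>
    #include <vector>

    struct toy_table {
      std::string name;
      explicit toy_table(std::string n) : name(std::move(n)) {}
      ~toy_table() { std::printf("freed %s\n", name.c_str()); }
    };

    struct toy_trx {
      // counterpart of trx_lock_t::evicted_tables
      std::vector<std::unique_ptr<toy_table>> evicted_tables;

      // counterpart of trx_t::evict_table(): remove from the cache, keep alive
      void evict_table(std::unique_ptr<toy_table> table) {
        std::printf("evicted %s from the cache\n", table->name.c_str());
        evicted_tables.push_back(std::move(table));
      }

      void release_locks() { std::printf("locks released\n"); }

      // counterpart of trx_commit_in_memory(): free only after releasing locks
      void commit_in_memory() {
        release_locks();
        evicted_tables.clear(); // now it is safe to free the table objects
      }
    };

    int main() {
      toy_trx trx;
      trx.evict_table(std::make_unique<toy_table>("test/t1"));
      trx.commit_in_memory();
    }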
The error handling for ALTER TABLE…ALGORITHM=COPY as well as
CREATE TABLE used to commit the CREATE TABLE transaction and then
issue DROP TABLE in a separate transaction. This is unnecessarily
breaking atomicity during DDL operations. Let us revise it so
that the DROP TABLE will be executed within the same transaction,
which will finally be rolled back.
FIXME: Introduce an undo log record so that the data file would be
deleted on rollback and no DROP TABLE would be needed at all.
FIXME: Avoid unnecessary access to per-table tablespace during DROP TABLE.
If the .ibd file is going to be deleted anyway, we should not bother
to mark the pages free.
dict_create_add_foreigns_to_dictionary(): Do not commit the transaction.
We want simple rollback in case dict_load_foreigns() would fail.
create_table_info_t::create_table(), row_create_index_for_mysql(),
row_table_add_foreign_constraints(): Before invoking rollback, drop
the table. Rollback would invoke trx_t::evict_table(), and after that
dropping the table would be a no-op.
ha_innobase::create(): Before rollback, drop the table. If the SQL
layer invoked ha_innobase::delete_table() later, it would be a no-op
because the rollback would have invoked trx_t::evict_table().
table for purge thread
Problem:
=======
Purge tries to acquire an MDL lock for the whole table even though it is
opening only one of the partitions. But the table name length was wrongly set
to include the partition name as well.
Solution:
========
- The table name length should identify the table name only, not the partition name.
|| node->vcol_info.is_used()' failed
- The purge thread can acquire an MDL lock while initializing the MySQL template.
Set the vcol_info before acquiring the MDL lock.
- The purge thread does not need to use the virtual column information even
though it is requested. In that case, reset the virtual column information.
In MySQL 5.7, a follow-up to WL#6671 removed the unused
fields ha_innobase::lock and INNOBASE_SHARE::lock, but
MariaDB did not remove them, even though a counterpart of
WL#6671 itself was implemented as MDEV-7660 in
commit d665e79c5b.
INNOBASE_SHARE was removed in MDEV-16557. Thus, all that
needs to be removed is the unused member ha_innobase::lock
and related code.
Thanks to Monty (and Valgrind) for noticing that
ha_innobase::lock was uninitialized.
Stop supporting the additional *trunc.log files that were
introduced via MySQL 5.7 to MariaDB Server 10.2 and 10.3.
DB_TABLESPACE_TRUNCATED: Remove.
purge_sys.truncate: A new structure to track undo tablespace
file truncation.
srv_start(): Remove the call to buf_pool_invalidate(). It is
no longer necessary, given that we no longer access things in
ways that violate the ARIES protocol. This call was originally
added for innodb_file_format, and it may later have been necessary
for the proper function of the MySQL 5.7 TRUNCATE recovery, which
we are now removing.
trx_purge_cleanse_purge_queue(): Take the undo tablespace as a
parameter.
trx_purge_truncate_history(): Rewrite everything mostly in a
single function, replacing references to undo::Truncate.
recv_apply_hashed_log_recs(): If any redo log is to be applied,
and if the log_sys.log.subformat indicates that separately
logged truncate may have been used, refuse to proceed except if
innodb_force_recovery is set. We will still refuse crash-upgrade
if TRUNCATE TABLE was logged. Undo tablespace truncation would
only be logged in undo*trunc.log files, which we are no longer
checking for.
With the TRUNCATE by rename, create, drop (MDEV-13564),
old tables with invalid ROW_FORMAT attribute could not be
truncated. Introduce a sloppy mode for allowing the TRUNCATE.
create_table_info_t::prepare_create_table(): Add the parameter
strict=true.
ha_innobase::create(): Pass strict=false if trx!=NULL
(the create is part of TRUNCATE).
It turned out that ha_innobase::truncate() would prematurely
commit the transaction already before the completion of the
ha_innobase::create(). All of this must be atomic.
innodb.truncate_crash: Use the correct DEBUG_SYNC point, and
tolerate non-truncation of the table, because the redo log
for the TRUNCATE transaction commit might be flushed due to
some InnoDB background activity.
dict_build_tablespace_for_table(): Merge to the function
dict_build_table_def_step().
dict_build_table_def_step(): If a table is being created during
an already started data dictionary transaction (such as TRUNCATE),
persistently write the table_id to the undo log header before
creating any file. In this way, the recovery of TRUNCATE will be
able to delete the new file before rolling back the rename of
the original table.
dict_table_rename_in_cache(): Add the parameter replace_new_file,
used as part of rolling back a TRUNCATE operation.
fil_rename_tablespace_check(): Add the parameter replace_new.
If the parameter is set and a file identified by new_path exists,
remove a possible tablespace and also the file.
create_table_info_t::create_table_def(): Remove some debug assertions
that no longer hold. During TRUNCATE, the transaction will already
have been started (and performed a rename operation) before the
table is created. Also, remove a call to dict_build_tablespace_for_table().
create_table_info_t::create_table(): Add the parameter create_fk=true.
During TRUNCATE TABLE, do not add FOREIGN KEY constraints to the
InnoDB data dictionary, because they will also not be removed.
row_table_add_foreign_constraints(): If trx=NULL, do not modify
the InnoDB data dictionary, but only load the FOREIGN KEY constraints
from the data dictionary.
ha_innobase::create(): Lock the InnoDB data dictionary cache only
if no transaction was passed by the caller. Unlock it in any case.
innobase_rename_table(): Add the parameter commit = true.
If !commit, do not lock or unlock the data dictionary cache.
ha_innobase::truncate(): Lock the data dictionary before invoking
rename or create, and let ha_innobase::create() unlock it and
also commit or roll back the transaction.
trx_undo_mark_as_dict(): Renamed from trx_undo_mark_as_dict_operation()
and declared global instead of static.
row_undo_ins_parse_undo_rec(): If table_id is set, this must
be rolling back the rename operation in TRUNCATE TABLE, and
therefore replace_new_file=true.
This is a merge from 10.2, but the 10.2 version of this will not
be pushed into 10.2 yet, because the 10.2 version would include
backports of MDEV-14717 and MDEV-14585, which would introduce
a crash recovery regression: Tables could be lost on
table-rebuilding DDL operations, such as ALTER TABLE,
OPTIMIZE TABLE or this new backup-friendly TRUNCATE TABLE.
The test innodb.truncate_crash occasionally loses the table due to
the following bug:
MDEV-17158 log_write_up_to() sometimes fails
This is a backport of the following commits:
commit b4165985c9
commit 69e88de0fe
commit 40f4525f43
commit 656f66def2
Now that MDEV-14717 made RENAME TABLE crash-safe within InnoDB,
it should be safe to drop the #sql- tables within InnoDB during
crash recovery. These tables can be one of two things:
(1) #sql-ib related to deferred DROP TABLE (follow-up to MDEV-13407)
or to table-rebuilding ALTER TABLE...ALGORITHM=INPLACE
(since MDEV-14378, only related to the intermediate copy of a table),
(2) #sql- related to the intermediate copy of a table during
ALTER TABLE...ALGORITHM=COPY
We will not drop tables whose name starts with #sql2, because
the server can be killed during an ALGORITHM=COPY operation at
a point where the original table was renamed to #sql2 but the
finished intermediate copy was not yet renamed from #sql-
to the original table name.
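The naming rule, as a predicate (a sketch; is_droppable_garbage() is a made-up
name for the check performed during recovery):

    // Hypothetical sketch: names starting with #sql- (intermediate copies and
    // deferred drops) may be dropped during recovery, but #sql2 names must be
    // kept, because the original table may have been renamed to such a name.
    #include <cstdio>
    #include <cstring>

    bool is_droppable_garbage(const char* basename) {
      return std::strncmp(basename, "#sql", 4) == 0
          && basename[4] != '2';                    // keep #sql2-... tables
    }

    int main() {
      std::printf("%d\n", is_droppable_garbage("#sql-ib21-123")); // 1: drop
      std::printf("%d\n", is_droppable_garbage("#sql-7a2b_4"));   // 1: drop
      std::printf("%d\n", is_droppable_garbage("#sql2-7a2b-4"));  // 0: keep
      std::printf("%d\n", is_droppable_garbage("t1"));            // 0: keep
    }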
If an old version of MariaDB Server before 10.2.13 (MDEV-11415)
was killed while ALTER TABLE...ALGORITHM=COPY was in progress,
after recovery there could be undo log records for some records that were
inserted into an intermediate copy of the table. Due to these undo log
records, InnoDB would resurrect locks at recovery, and the intermediate
table would be locked while we are trying to drop it. This would cause
a call to row_rename_table_for_mysql(), either from
row_mysql_drop_garbage_tables() or from the rollback of a RENAME
operation that was part of the ALTER TABLE.
row_rename_table_for_mysql(): Do not attempt to parse FOREIGN KEY
constraints when renaming from #sql-something to #sql-something-else,
because it does not make any sense.
row_drop_table_for_mysql(): When deferring DROP TABLE due to locks,
do not rename the table if its name already starts with the #sql-
prefix, which is what row_mysql_drop_garbage_tables() uses.
Previously, the too strict prefix #sql-ib was used, and some
tables were renamed unnecessarily.
This is a backport of commit 07e9ff1fe1.
Allow DROP TABLE `#mysql50##sql-...._.` to drop tables that were
being rebuilt by ALGORITHM=INPLACE
NOTE: If the server is killed after the table-rebuilding ALGORITHM=INPLACE
commits inside InnoDB but before the .frm file has been replaced, then
the recovery will involve something else than DROP TABLE.
NOTE: If the server is killed in a true inplace ALTER TABLE commits
inside InnoDB but before the .frm file has been replaced, then we
are really out of luck. To properly handle that situation, we would
need a transactional mysql.ddl_fixup table that directs recovery to
rename or remove files.
prepare_inplace_alter_table_dict(): Use the altered_table->s->table_name
for generating the new_table_name.
table_name_t::part_suffix: The start of the partition name suffix.
table_name_t::dbend(): Return the end of the schema name.
table_name_t::dblen(): Return the length of the schema name, in bytes.
table_name_t::basename(): Return the name without the schema name.
table_name_t::part(): Return the partition name, or NULL if none.
row_drop_table_for_mysql(): Assert for #sql, not #sql-ib.
This is a backport of commit 0bc36758ba
and commit 9eb3fcc9fb.
InnoDB in MariaDB 10.2 appears to only write MLOG_FILE_RENAME2
redo log records during table-rebuilding ALGORITHM=INPLACE operations.
We must write the records for any .ibd file renames, so that the
operations are crash-safe.
If InnoDB is killed during a RENAME TABLE operation, it can happen that
the transaction for updating the data dictionary will be rolled back.
But, nothing will roll back the renaming of the .ibd file
(the MLOG_FILE_RENAME2 only guarantees roll-forward), or for that matter,
the renaming of the dict_table_t::name in the dict_sys cache. We introduce
the undo log record TRX_UNDO_RENAME_TABLE to fix this.
fil_space_for_table_exists_in_mem(): Remove the parameters
adjust_space, table_id and some code that was trying to work around
these deficiencies.
fil_name_write_rename(): Write a MLOG_FILE_RENAME2 record.
dict_table_rename_in_cache(): Invoke fil_name_write_rename().
trx_undo_rec_copy(): Set the first 2 bytes to the length of the
copied undo log record.
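As a sketch of that copy (buffer handling simplified; the 2-byte write mimics
InnoDB's big-endian mach_write_to_2()):

    // Hypothetical sketch: copy an undo log record into a new buffer and
    // overwrite its first two bytes with the length of the copied record.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    std::vector<uint8_t> undo_rec_copy(const uint8_t* rec, uint16_t len) {
      std::vector<uint8_t> copy(rec, rec + len);
      copy[0] = uint8_t(len >> 8); // first 2 bytes := record length (big-endian)
      copy[1] = uint8_t(len);
      return copy;
    }

    int main() {
      uint8_t rec[16] = {0xde, 0xad /* ... payload ... */};
      std::vector<uint8_t> c = undo_rec_copy(rec, sizeof rec);
      std::printf("stored length = %u\n", (unsigned(c[0]) << 8) | c[1]); // 16
    }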
trx_undo_page_report_rename(), trx_undo_report_rename():
Write a TRX_UNDO_RENAME_TABLE record with the old table name.
row_rename_table_for_mysql(): Invoke trx_undo_report_rename()
before modifying any data dictionary tables.
row_undo_ins_parse_undo_rec(): Roll back TRX_UNDO_RENAME_TABLE
by invoking dict_table_rename_in_cache(), which will take care
of both renaming the table and the file.
ha_innobase::truncate(): Remove a work-around.
Remove the innodb_undo suite, and move and adapt the tests.
Remove unnecessary restarts, and add innodb_page_size_small.inc
for combinations.
innodb.undo_truncate is the merge of innodb_undo.truncate
and innodb_undo.truncate_multi_client.
Add the global status variable innodb_undo_truncations.
Without this, the test innodb.undo_truncate would occasionally
report that truncation did not happen. The test was only waiting
for the history list length to reach 0, but the undo tablespace
truncation would only take place some time after that.
Undo tablespace truncation will only occasionally occur with
innodb_page_size=32k, and typically never occur (with this amount
of undo log operations) with innodb_page_size=64k. We disable
these combinations.
innodb.undo_truncate_recover was formerly called
innodb_undo.truncate_recover.