- In ha_innobase::prepare_inplace_alter_table(), InnoDB should
check whether the table is empty. If the table is empty, the
server should avoid downgrading the MDL after the prepare phase.
This is more like an instant ALTER: it changes only the dictionary
and metadata (as sketched below).
- Changed a few debug test cases to make the DDL table non-empty
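A minimal SQL sketch of the empty-table case (table and index names
are illustrative):
  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  -- t1 has no rows, so the ALTER changes only the dictionary and
  -- metadata, and the MDL is not downgraded after the prepare phase:
  ALTER TABLE t1 ADD INDEX i1 (a);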
Dropping and adding the same key is considered a rename (see
ALTER_RENAME_INDEX in fill_alter_inplace_info()). But in this case
the order of keys may change, because mysql_prepare_alter_table()
does not yet know about the rename and treats it as two operations:
drop and add.
In that case we disable the inplace algorithm for engines such as
Memory, MyISAM and Aria via the ALTER_INDEX_ORDER flag. These engines
have no specialized check_if_supported_inplace_alter(), so the default
handler::check_if_supported_inplace_alter() sees an unknown flag and
returns HA_ALTER_INPLACE_NOT_SUPPORTED.
ha_innobase::check_if_supported_inplace_alter() works differently, and
inplace is not disabled (with the help of the modified
INNOBASE_INPLACE_IGNORE). The add_drop_v_cols branch was also tweaked,
as it wrongly failed with MSG_UNSUPPORTED_ALTER_ONLINE_ON_VIRTUAL_COLUMN
when it saw ALTER_INDEX_ORDER.
A no-op operation must remain a no-op regardless of the presence of
ALTER_INDEX_ORDER, so we tweak its condition as well.
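A hedged sketch of a drop+add pair that is treated as a rename
(names are illustrative):
  CREATE TABLE t1 (a INT, b INT, KEY k1 (a), KEY k2 (b));
  -- The dropped and added keys have identical definitions, so
  -- fill_alter_inplace_info() reports ALTER_RENAME_INDEX; but
  -- mysql_prepare_alter_table() still sees two operations and
  -- appends the "new" key, so the key order may change
  -- (e.g. to k2, k1_new):
  ALTER TABLE t1 DROP KEY k1, ADD KEY k1_new (a);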
mysql_prepare_create_table() does my_qsort(sort_keys) on the key
info. This sorting is non-deterministic: a table is created with one
order, and an inplace ALTER may overwrite the frm with another order.
Since inplace ALTER does nothing about the key info for the MyISAM/Aria
storage engines, this results in a discrepancy between the frm and the
storage engine key definitions.
The fix avoids sorting the keys when ALTER adds no new keys (which is
safe for MyISAM/Aria since they cannot add new keys inplace).
There is a case when the implicit primary key may change: removing
NOT NULL from a part of a unique key. In that case we set
modified_primary_key, which is then used to not skip the key sorting.
According to is_candidate_key() there are no other cases when the
primary key may change implicitly.
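A sketch of that case, assuming the usual is_candidate_key() rules
(the first unique key over NOT NULL, non-prefix columns acts as the
implicit primary key; names are illustrative):
  CREATE TABLE t1 (a INT NOT NULL, b INT NOT NULL,
                   UNIQUE KEY u1 (a), UNIQUE KEY u2 (b)) ENGINE=MyISAM;
  -- u1 is the implicit primary key. Making a nullable disqualifies
  -- u1, so u2 becomes the implicit primary key:
  ALTER TABLE t1 MODIFY a INT NULL;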
Notes:
mi_keydef_write()/mi_keyseg_write() are used only in mi_create(). They
should be used in ha_inplace_alter_table() as well.
Aria corruption detection is unimplemented: maria_check_definition()
is never used!
MySQL 8.0 has this bug as well as of 8.0.26.
This breaks main.long_unique in 10.4. The new result is correct and
should be applied, as it is just a different (the original) order of
keys.
Assertion `!pk->has_virtual()' failed in dict_index_build_internal_clust
while creating a PRIMARY KEY longer than can be stored in the page.
This happened because the key was wrongly deduced to be Long UNIQUE
supported; however, a PRIMARY KEY cannot be of that type. The main
reason is that only 8 bytes are used to store the hash, see
HA_HASH_FIELD_LENGTH. This is also why the HA_NOSAME flag is removed
(which in turn caused the assertion) in open_table_from_share:
  if (key_info->algorithm == HA_KEY_ALG_LONG_HASH)
  {
    key_part_end++;
    key_info->flags&= ~HA_NOSAME;
  }
To make such a key unique, an additional check is done by a
check_duplicate_long_entries() call from ha_write_row(), and a
similar one from ha_update_row().
A long hash PRIMARY KEY is already forbidden, which is checked by the
first test in main.long_unique; however, is_hash_field_needed was
wrongly deduced to be true in mysql_prepare_create_table() in this
particular case.
FIX:
* Improve the check for the Key::PRIMARY type
* Simplify the is_hash_field_needed deduction for neater reading
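A hedged sketch of the failing scenario (column sizes are
illustrative, chosen to exceed the maximum key length):
  CREATE TABLE t1 (a VARCHAR(3000), b VARCHAR(3000),
                   PRIMARY KEY (a, b)) ENGINE=InnoDB;
  -- Before the fix, is_hash_field_needed was wrongly deduced to be
  -- true for this PRIMARY KEY, leading to the assertion; now the
  -- statement is expected to fail with a key-too-long error.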
A table rebuild that would truncate the default value of a
DATE column is expected to issue data truncation warnings.
But these warnings were not issued if the ADD COLUMN was
executed with ALGORITHM=INSTANT. InnoDB now sets the warning
of the field while assigning the default value of the field
during check_if_supported_inplace_alter(); see the sketch below.
- InnoDB fails to check for the DB_COMPUTE_VALUE_FAILED error in
row_merge_read_clustered_index() and wrongly asserts that the
buffer should not run out of memory. ALTER TABLE should give a
warning when the column value is being truncated.
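A hedged reproduction sketch (values are illustrative):
  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1);
  -- The DATE default loses its time part, so a data truncation
  -- warning is expected even with ALGORITHM=INSTANT:
  ALTER TABLE t1 ADD COLUMN d DATE DEFAULT '2021-01-01 11:22:33',
    ALGORITHM=INSTANT;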
When doing ALTER TABLE ... RENAME, MariaDB doesn't rename the
original table to #sql-backup, as it does in other cases, but
instead drops the original table directly. However, this
optimization doesn't work for an InnoDB table with a foreign key
constraint.
During the copy algorithm, InnoDB fails to rename the foreign key
constraint (MDEV-25855). With this optimization, InnoDB also
fails to drop the original table because a FOREIGN KEY constraint
for it still exists in the INNODB_SYS_FOREIGN table.
This leads to an orphan .ibd file in the InnoDB dictionary.
So we disable this optimization when a foreign key is involved.
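An illustrative scenario (table names are made up):
  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (id INT,
    FOREIGN KEY (id) REFERENCES parent (id)) ENGINE=InnoDB;
  -- With the foreign key present, the direct-drop optimization is
  -- now skipped and the original table goes through #sql-backup:
  ALTER TABLE child RENAME TO child2, FORCE, ALGORITHM=COPY;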
Reviewer: monty@mariadb.org
After dfb41fddf6, tables that failed to drop are excluded from the
binlogged DROP TABLE statement. This means that the slave should not
expect any errors when executing DROP TABLE, and the binlog should
report that no error has happened, even if one did.
Do not write the error code into the binlogged DROP TABLE,
and remove all code that was needed to compute it.
InnoDB fails to fetch the index type when the InnoDB dictionary
doesn't match the frm. InnoDB should return "corrupted" if it
can't find the index in ha_innobase::index_type().
This is a backport of
commit fd9ca2a742 (MDEV-23295) and
commit 9a156e1a23 (MDEV-23345) to 10.3.
An instant ADD/DROP/reorder column could create a dummy table
object with the wrong ROW_FORMAT when innodb_default_row_format
was changed between CREATE TABLE and ALTER TABLE.
prepare_inplace_alter_table_dict(): If we had promised that
ALGORITHM=INPLACE is supported, we must preserve the ROW_FORMAT.
The rest of the changes are related to adding
Alter_inplace_info::inplace_supported to cache the return value of
handler::check_if_supported_inplace_alter().
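A sketch of the scenario (values are illustrative):
  SET GLOBAL innodb_default_row_format = COMPACT;
  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  SET GLOBAL innodb_default_row_format = DYNAMIC;
  -- The dummy table object built for the instant ALTER must keep
  -- t1's original ROW_FORMAT (COMPACT), not the new default:
  ALTER TABLE t1 ADD COLUMN b INT, ALGORITHM=INSTANT;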
Before the FRM is written, walk vcol expressions through
check_table_name_processor() and check whether field items match the
(db, table_name) qualifier.
We cannot do this in check_vcol_func_processor(), as there are no
table name qualifiers left in the expressions of a written and
loaded FRM.
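A hedged illustration (a mismatching qualifier is hypothetical):
  CREATE TABLE t1 (a INT, b INT AS (t1.a + 1));  -- qualifier matches
  -- An expression whose field items carry a different (db,
  -- table_name) qualifier is now rejected before the FRM is written.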
When a column is added to a non-empty table, existing rows get the
column's default value, or a "zero value" if the column has no
default.
But this check should be skipped when an existing column is altered.
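For example:
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);
  ALTER TABLE t1 ADD COLUMN b INT DEFAULT 5;  -- existing row: b = 5
  ALTER TABLE t1 ADD COLUMN c INT NOT NULL;   -- existing row: c = 0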
Fixed by adding an MDL_BACKUP_COMMIT lock before altering temporary
tables whose creation was logged to the binary log (in which case the
ALTER TABLE must also be logged).
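A scenario sketch, assuming statement-based binary logging:
  SET SESSION binlog_format = STATEMENT;
  CREATE TEMPORARY TABLE t1 (a INT);  -- creation is binlogged
  -- The ALTER must also be binlogged, so it now takes an
  -- MDL_BACKUP_COMMIT lock first:
  ALTER TABLE t1 ADD COLUMN b INT;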
Problem:
The problem happened because of a conceptual flaw in the server code:
a. The table level CHARSET/COLLATE clause affected all data types,
including numeric and temporal ones:
CREATE TABLE t1 (a INT) CHARACTER SET utf8 [COLLATE utf8_general_ci];
In the above example, the Column_definition_attributes
(and then the FRM record) for the column "a" erroneously inherited
"utf8" as its character set.
b. The "ALTER TABLE t1 CONVERT TO CHARACTER SET csname" statement
also erroneously affected Column_definition_attributes::charset
for numeric and temporal data types and wrote "csname" as their
character set into FRM files.
So now we have arbitrary non-relevant charset ID values for numeric
and temporal data types in all FRM files in the world :)
The code in the server and the other engines did not seem to be affected
by this flaw. Only InnoDB inplace ALTER was affected.
Solution:
Fixing the code so that only character string data types
(CHAR, VARCHAR, TEXT, ENUM, SET):
- inherit the table level CHARSET/COLLATE clause
- get the charset value according to "CONVERT TO CHARACTER SET csname".
Numeric and temporal data types now always get &my_charset_numeric
in Column_definition_attributes::charset and always write its ID into FRM files:
- no matter what the table level CHARSET/COLLATE clause is, and
- no matter what "CONVERT TO CHARACTER SET" says.
Details:
1. Adding helper classes to pass small parts of HA_CREATE_INFO
into Type_handler methods:
- Column_derived_attributes - to pass table level CHARSET/COLLATE,
so columns that do not have explicit CHARSET/COLLATE clauses
can derive them from the table level, e.g.
CREATE TABLE t1 (a VARCHAR(1), b CHAR(1)) CHARACTER SET utf8;
- Column_bulk_alter_attributes - to pass bulk attribute changes
generated by the ALTER related code. These bulk changes affect
multiple columns at the same time:
ALTER TABLE ... CONVERT TO CHARACTER SET csname;
Note, passing the whole HA_CREATE_INFO directly to Type_handler
would not be good: HA_CREATE_INFO is huge and would introduce
undesired dependencies in sql_type.h and sql_type.cc. The
Type_handler API should use the smallest possible data types!
2. Type_handler::Column_definition_prepare_stage1() is now responsible
for setting Column_definition::charset properly, according to the
data type, for example:
- For string data types, Column_definition_attributes::charset is set from
the table level CHARSET/COLLATE clause (if not specified explicitly in
the column definition).
- For numeric and temporal fields, Column_definition_attributes::charset is
set to &my_charset_numeric, no matter what the table level
CHARSET/COLLATE says.
- For GEOMETRY, Column_definition_attributes::charset is set to
&my_charset_bin, no matter what the table level CHARSET/COLLATE says.
Previously this code (setting `charset`) was outside of
Column_definition_prepare_stage1(), namely in
mysql_prepare_create_table(), and was erroneously called for
all data types.
3. Adding Type_handler::Column_definition_bulk_alter(), to handle
"ALTER TABLE .. CONVERT TO". Previously this code was inside
get_sql_field_charset() and was erroneously called for all data types.
4. Removing the Schema_specification_st parameter from
Type_handler::Column_definition_redefine_stage1().
Column_definition_attributes::charset is now fully properly initialized by
Column_definition_prepare_stage1(). So we don't need access to the
table level CHARSET/COLLATE clause in Column_definition_redefine_stage1()
any more.
5. Other changes:
- Removing global function get_sql_field_charset()
- Moving the part of the former get_sql_field_charset(), which was
responsible for inheriting the table level CHARSET/COLLATE clause,
to new methods:
-- Column_definition_attributes::explicit_or_derived_charset() and
-- Column_definition::prepare_charset_for_string().
This code is only needed for string data types.
Previously it was erroneously called for all data types.
- Moving another part, which was responsible for applying the
"CONVERT TO" clause, to
Type_handler_general_purpose_string::Column_definition_bulk_alter().
- Replacing the call to get_sql_field_charset() in sql_partition.cc
with sql_field->explicit_or_derived_charset() - it is perfectly
sufficient. The old code was redundant: get_sql_field_charset() was
called from sql_partition.cc only when there was no "CONVERT TO
CHARACTER SET" clause involved, so its purpose was only to inherit
the table level CHARSET/COLLATE clause.
- Moving the code handling the BINCMP_FLAG flag from
mysql_prepare_create_table() to
Column_definition::prepare_charset_for_string():
This code is responsible for resolving the BINARY comparison style
into the corresponding _bin collation, to do the following transparent
rewrite:
CREATE TABLE t1 (a VARCHAR(10) BINARY) CHARSET utf8; ->
CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET utf8 COLLATE utf8_bin);
This code is only needed for string data types.
Previously it was erroneously called for all data types.
6. Renaming Table_scope_and_contents_source_pod_st::table_charset
to alter_table_convert_to_charset, because the only purpose it's used
for is handling "ALTER .. CONVERT". The new name is much more
self-descriptive.
The problem was that the CONNECT engine tries to open the .frm file
during drop_table(), which the code did not take into account.
Fixed by adding the HA_REUSES_FILE_NAMES table flag to CONNECT.
Other things:
- Fixed a wrong test of HA_REUSES_FILE_NAMES in mysql_alter_table()
(the comment was correct, but not the code)
- Added a test in the CONNECT engine so that if the .frm it tries to
use in delete is not made for CONNECT, it generates an error instead
of crashing.
Partially revert 76063c2a13. Item::clone() is not an all-purpose
Item copying machine; it was specifically created for pushdown
of predicates into derived tables and views, and it does not
copy everything. In particular, it does not copy Item_func_regex.
Fix the bug differently by preserving the old constraint name,
but keep setting automatic_name=true to have it regenerated
for cases like ALTER TABLE ... ADD CONSTRAINT.
Ever since commit 007f68c37f,
ALTER TABLE no longer invokes handler::open() after
handler::commit_inplace_alter_table().
ha_innobase::reload_statistics(): Reload or recompute statistics
after ALTER TABLE.
innodb_notify_tabledef_changed(): A new function to invoke
ha_innobase::reload_statistics().
handlerton::notify_tabledef_changed(): Add the parameter handler*
so that ha_innobase::reload_statistics() can be invoked.
ha_partition::notify_tabledef_changed(),
partition_notify_tabledef_changed(): Pass through the call
to any partitions or subpartitions.
This is based on code that was supplied by Monty.
* be strict in CREATE TABLE, just like in ALTER TABLE, because
CREATE TABLE, just like ALTER TABLE, can be rolled back for any engine
* but don't auto-convert warnings into errors for engine warnings
(handler::create) - this matches ALTER TABLE behavior
* and not when creating a default record, because these errors are
handled specially (and replaced with ER_INVALID_DEFAULT)
* always issue a Note when a non-unique key is truncated, because it's
not a Warning that can be converted to an Error. Before this commit
it was a Note for blobs and a Warning for all other data types (see
the example below).
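An illustrative example of the last point (sizes chosen to exceed
the maximum key length):
  CREATE TABLE t1 (a VARCHAR(2000), KEY k1 (a))
    ENGINE=InnoDB CHARACTER SET utf8mb4;
  -- The non-unique key k1 is automatically truncated to the maximum
  -- key length; this now always produces a Note, not a Warning.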