- During the copy algorithm, InnoDB should use a bulk insert operation
instead of row-by-row inserts. This lets the copy algorithm
build indexes effectively. The optimization is disabled
for temporary tables, versioned tables and tables that have
foreign key relations.
Introduced the variable innodb_alter_copy_bulk to allow
the bulk insert operation for the copy ALTER operation
inside InnoDB. It is enabled by default.
ha_innobase::extra(): the HA_EXTRA_END_ALTER_COPY mode tries to apply
the buffered bulk insert operation and updates the non-persistent
table stats.
row_merge_bulk_t::write_to_index(): Update stat_n_rows after
applying the bulk insert operation.
row_ins_clust_index_entry_low(): In the case of the copy algorithm,
switch to the bulk insert operation.
copy_data_error_ignore(): Handles the errors raised while copying
the data from the source to the target file.
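As an illustration only, here is a minimal sketch of the buffering idea
(BulkBuffer, Row and append_sorted are hypothetical stand-ins for
InnoDB's row_merge_bulk_t machinery, not actual server classes): rows
produced by the copy loop are accumulated first and applied to the
index in one sorted pass, instead of one top-down B-tree insert per row.

    #include <algorithm>
    #include <string>
    #include <vector>

    struct Row { std::string key; std::string payload; };

    class BulkBuffer {
    public:
      // Called from the copy loop instead of a row-by-row insert.
      void add(Row r) { rows_.push_back(std::move(r)); }

      // Called once at HA_EXTRA_END_ALTER_COPY time: sort the buffered
      // rows and append them to the index in order, which lets a B-tree
      // be built bottom-up rather than by repeated top-down inserts.
      template <class Index>
      void apply_to(Index &index) {
        std::sort(rows_.begin(), rows_.end(),
                  [](const Row &a, const Row &b) { return a.key < b.key; });
        for (const Row &r : rows_)
          index.append_sorted(r);
        rows_.clear();
      }

    private:
      std::vector<Row> rows_;
    };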
Caused by:
5d37cac7 MDEV-33348 ALTER TABLE lock waiting stages are indistinguishable.
In that commit, progress reporting was moved from
copy_data_between_tables to mysql_alter_table.
The temporary table case wasn't taken into consideration: there the
execution of mysql_alter_table ends earlier than usual, at the
'end_temporary' label, where thd_progress_end() was missing.
Fix:
Add missing thd_progress_end() call in mysql_alter_table.
Correct the second parameter of strxnmov to prevent potential buffer
overflows. The second parameter must be one less than the size of the
destination buffer, to leave room for the terminating '\0' and avoid
writing past the end of the buffer.
While the second parameter is usually correct, there are exceptions
that need fixing.
This commit addresses the issue within frm_file_exists() and other
affected places.
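For instance (build_frm_path and the ".frm" suffix are only an
illustration; strxnmov, NullS and FN_REFLEN are the real MariaDB
utilities, header locations per the server source tree):

    #include <my_global.h>  /* FN_REFLEN */
    #include <m_string.h>   /* strxnmov(), NullS */

    void build_frm_path(const char *dir, const char *name)
    {
      char path[FN_REFLEN];

      /* Wrong: strxnmov() may copy up to FN_REFLEN characters and then
         append a terminating '\0', writing one byte past the buffer:
           strxnmov(path, sizeof(path), dir, "/", name, ".frm", NullS); */

      /* Correct: reserve one byte for the terminating '\0'. */
      strxnmov(path, sizeof(path) - 1, dir, "/", name, ".frm", NullS);
    }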
On disable_indexes(HA_KEY_SWITCH_NONUNIQ_SAVE) the engine does
not know that a long unique index is logically unique, because on the
engine level it is not, so the engine disables it.
Change the disable_indexes/enable_indexes API. Instead of the enum
mode, send a key_map of indexes that should be enabled. This way the
server decides what is unique, not the engine.
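A hedged sketch of the reworked interface (signatures simplified; the
server's real key_map is its own bitmap type, approximated here with
std::bitset):

    #include <bitset>

    using key_map = std::bitset<64>;   // one bit per index of the table

    struct handler_api_sketch {
      // Old API: the engine received an enum such as
      // HA_KEY_SWITCH_NONUNIQ_SAVE and decided by itself which indexes
      // are non-unique, wrongly treating long unique indexes as such.
      //
      // New API: the server computes the exact set of indexes that
      // should stay enabled and passes it down; the engine just obeys.
      virtual int disable_indexes(key_map keep_enabled) = 0;
      virtual int enable_indexes(key_map to_enable) = 0;
      virtual ~handler_api_sketch() = default;
    };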
Assertion "from->s->online_alter_binlog == NULL" fails in
copy_data_between_tables, signalizing that a table share is being reused
(in another alter) after a lock upgrade to EXCLUSIVE fails.
Commit 3059f27 relaxed the lock to be upgraded to MDL_SHARED_NO_WRITE, leaving
it to happen later by a common path wait_while_table_is_used() call.
However the error handling there is not enough for online alter case, where we
require (for now) the table to be flushed, in order to clean up the memory
properly.
* Add another lock upgrade (to MDL_EXCLUSIVE) after the second
replication stage in copy_data_between_tables.
The error from this upgrade will be handled by the error-handling
branch further down in the function.
MDEV-33450 Assertion fails in main.alter_table_online_debug
A `TABLE_SHARE` that is being online-altered has a shared
`s->online_alter_binlog` member that all concurrent DMLs are writing
to. The online alter thread deletes it under MDL_EXCLUSIVE. If
upgrading the lock to MDL_EXCLUSIVE fails, the table is marked as
`flushed` and is freed automatically when its usage drops to zero.
In commit 3059f27 the lock upgrade was relaxed to MDL_SHARED_NO_WRITE to allow
concurrent SELECT threads during the final `online_alter_read_from_binlog()`
pass. An attempt to upgrade the lock to MDL_EXCLUSIVE was still happening, but
much later — after the code that marked the table `flushed`.
That is, if the upgrade failed, the table was left with a stale
`s->online_alter_binlog`, triggering an assertion in a future online
alter.
To fix this, upgrade the lock to MDL_EXCLUSIVE earlier, after the final
`online_alter_read_from_binlog()`.
The discovered memory leak was introduced by commit
762bf7a03b
(MDEV-22602 Disable UPDATE CASCADE for SQL constraints).
The reason memory leaked on running the test main.constraints
is that a statement arena was used to allocate the memory
for storing a constraint name. A constraint name is temporary
by design, so the runtime arena should be used for its
allocation.
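A self-contained illustration of the lifetime rule (std::pmr arenas
stand in for the server's Query_arena/MEM_ROOT; none of these names are
MariaDB's):

    #include <cstring>
    #include <memory_resource>

    int main()
    {
      // Lives as long as the prepared statement / stored program.
      std::pmr::monotonic_buffer_resource statement_arena;

      for (int execution = 0; execution < 3; execution++)
      {
        // Freed after every execution.
        std::pmr::monotonic_buffer_resource runtime_arena;

        // The bug: allocating from statement_arena instead, e.g.
        //   char *leaked = static_cast<char*>(statement_arena.allocate(16));
        // accumulates one allocation per execution for a value that is
        // only needed during this execution.
        char *name = static_cast<char*>(runtime_arena.allocate(16));
        std::strcpy(name, "CONSTRAINT_1");
      } // runtime_arena memory released here
    }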
Some fixes related to commit f838b2d799 and to
Rows_log_event::do_apply_event() and Update_rows_log_event::do_exec_row()
for system-versioned tables were provided by Nikita Malyavin.
They were required by the test versioning.rpl,trx_id,row.
In case there is a view that is queried from a stored routine or
a prepared statement, and its underlying temporary table is dropped
between executions of the SP/PS, an assertion is hit
at SELECT_LEX::fix_prepare_information. The fired assertion
was added by commit 85f2e4f8e8
(MDEV-32466: Potential memory leak on executing of create view statement).
Firing of this assertion means memory is leaked on execution of the SP/PS.
Moreover, if the added assertion is commented out, different result sets
can be produced by the statement SELECT * FROM the hidden table.
Both hitting the assertion and the differing result sets have the same
root cause: use of the temporary table's metadata after the table
itself has been dropped. To fix the issue, reload the cache of stored
routines: the cache is reset at the end of
execution of the function dispatch_command(). The next time any stored
routine is called it will be loaded from the table mysql.proc. This
happens inside the method Sp_handler::sp_cache_routine, where a stored
routine is loaded whenever it is missing from the cache. Loading is now
performed unconditionally, while previously it was controlled by the
parameter lookup_only. For that reason the signature of the method
Sroutine_hash_entry::sp_cache_routine was changed to remove the unused
parameter lookup_only.
Clearing of the SP caches affects the test main.lock_sync, since it
forces opening and locking the table mysql.proc, but the test assumes
that each statement locks its tables once during its execution. To keep
this invariant, the debug sync points named
"before_lock_tables_takes_lock" and "after_lock_tables_takes_lock" are
not activated when handling the table mysql.proc.
Problem:
REPAIR TABLE executed for a pre-MDEV-29959 table (with the old UUID format)
updated the server version in the FRM file without rewriting the data,
so it created a new FRM for old UUIDs. After that MariaDB could not
read UUIDs correctly.
Fix:
- Adding a new virtual method in class Type_handler:
  virtual const Type_handler *type_handler_for_implicit_upgrade() const;
* For the up-to-date data types it returns "this".
* For the data types which need to be implicitly upgraded
during REPAIR TABLE or ALTER TABLE, it returns a pointer
to a new replacement data type handler.
Old VARCHAR and old UUID type handlers override this method.
See more comments below.
- Changing the semantics of the method
Type_handler::Column_definition_implicit_upgrade(Column_definition *c)
to the opposite, so now:
* c->type_handler() references the old data type (to upgrade from)
* "this" references the new data type (to upgrade to).
Before this change Column_definition_implicit_upgrade() was supposed
to be called with the old data type handler (to upgrade from).
Renaming the method to Column_definition_implicit_upgrade_to_this(),
to avoid automatic merges in this method.
Reflecting this change in Create_field::upgrade_data_types().
- Replacing the hard-coded data type tests inside handler::check_old_types()
with a call to the new virtual method
Type_handler::type_handler_for_implicit_upgrade()
- Overriding Type_handler_fbt::type_handler_for_implicit_upgrade()
to call a new method FbtImpl::type_handler_for_implicit_upgrade().
Reasoning:
Type_handler_fbt is a template, so it has access only to "this".
In the case of the UUID data types, the type handler for the old UUID
knows nothing about the type handler of the new UUID inside
sql_type_fixedbin.h.
So let's have Type_handler_fbt delegate type_handler_for_implicit_upgrade()
to its Type_collection, which knows both the new UUID and the old UUID.
- Adding Type_collection_uuid::type_handler_for_implicit_upgrade().
It returns a pointer to the new UUID type handler.
- Overriding Type_handler_var_string::type_handler_for_implicit_upgrade()
to return a pointer to type_handler_varchar (true VARCHAR).
- Cleanup: these two methods:
handler::check_old_types()
handler::ha_check_for_upgrade()
were always called consecutively.
So moving the call to check_old_types() inside ha_check_for_upgrade(),
and making check_old_types() private.
- Cleanup: removing the "bool varchar" parameter from fill_alter_inplace_info(),
as it is not used any more.
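A compact sketch of the pattern described above (classes renamed with a
_sketch suffix to stress that this is not the server's exact code):

    struct Type_handler_sketch {
      // Up-to-date types return themselves; legacy types return the
      // handler they should be implicitly upgraded to during
      // REPAIR TABLE / ALTER TABLE.
      virtual const Type_handler_sketch *
      type_handler_for_implicit_upgrade() const { return this; }
      virtual ~Type_handler_sketch() = default;
    };

    struct Type_handler_varchar_sketch : Type_handler_sketch {};
    inline const Type_handler_varchar_sketch varchar_handler{};

    // Old pre-5.0 VARCHAR: upgrade to true VARCHAR.
    struct Type_handler_var_string_sketch : Type_handler_sketch {
      const Type_handler_sketch *
      type_handler_for_implicit_upgrade() const override
      { return &varchar_handler; }
    };

    // Caller side (cf. handler::check_old_types()): no hard-coded
    // per-type tests, just one virtual call per column.
    inline const Type_handler_sketch *
    upgraded(const Type_handler_sketch *h)
    { return h->type_handler_for_implicit_upgrade(); }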
Several points of synchronization during ALTER TABLE COPY looked
identical in the progress report query. Besides, if it is the late lock
upgrade stage, the data would be:
STAGE 0
MAX_STAGE 0
PROGRESS 0.000
which looks irrelevant.
This patch moves thd_progress_deinit call after the last lock upgrade.
Also, for online alter, if there is nothing to replicate, the
progress and max_progress values would be 0, which discards the result
data on the side of sql_show; see processlist_callback in sql_show.cc.
So now the minimal max_progress will be 1. To avoid 0% progress in the
report, the minimal progress value is also set to 1, so we will see
100% if there's nothing to replicate.
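In sketch form (clamped_progress is a hypothetical helper, not a
server function):

    #include <cstdint>

    struct Progress { uint64_t progress, max_progress; };

    // Nothing to replicate: report 1/1 so sql_show does not discard
    // the row and the user sees 100% rather than 0%.
    Progress clamped_progress(uint64_t done, uint64_t total)
    {
      if (total == 0)
        return {1, 1};
      return {done, total};  // normal case: values pass through
    }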
This will allow SELECTs to pass until the rename/commit stage.
The lock is anyway upgraded to MDL_EXCLUSIVE later by the
wait_while_table_is_used call in mysql_alter_table.
The memory leak occurs on error when backup_reset_alter_copy_lock
fails with a timeout. This leads to the ALTER rollback, but
flush_unused() is not called.
Move the table flushing in error handling to a single place and mind
more possible failures this time.
The problem is that Galera starts TOI (total order isolation), i.e.
it sends the query to all nodes, and only later is it discovered that
the used engine or another feature is not supported by Galera.
Because TOI is executed in parallel on all nodes, appliers
could execute the given TOI, ignore the error and
start inconsistency voting, causing a node to leave the
cluster, or we might have a crash as reported.
For example, the SEQUENCE engine does not support the GEOMETRY data
type, causing either inconsistency between nodes (because
some errors are ignored on the applier) or a crash.
Fixed by adding a new function wsrep_check_support to check
whether Galera can support the provided CREATE TABLE/SEQUENCE before
TOI is started; if not, a clear error message is provided to
the user.
Currently unsupported cases:
* CREATE TABLE ... AS SELECT when streaming replication is used
* CREATE TABLE ... WITH SYSTEM VERSIONING AS SELECT
* CREATE TABLE ... ENGINE=SEQUENCE
* CREATE SEQUENCE ... ENGINE!=InnoDB
* ALTER TABLE t ... ENGINE!=InnoDB where table t is SEQUENCE
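A rough sketch of the shape of the check (simplified: the real
wsrep_check_support() inspects the parsed LEX and THD state, not a
plain struct like this):

    #include <string>

    enum class Cmd { CREATE_TABLE, CREATE_SEQUENCE, ALTER_TABLE };

    struct Stmt {
      Cmd cmd;
      std::string engine;              // target storage engine
      bool as_select = false;          // ... AS SELECT
      bool system_versioning = false;
      bool streaming_replication = false;
      bool target_is_sequence = false; // ALTER target is a SEQUENCE
    };

    // Returns true (with a reason) when Galera cannot replicate the
    // statement, so the error is raised before TOI starts and never
    // reaches the appliers.
    bool wsrep_unsupported(const Stmt &s, std::string *why)
    {
      if (s.cmd == Cmd::CREATE_TABLE && s.as_select &&
          s.streaming_replication)
        { *why = "CTAS under streaming replication"; return true; }
      if (s.cmd == Cmd::CREATE_TABLE && s.as_select &&
          s.system_versioning)
        { *why = "system-versioned CTAS"; return true; }
      if (s.cmd == Cmd::CREATE_TABLE && s.engine == "SEQUENCE")
        { *why = "CREATE TABLE ... ENGINE=SEQUENCE"; return true; }
      if (s.cmd == Cmd::CREATE_SEQUENCE && s.engine != "InnoDB")
        { *why = "CREATE SEQUENCE with engine other than InnoDB"; return true; }
      if (s.cmd == Cmd::ALTER_TABLE && s.target_is_sequence &&
          s.engine != "InnoDB")
        { *why = "ALTER of a SEQUENCE to a non-InnoDB engine"; return true; }
      return false;
    }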
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
mysql_prepare_alter_table(): ALTER TABLE should check whether the
foreign key exists when it is expected to exist, and
report the error at an early stage.
dict_foreign_parse_drop_constraints(): Don't throw an error if the
foreign key constraint doesn't exist when IF EXISTS is given
in the statement.
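In outline (drop_foreign_key is a toy stand-in for the real
dict_foreign_parse_drop_constraints() logic):

    enum class Drop_result { DROPPED, ERROR, SKIPPED };

    Drop_result drop_foreign_key(bool constraint_exists, bool if_exists)
    {
      if (constraint_exists)
        return Drop_result::DROPPED;  // normal path
      if (if_exists)
        return Drop_result::SKIPPED;  // IF EXISTS: no error raised
      return Drop_result::ERROR;      // reported early, before the copy
    }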
- Add selected tables as shared keys for CTAS certification
- Set proper security context on the replayer thread
- Disallow CTAS command retry
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>