mirror of https://github.com/MariaDB/server.git synced 2025-04-26 11:49:09 +03:00

5 Commits

Author SHA1 Message Date
Thirunarayanan Balathandayuthapani
4b80c11f52 MDEV-15250 UPSERT during ALTER TABLE results in 'Duplicate entry' error for alter
- InnoDB DDL results in a 'Duplicate entry' error if concurrent DML throws
a duplicate key error. The following scenario explains the problem:

connection con1:
  ALTER TABLE t1 FORCE;

connection con2:
  INSERT INTO t1(pk, uk) VALUES (2, 2), (3, 2);

In connection con2, InnoDB throws the 'DUPLICATE KEY' error because of
the unique index. The ALTER operation then throws the same error when
applying the concurrent DML log.

- Inserting the duplicate key into the unique index logs the insert
operation for online ALTER TABLE. When the insertion fails, the
transaction rolls back, which in turn logs a delete operation for
online ALTER TABLE. While applying the insert log entries, the ALTER
operation encounters the 'DUPLICATE KEY' error.

- To avoid this fake duplicate scenario, InnoDB should not write any
online ALTER TABLE log before the DML transaction commits (see the
sketch below).

- The user thread that performs DML can apply the online log if InnoDB
ran out of online log space and the index is marked as completed.
An online log error is set if the apply phase encounters any error;
in that case the logs of all other indexes are cleared and the newly
added indexes are marked as corrupted.

- Removed the old online-ALTER code that was part of the DML operations.
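
A minimal sketch of the "log only at commit" idea, using hypothetical
names rather than the actual InnoDB types: the DML transaction buffers
its row changes and hands them to the online ALTER log only once it has
committed, so a rolled-back insert never leaves a spurious entry behind.

  // Hypothetical illustration only, not InnoDB code.
  #include <string>
  #include <vector>

  struct OnlineAlterLog {
    std::vector<std::string> records;            // replayed by ALTER TABLE
    void append(const std::string &rec) { records.push_back(rec); }
  };

  struct DmlTransaction {
    std::vector<std::string> buffered;           // not yet visible to ALTER

    void log_row_change(const std::string &rec) { buffered.push_back(rec); }

    // A rolled-back INSERT leaves no trace, so ALTER can never replay the
    // "insert" of a row whose duplicate-key error forced the rollback.
    void rollback() { buffered.clear(); }

    void commit(OnlineAlterLog &log) {
      for (const auto &rec : buffered) log.append(rec);
      buffered.clear();
    }
  };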

commit_inplace_alter_table(): Applies the online log for the last
batch of the secondary index log and frees the log for the completed
index.

trx_t::apply_online_log: Set to true while writing the undo
log if the modified table has active DDL

trx_t::apply_log(): Apply the DML changes to online DDL tables

dict_table_t::is_active_ddl(): Returns true if the table
has an active DDL

dict_index_t::online_log_make_dummy(): Assign a dummy value to the
clustered index online log to indicate that the secondary indexes
are being rebuilt.

dict_index_t::online_log_is_dummy(): Check whether the online
log has the dummy value

ha_innobase_inplace_ctx::log_failure(): Handle the apply log
failure for online DDL transaction

row_log_mark_other_online_index_abort(): Clear out the logs of all
other online indexes after encountering an error during
row_log_apply()

row_log_get_error(): Get the error that happened during row_log_apply()

row_log_online_op(): Applies the online log if the index is completed
and the log has run out of memory. Returns false if applying the log fails

UndorecApplier: Introduced a class that holds the undo log record and
the latched undo buffer page, parses the undo log record, and keeps
track of the undo record type, info bits and update vector

UndorecApplier::get_old_rec(): Get the correct version of the
clustered index record that was modified by the current undo
log record

UndorecApplier::clear_undo_rec(): Clear the undo log related
information after applying the undo log record

UndorecApplier::log_update(): Handle the update and delete undo
logs and apply them to the online indexes

UndorecApplier::log_insert(): Handle the insert undo log
and apply it to the online indexes

UndorecApplier::is_same(): Check whether the given roll pointer
is generated by the current undo log record information

trx_t::rollback_low(): Set apply_online_log for the transaction if the
partially rolled back transaction has any active DDL

prepare_inplace_alter_table_dict(): After allocating the online log,
InnoDB creates the fulltext common tables. A fulltext index cannot be
built online, so the dead code for online log removal was removed

Thanks to Marko Mäkelä for providing the initial prototype and
Matthias Leich for testing the issue patiently.
2022-04-25 18:52:19 +05:30
Marko Mäkelä
45ed9dd957 MDEV-23855: Remove fil_system.LRU and reduce fil_system.mutex contention
Also fixes MDEV-23929: innodb_flush_neighbors is not being ignored
for system tablespace on SSD

When the maximum configured number of files is exceeded, InnoDB will
close data files. We used to maintain a fil_system.LRU list and
a counter fil_node_t::n_pending to achieve this, at the huge cost
of multiple fil_system.mutex operations per I/O operation.

fil_node_open_file_low(): Implement a FIFO replacement policy:
The last opened file will be moved to the end of fil_system.space_list,
and files will be closed from the start of the list. However, we will
not move tablespaces in fil_system.space_list while
i_s_tablespaces_encryption_fill_table() is executing
(producing output for INFORMATION_SCHEMA.INNODB_TABLESPACES_ENCRYPTION)
because it may cause information of some tablespaces to go missing.
We also avoid this in mariabackup --backup because datafiles_iter_next()
assumes that the ordering is not changed.
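
The FIFO policy can be pictured with a small stand-in (simplified,
hypothetical names; the real code works on fil_system.space_list under
fil_system.mutex): the most recently opened file goes to the end of the
list, and when the limit is exceeded, files are closed from the front.

  // Simplified illustration of the FIFO replacement policy, not fil0fil.cc.
  #include <cstddef>
  #include <list>
  #include <string>

  struct DataFile {
    std::string name;
    bool is_open = false;
  };

  struct FileCache {
    std::list<DataFile*> open_order;  // front = opened longest ago
    std::size_t max_open;

    explicit FileCache(std::size_t limit) : max_open(limit) {}

    void open_file(DataFile *f) {
      f->is_open = true;
      open_order.push_back(f);                  // last opened moves to the end
      while (open_order.size() > max_open) {
        DataFile *victim = open_order.front();  // close from the start
        open_order.pop_front();
        victim->is_open = false;
      }
    }
  };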

IORequest: Fold more parameters to IORequest::type.
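
Folding several request attributes into one type word might look roughly
like this (the flag names here are made up for illustration; the actual
IORequest encoding is not reproduced):

  // Illustrative only: pack what used to be separate boolean/enum
  // parameters into one bit field carried by the request object.
  #include <cstdint>

  struct Request {
    static constexpr uint16_t READ       = 1 << 0;
    static constexpr uint16_t WRITE      = 1 << 1;
    static constexpr uint16_t ASYNC      = 1 << 2;
    static constexpr uint16_t IGNORE_ERR = 1 << 3;  // e.g. background reads

    uint16_t type;

    bool is_write() const { return type & WRITE; }
    bool is_async() const { return type & ASYNC; }
  };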

fil_space_t::io(): Replaces fil_io().

fil_space_t::flush(): Replaces fil_flush().

OS_AIO_IBUF: Remove. We will always issue synchronous reads of the
change buffer pages in buf_read_page_low().

We will always ignore some errors for background reads.

This should reduce fil_system.mutex contention a little.

fil_node_t::complete_write(): Replaces fil_node_t::complete_io().
On both read and write completion, fil_space_t::release_for_io()
will have to be called.

fil_space_t::io(): Do not acquire fil_system.mutex in the normal
code path.

xb_delta_open_matching_space(): Do not try to open the system tablespace
which was already opened. This fixes a file sharing violation in
mariabackup --prepare --incremental.

Reviewed by: Vladislav Vaintroub
2020-10-26 17:09:01 +02:00
Marko Mäkelä
eeee1832d7 Speed up buildbot by requiring --big-test for some slow tests 2019-05-29 08:28:15 +03:00
Marko Mäkelä
589b0b3655 Allow innodb_open_files to be exceeded 2017-11-10 16:06:44 +02:00
Marko Mäkelä
1eee3a3fb7 MDEV-13051 MySQL#86607 InnoDB crash after failed ADD INDEX and table_definition_cache eviction
There are two bugs related to failed ADD INDEX and
the InnoDB table cache eviction.

dict_table_close(): Try dropping failed ADD INDEX when releasing
the last table handle, not when releasing the last-but-one.

dict_table_remove_from_cache_low(): Do not invoke
row_merge_drop_indexes() after freeing all index metadata.
Instead, directly invoke row_merge_drop_indexes_dict() to
remove the metadata from the persistent data dictionary
and to free the index pages.
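
The reference-counting pitfall behind the dict_table_close() fix can be
sketched generically (hypothetical names, not the actual dict_table_t
code): cleanup of a failed ADD INDEX must run when the handle count
drops to zero, not while one handle is still open.

  // Hypothetical sketch of the off-by-one being fixed.
  #include <cassert>

  struct TableCacheEntry {
    int n_handles = 0;
    bool has_failed_add_index = false;

    void open() { ++n_handles; }

    void close() {
      assert(n_handles > 0);
      --n_handles;
      // Checking "n_handles == 1" here would fire while another handle
      // is still open; the drop must wait for the last release.
      if (n_handles == 0 && has_failed_add_index)
        drop_failed_indexes();
    }

    void drop_failed_indexes() { has_failed_add_index = false; }
  };
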
2017-10-16 19:11:30 +03:00