If some unique key fields are nullable, a unique index can contain
several records with the same key field values, as long as at least
one key field is NULL, because NULL != NULL.
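For illustration, a minimal sketch (with hypothetical table and column
names) of how a unique index accepts multiple records that differ only by
NULL key fields:

  CREATE TABLE t (a INT, b INT, UNIQUE KEY(a, b)) ENGINE=InnoDB;
  INSERT INTO t VALUES (1, NULL), (1, NULL), (1, NULL); -- accepted: NULL != NULL
  INSERT INTO t VALUES (1, 2), (1, 2);                  -- rejected: duplicate unique key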
When a transaction is resumed after waiting on a record with at least one
key field equal to NULL, and the record stored in the persistent cursor
has been deleted, the persistent cursor can be restored to a record whose
key fields are all equal to the stored ones, but with at least one field
equal to NULL. Such a record is then wrongly treated as having the same
unique key as the record stored in the persistent cursor, which is
incorrect because NULL != NULL.
The fix is to check whether at least one unique key field is NULL in the
restored persistent cursor position, and, if so, not to treat the record
as having the same unique key as the stored record.
dict_index_t::nulls_equal was removed, as it was originally introduced
for "intrinsic tables", which never existed in MariaDB, and there is no
code that would set it to "true".
Reviewed by Marko Mäkelä.
- This issue is caused by commit 188c5da72a (MDEV-32453).
InnoDB fails to end the bulk insert for the table after
applying the bulk insert operation. This leads to an assertion
failure during the commit process.
In commit d74d95961a (MDEV-18543)
there was an error that would cause the hidden metadata record
to be deleted, and therefore cause the table to appear corrupted
when it is reloaded into the data dictionary cache.
PageConverter::update_records(): Do not delete the metadata record,
but do validate it.
RecIterator::open(): Make the API more similar to 10.6, to simplify
merges.
Even after commit b8a6719889 there
is an anomaly where a locking read could return inconsistent results.
If a locking read would have to wait for a record lock, then by the
definition of a read view, the modifications made by the current lock
holder cannot be visible in the read view. This is because the read
view must exclude any transactions that had not been committed at the
time when the read view was created.
lock_rec_convert_impl_to_expl_for_trx(), lock_rec_convert_impl_to_expl():
Return an unsafe-to-dereference pointer to a transaction that holds or
held the lock, or nullptr if the lock was available.
lock_clust_rec_modify_check_and_lock(),
lock_sec_rec_read_check_and_lock(),
lock_clust_rec_read_check_and_lock():
Return DB_RECORD_CHANGED if innodb_snapshot_isolation=ON and the
lock was being held by another transaction.
The test case, which is based on a bug report by Zhuang Liu,
covers the function lock_sec_rec_read_check_and_lock().
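A rough sketch of such a scenario (hypothetical table and data, not the
exact test from the bug report): a locking read through the secondary
index has to wait for a lock whose holder's changes are not visible in
the reader's read view.

  CREATE TABLE t (pk INT PRIMARY KEY, b INT, KEY(b)) ENGINE=InnoDB;
  INSERT INTO t VALUES (1, 1);

  -- connection 1
  SET SESSION innodb_snapshot_isolation=ON;
  START TRANSACTION WITH CONSISTENT SNAPSHOT;

  -- connection 2
  BEGIN;
  UPDATE t SET b = 2 WHERE pk = 1; -- holds record locks, not yet committed

  -- connection 1
  SELECT * FROM t FORCE INDEX(b) WHERE b = 2 FOR UPDATE;
  -- waits for connection 2; once connection 2 commits, its change is still
  -- not visible in this read view, so the statement is expected to fail
  -- with ER_CHECKREAD instead of returning an inconsistent result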
Reviewed by: Vladislav Lesin
Reason:
=======
- InnoDB fails to apply the buffered insert operation if the
after insert trigger modifies the same table. This behaviour
leads to an empty table for the subsequent insert operation
and a server abort.
Solution:
========
- InnoDB should apply the buffered insert operation if an
"after insert" trigger changes the same table.
On a busy system it might take time for buffer_page_written_index_leaf
to reach the correct value. Wait for it.
Also, tag identical statements so that they appear distinct in the result file.
In commit 24648768b4 (MDEV-30136)
the parameter innodb_flush_method was deprecated, with no direct
replacement for innodb_flush_method=O_DIRECT_NO_FSYNC.
Let us change innodb_doublewrite from Boolean to ENUM that can
be changed while the server is running:
OFF: Assume that writes of innodb_page_size are atomic
ON: Prevent torn writes (the default)
fast: Like ON, but avoid synchronizing writes to data files
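For example (assuming the variable is exposed as a dynamic global system
variable, as described above), the mode could be changed at runtime:

  SET GLOBAL innodb_doublewrite = 'fast';
  SELECT @@GLOBAL.innodb_doublewrite;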
The deprecated start-up parameter innodb_flush_method=O_DIRECT_NO_FSYNC will cause
innodb_doublewrite=ON to be changed to innodb_doublewrite=fast,
which will prevent InnoDB from making any durable writes to data files.
This would normally be done right before the log checkpoint LSN is updated.
Depending on the file systems being used and their configuration,
this may or may not be safe.
The value innodb_doublewrite=fast differs from the previous combination of
innodb_doublewrite=ON and innodb_flush_method=O_DIRECT_NO_FSYNC by always
invoking os_file_flush() on the doublewrite buffer itself
in buf_dblwr_t::flush_buffered_writes_completed(). This should be safer
when there are multiple doublewrite batches between checkpoints.
Typically, once per second, buf_flush_page_cleaner() would write out
up to innodb_io_capacity pages and advance the log checkpoint.
Also typically, innodb_io_capacity>128, which is the size of the
doublewrite buffer in pages. Should os_file_flush_func() not be invoked
between doublewrite batches, writes could be reordered in an unsafe way.
The setting innodb_doublewrite=fast could be safe when the doublewrite
buffer (the first file of the system tablespace) and the data files
reside in the same file system.
This was tested by running "./mtr --rr innodb.alter_kill". On the first
server startup, with innodb_doublewrite=fast, os_file_flush_func()
would only be invoked on the ibdata1 file and possibly ib_logfile0.
On subsequent startups with innodb_doublewrite=OFF, os_file_flush_func()
will be invoked on the individual data files during log_checkpoint().
Note: The setting debug_no_sync (in the code, my_disable_sync) would
disable all durable writes to InnoDB files, which would be much less safe.
IORequest::Type: Introduce special values WRITE_DBL and PUNCH_DBL
for asynchronous writes that are submitted via the doublewrite buffer.
In this way, fil_space_t::use_doublewrite() or buf_dblwr.in_use()
will only be consulted during buf_page_t::flush() and the doublewrite
buffer can be enabled or disabled without any fear of inconsistency.
buf_dblwr_t::block_size: Replaces block_size().
buf_dblwr_t::flush_buffered_writes(): If !in_use() and the doublewrite
buffer is empty, just invoke fil_flush_file_spaces() and return. The
doublewrite buffer could have been disabled while a batch was in
progress.
innodb_init_params(): If innodb_flush_method=O_DIRECT_NO_FSYNC,
set innodb_doublewrite=fast or innodb_doublewrite=fearless.
Thanks to Mark Callaghan for reporting this, and Vladislav Vaintroub
for feedback.
Some fixes related to commit f838b2d799, affecting
Rows_log_event::do_apply_event() and Update_rows_log_event::do_exec_row()
for system-versioned tables, were provided by Nikita Malyavin.
This was required by the test versioning.rpl,trx_id,row.
By design, InnoDB has always hung when permanently running out of
buffer pool, for example when several threads are waiting to allocate
a block, and all of the buffer pool is buffer-fixed by the active threads.
The hang that we are fixing here occurs when the buffer pool is only
temporarily running out and the situation could be rescued by writing out
some dirty pages or evicting some clean pages.
buf_LRU_get_free_block(): Simplify how we wait for
the buf_flush_page_cleaner thread. This fixes occasional hangs
of the test encryption.innochecksum that were introduced by
commit a55b951e60 (MDEV-26827).
To play it safe, we use a timed wait when waiting for the
buf_flush_page_cleaner() thread to perform its job. Should that
thread get stuck, we will invoke buf_pool.LRU_warn() in order to
display a message that pages could not be freed, and keep trying
to wake up the buf_flush_page_cleaner() thread.
The INFORMATION_SCHEMA.INNODB_METRICS counters
buffer_LRU_single_flush_failure_count and
buffer_LRU_get_free_waits will be removed.
The latter is represented by buffer_pool_wait_free.
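The remaining counter can still be queried, for example:

  SELECT NAME, `COUNT` FROM INFORMATION_SCHEMA.INNODB_METRICS
  WHERE NAME = 'buffer_pool_wait_free';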
Also removed will be the message
"InnoDB: Difficult to find free blocks in the buffer pool"
because in d34479dc66 we
introduced a more precise message
"InnoDB: Could not free any blocks in the buffer pool"
in the buf_flush_page_cleaner thread.
buf_pool_t::LRU_warn(): Issue the warning message that we could
not free any blocks in the buffer pool. This may also be invoked
by buf_LRU_get_free_block() if buf_flush_page_cleaner() appears
to be stuck.
buf_pool_t::n_flush_dec(): Remove.
buf_pool_t::n_flush_dec_holding_mutex(): Rename to n_flush_dec().
buf_flush_LRU_list_batch(): Increment the eviction counter for blocks
of temporary, discarded or dropped tablespaces.
buf_flush_LRU(): Make static, and remove the constant parameter
evict=false. The only caller will be the buf_flush_page_cleaner()
thread.
IORequest::is_LRU(): Remove. The only case of evicting pages on
write completion will be when we are writing out pages of the
temporary tablespace. Those pages are not in buf_pool.flush_list,
only in buf_pool.LRU.
buf_page_t::flush(): Remove the parameter evict.
buf_page_t::write_complete(): Change the parameter "bool temporary"
to "bool persistent" and add a parameter for an already read state().
Reviewed by: Debarun Banerjee
https://jepsen.io/analyses/mysql-8.0.34 highlights that the
transaction isolation levels in the InnoDB storage engine do not
correspond to any widely accepted definitions, such as
"Generalized Isolation Level Definitions"
https://pmg.csail.mit.edu/papers/icde00.pdf
(PL-1 = READ UNCOMMITTED, PL-2 = READ COMMITTED, PL-2.99 = REPEATABLE READ,
PL-3 = SERIALIZABLE).
Only READ UNCOMMITTED in InnoDB seems to match the above definition.
The issue is that InnoDB does not detect write/write conflicts
(Section 4.4.3, Definition 6) in the above.
It appears that as soon as we implement write/write conflict detection
(SET SESSION innodb_snapshot_isolation=ON), the default isolation level
(SET TRANSACTION ISOLATION LEVEL REPEATABLE READ) will become
Snapshot Isolation (similar to Postgres), as defined in Section 4.2 of
"A Critique of ANSI SQL Isolation Levels", MSR-TR-95-51, June 1995
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-95-51.pdf
Locking reads inside InnoDB used to read the latest committed version,
ignoring what should actually be visible to the transaction.
The added test innodb.lock_isolation illustrates this. The statement
UPDATE t SET a=3 WHERE b=2;
is executed in a transaction that was started before a read view or
a snapshot of the current transaction was created, and committed before
the current transaction attempts to execute
UPDATE t SET b=3;
If SET innodb_snapshot_isolation=ON is in effect when the second
transaction was started, the second transaction will be aborted with
the error ER_CHECKREAD. By default (innodb_snapshot_isolation=OFF),
the second transaction would execute inconsistently, displaying an
incorrect SELECT COUNT(*) FROM t in its read view.
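A condensed sketch of that scenario with hypothetical data (connection 2
plays the role of the "current transaction" described above):

  CREATE TABLE t (a INT, b INT) ENGINE=InnoDB;
  INSERT INTO t VALUES (1, 1), (2, 2);

  -- connection 1
  BEGIN;
  UPDATE t SET a = 3 WHERE b = 2;

  -- connection 2
  SET SESSION innodb_snapshot_isolation = ON;
  START TRANSACTION WITH CONSISTENT SNAPSHOT;

  -- connection 1
  COMMIT;

  -- connection 2
  UPDATE t SET b = 3;
  -- with innodb_snapshot_isolation=ON this is expected to fail with
  -- ER_CHECKREAD; with the default OFF it would proceed on a row version
  -- that is not visible in its read view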
If innodb_snapshot_isolation=ON and an attempt is made to acquire a lock
on a record that does not exist in the current read view, the error
DB_RECORD_CHANGED (HA_ERR_RECORD_CHANGED, ER_CHECKREAD) will
be raised. This error will be treated in the same way as a deadlock:
the transaction will be rolled back.
lock_clust_rec_read_check_and_lock(): If the current transaction has
a read view where the record is not visible and
innodb_snapshot_isolation=ON, fail before trying to acquire the lock.
row_sel_build_committed_vers_for_mysql(): If innodb_snapshot_isolation=ON,
disable the "semi-consistent read" logic that had been implemented by
myself on the directions of Heikki Tuuri in order to address
https://bugs.mysql.com/bug.php?id=3300 that was motivated by a customer
wanting UPDATE to skip locked rows that do not match the WHERE condition.
It looks like my changes were included in the MySQL 5.1.5
commit ad126d90e019f223470e73e1b2b528f9007c4532; at that time, employees
of Innobase Oy (a recent acquisition of Oracle) had lost write access to
the repository.
The only reason why we set innodb_snapshot_isolation=OFF by default is
backward compatibility with applications, such as the one that motivated
the implementation of "semi-consistent read" back in 2005. In a later
major release, we can default to innodb_snapshot_isolation=ON.
Thanks to Peter Alvaro, Kyle Kingsbury and Alexey Gotsman for their work
on https://github.com/jepsen-io/ and to Kyle and Alexey for explanations
and some testing of this fix.
Thanks to Vladislav Lesin for the initial test for MDEV-26643,
as well as reviewing these changes.
Problem:
======
- InnoDB fails to do an instant operation while adding a
variable-length column. The problem is that InnoDB wrongly assumes
that a variable-length character field can never be part of an
externally stored page.
Solution:
========
instant_alter_column_possible(): A variable-length character field
can be stored in an externally stored page.
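A sketch of the kind of statement that is affected (hypothetical table;
the added column is long enough that its value may have to be stored
off-page):

  CREATE TABLE t1 (id INT PRIMARY KEY, c1 VARCHAR(100)) ENGINE=InnoDB
    ROW_FORMAT=DYNAMIC;
  ALTER TABLE t1 ADD COLUMN c2 VARCHAR(5000) DEFAULT NULL, ALGORITHM=INSTANT;
  -- with the fix, the instant algorithm is no longer refused merely because
  -- the variable-length column could end up externally stored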
- Suppress the "Difficult to find free blocks" warning
globally to avoid many different test cases failing.
- Demote the error message in validate_first_page() to a note,
so that the first page can be recovered from the doublewrite buffer,
and throw an error in case the page wasn't found in the doublewrite buffer.
The issue here is ha_innobase::get_auto_increment() could cause a
deadlock involving auto-increment lock and rollback the transaction
implicitly. For such cases, storage engines usually call
thd_mark_transaction_to_rollback() to inform the SQL layer about it, which
in turn takes appropriate actions and closes the transaction. In InnoDB,
we call it while converting an InnoDB error code to a MySQL one.
However, since ::innobase_get_autoinc() returns void, we skip the call
for error code conversion and also miss marking the transaction for
rollback on a deadlock error. We eventually hit an assertion while
releasing a savepoint, as the transaction state is not active.
Since convert_error_code_to_mysql() is handling some generic error
handling part, like invoking the callback when needed, we should call
that function in ha_innobase::get_auto_increment() even if we don't
return the resulting mysql error code back.
Problem:
========
During upgrade, InnoDB writes redo log records for adjusting
the tablespace size or tablespace flags even before the log
has been upgraded to the configured format. This could lead to data
inconsistency if a crash happened during the upgrade process.
Fix:
===
srv_start(): Write the redo log records for the tablespace flags
adjustment and the increased tablespace size only after the redo log upgrade.
log_write_low(), log_reserve_and_write_fast(): Check whether
the redo log is in physical format.
The root cause is WAL logging of a file operation when the actual
operation fails afterwards. It creates a situation with a log entry for
an operation that would always fail. I could simulate both the backup
scenario error and the InnoDB recovery failure by exploiting this weakness.
We follow WAL for the file rename operation, and once it is logged the
operation must eventually complete successfully, or it is a major
catastrophe. Right now, we fail for the rename and handle it as a normal
error, and that is the problem.
I created a patch to address a RENAME operation to a non-existing schema
where the destination schema directory is missing. The patch checks for
the missing schema before logging, in an attempt to avoid the failure
after the WAL log is written/flushed. I also checked that the schema cannot
be dropped and there cannot be any race with another rename to the same
file. This is protected by the MDL lock in SQL today.
This patch should be a good improvement over the current situation
and solves the issue at hand.
Problem:
========
- InnoDB reads the length of a variable-length field wrongly
while applying the modification log of an instant table.
Solution:
========
rec_init_offsets_comp_ordinary(): For the temporary instant
file record, InnoDB should read the length of the variable length
field from the record itself.
Problem:
=======
- innodb.log_file_name fails if it executes after
innodb.lock_insert_into_empty in a few cases.
The innodb.lock_insert_into_empty test case fails to
clean up the table t2. Rollback of CREATE..SELECT fails
to remove the table when it fails to acquire the
InnoDB statistics tables. This leads to the rename table
in the log_file_name test case failing.
Solution:
========
- Clean up the table t2 explicitly after resetting the
innodb_lock_wait_timeout variable in the
innodb.lock_insert_into_empty test case.
Problem:
=======
- When an online ALTER of an InnoDB statistics table happens,
any transaction which updates the statistics table
has to read the undo log and log the DML changes during
transaction commit. Applying the undo log
(UndorecApplier::apply_undo_rec) requires a shared
lock on the dictionary cache, but dict_stats_save() already
holds a write lock on the dictionary cache. This leads to
a server abort during the commit of the statistics table changes.
Solution:
========
- Disallow LOCK=NONE operation for the InnoDB statistics table.
The reasoning is that statistics tables are typically
rather small, so any blocking would be rather short.
Writes to the statistics tables should be a rare operation.
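For example, an attempt like the following (sketch) is expected to be
refused instead of being allowed as an online operation:

  ALTER TABLE mysql.innodb_table_stats FORCE, ALGORITHM=INPLACE, LOCK=NONE;
  -- expected to fail with "LOCK=NONE is not supported";
  -- the same statement with the default locking should still be possible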
Problem:
========
Currently, the import operation fails with a schema mismatch when the
cfg file has a hidden FTS document id column and a hidden FTS document id index.
Fix:
====
To fix this issue, simply add the FTS doc id column and
indexes to the table definition and try to import the table.
In case of success:
1) Update the FTS document id in SYS_COLUMNS.
2) Update the number of columns in SYS_TABLES.
3) Insert the new FTS index entry into SYS_INDEXES and SYS_FIELDS.
4) Reload the table with the new table definition.
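A simplified sketch of the affected workflow (hypothetical names; the
table has no user-defined FTS_DOC_ID, so InnoDB adds hidden ones):

  CREATE TABLE t1 (id INT PRIMARY KEY, txt TEXT, FULLTEXT KEY(txt)) ENGINE=InnoDB;
  FLUSH TABLES t1 FOR EXPORT;        -- writes t1.cfg next to t1.ibd
  -- copy t1.ibd and t1.cfg elsewhere
  UNLOCK TABLES;

  -- on the destination, with an identical CREATE TABLE:
  ALTER TABLE t1 DISCARD TABLESPACE;
  -- put the copied t1.ibd and t1.cfg into the database directory
  ALTER TABLE t1 IMPORT TABLESPACE;  -- used to fail with a schema mismatch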
purge_sys_t::truncating_tablespace(): Clamp the
innodb_max_undo_log_size to the maximum number of pages
before converting the result into a 32-bit unsigned integer.
This fixes up commit f8c88d905b (MDEV-33213).
In later major versions, a 32-bit unsigned integer would be used here
due to commit ca501ffb04
and the code would crash also on 64-bit processors.
Reviewed by: Debarun Banerjee
This fixes up the merge commit 7e39470e33
dict_table_open_on_name(): Report ER_TABLE_CORRUPT in a consistent
fashion, with a pretty-printed table name.
MDEV-33308 CHECK TABLE is modifying .frm file even if --read-only
As noted in commit d0ef1aaf61,
MySQL as well as older versions of MariaDB server would during
ALTER TABLE ... IMPORT TABLESPACE write bogus values to the
PAGE_MAX_TRX_ID field in pages of the clustered index, instead of
letting that field remain 0.
In commit 8777458a6e this field
was repurposed for PAGE_ROOT_AUTO_INC in the clustered index root page.
To avoid trouble when upgrading from MySQL or older versions of MariaDB,
we will try to detect and correct bogus values of PAGE_ROOT_AUTO_INC
when opening a table for the first time from the SQL layer.
btr_read_autoinc_with_fallback(): Add the parameters mysql_version, max
to indicate the TABLE_SHARE::mysql_version of the .frm file and the
maximum value allowed for the type of the AUTO_INCREMENT column.
In case the table was originally created in MySQL or an older version of
MariaDB, read also the maximum value of the AUTO_INCREMENT column from
the table and reset the PAGE_ROOT_AUTO_INC if it is above the limit.
dict_table_t::get_index(const dict_col_t &) const: Find an index that
starts with the specified column.
ha_innobase::check_for_upgrade(): Return HA_ADMIN_FAILED if InnoDB
needs upgrading but is in read-only mode. In this way, the call to
update_frm_version() will be skipped.
row_import_autoinc(): Adjust the AUTO_INCREMENT column at the end of
ALTER TABLE...IMPORT TABLESPACE. This refinement was suggested by
Debarun Banerjee.
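From SQL, this upgrade path is exercised by CHECK ... FOR UPGRADE (for
example via mariadb-upgrade); a minimal sketch:

  CHECK TABLE t1 FOR UPGRADE;
  -- on a writable server, a bogus PAGE_ROOT_AUTO_INC is expected to be
  -- corrected and the .frm version updated; under --read-only,
  -- HA_ADMIN_FAILED is returned and update_frm_version() is skipped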
The changes outside InnoDB were developed by Michael 'Monty' Widenius:
Added print_check_msg() service for easy reporting of check/repair messages
in ENGINE=Aria and ENGINE=InnoDB.
Fixed that CHECK TABLE does not update the .frm file under --read-only.
Added 'handler_flags' to HA_CHECK_OPT as a way for storage engines to
store state from handler::check_for_upgrade().
Reviewed by: Debarun Banerjee
row_discard_tablespace(): Do not invoke dict_index_t::clear_instant_alter()
because that would corrupt any adaptive hash index entries in the table.
row_import_for_mysql(): Invoke dict_index_t::clear_instant_alter()
after detaching any adaptive hash index entries.
lock_table_children(): A new function to lock all child tables of a table.
We will only hold dict_sys.latch while traversing
dict_table_t::referenced_set. To prevent a race condition with
std::set::erase() we will copy the pointers to the child tables to a
local vector. Once we have acquired MDL and references to all child tables,
we can safely release dict_sys.latch, wait for the locks, and finally
release the references.
dict_acquire_mdl_shared(): A new variant that takes mdl_context as a
parameter.
lock_table_for_trx(): Assert that we are not holding dict_sys.latch.
ha_innobase::truncate(): When foreign_key_checks=ON, assert that
no child tables exist (other than the current table).
In any case, we will invoke lock_table_children()
so that the child table metadata can be safely updated.
(It is possible that a child table is being created concurrently
with TRUNCATE TABLE.)
ha_innobase::delete_table(): Before and after acquiring exclusive
locks on the current table as well as all child tables,
check that FOREIGN KEY constraints will not be violated.
In this way, we can reject impossible DROP TABLE without having to
wait for locks first.
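For instance (a sketch with hypothetical tables), an impossible DROP is
rejected up front:

  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (id INT PRIMARY KEY, pid INT,
                      FOREIGN KEY (pid) REFERENCES parent(id)) ENGINE=InnoDB;
  SET foreign_key_checks = ON;
  DROP TABLE parent;  -- rejected, because child still references parent,
                      -- without first waiting for exclusive table locks
  SET foreign_key_checks = OFF;
  DROP TABLE parent;  -- allowed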
This fixes up commit 2ca1123464 (MDEV-26217)
and commit c3c53926c4 (MDEV-26554).
This follows up commit 383f77cd84
which simplified dict_table_schema_check().
Note: We can display quoted names like this:
my_snprintf(buf, sizeof buf, "%`.*s.%`s",
int(t->name.dblen()), t->name.m_name, t->name.basename());
THE FIX MUST NOT BE MERGED TO 10.6+, BECAUSE 10.6+ IS NOT AFFECTED!
The test is waiting for delete-marked record purging. But this does not
happen under the following conditions:
1. "START TRANSACTION WITH CONSISTENT SNAPSHOT" - is active, has not
been rolled back yet
2. "DELETE FROM t WHERE b = 20 # trx_1" - is committed
3. "INSERT INTO t VALUES(10, 20) # trx_2" - hanging on
"ib_after_row_insert" sync point, waiting for "first_ins_cont" signal
4. "DELETE FROM t WHERE b = 20 # trx_3" - blocked on delete-marked by
trx_1 record, waiting for trx_2
5. connection "default" is waiting on
'now WAIT_FOR row_purge_del_mark_finished'
purge_coordinator_callback_low() sets
purge_state.m_history_length= srv_do_purge(&n_total_purged);
even if nothing was purged, as in our case. Nothing was purged because
the transaction with the consistent snapshot was still alive during the
purge procedure.
Then purge_coordinator_timer_callback() does not wake purge thread if
the following condition is true:
purge_state.m_history_length == trx_sys.rseg_history_len
The above condition is true for our case, because we are waiting for
delete-marked record purging, and trx_sys.rseg_history_len does not
grow.
Only 10.5 is affected, because there is no such condition in 10.6, i.e.
the purge thread is woken up even if the history size was not changed
while the purge coordinator thread was suspended.
The easiest way to fix it is just to remove the test from 10.5.
During import, if a cfg file is not specified, we don't update the autoinc
field in the InnoDB dictionary object dict_table_t. The next insert tries to
insert from the starting position of the auto-increment value and fails.
It can be observed that the issue is resolved once the server is restarted,
as the persistent value is read correctly from PAGE_ROOT_AUTO_INC in the
index root page. The patch fixes the issue by reading the auto-increment
value directly from PAGE_ROOT_AUTO_INC during import if a cfg file is not
specified.
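A sketch of the affected path (hypothetical table; no .cfg file is copied
along with the .ibd file):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT) ENGINE=InnoDB;
  INSERT INTO t1 (v) VALUES (10), (20), (30);
  FLUSH TABLES t1 FOR EXPORT;
  -- copy only t1.ibd (no t1.cfg)
  UNLOCK TABLES;

  ALTER TABLE t1 DISCARD TABLESPACE;
  -- put the copied t1.ibd back into the database directory
  ALTER TABLE t1 IMPORT TABLESPACE;   -- warns about the missing .cfg file
  INSERT INTO t1 (v) VALUES (40);
  -- with the fix, the next AUTO_INCREMENT value is taken from
  -- PAGE_ROOT_AUTO_INC, so this no longer fails with a duplicate key error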
Test Fix:
1. import_bugs.test: The embedded mode warning contains an absolute path.
Added a regular expression replacement in the test.
2. full_crc32_import.test: Table-level auto-increment mismatch after
import. It was using the auto-increment data from the table prior to
discard and import, which is not right. This cached auto-increment
value is higher than the actual inserted value and the value stored
in PAGE_ROOT_AUTO_INC. Updated the result file and added validation for
checking the maximum value of the auto-increment column.