- InnoDB mistakenly identifies a non-unique FTS_DOC_ID index as
FTS_DOC_ID_INDEX while loading the table. dict_load_indexes()
should check whether the index is unique before assigning it to
fts_doc_id_index.
- A redundant InnoDB table fails to set flags2 while the table is
being loaded, which leads to an "Upgrade index name failure" during
an ALTER operation. InnoDB should set the FTS_AUX_HEX_NAME flag in
flags2 when FTS is being loaded.
- The test innodb_fts.sync_block no longer makes sense after the
MDEV-25581 patch, because FTS cache syncing is now done as part of
the insert operation, which sometimes lets the SELECT complete
before the INSERT. The test case is no longer relevant.
Problem:
========
InnoDB FTS requests an FTS sync of the table once the FTS cache
size reaches 1/10 of innodb_ft_cache_size. But fts_sync() releases
the cache lock while writing each word. Meanwhile, InnoDB insert
threads keep growing the FTS cache, so the sync operation takes
longer to complete.
Solution:
=========
Remove the FTS sync operation (FTS_MSG_SYNC_TABLE) from the FTS
optimize background thread. Instead, let the user thread sync the
InnoDB FTS cache when the cache size exceeds 512 KB. The user
thread holds the cache lock while syncing the cache, which ensures
that other threads cannot add documents to the cache.
Removed FTS_MSG_SYNC_TABLE and its related function, because the
FTS_MSG_SYNC_TABLE message itself no longer exists.
Removed fts_sync_index_check() and all related functions, because
other threads do not add documents while the cache sync is in
progress.
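A minimal, self-contained sketch of this flow (FtsCache,
add_document and sync_locked are illustrative stand-ins, not the
actual InnoDB symbols): the inserting user thread keeps holding the
cache lock and syncs the cache itself once it grows past the 512 KB
threshold, so no other thread can add documents during the sync.

  // Illustrative sketch only, not the actual InnoDB implementation.
  #include <cstddef>
  #include <cstdint>
  #include <map>
  #include <mutex>
  #include <string>
  #include <vector>

  struct FtsCache {                 // hypothetical stand-in for fts_cache_t
    std::mutex lock;                // the cache lock
    std::map<std::string, std::vector<uint64_t>> words;  // word -> doc ids
    size_t total_bytes = 0;
    static constexpr size_t SYNC_THRESHOLD = 512 * 1024;  // 512 KB

    // Flush the in-memory cache to the auxiliary index tables.
    // The caller holds `lock`, so no other thread can add documents.
    void sync_locked() {
      // ... write the accumulated words to the FTS auxiliary tables ...
      words.clear();
      total_bytes = 0;
    }
  };

  // Called from the user (insert) thread after tokenizing a document.
  void add_document(FtsCache &cache, uint64_t doc_id,
                    const std::vector<std::string> &tokens) {
    std::lock_guard<std::mutex> guard(cache.lock);
    for (const std::string &word : tokens) {
      cache.words[word].push_back(doc_id);
      cache.total_bytes += word.size() + sizeof(doc_id);
    }
    // Sync in the same user thread, still holding the cache lock,
    // instead of posting FTS_MSG_SYNC_TABLE to the optimize thread.
    if (cache.total_bytes > FtsCache::SYNC_THRESHOLD) {
      cache.sync_locked();
    }
  }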
- InnoDB fails to create an FTS cache while loading an InnoDB FTS
table that is stored in the system tablespace. InnoDB should create
the FTS cache when loading the FTS_DOC_ID column from the system
columns.
- InnoDB DDL results in `Duplicate entry' if a concurrent DML
throws a duplicate key error. The following scenario explains
the problem:
connection con1:
ALTER TABLE t1 FORCE;
connection con2:
INSERT INTO t1(pk, uk) VALUES (2, 2), (3, 2);
In connection con2, InnoDB throws the 'DUPLICATE KEY' error because
of the unique index. The ALTER operation then throws the same error
when applying the concurrent DML log.
- Inserting a duplicate key into the unique index logs the insert
operation for online ALTER TABLE. When the insertion fails, the
transaction rolls back, which logs a delete operation for online
ALTER TABLE. While applying the insert log entries, the ALTER
operation encounters the 'DUPLICATE KEY' error.
- To avoid this fake duplicate scenario, InnoDB should not write
any log for online ALTER TABLE before the DML transaction commits
(see the sketch after the list of changed functions below).
- A user thread doing DML can apply the online log if InnoDB ran
out of online log space and the index is marked as completed.
Set the online log error if the apply phase encountered any error.
It also clears the logs of all other indexes and marks the newly
added indexes as corrupted.
- Removed the old online-apply code that was part of DML operations.
commit_inplace_alter_table(): Applies the online log for the last
batch of secondary index log entries and frees the log for the
completed index.
trx_t::apply_online_log: Set to true while writing the undo
log if the modified table has active DDL
trx_t::apply_log(): Apply the DML changes to online DDL tables
dict_table_t::is_active_ddl(): Returns true if the table
has an active DDL
dict_index_t::online_log_make_dummy(): Assign a dummy value to the
clustered index online log to indicate that the secondary indexes
are being rebuilt.
dict_index_t::online_log_is_dummy(): Check whether the online log
holds the dummy value.
ha_innobase_inplace_ctx::log_failure(): Handle the apply log
failure for online DDL transaction
row_log_mark_other_online_index_abort(): Clear out all other
online index logs after encountering an error during
row_log_apply().
row_log_get_error(): Get the error that happened during row_log_apply().
row_log_online_op(): Applies the online log if the index is
completed and the log ran out of memory. Returns false if applying
the log fails.
UndorecApplier: A new class that holds the undo log record and the
latched undo buffer page, parses the undo log record, and keeps
track of the undo record type, info bits and update vector.
UndorecApplier::get_old_rec(): Get the correct version of the
clustered index record that was modified by the current undo
log record
UndorecApplier::clear_undo_rec(): Clear the undo log related
information after applying the undo log record
UndorecApplier::log_update(): Handle the update and delete undo
log records and apply them to the online indexes.
UndorecApplier::log_insert(): Handle the insert undo log records
and apply them to the online indexes.
UndorecApplier::is_same(): Check whether the given roll pointer
is generated by the current undo log record information
trx_t::rollback_low(): Set apply_online_log for the transaction if
the partially rolled back transaction still has any active DDL
table.
prepare_inplace_alter_table_dict(): After allocating the online
log, InnoDB creates the fulltext common tables. A fulltext index
cannot be built online, so the dead code for online log removal
was removed.
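A rough, self-contained sketch of the overall design (Transaction,
UndoRecord and Table below are simplified stand-ins for trx_t, the
parsed undo log record and dict_table_t, not the real InnoDB code):
DML changes reach the online log only when the transaction commits,
by replaying its undo records, so a rolled back INSERT can no
longer produce a fake 'DUPLICATE KEY'.

  // Illustrative sketch only, not the actual InnoDB code.
  #include <cstdint>
  #include <vector>

  enum class UndoType { INSERT, UPDATE, DELETE_MARK };

  struct UndoRecord {          // parsed undo log entry (cf. UndorecApplier)
    UndoType type;
    uint64_t table_id;
    // ... primary key, update vector, info bits ...
  };

  struct Table {
    bool active_ddl = false;   // cf. dict_table_t::is_active_ddl()
  };

  struct Transaction {
    bool apply_online_log = false;         // cf. trx_t::apply_online_log
    std::vector<UndoRecord> undo_records;  // stand-in for the undo log

    // While writing undo: remember that the table has an active DDL.
    void note_modified(const Table &t) {
      if (t.active_ddl) apply_online_log = true;
    }

    // cf. trx_t::apply_log(): replay the undo records onto the online
    // indexes only after the DML has committed. A failed (rolled back)
    // INSERT therefore never reaches the online log, so row_log_apply()
    // cannot hit the fake 'DUPLICATE KEY' described above.
    void commit() {
      if (apply_online_log) {
        for (const UndoRecord &rec : undo_records) {
          switch (rec.type) {
            case UndoType::INSERT:       /* log_insert(rec) */ break;
            case UndoType::UPDATE:
            case UndoType::DELETE_MARK:  /* log_update(rec) */ break;
          }
        }
      }
      undo_records.clear();
    }
  };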
Thanks to Marko Mäkelä for providing the initial prototype and
Matthias Leich for testing the issue patiently.
- InnoDB FTS DDL decrements the FTS_DOC_ID when a delete-marked
record is involved. FTS_DOC_ID must never be reused. The purpose
of FTS_DOC_ID is to be a unique row identifier that is changed
whenever a fulltext indexed column is updated.
- Make innodb_ft_cache_size & innodb_ft_total_cache_size dynamic
variables, increase the maximum value of innodb_ft_cache_size to
512 MB on 32-bit systems and 1 TB on 64-bit systems, and set the
maximum value of innodb_ft_total_cache_size to 1 TB on 64-bit
systems.
- Print a warning if the FTS cache exceeds innodb_ft_cache_size,
and unlock the cache once the FTS cache memory drops below
innodb_ft_cache_size.
* preserve DESC index property in the parser
* store it in the frm (only for HA_KEY_ALG_BTREE)
* read it from the frm
* show it in SHOW CREATE
* skip DESC indexes in opt_range.cc and opt_sum.cc
* ORDER BY test
This includes a fix of MDEV-27432.
This is loosely based on the InnoDB changes in
mysql/mysql-server@97fd8b1b69
that I had developed in 2015 or 2016.
For each B-tree key field, an ASC/DESC flag may be associated with it.
When PRIMARY KEY fields are internally appended to secondary indexes,
the ASC/DESC attribute will be inherited, so that covering index scans
will work as expected.
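A minimal sketch of how a per-field descending flag can affect key
comparison (KeyField and compare_rows() are hypothetical helpers;
the actual change adds dict_field_t::descending and a descending
parameter to cmp_data()):

  // Minimal sketch; not the actual InnoDB comparison code.
  #include <cstddef>
  #include <string>
  #include <vector>

  struct KeyField {
    bool descending;   // ASC (false) or DESC (true)
  };

  // Compare two tuples field by field; a DESC field simply inverts the
  // result of the ascending comparison. PRIMARY KEY fields appended to
  // a secondary index keep their own flag, so a covering scan returns
  // rows in the same order as the primary key definition.
  int compare_rows(const std::vector<KeyField> &fields,
                   const std::vector<std::string> &a,
                   const std::vector<std::string> &b) {
    for (size_t i = 0; i < fields.size(); i++) {
      int cmp = a[i].compare(b[i]);   // plain ascending comparison
      if (cmp != 0) return fields[i].descending ? -cmp : cmp;
    }
    return 0;                         // all key fields equal
  }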
Note: Until the subsequent commit, the DESC attribute will be ignored
(no HA_REVERSE_SORT flag will be written to .frm files).
dict_field_t::descending: A new flag to denote descending order.
cmp_data(), cmp_dfield_dfield(): Add a new parameter descending.
cmp_dtuple_rec(), cmp_dtuple_rec_with_match(): Add a parameter "index".
dtuple_coll_eq(): Replaces dtuple_coll_cmp().
cmp_dfield_dfield_eq_prefix(): Replaces cmp_dfield_dfield_like_prefix().
dict_index_t::is_btree(): Check whether the index is a regular
B-tree index (not SPATIAL, FULLTEXT, the ibuf.index, or a
corrupted index).
btr_cur_search_to_nth_level_func(): Only attempt to use
the adaptive hash index if index->is_btree().
This function may also be invoked on ibuf.index, and
cmp_dtuple_rec_with_match_bytes() will no longer work on ibuf.index
because it assumes that the index and record fields exactly match.
The ibuf.index is a special variadic index tree.
Thanks to Thirunarayanan Balathandayuthapani for fixing some bugs:
MDEV-27439, MDEV-27374/MDEV-27445.
- In ha_innobase::prepare_inplace_alter_table(), InnoDB should
check whether the table is empty. If the table is empty, the
server should avoid downgrading the MDL after the prepare phase.
It then behaves more like an instant ALTER, changing only the
dictionary and metadata (see the sketch after this list).
- Changed a few debug test cases to use a non-empty table for DDL.
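A rough sketch of the intended branching in the prepare phase
(AlterContext, prepare_inplace and keep_exclusive_mdl are
illustrative names, not the server's actual API):

  // Illustrative only; not the actual ha_innobase code or MDL API.
  struct AlterContext {
    bool table_is_empty;      // result of the new emptiness check
    bool keep_exclusive_mdl;  // whether to skip the MDL downgrade
  };

  void prepare_inplace(AlterContext &ctx) {
    if (ctx.table_is_empty) {
      // Behaves like an instant ALTER: only dictionary and metadata
      // change and there is no concurrent DML to log, so keep the
      // exclusive MDL until commit instead of downgrading it.
      ctx.keep_exclusive_mdl = true;
    } else {
      // Regular online path: downgrade the MDL and log concurrent DML.
      ctx.keep_exclusive_mdl = false;
    }
  }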
Based on mysql/mysql-server@bc9c46bf28
but without sleeps.
The test was verified to hit the debug assertion if the change to
fts_add_doc_by_id() in commit 2d98b967e3
was reverted.
InnoDB commit fails when the difference between consecutive
FTS_DOC_ID values is greater than 4294967295.
The fix is that InnoDB should remove the delta FTS_DOC_ID value
limitation and FTS should encode 8-byte values; the
FTS_DOC_ID_MAX_STEP variable is removed. Replaced the file
fts0vlc.ic with fts0vlc.h.
fts_encode_int(): Must be able to encode values that take up to
10 bytes.
fts_get_encoded_len(): Must return the encoded length of a value
that takes up to 10 bytes.
fts_decode_vlc(): Add a debug assertion to verify that the maximum
allowed length is 10 bytes.
mach_read_uint64_little_endian(): Reads a 64-bit value stored in
little-endian format.
Added a unit test case that checks the FTS encoding of the minimum
and maximum values (see the sketch below).
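A generic sketch of a 7-bits-per-byte variable-length encoding that
fits any 64-bit value in at most 10 bytes; the exact bit layout used
by fts0vlc.h may differ, and encode_vlc/decode_vlc are illustrative
names:

  // Generic variable-length integer sketch, not the fts0vlc.h code.
  // A full 64-bit value needs at most ceil(64 / 7) = 10 bytes, which
  // is why the old 32-bit delta assumption no longer holds.
  #include <cassert>
  #include <cstdint>

  // Encode `val` into `buf`, returning the number of bytes written
  // (1..10). Each byte holds 7 payload bits; the high bit means
  // "more bytes follow".
  inline unsigned encode_vlc(uint64_t val, unsigned char *buf) {
    unsigned len = 0;
    do {
      unsigned char byte = val & 0x7F;
      val >>= 7;
      if (val != 0) byte |= 0x80;
      buf[len++] = byte;
    } while (val != 0);
    assert(len <= 10);
    return len;
  }

  // Decode a value previously written by encode_vlc().
  inline uint64_t decode_vlc(const unsigned char *buf, unsigned *len_out) {
    uint64_t val = 0;
    unsigned len = 0;
    unsigned shift = 0;
    unsigned char byte;
    do {
      byte = buf[len++];
      val |= uint64_t(byte & 0x7F) << shift;
      shift += 7;
    } while (byte & 0x80);
    assert(len <= 10);
    if (len_out) *len_out = len;
    return val;
  }

Round-tripping 0 and UINT64_MAX exercises the 1-byte and 10-byte
cases, mirroring the minimum/maximum values checked by the unit
test mentioned above.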
This essentially reverts commit 4e89ec6692
and only disables InnoDB persistent statistics for tests where it is
desirable. By design, InnoDB persistent statistics will not be updated
except by ANALYZE TABLE or by STATS_AUTO_RECALC.
The internal transactions that update persistent InnoDB statistics
in background tasks (with innodb_stats_auto_recalc=ON) may cause
nondeterministic query plans or interfere with some tests that deal
with other InnoDB internals, such as the purge of transaction history.
InnoDB DDL fails when it tries to sync the table while
innodb_force_recovery is set to 2. The problem is that
fts_optimize_wq is not initialized when there are no background
threads. fts_sync_during_ddl() should check whether fts_optimize_wq
has been initialized.
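A tiny sketch of the guard (the surrounding code is simplified;
only the names fts_sync_during_ddl() and fts_optimize_wq come from
the description above):

  // Sketch only: this fts_optimize_wq stands in for InnoDB's background
  // work queue, which is not created when background threads are
  // disabled (e.g. with innodb_force_recovery=2).
  static void *fts_optimize_wq = nullptr;

  void fts_sync_during_ddl_sketch() {
    if (fts_optimize_wq == nullptr) {
      return;  // no background FTS threads, so there is nothing to sync
    }
    // ... proceed with syncing the FTS cache as part of the DDL ...
  }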
trx_t::commit_in_memory(): Do not initiate a redo log write if
the transaction has no visible effect. If anything for this
transaction had been made durable, crash recovery will roll back
the transaction just fine even if the end of ROLLBACK is not
durably written.
Rollbacks of transactions that are associated with XA identifiers
(possibly internally via the binlog) will always be persisted.
The test rpl.rpl_gtid_crash covers this.
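A rough sketch of the rule as described above (TrxSketch and
should_write_redo() are illustrative; the real
trx_t::commit_in_memory() handles many more cases):

  // Simplified decision sketch, not the actual InnoDB code.
  struct TrxSketch {
    bool has_visible_effect;  // did the transaction durably change anything?
    bool has_xid;             // associated with an XA identifier (e.g. binlog)
    bool is_rollback;         // finishing a ROLLBACK rather than a COMMIT
  };

  // Whether finishing the transaction should initiate a redo log write.
  bool should_write_redo(const TrxSketch &trx) {
    if (trx.has_xid) {
      return true;   // XA-associated rollbacks are always persisted
    }
    if (!trx.has_visible_effect) {
      return false;  // nothing durable to protect
    }
    // Even a durable, fully rolled back transaction is safe: crash
    // recovery rolls it back again even if the end of ROLLBACK was
    // never durably written.
    return !trx.is_rollback;
  }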