A CHECK constraint is checked by check_expression(), which walks its
items and reaches Item_field::check_vcol_func_processor() to verify
conformity with the foreign key list.
WITHOUT OVERLAPS is checked for the same conformity in
mysql_prepare_create_table().
Long uniques are already impossible with InnoDB foreign keys; see
ER_CANT_CREATE_TABLE in the test case.
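For illustration only, here is a rough toy model of the walk-and-processor
idea; the structs are heavily simplified stand-ins for the real Item classes,
and the fk_columns argument is hypothetical:

  #include <string>
  #include <vector>

  // Simplified stand-ins for the server's Item hierarchy (illustrative only).
  struct Item
  {
    virtual bool check_vcol_func_processor(void *) { return false; }
    virtual ~Item() = default;
  };

  struct Item_field : Item
  {
    std::string field_name;
    explicit Item_field(std::string n) : field_name(std::move(n)) {}

    // Reject the expression if it refers to a column from the foreign key list.
    bool check_vcol_func_processor(void *arg) override
    {
      const auto *fk_columns= static_cast<std::vector<std::string> *>(arg);
      for (const auto &col : *fk_columns)
        if (col == field_name)
          return true;              // conflict: constraint cannot use this column
      return false;
    }
  };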
Two accompanying bugs were fixed (test main.constraints failed):
1. check->name.str lived on the SP execute mem_root while the "check"
object itself lives on the SP main mem_root. On the second SP execution
check->name.str contained garbage data. Fixed by allocating from
thd->stmt_arena->mem_root, which is the SP main mem_root.
2. The CHECK_CONSTRAINT_IF_NOT_EXISTS value was mixed up with
VCOL_FIELD_REF: VCOL_FIELD_REF is assigned in check_expression() and
was then misdetected as CHECK_CONSTRAINT_IF_NOT_EXISTS in
handle_if_exists_options().
Existing cases for MDEV-16932 in main.constraints cover both fixes.
Insert worked incorrectly as well. RocksDB used table->record[0] internally to
store some intermediate results for key conversion, during index searches among
other operations. So table->record[0] was spoiled during ha_rnd_index_map in
ha_check_overlaps, and in turn the broken record data was inserted.
The fix is to store the RocksDB intermediate result in its own buffer instead
of table->record[0].
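A minimal sketch of the idea, with illustrative stand-in types only (nothing
below is the real TABLE or ha_rocksdb code):

  #include <cstdint>
  #include <vector>

  // Illustrative stand-ins only; not the real TABLE or ha_rocksdb classes.
  struct TableLike
  {
    std::vector<uint8_t> record0;      // models table->record[0], shared with the SQL layer
  };

  struct EngineLike
  {
    std::vector<uint8_t> pack_buffer;  // engine-owned scratch space for key conversion

    void pack_key_for_lookup(const TableLike &t)
    {
      // Convert on a private copy so the caller's record buffer, which
      // ha_check_overlaps still needs intact, is left untouched.
      pack_buffer.assign(t.record0.begin(), t.record0.end());
    }
  };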
The `rocksdb` MTR suite was checked and runs fine.
No need for additional tests: the existing overlaps.test covers the case completely.
However, I am not going to add anything related to rocksdb to the suite, to keep
it free of additional dependencies.
To run the tests with the RocksDB engine, one can add the following to engines.combinations:
[rocksdb]
plugin-load=$HA_ROCKSDB_SO
default-storage-engine=rocksdb
rocksdb
Respect system fields in NO_ZERO_DATE mode.
This is subject to refactoring in MDEV-19597.
Conflict resolution from 7d5223310789f967106d86ce193ef31b315ecff0
MDEV-21398 Deadlock (server hang) or assertion failure in
Diagnostics_area::set_error_status upon ALTER under lock
This failure could only happen if one locked the same table
multiple times and then did an ALTER TABLE on the table.
The major change is to replace all instances of
table->m_needs_reopen= true;
with
table->mark_table_for_reopen();
The main fix for the problem was to ensure that we mark all
instances of the table in the locked_table_list and that, when we
reopen the tables, we first close all of them before reopening
and locking them (see the sketch below).
Other things:
- Don't call thd->locked_tables_list.reopen_tables if there
are no tables marked for reopen. (performance)
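A toy model of the reopen logic described above; every type and member below
is an illustrative stand-in, not the real TABLE or Locked_tables_list code:

  #include <vector>

  // Illustrative stand-ins; not the real TABLE / Locked_tables_list classes.
  struct TableInstance
  {
    int share_id;                     // instances of the same table share this id
    bool needs_reopen= false;
    bool is_open= true;
  };

  struct LockedTablesList
  {
    std::vector<TableInstance> tables;

    // Mark every instance of the table, not just the one currently at hand.
    void mark_table_for_reopen(int share_id)
    {
      for (auto &t : tables)
        if (t.share_id == share_id)
          t.needs_reopen= true;
    }

    void reopen_tables()
    {
      bool any_marked= false;
      for (const auto &t : tables)
        any_marked|= t.needs_reopen;
      if (!any_marked)                // nothing marked: skip the work entirely
        return;

      // First close all marked instances, only then reopen and relock them.
      for (auto &t : tables)
        if (t.needs_reopen)
          t.is_open= false;
      for (auto &t : tables)
        if (t.needs_reopen)
        {
          t.is_open= true;            // reopen + relock would happen here
          t.needs_reopen= false;
        }
    }
  };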
All changes (except one) are of the type
thd->transaction. -> thd->transaction->
thd->transaction points by default to 'thd->default_transaction'.
This allows us to 'easily' have multiple active transactions for a
THD object, for example when reading data from the mysql.proc table.
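A minimal sketch of the pattern, with heavily simplified stand-ins for THD and
st_transactions:

  // Simplified sketch; not the real THD or st_transactions definitions.
  struct st_transactions { bool active= false; };

  struct THD
  {
    st_transactions default_transaction;
    st_transactions *transaction= &default_transaction;   // default target
  };

  // Because code goes through thd->transaction->, a caller can temporarily
  // swap in a separate transaction context (e.g. while reading mysql.proc)
  // and restore the default afterwards:
  void read_proc_table(THD *thd)
  {
    st_transactions proc_trans;
    st_transactions *saved= thd->transaction;
    thd->transaction= &proc_trans;
    // ... access mysql.proc using the temporary context ...
    thd->transaction= saved;
  }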
- ALTER_ALGORITHM should be substituted when the ALTER statement does not
mention an algorithm.
- Introduced algorithm(thd) in Alter_info. It returns the user-requested
algorithm; if the user doesn't specify an algorithm explicitly, it returns
the alter_algorithm variable (see the sketch after this list).
- Changed algorithm() to get_algorithm(thd) to return the algorithm name for
displaying the error.
- Added set_requested_algorithm(algo_value) to avoid direct assignment to the
requested_algorithm variable.
- Avoid direct access to requested_algorithm, to encapsulate the variable.
- Inplace ALTER shouldn't set a default DATE column to '0000-00-00' when the
table is not empty. So mysql_inplace_alter_table() copied
alter_ctx.error_if_not_empty to a new field of Alter_inplace_info, and
ha_innobase::check_if_supported_inplace_alter() should check the
error_if_not_empty flag and return INPLACE_NOT_SUPPORTED if the table
is not empty.
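A compact sketch of the algorithm(thd) / set_requested_algorithm() accessors
described above, with the enum, THD and Alter_info reduced to minimal
stand-ins:

  // Simplified stand-ins; the real enum, THD and Alter_info are richer.
  enum enum_alter_table_algorithm
  {
    ALTER_TABLE_ALGORITHM_NONE,       // no ALGORITHM= clause was given
    ALTER_TABLE_ALGORITHM_DEFAULT,
    ALTER_TABLE_ALGORITHM_COPY,
    ALTER_TABLE_ALGORITHM_INPLACE
  };

  struct THD { struct { int alter_algorithm; } variables; };

  struct Alter_info
  {
    enum_alter_table_algorithm requested_algorithm= ALTER_TABLE_ALGORITHM_NONE;

    // Explicit request wins; otherwise fall back to the session variable.
    enum_alter_table_algorithm algorithm(const THD *thd) const
    {
      if (requested_algorithm == ALTER_TABLE_ALGORITHM_NONE)
        return (enum_alter_table_algorithm) thd->variables.alter_algorithm;
      return requested_algorithm;
    }

    // Keeps raw assignments to requested_algorithm out of the rest of the code.
    void set_requested_algorithm(enum_alter_table_algorithm algo_value)
    {
      requested_algorithm= algo_value;
    }
  };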
This is a backport of the applicable part of
commit 93475aff8d and
commit 2c39f69d34
from 10.4.
Before 10.4 and Galera 4, WSREP_ON is a macro that points to
a global Boolean variable, so it is not that expensive to
evaluate, but we will add an unlikely() hint around it.
WSREP_ON_NEW: Remove. This macro was introduced in
commit c863159c32
when reverting WSREP_ON to its previous definition.
We replace some use of WSREP_ON with WSREP(thd), like it was done
in 93475aff8d. Note: the macro
WSREP() in 10.1 is equivalent to WSREP_NNULL() in 10.4.
Item_func_rand::seed_random(): Avoid invoking current_thd
when WSREP is not enabled.
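An illustrative rendering of that pattern, not the actual seed_random() body;
the flag, THD and helper below are stand-ins, and __builtin_expect assumes
GCC/Clang:

  // Simplified stand-ins; the real WSREP_ON, WSREP() and THD differ.
  #define unlikely(x) __builtin_expect(!!(x), 0)

  static bool wsrep_on_flag= false;          // models the global WSREP_ON

  struct THD { bool wsrep_applier= false; };
  static thread_local THD thd_instance;
  static THD *current_thd_stub() { return &thd_instance; }

  static unsigned pick_rand_seed(unsigned from_argument)
  {
    unsigned seed= from_argument;
    if (unlikely(wsrep_on_flag))             // cheap global check first
    {
      THD *thd= current_thd_stub();          // thread-local lookup only when needed
      if (thd->wsrep_applier)
        seed^= 0x9e3779b9u;                  // placeholder for wsrep-specific seeding
    }
    return seed;
  }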
commit 105b879d0f introduced this
warning. The warning looks harmless, but GCC does not understand
that the initialization and the use of the variables are guarded
by the same predicate.
The reason for this is to make all temporary file names similar and
also to be able to figure out where a #sql-xxx name originates from.
The new format is, for most cases:
'#sql-name-current_pid-thread_id[-increment]'
where name is one of subselect, alter, exchange, temptable or backup.
The exceptions are:
ALTER PARTITION shadow files:
'#sql-shadow-thread_id-original_table_name'
Names used with the temp pool:
'#sql-name-current_pid-pool_number'
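A hedged sketch of how such a name can be composed; the helper is illustrative
and hex formatting of the numeric parts is an assumption, not necessarily what
the server does:

  #include <cstddef>
  #include <cstdio>

  // Illustrative only; the server builds these names with its own helpers.
  static void make_tmp_name(char *to, size_t size, const char *name,
                            unsigned long pid, unsigned long long thread_id,
                            unsigned increment)
  {
    if (increment)
      snprintf(to, size, "#sql-%s-%lx-%llx-%x", name, pid, thread_id, increment);
    else
      snprintf(to, size, "#sql-%s-%lx-%llx", name, pid, thread_id);
  }

  // make_tmp_name(buf, sizeof buf, "alter", 12345, 7, 0) -> "#sql-alter-3039-7"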
MDEV-22088 S3 partitioning support
All ALTER PARTITION commands should now work on S3 tables except
REBUILD PARTITION
TRUNCATE PARTITION
REORGANIZE PARTITION
In addition, PARTITIONED S3 TABLES can also be replicated.
This is achieved by storing the partitioned table's .frm and .par files on S3
for partitioned shared (S3) tables.
The discovery methods are enhanced by allowing engines that support
discovery to also handle discovery of a partitioned table's .frm and .par files.
Things in more detail:
- The .frm and .par files of partitioned tables are stored in S3 and kept
in sync.
- Added the hton callback create_partitioning_metadata to inform the handler
that the metadata for a partitioned file has changed
- Added back handler::discover_check_version() to be able to check if
a table's or a part table's definition has changed.
- Added handler::check_if_updates_are_ignored(). Needed for partitioning.
- Renamed rebind() -> rebind_psi(), as it was before.
- Changed the CHF_xxx handler flags to an enum
- Changed some checks from using table->file->ht to use
table->file->partition_ht() to get discovery to work with partitioning.
- If TABLE_SHARE::init_from_binary_frm_image() fails, ensure that we
don't leave any .frm or .par files around.
- Fixed writefrm() so that it doesn't leave unusable .frm files around
- Appended the extension to the path in writefrm() to be able to reuse the
function for creating .par files.
- Added DBUG_PUSH("") to a few functions that caused a lot of non-critical
tracing.
In main.index_merge_myisam we remove the test that was added in
commit a2d24def8c because
it duplicates the test case that was added in
commit 5af12e4635.
`inited == NONE` at initialization time does not always mean that it will
still be `NONE` later, at execution time. Use a more complex,
caller-specific logic to decide whether to create a cloned lookup handler.
Besides LOAD (as in the original bug report) make sure that all
prepare_for_insert() invocations are covered by tests. Add tests for
CREATE ... SELECT, multi-UPDATE, and multi-DELETE.
Don't enable write cache with long uniques.
- Fixed mysql_prepare_create_table() constraint duplicate checking;
- Refactored period constraint handling in mysql_prepare_alter_table():
* No need to allocate new objects;
* Keep old constraint name but exclude it from dup checking by automatic_name;
- Some minor memory leaks fixed;
- Some conceptual TODOs.
TDC_RT_REMOVE_ALL -> tdc_remove_table(). Some occurrences replaced with
TDC_element::flush() (whenever TABLE_SHARE is available).
TDC_RT_REMOVE_NOT_OWN[_KEEP_SHARE] -> TDC_element::flush(). These modes
assume that current thread owns TABLE_SHARE reference, which means we can
avoid hash lookup and flush unused TABLE instances directly.
TDC_RT_REMOVE_UNUSED -> TDC_element::flush_unused(). Only [ab]used by
mysql_admin_table() currently. Should be removed eventually.
Part of MDEV-17882 - Cleanup refresh version
Removed redundant tdc_remove_table(TDC_RT_REMOVE_ALL). Share was marked
flushed by preceding wait_while_table_is_used() and eventually flushed by
close_all_tables_for_name().
Part of MDEV-17882 - Cleanup refresh version
close_all_tables_for_name() is always preceded by
wait_while_table_is_used(), which makes tdc_remove_table() redundant.
The only (now fixed) exception was close_cached_tables().
Part of MDEV-17882 - Cleanup refresh version
* The overlaps check is implemented at the handler level, per row command.
It creates a separate cursor (actually, another handler instance) and
caches it inside the original handler when ha_update_row or
ha_insert_row is issued. The cursor is closed when the handler is unlocked.
* Having the same key in the index already means a unique constraint
violation in the usual sense. So we fetch the left and right neighbours and
check that they have the same key prefix, excluding only the period part
from the key. If it doesn't match, then there is no such neighbour and the
check passes. Otherwise, we check whether this neighbour intersects with the
considered key (a toy version of this check is sketched below).
* The check does not introduce a new error and fails with the ER_DUPP_KEY
error. This might break the REPLACE workflow and should be fixed separately
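A self-contained toy of that neighbour check, using an ordered map as a
stand-in for the index; all names are illustrative, and existing periods with
the same prefix are assumed to be non-overlapping already (the invariant the
check maintains):

  #include <iterator>
  #include <map>
  #include <string>
  #include <utility>

  // Key: (unique-key prefix, period start); value: period end.  Periods are
  // half-open [start, end).  Stands in for the index the handler-level check
  // walks with its cloned cursor.
  using PeriodIndex= std::map<std::pair<std::string, int>, int>;

  // True if [start, end) with the given prefix overlaps an existing period.
  static bool has_overlap(const PeriodIndex &idx, const std::string &prefix,
                          int start, int end)
  {
    auto right= idx.lower_bound({prefix, start});          // right neighbour
    if (right != idx.end() && right->first.first == prefix &&
        right->first.second < end)
      return true;                    // neighbour starts before our period ends
    if (right != idx.begin())
    {
      auto left= std::prev(right);                         // left neighbour
      if (left->first.first == prefix && left->second > start)
        return true;                  // neighbour ends after our period starts
    }
    return false;
  }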
* rename to a generic name
* move remaining initializations from query exec to prepare time
* simplify/unify key handling in open_table_from_share and delayed
* remove dead code
* move tests where they belong