When table2myisam() prepares recinfo structures, the BIT field was
skipped because pack_length_in_rec() returns 0. Instead of the BIT
field, the DB_ROW_HASH_1 field was taken into the recinfo structure
and its length was added to reclength. This is wrong because not
stored fields must not be prepared as record columns (MI_COLUMNDEF)
in the storage layer. 0-length fields are prepared in the "reserve
space for null bits" branch.
The problem only occurs with tables where there is no data for the
main record outside of the null bits.
The fix updates the minpos condition so that we skip fields placed
after stored_rec_length, as these are not stored by definition.
share->reclength already includes not stored lengths from CREATE
TABLE, so we cannot use it as the minpos starting point.
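Illustrative sketch of the idea (Field, ColumnDef and build_columns
are simplified stand-ins, not the actual MyISAM structures):

  // Sketch only: simplified stand-ins for TABLE fields and MI_COLUMNDEF.
  #include <cstdint>
  #include <iostream>
  #include <vector>

  struct Field
  {
    const char *name;
    uint32_t offset;        // offset of the field in the record buffer
    uint32_t pack_length;   // 0 for fields packed into the null bits (BIT)
    bool stored_in_record;  // false for fields such as DB_ROW_HASH_1
  };

  struct ColumnDef { const char *name; uint32_t length; };

  // Build storage-layer column definitions. Fields placed at or beyond
  // stored_rec_length are not stored by definition and must be skipped,
  // mirroring the updated minpos condition described above.
  static std::vector<ColumnDef>
  build_columns(const std::vector<Field> &fields, uint32_t stored_rec_length)
  {
    std::vector<ColumnDef> cols;
    for (const Field &f : fields)
    {
      if (!f.stored_in_record || f.offset >= stored_rec_length)
        continue;           // not stored: no column definition
      if (f.pack_length == 0)
        continue;           // lives in the null bits, reserved elsewhere
      cols.push_back({f.name, f.pack_length});
    }
    return cols;
  }

  int main()
  {
    // A table whose only stored data is a BIT field kept in the null
    // bits, plus a not stored hash field placed after stored_rec_length.
    std::vector<Field> fields=
    {
      {"b",             0, 0, true },  // BIT: pack_length_in_rec() == 0
      {"DB_ROW_HASH_1", 1, 8, false},  // not stored, must not be a column
    };
    for (const ColumnDef &c : build_columns(fields, 1))
      std::cout << c.name << " len=" << c.length << '\n';  // prints nothing
    return 0;
  }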
In Aria there is no "reserve space for null bits" branch and it does
not create a column definition for BIT. There is also no
setup_vcols_for_repair() to reproduce the issue. Nonetheless it
redundantly creates column definitions for not stored fields, so we
patch table2maria() as well. The test case for Aria only demonstrates
that the BIT field works; it does not reproduce any issue (a
redundant column definition for a not stored field does not seem to
cause any problems).
Cannot add a foreign key on a table with a long UNIQUE multi-column
index that contains a foreign key as a prefix.
Check that index algorithms match during duplicate removal of
"generated" keys.
`limit >= trx_id' failed in purge_node_t::skip
For fast alter partition, ALTER lost hash fields in the frm field
count. mysql_prepare_create_table() did not call add_hash_field()
because the logic of ALTER-ing field types implies automatic
promotion/demotion to/from a hash index. So we don't pass a hash
algorithm to mysql_prepare_create_table() and let it decide itself,
but it cannot decide correctly for fast alter partition.
So now mysql_prepare_alter_table() is a bit more sophisticated about
which algorithm to pass. If no fields were changed, it forces
mysql_prepare_create_table() to re-add hash fields by setting
HA_KEY_ALG_HASH.
The problem with the original logic is that
mysql_prepare_alter_table() does not fully own the hash property, so
the decision is blurred between mysql_prepare_alter_table() and
mysql_prepare_create_table().
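A minimal sketch of that decision (fields_changed and
key_was_long_hash are simplified placeholders for the real ALTER
info; the enum stands in for the HA_KEY_ALG_* values):

  // Sketch only: which key algorithm mysql_prepare_alter_table() passes
  // down so that mysql_prepare_create_table() re-adds hash fields.
  #include <iostream>

  enum KeyAlg { KEY_ALG_UNDEF= 0, KEY_ALG_HASH= 1 };

  // Fast ALTER ... PARTITION changes no fields, so the prepare-create
  // code cannot infer the promotion to a hash index on its own; in that
  // case we force KEY_ALG_HASH. Otherwise we leave the algorithm
  // undefined and let mysql_prepare_create_table() decide, as before.
  static KeyAlg key_alg_for_alter(bool fields_changed, bool key_was_long_hash)
  {
    if (!fields_changed && key_was_long_hash)
      return KEY_ALG_HASH;
    return KEY_ALG_UNDEF;
  }

  int main()
  {
    std::cout << key_alg_for_alter(false, true) << '\n'; // 1: force re-add
    std::cout << key_alg_for_alter(true, true)  << '\n'; // 0: let create decide
    return 0;
  }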
Under unknown circumstances, the SQL layer may wrongly disregard an
invocation of thd_mark_transaction_to_rollback() when an InnoDB
transaction had been aborted (rolled back) due to one of the following errors:
* HA_ERR_LOCK_DEADLOCK
* HA_ERR_RECORD_CHANGED (if innodb_snapshot_isolation=ON)
* HA_ERR_LOCK_WAIT_TIMEOUT (if innodb_rollback_on_timeout=ON)
Such an error used to cause a crash of InnoDB during transaction commit.
These changes aim to catch and report the error earlier, so that not only
this crash can be avoided but also the original root cause be found and
fixed more easily later.
The idea of this fix is from Michael 'Monty' Widenius.
HA_ERR_ROLLBACK: A new error code that will be translated into
ER_ROLLBACK_ONLY, signalling that the current transaction
has been aborted and the only allowed action is ROLLBACK.
trx_t::state: Add TRX_STATE_ABORTED that is like
TRX_STATE_NOT_STARTED, but noting that the transaction had been
rolled back and aborted.
trx_t::is_started(): Replaces trx_is_started().
ha_innobase: Check the transaction state in various places.
Simplify the logic around SAVEPOINT.
ha_innobase::is_valid_trx(): Replaces ha_innobase::is_read_only().
The InnoDB logic around transaction savepoints, commit, and rollback
was unnecessarily complex and might have contributed to this
inconsistency. So, we are simplifying that logic as well.
trx_savept_t: Replace with const undo_no_t*. When we rollback to
a savepoint, all we need to know is the number of undo log records
that must survive.
trx_named_savept_t, DB_NO_SAVEPOINT: Remove. We can store undo_no_t
directly in the space allocated at innobase_hton->savepoint_offset.
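A simplified sketch of the idea that a savepoint is just the number
of undo log records that must survive (Trx, undo_no_t and rollback_to
here are illustrative stand-ins, not the InnoDB definitions):

  // Sketch only: a savepoint reduced to "how many undo records to keep".
  #include <cstdint>
  #include <iostream>
  #include <string>
  #include <vector>

  typedef uint64_t undo_no_t;

  struct Trx
  {
    std::vector<std::string> undo_log;   // one entry per undo record

    // Taking a savepoint records only the current undo record count;
    // this value is what would live in the space allocated at
    // innobase_hton->savepoint_offset.
    undo_no_t savepoint() const { return undo_log.size(); }

    // Rolling back to a savepoint keeps exactly `savept` undo records
    // and undoes (discards) everything logged after it.
    void rollback_to(undo_no_t savept)
    {
      while (undo_log.size() > savept)
        undo_log.pop_back();             // "undo" the newest change
    }
  };

  int main()
  {
    Trx trx;
    trx.undo_log.push_back("insert r1");
    undo_no_t sp= trx.savepoint();       // 1 record must survive
    trx.undo_log.push_back("insert r2");
    trx.undo_log.push_back("update r1");
    trx.rollback_to(sp);
    std::cout << trx.undo_log.size() << '\n';  // prints 1
    return 0;
  }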
fts_trx_create(): Do not copy previous savepoints.
fts_savepoint_rollback(): If a savepoint was not found, roll back
everything after the default savepoint of fts_trx_create().
The test innodb_fts.savepoint is extended to cover this code.
Reviewed by: Vladislav Lesin
Tested by: Matthias Leich
Updated tests: cases with bugs or that cannot be run with the cursor
protocol were excluded with
"--disable_cursor_protocol"/"--enable_cursor_protocol".
Fix for v.10.5
On disable_indexes(HA_KEY_SWITCH_NONUNIQ_SAVE) the engine does not
know that the long unique is logically unique, because at the engine
level it is not, so the engine disables it.
Change the disable_indexes/enable_indexes API. Instead of the enum
mode, send a key_map of indexes that should be enabled. This way the
server will decide what is unique, not the engine.
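A minimal sketch of the new API shape (std::bitset stands in for
key_map; toy_handler is not the real handler class):

  // Sketch only: the server passes the exact set of indexes that must
  // stay enabled, so the engine no longer guesses which "non-unique"
  // index is logically unique (e.g. a long UNIQUE ... USING HASH).
  #include <bitset>
  #include <iostream>

  typedef std::bitset<64> key_map;   // stand-in for the server's key_map

  struct toy_handler
  {
    key_map enabled;

    // New-style API: the server decides which keys remain enabled.
    void disable_indexes(const key_map &keep) { enabled= keep; }
    void enable_indexes(const key_map &more)  { enabled|= more; }
  };

  int main()
  {
    toy_handler h;
    h.enabled.set();          // both keys enabled initially
    key_map keep;
    keep.set(0);              // key 0: engine-level non-unique, but a
                              // long unique hash key, so it stays on
    h.disable_indexes(keep);  // key 1, a plain non-unique index, goes off
    std::cout << h.enabled.test(0) << ' ' << h.enabled.test(1) << '\n'; // 1 0
    return 0;
  }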
When HA_DUPLICATE_POS is not supported, the row to replace was
located with ha_index_read_idx_map, which navigates by the hash
alone. Thus, given a hash collision, it may choose an incorrect row.
handler::position would be correct and very convenient to use here.
dup_ref is already set by the handler independently of the engine
capabilities whenever an extra lookup is made (for a long unique or
something else, for example WITHOUT OVERLAPS); such an error is
indicated by file->lookup_errkey != -1.
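A rough sketch of the difference (the rows, find_by_hash and dup_ref
below are invented for illustration only):

  // Sketch only: why REPLACE must use the saved row position (dup_ref)
  // rather than re-reading by hash when distinct keys share a hash.
  #include <cstdint>
  #include <iostream>
  #include <string>
  #include <vector>

  struct Row { std::string key; uint64_t hash; };

  // Invented stand-in for a hash-only index lookup.
  static int find_by_hash(const std::vector<Row> &rows, uint64_t h)
  {
    for (size_t i= 0; i < rows.size(); i++)
      if (rows[i].hash == h)
        return (int) i;       // first hash match, may be the wrong row
    return -1;
  }

  int main()
  {
    // Two rows whose long unique hashes collide.
    std::vector<Row> rows= { {"aaa", 42}, {"bbb", 42} };

    // The duplicate check for an incoming "bbb" compared full values,
    // found row 1, and handler::position() saved that spot as dup_ref.
    int dup_ref= 1;

    // Re-reading by hash alone (the old path) picks row 0.
    std::cout << rows[find_by_hash(rows, 42)].key << '\n';  // aaa, wrong
    // Using the saved position picks the actual conflicting row.
    std::cout << rows[dup_ref].key << '\n';                 // bbb, correct
    return 0;
  }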
write_record() when performing REPLACE has an optimization:
- if the unique violation happened in the last unique key, then do UPDATE
- otherwise, do DELETE+INSERT
This patch changes the way of detecting whether this optimization
can be applied when the table has long (hash based) unique
(i.e. UNIQUE..USING HASH) constraints.
Problem:
The old condition did not take into account that
TABLE_SHARE and TABLE see long uniques differently:
- TABLE_SHARE sees them as HA_KEY_ALG_LONG_HASH and HA_NOSAME
- TABLE sees them as usual non-unique indexes
So the old condition could erroneously decide that the UPDATE optimization
is possible when there are still some unique hash constraints in the table.
Fix:
- If the current key is a long unique, it now works as follows:
UPDATE can be done if the current long unique is the last
long unique, and there are no in-engine (normal) uniques.
- For in-engine uniques nothing changes, it still works as before:
If the current key is an in-engine (normal) unique:
UPDATE can be done if it is the last normal unique.
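A condensed sketch that just transcribes these rules (the boolean
inputs stand in for what write_record() derives from the duplicated
key):

  // Sketch only: when REPLACE may be turned into a single UPDATE
  // instead of DELETE + INSERT, under the rules described above.
  #include <iostream>

  static bool can_do_update(bool key_is_long_unique,
                            bool is_last_long_unique,
                            bool is_last_engine_unique,
                            bool has_engine_uniques)
  {
    if (key_is_long_unique)
      // UPDATE only if this is the last long unique and there are no
      // in-engine (normal) uniques left that could still be violated.
      return is_last_long_unique && !has_engine_uniques;
    // In-engine uniques: unchanged, UPDATE only if it is the last one.
    return is_last_engine_unique;
  }

  int main()
  {
    std::cout << can_do_update(true,  true,  false, false) << '\n'; // 1: UPDATE
    std::cout << can_do_update(true,  true,  false, true)  << '\n'; // 0: DELETE+INSERT
    std::cout << can_do_update(false, false, true,  true)  << '\n'; // 1: UPDATE
    return 0;
  }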
Use the original, not the truncated, field in the long unique
prefix, that is, in the hash(left(field, length)) expression, because
MyISAM CHECK/REPAIR in compute_vcols() moves table->field but not the
prefix fields from keyparts.
Also, implement Field_string::cmp_prefix() so that prefix comparison
of CHAR columns works.
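An illustrative sketch of such a prefix comparison for space-padded
CHAR values (plain std::string here in place of the Field machinery):

  // Sketch only: only the first prefix_len characters take part in the
  // comparison; the padding of the stored CHAR value is irrelevant.
  #include <iostream>
  #include <string>

  static int cmp_prefix(const std::string &stored_padded,
                        const std::string &key, size_t prefix_len)
  {
    return stored_padded.compare(0, prefix_len, key, 0, prefix_len);
  }

  int main()
  {
    // CHAR(8) stores "foobar" as "foobar  "; a 3-character prefix key
    // must treat "foobar  " and "foozzz" as equal ("foo" == "foo").
    std::cout << cmp_prefix("foobar  ", "foozzz", 3) << '\n'; // 0: equal
    std::cout << cmp_prefix("foobar  ", "barzzz", 3) << '\n'; // non-zero
    return 0;
  }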
Calculate the auto-inc value even if the long unique duplicate check
fails - this is what the engine does for normal uniques. The auto-inc
value is needed if it's a REPLACE.
The MDEV-29693 conflict resolution is from Monty, as is a bug fix
where ANALYZE TABLE wrongly built histograms for a single-column
PRIMARY KEY.
Also includes a fix for safe_malloc error reporting.
Other things:
- Copied main.log_slow from 10.4 to avoid mtr issue
Disabled test:
- spider/bugfix.mdev_27239 because we started to get
+Error 1429 Unable to connect to foreign data source: localhost
-Error 1158 Got an error reading communication packets
- main.delayed
- Bug#54332 Deadlock with two connections doing LOCK TABLE+INSERT DELAYED
This part is disabled for now as it fails randomly with different
warnings/errors (no corruption).
This patch adds, for "--ps-protocol", a second execution of "SELECT"
queries.
This patch also adds the ability to disable/enable
(--disable_ps2_protocol/--enable_ps2_protocol) the second execution
for "--ps-protocol" in testcases.
Disable the bulk insert optimization if long uniques are used,
because they need to read the table (index_read) after every inserted
row, and the bulk insert optimization might disable indexes.
Bulk insert is already disabled in other cases when there are chances
that the table will be read during the bulk insert.
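A trivial sketch of the resulting condition (both flags are
illustrative placeholders):

  // Sketch only: whether the bulk insert optimization may be used.
  #include <iostream>

  // Long (hash based) uniques must read the table (index_read) after
  // every inserted row, while bulk insert may disable indexes, so the
  // two cannot be combined; the same applies to any other case where
  // the table may be read during the bulk insert.
  static bool may_use_bulk_insert(bool has_long_unique_keys,
                                  bool table_may_be_read_during_insert)
  {
    return !has_long_unique_keys && !table_may_be_read_during_insert;
  }

  int main()
  {
    std::cout << may_use_bulk_insert(true, false)  << '\n'; // 0: disabled
    std::cout << may_use_bulk_insert(false, false) << '\n'; // 1: allowed
    return 0;
  }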
Problem:
We are able to insert a duplicate value into the table because
cmp_binary_offset is not able to differentiate between NULL and an
empty string. So check_duplicate_long_entry_key is never called and
we don't check for the duplicate.
Solution:
Added an if condition with is_null() on the field, which can
differentiate between NULL and an empty string.
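A small sketch of the effect (FieldVal and the comparison helpers are
illustrative, not the server code):

  // Sketch only: a byte-wise comparison sees NULL and '' as equal, so
  // the long unique duplicate check is skipped; consulting is_null()
  // first tells them apart and the check gets triggered.
  #include <iostream>
  #include <string>

  struct FieldVal
  {
    bool is_null;
    std::string data;       // empty for both NULL and ''
  };

  // Old behaviour: only the stored bytes are compared.
  static bool equal_binary_only(const FieldVal &a, const FieldVal &b)
  {
    return a.data == b.data;
  }

  // Fixed behaviour: check is_null() before comparing the bytes.
  static bool equal_with_null_check(const FieldVal &a, const FieldVal &b)
  {
    if (a.is_null || b.is_null)
      return a.is_null && b.is_null;
    return a.data == b.data;
  }

  int main()
  {
    FieldVal null_val{true, ""}, empty_val{false, ""};
    std::cout << equal_binary_only(null_val, empty_val)     << '\n'; // 1
    std::cout << equal_with_null_check(null_val, empty_val) << '\n'; // 0
    return 0;
  }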
The handler for the existing partition was already index-inited at
the beginning of copy_partitions().
In the case of REORGANIZE PARTITION we fill the new partition by
calling its ha_write_row() (the handler is the storage engine of the
new partition). From there we go through the conditions below:
if (this->inited == RND)
table->clone_handler_for_update();
handler *h= table->update_handler ? table->update_handler : table->file;
First, the above misses the meaning of the this->inited check. Here
it is a new partition and this handler is not inited. So we assign
table->file, which is ha_partition and is not actually known to be
inited or not. It is assumed that (this == table->file), otherwise we
are outside the logic for using update_handler. This patch adds a
DBUG_ASSERT for that.
Second, we call check_duplicate_long_entries() for table->file and
that calls ha_partition::index_init(), which calls index_init() for
each partition's handler. But the existing partitions' handlers were
already inited in copy_partitions() and we fail on an assertion.
The fix implies that we don't need check_duplicate_long_entries()
per partition, as we've already done check_duplicate_long_entries()
for ha_partition. For REORGANIZE PARTITION that means the existing
row was already checked by previous INSERT/UPDATE commands, so there
is no need to check it again (see the NOTE in
handler::ha_write_row()).
The fix also optimizes ha_update_row() so that
check_duplicate_long_entries_update() is not called per partition,
considering it was already called for ha_partition. Besides, a
per-partition duplicate check is not really usable.
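A toy sketch of where the check now lives (part_handler is purely
illustrative):

  // Sketch only: the long unique duplicate check belongs to the
  // ha_partition-level wrapper that sees all partitions; repeating it
  // per partition would also re-run index_init() on handlers that
  // copy_partitions() has already index-inited.
  #include <iostream>
  #include <vector>

  struct part_handler
  {
    bool is_partition_wrapper;  // true for the ha_partition-like object
    bool needs_long_unique_check() const { return is_partition_wrapper; }
  };

  int main()
  {
    part_handler wrapper{true};
    std::vector<part_handler> partitions{ {false}, {false} };

    std::cout << wrapper.needs_long_unique_check() << '\n';   // 1
    for (const part_handler &p : partitions)
      std::cout << p.needs_long_unique_check() << '\n';       // 0 0
    return 0;
  }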
Long UNIQUE HASH index silently creates a virtual column index,
which should be impossible for base columns featuring AUTO_INCREMENT.
Fix: add a relevant check; add a new vcol type for a prettier error
message.
Dead code cleanup:
part_info->num_parts usage was wrong and worked incorrectly in
mysql_drop_partitions() because num_parts is already updated in
prep_alter_part_table(). We don't have to update part_info->partitions
because part_info is destroyed at alter_partition_lock_handling().
Cleanups:
- DBUG_EVALUATE_IF() macro replaced by shorter form DBUG_IF();
- Typo in ER_KEY_COLUMN_DOES_NOT_EXITS.
Refactorings:
- Split write_log_replace_delete_frm() into write_log_delete_frm()
and write_log_replace_frm();
- partition_info via DDL_LOG_STATE;
- set_part_info_exec_log_entry() removed.
DBUG_EVALUATE removed
DBUG_EVALUATE was only added for consistency together with
DBUG_EVALUATE_IF. It is not used anywhere in the code.
DBUG_SUICIDE() fix on release build
On release builds DBUG_SUICIDE() was a statement. That was wrong, as
DBUG_SUICIDE() is used in expression context.
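An illustrative sketch of why the expression form is required (the
macros below are stand-ins, not the real dbug ones):

  // Sketch only: a statement-like macro cannot be used where a value
  // is expected, while an expression form can.
  #include <cstdlib>

  // Statement form: fine as a standalone statement, but
  //   int x= rc ? DBUG_SUICIDE_STMT() : 0;   // does not compile
  // #define DBUG_SUICIDE_STMT() do { abort(); } while (0)

  // Expression form: usable in expression context.
  #define DBUG_SUICIDE_EXPR() (abort(), 0)

  static int check(int rc)
  {
    // Typical expression-context use: terminate on an unexpected code.
    return rc != 0 ? DBUG_SUICIDE_EXPR() : rc;
  }

  int main()
  {
    return check(0);           // rc == 0, so abort() is never reached
  }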