Problem:
=======
- A ROW_FORMAT=REDUNDANT table fails an INSERT after an instant
DROP of a BLOB column. Instant DROP COLUMN only marks the column
as hidden, and a subsequent INSERT statement tries to insert a
NULL value for the dropped BLOB column, but the fixed length of
the BLOB type is returned as 65535. This leads to a
"row size too large" error.
Fix:
====
For a ROW_FORMAT=REDUNDANT table, if the dropped non-fixed column
can be NULL, set the length of the field type to 0.
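A minimal self-contained sketch of the idea (hypothetical names,
not the actual InnoDB declarations): when estimating the row size
for a ROW_FORMAT=REDUNDANT table, a dropped non-fixed column that
may be NULL contributes 0 bytes instead of the 65535-byte fixed
length of the BLOB type.

    #include <cstdint>
    #include <iostream>

    struct Column {
      bool dropped;      // column was instantly dropped (kept as hidden)
      bool nullable;     // column can store NULL
      bool fixed;        // fixed-length type
      uint32_t max_len;  // maximum length of the type, e.g. 65535 for BLOB
    };

    // Length the field contributes to the row-size check.
    static uint32_t redundant_field_len(const Column &c) {
      if (c.dropped && !c.fixed && c.nullable)
        return 0;  // dropped nullable variable-length column: no payload
      return c.max_len;
    }

    int main() {
      Column dropped_blob{true, true, false, 65535};
      Column live_blob{false, true, false, 65535};
      std::cout << redundant_field_len(dropped_blob) << '\n';  // 0
      std::cout << redundant_field_len(live_blob) << '\n';     // 65535
    }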
Updated tests: cases with bugs or cases that cannot be run
with the cursor protocol were excluded with
"--disable_cursor_protocol"/"--enable_cursor_protocol"
Fix for v.10.5
Reason:
======
- InnoDB fails to load the instant ALTER TABLE metadata from the
clustered index while loading the table definition.
The reason is that the InnoDB metadata BLOB has a column length
that exceeds the maximum fixed-length column size.
Fix:
===
- InnoDB should treat a long fixed-length column as a
variable-length field that needs external storage while
initializing the field map for the instant ALTER operation.
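A rough sketch of the classification (simplified names; the
768-byte threshold is an assumption standing in for InnoDB's
maximum fixed-length column size):

    #include <cstdint>

    constexpr uint32_t MAX_FIXED_COL_LEN = 768;  // assumed threshold

    enum class FieldKind { FIXED, VARIABLE_EXTERNAL };

    // Overlong "fixed" columns cannot be stored inline as fixed-length
    // fields; treat them as variable-length with possible external storage.
    constexpr FieldKind classify(uint32_t fixed_len) {
      return fixed_len > MAX_FIXED_COL_LEN ? FieldKind::VARIABLE_EXTERNAL
                                           : FieldKind::FIXED;
    }

    int main() {
      static_assert(classify(1024) == FieldKind::VARIABLE_EXTERNAL, "");
      static_assert(classify(255) == FieldKind::FIXED, "");
    }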
row_ins_clust_index_entry_low(): Invoke btr_set_instant() in the same
mini-transaction that has successfully inserted the metadata record.
In this way, if inserting the metadata record fails before any
undo log record was written for it, the index root page will remain
consistent.
innobase_instant_try(): Remove the btr_set_instant() call.
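A toy illustration of the ordering (mock types, not the real
InnoDB interfaces): the root page is marked as instant inside the
same mini-transaction, and only after the metadata record insert
succeeded, so a failed insert leaves the root page untouched.

    #include <iostream>

    struct MiniTransaction { };  // groups page changes into one atomic unit

    static bool insert_metadata_record(MiniTransaction &) {
      return true;  // pretend the metadata record insert succeeded
    }

    static void set_instant_root_format(MiniTransaction &) {
      std::cout << "root page marked as instant\n";
    }

    // Mirrors the ordering described above: mark the root page only after
    // the metadata record insert succeeded, within the same mini-transaction.
    static void clust_index_entry_low(MiniTransaction &mtr) {
      if (!insert_metadata_record(mtr))
        return;                      // nothing logged: root page untouched
      set_instant_root_format(mtr);  // same mtr as the successful insert
    }

    int main() {
      MiniTransaction mtr;
      clust_index_entry_low(mtr);
    }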
This is a 10.6 version of c17aca2f11c21ed53c7f0ce5f0cb39403e20feb9.
Tested by: Matthias Leich
Reviewed by: Thirunarayanan Balathandayuthapani
In any test that uses wait_all_purged.inc, ensure that InnoDB tables
will be created without persistent statistics.
This is a follow-up to commit cd04673a177d40f7c409284d87ead851ec775c36
after a similar failure was observed in the innodb_zip.blob test.
The motivation for introducing the parameter
innodb_purge_rseg_truncate_frequency in
mysql/mysql-server@28bbd66ea5 and
mysql/mysql-server@8fc2120fed
seems to have been to avoid stalls due to freeing undo log pages
or truncating undo log tablespaces. In MariaDB Server,
innodb_undo_log_truncate=ON should be a much lighter operation
than in MySQL, because it will not involve any log checkpoint.
Another source of performance stalls should be
trx_purge_truncate_rseg_history(), which is shrinking the history list
by freeing the undo log pages whose undo records have been purged.
To alleviate that, we will introduce a purge_truncation_task that will
offload this from the purge_coordinator_task. In that way, the next
innodb_purge_batch_size pages may be parsed and purged while the pages
from the previous batch are being freed and the history list being shrunk.
The processing of innodb_undo_log_truncate=ON will still remain the
responsibility of the purge_coordinator_task.
purge_coordinator_state::count: Remove. We will ignore
innodb_purge_rseg_truncate_frequency, and act as if it had been
set to 1 (the maximum shrinking frequency).
purge_coordinator_state::do_purge(): Invoke an asynchronous task
purge_truncation_callback() to free the undo log pages.
purge_sys_t::iterator::free_history(): Free those undo log pages
that have been processed. This used to be a part of
trx_purge_truncate_history().
purge_sys_t::clone_end_view(): Take a new value of purge_sys.head
as a parameter, so that it will be updated while holding exclusive
purge_sys.latch. This is needed for race-free access to the field
in purge_truncation_callback().
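A generic illustration of the offloading pattern (std::async
stands in for the server's task scheduler; none of the names
below are the actual InnoDB interfaces): while a background task
frees the undo pages of the previous batch and shrinks the
history list, the coordinator can already parse and purge the
next batch.

    #include <future>
    #include <iostream>
    #include <vector>

    using Batch = std::vector<int>;  // stand-in for a batch of purged undo pages

    static void free_history(Batch b) {
      // shrink the history list by freeing the pages of this batch
      std::cout << "freed " << b.size() << " undo pages\n";
    }

    static Batch purge_next_batch(int n) {
      return Batch(100, n);  // parse and purge the next batch
    }

    int main() {
      std::future<void> truncation;  // the "purge truncation" task
      for (int n = 0; n < 3; ++n) {
        Batch done = purge_next_batch(n);
        if (truncation.valid())
          truncation.wait();  // previous truncation has finished
        truncation = std::async(std::launch::async, free_history,
                                std::move(done));
      }
      if (truncation.valid())
        truncation.wait();
    }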
Reviewed by: Vladislav Lesin
This patch adds a second execution of SELECT queries for
"--ps-protocol".
It also adds the ability to disable/enable
(--disable_ps2_protocol/--enable_ps2_protocol) the second
execution for "--ps-protocol" in test cases.
btr_lift_page_up(): If the leaf page only contains a hidden metadata
record for MDEV-11369 instant ADD COLUMN, convert the table to the
canonical format like we are supposed to do whenever the table
becomes empty.
prepare_inplace_add_virtual(): Over-estimate the size of the arrays
by not subtracting table->s->virtual_fields (which may refer to
stored, not virtual generated columns). InnoDB only distinguishes
virtual columns.
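A hedged sketch of the sizing change (simplified counters, not
the actual server structures): because table->s->virtual_fields
may also count stored generated columns, the old subtraction was
unreliable, so the arrays are sized without it, accepting a safe
over-estimate.

    #include <cstddef>
    #include <vector>

    struct TableShare {
      size_t fields;          // all columns in the table definition
      size_t virtual_fields;  // generated columns, virtual *or* stored
    };

    // Capacity used for the per-virtual-column work arrays.
    static size_t array_capacity(const TableShare &s) {
      // Previously: s.fields - s.virtual_fields.  Because virtual_fields may
      // also count STORED generated columns, that subtraction is unreliable,
      // so over-estimate by reserving room for every field.
      return s.fields;
    }

    int main() {
      TableShare s{10, 3};
      std::vector<int> work;
      work.reserve(array_capacity(s));  // safe over-estimate
    }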
- InnoDB fails to skip a newly created column while checking
for a changed column when the table is in redundant row format.
This issue was caused by MDEV-18035
(ccb1acbd3c15f0d99d1ea3cd1b206da38fa1c17f).
Let us introduce the parameter innodb_read_only_compressed
that is ON by default, making any ROW_FORMAT=COMPRESSED tables
read-only.
I developed the ROW_FORMAT=COMPRESSED format based on
Heikki Tuuri's rough design between 2005 and 2008. It might
have been a good idea back then, but no proper benchmarks were
ever run to validate the design or the implementation.
The format has been more or less obsolete for years.
It limits innodb_page_size to 16384 bytes (the default),
and instant ALTER TABLE is not supported.
This is the first step towards deprecating and removing
write support for ROW_FORMAT=COMPRESSED tables.
instant_alter_column_possible(): Relax a too strict debug assertion.
The existence of an index stub or a corrupted index on virtual columns
does not imply that virtual columns exist.
In commit 7eda55619654b76add275695e0a6039e60876e81 we removed a
valid debug assertion.
AddressSanitizer would trip when writing undo log for an INSERT
operation that follows the problematic ALTER TABLE because the
v_indexes would refer to an index object that was freed during
the execution of ALTER TABLE.
The operation to remove the NOT NULL attribute from a column
should lead to any affected indexes being dropped and re-created.
But, the SQL layer set the ha_alter_info->handler_flags to
HA_INPLACE_ADD_UNIQUE_INDEX_NO_WRITE|ALTER_COLUMN_NULLABLE
(that is, InnoDB was asked to add an index and change the column,
but not to drop the index that depended on the column).
Let us restore the debug assertion that catches this problem
outside AddressSanitizer, and 'defuse' the offending ALTER TABLE
statement in the test until something has been fixed in the SQL layer.
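A hedged sketch of the kind of consistency check being restored
(the flag names and bit values below are illustrative, not the
server's real handler_flags): making an indexed column nullable
is expected to come together with a request to drop and
re-create the affected index.

    #include <cassert>
    #include <cstdint>

    // Illustrative flag bits only; not the server's real handler_flags values.
    enum : uint64_t {
      ALTER_COLUMN_NULLABLE = 1 << 0,
      ADD_INDEX             = 1 << 1,
      DROP_INDEX            = 1 << 2,
    };

    static void check_alter_flags(uint64_t flags, bool column_is_indexed) {
      if ((flags & ALTER_COLUMN_NULLABLE) && column_is_indexed)
        // A dependent index must be dropped and re-created, so a DROP_INDEX
        // request is expected to accompany the nullability change.
        assert(flags & DROP_INDEX);
    }

    int main() {
      check_alter_flags(ALTER_COLUMN_NULLABLE | ADD_INDEX | DROP_INDEX, true);
    }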
A dict_v_idx_t node was shared between two dict_v_col_t objects
because of a wrong object copy. Replace the plain memory copy with
a copy constructor. The patch also removes the n_v_indexes property
and improves the "page full" judgements for trx_undo_log_v_idx().
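A generic illustration of this bug class (toy types, not
dict_v_col_t itself): a plain byte-wise or default copy of an
object that owns pointers makes both copies share the same node,
whereas a copy constructor gives each object its own.

    #include <iostream>
    #include <memory>
    #include <vector>

    struct IndexRef { int index_id; };

    struct VirtualCol {
      std::vector<std::shared_ptr<IndexRef>> v_indexes;

      VirtualCol() = default;
      // Deep copy: clone each referenced node instead of sharing it.
      VirtualCol(const VirtualCol &other) {
        v_indexes.reserve(other.v_indexes.size());
        for (const auto &ref : other.v_indexes)
          v_indexes.push_back(std::make_shared<IndexRef>(*ref));
      }
    };

    int main() {
      VirtualCol a;
      a.v_indexes.push_back(std::make_shared<IndexRef>(IndexRef{1}));

      VirtualCol b(a);               // copy constructor: independent nodes
      b.v_indexes[0]->index_id = 2;  // does not affect a

      std::cout << a.v_indexes[0]->index_id << ' '
                << b.v_indexes[0]->index_id << '\n';  // 1 2
    }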
dict_col_t::same_encoding(), dict_col_t::same_format(): Allow
an instantaneous change of a column to a compatible encoding,
just like ha_innobase::can_convert_string() and similar functions do.
In commit 0e5a4ac2532c64a545796c787354dc41d61d0e62 (MDEV-15562)
we introduced a bogus debug assertion, demanding that dict_col_t::len
never exceed some maximum value. The intention was to match what
dict_index_add_col() is doing. But, the assignment field->fixed_len = 0
applies to dict_field_t::fixed_len, not dict_col_t::len!
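A tiny sketch of the distinction (simplified structs, not the real
dictionary types; the 768-byte cap is an assumption): the length
that gets capped is the per-index field's fixed_len, while the
column's own len may legitimately stay larger.

    #include <cassert>
    #include <cstdint>

    struct Column { uint32_t len; };         // dict_col_t::len: full type length
    struct IndexField {                      // column as used in one index
      const Column *col;
      uint32_t fixed_len;                    // 0 = not stored as fixed-length
    };

    constexpr uint32_t MAX_FIXED_LEN = 768;  // assumed cap on fixed-length storage

    static IndexField add_col_to_index(const Column &c) {
      IndexField f{&c, c.len};
      if (f.fixed_len > MAX_FIXED_LEN)
        f.fixed_len = 0;                     // stored as variable-length instead
      return f;
    }

    int main() {
      Column blob{65535};
      IndexField f = add_col_to_index(blob);
      assert(f.fixed_len == 0);              // the field length is capped ...
      assert(blob.len == 65535);             // ... but the column length is not
    }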
- Instant ALTER should change only the metadata when the table is
discarded. It shouldn't try to add a metadata record to the
clustered index. Also convert the clustered index to the
non-instant format.
ha_innobase::commit_inplace_alter_table(): After
ALTER_STORED_COLUMN_ORDER, ensure that the virtual column metadata
will be reloaded also when the table is not being rebuilt.
We must relax too strict debug assertions. For latin1_swedish_ci,
mtype=DATA_CHAR or mtype=DATA_VARCHAR will be used instead of
mtype=DATA_MYSQL or mtype=DATA_VARMYSQL. Likewise, some changes of
dtype_get_charset_coll() do not affect the data type encoding,
but only any indexes that are defined on the column.
Charset::same_encoding(): Check whether two charset-collations have
the same character set encoding.
dict_col_t::same_encoding(): Check whether two character columns
have the same character set encoding.
dict_col_t::same_type(): Check whether two columns have a compatible
data type encoding.
dict_col_t::same_format(), dict_table_t::instant_column(): Do not
compare mtype or the charset-collation of prtype directly.
Rely on dict_col_t::same_type() instead.
dtype_get_charset_coll(): Narrow the return type to uint16_t.
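A rough outline of how these checks could compose (hypothetical
simplified types, not the actual member functions): format
compatibility is decided by a character-set-level equivalence plus
a data-type-level compatibility check, instead of comparing mtype
or the charset-collation bits of prtype directly.

    #include <cstdint>

    struct Charset { uint16_t encoding_id; };

    // Same character set encoding, possibly a different collation.
    static bool same_encoding(const Charset &a, const Charset &b) {
      return a.encoding_id == b.encoding_id;
    }

    struct Col {
      Charset cs;
      bool is_string;  // stands in for the string-vs-binary part of mtype
      uint32_t len;
    };

    // Compatible data type encoding.
    static bool same_type(const Col &a, const Col &b) {
      if (a.is_string != b.is_string)
        return false;
      return !a.is_string || same_encoding(a.cs, b.cs);
    }

    // Format compatibility built on same_type(); the length rule here is
    // only an illustrative placeholder for the remaining format checks.
    static bool same_format(const Col &old_col, const Col &new_col) {
      return same_type(old_col, new_col) && new_col.len >= old_col.len;
    }

    int main() {
      Col a{{8}, true, 100}, b{{8}, true, 200};
      return same_format(a, b) ? 0 : 1;  // compatible: instant change possible
    }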
This is a refined version of a fix that was developed by
Thirunarayanan Balathandayuthapani.
innobase_drop_foreign_try(): Don't evict and reload the dict_foreign_t
during instant ALTER TABLE if the FOREIGN KEY constraint is being
dropped.
The MDEV-19630 fix (commit 07b1a26c33b28812a4fd8c814de0fe7d943bbd6b)
was incomplete, because it did not cover a case where the
FOREIGN KEY constraint is being dropped.
Thanks to Eugene Kosov for noting that the fix is incomplete.
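A minimal sketch of the intended control flow (mock types and a
mock drop list, not the real function): when the FOREIGN KEY
constraint itself is being dropped, the evict-and-reload round
trip is skipped.

    #include <string>
    #include <unordered_set>

    struct ForeignKey { std::string id; };

    static void evict_and_reload(ForeignKey &) {
      // expensive dictionary round trip, kept only for constraints that stay
    }

    static void drop_foreign_try(
        ForeignKey &fk, const std::unordered_set<std::string> &being_dropped) {
      if (being_dropped.count(fk.id))
        return;              // constraint goes away anyway: skip evict/reload
      evict_and_reload(fk);  // keep the cached object in sync otherwise
    }

    int main() {
      ForeignKey fk{"fk_child_parent"};
      drop_foreign_try(fk, {"fk_child_parent"});
    }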
It turns out that on instant DROP/reorder column (MDEV-15562),
we must always write the metadata record, even though the table
was empty. Alternatively, we should guarantee that all undo
log records for the table have been purged. (Attempting to do
that by updating table_id leads to other problems; see
commit 1b31d8852c00b4bab6e6fe179b97db45ccb8d535.)
It would be tempting to remove dict_index_t::clear_instant_alter()
altogether, but it turns out that we need it when the instant
ALTER TABLE operation of a first-time DROP COLUMN is being rolled back.
innobase_instant_try(): Clarify a comment. Purge never calls
dict_index_t::clear_instant_alter(), but it may invoke
dict_index_t::clear_instant_add(). On first-time instant DROP/reorder,
always write a metadata record, even if the table is empty.
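A condensed sketch of the decision (illustrative booleans, not the
actual innobase_instant_try() logic): for a first-time instant
DROP/reorder the metadata record is written even when the table
appears empty, because undo log records for the table may not have
been purged yet.

    #include <iostream>

    // Returns whether a metadata record has to be written.
    static bool must_write_metadata(bool first_time_drop_or_reorder,
                                    bool table_is_empty) {
      if (first_time_drop_or_reorder)
        return true;  // even for an "empty" table: undo records may still exist
      // For instant ADD COLUMN on an empty table, the canonical format suffices.
      return !table_is_empty;
    }

    int main() {
      std::cout << must_write_metadata(true, true) << '\n';   // 1
      std::cout << must_write_metadata(false, true) << '\n';  // 0
    }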