- InnoDB tries to build the previous version of the record for
the virtual index, but the undo log record doesn't contain
virtual column information. This leads to an assertion failure while
building the tuple.
Rename to stress that it is a specific hack for Item_func_nextval
and should not be used for other items.
If a vcol uses Item_func_nextval, a corresponding table for the sequence
should be added to the prelocking list (in that sense NEXTVAL is not
simply a function, but more like a subquery), see add_internal_tables()
in DML_prelocking_strategy::handle_table(). At the moment it is only
implemented for DEFAULT, not for GENERATED ALWAYS AS, thus the
VCOL_NEXTVAL hack.
See also commits aa8a31da and 64678c for a Bug #22990029 fix.
In this scenario INSERT chose to check whether delete-unmarking is
available for a just-deleted record. To build an update vector, it needed
to calculate the vcols as well. Since this INSERT was not IGNORE-flagged,
the recalculation failed.
Solution: temporarily set abort_on_warning=true while calculating the
column for the delete-unmarked insert.
As of now InnoDB does not store trx_id for each record in a secondary
index. The idea behind this is the following: store only a per-page
max_trx_id, and delete-mark the records when they are deleted/updated.
When a read starts, it remembers the lowest id among the currently active
transactions. InnoDB refers to it as trx->read_view->m_up_limit_id.
See also ReadView::open.
When the page is fetched, its max_trx_id is compared to m_up_limit_id.
If the value is lower, and the secondary index record is not delete-marked,
then this page is safe to read as is. Otherwise, an access to the clustered
index may be needed. See the page_get_max_trx_id call in row_search_mvcc,
and the corresponding switch (row_search_idx_cond_check(...)) below it.
Virtual columns are required to be updated in case the record was
delete-marked. The motivation behind this is documented in
Row_sel_get_clust_rec_for_mysql::operator() near the
row_sel_sec_rec_is_for_clust_rec call.
This was basically a description of why virtual column computation can
normally happen during SELECT and, more generally, during a vcol index
access.
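Below is a minimal standalone sketch of that visibility decision
(page_get_max_trx_id, m_up_limit_id and row_search_mvcc above are the real
names; the types and the function here are simplified stand-ins, not
InnoDB code):

  #include <cstdint>
  #include <iostream>

  using trx_id_t = std::uint64_t;

  // Stand-in for the read view: the lowest id among transactions that
  // were still active when the read started (m_up_limit_id above).
  struct ReadViewLike { trx_id_t m_up_limit_id; };

  // Decide whether a secondary index record can be used as is, or whether
  // the clustered index has to be consulted (the decision row_search_mvcc
  // makes around its page_get_max_trx_id call).
  static bool need_clust_index_lookup(trx_id_t page_max_trx_id,
                                      const ReadViewLike &view,
                                      bool rec_delete_marked)
  {
    // All changes on this page were made by transactions that committed
    // before the read view was created, and the record is not
    // delete-marked: safe to read the secondary index record as is.
    if (page_max_trx_id < view.m_up_limit_id && !rec_delete_marked)
      return false;
    // Otherwise the record may be too new or deleted for this view;
    // check the row through the clustered index.
    return true;
  }

  int main()
  {
    ReadViewLike view{100};
    std::cout << need_clust_index_lookup(50, view, false)   // 0: read as is
              << need_clust_index_lookup(150, view, false)  // 1: page too new
              << need_clust_index_lookup(50, view, true)    // 1: delete-marked
              << '\n';
  }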
Sometimes stats tables are updated by InnoDB. This starts a new
transaction, and it can happen that it has not finished by the moment of
the SELECT execution, forcing virtual column recomputation. If the result
was something that normally outputs a warning, like division by zero, then
the warning could be output in a racy manner.
The solution is to suppress the warnings when a column is computed
for the described purpose.
An ignore_warnings argument is added to innobase_get_computed_value.
Currently it is only true for a call from
row_sel_sec_rec_is_for_clust_rec.
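A minimal standalone sketch of what the flag changes (the ignore_warnings
name matches the description above; the rest is a simplified stand-in for
the server code):

  #include <iostream>
  #include <string>
  #include <vector>

  // Stand-in for the per-statement warning area.
  static std::vector<std::string> warnings;

  // Stand-in for a computed-value routine: ignore_warnings suppresses the
  // diagnostics when the value is computed only for an internal comparison
  // (as in row_sel_sec_rec_is_for_clust_rec), not for the user.
  static long long computed_value(long long a, long long b,
                                  bool ignore_warnings)
  {
    if (b == 0)
    {
      if (!ignore_warnings)
        warnings.push_back("Division by 0");
      return 0;                        // placeholder for a NULL result
    }
    return a / b;
  }

  int main()
  {
    computed_value(1, 0, /*ignore_warnings=*/true);   // internal use: silent
    computed_value(1, 0, /*ignore_warnings=*/false);  // user-visible: warns
    std::cout << "warnings: " << warnings.size() << '\n';  // prints 1
  }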
row_purge_get_partial(): Replaces trx_undo_rec_get_partial_row().
Also copy the purge_node_t::ref to the purge_node_t::row.
In this way, the clustered index key fields will always be
available, even if thanks to
commit d384ead0f0 (MDEV-14799)
they would no longer be repeated in the remaining part of the
undo log record.
There are separate flags DBUG_OFF for disabling the DBUG facility
and ENABLED_DEBUG_SYNC for enabling the DEBUG_SYNC facility.
Let us allow debug builds without DEBUG_SYNC.
Note: For CMAKE_BUILD_TYPE=Debug, CMakeLists.txt will continue to
define ENABLED_DEBUG_SYNC.
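A sketch of how the two switches stay independent (DBUG_OFF and
ENABLED_DEBUG_SYNC are the real macros; the functions below are simplified
stand-ins, not the actual DBUG/DEBUG_SYNC implementations):

  #include <cstdio>

  // DBUG-style tracing: compiled in unless DBUG_OFF is defined.
  #ifndef DBUG_OFF
  static void dbug_trace(const char *msg) { std::printf("dbug: %s\n", msg); }
  #else
  static void dbug_trace(const char *) {}
  #endif

  // DEBUG_SYNC-style sync points: compiled in only when ENABLED_DEBUG_SYNC
  // is defined, independently of DBUG_OFF.
  #ifdef ENABLED_DEBUG_SYNC
  static void debug_sync_point(const char *name)
  { std::printf("sync point: %s\n", name); }
  #else
  static void debug_sync_point(const char *) {}
  #endif

  int main()
  {
    // A debug build (built without -DDBUG_OFF) can now be made with or
    // without -DENABLED_DEBUG_SYNC; the two facilities toggle independently.
    dbug_trace("open table");
    debug_sync_point("after_open");
  }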
- Simplified Chinese translation added
- Character encoding is gbk
-- gbk covers more characters
-- gbk includes both Simplified and Traditional
-- best option I think, may need to work along with other locale
   settings
- Other cleanup
-- Within each error, messages are sorted according to language code
-- More consistent formatting (8 spaces preceding each translation)
-- jps removed as a duplicate of the jpn translation
This should be a good starting point. More refinement is appreciated,
and needed down the road.
English "containt" (sic) spelling fixes on ER_FK_NO_INDEX_{CHILD,PARENT}
resulting in mtr test case adjustments.
Edited/reviewed by Daniel Black
extra2_read_len resolved by keeping the implementation
in sql/table.cc and exposing it for use by ha_partition.cc.
Remove the identical implementation in unireg.h
(ref: bfed2c7d57)
- In ha_innobase::prepare_inplace_alter_table(), InnoDB should
check whether the table is empty. If the table is empty then the
server should avoid downgrading the MDL after the prepare phase.
It is more like an instant ALTER: it changes only the dictionary
and metadata.
- Changed a few debug test cases to make the DDL table non-empty
The initial test case for MySQL Bug #33053297 is based on
mysql/mysql-server@27130e2507.
innobase_get_field_from_update_vector is not a suitable function to fetch
updated row info; likewise, the parent table's update vector is not always
suitable. For instance, in the case of DELETE it contains undefined data.
The cascade->update vector seems to be good enough to fetch all base column
update data, and is besides faster and less error-prone.
This is a duplicate of MDEV-18278 89936f11e9, but I will add an
additional assertion
Description:
The frm corruption should not be reported during CREATE TABLE. Normally
it isn't, and the data to fill TABLE is taken by the open_table_from_share
call. However, the vcol data is stored as an SQL string in
table->s->vcol_defs.str and is parsed anyway on each table open.
It is impossible [or hard] to avoid, because it's hard to clone the
expression tree in general (it's easier to parse it again).
Normally parse_vcol_defs should only fail on semantic errors. If so,
error_reported is set to true. Any other failure is not expected during
table creation: there is either an unhandled/unacknowledged error, or
something went really wrong, like a failed memory allocation. All of this
should be asserted anyway.
Solution:
* Set *error_reported=true for the forward references check;
* Assert for every unacknowledged error during table creation.
Server crashes in Field::register_field_in_read_map upon SELECT from a
partitioned table with a virtual column indexed by prefix.
After several read-mark fixes a problem has surfaced:
Since KEY (c(10),a) uses only a prefix of c, a new field is created,
duplicated from table->field[3], with a new length. However,
vcol_info->expr is not copied.
Therefore, (*key_info)->key_part[i].field->vcol_info->expr was left NULL
in ha_partition::index_init().
Solution: copy vcol_info from the table field when it is set up.
len contained garbage, since vctempl->mysql_col_offset contained an old
value when row_mysql_store_col_in_innobase_format was called from
innobase_get_computed_value().
It was not updated after the first ALTER TABLE call, because its INPLACE
logic considered there was nothing to update, and exited immediately from
ha_innobase::inplace_alter_table().
However, the vcol metadata needs an update, since the vcol structure is
changed in the MySQL record.
The regression was introduced by 12614af1fe. There, the refcount==1
condition was removed, which turned out to be crucial, though racy. The
idea was to update vc_templ after each (subsequent) ALTER TABLE.
We should do the same in another way; there may be plenty of solutions,
but the simplest one is to add the following condition in
prepare_inplace_alter_table:
if the vcol structure is changed, drop vc_templ; it will be recreated on
the next ha_innobase::open() call.
It is safe, since InnoDB inplace changes require at least
HA_ALTER_INPLACE_SHARED_LOCK_AFTER_PREPARE, which guarantees MDL_EXCLUSIVE
at this stage.
alter_templ_needs_rebuild() also has to track the columns not indexed, to
keep vc_templ correct.
Note that vc_templ is always kept constructed and available after the
ha_innobase::open() call, even on INSERT, though no virtual columns are
evaluated inside InnoDB during that statement.
In the test case supplied, it will be recreated on the second ALTER TABLE.
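A standalone sketch of the invalidate-and-rebuild-on-open idea (vc_templ,
ha_innobase::open() and prepare_inplace_alter_table above are the real
names; the types and the version counter below are simplified stand-ins):

  #include <cassert>
  #include <memory>

  // Stand-in for the cached template; remembers which table definition
  // it was built for.
  struct VcTemplLike { int built_for_version; };

  struct TableLike
  {
    int vcol_structure_version= 1;         // bumped by vcol-changing ALTERs
    std::unique_ptr<VcTemplLike> vc_templ; // cached template, may be stale
  };

  // What open() conceptually does: build the template lazily if missing.
  static void open_table(TableLike &t)
  {
    if (!t.vc_templ)
      t.vc_templ.reset(new VcTemplLike{t.vcol_structure_version});
  }

  // With the fix: if the ALTER changes the vcol structure, drop the cached
  // template so the next open_table() rebuilds it from the new definition.
  static void prepare_inplace_alter(TableLike &t, bool vcol_structure_changed)
  {
    if (vcol_structure_changed)
    {
      t.vcol_structure_version++;
      t.vc_templ.reset();                  // safe under the exclusive MDL
    }
  }

  int main()
  {
    TableLike t;
    open_table(t);
    prepare_inplace_alter(t, /*vcol_structure_changed=*/true);
    open_table(t);                         // template recreated here
    assert(t.vc_templ->built_for_version == t.vcol_structure_version);
  }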
Server crashes in Field::register_field_in_read_map upon SELECT from a
partitioned table with a virtual column indexed by prefix.
After several read-mark fixes a problem has surfaced:
Since KEY (c(10),a) uses only a prefix of c, a new field is created,
duplicated from table->field[3], with a new length. However,
vcol_info->expr is not copied.
Therefore, (*key_info)->key_part[i].field->vcol_info->expr was left NULL
in ha_partition::index_init().
Solution: initialize vcols before key initialization.
Also, key initialization is moved to a function.
In commit 83d2e0841e (MDEV-24041)
we failed to notice that in addition to the bug with
DELETE and ON DELETE CASCADE, there is another bug with
UPDATE and ON UPDATE CASCADE.
row_ins_foreign_fill_virtual(): Use the correct memory heap
for everything that will be reachable from the cascade->update
that we return to the caller.
Note: It is correct to use the shorter-lived cascade->heap for
rec_get_offsets(), because that memory will be abandoned when
row_ins_foreign_fill_virtual() returns.
The columns that are part of a DEFAULT expression were not read-marked
in statements like UPDATE...SET b=DEFAULT.
The problem is that the `F(DEFAULT)` expression depends on the left-hand
side of an assignment. However, setup_fields accepts only the right-hand
side value; Item::fix_fields doesn't get the left-hand side either.
In contrast, b=DEFAULT(b) works fine, because Item_default_value has
information on which field it is the default of:
  if (thd->mark_used_columns != MARK_COLUMNS_NONE)
    def_field->default_value->expr->update_used_tables();
in Item_default_value::fix_fields().
It is not reasonable to pass a left-hand side to Item::fix_fields, because
the case is rare, so the rewrite
  b= F(DEFAULT)  ->  b= F(DEFAULT(b))
is made instead.
Both UPDATE and multi-UPDATE are affected, however any form of INSERT
is not: it marks all the fields in DEFAULT expressions for read in
TABLE::mark_default_fields_for_write().
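A standalone sketch of the rewrite on a toy expression tree (the node type
is invented for illustration; only the b= F(DEFAULT) -> b= F(DEFAULT(b))
transformation itself comes from the fix):

  #include <iostream>
  #include <memory>
  #include <string>
  #include <vector>

  // Toy expression node standing in for an Item.
  struct Expr
  {
    bool is_default= false;                 // a DEFAULT [ (col) ] item
    std::string default_of;                 // empty for a bare DEFAULT
    std::vector<std::shared_ptr<Expr>> args;
  };

  // The rewrite: walk the right-hand side of "b = F(DEFAULT)" and bind
  // every bare DEFAULT to the assigned-to column, so that the later
  // fix_fields stage knows whose default expression to read-mark.
  static void bind_bare_defaults(const std::shared_ptr<Expr> &e,
                                 const std::string &lhs_column)
  {
    if (e->is_default && e->default_of.empty())
      e->default_of= lhs_column;
    for (const auto &arg : e->args)
      bind_bare_defaults(arg, lhs_column);
  }

  int main()
  {
    // SET b = COALESCE(DEFAULT, 0)
    auto dflt= std::make_shared<Expr>();
    dflt->is_default= true;
    auto rhs= std::make_shared<Expr>();
    rhs->args= {dflt, std::make_shared<Expr>()};

    bind_bare_defaults(rhs, "b");
    std::cout << "DEFAULT bound to: " << dflt->default_of << '\n';  // b
  }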
The problem is the same as in MDEV-18166: the columns in a virtual field
expression are not marked for read, while the field itself is.
field->register_field_in_read_map() should be called to read-mark all
the fields.
The test case reproduces only in 10.4+, however the fix is applicable to
10.2+.
Several different test cases were failing for the same reason: the fields
in a vcol expression were not marked while marking the columns of a key
containing a virtual column for read.
Fix: make marking the columns of a key for read a special case where
register_field_in_read_map() is called instead of a plain bitmap_set_bit().
Some test cases are only reproducible in 10.4+, but the fix is applicable
to 10.2+.
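A standalone sketch contrasting the two marking strategies (bitmap_set_bit
and register_field_in_read_map are the real names; the field/bitmap model
below is a simplified stand-in):

  #include <bitset>
  #include <iostream>
  #include <vector>

  // Toy model: each field may be a vcol depending on other (base) fields;
  // the read map is a bitmap over field indexes.
  struct FieldLike
  {
    std::vector<int> vcol_base_fields;     // empty for an ordinary column
  };

  using ReadSet = std::bitset<32>;

  // Plain bitmap_set_bit(): marks only the field itself.
  static void set_bit(ReadSet &rs, int field_index) { rs.set(field_index); }

  // register_field_in_read_map()-style marking: marks the field and,
  // recursively, every base field its vcol expression depends on.
  static void register_in_read_map(const std::vector<FieldLike> &fields,
                                   ReadSet &rs, int field_index)
  {
    rs.set(field_index);
    for (int base : fields[field_index].vcol_base_fields)
      if (!rs.test(base))
        register_in_read_map(fields, rs, base);
  }

  int main()
  {
    // field 2 is a vcol computed from fields 0 and 1, and is part of a key.
    std::vector<FieldLike> fields(3);
    fields[2].vcol_base_fields= {0, 1};

    ReadSet wrong, right;
    set_bit(wrong, 2);                       // old key marking: bit 2 only
    register_in_read_map(fields, right, 2);  // with the fix: bits 0, 1, 2
    std::cout << wrong << '\n' << right << '\n';
  }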
This is the 10.2+ part of a jira task.
The following bugs regarding virtual column marking have been fixed:
1. UPDATE of a partitioned table, where the optimizer has chosen a
secondary index to make a filesort;
2. INSERT into a table with a non-blob field generated from a blob, with
binlog enabled and binlog_row_image=noblob;
3. DELETE from a view on a table with a virtual column.
Generally the assertion happens from the update_virtual_fields() call.
These bugs are root-caused by missing field marking for the dependent
fields of a virtual column.
Therefore the fix is: mark all the fields involved in the vcol expression
by calling field->register_field_in_read_map() instead of just setting a
single bit.
Bug 3 was reproducible only on 10.4+, however the problem might have just
been invisible in the earlier versions. The fix is applicable to 10.2-10.3
as well.
The reason for the failure was inconsistent truncation rules: the value of
a virtual column could sometimes have been evaluated to '2000' instead of
'0000' for the value 'a'.
The reason why `c YEAR AS ('aaaa')` was not evaluated the same way is that
len=4 is a special case inside Field_year::store.
The correct fix is: always evaluate a bad value to 0000 instead of 2000.
Truncated values should be evaluated as usual.
$support_virtual_index is finally changed to 1 in gcol.gcol_ins_upd_innodb,
which is also enough for testing.
The test from the original bug report is also added.
- During an online ALTER conversion from compact to redundant, the
virtual column field length is already set during
innobase_get_computed_value(). Skip the char(n) check for
virtual columns in row_merge_buf_add().
- InnoDB fails to check the DB_COMPUTE_VALUE_FAILED error in
row_merge_read_clustered_index() and wrongly asserts that
the buffer shouldn't have run out of memory. ALTER TABLE
should give a warning when the column value is being
truncated.
table->move_fields wasn't undone in case of an error.
1. move_fields is now unconditionally undone even when an error occurs
2. cherry-pick an assertion in `ptr_in_record`, which is already in 10.5
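One way to express "unconditionally undone" as a standalone sketch (the
server fix may simply call table->move_fields() on the error path as well;
the guard below is only illustrative):

  #include <cassert>
  #include <functional>

  // Tiny scope guard: runs the undo action on every exit path.
  struct ScopeGuard
  {
    std::function<void()> undo;
    explicit ScopeGuard(std::function<void()> f) : undo(std::move(f)) {}
    ~ScopeGuard() { undo(); }
  };

  struct TableLike { bool fields_moved= false; };

  static void move_fields(TableLike &t, bool moved) { t.fields_moved= moved; }

  // move_fields() must be undone even when the body fails part-way,
  // otherwise the table keeps pointing at the wrong record buffer.
  static int do_work(TableLike &t, bool fail)
  {
    move_fields(t, true);
    ScopeGuard g([&] { move_fields(t, false); });  // unconditional undo
    if (fail)
      return 1;                                    // undo still runs
    return 0;
  }

  int main()
  {
    TableLike t;
    do_work(t, /*fail=*/true);
    assert(!t.fields_moved);        // undone despite the error
  }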