MariaDB server crashes when a query includes a derived table
containing an unnamed column (e.g. `SELECT '' from t`). When the `Item`
object representing such an unnamed column is checked for a valid,
non-empty name in `TABLE_LIST::create_field_translation`, the
server crashes (the assertion `item->name.str && item->name.str[0]`
fails).
This fix removes the redundant assertion. The assert was a strict
debug guard that's no longer needed because the code safely handles
empty strings without it.
Selecting `''` from a derived table caused `item->name.str`
to be an empty string. While the pointer itself wasn't `NULL`
(so `item->name.str` is `true`), its first character (`item->name.str[0]`)
was the null terminator, which evaluates to `false` and made the
assertion fail. The code immediately after the assert handles empty
strings safely, so the assertion was guarding against a case the code
can already handle.
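For illustration, a minimal self-contained snippet showing why the assertion
condition fails for an empty (but non-NULL) name; the `Name` struct here is a
stand-in, not the server's actual definition:

  #include <cstdio>

  // Stand-in for the relevant part of the Item name; not the server's definition.
  struct Name { const char *str; };

  int main()
  {
    Name name{""};           // SELECT '' yields an empty but non-NULL name
    // The removed assertion required both a non-NULL pointer and a non-empty
    // string; the empty string fails the second half of the condition:
    bool assert_cond= name.str && name.str[0];
    printf("assertion condition: %s\n", assert_cond ? "true" : "false"); // false
    return 0;
  }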
Includes a test case in `mysql-test/main/derived.test` to verify the fix.
Calling a stored routine that executes a join on three or more tables
and references a non-existent column name in the USING clause resulted in
a crash on its second invocation.
The server crash took place because a null pointer was dereferenced in
the condition of a DBUG_ASSERT inside the method
Field_iterator_natural_join::next().
There the data member
cur_column_ref->table_field->field
holds a nullptr value: it was reset at the end of the first
execution of the stored routine, when the standalone function
cleanup_items() was called by the method sp_head::execute.
On second and later invocations of the stored routine this data member
is not re-initialized and is never referenced anywhere except in the
DBUG_ASSERT.
To fix the issue, the assert's condition is augmented with the check
'|| !cur_column_ref->table_field' before cur_column_ref->table_field is
dereferenced. This extra check is aligned with the conditions used by the
DBUG_ASSERT macros in the implementation of the class
Field_iterator_table_ref, which aggregates the class
Field_iterator_natural_join.
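A minimal, self-contained illustration of the short-circuiting check described
above; the structs are simplified stand-ins for the classes named in this
message, not the server's real definitions:

  #include <cassert>

  // Simplified stand-ins for the members named above.
  struct Field       { int table_id; };
  struct Table_field { Field *field; };
  struct Column_ref  { Table_field *table_field; };

  void next_sketch(Column_ref *cur_column_ref, int expected_table_id)
  {
    // The added '|| !cur_column_ref->table_field'-style check is evaluated
    // first, so the pointer is never dereferenced when it is not set:
    assert(!cur_column_ref->table_field ||
           cur_column_ref->table_field->field->table_id == expected_table_id);
  }

  int main()
  {
    Column_ref ref{nullptr};   // left unset after the first routine execution
    next_sketch(&ref, 42);     // assert short-circuits instead of crashing
    return 0;
  }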
It is incorrect to zero out table->triggers->extra_null_bitmap
before a statement, because if an INSERT uses an explicit field list
and omits a field that has no default value, the field should
get NULL implicitly. So extra_null_bitmap should have 1s for all
fields that have no defaults.
* create extra_null_bitmap_init and initialize it as above
* copy extra_null_bitmap_init to extra_null_bitmap for inserts
* still zero out extra_null_bitmap for updates/deletes where
all fields definitely have a value
* make not_null_fields_have_null_values() send
ER_NO_DEFAULT_FOR_FIELD for fields with no default and no value;
otherwise creating a trigger with an empty body would change the
error message
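A compact, self-contained illustration of the bitmap lifecycle described in the
list above; the data layout is a stand-in for the server's trigger structures,
not their real definitions:

  #include <algorithm>
  #include <cstddef>
  #include <vector>

  struct FieldInfo { bool has_default; };

  struct TriggerBitmaps
  {
    std::vector<unsigned char> extra_null_bitmap_init; // 1 = field has no default
    std::vector<unsigned char> extra_null_bitmap;      // per-statement working copy

    explicit TriggerBitmaps(const std::vector<FieldInfo> &fields)
      : extra_null_bitmap_init((fields.size() + 7) / 8, 0),
        extra_null_bitmap(extra_null_bitmap_init.size(), 0)
    {
      // Fields without a default start out as implicitly NULL for INSERT.
      for (size_t i= 0; i < fields.size(); i++)
        if (!fields[i].has_default)
          extra_null_bitmap_init[i / 8]|= 1u << (i % 8);
    }

    void prepare_for_insert()           // omitted no-default fields stay NULL
    { extra_null_bitmap= extra_null_bitmap_init; }

    void prepare_for_update_or_delete() // every field is known to have a value
    { std::fill(extra_null_bitmap.begin(), extra_null_bitmap.end(), 0); }
  };

  int main()
  {
    TriggerBitmaps maps({{true}, {false}, {true}}); // field #1 has no default
    maps.prepare_for_insert();                      // copy of the init template
    maps.prepare_for_update_or_delete();            // all bits cleared
    return 0;
  }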
MDEV-28127 introduced is_equal(), which compared vcol expressions
literally. But an equivalent vcol expression from another table is not
equal because of the different table name.
We implement another comparison method, is_identical(), which tolerates
a different table name in the vcol comparison. If a field item points to
table_A and the compared field item points to table_B, such items are
treated as equal in a (table_A, table_B) comparison. This is done by
cloning the table_B expression and renaming any table_B entries in it
to table_A.
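A toy model of the comparison (not MariaDB's Item classes); for brevity it maps
the table name on the fly during the comparison, which has the same effect as
the clone-and-rename described above:

  #include <cstddef>
  #include <string>
  #include <vector>

  struct Expr
  {
    std::string table;        // empty for non-field nodes
    std::string name;         // field, literal or function name
    std::vector<Expr> args;   // child expressions
  };

  // Field references to table_b on the right-hand side count as table_a.
  bool is_identical(const Expr &a, const Expr &b,
                    const std::string &table_a, const std::string &table_b)
  {
    const std::string b_table= (b.table == table_b) ? table_a : b.table;
    if (a.table != b_table || a.name != b.name || a.args.size() != b.args.size())
      return false;
    for (size_t i= 0; i < a.args.size(); i++)
      if (!is_identical(a.args[i], b.args[i], table_a, table_b))
        return false;
    return true;
  }

  int main()
  {
    Expr a{"table_A", "x", {}};
    Expr b{"table_B", "x", {}};
    return is_identical(a, b, "table_A", "table_B") ? 0 : 1; // treated as equal
  }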
DELAYED with virtual columns
The segfault was caused by two different copies of the same Field
instance in a prepared delayed insert. One was made by
Delayed_insert::get_local_table() (see make_new_field()). That copy
went through parse_vcol_defs() and received a new vcol_info->expr.
The other was made by copy_keys_from_share() by this code:
    /*
      We are using only a prefix of the column as a key:
      Create a new field for the key part that matches the index
    */
    field= key_part->field=field->make_new_field(root, outparam, 0);
    field->field_length= key_part->length;
So key_part and table got different objects for the same field, and the
crash happened because key_part->field->vcol_info->expr was NULL.
The fix adds update_keypart_vcol_info() to update vcol_info->expr in
key_part->field.
Cleanup: memdup_vcol() is now a static inline function instead of a
macro, and it checks for OOM.
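A toy model of what the fix is described as doing; the structs and the function
body are stand-ins, only the names come from this message:

  #include <cassert>

  struct Vcol_info { const void *expr; };
  struct Field     { Vcol_info *vcol_info; };
  struct Key_part  { Field *field; };

  // Point the key_part's copied field at the parsed vcol expression of the
  // table's own copy of the field, so its expr is no longer NULL.
  void update_keypart_vcol_info_sketch(Key_part *key_part, Field *table_field)
  {
    if (key_part->field != table_field &&
        key_part->field->vcol_info && table_field->vcol_info)
      key_part->field->vcol_info->expr= table_field->vcol_info->expr;
  }

  int main()
  {
    int parsed_expr;                    // placeholder for a parsed expression
    Vcol_info parsed{&parsed_expr}, unparsed{nullptr};
    Field table_copy{&parsed};          // went through parse_vcol_defs()
    Field keypart_copy{&unparsed};      // made by copy_keys_from_share()
    Key_part kp{&keypart_copy};

    update_keypart_vcol_info_sketch(&kp, &table_copy);
    assert(kp.field->vcol_info->expr != nullptr);  // no more NULL expr
    return 0;
  }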
This commit updates the default memory allocation sizes used with MEM_ROOT
objects to minimize the number of calls to malloc().
Changes:
- Updated MEM_ROOT block sizes in sql_const.h
- Updated MALLOC_OVERHEAD to also take into account the extra memory
allocated by my_malloc()
- Updated init_alloc_root() to only take MALLOC_OVERHEAD into account as
buffer size, not MALLOC_OVERHEAD + sizeof(USED_MEM).
- Reset mem_root->first_block_usage if and only if first block was used.
- Increased the MEM_ROOT buffer sizes used by my_load_defaults, plugin_init,
Create_tmp_table, allocate_table_share, TABLE and TABLE_SHARE.
This decreases the number of malloc() calls during queries.
- Used a small buffer for THD->main_mem_root in THD::THD. This avoids
multiple malloc() calls for new connections.
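As a rough, worked example of the sizing idea above; the constants are
illustrative assumptions, not the server's actual values:

  #include <cstddef>
  #include <cstdio>

  int main()
  {
    const size_t wanted_footprint= 4096;  // target total cost of one block
    const size_t malloc_overhead=  32;    // assumed per-allocation overhead

    // Ask malloc() for slightly less than the target so that the block plus
    // the allocator's own bookkeeping does not spill into the next size class.
    const size_t block_size= wanted_footprint - malloc_overhead;

    printf("block size passed to malloc(): %zu\n", block_size);
    return 0;
  }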
I tried the above changes on a complex select query with 12 tables.
The following shows the number of extra allocations that were used
to increase the size of the MEM_ROOT buffers.
Original code:
- Connection to MariaDB: 9 allocations
- First query run: 146 allocations
- Second query run: 24 allocations
- Max memory allocated for thd when using a HEAP tmp table: 61,262,408
- Max memory allocated for thd when using an Aria tmp table: 419,464
After changes:
- Connection to MariaDB: 0 allocations
- First query run: 25 allocations
- Second query run: 7 allocations
- Max memory allocated for thd when using a HEAP tmp table: 61,347,424
- Max memory allocated for thd when using an Aria tmp table: 529,168
The new code uses slightly more memory, but avoids memory fragmentation
and is slightly faster thanks to far fewer calls to malloc().
Reviewed-by: Sergei Golubchik <serg@mariadb.org>
Because of CF_REEXECUTION_FRAGILE, a prepared SELECT statement needs to
check that the table has the same definition as previously, otherwise a
re-prepare of the statement can occur.
When running many 'SELECT DEFAULT(name) FROM table1_containing_sequence'
statements in parallel, TABLE_LIST::is_the_same_definition() may be called
while m_table_ref_type is still TABLE_REF_NULL because it hasn't been
checked yet. In this case populate the TABLE_LIST with the values
determined by the TABLE_SHARE and allow the execution to continue.
As a result of this, the main.ps_ddl test doesn't need to re-prepare,
as the definition hasn't changed. This is another case where
TABLE_LIST::is_the_same_definition() is called when m_table_ref_type is
TABLE_REF_NULL, but that doesn't mean that the definition is different.
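A simplified sketch of that behaviour; the enum values and member names follow
this message, but the surrounding class is a stand-in, not the server's
TABLE_LIST:

  enum table_ref_type_sketch { TABLE_REF_NULL, TABLE_REF_BASE_TABLE };

  struct Share_sketch
  {
    table_ref_type_sketch type;
    unsigned long version;
  };

  struct Table_list_sketch
  {
    table_ref_type_sketch m_table_ref_type= TABLE_REF_NULL;
    unsigned long         m_table_ref_version= 0;

    bool is_the_same_definition(const Share_sketch &s)
    {
      if (m_table_ref_type == TABLE_REF_NULL)
      {
        // Not checked yet: adopt the definition from the TABLE_SHARE and
        // continue instead of reporting a mismatch and forcing a re-prepare.
        m_table_ref_type= s.type;
        m_table_ref_version= s.version;
        return true;
      }
      return m_table_ref_type == s.type && m_table_ref_version == s.version;
    }
  };

  int main()
  {
    Table_list_sketch tl;
    Share_sketch share{TABLE_REF_BASE_TABLE, 1};
    return tl.is_the_same_definition(share) &&       // first call: adopt
           tl.is_the_same_definition(share) ? 0 : 1; // later calls: compare
  }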
Partial commit of the greater MDEV-34348 scope.
MDEV-34348: MariaDB is violating clang-16 -Wcast-function-type-strict
Change the type of my_hash_get_key to:
1) Return a const pointer
2) Make the context parameter const void*
Also fix casting in the adjacent hash code.
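For reference, a compilable sketch of the shape of this change; the exact
parameter list is an assumption based on this message, and `uchar`/`my_bool`
are re-declared here only to keep the sketch self-contained:

  #include <cstddef>

  typedef unsigned char uchar;
  typedef char my_bool;

  // Old shape: non-const return, const uchar* parameter.
  typedef uchar *(*my_hash_get_key_old)(const uchar *, size_t *, my_bool);

  // New shape: const return value and a const void* parameter, so callbacks
  // can be passed without the function-pointer casts that
  // -Wcast-function-type-strict rejects.
  typedef const uchar *(*my_hash_get_key_new)(const void *, size_t *, my_bool);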
Reviewed By:
============
Marko Mäkelä <marko.makela@mariadb.com>
A hash index is a vcol-based wrapper (MDEV-371). row_end is added to the
unique index, so when row_end is updated the unique hash index must be
recalculated via vcol_update_fields(). DELETE did not update virtual
fields, so DELETE HISTORY was getting a wrong hash value.
The fix calls update_virtual_fields() in vers_update_end(), so in every
case where row_end is updated the virtual fields are updated as well.
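A conceptual sketch of that ordering; only the function names come from this
message, the types and bodies are stand-ins:

  #include <cstdint>
  #include <functional>
  #include <string>

  struct Row_sketch
  {
    std::string data;
    uint64_t    row_end= 0;
    uint64_t    unique_hash= 0;   // vcol-based hash that also covers row_end
  };

  void update_virtual_fields_sketch(Row_sketch &row)
  {
    // Recompute the vcol: the unique hash must reflect the current row_end.
    row.unique_hash= std::hash<std::string>{}(row.data) ^ row.row_end;
  }

  void vers_update_end_sketch(Row_sketch &row, uint64_t end)
  {
    row.row_end= end;
    update_virtual_fields_sketch(row); // recalc vcols whenever row_end changes,
                                       // including for DELETE / DELETE HISTORY
  }

  int main()
  {
    Row_sketch row{"payload", 0, 0};
    vers_update_end_sketch(row, 42);   // hash now reflects row_end == 42
    return 0;
  }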
Search conditions were evaluated using val_int(), which was wrong.
Fixing the code to use val_bool() instead.
Details:
- Adding a new item_base_t::IS_COND flag which marks Items used
as <search condition> in WHERE, HAVING, JOIN ON, CASE WHEN clauses.
The flag is set at parse time.
These expressions must be evaluated using val_bool() rather than val_int().
Note, the optimizer creates more Items which are used as search conditions.
Most of these items are not marked with IS_COND yet. This is OK for now,
but eventually these Items can also be fixed to have the flag.
- Adding a method Item::is_cond() which tests if the Item has the IS_COND flag.
- Implementing Item_cache_bool. It evaluates the cached expression using
val_bool() rather than val_int().
Overriding Type_handler_bool::Item_get_cache() to create Item_cache_bool.
- Implementing Item::save_bool_in_field(). It uses val_bool() rather than
val_int() to evaluate the expression.
- Implementing Type_handler_bool::Item_save_in_field()
using Item::save_bool_in_field().
- Fixing all Item_bool_func descendants to implement a virtual val_bool()
rather than a virtual val_int().
- To find places where val_int() should be fixed to val_bool(), a few
DBUG_ASSERT(!is_cond()) were added into the val_int() implementations
of selected (most frequent) classes (see the sketch after this list):
Item_field
Item_str_func
Item_datefunc
Item_timefunc
Item_datetimefunc
Item_cache_bool
Item_bool_func
Item_func_hybrid_field_type
Item_basic_constant descendants
- Fixing all places where a DBUG_ASSERT() fired during an "mtr" run
to use val_bool() instead of val_int().
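A toy model of the flag-and-assert pattern from the list above; the names
mirror this message but the definitions here are stand-ins, not server code:

  #include <cassert>

  enum item_base_flag { IS_COND= 1 };

  struct Item
  {
    unsigned base_flags= 0;
    bool is_cond() const { return (base_flags & IS_COND) != 0; }
    virtual bool val_bool()= 0;
    virtual long long val_int()= 0;
    virtual ~Item()= default;
  };

  struct Item_field_like : Item
  {
    long long value= 0;
    bool val_bool() override { return value != 0; }
    long long val_int() override
    {
      assert(!is_cond());   // catch callers that still evaluate a
                            // <search condition> numerically
      return value;
    }
  };

  int main()
  {
    Item_field_like f;
    f.base_flags|= IS_COND;  // marked as a search condition at parse time
    (void) f.val_bool();     // correct: boolean evaluation
    // f.val_int() here would trip the assert in a debug build.
    return 0;
  }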
The problem was that wsrep_schema tables were not marked as belonging
to the information category. The fix allows access to wsrep_schema
tables even when the node is detached.
This is the 10.4-10.9 version of the fix.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Discovered this while working on MDEV-34720: test_if_cheaper_ordering()
uses rec_per_key, while the original estimate for the access method
is produced in best_access_path() by using actual_rec_per_key().
Make test_if_cheaper_ordering() also use actual_rec_per_key().
Also make several getter functions const to make this compile.
Also adjusted the testcase to handle this (the change was backported
from 11.0).