The task "MDEV-25829 Change default Unicode collation to uca1400_ai_ci"
previously changed collation derivation for string user variables
from DERIVATION_EXPLICIT to DERIVATION_COERCIBLE, to resolve illegal
collation mix conflicts between table columns and user variables
when they have different collations.
However, DERIVATION_COERCIBLE was a wrong choice because it caused
conflicts between string literals and user variables when they have
different collations.
Adding a new collation derivation level DERIVATION_USERVAR.
This makes the collation of a user variable:
- weaker than a table column (as intended by MDEV-25829)
- but stronger than a literal (as it was before MDEV-25829)
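A hypothetical illustration of the two cases (table, column and collation
names are made up; the exact collations depend on the session settings):
SET NAMES utf8mb4 COLLATE utf8mb4_unicode_ci;
SET @v= (SELECT c1 FROM t1 LIMIT 1); -- assume c1 is utf8mb4_general_ci
SELECT c2=@v FROM t1;  -- the column collation (IMPLICIT) wins over @v
SELECT @v='abc';       -- @v (USERVAR) wins over the literal (COERCIBLE)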
Cleanup in sql_type.h:
Removing the line "- BINARY(expr)" from the comment before DERIVATION_CAST,
as it was in the wrong place. It is also listed in the correct place,
before DERIVATION_IMPLICIT.
Search conditions were evaluated using val_int(), which was wrong.
Fixing the code to use val_bool() instead.
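A hedged illustration of the difference:
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1);
SELECT * FROM t1 WHERE 0.1;
-- A non-zero value used as a search condition should be TRUE;
-- val_int() truncates 0.1 to 0, while val_bool() treats it as non-zero.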
Details:
- Adding a new item_base_t::IS_COND flag which marks Items used
as <search condition> in WHERE, HAVING, JOIN ON, CASE WHEN clauses.
The flag is set at parse time.
These expressions must be evaluated using val_bool() rather than val_int().
Note, the optimizer creates more Items which are used as search conditions.
Most of these items are not marked with IS_COND yet. This is OK for now,
but eventually these Items can also be fixed to have the flag.
- Adding a method Item::is_cond() which tests if the Item has the IS_COND flag.
- Implementing Item_cache_bool. It evaluates the cached expression using
val_bool() rather than val_int().
Overriding Type_handler_bool::Item_get_cache() to create Item_cache_bool.
- Implementing Item::save_bool_in_field(). It uses val_bool() rather than
val_int() to evaluate the expression.
- Implementing Type_handler_bool::Item_save_in_field()
using Item::save_bool_in_field().
- Fixing all Item_bool_func descendants to implement a virtual val_bool()
rather than a virtual val_int().
- To find places where val_int() should be fixed to val_bool(), a few
DBUG_ASSERT(!is_cond()) were added into val_int() implementations
of selected (most frequent) classes:
Item_field
Item_str_func
Item_datefunc
Item_timefunc
Item_datetimefunc
Item_cache_bool
Item_bool_func
Item_func_hybrid_field_type
Item_basic_constant descendants
- Fixing all places where DBUG_ASSERT() happened during an "mtr" run
to use val_bool() instead of val_int().
A mixture of a multi-byte *TEXT column and a short binary column
produced a too large column.
For example, COALESCE(tinytext_utf8mb4, short_varbinary)
produced a BLOB column instead of an expected TINYBLOB.
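A hedged example of the expected behaviour (table and column names are
illustrative):
CREATE TABLE t1 (a TINYTEXT CHARACTER SET utf8mb4, b VARBINARY(10));
CREATE TABLE t2 AS SELECT COALESCE(a,b) AS c FROM t1;
-- the column c is expected to be TINYBLOB (255 octets), not BLOB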
- Adding a virtual method Type_all_attributes::character_octet_length(),
returning max_length by default.
- Overriding Item_field::character_octet_length() to extract
the octet length from the underlying Field.
- Overriding Item_ref::character_octet_length() to extract
the octet length from the referenced Item (e.g. for VIEW fields).
- Fixing Type_numeric_attributes::find_max_octet_length() to
take the octet length using the new method character_octet_length()
instead of accessing max_length directly.
Applying the COLLATE clause to a parameter caused an error:
COLLATION '...' is not valid for CHARACTER SET 'binary'
Fix:
- Changing the collation derivation for a non-prepared Item_param
to DERIVATION_IGNORABLE.
- Allowing any COLLATE clause to be applied to expressions with DERIVATION_IGNORABLE.
This includes:
1. A non-prepared Item_param
2. An explicit NULL
3. Expressions derived from #1 and #2
For example:
SELECT ? COLLATE utf8mb4_unicode_ci;
SELECT NULL COLLATE utf8mb4_unicode_ci;
SELECT CONCAT(?) COLLATE utf8mb4_unicode_ci;
SELECT CONCAT(NULL) COLLATE utf8mb4_unicode_ci;
- Additional change: preserving the collation of an expression when
the expression gets assigned to a PS parameter and evaluates to SQL NULL.
Before this change, the collation of the parameter was erroneously set
to &my_charset_binary.
- Additional change: removing the multiplication by mbmaxlen from the
fix_char_length_ulonglong() argument, because the multiplication already
happens inside fix_char_length_ulonglong().
This fixes a too large column size created for a COLLATE clause.
MDEV-32188 make TIMESTAMP use whole 32-bit unsigned range
- Changed usage of timeval to my_timeval, as the timeval parts on Windows
are 32 bits long, which caused compiler issues on Windows.
Step#2 - Adding a new collation derivation level for CAST and CONVERT.
Now character string cast functions:
- CAST(string_expr AS CHAR)
- CONVERT(expr USING charset_name)
have a new collation derivation level between:
- string literals
- utf8 metadata functions, e.g. user() and database()
Before the change these cast functions had collation derivation equal
to table columns, which caused more illegal mix of collation conflicts.
Note, binary string cast functions:
- BINARY(expr)
- CAST(string_expr AS BINARY)
- CONVERT(expr USING binary)
did not change their collation derivation, to preserve the behaviour of
queries like these:
SELECT database()=BINARY'test';
SELECT user()=CAST('root' AS BINARY);
SELECT current_role()=CONVERT('role' USING binary);
Derivation levels after the change look as follows:
DERIVATION_IGNORABLE= 7, // Explicit NULL
DERIVATION_NUMERIC= 6, // Numbers in string context,
// Numeric user variables
// CAST(numeric_expr AS CHAR)
DERIVATION_COERCIBLE= 5, // Literals, string user variables
DERIVATION_CAST= 4, // CAST(string_expr AS CHAR),
// CONVERT(string_expr USING cs)
DERIVATION_SYSCONST= 3, // utf8 metadata functions, e.g. user(), database()
DERIVATION_IMPLICIT= 2, // Table columns, SP variables, BINARY(expr)
DERIVATION_NONE= 1, // A mix (e.g. CONCAT) of two different collations
DERIVATION_EXPLICIT= 0 // An explicit COLLATE clause
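A hedged example of the effect (table and column names are made up):
SELECT * FROM t1
WHERE utf8mb4_bin_col = CAST('abc' AS CHAR CHARACTER SET utf8mb4);
-- Before the change both sides had DERIVATION_IMPLICIT, so the mix of
-- utf8mb4_bin and the utf8mb4 default collation raised an
-- 'Illegal mix of collations' error; now the CAST side is weaker and
-- the column collation is used for the comparison.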
Fixing the problem that an operation involving a mix of
two or more GEOMETRY operands did not preserve their SRIDs.
Now SRIDs are preserved by hybrid functions, subqueries, TVCs, UNIONs, VIEWs.
Incompatible change:
An attempt to mix two different SRIDs now raises an error.
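A hedged illustration (assuming g1 and g2 are GEOMETRY columns with
different SRIDs):
CREATE TABLE t2 AS SELECT COALESCE(g1, g1) AS g FROM t1;
-- t2.g is expected to keep the SRID of g1
SELECT COALESCE(g1, g2) FROM t1;
-- is expected to raise an error, as g1 and g2 have different SRIDs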
Details:
- Adding a new class Type_extra_attributes. It's a generic
container which can store very specific data type attributes.
For now it can store one uint32 and one const pointer attribute
(for GEOMETRY's SRID and for ENUM/SET TYPELIB respectively).
In the future it can grow as needed.
Type_extra_attributes will also be reused soon to store "const Type_zone*"
pointers for the TIMESTAMP's "WITH TIME ZONE 'tz'" attribute
(a timestamp data type with a fixed time zone independent from @@time_zone).
The time zone attribute will be stored in exactly the same way as
a TYPELIB pointer is stored by ENUM/SET.
- Removing Column_definition_attributes members "interval" and "srid".
Deriving Column_definition_attributes from the generic attribute container
Type_extra_attributes instead.
- Adding a new class Type_typelib_attributes, to store
the TYPELIB of the ENUM and SET data types. Deriving Field_enum from it.
Removing the member Field_enum::typelib.
- Adding a new class Type_geom_attributes, to store
the GEOMETRY related attributes. Deriving Field_geom from it.
Removing the member Field_geom::srid.
- Removing virtual methods:
Field::get_typelib()
Type_all_attributes::get_typelib() and
Type_all_attributes::set_typelib()
They were very specific to TYPELIB.
Adding more generic virtual methods instead:
* Field::type_extra_attributes() - to get extra attributes
* Type_all_attributes::type_extra_attributes() - to get extra attributes
* Type_all_attributes::type_extra_attributes_addr() - to set extra attributes
- Removing Item_type_holder::enum_set_typelib. Deriving Item_type_holder
from the generic attribute container Type_extra_attributes instead.
This makes it possible for UNION to preserve SRID
(in addition to preserving TYPELIB).
- Deriving Item_hybrid_func from Type_extra_attributes.
This makes it possible for hybrid functions (e.g. CASE, COALESCE,
LEAST, GREATEST, etc.) to preserve SRID.
- Deriving Item_singlerow_subselect from Type_extra_attributes and
overriding methods:
* Item_cache::type_extra_attributes()
* subselect_single_select_engine::fix_length_and_dec()
* Item_singlerow_subselect::type_extra_attributes()
* Item_singlerow_subselect::type_extra_attributes_addr()
This is needed to preserve SRID in subqueries and TVCs.
- Cleanup: fixing the data type of members
* Binlog_type_info::m_enum_typelib
* Binlog_type_info::m_set_typelib
from "TYPELIB *" to "const TYPELIB *"
Some fixes related to commit f838b2d7998f18ac2a1bb9d56081aac6e563de1e and
Rows_log_event::do_apply_event() and Update_rows_log_event::do_exec_row()
for system-versioned tables were provided by Nikita Malyavin.
This was required by test versioning.rpl,trx_id,row.
Problem:
REPAIR TABLE executed for a pre-MDEV-29959 table (with the old UUID format)
updated the server version in the FRM file without rewriting the data,
i.e. it created a new-version FRM for old-format UUID data. After that
MariaDB could not read the UUIDs correctly.
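A hedged sketch of the scenario (the table name is illustrative; t1 is
assumed to contain a UUID column written by a pre-MDEV-29959 server):
CHECK TABLE t1 FOR UPGRADE;
REPAIR TABLE t1;
-- Before the fix this only bumped the FRM version while the data stayed
-- in the old format; with the fix the old UUID data type is implicitly
-- upgraded.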
Fix:
- Adding a new virtual method in class Type_handler:
virtual bool type_handler_for_implicit_upgrade() const;
* For the up-to-date data types it returns "this".
* For the data types which need to be implicitly upgraded
during REPAIR TABLE or ALTER TABLE, it returns a pointer
to a new replacement data type handler.
Old VARCHAR and old UUID type handlers override this method.
See more comments below.
- Changing the semantics of the method
Type_handler::Column_definition_implicit_upgrade(Column_definition *c)
to the opposite, so now:
* c->type_handler() references the old data type (to upgrade from)
* "this" references the new data type (to upgrade to).
Before this change Column_definition_implicit_upgrade() was supposed
to be called with the old data type handler (to upgrade from).
Renaming the method to Column_definition_implicit_upgrade_to_this(),
to avoid automatic merges in this method.
Reflecting this change in Create_field::upgrade_data_types().
- Replacing the hard-coded data type tests inside handler::check_old_types()
with a call to the new virtual method
Type_handler::type_handler_for_implicit_upgrade()
- Overriding Type_handler_fbt::type_handler_for_implicit_upgrade()
to call a new method FbtImpl::type_handler_for_implicit_upgrade().
Reasoning:
Type_handler_fbt is a template, so it has access only to "this".
So in case of UUID data types, the type handler for old UUID
knows nothing about the type handler of new UUID inside sql_type_fixedbin.h.
So let's have Type_handler_fbt delegate type_handler_for_implicit_upgrade()
to its Type_collection, which knows both new UUID and old UUID.
- Adding Type_collection_uuid::type_handler_for_implicit_upgrade().
It returns a pointer to the new UUID type handler.
- Overriding Type_handler_var_string::type_handler_for_implicit_upgrade()
to return a pointer to type_handler_varchar (true VARCHAR).
- Cleanup: these two methods:
handler::check_old_types()
handler::ha_check_for_upgrade()
were always called consecutively.
So moving the call for check_old_types() inside ha_check_for_upgrade(),
and making check_old_types() private.
- Cleanup: removing the "bool varchar" parameter from fill_alter_inplace_info(),
as it is not used any more.
Functions extracting non-negative datetime components:
- YEAR(dt), EXTRACT(YEAR FROM dt)
- QUARTER(dt), EXTRACT(QUARTER FROM dt)
- MONTH(dt), EXTRACT(MONTH FROM dt)
- WEEK(dt), EXTRACT(WEEK FROM dt)
- HOUR(dt),
- MINUTE(dt),
- SECOND(dt),
- MICROSECOND(dt),
- DAYOFYEAR(dt)
- EXTRACT(YEAR_MONTH FROM dt)
did not set their max_length properly, so in the DECIMAL
context they created a too small DECIMAL column, which
led to the 'Out of range value' error.
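A hedged illustration of the DECIMAL context (table and column names are
made up):
CREATE TABLE t1 (dt DATETIME);
INSERT INTO t1 VALUES ('2024-01-23 10:20:30');
CREATE TABLE t2 AS SELECT COALESCE(YEAR(dt), 0.0) AS c1 FROM t1;
-- COALESCE of INT and DECIMAL is DECIMAL; with a wrong max_length the
-- created column could be too small for 2024 and the statement failed
-- with 'Out of range value'.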
The problem is that most of these functions historically
returned the signed INT data type.
There were two simple ways to fix these functions:
1. Add +1 to max_length.
But this would also change their size in the string context
and create too long VARCHAR columns, with +1 excessive size.
2. Preserve max_length, but change the data type from INT to INT UNSIGNED.
But this would break backward compatibility.
Also, using UNSIGNED is generally not desirable,
it's better to stay with signed when possible.
This fix implements another solution, which makes all these functions
work well in all contexts: int, decimal, string.
Fix details:
- Adding a new special class Type_handler_long_ge0 - the data type
handler for expressions which:
* should look like a normal signed INT
* but which are known not to return negative values
Expressions handled by Type_handler_long_ge0 store in Item::max_length
only the number of digits, without adding +1 for the sign.
- Fixing Item_extract to use Type_handler_long_ge0
for non-negative datetime components:
YEAR, YEAR_MONTH, QUARTER, MONTH, WEEK
- Adding a new abstract class Item_long_ge0_func, for functions
returning non-negative datetime components.
Item_long_ge0_func uses Type_handler_long_ge0 as the type handler.
The class hierarchy now looks as follows:
Item_long_ge0_func
Item_long_func_date_field
Item_func_to_days
Item_func_dayofmonth
Item_func_dayofyear
Item_func_quarter
Item_func_year
Item_long_func_time_field
Item_func_hour
Item_func_minute
Item_func_second
Item_func_microsecond
- Cleanup: EXTRACT(QUARTER FROM dt) created an excessive VARCHAR column
in string context. Changing its length from 2 to 1.
This original query:
(1) SELECT ts0 FROM t1
WHERE DATE(ts0) <= '2024-01-23';
was rewritten (by MDEV-8320) to:
(2) SELECT ts0 FROM t1
WHERE ts0 <= '2024-01-23 23:59:59.999999';
-- DATETIME comparison, Item_datetime on the right side
which was further optimized (by MDEV-32148) to:
(3) SELECT ts0 FROM t1
WHERE ts0 <= TIMESTAMP/* WITH LOCAL TIME ZONE*/ '2024-01-23 23:59:59.999999';
-- TIMESTAMP comparison, Item_timestamp_literal on the right side
The origin of the problem was in (2) - in the MDEV-8320 related code.
The recent new code for MDEV-32148 revealed this problem.
Item_datetime on step (2) was always created in an inconsistent way:
- with Item::decimals==0
- with ltime.second_part==999999,
without taking into account the precision of the left side
(e.g. ts0 in the above example)
On step (3), Item_timestamp_literal was created in an inconsistent way too,
because it copied the inconsistent data from step (2):
- with Item::decimals==0 (copied from Item_datetime::decimals)
- with m_value.tv_usec==999999 (copied from ltime.second_part of Item_datetime)
Later, the Item_timestamp_literal performed save_in_field()
and crashed in my_timestamp_to_binary() on a DBUG_ASSERT checking
consistency between the fractional precision and the fractional seconds value.
Fix:
On step (2) create Item_datetime with the maximum possible second_part
value of 999999 truncated according to the left side fractional second
precision. So for example it sets second_part as follows:
- 000000 for TIMESTAMP(0)
- 999000 for TIMESTAMP(3)
- 999999 for TIMESTAMP(6)
This automatically makes the code create a consistent Item_timestamp_literal
on step (3).
This also makes TIMESTAMP comparison work faster, because now
Item_timestamp_literal is created with Item::decimals value equal
to the Item_field (which is on the other side of the comparison),
so the low level function Type_handler_timestamp_common::cmp_native()
goes the fastest execution path optimized for the case when both sides
have equal fractional precision.
Adding a helper class TimeOfDay to reuse the code when populating:
- the last datetime point for YEAR()
- the last datetime point for DATE()
with a given fractional precision.
This class also helped to unify the identical code in create_start_bound()
and create_end_bound() into a single method create_bound().
Changing the way the following conditions are evaluated:
WHERE timestamp_column=datetime_const_expr
(for all comparison operators: =, <=>, <, >, <=, >=, <> and for NULLIF)
Before the change it was always performed as DATETIME.
That was not efficient, as it involved per-row TIMESTAMP->DATETIME conversion
for timestamp_column. For example, in case of the SYSTEM time zone
it involved a localtime_r() call, which is known to be slow.
After the change it's performed as TIMESTAMP in many cases.
This avoids the per-row conversion, as it works the other way around:
datetime_const_expr is converted to TIMESTAMP once before the execution stage.
Note, datetime_const_expr must be inside monotone continuous periods of
the current time zone, i.e. not near these anomalies:
- DST changes (spring forward, fall back)
- leap seconds
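A hedged example (assuming ts0 is a TIMESTAMP column):
SELECT * FROM t1 WHERE ts0 <= '2024-01-23 10:00:00';
-- The constant is now converted to TIMESTAMP once before execution,
-- instead of converting ts0 to DATETIME for every row.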
During the 10.5->10.6 merge please use the 10.6 code on conflicts.
This is the 10.5 version of the patch (a backport of the 10.6 version).
Unlike the 10.6 version, it makes changes in plugin/type_inet/sql_type_inet.*
rather than in sql/sql_type_fixedbin.h
Item_bool_rowready_func2, Item_func_between, Item_func_in
did not check if a not-NULL argument of an arbitrary data type
can produce a NULL value on conversion to INET6.
This caused a crash on DBUG_ASSERT() in conversion failures,
because the function returned SQL NULL for something that
has Item::maybe_null() equal to false.
Adding code to set NULL-ability in such cases.
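A hedged illustration (the column name is made up):
CREATE TABLE t1 (a INET6);
SELECT * FROM t1 WHERE a = 1.5;
-- 1.5 cannot be converted to INET6, so the comparison must be allowed to
-- return SQL NULL; before the fix maybe_null() stayed false and debug
-- builds could hit the DBUG_ASSERT.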
Details:
- Removing the code in Item_func::setup_args_and_comparator()
performing character set aggregation with optional narrowing.
This aggregation is done inside Arg_comparator::set_cmp_func_string().
So this code was redundant.
- Removing Item_func::setup_args_and_comparator(), as it got simplified to
just two lines:
convert_const_compared_to_int_field(thd);
return cmp->set_cmp_func(thd, this, &args[0], &args[1], true);
Using these lines directly in:
- Item_bool_rowready_func2::fix_length_and_dec()
- Item_func_nullif::fix_length_and_dec()
- Adding a new virtual method:
- Type_handler::Item_bool_rowready_func2_fix_length_and_dec().
- Adding tests detecting if the data type conversion can return SQL NULL into
the following methods of Type_handler_inet6:
- Item_bool_rowready_func2_fix_length_and_dec
- Item_func_between_fix_length_and_dec
- Item_func_in_fix_comparator_compatible_types
This is the 10.6 version of the patch.
Item_bool_rowready_func2, Item_func_between, Item_func_in
did not check if a not-NULL argument of an arbitrary data type
can produce a NULL value on conversion to INET6.
This caused a crash on DBUG_ASSERT() in conversion failures,
because the function returned SQL NULL for something that
has Item::maybe_null() equal to false.
Adding code to set NULL-ability in such cases.
Details:
- Removing the code in Item_func::setup_args_and_comparator()
performing character set aggregation with optional narrowing.
This aggregation is done inside Arg_comparator::set_cmp_func_string().
So this code was redundant.
- Removing Item_func::setup_args_and_comparator(), as it got simplified to
just two lines:
convert_const_compared_to_int_field(thd);
return cmp->set_cmp_func(thd, this, &args[0], &args[1], true);
Using these lines directly in:
- Item_bool_rowready_func2::fix_length_and_dec()
- Item_func_nullif::fix_length_and_dec()
- Adding a new virtual method:
- Type_handler::Item_bool_rowready_func2_fix_length_and_dec().
- Adding tests detecting if the data type conversion can return SQL NULL into
the following methods of Type_handler_fbt:
- Item_bool_rowready_func2_fix_length_and_dec
- Item_func_between_fix_length_and_dec
- Item_func_in_fix_comparator_compatible_types
(Variant#3: Allow cross-charset comparisons, use a special
CHARSET_INFO to create lookup keys. Review input addressed.)
Equalities that compare utf8mb{3,4}_general_ci strings, like:
WHERE ... utf8mb3_key_col=utf8mb4_value (MB3-4-CMP)
can now be used to construct ref[const] access and also participate
in multiple-equalities.
This means that utf8mb3_key_col can be used for key-lookups when
compared with an utf8mb4 constant, field or expression using '=' or
'<=>' comparison operators.
This is controlled by optimizer_switch='cset_narrowing=on', which is
OFF by default.
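A hedged example (table and column names are made up):
SET optimizer_switch='cset_narrowing=on';
EXPLAIN SELECT * FROM t1 WHERE utf8mb3_key_col = _utf8mb4'abc';
-- With the switch enabled, the utf8mb4 constant no longer prevents
-- ref access on the index over utf8mb3_key_col.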
IMPLEMENTATION
Item value comparison in (MB3-4-CMP) is done using utf8mb4_general_ci.
This is valid as any utf8mb3 value is also an utf8mb4 value.
When making index lookup value for utf8mb3_key_col, we do "Charset
Narrowing": characters that are in the Basic Multilingual Plane (=BMP) are
copied as-is, as they can be represented in utf8mb3. Characters that are
outside the BMP cannot be represented in utf8mb3 and are replaced
with U+FFFD, the "Replacement Character".
In utf8mb4_general_ci, the Replacement Character compares as equal to any
character that's not in BMP. Because of this, the constructed lookup value
will find all index records that would be considered equal by the original
condition (MB3-4-CMP).
Approved-by: Monty <monty@mariadb.org>
When aggregating pairs BIT+NULL and NULL+BIT for result, e.g.
in COALESCE(), preserve the BIT data type (ignore explicit NULLs).
The same fix applied to YEAR.
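A hedged illustration (table and column names are made up):
CREATE TABLE t1 (b BIT(8), y YEAR);
CREATE TABLE t2 AS SELECT COALESCE(b, NULL) AS cb, COALESCE(NULL, y) AS cy
FROM t1;
-- cb is expected to stay BIT(8) and cy to stay YEAR, instead of being
-- converted to a numeric type because of the explicit NULL.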
The MDEV-29693 conflict resolution is from Monty, as well as
a bug fix where ANALYZE TABLE wrongly built histograms for
a single-column PRIMARY KEY.
Also includes a fix for safe_malloc error reporting.
Other things:
- Copied main.log_slow from 10.4 to avoid mtr issue
Disabled tests:
- spider/bugfix.mdev_27239 because we started to get
+Error 1429 Unable to connect to foreign data source: localhost
-Error 1158 Got an error reading communication packets
- main.delayed
- Bug#54332 Deadlock with two connections doing LOCK TABLE+INSERT DELAYED
This part is disabled for now as it fails randomly with different
warnings/errors (no corruption).
Raise notes if indexes cannot be used:
- in case of a data type or collation mismatch (different error messages).
- in case a table field was replaced with something else
(e.g. Item_func_conv_charset) during a condition rewrite.
Added option to write warnings and notes to the slow query log for
slow queries.
New variables added/changed:
- note_verbosity, which is a set of the following options:
basic - All old notes
unusable_keys - Print warnings about keys that cannot be used
for select, delete or update.
explain - Print unusable_keys warnings for EXPLAIN queries.
The default is 'basic,explain'. This means that for old installations
the only notable new behavior is that one will get notes about
unusable keys when one does an EXPLAIN for a query. One can turn off
all notes by either setting note_verbosity to "" or setting sql_notes=0.
- log_slow_verbosity has a new option 'warnings'. If this is set
then warnings and notes generated are printed in the slow query log
(up to log_slow_max_warnings times per statement).
- log_slow_max_warnings - Max number of warnings written to
slow query log.
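A hedged configuration example (the values are illustrative):
SET GLOBAL note_verbosity='basic,unusable_keys,explain';
SET GLOBAL log_slow_verbosity='warnings';
SET GLOBAL log_slow_max_warnings=10;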
Other things:
- One can now use =ALL for any 'set' variable to set all options at once.
For example using "note_verbosity=ALL" in a config file or
"SET @@note_verbosity=ALL' in SQL.
- mysqldump will in the future use @@note_verbosity="" instead of
@@sql_notes=0 to disable notes.
- Added "enum class Data_type_compatibility" and changing the return type
of all Field::can_optimize*() methods from "bool" to this new data type.
Reviewer & Co-author: Alexander Barkov <bar@mariadb.com>
- The code that prints out the notes comes mainly from Alexander