An attempt to create a procedure with the DEFINER clause resulted in
abnormal server termination when the server was run with the option
--skip-grant-tables=1.
The reason for the abnormal termination is that, while handling the DEFINER
clause, uninitialized data members of acl_cache were accessed, which led
to a server crash.
The behaviour of the server for this use case must be the same
as for the embedded server. That means, if the security subsystem wasn't
initialized (the server was started with the option --skip-grant-tables=1),
return success from get_current_user() without further access to the
acl_cache, which is obviously not initialized.
Additionally, AUTHID::is_role was modified to handle the case when
the host part of the user name isn't provided. This case is treated as if
an empty host name was provided.
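A minimal reproduction sketch (the procedure, user and role names are illustrative):
-- server started with --skip-grant-tables=1
CREATE DEFINER=some_user@localhost PROCEDURE p1() SELECT 1;
-- crashed before the fix; now the DEFINER is accepted without touching acl_cache
CREATE DEFINER=some_role PROCEDURE p2() SELECT 1;
-- definer without a host part: now treated as if an empty host name was given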
Discovered this while working on MDEV-34720: test_if_cheaper_ordering()
uses rec_per_key, while the original estimate for the access method
is produced in best_access_path() by using actual_rec_per_key().
Make test_if_cheaper_ordering() also use actual_rec_per_key().
Also make several getter functions "const" to make this compile.
Also adjusted the testcase to handle this (the change is backported from
11.0).
The crash happened with an indexed virtual column whose
value is evaluated using a function that has a different meaning
in sql_mode='' vs sql_mode=ORACLE:
- DECODE()
- LTRIM()
- RTRIM()
- LPAD()
- RPAD()
- REPLACE()
- SUBSTR()
For example:
CREATE TABLE t1 (
  b VARCHAR(1),
  g CHAR(1) GENERATED ALWAYS AS (SUBSTR(b,0,0)) VIRTUAL,
  KEY g(g)
);
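For reference, the two modes disagree on results like this (a sketch; the results
follow the documented MariaDB vs Oracle SUBSTR semantics):
SET sql_mode='';
SELECT SUBSTR('abc',0,1);  -- '' : position 0 yields an empty string
SET sql_mode=ORACLE;
SELECT SUBSTR('abc',0,1);  -- 'a': position 0 is treated as position 1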
So far we had replacement XXX_ORACLE() functions for all mentioned functions,
e.g. SUBSTR_ORACLE() for SUBSTR(). So it was possible to correctly re-parse
SUBSTR_ORACLE() even in sql_mode=''.
But it was not possible to re-parse the MariaDB version of SUBSTR()
after switching to sql_mode=ORACLE. It was erroneously interpreted
as SUBSTR_ORACLE().
As a result, this combination worked fine:
SET sql_mode=ORACLE;
CREATE TABLE t1 ... g CHAR(1) GENERATED ALWAYS AS (SUBSTR(b,0,0)) VIRTUAL, ...;
INSERT ...
FLUSH TABLES;
SET sql_mode='';
INSERT ...
But the other way around it crashed:
SET sql_mode='';
CREATE TABLE t1 ... g CHAR(1) GENERATED ALWAYS AS (SUBSTR(b,0,0)) VIRTUAL, ...;
INSERT ...
FLUSH TABLES;
SET sql_mode=ORACLE;
INSERT ...
At CREATE time, SUBSTR was instantiated as Item_func_substr and printed
in the FRM file as substr(). At re-open time with sql_mode=ORACLE, "substr()"
was erroneously instantiated as Item_func_substr_oracle.
Fix:
The fix proposes a symmetric solution. It provides a way to re-parse reliably
all sql_mode dependent functions to their original CREATE TABLE time meaning,
no matter what the open-time sql_mode is.
We take advantage of the same idea we previously used to resolve sql_mode
dependent data types.
Now all sql_mode dependent functions are printed by SHOW using a schema
qualifier when the current sql_mode differs from the function sql_mode:
SET sql_mode='';
CREATE TABLE t1 ... SUBSTR(a,b,c) ..;
SET sql_mode=ORACLE;
SHOW CREATE TABLE t1; -> mariadb_schema.substr(a,b,c)
SET sql_mode=ORACLE;
CREATE TABLE t2 ... SUBSTR(a,b,c) ..;
SET sql_mode='';
SHOW CREATE TABLE t2; -> oracle_schema.substr(a,b,c)
Old replacement names like substr_oracle() are still understood for
backward compatibility and used in FRM files (for downgrade compatibility),
but they are not printed by SHOW any more.
The MDEV-29693 conflict resolution is from Monty, as well as
a bug fix where ANALYZE TABLE wrongly built histograms for
a single-column PRIMARY KEY.
Also includes a fix for safe_malloc error reporting.
Other things:
- Copied main.log_slow from 10.4 to avoid mtr issue
Disabled test:
- spider/bugfix.mdev_27239 because we started to get
+Error 1429 Unable to connect to foreign data source: localhost
-Error 1158 Got an error reading communication packets
- main.delayed
- Bug#54332 Deadlock with two connections doing LOCK TABLE+INSERT DELAYED
This part is disabled for now as it fails randomly with different
warnings/errors (no corruption).
An "ITERATE innerLoop" did not work properly inside
a WHILE loop, which itself is inside an outer FOR loop:
outerLoop:
FOR
  ...
  innerLoop:
  WHILE
    ...
    ITERATE innerLoop;
    ...
  END WHILE;
  ...
END FOR;
It erroneously generated integer increment code for the outer FOR loop.
There were two problems:
1. "ITERATE innerLoop" worked like "ITERATE outerLoop"
2. It was always integer increment, even in case of FOR cursor loops.
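A minimal runnable sketch of the expected behaviour (procedure and variable names
are illustrative):
DELIMITER $$
CREATE PROCEDURE p1()
BEGIN
  DECLARE v_out INT DEFAULT 0;
  DECLARE v_in  INT DEFAULT 0;
  outerLoop:
  FOR i IN 1..3
  DO
    SET v_out = v_out + 1;
    innerLoop:
    WHILE v_in < 10 DO
      SET v_in = v_in + 1;
      ITERATE innerLoop;  -- must jump back to the WHILE condition, not to the FOR increment
    END WHILE;
  END FOR;
  SELECT v_out, v_in;  -- expected: 3, 10
END$$
DELIMITER ;
CALL p1();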
Background:
- A FOR loop automatically creates a dedicated sp_pcontext stack entry,
to put the iteration and bound variables on it.
- Other loop types (LOOP, WHILE, REPEAT) do not generate a dedicated
stack entry.
The old code erroneously assumed that sp_pcontext::m_for_loop
either describes the most inner loop (in case the inner loop is FOR),
or is empty (in case the inner loop is not FOR).
But in fact, sp_pcontext::m_for_loop is never empty inside a FOR loop:
it describes the closest FOR loop, even if this FOR loop has nested
non-FOR loops inside.
So when we're near the ITERATE statement in the above script,
sp_pcontext::m_for_loop is not empty - it stores information about
the FOR loop labeled as "outrLoop:".
Fix:
- Adding a new member sp_pcontext::Lex_for_loop::m_start_label,
to remember the explicit or the auto-generated label corresponding
to the start of the FOR body. It's used during generation
of "ITERATE loop_label" code to check if "loop_label" belongs
to the current FOR loop pointed by sp_pcontext::m_for_loop,
or belongs to a non-FOR nested loop.
- Adding LEX methods sp_for_loop_intrange_iterate() and
sp_for_loop_cursor_iterate() to reuse the code between
methods handling:
* ITERATE
* END FOR
- Adding a test for Lex_for_loop::is_for_loop_cursor()
and generating code for either a cursor fetch or an integer increment,
accordingly. Before this change, it always erroneously generated the
integer increment version.
- Cleanup: Initialize Lex_for_loop_st::m_cursor_offset inside
Lex_for_loop_st::init(), to avoid uninitialized members.
- Cleanup: Removing a redundant method:
Lex_for_loop_st::init(const Lex_for_loop_st &other)
Using Lex_for_loop_st::operator=(const Lex_for_loop_st &other) instead.
Example of what causes the problem:
T1: ANALYZE TABLE starts to collect statistics
T2: ALTER TABLE starts by deleting statistics for all changed fields,
then creates a temp table and copies data to it.
T1: ANALYZE ends and writes to the statistics tables.
T2: ALTER TABLE renames temp table in place of the old table.
Now the statistics from ANALYZE match the old, deleted table.
Fixed by waiting to delete old statistics until ALTER TABLE is
the only one using the old table and ensure that rename of columns
can handle swapping of column names.
rename_columns_in_stat_table() (former rename_column_in_stat_tables())
now takes a list of columns to rename. It uses the following algorithm
to update column_stats so that it can handle circular renames:
- While there are columns to be renamed, and it is the first loop or the
  last rename loop changed something:
  - Loop over all columns to be renamed:
    - Change the column name in column_stat.
    - If this fails because of a duplicate key:
      - If this is the first change attempt for this column:
        - Change the column name to a temporary column name.
        - If there was a conflicting row, replace it with the current row.
      - else:
        - Remove the entry from the column list.
- Loop over all remaining columns in the list:
  - Remove the conflicting row.
  - Change the column from the temporary name to the final name in column_stat.
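For example, the circular case the above algorithm has to handle is a plain swap
of two column names (a sketch; table and column names are illustrative):
CREATE TABLE t1 (a INT, b INT);
ANALYZE TABLE t1 PERSISTENT FOR ALL;  -- fills mysql.column_stats
ALTER TABLE t1 RENAME COLUMN a TO b, RENAME COLUMN b TO a;
-- the mysql.column_stats rows for 'a' and 'b' must be swapped as well,
-- which requires the temporary-name step from the algorithm above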
Other things:
- Don't flush tables for every operation. Only flush when all updates
are done.
- Rename of columns was not handled in case of ALGORITHM=copy (old bug).
- Fixed that we do not collect statistics for hidden hash columns
used by UNIQUE constraint on long values.
- Fixed that we do not collect statistics for blob columns referred by
generated virtual columns. This was achieved by storing the fields for
which we want to have statistics in table->has_value_set instead of
in table->read_set.
- Rename of indexes was not handled for persistent statistics.
- This is now handled similarly to rename of columns. Renamed indexes
are now stored in 'rename_stat_indexes' and handled in
Alter_info::delete_statistics() together with dropped indexes.
- ALTER TABLE .. ADD INDEX may, instead of creating a new index, rename
an existing generated foreign key index. This was not reflected in
the index_stats table because it was handled in
mysql_prepare_create_table() instead of in the mysql_alter() code.
Fixed by adding a call in mysql_prepare_create_table() to drop the
changed index.
I also had to change the code that 'marked the index' to be ignored
with code that does not destroy the original index name.
Reviewer: Sergei Petrunia <sergey@mariadb.com>
Adding virtual methods to class Schema:
make_item_func_replace()
make_item_func_substr()
make_item_func_trim()
This is a non-functional preparatory change for MDEV-27744.
This patch is the result of running
run-clang-tidy -fix -header-filter=.* -checks='-*,modernize-use-equals-default' .
Code style changes have been done on top. The result of this change
leads to the following improvements:
1. Binary size reduction.
* For a -DBUILD_CONFIG=mysql_release build, the binary size is reduced by
~400kb.
* A raw -DCMAKE_BUILD_TYPE=Release reduces the binary size by ~1.4kb.
2. The compiler can better understand the intent of the code, which leads
to more optimization possibilities. Additionally it enabled detecting
unused variables that had an empty default constructor but were not
marked as such explicitly.
A particular change was required following this patch in sql/opt_range.cc:
result_keys, an unused instance of the template class Bitmap, now correctly
issues unused-variable warnings.
Setting Bitmap template class constructor to default allows the compiler
to identify that there are no side-effects when instantiating the class.
Previously the compiler could not issue the warning as it assumed Bitmap
class (being a template) would not be performing a NO-OP for its default
constructor. This prevented the "unused variable warning".
Changing the error messages in a statement like this:
CREATE DATABASE db1
COLLATE utf8mb4_bin
CHARACTER SET utf8mb4
CHARACTER SET latin1;
from
COLLATION 'utf8mb4_bin' is not valid for CHARACTER SET 'latin1'
to a more expected:
Conflicting declarations: 'CHARACTER SET utf8mb4' and 'CHARACTER SET latin1'
In order to do this:
- Adding a new type TYPE_CHARACTER_SET_COLLATE_EXACT into
Lex_exact_charset_extended_collation_attrs_st
- Removing m_had_charset_exact from its descendant class
Lex_extended_charset_extended_collation_attrs_st
Additional cleanup:
- Changing methods in Lex_exact_charset_extended_collation_attrs_st
set_charset(), set_charset_collate_default(), set_charset_collate_binary()
to get Lex_exact_charset instead of CHARSET_INFO as a parameter,
to guarantee that the argument is only CHARACTER SET and does not have
any COLLATE clauses yet. This change is not directly related to
the error message change.
- Renaming Lex_charset_collation_st to
Lex_exact_charset_extended_collation_attrs_st
- Renaming Lex_explicit_charset_opt_collate to
Lex_exact_charset_opt_extended_collate
- Renaming their methods charset_collation() to charset_info(),
so the name clearly tells that it returns CHARSET_INFO.
Soon we'll have new classes (e.g. Lex_exact_collation) and
methods returning Lex_exact_collation. So the old name would be
confusing about the return type.
- Adding data type aliases:
using Lex_column_charset_collation_attrs_st = Lex_charset_collation_st;
using Lex_column_charset_collation_attrs = Lex_charset_collation;
and using them all around the code (except lex_charset.*)
instead of the original names.
- Renaming Lex_field_type_st::lex_charset_collation()
to charset_collation_attrs()
- Renaming Column_definition::set_lex_charset_collation()
to set_charset_collation_attrs()
- Renaming Column_definition::lex_charset_collation()
to charset_collation_attrs()
Rationale:
The name "Lex_charset_collation" was a not very good name.
It does not tell details about its properties:
1. if the charset is optional (yes)
2. if the collation is optional (yes)
3. if the charset can be exact (yes) or context (no)
4. if the collation can be: exact (yes) or context (yes)
5. if the clauses can be repeated multiple times (yes)
We'll need a few new data types soon with different properties.
For example, to fix MDEV-27896 and MDEV-27782, we'll need a new
data type which is very like Lex_charset_collation, but additionally
supports CHARACTER SET DEFAULT (which is allowed on table and database level,
but is not allowed on the column level yet), i.e. with:
"the charset can be exact (yes) or context (yes)" in N3.
So we'll have to rename Lex_charset_collation to something else,
e.g.: Lex_exact_charset_extended_collation_attrs,
and add a new data type:
e.g. Lex_extended_charset_extended_collation_attrs
Also, we'll possibly allow CHARACTER SET DEFAULT at the column level for
consistency with other places. So the storage on the column level can change:
- from Lex_exact_charset_extended_collation_attrs
- to Lex_extended_charset_extended_collation_attrs
Adding the aliases introduces a convenient abstraction against
upcoming renames and c++ data type changes.
The cause of the bug is an overflow of the uint16 KEY_PART_INFO::length and/or
uint16 KEY_PART_INFO::store_length members. The solution is to increase the size
of those variables to the 'uint' type (which is 32 bits wide).
This patch also fixes:
MDEV-27690 Crash on `CHARACTER SET csname COLLATE DEFAULT` in column definition
MDEV-27853 Wrong data type on column `COLLATE DEFAULT` and table `COLLATE some_non_default_collation`
MDEV-28067 Multiple conflicting column COLLATE clauses are not rejected
MDEV-28118 Wrong collation of `CAST(.. AS CHAR COLLATE DEFAULT)`
MDEV-28119 Wrong column collation on MODIFY + CONVERT
This is used by InnoDB to detect if CREATE...SELECT is used
Other things:
- Changed InnoDB to use thd_ddl_options()
- Removed lock checking code for create...select (Approved by Marko)
This commit implements the standard SQL extension
OFFSET start { ROW | ROWS }
[FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } { ONLY | WITH TIES }]
To achieve this a reserved keyword OFFSET is introduced.
The general logic for WITH TIES implies:
1. The number of rows a query returns is no longer known during optimize
phase. Adjust optimizations to no longer consider this.
2. During end_send make use of an "order Cached_item" to compare if the
ORDER BY columns changed. Keep returning rows until there is a
change. This happens only after we reached the row limit.
3. Within end_send_group, the order by clause was eliminated. It is
still possible to keep the optimization of using end_send_group for
producing the final result set.
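A minimal usage sketch (table and column names are illustrative):
SELECT name, score FROM results
ORDER BY score DESC
OFFSET 1 ROW
FETCH FIRST 3 ROWS WITH TIES;
-- skips 1 row, then returns 3 rows plus any further rows whose
-- ORDER BY value ties with the last returned row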
Replace
* select_lex::offset_limit
* select_lex::select_limit
* select_lex::explicit_limit
with a single select_lex::limit_params member of type Lex_select_limit.
The Lex_select_limit type already existed with the same elements and was
used by the yacc parser.
This commit is in preparation for FETCH FIRST implementation, as it
simplifies a lot of the code.
Additionally, the parser is simplified by making use of the stack to
return Lex_select_limit objects.
Cleanup of init_query() too. Removes explicit_limit= 0 as it's done a bit later
in init_select() with limit_params.empty()
Adds an implementation for SELECT ... FOR UPDATE SKIP LOCKED /
SELECT ... LOCK IN SHARE MODE SKIP LOCKED.
This is implemented only in InnoDB at the moment, not in RocksDB yet.
This adds a new handler flag HA_CAN_SKIP_LOCKED that
is used when the storage engine advertises the flag.
When a storage engine indicates this flag it will get
TL_WRITE_SKIP_LOCKED and TL_READ_SKIP_LOCKED lock types.
The Lex structure has been updated to store both the FOR UPDATE/LOCK IN
SHARE MODE as well as the SKIP LOCKED, so that the SHOW CREATE VIEW
implementation is simpler.
"SELECT FOR UPDATE ... SKIP LOCKED" combined with CREATE TABLE AS or
INSERT.. SELECT on the result set is not safe for STATEMENT based
replication. MIXED replication will replicate this as row based events.
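A usage sketch (the queue table is illustrative):
-- connection 1
BEGIN;
SELECT * FROM job_queue WHERE state='new' LIMIT 1 FOR UPDATE;
-- connection 2: does not wait for connection 1; rows it cannot lock are skipped
SELECT * FROM job_queue WHERE state='new' LIMIT 1 FOR UPDATE SKIP LOCKED;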
Thanks to guidance from Facebook commit
193896c466
This helped verify basic test case, and components that need implementing
(even though every part was implemented differently).
Thanks to Marko for guidance on a simpler InnoDB implementation.
Reviewers: Marko, Monty
This feature adds the functionality of ignorability for indexes.
Indexes are not ignored by default.
To control index ignorability explicitly for a new index,
use IGNORE or NOT IGNORE as part of the index definition for
CREATE TABLE, CREATE INDEX, or ALTER TABLE.
Primary keys (explicit or implicit) cannot be made ignorable.
The table INFORMATION_SCHEMA.STATISTICS gets a new column named IGNORED that
stores whether an index is ignored or not.
The issue happens when the secondary keys are extended with primary
key parts. The function TABLE_SHARE::init_from_binary_frm_image()
adds the length bytes for the primary key parts to the length of the
secondary key. This is not needed, because when the extended keys are
used we recalculate the length for the used key parts.
Also removed TABLE_SHARE::total_key_length as it is not used in the code.
Approved-by: Monty <monty@mariadb.org>
The data member tv_usec of the struct timeval is declared as suseconds_t
on macOS. The size of suseconds_t is 4 bytes. On the other hand, the size of
ulong is 8 bytes on 64-bit macOS, so an attempt to assign a value of the wider
type (usec) to a variable (tv_usec) of the narrower type leads to an error.
- Adding optional qualifiers to data types:
CREATE TABLE t1 (a schema.DATE);
Qualifiers now work only for three pre-defined schemas:
mariadb_schema
oracle_schema
maxdb_schema
These schemas are virtual (hard-coded) for now, but may turn into real
databases on disk in the future.
- mariadb_schema.TYPE now always resolves to a true MariaDB data
type TYPE without sql_mode specific translations.
- oracle_schema.DATE translates to MariaDB DATETIME.
- maxdb_schema.TIMESTAMP translates to MariaDB DATETIME.
- Fixing SHOW CREATE TABLE to use a qualifier for a data type TYPE
if the current sql_mode translates TYPE to something else.
The above changes fix the reported problem, so this script:
SET sql_mode=ORACLE;
CREATE TABLE t2 AS SELECT mariadb_date_column FROM t1;
is now replicated as:
SET sql_mode=ORACLE;
CREATE TABLE t2 (mariadb_date_column mariadb_schema.DATE);
and the slave can unambiguously treat DATE as the true MariaDB DATE
without ORACLE specific translation to DATETIME.
Similarly,
SET sql_mode=MAXDB;
CREATE TABLE t2 AS SELECT mariadb_timestamp_column FROM t1;
is now replicated as:
SET sql_mode=MAXDB;
CREATE TABLE t2 (mariadb_timestamp_column mariadb_schema.TIMESTAMP);
so the slave treats TIMESTAMP as the true MariaDB TIMESTAMP
without MAXDB specific translation to DATETIME.
* The overlaps check is implemented on a handler level per row command.
It creates a separate cursor (actually, another handler instance) and
caches it inside the original handler, when ha_update_row or
ha_insert_row is issued. The cursor is closed when the handler is unlocked.
* Having the same key in the index means a unique constraint violation
even in the usual sense. So we fetch the left and right neighbours and check
that they have the same key prefix, excluding only the period part from the key.
If it doesn't match, then there is no such neighbour, and the check passes.
Otherwise, we check if this neighbour intersects with the considered key.
* The check does not introduce a new error and fails with the ER_DUPP_KEY error.
This might break the REPLACE workflow and should be fixed separately.
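A minimal sketch of the constraint this check enforces (table, column and period
names are illustrative):
CREATE TABLE t1 (
  id INT,
  s DATE,
  e DATE,
  PERIOD FOR p(s, e),
  PRIMARY KEY (id, p WITHOUT OVERLAPS)
);
INSERT INTO t1 VALUES (1, '2020-01-01', '2020-06-01');
INSERT INTO t1 VALUES (1, '2020-03-01', '2020-09-01');  -- rejected: overlaps the existing period for id=1
INSERT INTO t1 VALUES (2, '2020-03-01', '2020-09-01');  -- accepted: different key prefix (id)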