optimize_schema_tables_memory_usage() crashed when its argument included
a TABLE struct that was not fully initialized.
To prevent such a crash, we check if a table is an information schema table at
the beginning of each iteration.
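A minimal sketch of the shape of that guard (the loop and member names here are assumptions for illustration, not the actual patch):

    for (TABLE_LIST *tl= tables; tl; tl= tl->next_local)
    {
      /* Skip entries that are not I_S tables, or whose TABLE was never
         fully initialized, instead of dereferencing them blindly. */
      if (!tl->schema_table || !tl->table || !tl->table->file)
        continue;
      /* ... shrink the unused columns of this I_S table ... */
    }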
Closes #1768
This was caused by two different bugs:
1) Information_schema tables were not locked by lock_tables(), but
get_lock_data() was not filtering them out. This caused a crash when
mysql_unlock_some_tables() tried to unlock tables early, including
information schema tables that had never been locked.
Fixed by not locking SYSTEM_TMP_TABLES
2) In some cases the optimizer will notice that we do not need to read
the information_schema tables at all. In this case
join_tab->read_record is not set, which caused a crash in
get_schema_tables_result()
Fixed by ignoring const tables in get_schema_tables_result()
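Rough sketches of the two guards (hedged: the member and enum names below are taken from the description or assumed, not quoted from the patch):

    /* 1) in get_lock_data(): don't collect I_S tables for locking */
    if (table->s->tmp_table == SYSTEM_TMP_TABLE)
      continue;

    /* 2) in get_schema_tables_result(): skip const tables, for which the
       optimizer may not have set up join_tab->read_record */
    if (tab->type == JT_CONST)
      continue;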
An assertion failed in handler::ha_reset upon SELECT under
READ UNCOMMITTED from a table with an index on a virtual column.
This was a debug-only failure, though the problem is much wider:
* MY_BITMAP is a structure containing my_bitmap_map; the latter is a raw
bitmap.
* read_set, write_set and vcol_set of TABLE are pointers to MY_BITMAP
* The rest of the MY_BITMAPs are stored in TABLE and TABLE_SHARE
* Pointers to the stored MY_BITMAPs, like orig_read_set etc., and
sometimes all_set and tmp_set, are assigned to those pointers.
* Sometimes tmp_use_all_columns is used to substitute the raw bitmap
directly with all_set.bitmap
* Sometimes even bitmaps are directly modified, like in
TABLE::update_virtual_field(): bitmap_clear_all(&tmp_set) is called.
The last three items, when used together (which is mostly
always), make the program flow cumbersome and hard to follow,
not to mention the errors they cause, like this MDEV-17556, where the tmp_set
pointer was assigned to read_set, write_set and vcol_set, then its bitmap
was substituted with all_set.bitmap by a dbug_tmp_use_all_columns() call,
and then bitmap_clear_all(&tmp_set) was applied on top of all this.
To untangle this knot, the following rule should be applied:
* Never substitute bitmaps! This patch is about that.
The orig_* and all_set bitmaps are already never substituted.
This patch changes the following function prototypes:
* tmp_use_all_columns, dbug_tmp_use_all_columns
to accept MY_BITMAP** and to return MY_BITMAP * instead of my_bitmap_map*
* tmp_restore_column_map, dbug_tmp_restore_column_maps to accept
MY_BITMAP* instead of my_bitmap_map*
These functions will now substitute read_set/write_set/vcol_set directly,
and won't touch the underlying bitmaps.
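With the new prototypes, a typical caller looks roughly like this (sketch written from the description above, not from the actual headers; exact signatures may differ):

    /* Save the current read_set pointer and point read_set at all_set;
       the underlying bitmaps themselves are never modified. */
    MY_BITMAP *old_read_set= tmp_use_all_columns(table, &table->read_set);
    /* ... access any column ... */
    tmp_restore_column_map(&table->read_set, old_read_set);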
I_S tables were materialized too late; an attempt to use table
statistics before the table was created caused a crash.
Let's move table creation up. It only needs read_set to
be calculated properly, which happens in JOIN::optimize_inner(),
after the semijoin transformation.
Note that tables are not populated at that point, so most of the
statistics would make no sense anyway. But at least field sizes
will be correct. And it won't crash.
m_status == DA_OK_BULK' failed in Diagnostics_area::message()
Analysis: The assertion failure happens because we reach the maximum limit
of rows to examine.
Fix: Return the error state.
1. only call calc_sum_of_all_status() if a global
SHOW_xxx_STATUS variable is to be returned
2. only lock LOCK_status when copying global_status_var,
but not when iterating all threads
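A rough sketch of the resulting pattern (variable names are illustrative, not the actual implementation):

    STATUS_VAR tmp;
    if (global_status_requested)            /* assumed flag: a SHOW_xxx_STATUS variable is selected */
      calc_sum_of_all_status(&tmp);         /* iterates all threads without holding LOCK_status */
    mysql_mutex_lock(&LOCK_status);
    STATUS_VAR global_copy= global_status_var;  /* only the copy happens under the lock */
    mysql_mutex_unlock(&LOCK_status);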
The SQL standard (2016) allows a <collate clause> in two places in the
<column definition>: as a part of the <data type> or at the very end.
Let's do that too.
Side effect: in a column/SP declaration, `COLLATE cs_coll` automatically
implies `CHARACTER SET cs` (unless the charset was specified explicitly).
See the changes in sp-ucs2.result.
Disable thd->count_cuted_fields when populating internal temporary
tables for I_S, because this is how a standalone SELECT works.
And if the SELECT is a part of an INSERT or UPDATE or RETURN or SET or
anything else that enables thd->count_cuted_fields, the counting should
only apply when storing the result of the SELECT in a field or a
variable, not when populating internal temporary tables for I_S.
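The change amounts to a save/restore around the fill, roughly (sketch of the idea, not the literal diff):

    enum_check_fields save_count_cuted_fields= thd->count_cuted_fields;
    thd->count_cuted_fields= CHECK_FIELD_IGNORE;   /* as in a standalone SELECT */
    /* ... populate the internal I_S temporary table ... */
    thd->count_cuted_fields= save_count_cuted_fields;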
Reimplement MDEV-14275 Improving memory utilization for information schema
Postpone temp table instantiation until after setup_fields().
Replace all unused (not marked in read_set) columns in an I_S table
with CHAR(0). This can drastically reduce the footprint of a MEMORY
table (a TABLE_CATALOG alone is 1538 bytes per row).
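Conceptually the optimization does something like this (hypothetical sketch; the real code rebuilds the field list when creating the temp table rather than patching fields in place):

    for (uint i= 0; i < table->s->fields; i++)
    {
      if (!bitmap_is_set(table->read_set, i))
      {
        /* The column is never read by this query: declare it as CHAR(0)
           in the temp table definition so it takes almost no row space. */
      }
    }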
This does not change the engine. If the table was decided to be Aria
(because of, say, blobs) then after optimization it'll stay Aria
even if all blobs were removed.
Note 1: when transforming table structure, share->blob_fields is
preserved, otherwise Aria might switch from DYNAMIC to STATIC row format
and expect a special field for a deleted mark, which create_tmp_table
didn't provide.
Note 2: the optimizer was doing handler::info() (to learn the number of rows)
before the temp table was populated. That didn't make much sense. Now
it's done before the table is even instantiated. Preserve the old
behavior and report 0 rows.
This reverts e2664ee836 and a8458a2345
Analysis: When we reach the maximum limit of rows to examine, killed_state is
set to ABORT. But this isn't an actual error, and we still return TRUE. This
eventually reports the error as UNKNOWN ERROR.
Fix: Check whether we need to stop execution by checking the killed state. If
we have to abort, return false, because this isn't an actual error.
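The check is roughly of this shape (sketch; the exact place and condition in the patch may differ):

    if (thd->killed == ABORT_QUERY)
    {
      /* LIMIT ROWS EXAMINED was reached: stop filling the result,
         but do not raise an error. */
      return false;
    }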
Diagnostics_area::sql_errno upon query from I_S with LIMIT ROWS EXAMINED
open_normal_and_derived_table() fails because the query was already killed,
as the rows examined by the query exceed the limit. However, this isn't a
real error.
Fix: Check whether there is actually an error before calling thd->sql_errno(),
and later send a warning in handle_select() if there is no real error.
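A sketch of the intended behaviour (hedged: the error code and message below are placeholders, not necessarily what the patch uses):

    if (!thd->is_error())
    {
      /* Killed by LIMIT ROWS EXAMINED rather than by a real error:
         report a warning instead of reading sql_errno(). */
      push_warning(thd, Sql_condition::WARN_LEVEL_WARN,
                   ER_QUERY_EXCEEDED_ROWS_EXAMINED_LIMIT,
                   "Query execution was interrupted");
    }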
- Adding optional qualifiers to data types:
CREATE TABLE t1 (a schema.DATE);
Qualifiers now work only for three pre-defined schemas:
mariadb_schema
oracle_schema
maxdb_schema
These schemas are virtual (hard-coded) for now, but may turn into real
databases on disk in the future.
- mariadb_schema.TYPE now always resolves to a true MariaDB data
type TYPE without sql_mode specific translations.
- oracle_schema.DATE translates to MariaDB DATETIME.
- maxdb_schema.TIMESTAMP translates to MariaDB DATETIME.
- Fixing SHOW CREATE TABLE to use a qualifier for a data type TYPE
if the current sql_mode translates TYPE to something else.
The above changes fix the reported problem, so this script:
SET sql_mode=ORACLE;
CREATE TABLE t2 AS SELECT mariadb_date_column FROM t1;
is now replicated as:
SET sql_mode=ORACLE;
CREATE TABLE t2 (mariadb_date_column mariadb_schema.DATE);
and the slave can unambiguously treat DATE as the true MariaDB DATE
without ORACLE specific translation to DATETIME.
Similarly,
SET sql_mode=MAXDB;
CREATE TABLE t2 AS SELECT mariadb_timestamp_column FROM t1;
is now replicated as:
SET sql_mode=MAXDB;
CREATE TABLE t2 (mariadb_timestamp_column mariadb_schema.TIMESTAMP);
so the slave treats TIMESTAMP as the true MariaDB TIMESTAMP
without MAXDB specific translation to DATETIME.
In case of NATURAL JOIN / USING, mark all fields (one table cannot be opened
in any case, so the optimisation is not worth it).
IMHO the table should be checked for used fields and filled after prepare,
when we will have complete info about the used fields, but that is too big a
change for a bugfix. It will be made later by Serg's patch.
The code in fill_schema_schemata() did not take into account that
make_db_list() can leave db_names empty if the requested database
name was too long, so the call to db_names.at(0) crashed on an assertion.
- Moving the code testing if the database directory exists
into a separate function verify_database_directory_exists()
- Modifying the test to check if db_names is not empty
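The added guard boils down to something like this (sketch; the make_db_list() signature and the Dynamic_array interface are taken from context and may differ):

    if (make_db_list(thd, &db_names, &lookup_field_vals))
      DBUG_RETURN(1);
    if (!db_names.elements())
      DBUG_RETURN(0);   /* e.g. the requested database name was too long: nothing to fill */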
(Variant #2 of the patch, which keeps the sp_head object inside the
MEM_ROOT that the sp_head object owns)
(10.3 requires extra work due to sp_package, will commit a separate
patch for it)
sp_head::operator new() and operator delete() were dereferencing sp_head*
pointers to memory that didn't hold a valid sp_head object (it was
not created/already destroyed).
This caused UBSan to crash when looking up type information.
Fixed by providing static sp_head::create() and sp_head::destroy() methods.
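The new interface is roughly of this shape (simplified sketch; the allocator calls, constructor arguments and member names here are assumptions, not the exact patch):

    sp_head *sp_head::create()
    {
      MEM_ROOT own_root;
      init_alloc_root(&own_root, "sp_head", 1024, 0, MYF(0)); /* assumed 10.4-style signature */
      /* Placement-construct the object inside the MEM_ROOT it will own. */
      return new (&own_root) sp_head(&own_root);
    }

    void sp_head::destroy(sp_head *sp)
    {
      if (!sp)
        return;
      MEM_ROOT own_root= sp->main_mem_root;  /* save the root before the object goes away */
      sp->~sp_head();                        /* run the destructor while the memory is still valid */
      free_root(&own_root, MYF(0));          /* only now release the memory that held *sp */
    }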
(Variant #2 of the patch, which keeps the sp_head object inside the
MEM_ROOT that the sp_head object owns)
(10.3 version of the fix, with handling for class sp_package)
sp_head::operator new() and operator delete() were dereferencing sp_head*
pointers to memory that didn't hold a valid sp_head object (it was
not created/already destroyed).
This caused UBSan to crash when looking up type information.
Fixed by providing static sp_head::create() and sp_head::destroy() methods.
PR#1127: Fix is_check_constraints.result to be compatible with 10.3
The patch is done according to the original patch for MDEV-14474,
1edd09c325, and not the one which was merged into the server,
d526679efd.
This patch includes:
- Rename tests and results from `is_check_constraint` to
`is_check_constraints`
- Per review, change the order of fields in the I_S check_constraints table by
adding the column `table_name` before `constraint_name`. According to the
2006 standard there is no `table_name` column.
- The original patch and the one in `10.3` support the embedded server, which
this patch doesn't. After the merge, `10.3` will not support it either.
- Don't use patch c8b8b01b61 to change the length of the `CHECK_CLAUSE` field
PR#1150: MDEV-18440: Information_schema.check_constraints possible data leak
This patch is an extension of PR 1127 and includes:
- Check for table grants
- Additional test according to the MDEV specification
Signed-off-by: Vicențiu Ciorbaru <vicentiu@mariadb.org>
Basically it's an uninitialized read. 165 is 0xa5, which comes from TRASH_ALLOC().
Fix by calling a class constructor which initializes the problematic
TMP_TABLE_PARAM::force_copy_fields field.
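In other words, roughly (sketch of the idea rather than the literal change):

    /* Construct the object so that TMP_TABLE_PARAM's constructor runs and
       force_copy_fields starts as false, instead of leaving the memory
       filled with the 0xa5 (165) pattern written by TRASH_ALLOC(). */
    TMP_TABLE_PARAM *param= new (thd->mem_root) TMP_TABLE_PARAM;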