The crash was caused by dereferencing a null pointer when getting
the number of nesting levels of the set function for the current
select_lex in the method Item_field::fix_fields.
The current select being processed is taken from the Name_resolution_context
that is filled in by the function set_new_item_local_context(), where
initialization of the data member Name_resolution_context::select_lex
was mistakenly removed by the commit
d6ee351bbb
(Revert "MDEV-24454 Crash at change_item_tree").
To fix the issue, the initialization of the data member
Name_resolution_context::select_lex
that was removed by the commit d6ee351bbb
is restored.
This patch is actually a follow-up to the task
MDEV-23902: MariaDB crash on calling function.
It makes a statement use the correct query arena: if invocation of
a function is in progress, use its call arena; otherwise use the current
query arena, which can be either a statement or a regular query arena.
Add a debug-only field MDL_context::lock_warrant. This field can be set
to an MDL context different from the one the current execution is done in.
The lock warrantor has to hold an MDL for at least the duration of a table's
lifetime.
This is needed in the subsequent commit so that the shared MDL acquired by
the InnoDB purge_coordinator_task can be shared by the purge_worker_task
that accesses index records that include virtual columns.
Reviewed by: Vladislav Vaintroub
The warning is given if the table is not found or if there is a lock
timeout. The warning is needed because in case of a lock timeout the
persistent table stats are going to be wrong.
At the moment we cannot support
wsrep_forced_binlog_format=[MIXED|STATEMENT]
during CREATE TABLE AS SELECT.
The statement will use ROW instead and give
a warning.
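For illustration, a minimal sketch of the affected case (assuming a
Galera-enabled server; the table names here are made up, not taken from
the commit):
SET GLOBAL wsrep_forced_binlog_format=STATEMENT;
CREATE TABLE t2 AS SELECT * FROM t1; # falls back to ROW and raises a warning
SHOW WARNINGS;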
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
In case a table accessed by a PS/SP is dropped after the first execution of
the PS/SP, and a view is created with the same name as the table just dropped,
then the second execution of the PS/SP leads to allocation of memory on the
SP/PS memory root already marked as read only during the first execution.
For example, the following test case:
CREATE TABLE t1 (a INT);
PREPARE stmt FROM "INSERT INTO t1 VALUES (1)";
EXECUTE stmt;
DROP TABLE t1;
CREATE VIEW t1 AS SELECT 1;
--error ER_NON_INSERTABLE_TABLE
EXECUTE stmt; # (*)
DROP VIEW t1;
will hit an assert on running the statement 'EXECUTE stmt' marked with (*),
when memory allocation is performed on parsing the view.
Memory allocation is requested inside the function mysql_make_view()
when a view definition is being parsed. In order to avoid the assertion
failure, the call of the function mysql_make_view() must be moved after
the invocation of the function check_and_update_table_version().
This results in re-preparing the whole PS statement or the current
SP instruction, which frees the currently allocated items and resets
the read_only flag of the memory root.
Moved the call of the function check_and_update_table_version() to just
before the place where the function extend_table_list() is invoked,
in order to avoid allocation of memory on a PS/SP memory root
marked as read only. The allocation happens because the function
extend_table_list() invokes sp_add_used_routine() to add a trigger
created for the table in the time frame between executions of the statement
EXECUTE `stmt_id`.
For example, the following test case
create table t1 (a int);
prepare stmt from "insert into t1 (a) value (1)";
execute stmt;
create trigger t1_bi before insert on t1 for each row
set @message= new.a;
execute stmt; # (*)
adds the trigger t1_bi to the list of used routines, which involves
allocation of memory on the PS memory root that has already been marked
as read only on the first run of the statement 'execute stmt'.
As a result, when the statement marked with (*) is executed, it hits
the assert.
To fix the issue, call the function check_and_update_table_version()
before the invocation of extend_table_list() to force re-compilation of
the PS/SP, which resets the read-only flag of its memory root.
This commit enables reloading of engine-independent statistics
without flushing the table from the table definition cache.
This is achieved by allowing multiple versions of the
TABLE_STATISTICS_CB object and having independent pointers to it in
TABLE and TABLE_SHARE. TABLE_STATISTICS_CB objects are reference
counted and are freed when no one points to them anymore.
The TABLE's TABLE_STATISTICS_CB pointer is updated to use the
TABLE_SHARE's pointer when read_statistics_for_tables() is called at
the beginning of a query.
Main changes:
- read_statistics_for_table() will allocate a new TABLE_STATISTICS_CB
object.
- All get_stat_values() functions have a new parameter that tells
where collected data should be stored. get_stat_values() no longer
uses the table_field object to store data.
- All get_stat_values() functions return 1 if they found any
data in the statistics tables.
Other things:
- Fixed INSERT DELAYED to not read statistics tables.
- Removed Statistics_state from TABLE_STATISTICS_CB as this is not
needed anymore, as we are not changing TABLE_SHARE->stats_cb while
calculating or loading statistics.
- Store values used with store_from_statistical_minmax_field() in
TABLE_STATISTICS_CB::mem_root. This allowed me to remove the function
delete_stat_values_for_table_share().
- Field_blob::store_from_statistical_minmax_field() is implemented
but is not normally used as we do not yet support EIS statistics
for blobs. For example Field_blob::update_min() and
Field_blob::update_max() are not implemented.
Note that the function can be called if there is a concurrent
"ALTER TABLE MODIFY field BLOB" running because of a bug in
ALTER TABLE where it deletes entries from column_stats
before it has an exclusive lock on the table.
- Use the result of field->val_str(&val) as a pointer to the result
instead of val (safety fix).
- Allocate memory for collected statistics in THD::mem_root, not in
TABLE::mem_root. The latter could cause the TABLE object to grow if
ANALYZE TABLE was run many times on the same table.
This was done in allocate_statistics_for_table(),
create_min_max_statistical_fields_for_table() and
create_min_max_statistical_fields_for_table_share().
- Store in TABLE_STATISTICS_CB::stats_available which statistics were
found in the statistics tables.
- Removed index_table from class Index_prefix_calc as it was not used.
- Added TABLE_SHARE::LOCK_statistics to ensure we don't load EITS
in parallel. The first thread will load it; others will reuse the
loaded data.
- Eliminate read_histograms_for_table(). The loading happens within
read_statistics_for_tables() if histograms are needed.
One downside is that if we have read statistics without histograms
before and someone requires histograms, we have to read all statistics
again (once) from the statistics tables.
A smaller downside is the need to call alloc_root() for each
individual histogram. Before, we could allocate all the space for
histograms with a single alloc_root() call.
- Fixed a bug in MyISAM and Aria where they did not properly notice
that the table had changed after ANALYZE TABLE. This was not a problem
before this patch, as the MyISAM and Aria tables were then flushed
as part of ANALYZE TABLE, which hid this issue.
- Fixed a bug in ANALYZE TABLE where table->records could be seen as 0
in collect_statistics_for_table(). The effect of this unlikely bug
was that a full table scan could be done even if
analyze_sample_percentage was not set to 1.
- Changed multiple mallocs in a row to use multi_alloc_root().
- Added mutex protection in update_statistics_for_table() to ensure
that statistics for several tables are not updated at the same time.
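As a usage-level sketch (not part of this commit's test suite), newly
collected statistics should now be picked up without flushing the table:
ANALYZE TABLE t1 PERSISTENT FOR ALL;
SELECT db_name, table_name, column_name FROM mysql.column_stats WHERE table_name='t1';
# No FLUSH TABLES is needed before the next statement that reads statistics for t1.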
Some of the changes in sql_statistics.cc are based on a patch from
Oleg Smirnov <olernov@gmail.com>
Co-authored-by: Oleg Smirnov <olernov@gmail.com>
Co-authored-by: Vicentiu Ciorbaru <cvicentiu@gmail.com>
Reviewer: Sergei Petrunia <sergey@mariadb.com>
Restrict vcol_cleanup_expr() in close_thread_tables() to simple
locked tables mode only. Prelocked mode is cleaned up like a normal
statement: in close_thread_table().
The new statistics are enabled by adding the "engine", "innodb" or "full"
option to --log-slow-verbosity.
Example output:
# Pages_accessed: 184 Pages_read: 95 Pages_updated: 0 Old_rows_read: 1
# Pages_read_time: 17.0204 Engine_time: 248.1297
Pages_read_time is the time spent doing physical reads inside a storage engine.
(Writes cannot be tracked, as these are usually done in the background.)
Engine_time is the time spent inside the storage engine for the full
duration of the read/write/update calls. It uses the same code as
'analyze statement' for calculating the time spent.
The engine statistics are collected through a generic interface that should be
easy for any engine to use. It can also easily be extended to provide
even more statistics.
Currently only InnoDB has counters for Pages_% and Undo_% status.
Engine_time works for all engines.
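A usage sketch (illustrative, using the option names described above):
SET GLOBAL slow_query_log=1;
SET GLOBAL log_slow_verbosity='engine';
SET GLOBAL long_query_time=0; # log every statement, for the sake of the example
# Entries in the slow query log now include the Pages_% / Engine_time lines shown above.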
Implementation details:
class ha_handler_stats holds all engine stats. This class is included
in handler and THD classes.
While a query is running, all statistics are updated in the handler. In
close_thread_tables() the statistics are added to the THD.
handler::handler_stats is a pointer to where statistics should be
collected. This is set to point to handler::active_handler_stats if
stats are requested. If not, it is set to 0.
handler_stats also has an element, 'active', that is 1 if stats are
requested. This is to allow engines to avoid doing any 'if's while
updating the statistics.
Cloned or partitioned tables have the pointer set to that of the base
table if stats are requested.
There is a small performance impact when using --log-slow-verbosity=engine:
- All engine calls in 'select' will be timed.
- IO calls for InnoDB reads will be timed.
- Incrementing of counters is done on local variables, and accesses
are inlined, so these should have very little impact.
- Statistics have to be reset for each statement for the THD and each
used handler. This is only 40 bytes, which should be negligible.
- For partitioned tables we have to loop over all partitions to update
the handler_status as part of table_init(). This can be optimized in the
future to only do this if log-slow-verbosity changes. For this to work
we have to update handler_status for all opened partitions and
also for all partitions opened in the future.
Other things:
- Added options 'engine' and 'full' to log-slow-verbosity.
- Some of the new files in the test suite come from Percona Server, which
has similar status information.
- buf_page_optimistic_get(): Do not increment any counter, since we are
only validating a pointer, not performing any buf_pool.page_hash lookup.
- Added THD argument to save_explain_data_intern().
- Switched arguments for save_explain_.*_data() to always have THD
first (this generates better code, as other functions also have THD
first).
MDEV-30668 Set function aggregated in outer select used in view definition
This patch fixes two bugs concerning views whose specifications contain
subqueries with set functions aggregated in outer selects.
Due to the first bug, such views with implicit grouping were
considered mergeable. This led to wrong result sets for selects from
these views.
Due to the second bug, the aggregation select was determined incorrectly,
which led to bogus error messages.
The patch adds several test cases for these two bugs and for four other
duplicate bugs.
The patch also enables view-protocol for many other test cases.
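For example, a sketch of such a view (illustrative only, not one of the
added test cases):
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1),(2);
CREATE VIEW v1 AS SELECT (SELECT SUM(a)) AS s FROM t1;
SELECT * FROM v1; # the view is implicitly grouped, so this must return a single row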
Approved by Oleksandr Byelkin <sanja@mariadb.com>
- Avoid passing the real field cache as a parameter when we check for duplicates.
- Correct cache cleanup (the cached field number also has to be reset).
- Added a simple test for the name resolution cache.
This patch is the result of running
run-clang-tidy -fix -header-filter=.* -checks='-*,modernize-use-equals-default' .
Code style changes have been done on top. The result of this change
leads to the following improvements:
1. Binary size reduction.
* For a -DBUILD_CONFIG=mysql_release build, the binary size is reduced by
~400kb.
* A raw -DCMAKE_BUILD_TYPE=Release reduces the binary size by ~1.4kb.
2. The compiler can better understand the intent of the code, which leads
to more optimization possibilities. Additionally it enabled detecting
unused variables that had an empty default constructor but were not marked
so explicitly.
A particular change required following this patch is in sql/opt_range.cc:
result_keys, an unused instance of the template class Bitmap, now correctly
issues unused variable warnings.
Setting the Bitmap template class constructor to default allows the compiler
to identify that there are no side effects when instantiating the class.
Previously the compiler could not issue the warning, as it assumed the Bitmap
class (being a template) might not be performing a no-op in its default
constructor. This prevented the "unused variable" warning.
The cause of the regression was the handling of the ROWNUM() function.
For queries like
SELECT ROWNUM() FROM ... ORDER BY ...
ROWNUM() should be computed before the ORDER BY.
The computation was moved before the ORDER BY for any entries in
the select list that had RAND_TABLE_BIT set.
This had a negative impact on queries of the form:
SELECT sp_func() FROM t1 ORDER BY ... LIMIT n
where sp_func() is NOT declared as DETERMINISTIC (and so has
RAND_TABLE_BIT set).
The fix is to require evaluation before sorting only for the ROWNUM()
function. Functions that merely have RAND_TABLE_BIT set can be computed
after ORDER BY ... LIMIT is applied.
(Think of a possible index that satisfies the ORDER BY clause. In
that case, the rows would be read in the needed order and we would
stop after reading LIMIT rows, achieving the same effect.)
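A sketch of the two shapes involved (the function and table names are
illustrative only):
CREATE FUNCTION f1() RETURNS INT RETURN 1; # not DETERMINISTIC, so RAND_TABLE_BIT is set
SELECT ROWNUM(), a FROM t1 ORDER BY a;     # ROWNUM() is still computed before the sort
SELECT f1(), a FROM t1 ORDER BY a LIMIT 5; # f1() may now be computed after ORDER BY ... LIMIT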
The underlying causes of all the bugs mentioned below are the same.
This patch fixes all of them:
1) MDEV-25028: ASAN use-after-poison in
base_list_iterator::next or Assertion `sl->join == 0' upon
INSERT .. RETURNING via PS
2) MDEV-25187: Assertion `inited == NONE || table->open_by_handler'
failed or Direct leak in init_dynamic_array2 upon INSERT .. RETURNING
and memory leak in init_dynamic_array2
3) MDEV-28740: crash in INSERT RETURNING subquery in prepared statements
4) MDEV-27165: crash in base_list_iterator::next
5) MDEV-29686: Assertion `slave == 0' failed in
st_select_lex_node::attach_single
Analysis:
Consider this statement:
INSERT(1)...SELECT(2)...(SELECT(3)...) RETURNING (SELECT(4)...)
When RETURNING is encountered, add_slave() changes how the selects are linked.
It makes the builtin_select (1) a slave of SELECT (2). This causes the
loss of the already existing slave (3) (which is the nested select of the
SELECT of INSERT...SELECT), when really builtin_select (1) shouldn't be a
slave of SELECT (2) because it is not nested within it. Also, push_select(),
used to get the correct context, changed how the selects are linked.
During reinit_stmt_before_use(), we expect the selects to
be cleaned up and to have join=0. Since these selects are not linked correctly,
the clean-up doesn't happen correctly, so join is not NULL. Hence the crash.
Fix:
If we are parsing RETURNING, set is_parsing_returning= true for the
current select. Get rid of add_slave(). In place of push_select(), use
push_context() to have the correct context (the context of builtin_select)
to resolve items in item_list, and add these items to the item_list of
builtin_select.
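A concrete shape of such a statement, as a sketch (the table names are
made up):
PREPARE stmt FROM "INSERT INTO t1 SELECT a FROM t2 WHERE a IN (SELECT a FROM t3) RETURNING (SELECT MAX(a) FROM t3)";
EXECUTE stmt;
EXECUTE stmt; # re-execution relies on the selects being linked and cleaned up correctly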
The problem is that if the table definition cache (TDC) is full of real tables
that are in the table cache, a view definition cannot stay there, so it will be
evicted by its own underlying tables.
In the situation above, the old mechanism for detecting whether the definition
in the PS matches the current version always required a reprepare and so
prevented executing the PS.
One workaround is to increase the TDC; the other is to improve the version
check for views/triggers (which is done here). Now in suspicious cases we check:
- the timestamp (microseconds) of the view, to be sure that the version really
has changed;
- the time (microseconds) of creation of a trigger relative to the time
(microseconds) of statement preparation.
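A sketch of the scenario (illustrative only):
CREATE VIEW v1 AS SELECT * FROM t1;
PREPARE stmt FROM "SELECT * FROM v1";
EXECUTE stmt;
# ... v1 is evicted from the TDC under pressure from other open tables ...
EXECUTE stmt; # with the timestamp check, no reprepare is forced if v1 is unchanged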
- Added missing information about the database of the corresponding table for various types of commands
- Fixed some typos
- Reviewed by: <vicentiu@mariadb.org>
1. For INSERT..SELECT statements: don't include the table/view the data
is inserted into in the list of leaf tables.
2. Remove duplicated and dead code related to table_count.
The issue was that flush_tables() didn't take an MDL lock on the cached
TABLE_SHARE before calling open_table() to do a HA_EXTRA_FLUSH call.
Most engines seem to have no issue with this, but apparently it conflicts
with InnoDB in 10.6 when using TRUNCATE.
Fixed by taking an MDL lock before trying to open the table in
flush_tables().
There is no test case, as it is hard to repeat the scheduling that causes
the error. I did run the test case in MDEV-28897 to verify
that the bug is fixed.
Moved the LIMIT warning from vers_set_hist_part() to the new call
vers_check_limit() at the table unlock phase. At that point the
read_partitions bitmap has already been pruned by the DML code (see
prune_partitions(), find_used_partitions()), so we have to set the
corresponding bits for the working history partition.
Also, we don't do my_error(ME_WARNING|ME_ERROR_LOG), because at that
point it doesn't update the warning count, so the command reports 0 warnings
(though the warning list is still updated). Instead we do
push_warning_printf() and sql_print_warning() separately.
Under LOCK TABLES external_lock(F_UNLCK) is not executed. There is
start_stmt(), but no corresponding "stop_stmt()". So for that mode we
call vers_check_limit() directly from close_thread_tables().
Test results have been changed according to the new LIMIT and warning
printing algorithm. For convenience, all LIMIT warnings are marked with
"You see warning above ^".
TODO MDEV-20345 fixed. Now vers_history_generating() contains a
fine-grained list of DML commands that can generate history (and the TODO
mechanism worked well).
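A sketch of a table the LIMIT warning applies to (illustrative only, not
taken from the changed tests):
CREATE TABLE t1 (a INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME LIMIT 1 (PARTITION p0 HISTORY, PARTITION pn CURRENT);
INSERT INTO t1 VALUES (1);
UPDATE t1 SET a=a+1;
UPDATE t1 SET a=a+1; # p0 now exceeds LIMIT 1; the warning is pushed at table unlock
SHOW WARNINGS;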
column generated using date_format() and if()
vcol_info->expr is allocated on expr_arena at the parsing stage. Since
the expr item is allocated on expr_arena, all the items it contains must be
allocated on expr_arena too. Otherwise fix_session_expr() will
encounter a prematurely freed item.
When a table is reopened from the cache, vcol_info contains a stale
expression. We refresh the expression via TABLE::vcol_fix_exprs(), but
first we must prepare a proper context (Vcol_expr_context) which meets
some requirements:
1. As noted above, the expr update must be done on expr_arena as there may
be new items created. This was a bug in fix_session_expr_for_read() and
was just not reproduced because there was no second refix. Now the refix is
done in more cases, so it does reproduce. Tests affected: vcol.binlog
2. Also, the name resolution context must be narrowed to the single table.
Tested by: vcol.update main.default vcol.vcol_syntax gcol.gcol_bugfixes
3. sql_mode must be clean and must not fail the expr update.
sql_mode flags such as MODE_NO_BACKSLASH_ESCAPES, MODE_NO_ZERO_IN_DATE, etc.
must not affect the vcol expression update. If the table was created
successfully, any further evaluation must not fail. Tests affected:
main.func_like
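For context, a sketch of the kind of generated column involved
(illustrative, not the commit's test case):
CREATE TABLE t1 (
  d DATETIME,
  g VARCHAR(10) AS (IF(d IS NULL, 'n/a', DATE_FORMAT(d, '%Y-%m'))) VIRTUAL
);
INSERT INTO t1 (d) VALUES (NOW());
SELECT g FROM t1; # evaluating g must not depend on the session sql_mode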
Reviewed by: Sergei Golubchik <serg@mariadb.org>