The MDEV-29693 conflict resolution is from Monty, as is a bug fix
where ANALYZE TABLE wrongly built histograms for a single-column
PRIMARY KEY.
Also includes a fix for safe_malloc error reporting.
Other things:
- Copied main.log_slow from 10.4 to avoid an mtr issue
Disabled test:
- spider/bugfix.mdev_27239 because we started to get
+Error 1429 Unable to connect to foreign data source: localhost
-Error 1158 Got an error reading communication packets
- main.delayed
- Bug#54332 Deadlock with two connections doing LOCK TABLE+INSERT DELAYED
This part is disabled for now as it fails randomly with different
warnings/errors (no corruption).
The warning is given in case the table is not found or if there is a lock
timeout. The warning is needed because, in case of a lock timeout, the
persistent table stats are going to be wrong.
At the moment we cannot support
wsrep_forced_binlog_format=[MIXED|STATEMENT]
during CREATE TABLE AS SELECT.
The statement will use ROW instead and give
a warning.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
If a table accessed by a PS/SP is dropped after the first execution of
the PS/SP, and a view is created with the same name as the table just
dropped, then the second execution of the PS/SP leads to allocation of
memory on a PS/SP memory root already marked as read only during the
first execution.
For example, the following test case:
CREATE TABLE t1 (a INT);
PREPARE stmt FROM "INSERT INTO t1 VALUES (1)";
EXECUTE stmt;
DROP TABLE t1;
CREATE VIEW t1 AS SELECT 1;
--error ER_NON_INSERTABLE_TABLE
EXECUTE stmt; # (*)
DROP VIEW t1;
will hit an assertion on running the statement 'EXECUTE stmt' marked with (*)
when memory allocation is performed on parsing the view.
Memory allocation is requested inside the function mysql_make_view()
while the view definition is being parsed. To avoid the assertion
failure, the call to mysql_make_view() must be moved after the
invocation of check_and_update_table_version(). This results in
re-preparing the whole PS statement or the current SP instruction,
which frees the currently allocated items and resets the read_only
flag of the memory root.
Moved the call to check_and_update_table_version() to just before the
place where extend_table_list() is invoked, in order to avoid
allocating memory on a PS/SP memory root marked as read only. The
allocation happens because extend_table_list() invokes
sp_add_used_routine() to add a trigger created for the table between
executions of the statement EXECUTE `stmt_id` .
For example, the following test case
create table t1 (a int);
prepare stmt from "insert into t1 (a) value (1)";
execute stmt;
create trigger t1_bi before insert on t1 for each row
set @message= new.a;
execute stmt; # (*)
adds the trigger t1_bi to the list of used routines, which involves
allocating memory on a PS memory root that has already been marked
as read only on the first run of the statement 'execute stmt'.
As a result, executing the statement marked with (*) hits the
assertion.
To fix the issue, call check_and_update_table_version() before the
invocation of extend_table_list() to force re-compilation of the
PS/SP, which resets the read-only flag of its memory root.
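A minimal sketch of the reordering applied in both cases (the
surrounding control flow and exact argument lists are illustrative,
not the actual code in sql_base.cc):

  /* Run the version check first: an outdated table entry triggers
     re-preparation, which resets the read_only flag of the PS/SP
     memory root before view parsing (or trigger loading) can allocate
     anything on that root. */
  if (check_and_update_table_version(thd, table_list, share))
    return true;                      /* statement will be re-prepared */
  if (mysql_make_view(thd, share, table_list, false))
    return true;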
Replacing my_casedn_str() calls on local char[] buffer variables
with CharBuffer::copy_casedn() calls.
This is a sub-task for MDEV-31531 Remove my_casedn_str()
Details:
- Adding a helper template class IdentBuffer (a CharBuffer descendant),
which assumes utf8 data. Like CharBuffer, it's initialized to an empty
string in the constructor, but can be populated with lower-cased data
later.
- Adding a helper template class IdentBufferCasedn, which initializes
to lower case right in the constructor.
- Removing char[] buffers, replacing them with IdentBuffer and IdentBufferCasedn.
- Changing the data type of "db" and "table" parameters from
"const char*" to LEX_CSTRING in the following functions:
find_field_in_table_ref()
insert_fields()
set_thd_db()
mysql_grant()
to reuse IdentBuffer more easily.
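A sketch of the intended before/after usage (the accessor name
to_lex_cstring() and the caller lookup_ident() are assumptions for
illustration, not the final API):

  /* Before: raw buffer plus explicit in-place lower-casing */
  char buff[SAFE_NAME_LEN + 1];
  strmake_buf(buff, name->str);
  my_casedn_str(system_charset_info, buff);

  /* After: the buffer type lower-cases in its constructor */
  IdentBufferCasedn<SAFE_NAME_LEN> buff(*name);
  lookup_ident(buff.to_lex_cstring());   /* hypothetical caller */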
This commit enables reloading of engine-independent statistics
without flushing the table from the table definition cache.
This is achieved by allowing multiple versions of the
TABLE_STATISTICS_CB object and having independent pointers to it in
TABLE and TABLE_SHARE. TABLE_STATISTICS_CB objects are reference
counted and are freed when no one is pointing to them anymore.
TABLE's TABLE_STATISTICS_CB pointer is updated to use the
TABLE_SHARE's pointer when read_statistics_for_tables() is called at
the beginning of a query.
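The sharing scheme can be modeled as follows (self-contained sketch,
not the server source; all member and function names beyond
TABLE_STATISTICS_CB and stats_cb are illustrative):

  #include <atomic>

  struct TABLE_STATISTICS_CB_model
  {
    std::atomic<int> refs{1};          // one reference per pointer holder
    void acquire() { refs.fetch_add(1, std::memory_order_relaxed); }
    void release()
    {
      if (refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
        delete this;                   // freed when no one points to it
    }
  };

  struct Share_model { TABLE_STATISTICS_CB_model *stats_cb= nullptr; };

  struct Table_model
  {
    Share_model *s;
    TABLE_STATISTICS_CB_model *stats_cb= nullptr;

    // What read_statistics_for_tables() does at the beginning of a
    // query: adopt the share's current version of the statistics.
    void adopt_share_stats()
    {
      if (stats_cb == s->stats_cb)
        return;                        // already on the latest version
      if (stats_cb)
        stats_cb->release();
      if ((stats_cb= s->stats_cb))
        stats_cb->acquire();
    }
  };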
Main changes:
- read_statistics_for_table() will allocate a new TABLE_STATISTICS_CB
object.
- All get_stat_values() functions have a new parameter that tells
where collected data should be stored. The get_stat_values()
functions no longer use the table_field object to store data.
- All get_stat_values() functions return 1 if they found any
data in the statistics tables.
Other things:
- Fixed INSERT DELAYED to not read statistics tables.
- Removed Statistics_state from TABLE_STATISTICS_CB as it is not
needed anymore now that we are not changing TABLE_SHARE->stats_cb
while calculating or loading statistics.
- Store values used with store_from_statistical_minmax_field() in
TABLE_STATISTICS_CB::mem_root. This allowed me to remove the function
delete_stat_values_for_table_share().
- Field_blob::store_from_statistical_minmax_field() is implemented
but is not normally used as we do not yet support EIS statistics
for blobs. For example Field_blob::update_min() and
Field_blob::update_max() are not implemented.
Note that the function can be called if there is a concurrent
"ALTER TABLE MODIFY field BLOB" running because of a bug in
ALTER TABLE where it deletes entries from column_stats
before it has an exclusive lock on the table.
- Use the result of field->val_str(&val) as a pointer to the result
instead of val (safety fix).
- Allocate memory for collected statistics in THD::mem_root, not in
TABLE::mem_root. The latter could cause the TABLE object to grow if
ANALYZE TABLE was run many times on the same table.
This was done in allocate_statistics_for_table(),
create_min_max_statistical_fields_for_table() and
create_min_max_statistical_fields_for_table_share().
- Store in TABLE_STATISTICS_CB::stats_available which statistics were
found in the statistics tables.
- Removed index_table from class Index_prefix_calc as it was not used.
- Added TABLE_SHARE::LOCK_statistics to ensure we don't load EITS
in parallel. First thread will load it, others will reuse the
loaded data.
- Eliminate read_histograms_for_table(). The loading happens within
read_statistics_for_tables() if histograms are needed.
One downside is that if we have read statistics without histograms
before and someone requires histograms, we have to read all statistics
again (once) from the statistics tables.
A smaller downside is the need to call alloc_root() for each
individual histogram. Before, we could allocate the space for all
histograms with a single alloc_root() call.
- Fixed a bug in MyISAM and Aria where they did not properly notice
that the table had changed after ANALYZE TABLE. This was not a
problem before this patch, as the MyISAM and Aria tables were then
flushed as part of ANALYZE TABLE, which hid this issue.
- Fixed a bug in ANALYZE TABLE where table->records could be seen as 0
in collect_statistics_for_table(). The effect of this unlikely bug
was that a full table scan could be done even if
analyze_sample_percentage was not set to 1.
- Changed multiple mallocs in a row to use multi_alloc_root().
- Added a mutex protection in update_statistics_for_table() to ensure
that several tables are not updating the statistics at the same time.
Some of the changes in sql_statistics.cc are based on a patch from
Oleg Smirnov <olernov@gmail.com>
Co-authored-by: Oleg Smirnov <olernov@gmail.com>
Co-authored-by: Vicentiu Ciorbaru <cvicentiu@gmail.com>
Reviewer: Sergei Petrunia <sergey@mariadb.com>
Before this patch, the code in Item_field::print() used
this convention (described in sql_explain.h:ExplainDataStructureLifetime):
- By default, the table that Item_field refers to is accessible.
- ANALYZE and SHOW {EXPLAIN|ANALYZE} may print Items after some
temporary tables have been dropped. They use the
QT_DONT_ACCESS_TMP_TABLES flag. When it is ON, Item_field::print()
will not access the table it refers to, if it is a temp.table.
The bug was that an EXPLAIN statement may also compute subqueries
(depending on the subquery context and the @@expensive_subquery_limit
setting). After the computation, the subquery calls JOIN::cleanup(true),
which drops some of its temporary tables. Calling Item_field::print()
on items that refer to such a table will access freed memory.
In this patch, we take into account that query optimization can compute
a subquery and discard its temporary tables. Item_field::print() now
assumes that any temporary table might have already been dropped.
This means the QT_DONT_ACCESS_TMP_TABLES flag is not needed - we imply
it is always present.
But we also make one exception: derived tables are not freed in the
JOIN::cleanup() call. They are freed later in close_thread_tables(),
at the same time when regular tables are closed.
Because of that, Item_field::print may assume that temp.tables
representing derived tables are available.
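The resulting rule can be summarized as (illustrative pseudocode, not
the server source):

  /* A temp table backing a materialized subquery may already have been
     freed by JOIN::cleanup(), so Item_field::print() must not touch it;
     a temp table backing a derived table survives until
     close_thread_tables() and may still be accessed. */
  bool may_access_tmp_table(bool is_tmp_table, bool is_derived_table)
  {
    return !is_tmp_table || is_derived_table;
  }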
Initial patch by: Rex Jonston
Reviewed by: Monty <monty@mariadb.org>
* IS_USER_TEMP_TABLE() was misleading, the name didn't match the code
* the list of temp tables was rescanned number_of_databases times
* some temporary tables were not shown (from nonexistent databases)
* some temporary tables were shown more than once (e.g. after self-joins)
* sys.table_exists() - avoid querying I_S twice
* fix handling of temporary MERGE tables - it's pointless to fully open
them, they're not in thd->temporary_tables, so they simply fail to
open and are skipped. Relax the assertion instead.
The assertion was to make sure we don't do vers_set_hist_part() for
SELECT (or any non-DML). But actually we must do it if SELECT calls
some function that does DML. The patch moves the assertion so that it
applies to non-routines only.
Restrict vcol_cleanup_expr() in close_thread_tables() to only simple
locked tables mode. Prelocked mode is cleaned up like a normal
statement: in close_thread_table().
The new statistics are enabled by adding the "engine", "innodb" or "full"
option to --log-slow-verbosity
Example output:
# Pages_accessed: 184 Pages_read: 95 Pages_updated: 0 Old_rows_read: 1
# Pages_read_time: 17.0204 Engine_time: 248.1297
Pages_read_time is the time spent doing physical reads inside a storage engine.
(Writes cannot be tracked as these are usually done in the background).
Engine_time is the time spent inside the storage engine for the full
duration of the read/write/update calls. It uses the same code as
'analyze statement' for calculating the time spent.
The engine statistics are collected with a generic interface that should be
easy for any engine to use. It can also easily be extended to provide
even more statistics.
Currently only InnoDB has counters for Pages_% and Undo_% status.
Engine_time works for all engines.
Implementation details:
class ha_handler_stats holds all engine stats. This class is included
in the handler and THD classes.
While a query is running, all statistics are updated in the handler. In
close_thread_tables() the statistics are added to the THD.
handler::handler_stats is a pointer to where statistics should be
collected. This is set to point to handler::active_handler_stats if
stats are requested. If not, it is set to 0.
handler_stats also has an element, 'active', that is 1 if stats are
requested. This is to allow engines to avoid any 'if's while
updating the statistics.
Cloned or partition tables have the pointer set to the base table if
stats are requested.
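A self-contained model of the collection pattern (member names other
than handler_stats, active_handler_stats and 'active' are
illustrative, not the server source):

  #include <cstdint>

  struct ha_handler_stats_model
  {
    uint64_t pages_read_count= 0;
    uint64_t engine_time= 0;
    uint64_t active= 0;        // 1 if stats are requested for the statement
  };

  struct handler_model
  {
    ha_handler_stats_model active_handler_stats;
    ha_handler_stats_model *handler_stats= nullptr; // where to collect, or 0

    void start_statement(bool stats_requested)
    {
      if (stats_requested)
      {
        active_handler_stats= {};             // reset per statement (~40 bytes)
        active_handler_stats.active= 1;       // lets engine code test once,
        handler_stats= &active_handler_stats; // then update without extra ifs
      }
      else
        handler_stats= nullptr;
    }

    void on_page_read(uint64_t elapsed)
    {
      if (handler_stats)                      // single cheap check
      {
        handler_stats->pages_read_count++;
        handler_stats->engine_time+= elapsed;
      }
    }
  };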
There is a small performance impact when using --log-slow-verbosity=engine:
- All engine calls in 'select' will be timed.
- IO calls for InnoDB reads will be timed.
- Counters are incremented on local variables and accesses are
inlined, so these should have very little impact.
- Statistics have to be reset for each statement for the THD and each
used handler. This is only 40 bytes, which should be negligible.
- For partition tables we have to loop over all partitions to update
the handler_status as part of table_init(). This can be optimized in
the future to be done only when log-slow-verbosity changes. For this to
work we have to update handler_status for all opened partitions and
also for all partitions opened in the future.
Other things:
- Added options 'engine' and 'full' to log-slow-verbosity.
- Some of the new files in the test suite come from Percona Server,
which has similar status information.
- buf_page_optimistic_get(): Do not increment any counter, since we are
only validating a pointer, not performing any buf_pool.page_hash lookup.
- Added THD argument to save_explain_data_intern().
- Switched arguments of save_explain_.*_data() to always have THD
first (this generates better code, as other functions also have THD
first).
This patch makes it possible to use the semi-join optimization at the
top level of single-table update and delete statements.
The problem of supporting this optimization became easy to resolve
after the processing of single-table update/delete statements started
using the JOIN structure. This made it possible to use JOIN::prepare()
not only for multi-table updates/deletes but for single-table ones as
well. This was done in the patch for MDEV-28883:
Re-design the upper level of handling UPDATE and DELETE statements.
Note that JOIN::prepare() detects all subqueries that can be considered
as candidates for semi-join optimization. The code added by this patch
looks for such candidates at the top level and, if such candidates are
found in the processed single-table update/delete, the statement is
handled in the same way as a multi-table update/delete.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This patch fixes not only the assertion failure in the function
Field_iterator_table_ref::set_field_iterator() but also:
- fixes the problem of forced materialization of derived tables used
in subqueries contained in WHERE clauses of single-table and multi-table
UPDATE and DELETE statements
- fixes the problem of MDEV-17954 that prevented execution of multi-table
DELETE statements if their WHERE clauses contain references to
the tables that are updated.
The patch must be considered a complement to the patch for MDEV-28883.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This patch introduces a new way of handling UPDATE and DELETE commands at
the top level after the parsing phase. This new way of processing update
and delete statements can be seen in the implementation of the prepare()
and execute() methods from the new Sql_cmd_dml class. This class,
derived from the Sql_cmd class, can be considered an interface class for
processing such commands as SELECT, INSERT, UPDATE, DELETE and other
commands manipulating data in tables.
With this patch processing of update and delete statements after parsing
proceeds by the following schema:
- precheck of the access rights is performed for the used tables
- the used tables are opened
- context analysis phase is performed for the statement
- the used tables are locked
- the statement is optimized and executed
- clean-up is performed for the statement
The implementation of the method Sql_cmd_dml::execute() adheres to this schema.
The virtual functions of the class Sql_cmd_dml used for prechecking the
access rights, context analysis, optimization and execution allow this
schema to be adjusted for processing data manipulation statements of any
type.
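A skeleton of that schema (stub types; the phase methods are virtual
members whose exact names here are partly assumptions based on the
MySQL counterpart):

  struct THD;                                  // opaque connection state

  struct Sql_cmd_dml_model
  {
    virtual bool precheck(THD *thd)= 0;        // access-right checks
    virtual bool open_tables(THD *thd)= 0;     // open the used tables
    virtual bool prepare_inner(THD *thd)= 0;   // context analysis
    virtual bool lock_tables(THD *thd)= 0;     // lock the used tables
    virtual bool execute_inner(THD *thd)= 0;   // optimize and execute
    virtual void cleanup(THD *thd)= 0;         // statement clean-up
    virtual ~Sql_cmd_dml_model()= default;

    bool execute(THD *thd)                     // the fixed phase ordering
    {
      if (precheck(thd) || open_tables(thd) || prepare_inner(thd) ||
          lock_tables(thd))
        return true;
      bool res= execute_inner(thd);
      cleanup(thd);
      return res;
    }
  };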
This schema of processing data manipulation statements is taken from the
current MySQL code. Moreover, the definition of the class Sql_cmd_dml
introduced in this patch is almost a full replica of that class in the
existing MySQL.
However, the implementation of the derived classes for update and delete
statements is quite different. This implementation employs the JOIN class
for all kinds of update and delete statements. It allows the main bulk
of context analysis actions to be performed by JOIN::prepare(). This
guarantees that the characteristics and properties of the statement tree
discovered for the optimization phase during context analysis are the
same for single-table and multi-table updates and deletes.
With this patch the following functions are gone:
mysql_prepare_update(), mysql_multi_update_prepare(),
mysql_update(), mysql_multi_update(),
mysql_prepare_delete(), mysql_multi_delete_prepare(), mysql_delete().
The code within these functions has been reused as much as possible though.
The functions mysql_test_update() and mysql_test_delete() are also not
needed anymore. The method Sql_cmd_dml::prepare() serves the processing of
- update/delete statements
- PREPARE stmt FROM "<update/delete statement>"
- EXECUTE stmt when stmt is prepared from an update/delete statement.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
MDEV-30668 Set function aggregated in outer select used in view definition
This patch fixes two bugs concerning views whose specifications contain
subqueries with set functions aggregated in outer selects.
Due to the first bug, such views with implicit grouping were
considered mergeable. This led to wrong result sets for selects from
these views.
Due to the second bug, the aggregation select was determined incorrectly,
which led to bogus error messages.
The patch added several test cases for these two bugs and for four other
duplicate bugs.
The patch also enables view-protocol for many other test cases.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
- Avoid passing the real field cache as a parameter when we check for duplicates.
- Correct cache cleanup (the cached field number also has to be reset).
- Added a simple test for the name resolution cache.