An overflow of the double variable storing the estimate of the
number of rows in a partial join could trigger an assertion
failure during the optimization stage.
For each SELECT the list sj_nests is built by the function
simplify_joins() when scanning different join nests. This function may
be called several times for the same join nest, so before adding a new
member to sj_nests it is necessary to check whether it is already in
the list. The code of simplify_joins() lacked this check, and as a
result it could cause a memory overwrite for some queries.
- Fix win64 pointer truncation warnings
(usually coming from misusing 0x%lx and long cast in DBUG)
- Also fix printf-format warnings
Make the above-mentioned warnings fatal.
- Fix pthread_join on Windows to set the return value.
This patch corrects the code of the patch for mdev-13369 that
introduced the splitting technique when using materialized
derived tables / views with GROUP BY. The second actual parameter
of the call of the method JOIN::reoptimize() in the function
JOIN::push_splitting_cond_into_derived() was calculated incorrectly.
This could cause different failures for queries using derived tables
or views with GROUP BY when their FROM lists contained empty or
single-row tables.
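A minimal sketch of an affected query shape (table and column names
are assumptions, not the actual test case):

  select *
  from t1 join
       (select a, count(*) as c from t2 group by a) as dt
    on t1.a = dt.a;

Failures of this kind could show up when t1 was empty or contained a
single row.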
Currently condition pushdown into materialized views / derived tables
is not implemented yet (see mdev-12387) and grouping views are
optimized early when subqueries are converted to semi-joins in
convert_join_subqueries_to_semijoins(). If a subquery that is
converted to a semi-join uses a grouping view, this view is optimized
in two phases. For such a view V only the first phase of optimization
is done after the conversion of the subqueries of the outer join into
semi-joins. At the same time the reference to the view V appears in
the join expression of the outer join. Before this fix, the code
attempted to push conditions into this view and to optimize it
afterwards. This triggered the second phase of the optimization of
the view, and it was done prematurely. The second phase of the
optimization for the materialized view is supposed to be performed
after the splitting condition is pushed into the view in the call of
JOIN::improve_chosen_plan for the outer join.
The fix blocks both the attempt to push conditions into splittable
views that have already been partly optimized and the subsequent
optimization of such views.
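A sketch of the kind of query involved (names are assumptions): a
subquery converted to a semi-join that uses a grouping view:

  create view v as select a, max(b) as mx from t2 group by a;
  select * from t1 where t1.a in (select a from v);

Here the IN subquery is converted to a semi-join, and the grouping
view v is a splittable view that must not go through the second phase
of optimization before the splitting condition is pushed into it.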
The test case of the patch shows that the code for mdev-13369
basically supported the splitting technique for materialized views /
derived tables.
The patch also renames the state JOIN::OPTIMIZATION_IN_STAGE_2
to JOIN::OPTIMIZATION_PHASE_1_DONE and fixes a bug in
TABLE_LIST::fetch_number_of_rows()
This fixes MDEV-7742 and MDEV-8305 (Allow user to specify if stored
procedures should be logged in the slow and general log)
New functionality:
- Added new variables log_slow_disabled_statements and
log_disabled_statements that can be used to disable logging of certain
queries to the slow and general logs. Currently supported options are
'admin', 'call', 'slave' and 'sp'.
Defaults are as before: only 'sp' (stored procedure statements) is
disabled for the slow and general logs. (A usage sketch follows after
this list.)
- Slow log to files now includes the following new information:
- When logging stored procedure statements the name of the stored
procedure is logged.
- Number of created tmp_tables, tmp_disk_tables and the space used
by temporary tables.
- When logging 'call', the logged status now contains the sum of all
included statements. Before only 'time' was correct.
- Added filesort_priority_queue as an option for log_slow_filter (this
option existed before, but was not exposed)
- Added support for BIT types in my_getopt()
Mapped some old variables to bitmaps (old variables can still be used)
- Variable 'log_queries_not_using_indexes' is mapped to
log_slow_filter='not_using_index'
- Variable 'log_slow_slave_statements' is mapped to
log_slow_disabled_statements='slave'
- Variable 'log_slow_admin_statements' is mapped to
log_slow_disabled_statements='admin'
- All the above variables are changed from global variables to session
variables
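A minimal usage sketch for the variables above (option values chosen
for illustration):

  -- do not write stored procedure statements or CALL statements
  -- to the slow log for this session
  set session log_slow_disabled_statements='sp,call';
  -- the old boolean variable is mapped to a filter option,
  -- so the following two settings have roughly the same effect
  set session log_queries_not_using_indexes=1;
  set session log_slow_filter='not_using_index';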
Other things:
- Simplified LOGGER::log_command. We don't need to check for super if
OPTION_LOG_OFF is set as this flag can only be set if one is a super
user.
- Removed some setting of enable_slow_log as it's guaranteed to be set by
mysql_parse()
- mysql_admin_table() now sets thd->enable_slow_log
- Added prepare_logs_for_admin_command() to reset thd->enable_slow_log if
needed.
- Added new functions to store, restore and add slow query status
- Added new functions to store and restore query start time
- Reorganized Sub_statement_state according to types
- Added code in dispatch_command() to ensure that
thd->reset_for_next_command() is always called for a query.
- Added thd->last_sql_command to simplify checking of what was the type
of the last command. Needed when logging to slow log as lex->sql_command
may have changed before slow logging is called.
- Moved QPLAN_TMP_... to where the status for tmp tables is updated
- Added new THD variable, affected_rows, to be able to correctly log
number of affected rows to slow log.
- Added sql/mariadb.h file that should be included first by files in sql
directory, if sql_plugin.h is not used (sql_plugin.h adds SHOW variables
that must be done before my_global.h is included)
- Removed a lot of includes of my_global.h from include files
- Removed includes of some files that my_global.h automatically
includes
- Removed duplicated includes of my_sys.h
- Replaced include of my_config.h with my_global.h
The bug was caused by a defect of the patch for bug 11081.
That patch was actually a port of the fix for this bug from the mysql
code line. Later a correction of this fix was added to the mysql
code. Here's the comment this correction was provided with:
Bug#16499751: Opening cursor on SELECT in stored procedure causes segfault
This is a regression from the fix of bug#14740889.
The fix started using another set of expressions as the source for
the temporary table used for the materialized cursor. However,
JOIN::make_tmp_tables_info() calls setup_copy_fields() which creates
an Item_copy wrapper object on top of the function being selected.
The Item_copy objects were not properly handled by create_tmp_table -
they were simply ignored. This patch creates temporary table fields
based on the underlying item of the Item_copy objects.
The test case for bug 13346 was taken from mdev-13380.
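A minimal sketch of the construct that exercises this code path (not
the actual test case; names are assumptions):

  create procedure p()
  begin
    declare v varchar(10);
    declare c cursor for select upper('x');
    open c;
    fetch c into v;
    close c;
  end

Opening the cursor materializes the result of the SELECT into a
temporary table; on this path the selected function can end up
wrapped in an Item_copy object that create_tmp_table used to ignore.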
"Optimization for equi-joins of derived tables with GROUP BY"
should be considered rather as a 'proof of concept'.
The task itself is targeted at an optimization that employs re-writing
equi-joins with grouping derived tables / views into lateral
derived tables. Here's an example of such transformation:
select t1.a,t.max,t.min
from t1 [left] join
     (select a, max(t2.b) max, min(t2.b) min from t2
        group by t2.a) as t
  on t1.a=t.a;
=>
select t1.a,tl.max,tl.min
from t1 [left] join
     lateral (select a, max(t2.b) max, min(t2.b) min from t2
                where t1.a=t2.a) as tl
  on 1=1;
The transformation pushes the equi-join condition t1.a=t.a into the
derived table, making it dependent on table t1. This means that for
every row from t1 a new derived table must be filled. However, the
size of any of these derived tables is just a fraction of the size of
the original derived table t. One could say that the transformation
'splits' the rows used for the GROUP BY operation into separate
groups, performing aggregation for a group only when there is a match
for the current row of t1.
Apparently the transformation may produce a query with better
performance only in the case when
- the GROUP BY list refers only to fields returned by the derived
  table
- there is an index I on one of the tables T used in the FROM list of
  the specification of the derived table whose prefix covers the
  fields from the proper beginning of the GROUP BY list, or fields
  that are equal to those fields.
Whether the result of the re-writing can be executed faster depends
on many factors:
- the size of the original derived table
- the size of the table T
- whether the index I is clustering for table T
- whether the index I fully covers the GROUP BY list.
This patch only tries to improve the chosen execution plan using
this transformation. It tries to do it only when the chosen
plan reaches the derived table by a key whose prefix covers
all the fields of the derived table produced by the fields of
the table T from the GROUP BY list.
The code of the patch does not evaluate the cost of the improved
plan: if certain conditions are met, the transformation is applied.
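For illustration, a schema on which the transformation shown earlier
can pay off might look like this (the index name is an assumption):

  create table t1 (a int);
  create table t2 (a int, b int, key idx_a (a));

The GROUP BY list (t2.a) is covered by the prefix of idx_a, so for
each row of t1 the lateral derived table can be filled through ref
access on t2.a instead of aggregating the whole of t2.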
The problem was that the introduction of max-thread-mem-used can cause
an allocation error very early, even before mysql_parse() is called.
As mysql_parse() calls thd->reset_for_next_command(), which called
clear_error(), the error number was lost.
Fixed by adding an option to have unique messages for each KILL
signal and changing max-thread-mem-used to use this new feature.
This removes a lot of problems with the original approach, where
one could get errors signaled silently almost any time.
Additionally, clear_error() was moved from reset_for_next_command()
to do_command(), before any memory allocation for the thread.
Related changes:
- reset_for_next_command() now has an optional parameter that
controls whether clear_error() should be called. By default it is
called, but not anymore from dispatch_command(), which was the
original problem.
- Added an optional parameter to clear_error() to force calling of
reset_diagnostics_area(). Before clear_error() only called
reset_diagnostics_area() if there was no error, so we normally
called reset_diagnostics_area() twice.
- This change removed several duplicated calls to clear_error()
when starting a query.
- Reset max_mem_used on COM_QUIT, to protect against kill during
quit.
- Use fatal_error() instead of setting is_fatal_error (cleanup)
- Set fatal_error if max_thread_mem_used is signaled.
(Same logic we use for other places where we are out of resources)
Do not run the window function computation step when the select
produces no rows (zero_result_cause!=NULL).
Running that step in such a case may cause reads from uninitialized
memory.
We still need to run the window function computation step when
the output includes just one row (for example
SELECT MAX(col), RANK() OVER (...) FROM t1 WHERE 1=0).
This fix also resolves an issue with queries with window functions
producing an output row where there should be none, as in
SELECT ROW_NUMBER() OVER () FROM t1 WHERE 1=0.
Updated a few test results in the existing tests to reflect this.
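A sketch of the two cases (assuming a table t1 with a column col):

  -- implicit grouping: exactly one output row, so the window
  -- function computation step must still be run
  select max(col), rank() over (order by max(col))
  from t1 where 1=0;
  -- no grouping: zero output rows, so no window function row
  -- may be produced
  select row_number() over () from t1 where 1=0;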
There was a missing check in CTE handling for the case when creating
a temporary table failed (in this case as a result of running out of
space). This caused a table handler to be used even though it was not
allocated.
- Added variable tmp_disk_table_size
- Added variable tmp_memory_table_size as an alias for tmp_table_size
(a usage sketch follows after this list)
- Changed internal variable tmp_table_size to tmp_memory_table_size
- create_info.data_file_length is now set with tmp_disk_table_size
- Fixed that Aria doesn't reset max_data_file_length for internal tables
- Added a status flag for when the table is full so that we can
detect this on the next insert. This ensures that the table is always
'correct', but we get the error one row after the row that grew the
table too big.
- Removed some mutex lock for internal temporary tables
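A minimal usage sketch for the new variables (values are arbitrary):

  -- limit for in-memory internal temporary tables
  -- (tmp_memory_table_size is an alias for tmp_table_size)
  set session tmp_memory_table_size=64*1024*1024;
  -- limit for on-disk internal temporary tables
  set session tmp_disk_table_size=256*1024*1024;

When an internal temporary table outgrows tmp_memory_table_size it is
converted to an on-disk table, whose size is in turn capped by
tmp_disk_table_size.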