KILL now breaks locks inside InnoDB
Fixed possible deadlock when running INNODB STATUS
Added ha_kill_query() and kill_query() to send kill signal to all storage engines
Added reset_killed() to ensure we don't reset the killed state while awake() is being called
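A minimal sketch of the reset_killed() idea (illustrative, not the exact
server code): clear the flag under LOCK_thd_data, the same mutex awake()
holds while delivering a kill, so the two can never race.

    void THD::reset_killed(void)
    {
      if (killed != NOT_KILLED)
      {
        /* awake() also holds LOCK_thd_data while setting 'killed' */
        mysql_mutex_lock(&LOCK_thd_data);
        killed= NOT_KILLED;
        mysql_mutex_unlock(&LOCK_thd_data);
      }
    }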
include/mysql/plugin.h:
Added thd_mark_as_hard_kill()
include/mysql/plugin_audit.h.pp:
Added thd_mark_as_hard_kill()
include/mysql/plugin_auth.h.pp:
Added thd_mark_as_hard_kill()
include/mysql/plugin_ftparser.h.pp:
Added thd_mark_as_hard_kill()
sql/handler.cc:
Added ha_kill_query() to send kill signal to all storage engines
sql/handler.h:
Added ha_kill_query() and kill_query() to send kill signal to all storage engines
sql/log_event.cc:
Use reset_killed()
sql/mdl.cc:
Use thd->killed instead of thd_killed() to abort on soft kill
sql/sp_rcontext.cc:
Use reset_killed()
sql/sql_class.cc:
Fixed possible deadlock in INNODB STATUS by not taking thd->LOCK_thd_data if it is already locked.
Use reset_killed()
Tell storage engines that KILL has been sent
sql/sql_class.h:
Added reset_killed() to ensure we don't reset the killed state while awake() is being called.
Added mark_as_hard_kill()
sql/sql_insert.cc:
Use reset_killed()
sql/sql_parse.cc:
Simplify detection of killed queries.
Use reset_killed()
sql/sql_select.cc:
Use reset_killed()
sql/sql_union.cc:
Use reset_killed()
storage/innobase/handler/ha_innodb.cc:
Added innobase_kill_query()
Fixed error reporting for interrupted queries.
storage/xtradb/handler/ha_innodb.cc:
Added innobase_kill_query()
Fixed error reporting for interrupted queries.
Analysis:
In the beginning of JOIN::cleanup there is code that is supposed to
free all filesort buffers. The code assumes that the table being sorted
is the first non-constant table. To get this table it calls:
first_top_level_tab(this, WITHOUT_CONST_TABLES)
However, first_top_level_tab() returned the wrong table: the first
one in the plan rather than the first non-constant table. There is no other
place outside filesort() where sort buffers may be freed. As a result, the
sort buffer was not freed, and there was a memory leak.
Solution:
Change first_top_level_tab(), to test for WITH_CONST_TABLES instead of
WITHOUT_CONST_TABLES.
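A sketch of the corrected helper (illustrative; modeled on sql_select.cc):

    JOIN_TAB *first_top_level_tab(JOIN *join,
                                  enum enum_with_const_tables const_tbls)
    {
      JOIN_TAB *tab= join->join_tab;
      if (const_tbls == WITH_CONST_TABLES)  /* the test that was inverted */
        return tab;                         /* first table in the plan */
      /* Skip the constant tables to reach the first non-constant one. */
      if (join->const_tables == join->table_count)
        return NULL;
      return tab + join->const_tables;
    }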
The problem was that debug binaries try to print the item in order to assign a
human-readable name to it. But the subquery item had already been freed
(join_free/cleanup with full cleanup), so the Item_field referred to a temporary
table whose memory had already been freed.
MDEV-567: Wrong result from a query with correlated subquery if ICP is allowed:
backport the fix developed for SHOW EXPLAIN:
revision-id: psergey@askmonty.org-20120719115219-212cxmm6qvf0wlrb
branch nick: 5.5-show-explain-r21
timestamp: Thu 2012-07-19 15:52:19 +0400
BUG#992942 & MDEV-325: Pre-liminary commit for testing
and adjust it so that it handles DS-MRR scans correctly.
If, when executing a query with ORDER BY col LIMIT n, the optimizer chose
an index-merge scan to access the table containing col while there existed
an index defined over col, then the optimizer did not consider the possibility
of using an alternative range scan over this index to avoid filesort. This
could cause a performance degradation if the optimizer flag index_merge was
set to 'on'.
Fix by Sergey Petrunia.
This patch only prevents the evaluation of expensive subqueries during optimization.
The crash reported in this bug has been fixed by some other patch.
The fix is to call value->is_null() only when !value->is_expensive(), because is_null()
may trigger evaluation of the Item, which in turn triggers subquery evaluation if the
Item is a subquery.
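A minimal sketch of the guard (value is the Item from the description;
known_null is a hypothetical local):

    /* Only probe for NULL when evaluation is cheap: is_null() may
       evaluate the Item, and for a subquery Item that means executing
       the subquery during optimization. */
    bool known_null= !value->is_expensive() && value->is_null();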
.. into MariaDB 5.3
Fix for Bug#12667154 SAME QUERY EXEC AS WHERE SUBQ GIVES DIFFERENT
RESULTS ON IN() & NOT IN() COMP #3
This bug causes a wrong result in mysql-trunk when ICP is used
and bad performance in mysql-5.5 and mysql-trunk.
Using the query from the bug report to explain what happens and what
causes the wrong result when ICP is enabled:
1. The t3 table contains four records. The outer query will read
these and for each of these it will execute the subquery.
2. Before the first execution of the subquery it will be optimized. In
this case the important part is what happens to the first table, t1:
-make_join_select() will call the range optimizer, which decides
that t1 should be accessed using a range scan on the k1 index.
It creates a QUICK_RANGE_SELECT object for this.
-As the last part of optimization the ICP code pushes the
condition down to the storage engine for table t1 on the k1 index.
This produces the following information in the explain for this table:
2 DEPENDENT SUBQUERY t1 range k1 k1 5 NULL 3 Using index condition; Using filesort
Note the use of filesort.
3. Due to the need for sorting, the first execution of the subquery
does (among other things):
a. Call create_sort_index(), which in turn calls find_all_keys():
b. find_all_keys() will read the required keys for all qualifying
rows from the storage engine. To do this it checks if it has a
quick-select for the table. It will use the quick-select for
reading records. In this case it will read four records from the
storage engine (based on the range criteria). The storage engine
will evaluate the pushed index condition for each record.
c. At the end of create_sort_index() there is code that cleans up a
lot of stuff on the join tab. One of the things that is cleaned
is the select object. The result of this is that the
quick-select object created in make_join_select is deleted.
4. The second execution of the subquery does the same as the first but
the result is different:
a. Call create_sort_index() which again will call find_all_keys()
(same as for the first execution)
b. find_all_keys() will read the keys from the storage engine. To
do this it checks if it has a quick-select for the table. Now
there is NO quick-select object(!) (since it was deleted in
step 3c). So find_all_keys() defaults to reading the table using a
table scan instead. So instead of reading the four relevant records
in the range it reads the entire table (6 records). It then
evaluates the table's condition (and here it goes wrong). Since
the entire condition has been pushed down to the storage engine
using ICP, all 6 records qualify. (Note that the storage engine
will not evaluate the pushed index condition in this case since
it was pushed for the k1 index and now we do a table scan
without any index being used).
The result is that here we return six qualifying key values
instead of four due to not evaluating the table's condition.
c. As above.
5. The last two executions of the subquery will also produce wrong
results for the same reason.
Summary: The problem occurs because all executions of the subquery
after the first are done as a table scan without evaluating the
table's condition (which is pushed to the storage engine on a
different index). This is caused by the create_sort_index() function
deleting the quick-select object that should have been used for
executing the subquery as a range scan.
Note that this bug, in addition to causing wrong results, can also
result in bad performance due to executing the subquery using a table
scan instead of a range scan. This is an issue in MySQL 5.5.
The fix for this problem is to avoid deleting the quick-select object
that the optimizer created when create_sort_index() cleans up the
join-tab. This ensures that the quick-select object and the
corresponding pushed index condition remain available and are used by
all following executions of the subquery.
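An illustrative sketch of the idea in create_sort_index() (local names
are hypothetical; the real patch differs in detail):

    /* Before filesort: remember whether the optimizer already attached
       a quick select to this join tab. */
    bool optimizer_quick= (tab->select && tab->select->quick);

    /* ... after filesort has finished ... */
    if (tab->select && !optimizer_quick)
      tab->select->cleanup();  /* free only a quick select made here */
    /* An optimizer-created quick select is kept, so the next execution
       of the subquery can repeat the range scan with its pushed index
       condition. */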
The problem was that the was_null and null_value variables were reset on each re-execution of the IN subquery, but the engine is rerun only for non-constant subqueries.
Fixed the constant check in Item_equal sorting.
Fixed constant reporting in Item_subselect.
Documentation for class Item_outer_ref was wrong:
(*ref) may point to an Item_field as well
(see e.g. Item_outer_ref::fix_fields).
So this cast in get_store_key() was wrong:
(*(Item_ref**)((Item_ref*)keyuse->val)->ref)->ref_type()
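A sketch of a safer pattern (illustrative): check what (*ref) actually
points to before treating it as an Item_ref:

    Item *item= *((Item_ref*) keyuse->val)->ref;
    if (item->type() == Item::REF_ITEM)
    {
      /* Only now is the inner cast to Item_ref valid. */
      Item_ref::Ref_Type rt= ((Item_ref*) item)->ref_type();
    }
    /* else: (*ref) is e.g. an Item_field and has no ref_type() */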
Two tests still fail:
main.innodb_icp and main.range_vs_index_merge_innodb
call records_in_range() with both range ends being open
(which triggers an assert).
The fix backports from MWL#182 (Explain running statements) the logic that
saves the original JOIN_TAB array of a query plan after optimization. This
array is later used during EXPLAIN to iterate over the original JOIN plan
nodes in the cases when this plan could be changed by early subquery
execution during the optimization phase of the outer query.
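A sketch of saving the array (the orig_* member names are hypothetical,
modeled on the description):

    /* After optimization: keep a copy of the JOIN_TAB array so EXPLAIN
       can walk the original plan even if early subquery execution later
       rewrites join_tab. */
    join->orig_join_tab=
      (JOIN_TAB*) thd->memdup((char*) join->join_tab,
                              sizeof(JOIN_TAB) * join->table_count);
    join->orig_table_count= join->table_count;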
sql/item_subselect.cc:
Added purecov info
sql/sql_select.cc:
Added cast
storage/innobase/handler/ha_innodb.cc:
Added cast
storage/xtradb/btr/btr0btr.c:
Added buf_block_get_frame_fast() to avoid compiler warning
storage/xtradb/handler/ha_innodb.cc:
Added cast
storage/xtradb/include/buf0buf.h:
Innodb has buf_block_get_frame(block) defined as (block)->frame.
I didn't want to make a big change that might break XtraDB, as it may use buf_block_get_frame() differently, so I made this quick hack to silence one compiler warning.
When resolving outer fields, Item_field::fix_outer_fields()
creates new Item_refs for each execution of a prepared statement, so
these must be allocated in the runtime memroot. The memroot switching
before resolving JOIN::having causes these to be allocated in the
statement root, leaking memory for each PS execution.
sql/item_subselect.cc:
Addon fix for Bug#11829691: the item could be created in the
runtime memroot, so we need to use real_item() instead.
"ORDER BY" AND "LIMIT BY" CLAUSE
PROBLEM:
When a 'limit' clause is specified in a query along with
group by and order by, the optimizer chooses the wrong index,
thereby examining more rows than required.
However, without the 'limit' clause, the optimizer chooses
the right index.
ANALYSIS:
With respect to the query specified, the range optimizer chooses
the first index, as there is a range present (on 'a'). The optimizer
then checks for an index that would give records in sorted
order for the 'group by' clause.
While checking, it chooses the second index (on 'c,b,a') based on
the 'limit' specified and the selectivity of
'quick_condition_rows' (the number of rows present in the range)
in the 'test_if_skip_sort_order' function.
But it fails to consider that an order by clause on a
different column will result in scanning the entire index, and
hence the estimated number of rows calculated above is
wrong (which results in choosing the second index).
FIX:
Do not enforce the 'limit' clause in the call to
'test_if_skip_sort_order' if we are creating a temporary
table. Creation of a temporary table indicates that there will be
more post-processing, and hence all the rows will be needed.
This fix is backported from 5.6. This problem is fixed in 5.6 as
part of changes for work log #5558
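A minimal sketch of the changed call (argument names are illustrative;
HA_POS_ERROR is the server's 'no limit' value):

    /* When need_tmp is set, pass HA_POS_ERROR (no limit) instead of
       the real limit, since post-processing needs all rows anyway. */
    ha_rows limit= join->need_tmp ? HA_POS_ERROR : select_limit;
    bool skip_sort=
      test_if_skip_sort_order(tab, order, limit,
                              true,  /* no_changes */
                              &tab->table->keys_in_use_for_order_by);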
mysql-test/r/subselect.result:
Changes for Bug#11762052 results in the correct number of rows.
sql/sql_select.cc:
Do not pass the actual 'limit' value if 'need_tmp' is true.
Problem: Some queries with subqueries and a HAVING clause that
consists only of a column not in the select or grouping lists cause
the server to crash.
During parsing, an Item_ref is constructed for the HAVING column. The
name of the column is resolved when JOIN::prepare calls fix_fields()
on its having clause. Since the column is not mentioned in the select
or grouping lists, a ref pointer is not found and a new Item_field is
created instead. The Item_ref is replaced by the Item_field in the
tree of HAVING clauses. Since the tree consists only of this item, the
pointer that is updated is JOIN::having. However,
st_select_lex::having still points to the Item_ref as the root of the
tree of HAVING clauses.
The bug is triggered when doing filesort for create_sort_index(). When
find_all_keys() calls select->cond->walk() it eventually reaches
Item_subselect::walk() where it continues to walk the having clauses
from lex->having. This means that it finds the Item_ref instead of the
new Item_field, and Item_ref::walk() tries to dereference the ref
pointer, which is still null.
The crash is reproducible only in 5.5, but the problem lies latent in
5.1 and trunk as well.
Fix: After calling fix_fields on the having clause in JOIN::prepare(),
set select_lex::having to point to the same item as JOIN::having.
This patch also fixes a bug in 5.1 and 5.5 that is triggered if the
query is executed as a prepared statement. The Item_field is created
in the runtime arena when the query is prepared, and the pointer to
the item is saved by st_select_lex::fix_prepare_information() and
brought back as a dangling pointer when the query is executed, after
the runtime arena has been reclaimed.
Fix: Backport fix from trunk that switches to the permanent arena
before calling Item_ref::fix_fields() in JOIN::prepare().
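A sketch of the backported pattern in JOIN::prepare() (the
activate/restore pair is the usual server arena idiom; exact placement
may differ from the real patch):

    Query_arena *arena, backup;
    /* Allocate the replacement Item_field on the permanent (statement)
       arena so it survives across PS executions. */
    arena= thd->activate_stmt_arena_if_needed(&backup);
    bool res= having->fix_fields(thd, &having);
    /* Keep st_select_lex in sync so walkers that start from
       lex->having do not see the stale Item_ref. */
    select_lex->having= having;
    if (arena)
      thd->restore_active_arena(arena, &backup);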
sql/item.cc:
Set context when creating Item_field.
sql/sql_select.cc:
Switch to permanent arena and update select_lex->having.
CHEAP SQ: Valgrind warnings "Memory lost" with IN and EXISTS nested subquery, materialization+semijoin
Analysis:
The memory leak was a result of the interaction of semi-join optimization
with early optimization of constant subqueries. The function:
setup_jtbm_semi_joins() created a dummy temporary table "dummy_table"
in order to make some JOIN_TAB objects complete. Normally, such temporary
tables are freed inside JOIN_TAB::cleanup.
However, the inner-most subquery is pre-optimized, which allows the
optimization of the MAX subquery to determine that its WHERE is TRUE,
and thus to compute the result of the MAX during optimization. This
ultimately allows the optimize phase of the outer query to find that
its WHERE clause is FALSE. Once JOIN::optimize finds that the result
set is empty, it sets zero_result_cause, and returns *before* it ever
reaches make_join_statistics(). As a result the query plan has no
JOIN_TABs at all. The temporary table is supposed to be cleaned up
via JOIN_TAB::cleanup, but this never happens because there is no
JOIN_TAB for this table. Hence we get a memory leak.
Solution:
Whenever there are no JOIN_TABs, iterate over all table references in
JOIN::join_list, and free the ones that contain semi-join temporary
tables.
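An illustrative sketch of such a cleanup loop (assuming the
TABLE_LIST::jtbm_subselect marker; details differ in the real patch):

    List_iterator<TABLE_LIST> li(*join->join_list);
    TABLE_LIST *table_ref;
    while ((table_ref= li++))
    {
      /* A JTBM semi-join nest carries the dummy temporary table
         created by setup_jtbm_semi_joins(). */
      if (table_ref->jtbm_subselect && table_ref->table)
      {
        free_tmp_table(join->thd, table_ref->table);
        table_ref->table= NULL;
      }
    }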
WHEN KILLING
Suppose there is a query waiting for a lock. If the user kills
this query, then the "Got error -1 when reading table" error message
must not be logged in the server log file. Since this is a
user-requested interruption, no spurious error message should be logged
in the server log. This patch removes the error message from
the log.
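An illustrative guard (the real change sits in the server's
read-record error path): skip the log message when the statement was
interrupted by a user KILL.

    if (!thd->killed)
      sql_print_error("Got error %d when reading table '%s'",
                      error, table->s->path.str);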
approved by joh and tatjana