Enable use of the Rowid Filter optimization with eq_ref access.
Use the following assumptions:
- Assume index-only access cost is 50% of non-index-only access cost.
- Take into account that the "Eq_ref access cache" reduces the number of
  lookups eq_ref access will make.
  = This means the number of Rowid Filter checks is also reduced.
  = Eq_ref access cost is computed using that assumption (see the
    prev_record_reads() call), so we should use it in all cost
    computations.
(Initial patch by Varun Gupta. Amended and added comments).
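A minimal sketch of how the two assumptions above could be folded into a
cost estimate. The function and parameter names (eq_ref_filter_cost,
cache_hit_ratio, selectivity) are illustrative only, not the actual
optimizer API:

```cpp
#include <cstdio>

// Hypothetical, simplified cost model for eq_ref access with a rowid filter.
// full_read_cost:  cost of one lookup that reads the full table record
// selectivity:     fraction of rows accepted by the rowid filter
// cache_hit_ratio: fraction of lookups avoided by the eq_ref access cache
//                  (the prev_record_reads() effect in the commit text)
static double eq_ref_filter_cost(double out_rows, double full_read_cost,
                                 double selectivity, double cache_hit_ratio)
{
  double index_only_cost = full_read_cost * 0.5;        // assumption #1: 50%
  double lookups = out_rows * (1.0 - cache_hit_ratio);  // assumption #2: cached
                                                        // lookups also skip the
                                                        // filter check
  // Each remaining lookup pays the index-only price for the filter probe;
  // only the accepted fraction pays for reading the full record.
  return lookups * (index_only_cost + selectivity * full_read_cost);
}

int main()
{
  printf("cost=%g\n", eq_ref_filter_cost(1000, 1.0, 0.1, 0.3));
  return 0;
}
```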
When the query has both
1. Aggregate functions that require sorting data by group, and
2. Window functions
we need to use two temporary tables. The first temp. table will hold
the join output. It is then passed to filesort(), and reading it in
sorted order makes it possible to compute the aggregate functions.
Their values are then written into the second temp. table, which the
Window Function computation step can pass to filesort() and read in
the order it needs.
Failure to create the second temp. table would cause an assertion
failure: the window function code would not find where to get the
values of the aggregate functions.
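A toy illustration of this two-pass flow, with made-up types and no
relation to the real temporary-table or filesort() interfaces:

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct Row { std::string grp; double val; };

int main()
{
  // First temp table: the join output.
  std::vector<Row> tmp1 = { {"b", 2}, {"a", 1}, {"a", 3} };

  // filesort() pass #1: order by group so the aggregates can be
  // computed while scanning consecutive rows of the same group.
  std::sort(tmp1.begin(), tmp1.end(),
            [](const Row &l, const Row &r) { return l.grp < r.grp; });

  // Second temp table: one row per group holding the aggregate value.
  std::vector<Row> tmp2;
  for (const Row &r : tmp1)
  {
    if (tmp2.empty() || tmp2.back().grp != r.grp)
      tmp2.push_back({r.grp, 0});
    tmp2.back().val += r.val;                  // e.g. SUM() per group
  }

  // filesort() pass #2: the window function step re-sorts the second
  // temp table in the order it needs and reads the values from there.
  std::sort(tmp2.begin(), tmp2.end(),
            [](const Row &l, const Row &r) { return l.val < r.val; });
  return 0;
}
```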
The problem was that the federated engine does not support comparable
rowids, which was not taken into account by the semijoin code.
Fixed by checking that we don't use semijoin with tables that do not
support comparable rowids.
Other things:
- Fixed some typos in the code comments
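Why comparable rowids matter here: semijoin duplicate elimination
remembers the rowids of rows it has already produced and compares new
rowids against them. A self-contained toy version, with an invented
capability flag standing in for the real handler check:

```cpp
#include <set>
#include <string>
#include <vector>

// Toy engine description; 'has_comparable_rowids' is a hypothetical flag,
// not the actual handler capability bit.
struct Engine
{
  bool has_comparable_rowids;
  std::vector<std::string> rowids;   // rowid of each matching row
};

// The added check: don't pick semijoin strategies for such engines.
static bool semijoin_allowed(const Engine &e)
{
  return e.has_comparable_rowids;
}

// Duplicate weedout: only sound when rowids are stable and comparable.
static size_t weedout(const Engine &e)
{
  std::set<std::string> seen;
  size_t kept = 0;
  for (const std::string &rid : e.rowids)
    if (seen.insert(rid).second)
      kept++;
  return kept;
}

int main()
{
  Engine federated{false, {"r1", "r1"}};
  return semijoin_allowed(federated) ? (int) weedout(federated) : 0;
}
```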
The reason things fail in 10.5 and above is that test_quick_select()
returns -1 (impossible range) for empty tables if there are any
conditions attached.
This didn't happen in 10.4 as the cost for a range was higher than for
a table scan with 0 rows, so get_key_scan_params() did not create any
range plans and thus did not mark the range as impossible.
The code that checked the 'impossible range' conditions did not take
into account all cases of LEFT JOIN usage.
Adding an extra check for whether the table is used with an ON
condition in the 'impossible range' case fixes the issue.
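A sketch of the added guard. An empty inner table of a LEFT JOIN must
not simply be marked impossible, because the outer rows still have to
be produced NULL-complemented; all names below are illustrative, not
the server's:

```cpp
#include <cstdio>

struct TableRef
{
  bool has_on_condition;   // table is the inner side of an outer join
};

// Simplified decision taken after test_quick_select() reports an
// impossible range by returning -1.
static bool can_mark_as_empty_const(const TableRef &t, int quick_result)
{
  if (quick_result != -1)
    return false;              // range is not impossible
  // The extra check: with an ON condition the table must still yield a
  // NULL-complemented row for every outer row, so it cannot be treated
  // as an impossible/const-empty table.
  return !t.has_on_condition;
}

int main()
{
  TableRef inner_of_left_join{true};
  printf("%d\n", can_mark_as_empty_const(inner_of_left_join, -1));  // 0
  return 0;
}
```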
It's incorrect to use change_item_tree() to replace arguments
of a top-level AND/OR, because the arguments are stored in a List,
so a pointer to an argument lives inside a list_node, and individual
list_nodes of a top-level AND/OR can be deleted in
Item_cond::build_equal_items().
In that case rollback_item_tree_changes() will modify the deleted
object.
Luckily, change_item_tree() is not needed for a top-level AND/OR,
because the whole top-level item is copied and preserved in prep_where
and prep_on, and restored from there.
So, just don't.
In addition to the test case in the commit, this fixes
* ASAN failure of main.opt_tvc --ps
* ASAN failure of main.having_cond_pushdown --ps
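The hazard in miniature: saving a pointer that lives inside a list node
and then deleting that node leaves the rollback code writing through a
dangling pointer. std::list stands in for the server's List<Item>; the
rest is a self-contained illustration:

```cpp
#include <list>

struct Item { int value; };

int main()
{
  std::list<Item *> and_args;          // arguments of a top-level AND
  Item a{1}, b{2};
  and_args.push_back(&a);
  and_args.push_back(&b);

  // change_item_tree()-style rollback record: the address of the slot
  // inside the first list node, plus the old value to restore later.
  Item **slot = &and_args.front();
  Item *saved = *slot;

  // Item_cond::build_equal_items() may delete individual list nodes:
  and_args.pop_front();                // the node 'slot' points into is gone

  // rollback_item_tree_changes() analogue: this write would now touch
  // freed memory -- the pattern ASAN reported for the --ps runs.
  // *slot = saved;                    // undefined behaviour, left commented
  (void) saved;
  return 0;
}
```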
When an internal temporary table field is created from a real field,
the new temp field should only copy a default from the source field
when the latter has one.
When creating a temp table field from an actual table field,
these two fields are supposed to be mostly identical
(except for BIT field storage); in particular, the temp field should
have the same default as the orig field, even if the sql_mode has
been changed meanwhile (e.g. to include NO_ZERO_DATE).
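A sketch of the intended rule, with invented helper types (not the
Field API): copy the default only when the source field actually has
one, and never fabricate one from the current sql_mode:

```cpp
#include <optional>
#include <string>

// Toy stand-in for a table field definition.
struct FieldDef
{
  std::string type;
  std::optional<std::string> default_value;  // empty: field has no default
};

static FieldDef make_tmp_field(const FieldDef &orig)
{
  FieldDef tmp;
  tmp.type = orig.type;                      // temp field mirrors the original
  if (orig.default_value)
    tmp.default_value = orig.default_value;  // ... including its default,
                                             // but only if one exists
  return tmp;                                // nothing derived from sql_mode
}

int main()
{
  FieldDef date_col{"date", std::nullopt};
  return make_tmp_field(date_col).default_value.has_value();  // 0
}
```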
The problem was that when storing rows into a temporary table,
MIN/MAX items that were marked as constants (as their value had
been computed at the start of the query) would be reset.
Fixed by not resetting MIN/MAX items that are marked as const in
Item_sum_min_max::clear().
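The shape of the fix, with illustrative names only: clear() leaves the
value alone once the item has been folded to a constant:

```cpp
#include <limits>

// Simplified MIN() accumulator; 'is_const' marks a value computed once
// at the start of the query.
struct MinAggregator
{
  double value  = std::numeric_limits<double>::infinity();
  bool is_const = false;

  void add(double v) { if (v < value) value = v; }

  // Called when a new group / temporary-table row is started.
  void clear()
  {
    if (is_const)
      return;                 // the fix: a constant MIN/MAX keeps its value
    value = std::numeric_limits<double>::infinity();
  }
};

int main()
{
  MinAggregator m;
  m.add(3.0);
  m.is_const = true;          // value is known for the whole query
  m.clear();                  // storing rows into a temp table must not wipe it
  return m.value == 3.0 ? 0 : 1;
}
```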
I have not been able to repeat the problem, but the stack trace
indicates that ha_maria::extra() is called with a null file pointer.
This indicates that the table has either never been opened, or was
opened and later closed with the file pointer set to NULL, but
ha_maria::extra() is still called.
In JOIN::partial_cleanup() we only check table->is_created(),
which will fail if the table was created and later closed.
Fixed by clearing table->created when the table is dropped.
I added an assert to is_created() to catch the case where the created
flag does not match 'file'.
(Patch from Monty, slightly amended)
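A sketch of the invariant the new assertion enforces, using made-up
member names: the 'created' flag has to agree with whether the handler
object is still there:

```cpp
#include <cassert>

struct Handler { /* engine-specific state, e.g. what ha_maria holds */ };

struct Table
{
  Handler *file = nullptr;    // null when never opened or already closed
  bool created  = false;

  bool is_created() const
  {
    assert(created == (file != nullptr));   // catch the mismatch early
    return created;
  }

  void drop()
  {
    delete file;
    file    = nullptr;
    created = false;          // the fix: clear 'created' on drop as well
  }
};

int main()
{
  Table t;
  t.file    = new Handler;
  t.created = true;
  t.drop();
  return t.is_created();      // 0, and no extra() call reaches a null handler
}
```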
Fix rowid filtering optimization in best_access_path():
== Ref access + rowid filtering ==
The cost computations compare #records and the index-only scan cost
(keyread_tmp) to find out the per-record advantage one gets by
skipping the read of the full table record.
The computations produce a wrong result when:
- the #records are "clipped down" with s->worst_seeks or
thd->variables.max_seeks_for_key. keyread_tmp is not clipped
this way so the numbers are not comparable.
- access_factor is negative. This means index_only read is
cheaper than non-index-only read.
This patch makes the optimizer not consider Rowid Filtering in
such cases.
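A compact sketch of the two guards; the function is illustrative and
only encodes the conditions described above:

```cpp
// Decide whether rowid filtering may be considered for ref access.
// records_were_capped: #records was clipped by s->worst_seeks or
//                      max_seeks_for_key while keyread_tmp was not
// access_factor:       per-record advantage of skipping the full
//                      record read, as computed elsewhere
static bool consider_rowid_filter(bool records_were_capped,
                                  double access_factor)
{
  if (records_were_capped)
    return false;    // the clipped and unclipped numbers are not comparable
  if (access_factor < 0)
    return false;    // a negative factor makes the cost formula meaningless
  return true;       // otherwise cost the filter out as usual
}

int main()
{
  return consider_rowid_filter(true, 0.4);   // 0: filtering is skipped
}
```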
The decision is logged in the Optimizer Trace using
"rowid_filter_skipped" name.
== Range access + rowid filtering ==
When considering use of a Rowid Filter with range access, multiply
keyread_tmp by record_count. That way it is comparable with the
range access estimate, which is also multiplied by record_count.
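The scaling fix in miniature, with illustrative numbers: both sides of
the comparison must be per the same number of executions:

```cpp
#include <cstdio>

int main()
{
  double record_count   = 4.0;                    // outer row combinations
  double range_cost     = 10.0 * record_count;    // range estimate is already
                                                  // multiplied by record_count
  double keyread_tmp    = 6.0;                    // per single execution
  double keyread_scaled = keyread_tmp * record_count;  // the fix

  printf("comparable: %g vs %g\n", range_cost, keyread_scaled);
  return 0;
}
```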
The cause of the regression was the handling of the ROWNUM() function.
For queries like
SELECT ROWNUM() FROM ... ORDER BY ...
ROWNUM() should be computed before the ORDER BY.
The computation was moved to be before the ORDER BY for any entries in
the select list that had RAND_TABLE_BIT set.
This had a negative impact on queries in form:
SELECT sp_func() FROM t1 ORDER BY ... LIMIT n
where sp_func() is NOT declared as DETERMINISTIC (and so has
RAND_TABLE_BIT set).
The fix is to require evaluation before sorting only for the ROWNUM()
function. Functions that merely have RAND_TABLE_BIT set can be
computed after ORDER BY ... LIMIT is applied.
(Think of a possible index that satisfies the ORDER BY clause. In
that case, the rows would be read in the needed order and we would
stop after reading LIMIT rows, achieving the same effect.)
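A sketch of the narrowed condition, with invented flags standing in for
the item attributes: only items that actually contain ROWNUM() force
evaluation before the sort:

```cpp
#include <cstdio>

struct SelectItem
{
  bool has_rand_table_bit;   // e.g. a non-DETERMINISTIC stored function
  bool contains_rownum;      // references ROWNUM()
};

// Before the fix the first flag alone forced evaluation before
// ORDER BY; after the fix only ROWNUM() does.
static bool must_evaluate_before_sort(const SelectItem &item)
{
  return item.contains_rownum;
}

int main()
{
  SelectItem sp_func{true, false};  // SELECT sp_func() ... ORDER BY ... LIMIT n
  SelectItem rownum {true, true};   // SELECT ROWNUM()  ... ORDER BY ...
  printf("%d %d\n",
         must_evaluate_before_sort(sp_func),   // 0: may run after ORDER BY/LIMIT
         must_evaluate_before_sort(rownum));   // 1: must run before the sort
  return 0;
}
```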
The issue is that record_should_be_deleted() returns true in
mysql_delete() even if a sub-select with a join gets an error from the
storage engine while a DELETE FROM ... WHERE ... IN (SELECT ...)
statement is executed.
The same is true for mysql_update(), where select->skip_record()
returns true even if a sub-select with a join gets an error from the
storage engine.
In the test case, if the sub-select is chosen as the deadlock victim,
the whole transaction is rolled back during sub-select execution, but
mysql_delete()/mysql_update() continues transaction execution and
invokes table->delete_row(), as record_should_be_deleted() wrongly
returns true in mysql_delete(), and table->update_row(), as
select->skip_record(thd) wrongly returns 1 for mysql_update().
record_should_be_deleted() wrongly returns true because
thd->is_error() returns false in SQL_SELECT::skip_record() invoked
from record_should_be_deleted().
The THD error is supposed to be set in rr_handle_error(), called
from rr_sequential() during sub-select JOIN::exec_inner() execution.
But rr_handle_error() does not set THD error because
READ_RECORD::print_error is not set in JOIN_TAB::read_record.
READ_RECORD::print_error should be initialized in
init_read_record()/init_read_record_idx(). But make_join_readinfo() does
not invoke init_read_record()/init_read_record_idx() for
JOIN_TAB::read_record.
The fix is to set JOIN_TAB::read_record.print_error in
make_join_readinfo(), i.e. in the same place where
JOIN_TAB::read_record.table is set.
Reviewed by Sergey Petrunya.
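The error-propagation chain in miniature, with all names invented: if
no print_error callback is attached, an engine error never reaches the
session's error state, so the thd->is_error()-style check stays false:

```cpp
#include <cstdio>

struct Session { bool error_flagged = false; };

// Toy READ_RECORD: read_func returns an engine error code;
// print_error, when set, is what raises the session-level error.
struct ReadRecord
{
  int  (*read_func)();
  void (*print_error)(Session &);
};

static int  deadlocked_read()       { return 1; /* engine error */ }
static void raise_error(Session &s) { s.error_flagged = true; }

static int handle_error(ReadRecord &rr, Session &s, int err)
{
  if (rr.print_error)
    rr.print_error(s);    // without this the error stays invisible
  return err;
}

int main()
{
  Session s;
  ReadRecord rr{deadlocked_read, nullptr};  // the situation before the fix
  rr.print_error = raise_error;             // the fix: set it in the same
                                            // place where the table is set
  int err = rr.read_func();
  if (err)
    handle_error(rr, s, err);
  printf("session error flagged: %d\n", s.error_flagged);  // 1
  return 0;
}
```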
Deallocation of TABLE_LIST::dt_handler and TABLE_LIST::pushdown_derived
was performed in multiple places in the code. This not only made the
code more difficult to maintain but also led to memory leaks and
ASAN heap-use-after-free errors.
This commit moves deallocation of TABLE_LIST::dt_handler and
TABLE_LIST::pushdown_derived to a single point - JOIN::cleanup().
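The consolidation in miniature, with toy types: exactly one place
deletes the two objects and nulls the pointers, so a repeated cleanup
stays a no-op:

```cpp
struct DerivedHandler  { };
struct PushdownDerived { };

// Toy TABLE_LIST / JOIN pair; ownership of the two pointers now ends
// in a single place instead of several scattered delete sites.
struct TableRef
{
  DerivedHandler  *dt_handler       = nullptr;
  PushdownDerived *pushdown_derived = nullptr;
};

struct Join
{
  TableRef *table;

  void cleanup()                         // the single deallocation point
  {
    delete table->dt_handler;
    delete table->pushdown_derived;
    table->dt_handler       = nullptr;   // a second cleanup is now harmless,
    table->pushdown_derived = nullptr;   // and no other site frees them
  }
};

int main()
{
  TableRef t;
  t.dt_handler       = new DerivedHandler;
  t.pushdown_derived = new PushdownDerived;
  Join j{&t};
  j.cleanup();
  j.cleanup();                           // safe: pointers already cleared
  return 0;
}
```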
If all elements in the list of an 'IN' or 'NOT IN' clause are equal
and there are no NULLs, then the clause
- "a IN (e1,..,en)" can be converted to "a = e1"
- "a NOT IN (e1,..,en)" can be converted to "a <> e1".
This means an object of Item_func_in can be replaced with an object
of Item_func_eq for an IN (e1,..,en) clause, and with Item_func_ne for
NOT IN (e1,...,en). Such a replacement allows the optimizer to choose
a better execution plan.
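A sketch of the rewrite condition with toy value types: every
right-hand element must be equal and non-NULL before IN collapses to a
single comparison:

```cpp
#include <optional>
#include <vector>

// Toy constant element: an empty optional models SQL NULL.
using Element = std::optional<long>;

enum class Rewrite { NONE, TO_EQ, TO_NE };

// "a IN (e1,..,en)"     -> "a = e1"  when all ei are equal and non-NULL
// "a NOT IN (e1,..,en)" -> "a <> e1" under the same condition
static Rewrite in_list_rewrite(const std::vector<Element> &elems, bool negated)
{
  if (elems.empty())
    return Rewrite::NONE;
  for (const Element &e : elems)
    if (!e || *e != *elems[0])
      return Rewrite::NONE;            // NULLs or differing values: keep IN
  return negated ? Rewrite::TO_NE : Rewrite::TO_EQ;
}

int main()
{
  std::vector<Element> same{1, 1, 1};
  std::vector<Element> with_null{1, std::nullopt};
  return (in_list_rewrite(same, false) == Rewrite::TO_EQ &&
          in_list_rewrite(with_null, false) == Rewrite::NONE) ? 0 : 1;
}
```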