This patch also fixes some bugs detected by valgrind after this
patch:
- Not enough copy_func elements were allocated by Create_tmp_table(), which
caused a memory overwrite in Create_tmp_table::add_fields().
I added an ASSERT() to be able to detect this even without valgrind.
The bug was that TMP_TABLE_PARAM::copy_fields was not correctly set
when calling create_tmp_table().
- Aria::empty_bits is not allocated if there are no varchar/char/blob
fields in the table. Fixed the code to take this into account.
This could not cause any issues, as it was just a read of other Aria
memory whose content was never used.
- Aria::last_key_buff was not allocated big enough. This may have caused
issues with rtrees and ma_extra(HA_EXTRA_REMEMBER_POS) as they
would use the same memory area.
- Aria and MyISAM didn't take extended key parts into account, which
caused problems when copying rec_per_key from engine to sql level.
- Mark asan builds with 'asan' in the version string to detect these in
not_valgrind_build.inc.
This is needed to not have main.sp-no-valgrind fail with asan.
SELECT DISTINCT did not work with expressions with sum functions.
DISTINCT was only applied to the values stored in the intermediate temporary
table, which contained only the value of each sum function.
In other words:
SELECT DISTINCT sum(a),sum(b),avg(c) ... worked.
SELECT DISTINCT sum(a),sum(b) > 2,sum(c)+sum(d) would not work.
The latter query would apply DISTINCT only to the sum(a) part.
This was fixed by extending remove_dup_with_hash_index() and
remove_dup_with_compare() to take into account the columns in the result
list that were not stored in the temporary table.
Note that in many cases the above dup removal functions are not used as
the optimizer may be able to either remove duplicates early or it will
discover that duplicate removal is not needed. The latter happens, for
example, if the GROUP BY fields are part of the result.
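A hedged illustration (schema and data are made up, not from the commit's
test case): both groups below have the same sum(a), so before the fix the
second row would be wrongly removed as a duplicate:

  CREATE TABLE t1 (g INT, a INT, b INT);
  INSERT INTO t1 VALUES (1,1,1),(2,1,5);
  -- Correct result: two rows, (1,0) and (1,1).
  -- Dedup on the stored sum(a) alone would return only one row.
  SELECT DISTINCT sum(a), sum(b) > 2 FROM t1 GROUP BY g;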
Other things:
- Backported from 11.0 the change of Sort_param.tmp_buffer from char* to
String.
- Changed Type_handler::make_sort_key() to take String as a parameter
instead of Sort_param. This was done to allow make_sort_key() functions
to be reused by distinct elimination functions.
This makes Type_handler_string_result::make_sort_key() similar to code
in 11.0.
- Simplified the error handling in remove_dup_with_compare() to remove code
duplication.
Reviewer: Sergei Petrunia <sergey@mariadb.com>
WITH TIES would not take effect if SELECT DISTINCT was used in a
context where an INDEX is used to resolve the ORDER BY clause.
WITH TIES relies on `JOIN::order` to contain the non-constant
fields needed to test the equality of the ORDER BY fields.
The cause of the problem was a premature removal of the `JOIN::order`
member during a DISTINCT optimization. This led to the WITH TIES code assuming
ORDER BY only contained "constant" elements.
Disable this optimization when WITH TIES is in effect.
(side-note: the ORDER BY removal does not impact any current tests, thus
it will be removed in a future version)
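A hedged sketch of the affected query shape (schema is made up): the index
on `a` lets ORDER BY be resolved without sorting, which is the context where
the bug showed up:

  CREATE TABLE t1 (a INT, b INT, KEY(a));
  INSERT INTO t1 VALUES (1,1),(1,2),(2,1);
  -- Expected: both a=1 rows, as the second ties with the first on ORDER BY a.
  SELECT DISTINCT a, b FROM t1 ORDER BY a FETCH FIRST 1 ROWS WITH TIES;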
Reviewed by: monty@mariadb.org
There was a bug in JOIN::make_notnull_conds_for_range_scans() when
clearing TABLE->tmp_set, which was used to mark fields that could not be
null.
This function is only used if 'not_null_range_scan=on' is set.
The effect was that tmp_set contained a 'random value' and this caused
the optimizer to think that some fields could not be null.
FLUSH TABLES clears tmp_set and because of this things worked temporarily.
Fixed by clearing tmp_set properly.
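For reference, a minimal way to exercise this code path (the query shape is
illustrative only; any query from which NOT NULL conditions can be inferred
for range scans should do):

  SET optimizer_switch='not_null_range_scan=on';
  -- t1.a = t2.a lets the optimizer infer that t1.a and t2.a cannot be null:
  SELECT * FROM t1, t2 WHERE t1.a = t2.a;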
Enable use of Rowid Filter optimization with eq_ref access.
Use the following assumptions:
- Assume index-only access cost is 50% of non-index-only access cost.
- Take into account that "Eq_ref access cache" reduces the number of
lookups eq_ref access will make.
= This means the number of Rowid Filter checks is also reduced
= Eq_ref access cost is computed using that assumption (see
prev_record_reads() call), so we should use it in all cost
computations.
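To make the arithmetic concrete (the numbers are illustrative, not from the
patch): if a full eq_ref lookup is estimated at 1.0, the index-only estimate
is assumed to be 0.5. If the eq_ref cache is expected to satisfy 40 of 100
lookups, only 60 engine lookups remain, so the Rowid Filter is checked about
60 times, and both the filter overhead and its savings must be scaled by the
same 0.6 factor in the cost comparison.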
(Initial patch by Varun Gupta. Amended and added comments).
When the query has both
1. Aggregate functions that require sorting data by group, and
2. Window functions
we need to use two temporary tables. The first temp. table will hold the
join output. Then it is passed to filesort(). Reading it in sorted
order makes it possible to compute the aggregate functions.
We then need to write their values into the second temp. table, so that
the window function computation step can pass that to filesort() and read
them in the order it needs.
Failure to create the second temp. table would cause an assertion
failure: the window function code would not find where to get the values
of the aggregate functions.
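A hedged example of a query shape that needs both steps (schema is made up):

  CREATE TABLE t1 (g INT, a INT);
  -- GROUP_CONCAT(... ORDER BY ...) needs the rows sorted by group, and the
  -- window function then needs its own sort over the per-group results:
  SELECT GROUP_CONCAT(a ORDER BY a),
         ROW_NUMBER() OVER (ORDER BY SUM(a))
  FROM t1
  GROUP BY g;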
The problem was that federated engine does not support comparable rowids
which was not taken into account by semijoin code.
Fixed by checking that we don't use semijoin with tables that do not
support comparable rowids.
Other things:
- Fixed some typos in the code comments
The reason things fail in 10.5 and above is that test_quick_select()
returns -1 (impossible range) for empty tables if there are any
conditions attached.
This didn't happen in 10.4 as the cost for a range was more than for
a table scan with 0 rows and get_key_scan_params() did not create any
range plans and thus did not mark the range as impossible.
The code that checked the 'impossible range' conditions did not take
into account all cases of LEFT JOIN usage.
Adding an extra check if the table is used with an ON condition in case
of 'impossible range' fixes the issue.
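A hedged sketch of the affected shape (tables are made up): t2 is empty, so
test_quick_select() reports an impossible range for it, but t2 is on the
inner side of a LEFT JOIN, so the row combination must still be produced:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT, KEY(a));
  INSERT INTO t1 VALUES (1);
  -- Must return one row with t2.a = NULL, not an empty result.
  SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.a;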
it's incorrect to use change_item_tree() to replace arguments
of top-level AND/OR, because they (arguments) are stored in a List,
so a pointer to an argument is in the list_node, and individual
list_node's of top-level AND/OR can be deleted in Item_cond::build_equal_items().
In that case rollback_item_tree_changes() will modify the deleted object.
Luckily, it's not needed to use change_item_tree() for top-level
AND/OR, because the whole top-level item is copied and preserved
in prep_where and prep_on, and restored from there.
So, just don't.
In addition to the test case in the commit, it fixes
* ASAN failure of main.opt_tvc --ps
* ASAN failure of main.having_cond_pushdown --ps
when an internal temporary table field is created from a real field,
a new temp field should only copy a default from the source field
when the latter has it
when creating a temp table field from an actual table field,
these two fields are supposed to be mostly identical
(except for BIT field storage), in particular, temp field should
have the same default as the orig field, even if the sql_mode has
been changed meanwhile (e.g. to include NO_ZERO_DATE)
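A hedged illustration of the intent (the exact repro may differ):

  -- created while sql_mode still allows zero dates:
  CREATE TABLE t1 (d DATE DEFAULT '0000-00-00');
  SET sql_mode = CONCAT(@@sql_mode, ',NO_ZERO_DATE');
  -- A query that materializes t1.d in an internal temp table should copy
  -- the original default and must not fail under the new sql_mode:
  SELECT DISTINCT d FROM t1;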
The problem was that when storing rows into a temporary table,
MIN/MAX items that were marked as constants (as their value had
been computed at the start of the query) would be reset.
Fixed by not resetting MIN/MAX items that are marked as const in
Item_sum_min_max::clear().
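An illustrative shape (hypothetical, not the commit's test case): with an
index on `a`, MIN(a)/MAX(a) can be resolved during optimization and marked
const, and a later temp-table step must not clear the cached values:

  CREATE TABLE t1 (a INT, KEY(a));
  INSERT INTO t1 VALUES (1),(2),(3);
  SELECT DISTINCT MIN(a), MAX(a) FROM t1;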
I have not been able to repeat the problem, but the stack trace indicates
that ha_maria::extra() is called with a null file pointer.
This indicates the table has either never been opened or opened and closed,
with file pointer set to NULL, but ha_maria::extra() is still called.
In JOIN::partial_cleanup() we only check table->is_created(), which
does not handle the case where the table was created and later closed.
Fixed by clearing table->created if table is dropped.
I added an assert to is_created() to catch the case that the create
flag does not match 'file'.
(Patch from Monty, slightly amended)
Fix rowid filtering optimization in best_access_path():
== Ref access + rowid filtering ==
The cost computations compare #records and index-only scan cost
(keyread_tmp) to find out the per-record advantage one will get by
skipping the read of the full table record.
The computations produce wrong result when:
- the #records are "clipped down" with s->worst_seeks or
thd->variables.max_seeks_for_key. keyread_tmp is not clipped
this way so the numbers are not comparable.
- access_factor is negative. This means index_only read is
cheaper than non-index-only read.
This patch makes the optimizer not to consider Rowid Filtering in
such cases.
The decision is logged in the Optimizer Trace using
"rowid_filter_skipped" name.
== Range access + rowid filtering ==
when considering whether to use a Rowid Filter with range access, do multiply
keyread_tmp by record_count. That way, it is comparable with the
range access's estimate, which is multiplied by record_count.
The cause of the regression was the handling of the ROWNUM() function.
For queries like
SELECT ROWNUM() FROM ... ORDER BY ...
ROWNUM() should be computed before the ORDER BY.
The computation was moved to be before the ORDER BY for any entries in
the select list that had RAND_TABLE_BIT set.
This had a negative impact on queries of the form:
SELECT sp_func() FROM t1 ORDER BY ... LIMIT n
where sp_func() is NOT declared as DETERMINISTIC (and so has
RAND_TABLE_BIT set).
The fix is to require evaluation before sorting only for the ROWNUM()
function. Functions that merely have RAND_TABLE_BIT set can be computed
after ORDER BY ... LIMIT is applied.
(Think of a possible index that satisfies the ORDER BY clause. In
that case, the rows would be read in the needed order and we would
stop after reading LIMIT rows, achieving the same effect.)
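For illustration (the function and table are made up): a non-deterministic
stored function no longer forces evaluation before the sort, while ROWNUM()
still does:

  CREATE TABLE t1 (a INT, KEY(a));
  CREATE FUNCTION f1() RETURNS INT NOT DETERMINISTIC RETURN 1;
  -- f1() has RAND_TABLE_BIT set but may now be evaluated after
  -- ORDER BY ... LIMIT, so only LIMIT rows need the call:
  SELECT f1() FROM t1 ORDER BY a LIMIT 10;
  -- ROWNUM() is still computed before the ORDER BY:
  SELECT ROWNUM() FROM t1 ORDER BY a LIMIT 10;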
The issue is that record_should_be_deleted() returns true in
mysql_delete() even if a sub-select with a join gets an error from the
storage engine when a DELETE FROM ... WHERE ... IN (SELECT ...) statement
is executed.
The same is true for mysql_update(), where select->skip_record() returns
true even if a sub-select with a join gets an error from the storage engine.
In the test case, if the sub-select is chosen as the deadlock victim, the
whole transaction is rolled back during sub-select execution, but
mysql_delete()/mysql_update() continues transaction execution and invokes
table->delete_row(), as record_should_be_deleted() wrongly returns true
in mysql_delete(), and table->update_row(), as select->skip_record(thd)
wrongly returns 1 for mysql_update().
record_should_be_deleted() wrongly returns true because thd->is_error()
returns false in SQL_SELECT::skip_record() invoked from
record_should_be_deleted().
The THD error is supposed to be set in rr_handle_error(), called
from rr_sequential() during sub-select JOIN::exec_inner() execution.
But rr_handle_error() does not set THD error because
READ_RECORD::print_error is not set in JOIN_TAB::read_record.
READ_RECORD::print_error should be initialized in
init_read_record()/init_read_record_idx(). But make_join_readinfo() does
not invoke init_read_record()/init_read_record_idx() for
JOIN_TAB::read_record.
The fix is to set JOIN_TAB::read_record.print_error in
make_join_readinfo(), i.e. in the same place where
JOIN_TAB::read_record.table is set.
Reviewed by Sergey Petrunya.
Deallocation of TABLE_LIST::dt_handler and TABLE_LIST::pushdown_derived
was performed in multiple places in the code. This not only made the code
more difficult to maintain but also led to memory leaks and
ASAN heap-use-after-free errors.
This commit moves the deallocation of TABLE_LIST::dt_handler and
TABLE_LIST::pushdown_derived to a single point: JOIN::cleanup().
If all elements in the list of an IN or NOT IN clause are equal
and there are no NULLs, then the clause
- "a IN (e1,..,en)" can be converted to "a = e1"
- "a NOT IN (e1,..,en)" can be converted to "a <> e1".
This means an object of Item_func_in can be replaced with an object
of Item_func_eq for IN (e1,..,en) clause and Item_func_ne for
NOT IN (e1,...,en). Such a replacement allows the optimizer to choose
a better execution plan.
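A hedged way to observe the rewrite (the exact warning text is an
assumption):

  CREATE TABLE t1 (a INT);
  EXPLAIN EXTENDED SELECT * FROM t1 WHERE a IN (1,1,1);
  -- SHOW WARNINGS is expected to print the optimized query with the
  -- condition rewritten to t1.a = 1:
  SHOW WARNINGS;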
When a range rowid filter was used with an index ref access the cost of
accessing the index entries for the records rejected by the filter was not
taken into account. For a ref access by an index with a big average number
of records per key, this led to poor execution plans if the selectivity of
the used filter was high.
The patch resolves this problem. It also introduces a minor optimization
that skips look-ups into a filter that turns out to be empty.
With this patch the output of ANALYZE stmt reports the number of look-ups
into used rowid filters.
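For example (the query shape is illustrative; whether a rowid filter is
actually used depends on the chosen plan):

  -- ANALYZE executes the statement and reports runtime statistics; with
  -- this patch the rowid filter node also shows how often it was probed:
  ANALYZE FORMAT=JSON SELECT * FROM t1 WHERE a BETWEEN 1 AND 10 AND b = 5;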
The patch also back-ports from 10.5 the code that properly sets the field
TABLE::file::table for opened temporary tables.
The test cases that were supposed to use rowid filters have been adjusted
in order to use similar execution plans after this fix.
Approved by Oleksandr Byelkin <sanja@mariadb.com>