Don't construct open ranges from prefix blob keys for < (less than),
just as is already done for > (greater than). A prefix KEY_PART does
not create a prefix Field for blobs (see open_table_from_share() near
"Create a new field for the key part"), so stored_field_cmp_to_item()
would compare the original field to the value without taking the
prefix length into account.
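A hypothetical illustration of the affected query shape (table and
values invented for this example):

  CREATE TABLE t1 (b BLOB, KEY k1 (b(4)));
  INSERT INTO t1 VALUES (REPEAT('ab', 100)), (REPEAT('ba', 100));
  -- the '<' condition used to build an open range over the 4-byte key
  -- prefix, while the row check compared the full blob value:
  SELECT * FROM t1 WHERE b < REPEAT('ba', 100);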
The LooseScan code set opt_range_condition_rows to
MIN(loose_scan_plan->records, table->records),
completely ignoring possible quick range selects. If there was a quick
select $QUICK on another index with
$QUICK->records < loose_scan_plan->records,
this created a situation where
opt_range_condition_rows > $QUICK->records,
which causes an assert in 10.6+ and potentially a wrong query plan
choice in 10.5.
Fixed by making opt_range_condition_rows the minimum #rows of any
quick select.
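For illustration with made-up numbers: with loose_scan_plan->records =
1000, table->records = 5000 and a quick select with $QUICK->records =
100, the old code set opt_range_condition_rows = MIN(1000, 5000) =
1000, which is greater than $QUICK->records and so hits the assert.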
Approved-by: Monty <monty@mariadb.org>
A GROUP BY query which uses "MIN(pk)" and has "pk<>const" in the
WHERE clause would produce a wrong result when handled with "Using
index for group-by". Here the "pk" column is the table's primary key.
The problem was introduced by the fix for MDEV-23634. It made the
range optimizer not produce ranges for conditions of the form
"pk != const". However, the LooseScan code requires that the optimizer
is able to convert the condition on the MIN/MAX column into an
equivalent range. The range is used to locate the row that has the
MIN/MAX value. LooseScan checks this in
check_group_min_max_predicates(). This fix makes the code in that
function take into account that "pk != const" does not produce a
range.
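A hypothetical query of the affected shape (table and values invented
for this example):

  CREATE TABLE t1 (pk INT PRIMARY KEY, a INT, KEY (a, pk));
  INSERT INTO t1 VALUES (1, 10), (2, 10), (3, 20);
  -- "Using index for group-by" needs a range for "pk <> 1" to locate
  -- MIN(pk) per group; the expected result is (10,2),(20,3):
  SELECT a, MIN(pk) FROM t1 WHERE pk <> 1 GROUP BY a;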
This bug could affect multi-update statements, as well as single-table
update statements processed as multi-updates, when the WHERE condition
contained a range condition over a non-indexed varchar column. The
optimizer calculates the selectivity of such range conditions using
histograms. For each range, the buckets containing the endpoints of
the range are determined by a procedure that stores the values of the
endpoints in the space of the record buffer where values of the
columns are usually stored. For a range over a varchar column the
value of an endpoint may exceed the size of the buffer, in which case
the value is stored truncated. This truncation cannot affect the
result of the calculation of the range selectivity, as the calculation
employs only the beginning of the value string. However, it can
trigger generation of an unexpected error on this truncation if an
update statement is processed.
This patch suppresses truncation messages when the selectivity of a
range condition is calculated for a non-indexed column.
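A sketch of the affected statement shape, assuming histograms are
collected and used for selectivity estimation (all names invented):

  SET histogram_size = 255;
  SET optimizer_use_condition_selectivity = 4;
  CREATE TABLE t1 (a INT, s VARCHAR(16));
  CREATE TABLE t2 (a INT);
  ANALYZE TABLE t1 PERSISTENT FOR ALL;
  -- the endpoint constant is longer than the column's record buffer,
  -- so it is stored truncated for the histogram lookup; this used to
  -- raise an unexpected truncation error during the multi-update:
  UPDATE t1, t2 SET t1.a = t2.a
    WHERE t1.a = t2.a AND t1.s > REPEAT('x', 100);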
Approved by Oleksandr Byelkin <sanja@mariadb.com>
This patch is the result of running
run-clang-tidy -fix -header-filter=.* -checks='-*,modernize-use-equals-default' .
Code style changes have been done on top. The result of this change
leads to the following improvements:
1. Binary size reduction.
* For a -DBUILD_CONFIG=mysql_release build, the binary size is reduced by
~400kb.
* A raw -DCMAKE_BUILD_TYPE=Release build reduces the binary size by ~1.4kb.
2. The compiler can better understand the intent of the code, which
leads to more optimization possibilities. Additionally, it enabled
detecting unused variables that had an empty default constructor but
were not explicitly marked as such.
A particular change was required following this patch in
sql/opt_range.cc: result_keys, an unused variable of the template
class Bitmap, now correctly issues an unused-variable warning.
Setting the Bitmap template class constructor to default allows the
compiler to identify that there are no side effects when instantiating
the class. Previously the compiler could not issue the warning, as it
assumed the constructor of the Bitmap class (being a template) might
not be a no-op, which prevented the "unused variable" warning.
This issue was caused by the bug fix for
MDEV-30325 Wrong result upon range query using index condition
The bug could happen in the case of several overlapping key ranges
combined with OR.
The MDEV-25004 test innodb_fts.versioning is omitted because ever since
commit 685d958e38 InnoDB would not allow
writes to a database where the redo log file ib_logfile0 is missing.
This was caused by a bug in key_or() when SEL_ARG *key1 had been
cloned and was overlapping with SEL_ARG *key2.
Cloning of SEL_ARGs happens only in very special cases, which is why
this bug has remained undetected for years.
It happened in the following query:

  SELECT COUNT(*) FROM lineitem FORCE INDEX (i_l_orderkey_quantity, i_l_shipdate)
  WHERE l_shipdate < '1994-01-01' AND l_orderkey < 800 OR
        l_quantity > 3 AND l_orderkey NOT IN (157, 1444);

The bug was triggered because there are two different indexes that can
be used, and the code for IN causes a 'tree_or', which causes all
SEL_ARGs to be cloned.
Other things:
- While checking the code, I found a bug in SEL_ARG::SEL_ARG(SEL_ARG &arg):
  it was incrementing next_key_part->use_count as part of creating a
  copy of an existing SEL_ARG.
  This is however not enough, as the 'reverse operation' applied when
  the copy is not needed is 'key2_cpy.increment_use_count(-1)', which
  does something completely different.
  Fixed by calling increment_use_count(1) in SEL_ARG::SEL_ARG.
Make SEL_ARG::make_root() maintain SEL_ARG::weight.
Also, an unrelated change: fix dbug_print_sel_arg() to correctly
print SQL NULL for the right endpoint.
(addressed review input)
The issue was introduced by the @@optimizer_max_sel_arg_weight code.
key_or() calls SEL_ARG::update_weight_locally(). That function
takes O(tree->elements) time.
Without that call, key_or(big_tree, one_element_tree) would take
O(log(big_tree)) when one_element_tree doesn't overlap with elements
of big_tree.
This means update_weight_locally() can cause a big slowdown.
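For illustration: building a tree by OR-ing N non-overlapping
one-element trees one at a time then costs O(N^2) in total with the
update_weight_locally() call, versus roughly O(N*log(N)) without it.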
The fix:
1. key_or() actually doesn't need to call update_weight_locally().
It calls SEL_ARG::tree_delete() and SEL_ARG::insert(). These functions
update SEL_ARG::weight.
It also manipulates the SEL_ARG objects directly, but these
modifications do not change the weight of the tree.
I've just removed the update_weight_locally() call.
2. and_all_keys() also calls update_weight_locally(). It manipulates the
SEL_ARG graph directly.
Removed that call and added the code to update the SEL_ARG graph weight.
Tests main.range and main.range_not_embedded already contain the queries
that have test coverage for the affected code.
The problem was that get_best_group_min_max() did not check whether
fields used by the "group_min_max optimization" were used in
subqueries.
Because of this, it did not detect that a key (b,a) was used in the
WHERE clause for the statement:
SELECT DISTINCT b FROM t1 WHERE EXISTS ( SELECT 1 FROM DUAL WHERE a > 1 ).
Fixed by also traversing the subqueries when checking if a field is
used. This disables the group_min_max optimization for the above
query.
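A self-contained version of the example (the table definition is
assumed for illustration):

  CREATE TABLE t1 (a INT, b INT, KEY (b, a));
  -- "a" is referenced only inside the subquery, so the key (b,a) is
  -- in fact used by the WHERE clause:
  SELECT DISTINCT b FROM t1 WHERE EXISTS (SELECT 1 FROM DUAL WHERE a > 1);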
Reviewer: Sergei Petrunia
Make QUICK_RANGE_SELECT::cmp_next() aware of reverse-ordered key parts.
(QUICK_RANGE_SELECT::cmp_prev() uses key_cmp() and so it already works
correctly)
Make the Range Optimizer support descending index key parts.
We follow the approach taken in MySQL-8.
See HowRangeOptimizerHandlesDescKeyparts for the description.
* preserve DESC index property in the parser
* store it in the frm (only for HA_KEY_ALG_BTREE)
* read it from the frm
* show it in SHOW CREATE
* skip DESC indexes in opt_range.cc and opt_sum.cc
* ORDER BY test
This includes a fix of MDEV-27432.
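A minimal illustration of what this adds (example invented):

  CREATE TABLE t1 (a INT, b INT, KEY k1 (a DESC, b ASC));
  SHOW CREATE TABLE t1;  -- the DESC attribute is preserved and shown
  SELECT a, b FROM t1 ORDER BY a DESC, b ASC;  -- matches the key order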