Post-fix #2:
- Update test results
- Make the optimization conditional on an @@optimizer_switch flag.
- The optimization is now disabled by default, so the .result files
are changed back to what they were before the MDEV-8989 patch.
Variant #4 of the fix.
Make ORDER BY optimization functions take into account multiple
equalities. This is done in several places:
- remove_const() checks whether we can sort the first table in the
join, or whether we need to put rows into a temporary table and then sort.
- test_if_order_by_key() checks whether there are indexes that
can be used to produce the required ordering.
- make_unireg_sortorder() constructs the sort criteria for filesort.
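The items above all hinge on looking a column up through its multiple equality. Below is a minimal, self-contained sketch of that lookup (hypothetical Field/EqClass types, not the server's Item_equal machinery): if a multiple equality such as t1.a=t2.b holds, ORDER BY t2.b can be substituted by t1.a, so sorting via an index of t1 still produces the required ordering.

  #include <string>
  #include <vector>
  #include <algorithm>

  struct Field   { std::string table, name; };
  struct EqClass { std::vector<Field> members; };   // one multiple equality

  static bool same_field(const Field &a, const Field &b)
  {
    return a.table == b.table && a.name == b.name;
  }

  // Return a column equal to 'order_item' that belongs to 'sort_table',
  // or nullptr if the multiple equalities do not provide one.
  const Field *find_sortable_substitute(const Field &order_item,
                                        const std::string &sort_table,
                                        const std::vector<EqClass> &eqs)
  {
    if (order_item.table == sort_table)
      return &order_item;                     // usable as-is
    for (const EqClass &eq : eqs)
    {
      bool has_item= std::any_of(eq.members.begin(), eq.members.end(),
                                 [&](const Field &f)
                                 { return same_field(f, order_item); });
      if (!has_item)
        continue;
      for (const Field &f : eq.members)
        if (f.table == sort_table)
          return &f;                          // an equal column of the sorted table
    }
    return nullptr;
  }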
When simplify_joins() converts an outer join to an inner join, it should
reset the value of TABLE::dep_tables. This is needed because the
function may have already set TABLE::dep_tables according to the outer
join dependency.
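A minimal sketch of the idea with hypothetical, simplified structures (not the server's TABLE_LIST handling): the dependency bitmap that the outer join imposed on its inner table must be cleared when the join becomes inner, otherwise later planning still treats the table as dependent.

  #include <cstdint>

  struct TableRef
  {
    uint64_t map;          // bit identifying this table
    uint64_t dep_tables;   // tables this table depends on
    bool     outer_join;   // true while it is the inner side of an outer join
  };

  void convert_outer_to_inner(TableRef &inner)
  {
    if (!inner.outer_join)
      return;
    inner.outer_join= false;
    inner.dep_tables= 0;   // forget the dependency imposed by the outer join;
                           // dependencies from other sources are handled
                           // separately in the real code
  }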
Clang warns on this code because it is memsetting over a vtable contained in a
struct in the best_positions array. The diagnostic text is:
mariadb/sql/sql_select.cc:24462:10: error: destination for this 'memset' call is
a pointer to class containing a dynamic class 'Duplicate_weedout_picker'; vtable
pointer will be overwritten [-Werror,-Wdynamic-class-memaccess]
memset(best_positions, 0, sizeof(POSITION) * (table_count + 1));
~~~~~~ ^
Patch contributed by David Gow.
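One common way to avoid this class of warning is sketched below as a standalone example (a simplified stand-in for POSITION, not necessarily what the contributed patch does): reset the array elements through construction/assignment instead of memset, so the embedded vtable pointer is never overwritten.

  struct Picker { virtual ~Picker() {} };   // has a vtable, like Duplicate_weedout_picker

  struct Position                           // simplified stand-in for POSITION
  {
    double records_read= 0;
    double read_time= 0;
    Picker picker;                          // embedded dynamic class
  };

  void reset_positions(Position *best_positions, unsigned table_count)
  {
    // memset(best_positions, 0, sizeof(Position) * (table_count + 1));
    //   -> clang: -Wdynamic-class-memaccess (would overwrite the vtable pointer)
    for (unsigned i= 0; i < table_count + 1; i++)
      best_positions[i]= Position();        // runs constructors/assignment instead
  }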
Add special treatment for temporal values in
create_tmp_field_from_item().
The old code only did this when result_type() was STRING_RESULT,
but Item_cache_temporal::result_type() is INT_RESULT.
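A sketch of the decision only, with assumed enum and field names rather than the server's API: test "is this a temporal value?" before switching on result_type(), because a cached temporal item reports INT_RESULT even though it must get a temporal column in the temporary table.

  enum Result_type { STRING_RESULT, INT_RESULT, REAL_RESULT, DECIMAL_RESULT };
  enum Tmp_field   { TMP_TEMPORAL, TMP_VARCHAR, TMP_BIGINT, TMP_DOUBLE, TMP_DECIMAL };

  struct ItemInfo
  {
    Result_type result_type;
    bool        is_temporal;   // e.g. a cached DATETIME value with INT_RESULT
  };

  Tmp_field pick_tmp_field_type(const ItemInfo &item)
  {
    if (item.is_temporal)          // handle temporal values first, for any result_type()
      return TMP_TEMPORAL;
    switch (item.result_type)
    {
    case STRING_RESULT:  return TMP_VARCHAR;
    case INT_RESULT:     return TMP_BIGINT;
    case REAL_RESULT:    return TMP_DOUBLE;
    case DECIMAL_RESULT: return TMP_DECIMAL;
    }
    return TMP_VARCHAR;
  }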
- "Early NULLs filtering" optimization used to "peel off" Item_ref and
Item_direct_ref wrappers from an outside column reference before
adding "outer_table_col IS NOT NULL" into JOIN::outer_ref_cond.
- When this happened in a subquery that was evaluated in a post-GROUP-BY
context, attempt to evaluate JOIN::outer_ref_cond would fetch an
incorrect value of outer_table_col.
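A conceptual sketch with heavily simplified stand-ins (not the server's Item classes): the IS NOT NULL guard should be built over the reference wrapper itself, so the value is read through the wrapper's indirection at evaluation time; a post-GROUP-BY context can re-point that indirection, and peeling the wrapper off loses exactly that.

  struct Item            { virtual ~Item() {} virtual bool is_null() const= 0; };

  struct Item_field : Item
  {
    bool null_value= false;
    bool is_null() const override { return null_value; }
  };

  struct Item_ref : Item                  // the wrapper that used to be peeled off
  {
    Item **ref;                           // may be re-pointed, e.g. after GROUP BY
    explicit Item_ref(Item **r) : ref(r) {}
    bool is_null() const override { return (*ref)->is_null(); }
  };

  struct Item_func_isnotnull : Item
  {
    Item *arg;
    explicit Item_func_isnotnull(Item *a) : arg(a) {}
    bool is_null() const override { return false; }
    bool val_bool() const { return !arg->is_null(); }
  };

  // Build the early-NULLs-filtering guard from the wrapper, not from the
  // field it happens to point at right now.
  Item *make_outer_ref_guard(Item_ref *outer_ref)
  {
    return new Item_func_isnotnull(outer_ref);
  }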
The select mentioned in the bug attempted to create a temporary table
using the Aria storage engine. The table needs a primary key so that
duplicates can be removed. Unfortunately, this use case has a key longer
than allowed, so the temporary table got created without the key.
We must not allow materialization for the subquery if the total key
length or the number of key parts is greater than what the storage engine
supports.
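A minimal sketch of the guard, with assumed names and limits (not the actual handler API): before choosing materialization, compare the would-be unique key against what the temporary-table engine can support, and fall back to another strategy when it does not fit.

  #include <cstddef>

  struct EngineLimits
  {
    std::size_t max_key_length;   // longest key the engine accepts
    unsigned    max_key_parts;    // most columns allowed in one key
  };

  bool materialization_key_fits(std::size_t total_key_length,
                                unsigned key_parts,
                                const EngineLimits &tmp_engine)
  {
    return total_key_length <= tmp_engine.max_key_length &&
           key_parts        <= tmp_engine.max_key_parts;
  }

  // If this returns false, the optimizer should not pick subquery
  // materialization (the duplicate-removing key could not be created).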
When evaluating a row-based comparison like (X,Y) = (A,B), one should
first call bring_value() for the Item that returns the row value. If one
doesn't do that and just attempts to read the values of X and Y, one gets
stale values.
Semi-join/Materialization can take a row-based comparison apart and
make ref access from it. In that case, we need to call bring_value()
to get the index lookup components.
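A hypothetical sketch of the calling pattern (simplified types, not the Item_row interface): call bring_value() once to refresh all components of the row value, then read the individual elements; reading elements without that call returns whatever was cached from before.

  #include <cstddef>
  #include <vector>

  struct RowItem
  {
    std::vector<long> fresh;     // values for the current row (the source)
    std::vector<long> cached;    // what element() returns

    void bring_value() { cached= fresh; }            // refresh all components
    long element(std::size_t i) const { return cached[i]; }
  };

  // Building ref-access lookup keys from a decomposed (X,Y) = (A,B) comparison:
  void copy_ref_key_parts(RowItem &row, long *key_buf, std::size_t parts)
  {
    row.bring_value();                               // without this: stale values
    for (std::size_t i= 0; i < parts; i++)
      key_buf[i]= row.element(i);
  }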
Revert the patch for MDEV-9504.
It causes test failures, and an attempt to fix these causes more failures. The
source of all this is that the code in test_if_skip_sort_order() has
a peculiar way of treating the select_limit parameter:
the correct value is computed when the query plan is changed; in other cases,
we use an approximation that ignores the presence of a GROUP BY clause,
or JOINs, or both.
A patch that fixes all of the above would be too big to do in 10.1.
- Legacy code would set JOIN_TAB::limit only for EXPLAIN queries (this
variable is only used when producing EXPLAIN output)
- ANALYZE/SHOW EXPLAIN need to produce EXPLAIN output for non-EXPLAIN
queries, too, so we should always set JOIN_TAB::limit.
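A sketch of the behavioural change with assumed, simplified names (not the actual JOIN_TAB code): record the limit unconditionally instead of only under an "is EXPLAIN" check.

  struct JoinTab
  {
    unsigned long long limit= ~0ULL;   // "no limit" by default
  };

  void record_limit(JoinTab &tab, unsigned long long select_limit, bool /*is_explain*/)
  {
    // old: if (is_explain) tab.limit= select_limit;
    tab.limit= select_limit;           // new: always set it, so ANALYZE and
                                       // SHOW EXPLAIN can report it too
  }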
Undo the change in test_if_skip_sort_order() that set ref_key=-1 when
a variant of index_merge is used (it was made in the fix for MDEV-9021).
It turned out that the test_if_cheaper_ordering() call below assumes that
ref_key=-1 means "no index is used", that is, "an inefficient full table
scan is done".
This is not the same as index_merge; index_merge can actually be quite
efficient. So now ref_key=MAX_KEY denotes the fact that some index is used,
just not any particular index.
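A small sketch of the convention (the value of MAX_KEY here is an assumption for illustration): -1 keeps meaning "no index at all, a full table scan", while MAX_KEY acts as a sentinel for "index-based access such as index_merge, with no single key number".

  static const unsigned MAX_KEY= 64;   // assumed key-number limit for the sketch

  int classify_ref_key(bool uses_index_merge, int single_index_no)
  {
    if (uses_index_merge)
      return (int) MAX_KEY;   // some index is used, but not any particular one
    return single_index_no;   // a concrete key number, or -1 for a full scan
  }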
GENERATED BY THE EXP() FUNCTION
When generating the error message for numeric overflow, pass a flag to
Item::print() that prevents it from expanding constant expressions and
parameters to the values they evaluate to.
For consistency, also pass the flag to Item::print() when
Item_func_spatial_collection::fix_length_and_dec() generates an error
message. It doesn't make any difference at the moment, since constant
expressions haven't been evaluated yet when this function is called.
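A hypothetical sketch of the mechanism (assumed flag name and a simplified item, not the server's Item::print() signature): the print routine checks a "don't expand data" bit in the query_type argument and emits the original expression text instead of the evaluated constant.

  #include <string>

  enum { QT_NO_DATA_EXPANSION= 1 };     // assumed flag name for this sketch

  struct ConstItem
  {
    std::string source_text;            // e.g. "exp(@a)"
    std::string evaluated_value;        // e.g. the huge computed number

    std::string print(unsigned query_type) const
    {
      if (query_type & QT_NO_DATA_EXPANSION)
        return source_text;             // show what the user wrote
      return evaluated_value;           // default: show the evaluated constant
    }
  };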
that was mistakenly merged from mysql-5.5.47
(it introduces valgrind failures in main.sp, because Field_varstring
columns are created as FIELD_NORMAL, and that causes Aria to
read bytes between the actual value length and the field's max length)
Problem:
At the end of the first execution, select_lex->prep_where is pointing to
a runtime-created object (a temporary table field). As a result, the
server exits trying to access an invalid pointer during the second
execution.
Analysis:
While optimizing the join conditions for the query, after the
permanent transformation, the optimizer makes a copy of the new
where conditions in select_lex->prep_where. "prep_where" is what
is used as the "where condition" for the query at the start of execution.
For the query in question, the "where" condition is actually pointing
to a field in the temporary table. As a result, for the second
execution the pointer is no longer valid, resulting in a server exit.
Fix:
At the end of the first execution, select_lex->where will have the
original item of the where condition.
Make prep_where the new place to which the original item of select->where
has to be rolled back, as sketched below.
Fixed in 5.7 with WL#7082 - Move permanent transformations from
JOIN::optimize to JOIN::prepare.
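A conceptual sketch of the invariant (simplified, assumed structures, not the server's SELECT_LEX): the permanent pointer must always be rolled back to the original item, while the per-execution pointer may be rewritten during one execution and is re-seeded from the permanent one before the next execution.

  struct Item;

  struct SelectLexSketch
  {
    Item *where= nullptr;        // may be replaced during one execution
    Item *prep_where= nullptr;   // must always hold the original, permanent item
  };

  void end_of_execution(SelectLexSketch &sl, Item *original_where)
  {
    // Roll back to the original item, never to a runtime-created object
    // (such as a temporary table field) that dies with the execution arena.
    sl.prep_where= original_where;
  }

  void start_of_next_execution(SelectLexSketch &sl)
  {
    sl.where= sl.prep_where;     // re-execute from the pristine condition
  }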
The patch for 5.5 includes the following backports from 5.6:
Bugfix for Bug12603141 - this makes the first execute statement in the testcase
pass in 5.5.
However, it was noted later in Bug16163596 that the above bugfix needed to
be modified. Although Bug16163596 is reproducible only with the changes done for
Bug12582849, we have decided to include the fix.
Considering that Bug12582849 is related to Bug12603141, the fix is
also included here. However, this results in Bug16317817, Bug16317685 and
Bug16739050, so the fixes for those three bugs are also part of this patch.
The bitmap implementation defines two template Bitmap classes: one is
optimized for 64-bit-wide bitmaps (the default), while the other is used for
all other widths.
In order to optimize the computations, the Bitmap<64> class defines its
own member functions for bitmap operations; the other one, however,
relies on mysys' bitmap implementation (mysys/my_bitmap.c).
Issue 1:
In the case of the non-64-bit Bitmap class, intersect() wrongly reset the
received bitmap while initializing a new local bitmap structure
(bitmap_init() clears the bitmap buffer); thus, the received bitmap was
getting cleared.
Fixed by initializing the local bitmap structure with a temporary
buffer and then copying the received bitmap into the initialized bitmap
structure.
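A sketch of the corrected approach (simplified stand-ins for MY_BITMAP/bitmap_init(), assuming at most 128 bits for brevity): initialize the local structure over a scratch buffer, copy the incoming bits into it, and only then intersect, so the caller's bitmap is never cleared as a side effect of initialization.

  #include <cstdint>
  #include <cstring>

  struct BitmapSketch { uint32_t *buf; unsigned n_bits; };

  // Like the real bitmap_init(), this clears the buffer it is given.
  void bitmap_init_sketch(BitmapSketch *map, uint32_t *buf, unsigned n_bits)
  {
    map->buf= buf;
    map->n_bits= n_bits;
    std::memset(buf, 0, ((n_bits + 31) / 32) * sizeof(uint32_t));
  }

  void intersect_sketch(BitmapSketch *self, const uint32_t *other_bits,
                        unsigned n_bits)
  {
    uint32_t tmp_buf[4];                       // scratch storage, not the caller's
    BitmapSketch tmp;
    bitmap_init_sketch(&tmp, tmp_buf, n_bits); // safe: only tmp_buf is cleared
    std::memcpy(tmp_buf, other_bits, ((n_bits + 31) / 32) * sizeof(uint32_t));
    for (unsigned i= 0; i < (n_bits + 31) / 32; i++)
      self->buf[i]&= tmp.buf[i];               // intersect with the copy
  }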
Issue 2:
The non-64-bit Bitmap class was missing the Iterator, which caused a
compilation failure.
Also added a cmake variable to hold the MAX_INDEXES value when it is supplied
on the command line (e.g. cmake .. -DMAX_INDEXES=128U). Checks have
been put in place to trigger a build failure if the MAX_INDEXES value is
greater than 128.
Test modifications:
* Introduced include/have_max_indexes_[64|128].inc to facilitate
skipping of tests for which the output differs with different
MAX_INDEXES.
* Introduced include/max_indexes.inc, which gets modified by cmake
to reflect the MAX_INDEXES value used to build the server. This file
simply sets an mtr variable '$max_indexes' to the MAX_INDEXES
value, which is then consumed by the include files introduced above.
* Some tests (or portions of them) that depend on the MAX_INDEXES value
have been moved to separate test files.
Issue
-----
This problem occurs when varchar columns are used in an
internal temporary table. The type of the field is set
incorrectly to the generic FIELD_NORMAL type. This in turn
results in an inaccurate calculation of the record length.
Valgrind issues occur since initialization has not
happened for some bytes.
Fix
----
While creating the temporary table, the type of the field
needs to be set to FIELD_VARCHAR. This allows MyISAM
to calculate the record length accurately.
This fix is a backport of BUG#13350136.
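A minimal sketch of why the pack type matters for the record length (assumed layout, not the engine's real row format): a FIELD_NORMAL column is accounted for at its full maximum length, while a FIELD_VARCHAR column contributes a length prefix plus only the bytes actually used, so mislabelling a varchar leaves part of the record uninitialized.

  #include <cstddef>

  enum Field_pack_type { FIELD_NORMAL, FIELD_VARCHAR };

  std::size_t packed_length(Field_pack_type type,
                            std::size_t max_length,
                            std::size_t value_length)
  {
    if (type == FIELD_VARCHAR)
      return 2 + value_length;   // 2-byte length prefix + actual data
    return max_length;           // fixed-size storage: always max_length bytes
  }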