43233/55794.
mysql-test/r/change_user.result:
Don't use -1 integer wraparound. It used to work, but now we do what's
actually in the documentation. In the tests we now use DEFAULT or the
numeric equivalent (as we do in the 5.6 tests).
mysql-test/r/key_cache.result:
"Can't drop default key cache" is now an error, not a warning, for compatibility
with 5.6.
mysql-test/r/variables.result:
"Can't drop default key cache" is now an error, not a warning, for compatibility
with 5.6.
mysql-test/t/change_user.test:
Don't use -1 integer wraparound. It used to work, but now we do what's
actually in the documentation. In the tests we now use DEFAULT or the
numeric equivalent (as we do in the 5.6 tests).
mysql-test/t/key_cache.test:
"Can't drop default key cache" is now an error, not a warning, for compatibility
with 5.6.
mysql-test/t/variables.test:
"Can't drop default key cache" is now an error, not a warning, for compatibility
with 5.6.
sql/mysqld.cc:
0 is a legal (albeit magic) value: "drop key cache."
sql/set_var.cc:
bound_unsigned() can go now; it was just a kludge until things were done
The Right Way, which they now are.
"Can't drop default key cache" is now an error, not a warning, for compatibility
with 5.6.
tests/mysql_client_test.c:
Don't use -1 integer wraparound. It used to work, but now we do what's
actually in the documentation. In the tests we now use DEFAULT or the
numeric equivalent (as we do in the 5.6 tests).
Open issues:
- A better fix for #57688; Igor is working on this
- Test failure in index_merge_innodb.test; Igor promised to look at this
- Some InnoDB tests fail (need to merge with the latest XtraDB); Kristian promised to look at this.
- Failing tests: innodb_plugin.innodb_bug56143 innodb_plugin.innodb_bug56632 innodb_plugin.innodb_bug56680 innodb_plugin.innodb_bug57255
- Werror is disabled; should be enabled after the merge with XtraDB.
Analysis:
The send_data method of the result sink class used to collect
data statistics about materialized subqueries incorrectly assumed
that duplicate rows are removed prior to calling send_data. As
a result the collected statistics were wrong, which led to
an incorrect maximum number of keys in the Ordered_key buffer.
Solution:
Try to insert each row into the materialized temp table before
collecting statistics, and if the insertion results in a duplicate
row, do not count the current row.
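The counting rule can be illustrated with a small self-contained sketch; here
std::set stands in for the deduplicating materialized temp table, and all type
and member names are illustrative rather than the server's actual classes:

  #include <set>
  #include <vector>

  struct RowStats {
    std::set<std::vector<long>> temp_table;   // stands in for the materialized temp table
    size_t counted_rows = 0;                  // input for sizing the Ordered_key buffer

    void send_data(const std::vector<long> &row) {
      // Insert first; only a genuinely new row contributes to the statistics,
      // so duplicate rows no longer inflate the estimated number of keys.
      if (temp_table.insert(row).second)      // .second == false for a duplicate
        ++counted_rows;
    }
  };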
> revision-id: gshchepa@mysql.com-20100801181236-uyuq6ewaq43rw780
> parent: alexey.kopytov@sun.com-20100723115254-jjwmhq97b9wl932l
> committer: Gleb Shchepa <gshchepa@mysql.com>
> branch nick: mysql-5.1-security
> timestamp: Sun 2010-08-01 22:12:36 +0400
> Bug #54461: crash with longblob and union or update with subquery
>
> Queries may crash, if
> 1) the GREATEST or the LEAST function has a mixed list of
> numeric and LONGBLOB arguments and
> 2) the result of such a function goes through an intermediate
> temporary table.
>
> An Item that references a LONGBLOB field has max_length of
> UINT_MAX32 == (2^32 - 1).
>
> The current implementation of GREATEST/LEAST returns a REAL
> result for a mixed list of numeric and string arguments (this
> contradicts the current documentation; the contradiction
> was discussed and it was decided to update the documentation).
>
> The max_length of such a function call was calculated as a
> maximum of argument max_length values (i.e. UINT_MAX32).
>
> That max_length value of UINT_MAX32 was used as a length for
> the intermediate temporary table Field_double to hold
> GREATEST/LEAST function result.
>
> The Field_double::val_str() method call on that field
> allocates a String value.
>
> Since an allocation of String reserves an additional byte
> for zero-termination, the size of the String buffer was
> set to (UINT_MAX32 + 1), which caused an integer overflow:
> in effect, an empty buffer of size 0 was allocated.
>
> An initialization of the "first" byte of that zero-size
> buffer with '\0' caused a crash.
>
> The Item_func_min_max::fix_length_and_dec() has been
> modified to calculate max_length for the REAL result the
> same way as we do for arithmetic operators.
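The overflow the quoted message describes can be reproduced in isolation; the
snippet below is a standalone illustration, not server code:

  #include <cstdint>
  #include <cstdio>

  int main() {
    // max_length of an Item referencing a LONGBLOB field
    const uint32_t max_length = UINT32_MAX;
    // Reserve one extra byte for the terminating '\0': the 32-bit size wraps to 0.
    const uint32_t alloc_size = max_length + 1;
    std::printf("requested %u bytes\n", (unsigned) alloc_size);  // prints 0
    // Writing the "first" byte of such a zero-size buffer is what crashed.
    return 0;
  }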
mysql-test/r/func_misc.result:
Test case for bug #54461.
mysql-test/t/func_misc.test:
Test case for bug #54461.
sql/item_func.cc:
Bug #54461: crash with longblob and union or update with subquery
The Item_func_min_max::fix_length_and_dec() has been
modified to calculate max_length for the REAL result the
same way as we do for arithmetic operators.
If mysqltest dies, mtr waits up to 100 ms to see if mysqld dies too.
But in that case it should not care about an expected crash.
Fix: jump past the code that checks the expect file.
OPTIMIZE TABLE recreates the whole table. That is why the counter gets reset.
Making the next autoinc value persistent is a separate issue from resetting
the value after an OPTIMIZE TABLE. We already have a check for ALTER TABLE
and CREATE INDEX to preserve the value when the table is recreated. We should be able to
add an additional check for OPTIMIZE TABLE to preserve the next value.
rb://519 Approved by Jimmy Yang.
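A minimal sketch of that additional check, assuming a simple enumeration of
rebuild causes (both the enum and the function are illustrative, not InnoDB's
actual interfaces):

  // Table rebuilds that keep the next auto-increment value now include
  // OPTIMIZE TABLE in addition to ALTER TABLE and CREATE INDEX.
  enum class RebuildCause { CREATE_TABLE, ALTER_TABLE, CREATE_INDEX, OPTIMIZE_TABLE };

  bool preserve_next_autoinc(RebuildCause cause) {
    switch (cause) {
      case RebuildCause::ALTER_TABLE:      // already preserved before this change
      case RebuildCause::CREATE_INDEX:     // already preserved before this change
      case RebuildCause::OPTIMIZE_TABLE:   // the additional check proposed here
        return true;
      default:
        return false;                      // a fresh CREATE starts the counter over
    }
  }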
Registration of a pointer change when we assign it to another pointer that must be identical again after statement execution (PS/SP).
mysql-test/r/subselect.result:
Test suite.
mysql-test/t/subselect.test:
Test suite.
sql/sql_class.cc:
The procedure of the pointer registration.
sql/sql_class.h:
The procedure of the pointer registration.
sql/sql_lex.cc:
Registration of a pointer change when we assign it to another pointer that must be identical again after statement execution (PS/SP).
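A standalone sketch of why such changes must be registered; the real server
keeps a change list on the THD, and the types below are illustrative only. For
PS/SP the same parsed tree is executed repeatedly, so any pointer overwritten
during one execution has to be restored before the next one:

  #include <vector>

  struct Item {};

  struct ChangeList {
    struct Change { Item **place; Item *old_value; };
    std::vector<Change> changes;

    // Record the old value, then perform the assignment.
    void change_pointer(Item **place, Item *new_value) {
      changes.push_back({place, *place});
      *place = new_value;
    }
    // At the end of statement execution, restore every registered pointer.
    void rollback() {
      for (auto it = changes.rbegin(); it != changes.rend(); ++it)
        *it->place = it->old_value;
      changes.clear();
    }
  };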
Follow-up discussed with Reporter:
Avoid a hard shutdown after a test failure if it was caused by a server log warning
AND we are running under valgrind.
More general pick-up of valgrind summaries, since their order may apparently vary.
Do exit(1) if we did find valgrind summary warnings.
mysql-test/r/heap_btree.result:
Test of index over bit field in hash table.
mysql-test/r/heap_hash.result:
Test of index over bit field in hash table.
mysql-test/t/heap_btree.test:
Test of index over bit field in hash table.
mysql-test/t/heap_hash.test:
Test of index over bit field in hash table.
storage/heap/ha_heap.cc:
Adding bit field support for heap tables.
storage/heap/hp_create.c:
Adding bit field support for heap tables.
storage/heap/hp_hash.c:
Adding bit field support for heap tables.
When the sort buffer is low on memory, QUICK_INDEX_MERGE_SELECT creates a
temporary file where it stores the row ids that match the QUICK_SELECT ranges,
except for the clustered PK range, which is processed separately.
In init_read_record we check whether a temporary file is used and choose the
appropriate record access method. This does not take into account that the
temporary file contains only a partial result in the case of QUICK_INDEX_MERGE_SELECT
with a clustered PK range.
The fix is to always use rr_quick if QUICK_INDEX_MERGE_SELECT
with a clustered PK range is used.
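A hedged sketch of the corrected decision (types and names are illustrative,
not the actual init_read_record() interface):

  struct QuickSelect {
    bool is_index_merge;
    bool has_clustered_pk_range;
  };
  enum class ReadMethod { FROM_TEMPFILE, RR_QUICK };

  ReadMethod choose_read_method(bool have_tempfile, const QuickSelect *quick) {
    // With a clustered PK range the temp file holds only part of the result,
    // so its presence alone must not select the file-based reader.
    const bool partial_tempfile =
        quick && quick->is_index_merge && quick->has_clustered_pk_range;
    if (have_tempfile && !partial_tempfile)
      return ReadMethod::FROM_TEMPFILE;   // complete row-id set in the temp file
    return ReadMethod::RR_QUICK;          // let the quick select drive the reads
  }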
mysql-test/suite/innodb/r/innodb_mysql.result:
test case
mysql-test/suite/innodb/t/innodb_mysql.test:
test case
mysql-test/suite/innodb_plugin/r/innodb_mysql.result:
test case
mysql-test/suite/innodb_plugin/t/innodb_mysql.test:
test case
sql/opt_range.h:
added new method
sql/records.cc:
The fix is to always use rr_quick if QUICK_INDEX_MERGE_SELECT
with a clustered PK range is used.
Analysis:
Single-row subqueries are not considered expensive and are
evaluated both during EXPLAIN, in order to detect errors like
"Subquery returns more than 1 row", and during optimization to
perform constant optimization.
The cause of the failed assert is in JOIN::join_free, where we set
bool full= (!select_lex->uncacheable && !thd->lex->describe);
Thus for EXPLAIN statements full == FALSE, and as a result the call to
JOIN::cleanup doesn't call JOIN_TAB::cleanup which should have
called table->disable_keyread().
Solution:
Consider all kinds of subquery predicates as expensive.
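A minimal sketch of the solution, assuming a virtual "is expensive" hook on
items (illustrative types, not the exact server classes):

  struct Item {
    virtual bool is_expensive() const { return false; }
    virtual ~Item() = default;
  };

  struct Item_subselect : Item {
    // After the fix, every kind of subquery predicate reports itself as
    // expensive, so callers defer its evaluation to the execution phase
    // instead of running it on an incomplete EXPLAIN-time plan.
    bool is_expensive() const override { return true; }
  };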
> revision-id: alexey.kopytov@sun.com-20100824103548-ikm79qlfrvggyj9h
> parent: sunny.bains@oracle.com-20100816001222-xqc447tr6jwh8c53
> committer: Alexey Kopytov <Alexey.Kopytov@Sun.com>
> branch nick: 5.1-security
> timestamp: Tue 2010-08-24 14:35:48 +0400
> message:
> Bug #55568: user variable assignments crash server when used
> within query
>
> The server could crash after materializing a derived table
> which requires a temporary table for grouping.
>
> When destroying the temporary table used to execute a query for
> a derived table, JOIN::destroy() did not clean up Item_fields
> pointing to fields in the temporary table. This led to
> dereferencing a dangling pointer when printing out the items
> tree later in the outer SELECT.
>
> The solution is an addendum to the patch for bug37362: in
> addition to cleaning up items in tmp_all_fields3, do the same
> for items in tmp_all_fields1, since now we have an example
> where this is necessary.
sql/field.cc:
Make sure field->table_name is not set to NULL in
Field::make_field() to avoid assertion failure in
Item_field::make_field() after cleaning up items
(the assertion fired in udf.test when running
the test suite with the patch applied).
sql/sql_select.cc:
In addition to cleaning up items in tmp_all_fields3, do the
same for items in tmp_all_fields1.
Introduce a new helper function to avoid code duplication.
sql/sql_select.h:
Introduce a new helper function to avoid code duplication in
JOIN::destroy().
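A hedged sketch of the helper mentioned above (container and item types are
illustrative): the same loop is applied to both field lists in JOIN::destroy(),
so no Item_field keeps a dangling pointer into the freed temporary table:

  #include <vector>

  struct Item {
    virtual void cleanup() {}   // drops references into the tmp table's fields
    virtual ~Item() = default;
  };

  static void cleanup_item_list(std::vector<Item *> &fields) {
    for (Item *item : fields)
      item->cleanup();
  }

  // JOIN::destroy() would then call it for both lists:
  //   cleanup_item_list(tmp_all_fields1);
  //   cleanup_item_list(tmp_all_fields3);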
Analysis:
This is another instance of the problem fixed in LP BUG#675981 -
evaluation of subqueries during EXPLAIN when the subquery plan
is incomplete because JOIN::optimize() generally doesn't create
complete execution plans for EXPLAIN statements.
In this case the call path is:
mysql_explain_union -> outer_join.exec -> outer_join.init_execution ->
create_sort_index -> filesort -> find_all_keys ->
SQL_SELECT::skip_record -> outer_where_clause.val_int -> ...
-> subselect_join.exec -> ... -> sub_select_cache
When calling sub_select_cache JOIN_TAB::cache is NULL because the cache
objects are not created for EXPLAIN statements.
Solution:
Delay the call to init_execution() until after all EXPLAIN-related processing
is completed. Thus init_execution() is not called at all during EXPLAIN.
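A hedged sketch of the resulting control flow (names are illustrative): EXPLAIN
output is produced without ever calling init_execution(), so no half-initialized
cache can be reached through the EXPLAIN code path:

  struct Join {
    void init_execution() { /* create sort files, join caches, ... */ }
    void exec()           { /* run the plan */ }
  };

  void run_union(Join &outer_join, bool is_explain) {
    if (is_explain) {
      /* print the plan; init_execution() is not called at all */
      return;
    }
    outer_join.init_execution();   // delayed until we really execute
    outer_join.exec();
  }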
Cause:
The optimize() phase for the subquery selected to use join buffering by setting
JOIN_TAB::next_select= sub_select_cache in make_join_readinfo. However, the call
to check_join_cache_usage() from make_join_readinfo didn't create the corresponding
JOIN_CACHE_BNL object because of the condition:
  if ((options & SELECT_DESCRIBE) ||
      (((tab->cache= new JOIN_CACHE_BNL(join, tab, prev_cache))) &&
       !tab->cache->init()))
Since EXPLAIN for subqueries runs regular execution, the constant predicates whose
evaluation was delayed until the exec() phase were evaluated during EXPLAIN.
As a result the outer JOIN::exec called JOIN::exec for the subquery while the
subquery execution plan was not properly created, which resulted in a failed assert.
Fix:
The patch blocks evaluation of constant expensive conditions during EXPLAIN. Notice
that these conditions are "constant" with respect to the outer query, thus in
general they could be arbitrarily expensive, which may result in very slow EXPLAINs.
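A small sketch of the guard the fix describes (illustrative names): a constant
condition is pre-evaluated only when it is cheap or the statement is not an
EXPLAIN:

  struct Item {
    virtual bool is_expensive() const { return false; }
    virtual ~Item() = default;
  };

  // Expensive constant predicates (typically subqueries) are left for the
  // execution phase when we are only producing EXPLAIN output.
  bool may_evaluate_now(const Item &cond, bool is_explain) {
    return !(is_explain && cond.is_expensive());
  }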
and related small fixes.
mysql-test/t/user_var.test:
test for bug
sql/field_conv.cc:
From the C standard, memcpy() has undefined behaviour if to->ptr==from->ptr
sql/item_func.cc:
In the case of BUG#56138, entry->value == ptr, in which case memcpy()
has undefined behaviour per the C standard.
sql/sql_select.cc:
Work around a bug in old gcc
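A standalone sketch of the safe pattern (the function name is illustrative):
memcpy() with identical, or otherwise overlapping, source and destination is
undefined behaviour, so the copy is either skipped or done with memmove():

  #include <cstring>

  static void store_value(char *dst, const char *src, size_t len) {
    if (dst == src)
      return;                      // the BUG#56138 case: memcpy here would be UB
    std::memmove(dst, src, len);   // memmove tolerates overlapping buffers
  }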
The bug happened when the BKA join algorithm used an incremental buffer
and some of the fields over which access keys were constructed
- were allocated in the previous join buffers
- were non-nullable
- belonged to inner tables of outer joins.
For such fields an offset to the field value in the record is saved
in the postfix of the record, and a zero offset indicates that the value
is null. Before the key using the field value is constructed, the
value is read into the corresponding field of the record buffer and
the null bit is set for the field if the offset is 0. However, if
the field is non-nullable, table->null_row must be set to 1
for null values and to 0 for non-null values to ensure correct reading
of the value from the record buffer.
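A hedged, simplified sketch of that rule (types are illustrative): a zero
offset in the record postfix means NULL, and since a non-nullable field has no
null bit of its own, the NULL-ness is signalled through table->null_row, which
must also be cleared for non-null values:

  #include <cstring>

  struct Table { bool null_row = false; };
  struct Field {
    Table *table;
    unsigned char value[8];
    void read_from_buffer(const unsigned char *src, size_t len) {
      std::memcpy(value, src, len < sizeof(value) ? len : sizeof(value));
    }
  };

  void read_bka_key_field(Field &field, const unsigned char *record,
                          size_t offset, size_t length) {
    field.table->null_row = (offset == 0);   // set for NULL, cleared otherwise
    if (offset != 0)
      field.read_from_buffer(record + offset, length);
  }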
This is a backport of the fix for
MySQL BUG#52317: Assertion failing in Field_varstring::store () at field.cc:6833
The original comment by Oystein is:
In order for EXPLAIN to print const-refs, a store_key_const_item object
is created. This is different from normal execution of subqueries, where
a temporary store_key_item object is used instead. The problem is that
EXPLAIN will execute subqueries. This leads to a scenario where a
store_key_const_item object is told to write to its underlying field.
This results in a failing assert, since the write set of the underlying
table does not reflect this.
The resolution is to do the same trick as in store_key_item::copy_inner():
that is, temporarily change the write set to allow writes to all columns.
This is only necessary in the debug version, since the non-debug version does
not contain asserts on write_set.
sql/sql_select.h:
Temporarily change write_set in store_key_const_item::copy_inner() to
allow initialization of underlying field. This is necessary since
subqueries are executed for EXPLAIN. (For normal execution,
store_key_item::copy_inner is used.)
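A hedged sketch of that "widen the write set around the copy" trick (the bitmap
type and the guard are illustrative, not the server's actual column-map API):

  struct Table {
    unsigned long long write_set = 0;   // bit i set => column i may be written
  };

  struct WriteAllColumnsGuard {
    Table *table;
    unsigned long long saved;
    explicit WriteAllColumnsGuard(Table *t) : table(t), saved(t->write_set) {
      table->write_set = ~0ULL;          // temporarily allow writes to all columns
    }
    ~WriteAllColumnsGuard() { table->write_set = saved; }   // restore on scope exit
  };

  // store_key_const_item::copy_inner() would wrap the store into the field in
  // such a guard, so the debug-build assert on write_set no longer fires when
  // EXPLAIN executes the subquery.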
The condition that was supposed to check whether a join table
is an inner table of a nested outer join or semi-join was not
quite correct in the code of the function check_join_cache_usage.
That's why some queries with nested outer joins triggered
an assertion failure.
Encapsulated this condition in a new method called
JOIN_TAB::is_nested_inner and provided proper code for it.
Also corrected a bug in the code of check_join_cache_usage()
that caused join buffers other than the first one to be downgraded
from levels 5 and 7 to levels 4 and 6 respectively.
anticipate different execution paths resulting in different
thd->proc_info.
- Fixed subselect_cache to contain correct results. The results
are currently wrong in 5.3, but are correct in 5.2 and 5.3-mwl89.
The cause of the bug was twofold:
1. Incorrect detection of whether a table is the first one in a query plan -
"used_table & 1" actually checks whether used_table is the table with number 1.
2. Missing logic to delay the evaluation of (expensive) constant conditions
until the execution phase.
The patch:
- removes the incorrect treatment of expensive predicates from make_cond_for_table,
and lets the caller decide when to evaluate expensive predicates.
- saves expensive constant conditions in JOIN::exec_const_cond,
which is evaluated once at the beginning of JOIN::exec.
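A hedged sketch of the exec_const_cond part of the patch (the member name
follows the description above; everything else is illustrative):

  struct Item {
    virtual bool is_expensive() const { return false; }
    virtual long long val_int() { return 1; }
    virtual ~Item() = default;
  };

  struct Join {
    Item *exec_const_cond = nullptr;

    void optimize(Item *const_cond) {
      if (const_cond && const_cond->is_expensive())
        exec_const_cond = const_cond;   // defer: may contain a subquery
    }

    bool exec() {
      if (exec_const_cond && exec_const_cond->val_int() == 0)
        return false;                   // constant FALSE WHERE clause: empty result
      /* ... run the join ... */
      return true;
    }
  };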
Problem: crash in the Item_float constructor on a DBUG_ASSERT due
to a non-null-terminated string parameter.
Fix: make Item_float::Item_float safe for non-null-terminated parameters:
- use a temporary buffer when generating the error
modified:
@ mysql-test/r/xml.result
@ mysql-test/t/xml.test
@ sql/item.cc
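A standalone sketch of the temporary-buffer idea (the function name and message
text are illustrative): the incoming string carries an explicit length and need
not be NUL-terminated, so it is copied into a bounded, terminated buffer before
being used in a printf-style error message:

  #include <cstdio>
  #include <cstring>

  static void report_bad_double(const char *str, size_t length) {
    char buf[64];
    size_t n = length < sizeof(buf) - 1 ? length : sizeof(buf) - 1;
    std::memcpy(buf, str, n);
    buf[n] = '\0';                       // now safe to pass to "%s"
    std::fprintf(stderr, "Incorrect value: '%s'\n", buf);
  }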
The ESCAPE argument might be an empty string, which leads
to a server crash under some circumstances.
The fix:
- added a check that the ESCAPE argument result is not an empty string
mysql-test/r/ctype_latin1.result:
test case
mysql-test/t/ctype_latin1.test:
test case
sql/item_cmpfunc.cc:
- added a check that the ESCAPE argument result is not an empty string
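A minimal sketch of the added check (illustrative signature, not the server's
Item API): the evaluated ESCAPE value must contain at least one character
before its first byte is read, and an empty result now yields an error instead
of a crash:

  #include <string>

  static bool get_escape_char(const std::string &escape_val, char *out) {
    if (escape_val.empty())
      return false;        // caller reports an error instead of dereferencing
    *out = escape_val[0];
    return true;
  }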
for --list_files in mysqltest.
client/mysqltest.cc:
Backported --replace_result for --list_files.
mysql-test/r/mysqltest.result:
updated test.
mysql-test/t/mysqltest.test:
added test for replace_result on list_files.
When pushing the condition for a table in the function
JOIN_TAB::make_scan_filter, the optimizer must not push
conditions from the WHERE clause if the table is an inner table
of an outer join.
The condition over outer tables extracted from the ON expression
of an outer join must be ANDed only to the condition pushed to the
first inner table of this outer join.
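A hedged, self-contained sketch of those two rules (all names invented; the
pushed condition is modelled as a simple conjunction list):

  #include <vector>

  struct Cond { const char *text; };

  std::vector<const Cond *> build_scan_filter(const Cond *where_cond,
                                              const Cond *on_outer_cond,
                                              bool is_inner_of_outer_join,
                                              bool is_first_inner) {
    std::vector<const Cond *> filter;            // conjunction of pushed conditions
    if (!is_inner_of_outer_join && where_cond)
      filter.push_back(where_cond);              // WHERE is never pushed to inner tables
    if (is_inner_of_outer_join && is_first_inner && on_outer_cond)
      filter.push_back(on_outer_cond);           // ON part over outer tables: first inner table only
    return filter;
  }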
Nested outer joins cannot use flat join buffers. So if join_cache_level
is set to 1, then no join algorithm employing join buffers can be used
for nested outer joins.