The problem was that XtraDB defaults innodb_purge_threads to 1 (plain
InnoDB defaults to 0).
The test sets a special debug variable and relies on it to force a
purge to happen. But with background purge threads this does not work:
the debug code was not written to handle that case, so the test
occasionally times out waiting for the purge to occur.
Fix by explicitly setting innodb_purge_threads=0 for this test.
Change the default of innodb_use_fallocate to FALSE, due to bugs in older Linux kernels (posix_fallocate() does not always guarantee that the resulting file size is the one requested).
The official Debian Wheezy MySQL packages have versions like 5.5.30+dfsg-xxx.
Such a version sorts higher than 5.5.30-yyy, so apt prefers it.
Therefore use 5.5.30+maria-yyy instead, which sorts higher still and can
be pulled in automatically by apt.
Also included are a couple of fixes for test failures in buildbot.
This fix ensures that, by default, LOAD DATA INFILE will not generate the error:
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage..."
mysql-test/suite/sys_vars/r/max_binlog_cache_size_basic.result:
Updated test case
mysql-test/suite/sys_vars/r/max_binlog_stmt_cache_size_basic.result:
Updated test case
sql/sys_vars.cc:
Increase default value of max_binlog_cache_size and max_binlog_stmt_cache_size to ulonglong_max.
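For illustration, a sketch of the failure this avoids (the table t1, the
file path, and the assumptions that t1 is transactional and binary
logging is enabled are all hypothetical):

  -- Hypothetical sketch; t1 is assumed to be an InnoDB table and
  -- binary logging is assumed to be enabled.
  LOAD DATA INFILE '/tmp/big.csv' INTO TABLE t1;
  -- Before this change, a load larger than the old default of
  -- max_binlog_cache_size could fail with:
  --   Multi-statement transaction required more than
  --   'max_binlog_cache_size' bytes of storage...
  -- With the default raised to ulonglong_max this no longer happens
  -- unless the limit is lowered explicitly.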
Fulltext search was initialized for all MATCH ... AGAINST items
at the end of JOIN::optimize(). But since 5.3, derived tables
are initialized lazily on first use, very late in sub_select().
Skip Item_func_match::init_search initialization if the corresponding
table isn't open yet; repeat fulltext initialization for all
not-yet-initialized MATCH ... AGAINST items after creating derived tables.
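A hedged sketch of the affected query shape (t1 and its fulltext-indexed
column f are made up): the MATCH ... AGAINST item belongs to a derived
table that is only materialized on first use, inside sub_select():

  -- Hypothetical example; t1.f is assumed to have a FULLTEXT index.
  SELECT * FROM
    (SELECT f, MATCH (f) AGAINST ('word') AS score FROM t1) AS dt
  WHERE dt.score > 0;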
Initialize join->top_join_tab_count in sync with the join->join_tab=stat
assignment; otherwise a query can be killed in between and the join_tabs
won't be deleted (JOIN::cleanup won't call JOIN_TAB::cleanup).
Analysis:
The reason for the inefficient plan was that Item_subselect::is_expensive()
didn't detect the special case when a subquery was optimized, but had no
join plan because it either has no tables, or its tables have been optimized
away, or the optimizer detected that the result set is empty.
Solution:
Identify the special cases above in Item_subselect::is_expensive(),
and consider such degenerate subqueries inexpensive.
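For example (made-up tables), subqueries like these have no real join
plan and should be treated as inexpensive:

  -- Hypothetical examples of degenerate subqueries:
  SELECT * FROM t1 WHERE t1.a = (SELECT 5);                        -- no tables
  SELECT * FROM t1 WHERE t1.a IN (SELECT a FROM t2 WHERE 1 = 0);   -- provably empty result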
This bug was introduced by the patch for WL#3220.
If the memory allocated for the tree to store unique elements
to be counted is not big enough to include all of them then
an external file is used to store the elements.
The unique elements are guaranteed not to be nulls. So, when
reading them from the file, we don't have to care about the null
flags of the read values. However, we should remove the flag
at the very beginning of the process. If we don't do that, and
the last value written into the record buffer for the field
whose distinct values need to be counted happens to be null,
then all values read from the file are considered to be nulls
and are not counted.
The fix does not remove a possible null flag for the read values.
Rather, it just counts the values in the same way it was done
before WL#3220.
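A hedged repro sketch; the table and data are made up, and the in-memory
tree is assumed here to be bounded by max_heap_table_size:

  -- Hypothetical sketch: with enough distinct values in t1.a the tree of
  -- unique elements spills to a file; t1.a is nullable and its last value
  -- written into the record buffer happens to be NULL.
  SET SESSION max_heap_table_size = 16384;
  SELECT COUNT(DISTINCT a) FROM t1;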
In some cases, when using views, the optimizer incorrectly determined
possible join orders for queries with nested outer and inner joins.
This could lead to invalid execution plans for such queries.
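A hedged sketch of the kind of query involved (the view v1 and tables
t1..t4 are made up): a view combined with nested outer and inner joins:

  -- Hypothetical query shape only:
  CREATE VIEW v1 AS SELECT t2.a FROM t2 LEFT JOIN t3 ON t2.a = t3.a;
  SELECT *
  FROM t1 LEFT JOIN (v1 JOIN t4 ON v1.a = t4.a) ON t1.a = v1.a;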
Additional fixes for possible overflows in length-related
calculations in 'spatial' implementations.
Checks added to the ::get_data_size() methods.
max_n_points decreased so that the maximum object size stays below 2GB. An
object of that size is practically inoperable anyway.
Item_func_make_set wasn't taking into account the first argument when
calculating maybe_null.
sql/item_strfunc.cc:
rewrite Item_func_make_set, removing separate storage of the first argument
sql/item_strfunc.h:
rewrite Item_func_make_set, removing separate storage of the first argument
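For example, a NULL first argument alone makes the result NULL, so
maybe_null must take it into account:

  SELECT MAKE_SET(NULL, 'a', 'b');   -- returns NULL because of the first argument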
With decimals=NOT_FIXED_DEC it is possible to have 'decimals' larger
than 'max_length'; this is not an error for temporal functions.
But when Item_func_numhybrid converts the value to DECIMAL_RESULT,
it must limit 'decimals' to a valid scale of a decimal number.
Flip the switch and create Item_cache based on the argument's cmp_type, not the argument's result_type().
Fix subselect_engine to calculate cmp_type correctly.
sql/item_subselect.h:
mdev:4284
mysql-test/r/keywords.result:
Test that option works as table/column/variable
mysql-test/suite/funcs_1/r/storedproc.result:
OPTION is now a valid identifier
mysql-test/suite/funcs_1/t/storedproc.test:
OPTION is now a valid identifier
mysql-test/t/keywords.test:
Test that option works as table/column/variable
sql/sql_yacc.yy:
OPTION is now a valid identifier
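For example, after this change statements like the following parse,
where previously OPTION was rejected as a reserved word (table and
column names are just for illustration):

  CREATE TABLE option (option INT);
  SELECT option FROM option;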
get_datetime_value() should not double-cache its own Item_cache_temporal items,
but it *should* cache other Item_cache items, such as Item_cache_str.
sql/item.h:
shortcut, to avoid going through the switch in Item::cmp_type()
sql/item_cmpfunc.cc:
even if the item is Item_cache_str - it still needs to be converted and cached.
sql/item_timefunc.h:
all descendants of Item_temporal_func always have cmp_type==TIME_RESULT,
even Item_date_add_interval, which might have field_type == MYSQL_TYPE_STRING.
The bug was found by Alyssa Milburn.
If the number of points of a geometry feature read from the
binary representation is greater than 0x10000000, then
(uint32) (num_points * 16) will truncate the high-order bits,
which leads to various errors.
Fixed by an additional check: if (num_points > max_n_points).
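To illustrate the truncation (pure 32-bit arithmetic, shown here in SQL
for convenience):

  SELECT (0x10000001 * 16) & 0xFFFFFFFF;   -- 16: the high bits of num_points * 16 are lost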
This is a bug in the legacy code. It did not manifest itself because
it was masked by other bugs that were fixed by the patches for
mdev-4172 and mdev-4177.