two tests still fail:
main.innodb_icp and main.range_vs_index_merge_innodb
call records_in_range() with both range ends being open
(which triggers an assert)
The bug could cause a crash when the server executed a query with
ORDER BY and sort_buffer_size was set to a small enough value.
It happened because the small sort buffer did not allow all
merge buffers to be allocated in it.
Made sure that the allocated sort buffer would be big enough
to contain all possible merge buffers.
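For illustration, a hypothetical query of this shape (table and column
names are invented) could hit the crash before the fix, when the filesort
needed more merge buffers than the small sort buffer could hold:

  SET SESSION sort_buffer_size = 32768;          -- small sort buffer
  SELECT * FROM t1 ORDER BY non_indexed_column;  -- filesort needing several merge buffers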
Problem description: The --ssl-key value is not validated; any bogus
text can be assigned to --ssl-key, it is not verified that the file
exists, and, more importantly, the client is still allowed to connect
to mysqld.
Fix: Added proper validation checks for --ssl-key.
Note:
1) Documentation changes are required for 5.1, 5.5, 5.6 and trunk in the
sections listed below; the details are:
http://dev.mysql.com/doc/refman/5.6/en/ssl-options.html#option_general_ssl
and
REQUIRE SSL section of
http://dev.mysql.com/doc/refman/5.6/en/grant.html
2) A client using the option '--ssl' should be able to get an SSL
connection. This will be implemented as part of a separate fix in 5.6
and trunk.
The following reasons caused mismatches:
- different handling of invalid values;
- different CAST results with fractional seconds;
- microseconds support in MariaDB;
- different algorithm of comparing temporal values;
- differences in error and warning texts and codes;
- different approach to truncating datetime values to time;
- additional collations;
- different record order for queries without ORDER BY;
- MySQL bug#66034.
More details in MDEV-369 comments.
Problem description:
A table 't' is created with two columns and a compound index on both
columns, under the InnoDB/MyISAM engine on the remote machine. On the
local machine the same table is created under the FEDERATED engine.
A SELECT with a WHERE clause combining conditions with the 'AND'
operator gives wrong results on the local machine.
Analysis:
The given query is wrongly transformed by the
ha_federated::create_where_from_key() function, and the transformed query
is sent to the remote machine. Hence the local machine shows wrong results.
Given query "select c1 from t where c1 <= 2 and c2 = 1;"
Query transformed, after ha_federated::create_where_from_key() function is:
SELECT `c1`, `c2` FROM `t` WHERE (`c1` IS NOT NULL ) AND
( (`c1` >= 2) AND (`c2` <= 1) ) and the same sent to real_query().
In the above the '<=' and '=' conditions were transformed to '>=' and
'<=' respectively.
ha_federated::create_where_from_key() behaves as follows:
The key_range has both a start_key and an end_key. The start_key
is used to get the "(`c1` IS NOT NULL )" part of the where clause; this
transformation is correct. The end_key is used to get "( (`c1` >= 2)
AND (`c2` <= 1) )", which is wrong: the given conditions ('<=' and '=')
are changed into the wrong conditions ('>=' and '<=').
The end_key contains {key = 0x39fa6d0 "", length = 10, keypart_map = 3,
flag = HA_READ_AFTER_KEY}
and store_length has the value '5'. Based on the store_length and length
values, the conditions are applied in the HA_READ_AFTER_KEY switch case.
The 'HA_READ_AFTER_KEY' switch case is applicable only to the last part
of the end_key; for the previous parts control goes to the
'HA_READ_KEY_OR_NEXT' case, where '>=' is added as the condition instead
of '<='.
Fix:
Updated the 'if' condition in the 'HA_READ_AFTER_KEY' case so that it
applies to all parts of the end_key, i.e. 'i > 0' will be used for the
end_key; hence it was added to the 'if' condition.
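A minimal sketch of the setup described above (host, database and key
names are illustrative):

  -- On the remote machine (InnoDB or MyISAM):
  CREATE TABLE t (c1 INT, c2 INT, KEY k (c1, c2)) ENGINE=InnoDB;

  -- On the local machine:
  CREATE TABLE t (c1 INT, c2 INT, KEY k (c1, c2)) ENGINE=FEDERATED
    CONNECTION='mysql://user@remote_host:3306/test/t';

  -- Returned wrong results on the local machine before the fix:
  SELECT c1 FROM t WHERE c1 <= 2 AND c2 = 1;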
mysql-test/suite/federated/federated.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_archive.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_bug_13118.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_bug_25714.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_bug_35333.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_debug.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_innodb.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_server.test:
modified the federated.inc file location
mysql-test/suite/federated/federated_transactions.test:
modified the federated.inc file location
mysql-test/suite/federated/include/federated.inc:
moved the file from federated suite to federated/include folder
mysql-test/suite/federated/include/federated_cleanup.inc:
moved the file from federated suite to federated/include folder
mysql-test/suite/federated/include/have_federated_db.inc:
moved the file from federated suite to federated/include folder
storage/federated/ha_federated.cc:
updated the 'if condition' in ha_federated::create_where_from_key()
function.
Backporting WL#5716, "Information schema table for InnoDB
buffer pool information". Backporting revisions 2876.244.113,
2876.244.102 from mysql-trunk.
rb://1175 approved by Jimmy Yang.
"ORDER BY" AND "LIMIT BY" CLAUSE
PROBLEM:
When a LIMIT clause is specified in a query along with
GROUP BY and ORDER BY, the optimizer chooses the wrong index,
thereby examining more rows than required.
Without the LIMIT clause, however, the optimizer chooses
the right index.
ANALYSIS:
For the query in question, the range optimizer chooses
the first index, as there is a range present (on 'a'). The optimizer
then checks for an index which would give records in sorted
order for the GROUP BY clause.
While checking, it chooses the second index (on 'c,b,a') based on
the LIMIT specified and the selectivity of
'quick_condition_rows' (the number of rows present in the range)
in the 'test_if_skip_sort_order' function.
But it fails to consider that an ORDER BY clause on a
different column will result in scanning the entire index, and
hence the number of rows estimated above is
wrong (which results in choosing the second index).
FIX:
Do not enforce the LIMIT clause in the call to
'test_if_skip_sort_order' if we are creating a temporary
table. Creation of a temporary table indicates that there will be
more post-processing, and hence all the rows will be needed.
This fix is backported from 5.6, where the problem was fixed as
part of the changes for WL#5558.
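A hypothetical query shape matching the analysis above (table, column and
index names are invented, not taken from the original bug report):

  CREATE TABLE t (a INT, b INT, c INT, KEY k1 (a), KEY k2 (c, b, a));
  -- Before the fix, the LIMIT could make the optimizer prefer k2 here,
  -- even though the ORDER BY needs a temporary table and hence all rows:
  SELECT c, b FROM t WHERE a > 10 GROUP BY c, b ORDER BY b LIMIT 1;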
mysql-test/r/subselect.result:
Changes for Bug#11762052 results in the correct number of rows.
sql/sql_select.cc:
Do not pass the actual 'limit' value if 'need_tmp' is true.
This is a follow-up patch for the bug, enabling the test
i_binlog.binlog_mysqlbinlog_file_write.test.
The test was disabled in mysql trunk and mysql 5.5 because in the
release build mysqlbinlog was not debug compiled whereas mysqld was.
Since the have_debug.inc script checks only whether mysqld is debug
compiled, the test was not being skipped on release builds.
We resolve this problem by creating a new inc file,
mysqlbinlog_have_debug.inc, which checks exclusively whether mysqlbinlog
is debug compiled; if not, it skips the test.
mysql-test/include/mysqlbinlog_have_debug.inc:
new inc file to check if mysqlbinlog is debug compiled.
Print the warning (note):
YEAR(x) is deprecated and will be removed in a future release. Please use YEAR(4) instead
on "CREATE TABLE ... YEAR(x)" or "ALTER TABLE MODIFY ... YEAR(x)", where x != 4
TABLE_LIST::check_single_table was made aware of the fact that a table
attached to a merged view can now be an (unopened) temporary table
(in 5.2 it was always a leaf table, or none in the case of several tables).
We set the correct cmp_context during preparation to avoid it being
changed later by Item_field::equal_fields_propagator.
(See MySQL bugs #57135 and #57692, encountered during merging.)
Virtual columns of ENUM and SET data types were not supported properly
in the original patch that introduced virtual columns into MariaDB 5.2.
The problem was that for any virtual column the patch used the
interval_id field of the definition of the column in the frm file as
a reference to the virtual column expression.
The fix stores the optional interval_id of the virtual column in the
extended header of the virtual column expression.
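A minimal sketch of an affected table definition (names and expression
are illustrative), using the MariaDB 5.2 virtual column syntax:

  CREATE TABLE t1 (
    a INT,
    e ENUM('small','large') AS (IF(a < 10, 'small', 'large')) VIRTUAL
  );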
Analysis:
The fix for bug lp:985667 implements the method Item_subselect::no_rows_in_result()
for all main kinds of subqueries. The purpose of this method is to be called from
return_zero_rows() and set Items to some default value in the case when a query
returns no rows. Aggregates and subqueries require special treatment in this case.
Every implementation of Item_subselect::no_rows_in_result() called
Item_subselect::make_const() to set the subquery predicate to its default value
irrespective of where the predicate was located in the query. Once the predicate
was set to a constant it was never executed.
At the same time, the JOIN object of the fake select for UNIONs (the one
used for the final result of the UNION) is created only after all
subqueries in the UNION have been executed. Since we set the subquery to
be constant, it was never executed, and the corresponding JOIN was never
created.
In order to decide whether the result of NOT IN is NULL or FALSE, Item_in_optimizer
needs to check if the subquery result was empty or not. This is where we got the
crash, because subselect_union_engine::no_rows() checks for
unit->fake_select_lex->join->send_records, and the join object was NULL.
Solution:
If a subquery is in the HAVING clause it must be evaluated in order to know its
result, so that we can properly filter the result records. Once subqueries in the
HAVING clause are executed even in the case of no result rows, this specific
crash will be solved, because the UNION will be executed, and its JOIN will be
constructed. Therefore the fix for this crash is to narrow the fix for lp:985667,
and to apply Item_subselect::no_rows_in_result() only when the subquery predicate
is in the SELECT clause.
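A hypothetical shape of a crashing query before this fix (names are
invented): a NOT IN predicate over a UNION subquery in the HAVING clause
of a query that returns no rows:

  SELECT MAX(a) FROM t1 GROUP BY a
  HAVING 'x' NOT IN (SELECT b FROM t2 UNION SELECT b FROM t3);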
Analysis:
Queries with implicit grouping (there is aggregate, but no group by)
follow some non-obvious semantics in the case of empty result set.
Aggregate functions produce some special "natural" value depending on
the function. For instance MIN/MAX return NULL, COUNT returns 0.
The complexity comes from non-aggregate expressions in the select list.
If the non-aggregate expression is a constant, it can be computed, so
we should return its value, however if the expression is non-constant,
and depends on columns from the empty result set, then the only meaningful
value is NULL.
The cause of the wrong result was that for subqueries the optimizer
didn't distinguish between constant and non-constant ones in the case of
an empty result with implicit grouping.
Solution:
In all implementations of Item_subselect::no_rows_in_result() check if the
subquery predicate is constant. If it is constant, do not set it to the
default value for implicit grouping, instead let it be evaluated.
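An illustrative example (names invented): with an empty table t1, MIN(a)
below yields NULL, the constant subquery should be evaluated (yielding 1),
and the non-constant expression over t1's columns should be NULL:

  SELECT MIN(a), (SELECT 1) AS const_sub, a + 1 AS non_const FROM t1;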
Problem
========
Replication breaks if the event length exceeds the size of the master
Dump thread's max_allowed_packet.
This failure occurs because the event length is larger than the
max_allowed_packet: adding the max_event_header length to the event
length makes it exceed the max_allowed_packet of the Dump thread.
This causes the Dump thread to break replication and throw an error.
That can happen e.g. with row-based replication in an Update_rows event.
Fix
====
The problem is fixed in 2 steps:
1.) The Dump thread's limit for reading events is increased to the upper
limit, i.e. the Dump thread reads whatever gets logged in the binary log.
2.) On the slave side we increase the max_allowed_packet for the
slave's threads (IO/SQL) to 1GB.
This is done using the new server option slave_max_allowed_packet,
which lets the DBA regulate the max_allowed_packet of the
slave threads (IO/SQL), and facilitates the sending of
large packets from the master to the slave.
Large packets are thus received by the slave and applied successfully.
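For example, the new option can be set on the slave; a sketch, assuming
the variable is a dynamic global one and that the replication threads
pick up the new value on restart:

  SET GLOBAL slave_max_allowed_packet = 1073741824;  -- 1GB upper limit
  STOP SLAVE;
  START SLAVE;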
sql/log_event.cc:
After the fix, the max_allowed_packet is evaluated using the new
option slave_max_allowed_packet.
sql/log_event.h:
Added the new option in the log_event.h file.
sql/mysqld.cc:
Added a new option to the server.
sql/slave.cc:
Increased the session max_allowed_packet to a large value for the
slave's threads, i.e. not taking the global max_allowed_packet into
consideration.
sql/sql_repl.cc:
The dump thread's max_allowed_packet is set to the upper limit
which makes it independent and it now reads whatever gets
logged in the binary log.
Analysis:
When the method JOIN::choose_subquery_plan() decided to apply
the IN-TO-EXISTS strategy, it set the unit and select_lex
uncacheable flag to UNCACHEABLE_DEPENDENT_INJECTED unconditionally.
As a result, even if IN-TO-EXISTS injected non-correlated predicates,
the subquery was still treated as correlated.
Solution:
Set the subquery as correlated only if the injected predicate(s) depend
on the outer query.
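An illustrative case (names invented): with a constant left operand, the
predicate that IN-TO-EXISTS injects into the subquery does not refer to
the outer query, so after this fix the subquery stays non-correlated:

  SELECT * FROM t1 WHERE 5 IN (SELECT b FROM t2);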
Analysis:
When a subquery that needs a temp table is executed during
the prepare or optimize phase of the outer query, at the end
of the subquery execution all the JOIN_TABs of the subquery
are replaced by a new JOIN_TAB that selects from the temp table.
However that temp table has no corresponding TABLE_LIST.
Once EXPLAIN execution reaches its last phase, it tries to print
the names of the subquery tables through its TABLE_LISTs, but in
the case of this bug there is no such TABLE_LIST (it is NULL),
hence a crash.
Solution:
The fix is to block subquery evaluation inside
Item_func_like::fix_fields and Item_func_like::select_optimize()
using the Item::is_expensive() test.
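A hypothetical shape of an affected statement (names invented): an
EXPLAIN where the LIKE pattern is a subquery that needs a temporary
table, e.g. due to GROUP BY:

  EXPLAIN SELECT * FROM t1
  WHERE a LIKE (SELECT b FROM t2 GROUP BY b LIMIT 1);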
The optimizer fails when using an index with ST_Within(g, constant_poly).
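An illustrative query of the affected kind (names invented), assuming a
SPATIAL index on the 'g' column:

  SELECT id FROM t1
  WHERE ST_Within(g, GeomFromText('POLYGON((0 0,0 10,10 10,10 0,0 0))'));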
per-file comments:
mysql-test/r/gis-rt-precise.result
test result fixed.
mysql-test/r/gis-rtree.result
test result fixed.
mysql-test/suite/maria/r/maria-gis-rtree-dynamic.result
test result fixed.
mysql-test/suite/maria/r/maria-gis-rtree-trans.result
test result fixed.
mysql-test/suite/maria/r/maria-gis-rtree.result
test result fixed.
storage/maria/ma_rt_index.c
Use MBR_INTERSECT mode when optimizing the SELECT with ST_Within.
storage/myisam/rt_index.c
Use MBR_INTERSECT mode when optimizing the SELECT with ST_Within.
- In JOIN::exec(), make the having->update_used_tables() call before we've
made the JOIN::cleanup(full=true) call. The latter frees SJ-Materialization
structures, which correlated subquery predicate items attempt to walk afterwards.
This is a backport of the (unchanged) fix for MySQL bug #11764372, 57197.
Analysis:
When the outer query finishes its main execution and computes GROUP BY,
it needs to construct a new temporary table (and a corresponding JOIN) to
execute the last DISTINCT operation. At this point JOIN::exec calls
JOIN::join_free, which calls JOIN::cleanup -> TMP_TABLE_PARAM::cleanup
for both the outer and the inner JOINs. The call to the inner
TMP_TABLE_PARAM::cleanup sets copy_field = NULL, but not copy_field_end.
The final execution phase that computes the DISTINCT invokes:
evaluate_join_record -> end_write -> copy_funcs
The last function copies the results of all functions into the temp table.
copy_funcs walks over all functions in join->tmp_table_param.items_to_copy.
In this case items_to_copy contains both assignments to user variables.
The process of copying user variables invokes Item_func_set_user_var::check
which in turn re-evaluates the arguments of the user variable assignment.
This in turn triggers re-evaluation of the subquery, and ultimately the
use of copy_field.
However, the previous call to TMP_TABLE_PARAM::cleanup for the subquery
already set copy_field to NULL but not its copy_field_end. This results
in a null pointer access, and a crash.
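A hypothetical query shape matching this analysis (names invented): two
user variable assignments in the select list, one of them fed by a
subquery, with GROUP BY in the outer query and a final DISTINCT:

  SELECT DISTINCT
         @v1 := (SELECT MAX(x) FROM t2 WHERE t2.y = t1.a),
         @v2 := t1.b
  FROM t1
  GROUP BY t1.a, t1.b;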
Fix:
Set copy_field_end and save_copy_field_end to null when deleting
copy fields in TMP_TABLE_PARAM::cleanup().