ORDER BY computed col
GROUP BY implies ORDER BY in the MySQL dialect of SQL. Therefore, when an
index on the first table in the query is used, and that index satisfies
ordering according to the GROUP BY clause, the query optimizer estimates the
number of tuples that need to be read from this index. If there is a LIMIT
clause, table statistics on tables following this 'sort table' are employed.
There may, however, be a separate ORDER BY clause that mandates reading the
whole 'sort table' anyway, but the previous estimate was left untouched.
Fixed by removing the estimate from EXPLAIN output if GROUP BY is used in
conjunction with an ORDER BY clause that mandates using a temporary table.
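An illustrative query of the kind affected (a sketch with hypothetical table
and data, not the original test case):

  CREATE TABLE t1 (a INT, b INT, KEY (a));
  INSERT INTO t1 VALUES (1,10),(2,20),(3,30),(4,40);
  # GROUP BY a can be satisfied by the index on 'a', but ordering by the
  # computed column 's' still needs a temporary table and a full read of
  # the 'sort table', so a reduced 'rows' estimate would be misleading.
  EXPLAIN SELECT a, SUM(b) AS s FROM t1 GROUP BY a ORDER BY s LIMIT 2;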
The problem could be demonstrated with an outer join of two single-row
tables where the values of the join attributes were null. Any query
with such a join could return a wrong result set if the where
condition of the query was not empty. For queries with empty
where conditions the result sets were correct.
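A minimal sketch of such a query (hypothetical tables and data, not the
original test case):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  INSERT INTO t1 VALUES (NULL);
  INSERT INTO t2 VALUES (NULL);
  # Outer join of two single-row tables with NULL join attributes and a
  # non-empty WHERE condition: before the fix such a query could return
  # a wrong result set.
  SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.a WHERE t2.a IS NULL;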
This was the consequence of two bugs in the code:
- Item_equal objects for on conditions of outer joins were
not built if the processed query had no where condition
- the check for null values in the code that evaluated constant
Item_equal objects was incorrect.
Fixed both of the above problems.
Added a test case for the bug and adjusted results for some other
test cases.
When the not-exists optimization was applied to a table that
happened to be an inner table of two outer joins, one embedded
into another, setting the match flag of the embedding outer
join could be skipped. This caused generation of extra
null-complemented rows.
Made sure that the match flags are set correctly in all cases
when not-exists optimization is used.
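A sketch of the kind of query affected (hypothetical tables and data, not
the original test case):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE TABLE t3 (a INT);
  INSERT INTO t1 VALUES (1),(2);
  INSERT INTO t2 VALUES (1);
  INSERT INTO t3 VALUES (1);
  # t3 is an inner table of two nested outer joins, and the IS NULL
  # predicate allows the not-exists optimization to be applied to it.
  SELECT * FROM t1
    LEFT JOIN (t2 LEFT JOIN t3 ON t2.a = t3.a) ON t1.a = t2.a
  WHERE t3.a IS NULL;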
inited==INDEX
When an error occurred while sending the data in a temporary table, no
cleanup was performed. This caused a failed assertion when different access
methods were used for populating the table vs. retrieving the data from the
table, if IGNORE was specified and sql_safe_updates = 0. In this case
execution continues, but the handler expects to continue with the access
method used for row retrieval.
Fixed by doing the cleanup even if errors occur.
Fall back to using ALTER TABLE for engines that don't support REPAIR when doing a repair for upgrade.
Nicer output from mysql_upgrade and mysqlcheck.
Updated all arrays that used NAME_LEN to use SAFE_NAME_LEN to ensure that we don't break things accidentally, as names can now have a #mysql50# prefix.
client/mysql_upgrade.c:
If we are using verbose, also run mysqlcheck in verbose mode.
client/mysqlcheck.c:
Add more information if running in verbose mode
Print 'Needs upgrade' instead of a complex error if the table needs to be upgraded
Don't write connect information unless verbose is 2 or above
mysql-test/r/drop.result:
Updated test and results as we now support full table names
mysql-test/r/grant.result:
Now you get a correct error message if using #mysql with paths
mysql-test/r/show_check.result:
Update results as table names can temporarily be bigger than NAME_LEN (during upgrade)
mysql-test/r/upgrade.result:
Test upgrade for long table names.
mysql-test/suite/funcs_1/r/is_tables_is.result:
Updated old test result (had not been updated in a while)
mysql-test/t/drop.test:
Updated test and results as we now support full table names
mysql-test/t/grant.test:
Now you get a correct error message if using #mysql with paths
mysql-test/t/upgrade.test:
Test upgrade for long table names.
sql/ha_partition.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/item.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/log_event.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/mysql_priv.h:
Added SAFE_NAME_LEN
sql/rpl_filter.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sp.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sp_head.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_acl.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_base.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_connect.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_parse.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_prepare.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_select.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_show.cc:
NAME_LEN -> SAFE_NAME_LEN
Enlarge table names for SHOW TABLES to also include the optional #mysql50# prefix
sql/sql_table.cc:
Fall back to using ALTER TABLE for engines that don't support REPAIR when doing a repair for upgrade.
sql/sql_trigger.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_udf.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_view.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/table.cc:
Fixed check_table_name() to not count #mysql50# as part of the name.
If #mysql50# is part of the name, don't allow path characters in the name.
- Changed to still use bcmp() in certain cases because:
  - it is faster than memcmp() for short unaligned strings
  - it works better when using valgrind
- Changed to use my_sprintf() instead of sprintf() to get higher portability for old systems
- Changed code to use the MariaDB version of select->skip_record()
- Removed -%::SCCS/s.% from the Makefile.am files to remove automake warnings
Bug#46754: 'rows' field doesn't reflect partition pruning
The 'rows' field in the EXPLAIN output was evaluated to the number of
rows at the time the table was opened (not taken from the table cache),
and only the partitions left after pruning were updated with their
correct number of rows.
The evaluation of the 'rows' field was using handler::records(), which
is a potentially expensive call and ignores partition pruning.
The fix was to use the handler's stats.records after updating it
with ::info(HA_STATUS_VARIABLE) instead.
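An illustrative case (hypothetical table and data, not the original test
case):

  CREATE TABLE t1 (a INT)
  PARTITION BY RANGE (a)
  (PARTITION p0 VALUES LESS THAN (10),
   PARTITION p1 VALUES LESS THAN (20));
  INSERT INTO t1 VALUES (1),(2),(11),(12),(13);
  # Only partition p0 needs to be read, but before the fix the 'rows'
  # column could still reflect the row count of the whole table.
  EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 10;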
mysql-test/r/partition_pruning.result:
updated result
mysql-test/t/partition_pruning.test:
Added test.
sql/sql_select.cc:
Use ::info + stats.records instead of ::records().
called twice in a row
Queries with nested joins could cause an infinite loop in the
server when used from SP/PS.
When flattening nested joins, simplify_joins() tracks if the
name resolution list needs to be updated by setting
fix_name_res to TRUE if the current loop iteration has done any
transformations to the join table list. The problem was that
the flag was not reset before the next loop iteration, leading
to unnecessary "fixing" of the name resolution list, which in
turn could lead to a loop (i.e. a circularly-linked part) in
that list. This caused problems on subsequent executions when
the query was used from stored procedures or prepared statements.
Fixed by making sure fix_name_res is reset on every loop
iteration.
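A sketch of the kind of statement that could trigger the problem
(hypothetical tables, not the original test case):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE TABLE t3 (a INT);
  # The nested join is flattened by simplify_joins(); re-executing the
  # statement (here as a prepared statement) could hang the server
  # before the fix.
  PREPARE stmt FROM 'SELECT * FROM t1
    LEFT JOIN (t2 JOIN t3 ON t2.a = t3.a) ON t1.a = t2.a
    WHERE t2.a > 0';
  EXECUTE stmt;
  EXECUTE stmt;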
mysql-test/r/join.result:
Added a test case for bug #53544.
mysql-test/t/join.test:
Added a test case for bug #53544.
sql/sql_select.cc:
Make sure fix_name_res is reset on every loop iteration.
After the fix for bug 39653 the shortest available secondary index was used
for a full table scan. The clustered primary key was used only if no
secondary index could be used. However, when the chosen secondary index
includes all fields of the table being scanned, it is better to use the
primary index, since the amount of data to scan is the same but the primary
index is clustered.
Now the find_shortest_key function takes this into account.
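A sketch of the kind of table affected (hypothetical schema, not the
original test case):

  CREATE TABLE t1 (
    a INT NOT NULL,
    b INT NOT NULL,
    PRIMARY KEY (a),
    KEY ab (a, b)
  ) ENGINE=InnoDB;
  # The secondary index 'ab' covers all fields of the table, so scanning
  # it reads as much data as scanning the clustered primary key; with
  # this change a full scan prefers the primary key.
  EXPLAIN SELECT * FROM t1;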
mysql-test/suite/innodb/r/innodb_mysql.result:
Added a test case for bug #55656.
mysql-test/suite/innodb/t/innodb_mysql.test:
Added a test case for bug #55656.
sql/sql_select.cc:
Bug #55656: mysqldump can be slower after bug #39653 fix.
The find_shortest_key function now prefers the clustered primary key
if the secondary key found includes all fields of the table.
within query
The server could crash after materializing a derived table
which requires a temporary table for grouping.
When destroying the temporary table used to execute a query for
a derived table, JOIN::destroy() did not clean up Item_fields
pointing to fields in the temporary table. This led to
dereferencing a dangling pointer when printing out the items
tree later in the outer SELECT.
The solution is an addendum to the patch for bug37362: in
addition to cleaning up items in tmp_all_fields3, do the same
for items in tmp_all_fields1, since now we have an example
where this is necessary.
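A hypothetical example of the kind of construct involved (the user
variable assignment, table and data are illustrative, not the original
test case):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1),(2),(2);
  # The derived table requires a temporary table for its GROUP BY;
  # before the fix, items of the outer SELECT could be left pointing
  # into that temporary table after it was destroyed.
  SELECT @v := a FROM (SELECT a FROM t1 GROUP BY a) AS d;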
mysql-test/r/join.result:
Added test cases for bug#55568 and a duplicate bug #54468.
mysql-test/t/join.test:
Added test cases for bug#55568 and a duplicate bug #54468.
sql/field.cc:
Make sure field->table_name is not set to NULL in
Field::make_field() to avoid assertion failure in
Item_field::make_field() after cleaning up items
(the assertion fired in udf.test when running
the test suite with the patch applied).
sql/sql_select.cc:
In addition to cleaning up items in tmp_all_fields3, do the
same for items in tmp_all_fields1.
Introduce a new helper function to avoid code duplication.
sql/sql_select.h:
Introduce a new helper function to avoid code duplication in
JOIN::destroy().
mysql-test/r/group_by.result:
Added a test that showed that no_rows_in_result() did not work for expressions
mysql-test/r/subselect4.result:
Test case for LP#612894
mysql-test/t/group_by.test:
Added a test that showed that no_rows_in_result() did not work for expressions
mysql-test/t/subselect4.test:
Test case for LP#612894
sql/item.h:
Added restore_to_before_no_rows_in_result()
Added a function processor for no_rows_in_result() and restore_to_before_no_rows_in_result() to ensure they work with functions
Fixed so that the above functions are handled by Item_ref()
sql/item_func.h:
Ensure that no_rows_in_result() and restore_to_before_no_rows_in_result() are called for all function arguments
sql/item_sum.cc:
Added restore_to_before_no_rows_in_result() to restore settings after Item_sum_hybrid::no_rows_in_result() was called.
This is needed to handle the case where we have called make_const() on the item in opt_sum(), but the item will be reused again in a subquery.
Ignore multiple calls to no_rows_in_result() as Item_ref calls it twice.
sql/item_sum.h:
Added restore_to_before_no_rows_in_result();
sql/sql_select.cc:
Added reset of no_rows_in_result() for JOIN::reinit()
sql/sql_select.h:
Added a marker for whether no_rows_in_result() has been called.
The server was not checking for errors generated during
the execution of Item::val_xxx() methods when copying
data to the group, order, or distinct temp table's row.
Fixed by extending copy_funcs() to return an error
code and by checking for that error code at the places
where copy_funcs() is called.
Test case added.
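An illustrative example of an expression whose evaluation can raise an
error during grouping (hypothetical tables and data, not the original test
case):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1),(2);
  # The correlated scalar subquery returns more than one row for a = 1
  # and therefore raises an error when evaluated; before the fix such
  # errors could go unchecked while data was copied to the temp table's
  # row for GROUP BY.
  SELECT a, (SELECT t2.a FROM t1 AS t2 WHERE t2.a >= t1.a)
  FROM t1 GROUP BY a;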