A bug in the restore_prev_nj_state function allowed interleaving
inner tables of outer join operations with outer tables. With the
current implementation of the nested loops algorithm this could lead
to wrong result sets for queries with nested outer joins.
Another bug in this procedure effectively blocked evaluation of some
valid execution plans for queries with nested outer joins.
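For illustration only (hypothetical tables t1, t2, t3), a query with nested outer joins of the kind affected could look like this:

  -- t2 LEFT JOIN t3 forms the inner table of the outer LEFT JOIN;
  -- interleaving t1 between t2 and t3 in the join order can produce
  -- wrong NULL-complemented rows.
  SELECT *
  FROM t1
       LEFT JOIN (t2 LEFT JOIN t3 ON t3.b = t2.b) ON t2.a = t1.a;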
The need arose when working on Bug 26141, where it became
necessary to replace TABLE_LIST with its forward declaration in a few
headers, and this involved a lot of s/TABLE_LIST/st_table_list/.
Although other workarounds exist, this patch is in line
with our general strategy of moving away from typedef-ed names.
Sometime in the future we might also rename TABLE_LIST to follow the
coding style, but that would be a huge change.
Subquery grouped for aggregate of outer query / no aggregate of subquery
The optimizer counts the aggregate functions that
appear as top-level expressions (in all_fields) in
the current subquery. Later it makes a list of these
that it uses to actually execute the aggregates in
end_send_group().
That count is used in several places as a flag for whether
there are aggregate functions.
While collecting this information it must not count
aggregates that are not aggregated in the current
context; it must treat them as normal expressions
instead. Not doing so leads to incorrect data about
the query, e.g. a query that actually has no
aggregate functions of its own is run as if it had some
(and hence is expected to return only one row).
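As a sketch of the scenario (hypothetical tables t1 and t2), an aggregate written inside a subquery may actually be aggregated in the outer query:

  -- MAX(t1.a) appears in the subquery's select list, but its argument
  -- refers only to the outer table, so it is aggregated in the outer
  -- query; the subquery itself contains no aggregate of its own and
  -- must not be executed as an implicitly grouped, single-row query.
  SELECT (SELECT MAX(t1.a) FROM t2 WHERE t2.b > 0) AS m
  FROM t1;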
Fixed by ignoring the aggregates that are not aggregated
in the current context.
One other, smaller omission was discovered and fixed in the
process: the place of aggregation was not calculated for
user-defined functions. Fixed by calling
Item_sum::init_sum_func_check() and
Item_sum::check_sum_func(), as is done for the rest of
the aggregate functions.
A query to which the loose scan optimization for grouping queries was
applied returned a wrong result set when the query was used with the
SQL_BIG_RESULT option.
The SQL_BIG_RESULT option forces the use of a sorting algorithm for grouping
queries instead of employing a suitable index. The current loose scan
optimization is applied only to single-table queries when the suitable
index is covering. It does not make sense to use a sorting algorithm in this
case. However, the create_sort_index function did not take into account
the possible choice of the loose scan to implement the DISTINCT operator,
which makes sorting unnecessary. Moreover, the current implementation of
the loose scan for queries with DISTINCT assumes that sorting will
never happen. Thus, in this case create_sort_index should not call
the filesort function.
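A minimal sketch of the affected query shape (hypothetical table and index names):

  CREATE TABLE t1 (a INT, KEY idx_a (a));
  -- The covering index idx_a lets the loose scan implement DISTINCT,
  -- so the sort requested via SQL_BIG_RESULT is unnecessary.
  SELECT SQL_BIG_RESULT DISTINCT a FROM t1;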
An INSERT into a table from a SELECT on the same table
with ORDER BY and LIMIT was inserting data other than
what the standalone SELECT ... ORDER BY ... LIMIT returns.
One part of the patch for bug #9676 improperly pushed the
LIMIT down to the temporary table in the presence of an ORDER BY
clause.
That part has been removed.
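A sketch of the statement shape involved (hypothetical table and column):

  -- The rows inserted should be exactly the rows that the standalone
  -- SELECT a FROM t1 ORDER BY a DESC LIMIT 2 would return.
  INSERT INTO t1 (a)
  SELECT a FROM t1 ORDER BY a DESC LIMIT 2;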
For a join query with GROUP BY and/or ORDER BY and a view reference
in the FROM list, the metadata erroneously showed empty table aliases
and database names for the view columns.
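A sketch of the query shape, assuming hypothetical base tables t1, t2 and a view v:

  CREATE VIEW v AS SELECT a, b FROM t2;
  -- With GROUP BY (or ORDER BY) present, the result set metadata for v.a
  -- erroneously reported an empty table alias and database name.
  SELECT v.a, COUNT(*) FROM t1 JOIN v ON v.b = t1.b GROUP BY v.a;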
The method select_insert::send_error does two things: it rolls back the
statement being executed and outputs an error message. But when a
nonexistent column is referenced, an error message has already been published
and there is no need to publish another.
Fixed by moving all functionality other than publishing the error message into
select_insert::abort() and calling only that function.
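A sketch of the triggering statement (hypothetical tables and column name):

  -- The unknown column in the SELECT part is already reported as an
  -- error; the INSERT cleanup path must not publish a second one.
  INSERT INTO t1 (a) SELECT no_such_column FROM t2;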
Coding style: classes start with a capital letter.
Rename some classes related to parsing:
create_field -> Create_field
foreign_key -> Foreign_key
key_part_spec -> Key_part_spec
The server could crash when a temporary table had grown out of the heap
memory reserved for it and the remaining disk space was not big enough
to store the table as a MyISAM table.
The crash happens because the function create_myisam_from_heap
does not safely handle the mem_root structure associated
with the converted table when an error has occurred.
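A rough sketch of how the conversion is triggered (hypothetical table, deliberately tiny limits; the crash additionally required the write of the on-disk table to fail, e.g. for lack of disk space):

  SET SESSION max_heap_table_size = 16384;
  SET SESSION tmp_table_size = 16384;
  -- For a sufficiently large t1, the internal temporary table for this
  -- GROUP BY outgrows the in-memory limits and is converted to MyISAM
  -- by create_myisam_from_heap.
  SELECT a, COUNT(*) FROM t1 GROUP BY a;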
The server could crash on EXPLAIN EXTENDED for a query using a derived
table over a grouping subselect.
This crash happens only when materialization of the derived table
requires creation of an auxiliary temporary table, for example when
a grouping operation is carried out with the use of a temporary table.
The crash happened because EXPLAIN EXTENDED, when printing the query
expression, made an attempt to use objects created in the mem_root
of the temporary table, which had already been freed by the time
printing is called.
This bug appeared after the method Item_field::print() had been
introduced.
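A sketch of the crashing statement shape (hypothetical table):

  -- The derived table contains GROUP BY, so its materialization can need
  -- an auxiliary temporary table; printing the plan then touched objects
  -- allocated in that table's already-freed mem_root.
  EXPLAIN EXTENDED
  SELECT * FROM (SELECT a, COUNT(*) AS c FROM t1 GROUP BY a) AS dt;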
Problem:
HASH indexes on VARCHAR columns with binary collations did not ignore trailing spaces from strings before comparisons. This could result in duplicate records being successfully inserted into a MEMORY table with unique key constraints.
As a direct consequence of the above, internal MEMORY tables used for GROUP BY calculation in testcases for bug #27643 contained duplicate rows which resulted in duplicate key errors when converting those temporary tables to MyISAM. Additionally, that error was incorrectly converted to the 'table is full' error.
Solution:
- ignore trailing spaces in VARCHAR fields with binary collations when calculating hashes.
- return a proper error from create_myisam_from_heap() when conversion fails.
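A sketch of the duplicate-insert scenario (hypothetical table):

  CREATE TABLE t1 (a VARCHAR(10) COLLATE latin1_bin,
                   UNIQUE KEY USING HASH (a)) ENGINE=MEMORY;
  INSERT INTO t1 VALUES ('a');
  -- 'a ' compares as equal to 'a' under the binary collation, so this
  -- insert must fail with a duplicate-key error instead of adding a
  -- second row.
  INSERT INTO t1 VALUES ('a ');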
mysqld crashed when a long-running EXPLAIN query was killed from
another connection.
When the current thread caught a kill signal while executing the function
best_extension_by_limited_search, it just silently returned to
the calling function greedy_search without initializing the elements of
the join->best_positions array.
However, the greedy_search function ignored the thd->killed status
after calls to the best_extension_by_limited_search function, and
after several calls it used uninitialized
data from join->best_positions[idx] to search for a position in the
join->best_ref array.
That search failed, and greedy_search tried to call the swap_variables
function with a NULL argument, which caused the crash.