Columns in HAVING can reference the GROUP BY and
SELECT columns, and such references can carry "table"
prefixes. These "table" prefixes
in HAVING use the table alias if one is available.
This means that table aliases are subject to the same
storage rules as table names and are dependent on
lower_case_table_names in the same way as the table
names are.
Fixed by:
1. Treating table aliases as table names
and making them lowercase when printing out the SQL
statement for view persistence.
2. Using case-insensitive comparison for table
aliases when requested by lower_case_table_names.
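A hedged sketch of the affected pattern (table, column, and alias names are hypothetical):

  -- With lower_case_table_names set, the alias qualifying the HAVING
  -- column must be compared case-insensitively and written out in
  -- lowercase when the view definition is persisted.
  CREATE VIEW v1 AS
    SELECT T1.a, COUNT(*) AS cnt
    FROM t1 AS T1
    GROUP BY T1.a
    HAVING T1.a > 0;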
UNIQUE (eq-ref) lookups result in a table being considered a "constant" table.
Queries that consist of only constant tables are processed in do_select() in a
special way that doesn't invoke evaluate_join_record(), and therefore doesn't
increase the counters join->examined_rows and join->thd->row_count.
The patch increases these counters in this special case.
NOTICE:
This behavior seems to contradict what the documentation says in Sect. 5.11.4:
"Queries handled by the query cache are not added to the slow query log, nor
are queries that would not benefit from the presence of an index because the
table has zero rows or one row."
No test case in 5.0, as the issue shows up only in the slow query log, and other counters
can give subtly different values (with regard to counting in create_sort_index(),
synthetic rows in ROLLUP, etc.).
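For illustration only (hypothetical tables, each with a primary key pk), a query in which every table is resolved by a unique lookup and is therefore treated as constant:

  -- Both lookups resolve against unique keys, so do_select() takes the
  -- special constant-table path that previously skipped the counters.
  SELECT * FROM t1, t2
  WHERE t1.pk = 1 AND t2.pk = t1.pk;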
The bug is a regression introduced by the fix for bug30596. The problem
was that in cases when groups in GROUP BY correspond to only one row,
and there is ORDER BY, the GROUP BY was removed and the ORDER BY
rewritten to ORDER BY <group_by_columns> without checking if the
columns in GROUP BY and ORDER BY are compatible. This led to
incorrect ordering of the result set as it was sorted using the
GROUP BY columns. Additionally, the code discarded the ASC/DESC modifiers
from ORDER BY even if its columns were compatible with the GROUP BY
ones.
This patch fixes the regression by checking if ORDER BY columns form a
prefix of the GROUP BY ones, and rewriting ORDER BY only in that case,
preserving the ASC/DESC modifiers. That check is sufficient, since the
GROUP BY columns contain a unique index.
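A hypothetical example of the affected shape, assuming a UNIQUE index on t1.a:

  -- Each group holds a single row, but the result must still be
  -- ordered by b DESC rather than by the GROUP BY column a.
  SELECT a, b FROM t1 GROUP BY a ORDER BY b DESC;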
The problem was that the optimizer used the join buffer in cases when
the result set is ordered by filesort. This resulted in the ORDER BY
clause being ignored, and the records being returned in the order
determined by the order of matching records in the last table of the join.
Fixed by relaxing the condition in make_join_readinfo() to take
filesort-ordered result sets into account, not only index-ordered ones.
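A sketch of the query shape involved (hypothetical tables, no index usable for the join):

  -- The join buffer could be chosen here even though the result has to
  -- be ordered by filesort, so the ORDER BY was effectively lost.
  SELECT t1.a, t2.b
  FROM t1 JOIN t2 ON t1.a = t2.a
  ORDER BY t1.b;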
The HAVING clause is subject to the same rules as the SELECT list
about using aggregated and non-aggregated columns.
However, this was not enforced when processing the implicit grouping
that results from using aggregate functions.
Fixed by performing the same checks for HAVING as for SELECT.
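A minimal hypothetical statement that should be rejected under those rules:

  -- Implicit grouping: the aggregate turns the result into one group,
  -- so the non-aggregated column a in HAVING must be rejected, just as
  -- it would be in the SELECT list.
  SELECT MAX(b) FROM t1 HAVING a > 0;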
Problem: when creating a temporary table we allocate the group buffer (if
needed) followed by the table bitmaps (see create_tmp_table()). Reserving less
memory for the group buffer than is actually needed (used) for value retrieval
may lead to overlap with the bitmaps that follow it in the memory pool, which
in turn leads to unpredictable consequences.
As we sometimes use Item->max_length to calculate the group buffer size,
it must be set to a proper value. In this particular case
Item_datetime_typecast::max_length is too small.
Another problem is that using max_length to calculate the group buffer
key length for items represented as DATE/TIME fields is superfluous.
Fix: set Item_datetime_typecast::max_length properly, and
accurately calculate the group buffer key length for items
represented as DATE/TIME fields in the buffer.
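For illustration, a hypothetical query of the kind that exercises the group buffer sizing for a DATETIME cast:

  -- The CAST result is grouped on, so its length feeds into the group
  -- buffer key length computed in create_tmp_table().
  SELECT CAST(d AS DATETIME) AS dt, COUNT(*)
  FROM t1
  GROUP BY dt;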
A rule was introduced by the 5.1 part of the fix for bug 27531 to
prefer filesort over indexed ORDER BY when accessing all of the rows of a
table (because it's faster). This new rule was not accounting for the
presence of a LIMIT clause.
Fixed the condition for this rule so that it prefers filesort over
indexed ORDER BY only when there is no LIMIT clause.
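A hypothetical example of the shape the corrected rule applies to (idx_col is an indexed column):

  -- With a small LIMIT, reading in index order and stopping early can
  -- beat a full scan plus filesort, so the filesort preference
  -- introduced for bug 27531 should not kick in here.
  SELECT * FROM t1 ORDER BY idx_col LIMIT 10;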
The problem was that wrong function (create_tmp_from_item())
was used to create a temporary field for Item_func_sp.
The fix is to use create_tmp_from_field().
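A hypothetical illustration of the kind of construct involved (a stored function result materialized through a view into a temporary table):

  CREATE FUNCTION f1() RETURNS DECIMAL(10,2) DETERMINISTIC
    RETURN 1.50;
  CREATE VIEW v1 AS SELECT f1() AS val FROM t1;
  -- Grouping forces a temporary table; the field created for val must
  -- preserve the function's declared result type.
  SELECT val FROM v1 GROUP BY val;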
The change_to_use_tmp_fields function leaves the orig_table member of an
expression's tmp table field filled in for the new Item_field being created.
Later, orig_table is used by the Field::make_field function to provide some
information about the original table and field name to the user. This is fine
for a field, but for an expression it should be empty.
The change_to_use_tmp_fields function now resets the orig_table member of
an expression's tmp table field to prevent providing wrong information to the user.
The Field::make_field function now resets the table_name and the org_col_name
variables when orig_table is set to 0.
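As a hedged illustration (hypothetical table), the symptom concerns the metadata of expression columns materialized in a tmp table:

  -- The second column is an expression; its result metadata should not
  -- carry t1's table name or an original column name.
  SELECT a, CONCAT(a, '-x') AS expr_col
  FROM t1
  GROUP BY a;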
The optimizer takes different execution paths for EXPLAIN than for SELECT;
this fix relates only to EXPLAIN, hence no behavior changes.
The test of sort keys for ORDER BY was prohibited from considering keys
that were mentioned in IGNORE KEYS FOR ORDER BY. This led to two
inconsistencies: One was that IGNORE INDEX FOR GROUP BY and
IGNORE INDEX FOR ORDER BY gave apparently different EXPLAINs; the latter
erroneously claimed to do filesort. The second inconsistency
was that the test of sort keys was called twice, finding a sort key the first
time but not the second, leading to the mentioned filesort.
Fixed by making the test of sort keys consider all enabled
keys on the table. This test rejects keys that are not covering, and for
covering keys the hint should be ignored anyway.
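A hypothetical pair of statements (index name assumed) showing the inconsistency that was visible in EXPLAIN:

  -- Before the fix these could produce different plans, with the
  -- ORDER BY variant spuriously reporting a filesort.
  EXPLAIN SELECT a FROM t1 IGNORE INDEX FOR GROUP BY (idx_a) GROUP BY a;
  EXPLAIN SELECT a FROM t1 IGNORE INDEX FOR ORDER BY (idx_a) GROUP BY a;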
When storing the VIEW, the CREATE VIEW command is reconstructed
from the parse tree. While constructing the command string,
any index hints that were specified should also be printed.
Fixed by adding code to print the index hints when printing a
table in the FROM clause.
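A hypothetical view whose stored definition must retain the hint:

  -- The USE INDEX hint has to be printed back when the CREATE VIEW
  -- statement is reconstructed for storage.
  CREATE VIEW v1 AS
    SELECT a FROM t1 USE INDEX (idx_a) WHERE a > 10;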
The optimizer sets index traversal in reverse order only if there are
used key parts that are not compared to a constant.
However, using the primary key as an ORDER BY suffix rendered the check
incomplete: reverse order must still be used even if
all the parts of the secondary key are compared to a constant.
Fixed by relaxing the check and setting reverse traversal even when all
the secondary index key parts are compared to a constant.
Also account for the case when all the primary key parts are compared to a
constant.
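A hypothetical schema and query showing the pattern (secondary index on a, primary key pk):

  -- All secondary key parts are bound to a constant and the primary
  -- key is used as an ORDER BY suffix, yet the index must still be
  -- traversed in reverse.
  SELECT * FROM t1 WHERE a = 10 ORDER BY a DESC, pk DESC;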
The optimization that uses a unique index to remove GROUP BY did not
ensure that the index was actually used, thus violating the ORDER BY
that is implied by GROUP BY.
Fixed by replacing GROUP BY with ORDER BY if the GROUP BY clause contains
a unique index over non-nullable field(s). If GROUP BY ... ORDER BY
NULL is used, the GROUP BY is simply removed.
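A hedged illustration of the rewrite, assuming a unique index over the non-nullable column a:

  SELECT a, b FROM t1 GROUP BY a;                -- treated as ORDER BY a
  SELECT a, b FROM t1 GROUP BY a ORDER BY NULL;  -- GROUP BY simply removed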
Currently the Last_query_cost session status variable shows
only the cost of a single flat subselect. For complex queries
(with subselects, unions, etc.) Last_query_cost is not valid,
as it was showing the cost for the last optimized subselect.
Fixed by resetting Last_query_cost to zero when the complete
cost of the query cannot be determined.
Last_query_cost will be non-zero only for single flat queries.
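A hypothetical way to observe the difference:

  SELECT * FROM t1 WHERE pk = 1;
  SHOW SESSION STATUS LIKE 'Last_query_cost';   -- non-zero: flat query
  SELECT * FROM t1 WHERE a IN (SELECT a FROM t2);
  SHOW SESSION STATUS LIKE 'Last_query_cost';   -- 0: cost not determined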