led to creating a corrupted index.
Corrected fix. A new method, prepare2, is added to the select_create
class. As all preparations are done by the select_create::prepare function,
it doesn't do anything. The algorithm for calling the
start_bulk_insert function is slightly changed: now it's called from the
select_insert::prepare2 function when the SQL_BUFFER_RESULT flag is set.
The is_bulk_insert_mode flag is removed as it is no longer needed.
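For illustration only, a statement of the shape that now goes through
select_insert::prepare2 might look like this (table and column names are
hypothetical):

  CREATE TABLE t1 (a INT);
  -- SQL_BUFFER_RESULT buffers the result in a temporary table;
  -- for this case start_bulk_insert is now called from
  -- select_insert::prepare2 rather than earlier in preparation.
  CREATE TABLE t2 (KEY (a)) SELECT SQL_BUFFER_RESULT a FROM t1;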
This bug is actually two bugs. The first one manifests itself in an EXPLAIN
SELECT query with nested subqueries that employs the filesort algorithm.
The whole SELECT under EXPLAIN is marked as UNCACHEABLE_EXPLAIN to preserve
some temporary structures for EXPLAIN. As a side effect, the values of
nested subqueries weren't cached and the subqueries were re-evaluated many
times. Each time a buffer for filesort was allocated but never freed, because
freeing occurs at the end of the topmost SELECT. Thus all available memory
was eaten up step by step and an OOM event occurred.
The second bug manifests itself in SELECT queries with conditions where
a subquery result is compared with a key field and the subquery itself also
has such a condition. When a long chain of such nested subqueries is present,
a stack overrun occurs. This happens because at some point the range optimizer
temporarily puts the PARAM structure on the stack. Its size is about 8K and
the stack is exhausted very fast.
Now the subselect_single_select_engine::exec function allows subquery result
caching when the UNCACHEABLE_EXPLAIN flag is set.
Now the SQL_SELECT::test_quick_select function calls the check_stack_overrun
function for stack checking purposes, to prevent a server crash.
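As a hedged sketch of the second problem (names are made up), each nesting
level compares a key field with a further subquery, and the real failing
case chains many more levels:

  CREATE TABLE t1 (a INT, KEY (a));
  -- Every level makes the range optimizer place a PARAM structure
  -- (about 8K) on the stack; a sufficiently long chain exhausted
  -- the stack before the check_stack_overrun call was added.
  SELECT a FROM t1 WHERE a =
    (SELECT a FROM t1 WHERE a =
      (SELECT a FROM t1 WHERE a =
        (SELECT a FROM t1 WHERE a = 1)));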
Comparison of a BIGINT NOT NULL column with a constant arithmetic
expression that evaluates to NULL caused error 1048: "Column '...'
cannot be null".
Made convert_constant_item() check if the constant expression is NULL
before attempting to store it in a field. Attempts to store NULL in a
NOT NULL field caused query errors.
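A minimal sketch of the failing comparison (hypothetical table):

  CREATE TABLE t1 (a BIGINT NOT NULL);
  INSERT INTO t1 VALUES (1);
  -- 1 + NULL evaluates to NULL; storing it in the NOT NULL field
  -- during constant conversion raised error 1048 instead of the
  -- comparison simply returning no rows.
  SELECT * FROM t1 WHERE a = 1 + NULL;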
checked for each record'
The problem was in incorrectly calculated length of the buffer used to
store a hexadecimal representation of an index map in
select_describe(). This could result in buffer overrun and stack
corruption under some circumstances.
Fixed by correcting the calculation.
Columns in HAVING can reference the GROUP BY and
SELECT list columns, and such references can carry
"table" prefixes. These "table" prefixes in HAVING
use the table alias if one is available.
This means that table aliases are subject to the same
storage rules as table names and depend on
lower_case_table_names in the same way the table
names do.
Fixed by:
1. Treating table aliases as table names
and making them lowercase when printing out the SQL
statement for view persistence.
2. Using case-insensitive comparison for table
aliases when requested by lower_case_table_names.
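A sketch of the affected shape (all names hypothetical):

  CREATE TABLE t1 (a INT);
  -- The alias "T" qualifies columns in GROUP BY and HAVING; when
  -- the view text is persisted the alias is now lowercased like a
  -- table name, so the view still parses correctly under
  -- lower_case_table_names.
  CREATE VIEW v1 AS
    SELECT T.a FROM t1 AS T GROUP BY T.a HAVING T.a > 0;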
The max_length parameter for BLOB-returning functions must be big enough
for any possible content. Otherwise the field created for a table
will be too small.
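For example (hypothetical; COMPRESS() stands in for any BLOB-returning
function):

  CREATE TABLE t1 (b BLOB);
  -- The field created for COMPRESS(b) must have a max_length large
  -- enough for any possible content of the function's result.
  CREATE TABLE t2 SELECT COMPRESS(b) AS cb FROM t1;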
After adding an index the <VARBINARY> IN (SELECT <BINARY> ...)
clause returned a wrong result: the VARBINARY value was illegally padded
with zero bytes to the length of the BINARY column for the index search.
(<VARBINARY>, ...) IN (SELECT <BINARY>, ... ) clauses are affected too.
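A minimal sketch of the affected query shape (hypothetical tables):

  CREATE TABLE t1 (a VARBINARY(10));
  CREATE TABLE t2 (b BINARY(10), KEY (b));
  INSERT INTO t1 VALUES ('abc');
  INSERT INTO t2 VALUES ('abc');
  -- Before the fix, once the index existed the VARBINARY value was
  -- zero-padded to BINARY(10) for the index search, so the result
  -- differed from the plan without the index.
  SELECT a FROM t1 WHERE a IN (SELECT b FROM t2);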
UNIQUE (eq-ref) lookups result in the table being considered a "constant" table.
Queries that consist of only constant tables are processed in do_select() in a
special way that doesn't invoke evaluate_join_record(), and therefore doesn't
increase the counters join->examined_rows and join->thd->row_count.
The patch increases these counters in this special case.
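For illustration (hypothetical table), a query consisting only of such a
constant table:

  CREATE TABLE t1 (id INT PRIMARY KEY, v INT);
  INSERT INTO t1 VALUES (1, 10);
  -- The eq-ref lookup on the primary key makes t1 a constant table,
  -- so the query is resolved in do_select() without going through
  -- evaluate_join_record(); the counters are now incremented there.
  SELECT v FROM t1 WHERE id = 1;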
NOTICE:
This behavior seems to contradict what the documentation says in Sect. 5.11.4:
"Queries handled by the query cache are not added to the slow query log, nor
are queries that would not benefit from the presence of an index because the
table has zero rows or one row."
No test case in 5.0 as the issue shows only in the slow query log, and other
counters can give subtly different values (with regard to counting in
create_sort_index(), synthetic rows in ROLLUP, etc.).
BETWEEN was more lenient than greater-than and less-than with regard to what
it accepted as a DATE/DATETIME in comparisons. This ChangeSet makes < and >
comparisons similarly robust with regard to trailing garbage (" GMT-1")
and "missing" leading zeros. Now all three comparators behave similarly
in that they throw a warning for "junk" at the end of the data, but then
proceed anyway if possible. Before, < and > fell back on a string (rather
than date) comparison when a warning condition was raised in the
string-to-date conversion. Now the fallback only happens on actual errors,
while warning conditions still result in a warning being delivered to the
client.
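A hedged example of the now-uniform behavior (hypothetical data):

  CREATE TABLE t1 (d DATETIME);
  INSERT INTO t1 VALUES ('2007-07-01 00:00:00');
  -- The constant has missing leading zeros and trailing junk; this
  -- raises a warning, but the comparison still proceeds as a date
  -- comparison instead of falling back to a string comparison.
  SELECT d FROM t1 WHERE d > '2007-6-30 GMT-1';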
The bug is a regression introduced by the fix for bug30596. The problem
was that in cases when groups in GROUP BY correspond to only one row,
and there is ORDER BY, the GROUP BY was removed and the ORDER BY
rewritten to ORDER BY <group_by_columns> without checking if the
columns in GROUP BY and ORDER BY are compatible. This led to
incorrect ordering of the result set as it was sorted using the
GROUP BY columns. Additionally, the code discarded ASC/DESC modifiers
from ORDER BY even if its columns were compatible with the GROUP BY
ones.
This patch fixes the regression by checking if ORDER BY columns form a
prefix of the GROUP BY ones, and rewriting ORDER BY only in that case,
preserving the ASC/DESC modifiers. That check is sufficient, since the
GROUP BY columns contain a unique index.
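A sketch of the affected shape (hypothetical table): grouping on a unique
column makes every group a single row, ORDER BY forms a prefix of the
GROUP BY columns, and the DESC modifier is now preserved:

  CREATE TABLE t1 (a INT PRIMARY KEY, b INT);
  -- GROUP BY can be removed (one row per group); ORDER BY is only
  -- rewritten because (a) is a prefix of (a, b), and DESC is kept.
  SELECT a, b FROM t1 GROUP BY a, b ORDER BY a DESC;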
causes out of memory errors
The code in mysql_create_function() and mysql_drop_function() assumed
that the only reason for UDFs being uninitialized at that point is an
out-of-memory error during initialization. However, another possible
reason for that is the --skip-grant-tables option in which case UDF
initialization is skipped and UDFs are unavailable.
The solution is to check whether mysqld is running with
--skip-grant-tables and issue a proper error in such a case.
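For illustration, with mysqld started with --skip-grant-tables (the UDF name
and library are the stock example ones):

  -- UDF initialization was skipped, so this statement now reports
  -- that UDFs are unavailable instead of an out-of-memory error.
  CREATE FUNCTION metaphon RETURNS STRING SONAME 'udf_example.so';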
HOUR(), MINUTE(), ... returned spurious results when used on a DATE cast.
This happened because the DATE-cast object did not overload the get_time()
method of its superclass Item. The default method was inappropriate here and
misinterpreted the data.
The patch adds the missing method; get_time() on DATE casts now returns SQL
NULL on NULL input, 0 otherwise. This coincides with the way DATE columns
behave.
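A sketch of the corrected behavior:

  -- get_time() on a DATE cast returns 0 for non-NULL input ...
  SELECT HOUR(CAST('2007-07-19 18:30:00' AS DATE));  -- 0
  -- ... and SQL NULL for NULL input, as DATE columns do.
  SELECT HOUR(CAST(NULL AS DATE));                   -- NULL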
When constructing a key image, stricter date checking (from sql_mode)
should not be enabled, because it would reject invalid dates that the
server would otherwise accept for searching when there's no index.
Fixed by disabling strict date checking when constructing a key image.
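For illustration (hypothetical table), a search value that is an invalid
date:

  CREATE TABLE t1 (d DATE, KEY (d));
  -- '2007-02-30' is not a valid date, but the server accepts it as
  -- a search value; with the index present the constant is turned
  -- into a key image, and strict date checking is disabled for that
  -- conversion so the lookup is not rejected.
  SELECT * FROM t1 WHERE d = '2007-02-30';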
variable in where clause.
Problem: the new_item() method of Item_uint used an incorrect
constructor. "new Item_uint(name, max_length)" calls
Item_uint::Item_uint(const char *str_arg, uint length), which assumes the
first argument to be the string representation of the value, not the
item's name. This could result in either a server crash or incorrect
results, depending on the usage scenario.
Fixed by using the correct constructor in new_item():
Item_uint::Item_uint(const char *str_arg, longlong i, uint length).
tables or more
The problem was that the optimizer used the join buffer in cases when
the result set is ordered by filesort. This resulted in the ORDER BY
clause being ignored and the records being returned in the order
determined by the order of matching records in the last table of the join.
Fixed by relaxing the condition in make_join_readinfo() to take
filesort-ordered result sets into account, not only index-ordered ones.
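A hedged sketch of a qualifying query (hypothetical tables, three-table
join with a filesort-ordered result):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE TABLE t3 (a INT, b INT);
  -- The join buffer could be chosen here even though the result is
  -- ordered by filesort, effectively ignoring the ORDER BY.
  SELECT t3.b FROM t1, t2, t3
  WHERE t1.a = t2.a AND t2.a = t3.a
  ORDER BY t3.b;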
The HAVING clause is subject to the same rules as the SELECT list
regarding the use of aggregated and non-aggregated columns.
But this was not enforced for the implicit grouping that arises from
the use of aggregate functions without GROUP BY.
Fixed by performing the same checks for HAVING as for SELECT.
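A minimal sketch (hypothetical table):

  CREATE TABLE t1 (a INT, b INT);
  -- MAX(a) makes the grouping implicit (one group for the whole
  -- table); referencing the non-aggregated column b in HAVING is
  -- now rejected, just as it would be in the SELECT list.
  SELECT MAX(a) FROM t1 HAVING b > 0;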