Disable const propagation for Item_hex_string.
This must be done because Item_hex_string->val_int() is not
the same as (Item_hex_string->val_str() in BINARY column)->val_int().
We cannot simply disable the replacement in a particular context (
e.g. <bin_col> = <int_col> AND <bin_col> = <hex_string>) since
Items don't know the context they are in and there are functions like
IF (<hex_string>, 'yes', 'no').
Note that this will disable some valid cases as well
(e.g. : <bin_col> = <hex_string> AND <bin_col2> = <bin_col>) but
there's no way to distinguish the valid cases without having the
Item's parent say something like : Item->set_context(Item::STRING_RESULT)
and have all the Items that contain other Items do that consistently.
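A minimal sketch of the mismatch (table and values are illustrative, not
from the original patch): 0x31 is 49 in a numeric context but the string
'1' in a string context, so propagating it as a constant into an integer
comparison changes the result:
create table t1 (b binary(1), i int);
insert into t1 values (0x31, 1);
select 0x31 + 0;                -- 49 : the hex literal in numeric context
select cast(0x31 as char) + 0;  -- 1  : the string '1' converted to a number
select * from t1 where b = 0x31 and b = i;
-- substituting 0x31 for b in "b = i" would compare i to 49, not to 1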
optimizer does not honor IGNORE INDEX:
- Allow an index to be used for sorting a table in a join, instead of
  filesort, only if it is not disabled by IGNORE INDEX.
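An illustrative query (table and index names assumed): with the fix, the
ignored index may no longer be picked just to avoid filesort:
create table t1 (a int, key k_a (a));
explain select a from t1 ignore index (k_a) order by a;
-- must use filesort rather than scan k_a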
The optimizer removes redundant columns from ORDER BY. It considers every
reference to a const table column redundant, e.g. b in :
create table t1 (a int, b int, primary key(a));
select 1 from t1 where a = 1 order by b;
But it must not remove references to const table columns if the
const table is an outer table, because there can still be 2 values :
the const value and NULL. E.g.:
create table t1 (a int, b int, primary key(a));
select t2.b c from t1 left join t1 t2 on (t1.a = t2.a and t2.a = 5)
order by c;
Treat queries with no FROM clause that use aggregate functions as normal
queries, so the aggregate functions are calculated as if there were one
row. Thus COUNT(*) will return 1 instead of 0, and other aggregates will
behave in a compatible manner.
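For example, after the change:
select count(*);        -- returns 1, not 0
select max(5), avg(5);  -- returns 5 and 5.0000, over the single implicit row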
Renamed a variable to avoid a name clash with the macro "rem_size" from
"/usr/include/sys/xmem.h" on AIX 5.3 (bug#17648).
asn.cpp, asn.hpp:
Avoid name clash with NAME_MAX
- Make the range-et-al optimizer produce E(#table records after the table
  condition is applied),
- Make the join optimizer use this value,
- Add a "filtered" column to EXPLAIN EXTENDED to show the fraction of
  records left after the table condition is applied,
- Adjust test results, add comments.
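A hedged illustration (schema and data assumed); "filtered" estimates the
percentage of the rows read from the table that the rest of the table
condition will keep:
create table t1 (a int, b int, key (a));
explain extended select * from t1 where a < 10 and b = 3;
-- rows:     E(#records) matching the range on a
-- filtered: E(fraction of those that also satisfy b = 3)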
When processing aggregate functions, all table values are reset to NULLs
at the end of each group. When doing that, if no rows were found for a
group, the const tables must not be reset, as they are not recalculated
by do_select()/sub_select() for each group.
Too many cursors (more than 1024) could lead to memory corruption.
This affects both stored routines and C API cursors, and the
threshold is per-server, not per-connection. Similarly, the
corruption could happen when the server was under heavy load
(executing more than 1024 simultaneous complex queries), and this is
the reason why this bug is fixed in 4.1, which doesn't support
cursors.
The corruption was caused by a bug in the temporary tables code, when
an attempt to create a table could lead to a write beyond allocated
space. Note that only internal tables were affected (the tables
created internally by the server to resolve the query), not tables
created with CREATE TEMPORARY TABLE. Another precondition for the
bug is a TRUE value of the --temp-pool startup option, which, however,
is the default.
The cause of the bug was that random memory was overwritten in
bitmap_set_next() due to an out-of-bounds memory access.
When optimizing conditions like 'a = <some_val> OR a IS NULL' so that they
are united into a single condition on the key and checked together, the
server must determine which value is the NULL value in a correct way : not
only by using ->is_null, but also by checking that the expression does not
depend on any tables referenced in the current statement.
This additional check must be performed because the optimization takes
place before the actual execution of the statement, so if the field was
initialized to NULL by a previous statement the optimization would be
applied incorrectly.
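For reference, the optimization being guarded here is the one that turns
such a predicate into a single ref_or_null key lookup (schema assumed):
create table t1 (a int, key (a));
explain select * from t1 where a = 5 or a is null;  -- type: ref_or_null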
The problem was that opt_sum_query() replaced MIN/MAX functions
with the corresponding constant found in a key, but due to the imprecise
representation of float numbers, this comparison failed when the WHERE
clause was evaluated.
When MIN/MAX optimization detects that all tables can be removed,
also remove all conjuncts in a where clause that refer to these
tables. As a result of this fix, these conditions are not evaluated
twice, and in the case of float number comparisons we do not discard
result rows due to imprecise float representation.
As a side-effect this fix also corrects an unnoticed problem in
bug 12882.
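A sketch of the shape of the problem (schema and constant are illustrative;
the exact failing values depend on float rounding):
create table t1 (f float, key (f));
insert into t1 values (0.9);
-- Before the fix: MAX(f) is replaced by the constant read from the key, but
-- the conjunct on f stays in the WHERE clause; re-evaluating it against the
-- imprecisely represented float constant could reject the row just found.
select max(f) from t1 where f <= 0.9;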
When no suitable index is defined, filesort is used to sort the result of
a query. If there is a function in the select list and the result set
should be ordered by its value, that function will be evaluated twice:
once to compute the sort key and once to send its value to the user. This
happens because filesort, when sorting a table, remembers only the values
of its fields, not the values of functions.
All functions are affected, but since SP and UDF functions can be both
expensive and non-deterministic, a temporary table should be used to store
their results and then be sorted, to avoid evaluating the SP twice and to
get a correct result.
If an expression referenced in an ORDER BY clause contains an SP or UDF
function, force the use of a temporary table.
A new Item_processor function called func_type_checker_processor is added
to check whether the expression contains a function of a particular type.
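A hedged illustration (function and table names assumed): before the fix,
f1 below would be evaluated once per row to build the sort key and again
to send the value to the client:
create function f1(x int) returns int
  return x + 0;  -- stand-in for an expensive or non-deterministic body
select f1(a) from t1 order by f1(a);
-- With the fix the values of f1(a) are materialized in a temporary table,
-- which is then sorted, so f1 runs only once per row.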
When calculating GROUP_CONCAT, all blob fields are transformed
to varchar when making the temp table.
However, a varchar can use at most 2 bytes for the length, i.e. a
maximum of 65535 bytes.
This fix makes the conversion only for blobs whose max length
is below that limit; otherwise a blob field is created by a
make_string_field() call.
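An illustration of the boundary (schema assumed): a TINYBLOB (max 255
bytes) fits within varchar's 2-byte length and is converted; a LONGBLOB
is not:
create table t1 (grp int, small_b tinyblob, big_b longblob);
select grp, group_concat(small_b), group_concat(big_b) from t1 group by grp;
-- small_b becomes a varchar in the temp table; big_b stays a blob.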
When making a place to store field values at the start of each group,
the real item (not the reference) must be used when deciding which column
to copy.
An aggregate function reference was resolved incorrectly and
caused a crash in count_field_types.
real_item() must be used to get to the real Item instance through
the reference.
The bug was due to a regression introduced by a refactoring made
on May 30 2005 that modified the function JOIN::reinit.
As a result, for any subquery the value of offset_limit_cnt
was not restored for subsequent executions, yet the first
execution of the subquery set it to 0.
The fix restores this value in the function JOIN::reinit.
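A sketch of the affected shape (schema assumed): a subquery with a LIMIT
offset that is executed once per outer row:
create table t1 (a int);
create table t2 (b int);
select (select b from t2 order by b limit 1 offset 1) from t1;
-- The first execution consumed offset_limit_cnt (leaving it 0), so before
-- the fix later executions ignored the OFFSET.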
DESCRIBE returned the type BIGINT for a column of a view if the column
was specified by an expression over values of the type INT.
E.g. for the view defined as follows:
CREATE VIEW v1 AS SELECT COALESCE(f1,f2) FROM t1;
DESCRIBE returned type BIGINT for the only column of the view if f1,f2 are
columns of the INT type.
At the same time DESCRIBE returned type INT for the only column of the table
defined by the statement:
CREATE TABLE t2 SELECT COALESCE(f1,f2) FROM t1.
This inconsistency was removed by the patch.
Now the code chooses between INT/BIGINT depending on the
precision of the aggregated column type.
Thus both DESCRIBE commands above return type INT for v1 and t2.
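The examples above, put together as a runnable script (alias c added for
clarity):
create table t1 (f1 int, f2 int);
create view v1 as select coalesce(f1,f2) as c from t1;
describe v1;  -- Type: int (was bigint before the fix)
create table t2 select coalesce(f1,f2) as c from t1;
describe t2;  -- Type: int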
* don't use join cache when the incoming data set is already ordered
for ORDER BY
This choice must be made because the join cache will effectively
reverse the join order, and the results will be sorted by the index
of the table that uses the join cache.
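A hedged sketch of the case (schema assumed): t1's rows already arrive in
index order, so buffering them in a join cache for t2 would break the
ORDER BY:
create table t1 (a int, key (a));
create table t2 (a int);
explain select * from t1, t2 where t1.a = t2.a order by t1.a;
-- Reading t1 through key (a) already satisfies ORDER BY t1.a; with a join
-- cache the output would instead follow the order of the table that uses
-- the cache (t2).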