The range optimizer uses 'save_in_field_no_warnings()' to verify properties of
'value <cmp> field' expressions.
If this execution yields an error, it should abort.
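A minimal, self-contained sketch of the pattern the fix enforces (toy code, not the server's actual range-optimizer internals; all names here are invented): the store step reports conversion failure, and the caller aborts range analysis instead of continuing with garbage.

  #include <cstdio>
  #include <cstdlib>
  #include <optional>

  // Store "value" for a 'value <cmp> field' comparison, reporting
  // failure instead of silently producing an arbitrary result.
  std::optional<long> store_for_comparison(const char *value)
  {
    char *end = nullptr;
    long v = std::strtol(value, &end, 10);
    if (end == value || *end != '\0')
      return std::nullopt;          // conversion error: caller must abort
    return v;
  }

  bool analyze_range(const char *value)
  {
    std::optional<long> stored = store_for_comparison(value);
    if (!stored)
      return false;                 // abort range analysis on error
    std::printf("range bound: %ld\n", *stored);
    return true;
  }

  int main()
  {
    analyze_range("42");            // ok
    analyze_range("not-a-number");  // store fails; analysis aborts
  }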
RETURNS RANDOM DATA
MySQL 5.5-specific version of the bugfix.
When Loose Index Scan Range access is used, MySQL execution needs
to copy non-aggregated fields. end_send() checked if this was
necessary by checking if join_tab->select->quick had type
QS_TYPE_GROUP_MIN_MAX.
In this bug, however, MySQL created a sort index to sort the rows
read from this range access method. create_sort_index() deletes
join_tab->select->quick, which makes it impossible to ask
the join_tab whether LIS has been used.
The fix for MySQL 5.5 is to introduce a variable in JOIN_TAB
that stores whether or not LIS has been used. There is no need
for this variable in later MySQL versions because the relevant
code has been refactored.
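A self-contained sketch of the 5.5 approach described above (all names invented; the real fix adds a member to JOIN_TAB): the "LIS was used" property is cached before the quick select that carried it is destroyed.

  #include <cassert>
  #include <memory>

  // Toy stand-ins for the quick select and JOIN_TAB.
  struct QuickSelect { bool is_group_min_max; };

  struct JoinTab {
    std::unique_ptr<QuickSelect> quick;
    bool used_loose_index_scan = false;  // the new cached flag
  };

  void create_sort_index(JoinTab &tab)
  {
    // Remember the property before deleting the quick select.
    tab.used_loose_index_scan = tab.quick && tab.quick->is_group_min_max;
    tab.quick.reset();                   // sorting destroys the quick select
  }

  int main()
  {
    JoinTab tab;
    tab.quick.reset(new QuickSelect{true});
    create_sort_index(tab);
    assert(tab.used_loose_index_scan);   // end_send() can still tell
  }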
Post push fix:
setup_ref_array() now uses n_sum_items to determine the size of ref_pointer_array.
The problem was that n_sum_items kept growing; it wasn't reset for each query.
A similar memory leak was fixed with the patch for:
Bug 14683676 ENDLESS MEMORY CONSUMPTION IN SETUP_REF_ARRAY WITH MAX IN SUBQUERY
sql/sql_yacc.yy:
Reset parsing_place when we're done parsing SHOW commands,
to prevent Item::Item incrementing select_n_having_items
(which is also used in setup_ref_array())
Item_func_group_concat::copy_or_same() creates a copy of the original object.
It also creates a copy of the ORDER structure because ORDER struct elements may
be modified in find_order_in_list() called from Item_func_group_concat::setup().
As the ORDER copy is created using memcpy, the ORDER::next elements point to the
original ORDER structs. Thus find_order_in_list(), called from an EXECUTE stmt,
modifies the original ORDER item pointers so that they point to runtime items;
these items are freed after execution, so the original ORDER structure becomes
invalid.
The fix is to properly update ORDER::next fields so that they point to
new ORDER elements.
sql/item_sum.cc:
update ORDER::next fields so that they point to new ORDER elements.
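A self-contained illustration of the fix (toy types, not the server's ORDER struct): after a memcpy of a linked array, retarget each next pointer at the copy.

  #include <cassert>
  #include <cstring>

  // Simplified stand-in for ORDER: nodes linked through 'next',
  // copied as a contiguous array with memcpy.
  struct Order {
    int item;
    Order *next;
  };

  void copy_order_list(Order *dst, const Order *src, std::size_t count)
  {
    std::memcpy(dst, src, count * sizeof(Order));
    for (std::size_t i = 0; i + 1 < count; ++i)
      dst[i].next = &dst[i + 1];       // the fix: update ORDER::next
    if (count)
      dst[count - 1].next = nullptr;
  }

  int main()
  {
    Order orig[2] = {{1, nullptr}, {2, nullptr}};
    orig[0].next = &orig[1];

    Order copy[2];
    copy_order_list(copy, orig, 2);
    assert(copy[0].next == &copy[1]);  // no longer points into 'orig'
  }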
COLUMNS ARE USED INSIDE A STORED PROCEDURE
Problem: When 'SET' type columns are used in a DML
inside a stored procedure and a NULL value is passed
to that column, replication breaks.
Analysis: All stored procedure variables used inside
a DML will be substituted with NAME_CONST functions.
When NAME_CONST is used in this particular scenario,
i.e., when a NULL value is passed, the charset is copied
from the 'empty_set_string' member of the Field_set class.
The operator '=' overload method inside the 'String' class
does not copy str_charset from the R.H.S object to the L.H.S object.
Hence the charset is wrongly set in the string assignment.
Fix: Handle copying the str_charset member in the operator '='
overload method.
sql/sql_string.h:
Handled copying the str_charset member in the operator '='
overload method.
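A toy sketch of the fix (the real String lives in sql/sql_string.h and carries a str_charset member; everything else here is simplified): assignment must copy the charset along with the value.

  #include <cassert>

  struct CharsetInfo { const char *name; };
  CharsetInfo latin1 = {"latin1"};
  CharsetInfo binary_cs = {"binary"};

  class String {
    const char *ptr_;
    CharsetInfo *str_charset_;
  public:
    String() : ptr_(""), str_charset_(&latin1) {}
    String(const char *p, CharsetInfo *cs) : ptr_(p), str_charset_(cs) {}

    String &operator=(const String &s)
    {
      if (this != &s) {
        ptr_ = s.ptr_;
        str_charset_ = s.str_charset_;  // the fix: copy str_charset too
      }
      return *this;
    }
    CharsetInfo *charset() const { return str_charset_; }
  };

  int main()
  {
    String src("", &binary_cs);
    String dst;
    dst = src;
    assert(dst.charset() == &binary_cs); // charset travels with the value
  }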
COLUMNS ARE USED INSIDE A STORED PROCEDURE
Problem: The operator '=' overload method inside the
'String' class does not copy the str_charset member from
the R.H.S object to the L.H.S object. Hence the charset is
wrongly set when using string assignments.
Analysis: The above mentioned problem was
identified while doing the analysis of bug#14593883.
Though the test scenario mentioned in the bug page
is not an issue in the mysql-5.1 code, the actual root cause,
i.e., "str_charset member is not copied", exists in the
mysql-5.1 code base.
Fix: Handle copying the str_charset member in the operator '='
overload method.
sql/sql_string.h:
Handled copying the str_charset member in the operator '='
overload method.
PROBLEM: If multiple statements are sent by a single
request then only the last statement was
getting logged. An attacker can bypass the
audit log just by sending two consecutive
statements in one request.
SOLUTION: Each statement from a single request is now
logged.
PROBLEM AFTER MYSQL_HA_FIND
This problem occurred if a prepared statement tried to create a table
for which there already existed a view with the same name while a
SQL handler was opened.
Before DDL statements are executed, mysql_ha_rm_tables() is called
to remove any matching tables from the internal list of opened SQL
handler tables. This match was done on TABLE_LIST::db and
TABLE_LIST::table_name. This is problematic for views (which use
TABLE_LIST::view_db and TABLE_LIST::view_name) and anonymous
derived tables.
This patch fixes the problem by skipping TABLE_LISTs representing
anonymous derived tables and using get_db_name()/get_table_name()
which handles views when looking for SQL handler tables to remove.
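A self-contained sketch of the corrected matching (toy TABLE_LIST; get_db_name()/get_table_name() accessors exist in the real server, the rest is simplified): views are matched through their view names, and anonymous derived tables are skipped.

  #include <cstring>
  #include <cstdio>

  struct TableList {
    const char *db = nullptr, *table_name = nullptr;
    const char *view_db = nullptr, *view_name = nullptr;
    bool is_derived = false;

    const char *get_db_name() const    { return view_db ? view_db : db; }
    const char *get_table_name() const { return view_name ? view_name
                                                          : table_name; }
  };

  bool handler_table_matches(const TableList &t, const char *db,
                             const char *name)
  {
    if (t.is_derived)                // skip anonymous derived tables
      return false;
    return std::strcmp(t.get_db_name(), db) == 0 &&
           std::strcmp(t.get_table_name(), name) == 0;
  }

  int main()
  {
    TableList view;
    view.view_db = "test";
    view.view_name = "v1";
    std::printf("%d\n", handler_table_matches(view, "test", "v1")); // 1
  }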
IN IN-CLAUSE USING MYISAM OR MEMORY ENGINE
Backport from 5.6. Original message:
A combination of circumstances caused data loss:
* The query has IN subqueries nested twice,
* the WHERE clause of the inner subquery refers to the
outer field, and the whole WHERE clause returns FALSE,
* the inner subquery has a LEFT JOIN that joins a single
row with a row of NULLs; one of those NULL columns
represents the select list of the subquery.
Normally, that inner subquery should return an empty record set.
However, in our case:
* the Item_is_not_null_test item goes constant, since
its underlying field is NULL (because of LEFT JOIN ... ON
FALSE of a const table row with a row of NULLs);
* we evaluate Item_is_not_null_test::val_int() as a part
of the fake HAVING expression of the transformed subquery;
* since the underlying field is NULL, we optimize
out the whole fake HAVING expression as FALSE, as well
as the whole subquery, with a zero result:
"Impossible HAVING noticed after reading const tables";
* thus, the optimizer ignores the presence of the WHERE
clause (the WHERE expression is FALSE in our case, so
the subquery should return empty set);
* however, during the evaluation of
Item_is_not_null_test::val_int() in the optimizer,
it marked its "owner" with the "was_null" flag -- that
forced the subquery to return UNKNOWN instead of an empty
set.
That caused a wrong result.
The problem is a regression caused by a small cleanup in
the fix for bug 11827369 (the Item_is_not_null_test part)
that conflicts with optimizations in the fix for bug 11752543.
Before that regression, Item_is_not_null_test items
were never constants.
The fix is a rollback of the Item_is_not_null_test parts
of the bug 11827369 fix.
GRANT STATEMENT
Description: A missing length check causes a problem while
copying source to destination when
lower_case_table_names is set to a value
other than 0. This patch fixes the issue
by ensuring that the required bound check is
performed.
ANALYSIS
--------
When we open the view using open_new_frm(), it doesn't set the
table_list->table variable, and any access to table_list->table
will cause a crash.
FIX
---
Added a check during execution of the alter partition to return
an error if the table is a view.
[http://rb.no.oracle.com/rb/r/2001/ Approved by Mattias J ]
Problem:
When a system variable is being set to the DEFAULT value, the server
segfaults if there is no 'default' defined for that system variable.
For example, for the following statements the server segfaults.
set session rand_seed1=DEFAULT;
set session rand_seed2=DEFAULT;
Analysis:
The class sys_var represents one system variable. The class set_var represents
one system variable that is to be updated. The class set_var contains two
pieces of information: the system variable object (set_var::var)
and the value to be set (set_var::value).
When the given value is 'default', set_var::value will be NULL.
To update a system variable the member set_var::update() will be called,
which in turn will call sys_var::update() or sys_var::set_default() depending
on whether a value has been provided or not.
If the sys_var::set_default() is called, then the default value is obtained
either from the session scope or the global scope. This default value is
stored in a local temporary set_var object and then passed on to the
sys_var::update() call. A local temporary set_var object is needed because
sys_var::set_default() does not take set_var as an argument.
In the given scenario, the set_var::update() called sys_var::set_default().
And this sys_var::set_default() obtains the default value and then calls
sys_var::update(). To pass this value to sys_var::update() a local set_var
object is being created. While creating this local set_var object, its member
set_var::var was incorrectly left as 0.
Solution:
Instead of creating a local set_var object, the sys_var::set_default() can take
the set_var object as an argument just like sys_var::update().
rb://1996 approved by Nirbhay and Ramil.
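A self-contained sketch of the changed interface (toy classes; the real signatures differ): set_default() now receives the set_var being updated, just like update(), so no half-initialized temporary set_var is needed.

  #include <cstdio>

  struct SetVar;

  struct SysVar {
    long value = 0;
    bool has_default = false;
    long default_value = 0;

    void update(SetVar *var, long v) { (void)var; value = v; }

    // The fix: take the set_var as an argument instead of building
    // a local temporary whose 'var' member was left as 0.
    void set_default(SetVar *var)
    {
      update(var, has_default ? default_value : 0);
    }
  };

  struct SetVar {
    SysVar *var;
    bool value_given;               // false means "SET ... = DEFAULT"
    long value;

    void update()
    {
      if (value_given)
        var->update(this, value);
      else
        var->set_default(this);     // no crash: a valid set_var is passed
    }
  };

  int main()
  {
    SysVar rand_seed1;              // no 'default' defined, like rand_seed1
    SetVar sv = {&rand_seed1, false, 0};
    sv.update();
    std::printf("%ld\n", rand_seed1.value);
  }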
Problem:
When the VALUES() function is inappropriately used in a SET statement, the
server exits.
set port = values(v);
This happens because the values(v) will be parsed as an Item_insert_value by
the parser. Both Item_field and Item_insert_value return the type as
FIELD_ITEM. But for Item_insert_value the field_name member is NULL. In
the set_var constructor, when the type of the item is FIELD_ITEM, we try to
access the non-existent field_name.
The class hierarchy is as follows:
Item -> Item_ident -> Item_field -> Item_insert_value
The Item_ident::field_name is NULL for Item_insert_value.
Solution:
In the parsing stage, in the set_var constructor, if the item type is
FIELD_ITEM and the field_name is non-existent, then it is probably
an Item_insert_value. So leave it as it is for later evaluation.
rb://2004 approved by Roy and Norvald.
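A minimal sketch of the guard (toy item types; the server's checks are more involved): only dereference field_name when it is actually set, deferring the Item_insert_value case.

  #include <cstdio>

  // Item_insert_value reports FIELD_ITEM just like Item_field,
  // but carries no field_name.
  struct Item {
    enum Type { FIELD_ITEM, OTHER_ITEM };
    Type type;
    const char *field_name;         // nullptr for Item_insert_value
  };

  void set_var_ctor(const Item &item)
  {
    if (item.type == Item::FIELD_ITEM && item.field_name == nullptr) {
      // Probably an Item_insert_value, e.g. SET port = VALUES(v);
      // leave it for later evaluation instead of crashing here.
      std::puts("deferred: no field name at parse time");
      return;
    }
    if (item.type == Item::FIELD_ITEM)
      std::printf("resolving field %s\n", item.field_name);
  }

  int main()
  {
    set_var_ctor({Item::FIELD_ITEM, nullptr});  // VALUES(v): no crash
    set_var_ctor({Item::FIELD_ITEM, "sort_buffer_size"});
  }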
Problem:
===========================================================
If the mysqld daemon is started without a --datadir
option, and we issue the SHOW VARIABLES LIKE 'DATADIR'; SQL command
at the client, it returns an empty path. This is because
mysql_real_data_home_ptr is being reset to NULL by the Sys_var_charptr
constructor call when the datadir is not given either through a
configuration file (no-defaults) or through mysqld parameters.
Solution:
===========================================================
mysql_real_data_home is an array which stores the path of the datadir,
and mysql_real_data_home_ptr is the pointer to it. The pointer was
being set to NULL by the constructor call of Sys_datadir, which is of
type Sys_var_charptr. This is because at the Sys_datadir call the def_val
parameter was being passed as DEFAULT(0), which is now replaced with
DEFAULT(mysql_real_data_home). The patch has been tested manually as it
is not possible to start mtr without a default config file.
In method mysql_binlog_send, right after detecting an EOF in the
read event loop, and before deciding if we should change to a new
binlog file, there is an execution window where new events can be
written to the binlog and a rotation can happen. When reaching
the test, the function will then change to a new binlog file,
ignoring all the events written in this window. This will result
in events not being replicated.
Only when the binlog is detected as deactivated in the event loop
of the dump thread can we really know that no more events
remain. For this reason, this test is now made under the log lock
at the beginning of the event loop when reading the events.
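A self-contained sketch of the corrected shape (generic mutex code, not the server's binlog internals): the "no more events, log inactive" decision is made under the same lock writers take, closing the window.

  #include <mutex>
  #include <queue>

  std::mutex log_lock;
  std::queue<int> binlog_events;
  bool binlog_active = true;

  // Writers append under the lock...
  void write_event(int ev)
  {
    std::lock_guard<std::mutex> guard(log_lock);
    binlog_events.push(ev);
  }

  // ...so this check cannot race with a write: no event can sneak in
  // between the EOF detection and the decision to switch files.
  bool safe_to_switch_to_next_binlog()
  {
    std::lock_guard<std::mutex> guard(log_lock);
    return binlog_events.empty() && !binlog_active;
  }

  int main()
  {
    write_event(1);
    // Not safe yet: an event is queued and the log is still active.
    return safe_to_switch_to_next_binlog() ? 1 : 0;
  }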
The technical problem was that THD::user_var_events_alloc was reset to NULL
from a valid value when a stored program is executed during the PREPARE statement.
The user visible problem was that the server crashed if user issued a PREPARE
statement using some combination of stored functions and user variables.
The fix is to restore THD::user_var_events_alloc to the original value.
This is a minimal fix for 5.5.
A more thorough patch has already been implemented for 5.6+. It avoids
evaluation of stored functions during the PREPARE phase.
From the user point of view, this bug is a regression, introduced by the patch for WL2649
(Number-to-string conversions), revid: bar@mysql.com-20100211041725-ijbox021olab82nv
However, the code resetting THD::user_var_events_alloc exists even in 5.1.
The WL just changed the way arguments are converted to strings and the bug became visible.
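A minimal sketch of the save-and-restore fix (toy types; RAII used here for brevity, the server restores the member explicitly): whatever nested stored-program execution does to the member, the caller's value comes back.

  #include <cassert>

  struct MemRoot {};                 // stand-in for MEM_ROOT

  struct Thd {
    MemRoot *user_var_events_alloc = nullptr;
  };

  struct SaveRestoreAlloc {
    Thd &thd;
    MemRoot *saved;
    explicit SaveRestoreAlloc(Thd &t)
      : thd(t), saved(t.user_var_events_alloc) {}
    ~SaveRestoreAlloc() { thd.user_var_events_alloc = saved; }
  };

  void execute_stored_program(Thd &thd)
  {
    SaveRestoreAlloc guard(thd);
    thd.user_var_events_alloc = nullptr;  // what the bug left behind
  }

  int main()
  {
    Thd thd;
    MemRoot prepare_root;
    thd.user_var_events_alloc = &prepare_root;
    execute_stored_program(thd);
    assert(thd.user_var_events_alloc == &prepare_root);  // restored
  }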
DOWNGRADED FROM 5.6.11 TO 5.6.10
The problem was new syntax not accepted by the previous version.
Fixed by adding a version comment of /*!50531 around the
new syntax.
Like this in the .frm file:
'PARTITION BY KEY /*!50611 ALGORITHM = 2 */ () PARTITIONS 3'
and also changing the output from SHOW CREATE TABLE to:
CREATE TABLE t1 (a INT)
/*!50100 PARTITION BY KEY */ /*!50611 ALGORITHM = 1 */ /*!50100 ()
PARTITIONS 3 */
It will always add the ALGORITHM into the .frm for KEY [sub]partitioned
tables, but for SHOW CREATE TABLE it will only add it in case it is the
non-default ALGORITHM = 1.
Also notice that for 5.5, it will say /*!50531 instead of /*!50611, which
will make upgrade from 5.5 > 5.5.31 to 5.6 < 5.6.11 fail!
If one downgrades a fixed version to the same major version (5.5 or 5.6), the
bug 14521864 will be visible again, but unless the .frm is updated, it will
work again when upgrading again.
Also fixed so that the .frm does not get an updated version
if a single partition check passes.
5.1 SERVER
Problem was caused by the COM_CHANGE_USER parsing code. That code ignored
the character set number passed in the COM_CHANGE_USER packet. Instead, the
character_set_client value was used. The user name was not converted at all.
Fixed by using the passed character set number to convert both db and user names.
If COM_CHANGE_USER does not contain a character set number, then
character_set_client is used to convert both names.
This is a backport of the fix for:
Bug#13633549 HANDLE_FATAL_SIGNAL IN TEST_IF_SKIP_SORT_ORDER/CREATE_SORT_INDEX
Don't invoke the range optimizer for a NULL select.
PROBLEM:
When a large number of connections are continuously made
with a wait_timeout of 600 seconds for some hours, some
connections remain after wait_timeout has expired, and
new connections get stuck under the configuration and
the scenario reported in bug#16196591.
FIX:
The cause of this bug is the issue identified and fixed in
BUG#16088658 in 5.6. Also, the LOCK_thread_count contention
issue fixed in BUG#15921866 in 5.6 needs to be in 5.5 as
well. Since the issue is not reproducible in-house, it has been
verified on the customer configuration that the issue could not
be reproduced after a 48-hour test with a non-debug build
which includes the above two fixes backported.
Analysis:
--------
As part of the fix for Bug#11757464, the 'out of memory' error
condition was not pushed to the diagnostic area as it requires
memory allocation. However, in the case of a SIGNAL/RESIGNAL 'out of
memory' error, the server may not actually be out of memory. Hence it
would be good to report the error in such cases.
Fix:
---
Push only non-fatal 'out of memory' errors to the diagnostic area.
Since a SIGNAL/RESIGNAL of an 'out of memory' error may not be fatal,
the error is reported.
Some queries with "SELECT ... FROM DUAL" nested subqueries
failed with an assertion on debug builds.
Non-debug builds were not affected.
There were a few different issues with similar assertion
failures on different queries:
1. The first problem was related to the incomplete propagation
of the "non-constant" item status from underlying subquery
items to the outer item tree: in some cases non-constants were
interpreted as constants and evaluated at the preparation stage
(val_int() calls within fix_fields() etc).
Thus, the default implementation of Item_ref::const_item() from
the Item parent class didn't take into account the "const_item"
status of the referenced item tree -- it used the insufficient
"used_tables() == 0" check instead. This worked in most cases
since our "non-constant" functions like RAND() and SLEEP() set
the RAND_TABLE_BIT in the used table map, so they aren't
non-constant from Item_ref's "point of view". However, the
"SELECT ... FROM DUAL" subquery may have an empty map of used
tables, but at the same time subqueries are never "constant" at
the context analysis stage (preparation, view creation etc).
So, the non-contantness of such subqueries was missed.
Fix: the Item_ref::const_item() function has been overloaded to
take into account both the (*ref)->const_item() status and the tricky
Item_ref::used_tables() return values, since the
(*ref)->const_item() call alone is not enough there (see the
sketch after this list).
2. In some cases instead of the const_item() call we check a
value of the Item::with_subselect field to recognize items
with nested subqueries. However, the Item_ref class didn't
propagate this value from the referenced item tree.
Fix: Item::has_subquery() and Item_ref::has_subquery()
functions have been backported from 5.6. All direct
references to the with_subselect fields of nested items have
been replaced with the has_subquery() function call.
3. The Item_func_regex class didn't propagate with_subselect
either, since it overloads the Item_func::fix_fields()
function with an insufficient fix_fields() implementation.
Fix: the Item_func_regex::fix_fields() function has been
modified to gather "constant" statuses from inner items.
4. The Item_func_isnull::update_used_tables() function has
a special branch for the underlying item where the maybe_null
value is false: in this case it marks the Item_func_isnull
as a "const_item" and sets the cached_value to false.
However, Item_func_isnull::val_int() was not in sync with
update_used_tables(): it took into account neither
const_item_cache nor cached_value for the case of
the "args[0]->maybe_null == false" optimization.
Since such an Item_func_isnull has "const_item() == true",
it's ok to call Item_func_isnull::val_int() etc from outer
items at the preparation stage. In this case the server tried to
call Item_func_isnull::args[0]->isnull(), and if the args[0]
item contained a nested not-nullable subquery, it failed
with an assertion.
Fix: take the value of Item_func_isnull::const_item_cache into
account in the val_int() function.
5. The auxiliary Item_is_not_null_test class has a similar
optimization in the update_used_tables() function as the
Item_func_isnull class has, and the same issue in the val_int()
function.
In addition to that the Item_is_not_null_test::update_used_tables()
doesn't update the const_item_cache value, so the "maybe_null"
optimization is useless there. Thus, we missed some optimizations
of cases like these (before and after the fix):
< <is_not_null_test>(a),
---
> <cache>(<is_not_null_test>(a)),
or
< having (<is_not_null_test>(a) and <is_not_null_test>(a))
---
> having 1
etc.
Fix: update Item_is_not_null_test::const_item_cache in
update_used_tables() and take it into account in val_int().
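A self-contained sketch of fix 1 above (toy item classes; the real const_item() logic also handles the tricky used_tables() cases): the reference consults the referenced item, not just the used-tables map.

  #include <cassert>

  typedef unsigned long long table_map;

  struct Item {
    virtual table_map used_tables() const { return 0; }
    // Default rule: no used tables means constant.
    virtual bool const_item() const { return used_tables() == 0; }
    virtual ~Item() {}
  };

  // A "SELECT ... FROM DUAL" subquery: uses no tables, yet is never
  // constant during context analysis.
  struct SubqueryItem : Item {
    bool const_item() const { return false; }
  };

  struct ItemRef : Item {
    Item *ref;
    explicit ItemRef(Item *r) : ref(r) {}
    table_map used_tables() const { return ref->used_tables(); }
    bool const_item() const
    {
      // The fix: both conditions must hold.
      return ref->const_item() && used_tables() == 0;
    }
  };

  int main()
  {
    SubqueryItem subq;
    ItemRef ref(&subq);
    // used_tables() is empty, but the reference is still non-constant.
    assert(!ref.const_item());
  }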
Backport of fix for Bug#13581962
mysql-test/r/cast.result:
Added test result for Bug#13581962,Bug#14096619
mysql-test/r/ctype_utf8mb4.result:
Added test result for Bug#13581962,Bug#14096619
mysql-test/t/cast.test:
Added test case for Bug#13581962,Bug#14096619
mysql-test/t/ctype_utf8mb4.test:
Added test case for Bug#13581962,Bug#14096619
sql/item_func.h:
limit max length by MY_INT64_NUM_DECIMAL_DIGITS
Backport of Bug#13581962
mysql-test/r/cast.result:
Added test result for Bug#13581962,Bug#14096619
mysql-test/t/cast.test:
Added test case for Bug#13581962,Bug#14096619
sql/item_func.h:
limit max length by MY_INT64_NUM_DECIMAL_DIGITS
Due to an internal change in the server code between 5.1 and 5.5
(wl#2649), the hash function used in KEY partitioning changed
for numeric and date/time columns (from binary hash calculation
to character-based hash calculation).
Also, enum/set changed from latin1 ci-based hash calculation to
binary hash between 5.1 and 5.5 (bug#11759782).
These changes make KEY [sub]partitioned tables on any of
the affected column types incompatible with 5.5 and above,
since the calculation of the partition id differs (see the toy
sketch at the end of this entry).
Also since InnoDB asserts that a deleted row was previously
read (positioned), the server asserts on delete of a row that
is in the wrong partition.
The solution for this situation is:
1) The partitioning engine will check that a delete/update goes to the
partition the row was read from and give an error otherwise, listing
the row's partitioning fields. This will avoid asserts in InnoDB and
also alert the user that there is a misplaced row. A detailed error
message will be given, including an entry in the error log consisting
of both table name, partition and row content (PK if it exists, otherwise
all partitioning columns).
2) A new optional syntax for KEY () partitioning in 5.5 is allowed:
[SUB]PARTITION BY KEY [ALGORITHM = N] (list_of_cols)
Where N = 1 uses the same hashing as 5.1 (numeric/date/time fields use
binary hashing, ENUM/SET use charset hashing) and N = 2 uses the same
hashing as 5.5 (numeric/date/time fields use charset hashing,
ENUM/SET use binary hashing). If not set on CREATE/ALTER it will
default to 2.
This new syntax should probably be ignored by NDB.
3) Since there is a demand for avoiding scanning through the full
table, during upgrade the ALTER TABLE t PARTITION BY ... command is
considered a no-op (only a .frm change) if everything except ALGORITHM
is the same and ALGORITHM was not set before, which allows manually
upgrading such a table with something like:
ALTER TABLE t PARTITION BY KEY ALGORITHM = 1 () or
ALTER TABLE t PARTITION BY KEY ALGORITHM = 2 ()
4) Enhanced partitioning with CHECK/REPAIR to also check for/repair
misplaced rows. (Also works for ALTER TABLE t CHECK/REPAIR PARTITION)
CHECK FOR UPGRADE:
If the .frm version is < 5.5.3
and uses KEY [sub]partitioning
and an affected column type
then it will fail with a message:
KEY () partitioning changed, please run:
ALTER TABLE `test`.`t1` PARTITION BY KEY ALGORITHM = 1 (a)
PARTITIONS 12
(i.e. current partitioning clause, with the addition of
ALGORITHM = 1)
CHECK without FOR UPGRADE:
if MEDIUM (default) or EXTENDED options are given:
Scan all rows and verify that each is in the correct partition.
Fail on the first misplaced row.
REPAIR:
if default or EXTENDED (i.e. not QUICK/USE_FRM):
Scan all rows and move every misplaced row into its correct
partition.
5) Updated mysqlcheck (called by mysql_upgrade) to handle the
new output from CHECK FOR UPGRADE, to run the ALTER statement
instead of running REPAIR.
This will allow mysql_upgrade (or CHECK TABLE t FOR UPGRADE) to upgrade
a KEY [sub]partitioned table that has any affected field type
and a .frm version < 5.5.3 to ALGORITHM = 1 without rebuild.
Also notice that if the .frm has a version of >= 5.5.3 and ALGORITHM
is not set, it is not possible to know if it consists of rows from
5.1 or 5.5! In these cases I suggest that the user does:
(optional)
LOCK TABLE t WRITE;
SHOW CREATE TABLE t;
(verify that it has no ALGORITHM = N, and to be safe, I would suggest
backing up the .frm file, to be used if one needs to change to another
ALGORITHM = N without needing to rebuild/repair)
ALTER TABLE t <old partitioning clause, but with ALGORITHM = N>;
which should set the ALGORITHM to N (if the table has rows from
5.1 I would suggest N = 1, otherwise N = 2)
CHECK TABLE t;
(here one could use the backed up .frm instead and change to a new N
and run CHECK again and see if it passes)
and if there are misplaced rows:
REPAIR TABLE t;
(optional)
UNLOCK TABLES;
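A toy illustration of why ALGORITHM = 1 and ALGORITHM = 2 disagree (invented hash; the server uses its collation hash functions): hashing the binary representation of a number and hashing its character representation yield different partition ids for the same key value.

  #include <cstdio>
  #include <cstring>
  #include <string>

  unsigned long toy_hash(const unsigned char *p, std::size_t len)
  {
    unsigned long h = 1;
    for (std::size_t i = 0; i < len; ++i)
      h = h * 31 + p[i];
    return h;
  }

  // "5.1 style" (ALGORITHM = 1): hash the binary form of the number.
  unsigned partition_id_algo1(int key_value, unsigned parts)
  {
    unsigned char buf[sizeof key_value];
    std::memcpy(buf, &key_value, sizeof buf);
    return toy_hash(buf, sizeof buf) % parts;
  }

  // "5.5 style" (ALGORITHM = 2): hash the character form of the number.
  unsigned partition_id_algo2(int key_value, unsigned parts)
  {
    std::string s = std::to_string(key_value);
    return toy_hash(reinterpret_cast<const unsigned char *>(s.data()),
                    s.size()) % parts;
  }

  int main()
  {
    // The same key can land in different partitions per algorithm,
    // which is why rows written under 5.1 look misplaced to 5.5.
    std::printf("ALGORITHM=1: %u  ALGORITHM=2: %u\n",
                partition_id_algo1(12345, 3),
                partition_id_algo2(12345, 3));
  }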
ON COL WITH COMPOSITE INDEX
This problem is caused by the patch for bug#11751794.
While checking for the keypart covering a non-grouping attribute, we are
not checking whether the root node of the SEL_ARG* tree for the index
has any value or not.
sql/opt_range.cc:
Check whether the keypart_tree has any range tree.
ON COL WITH COMPOSITE INDEX
This problem is caused by the patch for bug#11751794.
While checking for the keypart covering a non-grouping attribute, we are
not checking whether the root node of the SEL_ARG* tree for the index
has any value or not.
In a previous fix, user variables with zero-length names were incorrectly
considered as event corruption, despite being allowed by the server.
Fix this wrong assumption by allowing user variables with zero-length
names in the binary log again.
PROPERLY QUOTED IN BINLOG FILE
Problem: In a LOAD DATA FILE query, user variables are allowed
inside the "Into_list" and "Set_list". The user variables used
inside these two lists are not properly quoted with backticks
while the server is writing to the binlog. Hence user variable names
like a` cannot be used in this context.
Fix: Properly quote these variables while
writing to the binlog.
mysql-test/r/func_compress.result:
changing result file
mysql-test/r/variables.result:
changing result file
mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result:
changing result file
sql/item_func.cc:
Quote the user variable items
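A minimal sketch of the quoting rule (toy function; the server uses its append_identifier() helper): wrap the name in backticks and double any embedded backtick, so a user variable named a` is written as `a```.

  #include <cstdio>
  #include <string>

  std::string quote_identifier(const std::string &name)
  {
    std::string out = "`";
    for (std::string::size_type i = 0; i < name.size(); ++i) {
      if (name[i] == '`')
        out += '`';                 // escape by doubling
      out += name[i];
    }
    out += '`';
    return out;
  }

  int main()
  {
    // Prints `a``` -- safe to replay from the binlog.
    std::printf("%s\n", quote_identifier("a`").c_str());
  }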
Due to not resetting a member (last_added) of the
Deferred_log_events class inside a cleanup function
(Deferred_log_events::rewind), there is a memory
leak on filtered slaves.
Fix:
Reset last_added to NULL in the rewind() function.
sql/rpl_utility.cc:
Resetting last_added to NULL to avoid memory leak
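A toy model of the leak and the fix (invented container code; only the last_added reset mirrors the real change): rewind() must clear the cached pointer along with the contents.

  #include <cassert>
  #include <vector>

  struct LogEvent {};

  class DeferredLogEvents {
    std::vector<LogEvent *> array;
    LogEvent *last_added = nullptr;
  public:
    void add(LogEvent *ev) { array.push_back(ev); last_added = ev; }

    void rewind()
    {
      array.clear();
      last_added = nullptr;          // the fix: reset the cached pointer
    }

    LogEvent *last() const { return last_added; }
  };

  int main()
  {
    DeferredLogEvents deferred;
    LogEvent ev;
    deferred.add(&ev);
    deferred.rewind();
    assert(deferred.last() == nullptr);
  }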