ACCEPTED BUT PARSED INCORRECTLY
When setting the value of a system variable, it can be set in any of the following ways:
set sys_var="Iden1.Iden2"; //1
set sys_var='Iden1.Iden2'; //2
set sys_var=Iden1.Iden2; //3
set sys_var=.ident1.ident2; //4
set sys_var=`Iden1.Iden2`; //5
While parsing, for case 1 (when ANSI_QUOTES is enabled) and case 2,
the value is taken as a string literal (an item of type Item_string is made).
For cases 3 and 4 it is taken as an Item_field, where Iden1 is the table name
and Iden2 is the field name.
For case 5 it is again of Item_field type, where Iden1.Iden2 as a whole is
taken as the field name.
Now in cases 3 and 4, when such a value is assigned to a system variable
(which can take string or enumeration type data), only the field part is used.
This means only the Iden2 value will be set for the system variable, which
gives a wrong result.
Solution:
(for string type) Document that setting a system variable which takes a
string value using an unquoted identifier is not allowed, as it results
in unexpected behaviour.
(for enumeration type)
If Iden1.Iden2 is passed, give the error ER_WRONG_TYPE_FOR_VAR
(Incorrect argument type to variable).
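For illustration (the variable names below are ours, not from the original report):
set global init_connect = foo.bar;        -- string type: only the field part ('bar')
                                          -- would be applied; now documented as not allowed
set global concurrent_insert = foo.bar;   -- enumeration type: now gives
                                          -- ERROR 1232 (42000): Incorrect argument type to variable 'concurrent_insert'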
LOAD DATA CAN CAUSE SQL INJECTION
Problem:
=======
A long SET expression in LOAD DATA is incorrectly truncated
when written to the binary log.
Analysis:
========
LOAD DATA statements are reconstructed before
they are written to the binary log. When SET clauses are
specified as part of a LOAD DATA statement, the user-supplied
SET clause strings need to be stored as-is in order to
reconstruct the original user command. At present these
strings are stored as part of the SET clause item tree's
topmost Item node's name itself, which is incorrect, since
Item::name can hold at most MAX_ALIAS_NAME (256) bytes. Hence the
name gets truncated at 255 characters.
Because of this the rewritten LOAD DATA statement can be
terminated incorrectly. When such a statement is read back by
the mysqlbinlog tool, it reads a starting single quote and
continues to read until it finds an ending quote. Hence any
text written after that ending quote will be treated as
a new statement.
Fix:
===
Since the name field has a length restriction, the string value
should not be stored in Item::name. A new String list is
maintained to store the SET expression values, and this list
is read during reconstruction.
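For illustration (file, table and column names are ours, not from the bug report):

LOAD DATA INFILE '/tmp/data.txt' INTO TABLE t1 (c1, @v1)
  SET c2 = CONCAT('prefix_', @v1);

When this statement is rewritten for the binary log, the text of the SET
expression must be reproduced verbatim; truncating it at 255 characters can
leave an unterminated quoted string in the logged statement.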
Bug#13116514 - CREATE LOGFILE GROUP INITIAL_SIZE & UNDO_BUFFER_SIZE FAILS
Fix the parser to accept a size given with the suffix 'M' (megabytes), e.g. UNDO_BUFFER_SIZE=10M, in the CREATE LOGFILE GROUP statement.
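For example (logfile group and undo file names are ours), the following now parses:

CREATE LOGFILE GROUP lg1
  ADD UNDOFILE 'undo_1.log'
  INITIAL_SIZE = 128M
  UNDO_BUFFER_SIZE = 10M
  ENGINE = NDBCLUSTER;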
!TABLES->NEXT_NAME_RESOLUTION_TABLE) || !TAB
Problem:
The context info of the select query gets corrupted when a query
with GROUP_CONCAT having an ORDER BY is present in the ORDER BY
clause of the select query. As a result, the server crashes with
an assert.
Analysis:
While parsing the ORDER BY for GROUP_CONCAT, it is presumed that
it always appears before the actual ORDER BY of the select query.
As a result, the parser uses select->order_list to populate the
ORDER BY items of GROUP_CONCAT and creates a select->gorder_list
into which select->order_list is copied. Once this is done,
it empties select->order_list.
In the case presented in the bug page, as the ORDER BY is already
parsed when GROUP_CONCAT's ORDER BY is encountered, the parser
presumes that it is the second ORDER BY in the select query
and creates a fake_lex_unit, which results in the change of
context info.
Solution:
Make GROUP_CONCAT's ORDER BY parsing independent of the select query's ORDER BY.
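A hedged sketch of the kind of query that triggers the problem (table and
column names are ours, not the query from the bug page):

SELECT a FROM t1
GROUP BY a
ORDER BY GROUP_CONCAT(b ORDER BY b);

Here the select query's ORDER BY clause is already being parsed when
GROUP_CONCAT's ORDER BY is encountered.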
Post push fix:
setup_ref_array() now uses n_sum_items to determine size of ref_pointer_array.
The problem was that n_sum_items kept growing, it wasn't reset for each query.
A similar memory leak was fixed with the patch for:
Bug 14683676 ENDLESS MEMORY CONSUMPTION IN SETUP_REF_ARRAY WITH MAX IN SUBQUERY
Due to an internal change in the server code between 5.1 and 5.5
(wl#2649), the hash function used in KEY partitioning changed
for numeric and date/time columns (from binary hash calculation
to character-based hash calculation).
Also, ENUM/SET columns changed from latin1 case-insensitive hash
calculation to binary hash between 5.1 and 5.5 (bug#11759782).
These changes make KEY [sub]partitioned tables on any of
the affected column types incompatible with 5.5 and above,
since the calculation of the partition id differs.
Also, since InnoDB asserts that a deleted row was previously
read (positioned), the server asserts on delete of a row that
is in the wrong partition.
The solution for this situation is:
1) The partitioning engine will check that a delete/update goes to the
partition the row was read from and give an error otherwise, reporting
the row's partitioning fields. This avoids the asserts in InnoDB and
also alerts the user that there is a misplaced row. A detailed error
message is given, including an entry in the error log with the
table name, partition and row content (PK if it exists, otherwise
all partitioning columns).
2) A new optional syntax for KEY () partitioning in 5.5 is allowed:
[SUB]PARTITION BY KEY [ALGORITHM = N] (list_of_cols)
where N = 1 uses the same hashing as 5.1 (numeric/date/time fields use
binary hashing, ENUM/SET use charset hashing) and N = 2 uses the same
hashing as 5.5 (numeric/date/time fields use charset hashing,
ENUM/SET use binary hashing). If not set on CREATE/ALTER it
defaults to 2.
This new syntax should probably be ignored by NDB.
3) Since there is a demand for avoiding scanning through the full
table, during upgrade the ALTER TABLE t PARTITION BY ... command is
considered a no-op (only a .frm change) if everything except ALGORITHM
is the same and ALGORITHM was not set before, which allows manually
upgrading such a table with something like:
ALTER TABLE t PARTITION BY KEY ALGORITHM = 1 () or
ALTER TABLE t PARTITION BY KEY ALGORITHM = 2 ()
4) Enhanced partitioning with CHECK/REPAIR to also check for/repair
misplaced rows. (Also works for ALTER TABLE t CHECK/REPAIR PARTITION)
CHECK FOR UPGRADE:
If the .frm version is < 5.5.3
and the table uses KEY [sub]partitioning
and an affected column type,
then it will fail with a message:
KEY () partitioning changed, please run:
ALTER TABLE `test`.`t1` PARTITION BY KEY ALGORITHM = 1 (a)
PARTITIONS 12
(i.e. current partitioning clause, with the addition of
ALGORITHM = 1)
CHECK without FOR UPGRADE:
if MEDIUM (default) or EXTENDED options are given:
Scan all rows and verify that each is in the correct partition.
Fail on the first misplaced row.
REPAIR:
if default or EXTENDED (i.e. not QUICK/USE_FRM):
Scan all rows and move every misplaced row into its correct
partition.
5) Updated mysqlcheck (called by mysql_upgrade) to handle the
new output from CHECK FOR UPGRADE, to run the ALTER statement
instead of running REPAIR.
This will allow mysql_upgrade (or CHECK TABLE t FOR UPGRADE) to upgrade
a KEY [sub]partitioned table that has any affected field type
and a .frm version < 5.5.3 to ALGORITHM = 1 without rebuild.
Also notice that if the .frm has a version of >= 5.5.3 and ALGORITHM
is not set, it is not possible to know if it consists of rows from
5.1 or 5.5! In these cases I suggest that the user does:
(optional)
LOCK TABLE t WRITE;
SHOW CREATE TABLE t;
(verify that it has no ALGORITHM = N, and to be safe, I would suggest
backing up the .frm file, to be used if one needs to change to another
ALGORITHM = N without having to rebuild/repair)
ALTER TABLE t <old partitioning clause, but with ALGORITHM = N>;
which should set the ALGORITHM to N (if the table has rows from
5.1 I would suggest N = 1, otherwise N = 2)
CHECK TABLE t;
(here one could use the backed up .frm instead and change to a new N
and run CHECK again and see if it passes)
and if there are misplaced rows:
REPAIR TABLE t;
(optional)
UNLOCK TABLES;
VARIABLES
Analysis:
-------------
After executing the query, the new value of the user-defined
variable is set in the function "select_dumpvar::send_data".
"select_dumpvar::send_data" first calls
"Item_func_set_user_var::save_item_result()". This function
checks the nullness of the Item_field passed to it as a parameter
and saves it. The nullness of the item is stored in
args[0]'s null_value flag. Then "select_dumpvar::send_data" calls
"Item_func_set_user_var::update()", which notices the null
result that was saved and calls "Item_func_set_user_var::
update_hash". But here null_value is not set and args[0]
is different from the one given to "Item_func_set_user_var::
save_item_result()". This causes "Item_func_set_user_var::
update_hash" to believe it is getting a non-null value.
"user_var_entry::length" is set to 0 and hence "user_var_entry::value"
is made to point to the extra_area allocated in "user_var_entry".
"Item_func_set_user_var::update_hash" then tries to write
to memory beyond extra_area for result type DECIMAL. Because of
this, an invalid write is reported by Valgrind.
Before this bug was introduced, we avoided this problem by
creating the "Item_func_set_user_var" object with the same
Item_field as args[0] and as the parameter to
Item_func_set_user_var::save_item_result(). But now
they refer to different args[0]. Because of this, the
null_value flag set on the parameter Item_field in
"Item_func_set_user_var::save_item_result()" is not
reflected in the "Item_func_set_user_var" object.
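A hedged illustration of the class of statement involved (table and column
names are ours): assigning a possibly NULL DECIMAL result to a user variable
goes through select_dumpvar::send_data(), e.g.

SELECT d INTO @v FROM t1 LIMIT 1;   -- d is a DECIMAL column whose value may be NULL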
Fix:
------------
This issue is reported on version 5.5.24. The issue does not exist
in 5.5.23, 5.1, 5.6 and trunk.
This issue was introduced by
revid:georgi.kodinov@oracle.com-20120309130449-82e3bs5v3et1x0ef (fix for
bug #12408412), which was pushed into 5.5 and later releases. This patch
was later reverted in 5.6 and trunk by
revid:norvald.ryeng@oracle.com-20121010135242-xj34gg73h04hrmyh (fix for
bug #14664077). That patch has now been backported to 5.5 as well to fix this issue.
Print the warning (note):
YEAR(x) is deprecated and will be removed in a future release. Please use YEAR(4) instead
on "CREATE TABLE ... YEAR(x)" or "ALTER TABLE MODIFY ... YEAR(x)", where x != 4
------------------------------------------------------------
revno: 3258
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
branch nick: mysql-trunk-bug12663165
timestamp: Thu 2011-07-14 10:05:12 +0200
message:
Bug#12663165 SP DEAD CODE REMOVAL DOESN'T UNDERSTAND CONTINUE HANDLERS
When stored routines are loaded, a simple optimizer tries to locate
and remove dead code. The problem was that this dead code removal
did not work correctly with CONTINUE handlers.
If a statement triggers a CONTINUE handler, the following statement
will be executed after the handler statement has completed. This
means that the following statement is not dead code even if the
previous statement unconditionally alters control flow. This fact
was lost on the dead code removal routine, which ended up removing
instructions that could have been executed. This could
then lead to assertions, crashes and generally bad behavior when
the stored routine was executed.
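A hedged sketch of the pattern involved (this is our illustration, not the
test case added to sp.test):

DELIMITER //
CREATE FUNCTION f1() RETURNS INT
BEGIN
  DECLARE CONTINUE HANDLER FOR SQLEXCEPTION SET @err = 1;
  -- If the SELECT below raises an error, the CONTINUE handler runs and
  -- execution resumes at the next statement, so "RETURN 0" is not dead
  -- code even though RETURN normally ends the function unconditionally.
  RETURN (SELECT a FROM no_such_table);
  RETURN 0;
END //
DELIMITER ;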
This patch fixes the problem by marking as live code all stored
routine instructions that are in the same scope as a CONTINUE handler.
Test case added to sp.test.
There was a memory leak when running some tests on PB2.
The reason for the failure is an early return from change_master()
that was supposed to deallocate a dyn-array.
The same bug (Bug#58915) was fixed in trunk by relocating the dyn-array
destruction into THD::cleanup_after_query(), which can't be bypassed.
The current patch backports magne.mahre@oracle.com-20110203101306-q8auashb3d7icxho
and adds two optimizations: a static buffer for the dyn-array to base on,
and calling the array initialization only when it is actually necessary rather
than for each CHANGE MASTER as before.
There was a memory leak when running some tests on PB2.
The reason for the failure is an early return from change_master()
that was supposed to deallocate a dyn-array.
Fixed by relocating the dyn-array's destruction to ~LEX(), i.e. to
the end of the session, per Gleb's patch idea.
Two optimizations were done: a static buffer for the dyn-array to base on,
and calling the array initialization only when it is actually necessary rather
than for each CHANGE MASTER as before.
SET STATEMENT.
A server built with debug asserts, and a server built without debug, crashes
if a user tries to run a stored procedure containing a query with a subquery
that includes either a LIMIT or a LIMIT ... OFFSET clause.
The problem was that Item::fix_fields() was not called for the items
representing the LIMIT and OFFSET clauses.
The solution is to call Item::fix_fields() right before evaluation in
st_select_lex_unit::set_limit().
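A hedged illustration of the affected pattern (procedure, table and column
names are ours):

CREATE PROCEDURE p1()
  SELECT (SELECT a FROM t1 LIMIT 1 OFFSET 1) AS x;
CALL p1();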
In 5.5, REFRESH SLAVE is used as an alias for RESET SLAVE and
was removed in 5.6. Resetting a slave through REFRESH SLAVE was
causing errors on the valgrind platform since reset_slave_info
was undefined.
To fix the problem, we set reset_slave_info when handling
REFRESH SLAVE.
Before BUG#28796, an empty host was used to identify that an instance was no
longer a slave. However, BUG#28796 changed this behavior and one cannot set
an empty host. Besides, a RESET SLAVE only cleans up information on the next
event to retrieve from the master, disables ssl and resets heartbeat period.
So a call to SHOW SLAVE STATUS after issuing a RESET SLAVE still returns some
valid information, such as host, port, user and password.
To fix this problem, we have introduced the command RESET SLAVE ALL that does
what a regular RESET SLAVE does and also clears host, port, user and password
information thus allowing users to identify when an instance is no longer a
slave.
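A hedged sketch of the new behaviour:

STOP SLAVE;
RESET SLAVE;       -- SHOW SLAVE STATUS still reports host, port, user and password
RESET SLAVE ALL;   -- additionally clears host, port, user and password,
                   -- so the instance is no longer identified as a slave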
When CREATE TABLE wasn't given ENGINE=... it would determine
the default ENGINE at parse-time rather than at execution
time, leading to incorrect behaviour (namely, later changes
to the default engine being ignored) when calling CREATE TABLE
from a stored procedure.
We now defer working out the default engine till execution of
CREATE TABLE.
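A hedged illustration (procedure and table names are ours):

SET SESSION default_storage_engine = InnoDB;
CREATE PROCEDURE make_t1() CREATE TABLE t1 (a INT);
CALL make_t1();          -- t1 is created with InnoDB
DROP TABLE t1;
SET SESSION default_storage_engine = MyISAM;
CALL make_t1();          -- before the fix, t1 could again be created with the engine
                         -- captured when the statement was parsed; now the default
                         -- in effect at execution time (MyISAM) is used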
THE EVENT STATUS.
Any ALTER EVENT statement on a disabled event re-enabled it
(unless the ALTER EVENT statement explicitly disabled the event).
The problem was that during processing of an ALTER EVENT statement
the value of the status field was overwritten unconditionally even if a new
value was not specified explicitly. As a consequence this field
was set to the default value for status, which corresponds to ENABLE.
The solution is to check whether the status field was explicitly specified in
the ALTER EVENT statement before assigning a new value to it.
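For example (event name and schedule are ours):

CREATE EVENT ev1 ON SCHEDULE EVERY 1 DAY DISABLE DO SET @x = 1;
ALTER EVENT ev1 COMMENT 'no status change';
-- before the fix, ev1 silently became ENABLED; now it stays DISABLED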
SEEMS TO BE 'LEAKING' INTO THE SCHEMA NAME SPACE)
and bug#12428824 (Parser stack overflow and crash in sp_add_used_routine
with obscure query).
The first problem was that attempts to call a stored function by
its fully qualified name ended up with the unwarranted error "ERROR 1305
(42000): FUNCTION someMixedCaseDb.my_function_name does not exist"
if this function belonged to a schema that had uppercase letters in
its name AND --lower_case_table_names was equal to either 1 or 2.
The second problem was that the 5.5 version of the MySQL server might
crash when a user tried to call a stored function with a too-long name
or a too-long database name (i.e. if the function and database names
combined occupied more than 2*3*64 bytes in utf8). This issue didn't
affect server versions < 5.5.
The first problem was caused by the fact that when a stored
function was called by its fully qualified name we didn't lowercase the
name of its schema before looking up the function in the
mysql.proc table, even though lower_case_table_names mode was on.
As a result we were unable to find the function, since during its
creation we store the lowercased version of the schema name in the system
table in this mode, and the field for the schema name uses binary collation.
Calls to stored procedures were unaffected by this problem since for
them the schema name is converted to lowercase as necessary.
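A hedged illustration of the first problem (the schema and function names
follow the error message quoted above but are otherwise ours):

-- with lower_case_table_names = 1 (or 2)
CREATE DATABASE someMixedCaseDb;
CREATE FUNCTION someMixedCaseDb.my_function_name() RETURNS INT RETURN 1;
SELECT someMixedCaseDb.my_function_name();
-- before the fix:
-- ERROR 1305 (42000): FUNCTION someMixedCaseDb.my_function_name does not exist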
The reason for the second bug was that the MySQL server didn't check the
lengths of the function name and database name before proceeding with
execution of the stored function. As a consequence a too-long database name
or function name caused buffer overruns in places where the code assumes
that their length is within fixed limits, like mdl_key_init() in 5.5.
Again, this issue didn't affect calls to stored procedures, as for them
the lengths of the schema name and procedure name are properly checked.
This patch fixes both of these bugs by adding calls to check_db_name()
and check_routine_name() to the grammar rule which corresponds to a call
to a stored function. These functions ensure that the lengths of the
database name and function name of the called routine are within the
standard limits. Moreover, the call to check_db_name() handles conversion
of the database name to lowercase if --lower_case_table_names mode is on.
Note that even though the second issue seems to be reproducible only in
5.5, we still add the code fixing it to 5.1, to be on the safe side (and
to make the code a bit more robust against possible future changes).
If LOAD DATA INFILE featured a SET clause, the name=value pairs
would be regenerated using Item::print. Unfortunately, that code
is mostly optimized for EXPLAIN EXTENDED output and such, and cannot
be relied on to return valid SQL.
We now record each value in its original, user-supplied form and use
that to create LOAD DATA INFILE statements for statement-based
replication.
MAP 'REPAIR TABLE' TO RECREATE +ANALYZE FOR ENGINES NOT
SUPPORTING NATIVE REPAIR
Executing 'mysqlcheck --check-upgrade --auto-repair ...' will first issue
'CHECK TABLE FOR UPGRADE' for all tables in the database in order to check if the
tables are compatible with the current version of MySQL. Any tables that are
found incompatible are then upgraded using 'REPAIR TABLE'.
The problem was that some engines (e.g. InnoDB) do not support 'REPAIR TABLE'.
This caused any such tables to be left incompatible. As a result such tables were
not properly fixed by the mysql_upgrade tool.
This patch fixes the problem by first changing 'CHECK TABLE FOR UPGRADE' to return
a different error message if the engine does not support REPAIR. Instead of
'Table upgrade required. Please do "REPAIR TABLE ..."' it will report
'Table rebuild required. Please do "ALTER TABLE ... FORCE ..."'.
Second, the patch changes mysqlcheck to do 'ALTER TABLE ... FORCE' instead of
'REPAIR TABLE' in these cases.
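For example (database and table names are ours), for an incompatible InnoDB
table mysqlcheck now issues roughly:

ALTER TABLE `test`.`t1` FORCE;

instead of:

REPAIR TABLE `test`.`t1`;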
This patch also fixes 'ALTER TABLE ... FORCE' to actually rebuild the table.
This change should be reflected in the documentation. Before this patch,
'ALTER TABLE ... FORCE' was unused (See Bug#11746162)
Test case added to mysqlcheck.test
and Order By
When having a UNION statement in a subquery, with no
referenced tables (or only a reference to the virtual
table 'dual'), the UNION did not allow an ORDER BY clause.
e.g.:
SELECT(SELECT 1 AS a UNION
SELECT 0 AS a
ORDER BY a) AS b or
SELECT(SELECT 1 AS a FROM dual UNION
SELECT 0 as a
ORDER BY a) AS b
In addition, an ORDER BY / LIMIT clause was not accepted
in subqueries even for single SELECT statements with no
referenced tables (or with 'dual' as table reference)
e.g.:
SELECT(SELECT 1 AS a ORDER BY a) AS b or
SELECT(SELECT 1 AS a FROM dual ORDER BY a) AS b
The fix was to allow an optional ORDER BY/LIMIT clause to
the grammar for these cases.
See also: Bug#57986
if embedded in a SELECT
An ORDER BY clause was bound to the incorrect
(sub-)statement when used in a UNION context.
In a query like:
SELECT * FROM a UNION SELECT * FROM b ORDER BY c
the result of SELECT * FROM b is sorted, and then
combined with a. The correct behaviour is that
the ORDER BY clause should be applied to the
final result set. Similar behaviour was seen for LIMIT
clauses as well.
In a UNION statement, there will be a select_lex
object for each of the two selects, and a
select_lex_unit object that describes the UNION
itself. Similarly, the same behaviour was also
seen on derived tables.
The bug was caused by using a grammar rule for
ORDER BY and LIMIT that bound these elements
to thd->lex->current_select, which points to the
last of the two selects, instead of to the
fake_select_lex member of the master select_lex_unit
object.
bug #57006 "Deadlock between HANDLER and FLUSH TABLES WITH READ
LOCK" and bug #54673 "It takes too long to get readlock for
'FLUSH TABLES WITH READ LOCK'".
The first bug manifested itself as a deadlock which occurred
when a connection, which had some table open through HANDLER
statement, tried to update some data through DML statement
while another connection tried to execute FLUSH TABLES WITH
READ LOCK concurrently.
What happened was that FTWRL in the second connection managed
to perform the first step of GRL acquisition and thus blocked all
upcoming DML. After that it started to wait for the table opened
through the HANDLER statement to be flushed. When the first connection
tried to execute DML, it started to wait for the GRL / the second
connection, creating a deadlock.
The second bug manifested itself as starvation of FLUSH TABLES
WITH READ LOCK statements in cases when there was a constant
stream of concurrent DML statements (in two or more
connections).
This happened because requests for protection against GRL
which were acquired by DML statements ignored the presence of a
pending GRL and thus the latter was starved.
This patch solves both these problems by re-implementing GRL
using metadata locks.
Similar to the old implementation, acquisition of the GRL in the new
implementation is two-step. During the first step we block
all concurrent DML and DDL statements by acquiring global S
metadata lock (each DML and DDL statement acquires global IX
lock for its duration). During the second step we block commits
by acquiring global S lock in COMMIT namespace (commit code
acquires global IX lock in this namespace).
Note that unlike in the old implementation, acquisition of
protection against the GRL in DML and DDL is semi-automatic.
We assume that any statement which should be blocked by the GRL
will either open and acquire write locks on tables or acquire
metadata locks on the objects it is going to modify. For any such
statement the global IX metadata lock is automatically acquired
for its duration.
The first problem is solved because waits for the GRL become
visible to the deadlock detector in the metadata locking subsystem,
and thus deadlocks like the one in the first bug become impossible.
The second problem is solved because the global S locks which
are used for the GRL implementation are given preference over the
IX locks which are acquired by concurrent DML (and we can
switch to fair scheduling in the future if needed).
Important change:
FTWRL/GRL no longer blocks DML and DDL on temporary tables.
Before this patch the behavior was not consistent in this respect:
in some cases DML/DDL statements on temporary tables were
blocked while in others they were not. Since the main use cases
for FTWRL are various forms of backups and temporary tables are
not preserved during backups we have opted for consistently
allowing DML/DDL on temporary tables during FTWRL/GRL.
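The user-visible workflow is unchanged; a hedged sketch of the typical
backup pattern under the new implementation:

FLUSH TABLES WITH READ LOCK;  -- step 1 blocks DML/DDL (global S metadata lock),
                              -- step 2 blocks commits (S lock in the COMMIT namespace)
-- take the backup/snapshot here; DML/DDL on temporary tables is still allowed
UNLOCK TABLES;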
Important change:
This patch changes the thread state names which are used when
DML/DDL or FTWRL is waiting for the global read lock. It is now
either "Waiting for global read lock" or "Waiting for commit
lock" depending on which stage FTWRL is at.
Incompatible change:
To solve a deadlock in the events code which was exposed by this
patch we had to replace the LOCK_event_metadata mutex with
metadata locks on events. As a result we had to prohibit
DDL on events under LOCK TABLES.
This patch also adds extensive test coverage for interaction
of DML/DDL and FTWRL.
Performance of the new and old global read lock implementations
was compared in sysbench tests. There was no significant
difference between the new and old implementations.
In MySQL 5.5 the new reserved words include:
SLOW as in FLUSH SLOW LOGS
GENERAL as in FLUSH GENERAL LOGS
IGNORE_SERVER_IDS as in CHANGE MASTER ... IGNORE_SERVER_IDS
MASTER_HEARTBEAT_PERIOD as in CHANGE MASTER ... MASTER_HEARTBEAT_PERIOD
These are not reserved words in standard SQL or in Oracle 11g,
and as such, reserving them may affect existing applications.
We fix this by adding the new words to the list of
keywords that are allowed for labels in SPs.
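A hedged sketch (procedure body is ours) showing one of these words now
accepted as a label in a stored program:

DELIMITER //
CREATE PROCEDURE p1()
BEGIN
  general: LOOP
    LEAVE general;
  END LOOP general;
END //
DELIMITER ;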