The buffer was simply too small.
In 5.5 and trunk, the size is 311 + 31;
in 5.1 and below, the size is 331.
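For illustration (not necessarily the original regression test), a value
near DBL_MAX needs over 300 characters in fixed-point notation:

  SELECT FORMAT(1.7976931348623157e+308, 2);
  -- the integer part alone expands to 309 digits; with sign, digit
  -- grouping and fraction the string is far longer than the old buffer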
client/sql_string.cc:
Increase buffer size in String::set(double, ...)
include/m_string.h:
Increase FLOATING_POINT_BUFFER
mysql-test/r/type_float.result:
New test cases.
mysql-test/t/type_float.test:
New test cases.
sql/sql_string.cc:
Increase buffer size in String::set(double, ...)
sql/unireg.h:
Move definition of FLOATING_POINT_BUFFER
non-latin1 server error message
The problem was a one-byte buffer overflow in the conversion
of an error message between character sets. Before explaining
the problem further, some background information: before an
error message is sent to the user, the message is converted
to the character set specified in the character_set_results
variable. For various reasons, this conversion might cause
the message to increase in length -- for example, if certain
characters can't be represented in the result character set.
If the final message length is greater than the maximum allowed
length of an error message (MYSQL_ERRMSG_SIZE), the message
is truncated. The message is also always null-terminated
regardless of the character set. The problem arises from this
null-termination: if the message length reached the maximum,
the terminating null character would be placed one byte past
the end of the message buffer.
The solution is to reserve the end of the message buffer for
the null character.
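A hedged sketch of the conversion path (the exact repro depends on the
server message language and result character set; illustrative only):

  SET lc_messages = 'ru_RU';           -- assumes non-latin1 message text
  SET character_set_results = latin1;  -- forces conversion of the message
  SELECT * FROM no_such_table;         -- the converted message is capped at
                                       -- MYSQL_ERRMSG_SIZE and null-terminated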
mysql-test/t/ctype_errors.test:
Add test case for Bug#12736295.
sql/sql_error.cc:
The to_end pointer was actually pointing past the end of
the buffer. Since the message is always null terminated,
point to_end to the last position of the buffer.
The server crashes if it processes table map events that are
corrupted, especially if they map different tables to the same
identifier. This could happen, for instance, due to BUG 56226.
We fix this by checking whether the table map has already been
mapped before actually applying the event. If it has been mapped
with different settings, an error is raised and the slave SQL
thread stops. If it has been mapped with the same settings, the
event is skipped. If the table is set to be ignored by the
filtering rules, there is no change in behavior: the event is
skipped and ids are not checked.
mysql-test/suite/rpl/t/rpl_row_corruption.test:
Added a simple test case that checks both cases:
- multiple table maps with the same identifier
- multiple table maps with the same identifier, but only one
is processed (the others are filtered out)
When CREATE TABLE wasn't given ENGINE=... it would determine
the default ENGINE at parse-time rather than at execution
time, leading to incorrect behaviour (namely, later changes
to the default engine being ignored) when calling CREATE TABLE
from a stored procedure.
We now defer working out the default engine till execution of
CREATE TABLE.
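A minimal sketch of the user-visible effect (names illustrative):

  SET default_storage_engine = MyISAM;
  CREATE PROCEDURE p1() CREATE TABLE t1 (a INT);  -- no ENGINE=... given
  SET default_storage_engine = InnoDB;
  CALL p1();
  SHOW CREATE TABLE t1;
  -- before the fix: ENGINE=MyISAM (captured when the statement was parsed)
  -- after the fix:  ENGINE=InnoDB (resolved when CREATE TABLE executes)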
mysql-test/r/sp_trans.result:
Updated test results.
mysql-test/t/sp_trans.test:
Show that CREATE TABLE (called from a stored routine) heeds
any changes after CREATE SP / parse-time. Show that explicitly
requesting an ENGINE still works.
sql/sql_parse.cc:
If no ENGINE=... was given at parse-time, determine default
engine at execution time of CREATE TABLE.
sql/sql_yacc.yy:
If CREATE TABLE is not given ENGINE=..., don't bother
figuring out the default engine during parsing; we'll
do it at execution time instead to be aware of the
latest updates.
We must allocate a larger ref_pointer_array. We failed to account for extra
items allocated here:
#0 find_order_in_list
uint el= all_fields.elements;
all_fields.push_front(order_item); /* Add new field to field list. */
ref_pointer_array[el]= order_item;
order->item= ref_pointer_array + el;
#1 setup_order
#2 setup_without_group
#3 JOIN::prepare
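For illustration, a query shape (not the original test) where
find_order_in_list() pushes such an extra item:

  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (2, 1), (1, 2);
  SELECT a FROM t1 ORDER BY b + 1;
  -- 'b + 1' is not in the select list, so a hidden item is added to
  -- all_fields and stored in ref_pointer_array past the original elements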
mysql-test/r/order_by.result:
New test case.
mysql-test/r/union.result:
New test case.
mysql-test/t/order_by.test:
New test case.
mysql-test/t/union.test:
New test case.
sql/sql_lex.cc:
find_order_in_list() may need some extra space, so multiply og_num by two.
sql/sql_union.cc:
For UNION, 'n_sum_items' is accumulated in the "global_parameters" select_lex.
This number must be propagated to setup_ref_array().
When preparing a 'fake_select_lex' we need to use global_parameters->order_list
rather than fake_select_lex->order_list (see comments inside st_select_lex_unit::cleanup)
Bug#12637786 was fixed with rb:692 by marko. But that fix has a remaining
bug. It added this assert:
ut_ad(ind_field->prefix_len);
before a section of code that assumes there is a prefix_len.
The patch replaced code that explicitly avoided this with a check for
prefix_len. It turns out that the purge thread can get to that assert
without a prefix_len because it does not use a row_ext_t* .
When UNIV_DEBUG is not defined, the effect of this is that the purge thread
sets the dfield->len to zero and then cannot find the entry in the index to
purge. So secondary index entries remain unpurged.
This patch removes the assert. Instead, it uses
'if (ind_field->prefix_len) {...}'
around the section of code that assumes a prefix_len. This is the way the
patch I provided to Marko did it.
The test case is simply modified to do a sleep(10) in order to give the
purge thread a chance to run. Without the code change to row0row.c, this
modified testcase will assert if InnoDB was compiled with UNIV_DEBUG.
I tried to sleep(5), but it did not always assert.
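A hedged sketch of the scenario (schema and sizes illustrative, not the
actual testcase): a full-column secondary index whose indexed column is
stored off-page, followed by a delete and time for purge to run:

  -- (sketch only: whether the indexed column is actually moved off-page
  --  depends on the row size and column lengths)
  SET GLOBAL innodb_file_format = Barracuda;
  CREATE TABLE t1 (a INT PRIMARY KEY, b VARCHAR(255), c TEXT, KEY (b))
    ENGINE=InnoDB ROW_FORMAT=DYNAMIC;
  INSERT INTO t1 VALUES (1, REPEAT('x', 255), REPEAT('y', 60000));
  DELETE FROM t1;
  SELECT SLEEP(10);  -- give the purge thread a chance to run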
row_build_index_entry(): In innodb_file_format=Barracuda
(ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED), a secondary index on a
full column can refer to a field that is stored off-page in the
clustered index record. Take that into account.
rb:692 approved by Jimmy Yang
Before this fix, the test performance_schema.relaylog would fail
with sporadic failures related to statistics on update_cond.
The reason for these failures is that thread scheduling makes it
impossible to predict whether instrumented conditions will be used or not.
The fix is to relax the test case, to not collect statistics about:
- wait/synch/cond/sql/MYSQL_BIN_LOG::update_cond
- wait/synch/cond/sql/MYSQL_RELAY_LOG::update_cond
DB_COL_APPEARS_TWICE_IN_INDEX: Remove. This condition is already
checked and reported by MySQL before passing the index definition to
the storage engine.
row_create_index_for_mysql(): Remove the redundant check for
DB_COL_APPEARS_TWICE_IN_INDEX. When enforcing the column prefix index
limit, invoke dict_mem_index_free(index) to plug the memory leak. In
the loop, use index->n_def instead of dict_index_get_n_fields(index),
because the latter would be 0 for indexes that have not been copied to
the data dictionary cache.
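Hedged sketches of the statements involved (illustrative only):

  CREATE TABLE t1 (a INT, b VARCHAR(1000)) ENGINE=InnoDB;
  CREATE INDEX i1 ON t1 (a, a);    -- duplicate column: MySQL rejects this
                                   -- before the storage engine sees it
  CREATE INDEX i2 ON t1 (b(800));  -- column prefix beyond the InnoDB limit
                                   -- (767 bytes without innodb_large_prefix)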
innodb-use-sys-malloc.test:
Add test cases for attempting to trigger the error checks in
row_create_index_for_mysql(). Before MySQL 5.5 and WL#5743, the leak
is only reproducible if ha_innobase::max_supported_key_part_length()
returned a higher limit than the one used in
row_create_index_for_mysql().
In MySQL 5.5 and later, the leak is reproducible with
innodb_large_prefix=true.
rb:688 approved by Jimmy Yang
BOGUS "THE TABLE MYSQL.PROC IS MISSING,..."
There was a race condition between loading a stored routine
(function/procedure/trigger) specified by fully qualified name
SCHEMA_NAME.PROC_NAME and dropping the stored routine database.
The problem was that there is a window for a race condition when one
server thread tries to load a stored routine being executed and another
thread tries to drop the stored routine's schema.
This race condition window exists in the implementation of
mysql_change_db(), which is called by db_load_routine() while loading a
stored routine into the cache. mysql_change_db() calls
check_db_dir_existence(), which might fail because the specified database
was dropped by a concurrently executing DROP SCHEMA statement.
db_load_routine() calls mysql_change_db() with the 'force_switch' flag
set to 'true', so when the referenced database is not found, my_error()
is not called and mysql_change_db() returns OK. This shadows the schema
opening error in db_load_routine(). db_load_routine() then attempts to
parse the stored routine, which fails. As a result, an error is returned
to sp_cache_routines_and_add_tables_aux(), but since my_error() was not
called during error generation and hence THD::main_da was not set, the
generic "mysql.proc table corrupt" error is reported by
sp_cache_routines_and_add_tables_aux().
The fix is to install an error handler inside db_load_routine() for the
mysql_change_db() call, and to check afterwards whether ER_BAD_DB_ERROR
was caught.
sql/sql_db.cc:
Added synchronization point "before_db_dir_check" to emulate a race condition during
processing of CALL/DROP SCHEMA.
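A hedged sketch of a test built on that sync point (requires a debug
build; connection layout and names are illustrative):

  -- connection 1: park inside mysql_change_db() before the directory check
  SET DEBUG_SYNC = 'before_db_dir_check SIGNAL parked WAIT_FOR dropped';
  CALL testdb.p1();  -- hypothetical routine; blocks at the sync point
  -- connection 2: drop the schema while connection 1 is parked
  SET DEBUG_SYNC = 'now WAIT_FOR parked';
  DROP SCHEMA testdb;
  SET DEBUG_SYNC = 'now SIGNAL dropped';
  -- connection 1 resumes and, with the fix, reports ER_BAD_DB_ERROR
  -- instead of the bogus "mysql.proc is missing" error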
Fix a failure of the re-enabled innodb-index.test in the embedded server.
Apparently, the embedded server does not default to ENGINE=InnoDB when
copying an InnoDB table by CREATE TABLE t2 SELECT * FROM t1;
OLD VALUE OF INPUT PARAMETER.
The user-visible problem was that CASE-control-flow function
(not CASE-statement) misbehaved in stored routines under some
circumstances. The problem resulted in a crash or wrong data
returned. The error happened when expressions in CASE-function
were not of the same character set.
A CASE-function should return values of the same character set
for all branches. Internally, that means a new Item-instance
for the CONVERT(... USING <some charset>)-function is added
to the item tree when needed. The problem was that such changes
were not properly recorded using THD::change_item_tree(),
thus dangling pointers remained in the item tree after
THD::rollback_item_tree_changes(), which led to undefined
behavior (i.e. crash / wrong data) for subsequent executions of
the stored routine.
This bug was introduced by a patch for Bug 11753363
(44793 - CHARACTER SETS: CASE CLAUSE, UCS2 OR UTF32, FAILURE).
The fixed function is Item_func_case::fix_length_and_dec().
New CONVERT-items are added in agg_item_set_converter(),
which calls THD::change_item_tree().
The problem was that an intermediate array was passed
to agg_item_set_converter(). Thus, THD::change_item_tree() there
was called on intermediate objects.
Note: those intermediate objects are allocated on THD's
memory root, so it's Ok to put them into "changed item lists".
The fix is to track changes on the correct objects.
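A minimal sketch of the failing pattern (names illustrative): a CASE
function mixing character sets inside a routine executed more than once:

  CREATE PROCEDURE p1()
    SELECT CASE _latin1 'a' WHEN _utf8 'a' THEN 1 ELSE 2 END;
  CALL p1();  -- first execution adds CONVERT(... USING ...) items
  CALL p1();  -- re-execution followed dangling pointers before the fix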
TO POSITION FIRST CAN CAUSE DATA TO BE CORRUPTED".
ALTER TABLE MODIFY/CHANGE ... FIRST did nothing except renaming
columns if new version of the table had exactly the same
structure as the old one (i.e. as result of such statement, names
of columns changed their order as specified but data in columns
didn't). The same thing happened for ALTER TABLE DROP COLUMN/ADD
COLUMN statements which were supposed to produce new version of
table with exactly the same structure as the old version.
I.e. in the latter case the result was the same as if the old
column had been renamed instead of being dropped, and a new
column had been created and filled with its default value.
Both these problems were caused by the fact that ALTER TABLE
implementation incorrectly interpreted both these situations as
simple renaming of columns and assumed that in-place ALTER TABLE
algorithm could have been used for them.
This patch fixes this problem by ensuring that in cases when some
column is moved to the first position or some column is dropped
the default ALTER TABLE algorithm involving table copying is
always used. This is achieved by detecting such situations in
mysql_prepare_alter_table() and setting Alter_info::change_level
to ALTER_TABLE_DATA_CHANGED for them.
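The user-visible symptom, sketched with illustrative data:

  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 2);
  ALTER TABLE t1 MODIFY b INT FIRST;
  SELECT * FROM t1;
  -- before the fix: b=1, a=2 (columns renamed in place, data not moved)
  -- after the fix:  b=2, a=1 (table is copied, data follows the columns)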
mysql-test/r/alter_table.result:
Added test for bug #12652385 - "61493: REORDERING COLUMNS TO
POSITION FIRST CAN CAUSE DATA TO BE CORRUPTED".
mysql-test/t/alter_table.test:
Added test for bug #12652385 - "61493: REORDERING COLUMNS TO
POSITION FIRST CAN CAUSE DATA TO BE CORRUPTED".
sql/sql_table.cc:
Changed mysql_prepare_alter_table() to detect situations in
which some column is moved to the first position or some column
is dropped, and to ensure that such ALTER TABLE statements won't
be carried out using the in-place algorithm. The latter could
have happened before this patch if the new version of the table
had the same structure as the old one (except the column names).
UPDATED TWICE
For multi update it is not allowed to update a column
of a table if that table is accessed through multiple aliases
and either
1) the updated column is used as partitioning key
2) the updated column is part of the primary key
and the primary key is clustered
This check is done in unsafe_key_update().
The bug was that for case 2), it was checked whether
updated_column_number == table_share->primary_key
However, the primary_key variable is the index number of the
primary key, not a column number.
Prior to this bugfix, the first column was wrongly believed to be
the primary key. The columns covered by an index are found in
table->key_info[idx_number]->key_part. The bugfix is to check if
any of the columns in the keyparts of the primary key are
updated.
The user-visible effect is that for storage engines with
clustered primary key (e.g. InnoDB but not MyISAM) queries
like
"UPDATE t1 AS A JOIN t2 AS B SET A.primkey=..."
will now error with
"ERROR HY000: Primary key/partition key update is not allowed
since the table is updated both as 'A' and 'B'."
instead of
"ERROR 1032 (HY000): Can't find record in 't1_tb'"
even if primkey is not the first column in the table. This
was the intended behavior of bugfix 11764529.
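A sketch of a now-rejected statement (schema illustrative; note that the
primary key is not the first column):

  CREATE TABLE t1 (a INT, pk INT, PRIMARY KEY (pk)) ENGINE=InnoDB;
  UPDATE t1 AS A, t1 AS B SET A.pk = A.pk + 100 WHERE A.a = B.a;
  -- before the fix this was allowed, because column number 1 (pk) was
  -- compared against the primary key's index number (0); with the fix
  -- the statement fails with the 'Primary key/partition key update' error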
mysql-test/r/multi_update.result:
Add test for bug#11882110
mysql-test/r/multi_update_innodb.result:
Add test for bug#11882110
mysql-test/t/multi_update.test:
Add test for bug#11882110
mysql-test/t/multi_update_innodb.test:
Add test for bug#11882110
sql/sql_update.cc:
unsafe_key_update() wrongly checked if the primary key index
number was the same as updated column number. Now it is checked
whether any of the columns making up the primary key is updated.
sql/table.h:
Fix comment on TABLE_SHARE::primary_key. The incorrect comment
was introduced by an earlier merge conflict (as per dlenev).
Issue:
When libmysqld/example/mysql_embedded is executed, it aborts. This is a
regression: it worked in 5.1 and fails in 5.5. The issue is that
remaining_argc/remaining_argv were not being assigned correctly in
init_embedded_server(), and they are used later in
init_common_variables().
Solution:
Rectified the code to pass the correct argc/argv to be used in
init_common_variables().
libmysqld/lib_sql.cc:
Rectified remaining_argc/remaining_argv assignment.
mysql-test/r/mysql_embedded.result:
Result file for the test case added.
mysql-test/t/mysql_embedded.test:
Added test case to verify libmysqld/example/mysql_embedded works.
This test case was failing on 5.5 and trunk for two reasons.
1) It waited for the "Waiting for table level lock" process
state, but this state was renamed to "Waiting for table
metadata lock" with the introduction of MDL in 5.5.
2) SET GLOBAL query_cache_size= 100000; gave a warning since
query_cache_size is supposed to be multiples of 1024.
This patch fixes these two issues and re-enables the test case.
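For reference (illustrative values), query_cache_size is rounded down to
a multiple of 1024, with a warning for non-multiples:

  SET GLOBAL query_cache_size = 100000;  -- warning: rounded down to 99328
  SET GLOBAL query_cache_size = 102400;  -- exact multiple of 1024, no warning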
RESULT CONSISTED OF MORE THAN ONE ROW
MySQL converts incorrect DATEs and DATETIMEs to '0000-00-00' on
insertion by default. This means that this sequence is possible:
CREATE TABLE t1(date_notnull DATE NOT NULL);
INSERT INTO t1 values (NULL);
SELECT * FROM t1;
0000-00-00
At the same time, ODBC drivers do not (or at least did not in the
90's) understand the DATE and DATETIME value '0000-00-00'. Thus,
to be able to query for the value 0000-00-00 it was decided in
MySQL 4.x (or maybe even before that) that for the special case
of DATE/DATETIME NOT NULL columns, the query "SELECT ... WHERE
date_notnull IS NULL" should return rows with date_notnull ==
'0000-00-00'. This is documented misbehavior that we do not want
to change.
The hack used to make MySQL return these rows is to convert
"date_notnull IS NULL" to "date_notnull = 0". This is, however,
only done if the table date_notnull belongs to is not an inner
table of an outer join. The rationale for this seems to be that
if there is no join match for the row in the outer table,
null-complemented rows would otherwise not be returned because
the null-complemented DATE value is actually NULL. On the other
hand, this means that the "return rows with 0000-00-00 when the
query asks for IS NULL"-hack is not in effect for outer joins.
In this bug, we have a LEFT JOIN that does not misbehave like
the documentation says it should. The fix is to rewrite
"date_notnull IS NULL" to "date_notnull IS NULL OR
date_notnull = 0"
if dealing with an OUTER JOIN, otherwise
"date_notnull IS NULL" to "date_notnull = 0"
as was done before.
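A sketch of the resulting behavior (illustrative schema):

  CREATE TABLE t1 (id INT);
  CREATE TABLE t2 (id INT, d DATE NOT NULL);
  INSERT INTO t1 VALUES (1), (2);
  INSERT INTO t2 VALUES (1, '0000-00-00');
  SELECT t1.id, t2.d FROM t1 LEFT JOIN t2 ON t1.id = t2.id
  WHERE t2.d IS NULL;
  -- returns id=1 (d = '0000-00-00', matched by "d = 0") as well as
  -- id=2 (null-complemented row, matched by "d IS NULL")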
Note:
The bug was originally reported as different result on first
and second execution of SP. The reason was that during first
execution the query was correctly rewritten to an inner join
due to a null-rejecting predicate. On second execution the
"IS NULL" -> "= 0" rewrite was done because there was no outer
join. The real problem, though, was incorrect date/datetime
IS NULL handling for OUTER JOINs.
mysql-test/r/type_datetime.result:
Add test for BUG#12561818
mysql-test/t/type_datetime.test:
Add test for BUG#12561818
sql/sql_select.cc:
Special handling of NULL for DATE/DATETIME NOT NULL columns:
In the case of outer join,
"date_notnull IS NULL"
is now rewritten to
"date_notnull IS NULL OR date_notnull = 0"