The problem was that the server didn't check the resulting size of a prepared
statement argument set using the mysql_send_long_data() API.
By calling mysql_send_long_data() several times, it was possible
to create an overly big string and thus force the server to allocate
memory for it. There was no way to limit this allocation.
The solution is to check the size of the result string against the
value of the max_long_data_size start-up parameter. When the intermediate
string exceeds max_long_data_size, an appropriate error message
is emitted.
We can't use the existing max_allowed_packet parameter for this purpose,
since its value is limited to 1GB; using it as a limit
for data set through the mysql_send_long_data() API would therefore have
been an incompatible change. The newly introduced max_long_data_size
parameter takes its value from max_allowed_packet unless it is set
explicitly. The new parameter is marked as deprecated
and will eventually be replaced by max_allowed_packet.
The max_long_data_size parameter can be set only at server
startup.
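A minimal illustration of the new parameter's behavior (the 16M value is
only an example):

  -- max_long_data_size can be given only at server startup, e.g.:
  --   mysqld --max_long_data_size=16777216
  -- When not set explicitly, it inherits the max_allowed_packet value:
  SHOW VARIABLES LIKE 'max_long_data_size';
  SHOW VARIABLES LIKE 'max_allowed_packet';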
fails when running with ps-protocol).
The problem was that when running in --ps-protocol mode, mysqltest.cc
didn't close the prepared statements it created. As a result, the plugins
could not be uninstalled because a prepared statement was still using them.
The fix is to add a dummy statement that forces mysqltest.cc to close
the last prepared statement (which uses a plugin-defined table).
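A sketch of the kind of dummy statement appended to such a test (any
simple statement that opens a new prepared statement would do):

  -- force mysqltest.cc to close the previous prepared statement,
  -- which still refers to a plugin-defined table:
  SELECT 1;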
When an RPM test build in a non-release branch is done,
the $MYSQL_BINDIR variable ends in "/usr"
(rather than in "/usr/lib" as in an RPM release build),
which made the "file_contents" test fail.
A branch for this case is added to the test.
The test result is unchanged.
Issue:
SSL_CIPHER set to a specific cipher name was not picked up by the
SHOW STATUS command.
Solution:
If a specific cipher name is specified, avoid overwriting the cipher
list with the default cipher names.
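For illustration (the cipher name is an example only):

  -- with the server started with a specific cipher, e.g.:
  --   mysqld --ssl-cipher=DHE-RSA-AES256-SHA
  -- the chosen cipher should now show up in:
  SHOW STATUS LIKE 'Ssl_cipher%';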
pre-locking list caused by triggers).
CREATE TRIGGER / DROP TRIGGER may actually change the
pre-locking list of (some) stored routines.
The SP-cache does not detect such changes. Thus, if an sp_head instance
is cached in the SP-cache, subsequent executions of the cached
sp_head will use an inaccurate pre-locking list.
The patch is to invalidate the SP-cache on CREATE TRIGGER / DROP TRIGGER.
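A sketch of the affected scenario (object names are hypothetical):

  CREATE PROCEDURE p() INSERT INTO t1 VALUES (1);
  CALL p();                        -- sp_head for p() is now cached
  CREATE TRIGGER trg BEFORE INSERT ON t1
    FOR EACH ROW INSERT INTO t2 VALUES (1);
  CALL p();                        -- without invalidation, the cached
                                   -- pre-locking list would not
                                   -- include t2 and trg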
MAP 'REPAIR TABLE' TO RECREATE +ANALYZE FOR ENGINES NOT
SUPPORTING NATIVE REPAIR
Executing 'mysqlcheck --check-upgrade --auto-repair ...' will first issue
'CHECK TABLE FOR UPGRADE' for all tables in the database in order to check if the
tables are compatible with the current version of MySQL. Any tables that are
found incompatible are then upgraded using 'REPAIR TABLE'.
The problem was that some engines (e.g. InnoDB) do not support 'REPAIR TABLE'.
This caused any such tables to be left incompatible. As a result such tables were
not properly fixed by the mysql_upgrade tool.
This patch fixes the problem by first changing 'CHECK TABLE FOR UPGRADE' to return
a different error message if the engine does not support REPAIR: instead of
"Table upgrade required. Please do 'REPAIR TABLE ...'", it will report
"Table rebuild required. Please do 'ALTER TABLE ... FORCE ...'".
Second, the patch changes mysqlcheck to issue 'ALTER TABLE ... FORCE' instead of
'REPAIR TABLE' in these cases.
This patch also fixes 'ALTER TABLE ... FORCE' to actually rebuild the
table; this change should be reflected in the documentation. Before this
patch, 'ALTER TABLE ... FORCE' was unused (see Bug#11746162).
Test case added to mysqlcheck.test
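In effect, for engines without native REPAIR support the repair step now
becomes (table name illustrative):

  -- old behavior, rejected by e.g. InnoDB:
  REPAIR TABLE t1;
  -- new behavior issued by mysqlcheck, rebuilding the table:
  ALTER TABLE t1 FORCE;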
NON-PRIMARY UNIQUE INDEX USING INNODB
This patch adds the HA_INPLACE_ADD_UNIQUE_INDEX_NO_WRITE
capability flag to InnoDB, indicating that concurrent reads
can be allowed while non-primary unique indexes are created.
This is a follow-up to Bug #11751388, which enabled concurrent
reads when creating non-primary, non-unique indexes.
Test case added to innodb_mysql_sync.test.
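For example, a statement such as this (names illustrative) no longer
blocks concurrent readers of the table:

  ALTER TABLE t1 ADD UNIQUE INDEX idx_u (col1);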
FLUSH TABLES under FLUSH TABLES <list> WITH READ LOCK leads
to assert failure.
This assert was triggered if a statement tried to upgrade a metadata
lock while FLUSH TABLE <list> WITH READ LOCK was active. The assert
checks that the connection already holds a global intention exclusive
metadata lock. However, FLUSH TABLE <list> WITH READ LOCK does not
acquire this lock, in order to be compatible with FLUSH TABLES WITH
READ LOCK. Therefore any metadata lock upgrade caused the assert to
be triggered.
This patch fixes the problem by preventing metadata lock upgrade
if the connection has an active FLUSH TABLE <list> WITH READ LOCK.
ER_TABLE_NOT_LOCKED_FOR_WRITE will instead be reported to the client.
Test case added to flush.test.
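A sketch of the previously asserting sequence (table name illustrative):

  FLUSH TABLES t1 WITH READ LOCK;
  FLUSH TABLES;  -- needs a metadata lock upgrade; previously hit the
                 -- assert, now fails with ER_TABLE_NOT_LOCKED_FOR_WRITE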
@ mysql-test/r/ctype_latin1.result
@ mysql-test/r/ctype_utf8.result
@ mysql-test/t/ctype_latin1.test
@ mysql-test/t/ctype_utf8.test
Adding tests
@ sql/mysqld.h
@ sql/item.cc
@ sql/sql_parse.cc
@ sql/sql_view.cc
Refactoring (thanks to Guilhem for the idea):
Item_string::print() was hard to understand because of the different
QT_ constants: in "query_type==QT_x", QT_x is explicitly included
but the other two QT_ constants are implicitly excluded. The combinations
with '||' and '&&' made this even harder.
- logic is now more "explicit" by changing QT_ constants to a bitmap of flags:
QT_ORDINARY: no change,
QT_IS -> QT_TO_SYSTEM_CHARSET | QT_WITHOUT_INTRODUCERS,
QT_EXPLAIN -> QT_TO_SYSTEM_CHARSET
(QT_EXPLAIN was introduced in the first version of the Bug#57341 patch)
- Item_string::print() is rewritten using those flags
Bugfix itself:
When QT_TO_SYSTEM_CHARSET is used alone (with no QT_WITHOUT_INTRODUCERS),
we print string literals as follows:
- display introducers if they were in the original query
- print ASCII characters as is
- print non-ASCII characters using hex-escape
Note: as "EXPLAIN" output is only for human readability
and does not need to be parsable SQL, using hex-escape is OK.
The ErrConvString class is perfectly suited for hex-escaping purposes.
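One way to observe the new printing rules (the literals are illustrative;
exact output depends on the connection character set):

  EXPLAIN EXTENDED SELECT _latin1'abc', _latin1 X'DF';
  SHOW WARNINGS;  -- the rewritten query keeps the introducers, prints
                  -- the ASCII literal as is, and hex-escapes the
                  -- non-ASCII byte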
from 5.1 to 5.5
(Former 59405)
In this bug, args[0] in an Item_func_find_in_set stored an
Item_func_weekday that was constant. In
Item_func_find_in_set::fix_length_and_dec(), args[0]->val_str()
was called. Later, when Item_func_find_in_set::val_int() was
called, args[0]->null_value was checked. However, the
Item_func_weekday in args[0] had now been replaced with an
Item_cache. No val_*() calls had been made to this Item_cache,
thus null_value was incorrectly 'true', resulting in missing
rows in the result set.
enum_value gets a value in fix_length_and_dec() iff args[0]
is both constant and non-null. It is therefore unnecessary
to check the null_value of args[0] in val_int().
An alternative fix would be to call args[0]->val_int() inside
Item_func_find_in_set::val_int(). This would ensure
args[0]->null_value was set correctly (always false in this case),
but that would have to be done for every record this const value
is checked against.
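A repro sketch of the pattern (schema and values are hypothetical; the
constant WEEKDAY() call is the args[0] that ends up wrapped in an
Item_cache):

  CREATE TABLE t1 (s SET('a','b','c'));
  INSERT INTO t1 VALUES ('a');
  SELECT * FROM t1 WHERE FIND_IN_SET(WEEKDAY('2011-01-01'), s);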
Part 2. Function QUOTE() was not multi-byte safe.
@ mysql-test/r/ctype_ucs.result
@ mysql-test/t/ctype_ucs.test
Adding tests
@ sql/item_strfunc.cc
Fixing Item_func_quote::val_str to be multi-byte safe.
@ sql/item_strfunc.h
Multiply the size needed for quote characters by mbmaxlen.
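For example (the literal is illustrative), the escaping must step through
the value character by character, not byte by byte:

  SELECT QUOTE(CONVERT('O''Brien' USING ucs2));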
Problem: a wrong character set pointer was passed to my_strtoll10_mb2,
which led to a DBUG_ASSERT failure in some cases.
@ mysql-test/r/func_encrypt_ucs2.result
@ mysql-test/t/func_encrypt_ucs2.test
@ mysql-test/r/ctype_ucs.result
@ mysql-test/t/ctype_ucs.test
Adding tests
@ sql/item_func.cc
"cs" initialization was wrong (res does not necessarily point to &str_value)
@ sql/item_strfunc.cc
Item_func_des_encrypt::val_str() and Item_func_des_decrypt::val_str()
did not set the character set for tmp_value (the returned value),
so the old value, previously copied from args[1]->val_str(),
was incorrectly returned with tmp_value.
Problem: a byte beyond the end of the input string was read
in case of broken XML lacking the quote or double-quote
character that closes a string value.
Fix: change the condition so we do not read beyond the end of the input string.
@ mysql-test/r/xml.result
@ mysql-test/t/xml.test
Adding tests
@ strings/xml.c
When checking if the closing quote/double-quote was found,
using p->cur[0] is unsafe, as p->cur can point to the byte after the value.
Compare p->cur to p->beg instead.
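A repro sketch (the XML fragment is illustrative): a string value opened
with a quote but never closed:

  SELECT ExtractValue('<a b="c', '/a');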
Problem: in case of string CASE/WHEN arguments with different
character sets, Item_func_case::find_item() called the comparator
cmp_items[x] on mixed character set Items, so an 8-bit value could
be erroneously treated as a utf16/utf32 value,
which led to a crash on DBUG_ASSERT() because of the wrong value length.
This was wrong, as the string comparator expects arguments in the same
character set.
Fix: modify Item_func_case's argument list after calling
agg_arg_charsets_for_comparison() - put the Items in "agg" array
back to "args", because some of the Items in the "agg" array might
have been changed to character set converters:
- to Item_func_conv_charset for non-constant items
- to Item_string for constant items
In other words, perform the same substitution that is done in
all other string comparison and string result operations:
Replace
CASE latin1_item WHEN utf16_item THEN ... END
with
CASE CONVERT(latin1_item USING utf16) WHEN utf16_item THEN ... END
Replace
CASE utf16_item WHEN latin1_item THEN ... END
with
CASE utf16_item WHEN CONVERT(latin1_item USING utf16) THEN ... END
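A repro sketch of the pattern (literals illustrative; the utf16 hex
literal follows the same convention as "SELECT _utf16 0x61" below):

  SELECT CASE _latin1'a'
         WHEN _utf16 0x0061 THEN 'one'
         ELSE 'other' END;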
@ mysql-test/r/ctype_utf16.result
@ mysql-test/r/ctype_utf32.result
@ mysql-test/t/ctype_utf16.test
@ mysql-test/t/ctype_utf32.test
Adding tests
@ sql/item_cmpfunc.cc
Put "agg" back to "args".
@ sql/sql_string.cc
Backporting a fix for String::set_or_copy_aligned() from 5.6,
for better test coverage:
"SELECT _utf16 0x61" should expand the string to 0x0061 rather
than to 0x000061.
This fix was made in 5.6 as part of WL#4616 "Implement UTF16-LE".
UPDATES THE TABLE ENTRIES (formerly 55385)
BUG#11764529: MULTI UPDATE+INNODB REPORTS ER_KEY_NOT_FOUND
IF A TABLE IS UPDATED TWICE (formerly 57373)
If multiple-table update updates a row through two aliases and
the first update physically moves the row, the second update will
fail to locate the row. This results in different errors
depending on storage engine:
* MyISAM: Got error 134 from storage engine
* InnoDB: Can't find record in 'tbl'
None of these errors accurately describe the problem.
Furthermore, since MyISAM is non-transactional, the update
executed first will be performed while the second will not.
In addition, for two equal multiple-table update statements,
one could succeed and the other fail depending on whether or not
the record actually moved. This was inconsistent.
Two update operations may physically move a row:
1) Update of a column in a clustered primary key
2) Update of a column used to calculate which partition the
row belongs to
BUG#11764529 is about case 1) above, BUG#11762751 was about case 2).
The fix for these bugs is to return an error if a multiple-table
update is about to:
a) update a table through multiple aliases, and
b) perform an update that may physically move the row
in at least one of these aliases.
This fix:
* avoids partial updates as described for MyISAM above,
* provides the same error message, describing the actual problem,
for all storage engines, and
* removes the inconsistent behavior where a statement fails or succeeds
depending on e.g. the partitioning algorithm of the table.
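A sketch of a now-rejected statement (schema illustrative; both aliases
refer to the same table and the SET clause moves rows within the
clustered primary key):

  CREATE TABLE t1 (pk INT PRIMARY KEY, a INT) ENGINE=InnoDB;
  UPDATE t1 AS x, t1 AS y SET x.pk = x.pk + 1, y.a = 10
  WHERE x.pk = y.pk;
  -- now fails with one descriptive error instead of engine-specific
  -- failures or partial updates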
The problem was that doing ALTER TABLE on a table with a key
on a TEXT/BLOB column, whose prefix was longer than the maximum number
of characters in that column (as per the character set), mistakenly
caused an error (Error 1170 - ER_BLOB_KEY_WITHOUT_LENGTH).
This bug is not repeatable in 5.5.
This patch adds a regression test to alter_table.test and
contains no code changes.
("-") IN DATABASE NAMES IN ALTER DATABASE.
mysqldump did not quote the database name in the 'ALTER DATABASE'
statements in its output. This could cause a failure when reloading
the dump if the database name contains a hyphen ('-').
Fixed by quoting the database name.
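The dump output now contains quoted statements of this shape (names
illustrative), which reload cleanly:

  ALTER DATABASE `my-db` CHARACTER SET latin1 COLLATE latin1_swedish_ci;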
The loop over subqueries' references to outer fields used a
local boolean variable to tell whether the field was grouped or not,
but the variable was not reset after each iteration. Thus a field
that was not directly aggregated appeared to be.
Fixed by resetting the variable upon each new iteration.
MONTHNAME(0) claimed that it was about to return a NOT NULL
value, whereas it actually returns NULL.
As a result, the protection on the storage_engine variable (which cannot
be NULL) was bypassed and the NULL value was accepted, causing a
server crash.
Fixed MONTHNAME(0) to report a valid NULL flag.
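For example (the SET statement is a sketch of the kind of assignment
that was affected):

  SELECT MONTHNAME(0);                        -- returns NULL
  SET SESSION storage_engine = MONTHNAME(0);  -- previously accepted the
                                              -- NULL and crashed; now
                                              -- rejected cleanly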
Problem:
IF() did not copy the collation derivation and repertoire from
an argument when the opposite argument was NULL:
IF(cond, res1, NULL)
IF(cond, NULL, res2)
Only the CHARSET_INFO pointer was copied.
This resulted in an "illegal mix of collations" error.
Fix:
Copy all collation parameters from the non-NULL argument:
the CHARSET_INFO pointer, derivation, and repertoire.
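A sketch of the pattern (literals and collation are illustrative):

  -- the IF() result must inherit derivation and repertoire from the
  -- non-NULL branch for this comparison to be resolvable:
  SELECT 'a' = IF(1, 'a' COLLATE latin1_german2_ci, NULL);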
memory reference
There are two issues present here.
1) There is a possibility that we test a byte beyond the
allocated buffer.
2) We compare a byte that might never have been
initialized to see if it's 0.
The first issue is not triggered by existing code, but an
ASSERT has been added to safeguard against introducing
new code that triggers it.
The second issue is what triggers the Valgrind warnings
reported in the bug report. A buffer is allocated in
class String to hold the value. This buffer is populated
by the character data constituting the string, but is not
zero-terminated in most cases. Testing if it is indeed
zero-terminated means that we check a byte that has never
been explicitly set, thus causing Valgrind to trigger.
Note that issue 2 is not a serious problem. The variable
is read, and if it's not zero, we will set it to zero.
There are no further consequences.
Note that this patch does not fix the underlying problems
with issue 1, as it is deemed too risky to fix at this
point (as noted in the bug report). As discussed in
the report, the c_ptr() method should probably be
replaced, but this requires a thorough analysis of the
~200 calls to the method.
attempt to create spatial index on char > 31 bytes".
An attempt to create a spatial index on a CHAR field with a length
greater than 31 bytes led to an assertion failure on a server
compiled with safemutex support.
The problem occurred in the mi_create() function, which was called
to create a new version of the table being altered. This function
failed since it detected an attempt to create a spatial key
on a non-binary column and tried to return an error.
On its error path it tried to unlock the THR_LOCK_myisam mutex,
which had not been locked at that point. This incorrect
behavior was caught by the safemutex wrapper and caused the
assertion failure.
This patch fixes the problem by ensuring that mi_create()
does not release the THR_LOCK_myisam mutex on the error path if it was
not acquired.
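The failing DDL was of this shape (names illustrative); it now returns a
regular error instead of tripping safemutex:

  CREATE TABLE t1 (c CHAR(32)) ENGINE=MyISAM;
  ALTER TABLE t1 ADD SPATIAL INDEX (c);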
Assert in Diagnostics_area::set_ok_status() for XA COMMIT
This assert was triggered if XA COMMIT was issued when an XA transaction
had already encountered an error (e.g. a deadlock) that required
the XA transaction to be rolled back.
In general, the assert is triggered if a statement tries to send OK to
the client when an error has already been reported. It was triggered
in this case because the trans_xa_commit() function first reported an
error, then rolled back the transaction and finally returned FALSE,
indicating success. Since trans_xa_commit() reported success,
mysql_execute_command() tried to report OK, triggering the assert.
This patch fixes the problem by fixing trans_xa_commit() to return TRUE
if it encounters an error that requires rollback, even if the rollback
itself is successful.
Test case added to xa.test.
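A sketch of the failing sequence (the xid and the failing statement are
illustrative):

  XA START 'trx1';
  -- ... a statement here hits e.g. a deadlock, forcing the
  -- transaction into rollback-only state ...
  XA END 'trx1';
  XA COMMIT 'trx1' ONE PHASE;  -- now reports the rollback error instead
                               -- of asserting while trying to send OK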