Problem: the test failed because errors were found in the error log.
The test case contains suppressions for an old version of the error message,
but the format of the error message has changed without updating the suppression.
Fix: Update the suppression. Also make some small fixes to improve the test.
The problem was that the server didn't check the resulting size of a prepared
statement argument that was set using the mysql_send_long_data() API.
By calling mysql_send_long_data() several times it was possible
to create an overly big string and thus force the server to allocate
memory for it. There was no way to limit this allocation.
The solution is to add a check of the resulting string's size against
the value of the max_long_data_size start-up parameter. When the intermediate
string exceeds the max_long_data_size value, an appropriate error message
is emitted.
We can't use the existing max_allowed_packet parameter for this purpose
since its value is limited to 1GB, and therefore using it as a limit
for data set through the mysql_send_long_data() API would have been an
incompatible change. The newly introduced max_long_data_size parameter
takes its value from max_allowed_packet unless its value is
specified explicitly. The new parameter is marked as deprecated
and will eventually be replaced by max_allowed_packet.
The value of max_long_data_size can be set only at server
startup.
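For illustration, a minimal sketch of how the new parameter behaves, assuming it
follows the usual rules for read-only startup variables (values are only examples):

SHOW VARIABLES LIKE 'max_long_data_size';  -- defaults to the value of max_allowed_packet
SET GLOBAL max_long_data_size= 16777216;   -- expected to fail: settable only at server startup
-- to change it, restart the server with e.g. --max_long_data_size=16M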
FAILED DROP DATABASE CAN BREAK STATEMENT BASED REPLICATION
The first phase of DROP DATABASE is to delete the tables in the database.
If deletion of one or more of the tables fails (e.g. due to a FOREIGN KEY
constraint), DROP DATABASE will be aborted. However, some tables could
still have been deleted. The problem was that nothing would be written
to the binary log in this case, so any slaves would not delete these tables.
Therefore the master and the slaves would get out of sync.
This patch fixes the problem by making sure that DROP TABLE is written
to the binary log for the tables that were in fact deleted by the failed
DROP DATABASE statement.
Test case added to binlog.binlog_database.test.
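For illustration, a sketch of the kind of scenario the test covers (database,
table and constraint names are made up):

CREATE DATABASE db1;
CREATE TABLE db1.t1 (a INT) ENGINE=InnoDB;
CREATE TABLE db1.t2 (b INT PRIMARY KEY) ENGINE=InnoDB;
CREATE TABLE test.child (b INT, FOREIGN KEY (b) REFERENCES db1.t2 (b)) ENGINE=InnoDB;
DROP DATABASE db1; <- fails on db1.t2 because of the FOREIGN KEY, but db1.t1 may already be deleted
-- with this patch, a DROP TABLE for db1.t1 is written to the binary log,
-- so the slave drops it too and stays in sync with the master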
FAILS IN SET_FIELD_ITERATOR
(Former 59299)
When a PROCEDURE does a natural join, the resolution of which columns are
used in the join is done only once; consecutive CALLs to the procedure
reuse this information:
CREATE PROCEDURE proc() SELECT * FROM t1 NATURAL JOIN v1;
CALL proc(); <- natural join columns resolved here
CALL proc(); <- reuse resolved NJ columns from first CALL
The second CALL knows that it can reuse the resolved NJ columns because
the first CALL sets st_select_lex::first_natural_join_processing=false.
The problem in this bug was that the table the view v1 depends on
changed between CREATE PROCEDURE and the first CALL:
CREATE PROCEDURE...
ALTER TABLE t2 CHANGE COLUMN a b CHAR;
CALL proc(); <- error when resolving natural join columns
CALL proc(); <- tries to reuse from first CALL => crash
The fix for this bug is to set first_natural_join_processing= FALSE iff
resolving of the natural join columns was successful.
fails when running with ps-protocol).
The problem was that, when running in --ps-protocol mode, mysqltest.cc
didn't close the prepared statements it created. So the plugins could not be
uninstalled because a prepared statement was still using them.
The fix is to add a dummy statement that forces mysqltest.cc to close
the last prepared statement (which uses a plugin-defined table).
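A rough sketch of such a dummy statement in a plugin test (plugin and table names
are hypothetical):

SELECT * FROM t_example;  <- in --ps-protocol mode mysqltest keeps this prepared statement open
SELECT 1;                 <- dummy statement; forces mysqltest to close the previous one
UNINSTALL PLUGIN example; <- now succeeds, as no prepared statement uses the plugin table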
This patch corrects the problem by fixing the definition and alterations
of the mysql.user table in the .sql files.
Also included are new result files for tests that examine the name
column of the mysql.user table.
When an RPM test build is done in a non-release branch,
the $MYSQL_BINDIR variable ends in "/usr"
(rather than in "/usr/lib" as in an RPM release build),
which made the test "file_contents" fail.
A branch for this case is added to the test.
The test result is unchanged.
Issue:
When SSL_CIPHER was set to a specific cipher name, it was not getting picked up by the SHOW STATUS command.
Solution:
If a specific cipher name is specified, avoid overwriting the cipher list with the default cipher names.
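A quick way to verify the fix (the cipher name below is only an example):

-- client connected with:  mysql --ssl-cipher=DHE-RSA-AES256-SHA ...
SHOW STATUS LIKE 'Ssl_cipher';       <- should report the requested cipher
SHOW STATUS LIKE 'Ssl_cipher_list';  <- should no longer be overwritten with the default cipher names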
pre-locking list caused by triggers).
The thing is that CREATE TRIGGER / DROP TRIGGER may actually
change the pre-locking list of (some) stored routines.
The SP-cache does not detect such changes. Thus, if an sp_head instance
is cached in the SP-cache, subsequent executions of the cached
sp_head will use an inaccurate pre-locking list.
The patch is to invalidate SP-cache on CREATE TRIGGER / DROP TRIGGER.
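A sketch of how a trigger changes a routine's pre-locking list (table and routine
names are illustrative):

CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT);
CREATE PROCEDURE p1() INSERT INTO t1 VALUES (1);
CALL p1(); <- p1 is cached; its pre-locking list contains only t1
CREATE TRIGGER trg1 AFTER INSERT ON t1 FOR EACH ROW INSERT INTO t2 VALUES (NEW.a);
CALL p1(); <- t2 must now be pre-locked as well; the patch invalidates the cached p1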
KEY NO 0 FOR TABLE IN ERROR LOG
With the changes made by the patches for Bug#11751388 and
Bug#11784056, concurrent reads are allowed while secondary
indexes are created in InnoDB. This means that the metadata
lock on the affected table is not upgraded to exclusive
until the .FRM is updated at the end of ALTER TABLE processing.
The problem was that if this lock upgrade failed for some
reason (e.g. timeout), the index information in the server
and inside InnoDB would be out of sync. This would happen
since the add index operation had already been committed inside
InnoDB, but the table metadata inside the server had not yet been
updated.
This patch fixes the problem by (for now) reverting the
effects of the patches for Bug#11751388 and Bug#11784056.
Concurrent reads will now again be blocked during creation
of secondary indexes in InnoDB.
Test case added to innodb_mysql_lock.test.
MAP 'REPAIR TABLE' TO RECREATE +ANALYZE FOR ENGINES NOT
SUPPORTING NATIVE REPAIR
Executing 'mysqlcheck --check-upgrade --auto-repair ...' will first issue
'CHECK TABLE FOR UPGRADE' for all tables in the database in order to check if the
tables are compatible with the current version of MySQL. Any tables that are
found incompatible are then upgraded using 'REPAIR TABLE'.
The problem was that some engines (e.g. InnoDB) do not support 'REPAIR TABLE'.
This caused any such tables to be left incompatible. As a result such tables were
not properly fixed by the mysql_upgrade tool.
This patch fixes the problem by first changing 'CHECK TABLE FOR UPGRADE' to return
a different error message if the engine does not support REPAIR. Instead of
"Table upgrade required. Please do "REPAIR TABLE ..." it will report
"Table rebuild required. Please do "ALTER TABLE ... FORCE ..."
Second, the patch changes mysqlcheck to do 'ALTER TABLE ... FORCE' instead of
'REPAIR TABLE' in these cases.
This patch also fixes 'ALTER TABLE ... FORCE' to actually rebuild the table.
This change should be reflected in the documentation. Before this patch,
'ALTER TABLE ... FORCE' was unused (See Bug#11746162)
Test case added to mysqlcheck.test
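A sketch of the new behaviour for an InnoDB table (messages abbreviated):

CHECK TABLE t1 FOR UPGRADE;
-- before: "Table upgrade required. Please do "REPAIR TABLE `t1`" ..."
-- now:    "Table rebuild required. Please do "ALTER TABLE `t1` FORCE" ..."
ALTER TABLE t1 FORCE; <- rebuilds the table; mysqlcheck now issues this instead of REPAIR TABLE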
Setting lowercase_table_names to 2 on Windows causing Foreign Key problems
This problem was exposed by the fix for Bug#55222. There was a code path in dict0load.c,
dict_load_foreigns(), that required the table name to match case-sensitively in order to
load a referenced table into the dictionary as needed. If the engine is restarted and a
table with foreign keys is accessed while lower_case_table_names=2, then the table with
foreign keys will get an error when it is changed (insert/update/delete).
Once the referenced tables are loaded into the dictionary cache by a SELECT statement
on those tables, the same change succeeds because the affected code path is
no longer followed.
NON-PRIMARY UNIQUE INDEX USING INNODB
This patch adds the HA_INPLACE_ADD_UNIQUE_INDEX_NO_WRITE
capability flag to InnoDB, indicating that concurrent reads
can be allowed while non-primary unique indexes are created.
This is a follow-up to Bug #11751388, which enabled concurrent
reads when creating non-primary non-unique indexes.
Test case added to innodb_mysql_sync.test.
FLUSH TABLES under FLUSH TABLES <list> WITH READ LOCK leads
to assert failure.
This assert was triggered if a statement tried to upgrade a metadata
lock while a FLUSH TABLE <list> WITH READ LOCK was active. The assert
checks that the connection already holds a global intention exclusive
metadata lock. However, FLUSH TABLE <list> WITH READ LOCK does not
acquire this lock in order to be compatible with FLUSH TABLES WITH
READ LOCK. Therefore any metadata lock upgrade caused the assert to
be triggered.
This patch fixes the problem by preventing metadata lock upgrade
if the connection has an active FLUSH TABLE <list> WITH READ LOCK.
ER_TABLE_NOT_LOCKED_FOR_WRITE will instead be reported to the client.
Test case added to flush.test.
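A sketch of the behaviour after the fix (t1 is an arbitrary table):

FLUSH TABLES t1 WITH READ LOCK;
ALTER TABLE t1 ADD COLUMN b INT; <- needs a metadata lock upgrade; now fails with ER_TABLE_NOT_LOCKED_FOR_WRITE instead of asserting
UNLOCK TABLES;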
@ mysql-test/r/ctype_latin1.result
@ mysql-test/r/ctype_utf8.result
@ mysql-test/t/ctype_latin1.test
@ mysql-test/t/ctype_utf8.test
Adding tests
@ sql/mysqld.h
@ sql/item.cc
@ sql/sql_parse.cc
@ sql/sql_view.cc
Refactoring (thanks to Guilhem for the idea):
Item_string::print() was hard to understand because of the different
QT_ constants: in "query_type==QT_x", QT_x is explicitly included
but the other two QT_ constants are implicitly excluded. The combinations
with '||' and '&&' made this even harder.
- logic is now more "explicit" by changing QT_ constants to a bitmap of flags:
QT_ORDINARY: no change,
QT_IS -> QT_TO_SYSTEM_CHARSET | QT_WITHOUT_INTRODUCERS,
QT_EXPLAIN -> QT_TO_SYSTEM_CHARSET
(QT_EXPLAIN was introduced in the first version of the Bug#57341 patch)
- Item_string::print() is rewritten using those flags
Bugfix itself:
When QT_TO_SYSTEM_CHARSET is used alone (with no QT_WITHOUT_INTRODUCERS),
we print string literals as follows:
- display introducers if they were in the original query
- print ASCII characters as is
- print non-ASCII characters using hex-escape
Note: as "EXPLAIN" output is only for human readability purposes
and does not need to be parsable SQL, using hex-escape is OK.
The ErrConvString class suits hex-escaping purposes perfectly.
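A rough illustration of the intended EXPLAIN output (the exact rewritten text may differ):

SET NAMES utf8;
EXPLAIN EXTENDED SELECT 'abcß';
SHOW WARNINGS; <- the rewritten query in the Note keeps ASCII as is and hex-escapes the non-ASCII bytes, e.g. 'abc\xC3\x9F'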
from 5.1 to 5.5
(Former 59405)
In this bug, args[0] in an Item_func_find_in_set stored an
Item_func_weekday that was constant. In
Item_func_find_in_set::fix_length_and_dec(), args[0]->val_str()
was called. Later, when Item_func_find_in_set::val_int() was
called, args[0]->null_value was checked. However, the
Item_func_weekday in args[0] had now been replaced with an
Item_cache. No val_*() calls had been made to this Item_cache,
thus null_value was incorrectly 'true', resulting in missing
rows in the result set.
enum_value gets a value in fix_length_and_dec() iff args[0]
is both constant and non-null. It is therefore unnecessary
to check the null_value of args[0] in val_int().
An alternative fix would be to call args[0]->val_int() inside
Item_func_find_in_set::val_int(). This would ensure
args[0]->null_value was set correctly (always false in this case),
but that would have to be done for every record this const value
is checked against.
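An illustrative query with the same shape as the failing one (data is made up):

CREATE TABLE t1 (days VARCHAR(64));
INSERT INTO t1 VALUES ('5,6'), ('0,1,2');
SELECT * FROM t1 WHERE FIND_IN_SET(WEEKDAY('2011-06-04'), days);
-- WEEKDAY('2011-06-04') is constant (it is 5, a Saturday); before the fix its
-- Item_cache replacement still had null_value == true, so the matching '5,6' row could be missing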