with no .cnf or .ini extension.
The fix for this bug was pushed as part of Bug#11765482.
mysql-test/r/mysqladmin.result:
Added test case for Bug#11766184.
mysql-test/t/mysqladmin.test:
Added test case for Bug#11766184.
Problem: the test had not been updated after BUG#49978 was pushed.
Fix: add 'let $rpl_only_running_threads= 1' to the end of the test
and update the result file.
Also, use include/assert.inc for an assertion (instead of relying
on result file comparison).
Also, move 'set @@global.slave_net_timeout' forwards, to get rid
of a warning.
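For reference, the include/assert.inc idiom used here looks like the
following minimal sketch (the asserted variable and value are
assumptions, not taken from the actual test):

  --let $assert_text= slave_net_timeout has the expected value
  --let $assert_cond= [SELECT @@GLOBAL.slave_net_timeout] = 100
  --source include/assert.inc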
mysql-test/suite/large_tests/r/rpl_slave_net_timeout.result:
updated result file
mysql-test/suite/large_tests/t/rpl_slave_net_timeout.test:
- Got rid of the warning "The requested value for the heartbeat period
exceeds the value of 'slave_net_timeout' seconds. A sensible
value for the period should be less than the timeout." by
changing the order of "set slave_net_timeout" and
"change master to master_heartbeat_period".
- replaced result file comparison by include/assert.inc
- added "let $rpl_only_running_threads= 1"
Currently, rpl_semi_sync is failing in PB (PushBuild) due to the
warning message:
"Slave SQL: slave SQL thread is being stopped in the middle of
applying of a group having updated a non-transaction table;
waiting for the group completion ..."
The problem started happening after the fix for BUG#11762407, which
removed the automatic suppression of some warning messages.
To fix the current issue, we suppress the aforementioned warning message
and take the opportunity to make the sentence clearer.
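A minimal sketch of the kind of suppression added (the exact regular
expression is an assumption):

  call mtr.add_suppression("Slave SQL: slave SQL thread is being stopped in the middle of applying of a group having updated a non-transaction table");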
There is a race between two threads: the user thread and the dump
thread. The former sets a debug instruction that makes the latter wait
before processing an Xid event. There can be cases where the dump
thread has not yet processed the previous Xid event, causing it to
wait one Xid event too soon, thus causing sync_slave_with_master never
to resume.
We fix this by moving the instructions that set the debug variable
to after the call to sync_slave_with_master.
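A sketch of the corrected ordering in the test (the debug symbol name
is an assumption):

  sync_slave_with_master;
  connection master;
  # Only set the debug instruction once the slave has caught up, so the
  # dump thread cannot wait on the previous Xid event:
  SET GLOBAL debug= '+d,dump_thread_wait_before_send_xid';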
Problem: the test failed because errors were found in the error log.
The test case contains suppressions for an old version of the error
message, but the format of the error message changed and the
suppression was not updated.
Fix: update the suppression. Also, small fixes to improve the test.
mysql-test/suite/rpl/r/rpl_slave_load_remove_tmpfile.result:
update result file
mysql-test/suite/rpl/t/rpl_slave_load_remove_tmpfile-slave.opt:
Use variables instead of .opt files to avoid server restarts.
mysql-test/suite/rpl/t/rpl_slave_load_remove_tmpfile.test:
1. To fix the bug, we update the regular expression in mtr.add_suppression
so that it matches the real error text (see the sketch after this list).
2. Use wait_for_slave_sql_error.inc when we wait for an error.
This makes the test easier to understand and will produce better
debug info if the test fails.
3. Use server variables instead of command line options to set
the @@GLOBAL.DEBUG variable. This avoids server restarts when
running the test suite.
4. Clarify the comment at the top of the file and add bug reference.
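A minimal sketch combining items 1-3 (the suppression pattern, debug
symbol, and error number are assumptions):

  call mtr.add_suppression("Slave: Error writing file 'UNKNOWN'.*Errcode: 9");
  # Item 3: set the debug symbol at runtime instead of in an .opt file:
  SET GLOBAL debug= '+d,remove_slave_load_file_before_write';
  # Item 2: wait for the expected SQL thread error:
  --let $slave_sql_errno= 1105
  --source include/wait_for_slave_sql_error.inc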
of service in prepared statements).
sql/sql_prepare.cc:
At mysql_stmt_get_longdata(): instead of pushing an internal
error handler (as done in 5.1-tree) we save, set and restore
the statement's diagnostics area and warning info.
The problem was that the server did not check the resulting size of a
prepared statement argument set using the mysql_send_long_data() API.
By calling mysql_send_long_data() several times it was possible to
create an overly big string and thus force the server to allocate
memory for it. There was no way to limit this allocation.
The solution is to add a check of the resulting string's size against
the value of the max_long_data_size start-up parameter. When the
intermediate string exceeds max_long_data_size, an appropriate error
message is emitted.
We cannot use the existing max_allowed_packet parameter for this
purpose since its value is limited to 1GB and therefore using it as a
limit for data set through the mysql_send_long_data() API would have
been an incompatible change. The newly introduced max_long_data_size
parameter takes its value from max_allowed_packet unless its value is
specified explicitly. This new parameter is marked as deprecated and
will eventually be replaced by max_allowed_packet.
The value of max_long_data_size can be set only at server startup.
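A minimal sketch of the new variable's behavior (the read-only error
code is an assumption):

  # Set only at startup, e.g. via an .opt file: --max_long_data_size=4096
  SHOW VARIABLES LIKE 'max_long_data_size';
  --error ER_INCORRECT_GLOBAL_LOCAL_VAR
  SET GLOBAL max_long_data_size= 8192;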
mysql-test/t/variables.test:
Added checking for new start-up parameter max_long_data_size.
sql/item.cc:
Added a call to my_message() when the accumulated string exceeds the
max_long_data_size value. my_message() invokes the error handler that
was installed in mysql_stmt_get_longdata() before the call to
Item_param::set_longdata().
The error handler then sets the state, last_error and last_errno
fields of the current statement to values corresponding to the error
that was caught.
sql/mysql_priv.h:
Added max_long_data_size variable declaration.
sql/mysqld.cc:
Added support for start-up parameter 'max_long_data_size'.
This parameter limits size of data which can be sent from
client to server using mysql_send_long_data() API.
sql/set_var.cc:
Added the variable 'max_long_data_size' to the list of variables
displayed by 'SHOW VARIABLES'.
sql/sql_prepare.cc:
Added error handler class Set_longdata_error_handler.
This handler is used to catch any errors that can be
generated during execution of Item_param::set_longdata().
The code that checks the statement's state during execution has been
moved from Prepared_statement::execute() to
Prepared_statement::execute_loop(), so that set_parameters() is not
called when the statement has failed during set_long_data() execution.
Had this not been done, the call to set_parameters() would have
failed.
tests/mysql_client_test.c:
Added a test case for Bug#56976.
FAILED DROP DATABASE CAN BREAK STATEMENT BASED REPLICATION
The first phase of DROP DATABASE is to delete the tables in the database.
If deletion of one or more of the tables fails (e.g. due to a FOREIGN KEY
constraint), DROP DATABASE will be aborted. However, some tables could
still have been deleted. The problem was that nothing would be written
to the binary log in this case, so the slaves would not delete these
tables. Therefore the master and the slaves would get out of sync.
This patch fixes the problem by making sure that DROP TABLE is written
to the binary log for the tables that were in fact deleted by the failed
DROP DATABASE statement.
Test case added to binlog.binlog_database.test.
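An illustrative scenario (names and the error code are assumptions;
which tables are deleted before the failure depends on deletion order):

  CREATE DATABASE db1;
  CREATE TABLE db1.parent (a INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE db1.other (b INT) ENGINE=InnoDB;
  CREATE TABLE test.child (a INT,
    FOREIGN KEY (a) REFERENCES db1.parent (a)) ENGINE=InnoDB;
  # Fails on db1.parent, but db1.other may already have been deleted;
  # the fix writes DROP TABLE for db1.other to the binary log.
  --error ER_ROW_IS_REFERENCED
  DROP DATABASE db1;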
fails when running with ps-protocol).
The problem was that when running in --ps-protocol mode, mysqltest.cc
did not close created prepared statements. So, the plugins could not be
uninstalled because a prepared statement was still using them.
The fix is to add a dummy statement that forces mysqltest.cc to close
the last prepared statement (which uses a plugin-defined table).
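A sketch of such a dummy statement (the actual statement used is an
assumption):

  # Preparing this statement forces mysqltest to close the previous
  # prepared statement, which still referenced a plugin-defined table:
  SELECT 1;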
This patch corrects the problem by fixing the definition and alterations
of the mysql.user table in the .sql files.
Also included are new result files for tests that examine the name
column of the mysql.user table.
When an RPM test build in a non-release branch is done,
the $MYSQL_BINDIR variable ends in "/usr"
(rather than in "/usr/lib" as in an RPM release build);
this made the test "file_contents" fail.
A branch for this case is added to the test.
The test result is unchanged.
mysql-test/t/file_contents.test:
Fight a problem in internal test builds:
When an RPM test build in a non-release branch is done,
the $MYSQL_BINDIR variable ends in "/usr"
(rather than in "/usr/lib" as in an RPM release build);
this made the test "file_contents" fail.
Because of this, the old logic did not recognize
that an RPM build was done (the trailing '/' was missing!)
and took the tar.gz branch.
Just removing the trailing '/' from "/usr" is not
enough, as the logic for RPMs used to replace
"/lib", which is not present at all; rather, a new
branch was added.
To help in case of future problems, the error messages
for a failing "open()" now also report "$MYSQL_BINDIR".
Issue:
SSL_CIPHER set to a specific cipher name was not getting picked up by
the SHOW STATUS command.
Solution:
If a specific cipher name is specified, avoid overwriting the cipher
list with the default cipher names.
extra/yassl/src/yassl_int.cpp:
If a user-specified cipher name is present, avoid populating the
default cipher list.
mysql-test/r/ssl_cipher.result:
Expected result file for the ssl_cipher.test test case.
mysql-test/t/ssl_cipher-master.opt:
Server option file for ssl_cipher.test test case.
mysql-test/t/ssl_cipher.test:
Test case to verify that the user-specified SSL cipher name is shown
by the SHOW STATUS command.
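A minimal sketch of the verification (the cipher name is an example):

  # mysql-test/t/ssl_cipher-master.opt:  --ssl-cipher=AES128-SHA
  connect (ssl_con,localhost,root,,,,,SSL);
  SHOW STATUS LIKE 'Ssl_cipher';  # must report AES128-SHA, not a default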
pre-locking list caused by triggers).
The problem is that CREATE TRIGGER / DROP TRIGGER may actually change
the pre-locking list of (some) stored routines.
The SP-cache does not detect such changes. Thus, if an sp_head
instance is cached in the SP-cache, subsequent executions of the
cached sp_head will use an inaccurate pre-locking list.
The patch is to invalidate the SP-cache on CREATE TRIGGER /
DROP TRIGGER.
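An illustrative scenario (names are hypothetical):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  CREATE PROCEDURE p1() INSERT INTO t1 VALUES (1);
  CALL p1();  # p1 is cached together with its pre-locking list
  CREATE TRIGGER trg1 AFTER INSERT ON t1
    FOR EACH ROW INSERT INTO t2 VALUES (NEW.a);
  CALL p1();  # the pre-locking list must now also cover t2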
MAP 'REPAIR TABLE' TO RECREATE +ANALYZE FOR ENGINES NOT
SUPPORTING NATIVE REPAIR
Executing 'mysqlcheck --check-upgrade --auto-repair ...' will first issue
'CHECK TABLE FOR UPGRADE' for all tables in the database in order to check if the
tables are compatible with the current version of MySQL. Any tables that are
found incompatible are then upgraded using 'REPAIR TABLE'.
The problem was that some engines (e.g. InnoDB) do not support 'REPAIR TABLE'.
This caused any such tables to be left incompatible. As a result such tables were
not properly fixed by the mysql_upgrade tool.
This patch fixes the problem by first changing 'CHECK TABLE FOR UPGRADE'
to return a different error message if the engine does not support
REPAIR. Instead of 'Table upgrade required. Please do "REPAIR TABLE ..."'
it will report 'Table rebuild required. Please do "ALTER TABLE ... FORCE ..."'.
Second, the patch changes mysqlcheck to do 'ALTER TABLE ... FORCE' instead of
'REPAIR TABLE' in these cases.
This patch also fixes 'ALTER TABLE ... FORCE' to actually rebuild the table.
This change should be reflected in the documentation. Before this patch,
'ALTER TABLE ... FORCE' was unused (See Bug#11746162)
Test case added to mysqlcheck.test
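A sketch of the new behavior for an engine without native REPAIR
(output abbreviated; the table name is hypothetical):

  CHECK TABLE t1 FOR UPGRADE;  # InnoDB: "Table rebuild required. ..."
  ALTER TABLE t1 FORCE;        # issued by mysqlcheck instead of
                               # REPAIR TABLE t1;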
client/mysqlcheck.c:
Changed mysqlcheck to do 'ALTER TABLE ... FORCE' if
'CHECK TABLE FOR UPGRADE' reports ER_TABLE_NEEDS_REBUILD
and not ER_TABLE_NEEDS_UPGRADE.
mysql-test/r/mysqlcheck.result:
Added regression test.
mysql-test/std_data/bug47205.frm:
InnoDB 5.0 FRM which contains a varchar primary key using
utf8_general_ci. This is an incompatible FRM for 5.5.
mysql-test/t/mysqlcheck.test:
Added regression test.
sql/handler.h:
Added new HA_CAN_REPAIR flag.
sql/share/errmsg-utf8.txt:
Added new error message ER_TABLE_NEEDS_REBUILD
sql/sql_admin.cc:
Changed 'CHECK TABLE FOR UPGRADE' to give ER_TABLE_NEEDS_REBUILD
instead of ER_TABLE_NEEDS_UPGRADE if the engine does not support
REPAIR (as indicated by the new HA_CAN_REPAIR flag).
sql/sql_lex.h:
Remove unused ALTER_FORCE flag.
sql/sql_yacc.yy:
Make sure ALTER TABLE ... FORCE recreates the table
by setting the ALTER_RECREATE flag as the ALTER_FORCE
flag was unused.
storage/archive/ha_archive.h:
Added new HA_CAN_REPAIR flag to Archive
storage/csv/ha_tina.h:
Added new HA_CAN_REPAIR flag to CSV
storage/federated/ha_federated.h:
Added new HA_CAN_REPAIR flag to Federated
storage/myisam/ha_myisam.cc:
Added new HA_CAN_REPAIR flag to MyISAM
NON-PRIMARY UNIQUE INDEX USING INNODB
This patch adds the HA_INPLACE_ADD_UNIQUE_INDEX_NO_WRITE
capability flag to InnoDB, indicating that concurrent reads
can be allowed while non-primary unique indexes are created.
This is a follow-up to Bug#11751388, which enabled concurrent
reads when creating non-primary non-unique indexes.
Test case added to innodb_mysql_sync.test.
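An illustrative statement that now permits concurrent reads (names are
hypothetical):

  CREATE TABLE t1 (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;
  # SELECTs on t1 are no longer blocked while this index is built:
  ALTER TABLE t1 ADD UNIQUE INDEX b_idx (b);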
FLUSH TABLES under FLUSH TABLES <list> WITH READ LOCK leads
to assert failure.
This assert was triggered if a statement tried to upgrade a metadata
lock with an active FLUSH TABLE <list> WITH READ LOCK. The assert
checks that the connection already holds a global intention exclusive
metadata lock. However, FLUSH TABLE <list> WITH READ LOCK does not
acquire this lock in order to be compatible with FLUSH TABLES WITH
READ LOCK. Therefore any metadata lock upgrade caused the assert to
be triggered.
This patch fixes the problem by preventing metadata lock upgrade
if the connection has an active FLUSH TABLE <list> WITH READ LOCK.
ER_TABLE_NOT_LOCKED_FOR_WRITE will instead be reported to the client.
Test case added to flush.test.
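A minimal sketch of the previously asserting scenario (derived from
the commit title; the exact statements are assumptions):

  FLUSH TABLES t1 WITH READ LOCK;
  --error ER_TABLE_NOT_LOCKED_FOR_WRITE
  FLUSH TABLES;   # used to hit the assert, now reports an error
  UNLOCK TABLES;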
@ mysql-test/r/ctype_latin1.result
@ mysql-test/r/ctype_utf8.result
@ mysql-test/t/ctype_latin1.test
@ mysql-test/t/ctype_utf8.test
Adding tests
@ sql/mysqld.h
@ sql/item.cc
@ sql/sql_parse.cc
@ sql/sql_view.cc
Refactoring (thanks to Guilhem for the idea):
Item_string::print() was hard to understand because of the different
QT_ constants: in "query_type==QT_x", QT_x is explicitly included
but the other two QT_ are implicitly excluded. The combinations
with '||' and '&&' make this even harder.
- logic is now more "explicit" by changing QT_ constants to a bitmap of flags:
QT_ORDINARY: no change,
QT_IS -> QT_TO_SYSTEM_CHARSET | QT_WITHOUT_INTRODUCERS,
QT_EXPLAIN -> QT_TO_SYSTEM_CHARSET
(QT_EXPLAIN was introduced in the first version of the Bug#57341 patch)
- Item_string::print() is rewritten using those flags
Bugfix itself:
When QT_TO_SYSTEM_CHARSET is used alone (with no QT_WITHOUT_INTRODUCERS),
we print string literals as follows:
- display introducers if they were in the original query
- print ASCII characters as is
- print non-ASCII characters using hex-escape
Note: as "EXPLAIN" output is only for human readability purposes
and does not need to be a pasrable SQL, so using hex-escape is Ok.
ErrConvString class perfectly suites for hex escaping purposes.
from 5.1 to 5.5
(Former 59405)
In this bug, args[0] in an Item_func_find_in_set stored an
Item_func_weekday that was constant. In
Item_func_find_in_set::fix_length_and_dec(), args[0]->val_str()
was called. Later, when Item_func_find_in_set::val_int() was
called, args[0]->null_value was checked. However, the
Item_func_weekday in args[0] had now been replaced with an
Item_cache. No val_*() calls had been made to this Item_cache,
thus null_value was incorrectly 'true', resulting in missing
rows in the result set.
enum_value gets a value in fix_length_and_dec() iff args[0]
is both constant and non-null. It is therefore unnecessary
to check the null_value of args[0] in val_int().
An alternative fix would be to call args[0]->val_int() inside
Item_func_find_in_set::val_int(). This would ensure
args[0]->null_value was set correctly (always false in this case),
but that would have to be done for every record this const value
is checked against.
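A sketch of the failing pattern (concrete values are hypothetical):

  CREATE TABLE t1 (a VARCHAR(20));
  INSERT INTO t1 VALUES ('0,1,2,3,4,5,6');
  # WEEKDAY('2011-05-16') is a non-null constant; before the fix the
  # stale null_value flag made this query return no rows:
  SELECT * FROM t1 WHERE FIND_IN_SET(WEEKDAY('2011-05-16'), a);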
mysql-test/r/func_set.result:
Add test for BUG#59405
mysql-test/t/func_set.test:
Add test for BUG#59405