The issue is that xa.test failed sporadically on some platforms.
The test failure was caused by a race condition in xa.test.
The race condition occurs between a connection that executes the statement
INSERT INTO t2 SELECT FROM t1 and another connection that tries to run
the statements DELETE FROM t1 and COMMIT. If the COMMIT statement was executed
before the INSERT INTO t2 SELECT FROM t1 statement got blocked by the lock
on table t1 (taken as a result of the query from table t1), the INSERT statement
executed successfully and the subsequent check for a deadlock failed.
This patch fixes the race condition by moving the COMMIT statement to after the
commit of the distributed transaction in the concurrent session.
If an error message contains a '\' backslash, it is displayed correctly
through SHOW SLAVE STATUS or
query_get_value(SHOW SLAVE STATUS, Last_IO_Error, 1);. But when
SELECT REPLACE(...) is applied, the backslash is escaped, resulting in
different test output.
Disabled backslash escaping in show_slave_status.inc and replaced '\' with
'/' using the replace_regex function in order to get the same test
output when different path separators are used.
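As an illustration of the second part (not the literal lines from the patch; the exact regex escaping is an approximation), a .test file can normalize the separators in the next statement's output:
# hypothetical snippet: map '\' to '/' so the recorded result is identical
# on Windows and Unix-like platforms
--replace_regex /\\\\/\//
SHOW VARIABLES LIKE 'slave_load_tmpdir';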
The server crashes when receiving a COM_BINLOG_DUMP command with a position of 0 or
a position larger than the file size.
Execution proceeds to an error block with the last-read replication coordinates
pointer still NULL, and dereferencing it crashes the server.
Fixed by making "public" a facility that was previously used only for the heartbeat coordinates.
Loading plugins and UDFs could fail with a cryptic error 1126 message.
The problem was that the dlopen()-related code was using just a subset
of the path normalization routines used in other places.
Fixed the pre-dlopen() path expansion for plugins and UDFs
to use a consistent platform-dependent encoding of the paths.
Fixed the dlopen() error handling to take the correct error message
and strip off the trailing newline character(s).
Fixed the tests to do a platform-independent replace of directories and to
account for the trailing slash.
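Both of the statements below go through the same dlopen() path handling after the fix (a generic illustration; the library names are the bundled examples, not files touched by this patch):
INSTALL PLUGIN example SONAME 'ha_example.so';
CREATE FUNCTION metaphon RETURNS STRING SONAME 'udf_example.so';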
Test extra/rpl_tests/rpl_extra_col_master.test (used by
rpl_extra_col_master_*) ends with the active connection pointing to the
slave. Thus, the two last tests never succeed in changing the binlog
format of the master away from 'row'. With the correct active connection
(master), the tests fail for the binlog 'statement' and 'mixed' formats.
The rpl_extra_col_master_* tests only run when the binary log format is
row. Statement and mixed replication do not make sense for these
tests since they try to execute statements on columns that do
not exist. This fix is basically a backport from mysql-5.5; see the
changes done as part of BUG 39934.
Fixing the 5.5 part (the 5.6 part will go in a separate commit soon).
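A minimal sketch of how such a guard is typically expressed in a .test file (the specific include named here is an assumption, not a quote from the patch):
# run these tests only with row-based binary logging
--source include/have_binlog_format_row.inc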
Problem:
Item_direct_ref::get_date() incorrectly calculated its "null_value",
which made UNIX_TIMESTAMP(view_column) incorrectly return NULL
for a NOT NULL view_column.
Fix:
Make Item_direct_ref::get_date() calculate null_value
in a similar way to the other methods
(val_real, val_str, val_int, val_decimal):
copy null_value from the referenced Item.
modified:
mysql-test/r/func_time.result
mysql-test/t/func_time.test
sql/item.cc
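A hypothetical repro sketch of the symptom (table, column, and values invented for illustration):
CREATE TABLE t1 (d DATETIME NOT NULL);
INSERT INTO t1 VALUES ('2011-11-11 11:11:11');
CREATE VIEW v1 AS SELECT d FROM t1;
SELECT UNIX_TIMESTAMP(d) FROM v1;  -- must not return NULL for the NOT NULL column
DROP VIEW v1;
DROP TABLE t1;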
mysqldump in XML mode did not dump routines, events, or
triggers.
This patch fixes the issue by correcting the if conditions
that disallowed dumping the above-mentioned objects in
XML mode, and by adding the required code to enable dumping
them in XML format.
If DB_TOO_MANY_CONCURRENT_TRXS is encountered during the execution of tab_create_graph from row_create_table_for_mysql(), the .ibd file for the table should already have been created, but it was not deleted during error handling.
rb:875 approved by Jimmy Yang
------------------------------------------------------------
revno: 3258
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
branch nick: mysql-trunk-bug12663165
timestamp: Thu 2011-07-14 10:05:12 +0200
message:
Bug#12663165 SP DEAD CODE REMOVAL DOESN'T UNDERSTAND CONTINUE HANDLERS
When stored routines are loaded, a simple optimizer tries to locate
and remove dead code. The problem was that this dead code removal
did not work correctly with CONTINUE handlers.
If a statement triggers a CONTINUE handler, the following statement
will be executed after the handler statement has completed. This
means that the following statement is not dead code even if the
previous statement unconditionally alters control flow. This fact
was lost on the dead code removal routine, which ended up
removing instructions that could have been executed. This could
then lead to assertions, crashes and generally bad behavior when
the stored routine was executed.
This patch fixes the problem by marking as live code all stored
routine instructions that are in the same scope as a CONTINUE handler.
Test case added to sp.test.
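A hypothetical example of the pattern involved (not the actual test case): the second RETURN looks unreachable, but becomes reachable when the first RETURN raises an error that a CONTINUE handler absorbs, so it must not be removed as dead code.
DELIMITER |
CREATE FUNCTION f1() RETURNS INT
BEGIN
  DECLARE CONTINUE HANDLER FOR SQLEXCEPTION SET @handler_fired = 1;
  -- If the subquery fails, the handler runs and execution continues
  -- with the next statement.
  RETURN (SELECT c FROM no_such_table);
  RETURN 0;
END|
DELIMITER ;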
If init_command was incorrect, we couldn't let users execute
queries, but we couldn't report the issue to the client either
as it does not expect error messages before even sending a
command. Thus, we simply disconnected them without throwing
a clear error.
We now go through the proper sequence once (without executing
any user statements) so we can report back what the problem
is. Only then do we disconnect the user.
As always, root remains unaffected by this as init_command is
(still) not executed for them.
--FLUSH-LOG BREAKS CONSISTENCY
The transaction started by mysqldump gets committed
implicitly when --flush-logs is specified along with the
--single-transaction option, and hence can break
consistency.
This is because COM_REFRESH is executed in order
to flush the logs, and starting from 5.5 this command
performs an implicit commit.
Fixed by making sure that COM_REFRESH is executed
before the transaction has started and not after it.
Note: This patch introduces the following behavioral
changes in mysqldump:
1) After this patch we no longer flush logs before
dumping each database when the --single-transaction
option is given (in the absence of the --lock-all-tables
and --master-data options), as was done before.
2) Also, after this patch, we acquire FTWRL
(FLUSH TABLES WITH READ LOCK) before flushing logs when
only --single-transaction and --flush-logs are given.
This makes it safe to use mysqldump with these two
options and without the --master-data parameter for
backups.
unix_timestamp() is used in this part of the code in place of current_time().
Also, since the PB2 machines may be extremely fast, instead of looping through the code,
we use sleep(1.1) so that the variables t0 and t1 have different values.
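For illustration, the kind of comparison the test relies on (a sketch; treating t0 and t1 as user variables is an assumption):
SET @t0 = UNIX_TIMESTAMP();
DO SLEEP(1.1);     -- more than one full second, so the next sample must differ
SET @t1 = UNIX_TIMESTAMP();
SELECT @t1 > @t0;  -- expected: 1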
CREATE TABLE bug13510739 (c INTEGER NOT NULL, PRIMARY KEY (c)) ENGINE=INNODB;
INSERT INTO bug13510739 VALUES (1), (2), (3), (4);
DELETE FROM bug13510739 WHERE c=2;
HANDLER bug13510739 OPEN;
HANDLER bug13510739 READ `primary` = (2);
HANDLER bug13510739 READ `primary` NEXT; <-- crash
The bug is that in this particular test case row_search_for_mysql() picked up
a delete-marked record and quit, leaving the cursor in a non-positioned state, and
on the subsequent 'get next' call the code crashed because of the
non-positioned cursor.
In row0sel.cc (line numbers from mysql-trunk):
4653 if (rec_get_deleted_flag(rec, comp)) {
...
4679 if (index == clust_index && unique_search) {
4680
4681 err = DB_RECORD_NOT_FOUND;
4682
4683 goto normal_return;
4684 }
it quit from here, not storing the cursor position.
In contrast, if the record=2 is not found at all (e.g. sleep(1) after DELETE
to let the purge wipe it away completely) then 'get = 2' does find record=3
and quits from here:
4366 if (0 != cmp_dtuple_rec(search_tuple, rec, offsets)) {
...
4394 btr_pcur_store_position(pcur, &mtr);
4395
4396 err = DB_RECORD_NOT_FOUND;
4397 #if 0
4398 ut_print_name(stderr, trx, FALSE, index->name);
4399 fputs(" record not found 3\n", stderr);
4400 #endif
4401
4402 goto normal_return;
Another fix could be to extend the condition on line 4366 to hold only if
search_tuple matches rec AND rec is not delete-marked.
Notice that in the above test case if we wait about 1 second somewhere after
DELETE and before 'get = 2', then the testcase does not crash and returns 4
instead. Not sure if this is the correct behavior, but this bugfix removes
the crash and makes the code return what it also returns in the non-crashing
case (if rec=2 is not found during 'get = 2', e.g. we have sleep(1) there).
Approved by: Marko (http://bur03.no.oracle.com/rb/r/863/)
The time comparison using current_time() stored in an int variable was giving wrong results because
the integer representation of current_time() has been changed in mysql-trunk but not in mysql-5.5.
The time is stored in the format hh:mm:ss as the 'time' datatype. But as an int, it is stored as hhmmss,
though only on the trunk. On mysql-5.5, as an int, it is stored as hh.
Hence, the current_time() function has been replaced with the unix_timestamp() function.
This patch changes the mechanism by which the client enables a
plugin. Instead of using INSERT IGNORE to reload a plugin library,
it now uses REPLACE INTO. This allows users to load a library
multiple times, replacing the existing values in the mysql.plugin
table, and to point the symbol reference to a different dl name in
the table, thus permitting multiple versions of the same library
to be enabled without first disabling the old version.
A regression test was added to ensure this feature works.
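For example, the client may now issue something along these lines (a sketch; the exact generated statement is not quoted from the patch):
REPLACE INTO mysql.plugin (name, dl) VALUES ('my_plugin', 'my_plugin_v2.so');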
Problem description:
When a view is created using the FORMAT function and the FORMAT function uses the locale
option, the definition of the view saved on the server doesn't contain that locale information.
Ex:
create table test2 (bb decimal (10,2));
insert into test2 values (10.32),(10009.2),(12345678.21);
create view test3 as select format(bb,1,'sk_SK') as cc from test2;
select * from test3;
+--------------+
| cc |
+--------------+
| 10.3 |
| 10,009.2 |
| 12,345,678.2 |
+--------------+
3 rows in set (0.02 sec)
show create view test3
View: test3
Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost`
SQL SECURITY DEFINER VIEW `test3` AS select format(`test2`.`bb`,1) AS `cc`
from `test2`
character_set_client: latin1
collation_connection: latin1_swedish_ci
1 row in set (0.02 sec)
Problem Analysis:
The function Item_func_format::print(), which prints the query string used to create
the view, does not print the third argument (i.e. the locale information). Hence the
view is created without the locale information.
Problem Solution:
If the argument count is more than 2, we now print the third argument into the query string.
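With the fix, the stored definition is expected to retain the locale argument (the exact rendering below is an assumption):
SHOW CREATE VIEW test3;
-- ... AS select format(`test2`.`bb`,1,'sk_SK') AS `cc` from `test2`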
Files changed:
sql/item_strfunc.cc
Function call changes: Item_func_format::print()
mysql-test/t/select.test
Added test case to test the bug
mysql-test/r/select.result
Result of the test case appended here
There was a memory leak when running some tests on PB2.
The reason for the failure is an early return from change_master()
that skipped the deallocation of a dyn-array.
Actually, the same bug (Bug#58915) was fixed in trunk by relocating the dyn-array
destruction into THD::cleanup_after_query(), which cannot be bypassed.
The current patch backports magne.mahre@oracle.com-20110203101306-q8auashb3d7icxho
and adds two optimizations: the dyn-array is now based on a static buffer,
and the array initialization is called precisely when it is necessary rather than
for each CHANGE MASTER as before.
The counter handler_read_key (SSV::ha_read_key_count) is incremented
incorrectly.
The MySQL server maintains a per-thread system_status_var (SSV)
object. This object contains, among other things, the counter
SSV::ha_read_key_count. The purpose of this counter is to measure the
number of requests to read a row based on a key (or the number of
index lookups).
This counter was wrongly incremented in
ha_innobase::innobase_get_index(). The fix removes
this increment statement (for both innodb and innodb_plugin).
The various callers of innobase_get_index() were checked to
determine whether any of them must increment this counter (i.e., whether they
first call innobase_get_index() and then perform an index lookup). It was found
that no caller of innobase_get_index() needs to worry about the
SSV::ha_read_key_count counter.
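The user-visible counterpart of SSV::ha_read_key_count is the Handler_read_key status variable; a quick way to observe it (generic example, assuming a table t1 with a primary key on column c):
FLUSH STATUS;
SELECT * FROM t1 WHERE c = 1;                 -- one index lookup
SHOW SESSION STATUS LIKE 'Handler_read_key';  -- should reflect index lookups only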