Fix random failures in test 'wait_timeout' that depend on exact timing.
1. Force a reconnect initially if necessary, since a slow startup might
otherwise have caused a connection timeout before the test can even start.
2. Explicitly disconnect the first connection, to remove any ambiguity
about which connection the timeout aborts (that ambiguity was causing the
test failures); see the sketch below.
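A minimal sketch of the reworked test flow, in mysqltest syntax (the
connection name and timeout value are illustrative, not taken from the
actual test):

    # 1. Reconnect up front, so a slow startup cannot already have
    #    timed out the connection before the test begins.
    connect (wt_con,localhost,root,,);
    connection wt_con;
    SET SESSION wait_timeout = 1;
    # ... wait for the timeout to fire and check the error ...
    # 2. Explicitly drop the first connection, so only one connection
    #    can possibly be aborted by the timeout.
    disconnect wt_con;
    connection default;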
dropping/creating tables".
The bug could lead to a crash when multi-delete statements were
prepared and used with temporary tables.
The bug was caused by lack of clean-up of multi-delete tables before
re-execution of a prepared statement. In a statement like
DELETE t1 FROM t1, t2 WHERE ... the first table list (t1) is
moved to lex->auxilliary_table_list and excluded from lex->query_tables
or select_lex->tables. Thus it was inaccessible to reinit_stmt_before_use()
and was not cleaned up before re-execution of a prepared statement.
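A hedged illustration of the affected scenario (the exact repro steps are
an assumption, not the test case from the fix):

    CREATE TEMPORARY TABLE t1 (a INT);
    CREATE TEMPORARY TABLE t2 (a INT);
    PREPARE stmt FROM 'DELETE t1 FROM t1, t2 WHERE t1.a = t2.a';
    EXECUTE stmt;
    DROP TEMPORARY TABLE t1;
    CREATE TEMPORARY TABLE t1 (a INT);
    # Re-execution reused the stale, never-cleaned-up entry for t1
    # from the auxiliary table list and could crash the server.
    EXECUTE stmt;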
The implementation of the method Item_func_reverse::val_str for the
REVERSE() function modified the function's argument in place. This led
to wrong results for expressions containing REVERSE(ref) when ref also
occurred elsewhere in the expression.
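An illustrative (unverified) example of the kind of expression that could
be affected:

    CREATE TABLE t1 (a VARCHAR(10));
    INSERT INTO t1 VALUES ('abc');
    # Because val_str reversed the argument's own string buffer, a
    # later reference to a in the same expression could observe the
    # reversed value instead of the original one.
    SELECT CONCAT(a, '-', REVERSE(a), '-', a) FROM t1;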
The bug was that if the server was running in mixed binlogging mode,
and an INSERT DELAYED used a component that requires row-based logging,
such as UUID(), the server binlogged the statement statement-based
rather than row-based, so incorrect data was inserted on the slave.
This changeset makes a delayed_insert thread use row-based binlogging
from its creation whenever the server's global binlog mode is "mixed".
This also fixes BUG#20633 "INSERT DELAYED RAND() or @user_var does not
replicate statement-based": we don't fix it in statement-based mode (that
would require bookkeeping of the rand seeds and user variables used by
each row), but it now at least works in mixed mode, as row-based logging
is used.
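A hedged sketch of the case this fixes (table and value are illustrative):

    SET GLOBAL binlog_format = 'MIXED';
    CREATE TABLE t1 (u CHAR(36)) ENGINE=MyISAM;
    # UUID() is non-deterministic, so a statement-based binlog entry
    # would produce a different value on the slave; the delayed_insert
    # thread now logs this row-based instead.
    INSERT DELAYED INTO t1 VALUES (UUID());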
We re-enable rpl_switch_stm_row_mixed.test (so BUG#18590, which was about
re-enabling this test, will be closed) to test the fixes. Since the test
was disabled, improvements to row-based binlogging (table map events are
no longer generated for unchanged tables) have changed the test's result
file.
By default we never run disabled tests (even if they're explicitly listed
on the command line). We add an option --enable-disabled which runs tests
even though they are disabled, and prints, for each such test, the comment
explaining why it was disabled.
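Example invocation (using the test glob from the motivating case below):

    mysql-test-run.pl --enable-disabled t/*ndb*.test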
The reason for the change is that when you want to run, for example, "all
tests which are about NDB", mysql-test-run.pl t/*ndb*.test used to run
some disabled NDB tests, causing failures and then investigations.
Code amended and approved by Kent.
a too large value": the bug was that if MySQL generated a value for an
auto_increment column, based on the auto_increment_* variables, and this
value was bigger than the column's maximum possible value, then that
maximum possible value was inserted (after issuing a warning). But this
didn't honour the auto_increment_* variables (and so could cause conflicts
in a master-master replication setup where one master is supposed to
generate only even numbers, and the other only odd numbers), so now we
"round down" this maximum possible value to honour the auto_increment_*
variables before inserting it.
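A hedged illustration of the new behaviour (column type and values are
made up for the example):

    SET auto_increment_increment = 2, auto_increment_offset = 2;
    CREATE TABLE t1 (a TINYINT UNSIGNED AUTO_INCREMENT PRIMARY KEY);
    INSERT INTO t1 VALUES (254);
    # The next generated value (256) exceeds TINYINT UNSIGNED's maximum
    # of 255. Previously 255 (an odd value) was inserted; now the value
    # is rounded down to 254 so the even-only scheme is never violated
    # (here that should surface as a duplicate-key error rather than
    # bad data).
    INSERT INTO t1 VALUES (NULL);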
Adding decimal "digits" during multiplication resulted in signed
overflow, producing wrong results.
Fixed by using large enough buffers and intermediate result types:
dec2 (currently longlong) to hold the result of adding decimal "digits"
(currently int32).
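A hedged illustration of the kind of expression affected (the operands
are made up): multiplying high-precision DECIMAL values exercises the
digit-by-digit additions whose intermediate sums could overflow int32:

    SELECT CAST('9999999999999999999999999999.999' AS DECIMAL(31,3))
         * CAST('9999999999999999999999999999.999' AS DECIMAL(31,3));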
before update trigger on NDB table".
Two main changes:
- We use the TABLE::read_set/write_set bitmaps for marking the fields
  used by a statement, instead of Field::query_id, in 5.1.
- When we mark the columns used by a statement, we now take into account
  the columns used by the table's triggers, instead of marking all
  columns as used whenever the table has triggers (see the sketch below).
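A minimal sketch of the trigger case (the schema is illustrative):

    CREATE TABLE t1 (a INT PRIMARY KEY, b INT, c INT) ENGINE=NDBCLUSTER;
    CREATE TRIGGER t1_bu BEFORE UPDATE ON t1
      FOR EACH ROW SET NEW.c = OLD.b + 1;
    # The statement itself only touches a and b, but the trigger reads
    # b and writes c; those columns must also be present in the table's
    # read_set/write_set for NDB to fetch and store them.
    UPDATE t1 SET b = b + 1 WHERE a = 1;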
"temporary table with data directory option fails"
MyISAM should not use the user-specified table name when creating
temporary tables; it should use the generated, connection-specific real
name instead.
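A minimal sketch of the failing statement (the path is illustrative):

    CREATE TEMPORARY TABLE t1 (a INT) ENGINE=MyISAM
      DATA DIRECTORY = '/tmp/mysql_test_data';
    # Before the fix, the user-visible name 't1' was used for the files
    # under DATA DIRECTORY, so concurrent connections creating the same
    # temporary table name could collide.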
Test included.