The bug was that if the server was running in mixed binlogging mode,
and an INSERT DELAYED used components requiring row-based binlogging,
such as UUID(), the server binlogged the statement in statement-based
format instead of row-based format, so the slave failed to insert the
correct data.
This changeset makes a delayed_insert thread use row-based binlogging
if the server's global binlog format is "mixed" when the thread is created.
This also fixes BUG#20633 "INSERT DELAYED RAND() or @user_var does not
replicate statement-based": we don't fix it in statement-based mode (that
would require bookkeeping of the rand seeds and user variables used by each
row), but at least it now works in mixed mode (as row-based will be used).
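For illustration, a minimal scenario of the kind that was mis-replicated
before this fix (table and column names are hypothetical):

  -- On a master running with binlog_format=MIXED:
  CREATE TABLE t1 (u CHAR(36), a INT);
  -- Before the fix, the delayed_insert thread logged this statement-based,
  -- so the slave re-evaluated UUID() and stored a different value:
  INSERT DELAYED INTO t1 VALUES (UUID(), 1);
  -- The same applied to RAND() and user variables (BUG#20633):
  SET @v = 42;
  INSERT DELAYED INTO t1 VALUES (UUID(), @v);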
We re-enable rpl_switch_stm_row_mixed.test (so BUG#18590, which was about
re-enabling this test, can be closed) to test the fixes.
Between the time it was disabled and now, some good changes to row-based
binlogging (table map events are no longer generated for unchanged tables)
have changed the test's result file.
By default we never run disabled tests (even if they're explicitly listed
on the command line). We add an option --enable-disabled which runs tests
even though they are disabled, and prints, for each such test, the comment
explaining why it was disabled.
The reason for the change is the case where you want to run, for example,
"all tests which are about NDB": mysql-test-run.pl t/*ndb*.test used to run
some disabled NDB tests, causing failures and then investigations.
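An illustrative invocation of the new option:

  # Run all NDB tests, including disabled ones; for each disabled test,
  # the comment explaining why it was disabled is printed.
  mysql-test-run.pl --enable-disabled t/*ndb*.test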
Code amended and approved by Kent.
a too large value": the bug was that if MySQL generated a value for an
auto_increment column, based on the auto_increment_* variables, and this
value was bigger than the column's maximum possible value, then that
maximum possible value was inserted (after issuing a warning). But this
didn't honour the auto_increment_* variables (and so could cause conflicts
in master-master replication where one master is supposed to generate only
even numbers and the other only odd numbers), so now we "round down" this
maximum possible value to honour the auto_increment_* variables before
inserting it.
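A sketch of the even-numbers master from the example above (column type
and values are illustrative):

  CREATE TABLE t1 (a TINYINT AUTO_INCREMENT PRIMARY KEY);
  SET auto_increment_increment = 2, auto_increment_offset = 2;
  -- When the next generated value would be 128, it exceeds TINYINT's
  -- maximum of 127. Previously the server inserted 127, an odd value
  -- that could collide with the odd-numbers master; now it rounds the
  -- capped value down to 126, honouring the variables.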
Adding decimal "digits" during multiplication resulted in signed overflow,
producing wrong results.
Fixed by using large enough buffers and an intermediate result type:
dec2 (currently longlong) to hold the result of adding decimal "digits"
(currently int32).
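An illustrative query of the affected shape (not the exact operands of the
original report): multiplying high-precision DECIMAL values whose
intermediate digit additions could overflow a signed 32-bit accumulator.

  SELECT CAST('9999999999999999999999999999.999999' AS DECIMAL(34,6))
       * CAST('9999999999999999999999999999.999999' AS DECIMAL(34,6));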
before update trigger on NDB table".
Two main changes:
- We use TABLE::read_set/write_set bitmaps for marking fields used by the
  statement instead of Field::query_id in 5.1.
- Now when we mark columns used by the statement we take into account the
  columns used by the table's triggers, instead of marking all columns as
  used if the table has triggers (see the sketch below).
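For illustration of the second change (table and trigger names are
hypothetical):

  CREATE TABLE t1 (a INT, b INT, c INT) ENGINE=NDBCLUSTER;
  CREATE TRIGGER t1_bu BEFORE UPDATE ON t1
    FOR EACH ROW SET NEW.b = NEW.a + 1;
  -- UPDATE t1 SET c = 0 now marks only c plus the trigger's columns
  -- a and b as used, instead of every column of t1 just because the
  -- table has a trigger.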
"temporary table with data directory option fails"
myisam should not use user-specified table name when creating
temporary tables and use generated connection specific real name.
Test included.
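A statement of the affected form (path is illustrative):

  CREATE TEMPORARY TABLE t1 (a INT) ENGINE=MyISAM
    DATA DIRECTORY='/tmp/mysql_data';
  -- Before the fix, MyISAM created the files under DATA DIRECTORY using
  -- the user-visible name "t1" and the statement could fail; it now uses
  -- the generated connection-specific real name.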
auto_increment breaks binlog":
if the slave's table had a higher auto_increment counter than the master's
(even though all rows of the two tables were identical), then in some cases
REPLACE and INSERT ON DUPLICATE KEY UPDATE failed to replicate
statement-based (they inserted different values on the slave than on the
master).
write_record() contained a "thd->next_insert_id=0" to force an adjustment
of thd->next_insert_id after the update or replacement. But it is this
assignment that introduced indeterminism of the statement on the slave, and
thus the bug. For ON DUPLICATE, we replace that assignment with a call to
handler::adjust_next_insert_id_after_explicit_value(), which is
deterministic (it does not depend on the slave table's autoinc counter).
For REPLACE, the assignment can simply be removed (as REPLACE can't insert
a number larger than thd->next_insert_id).
We also move a too-early restore_auto_increment() call down to the point
where we really know that the value can be restored.
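A minimal scenario of the kind that diverged (schema and values are
illustrative):

  CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, b INT UNIQUE);
  INSERT INTO t1 VALUES (1, 10);
  -- Suppose the slave's auto_increment counter for t1 is higher than the
  -- master's, although both tables contain the same rows.
  -- Below, the first row hits a duplicate on b and is updated; the second
  -- is a real insert whose auto-generated value for column a could,
  -- before the fix, differ between master and slave:
  INSERT INTO t1 (b) VALUES (10), (20)
    ON DUPLICATE KEY UPDATE b = VALUES(b);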