
Fix for BUG#20188 "REPLACE or ON DUPLICATE KEY UPDATE in
auto_increment breaks binlog":
if the slave's table had a higher auto_increment counter than the master's
(even though all rows of the two tables were identical), then in some cases
REPLACE and INSERT ... ON DUPLICATE KEY UPDATE failed to replicate
statement-based (they inserted different values on the slave than on the
master).
write_record() contained a "thd->next_insert_id=0" to force an adjustment
of thd->next_insert_id after the update or replacement. But it is this
assignment that introduced nondeterminism of the statement on the slave,
and thus the bug. For ON DUPLICATE KEY UPDATE, we replace that assignment
by a call to handler::adjust_next_insert_id_after_explicit_value(), which
is deterministic (it does not depend on the slave table's autoinc counter).
For REPLACE, the assignment can simply be removed (REPLACE cannot insert a
number larger than thd->next_insert_id).
We also move a too-early restore_auto_increment() call down to the point
where we really know the value can be restored.
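To illustrate the nondeterminism, here is a toy model (not the server code;
next_value_old() and table_autoinc are names made up for this sketch) of
what "thd->next_insert_id= 0" effectively did:

#include <cstdint>
#include <iostream>

using ulonglong= std::uint64_t;

/*
  Old behavior: resetting next_insert_id to 0 forces the next
  auto_increment value to be re-read from the table's counter, so the
  inserted value depends on whichever server executes the statement.
*/
ulonglong next_value_old(ulonglong table_autoinc)
{
  ulonglong next_insert_id= 0;          // the buggy reset
  if (next_insert_id == 0)
    next_insert_id= table_autoinc;      // re-read the engine's counter
  return next_insert_id;
}

int main()
{
  // Same statement, identical rows, but the slave's counter is higher:
  std::cout << "master inserts " << next_value_old(3) << "\n"; // 3
  std::cout << "slave  inserts " << next_value_old(4) << "\n"; // 4: drift
  return 0;
}

With identical rows but a higher engine counter on the slave, the same
binlogged statement produces a different auto_increment value there, which
is exactly the "3 350" vs "4 350" divergence noted below.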


mysql-test/r/rpl_insert_id.result:
  result update; without the bugfix, the slave's "3 350" row was "4 350".
mysql-test/t/rpl_insert_id.test:
  test for BUG#20188 "REPLACE or ON DUPLICATE KEY UPDATE in
  auto_increment breaks binlog".
  It contains, in this order:
  - a test of the bug for the case of REPLACE
  - a test of basic ON DUPLICATE KEY UPDATE functionality which was not
  tested before
  - a test of the bug for the case of ON DUPLICATE KEY UPDATE
sql/handler.cc:
  the adjustment of next_insert_id when a big explicit value is inserted
  is moved to a separate method, so that it can be used elsewhere (a sketch
  of the method follows these file notes).
sql/handler.h:
  see handler.cc
sql/sql_insert.cc:
  restore_auto_increment() means "I know I won't use this autogenerated
  autoincrement value; you are free to reuse it for the next row". But we
  were calling restore_auto_increment() too early in the case of REPLACE:
  if write_row() fails to insert the row, we don't yet know that we won't
  use the value, as we are going to try again by internally doing an UPDATE
  of the existing row, or a DELETE of the existing row and then an INSERT.
  So restore_auto_increment() is moved further down, to where we know for
  sure that all possibilities for the row have failed.
  Additionally, in the case of REPLACE, we don't need to reset
  THD::next_insert_id: the value of thd->next_insert_id will be suitable
  for the next row.
  In the case of ON DUPLICATE KEY UPDATE, resetting thd->next_insert_id is
  also wrong (it breaks the statement-based binlog), but it cannot simply
  be removed, as thd->next_insert_id must be adjusted if the explicit value
  exceeds it. We now do the adjustment by calling
  handler::adjust_next_insert_id_after_explicit_value() (which, contrary to
  thd->next_insert_id=0, does not depend on the slave table's autoinc
  counter, and so is deterministic).
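For reference, a minimal self-contained sketch of the deterministic
adjustment. The method name matches the commit; the simplified Handler
struct and the main() harness are assumptions for illustration, not the
actual sql/handler.cc code:

#include <cstdint>
#include <cassert>

using ulonglong= std::uint64_t;

struct Handler
{
  ulonglong next_insert_id= 0;  // next value the statement would generate

  /*
    If the row carried an explicit value >= next_insert_id, bump
    next_insert_id past it. The result depends only on the explicit value
    and the statement's own state, never on the autoinc counter of the
    slave's table, hence it is deterministic under statement-based binlog.
  */
  void adjust_next_insert_id_after_explicit_value(ulonglong nr)
  {
    if (next_insert_id && nr >= next_insert_id)
      next_insert_id= nr + 1;
  }
};

int main()
{
  Handler h;
  h.next_insert_id= 4;                                // statement state
  h.adjust_next_insert_id_after_explicit_value(1000); // explicit n=1000
  assert(h.next_insert_id == 1001);                   // next NULL gets 1001
  h.adjust_next_insert_id_after_explicit_value(5);    // smaller value
  assert(h.next_insert_id == 1001);                   // no change
  return 0;
}

This mirrors the expected results in the test below, where after a row is
updated to n=1000 the next NULL value becomes 1001 on master and slave
alike, regardless of either table's internal counter.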
Author: unknown
Date: 2006-07-05 14:41:35 +02:00
parent 57e7bc5513
commit 4e9d7d6c4b
5 changed files with 157 additions and 24 deletions

mysql-test/r/rpl_insert_id.result

@@ -132,3 +132,68 @@ id last_id
drop function bug15728;
drop function bug15728_insert;
drop table t1, t2;
create table t1 (n int primary key auto_increment not null,
b int, unique(b));
set sql_log_bin=0;
insert into t1 values(null,100);
replace into t1 values(null,50),(null,100),(null,150);
select * from t1 order by n;
n b
2 50
3 100
4 150
truncate table t1;
set sql_log_bin=1;
insert into t1 values(null,100);
select * from t1 order by n;
n b
1 100
insert into t1 values(null,200),(null,300);
delete from t1 where b <> 100;
select * from t1 order by n;
n b
1 100
replace into t1 values(null,100),(null,350);
select * from t1 order by n;
n b
2 100
3 350
select * from t1 order by n;
n b
2 100
3 350
insert into t1 values (NULL,400),(3,500),(NULL,600) on duplicate key UPDATE n=1000;
select * from t1 order by n;
n b
2 100
4 400
1000 350
1001 600
select * from t1 order by n;
n b
2 100
4 400
1000 350
1001 600
drop table t1;
create table t1 (n int primary key auto_increment not null,
b int, unique(b));
insert into t1 values(null,100);
select * from t1 order by n;
n b
1 100
insert into t1 values(null,200),(null,300);
delete from t1 where b <> 100;
select * from t1 order by n;
n b
1 100
insert into t1 values(null,100),(null,350) on duplicate key update n=2;
select * from t1 order by n;
n b
2 100
3 350
select * from t1 order by n;
n b
2 100
3 350
drop table t1;