
Fix for BUG#20188 "REPLACE or ON DUPLICATE KEY UPDATE in auto_increment
breaks binlog":

If the slave's table had a higher auto_increment counter than the master's
(even though all rows of the two tables were identical), then in some cases
REPLACE and INSERT ... ON DUPLICATE KEY UPDATE did not replicate correctly
with statement-based binlogging (they inserted different values on the slave
than on the master).
write_record() contained a "thd->next_insert_id=0" to force an adjustment
of thd->next_insert_id after the update or replacement. But it is this
assignment that introduced the non-determinism of the statement on the
slave, and thus the bug. For ON DUPLICATE KEY UPDATE, we replace that
assignment with a call to handler::adjust_next_insert_id_after_explicit_value(),
which is deterministic (it does not depend on the slave table's autoinc
counter). For REPLACE, the assignment can simply be removed (REPLACE can't
insert a number larger than thd->next_insert_id).
We also move a too-early restore_auto_increment() call down to the point
where we really know that the value can be restored.
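
For reference, a minimal standalone sketch (not the actual handler.cc code;
the free-function form and the increment step of 1 are simplifications) of
why the new adjustment is deterministic: it uses only the explicit value and
the current next_insert_id, which master and slave share when their data is
identical, and never the storage engine's internal autoinc counter.

    #include <cstdint>

    // Deterministic counterpart of the old "thd->next_insert_id= 0" reset:
    // bump next_insert_id just past an explicitly inserted value, using
    // nothing that can differ between master and slave.
    static uint64_t adjust_next_insert_id_after_explicit_value(uint64_t next_insert_id,
                                                               uint64_t explicit_value)
    {
      if (explicit_value >= next_insert_id)
        next_insert_id= explicit_value + 1;   // a step of 1 is assumed here
      return next_insert_id;
    }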


mysql-test/r/rpl_insert_id.result:
  result update; without the bugfix, the slave's "3 350" row was "4 350".
mysql-test/t/rpl_insert_id.test:
  test for BUG#20188 "REPLACE or ON DUPLICATE KEY UPDATE in
  auto_increment breaks binlog".
  There are, in this order:
  - a test of the bug for the case of REPLACE
  - a test of basic ON DUPLICATE KEY UPDATE functionality which was not
  tested before
  - a test of the bug for the case of ON DUPLICATE KEY UPDATE
sql/handler.cc:
  the adjustment of next_insert_id when inserting a big explicit value is
  moved to a separate method so that it can be used elsewhere.
sql/handler.h:
  see handler.cc
sql/sql_insert.cc:
  restore_auto_increment() means "I know I won't use this autogenerated
  autoincrement value, you are free to reuse it for the next row". But in the
  case of REPLACE we were calling restore_auto_increment() too early: when
  write_row() fails to insert the row, we don't yet know that we won't use the
  value, as we are going to try again by internally doing an UPDATE of the
  existing row, or a DELETE of the existing row followed by an INSERT. So I
  move restore_auto_increment() further down, to where we know for sure that
  all possibilities for the row have failed.
  Additionally, in the case of REPLACE we don't need to reset
  THD::next_insert_id: the value of thd->next_insert_id will be suitable for
  the next row.
  In the case of ON DUPLICATE KEY UPDATE, resetting thd->next_insert_id is
  also wrong (it breaks the statement-based binlog), but it cannot simply be
  removed, as thd->next_insert_id must be adjusted if the explicit value
  exceeds it. We now do the adjustment by calling
  handler::adjust_next_insert_id_after_explicit_value() (which, contrary to
  thd->next_insert_id=0, does not depend on the slave table's autoinc counter,
  and so is deterministic). The simulation after these notes illustrates the
  difference.
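
To make the determinism argument concrete, here is a tiny standalone
simulation (plain C++, not server code; the counter values are made up to
mirror the "3" vs "4" difference seen in the test result) of handing out the
next insert id after a duplicate-key update hits a row with an explicit id,
on a master and on a slave whose internal autoinc counter has run ahead:

    #include <cstdint>
    #include <cstdio>

    // Old behaviour: thd->next_insert_id= 0 forced a re-read of the storage
    // engine's autoinc counter, which can differ between master and slave.
    static uint64_t next_id_old_reset(uint64_t engine_counter)
    {
      return engine_counter;
    }

    // New behaviour: depends only on values both servers share when their
    // table data is identical.
    static uint64_t next_id_new_adjust(uint64_t next_insert_id, uint64_t explicit_value)
    {
      return explicit_value >= next_insert_id ? explicit_value + 1 : next_insert_id;
    }

    int main()
    {
      const uint64_t explicit_id= 2;     // id of the row hit by the duplicate key
      const uint64_t next_insert_id= 3;  // identical on master and slave
      const uint64_t master_engine= 3, slave_engine= 4;  // slave's counter is ahead

      std::printf("old: master=%llu slave=%llu\n",   // prints 3 and 4: statement broke
                  (unsigned long long) next_id_old_reset(master_engine),
                  (unsigned long long) next_id_old_reset(slave_engine));
      std::printf("new: master=%llu slave=%llu\n",   // prints 3 and 3: deterministic
                  (unsigned long long) next_id_new_adjust(next_insert_id, explicit_id),
                  (unsigned long long) next_id_new_adjust(next_insert_id, explicit_id));
      return 0;
    }
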
This commit is contained in:
unknown
2006-07-05 14:41:35 +02:00
parent 57e7bc5513
commit 4e9d7d6c4b
5 changed files with 157 additions and 24 deletions


@@ -955,7 +955,6 @@ int write_record(THD *thd, TABLE *table,COPY_INFO *info)
uint key_nr;
if (error != HA_WRITE_SKIP)
goto err;
table->file->restore_auto_increment();
if ((int) (key_nr = table->file->get_dup_key(error)) < 0)
{
error=HA_WRITE_SKIP; /* Database can't find key */
@@ -1028,20 +1027,20 @@ int write_record(THD *thd, TABLE *table,COPY_INFO *info)
if (res == VIEW_CHECK_ERROR)
goto before_trg_err;
if (thd->clear_next_insert_id)
{
/* Reset auto-increment cacheing if we do an update */
thd->clear_next_insert_id= 0;
thd->next_insert_id= 0;
}
if ((error=table->file->update_row(table->record[1],table->record[0])))
{
if ((error == HA_ERR_FOUND_DUPP_KEY) && info->ignore)
{
table->file->restore_auto_increment();
goto ok_or_after_trg_err;
}
goto err;
}
info->updated++;
if (table->next_number_field)
table->file->adjust_next_insert_id_after_explicit_value(table->next_number_field->val_int());
trg_error= (table->triggers &&
table->triggers->process_triggers(thd, TRG_EVENT_UPDATE,
TRG_ACTION_AFTER, TRUE));
@@ -1067,12 +1066,6 @@ int write_record(THD *thd, TABLE *table,COPY_INFO *info)
table->triggers->process_triggers(thd, TRG_EVENT_UPDATE,
TRG_ACTION_BEFORE, TRUE))
goto before_trg_err;
if (thd->clear_next_insert_id)
{
/* Reset auto-increment cacheing if we do an update */
thd->clear_next_insert_id= 0;
thd->next_insert_id= 0;
}
if ((error=table->file->update_row(table->record[1],
table->record[0])))
goto err;
@@ -1142,6 +1135,7 @@ err:
table->file->print_error(error,MYF(0));
before_trg_err:
table->file->restore_auto_increment();
if (key)
my_safe_afree(key, table->s->max_unique_length, MAX_KEY_LENGTH);
DBUG_RETURN(1);