dropping/creating tables".
The bug could lead to a crash when multi-delete statements were
prepared and used with temporary tables.
The bug was caused by a missing clean-up of the multi-delete table list before
re-execution of a prepared statement. In a statement like
DELETE t1 FROM t1, t2 WHERE ... the first table list (t1) is
moved to lex->auxilliary_table_list and excluded from lex->query_tables
or select_lex->tables. Thus it was inaccessible to reinit_stmt_before_use
and not cleaned up before re-execution of the prepared statement.
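A minimal SQL sketch of the failing pattern (table names are illustrative;
the actual coverage is in the ps.test case added below):

  CREATE TEMPORARY TABLE t1 (a INT);
  CREATE TEMPORARY TABLE t2 (a INT);
  PREPARE stmt FROM 'DELETE t1 FROM t1, t2 WHERE t1.a = t2.a';
  EXECUTE stmt;              -- first execution is fine
  EXECUTE stmt;              -- re-execution used to crash: the t1 entry in the
                             -- auxiliary table list was never cleaned up
  DEALLOCATE PREPARE stmt;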
mysql-test/r/ps.result:
Updated test results (Bug#19399)
mysql-test/t/ps.test:
A test case for Bug#19399 "Stored Procedures 'Lost Connection' when
dropping/creating tables": test that multi-delete
tables are cleaned up properly before re-execution.
sql/sql_lex.cc:
Always initialize auxilliary_table_list when we initialize the lex:
this way we don't have to check that lex->sql_command equals
SQLCOM_DELETE_MULTI whenever we need to access auxilliary_table_list.
In particular, in reinit_stmt_before_use we can simply check whether
auxilliary_table_list is non-NULL and, if so, clean it up.
sql/sql_prepare.cc:
Move the per-table clean-up functionality to a method of st_table_list.
Clean up auxiliary_table_list if it's not empty.
sql/table.cc:
Implement st_table_list::reinit_before_use().
sql/table.h:
Declare st_table_list::reinit_before_use().
The implementation of the method Item_func_reverse::val_str
for the REVERSE function modified the argument of the function.
This led to wrong results for expressions that contained
REVERSE(ref) if ref occurred elsewhere in the expression.
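For illustration, a query of the affected shape (hypothetical table and
column; the actual test case is in func_str.test):

  CREATE TABLE t1 (a VARCHAR(10));
  INSERT INTO t1 VALUES ('abcd');
  -- Before the fix REVERSE() reversed its argument's buffer in place, so the
  -- later reference to a in the same expression could see the reversed
  -- string instead of the original value, giving a wrong result.
  SELECT CONCAT(REVERSE(a), '-', a) FROM t1;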
mysql-test/r/func_str.result:
Added a test case for bug #18243.
mysql-test/t/func_str.test:
Added a test case for bug #18243.
sql/item_strfunc.cc:
Fixed bug #18243.
The implementation of the method Item_func_reverse::val_str
for the REVERSE function modified the argument of the function.
This led to wrong results for expressions that contained
REVERSE(ref) if ref occurred elsewhere in the expression.
The implementation of Item_func_reverse::val_str has been changed
to leave the argument intact.
sql/item_strfunc.h:
Fixed bug #18243.
Added tmp_value to the Item_func_reverse class to store
the result of the function. Before this fix the result
erroneously overwrote the argument.
mysql-test/t/ndb_dd_backuprestore.test:
Make sure the test only runs in the default cluster.
mysql-test/t/rpl_ndb_dd_advance.test:
Make sure the test only runs in the default cluster.
mysql-test/t/rpl_ndb_sync.test:
Make sure the test only runs in the default cluster.
mysql_client_test like mysql-test-run". Nothing to document.
mysql-test/mysql-test-run.pl:
If --debug is given, also enable debugging of mysql_client_test (useful at least
to know which insert_id values it receives in the OK packets of INSERT).
into gbichot3.local:/home/mysql_src/mysql-5.1-new-WL3146-handler
mysql-test/mysql-test-run.pl:
Auto merged
mysql-test/r/rpl_row_create_table.result:
Auto merged
mysql-test/t/rpl_row_create_table.test:
Auto merged
sql/sql_class.h:
Auto merged
sql/sql_insert.cc:
Auto merged
The bug was that if the server was running in mixed binlogging mode
and an INSERT DELAYED used components that require row-based logging, such
as UUID(), the server binlogged the statement statement-based instead of
row-based, and so failed to insert correct data on the slave.
This changeset implements that when a delayed_insert thread is created,
if the server's global binlog mode is "mixed", that thread uses row-based binlogging.
This also fixes BUG#20633 "INSERT DELAYED RAND() or @user_var does not
replicate statement-based": we don't fix it in statement-based mode (that would
require bookkeeping of rand seeds and user variables used by each row),
but at least it now works in mixed mode (as row-based will be used).
We re-enable rpl_switch_stm_row_mixed.test (so BUG#18590,
which was about re-enabling this test, will be closed) to test the fixes.
Between when it was disabled and now, some good changes to row-based
binlogging (no generation of table map events for non-changed tables)
have induced changes in the test's result file.
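A sketch of the kind of statement affected (names are illustrative; the real
coverage is in rpl_switch_stm_row_mixed.test):

  -- server running with --binlog-format=mixed
  CREATE TABLE t1 (id CHAR(36)) ENGINE=MyISAM;  -- INSERT DELAYED needs an engine
                                                -- such as MyISAM or MEMORY
  -- UUID() is unsafe for statement-based logging; with this fix the
  -- delayed_insert thread binlogs the row(s) row-based in mixed mode, so the
  -- slave ends up with the same value as the master.
  INSERT DELAYED INTO t1 VALUES (UUID());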
mysql-test/r/rpl_switch_stm_row_mixed.result:
result update.
Note that some pieces of binlog are gone, not due to my test but to changes
to the row-based binlogging code (non-changed tables don't generate
table map binlog events now) done while the test was disabled.
mysql-test/t/disabled.def:
this test works now
mysql-test/t/rpl_switch_stm_row_mixed.test:
testing fix to make INSERT DELAYED work in mixed mode
sql/sql_insert.cc:
In mixed binlogging mode, the delayed_insert system thread now always
uses row-based binlogging.
This makes replication of INSERT DELAYED VALUES(RAND()) or VALUES(@a)
work in mixed mode (it still does not in statement-based mode).
By default we never run disabled tests (even if they're
explicitly listed on the command line). We add an option --enable-disabled
which runs tests even though they are disabled, and prints, for each
such test, the comment explaining why it was disabled.
The reason for the change: when you want to run, for example, all tests
about NDB, mysql-test-run.pl t/*ndb*.test used to run some disabled
NDB tests, causing failures and needless investigations.
Code amended and approved by Kent.
mysql-test/lib/mtr_cases.pl:
Always detect whether a test is listed as disabled, and read the comment
explaining why it is. If it is listed, don't run the test, except if
--enable-disabled was given; in that case mark the test as to-run-even-
though-it-is-listed-as-disabled.
mysql-test/lib/mtr_report.pl:
Report tests which will run even though they are listed as disabled
(only does something if --enable-disabled is given).
mysql-test/mysql-test-run.pl:
New behaviour: by default we never run disabled tests (even if they're
explicitly listed on the command line). We add an option --enable-disabled
which runs tests even though they are disabled, and prints, for each
such test, the comment explaining why it was disabled.
mysql-test/r/archive.result:
After merge fix. It might come from the fix for
bug 1662 (ALTER TABLE LIKE ignores DATA/INDEX DIRECTORY).
sql/time.cc:
After merge fix. Auto resolve failed because this piece
of code was moved here from another file.
a too large value": the bug was that if MySQL generated a value for an
auto_increment column, based on auto_increment_* variables, and this value
was bigger than the column's max possible value, then that max possible
value was inserted (after issuing a warning). But this didn't honour
auto_increment_* variables (and so could cause conflicts in a master-master
replication where one master is supposed to generated only even numbers,
and the other only odd numbers), so now we "round down" this max possible
value to honour auto_increment_* variables, before inserting it.
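A rough sketch of the behaviour, assuming a TINYINT column and
increment/offset values like those exercised by rpl_auto_increment.test:

  SET @@auto_increment_increment = 10, @@auto_increment_offset = 5;
  CREATE TABLE t1 (a TINYINT NOT NULL AUTO_INCREMENT PRIMARY KEY);
  INSERT INTO t1 VALUES (125);   -- highest value in the 5, 15, ..., 125 series
  INSERT INTO t1 VALUES (NULL);
  -- The next generated value (135) exceeds TINYINT's maximum of 127. Before
  -- the fix the server inserted 127, which is not in the series; with the
  -- fix it rounds down to 125 and the statement fails with a duplicate-key
  -- error instead.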
mysql-test/r/rpl_auto_increment.result:
result update. Before the fix, the result was that master inserted 127 in t1
(which didn't honour auto_increment_* variables!),
instead of failing with "duplicate key 125" like now.
mysql-test/t/rpl_auto_increment.test:
Test for BUG#20524 "auto_increment_* not observed when inserting
a too large value".
We also check the pathological case (table t2) where it's impossible to
"round down".
The fixer of BUG#20573 will be able to use table t2 for testing his fix.
sql/handler.cc:
If handler::update_auto_increment() generates a value larger than the field's
max possible value, we used to simply insert this max possible value
(after pushing a warning). Now we "round down" this max possible value to
honour auto_increment_* variables (if at all possible), before trying the
insertion.
into chilla.local:/home/mydev/mysql-5.1-ateam
libmysqld/lib_sql.cc:
Auto merged
libmysqld/libmysqld.c:
Auto merged
mysql-test/r/func_sapdb.result:
Auto merged
mysql-test/r/func_time.result:
Auto merged
mysql-test/r/gis-rtree.result:
Auto merged
mysql-test/r/myisam.result:
Auto merged
mysql-test/r/symlink.result:
Auto merged
mysql-test/t/func_time.test:
Auto merged
mysql-test/t/key.test:
Auto merged
mysql-test/t/myisam.test:
Auto merged
scripts/make_binary_distribution.sh:
Auto merged
sql/field.cc:
Auto merged
sql-common/client.c:
Auto merged
sql/opt_sum.cc:
Auto merged
sql/sql_class.cc:
Auto merged
sql/sql_parse.cc:
Auto merged
sql/table.cc:
Auto merged
storage/myisam/mi_check.c:
Auto merged
storage/myisam/mi_create.c:
Auto merged
storage/myisam/mi_delete_table.c:
Auto merged
storage/myisam/mi_dynrec.c:
Auto merged
storage/myisam/mi_key.c:
Auto merged
storage/myisam/mi_rkey.c:
Auto merged
storage/myisam/rt_index.c:
Auto merged
storage/myisam/rt_mbr.c:
Auto merged
support-files/mysql.spec.sh:
Auto merged
mysql-test/r/ctype_utf8.result:
Manual merge
mysql-test/r/key.result:
Manual merge
mysql-test/t/ctype_utf8.test:
Manual merge
sql/item_timefunc.cc:
Manual merge
into mysql.com:/home/dlenev/mysql-5.0-bg18437-3
mysql-test/t/federated.test:
Auto merged
sql/ha_ndbcluster.cc:
Auto merged
sql/item.cc:
Auto merged
sql/mysql_priv.h:
Auto merged
sql/sql_delete.cc:
Auto merged
sql/sql_insert.cc:
Auto merged
sql/sql_parse.cc:
Auto merged
sql/sql_table.cc:
Auto merged
sql/sql_trigger.cc:
Auto merged
sql/sql_update.cc:
Auto merged
mysql-test/r/federated.result:
Manual merge.
Adding decimal "digits" in multiplication resulted in signed overflow and
producing wrong results.
Fixed by using large enough buffers and intermediary result types :
dec2 (currently longlong) to hold result of adding decimal "digits"
(currently int32).
mysql-test/r/select.result:
Bug #20569 Garbage in DECIMAL results from some mathematical functions
* test suite for the bug
mysql-test/t/select.test:
Bug #20569 Garbage in DECIMAL results from some mathematical functions
* test suite for the bug
strings/decimal.c:
Bug #20569 Garbage in DECIMAL results from some mathematical functions
* fixed the overflow in adding decimal "digits"
before update trigger on NDB table".
Two main changes:
- We use TABLE::read_set/write_set bitmaps for marking fields used by
the statement instead of Field::query_id in 5.1.
- Now, when we mark the columns used by the statement, we take into account
the columns used by the table's triggers instead of marking all columns as
used if the table has triggers (see the sketch below).
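A minimal sketch of the second change (hypothetical table and trigger; the
point is which columns end up marked in read_set/write_set):

  CREATE TABLE t1 (a INT, b INT, c INT) ENGINE=NDBCLUSTER;
  CREATE TRIGGER t1_bu BEFORE UPDATE ON t1
    FOR EACH ROW SET NEW.b = OLD.c + 1;
  -- For this statement the server now marks a (updated by the statement),
  -- b (written by the trigger) and c (read by the trigger) as used, instead
  -- of marking every column merely because the table has triggers.
  UPDATE t1 SET a = a + 1;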
mysql-test/r/federated.result:
Changed test in order to make it work with RBR.
RBR changes the way in which we execute "DELETE FROM t1" statement - we don't
use handler::delete_all_rows() method if RBR is enabled (see bug#19066).
As a result, the federated engine produces different sequences of statements
for the remote server in the non-RBR and RBR cases, and this changes the order
of the rows inserted by the following INSERT statements.
mysql-test/t/federated.test:
Changed test in order to make it work with RBR.
RBR changes the way in which we execute "DELETE FROM t1" statement - we don't
use handler::delete_all_rows() method if RBR is enabled (see bug#19066).
As a result, the federated engine produces different sequences of statements
for the remote server in the non-RBR and RBR cases, and this changes the order
of the rows inserted by the following INSERT statements.
sql/ha_partition.cc:
Added handling of HA_EXTRA_WRITE_CAN_REPLACE/HA_EXTRA_WRITE_CANNOT_REPLACE
to ha_partition::extra().
sql/item.cc:
Adjusted comment after merge. In 5.1 we use TABLE::read_set/write_set
bitmaps instead of Field::query_id for marking columns used.
sql/log_event.cc:
Write_rows_log_event::do_before_row_operations():
Now we explicitly inform the handler that we want to replace rows so it can
promote the operation done by write_row() to a replace.
sql/mysql_priv.h:
Removed declaration of mark_fields_used_by_triggers_for_insert_stmt() which
is no longer used (we have TABLE::mark_columns_needed_for_insert() instead).
sql/sql_insert.cc:
Adjusted code after merge. Get rid of mark_fields_used_by_triggers_for_insert_stmt()
as now we use TABLE::mark_columns_needed_for_insert() for the same purpose.
Aligned places where we call this method with places where we call
mark_fields_used_by_triggers_for_insert() in 5.0.
Finally we no longer need to call handler::extra(HA_EXTRA_WRITE_CAN_REPLACE)
in case of REPLACE statement since in 5.1 write_record() marks all columns
as used before doing actual row replacement.
sql/sql_load.cc:
Adjusted code after merge. In 5.1 we use TABLE::mark_columns_needed_for_insert() instead of
mark_fields_used_by_triggers_for_insert_stmt() routine. We also no longer
need to call handler::extra(HA_EXTRA_RETRIEVE_ALL_COLS) if we execute LOAD
DATA REPLACE since in 5.1 write_record() will mark all columns as used before
doing actual row replacement.
sql/sql_trigger.cc:
Table_triggers_list::mark_fields_used():
We use TABLE::read_set/write_set bitmaps for marking fields used instead
of Field::query_id in 5.1.
sql/sql_trigger.h:
TABLE::mark_columns_needed_for_* methods no longer need to be friends of the
Table_triggers_list class as, instead of directly accessing its private
members, they can use the public Table_triggers_list::mark_fields_used() method.
Also, Table_triggers_list::mark_fields_used() no longer needs a THD argument.
sql/table.cc:
TABLE::mark_columns_needed_for_*():
Now we mark columns which are really used by table's triggers instead of
marking all columns as used if table has triggers.
into xiphis.org:/home/antony/work2/p2-bug8706.3-merge-5.0
configure.in:
Auto merged
mysql-test/r/federated.result:
Auto merged
mysql-test/r/func_sapdb.result:
Auto merged
mysql-test/r/func_time.result:
Auto merged
mysql-test/r/symlink.result:
Auto merged
mysql-test/t/federated.test:
Auto merged
mysql-test/t/func_sapdb.test:
Auto merged
mysql-test/t/func_time.test:
Auto merged
sql/item_timefunc.cc:
Auto merged
sql/sql_parse.cc:
Auto merged
"temporary table with data directory option fails"
myisam should not use user-specified table name when creating
temporary tables and use generated connection specific real name.
Test included.
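A sketch of the previously conflicting scenario (paths are illustrative):

  -- connection 1
  CREATE TEMPORARY TABLE t1 (a INT) ENGINE=MyISAM
    DATA DIRECTORY = '/tmp/mysql_data' INDEX DIRECTORY = '/tmp/mysql_index';
  -- connection 2: the same statement used to clash on disk, because the data
  -- file was created under the user-visible name "t1" instead of the hidden,
  -- connection-specific temporary name.
  CREATE TEMPORARY TABLE t1 (a INT) ENGINE=MyISAM
    DATA DIRECTORY = '/tmp/mysql_data' INDEX DIRECTORY = '/tmp/mysql_index';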
myisam/mi_create.c:
Bug#8706
When creating a temporary table with a directory override, ensure that
the real filename uses the hidden temporary name; otherwise multiple
clients cannot have identically named temporary tables without
conflict.
mysql-test/r/myisam.result:
Bug#8706
Test for bug
mysql-test/t/myisam.test:
Bug#8706
Test for bug
- Flush GCI needs to be reset on disconnect, as the cluster may reconnect after --initial with a smaller GCI.
storage/ndb/src/ndbapi/NdbEventOperationImpl.cpp:
Bug #20843 tests fails randomly with assertion in completeClusterFailed
Re-enabled the test.
into zippy.(none):/home/cmiller/work/mysql/merge/mysql-5.1-new-maint
BUILD/compile-dist:
Auto merged
BitKeeper/deleted/.del-partition_innodb.result:
Auto merged
BitKeeper/deleted/.del-partition_innodb.test:
Auto merged
client/mysqltest.c:
Auto merged
mysql-test/r/create.result:
Auto merged
mysql-test/r/create_not_windows.result:
Auto merged
mysql-test/r/func_group.result:
Auto merged
mysql-test/r/innodb_mysql.result:
Auto merged
mysql-test/r/partition.result:
Auto merged
mysql-test/r/sp.result:
Auto merged
mysql-test/t/innodb_mysql.test:
Auto merged
mysql-test/t/partition.test:
Auto merged
mysql-test/t/ps_1general.test:
Auto merged
mysql-test/t/sp.test:
Auto merged
mysql-test/t/wait_timeout.test:
Auto merged
mysys/my_lib.c:
Auto merged
sql/field.cc:
Auto merged
sql/field.h:
Auto merged
sql/unireg.cc:
Auto merged
mysql-test/extra/rpl_tests/rpl_log.test:
Manual merge
mysql-test/lib/mtr_process.pl:
Manual merge
mysql-test/mysql-test-run.pl:
Manual merge
mysql-test/r/type_newdecimal.result:
Manual merge
mysql-test/t/create.test:
Manual merge
mysql-test/t/func_group.test:
Manual merge
mysql-test/t/type_newdecimal.test:
Manual merge
mysql-test/mysql-test-run.pl:
Now that the RPM spec files use the Perl script to run the tests, we also need the "Logging:"
line in its output, because otherwise we lack the information about how the test suite was run.
Add the line.
Also fixed an error in handling options for subpartitions.
mysql-test/r/partition.result:
New test cases
mysql-test/t/partition.test:
New test cases
sql/ha_partition.cc:
Added partition_element to prepare_new_partition so that we can properly set up
the table before creating partitions.
sql/ha_partition.h:
Added partition_element to prepare_new_partition so that we can properly set up
the table before creating partitions.
sql/sql_yacc.yy:
Ensure that subpartitions always inherit options from the partition they belong to.
They can override them afterwards, but otherwise use the options set at the
partition level, if any are set at that level.
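A sketch of the intended inheritance (DATA DIRECTORY is used as an
illustrative option; the fix covers partition-level options in general):

  CREATE TABLE t1 (a INT, b INT)
    PARTITION BY RANGE (a)
    SUBPARTITION BY HASH (b) (
      PARTITION p0 VALUES LESS THAN (10)
        DATA DIRECTORY = '/tmp/p0_data'
        (SUBPARTITION sp00,                                    -- inherits /tmp/p0_data
         SUBPARTITION sp01 DATA DIRECTORY = '/tmp/sp01_data'), -- overrides it
      PARTITION p1 VALUES LESS THAN (20)
        (SUBPARTITION sp10, SUBPARTITION sp11)
    );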
Reverting to old behaviour of writing the query before all rows
have been written.
mysql-test/r/rpl_row_delayed_ins.result:
Result change
sql/sql_class.cc:
Adding debug message to binlog_query()
sql/sql_insert.cc:
- Changing write_delayed() to use a LEX_STRING for the query.
- Adding query string to class delayed_row.
- Removing query string from class delayed_insert.
- Adding code to copy query string and delete it when the row
is executed.
- Logging query at first row instead of after all rows are
inserted (reverting to old behaviour).
- Flushing the pending row event after all rows have been inserted.
This is necessary since binlog_query() is called before all rows
instead of after.
mysql-test/r/rpl_insert.result:
New BitKeeper file ``mysql-test/r/rpl_insert.result''
mysql-test/t/rpl_insert.test:
New BitKeeper file ``mysql-test/t/rpl_insert.test''
auto_increment breaks binlog":
if slave's table had a higher auto_increment counter than master's (even
though all rows of the two tables were identical), then in some cases,
REPLACE and INSERT ON DUPLICATE KEY UPDATE failed to replicate
statement-based (it inserted different values on slave from on master).
write_record() contained a "thd->next_insert_id=0" to force an adjustment
of thd->next_insert_id after the update or replacement. But it is this
assigment introduced indeterminism of the statement on the slave, thus
the bug. For ON DUPLICATE, we replace that assignment by a call to
handler::adjust_next_insert_id_after_explicit_value() which is deterministic
(does not depend on slave table's autoinc counter). For REPLACE, this
assignment can simply be removed (as REPLACE can't insert a number larger
than thd->next_insert_id).
We also move a too early restore_auto_increment() down to when we really know
that we can restore the value.
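A sketch of the problematic pattern (hypothetical table; the real coverage
is in rpl_insert_id.test):

  CREATE TABLE t1 (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    k  INT NOT NULL UNIQUE,
    v  INT
  );
  -- Replicated statement-based. If the slave's auto_increment counter is
  -- higher than the master's (even with identical rows), the old
  -- "thd->next_insert_id = 0" reset made the generated id depend on the
  -- slave's own counter, so the slave could store a different id than the
  -- master for the same statement.
  INSERT INTO t1 (k, v) VALUES (10, 350)
    ON DUPLICATE KEY UPDATE v = VALUES(v);
  REPLACE INTO t1 (k, v) VALUES (10, 351);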
mysql-test/r/rpl_insert_id.result:
Result update; without the bugfix, the slave's "3 350" rows were "4 350".
mysql-test/t/rpl_insert_id.test:
test for BUG#20188 "REPLACE or ON DUPLICATE KEY UPDATE in
auto_increment breaks binlog".
There is, in this order:
- a test of the bug for the case of REPLACE
- a test of basic ON DUPLICATE KEY UPDATE functionality which was not
tested before
- a test of the bug for the case of ON DUPLICATE KEY UPDATE
sql/handler.cc:
The adjustment of next_insert_id when inserting a big explicit value is
moved to a separate method so that it can be used elsewhere.
sql/handler.h:
see handler.cc
sql/sql_insert.cc:
restore_auto_increment() means "I know I won't use this autogenerated
autoincrement value, you are free to reuse it for next row". But we were
calling restore_auto_increment() in the case of REPLACE: if write_row() fails
inserting the row, we don't know that we won't use the value, as we are going to
try again by doing internally an UPDATE of the existing row, or a DELETE
of the existing row and then an INSERT. So I move restore_auto_increment()
further down, when we know for sure we failed all possibilities for the row.
Additionally, in case of REPLACE, we don't need to reset THD::next_insert_id:
the value of thd->next_insert_id will be suitable for the next row.
In case of ON DUPLICATE KEY UPDATE, resetting thd->next_insert_id is also
wrong (breaks statement-based binlog), but cannot simply be removed, as
thd->next_insert_id must be adjusted if the explicit value exceeds it.
We now do the adjustment by calling
handler::adjust_next_insert_id_after_explicit_value() (which, contrary to
thd->next_insert_id=0, does not depend on the slave table's autoinc counter,
and so is deterministic).
into mysql.com:/home/mydev/mysql-5.0-ateam
myisam/mi_key.c:
Auto merged
mysql-test/r/gis-rtree.result:
Auto merged
mysql-test/t/gis-rtree.test:
Auto merged
myisam/mi_check.c:
SCCS merged