a too large value": the bug was that if MySQL generated a value for an
auto_increment column, based on auto_increment_* variables, and this value
was bigger than the column's max possible value, then that max possible
value was inserted (after issuing a warning). But this didn't honour
auto_increment_* variables (and so could cause conflicts in master-master
replication where one master is supposed to generate only even numbers
and the other only odd numbers), so now we "round down" this max possible
value to honour the auto_increment_* variables before inserting it.
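For illustration, a minimal sketch of the "round down" step, assuming an
increment/offset pair and a column maximum; the helper name below is made
up and is not the server's actual function:

  #include <cstdint>
  #include <iostream>

  // Largest value <= col_max that still lies on the sequence
  // offset, offset+inc, offset+2*inc, ...  (illustrative helper only).
  static uint64_t round_down_to_sequence(uint64_t col_max,
                                         uint64_t inc, uint64_t off)
  {
    if (col_max < off)            // no generated value fits at all
      return 0;
    return ((col_max - off) / inc) * inc + off;
  }

  int main()
  {
    // TINYINT UNSIGNED max is 255; with increment=10, offset=2 the server
    // should insert 252 (not 255) so the value still belongs to this master.
    std::cout << round_down_to_sequence(255, 10, 2) << "\n";  // prints 252
    return 0;
  }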
Adding decimal "digits" during multiplication resulted in signed overflow
and produced wrong results.
Fixed by using large enough buffers and an intermediate result type:
dec2 (currently longlong) to hold the result of adding decimal "digits"
(currently int32).
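A hedged sketch of why the wider intermediate type matters; the dec1/dec2
typedefs below only mirror the naming and are not the server's decimal code:

  #include <cstdint>
  #include <iostream>

  typedef int32_t dec1;   // one decimal "digit" (a base-1e9 limb)
  typedef int64_t dec2;   // wide enough for dec1*dec1 plus carries

  int main()
  {
    dec1 a = 999999999, b = 999999999;   // largest base-1e9 limbs
    dec2 p = (dec2) a * b;               // ~1e18: far beyond int32 range
    dec2 sum = p + p;                    // adding partial products stays exact
    std::cout << sum << "\n";
    return 0;
  }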
before update trigger on NDB table".
Two main changes:
- We use TABLE::read_set/write_set bitmaps for marking fields used by the
  statement instead of Field::query_id in 5.1.
- Now when we mark columns used by a statement we take into account the
  columns used by the table's triggers instead of marking all columns as
  used if the table has triggers (see the sketch after this list).
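A small illustrative sketch of the bitmap idea, with made-up column numbers
and a made-up trigger column list (not the server's TABLE or bitmap types):

  #include <bitset>
  #include <cstddef>
  #include <iostream>

  int main()
  {
    const std::size_t ncols = 8;
    std::bitset<ncols> write_set;        // columns the statement writes
    std::bitset<ncols> trigger_used;     // columns the triggers touch

    write_set.set(1);                    // e.g. UPDATE t SET col1 = ...
    trigger_used.set(3);                 // BEFORE UPDATE trigger reads col3

    // Instead of marking every column because "the table has triggers",
    // only the columns the triggers actually use end up in the read set.
    std::bitset<ncols> read_set = trigger_used;

    std::cout << "read_set:  " << read_set  << "\n"
              << "write_set: " << write_set << "\n";
    return 0;
  }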
"temporary table with data directory option fails"
MyISAM should not use the user-specified table name when creating
temporary tables; it should use a generated, connection-specific real name.
Test included.
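A hedged sketch of what a generated, connection-specific name can look like;
the format string and the values below are illustrative, not taken from the
MyISAM code:

  #include <cstdio>

  int main()
  {
    // A per-connection internal name keeps concurrent CREATE TEMPORARY TABLE
    // statements with the same user-visible name (and the same DATA DIRECTORY)
    // from colliding on disk.
    char path[64];
    unsigned long pid = 1234, thd_id = 7, seq = 0;   // illustrative values
    std::snprintf(path, sizeof(path), "#sql%lx_%lx_%lx", pid, thd_id, seq);
    std::printf("%s\n", path);
    return 0;
  }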
auto_increment breaks binlog":
if the slave's table had a higher auto_increment counter than the master's
(even though all rows of the two tables were identical), then in some cases
REPLACE and INSERT ON DUPLICATE KEY UPDATE failed to replicate correctly
under statement-based replication (they inserted different values on the
slave than on the master).
write_record() contained a "thd->next_insert_id=0" to force an adjustment
of thd->next_insert_id after the update or replacement. But it is this
assignment that introduced indeterminism of the statement on the slave,
and thus the bug. For ON DUPLICATE, we replace that assignment by a call to
handler::adjust_next_insert_id_after_explicit_value(), which is deterministic
(does not depend on the slave table's autoinc counter). For REPLACE, this
assignment can simply be removed (as REPLACE can't insert a number larger
than thd->next_insert_id).
We also move a too-early restore_auto_increment() down to when we really know
that we can restore the value.
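A hedged model of the deterministic adjustment (not the server code): the
counter moves only as a function of the explicit value the statement wrote,
never as a function of the table's own auto-increment counter, so master
and slave end up in the same state:

  #include <cstdint>
  #include <iostream>

  struct Session {
    uint64_t next_insert_id;

    // Roughly the idea behind
    // handler::adjust_next_insert_id_after_explicit_value(): only move the
    // reserved counter forward if the explicit value passed it.
    void adjust_after_explicit_value(uint64_t value)
    {
      if (value >= next_insert_id)
        next_insert_id = value + 1;
    }
  };

  int main()
  {
    Session master{11};
    Session slave{11};    // same statement context on both servers

    // REPLACE / INSERT ... ON DUPLICATE KEY UPDATE wrote the explicit
    // value 10 on both servers.
    master.adjust_after_explicit_value(10);
    slave.adjust_after_explicit_value(10);

    std::cout << master.next_insert_id << " "
              << slave.next_insert_id << "\n";   // both still 11
    return 0;
  }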
run at startup"
The server returned an error when trying to execute an init-file containing
a stored procedure that could return multiple result sets to the client.
A stored procedure can return multiple result sets if it contains
PREPARE, SELECT, SHOW and similar statements.
The fix is to set client_capabilities |= CLIENT_MULTI_RESULTS in
sql_parse.cc:handle_bootstrap(). There is no "client" really, so
nothing is ever sent. This makes the init-file feature behave consistently:
the prepared statements that can be called directly in the init-file
can be used in a stored procedure too.
Re-committed the patch originally submitted by Per-Erik after review.
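A hedged sketch of the capability-flag idea; the constant value and the
session struct below are illustrative stand-ins, not the server's definitions:

  #include <cstdint>
  #include <iostream>

  static const uint32_t CLIENT_MULTI_RESULTS = 1U << 17;  // illustrative

  struct BootstrapSession { uint32_t client_capabilities = 0; };

  int main()
  {
    BootstrapSession thd;
    // During bootstrap there is no real client, but the executor checks
    // these bits before allowing statements that may return multiple
    // result sets, so the flag is simply set.
    thd.client_capabilities |= CLIENT_MULTI_RESULTS;

    std::cout << std::boolalpha
              << bool(thd.client_capabilities & CLIENT_MULTI_RESULTS) << "\n";
    return 0;
  }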
When compiling INSERT statements, the check of whether columns are provided
with values depends on the flag that indicates whether a field is used in
that query (Field::query_id).
However, the check for updatability of VIEW columns (check_view_insertability())
was calling fix_fields() and thus setting Field::query_id even for view
fields that are not referenced in the current INSERT statement.
As a result, the check for columns without default values
(check_that_all_fields_are_given_values()) assumed that all the VIEW
columns were mentioned in the INSERT field list and issued no
warnings or errors.
Fixed check_view_insertability() to turn off the flag that controls whether
Field::query_id is set (THREAD::set_query_id) before calling fix_fields()
and to restore it when done.
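A hedged sketch of the save/restore pattern used by the fix; the types and
functions below only model THD-style state and fix_fields(), they are not
the server's code:

  #include <iostream>

  struct Thd { bool set_query_id; };

  static void fix_fields(Thd *thd, const char *field)
  {
    // In the real server this would also mark the field as used by the
    // current query when the flag is on.
    std::cout << field << ": marking is "
              << (thd->set_query_id ? "ON" : "OFF") << "\n";
  }

  static bool check_view_insertability(Thd *thd)
  {
    bool save_set_query_id = thd->set_query_id;  // remember caller's setting
    thd->set_query_id = false;                   // don't mark fields as used
    fix_fields(thd, "view_col_not_in_insert_list");
    thd->set_query_id = save_set_query_id;       // restore before returning
    return false;                                // view is insertable
  }

  int main()
  {
    Thd thd{true};
    check_view_insertability(&thd);
    return 0;
  }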