This assert could be triggered if -1 was inserted into
an auto increment column by a statement writing more than
one row.
Unless explicitly given, an interval of auto increment values
is generated when a statement first needs an auto increment
value. The triggered assert checks that the auto increment
counter is equal to or higher than the lower bound of this
interval.
Generally, the auto increment counter starts at 1 and is
incremented by 1 each time it is used. However, inserting an
explicit value into the auto increment column sets the auto
increment counter to this value + 1 if this value is higher
than the current value of the auto increment counter.
This bug was triggered if the explicit value was -1. Since the
value was converted to unsigned before any comparisons were made,
it was found to be higher than the current value of the auto
increment counter and the counter was set to -1 + 1. This value
was below the reserved interval and caused the assert to be
triggered the next time the statement tried to write a row.
With the patch for Bug#39828, this bug is no longer repeatable.
Now, -1 + 1 is detected as an "overflow" which causes the auto
increment counter to be set to ULONGLONG_MAX. This avoids hitting
the assert for the next insert and causes a new interval of
auto increment values to be generated. This resolves the issue.
This patch therefore only contains a regression test and no code
changes. Test case added to auto_increment.test.
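For reference, a minimal scenario of the kind the regression test covers
(table and column names are illustrative, not necessarily those used in
auto_increment.test):

  CREATE TABLE t1 (a INT NOT NULL AUTO_INCREMENT PRIMARY KEY);
  -- A single statement that writes more than one row: it inserts the
  -- explicit value -1 and then needs a generated value for the next row.
  -- Before the Bug#39828 patch this could leave the auto increment
  -- counter below the reserved interval and trigger the assert.
  INSERT INTO t1 VALUES (-1), (NULL);
  DROP TABLE t1;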
This patch includes speed optimizations for HANDLER READ by caching as much as possible in HANDLER OPEN.
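For context, this concerns the SQL HANDLER interface; a typical session looks
like the sketch below (table and index names are illustrative). The caching
done at HANDLER ... OPEN means the repeated HANDLER ... READ calls no longer
have to rebuild lock and item structures each time:

  HANDLER t1 OPEN;
  HANDLER t1 READ FIRST;             -- scan-style reads
  HANDLER t1 READ NEXT;
  HANDLER t1 READ `PRIMARY` = (10);  -- indexed read, assuming t1 has a primary key
  HANDLER t1 CLOSE;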
Other things:
- Added mysqld option --disable-thr-alarm to be able to benchmark things without thr_alarm
- Changed 'Locked' state to 'System lock' and 'Table lock' (these were used in the code but never shown to the end user)
- Better error message if mysql_install_db.sh fails
- Moved handler function prototypes to sql_handler.h
- Removed the no longer used 'thd->locked' member
include/thr_alarm.h:
Added my_disable_thr_alarm
include/thr_lock.h:
Add new member to THR_LOCK_DATA to remember original lock type state. This is needed as thr_unlock() resets type to TL_UNLOCK.
mysql-test/include/check_no_concurrent_insert.inc:
Locked -> Table lock
mysql-test/include/handler.inc:
Locked -> Table lock
mysql-test/r/handler_innodb.result:
Updated results for new tests
mysql-test/r/handler_myisam.result:
Updated results for new tests
mysql-test/r/sp-threads.result:
Locked -> Table lock
mysql-test/suite/binlog/t/binlog_stm_row.test:
Locked -> Table lock
mysql-test/suite/funcs_1/datadict/processlist_val.inc:
Locked -> Table lock
mysql-test/suite/pbxt/t/lock_multi.test:
Locked -> Table lock
mysql-test/suite/sys_vars/r/concurrent_insert_func.result:
Locked -> Table lock
mysql-test/suite/sys_vars/t/concurrent_insert_func.test:
Locked -> Table lock
mysql-test/suite/sys_vars/t/delayed_insert_limit_func.test:
Locked -> Table lock
mysql-test/suite/sys_vars/t/query_cache_wlock_invalidate_func.test:
Locked -> Table lock
mysql-test/suite/sys_vars/t/sql_low_priority_updates_func.test:
Locked -> Table lock
mysql-test/t/insert_notembedded.test:
Locked -> Table lock
mysql-test/t/lock_multi.test:
Locked -> Table lock
mysql-test/t/merge-big.test:
Locked -> Table lock
mysql-test/t/multi_update.test:
Locked -> Table lock
mysql-test/t/query_cache_28249.test:
Locked -> Table lock
mysql-test/t/sp_notembedded.test:
Locked -> Table lock
mysql-test/t/sp_sync.test:
Locked -> Table lock
mysql-test/t/status.test:
Locked -> Table lock
mysql-test/t/trigger_notembedded.test:
Locked -> Table lock
mysys/thr_alarm.c:
Added option to disable thr_alarm
mysys/thr_lock.c:
Detect loops
scripts/mysql_install_db.sh:
Give better error message if something goes wrong
sql/Makefile.am:
Added sql_handler.h
sql/lock.cc:
Split functions to allow one to cache the value of store_lock() (for HANDLER functions).
- Split mysql_lock_tables() into two functions, where the first one allocates MYSQL_LOCK and the other one uses it.
- Made get_lock_data() an external function.
- Added an argument to mysql_unlock_tables() to not free sql_lock.
- Added an argument to reset_lock_data() to reset the lock structure to its initial state (as after get_lock_data())
sql/mysql_priv.h:
Moved handler function prototypes to sql_handler.h
Added new lock functions.
sql/mysqld.cc:
Added --thread-alarm startup option
sql/net_serv.cc:
Don't call vio_blocking() if not needed
sql/sql_base.cc:
include sql_handler.h
sql/sql_class.cc:
include sql_handler.h
Removed the no longer used 'thd->locked' member
sql/sql_class.h:
Removed the no longer used 'thd->locked' member
sql/sql_db.cc:
include sql_handler.h
sql/sql_delete.cc:
include sql_handler.h
sql/sql_handler.cc:
Rewrote all code to use SQL_HANDLER instead of TABLE_LIST (original interface)
Rewrote mysql_ha_open() to cache everything from TABLE_LIST and the items for the field list, WHERE clause, etc.
In mysql_ha_open() also cache MYSQL_LOCK structure from get_lock_data().
Split functions into smaller sub functions (needed to be able to implement mysql_ha_read_prepare())
Added mysql_ha_read_prepare() to allow one to prepare HANDLER READ.
sql/sql_handler.h:
Interface to sql_handler.cc
sql/sql_parse.cc:
include sql_handler.h
sql/sql_prepare.cc:
Added mysql_test_handler_read(), prepare for HANDLER READ
sql/sql_rename.cc:
include sql_handler.h
sql/sql_show.cc:
Removed usage of thd->locked
sql/sql_table.cc:
include sql_handler.h
sql/sql_trigger.cc:
include sql_handler.h
The patch also fixes a race in rpl_stop_slave.test.
On machines with lots of CPU and memory, something like `mtr --parallel=10`
can speed up the test suite enormously. However, we have a few test cases
that run for long (several minutes), and if we are unlucky and happen to
schedule those towards the end of the test suite, we end up with most
workers idle while waiting for the last slow test to end, significantly
delaying the finish of the entire suite.
Improve this by marking the offending tests as taking "long", and trying
to schedule those tests early. This reduces the time towards the end of
the test suite run where some workers sit idle while the remaining
workers each finish their last test.
Also, the rpl_stop_slave test had a race which could cause it to hit
a 300-second debug_sync timeout; this is fixed.
Testing on a 4-core 8GB machine, this patch speeds up the test suite by
around 30% for --parallel=10 (debug build), allowing the entire
suite to run in 5 minutes.
* move a capability from a virtual handler method to table_flags()
* rephrase error messages to avoid hard-coded English parts
* admit in test cases that they need xtradb, not innodb
mysql-test/suite/vcol/t/rpl_vcol.test:
this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_blocked_sql_funcs_innodb.test:
this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_column_def_options_innodb.test:
this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_handler_innodb.test:
this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_ins_upd_innodb.test:
this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_keys_innodb.test:
this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_non_stored_columns_innodb.test:
this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_partition_innodb.test:
this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_select_innodb.test:
this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_supported_sql_funcs_innodb.test:
this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_trigger_sp_innodb.test:
this test needs xtradb, it will fail with innodb
mysql-test/suite/vcol/t/vcol_view_innodb.test:
this test needs xtradb, it will fail with innodb
sql/ha_partition.h:
check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
sql/handler.h:
check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
sql/share/errmsg.txt:
no hard-coded english parts in the error messages (ER_UNSUPPORTED_ACTION_ON_VIRTUAL_COLUMN)
sql/sql_table.cc:
no hard-coded english parts in the error messages
sql/table.cc:
* check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
* no "csv workaround" is needed
* no hard-coded english parts in the error messages
storage/maria/ha_maria.cc:
check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
storage/maria/ha_maria.h:
check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
storage/myisam/ha_myisam.cc:
check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
storage/myisam/ha_myisam.h:
check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
storage/xtradb/handler/ha_innodb.cc:
check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
storage/xtradb/handler/ha_innodb.h:
check_if_supported_virtual_columns() -> HA_CAN_VIRTUAL_COLUMNS
mysqlbinlog only prints "use $database" statements to its output stream
when the active default database changes between events. This can cause
a "No Database Selected" error when dropping and recreating that database.
To fix the problem, we clear print_event_info->db when printing an event
for a CREATE/DROP/ALTER DATABASE statement, so that the Query_log_event
following such a statement is always printed with a "use 'db'" line
(except for transaction keywords).
mysql-test/r/mysqlbinlog.result:
Test result for Bug#50914.
mysql-test/t/mysqlbinlog.test:
Added a test to verify whether mysqlbinlog's approach of printing
"use $database" statements to its output stream causes a
"No Database Selected" error when dropping and recreating
that database.
sql/log_event.cc:
Updated the code to clear print_event_info->db when printing an event
for a CREATE/DROP/ALTER DATABASE statement, so that the Query_log_event
following such a statement is always printed with a "use 'db'" line
(except for transaction keywords).
- Removed files specific to compiling on OS/2
- Removed files specific to SCO Unix packaging
- Removed "libmysqld/copyright", text is included in documentation
- Removed LaTeX headers for NDB Doxygen documentation
- Removed obsolete NDB files
- Removed "mkisofs" binaries
- Removed the "cvs2cl.pl" script
- Changed a few GPL texts to use "program" instead of "library"
This patch improves the selection of the index used to apply row-based
DELETE and UPDATE events on tables with no primary key (the original
code picks the first index unconditionally).
If ANALYZE TABLE is done, the index cardinalities will be compared and
the best index will be used.
Fixes some problems in the original patch:
- Without ANALYZE TABLE, rec_per_key statistics are not available; in this
case the original patch could choose a really bad index, even ignoring
a primary key.
- The original patch did not consider multi-column keys correctly, and
could thus pick a less desirable single-column key over a good
multi-column index.
Also fixes Bug#58997, and adds test cases.
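A hedged sketch of the kind of setup this affects (schema and names are made
up): a replicated table with no primary key and several candidate indexes,
where ANALYZE TABLE collects the rec_per_key statistics that let the slave
compare index cardinalities:

  CREATE TABLE t1 (a INT, b INT, c INT,
                   KEY k_low (a),       -- low cardinality index
                   KEY k_multi (b, c))  -- more selective multi-column index
    ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1,1,1),(1,2,2),(1,3,3),(1,4,4);
  ANALYZE TABLE t1;
  -- With the statistics in place, row-based DELETE/UPDATE events applied
  -- on the slave can prefer k_multi instead of unconditionally using the
  -- first index.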
One of the hash functions employed by the BNLH join algorithm
calculates the value of the hash index for a key value using
every byte of the key buffer. To make this calculation valid
one has to ensure that for any key value the unused bytes of the
buffer are filled with a certain filler. We choose 0 as
a filler for these bytes.
Added an optional boolean parameter with_zerofill to the function
key_copy. If the value of the parameter is TRUE, all unused bytes
of the key buffer are filled with 0.
When the optimizer creates items out of other items it does
not have to call the fix_fields method. Usually in these
cases it calls quick_fix_field() that just marks the
created item as fixed. If the created item is an Item_func
object then calling quick_fix_field() works fine if the
arguments of the created functional item are already fixed.
Otherwise some unfixed nodes remain in the item tree and
it triggers an assertion failure whenever the item is
evaluated.
Fixed the problem by making the method quick_fix_field
virtual and providing an implementation for Item_func
objects that recursively calls the method for any
unfixed arguments of the functional item.
In some cases the function make_cond_for_index() was mistaken
when detecting index only pushdown conditions for a table:
a pushdown condition that was not index only could be marked
as such.
It happened because the procedure erroneously used the markers
for index only conditions that remained from the calls of
this function that extracted the index conditions for other
tables.
Fixed by erasing index only markers as soon as they are not
needed anymore.
The assert happens due to improper calculation of max_length
in the Item_func_div object: if the dividend has max_length == 0 then
Item_func_div::max_length is set to 0 under some circumstances.
The fix:
If decimals == NOT_FIXED_DEC then set
Item_func_div::max_length to the maximum possible
DOUBLE length value.
mysql-test/r/func_math.result:
test case
mysql-test/t/func_math.test:
test case
sql/item_func.cc:
The fix:
If decimals == NOT_FIXED_DEC then set
Item_func_div::max_length to the maximum possible
DOUBLE length value.
Bug#57071: EXTRACT(WEEK from date_col) cannot be allowed as partitioning function
There were functions allowed as partitioning functions
that implicitly allowed casts. That could result in unacceptable
behaviour.
The solution was to check that the arguments of date and time functions
have allowed types (field and date/datetime/time depending on the function).
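For illustration, the kind of partitioning expression from the bug title
that is now rejected (table name is hypothetical):

  CREATE TABLE t1 (d DATE)
  PARTITION BY RANGE (EXTRACT(WEEK FROM d))
  (PARTITION p0 VALUES LESS THAN (26),
   PARTITION p1 VALUES LESS THAN MAXVALUE);
  -- After this fix the statement above is rejected: EXTRACT(WEEK FROM d)
  -- is no longer accepted as a partitioning function (Bug#57071).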
mysql-test/r/partition.result:
Updated result
mysql-test/r/partition_error.result:
Updated result
mysql-test/suite/parts/inc/part_supported_sql_funcs_main.inc:
Disabled test with disallowed arguments.
mysql-test/suite/parts/r/part_supported_sql_func_innodb.result:
Updated result
mysql-test/suite/parts/r/part_supported_sql_func_myisam.result:
Updated result
mysql-test/t/partition.test:
Fixed typo in bug number and removed a disallowed function (bad argument)
mysql-test/t/partition_error.test:
Added tests to verify correct type of argument.
sql/item.h:
Renamed processor since it is no longer only for timezone
sql/item_func.h:
Added helper functions for checking date/time/datetime arguments.
sql/item_timefunc.h:
Added processors for argument correctness
sql/sql_partition.cc:
renamed the processor for checking arguments.
BUILD/compile-pentium64:
Added --with-zlib-dir=bundled when doing static build.
mysql-test/suite/maria/r/maria.result:
Added test case
mysql-test/suite/maria/t/maria.test:
Added test case
storage/maria/ma_blockrec.c:
We need to flush blob pages to disk to ensure they overwrite any reused head/tail pages.
If not, REPAIR will find rows that were previously deleted.
Problem: master executed a statement that would fail on slave
(namely, DROP USER 'create_rout_db'@'localhost').
Then the test did:
--let $rpl_only_running_threads= 1
--source include/rpl_reset.inc
rpl_reset.inc calls rpl_sync.inc, which first checks which of
the threads are running and then syncs those threads that are
running. If the SQL thread fails after the check, the sync will
fail. So there was a race in the test and it failed on some
slow hosts.
Fix: Don't replicate the failing statement.
Item_sum_max/Item_sum_min incorrectly set the null_value flag, and
an attempt to get the result in parent functions leads to a crash.
This happens due to double evaluation of the function argument:
the first evaluation happens in the comparator and the second one
happens in Item_cache::cache_value().
The fix is to introduce a new Item_cache object which
holds the result of the argument and to use this cached value
as an argument of the comparator.
mysql-test/r/func_group.result:
test case
mysql-test/t/func_group.test:
test case
sql/item.cc:
Added an assertion that either we have some result or the result is NULL.
sql/item_sum.cc:
introduce a new Item_cache object which
holds the result of the argument and use this cached value
as an argument of the comparator.
sql/item_sum.h:
introduce a new Item_cache object which
holds the result of the argument and use this cached value
as an argument of the comparator.
row_upd_changes_ord_field_binary(): Do not return TRUE if the update
vector changes a column that is covered by a prefix index, but does
not change the column prefix. Add the row_ext_t parameter for
determining whether the prefixes of externally stored columns match.
dfield_datas_are_binary_equal(): Add the parameter len, for comparing
column prefixes when len > 0.
innodb.test: Add a test case where the patch of Bug #55284 failed
without this fix.
rb:537 approved by Jimmy Yang
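A hedged illustration of the case handled (schema is made up): an UPDATE that
changes a column covered by a prefix index without changing the indexed
prefix, which should no longer make row_upd_changes_ord_field_binary()
return TRUE:

  CREATE TABLE t1 (a INT PRIMARY KEY,
                   b VARCHAR(200),
                   KEY k_prefix (b(10))) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1, CONCAT('0123456789', REPEAT('x', 100)));
  -- The first 10 characters (the indexed prefix) are unchanged, so the
  -- update vector does not really change the ordering field.
  UPDATE t1 SET b = CONCAT('0123456789', REPEAT('y', 100)) WHERE a = 1;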
Normally, an auto_increment value is generated for the column by
inserting either NULL or 0 into it. NO_AUTO_VALUE_ON_ZERO
suppresses this behavior for 0 so that only NULL generates
the auto_increment value. This behavior is also followed by
a slave, specifically by the SQL Thread, when applying events
in the statement format from a master. However, when applying
events in the row format, the flag was ignored, thus causing
an assertion failure:
"Assertion failed: next_insert_id == 0, file .\handler.cc"
In fact, we never need to generate an auto_increment value for
the column when applying events in row format on the slave, so we
prevent this by using 'MODE_NO_AUTO_VALUE_ON_ZERO'.
Refactoring: get rid of all the sql_mode checks in rows_log_event
when applying it, to avoid problems caused by the inconsistency
of the sql_mode on slave and master, as the sql_mode is not set for
Rows_log_event.
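For reference, the SQL-level behavior being preserved (table name is
illustrative):

  SET SESSION sql_mode = 'NO_AUTO_VALUE_ON_ZERO';
  CREATE TABLE t1 (a INT NOT NULL AUTO_INCREMENT PRIMARY KEY, b INT);
  INSERT INTO t1 VALUES (0, 1);     -- 0 is stored as-is, no value is generated
  INSERT INTO t1 VALUES (NULL, 2);  -- NULL still generates the next value
  -- On the slave, applying the corresponding row events must not generate
  -- auto_increment values at all, hence MODE_NO_AUTO_VALUE_ON_ZERO is set.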
mysql-test/extra/rpl_tests/rpl_auto_increment.test:
Added a test to verify whether the assertion "next_insert_id == 0"
fails in the ha_external_lock() function.
mysql-test/suite/rpl/r/rpl_auto_increment.result:
Test result for bug#56662.
sql/log_event.cc:
Added code to not allow generation of an auto_increment value when
processing a rows event, by adding 'MODE_NO_AUTO_VALUE_ON_ZERO' to
sql_mode.
sql/rpl_record.cc:
Added code to get rid of the 'MODE_STRICT_TRANS_TABLES' and
'MODE_STRICT_ALL_TABLES' checks on the table when applying the
rows event, treating it as non-strict.