This test case uses mysqlbinlog to dump the content of master-bin.000001,
but the content of master-bin.000001 is not what this test needs.
MTR runs many test cases on one server, so when this test starts, the current binlog file
might not be master-bin.000001, or it may already contain events written by earlier tests.
The 'RESET MASTER' command must be called at the beginning; it ensures that the binlog of this test
is written to master-bin.000001 correctly.
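A hedged sketch of that approach in mysql-test script form (not the actual test file;
the $MYSQLD_DATADIR variable below is illustrative):
  RESET MASTER;
  # Binary logging now restarts at master-bin.000001, so everything this test
  # generates ends up in the file that mysqlbinlog is asked to dump.
  CREATE TABLE t1 (a INT);
  DROP TABLE t1;
  FLUSH LOGS;
  --exec $MYSQL_BINLOG $MYSQLD_DATADIR/master-bin.000001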
Three other tests have the same problem; they were fixed together:
mysqlbinlog-cp932
binlog_incident
binlog_tmp_table
Postfix.
extra/rpl_tests/rpl_row_sp006.test was changed to fix this bug.
extra/rpl_tests/rpl_row_sp006.test is also referenced by rpl_ndb_sp006,
so rpl_ndb_sp006.result must be changed too.
The plugin feature is disabled, you need HAVE_DLOPEN
Selectively skip tests that require dynamic loading (i.e. static builds); a sketch of such a check follows below.
mysql-test/include/have_dynamic_loading.inc:
Add require file.
mysql-test/t/ps_not_windows.test:
Test requires dynamic loading support.
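A hedged sketch of what such a mysql-test check typically looks like (the exact content
of the include and require files may differ):
  # include/have_dynamic_loading.inc (illustrative)
  --require r/have_dynamic_loading.require
  disable_query_log;
  SHOW VARIABLES LIKE 'have_dynamic_loading';
  enable_query_log;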
Bug#46629: Item_in_subselect::val_int(): Assertion `0' on subquery inside a SP
Problem: a repeated call of a SP containing an incorrect query with a
subselect may lead to a failed ASSERT().
Fix: set the proper subselect state in case an error occurred during
subquery transformation.
mysql-test/r/sp.result:
Fix for bug#46629: Item_in_subselect::val_int(): Assertion `0'
on subquery inside a SP
- test result.
mysql-test/t/sp.test:
Fix for bug#46629: Item_in_subselect::val_int(): Assertion `0'
on subquery inside a SP
- test case.
sql/item_subselect.cc:
Fix for bug#46629: Item_in_subselect::val_int(): Assertion `0'
on subquery inside a SP
- don't set Item_subselect::changed in Item_subselect::fix_fields()
if an error occurred during subquery transformation.
That prevents us from further processing incorrect subqueries after
Item_in_subselect::select_in_like_transformer().
Memory allocated in TMP_TABLE_PARAM::copy_field is not cleaned up.
The fix is to clean up the TMP_TABLE_PARAM::copy_field array in JOIN::destroy.
mysql-test/r/explain.result:
test result
mysql-test/t/explain.test:
test case
sql/sql_select.cc:
Memory allocated in TMP_TABLE_PARAM::copy_field is not cleaned up.
The fix is to clean up the TMP_TABLE_PARAM::copy_field array in JOIN::destroy.
BUG#46384 - mysqld segfault when trying to create table with same name as existing view
When trying to create a table with the same name as an existing view with
a join, the mysql server crashes.
The problem is that when CREATE TABLE is issued with the same name as a view,
the check against existing tables assumes that a base table object has
always been created.
In this case, since it is a view over multiple tables, there is no
mysql derived table object.
Fixed the logic which checks for an existing table so that it does not assume
a table object has been created when the name refers to a view over multiple
tables.
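A minimal SQL sketch of the scenario (illustrative, not the exact test case):
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  CREATE VIEW v1 AS SELECT t1.a, t2.b FROM t1 JOIN t2;
  -- CREATE TABLE ... SELECT using the view's name crashed the server before the
  -- fix; it must instead fail because 'v1' already exists.
  CREATE TABLE v1 SELECT 1 AS c;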
mysql-test/r/create.result:
BUG#46384 - mysqld segfault when trying to create table with same
name as existing view
Testcase for the bug
mysql-test/t/create.test:
BUG#46384 - mysqld segfault when trying to create table with same
name as existing view
Testcase for the bug
sql/sql_insert.cc:
BUG#46384 - mysqld segfault when trying to create table with same
name as existing view
Fixed the create_table_from_items() method to properly check if the base table
is a view over multiple tables.
Inserting a negative value in the autoincrement column of a
partitioned innodb table was causing the value of the auto
increment counter to wrap around into a very large positive
value. The consequences are the same as if a very large positive
value was inserted into a column, e.g. reduced autoincrement
range, failure to read autoincrement counter.
The current patch ensures that before calculating the next
auto increment value, the current value is within the positive
maximum allowed limit.
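A minimal SQL sketch of the scenario (illustrative, not the exact test case):
  CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY)
    ENGINE=InnoDB
    PARTITION BY HASH (a) PARTITIONS 2;
  -- Before the fix, inserting a negative value could wrap the internal counter
  -- to a huge positive value; the next generated value should stay small.
  INSERT INTO t1 VALUES (-5);
  INSERT INTO t1 VALUES (NULL);
  SELECT a FROM t1 ORDER BY a;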
mysql-test/suite/parts/inc/partition_auto_increment.inc:
Bug#45823 Assertion failure in file row/row0mysql.c line 1386
Adds tests for performing insert, update and delete on a partitioned
table with negative auto_increment values.
mysql-test/suite/parts/r/partition_auto_increment_innodb.result:
Bug#45823 Assertion failure in file row/row0mysql.c line 1386
Result file for the innodb engine.
mysql-test/suite/parts/r/partition_auto_increment_memory.result:
Bug#45823 Assertion failure in file row/row0mysql.c line 1386
Result file for the memory engine.
mysql-test/suite/parts/r/partition_auto_increment_myisam.result:
Bug#45823 Assertion failure in file row/row0mysql.c line 1386
Result file for the myisam engine.
mysql-test/suite/parts/r/partition_auto_increment_ndb.result:
Bug#45823 Assertion failure in file row/row0mysql.c line 1386
Result file for the ndb engine.
mysql-test/suite/parts/t/partition_auto_increment_archive.test:
Bug#45823 Assertion failure in file row/row0mysql.c line 1386
Adds a variable that allows the Archive engine to skip tests
that involve insertion of negative auto increment values.
mysql-test/suite/parts/t/partition_auto_increment_blackhole.test:
Bug#45823 Assertion failure in file row/row0mysql.c line 1386
Adds a variable that allows the Blackhole engine to skip tests
that involve insertion of negative auto increment values.
sql/ha_partition.cc:
Bug#45823 Assertion failure in file row/row0mysql.c line 1386
Ensures that the current value is less than the upper limit
for the type of the field before setting the next auto increment
value to be calculated.
sql/ha_partition.h:
Bug#45823 Assertion failure in file row/row0mysql.c line 1386
Modifies the set_auto_increment_if_higher function, to take
into account negative auto increment values when doing a
comparison.
Essentially, Bug#45574 results in this bug: the 'CREATE DATABASE IF NOT EXISTS' statement was not
binlogged when the database already existed.
As a result, the master and slaves can become inconsistent. The "CREATE DATABASE
IF NOT EXISTS mysqltest1" statement is not binlogged
if the db 'mysqltest1' existed before the test case is executed,
so the db 'mysqltest1' can't be created on the slave.
The patch for Bug#45574 resolved this problem,
but I think it is better to replace 'mysqltest1' with the default db 'test'.
results in server crash
check_group_min_max_predicates() assumed the input condition
item to be one of COND_ITEM, SUBSELECT_ITEM, or FUNC_ITEM.
Since a condition of the form "field" is also a valid condition
equivalent to "field <> 0", using such a condition in a query
where the loose index scan was chosen resulted in a debug
assertion failure.
Fixed by handling conditions of the FIELD_ITEM type in
check_group_min_max_predicates().
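An illustrative query shape (not the exact test case) where a bare field is used as a
WHERE condition in a query eligible for loose index scan:
  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  -- "WHERE b" is equivalent to "WHERE b <> 0"; before the fix this could hit the
  -- debug assertion when the loose index scan (GROUP BY on a, MIN on b) was chosen.
  SELECT a, MIN(b) FROM t1 WHERE b GROUP BY a;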
mysql-test/r/group_min_max.result:
Added a test case for bug #46607.
mysql-test/t/group_min_max.test:
Added a test case for bug #46607.
sql/opt_range.cc:
Handle conditions of the FIELD_ITEM type in
check_group_min_max_predicates().
If an EVENT is created without the DEFINER clause set explicitly or with it set
to CURRENT_USER, the master and slaves become inconsistent. This issue stems from
the fact that in both cases, the DEFINER is set to the CURRENT_USER of the current
thread. On the master, the CURRENT_USER is the mysqld's user, while on the slave,
the CURRENT_USER is empty for the SQL Thread which is responsible for executing
the statement.
To fix the problem, we do the following. If the definer is not set explicitly,
a DEFINER clause is added when writing the query into binlog; if 'CURRENT_USER' is
used as the DEFINER, it is replaced with the value of the current user before
writing to binlog.
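An illustrative example (user and host names are placeholders, not from the patch):
  CREATE EVENT e1 ON SCHEDULE EVERY 1 DAY DO DELETE FROM test.t1;
  -- With the fix, the statement is written to the binary log with the definer made
  -- explicit, along the lines of:
  --   CREATE DEFINER=`root`@`localhost` EVENT e1 ON SCHEDULE EVERY 1 DAY DO DELETE FROM test.t1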
mysql-test/suite/rpl/r/rpl_create_if_not_exists.result:
Updated the result file after fixing bug#44331
mysql-test/suite/rpl/r/rpl_drop_if_exists.result:
Updated the result file after fixing bug#44331
mysql-test/suite/rpl/r/rpl_events.result:
Test result of Bug#44331
mysql-test/suite/rpl/r/rpl_innodb_mixed_dml.result:
Updated the result file after fixing bug#44331
mysql-test/suite/rpl/t/rpl_events.test:
Added test to verify if the definer is consistent between master and slave
when the event is created without the DEFINER clause set explicitly or the
DEFINER is set to CURRENT_USER
sql/events.cc:
The "create_query_string" function is added to create a new query string
for removing executable comments.
sql/sql_yacc.yy:
The remember_name token was added for recording the offset of EVENT_SYM.
mysql-test/r/lock_multi_bug38499.result:
Update test case result.
mysql-test/r/lock_multi_bug38691.result:
Update test case result.
mysql-test/t/lock_multi_bug38499.test:
Do not sync .frm files.
mysql-test/t/lock_multi_bug38691.test:
Do not sync .frm files.
sql/opt_range.cc:
Removed duplicate code (if statement must have been duplicated during earlier merge).
sql/sql_partition.cc:
After mergeing bug#46362 and bug#20577, the NULL partition was also searched
when col = const, fixed by checking if = or range.
When a connection is dropped any remaining temporary table is also automatically
dropped and the SQL statement of this operation is written to the binary log in
order to drop such tables on the slave and keep the slave in sync. Specifically,
the current code base creates the following type of statement:
DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `db`.`table`;
Unfortunately, appending the database to the table name in this manner circumvents
the replicate-rewrite-db option (and any options that check the current database).
To solve the issue, we started writing the statement to the binary as follows:
use `db`; DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `table`;
Slave does not correctly handle "expected errors" leading to inconsistencies
between the master and slave. Specifically, when a statement changes both
transactional and non-transactional tables, the transactional changes are
automatically rolled back on the master but the slave ignores the error and
does not roll them back thus leading to inconsistencies.
To fix the problem, we automatically roll back a statement that fails on
the slave but note that the transaction is not rolled back unless a "rollback"
command is in the relay log file.
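An illustrative scenario (not the exact test case) in which one statement changes both
a non-transactional and a transactional table and then fails with an expected error:
  CREATE TABLE tt (a INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE nt (a INT) ENGINE=MyISAM;
  CREATE TRIGGER tr BEFORE INSERT ON tt FOR EACH ROW INSERT INTO nt VALUES (NEW.a);
  INSERT INTO tt VALUES (1);
  -- Duplicate key: the InnoDB change is rolled back on the master while the MyISAM
  -- row stays; the slave must roll the statement back the same way.
  INSERT INTO tt VALUES (1);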
mysql-test/extra/rpl_tests/rpl_mixing_engines.test:
Enabled item 13.e, which was disabled because of the bug fixed by the
current patch, and removed item 14, which was introduced by mistake.
field references
This error requires a combination of factors :
1. An "impossible where" in the outermost SELECT
2. An aggregate in the outermost SELECT
3. A correlated subquery with a WHERE clause that includes an outer
field reference as a top level WHERE sargable predicate
When JOIN::optimize detects an "impossible WHERE" it will bail out
without doing the rest of the work and initializations. In particular it
will not call make_join_statistics(), which fills in various structures
for each referenced table.
When processing the result of the "impossible WHERE" the query must
send a single row of data if there are aggregate functions in it.
In this case the server marks all the aggregates as having received
no rows and calls the relevant Item::val_xxx() method on the SELECT
list. However if this SELECT list happens to contain a correlated
subquery this subquery is evaluated in a normal evaluation mode.
And if this correlated subquery has a reference to a field from the
outermost "impossible WHERE" SELECT, add_key_fields() will mistakenly
consider the outer field reference as a "local" field reference when
looking for sargable predicates.
But since the SELECT that the outer field reference refers to is not
completely initialized, due to the "impossible WHERE" at that level,
we get a NULL pointer dereference.
Fixed by making a better condition for discovering if a field is "local"
to the SELECT level being processed.
It's not enough to look for OUTER_REF_TABLE_BIT in this case since
for outer references to constant tables the Item_field::used_tables()
will return 0 regardless of whether the field reference is from the
local SELECT or not.
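An illustrative query shape combining the three factors (not the exact test case):
  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  -- Outer aggregate, "impossible WHERE" (1 = 0), and a correlated subquery whose
  -- WHERE uses an outer field reference as a top level sargable predicate:
  SELECT MAX(t1.a),
         (SELECT t2.b FROM t2 WHERE t2.b = t1.a) AS sq
  FROM t1
  WHERE 1 = 0;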
The crash happens because a select_union object is used as the result set
for queries which have derived tables.
select_union uses a temporary table as data storage, and if
the field count exceeds 10 (the number of values for procedure ANALYSE())
then we get a crash in the fill_record() function.
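An illustrative shape only (not the exact test case): PROCEDURE ANALYSE() applied to a
query that selects from a derived table with more than 10 columns; before the fix this
kind of statement could crash in fill_record():
  SELECT * FROM (SELECT 1 AS f1, 2 AS f2, 3 AS f3, 4 AS f4, 5 AS f5, 6 AS f6,
                 7 AS f7, 8 AS f8, 9 AS f9, 10 AS f10, 11 AS f11) AS d
  PROCEDURE ANALYSE();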
mysql-test/r/analyse.result:
test result
mysql-test/r/subselect.result:
result fix
mysql-test/t/analyse.test:
test case
mysql-test/t/subselect.test:
test fix
sql/sql_yacc.yy:
The crash happens because a select_union object is used as the result set
for queries which have derived tables.
select_union uses a temporary table as data storage, and if
the field count exceeds 10 (the number of values for procedure ANALYSE())
then we get a crash in the fill_record() function.
binlog
Mixing transactional (T) and non-transactional (N) tables on behalf of a
transaction may lead to inconsistencies among masters and slaves in STATEMENT
mode. The problem stems from the fact that although modifications done to
non-transactional tables on behalf of a transaction become immediately visible
to other connections they do not immediately get to the binary log and therefore
consistency is broken. Although there may be issues in mixing T and N tables in
STATEMENT mode, there are safe combinations that clients find useful.
In this bug, we fix the following issue. Mixing N and T tables in multi-level
(e.g. a statement that fires a trigger) or multi-table statements (e.g.
UPDATE t1, t2 ...) was not handled correctly. In such cases, it was not possible
to distinguish when a T table was updated if the sequence of changes was N and T.
In a nutshell, just the flag "modified_non_trans_table" was not enough to reflect
that both N and T tables were changed. To circumvent this issue, we check whether an
engine is registered in the handler's list and has changed something, which means
that a T table was modified.
Check WL 2687 for a full-fledged patch that will make the use of either the MIXED or
ROW modes completely safe.
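An illustrative multi-table statement (not the exact test case) that changes a
non-transactional and a transactional table in one go:
  CREATE TABLE nt (id INT, a INT) ENGINE=MyISAM;
  CREATE TABLE tt (id INT, a INT) ENGINE=InnoDB;
  BEGIN;
  UPDATE nt, tt SET nt.a = 1, tt.a = 1 WHERE nt.id = tt.id;
  COMMIT;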
mysql-test/suite/binlog/r/binlog_row_mix_innodb_myisam.result:
Truncate statement is wrapped in BEGIN/COMMIT.
mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result:
Truncate statement is wrapped in BEGIN/COMMIT.
The problem was that the partition containing NULL values
was pruned away, since '2001-01-01' < '2001-02-00' but
TO_DAYS('2001-02-00') is NULL.
Added the NULL partition to the set of partitions to be scanned for
RANGE/LIST partitioning on the TO_DAYS() function.
Also fixed a bug that added ALLOW_INVALID_DATES to sql_mode
(SELECT * FROM t WHERE date_col < '1999-99-99' on a RANGE/LIST
partitioned table would add it).
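A minimal sketch of the scenario (illustrative; storing an invalid date may require a
permissive sql_mode):
  CREATE TABLE t1 (d DATE)
    PARTITION BY RANGE (TO_DAYS(d))
    (PARTITION p0 VALUES LESS THAN (TO_DAYS('2001-01-01')),
     PARTITION p1 VALUES LESS THAN (TO_DAYS('2001-02-01')),
     PARTITION p2 VALUES LESS THAN (TO_DAYS('2001-03-01')));
  -- TO_DAYS('2001-02-00') is NULL, so that row is stored in the lowest (NULL)
  -- partition p0 even though the date itself sorts between the other dates.
  INSERT INTO t1 VALUES ('2001-01-15'), ('2001-02-00');
  -- '2001-02-00' satisfies the condition, so p0 must not be pruned away:
  SELECT * FROM t1 WHERE d > '2001-01-15' AND d < '2001-02-15';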
mysql-test/include/partition_date_range.inc:
Bug#20577: Partitions: use of to_days() function leads to selection failures
Added include file to decrease test code duplication
mysql-test/r/partition_pruning.result:
Bug#20577: Partitions: use of to_days() function leads to selection failures
Added test results
mysql-test/r/partition_range.result:
Bug#20577: Partitions: use of to_days() function leads to selection failures
Updated test result.
This fix adds the partition containing NULL values to
the list of partitions to be scanned.
mysql-test/t/partition_pruning.test:
Bug#20577: Partitions: use of to_days() function leads to selection failures
Added test case
sql/item.h:
Bug#20577: Partitions: use of to_days() function leads to selection failures
Added MONOTONIC_*INCREASE_NOT_NULL values to be used by TO_DAYS.
sql/item_timefunc.cc:
Bug#20577: Partitions: use of to_days() function leads to selection failures
Calculate the number of days as return value even for invalid dates.
This is so that pruning can be used even for invalid dates.
sql/opt_range.cc:
Bug#20577: Partitions: use of to_days() function leads to selection failures
Fixed a bug that added ALLOW_INVALID_DATES to sql_mode
(SELECT * FROM t WHERE date_col < '1999-99-99' on a RANGE/LIST
partitioned table would add it).
sql/partition_info.h:
Bug#20577: Partitions: use of to_days() function leads to selection failures
Reset ret_null_part when a single partition is to be used; this
avoids adding the NULL partition.
sql/sql_partition.cc:
Bug#20577: Partitions: use of to_days() function leads to selection failures
Always include the NULL partition if RANGE or LIST.
Use the function's return value for pruning, even if
it is marked as NULL, so that even '2000-00-00' can be
used for pruning, although TO_DAYS('2000-00-00') is NULL.
Changed == to >= in get_next_partition_id_list() to avoid a
crash if part_iter->part_nums is not correctly set up.
There was a problem since pruning uses the field
for comparison (while evaluate_join_record uses longlong),
resulting in pruning failures when comparing DATE to DATETIME.
The fix was to always compare DATE vs DATETIME as DATETIME,
by appending ' 00:00:00' to the DATE string,
and to add an optimization for comparisons with 23:59:59, so that
DATETIME_col > '2001-02-03 23:59:59' becomes
TO_DAYS(DATETIME_col) > TO_DAYS('2001-02-03 23:59:59') instead
of '>='.
mysql-test/r/partition_pruning.result:
Bug#46362: Endpoint should be set to false for TO_DAYS(DATE)
Updated result-file
mysql-test/t/partition_pruning.test:
Bug#46362: Endpoint should be set to false for TO_DAYS(DATE)
Added testcases.
sql-common/my_time.c:
Bug#46362: Endpoint should be set to false for TO_DAYS(DATE)
removed duplicate assignment.
sql/item.cc:
Bug#46362: Endpoint should be set to false for TO_DAYS(DATE)
Changed field_is_equal_to_item into field_cmp_to_item, to
better handle DATE vs DATETIME comparison.
sql/item.h:
Bug#46362: Endpoint should be set to false for TO_DAYS(DATE)
Updated comment
sql/item_timefunc.cc:
Bug#46362: Endpoint should be set to false for TO_DAYS(DATE)
Added optimization (pruning) of DATETIME where time-part is
23:59:59
sql/opt_range.cc:
Bug#46362: Endpoint should be set to false for TO_DAYS(DATE)
Using the new stored_field_cmp_to_item for better pruning.
The problem was that creating a DECIMAL column from a decimal
value could lead to a failed assertion as decimal values can
have a higher precision than those attached to a table. The
assert could be triggered by creating a table from a decimal
with a large (> 30) scale. Also, there was a problem in
calculating the number of digits in the integral and fractional
parts if both exceeded the maximum number of digits permitted
by the new decimal type.
The solution is to ensure that the truncation procedure is executed
when deducing a DECIMAL column from a decimal value of higher
precision. If the integer part is equal to or bigger than the
maximum precision for the DECIMAL type (65), the integer part
is truncated to fit and the fractional becomes zero. Otherwise,
the fractional part is truncated to fit into the space left
after the integer part is copied.
This patch borrows code and ideas from Martin Hansson's patch.
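A minimal sketch of the kind of statement involved (illustrative, not the exact test case):
  -- The literal below has a scale of 35, which exceeds the 30 supported by DECIMAL,
  -- so the deduced column type must be truncated rather than asserted on.
  CREATE TABLE t1 SELECT 1.12345678901234567890123456789012345 AS c1;
  SHOW CREATE TABLE t1;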
mysql-test/r/type_newdecimal.result:
Add test case result for Bug#45261. Also, update test case to
reflect that an additive operation increases the precision of
the resulting type by 1.
mysql-test/t/type_newdecimal.test:
Add test case for Bug#45261
sql/field.cc:
Added DBUG_ASSERT to ensure object's invariant is maintained.
Implement method to create a field to hold a decimal value
from an item.
sql/field.h:
Explain member variable. Add method to create a new decimal field.
sql/item.cc:
The precision should only be capped when storing the value
on a table. Also, this makes it impossible to calculate the
integer part if Item::decimals (the scale) is larger than the
precision.
sql/item.h:
Simplify calculation of integer part.
sql/item_cmpfunc.cc:
Do not limit the precision. It will be capped later.
sql/item_func.cc:
Use new method for allocating a new decimal field.
Add a specialized method for retrieving the precision
of a user variable item.
sql/item_func.h:
Add method to return the precision of a user variable.
sql/item_sum.cc:
Use new method for allocating a new decimal field.
sql/my_decimal.h:
The integer part could be improperly calculated for a decimal
with 31 digits in the fractional part.
sql/sql_select.cc:
Use new method which truncates the integer or decimal parts
as needed.
Bug#46639: 1030 (HY000): Got error 124 from storage engine on INSERT ... SELECT ...
Problem was that when bulk insert is used on an empty
table/partition, it disables the indexes for better
performance, but in this specific case it also tries
to read from that partition using an index, which is
not possible since it has been disabled.
Solution was to allow index reads on disabled indexes
if there are no records.
Also reverted the patch for bug#38005, since that was a workaround
in the partitioning engine instead of a fix in myisam.
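An illustrative shape only (an assumption about the triggering scenario, not the exact
test case): an INSERT ... SELECT that both bulk-inserts into and reads from the same
empty partitioned MyISAM table, so the read goes through an index that bulk insert has
just disabled:
  CREATE TABLE t1 (a INT, b INT, KEY (a))
    ENGINE=MyISAM
    PARTITION BY HASH (a) PARTITIONS 2;
  INSERT INTO t1 SELECT a + 10, b FROM t1 WHERE a > 0;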
mysql-test/r/partition.result:
Bug#46639: 1030 (HY000): Got error 124 from storage engine on
INSERT ... SELECT ...
updated result file
mysql-test/t/partition.test:
Bug#46639: 1030 (HY000): Got error 124 from storage engine on
INSERT ... SELECT ...
Added testcase
sql/ha_partition.cc:
Bug#46639: 1030 (HY000): Got error 124 from storage engine on
INSERT ... SELECT ...
Reverted the patch for bug#38005, since that was a workaround for
this problem and is not needed after fixing it in myisam.
storage/myisam/mi_search.c:
Bug#46639: 1030 (HY000): Got error 124 from storage engine on
INSERT ... SELECT ...
Return HA_ERR_END_OF_FILE instead of HA_ERR_WRONG_INDEX
when there are no rows.
Bug #46456 [Ver->Prg]: HANDLER OPEN + TRUNCATE + DROP (temporary) TABLE, crash
Problem: if one has an open "HANDLER t1", a subsequent "TRUNCATE t1"
doesn't close the handler and leaves the handler table hash in an
inconsistent state, which may lead to a server crash.
Fix: TRUNCATE should implicitly close all open handlers.
Doc. request: the fact should be described in the manual accordingly.
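A minimal sketch of the scenario (illustrative, not the exact test case):
  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  HANDLER t1 OPEN;
  TRUNCATE t1;
  -- With the fix, TRUNCATE has implicitly closed the handler, so this now fails
  -- with an "unknown table in HANDLER"-style error instead of using a stale handler.
  HANDLER t1 READ FIRST;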
mysql-test/r/handler_myisam.result:
Fix for bug #46456 [Ver->Prg]: HANDLER OPEN + TRUNCATE + DROP
(temporary) TABLE, crash
- test result.
mysql-test/t/handler_myisam.test:
Fix for bug #46456 [Ver->Prg]: HANDLER OPEN + TRUNCATE + DROP
(temporary) TABLE, crash
- test case.
sql/sql_delete.cc:
Fix for bug #46456 [Ver->Prg]: HANDLER OPEN + TRUNCATE + DROP
(temporary) TABLE, crash
- remove all truncated tables from the HANDLER's hash.
view manipulations
The bespoke flag was not properly reset after the last call to
fill_record(). Fixed by resetting it in the caller, mysql_update().
mysql-test/r/auto_increment.result:
Bug#46616: Test result.
mysql-test/t/auto_increment.test:
Bug#46616: Test case.
sql/sql_update.cc:
Bug#46616: Fix.
view that has Group By
The table access rights checking function check_grant() assumed
that no view is open when it is called.
This is not true with nested views where the inner view
needs materialization. In this case the view is already
materialized when check_grant() is called for it.
This caused check_grant() to not look for table level
grants on the materialized view table.
Fixed by checking if a view is already materialized and, if
it is, checking table level grants using the original table name
(not that of the materialized temp table).
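A minimal sketch of the scenario (database, table, and account names are illustrative):
  CREATE DATABASE db1;
  CREATE TABLE db1.t1 (a INT, b INT);
  -- v1 has GROUP BY, so it needs materialization when used inside v2:
  CREATE VIEW db1.v1 AS SELECT a, COUNT(b) AS c FROM db1.t1 GROUP BY a;
  CREATE VIEW db1.v2 AS SELECT * FROM db1.v1;
  GRANT SELECT ON db1.v1 TO 'user1'@'localhost';
  GRANT SELECT ON db1.v2 TO 'user1'@'localhost';
  -- Connected as user1: before the fix, check_grant() looked for table level grants
  -- under the materialized temp table's name and the query could fail with a
  -- privilege error despite the grants above.
  SELECT * FROM db1.v2;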