This bug in the function Loose_scan_opt::check_ref_access_part1 could lead
to choosing an invalid execution plan that employed loose scan access to a
semi-join table even in cases where such access could not be used at all.
This could result in wrong answers for some queries with IN subqueries.
Analysis:
The optimizer distinguishes two kinds of 'constant' conditions:
expensive ones and non-expensive ones. The non-expensive conditions
are evaluated inside make_join_select(), and if one of them is false,
the optimizer already detects the empty query result at that point.
In order to avoid arbitrarily expensive optimization, the evaluation of
expensive constant conditions is delayed until execution. These conditions
are attached to JOIN::exec_const_cond and evaluated at the beginning of
JOIN::exec. The relevant execution logic is:
JOIN::exec()
{
  if (!join->exec_const_cond->val_int())
  {
    produce an empty result;
    stop execution;
  }
  continue execution;
  execute the original WHERE clause (that contains exec_const_cond);
  ...
}
As a result, when an expensive constant condition is
TRUE, it is evaluated twice: once through
JOIN::exec_const_cond and once through JOIN::cond.
When the expensive constant condition is a subquery
predicate, the subquery itself is evaluated twice. If we have
many levels of nested subqueries, this logic results in a chain
of recursive subquery executions that walks a perfect
binary tree. The result is that for subqueries of depth N,
JOIN::exec is executed O(2^N) times.
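To see why the cost is exponential, consider the following self-contained
model (illustrative only, not server code): each nesting level evaluates its
expensive constant subquery twice, once through exec_const_cond and once
through the WHERE clause, so the number of executions satisfies
T(N) = 1 + 2*T(N-1), which is O(2^N).

  #include <cstdio>

  // Toy model of the old behaviour: every nesting level evaluates its
  // (expensive, constant) subquery twice.
  static long executions= 0;

  void exec_subquery(int depth)
  {
    ++executions;                 // one JOIN::exec at this level
    if (depth == 0)
      return;
    exec_subquery(depth - 1);     // evaluation via exec_const_cond
    exec_subquery(depth - 1);     // re-evaluation via the WHERE clause
  }

  int main()
  {
    exec_subquery(10);
    std::printf("%ld\n", executions);  // prints 2047 = 2^11 - 1 for depth 10
  }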
Solution:
Notice that the second execution of the constant conditions
happens inside do_select(), in the branch:
if (join->table_count == join->const_tables) { ... }
In this case exec_const_cond is equivalent to the whole WHERE
clause, therefore the WHERE clause has already been checked at
the beginning of JOIN::exec and has been found to be true.
The bug is addressed by not evaluating the WHERE clause again
when exec_const_cond is present and has already evaluated to TRUE.
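The following self-contained sketch models the fix; the structure and names
(ToyJoin, emit_const_table_result) are made up for illustration and are
simplified compared to the real do_select():

  #include <cstdio>
  #include <functional>

  // Toy model: if exec_const_cond exists, it is equivalent to the whole
  // WHERE clause and was already found TRUE at the start of execution,
  // so the WHERE clause must not be evaluated a second time.
  struct ToyJoin
  {
    std::function<bool()> where_cond;      // full WHERE clause (may be empty)
    std::function<bool()> exec_const_cond; // expensive constant part (may be empty)
  };

  bool emit_const_table_result(const ToyJoin &join)
  {
    // The old behaviour re-evaluated where_cond unconditionally here.
    if (!join.where_cond || join.exec_const_cond || join.where_cond())
    {
      std::puts("row produced");
      return true;
    }
    std::puts("empty result");
    return false;
  }

  int main()
  {
    int evaluations= 0;
    ToyJoin join;
    join.where_cond= [&] { ++evaluations; return true; };
    join.exec_const_cond= [&] { ++evaluations; return true; };

    if (join.exec_const_cond())        // checked once, at "start of JOIN::exec"
      emit_const_table_result(join);   // WHERE clause is NOT evaluated again
    std::printf("evaluations: %d\n", evaluations);  // prints 1, not 2
  }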
A patch for alter_table-big.test has been committed earlier.
This is a patch for create-big.test:
The test used to time out after 900 seconds.
It relied on debug sleeps that are no longer present in the
code. Since the sleeps are long gone, fixing the problem was not
just a matter of updating the result file or of using the macro
"show_binlog_events2.inc" instead of the "show binlog events"
statement. The test needed to be rewritten using debug sync
points, and the result file then needed to be updated.
So the sleeps have been replaced by debug_sync points, and the test
execution time has been reduced significantly.
Setting query_cache_size to larger values might fail depending on the memory
pressure on the system. This can be seen on pushbuild: the test
case query_cache_size_basic tries to allocate a +3GB query cache, which
succeeds on some machines and fails on others.
The part of the test case that tries to allocate a +3GB query cache has
therefore been disabled for now, to get the test running on pb2.
A non-first execution of a prepared statement missed a call to the
TABLE_LIST::process_index_hints() method in the code of the function
setup_tables().
In some scenarios this could lead to the choice of a quite inefficient
execution plan for the base query of the prepared statement.
This bug in the function setup_semijoin_dups_elimination() could
lead to an invalid choice of the sequence of tables for which semi-join
duplicate elimination was applied.
Due to this bug the function SEL_IMERGE::or_sel_tree_with_checks()
could build an inconsistent merge tree if one of the SEL_TREEs in the
resulting index merge happened to contain a full key range.
This could trigger an assertion failure.
rb://816
approved by: Marko Makela
The title is misleading. This bug was actually introduced by
bug 12635227 and was unearthed by a later optimization.
The buf_page_t structs that we allocate using malloc() need to be
freed at shutdown.
The function key_and() erroneously called SEL_ARG::increment_use_count()
when SEL_ARG::incr_refs() should have been called. This could lead to
wrong values of use_count for some SEL_ARG trees.
Apart from the fix, the patch also adds a few more unrelated test
cases for partial matching, and fixes a few typos.
Analysis:
This bug uncovered that partial matching via rowid intersection
didn't handle the case when:
- the left IN argument has some NULLs,
- there are no non-NULL value matches, and there is no non-NULL
  column, and
- the subquery columns that are not covered by the NULLs in
  the left IN argument contain at least one row that has NULL
  values in all columns where the left IN operand has no NULLs.
In this case there is a partial match.
In addition, the analysis of the related code uncovered incorrect
handling of a few other related cases.
Solution:
The solution for the bug is to check whether there exists a row with
NULLs in all columns other than the ones that have a NULL in the
left IN operand.
The check is implemented by testing whether the bitmaps that
store NULL information in class Ordered_key have a non-empty
intersection for the relevant columns.
The intersection itself is computed by the function
bitmap_exists_intersection() in my_bitmap.c.
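As a self-contained illustration of the principle only (the real check uses
MY_BITMAP objects and bitmap_exists_intersection(); the code below models the
idea with plain 64-bit words, where bit i set means "row i is NULL in this
column"):

  #include <cstdint>
  #include <cstdio>
  #include <vector>

  // A partial match exists if some row is NULL in all relevant columns,
  // i.e. the intersection of their NULL bitmaps is non-empty.
  bool exists_intersection(const std::vector<std::vector<std::uint64_t>> &null_bitmaps,
                           const std::vector<std::size_t> &relevant_columns)
  {
    if (relevant_columns.empty())
      return false;
    std::size_t words= null_bitmaps[relevant_columns[0]].size();
    for (std::size_t w= 0; w < words; w++)
    {
      std::uint64_t inter= ~UINT64_C(0);
      for (std::size_t col : relevant_columns)
        inter&= null_bitmaps[col][w];
      if (inter)                 // some row is NULL in every relevant column
        return true;
    }
    return false;
  }

  int main()
  {
    // Two columns, 3 rows: row 2 is NULL in both => partial match exists.
    std::vector<std::vector<std::uint64_t>> null_bitmaps= {
      { UINT64_C(0b110) },       // column 0: rows 1 and 2 are NULL
      { UINT64_C(0b101) },       // column 1: rows 0 and 2 are NULL
    };
    std::printf("%d\n", (int) exists_intersection(null_bitmaps, {0, 1}));  // 1
  }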
The function setup_semijoin_dups_elimination() erroneously assumed
that if join_cache_level is set to 3 or 4 then the access type for a
table cannot be JT_REF or JT_EQ_REF. This could lead
to wrong query result sets.
If the optimizer switch 'semijoin_with_cache' is set to 'off' then
a join cache cannot be used to join the inner tables of a semi-join.
Also fixed a bug in the function check_join_cache_usage() that led
to wrong EXPLAIN output for some test cases.
BY CACHING OR REDUCING CREATEEVENT CALLS".
5.5 versions of the MySQL server performed worse than 5.1 versions
under a single-connection workload in autocommit mode on Windows XP.
Part of this slowdown can be attributed to the overhead associated
with the constant creation/destruction of MDL_lock objects in the MDL
subsystem. The problem is that creation/destruction of these
objects causes creation and destruction of the associated
synchronization primitives, which are expensive on Windows XP.
This patch tries to alleviate this problem by introducing a cache
of unused MDL_object_lock objects. Instead of destroying such
objects, we put them into the cache and then reuse them with a new
key when creation of a new object is requested.
To limit the size of this cache, a new --metadata-locks-cache-size
start-up parameter was introduced.
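The following self-contained sketch illustrates the caching idea only; the
actual implementation is in sql/mdl.cc and uses different data structures
(MDL_map, I_P_List, MDL_lock::m_version), so all names below are made up:

  #include <cstddef>
  #include <deque>
  #include <memory>
  #include <string>
  #include <utility>

  struct ToyLock
  {
    std::string key;   // stands in for the MDL key
    // ... synchronization primitives that are expensive to (re)create ...
  };

  class ToyLockCache
  {
  public:
    explicit ToyLockCache(std::size_t max_size) : m_max_size(max_size) {}

    // Get a lock object for `key`, reusing a cached object if possible.
    std::unique_ptr<ToyLock> acquire(const std::string &key)
    {
      std::unique_ptr<ToyLock> lock;
      if (!m_unused.empty())
      {
        lock= std::move(m_unused.back());    // reuse: no primitive creation
        m_unused.pop_back();
      }
      else
        lock= std::make_unique<ToyLock>();   // slow path: create from scratch
      lock->key= key;
      return lock;
    }

    // Return an unused lock; destroy it only if the cache is already full.
    void release(std::unique_ptr<ToyLock> lock)
    {
      if (m_unused.size() < m_max_size)
        m_unused.push_back(std::move(lock));
      // else: the unique_ptr destroys the object here
    }

  private:
    std::size_t m_max_size;                          // cache size limit
    std::deque<std::unique_ptr<ToyLock>> m_unused;   // list of unused objects
  };

  int main()
  {
    ToyLockCache cache(1024);            // plays the role of the size limit
    auto l1= cache.acquire("db1.t1");
    cache.release(std::move(l1));        // cached, not destroyed
    auto l2= cache.acquire("db2.t2");    // reuses the cached object, new key
    cache.release(std::move(l2));
  }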
mysql-test/r/mysqld--help-notwin.result:
Updated test after adding --metadata-locks-cache-size
parameter.
mysql-test/r/mysqld--help-win.result:
Updated test after adding --metadata-locks-cache-size
parameter.
mysql-test/suite/sys_vars/r/metadata_locks_cache_size_basic.result:
Added test coverage for newly introduced --metadata_locks_cache_size
start-up parameter and corresponding global read-only variable.
mysql-test/suite/sys_vars/t/metadata_locks_cache_size_basic-master.opt:
Added test coverage for newly introduced --metadata_locks_cache_size
start-up parameter and corresponding global read-only variable.
mysql-test/suite/sys_vars/t/metadata_locks_cache_size_basic.test:
Added test coverage for newly introduced --metadata_locks_cache_size
start-up parameter and corresponding global read-only variable.
sql/mdl.cc:
Introduced caching of unused MDL_object_lock objects, in order to
avoid costs associated with constant creation and destruction of
such objects in single-connection workloads run in autocommit mode.
Such costs can be pretty high on systems where creation and
destruction of synchronization primitives require a system call
(e.g. Windows XP).
To implement this cache, a list of unused MDL_object_lock instances
was added to the MDL_map object. Instead of being destroyed,
MDL_object_lock instances are put into this list and re-used later
when creation of a new instance is required. Also added an
MDL_lock::m_version counter to allow threads having outstanding
references to an MDL_object_lock instance to notice that it has
been moved to the unused objects list.
Added a global variable for a start-up parameter that limits
the size of the unused objects list.
Note that we don't cache MDL_scoped_lock objects since they
are supposed to be created only during execution of DDL
statements and therefore should not affect performance much.
sql/mdl.h:
Added a global variable for the start-up parameter that limits the
size of the unused MDL_object_lock objects list, and a constant
for its default value.
sql/sql_plist.h:
Added I_P_List<>::pop_front() function.
sql/sys_vars.cc:
Introduced the --metadata-locks-cache-size start-up parameter
for specifying the size of the cache of unused MDL_object_lock
objects.
OPTION SKIP-WRITE-BINLOG
System tables were not getting upgraded when
mysql_upgrade was run with the --skip-write-binlog
option. (The same happened with --write-binlog.) Also, with
this option, the mysql_upgrade_info file was not
getting created after the upgrade.
mysql_upgrade makes use of the mysql client tool in
order to run the upgrade scripts. While doing so, it
passes some of the command-line options (used to
start mysql_upgrade) directly to the mysql client.
The reason behind this bug is that some options,
like --skip-write-binlog and --upgrade-system-tables,
were being passed to the mysql tool along with the other
options, and the mysql invocation failed due to the
presence of these options, which mysql does not recognize.
Fixed this issue by filtering out the above-mentioned
options from the list of options that is passed to the
mysql and mysqlcheck tools. However, since --write-binlog
is supported by mysqlcheck, this option is passed
explicitly when running mysqlcheck (this is not part of the patch,
it was already there).
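As a self-contained illustration of the filtering idea (the names and the
exact option handling below are made up; the real fix operates on the option
structures in client/mysql_upgrade.c):

  #include <cstdio>
  #include <cstring>

  // Options understood only by mysql_upgrade itself must not be
  // forwarded to the mysql / mysqlcheck child tools. Prefix comparison
  // keeps the sketch short.
  static bool is_upgrade_only_option(const char *opt)
  {
    return std::strncmp(opt, "--upgrade-system-tables", 23) == 0 ||
           std::strncmp(opt, "--write-binlog", 14) == 0 ||
           std::strncmp(opt, "--skip-write-binlog", 19) == 0;
  }

  int main(int argc, char **argv)
  {
    // Build the argument list for the child tool, skipping the
    // upgrade-only options.
    for (int i= 1; i < argc; i++)
    {
      if (is_upgrade_only_option(argv[i]))
        continue;
      std::printf("forwarding to mysql/mysqlcheck: %s\n", argv[i]);
    }
    return 0;
  }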
Checking the contents of the general log after the upgrade
is not doable via an mtr test, so a manual test was performed.
Added a test to verify the creation of mysql_upgrade_info.
client/mysql_upgrade.c:
Bug#11827359 60223: MYSQL_UPGRADE PROBLEM WITH
OPTION SKIP-WRITE-BINLOG
With this patch, the --upgrade-system-tables and
--write-binlog options are no longer added to the
list of options used to start the mysql and mysqlcheck
tools.
mysql-test/r/mysql_upgrade.result:
Added a testcase for Bug#11827359.
mysql-test/t/mysql_upgrade.test:
Added a testcase for Bug#11827359.
mysql-test/r/select.result:
Test case for lp:780425
mysql-test/r/select_pkeycache.result:
lp:780425
mysql-test/t/select.test:
lp:780425
sql/sql_select.cc:
Added a DBUG_ASSERT to prove some logic and to be able to simplify the code later.
Set implicit_grouping if we delete a GROUP BY, to signal to do_select() that grouping still needs to be done.
A bug in the code of the function key_or() could lead to a situation
where performing an OR operation for one index changed the result of
the operation for another index. This bug is fixed with this patch.
Also corrected the specification and the code of the function
or_sel_tree_with_checks().
In MariaDB, when running in ONLY_FULL_GROUP_BY mode,
the server produced an incorrect error message saying that there
is an aggregate function without GROUP BY, for the artificially
created MIN/MAX functions introduced by the subquery MIN/MAX optimization.
The fix introduces a way to distinguish between MIN/MAX functions
artificially created as a result of a rewrite and normal
ones present in the query. The test for an ONLY_FULL_GROUP_BY violation
now additionally tests whether a MIN/MAX function was part of a MIN/MAX
subquery rewrite.
In order to be able to distinguish these MIN/MAX functions, the
patch introduces an additional flag in Item_in_subselect::in_strategy -
SUBS_STRATEGY_CHOSEN. This flag is set when the optimizer makes its
final choice of a subquery strategy. In order to make the choice
consistent, access to Item_in_subselect::in_strategy is provided
via new class methods.
This fixes MySQL BUG#12329653.
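The following self-contained sketch illustrates the flag-plus-accessors idea
only; apart from SUBS_STRATEGY_CHOSEN, all names below are made up, and the
real code lives in Item_in_subselect:

  #include <cassert>
  #include <cstdint>

  // Illustrative model of the strategy bookkeeping.
  enum ToyStrategy : std::uint8_t
  {
    SUBS_MINMAX_REWRITE  = 1 << 0,   // candidate: MIN/MAX rewrite
    SUBS_MATERIALIZATION = 1 << 1,   // candidate: materialization
    SUBS_STRATEGY_CHOSEN = 1 << 7    // set once the optimizer decides
  };

  class ToySubselect
  {
  public:
    void add_strategy(std::uint8_t s)    { m_strategy|= s; }
    void choose_strategy(std::uint8_t s) { m_strategy= s | SUBS_STRATEGY_CHOSEN; }
    bool test_strategy(std::uint8_t s) const { return (m_strategy & s) != 0; }
    // True only when the MIN/MAX rewrite was actually chosen, which is
    // what the ONLY_FULL_GROUP_BY check needs to know.
    bool is_minmax_rewrite_chosen() const
    {
      return test_strategy(SUBS_STRATEGY_CHOSEN) &&
             test_strategy(SUBS_MINMAX_REWRITE);
    }
  private:
    std::uint8_t m_strategy= 0;   // only accessed through the methods above
  };

  int main()
  {
    ToySubselect item;
    item.add_strategy(SUBS_MINMAX_REWRITE);      // a candidate, not yet chosen
    assert(!item.is_minmax_rewrite_chosen());
    item.choose_strategy(SUBS_MINMAX_REWRITE);   // final decision
    assert(item.is_minmax_rewrite_chosen());
  }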
The function add_ref_to_table_cond missed updating the value of
join_tab->pre_idx_push_select_cond after having updated the value
of join_tab->select->pre_idx_push_select_cond.
SCAN/CPU) => SLAVE FAILURE
When a statement containing a large number of rows is to be applied on
the slave, and the slave's table does not have a PK, it can take a
considerable amount of time to find and change all the rows that are
to be changed.
The proper slave enhancement will be implemented in WL 5597. However,
for this bug we make it clear to the user what the problem is, by
printing a message to the error log if the execution time of a given
statement in RBR takes more than LONG_FIND_ROW_THRESHOLD (set to 60
seconds). This shall help the DBA diagnose what is happening when
facing a slave server that is quiet for no apparent reason.
The note is only printed to the error log if log_warnings is set to a
value greater than 1.
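A self-contained sketch of the logic (illustrative only; the real code is in
Rows_log_event::find_row / do_apply_event and the rli state, with
LONG_FIND_ROW_THRESHOLD defined in sql/log_event.h, and the names below are
made up):

  #include <cstdio>
  #include <ctime>

  static const std::time_t LONG_FIND_ROW_THRESHOLD= 60;  // seconds

  struct ToyRliState
  {
    std::time_t row_stmt_start= 0;    // set when the ROW statement starts
    bool long_find_row_noted= false;  // note already emitted for this statement?
  };

  static int log_warnings= 2;         // server option; note requires > 1

  void maybe_note_long_find_row(ToyRliState *rli, bool table_has_pk)
  {
    if (table_has_pk || rli->long_find_row_noted || log_warnings <= 1)
      return;
    if (std::time(nullptr) - rli->row_stmt_start >= LONG_FIND_ROW_THRESHOLD)
    {
      std::fprintf(stderr,
                   "Note: applying a ROW event on a table without a primary "
                   "key; this statement has been running for more than "
                   "%ld seconds.\n", (long) LONG_FIND_ROW_THRESHOLD);
      rli->long_find_row_noted= true;   // print at most once per statement
    }
  }

  int main()
  {
    ToyRliState rli;
    rli.row_stmt_start= std::time(nullptr) - 120;  // pretend we started 2 min ago
    maybe_note_long_find_row(&rli, /*table_has_pk=*/false);  // emits the note
  }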
sql/log_event.cc:
Core of the patch.
In Rows_log_event::do_apply_event, sets STMT start
timestamp.
In Rows_log_event::find_row, if a PK is not used, then the start
timestamp is checked to see if the time spent on this STMT is enough
to justify the printing of a note (only if it was not printed before).
sql/log_event.h:
Defining LONG_FIND_ROW_THRESHOLD.
sql/rpl_rli.cc:
Resets long_find_row state in rli->context_cleanup().
sql/rpl_rli.h:
Two new rli properties that are necessary to control when to
emit a note in the error log: 1) timestamp that states when the
ROW statement started; 2) flag indicating whether the note has
been emitted for the current statement or not. Also added the
corresponding accessors.
The bug was accidentally fixed by fixing
Bug#11759688 52020: InnoDB can still deadlock on just INSERT...ON DUPLICATE KEY
a.k.a. the reintroduction of
Bug#7975 deadlock without any locking, simple select and update
alter_table-big.test was failing due to the use of the RAND() function, which is no
longer replication-safe.
The test has been modified to use static values instead.
Also, 'sleep' has been replaced with 'debug_sync', and the execution time of the
test has been reduced significantly.
This test has now been taken out of the disabled.def file and is enabled again.
a.k.a. Bug#7975 deadlock without any locking, simple select and update
Bug#7975 was reintroduced when the storage engine API was made
pluggable in MySQL 5.1. Instead of looking at thd->lex directly, we
rely on handler::extra(). But we were looking at the wrong extra()
flag, and we were ignoring the TRX_DUP_REPLACE flag in places where we
should obey it.
innodb_replace.test: Add tests for hopefully all affected statement
types, so that this bug should never resurface. Tests of this kind
should have been added when fixing Bug#7975 in MySQL 5.0.3 in the
first place.
rb:806 approved by Sunny Bains
A patch for this bug has already been pushed; a minor change is made here.
The database to be used after re-enabling the disabled code should be 'TEST',
but 'MYSQL' was being used instead.
This is the minor change being made here.