Analysis:
The fix for bug lp:985667 implements the method Item_subselect::no_rows_in_result()
for all main kinds of subqueries. This method is meant to be called from
return_zero_rows() to set Items to some default value when a query returns
no rows; aggregates and subqueries require special treatment in this case.
Every implementation of Item_subselect::no_rows_in_result() called
Item_subselect::make_const() to set the subquery predicate to its default value,
irrespective of where the predicate was located in the query. Once the predicate
was set to a constant, it was never executed again.
At the same time, the JOIN object of the fake select for UNIONs (the one used
for the final result of the UNION) is created only after all subqueries in the
union are executed. Since the subquery predicate was made constant, it was never
executed, and the corresponding JOIN was never created.
In order to decide whether the result of NOT IN is NULL or FALSE, Item_in_optimizer
needs to check whether the subquery result was empty. This is where we got the
crash: subselect_union_engine::no_rows() checks
unit->fake_select_lex->join->send_records, and the join object was NULL.
Solution:
If a subquery is in the HAVING clause it must be evaluated in order to know its
result, so that the result records can be filtered properly. Once subqueries in
the HAVING clause are executed even when there are no result rows, this specific
crash is solved, because the UNION will be executed and its JOIN will be
constructed. Therefore the fix for this crash is to narrow the fix for
lp:985667, and to apply Item_subselect::no_rows_in_result() only when the
subquery predicate is in the SELECT clause.
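For illustration, a hypothetical query of the shape that triggered the crash
(table and column names are made up): a NOT IN predicate over a UNION subquery,
placed in the HAVING clause of a query that returns no rows:

    -- Hypothetical reproduction sketch. The outer query produces no rows,
    -- and the NOT IN subquery is a UNION, so Item_in_optimizer must ask the
    -- union engine whether the subquery result was empty.
    SELECT MAX(a) FROM t1 WHERE 1 = 0
    HAVING 'x' NOT IN (SELECT b FROM t2 UNION SELECT c FROM t3);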
The class Item_func lacked an implementation of the virtual
function update_null_value.
Back-ported the fix for bug 62125 from the mysql 5.6 code line.
The test case was also back-ported.
Analysis:
Queries with implicit grouping (there is an aggregate function, but no GROUP BY)
follow some non-obvious semantics in the case of an empty result set.
Aggregate functions produce a special "natural" value depending on the
function; for instance MIN/MAX return NULL, while COUNT returns 0.
The complexity comes from non-aggregate expressions in the select list.
If a non-aggregate expression is constant, it can be computed, so its value
should be returned. However, if the expression is non-constant and depends
on columns from the empty result set, then the only meaningful value is NULL.
The cause of the wrong result was that, for subqueries, the optimizer didn't
distinguish between constant and non-constant ones in the case of an empty
result with implicit grouping.
Solution:
In all implementations of Item_subselect::no_rows_in_result(), check if the
subquery predicate is constant. If it is constant, do not set it to the
default value for implicit grouping; instead, let it be evaluated.
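For illustration, a hypothetical query showing the intended semantics (t1 and
its columns are made up):

    -- Implicit grouping over an empty result set:
    -- MIN(a) yields its "natural" value NULL, the constant subquery
    -- (SELECT 5) is evaluated to 5, and the column reference b becomes NULL.
    SELECT MIN(a), (SELECT 5), b FROM t1 WHERE 1 = 0;
    -- expected result: NULL, 5, NULL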
client/mysqldump.c:
Added LINT_INIT
mysql-test/mysql-test-run.pl:
Disable warning if example engine is not found
mysql-test/suite/rpl/t/rpl_semi_sync.test:
Rpl_semi_sync_master_yes_tx may be different on Windows
sql/sql_plugin.cc:
More DBUG_PRINT
support-files/compiler_warnings.supp:
Disable some innobase warnings
unittest/mysys/my_vsnprintf-t.c:
Fixed test failure on Solaris (typo)
Problem
========
Replication breaks when the event length exceeds the size of the master
Dump thread's max_allowed_packet.
The failure occurs because the event length plus the max_event_header
length exceeds the max_allowed_packet of the Dump thread.
This causes the Dump thread to break replication and throw an error.
That can happen e.g. with row-based replication in an Update_rows event.
Fix
====
The problem is fixed in two steps:
1) The Dump thread's limit on reading an event is raised to the upper limit,
i.e. the Dump thread reads whatever gets logged in the binary log.
2) On the slave side, the max_allowed_packet for the slave's threads (IO/SQL)
is increased to 1GB.
This is done through a new server option, slave_max_allowed_packet, which
lets the DBA regulate the max_allowed_packet of the slave threads (IO/SQL)
and facilitates the sending of large packets from the master to the slave.
As a result, large packets are received by the slave and applied
successfully; see the example below.
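For example, a DBA could tune the new option on the slave (values are
illustrative):

    -- Bound the packets accepted by the slave's IO/SQL threads,
    -- independently of the global max_allowed_packet:
    SET GLOBAL slave_max_allowed_packet = 1073741824;  -- 1GB, the upper limit
    SHOW VARIABLES LIKE 'slave_max_allowed_packet';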
sql/log_event.cc:
After the fix, the packet length is checked against the new option
slave_max_allowed_packet instead of max_allowed_packet.
sql/log_event.h:
Added the new option in the log_event.h file.
sql/mysqld.cc:
Added a new option to the server.
sql/slave.cc:
Increase the session max_allowed_packet of the slave's threads to a large
value, i.e. without taking the global max_allowed_packet into consideration.
sql/sql_repl.cc:
The dump thread's max_allowed_packet is set to the upper limit
which makes it independent and it now reads whatever gets
logged in the binary log.
The bug caused valid UNION queries to be rejected when a non-first select
clause contained a join expression with a degenerate single-table nest.
The bug was introduced into mysql-5.5 code line by the patch for
bug 33204.
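A hypothetical example of the previously rejected query shape (tables are
illustrative); the second select of the UNION wraps a single table in a
parenthesized join nest:

    SELECT a FROM t1
    UNION
    SELECT a FROM (t2);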
Fixed MDEV-331: last_insert_id() returns a signed number
mysql-test/r/auto_increment.result:
Added test case
mysql-test/t/auto_increment.test:
Added test case
sql/item_func.h:
Changed last_insert_id() to be unsigned.
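A sketch of the visible effect (table and values are illustrative):

    CREATE TABLE t1 (a BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY);
    INSERT INTO t1 VALUES (18446744073709551610);
    INSERT INTO t1 VALUES (NULL);
    -- Before the fix the value could be shown as a negative (signed)
    -- number; now it is returned as unsigned.
    SELECT LAST_INSERT_ID();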
- Make the SHOW EXPLAIN code take into account that an st_select_lex object
without a join can be a full-featured SELECT that was already executed and
cleaned up.
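For reference, SHOW EXPLAIN is invoked against a running connection, e.g.:

    -- Show the plan of the statement currently executing in connection 42
    -- (the connection id is illustrative):
    SHOW EXPLAIN FOR 42;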
Analysis:
When the method JOIN::choose_subquery_plan() decided to apply
the IN-TO-EXISTS strategy, it set the unit and select_lex
uncacheable flag to UNCACHEABLE_DEPENDENT_INJECTED unconditionally.
As a result, even if IN-TO-EXISTS injected non-correlated predicates,
the subquery was still treated as correlated.
Solution:
Set the subquery as correlated only if the injected predicate(s) depend
on the outer query.
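For illustration, a hypothetical non-correlated case (tables are made up):

    -- IN-TO-EXISTS rewrites the predicate below roughly into
    -- EXISTS (SELECT 1 FROM t2 WHERE t2.b = 5); the injected condition
    -- references no columns of the outer query, so the subquery can stay
    -- non-correlated and its result can be cached.
    SELECT a FROM t1 WHERE 5 IN (SELECT b FROM t2);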
- The problem was that create_ref_for_key() would act differently, depending on
whether we're running EXPLAIN or the actual query.
- As the first step, fixed the EXPLAIN printout not to depend on actions in create_ref_for_key().
backport dmitry.shulga@oracle.com-20120209125742-w7hdxv0103ymb8ko from mysql-trunk:
Patch for bug#11764747 (formerly known as 57612): SET GLOBAL READ_ONLY=1 cannot
progress when a table is locked with LOCK TABLES.
The reason for the bug was that the mysql server flushed all open tables
during handling of the statement 'SET GLOBAL READ_ONLY=1'. Therefore, if some
of these tables were locked by "LOCK TABLE ... READ" from a different
connection, execution of 'SET GLOBAL READ_ONLY=1' would wait for the lock on
such a table even though the table was locked in a compatible read mode.
Flushing all open tables before setting the read_only system variable
is inherited from the 5.1 implementation, where this was the only possible
approach to ensure that there weren't any pending write operations on open
tables.
Starting from version 5.5, such behaviour is guaranteed by the fact that we
acquire the global_read_lock before setting the read_only flag. Since
acquiring the global_read_lock succeeds only when there isn't any active
write operation, we can remove the flushing of open tables from the
processing of SET GLOBAL READ_ONLY=1.
This modification changes the server behavior so that read locks held
by other connections (LOCK TABLE ... READ) will no longer block attempts
to enable read_only; see the sketch below.
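A sketch of the scenario (table name is illustrative):

    -- connection 1:
    LOCK TABLE t1 READ;
    -- connection 2: before the patch this waited for connection 1 to
    -- release its read lock; now it can proceed:
    SET GLOBAL read_only = 1;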
Analysis:
The crash is a result of Item_cache_temporal::example not being set
(it is NULL). It turns out that the value of Item_cache_temporal
may be set directly by calling Item_cache_temporal::store_packed
without ever setting the "example" of this Item_cache. Therefore
the failing assertion is too narrow.
Solution:
Remove the assert.
In principle we could override this method for Item_cache_temporal,
but it doesn't make sense to do so just for this assert.
Renamed the system variable optimizer_use_stat_tables to use_stat_tables.
This variable now has only 3 possible values:
'never', 'complementary', 'preferably'.
If the server has been launched with
--use-stat-tables='complementary'|'preferably'
then the statistics tables can be employed by the optimizer and by the
ANALYZE command.
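For example (table name is illustrative):

    -- With 'complementary' or 'preferably', the statistics tables can be
    -- employed by the optimizer and by the ANALYZE command:
    SET GLOBAL use_stat_tables = 'preferably';
    ANALYZE TABLE t1;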
CHEAP SQ: Valgrind warnings "Memory lost" with IN and EXISTS nested subquery, materialization+semijoin
Analysis:
The memory leak was a result of the interaction of semi-join optimization
with early optimization of constant subqueries. The function
setup_jtbm_semi_joins() created a dummy temporary table "dummy_table"
in order to make some JOIN_TAB objects complete. Normally, such temporary
tables are freed inside JOIN_TAB::cleanup.
However, the inner-most subquery is pre-optimized, which allows the
optimization of the MAX subquery to determine that its WHERE is TRUE,
and thus to compute the result of the MAX during optimization. This
ultimately allows the optimize phase of the outer query to find that
its WHERE clause is FALSE. Once JOIN::optimize finds that the result
set is empty, it sets zero_result_cause and returns *before* it ever
reaches make_join_statistics(). As a result the query plan has no
JOIN_TABs at all. The temporary table is supposed to be cleaned up
via JOIN_TAB::cleanup, but this never happens because there is no
JOIN_TAB for this table. Hence we get a memory leak.
Solution:
Whenever there are no JOIN_TABs, iterate over all table references in
JOIN::join_list, and free the ones that contain semi-join temporary
tables.
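A hypothetical query shape matching the description above (tables are
made up):

    -- The MAX subquery is pre-optimized and computed during optimization;
    -- if t3 is empty, MAX(c) is NULL, the outer WHERE is found FALSE, and
    -- JOIN::optimize returns before make_join_statistics() creates any
    -- JOIN_TABs for the materialized IN subquery.
    SELECT a FROM t1
    WHERE a IN (SELECT b FROM t2)
      AND a = (SELECT MAX(c) FROM t3);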
Fixed by backport of:
------------------------------------------------------------
revno: 3402.50.156
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
branch nick: mysql-trunk-test
timestamp: Wed 2012-02-08 14:10:23 +0100
message:
Bug#13417754 ASSERT IN ROW_DROP_DATABASE_FOR_MYSQL DURING DROP SCHEMA
This assert could be triggered if an InnoDB table was being moved
to a different database using ALTER TABLE ... RENAME, while that
database was concurrently being dropped by DROP DATABASE.
The reason for the problem was that no metadata lock was taken
on the target database by ALTER TABLE ... RENAME.
DROP DATABASE was therefore not blocked and could remove
the database while ALTER TABLE ... RENAME was executing. This
could cause the assert in InnoDB to be triggered.
This patch fixes the problem by taking an IX metadata lock on
the target database before ALTER TABLE ... RENAME starts
moving a table to a different database.
Note that this problem did not occur with RENAME TABLE which
already takes the correct metadata locks.
Also note that this patch slightly changes the behavior of
ALTER TABLE ... RENAME. Before, the statement would abort and
return an error if a lock on the target table name could not
be taken immediately. With this patch, ALTER TABLE ... RENAME
will instead block and wait until the lock can be taken
(or until we get a lock timeout). This also means that it is
possible to get ER_LOCK_DEADLOCK errors in this situation
since we allow ALTER TABLE ... RENAME to wait and not just
abort immediately.
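A sketch of the race (database and table names are illustrative):

    -- connection 1:
    ALTER TABLE db1.t1 RENAME db2.t1;
    -- connection 2, concurrently (could previously trigger the assert in
    -- row_drop_database_for_mysql; now it waits for the metadata lock):
    DROP DATABASE db2;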
- Fixed code that was not ready for a major version number > 9
- Fixed test cases that assumed the max major version number could be 9
Updated version number for deprecated options (will be removed in a later commit)
VERSION:
Version number 10.0.0
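For reference, the 100000 form appears in executable comments, where a
three-digit major version must now be handled (a minimal sketch):

    -- The body of the comment is executed only by servers whose version
    -- is at least the embedded number; 100000 encodes version 10.0.0.
    SELECT 1 /*!100000 + 1 */;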
client/mysqlbinlog.cc:
Added support for major version numbers > 9
cmake/mysql_version.cmake:
Added support for version number components that are 0
mysql-test/r/comments.result:
Modified test to handle version number 100000
mysql-test/r/func_system.result:
Modified test to handle version number 100000
mysql-test/r/log_state.result:
Updated deprecated error message
mysql-test/r/sp.result:
Modified test to handle version number 100000
mysql-test/r/subselect4.result:
Updated deprecated error message
mysql-test/r/variables.result:
Updated deprecated error message
mysql-test/suite/rpl/r/rpl_conditional_comments.result:
Modified test to handle version number 100000
mysql-test/suite/rpl/r/rpl_loaddatalocal.result:
Modified test to handle version number 100000
mysql-test/suite/rpl/t/rpl_conditional_comments.test:
Modified test to handle version number 100000
mysql-test/suite/rpl/t/rpl_loaddatalocal.test:
Modified test to handle version number 100000
mysql-test/suite/sys_vars/r/debug_basic.result:
Updated deprecated error message
mysql-test/suite/sys_vars/r/engine_condition_pushdown_basic.result:
Updated deprecated error message
mysql-test/suite/sys_vars/r/log_basic.result:
Updated deprecated error message
mysql-test/suite/sys_vars/r/log_slow_queries_basic.result:
Updated deprecated error message
mysql-test/suite/sys_vars/r/multi_range_count_basic.result:
Updated deprecated error message
mysql-test/suite/sys_vars/r/rpl_recovery_rank_basic.result:
Updated deprecated error message
mysql-test/suite/sys_vars/r/sql_big_selects_func.result:
Updated deprecated error message
mysql-test/suite/sys_vars/r/sql_max_join_size_basic.result:
Updated deprecated error message
mysql-test/suite/sys_vars/r/sql_max_join_size_func.result:
Updated deprecated error message
mysql-test/t/comments.test:
Modified test to handle version number 100000
mysql-test/t/file_contents.test:
Modified test to handle version number 100000
mysql-test/t/func_system.test:
Modified test to handle version number 100000
mysql-test/t/parser_not_embedded.test:
Modified test to handle version number 100000
mysql-test/t/sp.test:
Modified test to handle version number 100000
sql/mysqld.cc:
Updated version number for deprecated options (will be removed in a later commit)
sql/slave.cc:
Modified code to handle version number 100000
Better error messages
sql/sql_lex.cc:
Modified test to handle version number 100000 in comment syntax
sql/sys_vars.cc:
Updated version number for deprecated options (will be removed in a later commit)