Prohibited the use of the hash join algorithm BNLH when join attributes
require non-binary collations. This has to be done because BNLH does
not support joins on such attributes yet.
This limitation will be lifted later.
Changed the default collations for the schemas of some test cases
to preserve the old execution plans.
The lines below, which were added in the patch for Bug#56814, cause this crash:
+ if (table->table)
+ table->table->maybe_null= FALSE;
Consider the following test case:
--
CREATE TABLE t1(f1 INT NOT NULL);
INSERT INTO t1 VALUES (16777214),(0);
SELECT COUNT(*) FROM t1 LEFT JOIN t1 t2
ON 1 WHERE t2.f1 > 1 GROUP BY t2.f1;
DROP TABLE t1;
--
We set TABLE::maybe_null to FALSE for the t2 table,
and in create_tmp_field() we create the corresponding tmp table field
using the create_tmp_field_from_item() function instead of
create_tmp_field_from_field(). As a result we get a
LONGLONG field. Since there is a GROUP BY clause, we calculate
the group buffer length, see calc_group_buffer().
The item from the group list that is used for this calculation
refers to the field of the real table and has type LONG.
So the group buffer length becomes insufficient for storing a
LONGLONG value. This leads to overwriting the wrong memory
area in the do_field_int() function, which is called from
end_update().
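To make the size mismatch concrete, here is a minimal standalone sketch (not server code; the buffer layout and the stored value are purely illustrative) of what happens when an 8-byte LONGLONG value is stored into a group buffer slot that was sized for a 4-byte LONG:
--
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
  // The group buffer reserves 4 bytes for this key part, because the group
  // list item refers to the real table's LONG (32-bit) field.
  unsigned char group_buffer[8] = {0};  // [0..3] = slot for this key part,
                                        // [4..7] = memory of the next key part

  // The tmp table field, however, is a LONGLONG (64-bit), so a
  // do_field_int()-style store writes 8 bytes.
  int64_t longlong_value = -2;          // any value whose upper 4 bytes are non-zero
  std::memcpy(group_buffer, &longlong_value, sizeof longlong_value);

  // On a little-endian machine bytes 4..7 are now 0xFF: the slot of the next
  // key part has been overwritten, which is the kind of corruption described above.
  std::printf("byte 4 after the store: 0x%02x\n", group_buffer[4]);
  return 0;
}
--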
After some investigation I found out that
create_tmp_field_from_item() is used only for OLAP
grouping and cannot be used for common grouping,
as that could lead to an incompatibility between the tmp
table fields and the group buffer length.
We cannot remove the create_tmp_field_from_item() call from
create_tmp_field() because OLAP needs it, and we cannot use this
function for common grouping. So we should remove setting
TABLE::maybe_null to FALSE from simplify_joins().
In that case, however, we get the wrong behaviour of
list_contains_unique_index() back. To fix it we
use the Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join.
mysql-test/r/group_by.result:
test case
mysql-test/r/join_outer.result:
test case
mysql-test/t/group_by.test:
test case
mysql-test/t/join_outer.test:
test case
sql/sql_select.cc:
--remove wrong code
--use the Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join
When join buffers are employed, an index scan cannot be used
for the first table with grouping columns.
mysql-test/r/join_cache.result:
Added a test case for bug #664508.
Sorted results for some other test cases.
mysql-test/t/join_cache.test:
Added a test case for bug #664508.
Sorted results for some other test cases.
- A prerequisite cleanup patch for making KILL reliable.
The test case main.kill did not work reliably.
The following problems have been identified:
1. A kill signal could get lost if it arrived shortly before a
thread started reading from the client connection.
2. A kill signal could get lost if it arrived shortly before a
thread started waiting on a condition variable.
These problems have been solved as follows. Please also see the
added code comments for more details.
1. There is no safe way to detect when a thread enters the
blocking state of a read(2) or recv(2) system call, where it
can be interrupted by a signal. Hence it is not possible to
wait for the right moment to send a kill signal. It has been
decided not to fix this in the code. Instead, the test case
repeats the KILL statement until the connection terminates.
2. Before waiting on a condition variable, we register it
together with a synchronizing mutex in THD::mysys_var. After
this, we need to test THD::killed again. In some places we
only tested it in a loop condition before the registration. When
THD::killed had been set between this test and the registration,
we entered the wait without noticing the killed flag. Additional
checks have been introduced where required.
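The following is a minimal standalone sketch of the pattern described in point 2, using std::mutex/std::condition_variable as illustrative stand-ins for THD::mysys_var, enter_cond() and THD::killed (none of these toy names come from the server code):
--
#include <condition_variable>
#include <mutex>
#include <thread>

struct Waiter {
  std::mutex mutex;               // the synchronizing mutex
  std::condition_variable cond;   // the registered condition variable
  bool killed = false;            // analogue of THD::killed
  bool work_ready = false;

  void wait_for_work() {
    std::unique_lock<std::mutex> lock(mutex);  // "registration" point
    // Re-check the kill flag *after* registration. Checking it only in a loop
    // condition before this point loses a kill that arrives in between.
    while (!killed && !work_ready)
      cond.wait(lock);
  }

  void kill() {                   // rough analogue of THD::awake()
    std::lock_guard<std::mutex> lock(mutex);
    killed = true;
    cond.notify_all();
  }
};

int main() {
  Waiter w;
  std::thread waiter([&] { w.wait_for_work(); });
  w.kill();                       // wakes the waiter even if it was about to block
  waiter.join();
  return 0;
}
--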
In addition to the above, the main.kill test case has been
rewritten. All sleeps have been replaced by Debug Sync Facility
synchronization. A couple of sync points have been added to the
server code.
To avoid further problems, in case the test case fails in spite of
the fixes, it has been added to the "experimental" list for now.
- Most of the work on this patch was authored by Ingo Struewing
mysql-test/t/kill.test:
Re-wrote test case to use Debug Sync points instead of sleeps
sql/event_queue.cc:
Fixed kill detection in Event_queue::cond_wait() by adding a check
after enter_cond().
sql/lock.cc:
Moved Debug Sync points behind enter_cond().
Fixed comments.
sql/slave.cc:
Fixed kill detection in start_slave_thread() by adding a check
after enter_cond().
sql/sql_class.cc:
Swapped order of kill and close in THD::awake().
Added comments.
sql/sql_class.h:
Added a comment to THD::killed.
sql/sql_parse.cc:
Added a sync point in do_command().
sql/sql_select.cc:
Added a sync point in JOIN::optimize().
- Added more tests to the MWL#89 specific test, and made the test more modular.
- Updated test files.
- Fixed a memory leak.
- More comments.
mysql-test/r/subselect_mat.result:
- Updated the test file to reflect the new optimizer switches related to
materialized subquery execution.
- Added one extra test to test all cases that expose BUG#40037 (this is an old bug from 5.x).
- Updated the test result with correct results that expose BUG#40037.
mysql-test/t/subselect_mat.test:
- Updated the test file to reflect the new optimizer switches related to
materialized subquery execution.
- Added one extra test to test all cases that expose BUG#40037 (this is an old bug from 5.x).
- Updated the test result with correct results that expose BUG#40037.
sql/sql_select.cc:
Fixed a memory leak reported by Valgrind.
BUG#26447 prefer a clustered key for an index scan, as secondary index is always slower
... which was fixed to cause
BUG#35850 queries take 50% longer
... and was reverted
and
BUG#39653 prefer a secondary index for an index scan, as clustered key is always slower
... which was fixed to cause
BUG#55656 mysqldump takes six days instead of half an hour
... and was amended with a special case workaround
sql/opt_range.cc:
move get_index_only_read_time() into the handler class
sql/sql_select.cc:
use the cost, not the index length, when choosing a cheaper index
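A rough sketch of the changed decision (the struct, names and numbers below are hypothetical, not the server's data structures): the cheaper index is picked by its estimated index-only read cost rather than by comparing key lengths, which wrongly penalizes a clustered primary key:
--
#include <algorithm>
#include <cstdio>
#include <vector>

struct CandidateIndex {
  const char* name;
  unsigned    key_length;            // what the old heuristic compared
  double      index_only_read_cost;  // what the fixed code compares
                                     // (cf. get_index_only_read_time())
};

int main() {
  std::vector<CandidateIndex> candidates = {
    {"PRIMARY",       200, 12.0},    // clustered key: long, but cheap to scan
    {"secondary_idx",   8, 95.0},    // short key, but a slower scan on this engine
  };
  auto by_cost = [](const CandidateIndex& a, const CandidateIndex& b) {
    return a.index_only_read_cost < b.index_only_read_cost;  // not key_length
  };
  auto best = std::min_element(candidates.begin(), candidates.end(), by_cost);
  std::printf("chosen index: %s\n", best->name);
  return 0;
}
--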
The create_sort_index() function overwrites the original JOIN_TAB::type field.
On re-execution of the subquery, the overwritten JOIN_TAB::type (JT_ALL) is
used instead of JT_FT. This misleads test_if_skip_sort_order(), and
the function tries to find a suitable key for the ordering, which should
not be allowed for a FULLTEXT (JT_FT) table.
The fix is to restore the JOIN_TAB structures for the subselect on re-execution
for EXPLAIN.
Additional fix:
update the TABLE::maybe_null field, which
affects list_contains_unique_index() behaviour, as it
could have the value maybe_null==TRUE based on the
assumption that this join is an outer join
(see the setup_table_map() function).
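A simplified sketch of the save/restore idea (the types below are toy stand-ins, not the real JOIN/JOIN_TAB definitions): a copy of the join tab array is kept from before the destructive step and is copied back before the plan is re-executed for EXPLAIN, so the original access type such as JT_FT is not lost:
--
#include <vector>

enum AccessType { JT_ALL, JT_FT };

struct JoinTab {
  AccessType type;
};

struct Join {
  std::vector<JoinTab> join_tab;
  std::vector<JoinTab> saved_join_tab;          // analogue of JOIN::save_join_tab

  void save_tabs()    { saved_join_tab = join_tab; }
  void restore_tabs() { join_tab = saved_join_tab; }

  void create_sort_index() {
    // The destructive step: the original access type is overwritten.
    for (JoinTab& tab : join_tab)
      tab.type = JT_ALL;
  }
};

int main() {
  Join join;
  join.join_tab.push_back(JoinTab{JT_FT});
  join.save_tabs();           // taken before the first execution
  join.create_sort_index();   // JT_FT is overwritten with JT_ALL here
  join.restore_tabs();        // before re-execution for EXPLAIN: JT_FT is back
  return join.join_tab[0].type == JT_FT ? 0 : 1;
}
--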
mysql-test/r/explain.result:
test case
mysql-test/t/explain.test:
test case
sql/item_subselect.cc:
Make subquery uncacheable in case of EXPLAIN. It allows to keep
original JOIN_TAB::type(see JOIN::save_join_tab) and restore it
on re-execution.
sql/sql_select.cc:
-restore JOIN_TAB strucures for subselect on re-execution for EXPLAIN
-Update TABLE::maybe_null field as it could have
the value(maybe_null==TRUE) based on the assumption
that this join is outer(see setup_table_map() func).
This change is not related to the crash problem but
affects EXPLAIN results in the test case.
The subquery executes twice, at the top-level JOIN::optimize and ::execute stages.
On the first execution the create_sort_index() function is called and an
FT_SELECT object is created and destroyed. HANDLER::ft_handler is cleaned up
in the object's destructor, so on the second execution the FT_SELECT::get_next()
method returns an error.
The fix is to reinitialize the HANDLER::ft_handler field before re-execution of the subquery.
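A minimal sketch of the lifetime problem and of the fix (toy classes, not the real handler/FT_SELECT code): a helper object's destructor clears the shared handle after the first execution, so the handle has to be reinitialized before the subquery is executed again:
--
#include <cassert>
#include <memory>

struct FtHandle {                       // stand-in for what HANDLER::ft_handler points to
  bool next() { return true; }
};

struct Handler {                        // stand-in for the table handler
  std::unique_ptr<FtHandle> ft_handler;
  void ft_init() { ft_handler.reset(new FtHandle()); }
};

struct FtSelect {                       // stand-in for FT_SELECT
  Handler* handler;
  explicit FtSelect(Handler* h) : handler(h) {}
  ~FtSelect() { handler->ft_handler.reset(); }  // the destructor cleans up the handle
  bool get_next() { return handler->ft_handler && handler->ft_handler->next(); }
};

int main() {
  Handler handler;
  handler.ft_init();
  { FtSelect select(&handler); assert(select.get_next()); }  // first execution
  handler.ft_init();                    // the fix: reinit before re-execution
  { FtSelect select(&handler); assert(select.get_next()); }  // second execution now succeeds
  return 0;
}
--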
mysql-test/r/fulltext.result:
test case
mysql-test/t/fulltext.test:
test case
sql/item_func.cc:
reinit ft_handler before re-execution of subquery
sql/item_func.h:
Fixed method name
sql/sql_select.cc:
reinit ft_handler before re-execution of subquery
- When building multiple equalities for HAVING, don't set JOIN::cond_equal; set
join_having_equal instead. Setting JOIN::cond_equal based on HAVING makes the
equality propagation data self-inconsistent.
- Changed the default optimizer switches to provide 5.1/5.2-compatible behavior.
- Added a regression test file that consistently tests all cases covered by MWL#89.
- Added/corrected/improved comments.
Employed the same kind of optimization as in the fix for the cases
when a join buffer is used.
The optimization performs early evaluation of the conditions from
an ON expression that refer only to the outer tables of
an outer join.
Improved handling of EXPLAIN statements for subqueries.
This patch specifically solves the problem where EXPLAIN reports:
"const row not found"
instead of
"no matching row in const table".
Added the possibility of not factoring the condition pushed to
the access index out of the condition pushed to a joined table.
This is useful for the condition pushed to the index when a hashed
join buffer for BKA is employed. In this case the index condition
may be false for some, but not all, records with the same key.
So the condition must be checked not only after the index lookup,
but after fetching the row data as well, and it makes sense not to
factor it out of the condition checked after reading the
row data.
The bug happened because the condition pushed to an index was
always factored out of the condition pushed to the accessed table.
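A toy model of why the per-row re-check is needed (none of the names below come from the server's BKA code): several buffered rows share the same lookup key of a hashed join buffer, but the pushed condition also depends on a join-buffer column, so it holds for some of those rows and fails for others; keeping the pushed condition in the per-row table condition re-checks it after the row data has been fetched:
--
#include <cstdio>
#include <map>
#include <vector>

struct BufferedRow { int join_key; int extra; };

static bool pushed_condition(const BufferedRow& r) { return r.extra > 10; }

int main() {
  // Rows collected in the join buffer; the first two share the same lookup key.
  std::vector<BufferedRow> join_buffer = {{1, 5}, {1, 20}, {2, 30}};

  std::map<int, std::vector<BufferedRow>> by_key;   // groups per lookup key
  for (const BufferedRow& r : join_buffer)
    by_key[r.join_key].push_back(r);

  for (const auto& group : by_key) {
    // One index lookup is done per key; the buffered rows of the group are then
    // matched individually, so the pushed condition is evaluated per row here
    // rather than being dropped after the lookup.
    for (const BufferedRow& r : group.second) {
      bool matches = pushed_condition(r);
      std::printf("key=%d extra=%d -> %s\n", r.join_key, r.extra,
                  matches ? "match" : "rejected after fetching row data");
    }
  }
  return 0;
}
--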
Phase 3: Implementation of re-optimization of subqueries with injected predicates
and cost comparison between Materialization and IN->EXISTS strategies.
The commit contains the following known problems:
- The implementation of EXPLAIN has not been re-engineered to reflect the
changes in subquery optimization. EXPLAIN for subqueries is called during
the execute phase, which results in different code paths during JOIN::optimize
and thus in differing EXPLAIN messages for constant/system tables.
- There are some Valgrind warnings that need investigation.
- Several EXPLAINs with minor differences need to be reconsidered after fixing
the EXPLAIN problem above.
This patch also adds one extra optimizer_switch: 'in_to_exists' for complete
manual control of the subquery execution strategies.
Applied the fix for bug #47217 from the mysql-6.0 codebase.
The patch adds not null predicates generated for the left parts
of the equality predicates used for ref accesses. This is done
for such predicates both in where conditions and on conditions.
For the where conditions the not null predicates were generated,
but in 5.0/5.1 they actually were never used due to a lame
merge from 4.1 to 5.0. The fix for bug #47217 caused these
predicates to be used in the condition pushed to the tables.
Yet only this patch generates not null predicates for equality
predicates from the on conditions of outer joins.
This patch introduces a performance regression that can be
observed on a test case from null_key.test. The regression
will disappear after the fix for bug #57024 from mariadb-5.1
is pulled into mariadb-5.3.
The patch contains many changes in the output of the EXPLAIN
commands, since the generated not null predicates are considered
parts of the conditions pushed to the join tables and may add
'Using where' to some rows of EXPLAINs where there used
to be no such comment.
The conditions over the outer tables are now extracted from
the ON condition of any outer join. Such a condition is
saved in a special field of the JOIN_TAB structure for
the first inner table of the outer join. The condition
is checked before the first inner table is accessed. If
it turns out to be false, the table is not accessed at all
and a NULL-complemented row is generated immediately.
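A simplified nested-loop sketch of this behaviour (toy tables and names; the real implementation stores the extracted condition in a JOIN_TAB field rather than in a free function): the part of the ON condition that references only the outer table is evaluated before the first inner table is touched, and a NULL-complemented row is produced right away when it is false:
--
#include <cstdio>
#include <vector>

// Toy query: SELECT ... FROM outer_t LEFT JOIN inner_t
//            ON outer_t.a > 1 AND inner_t.b = outer_t.a
// The predicate "outer_t.a > 1" references only the outer table, so it can be
// checked before scanning inner_t at all.
struct OuterRow { int a; };
struct InnerRow { int b; };

static bool outer_only_cond(const OuterRow& o)              { return o.a > 1; }
static bool join_cond(const OuterRow& o, const InnerRow& i) { return i.b == o.a; }

int main() {
  std::vector<OuterRow> outer_t = {{1}, {2}};
  std::vector<InnerRow> inner_t = {{2}, {3}};

  for (const OuterRow& o : outer_t) {
    if (!outer_only_cond(o)) {
      // Early evaluation: the inner table is not accessed at all and a
      // NULL-complemented row is emitted immediately.
      std::printf("a=%d, b=NULL (inner table skipped)\n", o.a);
      continue;
    }
    bool matched = false;
    for (const InnerRow& i : inner_t) {
      if (join_cond(o, i)) {
        matched = true;
        std::printf("a=%d, b=%d\n", o.a, i.b);
      }
    }
    if (!matched)
      std::printf("a=%d, b=NULL\n", o.a);   // the usual NULL-complementing path
  }
  return 0;
}
--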
sql/field.cc:
Remove the 'new_mode' feature that was never implemented in a newer MySQL version.
sql/item_cmpfunc.cc:
Boyer-Moore is stable; it doesn't have to be protected by --skip-new anymore
sql/mysqld.cc:
Don't disable some proven stable functions with --skip-new
sql/records.cc:
Don't disable record caching with --safe-mode anymore
sql/sql_delete.cc:
Do fast truncate even if --skip-new or --safe is used
sql/sql_parse.cc:
Always use mysql_optimizer() (instead of mysql_recreate_table() in the case of --safe or --skip-new)
sql/sql_select.cc:
Don't disable 'only_eq_ref_tables' if --safe is used.
sql/sql_yacc.yy:
Removed a meaningless test of --old
* Fixed a "crack" between semijoin analysis and materialization analysis
where semijoin didn't set the correct strategy for the IN predicate.
* Cosmetic changes in the code/comments.