Adds a deprecation warning for the mysqlbinlog options
--base64-output=always and --base64-output.
A warning is printed when the flags are used,
and also when running mysqlbinlog --help.
client/mysqlbinlog.cc:
Give a warning for --base64-output=always and --base64-output,
and print a warning in the mysqlbinlog --help text.
mysql-test/r/mysqlbinlog.result:
updated result file
mysql-test/t/mysqlbinlog.test:
Test that mysqlbinlog --base64-output=always gives a warning.
This crash could happen if TRUNCATE TABLE indirectly failed to open a
merge table due to failures to open its underlying tables. Even if opening
failed, the TRUNCATE TABLE code would still try to invalidate the table in
the query cache. Since the table had already been closed and its memory
released, this could lead to a crash.
This bug was introduced by a combination of the changes from the patch
for Bug#52044, where failing to open a table causes already opened
tables to be closed, and the changes from the patch for Bug#49938, where
TRUNCATE TABLE uses the standard open tables function.
This patch fixes the problem by setting the TABLE pointer to NULL before
invalidating the query cache.
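A minimal sketch of the defensive pattern in plain C++, with invented
Table/TableRef names standing in for the server's TABLE and
TABLE_LIST::table; the real fix clears the TABLE pointer held by the
TRUNCATE code before the query cache call:
--
struct Table { /* stands in for the server's TABLE */ };
struct TableRef { Table *table; };   // stands in for TABLE_LIST::table

void query_cache_invalidate(TableRef *ref) {
  // Guard: if the TABLE object was freed after a failed open, the
  // pointer must already be NULL so we never touch freed memory.
  if (ref->table == nullptr)
    return;
  /* ... invalidate cache entries via ref->table ... */
}

void on_open_failure(TableRef *ref) {
  // The fix: reset the pointer *before* invalidating the query cache,
  // since the TABLE was closed and its memory released.
  ref->table= nullptr;
  query_cache_invalidate(ref);   // now safe
}
--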
Test case added to truncate_coverage.test.
The lines below, which were added in the patch for Bug#56814, cause this crash:
+ if (table->table)
+ table->table->maybe_null= FALSE;
Consider the following test case:
--
CREATE TABLE t1(f1 INT NOT NULL);
INSERT INTO t1 VALUES (16777214),(0);
SELECT COUNT(*) FROM t1 LEFT JOIN t1 t2
ON 1 WHERE t2.f1 > 1 GROUP BY t2.f1;
DROP TABLE t1;
--
We set TABLE::maybe_null to FALSE for the t2 table, so in
create_tmp_field() we create the corresponding tmp table field
using the create_tmp_field_from_item() function instead of
create_tmp_field_from_field(). As a result we get a
LONGLONG field. Since the query has a GROUP BY clause, we calculate
the group buffer length; see calc_group_buffer().
The item from the group list which is used for the calculation
refers to the field from the real table and has type LONG.
So the group buffer length becomes insufficient for storing a
LONGLONG value. This leads to overwriting the wrong memory
area in the do_field_int() function, which is called from
end_update().
After some investigation I found out that
create_tmp_field_from_item() is used only for OLAP
grouping and cannot be used for common grouping,
as there can be an incompatibility between the tmp
table fields and the group buffer length.
We cannot remove the create_tmp_field_from_item() call from
create_tmp_field() since OLAP needs it, and we cannot use this
function for common grouping. So we should remove the setting of
TABLE::maybe_null to FALSE from simplify_joins().
In this case we would get the wrong behaviour of
list_contains_unique_index() back. To fix it we
use the Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join.
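A minimal illustration of the buffer mismatch in plain C++, with an
invented local buffer rather than the server's real group buffer:
--
#include <cstdint>
#include <cstring>

int main() {
  // Group buffer sized from the original field's type: LONG is 4 bytes.
  unsigned char group_buffer[4];

  // The tmp table field created via create_tmp_field_from_item() is
  // LONGLONG, so storing a group value writes 8 bytes.
  int64_t value= 16777214;

  // Writing sizeof(value) == 8 bytes into the 4-byte buffer overruns it;
  // this is the effect observed in do_field_int()/end_update():
  //   memcpy(group_buffer, &value, sizeof(value));   // overruns the buffer
  memcpy(group_buffer, &value, sizeof(group_buffer)); // only 4 bytes fit
  return 0;
}
--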
mysql-test/r/group_by.result:
test case
mysql-test/r/join_outer.result:
test case
mysql-test/t/group_by.test:
test case
mysql-test/t/join_outer.test:
test case
sql/sql_select.cc:
- Remove wrong code.
- Use the Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join.
The problem was caused by the fix for Bug#49487 and became visible
after the fix for Bug#56679.
Items are cleaned up and set to the unfixed state after filling a derived table.
So we cannot rely on the Item::fixed state in Item_func_group_concat::print,
and we cannot use the 'args' array, as the items there may have been cleaned up.
The fix is to always use the orig_args array of items, as it
should always contain the correct data.
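A hedged sketch of the idea in plain C++; the Expr type and string
arguments are invented stand-ins for Item_func_group_concat and its
item arrays:
--
#include <string>
#include <vector>

struct Expr {
  std::vector<std::string> args;       // may be rewritten/cleaned up later
  std::vector<std::string> orig_args;  // pristine copy kept from parse time

  explicit Expr(const std::vector<std::string> &a) : args(a), orig_args(a) {}

  // What happens after a derived table is filled: items are cleaned up.
  void cleanup() { args.clear(); }

  // print() must read orig_args, never args, so it works after cleanup.
  std::string print() const {
    std::string out= "group_concat(";
    for (size_t i= 0; i < orig_args.size(); i++)
      out+= (i ? "," : "") + orig_args[i];
    return out + ")";
  }
};
--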
mysql-test/r/func_gconcat.result:
test case
mysql-test/t/func_gconcat.test:
test case
sql/item_sum.cc:
Always use the orig_args array of items.
The problem occurs when dividing by a constant value and
the result is out of the supported range.
The fix:
- Return LONGLONG_MIN if the result is out of the supported range for the DIV operator.
- Return 0 if the divisor is -1 for the MOD operator.
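A minimal sketch of the two guards in plain C++ (safe_div/safe_mod are
invented names, and division by zero is assumed to be handled
elsewhere); LLONG_MIN / -1 and LLONG_MIN % -1 are undefined behavior
in C/C++ and trap with SIGFPE on x86:
--
#include <climits>
#include <cstdio>

long long safe_div(long long a, long long b) {
  if (b == -1 && a == LLONG_MIN)
    return LLONG_MIN;   // result out of range: clamp, as this patch does
  return a / b;         // assumes b != 0 (checked elsewhere)
}

long long safe_mod(long long a, long long b) {
  if (b == -1)
    return 0;           // n MOD -1 is always 0; also avoids the trap
  return a % b;         // assumes b != 0 (checked elsewhere)
}

int main() {
  printf("%lld\n", safe_div(LLONG_MIN, -1));  // prints LLONG_MIN
  printf("%lld\n", safe_mod(LLONG_MIN, -1));  // prints 0
  return 0;
}
--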
mysql-test/r/func_math.result:
test case
mysql-test/t/func_math.test:
test case
sql/item_func.cc:
- Return LONGLONG_MIN if the result is out of the supported range for the DIV operator.
- Return 0 if the divisor is -1 for the MOD operator.
The problem was that the warnings raised by a trigger were not cleared upon
successful completion. The warnings should be cleared if the trigger completes
successfully.
The fix is to skip merging warnings into the caller's Warning Info for triggers.
- A prerequisite cleanup patch for making KILL reliable.
The test case main.kill did not work reliably.
The following problems have been identified:
1. A kill signal could be lost if it arrived shortly before a
thread started reading on the client connection.
2. A kill signal could be lost if it arrived shortly before a
thread started waiting on a condition variable.
These problems have been solved as follows. Please also see the
added code comments for more details.
1. There is no safe way to detect when a thread enters the
blocking state of a read(2) or recv(2) system call, where it
can be interrupted by a signal. Hence it is not possible to
wait for the right moment to send a kill signal. It was
decided not to fix this in the code. Instead, the test case
repeats the KILL statement until the connection terminates.
2. Before waiting on a condition variable, we register it
together with a synchronizing mutex in THD::mysys_var. After
this, we need to test THD::killed again. In some places we
tested it only in a loop condition before the registration. When
THD::killed had been set between this test and the registration,
we entered the wait without noticing the killed flag. Additional
checks have been introduced where required.
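A hedged illustration of the point 2 race in plain C++, using
std::condition_variable instead of the server's mysys primitives;
'killed' plays the role of THD::killed and the mutex that of the one
registered via enter_cond():
--
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex mtx;
std::condition_variable cond;
bool killed= false;       // THD::killed stand-in
bool work_ready= false;

void wait_for_work() {
  std::unique_lock<std::mutex> lock(mtx);
  // Buggy pattern: test 'killed' only before taking the lock; a kill
  // arriving between that test and the wait is never noticed.
  // Fixed pattern: re-test 'killed' under the mutex, inside the wait
  // predicate, so a flag set just before we sleep is always seen.
  cond.wait(lock, [] { return work_ready || killed; });
}

void kill_thread() {
  {
    std::lock_guard<std::mutex> lock(mtx);
    killed= true;         // set the flag under the registered mutex...
  }
  cond.notify_all();      // ...then wake any waiter
}

int main() {
  std::thread t(wait_for_work);
  kill_thread();
  t.join();
  return 0;
}
--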
In addition to the above, a rewrite of the main.kill test
case has been done. All sleeps have been replaced by Debug
Sync Facility synchronization. A couple of sync points have
been added to the server code.
To avoid further problems, if the test case fails in spite of
the fixes, the test case has been added to the "experimental"
list for now.
- Most of the work on this patch was authored by Ingo Struewing
mysql-test/t/kill.test:
Rewrote the test case to use Debug Sync points instead of sleeps
sql/event_queue.cc:
Fixed kill detection in Event_queue::cond_wait() by adding a check
after enter_cond().
sql/lock.cc:
Moved Debug Sync points behind enter_cond().
Fixed comments.
sql/slave.cc:
Fixed kill detection in start_slave_thread() by adding a check
after enter_cond().
sql/sql_class.cc:
Swapped order of kill and close in THD::awake().
Added comments.
sql/sql_class.h:
Added a comment to THD::killed.
sql/sql_parse.cc:
Added a sync point in do_command().
sql/sql_select.cc:
Added a sync point in JOIN::optimize().
The str_to_date() function should only try to generate a warning for
invalid input strings, not when the input value is NULL. In the latter
case, val_str() of the input argument returns a nil pointer, and
trying to generate a warning using this pointer led to a
segmentation fault. Solution: only generate a warning when the pointer
to the input string is non-nil.
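A minimal sketch of the guard in plain C++; report_bad_value() and its
warning call are invented stand-ins for the item_timefunc.cc code path:
--
#include <cstdio>

void push_bad_value_warning(const char *s) {  // stand-in warning push
  fprintf(stderr, "Warning: incorrect datetime value: '%s'\n", s);
}

void report_bad_value(const char *val) {
  // val models the result of val_str(): nil when the SQL argument is NULL.
  if (val != nullptr)             // the fix: warn only for real strings
    push_bad_value_warning(val);  // dereferencing a nil val crashed here
}
--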
mysql-test/r/func_time.result:
Added test case for Bug#57512
mysql-test/t/func_time.test:
Added test case for Bug#57512
sql/item_timefunc.cc:
Skip generating a warning when the pointer to the input string is nil,
since this implies that the input argument was NULL.
main.mysqltest was skipped on Windows because a Perl script intentionally does exit(1).
Use exit(2), as exit(1) on Windows is indistinguishable from failing to
execute perl.
data dictionary confusion
On file systems with case-insensitive file names and
lower_case_table_names set to '2', the server could crash
due to a table definition cache inconsistency. This is
the default setting on Mac OS X, but it may also be set and
used on MS Windows.
The bug is caused by using two different strategies for
creating the hash key for the table definition cache, resulting
in failure to look up an entry which is present in the cache,
or failure to delete an existing entry. One strategy was to
use the real table name (with case preserved), and the other
to use a normalized table name (i.e. a lowercase version).
This is manifested in two cases. One is during 'DROP DATABASE',
where all known files are removed. The removal from
the table definition cache is done via a generated list of
TABLE_LIST with keys (wrongly) created using the case preserved
name. The other is during CREATE TABLE, where the cache lookup
is also (wrongly) based on the case preserved name.
The fix was to use only the normalized table name when
creating hash keys.
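A hedged sketch of the single-strategy rule in plain C++; cache_key()
is an invented stand-in for building a key for the server's HASH-based
table definition cache:
--
#include <algorithm>
#include <cctype>
#include <string>

// Build the cache key from the *normalized* (lowercased) table name
// only, never from the case-preserved name, so that insert, lookup,
// and delete always agree on the key.
std::string cache_key(const std::string &db, std::string table) {
  std::transform(table.begin(), table.end(), table.begin(),
                 [](unsigned char c) { return std::tolower(c); });
  return db + '\0' + table;   // db is assumed to be normalized already
}
--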
sql/sql_db.cc:
Normalize the table name (i.e. lowercase it)
sql/sql_table.cc:
table_name contains the normalized name
alias contains the real table name
After the fix for
Bug#55077 "Assertion failed: width > 0 && to != ((void *)0), file .\dtoa.c"
we no longer try to allocate a string of length 'field_length',
so the asserts are relevant only for ZEROFILL columns.
mysql-test/r/select.result:
Add test case for Bug#57203
mysql-test/t/select.test:
Add test case for Bug#57203
sql/field.cc:
Rewrite the DBUG_ASSERTS on field_length.
The create_sort_index() function overwrites the original JOIN_TAB::type field.
On re-execution of the subquery, the overwritten JOIN_TAB::type (JT_ALL) is
used instead of JT_FT. This misleads test_if_skip_sort_order(), and
the function tries to find a suitable key for the ordering, which should
not be allowed for a FULLTEXT (JT_FT) table.
The fix is to restore the JOIN_TAB structures for the subselect on
re-execution for EXPLAIN.
Additional fix:
update the TABLE::maybe_null field, which
affects list_contains_unique_index() behaviour, as it
could hold the value maybe_null==TRUE based on the
assumption that this join is outer
(see the setup_table_map() function).
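A hedged sketch of the save/restore idea in plain C++; the Plan/Tab
types are invented, while the real code snapshots JOIN_TAB structures
via JOIN::save_join_tab:
--
#include <vector>

enum AccessType { JT_FT, JT_ALL };
struct Tab { AccessType type; };

struct Plan {
  std::vector<Tab> tabs;
  std::vector<Tab> saved_tabs;          // snapshot for re-execution

  void save()    { saved_tabs= tabs; }  // before destructive optimization
  void restore() { tabs= saved_tabs; }  // before re-executing (EXPLAIN)
};

int main() {
  Plan p;
  p.tabs.push_back(Tab{JT_FT});
  p.save();
  p.tabs[0].type= JT_ALL;  // create_sort_index() overwrites the access type
  p.restore();             // re-execution sees the original JT_FT again
  return p.tabs[0].type == JT_FT ? 0 : 1;
}
--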
mysql-test/r/explain.result:
test case
mysql-test/t/explain.test:
test case
sql/item_subselect.cc:
Make the subquery uncacheable in case of EXPLAIN. This allows keeping the
original JOIN_TAB::type (see JOIN::save_join_tab) and restoring it
on re-execution.
sql/sql_select.cc:
- Restore the JOIN_TAB structures for the subselect on re-execution for EXPLAIN.
- Update the TABLE::maybe_null field, as it could hold
the value maybe_null==TRUE based on the assumption
that this join is outer (see the setup_table_map() function).
This change is not related to the crash problem but
affects EXPLAIN results in the test case.
The subquery executes twice, at the top-level JOIN::optimize and ::execute stages.
At the first execution the create_sort_index() function is called and an
FT_SELECT object is created and destroyed. HANDLER::ft_handler is cleaned up
in the object destructor, so at the second execution the FT_SELECT::get_next()
method returns an error.
The fix is to reinitialize the HANDLER::ft_handler field before re-execution
of the subquery.
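A hedged sketch of the lifecycle problem in plain C++, with invented
FtHandle/Scanner names; the real fix re-creates the handler state that
the FT_SELECT destructor tore down:
--
#include <memory>

struct FtHandle { /* stands in for the fulltext search state */ };

struct Scanner {                 // stands in for the executing handler
  std::unique_ptr<FtHandle> ft;  // plays the role of ft_handler

  // First execution: a helper object's destructor releases the handle.
  void teardown_after_first_execution() { ft.reset(); }

  // The fix: reinitialize before re-executing instead of reusing the
  // torn-down state, which made the next fetch return an error.
  void init_for_reexecution() {
    if (!ft)
      ft= std::make_unique<FtHandle>();
  }
};
--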
mysql-test/r/fulltext.result:
test case
mysql-test/t/fulltext.test:
test case
sql/item_func.cc:
reinit ft_handler before re-execution of subquery
sql/item_func.h:
Fixed method name
sql/sql_select.cc:
reinit ft_handler before re-execution of subquery
sql_show.cc during rqg_info_schema test on Windows".
Ensure we do not access freed memory when filling
information_schema.views and one of the views
could not be properly opened.
mysql-test/r/information_schema.result:
Update results - a fix for Bug#56540.
mysql-test/t/information_schema.test:
Add a test case for Bug#56540
sql/sql_base.cc:
Push an error into the Diagnostics area
when we return an error.
This directs get_all_tables() to the execution
branch which doesn't involve 'process_table()'
when no table/view was opened.
sql/sql_show.cc:
Do not try to access underlying table fields
when opening of a view failed. The underlying
table is closed in that case, and accessing
its fields may lead to dereferencing a damaged
pointer.
thd->in_sub_stmt || (thd->state..
OPTIMIZE TABLE is not directly supported by InnoDB. Instead,
the table is recreated and then analyzed. After the recreate,
the table is closed and locks are released before the table
is reopened and locks are re-acquired for the analyze phase.
This assertion was triggered if OPTIMIZE TABLE failed to
acquire thr_lock locks before starting the analyze phase.
The assertion tests (among other things) that there is no
active statement transaction. However, as part of acquiring
the thr_lock lock, external_lock() is called for InnoDB
tables, and this causes a statement transaction to be started.
If thr_multi_lock() later fails (e.g. due to timeout),
the failure handling code causes this assert to be triggered.
This patch fixes the problem by rolling back the
current statement transaction in case open_ltable() (used by
OPTIMIZE TABLE) fails to acquire the thr_lock locks.
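A hedged sketch of the recovery path in plain C++; the Txn type and
function names are invented, while the real fix rolls back the
statement transaction when open_ltable() cannot get the thr_lock locks:
--
struct Txn {
  bool stmt_active= false;
  void rollback_stmt() { stmt_active= false; }
};

bool lock_tables(Txn &txn, bool locks_granted) {
  txn.stmt_active= true;   // external_lock() starts a statement transaction
  if (!locks_granted) {    // e.g. thr_multi_lock() timed out
    // The fix: roll back the statement transaction on failure so the
    // later "no active statement transaction" assertion cannot fire.
    txn.rollback_stmt();
    return false;
  }
  return true;
}
--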
Test case added to lock_sync.test.