Improved error handling such that queries against Information_Schema.Tables won't
fail if a Federated table is unable to connect to the remote host.
sql/sql_show.cc:
If Handler::Info() fails, save the error text in the TABLE_COMMENT column and clear the error.
The problem is that the logic which checks if a pointer is
valid relies on a poor heuristic based on the start and end
addresses of the data segment and heap.
Apart from miscalculating the heap bounds, this approach also
suffers from the fact that memory can come from places other
than the heap. See Bug#58528 for a more detailed explanation.
On Linux, the solution is to access the process's memory
through /proc/self/task/<tid>/mem, which allows for retrieving
the contents of pages within the virtual address space of
the calling process. If an address range is not mapped, an
input/output error is returned.
client/mysqltest.cc:
Use new interface to my_safe_print_str.
include/my_stacktrace.h:
Drop name from my_safe_print_str.
mysys/stacktrace.c:
Access the process's memory through a file descriptor and
dump the contents of the memory range. The file descriptor
offset is equivalent to an offset into the address space.
Do not print the name of the variable associated with the
address. It can be better accomplished at a higher level.
sql/mysqld.cc:
Put the variable dumping information within its own newline block.
Use symbolic names which better convey information to the user.
the DROP statement ..."
Problem: When using temporary tables and closing a session, an
implicit DROP TEMPORARY TABLE IF EXISTS is written to the binary
log (while cleaning up the context of the session THD - see
sql_class.cc:THD::cleanup, which calls close_temporary_tables).
close_temporary_tables first checks whether the binary log is open
and then proceeds to create the DROP statements. Such statements
are then written to the binary log through
MYSQL_BIN_LOG::write(Log_event *). Inside, there is another check
of whether the binary log is open, and if not an error is
returned. This is where the faulty behavior is triggered. Given
that the test case replays a binary log with temporary table
statements and issues RESET MASTER right after, there is a chance
that is_open will report false (at the point when the mysql
session is closed and the temporary tables are written).
is_open may return false because MYSQL_BIN_LOG::reset_logs does
not set the correct flag (LOG_CLOSE_TO_BE_OPENED) on
MYSQL_BIN_LOG::log_state (instead it sets just the
LOG_CLOSE_INDEX flag, leaving log_state at LOG_CLOSED). Hence,
when writing the DROP statement as part of THD::cleanup, the
thread could get a return value of false from is_open inside
MYSQL_BIN_LOG::write, ultimately reporting that it can't write
the event to the binary log.
Fix: We fix this by adding the correct flag, which was missing in
the second close.
metadata"
Improved error handling such that queries against Information_Schema.Tables won't
fail if a federated table can't make a remote connection.
mysql-test/r/merge.result:
Updated with warnings that were previously masked.
mysql-test/r/show_check.result:
Updated with warnings that were previously masked.
mysql-test/r/view.result:
Updated with warnings that were previously masked.
sql/sql_show.cc:
If get_schema_tables_record() encounters an error, push a warning,
set the TABLE COMMENT column with the error text, and clear the
error so that the operation can continue.
When using the BINLOG statement to execute rows log events, the session
variables foreign_key_checks and unique_checks are changed temporarily:
each rows log event carries its own session environment, whose
foreign_key_checks and unique_checks can differ from those of the session
executing the BINLOG statement. But these variables were not restored
correctly after the BINLOG statement, which could cause the following
statements to fail or generate unexpected data.
In this patch, code is added to back up and restore these two variables,
so the BINLOG statement no longer affects the current session's variables.
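A minimal sketch of the symptom being fixed (the event payload is elided
and purely illustrative; the variable values are assumptions):
  SET @@session.unique_checks = 1;
  BINLOG '<base64 row events>';      -- payload elided; its environment sets unique_checks = 0
  SELECT @@session.unique_checks;    -- should still return 1 once the variables are restored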
mysql-test/extra/binlog_tests/binlog.test:
Add test to verify this patch.
mysql-test/suite/binlog/r/binlog_row_binlog.result:
Add test to verify this patch.
mysql-test/suite/binlog/r/binlog_stm_binlog.result:
Add test to verify this patch.
sql/sql_binlog.cc:
Add code to backup and restore thd->options.
Problem: MySQL cp1251 did not support 'U+20AC EURO SIGN',
which was assigned to 0x88 a few years ago.
Fix: adding mapping: 0x88 <-> U+20AC
@ mysql-test/include/ctype_8bit.inc
New shared file to test 8bit character sets.
@ mysql-test/r/ctype_cp1251.result
@ mysql-test/t/ctype_cp1251.test
Adding tests
@ sql/share/charsets/cp1251.xml
Adding mapping
@ strings/ctype-extra.c
Regenerating ctype-extra.c using strings/conf_to_src
according to new cp1251.xml
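An illustrative check of the new mapping (a sketch, not the shared
ctype_8bit.inc test; results assume the fix and a build with the ucs2 charset):
  SELECT HEX(CONVERT(_utf8 X'E282AC' USING cp1251));   -- expected: 88
  SELECT HEX(CONVERT(_cp1251 X'88' USING ucs2));       -- expected: 20AC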
43233/55794.
mysql-test/r/change_user.result:
Don't use -1 integer wrap around. It used to work, but now we do what's
actually in the documentation. In tests, we now use DEFAULT or the
numeral equivalent (as we do in the 5.6 tests).
mysql-test/r/key_cache.result:
Trying to drop the default key cache is now an error, not a warning, for
compatibility with 5.6.
mysql-test/r/variables.result:
Trying to drop the default key cache is now an error, not a warning, for
compatibility with 5.6.
mysql-test/t/change_user.test:
Don't use -1 integer wrap around. It used to work, but now we do what's
actually in the documentation. In tests, we now use DEFAULT or the
numeral equivalent (as we do in the 5.6 tests).
mysql-test/t/key_cache.test:
Trying to drop the default key cache is now an error, not a warning, for
compatibility with 5.6.
mysql-test/t/variables.test:
Trying to drop the default key cache is now an error, not a warning, for
compatibility with 5.6.
sql/mysqld.cc:
0 is a legal (albeit magic) value: "drop key cache."
sql/set_var.cc:
bound_unsigned() can go now; it was just a kludge until things were done
The Right Way, which they are now.
Trying to drop the default key cache is now an error, not a warning, for
compatibility with 5.6.
tests/mysql_client_test.c:
Don't use -1 integer wrap around. It used to work, but now we do what's
actually in the documentation. In tests, we now use DEFAULT or the
numeral equivalent (as we do in the 5.6 tests).
> revision-id: gshchepa@mysql.com-20100801181236-uyuq6ewaq43rw780
> parent: alexey.kopytov@sun.com-20100723115254-jjwmhq97b9wl932l
> committer: Gleb Shchepa <gshchepa@mysql.com>
> branch nick: mysql-5.1-security
> timestamp: Sun 2010-08-01 22:12:36 +0400
> Bug #54461: crash with longblob and union or update with subquery
>
> Queries may crash, if
> 1) the GREATEST or the LEAST function has a mixed list of
> numeric and LONGBLOB arguments and
> 2) the result of such a function goes through an intermediate
> temporary table.
>
> An Item that references a LONGBLOB field has max_length of
> UINT_MAX32 == (2^32 - 1).
>
> The current implementation of GREATEST/LEAST returns a REAL
> result for a mixed list of numeric and string arguments (this
> contradicts the current documentation; the contradiction was
> discussed and it was decided to update the documentation).
>
> The max_length of such a function call was calculated as a
> maximum of argument max_length values (i.e. UINT_MAX32).
>
> That max_length value of UINT_MAX32 was used as a length for
> the intermediate temporary table Field_double to hold
> GREATEST/LEAST function result.
>
> The Field_double::val_str() method call on that field
> allocates a String value.
>
> Since an allocation of String reserves an additional byte
> for a zero-termination, the size of String buffer was
> set to (UINT_MAX32 + 1), that caused an integer overflow:
> actually, an empty buffer of size 0 was allocated.
>
> An initialization of the "first" byte of that zero-size
> buffer with '\0' caused a crash.
>
> The Item_func_min_max::fix_length_and_dec() has been
> modified to calculate max_length for the REAL result like
> we do it for arithmetical operators.
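An illustrative query of the crashing shape described above (a hypothetical
sketch, not the func_misc.test case): GREATEST() mixes a numeric and a
LONGBLOB argument, and UNION forces the result through a temporary table:
  CREATE TABLE t1 (b LONGBLOB);
  INSERT INTO t1 VALUES ('1');
  SELECT GREATEST(1, b) FROM t1 UNION SELECT 1.0;
  DROP TABLE t1;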
mysql-test/r/func_misc.result:
Test case for bug #54461.
mysql-test/t/func_misc.test:
Test case for bug #54461.
sql/item_func.cc:
Bug #54461: crash with longblob and union or update with subquery
The Item_func_min_max::fix_length_and_dec() has been
modified to calculate max_length for the REAL result like
we do it for arithmetical operators.
In case of a low-memory sort buffer, QUICK_INDEX_MERGE_SELECT creates a
temporary file where it stores the row ids which meet the QUICK_SELECT
ranges, except for the clustered pk range; the clustered range is
processed separately.
In init_read_record we check if a temporary file is used and choose the
appropriate record access method. This does not take into account that the
temporary file contains only a partial result in the case of
QUICK_INDEX_MERGE_SELECT with a clustered pk range.
The fix is to always use rr_quick if QUICK_INDEX_MERGE_SELECT with a
clustered pk range is used.
mysql-test/suite/innodb/r/innodb_mysql.result:
test case
mysql-test/suite/innodb/t/innodb_mysql.test:
test case
mysql-test/suite/innodb_plugin/r/innodb_mysql.result:
test case
mysql-test/suite/innodb_plugin/t/innodb_mysql.test:
test case
sql/opt_range.h:
added new method
sql/records.cc:
The fix is to always use rr_quick if QUICK_INDEX_MERGE_SELECT with a
clustered pk range is used.
> revision-id: alexey.kopytov@sun.com-20100824103548-ikm79qlfrvggyj9h
> parent: sunny.bains@oracle.com-20100816001222-xqc447tr6jwh8c53
> committer: Alexey Kopytov <Alexey.Kopytov@Sun.com>
> branch nick: 5.1-security
> timestamp: Tue 2010-08-24 14:35:48 +0400
> message:
> Bug #55568: user variable assignments crash server when used
> within query
>
> The server could crash after materializing a derived table
> which requires a temporary table for grouping.
>
> When destroying the temporary table used to execute a query for
> a derived table, JOIN::destroy() did not clean up Item_fields
> pointing to fields in the temporary table. This led to
> dereferencing a dangling pointer when printing out the items
> tree later in the outer SELECT.
>
> The solution is an addendum to the patch for bug37362: in
> addition to cleaning up items in tmp_all_fields3, do the same
> for items in tmp_all_fields1, since now we have an example
> where this is necessary.
sql/field.cc:
Make sure field->table_name is not set to NULL in
Field::make_field() to avoid assertion failure in
Item_field::make_field() after cleaning up items
(the assertion fired in udf.test when running
the test suite with the patch applied).
sql/sql_select.cc:
In addition to cleaning up items in tmp_all_fields3, do the
same for items in tmp_all_fields1.
Introduce a new helper function to avoid code duplication.
sql/sql_select.h:
Introduce a new helper function to avoid code duplication in
JOIN::destroy().
and related small fixes.
mysql-test/t/user_var.test:
test for bug
sql/field_conv.cc:
Per the C standard, memcpy() has undefined behaviour if the source and destination overlap, as happens here when to->ptr == from->ptr.
sql/item_func.cc:
In the case of BUG#56138, entry->value == ptr, in which case memcpy()
has undefined behaviour per the C standard.
sql/sql_select.cc:
Work around a bug in old gcc
Problem: crash in the Item_float constructor on a DBUG_ASSERT due
to a string parameter that is not null-terminated.
Fix: make Item_float::Item_float safe for non-null-terminated string parameters:
- use a temporary buffer when generating the error message
modified:
@ mysql-test/r/xml.result
@ mysql-test/t/xml.test
@ sql/item.cc
The ESCAPE argument might be an empty string, which leads
to a server crash under some circumstances.
The fix:
- add a check that the ESCAPE argument result is not an empty string
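An illustrative statement with an empty ESCAPE argument (a hypothetical
sketch; the exact crashing circumstances depend on the character set and
argument values, and after the fix the statement simply no longer crashes):
  SELECT '' LIKE '1' ESCAPE '';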
mysql-test/r/ctype_latin1.result:
test case
mysql-test/t/ctype_latin1.test:
test case
sql/item_cmpfunc.cc:
- add a check that the ESCAPE argument result is not an empty string
Problem: When GET_FORMAT() is called two times from the upper
level function (e.g. LEAST in the bug report), on the second
call "res= args[0]->val_str(...)" and str point to the same
String object.
1. Fix: changing the order from
- get val_str into tmp_value then convert to str
to
- get val_str into str then convert to tmp_value
The new order is more correct: the purpose of "str" parameter
is exactly to call val_str() for arguments.
The purpose of String class members (like tmp_value) is to do further
actions on the result.
Doing it the other way around gives unexpected surprises.
2. Using str_value instead of str to do padding, for the same reason.
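An illustrative query of the shape described, with GET_FORMAT called twice
from an upper-level function (the arguments are hypothetical):
  SELECT LEAST(GET_FORMAT(DATE, 'USA'), GET_FORMAT(DATE, 'JIS'));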
Bug#55794: ulonglong options of mysqld show wrong values.
Port the few remaining system variables to the correct mechanism --
range-check in check-stage (and throw error or warning at that point
as needed and depending on STRICTness), update in update stage.
Fix some signedness errors when retrieving sysvar values for display.
mysql-test/r/variables.result:
Show that we throw warnings or errors depending on strictness
even for "special" variables now.
mysql-test/t/variables.test:
Show that we throw warnings or errors depending on strictness
even for "special" variables now.
sql/item_func.cc:
show sys_var_ulonglong_ptr and SHOW_LONGLONG type variables as unsigned.
sql/set_var.cc:
move range-checking from update stage to check stage for the remaining
few sys-vars that broke the pattern
sql/set_var.h:
add check functions.
Bug#57820 extractvalue crashes
Problem: ExtractValue and Replace crashed in some cases
due to invalid handling of empty and NULL arguments.
Per file comments:
@mysql-test/r/ctype_ujis.result
@mysql-test/r/xml.result
@mysql-test/t/ctype_ujis.test
@mysql-test/t/xml.test
Adding tests
@sql/item_strfunc.cc
Make sure Item_func_replace::val_str safely handles empty strings.
@sql/item_xmlfunc.cc
set null_value if nodeset_func returned NULL,
which is possible when the second argument is an
unset user variable.
There were some misunderstandings about parameters pertaining to buffer-size.
The patch fixes the reported off-by-one and
clarifies the documentation.
mysql-test/r/type_newdecimal.result:
add test
mysql-test/t/type_newdecimal.test:
add test
sql/field.cc:
adjust buffer size by one to account for terminator.
sql/my_decimal.cc:
adjust buffer size by one to account for terminator.
clarify needs in comments.
sql/my_decimal.h:
clarify buffer-size needs to prevent future off-by-one bugs.
strings/decimal.c:
clarify buffer-size needs and parameters to prevent future off-by-one bugs
MySQL 5.1 server
The server used to clip overly long user-names. This was presumably lost
when the code was made UTF8-clean.
Now we emulate the old behaviour for backward compatibility, but in a
UTF8-correct way.
mysql-test/r/connect.result:
Show that user-names that are too long get clipped now.
mysql-test/t/connect.test:
Show that user-names that are too long get clipped now.
sql/sql_connect.cc:
Clip user-name to 16 characters (not bytes).
strings/CHARSET_INFO.txt:
Clarify in docs.
Bug#57995: Compiler flag change build error on OSX 10.4: my_getncpus.c
Bug#57996: Compiler flag change build error on OSX 10.5 : bind.c
Bug#57994: Compiler flag change build error : my_redel.c
Bug#57993: Compiler flag change build error on FreeBsd 7.0 : regexec.c
Bug#57992: Compiler flag change build error on FreeBsd : mf_keycache.c
Bug#57997: Compiler flag change build error on OSX 10.6: debug_sync.cc
Fix assorted compiler generated warnings.
cmd-line-utils/readline/bind.c:
Bug#57996: Compiler flag change build error on OSX 10.5 : bind.c
Initialize variable to work around a false positive warning.
include/m_string.h:
Bug#57994: Compiler flag change build error : my_redel.c
The expansion of stpcpy (in glibc) causes warnings if the
return value of strmov is not being used. Since stpcpy is
a GNU extension and the expansion ends up using a built-in
provided by GCC, use the compiler provided built-in directly
when possible.
include/my_compiler.h:
Define a dummy MY_GNUC_PREREQ when not compiling with GCC.
libmysql/libmysql.c:
Bug#58057: 5.1 libmysql/libmysql.c unused variable/compile failure
Variable might not be used in some cases. So, tag it as unused.
mysys/mf_keycache.c:
Bug#57992: Compiler flag change build error on FreeBsd : mf_keycache.c
Use UNINIT_VAR to work around a false positive warning.
mysys/my_getncpus.c:
Bug#57995: Compiler flag change build error on OSX 10.4: my_getncpus.c
Declare variable in the same block where it is used.
regex/regexec.c:
Bug#57993: Compiler flag change build error on FreeBsd 7.0 : regexec.c
Work around a compiler bug which causes the cast to not be enforced.
sql/debug_sync.cc:
Bug#57997: Compiler flag change build error on OSX 10.6: debug_sync.cc
Use UNINIT_VAR to work around a false positive warning.
sql/handler.cc:
Use UNINIT_VAR to work around a false positive warning.
sql/slave.cc:
Use UNINIT_VAR to work around a false positive warning.
sql/sql_partition.cc:
Use UNINIT_VAR to work around a false positive warning.
storage/myisam/ft_nlq_search.c:
Use UNINIT_VAR to work around a false positive warning.
storage/myisam/mi_create.c:
Use UNINIT_VAR to work around a false positive warning.
storage/myisammrg/myrg_open.c:
Use UNINIT_VAR to work around a false positive warning.
tests/mysql_client_test.c:
Change function to take a pointer to const, no need for a cast.
with on duplicate key update
There was a missed corner case in the partitioning
handler which caused next_insert_id to be changed
in the second-level handlers (i.e. the handler of a partition),
which caused this debug assertion.
The solution is to ensure that only the partitioning
level generates auto_increment values, since if it were done
within a partition, it might fail to match the partition
function.
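A hypothetical sketch of the triggering ingredients (a partitioned table
with an AUTO_INCREMENT key, and an INSERT ... ON DUPLICATE KEY UPDATE that
sets the column to 0 and moves the row to another partition); the table and
values are illustrative, not the partition_auto_increment.inc tests:
  CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY)
    PARTITION BY HASH (a) PARTITIONS 2;
  INSERT INTO t1 VALUES (1);
  INSERT INTO t1 VALUES (1) ON DUPLICATE KEY UPDATE a = 0;
  DROP TABLE t1;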
mysql-test/suite/parts/inc/partition_auto_increment.inc:
Added tests
mysql-test/suite/parts/r/partition_auto_increment_blackhole.result:
updated results
mysql-test/suite/parts/r/partition_auto_increment_innodb.result:
updated results
mysql-test/suite/parts/r/partition_auto_increment_memory.result:
updated results
mysql-test/suite/parts/r/partition_auto_increment_myisam.result:
updated results
sql/ha_partition.cc:
In <engine>::write_row the auto_inc value is generated
through handler::update_auto_increment (which calls <engine>::get_auto_increment() if needed).
If:
* INSERT_ID was set to 0
* it was updated to 0 by 'INSERT ... ON DUPLICATE KEY UPDATE' and the row changed partitions
then it would try to generate an auto_increment value in
<engine for a specific partition>::write_row, which triggers the assert.
The solution is to prevent this by:
in ha_partition::write_row, setting auto_inc_field_not_null and
adding MODE_NO_AUTO_VALUE_ON_ZERO;
in ha_partition::update_row (when changing partition), temporarily
setting table->next_number_field to NULL before calling the
partition's ::write_row().
in different default schema.
In strict mode, when data truncation or conversion happens,
THD::killed is set to THD::KILL_BAD_DATA.
This is an abuse of the KILL mechanism to guarantee that execution
of the statement is aborted.
Stored procedure execution, on the other hand,
upon detecting that the connection was killed, would
terminate immediately without trying to restore the caller's
context, in particular without restoring the caller's current schema.
The fix is, when terminating a stored procedure execution,
to only bypass cleanup if the entire connection was killed,
not in case of other forms of KILL.
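A hypothetical sketch of the problem shape (not the sp-bugs.test case; names
and values are assumptions). In strict mode the truncation error aborts the
procedure, and before the fix the caller could be left in the procedure's
schema:
  SET sql_mode = 'STRICT_ALL_TABLES';
  CREATE DATABASE db1;
  CREATE TABLE db1.t1 (c VARCHAR(1));
  CREATE PROCEDURE db1.p1() INSERT INTO t1 VALUES ('too long');
  USE test;
  CALL db1.p1();       -- fails with a data-truncation error
  SELECT DATABASE();   -- should still report 'test' after the fix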
mysql-test/r/sp-bugs.result:
Added result for a test case for bug#54375.
mysql-test/t/sp-bugs.test:
Added test case for bug#54375.
sql/sp_head.cc:
sp_head::execute modified: restore saved current db if
connection is not killed.
ALTER TABLE RENAME, DISABLE KEYS.
The code of ALTER TABLE RENAME, DISABLE KEYS could
issue a commit while holding the LOCK_open mutex.
This is a regression introduced by the fix for
Bug 54453.
This failed an assert guarding us against a potential
deadlock with connections trying to execute
FLUSH TABLES WITH READ LOCK.
The fix is to move acquisition of LOCK_open outside
the section that issues ha_autocommit_or_rollback().
LOCK_open is taken to protect against concurrent
operations with .frms and the table definition
cache, and doesn't need to cover the call to commit.
A test case added to innodb_mysql.test.
The patch is to be null-merged to 5.5, which
already has 54453 null-merged to it.
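For reference, an illustrative statement of the kind affected (a minimal
sketch, not the added innodb_mysql.test case):
  ALTER TABLE t1 RENAME TO t2, DISABLE KEYS;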
mysql-test/suite/innodb/r/innodb_mysql.result:
Added test results for test for bug#56619.
mysql-test/suite/innodb/t/innodb_mysql.test:
Added test for bug#56619.
sql/sql_table.cc:
mysql_alter_table() modified: moved acquisition of LOCK_open
after call to ha_autocommit_or_rollback.
Quoting from the bug report:
The pstack library has been included in MySQL since version
4.0.0. It's useless and should be removed.
Details: According to its own documentation, pstack only works
on Linux on x86 in 32 bit mode and requires LinuxThreads and a
statically linked binary. It doesn't really support any Linux
from 2003 or later and doesn't work on any other OS.
The --enable-pstack option is thus deprecated and has no effect.
Problem: a flaw (dereferencing a NULL pointer) in the LIKE optimization
code may lead to a server crash in some rare cases.
Fix: check the pointer before dereferencing it.
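A hypothetical sketch of the kind of query involved (a join where one table
has a unique SET column and LIKE is applied to it; not the func_like.test
case, and all names are assumptions):
  CREATE TABLE t1 (s SET('a','b') NOT NULL, UNIQUE KEY (s));
  CREATE TABLE t2 (s SET('a','b') NOT NULL);
  INSERT INTO t1 VALUES ('a'), ('b');
  INSERT INTO t2 VALUES ('a');
  SELECT * FROM t1, t2 WHERE t1.s = t2.s AND t1.s LIKE 'a%';
  DROP TABLE t1, t2;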
mysql-test/r/func_like.result:
Fix for bug #54575: crash when joining tables with unique set column
- test result.
mysql-test/t/func_like.test:
Fix for bug #54575: crash when joining tables with unique set column
- test case.
sql/item_cmpfunc.cc:
Fix for bug #54575: crash when joining tables with unique set column
- check the res2 buffer pointer before dereferencing it,
as it may be NULL in some cases.
sporadically.
The cause of the sporadic time out was a leaked protection
against the global read lock, taken by the RENAME statement
and not released in case an error occurred during RENAME.
The leaked protection counter would lead to the value of
protect_against_global_read never dropping to 0.
Consequently FLUSH TABLES in all connections, including the
one that leaked the protection, could not proceed.
The fix is to ensure that all branches in the RENAME code properly
release the GRL protection.
mysql-test/r/log_tables.result:
Added results for test for bug#47924.
mysql-test/t/log_tables.test:
Added test for bug#47924.
sql/sql_rename.cc:
mysql_rename_tables() modified: replaced the return from the function
with a goto to the cleanup code block in case of error.
by a function and column
The bug report reveals two different bugs about grouping
on a function:
1) grouping by the TIME_TO_SEC function result caused
a server crash or wrong results, and
2) grouping by a function returning a blob caused
an unexpected "Duplicate entry" error and a wrong
result.
Details for the 1st bug:
TIME_TO_SEC() returns NULL if its argument is invalid (an empty
string for example). Thus its nullability depends not only
on the nullability of its arguments but also on their values.
Fixed by (overoptimistically) setting TIME_TO_SEC() to be
nullable regardless of the nullability of its arguments.
Details for the 2nd bug:
The server is unable to create indices on blobs without
an explicit blob key part length. However, this fact was
ignored for blob function result fields of GROUP BY
intermediate tables.
Fixed by disabling GROUP BY index creation for blob
function result fields, just as for regular blob fields.
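Two illustrative queries of the shapes described (hypothetical sketches,
not the added test cases; table and column names are assumptions):
  CREATE TABLE t1 (c VARCHAR(10), b BLOB, d INT);
  INSERT INTO t1 VALUES ('', 'x', 1), ('10:00:00', 'y', 2);
  -- 1) grouping by a function and column, where TIME_TO_SEC() is NULL for the invalid value
  SELECT COUNT(*) FROM t1 GROUP BY d, TIME_TO_SEC(c);
  -- 2) grouping by a function whose result type is a blob
  SELECT COUNT(*) FROM t1 GROUP BY d, CONCAT(b, c);
  DROP TABLE t1;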
mysql-test/r/func_time.result:
Test case for bug #52160.
mysql-test/r/type_blob.result:
Test case for bug #52160.
mysql-test/t/func_time.test:
Test case for bug #52160.
mysql-test/t/type_blob.test:
Test case for bug #52160.
sql/item_timefunc.h:
Bug #52160: crash and inconsistent results when grouping
by a function and column
TIME_TO_SEC() returns NULL if its argument is invalid (an empty
string for example). Thus its nullability depends not only
on the nullability of its arguments but also on their values.
Fixed by (overoptimistically) setting TIME_TO_SEC() to be
nullable regardless of the nullability of its arguments.
sql/sql_select.cc:
Bug #52160: crash and inconsistent results when grouping
by a function and column
The server is unable to create indices on blobs without
an explicit blob key part length. However, this fact was
ignored for blob function result fields of GROUP BY
intermediate tables.
Fixed by disabling GROUP BY index creation for blob
function result fields, just as for regular blob fields.
Lines below which were added in the patch for Bug#56814 cause this crash:
+ if (table->table)
+ table->table->maybe_null= FALSE;
Consider following test case:
--
CREATE TABLE t1(f1 INT NOT NULL);
INSERT INTO t1 VALUES (16777214),(0);
SELECT COUNT(*) FROM t1 LEFT JOIN t1 t2
ON 1 WHERE t2.f1 > 1 GROUP BY t2.f1;
DROP TABLE t1;
--
We set TABLE::maybe_null to FALSE for the t2 table,
and in create_tmp_field() we create the tmp table field
using the create_tmp_field_from_item() function instead of
create_tmp_field_from_field(). As a result we get a
LONGLONG field. As we have a GROUP BY clause, we calculate
the group buffer length, see calc_group_buffer().
The item from the group list which is used for the calculation
refers to the field from the real table and has LONG type.
So the group buffer length becomes insufficient for storing the
LONGLONG value. This leads to overwriting the wrong memory
area in the do_field_int() function, which is called from
end_update().
After some investigation I found out that
create_tmp_field_from_item() is used only for OLAP
grouping and cannot be used for common grouping,
as there could be an incompatibility between the tmp
table fields and the group buffer length.
We cannot remove the create_tmp_field_from_item() call from
create_tmp_field() as OLAP needs it, and we cannot use this
function for common grouping. So we should remove the setting of
TABLE::maybe_null to FALSE from simplify_joins().
In this case we would get the wrong behaviour of
list_contains_unique_index() back. To fix it we
use the Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join.
mysql-test/r/group_by.result:
test case
mysql-test/r/join_outer.result:
test case
mysql-test/t/group_by.test:
test case
mysql-test/t/join_outer.test:
test case
sql/sql_select.cc:
--remove the wrong code
--use the Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join
The problem is caused by the fix for bug49487 and became visible
after the fix for bug56679.
Items are cleaned up and set to the unfixed state after filling a derived table,
so we cannot rely on the item::fixed state in Item_func_group_concat::print
and we cannot use the 'args' array, as its items may have been cleaned up.
The fix is to always use the orig_args array of items, as it
should always contain the correct data.
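An illustrative shape of query where GROUP_CONCAT sits inside a derived
table and its text is printed afterwards (a hypothetical sketch, not the
func_gconcat.test case):
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2);
  EXPLAIN EXTENDED
    SELECT * FROM (SELECT GROUP_CONCAT(a ORDER BY a) FROM t1) AS d;
  DROP TABLE t1;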
mysql-test/r/func_gconcat.result:
test case
mysql-test/t/func_gconcat.test:
test case
sql/item_sum.cc:
The fix is to always use the orig_args array of items.
The problem is dividing by a constant value when
the result is out of the supported range.
The fix:
-return LONGLONG_MIN if the result is out of the supported range for the DIV operator.
-return 0 if the divisor is -1 for the MOD operator.
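Illustrative statements exercising the two cases (a hypothetical sketch;
the table name and values are assumptions, not the func_math.test case):
  CREATE TABLE t1 (a BIGINT);
  INSERT INTO t1 VALUES (-9223372036854775808);
  SELECT a DIV -1 FROM t1;   -- result falls outside the supported BIGINT range
  SELECT a MOD -1 FROM t1;   -- divisor is -1
  DROP TABLE t1;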
mysql-test/r/func_math.result:
test case
mysql-test/t/func_math.test:
test case
sql/item_func.cc:
-return LONGLONG_MIN if the result is out of the supported range for the DIV operator.
-return 0 if the divisor is -1 for the MOD operator.
"Grantor" columns' data is lost when replicating mysql.tables_priv.
Slave SQL thread used its default user ''@'' as the grantor of GRANT|REVOKE
statements executing on it.
In this patch, current user is put in query log event for all GRANT and REVOKE
statement, SQL thread uses the user in query log event as grantor.
mysql-test/suite/rpl/r/rpl_do_grant.result:
Add test for this bug.
mysql-test/suite/rpl/t/rpl_do_grant.test:
Add test for this bug.
sql/log_event.cc:
Refactoring THD::current_user_used and related functions.
current_user_used is used to decide whether the current user should be
binlogged in the query log event, so it is better to call it m_binlog_invoker.
The related functions are renamed too.
sql/sql_class.cc:
Refactoring THD::current_user_used and related functions.
current_user_used is used to decide whether the current user should be
binlogged in the query log event, so it is better to call it m_binlog_invoker.
The related functions are renamed too.
sql/sql_class.h:
Refactoring THD::current_user_used and related functions.
current_user_used is used to decide whether the current user should be
binlogged in the query log event, so it is better to call it m_binlog_invoker.
The related functions are renamed too.
sql/sql_parse.cc:
Call binlog_invoker() for GRANT and REVOKE statements.
Rows events were wrongly applied to a temporary table of the same name,
even though rows events are generated only for base tables: a temporary
table's data is never binlogged in row mode. Normally, a base table cannot
be updated if a temporary table has the same name,
but there are two cases which can generate rows events on
a base table of the same name.
Case 1: 'CREATE TABLE ... SELECT' statement.
In mixed format, it will generate rows events if it is unsafe.
Case 2: Dropping a transactional temporary table in a transaction
(happens only on 5.5+).
BEGIN;
DROP TEMPORARY TABLE t1; # t1 is an InnoDB table
INSERT INTO t1 VALUES(rand()); # t1 is a MyISAM table
COMMIT;
'DROP TEMPORARY TABLE' will be put in the transaction cache and
binlogged after the rows events generated by the 'INSERT' statement.
After this patch, the slave opens only the base table when applying a rows event.
Fix assorted warnings that are generated in optimized builds.
Most of it is silencing variables that are set but unused.
This patch also introduces the MY_ASSERT_UNREACHABLE macro
which helps the compiler to deduce that a certain piece of
code is unreachable.
include/my_compiler.h:
Use GCC's __builtin_unreachable if available. It allows
GCC to deduce the unreachability of certain code paths,
thus avoiding warnings that, for example, accused that a
variable could be used without being initialized (due to
unreachable code paths).
data dictionary confusion
On file systems with case insensitive file names, and
lower_case_table_names set to '2', the server could crash
due to a table definition cache inconsistency. This is
the default setting on MacOSX, but may also be set and
used on MS Windows.
The bug is caused by using two different strategies for
creating the hash key for the table definition cache, resulting
in failure to look up an entry which is present in the cache,
or failure to delete an existing entry. One strategy was to
use the real table name (with case preserved), and the other
to use a normalized table name (i.e a lower case version).
This is manifested in two cases. One is during 'DROP DATABASE',
where all known files are removed. The removal from
the table definition cache is done via a generated list of
TABLE_LIST with keys (wrongly) created using the case preserved
name. The other is during CREATE TABLE, where the cache lookup
is also (wrongly) based on the case preserved name.
The fix was to use only the normalized table name when
creating hash keys.
sql/sql_db.cc:
Normalize the table name (i.e. lower-case it)
sql/sql_table.cc:
table_name contains the normalized name
alias contains the real table name
The create_sort_index() function overwrites the original JOIN_TAB::type field.
On re-execution of the subquery, the overwritten JOIN_TAB::type (JT_ALL) is
used instead of JT_FT. This misleads test_if_skip_sort_order(), and
the function tries to find a suitable key for the ordering, which should
not be allowed for a FULLTEXT (JT_FT) table.
The fix is to restore the JOIN_TAB structures for the subselect on
re-execution for EXPLAIN.
Additional fix:
Update the TABLE::maybe_null field, which
affects list_contains_unique_index() behaviour, as it
could have the value maybe_null==TRUE based on the
assumption that this join is outer
(see the setup_table_map() function).
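A hypothetical sketch of the query shape involved (a FULLTEXT-driven
subquery with an ORDER BY that needs a filesort, examined with EXPLAIN;
names and data are assumptions, not the explain.test case):
  CREATE TABLE t1 (a INT, b TEXT, FULLTEXT KEY (b)) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (1, 'some words'), (2, 'more words');
  EXPLAIN SELECT 1 FROM t1 WHERE 1 =
    (SELECT 1 FROM t1 WHERE MATCH(b) AGAINST ('words') ORDER BY a LIMIT 1);
  DROP TABLE t1;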
mysql-test/r/explain.result:
test case
mysql-test/t/explain.test:
test case
sql/item_subselect.cc:
Make the subquery uncacheable in case of EXPLAIN. This allows keeping the
original JOIN_TAB::type (see JOIN::save_join_tab) and restoring it
on re-execution.
sql/sql_select.cc:
-restore the JOIN_TAB structures for the subselect on re-execution for EXPLAIN
-update the TABLE::maybe_null field as it could have
the value maybe_null==TRUE based on the assumption
that this join is outer (see the setup_table_map() function).
This change is not related to the crash problem but
affects EXPLAIN results in the test case.
For crash testing: kill the server without generating a core file.
include/my_dbug.h:
Use kill(getpid(), SIGKILL), which cannot be caught by signal handlers.
All DBUG_XXX macros should be no-ops in optimized mode; do that for DBUG_ABORT as well.
sql/handler.cc:
Kill the server without generating a core file.
sql/log.cc:
Kill the server without generating a core file.