Problem:
=======
Found using AddressSanitizer testing.
The mysqlbinlog utility may perform out-of-bounds heap
buffer reads, and thus exhibit undefined behaviour, when
processing RBR events in the old (pre-5.1 GA) format.
The following code in process_event() would be correct only
if Rows_log_event were the base class of the
Write/Update/Delete_rows_log_event_old classes:
case PRE_GA_WRITE_ROWS_EVENT:
case PRE_GA_DELETE_ROWS_EVENT:
case PRE_GA_UPDATE_ROWS_EVENT:
  ...
  Rows_log_event *e= (Rows_log_event*) ev;
  Table_map_log_event *ignored_map=
    print_event_info->m_table_map_ignored.get_table(e->get_table_id());
  ...
  if (e->get_flags(Rows_log_event::STMT_END_F))
  {
    ...
  }
However, Rows_log_event is the base class only for the
Write/Update/Delete_rows_log_event family of classes, not
for their *_old counterparts. So the above typecasts are
incorrect for the old-format RBR events and may result (and,
according to AddressSanitizer reports, do result) in reads
outside of the previously heap-allocated buffer.
Fix:
===
The invalid type cast mentioned above has been replaced with
its appropriate old-format counterpart.
Note: The above issue is present only in mysql-5.1 and
mysql-5.5. It is fixed in mysql-5.6 and above as part of
Bug#55790. Hence a few of the relevant changes of Bug#55790
are being backported to fix the current issue.
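A compilable miniature of the hazard and the fix (class layout heavily
simplified; the corrected branch casts to the real base of the *_old
family, sketched here as Old_rows_log_event):

// Simplified stand-ins for the real event classes:
struct Log_event { virtual ~Log_event() {} };

struct Old_rows_log_event : Log_event {
  enum { STMT_END_F = 1 };
  int m_flags;
  Old_rows_log_event() : m_flags(STMT_END_F) {}
  bool get_flags(int f) const { return (m_flags & f) != 0; }
};
struct Write_rows_log_event_old : Old_rows_log_event {};

bool stmt_ends(Log_event *ev) {
  // Correct: cast to the actual base of the *_old family. Casting to
  // Rows_log_event (unrelated to this hierarchy) made get_flags() read
  // a flag word at the wrong offset, past the allocated object.
  Old_rows_log_event *e = static_cast<Old_rows_log_event*>(ev);
  return e->get_flags(Old_rows_log_event::STMT_END_F);
}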
INTERACTIVE MODE
In interactive mode, libedit/readline allocates memory
for every new line entered, and the allocated memory
is never freed afterwards.
Fixed by freeing the allocated memory blocks appropriately.
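For illustration, a minimal readline-style loop with the missing free()
in place (standard GNU readline/libedit usage; add_history() stores its
own copy of the line, so the caller owns the returned buffer):

#include <stdlib.h>
#include <readline/readline.h>
#include <readline/history.h>

int main() {
  char *line;
  while ((line = readline("mysql> ")) != nullptr) {
    if (*line)
      add_history(line);  // add_history() keeps its own copy
    // ... process the line ...
    free(line);           // readline() heap-allocates every line
  }
  return 0;
}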
Item_func_group_concat::copy_or_same() creates a copy of the original object.
It also creates a copy of the ORDER structure, because ORDER struct elements
may be modified in find_order_in_list() called from
Item_func_group_concat::setup().
Since the ORDER copy is created using memcpy, its ORDER::next elements still
point to the original ORDER structs. Thus find_order_in_list(), called from an
EXECUTE statement, modifies the original ORDER item pointers so that they
point to runtime items; these items are freed after execution, so the original
ORDER structure becomes invalid.
The fix is to properly update the ORDER::next fields so that they point to the
new ORDER elements.
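A self-contained sketch of the relinking, assuming (as in this case) the
copied elements sit in one contiguous array in chain order; ORDER is
reduced to a stub here:

#include <cstring>

struct ORDER {
  void *item;    // stand-in for the real payload
  ORDER *next;
};

// memcpy duplicates the elements, but the copies' next pointers still
// reference the ORIGINAL chain; relink them to the copied successors.
void copy_order_chain(ORDER *dst, const ORDER *src, int count) {
  std::memcpy(dst, src, sizeof(ORDER) * count);
  for (int i = 0; i < count; i++)
    dst[i].next = (i + 1 < count) ? &dst[i + 1] : nullptr;
}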
COLUMNS ARE USED INSIDE A STORED PROCEDURE
Problem: The operator '=' overload method in the
'String' class does not copy the str_charset member from
the R.H.S. object to the L.H.S. object. Hence the charset
is set wrongly on string assignments.
Analysis: The above problem was identified while
doing the analysis of bug#14593883.
Though the test scenario mentioned in the bug page
is not an issue in the mysql-5.1 code, the actual root cause,
i.e. "str_charset member is not copied", exists in the
mysql-5.1 code base.
Fix: Copy the str_charset member in the operator '='
overload method.
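A minimal model of the fix; the server's String class has more members
and logic, so this only shows the shape of the shallow-copy assignment:

struct CHARSET_INFO;   // opaque, as in the server sources

class String {
  char *Ptr;
  unsigned length, Alloced_length;
  bool alloced;
  CHARSET_INFO *str_charset;
public:
  String &operator=(const String &s) {
    if (&s != this) {
      Ptr = s.Ptr;
      length = s.length;
      Alloced_length = s.Alloced_length;
      alloced = false;               // shallow copy: don't own the buffer
      str_charset = s.str_charset;   // the member the bug failed to copy
    }
    return *this;
  }
};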
For a fresh insert, page_zip_available() was counting some fields twice.
In the worst case, the compressed page size grows by PAGE_ZIP_DIR_SLOT_SIZE
plus the size of the record that is being inserted. The size of the record
already includes the fields that will be stored in the uncompressed portion
of the compressed page.
page_zip_get_trailer_len(): Remove the output parameter entry_size,
because no caller is interested in it.
page_zip_max_ins_size(), page_zip_available(): Assume that the page grows
by PAGE_ZIP_DIR_SLOT_SIZE and the record size (which includes the fields
that would be stored in the uncompressed portion of the page).
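The corrected accounting, as an illustrative helper (function and
parameter names are hypothetical; PAGE_ZIP_DIR_SLOT_SIZE is the 2-byte
dense directory slot):

static const unsigned PAGE_ZIP_DIR_SLOT_SIZE = 2;  // bytes per slot

// Worst case, the compressed page grows by one new directory slot plus
// the record itself; rec_size already covers the fields kept in the
// uncompressed portion, so they must not be counted a second time.
bool insert_fits(unsigned long free_space, unsigned long rec_size) {
  return free_space >= PAGE_ZIP_DIR_SLOT_SIZE + rec_size;
}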
rb#2169 approved by Sunny Bains
IN IN-CLAUSE USING MYISAM OR MEMORY ENGINE
Backport from 5.6. Original message:
A combination of coincidences caused a data loss:
* The query has IN subqueries nested twice,
* the WHERE clause of the inner subquery refers to an
outer field, and the whole WHERE clause returns FALSE,
* the inner subquery has a LEFT JOIN that joins a single
row with a row of NULLs; one of those NULL columns
represents the select list of the subquery.
Normally, that inner subquery should return an empty record
set. However, in our case:
* the Item_is_not_null_test item becomes constant, since
its underlying field is NULL (because of the LEFT JOIN ... ON
FALSE of a const table row with a row of NULLs);
* we evaluate Item_is_not_null_test::val_int() as part
of the fake HAVING expression of the transformed subquery;
* since the underlying field is NULL, we optimize out the
whole fake HAVING expression as FALSE, as well as the whole
subquery, with a zero result:
"Impossible HAVING noticed after reading const tables";
* thus, the optimizer ignores the presence of the WHERE
clause (the WHERE expression is FALSE in our case, so
the subquery should return an empty set);
* however, during the evaluation of
Item_is_not_null_test::val_int() in the optimizer,
the item marked its "owner" with the "was_null" flag, which
forced the subquery to return UNKNOWN instead of the empty
set.
That caused a wrong result.
The problem is a regression from the small cleanup in
the fix for Bug#11827369 (the Item_is_not_null_test part)
that conflicts with optimizations in the fix for Bug#11752543.
Before that regression, the Item_is_not_null_test items
were never constants.
The fix is a rollback of the Item_is_not_null_test parts
of the Bug#11827369 fix.
page_zip_compress_node_ptrs(): Do not attempt to invoke deflate() with
c_stream->avail_in == 0, because it will result in Z_BUF_ERROR (and thus
in page_zip_compress() failure and unnecessary further splits of the node
pointer page). A node pointer record can have an empty payload, provided
that all key fields are empty.
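The zlib contract behind this: deflate() called with Z_NO_FLUSH and no
input can make no progress and reports Z_BUF_ERROR. A guard of that
shape (simplified, names hypothetical):

#include <zlib.h>

int compress_chunk(z_stream *c_stream) {
  if (c_stream->avail_in == 0)
    return Z_OK;   // nothing to feed deflate(); skip, it's not an error
  return deflate(c_stream, Z_NO_FLUSH);
}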
Approved by Jimmy Yang
GRANT STATEMENT
Description: A missing length check causes a problem when
copying source to destination while
lower_case_table_names is set to a value
other than 0. This patch fixes the issue
by ensuring that the required bounds check
is performed.
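A sketch of the guarded copy the description implies (names and error
handling are hypothetical):

#include <cstring>

// Refuse to copy a name the destination cannot hold, instead of
// overrunning the buffer.
bool copy_name_checked(char *dst, size_t dst_size, const char *src) {
  size_t len = std::strlen(src);
  if (len >= dst_size)
    return false;            // caller reports the over-long identifier
  std::memcpy(dst, src, len + 1);
  return true;
}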
Problem:
When the VALUES() function is used inappropriately in a SET statement, the
server exits:
set port = values(v);
This happens because values(v) is parsed as an Item_insert_value by the
parser. Both Item_field and Item_insert_value report their type as
FIELD_ITEM, but for Item_insert_value the field_name member is NULL. In the
set_var constructor, when the type of the item is FIELD_ITEM, we try to
access the non-existent field_name.
The class hierarchy is as follows:
Item -> Item_ident -> Item_field -> Item_insert_value
The Item_ident::field_name is NULL for Item_insert_value.
Solution:
In the parsing stage, in the set_var constructor, if the item type is
FIELD_ITEM and field_name is NULL, then the item is probably an
Item_insert_value, so leave it as it is for later evaluation.
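A simplified, self-contained model of the guard; the real types are the
server's Item, Item_field and Item_insert_value:

struct Item {
  enum Type { FIELD_ITEM, OTHER_ITEM };
  virtual Type type() const { return OTHER_ITEM; }
  virtual ~Item() {}
};
struct Item_field : Item {
  const char *field_name = nullptr;  // stays NULL for Item_insert_value
  Type type() const override { return FIELD_ITEM; }
};

Item *resolve_set_value(Item *value_arg) {
  if (value_arg && value_arg->type() == Item::FIELD_ITEM) {
    Item_field *f = static_cast<Item_field*>(value_arg);
    if (f->field_name == nullptr)
      return value_arg;  // probably Item_insert_value: defer evaluation
    // ... otherwise the field name can be used safely, as before ...
  }
  return value_arg;
}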
rb://2004 approved by Roy and Norvald.
HOST HAS '_' IN THE HOSTNAME
Problem:
=======
'_' and '%' are treated as wildcards by the ACL code, and
this is documented in the manual. The problem with
mysql_install_db is that it does not take this into account
when creating the initial GRANT tables:
--- cut ---
REPLACE INTO tmp_user SELECT @current_hostname,'root','','Y',
'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y',
'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','','','','',
0,0,0,0 FROM dual WHERE LOWER( @current_hostname) != 'localhost';
--- cut ---
If @current_hostname contains any wildcard characters, then
a wildcard entry will be defined for the 'root' user,
which is a flaw.
Analysis:
========
As per the bug description, when we have a hostname with a
wildcard character in it, clients from several other hosts
with a similar name pattern are allowed to connect to the
server as root. For example, if the hostname is like
'host_.com', that name is logged in the mysql.user table
as-is. This allows 'root' users from other hosts like
'host1.com', 'host2.com' ... to connect to the server as the
root user.
While creating the initial GRANT tables, we do not check
for wildcard characters in the hostname.
Fix:
===
As part of the fix, the escape character "\" is added before
each wildcard character to make it a plain character, so that
one and only one host, with the exact name, will be able to
connect to the server.
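The escaping step as a self-contained sketch (function name
hypothetical):

#include <cstddef>

// Prefix ACL wildcard characters with '\' so the stored hostname
// matches only the literal host name.
void escape_acl_wildcards(const char *src, char *dst, size_t dst_size) {
  size_t j = 0;
  for (size_t i = 0; src[i] != '\0' && j + 2 < dst_size; i++) {
    if (src[i] == '_' || src[i] == '%')
      dst[j++] = '\\';
    dst[j++] = src[i];
  }
  dst[j] = '\0';   // "host_.com" becomes "host\_.com"
}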
OPENSSL
Description: Specify a preference to disable compression
when using the OpenSSL library. OpenSSL uses
zlib compression by default, which may
lead to problems.
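One standard way to express that preference, for OpenSSL 1.0.0 and
later (a sketch of ordinary OpenSSL usage, not the exact patch):

#include <openssl/ssl.h>

SSL_CTX *new_ctx_without_compression() {
  SSL_CTX *ctx = SSL_CTX_new(SSLv23_method());
#ifdef SSL_OP_NO_COMPRESSION          // added in OpenSSL 1.0.0
  if (ctx != nullptr)
    SSL_CTX_set_options(ctx, SSL_OP_NO_COMPRESSION);
#endif
  return ctx;
}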
PLATFORM= MACOSX10.6 X86_64 MAX
Problem: The test was failing on pb2's mac machine because
it was not cleaned up properly. The test checks if
the command 'start slave until' throws a proper
error when issued with a wrong number/type of
parameters. After this, the replication stream was
stopped using the include file 'rpl_end.inc'.
The errors thrown earlier had left the slave in an
inconsistent state for the include file to close,
which was caught on the mac machine.
Fix: Started slave by invoking start_slave.inc to have a
working slave before calling rpl_reset.inc
Problem: The test file was not in good shape. It tested the
start slave until relay log file/pos combination
incorrectly. A couple of commands were executed on
the master and replicated on the slave. Next, the
coordinates in terms of relay log file and pos
were noted down, followed by reset slave and start
slave until the saved relay log file/pos. Reset slave
deletes all relay log files and makes the slave
forget its replication position. So, using the
saved coordinates after reset slave is wrong.
Fix: Split the test in two parts:
a) Test for start slave until master log file/pos and
checking for correct errors in the failure
scenarios.
b) Test for start slave until relay log file/pos.
Problem: The variables auto_increment_increment and
auto_increment_offset were set in the include
file rpl_init.inc. They were only configured for
some connections that are rarely used by test
cases, so this was likely to cause confusion.
If replication tests want to set up these variables,
they should do so explicitly.
Fix:
a) Removed code to set the variables
auto_increment_increment and auto_increment_offset
in the include file.
b) Updated the test files that used them.
In the mysql_binlog_send method, right after detecting an EOF in
the read event loop, and before deciding if we should change to a
new binlog file, there is an execution window where new events can
be written to the binlog and a rotation can happen. When reaching
the test, the function will then change to a new binlog file,
ignoring all the events written in this window. This results in
events not being replicated.
Only when the binlog is detected as deactivated in the event loop
of the dump thread can we really know that no more events
remain. For this reason, this test is now made under the log lock
at the beginning of the event loop, when reading the events.
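The shape of the change, as a generic check-then-act sketch; the helper
names are hypothetical stand-ins for the dump thread's log lock and
"is this still the hot log?" check:

#include <mutex>

std::mutex log_lock;
bool binlog_is_active();   // is the file being read still the hot log?
bool try_read_event();     // reads one event, false on EOF

bool more_events_possible() {
  // Both the activity check and the EOF read happen under the same
  // lock, so no writer can append or rotate between them.
  std::lock_guard<std::mutex> guard(log_lock);
  bool hot = binlog_is_active();
  if (try_read_event())
    return true;
  return hot;   // EOF on a deactivated log: nothing more can arrive
}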
TLS AND DTLS RECORD PROTOCOLS
Description: In yaSSL, the decryption phase of the TLS
protocol depends on the type of padding. This
patch removes this dependency and makes the
error generation/decryption process independent
of the padding type.
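A self-contained illustration of padding-independent checking (not the
yaSSL code itself): the verdict is accumulated over every candidate
byte instead of exiting on the first mismatch, and the caller still
verifies the MAC unconditionally:

#include <cstddef>

// TLS block-cipher padding: pad_len + 1 bytes, each equal to pad_len.
int padding_ok(const unsigned char *buf, size_t len,
               unsigned char pad_len) {
  if (len < (size_t)pad_len + 1)
    return 0;
  int ok = 1;
  for (size_t i = 0; i <= pad_len; i++)
    ok &= (buf[len - 1 - i] == pad_len);  // no early exit on mismatch
  return ok;  // caller combines this with an unconditional MAC check
}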
Post-push fix:
rpl_stm_until.test was disabled because of
this bug. Enabled and fixed it.
Removed a part of the test that was obsolete:
it tested replication from a 4.0 master to a
5.0 slave.
FROM SHOW CREATE
Problem: The length of the internally generated foreign key name
is not checked.
Solution: The length of the internally generated foreign key name is
checked. If it is greater than the allowed limit, an error message
is reported. Also, the constraint name is printed in the same manner
as the table name, using the system charset information.
rb://1969 approved by Marko.
Problem: The sys_vars suite is disabled in the mysql-5.1 branch.
Fix: To enable the sys_vars suite in mysql-5.1, add it to the
mysql-test-run.pl file; the suite also has to be added to
Makefile.am in order to pick up that test directory.
5.1 SERVER
The problem was caused by the COM_CHANGE_USER parsing code. That code
ignored the character set number passed in the COM_CHANGE_USER packet;
the character_set_client value was used instead, and the user name was
not converted at all.
Fixed by using the passed character set number to convert both the db
and user names. If COM_CHANGE_USER does not contain a character set
number, then character_set_client is used to convert both names.
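The selection logic reduced to a sketch; get_charset() stands in for
the server's charset lookup, and the packet parsing is simplified:

struct CHARSET_INFO;
CHARSET_INFO *get_charset(unsigned cs_number);   // stand-in declaration
CHARSET_INFO *character_set_client;              // session default

CHARSET_INFO *charset_for_change_user(const unsigned char *pos,
                                      bool cs_number_present) {
  if (cs_number_present) {
    unsigned cs_number = pos[0] | (pos[1] << 8);  // little-endian uint2
    if (CHARSET_INFO *cs = get_charset(cs_number))
      return cs;          // convert BOTH user and db names with this
  }
  return character_set_client;  // fallback when no number was sent
}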
This is a backport of the fix for:
Bug#13633549 HANDLE_FATAL_SIGNAL IN TEST_IF_SKIP_SORT_ORDER/CREATE_SORT_INDEX
Don't invoke the range optimizer for a NULL select.
Some queries with the "SELECT ... FROM DUAL" nested subqueries
failed with an assertion on debug builds.
Non-debug builds were not affected.
There were a few different issues with similar assertion
failures on different queries:
1. The first problem was related to the incomplete propagation
of the "non-constant" item status from underlying subquery
items to the outer item tree: in some cases non-constants were
interpreted as constants and evaluated at the preparation stage
(val_int() calls within fix_fields() etc.).
Thus, the default implementation of Item_ref::const_item() from
the Item parent class didn't take into account the "const_item"
status of the referenced item tree -- it used the insufficient
"used_tables() == 0" check instead. This worked in most cases,
since our "non-constant" functions like RAND() and SLEEP() set
the RAND_TABLE_BIT in the used table map, so they do look
non-constant from Item_ref's "point of view". However, the
"SELECT ... FROM DUAL" subquery may have an empty map of used
tables, but at the same time subqueries are never "constant" at
the context analysis stage (preparation, view creation etc.).
So, the non-constantness of such subqueries was missed.
Fix: the Item_ref::const_item() function has been overridden to
take into account both the (*ref)->const_item() status and the
tricky Item_ref::used_tables() return values, since the
(*ref)->const_item() call alone is not enough there (a minimal
sketch follows this list).
2. In some cases instead of the const_item() call we check a
value of the Item::with_subselect field to recognize items
with nested subqueries. However, the Item_ref class didn't
propagate this value from the referenced item tree.
Fix: Item::has_subquery() and Item_ref::has_subquery()
functions have been backported from 5.6. All direct
references to the with_subselect fields of nested items have
been replaced with the has_subquery() function call.
3. The Item_func_regex class didn't propagate with_subselect
either, since it overrides the Item_func::fix_fields()
function with an insufficient fix_fields() implementation.
Fix: the Item_func_regex::fix_fields() function has been
modified to gather "constant" statuses from inner items.
4. The Item_func_isnull::update_used_tables() function has
a special branch for the underlying item where the maybe_null
value is false: in this case it marks the Item_func_isnull
as a "const_item" and sets the cached_value to false.
However, Item_func_isnull::val_int() was not in sync with
update_used_tables(): it took into account neither
const_item_cache nor cached_value in the case of the
"args[0]->maybe_null == false" optimization.
Since such an Item_func_isnull has "const_item() == true",
it's OK to call Item_func_isnull::val_int() etc. from outer
items at the preparation stage. In this case the server tried to
call Item_func_isnull::args[0]->isnull(), and if the args[0]
item contained a nested not-nullable subquery, it failed
with an assertion.
Fix: take the value of Item_func_isnull::const_item_cache into
account in the val_int() function.
5. The auxiliary Item_is_not_null_test class has a similar
optimization in the update_used_tables() function as the
Item_func_isnull class has, and the same issue in the val_int()
function.
In addition to that, Item_is_not_null_test::update_used_tables()
doesn't update the const_item_cache value, so the "maybe_null"
optimization is useless there. Thus, we missed some optimizations
of cases like these (before and after the fix):
< <is_not_null_test>(a),
---
> <cache>(<is_not_null_test>(a)),
or
< having (<is_not_null_test>(a) and <is_not_null_test>(a))
---
> having 1
etc.
Fix: update Item_is_not_null_test::const_item_cache in
update_used_tables() and take it into account in val_int().
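A minimal self-contained model of the item-1 fix (real classes are the
server's Item/Item_ref; RAND_TABLE_BIT stands in for the real
pseudo-table flag):

#include <cstdint>

typedef std::uint64_t table_map;
const table_map RAND_TABLE_BIT = 1ULL << 63;

struct Item {
  virtual table_map used_tables() const { return 0; }
  // Old default: "no tables used" was taken to mean "constant".
  virtual bool const_item() const { return used_tables() == 0; }
  virtual ~Item() {}
};

struct Item_ref : Item {
  Item **ref;
  table_map used_tables() const override { return (*ref)->used_tables(); }
  // The fix: also ask the referenced item itself, so a FROM DUAL
  // subquery (empty table map, yet never constant during context
  // analysis) is no longer misjudged as constant.
  bool const_item() const override {
    return (*ref)->const_item() && used_tables() == 0;
  }
};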
innodb_bug12400341.test is disabled for the Valgrind daily test.
It might be affected by undo slots left over from the previous test,
because of the slower execution under Valgrind.
ON COL WITH COMPOSITE INDEX
This problem was caused by the patch for bug#11751794.
While checking for the keypart covering a non-grouping attribute, we
were not checking whether the root node of the SEL_ARG* tree for the
index has any value or not.
In a previous fix, user variables with a zero-length name were
incorrectly considered as event corruption, despite being allowed by
the server.
Fix this wrong assumption by again allowing user variables with a
zero-length name in the binary log.
PROPERLY QUOTED IN BINLOG FILE
Problem: In a LOAD DATA INFILE query, user variables are allowed
inside the "Into_list" and "Set_list". The user variables used
inside these two lists are not properly guarded with backticks
while the server is writing into the binlog. Hence user variable
names like a` cannot be used in this context.
Fix: Properly quote these variables while
writing into the binlog.
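A self-contained version of the quoting rule being applied (the server
uses its append_identifier() helper for this): backtick-quote the name
and double any embedded backticks, so a variable named a` round-trips:

#include <string>

std::string quote_identifier(const std::string &name) {
  std::string out = "`";
  for (char c : name) {
    if (c == '`')
      out += '`';   // escape a backtick by doubling it
    out += c;
  }
  out += '`';
  return out;       // quote_identifier("a`") yields `a```
}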
CERTAIN LEVEL
Problem description: mysqld crashes when we update the max_connections
variable to a value lower than the number of currently open connections.
Analysis: The "alarm_queue.max_elements" size is decided at
server start time, and it gets modified if we change the max_connections
value. In the current scenario the value of "alarm_queue.max_elements"
is decremented when max_connections is set to 2. When updating the
"alarm_queue.max_elements" value we do not update the "max_used_alarms"
value. Hence, instead of getting the warning "thr_alarm queue is full",
the server ends up asserting at the time of inserting new elements
into the queue.
Fix: the fix is to dynamically increase the size of the alarm_queue.
In order to do that, queue_insert_safe() should be used instead of
queue_insert().
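The substituted call, sketched; queue_insert_safe() is the mysys
primitive that resizes an auto-extendable queue instead of asserting
when max_elements is reached:

// Simplified usage; alarm_data and the error handling are placeholders.
if (queue_insert_safe(&alarm_queue, (uchar *) alarm_data) != 0)
{
  // 1: queue full and not auto-extendable, 2: out of memory
  // ... report "thr_alarm queue is full" and drop the alarm ...
}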