This has been back-ported from 6.0 as the problems proved to afflict
5.1 as well.
The fix exposed two new bugs, reported as follows:
Bug#52174: Sometimes wrong plan when reading a MAX value from
non-NULL index
Bug#52173: Reading NULL value from non-NULL index gives wrong
result in embedded server
Taken together, both bugs affect a much smaller class of queries
than Bug#47762, so the fix stays for now.
The optimizer erroneously translated a LEFT JOIN into an INNER JOIN,
which discards rows with a NULL-complemented right side. This happens
because Item_row uses the not_null_tables() method from the base
(Item) class and therefore does not calculate its 'not null tables'
set properly. The fix adds a proper not_null_tables() calculation to
Item_row.
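A minimal repro sketch (hypothetical tables and predicate, not the
original test case; IFNULL is chosen because its 'not null tables'
set differs from its used-tables set, which is exactly where the
base-class shortcut goes wrong):

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (a INT);
    INSERT INTO t1 VALUES (1), (2);
    INSERT INTO t2 VALUES (1);

    -- The WHERE clause is TRUE for the NULL-complemented t2 row
    -- (IFNULL(NULL, 0) = 0), so the LEFT JOIN must not be converted
    -- to an INNER JOIN; the buggy Item_row reported t2 as a
    -- 'not null table' and the row (2, NULL) was lost.
    SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.a
    WHERE ROW(IFNULL(t2.a, 0), 1) = ROW(0, 1);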
The crash happens because of a discrepancy between the value of
const_tables and join->const_table_map (make_join_statistics).
The calculation of const_tables used a condition that checks the
HA_STATS_RECORDS_IS_EXACT flag; the calculation of
join->const_table_map did not.
For a MERGE table that has an index but an empty underlying table
list (no UNION), the table does not become a const table, so
join_read_const_table() is not called for it. join->const_table_map
nevertheless marks the table as const, and later make_join_select
uses the table when building and evaluating the const condition.
Since the table's record buffer was never populated, this leads to
a crash.
The fix is to check whether the engine supports the
HA_STATS_RECORDS_IS_EXACT flag before updating
join->const_table_map.
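A repro sketch under the scenario described above (table names and
query shape are illustrative, not the original test case):

    -- An indexed MERGE table with an empty UNION list:
    CREATE TABLE m1 (a INT, KEY (a)) ENGINE=MERGE UNION=();

    -- The optimizer sees an empty table and wants to treat it as
    -- const, but MERGE does not set HA_STATS_RECORDS_IS_EXACT, so
    -- the record buffer is never read; evaluating a "const"
    -- condition over it crashed before the fix.
    SELECT * FROM m1 LEFT JOIN m1 AS m2 ON m1.a = m2.a;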
All numeric operators and functions on integer, floating-point and
DECIMAL values now raise an 'out of range' error rather than
returning an incorrect value or NULL when the result is outside the
supported range of the corresponding data type.
Some test cases in the test suite had to be updated accordingly,
either because the test case itself relied on a value returned on
numeric overflow, or because a numeric overflow was the root cause
of the corresponding bug. The latter tests are no longer relevant,
since the expressions used to trigger those bugs are no longer
valid; however, such test cases have been adjusted and kept "for
the record".
Reverse DNS lookup of "localhost" returns "broadcasthost" on Snow
Leopard, and NULL on most other systems. Simply ignore the output,
as it is not an essential part of UDF testing.
Corrected some packaging bugs:
- install the mysqlservices library
- install libmysqlclient_r.so.{16,16.0.0} as links to
  libmysqlclient.so
- install libmysqld-debug.a
- install my_safe_process, my_safe_kill and the
  mysql-test-run.pl symlinks (mtr, mysql-test-run)
  into the correct place, ${INSTALL_MYSQLTESTDIR}
for InnoDB
The class Field_bit_as_char stores the metadata for the field
incorrectly: bytes_in_rec and bit_len are set to
(field_length + 7) / 8 and 0 respectively, while Field_bit has the
correct values field_length / 8 and field_length % 8.
Solved the problem by re-computing the metadata values from
field_length instead of using the bytes_in_rec and bit_len
variables.
To handle compatibility with old servers, a table map flag was
added to indicate that the bit computation is exact. If the flag is
clear, the slave instead computes and compares the number of bytes
required to store the bit field, effectively allowing replication
*without conversion* from any field length that requires the same
number of bytes to store.
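As a worked example, for a BIT(12) column (values follow from the
formulas above):

    -- Field_bit_as_char (wrong):  bytes_in_rec = (12 + 7) / 8 = 2,
    --                             bit_len = 0  => describes BIT(16)
    -- Field_bit (correct):        12 / 8 = 1 byte, 12 % 8 = 4 bits
    --                             => correctly describes BIT(12)
    CREATE TABLE t (b BIT(12));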
definition at engine
If a single ALTER TABLE contains both DROP INDEX and ADD INDEX using
the same index name (a.k.a. index modification), we need to disable
in-place ALTER TABLE, because we cannot ask the storage engine to
keep two copies of the index with the same name, even temporarily
(if we first do the ADD INDEX and then the DROP INDEX), and we
cannot modify indexes that are needed by e.g. foreign keys if we
first do the DROP INDEX and then the ADD INDEX.
Fixed the problem by disabling in-place ALTER TABLE for these cases.
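For example, a statement of this shape (schema is illustrative) now
falls back to the copy algorithm:

    CREATE TABLE t1 (a INT, b INT, KEY i1 (a));
    -- DROP and ADD of the same index name in one statement:
    ALTER TABLE t1 DROP INDEX i1, ADD INDEX i1 (b);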
concurrent I_S query
There were two problems:
1) MYSQL_LOCK_IGNORE_FLUSH also ignored name locks.
2) There was a race between abort_and_upgrade_locks and
   alter_close_tables (i.e. between remove_table_from_cache and
   close_data_files_and_morph_locks), which allowed the table to be
   opened with the MYSQL_LOCK_IGNORE_FLUSH flag, resulting in
   renaming a partition that was already in use, which could leave
   the table unusable.
The solution was to not allow IGNORE_FLUSH to skip waiting for a
name-locked table, and to not release the LOCK_open mutex between
the calls to remove_table_from_cache and
close_data_files_and_morph_locks, by merging the functions
abort_and_upgrade_locks and alter_close_tables.
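A sketch of the race at the SQL level (schema and session timing are
illustrative):

    CREATE TABLE p1 (a INT)
      PARTITION BY RANGE (a)
        (PARTITION pa VALUES LESS THAN (10),
         PARTITION pb VALUES LESS THAN MAXVALUE);

    -- session 1: repartitioning renames partition files under the hood
    ALTER TABLE p1 REORGANIZE PARTITION pa INTO
      (PARTITION pa1 VALUES LESS THAN (5),
       PARTITION pa2 VALUES LESS THAN (10));

    -- session 2, concurrently: I_S queries open tables with the
    -- IGNORE_FLUSH flag and could slip past the name lock
    SELECT * FROM information_schema.TABLES WHERE TABLE_NAME = 'p1';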
In BUG#49562 we fixed the case where numeric user var events would
not serialize the flag stating whether the value was signed or
unsigned (unsigned_flag). That fixed the case where the slave would
overflow by treating unsigned values as signed.
In this bug, we find that unsigned_flag can sometimes change between
the moment the user variable's value is recorded for binlogging
purposes and the actual binlogging time. Since at binlogging time we
take unsigned_flag from the runtime variable data, while the value
comes from the copy taken earlier in the execution, the
User_var_log_event may pair the variable's value with an
inconsistent unsigned_flag.
We fix this by also copying the unsigned_flag of the user_var_entry
when its value is copied for binlogging purposes. Later, at
binlogging time, we use the copied unsigned_flag and not the one in
the runtime user_var_entry instance.
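One hypothetical statement shape that can trigger the mismatch (not
the original test case) reads the variable and then reassigns it
with a different signedness within the same statement:

    CREATE TABLE t1 (a DECIMAL(25, 0));
    SET @v = CAST(18446744073709551615 AS UNSIGNED);
    -- @v's value is captured for binlogging when the statement reads
    -- it, but the in-statement assignment flips unsigned_flag before
    -- the User_var_log_event is written, so the event could pair the
    -- old (unsigned) value with the new (signed) flag:
    INSERT INTO t1 VALUES (@v), (@v := -1);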
Non-determinism of the test was caused by not setting a proper value
for the heartbeat period; that part was actually fixed by BUG#50767.
These fixes aim at possible non-determinism in the comparison of
received heartbeat events by master and slave in the circular part
of the test: even though the ratio of the heartbeat periods was
chosen to be as high as 10, it is still incorrect to compare the
number of heartbeat events based only on the relation between their
periods.
Yet another issue is that the relatively short 60-second timeout of
wait_for_status_var.inc makes valgrind runs fail. Fixed by invoking
wait_for_slave_io_to_start ahead of calling wait_for_status_var.
The test is made runnable only with the MIXED binlog format, as it
has close to 1 minute of total execution time and there is nothing
format-specific in it.
NULL column for NULL
The optimization that reads MIN() and MAX() values from an index did
not properly handle comparisons with NULL values. Fixed by giving up
that particular optimization step if there are non-NULL-safe
comparisons with NULL values, as the result is NULL anyway.
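For example (hypothetical schema):

    CREATE TABLE t1 (a INT NOT NULL, KEY (a));
    INSERT INTO t1 VALUES (1), (2);
    -- '=' is not NULL-safe, so the predicate can never be TRUE and
    -- the result must be NULL; the index-read shortcut used to
    -- return a value read from the index instead:
    SELECT MIN(a) FROM t1 WHERE a = NULL;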
Also, Oracle copyright notice was added to all files.
Base Tables
The type inference for a view column caused the result to be
interpreted as the wrong type: DATE columns were interpreted as TIME
and TIME as DATETIME. This happened because view columns are
represented by Item_ref objects, as opposed to Item_fields. Item_ref
had no method for retrieving a TIME value and was thus forced to
rely on the default implementation for any expression, which caused
the expression to be evaluated as a string and then parsed into a
TIME/DATETIME value.
Fixed by letting the Item_ref classes forward the request for a TIME
value to the referred Item (a field in this case), which reads the
TIME value directly, without conversion.
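A sketch of the affected shape (view and expression are
hypothetical; any expression that requests a TIME value from a view
column went through the string round-trip before the fix):

    CREATE TABLE t1 (d DATE, t TIME);
    CREATE VIEW v1 AS SELECT d, t FROM t1;
    INSERT INTO t1 VALUES ('2010-01-01', '10:00:00');
    -- ADDTIME asks its first argument for a TIME value; through the
    -- view this request hits an Item_ref rather than an Item_field:
    SELECT ADDTIME(t, '01:00:00') FROM v1;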
DDL no longer aborts mysql_lock_tables(), and hence we no longer
need to support the need_reopen flag of this call.
Remove the flag, and all the code in the server that was responsible
for handling the case when it was set. This made it possible to
simplify: open_and_lock_tables_derived(), the delayed thread, and
multi-update.
Rename MYSQL_LOCK_IGNORE_FLUSH to MYSQL_OPEN_IGNORE_FLUSH, since we
now only support this flag in open_table().
Rename MYSQL_LOCK_PERF_SCHEMA to MYSQL_LOCK_LOG_TABLE, to avoid
confusion.
Move the wait for the global read lock, for cases when we do updates
in SELECT f1() or DO (UPDATE), from mysql_lock_tables() to
open_table(): while waiting for the read lock, we could raise the
need_reopen flag, which is no longer present in mysql_lock_tables().
Since the block responsible for waiting for the GRL was moved,
MYSQL_LOCK_IGNORE_GLOBAL_READ_LOCK was renamed to
MYSQL_OPEN_IGNORE_GLOBAL_READ_LOCK.
There are two issues fixed here:
1. We needed to update the result files for some of the mysqlbinlog_*
tests, because some padding chars are not output anymore.
2. We needed to change Field_string::pack so that for BINARY types
the padding chars are not packed (lengthsp will return the full
length for these types).
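For reference, the padding in question (BINARY columns are padded
with 0x00 bytes up to the declared length; whether those bytes are
written when packing is what item 2 changes):

    CREATE TABLE t1 (b BINARY(4));
    INSERT INTO t1 VALUES (0x61);
    SELECT HEX(b) FROM t1;   -- 61000000: three 0x00 pad bytes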
index cardinalities=1
Parallel repair did not properly update index cardinality in certain
cases: when myisam_sort_buffer_size is not large enough to store all
keys, index cardinality was updated before the index was actually
written, at a point where no index statistics were available yet.
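A repro sketch (settings are illustrative; a realistic repro needs
enough rows that the sort buffer cannot hold all keys):

    CREATE TABLE t1 (a INT, KEY (a)) ENGINE=MyISAM;
    INSERT INTO t1 VALUES (1), (2), (3), (4);
    SET SESSION myisam_repair_threads = 2;      -- enable parallel repair
    SET SESSION myisam_sort_buffer_size = 4096; -- force the small-buffer path
    REPAIR TABLE t1;
    SHOW INDEX FROM t1;  -- cardinality could come out wrong before the fix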
Conflicts:
Text conflict in client/mysqlbinlog.cc
Text conflict in mysql-test/r/explain.result
Text conflict in mysql-test/r/subselect.result
Text conflict in mysql-test/r/subselect3.result
Text conflict in mysql-test/r/type_datetime.result
Text conflict in sql/share/Makefile.am
- Fixed a crash on attempt to create a fulltext index on a utf8mb4 column.
- Fixed wrong border width for supplementary characters in the mysql client:
  mysql --default-character-set=utf8mb4 -e "select concat(_utf32 0x20000,'a')"
Split rpl_row_charset into:
- rpl_row_utf16
- rpl_row_utf32
This way these tests can run independently when the server supports
either one of the character sets but not both.
Also cleaned up rpl_row_utf32, which had a spurious instruction:
-- let $reset_slave_type_conversions= 0