(MYSQLBINLOG -V CRASHES WITH THAT BINLOG)
Problem: If the slave receives a corrupted row event,
the slave server crashes.
Analysis: When the slave unpacks the row event, it does
not validate the data before applying the event. If the
data is corrupted, e.g. the length of a field is wrong,
it can end up reading wrong data, leading to a crash.
A similar problem happens when the mysqlbinlog tool is used
against a corrupted binlog with the '-v' option. With -v,
the tool tries to print the values of all the fields, and a
corrupted field length can lead to a crash.
Fix: Before unpacking a field, its length is verified.
The field is unpacked only if the length falls within the
event range; otherwise an "ER_SLAVE_CORRUPT_EVENT" error
is thrown. In the mysqlbinlog -v case, the field value is
not printed and processing of the file is stopped.
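A minimal sketch of that length check, with hypothetical names (the real
validation sits in the row-unpacking code listed in the file notes below):

    /* Sketch only: 'pack_ptr' points at the packed field data inside the
       row event, 'row_end' one byte past the end of the event data. */
    #include <cstddef>
    #include <cstdint>

    enum unpack_result { UNPACK_OK, UNPACK_CORRUPT_EVENT };

    static unpack_result unpack_one_field(const uint8_t *pack_ptr,
                                          const uint8_t *row_end,
                                          size_t field_length)
    {
      /* Refuse to unpack if the claimed field length runs past the end of
         the event data: the event is corrupted (slave: ER_SLAVE_CORRUPT_EVENT,
         mysqlbinlog -v: stop printing and abort processing). */
      if (field_length > (size_t)(row_end - pack_ptr))
        return UNPACK_CORRUPT_EVENT;
      /* ... safe to read field_length bytes from pack_ptr ... */
      return UNPACK_OK;
    }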
sql/field.h:
Removed a function which is not required anymore
sql/log_event.cc:
Adding a validation on the field length before
the tool tries to print the value.
sql/log_event.h:
Changing unpack_row call according to the new arguments
sql/log_event_old.h:
Changing unpack_row call according to the new arguments
sql/rpl_record.cc:
Adding a new argument 'row_end' which tells
the end position of the complete data in the
row event. It will be used to do validation
before unpacking a field.
sql/rpl_record.h:
Adding a new argument 'row_end' which tells
the end position of the complete data in the
row event. It will be used to do validation
before unpacking a field.
sql/rpl_utility.cc:
Now calc_field_size() is required for client too.
Add another test case.
Fix bug found: When one thread aborts, another thread might
wrongly advance the relay log position too far, causing the
failing transaction to be skipped and thus lost upon
restarting the SQL thread after the failure.
Fix locking bug / race introduced by previous patch: If we
bail out of next_event() due to kill, there was a small window
where we could exit without rli->data_lock held, causing an
unlock-twice bug.
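A conceptual sketch of the restored invariant, with simplified, hypothetical
types (the real function is next_event() and the mutex is rli->data_lock):

    #include <mutex>

    struct Relay_log_info_sketch { std::mutex data_lock; };

    /* Contract: return with rli->data_lock held; the caller unlocks it. */
    static bool next_event_sketch(Relay_log_info_sketch *rli, bool killed)
    {
      rli->data_lock.lock();
      /* Waiting for new events temporarily releases the lock. */
      rli->data_lock.unlock();            /* stand-in for the wait */

      if (killed)
      {
        /* Fix: re-acquire before bailing out, otherwise the caller's
           unlock runs on a mutex we no longer hold ("unlock twice"). */
        rli->data_lock.lock();
        return false;
      }

      rli->data_lock.lock();              /* normal path: lock held on return */
      return true;
    }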
Add a test case for killing a waiting query in parallel replication.
Fix several bugs found:
- We should not wakeup_subsequent_commits() in ha_rollback_trans(), since we
do not know the right wakeup_error() to give.
- When a wait_for_prior_commit() is killed, we must unregister from the
waitee so we do not race and get an extra (non-kill) wakeup (see the
sketch after this list).
- We need to deal with error propagation correctly in queue_for_group_commit
when one thread is killed.
- Fix one locking issue in queue_for_group_commit(): we could unlock the
waitee lock too early and end up processing wakeup() with insufficient
locking.
- Fix Xid_log_event::do_apply_event; if commit fails it must not update the
in-memory @@gtid_slave_pos state.
- Fix and cleanup some things in the rpl_parallel.cc error handling.
- Add a missing check for killed in the slave sql driver thread, to avoid a
race.
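A sketch of the unregister-on-kill point from the list above, with
hypothetical types (not the actual wait_for_commit API); the killer is
assumed to signal the waitee's condition variable:

    #include <condition_variable>
    #include <mutex>

    struct waitee_sketch
    {
      std::mutex lock;
      std::condition_variable cond;
      bool committed= false;
      bool waiter_registered= false;
    };

    /* Returns true if the prior commit happened, false if we were killed. */
    static bool wait_for_prior_commit_sketch(waitee_sketch *w,
                                             const bool &killed)
    {
      std::unique_lock<std::mutex> guard(w->lock);
      w->waiter_registered= true;
      while (!w->committed && !killed)
        w->cond.wait(guard);
      if (!w->committed)
      {
        /* Killed: unregister while still holding the waitee's lock, so a
           later (non-kill) wakeup cannot race with a waiter that is gone. */
        w->waiter_registered= false;
        return false;
      }
      return true;
    }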
- tc_acquire_table and tc_release_table do not access
TABLE_SHARE::tdc.used_tables anymore
- in tc_acquire_table(): release LOCK_tdc after we release LOCK_open
(saves a few CPU cycles in the critical section)
- in tc_release_table(): if we have reached the table cache threshold, evict
the to-be-released table without moving it to unused_tables; unused_tables
must be empty at this point.
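A much simplified sketch of that release path, using hypothetical names and
a plain std::list in place of the server's intrusive lists:

    #include <list>

    struct Table_sketch { /* cached open table */ };

    static std::list<Table_sketch*> unused_tables;    /* idle, reusable tables */
    static unsigned table_cache_count= 0;
    static const unsigned table_cache_threshold= 400; /* stand-in value */

    static void tc_release_table_sketch(Table_sketch *table)
    {
      if (table_cache_count > table_cache_threshold)
      {
        /* Over the threshold: unused_tables is already empty, so evict the
           released table directly instead of parking it for reuse. */
        delete table;
        table_cache_count--;
        return;
      }
      unused_tables.push_back(table);                 /* keep it for reuse */
    }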
When a DSO is loaded we rewrite service pointers to point to the actual service structures.
But when a DSO is unloaded, we have to restore their original values, in case this DSO
wasn't removed from memory on dlclose() and is later loaded again.
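The idea in a small, hypothetical form (names are illustrative only):

    /* Address of a service pointer inside the DSO and the placeholder value
       it held before we rewrote it at load time. */
    struct service_ref_sketch
    {
      void **sym;
      void  *original;
    };

    static void resolve_service(service_ref_sketch *ref, void *actual_service)
    {
      ref->original= *ref->sym;     /* remember the DSO's original value */
      *ref->sym= actual_service;    /* point it at the real service struct */
    }

    static void unresolve_service(service_ref_sketch *ref)
    {
      /* On unload: restore the original value, in case dlclose() keeps the
         image mapped and a later load finds the already-rewritten pointer. */
      *ref->sym= ref->original;
    }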
When marking used columns, the function find_field_in_table_ref() erroneously
called the walk method for the real item behind a view/derived table field
with the second parameter set to TRUE.
This erroneous code was introduced in 2006.
Bug#17867117 - ERROR RESULT WHEN "COUNT + DISTINCT + CASE WHEN" NEED MERGE_WALK
Problem:
COUNT DISTINCT gives incorrect result when it uses a Unique
Tree and its last inserted record has null value.
Here is how COUNT DISTINCT is processed, given that this query is not
using loose index scan.
When a row is produced as a result of joining tables (there is only
one table here), we store the SELECTed value in a Unique tree. This
allows elimination of any duplicates, and thus implements DISTINCT.
When we have processed all rows like this, we walk the Unique tree,
counting its elements, in Aggregator_distinct::endup() (tree->walk());
for each element we call Item_sum_count::add(). This function wants to
ignore any NULL value; for that it checks item_sum->args[0]->null_value.
That is a mistake: when walking the Unique tree, the value being
aggregated is not item_sum->args[0] but rather table->field[0].
Solution:
Instead of item_sum->args[0]->null_value, use arg_is_null(), which
knows where to look (as in the fix for bug 57932).
As a consequence of this solution, we have to make arg_is_null() a
little more general:
1) Because it was so far only used for AVG() (which always has a
single argument), this function was looking at a single argument; now
that it has to work with COUNT(DISTINCT expression1,expression2), it
must look at all arguments.
2) Because we start using arg_is_null() for COUNT(DISTINCT), i.e. in
Item_sum_count::add(), it implies that we are also using it for
COUNT(no DISTINCT) (same add()). For COUNT(no DISTINCT), the
nullness to check is that of item_sum->args[0]. But the null_value
of such an item is reliable only if val_*() has been called on it. So far
arg_is_null() was always used after a call to arg_val*(), so it could
rely on null_value; but for COUNT there is no call to arg_val*(), so
arg_is_null() has to call is_null() instead.
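A simplified sketch of the generalized check, with hypothetical signatures
(the real code lives in the aggregator / Item_sum classes):

    struct Item_sketch
    {
      bool null_value;                       /* valid only after a val_*() call */
      bool is_null() { return null_value; }  /* stand-in: really re-evaluates */
    };

    /* use_null_value == true : a val_*() call already ran, trust null_value.
       use_null_value == false: COUNT path, no val_*() call, ask is_null(). */
    static bool arg_is_null_sketch(Item_sketch **args, unsigned arg_count,
                                   bool use_null_value)
    {
      for (unsigned i= 0; i < arg_count; i++)  /* all args: COUNT(DISTINCT a,b) */
      {
        if (use_null_value ? args[i]->null_value : args[i]->is_null())
          return true;
      }
      return false;
    }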
Testcase for 16539979 by Neeraj. Testcase for 17867117 contributed by
Xiaobin Lin from Taobao.
This was actually implemented as part of MDEV-4506, parallel replication.
Unfortunately, one of the conditionals was reversed. So fsync of master.info
was disabled in non-gtid mode, instead of in gtid mode.
So fix the conditional to be correct.
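The corrected test, sketched with a hypothetical member name for the GTID flag:

    struct Master_info_sketch { bool using_gtid; };

    /* master.info needs the fsync only when NOT running in GTID mode; in
       GTID mode the restart position is recovered from the GTID state.
       Before the fix the condition was effectively reversed. */
    static bool need_fsync_master_info(const Master_info_sketch *mi)
    {
      return !mi->using_gtid;
    }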
fix the code to compile with clang. fix warnings too.
include/probes_mysql_nodtrace.h:
clang++ doesn't like numeric _constants_ being used in ||
(it suspects that the intention was |). Boolean constants are ok;
a small illustration follows these file notes.
sql/hostname.cc:
only used in DBUG_ASSERT
sql/item.cc:
str_to_time and str_to_datetime return bool, not MYSQL_TIMESTAMP_xxx
sql/item_func.cc:
str_to_datetime_with_warn() returns bool, not MYSQL_TIMESTAMP_xxx
storage/cassandra/CMakeLists.txt:
CMAKE_CXX_FLAGS can be empty
storage/connect/odbconn.cpp:
HWND is void*
storage/connect/user_connect.h:
deprecated on FreeBSD and unused anyway
storage/connect/value.cpp:
bad characters inside. unused.
storage/spider/spd_trx.cc:
clang++ warns that memset will also overwrite the vtbl. That might just
as well be a good idea, as it asserts that the object will only be used
as storage. Silence the warning.
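A generic illustration of that '||' warning (not the actual macro from
probes_mysql_nodtrace.h); clang reports it as -Wconstant-logical-operand:

    static bool f(int x)
    {
      return x || 8;      /* warning: use of logical '||' with constant operand */
    }

    static bool g(int x)
    {
      return x || true;   /* boolean constant: no warning */
    }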
when frm file exists, but the table does not.
In recover_from_failed_open(), allow the discovery
to fail without an error, when
open_strategy == TABLE_LIST::OPEN_IF_EXISTS.
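Roughly, the condition amounts to the following sketch (hypothetical,
simplified types):

    enum open_strategy_sketch { OPEN_NORMAL, OPEN_IF_EXISTS };

    /* Discovery failing to find the table is only an error when the caller
       did not ask for the open-if-exists behaviour. */
    static bool discovery_failure_is_error(open_strategy_sketch strategy)
    {
      return strategy != OPEN_IF_EXISTS;
    }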
merged from 5.6:
Bug#14521864: MYSQL 5.1 TO 5.5 BUGS PARTITIONING
Bug#16589511: MYSQL_UPGRADE FAILS TO WRITE OUT ENTIRE ALTER TABLE ... ALGORITHM= ... STATEMENT
Bug#16274455: CAN NOT ACESS PARTITIONED TABLES WHEN DOWNGRADED FROM 5.6.11 TO 5.6.10
plus minor changes from 5.6, mainly comments
The update of mysql.gtid_slave_pos during event replication used
the wrong way to disable binary logging for the table updates.
Fix to instead temporarily clear the OPT_BIN_LOG flag.
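A sketch of the temporarily-clear-the-flag pattern (hypothetical names; in
the server the flag lives in the session's option bits):

    #include <cstdint>

    static const uint64_t OPT_BIN_LOG_SKETCH= 1ULL << 0;

    static void update_gtid_slave_pos_sketch(uint64_t *option_bits)
    {
      uint64_t saved= *option_bits;
      *option_bits &= ~OPT_BIN_LOG_SKETCH;  /* don't binlog this internal update */
      /* ... update the mysql.gtid_slave_pos row here ... */
      *option_bits= saved;                  /* restore the caller's setting */
    }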