mirror of
https://github.com/MariaDB/server.git
synced 2025-07-30 16:24:05 +03:00
Bug#17632978 SLAVE CRASHES IF ROW EVENT IS CORRUPTED
(MYSQLBINLOG -V CRASHES WITH THAT BINLOG)

Problem: If the slave receives a corrupted row event, the slave server crashes.

Analysis: When the slave unpacks a row event, it does not validate the data before applying the event. If the data is corrupted, e.g. the length of a field is wrong, it can end up reading wrong data, leading to a crash. A similar problem occurs when the mysqlbinlog tool is used against a corrupted binlog with the '-v' option: because of -v, the tool tries to print the values of all the fields, and a corrupted field length can lead to a crash.

Fix: Before unpacking a field, its length is verified. The field is unpacked only if it falls within the event's range; otherwise an "ER_SLAVE_CORRUPT_EVENT" error is raised. In the mysqlbinlog -v case, the field value is not printed and processing of the file is stopped.
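The validation the fix describes can be sketched as a bounds check on a length-prefixed field before it is unpacked. This is a hypothetical illustration, not the actual server code: the function name, signature, and error convention (returning 0 for corruption, analogous to raising ER_SLAVE_CORRUPT_EVENT) are assumptions for the sketch.

```cpp
#include <cstdint>
#include <cstring>
#include <cstddef>

// Hypothetical sketch (not the server's code): validate a VARCHAR field's
// length prefix against the end of the event buffer before unpacking it.
// Returns bytes consumed, or 0 if the event data would be over-read.
// Assumes out_cap > 0.
static size_t unpack_varchar_checked(const uint8_t *ptr, const uint8_t *end,
                                     unsigned metadata /* max column length */,
                                     char *out, size_t out_cap)
{
  // VARCHAR uses a 1- or 2-byte little-endian length prefix depending on
  // whether the column can exceed 255 bytes (mirrors the diff below).
  size_t prefix = metadata > 255 ? 2 : 1;
  if (end - ptr < (ptrdiff_t) prefix)
    return 0;                       // the prefix itself runs past the event
  size_t len = prefix == 1 ? ptr[0] : (size_t) (ptr[0] | (ptr[1] << 8));
  if (len > metadata || (size_t) (end - ptr) < prefix + len)
    return 0;                       // corrupted length: would read past the event
  size_t n = len < out_cap - 1 ? len : out_cap - 1;
  memcpy(out, ptr + prefix, n);     // only reached once the length is validated
  out[n] = '\0';
  return prefix + len;
}
```

With a sane buffer the field unpacks normally; with a prefix claiming more bytes than the event contains, the function refuses to read instead of crashing.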
@@ -186,7 +186,7 @@ int compare_lengths(Field *field, enum_field_types source_type, uint16 metadata)
   DBUG_PRINT("result", ("%d", result));
   DBUG_RETURN(result);
 }
 
 #endif //MYSQL_CLIENT
 /*********************************************************************
  *                   table_def member definitions                    *
  *********************************************************************/
@@ -285,7 +285,6 @@ uint32 table_def::calc_field_size(uint col, uchar *master_data) const
   case MYSQL_TYPE_VARCHAR:
   {
     length= m_field_metadata[col] > 255 ? 2 : 1; // c&p of Field_varstring::data_length()
-    DBUG_ASSERT(uint2korr(master_data) > 0);
     length+= length == 1 ? (uint32) *master_data : uint2korr(master_data);
     break;
   }
@@ -295,17 +294,6 @@ uint32 table_def::calc_field_size(uint col, uchar *master_data) const
   case MYSQL_TYPE_BLOB:
   case MYSQL_TYPE_GEOMETRY:
   {
-#if 1
-    /*
-      BUG#29549:
-      This is currently broken for NDB, which is using big-endian
-      order when packing length of BLOB. Once they have decided how to
-      fix the issue, we can enable the code below to make sure to
-      always read the length in little-endian order.
-    */
-    Field_blob fb(m_field_metadata[col]);
-    length= fb.get_packed_size(master_data, TRUE);
-#else
     /*
       Compute the length of the data. We cannot use get_length() here
       since it is dependent on the specific table (and also checks the
@@ -331,7 +319,6 @@ uint32 table_def::calc_field_size(uint col, uchar *master_data) const
     }
 
     length+= m_field_metadata[col];
-#endif
     break;
   }
   default:
@@ -340,7 +327,7 @@ uint32 table_def::calc_field_size(uint col, uchar *master_data) const
   return length;
 }
 
 
 #ifndef MYSQL_CLIENT
 /**
 */
 void show_sql_type(enum_field_types type, uint16 metadata, String *str, CHARSET_INFO *field_cs)
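The BLOB/GEOMETRY branch kept by the diff (the code following `#else`) computes the on-wire field size from the column metadata, which for BLOB columns gives the width of the little-endian length prefix. A hedged sketch of that computation, with an assumed function name for illustration:

```cpp
#include <cstdint>

// Hypothetical sketch of the BLOB size computation the diff retains:
// prefix_bytes (from m_field_metadata[col]) is the width of the
// little-endian length prefix (1-4 bytes); the packed field size is
// the payload length plus the prefix itself.
static uint32_t blob_field_size(const uint8_t *data, unsigned prefix_bytes)
{
  uint32_t payload = 0;
  for (unsigned i = 0; i < prefix_bytes; i++)
    payload |= (uint32_t) data[i] << (8 * i);  // little-endian read
  return payload + prefix_bytes;               // prefix + payload bytes
}
```

This mirrors the `length+= m_field_metadata[col];` line in the hunk above: the metadata contributes the prefix width on top of the decoded payload length. Note that this unchecked computation is exactly what the bug fix guards: a corrupted prefix yields a size far beyond the event buffer.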