There actually were several problems here:
- A WRITE-lock is required to load events from the mysql.event table,
  but in read-only mode an ordinary user cannot acquire it;
- The Security_context::master_access attribute was not properly
  initialized in Security_context::init(), which led to differences
  in behavior with and without debug configure options;
- If the server failed to load events from mysql.event, it did not
  close the mysql.event table, which led to the core dump described
  in the bug report.
The patch fixes all these problems:
- Use the super-user to acquire the WRITE-lock on the mysql.event table;
- The WRITE-lock is acquired by the event scheduler in two cases:
    - on initial loading of events from the database;
    - when an event has been executed and its attributes need to be
      updated.
  All other cases in which a WRITE-lock on the mysql.event table is
  needed happen under the user's own account, so nothing changes there
  for read-only mode: a user can create/update/drop an event only if
  he is a super-user;
- Initialize Security_context::master_access;
- Close the mysql.event table if something goes wrong (see the sketch
  below).
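As an illustration of the last point only, here is a minimal, self-contained sketch
of the cleanup rule; the function names are hypothetical stand-ins, not the server's
actual open/read/close calls:

  #include <cstdio>

  struct Table { const char *name; };

  static bool open_event_table(Table *tbl)   { tbl->name = "mysql.event"; return true; }
  static bool read_all_events(const Table &) { return false; /* simulate a load failure */ }
  static void close_event_table(const Table &tbl) { std::printf("closed %s\n", tbl.name); }

  static bool load_events()
  {
    Table tbl;
    if (!open_event_table(&tbl))
      return false;                    // nothing opened, nothing to clean up
    const bool ok = read_all_events(tbl);
    close_event_table(tbl);            // previously skipped when reading failed
    return ok;
  }

  int main() { return load_events() ? 0 : 1; }
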
When skipping the beginning of a transaction starting with BEGIN, the OPTION_BEGIN
flag was not set correctly, which caused the slave to not recognize that it was
inside a group. This patch sets the OPTION_BEGIN flag for BEGIN, COMMIT, ROLLBACK,
and XID events. It also adds a check for being inside a group before decreasing the
slave skip counter to zero (see the sketch below).
Begin_query_log_event was not marked as an event that cannot end a group; this is
now corrected.
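A rough model of that counter rule, for illustration only (this is not the
replication code, and the function name is made up): while the applier is inside a
group the counter may drop to 1 but never to 0, so skipping cannot stop
mid-transaction.

  #include <cassert>

  static long next_skip_count(long counter, bool inside_group)
  {
    if (counter <= 0)
      return counter;                 // nothing left to skip
    if (counter == 1 && inside_group)
      return 1;                       // hold at 1 until the group ends
    return counter - 1;
  }

  int main()
  {
    assert(next_skip_count(2, true)  == 1);  // still allowed to decrease
    assert(next_skip_count(1, true)  == 1);  // but not to zero inside a group
    assert(next_skip_count(1, false) == 0);  // group boundary: skipping ends
    return 0;
  }
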
Problem: telling the optimizer that a function (Item_func_inet_ntoa)
cannot return NULL values, when in fact it can, leads to unexpected
results (in this case, group key creation/comparison is broken).
Fix: Item_func_inet_ntoa::maybe_null should be set properly.
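A self-contained model of why a wrong nullability flag breaks grouping; this is
illustrative only and does not reflect the optimizer's real key layout: a key
buffer reserves a NULL-indicator byte only for expressions flagged as nullable,
so a NULL produced by a "never NULL" expression collides with a real value.

  #include <iostream>
  #include <optional>
  #include <string>

  static std::string make_group_key(const std::optional<long> &value, bool maybe_null)
  {
    std::string key;
    if (maybe_null)
      key.push_back(value ? 0 : 1);           // NULL-indicator byte
    long v = value.value_or(0);               // NULL degrades to 0 in the buffer
    key.append(reinterpret_cast<const char *>(&v), sizeof(v));
    return key;
  }

  int main()
  {
    // maybe_null wrongly false: NULL and 0 produce identical keys (prints 1)
    std::cout << (make_group_key(std::nullopt, false) == make_group_key(0L, false)) << '\n';
    // maybe_null set correctly: the indicator byte keeps them apart (prints 0)
    std::cout << (make_group_key(std::nullopt, true) == make_group_key(0L, true)) << '\n';
    return 0;
  }
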
"CSV does not work with NULL value in datetime fields"
Attempting to insert a row with a NULL value for a DATETIME field
results in a CSV file which the storage engine cannot read.
Don't blindly assume that "0" is acceptable for all field types,
Since CSV does not support NULL, we find out from the field the
default non-null value.
Do not permit the creation of a table with a nullable columns.
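An illustrative sketch only, not the storage-engine code: the point is that the
text written into the CSV file has to be a valid literal for the field's own type,
so the per-type default has to come from the field rather than being a bare "0".

  #include <iostream>
  #include <string>

  enum class FieldType { Integer, Varchar, Datetime };

  static std::string default_non_null_literal(FieldType type)
  {
    switch (type) {
      case FieldType::Integer:  return "0";
      case FieldType::Varchar:  return "";
      case FieldType::Datetime: return "0000-00-00 00:00:00";
    }
    return "";
  }

  int main()
  {
    std::cout << default_non_null_literal(FieldType::Datetime) << '\n';
    return 0;
  }
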
The general log write function (general_log_print) uses printf-style
arguments which need to be pre-processed, meaning that all arguments
are copied into a single buffer. The problem is that the buffer size is
constant (1022 characters) while queries can be much larger than this.
The solution is to introduce a new log write function that accepts a
buffer and its length as arguments. The function is to be used when
formatted output is not required, which is the case for almost all
query write-to-log calls (a sketch of the two paths follows below).
This is an incompatible change with respect to the log format of prepared
statements.
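A minimal sketch of the two logging paths; the names are illustrative, not the
server's actual functions: the printf-style path formats into a fixed
1022-character buffer and therefore truncates long queries, while the
buffer-plus-length path writes the text in full.

  #include <cstdarg>
  #include <cstdio>
  #include <string>

  static std::string log_line;   // stands in for the general log

  // Old-style entry point: formatted output through a fixed-size buffer.
  static void log_print(const char *fmt, ...)
  {
    char buf[1022 + 1];
    va_list ap;
    va_start(ap, fmt);
    std::vsnprintf(buf, sizeof(buf), fmt, ap);   // silently truncates
    va_end(ap);
    log_line.assign(buf);
  }

  // New-style entry point: no formatting, caller supplies buffer and length.
  static void log_write(const char *query, size_t length)
  {
    log_line.assign(query, length);
  }

  int main()
  {
    std::string query(5000, 'x');                 // longer than 1022 characters
    log_print("%s", query.c_str());
    std::printf("printf-style path kept %zu chars\n", log_line.size());   // 1022
    log_write(query.c_str(), query.size());
    std::printf("buffer+length path kept %zu chars\n", log_line.size());  // 5000
    return 0;
  }
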
No warning was generated when a TIMESTAMP with a non-zero time part
was converted to a DATE value. This caused index lookups to assume
that the conversion was valid and to return rows matching a
comparison between a TIMESTAMP value and a DATE keypart.
Fixed by generating a warning on such a truncation, as sketched below.
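A hedged sketch of the rule the fix introduces; the types and names are
illustrative, not server code: dropping a non-zero time part during the
conversion must be reported as a truncation, so the comparison is not treated
as exact.

  #include <iostream>

  struct Datetime { int year, month, day, hour, minute, second; };
  struct Date     { int year, month, day; };

  static Date to_date(const Datetime &dt, bool *warn_truncated)
  {
    *warn_truncated = (dt.hour != 0 || dt.minute != 0 || dt.second != 0);
    return {dt.year, dt.month, dt.day};
  }

  int main()
  {
    bool warn = false;
    to_date({2008, 1, 1, 12, 30, 0}, &warn);
    std::cout << "truncation warning: " << warn << '\n';   // prints 1
    return 0;
  }
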
The buffer used when setting variables was not dimensioned to accommodate
the trailing '\0'. An overflow by one character was therefore possible.
This changeset corrects the limits to prevent such overflows; a minimal
illustration follows.
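A minimal illustration of the off-by-one, not the actual server code: a buffer
sized for the value alone has no room for the terminating '\0', so the copy
writes one byte past the end.

  #include <cstring>
  #include <vector>

  int main()
  {
    const char *value = "max_connections=100";
    const size_t len = std::strlen(value);

    std::vector<char> too_small(len);        // no room for the terminator
    // std::strcpy(too_small.data(), value); // would write len + 1 bytes: overflow

    std::vector<char> correct(len + 1);      // the fix: one extra byte for '\0'
    std::strcpy(correct.data(), value);
    return 0;
  }
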
Previously, UDF *_init functions were passed constant strings with erroneous
lengths: the length came from the containing variable's size, not from the
length of the value itself.
Now the *_init functions get the constant as a null-terminated string, with
the correct length supplied too.
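A hedged sketch of a UDF *_init function under the classic (pre-8.0) UDF
interface from mysql.h; "my_inet_check" is a made-up example name. With the fix,
a constant string argument arrives null-terminated in args->args[0] and
args->lengths[0] holds the length of the value itself.

  #include <mysql.h>
  #include <cstring>

  extern "C" my_bool my_inet_check_init(UDF_INIT *initid, UDF_ARGS *args,
                                        char *message)
  {
    if (args->arg_count != 1 || args->arg_type[0] != STRING_RESULT) {
      std::strcpy(message, "my_inet_check() expects one string argument");
      return 1;                             // non-zero refuses the call
    }
    // args->args[0] is non-NULL only for constant arguments; it can now be
    // trusted both as a C string and together with args->lengths[0].
    if (args->args[0] != nullptr && args->lengths[0] == 0) {
      std::strcpy(message, "my_inet_check() called with an empty constant");
      return 1;
    }
    initid->maybe_null = 1;                 // the function itself may return NULL
    return 0;
  }
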
Problem:
Crash due to use of an uninitialised mutex when generating auto_increment
values for a partitioned temporary table.
Fix:
Only lock (using the mutex) if the table is not temporary; a sketch of the
pattern follows.
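An illustrative sketch only, with std::mutex standing in for the partition
handler's shared auto-increment mutex: the lock is taken only for regular
tables, since for a temporary table it was never initialised and is not needed,
a temporary table being private to one connection.

  #include <mutex>

  struct PartitionedTable {
    bool is_temporary;
    long next_auto_inc;
    std::mutex *auto_inc_mutex;   // nullptr / uninitialised for temporary tables
  };

  static long get_auto_increment(PartitionedTable &t)
  {
    if (!t.is_temporary)
      t.auto_inc_mutex->lock();
    const long value = t.next_auto_inc++;
    if (!t.is_temporary)
      t.auto_inc_mutex->unlock();
    return value;
  }

  int main()
  {
    PartitionedTable tmp{true, 1, nullptr};   // temporary: no mutex, no locking
    return get_auto_increment(tmp) == 1 ? 0 : 1;
  }
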
The problem was that the RETURNS column in the mysql.proc table was
CHAR(64). That was not enough for storing long-named datatypes.
The fix is to change CHAR(64) to LONGBLOB, and to issue warnings
at the time a stored routine is created if some data is truncated
during writing into mysql.proc.