With statement- or mixed-mode logging, "LOAD DATA INFILE" queries
are written to the binlog using special types of log events.
When mysqlbinlog reads such events, it re-creates the file in a
temporary directory with a generated filename and outputs a
"LOAD DATA INFILE" query where the filename is replaced by the
generated one. The temporary file is not deleted by mysqlbinlog
after it terminates.
To fix the problem, mixed mode now switches to row-based logging for
such statements. For SBR, the behavior is documented to remind users
that the temporary file is left behind in the temporary directory.
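As an illustration only (a standalone sketch, not mysqlbinlog's code; the path, the naming scheme and the table name are made up), the behavior described above amounts to:

  #include <unistd.h>

  #include <cstdio>
  #include <string>

  // Hypothetical helper: re-create the file carried by the log events under
  // a generated name in a temporary directory (mysqlbinlog's real naming
  // scheme differs).
  static std::string write_temp_copy(const std::string &orig_name,
                                     const char *data, std::size_t len) {
    std::string path = "/tmp/" + orig_name + "-" +
                       std::to_string(static_cast<long>(getpid()));
    if (std::FILE *f = std::fopen(path.c_str(), "wb")) {
      std::fwrite(data, 1, len, f);
      std::fclose(f);
    }
    return path;
  }

  int main() {
    std::string tmp = write_temp_copy("data.csv", "1\n2\n3\n", 6);
    // The emitted statement references the generated file; the file has to
    // stay around so that the printed statement can still be replayed.
    std::printf("LOAD DATA LOCAL INFILE '%s' INTO TABLE t1;\n", tmp.c_str());
    return 0;
  }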
/*![:version:] Query Code */, where [:version:] is a sequence of 5
digits representing the MySQL server version (e.g. /*!50200 ... */),
is a special comment: the query inside it is executed only on servers
whose version is equal to or higher than the version appearing in the
comment. This leads to a security issue when the slave's version is
higher than the master's: a malicious user can escalate his or her
privileges on the slave. Because the slave SQL thread runs with SUPER
privileges, it can execute queries that the user does not have the
privileges to run on the master.
This bug is fixed with the logic below:
- Replace '!' with ' ' in the magic comments that were not applied on
the master. They then become ordinary comments and will not be applied
on the slave.
- Example:
'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /*!99999 ,(3)*/'
will be binlogged as
'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /* 99999 ,(3)*/'
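A minimal sketch of that neutralization idea (not the server's binlog-writing code; the function name is made up and 50145 merely stands in for a 5.1.45 master):

  #include <cctype>
  #include <iostream>
  #include <string>

  // Replace the '!' of every 5-digit versioned comment whose version is
  // above the (pretend) master version, turning it into an ordinary comment.
  static std::string neutralize_unapplied(std::string query,
                                          long master_version) {
    for (std::size_t pos = 0;
         (pos = query.find("/*!", pos)) != std::string::npos; pos += 3) {
      std::size_t i = pos + 3;
      std::string digits;
      while (i < query.size() &&
             std::isdigit(static_cast<unsigned char>(query[i])))
        digits += query[i++];
      if (digits.size() == 5 && std::stol(digits) > master_version)
        query[pos + 2] = ' ';  // '!' -> ' ': no server will ever execute it
    }
    return query;
  }

  int main() {
    std::cout << neutralize_unapplied(
        "INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /*!99999 ,(3)*/", 50145)
              << "\n";
    // Prints: INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /* 99999 ,(3)*/
  }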
table with active trx
Essentially, the problem is that InnoDB does an implicit commit
when a cursor (table handler) is unlocked/closed, creating
a dissonance between the transaction state within the server
layer and the storage engine layer. Theoretically, a statement
transaction can encompass several table instances in a similar
manner to a multiple statement transaction, hence it does not
make sense to limit a statement transaction to the lifetime of
the table instances (cursors) used within it.
Since this particular instance of the problem is only triggerable
on 5.1 and is masked on 5.5 due to 2PC being skipped (the assertion is
in the prepare phase of 2PC), the solution (which is less risky) is
to explicitly end the transaction before the cached table is unlocked
during rename table.
The patch is to be null merged into trunk.
Problem: when SHOW BINLOG EVENTS was issued, it increased the value of
@@session.max_allowed_packet. This allowed a non-root user to increase
the amount of memory used by her thread arbitrarily. This removed the
bound on the amount of system resources used by a client, and so it
presented a security risk (DoS attack).
Fix: it is correct to increase the value of @@session.max_allowed_packet
while executing SHOW BINLOG EVENTS (see BUG 30435). However, the
increase should only be temporary. Thus, the fix is to restore the value
when SHOW BINLOG EVENTS ends.
The value of @@session.max_allowed_packet is also increased in
mysql_binlog_send (i.e., the binlog dump thread). It is not clear if this
can cause any trouble, since normally the client that issues
COM_BINLOG_DUMP will not issue any other commands that would be affected
by the increased value of @@session.max_allowed_packet. However, we
restore the value just in case.
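The shape of the fix, as a minimal sketch with illustrative names (the real code works on the session's variables inside the server and must restore the value on every exit path):

  #include <algorithm>
  #include <cstdint>

  struct Session {               // stand-in for the server's session state
    std::uint64_t max_allowed_packet;
  };

  constexpr std::uint64_t kPacketCapForBinlog = 1ULL << 30;  // illustrative

  void show_binlog_events(Session *session) {
    const std::uint64_t saved = session->max_allowed_packet;
    session->max_allowed_packet =
        std::max(session->max_allowed_packet, kPacketCapForBinlog);
    // ... read events from the binlog and send them to the client; large
    // events now fit within the temporarily raised limit ...
    session->max_allowed_packet = saved;  // restore before returning
  }

An RAII guard would make the restore robust against early returns; the sketch keeps it explicit for clarity.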
Merge up to sunny.bains@oracle.com-20100625081841-ppulnkjk1qlazh82 .
There are 8 more changesets in mysql-5.1-innodb, but PB2 shows a
failure for a test added in one of them. If that is resolved quickly
then those 8 more changesets will be merged too.
DROP USER
RENAME USER CURRENT_USER() ...
GRANT ... TO CURRENT_USER()
REVOKE ... FROM CURRENT_USER()
ALTER DEFINER = CURRENT_USER() EVENT
But when these statements are binlogged, CURRENT_USER() is logged simply
as 'CURRENT_USER()'; it is not expanded to the real user name. When the
slave executes the log event, 'CURRENT_USER()' is expanded to the user of
the slave SQL thread, but the SQL thread's user name is always NULL. This
breaks replication.
After this patch, the session's user is written into the query log events
if these statements call CURRENT_USER() or if 'ALTER EVENT' does not
assign a definer.
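Conceptually, the expansion looks like the sketch below; the helper name and the string-rewriting approach are illustrative only, not the server's query-event code:

  #include <iostream>
  #include <string>

  // Replace every CURRENT_USER() token with the invoking session's account,
  // which is what the binlogged statement should carry instead of the call.
  static std::string expand_current_user(std::string query,
                                         const std::string &user,
                                         const std::string &host) {
    const std::string token = "CURRENT_USER()";
    const std::string literal = "'" + user + "'@'" + host + "'";
    std::size_t pos;
    while ((pos = query.find(token)) != std::string::npos)
      query.replace(pos, token.size(), literal);
    return query;
  }

  int main() {
    // Executed on the master by root@localhost:
    std::cout << expand_current_user("GRANT SELECT ON db.* TO CURRENT_USER()",
                                     "root", "localhost") << "\n";
    // Conceptually binlogged as: GRANT SELECT ON db.* TO 'root'@'localhost'
  }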
ha_innobase::create(): Add the local variable row_type = form->s->row_type.
Adjust it to ROW_TYPE_COMPRESSED when ROW_FORMAT is not specified or inherited
but KEY_BLOCK_SIZE is. Observe the inherited ROW_FORMAT even when it is not
explicitly specified.
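A sketch of that adjustment with simplified stand-in types (the real ha_innobase::create() works on the TABLE share and HA_CREATE_INFO):

  enum row_type { ROW_TYPE_DEFAULT, ROW_TYPE_COMPACT, ROW_TYPE_COMPRESSED };

  enum row_type effective_row_type(enum row_type specified_or_inherited,
                                   unsigned key_block_size) {
    // Honor the inherited ROW_FORMAT even when this statement does not
    // repeat it: start from form->s->row_type rather than a hard default.
    enum row_type rt = specified_or_inherited;

    // If no ROW_FORMAT was specified or inherited but KEY_BLOCK_SIZE was,
    // a compressed row format is implied.
    if (rt == ROW_TYPE_DEFAULT && key_block_size != 0)
      rt = ROW_TYPE_COMPRESSED;

    return rt;
  }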
innodb_bug54679.test: New test, to test the bug and to ensure that there are
no regressions. (The only difference in the test result without the patch
applied is that the first ALTER TABLE changes ROW_FORMAT to Compact.)
mysql_client_binlog_statement
Problem: the server may read from unassigned memory when performing
"wrong" BINLOG queries.
Fix: never read from unassigned memory.
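A minimal sketch of the principle, with a hypothetical decoder helper (not the server's actual function): every offset and length taken from the decoded BINLOG payload is validated against the buffer bounds before it is dereferenced.

  #include <cstddef>
  #include <cstring>

  // Copy a chunk out of the decoded payload only if it lies entirely inside
  // the buffer; reject it otherwise instead of reading past the end.
  bool read_event_chunk(const unsigned char *buf, std::size_t buf_len,
                        std::size_t offset, std::size_t chunk_len,
                        unsigned char *out) {
    if (offset > buf_len || chunk_len > buf_len - offset)
      return false;  // malformed ("wrong") BINLOG statement
    std::memcpy(out, buf + offset, chunk_len);
    return true;
  }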
The Valgrind warning happens because of uninitialized null bytes.
In the row_sel_push_cache_row_for_mysql() function we fill the fetch
cache with the necessary field values; row_sel_store_mysql_rec() is
called for this and leaves the null bytes untouched.
Later, row_sel_pop_cached_row_for_mysql() rewrites the table record
buffer with these uninitialized null bytes. We can see the problem from
the test case:
at 'SELECT...' we call the row_sel_push...->row_sel_store...->row_sel_pop_cached...
chain, which rewrites the table->record[0] buffer with uninitialized null bytes.
When the 'UPDATE...' statement is executed, compare_record uses this buffer
and the Valgrind warning occurs.
The fix is to initialize the null bytes with default values.
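A sketch of what initializing the null bytes amounts to, assuming the usual record layout where the null bitmap occupies the first null_bytes bytes of the row buffer (names are illustrative, not InnoDB's code):

  #include <cstring>

  // Give the null-bitmap bytes of the record buffer a defined value before
  // the row is pushed into the fetch cache, instead of leaving them as
  // whatever happened to be in memory.
  void init_null_bytes(unsigned char *record, std::size_t null_bytes,
                       const unsigned char *default_values) {
    if (default_values)
      std::memcpy(record, default_values, null_bytes);  // table defaults
    else
      std::memset(record, 0xFF, null_bytes);  // conservative: mark all NULL
  }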
Logging slow stored procedures caused the slow log to record
very large lock times. The lock times were the result of a
negative number being cast to an unsigned integer.
The lock time appeared negative because one of the measurement
points was reset after execution, causing it to change order with
the start time of the statement.
This bug is related to bug 47905, which in turn was
introduced by a joint fix for bugs 12480, 12481, 12482 and 11587.
The fix is to reset only start_time before each statement
execution in a stored procedure, while not resetting start_utime or
utime_after_lock, which are used for measuring the
performance of the SP. start_time is used to set the
timestamp on the replication event, which controls how
the slave interprets time functions like NOW().
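A worked illustration of the symptom (the numbers are made up): once the start measurement is reset to a point after utime_after_lock, the signed difference is negative and the cast to an unsigned lock time becomes enormous.

  #include <cstdint>
  #include <cstdio>

  int main() {
    std::uint64_t start_utime = 1000000;       // statement start, microseconds
    std::uint64_t utime_after_lock = 1000500;  // locks acquired 500 us later

    // Buggy behavior: the start measurement is reset after execution, so it
    // ends up later than utime_after_lock; the subtraction wraps around.
    std::uint64_t reset_start = 2000000;
    std::uint64_t bogus_lock_time = utime_after_lock - reset_start;
    std::printf("bogus lock time: %llu us\n",
                (unsigned long long) bogus_lock_time);   // ~1.8e19

    // Fixed behavior: start_utime and utime_after_lock are left alone; only
    // start_time (the replication timestamp) is reset per statement.
    std::printf("real lock time:  %llu us\n",
                (unsigned long long) (utime_after_lock - start_utime));
  }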
When using Unique Keys with nullable parts in RBR, the slave can
choose the wrong row to update. This happens because a table with
a unique key containing nullable parts cannot strictly guarantee
uniqueness. As stated in the manual, for all engines, a UNIQUE
index allows multiple NULL values for columns that can contain
NULL.
We fix this at the slave by extending the checks before assuming
that the row found through a unique index is the correct one (see
the sketch after the list below). This means that when a record (R)
is fetched from the storage engine and a key that is not primary (K)
is used, the server does the following:
- If K is unique and has no nullable parts, it returns R;
- Otherwise, if any field in the before image that is part of K
is NULL, do an index scan;
- If there is no NULL field in the BI part of K, then return R.
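The order of those checks, as a sketch with made-up helper types (only the decision order matters here, not the server's actual API):

  enum FindResult { USE_ROW, DO_INDEX_SCAN };

  struct KeyInfo {
    bool is_unique;
    bool has_nullable_parts;
  };

  FindResult decide_after_key_lookup(const KeyInfo &key,
                                     bool before_image_has_null_in_key) {
    // A unique key without nullable parts really identifies a single row.
    if (key.is_unique && !key.has_nullable_parts)
      return USE_ROW;
    // NULLs in the key columns of the before image break uniqueness, so fall
    // back to scanning the index for the truly matching row.
    if (before_image_has_null_in_key)
      return DO_INDEX_SCAN;
    // No NULL field in the BI part of K: the fetched row can be trusted.
    return USE_ROW;
  }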
A side change: renamed the existing test case file and added a
test case covering the changes in this patch.
In semi-consistent read, only unlock freshly locked non-matching records.
lock_rec_lock_fast(): Return LOCK_REC_SUCCESS,
LOCK_REC_SUCCESS_CREATED, or LOCK_REC_FAIL instead of TRUE/FALSE.
enum db_err: Add DB_SUCCESS_LOCKED_REC for indicating a successful
operation where a record lock was created.
lock_sec_rec_read_check_and_lock(),
lock_clust_rec_read_check_and_lock(), lock_rec_enqueue_waiting(),
lock_rec_lock_slow(), lock_rec_lock(), row_ins_set_shared_rec_lock(),
row_ins_set_exclusive_rec_lock(), sel_set_rec_lock(),
row_sel_get_clust_rec_for_mysql(): Return DB_SUCCESS_LOCKED_REC if a
new record lock was created. Adjust callers.
row_unlock_for_mysql(): Correct the function documentation.
row_prebuilt_t::new_rec_locks: Correct the documentation.
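A caller-side sketch of the new return value, with simplified stand-in names and values (the real enum and prebuilt structure live in the InnoDB sources): DB_SUCCESS_LOCKED_REC is treated as success, but additionally records that this statement created the lock, so semi-consistent read can later release exactly those locks for non-matching rows.

  enum db_err { DB_SUCCESS, DB_SUCCESS_LOCKED_REC, DB_LOCK_WAIT, DB_FAIL };

  struct Prebuilt {                // stand-in for row_prebuilt_t
    int new_rec_locks;             // record locks freshly taken by this call
  };

  db_err handle_lock_result(Prebuilt *prebuilt, db_err lock_result) {
    switch (lock_result) {
      case DB_SUCCESS_LOCKED_REC:
        // A new record lock was created: remember it so it can be released
        // later if the row turns out not to match.
        prebuilt->new_rec_locks++;
        return DB_SUCCESS;         // otherwise treated exactly like success
      case DB_SUCCESS:
        return DB_SUCCESS;         // the lock already existed; nothing to undo
      default:
        return lock_result;        // lock wait / error paths are unchanged
    }
  }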
In semi-consistent read, only unlock freshly locked non-matching records.
Define DB_SUCCESS_LOCKED_REC for indicating a successful operation
where a record lock was created.
lock_rec_lock_fast(): Return LOCK_REC_SUCCESS,
LOCK_REC_SUCCESS_CREATED, or LOCK_REC_FAIL instead of TRUE/FALSE.
lock_sec_rec_read_check_and_lock(),
lock_clust_rec_read_check_and_lock(), lock_rec_enqueue_waiting(),
lock_rec_lock_slow(), lock_rec_lock(), row_ins_set_shared_rec_lock(),
row_ins_set_exclusive_rec_lock(), sel_set_rec_lock(),
row_sel_get_clust_rec_for_mysql(): Return DB_SUCCESS_LOCKED_REC if a
new record lock was created. Adjust callers.
row_unlock_for_mysql(): Correct the function documentation.
row_prebuilt_t::new_rec_locks: Correct the documentation.
Add code to wait for a set of errors.
Add code to wait for an error instead of waiting for the I/O thread to
stop, because after 'START SLAVE' the I/O thread's status may still be
"not running", which does not necessarily mean that the slave I/O thread
has encountered an error.