write()/read()
Sometimes stopping or restarting the master or the slave causes a
network error, which can produce the 'invalid file descriptor
-1 in syscall write()/read()' warnings. All of the test
cases involved, except rpl_slave_load_remove_tmpfile, fall into this
network-error category, so the warnings are expected.
The 'rpl_slave_load_remove_tmpfile' case is a file error instead,
but it deliberately injects that error with the following code:
  DBUG_EXECUTE_IF("remove_slave_load_file_before_write",
                  my_close(fd, MYF(0)); fd= -1; my_delete(fname, MYF(0)););
so its warning is expected too.
To fix the problem, the valgrind warnings are added to the global
suppression list so that they are suppressed.
The test case rpl_binlog_corruption fails on Windows because, when a
line is added to the binary log index file, it gets terminated
with CR+LF (which is the normal case on Windows, but not on
Unixes, which use LF only). This causes a mismatch between the relay
log names, so mysqld reports that it cannot find the log file.
We fix this by creating the instrumented index file through
mysqld itself, i.e. using SELECT ... INTO DUMPFILE ..., instead of
relying on OS commands such as: -- echo "..." >
index. These changes go into the following file and make the procedure
platform independent:
include/setup_fake_relay_log.inc
Side note: when using SELECT ... INTO DUMPFILE ..., one needs to
check whether mysqld is running with secure_file_priv. If it is, we do
it in two steps: 1. create the file in the allowed location;
2. move it to the datadir. If it is not, then we just create the
file directly in the datadir (so step 2 is not needed).
Performing a fulltext prefix search (a word with the truncation
operator) may cause an infinite loop. The ft_min_word_len value
does not actually matter.
The problem was introduced along with the "smarter index merge"
optimization.
There were two problems:
The first was the symptom, caused by bad error handling in
ha_partition: it did not handle print_error etc. when
it had no partitions (when used by the dummy handler).
The second was the real problem: when dropping tables,
the table type (storage engine) recorded when the lock
was asked for was reused, rather than the table type in effect
when the exclusive name lock was gained, so tables could be
deleted from the wrong storage engine.
The solution for the first problem was to accept some handler
calls to the partitioning handler even if it was not set up
with any partitions, and, where possible, to fall back
to the base handler's default functions (see the sketch below).
The solution for the second problem was to remove the optimization
of reusing the definition from the cache and instead always check
the frm file while holding the LOCK_open mutex.
(Updated with a fix for a debug print crash and better
comments as requested by the reviewer, and removed the optimization
to avoid reading the frm file.)
------------------------------------------------------------------------
r6488 | sunny | 2010-01-21 02:55:08 +0200 (Thu, 21 Jan 2010) | 2 lines
Changed paths:
M /branches/5.1/mysql-test/innodb-autoinc.result
M /branches/5.1/mysql-test/innodb-autoinc.test
branches/5.1: Factor out test for bug#44030 from innodb-autoinc.test
into separate test/result files.
------------------------------------------------------------------------
r6489 | sunny | 2010-01-21 02:57:50 +0200 (Thu, 21 Jan 2010) | 2 lines
Changed paths:
A /branches/5.1/mysql-test/innodb-autoinc-44030.result
A /branches/5.1/mysql-test/innodb-autoinc-44030.test
branches/5.1: Factor out test for bug#44030 from innodb-autoinc.test
into separate test/result files.
------------------------------------------------------------------------
r6492 | sunny | 2010-01-21 09:38:35 +0200 (Thu, 21 Jan 2010) | 1 line
Changed paths:
M /branches/5.1/mysql-test/innodb-autoinc-44030.test
branches/5.1: Add reference to bug#47621 in the comment.
------------------------------------------------------------------------
r6535 | sunny | 2010-01-30 00:08:40 +0200 (Sat, 30 Jan 2010) | 11 lines
Changed paths:
M /branches/5.1/handler/ha_innodb.cc
branches/5.1: Undo the change from r6424. We need to return DB_SUCCESS even
if we were unable to initialize the table autoinc value. This is required for
the open to succeed. The only condition we currently treat as a hard error
is if the autoinc field instance passed in by MySQL is NULL.
Previously if the table autoinc value was 0 and the next value was requested
we had an assertion that would fail. Change that assertion and treat a value
of 0 to mean that the autoinc system is unavailable. Generation of next
value will now return failure.
rb://237
------------------------------------------------------------------------
r6536 | sunny | 2010-01-30 00:13:42 +0200 (Sat, 30 Jan 2010) | 6 lines
Changed paths:
M /branches/5.1/handler/ha_innodb.cc
M /branches/5.1/mysql-test/innodb-autoinc.result
M /branches/5.1/mysql-test/innodb-autoinc.test
branches/5.1: Check *first_value every time against the column max
value and set *first_value to next autoinc if it's > col max value.
ie. not rely on what is passed in from MySQL.
[49497] Error 1467 (ER_AUTOINC_READ_FAILED) on inserting a negative value
rb://236
------------------------------------------------------------------------
r6537 | sunny | 2010-01-30 00:35:00 +0200 (Sat, 30 Jan 2010) | 2 lines
Changed paths:
M /branches/5.1/handler/ha_innodb.cc
M /branches/5.1/mysql-test/innodb-autoinc.result
M /branches/5.1/mysql-test/innodb-autoinc.test
branches/5.1: Undo r6536.
------------------------------------------------------------------------
r6538 | sunny | 2010-01-30 00:43:06 +0200 (Sat, 30 Jan 2010) | 6 lines
Changed paths:
M /branches/5.1/handler/ha_innodb.cc
M /branches/5.1/mysql-test/innodb-autoinc.result
M /branches/5.1/mysql-test/innodb-autoinc.test
branches/5.1: Check *first_value every time against the column max
value and set *first_value to next autoinc if it's > col max value.
ie. not rely on what is passed in from MySQL.
[49497] Error 1467 (ER_AUTOINC_READ_FAILED) on inserting a negative value
rb://236
------------------------------------------------------------------------
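A small standalone C++ sketch of the clamping rule described in r6536/r6538 above; the names are illustrative and do not come from ha_innodb.cc:

  #include <cstdint>
  #include <cstdio>

  // Illustrative clamp: the next auto-increment value is always checked
  // against the column's maximum instead of trusting the value passed in.
  // Not the actual InnoDB implementation.
  static uint64_t next_autoinc(uint64_t requested, uint64_t col_max_value) {
    return requested > col_max_value ? col_max_value : requested;
  }

  int main() {
    const uint64_t tinyint_unsigned_max = 255;  // example column maximum
    std::printf("%llu\n",
                (unsigned long long) next_autoinc(300, tinyint_unsigned_max));
    std::printf("%llu\n",
                (unsigned long long) next_autoinc(10, tinyint_unsigned_max));
    return 0;
  }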
REVOKE/GRANT; ALTER EVENT.
The following statements support CURRENT_USER() wherever a user is needed:
DROP USER
RENAME USER CURRENT_USER() ...
GRANT ... TO CURRENT_USER()
REVOKE ... FROM CURRENT_USER()
ALTER DEFINER = CURRENT_USER() EVENT
However, when these statements are binlogged, CURRENT_USER() is binlogged
simply as 'CURRENT_USER()'; it is not expanded to the real user name. When the slave
executes the log event, 'CURRENT_USER()' is expanded to the user of the slave
SQL thread, but the SQL thread's user name is always NULL. This breaks replication.
After this patch, all of the above statements are rewritten when they are binlogged:
CURRENT_USER() is expanded to the real user's name and host.
Fixed two problems:
1. test_if_order_by_key() was continuing on the primary key
as if it had a primary key suffix (as the secondary keys do).
This led to crashes in ORDER BY <pk>,<pk>.
Fixed by not treating the primary key as a secondary one
and not depending on it being clustered with a primary key.
2. The cost calculation was trying to read the records
per key when operating on ORDER BYs that order on all of the
secondary key + some of the primary key.
This led to crashes because of out-of-bounds array access.
Fixed by assuming we'll find 1 record per key in such cases
(see the sketch after this list).
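A self-contained C++ sketch of the guard used for the second fix; the names are illustrative and are not the optimizer's actual code:

  #include <cstddef>
  #include <cstdio>
  #include <vector>

  // Illustrative guard for the cost estimate: if the statistics array does
  // not cover the requested key part, assume one record per key value
  // instead of reading past the end of the array.
  static double rec_per_key(const std::vector<double> &stats,
                            std::size_t part) {
    if (part >= stats.size())
      return 1.0;                 // out of range: assume 1 record per key
    return stats[part];
  }

  int main() {
    std::vector<double> stats = {20.0, 4.0};       // stats for two key parts
    std::printf("%.1f\n", rec_per_key(stats, 1));  // covered part: 4.0
    std::printf("%.1f\n", rec_per_key(stats, 3));  // uncovered part: 1.0
    return 0;
  }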
column is used for ORDER BY
Problem: filesort is not meant for zero-length sort data
(e.g. char(0)); this leads to a server crash.
Fix: disregard the sort order if the sort data record length is 0 (nothing
to sort).
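A tiny C++ sketch of the guard, under the assumption (mine, for illustration) that the decision reduces to skipping the sort when the record length is zero; the names are not from the filesort code:

  #include <cstddef>
  #include <cstdio>

  // With a record length of 0 there is nothing to compare, so the sort step
  // is skipped entirely instead of being run on empty keys.
  static bool needs_filesort(std::size_t sort_record_length,
                             std::size_t num_records) {
    return sort_record_length != 0 && num_records > 1;
  }

  int main() {
    std::printf("char(0) key:  %d\n", needs_filesort(0, 100));   // 0: skip
    std::printf("char(10) key: %d\n", needs_filesort(10, 100));  // 1: sort
    return 0;
  }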
number to InnoDB index structure directly. Fix Bug #47622:
"the new index is added before the existing ones in MySQL,
but after one in SE".
rb://215, approved by Marko
The problem was that a DROP TRIGGER statement inside a stored
procedure could cause a crash in subsequent invocations. This
was due to the addition, on the first execution, of a temporary
table reference to the stored procedure's query table list. In
a subsequent invocation, there would be an attempt to reinitialize
the temporary table reference, which by then was already gone.
The solution is to back up and reset the query table list each
time a trigger needs to be dropped. This ensures that any temporary
changes to the query table list are discarded. It is safe to
do so at this time, as DROP TRIGGER is restricted from more
complicated scenarios (i.e., it is not allowed within stored functions,
etc.).
The server crashes when accessing an ARCHIVE table with a missing
.ARZ file.
When opening a table, ARCHIVE did not properly pass the error
code from the lower-level azopen() through to the higher-level open()
method.
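A self-contained C++ sketch of the error pass-through idea; the function names are illustrative and do not correspond to the ARCHIVE engine code:

  #include <cerrno>
  #include <cstdio>

  // A lower-level open that reports failure via an errno-style code, and a
  // higher-level open that propagates that code instead of swallowing it.
  static int low_level_open(const char *path, int *error_out) {
    std::FILE *f = std::fopen(path, "rb");
    if (!f) {
      *error_out = errno;   // e.g. ENOENT when the data file is missing
      return -1;
    }
    std::fclose(f);
    *error_out = 0;
    return 0;
  }

  static int table_open(const char *path) {
    int err = 0;
    if (low_level_open(path, &err) != 0)
      return err;           // pass the real error up, do not pretend success
    return 0;
  }

  int main() {
    int rc = table_open("/nonexistent/table.ARZ");
    std::printf("open returned %d (%s)\n", rc,
                rc ? "error propagated" : "ok");
    return 0;
  }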
Bulk REPLACE or bulk INSERT ... ON DUPLICATE KEY UPDATE may
break a dynamic-record MyISAM table.
The problem is limited to bulk REPLACE and INSERT ... ON
DUPLICATE KEY UPDATE, because only these operations may
be done via UPDATE internally and may request the write cache.
When flushing the write cache, MyISAM may write the remaining
cached data at the wrong position. Fixed by requesting the write
cache to seek to the correct position.
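A toy C++ model of the write-cache fix; the structure is illustrative and is not the MyISAM IO_CACHE code:

  #include <cstdio>
  #include <string>
  #include <vector>

  // Buffered bytes remember the offset they belong at; flushing seeks there
  // first instead of writing at whatever the current file position happens
  // to be.
  struct WriteCache {
    long target_offset = 0;          // where the cached bytes belong
    std::vector<char> pending;

    void flush(std::FILE *f) {
      if (pending.empty()) return;
      std::fseek(f, target_offset, SEEK_SET);  // seek to the correct position
      std::fwrite(pending.data(), 1, pending.size(), f);
      target_offset += (long) pending.size();
      pending.clear();
    }
  };

  int main() {
    std::FILE *f = std::fopen("cache_demo.dat", "w+b");
    if (!f) return 1;
    WriteCache cache;
    std::string data = "cached record data";
    cache.pending.assign(data.begin(), data.end());
    std::fseek(f, 100, SEEK_SET);    // some other code moved the file position
    cache.flush(f);                  // the data still lands at offset 0
    std::fclose(f);
    return 0;
  }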
table and view...
Invalid memory reads occurred after a query referencing a MyISAM table
multiple times with a write lock. The invalid memory reads may
lead to a server crash, valgrind warnings, or incorrect values
in INFORMATION_SCHEMA.TABLES.{TABLE_ROWS, DATA_LENGTH,
INDEX_LENGTH, ...}.
This may happen when one of the table instances gets closed
after a query, e.g. when the open tables cache runs out of slots. UNION,
MERGE and VIEW are irrelevant.
The problem was that MyISAM did not restore the state info
pointer to its default value.
In the case of CREATE VIEW, the subselect transformation does not happen (see JOIN::prepare).
During fix_fields(), Item_row may call the is_null() method for its arguments, which
leads to item evaluation (of the wrong subselect in our case, since the
transformation did not happen beforehand). This is_null() call
does not make sense for CREATE VIEW.
Note:
Only Item_row is affected, because other items do not call is_null()
on their arguments during fix_fields().
SHOW CREATE TABLE on a view (v1) that contains a function whose
statement uses another view (v2) could trigger an infinite loop
if the view referenced within the function causes a warning to
be raised while opening the said view (v2).
The problem was an infinite loop over the stack of internal error
handlers. The problem would be triggered if the stack contained
two or more handlers and the first two handlers did not handle the
raised condition. In this case, the loop variable would always
point to the second handler in the stack.
The solution is to correct the loop variable assignment so that
the loop is able to iterate over all handlers in the stack.
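A toy C++ model of walking a stack of error handlers; the names are illustrative, not the server's Internal_error_handler stack, and the comment notes the bug class described above:

  #include <cstdio>
  #include <vector>

  struct Handler {
    const char *name;
    bool handles;                    // whether this handler handles the error
  };

  static bool handle_condition(const std::vector<Handler> &stack) {
    // Correct iteration: the loop variable advances through every handler.
    // The bug described above was equivalent to re-deriving the loop
    // variable from a fixed starting point each pass, so it always pointed
    // at the second handler and the loop never terminated.
    for (const Handler &h : stack) {
      std::printf("asking handler %s\n", h.name);
      if (h.handles)
        return true;                 // condition handled, stop walking
    }
    return false;                    // nobody handled it
  }

  int main() {
    std::vector<Handler> stack = {{"first", false}, {"second", false},
                                  {"third", true}};
    std::printf("handled: %s\n", handle_condition(stack) ? "yes" : "no");
    return 0;
  }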
in multitable delete/subquery
SQL_BUFFER_RESULT should not have an effect on non-SELECT
statements according to our documentation.
Fixed by not passing it through to multi-table DELETE (similarly
to how it's done for multi-table UPDATE).
The problem was that a failure to open a view was not being
properly handled. When opening a view with an unknown definer,
the open procedure would be treated as successful and would
later crash when attempting to lock the view (which was not
opened to begin with).
The solution is to skip further processing when opening a
table if it fails with a fatal error.
being logged to slow query log
The problem is that the execution time for a multi-statement
stored procedure as a whole may not be accurate, so it may not
be entered into the slow query log even if the total time
exceeds long_query_time. The reason for this is that
THD::utime_after_lock, used for the time calculation, may be reset
at the start of each new statement, possibly leaving the total
SP execution time equal to the time spent executing the last
statement in the SP.
This patch stores the utime at the start of SP execution and
restores it on exit from SP execution. A test is added.
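A standalone C++ illustration of the save/restore idea; the variable and function names are stand-ins, not the server's THD members:

  #include <chrono>
  #include <cstdio>
  #include <thread>

  // The "after lock" timestamp is captured before a multi-statement routine
  // runs and put back afterwards, so the total elapsed time is measured
  // against the routine's start rather than the start of its last statement.
  static long long g_utime_after_lock = 0;   // stand-in for the THD member

  static long long now_usec() {
    using namespace std::chrono;
    return duration_cast<microseconds>(
               steady_clock::now().time_since_epoch()).count();
  }

  static void run_statement(int ms) {
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
    g_utime_after_lock = now_usec();         // each statement resets the mark
  }

  int main() {
    g_utime_after_lock = now_usec();
    long long saved = g_utime_after_lock;    // save on entry to the procedure
    run_statement(30);
    run_statement(30);
    g_utime_after_lock = saved;              // restore on exit
    long long total = now_usec() - g_utime_after_lock;
    std::printf("total procedure time: %lld usec\n", total);
    return 0;
  }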
error causes debug assertion
The IGNORE option of the multiple-table UPDATE command was
not intended to suppress errors caused by the
sql_safe_updates mode. This mode raises an error if the
execution of UPDATE does not use a key for row retrieval,
and it should continue to do so regardless of the IGNORE option.
However, the implementation of IGNORE does not support
exceptions to the rule; it always converts errors to
warnings and cannot be extended. The Internal_error_handler
interface offers the infrastructure to handle individual
errors, making sure that the error raised by
sql_safe_updates is not silenced.
Fixed by implementing an Internal_error_handler and using it
for UPDATE IGNORE commands.
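A self-contained C++ sketch of the selective downgrade idea; the interface below is illustrative and is not the server's actual Internal_error_handler API, and the error numbers are examples only:

  #include <cstdio>

  // A per-statement error handler that downgrades errors to warnings,
  // except for one specific error code that must stay fatal.
  struct ErrorHandler {
    virtual ~ErrorHandler() {}
    // Return true if the condition was handled (downgraded), false to let
    // it propagate as a real error.
    virtual bool handle(int errcode) = 0;
  };

  struct IgnoreHandler : ErrorHandler {
    int do_not_silence;                       // e.g. the "safe updates" error
    explicit IgnoreHandler(int keep_fatal) : do_not_silence(keep_fatal) {}
    bool handle(int errcode) override {
      if (errcode == do_not_silence)
        return false;                         // keep this one as a hard error
      std::printf("downgraded error %d to a warning\n", errcode);
      return true;
    }
  };

  int main() {
    const int SAFE_UPDATES_ERROR = 1175;      // illustrative error number
    IgnoreHandler h(SAFE_UPDATES_ERROR);
    std::printf("1062 handled: %d\n", h.handle(1062));
    std::printf("1175 handled: %d\n", h.handle(SAFE_UPDATES_ERROR));
    return 0;
  }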
The 'rpl_get_master_version_and_clock' test verifies whether the slave I/O
thread tries to reconnect to the master when it attempts to get the values of
UNIX_TIMESTAMP and SERVER_ID from the master during a network disconnection.
The master server is therefore restarted to produce the transient network
disconnection; during that period, COM_REGISTER_SLAVE failures are
written to the server log file when the slave I/O thread tries to
register on the master.
To fix the problem, suppress the COM_REGISTER_SLAVE failures in the server log
file with an mtr suppression, because they are expected.
When replicating from a 4.1 master to a 5.0 slave, START SLAVE UNTIL can stop too late.
The event's length, which is needed for calculating the beginning of an event,
did not correspond to the master's genuine information at the event's execution time.
That piece of information was changed when the event was relay-logged, due to the
binlog_version < 4 event conversion performed by the I/O thread.
Fixed by storing the master's genuine Query_log_event size in a new status
variable when the event is relay-logged. The stored information is extracted at event
execution time and then used to calculate the correct start position of the event
in the until-pos stopping routine.
The new status variable's mechanism is only active when the event comes
from a master of version < 5.0 (binlog_version < 4).
Detailed revision comments:
r6471 | calvin | 2010-01-16 01:43:27 +0200 (Sat, 16 Jan 2010) | 4 lines
branches/5.1: fix bug#49396: main.innodb test fails in embedded mode
Change replace_result to use $MYSQLD_DATADIR. Tested in both embedded
mode and normal server mode.