conflicts:
Text conflict in client/mysqltest.cc
Text conflict in mysql-test/include/wait_until_connected_again.inc
Text conflict in mysql-test/lib/mtr_report.pm
Text conflict in mysql-test/mysql-test-run.pl
Text conflict in mysql-test/r/events_bugs.result
Text conflict in mysql-test/r/log_state.result
Text conflict in mysql-test/r/myisam_data_pointer_size_func.result
Text conflict in mysql-test/r/mysqlcheck.result
Text conflict in mysql-test/r/query_cache.result
Text conflict in mysql-test/r/status.result
Text conflict in mysql-test/suite/binlog/r/binlog_index.result
Text conflict in mysql-test/suite/binlog/r/binlog_innodb.result
Text conflict in mysql-test/suite/rpl/r/rpl_packet.result
Text conflict in mysql-test/suite/rpl/t/rpl_packet.test
Text conflict in mysql-test/t/disabled.def
Text conflict in mysql-test/t/events_bugs.test
Text conflict in mysql-test/t/log_state.test
Text conflict in mysql-test/t/myisam_data_pointer_size_func.test
Text conflict in mysql-test/t/mysqlcheck.test
Text conflict in mysql-test/t/query_cache.test
Text conflict in mysql-test/t/rpl_init_slave_func.test
Text conflict in mysql-test/t/status.test
The original goal of the test, as reported on BUG #25721, is to check whether
a deadlock happens or not when concurrently CREATING/ALTERING/DROPPING the
same server. For that, a procedure (p1) is created that runs the exact same
CREATE/ALTER/DROP statements for 10K iterations, and two different connections
(threads t1 and t2) call p1 concurrently. At the end of the 10K iterations,
the test checks whether there were errors while running the loop (SELECT e > 0).
The problem is that in some cases one thread, t1, may get scheduled with just
enough time to complete its iterations without ever bumping into the other
thread, t2, meaning that t1 never runs into an SQL exception. The other thread,
t2, on the other hand, may run into t1 and never issue some (or any) of its own
statements because it throws an SQLEXCEPTION.
This is probably the case for failures where only one value differs.
Furthermore, there is a third scenario: both threads are scheduled to run
interleaved for each iteration (or even one thread completes all iterations
before the other starts). In this case, both will succeed without any error.
This is probably the case for the failure that reports two different values.
This patch addresses the failure in pushbuild by removing the error counting
and the printout (SELECT e > 0) at the end of the test. A timeout should occur
if the error that the test is checking surfaces.
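A minimal sketch of the kind of test described above; the names p1, the @e
counter and the loop bound are illustrative, not the original test code:

  DELIMITER //
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE i INT DEFAULT 0;
    -- count, rather than abort on, conflicts with the other connection
    DECLARE CONTINUE HANDLER FOR SQLEXCEPTION SET @e = IFNULL(@e, 0) + 1;
    WHILE i < 10000 DO
      CREATE SERVER s FOREIGN DATA WRAPPER mysql OPTIONS (HOST 'localhost');
      ALTER SERVER s OPTIONS (HOST '127.0.0.1');
      DROP SERVER s;
      SET i = i + 1;
    END WHILE;
  END//
  DELIMITER ;
  -- two connections call p1 concurrently; the removed printout was of the
  -- form SELECT @e > 0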
- Additional patch with improved protection by putting it all inside an "eval"
- Calling 'hostpath' on a truncated socket may also croak.
- Remove the need to create any directory parts of "path" inside the function.
- Fix problem with, for example, ./mtr --timer, caused by a new version of "Getopt::Long"
- Evaluating the "$opt" variable as a string returns the name of the parameter
  to be modified instead of "Getopt::Long::Callback", which is the class name
In mtr.check_warnings, `text` was declared as type TEXT, which holds
at most 64K. When the server log grew larger than this, it would be
truncated, and check_warnings would then actually check the error
messages of a previous test and report warnings against the wrong test.
This patch fixes the problem by changing the type of `text` to
MEDIUMTEXT, which holds up to 16M.
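A hypothetical illustration of the type change (not the actual mtr schema);
TEXT holds at most 64K bytes while MEDIUMTEXT holds up to 16M, so a long
server log is no longer cut off before check_warnings scans it:

  CREATE TABLE error_log (
    row_id INT AUTO_INCREMENT PRIMARY KEY,
    `text` MEDIUMTEXT    -- previously TEXT (max 64K)
  );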
Log the output of mysqltest when running check-testcase, together
with errput. Remove the --silent option for mysqltest when running
check-testcase/check-warnings.
bug#33094: Error in upgrading from 5.0 to 5.1 when table contains
triggers
and
#41385: Crash when attempting to repair a #mysql50# upgraded table
with triggers.
Problem:
1. the trigger code didn't take into account that a table name may have
   a "#mysql50#" prefix, which could lead to a failing ASSERT().
2. "ALTER DATABASE ... UPGRADE DATA DIRECTORY NAME" failed
   for databases with the "#mysql50#" prefix if they contained any triggers.
3. mysqlcheck --fix-table-name didn't use UTF8 as the default
   character set, which resulted in (parsing) errors for tables with
   non-Latin symbols in their names and in their trigger definitions.
Fix:
1. properly handle table/database names with "#mysql50#" prefix.
2. handle the --default-character-set mysqlcheck option;
   if mysqlcheck is launched with --fix-table-name or --fix-db-name,
   set the default character set to UTF8 unless a --default-character-set
   option is given.
Note: if the --fix-table-name or --fix-db-name option is given
without the --default-character-set mysqlcheck option,
the default character set is UTF8.
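Sketch of the statements involved; the database name is illustrative:

  # a 5.0-era directory shows up with the #mysql50# prefix after an upgrade
  ALTER DATABASE `#mysql50#my-db` UPGRADE DATA DIRECTORY NAME;
  # or via the client tool, where UTF8 is now implied for these flags:
  #   mysqlcheck --fix-db-name --fix-table-name my-db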
The next-number (AUTO_INCREMENT) field of the table was not initialized
for write rows events, which caused some engines (InnoDB) not to update
the table's auto_increment value correctly.
This patch fixes the problem by honoring the next-number field if present.
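A hedged illustration (assumed table, row-based replication): after the write
rows events are applied on the slave, the engine's auto-increment counter
should be advanced so later inserts on the slave do not reuse values:

  CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, b INT) ENGINE=InnoDB;
  INSERT INTO t1 (b) VALUES (1), (2), (3);   -- binlogged as write rows events
  # on the slave:
  SHOW TABLE STATUS LIKE 't1';               -- Auto_increment is expected to be 4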
+ add workaround for bug 38124
+ write messages into the protocol when sessions are switched
+ replace error numbers by error names
+ reset of system variables to initial values per subtest
+ remove a file created by this test
+ minor improvements in structure and formatting
Added cleanup of status variables to the end of binlog_database.
Re-recorded .result file to account for cleanup statement.
NOTE: binlog.binlog_innodb has also had a FLUSH STATUS; statement added
to it, but this cleanup is added anyway as a preventative measure.
Bounds-checks and blocksize corrections were applied to user-input,
but constants in the server were trusted implicitly. If these values
did not actually meet the requirements, the user could change a variable
but then not set it back to the (wonky) factory default or maximum
by specifying that value explicitly (SET <var>=<value> vs SET <var>=DEFAULT).
Now checks also apply to the server's presets. Wonky values and maxima
get corrected at startup. Consequently all non-offsetted values the user
sees are valid, and users can set the variable to that exact value if
they so desire.
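A hedged illustration of the user-visible effect; key_buffer_size is used
only as an example variable:

  SET @v = @@global.key_buffer_size;     -- a value the server itself reports
  SET GLOBAL key_buffer_size = @v;       -- setting that exact value now succeeds
  SET GLOBAL key_buffer_size = DEFAULT;  -- restore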
Problem 1: The test waits for an error in the slave sql thread,
then resolves the error and issues 'start slave'. However, there
is a gap between when the error is reported and the slave sql
thread stops. If this gap was long, the slave would still be
running when 'start slave' happened, so 'start slave' would fail
and cause a test failure.
Fix 1: Made wait_for_slave_sql_error wait for the slave to stop
instead of waiting for the error in the sql thread. After stopping, the
error code is verified. If the error code is wrong, debug info
is printed. To print debug info, the debug printing code in
wait_for_slave_param.inc was moved out to a new file,
show_rpl_debug_info.inc.
Problem 2: rpl_stm_mystery22 is a horrible name, the comments in
the file didn't explain anything useful, the test was generally
hard to follow, and the test was essentially duplicated between
rpl_stm_mystery22 and rpl_row_mystery22.
Fix 2: The test is about conflicts in the slave SQL thread,
hence I renamed the tests to rpl_{stm,row}_conflicts. Refactored
the test so that the work is done in
extra/rpl_tests/rpl_conflicts.inc, and
rpl.rpl_{row,stm}_conflicts merely sets some variables and then
sources extra/rpl_tests/rpl_conflicts.inc.
The tests have been rewritten and comments added.
Problem 3: When calling wait_for_slave_sql_error.inc, you always
want to verify that the sql thread stops because of the expected
error and not because of some other error. Currently,
wait_for_slave_sql_error.inc allows the caller to omit the error
code, in which case all error codes are accepted.
Fix 3: Made wait_for_slave_sql_error.inc fail if no error code
is given. Updated rpl_filter_tables_not_exist accordingly.
Problem 4: rpl_filter_tables_not_exist had a typo, the dollar
sign was missing in a 'let' statement.
Fix 4: Added dollar sign.
Problem 5: When replicating from a server other than the one named
'master', the wait_for_slave_* macros were unable to print debug
info on the master.
Fix 5: Replaced the parameter $slave_keep_connection with
$master_connection.
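A sketch of the intended usage after fix 3; the parameter name
$slave_sql_errno and the error number are assumptions for illustration:

  --let $slave_sql_errno= 1062
  --source include/wait_for_slave_sql_error.inc
  # sourcing the include without setting the expected error code now fails
  # instead of silently accepting any error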
2. Avoid bad effects of bug 41925 Warning 1366 Incorrect string value:
... for column processlist.info
3. Add poll routines which ensure that subtests meet stable scenarios.
This does not change the sense of the subtests.
When substituting system constant functions with a constant result,
the server was not expecting that the function might return NULL.
Fixed by checking for NULL and returning Item_null (in the relevant
collation) if the result of the system constant function was NULL.
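For instance, DATABASE() is such a system constant function; a minimal way
to make it yield NULL (not necessarily the original failing statement):

  # with no default database selected, DATABASE() evaluates to NULL, and the
  # substituted constant must be an Item_null with the appropriate collation
  SELECT DATABASE();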
The special TRUNCATE TABLE (DDL) transaction wasn't being properly
rolled back if an error occurred during row-by-row deletion. The
error can be caused by a foreign key restriction imposed by the InnoDB
storage engine and would cause the server to erroneously issue an
implicit commit.
The solution is to roll back the transaction if a truncation via
row-by-row deletion fails, and to commit otherwise. All effects of a
TRUNCATE TABLE operation are rolled back if a row-by-row deletion fails.
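A sketch of the scenario under an assumed schema:

  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child  (id INT, pid INT,
                       FOREIGN KEY (pid) REFERENCES parent (id)) ENGINE=InnoDB;
  INSERT INTO parent VALUES (1);
  INSERT INTO child  VALUES (1, 1);
  TRUNCATE TABLE parent;   -- falls back to row-by-row deletion and fails on the
                           -- foreign key; with the fix this is rolled back
                           -- instead of causing an implicit commit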
Problem: when mtr tries to create a directory, and the target
exists but is a file instead of directory, it tries several times
to create the directory again before it fails.
Fix: make it check if the target exists and is a non-directory.
The problem is that a mysql connection instance is not thread-safe
and reentrant, meaning that it can't be used concurrently and can't
be re-entered while it's already running. This applies to any form
of the server (embedded or not), but the rule can be violated in a
test case if the test sends a new command without waiting for the
result of a previous command that was sent asynchronously. This can
lead to hangs when running over a network, or to crashes under the
embedded server, as the server query execution path is re-entered
concurrently with the same connection structure.
The solution is to rework the test case so that the aforementioned
rule is obeyed.
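In mysqltest terms, the rule amounts to the following sketch:

  send SELECT SLEEP(1);
  # the same connection must not issue anything else until it reaps the
  # result of the asynchronous statement
  reap;
  SELECT 1;   -- only now may this connection send the next command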
Passing a dubious "year zero" in a non-zero date (i.e. not "0000-00-00")
could lead to a negative value for the year internally, while the variable
holding it was unsigned. This led to Really Bad Things further down the line.
Calculations on the year are now done internally with a signed type.
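A hypothetical illustration of such a date value (not the original failing
query): '0000-01-01' is a non-zero date whose year part is zero, so any
internal year arithmetic on it must be signed:

  SELECT YEAR('0000-01-01'), DATEDIFF('0000-01-01', '0001-01-01');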
The binlog_innodb test was sensitive to what tests ran before it. Now run
FLUSH STATUS before performing operations that need to be checked.
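Sketch of the pattern; which counter binlog_innodb actually checks is an
assumption here:

  FLUSH STATUS;
  # ... statements under test ...
  SHOW STATUS LIKE 'Binlog_cache_use';   -- now reflects only this test's work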
sys_var_thd_ulong::update() was improperly casting an option value from
ulonglong to ulong before comparing it to the max allowed value. On systems
where ulong and ulonglong are of different size, this caused values greater
than ULONG_MAX to wrap around (not be truncated to ULONG_MAX, which appears to
have been the intention of the original coder), and caused some checks to work
incorrectly. This wasn't generally visible to the user, because later checks
would prevent the wrapped-around value from being used. But it caused warning
messages to differ between 32- and 64-bit platforms. Fix is to just remove the
cast. Also added a DBUG_ASSERT to ensure that the value really is capped
properly before finally stuffing it into the ulong.
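A hedged illustration of the symptom; sort_buffer_size stands in for any
ulong session variable, and the exact warning text is not reproduced here:

  SET SESSION sort_buffer_size = 4294967296;   -- 2^32, above ULONG_MAX on 32-bit
  SHOW WARNINGS;   -- the adjustment warning used to differ between 32/64-bit builds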
locking type of temp table
The problem is that INSERT INTO .. SELECT FROM .. and CREATE
TABLE .. SELECT FROM a temporary table could inadvertently
overwrite the locking type of the temporary table. The lock
type of temporary tables should be a write lock by default.
The solution is to reset the lock type of temporary tables
back to its default value after they are used in a statement.
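A sketch of the scenario with illustrative table names:

  CREATE TEMPORARY TABLE tmp (a INT);
  CREATE TABLE t1 (a INT);
  INSERT INTO t1 SELECT a FROM tmp;   -- reads tmp, taking only a read lock
  INSERT INTO tmp VALUES (1);         -- must still work: tmp's lock type is
                                      -- reset to its write-lock default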