OPTION SKIP-WRITE-BINLOG
System tables were not getting upgraded when
mysql_upgrade was run with the --skip-write-binlog
option. (Same for --write-binlog.) Also, with
this option, the mysql_upgrade_info file was not
getting created after the upgrade.
mysql_upgrade uses the mysql client tool to run the
upgrade scripts; while doing so, it passes some of the
command-line options (used to start mysql_upgrade)
directly to the mysql client.
The cause of this bug is that some options, such as
skip-write-binlog and upgrade-system-tables, were being
passed to the mysql tool along with the other options,
so the mysql invocation failed due to the presence of
these invalid options.
Fixed this issue by filtering out the above-mentioned
options from the list of options that is passed to the
mysql and mysqlcheck tools. However, since --write-binlog
is supported by mysqlcheck, that option is passed
explicitly when running mysqlcheck (not part of this
patch; it was already there).
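As an illustrative sketch only (not the actual
mysql_upgrade code; all names below are hypothetical),
the fix amounts to dropping the upgrade-only options
before forwarding the remainder to the child tools:

    #include <cstring>
    #include <string>
    #include <vector>

    // Hypothetical helper: options mysql_upgrade understands but the
    // mysql tool does not; they must not be forwarded.  --write-binlog
    // is filtered here too and re-added explicitly for mysqlcheck.
    static bool is_upgrade_only_option(const std::string &opt) {
      static const char *upgrade_only[] = {
          "--skip-write-binlog", "--write-binlog",
          "--upgrade-system-tables"};
      for (const char *name : upgrade_only)
        if (opt.compare(0, std::strlen(name), name) == 0) return true;
      return false;
    }

    // Keep only the options that are valid for the child tool.
    std::vector<std::string> filter_tool_options(
        const std::vector<std::string> &args) {
      std::vector<std::string> kept;
      for (const std::string &a : args)
        if (!is_upgrade_only_option(a)) kept.push_back(a);
      return kept;
    }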
Checking the contents of the general log after the
upgrade is not doable via an mtr test, so this was
verified manually. Added a test to verify the creation
of mysql_upgrade_info.
SCAN/CPU) => SLAVE FAILURE
When a statement contains a large number of rows to be applied on
the slave, and the slave's table does not have a PK, it can take a
considerable amount of time to find and change all the rows that
need to be changed.
The proper slave enhancement will be implemented in WL 5597. However,
in this bug fix we make the problem clear to the user by printing a
note to the error log if the execution time of a given statement in
RBR exceeds LONG_FIND_ROW_THRESHOLD (set to 60 seconds). This helps
the DBA diagnose what is happening when facing a slave server that
is quiet for no apparent reason.
The note is printed to the error log only if log_warnings is set
greater than 1.
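A minimal sketch of the check (illustrative only; the
surrounding code and helper are hypothetical, while
LONG_FIND_ROW_THRESHOLD and log_warnings come from the fix):

    #include <chrono>
    #include <cstdio>

    static const long LONG_FIND_ROW_THRESHOLD = 60;  // seconds
    static int log_warnings = 2;                     // server option

    // Hypothetical wrapper around the row-scan loop of an RBR event
    // applied to a table without a PK.
    void apply_rows_without_pk(/* ... */) {
      auto start = std::chrono::steady_clock::now();
      // ... scan the table and apply each row change ...
      long elapsed = std::chrono::duration_cast<std::chrono::seconds>(
          std::chrono::steady_clock::now() - start).count();
      // Print the note only past the threshold and if log_warnings > 1.
      if (elapsed > LONG_FIND_ROW_THRESHOLD && log_warnings > 1)
        std::fprintf(stderr,
                     "Note: finding rows took %ld seconds; consider "
                     "adding a primary key to the table.\n", elapsed);
    }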
The bug was accidentally fixed by fixing
Bug#11759688 52020: InnoDB can still deadlock on just INSERT...ON DUPLICATE KEY
a.k.a. the reintroduction of
Bug#7975 deadlock without any locking, simple select and update
alter_table-big.test was failing due to the use of the RAND()
function, which is no longer replication-safe.
The test has been modified to use static values instead.
Also, 'sleep' has been replaced with 'debug_sync', and the execution
time of the test has been reduced significantly.
The test is now taken out of the disabled.def file and is being
enabled.
a.k.a. Bug#7975 deadlock without any locking, simple select and update
Bug#7975 was reintroduced when the storage engine API was made
pluggable in MySQL 5.1. Instead of looking at thd->lex directly, we
rely on handler::extra(). However, we were checking the wrong
extra() flag, and we were ignoring the TRX_DUP_REPLACE flag in
places where we should obey it.
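Sketch of the idea (hypothetical, simplified engine code;
TRX_DUP_REPLACE is named by this fix, the extra() operation
names here are assumptions): the engine records the
duplicate-handling mode when the server announces it through
extra(), instead of peeking at thd->lex:

    // Illustrative sketch, not the actual InnoDB code.
    enum trx_dup_mode { TRX_DUP_NONE = 0, TRX_DUP_REPLACE = 1 };

    struct fake_trx { int duplicates = TRX_DUP_NONE; };

    enum ha_extra_op { WRITE_CAN_REPLACE, WRITE_CANNOT_REPLACE };

    // Obey what the server says via extra(); do not inspect thd->lex.
    int engine_extra(fake_trx *trx, ha_extra_op op) {
      switch (op) {
        case WRITE_CAN_REPLACE:    trx->duplicates |= TRX_DUP_REPLACE; break;
        case WRITE_CANNOT_REPLACE: trx->duplicates &= ~TRX_DUP_REPLACE; break;
      }
      return 0;
    }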
innodb_replace.test: Add tests for (hopefully) all affected statement
types, so that the bug should never resurface. Tests of this kind
should have been added when Bug#7975 was fixed in MySQL 5.0.3 in the
first place.
rb:806 approved by Sunny Bains
A patch for this bug has already been pushed; a minor change is made
here. The database to be used after re-enabling the disabled code is
'TEST', but 'MYSQL' was being used instead. That is the minor change
made here.
Don't do this for echo, instead:
1) Enable replacements also for assignment from backquoted SQL
2) Allow replace_regex to take a variable for the *entire* argument list
With this, the test can be amended, but only in its version in trunk
Post-push fixes for show_slave_io_error= 1 of
wait_for_slave_io_error.inc: handle Unix- and Windows-format paths
specifically, so that only a few tests have to change
show_slave_io_error to zero.
The bug case is similar to one fixed earlier, Bug#49536.
A deadlock involving LOCK_log appears to be possible because the
purge-running thread holds LOCK_log even though there is no point in
doing so, a fact that was exploited by the earlier bug fixes.
Fixed with a small reengineering of rotate_and_purge(): two new
methods are added, and a policy is set up to execute those instead
of the former rotate_and_purge(RP_LOCK_LOG_IS_ALREADY_LOCKED).
The policy for using rotate() and purge() is that if the caller
acquires LOCK_log itself, it should call rotate(), release the
mutex, and then run purge().
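A sketch of that policy with a stand-in mutex (illustrative
only; rotate(), purge() and LOCK_log are the names from this
patch, the rest is hypothetical):

    #include <mutex>

    std::mutex LOCK_log;   // stand-in for the binlog mutex

    void rotate() { /* close the current binlog, open a new one */ }
    void purge()  { /* remove old binlogs; must NOT hold LOCK_log */ }

    // Caller that already needs LOCK_log for its own work:
    void caller_owning_lock_log() {
      LOCK_log.lock();
      // ... caller's own critical section ...
      rotate();            // rotation happens under LOCK_log
      LOCK_log.unlock();
      purge();             // purging runs after the mutex is released
    }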
A side effect of this patch is that the error message of
Bug#11747416 is refined to print the whole path.
This was an attempt to address problems with the Bug#12612184 fix.
Even with this follow-up fix, crash recovery can be broken.
Let us fix the bug later.
In the ON UPDATE CASCADE clause of FOREIGN KEY constraints, the
calculated update vector was not fully initialized. This bug was
introduced in the InnoDB Plugin when implementing support for
ROW_FORMAT=DYNAMIC.
Additionally, the data type information was not initialized, but
apparently it has never been needed in this case. Nevertheless, it is
not good programming practice to pass uninitialized values around.
calc_row_difference(): Declare the update field uninitialized in
Valgrind. Copy the data type information as well, except when the
field is SQL NULL. In the built-in InnoDB, initialize
ufield->extern_storage = FALSE (an initialization bug that had gone
unnoticed thus far). The InnoDB Plugin and later have this flag in
dfield_t and have always initialized it properly.
row_ins_cascade_calc_update_vec(): Reduce the scope of some
pointers. Initialize orig_len. (Its missing initialization caused
the bug in the InnoDB Plugin and later.)
row_ins_foreign_check_on_constraint(): Simplify a condition. Declare
the update vector uninitialized.
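Illustratively (hypothetical struct, not InnoDB's actual
definitions), the pattern of the fix is to give every member
of the update field a definite value before it is passed on:

    #include <cstdint>

    // Hypothetical stand-in for an update-vector field; the real
    // InnoDB structs differ.  The point is that extern_storage and
    // orig_len must be initialized before the struct is handed to
    // other code.
    struct upd_field_sketch {
      const void *data;
      uint32_t    len;
      uint32_t    orig_len;        // was left uninitialized by the bug
      bool        extern_storage;  // uninitialized in built-in InnoDB
    };

    upd_field_sketch make_update_field(const void *data, uint32_t len) {
      upd_field_sketch f;
      f.data = data;
      f.len = len;
      f.orig_len = 0;            // initialize; never pass garbage around
      f.extern_storage = false;  // matches ufield->extern_storage = FALSE
      return f;
    }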
rb:771 approved by Jimmy Yang
PARENT FOR OTHER ONE
Do not try to look up the key_nr'th key in 'table', because there
may not be such a key there. key_nr is the number of the key in the
_child_ table, not in the parent table.
Instead, just print the fields of the record that are covered by the
first key defined on the parent table.
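A sketch of the corrected lookup (hypothetical, simplified
structures; in the server the key metadata lives elsewhere):
key_nr indexes the child table's keys, so it must not be used
against the parent:

    #include <cstdio>

    struct key_sketch   { const char *name; };
    struct table_sketch {
      key_sketch *key_info;   // array of keys
      unsigned    keys;       // number of keys
    };

    // WRONG: parent->key_info[key_nr] -- key_nr counts the CHILD's
    // keys, and the parent may have fewer keys than that.
    // RIGHT: report the fields covered by the parent's first key.
    void print_parent_key(const table_sketch *parent) {
      if (parent->keys > 0)
        std::printf("parent key: %s\n", parent->key_info[0].name);
    }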
This bug gets a better fix in MySQL 5.6, which is too risky for 5.1 and 5.5.
Approved by: Jon Olav Hauglid (via IM)
TESTS: CRASH, CORRUPTION, 4G MEMOR
Issue: Valgrind errors due to CHECKSUM and OPTIMIZE queries
against archive tables with NULL columns.
The table record buffer was not initialized.
Solution: Initialize the record buffer.
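The fix pattern, as a sketch (hypothetical buffer handling,
not the actual archive engine code):

    #include <cstring>
    #include <vector>

    // Zero-fill the record buffer before use so that padding and
    // NULL-column bytes never contain uninitialized memory (which
    // Valgrind flags).
    void init_record_buffer(std::vector<unsigned char> &record) {
      if (!record.empty())
        std::memset(record.data(), 0, record.size());
    }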
USING MYISAM_USE_MMAP ON WINDOWS
When OPTIMIZE/REPAIR TABLE is switching to the new data file,
the old data file is removed while the memory mapping is still
active.
With the 5.1 implementation of nt_share_delete(), it is not
permitted to remove an mmapped file.
This fix disables memory mapping for mi_repair() operations.
The assertion in InnoDB is triggered in this way:
1. The MySQL server does a lookup on the primary key with the full
key; InnoDB decides not to store the cursor position because
"any index_next/prev call will return EOF anyway".
2. The server asks InnoDB to return any next record in the index,
and the assertion is triggered because no cursor position is stored.
It happens when a unique search (match_mode=ROW_SEL_EXACT)
in the clustered index is performed. InnoDB has never stored
the cursor position after a unique key lookup in the
clustered index because storing the position is an expensive
operation. The bug was introduced by
WL3220 'Loose index scan for aggregate functions'.
The fix is to disallow the loose index scan optimization
for AGG_FUNC(DISTINCT ...) if the GROUP_MIN_MAX quick select
uses the clustered key.
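Schematically (a hypothetical optimizer-side check; the real
decision is made when building the GROUP_MIN_MAX quick select):

    // Illustrative condition only.  If the candidate index for the
    // loose (GROUP_MIN_MAX) scan is the clustered key and the query
    // contains AGG_FUNC(DISTINCT ...), the optimization must be
    // rejected, because InnoDB stores no cursor position after a
    // unique search on the clustered index.
    bool allow_loose_index_scan(bool have_agg_distinct,
                                unsigned candidate_index,
                                unsigned clustered_key) {
      if (have_agg_distinct && candidate_index == clustered_key)
        return false;  // fall back to a regular scan
      return true;
    }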
Modified the function do_get_error() in mysqltest.cc to handle
multiple variables being passed.
Added a test case to mysqltest.test to verify the handling of
multiple errors being passed.
This patch corrects a defect in the building of the DELETE commands
for disabling a plugin, whereby only the original plugin data was
deleted. If there were other plugins, the DELETE did not remove
those rows. The code has been changed to remove all rows from the
mysql.plugin table that were inserted when the plugin was loaded.
The test has also been changed to correctly identify whether all
rows have been deleted.
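A sketch of building the corrected DELETE (hypothetical helper;
the actual statement text in the code may differ): every
mysql.plugin row recorded for the library is removed, not just
the first one:

    #include <string>
    #include <vector>

    // Build one DELETE that removes every mysql.plugin row inserted
    // when the plugin was loaded.  (Plugin names are assumed to be
    // validated/escaped already.)
    std::string build_plugin_delete(const std::vector<std::string> &names) {
      std::string sql = "DELETE FROM mysql.plugin WHERE name IN (";
      for (size_t i = 0; i < names.size(); ++i) {
        if (i) sql += ", ";
        sql += "'" + names[i] + "'";
      }
      sql += ")";
      return sql;
    }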
Buffer overrun on all platforms, a crash on Windows, and wrong
results on other platforms when rounding numbers that start with
999999999 and have precision = 9, 18, 27, 36, ...
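The mechanics, as a sketch: MySQL's decimal code stores 9
decimal digits per 32-bit word, so a precision of 9, 18, 27, ...
fills its most significant word completely; rounding a leading
999999999 up then carries into a word that does not exist unless
the buffer is grown (hypothetical demonstration code, not the
library's implementation):

    #include <cstdio>
    #include <vector>

    // Each word holds 9 decimal digits, i.e. values 0..999999999.
    static const int WORD_MAX = 999999999;

    // Propagate a round-up carry across base-10^9 words; returns true
    // if the carry runs past the most significant word, which is
    // exactly the case that overran the fixed-size buffer.
    bool carry_overflows(std::vector<int> &words) {
      for (size_t i = words.size(); i-- > 0;) {
        if (words[i] < WORD_MAX) { ++words[i]; return false; }
        words[i] = 0;  // 999999999 + 1 carries out of this word
      }
      return true;     // needs one more word: 1 000000000 ...
    }

    int main() {
      std::vector<int> w = {999999999};  // precision = 9, all nines
      std::printf("overflow: %d\n", carry_overflows(w));  // prints 1
    }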
When a temporary table is used for result sorting, the
result field for the GROUP_CONCAT function is created using
the group_concat_max_len size. This leads to result truncation
when character_set_results is a multi-byte character set, due
to insufficient tmp table field size.
The fix is to increase the temporary table field size for
GROUP_CONCAT: the method make_string_field() is overloaded
for the Item_func_group_concat class and uses
max_characters * collation.collation->mbmaxlen as the result
field size, where max_characters is the maximum number of
characters that can fit into max_length.
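In sketch form (illustrative arithmetic only; the assumption
here is that mbminlen/mbmaxlen are the collation's minimum and
maximum bytes per character):

    // Size the tmp-table field by characters, not bytes: compute how
    // many characters can fit in max_length bytes at the minimum
    // bytes-per-character, then reserve room for that many characters
    // at the maximum bytes-per-character.
    unsigned long gconcat_field_size(unsigned long max_length,
                                     unsigned mbminlen,
                                     unsigned mbmaxlen) {
      unsigned long max_characters = max_length / mbminlen;
      return max_characters * mbmaxlen;
    }

For example, with a utf8-style collation (mbminlen = 1,
mbmaxlen = 3) and max_length = 1024, the field is sized at
3072 bytes instead of 1024, so a multi-byte
character_set_results no longer truncates the result.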