MAP 'REPAIR TABLE' TO RECREATE +ANALYZE FOR ENGINES NOT
SUPPORTING NATIVE REPAIR
Executing 'mysqlcheck --check-upgrade --auto-repair ...' will first issue
'CHECK TABLE FOR UPGRADE' for all tables in the database in order to check if the
tables are compatible with the current version of MySQL. Any tables that are
found incompatible are then upgraded using 'REPAIR TABLE'.
The problem was that some engines (e.g. InnoDB) do not support 'REPAIR TABLE'.
This caused any such tables to be left incompatible. As a result such tables were
not properly fixed by the mysql_upgrade tool.
This patch fixes the problem by first changing 'CHECK TABLE FOR UPGRADE' to return
a different error message if the engine does not support REPAIR. Instead of
'Table upgrade required. Please do "REPAIR TABLE ..."', it will report
'Table rebuild required. Please do "ALTER TABLE ... FORCE ..."'.
Second, the patch changes mysqlcheck to do 'ALTER TABLE ... FORCE' instead of
'REPAIR TABLE' in these cases.
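For illustration, the repair flow now looks roughly like this (a sketch;
exact message wording may differ):

  CHECK TABLE t1 FOR UPGRADE;
  -- engine supports native repair:
  REPAIR TABLE t1;
  -- engine does not support native repair (e.g. InnoDB):
  ALTER TABLE t1 FORCE;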
This patch also fixes 'ALTER TABLE ... FORCE' to actually rebuild the table.
This change should be reflected in the documentation. Before this patch,
'ALTER TABLE ... FORCE' was unused (see Bug#11746162).
Test case added to mysqlcheck.test
NON-PRIMARY UNIQUE INDEX USING INNODB
This patch adds the HA_INPLACE_ADD_UNIQUE_INDEX_NO_WRITE
capability flag to InnoDB, indicating that concurrent reads
can be allowed while non-primary unique indexes are created.
This is a follow-up to Bug #11751388, which enabled concurrent
reads when creating non-primary non-unique indexes.
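A minimal sketch of the now-permitted scenario (table and column names
are hypothetical):

  -- connection 1: build a secondary unique index in place
  ALTER TABLE t1 ADD UNIQUE INDEX idx_b (b);
  -- connection 2, concurrently: reads are no longer blocked
  SELECT COUNT(*) FROM t1;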
Test case added to innodb_mysql_sync.test.
FLUSH TABLES under FLUSH TABLES <list> WITH READ LOCK leads
to assert failure.
This assert was triggered if a statement tried to upgrade a metadata
lock with an active FLUSH TABLE <list> WITH READ LOCK. The assert
checks that the connection already holds a global intention exclusive
metadata lock. However, FLUSH TABLE <list> WITH READ LOCK does not
acquire this lock in order to be compatible with FLUSH TABLES WITH
READ LOCK. Therefore any metadata lock upgrade caused the assert to
be triggered.
This patch fixes the problem by preventing metadata lock upgrade
if the connection has an active FLUSH TABLE <list> WITH READ LOCK.
ER_TABLE_NOT_LOCKED_FOR_WRITE will instead be reported to the client.
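A sketch of a statement sequence that previously hit the assert
(hypothetical table name):

  FLUSH TABLES t1 WITH READ LOCK;
  FLUSH TABLES;  -- tries to upgrade a metadata lock; now fails with
                 -- ER_TABLE_NOT_LOCKED_FOR_WRITE instead of asserting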
Test case added to flush.test.
@ mysql-test/r/ctype_latin1.result
@ mysql-test/r/ctype_utf8.result
@ mysql-test/t/ctype_latin1.test
@ mysql-test/t/ctype_utf8.test
Adding tests
@ sql/mysqld.h
@ sql/item.cc
@ sql/sql_parse.cc
@ sql/sql_view.cc
Refactoring (thanks to Guilhem for the idea):
Item_string::print() was hard to understand because of the different
QT_ constants: in "query_type==QT_x", QT_x is explicitly included
but the other two QT_ constants are implicitly excluded. The combinations
with '||' and '&&' made this even harder.
- logic is now more "explicit" by changing QT_ constants to a bitmap of flags:
QT_ORDINARY: no change,
QT_IS -> QT_TO_SYSTEM_CHARSET | QT_WITHOUT_INTRODUCERS,
QT_EXPLAIN -> QT_TO_SYSTEM_CHARSET
(QT_EXPLAIN was introduced in the first version of the Bug#57341 patch)
- Item_string::print() is rewritten using those flags
Bugfix itself:
When QT_TO_SYSTEM_CHARSET is used alone (with no QT_WITHOUT_INTRODUCERS),
we print string literals as follows:
- display introducers if they were in the original query
- print ASCII characters as is
- print non-ASCII characters using hex-escape
Note: as "EXPLAIN" output is only for human readability purposes
and does not need to be a pasrable SQL, so using hex-escape is Ok.
ErrConvString class perfectly suites for hex escaping purposes.
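As a rough illustration (the exact rendering is not reproduced here),
a non-ASCII literal would be printed hex-escaped in EXPLAIN output:

  EXPLAIN EXTENDED SELECT _latin1'ä';
  -- the rewritten query in the subsequent warning keeps the introducer
  -- and shows the non-ASCII byte hex-escaped rather than raw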
from 5.1 to 5.5
(Former 59405)
In this bug, args[0] in an Item_func_find_in_set stored an
Item_func_weekday that was constant. In
Item_func_find_in_set::fix_length_and_dec(), args[0]->val_str()
was called. Later, when Item_func_find_in_set::val_int() was
called, args[0]->null_value was checked. However, the
Item_func_weekday in args[0] had now been replaced with an
Item_cache. No val_*() calls had been made to this Item_cache,
thus null_value was incorrectly 'true', resulting in missing
rows in the result set.
enum_value gets a value in fix_length_and_dec() iff args[0]
is both constant and non-null. It is therefore unnecessary
to check the null_value of args[0] in val_int().
An alternative fix would be to call args[0]->val_int() inside
Item_func_find_in_set::val_int(). This would ensure
args[0]->null_value was set correctly (always false in this case),
but that would have to be done for every record this const value
is checked against.
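A hypothetical repro shape (not the exact query from the report; 'days'
is an assumed column):

  -- WEEKDAY(...) is constant and gets replaced by an Item_cache;
  -- before the fix its stale null_value could filter out matching rows
  SELECT * FROM t1 WHERE FIND_IN_SET(WEEKDAY('2011-01-11'), days);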
Part 2. Function QUOTE() was not multi-byte safe.
@ mysql-test/r/ctype_ucs.result
@ mysql-test/t/ctype_ucs.test
Adding tests
@ sql/item_strfunc.cc
Fixing Item_func_quote::val_str to be multi-byte safe.
@ sql/item_strfunc.h
Multiply the size needed for quote characters by mbmaxlen.
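A sketch of an affected call (any multi-byte character set):

  SELECT QUOTE(CONVERT('a''b' USING ucs2));
  -- before the fix, escaping assumed single-byte quote characters,
  -- which could corrupt multi-byte results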
Problem: wrong character set pointer was passed to my_strtoll10_mb2,
which led to DBUG_ASSERT failure in some cases.
@ mysql-test/r/func_encrypt_ucs2.result
@ mysql-test/t/func_encrypt_ucs2.test
@ mysql-test/r/ctype_ucs.result
@ mysql-test/t/ctype_ucs.test
Adding tests
@ sql/item_func.cc
"cs" initialization was wrong (res does not necessarily point to &str_value)
@ sql/item_strfunc.cc
Item_func_des_encrypt::val_str() and Item_func_des_decrypt::val_str()
did not set the character set for tmp_value (the returned value),
so the old character set, which was previously copied from args[1]->val_str(),
incorrectly remained on tmp_value.
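A sketch of the affected pattern (tests went into func_encrypt_ucs2):

  SELECT DES_ENCRYPT('test', CONVERT('key' USING ucs2));
  -- before the fix the result string kept the character set copied
  -- earlier from the ucs2 key argument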
Problem: a byte past the end of the input string was read
in case of broken XML that did not have a quote or double quote
character closing a string value.
Fix: changed the condition to not read past the end of the input string.
@ mysql-test/r/xml.result
@ mysql-test/t/xml.test
Adding tests
@ strings/xml.c
When checking if the closing quote/double quote was found,
using p->cur[0] is unsafe, as p->cur can point to the byte after the value.
Compare p->cur to p->beg instead.
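A hypothetical repro shape - a string value whose closing quote is
missing at the end of the input:

  SELECT ExtractValue('<a b="broken', '/a');
  -- before the fix the parser could read one byte past the input buffer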
Problem: in case of string CASE/WHEN arguments with different
character sets, Item_func_case::find_item() called the comparator
cmp_items[x] on mixed character set Items, so an 8-bit value could
be erroneously treated as a utf16/utf32 value,
which led to a crash on DBUG_ASSERT() because of the wrong value length.
This was wrong, as the string comparator expects arguments in the same
character set.
Fix: modify Item_func_case's argument list after calling
agg_arg_charsets_for_comparison() - put the Items in "agg" array
back to "args", because some of the Items in the "agg" array might
have been changed to character set converters:
- to Item_func_conv_charset for non-constant items
- to Item_string for constant items
In other words, perform the same substitution which is done in
all other string comparison and string result operations:
Replace
CASE latin1_item WHEN utf16_item THEN ... END
with
CASE CONVERT(latin1_item USING utf16) WHEN utf16_item THEN ... END
Replace
CASE utf16_item WHEN latin1_item THEN ... END
with
CASE utf16_item WHEN CONVERT(latin1_item USING utf16) THEN ... END
@ mysql-test/r/ctype_utf16.result
@ mysql-test/r/ctype_utf32.result
@ mysql-test/t/ctype_utf16.test
@ mysql-test/t/ctype_utf32.test
Adding tests
@ sql/item_cmpfunc.cc
Put "agg" back to "args".
@ sql/sql_string.cc
Backporting a fix for String::set_or_copy_aligned() from 5.6,
for better test coverage:
"SELECT _utf16 0x61" should expand the string to 0x0061 rather
than to 0x000061.
This fix was made in 5.6 under terms of "WL#4616 Implement UTF16-LE".
UPDATES THE TABLE ENTRIES (formerly 55385)
BUG#11764529: MULTI UPDATE+INNODB REPORTS ER_KEY_NOT_FOUND
IF A TABLE IS UPDATED TWICE (formerly 57373)
If multiple-table update updates a row through two aliases and
the first update physically moves the row, the second update will
fail to locate the row. This results in different errors
depending on storage engine:
* MyISAM: Got error 134 from storage engine
* InnoDB: Can't find record in 'tbl'
Neither of these errors accurately describes the problem.
Furthermore, since MyISAM is non-transactional, the update
executed first will be performed while the second will not.
In addition, for two identical multiple-table update statements,
one could succeed and the other fail based on whether or not
the record actually moved. This was inconsistent.
Two update operations may physically move a row:
1) Update of a column in a clustered primary key
2) Update of a column used to calculate which partition the
row belongs to
BUG#11764529 is about case 1) above, BUG#11762751 was about case 2).
The fix for these bugs is to return an error if a multiple-table
update is about to:
a) update a table through multiple aliases, and
b) perform an update that may physically move the row
in at least one of these aliases.
This avoids
* partial updates as described for MyISAM above,
* inconsistent behavior where a statement fails or succeeds based on
e.g. the partitioning algorithm of the table,
and provides the same error message, one that describes the actual
problem, for all storage engines (see the sketch below).
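A sketch of a now-rejected statement (hypothetical schema; pk is the
clustered primary key of t1):

  UPDATE t1 AS a, t1 AS b
     SET a.pk = a.pk + 10, b.data = 'x'
   WHERE a.pk = b.pk;
  -- updating pk may physically move rows, so updating t1 through two
  -- aliases in one statement now returns an error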
The problem was that doing ALTER TABLE on a table which had a key
on a TEXT/BLOB column with a prefix longer than the maximum number
of characters in this column (as per the character set), by mistake,
caused an error (Error 1170 - ER_BLOB_KEY_WITHOUT_LENGTH).
This bug is not repeatable in 5.5.
This patch adds a regression test to alter_table.test and
contains no code changes.
("-") IN DATABASE NAMES IN ALTER DATABASE.
mysqldump did not quote the database name in the 'ALTER DATABASE'
statements in its output. This could cause a failure while loading
the dump if the database name contained a hyphen '-'.
Fixed by quoting the database name.
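For example (a sketch of the generated output):

  -- before: ALTER DATABASE test-db CHARACTER SET latin1;   (fails to load)
  -- after:  ALTER DATABASE `test-db` CHARACTER SET latin1;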
The loop that iterates over the subqueries' references to outer fields used a
local boolean variable to tell whether the field was grouped or not. But the
implementor failed to reset the variable after each iteration. Thus a field
that was not directly aggregated appeared to be.
Fixed by resetting the variable upon each new iteration.
MONTHNAME(0) claims that it is about to return a NOT NULL
value, whereas it actually returns NULL.
As a result, the protection of the storage_engine variable (which
cannot be NULL) was bypassed and the NULL value was accepted, causing
a server crash.
Fixed MONTHNAME(0) to report a valid NULL flag.
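A sketch of the crashing scenario (exact syntax from the report may
differ):

  SET SESSION storage_engine = MONTHNAME(0);
  -- MONTHNAME(0) returned NULL while claiming NOT NULL, so the NULL
  -- bypassed the variable's protection and crashed the server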
Problem:
IF() did not copy collation derivation and repertoire from
an argument if the opposite argument was NULL:
IF(cond, res1, NULL)
IF(cond, NULL, res2)
only the CHARSET_INFO pointer was copied.
This resulted in an 'illegal mix of collations' error.
Fix:
copy all collation parameters from the non-NULL argument:
the CHARSET_INFO pointer, derivation, and repertoire.
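For instance (a sketch; whether the error is raised depends on how the
IF() result is further combined):

  CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET utf8);
  SELECT CONCAT(IF(1, a, NULL), _latin1'x') FROM t1;
  -- before the fix, IF() lost the derivation/repertoire of 'a'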
memory reference
There are two issues present here.
1) There is a possibility that we test a byte beyond the
allocated buffer
2) We compare a byte that might never have been
initialized to see if it's 0.
The first issue is not triggered by existing code, but an
ASSERT has been added to safeguard against introducing
new code that triggers it.
The second issue is what triggers the Valgrind warnings
reported in the bug report. A buffer is allocated in
class String to hold the value. This buffer is populated
by the character data constituting the string, but is not
zero-terminated in most cases. Testing if it is indeed
zero-terminated means that we check a byte that has never
been explicitly set, thus causing Valgrind to trigger.
Note that issue 2 is not a serious problem. The variable
is read, and if it's not zero, we will set it to zero.
There are no further consequences.
Note that this patch does not fix the underlying problems
with issue 1, as it is deemed too risky to fix at this
point (as noted in the bug report). As discussed in
the report, the c_ptr() method should probably be
replaced, but this requires a thorough analysis of the
~200 calls to the method.
attempt to create spatial index on char > 31 bytes".
Attempt to create a spatial index on a char field with length
greater than 31 bytes led to an assertion failure on a server
compiled with safemutex support.
The problem occurred in the mi_create() function, which was called
to create a new version of the table being altered. This function
failed since it detected an attempt to create a spatial key
on a non-binary column and tried to return an error.
On its error path it tried to unlock the THR_LOCK_myisam mutex,
which had not been locked at this point. This incorrect
behavior was caught by the safemutex wrapper and caused the
assertion failure.
This patch fixes the problem by ensuring that mi_create()
doesn't release the THR_LOCK_myisam mutex on the error path if it was
not acquired.
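A sketch of the trigger (hypothetical table):

  CREATE TABLE t1 (c CHAR(32)) ENGINE=MyISAM;
  ALTER TABLE t1 ADD SPATIAL INDEX (c);
  -- should fail with a clean error; on safemutex builds it instead
  -- asserted while unlocking the never-acquired THR_LOCK_myisam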
Assert in Diagnostics_area::set_ok_status() for XA COMMIT
This assert was triggered if XA COMMIT was issued when an XA transaction
already had encountered an error (e.g. a deadlock) which required
the XA transaction to be rolled back.
In general, the assert is triggered if a statement tries to send OK to
the client when an error has already been reported. It was triggered
in this case because the trans_xa_commit() function first reported an
error, then rolled back the transaction and finally returned FALSE,
indicating success. Since trans_xa_commit() reported success,
mysql_execute_command() tried to report OK, triggering the assert.
This patch fixes the problem by fixing trans_xa_commit() to return TRUE
if it encounters an error that requires rollback, even if the rollback
itself is successful.
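A sketch of the sequence (the deadlock provocation is omitted):

  XA START 'trx';
  -- ... a statement hits a deadlock; the branch becomes rollback-only ...
  XA END 'trx';
  XA COMMIT 'trx' ONE PHASE;
  -- previously triggered the assert; now an error is returned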
Test case added to xa.test.
With this change, there will be new files "INFO_SRC"
and "INFO_BIN", which describe the source and the
binaries.
They will be contained in all packages:
- in "tar.gz" and derived packages, in "docs/",
- in RPMs, in "/usr/share/doc/packages/MySQL-server".
"INFO_SRC" is also part of a source tarball.
It gives the version as exactly as possible, preferably
by calling "bzr version-info" on the source tree.
If that is not possible, it just contains the three
level version number.
"INFO_BIN" contains some info when and where the
binaries were built, the options given to the compiler,
and the flags controlling the included features.
The tests (test "mysql" in the main suite) are extended
to verify the existence of both "INFO_SRC" and "INFO_BIN",
as well as some of the expected contents.
Also fixes Bug#59110: Memory leak of QUICK_SELECT_I allocated memory.
Includes Jørgen Løland's review comments.
The root cause of these bugs is that test_if_skip_sort_order() decided to
revert the 'skip_sort_order' decision (and use filesort) after the
query plan had been updated to reflect a 'skip' of the sort order.
This might happen in 'check_reverse_order:' if we have a
select->quick which could not be made descending by appending
a QUICK_SELECT_DESC.
The original 'save_quick' was then restored after the QEP had been modified,
which caused:
- An incorrect 'precomputed_group_by= TRUE' may have been set,
and not reverted, as part of the already modified QEP (Bug#59308)
- A 'select->quick' might have been created which we failed to delete (Bug#59110).
This fix is a refactoring of test_if_skip_sort_order() where all logic
related to modification of the QEP (controlled by the argument 'bool no_changes') is
moved to the end of test_if_skip_sort_order(), and done after *all* 'test_if_skip'
checks have been performed - including the 'check_reverse_order:' checks.
The refactoring above contains no intentional changes to the logic which
has been moved to the end of the function.
Furthermore, a smaller part of the fix addresses the handling of the
select->quick objects which may already exist when we call
test_if_skip_sort_order() ('save_quick') and
new select->quick's created during test_if_skip_sort_order():
- Before a new select->quick may be created by calling ::test_quick_select(), we
set 'select->quick= 0' to avoid that ::test_quick_select() prematurely
deletes the save_quick. (After this call we may have both a 'save_quick'
and a 'select->quick'.)
- All returns from ::test_if_skip_sort_order() where we may have both a
'save_quick' and a 'select->quick' have been changed to goto's to the
exit points 'skipped_sort_order:' or 'need_filesort:', where we
decide which of the QUICK_SELECTs to keep and delete the other.
handling.
The problem was that parsing of nested regular expressions involved
recursive calls. Such recursion didn't take into account the amount of
available stack space, which could lead to stack overflow crashes.
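A hypothetical repro shape:

  SELECT 'a' REGEXP REPEAT('(', 100000);
  -- each nesting level recursed in the parser without checking the
  -- remaining stack space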
Bug #55755 : Date STD variable signness breaks server on FreeBSD and OpenBSD
* Added a check to configure on the size of time_t
* Created a macro to check for a valid time_t that is safe to use with datetime
functions and store in TIMESTAMP columns.
* Used the macro consistently instead of the ad-hoc checks introduced by 52315
* Fixed compilation warnings on platforms where the size of time_t is smaller than
the size of a long (e.g. OpenBSD 4.8 64 amd64).
Bug #52315: utc_date() crashes when system time > year 2037
* Added a correct check for the timestamp range, instead of just a variable
size check, to SET TIMESTAMP (see the sketch after this list).
* Added overflow checking before converting to time_t.
* Used a correct localized error message in this case instead of the generic error.
* Added a test suite.
* Fixed the checks so that they check for unsigned time_t as well. Used the checks
consistently across the source code.
* Fixed the original test case to expect the new error code.
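A sketch of the newly rejected case:

  SET TIMESTAMP = 9999999999999999;
  -- out-of-range values are now rejected with a localized error instead
  -- of later crashing functions such as UTC_DATE()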
primary_key_no == 0".
Attempt to create an InnoDB table with a non-nullable column of
geometry type having a unique key with length 12 on it, and
with some other candidate key, led to a server crash due to an
assertion failure in both non-debug and debug builds.
The problem was that such a non-candidate key could have
been sorted as the first key in the table/.FRM, before any legitimate
candidate keys. This resulted in an assertion failure in the InnoDB
engine, which assumes that the primary key should either be the
first key in the table/.FRM or should not exist at all.
The reason behind such incorrect sorting was a wrong
value of the Create_field::key_length member for geometry fields
(which was set to their pack_length == 12). This confused the code
in mysql_prepare_create_table(), so it would skip marking
such a key as a key with partial segments.
This patch fixes the problem by ensuring that this member
gets the same value as for other blob fields (from which the
geometry field class is inherited). As a result, unique keys on
geometry fields are correctly marked as having partial segments.
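A sketch along the lines of the report (names are hypothetical):

  CREATE TABLE t1 (
    a INT NOT NULL,
    g GEOMETRY NOT NULL,
    UNIQUE KEY (g(12)),  -- non-candidate key, was wrongly sorted first
    UNIQUE KEY (a)       -- the legitimate candidate key
  ) ENGINE=InnoDB;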
The root cause of this bug is that the optimizer tried to detect and
optimize the special case
'<field> BETWEEN c1 AND c1' and handle it as the condition '<field> = c1'.
This was implemented inside add_key_field(.. *field, *value[]...),
which assumed 'field' to refer to the key Field, and value[] to refer to a
[low...high] constant pair. value[0] and value[1] were then compared for equality.
In a 'normal' BETWEEN condition of the form '<field> BETWEEN val1 AND val2', the
BETWEEN operation is represented with an argument list containing the
values [<field>, val1, val2] - add_key_field() is then called with
parameters field=<field>, *value=val1.
However, if the BETWEEN predicate specified:
1) '<const1> BETWEEN <const2> AND <field>'
the 'field' and 'value' arguments to add_key_field() had to be swapped.
This was implemented by trying to cheat add_key_field() into handling it like:
2) '<const1> GE <const2> AND <const1> LE <field>'
As we didn't really replace the BETWEEN operation with 'ge' and 'le',
add_key_field() still handled it as a 'BETWEEN' and compared the (swapped)
arguments <const1> and <const2> for equality. If they were equal, the
condition 1) was incorrectly 'optimized' to:
3) '<field> EQ <const1>'
This fix moves this optimization of '<field> BETWEEN c1 AND c1' into
add_key_fields() which then calls add_key_equal_fields() to collect
key equality / comparison for the key fields in the BETWEEN condition.
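For example (a sketch; key_col is an indexed column):

  SELECT * FROM t1 WHERE 1 BETWEEN 1 AND key_col;
  -- was incorrectly 'optimized' to key_col = 1, losing rows where
  -- key_col > 1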
In SBR, if a statement does not fail, it is always written to the binary
log, regardless of whether rows are changed or not. If there is a failure, a
statement is only written to the binary log if a non-transactional (e.g.
MyISAM) engine is updated.
INSERT ON DUPLICATE KEY UPDATE and INSERT IGNORE were not following the
rule above and were not written to the binary log if the engine was
InnoDB.
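A sketch under SBR (assuming t1 is an InnoDB table with a primary key):

  INSERT IGNORE INTO t1 VALUES (1);  -- duplicate key; no rows changed
  -- before the fix this successful statement was missing from the
  -- binary log; it is now always written, as the rule above requires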
ZERO
When dates are represented internally as strings, i.e. when a string constant
is compared to a date value, both values are converted to long integers,
ostensibly for fast comparisons. DATE typed integer values are converted to
DATETIME by multiplying by 1,000,000 (appending the digit pairs representing
hour, minute and second, respectively). But the mechanism did not distinguish
cached INTEGER values, already in the correct format, from newly converted
strings.
Fixed by marking the INTEGER cache as being of DATETIME format.
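The conversion in question, as a worked example:

  -- DATE as integer:                  2011-01-05 -> 20110105
  -- promoted to DATETIME (x 1000000):               20110105000000
  -- a cached INTEGER already in DATETIME format must not be promoted again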
Problem: the scanner function tested for the strings "<![CDATA[" and
"-->" without checking input string boundaries, which led to Valgrind's
"Conditional jump or move depends on uninitialised value(s)" error.
Fix: Adding boundary checking.
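A hypothetical repro shape - input truncated inside a scanned token:

  SELECT ExtractValue('<a><![CDAT', '/a');
  -- the scanner compared the tail against "<![CDATA[" without first
  -- checking that enough input bytes remained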
@ mysql-test/r/xml.result
@ mysql-test/t/xml.test
Adding test
@ strings/xml.c
Adding a helper function my_xml_parser_prefix_cmp(),
with input string boundary check.