ordered data from archive tables
Archive was using the wrong memory address to check whether a field
is NULL (after filesort, when reading the record again).
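An illustrative statement of the kind affected (the real test case is in
archive.test; the table and data here are made up):
  CREATE TABLE t1 (a INT, b VARCHAR(10)) ENGINE=ARCHIVE;
  INSERT INTO t1 VALUES (1,'one'), (2,NULL), (3,'three');
  -- ORDER BY forces a filesort; when the rows are read back afterwards,
  -- the NULL information must be taken from the buffer the row was
  -- actually unpacked into, not from the address Field::null_ptr points at.
  SELECT b FROM t1 ORDER BY b;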
mysql-test/r/archive.result:
A test case for BUG#11764339.
mysql-test/t/archive.test:
A test case for BUG#11764339.
storage/archive/ha_archive.cc:
Null bytes are restored to the "record" buffer, which may or may
not be the same as the field's own record buffer. Check the null
bits in the "record" buffer, instead of via Field::null_ptr.
Before this fix, all the performance schema instrumentation for both the binary log
and the relay log would use the following instruments:
- wait/io/file/sql/binlog
- wait/io/file/sql/binlog_index
- wait/synch/mutex/sql/MYSQL_BIN_LOG::LOCK_index
- wait/synch/cond/sql/MYSQL_BIN_LOG::update_cond
This instrumentation is too general and can be made more specific.
With this fix, the binlog instrumentation is unchanged,
and the relay log instrumentation is changed to:
- wait/io/file/sql/relaylog
- wait/io/file/sql/relaylog_index
- wait/synch/mutex/sql/MYSQL_RELAY_LOG::LOCK_index
- wait/synch/cond/sql/MYSQL_RELAY_LOG::update_cond
With this change, the performance instrumentation for the binary log and the relay log,
which share the same structure but have different uses, is more detailed.
This is especially important for hosts in the middle of a replication chain,
which act as both masters (binlog) and slaves (relaylog).
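For example (illustrative only, using the standard 5.5 performance_schema
summary table), relay log file I/O can now be observed on its own:
  SELECT EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT
  FROM performance_schema.events_waits_summary_global_by_event_name
  WHERE EVENT_NAME LIKE 'wait/io/file/sql/relaylog%';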
Problem: a byte beyond the end of the input string was read
in the case of broken XML that lacks a quote or doublequote
character closing a string value.
Fix: change the condition so it does not read beyond the end of the input string.
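An illustrative query of the kind that feeds such broken XML to the parser
(the tests actually added are in xml.test):
  -- The attribute value below is never closed by a quote character; before
  -- the fix the parser could read one byte past the end of the input.
  SELECT ExtractValue('<a attr="unterminated', '/a');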
@ mysql-test/r/xml.result
@ mysql-test/t/xml.test
Adding tests
@ strings/xml.c
When checking whether the closing quote/doublequote was found,
using p->cur[0] is unsafe, as p->cur can point to the byte after the value.
Compare p->cur to p->beg instead.
Problem: in case of string CASE/WHEN arguments with different
character sets, Item_func_case::find_item() called the comparator
cmp_items[x] on Items with mixed character sets, so an 8-bit value
could erroneously be treated as a utf16/utf32 value, which led to
a crash on a DBUG_ASSERT() because of the wrong value length.
This was wrong, as the string comparator expects its arguments
to be in the same character set.
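An illustrative statement of the shape that hit the assertion (the actual
test cases are in the ctype_utf16/ctype_utf32 tests):
  -- The CASE value is latin1 (8-bit) while the WHEN value is utf16; before
  -- the fix the comparator could treat the latin1 value as utf16 data.
  SELECT CASE _latin1 'a' WHEN _utf16 0x0061 THEN 'matched' ELSE 'no match' END;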
Fix: modify Item_func_case's argument list after calling
agg_arg_charsets_for_comparison() - put the Items in "agg" array
back to "args", because some of the Items in the "agg" array might
have been changed to character set converters:
- to Item_func_conv_charset for non-constant items
- to Item_string for constant items
In other words, perform the same substitution that is done in
all other string comparison and string result operations:
Replace
CASE latin1_item WHEN utf16_item THEN ... END
with
CASE CONVERT(latin1_item USING utf16) WHEN utf16_item THEN ... END
Replace
CASE utf16_item WHEN latin1_item THEN ... END
with
CASE utf16_item WHEN CONVERT(latin1_item USING utf16) THEN ... END
@ mysql-test/r/ctype_utf16.result
@ mysql-test/r/ctype_utf32.result
@ mysql-test/t/ctype_utf16.test
@ mysql-test/t/ctype_utf32.test
Adding tests
@ sql/item_cmpfunc.cc
Put "agg" back to "args".
@ sql/sql_string.cc
Backporting a fix for String::set_or_copy_aligned() from 5.6,
for better test coverage:
"SELECT _utf16 0x61" should expand the string to 0x0061 rather
than to 0x000061.
This fix was made in 5.6 under terms of "WL#4616 Implement UTF16-LE".
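For example (illustrative; HEX() shows the bytes of the utf16 value):
  SELECT HEX(_utf16 0x61);
should now return '0061', i.e. one complete two-byte code unit, rather than
the over-padded '000061'.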
IS FAILING".
The problem was that the large_tests.lock_tables_big test was
failing due to exceeding the open files limit on platforms where
this limit was set too low (the test simultaneously opens
approx. 6000 files).
This patch solves the issue by ensuring that the test is
skipped on such platforms.
This is a backport of the patch for MySQL Bug#50574.
Adding a SPATIAL INDEX on non-geometrical columns caused a
segmentation fault when the table was subsequently
inserted into.
A check was added to mysql_prepare_create_table() to explicitly
verify whether non-geometrical columns are used in a
spatial index, and to throw an error if so.
For MySQL 5.5 and later, a new and more meaningful error
message was introduced. For 5.1, we (re-)use an existing
error code.
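An illustrative statement that is now rejected up front instead of producing
a table that crashes on INSERT (table and column names are made up; MyISAM is
used because it supports SPATIAL indexes):
  CREATE TABLE t1 (a INT NOT NULL) ENGINE=MyISAM;
  ALTER TABLE t1 ADD SPATIAL INDEX (a);   -- now fails with an error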
mysql-test/r/filesort_debug.result:
New test case.
mysql-test/t/filesort_debug.test:
New test case.
sql/filesort.cc:
thd->killed does not imply thd->is_error(), so test for that separately.
UPDATES THE TABLE ENTRIES (formerly 55385)
BUG#11764529: MULTI UPDATE+INNODB REPORTS ER_KEY_NOT_FOUND
IF A TABLE IS UPDATED TWICE (formerly 57373)
If multiple-table update updates a row through two aliases and
the first update physically moves the row, the second update will
fail to locate the row. This results in different errors
depending on storage engine:
* MyISAM: Got error 134 from storage engine
* InnoDB: Can't find record in 'tbl'
None of these errors accurately describes the problem.
Furthermore, since MyISAM is non-transactional, the update
executed first will be performed while the second will not.
In addition, for two identical multiple-table update statements,
one could succeed and the other fail depending on whether or not
the record actually moved. This was inconsistent.
Two update operations may physically move a row:
1) Update of a column in a clustered primary key
2) Update of a column used to calculate which partition the
row belongs to
BUG#11764529 is about case 1) above, BUG#11762751 was about case 2).
The fix for these bugs is to return an error if a multiple-table
update is about to:
a) update a table through multiple aliases, and
b) perform an update that may physically move the row
in at least one of these aliases
This fix
* avoids partial updates as described for MyISAM above,
* provides the same error message, describing the actual problem,
for all storage engines,
* avoids inconsistent behavior where a statement fails or succeeds
depending on e.g. the partitioning algorithm of the table.
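A sketch of the kind of statement that is now rejected (table, columns and
data are made up; the exact error text is defined in errmsg-utf8.txt):
  CREATE TABLE t1 (pk INT PRIMARY KEY, a INT) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1,1), (2,2);
  -- t1 is updated through two aliases, and updating x.pk may physically
  -- move the row in the clustered index, so the statement is refused:
  UPDATE t1 AS x, t1 AS y SET x.pk = x.pk + 10, y.a = y.a + 1
  WHERE x.pk = y.pk;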
mysql-test/r/multi_update.result:
Add test for bug#57373
mysql-test/r/multi_update_innodb.result:
Add test for bug#57373
mysql-test/r/partition.result:
Add test for bug#55385
mysql-test/t/multi_update.test:
Add test for bug#57373
mysql-test/t/multi_update_innodb.test:
Add test for bug#57373
mysql-test/t/partition.test:
Add test for bug#55385
sql/handler.cc:
Translate handler error HA_ERR_RECORD_DELETED to server error
sql/share/errmsg-utf8.txt:
New error message for multi-table update where the same table is updated multiple times.
sql/sql_update.cc:
Add function unsafe_key_update()
Added code to call 'ctest' if the needed cmake file is present.
Will do so unless tests/suites are named on the mtr command line.
Also added an option to turn this on/off.
The ctest run is made to look like a test named 'unit-test' which counts towards the total.
Extracts the summary report and any test failures from the ctest output.
Addendum: added an override to turn this off in PB, and add it back in selected invocations.
The problem was that doing ALTER TABLE on a table which had a key
on a TEXT/BLOB column, with a prefix longer than the maximum number
of characters in this column (as per the character set), by mistake
caused an error (Error 1170 - ER_BLOB_KEY_WITHOUT_LENGTH).
This bug is not repeatable in 5.5.
This patch adds a regression test to alter_table.test and
contains no code changes.
("-") IN DATABASE NAMES IN ALTER DATABASE.
mysqldump did not quote the database name in the 'ALTER DATABASE'
statements in its output. This could cause a failure while reloading
the dump if the database name contained a hyphen '-', because the
unquoted name does not parse.
Fixed by quoting the database name.
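For example, for a database named a-b the statement in the dump output is
now written with the name quoted (character set and collation shown here
are illustrative):
  ALTER DATABASE `a-b` CHARACTER SET latin1 COLLATE latin1_swedish_ci;
whereas the previously emitted unquoted form (ALTER DATABASE a-b ...) fails
to parse when the dump is reloaded.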
client/mysqldump.c:
Bug#11766310 : 59398: MYSQLDUMP 5.1 CAN'T HANDLE A DASH
("-") IN DATABASE NAMES IN ALTER DATABASE.
Modified the print statement in order to print the quoted
database name for 'ALTER DATABASE' statements.
mysql-test/r/mysqldump.result:
Added a test case for bug#11766310.
mysql-test/t/mysqldump.test:
Added a test case for bug#11766310.
The loop that iterates over a subquery's references to outer fields used a
local boolean variable to tell whether the field was grouped or not. But the
implementation failed to reset the variable after each iteration, so a field
that was not directly aggregated could appear to be.
Fixed by resetting the variable upon each new iteration.
MONTHNAME(0) claimed that it was about to return a NOT NULL
value, whereas it actually returns NULL.
As a result, the protection on the storage_engine variable (which
cannot be NULL) was bypassed, the NULL value was accepted, and the
server crashed.
Fixed MONTHNAME(0) to report a valid NULL flag.
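A minimal illustration (the exact crashing statement is in func_time.test;
the SET shown in the comment is only a guess at the shape of the trigger):
  SELECT MONTHNAME(0);   -- returns NULL
  -- Before the fix the item metadata claimed NOT NULL, so a NULL result
  -- could slip past the NULL check on the storage_engine variable, e.g. in
  -- a statement of the form: SET SESSION storage_engine = MONTHNAME(0);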
mysql-test/r/func_time.result:
A test case for BUG#11766720.
mysql-test/t/func_time.test:
A test case for BUG#11766720.
sql/item_timefunc.cc:
MONTHNAME(0) must report NULL, as opposed to base class
MONTH(0) which is NOT NULL.
Fixed Item_func_monthname to inherit from Item_str_func
instead of Item_func_month.
sql/item_timefunc.h:
MONTHNAME(0) must report NULL, as opposed to base class
MONTH(0) which is NOT NULL.
Fixed Item_func_monthname to inherit from Item_str_func
instead of Item_func_month.
Problem:
IF() did not copy the collation derivation and repertoire from
an argument when the opposite argument was NULL:
IF(cond, res1, NULL)
IF(cond, NULL, res2)
Only the CHARSET_INFO pointer was copied.
This resulted in an 'illegal mix of collations' error.
Fix:
copy all collation parameters from the non-NULL argument:
CHARSET_INFO pointer, derivation, repertoire.
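An illustrative expression of the affected shape (made-up literals; whether
a particular mix actually errors depends on the derivation and repertoire,
which is exactly what the fix now copies):
  -- Before the fix, the result of IF() kept only the charset of 'a' but lost
  -- its derivation/repertoire, so combining it with a value in another
  -- character set could raise an 'illegal mix of collations' error:
  SELECT CONCAT(IF(1, _utf8 'a', NULL), _latin1 'b');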
This assumption in Item_cache_datetime::cache_value_int
was wrong:
- /* Assume here that the underlying item will do correct conversion.*/
- int_value= example->val_int_result();
mysql-test/r/subselect_innodb.result:
New test case.
mysql-test/t/subselect_innodb.test:
New test case.
sql/item.cc:
In Item_cache_datetime::cache_value_int()
- call get_time() or get_date() depending on desired type
- convert the returned MYSQL_TIME value to longlong depending on desired type
sql/item.h:
The cached int_value in Item_cache_datetime should not be unsigned:
- it is used mostly in a signed context
- it can actually have a negative value (for the TIME data type,
  e.g. the TIME value '-10:20:30')
sql/item_cmpfunc.cc:
Add comment on Bug#59685
sql/item_subselect.cc:
Add some DBUG_TRACE for easier bug-hunting.