When using the BINLOG statement to execute row log events, the session variables
foreign_key_checks and unique_checks are changed temporarily, since each row
log event carries its own session environment whose foreign_key_checks and
unique_checks settings may differ from those of the session executing the
BINLOG statement. But these variables were not restored correctly after the
BINLOG statement, which could cause subsequent statements to fail or to
generate unexpected data.
In this patch, code is added to back up and restore these two variables,
so the BINLOG statement no longer affects the current session's variables.
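A minimal sketch of the expected behaviour after the fix; the base64 payload
below is only a placeholder, not a real event:
--
SET @@session.foreign_key_checks = 1, @@session.unique_checks = 1;
BINLOG '<base64-encoded rows event>';
SELECT @@session.foreign_key_checks, @@session.unique_checks; -- both still 1
--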
Problem: MySQL cp1251 did not support 'U+20AC EURO SIGN',
which was assigned to 0x88 a few years ago.
Fix: add the mapping 0x88 <-> U+20AC
@ mysql-test/include/ctype_8bit.inc
New shared file to test 8bit character sets.
@ mysql-test/r/ctype_cp1251.result
@ mysql-test/t/ctype_cp1251.test
Adding tests
@ sql/share/charsets/cp1251.xml
Adding mapping
@ strings/ctype-extra.c
Regenerating ctype-extra.c using strings/conf_to_src
according to new cp1251.xml
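The new mapping can be checked from SQL; U+20AC encodes as E2 82 AC in utf8:
--
SELECT HEX(CONVERT(_utf8 0xE282AC USING cp1251)); -- 88
SELECT HEX(CONVERT(_cp1251 0x88 USING utf8));     -- E282AC
--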
After dropping and recreating the database specified along with the --one-database
option at the command line, the mysql client keeps filtering statements even after
the execution of a 'USE' command on the same database.
The --one-database option enables the filtering of statements when the current
database is not the one specified at the command line. However, when that
database is dropped and recreated, the variable (current_db) that holds the
initial database name gets altered. The bug stems from the fact that current_db
gets set to a null value (0) when a 'use db_name' follows the recreation
of the same database db_name (specified at the command line); hence skip_updates
gets set to 1, which in turn triggers further filtering of statements.
Fixed by making get_current_db() a no-op function when one_database is set,
so that under that condition current_db will not get altered.
Note, however, that the value of current_db can change when we execute a 'connect'
command with a different database to reconnect to the server, in which case
the behavior of --one-database will be determined by this new database.
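A sketch of the scenario, assuming a hypothetical script run as
'mysql --one-database db1 < script.sql':
--
DROP DATABASE db1;
CREATE DATABASE db1;
USE db1;
CREATE TABLE t1 (a INT); -- before the fix this statement was wrongly filtered out
--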
It does work in general; the problem here was that the test name
'alter_table' matches 'main.alter_table-big', which had already been found.
Fixed by matching more explicitly (with/without suite name).
If mysqltest dies, mtr waits up to 100ms to see if mysqld dies too.
But in that case, it should not care about an expected crash.
Fix: jump past the code that checks the expect file.
Follow-up discussed with the reporter:
Avoid a hard shutdown after a test failure, if it was caused by a server log warning
AND we are running under valgrind.
More general pick-up of valgrind summaries, as their order apparently may vary.
Do exit(1) if we did find valgrind summary warnings.
When the sort buffer runs low on memory, QUICK_INDEX_MERGE_SELECT creates a
temporary file where it stores the row ids that match the QUICK_SELECT ranges,
except for the clustered primary key range, which is processed separately.
In init_read_record() we check whether a temporary file is used and choose the
appropriate record access method. This check does not take into account that
the temporary file contains only a partial result when QUICK_INDEX_MERGE_SELECT
is used with a clustered primary key range.
The fix is to always use rr_quick when QUICK_INDEX_MERGE_SELECT
with a clustered primary key range is used.
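A hypothetical query shape that takes this code path, assuming an InnoDB table t
with primary key pk and secondary key k, and a sort buffer small enough to spill
to the temporary file:
--
SELECT * FROM t WHERE k BETWEEN 1 AND 10 OR pk BETWEEN 1 AND 1000;
--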
Problem: crash in the Item_float constructor on a DBUG_ASSERT due
to a string parameter that was not null-terminated.
Fix: make Item_float::Item_float safe for non-null-terminated parameters:
- use a temporary buffer when generating the error
modified:
@ mysql-test/r/xml.result
@ mysql-test/t/xml.test
@ sql/item.cc
Bug#55794: ulonglong options of mysqld show wrong values.
Port the few remaining system variables to the correct mechanism:
range-check in the check stage (throwing an error or warning at that point,
as needed and depending on STRICTness), and update in the update stage.
Fix some signedness errors when retrieving sysvar values for display.
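For illustration (the choice of variable is an assumption; max_binlog_cache_size
is a ulonglong option, used here only as an example):
--
SET GLOBAL max_binlog_cache_size = 18446744073709547520;
SHOW GLOBAL VARIABLES LIKE 'max_binlog_cache_size'; -- must show the stored value,
                                                    -- not a garbled signed one
--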
MySQL 5.1 server
The server used to clip overly long user names. This was presumably lost
when the code was made UTF8-clean.
Now we emulate the old behaviour for backward compatibility, but in a
UTF8-correct way.
The test result differs on Windows, since
it writes out 'localhost:<port>' instead of
only 'localhost', because TCP/IP is used instead
of Unix sockets on Windows.
Fixed by replacing that column.
Also made some long-running tests require --big-test,
and added a weekly run of all tests requiring --big-test.
with on duplicate key update
There was a missed corner case in the partitioning
handler which caused next_insert_id to be changed
in the second-level handlers (i.e. the handler of a partition),
which triggered this debug assertion.
The solution is to always ensure that only the partitioning
level generates auto_increment values, since if it were done
within a partition, it might fail to match the partition
function.
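A hypothetical statement shape that exercises this path:
--
CREATE TABLE t1 (
  a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  b INT
) PARTITION BY HASH (a) PARTITIONS 4;
INSERT INTO t1 VALUES (1, 1), (1, 2)
ON DUPLICATE KEY UPDATE b = VALUES(b);
--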
in different default schema.
In strict mode, when data truncation or conversion happens,
THD::killed is set to THD::KILL_BAD_DATA.
This is an abuse of the KILL mechanism, used to guarantee that
execution of the statement is aborted.
Stored procedure execution, on the other hand,
upon detecting that the connection was killed, would
terminate immediately, without trying to restore the caller's
context, in particular without restoring the caller's current schema.
The fix is, when terminating a stored procedure execution,
to bypass cleanup only if the entire connection was killed,
not for other forms of KILL.
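A sketch of the user-visible symptom, with hypothetical names:
--
SET sql_mode = 'TRADITIONAL';
CREATE DATABASE db1;
CREATE DATABASE db2;
CREATE TABLE db2.t1 (c CHAR(1));
CREATE PROCEDURE db2.p1() INSERT INTO db2.t1 VALUES ('too long');
USE db1;
CALL db2.p1();     -- fails with a truncation error in strict mode
SELECT DATABASE(); -- must still return 'db1', not 'db2'
--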
ALTER TABLE RENAME, DISABLE KEYS.
The code of ALTER TABLE RENAME, DISABLE KEYS could
issue a commit while holding the LOCK_open mutex.
This is a regression introduced by the fix for
Bug 54453.
It failed an assert guarding us against a potential
deadlock with connections trying to execute
FLUSH TABLES WITH READ LOCK.
The fix is to move the acquisition of LOCK_open outside
the section that issues ha_autocommit_or_rollback().
LOCK_open is taken to protect against concurrent
operations on .frm files and the table definition
cache, and does not need to cover the call to commit.
A test case is added to innodb_mysql.test.
The patch is to be null-merged to 5.5, which
already has 54453 null-merged to it.
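The statement shape in question, with hypothetical table names:
--
ALTER TABLE t1 RENAME TO t2, DISABLE KEYS;
--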
sporadically.
The cause of the sporadic timeout was a leaked protection
against the global read lock, taken by the RENAME statement
and not released if an error occurred during RENAME.
The leaked protection counter would keep the value of
protect_against_global_read_lock from ever dropping to 0.
Consequently, FLUSH TABLES in all connections, including the
one that leaked the protection, could not proceed.
The fix is to ensure that all branches in the RENAME code properly
release the GRL protection.
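A sketch of the failure scenario, with hypothetical names:
--
-- connection 1: a failing RENAME must still release the GRL protection
RENAME TABLE no_such_table TO t1; -- fails with an error
-- connection 2:
FLUSH TABLES WITH READ LOCK;      -- used to time out sporadically
--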
There were actually more problems in this area:
Slaves (if any) were unconditionally restarted; this appears unnecessary.
Sort criteria were suboptimal and included the test name.
Added logic to "reserve" a sequence of tests with the same config for one thread.
Got rid of the sort_criteria hash; put it into the test case itself.
Added a little sanity check that the expected worker picks up the test.
Fixed some tests that may fail when starting on a running server;
some of these fail only if the *same* test is repeated.
Finally, special sorting of tests that do --force-restart.
The earlier tests for these bugs were removed and replaced by the
comprehensive innodb-create-options.test.
It uses the rules listed in the comments at the top of that test.
This patch introduces these differences from previous behavior:
1) KEY_BLOCK_SIZE=0 is allowed by InnoDB in both strict and non-strict mode
with no errors or warnings. It was previously used by the server to set
KEY_BLOCK_SIZE to undefined. (Bug#56628)
2) An explicit valid non-DEFAULT ROW_FORMAT always takes priority over a
valid KEY_BLOCK_SIZE. (Bug#56632)
3) Automatic use of the COMPRESSED row format is only done if the ROW_FORMAT
is DEFAULT or unspecified.
4) ROW_FORMAT=FIXED is prevented in strict mode.
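Hedged examples of rules 1) and 3), assuming innodb_file_per_table=ON and
innodb_file_format=Barracuda:
--
SET GLOBAL innodb_strict_mode = ON;
CREATE TABLE t1 (a INT) ENGINE=InnoDB KEY_BLOCK_SIZE=0; -- rule 1: accepted, no warning
CREATE TABLE t2 (a INT) ENGINE=InnoDB KEY_BLOCK_SIZE=8; -- rule 3: ROW_FORMAT unspecified,
                                                        -- so COMPRESSED is used
--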
This patch also includes various formatting changes for consistency with
InnoDB coding standards.
Related Bugs
Bug#54679: ALTER TABLE causes compressed row_format to revert to compact
Bug#56628: ALTER TABLE .. KEY_BLOCK_SIZE=0 produces untrue warning or unnecessary error
Bug#56632: ALTER TABLE implicitly changes ROW_FORMAT to COMPRESSED
Replace the array of mutexes that used to protect
dict_index_t::stat_n_diff_key_vals[] with an array of rw-locks that protects
all the stats-related members in dict_table_t and all of its indexes.
Approved by: Jimmy (rb://503)
by a function and column
The bug report reveals two different bugs about grouping
on a function:
1) grouping by the TIME_TO_SEC() function result caused
a server crash or wrong results, and
2) grouping by a function returning a blob caused
an unexpected "Duplicate entry" error and wrong
results.
Details for the 1st bug:
TIME_TO_SEC() returns NULL if its argument is invalid (an empty
string, for example). Thus its nullability depends not only
on the nullability of its arguments but also on their values.
Fixed by (pessimistically) marking TIME_TO_SEC() as nullable
regardless of the nullability of its arguments.
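For example, the function can yield NULL even though its argument column is
declared NOT NULL:
--
CREATE TABLE t1 (a VARCHAR(10) NOT NULL);
INSERT INTO t1 VALUES ('');
SELECT TIME_TO_SEC(a) FROM t1;                   -- NULL: '' is not a valid TIME
SELECT COUNT(*) FROM t1 GROUP BY TIME_TO_SEC(a);
--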
Details for the 2nd bug:
The server is unable to create indexes on blobs without an
explicit blob key part length. However, this fact was
ignored for blob function result fields in GROUP BY
intermediate tables.
Fixed by disabling GROUP BY index creation for blob
function result fields, the same as for regular blob fields.
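One hypothetical way to obtain a blob-typed group expression: any function
result wide enough to be converted to a blob in the intermediate table will do,
for example:
--
CREATE TABLE t2 (a INT);
INSERT INTO t2 VALUES (1),(1),(2);
SELECT COUNT(*) FROM t2 GROUP BY REPEAT(a, 1000);
--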
The lines below, which were added by the patch for Bug#56814, cause this crash:
+ if (table->table)
+ table->table->maybe_null= FALSE;
Consider the following test case:
--
CREATE TABLE t1(f1 INT NOT NULL);
INSERT INTO t1 VALUES (16777214),(0);
SELECT COUNT(*) FROM t1 LEFT JOIN t1 t2
ON 1 WHERE t2.f1 > 1 GROUP BY t2.f1;
DROP TABLE t1;
--
We set TABLE::maybe_null to FALSE for the t2 table,
and in create_tmp_field() we create the corresponding tmp table field
using the create_tmp_field_from_item() function instead of
create_tmp_field_from_field(). As a result we get a
LONGLONG field. As we have a GROUP BY clause, we calculate the
group buffer length; see calc_group_buffer().
The item from the group list that is used for the calculation
refers to the field from the real tables and has the LONG type.
So the group buffer length becomes insufficient for storing a
LONGLONG value. This leads to overwriting a wrong memory
area in the do_field_int() function, which is called from
end_update().
After some investigation I found out that
create_tmp_field_from_item() is used only for OLAP
grouping and cannot be used for common grouping,
as there could be an incompatibility between the tmp
table fields and the group buffer length.
We cannot remove the create_tmp_field_from_item() call from
create_tmp_field() as OLAP needs it, and we cannot use this
function for common grouping. So we should remove the setting of
TABLE::maybe_null to FALSE from simplify_joins().
In that case we get the wrong behaviour of
list_contains_unique_index() back. To fix it we
can use the Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join.