The Valgrind warning happens because of uninitialized null bytes.
In row_sel_push_cache_row_for_mysql() we fill the fetch cache
with the necessary field values; row_sel_store_mysql_rec() is called
for this and leaves the null bytes untouched.
Later row_sel_pop_cached_row_for_mysql() rewrites the table record
buffer with the uninitialized null bytes. The problem can be seen in the
test case:
at 'SELECT...' we go through the row_sel_push...->row_sel_store...->row_sel_pop_cached...
chain, which rewrites the table->record[0] buffer with uninitialized null bytes.
When the 'UPDATE...' statement is executed, compare_record() uses this buffer and
the Valgrind warning occurs.
The fix is to initialize the null bytes with default values.
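A minimal sketch of the fix idea, not the actual InnoDB code; the
names init_record_null_bytes(), default_null_flags and null_bytes are
illustrative:

  #include <cstdint>
  #include <cstring>
  #include <cstddef>

  // Fill the leading null-flag bytes of a MySQL record buffer with the
  // table's default null-bit pattern before the field values are stored,
  // so that no byte of the buffer is ever left uninitialized.
  void init_record_null_bytes(std::uint8_t *record_buf,
                              const std::uint8_t *default_null_flags,
                              std::size_t null_bytes)
  {
    // Storing the fetched fields afterwards only sets or clears the
    // relevant bits; untouched bits now hold defined default values.
    std::memcpy(record_buf, default_null_flags, null_bytes);
  }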
Problem: the server missed the fact that one can read from
two indexes alternately using the HANDLER interface.
Fix: check that the same (initialized) index is involved
when reading next/prev values from the index.
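A minimal sketch of the check, assuming a hypothetical per-handler
cursor state; this is not the server's code:

  #include <cstddef>

  // Hypothetical cursor state for an open HANDLER: active_index is the
  // index the read position was initialized on, or NO_INDEX if none yet.
  struct HandlerCursor {
    static constexpr std::size_t NO_INDEX = static_cast<std::size_t>(-1);
    std::size_t active_index = NO_INDEX;
  };

  // READ ... NEXT/PREV may only continue a scan on the index that is
  // currently initialized; any other index requires repositioning first.
  bool can_continue_scan(const HandlerCursor &c, std::size_t requested_index)
  {
    return c.active_index != HandlerCursor::NO_INDEX &&
           c.active_index == requested_index;
  }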
Added option --user-args, to be used with --start*.
It only keeps --defaults-file and --defaults-group-suffix.
Also added the missing help text entry for --start-and-exit.
Bug#46527 COMMIT AND CHAIN RELEASE does not make sense
Bug#53343 completion_type=1, COMMIT/ROLLBACK AND CHAIN don't
preserve the isolation level
Bug#53346 completion_type has strange effect in a stored
procedure/prepared statement
Added test cases to verify the expected behaviour of:
SET SESSION TRANSACTION ISOLATION LEVEL,
SET TRANSACTION ISOLATION LEVEL,
@@completion_type,
COMMIT AND CHAIN,
ROLLBACK AND CHAIN
...and some combinations of the above.
Logging slow stored procedures caused the slow log to record
very large lock times. The lock times were the result of a
negative number being cast to an unsigned integer.
The reason the lock time appeared negative was that
one of the measurement points was reset after execution,
causing it to change order with the start time of the
statement.
This bug is related to bug 47905, which in turn was
introduced by a joint fix for bugs 12480, 12481, 12482 and 11587.
The fix is to reset only start_time before each statement
execution in an SP, while not resetting start_utime or
utime_after_lock, which are used for measuring the
performance of the SP. start_time is used to set the
timestamp on the replication event, which controls how
the slave interprets time functions like NOW().
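A small self-contained example of why the logged lock time looked
huge: a slightly negative signed duration, cast to an unsigned
integer, wraps around to an enormous value.

  #include <cstdint>
  #include <cstdio>

  int main()
  {
    std::int64_t lock_time_usec = -5;  // start point was reset after execution
    std::uint64_t logged = static_cast<std::uint64_t>(lock_time_usec);
    std::printf("%llu\n", static_cast<unsigned long long>(logged));
    // prints 18446744073709551611
    return 0;
  }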
The problem is in the Item_func_isnull::update_used_tables() function:
a bracket is in the wrong place. Because of that, the isnull item is
erroneously treated as a const item. The fix is to put the brackets in
the right place.
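A generic illustration of how a misplaced bracket can flip such a
decision; this is not the actual Item_func_isnull code, the two flags
are made up for the example:

  #include <cstdio>

  int main()
  {
    bool depends_on_tables = true;   // the IS NULL argument reads a table column
    bool other_condition   = false;

    // Intended: const only if the item depends on no tables at all.
    bool is_const_intended   = (!depends_on_tables) && other_condition;  // false
    // Misplaced bracket: the negation covers the whole conjunction,
    // so the item is wrongly reported as const.
    bool is_const_misbracket = !(depends_on_tables && other_condition);  // true
    std::printf("%d %d\n", is_const_intended, is_const_misbracket);
    return 0;
  }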
This crash happened if a table was listed twice in a DROP TABLE statement
and the statement was executed while in LOCK TABLES mode. Since the two
elements of the table list were identical, they were assigned the same TABLE object.
During processing of the first table list element, the TABLE instance was destroyed
and the second table list element was left with a dangling reference.
When this reference was later accessed, the server crashed.
Listing the same table twice in DROP TABLE should give an ER_NONUNIQ_TABLE
error. However, this did not happen, as the check for unique table names was
skipped because the lock type for the table list elements was set to TL_IGNORE.
Previously TL_UNLOCK was used and the uniqueness check was performed.
This bug was a regression introduced by a pre-requisite patch for
Bug#51263 "Deadlock between transactional SELECT and ALTER TABLE ...
REBUILD PARTITION". The regression only existed in an internal team
tree and never in any released code.
This patch reverts DROP TABLE (and DROP VIEW) to the old behavior of
using TL_UNLOCK locks. Test case added to drop.test.
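A simplified illustration of the dangling-reference pattern described
above; this is not the server's table-list handling, just the shape of
the bug:

  #include <vector>

  struct Table { bool open = true; };

  int main()
  {
    Table *t = new Table;
    std::vector<Table *> drop_list = {t, t};  // the same table listed twice

    delete drop_list[0];  // first element processed: the object is destroyed
    // drop_list[1] now dangles; dereferencing it here is the kind of access
    // that the ER_NONUNIQ_TABLE check is meant to rule out.
    return 0;
  }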
This patch changes how we acquire metadata
locks for DML statements and changes the way MDL locks
are acquired/granted in the contended case.
Instead of backing off when a lock conflict is encountered
and waiting for it to go away before restarting the open_tables()
process, we now wait for the lock to be released without releasing
any previously acquired locks. If the conflicting lock goes away,
we resume opening tables. If the wait leads to a deadlock, we
try to resolve it by backing off and restarting open_tables()
immediately.
As a result, both waiting for the possibility to acquire a
metadata lock and the acquisition itself now always happen within the
same MDL API call. This has made it possible to turn releasing a lock
and granting it to the most appropriate pending request into an
atomic operation.
Thanks to this, it became possible, when releasing a lock, to wake
up only those waiters whose requests can be satisfied at that
moment, and to wake up only one waiter when granting its request
would prevent all other requests from being satisfied. This solves
the thundering herd problem which occurred when we released a lock
and woke up many waiters for SNRW or X locks (this was the issue
in bug#52289 "performance regression for MyISAM in sysbench
OLTP_RW test").
This also allowed us to implement fairer (FIFO) scheduling
among waiters with the same priority.
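A minimal, self-contained sketch of this wake-up policy using plain
standard C++ primitives; it is not the MDL subsystem itself, only an
illustration of waking exactly the waiters that can be granted, in
FIFO order:

  #include <condition_variable>
  #include <deque>
  #include <mutex>

  class FifoSharedExclusiveLock {
    struct Waiter {
      bool exclusive = false;
      bool granted = false;
      std::condition_variable cv;
    };

    std::mutex mtx;
    std::deque<Waiter *> waiters;  // FIFO queue of pending requests
    int shared_holders = 0;
    bool exclusive_held = false;

    bool compatible(bool exclusive) const {
      return exclusive ? (!exclusive_held && shared_holders == 0)
                       : !exclusive_held;
    }

    // Grant requests from the head of the queue while they are compatible,
    // waking only the threads whose request is actually satisfied (no
    // thundering herd) and never overtaking an earlier waiter (FIFO).
    void grant_from_head() {
      while (!waiters.empty()) {
        Waiter *w = waiters.front();
        if (!compatible(w->exclusive))
          break;
        if (w->exclusive) exclusive_held = true; else ++shared_holders;
        w->granted = true;
        w->cv.notify_one();
        waiters.pop_front();
      }
    }

   public:
    void lock(bool exclusive) {
      std::unique_lock<std::mutex> lk(mtx);
      if (waiters.empty() && compatible(exclusive)) {  // uncontended fast path
        if (exclusive) exclusive_held = true; else ++shared_holders;
        return;
      }
      Waiter w;
      w.exclusive = exclusive;
      waiters.push_back(&w);
      w.cv.wait(lk, [&] { return w.granted; });  // waiting and granting share one lock
    }

    void unlock(bool exclusive) {
      std::lock_guard<std::mutex> lk(mtx);
      if (exclusive) exclusive_held = false; else --shared_holders;
      grant_from_head();  // releasing and re-granting happen as one atomic step
    }
  };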
It also opens the door to introducing new types of metadata
lock requests, such as a low-priority SNRW lock, which is
necessary in order to support LOCK TABLES LOW_PRIORITY WRITE.
Note that after this change the server can sometimes report an
ER_LOCK_DEADLOCK error in cases where it did not do so before.
In particular, we will always report this error if waiting for a
conflicting lock happens in the middle of a transaction
and results in a deadlock. Before this patch the error was
not reported if the deadlock could have been resolved by backing
off all metadata locks acquired by the current statement.
Conflicts:
Text conflict in mysql-test/r/archive.result
Contents conflict in mysql-test/r/innodb_bug38231.result
Text conflict in mysql-test/r/mdl_sync.result
Text conflict in mysql-test/suite/binlog/t/disabled.def
Text conflict in mysql-test/suite/rpl_ndb/r/rpl_ndb_binlog_format_errors.result
Text conflict in mysql-test/t/archive.test
Contents conflict in mysql-test/t/innodb_bug38231.test
Text conflict in mysql-test/t/mdl_sync.test
Text conflict in sql/sp_head.cc
Text conflict in sql/sql_show.cc
Text conflict in sql/table.cc
Text conflict in sql/table.h
Some of the platforms the server runs on do not support dates
later than 2038 because the internal time type is 32 bits wide.
Added checks so that the server refuses dates that cannot
be handled: either by throwing an error when such a date is set at
runtime, or by refusing to start or by shutting down the server if
the system date cannot be stored in my_time_t.
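A sketch of the kind of range check this adds; MY_TIME_T_MAX32 is an
illustrative constant standing in for the my_time_t upper bound, not
the server's macro:

  #include <cstdint>

  // A timestamp is storable only if it fits in a 32-bit signed time value
  // (upper bound 2038-01-19 03:14:07 UTC).
  const std::int64_t MY_TIME_T_MAX32 = INT32_MAX;

  bool timestamp_is_storable(std::int64_t epoch_seconds)
  {
    return epoch_seconds >= 0 && epoch_seconds <= MY_TIME_T_MAX32;
  }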
When using unique keys with nullable parts in RBR, the slave can
choose the wrong row to update. This happens because a table with
a unique key containing nullable parts cannot strictly guarantee
uniqueness. As stated in the manual, for all engines a UNIQUE
index allows multiple NULL values in columns that can contain
NULL.
We fix this on the slave by extending the checks done before assuming
that the row found through a unique index is the correct
one. This means that when a record (R) is fetched from the storage
engine and a non-primary key (K) is used, the server does
the following:
- If K is unique and has no nullable parts, return R;
- Otherwise, if any field of the before image (BI) that is part of K
  is NULL, do an index scan;
- If there is no NULL field in the BI part of K, return R.
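A compact sketch of this decision logic; the three boolean parameters
are simplified stand-ins for the properties checked above, not the
actual replication code:

  enum class RowLookup { RETURN_ROW, INDEX_SCAN };

  RowLookup choose_lookup(bool key_is_unique,
                          bool key_has_nullable_part,
                          bool before_image_has_null_in_key)
  {
    // A unique key without nullable parts identifies the row unambiguously.
    if (key_is_unique && !key_has_nullable_part)
      return RowLookup::RETURN_ROW;
    // NULLs in the key columns of the before image defeat uniqueness:
    // fall back to an index scan to find the exact matching row.
    if (before_image_has_null_in_key)
      return RowLookup::INDEX_SCAN;
    // Otherwise the row fetched through the key can be trusted.
    return RowLookup::RETURN_ROW;
  }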
A side change: renamed the existing test case file and added a
test case covering the changes in this patch.
Problems:
- regression (compared to version 5.1) in metadata for BLOB types
- inconsistency between the length metadata in the server and in embedded for BLOB types
- wrong max_length calculation in items derived from BLOB columns
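A sketch of what such a "safe" char-to-byte length helper can look
like, mirroring the idea behind char_to_byte_length_safe() referenced
in the notes below (not necessarily its exact implementation): do the
multiplication in 64 bits and clamp to the 32-bit maximum so the
calculation cannot overflow.

  #include <cstdint>
  #include <limits>

  inline std::uint32_t char_to_byte_length_safe(std::uint32_t char_length,
                                                std::uint32_t mbmaxlen)
  {
    // Multiply in 64 bits, then clamp to UINT32_MAX instead of wrapping.
    const std::uint64_t bytes =
        static_cast<std::uint64_t>(char_length) * mbmaxlen;
    return bytes > std::numeric_limits<std::uint32_t>::max()
               ? std::numeric_limits<std::uint32_t>::max()
               : static_cast<std::uint32_t>(bytes);
  }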
@ libmysqld/lib_sql.cc
Calculating length metadata in embedded similarly to the server version,
using the new function char_to_byte_length_safe().
@ mysql-test/r/ctype_utf16.result
Adding tests
@ mysql-test/r/ctype_utf32.result
Adding tests
@ mysql-test/r/ctype_utf8.result
Adding tests
@ mysql-test/r/ctype_utf8mb4.result
Adding tests
@ mysql-test/t/ctype_utf16.test
Adding tests
@ mysql-test/t/ctype_utf32.test
Adding tests
@ mysql-test/t/ctype_utf8.test
Adding tests
@ mysql-test/t/ctype_utf8mb4.test
Adding tests
@ sql/field.cc
Overriding char_length() for Field_blob:
unlike in the generic Item::char_length(), we don't
divide by mbmaxlen for BLOBs.
@ sql/field.h
- Making Field::char_length() virtual
- Adding prototype for Field_blob::char_length()
@ sql/item.h
- Adding new helper function char_to_byte_length_safe()
- Using new function
@ sql/protocol.cc
Using new function char_to_byte_length_safe().
modified:
libmysqld/lib_sql.cc
mysql-test/r/ctype_utf16.result
mysql-test/r/ctype_utf32.result
mysql-test/r/ctype_utf8.result
mysql-test/r/ctype_utf8mb4.result
mysql-test/t/ctype_utf16.test
mysql-test/t/ctype_utf32.test
mysql-test/t/ctype_utf8.test
mysql-test/t/ctype_utf8mb4.test
sql/field.cc
sql/field.h
sql/item.h
sql/protocol.cc
innodb.innodb [ fail ]
Test ended at 2010-06-02 15:04:06
CURRENT_TEST: innodb.innodb
--- /usr/w/mysql-trunk-innodb/mysql-test/suite/innodb/r/innodb.result 2010-05-23 23:10:26.576407000 +0300
+++ /usr/w/mysql-trunk-innodb/mysql-test/suite/innodb/r/innodb.reject 2010-06-02 15:04:05.000000000 +0300
@@ -2648,7 +2648,7 @@
create table t4 (s1 char(2) binary,primary key (s1)) engine=innodb;
insert into t1 values (0x41),(0x4120),(0x4100);
insert into t2 values (0x41),(0x4120),(0x4100);
-ERROR 23000: Duplicate entry 'A\x00' for key 'PRIMARY'
+ERROR 23000: Duplicate entry 'A' for key 'PRIMARY'
insert into t2 values (0x41),(0x4120);
The change in the printout was introduced in:
------------------------------------------------------------
revno: 3008.6.2
revision-id: sergey.glukhov@sun.com-20100527160143-57nas8nplzpj26dz
parent: sergey.glukhov@sun.com-20100527155443-24vqi9o8rpnkyci7
committer: Sergey Glukhov <Sergey.Glukhov@sun.com>
branch nick: mysql-trunk-bugfixing
timestamp: Thu 2010-05-27 20:01:43 +0400
message:
Bug#52430 Incorrect key in the error message for duplicate key error involving BINARY type
For BINARY(N) strip trailing zeroes to make the error message nice-looking
@ mysql-test/r/errors.result
test case
@ mysql-test/r/type_binary.result
result fix
@ mysql-test/t/errors.test
test case
@ sql/key.cc
For BINARY(N) strip trailing zeroes to make the error message nice-looking
and its author (Sergey) did not notice the test failure because that test
had been disabled in his tree.
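For reference, a minimal illustration of the "strip trailing zeroes"
formatting that changed the message; this is not sql/key.cc itself:

  #include <string>

  // Drop trailing NUL padding from a BINARY(N) key value before printing
  // it in the duplicate-key error message, so 0x4100 shows up as 'A'
  // rather than 'A\x00'.
  std::string strip_trailing_zero_bytes(std::string key_value)
  {
    while (!key_value.empty() && key_value.back() == '\0')
      key_value.pop_back();
    return key_value;
  }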
This patch aims at moving some rpl tests to run on a daily
basis instead of on a per-push basis. To accomplish this goal,
the following modifications are proposed:
- MTR: added the --skip-test-list CLI parameter
This option allows the user to specify more than one
disabled.def file, for example:
perl mtr --skip-test-list=list1.list --skip-test-list=list2.list
- Added collections/disabled-per-push.list
This file lists the test cases that should be disabled per
push.
- Changed mysql-test/collections/default.push
Added --skip-test-list=collections/disabled-per-push.list
to rpl_binlog_row, ps_row and n_mix runs.
- Changed mysql-test/collections/default.daily
Added the rpl_binlog_row run (since it is partially run per push, we
should run it fully on a daily basis).
------------------------------------------------------------
revno: 3495
committer: Marko Mäkelä <marko.makela@oracle.com>
branch nick: 5.1-innodb
timestamp: Wed 2010-06-02 13:37:14 +0300
message:
Bug#53674: InnoDB: Error: unlock row could not find a 4 mode lock on the record
In semi-consistent read, only unlock freshly locked non-matching records.
lock_rec_lock_fast(): Return LOCK_REC_SUCCESS,
LOCK_REC_SUCCESS_CREATED, or LOCK_REC_FAIL instead of TRUE/FALSE.
enum db_err: Add DB_SUCCESS_LOCKED_REC for indicating a successful
operation where a record lock was created.
lock_sec_rec_read_check_and_lock(),
lock_clust_rec_read_check_and_lock(), lock_rec_enqueue_waiting(),
lock_rec_lock_slow(), lock_rec_lock(), row_ins_set_shared_rec_lock(),
row_ins_set_exclusive_rec_lock(), sel_set_rec_lock(),
row_sel_get_clust_rec_for_mysql(): Return DB_SUCCESS_LOCKED_REC if a
new record lock was created. Adjust callers.
row_unlock_for_mysql(): Correct the function documentation.
row_prebuilt_t::new_rec_locks: Correct the documentation.
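A sketch of the idea behind the three-valued result; this is
illustrative, not the InnoDB code: by telling the caller whether the
row lock already existed or was created just now, a semi-consistent
read can release only the locks it freshly acquired on non-matching
rows.

  #include <functional>

  enum class LockResult { SUCCESS, SUCCESS_CREATED, FAIL };

  void after_row_evaluated(LockResult lock_result,
                           bool row_matches,
                           bool semi_consistent_read,
                           const std::function<void()> &release_fresh_lock)
  {
    // Only a lock created by this statement (SUCCESS_CREATED) on a row
    // that does not match the search condition may be released again;
    // pre-existing locks (SUCCESS) must be kept.
    if (semi_consistent_read && !row_matches &&
        lock_result == LockResult::SUCCESS_CREATED)
      release_fresh_lock();
  }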
MTR will ignore fully qualified test name entries in disabled.def
lists. Therefore, it would still run a test case even if it is
listed there.
This patch fixes this by extending the check done when marking a test
case as disabled so that it takes into consideration not only entries
that contain the simple test name but also those that contain fully
qualified test names.
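A small sketch of the extended match; the real check lives in the
Perl-based MTR scripts, this is only an illustration:

  #include <string>

  // An entry disables a test if it matches either the bare test name or
  // the fully qualified "suite.name" form.
  bool entry_disables_test(const std::string &entry,
                           const std::string &suite,
                           const std::string &name)
  {
    return entry == name || entry == suite + "." + name;
  }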
In semi-consistent read, only unlock freshly locked non-matching records.
lock_rec_lock_fast(): Return LOCK_REC_SUCCESS,
LOCK_REC_SUCCESS_CREATED, or LOCK_REC_FAIL instead of TRUE/FALSE.
enum db_err: Add DB_SUCCESS_LOCKED_REC for indicating a successful
operation where a record lock was created.
lock_sec_rec_read_check_and_lock(),
lock_clust_rec_read_check_and_lock(), lock_rec_enqueue_waiting(),
lock_rec_lock_slow(), lock_rec_lock(), row_ins_set_shared_rec_lock(),
row_ins_set_exclusive_rec_lock(), sel_set_rec_lock(),
row_sel_get_clust_rec_for_mysql(): Return DB_SUCCESS_LOCKED_REC if a
new record lock was created. Adjust callers.
row_unlock_for_mysql(): Correct the function documentation.
row_prebuilt_t::new_rec_locks: Correct the documentation.
In semi-consistent read, only unlock freshly locked non-matching records.
Define DB_SUCCESS_LOCKED_REC for indicating a successful operation
where a record lock was created.
lock_rec_lock_fast(): Return LOCK_REC_SUCCESS,
LOCK_REC_SUCCESS_CREATED, or LOCK_REC_FAIL instead of TRUE/FALSE.
lock_sec_rec_read_check_and_lock(),
lock_clust_rec_read_check_and_lock(), lock_rec_enqueue_waiting(),
lock_rec_lock_slow(), lock_rec_lock(), row_ins_set_shared_rec_lock(),
row_ins_set_exclusive_rec_lock(), sel_set_rec_lock(),
row_sel_get_clust_rec_for_mysql(): Return DB_SUCCESS_LOCKED_REC if a
new record lock was created. Adjust callers.
row_unlock_for_mysql(): Correct the function documentation.
row_prebuilt_t::new_rec_locks: Correct the documentation.