Item*) at opt_sum.cc:305
Queries applying MIN/MAX functions to indexed columns are
optimized to read directly from the index if all key parts
of the index preceding the aggregated key part are bound to
constants by the WHERE clause. A prefix length is also
produced, equal to the total length of the bound key
parts. If the aggregated column itself is bound to a
constant, however, it is also included in the prefix.
Such full search keys are read as closed intervals for
reasons beyond the scope of this bug. However, the procedure
missed one case where a key part meant for use as a range
endpoint was being overwritten with a NULL value destined
for equality checking. In this case the key part was
overwritten but the range flag remained, causing open
interval reading to be performed.
The bug was fixed by adding more stringent checking to the
search key building procedure (matching_cond), never
allowing overwrites of range predicates with non-range
predicates.
An assertion was added to make sure open intervals are never
used with full search keys.
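A minimal sketch of the two search key shapes described above (table and index names are illustrative, not taken from the bug report):

  -- Hypothetical table; index k1 has key parts (a, b).
  CREATE TABLE t1 (a INT, b INT, KEY k1 (a, b));

  -- Key part 'a' is bound to a constant, so MIN(b) can be read
  -- directly from the index; the prefix consists of 'a' only.
  SELECT MIN(b) FROM t1 WHERE a = 5;

  -- The aggregated column itself is also bound to a constant, so it
  -- is included in the prefix, producing a full search key that
  -- should be read as a closed interval.
  SELECT MIN(b) FROM t1 WHERE a = 5 AND b = 10;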
The Valgrind warning happens because of uninitialized null bytes.
In the row_sel_push_cache_row_for_mysql() function we fill the fetch
cache with the necessary field values; row_sel_store_mysql_rec() is
called for this and leaves the null bytes untouched.
Later row_sel_pop_cached_row_for_mysql() rewrites the table record
buffer with these uninitialized null bytes. We can see the problem
from the test case:
At 'SELECT...' we call the row_sel_push...->row_sel_store...->row_sel_pop_cached...
chain, which rewrites the table->record[0] buffer with uninitialized null bytes.
When we call the 'UPDATE...' statement, compare_record() uses this buffer and
the Valgrind warning occurs.
The fix is to initialize the null bytes with default values.
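A hedged sketch of the statement pattern implied above (the actual test case is not reproduced here; the table layout is illustrative):

  CREATE TABLE t1 (a INT, b VARCHAR(10)) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1, NULL), (2, 'x');
  -- The SELECT goes through the InnoDB fetch cache
  -- (row_sel_push/store/pop chain) and could leave table->record[0]
  -- with uninitialized null bytes.
  SELECT * FROM t1;
  -- compare_record() then reads that buffer, triggering the warning.
  UPDATE t1 SET b = 'y' WHERE a = 2;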
Problem: the server missed the fact that one can read from
2 indexes alternately using the HANDLER interface.
Fix: check that the same (initialized) index is involved
when reading next/prev values from the index.
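For reference, a minimal sketch of the alternating-index access pattern meant here (names are illustrative):

  CREATE TABLE t1 (a INT, b INT, KEY ka (a), KEY kb (b));
  HANDLER t1 OPEN;
  HANDLER t1 READ ka FIRST;   -- initializes a read on index ka
  HANDLER t1 READ kb NEXT;    -- switches to index kb; must not reuse ka's scan state
  HANDLER t1 CLOSE;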
Bug#46527 COMMIT AND CHAIN RELEASE does not make sense
Bug#53343 completion_type=1, COMMIT/ROLLBACK AND CHAIN don't
preserve the isolation level
Bug#53346 completion_type has strange effect in a stored
procedure/prepared statement
Added test cases to verify the expected behaviour of:
SET SESSION TRANSACTION ISOLATION LEVEL,
SET TRANSACTION ISOLATION LEVEL,
@@completion_type,
COMMIT AND CHAIN,
ROLLBACK AND CHAIN
..and some combinations of the above
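One of the verified combinations, sketched here for illustration (the exact test statements may differ):

  SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
  SET @@completion_type = 0;        -- explicit chaining only
  START TRANSACTION;
  SELECT @@tx_isolation;
  COMMIT AND CHAIN;                 -- the chained transaction should keep SERIALIZABLE
  SELECT @@tx_isolation;
  ROLLBACK AND CHAIN;
  COMMIT;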
The problem is in the Item_func_isnull::update_used_tables() function:
a bracket is in the wrong place. Because of that, the isnull item is
erroneously treated as a const item. The fix is to put the brackets in
the right place.
This crash happened if a table was listed twice in a DROP TABLE statement,
and the statement was executed while in LOCK TABLES mode. Since the two
elements of the table list were identical, they were assigned the same TABLE object.
During processing of the first table list element, the TABLE instance was destroyed
and the second table list element was left with a dangling reference.
When this reference was later accessed, the server crashed.
Listing the same table twice in DROP TABLE should give an ER_NONUNIQ_TABLE
error. However, this did not happen as the check for unique table names was
skipped because the lock type for the table list elements was set to TL_IGNORE.
Previously TL_UNLOCK was used and the uniqueness check was performed.
This bug was a regression introduced by a pre-requisite patch for
Bug#51263 "Deadlock between transactional SELECT and ALTER TABLE ...
REBUILD PARTITION". The regression only existed in an internal team
tree and never in any released code.
This patch reverts DROP TABLE (and DROP VIEW) to the old behavior of
using TL_UNLOCK locks. Test case added to drop.test.
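A minimal sketch of the added coverage (along the lines of what drop.test now checks):

  CREATE TABLE t1 (a INT);
  LOCK TABLES t1 WRITE;
  DROP TABLE t1, t1;    -- same table listed twice: should fail with ER_NONUNIQ_TABLE
  UNLOCK TABLES;
  DROP TABLE t1;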
locks for DML statements and changes the way MDL locks
are acquired/granted in the contended case.
Instead of backing off when a lock conflict is encountered
and waiting for it to go away before restarting the open_tables()
process, we now wait for the lock to be released without releasing
any previously acquired locks. If the conflicting lock goes away,
we resume opening tables. If waiting leads to a deadlock, we
try to resolve it by backing off and restarting open_tables()
immediately.
As a result, both waiting for the possibility to acquire a
metadata lock and the acquisition itself now always happen
within the same MDL API call. This has made it possible to make
releasing a lock and granting it to the most appropriate pending
request an atomic operation.
Thanks to this, it became possible to wake up, during release
of a lock, only those waiters whose requests can be satisfied
at the moment, as well as to wake up only one waiter in the case
when granting its request would prevent all other requests
from being satisfied. This solves the thundering herd problem
which occurred in cases when we were releasing some lock and
woke up many waiters for SNRW or X locks (this was the issue
in bug#52289 "performance regression for MyISAM in sysbench
OLTP_RW test").
This also allowed us to implement fairer (FIFO) scheduling
among waiters with the same priority.
It also opens the door for introducing new types of requests
for metadata locks, such as a low-priority SNRW lock, which is
necessary in order to support LOCK TABLES LOW_PRIORITY WRITE.
Notice that after this change the server can sometimes report an
ER_LOCK_DEADLOCK error in cases in which it did not before.
In particular, we will always report this error if waiting for a
conflicting lock happens in the middle of a transaction
and results in a deadlock. Before this patch the error was
not reported if the deadlock could have been resolved by backing
off all metadata locks acquired by the current statement.
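A hedged illustration of the kind of mid-transaction wait that can now end with ER_LOCK_DEADLOCK (two sessions; table names are illustrative, and whether the cycle actually forms depends on the order in which the second session acquires its locks):

  -- session 1:
  BEGIN;
  SELECT * FROM t2;                 -- holds a shared metadata lock on t2 until commit
  -- session 2:
  LOCK TABLES t1 WRITE, t2 WRITE;   -- acquires its lock on t1, then blocks waiting for t2
  -- session 1:
  SELECT * FROM t1;                 -- waits for session 2's lock on t1; if the cycle
                                    -- forms it is detected and reported as ER_LOCK_DEADLOCK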
Conflicts:
Text conflict in mysql-test/r/archive.result
Contents conflict in mysql-test/r/innodb_bug38231.result
Text conflict in mysql-test/r/mdl_sync.result
Text conflict in mysql-test/suite/binlog/t/disabled.def
Text conflict in mysql-test/suite/rpl_ndb/r/rpl_ndb_binlog_format_errors.result
Text conflict in mysql-test/t/archive.test
Contents conflict in mysql-test/t/innodb_bug38231.test
Text conflict in mysql-test/t/mdl_sync.test
Text conflict in sql/sp_head.cc
Text conflict in sql/sql_show.cc
Text conflict in sql/table.cc
Text conflict in sql/table.h
Some of the server implementations don't support dates later
than 2038 due to the internal time type being 32-bit.
Added checks so that the server will refuse dates that cannot
be handled, either by throwing an error when setting the date at
runtime or by refusing to start or shutting down the server if
the system date cannot be stored in my_time_t.
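A hedged example of the runtime check (on a build where my_time_t is 32-bit; the exact error text may vary):

  SET TIMESTAMP = 2147483648;   -- one second past the 32-bit range:
                                -- now rejected instead of accepted silently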
Problems:
- regression (compared to version 5.1) in metadata for BLOB types
- inconsistency between length metadata in server and embedded for BLOB types
- wrong max_length calculation in items derived from BLOB columns
@ libmysqld/lib_sql.cc
Calculating length metadata in the embedded server similarly to the
server version, using the new function char_to_byte_length_safe().
@ mysql-test/r/ctype_utf16.result
Adding tests
@ mysql-test/r/ctype_utf32.result
Adding tests
@ mysql-test/r/ctype_utf8.result
Adding tests
@ mysql-test/r/ctype_utf8mb4.result
Adding tests
@ mysql-test/t/ctype_utf16.test
Adding tests
@ mysql-test/t/ctype_utf32.test
Adding tests
@ mysql-test/t/ctype_utf8.test
Adding tests
@ mysql-test/t/ctype_utf8mb4.test
Adding tests
@ sql/field.cc
Overriding char_length() for Field_blob:
unlike in the generic Item::char_length(), we don't
divide by mbmaxlen for BLOBs.
@ sql/field.h
- Making Field::char_length() virtual
- Adding prototype for Field_blob::char_length()
@ sql/item.h
- Adding new helper function char_to_byte_length_safe()
- Using new function
@ sql/protocol.cc
Using new function char_to_byte_length_safe().
modified:
libmysqld/lib_sql.cc
mysql-test/r/ctype_utf16.result
mysql-test/r/ctype_utf32.result
mysql-test/r/ctype_utf8.result
mysql-test/r/ctype_utf8mb4.result
mysql-test/t/ctype_utf16.test
mysql-test/t/ctype_utf32.test
mysql-test/t/ctype_utf8.test
mysql-test/t/ctype_utf8mb4.test
sql/field.cc
sql/field.h
sql/item.h
sql/protocol.cc
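A hedged sketch of the kind of statement the new ctype_* tests exercise: the length metadata of an item derived from a BLOB/TEXT column in a multi-byte character set (names are illustrative, not taken from the test files):

  CREATE TABLE t1 (a TEXT CHARACTER SET utf8);
  -- The column created from CONCAT(a) reflects the max_length computed
  -- for the BLOB-derived item; with the fix it should not be inflated
  -- or truncated by an extra multiplication/division by mbmaxlen.
  CREATE TABLE t2 AS SELECT CONCAT(a) AS c FROM t1;
  SHOW CREATE TABLE t2;
  DROP TABLE t1, t2;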
- revid:sp1r-svoj@mysql.com/june.mysql.com-20080324111246-00461
- revid:sp1r-svoj@mysql.com/june.mysql.com-20080414125521-40866
BUG#35274 - merge table doesn't need any base tables, gives
error 124 when key accessed
SELECT queries that use an index against a merge table with an empty
underlying tables list may return with the error "Got error 124 from
storage engine".
The problem was that the wrong error was being returned.
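A minimal sketch of the reported scenario (a MERGE table whose underlying table list is empty, accessed through its key):

  CREATE TABLE t1 (a INT, KEY (a)) ENGINE=MERGE;  -- no UNION clause: empty underlying list
  SELECT * FROM t1 WHERE a = 1;   -- used to fail with "Got error 124 from storage engine"
  DROP TABLE t1;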
when it should use index
Sometimes the LEFT/RIGHT JOIN with an empty table caused an
unnecessary filesort.
Sample query, where t1.i1 is indexed and t3 is empty:
SELECT t1.*, t2.* FROM t1 JOIN t2 ON t1.i1 = t2.i2
LEFT JOIN t3 ON t2.i2 = t3.i3
ORDER BY t1.i1 LIMIT 5;
The server erroneously used an item of the empty outer-joined
table as a common constant of an Item_equal (multiple-equality
expression).
Due to the fix for bug 16590, the constant status of such
an item was propagated to the st_table::const_key_parts
map bits related to the key parts of other Item_equal arguments
(which are obviously not constant in our case).
Since the test_if_skip_sort_order function skips constant
prefixes of the keys being tested, this caused available
indexes to be ignored, as some prefixes were marked as
constant by mistake.
and .tar.gz, windows vs linux..
On Intel x86 machines index selection by the MySQL query
optimizer could sometimes depend on the compiler version and
optimization flags used to build the server binary.
The problem was a result of a known issue with floating point
calculations on x86: since the internal FPU precision (80-bit)
differs from the precision used by programs (32-bit float or 64-bit
double), the result of calculating a complex expression may
depend on how FPU registers are allocated by the compiler and
whether intermediate values are spilled from FPU to memory. In
this particular case compiler versions and optimization flags
had an effect on cost calculation when choosing the best index
in best_access_path().
A possible solution to this problem which has already been
implemented in mysql-trunk is to limit FPU internal precision
to 64 bits. So the fix is a backport of the relevant code to
5.1 from mysql-trunk.
The problem is that if a NULL is stored in an Item_cache_decimal object,
the associated my_decimal object is not initialized. However, it is still
accessed when val_int() is called. The fix is to check for null_value
within val_int(), and return without accessing the my_decimal object when
the cached value is NULL.
Bug#52122 reports the same issue for val_real(), and this patch also includes
fixes for val_real() and val_str() and corresponding test cases from that
bug report.
Also, NULL is returned from val_decimal() when the value is null. This
avoids callers accessing an uninitialized my_decimal object.
Similar changes were made to all other Item_cache classes. Now all val_*
methods should return a well-defined value when the actual value is NULL.
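A hedged sketch of one way such a cache can arise: the left-hand operand of an IN subquery is wrapped in an Item_cache, and here it caches a NULL decimal (table and data are illustrative, not the test case from the bug reports):

  CREATE TABLE t1 (a DECIMAL(10,2));
  INSERT INTO t1 VALUES (NULL), (1.50);
  -- The cached NULL value may later be read back through val_int()/
  -- val_real()/val_str(); with the fix these return a well-defined NULL
  -- result instead of touching an uninitialized my_decimal.
  SELECT a FROM t1 WHERE a IN (SELECT a FROM t1);
  DROP TABLE t1;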
is allowed on views (not documented, broken)".
Remove support of ALTER TABLE RENAME for views as:
a) this feature was not documented,
b) it does not add any compatibility with other databases,
c) its implementation doesn't follow the metadata locking
   protocol, accessing the .FRM without holding any
   metadata lock,
d) its implementation complicates ALTER TABLE's code
   by introducing yet another separate branch to it.
After this patch one can rename a view by using the
documented way - the RENAME TABLE statement.
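For reference, the documented way mentioned above:

  CREATE VIEW v1 AS SELECT 1;
  RENAME TABLE v1 TO v2;   -- RENAME TABLE works for views as well as base tables
  DROP VIEW v2;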
without FOR UPDATE is causing a lock".
SELECT statements with subqueries referencing InnoDB tables
were acquiring shared locks on rows in these tables when they
were executed in REPEATABLE-READ mode and with statement or
mixed mode binary logging turned on.
This was a regression which was introduced by the fix for
bug 39843.
The problem was that for tables belonging to subqueries the
parser set TL_READ_DEFAULT as the lock type. In cases when
statement/mixed binary logging was in use, this type of lock
was converted to a TL_READ_NO_INSERT lock at open_tables()
time and caused the InnoDB engine to acquire shared locks on
reads from these tables. Although in some cases such behavior
was correct (e.g. for subqueries in DELETE), in the case of
SELECT it caused unnecessary locking.
This patch implements a minimal version of the fix for the
specific problem described in the bug report, which is supposed
to be not too risky for pushing into the 5.1 tree.
The 5.5 tree already contains a more appropriate solution
which also addresses other related issues like bug 53921
"Wrong locks for SELECTs used stored functions may lead
to broken SBR".
This patch tries to solve the problem by ensuring that the
TL_READ_DEFAULT lock type, which is set in the parser for
tables participating in subqueries, is interpreted at
open_tables() time as either TL_READ_NO_INSERT or TL_READ.
TL_READ is used only if we know that this is a SELECT
and that this particular table is not used by a stored
function.
Test coverage is added for both InnoDB and MyISAM.
This patch introduces an "incompatible" change in locking
scheme for subqueries used in SELECT ... FOR UPDATE and
SELECT .. IN SHARE MODE.
In 4.1 (as well as in 5.0 and 5.1 before fix for bug 39843)
the server would use a snapshot InnoDB read for subqueries
in SELECT FOR UPDATE and SELECT .. IN SHARE MODE statements,
regardless of whether the binary log is on or off.
If the user required a different type of read (i.e. a locking
read), he/she could request it explicitly by providing a FOR
UPDATE/IN SHARE MODE clause for each individual subquery.
The patch for bug 39843 broke this behaviour (which was not
documented or tested), and started to use locking reads for
all subqueries in SELECT ... FOR UPDATE/IN SHARE MODE.
This patch restores 4.1 behaviour.
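A hedged sketch of the resulting behaviour (REPEATABLE READ with statement or mixed binlog format assumed; tables are illustrative):

  CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE t2 (b INT PRIMARY KEY) ENGINE=InnoDB;
  -- A plain SELECT now reads the subquery table without taking shared row locks:
  SELECT a FROM t1 WHERE a IN (SELECT b FROM t2);
  -- With FOR UPDATE, only the outer query performs a locking read; the subquery
  -- uses a snapshot read unless it requests a locking read itself:
  SELECT a FROM t1 WHERE a IN (SELECT b FROM t2) FOR UPDATE;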
This patch should be mostly null-merged into 5.5 tree.