The problem was that the handler call ::extra(HA_EXTRA_CACHE) was cached,
but ::extra(HA_EXTRA_PREPARE_FOR_UPDATE) was not.
The solution was to also cache the latter call and forward it when moving
to a new partition to scan.
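A minimal sketch of the caching idea (hypothetical types, not the real
ha_partition interface): the wrapper remembers every hint it receives and
replays all of them whenever it switches to a new partition.

  // Hypothetical sketch of the fix's idea; not the real ha_partition code.
  #include <cstdio>
  #include <vector>

  enum ExtraHint { HINT_CACHE, HINT_PREPARE_FOR_UPDATE };

  struct PartitionFile {            // stands in for one underlying handler
    int id;
    void extra(ExtraHint h) { std::printf("partition %d got hint %d\n", id, h); }
  };

  class PartitionedScanner {        // stands in for the partitioning wrapper
    std::vector<PartitionFile> parts_;
    std::vector<ExtraHint> cached_hints_;   // remembered hints to replay
    size_t current_ = 0;

   public:
    explicit PartitionedScanner(size_t n) {
      for (size_t i = 0; i < n; ++i) parts_.push_back({static_cast<int>(i)});
    }

    // Cache the hint and forward it to the partition currently being scanned.
    void extra(ExtraHint h) {
      cached_hints_.push_back(h);
      parts_[current_].extra(h);
    }

    // When moving to the next partition, replay all cached hints so the new
    // partition sees the same state as the previous one.
    bool next_partition() {
      if (current_ + 1 >= parts_.size()) return false;
      ++current_;
      for (ExtraHint h : cached_hints_) parts_[current_].extra(h);
      return true;
    }
  };

  int main() {
    PartitionedScanner scanner(3);
    scanner.extra(HINT_CACHE);               // was already cached before the fix
    scanner.extra(HINT_PREPARE_FOR_UPDATE);  // now cached as well
    while (scanner.next_partition()) {}      // each new partition gets both hints
  }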
Remove acquisition of LOCK_open around file system operations,
since such operations are now protected by metadata locks.
Rework the table discovery algorithm so that it does not require LOCK_open.
No new tests were added, since all MDL locking operations are covered
by lock.test and mdl_sync.test; as long as these tests pass despite
the increased concurrency, consistency must be unaffected.
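As a rough illustration of the locking change (illustrative names only, not
the server's MDL API), the idea is to replace one global mutex around file
system operations with per-table locks, so work on unrelated tables no longer
serializes:

  // Conceptual sketch only: per-object metadata locks instead of one global
  // mutex around file system operations.
  #include <map>
  #include <mutex>
  #include <shared_mutex>
  #include <string>

  class MetadataLocks {
    std::mutex map_mutex_;                            // protects the map only
    std::map<std::string, std::shared_mutex> locks_;  // one lock per table

    std::shared_mutex& lock_for(const std::string& table) {
      std::lock_guard<std::mutex> g(map_mutex_);
      return locks_[table];                           // created on first use
    }

   public:
    // Exclusive lock for DDL-like operations that touch the table's files.
    std::unique_lock<std::shared_mutex> exclusive(const std::string& table) {
      return std::unique_lock<std::shared_mutex>(lock_for(table));
    }
    // Shared lock for readers of the table definition.
    std::shared_lock<std::shared_mutex> shared(const std::string& table) {
      return std::shared_lock<std::shared_mutex>(lock_for(table));
    }
  };

  int main() {
    MetadataLocks mdl;
    // File system work on t1 no longer serializes with unrelated work on t2:
    auto l1 = mdl.exclusive("db1/t1");  // e.g. rename or drop t1's files
    auto l2 = mdl.shared("db1/t2");     // concurrent reader of t2 is unaffected
  }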
INSERT IGNORE ... SELECT ... UNION SELECT ...
This assert was triggered by INSERT IGNORE ... SELECT. The assert checks that a
statement either sends OK or an error to the client. If the bug was triggered
on release builds, it caused OK to be sent to the client instead of the correct
error message (in this case ER_FIELD_SPECIFIED_TWICE).
The reason the assert was triggered was that lex->no_error was set to TRUE
during JOIN::optimize() because of IGNORE, which causes all errors to be
ignored. However, not all errors can be ignored. Some, such as
ER_FIELD_SPECIFIED_TWICE, will cause the INSERT to fail no matter what. But
since lex->no_error was set, even these critical errors were ignored, the
INSERT failed, and neither OK nor the error message was sent to the client.
This patch fixes the problem by temporarily turning off lex->no_error in
places where errors cannot be ignored during processing of INSERT ... SELECT.
Test case added to insert.test.
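The shape of the fix can be sketched with an RAII guard that clears a
hypothetical no_error flag for the duration of a check whose errors must
reach the client (illustrative code, not the actual LEX/THD handling):

  #include <cassert>

  struct StatementContext {
    bool no_error = true;   // set because of INSERT IGNORE
  };

  // RAII helper: errors are reported while this object is alive.
  class AllowErrorsScope {
    StatementContext& ctx_;
    bool saved_;
   public:
    explicit AllowErrorsScope(StatementContext& ctx)
        : ctx_(ctx), saved_(ctx.no_error) { ctx_.no_error = false; }
    ~AllowErrorsScope() { ctx_.no_error = saved_; }
  };

  bool check_field_list(StatementContext& ctx, bool duplicate_field) {
    // ER_FIELD_SPECIFIED_TWICE-style errors are fatal for the statement and
    // must reach the client even under IGNORE, so report them unconditionally.
    AllowErrorsScope allow(ctx);     // no_error is off inside this scope
    if (duplicate_field && !ctx.no_error)
      return false;                  // the error is raised, the statement fails
    return true;
  }

  int main() {
    StatementContext ctx;            // INSERT IGNORE: no_error == true
    bool ok = check_field_list(ctx, /*duplicate_field=*/true);
    assert(!ok);                     // the critical error is not ignored
    assert(ctx.no_error);            // flag restored after the check
  }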
The CONVERT_TZ function crashes the server when the
timezone argument is an empty SET field value.
1) CONVERT_TZ may look up the timezone string in the
tz_names hash.
2) The string representation of an empty SET is a
String of zero length with a NULL pointer.
3) If the key argument length is zero, the hash functions
do the comparison using the length of the record being
compared against.
I.e. a zero-length String is an invalid argument for the
hash search functions, and if the String points to a NULL
buffer, hashcmp() fails with a SEGV while accessing that
memory.
The my_tz_find function has been modified to treat empty
Strings as invalid timezone values and to skip the
unnecessary hash search.
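A sketch of the added guard, using standard containers in place of the
server's String and HASH types:

  #include <cstddef>
  #include <iostream>
  #include <string>
  #include <unordered_map>

  struct TzString {          // mimics a String that may wrap a NULL buffer
    const char* ptr;
    size_t length;
  };

  struct TimeZone { std::string name; };

  const TimeZone* find_time_zone(
      const std::unordered_map<std::string, TimeZone>& tz_names,
      const TzString& name) {
    // The fix: empty values (e.g. the string form of an empty SET) are not
    // valid timezone names, so skip the hash lookup entirely.
    if (name.ptr == nullptr || name.length == 0) return nullptr;

    auto it = tz_names.find(std::string(name.ptr, name.length));
    return it == tz_names.end() ? nullptr : &it->second;
  }

  int main() {
    std::unordered_map<std::string, TimeZone> tz_names = {
        {"UTC", {"UTC"}}, {"Europe/Moscow", {"Europe/Moscow"}}};

    TzString empty_set_value{nullptr, 0};  // empty SET: zero length, NULL ptr
    std::cout << (find_time_zone(tz_names, empty_set_value) == nullptr) << "\n";

    TzString utc{"UTC", 3};
    std::cout << (find_time_zone(tz_names, utc) != nullptr) << "\n";
  }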
FLUSH TABLES <list> WITH READ LOCK are incompatible" to
be pushed as separate patch.
Replaced the thread state name "Waiting for table", which was
used by threads waiting for a metadata lock or table flush,
with a set of names which better reflect the types of resources
being waited for.
Also replaced the "Table lock" thread state name, which was used
by threads waiting on a thr_lock.c table-level lock, with the more
elaborate "Waiting for table level lock", to make it
more consistent with the other thread state names.
Updated test cases and their results according to these
changes.
Fixed the sys_vars.query_cache_wlock_invalidate_func test so that
it does not wait for the timeout of the wait_condition.inc script.
file .\item_subselect.cc, line 836
IN quantified predicates are never executed directly. They are instead wrapped
inside nodes called IN Optimizers (Item_in_optimizer), which take care of the
execution. However, this is not done during query preparation. Unfortunately,
the LIKE predicate pre-evaluates constant right-hand side arguments even
during name resolution, likely as an optimization.
Fixed by not pre-evaluating LIKE arguments in view prepare mode.
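A rough sketch of the guard, with hypothetical structures instead of the Item
class hierarchy: the constant right-hand side is folded only when we are not
merely preparing a view.

  #include <cassert>
  #include <optional>
  #include <string>

  struct LikePredicate {
    std::string pattern_expr;               // textual RHS, e.g. "'abc%'"
    bool rhs_is_constant = true;
    std::optional<std::string> folded_rhs;  // cached pre-evaluated pattern
  };

  std::string evaluate(const std::string& expr) {
    // Placeholder for actually evaluating the constant expression.
    return expr;
  }

  void fix_fields(LikePredicate& like, bool view_prepare_mode) {
    // Before the fix the RHS was folded unconditionally, which could execute
    // a not-yet-wrapped subquery during view preparation and hit the assert.
    if (like.rhs_is_constant && !view_prepare_mode)
      like.folded_rhs = evaluate(like.pattern_expr);
  }

  int main() {
    LikePredicate p{"'abc%'"};
    fix_fields(p, /*view_prepare_mode=*/true);
    assert(!p.folded_rhs.has_value());    // nothing evaluated while preparing

    fix_fields(p, /*view_prepare_mode=*/false);
    assert(p.folded_rhs.has_value());     // folded normally at execution
  }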
Handle overflow when reading value from SELECT MAX(C) FROM T;
Call ha_innobase::info() after initializing the autoinc value
in ha_innobase::open().
Fix for both the built-in InnoDB and the InnoDB plugin.
rb://402
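The overflow handling can be illustrated by a small saturating helper
(illustrative only; the real logic lives in the InnoDB handler code mentioned
above):

  #include <cassert>
  #include <cstdint>
  #include <limits>

  // Next auto-increment value derived from SELECT MAX(C); clamps on overflow.
  uint64_t next_autoinc_from_max(uint64_t current_max) {
    if (current_max == std::numeric_limits<uint64_t>::max())
      return current_max;       // saturate instead of wrapping around to 0
    return current_max + 1;
  }

  int main() {
    assert(next_autoinc_from_max(41) == 42);
    assert(next_autoinc_from_max(std::numeric_limits<uint64_t>::max()) ==
           std::numeric_limits<uint64_t>::max());   // no wrap-around
  }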
The enum system variables were handled inconsistently
as int, unsigned int, and unsigned long in various places.
This caused problems on platforms on which
sizeof(int) != sizeof(long).
Fixed by homogenizing the type of the enum variables
to unsigned int, since it is size-compatible with the C enum
type.
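A small, self-contained demonstration of why the mixed types were a problem
(this shows the general class of bug, not the sys_var code itself):

  #include <cstdio>
  #include <cstring>

  int main() {
    unsigned long slot;
    std::memset(&slot, 0xAA, sizeof(slot));   // simulate stale contents

    unsigned int value = 2;                   // enum value being assigned
    // Buggy pattern: the writer assumes sizeof(int) == sizeof(long).
    std::memcpy(&slot, &value, sizeof(unsigned int));
    std::printf("sizeof(int)=%zu sizeof(long)=%zu slot=%lu\n",
                sizeof(unsigned int), sizeof(unsigned long), slot);
    // Where the sizes differ (e.g. LP64), the bytes beyond sizeof(int) keep
    // their stale 0xAA contents, so reading the variable back as unsigned
    // long yields the wrong value.

    // Homogenized pattern: reader and writer agree on unsigned int.
    unsigned int fixed_slot;
    std::memcpy(&fixed_slot, &value, sizeof(unsigned int));
    std::printf("homogenized slot=%u\n", fixed_slot);
  }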
Removed the test from the experimental list.
if() treated any non-numeric string as false.
Fixed to treat such strings as true instead.
Added some test cases.
Fixed a missing $ in a variable name in include/mix2.inc.
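A sketch of the corrected truthiness rule, assuming numeric strings keep their
numeric truth value and the empty string stays false (illustrative helper, not
the actual if() implementation):

  #include <cassert>
  #include <cstdlib>
  #include <string>

  bool if_condition_is_true(const std::string& s) {
    if (s.empty()) return false;
    char* end = nullptr;
    long v = std::strtol(s.c_str(), &end, 10);
    if (end != s.c_str() && *end == '\0') return v != 0;  // fully numeric
    return true;  // non-numeric strings were previously (wrongly) false
  }

  int main() {
    assert(if_condition_is_true("1"));
    assert(!if_condition_is_true("0"));
    assert(if_condition_is_true("abc"));  // the behavior this fix changes
    assert(!if_condition_is_true(""));
  }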
With statement- or mixed-mode logging, "LOAD DATA INFILE" queries
are written to the binlog using special types of log events.
When mysqlbinlog reads such events, it re-creates the file in a
temporary directory with a generated filename and outputs a
"LOAD DATA INFILE" query where the filename is replaced by the
generated file. The temporary file is not deleted by mysqlbinlog
after termination.
To fix the problem, in mixed mode we switch to row-based logging. In SBR, we
document the behavior to remind users that the temporary file is left behind
in a temporary directory.
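A tiny sketch of the resulting logging decision (illustrative enum names, not
the server's binlog code):

  #include <cassert>

  enum class BinlogFormat { STATEMENT, MIXED, ROW };
  enum class EventFormat { STATEMENT, ROW };

  // In MIXED mode, LOAD DATA INFILE is logged with row events so that no
  // temporary file needs to be re-created on the mysqlbinlog side. Pure
  // statement-based logging keeps the old, documented behavior.
  EventFormat format_for_load_data(BinlogFormat f) {
    return f == BinlogFormat::STATEMENT ? EventFormat::STATEMENT
                                        : EventFormat::ROW;
  }

  int main() {
    assert(format_for_load_data(BinlogFormat::MIXED) == EventFormat::ROW);
    assert(format_for_load_data(BinlogFormat::STATEMENT) ==
           EventFormat::STATEMENT);
  }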
A CREATE...SELECT that fails is written to the binary log if a non-transactional
table is updated. If the logging format is ROW, the CREATE statement and the
changes are written to the binary log as distinct events, and as a consequence
the created table is not rolled back on the slave.
In this patch, we opted to let the slave go out of sync by not writing the
CREATE statement to the binary log. We do this by simply resetting the binary
log's cache.
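The cache-reset idea can be sketched as follows (illustrative types, not the
server's binlog cache implementation):

  #include <cassert>
  #include <string>
  #include <vector>

  class BinlogCache {
    std::vector<std::string> pending_;
   public:
    void write(const std::string& event) { pending_.push_back(event); }
    void reset() { pending_.clear(); }         // discard everything buffered
    std::vector<std::string> flush() {         // hand events to the binlog
      std::vector<std::string> out;
      out.swap(pending_);
      return out;
    }
  };

  std::vector<std::string> log_create_select(bool statement_failed) {
    BinlogCache cache;
    cache.write("CREATE TABLE t1 ...");
    cache.write("row events for the SELECT part");
    if (statement_failed)
      cache.reset();        // the failed CREATE never reaches the binlog
    return cache.flush();
  }

  int main() {
    assert(log_create_select(/*statement_failed=*/true).empty());
    assert(log_create_select(/*statement_failed=*/false).size() == 2);
  }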
Bug #54461: crash with longblob and union or update with subquery
Queries may crash, if
1) the GREATEST or the LEAST function has a mixed list of
numeric and LONGBLOB arguments and
2) the result of such a function goes through an intermediate
temporary table.
An Item that references a LONGBLOB field has max_length of
UINT_MAX32 == (2^32 - 1).
The current implementation of GREATEST/LEAST returns a REAL
result for a mixed list of numeric and string arguments (which
contradicts the current documentation; this contradiction
was discussed and it was decided to update the documentation).
The max_length of such a function call was calculated as the
maximum of the argument max_length values (i.e. UINT_MAX32).
That max_length value of UINT_MAX32 was then used as the length
of the intermediate temporary table Field_double that holds the
GREATEST/LEAST function result.
The Field_double::val_str() method call on that field
allocates a String value.
Since a String allocation reserves an additional byte
for zero-termination, the size of the String buffer was
set to (UINT_MAX32 + 1), which caused an integer overflow:
effectively, an empty buffer of size 0 was allocated.
Initializing the "first" byte of that zero-size
buffer with '\0' caused the crash.
Item_func_min_max::fix_length_and_dec() has been
modified to calculate max_length for the REAL result the
same way we do it for arithmetic operators.
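The overflow itself is easy to reproduce in isolation (illustrative helper and
values, not the server's String class; the 53 below is only a stand-in for a
small arithmetic-style length):

  #include <cassert>
  #include <cstdint>

  // Simplified stand-in for computing a String buffer size from a length.
  uint32_t string_alloc_size(uint32_t max_length) {
    return max_length + 1;        // reserves room for '\0'; may wrap to 0
  }

  int main() {
    const uint32_t UINT_MAX32 = 0xFFFFFFFFu;

    // Before the fix: a LONGBLOB-sized max_length flowed into the REAL result.
    assert(string_alloc_size(UINT_MAX32) == 0);   // integer overflow: size 0

    // After the fix: the REAL result gets a small, arithmetic-style
    // max_length (a double never needs anywhere near 4 GB of digits).
    const uint32_t real_result_max_length = 53;   // illustrative value only
    assert(string_alloc_size(real_result_max_length) == 54);
  }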
This patch fixes the following bugs:
- Bug#5889: Exit handler for a warning doesn't hide the warning in
trigger
- Bug#9857: Stored procedures: handler for sqlwarning ignored
- Bug#23032: Handlers declared in a SP do not handle warnings generated
in sub-SP
- Bug#36185: Incorrect precedence for warning and exception handlers
The problem was in the way warnings/errors during stored routine execution
were handled. Prior to this patch, the logic was as follows:
- when a warning or an error happens: if we are executing a stored routine
and there is a handler for that warning/error, remember the handler,
ignore the warning/error, and continue execution.
- after a stored routine instruction is executed: check for a remembered
handler and activate it (if any).
This logic caused several problems:
- if one instruction generates several warnings (or errors), it is impossible
to choose the right handler: a handler for the first generated
condition was chosen and remembered for activation.
- it made a mess of handling conditions raised in scopes different from the
current one.
- not putting the generated warnings/errors into the Warning Info (Diagnostic
Area) is against the SQL standard.
The patch changes the logic as follows:
- the Diagnostic Area is cleared at the beginning of each statement that
is either able to generate warnings or able to work with tables.
- at the end of a stored routine instruction, the Diagnostic Area is left
intact.
- the Diagnostic Area is checked after each stored routine instruction. If
an instruction generates several conditions, it is now possible to look at
all of them and determine an appropriate handler.
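A minimal sketch of the new flow under simplified precedence rules (errors
before warnings; the real rules are more involved), with illustrative types
rather than the server's actual classes:

  #include <cassert>
  #include <optional>
  #include <string>
  #include <vector>

  enum class Severity { WARNING, ERROR };

  struct Condition {
    Severity severity;
    std::string sqlstate;
  };

  struct DiagnosticsArea {
    std::vector<Condition> conditions;
    void clear() { conditions.clear(); }        // done at statement start
    void push(Condition c) { conditions.push_back(std::move(c)); }
  };

  // After the instruction: look at *all* conditions and pick the one an
  // appropriate handler should fire for (errors first, then warnings).
  std::optional<Condition> pick_condition_to_handle(const DiagnosticsArea& da) {
    std::optional<Condition> best;
    for (const Condition& c : da.conditions) {
      if (!best || (c.severity == Severity::ERROR &&
                    best->severity == Severity::WARNING))
        best = c;
    }
    return best;
  }

  int main() {
    DiagnosticsArea da;
    da.clear();                                  // start of the instruction
    da.push({Severity::WARNING, "01000"});       // first condition raised
    da.push({Severity::ERROR, "42S02"});         // second, more severe one

    auto chosen = pick_condition_to_handle(da);  // checked after execution
    assert(chosen && chosen->sqlstate == "42S02");  // not just the first one
  }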