Problem: if multibyte and binary string arguments are passed to
the RPAD(), LPAD() or INSERT() functions, they might return
wrong results or even lead to a server crash due to a missed
character set conversion.
Fix: perform the conversion when necessary.
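For illustration, a call of the kind affected, mixing a multibyte (utf8)
argument with a binary one (the literal values here are made up):

  SELECT RPAD(_utf8 0xD090D091D092, 10, _binary 0xFF);
  SELECT LPAD(_binary 0xFF, 10, _utf8 0xD090);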
Only wait for a single debug signal at a time as the signal state
is global. Also, do not activate the query cache debug sync points
if the thread has no associated THD session.
Issue an error if the user specifies multiple commands to run.
There was also a previously unnoticed bug: DO_CHECK was actually 0,
which led to wrong actions in some cases.
For the above reason, mysqlcheck.test contained commands with
questionable meaning; the extra commands have been removed.
per-file commands:
client/mysqlcheck.c
Bug#35269 mysqlcheck behaves different depending on order of parameters
Abort with an error if multiple commands are given.
mysql-test/r/mysqlcheck.result
Bug#35269 mysqlcheck behaves different depending on order of parameters
result completed.
mysql-test/t/mysqlcheck.test
Bug#35269 mysqlcheck behaves different depending on order of parameters
testcase added.
not-working commands removed from some mysqlcheck calls.
The problem was that threads waiting on the query cache lock
were not easily seen because no thread state indicated that
the thread was waiting on the said lock. This made it difficult
for users to quickly spot (for example, via SHOW PROCESSLIST)
a query cache contention problem.
The solution is to update the thread state when the query cache
lock needs to be acquired. Whenever the lock is to be acquired,
the thread state is updated to "Waiting for query cache lock"
and is reset once the lock is granted or the wait is interrupted.
The intention is to make query cache related hangs more evident.
To further investigate query cache related locking problems, one
may use PERFORMANCE_SCHEMA to track the overhead associated with
the locking bits and determine which particular lock is the
contention point.
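For example, a session blocked on the lock now shows up in
SHOW PROCESSLIST with the new state (the id, user and query below
are purely illustrative):

  SHOW PROCESSLIST;
  -- Id  User  State                         Info
  -- 42  app   Waiting for query cache lock  SELECT * FROM t1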
The COALESCE() function returned the DATETIME type due to a DATETIME argument,
but since it is not a date/time function it cannot return a correct integer
value for it. Nevertheless, Item_datetime_cache was chosen to cache COALESCE()'s
result, and that led to a wrong result.
Now Item_datetime_cache is used only for those functions that can return
a correct integer representation of DATETIME values.
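A query of the general shape affected, where the integer representation
of the cached DATETIME result becomes visible (the value is illustrative):

  SELECT COALESCE(CAST('2010-01-01 01:02:03' AS DATETIME)) + 0;
  -- expected numeric DATETIME form: 20100101010203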
discover its existence".
The problem was that a user without any privileges on a
routine was able to find out whether it existed or not.
The DROP FUNCTION and DROP PROCEDURE statements were
checking whether the routine being dropped existed and reported
the ER_SP_DOES_NOT_EXIST error/warning before checking
whether the user had enough privileges to drop it.
This patch solves the problem by changing the code not to
check whether the routine exists before checking whether the
user has enough privileges to drop it. Moreover, we no longer
perform this check with a separate call; instead we rely on
sp_drop_routine() returning SP_KEY_NOT_FOUND if the routine
does not exist.
This change also simplifies one of the upcoming patches
refactoring the global read lock implementation.
The SUBTIME() function was not able to produce a correct integer representation
of its result. For constant expressions Item_datetime_cache is used to
speed up evaluation, and Item_datetime_cache expects the underlying item to
return a correct integer representation of a DATETIME value. These two factors
combined led to a wrong query result.
Now Item_func_add_time has a val_datetime() function which performs the
calculation and saves the result into the given MYSQL_TIME struct; it also sets
null_value to the appropriate value. The val_int() and val_str() member
functions convert the result obtained from val_datetime() to an integer or a
string respectively and return it.
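A query of the general shape affected, where the integer form of the
SUBTIME() result matters (the values are illustrative):

  SELECT SUBTIME('2010-01-01 10:00:00', '01:00:00') + 0;
  -- expected numeric DATETIME form: 20100101090000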
Bug#54678: InnoDB, TRUNCATE, ALTER, I_S SELECT, crash or deadlock
- Incompatible change: truncate no longer resorts to a row-by-row
delete if the storage engine does not support the truncate
method. Consequently, the count of affected rows does not, in
any case, reflect the actual number of rows.
- Incompatible change: it is no longer possible to truncate a
table that participates as a parent in a foreign key constraint,
unless it is a self-referencing constraint (both parent and child
are in the same table). To work around this incompatible change
and still be able to truncate such tables, disable foreign key
checks with SET foreign_key_checks=0 before the truncate.
Alternatively, if foreign key checks are necessary, use a DELETE
statement without a WHERE condition (see the sketch after this list).
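A sketch of the workaround described above, using a hypothetical
parent table:

  SET foreign_key_checks = 0;
  TRUNCATE TABLE parent;
  SET foreign_key_checks = 1;
  -- or, if foreign key checks must stay enabled:
  DELETE FROM parent;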
Problem description:
The problem was that for storage engines that do not support
truncate table via an external drop and recreate, such as InnoDB
which implements truncate via an internal drop and recreate, the
delete_all_rows method could be invoked with a shared metadata
lock, causing problems if the engine needed exclusive access
to some internal metadata. This problem originated from the
fact that there is no truncate-specific handler method, which
ended up leading to an abuse of the delete_all_rows method that
is primarily used for delete operations without a condition.
Solution:
The solution is to introduce a truncate handler method that is
invoked when the engine does not support truncation via a table
drop and recreate. This method is invoked under an exclusive
metadata lock, so that there is only a single instance of the
table when the method is invoked.
Also, the method is not invoked and an error is raised if
the table is a parent in a non-self-referencing foreign key
relationship. This was necessary to avoid inconsistency, as
some integrity checks are bypassed. This is in line with the
fact that truncate is primarily a DDL operation that was
designed to quickly remove all data from a table.
thd->in_sub_stmt
In a precursor patch for Bug#52044
(revid:bzr/kostja@stripped), a
number of code reorganizations were made, and some
assertions were added to ensure the correct transactional state.
The reorganization had a small glitch: statements that were
served from the query cache were no longer followed by a
statement commit/rollback (this code was removed). A section
of the trans_commit_stmt/trans_rollback_stmt code clears the
thd->transaction.stmt list of affected storage
engines. When a new statement is initiated, an assert
introduced by the Bug#52044 patch checks that this list is cleared.
When the query cache is accessed, this list may be populated,
and since the statement is not committed the list will not be cleared.
This fix adds an explicit statement commit or rollback for
statements that are served from the query cache.
for ALTER TABLE + MERGE tables
The patch for Bug#56292 changed how metadata locks are taken for MERGE
tables. After the patch, locking the MERGE table will also lock the
children tables with the same metadata lock type. This means that
LOCK TABLES on a MERGE table also will implicitly do LOCK TABLES on
the children tables.
A consequence of this change is that it is possible to do LOCK TABLES
on a child table both explicitly and implicitly with the same statement,
and that these two locks can be of different strength. For example,
LOCK TABLES child READ, merge WRITE.
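For illustration, a minimal setup in which such double locking occurs
(the table definitions are made up):

  CREATE TABLE child (a INT) ENGINE=MyISAM;
  CREATE TABLE merge (a INT) ENGINE=MERGE UNION=(child) INSERT_METHOD=LAST;
  LOCK TABLES child READ, merge WRITE;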
In LOCK TABLES mode, we are not allowed to take new locks and each
statement must therefore try to find an existing TABLE instance with
a suitable lock. The code that searched for a suitable TABLE instance
only considered table level locks. If a child table was locked twice,
it was therefore possible for this code to find a TABLE instance with
suitable table level locks but without a suitable metadata lock.
This problem caused the assert in upgrade_shared_lock_to_exclusive()
to be triggered as it tried to upgrade an MDL_SHARED lock to
EXCLUSIVE. The problem was a regression caused by the patch for
Bug#56292.
This patch fixes the problem by partially reverting the changes
done by Bug#56292. Now, the children tables will only use the
same metadata lock as the MERGE table for MDL_SHARED_NO_WRITE
when not in locked tables mode. This means that LOCK TABLE
on a MERGE table will not implicitly lock the children tables.
This still fixes the original problem in Bug#56292 without
causing a regression.
Test case added to merge.test.
In case of a failure in ALTER ... PARTITION under LOCK TABLE,
the server could crash because it had modified the locked
table object and the modification was not reverted on failure,
resulting in a bad table definition being used after the failed
command.
Solved by always closing the LOCKED TABLE, even in case
of error.
Note: this is a 5.1-only fix; bug#56172 fixed it in 5.5+.
This assert was triggered if DELETE was done on a view that
referenced another view which in turn (directly or indirectly)
referenced more than one table.
Delete from a view referencing more than one table (a join view)
is not supported and is supposed to give the ER_VIEW_DELETE_MERGE_VIEW
error. Before this error was reported from the multi-table
delete code, an assert verified that the view from the DELETE statement
had more than one underlying table. However, this assert did not take
into account that the view could refer to another view which in turn
referenced the actual tables.
This patch fixes the problem by adjusting the assert to take this
possibility into account. This problem was only noticeable on debug
builds of the server. On release builds, ER_VIEW_DELETE_MERGE_VIEW
was correctly reported.
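A minimal sketch of the scenario (the table and view names are made up):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  CREATE VIEW v1 AS SELECT a, b FROM t1, t2;
  CREATE VIEW v2 AS SELECT a FROM v1;
  DELETE FROM v2;  -- must fail with ER_VIEW_DELETE_MERGE_VIEW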
Test case added to delete.test.
LOAD DATA into partitioned MyISAM table
The problem was that both partitioning and MyISAM
used the same table_share->mutex for different protections
(auto-increment and repair).
Solved by adding a specific mutex for the partitioning
auto_increment handling.
Also added destruction of the ha_data structure in
free_table_share (which is to be propagated
into 5.5).
This is a 5.1 ONLY patch, already fixed in 5.5+.
The crash happens because the original join table is replaced with a temporary
table at the execution stage, and later we attempt to use this temporary table
in select_describe. It might happen that the
Item_subselect::update_used_tables() method, which sets the const_item flag,
is not called for some reason (for example, no WHERE/HAVING condition in the subquery).
This prevents the creation of JOIN::join_tmp and breaks the original join.
The fix is to call ::update_used_tables() before the ::const_item() check.
Bug#57113: ha_partition::extra(ha_extra_function):
Assertion `m_extra_cache' failed
The fix for bug#55458 included DBUG_ASSERTs that caused
debug builds of the server to crash on
another multi-table update.
Removed the asserts since they were wrong.
(Updated after testing the patch in 5.5.)