table copy".
This patch only adds a test case, as the bug itself was addressed
by Ramil's fix for bug 50946 "fast index creation still seems
to copy the table".
prepared statements
Using GROUP_CONCAT() together with the WITH ROLLUP modifier
could crash the server.
The reason was a combination of several facts:
1. The Item_func_group_concat class stores pointers to ORDER
objects representing the columns in the ORDER BY clause of
GROUP_CONCAT().
2. find_order_in_list() called from
Item_func_group_concat::setup() modifies the ORDER objects so
that their 'item' member points to the arguments list
allocated in the Item_func_group_concat constructor.
3. In some cases (e.g. in JOIN::rollup_make_fields) a copy of
the original Item_func_group_concat object could be created by
using the Item_func_group_concat::Item_func_group_concat(THD
*thd, Item_func_group_concat *item) copy constructor. The
latter essentially creates a shallow copy of the source
object. Memory for the arguments array is allocated on
thd->mem_root, but the pointers for arguments and ORDER are
copied verbatim.
What happens in the test case is that when executing the query
for the first time, after a copy of the original
Item_func_group_concat object has been created by
JOIN::rollup_make_fields(), find_order_in_list() is called for
this new object. It then resolves ORDER BY by modifying the
ORDER objects so that they point to elements of the arguments
array which is local to the cloned object. When thd->mem_root
is freed upon completing the execution, pointers in the ORDER
objects become invalid. Those ORDER objects, however, are also
shared with the original Item_func_group_concat object which is
preserved between executions of a prepared statement. So the
first call to find_order_in_list() for the original object on
the second execution tries to dereference an invalid pointer.
The solution is to create copies of the ORDER objects when
copying Item_func_group_concat, so as not to leave any stale
pointers in other instances with different lifecycles.
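To illustrate the idea of the fix, here is a minimal,
self-contained C++ sketch of deep-copying an ORDER-like array in a
copy constructor; the class and member names are simplified
stand-ins, not the actual Item_func_group_concat code:

  #include <algorithm>
  #include <cstddef>

  // Simplified stand-in for ORDER: 'item_index' plays the role of
  // ORDER::item pointing into the arguments array of one object.
  struct Order {
    std::size_t item_index;
    bool ascending;
  };

  class GroupConcatLike {
   public:
    GroupConcatLike(const Order *src, std::size_t n)
        : count(n), order(new Order[n]) {
      std::copy(src, src + n, order);
    }

    // Copy constructor: deep-copy the ORDER array instead of
    // copying the pointer verbatim, so resolving ORDER BY in the
    // clone cannot leave stale pointers visible to the original.
    GroupConcatLike(const GroupConcatLike &other)
        : count(other.count), order(new Order[other.count]) {
      std::copy(other.order, other.order + other.count, order);
    }

    ~GroupConcatLike() { delete[] order; }

   private:
    GroupConcatLike &operator=(const GroupConcatLike &);  // unused

    std::size_t count;
    Order *order;
  };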
mysql-test/r/func_gconcat.result:
Test case for bug #54476.
mysql-test/t/func_gconcat.test:
Test case for bug #54476.
sql/item_sum.cc:
Copy the ORDER objects pointed to by the elements of the
'order' array in the copy constructor of
Item_func_group_concat.
sql/table.h:
Removed the unused 'item_copy' member of the ORDER class.
The first problem was that SHOW CREATE TRIGGER took a stronger metadata
lock than required. This caused the statement to be blocked
unnecessarily. For example, LOCK TABLE WRITE in one connection would
block SHOW CREATE TRIGGER in another connection.
Another problem was that a SHOW CREATE TRIGGER statement issued inside
a transaction did not release its metadata locks at the end of the
statement execution. This happened even though SHOW CREATE TRIGGER is
an information statement. The consequence was that SHOW CREATE TRIGGER
was able to block other connections from accessing the table
(e.g. using ALTER TABLE).
This patch fixes the problem by changing SHOW CREATE TRIGGER to take
an MDL_SHARED_HIGH_PRIO metadata lock, similar to what is already done
for SHOW CREATE TABLE. The patch also changes SHOW CREATE TRIGGER to
explicitly release any metadata locks taken by the statement after
it completes.
Test case added to show_check.test.
concurrent SHOW CREATE
The problem was that a SHOW CREATE TABLE statement issued inside
a transaction did not release its metadata locks at the end of the
statement execution. This happened even though SHOW CREATE TABLE is
an information statement.
The consequence was that SHOW CREATE TABLE was able to block other
connections from accessing the table (e.g. using ALTER TABLE).
This patch fixes the problem by explicitly releasing any metadata
locks taken by SHOW CREATE TABLE after the statement completes.
Test case added to show_check.test.
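A minimal sketch of the acquire-then-release pattern used by both
SHOW CREATE fixes above, with hypothetical names standing in for the
server's MDL context (an illustration of the idea, not the actual
API):

  #include <cstddef>
  #include <string>
  #include <vector>

  // Toy metadata-lock context: the number of held locks acts as a
  // savepoint.
  struct MdlContext {
    std::vector<std::string> held;

    std::size_t savepoint() const { return held.size(); }
    void acquire_shared(const std::string &name) { held.push_back(name); }
    void rollback_to_savepoint(std::size_t sp) { held.resize(sp); }
  };

  void show_create_sketch(MdlContext &mdl, const std::string &name) {
    std::size_t sp = mdl.savepoint();  // state at statement start
    mdl.acquire_shared(name);          // e.g. a high-priority shared lock
    // ... read the definition and send it to the client ...
    mdl.rollback_to_savepoint(sp);     // release what this statement took,
                                       // even when inside a transaction
  }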
This bug is a design flaw in the fix for bug#33546. That fix assumed
that an item can be used in only one comparison context, but that is
actually not the case. Item_cache_datetime is used to store the result
of MIN/MAX aggregate functions. Because Arg_comparator compares
datetime values as INTs whenever possible, Item_cache_datetime most of
the time caches only the INT value. But since all datetime values have
the STRING result type, the MIN/MAX functions are asked for a STRING
value when the result is sent to a client. Item_cache_datetime was
designed to avoid conversions and to get the INT/STRING values from
the underlying item, but at the moment the value is asked for, the
underlying item no longer holds it, so a wrong result is returned.
Besides that, the MIN/MAX aggregate functions were incorrectly
initializing the cached result, which also led to wrong results.
The Item::has_compatible_context helper function is added. It checks
whether this item and the given item have the same comparison context
or can be compared as DATETIME values by Arg_comparator. The equality
propagation optimization is adjusted to take into account that items
which are compared as DATETIME can have different comparison contexts.
Item_cache_datetime now converts the cached INT value to a correct
STRING DATETIME value by means of the number_to_datetime and
my_TIME_to_str functions.
The Arg_comparator::set_cmp_context_for_datetime helper function is
added. It sets the comparison context of items being compared as
DATETIMEs to INT if the items will be compared as longlong.
The Item_sum_hybrid::setup function now correctly initializes its
result value.
In order to avoid unnecessary conversions, Item_sum_hybrid now states
that it can provide a correct longlong value if the item being
aggregated can do so too.
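To make the STRING conversion mentioned above concrete, here is a
small self-contained sketch of turning a cached integer of the form
YYYYMMDDhhmmss into a DATETIME string; the helper name is hypothetical
and merely mirrors what number_to_datetime plus my_TIME_to_str
accomplish in the server:

  #include <cstdio>
  #include <string>

  // Convert a packed datetime integer (e.g. 20100131235959) into the
  // string "2010-01-31 23:59:59", which is what a client expects for
  // a STRING result.
  std::string cached_int_to_datetime_str(long long v) {
    long long sec  = v % 100; v /= 100;
    long long min  = v % 100; v /= 100;
    long long hour = v % 100; v /= 100;
    long long day  = v % 100; v /= 100;
    long long mon  = v % 100; v /= 100;
    long long year = v;
    char buf[32];
    std::snprintf(buf, sizeof(buf),
                  "%04lld-%02lld-%02lld %02lld:%02lld:%02lld",
                  year, mon, day, hour, min, sec);
    return std::string(buf);
  }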
mysql-test/r/group_by.result:
Added a test case for the bug#49771.
sql/item.cc:
Bug#49771: Incorrect MIN/MAX for date/time values.
The equality propagation mechanism is adjusted to take into account
that items which are compared as DATETIME can have different
comparison contexts.
Item_cache_datetime now converts the cached INT value to a correct
STRING DATETIME/TIME value.
sql/item.h:
Bug#49771: Incorrect MIN/MAX for date/time values.
The Item::has_compatible_context helper function is added. It checks
whether this item and the given item have the same comparison context
or can be compared as DATETIME values by Arg_comparator.
Added Item_cache::clear helper function.
sql/item_cmpfunc.cc:
Bug#49771: Incorrect MIN/MAX for date/time values.
The Arg_comparator::set_cmp_func now sets the correct comparison context
for items being compared as DATETIME values.
sql/item_cmpfunc.h:
Bug#49771: Incorrect MIN/MAX for date/time values.
The Arg_comparator::set_cmp_context_for_datetime helper function is
added. It sets the comparison context of items being compared as
DATETIMEs to INT if the items will be compared as longlong.
sql/item_sum.cc:
Bug#49771: Incorrect MIN/MAX for date/time values.
The Item_sum_hybrid::setup function now correctly initializes its result
value.
sql/item_sum.h:
Bug#49771: Incorrect MIN/MAX for date/time values.
In order to avoid unnecessary conversions, Item_sum_hybrid now states
that it can provide a correct longlong value if the item being
aggregated can do so too.
This assert checks that the server does not try to send OK to the
client if there has been some error during processing. This is done
to make sure that the error is in fact sent to the client.
The problem was that view errors during processing of WHERE conditions
in UPDATE statements were not detected by the update code. It therefore
tried to send OK to the client, triggering the assert.
The bug was only noticeable in debug builds.
This patch fixes the problem by making sure that the update code
checks for errors during condition processing and acts accordingly.
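A minimal sketch of the added check, with hypothetical names (the real
code consults the connection's error state after evaluating the
condition):

  // Toy session carrying the per-connection error flag.
  struct Session {
    bool error_raised;
  };

  // Evaluating the WHERE condition may raise an error (e.g. a view
  // error); in that case the row cannot meaningfully match.
  bool eval_where(Session &s, int row_id) {
    (void)row_id;
    return !s.error_raised;
  }

  // Returns true only when it is actually safe to report OK.
  bool do_update(Session &s, int row_count) {
    for (int row = 0; row < row_count; ++row) {
      bool matches = eval_where(s, row);
      if (s.error_raised)   // the new check: stop, report the error
        return false;
      if (matches) {
        // ... update the row ...
      }
    }
    return true;            // no error raised, OK may be sent
  }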
and reverse() function
Three problems fixed:
1. The reported problem: caused by incorrect parsing of the file as
UCS data, resulting in a wrong length for the parsed string. Fixed by
truncating the invalid trailing bytes (incomplete multibyte
characters) when reading from the file.
2. LOAD DATA, when reading from a proper UCS2 file, wasn't recognizing
the newline characters. Fixed by first checking whether a byte is a
newline (or any other special) character before reading it as part of
a multibyte character.
3. When using user variables to hold the column data in LOAD DATA, the
character set of the user variable was set incorrectly to the database
charset. Fixed by setting it to the charset specified by LOAD DATA
(if any).
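A small self-contained sketch of fixes 1 and 2 over a toy two-byte
character set (the helpers are illustrative, not the server's charset
handlers): special characters are checked before multibyte decoding,
and an incomplete trailing character is truncated:

  #include <cstddef>

  // Toy rule: bytes with the high bit set start a two-byte character.
  std::size_t char_len(unsigned char first_byte) {
    return (first_byte & 0x80) ? 2 : 1;
  }

  // Returns the number of bytes to consume next: 1 for a line
  // terminator (checked before multibyte decoding, fix 2), the
  // character length when it fits, or 0 to drop an incomplete
  // trailing character (fix 1).
  std::size_t next_token_len(const unsigned char *buf, std::size_t len,
                             unsigned char line_term) {
    if (len == 0)
      return 0;
    if (buf[0] == line_term)
      return 1;
    std::size_t cl = char_len(buf[0]);
    return (cl <= len) ? cl : 0;
  }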
naming scheme for tests related to functions, rename analyse.test to
func_analyse.test (test for the ANALYSE() procedure). Avoids confusion
with the ANALYZE statement (tested in analyze.test).
The problem here is that the HAVING condition evaluates the const
parts of the condition even though the condition has references to
aggregate functions. Table t1 became a const table after
make_join_statistics() with table1.pk = 1, so HAVING was transformed
into MAX(1) < 7 and taken away from HAVING.
The fix is to skip evaluation of the const parts of HAVING if the
HAVING condition has references to aggregate functions.
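A minimal sketch of the guard, over a toy condition descriptor
(illustrative names only): constant parts of HAVING are folded only
when the condition does not reference aggregate functions:

  // Toy summary of the properties the optimizer looks at.
  struct HavingCond {
    bool has_const_parts;      // e.g. MAX(1) < 7 after const-table
                               // substitution
    bool refers_to_aggregates;
  };

  // Constant parts must not be evaluated early when aggregates are
  // involved, because their value is only known after aggregation.
  bool may_eval_const_parts(const HavingCond &having) {
    return having.has_const_parts && !having.refers_to_aggregates;
  }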
mysql-test/r/having.result:
test case
mysql-test/t/having.test:
test case
sql/sql_select.cc:
skip evaluation of the const parts of HAVING if the HAVING
condition has references to aggregate functions.
Essentially, the problem is that safemalloc is excruciatingly
slow, as it checks all allocated blocks for overrun at each
memory management primitive, yielding an almost exponential
slowdown for the memory management functions (malloc, realloc,
free). The overrun check basically consists of verifying some
bytes of a block for certain magic keys, which catches some
simple forms of overrun. Other minor problems are that it
violates aliasing rules and that its own internal list of blocks
is prone to corruption.
Another issue with safemalloc is the maintenance cost, as the
tool has a significant impact on the server code. Given the
number of memory debuggers available nowadays, especially those
provided with the platform malloc implementation, maintaining an
in-house and largely obsolete memory debugger becomes a burden
that is not worth the effort, given its slowness and lack of
support for detecting more common forms of heap corruption.
Since third-party tools can provide the same functionality at a
lower or comparable performance cost, the solution is to simply
remove safemalloc.
The removal of safemalloc also allows a simplification of the
malloc wrappers, removing quite a bit of kludge: the redefinition
of my_malloc and my_free, and the unused second argument of
my_free. Since free() always checks whether the supplied pointer
is null, redundant checks are also removed.
Also, this patch adds unit testing for my_malloc and moves
my_realloc implementation into the same file as the other
memory allocation primitives.
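A minimal sketch in the spirit of the simplified wrappers (the names
are illustrative, not the exact my_malloc/my_free signatures): free()
already accepts a null pointer, so callers need no extra checks, and
the free wrapper takes a single argument:

  #include <cstddef>
  #include <cstdlib>
  #include <cstring>

  // Allocate 'size' bytes, optionally zero-filled.
  void *wrapped_malloc(std::size_t size, bool zero_fill) {
    void *ptr = std::malloc(size);
    if (ptr != NULL && zero_fill)
      std::memset(ptr, 0, size);
    return ptr;
  }

  // Single-argument free: free(NULL) is a no-op, so no null check
  // is needed here or at the call sites.
  void wrapped_free(void *ptr) {
    std::free(ptr);
  }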
client/mysqldump.c:
Pass my_free directly as its signature is compatible with the
callback type -- which wasn't the case for free_table_ent.
from mysql-next-mr-opt-backporting.
Bug#54515: Crash in opt_range.cc::get_best_group_min_max on
SELECT from VIEW with GROUP BY
When handling the grouping items in get_best_group_min_max, the
items need to be of type Item_field. In this bug, an ASSERT
triggered because the item used for grouping was an
Item_direct_view_ref (i.e., the group column is from a view).
The fix is to use the real_item(), since an Item_ref* pointing to
an Item_field is OK.
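A self-contained mock of the pattern (the server calls
Item::real_item() and compares against Item::FIELD_ITEM; the tiny
class hierarchy below is only for illustration):

  // Minimal stand-ins for the item hierarchy involved.
  struct Item {
    enum Type { FIELD_ITEM, REF_ITEM, OTHER_ITEM };
    virtual ~Item() {}
    virtual Type type() const = 0;
    virtual Item *real_item() { return this; }  // most items are themselves
  };

  struct ItemField : Item {
    Type type() const { return FIELD_ITEM; }
  };

  // A reference item, e.g. a view column pointing at a base field.
  struct ItemRef : Item {
    Item *ref;
    explicit ItemRef(Item *r) : ref(r) {}
    Type type() const { return REF_ITEM; }
    Item *real_item() { return ref->real_item(); }
  };

  // The fix's check: unwrap references before requiring a field.
  bool grouping_item_is_field(Item *item) {
    return item->real_item()->type() == Item::FIELD_ITEM;
  }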
mysql-test/r/select.result:
Add test for BUG#54515
mysql-test/t/select.test:
Add test for BUG#54515
sql/opt_range.cc:
Get the real_item() when processing grouping items in
get_best_group_min_max.
and the original engine is disabled
Missing check that engine is available.
mysql-test/include/not_blackhole.inc:
new include file
mysql-test/r/partition_not_blackhole.result:
new result file
mysql-test/std_data/parts/t1_blackhole.frm:
blackhole partitioned table .frm file:
create table `t1` (`id` int primary key) engine=blackhole
partition by key () partitions 1;
mysql-test/std_data/parts/t1_blackhole.par:
.par file matching blackhole partitioned .frm
mysql-test/t/partition_not_blackhole-master.opt:
new master-opt to disable blackhole if compiled in.
mysql-test/t/partition_not_blackhole.test:
New test
sql/ha_partition.cc:
Added check that engine is available.
Problem: sha2() reported its result as BINARY
Fix:
- Inheriting Item_func_sha2 from Item_str_ascii_func
- Setting max_length via fix_length_and_charset()
instead of direct assignment.
- Adding tests
value and NO_ZERO_DATE
The problem was that an older version of the error path for a
failed admin statement relied upon a few error conditions being
met in order to access a table handler, the first one being that
the table object pointer was not NULL. Probably by chance, in all
cases where a table object was closed but the reference wasn't
reset, the other conditions didn't evaluate to true. With the
addition of a new check on the error path, the handler started
being dereferenced whenever it was not reset to NULL, causing
problems for code paths which closed the table but didn't reset
the reference.
The solution is to reset the reference whenever an admin statement
fails and the tables are closed.
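A minimal sketch (hypothetical types) of what the fix amounts to:
whenever the table is closed on the failure path, the reference is
reset as well, so the shared error path can safely test it:

  #include <cstddef>

  struct Table { /* an open table object */ };

  struct TableRef {
    Table *table;   // non-NULL only while the table is actually open
  };

  void close_table_on_failure(TableRef &ref) {
    delete ref.table;
    ref.table = NULL;   // crucial: do not leave a dangling pointer
  }

  void admin_error_path(TableRef &ref) {
    if (ref.table != NULL) {
      // ... only here is it safe to touch the handler ...
    }
  }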
mysql-test/r/partition_innodb.result:
Add test case result for Bug#54783
mysql-test/t/partition_innodb.test:
Add test case for Bug#54783
sql/sql_table.cc:
In case the table recreate failed, set an appropriate result code.
Reset the reference to a closed table object, otherwise the error
path might attempt to access it.
Merge up to sunny.bains@oracle.com-20100625081841-ppulnkjk1qlazh82.
There are 8 more changesets in mysql-5.1-innodb, but PB2 shows a
failure for a test added in one of them. If that is resolved quickly
then those 8 more changesets will be merged too.
MERGE engine".
Backport the patch from 6.0 by Ingo Struewing:
revid:ingo.struewing@sun.com-20091028183659-6kmv1k3gdq6cpg4d
Bug#36171 - CREATE TEMPORARY TABLE and MERGE engine
In former MySQL versions, up to 5.1.23/6.0.4 it was possible to create
temporary MERGE tables with non-temporary MyISAM tables.
This has been changed in the mentioned version due to Bug 19627
(temporary merge table locking). MERGE children were locked through
the parent table. If the parent was temporary, it was not locked and
so the children were not locked either. Parallel use of the MyISAM
tables corrupted them.
Since 6.0.6 (WL 4144 - Lock MERGE engine children), the children are
locked independently from the parent. Now it is possible to allow
non-temporary children with a temporary parent. Even though the
temporary MERGE table itself is not locked, each non-temporary
MyISAM table is locked anyway.
NOTE: Behavior change: In 5.1.23/6.0.4 we prohibited non-temporary
children with a temporary MERGE table. Now we re-allow it.
An important side-effect is that a temporary table which overlays a
non-temporary MERGE child also overlays that child in the MERGE table.
mysql-test/r/merge.result:
Update results (Bug#36171).
mysql-test/r/merge_mmap.result:
Update results (Bug#36171).
mysql-test/t/merge.test:
Add tests for Bug#36171
mysql-test/t/merge_mmap.test:
Add tests for Bug#36171.
storage/myisammrg/ha_myisammrg.cc:
Changed constraint for temporary state of tables.
and a backport of relevant changes from the 6.0
version of the fix done by Ingo Struewing.
The bug itself was fixed by the patch for Bug#54811.
The MyISAMMRG engine would try to use MMAP on its children
even on platforms that don't support it and even if the
myisam_use_mmap option was off.
This led to an infinite hang in INSERT ... SELECT into
a MyISAMMRG table when the destination MyISAM table
was also selected from.
A bug in duplicate detection, fixed by Bug#54811, was essential
to the hang: when a duplicate is detected, the optimizer is
supposed to disable the use of memory mapped files, and that
wasn't happening.
The patch below additionally avoids turning on MMAP for children
tables if myisam_use_mmap is off.
A test case is added to cover MyISAMMRG and the myisam_use_mmap
option.
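A minimal sketch of the added guard (the enum value and flag below are
stand-ins for HA_EXTRA_MMAP and the myisam_use_mmap option):

  enum ExtraOperation { EXTRA_MMAP, EXTRA_OTHER };

  // Would be driven by the server's myisam_use_mmap option.
  bool myisam_use_mmap_enabled = false;

  // The merge engine forwards "extra" hints to its children; the
  // mmap hint is now skipped entirely when memory mapping is
  // disabled.
  int merge_extra(ExtraOperation operation) {
    if (operation == EXTRA_MMAP && !myisam_use_mmap_enabled)
      return 0;   // do not turn on mmap for the children
    // ... otherwise forward the operation to every child table ...
    return 0;
  }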
mysql-test/r/merge_mmap.result:
Result file - Bug#50788.
mysql-test/t/merge_mmap-master.opt:
An option file for the test for Bug#50788 -- use mmap.
mysql-test/t/merge_mmap.test:
Try INSERT ... SELECT into a merge table when myisam_use_mmap is on (Bug#50788).
storage/myisam/mi_statrec.c:
Fixed misinterpretation of the return value of my_b_read().
storage/myisammrg/ha_myisammrg.cc:
Skip HA_EXTRA_MMAP if MyISAM memory mapping is disabled.
with open HANDLER
Fixes a problem with schema.test visible using embedded server.
The HANDLER was not closed which caused the test to hang.
The problem was not visible if the test was run on a normal server
as the handler there was implicitly closed by DATABASE DDL
statements doing Events::drop_schema_events().
with open HANDLER
Fixes problem which caused mdl_sync.test to fail on Solaris and
Windows due to path name differences in error messages in the
result file.
DATABASE with open HANDLER"
Remove LOCK_create_db, database name locks, and use metadata locks instead.
This exposes CREATE/DROP/ALTER DATABASE statements to the graph-based
deadlock detector in MDL, and paves the way for a safe, deadlock-free
implementation of RENAME DATABASE.
Database DDL statements will now take exclusive metadata locks on
the database name, while table/view/routine DDL statements take
intention exclusive locks on the database name. This prevents race
conditions between database DDL and table/view/routine DDL.
(e.g. DROP DATABASE with concurrent CREATE/ALTER/DROP TABLE)
By adding database name locks, this patch implements
WL#4450 "DDL locking: CREATE/DROP DATABASE must use database locks" and
WL#4985 "DDL locking: namespace/hierarchical locks".
The patch also changes code to use init_one_table() where appropriate.
The new lock_table_names() function requires TABLE_LIST::db_length to
be set correctly, and this is taken care of by init_one_table().
This patch also adds a simple template to help work with
the mysys HASH data structure.
Most of the patch was written by Konstantin Osipov.
Fixed an incomplete historical ALTER TABLE MODIFY trimming the trigger
privilege bit from mysql.tables_priv.Table_priv column.
Removed the duplicate ALTER TABLE MODIFY.
Test suite added.
BUG#54872 MBR: replication failure caused by using tmp table inside transaction
Changed the criteria for classifying a statement as unsafe in order to
reduce the number of spurious warnings. A statement is now classified
as unsafe when there is an on-going transaction at any point of the
execution if:
1. The mixed statement is about to update a transactional table and
a non-transactional table.
2. The mixed statement is about to update a temporary transactional
table and a non-transactional table.
3. The mixed statement is about to update a transactional table and
read from a non-transactional table.
4. The mixed statement is about to update a temporary transactional
table and read from a non-transactional table.
5. The mixed statement is about to update a non-transactional table
and read from a transactional table when the isolation level is
lower than repeatable read.
After updating a transactional table if:
6. The mixed statement is about to update a non-transactional table
and read from a temporary transactional table.
7. The mixed statement is about to update a non-transactional table
and read from a temporary transactional table.
8. The mixed statement is about to update a non-transactional table
and read from a temporary non-transactional table.
9. The mixed statement is about to update a temporary non-transactional
table and update a non-transactional table.
10. The mixed statement is about to update a temporary non-transactional
table and read from a non-transactional table.
11. A statement is about to update a non-transactional table and the
option variables.binlog_direct_non_trans_update is OFF.
The reason for this is that the locks acquired may not prevent a
concurrent transaction from interfering with the current execution
and, as a consequence, with the result. So the patch reduces the
number of spurious unsafe warnings.
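To make conditions 1-5 above concrete, here is a self-contained
sketch of how such a classification could be expressed; the flag
names are illustrative, not the server's actual members:

  // What the statement is about to do, inside an on-going transaction.
  struct StatementAccess {
    bool writes_trans;        // updates a transactional table
    bool writes_tmp_trans;    // updates a temporary transactional table
    bool writes_non_trans;    // updates a non-transactional table
    bool reads_trans;         // reads from a transactional table
    bool reads_non_trans;     // reads from a non-transactional table
    bool isolation_below_repeatable_read;
  };

  // Conditions 1-5 from the list above, combined into one predicate.
  bool unsafe_inside_transaction(const StatementAccess &a) {
    return (a.writes_trans && a.writes_non_trans) ||        // 1
           (a.writes_tmp_trans && a.writes_non_trans) ||    // 2
           (a.writes_trans && a.reads_non_trans) ||         // 3
           (a.writes_tmp_trans && a.reads_non_trans) ||     // 4
           (a.writes_non_trans && a.reads_trans &&
            a.isolation_below_repeatable_read);             // 5
  }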
Besides this, we fixed a regression caused by BUG#51894, which makes
temporary tables go into the trx-cache if there is an on-going
transaction. In MIXED mode, the patch for BUG#51894 ignores that the
trx-cache may have updates to temporary non-transactional tables that
must be written to the binary log while rolling back the transaction.
So we fix this problem by writing the content of the trx-cache to the
binary log while rolling back a transaction if a non-transactional
temporary table was updated and the binary logging format is MIXED.
The problem is that QUICK_SELECT_DESC behaviour depends
on the used_key_parts value, which can be bigger than the selected
best_key_parts value if the engine supports clustered keys.
But used_key_parts is overwritten with the best_key_parts
value, which prevents correct selection of the index
access method. The fix is to preserve the used_key_parts
value for further use in QUICK_SELECT_DESC.
mysql-test/r/innodb_mysql.result:
test case
mysql-test/t/innodb_mysql.test:
test case
sql/sql_select.cc:
preserve used_key_parts value for further use in QUICK_SELECT_DESC
Remove mysql_lock_have_duplicate(), since now we always
have TABLE_LIST objects for MyISAMMRG children
in lex->query_tables and keep them till the end of the
statement (sub-statement).
mysql-test/r/merge.result:
Update results (Bug#54811).
mysql-test/t/merge-big.test:
Update to new wait state.
mysql-test/t/merge.test:
Add a test case for Bug#54811.
sql/lock.cc:
Remove a function that is now unused.
sql/lock.h:
Remove a function that is now unused.
sql/sql_base.cc:
Don't try to search for a duplicate table among THR_LOCK objects; the TABLE_LIST list contains all used tables.
This bug is a consequence of WL#5349, as the
default storage engine was changed.
The fix was to explicitly add an ENGINE
clause to a CREATE TABLE statement, to
ensure that we test case preservation on
MyISAM.
The problem was that if a query accessing a view was blocked due to
conflicting locks on tables in the view definition, it would be possible
for a different connection to alter the view definition before the view
query completed. When the view query later resumed, it used the old view
definition. This meant that if the view query was later repeated inside
the same transaction, the two executions of the query would give different
results, thus breaking repeatable read. (The first query used the old
view definition, the second used the new view definition).
This bug is no longer repeatable with the recent changes to the metadata
locking subsystem (revno: 3040). The view query will no longer back off
and release the lock on the view definition. Instead it will wait for
the conflicting lock(s) to go away while keeping the view definition lock.
This means that it is no longer possible for a concurrent connection to
alter the view definition. Instead, any such attempt will be blocked.
In the case from the bug report where the same view query was executed
twice inside the same transaction, any ALTER VIEW from other connections
will now be blocked until the transaction has completed (or aborted).
The view queries will therefore use the same view definition and we will
have repeatable read.
Test case added to innodb_mysql_lock.test.
This patch contains no code changes.
This deadlock happened if DROP DATABASE was blocked due to an open
HANDLER table from a different connection. While DROP DATABASE
is blocked, it holds the LOCK_mysql_create_db mutex. This results
in a deadlock if the connection with the open HANDLER table tries
to execute a CREATE/ALTER/DROP DATABASE statement as they all
try to acquire LOCK_mysql_create_db.
This patch makes this deadlock scenario very unlikely by closing and
marking for re-open all HANDLER tables for which there are pending
conflicting locks, before LOCK_mysql_create_db is acquired.
However, there is still a very slight possibility that a connection
could access one of these HANDLER tables between closing/marking for
re-open and the acquisition of LOCK_mysql_create_db.
This patch is for 5.1 only, a separate and complete fix will be
made for 5.5+.
Test case added to schema.test.