THAN LOCALHOST
This is a test bug; the explanation for the behaviour can be found
on the bug page. Modified the SELECT to select users where user != 'root' for the line where
the failure is encountered on machines with no hostname other than localhost.
STATUS OF ROLLBACKED TRANSACTION" and bug #17054007 - "TRANSACTION
IS NOT FULLY ROLLED BACK IN CASE OF INNODB DEADLOCK".
The problem in the first bug report was that although a deadlock involving
metadata locks was reported using the same error code and message as an InnoDB
deadlock, it didn't roll back the transaction like the latter. This caused
confusion to users, as in some cases after ER_LOCK_DEADLOCK the transaction
could be restarted immediately and in other cases an explicit rollback was
required.
The problem in the second bug report was that although an InnoDB deadlock
caused the transaction to be rolled back in all storage engines, it didn't cause
release of metadata locks. So concurrent DDL on the tables used in the transaction was
blocked until an implicit or explicit COMMIT or ROLLBACK was issued in the
connection which got the InnoDB deadlock.
The former issue stemmed from the fact that when support for detection
and reporting of metadata lock deadlocks was added, we erroneously assumed
that InnoDB doesn't roll back the transaction on deadlock but only the last statement
(while this is actually what happens on an InnoDB lock wait timeout) and so didn't
implement rollback of transactions on MDL deadlocks.
The latter issue was caused by the fact that rollback of a transaction due
to deadlock is carried out by setting the THD::transaction_rollback_request
flag at the point where the deadlock is detected and performing the rollback
inside the trans_rollback_stmt() call when this flag is set. Since
trans_rollback_stmt() is not aware of MDL locks, no MDL locks are
released.
This patch solves these two problems in the following way:
- When an MDL deadlock is detected, transaction rollback is requested
by setting the THD::transaction_rollback_request flag.
- The code performing rollback of the transaction when THD::transaction_rollback_request
is set is moved out of trans_rollback_stmt(). Now we handle the rollback request
at the same level at which we call trans_rollback_stmt() and release statement/
transaction MDL locks.
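An illustrative sketch of the behaviour after this patch (table, column and
connection names are hypothetical):

  -- connection 1
  BEGIN;
  UPDATE t1 SET val = val + 1 WHERE id = 1;   -- takes row locks and an MDL lock on t1
  -- a later statement in this transaction fails with ER_LOCK_DEADLOCK;
  -- the whole transaction is now rolled back and its MDL locks are released,
  -- so a concurrent ALTER TABLE t1 in connection 2 is no longer blocked
  -- until this connection issues COMMIT or ROLLBACK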
TO INCONSISTENCY
PROBLEM
--------
When we drop a partitioned table, we first gather the
information about the partitions in the table from the
table_name.par file and store it in an internal data
structure. Then we delete this file and the data in
the table. If the server crashes after deleting the
file, then after recovery we cannot access the table.
We cannot even drop the table, because the drop algorithm
requires the .par file to read the partition information.
FIX
---
1. We move the deletion of the .par file to after the deletion of
all the table data from the storage engine.
2. During the drop operation, if we detect that the .par
file is missing, then we delete the .frm file, since
there is no way of recovering without the .par file.
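For illustration, a minimal sketch of the scenario (table definition is hypothetical):

  CREATE TABLE t1 (a INT)
    PARTITION BY HASH (a) PARTITIONS 4;   -- creates t1.frm, t1.par and the engine data files
  DROP TABLE t1;                          -- with the fix, t1.par is deleted only after the engine
                                          -- data, and a missing t1.par no longer prevents the drop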
[Approved by Mattias rb#2576 ]
USING THE PLUGIN INTERFACE.
ISSUE: No support for floating-point plugin
system variables.
SOLUTION: Allowing plugins to define and expose floating-point
system variables of type double. MYSQL_SYSVAR_DOUBLE
and MYSQL_THDVAR_DOUBLE are added.
ISSUE: The fractional part of the def, min, max values of system
variables is ignored.
SOLUTION: Adding functions that store the raw
representation of a double in the raw bits of an unsigned
longlong in such a way that the binary representation
remains the same.
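As a hedged illustration (the plugin and variable names here are hypothetical,
not part of this patch), a double-typed plugin system variable is then usable
from SQL like any other variable, with fractional default/min/max values honoured:

  SET GLOBAL example_plugin_ratio = 0.75;
  SELECT @@GLOBAL.example_plugin_ratio;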
WITH COMPOSITE KEY COLUMNS
Problem:-
While running a SELECT query with several AGGR(DISTINCT) functions
that refer to different fields of the same composite key,
an incorrect value was returned.
Analysis:-
In a table where we have a composite key like (a,b,c),
when we issue a query like
select COUNT(DISTINCT b), SUM(DISTINCT a) from ....
we first make a list of the items in the AGGR(DISTINCT) functions
(which is a, b), where the order of the items doesn't matter,
and then we see whether we have a composite key where the prefix
of the index columns matches the items of the aggregation functions
(in this case we have a,b,c).
If yes, we can use a loose index scan and need not perform
duplicate removal for DISTINCT in our aggregate functions.
In our table, we traverse the rows marked with <-- and get the result as:

  (a,b,c)       count(distinct b)     sum(distinct a)
                [treated as count(b)] [treated as sum(a)]
  (1,1,2) <--   1                     1
  (1,2,2) <--   1+1 = 2               1+1 = 2
  (1,2,3)
  (2,1,2) <--   2+1 = 3               1+1+2 = 4
  (2,2,2) <--   3+1 = 4               1+1+2+2 = 6
  (2,2,3)

The result will be (4,6), but it should be (2,3).
So in this case our assumption is incorrect. If we have a
query like
select count(distinct a,b), sum(distinct a,b) from ..
then we can use a loose index scan.
Solution:-
In our query, when we have more than one AGGR(DISTINCT) function,
they should refer to the same fields, as in
select count(distinct a,b), sum(distinct a,b) from ..
--> we can use a loose index scan, as both AGGR(DISTINCT) functions refer to the same fields a,b.
If they refer to different fields, as in
select count(distinct a), sum(distinct b) from ..
--> we will not use a loose index scan, as the AGGR(DISTINCT) functions refer to different fields.
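A minimal reproduction sketch of the analysis above (table name and data are illustrative only):

  CREATE TABLE t1 (a INT, b INT, c INT, KEY k1 (a, b, c));
  INSERT INTO t1 VALUES (1,1,2),(1,2,2),(1,2,3),(2,1,2),(2,2,2),(2,2,3);
  SELECT COUNT(DISTINCT b), SUM(DISTINCT a) FROM t1;
  -- correct result: 2 and 3; with the faulty loose index scan it returned 4 and 6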
WITH A PORT NUMBER ENCLOSED IN QUOTES
Problem: mysqldump --dump-slave --include-master-host-port
prints the CHANGE MASTER command in the generated logical
backup. The PORT number generated with this command
is a string but should be an integer.
Fix: Remove the enclosing quotes from the port number.
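For illustration, the generated statement changes roughly as follows (host and port are placeholders):

  -- before: CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT='3306';
  -- after:  CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3306;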
DOWNGRADED FROM 5.6.11 TO 5.6.10
The problem was that the new syntax was not accepted by the previous version.
Fixed by adding a version comment of /*!50531 around the
new syntax.
Like this in the .frm file:
'PARTITION BY KEY /*!50611 ALGORITHM = 2 */ () PARTITIONS 3'
and also changing the output from SHOW CREATE TABLE to:
CREATE TABLE t1 (a INT)
/*!50100 PARTITION BY KEY */ /*!50611 ALGORITHM = 1 */ /*!50100 ()
PARTITIONS 3 */
It will always add the ALGORITHM into the .frm for KEY [sub]partitioned
tables, but for SHOW CREATE TABLE it will only add it if it is the
non-default ALGORITHM = 1.
Also notice that for 5.5, it will say /*!50531 instead of /*!50611, which
will make upgrade from 5.5 > 5.5.31 to 5.6 < 5.6.11 fail!
If one downgrades a fixed version to an earlier release in the same major version (5.5 or 5.6),
bug 14521864 will be visible again, but unless the .frm is updated, the table will
work again when upgrading again.
Also fixed so that the .frm version is not updated
if a single partition check passes.
Due to an internal change in the server code between 5.1 and 5.5
(wl#2649), the hash function used in KEY partitioning changed
for numeric and date/time columns (from binary hash calculation
to character-based hash calculation).
Also, enum/set changed from latin1 ci based hash calculation to
binary hash between 5.1 and 5.5 (bug#11759782).
These changes make KEY [sub]partitioned tables on any of
the affected column types incompatible with 5.5 and above,
since the calculation of the partition id differs.
Also since InnoDB asserts that a deleted row was previously
read (positioned), the server asserts on delete of a row that
is in the wrong partition.
The solution for this situation is:
1) The partitioning engine will check that a delete/update goes to the
partition the row was read from and give an error otherwise, listing
the row's partitioning fields. This will avoid asserts in InnoDB and
also alert the user that there is a misplaced row. A detailed error
message will be given, including an entry in the error log consisting
of the table name, partition and row content (PK if it exists, otherwise
all partitioning columns).
2) A new optional syntax for KEY () partitioning in 5.5 is allowed:
[SUB]PARTITION BY KEY [ALGORITHM = N] (list_of_cols)
where N = 1 uses the same hashing as 5.1 (numeric/date/time fields use
binary hashing, ENUM/SET use charset hashing) and N = 2 uses the same
hashing as 5.5 (numeric/date/time fields use charset hashing,
ENUM/SET use binary hashing). If not set on CREATE/ALTER it will
default to 2.
This new syntax should probably be ignored by NDB.
3) Since there is a demand for avoiding scanning through the full
table, during upgrade the ALTER TABLE t PARTITION BY ... command is
considered a no-op (only a .frm change) if everything except ALGORITHM
is the same and ALGORITHM was not set before, which allows manually
upgrading such a table with something like:
ALTER TABLE t PARTITION BY KEY ALGORITHM = 1 () or
ALTER TABLE t PARTITION BY KEY ALGORITHM = 2 ()
4) Enhanced partitioning with CHECK/REPAIR to also check for/repair
misplaced rows. (Also works for ALTER TABLE t CHECK/REPAIR PARTITION)
CHECK FOR UPGRADE:
If the .frm version is < 5.5.3
and uses KEY [sub]partitioning
and an affected column type
then it will fail with a message:
KEY () partitioning changed, please run:
ALTER TABLE `test`.`t1` PARTITION BY KEY ALGORITHM = 1 (a)
PARTITIONS 12
(i.e. current partitioning clause, with the addition of
ALGORITHM = 1)
CHECK without FOR UPGRADE:
if MEDIUM (default) or EXTENDED options are given:
Scan all rows and verify that each row is in the correct partition.
Fail on the first misplaced row.
REPAIR:
if default or EXTENDED (i.e. not QUICK/USE_FRM):
Scan all rows and move every misplaced row into its correct
partition.
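For illustration (table and partition names are placeholders):

  ALTER TABLE t CHECK PARTITION p0, p1;    -- check only the given partitions for misplaced rows
  ALTER TABLE t REPAIR PARTITION p0, p1;   -- move misplaced rows to their correct partition
  CHECK TABLE t;                           -- or check/repair the whole table
  REPAIR TABLE t;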
5) Updated mysqlcheck (called by mysql_upgrade) to handle the
new output from CHECK FOR UPGRADE, to run the ALTER statement
instead of running REPAIR.
This will allow mysql_upgrade (or CHECK TABLE t FOR UPGRADE) to upgrade
a KEY [sub]partitioned table that has any affected field type
and a .frm version < 5.5.3 to ALGORITHM = 1 without rebuild.
Also notice that if the .frm has a version >= 5.5.3 and ALGORITHM
is not set, it is not possible to know whether the table consists of rows from
5.1 or 5.5! In these cases I suggest that the user does:
(optional)
LOCK TABLE t WRITE;
SHOW CREATE TABLE t;
(verify that it has no ALGORITHM = N, and to be safe, I would suggest
backing up the .frm file, to be used if one needs to change to another
ALGORITHM = N without needing to rebuild/repair)
ALTER TABLE t <old partitioning clause, but with ALGORITHM = N>;
which should set the ALGORITHM to N (if the table has rows from
5.1 I would suggest N = 1, otherwise N = 2)
CHECK TABLE t;
(here one could use the backed up .frm instead and change to a new N
and run CHECK again and see if it passes)
and if there are misplaced rows:
REPAIR TABLE t;
(optional)
UNLOCK TABLES;
PROPERLY QUOTED IN BINLOG FILE
Problem: In a LOAD DATA query, user variables are allowed
inside the "Into_list" and "Set_list". The user variables used
inside these two lists are not properly guarded with backticks
when the server writes the statement into the binlog. Hence user
variable names like a` cannot be used in this context.
Fix: Properly quote these variables when
writing them into the binlog.
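An illustrative sketch (file, table and column names are hypothetical): a user
variable whose name contains a backtick has to be written with the backtick
doubled inside the quoted identifier, e.g.

  LOAD DATA INFILE 'data.txt' INTO TABLE t1 (@`a```) SET c1 = @`a```;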
Problem: When a view with a specific character set and collation
is created on another view with a different character set and collation, restoring
the dump results in an illegal mix of collations error.
SOLUTION: To avoid this mix of collations, the data type used in the stand-in
CREATE TABLE is hardcoded as "tinyint NOT NULL". This does not matter, as the table
created will be dropped at runtime, and tinyint is specifically used to
avoid hitting row size limits.
When a binlog is replayed into a server, e.g.:
$ mysqlbinlog binlog.000001 | mysql
it sets a pseudo-slave mode on the client connection so that the server
is able to read binlog events; that is, a format description event is
needed to correctly read the following events.
This pseudo-slave mode also applies to the current connection the
replication rules that are needed to correctly apply binlog events.
If a binlog dump is sourced on a connection, this pseudo-slave mode
remains active after it, which, from the customer's perspective, applies
unexpected rules to the following commands.
Added a new SET statement to the binlog dump output that unsets pseudo-slave
mode at the end of the dump file.
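For illustration, the dump output is roughly of the following form (the exact
version-comment number may differ):

  /*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
  -- ... binlog events ...
  /*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;   -- added at the end of the dump by this fix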
MYSQLDUMP OUTPUT
A patch was pushed for this bug. A result mismatch
occurred for the test main.ddl_i18n_utf8 in the
x86_64 gcov build on Linux in PB2. This commit
modifies ddl_i18n_utf8.result to match the
changes made for the bug.
MYSQLDUMP OUTPUT
A patch was pushed for this bug. A result mismatch
occurred for the test main.ddl_i18n_koi8r in the
x86_64 gcov build on Linux in PB2. This commit
modifies ddl_i18n_koi8r.result to match the
changes made for the bug.
MYSQLDUMP OUTPUT
Problem: mysqldump, when used with the --routines option, dumps
all the routines of the specified database into the
output. The statements in this output are written
so that they are version-safe, using C-style
version comments (of the format
/*!<version num> <sql statement>*/). If a semicolon
is present right before the closing of the comment in the
dump output, it results in a syntax error while
importing.
Solution: The version comments around dumped routines exist
specifically to protect versions older than 5.0.
When the import is done on 5.0 or later versions,
the entire CREATE statement gets executed, as all the
check conditions at the beginning of the comments
are satisfied. Since the trade-off is between the
stability of newer versions, which are widely in
use, and protection of very old versions, which are
no longer supported, it is proposed that these
comments be removed altogether to maintain
stability of the supported versions.
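A simplified sketch of the failure mode (not literal mysqldump output; the routine
is hypothetical): when the dumped body ends with a semicolon immediately before the
closing of the version comment, importing the dump fails with a syntax error:

  /*!50003 CREATE PROCEDURE p1() SELECT 1; */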
main.mysqlbinlog_row_innodb are skipped by mtr
=== Problem ===
The following tests are wrongly placed in the main suite and as a
result they are not run with the proper binlog format combinations.
Some are always skipped by mtr.
1) mysqlbinlog_row_myisam
2) mysqlbinlog_row_innodb
3) mysqlbinlog_row.test
4) mysqlbinlog_row_trans.test
5) mysqlbinlog-cp932
6) mysqlbinlog2
7) mysqlbinlog_base64
=== Background ===
mtr runs the tests placed in the main suite with binlog format=stmt.
Those that need to be tested against binlog format=row or mixed,
or against more than one binlog format, and require only one mysql server,
are placed in the binlog suite. mtr runs tests in the binlog suite with
all three binlog formats (stmt, row and mixed).
=== Fix ===
1) Moved the tests listed in the problem section above to the binlog suite.
2) Added the prefix "binlog_" to the name of each test case moved.
Renamed the corresponding result files and option files accordingly.
PRIVILEGES
Description: The (user,host) pair from the security context is used for
privilege checking at the time of granting or
revoking proxy privileges. This creates a problem
when the server is started with the
--skip-name-resolve option, because host will not
contain any value. Checks should depend on
consistent values regardless of the way the server is
started. Further, the privilege check should use the
(priv_user,priv_host) pair rather than values
obtained from the inbound connection, because
this pair represents the correct account context
obtained from the mysql.user table.
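For illustration (account names are placeholders), the affected privilege check is
the one performed for statements such as:

  GRANT PROXY ON 'proxied_user'@'localhost' TO 'proxy_user'@'localhost';
  REVOKE PROXY ON 'proxied_user'@'localhost' FROM 'proxy_user'@'localhost';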
-----------
After compiling from source, during make test I got the following error:
test main.loaddata failed with error
CURRENT_TEST: main.loaddata
mysqltest: At line 592: query 'LOAD DATA INFILE 'tmpp.txt' INTO TABLE t1
CHARACTER SET ucs2
(@b) SET a=REVERSE(@b)' failed: 1115: Unknown character set: 'ucs2'
I noticed other tests are skipped because of no ucs2
main.mix2_myisam_ucs2 [ skipped ] Test requires:'
have_ucs2'
Should main.loaddata be skipped if there is no ucs2?
How To Repeat:
-------------
Run make test on compiled source that doesn't have ucs2
Suggested fix:
-------------
The failing piece of the test should be moved from mysql-test/t/loaddata.test to
mysql-test/t/ctype_ucs.test.
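A sketch of the suggested fix, assuming the usual mtr guard include for this
character set: the moved piece of the test would then sit behind

  --source include/have_ucs2.inc

so that it is skipped on builds compiled without ucs2 support.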
QUOTING IN REPLICATION
Problem: Misquoted or unquoted identifiers may lead to
incorrect statements being logged to the binary log.
Fix: We use specialized functions to append quoted identifiers in
the statements generated by the server.
NUMBERS
If a system variable was declared as deprecated without mention of an
alternative, the message would look funny, e.g. for @@delayed_insert_limit:
Warning 1287 '@@delayed_insert_limit' is deprecated and
will be removed in MySQL .
The message was meant to display the version number, but it's not
possible to give one when declaring a system variable.
The fix does two things:
1) The definition of the message
ER_WARN_DEPRECATED_SYNTAX_NO_REPLACEMENT is changed so that it does
not display a version number. I.e. in English the message now reads:
Warning 1287 The syntax '@@delayed_insert_limit' is deprecated and
will be removed in a future version.
2) The message ER_WARN_DEPRECATED_SYNTAX_WITH_VER is discontinued in
favor of ER_WARN_DEPRECATED_SYNTAX for system variables. This change
was already done in versions 5.6 and above as part of wl#5265. This
part is simply back-ported from the worklog.
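For illustration, using the variable from the example above (message text per the
new definition):

  SET GLOBAL delayed_insert_limit = 50;
  SHOW WARNINGS;
  -- Warning 1287: '@@delayed_insert_limit' is deprecated and will be removed in a future version.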
CONNECTIONS IF SPE
Problem description: The --ssl-key value is not validated; you can assign any bogus
text to --ssl-key, it is not verified that the file exists, and, more importantly,
the client is still allowed to connect to mysqld.
Fix: Added proper validation checks for --ssl-key.
Note:
1) Documentation changes are required for 5.1, 5.5, 5.6 and trunk in the sections
listed below; the details are:
http://dev.mysql.com/doc/refman/5.6/en/ssl-options.html#option_general_ssl
and
REQUIRE SSL section of
http://dev.mysql.com/doc/refman/5.6/en/grant.html
2) A client using the '--ssl' option should be able to get an SSL connection. This
will be implemented as part of a separate fix in 5.6 and trunk.
Backporting WL#5716, "Information schema table for InnoDB
buffer pool information". Backporting revisions 2876.244.113,
2876.244.102 from mysql-trunk.
rb://1177 approved by Jimmy Yang.
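For reference, the backported tables are queried like any other INFORMATION_SCHEMA
table; the exact set of tables is as defined by the worklog, for example:

  SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_POOL_STATS;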
"ORDER BY" AND "LIMIT BY" CLAUSE
PROBLEM:
When a 'limit' clause is specified in a query along with
group by and order by, the optimizer chooses the wrong index,
thereby examining more rows than required.
However, without the 'limit' clause, the optimizer chooses
the right index.
ANALYSIS:
With respect to the query in question, the range optimizer chooses
the first index, as there is a range present (on 'a'). The optimizer
then checks for an index which would give records in sorted
order for the 'group by' clause.
While checking, it chooses the second index (on 'c,b,a') based on
the 'limit' specified and the selectivity of
'quick_condition_rows' (the number of rows present in the range)
in the 'test_if_skip_sort_order' function.
But it fails to consider that an order by clause on a
different column will result in scanning the entire index, and
hence the estimated number of rows calculated above is
wrong (which results in choosing the second index).
FIX:
Do not enforce the 'limit' clause in the call to
'test_if_skip_sort_order' if we are creating a temporary
table. Creation of a temporary table indicates that there will be
more post-processing and hence all the rows will be needed.
This fix is backported from 5.6, where the problem was fixed as
part of the changes for work log #5558.
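An illustrative sketch of the shape of the problem (schema and query are hypothetical,
chosen only to match the analysis above):

  CREATE TABLE t1 (a INT, b INT, c INT, KEY k_a (a), KEY k_cba (c, b, a));
  -- With the LIMIT, the optimizer could pick k_cba for the GROUP BY even though the
  -- ORDER BY on a different column forces scanning the whole index; without LIMIT,
  -- the range on 'a' makes it pick k_a.
  SELECT c FROM t1 WHERE a > 10 GROUP BY c ORDER BY MAX(b) LIMIT 5;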
executing
The problem is that mysqldump lacks information about the objects a view
depends on, so it can't dump views and tables in the proper order.
Thus, while dumping the tables, it needs to create a "stand-in" MyISAM
table for each view, which it later drops and replaces with the actual
view definition.
But since views can have many more columns than an actual table, creating
these stand-in tables may be problematic.
There's no portable way to find out how many columns a MyISAM table
can have; it's a complicated formula depending on internal server constants.
Thus we can't have a reliable error check without repeating the logic and
the formula inside mysqldump.
1. Changed the type of the columns of the stand-in tables that mysqldump
makes to satisfy view dependencies from the original type to smallint,
to save on row space.
2. Added a warning on mysqldump's standard error about possible
problems replaying the dump file if the number of columns of a view exceeds 1000.
3. Added a test case.
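For illustration only (view, table and column names are hypothetical), the dump
contains a pattern roughly like this, with the stand-in columns now using the
narrow type:

  CREATE TABLE `v1` (`col1` smallint, `col2` smallint);   -- temporary stand-in for the view
  -- ... dependent tables are dumped and restored here ...
  DROP TABLE `v1`;
  CREATE VIEW `v1` AS SELECT `t1`.`col1`, `t1`.`col2` FROM `t1`;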
Print the warning (note):
YEAR(x) is deprecated and will be removed in a future release. Please use YEAR(4) instead
on "CREATE TABLE ... YEAR(x)" or "ALTER TABLE MODIFY ... YEAR(x)", where x != 4
Fixed by backport of:
------------------------------------------------------------
revno: 3402.50.156
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
branch nick: mysql-trunk-test
timestamp: Wed 2012-02-08 14:10:23 +0100
message:
Bug#13417754 ASSERT IN ROW_DROP_DATABASE_FOR_MYSQL DURING DROP SCHEMA
This assert could be triggered if an InnoDB table was being moved
to a different database using ALTER TABLE ... RENAME, while this
database concurrently was being dropped by DROP DATABASE.
The reason for the problem was that no metadata lock was taken
on the target database by ALTER TABLE ... RENAME.
DROP DATABASE was therefore not blocked and could remove
the database while ALTER TABLE ... RENAME was executing. This
could cause the assert in InnoDB to be triggered.
This patch fixes the problem by taking an IX metadata lock on
the target database before ALTER TABLE ... RENAME starts
moving a table to a different database.
Note that this problem did not occur with RENAME TABLE which
already takes the correct metadata locks.
Also note that this patch slightly changes the behavior of
ALTER TABLE ... RENAME. Before, the statement would abort and
return an error if a lock on the target table name could not
be taken immediately. With this patch, ALTER TABLE ... RENAME
will instead block and wait until the lock can be taken
(or until we get a lock timeout). This also means that it is
possible to get ER_LOCK_DEADLOCK errors in this situation
since we allow ALTER TABLE ... RENAME to wait and not just
abort immediately.
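An illustrative sketch of the scenario addressed by the backport (database and
table names are placeholders):

  -- connection 1
  ALTER TABLE db1.t1 RENAME db2.t1;   -- now takes an IX metadata lock on db2 first
  -- connection 2, concurrently
  DROP DATABASE db2;                  -- blocked until the rename finishes
                                      -- (or a lock timeout/deadlock is reported)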