mysqldump --routines fails to dump databases whose names contain a
backslash ("\") character. This happened because the escaped database
name was used as an identifier when changing the current database.
Such identifiers are not supposed to be escaped; they must be
properly quoted instead.
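A minimal standalone sketch of the quoting rule described above (not
the actual mysqldump code; the helper name is invented): identifiers
are wrapped in backticks, with embedded backticks doubled, and any
backslash is left untouched.

  #include <stdio.h>
  #include <string.h>

  /* Hypothetical illustration: quote an identifier with backticks,
     doubling any embedded backtick, instead of backslash-escaping it
     as if it were a string literal. */
  static void quote_identifier(const char *name, char *out, size_t out_len)
  {
    size_t pos = 0;
    if (pos < out_len - 1)
      out[pos++] = '`';
    for (const char *p = name; *p && pos < out_len - 2; p++)
    {
      if (*p == '`' && pos < out_len - 3)
        out[pos++] = '`';          /* double embedded backticks */
      out[pos++] = *p;             /* a backslash is copied verbatim */
    }
    out[pos++] = '`';
    out[pos] = '\0';
  }

  int main(void)
  {
    char quoted[128];
    quote_identifier("foo\\bar", quoted, sizeof(quoted));
    /* Prints: USE `foo\bar`;  -- the backslash is not escaped */
    printf("USE %s;\n", quoted);
    return 0;
  }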
MULTIPLE THREADS
Description:- The "mysqlimport" utility does not use
multiple threads when executed with the option
"--use-threads". While importing multiple files and
multiple tables, "mysqlimport" uses a single thread even
if a number of threads is specified with the
"--use-threads" option.
Analysis:- The utility uses ifdef HAVE_LIBPTHREAD to check
for the libpthread library and, if it is defined, uses
libpthread for multithreading. Since HAVE_LIBPTHREAD is
not defined anywhere in the source, the "--use-threads"
option is silently ignored.
Fix:- "-DTHREADS" is added to COMPILE_FLAGS, which enables
pthreads. The HAVE_LIBPTHREAD macro is removed.
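A simplified illustration of the guard pattern described in the
analysis (not the real mysqlimport code): if the macro is never
defined by the build system, the threaded branch is compiled out and
the tool silently falls back to a single thread.

  #include <stdio.h>
  #ifdef HAVE_LIBPTHREAD
  #include <pthread.h>
  #endif

  static void *import_one_file(void *arg)
  {
    printf("importing %s\n", (const char *) arg);
    return NULL;
  }

  int main(void)
  {
    char *files[] = { "t1.txt", "t2.txt" };
  #ifdef HAVE_LIBPTHREAD
    /* threaded path: one worker per file */
    pthread_t workers[2];
    for (int i = 0; i < 2; i++)
      pthread_create(&workers[i], NULL, import_one_file, files[i]);
    for (int i = 0; i < 2; i++)
      pthread_join(workers[i], NULL);
  #else
    /* silent fallback: everything runs in a single thread */
    for (int i = 0; i < 2; i++)
      import_one_file(files[i]);
  #endif
    return 0;
  }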
MYSQLDUMP OUTPUT
Problem: mysqldump, when used with the option --routines, dumps
all the routines of the specified database into the
output. The statements in this output are made
version-safe using C-style version comments (of the
format /*!<version num> <sql statement>*/). If a
semicolon is present right before the closing of the
comment in the dump output, it results in a syntax
error while importing.
Solution: Version comments around dumped routines exist
specifically to protect servers older than 5.0.
When the import is done on 5.0 or later versions,
the entire CREATE statement gets executed because
the version check at the beginning of the comment
is satisfied. Since the trade-off is between the
correct functioning of newer versions, which are
widely used, and protection of very old versions,
which are no longer supported, it is proposed that
these comments be removed altogether to maintain
the stability of the supported versions.
client/mysqldump.c:
Bug#14463669 FAILURE TO CORRECTLY PARSE ROUTINES IN
MYSQLDUMP OUTPUT
The output of mysqldump is built by taking the queries from
SHOW CREATE and appending version comments to them.
query_str is the variable used to store the final string.
Since it is no longer required, its declaration and the
manipulations made on it are deleted. At the step where the
output is printed, query_str is replaced with the original
query string obtained from SHOW CREATE.
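A condensed standalone sketch of the before/after output described
above (the procedure body and version number are made up; this is
not the actual mysqldump code):

  #include <stdio.h>

  int main(void)
  {
    /* Output of SHOW CREATE PROCEDURE, with a semicolon-terminated body. */
    const char *show_create = "CREATE PROCEDURE p1() BEGIN SELECT 1; END";

    /* Old behaviour (sketch): wrap the statement in a version comment.
       Because the statement ends in ';', the re-import chokes on the
       semicolon that lands directly before the comment terminator. */
    printf("/*!50003 %s;*/\n", show_create);

    /* Behaviour after the fix (sketch): emit the statement unwrapped. */
    printf("%s;\n", show_create);
    return 0;
  }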
two tests still fail:
main.innodb_icp and main.range_vs_index_merge_innodb
call records_in_range() with both range ends being open
(which triggers an assert)
executing
The problem is that MySQL lacks information about the objects a view
depends on, so it can't dump views and tables in the proper order.
Thus, while dumping the tables, it needs to create a "stand-in"
MyISAM table for each view, which it later drops and replaces with
the actual view definition.
But since views can have many more columns than an actual table,
creating these stand-in tables may be problematic.
There's no way to portably find out how many columns a MyISAM table
can have: it's a complicated formula depending on internal server
constants. Thus we can't have a reliable error check without
repeating the logic and the formula inside mysqldump.
1. Changed the type of the columns of the stand-in tables mysqldump
makes to satisfy view dependencies from the original type to smallint
to save on row space.
2. Added a warning on mysqldump's standard error about possible
problems replaying the dump file if a view has more than 1000 columns
(see the sketch after this list).
3. Added a test case.
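A hypothetical sketch of such a check (the function name, threshold
constant and warning text are invented, not the actual mysqldump
code):

  #include <stdio.h>

  #define STANDIN_COLUMN_WARN_LIMIT 1000   /* threshold mentioned above */

  /* Warn on stderr when the stand-in table created for a view would
     need more columns than a MyISAM table can reliably hold. */
  static void check_view_columns(const char *view_name, unsigned long columns)
  {
    if (columns > STANDIN_COLUMN_WARN_LIMIT)
      fprintf(stderr,
              "Warning: view %s has %lu columns; the stand-in table "
              "created for it may exceed the MyISAM column limit when "
              "the dump is replayed.\n", view_name, columns);
  }

  int main(void)
  {
    check_view_columns("v_wide", 1500);  /* example values */
    return 0;
  }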
CAUSES RESTORE PROBLEM
Merging the fix from mysql-5.1 to mysql-5.5
mysql-test/t/mysqldump.test:
There is a difference in the test case added as part of this
fix compared with mysql-5.1: in mysql-5.5 and mysql-5.6,
dropping the mysql database fails when logging is enabled,
hence those lines were removed.
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not produce dump stmts for the general_log and
slow_log tables. That is because of the fix for Bug#26121. Hence,
after dropping the mysql database and applying the dump with
logging enabled, "'general_log' table not found" errors are
logged in the server log file.
Analysis:
---------
As part of the fix for Bug#26121, we skipped dumping the
general_log and slow_log tables, because the data dump of those
tables takes LOCKS, which is not allowed for log tables.
Fix:
----
Instead of taking both the meta data and the data dump for those
tables, take only the meta data dump, which doesn't need LOCKS.
As part of fixing the issue we came up with the algorithm below.
Design before fix:
1) The mysql database has tables like db, event, ... general_log,
... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables which are present in the
table list and do 'show create table'.
4) Release the lock.
Design with the fix:
1) The mysql database has tables like db, event, ... general_log,
... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and
slow_log.
4) Take the TL_READ lock on the tables which are present in the
table list and do 'show create table'.
5) Release the lock.
While taking the meta data dump for general_log and slow_log,
"CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
This is because we skip "DROP TABLE" for those tables:
"DROP TABLE" fails for these tables when logging is enabled.
The customer applies the dump with logging enabled, so if the
dump contained "DROP TABLE" it would fail. Hence, the
"DROP TABLE" stmts for those tables were removed.
After the fix we may still initially observe "Table
'mysql.general_log' doesn't exist" errors; that is because in
the customer scenario the mysql database is dropped while
logging is enabled, so those errors are expected. Once the dump
taken before the "drop database mysql" is applied, the errors
no longer appear.
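A standalone sketch of this idea (simplified; not the actual
get_table_structure() code, and the statement strings are
illustrative):

  #include <stdio.h>
  #include <string.h>

  /* For general_log and slow_log: skip the DROP TABLE statement and
     turn CREATE TABLE into CREATE TABLE IF NOT EXISTS so the dump
     replays cleanly while logging is enabled. */
  static int is_log_table(const char *db, const char *table)
  {
    return strcmp(db, "mysql") == 0 &&
           (strcmp(table, "general_log") == 0 ||
            strcmp(table, "slow_log") == 0);
  }

  static void dump_table_ddl(const char *db, const char *table,
                             const char *show_create)
  {
    if (!is_log_table(db, table))
      printf("DROP TABLE IF EXISTS `%s`;\n", table);

    if (is_log_table(db, table) &&
        strncmp(show_create, "CREATE TABLE ", 13) == 0)
      printf("CREATE TABLE IF NOT EXISTS %s;\n", show_create + 13);
    else
      printf("%s;\n", show_create);
  }

  int main(void)
  {
    dump_table_ddl("mysql", "general_log",
                   "CREATE TABLE `general_log` (`event_time` timestamp)");
    return 0;
  }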
client/mysqldump.c:
In get_table_structure(), added code to skip the DROP TABLE stmts
for the general_log and slow_log tables, because those stmts fail
when logging is enabled. Also replaced CREATE TABLE with
CREATE TABLE IF NOT EXISTS for those tables, to make sure the
CREATE stmt doesn't fail now that their DROP stmts are removed.
In dump_all_tables_in_db(), added code to call
get_table_structure() for the general_log and slow_log tables.
mysql-test/r/mysqldump.result:
Added a test as part of fix for Bug #11754178
mysql-test/t/mysqldump.test:
Added a test as part of fix for Bug #11754178
- mysql-test-run.pl --valgrind complains when all tests succeed.
- perfschema.all_instances fails on non-Linux, where ENABLE_TEMP_POOL
is not set and therefore the BITMAP mutex is not used.
- MDEV-132: main.mysqldump fails because it depends on the exact size
of stdio buffers.
- MDEV-99: rpl.rpl_cant_read_event_incident fails due to a race where the
slave manages to connect while the test case is in the middle of setting up
the master, causing the slave to replicate extra/wrong events.
- MDEV-133: rpl.rpl_rotate_purge_deadlock fails because it issues a
DEBUG_SYNC SIGNAL immediately followed by RESET; this means that
sometimes the intended recipient has no time to see the signal before
it is cleared by the RESET, causing the wait to time out.
routines.
mysqldump in XML mode did not dump routines, events or
triggers.
This patch fixes the issue by correcting the if conditions
that disallowed dumping the above-mentioned objects in
XML mode, and adds the required code to produce the dump
in XML format.
client/mysqldump.c:
BUG#11760384 - 52792: mysqldump in XML mode does not dump
routines.
Fixed some if conditions to allow execution of the dump methods
for XML, and added the relevant code where needed to produce the
dump in XML format.
mysql-test/r/mysqldump.result:
Added a test case for Bug#11760384.
mysql-test/t/mysqldump.test:
Added a test case for Bug#11760384.
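A schematic sketch of the corrected control flow (the flag and
writer names are placeholders, not the real mysqldump globals):

  #include <stdio.h>

  static int opt_routines = 1;   /* --routines given */
  static int opt_xml      = 1;   /* --xml given */

  static void write_routine_sql(const char *body) { printf("%s;\n", body); }
  static void write_routine_xml(const char *body)
  {
    printf("  <routine><![CDATA[%s]]></routine>\n", body);
  }

  int main(void)
  {
    const char *body = "CREATE PROCEDURE p1() SELECT 1";

    /* Old (buggy) shape of the check: if (opt_routines && !opt_xml) ... */
    if (opt_routines)            /* fixed: no longer bails out for XML */
    {
      if (opt_xml)
        write_routine_xml(body);
      else
        write_routine_sql(body);
    }
    return 0;
  }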
--FLUSH-LOG BREAKS CONSISTENCY
The transaction started by mysqldump gets committed
implicitly when --flush-logs is specified along with the
--single-transaction option, and hence can break
consistency.
This is because COM_REFRESH is executed in order to
flush the logs, and starting from 5.5 this command
performs an implicit commit.
Fixed by making sure that COM_REFRESH is executed
before the transaction is started, not after it.
Note: This patch introduces the following behavioral
changes in mysqldump:
1) After this patch we no longer flush logs before
dumping each database when the --single-transaction
option is given, as was done before (in the
absence of the --lock-all-tables and --master-data
options).
2) Also, after this patch, we acquire FTWRL before
flushing logs when only --single-transaction and
--flush-logs are given (see the sketch below).
It becomes safe to use mysqldump with these two
options, and without the --master-data parameter,
for backups.
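A sequencing sketch of the fixed ordering (the helper names are
invented; only the order of operations matters here):

  #include <stdio.h>

  static void flush_logs(void)        { puts("FLUSH LOGS (implicit commit)"); }
  static void acquire_ftwrl(void)     { puts("FLUSH TABLES WITH READ LOCK"); }
  static void start_transaction(void) { puts("START TRANSACTION WITH CONSISTENT SNAPSHOT"); }

  int main(void)
  {
    int opt_single_transaction = 1, opt_flush_logs = 1;

    if (opt_single_transaction && opt_flush_logs)
    {
      acquire_ftwrl();     /* new: FTWRL so the flush and snapshot stay consistent */
      flush_logs();        /* flush first: the implicit commit is now harmless */
    }
    if (opt_single_transaction)
      start_transaction(); /* snapshot opened only after the flush */

    /* ... dump tables under the consistent snapshot ... */
    return 0;
  }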
client/mysqldump.c:
Bug#12809202 61854: MYSQLDUMP --SINGLE-TRANSACTION
--FLUSH-LOG BREAKS CONSISTENCY
Added logic to make sure that, if the --flush-logs option
is specified, mysql_refresh() is never executed after
the transaction has started.
Added verbose messages for all executions of
mysql_refresh() in order to track its invocation.
mysql-test/r/mysqldump.result:
Added test case for Bug#12809202.
mysql-test/t/mysqldump.test:
Added test case for Bug#12809202.
* rename all debugging-related command-line options
and variables to start with "debug-", and make them all
OFF by default.
* replace "MySQL" with "MariaDB" in error messages
* "Cast ... converted ... integer to it's ... complement"
is now a note, not a warning
* @@query_cache_strip_comments now has a session scope,
not global.
sql/sql_insert.cc:
CREATE ... IF NOT EXISTS may do nothing, but
it is still not a failure. don't forget to my_ok it.
sql/sql_table.cc:
small cleanup
This fixes a bug where, when you use mysqldump --no-create-info to generate a dump that you want to merge into an existing table,
you can end up with an InnoDB table containing duplicate values in unique keys.
Patch originally by Eric Bergen.
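A minimal illustration of the behaviour change (the output lines and
flag handling are simplified, not the exact mysqldump output):

  #include <stdio.h>

  /* Disable unique checks around the INSERT stream only when the dump
     itself creates the table; with --no-create-info the target table
     already exists and its unique keys must stay enforced. */
  static void dump_table_data(const char *table, int opt_no_create_info)
  {
    int created_by_dump = !opt_no_create_info;

    if (created_by_dump)
      printf("/*!40014 SET UNIQUE_CHECKS=0 */;\n");

    printf("INSERT INTO `%s` VALUES (1),(2);\n", table);  /* sample data */

    if (created_by_dump)
      printf("/*!40014 SET UNIQUE_CHECKS=1 */;\n");
  }

  int main(void)
  {
    dump_table_data("t1", 1);  /* --no-create-info: no UNIQUE_CHECKS wrapper */
    return 0;
  }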
client/mysqldump.c:
Only use UNIQUE_CHECKS=0 for tables that are created by the dump.
This ensures that you can't end up with duplicate values in unique keys when merging two dumps.
mysql-test/r/mysqldump.result:
Test for mysqldump --no-create-info