CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not produce dump statements for the general_log and
slow_log tables. That is a consequence of the fix for Bug#26121.
Hence, after dropping the mysql database and applying the dump with
logging enabled, "'general_log' table not found" errors are logged
into the server log file.
Analysis:
---------
As part of the fix for Bug#26121, we skipped dumping the general_log
and slow_log tables, because dumping the data of those tables requires
taking LOCKS, which is not allowed for log tables.
Fix:
----
Instead of taking both the metadata and the data dump for those
tables, we take only the metadata dump, which does not need LOCKS.
As part of fixing the issue we came up with the algorithm below.
Design before the fix:
1) The mysql database has tables such as db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables present in the table list and
   do 'show create table'.
4) Release the lock.
Design with the fix:
1) The mysql database has tables such as db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly run 'show create table' for general_log and slow_log.
4) Take the TL_READ lock on the tables present in the table list and
   do 'show create table'.
5) Release the lock.
While taking the metadata dump for general_log and slow_log,
"CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
This is because we also skip the "DROP TABLE" statements for those
tables: "DROP TABLE" fails for these tables if logging is enabled.
The customer applies the dump with logging enabled, so if the dump
contained "DROP TABLE" it would fail. Hence, the "DROP TABLE"
statements were removed for those tables.
After the fix we may still observe "Table 'mysql.general_log'
doesn't exist" errors initially. That is because in the customer
scenario the mysql database is dropped while logging is enabled,
so those errors are expected. Once we apply the dump taken before
the "drop database mysql", the errors no longer appear.
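For illustration, a hedged sketch of what the dump now contains for
the log tables (the exact column list depends on the server version;
abbreviated here):

  -- Metadata-only dump: no DROP TABLE, no data rows, and
  -- CREATE TABLE IF NOT EXISTS so the restore succeeds whether or
  -- not the table already exists.
  CREATE TABLE IF NOT EXISTS `general_log` (
    `event_time` TIMESTAMP NOT NULL,
    `user_host` MEDIUMTEXT NOT NULL,
    `thread_id` INT(11) NOT NULL,
    `server_id` INT(10) UNSIGNED NOT NULL,
    `command_type` VARCHAR(64) NOT NULL,
    `argument` MEDIUMTEXT NOT NULL
  ) ENGINE=CSV DEFAULT CHARSET=utf8 COMMENT='General log';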
TO "LOCALHOST" IF LOCALHOST IS BOTH IPV4/IPV6 ENABLED.
Previous commit comments were wrong. The default value has always been NULL.
The original patch for Bug#12762885 just makes it visible in the logs.
This patch uses the "0.0.0.0" string if bind-address is not set.
IF LOCALHOST IS BOTH IPV4/IPV6 ENABLED.
The original patch removed the default value of the bind-address
option, so the default value became NULL. By coincidence, NULL
resolves to 0.0.0.0 and ::, and since the server chooses the first
IPv4 address, 0.0.0.0 is chosen. So, there was no change in the
behaviour.
This patch restores the default value of the bind-address option to
"0.0.0.0".
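For reference, a hedged way to verify the effective default (assuming
a server version that exposes bind_address as a system variable):

  SHOW GLOBAL VARIABLES LIKE 'bind_address';
  -- Expected with this patch:
  --   +---------------+---------+
  --   | Variable_name | Value   |
  --   +---------------+---------+
  --   | bind_address  | 0.0.0.0 |
  --   +---------------+---------+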
Problem - The failure on PB2 is possibly due to the port number still
being in use even after the server restarts, which is not
reflected in the server restart.
Fix - The problem is fixed by starting the servers forcefully using
the option file, and the parameters for the server restart are
now passed correctly.
rb://1043
approved by: Sunny Bains
Two internal counters were incremented twice for a single
operation. The counters are:
srv_buf_pool_flushed
buf_lru_flush_page_count
Details:
- Test case bug12427262.test was failing on Windows because
'/' was not recognized there as a path separator, and it was used
in the LIKE clause of the query being run in this test case.
Fix:
- Windows needs '\\\\' as the path separator in MySQL, and there is
no clean way to keep a single query with two different syntaxes
depending on the platform. So the query is modified to make sure
it runs correctly on both platforms.
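A hypothetical sketch of the portability issue (illustrative table
and paths only, not the test's actual query):

  CREATE TABLE paths (p VARCHAR(255));
  INSERT INTO paths VALUES ('/var/lib/mysql/t1.MYD'),
                           ('C:\\data\\t1.MYD');
  -- '%/t1%' matches only the Unix-style row; to match the Windows
  -- row, the separator must be written as '\\\\' (escaped once for
  -- the string literal and once for LIKE). A separator-free pattern
  -- works on both platforms:
  SELECT p FROM paths WHERE p LIKE '%t1%';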
The function mysql_show_binlog_events has a local stack variable
'LOG_INFO linfo;', which is assigned to thd->current_linfo; however,
this variable goes out of scope and is destroyed before
thd->current_linfo is cleared.
The problem is solved by moving 'LOG_INFO linfo;' to function scope.
BUG#11761686 insert_id event is not filtered.
Two issues are covered.
An INSERT into an autoincrement field which is not the first part of
the composite primary key is unsafe by the autoincrement logging
design. The case is specific to the MyISAM engine because InnoDB does
not allow such a table definition. However, no warning was issued and
no row-format logging was done in MIXED mode; that is fixed.
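For illustration, a hypothetical table hitting the unsafe case
(MyISAM accepts this definition; InnoDB rejects it):

  CREATE TABLE t1 (
    a INT NOT NULL,
    b INT NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (a, b)  -- autoincrement column is the second part
  ) ENGINE=MyISAM;
  -- MyISAM numbers b per distinct value of a, so the result depends
  -- on the existing data, and the statement is unsafe to replicate
  -- in statement format:
  INSERT INTO t1 (a) VALUES (1), (1), (2);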
Int-, Rand- and User-var log events were not filtered along with
their parent query, which made it possible for them to corrupt the
execution context of the following query.
Fixed by deferring their execution until that of the parent query.
******
Bug#11754117
Post review fixes.
Reason:
This is a regression introduced by the code refactoring done in 5.1
(relative to 5.0).
Issue:
While executing "SHOW TABLES", lex->verbose was checked to avoid
opening FRM files to get the table type. In the case of "SHOW FULL
TABLES", lex->verbose is true to indicate that the table type is
required. This check was present in 5.0 but went missing in >=5.5.
Fix:
Added the required check to avoid opening FRM files unnecessarily in
the "SHOW TABLES" case.
Currently SHOW MASTER LOGS and SHOW BINARY LOGS require the SUPER
privilege. Monitoring tools (such as MEM) often want to check this
output - for instance MEM generates the SUM of the sizes of the logs
reported here, and puts that in the Replication overview within the MEM
Dashboard.
However, because of the SUPER requirement, these tools often have an
account that holds the connection open whilst monitoring, and can lock
out administrators when the server gets overloaded and reaches
max_connections: the one extra connection reserved beyond
max_connections for SUPER users is already taken by the connected
"monitor" account.
As SHOW MASTER STATUS, and all other replication related statements,
return with either REPLICATION CLIENT or SUPER privileges, this worklog
is to make SHOW MASTER LOGS and SHOW BINARY LOGS be consistent with this
as well, and allow both of these commands with either SUPER or
REPLICATION CLIENT.
This allows monitoring tools to not require a SUPER privilege any more,
so is safer in overloaded situations, as well as being more secure, as
lighter privileges can be given to users of such tools or scripts.
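A minimal sketch of the lighter grant this enables (hypothetical
account name):

  GRANT REPLICATION CLIENT ON *.* TO 'monitor'@'localhost';
  -- With this change, the account can run all of these without SUPER:
  SHOW MASTER STATUS;
  SHOW MASTER LOGS;
  SHOW BINARY LOGS;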
ORDER BY COUNT(*) LIMIT.
PROBLEM:
With respect to the problem in the bug description, we
exhibit different behaviors for the two tables
presented because the InnoDB statistics (rec_per_key
in this case) are updated for the first table
and not for the second one. As a result the
query plan gets changed in test_if_skip_sort_order
to use an 'index' scan. Hence the difference in the
EXPLAIN output. (NOTE: we can reproduce the problem
with the first table by reducing the number of tuples
and changing the table structure.)
The varied output w.r.t. the query on the second table
is a consequence of this query plan change.
When a query plan is changed to use an 'index' scan
after the call to test_if_skip_sort_order, we set
keyread to TRUE immediately. If for some reason
we drop this index scan for a filesort later on,
we fetch only the keys, not the entire tuple.
As a result we would see junk values in the result set.
Following is the code flow:
Call test_if_skip_sort_order
- Choose an index to give sorted output
- If this is a covering index, set keyread to TRUE
- Set the scan to INDEX scan
Call test_if_skip_sort_order a second time
- Index is not chosen (note that we do not pass the
  actual limit value the second time; hence we do not choose the
  index scan the second time, which is itself a bug, fixed
  in 5.6 with WL#5558)
- goto filesort
Call filesort
- Create a quick range on a different index
- Since keyread is set to TRUE, we fetch only the columns of
  the index
- As a result, the required columns are not fetched
FIX:
Remove the call to set_keyread(TRUE) from
test_if_skip_sort_order. The access function, which is
'join_read_first' or 'join_read_last', calls set_keyread anyway.
UNHANDLED, CONFUSING ERROR
The main confusion with the error message is that "it
implies that your data dictionary may now be out of
sync". This patch removes the unwanted and
misleading error message by not doing an unnecessary
operation in the error-handling code.
rb://980 approved by: Dmitry Lenev
The class Copy_field contains a String tmp,
which may allocate memory on the heap.
That means that all instances of Copy_field
must be properly destroyed. Alas, they are not.
Solution: don't use Copy_field::tmp for copying
from_field => tmp => to_field
in do_field_string().