Several fixes:
* sql-common/client.c
Added a validity check of the fields metadata packet sent
by the server.
Now libmysql will check if the length of the data sent by
the server matches what's expected by the protocol before
using the data.
* client/mysqltest.cc
Fixed the error handling code in mysqltest to avoid sending
new commands when reading the result set failed (and
there is unread data in the pipe).
* sql_common.h + libmysql/libmysql.c + sql-common/client.c
unpack_fields() now generates a proper error when it fails.
Added a new argument to this function to support the error
generation.
* sql/protocol.cc
Added a debug trigger to cause the server to send a NULL
instead of the packet expected by the client, for testing
purposes.
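
A minimal, self-contained sketch of the kind of length check described above (the helper name and packet layout are illustrative, not the real unpack_fields() internals): each length-prefixed value is validated against the remaining packet size before its bytes are touched.

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Hypothetical stand-in for one length-prefixed value in a metadata
    // packet: a one-byte length followed by that many payload bytes.
    // Returns false if the advertised length runs past the packet end.
    static bool read_length_prefixed(const uint8_t *&pos, const uint8_t *end,
                                     const uint8_t *&out, size_t &out_len)
    {
      if (pos >= end)                            // no room for the length byte
        return false;
      size_t len = *pos++;
      if (static_cast<size_t>(end - pos) < len)  // payload longer than packet
        return false;
      out = pos;
      out_len = len;
      pos += len;
      return true;
    }

    int main()
    {
      // Malformed packet: claims 5 payload bytes but only 2 follow.
      const uint8_t bad[] = {5, 'x', 'y'};
      const uint8_t *pos = bad, *val;
      size_t len;
      if (!read_length_prefixed(pos, bad + sizeof(bad), val, len))
        printf("malformed metadata packet rejected before use\n");
      return 0;
    }
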
Problem: mysqlbinlog exits without any error code in case of
a file write error. This is because the calls to the
Log_event::print() method do not return a value, and thus
any errors were being ignored.
Resolution: We resolve this problem by checking whether
IO_CACHE::error == -1 after every call to Log_event::print()
and terminating further execution.
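
A minimal sketch of the check-after-print pattern (the types here are simplified stand-ins for IO_CACHE and Log_event::print(), not the real server classes): after every print call the caller inspects the cache's error flag and exits non-zero on a write failure instead of continuing.

    #include <cstdio>
    #include <cstdlib>

    // Simplified stand-in for IO_CACHE: only the error flag matters here.
    struct PrintCache {
      int error;
    };

    // Stand-in for Log_event::print(): void return, failure is reported only
    // through the cache's error member, as described above.
    static void print_event(PrintCache *cache, bool simulate_write_error)
    {
      if (simulate_write_error)
        cache->error = -1;       // set when the underlying file write fails
      else
        printf("# event printed\n");
    }

    int main()
    {
      PrintCache cache = { 0 };
      const bool fail_on[] = { false, true, false };

      for (bool fail : fail_on)
      {
        print_event(&cache, fail);
        if (cache.error == -1)   // the check added after each print call
        {
          fprintf(stderr, "error: file write failed, stopping\n");
          return EXIT_FAILURE;   // exit with an error code instead of 0
        }
      }
      return EXIT_SUCCESS;
    }
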
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not include dump statements for the general_log
and slow_log tables. That is because of the fix for Bug#26121.
Hence, after dropping the mysql database and applying the dump
with logging enabled, "'general_log' table not found" errors are
logged into the server log file.
Analysis:
---------
As part of the fix for Bug#26121, we skipped dumping the
general_log and slow_log tables, because the data dump of those
tables takes LOCKS, which is not allowed for log tables.
Fix:
----
Instead of taking both the meta data and the data dump for
those tables, we now take only the meta data dump, which
does not need LOCKS.
As part of fixing the issue we came up with the algorithm below.
Design before fix:
1) The mysql database has tables like db, event,... general_log,
... slow_log...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables which are present in the
table list and do 'show create table'.
4) Release the lock.
Design with the fix:
1) The mysql database has tables like db, event,... general_log,
... slow_log...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and
slow_log.
4) Take the TL_READ lock on the tables which are present in the
table list and do 'show create table'.
5) Release the lock.
While taking the meta data dump for general_log and slow_log,
"CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
This is because we skipped "DROP TABLE" for those tables:
"DROP TABLE" fails for these tables if logging is enabled.
The customer applies the dump with logging enabled, so if the
dump has "DROP TABLE" it will fail. Hence, the "DROP TABLE"
stmts for those tables were removed.
After the fix we could still observe "Table 'mysql.general_log'
doesn't exist" errors initially; that is because in the customer
scenario the mysql database is dropped with logging enabled, so
those errors are expected. Once we apply the dump which was taken
before the "drop database mysql", the errors will no longer be
there.
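
A minimal, self-contained sketch of the statement rewrite described above (plain std::string handling, not the actual mysqldump buffer code): the SHOW CREATE TABLE output for a log table gets its leading "CREATE TABLE" replaced with "CREATE TABLE IF NOT EXISTS" so that restoring it does not fail when the table already exists.

    #include <iostream>
    #include <string>

    // Rewrite "CREATE TABLE" to "CREATE TABLE IF NOT EXISTS" at the start of
    // a SHOW CREATE TABLE result; anything else is returned unchanged.
    static std::string make_idempotent(const std::string &create_stmt)
    {
      const std::string prefix = "CREATE TABLE ";
      if (create_stmt.compare(0, prefix.size(), prefix) == 0)
        return "CREATE TABLE IF NOT EXISTS " + create_stmt.substr(prefix.size());
      return create_stmt;
    }

    int main()
    {
      // Shortened example of what SHOW CREATE TABLE general_log might return.
      std::string stmt = "CREATE TABLE `general_log` ("
                         "`event_time` timestamp, `argument` mediumblob)";
      std::cout << make_idempotent(stmt) << ";\n";
      return 0;
    }
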
This patch changes the mechanism by which the client enables a
plugin. Instead of using INSERT IGNORE to reload a plugin library,
it now uses REPLACE INTO. This allows users to load a library
multiple times, replacing the existing values in the mysql.plugin
table, and to repoint the symbol reference to a different dl name
in the table, thus permitting multiple versions of the same
library to be enabled without first disabling the old version.
A regression test was added to ensure this feature works.
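
A minimal sketch of the statement change (the helper and library names are illustrative, not the real mysql_plugin code): the generated SQL uses REPLACE INTO so that reloading a library overwrites the existing mysql.plugin row instead of being silently skipped, as INSERT IGNORE would do.

    #include <iostream>
    #include <string>

    // Build the SQL that registers one plugin symbol against a shared
    // library.  REPLACE INTO overwrites an existing row with the same plugin
    // name, so the symbol can be repointed to a new dl name without first
    // disabling the old version.
    static std::string build_enable_statement(const std::string &plugin_name,
                                              const std::string &library)
    {
      return "REPLACE INTO mysql.plugin (name, dl) VALUES ('" +
             plugin_name + "', '" + library + "');";
    }

    int main()
    {
      // With INSERT IGNORE the stale row pointing at the old library would
      // have been kept; REPLACE INTO updates it to the new dl name.
      std::cout << build_enable_statement("daemon_example",
                                          "libdaemon_example_v2.so")
                << "\n";
      return 0;
    }
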
routines.
mysqldump in xml mode did not dump routines, events or
triggers.
This patch fixes the issue by correcting the if conditions
that disallowed dumping the above-mentioned objects in
xml mode, and adds the required code to enable dumping
them in xml format.
--FLUSH-LOG BREAKS CONSISTENCY
The transaction started by mysqldump gets committed
implicitly when --flush-logs is specified along with the
--single-transaction option, and hence can break
consistency.
This is because COM_REFRESH is executed in order
to flush logs, and starting from 5.5 this command
performs an implicit commit.
Fixed by making sure that COM_REFRESH is executed
before the transaction has started and not after it
(see the sketch after the note below).
Note : This patch triggers following behavioral
changes in mysqldump :
1) After this patch we no longer flush logs before
dumping each database when the --single-transaction
option is given, as was done before (in the
absence of the --lock-all-tables and --master-data
options).
2) Also, after this patch, we start acquiring
FTWRL before flushing logs in cases where only
--single-transaction and --flush-logs are given.
It becomes safe to use mysqldump with these two
options and without the --master-data parameter
for backups.
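
A minimal sketch of the corrected ordering using the libmysql C API (connection details are placeholders, and this is not mysqldump's actual code path): the log flush is issued before the consistent-snapshot transaction is opened, so its implicit commit can no longer break the snapshot.

    #include <mysql.h>
    #include <cstdio>

    int main()
    {
      MYSQL *conn = mysql_init(nullptr);
      if (!mysql_real_connect(conn, "localhost", "root", "", nullptr, 0,
                              nullptr, 0))
      {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
      }

      // 1) Flush logs first: starting from 5.5 the refresh performs an
      //    implicit commit, so it must not run inside the transaction.
      if (mysql_query(conn, "FLUSH LOGS"))
        fprintf(stderr, "flush failed: %s\n", mysql_error(conn));

      // 2) Only now open the transaction that gives --single-transaction
      //    its consistent view for the rest of the dump.
      if (mysql_query(conn,
            "SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ") ||
          mysql_query(conn,
            "START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */"))
        fprintf(stderr, "start transaction failed: %s\n", mysql_error(conn));

      /* ... dump the tables here ... */

      mysql_close(conn);
      return 0;
    }
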
readline.cc: In function char* batch_readline(LINE_BUFFER*):
readline.cc:60:9: error: out_length may be used uninitialized in this function
log.cc: In function int find_uniq_filename(char*):
log.cc:1857:8: error: number may be used uninitialized in this function
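
The usual fix for this class of warning, and presumably what the change does, is to give the variable a defined value at its declaration. A trivial illustrative sketch of the pattern (the function body is invented; only the initialization matters):

    #include <cstdio>

    // Initialize at the point of declaration so the compiler can prove the
    // variable has a defined value on every path through the function.
    static int find_number(bool found)
    {
      int number = 0;   // previously flagged as "may be used uninitialized"
      if (found)
        number = 42;
      return number;    // always defined now
    }

    int main()
    {
      printf("%d %d\n", find_number(false), find_number(true));
      return 0;
    }
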
OPTION SKIP-WRITE-BINLOG
System tables were not getting upgraded when
mysql_upgrade was run with the --skip-write-binlog
option (the same held for --write-binlog). Also, with
this option, the mysql_upgrade_info file was not
getting created after the upgrade.
mysql_upgrade makes use of the mysql client tool in
order to run upgrade scripts; while doing so it
passes some of the command line options (used to
start mysql_upgrade) directly to the mysql client.
The reason behind this bug is that some options,
like skip-write-binlog and upgrade-system-tables,
were being passed to the mysql tool along with other
options, and hence mysql execution failed due to the
presence of these invalid options.
Fixed this issue by filtering out the above-mentioned
options from the list of options that will be passed to
the mysql and mysqlcheck tools. However, since --write-binlog
is supported by mysqlcheck, this option is used
explicitly while running mysqlcheck (not part of this
patch; already there).
Checking the contents of the general log after the upgrade
is not doable via an mtr test, so a manual test was performed.
Added a test to verify the creation of mysql_upgrade_info.
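
A minimal sketch of the option filtering idea (the helper names and option list are illustrative, not the actual mysql_upgrade code): options that only mysql_upgrade understands are removed before the remaining arguments are handed to the spawned mysql / mysqlcheck tools.

    #include <iostream>
    #include <string>
    #include <vector>

    // Options understood by mysql_upgrade itself but rejected by the mysql
    // client it spawns; they must not be forwarded.  (Illustrative list.)
    static bool is_upgrade_only_option(const std::string &opt)
    {
      return opt == "--skip-write-binlog" || opt == "--write-binlog" ||
             opt == "--upgrade-system-tables";
    }

    // Build the argument list for the child tool from mysql_upgrade's own
    // command line, dropping the options the child would reject.
    static std::vector<std::string>
    filter_forwarded_options(const std::vector<std::string> &args)
    {
      std::vector<std::string> forwarded;
      for (const std::string &arg : args)
        if (!is_upgrade_only_option(arg))
          forwarded.push_back(arg);
      return forwarded;
    }

    int main()
    {
      std::vector<std::string> args = { "--user=root", "--skip-write-binlog",
                                        "--upgrade-system-tables",
                                        "--port=3306" };
      // Prints only --user=root and --port=3306.
      for (const std::string &arg : filter_forwarded_options(args))
        std::cout << arg << "\n";
      return 0;
    }
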
This patch corrects a defect whereby the --mysqld, --my-print-defaults,
and --plugin-ini options were required. These options are not required,
and the code has been fixed accordingly.
Don't do this for echo, instead:
1) Enable replacements also for assignment from backquoted SQL
2) Allow replace_regex to take a variable for the *entire* argument list
With this, the test can be amended, but only in its version in trunk
Simplified fix avoiding changes to mysys:
Use the MY_HOLD_ORIGINAL_MODES flag when calling my_copy();
this also stops it from attempting to chown() the file.
Yes, this behavior is a bit confusing.
The only case where this might change the behavior is if the destination
file exists, but since we also use MY_DONT_OVERWRITE_FILE, it would fail
in those cases anyway.
Modified the function do_get_error in mysqltest.cc to handle multiple variables passed.
Added a test case to mysqltest.test to verify the handling of multiple errors passed.
This patch corrects a defect whereby the bootstrap_server() method was
returning 0 instead of the error code generated. The code has been changed to
return the correct value returned from the bootstrap command.
This patch corrects a defect in the building of the DELETE commands for
disabling a plugin whereby only the original plugin data was deleted. If there
were other plugins, the delete did not remove the rows. The code has been
changed to remove all rows from the mysql.plugin table that were inserted when
the plugin was loaded. The test has also been changed to correctly identify if
all rows have been deleted.
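
A minimal sketch of the corrected DELETE (the helper name is hypothetical, not the actual mysql_plugin source): deleting by the shared-library name removes every row that library registered in mysql.plugin, not just the row for the originally named plugin.

    #include <iostream>
    #include <string>

    // Remove every mysql.plugin row registered from the given shared
    // library, rather than only the first plugin name loaded from it.
    static std::string build_disable_statement(const std::string &library)
    {
      return "DELETE FROM mysql.plugin WHERE dl = '" + library + "';";
    }

    int main()
    {
      // One statement covers all plugins the library registered.
      std::cout << build_disable_statement("libdaemon_example.so") << "\n";
      return 0;
    }
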
run_query_stmt() might use disable_xxx vars after calling handle_no_error(),
but handle_no_error() has already reverted any ONCE settings.
The fix is to take revert_properties() out of handle_no_error().
SYSTEM VARIABLE NAME SQL_MAX_JOIN_SI
BACKGROUND:
ER_TOO_BIG_SELECT refers to SQL_MAX_JOIN_SIZE, which is the
old name for MAX_JOIN_SIZE.
FIX:
Support for the old name SQL_MAX_JOIN_SIZE is removed in MySQL 5.6;
the variable is now named MAX_JOIN_SIZE. So the errmsg.txt
and mysql.cc files have been updated, and the corresponding result
files have also been updated.
Followup fixes for --ps-protocol:
1) Incorrectly set ps_protocol variable instead of ps_protocol_enabled
2) disable_result_log was tested after calling handle_no_error()
which would revert a temporary setting.
Call handle_error() instead of die() when evaluating these.
Must remember the "current command" with a link to the errors to ignore.
Added test cases to mysqltest.test.
Added a keyword ONCE to add to those commands.
Added some internal tables to keep track of which properties are
temporarily overridden (a sketch follows below).
Added tests in mysqltest.test.
Updates to other tests will be done later.
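
A minimal sketch of one way such bookkeeping can work (OnceOverrides and the property names are hypothetical, not the mysqltest internals): the previous value of each property changed with ONCE is remembered and put back by a revert step after the single command has run.

    #include <cstdio>
    #include <map>
    #include <string>

    // Hypothetical bookkeeping for ONCE overrides: remembers the original
    // value of every property that was changed for one command.
    struct OnceOverrides
    {
      std::map<std::string, bool> saved;   // property name -> previous value

      void set_once(std::map<std::string, bool> &props,
                    const std::string &name, bool value)
      {
        if (!saved.count(name))            // keep only the first original value
          saved[name] = props[name];
        props[name] = value;
      }

      void revert(std::map<std::string, bool> &props)  // cf. revert_properties()
      {
        for (const auto &entry : saved)
          props[entry.first] = entry.second;
        saved.clear();
      }
    };

    int main()
    {
      std::map<std::string, bool> props = { { "enable_warnings", true } };
      OnceOverrides once;

      once.set_once(props, "enable_warnings", false);  // --disable_warnings ONCE
      printf("during command: %d\n", (int) props["enable_warnings"]);  // 0

      once.revert(props);                              // after the command ran
      printf("after revert:   %d\n", (int) props["enable_warnings"]);  // 1
      return 0;
    }
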
This patch corrects an error encountered in PB where Windows machines
built in release mode have an extraneous parameter added in place
of the --console option. This is caused by the insertion of '(null)'
instead of an empty string. In non-debug mode, the string is now
explicitly set to an empty string.
Patch also fixes a result mismatch on Windows machines.
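
A minimal, self-contained illustration of the underlying issue (not the MySQL test-framework code itself): formatting a NULL string pointer with "%s" produces the literal text "(null)" on many C runtimes, so a release build has to substitute an explicit empty string.

    #include <cstdio>

    int main()
    {
      const char *console_opt = nullptr;   // option not set in release mode

      char cmdline[256];
      // Guard against the NULL pointer; without the guard many printf
      // implementations would splice the text "(null)" into the command line.
      snprintf(cmdline, sizeof(cmdline), "mysqld %s",
               console_opt ? console_opt : "");
      printf("[%s]\n", cmdline);   // prints "[mysqld ]", not "[mysqld (null)]"
      return 0;
    }
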
This patch corrects an unsafe string concatenation in a Windows-specific
code segment. The symptom was that, under certain conditions, such as
specifying the location of my-print-defaults and the basedir on a release
build, the client would exit without printing any messages.
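
A minimal sketch of a bounded alternative to unchecked concatenation (the paths are illustrative, not the actual client code): snprintf with an explicit buffer size replaces strcpy/strcat, and truncation is detected from the return value instead of silently overrunning the buffer.

    #include <cstdio>

    int main()
    {
      const char basedir[] = "C:\\Program Files\\MySQL\\MySQL Server 5.5";
      const char tool[]    = "bin\\my_print_defaults.exe";

      char path[260];   // MAX_PATH-sized buffer on Windows
      int n = snprintf(path, sizeof(path), "%s\\%s", basedir, tool);
      if (n < 0 || n >= (int) sizeof(path))
        fprintf(stderr, "path too long, refusing to build the command\n");
      else
        printf("%s\n", path);
      return 0;
    }
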