mirror of https://github.com/MariaDB/server.git synced 2025-07-02 14:22:51 +03:00
Commit Graph

29518 Commits

Author SHA1 Message Date
72c642c097 Fix mysql_plugin test to handle version XXa 2012-06-29 14:19:31 +02:00
31a9208bd0 Bug #12998841: libmysql divulges plaintext password upon request in 5.5
1. The clear text password client plugin is disabled by default.
2. Added an environment variable LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN that,
when set to something starting with '1', 'Y' or 'y', enables the clear
text plugin for all connections.
3. Added a new mysql_options() option, MYSQL_ENABLE_CLEARTEXT_PLUGIN,
that takes a my_bool argument. When the value of the argument is non-zero
the clear text plugin is enabled for this connection only (see the
sketch below).
4. Added an enable-cleartext-plugin config file option that takes a
numeric argument. If the value of the argument is non-zero the clear
text plugin is enabled for the connection.
5. Added a boolean command line option --enable_cleartext_plugin to
mysql, mysqlslap and mysqladmin. When specified it calls mysql_options()
with the effect of #3.
6. Added a new CLEARTEXT option to the connect command in mysqltest.
When specified it enables the cleartext plugin for that connection.
7. Added test cases and updated existing ones that need the clear text
plugin.
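A minimal client-side sketch of point 3, assuming a libmysqlclient build
that ships MYSQL_ENABLE_CLEARTEXT_PLUGIN (host and credentials are
placeholders):

    #include <mysql.h>
    #include <cstdio>

    int main() {
      MYSQL conn;
      mysql_init(&conn);

      /* Enable the clear text plugin for this connection only (point 3). */
      my_bool enable_cleartext = 1;
      mysql_options(&conn, MYSQL_ENABLE_CLEARTEXT_PLUGIN,
                    (char *) &enable_cleartext);

      /* Placeholder credentials; the account would authenticate through a
         plugin that needs the password in clear text (e.g. PAM). */
      if (!mysql_real_connect(&conn, "localhost", "user", "password",
                              NULL, 0, NULL, 0)) {
        std::fprintf(stderr, "connect failed: %s\n", mysql_error(&conn));
        return 1;
      }
      mysql_close(&conn);
      return 0;
    }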
2012-07-05 09:55:20 +03:00
e6f0b97b50 Bug #11753490: 44939: sql dumps containing broad views fail when
executing

The problem is that mysql lacks information about the objects a view
depends on, so it can't dump views and tables in the proper order.
Thus, while dumping the tables, it needs to create "stand-in" MyISAM
tables for each view, which it later drops and replaces with the actual
view definition.
But since views can have many more columns than an actual table, creating
these stand-in tables may be problematic.

There's no way to portably find out how many columns a MyISAM table
can have. It's a complicated formula depending on internal server constants.
Thus we can't have a reliable error check without repeating the logic and
the formula inside mysqldump.

1. Changed the type of the columns of the stand-in tables mysqldump
makes to satisfy view dependencies from the original type to smallint
to save on row space (see the sketch below).

2. Added a warning on mysqldump's standard error about possible
problems replaying the dump file if a view has more than 1000 columns.

3. Added a test case.
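A sketch of the statement sequence a dump replay now performs for a view;
the view, table and column names are hypothetical, and the statements
mysqldump actually emits differ in detail:

    #include <mysql.h>

    int main() {
      MYSQL conn;
      mysql_init(&conn);
      /* Placeholder credentials. */
      if (!mysql_real_connect(&conn, "localhost", "user", "password",
                              "test", 0, NULL, 0)) return 1;

      /* 1. Stand-in table, emitted early so later objects can reference
            the view; after this fix every column is a smallint. */
      mysql_query(&conn, "CREATE TABLE v1 (col1 smallint, col2 smallint)");

      /* ... base tables are created and loaded here ... */
      mysql_query(&conn, "CREATE TABLE t1 (a int, b varchar(10))");

      /* 2. At the end, the stand-in is dropped and replaced with the
            actual view definition. */
      mysql_query(&conn, "DROP TABLE v1");
      mysql_query(&conn,
                  "CREATE VIEW v1 AS SELECT a AS col1, b AS col2 FROM t1");

      mysql_close(&conn);
      return 0;
    }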
2012-07-04 17:48:58 +03:00
623f930a5d upmerge from mysql-5.1 to mysql-5.5 2012-07-03 18:08:31 +05:30
9f4f06996b Bug#13417440 : 63340: ARCHIVE FILE IO NOT INSTRUMENTED
Details:
 - Modified the test case to make sure it runs for all storage engines,
   not only for the Archive storage engine.
2012-07-03 09:55:51 +05:30
47c1ce35e2 manual merge (WL6219) 2012-06-29 14:12:21 +04:00
ba966cff98 Backport of the deprecation warning from WL#6219: "Deprecate and remove YEAR(2) type"
Print the warning (note):

 YEAR(x) is deprecated and will be removed in a future release. Please use YEAR(4) instead

on "CREATE TABLE ... YEAR(x)" or "ALTER TABLE MODIFY ... YEAR(x)", where x != 4
2012-06-29 12:55:45 +04:00
c2d38c306f BUG #13946716: FEDERATED_PLUGIN TEST CASE FAIL ON 64BIT ARCHITECTURES 2012-06-14 17:07:49 +05:30
46ca66b9f8 BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Upmerge from mysql-5.1 -> mysql-5.5
2012-06-12 12:59:56 +05:30
1211b5d50b BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
Replication breaks if the event length exceeds the size of the master
Dump thread's max_allowed_packet.

The failure occurs because the event length, with the max_event_header
length added, exceeds the max_allowed_packet of the Dump thread.
This causes the Dump thread to break replication and throw an error.

That can happen e.g. with row-based replication in an Update_rows event.

Fix
====

The problem is fixed in 2 steps:

1.) The Dump thread's limit for reading an event is raised to the upper
    limit, i.e. the Dump thread reads whatever gets logged in the
    binary log.

2.) On the slave side we increase the max_allowed_packet for the
    slave's threads (IO/SQL) to 1GB.

    This is done using a new server option, slave_max_allowed_packet,
    which lets the DBA regulate the max_allowed_packet of the
    slave threads (IO/SQL) and facilitates the sending of
    large packets from the master to the slave.

    This lets the large packets be received by the slave and applied
    successfully (a usage sketch follows below).
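A usage sketch, assuming a 5.5 server that already carries the
slave_max_allowed_packet option (host and credentials are placeholders):

    #include <mysql.h>
    #include <cstdio>

    int main() {
      MYSQL conn;
      mysql_init(&conn);
      /* Connect to the slave as an administrative user (placeholders). */
      if (!mysql_real_connect(&conn, "slave-host", "admin", "password",
                              NULL, 0, NULL, 0)) return 1;

      /* Raise the replication threads' packet limit to the 1GB ceiling
         described above, independently of the global max_allowed_packet. */
      mysql_query(&conn, "SET GLOBAL slave_max_allowed_packet = 1073741824");
      mysql_query(&conn,
          "SHOW GLOBAL VARIABLES LIKE 'slave_max_allowed_packet'");

      MYSQL_RES *res = mysql_store_result(&conn);
      if (res) {
        MYSQL_ROW row = mysql_fetch_row(res);
        if (row) std::printf("%s = %s\n", row[0], row[1]);
        mysql_free_result(res);
      }
      mysql_close(&conn);
      return 0;
    }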
2012-06-12 12:59:13 +05:30
25383ff615 Bug#13417440 : 63340: ARCHIVE FILE IO NOT INSTRUMENTED
Followup patch: wrong result unless archive storage engine is available.
2012-06-05 12:27:20 +02:00
040a1fddbb Bug#13982017: ALTER TABLE RENAME ENDS UP WITH ERROR 1050 (42S01)
Fixed by backport of:
    ------------------------------------------------------------
    revno: 3402.50.156
    committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
    branch nick: mysql-trunk-test
    timestamp: Wed 2012-02-08 14:10:23 +0100
    message:
      Bug#13417754 ASSERT IN ROW_DROP_DATABASE_FOR_MYSQL DURING DROP SCHEMA
      
      This assert could be triggered if an InnoDB table was being moved
      to a different database using ALTER TABLE ... RENAME, while this
      database concurrently was being dropped by DROP DATABASE.
      
      The reason for the problem was that no metadata lock was taken
      on the target database by ALTER TABLE ... RENAME.
      DROP DATABASE was therefore not blocked and could remove
      the database while ALTER TABLE ... RENAME was executing. This
      could cause the assert in InnoDB to be triggered.
      
      This patch fixes the problem by taking an IX metadata lock on
      the target database before ALTER TABLE ... RENAME starts
      moving a table to a different database.
      
      Note that this problem did not occur with RENAME TABLE which
      already takes the correct metadata locks.
      
      Also note that this patch slightly changes the behavior of
      ALTER TABLE ... RENAME. Before, the statement would abort and
      return an error if a lock on the target table name could not
      be taken immediately. With this patch, ALTER TABLE ... RENAME
      will instead block and wait until the lock can be taken 
      (or until we get a lock timeout). This also means that it is
      possible to get ER_LOCK_DEADLOCK errors in this situation
      since we allow ALTER TABLE ... RENAME to wait and not just
      abort immediately.
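The racing statements, reduced to a two-session sketch (database and
table names are hypothetical; run sequentially this does not actually
race, it only shows the competing pair):

    #include <mysql.h>

    int main() {
      MYSQL s1, s2;
      mysql_init(&s1);
      mysql_init(&s2);
      /* Two sessions against the same server (placeholder credentials). */
      if (!mysql_real_connect(&s1, "localhost", "user", "password",
                              NULL, 0, NULL, 0)) return 1;
      if (!mysql_real_connect(&s2, "localhost", "user", "password",
                              NULL, 0, NULL, 0)) return 1;

      /* Session 1: moves a table into db2. With this patch it first
         takes an IX metadata lock on db2. */
      mysql_query(&s1, "ALTER TABLE db1.t1 RENAME db2.t1");

      /* Session 2: concurrently drops db2. Before the patch this could
         remove db2 mid-rename and trigger the InnoDB assert; now it
         blocks on the metadata lock until the rename finishes. */
      mysql_query(&s2, "DROP DATABASE db2");

      mysql_close(&s1);
      mysql_close(&s2);
      return 0;
    }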
2012-06-01 09:31:24 +02:00
6c03d09e2e BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
Replication breaks if the event length exceeds the size of the master
Dump thread's max_allowed_packet.

The failure occurs because the event length, with the max_event_header
length added, exceeds the max_allowed_packet of the Dump thread.
This causes the Dump thread to break replication and throw an error.

That can happen e.g. with row-based replication in an Update_rows event.

Fix
====

The problem is fixed in 2 steps:

1.) The Dump thread's limit for reading an event is raised to the upper
    limit, i.e. the Dump thread reads whatever gets logged in the
    binary log.

2.) On the slave side we increase the max_allowed_packet for the
    slave's threads (IO/SQL) to 1GB.

    This is done using a new server option, slave_max_allowed_packet,
    which lets the DBA regulate the max_allowed_packet of the
    slave threads (IO/SQL) and facilitates the sending of
    large packets from the master to the slave.

    This lets the large packets be received by the slave and applied
    successfully.
2012-05-30 10:10:52 +05:30
1c1b16e304 PB2 Failure Fix: Disabled the test case in the correct manner. 2012-05-25 15:44:27 +05:30
c678b1a994 Bug#13417440 : 63340: ARCHIVE FILE IO NOT INSTRUMENTED
Details:
 - Archive storage engine file accesses were not instrumented and thus
   were not shown in PS tables.
      
Fix:
 - Added instrumentation code by using the PS APIs for I/O.
2012-05-24 23:00:32 +05:30
77fcf72cf4 WL#6311 Remove --safe-mode
Print deprecation warning if the --safe-mode command line option is
used.
2012-05-23 12:27:32 +02:00
7d3ae34e75 Improved the test performance_schema.func_file_io,
so that investigating test failures is easier.

Detect cases when @before_count / @after_count is NULL.
2012-05-23 10:21:35 +02:00
a7692bb521 After the fix for Bug #12752572, there is a PB2 failure. Fixing it by
updating the result file.  Because a multi-row insert now reserves the
auto-increment values beforehand, if any explicitly specified
auto-increment values are present, some of the reserved values are lost.
2012-05-23 10:12:52 +05:30
9aa79dc596 BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
SQL statements close to the size of max_allowed_packet produce binary
log events larger than max_allowed_packet.
              
The failure occurs because the event length is more than the total of
max_allowed_packet + max_event_header length. Since the event length
exceeds this size, the master Dump thread is unable to send the packet
on to the slave.

That can happen e.g. with row-based replication in an Update_rows event.

Fix
====

The problem was fixed by increasing the max_allowed_packet for the
slave's threads (IO/SQL) to 1GB.
This is done using a new server option which is used to
regulate the max_allowed_packet of the slave threads (IO/SQL).
This lets the large packets be received by the slave and applied
successfully.
2012-05-21 12:57:39 +05:30
8bb98d7535 BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
Improved random number filtering verification on
rpl_filter_tables_not_exist test.
2012-05-15 22:06:48 +01:00
09cb0649e5 BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
Automerge from mysql-5.1 into mysql-5.5.
2012-05-15 22:18:59 +01:00
375afcf1df Upmerge optional testsuite path 2012-05-15 09:19:58 +02:00
e7b735fb4b Added some extra optional path to test suites 2012-05-15 09:14:44 +02:00
33d9d40ccd Merging from mysql-5.1 to mysql-5.5. 2012-05-10 10:33:16 +05:30
b76a59f5a6 Bug #14007649 65111: INNODB SOMETIMES FAILS TO UPDATE ROWS INSERTED
BY A CONCURRENT TRANSACTION

The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
a table handler clone. InnoDB did not provide a clone operation:
ha_innobase::clone() was not there, and handler::clone() does not
take care of ha_innobase->prebuilt->select_lock_type. Because of
this, for one index we did a locking read while for the other
index we did a non-locking (consistent) read.
The patch introduces the ha_innobase::clone() member function,
implemented similarly to ha_myisam::clone(). It calls the
base class handler::clone() and then does any additional operations
required, setting ha_innobase->prebuilt->select_lock_type
correctly (see the sketch below).

rb://1060 approved by Marko
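The shape of the fix, reduced to a self-contained illustration; the real
handler/ha_innobase classes are far larger, and select_lock_type here
stands in for ha_innobase->prebuilt->select_lock_type:

    struct handler_sketch {
      virtual ~handler_sketch() {}
      /* Base clone: copies only the state the base class knows about. */
      virtual handler_sketch *clone() const = 0;
    };

    struct ha_innobase_sketch : handler_sketch {
      int select_lock_type;  /* stands in for prebuilt->select_lock_type */
      explicit ha_innobase_sketch(int lt) : select_lock_type(lt) {}

      handler_sketch *clone() const {
        /* Build the clone the generic way, then carry the lock type over
           so a locking read stays a locking read on the cloned handler. */
        ha_innobase_sketch *c = new ha_innobase_sketch(0);
        c->select_lock_type = select_lock_type;
        return c;
      }
    };

    int main() {
      ha_innobase_sketch h(1);        /* 1 = "locking read" here */
      handler_sketch *c = h.clone();  /* clone keeps the lock type */
      delete c;
      return 0;
    }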
2012-05-10 10:18:31 +05:30
d37a28c9b0 Merge from mysql-5.1.63-release 2012-05-08 07:19:14 +02:00
ad1e123f47 Merge 5.5.24 back into main 5.5.
This is a weave merge, but without any conflicts.
In 14 source files, the copyright year needed to be updated to 2012.
2012-05-07 22:20:42 +02:00
066dc9a281 Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM

Merging the fix from mysql-5.1 to mysql-5.5
2012-05-07 16:51:26 +05:30
14aa2c020e Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not include the dump statements for the general_log and
slow_log tables. That is because of the fix for Bug#26121. Hence, after
dropping the mysql database and applying the dump with logging enabled,
"'general_log' table not found" errors are logged into the
server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped the dumping of the
general_log and slow_log tables, because the data dump of those tables
requires taking LOCKS, which is not allowed for log tables.

Fix:
----
Instead of taking both the metadata and the data dump for those tables,
take only the metadata dump, which doesn't need LOCKS.
As part of fixing the issue we came up with the algorithm below.
Design before the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and
   slow_log.
4) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, the
"CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS"
(a sketch of the resulting statements follows below).
This is because we skipped "DROP TABLE" for those tables:
"DROP TABLE" fails for these tables if logging is enabled, and the
customer applies the dump with logging enabled, so if the dump had
"DROP TABLE" it would fail. Hence, the "DROP TABLE" stmts for those
tables were removed.

After the fix we could still observe "Table 'mysql.general_log'
doesn't exist" errors initially; that is because in the customer
scenario they drop the mysql database with logging enabled, so
those errors are expected. Once we apply the dump, which was taken
before the "drop database mysql", the errors are gone.
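A sketch of what the dump now contains for the log tables, replayed
through the C API; the column lists are abbreviated and only
illustrative, not the actual log table definitions:

    #include <mysql.h>

    int main() {
      MYSQL conn;
      mysql_init(&conn);
      if (!mysql_real_connect(&conn, "localhost", "user", "password",
                              NULL, 0, NULL, 0)) return 1;

      /* Metadata only: no preceding DROP TABLE (it would fail with
         logging enabled), and IF NOT EXISTS so replay also succeeds
         when the table is still there. */
      mysql_query(&conn,
          "CREATE TABLE IF NOT EXISTS mysql.general_log ("
          "  event_time TIMESTAMP NOT NULL,"
          "  argument MEDIUMTEXT NOT NULL) ENGINE=CSV");
      mysql_query(&conn,
          "CREATE TABLE IF NOT EXISTS mysql.slow_log ("
          "  start_time TIMESTAMP NOT NULL,"
          "  sql_text MEDIUMTEXT NOT NULL) ENGINE=CSV");

      mysql_close(&conn);
      return 0;
    }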
2012-05-07 16:46:44 +05:30
41cdad9868 Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not include the dump statements for the general_log and
slow_log tables. That is because of the fix for Bug#26121. Hence, after
dropping the mysql database and applying the dump with logging enabled,
"'general_log' table not found" errors are logged into the
server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped the dumping of the
general_log and slow_log tables, because the data dump of those tables
requires taking LOCKS, which is not allowed for log tables.

Fix:
----
Instead of taking both the metadata and the data dump for those tables,
take only the metadata dump, which doesn't need LOCKS.
As part of fixing the issue we came up with the algorithm below.
Design before the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and
   slow_log.
4) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, the
"CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
This is because we skipped "DROP TABLE" for those tables:
"DROP TABLE" fails for these tables if logging is enabled, and the
customer applies the dump with logging enabled, so if the dump had
"DROP TABLE" it would fail. Hence, the "DROP TABLE" stmts for those
tables were removed.

After the fix we could still observe "Table 'mysql.general_log'
doesn't exist" errors initially; that is because in the customer
scenario they drop the mysql database with logging enabled, so
those errors are expected. Once we apply the dump, which was taken
before the "drop database mysql", the errors are gone.
2012-05-04 18:33:34 +05:30
f619c8ce83 In Perl, to break out of a foreach loop we need to use
the keyword "last" and not "break".  Fixing the failing
test case.
2012-05-04 12:29:49 +05:30
5f4c6942bf Revert two follow-ups for Bug#12762885:
- alexander.nozdrin@oracle.com-20120427151428-7llk1mlwx8xmbx0t
  - alexander.nozdrin@oracle.com-20120427144227-kltwiuu8snds4j3l.
2012-04-27 21:07:53 +04:00
3885dc55dd Proper follow-up for Bug#12762885 - 61713: MYSQL WILL NOT BIND TO "LOCALHOST"
IF LOCALHOST IS BOTH IPV4/IPV6 ENABLED.

The original patch removed the default value of the bind-address option,
so the default value became NULL. By coincidence, NULL resolves
to 0.0.0.0 and ::, and since the server chooses the first IPv4 address,
0.0.0.0 is chosen. So, there was no change in the behaviour.

This patch restores the default value of the bind-address option to "0.0.0.0".
2012-04-27 19:14:28 +04:00
4b917744a2 BUG#13812374 - RPL.RPL_REPORT_PORT FAILS OCCASIONALLY ON PB2
Problem - The failure on PB2 is possibly due to the port number still
          being in use even after the server restarts, which the
          restart does not account for.
      
Fix - The problem is fixed by starting the servers forcefully using the
      option file and by passing the parameters for the server restart
      correctly.
2012-04-26 19:34:03 +05:30
9927a07bc1 Allow Windows absolute paths in N:\ format for the --vardir option 2012-04-23 12:52:14 +02:00
a891de4221 merge from 5.1 repo 2012-04-23 12:05:05 +03:00
d5925c2044 BUG#11754117
rpl_auto_increment_bug45679.test is refined because Bug#11749859 (39934) is not fixed in 5.1.
2012-04-23 11:51:19 +03:00
ec2caa37ba merge bug11754117-45670 fixes from 5.1: fixing result files. 2012-04-21 14:19:06 +03:00
bf66e3ab63 merge bug11754117-45670 fixes from 5.1. 2012-04-21 13:24:39 +03:00
e748999eb5 BUG#12427262 : 60961: SHOW TABLES VERY SLOW WHEN NOT IN SYSTEM DISK CACHE
Details:
 - The test case bug12427262.test was failing on Windows because
   '/' was not recognized on Windows, and it was used in the
   LIKE clause of the query being run in this test case.

Fix:
 - Windows needs '\\\\' as the path separator in MySQL. Since there
   is no clean way to keep a single query with two different
   syntaxes based on platform, the query was modified to make sure
   it runs correctly on both platforms.
2012-04-21 05:23:09 +05:30
8ac39aa8e0 BUG#13979418: SHOW BINLOG EVENTS MAY CRASH THE SERVER
Merge from 5.1 into 5.5.

Conflicts:
 * sql/log.h
 * sql/sql_repl.cc
2012-04-20 23:35:53 +01:00
ca33df2094 BUG#13979418: SHOW BINLOG EVENTS MAY CRASH THE SERVER
The function mysql_show_binlog_events has a local stack variable
'LOG_INFO linfo;', which is assigned to thd->current_linfo; however,
this variable goes out of scope and is destroyed before
thd->current_linfo is cleared.

The problem is solved by moving 'LOG_INFO linfo;' to function scope
(a simplified sketch follows below).
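A simplified sketch of the lifetime bug and the fix; the real LOG_INFO
and THD carry much more state:

    struct LOG_INFO_sketch { long pos; };
    struct THD_sketch { LOG_INFO_sketch *current_linfo; };

    void show_binlog_events_broken(THD_sketch *thd) {
      {
        LOG_INFO_sketch linfo = { 0 };   /* block scope */
        thd->current_linfo = &linfo;
      }  /* linfo destroyed here, but thd->current_linfo still points at
            it: anything inspecting thd->current_linfo afterwards reads
            dead stack memory. */
    }

    void show_binlog_events_fixed(THD_sketch *thd) {
      LOG_INFO_sketch linfo = { 0 };     /* function scope */
      thd->current_linfo = &linfo;
      /* ... the whole function body runs with a valid pointer ... */
      thd->current_linfo = 0;            /* cleared before linfo dies */
    }

    int main() {
      THD_sketch thd = { 0 };
      show_binlog_events_fixed(&thd);
      return 0;
    }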
2012-04-20 22:25:59 +01:00
f3509d1d67 BUG#11754117 incorrect logging of INSERT into auto-increment
BUG#11761686 insert_id event is not filtered.
  
Two issues are covered.
  
An INSERT into an autoincrement field which is not the first part of a
composed primary key is unsafe by autoincrement logging design. The case
is specific to the MyISAM engine because InnoDB does not allow such a
table definition.
  
However, no warning was issued and no row-format logging was done in
MIXED mode; that is fixed.
  
Int-, Rand- and User-var log-events were not filtered along with their
parent query, which made it possible for them to screw up the execution
context of the following query.
  
Fixed by deferring their execution until the parent query.

******
Bug#11754117 

Post review fixes.
2012-04-20 19:41:20 +03:00
2786a6e232 BUG#12427262 : 60961: SHOW TABLES VERY SLOW WHEN NOT IN SYSTEM DISK CACHE
Details:
- Merge : 5.1 -> 5.5
- Added a new test case which was not added in 5.1 because PS was
  not there in 5.1.
2012-04-19 15:59:46 +05:30
d612986b36 Backport 5.5=>5.1 Patch for Bug#13805127:
Stored program cache produces wrong result in same THD.
2012-04-18 13:14:05 +02:00
46c51c40e1 Bug #12902967 Creating self referencing fk on same index unhandled,
confusing error. Updated the test script to work properly on the
Windows platform.
2012-04-18 15:16:11 +05:30
0eea06c5d0 WL#6236: Allow SHOW MASTER LOGS and SHOW BINARY LOGS with REPLICATION CLIENT
Merge from 5.1 into 5.5.
2012-04-18 10:12:19 +01:00
a9a7e6ea24 WL#6236: Allow SHOW MASTER LOGS and SHOW BINARY LOGS with REPLICATION CLIENT
Currently SHOW MASTER LOGS and SHOW BINARY LOGS require the SUPER
privilege. Monitoring tools (such as MEM) often want to check this 
output - for instance MEM generates the SUM of the sizes of the logs 
reported here, and puts that in the Replication overview within the MEM
Dashboard.
However, because of the SUPER requirement, these tools often have an 
account that holds open the connection whilst monitoring, and can lock
out administrators when the server gets overloaded and reaches
max_connections - there is already another SUPER privileged account
connected, the "monitor". 

As SHOW MASTER STATUS and all other replication-related statements
work with either REPLICATION CLIENT or SUPER privileges, this worklog
makes SHOW MASTER LOGS and SHOW BINARY LOGS consistent with them,
allowing both of these commands with either SUPER or
REPLICATION CLIENT.
This means monitoring tools no longer require the SUPER privilege,
which is safer in overloaded situations as well as more secure, since
lighter privileges can be given to the users of such tools or scripts
(see the sketch below).
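A sketch of the lighter privilege setup this enables; the account name
and passwords are hypothetical:

    #include <mysql.h>

    int main() {
      MYSQL conn;
      mysql_init(&conn);
      /* Connect as an administrator (placeholder credentials). */
      if (!mysql_real_connect(&conn, "localhost", "root", "password",
                              NULL, 0, NULL, 0)) return 1;

      /* Give the monitoring account REPLICATION CLIENT instead of SUPER. */
      mysql_query(&conn,
          "GRANT REPLICATION CLIENT ON *.* TO 'monitor'@'%' "
          "IDENTIFIED BY 'monitor_pw'");

      /* A session authenticated as 'monitor' may now run, without SUPER:
           SHOW BINARY LOGS;
           SHOW MASTER LOGS;
           SHOW MASTER STATUS;  (already allowed before this worklog) */

      mysql_close(&conn);
      return 0;
    }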
2012-04-18 10:08:01 +01:00
2479f3cb7b Merge from 5.1 to 5.5 2012-04-18 11:34:36 +05:30
81058259c7 Bug#12713907:STRANGE OPTIMIZE & WRONG RESULT UNDER
ORDER BY COUNT(*) LIMIT.

PROBLEM:
With respect to the problem in the bug description, we
exhibit different behaviors for the two tables
presented because InnoDB statistics (rec_per_key
in this case) are updated for the first table
and not for the second one. As a result the
query plan gets changed in test_if_skip_sort_order
to use an 'index' scan. Hence the difference in the
explain output. (NOTE: We can reproduce the problem
with the first table by reducing the number of tuples
and changing the table structure.)

The varied output for the query on the second table
results from this query plan change.
When a query plan is changed to use an 'index' scan,
after the call to test_if_skip_sort_order we set
keyread to TRUE immediately. If for some reason
we drop this index scan for a filesort later on,
we fetch only the keys, not the entire tuple.
As a result we would see junk values in the result set.

Following is the code flow:

Call test_if_skip_sort_order
-Choose an index to give sorted output
-If this is a covering index, set_keyread to TRUE
-Set the scan to INDEX scan

Call test_if_skip_sort_order a second time
-Index is not chosen (note that we do not pass the
actual limit value the second time. Hence we do not choose
an index scan the second time, which in itself is a bug fixed
in 5.6 with WL#5558)
-goto filesort

Call filesort
-Create a quick range on a different index
-Since keyread is set to TRUE, we fetch only the columns of
the index
-As a result the required columns are not fetched

FIX:
Remove the call to set_keyread(TRUE) from
test_if_skip_sort_order. The access function, which is
'join_read_first' or 'join_read_last', calls set_keyread anyway.
2012-04-18 11:25:01 +05:30