mirror of https://github.com/MariaDB/server.git synced 2025-07-04 01:23:45 +03:00
Commit Graph

32154 Commits

Author SHA1 Message Date
6d8e329c9a Fixed bug LP:973039 - Assertion `share->in_trans == 0' failed in maria_close on DROP TABLE under LOCK
- 5.5 was missing calls to ha_extra(HA_PREPARE_FOR_DROP | HA_PREPARE_FOR_RENAME); these were lost in the 5.3 -> 5.5 merge.


sql/sql_admin.cc:
  Updated arguments for close_all_tables_for_name
sql/sql_base.h:
  Updated arguments for close_all_tables_for_name
sql/sql_partition.cc:
  Updated arguments for close_all_tables_for_name
sql/sql_table.cc:
  Updated arguments for close_all_tables_for_name
  Removed test of kill, as we have already called 'ha_extra(HA_PREPARE_FOR_DROP)' and the table may be inconsistent.
sql/sql_trigger.cc:
  Updated arguments for close_all_tables_for_name
sql/sql_truncate.cc:
  For truncate that is done with drop + recreate, signal that the table will be dropped.
2012-05-16 18:44:17 +03:00
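
A minimal, self-contained sketch of the idea behind the fix above, using stand-in types rather than the real handler and close_all_tables_for_name code: the handler must be told that a DROP (or RENAME) is coming before its table is closed, so it can skip consistency checks such as the share->in_trans assertion.

  // Hypothetical stand-ins; not the actual MariaDB classes or signatures.
  #include <iostream>

  enum HaExtraSketch { PREPARE_FOR_DROP, PREPARE_FOR_RENAME };

  class HandlerSketch {
  public:
    void ha_extra(HaExtraSketch op) {
      // Tells the engine the table is going away, so it can relax
      // close-time consistency checks.
      prepared_for_drop = (op == PREPARE_FOR_DROP);
    }
    void close() const {
      std::cout << (prepared_for_drop ? "closing table marked for drop\n"
                                      : "closing table normally\n");
    }
  private:
    bool prepared_for_drop = false;
  };

  // Models close_all_tables_for_name(): signal the pending DROP to each open
  // handler before closing it -- the call that was lost in the 5.3 -> 5.5 merge.
  void close_all_tables_for_name_sketch(HandlerSketch &h, bool for_drop) {
    if (for_drop)
      h.ha_extra(PREPARE_FOR_DROP);
    h.close();
  }

  int main() {
    HandlerSketch h;
    close_all_tables_for_name_sketch(h, /*for_drop=*/true);
  }
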
dfbd777fd8 MWL#182: SHOW EXPLAIN: Merge 5.3->5.5 2012-05-16 19:20:00 +04:00
533e1d2845 MDEV-273: SHOW EXPLAIN: server crashes in JOIN::print_explain on a query with impossible WHERE
- It turns out there is a case where the join is degenerate, but
  join->table_count!=0 && join->tables_list!=NULL. We need to also check
  whether join->zero_result_cause!=NULL.

- There is a slight problem: The code sets
    zero_result_cause= "no matching row in const table"
  when NOT running EXPLAIN. The result is that SHOW EXPLAIN will show this line while 
  regular EXPLAIN will not.
2012-05-16 12:49:22 +04:00
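
A hedged, self-contained sketch of the extra guard the MDEV-273 commit above describes, with simplified stand-in fields instead of the real JOIN structure: the join must be treated as degenerate when zero_result_cause is set, not only when table_count is 0 or tables_list is NULL.

  #include <cstdio>

  struct JoinSketch {
    unsigned table_count;          // 0 for some degenerate joins
    const void *tables_list;       // NULL for others
    const char *zero_result_cause; // e.g. "no matching row in const table"
  };

  // SHOW EXPLAIN should take the degenerate code path if any of the three
  // conditions holds.
  bool is_degenerate(const JoinSketch &j) {
    return j.table_count == 0 || j.tables_list == nullptr ||
           j.zero_result_cause != nullptr;
  }

  int main() {
    JoinSketch j = {2, &j, "no matching row in const table"};
    std::printf("degenerate: %s\n", is_degenerate(j) ? "yes" : "no");
  }
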
5d8b38df4c BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
Improved random number filtering verification on
rpl_filter_tables_not_exist test.
2012-05-15 22:06:48 +01:00
c17bace4f0 Added --continue-on-error to mysqltest and mysql-test-run
This will continue the test case even if there was an error,
which makes it easier to run a test that contains many sub-tests against one engine.

(originally by Monty)
2012-05-15 19:35:57 +02:00
382e81ca84 MDEV-270: SHOW EXPLAIN: server crashes in JOIN::print_explain on a query with
select tables optimized away
- Take into account that for some degenerate joins the code sets
  "join->tables_list=0" instead of "join->table_count=0".
2012-05-15 15:56:50 +04:00
f0e93ac2bf BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
Automerge from mysql-5.1 into mysql-5.5.
2012-05-15 22:18:59 +01:00
0e66663159 Upmerge optional testsuite path 2012-05-15 09:19:58 +02:00
e72278fd42 Added some extra optional path to test suites 2012-05-15 09:14:44 +02:00
3d37b67b2b Fix for LP bug#998516
If we did nothing to resolve a unique table conflict, we should not retry (it led to an infinite loop).
Now we retry (recheck) the unique table check only if we materialized a table.
2012-05-15 08:31:07 +03:00
b3841ae9e4 MDEV-267: SHOW EXPLAIN: Server crashes in JOIN::print_explain on most of queries
- Make SHOW EXPLAIN code handle degenerate subquery cases (No tables, impossible where, etc)
2012-05-14 22:39:00 +04:00
6d41fa0d54 BUG#998236: Assertion failure or valgrind errors at best_access_path ...
- Let fix_semijoin_strategies_for_picked_join_order() set 
  POSITION::prefix_record_count for POSITION records that it copies from
  SJ_MATERIALIZATION_INFO::tables. 
  (These records do not have prefix_record_count set, because they are optimized
   as joins-inside-semijoin-nests, without full advance_sj_state() processing).
2012-05-13 13:15:17 +04:00
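
A rough illustration of the BUG#998236 fix above, using stand-in structures rather than the optimizer's POSITION and SJ_MATERIALIZATION_INFO types: when the pre-optimized positions of a materialized semi-join nest are copied into the join order, prefix_record_count is filled in as the running product of the record counts.

  #include <cstdio>
  #include <vector>

  struct PositionSketch {
    double records_read;        // rows this table contributes
    double prefix_record_count; // rows produced by the join prefix before it
  };

  void copy_sj_nest_positions(std::vector<PositionSketch> &join_order,
                              const std::vector<double> &nest_record_counts,
                              double prefix_before_nest) {
    double prefix = prefix_before_nest;
    for (double recs : nest_record_counts) {
      join_order.push_back({recs, prefix}); // the field the fix sets explicitly
      prefix *= recs;
    }
  }

  int main() {
    std::vector<PositionSketch> order;
    copy_sj_nest_positions(order, {10.0, 3.0}, 100.0);
    for (const PositionSketch &p : order)
      std::printf("records=%g prefix=%g\n", p.records_read, p.prefix_record_count);
  }
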
4dff59a31b Merge 5.2->5.3 2012-05-12 12:27:26 +04:00
e1b6e1b899 Merge 5.2->5.3 2012-05-12 12:12:35 +04:00
97ae1682f1 BUG#997747: Assertion `join->best_read < ((double)1.79..5e+308L)' failed
in greedy_search with LEFT JOINs and unique keys
- Backport the fix for BUG#806524 from MariaDB 5.3
2012-05-12 11:53:14 +04:00
6bce336624 MDEV-240: SHOW EXPLAIN: Assertion `this->optimized == 2' failed
- Fix the bug: SHOW EXPLAIN may hit a case where a join is partially 
  optimized.
- Change JOIN::optimized to use enum instead of numeric constants
2012-05-11 18:13:06 +04:00
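
A small sketch of the second part of the MDEV-240 change above (replacing numeric constants for JOIN::optimized with an enum); the enumerator names are illustrative, not the server's actual ones.

  #include <cstdio>

  enum class JoinOptState { NotOptimized, InProgress, Done };

  struct JoinSketch {
    JoinOptState optimized = JoinOptState::NotOptimized;
  };

  // SHOW EXPLAIN output can only be produced for a fully optimized join;
  // a partially optimized one must be rejected instead of asserted on.
  bool can_print_explain(const JoinSketch &j) {
    return j.optimized == JoinOptState::Done;
  }

  int main() {
    JoinSketch j;
    j.optimized = JoinOptState::InProgress; // the partially-optimized case
    std::printf("can explain: %s\n", can_print_explain(j) ? "yes" : "no");
  }
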
e10fecc02f Merge 5.2->5.3 2012-05-11 11:40:23 +03:00
1185420da0 5.3 merge 2012-05-21 20:54:41 +02:00
431e042b5d c 2012-05-21 15:30:25 +02:00
f2cbc014d9 fix for LP bug#994392
The not_null_tables() of Item_func_not_all and Item_in_optimizer was inherited from
Item_func by mistake. It made the optimizer think that  subquery
predicates with ALL/ANY/IN were null-rejecting. This could trigger invalid
conversions of outer joins into inner joins.
2012-05-11 09:35:46 +03:00
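
A self-contained sketch of the idea behind the LP bug#994392 fix above, with simplified stand-in classes instead of the real Item hierarchy: the ALL/ANY/IN predicate items must report an empty not_null_tables() set instead of inheriting the Item_func behaviour, so the optimizer no longer treats them as null-rejecting.

  #include <cstdint>
  #include <cstdio>

  using table_map_sketch = std::uint64_t;

  struct ItemFuncSketch {
    table_map_sketch args_tables = 0;
    // Default assumption: the function rejects NULLs in its arguments' tables.
    virtual table_map_sketch not_null_tables() const { return args_tables; }
    virtual ~ItemFuncSketch() = default;
  };

  struct ItemInOptimizerSketch : ItemFuncSketch {
    // An ALL/ANY/IN predicate can be satisfied even when the outer column is
    // NULL, so it must not claim to be null-rejecting.
    table_map_sketch not_null_tables() const override { return 0; }
  };

  int main() {
    ItemInOptimizerSketch pred;
    pred.args_tables = 0x2; // pretend the argument uses table #2
    std::printf("null-rejected tables: %llu\n",
                (unsigned long long)pred.not_null_tables());
  }
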
6fae4447f0 # MDEV-239: Assertion `field_types == 0 ... ' failed in Protocol_text::store...
- Make all functions that produce parts of EXPLAIN output take 
  explain_flags as parameter, instead of looking into thd->lex->describe
2012-05-10 15:13:57 +04:00
58b9164f04 MDEV-238: SHOW EXPLAIN: Server crashes in JOIN::print_explain with FROM subquery and GROUP BY
- Support SHOW EXPLAIN for selects that have "Using temporary; Using filesort".
2012-05-10 13:43:48 +04:00
326b40c9c8 Merging from mysql-5.1 to mysql-5.5. 2012-05-10 10:33:16 +05:30
391ea219c2 Bug #14007649 65111: INNODB SOMETIMES FAILS TO UPDATE ROWS INSERTED
BY A CONCURRENT TRANSACTION

The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
a table handler clone. InnoDB did not provide a clone operation:
ha_innobase::clone() did not exist, and handler::clone() does not
take care of ha_innobase->prebuilt->select_lock_type. Because of
this, for one index we did a locking read while for the other index
we did a non-locking (consistent) read.
The patch introduces the ha_innobase::clone() member function.
It is implemented similarly to ha_myisam::clone(): it calls the
base class handler::clone() and then does any additional operation
required, in particular setting ha_innobase->prebuilt->select_lock_type
correctly.

rb://1060 approved by Marko
2012-05-10 10:18:31 +05:30
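
A minimal stand-in sketch of the clone pattern the commit above describes (not the real storage-engine classes): the derived clone() obtains a freshly opened handler, as handler::clone() would, and then copies the engine-specific state, here the select_lock_type that was previously lost, so both range scans in a ROR-merge use the same locking mode.

  #include <iostream>
  #include <memory>

  enum class LockTypeSketch { None, Shared, Exclusive };

  class HandlerSketch {
  public:
    virtual ~HandlerSketch() = default;
    // Models handler::clone(): a new handler is created and opened; private
    // engine state is not carried over automatically.
    virtual std::unique_ptr<HandlerSketch> clone() const = 0;
  };

  class InnobaseHandlerSketch : public HandlerSketch {
  public:
    LockTypeSketch select_lock_type = LockTypeSketch::None;

    std::unique_ptr<HandlerSketch> clone() const override {
      auto copy = std::make_unique<InnobaseHandlerSketch>(); // "base" clone step
      // Engine-specific fix-up: keep the same lock mode on the clone.
      copy->select_lock_type = select_lock_type;
      return copy;
    }
  };

  int main() {
    InnobaseHandlerSketch h;
    h.select_lock_type = LockTypeSketch::Exclusive;
    auto c = h.clone();
    auto *ic = static_cast<InnobaseHandlerSketch *>(c.get());
    std::cout << "clone keeps lock mode: "
              << (ic->select_lock_type == h.select_lock_type ? "yes" : "no")
              << "\n";
  }
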
cdc9a1172d MWL#182: Explain running statements:
Make SHOW EXPLAIN work for queries that do "Using temporary" and/or "Using filesort"
- Patch#1: Don't lose "Using temporary/filesort" in the SHOW EXPLAIN output.
2012-05-10 01:45:38 +05:30
2a1afc29f2 Inverted the option --skip-stat-tables in favour of --stat-tables.
Set it to 0 by default.
Now only the tests that use persistent statistics tables require
starting the server with --stat-tables set on.
2012-05-08 16:42:55 -07:00
ddd3e261b2 MDEV-254: Server hang with FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT
The code to re-enable checkpointing after UNLOCK TABLES was lost in the 5.5
merge, so re-add it back in.
2012-05-08 14:27:44 +02:00
54534a6984 MDEV-262: log_state occasionally fails in buildbot.
The failures are missing entries in the slow query log. The reason for the failures is sleep() calls with a short duration (10ms), which is less than the default system timer resolution of the various WaitForXXXObject functions (15.6 ms) and thus cannot work reliably.
The fix is to make the sleeps a tiny bit longer (20ms instead of 10ms) in the test.
2012-05-08 12:38:22 +02:00
074ce71e90 Merge from mysql-5.1.63-release 2012-05-08 07:19:14 +02:00
5be07ceadd Merge 5.5.24 back into main 5.5.
This is a weave merge, but without any conflicts.
In 14 source files, the copyright year needed to be updated to 2012.
2012-05-07 22:20:42 +02:00
ea8314fdd5 LP bug#994275 fix.
In 5.3 we substitute constants into ref access values; they cannot be NULL, so we do not need to add NOT NULL for early NULL filtering.
2012-05-07 21:14:37 +03:00
1d47bbe3bf Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM

Merging the fix from mysql-5.1 to mysql-5.5

mysql-test/t/mysqldump.test:
  There is a difference in the test case added as part of this fix,
  when compared with mysql-5.1. In mysql-5.5 and mysql-5.6,
  "DROP mysql database" fails when logging is enabled, hence those
  lines were removed.
2012-05-07 16:51:26 +05:30
e7364ec29c Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not include the dump statements for the general_log and
slow_log tables. That is because of the fix for Bug#26121. Hence, after
dropping the mysql database and applying the dump with logging enabled,
"'general_log' table not found" errors are logged into the server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped the dumping of the
general_log and slow_log tables, because the data dump of those tables
takes LOCKS, which is not allowed for log tables.

Fix:
----
Instead of taking both the metadata and the data dump for those tables,
take only the metadata dump, which doesn't need LOCKS.
As part of fixing the issue we came up with the algorithm below.
Design before the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and slow_log.
4) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, the
"CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
This is because we skipped "DROP TABLE" for those tables:
"DROP TABLE" fails for these tables if logging is enabled.
The customer applies the dump with logging enabled, so if the dump
had "DROP TABLE" it would fail. Hence, the "DROP TABLE" statements
for those tables were removed.

After the fix we could initially observe "Table 'mysql.general_log'
doesn't exist" errors; that is because in the customer scenario they
drop the mysql database with logging enabled, hence those errors are
expected. Once we apply the dump that was taken before the
"drop database mysql", the errors will not be there.

client/mysqldump.c:
  In get_table_structure(), added code to skip the DROP TABLE statements for the
  general_log and slow_log tables, because when logging is enabled those statements
  would fail. Also replaced CREATE TABLE with CREATE TABLE IF NOT EXISTS for those
  tables, to make sure the CREATE statement for those tables doesn't fail since we
  removed the DROP statements for them.
  In dump_all_tables_in_db(), added code to call get_table_structure() for the
  general_log and slow_log tables.
mysql-test/r/mysqldump.result:
  Added a test as part of fix for Bug #11754178
mysql-test/t/mysqldump.test:
  Added a test as part of fix for Bug #11754178
2012-05-07 16:46:44 +05:30
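
A rough sketch of the dump-text transformation described above, written with std::string for brevity (the real change lives in C in client/mysqldump.c, and the function below is a stand-in): for general_log and slow_log no DROP TABLE is emitted, and CREATE TABLE becomes CREATE TABLE IF NOT EXISTS.

  #include <iostream>
  #include <string>

  bool is_log_table(const std::string &name) {
    return name == "general_log" || name == "slow_log";
  }

  std::string dump_table_definition(const std::string &name,
                                    const std::string &create_stmt) {
    std::string out;
    if (!is_log_table(name))
      out += "DROP TABLE IF EXISTS `" + name + "`;\n"; // normal tables keep DROP
    std::string create = create_stmt;
    if (is_log_table(name)) {
      // DROP was skipped (it would fail while logging is enabled), so CREATE
      // must tolerate the table already existing.
      const std::string from = "CREATE TABLE ";
      const std::string to = "CREATE TABLE IF NOT EXISTS ";
      if (create.compare(0, from.size(), from) == 0)
        create.replace(0, from.size(), to);
    }
    out += create + ";\n";
    return out;
  }

  int main() {
    std::cout << dump_table_definition("general_log",
                                       "CREATE TABLE `general_log` (...)");
  }
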
8065143637 Fix for LP bug#993726
The optimization of aggregate functions detected a constant under max() and evaluated it, but the condition in the WHERE clause (which is always FALSE) was not taken into account.
2012-05-07 13:26:34 +03:00
213476ef3e Fix for bug lp:992405
The patch backports two patches from mysql 5.6:
- BUG#12640437: USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT QUERY OUTPUT
- Bug#12578908: SELECT SQL_BUFFER_RESULT OUTPUTS TOO MANY ROWS WHEN GROUP IS OPTIMIZED AWAY

Original comment:
-----------------
3714 Jorgen Loland	2012-03-01
      BUG#12640437 - USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT 
                     QUERY OUTPUT
      
      For all but simple grouped queries, temporary tables are used to
      resolve grouping. In these cases, the list of grouping fields is
      stored in the temporary table and grouping is resolved
      there (e.g. by adding a unique constraint on the involved
      fields). Because of this, grouping is already done when the rows
      are read from the temporary table.
      
      In the case where a group clause may be optimized away, grouping
      does not have to be resolved using a temporary table. However, if
      a temporary table is explicitly requested (e.g. because the
      SQL_BUFFER_RESULT hint is used, or the statement is
      INSERT...SELECT), a temporary table is used anyway. In this case,
      the temporary table is created with an empty group list (because
      the group clause was optimized away) and it will therefore not
      create groups. Since the temporary table does not take care of
      grouping, JOIN::group shall not be set to false in 
      make_simple_join(). This was fixed in bug 12578908. 
      
      However, there is an exception where make_simple_join() should
      set JOIN::group to false even if the query uses a temporary table
      that was explicitly requested but is not strictly needed. That
      exception is if the loose index scan access method (explain
      says "Using index for group-by") is used to read into the 
      temporary table. With loose index scan, grouping is resolved 
      by the access method. This is exactly what happens in this bug.
2012-05-07 11:02:58 +03:00
906c9a93a0 Supported extended keys when collecting and using persistent statistics. 2012-05-06 22:42:14 -07:00
867ce618cb merge 2012-05-05 14:59:44 +02:00
daafaa0f86 Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not include the dump statements for the general_log and
slow_log tables. That is because of the fix for Bug#26121. Hence, after
dropping the mysql database and applying the dump with logging enabled,
"'general_log' table not found" errors are logged into the server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped the dumping of the
general_log and slow_log tables, because the data dump of those tables
takes LOCKS, which is not allowed for log tables.

Fix:
----
Instead of taking both the metadata and the data dump for those tables,
take only the metadata dump, which doesn't need LOCKS.
As part of fixing the issue we came up with the algorithm below.
Design before the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and slow_log.
4) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, the
"CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
This is because we skipped "DROP TABLE" for those tables:
"DROP TABLE" fails for these tables if logging is enabled.
The customer applies the dump with logging enabled, so if the dump
had "DROP TABLE" it would fail. Hence, the "DROP TABLE" statements
for those tables were removed.

After the fix we could initially observe "Table 'mysql.general_log'
doesn't exist" errors; that is because in the customer scenario they
drop the mysql database with logging enabled, hence those errors are
expected. Once we apply the dump that was taken before the
"drop database mysql", the errors will not be there.

client/mysqldump.c:
  In get_table_structure(), added code to skip the DROP TABLE statements for the
  general_log and slow_log tables, because when logging is enabled those statements
  would fail. Also replaced CREATE TABLE with CREATE TABLE IF NOT EXISTS for those
  tables, to make sure the CREATE statement for those tables doesn't fail since we
  removed the DROP statements for them.
  In dump_all_tables_in_db(), added code to call get_table_structure() for the
  general_log and slow_log tables.
mysql-test/r/mysqldump.result:
  Added a test as part of fix for Bug #11754178
mysql-test/t/mysqldump.test:
  Added a test as part of fix for Bug #11754178
2012-05-04 18:33:34 +05:30
2cf17b09ed Resolve opt_vardir in MTR with realpath. The server resolves some directory names, thus
mtr should do so as well, to avoid differences in test output.

This fixes sys_vars.secure_file_priv on FreeBSD9.0 buildbot.
2012-05-04 14:46:18 +02:00
ab58904367 Fix FreeBSD test errors. Also link with libexecinfo on FreeBSD for stacktrace functionality. 2012-05-04 14:02:35 +02:00
b757c13078 In perl, to break out of a foreach loop we need to use
the keyword "last" and not "break".  Fixing the failing
test case.
2012-05-04 12:29:49 +05:30
44cf9ee5f7 5.3 merge 2012-05-04 07:16:38 +02:00
d231dc8f59 Fix (hopefully) a race condition in a test. Wait until killed connection
is gone.
2012-05-03 18:58:48 +02:00
d555878050 automatic merge 2012-05-03 16:00:41 +03:00
c9a73aa204 Fix bug lp:993745
This is a backport of the fix for MySQL bug #13723054 in 5.6.

Original comment:
      The crash is caused by arbitrary memory area overwriting in the case of
      BLOB fields during an attempt to copy a BLOB field key image into the record
      buffer (the record buffer is too small to hold the BLOB key part image).
      note:
      QUICK_GROUP_MIN_MAX_SELECT cannot work with BLOB fields
      because it uses the record buffer as a temporary buffer for key values;
      however, this case is filtered out by the covering_keys() check
      in get_best_group_min_max(), as BLOBs always require a key length
      modifier in the key declaration, and if the key has a BLOB
      then it cannot be a covering key.
      The fix is to use the 'max_used_key_length' key length instead of 0.

Analysis:
Specifically, the crash in this bug was a result of the call to key_copy()
that copied the whole key, including the BLOB field which is not used
for index access. Copying the BLOB field overwrote memory as far as the
function parameter 'key_info'. As a result the contents of key_info were
all 0, which resulted in a crash when this key_info was accessed a few
lines below in key_cmp().
2012-05-03 14:49:52 +03:00
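
A self-contained illustration of the lp:993745 fix above, with a simplified stand-in for the server's KEY metadata and key_copy(): copying only max_used_key_length bytes keeps the copy inside the small record buffer, whereas passing 0 (meaning "the full key length", BLOB part included) is what overwrote memory.

  #include <cstdio>
  #include <cstring>

  struct KeySketch {
    unsigned key_length;          // full declared key length (includes BLOB part)
    unsigned max_used_key_length; // length actually used for index access
  };

  // Copies key bytes from the index image into the record buffer.
  void key_copy_sketch(unsigned char *to, const unsigned char *from,
                       const KeySketch &key, unsigned length) {
    if (length == 0)              // the buggy call path relied on this default
      length = key.key_length;
    std::memcpy(to, from, length);
  }

  int main() {
    unsigned char index_image[512] = {0};
    unsigned char record_buffer[16] = {0};      // too small for the BLOB key part
    KeySketch key = {300, sizeof(record_buffer)};
    // The fix: pass max_used_key_length explicitly instead of 0.
    key_copy_sketch(record_buffer, index_image, key, key.max_used_key_length);
    std::printf("copied %u bytes safely\n", key.max_used_key_length);
  }
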
550d6871a5 MDEV-246 - Aborted_clients incremented during ordinary connection close
The problem was the increment of the aborted_threads variable due to thd->killed, which was set when a threadpool connection was terminated. The fix is to not set thd->killed anymore; there is no real reason for doing it.

Added a test that checks that status variable aborted_clients does not grow for ordinary disconnects, and that successful KILL increments this variable.
2012-05-03 02:47:06 +02:00
ef4b9a0e57 Fix for failing gis-precise on Windows. 2012-05-03 13:14:40 +05:00
99e2ba4848 5.2 merge 2012-05-02 22:02:06 +02:00
167ad4c4a5 update the result file 2012-05-02 22:00:31 +02:00
8fe40c50db MDEV-214 lp:967242 Wrong result with JOIN, AND in ON condition, multi-part key, GROUP BY, subquery and OR in WHERE
The problem was in the code (update_const_equal_items()) which marked
index parts constant independently of the place where the equality was used.
In the test suite it marked the t2_1.c part constant despite the fact that
it was connected by OR with another expression.

The solution is to mark as constant only top-level equalities connected with AND.
2012-05-02 18:11:02 +02:00
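
A simplified, self-contained sketch of the MDEV-214 idea above, using a toy condition tree rather than the server's Item classes: equalities are marked constant only while the walk is still inside top-level AND nodes, and anything underneath an OR is left alone.

  #include <cstdio>
  #include <memory>
  #include <vector>

  struct CondSketch {
    enum Kind { AND, OR, EQUALITY };
    Kind kind = AND;
    bool marked_const = false;
    std::vector<std::unique_ptr<CondSketch>> children;
  };

  std::unique_ptr<CondSketch> make_node(CondSketch::Kind k) {
    auto n = std::make_unique<CondSketch>();
    n->kind = k;
    return n;
  }

  void mark_top_level_equalities(CondSketch &cond, bool top_level_and = true) {
    if (cond.kind == CondSketch::EQUALITY) {
      if (top_level_and)
        cond.marked_const = true; // safe: this equality must hold for every row
      return;
    }
    // Below an OR an equality is only one alternative, so it must not be
    // treated as constant.
    bool child_top = top_level_and && cond.kind == CondSketch::AND;
    for (auto &c : cond.children)
      mark_top_level_equalities(*c, child_top);
  }

  int main() {
    // WHERE (a = 1) AND ((b = 2) OR (c = 3))
    auto root = make_node(CondSketch::AND);
    root->children.push_back(make_node(CondSketch::EQUALITY)); // a = 1
    auto or_node = make_node(CondSketch::OR);
    or_node->children.push_back(make_node(CondSketch::EQUALITY)); // b = 2
    or_node->children.push_back(make_node(CondSketch::EQUALITY)); // c = 3
    root->children.push_back(std::move(or_node));
    mark_top_level_equalities(*root);
    std::printf("a=1 marked: %d, b=2 marked: %d\n",
                (int)root->children[0]->marked_const,
                (int)root->children[1]->children[0]->marked_const);
  }
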