Commit Graph

76778 Commits

Author SHA1 Message Date
Sergey Petrunya
34e9a4c1e2 Merge of recent changes in MWL#182 in 5.3 with {Merge of MWL#182 with 5.5} 2012-05-17 00:59:03 +04:00
Michael Widenius
26cc22f3fe Fixed LP:990187 Assertion `share->reopen == 1' failed at maria_extra on ADD PARTITION
mysql-test/suite/maria/maria-partitioning.result:
  New test case
mysql-test/suite/maria/maria-partitioning.test:
  New test case
sql/sql_base.cc:
  Ignore HA_EXTRA_NORMAL for wait_while_table_is_used()
  More DBUG
sql/sql_partition.cc:
  Don't use HA_EXTRA_FORCE_REOPEN for wait_while_table_is_used(), as the table is opened multiple times (in prep_alter_part_table).
  This fixes the assert in Aria where we check whether a table is opened multiple times when HA_EXTRA_FORCE_REOPEN is issued.
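  A minimal sketch of the described change (the replacement flag is an
  assumption, based on the sql_base.cc note above that HA_EXTRA_NORMAL is
  now ignored; not the actual patch):

    /* sql_partition.cc (sketch): the table is opened multiple times in
       prep_alter_part_table(), so don't force a reopen here. */
    wait_while_table_is_used(thd, table, HA_EXTRA_NORMAL);
    /* previously: wait_while_table_is_used(thd, table, HA_EXTRA_FORCE_REOPEN); */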
2012-05-16 22:04:48 +03:00
Sergey Petrunya
483ae4bf81 Make SHOW EXPLAIN give a separate timeout error. 2012-05-16 20:22:54 +04:00
Michael Widenius
c39da19c4a Moved maria tests to suite/maria 2012-05-16 18:46:02 +03:00
Michael Widenius
6d8e329c9a Fixed bug LP:973039 - Assertion `share->in_trans == 0' failed in maria_close on DROP TABLE under LOCK
- 5.5 was missing calls to ha_extra(HA_PREPARE_FOR_DROP | HA_PREPARE_FOR_RENAME); they were lost in the 5.3 -> 5.5 merge.


sql/sql_admin.cc:
  Updated arguments for close_all_tables_for_name
sql/sql_base.h:
  Updated arguments for close_all_tables_for_name (see the signature sketch below)
sql/sql_partition.cc:
  Updated arguments for close_all_tables_for_name
sql/sql_table.cc:
  Updated arguments for close_all_tables_for_name
  Removed test of kill, as we have already called 'ha_extra(HA_PREPARE_FOR_DROP)' and the table may be inconsistent.
sql/sql_trigger.cc:
  Updated arguments for close_all_tables_for_name
sql/sql_truncate.cc:
  For truncate that is done with drop + recreate, signal that the table will be dropped.
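  A hedged sketch of the close_all_tables_for_name() interface change the
  notes above refer to (the exact parameter type and name are assumptions,
  not the actual declaration):

    /* sql_base.h (sketch): callers now tell close_all_tables_for_name()
       how the table is about to be used, so the handler can be notified
       via ha_extra(HA_PREPARE_FOR_DROP) / ha_extra(HA_PREPARE_FOR_RENAME). */
    void close_all_tables_for_name(THD *thd, TABLE_SHARE *share,
                                   ha_extra_function extra,
                                   TABLE *skip_table);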
2012-05-16 18:44:17 +03:00
Sergey Petrunya
dfbd777fd8 MWL#182: SHOW EXPLAIN: Merge 5.3->5.5 2012-05-16 19:20:00 +04:00
Annamalai Gurusami
ce9e6b5a9a Bug #13943231: ALTER TABLE AFTER DISCARD MAY CRASH THE SERVER
The following scenario crashes our mysql server:

1.  set global innodb_file_per_table=1;
2.  create table t1(c1 int) engine=innodb;
3.  alter table t1 discard tablespace;
4.  alter table t1 add unique index(c1);

Step 4 crashes the server.  This patch introduces a check on discarded
tablespace to avoid the crash.
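A hedged sketch of the kind of check described above (the handle, field
and error names are assumptions, not the actual InnoDB patch):

  /* Before building the new index, refuse the operation if the table's
     tablespace has been discarded, since there is no data to read. */
  if (prebuilt->table->tablespace_discarded)
    DBUG_RETURN(HA_ERR_NO_SUCH_TABLE);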

rb://1041 approved by Marko Makela
2012-05-16 16:36:49 +05:30
Venkata Sidagam
6b05e434bf Bug #13955256: KEYCACHE CRASHES, CORRUPTIONS/HANGS WITH,
FULLTEXT INDEX AND CONCURRENT DML.

Problem Statement:
------------------
1) Create a table with a FULLTEXT index.
2) Enable concurrent inserts.
3) In multiple threads, repeatedly do the following operations:
   a) truncate table
   b) insert into table ....
   c) select ... match .. against .. in non-boolean/boolean mode

After some time two different assert core dumps can be observed.

Analysis:
--------
1) Assert core dump at key_read_cache():
Two select threads operate in parallel on the same key root block.
The first select thread's block->status is set to BLOCK_ERROR because
my_pread() in read_block() returns '0'. The truncate made the index
file 1024 bytes long, and pread was asked to read a block of 1024
bytes from offset 1024, which it cannot do since that is the end of
the file; it returns '0' and sets "my_errno= HA_ERR_FILE_TOO_SHORT",
while key_file_length and key_root[0] are both 1024. Since the block
status has BLOCK_ERROR, the first select thread enters free_block(),
sets the status to BLOCK_REASSIGNED and waits on a condition mutex in
wait_on_readers(). The other select thread works on the same block,
sees the status BLOCK_ERROR, enters free_block(), checks for
BLOCK_REASSIGNED and asserts.

2) Assert core dump at key_write_cache():
One select thread and one insert thread.
The select thread unlocks 'keycache->cache_lock', which allows other
threads to continue, gets a pread() return value of '0' (see the
explanation above), then tries to reacquire 'keycache->cache_lock'
and waits for the lock.
The insert thread requests the block; the block is assigned from the
hash list, its page_status is set to 'PAGE_WAIT_TO_BE_READ', and the
thread goes into read_block() and waits in the queue since other
threads are performing reads on the same block.
The select thread that was waiting for the 'keycache->cache_lock'
mutex in read_block() continues after getting the my_pread() value
of '0', sets the block status to BLOCK_ERROR, enters free_block()
and goes to wait_for_readers().
Now the insert thread wakes up, continues, finds block->status is
not BLOCK_READ, and asserts.

Fix:
---
In the fulltext code, multiple readers of the index file were not
guarded against concurrent inserts. The following code was therefore
added in _ft2_search() and walk_and_match().

To lock the key_root, the following code is used in _ft2_search():
 if (info->s->concurrent_insert)
    mysql_rwlock_rdlock(&share->key_root_lock[0]);

and to unlock:
 if (info->s->concurrent_insert)
   mysql_rwlock_unlock(&share->key_root_lock[0]);

storage/myisam/ft_boolean_search.c:
  Since it is a recursive function, to avoid confusion in taking and
  releasing the locks, _ft2_search() was renamed to
  _ft2_search_internal(). _ft2_search() now takes the lock, calls
  _ft2_search_internal() and releases the lock in the
  concurrent-insert case (see the sketch below).
storage/myisam/ft_nlq_search.c:
  Added read-lock code in walk_and_match().
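A minimal sketch of the wrapper described above (simplified; the real
function's signature and error handling may differ):

  /* ft_boolean_search.c (sketch): _ft2_search() becomes a thin wrapper
     that takes the key_root read lock when concurrent inserts are
     enabled, calls the renamed _ft2_search_internal(), and unlocks. */
  static int _ft2_search(FTB *ftb, FTB_WORD *ftbw, my_bool init_search)
  {
    int result;
    MYISAM_SHARE *share= ftb->info->s;
    if (share->concurrent_insert)
      mysql_rwlock_rdlock(&share->key_root_lock[0]);
    result= _ft2_search_internal(ftb, ftbw, init_search);
    if (share->concurrent_insert)
      mysql_rwlock_unlock(&share->key_root_lock[0]);
    return result;
  }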
2012-05-16 16:14:27 +05:30
Sergey Petrunya
533e1d2845 MDEV-273: SHOW EXPLAIN: server crashes in JOIN::print_explain on a query with impossible WHERE
- It turns out there is a case where the join is degenerate, but
  join->table_count!=0 && join->tables_list!=NULL. We need to also
  check whether join->zero_result_cause!=NULL.

- There is a slight problem: the code sets
    zero_result_cause= "no matching row in const table"
  when NOT running EXPLAIN. The result is that SHOW EXPLAIN will show
  this line while regular EXPLAIN will not.
2012-05-16 12:49:22 +04:00
Annamalai Gurusami
bcb5d73767 Bug #12752572 61579: REPLICATION FAILURE WHILE
INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER

When an insert stmt like "insert into t values (1),(2),(3)" is
executed, the autoincrement values assigned to these three rows are
expected to be contiguous.  In the given lock mode
(innodb_autoinc_lock_mode=1), the auto inc lock will be released
before the end of the statement.  So to make the autoincrement
contiguous for a given statement, we need to reserve the auto inc
values at the beginning of the statement.  

rb://1074 approved by Alexander Nozdrin
2012-05-16 11:17:48 +05:30
Nuno Carvalho
5d8b38df4c BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
Improved random number filtering verification on
rpl_filter_tables_not_exist test.
2012-05-15 22:06:48 +01:00
Sergei Golubchik
c17bace4f0 Added --continue-on-error to mysqltest and mysql-test-run
This will continue the test case even if there was an error,
and makes it easier to run a test that contains many sub-tests against one engine.

(originally by Monty)
2012-05-15 19:35:57 +02:00
Marko Mäkelä
5917c58a5d Bug#14025221 FOREIGN KEY REFERENCES FREED MEMORY AFTER DROP INDEX
dict_table_replace_index_in_foreign_list(): Replace the dropped index
also in the foreign key constraints of child tables that are
referencing this table.

row_ins_check_foreign_constraint(): If the underlying index is
missing, refuse the operation.
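A hedged sketch of the refusal described above (not the actual patch; the
chosen error code is an assumption):

  /* row_ins_check_foreign_constraint() (sketch): if the index backing
     the foreign key constraint has been dropped, refuse the operation
     instead of following a dangling index pointer. */
  if (foreign->foreign_index == NULL)
    return(DB_NO_REFERENCED_ROW);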

rb:1051 approved by Jimmy Yang
2012-05-15 15:04:39 +03:00
Sergey Petrunya
382e81ca84 MDEV-270: SHOW EXPLAIN: server crashes in JOIN::print_explain on a query with
select tables optimized away
- Take into account that for some degenerate joins, instead of "join->table_count=0",
  the code sets "join->tables_list=0".
2012-05-15 15:56:50 +04:00
Mattias Jonsson
f436b188e2 bug#13949735: crash regression from bug#13694811.
There can be cases where the optimizer calls ha_partition::records_in_range
when there are no matching partitions, so the DBUG_ASSERT on
!tot_used_partitions fires.

Fixed by returning 0 when no matching partitions are found.

This avoids the crash: records_in_range will try to find the
biggest used partition, will not find any partition, and will
return 0, meaning no rows can be found.
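A minimal sketch of the fix described above (the bitmap member name is an
assumption, not the exact patch):

  /* ha_partition::records_in_range() (sketch): if partition pruning left
     no used partitions, report 0 matching rows instead of asserting. */
  if (bitmap_is_clear_all(&m_part_info->used_partitions))
    DBUG_RETURN((ha_rows) 0);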

Patch contributed by Davi Arnaut at twitter.
2012-05-15 12:45:52 +02:00
Tor Didriksen
02f90402c8 Bug#14039955 RPAD FUNCTION LEADS TO UNINITIALIZED VALUES WARNING IN MY_STRTOD
Rewrite the "parser" in my_strtod_int() to avoid
reading past the end of the input string.
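A small standalone illustration of the bounds-checked scanning pattern
described above (not the actual my_strtod_int() code):

  #include <stddef.h>

  /* Count leading ASCII digits without ever dereferencing past 'end',
     so a non null-terminated buffer cannot cause an out-of-bounds read. */
  static size_t count_leading_digits(const char *str, const char *end)
  {
    const char *p= str;
    while (p < end && *p >= '0' && *p <= '9')
      p++;
    return (size_t)(p - str);
  }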
2012-05-18 12:57:38 +02:00
Rohit Kalhans
b4ffce10d3 upmerge from mysql-5.1 -> mysql-5.5. 2012-05-18 14:53:23 +05:30
Mayank Prasad
0581d1c46c Bug#11766101 : 59140: LIKE CONCAT('%',@A,'%') DOESN'T MATCH WHEN @A CONTAINS LATIN1 STRING
Issue/Cause:
The issue is memory corruption. During the optimization phase, the pattern
to be matched in the WHERE clause is prepared. This is done in
Item_func_concat::val_str(), which builds the resultant string (tmp_value)
and returns a pointer to it. In the caller, Item_func_like::fix_fields,
the pattern is made to point to this string (tmp_value). In further
processing, tmp_value gets modified, which leaves the pattern with
changed/wrong values.

Fix:
Allocate a separate memory location in the caller, copy the value of the
resultant string (tmp_value) into it, and make the pattern point to that.
This makes sure no further changes to tmp_value will affect the pattern.
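A hedged sketch of the copy described above (simplified; the allocation
call and surrounding code are assumptions, not the actual patch):

  /* Item_func_like::fix_fields() (sketch): instead of leaving 'pattern'
     pointing into the String returned by Item_func_concat::val_str(),
     take a private copy so later reuse of that buffer cannot corrupt it. */
  String *res2= args[1]->val_str(&cmp.value2);
  if (res2)
  {
    pattern_len= (int) res2->length();
    pattern= thd->strmake(res2->ptr(), pattern_len);  /* owned copy */
  }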
2012-05-17 22:24:23 +05:30
Gopal Shankar
6053cb8cc1 null merge for Bug#12636001 2012-05-17 18:21:45 +05:30
unknown
b45c750200 2012-05-17 11:46:11 +01:00
unknown
1bfd04ea7c 2012-05-17 10:19:53 +05:30
Annamalai Gurusami
dfff1ff58b null merge from mysql-5.1 to mysql-5.5 2012-05-16 16:38:02 +05:30
Annamalai Gurusami
31a6bcbee4 Merge from mysql-5.1 to mysql-5.5. 2012-05-16 16:33:22 +05:30
Venkata Sidagam
79af7db585 Null merge from 5.1 to 5.5 2012-05-16 16:19:20 +05:30
Venkata Sidagam
7244e84e6d Merging the fix from mysql-5.1 to mysql-5.5 2012-05-16 16:06:07 +05:30
Annamalai Gurusami
7a6c449c62 Bug #13943231: ALTER TABLE AFTER DISCARD MAY CRASH THE SERVER
The following scenario crashes our mysql server:

1.  set global innodb_file_per_table=1;
2.  create table t1(c1 int) engine=innodb;
3.  alter table t1 discard tablespace;
4.  alter table t1 add unique index(c1);

Step 4 crashes the server.  This patch introduces a check on discarded
tablespace to avoid the crash.

rb://1041 approved by Marko Makela
2012-05-16 15:23:56 +05:30
Annamalai Gurusami
453fcc0cf5 Merge from mysql-5.1 to mysql-5.5. 2012-05-16 14:10:18 +05:30
Venkata Sidagam
0e9c1e9fc3 Bug #13955256: KEYCACHE CRASHES, CORRUPTIONS/HANGS WITH,
FULLTEXT INDEX AND CONCURRENT DML.

Problem Statement:
------------------
1) Create a table with a FULLTEXT index.
2) Enable concurrent inserts.
3) In multiple threads, repeatedly do the following operations:
   a) truncate table
   b) insert into table ....
   c) select ... match .. against .. in non-boolean/boolean mode

After some time two different assert core dumps can be observed.

Analysis:
--------
1) Assert core dump at key_read_cache():
Two select threads operate in parallel on the same key root block.
The first select thread's block->status is set to BLOCK_ERROR because
my_pread() in read_block() returns '0'. The truncate made the index
file 1024 bytes long, and pread was asked to read a block of 1024
bytes from offset 1024, which it cannot do since that is the end of
the file; it returns '0' and sets "my_errno= HA_ERR_FILE_TOO_SHORT",
while key_file_length and key_root[0] are both 1024. Since the block
status has BLOCK_ERROR, the first select thread enters free_block(),
sets the status to BLOCK_REASSIGNED and waits on a condition mutex in
wait_on_readers(). The other select thread works on the same block,
sees the status BLOCK_ERROR, enters free_block(), checks for
BLOCK_REASSIGNED and asserts.

2) Assert core dump at key_write_cache():
One select thread and one insert thread.
The select thread unlocks 'keycache->cache_lock', which allows other
threads to continue, gets a pread() return value of '0' (see the
explanation above), then tries to reacquire 'keycache->cache_lock'
and waits for the lock.
The insert thread requests the block; the block is assigned from the
hash list, its page_status is set to 'PAGE_WAIT_TO_BE_READ', and the
thread goes into read_block() and waits in the queue since other
threads are performing reads on the same block.
The select thread that was waiting for the 'keycache->cache_lock'
mutex in read_block() continues after getting the my_pread() value
of '0', sets the block status to BLOCK_ERROR, enters free_block()
and goes to wait_for_readers().
Now the insert thread wakes up, continues, finds block->status is
not BLOCK_READ, and asserts.

Fix:
---
In the fulltext code, multiple readers of the index file were not
guarded against concurrent inserts. The following code was therefore
added in _ft2_search() and walk_and_match().

To lock the key_root, the following code is used in _ft2_search():
 if (info->s->concurrent_insert)
    mysql_rwlock_rdlock(&share->key_root_lock[0]);

and to unlock:
 if (info->s->concurrent_insert)
   mysql_rwlock_unlock(&share->key_root_lock[0]);

storage/myisam/ft_boolean_search.c:
  Since it is a recursive function, to avoid confusion in taking and
  releasing the locks, _ft2_search() was renamed to
  _ft2_search_internal(). _ft2_search() now takes the lock, calls
  _ft2_search_internal() and releases the lock in the
  concurrent-insert case.
storage/myisam/ft_nlq_search.c:
  Added read-lock code in walk_and_match().
2012-05-16 13:55:22 +05:30
Olav Sandstaa
c5dc1ea526 Fix for Bug#12667154 SAME QUERY EXEC AS WHERE SUBQ GIVES DIFFERENT
RESULTS ON IN() & NOT IN() COMP #3

This bug causes a wrong result in mysql-trunk when ICP is used
and bad performance in mysql-5.5 and mysql-trunk.

Using the query from bug report to explain what happens and causes
the wrong result from the query when ICP is enabled:

1. The t3 table contains four records. The outer query will read
   these and for each of these it will execute the subquery.

2. Before the first execution of the subquery it will be optimized. In
   this case the important part is what happens to the first table t1:
   -make_join_select() will call the range optimizer, which decides
    that t1 should be accessed using a range scan on the k1 index.
    It creates a QUICK_RANGE_SELECT object for this.
   -As the last part of optimization the ICP code pushes the
    condition down to the storage engine for table t1 on the k1 index.

   This produces the following information in the explain for this table:

     2 DEPENDENT SUBQUERY t1 range k1 k1 5 NULL 3 Using index condition; Using filesort

   Note the use of filesort.

3. Due to the need for sorting, the first execution of the subquery
   does (among other things):
   a. Call create_sort_index() which again will call find_all_keys():
   b. find_all_keys() will read the required keys for all qualifying
      rows from the storage engine. To do this it checks if it has a
      quick-select for the table. It will use the quick-select for
      reading records. In this case it will read four records from the
      storage engine (based on the range criteria). The storage engine
      will evaluate the pushed index condition for each record.
   c. At the end of create_sort_index() there is code that cleans up a
      lot of stuff on the join tab. One of the things that is cleaned
      is the select object. The result of this is that the
      quick-select object created in make_join_select is deleted.

4. The second execution of the subquery does the same as the first but
   the result is different:
   a. Call create_sort_index() which again will call find_all_keys()
      (same as for the first execution)
   b. find_all_keys() will read the keys from the storage engine. To
      do this it checks if it has a quick-select for the table. Now
      there is NO quick-select object(!) (since it was deleted in
      step 3c). So find_all_keys defaults to reading the table using a
      table scan instead. So instead of reading the four relevant records
      in the range it reads the entire table (6 records). It then
      evaluates the table's condition (and here it goes wrong). Since
      the entire condition has been pushed down to the storage engine
      using ICP all 6 records qualify. (Note that the storage engine
      will not evaluate the pushed index condition in this case since
      it was pushed for the k1 index and now we do a table scan
      without any index being used).
      The result is that here we return six qualifying key values
      instead of four due to not evaluating the table's condition.
   c. As above.

5. The last two executions of the subquery will also produce wrong
   results for the same reason.

Summary: The problem occurs because all but the first execution of the
subquery are done as a table scan without evaluating the table's
condition (which is pushed to the storage engine for a different
index). This is caused by the create_sort_index() function deleting
the quick-select object that should have been used for executing the
subquery as a range scan.

Note that, in addition to causing wrong results, this bug can also
result in bad performance due to executing the subquery using a table
scan instead of a range scan. This is an issue in MySQL 5.5.

The fix for this problem is to avoid deleting the quick-select object
that the optimizer created when create_sort_index() cleans up the
join-tab. This ensures that the quick-select object and the
corresponding pushed index condition remain available and are used by
all following executions of the subquery.


sql/sql_select.cc:
  Fix for Bug#12667154: Change how create_sort_index() cleans up the
  join_tab's select and quick-select objects so that a quick-select
  object created outside of create_sort_index() is not deleted.
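A minimal sketch of the clean-up change described above (member and flag
names are assumptions, not the actual patch):

  /* create_sort_index() clean-up (sketch): only delete a quick select
     that was created here for the filesort; a quick select created by
     the optimizer (with its pushed index condition) is kept so later
     executions of the subquery can reuse it. */
  if (select)
  {
    if (quick_created_for_sort)         /* assumed local flag */
    {
      delete select->quick;
      select->quick= NULL;
    }
    /* else: leave tab->select->quick intact for the next execution */
  }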
2012-05-16 09:49:23 +02:00
unknown
3780f47339 2012-05-16 09:18:57 +02:00
Nuno Carvalho
f0e93ac2bf BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
Automerge from mysql-5.1 into mysql-5.5.
2012-05-15 22:18:59 +01:00
Marko Mäkelä
a6da754a31 Merge mysql-5.1 to mysql-5.5. 2012-05-15 15:11:34 +03:00
Georgi Kodinov
bef6c0c161 merge 5.1->5.5 2012-05-15 13:18:42 +03:00
Georgi Kodinov
fcb033053d Bug #11761822: yassl rejects valid certificate which openssl accepts
Applied the fix that updates yaSSL to 2.2.1 and fixes parsing this 
particular certificate.
Added a test case with the certificate itself.
2012-05-15 13:12:22 +03:00
Bjorn Munch
0e66663159 Upmerge optional testsuite path 2012-05-15 09:19:58 +02:00
Bjorn Munch
e72278fd42 Added some extra optional path to test suites 2012-05-15 09:14:44 +02:00
unknown
3d37b67b2b Fix for LP bug#998516
If we did nothing when resolving a unique table conflict, we should not retry (it led to an infinite loop).
Now we retry (recheck) the unique table check only if we materialized a table.
2012-05-15 08:31:07 +03:00
Sergey Petrunya
b3841ae9e4 MDEV-267: SHOW EXPLAIN: Server crashes in JOIN::print_explain on most of queries
- Make SHOW EXPLAIN code handle degenerate subquery cases (No tables, impossible where, etc)
2012-05-14 22:39:00 +04:00
Joerg Bruehe
d8faf3f179 Raise version after cloning 5.5.25 2012-05-14 15:15:13 +02:00
Sergey Petrunya
6d41fa0d54 BUG#998236: Assertion failure or valgrind errors at best_access_path ...
- Let fix_semijoin_strategies_for_picked_join_order() set 
  POSITION::prefix_record_count for POSITION records that it copies from
  SJ_MATERIALIZATION_INFO::tables. 
  (These records do not have prefix_record_count set, because they are optimized
   as joins-inside-semijoin-nests, without full advance_sj_state() processing).
2012-05-13 13:15:17 +04:00
Sergey Petrunya
4dff59a31b Merge 5.2->5.3 2012-05-12 12:27:26 +04:00
Sergey Petrunya
e1b6e1b899 Merge 5.2->5.3 2012-05-12 12:12:35 +04:00
Sergey Petrunya
97ae1682f1 BUG#997747: Assertion `join->best_read < ((double)1.79..5e+308L)' failed
in greedy_search with LEFT JOINs and unique keys
- Backport the fix for BUG#806524 from MariaDB 5.3
2012-05-12 11:53:14 +04:00
Sergey Petrunya
6bce336624 MDEV-240: SHOW EXPLAIN: Assertion `this->optimized == 2' failed
- Fix the bug: SHOW EXPLAIN may hit a case where a join is partially 
  optimized.
- Change JOIN::optimized to use enum instead of numeric constants
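A minimal sketch of the enum replacing the numeric constants (the value
names are assumptions):

  /* Stored in JOIN::optimized so SHOW EXPLAIN can tell a partially
     optimized join apart from a fully optimized one. */
  enum join_optimization_state
  {
    NOT_OPTIMIZED= 0,
    OPTIMIZATION_IN_PROGRESS= 1,
    OPTIMIZATION_DONE= 2
  };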
2012-05-11 18:13:06 +04:00
unknown
e10fecc02f Merge 5.2->5.3 2012-05-11 11:40:23 +03:00
Sergei Golubchik
329daad2d3 more portable fix for lp:942266 - 5.5 builds fail with systemtap-sdt-dev installed on Ubuntu
include <limits> early, before min/max macros are defined.
2012-05-11 09:18:00 +02:00
Sergei Golubchik
1185420da0 5.3 merge 2012-05-21 20:54:41 +02:00
Sergei Golubchik
431e042b5d c 2012-05-21 15:30:25 +02:00
unknown
f2cbc014d9 fix for LP bug#994392
The not_null_tables() of Item_func_not_all and Item_in_optimizer was inherited from
Item_func by mistake. It made the optimizer think that  subquery
predicates with ALL/ANY/IN were null-rejecting. This could trigger invalid
conversions of outer joins into inner joins.
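A hedged sketch of the kind of override the fix implies (not necessarily
the exact patch):

  /* Item_in_optimizer / Item_func_not_all (sketch): report that the
     predicate rejects NULLs for no tables, so outer joins are not
     wrongly converted to inner joins. */
  table_map not_null_tables() const { return 0; }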
2012-05-11 09:35:46 +03:00
Sergey Petrunya
6fae4447f0 # MDEV-239: Assertion `field_types == 0 ... ' failed in Protocol_text::store...
- Make all functions that produce parts of EXPLAIN output take 
  explain_flags as parameter, instead of looking into thd->lex->describe
2012-05-10 15:13:57 +04:00