Commit Graph

23189 Commits

Author SHA1 Message Date
Sergei Golubchik
280fcf0808 5.1 merge 2012-05-18 14:23:05 +02:00
unknown
e5bca74bfb Fixed bug mdev-277 as part of the fix for lp:944706
The cause of this bug is that the method JOIN::get_examined_rows iterates over all
JOIN_TABs of the join assuming they form a flat sequence. In the affected query, the
innermost subquery is merged into its parent query. When we call
JOIN::get_examined_rows for the second-level subquery, the iteration that
assumes sequential order of join tabs runs past the end of the join_tab array and calls
the method JOIN_TAB::get_examined_rows on uninitialized memory.

The fix is to iterate over JOIN_TABs in a way that takes into account the nested
semi-join structure of JOIN_TABs; in particular, iterate the same way select_describe() does.
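An illustrative shape of such a query (tables hypothetical; the actual query is in the bug report):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT, b INT);
  CREATE TABLE t3 (b INT);
  SELECT * FROM t1
  WHERE t1.a IN (SELECT t2.a FROM t2
                 WHERE t2.b IN (SELECT t3.b FROM t3));
  -- the innermost IN subquery is merged into its parent, so the JOIN_TABs
  -- do not form one flat sequence but a nested semi-join structure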
2012-05-18 14:52:01 +03:00
Igor Babaev
8e95e6543b Changed a test case from join_cache.test to make it platform independent 2012-05-17 18:01:13 -07:00
Igor Babaev
9b79feba56 Fixed the bug that caused incorrect values to be displayed in
the cardinality column of the table information_schema.statistics.
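The affected values can be inspected with a query such as (schema name illustrative):

  SELECT table_name, index_name, cardinality
  FROM information_schema.statistics
  WHERE table_schema = 'test';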
2012-05-17 16:54:26 -07:00
unknown
da5214831d Fix for bug lp:944706, task MDEV-193
The patch enables back constant subquery execution during
query optimization after it was disabled during the development
of MWL#89 (cost-based choice of IN-TO-EXISTS vs MATERIALIZATION).

The main idea is that constant subqueries are allowed to be executed
during optimization if their execution is not expensive.

The approach is as follows:
- Constant subqueries are recursively optimized at the beginning of
  JOIN::optimize of the outer query. This is done by the new method
  JOIN::optimize_constant_subqueries(), so that the cost
  of executing these subqueries can be estimated.
- Optimization of the outer query proceeds normally. During this phase
  the optimizer may request execution of non-expensive constant subqueries.
  Each place where the optimizer may potentially execute an expensive
  expression is guarded with the predicate Item::is_expensive().
- The implementation of Item_subselect::is_expensive has been extended
  to use the number of examined rows (estimated by the optimizer) as a
  way to determine whether the subquery is expensive or not.
- The new system variable "expensive_subquery_limit" controls how many
  examined rows are considered to be not expensive. The default is 100.
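For illustration, a minimal sketch of using the new variable (the name and default come from this patch; the session scope is an assumption):

  SET SESSION expensive_subquery_limit = 100;  -- the default
  -- a constant subquery estimated to examine <= 100 rows may now be
  -- executed during optimization of the outer query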

In addition, multiple changes were needed to make this solution work
in light of the changes made by MWL#89. These changes fix
various crashes, wrong results, and legacy bugs discovered
during development.
2012-05-17 13:46:05 +03:00
Sergei Golubchik
0a8c9b98f6 merge with mysql-5.1.63 2012-05-17 12:12:33 +02:00
unknown
0520825803 Test cases for the fixed bug (LP bug#993459). 2012-05-17 10:45:20 +03:00
unknown
5a47413934 fix of LP bug#998321
The problem is that we cannot check the null_value field of a non-basic constant without executing the item.
2012-05-17 10:13:25 +03:00
Sergey Petrunya
34e9a4c1e2 Merge of recent changes in MWL#182 in 5.3 with {Merge of MWL#182 with 5.5} 2012-05-17 00:59:03 +04:00
Michael Widenius
c39da19c4a Moved maria tests to suite/maria 2012-05-16 18:46:02 +03:00
Sergey Petrunya
dfbd777fd8 MWL#182: SHOW EXPLAIN: Merge 5.3->5.5 2012-05-16 19:20:00 +04:00
Sergey Petrunya
533e1d2845 MDEV-273: SHOW EXPLAIN: server crashes in JOIN::print_explain on a query with impossible WHERE
- It turns out there is a case where the join is degenerate, but
  join->table_count != 0 && join->tables_list != NULL. We need to also check
  whether join->zero_result_cause != NULL.

- There is a slight problem: The code sets
    zero_result_cause= "no matching row in const table"
  when NOT running EXPLAIN. The result is that SHOW EXPLAIN will show this line while 
  regular EXPLAIN will not.
2012-05-16 12:49:22 +04:00
Sergei Golubchik
c17bace4f0 Added --continue-on-error to mysqltest and mysql-test-run
This will continue the test case even if there was an error,
which makes it easier to run a test that contains many sub-tests against one engine.

(originally by Monty)
2012-05-15 19:35:57 +02:00
Sergey Petrunya
382e81ca84 MDEV-270: SHOW EXPLAIN: server crashes in JOIN::print_explain on a query with
select tables optimized away
- Take into account that for some degenerate joins the code sets
  "join->tables_list=0" instead of "join->table_count=0".
2012-05-15 15:56:50 +04:00
unknown
3d37b67b2b Fix for LP bug#998516
If we did nothing to resolve a unique table conflict, we should not retry (that led to an infinite loop).
Now we retry (recheck) the unique table check only if we materialized a table.
2012-05-15 08:31:07 +03:00
Sergey Petrunya
b3841ae9e4 MDEV-267: SHOW EXPLAIN: Server crashes in JOIN::print_explain on most of queries
- Make SHOW EXPLAIN code handle degenerate subquery cases (No tables, impossible where, etc)
2012-05-14 22:39:00 +04:00
Sergey Petrunya
6d41fa0d54 BUG#998236: Assertion failure or valgrind errors at best_access_path ...
- Let fix_semijoin_strategies_for_picked_join_order() set 
  POSITION::prefix_record_count for POSITION records that it copies from
  SJ_MATERIALIZATION_INFO::tables. 
  (These records do not have prefix_record_count set, because they are optimized
   as joins-inside-semijoin-nests, without full advance_sj_state() processing).
2012-05-13 13:15:17 +04:00
Sergey Petrunya
4dff59a31b Merge 5.2->5.3 2012-05-12 12:27:26 +04:00
Sergey Petrunya
e1b6e1b899 Merge 5.2->5.3 2012-05-12 12:12:35 +04:00
Sergey Petrunya
97ae1682f1 BUG#997747: Assertion `join->best_read < ((double)1.79..5e+308L)' failed
in greedy_search with LEFT JOINs and unique keys
- Backport the fix for BUG#806524 from MariaDB 5.3
2012-05-12 11:53:14 +04:00
Sergey Petrunya
6bce336624 MDEV-240: SHOW EXPLAIN: Assertion `this->optimized == 2' failed
- Fix the bug: SHOW EXPLAIN may hit a case where a join is partially 
  optimized.
- Change JOIN::optimized to use enum instead of numeric constants
2012-05-11 18:13:06 +04:00
unknown
e10fecc02f Merge 5.2->5.3 2012-05-11 11:40:23 +03:00
Sergei Golubchik
1185420da0 5.3 merge 2012-05-21 20:54:41 +02:00
Sergei Golubchik
431e042b5d c 2012-05-21 15:30:25 +02:00
unknown
f2cbc014d9 fix for LP bug#994392
The not_null_tables() of Item_func_not_all and Item_in_optimizer was inherited from
Item_func by mistake. It made the optimizer think that subquery
predicates with ALL/ANY/IN were null-rejecting. This could trigger invalid
conversions of outer joins into inner joins.
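A hypothetical query of the affected kind (tables illustrative):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT, b INT);
  CREATE TABLE t3 (c INT);  -- left empty
  SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.a
  WHERE t2.b > ALL (SELECT c FROM t3);
  -- over an empty t3, "> ALL" is TRUE even for NULL-complemented t2 rows,
  -- so the predicate is not null-rejecting and the outer join must not be
  -- converted to an inner join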
2012-05-11 09:35:46 +03:00
Sergey Petrunya
6fae4447f0 # MDEV-239: Assertion `field_types == 0 ... ' failed in Protocol_text::store...
- Make all functions that produce parts of EXPLAIN output take 
  explain_flags as parameter, instead of looking into thd->lex->describe
2012-05-10 15:13:57 +04:00
Sergey Petrunya
58b9164f04 MDEV-238: SHOW EXPLAIN: Server crashes in JOIN::print_explain with FROM subquery and GROUP BY
- Support SHOW EXPLAIN for selects that have "Using temporary; Using filesort".
2012-05-10 13:43:48 +04:00
Sergey Petrunya
cdc9a1172d MWL#182: Explain running statements:
Make SHOW EXPLAIN work for queries that do "Using temporary" and/or "Using filesort"
- Patch#1: Don't lose "Using temporary/filesort" in the SHOW EXPLAIN output.
2012-05-10 01:45:38 +05:30
Igor Babaev
2a1afc29f2 Inverted the option --skip-stat-tables into --stat-tables.
It is set to 0 by default.
Now only the tests that use persistent statistics tables require
starting the server with --stat-tables enabled.
2012-05-08 16:42:55 -07:00
unknown
ddd3e261b2 MDEV-254: Server hang with FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT
The code to re-enable checkpointing after UNLOCK TABLES was lost in the 5.5
merge, so add it back.
2012-05-08 14:27:44 +02:00
Vladislav Vaintroub
54534a6984 MDEV-262 : log_state occasionally fails in buildbot.
The failures are missing entries in the slow query log. The reason for the failure is sleep() calls with a short duration of 10 ms, which is less than the default system timer resolution for the various WaitForXXXObject functions (15.6 ms) and thus cannot work reliably.
The fix is to make the sleeps slightly longer (20 ms instead of 10 ms) in the test.
2012-05-08 12:38:22 +02:00
Sunanda Menon
074ce71e90 Merge from mysql-5.1.63-release 2012-05-08 07:19:14 +02:00
Joerg Bruehe
5be07ceadd Merge 5.5.24 back into main 5.5.
This is a weave merge, but without any conflicts.
In 14 source files, the copyright year needed to be updated to 2012.
2012-05-07 22:20:42 +02:00
unknown
ea8314fdd5 LP bug#994275 fix.
In 5.3 we substitute constants into ref access values; a constant cannot be NULL, so we do not need to add NOT NULL for early NULL filtering.
2012-05-07 21:14:37 +03:00
Venkata Sidagam
1d47bbe3bf Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM

Merging the fix from mysql-5.1 to mysql-5.5

mysql-test/t/mysqldump.test:
  There is a difference in the test case added as
  part of this fix, compared with mysql-5.1. In mysql-5.5
  and mysql-5.6, "DROP mysql database" fails when logging is enabled,
  hence those lines were removed.
2012-05-07 16:51:26 +05:30
Venkata Sidagam
e7364ec29c Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not include dump stmts for the general_log and slow_log
tables. That is because of the fix for Bug#26121. Hence, after
dropping the mysql database and applying the dump with
logging enabled, "'general_log' table not found" errors are logged into the
server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped the dumping of the
general_log and slow_log tables, because the data dump of those tables
takes LOCKs, which is not allowed for log tables.

Fix:
----
Instead of taking both the metadata and the data dump for those
tables, take only the metadata dump, which does not need LOCKs.
As part of fixing the issue we came up with the algorithm below.
Design before the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and
   slow_log.
4) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, the
"CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
This is because we skipped "DROP TABLE" for those tables:
"DROP TABLE" fails for these tables if logging is enabled.
The customer applies the dump with logging enabled, so if the dump
has "DROP TABLE" it will fail. Hence, the "DROP TABLE"
stmts for those tables were removed.
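The dump for the log tables then has this shape (column definitions elided; they come from 'show create table'):

  -- ordinary tables keep DROP TABLE + CREATE TABLE;
  -- for general_log and slow_log only this is emitted:
  CREATE TABLE IF NOT EXISTS `general_log` ( ... );
  CREATE TABLE IF NOT EXISTS `slow_log` ( ... );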
  
After the fix we could initially observe "Table 'mysql.general_log'
doesn't exist" errors. That is because in the customer
scenario the mysql database is dropped while logging is enabled,
hence those errors are expected. Once we apply the
dump that was taken before the "drop database mysql", the errors
are gone.

client/mysqldump.c:
  In get_table_structure(), added code to skip the DROP TABLE stmts for the general_log
  and slow_log tables, because when logging is enabled those stmts will fail. Also
  replaced CREATE TABLE with CREATE TABLE IF NOT EXISTS for those tables, to make
  sure the CREATE stmt for those tables doesn't fail since we removed the DROP stmts for
  those tables.
  In dump_all_tables_in_db(), added code to call get_table_structure() for the
  general_log and slow_log tables.
mysql-test/r/mysqldump.result:
  Added a test as part of fix for Bug #11754178
mysql-test/t/mysqldump.test:
  Added a test as part of fix for Bug #11754178
2012-05-07 16:46:44 +05:30
unknown
8065143637 Fix for LP bug#993726
Optimization of aggregate functions detected a constant under max() and evaluated it, but the condition in the WHERE clause (which is always FALSE) was not taken into account.
2012-05-07 13:26:34 +03:00
unknown
213476ef3e Fix for bug lp:992405
The patch backports two patches from mysql 5.6:
- BUG#12640437: USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT QUERY OUTPUT
- Bug#12578908: SELECT SQL_BUFFER_RESULT OUTPUTS TOO MANY ROWS WHEN GROUP IS OPTIMIZED AWAY

Original comment:
-----------------
3714 Jorgen Loland	2012-03-01
      BUG#12640437 - USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT 
                     QUERY OUTPUT
      
      For all but simple grouped queries, temporary tables are used to
      resolve grouping. In these cases, the list of grouping fields is
      stored in the temporary table and grouping is resolved
      there (e.g. by adding a unique constraint on the involved
      fields). Because of this, grouping is already done when the rows
      are read from the temporary table.
      
      In the case where a group clause may be optimized away, grouping
      does not have to be resolved using a temporary table. However, if
      a temporary table is explicitly requested (e.g. because the
      SQL_BUFFER_RESULT hint is used, or the statement is
      INSERT...SELECT), a temporary table is used anyway. In this case,
      the temporary table is created with an empty group list (because
      the group clause was optimized away) and it will therefore not
      create groups. Since the temporary table does not take care of
      grouping, JOIN::group shall not be set to false in 
      make_simple_join(). This was fixed in bug 12578908. 
      
      However, there is an exception where make_simple_join() should
      set JOIN::group to false even if the query uses a temporary table
      that was explicitly requested but is not strictly needed. That
      exception is if the loose index scan access method (explain
      says "Using index for group-by") is used to read into the 
      temporary table. With loose index scan, grouping is resolved 
      by the access method. This is exactly what happens in this bug.
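A sketch of a query in the affected class (schema hypothetical):

  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  SELECT SQL_BUFFER_RESULT a, MAX(b) FROM t1 GROUP BY a;
  -- if loose index scan ("Using index for group-by") feeds the explicitly
  -- requested temporary table, grouping is already resolved by the access
  -- method, so the temporary table must not group again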
2012-05-07 11:02:58 +03:00
Igor Babaev
906c9a93a0 Supported extended keys when collecting and using persistent statistics. 2012-05-06 22:42:14 -07:00
Sergei Golubchik
867ce618cb merge 2012-05-05 14:59:44 +02:00
Venkata Sidagam
daafaa0f86 Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not include dump stmts for the general_log and slow_log
tables. That is because of the fix for Bug#26121. Hence, after
dropping the mysql database and applying the dump with
logging enabled, "'general_log' table not found" errors are logged into the
server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped the dumping of the
general_log and slow_log tables, because the data dump of those tables
takes LOCKs, which is not allowed for log tables.

Fix:
----
Instead of taking both the metadata and the data dump for those
tables, take only the metadata dump, which does not need LOCKs.
As part of fixing the issue we came up with the algorithm below.
Design before the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ... general_log,
   ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and
   slow_log.
4) Take the TL_READ lock on the tables which are present in the table
   list and do 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, the
"CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
This is because we skipped "DROP TABLE" for those tables:
"DROP TABLE" fails for these tables if logging is enabled.
The customer applies the dump with logging enabled, so if the dump
has "DROP TABLE" it will fail. Hence, the "DROP TABLE"
stmts for those tables were removed.
  
After the fix we could initially observe "Table 'mysql.general_log'
doesn't exist" errors. That is because in the customer
scenario the mysql database is dropped while logging is enabled,
hence those errors are expected. Once we apply the
dump that was taken before the "drop database mysql", the errors
are gone.

client/mysqldump.c:
  In get_table_structure(), added code to skip the DROP TABLE stmts for the general_log
  and slow_log tables, because when logging is enabled those stmts will fail. Also
  replaced CREATE TABLE with CREATE TABLE IF NOT EXISTS for those tables, to make
  sure the CREATE stmt for those tables doesn't fail since we removed the DROP stmts for
  those tables.
  In dump_all_tables_in_db(), added code to call get_table_structure() for the
  general_log and slow_log tables.
mysql-test/r/mysqldump.result:
  Added a test as part of fix for Bug #11754178
mysql-test/t/mysqldump.test:
  Added a test as part of fix for Bug #11754178
2012-05-04 18:33:34 +05:30
Vladislav Vaintroub
ab58904367 Fix FreeBSD test errors. Also link with libexecinfo on FreeBSD for stacktrace functionality. 2012-05-04 14:02:35 +02:00
Sergei Golubchik
44cf9ee5f7 5.3 merge 2012-05-04 07:16:38 +02:00
Vladislav Vaintroub
d231dc8f59 Fix (hopefully) a race condition in a test. Wait until the killed connection
is gone.
2012-05-03 18:58:48 +02:00
Michael Widenius
d555878050 automatic merge 2012-05-03 16:00:41 +03:00
unknown
c9a73aa204 Fix bug lp:993745
This is a backport of the fix for MySQL bug #13723054 in 5.6.

Original comment:
      The crash is caused by arbitrary memory area overwriting in the case of
      BLOB fields, during an attempt to copy a BLOB field key image into the record
      buffer (the record buffer is too small to hold the BLOB key part image).
      note:
      QUICK_GROUP_MIN_MAX_SELECT cannot work with BLOB fields
      because it uses the record buffer as a temporary buffer for key values;
      however, this case is filtered out by the covering_keys() check
      in get_best_group_min_max(), as BLOBs always require a key length
      modifier in the key declaration, and if the key has a BLOB
      then it cannot be a covering key.
      The fix is to use the 'max_used_key_length' key length instead of 0.

Analysis:
Specifically, the crash in this bug was a result of the call to key_copy()
that copied the whole key, including the BLOB field which is not used
for index access. Copying the BLOB field overwrote memory as far as the
function parameter 'key_info'. As a result the contents of key_info were
all 0, which resulted in a crash when this key_info was accessed a few
lines below in key_cmp().
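An illustrative table of the kind involved (hypothetical):

  CREATE TABLE t1 (a INT, b BLOB, KEY k1 (a, b(16)));
  -- the BLOB key part requires a prefix length, so k1 can never be a
  -- covering key; copying the full BLOB value instead of at most
  -- max_used_key_length bytes is what overwrote adjacent memory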
2012-05-03 14:49:52 +03:00
Vladislav Vaintroub
550d6871a5 MDEV-246 - Aborted_clients incremented during ordinary connection close
The problem was an increment of the aborted_threads variable due to thd->killed, which was set when a threadpool connection was terminated. The fix is to not set thd->killed anymore; there is no real reason for doing it.

Added a test that checks that the status variable Aborted_clients does not grow for ordinary disconnects, and that a successful KILL increments this variable.
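The counter can be checked with:

  SHOW GLOBAL STATUS LIKE 'Aborted_clients';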
2012-05-03 02:47:06 +02:00
Alexey Botchkov
ef4b9a0e57 Fix for failing gis-precise on Windows. 2012-05-03 13:14:40 +05:00
Sergei Golubchik
99e2ba4848 5.2 merge 2012-05-02 22:02:06 +02:00
Oleksandr Byelkin
8fe40c50db MDEV-214 lp:967242 Wrong result with JOIN, AND in ON condition, multi-part key, GROUP BY, subquery and OR in WHERE
The problem was in the code (update_const_equal_items()) which marked
index parts constant independently of the place where the equality was used.
In the test suite it marked the t2_1.c part constant despite the fact that
it was connected by OR with another expression.

The solution is to mark as constant only top-level equalities connected with AND.
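A minimal sketch of the distinction (tables hypothetical, modeled on t2_1 from the description):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2_1 (a INT, c INT, d INT);
  -- t2_1.c = 1 is connected by OR, so t2_1.c must NOT be marked constant:
  SELECT * FROM t1 JOIN t2_1 ON t1.a = t2_1.a
  WHERE t2_1.c = 1 OR t2_1.d = 2;
  -- here the equality is a top-level conjunct connected with AND,
  -- so it may be treated as constant:
  SELECT * FROM t1 JOIN t2_1 ON t1.a = t2_1.a
  WHERE t2_1.c = 1 AND t2_1.d = 2;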
2012-05-02 18:11:02 +02:00