mirror of https://github.com/MariaDB/server.git synced 2025-07-05 12:42:17 +03:00
Commit Graph

2689 Commits

Author SHA1 Message Date
d0762ef511 5.1 -> 5.5 merge 2011-10-05 13:55:51 +04:00
fcd99c156b Bug#11747970 34660: CRASH WHEN FEDERATED TABLE LOSES CONNECTION DURING INSERT ... SELECT
Problematic query:
insert ignore into `t1_federated` (`c1`) select `c1` from  `t1_local` a
where not exists (select 1 from `t1_federated` b where a.c1 = b.c1);
When this query is killed in another connection it could lead to a crash.
The problem is the following:
An attempt to obtain table statistics for the subselect table in the killed
query fails with an error, so JOIN::optimize() for the subquery fails, but
this does not prevent further subquery evaluation.
At the first subquery execution JOIN::optimize() is called
(see subselect_single_select_engine::exec()) and fails with
an error. The 'executed' flag is set to TRUE, which prevents
further subquery evaluation. At the second call
JOIN::optimize() does not happen because 'JOIN::optimized' is TRUE,
and in the case of an uncacheable subquery the 'executed' flag is set
back to FALSE before subquery evaluation. So we lose the 'optimize stage'
error indication (see subselect_single_select_engine::exec()).
In other words, the 'executed' flag is used for two purposes: for
error indication at the JOIN::optimize() stage and as an
indication of subquery execution. This is wrong, as the flag can be reset.
2011-10-05 13:28:20 +04:00
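A sketch of how the crash scenario above can be driven from a second connection (the connection id is a placeholder; the tables are those from the query in the commit):

  -- connection 1: run the problematic INSERT ... SELECT
  INSERT IGNORE INTO t1_federated (c1)
    SELECT c1 FROM t1_local a
    WHERE NOT EXISTS (SELECT 1 FROM t1_federated b WHERE a.c1 = b.c1);
  -- connection 2: kill it while it is still running
  KILL QUERY <connection_1_id>;
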
6ac689fb4a 5.1 -> 5.5 merge 2011-08-02 11:54:35 +04:00
de3693a1cd Bug#11766594 59736: SELECT DISTINCT.. INCORRECT RESULT WITH DETERMINISTIC FUNCTION IN WHERE C
There is an optimization of DISTINCT in JOIN::optimize()
which depends on the THD::used_tables value. Each SELECT statement
inside an SP resets the used_tables value (see mysql_select()), which
leads to a wrong result. The fix is to replace THD::used_tables
with LEX::used_tables.
2011-08-02 11:33:45 +04:00
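A minimal sketch of the kind of statement affected (hypothetical table and stored function, not the original test case):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (1), (2);
  DELIMITER //
  CREATE FUNCTION f1() RETURNS INT DETERMINISTIC
  BEGIN
    DECLARE v INT;
    SELECT 1 INTO v;   -- a SELECT inside the SP resets THD::used_tables
    RETURN v;
  END //
  DELIMITER ;
  -- the DISTINCT optimization in JOIN::optimize() could then rely on a stale value:
  SELECT DISTINCT a FROM t1 WHERE f1() = 1;
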
c74d844de3 Merge from mysql-5.5.14-release 2011-07-06 01:13:50 +02:00
fb110da7e8 BUG#12561818 - RERUN OF STORED FUNCTION GIVES ERROR 1172:
RESULT CONSISTED OF MORE THAN ONE ROW

MySQL converts incorrect DATEs and DATETIMEs to '0000-00-00' on
insertion by default. This means that this sequence is possible:

CREATE TABLE t1(date_notnull DATE NOT NULL);
INSERT INTO t1 values (NULL);
SELECT * FROM t1;
0000-00-00

At the same time, ODBC drivers do not (or at least did not in the
90's) understand the DATE and DATETIME value '0000-00-00'. Thus,
to be able to query for the value 0000-00-00 it was decided in
MySQL 4.x (or maybe even before that) that for the special case
of DATE/DATETIME NOT NULL columns, the query "SELECT ... WHERE
date_notnull IS NULL" should return rows with date_notnull ==
'0000-00-00'. This is documented misbehavior that we do not want
to change.

The hack used to make MySQL return these rows is to convert 
"date_notnull IS NULL" to "date_notnull = 0". This is, however,
only done if the table date_notnull belongs to is not an inner
table of an outer join. The rationale for this seems to be that
if there is no join match for the row in the outer table,
null-complemented rows would otherwise not be returned because
the null-complemented DATE value is actually NULL. On the other
hand, this means that the "return rows with 0000-00-00 when the
query asks for IS NULL"-hack is not in effect for outer joins.

In this bug, we have a LEFT JOIN that does not misbehave like 
the documentation says it should. The fix is to rewrite

"date_notnull IS NULL" to "date_notnull IS NULL OR 
                           date_notnull = 0"
if dealing with an OUTER JOIN, otherwise 
"date_notnull IS NULL" to "date_notnull = 0"
as was done before.

Note:
The bug was originally reported as different result on first 
and second execution of SP. The reason was that during first
execution the query was correctly rewritten to an inner join
due to a null-rejecting predicate. On second execution the
"IS NULL" -> "= 0" rewrite was done because there was no outer
join. The real problem, though, was incorrect date/datetime 
IS NULL handling for OUTER JOINs.
2011-06-10 10:22:45 +02:00
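A sketch of the outer-join shape the new rewrite covers (tables and data are illustrative):

  CREATE TABLE t1 (pk INT);
  CREATE TABLE t2 (pk INT, d DATE NOT NULL);
  INSERT INTO t1 VALUES (1), (2);
  INSERT INTO t2 VALUES (1, '0000-00-00');
  -- With the rewrite to "d IS NULL OR d = 0" for the outer join, this
  -- returns both pk = 1 (the '0000-00-00' match) and pk = 2 (the
  -- null-complemented row):
  SELECT t1.pk FROM t1 LEFT JOIN t2 ON t1.pk = t2.pk WHERE t2.d IS NULL;
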
712f2d3833 weave merge of mysql-5.5->mysql-5.5-security 2011-05-10 17:20:26 +03:00
9477fd2879 weave merge of mysql-5.1->mysql-5.1-security 2011-05-10 16:57:40 +03:00
294fb44d67 merge 5.1 => 5.5 : Bug#12329653 2011-05-05 08:13:22 +02:00
9baf84e99a merge 5.0 => 5.1 : Bug#12329653 2011-05-04 17:12:45 +02:00
a32df762d4 Bug#12329653 - EXPLAIN, UNION, PREPARED STATEMENT, CRASH, SQL_FULL_GROUP_BY
The query was re-written *after* we had tagged it with NON_AGG_FIELD_USED.
Remove the flag before continuing.
2011-05-04 16:18:21 +02:00
76f37a0235 Bug#11756928 48916: SERVER INCORRECTLY PROCESSING HAVING CLAUSES WITH AN ORDER BY CLAUSE
Before sorting, the HAVING condition is split into two parts:
the first part is a table-related condition and the rest is
the HAVING part. Extraction of the HAVING part does not take into
account the fact that some of the conditions might be non-const but
have 'used_tables' == 0 (independent subqueries),
and because of that these conditions are cut off by the
make_cond_for_table() function.
The fix is to use (table_map) 0 instead of used_tables as the
third argument to the make_cond_for_table() function.
This allows extracting both elements which belong to the sorted
table and elements which are independent subqueries.
2011-04-22 11:20:55 +04:00
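A sketch of the affected pattern: a HAVING condition mixing a column of the sorted table with a subquery that does not depend on any outer table (hypothetical schema):

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (b INT);
  -- The subquery has used_tables == 0 but is not const; before the fix it
  -- could be cut off when the HAVING condition was split for sorting:
  SELECT a, SUM(b) AS s FROM t1
  GROUP BY a
  HAVING a > 0 AND s > (SELECT AVG(b) FROM t2)
  ORDER BY s;
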
2af655c2e5 Bug#11765713 58705: OPTIMIZER LET ENGINE DEPEND ON UNINITIALIZED VALUES CREATED BY OPT_SUM_QU
Valgrind warnings were caused by comparing index values to an un-initialized field.
2011-04-14 16:35:24 +02:00
3f3318c34e Bug#11756242 48137: PROCEDURE ANALYSE() LEAKS MEMORY WHEN RETURNING NULL
There are two problems with ANALYSE():

1. Memory leak
   It happens because do_select() can overwrite the
   JOIN::procedure field (with a zero value in our case) and
   the JOIN destructor does not free the memory allocated for
   JOIN::procedure. The fix is to save the original JOIN::procedure
   before the do_select() call and restore it after do_select()
   execution.

2. Wrong result
   If the ANALYSE() procedure is used for a statement with a LIMIT clause,
   it could return an empty result set. This happens because of a missing
   analyse::end_of_records() call: the first end_send() call
   returns NESTED_LOOP_QUERY_LIMIT and the second call of end_send() with
   the end_of_records flag enabled never happens. The fix is to return
   NESTED_LOOP_OK from end_send() if a procedure is active.
2011-04-14 12:11:57 +04:00
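A sketch of the second problem (hypothetical table; PROCEDURE ANALYSE() syntax as available in 5.1/5.5):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2), (3);
  -- Before the fix this could return an empty result set: the first
  -- end_send() call hit the LIMIT and the final end_of_records call
  -- never happened:
  SELECT * FROM t1 LIMIT 1 PROCEDURE ANALYSE();
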
0f763f4f07 5.1 -> 5.5 merge 2011-04-22 11:39:42 +04:00
73c8173f44 Merge fix for Bug#11765713 from 5.1 2011-04-15 08:54:05 +02:00
625d0c6c0b 5.1 -> 5.5 merge 2011-04-14 13:10:11 +04:00
8ab3e4f3a1 Merge of fix for Bug#11766675. 2011-02-18 11:55:24 +01:00
cd4c263dc4 Bug#11766675 - 59839: Aggregation followed by subquery yields wrong result
The loop over the subqueries' references to outer fields used a
local boolean variable to tell whether the field was grouped or not. But the
implementor failed to reset the variable after each iteration. Thus a field
that was not directly aggregated appeared to be.

Fixed by resetting the variable upon each new iteration.
2011-02-18 11:50:06 +01:00
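A sketch of the query shape involved: a subquery referencing two outer fields, one aggregated and one only grouped (hypothetical table, not the original test case):

  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 10), (1, 20), (2, 30);
  -- b is aggregated in the outer query, a is only grouped; the stale
  -- flag made a look directly aggregated as well:
  SELECT a, MAX(b), (SELECT a + MAX(b)) AS s
  FROM t1
  GROUP BY a;
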
c85029f83b Merge from mysql-5.1.55-release 2011-02-08 12:52:33 +01:00
3e533efa81 Fix for bug#59308: Incorrect result for SELECT DISTINCT <col>... ORDER BY <col> DESC.
Also fix bug#59110: Memory leak of QUICK_SELECT_I allocated memory.
Includes Jørgen Løland's review comments.

Root cause of these bugs is that test_if_skip_sort_order() decided to
revert the 'skip_sort_order' decision (and use filesort) after the
query plan had been updated to reflect a 'skip' of the sort order.

This might happen in 'check_reverse_order:' if we have a
select->quick which could not be made descending by appending
a QUICK_SELECT_DESC.

The original 'save_quick' was then restored after the QEP had been modified,
which caused:

  - An incorrect 'precomputed_group_by= TRUE' may have been set,
    and not reverted, as part of the already modified QEP (Bug#59308)
  - A 'select->quick' might have been created which we fail to delete (Bug#59110).

This fix is a refactoring of test_if_skip_sort_order() where all logic
related to modification of the QEP (controlled by the argument 'bool no_changes')
is moved to the end of test_if_skip_sort_order(), and done after *all* 'test_if_skip'
checks have been performed - including the 'check_reverse_order:' checks.
      
The refactoring above contains no intentional changes to the logic which
has been moved to the end of the function.
      
Furthermore, a smaller part of the fix addresses the handling of the
select->quick objects which may already exist when we call
'test_if_skip_sort_order()' (save_quick) and of the
new select->quick's created during test_if_skip_sort_order():

  - Before a new select->quick may be created by calling ::test_quick_select(), we
    set 'select->quick= 0' to avoid that ::test_quick_select() prematurely
    deletes the save_quick. (After this call we may have both a 'save_quick'
    and a 'select->quick'.)

  - All returns from ::test_if_skip_sort_order() where we may have both a
    'save_quick' and a 'select->quick' have been changed to gotos to the
    exit points 'skiped_sort_order:' or 'need_filesort:' where we
    decide which of the QUICK_SELECTs to keep, and delete the other.
2011-02-07 10:36:21 +01:00
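A minimal sketch of the query class affected by Bug#59308 (hypothetical schema and data):

  CREATE TABLE t1 (a INT, b INT, KEY (a));
  INSERT INTO t1 VALUES (1, 1), (1, 2), (2, 1), (2, 2);
  -- test_if_skip_sort_order() first marks the sort as skippable, then
  -- reverts to filesort in 'check_reverse_order:' when the chosen quick
  -- select cannot be made descending:
  SELECT DISTINCT a FROM t1 ORDER BY a DESC;
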
c8de3bba8e Fix for bug#57030: ('BETWEEN' evaluation is incorrect)
Root cause for this bug is that the optimizer tries to detect &
optimize the special case:

'<field> BETWEEN c1 AND c1' and handle this as the condition '<field> = c1'

This was implemented inside add_key_field(.. *field, *value[]...)
which assumed 'field' to refer to a key Field, and value[] to refer to a
[low...high] constant pair. value[0] and value[1] were then compared for equality.

In a 'normal' BETWEEN condition of the form '<field> BETWEEN val1 AND val2' the
BETWEEN operation is represented with an argument list containing the
values [<field>, val1, val2] - add_key_field() is then called with
parameters field=<field>, *value=val1.

However, if the BETWEEN predicate specified:

 1)  '<const1> BETWEEN <const2> AND <field>'

the 'field' and 'value' arguments to add_key_field() had to be swapped.
This was implemented by trying to cheat add_key_field() into handling it like:

 2) '<const1> GE <const2> AND <const1> LE <field>'

As we didn't really replace the BETWEEN operation with 'ge' and 'le',
add_key_field() still handled it as a 'BETWEEN' and compared the (swapped)
arguments <const1> and <const2> for equality. If they were equal, the
condition 1) was incorrectly 'optimized' to:

 3) '<field> EQ <const1>'

This fix moves the optimization of '<field> BETWEEN c1 AND c1' into
add_key_fields() which then calls add_key_equal_fields() to collect
key equality / comparison for the key fields in the BETWEEN condition.
2011-02-01 13:20:16 +01:00
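A sketch contrasting the two shapes discussed above (hypothetical table):

  CREATE TABLE t1 (f INT, KEY (f));
  -- Normal shape: argument list is [<field>, val1, val2]; safe to treat as f = 5:
  SELECT * FROM t1 WHERE f BETWEEN 5 AND 5;
  -- Swapped shape 1): equivalent to f >= 5, and before the fix it was
  -- incorrectly 'optimized' to f = 5:
  SELECT * FROM t1 WHERE 5 BETWEEN 5 AND f;
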
a3acdfacd1 Updating header copyright/README in source for 2011 2011-01-25 15:42:40 +01:00
fc42cbaca3 Bug#58207: invalid memory reads when using default column value and
tmptable needed

The function DEFAULT() works by modifying the data buffer pointers (often
referred to as 'record' or 'table record') of its argument. This modification
is done during name resolution (fix_fields()). Unfortunately, the same
modification is done when creating a temporary table, because default values
need to propagate to the new table.

Fixed by skipping the pointer modification for fields that are arguments to
the DEFAULT function.
2011-01-12 09:55:31 +01:00
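A sketch of a statement that both calls DEFAULT() and needs a temporary table (hypothetical table; DISTINCT is just one way to force the tmp table):

  CREATE TABLE t1 (a INT DEFAULT 42, b INT);
  INSERT INTO t1 (b) VALUES (1), (2);
  -- Creating the temporary table must not repeat the record-pointer shift
  -- that DEFAULT(a) already applied during fix_fields():
  SELECT DISTINCT DEFAULT(a), b FROM t1;
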
1c32b8ee3c weave merge from mysql-5.1 to mysql-5.5
Resolved an innodb conflict thanks to vasil.
2011-02-08 17:47:33 +02:00
d7e3a54271 Merge of fix for bug#59308 from mysql-5.1 -> mysql-5.5 2011-02-07 10:40:42 +01:00
1d6261c5c3 Fix for bug#58490, 'Incorrect result in multi level OUTER JOIN
in combination with IS NULL'
      
As this bug is a duplicate of bug#49322, it also includes test cases
covering that bug report.

Qualifying an OUTER JOIN with the condition 'WHERE <column> IS NULL',
where <column> is declared as 'NOT NULL', causes the
'not_exists_optimize' to be enabled by the optimizer.

In evaluate_join_record() the 'not_exists_optimize' caused
'NESTED_LOOP_NO_MORE_ROWS' to be returned immediately
when a matching row was found.

However, as the 'not_exists_optimize' is derived from
'JOIN_TAB::select_cond', the usual rules for condition guards
also apply to 'not_exists_optimize'. It is therefore incorrect
to check 'not_exists_optimize' without ensuring that all guards
protecting it are 'open'.

This fix uses the fact that 'not_exists_optimize' is derived from
an 'is_null' predicate term in 'tab->select_cond'. Furthermore,
'is_null' will evaluate to 'false' for any 'non-null' rows
once all guards protecting the is_null are open.

We can use this knowledge as an implicit guard check for the
'not_exists_optimize' by moving 'if (...not_exists_optimize)'
inside the handling of 'select_cond==false'. It will then
not take effect before its guards are open.

We also add an assert which requires that a
'not_exists_optimize' always comes together with
a select_cond (containing 'is_null').
2011-02-01 15:19:34 +01:00
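A sketch of the multi-level outer join shape that enables 'not_exists_optimize' (hypothetical tables):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT NOT NULL);
  CREATE TABLE t3 (a INT NOT NULL);
  -- t3.a is NOT NULL and tested with IS NULL, so 'not_exists_optimize'
  -- applies; the inner IS NULL is guarded and must not cut the nested
  -- loop short before its guards are open:
  SELECT * FROM t1
    LEFT JOIN (t2 LEFT JOIN t3 ON t3.a = t2.a) ON t2.a = t1.a
  WHERE t3.a IS NULL;
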
83817644c6 Merge 2011-02-01 13:23:28 +01:00
f4adb7c6e4 Fix for bug#58553, "Queries with pushed conditions causes 'explain extended'
to crash mysqld". 
      
handler::pushed_cond was not always properly reset when table objects were
recycled via the table cache.

handler::pushed_cond is now set to NULL in handler::ha_reset(). This should
prevent pushed conditions from (incorrectly) re-appearing in later queries.
2011-01-11 12:09:54 +01:00
0bb9123f64 automerge 2011-01-07 15:28:36 +02:00
970c3fb244 Fix for #58422: Incorrect result when OUTER JOIN'ing with an empty table.
Fixed incorrect checks in join_read_const_table() for when to
accept a non-existing, or empty, const row as a part of the const'ified
set of tables.

The intention of this check is to only accept NULL rows if this table is outer
joined into the result set. (In case of an inner join we can conclude at this
point that the result set will be empty, and we want to return 'error' to
signal this.)

Initially 'maybe_null' is set to the same value as 'outer_join' in
setup_table_map(), mysql_priv.h ~line 2424. Later simplify_joins() will
attempt to replace outer joins by inner joins whenever possible. This
will cause 'outer_join' to be updated. However, 'maybe_null' is *not* updated
to reflect this rewrite, as this field is used to correctly set the 'nullability'
property for the columns in the result set.

We should therefore change join_read_const_table() to check the 'outer_join'
property instead of 'maybe_null', as this correctly reflects the nullability of
the *execution plan* (not the *result set*).
2011-01-13 11:42:48 +01:00
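A sketch of the case being fixed: an empty table that is const'ified and outer joined (hypothetical tables):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);          -- left empty, so it can be const'ified
  INSERT INTO t1 VALUES (1), (2);
  -- The empty const row must be accepted as a NULL row because t2 is
  -- outer joined; only for an inner join may the optimizer conclude the
  -- result set is empty:
  SELECT t1.a, t2.a FROM t1 LEFT JOIN t2 ON t2.a = t1.a;
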
ba38eef6dc Fix for bug#58134: 'Incorrectly condition pushdown inside subquery to NDB engine'
An incorrect 'table_map' containing both the table itself,
and possibly any outer-refs if this was the last table in
the subquery, was presented to make_cond_for_table().

As a pushed condition is only able to refer to columns from the table
the condition is pushed to, nothing other than columns from the
table itself (tab->table->map) may be referred to in the pushed condition
constructed by 'push_cond= make_cond_for_table()'.

Also fix a minor 'copy and paste' bug in a comment
inside make_cond_for_table().

No test case is possible on the main branch as the NDB engine is not
available (yet) on mysql >= 5.5.
2011-01-13 10:20:45 +01:00
b48abbc5e1 Merge of fix for Bug#58207. 2011-01-12 10:31:41 +01:00
b7e3f45011 Merge of fix for bug#58553, "Queries with pushed conditions causes 'explain
extended' to crash mysqld" (see http://lists.mysql.com/commits/128409).
2011-01-11 12:33:28 +01:00
94cde4c951 Merge 2010-12-29 01:26:31 +01:00
920d185fd8 Merge 2010-12-29 00:47:05 +01:00
fddb1f1b13 - Added/updated copyright headers
- Removed files specific to compiling on OS/2
- Removed files specific to SCO Unix packaging
- Removed "libmysqld/copyright", text is included in documentation
- Removed LaTeX headers for NDB Doxygen documentation
- Removed obsolete NDB files
- Removed "mkisofs" binaries
- Removed the "cvs2cl.pl" script
- Changed a few GPL texts to use "program" instead of "library"
2010-12-28 19:57:23 +01:00
209df51cc9 Bug #59021 Valgrind warning in key_unpack()
Introduced by fix for Bug#57687
2010-12-20 10:00:14 +01:00
ef0a01abfc BUG#58985: Assertion tab->quick->index != 64 failed in make_join_select()
in sql_select.cc

Follow-up patch. Add sanity check for quick select when it is
decided that it should be used.
2010-12-17 13:52:39 +01:00
f7dc30c843 BUG#58985: Assertion tab->quick->index != 64 failed in
make_join_select() in sql_select.cc

Caused by an incorrect ASSERT introduced by BUG#58456. Quick selects
may have index == MAX_KEY if they merge indices.
2010-12-17 10:02:24 +01:00
664f06de7c BUG#58456 - Assertion 0 in QUICK_INDEX_MERGE_SELECT::need_sorted_output
in opt_range.h

In this bug, there are two alternative access plans: 
 * Index merge range access
 * Const ref access

best_access_path() decided that the ref access was preferable,
but make_join_select() still decided to point 
SQL_SELECT::quick to the index merge because the table had 
type==JT_CONST which was not handled. 

At the same time the table's ref.key still referred to the 
index the ref access would use indicating that ref access 
should be used. In this state, different parts of the 
optimizer code have different perceptions of which access path
is in use (ref or range).

test_if_skip_sort_order() was called to check if the ref access
needed ordering, but test_if_skip_sort_order() got confused and
requested the index merge to return records in sorted order. 
Index merge cannot do this, and fired an ASSERT.

The fix is to take join_tab->type==JT_CONST into consideration
when make_join_select() decides whether or not to use the 
range access method.
2010-12-16 12:25:02 +01:00
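A sketch of a query where both plans exist: a const (unique) lookup and an index merge over two secondary keys (hypothetical schema, not the original test case):

  CREATE TABLE t1 (pk INT PRIMARY KEY, a INT, b INT, KEY (a), KEY (b));
  -- pk = 1 makes the table JT_CONST, while (a = 2 OR b = 3) also allows
  -- an index-merge range plan; the ORDER BY then triggers
  -- test_if_skip_sort_order():
  SELECT * FROM t1 WHERE pk = 1 AND (a = 2 OR b = 3) ORDER BY a;
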
cd36a6a5d5 Fixed following problems:
--Bug#52157 various crashes and assertions with multi-table update, stored function
--Bug#54475 improper error handling causes cascading crashing failures in innodb/ndb
--Bug#57703 create view cause Assertion failed: 0, file .\item_subselect.cc, line 846
--Bug#57352 valgrind warnings when creating view
--Recently discovered problem where a nested materialized derived table is used
  before being populated, leading to an incorrect result

We have several modes in which we should disable subquery evaluation.
The reasons for disabling differ: the evaluation may be
useless, as in the case of 'CREATE VIEW'
or 'PREPARE stmt'; we may need to disable subquery evaluation
because tables are not locked yet, as happens in bug#54475; or
too early evaluation of subqueries can lead to a wrong result,
as happened in Bug#19077.
The main problem is that if subquery items are treated as const
they are evaluated in ::fix_fields(), ::fix_length_and_dec()
of the parent items, as a lot of these methods have
Item::val_...() calls inside.
We have to make subqueries non-const to prevent unnecessary
subquery evaluation. At the moment we have different methods
for this. Here is a list of these modes:

1. PREPARE stmt;
We use the UNCACHEABLE_PREPARE flag.
It is set during parsing in sql_parse.cc, mysql_new_select(), for
each SELECT_LEX object and cleared at the end of PREPARE in
sql_prepare.cc, init_stmt_after_parse(). If this flag is set,
the subquery becomes non-const and evaluation does not happen.

2. CREATE|ALTER VIEW, SHOW CREATE VIEW, I_S tables which
   process FRM files
We use the LEX::view_prepare_mode field. We set it before
view preparation and check this flag in
::fix_fields(), ::fix_length_and_dec().
Some bugs are fixed using this approach,
some are not (Bug#57352, Bug#57703). The problem here is
that we have a lot of ::fix_fields(), ::fix_length_and_dec()
methods where we use Item::val_...() calls for const items.

3. Derived tables with subquery = wrong result (Bug#19077)
The reason for this bug is too early subquery evaluation.
It was fixed by adding the Item::with_subselect field.
The check of this field in appropriate places prevents
const item evaluation if the item has a subquery.
The fix for Bug#19077 fixes only the problem with the
convert_constant_item() function and does not cover
other places (::fix_fields(), ::fix_length_and_dec() again)
where subqueries could be evaluated.

Example:
CREATE TABLE t1 (i INT, j BIGINT);
INSERT INTO t1 VALUES (1, 2), (2, 2), (3, 2);
SELECT * FROM (SELECT MIN(i) FROM t1
WHERE j = SUBSTRING('12', (SELECT * FROM (SELECT MIN(j) FROM t1) t2))) t3;
DROP TABLE t1;

4. Derived tables with subquery where the subquery
   is evaluated before table locking (Bug#54475, Bug#52157)

The suggested solution is the following:

-Introduce a new field LEX::context_analysis_only with the following
 possible flags:
 #define CONTEXT_ANALYSIS_ONLY_PREPARE 1
 #define CONTEXT_ANALYSIS_ONLY_VIEW    2
 #define CONTEXT_ANALYSIS_ONLY_DERIVED 4
-Set/clear these flags when we perform a
 context analysis operation
-Item_subselect::const_item() returns a
 result depending on LEX::context_analysis_only.
 If context_analysis_only is set then we return
 FALSE, which means that the subquery is non-const.
 As all subquery types are wrapped by Item_subselect,
 this allows us to make a subquery non-const when
 necessary.
2010-12-14 12:33:03 +03:00
a2aa73d92a 5.1-bugteam->5.5-bugteam merge 2010-12-14 13:46:00 +03:00
4e9fb2c76f Bug #57954: BIT_AND function returns incorrect results
when semijoin=on

When setting the aggregate function as having no rows to report,
the function no_rows_in_result() was calling Item_sum::reset().
However this function, in addition to cleaning up the aggregate
value by calling aggregator_clear(), was also adding the current
value to the aggregate value by calling aggregator_add().
Fixed by making no_rows_in_result() call aggregator_clear()
directly.
Renamed Item_sum::reset to Item_sum::reset_and_add()
and added a comment to avoid misinterpretation of what the
function does.
2010-12-08 14:28:06 +02:00
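A sketch of the shape affected when the semijoin optimization is enabled (hypothetical tables; with no matching rows, BIT_AND() must report its neutral value with all bits set):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  INSERT INTO t1 VALUES (1);
  -- t2 is empty, so the aggregate has no rows to report; the accidental
  -- aggregator_add() made the result depend on a stale current value:
  SELECT BIT_AND(t1.a) FROM t1 WHERE t1.a IN (SELECT a FROM t2);
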
96d45ed2f6 merge 2010-11-26 16:32:51 +02:00
c5987223db merge 2010-11-26 14:51:48 +02:00
3586f7727f backport: Bug #55568 from 5.1-security to 5.0-security
> revision-id: alexey.kopytov@sun.com-20100824103548-ikm79qlfrvggyj9h
> parent: sunny.bains@oracle.com-20100816001222-xqc447tr6jwh8c53
> committer: Alexey Kopytov <Alexey.Kopytov@Sun.com>
> branch nick: 5.1-security
> timestamp: Tue 2010-08-24 14:35:48 +0400
> message:
>   Bug #55568: user variable assignments crash server when used
>               within query
>   
>   The server could crash after materializing a derived table
>   which requires a temporary table for grouping.
>   
>   When destroying the temporary table used to execute a query for
>   a derived table, JOIN::destroy() did not clean up Item_fields
>   pointing to fields in the temporary table. This led to
>   dereferencing a dangling pointer when printing out the items
>   tree later in the outer SELECT.
>   
>   The solution is an addendum to the patch for bug37362: in
>   addition to cleaning up items in tmp_all_fields3, do the same
>   for items in tmp_all_fields1, since now we have an example
>   where this is necessary.
2010-11-23 00:29:47 +03:00
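A sketch of the quoted scenario: a user variable assignment reading from a derived table that needs a temporary table for grouping (hypothetical table):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2), (2);
  -- The derived table is materialized into a temporary table for the
  -- GROUP BY; Item_fields referring to that table must be cleaned up in
  -- JOIN::destroy():
  SELECT @v := d.a FROM (SELECT a FROM t1 GROUP BY a) AS d;
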
b5586c67ec Fix for Bug#56138 "valgrind errors about overlapping memory when double-assigning same variable",
and related small fixes.
2010-11-22 09:57:59 +01:00
b3232453a1 merge of 5.1-bugteam 2010-11-22 10:13:46 +01:00
b318882949 Bug#52711 Segfault when doing EXPLAIN SELECT with union...order by (select... where...)
backport from 5.1
2010-11-08 13:51:39 +03:00
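A sketch matching the shape in the bug title (hypothetical table, not the original test case):

  CREATE TABLE t1 (a INT);
  EXPLAIN SELECT a FROM t1
  UNION SELECT a FROM t1
  ORDER BY (SELECT a FROM t1 WHERE a = 1);
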