mirror of https://github.com/MariaDB/server.git synced 2025-07-24 19:42:23 +03:00
Commit Graph

2747 Commits

Author SHA1 Message Date
a2aa73d92a 5.1-bugteam->5.5-bugteam merge 2010-12-14 13:46:00 +03:00
4e9fb2c76f Bug #57954: BIT_AND function returns incorrect results
when semijoin=on

When marking an aggregate function as having no rows to report,
no_rows_in_result() called Item_sum::reset(). However, in addition
to clearing the aggregate value by calling aggregator_clear(), that
function also added the current value to the aggregate by calling
aggregator_add().
Fixed by making no_rows_in_result() call aggregator_clear() directly.
Renamed Item_sum::reset() to Item_sum::reset_and_add() and added a
comment to avoid misinterpretation of what the function does.
2010-12-08 14:28:06 +02:00
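Not the original test case, but a hypothetical sketch (tables and values invented here) of the query shape this class of bug affects:
--
CREATE TABLE t1 (a INT, b INT);
CREATE TABLE t2 (b INT);
INSERT INTO t1 VALUES (3, 1), (5, 1);
INSERT INTO t2 VALUES (1);
-- No row of t1 satisfies both the semijoin and a = 100, so the
-- aggregate is computed over an empty set and must return the
-- documented neutral value of BIT_AND (18446744073709551615),
-- not a value left over from a previous evaluation.
SELECT BIT_AND(t1.a) FROM t1
WHERE t1.b IN (SELECT t2.b FROM t2) AND t1.a = 100;
DROP TABLE t1, t2;
--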
96d45ed2f6 merge 2010-11-26 16:32:51 +02:00
c5987223db merge 2010-11-26 14:51:48 +02:00
3586f7727f backport: Bug #55568 from 5.1-security to 5.0-security
> revision-id: alexey.kopytov@sun.com-20100824103548-ikm79qlfrvggyj9h
> parent: sunny.bains@oracle.com-20100816001222-xqc447tr6jwh8c53
> committer: Alexey Kopytov <Alexey.Kopytov@Sun.com>
> branch nick: 5.1-security
> timestamp: Tue 2010-08-24 14:35:48 +0400
> message:
>   Bug #55568: user variable assignments crash server when used
>               within query
>   
>   The server could crash after materializing a derived table
>   which requires a temporary table for grouping.
>   
>   When destroying the temporary table used to execute a query for
>   a derived table, JOIN::destroy() did not clean up Item_fields
>   pointing to fields in the temporary table. This led to
>   dereferencing a dangling pointer when printing out the items
>   tree later in the outer SELECT.
>   
>   The solution is an addendum to the patch for bug37362: in
>   addition to cleaning up items in tmp_all_fields3, do the same
>   for items in tmp_all_fields1, since now we have an example
>   where this is necessary.
2010-11-23 00:29:47 +03:00
b5586c67ec Fix for Bug#56138 "valgrind errors about overlapping memory when double-assigning same variable",
and related small fixes.
2010-11-22 09:57:59 +01:00
b3232453a1 merge of 5.1-bugteam 2010-11-22 10:13:46 +01:00
b318882949 Bug#52711 Segfault when doing EXPLAIN SELECT with union...order by (select... where...)
backport from 5.1
2010-11-08 13:51:39 +03:00
9d16003a00 Merge from mysql-5.5-bugteam to mysql-5.5-runtime
No conflicts
2010-11-01 12:10:04 +01:00
89f9f6a40d manual merge 5.1-bugteam --> 5.5-bugteam (bug 52160) 2010-11-01 02:23:37 +03:00
0389c6aac0 Bug #52160: crash and inconsistent results when grouping
by a function and column

The bug report reveals two different bugs related to grouping
on a function:

1) grouping by the TIME_TO_SEC() function result caused
   a server crash or wrong results, and
2) grouping by a function returning a blob caused
   an unexpected "Duplicate entry" error and wrong
   results.

Details for the 1st bug:

TIME_TO_SEC() returns NULL if its argument is invalid (empty
string for example). Thus its nullability depends not only
on the nullability of its arguments but also on their values.
Fixed by (over-optimistically) marking TIME_TO_SEC() as nullable
regardless of the nullability of its arguments.

Details for the 2nd bug:

The server cannot create indexes on blobs without an explicit
blob key part length. However, this fact was ignored for blob
function result fields in GROUP BY intermediate tables.
Fixed by disabling GROUP BY index creation for blob function
result fields, just as for regular blob fields.
2010-10-31 19:04:38 +03:00
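Hypothetical queries (table and data invented here) in the spirit of the two cases described above:
--
CREATE TABLE t1 (t TIME, b BLOB);
INSERT INTO t1 VALUES ('10:00:00', 'x'), (NULL, 'y');
-- case 1: TIME_TO_SEC() may return NULL for invalid input, so the
-- grouping field must be treated as nullable
SELECT COUNT(*) FROM t1 GROUP BY TIME_TO_SEC(t);
-- case 2: the function result is a blob; the GROUP BY intermediate
-- table must not build an index on it without a key part length
SELECT COUNT(*) FROM t1 GROUP BY CONCAT(b, 'suffix');
DROP TABLE t1;
--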
e3917c3d43 Bug#57688 Assertion `!table || (!table->write_set || bitmap_is_set(table->write_set, field
The lines below, added in the patch for Bug#56814, cause this crash:

+      if (table->table)
+        table->table->maybe_null= FALSE;

Consider the following test case:
--
CREATE TABLE t1(f1 INT NOT NULL);
INSERT INTO t1 VALUES (16777214),(0);

SELECT COUNT(*) FROM t1 LEFT JOIN t1 t2
ON 1 WHERE t2.f1 > 1 GROUP BY t2.f1;

DROP TABLE t1;
--

We set TABLE::maybe_null to FALSE for table t2, so in
create_tmp_field() the tmp table field is created using the
create_tmp_field_from_item() function instead of
create_tmp_field_from_field(). As a result we get a
LONGLONG field. Because of the GROUP BY clause we calculate the
group buffer length, see calc_group_buffer().
The item from the group list that is used for this calculation
refers to the field of the real table and has LONG type,
so the group buffer length becomes insufficient for storing a
LONGLONG value. This leads to overwriting the wrong memory
area in the do_field_int() function, which is called from
end_update().
After some investigation I found out that
create_tmp_field_from_item() is used only for OLAP
grouping and cannot be used for common grouping,
as there could be an incompatibility between the tmp
table fields and the group buffer length.
We cannot remove the create_tmp_field_from_item() call from
create_tmp_field() as OLAP needs it, and we cannot use this
function for common grouping. So we should remove the setting of
TABLE::maybe_null to FALSE from simplify_joins().
That brings back the wrong behaviour of
list_contains_unique_index(). To fix it we
use a Field::real_maybe_null() check instead of
Field::maybe_null() and add an additional check of
TABLE_LIST::outer_join.
2010-10-29 12:23:06 +04:00
ae6801eb23 Bug#37780: Make KILL reliable (main.kill fails randomly)
- A prerequisite cleanup patch for making KILL reliable.

The test case main.kill did not work reliably.

The following problems have been identified:

1. A kill signal could get lost if it arrived shortly before a
thread started reading from the client connection.

2. A kill signal could get lost if it arrived shortly before a
thread started waiting on a condition variable.

These problems have been solved as follows. Please see also
added code comments for more details.

1. There is no safe way to detect when a thread enters the
blocking state of a read(2) or recv(2) system call, where it
can be interrupted by a signal. Hence it is not possible to
wait for the right moment to send a kill signal. It was decided
not to fix this in the code. Instead, the test case
repeats the KILL statement until the connection terminates.

2. Before waiting on a condition variable, we register it
together with a synchronizing mutex in THD::mysys_var. After
this, we need to test THD::killed again. In some places we only
tested it in a loop condition before the registration. When
THD::killed had been set between this test and the registration,
we started waiting without noticing the killed flag. Additional
checks have been introduced where required.

In addition to the above, a re-write of the main.kill test
case has been done. All sleeps have been replaced by Debug
Sync Facility synchronization. A couple of sync points have
been added to the server code.

To avoid further problems, if the test case fails in spite of
the fixes, the test case has been added to the "experimental"
list for now.

- Most of the work on this patch is authored by Ingo Struewing
2010-10-22 09:58:09 -02:00
e6472e8fed Bug#56814 Explain + subselect + fulltext crashes server
The create_sort_index() function overwrites the original JOIN_TAB::type field.
On re-execution of the subquery the overwritten JOIN_TAB::type (JT_ALL) is
used instead of JT_FT. This misleads test_if_skip_sort_order(), and
the function tries to find a suitable key for the ordering, which must
not be allowed for a FULLTEXT (JT_FT) table.
The fix is to restore the JOIN_TAB structures for the subselect on
re-execution for EXPLAIN.
Additional fix:
update the TABLE::maybe_null field, which
affects list_contains_unique_index() behaviour, as it
could hold the value maybe_null==TRUE based on the
assumption that this join is an outer join
(see the setup_table_map() function).
2010-10-18 16:12:27 +04:00
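A hypothetical sketch (names invented here) of the scenario described: a subselect driven by a FULLTEXT index, run under EXPLAIN:
--
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT, txt TEXT, FULLTEXT KEY ft (txt)) ENGINE=MyISAM;
-- The subselect uses the FULLTEXT (JT_FT) access method; the ORDER BY
-- triggers create_sort_index(), which overwrote JOIN_TAB::type.
EXPLAIN SELECT * FROM t1
WHERE t1.a IN (SELECT t2.a FROM t2
               WHERE MATCH(t2.txt) AGAINST ('word')
               ORDER BY t2.a);
DROP TABLE t1, t2;
--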
9a8f22fa2d Bug#54484 explain + prepared statement: crash and Got error -1 from storage engine
The subquery is executed twice, during the top-level JOIN::optimize and
::execute stages.
During the first execution the create_sort_index() function is called and
an FT_SELECT object is created and destroyed. HANDLER::ft_handler is cleaned up
in the object destructor, so during the second execution the
FT_SELECT::get_next() method returns an error.
The fix is to reinitialize the HANDLER::ft_handler field before re-execution
of the subquery.
2010-10-18 14:47:26 +04:00
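A hypothetical prepared-statement sketch (names invented here) of the re-execution pattern described, where the FULLTEXT subquery runs once per EXECUTE:
--
CREATE TABLE t1 (a INT, txt TEXT, FULLTEXT KEY ft (txt)) ENGINE=MyISAM;
PREPARE stmt FROM
  'EXPLAIN SELECT * FROM t1
   WHERE a IN (SELECT a FROM t1 WHERE MATCH(txt) AGAINST (''word'')
               ORDER BY a)';
EXECUTE stmt;  -- first execution creates and destroys the FT_SELECT object
EXECUTE stmt;  -- second execution needs HANDLER::ft_handler reinitialized
DEALLOCATE PREPARE stmt;
DROP TABLE t1;
--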
539291cde9 merged mysql-5.1 into mysql-5.1-bugteam 2010-10-05 11:11:56 +03:00
216418e7b2 merge 2010-09-29 17:26:32 +03:00
061769d7b6 merge 2010-09-13 16:07:50 +02:00
640454b373 merge 2010-09-13 15:56:56 +02:00
c09489ebb2 Merge of fix for Bug#50394. 2010-09-13 14:46:55 +02:00
20bdf7636b Bug #50394: Regression in EXPLAIN with index scan, LIMIT, GROUP BY and
ORDER BY computed col
      
GROUP BY implies ORDER BY in the MySQL dialect of SQL. Therefore, when an
index on the first table in the query is used, and that index satisfies
ordering according to the GROUP BY clause, the query optimizer estimates the
number of tuples that need to be read from this index. If there is a LIMIT
clause, table statistics on tables following this 'sort table' are employed.

However, there may be a separate ORDER BY clause that mandates reading
the whole 'sort table' anyway, while the previous estimate was left untouched.

Fixed by removing the estimate from EXPLAIN output if GROUP BY is used in
conjunction with an ORDER BY clause that mandates using a temporary table.
2010-09-13 13:33:19 +02:00
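A hypothetical query shape (invented here) matching the description: GROUP BY satisfied by an index on the first table, plus an ORDER BY on a computed column that forces a temporary table:
--
CREATE TABLE t1 (a INT, b INT, KEY (a));
-- GROUP BY a can use the index on a, but ordering by the computed
-- column c requires a temporary table, so the whole 'sort table' is
-- read anyway and the index-based row estimate no longer applies.
EXPLAIN SELECT a, SUM(b) AS c FROM t1 GROUP BY a ORDER BY c LIMIT 5;
DROP TABLE t1;
--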
4770f8d415 merge 2010-09-10 11:50:38 +02:00
3d5b9792e6 Bug#54543: update ignore with incorrect subquery leads to assertion failure:
inited==INDEX

When an error occurred while sending the data from a temporary table, no
cleanup was performed. This caused an assertion failure when different
access methods were used for populating the table vs. retrieving the data
from it, if IGNORE was specified and sql_safe_updates = 0. In that case
execution continues, but the handler expects to continue with the access
method used for row retrieval.

Fixed by doing the cleanup even if errors occur.
2010-09-07 09:58:05 +02:00
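A hypothetical sketch (invented here, not the original test) of the conditions described: IGNORE, sql_safe_updates = 0, and a subquery that raises an error during execution:
--
SET SESSION sql_safe_updates = 0;
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT);
INSERT INTO t1 VALUES (1);
INSERT INTO t2 VALUES (1), (2);
-- The scalar subquery returns more than one row; with IGNORE the error
-- is downgraded and execution continues, so the temporary table used
-- by the subquery must still be cleaned up correctly.
UPDATE IGNORE t1 SET a = (SELECT a FROM t2 GROUP BY a);
DROP TABLE t1, t2;
--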
684c3e9e3d merge from 5.5-merge 2010-09-02 16:57:59 +03:00
edbae904ff Merge from mysql-5.1-bugteam to mysql-5.1-security 2010-09-01 17:43:02 -07:00
ab2414577a Merge 5.1-bugteam to 5.5-merge. 2010-08-27 15:33:32 +04:00
50150fd66a Bug#53806: Wrong estimates for range query in partitioned MyISAM table
Bug#46754: 'rows' field doesn't reflect partition pruning

The value of EXPLAIN's 'rows' field
was taken from the number of rows counted when the table was opened
(not from the table cache), and only the partitions left
after pruning were updated with their correct number
of rows.

The evaluation of the 'rows' field used handler::records(),
which is a potentially expensive call and ignores
partition pruning.

The fix is to use the handler's stats.records instead, after
updating it with ::info(HA_STATUS_VARIABLE).
2010-08-26 17:14:18 +02:00
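A hypothetical partitioned-table sketch (invented here) of the behaviour described, where the 'rows' estimate should reflect only the partitions left after pruning:
--
CREATE TABLE pt (a INT, KEY (a)) ENGINE=MyISAM
PARTITION BY RANGE (a)
(PARTITION p0 VALUES LESS THAN (100),
 PARTITION p1 VALUES LESS THAN (200),
 PARTITION p2 VALUES LESS THAN MAXVALUE);
INSERT INTO pt VALUES (10), (110), (210);
-- Only partition p1 survives pruning, so the 'rows' column should be
-- based on p1's statistics, not on the whole table.
EXPLAIN PARTITIONS SELECT * FROM pt WHERE a BETWEEN 100 AND 150;
DROP TABLE pt;
--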
d63b9feb10 Automerge. 2010-08-26 14:17:27 +04:00
6c6a3e8f44 Bug #53544: Server hangs during JOIN query in stored procedure
called twice in a row

Queries with nested joins could cause an infinite loop in the
server when used from SP/PS.

When flattening nested joins, simplify_joins() tracks if the
name resolution list needs to be updated by setting
fix_name_res to TRUE if the current loop iteration has done any
transformations to the join table list. The problem was that
the flag was not reset before the next loop iteration leading
to unnecessary "fixing" of the name resolution list which in
turn could lead to a loop (i.e. circularly-linked part) in that
list. This was causing problems on subsequent execution when
used together with stored procedures or prepared statements.

Fixed by making sure fix_name_res is reset on every loop
iteration.
2010-08-26 14:13:02 +04:00
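A hypothetical sketch (invented here) of the scenario: a query with nested joins inside a stored procedure, called twice in a row so the join table list is transformed again on re-execution:
--
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT);
CREATE TABLE t3 (a INT);
CREATE PROCEDURE p1()
  SELECT * FROM t1
  LEFT JOIN (t2 JOIN t3 ON t2.a = t3.a) ON t1.a = t2.a;
CALL p1();
CALL p1();  -- second call re-runs simplify_joins() on the saved join list
DROP PROCEDURE p1;
DROP TABLE t1, t2, t3;
--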
151af144ff Bug #55656: mysqldump can be slower after bug 39653 fix.
After the fix for bug 39653 the shortest available secondary index was used
for a full table scan; the clustered primary key was used only if no secondary
index could be used. However, when the chosen secondary index includes all
fields of the table being scanned, it is better to use the primary index,
since the amount of data to scan is the same but the primary index is
clustered. Now the find_shortest_key() function takes this into account.
2010-08-26 13:31:04 +04:00
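A hypothetical example (invented here) of the case described: a covering secondary index exists, but the clustered primary key should now be preferred for a full scan:
--
CREATE TABLE t1 (
  a INT NOT NULL,
  b INT NOT NULL,
  PRIMARY KEY (a),
  KEY k_ba (b, a)
) ENGINE=InnoDB;
-- The secondary index k_ba includes all columns of the table, but the
-- amount of data to scan is the same either way and the primary key is
-- clustered, so find_shortest_key() should prefer the primary key.
EXPLAIN SELECT a, b FROM t1;
DROP TABLE t1;
--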
947c7f3029 Automerge. 2010-08-24 14:44:15 +04:00
0e74ac5028 Bug #55568: user variable assignments crash server when used
within query

The server could crash after materializing a derived table
which requires a temporary table for grouping.

When destroying the temporary table used to execute a query for
a derived table, JOIN::destroy() did not clean up Item_fields
pointing to fields in the temporary table. This led to
dereferencing a dangling pointer when printing out the items
tree later in the outer SELECT.

The solution is an addendum to the patch for bug37362: in
addition to cleaning up items in tmp_all_fields3, do the same
for items in tmp_all_fields1, since now we have an example
where this is necessary.
2010-08-24 14:35:48 +04:00
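A hypothetical query shape (invented here) in the spirit of the report: a user variable assignment applied over a derived table that needs a temporary table for grouping:
--
CREATE TABLE t1 (a INT, b INT);
INSERT INTO t1 VALUES (1, 10), (1, 20), (2, 30);
-- The derived table dt is materialized through a grouping temporary
-- table; the outer items (including the @v assignment) must not keep
-- dangling pointers to that table's fields after it is destroyed.
SELECT @v := SUM(dt.s)
FROM (SELECT a, SUM(b) AS s FROM t1 GROUP BY a) AS dt;
DROP TABLE t1;
--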
607f1adabd merge 2010-08-17 15:12:52 +03:00
12f7d57d42 Bug #55580 : segfault in read_view_sees_trx_id
The server did not check for errors generated during
the execution of Item::val_xxx() methods when copying
data to the row of the group, order, or distinct temp table.
Fixed by extending copy_funcs() to return an error
code and by checking for that error code at the places where
copy_funcs() is called.
Test case added.
2010-08-13 11:07:39 +03:00
6d3078e231 manual merge from mysql-5.1-bugteam 2010-08-09 14:11:29 +02:00
cc3be1aee0 Bug #54106 assert in Protocol::end_statement,
INSERT IGNORE ... SELECT ... UNION SELECT ...

This assert was triggered by INSERT IGNORE ... SELECT. The assert checks that a
statement either sends OK or an error to the client. If the bug was triggered
on release builds, it caused OK to be sent to the client instead of the correct
error message (in this case ER_FIELD_SPECIFIED_TWICE).

The reason the assert was triggered was that lex->no_error was set to TRUE
during JOIN::optimize() because of IGNORE. This causes all errors to be ignored.
However, not all errors can be ignored. Some, such as ER_FIELD_SPECIFIED_TWICE
will cause the INSERT to fail no matter what. But since lex->no_error was set,
the critical errors were ignored, the INSERT failed and neither OK nor the
error message was sent to the client.

This patch fixes the problem by temporarily turning off lex->no_error in
places where errors cannot be ignored during processing of INSERT ... SELECT.

Test case added to insert.test.
2010-08-09 13:39:59 +02:00
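A hypothetical statement (invented here) of the kind described, where a non-ignorable error (ER_FIELD_SPECIFIED_TWICE) occurs under INSERT IGNORE:
--
CREATE TABLE t1 (a INT, b INT);
-- Column a is listed twice; ER_FIELD_SPECIFIED_TWICE cannot be ignored,
-- so an error (not OK) must be sent to the client despite IGNORE.
INSERT IGNORE INTO t1 (a, a) SELECT 1, 2 UNION SELECT 3, 4;
DROP TABLE t1;
--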
02a0787d8d Auto-merge from mysql-trunk-bugfixing. 2010-07-30 19:13:38 +04:00
14b769cfc6 Merge trunk-bugfixing -> trunk-runtime. 2010-07-29 14:18:13 +04:00
95ec38c5b1 Bug #55472: Assertion failed in heap_rfirst function of hp_rfirst.c on
DELETE statement

Single-table delete ordered by a field that has a hash-type index
may cause an assertion failure or a crash.

An optimization added by the fix for bug 36569 forced the
optimizer to use ORDER BY-compatible indices when applicable.

However, the existence of unsorted indices (HASH index algorithm
for some engines such as MEMORY/HEAP, NDB) was ignored.

The test_if_order_by_key function has been modified to skip
unsorted indices.
2010-07-29 01:02:43 +04:00
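A hypothetical reproduction shape (invented here): a single-table DELETE ordered by a column whose only index is an unsorted HASH index:
--
CREATE TABLE t1 (a INT, KEY (a) USING HASH) ENGINE=MEMORY;
INSERT INTO t1 VALUES (2), (1), (3);
-- The HASH index cannot return rows in 'a' order, so the optimizer
-- must not treat it as ORDER BY-compatible.
DELETE FROM t1 ORDER BY a LIMIT 1;
DROP TABLE t1;
--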
8909586f16 Rename select_send::abort() to select_send::abort_result_set()
to be more descriptive.
2010-07-28 15:17:19 +04:00
2abe7b9d4e Merge trunk-bugfixing -> trunk-runtime. 2010-07-27 18:32:42 +04:00
6d059673f7 Implement WL#5502 Remove dead 5.0 class Sensitive_cursor.
Remove dead and unused code.
Update to reflect the code review requests.
2010-07-27 16:42:36 +04:00
9fd9857e0b WL#5498: Remove dead and unused source code
Remove code that has been disabled for a long time.
2010-07-23 17:09:27 -03:00
d4885416b7 manual merge from mysql-5.1-bugteam 2010-07-19 11:21:24 +02:00
4b2378a148 Bug #54734 assert in Diagnostics_area::set_ok_status
This assert checks that the server does not try to send OK to the
client if there has been some error during processing. This is done
to make sure that the error is in fact sent to the client.

The problem was that view errors raised during processing of WHERE conditions
in UPDATE statements were not detected by the update code. It therefore
tried to send OK to the client, triggering the assert.
The bug was only noticeable in debug builds.

This patch fixes the problem by making sure that the update code
checks for errors during condition processing and acts accordingly.
2010-07-19 11:03:52 +02:00
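A hypothetical sketch (invented here, not the original test) of one way a view can raise an error while a WHERE condition of an UPDATE is processed:
--
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT);
CREATE VIEW v1 AS SELECT a FROM t2;
DROP TABLE t2;  -- v1 is now invalid and raises an error when evaluated
-- The error coming from the view inside the WHERE condition must be
-- detected by the update code so that an error, not OK, is sent.
UPDATE t1 SET a = 0 WHERE a IN (SELECT a FROM v1);
DROP VIEW v1;
DROP TABLE t1;
--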
649390ac81 Merge of mysql-trunk-bugfixing into mysql-trunk-merge. 2010-07-15 10:47:50 -03:00
80935b8659 5.1-bugteam->trunk-merge merge 2010-07-09 14:46:46 +04:00
3c39a56208 Bug#54416 MAX from JOIN with HAVING returning NULL with 5.1 and Empty set
The problem is that the HAVING condition evaluates const
parts of the condition even though the condition references
aggregate functions. Table t1 becomes a const table
after make_join_statistics() because of table1.pk = 1, so HAVING is
transformed into MAX(1) < 7 and taken away from HAVING.
The fix is to skip evaluation of the const parts of HAVING if
the HAVING condition references aggregate functions.
2010-07-09 14:39:47 +04:00
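A hypothetical query shape (invented here) matching the description: t1 becomes a const table via pk = 1 and the HAVING clause references an aggregate over it:
--
CREATE TABLE t1 (pk INT PRIMARY KEY, a INT);
CREATE TABLE t2 (b INT);
INSERT INTO t1 VALUES (1, 10);
INSERT INTO t2 VALUES (1), (2);
-- t1 is a const table (pk = 1), so MAX(t1.pk) in HAVING could wrongly
-- be folded into MAX(1) < 7 and evaluated before aggregation.
SELECT MAX(t1.pk) FROM t1 JOIN t2 ON t2.b >= t1.pk
WHERE t1.pk = 1
HAVING MAX(t1.pk) < 7;
DROP TABLE t1, t2;
--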
a10ae35328 Bug#34043: Server loops excessively in _checkchunk() when safemalloc is enabled
Essentially, the problem is that safemalloc is excruciatingly
slow as it checks all allocated blocks for overrun at each
memory management primitive, yielding an almost exponential
slowdown for the memory management functions (malloc, realloc,
free). The overrun check basically consists of verifying some
bytes of a block for certain magic keys, which catches some
simple forms of overrun. Other minor problems are that it
violates aliasing rules and that its own internal list of blocks
is prone to corruption.

Another issue with safemalloc is the maintenance cost,
as the tool has a significant impact on the server code.
Given the number of memory debuggers available nowadays,
especially those provided with the platform's malloc
implementation, maintaining an in-house and largely obsolete
memory debugger becomes a burden that is not worth the effort,
due to its slowness and its lack of support for detecting more
common forms of heap corruption.

Since third-party tools can provide the same functionality
at a lower or comparable performance cost, the solution is
to simply remove safemalloc.

The removal of safemalloc also allows a simplification of the
malloc wrappers, removing quite a bit of kludge: the redefinition
of my_malloc and my_free, and the removal of the unused second
argument of my_free. Since free() always checks whether the
supplied pointer is null, redundant checks are also removed.

Also, this patch adds unit testing for my_malloc and moves
my_realloc implementation into the same file as the other
memory allocation primitives.
2010-07-08 18:20:08 -03:00
b71629a37d 5.1-bugteam->trunk-merge merge 2010-06-30 17:16:56 +04:00