even in cases where there existed range/index-merge scans that
were cheaper than the full table scan.
This was a defect in the implementation of mwl#128.
Now hash join can work not only with a full table scan of the joined
table, but also with full index, range and index-merge scans.
Accordingly, when hash join is used, the 'type' column in EXPLAIN
output can now contain 'hash_ALL', 'hash_index', 'hash_range'
and 'hash_index_merge'. If hash join is coupled with a range/index_merge
scan, the 'key' and 'key_len' columns contain information not only on
the hash index used, but also on the indexes used for the scan.
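As an illustration only (the enum and helper below are made up for this note,
not the server's actual EXPLAIN code), the new 'type' values are just the
underlying scan type prefixed with 'hash_':

  #include <string>

  // Hypothetical scan kinds; the server uses its own join/access type enums.
  enum ScanKind { SCAN_ALL, SCAN_INDEX, SCAN_RANGE, SCAN_INDEX_MERGE };

  // Compose the EXPLAIN 'type' label: when hash join is used, the label is
  // the underlying scan type prefixed with "hash_".
  static std::string explain_type(ScanKind kind, bool uses_hash_join)
  {
    std::string base;
    switch (kind)
    {
    case SCAN_ALL:         base= "ALL";         break;
    case SCAN_INDEX:       base= "index";       break;
    case SCAN_RANGE:       base= "range";       break;
    case SCAN_INDEX_MERGE: base= "index_merge"; break;
    }
    return uses_hash_join ? "hash_" + base : base;
  }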
The problem is that these tests run with --innodb-lock-wait-timeout=2 in .opt
(this is necessary as the built-in InnoDB does not allow changing this value
dynamically). This causes another part of the test to occasionally time
out an UPDATE, which in turn caused the test case to time out while
waiting for a condition (a successful UPDATE) that never occurs.
Fixed by retrying the UPDATE in case of a timeout.
Tested by inserting a sleep() in the connection that the UPDATE is waiting
for, and checking that the retry loops a couple of times until the other
connection is done and COMMITs.
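For illustration, a C++ sketch of the retry idea using the MySQL C API (the
statement text, table name and retry limit are invented; the actual fix lives
in the mysqltest script):

  #include <mysql.h>
  #include <mysqld_error.h>   // ER_LOCK_WAIT_TIMEOUT

  // Retry the UPDATE a bounded number of times if it hits a lock wait timeout.
  static bool update_with_retry(MYSQL *conn)
  {
    const char *stmt= "UPDATE t1 SET b = b + 1 WHERE a = 1";
    for (int attempt= 0; attempt < 10; attempt++)
    {
      if (mysql_query(conn, stmt) == 0)
        return true;                              // UPDATE succeeded
      if (mysql_errno(conn) != ER_LOCK_WAIT_TIMEOUT)
        return false;                             // unrelated error: give up
      // Lock wait timed out: the other connection still holds the row lock,
      // so loop and try again until it commits.
    }
    return false;
  }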
- Fixed problem with oqgraph and 'make dist'
Note that after this merge we have a problem shown in join_outer where we examine too many rows in one specific case (related to BUG#57024).
This will be fixed when mwl#128 is merged into 5.3.
independent execution plans.
Fixed a bug in Unique::unique_add that caused a crash for a query from
index_intersect_innodb on some platforms.
Fixed two bugs in opt_range.cc that led to choosing plans for index
intersections that were not the cheapest.
The pushdown condition for the sorted table in a query can be complemented
by the conditions from HAVING. This transformation is done in JOIN::exec
quite late, after the original pushdown condition has been saved in the
pre_idx_push_select_cond field for the sorted table. So this field must
be updated after the condition from HAVING has been included.
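A minimal sketch of the idea with simplified stand-in types (Cond and SortTab
below are not the server's Item/JOIN_TAB classes):

  // Hypothetical AND node standing in for the server's Item tree.
  struct Cond { Cond *left, *right; };

  static Cond *and_conditions(Cond *a, Cond *b)
  {
    if (!a) return b;
    if (!b) return a;
    return new Cond{a, b};                 // combine under a new AND node
  }

  struct SortTab
  {
    Cond *select_cond;                     // current pushdown condition
    Cond *pre_idx_push_select_cond;        // copy saved earlier for index pushdown
  };

  // Complement the pushdown condition with the HAVING condition and keep the
  // previously saved copy in sync, since it was stored before this point.
  static void add_having_to_pushdown_cond(SortTab *tab, Cond *having_cond)
  {
    tab->select_cond= and_conditions(tab->select_cond, having_cond);
    tab->pre_idx_push_select_cond=
        and_conditions(tab->pre_idx_push_select_cond, having_cond);
  }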
case than in corr index".
The server was unable to find an existing or explicitly created supporting
index for a foreign key if the corresponding statement clause used field
names in a different case than the one used in the key specification, and
so created yet another supporting index.
In cases when the name of the constraint (and thus the name of the generated
index) was the same as the name of an existing/explicitly created index,
this led to a duplicate key name error.
The problem was that, unlike all other code, Key_part_spec::operator==()
compared field names in a case-sensitive fashion. As a result, the routines
responsible for getting rid of redundant generated supporting indexes
for foreign keys did not work properly when the field names were spelled
using different cases.
(backported from mysql-trunk)
sql/sql_class.cc:
Make field name comparison case-insensitive like it is
in the rest of the server.
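A stand-alone sketch of the shape of the fix (the server presumably compares
the names with its own case-insensitive routine, my_strcasecmp() with the
system charset; this simplified version uses plain C strings and strcasecmp()):

  #include <strings.h>   // strcasecmp()

  // Simplified stand-in for Key_part_spec: field names now compare
  // case-insensitively, like everywhere else in the server.
  struct KeyPartSpec
  {
    const char *field_name;
    unsigned    length;          // key part (prefix) length

    bool operator==(const KeyPartSpec &other) const
    {
      return length == other.length &&
             strcasecmp(field_name, other.field_name) == 0;
    }
  };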
- Changed to still use bcmp() in certain cases because:
  - Faster for short unaligned strings than memcmp()
  - Better when using valgrind
- Changed to use my_sprintf() instead of sprintf() to get higher portability for old systems
- Changed code to use MariaDB version of select->skip_record()
- Removed -%::SCCS/s.% from the Makefile.am files to remove automake warnings
After the fix for bug 39653 the shortest available secondary index was used
for a full table scan. The clustered primary key was used only if no secondary
index could be used. However, when the chosen secondary index includes all
fields of the table being scanned, it is better to use the primary index, since
the amount of data to scan is the same but the primary index is clustered.
Now the find_shortest_key function takes this into account.
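A simplified model of the new preference (the structures below are
illustrative, not the server's TABLE/KEY types):

  #include <limits>
  #include <vector>

  struct IndexInfo
  {
    unsigned length_in_bytes;    // total width of an index entry
    bool     is_primary;         // clustered primary key
    bool     covers_all_fields;  // index contains every column of the table
  };

  // Pick the index for a full scan: normally the shortest usable key, but
  // prefer the clustered primary key when the shortest secondary key already
  // covers all table fields (same data volume, clustered order wins).
  static int find_shortest_key(const std::vector<IndexInfo> &keys)
  {
    int best= -1;
    unsigned best_len= std::numeric_limits<unsigned>::max();
    for (size_t i= 0; i < keys.size(); i++)
      if (keys[i].length_in_bytes < best_len)
      {
        best_len= keys[i].length_in_bytes;
        best= (int) i;
      }
    if (best >= 0 && !keys[best].is_primary && keys[best].covers_all_fields)
      for (size_t i= 0; i < keys.size(); i++)
        if (keys[i].is_primary)
          return (int) i;        // scanning the clustered PK reads the same data
    return best;
  }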
mysql-test/suite/innodb/r/innodb_mysql.result:
Added a test case for bug#55656.
mysql-test/suite/innodb/t/innodb_mysql.test:
Added a test case for bug#55656.
sql/sql_select.cc:
Bug #55656: mysqldump can be slower after bug #39653 fix.
The find_shortest_key function now prefers the clustered primary key
if the secondary key found includes all fields of the table.
KILL_BAD_DATA is returned
Two problems were discovered with the LEAST()/GREATEST()
functions:
1. The check for a null value must happen even
after the second call to val_str() on the arguments. This is
important because two subsequent calls to the same
Item::val_str() may yield different results.
Fixed by checking for a NULL value before dereferencing
the string result.
2. While looping over the arguments and evaluating them,
the loop should stop if there has been an error so far
or the statement was killed. Fixed by checking for the error
and bailing out.
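A much-simplified sketch of the corrected evaluation loop (Arg and Session
stand in for Item and THD; this is not the server's Item_func_min_max code):

  #include <string>
  #include <vector>

  struct Arg
  {
    bool null_value= false;   // set by val_str(), like Item::null_value
    bool is_null= false;
    std::string value;
    const std::string *val_str()
    {
      null_value= is_null;
      return is_null ? nullptr : &value;
    }
  };

  struct Session
  {
    bool killed= false;
    bool is_error= false;
  };

  // LEAST()-style evaluation with the two fixes applied.
  static const std::string *least_str(Session *thd, std::vector<Arg> &args,
                                      bool *null_out)
  {
    const std::string *result= nullptr;
    *null_out= false;
    for (Arg &arg : args)
    {
      // (2) Bail out if evaluation has already failed or the statement
      // was killed.
      if (thd->is_error || thd->killed)
        return nullptr;
      const std::string *val= arg.val_str();
      // (1) Check for NULL after every evaluation, before dereferencing the
      // result: repeated calls to the same val_str() may yield different values.
      if (arg.null_value || val == nullptr)
      {
        *null_out= true;
        return nullptr;
      }
      if (result == nullptr || *val < *result)
        result= val;
    }
    return result;
  }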
The server was not checking for errors generated during
the execution of Item::val_xxx() methods when copying
data to the group, order, or distinct temp table's row.
Fixed by extending copy_funcs() to return an error
code and by checking for that error code at the places
where copy_funcs() is called.
Test case added.
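A simplified sketch of the new contract (Session and CopyFunc are stand-ins
for THD and the copy-function items, not the server's types):

  #include <vector>

  struct Session { bool is_error= false; };
  struct CopyFunc { void (*copy)(Session*); };   // may set thd->is_error

  // copy_funcs() now reports evaluation errors instead of ignoring them.
  static bool copy_funcs(std::vector<CopyFunc> &funcs, Session *thd)
  {
    for (CopyFunc &f : funcs)
    {
      f.copy(thd);
      if (thd->is_error)
        return true;            // an Item::val_xxx() call failed
    }
    return false;
  }

  // Call site: abort writing the group/order/distinct temp table row on error.
  static bool write_tmp_table_row(std::vector<CopyFunc> &funcs, Session *thd)
  {
    if (copy_funcs(funcs, thd))
      return true;              // propagate the error to the caller
    // ... write the prepared row to the temporary table ...
    return false;
  }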
This crash occurred after ALTER TABLE was used on a temporary
transactional table locked by LOCK TABLES. Any later attempt to
execute LOCK/UNLOCK TABLES caused the server to crash.
The reason for the crash was that the list of locked tables would
end up holding a pointer to a freed table instance. This happened
because ALTER TABLE deleted the table without also removing the
table reference from the locked tables list.
This patch fixes the problem by making sure ALTER TABLE also
removes the table from the locked tables list.
Test case added to innodb_mysql.test.
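What the fix amounts to, in a simplified model (names invented for the sketch,
not the server's lock-table structures): drop the reference from the locked
tables list first, then free the table, so later LOCK/UNLOCK TABLES never
walks over a dangling pointer.

  #include <list>

  struct TableInstance { /* ... */ };

  struct LockedTables
  {
    std::list<TableInstance*> tables;
    void unlink(TableInstance *t) { tables.remove(t); }
  };

  static void drop_altered_temporary_table(LockedTables &locked, TableInstance *t)
  {
    locked.unlink(t);   // remove from the locked tables list first
    delete t;           // now it is safe to free the table instance
  }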
Merge up to sunny.bains@oracle.com-20100625081841-ppulnkjk1qlazh82.
There are 8 more changesets in mysql-5.1-innodb, but PB2 shows a
failure for a test added in one of them. If that is resolved quickly
then those 8 more changesets will be merged too.
The Valgrind warning happens because of uninitialized null bytes.
In the row_sel_push_cache_row_for_mysql() function we fill the fetch cache
with the necessary field values; row_sel_store_mysql_rec() is called
for this and leaves the null bytes untouched.
Later row_sel_pop_cached_row_for_mysql() rewrites the table record
buffer with these uninitialized null bytes. We can see the problem in the
test case:
at 'SELECT...' we call the row_sel_push...->row_sel_store...->row_sel_pop_cached...
chain, which rewrites the table->record[0] buffer with uninitialized null bytes.
When we execute the 'UPDATE...' statement, compare_record() uses this buffer
and the Valgrind warning occurs.
The fix is to initialize the null bytes with default values.
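A sketch of the fix's effect (names are illustrative, not the actual InnoDB
code): copy the table's default record over the null-flag bytes before the
row is stored, so they never reach the MySQL record buffer uninitialized.

  #include <cstring>

  static void init_null_bytes(unsigned char *row_buf,
                              const unsigned char *default_rec,
                              size_t null_bytes_len)
  {
    // Copying the null bytes from the default record leaves them initialized
    // even if the storage engine does not touch them afterwards.
    memcpy(row_buf, default_rec, null_bytes_len);
  }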
mysql-test/suite/innodb/r/innodb_mysql.result:
test case
mysql-test/suite/innodb/t/innodb_mysql.test:
test case
mysql-test/t/ps_3innodb.test:
enable valgrind testing
storage/innobase/row/row0sel.c:
Initialize the null bytes with default values, as they might be
left uninitialized in some cases; these uninitialized bytes
might then be copied into the MySQL record buffer, which leads to
Valgrind warnings on the next use of the buffer.