mysql-test-run auto-disables all optional plugins.
mysql-test/include/default_client.cnf:
no @OPT.plugindir anymore
mysql-test/include/default_mysqld.cnf:
don't disable plugins manually - mtr can do it better
mysql-test/suite/innodb/t/innodb_bug47167.test:
mtr now uses suite-dir as an include path
mysql-test/suite/innodb/t/innodb_file_format.test:
mtr now uses suite-dir as an include path
mysql-test/t/partition_binlog.test:
this test uses partitions
storage/example/mysql-test/mtr/t/source.result:
update results, as mysqltest now includes the correct overlaid include file
storage/innobase/handler/ha_innodb.cc:
the assert is wrong
PARTITION STATISTICS
The problem was the fix for bug#11756867: it always used the first
partitions and stopped after checking 10 [sub]partitions
(or until it found a partition which would contain a match).
This results in bad statistics for tables where the first 10 partitions
don't represent the majority of the data (for example, when the first 10
partitions only contain a few rows in total).
The solution was to take statistics from the partitions containing
the most rows instead:
Added an array of partition ids which is sorted by number of records
in descending order.
This array is used in records_in_range to cover as many records as
possible in as few calls as possible.
Also changed the limit on how many partitions to use for the statistics
from a static maximum of 10 partitions to a dynamic model:
the maximum number of partitions is now log2(total number of partitions),
taken from the ordered array.
It will continue calling the partitions' records_in_range() until the
number of rows checked reaches:
(total rows in matching partitions) * (maximum number of partitions)
/ (number of used partitions)
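As an illustration, here is a minimal, self-contained sketch of that heuristic
(simplified types and hypothetical names such as Partition and
estimate_records_in_range; it is not the actual ha_partition::records_in_range()
code): probe the largest used partitions first, stop once the checked partitions
cover enough rows, then scale the partial estimate up to the whole table.

  #include <algorithm>
  #include <cmath>
  #include <cstdint>
  #include <vector>

  /* One (sub)partition: its total row count and its own range estimate.
     'rows_in_range' stands in for what this partition's records_in_range()
     call would return. */
  struct Partition
  {
    uint64_t records;
    uint64_t rows_in_range;
  };

  /* Sketch of the estimation loop: probe the biggest partitions first and
     stop once enough rows have been covered, then extrapolate. */
  uint64_t estimate_records_in_range(const std::vector<Partition> &parts)
  {
    if (parts.empty())
      return 0;

    /* Order partition ids by number of records, descending. */
    std::vector<size_t> order(parts.size());
    for (size_t i= 0; i < parts.size(); i++)
      order[i]= i;
    std::sort(order.begin(), order.end(),
              [&parts](size_t a, size_t b)
              { return parts[a].records > parts[b].records; });

    uint64_t total_rows= 0;
    for (const Partition &p : parts)
      total_rows+= p.records;

    /* Dynamic limit: about log2(total number of partitions) instead of a fixed 10. */
    uint64_t max_parts= std::max<uint64_t>(1, (uint64_t) std::log2((double) parts.size()));
    /* Keep probing until the checked partitions cover at least this many rows. */
    uint64_t min_rows_to_check= total_rows * max_parts / parts.size();

    uint64_t estimated= 0, checked= 0;
    for (size_t id : order)
    {
      estimated+= parts[id].rows_in_range;      /* per-partition estimate */
      checked+= parts[id].records;
      if (checked >= min_rows_to_check)
        return estimated * total_rows / checked; /* scale up to the whole table */
    }
    return estimated;                            /* every partition was probed */
  }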
Also reverted the changes to ha_partition::scan_time() and
ha_partition::estimate_rows_upper_bound() to how they were before
the fix of bug#11756867, since they are not as slow as
records_in_range().
Fixed compiler warnings found by buildbot
Makefile.am:
Removed extra empty line
cmd-line-utils/libedit/sys.h:
Fixed so that strndup() doesn't give compiler warnings
mysql-test/Makefile.am:
Fixes for 'make distcheck'
plugin/auth_pam/auth_pam.c:
Ensure that the prototype for strndup() is included on Linux
sql/share/Makefile.am:
Fixes for 'make distcheck'
storage/innodb_plugin/btr/btr0sea.c:
Fixed compiler warning
support-files/Makefile.am:
Fixes for 'make distcheck'
Added a suppression so that innodb_bug34300 doesn't fail if InnoDB prints:
120221 11:05:03 InnoDB: ERROR: the age of the last checkpoint is 9439048,
InnoDB: which exceeds the log group capacity 9433498.
By default the log capacity is 2 log files of 5 MB each.
Fixed the suppression in mysql-test-run so it also works on Windows.
mysql-test/mysql-test-run.pl:
Fixed the suppression so it also works on Windows.
mysql-test/valgrind.supp:
More general handling of memory loss in dlclose (backported from 5.2)
sql/signal_handler.cc:
Added newlines around the link explaining how to report bugs
RESULT FROM PREVIOUS TRANSACTION
The current Query Cache API is not fully compatible with
the partitioning engine.
There is no good way to implement support for the QC because:
1) a static callback for ha_partition would need to have access
to all partition names and call the underlying callback for each
[sub]partition with the correct name.
2) pruning would be impossible; even if one used the ulonglong
engine_data, the table would be invalidated by the QC as soon as
engine_data changed.
So the only viable solution to avoid incorrect data is to not allow
caching of queries that use partitioned tables.
(There are some extra changes due to the removal of \r as a line break)
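For illustration, here is a self-contained toy sketch of the approach: the
handler interface exposes register_query_cache_table() (sql/handler.h), and
having the partition handler return FALSE from it keeps such tables out of the
cache. The class and member names below are simplified stand-ins, not the
actual server code.

  #include <cstdint>

  /* Toy stand-ins for THD and the query-cache engine callback type. */
  struct THD;
  typedef bool (*qc_engine_callback)(THD *thd, const char *table_key,
                                     unsigned key_length, uint64_t *engine_data);

  struct handler_like
  {
    virtual ~handler_like() {}
    /* Default: the query cache may cache results for this table. */
    virtual bool register_query_cache_table(THD *, const char *, unsigned,
                                            qc_engine_callback *engine_callback,
                                            uint64_t *engine_data)
    {
      *engine_callback= 0;
      *engine_data= 0;
      return true;
    }
  };

  struct ha_partition_like : public handler_like
  {
    /* Partitioned tables: per-partition names and pruning can't be expressed
       through this API, so refuse caching outright (return false). */
    virtual bool register_query_cache_table(THD *, const char *, unsigned,
                                            qc_engine_callback *engine_callback,
                                            uint64_t *engine_data)
    {
      *engine_callback= 0;
      *engine_data= 0;
      return false;
    }
  };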
- MySQL 5.5 introduced caching of constant items by means of
wrapping them in Item_cache_XXX objects. If a subquery was wrapped
in this cache, it could end up being pushed down by ICP.
- The fix is to add Item_cache::walk(), which lets ICP see that
the cache item has a subquery inside it, and disable pushdown for this case.
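A simplified, self-contained sketch of the idea (the real classes are Item,
Item_cache and Item_subselect in sql/item*.h; the code below is a toy model,
not the server implementation): walk() on the cache must descend into the
wrapped item so that a subquery-detecting processor can see through the cache.

  #include <functional>

  struct Item
  {
    virtual ~Item() {}
    /* walk() visits this item and everything it contains; returns true to stop. */
    virtual bool walk(const std::function<bool(Item *)> &processor)
    {
      return processor(this);
    }
  };

  struct Item_subselect : Item {};

  struct Item_cache : Item
  {
    Item *example;                        /* the wrapped (cached) item */
    explicit Item_cache(Item *wrapped) : example(wrapped) {}

    /* The fix: make the cache transparent to tree walks, so a processor that
       looks for subqueries (as the ICP check does) can see inside the cache. */
    bool walk(const std::function<bool(Item *)> &processor) override
    {
      if (example && example->walk(processor))
        return true;
      return processor(this);
    }
  };

  /* Example: detect a subquery hidden behind an Item_cache wrapper. */
  bool contains_subquery(Item *cond)
  {
    return cond->walk([](Item *i)
                      { return dynamic_cast<Item_subselect *>(i) != nullptr; });
  }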
- In mi_rkey(), correctly handle the case where mi_yield_and_check_if_killed()
detects that the thread was killed (all other similar functions in MyISAM/Aria have
slightly different code and do not have this problem).
- Also fixed an assignment inside a DBUG_ASSERT
- This is the 2nd variant of the fix:
= make the .result file smaller
= run KILLable statements in a separate connection, otherwise we could end up trying to
KILL the final "DROP TABLE" statement
Fixed README with link to source
Merged InnoDB change to XtraDB
README:
Added information about where to find the MariaDB code
storage/archive/ha_archive.cc:
Removed memset() of rows; MariaDB's checksum doesn't touch unused data.
This happened when there were more than 1024 open Aria tables during a checkpoint.
mysql-test/mysql-test-run.pl:
Ensure that variable names are consistent between the external and internal server.
mysql-test/suite/maria/suite.pm:
Test for aria-block-size instead of 'aria', as 'aria' is not set for the embedded server.
This should be OK for Aria tests, as Aria is never disabled for these.
storage/maria/ma_checkpoint.c:
Fixed a bug that occurred when there were more than 1024 open Aria tables during a checkpoint.
- In return_zero_rows(), don't call mark_as_null_row() for semi-join
materialized tables, because 1) they may already have been freed, and
2) there is no real need to call mark_as_null_row() for them.
This bug is the result of an incomplete/inconsistent change introduced into
the 5.3 code when the cond_equal parameter was added to the function optimize_cond().
The change was made during a merge from 5.2 in October 2010.
The bug could affect only queries with HAVING.
An outer join query with a semi-join subquery could return a wrong result
if the optimizer chose to materialize the subquery.
It happened because, when substituting for the best field into a ref item
used to build access keys, not all COND_EQUAL objects that could be employed
in the substitution were checked.
Also refined some code in the function check_join_cache_usage to make it
safer.