support ha_innodb.so as a dynamic plugin.
* remove obsolete *,innodb_plugin.rdiff files
* s/--plugin-load=/--plugin-load-add=/
* MYSQL_PLUGIN_IMPORT glob_hostname[] (see the sketch after this list)
* use my_error instead of push_warning_printf(ER_DEFAULT)
* don't use tdc_size and tc_size in a module
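A minimal sketch of the MYSQL_PLUGIN_IMPORT item above, assuming the
usual definition (paraphrased, not copied from the server headers; the
glob_hostname declaration is illustrative): on Windows a dynamically
loaded plugin must mark server globals as dllimport, elsewhere the
macro is empty.

    // Hedged sketch, not the literal server header:
    #if defined(_WIN32) && defined(MYSQL_DYNAMIC_PLUGIN)
    #define MYSQL_PLUGIN_IMPORT __declspec(dllimport)  /* import from server */
    #else
    #define MYSQL_PLUGIN_IMPORT                        /* no-op elsewhere */
    #endif

    // A server global the plugin reads, per the list above:
    extern MYSQL_PLUGIN_IMPORT char glob_hostname[];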
update test cases (XtraDB is 5.6.14, InnoDB is 5.6.10)
* copy new tests over
* disable some tests for (old) InnoDB
* delete XtraDB tests that no longer apply
small compatibility changes:
* s/HTON_EXTENDED_KEYS/HTON_SUPPORTS_EXTENDED_KEYS/
* revert unnecessary InnoDB changes to bring it a bit closer to upstream
fix XtraDB to compile on Windows (both as a static and a dynamic plugin)
disable XtraDB on Windows (deadlocks) and where no atomic ops are available (e.g. CentOS 5)
storage/innobase/handler/ha_innodb.cc:
revert a few unnecessary changes to bring it a bit closer to the original InnoDB
storage/innobase/include/univ.i:
correct the version to match what it was merged from
Fixing "/*100000 ...*/" comments (i.e. MySQL style with 6-digits),
which were unintentionally broken in the MDEV-5009 patch.
modified:
mysql-test/r/comments.result
mysql-test/t/comments.test
sql/sql_lex.cc
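A hedged sketch of the version check behind such executable comments
(simplified; the real logic lives in sql_lex.cc and also accepts
5-digit MySQL-style versions): the lexer reads the digits after "/*!"
and keeps the comment body only when the embedded version is not newer
than the server.

    #include <cctype>

    static const long SERVER_VERSION_ID= 100005;   // e.g. 10.0.5

    // Returns true if the body of /*!NNNNNN ... */ should be parsed as SQL.
    bool comment_body_is_active(const char *p)
    {
      long version= 0;
      int digits= 0;
      while (isdigit((unsigned char) *p) && digits < 6)
      {
        version= version * 10 + (*p++ - '0');
        digits++;
      }
      // A bare "/*!" with no version applies unconditionally.
      return digits == 0 || version <= SERVER_VERSION_ID;
    }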
Add another test case. This one kills a worker while its transaction is
waiting for the previous transaction to commit before it can start.
Fix setting reading_or_writing to 0 in worker threads so SHOW SLAVE STATUS can
show something more useful than "Reading from net".
Materialization is now forced when rand() is used in a view or derived
table, to avoid several calls of rand() while retrieving the value of a
single field (see the sketch below).
Also fixed how the uncacheable flag is set for variable assignments: it
should mark a side effect, not a random value.
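A self-contained illustration of the bug class (not server code): if a
derived column is re-evaluated on every read, as with a merged view,
instead of being stored once, as with materialization, a RAND()-like
expression yields different values for what should be a single field
value.

    #include <cstdio>
    #include <cstdlib>

    int main()
    {
      auto merged= []{ return rand(); };   // re-runs the expression per read
      int materialized= rand();            // runs once, value is stored

      std::printf("merged:       %d vs %d\n", merged(), merged()); // may differ
      std::printf("materialized: %d vs %d\n", materialized, materialized);
    }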
FAILS TO DROP CREATED EVENTS:
- Check for triggers should exclude mtr's own
- Move the code to before CHECKSUM TABLE, as it might affect the result
of some audit_log tests (it does in 5.6)
- Replace SHOW STATUS LIKE 'slave_open_temp_tables' to match 5.6
Add another test case.
Fix bug found: When one thread aborts, another thread might
wrongly advance the relay log position too far, causing the
failing transaction to be skipped and thus lost upon
restarting the SQL thread after the failure.
Merge storage/oqgraph into 10.0
- fixed sys_vars.all_vars (added oqgraph_allow_create_integer_latch_basic.test)
- since oqgraph depends on Judy, move it to a separate package
Add a test case for killing a waiting query in parallel replication.
Fix several bugs found:
- We should not wakeup_subsequent_commits() in ha_rollback_trans(), since we
do not know the right wakeup_error() to give.
- When a wait_for_prior_commit() is killed, we must unregister from the
waitee so we do not race and get an extra (non-kill) wakeup (see the
sketch after this list).
- We need to deal with error propagation correctly in queue_for_group_commit
when one thread is killed.
- Fix one locking issue in queue_for_group_commit(): we could unlock the
waitee lock too early, ending up processing wakeup() with insufficient
locking.
- Fix Xid_log_event::do_apply_event; if commit fails it must not update the
in-memory @@gtid_slave_pos state.
- Fix and cleanup some things in the rpl_parallel.cc error handling.
- Add a missing check for killed in the slave sql driver thread, to avoid a
race.
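A hedged sketch of the unregister-on-kill rule from the list above
(heavily simplified, standard-library stand-ins rather than the
server's rpl_parallel/wait_for_commit code): the killed waiter must
remove itself from the waitee's list under the same lock that the
wakeup path uses, or it can later receive a spurious wakeup.

    #include <algorithm>
    #include <mutex>
    #include <vector>

    struct Waiter { /* a transaction waiting for a prior commit */ };

    struct Waitee
    {
      std::mutex lock;                          // also taken by wakeup
      std::vector<Waiter*> subsequent_commits;  // threads waiting on us

      void unregister(Waiter *w)                // called on kill
      {
        std::lock_guard<std::mutex> g(lock);    // no race with wakeup
        auto &v= subsequent_commits;
        v.erase(std::remove(v.begin(), v.end(), w), v.end());
      }
    };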
- include metadata_lock_info plugin into debian packages
- ignore metadata_lock_info plugin in mysqld--help test
- let test suite run with built-in metadata_lock_info plugin
When a DSO is loaded we rewrite service pointers to point to the actual service structures.
But when a DSO is unloaded, we have to restore their original values, in case this DSO
wasn't removed from memory on dlclose() and is later loaded again.
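A hedged sketch of the pattern (the struct layout here is an
illustrative stand-in, not the server's actual service-reference type):

    struct service_ref                 // hypothetical stand-in
    {
      const char *name;                // service name to look up
      void *original;                  // value the DSO shipped with
      void **ptr;                      // pointer inside the DSO image
    };

    void rewrite_services(service_ref *refs, int count,
                          void *(*lookup_service)(const char *))
    {
      for (int i= 0; i < count; i++)
      {
        refs[i].original= *refs[i].ptr;             // save for unload
        *refs[i].ptr= lookup_service(refs[i].name); // use real service
      }
    }

    void restore_services(service_ref *refs, int count)
    {
      for (int i= 0; i < count; i++)
        *refs[i].ptr= refs[i].original;  // undo: image may stay mapped
    }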
When marking used columns the function find_field_in_table_ref() erroneously
called the walk method for the real item behind a view/derived table field
with the second parameter set to TRUE.
This erroneous code was introduced in 2006.
Bug#17867117 - ERROR RESULT WHEN "COUNT + DISTINCT + CASE WHEN" NEED MERGE_WALK
Problem:
COUNT DISTINCT gives an incorrect result when it uses a Unique
tree and its last inserted record has a null value.
Here is how COUNT DISTINCT is processed, given that this query is not
using loose index scan.
When a row is produced as a result of joining tables (there is only
one table here), we store the SELECTed value in a Unique tree. This
allows elimination of any duplicates, and thus implements DISTINCT.
When we have processed all rows like this, we walk the Unique tree,
counting its elements, in Aggregator_distinct::endup() (tree->walk());
for each element we call Item_sum_count::add(). This function must
ignore any NULL value, and for that it checks
item_sum->args[0]->null_value. That is a mistake: when walking the
Unique tree, the value to be aggregated is not item_sum->args[0] but
rather table->field[0].
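A self-contained illustration of this two-phase path, with std::set
standing in for the server's Unique tree: phase one inserts every
produced value, phase two walks the tree and counts, and the walk
callback must skip NULLs.

    #include <cstdio>
    #include <optional>
    #include <set>

    int main()
    {
      std::optional<int> rows[]= {1, 2, std::nullopt, 2, 1};
      std::set<std::optional<int>> unique_tree;
      for (auto &v : rows)
        unique_tree.insert(v);            // duplicates eliminated

      long count= 0;
      for (auto &v : unique_tree)         // the "tree walk"
        if (v.has_value())
          count++;                        // add() must skip NULL here
      std::printf("COUNT(DISTINCT col) = %ld\n", count);  // prints 2
    }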
Solution:
instead of item_sum->args[0]->null_value, use arg_is_null(), which
knows where to look (as in the fix for bug 57932).
As a consequence of this solution, we have to make arg_is_null() a
little more general:
1) Because it was so far only used for AVG() (which always has a
single argument), this function was looking at a single argument; now
that it has to work with COUNT(DISTINCT expression1,expression2), it
must look at all arguments.
2) Because we start using arg_is_null() for COUNT(DISTINCT), i.e. in
Item_sum_count::add(), it implies that we are also using it for
COUNT(no DISTINCT) (same add()). For COUNT(no DISTINCT), the
nullness to check is that of item_sum->args[0]. But the null_value
of such an item is reliable only if val_*() has been called on it. So
far arg_is_null() was always used after a call to arg_val*(), so it
could rely on null_value; but for COUNT, there is no call to
arg_val*(), so arg_is_null() has to call is_null() instead.
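A hedged sketch of the generalized arg_is_null() (simplified stand-ins,
not the actual Item hierarchy or the exact 5.6 signature):

    struct Item
    {
      bool null_value;             // valid only right after a val_*() call
      virtual bool is_null()= 0;   // evaluates; works without val_*()
      virtual ~Item() {}
    };

    bool arg_is_null(Item **args, unsigned arg_count, bool use_null_value)
    {
      for (unsigned i= 0; i < arg_count; i++)
      {
        // COUNT has no preceding val_*() call, so it must pass
        // use_null_value == false and let is_null() evaluate the item.
        if (use_null_value ? args[i]->null_value : args[i]->is_null())
          return true;             // one NULL argument is enough
      }
      return false;
    }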
Testcase for 16539979 by Neeraj. Testcase for 17867117 contributed by
Xiaobin Lin from Taobao.
Fix ha_myisammrg::update_create_info() to do what ha_myisammrg::append_create_info() does -
take sub-table names from TABLE_LIST, not reverse-engineer them from table file names.
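A self-contained sketch of the intended behaviour (simplified; the real
code iterates the MERGE children inside ha_myisammrg): compose the
UNION=(...) clause from the child names kept in the TABLE_LIST chain,
since reconstructing names from the underlying file paths is lossy once
names are filename-encoded.

    #include <cstdio>
    #include <string>

    struct TABLE_LIST { const char *table_name; TABLE_LIST *next; };

    std::string union_clause(TABLE_LIST *children)
    {
      std::string s= "UNION=(";
      for (TABLE_LIST *t= children; t; t= t->next)
      {
        s+= t->table_name;         // name comes from the TABLE_LIST,
        if (t->next) s+= ",";      // not from the .MYD/.MYI file path
      }
      return s + ")";
    }

    int main()
    {
      TABLE_LIST t2= { "t2", nullptr }, t1= { "t1", &t2 };
      std::puts(union_clause(&t1).c_str());   // UNION=(t1,t2)
    }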
Backport praveenkumar.hulakund@oracle.com-20120127081643-u7dxy23i8yyqarm7 from mysql-5.6
Fix:
-------
Created separate suites called innodb_zip and i_innodb_zip that contain all compression tests.
Running the new suites with the following compression-related parameters:
* innodb_compression_level = {1/9}
* innodb_log_compressed_pages = {ON/OFF}
when the frm file exists, but the table does not.
In recover_from_failed_open(), allow the discovery
to fail without an error, when
open_strategy == TABLE_LIST::OPEN_IF_EXISTS.
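A hedged sketch of that rule (simplified stand-ins, not the exact
sql_base.cc code): a failed discovery only raises an error when the
caller required the table to exist.

    enum open_strategy_t { OPEN_NORMAL, OPEN_IF_EXISTS };  // stand-ins

    // Returns 0 on success, 1 if ER_NO_SUCH_TABLE should be raised.
    int recover_by_discovery(bool discovered, open_strategy_t strategy)
    {
      if (discovered)
        return 0;                  // the engine provided the table
      if (strategy == OPEN_IF_EXISTS)
        return 0;                  // a missing table is acceptable
      return 1;                    // otherwise it is a real error
    }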