Resolved all conflicts and bad merges, and fixed a few minor bugs in the code.
Commented out the queries from multi_update, view, subselect_sj, func_str,
derived_view and view_grant that failed either with crashes in ps-protocol or
with wrong results.
These failures clearly indicate bugs in the code, and those bugs remain
to be fixed.
DB_COL_APPEARS_TWICE_IN_INDEX: Remove. This condition is already
checked and reported by MySQL before passing the index definition to
the storage engine.
row_create_index_for_mysql(): Remove the redundant check for
DB_COL_APPEARS_TWICE_IN_INDEX. When enforcing the column prefix index
limit, invoke dict_mem_index_free(index) to plug the memory leak. In
the loop, use index->n_def instead of dict_index_get_n_fields(index),
because the latter would be 0 for indexes that have not been copied to
the data dictionary cache.
innodb-use-sys-malloc.test:
Add test cases that attempt to trigger the error checks in
row_create_index_for_mysql(). Before MySQL 5.5 and WL#5743, the leak
was only reproducible if ha_innobase::max_supported_key_part_length()
returned a higher limit than the one used in
row_create_index_for_mysql().
In MySQL 5.5 and later, the leak is reproducible with
innodb_large_prefix=true.
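The following is a minimal sketch of such a test case; the table and column
names and the exact failure are illustrative assumptions, not the statements
actually added to innodb-use-sys-malloc.test:

  -- Assumes MySQL 5.5+ with the dynamic global innodb_large_prefix.
  SET GLOBAL innodb_large_prefix = ON;
  CREATE TABLE t1 (c1 TEXT) ENGINE=InnoDB ROW_FORMAT=COMPACT;
  -- The idea: the MySQL layer accepts the long prefix, while
  -- row_create_index_for_mysql() still refuses it for this row format,
  -- exercising the error path that used to leak the half-built index
  -- object. The statement is expected to be rejected by InnoDB.
  CREATE INDEX i1 ON t1 (c1(1000));
  DROP TABLE t1;
  SET GLOBAL innodb_large_prefix = DEFAULT;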
rb:688 approved by Jimmy Yang
Fix a failure of the re-enabled innodb-index.test in the embedded server.
Apparently, the embedded server does not default to ENGINE=InnoDB when
copying an InnoDB table by CREATE TABLE t2 SELECT * FROM t1;
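One illustrative way to make the copy independent of the default engine
(an assumption, not necessarily the exact change made to the test):

  CREATE TABLE t2 ENGINE=InnoDB SELECT * FROM t1;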
Re-enable a test that was disabled as collateral damage.
Starting with MySQL 5.5, queries will acquire and hold a shared meta-data lock
(MDL) on tables they process, until the transaction is committed or
rolled back. This will prevent DDL operations on the tables, such as creating
an index.
innodb-index.test: Use a second table for creating the index. The index will
still be "too new" for the transaction that was started before the index
was created.
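A minimal sketch of the resulting pattern, with illustrative connection and
table names (not the exact statements in innodb-index.test):

  -- connection con1: the open transaction creates its read view here and
  -- holds a shared MDL on t1 until commit/rollback.
  START TRANSACTION;
  SELECT * FROM t1;

  -- default connection: the index is created on a second table t2, so the
  -- DDL is not blocked by con1's metadata lock on t1.
  CREATE INDEX i2 ON t2 (b);

  -- connection con1: the read view predates the index creation, so i2 is
  -- still "too new" for this transaction.
  SELECT * FROM t2;
  COMMIT;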
Fixed compiler warnings
client/readline.cc:
Fixed compiler warning
mysql-test/suite/innodb/t/innodb_bug60049.test:
This test failed when running with --mysqld=--loose-innodb-fast-shutdown=2, which we do on some machines
mysql-test/t/mysqldump.test:
Only run test if utf8 is used
sql/log.cc:
Fixed compiler warning
sql/mysql_priv.h:
Fixed compiler warnings
tests/mysql_client_test.c:
Don't abort test if ucs2 is not in use.
The InnoDB global variable srv_lower_case_table_names is set in ha_innodb.cc to the value of lower_case_table_names declared in mysqld.h. Since the server variable can change at runtime, srv_lower_case_table_names is reset on each handler call to ::create, ::open, ::rename_table and ::delete_table.
But it is possible for tables to be implicitly opened, before any explicit handler call is made, when an engine is first started or restarted. I was able to reproduce that with the testcase in this patch on a version of InnoDB from 2 weeks ago. It seemed like the change buffer entries for the secondary key were getting put into pages after the restart. (But I am not sure; I did not write down the call stack while it was reproducing.) In the current code, the implicit open, which is actually a call to dict_load_foreigns(), does not occur with this testcase.
The change is to replace srv_lower_case_table_names with an interface function in ha_innodb.cc that retrieves the server global variable when it is needed.
A prepared XA transaction left in the system caused a future shutdown to hang.
InnoDB would hang on shutdown if any XA transactions exist in the
system in the PREPARED state. This has been masked by the fact that
MySQL would roll back any PREPARED transaction on shutdown, in the
spirit of Bug #12161 "Xa recovery and client disconnection".
[mysql-test-run] do_shutdown_server: Interpret --shutdown_server 0 as
a request to kill the server immediately without initiating a
shutdown procedure.
xid_cache_insert(): Initialize XID_STATE::rm_error in order to avoid a
bogus error message on XA ROLLBACK of a recovered PREPARED transaction.
innobase_commit_by_xid(), innobase_rollback_by_xid(): Free the InnoDB
transaction object after rolling back a PREPARED transaction.
trx_get_trx_by_xid(): Only consider transactions whose
trx->is_prepared flag is set. The MySQL layer seems to prevent
attempts to roll back connected transactions that are in the PREPARED
state from another connection, but it is better to play it safe. The
is_prepared flag was introduced in the InnoDB Plugin.
trx_n_prepared: A new counter, counting the number of InnoDB
transactions in the PREPARED state.
logs_empty_and_mark_files_at_shutdown(): On shutdown, allow
trx_n_prepared transactions to exist in the system.
trx_undo_free_prepared(), trx_free_prepared(): New functions, to free
the memory objects of PREPARED transactions on shutdown. This is not
needed in the built-in InnoDB, because it would collect all allocated
memory on shutdown. The InnoDB Plugin needs this because of
innodb_use_sys_malloc.
trx_sys_close(): Invoke trx_free_prepared() on all remaining
transactions.
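A minimal sketch of the scenario being tested (the XID and the table name
t1 are illustrative):

  -- Leave an XA transaction in the PREPARED state.
  XA START 'xa1';
  INSERT INTO t1 VALUES (1);
  XA END 'xa1';
  XA PREPARE 'xa1';
  -- Kill the server (e.g. --shutdown_server 0 in mysql-test-run) and
  -- restart it; the transaction is recovered in the PREPARED state.
  XA RECOVER;
  -- With the fix, the recovered transaction can be rolled back without a
  -- bogus rm_error, its InnoDB transaction object is freed, and a later
  -- shutdown no longer hangs on a leftover PREPARED transaction.
  XA ROLLBACK 'xa1';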
Bug#59410 read uncommitted: unlock row could not find a 3 mode lock
on the record
This bug is present only in 5.6, but I am adding the test case to earlier
versions as well, to ensure it never appears there either.
Setting lowercase_table_names to 2 on Windows causing Foreign Key problems
This problem was exposed by the fix for Bug#55222. There was a code path in dict0load.c,
dict_load_foreigns(), that required a case-sensitive match on the table name in order to
load a referenced table into the dictionary as needed. If the engine is restarted, a table
with foreign keys is accessed, and lower_case_table_names=2, then the table with
foreign keys will get an error when it is changed (insert/update/delete).
Once the referenced tables are loaded into the dictionary cache by a select statement
on those tables, the same change would succeed, because the affected code path would
no longer be followed.
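A minimal sketch of the scenario (illustrative table names; requires Windows
with lower_case_table_names=2, so mixed-case names are preserved but
compared case-insensitively):

  CREATE TABLE Parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE Child (id INT PRIMARY KEY, pid INT,
                      FOREIGN KEY (pid) REFERENCES Parent (id)) ENGINE=InnoDB;
  INSERT INTO Parent VALUES (1);
  -- Restart the server here, so the dictionary cache is empty.
  -- Before the fix, the next insert failed because dict_load_foreigns()
  -- looked up the referenced table with a case-sensitive name match;
  -- running SELECT * FROM Parent first made it succeed.
  INSERT INTO Child VALUES (1, 1);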
The bug was a result of the fix for bug 668644, which turned out to be
not quite correct. A problem appeared with HAVING conditions containing
more than one predicate. If a query with an ORDER BY clause uses
such a HAVING condition and the required order can be obtained with
a range/index scan, then the HAVING condition has to be pushed into
two different formulas (items). To be able to do this we have to create
a copy of the AND/OR structure of the pushed condition.
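A minimal sketch of the kind of query affected (illustrative schema, not a
test case from the actual fix): the required order can come from the index
on (a), and the HAVING clause contains two predicates, only one of which
can be pushed out of HAVING:

  CREATE TABLE t1 (a INT, b INT, KEY (a)) ENGINE=InnoDB;
  SELECT a, MIN(b) AS mb
  FROM t1
  GROUP BY a
  HAVING a > 1 AND mb < 10
  ORDER BY a;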
Previously, a hash join of a table was always performed over a full table
scan, even in the cases when there existed range/index-merge scans that
were cheaper than the full table scan.
This was a defect in the implementation of MWL#128.
Now hash join can work not only with a full table scan of the joined
table, but also with a full index scan, range and index-merge scans.
Accordingly, in the cases when hash join is used, the 'type' column
in EXPLAIN output can now contain 'hash_ALL', 'hash_index', 'hash_range'
and 'hash_index_merge'. If hash join is coupled with a range/index_merge
scan, then the 'key' and 'key_len' columns contain info not only on
the hash index used, but also on the indexes used for the scan.
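A hedged illustration (the table names and the resulting plan are
assumptions, and join_cache_level=4 is just one way to enable hashed join
buffers in MariaDB):

  SET SESSION join_cache_level = 4;
  CREATE TABLE t1 (a INT, b INT) ENGINE=InnoDB;
  CREATE TABLE t2 (a INT, b INT, KEY (a)) ENGINE=InnoDB;
  -- With no index usable for the join condition on b, a hash join may be
  -- chosen; the range condition on t2.a can now drive a 'hash_range'
  -- access instead of 'hash_ALL', with 'key'/'key_len' describing both
  -- the hash index and the index used for the range scan.
  EXPLAIN
  SELECT * FROM t1, t2
  WHERE t2.b = t1.b AND t2.a BETWEEN 10 AND 20;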
- Fixed some issues with partitions and connection_string, which also fixed lp:716890 "Pre- and post-recovery crash in Aria"
- Fixed wrong assert in Aria
Now need to merge with latest xtradb before pushing
sql/ha_partition.cc:
Ensure that m_ordered_rec_buffer is not freed before close.
sql/mysqld.cc:
Changed to use opt_stack_trace instead of opt_pstack.
Removed references to pstack
sql/partition_element.h:
Ensure that connect_string is initialized
storage/maria/ma_key_recover.c:
Fixed wrong assert
Fix for bug #58650 "Failing assertion: primary_key_no == -1 ||
primary_key_no == 0".
An attempt to create an InnoDB table with a non-nullable column of
geometry type, having a unique key of length 12 on it and
some other candidate key, led to a server crash due to an
assertion failure in both non-debug and debug builds.
The problem was that such a non-candidate key could have
been sorted as the first key in table/.FRM, before any legit
candidate keys. This resulted in assertion failure in InnoDB
engine which assumes that primary key should either be the
first key in table/.FRM or should not exist at all.
The reason behind this incorrect sorting was a wrong
value of the Create_field::key_length member for geometry fields
(it was set to the field's pack_length == 12), which confused the code
in mysql_prepare_create_table() into skipping the marking of
such a key as a key with partial segments.
This patch fixes the problem by ensuring that Create_field::key_length
gets the same value for geometry fields as for other blob fields
(from which the geometry field class is inherited); as a result,
unique keys on geometry fields are correctly marked as having
partial segments.
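A minimal sketch of the failing case (illustrative names):

  -- Before the fix this could hit the InnoDB assertion; with the fix the
  -- unique prefix key on g(12) is marked as partial and sorted after the
  -- legitimate candidate key on a.
  CREATE TABLE t1 (
    a INT NOT NULL,
    g GEOMETRY NOT NULL,
    UNIQUE KEY (a),
    UNIQUE KEY (g(12))
  ) ENGINE=InnoDB;
  DROP TABLE t1;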
mysql-test/include/gis_keys.inc:
Added test case for bug #58650 "Failing assertion:
primary_key_no == -1 || primary_key_no == 0".
mysql-test/r/gis.result:
Added test case for bug #58650 "Failing assertion:
primary_key_no == -1 || primary_key_no == 0".
mysql-test/suite/innodb/r/innodb_gis.result:
Added test case for bug #58650 "Failing assertion:
primary_key_no == -1 || primary_key_no == 0".
mysql-test/suite/innodb_plugin/r/innodb_gis.result:
Added test case for bug #58650 "Failing assertion:
primary_key_no == -1 || primary_key_no == 0".
sql/field.cc:
Changed Create_field::create_length_to_internal_length() to
correctly set the Create_field::key_length member for geometry
fields. As with the blob types, key_length for such fields
should be the same as the length, not the field's packed length
(which is always 12 for geometry).
As a result of this change, the code handling table creation now
always correctly identifies btree/unique keys on geometry
fields as partial keys, so such keys can't be erroneously
treated as candidate keys and sorted in the key array in the .FRM
file before legitimate candidate keys.
This fixes bug #58650 "Failing assertion: primary_key_no ==
-1 || primary_key_no == 0", in which incorrect candidate key
sorting led to an assertion failure in InnoDB code.
"rows examined" estimates". This change implements "innodb_stats_method"
with options of "nulls_equal", "nulls_unequal" and "null_ignored".
rb://553 approved by Marko
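For example (a hedged sketch; t1 is an illustrative table), the new setting
changes how NULLs are grouped when index statistics are gathered:

  -- nulls_equal (default): all NULLs form one value group;
  -- nulls_unequal: each NULL counts as a distinct value;
  -- nulls_ignored: NULLs are left out of the statistics.
  SET GLOBAL innodb_stats_method = 'nulls_unequal';
  ANALYZE TABLE t1;
  SHOW INDEX FROM t1;  -- cardinality reflects the chosen NULL treatment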