Copied relevant test cases and code from the MySQL 5.6 tree
Testing of my_use_symdir moved to engines.
mysql-test/r/partition_windows.result:
Updated result file
mysql-test/suite/archive/archive_no_symlink-master.opt:
Testing of symlinks with archive
mysql-test/suite/archive/archive_no_symlink.result:
Testing of symlinks with archive
mysql-test/suite/archive/archive_no_symlink.test:
Testing of symlinks with archive
mysql-test/suite/archive/archive_symlink.result:
Testing of symlinks with archive
mysql-test/suite/archive/archive_symlink.test:
Testing of symlinks with archive
sql/log_event.cc:
Updated comment
sql/partition_info.cc:
Don't test my_use_symdir here
sql/sql_parse.cc:
Updated comment
sql/sql_table.cc:
Don't test my_use_symdir here
sql/table.cc:
Added more DBUG_PRINT
storage/archive/ha_archive.cc:
Give warnings for index_file_name and if we can't use data directory
storage/myisam/ha_myisam.cc:
Give warnings if we can't use data directory or index directory
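A minimal standalone sketch of what such an engine-level check can look like (my_use_symdir is the real flag referenced above; CreateInfo, push_engine_warning() and check_directory_options() are illustrative stand-ins, not the actual handler code):

    // Standalone sketch: an engine checks whether symbolic-link support is
    // available before honouring DATA DIRECTORY / INDEX DIRECTORY, and emits
    // a warning instead of silently ignoring the option.
    #include <cstdio>

    static bool my_use_symdir = false;   // stand-in for the server flag

    struct CreateInfo {
      const char *data_file_name;        // DATA DIRECTORY clause, or nullptr
      const char *index_file_name;       // INDEX DIRECTORY clause, or nullptr
    };

    static void push_engine_warning(const char *msg) {
      std::fprintf(stderr, "Warning: %s\n", msg);
    }

    static void check_directory_options(const CreateInfo &info,
                                        bool engine_has_index_files) {
      if (!my_use_symdir) {
        if (info.data_file_name)
          push_engine_warning("DATA DIRECTORY option ignored: symbolic links are disabled");
        if (info.index_file_name)
          push_engine_warning("INDEX DIRECTORY option ignored: symbolic links are disabled");
      } else if (info.index_file_name && !engine_has_index_files) {
        // e.g. ARCHIVE has no separate index file, so INDEX DIRECTORY is meaningless.
        push_engine_warning("INDEX DIRECTORY option ignored by this storage engine");
      }
    }

    int main() {
      CreateInfo info = {"/mnt/fast_disk/data", "/mnt/fast_disk/index"};
      check_directory_options(info, false);
      return 0;
    }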
Bug #3329 Incomplete lower_case_table_names=2 implementation
The problem was that check_db_name() converted database names to lower case even when lower_case_table_names=2.
Fixed by removing the conversion in check_db_name() for lower_case_table_names=2 and instead converting the db name to
lower case at the same places where table names are converted.
Fixed bug that SHOW CREATE DATABASE FOO showed information for database 'foo'.
I also removed some checks of lower_case_table_names when it was enough to use table_alias_charset.
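A standalone sketch of the naming rules described above (to_lower(), db_names_equal() and this check_db_name() are simplified stand-ins, not the server functions; the case-insensitive comparison plays the role of table_alias_charset):

    // Standalone sketch (illustrative only): with lower_case_table_names = 2
    // the user-visible database name keeps its case, so SHOW CREATE DATABASE
    // FOO can print 'FOO', while name comparisons are case-insensitive.
    #include <algorithm>
    #include <cctype>
    #include <iostream>
    #include <string>

    static int lower_case_table_names = 2;   // stand-in for the server variable

    static std::string to_lower(std::string s) {
      std::transform(s.begin(), s.end(), s.begin(),
                     [](unsigned char c) { return (char) std::tolower(c); });
      return s;
    }

    // Comparison through "table_alias_charset": case-insensitive unless
    // lower_case_table_names == 0, so callers need not check the variable.
    static bool db_names_equal(const std::string &a, const std::string &b) {
      if (lower_case_table_names == 0)
        return a == b;
      return to_lower(a) == to_lower(b);
    }

    // check_db_name() analogue: only mode 1 rewrites the name itself; mode 2
    // leaves it untouched so the original case can be shown to the user.
    static void check_db_name(std::string &name) {
      if (lower_case_table_names == 1)
        name = to_lower(name);
    }

    int main() {
      std::string db = "FOO";
      check_db_name(db);
      std::cout << db << "\n";                           // prints FOO under mode 2
      std::cout << db_names_equal("FOO", "foo") << "\n"; // 1: same database
      return 0;
    }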
mysql-test/mysql-test-run.pl:
Added a --use-copy argument to force mysql-test-run to copy files instead of creating symlinks. This is needed when you run
with the test directory on another file system.
mysql-test/r/lowercase_table.result:
Updated results
mysql-test/r/lowercase_table2.result:
Updated results
mysql-test/suite/parts/r/partition_mgm_lc2_innodb.result:
Updated results
mysql-test/suite/parts/r/partition_mgm_lc2_memory.result:
Updated results
mysql-test/suite/parts/r/partition_mgm_lc2_myisam.result:
Updated results
mysql-test/t/lowercase_table.test:
Added tests with mixed case databases
mysql-test/t/lowercase_table2.test:
Added tests with mixed case databases
sql/log.cc:
Don't check lower_case_table_names when we can use table_alias_charset
sql/sql_base.cc:
Don't check lower_case_table_names when we can use table_alias_charset
sql/sql_db.cc:
Use cmp_db_names() for checking if current database changed.
mysql_rm_db() now converts db to lower case if lower_case_table_names was used.
Changed database options cache to use table_alias_charset. This fixed a bug where SHOW CREATE DATABASE showed wrong information.
sql/sql_parse.cc:
Also change the db name to lower case when file names are changed.
We don't need to store a copy of the database name anymore when lower_case_table_names == 2, as check_db_name() doesn't convert in this case.
Updated arguments to mysqld_show_create_db().
When adding a table to TABLE_LIST, also convert the db name to lower case if needed (the same way as we do with table names).
sql/sql_show.cc:
mysqld_show_create_db() now also takes the original name as an argument, for output to the user.
sql/sql_show.h:
Updated prototype for mysqld_show_create_db()
sql/sql_table.cc:
In mysql_rename_table(), do the same conversions to the database name as we do for the file name.
- Histogram::find_bucket() should not walk off the end of the value range.
- Address review feedback in Histogram::point_selectivity(): different handling
for zero-width buckets, and explanations.
The reason was that a couple of variables that hold the number of rows used to calculate buffers were uint, which caused an overflow.
Fixed by changing the variables that could hold the number of rows from uint to ulong, and also added a cast for this test.
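The overflow class being fixed can be shown with a few lines of standalone C++ (the variable names are only illustrative, not the actual heap code):

    // Standalone illustration of the overflow (not the heap code): multiplying
    // two 32-bit counts wraps around, while widening one operand first keeps
    // the intended buffer size.
    #include <cstdio>

    int main() {
      unsigned int rows = 100 * 1000 * 1000;   // number of rows, fits in uint
      unsigned int reclength = 100;            // bytes per row

      // 32-bit multiplication wraps: 10,000,000,000 mod 2^32.
      unsigned int wrong = rows * reclength;

      // Widening (the added cast / switch to a wider type) keeps the full value.
      unsigned long long right = (unsigned long long) rows * reclength;

      std::printf("wrong=%u right=%llu\n", wrong, right);
      return 0;
    }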
include/heap.h:
Reordered to get better alignment. Changed variables that could hold the number of rows from uint to ulong.
mysql-test/suite/heap/heap.result:
Added test case
mysql-test/suite/heap/heap.test:
Added test case
mysql-test/suite/plugins/t/server_audit.test:
Added a sleep, as we want the disconnect logged before we try a new connect.
storage/heap/ha_heap.cc:
Changed variables that could hold the number of rows from uint to ulong.
Limited the number of rows to 4G (as most of the variables that hold rows are ulong anyway).
Reset records_changed when key_stat_version is changed, to not cause increments for every row changed.
storage/heap/ha_heap.h:
Changed records_changed to ulong as this can get big.
storage/heap/hp_create.c:
Changed variables that could hold number of rows from uint to ulong
Added cast (fixed the original bug)
storage/heap/hp_delete.c:
Changed variables that could hold number of rows from uint to ulong
storage/heap/hp_open.c:
Removed an unneeded cast.
storage/heap/hp_write.c:
Changed variables that could hold number of rows from uint to ulong
support-files/compiler_warnings.supp:
Removed extra : from suppression
- Fix Histogram::point_selectivity() to work in the case where the
passed value_pos=0 (or 1) and the first (or the last) bucket in the
histogram has zero value-range (i.e. one value).
[Attempt #2]
- Use a new selectivity calculation formula in Histogram::point_selectivity.
The formula is different from the old one because it was developed from scratch;
it doesn't have any possible division-by-zero problems.
Don't use the mysql-5.6 change.
Correct fix: zero out the rounded tail after the number was shifted because
of the carry digit (otherwise the carry digit would be zeroed out too).
Let TABLE_SHARE::tdc.free_tables, TABLE_SHARE::tdc.all_tables,
TABLE_SHARE::tdc.flushed and corresponding invariants be protected by
per-share TABLE_SHARE::tdc.LOCK_table_share instead of global LOCK_open.
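A minimal sketch of the locking change (made-up types stand in for TABLE_SHARE and TABLE; only the idea of a per-share mutex is shown):

    // Standalone sketch: per-share TDC state is protected by a mutex embedded
    // in the share, so two shares can be updated concurrently without taking
    // a single process-wide LOCK_open.
    #include <list>
    #include <mutex>

    struct Table {};   // stand-in for TABLE

    struct TableShare {
      struct Tdc {
        std::mutex LOCK_table_share;    // replaces global LOCK_open for the fields below
        std::list<Table*> free_tables;  // unused TABLE instances for this share
        std::list<Table*> all_tables;   // every TABLE instance for this share
        bool flushed = false;
      } tdc;
    };

    // Returning a free TABLE to the share touches only that share's lock.
    void release_table(TableShare &share, Table *tab) {
      std::lock_guard<std::mutex> guard(share.tdc.LOCK_table_share);
      share.tdc.free_tables.push_back(tab);
    }

    int main() {
      TableShare share;
      Table t;
      release_table(share, &t);
      return 0;
    }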
Now if CREATE OR REPLACE fails but we have deleted a table already, we will generate a DROP TABLE in the binary log.
This fixes this issue.
In addition, for a failing CREATE OR REPLACE TABLE ... SELECT we don't generate a log of all the inserted rows, only the DROP TABLE.
I added code for not logging DROP TEMPORARY TABLE for tables where the CREATE TABLE was not logged. This code will be activated in 10.1
by removing the code protected by DONT_LOG_DROP_OF_TEMPORARY_TABLES.
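A standalone sketch of the resulting binlog decision on failure (illustrative control flow; the real work is done by the binlog_reset_cache() and log_drop_table() functions mentioned below):

    // Standalone sketch: on a failed CREATE OR REPLACE, either nothing or a
    // single DROP TABLE ends up in the binary log, never the inserted rows.
    #include <iostream>
    #include <string>

    // Illustrative stand-ins for binlog_reset_cache() and log_drop_table().
    static void binlog_reset_cache() { std::cout << "binlog cache cleared\n"; }
    static void log_drop_table(const std::string &name) {
      std::cout << "binlog: DROP TABLE " << name << "\n";
    }

    void handle_failed_create_or_replace(bool old_table_was_dropped,
                                         const std::string &table_name) {
      // Throw away whatever the failed statement (including ... SELECT rows)
      // put into the statement cache.
      binlog_reset_cache();
      if (old_table_was_dropped) {
        // The old table is gone, so the slave must drop it too.
        log_drop_table(table_name);
      }
      // If nothing was changed, nothing is logged at all.
    }

    int main() {
      handle_failed_create_or_replace(true, "t1");
      return 0;
    }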
mysql-test/suite/rpl/r/create_or_replace_mix.result:
More test cases
mysql-test/suite/rpl/r/create_or_replace_row.result:
More test cases
mysql-test/suite/rpl/r/create_or_replace_statement.result:
More test cases
mysql-test/suite/rpl/t/create_or_replace.inc:
More test cases
sql/log.cc:
Added binlog_reset_cache() to clear the binary log.
sql/log.h:
Added prototype
sql/sql_insert.cc:
If CREATE OR REPLACE TABLE ... SELECT fails:
- Don't log anything if nothing changed
- If table was deleted, log a DROP TABLE.
Remember if the creation of temporary tables was logged.
sql/sql_table.cc:
Added log_drop_table()
Remember if the creation of temporary tables was logged.
If CREATE OR REPLACE TABLE ... SELECT fails and a table was deleted, log a DROP TABLE.
sql/sql_table.h:
Added prototype
sql/sql_truncate.cc:
Remember if the creation of temporary tables was logged.
sql/table.h:
Added table_creation_was_logged
mysql-test/r/create_or_replace2.result:
Added test case
mysql-test/t/create_or_replace.test:
Fixed comment
mysql-test/t/create_or_replace2.test:
Added test case
sql/sql_base.cc:
Safety fix:
Don't let threads with query_id=0 free temporary tables, as this may free temporary tables not in use.
This is mostly the case for the slave io threads, as most other threads have thd->query_id != 0 (a rough sketch of the guard appears after this file list).
sql/sql_table.cc:
Added comment.
Ignore kill when opening a temporary table for CREATE ... LIKE.
This fixed the original issue.
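A rough standalone sketch of the query_id=0 guard from the sql_base.cc change above (THD here is a trivial stand-in, not the server class):

    // Standalone sketch: a thread whose query_id is 0 (e.g. a slave io thread)
    // skips the cleanup that frees temporary tables, since with query_id == 0
    // it cannot reliably tell which temporary tables it may free.
    #include <cstdio>

    struct THD {
      unsigned long long query_id;
    };

    void maybe_free_temporary_tables(THD *thd) {
      if (thd->query_id == 0) {
        // Safety fix: do nothing rather than risk freeing a table we should keep.
        return;
      }
      std::printf("freeing unused temporary tables for query_id=%llu\n",
                  thd->query_id);
    }

    int main() {
      THD io_thread = {0};
      THD user_thread = {42};
      maybe_free_temporary_tables(&io_thread);    // skipped
      maybe_free_temporary_tables(&user_thread);  // runs
      return 0;
    }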
- MDEV-5689 ExtractValue(xml, 'substring(/x,/y)') crashes
- MDEV-5709 ExtractValue() with XPath variable references returns wrong result.
Description:
1. The main problem was that nodeset_func->fix_fields() was
called in Item_func_xml_extractvalue::val_str() and
Item_func_xml_update::val_str(), which led in some cases to
execution of the XPath engine *before* having a parsed XML value.
Moved to Item_xml_str_func::fix_fields().
2. Cleanup: added a new method Item_xml_str_func::fix_fields() and moved
most of the code from Item_xml_str_func::fix_length_and_dec()
to Item_xml_str_func::fix_fields(), to follow the usual Item layout.
3. Cleanup: a parsed XML value is useless without the raw XML value
it was built from.
Previously the parsed and the raw values were stored in separate String
instances. It was hard to follow how they were synchronized.
Added a helper class XML which contains both the parsed and the raw values
(a rough sketch of the idea appears after this list). Makes things easier to read and modify.
4. MDEV-5709: const_item() could incorrectly return a "true"
result when the XPath expression contains user/SP variable references.
Now nodeset_func->const_item() is also taken into account to
catch such cases.
5. Minor code enhancements.
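A standalone sketch of the idea behind the XML helper class from item 3 (member names are illustrative, not the actual class): the raw text and its parsed form are kept and reset together, so they cannot get out of sync.

    // Standalone sketch (illustrative, not the server class).
    #include <string>
    #include <vector>

    class XML {
    public:
      void set_raw(const std::string &text) {
        raw_ = text;
        parsed_.clear();        // a new raw value invalidates the old parse
        parsed_ok_ = false;
      }
      bool parse() {
        // Real code would run the XML parser here; we only record that the
        // parsed representation now corresponds to raw_.
        parsed_ok_ = !raw_.empty();
        return parsed_ok_;
      }
      const std::string &raw() const { return raw_; }
      bool parsed() const { return parsed_ok_; }

    private:
      std::string raw_;          // the original XML text
      std::vector<int> parsed_;  // stand-in for the parsed node tree
      bool parsed_ok_ = false;
    };

    int main() {
      XML xml;
      xml.set_raw("<a><b/></a>");
      return xml.parse() ? 0 : 1;
    }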
A huge number in the "day" part of an interval made the code return
a negative date erroneously. Added a test to return an error on a too
large "day" value.
After constant table row substitution the WHERE condition may be converted
to always true. The function calculate_cond_selectivity_for_table() should
take this possibility into account.
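A trivial standalone sketch of that guard (illustrative types, not the optimizer code):

    // Standalone sketch: if constant-table substitution left no filtering
    // condition for the table (always true), the selectivity is simply 1.0.
    #include <cstdio>

    struct Item { bool is_always_true; };   // stand-in for an optimized condition

    double calculate_cond_selectivity_for_table(const Item *cond) {
      if (cond == nullptr || cond->is_always_true)
        return 1.0;    // no condition restricts this table
      return 0.5;      // placeholder for the real per-column estimates
    }

    int main() {
      std::printf("%f\n", calculate_cond_selectivity_for_table(nullptr));
      return 0;
    }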
Due to how gap locks work, two transactions could group commit together on the
master, but get lock conflicts and then deadlock due to different thread
scheduling order on slave.
For now, remove these deadlocks by running the parallel slave in READ
COMMITTED mode. And let InnoDB/XtraDB allow statement-based binlogging for the
parallel slave in READ COMMITTED.
We are also investigating a different solution long-term, which is based on
relaxing the gap locks only between the transactions running in parallel for
one slave, but not against possibly external transactions.
When a transaction fails in parallel replication, it should signal the error
to any following transactions doing wait_for_prior_commit() on it. But the
code for this was incorrect, and would not correctly remember a prior error
when sending the signal. This caused corruption when slave stopped due to an
error.
Fix by remembering the error code when we first get an error, and passing the
saved error code to wakeup_subsequent_commits().
Thanks to nanyi607rao who reported this bug on
maria-developers@lists.launchpad.net and analysed the root cause.
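A standalone sketch of the fix (wait_for_commit and wakeup_subsequent_commits() are the names mentioned above; the member and its handling here are simplified): the first error is remembered, and that saved code is what waiting transactions receive.

    #include <cstdio>

    // Illustrative stand-in for the wait/wakeup object used between transactions.
    struct wait_for_commit {
      int saved_error = 0;    // first error seen by this transaction, 0 if none

      void record_error(int err) {
        if (saved_error == 0)         // remember only the first error
          saved_error = err;
      }

      // Followers doing wait_for_prior_commit() receive the saved code, so they
      // see the failure even if it happened long before the wakeup.
      void wakeup_subsequent_commits(int wakeup_error) {
        std::printf("waking followers with error=%d\n", wakeup_error);
      }
    };

    int main() {
      wait_for_commit w;
      w.record_error(1213);   // e.g. a deadlock in the prior transaction
      w.record_error(1205);   // a later error must not overwrite the first one
      w.wakeup_subsequent_commits(w.saved_error);
      return 0;
    }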
LOCAL AND IMPORT ERRORS
Description:
-----------
This bug happens because the current algorithm is designed so that,
in the case of a LOCAL load of data, when an error occurs the
remaining part of the file is read in order to return the proper
error message to the client side.
The problem with the current implementation is that the data stream
from the client side is cleared only in the case where line delimiters
exist, which is not the case with, for example, fixed-width
fields.
Fix:
----
Ported the patch provided by Sinisa Milivojevic in the bug report for this
issue to 5.5+ versions.
As part of this patch, the code is changed to clear the data stream
by calling the new member function "READ_INFO::skip_data_till_eof".
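A standalone sketch of what "clearing the data stream" means here (READ_INFO_sketch is an illustrative reader, not the actual READ_INFO class):

    // Standalone sketch: after an error, keep reading and discarding the
    // client's remaining LOCAL data until end-of-file, regardless of whether
    // the format has line delimiters, so the connection is back in sync
    // before the error is reported.
    #include <cstdio>

    struct READ_INFO_sketch {
      FILE *stream;

      // Analogue of READ_INFO::skip_data_till_eof(): consume everything left.
      void skip_data_till_eof() {
        char buf[4096];
        while (std::fread(buf, 1, sizeof(buf), stream) == sizeof(buf))
          ;   // discard
      }
    };

    int main() {
      READ_INFO_sketch info = {stdin};
      info.skip_data_till_eof();
      std::fprintf(stderr, "stream drained; error can now be sent to the client\n");
      return 0;
    }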