There are two issues fixed here:
1. We needed to update the result files for some of the
mysqlbinlog_* tests, because some padding characters
are no longer output.
2. We needed to change Field_string::pack so that
for BINARY types the padding characters are not packed
(lengthsp returns the full length for these types).
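A minimal sketch of the observable effect (column size and value are illustrative, not the original test data):

  -- With binlog_format=ROW, the trailing 0x00 padding of a BINARY column is
  -- no longer packed into the row event, so mysqlbinlog output no longer
  -- shows those padding characters.
  CREATE TABLE t1 (b BINARY(16));
  INSERT INTO t1 VALUES ('abc');   -- stored as 'abc' followed by 0x00 padding
  SELECT HEX(b) FROM t1;           -- padding is still present in the table itself
  DROP TABLE t1;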
- Fixing crash on attempt to create a fulltext index with a utf8mb4 column (sketch below)
- Fixing wrong border width for supplementary characters in the mysql client:
mysql --default-character-set=utf8mb4 -e "select concat(_utf32 0x20000,'a')"
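A rough reproduction sketch for the first item (table and index names are assumptions, not the original test case):

  -- Creating a fulltext index on a utf8mb4 column used to crash the server.
  CREATE TABLE t1 (a VARCHAR(255) CHARACTER SET utf8mb4) ENGINE=MyISAM;
  CREATE FULLTEXT INDEX ft_a ON t1 (a);
  DROP TABLE t1;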
Split rpl_row_charset into:
- rpl_row_utf16.
- rpl_row_utf32.
This way each test can run independently when the server supports
one of the character sets but not the other.
Cleaned up rpl_row_utf32 which had a spurious instruction:
-- let $reset_slave_type_conversions= 0
In BUG#51787 we were using the wrong charset to print out the
data. We were using the field charset for the string that would
hold the information. This caused the assertion, because the
string length was not aligned with the UTF32 byte requirements for
storage.
We fix this by using &my_charset_latin1 in the string object
instead of field->charset(). As a side-effect, we needed to
extend the show_sql_type interface so that the field charset
is now passed as a parameter, making it possible to
calculate the correct field size.
In BUG#51716 we had issues with Field_string::pack and
Field_string::unpack. When packing, the length was incorrectly
calculated. When unpacking, the string would be padded with
the wrong bytes (a few bytes less than it should be).
We fix this by resorting to charset abstractions (functions) that
calculate the correct length when packing and pad the string
correctly when unpacking.
FLUSH TABLES <view> WITH READ LOCK kills the server.
Prohibit FLUSH TABLES WITH READ LOCK application to views or
temporary tables.
Fix a subtle bug in the implementation: we did not actually
remove table share objects from the table cache after
acquiring exclusive locks.
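A minimal sketch of the new behaviour (object names are illustrative; the exact error returned is not reproduced here):

  CREATE TABLE t1 (a INT);
  CREATE VIEW v1 AS SELECT * FROM t1;
  FLUSH TABLES v1 WITH READ LOCK;     -- now rejected with an error instead of killing the server
  CREATE TEMPORARY TABLE tmp1 (a INT);
  FLUSH TABLES tmp1 WITH READ LOCK;   -- temporary tables are rejected as well
  DROP VIEW v1;
  DROP TABLE t1;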
mysql-test/r/flush.result:
Update results (Bug#51710)
mysql-test/t/flush.test:
Add a test case for Bug#51710.
sql/sql_parse.cc:
Fix Bug#51710 "FLUSH TABLES <view> WITH READ LOCK
kills the server".
Ensure we don't open views and temporary tables.
Fix yet another bug in the implementation, which
did not actually remove the tables from the cache after acquiring
exclusive locks.
The problem was that in read-only mode (read_only enabled),
the server would mistakenly deny data modification attempts
for temporary tables which belong to a transactional storage
engine (e.g. InnoDB).
The solution is to allow transactional temporary tables to be
modified under read-only mode. As a whole, the read-only mode
does not apply to any kind of temporary table.
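A minimal sketch of the behaviour after the fix (the DML is assumed to run from a connection without the SUPER privilege, since read_only never applies to SUPER; names are illustrative):

  SET GLOBAL read_only = 1;                          -- done from a SUPER connection
  CREATE TEMPORARY TABLE tmp1 (a INT) ENGINE=InnoDB; -- from the non-SUPER connection
  INSERT INTO tmp1 VALUES (1);                       -- no longer rejected under read_only
  UPDATE tmp1 SET a = 2;
  DROP TEMPORARY TABLE tmp1;
  SET GLOBAL read_only = 0;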
mysql-test/r/read_only_innodb.result:
Add test case result for Bug#33669
mysql-test/t/read_only_innodb.test:
Add test case for Bug#33669
sql/lock.cc:
Rename mysql_lock_tables_check to lock_tables_check and make
it static. Move locking related checks from get_lock_data to
lock_tables_check. Allow write locks to temporary tables even
under read-only.
Ensure that we store the correct cached_field_type whenever we cache Field items
(in this case it allows us to compare dates as dates, rather than strings)
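Not the original type_timestamp test case, just a plain illustration of why comparing as dates rather than as strings matters:

  SELECT CAST('2010-04-01' AS DATE) = '2010-4-1';   -- 1: compared as dates
  SELECT '2010-04-01' = '2010-4-1';                 -- 0: compared as strings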
mysql-test/r/type_timestamp.result:
Add test case.
mysql-test/t/type_timestamp.test:
Add test case.
sql/item.h:
Initialize cached_field_type from the Field item.
Before this fix, the performance schema instrumentation
in mdl.h / mdl.cc was incomplete, causing:
- build warnings,
- no data collection for the performance schema.
This fix:
- added instrumentation helpers for the new preferred-reader
read/write lock, mysql_prlock_*
- completely implemented the performance schema
instrumentation of mdl.h / mdl.cc.
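One way to see the result from SQL once the server is built with the performance schema (the LIKE pattern is deliberately loose; exact instrument names are an assumption):

  -- Lists the MDL-related synchronization instruments registered by mdl.cc,
  -- including the preferred-reader rwlock (mysql_prlock_*) instrumentation.
  SELECT NAME, ENABLED, TIMED
  FROM performance_schema.setup_instruments
  WHERE NAME LIKE 'wait/synch/%MDL%';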
Before this fix, mysql_upgrade would always drop and re-create
the performance_schema database.
In theory, this could destroy user data created using 5.1 or older versions.
With this fix, mysql_upgrade checks the content of the
performance_schema database before dropping it.
An earlier change had broken the 5.1 behaviour of --log-error: --log-error without
an argument sent errors to stderr instead of writing to a file with an autogenerated name.
mysql-test/suite/sys_vars/t/log_error_func.test:
test that error log is created and shown in SHOW VARIABLES.
Interestingly, the error log's path is apparently relative if --log-error=argument is used, but
may be absolute or relative if --log-error (no argument) is used, because then the path is derived from
that of pidfile_name, which can be absolute or relative depending on whether it is autogenerated or not.
mysql-test/suite/sys_vars/t/log_error_func2.test:
test that error log is created and shown in SHOW VARIABLES
mysql-test/suite/sys_vars/t/log_error_func3.test:
test that error log is empty in SHOW VARIABLES
sql/mysql_priv.h:
id for option --log-error
sql/mysqld.cc:
No --log-error means "write errors to stderr", whereas --log-error
without an argument means "write errors to a file". So we cannot use the default logic
of class sys_var_charptr, which treats "option not used" the same as "option used
without argument" and uses the same default for both. We need to catch "option used"
in mysqld_get_one_option(), and then "without argument". Setting it to "" makes sure
that init_server_components() will create the log with an autogenerated name
(a quick SHOW VARIABLES check is sketched after these notes).
sql/sys_vars.cc:
need to give the option a numeric id so that we can catch it in mysqld_get_one_option()
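A quick way to observe the distinction from SQL (the values in the comments are examples, not exact output):

  -- --log-error with no argument: log_error holds an autogenerated file name.
  -- --log-error=some_name:        log_error holds that (possibly relative) path.
  -- no --log-error at all:        log_error is empty and errors go to stderr.
  SHOW VARIABLES LIKE 'log_error';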
Bug#51676 Server crashes on SELECT, ORDER BY on 'utf8mb4' column
An additional fix. We should use 0xFFFD as a weight for supplementary
characters, not the "weight for character U+FFFD".
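A hedged illustration of what this implies (the collation choice here is an assumption, not from the original test): all supplementary characters get the same 0xFFFD weight, so two distinct supplementary characters compare as equal under the UCA-based collation.

  SELECT CONVERT(_utf32 0x20000 USING utf8mb4) COLLATE utf8mb4_unicode_ci
       = CONVERT(_utf32 0x20001 USING utf8mb4) COLLATE utf8mb4_unicode_ci
       AS same_weight;   -- expected: 1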
Bug#51675 Server crashes on inserting 4 byte char. after ALTER TABLE to 'utf8mb4'
Bug#51676 Server crashes on SELECT, ORDER BY on 'utf8mb4' column
include/m_ctype.h:
Defining MY_CS_REPLACEMENT_CHARACTER
mysql-test/r/ctype_utf8mb4.result:
Adding tests
mysql-test/t/ctype_utf8mb4.test:
Adding tests
strings/ctype-uca.c:
Don't use UCA data for characters higher than 0xFFFF.
strings/ctype-ucs2.c:
Using newly defined MY_CS_REPLACEMENT_CHARACTER
strings/ctype-utf8.c:
Using newly defined MY_CS_REPLACEMENT_CHARACTER
Removing unused variable "plane".
Assorted CMake fixes, mostly to better match the original autotools runs:
- Fix recognition of --with-debug=full in configure wrapper
- Remove CMakeCache.txt in configure wrapper, to match the original
- Fix recognition of max-no-ndb
- Fix broken dependencies of mysql_fix_privilege_table.sql from
mysql_system_tables.sql and mysql_system_tables_fix.sql
- Add "distclean target" that informs user about appropriate bzr command
cmake/configure.pl:
- Recognize --with-debug=full, map to WITH_DEBUG_FULL
- remove CMakeCache.txt, so the configuration is no longer sticky
(to match the original configure behavior)
cmake/plugin.cmake:
- Recognize WITH_MAX_NO_NDB, this fixes missing storage engines after BUILD/*max-no-ndb scripts
mysql-test/CMakeLists.txt:
test-force uses the same macros (MTR_FORCE) as test-bt* now
scripts/CMakeLists.txt:
- Fix broken dependency when producing mysql_fix_privilege_tables.sql, reported by Davi.
We now concatenate the 2 scripts in a custom command that
depends on both scripts, rather than concatenating them at cmake time.
sql/CMakeLists.txt:
Address the frequently asked question "where is distclean" by implementing a distclean target
that does nothing except point to the appropriate
bzr command.
It is better not to call "bzr clean-tree" automatically, without user consent,
as it could remove new files that were meant to be added.
Assertion failure in Diagnostics_area::set_ok_status on DROP FUNCTION.
This assert tests that the server is not trying to send "ok" to
the client if an error has occurred during statement processing.
In this case, the assert was triggered by lock timeout errors when
accessing system tables to do an implicit REVOKE after executing
DROP FUNCTION/PROCEDURE. In practice, this was only likely to
happen with very low values for "lock_wait_timeout" (in the bug report
1 second was used). These errors were ignored and the server tried
to send "ok" to the client, triggering the assert.
The patch for Bug#45225 introduced lock timeouts for metadata locks.
This made it possible to get timeouts when accessing system tables.
Note that a follow-up patch for Bug#45225, pushed after this
bug was reported, changed access to system tables such
that the user-supplied timeout value is ignored and the maximum
timeout value is used instead. This exact bug was therefore
only noticeable in the period between the initial Bug#45225 patch
and the followup patch.
However, the same problem could occur for any errors during revoking
of privileges - not just timeouts. This patch fixes the problem by
making sure that any errors during revoking of privileges are
reported to the client.
Test case added to sp-destruct.test. Since the original bug is not
reproducible now that system tables are accessed using a long
timeout value, this test instead calls DROP FUNCTION with a grant
system table missing.
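A rough sketch of the shape of the added test (the exact system table manipulated and the function body are assumptions):

  CREATE FUNCTION f1() RETURNS INT RETURN 1;
  RENAME TABLE mysql.procs_priv TO mysql.procs_priv_backup;
  DROP FUNCTION f1;   -- the failed implicit REVOKE is now reported as an error
                      -- to the client instead of a silent "ok"
  RENAME TABLE mysql.procs_priv_backup TO mysql.procs_priv;
  FLUSH PRIVILEGES;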
Add deprecation warning when variable optimizer_search_depth is given
the value 63.
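The warning can be observed directly (the exact warning text is in the updated result files, not reproduced here):

  SET SESSION optimizer_search_depth = 63;
  SHOW WARNINGS;   -- contains the deprecation warning for the value 63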
mysql-test/r/greedy_optimizer.result:
Updated with warning text.
mysql-test/r/mysqld--help-notwin.result:
Updated with warning from mysqld --help --verbose.
mysql-test/r/mysqld--help-win.result:
Updated with warning from mysqld --help --verbose.
sql/sys_vars.cc:
Added an update check function to the constructor invocation for
the optimizer_search_depth variable. The function emits a
warning message for the value 63.
The slave was auto-reconnecting earlier than prescribed by the slave_net_timeout value.
The issue happened on 64-bit Solaris, which exposed an incorrect cast of
the ulong slave_net_timeout into the uint mysql.options.read_timeout.
Notice that there is no reason for slave_net_timeout to be of type ulong.
Since it is primarily passed as an argument to mysql_options, the type can be made
uint to avoid all conversion hassles.
That is what this fix does.
A "side" effect of the patch is a new value for the max of slave_net_timeout:
the max of the unsigned int type (and therefore it varies across platforms).
Note that a regression test can't be made to run reliably without making it last over some
20 secs. That's why it is placed in suite/large_tests.
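A hedged illustration of the new maximum (the exact limit depends on the platform's unsigned int; 4294967295 assumes a 32-bit uint):

  SET GLOBAL slave_net_timeout = 4294967295;
  SHOW VARIABLES LIKE 'slave_net_timeout';
  SET GLOBAL slave_net_timeout = DEFAULT;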
mysql-test/suite/large_tests/r/rpl_slave_net_timeout.result:
the new test results.
mysql-test/suite/large_tests/t/rpl_slave_net_timeout-slave.opt:
Initialization of the option that yields slave_net_timeout's default.
sql/mysql_priv.h:
changing type for slave_net_timeout from ulong to uint
sql/mysqld.cc:
changing type for slave_net_timeout from ulong to uint
sql/sys_vars.cc:
Refining the max value for slave_net_timeout to be the max of the uint type.
Extend and implement the grammar so that FLUSH TABLES ... WITH READ LOCK
can be applied to a list of tables, rather than all of them.
Incompatible grammar change:
Previously one could perform FLUSH TABLES, HOSTS, PRIVILEGES in a single
statement.
After this change, FLUSH TABLES must always be alone on the list.
Judging by the test suite, however, the old extended syntax
was never or very rarely used.
The new statement requires RELOAD ACL global privilege and
LOCK_TABLES_ACL | SELECT_ACL on individual tables.
In other words, it's an atomic combination of LOCK TABLES <list> READ
and FLUSH TABLES <list>, and requires respective privileges.
For additional information about the semantics, please
see WL#5000 and the comment for flush_tables_with_read_lock()
function in sql_parse.cc
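A short sketch of the new grammar (table names are illustrative):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT);
  FLUSH TABLES t1, t2 WITH READ LOCK;   -- flush and read-lock only the listed tables, atomically
  UNLOCK TABLES;
  -- No longer accepted: FLUSH TABLES combined with other flush options, e.g.
  -- FLUSH TABLES, HOSTS, PRIVILEGES;
  DROP TABLE t1, t2;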
mysql-test/r/flush.result:
Update test results (WL#5000).
mysql-test/t/flush.test:
Add test coverage for WL#5000.
sql/sql_yacc.yy:
Allow FLUSH TABLES <table_list> WITH READ LOCK.
Disallow FLUSH TABLES <table_list>, flush_options.
Before this fix, the performance schema file instrumentation would treat:
- a relative path to a file
- an absolute path to the same file
as two different files.
This would lead to:
- separate aggregation counters
- file leaks when a file is removed.
With this fix, a relative and absolute path are resolved to the same file instrument.
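One way to observe the effect (the table queried and the file pattern are illustrative; this assumes file instrumentation is enabled):

  -- After the fix, a file opened via a relative path and via its absolute path
  -- shows up as a single instance here rather than two.
  SELECT FILE_NAME, EVENT_NAME, OPEN_COUNT
  FROM performance_schema.file_instances
  WHERE FILE_NAME LIKE '%ibdata1%';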
The problem was that ALTER TABLE on a merge table which was locked
using LOCK TABLE ... WRITE mistakenly gave
ER_TABLE_NOT_LOCKED_FOR_WRITE.
During opening of the table to be ALTERed, open_table() tried to
get an upgradable metadata lock. In LOCK TABLEs mode, this lock
must already exist (i.e. taken by LOCK TABLE) as new locks of this
type cannot be acquired for fear of deadlock. So in LOCK TABLEs
mode, open_table() tried to find an existing upgradable lock for
the table to be altered.
The problem was that open_table() also tried to find upgradable
metadata locks for children of merge tables even if no such
locks are needed to execute ALTER TABLE on merge tables.
This patch fixes the problem by making sure that open tables code
only searches for upgradable metadata locks for the merge table
and not for the merge children tables.
The patch also fixes a related bug where an upgradable metadata
lock was acquired outside of LOCK TABLEs mode even if the table in
question was temporary. This bug meant that LOCK TABLES or DDL on
temporary tables by mistake could be blocked/aborted by locks held
on base tables with the same table name by other connections.
Test cases added to merge.test and lock_multi.test.
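A rough sketch of the previously failing scenario (names are illustrative, not the exact merge.test case):

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE t2 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1, t2);
  LOCK TABLE m1 WRITE;
  ALTER TABLE m1 UNION=(t1);   -- used to fail with ER_TABLE_NOT_LOCKED_FOR_WRITE
  UNLOCK TABLES;
  DROP TABLE m1, t1, t2;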