main.derived_cond_pushdown: Move all 10.3 tests to the end,
trim trailing white space, and add an "End of 10.3 tests" marker.
Add --sorted_result to tests where the ordering is not deterministic.
main.win_percentile: Add --sorted_result to tests where the
ordering is no longer deterministic.
The test, and also rpl_gtid_delete_domain, failed on the PPC64 platform
due to an incorrectly specified actual key for searching
in the gtid domain system hash. While the correct size is 32 bits,
the supplied value was 8 bytes of long int size on that platform.
The problem became evident thanks to big-endianness, which
cut off the *least* significant part of the value field.
Fixed by correcting the dynamic array initialization to hold
uint32 values, as well as the value extraction used for
searching in the gtid domain system hash.
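A minimal, self-contained sketch of the effect (illustrative code, not the
actual server source): taking only the first 4 bytes of an 8-byte variable as
the search key happens to work on little-endian machines but yields the wrong
half of the value on big-endian ones such as PPC64.

  #include <cstdint>
  #include <cstdio>
  #include <cstring>

  int main()
  {
    unsigned long domain_id= 5;   /* fits in 32 bits; 8 bytes on LP64 */
    uint32_t key;
    /* Use only the first sizeof(uint32_t) bytes as the hash search key. */
    memcpy(&key, &domain_id, sizeof(key));
    /* Little-endian: key == 5.  Big-endian: key == 0, because the first
       bytes hold the *most* significant half, so the least significant
       part of the value is cut off. */
    printf("key=%u\n", (unsigned) key);
    return 0;
  }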
A newly added test ensures that no overflowed values are accepted
for deletion, which prevents inadvertent action. Notice though:
MariaDB [test]> set @@session.gtid_domain_id=(1 << 32) + 1;
MariaDB [test]> show warnings;
+---------+------+--------------------------------------------------------+
| Level | Code | Message |
+---------+------+--------------------------------------------------------+
| Warning | 1292 | Truncated incorrect gtid_domain_id value: '4294967297' |
+---------+------+--------------------------------------------------------+
MariaDB [test]> select @@session.gtid_domain_id;
+--------------------------+
| @@session.gtid_domain_id |
+--------------------------+
| 4294967295 |
+--------------------------+
Problem: CREATE/DROP INDEX on a temporary table was logged into the binlog.
Goal: Operations on temporary tables should not be binlogged when the binlog
format is ROW.
Solution:
Add CF_FORCE_ORIGINAL_BINLOG_FORMAT when there is DDL on a temporary
table.
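A minimal sketch of the intended behaviour (the function and its arguments
are illustrative, not the server's actual code): with row-based binlogging,
DDL that only touches a temporary table is skipped entirely, since the slave
has no such table to apply it to.

  #include <cstdio>

  enum class Binlog_format { STATEMENT, MIXED, ROW };

  /* True when the statement should be written to the binary log. */
  static bool should_binlog_ddl(Binlog_format format, bool on_temporary_table)
  {
    if (format == Binlog_format::ROW && on_temporary_table)
      return false;        /* e.g. CREATE/DROP INDEX on a temporary table */
    return true;           /* everything else keeps being logged as before */
  }

  int main()
  {
    printf("%d\n", should_binlog_ddl(Binlog_format::ROW, true));   /* 0: skipped */
    printf("%d\n", should_binlog_ddl(Binlog_format::ROW, false));  /* 1: logged */
    return 0;
  }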
For OPTIMIZE, ANALYZE and REPAIR we do not change anything: they will still
be logged in the binlog, but they also do not throw any error if the operation
fails. Since the slave will not have the temporary table, these operations on
the temporary table will be processed without breaking replication.
For RENAME we need different logic; MDEV-16728 will solve it.
The code passing positions in the query to constructors of
Rewritable_query_parameter descendants (e.g. Item_splocal)
was not reliable. It used various Lex_input_stream methods:
- get_tok_start()
- get_tok_start_prev()
- get_tok_end()
- get_ptr()
to find positions of the recently scanned tokens.
The challenge was mostly to choose between get_tok_start()
and get_tok_start_prev(), taking into account the current
grammar (depending on whether lookahead takes place before
or after we read the positions in every particular rule).
But this approach did not work at all in combination
with token contractions, when MYSQLlex() translates
two tokens into one token ID, for example:
WITH ROLLUP -> WITH_ROLLUP_SYM
As a result, the tokenizer is already one more token ahead.
So in the query fragment:
"GROUP BY d, spvar WITH ROLLUP"
get_tok_start() points to "ROLLUP".
get_tok_start_prev() points to "WITH".
As a result, it was "WITH" that was erroneously replaced
with NAME_CONST() instead of "spvar".
This patch modifies the code to do it a different way.
Changes:
1. For keywords and identifiers, the tokenizer now
returns a LEX_CSTRING pointing directly to the query
fragment. So query positions are now directly available
(see the sketch after this list) using:
- $1.str - for the beginning of a token
- $1.str+$1.length - for the end of a token
2. Identifiers are not allocated on the THD memory root
in the tokenizer any more. Allocation is now done
on later stages, in methods like LEX::create_item_ident().
3. Two LEX_CSTRING based structures were added:
- Lex_ident_cli_st - used to store the "client side"
identifier representation, pointing to the
query fragment. Note, these identifiers
are encoded in @@character_set_client
and can have broken byte sequences.
- Lex_ident_sys_st - used to store the "server side"
identifier representation, pointing to the
THD allocated memory. This representation
guarantees that the identifier was checked
for being well-formed, and is encoded in utf8.
4. To distinguish between two identifier types
in the grammar, two Bison types were added:
<ident_cli> and <ident_sys>
5. All non-reserved keywords were marked as
being of the type <ident_cli>.
All reserved keywords are still of the type NONE.
6. All curly brackets in rules collecting
non-reserved keywords into non-terminal
symbols were removed, e.g.:
Was:
keyword_sp_data_type:
BIT_SYM {}
| BOOLEAN_SYM {}
Now:
keyword_sp_data_type:
BIT_SYM
| BOOLEAN_SYM
It is important NOT to have brackets here!
This is needed to make sure that the underlying
Lex_ident_cli_st structure correctly passes up to
the calling rule.
7. The code to scan identifiers and keywords
was moved from lex_one_token() into new
Lex_input_stream methods:
scan_ident_sysvar()
scan_ident_start()
scan_ident_middle()
scan_ident_delimited()
This was done to:
- get rid of an enormous amount of references to &yylval->lex_str
- and remove a lot of references like lip->xxx
8. The allocating functionality which puts identifiers on the
THD memory root now resides in methods of Lex_ident_sys_st,
and in THD::to_ident_sys_alloc().
get_quoted_token() was removed.
9. Cleanup: check_simple_select() was moved to LEX as a method.
10. Cleanup: Some more functionality was moved from *.yy
into new LEX methods:
make_item_colon_ident_ident()
make_item_func_call_generic()
create_item_qualified_asterisk()
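A minimal sketch of the position handling described in items 1-3 above (the
struct below only mirrors the pointer/length shape of LEX_CSTRING; it is not
the server's actual definition):

  #include <cstdio>
  #include <cstring>

  struct Ident_cli_sketch      /* pointer into the query + length, like $1 */
  {
    const char *str;
    size_t length;
  };

  int main()
  {
    const char *query= "GROUP BY d, spvar WITH ROLLUP";
    /* Suppose the scanner has just matched the identifier "spvar": */
    Ident_cli_sketch tok= { strchr(query, 's'), 5 };
    const char *begin= tok.str;               /* $1.str             */
    const char *end=   tok.str + tok.length;  /* $1.str + $1.length */
    printf("token at offsets [%ld..%ld): %.*s\n",
           (long) (begin - query), (long) (end - query),
           (int) tok.length, tok.str);
    return 0;
  }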
The problem was a timing race between the thread that was killed and reading
the binary log.
Updated the test to wait until the killed thread was properly terminated
before checking what's in the binary log.
To make the check safe, I changed "threads_connected" to be updated after
THD::cleanup() is done, to ensure that all binary log updates are done
before the variable is changed. This was mainly done to make the
test deterministic and has no other real influence on how the server
works.
The main problem was that no log-event print function checked for a disk
full error on the IO_CACHE.
All changes in this patch only affect mysqlbinlog, not the server!
- Changed all log-event print functions to return 1 on error
- Fixed memory usage when not using --flashback.
- Added printing of number of rows in row events. Can be disabled with
--print-row-count=0
- Print annotated rows when using mysqlbinlog --short-form
- Fixed that mysqlbinlog --debug works
- Fixed create_drop_binlog.test test failure
- Reorganized fields in PRINT_EVENT_INFO to be according to size to
optimize storage
- Don't change print_row_event_position or print_row_counts if set by user
- Removed some tests of whether the argument to my_free() is 0
- base64-output=never is now supported and works in all contexts
- Updated help information for --base64-output and --short-form
- print_row_count is now on by default. Reset automatically if --short-form
is used
- Removed obsolete warning for MySQL 5.6.0
- More DBUG_PRINT for mysqltest.cc
- my_b_write_byte() now checks for flush failures. This fixed a memory
overrun on disk full
- my_b_printf() now returns 1 on failure, 0 on ok (see the sketch after
this list). This simplifies code, and no old code was using the old
return value of my_b_printf().
- my_b_write_backtick_quote() now returns 1 on failure and 0 on ok
- Fixed some error conditions in log printing that were not previously
handled.
- Slave_rows_error_report() can now handle longlong positions
- Write_on_release_cache() rewritten so that we can detect errors
on flush. Not depending on automatic release anymore.
- Changed types for Pos and End_log_pos to 64 bit in SHOW BINLOG EVENTS
- Fixed that copy_event_cache_to_string_and_reinit() works with strings
longer than 4G (Changed to use LEX_STRING instead of String)
- Restricted binlog_rows_event_max_size to UINT32_MAX-1 as Strings are
anyway restricted to UINT32_MAX
- Fixed bug in rpl_binlog_state::write_to_iocache() which hid write
failures (duplicate variable name)
- Fixed bug in String::append if original string was not allocated
- Stop mysqlbinlog output at once if there is an error.
- Before printing error message, flush result file. This ensures that
the error message is printed last. (Easier to find)
- Remove not used thd_rpl_is_parallel()
- Remove not used mysql_notify_thread_having_shared_lock()
- Remove not needed LOCK_thread_count from MYSQL_BIN_LOG::reset_logs()
- LOCK_thread_count is not protecting against rollback, so this
code and comment are not needed
- Remove mutex_locks in slave.cc that are not needed.
Added THD::assert_not_linked() to ensure that it was safe to remove
- Fixed non-repeatable test load_data_stmt_view
- Updated binlog_killed to test removal of mutex
(thanks to Andrei Elkin for test)
- More code comments
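A minimal sketch of the error-propagation convention described above (the
functions below are stand-ins, not the real mysys/log_event code): writers
return 1 on failure and 0 on success, and a print function stops and passes
the error up as soon as a write fails, e.g. on a full disk.

  #include <cstdio>

  static int write_text(FILE *out, const char *s)
  {
    return fputs(s, out) == EOF ? 1 : 0;    /* 1 = failure, 0 = ok */
  }

  static int print_row_event(FILE *out)
  {
    if (write_text(out, "### INSERT INTO `test`.`t1`\n"))
      return 1;                             /* stop printing at once */
    if (write_text(out, "### SET @1=1\n"))
      return 1;
    return 0;
  }

  int main()
  {
    if (print_row_event(stdout))
      fprintf(stderr, "error: writing output failed (disk full?)\n");
    return 0;
  }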
1. Removing data type specific constants from enum_item_param_state,
adding SHORT_DATA_VALUE instead.
2. Replacing tests of Item_param::state against the removed constants with
tests of Type_handler::cmp_type() against {INT|REAL|TIME|DECIMAL}_RESULT.
Deriving Item_param::PValue from Type_handler_hybrid_field_type,
to store the data type handler of the current value of the parameter.
3. Moving Item_param::decimal_value and Item_param::str_value_ptr
to Item_param::PValue. Adding Item_param::PValue::m_string
and changing Item_param to use it to store string values,
instead of Item::str_value. The intent is to replace Item_param::value
with a st_value-based implementation in the future, to avoid duplicate code.
Adding a sub-class Item::PValue_simple, to make
Item_param::PValue::swap() easier to implement.
Renaming Item_basic_value::fix_charset_and_length_from_str_value()
to fix_charset_and_length() and adding the "CHARSET_INFO" pointer
parameter, instead of getting it directly from item->str_value.charset().
Changing Item_param to pass value.m_string.charset() instead
of str_value.charset().
Adding a String argument to the overloaded
fix_charset_and_length_from_str_value() and changing Item_param
to pass value.m_string instead of str_value.
4. Replacing the case in Item_param::save_in_field() with a call
to Type_handler::Item_save_in_field().
5. Adding new methods into Item_param::PValue:
val_real(), val_int(), val_decimal(), val_str().
Changing the corresponding Item_param methods
to use these new Item_param::PValue methods
internally. Adding a helper method
Item_param::can_return_value() and removing
duplicate code in Item_param::val_xxx().
6. Removing value.set_handler() from Item_param::set_conversion()
and Type_handler_xxx::Item_param_set_from_value().
It's now done inside Item_param::set_param_func(),
Item_param::set_value() and Item_param::set_limit_clause_param().
7. Changing Type_handler_int_result::Item_param_set_from_value()
to set max_length using attr->max_length instead of
MY_INT64_NUM_DECIMAL_DIGITS, to preserve the data type
of the assigned expression more precisely.
8. Adding Type_handler_hybrid_field_type::swap(),
using it in Item_param::PValue::swap() (see the sketch after this list).
9. Moving the data-type specific code from
Item_param::query_val_str(), Item_param::eq(),
Item_param::clone_item() to
Item_param::value_query_type_str(),
Item_param::value_eq(), Item_param::value_clone_item(),
to split the "state" dependent code and
the data type dependent code.
Later we'll split the data type related code further
and add new methods in Type_handler. This will be done
after we replace Item_param::PValue with st_value.
10. Adding asserts into set_int(), set_double(), set_decimal(),
set_time(), set_str(), set_longdata() to make sure that
the value set to Item_param corresponds to the previously
set data type handler.
11. Adding tests into t/ps.test and suite/binlog/t/binlog_stm_ps.test,
to cover Item_param::print() and Item_param::append_for_log()
for LIMIT clause parameters.
Note, the patch does not change the behavior covered by the new
tests. Adding for better code coverage.
12. Adding tests for more precise integer data type in queries like this:
EXECUTE IMMEDIATE
'CREATE OR REPLACE TABLE t1 AS SELECT 999999999 AS a,? AS b'
USING 999999999;
The explicit integer literal and the same integer literal
passed as a PS parameter now produce columns of the same data type.
Re-recording old results in ps.result, gis.result, func_hybrid_type.result
accordingly.
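A minimal sketch of the value-holder idea behind PValue (types and members
are simplified and illustrative, not the actual Item_param code): the holder
carries its own type handler, and swap() exchanges the handler together with
the payload.

  #include <cstdio>
  #include <string>
  #include <utility>

  enum class Handler_sketch { INT, REAL, STRING };

  struct PValue_sketch
  {
    Handler_sketch handler= Handler_sketch::INT;
    long long      integer= 0;
    double         real= 0;
    std::string    m_string;

    void swap(PValue_sketch &other)
    {
      std::swap(handler, other.handler);    /* swap the type handler too */
      std::swap(integer, other.integer);
      std::swap(real, other.real);
      m_string.swap(other.m_string);
    }
  };

  int main()
  {
    PValue_sketch a, b;
    a.handler= Handler_sketch::STRING; a.m_string= "abc";
    b.handler= Handler_sketch::INT;    b.integer= 42;
    a.swap(b);                              /* a now holds 42, b holds "abc" */
    printf("%lld %s\n", a.integer, b.m_string.c_str());
    return 0;
  }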
As reported in MDEV-11969, "there's no way to ditch knowledge" about some
domain that is no longer updated on a server. Besides cluttering the
output in the DBA console, stale domains can prevent the slave
from connecting to the master, as MDEV-12012 witnesses.
Which domain is obsolete must be evaluated by the user (DBA) according
to whether the domain info is still relevant and whether the domain will
ever receive any update.
This patch introduces a method to discard obsolete gtid domains from
the server binlog state. The removal requires, though, that no event group
from such a domain is present in the existing binlog files. If there are any,
the containing logs must first be PURGEd in order for
FLUSH BINARY LOGS DELETE_DOMAIN_ID=(list-of-domains)
to succeed. Otherwise the command returns an error.
The list of obsolete domains can be computed by
intersecting two sets - the earliest (first) binlog's Gtid_list
and the current value of @@global.gtid_binlog_state - and extracting
the domain id components from the intersection list items.
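A minimal sketch of that computation (values are illustrative and the
server_id component is ignored for brevity; the server works on its own
binlog state structures): a domain whose last GTID in the current binlog
state is identical to the one recorded in the earliest binlog's Gtid_list has
received no updates since, so it is a deletion candidate.

  #include <cstdio>
  #include <map>
  #include <set>

  int main()
  {
    /* domain_id -> last seq_no */
    std::map<unsigned, unsigned long long> first_binlog_gtid_list=
        { {1, 100}, {2, 50}, {3, 7} };
    std::map<unsigned, unsigned long long> gtid_binlog_state=
        { {1, 100}, {2, 90}, {3, 7} };

    std::set<unsigned> obsolete_domains;
    for (const auto &g : first_binlog_gtid_list)
    {
      auto it= gtid_binlog_state.find(g.first);
      if (it != gtid_binlog_state.end() && it->second == g.second)
        obsolete_domains.insert(g.first);   /* identical in both sets */
    }

    for (unsigned d : obsolete_domains)
      printf("domain %u has had no updates since the first binlog\n", d);
    return 0;
  }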
The new DELETE_DOMAIN_ID flavour of FLUSH continues to rotate the binlog,
omitting the deleted domains from the active binlog file's Gtid_list.
Notice though that when the command is ineffective - none of the domains
requested for deletion exists in the binlog state - rotation does not occur.
Obsolete domain deletion is not harmful for connected slaves as long
as master-side binlog file *purge* is synchronized with FLUSH-DELETE_DOMAIN_ID.
The slaves must have the last event from the purged files processed as usual,
in order not to bump later into requesting a gtid from a file which
is already gone.
While the command is not replicated (as an ordinary FLUSH BINARY LOGS is),
slaves, even though having extra domains, won't suffer from reconnection errors
thanks to the master-slave gtid connection protocol allowing the master
to be ignorant about a gtid domain.
Should such a slave be promoted to the master role at failover, it may run
the ex-master's
FLUSH BINARY LOGS DELETE_DOMAIN_ID=(list-of-domains)
to clean its own binlog state.
NOTES.
suite/perfschema/r/start_server_low_digest.result
is re-recorded as a consequence of internal parser code changes.
The problem was introduced with the InnoDB 5.7 merge, in the code related to
avoiding an extra fsync at the end of commit when the binlog is enabled. The
MariaDB method for this was removed, but the replacement MySQL method
based on thd_get_durability_property() is not functional in MariaDB.
This commit reverts the offending parts of the merge and adds a test
case, to fix the problem for InnoDB. But other storage engines are
likely to have a similar problem.
The test did not correctly handle a possible difference in the system
timezone. The fix is to remove the non-functional setting of the local
time_zone and instead allow timestamp replacement to work with
any date/time.
CREATE/DROP TEMPORARY TABLE are not safe to optimistically replicate in
parallel with other transactions, so they need to be marked as "ddl" in the
binlog.
This was already done for stand-alone CREATE/DROP TEMPORARY. But temporary
tables can also be created and dropped inside a BEGIN...END transaction, and
such transactions were not marked as ddl. Neither was the DROP TEMPORARY TABLE
statement that is emitted implicitly when a client connection is closed.
So this patch adds such a ddl mark for the missing cases.
The difference to Kristian's original patch is mainly a fix in
mysql_trans_commit_alter_copy_data() to remember the unsafe_rollback_flags
over the temporary commit.
Problem
-------
When one statement contains multiple row events, Flashback didn't reverse the
sequence of row events inside the statement.
Solution
--------
A new array 'events_in_stmt' is used to store the row events of one statement;
when the last event has been parsed, they are printed from the last one to the
first one.
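A minimal sketch of the approach (illustrative code, not the actual
mysqlbinlog implementation): the row events of one statement are collected
into an array, and once the statement's last event has been parsed they are
printed from the last one back to the first one.

  #include <cstdio>
  #include <string>
  #include <vector>

  int main()
  {
    std::vector<std::string> events_in_stmt;  /* one statement's row events */

    /* Events arrive in their original (forward) order: */
    events_in_stmt.push_back("row event #1");
    events_in_stmt.push_back("row event #2");
    events_in_stmt.push_back("row event #3");

    /* The statement's last event has been parsed: print in reverse. */
    for (auto it= events_in_stmt.rbegin(); it != events_in_stmt.rend(); ++it)
      printf("%s\n", it->c_str());

    events_in_stmt.clear();                   /* ready for the next statement */
    return 0;
  }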
At the same time, fixed another bug: without -vv the table_map is not inserted
into print_event_info->m_table_map, and then change_to_flashback_event() will
not execute because the Table_map_log_event is empty.
Make `mysqladmin --local` use `FLUSH LOCAL` for all flush-* commands,
and only do `SET SQL_LOG_BIN=OFF` for create/drop/old_password/password.
Additionally, --local is ignored for all commands that never write
to binlog, so e.g. `mysqladmin --local version` no longer needs SUPER