If we have a cluster of 2+ nodes replicating from an async master
with binlog_format set to STATEMENT, and multi-row inserts are executed
on a table with an AUTO_INCREMENT column such that values are automatically
generated by MySQL, then the Galera node generates wrong auto_increment
values, which are different from those generated on the async master.
The title of MDEV-9519 proposed to ban START SLAVE on a Galera node
if the master's binlog_format = STATEMENT and wsrep_auto_increment_control = 1,
but the problem can be solved without such a restriction.
The causes and fixes:
1. We need to improve the processing of auto-increment value changes
after a change in the cluster size.
2. If wsrep_auto_increment_control is switched on during operation of
the node, then we should immediately update the auto_increment_increment
and auto_increment_offset global variables, without waiting for the next
invocation of the wsrep_view_handler_cb() callback. In the current version
these variables retain their initial values if wsrep_auto_increment_control
is switched on during operation of the node, which leads to inconsistent
results on different nodes in some scenarios.
3. If wsrep_auto_increment_control is switched off during operation of the
node, then we must restore the original values of the auto_increment_increment
and auto_increment_offset global variables, as set by the user. To make this
possible, we need to add "shadow copies" of these variables (which store
the latest values set by the user).
https://jira.mariadb.org/browse/MDEV-9519
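For illustration, a sketch of how these variables behave on a node when
wsrep_auto_increment_control is on (a hypothetical 3-node cluster; the
generated ids depend on the node's position in the cluster):

  -- Galera derives these from the cluster size and the node's index:
  SELECT @@auto_increment_increment, @@auto_increment_offset;  -- e.g. 3 and 2
  -- a multi-row insert then generates values interleaved per node:
  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT);
  INSERT INTO t1 (v) VALUES (10), (20), (30);  -- ids 2, 5, 8 on this node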
Changing the way a cursor is opened when only its structure needs to be
fetched, e.g. for a cursor FOR loop record variable.
The old method, setting thd->lex->limit_rows_examined to an Item_uint(0),
was not reliable and could push these messages into the diagnostics area:
The query examined at least 1 rows, which exceeds LIMIT ROWS EXAMINED (0)
The new method should be more reliable, as it completely prevents the call
of do_select() in JOIN::exec_inner() during the cursor structure discovery,
so the execution of the cursor SELECT query returns immediately after the
preparation step (when the result row structure becomes known),
without even entering the code that fetches the result rows.
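For reference, the kind of construct whose record variable structure is
discovered this way (a minimal sketch in sql_mode=ORACLE; table and column
names are hypothetical):

  SET sql_mode=ORACLE;
  DELIMITER /
  CREATE PROCEDURE p1 AS
  BEGIN
    -- the structure of "rec" is discovered by opening the cursor for
    -- its metadata only, before any rows are fetched
    FOR rec IN (SELECT a, b FROM t1)
    LOOP
      SELECT rec.a, rec.b;
    END LOOP;
  END;
  /
  DELIMITER ;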
truncating a temporary table
TRUNCATE expects only one TABLE instance (the one used by TRUNCATE
itself) to be open. However, this requirement wasn't enforced after
"MDEV-5535: Cannot reopen temporary table".
Fixed by closing unused table instances before performing TRUNCATE.
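One plausible way to keep more than one instance of a temporary table open
(a sketch relying on the MDEV-5535 reopen behavior, not necessarily the
exact test case of the fix):

  CREATE TEMPORARY TABLE t1 (a INT);
  -- since MDEV-5535 a temporary table can be reopened, so locking it
  -- under two aliases keeps two TABLE instances open at once
  LOCK TABLES t1 WRITE, t1 AS t2 WRITE;
  TRUNCATE TABLE t1;
  UNLOCK TABLES;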
This problem manifested itself when a join query used two or more
materialized CTEs such that each of them employed the same recursive CTE.
The bug caused a crash. The crash happened because the cleanup()
function was performed prematurely for the recursive CTE. This cleanup was
induced by the cleanup of the first CTE referencing the recursive CTE.
It destroyed the structures that allowed reading from the temporary table
containing the rows of the recursive CTE, and an attempt to read these
rows for the second CTE referencing the recursive CTE triggered a
crash.
The cleanup for a recursive CTE R should be performed after the cleanup
of the last materialized CTE that uses R.
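A query of roughly this shape triggered the crash (a hypothetical sketch:
both c1 and c2 are materialized and both reference the recursive CTE r):

  WITH RECURSIVE r AS (
    SELECT 1 AS n
    UNION ALL
    SELECT n + 1 FROM r WHERE n < 3
  ),
  c1 AS (SELECT MAX(n) AS m FROM r),
  c2 AS (SELECT MIN(n) AS m FROM r)
  SELECT * FROM c1, c2;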
a table value constructor shows wrong number of rows
This is another attempt to fix this bug. The previous patch did not take
into account that a transformation for ALL/ANY subqueries could be applied
to the materialized table that wrapped the table value constructor used as
the specification of the subselect of an ALL/ANY subquery. In this case
the result of the derived table used a sink of the class select_subselect
rather than of the class select_unit. Thus the previous fix could cause
memory overwrites when running EXPLAIN for queries with table value
constructors in ALL/ANY subselects.
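The affected query shape, sketched with hypothetical names:

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2), (3);
  -- the table value constructor is wrapped in a materialized derived
  -- table serving as the specification of the ALL subquery
  EXPLAIN SELECT * FROM t1 WHERE a >= ALL (VALUES (1), (2));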
This patch introduces support for the system variable eq_range_index_dive_limit
that has existed in MySQL since 5.6. The variable sets a limit on
index dives into equality ranges. Index dives are performed by the optimizer
to estimate the number of rows in range scans. Index dives usually provide
good estimates, but they are pretty expensive. To estimate the number of rows
in equality ranges, statistical data on indexes can be employed instead. This
gives less accurate estimates, but it's cheap. So if the number of equality
ranges required by an index scan exceeds the set limit, no dives for equality
ranges are performed by the optimizer for this index.
As the new system variable is introduced in a stable version, its default
value is set to a special value meaning there is no limit on the number
of index dives performed by the optimizer.
The patch partially uses the MySQL code for WL 5957
'Statistics-based Range optimization for many ranges'.
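A usage sketch (that 0 is the special "no limit" default is an assumption
of this example, as are the table and column names):

  SELECT @@eq_range_index_dive_limit;            -- 0 = no limit
  SET SESSION eq_range_index_dive_limit = 10;
  -- with more than 10 equality ranges (e.g. a long IN list) the
  -- optimizer now uses index statistics instead of index dives
  EXPLAIN SELECT * FROM t1
  WHERE key_col IN (1,2,3,4,5,6,7,8,9,10,11);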
The previous correction of the patch for MDEV-16473 did not work
correctly for databases whose names start with '*'.
Added a test case with a database named "*".
Problem:
push_cursor() created sp_cursor instances on THD::main_mem_root,
which is freed only after the SP instructions loop.
Changes:
- Moving sp_cursor declaration from sp_rcontext.h to sql_class.h
- Deriving sp_instr_cpush from sp_cursor. So now sp_cursor is created
only once (at the SP parse time) and then reused on all loop iterations
- Adding a new method reset() into sp_cursor (and its parent classes)
to reset an sp_cursor instance before reuse.
- Moving former sp_cursor members m_fetch_count, m_row_count, m_found
into a separate class sp_cursor_statistics. This helps to reuse
the code in sp_cursor constructors, and in sp_cursor::reset()
- Adding a helper method sp_rcontext::pop_cursor().
- Adding "THD*" parameter to so_rcontext::pop_cursors() and pop_all_cursors()
- Removing "new" and "delete" from sp_rcontext::push_cursor() and
sp_rconext::pop_cursor().
- Fixing sp_cursor not to derive from Sql_alloc, as it's now allocated
only as a part of sp_instr_cpush (and not allocated separately).
- Moving lex_keeper->disable_query_cache() from sp_cursor::sp_cursor()
to sp_instr_cpush::execute().
- Adding tests
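For illustration, the kind of routine where the cursor push is executed on
every loop iteration, so one parse-time sp_cursor is now reused instead of
allocating a fresh one on THD::main_mem_root each time (a hypothetical
sketch):

  DELIMITER $$
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE i INT DEFAULT 0;
    WHILE i < 100000 DO
      BEGIN
        -- each iteration runs sp_instr_cpush for this cursor; before
        -- the fix every execution allocated a new sp_cursor that was
        -- freed only when p1() returned
        DECLARE c CURSOR FOR SELECT 1;
        OPEN c;
        CLOSE c;
      END;
      SET i = i + 1;
    END WHILE;
  END$$
  DELIMITER ;
  CALL p1();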
Before this patch, if no default database was set, the server threw
an error for any table name reference that was not fully qualified with a
database name. In particular, it happened for table names referencing
CTE tables. This was incorrect.
The error message was thrown at the parser stage, when the names referencing
different tables were not resolved yet.
Now, if no default database is set and a WITH clause is used in the
processed statement, any table reference is just supplied with a dummy
database name "*none*" at the parser stage. Later, after a call
of check_dependencies_in_with_clauses(), when the names for CTE tables
can be resolved, error messages are thrown only for those names that
refer to non-CTE tables. This is done in open_and_process_table().
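The resulting behavior, sketched (no default database selected; t1 is a
hypothetical non-CTE table):

  WITH cte AS (SELECT 1 AS a)
  SELECT a FROM cte;   -- now works: "cte" resolves to the CTE
  WITH cte AS (SELECT 1 AS a)
  SELECT * FROM t1;    -- still fails: t1 is not a CTE and no database is set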
specific temporary errors
The optimistic parallel slave's worker thread could face a run-time error due to
the algorithm's specifics, which allow for conflicts like the reported
"Can't find record in 'table'".
A typical stack is like
{noformat}
#0 handler::print_error (this=0x61c00008f8a0, error=149, errflag=0) at handler.cc:3650
#1 0x0000555555e95361 in write_record (thd=thd@entry=0x62a0000a2208, table=table@entry=0x61f00008ce88, info=info@entry=0x7fffdee356d0) at sql_insert.cc:1944
#2 0x0000555555ea7767 in mysql_insert (thd=thd@entry=0x62a0000a2208, table_list=0x61b00012ada0, fields=..., values_list=..., update_fields=..., update_values=..., duplic=<optimized out>, ignore=<optimized out>) at sql_insert.cc:1039
#3 0x0000555555efda90 in mysql_execute_command (thd=thd@entry=0x62a0000a2208) at sql_parse.cc:3927
#4 0x0000555555f0cc50 in mysql_parse (thd=0x62a0000a2208, rawbuf=<optimized out>, length=<optimized out>, parser_state=<optimized out>) at sql_parse.cc:7449
#5 0x00005555566d4444 in Query_log_event::do_apply_event (this=0x61200005b9c8, rgi=<optimized out>, query_arg=<optimized out>, q_len_arg=<optimized out>) at log_event.cc:4508
#6 0x00005555566d639e in Query_log_event::do_apply_event (this=<optimized out>, rgi=<optimized out>) at log_event.cc:4185
#7 0x0000555555d738cf in Log_event::apply_event (rgi=0x61d0001ea080, this=0x61200005b9c8) at log_event.h:1343
#8 apply_event_and_update_pos_apply (ev=ev@entry=0x61200005b9c8, thd=thd@entry=0x62a0000a2208, rgi=rgi@entry=0x61d0001ea080, reason=<optimized out>) at slave.cc:3479
#9 0x0000555555d8596b in apply_event_and_update_pos_for_parallel (ev=ev@entry=0x61200005b9c8, thd=thd@entry=0x62a0000a2208, rgi=rgi@entry=0x61d0001ea080) at slave.cc:3623
#10 0x00005555562aca83 in rpt_handle_event (qev=qev@entry=0x6190000fa088, rpt=rpt@entry=0x62200002bd68) at rpl_parallel.cc:50
#11 0x00005555562bd04e in handle_rpl_parallel_thread (arg=arg@entry=0x62200002bd68) at rpl_parallel.cc:1258
{noformat}
Here {{handler::print_error}} computes whether to log the current error
to the error log when --log-warnings > 1. The decision flag is consulted
by {{my_message_sql()}}, which can eventually be called.
In the bug case the decision is to log.
However, in the optimistic-mode slave applier case, any conflict is
resolved by rollback and retry until success. Hence the
logging is at least extraneous.
The case is fixed by adding a new flag {{ME_LOG_AS_WARN}} which
{{handler::print_error}} may propagate further through {{my_error}}
when the error comes from an optimistically running slave worker thread.
The new flag effectively requests the warning level for the error log record,
while the thread's DA records the actual error (which is regarded as a temporary
one by the parallel slave error handler).
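For context, a minimal sketch of the configuration under which such
transient conflicts arise (the slave must be stopped to change the thread
count):

  STOP SLAVE;
  SET GLOBAL slave_parallel_mode = 'optimistic';
  SET GLOBAL slave_parallel_threads = 4;
  START SLAVE;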
Do not try to set versioning conditions on every SP call. It may work
incorrectly, but that is a general bug described in MDEV-774.
This patch makes the system versioning code consistent with other code and
also fixes a use-after-free bug.
Closes #756
Fixed by deleting the sequence if we were not able to initialize it.
I also noticed that we didn't always set the error message when
check_killed() succeeded, which could lead to aborted queries without the
error being properly set. Fixed by setting the error message by default if
check_killed() noticed that the kill flag had been set.
This allowed me to remove a lot of calls to thd->send_kill_message().
When executed with slow_log ON, the test revealed a side effect of the
MDEV-8305 implementation, which inadvertently made trigger and
stored function statements reset the top-level query's
THD::start_time et al. (Details of the test failure analysis are footnoted.)
Unlike the SP case, the SF's and trigger's internal statements should not
do that.
Fixed by revising the MDEV-8305 decision to backup/reset/restore
the session timestamp inside sp_instr_stmt::execute(). In the SP case the
timestamp is still reset by the caller on a per-statement basis, through
pre-existing logic.
Timestamp-related tests are extended to cover the trigger and stored function cases.
Note: commit 3395ab7324 is reverted, as its struct QUERY_START_TIME_INFO
declaration is no longer used after this patch.
Footnote:
--------
Specifically, in the failing test a query on the master was logged
with the timestamp of the query's top-level statement, but its post-update
trigger managed to compute one more (later) timestamp, which got
inserted into another table. For that table, the master-vs-slave
discrepancy in the fractional part of the timestamp became evident
thanks to the different execution time of the trigger, combined with the
fact that the master timestamp, logged with a microsecond fractional part,
was truncated on the slave. On the master, when the fractional part was
close to 1, the trigger execution added its own latency and overflowed
to the next second value. That's how the master timestamp surprisingly
turned out to be bigger than the slave's.
Preserve positions if the multi-update join is using a tmp table:
* store positions in the tmp table if needed:
  JOIN::add_fields_for_current_rowid()
* take positions from the tmp table, not from file->position():
  multi_update::prepare2()
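The affected statement shape, for reference (hypothetical tables; the
join result may be buffered in a tmp table, after which row positions must
come from that table rather than from file->position()):

  UPDATE t1 JOIN t2 ON t1.a = t2.a
  SET t1.b = t2.b, t2.c = t1.c;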
Store the transaction start time in thd->transaction.start_time.
THD::transaction_time() wraps transaction.start_time, taking into
account the current status of BEGIN.
Don't use hidden system time in versioning,
but keep the system time logic in THD
to work around low-resolution system clocks and
replication from non-versioned to versioned tables.
This reverts MDEV-14788 (System versioning cannot
be based on local timestamps, as it is now).
Versioning is based on local timestamps again,
but timestamps are protected by MDEV-15923
(an option to control who can set session @@timestamp).
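A sketch of that protection (assuming the server is started with
secure_timestamp=REPLICATION, so only replication threads may change it):

  -- fails for an ordinary connection under secure_timestamp=REPLICATION,
  -- so versioned rows keep trustworthy local timestamps
  SET SESSION timestamp = UNIX_TIMESTAMP('2020-01-01 00:00:00');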
1. Adding THD::convert_string(LEX_CSTRING *to,...) as a wrapper
for convert_string(LEX_STRING *to,...), as LEX_CSTRING
is now frequently used for conversion purpose.
This reduced duplicate code in TEXT_STRING_sys,
TEXT_STRING_literal, TEXT_STRING_filesystem grammar rules in *.yy
2. Adding yet another THD::convert_string() with an extra parameter
   "bool simple_copy_is_possible". This further reduced
   repeated code in the mentioned grammar rules in *.yy
3. Deriving Lex_ident_cli_st from Lex_string_with_metadata_st,
as they have very similar functionality. Moving m_quote
from Lex_ident_cli_st to Lex_string_with_metadata_st,
as m_quote will be used later to optimize string literals anyway
(e.g. avoid redundant copying on the tokenizer stage).
Adjusting Lex_input_stream::get_text() accordingly.
4. Moving the remainder of the code in the TEXT_STRING_sys, TEXT_STRING_literal,
   TEXT_STRING_filesystem grammar rules into new methods in THD:
- make_text_string_sys()
- make_text_string_connection()
- make_text_string_filesystem()
and changing *.yy to use these new methods.
This reduced the amount of similar code in
sql_yacc.yy and sql_yacc_ora.yy.
5. Removing duplicate code in Lex_input_stream::body_utf8_append_ident()
   by reusing THD::make_text_string_sys(). Thanks to #3 and #4.
6. Making THD members charset_is_system_charset,
charset_is_collation_connection, charset_is_character_set_filesystem
private, as they are not needed externally any more.