mirror of https://github.com/MariaDB/server.git synced 2025-05-31 08:42:45 +03:00

4164 Commits

Author SHA1 Message Date
Marko Mäkelä
6a9b216301 Fix clang -Wunused-private-field 2019-04-03 19:50:51 +03:00
Marko Mäkelä
117291db8b Merge 10.2 into 10.3 2019-03-19 16:04:59 +02:00
sysprg
26432e49d3 MDEV-17262: mysql crashed on galera while node rejoined cluster (#895)
This patch contains a fix for the MDEV-17262/17243 issues and
a new mtr test.

These issues (MDEV-17262/17243) have two causes:

1) After an intermediate commit, a transaction loses its status
of "transaction registered in MySQL for the 2pc coordinator"
(in InnoDB), because since version 10.2 the write_row() function
(located in ha_innodb.cc) does not call
trx_register_for_2pc(m_prebuilt->trx) while processing split
transactions. This call must be restored inside write_row()
whenever an intermediate commit has been made (for a split
transaction).

Similarly, we need to set the "statement start" flag
(m_prebuilt->sql_stat_start) after an intermediate commit.

The table->file->extra(HA_EXTRA_FAKE_START_STMT) call made from the
wsrep_load_data_split() function (located in sql_load.cc) also does
this, but it happens too late. As a result, the call to the
wsrep_append_keys() function from the InnoDB engine may be lost, or
the function may be called with an invalid transaction identifier.

2) If a transaction executing a LOAD DATA statement is divided into
logical mini-transactions (of 10K rows each) and the binlog is rotated,
then in rare cases, due to the wsrep handler re-registration at the
boundary of the split, the last portion of data may be lost. Since
splitting LOAD DATA into mini-transactions is purely technical,
these mini-transactions should not be allowed to end up in separate
binlogs. Therefore, binlog rotation must be prohibited in the middle
of processing a LOAD DATA statement.

https://jira.mariadb.org/browse/MDEV-17262 and
https://jira.mariadb.org/browse/MDEV-17243
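
The state that issue 1) says must be restored after an intermediate commit can
be modelled with a stand-alone C++ sketch (illustrative only; Trx, Handler and
write_row_after_split() are simplifications, not the actual ha_innodb.cc code):

// Illustrative stand-alone model, not the actual ha_innodb.cc patch: it only
// demonstrates why the 2pc registration and the "statement start" flag must
// be restored right after an intermediate commit of a split LOAD DATA.
#include <cstdio>

struct Trx {
  bool registered_for_2pc = false;  // "registered in MySQL for 2pc" status
  long id = 1;                      // transaction identifier
};

struct Handler {
  Trx trx;
  bool sql_stat_start = true;       // models m_prebuilt->sql_stat_start

  void intermediate_commit() {      // happens every 10K rows of LOAD DATA
    trx.registered_for_2pc = false; // the commit drops the 2pc registration
    trx.id++;                       // a new transaction starts under the hood
  }

  // Sketch of the fix: restore the state before writing the next row.
  void write_row_after_split() {
    if (!trx.registered_for_2pc) {
      trx.registered_for_2pc = true;  // models trx_register_for_2pc(trx)
      sql_stat_start = true;
    }
    // wsrep_append_keys() would now run with a valid, registered trx.
    std::printf("append keys for trx %ld (registered=%d, stat_start=%d)\n",
                trx.id, (int) trx.registered_for_2pc, (int) sql_stat_start);
  }
};

int main() {
  Handler h;
  h.intermediate_commit();     // split boundary of the LOAD DATA transaction
  h.write_row_after_split();   // without the fix this would use stale state
}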
2019-03-18 07:39:51 +02:00
Sergei Golubchik
b64fde8f38 Merge branch '10.2' into 10.3 2019-03-17 13:06:41 +01:00
Sergei Golubchik
0508d327ae Merge branch '10.1' into 10.2 2019-03-15 21:00:41 +01:00
Sergei Golubchik
a62e9a83c0 MDEV-15945 --ps-protocol does not test some queries
Make mysqltest use --ps-protocol more

use prepared statements for everything that the server supports,
with the exception of CALL (for now).

Fix discovered test failures and bugs.

tests:
* PROCESSLIST shows Execute state, not Query
* SHOW STATUS increments status variables more than in text protocol
* multi-statements should be avoided (see tests with a wrong delimiter)
* performance_schema events have different names in --ps-protocol
* --enable_prepare_warnings

mysqltest.cc:
* make sure run_query_stmt() doesn't crash if there's
  no active connection (in wait_until_connected_again.inc)
* prepare all statements that server supports

protocol.h:
* Protocol_discard::send_result_set_metadata() should not send
  anything to the client.

sql_acl.cc:
* extract the functionality of getting the user for SHOW GRANTS
  from check_show_access(), so that mysql_test_show_grants() could
  generate the correct column names in the prepare step

sql_class.cc:
* result->prepare() can fail, don't ignore its return value
* use correct number of decimals for EXPLAIN columns

sql_parse.cc:
* discard profiling for SHOW PROFILE. In text protocol it's done in
  prepare_schema_table(), but in --ps it is called on prepare only,
  so nothing was discarding profiling during execute.
* move the permission checking code for SHOW CREATE VIEW to
  mysqld_show_create_get_fields(), so that it would be called during
  prepare step too.
* only set sel_result when it was created here and needs to be
  destroyed in the same block. Avoid destroying lex->result.
* use the correct number of tables in check_show_access(). Saying
  "as many as possible" doesn't work when first_not_own_table isn't
  set yet.

sql_prepare.cc:
* use correct user name for SHOW GRANTS columns
* don't ignore verbose flag for SHOW SLAVE STATUS
* support preparing REVOKE ALL and ROLLBACK TO SAVEPOINT
* don't ignore errors from thd->prepare_explain_fields()
* use select_send result for sending ANALYZE and EXPLAIN, but don't
  overwrite lex->result, because it might be needed to issue execute-time
  errors (select_dumpvar - too many rows)

sql_show.cc:
* check grants for SHOW CREATE VIEW here, not in mysql_execute_command

sql_view.cc:
* use the correct function to check privileges. Old code was doing
  check_access() for thd->security_ctx, which is the invoker's sctx,
  not the definer's sctx. Hide various view-related errors from the invoker.

sql_yacc.yy:
* initialize lex->select_lex for LOAD, otherwise it'll contain garbage
  data that happens to make tests with views fail in --ps (but not otherwise).
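
A stand-alone sketch of the overall mysqltest behaviour described above
(illustrative only; preparable() and run_query() are hypothetical helpers,
not the real mysqltest.cc code):

// Stand-alone model of the behaviour above, not the real mysqltest.cc code:
// try the prepared-statement path for everything the server supports, fall
// back to the text protocol for CALL (for now) and for multi-statements.
#include <iostream>
#include <string>

// Hypothetical helper: decide whether a statement may go through PS.
static bool preparable(const std::string &stmt) {
  if (stmt.find(';') != std::string::npos) return false;  // multi-statement
  if (stmt.rfind("CALL", 0) == 0) return false;           // excluded for now
  return true;
}

static void run_query(const std::string &stmt) {
  if (preparable(stmt))
    std::cout << "PS protocol  : " << stmt << "\n";   // prepare + execute
  else
    std::cout << "text protocol: " << stmt << "\n";   // plain query
}

int main() {
  run_query("SELECT 1");
  run_query("CALL p()");
  run_query("SELECT 1; SELECT 2");
}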
2019-03-12 13:10:49 +01:00
Sergei Golubchik
22f1cf9292 cleanup: misc 2019-03-12 13:10:49 +01:00
Sergei Golubchik
dda2e940fb pass the slow logging information in thd->query_plan_flags
This solves the following issues:

* unlike lex->m_sql_cmd and lex->sql_command, thd->query_plan_flags
  is not reset in Prepared_statement::execute, it survives
  till the log_slow_statement(), so slow logging behaves correctly in --ps

* using thd->query_plan_flags for both slow_log_filter and
  log_slow_admin_statements means the definition of "admin" statements
  for the slow log is the same no matter how it is filtered out.
2019-03-12 13:10:49 +01:00
Oleksandr Byelkin
83123412f0 MDEV-11975: SQLCOM_PREPARE of EXPLAIN & ANALYZE statement do not return correct metadata info
Added metadata info after preparing EXPLAIN/ANALYZE.
2019-03-12 13:10:48 +01:00
Alexander Barkov
f0cd707503 After-merge fix for MDEV-18333 Slow_queries count doesn't increase when slow_query_log is turned off 2019-03-06 23:44:58 +04:00
Julius Goryavsky
50b3632fa4 MDEV-9519: Data corruption will happen on the Galera cluster size change
If we have a cluster of 2+ nodes replicating from an async master,
with binlog_format set to STATEMENT, and multi-row inserts are executed
on a table with an auto_increment column whose values are generated
automatically by MySQL, then the server node generates wrong auto_increment
values, different from what was generated on the async master.

The title of MDEV-9519 proposed banning START SLAVE on a Galera node
if the master's binlog_format = STATEMENT and wsrep_auto_increment_control = 1,
but the problem can be solved without such a restriction.

The causes and fixes:

1. We need to improve the handling of auto-increment value changes
after the cluster size changes.

2. If wsrep_auto_increment_control is switched on while the node is running,
we should immediately update the auto_increment_increment
and auto_increment_offset global variables, without waiting for the next
invocation of the wsrep_view_handler_cb() callback. In the current version
these variables retain their initial values if wsrep_auto_increment_control
is switched on while the node is running, which leads to inconsistent
results on different nodes in some scenarios.

3. If wsrep_auto_increment_control is switched off while the node is running,
we must restore the original values of the auto_increment_increment and
auto_increment_offset global variables, as set by the user. To make this
possible, we need to add "shadow copies" of these variables (which store
the latest values set by the user).

https://jira.mariadb.org/browse/MDEV-9519
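
A stand-alone C++ sketch of points 2 and 3 (illustrative only; Node,
AutoIncSettings and the per-node offset rule are simplifications, not the
actual wsrep code):

// Stand-alone model of points 2 and 3, not the actual wsrep code.
#include <cstdio>

struct AutoIncSettings {
  unsigned long increment = 1;
  unsigned long offset = 1;
};

struct Node {
  AutoIncSettings user;       // "shadow copy": the values the user set
  AutoIncSettings effective;  // what the server actually uses
  bool wsrep_auto_increment_control = false;

  // Models what wsrep_view_handler_cb() does on a new cluster view, and what
  // point 2 says must also happen immediately when the control is toggled.
  void apply_control(unsigned long cluster_size, unsigned long node_index) {
    if (wsrep_auto_increment_control) {
      effective.increment = cluster_size;
      effective.offset = node_index + 1;   // a distinct offset per node
    } else {
      effective = user;                    // point 3: restore the user's values
    }
  }

  // Next generated value under the usual increment/offset rule.
  unsigned long next_value(unsigned long current_max) const {
    unsigned long v = effective.offset;
    while (v <= current_max) v += effective.increment;
    return v;
  }
};

int main() {
  Node n;                                       // user-set defaults: 1 / 1
  n.wsrep_auto_increment_control = true;
  n.apply_control(/*cluster_size=*/3, /*node_index=*/1);
  std::printf("next on node 2 of 3: %lu\n", n.next_value(9));   // 2,5,8,11 -> 11
  n.wsrep_auto_increment_control = false;
  n.apply_control(3, 1);                        // falls back to the shadow copy
  std::printf("next with control off: %lu\n", n.next_value(9)); // 10
}

While the control variable is on, the effective settings follow the cluster
view; switching it off restores exactly the user-set values kept in the
shadow copy.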
2019-02-26 08:09:04 +02:00
Julius Goryavsky
2c734c980e MDEV-9519: Data corruption will happen on the Galera cluster size change
If we have a cluster of 2+ nodes replicating from an async master,
with binlog_format set to STATEMENT, and multi-row inserts are executed
on a table with an auto_increment column whose values are generated
automatically by MySQL, then the server node generates wrong auto_increment
values, different from what was generated on the async master.

The title of MDEV-9519 proposed banning START SLAVE on a Galera node
if the master's binlog_format = STATEMENT and wsrep_auto_increment_control = 1,
but the problem can be solved without such a restriction.

The causes and fixes:

1. We need to improve the handling of auto-increment value changes
after the cluster size changes.

2. If wsrep_auto_increment_control is switched on while the node is running,
we should immediately update the auto_increment_increment
and auto_increment_offset global variables, without waiting for the next
invocation of the wsrep_view_handler_cb() callback. In the current version
these variables retain their initial values if wsrep_auto_increment_control
is switched on while the node is running, which leads to inconsistent
results on different nodes in some scenarios.

3. If wsrep_auto_increment_control is switched off while the node is running,
we must restore the original values of the auto_increment_increment and
auto_increment_offset global variables, as set by the user. To make this
possible, we need to add "shadow copies" of these variables (which store
the latest values set by the user).

https://jira.mariadb.org/browse/MDEV-9519
2019-02-26 07:45:11 +02:00
Julius Goryavsky
243f829c1c MDEV-9519: Data corruption will happen on the Galera cluster size change
If we have a cluster of 2+ nodes replicating from an async master,
with binlog_format set to STATEMENT, and multi-row inserts are executed
on a table with an auto_increment column whose values are generated
automatically by MySQL, then the server node generates wrong auto_increment
values, different from what was generated on the async master.

The title of MDEV-9519 proposed banning START SLAVE on a Galera node
if the master's binlog_format = STATEMENT and wsrep_auto_increment_control = 1,
but the problem can be solved without such a restriction.

The causes and fixes:

1. We need to improve the handling of auto-increment value changes
after the cluster size changes.

2. If wsrep_auto_increment_control is switched on while the node is running,
we should immediately update the auto_increment_increment
and auto_increment_offset global variables, without waiting for the next
invocation of the wsrep_view_handler_cb() callback. In the current version
these variables retain their initial values if wsrep_auto_increment_control
is switched on while the node is running, which leads to inconsistent
results on different nodes in some scenarios.

3. If wsrep_auto_increment_control is switched off while the node is running,
we must restore the original values of the auto_increment_increment and
auto_increment_offset global variables, as set by the user. To make this
possible, we need to add "shadow copies" of these variables (which store
the latest values set by the user).

https://jira.mariadb.org/browse/MDEV-9519
2019-02-25 11:19:07 +02:00
Vladislav Vaintroub
261ce5286f MDEV-18281 COM_RESET_CONNECTION changes the connection encoding
Store original charset during client authentication, and restore it for
COM_RESET_CONNECTION
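
A minimal stand-alone model of the fix (illustrative only; the Connection type
is hypothetical, not the server's protocol code):

// Minimal stand-alone model of the fix, not the server's protocol code:
// remember the character set negotiated during authentication and restore it
// on COM_RESET_CONNECTION instead of falling back to a default.
#include <iostream>
#include <string>

struct Connection {
  std::string auth_charset;      // stored once, during client authentication
  std::string current_charset;

  void authenticate(const std::string &cs) { auth_charset = current_charset = cs; }
  void set_names(const std::string &cs)    { current_charset = cs; }

  // Models COM_RESET_CONNECTION: session state is reset, but the connection
  // keeps the character set it was authenticated with.
  void reset_connection() { current_charset = auth_charset; }
};

int main() {
  Connection c;
  c.authenticate("utf8mb4");
  c.set_names("latin1");
  c.reset_connection();
  std::cout << c.current_charset << "\n";   // utf8mb4, as at authentication
}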
2019-02-02 17:32:15 +01:00
Vladislav Vaintroub
e214aa1cd3 MDEV-18281 COM_RESET_CONNECTION changes the connection encoding
Store original charset during client authentication, and restore it for
COM_RESET_CONNECTION
2019-02-02 17:29:33 +01:00
Alexander Barkov
3074beaad6 MDEV-17387 MariaDB Server giving wrong error while executing select query from procedure
Changing the way a cursor is opened when only its structure needs to be
fetched, e.g. for a cursor FOR loop record variable.

The old method of setting thd->lex->limit_rows_examined to an Item_uint(0)
was not reliable and could push messages like this into the diagnostics area:

  The query examined at least 1 rows, which exceeds LIMIT ROWS EXAMINED (0)

The new method should be more reliable, as it completely prevents the call
of do_select() in JOIN::exec_inner() during cursor structure discovery,
so the execution of the cursor SELECT query returns immediately after the
preparation step (when the result row structure becomes known),
without even entering the code that fetches the result rows.
2018-11-09 09:56:02 +04:00
Sergey Vojtovich
bad2f1569d MDEV-17167 - InnoDB: Failing assertion: table->get_ref_count() == 0 upon
truncating a temporary table

TRUNCATE expects only one TABLE instance (which is used by TRUNCATE
itself) to be open. However this requirement wasn't enforced after
"MDEV-5535: Cannot reopen temporary table".

Fixed by closing unused table instances before performing TRUNCATE.
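
A stand-alone model of the fix (illustrative only; the types are hypothetical,
not the actual table-cache code):

// Stand-alone model of the fix, not the actual table-cache code: a session
// may hold several open instances of the same temporary table, but TRUNCATE
// expects only the instance it uses itself to remain open, so the unused
// instances are closed first.
#include <cstdio>
#include <vector>

struct TableInstance { bool used_by_truncate; };

struct TemporaryTable {
  std::vector<TableInstance> instances;

  void close_unused_instances() {
    std::vector<TableInstance> kept;
    for (const TableInstance &t : instances)
      if (t.used_by_truncate)
        kept.push_back(t);               // keep only TRUNCATE's own instance
    instances.swap(kept);
  }

  void truncate() {
    close_unused_instances();            // the fix: enforce the expectation
    std::printf("TRUNCATE runs with %zu open instance(s)\n", instances.size());
  }
};

int main() {
  TemporaryTable t;
  t.instances.push_back({false});        // e.g. opened by an earlier statement
  t.instances.push_back({true});         // the instance TRUNCATE will use
  t.truncate();                          // prints 1, not 2
}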
2018-10-02 13:42:44 +04:00
Sergei Golubchik
57e0da50bb Merge branch '10.2' into 10.3 2018-09-28 16:37:06 +02:00
Igor Babaev
4d991abd4f MDEV-17024 Crash on large query
This problem manifested itself when a join query used two or more
materialized CTEs such that each of them employed the same recursive CTE.
The bug caused a crash. The crash happened because the cleanup()
function was performed prematurely for the recursive CTE. This cleanup was
induced by the cleanup of the first CTE referencing the recursive CTE.
It destroyed the structures that allow reading from the temporary table
containing the rows of the recursive CTE, and an attempt to read
these rows for the second CTE referencing the recursive CTE triggered the
crash.
The cleanup for a recursive CTE R should be performed after the cleanup
of the last materialized CTE that uses R.
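
The required ordering can be modelled with a simple reference count (a
stand-alone sketch, not the optimizer code):

// Stand-alone model of the intended cleanup order, not the optimizer code:
// a recursive CTE R must not be cleaned up until the last materialized CTE
// that reads from R is done, e.g. via a simple reference count.
#include <cstdio>

struct RecursiveCte {
  int readers = 0;         // materialized CTEs still needing R's temp table
  bool cleaned = false;

  void add_reader() { ++readers; }
  void reader_done() {
    if (--readers == 0) {  // only the *last* reader triggers the cleanup
      cleaned = true;
      std::puts("recursive CTE cleaned up");
    }
  }
};

int main() {
  RecursiveCte r;
  r.add_reader();          // first materialized CTE referencing r
  r.add_reader();          // second materialized CTE referencing r
  r.reader_done();         // premature cleanup here was the crash
  std::printf("after first reader: cleaned=%d\n", (int) r.cleaned);
  r.reader_done();         // now it is safe to destroy r's structures
}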
2018-09-07 20:10:45 -07:00
Oleksandr Byelkin
31081593aa Merge branch '11.0' into 10.1 2018-09-06 22:45:19 +02:00
Marko Mäkelä
2f4c391958 Merge 10.2 into 10.3 2018-09-06 22:35:45 +03:00
Sergei Golubchik
22bcfa011a cleanup: FOREIGN_KEY_INFO
instead of returning strings for CASCADE/RESTRICT
from every storage engine, use enum values

Backport of a3614d33e8a
2018-09-04 08:37:44 +02:00
Marko Mäkelä
206528f722 Merge 10.1 into 10.2 2018-08-31 15:10:02 +03:00
Marko Mäkelä
3b5d3cd68e Revert MDEV-9519 due to regressions
This reverts commit 75dfd4acb995789ca5f86ccbd361fff9d2797e79.
2018-08-31 12:36:31 +03:00
Marko Mäkelä
7830fb7f45 Merge 10.2 into 10.3 2018-08-28 12:22:56 +03:00
Igor Babaev
497d86276f MDEV-17017 Explain for query using derived table specified with
a table value constructor shows wrong number of rows

This is another attempt to fix this bug. The previous patch did not take
into account that a transformation for ALL/ANY subqueries could be applied
to the materialized table that wrapped the table value constructor used as
the specification of the subselect in an ALL/ANY subquery. In this case
the result of the derived table used a sink of the class select_subselect
rather than of the class select_unit. Thus the previous fix could cause
memory overwrites when running EXPLAIN for queries with table value
constructors in ALL/ANY subselects.
2018-08-27 08:15:10 -07:00
Marko Mäkelä
9258097fa3 Merge 10.1 into 10.2 2018-08-21 15:20:34 +03:00
Igor Babaev
4eac5df3fc MDEV-16934 Query with very large IN clause lists runs slowly
This patch introduces support for the system variable eq_range_index_dive_limit
that has existed in MySQL since 5.6. The variable sets a limit on
index dives into equality ranges. Index dives are performed by the optimizer
to estimate the number of rows in range scans. Index dives usually provide
good estimates, but they are pretty expensive. To estimate the number of rows
in equality ranges, statistical data on indexes can be used instead; it gives
less accurate estimates, but it is cheap. So if the number of equality dives
required by an index scan exceeds the set limit, no dives for equality
ranges are performed by the optimizer for this index.

As the new system variable is introduced in a stable version, its default
value is set to a special value meaning there is no limit on the number
of index dives performed by the optimizer.

The patch partially uses the MySQL code for WL#5957
'Statistics-based Range optimization for many ranges'.
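
A stand-alone sketch of the decision the variable controls (illustrative only;
the estimator functions are stubs and the boundary handling is simplified, not
the actual range-optimizer code):

// Stand-alone sketch of the decision, not the actual range-optimizer code;
// the estimators are stubs and the boundary handling is simplified.
#include <cstdio>

static double rows_by_index_dive(int /*range*/) { return 3.0; }  // accurate, costly
static double rows_by_index_statistics()        { return 5.0; }  // coarse, cheap

static double estimate_equality_ranges(int n_ranges,
                                        unsigned long eq_range_index_dive_limit)
{
  // 0 models the "special value meaning no limit" mentioned above.
  bool use_dives = eq_range_index_dive_limit == 0 ||
                   (unsigned long) n_ranges <= eq_range_index_dive_limit;
  double total = 0;
  for (int i = 0; i < n_ranges; i++)
    total += use_dives ? rows_by_index_dive(i) : rows_by_index_statistics();
  return total;
}

int main() {
  std::printf("IN list of 10,   limit 200: %.0f rows\n",
              estimate_equality_ranges(10, 200));      // per-range dives
  std::printf("IN list of 5000, limit 200: %.0f rows\n",
              estimate_equality_ranges(5000, 200));    // index statistics
  std::printf("IN list of 5000, no limit : %.0f rows\n",
              estimate_equality_ranges(5000, 0));      // per-range dives
}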
2018-08-17 14:28:39 -07:00
Julius Goryavsky
75dfd4acb9 This is patch for the https://jira.mariadb.org/browse/MDEV-9519 issue:
If we have a cluster of 2+ nodes replicating from an async master,
with binlog_format set to STATEMENT, and multi-row inserts are executed
on a table with an auto_increment column whose values are generated
automatically by MySQL, then the server node generates wrong auto_increment
values, different from what was generated on the async master.

The causes and fixes:

1. We need to improve the handling of auto-increment value changes
after the cluster size changes.

2. If wsrep_auto_increment_control is switched on while the node is running,
we should immediately update the auto_increment_increment
and auto_increment_offset global variables, without waiting for the next
invocation of the wsrep_view_handler_cb() callback. In the current version
these variables retain their initial values if wsrep_auto_increment_control
is switched on while the node is running, which leads to inconsistent
results on different nodes in some scenarios.

3. If wsrep_auto_increment_control is switched off while the node is running,
we must restore the original values of the auto_increment_increment and
auto_increment_offset global variables, as set by the user. To make this
possible, we need to add "shadow copies" of these variables (which store
the latest values set by the user).
2018-08-15 14:17:28 +03:00
Sergei Golubchik
0aa9b03393 Merge branch '10.2' into 10.3 2018-08-12 12:02:23 +02:00
Oleksandr Byelkin
affdd79c69 Merge branch '10.1' into 10.2 2018-08-03 23:26:26 +02:00
Oleksandr Byelkin
701f0b8e36 Fix gcc 7.3 compiler warnings. 2018-08-03 14:37:55 +02:00
Sergei Golubchik
36e59752e7 Merge branch '10.2' into 10.3 2018-06-30 16:39:20 +02:00
Eugene Kosov
b5184c7efb MDEV-15947 ASAN heap-use-after-free in Item_ident::print or in my_strcasecmp_utf8 or unexpected ER_BAD_FIELD_ERROR upon call of stored procedure reading from versioned table
Closes #728
2018-06-30 16:12:36 +02:00
Sergei Golubchik
65f7473cf9 fix for ctags 2018-06-30 16:12:13 +02:00
Sergei Golubchik
1dd3c8f8ba Merge branch '10.1' into 10.2 2018-06-28 22:46:12 +02:00
Sergei Golubchik
44d1cada12 Merge branch '10.0' into 10.1 2018-06-28 14:07:51 +02:00
Sergei Golubchik
3d4beee1a9 remove double-counting
rnd_pos_by_record calls ha_rnd_pos, which does the counting
2018-06-28 13:36:51 +02:00
Sergei Golubchik
52a25d7b67 MDEV-16473 WITH statement throws 'no database selected' error
Different fix: just use NULL, not no_db.
2018-06-28 12:38:53 +02:00
Igor Babaev
090febbb2d This is another attempt to fix mdev-16473.
The previous correction of the patch for mdev-16473 did not work
correctly for databases whose names start with '*'.
Added a test case with a database named "*".
2018-06-28 00:36:50 -07:00
Alexander Barkov
56145be295 MDEV-16584 SP with a cursor inside a loop wastes THD memory aggressively
Problem:

push_cursor() created sp_cursor instances on THD::main_mem_root,
which is freed only after the SP instructions loop.

Changes:
- Moving sp_cursor declaration from sp_rcontext.h to sql_class.h
- Deriving sp_instr_cpush from sp_cursor. So now sp_cursor is created
  only once (at the SP parse time) and then reused on all loop iterations
- Adding a new method reset() into sp_cursor (and its parent classes)
  to reset an sp_cursor instance before reuse.
- Moving former sp_cursor members m_fetch_count, m_row_count, m_found
  into a separate class sp_cursor_statistics. This helps to reuse
  the code in sp_cursor constructors, and in sp_cursor::reset()
- Adding a helper method sp_rcontext::pop_cursor().
- Adding a "THD*" parameter to sp_rcontext::pop_cursors() and pop_all_cursors()
- Removing "new" and "delete" from sp_rcontext::push_cursor() and
  sp_rcontext::pop_cursor().
- Fixing sp_cursor not to derive from Sql_alloc, as it's now allocated
  only as a part of sp_instr_cpush (and not allocated separately).
- Moving lex_keeper->disable_query_cache() from sp_cursor::sp_cursor()
  to sp_instr_cpush::execute().
- Adding tests
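
A stand-alone sketch of the resulting allocation pattern (the class names
follow the commit, but the bodies are drastically simplified and are not the
real sp_* code):

// Stand-alone sketch of the allocation pattern; the class names follow the
// commit, but the bodies are drastically simplified and are not the real code.
#include <cstdio>

class sp_cursor_statistics {             // former counters, factored out
protected:
  unsigned long m_fetch_count = 0;
  unsigned long m_row_count = 0;
  bool m_found = false;
};

class sp_cursor : public sp_cursor_statistics {
public:
  void reset() { m_fetch_count = 0; m_row_count = 0; m_found = false; }
  void fetch() { m_fetch_count++; m_found = true; }
};

// The instruction owns the cursor by inheritance, so there is exactly one
// cursor object per cpush instruction, created at SP parse time.
class sp_instr_cpush : public sp_cursor {
public:
  void execute() { reset(); }             // reused on every loop iteration
};

int main() {
  sp_instr_cpush instr;                   // one object for all iterations
  for (int i = 0; i < 3; i++) {
    instr.execute();                      // no per-iteration allocation
    instr.fetch();
  }
  std::puts("cursor reused across loop iterations without new allocations");
}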
2018-06-27 12:54:05 +04:00
Igor Babaev
6d377a523c Correction for the patch to fix mdev-16473. 2018-06-26 10:49:23 -07:00
Alexander Barkov
1d6bc0f01f Removing sp_head::is_stored_procedure. This code was dead after MDEV-15991 2018-06-26 17:53:11 +04:00
Igor Babaev
7d0d934ca6 MDEV-16473 WITH statement throws 'no database selected' error
Before this patch, if no default database was set, the server threw
an error for any table name reference that was not fully qualified with a
database name. In particular this happened for table names referencing
CTE tables, which was incorrect.
The error message was thrown at the parser stage, when the names referencing
different tables were not resolved yet.
Now, if no default database is set and a WITH clause is used in the
processed statement, any table reference is simply supplied with a dummy
database name "*none*" at the parser stage. Later, after a call
to check_dependencies_in_with_clauses(), when the names for CTE tables
can be resolved, error messages are thrown only for those names that
refer to non-CTE tables. This is done in open_and_process_table().
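
A stand-alone model of the new name-resolution order (illustrative only;
parse_table_ref() and resolve() are hypothetical helpers, not the parser code):

// Stand-alone model of the new order, not the parser code: with no default
// database and a WITH clause present, table references get the dummy database
// "*none*" at parse time; only references that still do not match any CTE
// after check_dependencies_in_with_clauses() produce an error.
#include <iostream>
#include <set>
#include <string>

static const std::string DUMMY_DB = "*none*";

struct TableRef { std::string db, name; };

// Parser stage: no default database, so fall back to the dummy name.
static TableRef parse_table_ref(const std::string &name) {
  return {DUMMY_DB, name};
}

// open_and_process_table() stage: the CTE names are known by now.
static bool resolve(const TableRef &ref, const std::set<std::string> &ctes) {
  if (ref.db != DUMMY_DB) return true;           // fully qualified reference
  if (ctes.count(ref.name)) return true;         // resolves to a CTE
  std::cerr << "no database selected for table " << ref.name << "\n";
  return false;
}

int main() {
  std::set<std::string> ctes = {"cte1"};
  resolve(parse_table_ref("cte1"), ctes);  // ok: CTE reference
  resolve(parse_table_ref("t1"), ctes);    // error: real table needs a database
}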
2018-06-26 00:02:48 -07:00
Oleksandr Byelkin
517d718201 MDEV-15477: SESSION_SYSVARS_TRACKER does not track last_gtid
register changes of last_gtid
2018-06-25 19:01:41 +02:00
Sergei Golubchik
b942aa34c1 Merge branch '10.1' into 10.2 2018-06-21 23:47:39 +02:00
Sergei Golubchik
bb24663f5a MDEV-13577 slave_parallel_mode=optimistic should not report the mode's specific temporary errors
Revert 7bbe324fc17
Fix the bug differently (in log_event.cc)
Fix the test case to actually fail without the bug fix
2018-06-20 11:10:27 +02:00
Vicențiu Ciorbaru
6e55236c0a Merge branch '10.0-galera' into 10.1 2018-06-12 19:39:37 +03:00
Andrei Elkin
7bbe324fc1 MDEV-13577 slave_parallel_mode=optimistic should not report the mode's
specific temporary errors

The optimistic parallel slave's worker thread could face a run-time error due to
the algorithm's specifics, which allow for conflicts like the reported
"Can't find record in 'table'".
A typical stack looks like this:

{noformat}
#0  handler::print_error (this=0x61c00008f8a0, error=149, errflag=0) at handler.cc:3650
#1  0x0000555555e95361 in write_record (thd=thd@entry=0x62a0000a2208, table=table@entry=0x61f00008ce88, info=info@entry=0x7fffdee356d0) at sql_insert.cc:1944
#2  0x0000555555ea7767 in mysql_insert (thd=thd@entry=0x62a0000a2208, table_list=0x61b00012ada0, fields=..., values_list=..., update_fields=..., update_values=..., duplic=<optimized out>, ignore=<optimized out>) at sql_insert.cc:1039
#3  0x0000555555efda90 in mysql_execute_command (thd=thd@entry=0x62a0000a2208) at sql_parse.cc:3927
#4  0x0000555555f0cc50 in mysql_parse (thd=0x62a0000a2208, rawbuf=<optimized out>, length=<optimized out>, parser_state=<optimized out>) at sql_parse.cc:7449
#5  0x00005555566d4444 in Query_log_event::do_apply_event (this=0x61200005b9c8, rgi=<optimized out>, query_arg=<optimized out>, q_len_arg=<optimized out>) at log_event.cc:4508
#6  0x00005555566d639e in Query_log_event::do_apply_event (this=<optimized out>, rgi=<optimized out>) at log_event.cc:4185
#7  0x0000555555d738cf in Log_event::apply_event (rgi=0x61d0001ea080, this=0x61200005b9c8) at log_event.h:1343
#8  apply_event_and_update_pos_apply (ev=ev@entry=0x61200005b9c8, thd=thd@entry=0x62a0000a2208, rgi=rgi@entry=0x61d0001ea080, reason=<optimized out>) at slave.cc:3479
#9  0x0000555555d8596b in apply_event_and_update_pos_for_parallel (ev=ev@entry=0x61200005b9c8, thd=thd@entry=0x62a0000a2208, rgi=rgi@entry=0x61d0001ea080) at slave.cc:3623
#10 0x00005555562aca83 in rpt_handle_event (qev=qev@entry=0x6190000fa088, rpt=rpt@entry=0x62200002bd68) at rpl_parallel.cc:50
#11 0x00005555562bd04e in handle_rpl_parallel_thread (arg=arg@entry=0x62200002bd68) at rpl_parallel.cc:1258
{noformat}

Here {{handler::print_error}} computes whether to write the current error
to the error log when --log-warnings > 1. The decision flag is consulted
by {{my_message_sql()}}, which may eventually be called.
In the bug case the decision is to log.
However, in the optimistic-mode slave applier case any conflict is
resolved by a rollback and a retry until success. Hence the
logging is at least extraneous.

The case is fixed by adding a new flag {{ME_LOG_AS_WARN}} which
{{handler::print_error}} may propagate further on through {{my_error}}
when the error comes from an optimistically running slave worker thread.

The new flag effectively requests the warning level for the error-log record,
while the thread's DA records the actual error (which is regarded as a temporary
one by the parallel slave error handler).
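
A stand-alone model of the flag's effect (illustrative only; the function
bodies are stubs, not the real handler/error-log code):

// Stand-alone model of the flag's effect; the functions are stubs, not the
// real handler/error-log code.
#include <cstdio>

enum { ME_LOG_AS_WARN = 1 };             // modelled flag bit

static void my_message_sql(int error, const char *msg, unsigned flags) {
  if (flags & ME_LOG_AS_WARN)
    std::printf("[Warning] error %d: %s (will be retried)\n", error, msg);
  else
    std::printf("[ERROR]   error %d: %s\n", error, msg);
}

static void print_error(int error, bool optimistic_parallel_worker) {
  unsigned flags = optimistic_parallel_worker ? ME_LOG_AS_WARN : 0;
  my_message_sql(error, "Can't find record in 'table'", flags);
}

int main() {
  print_error(149, /*optimistic_parallel_worker=*/false);  // ordinary session
  print_error(149, /*optimistic_parallel_worker=*/true);   // parallel slave worker
}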
2018-06-12 15:29:16 +03:00
Eugene Kosov
748ef3ec91 MDEV-15991 Server crashes in setup_on_expr upon calling SP or function executing DML on versioned tables
Do not try to set versioning conditions on every SP call. It may work
incorrectly, but it's a general bug described in MDEV-774.
This patch makes system versioning stuff consistent with other code and
also fixes a use-after-free bug.

Closes #756
2018-06-03 23:25:43 +02:00