mirror of https://github.com/MariaDB/server.git synced 2025-06-03 07:02:23 +03:00

7860 Commits

Author SHA1 Message Date
sjaakola
157b3a637f MDEV-23328 Server hang due to Galera lock conflict resolution
Mutex order violation when a wsrep bf thread kills a conflicting trx;
the stack is

          wsrep_thd_LOCK()
          wsrep_kill_victim()
          lock_rec_other_has_conflicting()
          lock_clust_rec_read_check_and_lock()
          row_search_mvcc()
          ha_innobase::index_read()
          ha_innobase::rnd_pos()
          handler::ha_rnd_pos()
          handler::rnd_pos_by_record()
          handler::ha_rnd_pos_by_record()
          Rows_log_event::find_row()
          Update_rows_log_event::do_exec_row()
          Rows_log_event::do_apply_event()
          Log_event::apply_event()
          wsrep_apply_events()

and mutexes are taken in the order

          lock_sys->mutex -> victim_trx->mutex -> victim_thread->LOCK_thd_data

When a normal KILL statement is executed, the stack is

          innobase_kill_query()
          kill_handlerton()
          plugin_foreach_with_mask()
          ha_kill_query()
          THD::awake()
          kill_one_thread()

and mutexes are taken in the order

          victim_thread->LOCK_thd_data -> lock_sys->mutex -> victim_trx->mutex

This patch is the plan D variant for fixing the potential mutex-locking-order
violation exercised by BF aborting and KILL command execution.

In this approach, the KILL command is replicated as a TOI operation.
This guarantees total isolation for the KILL command execution
in the first node: there is no concurrent replication applying
and no concurrent DDL executing. Therefore there is no risk of
BF aborting happening in parallel with KILL command execution
either. Potential mutex deadlocks between the different mutex
access paths of KILL command execution and BF aborting therefore
cannot happen.

In this approach, TOI replication is used purely as a means
to provide isolated KILL command execution in the first node.
The KILL command should not (and must not) be applied in secondary
nodes. In this patch, we make sure of this by skipping KILL
execution in secondary nodes during the applying phase, where we
bail out if an applier thread is trying to execute a KILL command.
This is effective, but the application of the KILL command could
be skipped much earlier as well.

This also fixes unprotected calls to wsrep_thd_abort
that use wsrep_abort_transaction, by holding
THD::LOCK_thd_data while we abort the transaction.

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
2021-10-29 10:00:17 +03:00
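A minimal sketch of the locking discipline described above, with hypothetical stand-in types (VictimThd, VictimTrx) rather than the real THD/trx classes: both the BF-abort path and the KILL path take the victim's LOCK_thd_data before the transaction mutex, so the two acquisition orders shown in the stacks can no longer interleave into a deadlock.

    #include <mutex>

    // Hypothetical stand-ins for the victim THD and its transaction;
    // not the real server classes.
    struct VictimTrx { std::mutex mutex; bool active= true; };
    struct VictimThd { std::mutex LOCK_thd_data; VictimTrx trx; };

    // BF-abort path: LOCK_thd_data first, then the transaction mutex.
    void bf_abort_victim(VictimThd &victim)
    {
      std::lock_guard<std::mutex> thd_guard(victim.LOCK_thd_data);
      std::lock_guard<std::mutex> trx_guard(victim.trx.mutex);
      victim.trx.active= false;          // abort the conflicting transaction
    }

    // KILL path: the same order, so the two paths cannot deadlock.
    void kill_victim(VictimThd &victim)
    {
      std::lock_guard<std::mutex> thd_guard(victim.LOCK_thd_data);
      std::lock_guard<std::mutex> trx_guard(victim.trx.mutex);
      victim.trx.active= false;          // kill the victim's transaction
    }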
sjaakola
5c230b21bf MDEV-23328 Server hang due to Galera lock conflict resolution
Mutex order violation when a wsrep bf thread kills a conflicting trx;
the stack is

          wsrep_thd_LOCK()
          wsrep_kill_victim()
          lock_rec_other_has_conflicting()
          lock_clust_rec_read_check_and_lock()
          row_search_mvcc()
          ha_innobase::index_read()
          ha_innobase::rnd_pos()
          handler::ha_rnd_pos()
          handler::rnd_pos_by_record()
          handler::ha_rnd_pos_by_record()
          Rows_log_event::find_row()
          Update_rows_log_event::do_exec_row()
          Rows_log_event::do_apply_event()
          Log_event::apply_event()
          wsrep_apply_events()

and mutexes are taken in the order

          lock_sys->mutex -> victim_trx->mutex -> victim_thread->LOCK_thd_data

When a normal KILL statement is executed, the stack is

          innobase_kill_query()
          kill_handlerton()
          plugin_foreach_with_mask()
          ha_kill_query()
          THD::awake()
          kill_one_thread()

and mutexes are taken in the order

          victim_thread->LOCK_thd_data -> lock_sys->mutex -> victim_trx->mutex

This patch is the plan D variant for fixing the potential mutex-locking-order
violation exercised by BF aborting and KILL command execution.

In this approach, the KILL command is replicated as a TOI operation.
This guarantees total isolation for the KILL command execution
in the first node: there is no concurrent replication applying
and no concurrent DDL executing. Therefore there is no risk of
BF aborting happening in parallel with KILL command execution
either. Potential mutex deadlocks between the different mutex
access paths of KILL command execution and BF aborting therefore
cannot happen.

In this approach, TOI replication is used purely as a means
to provide isolated KILL command execution in the first node.
The KILL command should not (and must not) be applied in secondary
nodes. In this patch, we make sure of this by skipping KILL
execution in secondary nodes during the applying phase, where we
bail out if an applier thread is trying to execute a KILL command.
This is effective, but the application of the KILL command could
be skipped much earlier as well.

This also fixes unprotected calls to wsrep_thd_abort
that use wsrep_abort_transaction, by holding
THD::LOCK_thd_data while we abort the transaction.

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
2021-10-29 09:52:52 +03:00
Jan Lindström
aa7ca987db MDEV-25114: Crash: WSREP: invalid state ROLLED_BACK (FATAL)
Revert "MDEV-23328 Server hang due to Galera lock conflict resolution"

This reverts commit eac8341df4c3c7b98360f4e9498acf393dc055e3.
2021-10-29 09:52:40 +03:00
sjaakola
db50ea3ad3 MDEV-23328 Server hang due to Galera lock conflict resolution
Mutex order violation when a wsrep bf thread kills a conflicting trx;
the stack is

          wsrep_thd_LOCK()
          wsrep_kill_victim()
          lock_rec_other_has_conflicting()
          lock_clust_rec_read_check_and_lock()
          row_search_mvcc()
          ha_innobase::index_read()
          ha_innobase::rnd_pos()
          handler::ha_rnd_pos()
          handler::rnd_pos_by_record()
          handler::ha_rnd_pos_by_record()
          Rows_log_event::find_row()
          Update_rows_log_event::do_exec_row()
          Rows_log_event::do_apply_event()
          Log_event::apply_event()
          wsrep_apply_events()

and mutexes are taken in the order

          lock_sys->mutex -> victim_trx->mutex -> victim_thread->LOCK_thd_data

When a normal KILL statement is executed, the stack is

          innobase_kill_query()
          kill_handlerton()
          plugin_foreach_with_mask()
          ha_kill_query()
          THD::awake()
          kill_one_thread()

and mutexes are taken in the order

          victim_thread->LOCK_thd_data -> lock_sys->mutex -> victim_trx->mutex

This patch is the plan D variant for fixing the potential mutex-locking-order
violation exercised by BF aborting and KILL command execution.

In this approach, the KILL command is replicated as a TOI operation.
This guarantees total isolation for the KILL command execution
in the first node: there is no concurrent replication applying
and no concurrent DDL executing. Therefore there is no risk of
BF aborting happening in parallel with KILL command execution
either. Potential mutex deadlocks between the different mutex
access paths of KILL command execution and BF aborting therefore
cannot happen.

In this approach, TOI replication is used purely as a means
to provide isolated KILL command execution in the first node.
The KILL command should not (and must not) be applied in secondary
nodes. In this patch, we make sure of this by skipping KILL
execution in secondary nodes during the applying phase, where we
bail out if an applier thread is trying to execute a KILL command.
This is effective, but the application of the KILL command could
be skipped much earlier as well.

This also fixes unprotected calls to wsrep_thd_abort
that use wsrep_abort_transaction, by holding
THD::LOCK_thd_data while we abort the transaction.

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
2021-10-29 07:57:18 +03:00
Marko Mäkelä
f59f5c4a10 Revert MDEV-25114
Revert 88a4be75a5f3b8d59ac8f6347ff2c197813c05dc and
9d97f92febc89941784d17d59c60275e21140ce0, which had been
prematurely pushed by accident.
2021-09-24 16:21:20 +03:00
sjaakola
88a4be75a5 MDEV-25114 Crash: WSREP: invalid state ROLLED_BACK (FATAL)
This patch is the plan D variant for fixing the potential mutex-locking-order
violation exercised by BF aborting and KILL command execution.

In this approach, the KILL command is replicated as a TOI operation.
This guarantees total isolation for the KILL command execution
in the first node: there is no concurrent replication applying
and no concurrent DDL executing. Therefore there is no risk of
BF aborting happening in parallel with KILL command execution
either. Potential mutex deadlocks between the different mutex
access paths of KILL command execution and BF aborting therefore
cannot happen.

In this approach, TOI replication is used purely as a means
to provide isolated KILL command execution in the first node.
The KILL command should not (and must not) be applied in secondary
nodes. In this patch, we make sure of this by skipping KILL
execution in secondary nodes during the applying phase, where we
bail out if an applier thread is trying to execute a KILL command.
This is effective, but the application of the KILL command could
be skipped much earlier as well.

This patch also fixes the mutex locking order and unprotected
THD member accesses in the BF aborting case. We try to hold
THD::LOCK_thd_data during BF aborting. The only case where this
is not possible is in wsrep_abort_transaction, before the
call to wsrep_innobase_kill_one_trx, where we take the InnoDB
mutexes first and then THD::LOCK_thd_data.

This also fixes a possible race condition during
close_connection and while wsrep is disconnecting
connections.

Added wsrep_bf_kill_debug test case

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
2021-09-24 09:47:31 +03:00
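A rough sketch of the applier-side bail-out described above, using hypothetical names (Command, is_applier_thread, apply_toi_command) rather than the actual wsrep applier code; it only shows the shape of the check: a replicated KILL is skipped on secondary nodes instead of being executed.

    enum class Command { KILL, DDL, DML };

    // Hypothetical applier hook: TOI is used only to isolate KILL on the
    // originating node, so a replicated KILL is skipped by applier threads.
    bool apply_toi_command(bool is_applier_thread, Command cmd)
    {
      if (is_applier_thread && cmd == Command::KILL)
        return true;                     // bail out: do not apply KILL here
      /* ... execute the replicated command in total order ... */
      return true;
    }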
Sergei Golubchik
d552e092c9 MDEV-10075: Provide index of error causing error in array INSERT
Use the existing Warning_info::m_current_row_for_warning instead
of a newly introduced counter.

But also use m_current_row_for_warning to count rows in the parser
and during prepare.
2021-09-16 13:38:48 +02:00
Rucha Deodhar
ea06c67a49 MDEV-10075: Provide index of error causing error in array INSERT
Extended the parser for GET DIAGNOSTICS to use ERROR_INDEX to get
warning/error index.
Error information is stored in Sql_condition. So it can be used to
store the index of the warning/error too. THD::current_insert_index keeps
track of the count for each row that is processed or going to be inserted
into the table (or the first row in the prepare phase). When an error occurs,
we first need to fetch the corrected error index (using correct_error_index())
for an error number. This is needed because in the prepare phase the error
may not be caused by rows/values; in such a case the correct value of
error_index should be 0. Once the correct value is fetched, it is assigned to
Sql_condition::error_index when the object is created for the error/warning.
This error_index variable is returned when ERROR_INDEX is used in
GET DIAGNOSTICS.
2021-09-15 08:55:34 +02:00
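For illustration, a minimal sketch of the idea with hypothetical stand-ins (Session, Condition) rather than the real THD/Sql_condition classes: the running row counter is corrected to 0 for errors raised in the prepare phase and is copied into the condition object when the error or warning is recorded.

    #include <cstdint>

    struct Condition { int sql_errno; uint32_t error_index; };  // hypothetical

    struct Session                                               // hypothetical
    {
      uint32_t current_insert_index= 0;   // row currently being processed
      bool     in_prepare_phase= false;

      // Mirrors the idea of correct_error_index(): an error raised during
      // prepare is not tied to any row, so its index is reported as 0.
      uint32_t corrected_error_index() const
      { return in_prepare_phase ? 0 : current_insert_index; }

      Condition raise(int sql_errno) const
      { return Condition{sql_errno, corrected_error_index()}; }
    };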
Monty
8d08971c84 Removed CREATE/DROP TABLESPACE and related commands
- DISCARD/IMPORT TABLESPACE are the only tablespace commands left
- TABLESPACE arguments for CREATE TABLE and ALTER ... ADD PARTITION are
  ignored.
- Tablespace names are not shown anymore in .frm files nor in the
  information schema

Other things:
- Removed trailing spaces from sql/CMakeLists.txt
2021-09-14 18:04:09 +03:00
Monty
4ebaa80f0b Failed change master could leave around old relay log files
The reason was that there was no cleanup after a failed 'change master'.
Fixed by doing a cleanup of the created relay log files in remove_master_info()
2021-09-14 13:43:50 +03:00
Vladislav Vaintroub
3504f70f7f Bison 3.7 - fix "conversion from 'ptrdiff_t' to 'ulong', possible loss of data" 2021-09-11 15:19:42 +02:00
Jan Lindström
1bc82aaf0a MDEV-26352 : Add new thread states for certain WSREP scenarios
This adds the following new thread states:
* waiting to execute in isolation - DDL is waiting to execute in TOI mode.
* waiting for TOI DDL - some other statement is waiting for DDL to complete.
* waiting for flow control - some statement is paused while flow control is in effect.
* waiting for certification - the transaction is being certified.
2021-09-03 09:07:03 +03:00
Marko Mäkelä
cc4e20e56f Merge 10.5 into 10.6 2021-08-26 10:20:17 +03:00
Marko Mäkelä
87ff4ba7c8 Merge 10.4 into 10.5 2021-08-26 08:46:57 +03:00
Marko Mäkelä
15b691b7bd After-merge fix f84e28c119b495da77e197f7cd18af4048fc3126
In a rebase of the merge, two preceding commits were accidentally reverted:
commit 112b23969a30ba6441efa5e22a3017435febfa17 (MDEV-26308)
commit ac2857a5fbf851d90171ac55f23385869ee6ba83 (MDEV-25717)

Thanks to Daniele Sciascia for noticing this.
2021-08-25 17:35:44 +03:00
Marko Mäkelä
f84e28c119 Merge 10.3 into 10.4 2021-08-18 16:51:52 +03:00
Leandro Pacheco
112b23969a MDEV-26308 : Galera test failure on galera.galera_split_brain
Contains the following fixes:

* allow TOI commands to time out while trying to acquire TOI;
override lock_wait_timeout with LONG_TIMEOUT only after
successfully entering TOI
* only ignore lock_wait_timeout while in TOI
* fix the galera_split_brain test, as a TOI operation now returns ER_LOCK_WAIT_TIMEOUT after lock_wait_timeout
* explicitly test for TOI

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
2021-08-18 08:57:33 +03:00
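A simplified sketch of the timeout handling in the first bullet above, with hypothetical enter_toi()/leave_toi() helpers standing in for the real TOI entry code: the session's lock_wait_timeout stays in effect while trying to enter TOI, so the attempt can fail with ER_LOCK_WAIT_TIMEOUT, and it is only replaced by LONG_TIMEOUT once TOI has been entered.

    typedef unsigned long ulong;
    static const ulong LONG_TIMEOUT= 3600UL * 24 * 365;   // effectively "forever"

    struct Session { ulong lock_wait_timeout; };           // hypothetical

    // Hypothetical stubs standing in for the real TOI entry/exit code.
    static bool enter_toi(Session &) { return true; }      // honours lock_wait_timeout
    static void leave_toi(Session &) {}

    bool run_ddl_in_total_order(Session &s, void (*ddl)(Session &))
    {
      if (!enter_toi(s))
        return false;                      // caller reports ER_LOCK_WAIT_TIMEOUT
      const ulong saved= s.lock_wait_timeout;
      s.lock_wait_timeout= LONG_TIMEOUT;   // ignore the timeout only inside TOI
      ddl(s);
      s.lock_wait_timeout= saved;
      leave_toi(s);
      return true;
    }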
Oleksandr Byelkin
6efb5e9f5e Merge branch '10.5' into 10.6 2021-08-02 10:11:41 +02:00
Oleksandr Byelkin
ae6bdc6769 Merge branch '10.4' into 10.5 2021-07-31 23:19:51 +02:00
mkaruza
386ac12a48 MDEV-25740 Assertion `!wsrep_has_changes(thd) || (thd->lex->sql_command == SQLCOM_CREATE_TABLE && !thd->is_current_stmt_binlog_format_row())' failed in void wsrep_commit_empty(THD*, bool)
Using ROLLBACK with `completion_type = CHAIN` results in the start of a
transaction and an implicit commit before the previous WSREP internal data is
cleared.

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
2021-07-28 15:04:30 +03:00
Sergei Golubchik
b34cafe9d9 cleanup: move thread_count to THD_count::value()
because the name was misleading: it counts not threads, but THDs,
and as THD_count is the only way to increment/decrement it, it
might as well be declared inside THD_count.
2021-07-24 12:37:50 +02:00
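A minimal sketch of the idea (simplified; not the server's actual THD_count definition): the counter is private to a small scope-guard type, so constructing and destroying that type is the only way to change it, and the current number of THDs is read through value().

    #include <atomic>

    class THD_count
    {
      static std::atomic<int> count;       // number of live THDs, not threads
    public:
      THD_count()  { ++count; }            // each THD owns one THD_count
      ~THD_count() { --count; }
      static int value() { return count.load(); }
    };
    std::atomic<int> THD_count::count{0};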
Oleksandr Byelkin
a7d880f0b0 MDEV-21916: COM_STMT_BULK_EXECUTE with RETURNING insert wrong values
The problem is that array binding uses the net buffer to read parameters for each
execution, while each execution with RETURNING writes into the same buffer.

The solution is to allocate a new net buffer, to avoid changing the buffer we are
reading from.
2021-07-15 16:28:13 +02:00
Sergei Golubchik
164a64baa3 MDEV-15888 Implement FLUSH TABLES tbl_name [, tbl_name] ... WITH READ LOCK for views.
privilege checks for tables flushed via views
2021-07-02 16:44:00 +02:00
Dmitry Shulga
a00b51f639 MDEV-16708: Fixed the failed test main.set_statement 2021-06-17 19:30:24 +02:00
Dmitry Shulga
327402291a MDEV-16708: Unsupported commands for prepared statements
Extended the set of commands that can be executed via the binary protocol
by a user with an expired password.
2021-06-17 19:30:24 +02:00
Dmitry Shulga
9370c6e83c MDEV-16708: Unsupported commands for prepared statements
Within this task the following changes were made:
- Added sending of metadata info in the prepare phase for the admin-related
  commands (check table, checksum table, repair, optimize, analyze).

- Refactored the implementation of the HELP command to support its execution in
  PS mode

- Added support for execution of LOAD INTO and XA-related statements
  in PS mode

- Modified mysqltest.cc to run statements in PS mode unconditionally
  in case the option --ps-protocol is set. Formerly, only those statements
  that matched the hard-coded regular expression were executed using the PS protocol

- Fixed the following issues:
    The statement
      explain select (select 2)
    executed in regular and PS mode produces different results:

    MariaDB [test]> prepare stmt from "explain select (select 2)";
    Query OK, 0 rows affected (0,000 sec)
    Statement prepared
    MariaDB [test]> execute stmt;
    +------+-------------+-------+------+---------------+------+---------+------+------+----------------+
    | id   | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra          |
    +------+-------------+-------+------+---------------+------+---------+------+------+----------------+
    |    1 | PRIMARY     | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL | No tables used |
    |    2 | SUBQUERY    | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL | No tables used |
    +------+-------------+-------+------+---------------+------+---------+------+------+----------------+
    2 rows in set (0,000 sec)
    MariaDB [test]> explain select (select 2);
    +------+-------------+-------+------+---------------+------+---------+------+------+----------------+
    | id   | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra          |
    +------+-------------+-------+------+---------------+------+---------+------+------+----------------+
    |    1 | SIMPLE      | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL | No tables used |
    +------+-------------+-------+------+---------------+------+---------+------+------+----------------+
    1 row in set, 1 warning (0,000 sec)

    In case the statement
      CREATE TABLE t1 SELECT * FROM (SELECT 1 AS a, (SELECT a+0)) a
    is run in PS mode it fails with the error
      ERROR 1054 (42S22): Unknown column 'a' in 'field list'.

- Uniform handling of read-only variables whether the SET var=val
  statement is executed as a regular or a prepared statement.

- Fixed an assertion firing when handling the LOAD DATA statement for temporary tables

- Relaxed the assert condition in the function lex_end_stage1() by adding
  the commands SQLCOM_ALTER_EVENT, SQLCOM_CREATE_PACKAGE,
  SQLCOM_CREATE_PACKAGE_BODY to the list of supported commands

- Removed raising of the error ER_UNSUPPORTED_PS in the function
  check_prepared_statement() for the ALTER VIEW command

- Added initialization of the data member st_select_lex_unit::last_procedure
  (assigned a NULL value) in the constructor

  Without this change the test case main.ctype_utf8 fails with the following
  report in case it is run with the option --ps-protocol.
    mysqltest: At line 2278: query 'VALUES (_latin1 0xDF) UNION VALUES(_utf8'a' COLLATE utf8_bin)' failed: 2013: Lost connection

- The following bug reports were fixed:
      MDEV-24460: Multiple rows result set returned from stored
                  routine over prepared statement binary protocol is
                  handled incorrectly
      CONC-519: mariadb client library doesn't handle server_status and
                warning_count fields received in the packet
                COM_STMT_EXECUTE_RESPONSE.

  The reasons for these bug reports have the same nature and are caused by
  a missing loop iteration over the results sent by the server in response to
  the COM_STMT_EXECUTE packet.

  Enclosing the statements that process the COM_STMT_EXECUTE response
  in a construct like
    do
    {
      ...
    } while (!mysql_stmt_next_result());
  fixes the above-mentioned bug reports.
2021-06-17 19:30:24 +02:00
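For context, a condensed client-side example of the loop shape shown above, written against the MariaDB/MySQL C API (error handling omitted): every result set produced by an executed statement is drained before mysql_stmt_next_result() is asked whether another one follows.

    #include <mysql.h>

    // Drain every result set a prepared CALL may produce, instead of
    // reading only the first one.
    void drain_all_results(MYSQL_STMT *stmt)
    {
      do
      {
        if (mysql_stmt_field_count(stmt) > 0)        // a rowset, not just an OK
        {
          mysql_stmt_store_result(stmt);
          while (mysql_stmt_fetch(stmt) == 0)
            ;                                        // process the row here
          mysql_stmt_free_result(stmt);
        }
      } while (mysql_stmt_next_result(stmt) == 0);   // 0: another result follows
    }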
Sergei Golubchik
3648b333c7 cleanup: formatting
also avoid an oxymoron of using `MYSQL_PLUGIN_IMPORT` under
`#ifdef MYSQL_SERVER`, and empty_clex_str is so trivial that a plugin
can define it if needed.
2021-06-11 13:02:55 +02:00
Marko Mäkelä
a722ee88f3 Merge 10.5 into 10.6 2021-06-01 11:39:38 +03:00
Sergei Golubchik
4777097fee followup: rename generated files to have distinct names 2021-05-27 00:40:23 +02:00
Marko Mäkelä
860e754349 Merge 10.5 into 10.6 2021-05-26 11:22:40 +03:00
Marko Mäkelä
365cd08345 Merge 10.4 into 10.5 2021-05-26 09:47:28 +03:00
Igor Babaev
675716e1cb MDEV-23886 Reusing CTE inside a function fails with table doesn't exist
In the code that existed just before this patch, binding of a table reference
to the specification of the corresponding CTE happened in the function
open_and_process_table(). If the table reference is not the first in the
query, the specification is cloned in the same way as the specification of
a view is cloned for any reference to the view. This works fine for
standalone queries, but does not work for stored procedures / functions
for the following reason.
When the first call of a stored procedure/ function SP is processed the
body of SP is parsed. When a query of SP is parsed the info on each
encountered table reference is put into a TABLE_LIST object linked into
a global chain associated with the query. When parsing of the query is
finished the basic info on the table references from this chain except
table references to derived tables and information schema tables is put
in one hash table associated with SP. When parsing of the body of SP is
finished this hash table is used to construct TABLE_LIST objects for all
table references mentioned in SP and link them into the list of such
objects passed to a pre-locking process that calls open_and_process_table()
for each table from the list.
When a TABLE_LIST for a view is encountered, the view is opened and its
specification is parsed. For any table reference that occurs in
the specification, a new TABLE_LIST object is created to be included into
the list for pre-locking. After all objects in the pre-locking list have been
looked through, the tables mentioned in the list are locked. Note that the
objects referencing CTEs are just skipped here, as it is impossible to
resolve these references without any info on the context where they occur.
Now the statements from the body of SP are executed one by one.
At the very beginning of the execution of a query the tables used in the
query are opened and open_and_process_table() now is called for each table
reference mentioned in the list of TABLE_LIST objects associated with the
query that was built when the query was parsed.
For each table reference, the reference is first checked against the CTE
definitions in whose scope it occurs. If such a definition is found, the
reference is considered resolved, and if this is not the first reference
to the found CTE, the specification of the CTE is re-parsed and the
result of the parsing is added to the parsing tree of the query as a
sub-tree. If this sub-tree contains table references to other tables, they
are added to the list of TABLE_LIST objects associated with the query so
that the referenced tables can be opened. When the procedure that opens
the tables comes to the TABLE_LIST object created for a non-first
reference to a CTE, it discovers that the referenced table instance is not
locked and reports an error.
Thus, processing non-first table references to a CTE similarly to how
references to a view are processed does not work for queries used in stored
procedures / functions. And the main problem is that the current
pre-locking mechanism employed for stored procedures / functions does not
allow saving the context in which a CTE reference occurs. It's not trivial
to save the info about the context where a CTE reference occurs, while the
resolution of the table reference cannot be done without this context, and
consequently the specification for the table reference cannot be
determined.

This patch solves the above problem by moving resolution of all CTE
references to the parsing stage. More exactly, references to CTEs occurring in
a query are resolved right after parsing of the query has finished. After
resolution, any CTE reference is marked as a reference to a derived
table, so it is excluded from the hash table created for pre-locking the used
base tables and views when the first call of a stored procedure / function
is processed.
This solution required recursive calls of the parser. The function
THD::sql_parser() has been added specifically for recursive invocations of
the parser.

# Conflicts:
#	sql/sql_cte.cc
#	sql/sql_cte.h
#	sql/sql_lex.cc
#	sql/sql_lex.h
#	sql/sql_view.cc
#	sql/sql_yacc.yy
#	sql/sql_yacc_ora.yy
2021-05-25 21:48:54 -07:00
Marko Mäkelä
1dea7f7977 Merge 10.3 into 10.4 2021-05-25 15:38:57 +03:00
Igor Babaev
04de651725 MDEV-23886 Reusing CTE inside a function fails with table doesn't exist
In the code that existed just before this patch, binding of a table reference
to the specification of the corresponding CTE happened in the function
open_and_process_table(). If the table reference is not the first in the
query, the specification is cloned in the same way as the specification of
a view is cloned for any reference to the view. This works fine for
standalone queries, but does not work for stored procedures / functions
for the following reason.
When the first call of a stored procedure/ function SP is processed the
body of SP is parsed. When a query of SP is parsed the info on each
encountered table reference is put into a TABLE_LIST object linked into
a global chain associated with the query. When parsing of the query is
finished the basic info on the table references from this chain except
table references to derived tables and information schema tables is put
in one hash table associated with SP. When parsing of the body of SP is
finished this hash table is used to construct TABLE_LIST objects for all
table references mentioned in SP and link them into the list of such
objects passed to a pre-locking process that calls open_and_process_table()
for each table from the list.
When a TABLE_LIST for a view is encountered, the view is opened and its
specification is parsed. For any table reference that occurs in
the specification, a new TABLE_LIST object is created to be included into
the list for pre-locking. After all objects in the pre-locking list have been
looked through, the tables mentioned in the list are locked. Note that the
objects referencing CTEs are just skipped here, as it is impossible to
resolve these references without any info on the context where they occur.
Now the statements from the body of SP are executed one by one.
At the very beginning of the execution of a query the tables used in the
query are opened and open_and_process_table() now is called for each table
reference mentioned in the list of TABLE_LIST objects associated with the
query that was built when the query was parsed.
For each table reference, the reference is first checked against the CTE
definitions in whose scope it occurs. If such a definition is found, the
reference is considered resolved, and if this is not the first reference
to the found CTE, the specification of the CTE is re-parsed and the
result of the parsing is added to the parsing tree of the query as a
sub-tree. If this sub-tree contains table references to other tables, they
are added to the list of TABLE_LIST objects associated with the query so
that the referenced tables can be opened. When the procedure that opens
the tables comes to the TABLE_LIST object created for a non-first
reference to a CTE, it discovers that the referenced table instance is not
locked and reports an error.
Thus, processing non-first table references to a CTE similarly to how
references to a view are processed does not work for queries used in stored
procedures / functions. And the main problem is that the current
pre-locking mechanism employed for stored procedures / functions does not
allow saving the context in which a CTE reference occurs. It's not trivial
to save the info about the context where a CTE reference occurs, while the
resolution of the table reference cannot be done without this context, and
consequently the specification for the table reference cannot be
determined.

This patch solves the above problem by moving resolution of all CTE
references to the parsing stage. More exactly, references to CTEs occurring in
a query are resolved right after parsing of the query has finished. After
resolution, any CTE reference is marked as a reference to a derived
table, so it is excluded from the hash table created for pre-locking the used
base tables and views when the first call of a stored procedure / function
is processed.
This solution required recursive calls of the parser. The function
THD::sql_parser() has been added specifically for recursive invocations of
the parser.
2021-05-25 00:43:03 -07:00
Marko Mäkelä
1864a8ea93 Merge 10.2 into 10.3 2021-05-24 09:38:49 +03:00
Igor Babaev
43c9fcefc0 MDEV-23886 Reusing CTE inside a function fails with table doesn't exist
In the code that existed just before this patch, binding of a table reference
to the specification of the corresponding CTE happened in the function
open_and_process_table(). If the table reference is not the first in the
query, the specification is cloned in the same way as the specification of
a view is cloned for any reference to the view. This works fine for
standalone queries, but does not work for stored procedures / functions
for the following reason.
When the first call of a stored procedure/ function SP is processed the
body of SP is parsed. When a query of SP is parsed the info on each
encountered table reference is put into a TABLE_LIST object linked into
a global chain associated with the query. When parsing of the query is
finished the basic info on the table references from this chain except
table references to derived tables and information schema tables is put
in one hash table associated with SP. When parsing of the body of SP is
finished this hash table is used to construct TABLE_LIST objects for all
table references mentioned in SP and link them into the list of such
objects passed to a pre-locking process that calls open_and_process_table()
for each table from the list.
When a TABLE_LIST for a view is encountered, the view is opened and its
specification is parsed. For any table reference that occurs in
the specification, a new TABLE_LIST object is created to be included into
the list for pre-locking. After all objects in the pre-locking list have been
looked through, the tables mentioned in the list are locked. Note that the
objects referencing CTEs are just skipped here, as it is impossible to
resolve these references without any info on the context where they occur.
Now the statements from the body of SP are executed one by one.
At the very beginning of the execution of a query the tables used in the
query are opened and open_and_process_table() now is called for each table
reference mentioned in the list of TABLE_LIST objects associated with the
query that was built when the query was parsed.
For each table reference, the reference is first checked against the CTE
definitions in whose scope it occurs. If such a definition is found, the
reference is considered resolved, and if this is not the first reference
to the found CTE, the specification of the CTE is re-parsed and the
result of the parsing is added to the parsing tree of the query as a
sub-tree. If this sub-tree contains table references to other tables, they
are added to the list of TABLE_LIST objects associated with the query so
that the referenced tables can be opened. When the procedure that opens
the tables comes to the TABLE_LIST object created for a non-first
reference to a CTE, it discovers that the referenced table instance is not
locked and reports an error.
Thus, processing non-first table references to a CTE similarly to how
references to a view are processed does not work for queries used in stored
procedures / functions. And the main problem is that the current
pre-locking mechanism employed for stored procedures / functions does not
allow saving the context in which a CTE reference occurs. It's not trivial
to save the info about the context where a CTE reference occurs, while the
resolution of the table reference cannot be done without this context, and
consequently the specification for the table reference cannot be
determined.

This patch solves the above problem by moving resolution of all CTE
references to the parsing stage. More exactly, references to CTEs occurring in
a query are resolved right after parsing of the query has finished. After
resolution, any CTE reference is marked as a reference to a derived
table, so it is excluded from the hash table created for pre-locking the used
base tables and views when the first call of a stored procedure / function
is processed.
This solution required recursive calls of the parser. The function
THD::sql_parser() has been added specifically for recursive invocations of
the parser.
2021-05-21 16:00:35 -07:00
Monty
b332ffc1af Fixed assert in WSREP if one started with --wsrep_provider=.. 2021-05-19 22:54:14 +02:00
Monty
f671a9dee7 Replace find_temporary_table() with is_temporary_table()
DROP TABLE opens all temporary tables at start, but then
uses find_temporary_table() to check if a table is temporary
instead of is_temporary_table() which is much faster.

This patch fixes this issue.
2021-05-19 22:54:11 +02:00
Monty
a206658b98 Change CHARSET_INFO character set and collaction names to LEX_CSTRING
This change removed 68 explict strlen() calls from the code.

The following renames were done to ensure we don't use the old names
when merging code from earlier releases, as using the new variables
in print functions could result in crashes:
- charset->csname renamed to charset->cs_name
- charset->name renamed to charset->coll_name

Almost everything was a mechanical change, except:
- Changed to use the new Protocol::store(LEX_CSTRING..) when possible
- Changed to use field->store(LEX_CSTRING*, CHARSET_INFO*) when possible
- Changed to use String->append(LEX_CSTRING&) when possible

Other things:
- There were compiler issues with ensuring that all character set names
  point to the same string: gcc doesn't allow one to use integer constants
  when defining global structures (constant char * pointers work fine).
  To get around this, I declared defines for each character set name
  length.
2021-05-19 22:54:07 +02:00
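A rough illustration of the pattern (hypothetical, simplified entries rather than the real CHARSET_INFO layout): keeping the name and its length together in a LEX_CSTRING-style pair removes the need for strlen() at the call sites, and per-name length defines keep the global initializers constant expressions, as described above.

    #include <cstddef>

    struct LexCString { const char *str; size_t length; };   // LEX_CSTRING-like

    // Length defines keep global initializers constant expressions.
    #define LATIN1_NAME      "latin1"
    #define LATIN1_NAME_LEN  6

    struct CharsetEntry               // hypothetical, not the real CHARSET_INFO
    {
      LexCString cs_name;             // was csname  (bare const char *)
      LexCString coll_name;           // was name    (bare const char *)
    };

    static const CharsetEntry latin1_entry=
    {
      { LATIN1_NAME,         LATIN1_NAME_LEN },
      { "latin1_swedish_ci", 17 }
    };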
Rucha Deodhar
2fdb556e04 MDEV-8334: Rename utf8 to utf8mb3
This patch changes the main name of the 3-byte character set from utf8 to
utf8mb3. A new old_mode flag, UTF8_IS_UTF8MB3, is added and set to TRUE by default,
so that utf8 means utf8mb3. If it is not set, utf8 means utf8mb4.
2021-05-19 06:48:36 +02:00
Jan Lindström
bee1bb056d MDEV-9609 : wsrep_debug only logs DDL information on originating node
Added DDL logging to the applier and replaying code paths as well, so that
DDL is logged on nodes other than the originating node.

wsrep.h
	Removed wsrep_thd_is_local conditions and cleaned up
	the macros. Removed WSREP_TO_ISOLATION_END.

Event_job_data::execute
change_password
acl_set_default_role
mysql_execute_command
	Replaced macro by function call

wsrep_to_isolation_begin
wsrep_to_isolation_end
	If execution is not local, log DDL information when
	wsrep_debug is enabled

No new tests are required, as the current regression suite is
already testing these code paths.
2021-05-15 13:24:22 +03:00
Marko Mäkelä
54c460ace6 Merge 10.5 into 10.6 2021-04-22 08:43:03 +03:00
Marko Mäkelä
df33b719ca Merge 10.4 into 10.5 2021-04-22 08:25:40 +03:00
Marko Mäkelä
0d267f7caa MDEV-25362 after-merge fix: Remove unnecessary code 2021-04-22 07:36:04 +03:00
Vicențiu Ciorbaru
2d595319bf cleanup: Select_limit_counters rename set_unlimited to clear
The function was originally introduced by eb0804ef5e7eeb059bb193c3c6787e8a4188d34d
MDEV-18553: MDEV-16327 pre-requisits part 1: isolation of LIMIT/OFFSET handling

set_unlimited had an overloaded notion of clearing both the offset value
and the limit value. The code is used for the SQL_CALC_FOUND_ROWS option to
disable the limit clause after the limit is reached, while at the same
time the calling code suppresses sending of rows.

Proposed solution:
Dedicated clear method for query initialization (to ensure no garbage
remains between executions).
Dedicated set_unlimited that only alters the limit value.
2021-04-21 14:08:58 +03:00
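A small sketch of the proposed split (simplified; not the actual Select_limit_counters class): clear() resets both the offset and the limit for query initialization, while set_unlimited() only lifts the limit and leaves the offset untouched.

    #include <cstdint>

    class LimitCounters                        // simplified stand-in
    {
      static constexpr uint64_t UNLIMITED= UINT64_MAX;
      uint64_t select_limit= UNLIMITED;        // maximum rows to send
      uint64_t offset_limit= 0;                // rows to skip first
    public:
      // Query initialization: wipe both values so nothing leaks
      // between executions.
      void clear() { select_limit= UNLIMITED; offset_limit= 0; }

      // SQL_CALC_FOUND_ROWS once the limit is reached: keep counting rows,
      // but leave the offset alone.
      void set_unlimited() { select_limit= UNLIMITED; }

      bool limit_reached(uint64_t rows_sent) const
      { return rows_sent >= select_limit; }
    };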
Vicențiu Ciorbaru
13cf8f5e9a cleanup: Refactor select_limit in select lex
Replace
  * select_lex::offset_limit
  * select_lex::select_limit
  * select_lex::explicit_limit
with select_lex::Lex_select_limit

The Lex_select_limit already existed with the same elements and was used
by the yacc parser.

This commit is in preparation for FETCH FIRST implementation, as it
simplifies a lot of the code.

Additionally, the parser is simplified by making use of the stack to
return Lex_select_limit objects.

Cleanup of init_query() too. Removes explicit_limit= 0 as it's done a bit later
in init_select() with limit_params.empty()
2021-04-21 14:08:58 +03:00
Marko Mäkelä
4930f9c94b Merge 10.5 into 10.6 2021-04-21 11:45:00 +03:00
Marko Mäkelä
d104fe6f73 Merge 10.4 into 10.5 2021-04-21 10:27:13 +03:00
Marko Mäkelä
ceed768eea MDEV-25362 after-merge fix: GCC -Og -Wmaybe-uninitialized 2021-04-21 10:06:31 +03:00
Alexey Botchkov
67b31a6e4c MDEV-17399 JSON_TABLE.
Fixed a test that crashed after any_db was assigned a nonzero length.
2021-04-21 10:21:47 +04:00