mirror of https://github.com/MariaDB/server.git synced 2025-04-24 18:27:21 +03:00

7672 Commits

Author SHA1 Message Date
Daniel Black
065f995e6d Merge branch 10.5 into 10.6 2022-03-18 12:17:11 +11:00
Daniel Black
b73d852779 Merge 10.4 to 10.5 2022-03-17 17:03:24 +11:00
Daniel Black
069139a549 Merge 10.3 to 10.4
extra2_read_len resolved by keeping the implementation
in sql/table.cc and exposing it for use by ha_partition.cc

Remove identical implementation in unireg.h
(ref: bfed2c7d57a7ca34936d6ef0688af7357592dc40)
2022-03-16 16:39:10 +11:00
Daniel Black
6a2d88c132 Merge 10.2 to 10.3 2022-03-16 12:51:22 +11:00
Daniel Black
57dbe8785d MDEV-23915 ER_KILL_DENIED_ERROR not passed a thread id (part 2)
Per Marko's comment in JIRA, sql_kill is passing the thread id
as long long. We change the format of the error messages to match,
and cast the thread id to long long in sql_kill_user.
2022-03-16 09:37:45 +11:00
Daniel Black
99837c61a6 MDEV-23915 ER_KILL_DENIED_ERROR not passed a thread id
The 10.5 test failure in main.grant_kill showed an incorrect
thread id on a big-endian architecture.

The cause is that the sql_kill_user function assumed the
error was ER_OUT_OF_RESOURCES, when the actual error was
ER_KILL_DENIED_ERROR. The ER_KILL_DENIED_ERROR message
requires a thread id to be passed as unsigned long, but a
user/host was passed instead.

ER_OUT_OF_RESOURCES doesn't even take a user/host, despite
the optimistic comment. We stop passing it as an argument to
the function so that, once MDEV-21978 is implemented, one less
compiler format warning is generated (a warning that would
have caught this error sooner).

Thanks Otto for reporting and Marko for analysis.
2022-03-16 09:37:45 +11:00
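
Below is a minimal C++ sketch, with hypothetical names rather than the server's actual error machinery, of the bug class fixed here: a printf-style error format whose argument type does not match what the caller passes reads garbage (most visibly on big-endian machines), and the fix is to pass the thread id cast to long long so it matches a %lld format.

    // Hypothetical sketch of the format/argument mismatch fixed here; the real
    // error strings and kill code live in the error message file and sql_parse.cc.
    #include <cstdint>
    #include <cstdio>

    // Format string analogous to ER_KILL_DENIED_ERROR: it expects a numeric
    // thread id, here taken as long long to match what sql_kill passes.
    static const char *kill_denied_fmt= "You are not owner of thread %lld";

    // Buggy pattern: the caller assumed an ER_OUT_OF_RESOURCES-style message and
    // passed a user/host string where a numeric id was expected; the mismatched
    // varargs read then produces a bogus thread id (and is undefined behaviour).
    //
    // Fixed pattern: pass the thread id, cast to long long so it matches %lld on
    // every platform, word size and byte order.
    void report_kill_denied(uint64_t thread_id)
    {
      char buf[128];
      std::snprintf(buf, sizeof buf, kill_denied_fmt,
                    static_cast<long long>(thread_id));
      std::fprintf(stderr, "%s\n", buf);
    }

    int main()
    {
      report_kill_denied(42);   // prints "You are not owner of thread 42"
      return 0;
    }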
Daniel Black
f4fb6cb3fe MDEV-27900: aio handle partial reads/writes (uring)
MDEV-27900 continued for uring.

Also spell synchronously correctly in sql_parse.cc.

Reviewed by Wlad.
2022-03-12 16:16:47 +11:00
Sergei Petrunia
32692140e1 MDEV-27306: SET STATEMENT optimizer_trace=1 Doesn't save the trace
In mysql_execute_command(), move optimizer trace initialization to be
after run_set_statement_if_requested() call.

Unfortunately, the mysql_execute_command() code uses "goto error" a lot,
which means the optimizer trace code cannot use RAII objects. Work around
this by:
- Make Opt_trace_start a non-RAII object, add init() method.
- Move the code that writes the top-level object and array into
  Opt_trace_start::init().
2021-12-19 17:19:02 +03:00
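
A short sketch of the pattern described above, using hypothetical class names rather than the actual Opt_trace_start code: construction is a no-op, the real work happens in an explicit init(), and the destructor only cleans up if init() was ever called, so the object coexists safely with "goto error" paths.

    #include <cstdio>

    // Hypothetical stand-in for the optimizer trace writer.
    struct TraceWriter
    {
      void start() { std::puts("{ \"steps\": ["); }
      void end()   { std::puts("] }"); }
    };

    // Non-RAII starter: construction does nothing, so it can be declared at the
    // top of a function that bails out with "goto error" before tracing starts.
    class TraceStart
    {
      TraceWriter *writer= nullptr;   // null until init() is called
    public:
      void init(TraceWriter *w)       // writes the top-level object/array
      {
        writer= w;
        writer->start();
      }
      ~TraceStart()                   // closes the trace only if init() ran
      {
        if (writer)
          writer->end();
      }
    };

    int run_statement(bool set_statement_failed)
    {
      TraceWriter trace;
      TraceStart trace_start;          // safe across the "goto error" below
      if (set_statement_failed)
        goto error;                    // trace never initialized, dtor is a no-op
      trace_start.init(&trace);        // only after SET STATEMENT has been applied
      /* ... execute and trace the statement ... */
      return 0;
    error:
      return 1;
    }

    int main() { return run_statement(/* set_statement_failed= */ false); }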
Oleksandr Byelkin
e26b30d1bb Merge branch '10.4' into 10.5 2021-11-02 15:35:31 +01:00
Oleksandr Byelkin
bb46b79c8c Fix mutex order according to a new sequence. 2021-11-02 13:09:35 +01:00
Jan Lindström
7846c56fbe MDEV-23328 Server hang due to Galera lock conflict resolution
* Fix NULL-pointer reference in error handling
* Add an mtr suppression for galera_ssl_upgrade
2021-11-02 10:25:52 +02:00
Jan Lindström
eab7f5d8bc MDEV-23328 Server hang due to Galera lock conflict resolution
* Fix NULL-pointer reference in error handling
* Add an mtr suppression for galera_ssl_upgrade
2021-11-02 10:08:54 +02:00
Jan Lindström
db64924454 MDEV-23328 Server hang due to Galera lock conflict resolution
* Fix NULL-pointer reference in error handling
* Add an mtr suppression for galera_ssl_upgrade
2021-11-02 07:23:40 +02:00
Jan Lindström
e571eaae9f MDEV-23328 Server hang due to Galera lock conflict resolution
Use a better error message when KILL fails, even in the case
where TOI fails.
2021-11-02 07:20:30 +02:00
Jan Lindström
ea239034de MDEV-23328 Server hang due to Galera lock conflict resolution
* Fix NULL-pointer reference in error handling
* Add an mtr suppression for galera_ssl_upgrade
2021-11-01 13:07:55 +02:00
sjaakola
ef2dbb8dbc MDEV-23328 Server hang due to Galera lock conflict resolution
Mutex order violation when the wsrep BF thread kills a conflicting trx;
the stack is

          wsrep_thd_LOCK()
          wsrep_kill_victim()
          lock_rec_other_has_conflicting()
          lock_clust_rec_read_check_and_lock()
          row_search_mvcc()
          ha_innobase::index_read()
          ha_innobase::rnd_pos()
          handler::ha_rnd_pos()
          handler::rnd_pos_by_record()
          handler::ha_rnd_pos_by_record()
          Rows_log_event::find_row()
          Update_rows_log_event::do_exec_row()
          Rows_log_event::do_apply_event()
          Log_event::apply_event()
          wsrep_apply_events()

and mutexes are taken in the order

          lock_sys->mutex -> victim_trx->mutex -> victim_thread->LOCK_thd_data

When a normal KILL statement is executed, the stack is

          innobase_kill_query()
          kill_handlerton()
          plugin_foreach_with_mask()
          ha_kill_query()
          THD::awake()
          kill_one_thread()

        and mutexes are

          victim_thread->LOCK_thd_data -> lock_sys->mutex -> victim_trx->mutex

This patch is the plan D variant for fixing the potential mutex locking
order violations exercised by BF aborting and KILL command execution.

In this approach, the KILL command is replicated as a TOI operation.
This guarantees total isolation for the KILL command execution
on the first node: there is no concurrent replication applying
and no concurrent DDL executing. Therefore there is no risk of
BF aborting happening in parallel with KILL command execution
either, so potential mutex deadlocks between the different mutex
access paths of KILL command execution and BF aborting cannot
occur.

TOI replication is used in this approach purely as a means
to provide isolated KILL command execution on the first node.
The KILL command should not (and must not) be applied on secondary
nodes. In this patch we ensure this by skipping KILL
execution on secondary nodes in the applying phase, where we
bail out if an applier thread is trying to execute a KILL command.
This is effective, but skipping the application of the KILL command
could also happen much earlier.

This also fixes unprotected calls to wsrep_thd_abort
that end up using wsrep_abort_transaction; this is fixed
by holding THD::LOCK_thd_data while we abort the transaction.

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
2021-10-29 20:40:35 +02:00
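
The two stacks above take the same three mutexes in opposite orders, which is the classic deadlock recipe. A self-contained C++ sketch with generic std::mutex objects (not the actual InnoDB/THD mutexes) illustrates the hazard and one generic remedy; the patch itself instead removes the concurrency by replicating KILL as a TOI operation.

    #include <mutex>
    #include <thread>

    // Generic stand-ins for lock_sys->mutex, trx->mutex and THD::LOCK_thd_data.
    std::mutex lock_sys_mutex, trx_mutex, thd_data_mutex;

    // BF-abort path (first stack above): lock_sys -> trx -> thd_data.
    void bf_abort_path()
    {
      std::lock_guard<std::mutex> a(lock_sys_mutex);
      std::lock_guard<std::mutex> b(trx_mutex);
      std::lock_guard<std::mutex> c(thd_data_mutex);
      // ... mark the victim transaction for abort ...
    }

    // KILL path (second stack above): thd_data -> lock_sys -> trx.
    // Run concurrently with bf_abort_path() this can deadlock, because each
    // thread waits on a mutex the other already holds.  Not called in main().
    void kill_path_unsafe()
    {
      std::lock_guard<std::mutex> c(thd_data_mutex);
      std::lock_guard<std::mutex> a(lock_sys_mutex);
      std::lock_guard<std::mutex> b(trx_mutex);
      // ... wake/kill the victim connection ...
    }

    // One generic remedy: acquire all three atomically; std::scoped_lock uses a
    // deadlock-avoidance algorithm, so the acquisition order no longer matters.
    // (The MariaDB patch instead removes the concurrency by replicating KILL
    // as a TOI operation, so the two paths can no longer interleave.)
    void kill_path_safe()
    {
      std::scoped_lock all(thd_data_mutex, lock_sys_mutex, trx_mutex);
      // ... wake/kill the victim connection ...
    }

    int main()
    {
      std::thread t1(bf_abort_path), t2(kill_path_safe);
      t1.join();
      t2.join();
      return 0;
    }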
Jan Lindström
d5bc05798f MDEV-25114: Crash: WSREP: invalid state ROLLED_BACK (FATAL)
Revert "MDEV-23328 Server hang due to Galera lock conflict resolution"

This reverts commit eac8341df4c3c7b98360f4e9498acf393dc055e3.
2021-10-29 20:38:11 +02:00
sjaakola
157b3a637f MDEV-23328 Server hang due to Galera lock conflict resolution
Mutex order violation when the wsrep BF thread kills a conflicting trx;
the stack is

          wsrep_thd_LOCK()
          wsrep_kill_victim()
          lock_rec_other_has_conflicting()
          lock_clust_rec_read_check_and_lock()
          row_search_mvcc()
          ha_innobase::index_read()
          ha_innobase::rnd_pos()
          handler::ha_rnd_pos()
          handler::rnd_pos_by_record()
          handler::ha_rnd_pos_by_record()
          Rows_log_event::find_row()
          Update_rows_log_event::do_exec_row()
          Rows_log_event::do_apply_event()
          Log_event::apply_event()
          wsrep_apply_events()

and mutexes are taken in the order

          lock_sys->mutex -> victim_trx->mutex -> victim_thread->LOCK_thd_data

When a normal KILL statement is executed, the stack is

          innobase_kill_query()
          kill_handlerton()
          plugin_foreach_with_mask()
          ha_kill_query()
          THD::awake()
          kill_one_thread()

        and mutexes are

          victim_thread->LOCK_thd_data -> lock_sys->mutex -> victim_trx->mutex

This patch is the plan D variant for fixing the potential mutex locking
order violations exercised by BF aborting and KILL command execution.

In this approach, the KILL command is replicated as a TOI operation.
This guarantees total isolation for the KILL command execution
on the first node: there is no concurrent replication applying
and no concurrent DDL executing. Therefore there is no risk of
BF aborting happening in parallel with KILL command execution
either, so potential mutex deadlocks between the different mutex
access paths of KILL command execution and BF aborting cannot
occur.

TOI replication is used in this approach purely as a means
to provide isolated KILL command execution on the first node.
The KILL command should not (and must not) be applied on secondary
nodes. In this patch we ensure this by skipping KILL
execution on secondary nodes in the applying phase, where we
bail out if an applier thread is trying to execute a KILL command.
This is effective, but skipping the application of the KILL command
could also happen much earlier.

This also fixes unprotected calls to wsrep_thd_abort
that end up using wsrep_abort_transaction; this is fixed
by holding THD::LOCK_thd_data while we abort the transaction.

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
2021-10-29 10:00:17 +03:00
sjaakola
5c230b21bf MDEV-23328 Server hang due to Galera lock conflict resolution
Mutex order violation when the wsrep BF thread kills a conflicting trx;
the stack is

          wsrep_thd_LOCK()
          wsrep_kill_victim()
          lock_rec_other_has_conflicting()
          lock_clust_rec_read_check_and_lock()
          row_search_mvcc()
          ha_innobase::index_read()
          ha_innobase::rnd_pos()
          handler::ha_rnd_pos()
          handler::rnd_pos_by_record()
          handler::ha_rnd_pos_by_record()
          Rows_log_event::find_row()
          Update_rows_log_event::do_exec_row()
          Rows_log_event::do_apply_event()
          Log_event::apply_event()
          wsrep_apply_events()

and mutexes are taken in the order

          lock_sys->mutex -> victim_trx->mutex -> victim_thread->LOCK_thd_data

When a normal KILL statement is executed, the stack is

          innobase_kill_query()
          kill_handlerton()
          plugin_foreach_with_mask()
          ha_kill_query()
          THD::awake()
          kill_one_thread()

        and mutexes are

          victim_thread->LOCK_thd_data -> lock_sys->mutex -> victim_trx->mutex

This patch is the plan D variant for fixing the potential mutex locking
order violations exercised by BF aborting and KILL command execution.

In this approach, the KILL command is replicated as a TOI operation.
This guarantees total isolation for the KILL command execution
on the first node: there is no concurrent replication applying
and no concurrent DDL executing. Therefore there is no risk of
BF aborting happening in parallel with KILL command execution
either, so potential mutex deadlocks between the different mutex
access paths of KILL command execution and BF aborting cannot
occur.

TOI replication is used in this approach purely as a means
to provide isolated KILL command execution on the first node.
The KILL command should not (and must not) be applied on secondary
nodes. In this patch we ensure this by skipping KILL
execution on secondary nodes in the applying phase, where we
bail out if an applier thread is trying to execute a KILL command.
This is effective, but skipping the application of the KILL command
could also happen much earlier.

This also fixes unprotected calls to wsrep_thd_abort
that end up using wsrep_abort_transaction; this is fixed
by holding THD::LOCK_thd_data while we abort the transaction.

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
2021-10-29 09:52:52 +03:00
Jan Lindström
aa7ca987db MDEV-25114: Crash: WSREP: invalid state ROLLED_BACK (FATAL)
Revert "MDEV-23328 Server hang due to Galera lock conflict resolution"

This reverts commit eac8341df4c3c7b98360f4e9498acf393dc055e3.
2021-10-29 09:52:40 +03:00
sjaakola
db50ea3ad3 MDEV-23328 Server hang due to Galera lock conflict resolution
Mutex order violation when the wsrep BF thread kills a conflicting trx;
the stack is

          wsrep_thd_LOCK()
          wsrep_kill_victim()
          lock_rec_other_has_conflicting()
          lock_clust_rec_read_check_and_lock()
          row_search_mvcc()
          ha_innobase::index_read()
          ha_innobase::rnd_pos()
          handler::ha_rnd_pos()
          handler::rnd_pos_by_record()
          handler::ha_rnd_pos_by_record()
          Rows_log_event::find_row()
          Update_rows_log_event::do_exec_row()
          Rows_log_event::do_apply_event()
          Log_event::apply_event()
          wsrep_apply_events()

and mutexes are taken in the order

          lock_sys->mutex -> victim_trx->mutex -> victim_thread->LOCK_thd_data

When a normal KILL statement is executed, the stack is

          innobase_kill_query()
          kill_handlerton()
          plugin_foreach_with_mask()
          ha_kill_query()
          THD::awake()
          kill_one_thread()

        and mutexes are

          victim_thread->LOCK_thd_data -> lock_sys->mutex -> victim_trx->mutex

This patch is the plan D variant for fixing the potential mutex locking
order violations exercised by BF aborting and KILL command execution.

In this approach, the KILL command is replicated as a TOI operation.
This guarantees total isolation for the KILL command execution
on the first node: there is no concurrent replication applying
and no concurrent DDL executing. Therefore there is no risk of
BF aborting happening in parallel with KILL command execution
either, so potential mutex deadlocks between the different mutex
access paths of KILL command execution and BF aborting cannot
occur.

TOI replication is used in this approach purely as a means
to provide isolated KILL command execution on the first node.
The KILL command should not (and must not) be applied on secondary
nodes. In this patch we ensure this by skipping KILL
execution on secondary nodes in the applying phase, where we
bail out if an applier thread is trying to execute a KILL command.
This is effective, but skipping the application of the KILL command
could also happen much earlier.

This also fixes unprotected calls to wsrep_thd_abort
that end up using wsrep_abort_transaction; this is fixed
by holding THD::LOCK_thd_data while we abort the transaction.

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
2021-10-29 07:57:18 +03:00
Marko Mäkelä
f59f5c4a10 Revert MDEV-25114
Revert 88a4be75a5f3b8d59ac8f6347ff2c197813c05dc and
9d97f92febc89941784d17d59c60275e21140ce0, which had been
prematurely pushed by accident.
2021-09-24 16:21:20 +03:00
sjaakola
88a4be75a5 MDEV-25114 Crash: WSREP: invalid state ROLLED_BACK (FATAL)
This patch is the plan D variant for fixing the potential mutex locking
order violations exercised by BF aborting and KILL command execution.

In this approach, the KILL command is replicated as a TOI operation.
This guarantees total isolation for the KILL command execution
on the first node: there is no concurrent replication applying
and no concurrent DDL executing. Therefore there is no risk of
BF aborting happening in parallel with KILL command execution
either, so potential mutex deadlocks between the different mutex
access paths of KILL command execution and BF aborting cannot
occur.

TOI replication is used in this approach purely as a means
to provide isolated KILL command execution on the first node.
The KILL command should not (and must not) be applied on secondary
nodes. In this patch we ensure this by skipping KILL
execution on secondary nodes in the applying phase, where we
bail out if an applier thread is trying to execute a KILL command.
This is effective, but skipping the application of the KILL command
could also happen much earlier.

This patch also fixes the mutex locking order and unprotected
THD member accesses in the BF aborting case. We try to hold
THD::LOCK_thd_data during BF aborting. The only case where this
is not possible is in wsrep_abort_transaction, before the
call to wsrep_innobase_kill_one_trx, where we take the InnoDB
mutexes first and then THD::LOCK_thd_data.

This also fixes a possible race condition during
close_connection and while wsrep is disconnecting
connections.

Added the wsrep_bf_kill_debug test case.

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
2021-09-24 09:47:31 +03:00
Vladislav Vaintroub
3504f70f7f Bison 3.7 - fix "conversion from 'ptrdiff_t' to 'ulong', possible loss of data" 2021-09-11 15:19:42 +02:00
Marko Mäkelä
cc4e20e56f Merge 10.5 into 10.6 2021-08-26 10:20:17 +03:00
Marko Mäkelä
87ff4ba7c8 Merge 10.4 into 10.5 2021-08-26 08:46:57 +03:00
Marko Mäkelä
15b691b7bd After-merge fix f84e28c119b495da77e197f7cd18af4048fc3126
In a rebase of the merge, two preceding commits were accidentally reverted:
commit 112b23969a30ba6441efa5e22a3017435febfa17 (MDEV-26308)
commit ac2857a5fbf851d90171ac55f23385869ee6ba83 (MDEV-25717)

Thanks to Daniele Sciascia for noticing this.
2021-08-25 17:35:44 +03:00
Marko Mäkelä
f84e28c119 Merge 10.3 into 10.4 2021-08-18 16:51:52 +03:00
Leandro Pacheco
112b23969a MDEV-26308 : Galera test failure on galera.galera_split_brain
Contains the following fixes:

* allow TOI commands to time out while trying to acquire TOI;
override lock_wait_timeout with LONG_TIMEOUT only after
successfully entering TOI (see the sketch after this commit)
* only ignore lock_wait_timeout on TOI
* fix the galera_split_brain test, as a TOI operation now returns ER_LOCK_WAIT_TIMEOUT after lock_wait_timeout
* explicitly test for TOI

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
2021-08-18 08:57:33 +03:00
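
A hedged sketch of the first bullet's idea, with hypothetical types (Session, try_enter_toi) standing in for the wsrep code: keep the user's lock_wait_timeout while trying to enter TOI so the attempt can time out, and only swap in LONG_TIMEOUT once TOI has been entered.

    #include <chrono>
    #include <cstdio>

    // Hypothetical session state; the real values live in THD system variables.
    struct Session
    {
      std::chrono::seconds lock_wait_timeout{50};
    };

    constexpr std::chrono::seconds LONG_TIMEOUT{3600 * 24 * 365};

    // Pretend attempt to enter total order isolation; may time out.
    bool try_enter_toi(Session &s)
    {
      // Uses the *user's* lock_wait_timeout, so a blocked node does not hang forever.
      std::printf("acquiring TOI with timeout %llds\n",
                  static_cast<long long>(s.lock_wait_timeout.count()));
      return true;  // assume success for this sketch
    }

    bool run_ddl_in_toi(Session &s)
    {
      if (!try_enter_toi(s))
        return false;                     // surfaces an ER_LOCK_WAIT_TIMEOUT-style failure
      auto saved= s.lock_wait_timeout;
      s.lock_wait_timeout= LONG_TIMEOUT;  // only after TOI is held: the DDL itself
                                          // must not abort on the user's short timeout
      // ... execute the DDL under TOI ...
      s.lock_wait_timeout= saved;         // restore on the way out
      return true;
    }

    int main()
    {
      Session s;
      return run_ddl_in_toi(s) ? 0 : 1;
    }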
Oleksandr Byelkin
6efb5e9f5e Merge branch '10.5' into 10.6 2021-08-02 10:11:41 +02:00
Oleksandr Byelkin
ae6bdc6769 Merge branch '10.4' into 10.5 2021-07-31 23:19:51 +02:00
mkaruza
386ac12a48 MDEV-25740 Assertion `!wsrep_has_changes(thd) || (thd->lex->sql_command == SQLCOM_CREATE_TABLE && !thd->is_current_stmt_binlog_format_row())' failed in void wsrep_commit_empty(THD*, bool)
Using ROLLBACK with `completion_type = CHAIN` results in the start of a
transaction and an implicit commit before the previous WSREP internal data
is cleared.

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
2021-07-28 15:04:30 +03:00
Sergei Golubchik
b34cafe9d9 cleanup: move thread_count to THD_count::value()
because the name was misleading: it counts not threads but THDs,
and as THD_count is the only way to increment/decrement it, it
might as well be declared inside THD_count.
2021-07-24 12:37:50 +02:00
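
An illustrative sketch of the resulting shape (not the server's exact class): a scope guard whose constructor and destructor are the only way to change the counter, with the count read through a static value().

    #include <atomic>
    #include <cassert>

    // Counts live THD objects; incrementing/decrementing is only possible by
    // constructing/destroying a THD_count, so the counter can live inside it.
    class THD_count
    {
      static std::atomic<int> count;
    public:
      THD_count()  { count.fetch_add(1, std::memory_order_relaxed); }
      ~THD_count() { count.fetch_sub(1, std::memory_order_relaxed); }
      static int value() { return count.load(std::memory_order_relaxed); }
    };
    std::atomic<int> THD_count::count{0};

    int main()
    {
      assert(THD_count::value() == 0);
      {
        THD_count guard;               // e.g. created alongside a THD
        assert(THD_count::value() == 1);
      }
      assert(THD_count::value() == 0);
      return 0;
    }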
Oleksandr Byelkin
a7d880f0b0 MDEV-21916: COM_STMT_BULK_EXECUTE with RETURNING insert wrong values
The problem is that array binding uses the net buffer to read parameters for
each execution, while each execution with RETURNING writes into the same buffer.

The solution is to allocate a new net buffer, to avoid changing the buffer we
are reading from.
2021-07-15 16:28:13 +02:00
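
A toy C++ illustration of the failure mode, using a plain string in place of the server's net buffer: if RETURNING rows are written into the same buffer the remaining bulk parameters are read from, the later reads are corrupted; copying the parameter stream into a private buffer first, as the fix does, avoids that.

    #include <iostream>
    #include <string>

    // Stand-in for the connection's single net buffer (hypothetical, not NET).
    static std::string net_buffer;

    // Each execution with RETURNING writes its result back into the same buffer,
    // clobbering any bulk parameters that have not been read yet.
    static void write_returning_row(const std::string &row)
    {
      net_buffer= "result:" + row;
    }

    int main()
    {
      net_buffer= "123";                    // three bulk parameters: '1', '2', '3'

      // The fix sketched by the commit: read parameters from a private copy of
      // the buffer, so writing RETURNING rows into net_buffer cannot corrupt them.
      const std::string params_copy= net_buffer;

      for (size_t i= 0; i < params_copy.size(); i++)
      {
        std::string p= params_copy.substr(i, 1);   // read from the private buffer
        write_returning_row(p);                    // net_buffer is free to reuse
        std::cout << "executed with param " << p << "\n";
      }
      return 0;
    }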
Sergei Golubchik
164a64baa3 MDEV-15888 Implement FLUSH TABLES tbl_name [, tbl_name] ... WITH READ LOCK for views.
privilege checks for tables flushed via views
2021-07-02 16:44:00 +02:00
Dmitry Shulga
a00b51f639 MDEV-16708: Fixed the failing test main.set_statement 2021-06-17 19:30:24 +02:00
Dmitry Shulga
327402291a MDEV-16708: Unsupported commands for prepared statements
Extended the set of commands that can be executed via the binary protocol
by a user with an expired password
2021-06-17 19:30:24 +02:00
Dmitry Shulga
9370c6e83c MDEV-16708: Unsupported commands for prepared statements
Within this task the following changes were made:
- Added sending of metadata info in the prepare phase for the admin-related
  commands (check table, checksum table, repair, optimize, analyze).

- Refactored the implementation of the HELP command to support its execution
  in PS mode

- Added support for execution of LOAD INTO and XA-related statements
  in PS mode

- Modified mysqltest.cc to run statements in PS mode unconditionally
  when the option --ps-protocol is set. Formerly, only statements matching
  a hard-coded regular expression were executed using the PS protocol

- Fixed the following issues:
    The statement
      explain select (select 2)
    executed in regular and PS mode produces different results:

    MariaDB [test]> prepare stmt from "explain select (select 2)";
    Query OK, 0 rows affected (0,000 sec)
    Statement prepared
    MariaDB [test]> execute stmt;
    +------+-------------+-------+------+---------------+------+---------+------+------+----------------+
    | id   | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra          |
    +------+-------------+-------+------+---------------+------+---------+------+------+----------------+
    |    1 | PRIMARY     | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL | No tables used |
    |    2 | SUBQUERY    | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL | No tables used |
    +------+-------------+-------+------+---------------+------+---------+------+------+----------------+
    2 rows in set (0,000 sec)
    MariaDB [test]> explain select (select 2);
    +------+-------------+-------+------+---------------+------+---------+------+------+----------------+
    | id   | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra          |
    +------+-------------+-------+------+---------------+------+---------+------+------+----------------+
    |    1 | SIMPLE      | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL | No tables used |
    +------+-------------+-------+------+---------------+------+---------+------+------+----------------+
    1 row in set, 1 warning (0,000 sec)

    In case the statement
      CREATE TABLE t1 SELECT * FROM (SELECT 1 AS a, (SELECT a+0)) a
    is run in PS mode it fails with the error
      ERROR 1054 (42S22): Unknown column 'a' in 'field list'.

- Uniform handling of read-only variables whether the SET var=val
  statement is executed as a regular or a prepared statement.

- Fixed an assertion firing when handling the LOAD DATA statement for temporary tables

- Relaxed the assert condition in the function lex_end_stage1() by adding
  the commands SQLCOM_ALTER_EVENT, SQLCOM_CREATE_PACKAGE,
  SQLCOM_CREATE_PACKAGE_BODY to the list of supported commands

- Removed raising of the error ER_UNSUPPORTED_PS in the function
  check_prepared_statement() for the ALTER VIEW command

- Added initialization of the data member st_select_lex_unit::last_procedure
  (assign a NULL value) in the constructor

  Without this change the test case main.ctype_utf8 fails with the following
  report when it is run with the option --ps-protocol.
    mysqltest: At line 2278: query 'VALUES (_latin1 0xDF) UNION VALUES(_utf8'a' COLLATE utf8_bin)' failed: 2013: Lost connection

- The following bug reports were fixed:
      MDEV-24460: Multiple rows result set returned from stored
                  routine over prepared statement binary protocol is
                  handled incorrectly
      CONC-519: mariadb client library doesn't handle server_status and
                warning_count fields received in the packet
                COM_STMT_EXECUTE_RESPONSE.

  These bug reports have the same root cause: a missing loop iteration
  over the results sent by the server in response to a
  COM_STMT_EXECUTE packet.

  Enclosing the statements that process the COM_STMT_EXECUTE response
  in a construct like
    do
    {
      ...
    } while (!mysql_stmt_next_result());
  fixes the above mentioned bug reports.
2021-06-17 19:30:24 +02:00
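
For reference, a client-side sketch of that looping pattern using the documented prepared-statement C API; the connection parameters and the stored procedure name p1 are placeholders, and error handling is abridged.

    #include <mysql.h>
    #include <cstring>

    // Drain all result sets produced by an already-executed prepared statement,
    // e.g. a CALL that returns several SELECT results plus the final OK.
    static void drain_results(MYSQL_STMT *stmt)
    {
      do
      {
        if (mysql_stmt_field_count(stmt) > 0)   // a result set with columns
        {
          mysql_stmt_store_result(stmt);        // buffer it client-side
          // ... bind output buffers and mysql_stmt_fetch() rows here ...
          mysql_stmt_free_result(stmt);
        }
        // else: an OK packet (affected rows / warnings only)
      } while (mysql_stmt_next_result(stmt) == 0);  // 0 = another result follows
    }

    int main()
    {
      MYSQL *con= mysql_init(nullptr);
      // Connection parameters below are placeholders.
      if (!mysql_real_connect(con, "localhost", "user", "pass", "test", 0,
                              nullptr, CLIENT_MULTI_RESULTS))
        return 1;
      MYSQL_STMT *stmt= mysql_stmt_init(con);
      const char *q= "CALL p1()";               // p1 is a hypothetical procedure
      if (mysql_stmt_prepare(stmt, q, strlen(q)) || mysql_stmt_execute(stmt))
        return 1;
      drain_results(stmt);
      mysql_stmt_close(stmt);
      mysql_close(con);
      return 0;
    }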
Sergei Golubchik
3648b333c7 cleanup: formatting
also avoid an oxymoron of using `MYSQL_PLUGIN_IMPORT` under
`#ifdef MYSQL_SERVER`, and empty_clex_str is so trivial that a plugin
can define it if needed.
2021-06-11 13:02:55 +02:00
Marko Mäkelä
a722ee88f3 Merge 10.5 into 10.6 2021-06-01 11:39:38 +03:00
Sergei Golubchik
4777097fee followup: rename generated files to have distinct names 2021-05-27 00:40:23 +02:00
Marko Mäkelä
860e754349 Merge 10.5 into 10.6 2021-05-26 11:22:40 +03:00
Marko Mäkelä
365cd08345 Merge 10.4 into 10.5 2021-05-26 09:47:28 +03:00
Igor Babaev
675716e1cb MDEV-23886 Reusing CTE inside a function fails with table doesn't exist
In the code that existed just before this patch, the binding of a table
reference to the specification of the corresponding CTE happens in the
function open_and_process_table(). If the table reference is not the first
in the query, the specification is cloned in the same way as the specification of
a view is cloned for any reference of the view. This works fine for
standalone queries, but does not work for stored procedures / functions
for the following reason.
When the first call of a stored procedure/ function SP is processed the
body of SP is parsed. When a query of SP is parsed the info on each
encountered table reference is put into a TABLE_LIST object linked into
a global chain associated with the query. When parsing of the query is
finished the basic info on the table references from this chain except
table references to derived tables and information schema tables is put
in one hash table associated with SP. When parsing of the body of SP is
finished this hash table is used to construct TABLE_LIST objects for all
table references mentioned in SP and link them into the list of such
objects passed to a pre-locking process that calls open_and_process_table()
for each table from the list.
When a TABLE_LIST for a view is encountered the view is opened and its
specification is parsed. For any table reference occurred in
the specification a new TABLE_LIST object is created to be included into
the list for pre-locking. After all objects in the pre-locking list have been
looked through, the tables mentioned in the list are locked. Note that the
objects referencing CTEs are just skipped here, as it is impossible to
resolve these references without any info on the context where they occur.
Now the statements from the body of SP are executed one by one.
At the very beginning of the execution of a query the tables used in the
query are opened and open_and_process_table() now is called for each table
reference mentioned in the list of TABLE_LIST objects associated with the
query that was built when the query was parsed.
For each table reference, the reference is first checked against the CTE
definitions in whose scope it occurred. If such a definition is found, the
reference is considered resolved, and if this is not the first reference
to the found CTE, the specification of the CTE is re-parsed and the
result of the parsing is added to the parsing tree of the query as a
sub-tree. If this sub-tree contains table references to other tables they
are added to the list of TABLE_LIST objects associated with the query in
order the referenced tables to be opened. When the procedure that opens
the tables comes to the TABLE_LIST object created for a non-first
reference to a CTE it discovers that the referenced table instance is not
locked and reports an error.
Thus, processing non-first table references to a CTE similarly to how
references to views are processed does not work for queries used in stored
procedures / functions. The main problem is that the current
pre-locking mechanism employed for stored procedures / functions does not
allow saving the context in which a CTE reference occurs. It is not trivial
to save the info about the context where a CTE reference occurs, yet the
resolution of the table reference cannot be done without this context and
consequently the specification for the table reference cannot be
determined.

This patch solves the above problem by moving resolution of all CTE
references to the parsing stage. More exactly, references to CTEs occurring in
a query are resolved right after parsing of the query has finished. After
resolution, any CTE reference is marked as a reference to a derived
table. So it is excluded from the hash table created for pre-locking the used
base tables and views when the first call of a stored procedure / function
is processed.
This solution required recursive calls of the parser. The function
THD::sql_parser() has been added specifically for recursive invocations of
the parser.

# Conflicts:
#	sql/sql_cte.cc
#	sql/sql_cte.h
#	sql/sql_lex.cc
#	sql/sql_lex.h
#	sql/sql_view.cc
#	sql/sql_yacc.yy
#	sql/sql_yacc_ora.yy
2021-05-25 21:48:54 -07:00
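
A rough sketch of the parse-time idea, with hypothetical structures rather than MariaDB's LEX/With_element types: a reference is resolved against the enclosing CTE scopes as soon as the query is parsed and, if it matches, is marked as a derived-table reference so it never enters the pre-locking hash table.

    #include <iostream>
    #include <optional>
    #include <string>
    #include <vector>

    // Hypothetical parse-time structures, not the server's actual types.
    struct CteDef   { std::string name; std::string spec_text; };
    struct CteScope { std::vector<CteDef> defs; };

    struct TableRef
    {
      std::string name;
      bool is_derived= false;        // set when resolved to a CTE at parse time
      std::string derived_spec;      // specification to re-parse as a subquery
    };

    // Resolve a reference against the innermost enclosing WITH clause first,
    // mirroring the idea that CTE references are bound right after parsing,
    // before any pre-locking hash table is built.
    std::optional<CteDef> resolve_cte(const std::vector<CteScope> &scopes,
                                      const std::string &name)
    {
      for (auto scope= scopes.rbegin(); scope != scopes.rend(); ++scope)
        for (const CteDef &def : scope->defs)
          if (def.name == name)
            return def;
      return std::nullopt;
    }

    void bind_reference(TableRef &ref, const std::vector<CteScope> &scopes)
    {
      if (auto def= resolve_cte(scopes, ref.name))
      {
        ref.is_derived= true;            // excluded from the pre-locking hash table
        ref.derived_spec= def->spec_text; // to be re-parsed recursively
      }
      // otherwise it stays a base table / view reference handled by pre-locking
    }

    int main()
    {
      std::vector<CteScope> scopes= {{{{"cte1", "SELECT 1"}}}};
      TableRef ref{"cte1"};
      bind_reference(ref, scopes);
      std::cout << ref.name << " is_derived=" << ref.is_derived << "\n";
      return 0;
    }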
Marko Mäkelä
1dea7f7977 Merge 10.3 into 10.4 2021-05-25 15:38:57 +03:00
Igor Babaev
04de651725 MDEV-23886 Reusing CTE inside a function fails with table doesn't exist
In the code that existed just before this patch, the binding of a table
reference to the specification of the corresponding CTE happens in the
function open_and_process_table(). If the table reference is not the first
in the query, the specification is cloned in the same way as the specification of
a view is cloned for any reference of the view. This works fine for
standalone queries, but does not work for stored procedures / functions
for the following reason.
When the first call of a stored procedure/ function SP is processed the
body of SP is parsed. When a query of SP is parsed the info on each
encountered table reference is put into a TABLE_LIST object linked into
a global chain associated with the query. When parsing of the query is
finished the basic info on the table references from this chain except
table references to derived tables and information schema tables is put
in one hash table associated with SP. When parsing of the body of SP is
finished this hash table is used to construct TABLE_LIST objects for all
table references mentioned in SP and link them into the list of such
objects passed to a pre-locking process that calls open_and_process_table()
for each table from the list.
When a TABLE_LIST for a view is encountered the view is opened and its
specification is parsed. For any table reference occurred in
the specification a new TABLE_LIST object is created to be included into
the list for pre-locking. After all objects in the pre-locking list have been
looked through, the tables mentioned in the list are locked. Note that the
objects referencing CTEs are just skipped here, as it is impossible to
resolve these references without any info on the context where they occur.
Now the statements from the body of SP are executed one by one.
At the very beginning of the execution of a query the tables used in the
query are opened and open_and_process_table() now is called for each table
reference mentioned in the list of TABLE_LIST objects associated with the
query that was built when the query was parsed.
For each table reference, the reference is first checked against the CTE
definitions in whose scope it occurred. If such a definition is found, the
reference is considered resolved, and if this is not the first reference
to the found CTE, the specification of the CTE is re-parsed and the
result of the parsing is added to the parsing tree of the query as a
sub-tree. If this sub-tree contains table references to other tables they
are added to the list of TABLE_LIST objects associated with the query in
order the referenced tables to be opened. When the procedure that opens
the tables comes to the TABLE_LIST object created for a non-first
reference to a CTE it discovers that the referenced table instance is not
locked and reports an error.
Thus, processing non-first table references to a CTE similarly to how
references to views are processed does not work for queries used in stored
procedures / functions. The main problem is that the current
pre-locking mechanism employed for stored procedures / functions does not
allow saving the context in which a CTE reference occurs. It is not trivial
to save the info about the context where a CTE reference occurs, yet the
resolution of the table reference cannot be done without this context and
consequently the specification for the table reference cannot be
determined.

This patch solves the above problem by moving resolution of all CTE
references to the parsing stage. More exactly, references to CTEs occurring in
a query are resolved right after parsing of the query has finished. After
resolution, any CTE reference is marked as a reference to a derived
table. So it is excluded from the hash table created for pre-locking the used
base tables and views when the first call of a stored procedure / function
is processed.
This solution required recursive calls of the parser. The function
THD::sql_parser() has been added specifically for recursive invocations of
the parser.
2021-05-25 00:43:03 -07:00
Marko Mäkelä
1864a8ea93 Merge 10.2 into 10.3 2021-05-24 09:38:49 +03:00
Igor Babaev
43c9fcefc0 MDEV-23886 Reusing CTE inside a function fails with table doesn't exist
In the code that existed just before this patch, the binding of a table
reference to the specification of the corresponding CTE happens in the
function open_and_process_table(). If the table reference is not the first
in the query, the specification is cloned in the same way as the specification of
a view is cloned for any reference of the view. This works fine for
standalone queries, but does not work for stored procedures / functions
for the following reason.
When the first call of a stored procedure/ function SP is processed the
body of SP is parsed. When a query of SP is parsed the info on each
encountered table reference is put into a TABLE_LIST object linked into
a global chain associated with the query. When parsing of the query is
finished the basic info on the table references from this chain except
table references to derived tables and information schema tables is put
in one hash table associated with SP. When parsing of the body of SP is
finished this hash table is used to construct TABLE_LIST objects for all
table references mentioned in SP and link them into the list of such
objects passed to a pre-locking process that calls open_and_process_table()
for each table from the list.
When a TABLE_LIST for a view is encountered the view is opened and its
specification is parsed. For any table reference occurred in
the specification a new TABLE_LIST object is created to be included into
the list for pre-locking. After all objects in the pre-locking list have been
looked through, the tables mentioned in the list are locked. Note that the
objects referencing CTEs are just skipped here, as it is impossible to
resolve these references without any info on the context where they occur.
Now the statements from the body of SP are executed one by one.
At the very beginning of the execution of a query the tables used in the
query are opened and open_and_process_table() now is called for each table
reference mentioned in the list of TABLE_LIST objects associated with the
query that was built when the query was parsed.
For each table reference, the reference is first checked against the CTE
definitions in whose scope it occurred. If such a definition is found, the
reference is considered resolved, and if this is not the first reference
to the found CTE, the specification of the CTE is re-parsed and the
result of the parsing is added to the parsing tree of the query as a
sub-tree. If this sub-tree contains table references to other tables they
are added to the list of TABLE_LIST objects associated with the query in
order the referenced tables to be opened. When the procedure that opens
the tables comes to the TABLE_LIST object created for a non-first
reference to a CTE it discovers that the referenced table instance is not
locked and reports an error.
Thus, processing non-first table references to a CTE similarly to how
references to views are processed does not work for queries used in stored
procedures / functions. The main problem is that the current
pre-locking mechanism employed for stored procedures / functions does not
allow saving the context in which a CTE reference occurs. It is not trivial
to save the info about the context where a CTE reference occurs, yet the
resolution of the table reference cannot be done without this context and
consequently the specification for the table reference cannot be
determined.

This patch solves the above problem by moving resolution of all CTE
references to the parsing stage. More exactly, references to CTEs occurring in
a query are resolved right after parsing of the query has finished. After
resolution, any CTE reference is marked as a reference to a derived
table. So it is excluded from the hash table created for pre-locking the used
base tables and views when the first call of a stored procedure / function
is processed.
This solution required recursive calls of the parser. The function
THD::sql_parser() has been added specifically for recursive invocations of
the parser.
2021-05-21 16:00:35 -07:00
Monty
b332ffc1af Fixed assert in WSREP if one started with --wsrep_provider=.. 2021-05-19 22:54:14 +02:00
Monty
f671a9dee7 Replace find_temporary_table() with is_temporary_table()
DROP TABLE opens all temporary tables at start, but then
uses find_temporary_table() to check if a table is temporary
instead of is_temporary_table(), which is much faster.

This patch fixes this issue.
2021-05-19 22:54:11 +02:00
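
A toy illustration of why the change matters, with hypothetical structures in place of THD and TABLE_LIST: once DROP TABLE has opened the temporary tables and attached them to the table list, checking a per-entry pointer is constant time, whereas re-searching the session's temporary-table list costs a scan per table.

    #include <iostream>
    #include <list>
    #include <string>

    // Hypothetical stand-ins; the real code deals with THD and TABLE_LIST.
    struct TmpTable { std::string name; };

    struct TableListEntry
    {
      std::string name;
      TmpTable *opened_tmp= nullptr;   // set when DROP TABLE opens the temp tables
    };

    struct Session
    {
      std::list<TmpTable> temporary_tables;

      // find_temporary_table()-style check: scans the whole list on every call.
      bool find_temporary_table(const std::string &name) const
      {
        for (const TmpTable &t : temporary_tables)
          if (t.name == name)
            return true;
        return false;
      }
    };

    // is_temporary_table()-style check: the answer was recorded when the table
    // was opened, so it is a constant-time pointer test.
    static bool is_temporary_table(const TableListEntry &tl)
    {
      return tl.opened_tmp != nullptr;
    }

    int main()
    {
      Session s;
      s.temporary_tables.push_back({"t1"});
      TableListEntry tl{"t1", &s.temporary_tables.back()};
      std::cout << s.find_temporary_table(tl.name) << " "   // O(n) scan
                << is_temporary_table(tl) << "\n";          // O(1) check
      return 0;
    }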