mirror of https://github.com/MariaDB/server.git synced 2025-07-24 19:42:23 +03:00
Commit Graph

4187 Commits

Author SHA1 Message Date
401ff6994d MDEV-26221: DYNAMIC_ARRAY use size_t for sizes
https://jira.mariadb.org/browse/MDEV-26221
my_sys DYNAMIC_ARRAY and DYNAMIC_STRING inconsistency

The DYNAMIC_STRING uses size_t for sizes, but DYNAMIC_ARRAY used uint.
This patch adjusts DYNAMIC_ARRAY to use size_t like DYNAMIC_STRING.

As the MY_DIR member number_of_files is copied from a DYNAMIC_ARRAY,
this is changed to be size_t.

As MY_TMPDIR members 'cur' and 'max' are copied from a DYNAMIC_ARRAY,
these are also changed to be size_t.

The lists of plugins and stored procedures use DYNAMIC_ARRAY,
but their APIs assume a size of 'uint'; these are unchanged.
2021-10-19 16:00:26 +03:00
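A minimal sketch of the direction of this change, using an invented dynamic-array structure rather than the actual my_sys definitions: the element count and capacity are held as size_t, so they cannot be truncated to 32 bits on 64-bit platforms.

    #include <cstddef>
    #include <cstdlib>
    #include <cstring>

    struct dyn_array {                 // illustrative stand-in for DYNAMIC_ARRAY
      unsigned char *buffer;           // raw storage
      size_t elements;                 // stored element count (previously 'uint')
      size_t max_element;              // allocated capacity   (previously 'uint')
      size_t size_of_element;          // element size in bytes
    };

    static bool dyn_array_init(dyn_array *a, size_t elem_size, size_t init_alloc)
    {
      a->buffer = static_cast<unsigned char *>(malloc(elem_size * init_alloc));
      if (!a->buffer)
        return true;                   // allocation failure
      a->elements = 0;
      a->max_element = init_alloc;
      a->size_of_element = elem_size;
      return false;
    }

    static bool dyn_array_push(dyn_array *a, const void *elem)
    {
      if (a->elements == a->max_element)
      {
        size_t new_cap = a->max_element ? a->max_element * 2 : 16;
        void *p = realloc(a->buffer, new_cap * a->size_of_element);
        if (!p)
          return true;                 // allocation failure, array unchanged
        a->buffer = static_cast<unsigned char *>(p);
        a->max_element = new_cap;
      }
      memcpy(a->buffer + a->elements * a->size_of_element,
             elem, a->size_of_element);
      a->elements++;
      return false;
    }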
b4911f5a34 Merge 10.6 into 10.7 2021-10-13 16:37:12 +03:00
a8379e53e8 Merge 10.5 into 10.6
The changes to galera.galear_var_replicate_myisam_on
in commit d9b933bec6
are omitted due to conflicts
with commit 27d66d644c.
2021-10-13 13:28:12 +03:00
99bb3fb656 Merge 10.4 into 10.5 2021-10-13 12:33:56 +03:00
a736a3174a Merge 10.3 into 10.4 2021-10-13 12:03:32 +03:00
1e70b287e7 MDEV-25891 Computed default for INVISIBLE column is ignored in INSERT
There are two fill_record() functions (lines 8343 and 8618). The first one
is used when there are some explicit values, the second one for all
implicit values. The first one calls update_default_fields(), the second
one did not. Added an update_default_fields() call to the implicit version
of fill_record().
2021-10-11 13:36:07 +03:00
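A hedged sketch of the idea behind the fix, with invented structures rather than the server's Field/TABLE types: whichever record-filling path runs, computed defaults must be evaluated for columns that received no explicit value.

    #include <functional>
    #include <vector>

    struct Field {                          // invented stand-in, not sql/field.h
      long value = 0;
      bool explicitly_assigned = false;
      std::function<long()> default_expr;   // computed DEFAULT, if any
    };

    static void update_default_fields(std::vector<Field> &fields)
    {
      for (Field &f : fields)
        if (!f.explicitly_assigned && f.default_expr)
          f.value = f.default_expr();       // evaluate the computed default
    }

    // Path with explicit values: it already called update_default_fields().
    static void fill_record_explicit(std::vector<Field> &fields,
                                     const std::vector<long> &values)
    {
      for (size_t i = 0; i < values.size() && i < fields.size(); i++)
      {
        fields[i].value = values[i];
        fields[i].explicitly_assigned = true;
      }
      update_default_fields(fields);
    }

    // Path with only implicit values: the fix adds the same call here.
    static void fill_record_implicit(std::vector<Field> &fields)
    {
      update_default_fields(fields);
    }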
b36d6f92a8 Merge 10.6 into 10.7 2021-09-30 11:01:07 +03:00
a49e394399 Merge 10.5 into 10.6 2021-09-30 10:38:44 +03:00
064cb58efe Merge 10.4 into 10.5
FIXME: Part of the MDEV-20699 test is disabled due to a
nondeterministic result.
2021-09-30 09:04:43 +03:00
a10b63bf58 Merge 10.3 into 10.4 2021-09-29 16:03:02 +03:00
742b37a345 Merge 10.2 into 10.3 2021-09-29 15:04:20 +03:00
3690c549c6 MDEV-24454 Crash at change_item_tree
Use in_sum_func (and so nest_level) only in the LEX to which the SELECT lex
belongs.

Reduce usage of current_select, because it does not always point to the
correct SELECT_LEX (for example with prepared statements).

Change the context for all classes inherited from Item_ident (previously only
for Item_field) in case of pushing it down to HAVING.

Now a name resolution context has to have a SELECT_LEX reference if the
context is present.

Fixed feedback plugin stack usage.
2021-09-27 11:00:51 +02:00
4be366111b Merge 10.6 into 10.7 2021-09-11 18:01:31 +03:00
15139964d5 Merge 10.5 into 10.6 2021-09-11 17:55:27 +03:00
7c33ecb665 Merge remote-tracking branch 'upstream/10.4' into 10.5 2021-09-10 17:16:18 +03:00
de7e027d5e Merge remote-tracking branch 'upstream/10.3' into 10.4 2021-09-09 09:23:35 +03:00
391f6b4f1e MDEV-26362: incorrect nest_level value with INTERSECT
Add DBUG_ASSERT (should be kept in merge)
Fix nest_level assignment in LEX::add_unit_in_brackets (should be ignored in merge to 10.4)
2021-09-05 10:38:12 +02:00
58aaa67064 Merge 10.6 into 10.7 2021-08-31 11:01:19 +03:00
e94172c2a0 Merge 10.5 into 10.6 2021-08-31 11:00:41 +03:00
e62120cec7 Merge 10.4 into 10.5 2021-08-31 10:04:56 +03:00
0464761126 Merge 10.3 into 10.4 2021-08-31 09:22:21 +03:00
b378ddb3d3 MDEV-22785 Crash with prepared statements and NEXTVAL()
The problem was that a PREPARE followed by a non-prepared statement
using DEFAULT NEXT_VALUE() could change table->next_local to point to
a non-persistent memory area. The next EXECUTE would then try to use
the wrong pointer, which could cause a crash.
Fixed by resetting the pointer to its old value when doing EXECUTE.
2021-08-26 07:07:46 +03:00
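Roughly what the fix amounts to, sketched with an invented structure rather than the real TABLE_LIST: the pointer captured at PREPARE time is restored at every EXECUTE, so an intervening statement cannot leave it pointing into freed statement memory.

    struct TableRef {                  // invented stand-in for a TABLE_LIST node
      TableRef *next_local;            // may be repointed by another statement
      TableRef *saved_next_local;      // value captured at PREPARE time
    };

    static void on_prepare(TableRef *t)
    {
      t->saved_next_local = t->next_local;   // remember the original link
    }

    static void on_execute(TableRef *t)
    {
      t->next_local = t->saved_next_local;   // restore before reusing the plan
      // ... execute the prepared statement using the restored list ...
    }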
ddcb242b3c Merge 10.6 into 10.7 2021-08-04 11:52:39 +03:00
6efb5e9f5e Merge branch '10.5' into 10.6 2021-08-02 10:11:41 +02:00
ae6bdc6769 Merge branch '10.4' into 10.5 2021-07-31 23:19:51 +02:00
7841a7eb09 Merge branch '10.3' into 10.4 2021-07-31 22:59:58 +02:00
5518c3209b MDEV-23178: Qualified asterisk not supported in INSERT .. RETURNING
Analysis: When we have INSERT/REPLACE ... RETURNING with a qualified asterisk
in the RETURNING clause, '*' is not resolved properly because of a wrong
context. context->table_list is NULL or holds the wrong table, because
context->table_list contains the tables from the FROM clause. For
INSERT/REPLACE...SELECT...RETURNING, context->table_list holds the table we
are inserting from, while in the other INSERT/REPLACE syntax,
context->table_list is NULL because there is no FROM clause.
Fix: When filling in the fields for a qualified asterisk in RETURNING,
use first_name_resolution_table for correct resolution of the item.
2021-07-22 21:56:18 +05:30
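A hedged sketch of the resolution rule (types and names invented for illustration): a qualified '*' in RETURNING is expanded against the first name-resolution table, i.e. the INSERT/REPLACE target, instead of context->table_list.

    #include <string>
    #include <vector>

    struct TableRef {                          // invented stand-in
      std::string alias;
      std::vector<std::string> columns;
    };

    struct NameContext {                       // invented stand-in
      TableRef *table_list;                    // FROM-clause tables (may be null)
      TableRef *first_name_resolution_table;   // the INSERT/REPLACE target
    };

    static std::vector<std::string>
    expand_qualified_star(const NameContext &ctx, const std::string &qualifier)
    {
      const TableRef *t = ctx.first_name_resolution_table;  // the fix: start here
      if (t && t->alias == qualifier)
        return t->columns;
      return {};                               // unresolved qualifier
    }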
091743c6d8 Removing the condition in the for loop and putting it in one place to
make code more readable and cleaner.
2021-07-22 21:56:18 +05:30
6190a02f35 Merge branch '10.2' into 10.3 2021-07-21 20:11:07 +02:00
d378a466a5 Change Item_true and Item_false to pointers
This is a prerequisite for moving them to a readonly segment.
2021-07-18 19:59:35 +03:00
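A small sketch of the pattern (not the actual Item classes): when the rest of the code refers to the TRUE/FALSE singletons only through pointers, the singleton objects themselves can later be made const and placed in a read-only segment.

    struct ItemBool {                          // invented stand-in for the Items
      bool value;
      constexpr explicit ItemBool(bool v) : value(v) {}
    };

    // The singleton objects can live in .rodata once they are const ...
    static constexpr ItemBool item_true_obj{true};
    static constexpr ItemBool item_false_obj{false};

    // ... and all other code reaches them only through these pointers.
    static const ItemBool *Item_true  = &item_true_obj;
    static const ItemBool *Item_false = &item_false_obj;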
c47e4aab62 MDEV-23597 Assertion `marked_for_read()' failed while evaluating DEFAULT
The columns that are part of a DEFAULT expression were not read-marked
in statements like UPDATE...SET b=DEFAULT.

The problem is that an `F(DEFAULT)` expression depends on the left-hand side of
an assignment. However, setup_fields() accepts only the right-hand side value,
and Item::fix_fields() does not take the left-hand side either.

Hence, b=DEFAULT(b) works fine, because Item_default_value has
information on what field it is the default of:
    if (thd->mark_used_columns != MARK_COLUMNS_NONE)
      def_field->default_value->expr->update_used_tables();

in Item_default_value::fix_fields().

It is not reasonable to pass the left-hand side to Item::fix_fields(), because
the case is rare, so the rewrite
  b= F(DEFAULT)  ->  b= F(DEFAULT(b))

is made instead.

Both UPDATE and multi-UPDATE are affected; however, any form of INSERT
is not: it marks all the fields in DEFAULT expressions for read in
TABLE::mark_default_fields_for_write().
2021-07-16 13:31:19 +03:00
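An illustrative expression-rewrite sketch with invented node types (not the server's Item hierarchy): walking the right-hand side of one assignment and turning every bare DEFAULT into DEFAULT(<left-hand column>), which is the rewrite b= F(DEFAULT) -> b= F(DEFAULT(b)) described above.

    #include <memory>
    #include <string>
    #include <vector>

    struct Expr {                               // invented stand-in for Item
      enum Kind { FUNC, DEFAULT_BARE, DEFAULT_OF, COLUMN } kind;
      std::string name;                         // function or column name
      std::vector<std::unique_ptr<Expr>> args;
    };

    static void bind_bare_defaults(Expr *e, const std::string &lhs_column)
    {
      if (e->kind == Expr::DEFAULT_BARE)
      {
        e->kind = Expr::DEFAULT_OF;             // DEFAULT  ->  DEFAULT(lhs)
        e->name = lhs_column;
        return;
      }
      for (auto &arg : e->args)                 // recurse into F(...)
        bind_bare_defaults(arg.get(), lhs_column);
    }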
7d9ba57da4 [1/2] MDEV-18166 ASSERT_COLUMN_MARKED_FOR_READ failed on tables with vcols
This is the 10.2+ part of a Jira task

The following bugs regarding virtual column marking have been fixed:

1. UPDATE of a partitioned table, where the optimizer has chosen a
 secondary index to make a filesort;
2. INSERT into a table with a non-blob field generated from a blob, with
 binlog enabled and binlog_row_image=noblob;
3. DELETE from a view on a table with a virtual column.

Generally the assertion happens from the update_virtual_fields() call.

These bugs are root-caused by missing field marking for dependent fields
of a virtual column.

Therefore the fix is: mark all the fields involved in the vcol expression by
calling field->register_field_in_read_map() instead of just setting a single
bit.

Bug 3 was reproducible only on 10.4+; however, the problem might have just
been invisible in the earlier versions. The fix is applicable to 10.2-10.3 as
well.
2021-07-12 22:00:39 +03:00
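A hedged sketch of the marking change with invented structures (not sql/field.h): instead of setting only the virtual column's own bit, every base column read by the vcol expression is marked as well.

    #include <bitset>
    #include <vector>

    constexpr size_t MAX_FIELDS = 64;           // arbitrary bound for the sketch

    struct VField {                             // invented stand-in for Field
      size_t field_index;
      std::vector<size_t> vcol_dependencies;    // base columns the expr reads
    };

    static void register_field_in_read_map(const VField &f,
                                           std::bitset<MAX_FIELDS> &read_map)
    {
      read_map.set(f.field_index);              // the old code stopped here
      for (size_t dep : f.vcol_dependencies)    // the fix: mark dependencies too
        read_map.set(dep);
    }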
c2ebe8147d MDEV-25837 Assertion `thd->locked_tables_mode == LTM_NONE' failed in Locked_tables_list::init_locked_tables.
don't do prelocking for the FLUSH command.
2021-06-29 16:03:26 +04:00
aedf314333 MDEV-16708: spurious ER_NEED_REPREPARE for gtid_slave_pos and event_scheduler
do not try to detect metadata change (and reprepare) for
internal short-lived TABLE_LIST objects that couldn't have
possibly lived long enough to see prepare and cache the metadata.
2021-06-17 19:30:24 +02:00
5478ca779a MDEV-25576: The statement EXPLAIN running as regular statement and as prepared statement produces different results for UPDATE with subquery
10.6 cleanup
2021-06-17 19:30:24 +02:00
3648b333c7 cleanup: formatting
also avoid an oxymoron of using `MYSQL_PLUGIN_IMPORT` under
`#ifdef MYSQL_SERVER`, and empty_clex_str is so trivial that a plugin
can define it if needed.
2021-06-11 13:02:55 +02:00
1bd681c8b3 MDEV-25506 (3 of 3): Do not delete .ibd files before commit
This is a complete rewrite of DROP TABLE, also as part of other DDL,
such as ALTER TABLE, CREATE TABLE...SELECT, TRUNCATE TABLE.

The background DROP TABLE queue hack is removed.
If a transaction needs to drop and create a table by the same name
(like TRUNCATE TABLE does), it must first rename the table to an
internal #sql-ib name. No committed version of the data dictionary
will include any #sql-ib tables, because whenever a transaction
renames a table to a #sql-ib name, it will also drop that table.
Either the rename will be rolled back, or the drop will be committed.

Data files will be unlinked after the transaction has been committed
and a FILE_RENAME record has been durably written. The file will
actually be deleted when the detached file handle returned by
fil_delete_tablespace() will be closed, after the latches have been
released. It is possible that a purge of the delete of the SYS_INDEXES
record for the clustered index will execute fil_delete_tablespace()
concurrently with the DDL transaction. In that case, the thread that
arrives later will wait for the other thread to finish.

HTON_TRUNCATE_REQUIRES_EXCLUSIVE_USE: A new handler flag.
ha_innobase::truncate() now requires that all other references to
the table be released in advance. This was implemented by Monty.

ha_innobase::delete_table(): If CREATE TABLE..SELECT is detected,
we will "hijack" the current transaction, drop the table in
the current transaction and commit the current transaction.
This essentially fixes MDEV-21602. There is a FIXME comment about
making the check less failure-prone.

ha_innobase::truncate(), ha_innobase::delete_table():
Implement a fast path for temporary tables. We will no longer allow
temporary tables to use the adaptive hash index.

dict_table_t::mdl_name: The original table name for the purpose of
acquiring MDL in purge, to prevent a race condition between a
DDL transaction that is dropping a table, and purge processing
undo log records of DML that had executed before the DDL operation.
For #sql-backup- tables during ALTER TABLE...ALGORITHM=COPY, the
dict_table_t::mdl_name will differ from dict_table_t::name.

dict_table_t::parse_name(): Use mdl_name instead of name.

dict_table_rename_in_cache(): Update mdl_name.

For the internal FTS_ tables of FULLTEXT INDEX, purge would
acquire MDL on the FTS_ table name, but not on the main table,
and therefore it would be able to run concurrently with a
DDL transaction that is dropping the table. Previously, the
DROP TABLE queue hack prevented a race between purge and DDL.
For now, we introduce purge_sys.stop_FTS() to prevent purge from
opening any table, while a DDL transaction that may drop FTS_
tables is in progress. The function fts_lock_table(), which will
be invoked before the dictionary is locked, will wait for
purge to release any table handles.

trx_t::drop_table_statistics(): Drop statistics for the table.
This replaces dict_stats_drop_index(). We will drop or rename
persistent statistics atomically as part of DDL transactions.
On lock conflict for dropping statistics, we will fail instantly
with DB_LOCK_WAIT_TIMEOUT, because we will be holding the
exclusive data dictionary latch.

trx_t::commit_cleanup(): Separated from trx_t::commit_in_memory().
Relax an assertion around fts_commit() and allow DB_LOCK_WAIT_TIMEOUT
in addition to DB_DUPLICATE_KEY. The call to fts_commit() is
entirely misplaced here and may obviously break the consistency
of transactions that affect FULLTEXT INDEX. It needs to be fixed
separately.

dict_table_t::n_foreign_key_checks_running: Remove (MDEV-21175).
The counter was a work-around for missing meta-data locking (MDL)
on the SQL layer, and not really needed in MariaDB.

ER_TABLE_IN_FK_CHECK: Replaced with ER_UNUSED_28.

HA_ERR_TABLE_IN_FK_CHECK: Remove.

row_ins_check_foreign_constraints(): Do not acquire
dict_sys.latch either. The SQL-layer MDL will protect us.

This was reviewed by Thirunarayanan Balathandayuthapani
and tested by Matthias Leich.
2021-06-09 17:06:07 +03:00
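A very rough sketch of the commit ordering described above, with invented helper types (this is not InnoDB code): the old table is renamed to an internal #sql-ib name and dropped inside the same DDL transaction, and the data file is unlinked only after that transaction has committed.

    #include <string>
    #include <vector>

    struct DdlTrx {                             // invented stand-in for trx_t
      std::vector<std::string> unlink_on_commit;
    };

    static void replace_table(DdlTrx &trx, const std::string &name)
    {
      std::string hidden = "#sql-ib-" + name;   // internal temporary name
      // 1. rename the old table to the hidden name (undone if the trx rolls back)
      // 2. drop the hidden table within the same transaction
      trx.unlink_on_commit.push_back(hidden + ".ibd");
      // 3. create the new table under the original name
    }

    static void commit(DdlTrx &trx)
    {
      // Only after the transaction (with its FILE_RENAME record) is durable
      // are the detached data files actually deleted.
      for (const std::string &file : trx.unlink_on_commit)
        (void)file;                             // ::unlink(file.c_str()) here
      trx.unlink_on_commit.clear();
    }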
3d6eb7afcf MDEV-25602 get rid of __WIN__ in favor of standard _WIN32
This fixes MySQL bug #20338 about the misuse of the double-underscore
prefix __WIN__, which was old MySQL's way of identifying Windows.
Replace it by the standard _WIN32 symbol for targeting the Windows OS
(both 32-bit and 64-bit).

Note that the Connect storage engine is not fixed in this patch (it must be
fixed in the "upstream" branch).
2021-06-06 13:21:03 +02:00
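The change in a nutshell, as a generic example: test the compiler-defined _WIN32 macro (set for both 32-bit and 64-bit Windows targets) instead of the legacy __WIN__ one.

    #ifdef _WIN32                      /* previously: #ifdef __WIN__ */
    #include <windows.h>
    static void platform_sleep_ms(unsigned ms) { Sleep(ms); }
    #else
    #include <unistd.h>
    static void platform_sleep_ms(unsigned ms) { usleep(ms * 1000); }
    #endif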
a722ee88f3 Merge 10.5 into 10.6 2021-06-01 11:39:38 +03:00
9c7a456a92 Merge 10.4 into 10.5 2021-06-01 10:38:09 +03:00
77d8da57d7 Merge 10.3 into 10.4 2021-06-01 09:14:59 +03:00
950a220060 Merge 10.2 into 10.3 2021-06-01 08:40:59 +03:00
91bde0fb67 MDEV-25576: The statement EXPLAIN running as regular statement and as prepared statement produces different results for UPDATE with subquery
Both EXPLAIN and EXPLAIN EXTENDED statements produce a different result set
depending on whether they are run in the normal way or in PS mode for
UPDATE/DELETE statements with a subquery.

The use case below reproduces the issue:
MariaDB [test]> CREATE TABLE t1 (c1 INT KEY) ENGINE=MyISAM;
Query OK, 0 rows affected (0,128 sec)

MariaDB [test]> CREATE TABLE t2 (c2 INT) ENGINE=MyISAM;
Query OK, 0 rows affected (0,023 sec)

MariaDB [test]> CREATE TABLE t3 (c3 INT) ENGINE=MyISAM;
Query OK, 0 rows affected (0,021 sec)

MariaDB [test]> EXPLAIN EXTENDED UPDATE t3 SET c3 =
    -> ( SELECT COUNT(d1.c1) FROM ( SELECT a11.c1 FROM t1 AS a11
    -> STRAIGHT_JOIN t2 AS a21 ON a21.c2 = a11.c1 JOIN t1 AS a12
    -> ON a12.c1 = a11.c1 ) d1 );
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
| id   | select_type | table | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra                          |
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
|    1 | PRIMARY     | t3    | ALL  | NULL          | NULL | NULL    | NULL |    0 |   100.00 |                                |
|    2 | SUBQUERY    | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL |     NULL | Impossible WHERE noticed after reading const tables
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
2 rows in set (0,002 sec)

MariaDB [test]> PREPARE stmt FROM
    -> EXPLAIN EXTENDED UPDATE t3 SET c3 =
    -> ( SELECT COUNT(d1.c1) FROM ( SELECT a11.c1 FROM t1 AS a11
    -> STRAIGHT_JOIN t2 AS a21 ON a21.c2 = a11.c1 JOIN t1 AS a12
    -> ON a12.c1 = a11.c1 ) d1 );
Query OK, 0 rows affected (0,000 sec)
Statement prepared

MariaDB [test]>  EXECUTE stmt;
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
| id   | select_type | table | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra                          |
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
|    1 | PRIMARY     | t3    | ALL  | NULL          | NULL | NULL    | NULL |    0 |   100.00 |                                |
|    2 | SUBQUERY    | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL |     NULL | no matching row in const table |
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
2 rows in set (0,000 sec)

The reason that different result sets are produced is that on execution
of the statement 'EXECUTE stmt' the flag SELECT_DESCRIBE is not set
in the data member SELECT_LEX::options for instances of SELECT_LEX that
correspond to subqueries used in the UPDATE/DELETE statements.

Initially, these flags were set on parsing the statement
  PREPARE stmt FROM "EXPLAIN EXTENDED UPDATE t3 SET ..."
but later they were reset before starting the real execution of
the parsed query while handling the statement 'EXECUTE stmt'.

So, to fix the issue, the functions mysql_update()/mysql_delete()
have been modified to forcibly set the flag SELECT_DESCRIBE
in the data member SELECT_LEX::options for the primary SELECT_LEX
of the UPDATE/DELETE statement.
2021-05-30 17:31:55 +07:00
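A hedged sketch with invented types (not sql_lex.h): the essence of the fix is to re-assert the describe flag on the statement's primary select before each execution, because re-execution of the prepared statement had cleared it.

    #include <cstdint>

    constexpr uint64_t SELECT_DESCRIBE_FLAG = 1ULL << 0;   // invented value

    struct SelectLex {                          // invented stand-in for SELECT_LEX
      uint64_t options = 0;
    };

    static void run_update_or_delete(SelectLex &primary_select, bool is_explain)
    {
      if (is_explain)
        primary_select.options |= SELECT_DESCRIBE_FLAG;    // re-assert for EXECUTE
      // ... subquery handling checks this flag to decide whether it is doing
      // a describe-only run or a real execution ...
    }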
aa284e0237 MDEV-17749 Kill during LOCK TABLE ; ALTER TABLE causes assert
The problem was that when LOCK TABLES were unwound as part of
a killed connection, unlink_all_closed_tables() did not like that
there were uncommitted transactions.
Fixed by doing a rollback of any open transaction in this particular case.
2021-05-26 14:35:23 +03:00
860e754349 Merge 10.5 into 10.6 2021-05-26 11:22:40 +03:00
675716e1cb MDEV-23886 Reusing CTE inside a function fails with table doesn't exist
In the code that existed just before this patch, binding of a table reference
to the specification of the corresponding CTE happens in the function
open_and_process_table(). If the table reference is not the first in the
query, the specification is cloned in the same way as the specification of
a view is cloned for any reference to the view. This works fine for
standalone queries, but does not work for stored procedures / functions,
for the following reason.

When the first call of a stored procedure / function SP is processed, the
body of SP is parsed. When a query of SP is parsed, the info on each
encountered table reference is put into a TABLE_LIST object linked into
a global chain associated with the query. When parsing of the query is
finished, the basic info on the table references from this chain, except
table references to derived tables and information schema tables, is put
into one hash table associated with SP. When parsing of the body of SP is
finished, this hash table is used to construct TABLE_LIST objects for all
table references mentioned in SP and to link them into the list of such
objects passed to a pre-locking process that calls open_and_process_table()
for each table from the list.

When a TABLE_LIST for a view is encountered, the view is opened and its
specification is parsed. For any table reference occurring in
the specification, a new TABLE_LIST object is created to be included into
the list for pre-locking. After all objects in the pre-locking list have
been looked through, the tables mentioned in the list are locked. Note that
the objects referencing CTEs are just skipped here, as it is impossible to
resolve these references without any info on the context where they occur.

Then the statements from the body of SP are executed one by one.
At the very beginning of the execution of a query, the tables used in the
query are opened, and open_and_process_table() is now called for each table
reference mentioned in the list of TABLE_LIST objects associated with the
query that was built when the query was parsed.

For each table reference, the reference is first checked against the CTE
definitions in whose scope it occurred. If such a definition is found, the
reference is considered resolved, and if this is not the first reference
to the found CTE, the specification of the CTE is re-parsed and the
result of the parsing is added to the parse tree of the query as a
sub-tree. If this sub-tree contains table references to other tables, they
are added to the list of TABLE_LIST objects associated with the query so
that the referenced tables get opened. When the procedure that opens
the tables comes to the TABLE_LIST object created for a non-first
reference to a CTE, it discovers that the referenced table instance is not
locked and reports an error.

Thus, processing non-first table references to a CTE in the same way as
references to views are processed does not work for queries used in stored
procedures / functions. The main problem is that the current
pre-locking mechanism employed for stored procedures / functions does not
allow saving the context in which a CTE reference occurs. It is not trivial
to save the info about the context where a CTE reference occurs, while the
resolution of the table reference cannot be done without this context, and
consequently the specification for the table reference cannot be
determined.

This patch solves the above problem by moving the resolution of all CTE
references to the parsing stage. More exactly, references to CTEs occurring
in a query are resolved right after parsing of the query has finished. After
resolution, any CTE reference is marked as a reference to a derived table,
so it is excluded from the hash table created for pre-locking the used base
tables and views when the first call of a stored procedure / function is
processed.
This solution required recursive calls of the parser. The function
THD::sql_parser() has been added specifically for recursive invocations of
the parser.

# Conflicts:
#	sql/sql_cte.cc
#	sql/sql_cte.h
#	sql/sql_lex.cc
#	sql/sql_lex.h
#	sql/sql_view.cc
#	sql/sql_yacc.yy
#	sql/sql_yacc_ora.yy
2021-05-25 21:48:54 -07:00
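A hedged sketch of the new resolution order with invented structures (not sql_cte.cc): CTE references are bound right after the query is parsed, so by pre-locking time they are already marked as derived tables and never enter the hash table of base tables and views.

    #include <map>
    #include <string>
    #include <vector>

    struct TableRef {                           // invented stand-in for TABLE_LIST
      std::string name;
      bool is_derived = false;
      std::string derived_spec;                 // CTE body, re-parsed when cloned
    };

    struct ParsedQuery {                        // invented stand-in
      std::map<std::string, std::string> cte_defs;  // WITH name -> specification
      std::vector<TableRef> tables;
    };

    // Called immediately after parsing finishes, before any pre-locking pass.
    static void resolve_cte_references(ParsedQuery &q)
    {
      for (TableRef &t : q.tables)
      {
        auto it = q.cte_defs.find(t.name);
        if (it == q.cte_defs.end())
          continue;                             // a base table or view
        t.is_derived = true;                    // excluded from the pre-locking hash
        t.derived_spec = it->second;            // non-first references re-parse
                                                // this via a recursive parser call
      }
    }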
04de651725 MDEV-23886 Reusing CTE inside a function fails with table doesn't exist
In the code that existed just before this patch, binding of a table reference
to the specification of the corresponding CTE happens in the function
open_and_process_table(). If the table reference is not the first in the
query, the specification is cloned in the same way as the specification of
a view is cloned for any reference to the view. This works fine for
standalone queries, but does not work for stored procedures / functions,
for the following reason.

When the first call of a stored procedure / function SP is processed, the
body of SP is parsed. When a query of SP is parsed, the info on each
encountered table reference is put into a TABLE_LIST object linked into
a global chain associated with the query. When parsing of the query is
finished, the basic info on the table references from this chain, except
table references to derived tables and information schema tables, is put
into one hash table associated with SP. When parsing of the body of SP is
finished, this hash table is used to construct TABLE_LIST objects for all
table references mentioned in SP and to link them into the list of such
objects passed to a pre-locking process that calls open_and_process_table()
for each table from the list.

When a TABLE_LIST for a view is encountered, the view is opened and its
specification is parsed. For any table reference occurring in
the specification, a new TABLE_LIST object is created to be included into
the list for pre-locking. After all objects in the pre-locking list have
been looked through, the tables mentioned in the list are locked. Note that
the objects referencing CTEs are just skipped here, as it is impossible to
resolve these references without any info on the context where they occur.

Then the statements from the body of SP are executed one by one.
At the very beginning of the execution of a query, the tables used in the
query are opened, and open_and_process_table() is now called for each table
reference mentioned in the list of TABLE_LIST objects associated with the
query that was built when the query was parsed.

For each table reference, the reference is first checked against the CTE
definitions in whose scope it occurred. If such a definition is found, the
reference is considered resolved, and if this is not the first reference
to the found CTE, the specification of the CTE is re-parsed and the
result of the parsing is added to the parse tree of the query as a
sub-tree. If this sub-tree contains table references to other tables, they
are added to the list of TABLE_LIST objects associated with the query so
that the referenced tables get opened. When the procedure that opens
the tables comes to the TABLE_LIST object created for a non-first
reference to a CTE, it discovers that the referenced table instance is not
locked and reports an error.

Thus, processing non-first table references to a CTE in the same way as
references to views are processed does not work for queries used in stored
procedures / functions. The main problem is that the current
pre-locking mechanism employed for stored procedures / functions does not
allow saving the context in which a CTE reference occurs. It is not trivial
to save the info about the context where a CTE reference occurs, while the
resolution of the table reference cannot be done without this context, and
consequently the specification for the table reference cannot be
determined.

This patch solves the above problem by moving the resolution of all CTE
references to the parsing stage. More exactly, references to CTEs occurring
in a query are resolved right after parsing of the query has finished. After
resolution, any CTE reference is marked as a reference to a derived table,
so it is excluded from the hash table created for pre-locking the used base
tables and views when the first call of a stored procedure / function is
processed.
This solution required recursive calls of the parser. The function
THD::sql_parser() has been added specifically for recursive invocations of
the parser.
2021-05-25 00:43:03 -07:00
1864a8ea93 Merge 10.2 into 10.3 2021-05-24 09:38:49 +03:00
43c9fcefc0 MDEV-23886 Reusing CTE inside a function fails with table doesn't exist
In the code that existed just before this patch, binding of a table reference
to the specification of the corresponding CTE happens in the function
open_and_process_table(). If the table reference is not the first in the
query, the specification is cloned in the same way as the specification of
a view is cloned for any reference to the view. This works fine for
standalone queries, but does not work for stored procedures / functions,
for the following reason.

When the first call of a stored procedure / function SP is processed, the
body of SP is parsed. When a query of SP is parsed, the info on each
encountered table reference is put into a TABLE_LIST object linked into
a global chain associated with the query. When parsing of the query is
finished, the basic info on the table references from this chain, except
table references to derived tables and information schema tables, is put
into one hash table associated with SP. When parsing of the body of SP is
finished, this hash table is used to construct TABLE_LIST objects for all
table references mentioned in SP and to link them into the list of such
objects passed to a pre-locking process that calls open_and_process_table()
for each table from the list.

When a TABLE_LIST for a view is encountered, the view is opened and its
specification is parsed. For any table reference occurring in
the specification, a new TABLE_LIST object is created to be included into
the list for pre-locking. After all objects in the pre-locking list have
been looked through, the tables mentioned in the list are locked. Note that
the objects referencing CTEs are just skipped here, as it is impossible to
resolve these references without any info on the context where they occur.

Then the statements from the body of SP are executed one by one.
At the very beginning of the execution of a query, the tables used in the
query are opened, and open_and_process_table() is now called for each table
reference mentioned in the list of TABLE_LIST objects associated with the
query that was built when the query was parsed.

For each table reference, the reference is first checked against the CTE
definitions in whose scope it occurred. If such a definition is found, the
reference is considered resolved, and if this is not the first reference
to the found CTE, the specification of the CTE is re-parsed and the
result of the parsing is added to the parse tree of the query as a
sub-tree. If this sub-tree contains table references to other tables, they
are added to the list of TABLE_LIST objects associated with the query so
that the referenced tables get opened. When the procedure that opens
the tables comes to the TABLE_LIST object created for a non-first
reference to a CTE, it discovers that the referenced table instance is not
locked and reports an error.

Thus, processing non-first table references to a CTE in the same way as
references to views are processed does not work for queries used in stored
procedures / functions. The main problem is that the current
pre-locking mechanism employed for stored procedures / functions does not
allow saving the context in which a CTE reference occurs. It is not trivial
to save the info about the context where a CTE reference occurs, while the
resolution of the table reference cannot be done without this context, and
consequently the specification for the table reference cannot be
determined.

This patch solves the above problem by moving the resolution of all CTE
references to the parsing stage. More exactly, references to CTEs occurring
in a query are resolved right after parsing of the query has finished. After
resolution, any CTE reference is marked as a reference to a derived table,
so it is excluded from the hash table created for pre-locking the used base
tables and views when the first call of a stored procedure / function is
processed.
This solution required recursive calls of the parser. The function
THD::sql_parser() has been added specifically for recursive invocations of
the parser.
2021-05-21 16:00:35 -07:00
744a53806e Fixed hang in concurrent DROP TABLE and BACKUP LOCK BLOCK_DDL
The problem was that tdc_remove_referenced_share() did not take into
account that someone could push things into share->free_tables() even
if there is an MDL_EXCLUSIVE lock on the table.
This can happen if flush_tables() uses the table cache to flush
a non-transactional table to disk.
2021-05-19 22:54:14 +02:00