Commit Graph

938 Commits

Author SHA1 Message Date
Oleksandr Byelkin
fc5772ce17 Merge branch '10.5' into 10.6 2024-08-20 09:11:34 +02:00
Dmitry Shulga
ba5482ffc2 MDEV-34718: Trigger doesn't work correctly with bulk update
Running an UPDATE statement in PS mode with positional parameters
bound to an array of actual values (that is, prepared to be run in
bulk mode) results in incorrect behaviour in the presence of an ON
UPDATE trigger that also executes an UPDATE statement. The same is
true for a DELETE statement in the presence of an ON DELETE trigger.
Typically, the visible effect of such incorrect behaviour is a wrong
number of updated/deleted rows in the target table. Additionally, for
an UPDATE statement, the number of modified rows and the state message
returned by the statement contain wrong information about the number
of modified rows.

The reason for the incorrect number of updated/deleted rows is that
the data structure used for binding positional arguments to their
actual values is stored in THD (this is thd->bulk_param) and reused
when processing every INSERT/UPDATE/DELETE statement. This leads to
the actual values bound to the top-level UPDATE/DELETE statement being
consumed by the DML statements run from the triggers' bodies.

To fix the issue, temporarily reset thd->bulk_param to nullptr before
invoking triggers and restore its value once their execution has
finished.

The second part of the problem, the wrong value of affected rows
reported by the Connector/C API, is caused by the fact that the
diagnostics area is shared between the original DML statement and the
statements invoked by a trigger. This fact must be taken into account
when finalizing the state of the diagnostics area on completion of a
statement.

Important remark: in case the macro DBUG_OFF is defined, a call of the method
  Diagnostics_area::reset_diagnostics_area()
resets the data members
  m_affected_rows, m_statement_warn_count.
The values of these data members of the class Diagnostics_area are used
when sending OK and EOF messages. If a DML statement is executed in PS
bulk mode, such resetting results in wrong affected-rows values being
sent to the client when the DML statement fires triggers. So, reset
these data members only when the current statement being processed
is not run in bulk mode.
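
A minimal sketch of the affected scenario (table and trigger names are
illustrative; the array binding itself happens through the client API,
e.g. an array of values bound to the ? placeholder):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  CREATE TRIGGER t1_bu BEFORE UPDATE ON t1 FOR EACH ROW
    UPDATE t2 SET b = b + 1;   -- this inner DML consumed thd->bulk_param

  -- Prepared and executed with ? bound to an array of values (bulk
  -- mode): before the fix, the trigger's UPDATE consumed the remaining
  -- array elements, so too few rows of t1 were updated.
  UPDATE t1 SET a = ?;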
2024-08-19 12:13:43 +07:00
Marko Mäkelä
91a2192bf2 Merge 10.5 into 10.6 2024-02-07 13:51:03 +02:00
Nikita Malyavin
68c1fbfc17 MDEV-25370 Update for portion changes autoincrement key in bi-temp table
According to the standard, the autoincrement column (i.e. the *identity
column*) should be advanced by each insert implicitly made by
UPDATE/DELETE ... FOR PORTION.

This is very inconvenient in several notable cases. Consider a
WITHOUT OVERLAPS key with an autoinc column:
id int auto_increment, unique(id, p without overlaps)

An UPDATE or DELETE with FOR PORTION creates the expectation that id
will remain unchanged in such a case.

The standard's IDENTITY resembles MariaDB's AUTO_INCREMENT, but the
generation rules differ in many ways. For example, there is also the
notion of an autoincrement index, which is bound to the autoincrement
field.

We will define our own generation rule for the PORTION OF operations
involving AUTO_INCREMENT:
* If an autoincrement index contains WITHOUT OVERLAPS specification, then
a new value should not be generated, otherwise it should.

Apart from WITHOUT OVERLAPS there is also another notable case,
referred to by the reporter: a unique key that has an autoincrement
column and a field from the period specification:
  id int auto_increment, unique(id, s), period for p(s, e)

For this case, no exception is made, and the autoincrement rules
proceed according to the standard (i.e. the value is advanced on
implicit inserts).
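
A sketch of the two cases under this rule (illustrative tables, with an
extra data column x so the UPDATE has something to set):

  -- Autoinc index has WITHOUT OVERLAPS: id is NOT advanced on the
  -- implicit inserts made by FOR PORTION operations.
  CREATE TABLE t1 (
    id INT AUTO_INCREMENT,
    x INT,
    s DATE, e DATE,
    PERIOD FOR p(s, e),
    UNIQUE(id, p WITHOUT OVERLAPS)
  );
  INSERT INTO t1 (x, s, e) VALUES (0, '2020-01-01', '2020-12-31');
  UPDATE t1 FOR PORTION OF p FROM '2020-03-01' TO '2020-06-01'
         SET x = 1;  -- the split rows keep id = 1

  -- Unique key has a period field but no WITHOUT OVERLAPS: the value
  -- is advanced on implicit inserts, as the standard requires.
  CREATE TABLE t2 (
    id INT AUTO_INCREMENT,
    x INT,
    s DATE, e DATE,
    PERIOD FOR p(s, e),
    UNIQUE(id, s)
  );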
2024-01-31 16:03:38 +01:00
Sergei Golubchik
e95bba9c58 Merge branch '10.5' into 10.6 2023-12-17 11:20:43 +01:00
Alexander Barkov
4ced4898fd MDEV-32958 Unusable key notes do not get reported for some operations
Enable unusable key notes for non-equality predicates:
   <, <=, >=, >, BETWEEN, IN, LIKE

Note, in some scenarios it displays duplicate notes, e.g.
for queries with ORDER BY:

  SELECT * FROM t1
  WHERE    indexed_string_column >= 10
  ORDER BY indexed_string_column
  LIMIT 5;

This should be tolerable. Getting rid of the duplicate note
completely would need a much more complex patch, which is not
desirable in 10.6.

Details:

- Changing RANGE_OPT_PARAM::note_unusable_keys from bool
  to a new data type Item_func::Bitmap, so the caller can
  choose with better granularity which predicates
  should raise unusable key notes inside the range optimizer:
    a. all predicates (=, <=>, <, <=, >=, >, BETWEEN, IN, LIKE)
    b. all predicates except equality (=, <=>)
    c. none of the predicates

  "b." is needed because in some scenarios equality predicates (=, <=>)
  send unusable key notes at an earlier stage, before the range optimizer,
  during update_ref_and_keys(). Calling the range optimizer with
  "all predicates" would produce duplicate notes for = and <=> in such cases.

- Fixing get_quick_record_count() to call the range optimizer
  with "all predicates except equality" instead of "none of the predicates".
  Before this change the range optimizer suppressed all notes for
  non-equality predicates: <, <=, >=, >, BETWEEN, IN, LIKE.
  This actually fixes the reported problem.

- Fixing JOIN::make_range_rowid_filters() to call the range optimizer
  with "all predicates except equality" instead of "all predicates".
  Before this change the range optimizer produced duplicate notes
  for = and <=> during a rowid_filter optimization.

- Cleanup:
  Adding the op_collation argument to Field::raise_note_cannot_use_key_part()
  and displaying the operation collation rather than the argument collation
  in the unusable key note. This is important for operations with more than
  two arguments: BETWEEN and IN, e.g.:

    SELECT * FROM t1
    WHERE column_utf8mb3_general_ci
          BETWEEN 'a' AND 'b' COLLATE utf8mb3_unicode_ci;

    SELECT * FROM t1
    WHERE column_utf8mb3_general_ci
          IN ('a', 'b' COLLATE utf8mb3_unicode_ci);

    The note for 'a' now prints utf8mb3_unicode_ci as the collation,
    which is the collation of the entire operation:

      Cannot use key key1 part[0] for lookup:
      "`column_utf8mb3_general_ci`" of collation `utf8mb3_general_ci` >=
      "'a'" of collation `utf8mb3_unicode_ci`

    Before this change it printed the collation of 'a',
    so the note was confusing:

      Cannot use key key1 part[0] for lookup:
      "`column_utf8mb3_general_ci`" of collation `utf8mb3_general_ci` >=
      "'a'" of collation `utf8mb3_general_ci`"
2023-12-11 08:55:27 +04:00
Rex
c6a9fd7904 MDEV-32212 DELETE with ORDER BY and semijoin optimization causing crash
Statements affected by this bug are DELETE statements that have all
of these conditions:

1) single-table delete syntax
2) an IN (subquery) predicate
3) semi-join optimization enabled
4) an ORDER BY clause.

Semijoin optimization on an innocent-looking query, such as

DELETE FROM t1 WHERE c1 IN (select c2 from t2) ORDER BY c1;

turns it from a single-table delete into a multi-table delete.

During multi_delete::initialize_tables for the top-level join object,
a table is initialized without the keep_current_rowid flag, which is
needed to position a handler for removal of the correct row after the
filesort structure has been built.

Fix provided by Monty (monty@mariadb.com)
Pushed into 10.5 at Monty's request.
Applicable to 10.6, 10.11, 11.0.
OK'd by Monty in slack:#askmonty 2023-12-01
2023-12-01 09:29:36 +12:00
Sergei Petrunia
691e964d23 MDEV-31764: ASAN use-after-poison in trace_engine_stats in ANALYZE JSON
Do not attempt to produce "r_engine_stats" on the temporary (=work) tables.
These tables may be
- re-created during the query execution
- freed during the query execution (This is done e.g. in JOIN::cleanup(),
  before we produce ANALYZE FORMAT=JSON output).

- Also, make the save_explain_data() functions not set handler_for_stats
  to point to handler objects that do not have handler->handler_stats set.
  If the storage engine is not collecting handler_stats, it will not have
  them when we're producing ANALYZE FORMAT=JSON output, either.
2023-08-01 22:32:54 +03:00
Sergei Petrunia
6e484c3bd9 MDEV-31577: Make ANALYZE FORMAT=JSON print innodb stats
ANALYZE FORMAT=JSON output now includes table.r_engine_stats which
has the engine statistics. Only non-zero members are printed.

Internally: the EXPLAIN data structures Explain_table_access and
Explain_update now have a handler* handler_for_stats pointer.
It is used to read statistics from handler_for_stats->handler_stats.

The following applies only to 10.9+; the backport doesn't use it:

Explain data structures exist after the tables are closed. We avoid
walking invalid pointers using this:
- SQL layer calls Explain_query::notify_tables_are_closed() before
  closing tables.
- After that call, printing of JSON output is disabled. Non-JSON output
  can be printed but we don't access handler_for_stats when doing that.
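
For example (hypothetical InnoDB table t1; the member names inside
r_engine_stats are illustrative):

  ANALYZE FORMAT=JSON SELECT * FROM t1;
  -- The per-table JSON object now contains a section like:
  --   "r_engine_stats": {
  --     "pages_accessed": 10,
  --     "pages_read_count": 2
  --   }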
2023-07-21 16:50:11 +03:00
Monty
99bd226059 MDEV-31558 Add InnoDB engine information to the slow query log
The new statistics are enabled by adding the "engine", "innodb" or "full"
option to --log-slow-verbosity.

Example output:

 # Pages_accessed: 184  Pages_read: 95  Pages_updated: 0  Old_rows_read: 1
 # Pages_read_time: 17.0204  Engine_time: 248.1297

Pages_read_time is the time spent doing physical reads inside a storage
engine. (Writes cannot be tracked as these are usually done in the
background.) Engine_time is the time spent inside the storage engine for
the full duration of the read/write/update calls. It uses the same code
as 'analyze statement' for calculating the time spent.

The engine statistics are collected with a generic interface that should
be easy for any engine to use. It can also easily be extended to provide
even more statistics.

Currently only InnoDB has counters for Pages_% and Undo_% status.
Engine_time works for all engines.

Implementation details:

The class ha_handler_stats holds all engine stats. This class is included
in the handler and THD classes.
While a query is running, all statistics are updated in the handler. In
close_thread_tables() the statistics are added to the THD.

handler::handler_stats is a pointer to where statistics should be
collected. This is set to point to handler::active_handler_stats if
stats are requested. If not, it is set to 0.
handler_stats also has an element, 'active', that is 1 if stats are
requested. This allows engines to avoid doing any 'if's while updating
the statistics.

Cloned or partition tables have the pointer set to the base table if
stats are requested.

There is a small performance impact when using --log-slow-verbosity=engine:
- All engine calls in 'select' will be timed.
- IO calls for InnoDB reads will be timed.
- Counters are incremented on local variables and accesses are inlined,
  so these should have very little impact.
- Statistics have to be reset for each statement for the THD and each
  used handler. This is only 40 bytes, which should be negligible.
- For partition tables we have to loop over all partitions to update
  the handler_status as part of table_init(). This can be optimized in
  the future to only be done when log-slow-verbosity changes. For this
  to work we have to update handler_status for all opened partitions and
  also for all partitions opened in the future.

Other things:
- Added options 'engine' and 'full' to log-slow-verbosity.
- Some of the new files in the test suite come from Percona Server,
  which has similar status information.
- buf_page_optimistic_get(): Do not increment any counter, since we are
  only validating a pointer, not performing any buf_pool.page_hash lookup.
- Added THD argument to save_explain_data_intern().
- Switched arguments for save_explain_.*_data() to have
  always THD first (generates better code as other functions also have THD
  first).
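
A minimal way to try it (a sketch; the session variable mirrors the
--log-slow-verbosity option, and the table name is illustrative):

  SET GLOBAL slow_query_log=1;
  SET SESSION log_slow_verbosity='engine';
  SET SESSION long_query_time=0;    -- log everything for the demo
  SELECT COUNT(*) FROM some_table;
  -- The slow log entry for this query now includes the Pages_% and
  -- Engine_time fields shown in the example output above.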
2023-07-07 12:53:18 +03:00
Marko Mäkelä
5bada1246d Merge 10.5 into 10.6 2023-04-11 16:15:19 +03:00
Oleksandr Byelkin
ac5a534a4c Merge remote-tracking branch '10.4' into 10.5 2023-03-31 21:32:41 +02:00
Igor Babaev
f33fc2fae5 MDEV-30539 EXPLAIN EXTENDED: no message with queries for DML statements
EXPLAIN EXTENDED for an UPDATE/DELETE/INSERT/REPLACE statement did not
produce the warning containing the text representation of the query
obtained after the optimization phase. Such a warning was produced for
SELECT statements, but not for DML statements.
The patch fixes this defect of EXPLAIN EXTENDED for DML statements.
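
For example (hypothetical tables t1(a) and t2(b)):

  EXPLAIN EXTENDED DELETE FROM t1 WHERE a IN (SELECT b FROM t2);
  SHOW WARNINGS;  -- with this patch, also prints a Note containing the
                  -- query text after the optimization phase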
2023-03-25 12:36:59 -07:00
Marko Mäkelä
e55397a46d Merge 10.5 into 10.6 2022-12-05 18:04:23 +02:00
Jan Lindström
4eb8e51c26 Merge 10.4 into 10.5 2022-11-30 13:10:52 +02:00
lrf141
da03d8d99f MDEV-19190 Assertion ...auto_inc_initialized failed in get_auto_increment
This is a DELETE-only case. Normally this statement doesn't make inserts,
but DELETE ... FOR PORTION changes that. UPDATE and INSERT initialize
autoinc by calling handler::info(HA_STATUS_AUTO). Also, MyISAM and InnoDB
can lazily initialize it in their update_create_info overrides.

The solution is to initialize autoinc during delete preparation,
if period (DELETE FOR PORTION) is specified.

The initial work was done by Kento Takeuchi in his PR #2048; however,
this commit also includes a few technical modifications by
Nikita Malyavin.
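
A sketch of the affected scenario (illustrative table):

  CREATE TABLE t1 (
    id INT AUTO_INCREMENT PRIMARY KEY,
    s DATE, e DATE,
    PERIOD FOR p(s, e)
  );
  INSERT INTO t1 (s, e) VALUES ('2020-01-01', '2020-12-31');
  -- Cutting a portion out of the middle implicitly inserts the
  -- remaining parts as new rows; before the fix, autoinc was not
  -- initialized on this path and the assertion failed.
  DELETE FROM t1 FOR PORTION OF p FROM '2020-03-01' TO '2020-06-01';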
2022-11-24 02:05:53 +03:00
Oleksandr Byelkin
ee620a7416 Merge branch '10.5' into 10.6 2022-08-04 16:58:42 +02:00
Oleksandr Byelkin
1e71ea806b Merge branch '10.4' into 10.5 2022-08-04 08:30:03 +02:00
Oleksandr Byelkin
e509065247 Merge branch '10.3' into 10.4 2022-08-03 19:51:44 +02:00
Thirunarayanan Balathandayuthapani
f9ec9b6abb MDEV-27282 InnoDB: Failing assertion: !query->intersection
- query->intersection fails to get freed if the query exceeds
innodb_ft_result_cache_limit

- errors from init_ftfuncs were not propagated by the DELETE command

This is taken from percona/percona-server@ef2c0bcb9a
2022-08-03 20:35:12 +05:30
Marko Mäkelä
4ef44cc2f9 Merge 10.5 into 10.6 2022-03-15 14:49:24 +02:00
Marko Mäkelä
e1246775a9 Merge 10.4 into 10.5 2022-03-15 08:32:28 +02:00
Marko Mäkelä
9c6135e81f Merge 10.3 into 10.4 2022-03-15 08:10:35 +02:00
Sergei Golubchik
6789f2cfab MDEV-18304 sql_safe_updates does not work with OR clauses
Not every index-using plan sets bits in table->quick_keys.
QUICK_ROR_INTERSECT_SELECT, for example, doesn't.

Use the fact that select->quick is set instead.

Also allow EXPLAIN to work.
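
A sketch of a statement that was wrongly rejected (illustrative table;
the OR leads to an index-merge-style plan that doesn't set
table->quick_keys):

  CREATE TABLE t1 (a INT, b INT, c INT, KEY(a), KEY(b));
  SET SESSION sql_safe_updates=1;
  -- Every disjunct is resolved through an index, yet before the fix
  -- safe-updates mode rejected the statement:
  UPDATE t1 SET c = 0 WHERE a = 1 OR b = 2;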
2022-03-12 19:13:17 +01:00
Marko Mäkelä
d95361107c Merge 10.5 into 10.6 2021-09-24 14:38:52 +03:00
Marko Mäkelä
7e2b42324c Merge 10.4 into 10.5 2021-09-24 08:42:23 +03:00
Marko Mäkelä
9024498e88 Merge 10.3 into 10.4 2021-09-22 18:26:54 +03:00
Daniel Ye
9fc1ef932f MDEV-26545 Spider does not correctly handle UDF and stored function in where conds
- Handle stored function conditions correctly, with the same logic as with UDFs.
- When running queries on the Spider SE, by default we do not push down WHERE conditions containing usage of UDFs/stored functions to remote data nodes, unless the user demands it (by setting spider_use_pushdown_udf).
- Disable direct update/delete when a UDF condition is skipped.
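
For reference, the opt-in named above looks like this (a sketch; see
the Spider documentation for the exact scope and default):

  -- Allow Spider to push WHERE conditions that use UDFs or stored
  -- functions down to the remote data nodes:
  SET spider_use_pushdown_udf=1;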
2021-09-22 18:55:05 +09:00
Vladislav Vaintroub
e7f4daf88c merge 10.5 to 10.6 2021-07-16 22:12:09 +02:00
Oleksandr Byelkin
a7d880f0b0 MDEV-21916: COM_STMT_BULK_EXECUTE with RETURNING insert wrong values
The problem is that array binding uses the net buffer to read parameters
for each execution, while each execution with RETURNING writes into the
same buffer.

The solution is to allocate a new net buffer, to avoid changing the
buffer we are reading from.
2021-07-15 16:28:13 +02:00
Dmitry Shulga
5478ca779a MDEV-25576: The statement EXPLAIN running as regular statement and as prepared statement produces different results for UPDATE with subquery
10.6 cleanup
2021-06-17 19:30:24 +02:00
Marko Mäkelä
a722ee88f3 Merge 10.5 into 10.6 2021-06-01 11:39:38 +03:00
Marko Mäkelä
9c7a456a92 Merge 10.4 into 10.5 2021-06-01 10:38:09 +03:00
Marko Mäkelä
77d8da57d7 Merge 10.3 into 10.4 2021-06-01 09:14:59 +03:00
Marko Mäkelä
950a220060 Merge 10.2 into 10.3 2021-06-01 08:40:59 +03:00
Dmitry Shulga
91bde0fb67 MDEV-25576: The statement EXPLAIN running as regular statement and as prepared statement produces different results for UPDATE with subquery
Both the EXPLAIN and EXPLAIN EXTENDED statements produce different
result sets when run as a regular statement and in PS mode for
UPDATE/DELETE statements with a subquery.

The use case below reproduces the issue:
MariaDB [test]> CREATE TABLE t1 (c1 INT KEY) ENGINE=MyISAM;
Query OK, 0 rows affected (0,128 sec)

MariaDB [test]> CREATE TABLE t2 (c2 INT) ENGINE=MyISAM;
Query OK, 0 rows affected (0,023 sec)

MariaDB [test]> CREATE TABLE t3 (c3 INT) ENGINE=MyISAM;
Query OK, 0 rows affected (0,021 sec)

MariaDB [test]> EXPLAIN EXTENDED UPDATE t3 SET c3 =
    -> ( SELECT COUNT(d1.c1) FROM ( SELECT a11.c1 FROM t1 AS a11
    -> STRAIGHT_JOIN t2 AS a21 ON a21.c2 = a11.c1 JOIN t1 AS a12
    -> ON a12.c1 = a11.c1 ) d1 );
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
| id   | select_type | table | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra                          |
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
|    1 | PRIMARY     | t3    | ALL  | NULL          | NULL | NULL    | NULL |    0 |   100.00 |                                |
|    2 | SUBQUERY    | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL |     NULL | Impossible WHERE noticed after reading const tables
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
2 rows in set (0,002 sec)

MariaDB [test]> PREPARE stmt FROM
    -> EXPLAIN EXTENDED UPDATE t3 SET c3 =
    -> ( SELECT COUNT(d1.c1) FROM ( SELECT a11.c1 FROM t1 AS a11
    -> STRAIGHT_JOIN t2 AS a21 ON a21.c2 = a11.c1 JOIN t1 AS a12
    -> ON a12.c1 = a11.c1 ) d1 );
Query OK, 0 rows affected (0,000 sec)
Statement prepared

MariaDB [test]>  EXECUTE stmt;
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
| id   | select_type | table | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra                          |
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
|    1 | PRIMARY     | t3    | ALL  | NULL          | NULL | NULL    | NULL |    0 |   100.00 |                                |
|    2 | SUBQUERY    | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL |     NULL | no matching row in const table |
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
2 rows in set (0,000 sec)

The reason different result sets are produced is that on execution
of the statement 'EXECUTE stmt' the flag SELECT_DESCRIBE is not set
in the data member SELECT_LEX::options for instances of SELECT_LEX that
correspond to subqueries used in the UPDATE/DELETE statements.

Initially, these flags were set on parsing the statement
  PREPARE stmt FROM "EXPLAIN EXTENDED UPDATE t3 SET ..."
but later they were reset before starting the real execution of
the parsed query while handling the statement 'EXECUTE stmt'.

So, to fix the issue the functions mysql_update()/mysql_delete()
have been modified to set the flag SELECT_DESCRIBE forcibly
in the data member SELECT_LEX::options for the primary SELECT_LEX
of the UPDATE/DELETE statement.
2021-05-30 17:31:55 +07:00
Monty
be093c81a7 MDEV-24089 support oracle syntax: rownum
For SELECT, the ROWNUM() function is mapped to JOIN->accepted_rows,
which is incremented for each accepted row.
For filesort, UPDATE, INSERT, DELETE and LOAD DATA, we map ROWNUM() to
internal variables incremented when the table is changed.
The connection between the row counter and Item_func_rownum is done
in sql_select.cc::fix_items_after_optimize() and
sql_insert.cc::fix_rownum_pointers()

When ROWNUM() is used anywhere in a query, the optimization to ignore
ORDER BY in subqueries is disabled. This was done to get the following
common Oracle query to work:
select * from (select * from t1 order by a desc) as t where rownum() <= 2;
MDEV-3926 "Wrong result with GROUP BY ... WITH ROLLUP" contains a discussion
about this topic.

LIMIT optimization is enabled when a top-level WHERE clause compares
ROWNUM() with a numerical constant using any of the following expressions:
- ROWNUM() < #
- ROWNUM() <= #
- ROWNUM() = 1
ROWNUM() can also be the right argument to the comparison function.

LIMIT optimization is done in two cases:
- For the current sub query when the ROWNUM comparison is done on the top
  level:
  SELECT * from t1 WHERE rownum() <= 2 AND t1.a > 0
- For an inner sub query, when the upper level has only a ROWNUM comparison
  in the WHERE clause:
  SELECT * from (select * from t1) as t WHERE rownum() <= 2

In Oracle mode, one can also use ROWNUM without parentheses.

Other things:
- Fixed a bug where the optimizer tried to optimize away subqueries
  with RAND_TABLE_BIT set (non-deterministic queries). Now these
  subqueries will not be converted to joins. This bug fix was also
  needed to get rownum() working inside subqueries.
- In remove_const(), removed the setting of simple_order to FALSE if
  ROLLUP is used. This code was disabled a long time ago because of a
  wrong assignment in the following code. Instead we set simple_order
  to false if RAND_TABLE_BIT was used in the SELECT list. This ensures
  that we don't delete ORDER BY if the result set is not deterministic,
  as in 'SELECT RAND() AS 'r' FROM t1 ORDER BY r';
- Updated parameters for Sort_param::init_for_filesort() to be able
  to provide filesort with information where the number of accepted
  rows should be stored
- Reordered fields in class Filesort to optimize storage layout
- Added a new error message to tell that a function can't be used in HAVING
- Added field 'with_rownum' to THD to mark that ROWNUM() is used in the
  query.

Co-author: Oleksandr Byelkin <sanja@mariadb.com>
           LIMIT optimization for sub query
2021-05-19 22:54:11 +02:00
Marko Mäkelä
ed4b2b3f95 Merge 10.5 into 10.6 2021-04-26 08:40:36 +03:00
Marko Mäkelä
4725792bf3 Merge 10.4 into 10.5 2021-04-25 12:04:45 +03:00
Marko Mäkelä
e4394cc547 Merge 10.3 into 10.4 2021-04-25 10:20:57 +03:00
Igor Babaev
2c9bf0ae87 This commit adds the same call of st_select_lex::set_unique_exclude() that
complemented the fix for MDEV-24823 in 10.2. As it is the only call of
this function in 10.3, the commit also adds the code of the function.
2021-04-24 16:53:24 -07:00
Igor Babaev
1288dfffe7 This patch complements the patch for MDEV-24823. 2021-04-23 14:32:59 -07:00
Igor Babaev
e3a25793be MDEV-24823 Crash with invalid multi-table update of view in 2nd execution of SP
Before this patch, mergeable derived tables / views used in a multi-table
update / delete were merged before the preparation stage.
When the merge of a derived table / view is performed, the ON expression
attached to it is fixed and ANDed with the WHERE condition of the select S
containing this derived table / view. This happens after the specification
of the derived table / view has been merged into S. If the ON expression
refers to a non-existing field, an error is reported and some other
mergeable derived tables / views remain unmerged. That is not a problem if
the multi-table update / delete statement is standalone. Yet if it is used
in a stored procedure, the select with incompletely merged derived
tables / views may cause a problem for the second call of the procedure.
This does not happen for select queries using derived tables / views,
because in this case their specifications are merged after the preparation
stage, at which all ON expressions are fixed.
This patch makes sure that merging of the derived tables / views used in a
multi-table update / delete statement is performed after the preparation
stage.
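
A sketch of the failing pattern (illustrative objects; the invalid
column reference makes the first call fail, leaving some merges
undone):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  CREATE VIEW v1 AS SELECT a FROM t1;
  CREATE PROCEDURE p1()
    UPDATE v1 JOIN t2 ON v1.no_such_column = t2.b SET v1.a = 1;
  CALL p1();  -- reports an unknown column error
  CALL p1();  -- crashed on the second execution before the fix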

Approved by Oleksandr Byelkin <sanja@mariadb.com>
2021-04-22 20:02:08 -07:00
Igor Babaev
b3b5d57e78 MDEV-24823 Crash with invalid multi-table update of view in 2nd execution of SP
Before this patch, mergeable derived tables / views used in a multi-table
update / delete were merged before the preparation stage.
When the merge of a derived table / view is performed, the ON expression
attached to it is fixed and ANDed with the WHERE condition of the select S
containing this derived table / view. This happens after the specification
of the derived table / view has been merged into S. If the ON expression
refers to a non-existing field, an error is reported and some other
mergeable derived tables / views remain unmerged. That is not a problem if
the multi-table update / delete statement is standalone. Yet if it is used
in a stored procedure, the select with incompletely merged derived
tables / views may cause a problem for the second call of the procedure.
This does not happen for select queries using derived tables / views,
because in this case their specifications are merged after the preparation
stage, at which all ON expressions are fixed.
This patch makes sure that merging of the derived tables / views used in a
multi-table update / delete statement is performed after the preparation
stage.

Approved by Oleksandr Byelkin <sanja@mariadb.com>
2021-04-22 13:56:50 -07:00
Marko Mäkelä
92abdcca5a Merge 10.5 into 10.6 2021-01-07 09:08:09 +02:00
Oleksandr Byelkin
02e7bff882 Merge commit '10.4' into 10.5 2021-01-06 10:53:00 +01:00
Marko Mäkelä
0aa02567dd Merge 10.3 into 10.4 2020-12-23 14:52:59 +02:00
Aleksey Midenkov
932ec586aa MDEV-23644 Assertion on evaluating foreign referential action for self-reference in system versioned table
First part of the fix (row0mysql.cc) addresses external columns when adding history
row on referential action. The full data must be retrieved before the
row is inserted.

Second part of the fix (the rest) avoids duplicate primary key error between
the history row generated on referential action and the history row
generated by SQL command. Both command and referential action can
happen on same table since foreign key can be self-reference (parent
and child tables are same). Moreover, the self-reference can refer
multiple rows when the key is non-unique. In such case history is
generated by referential action occured on first row but processed all
rows by a matched key. The second round is when the next row is
processed by a command but history already exists. In such case we
check TRX_ID of existing history row and if it is the same we assume
the above situation and skip adding one more history row or failing
the command.
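
A sketch of the kind of scenario this concerns (illustrative table; a
self-referencing foreign key on a system-versioned table):

  CREATE TABLE t1 (
    id INT PRIMARY KEY,
    parent_id INT,
    FOREIGN KEY (parent_id) REFERENCES t1 (id) ON DELETE CASCADE
  ) WITH SYSTEM VERSIONING;

  INSERT INTO t1 VALUES (1, NULL), (2, 1);
  -- Row 2 is both matched by the command and cascaded from the delete
  -- of row 1, so both paths try to generate a history row for it.
  DELETE FROM t1;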
2020-12-22 03:33:53 +03:00
Marko Mäkelä
a13fac9eee Merge 10.5 into 10.6 2020-12-03 08:12:47 +02:00
Marko Mäkelä
6a1e655cb0 Merge 10.4 into 10.5 2020-12-02 18:29:49 +02:00