Also expand vcol field index coverings to include indexes covering all
the fields in the expression. The reasoning goes as follows: let f(c1,
c2, ..., cn) be a function applied to columns c1, c2, ..., cn; if
f(...) is covered by an index, then so is a vcol vc whose expression is
f(...).
For example, suppose t.vf = t.c1 + t.c2, and t has three indexes: (vf),
(c1, c2) and (c1).
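For illustration, a minimal schema matching this example might look
like (a sketch; column and index names are taken from the example):
CREATE TABLE t (
  c1 INT,
  c2 INT,
  vf INT AS (c1 + c2) VIRTUAL,
  KEY (vf),
  KEY (c1, c2),
  KEY (c1)
);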
Before this change, vf's index covering is a singleton {(vf)}. Let's call
that the "conventional" index covering.
After this change vf's index covering is now {(vf), (c1, c2)}, since
(c1, c2) covers both c1 and c2. Let's call (c1, c2) in this case the
"extra" covering.
With the coverings updated, when an index in the "extra" covering is
chosen for keyread, the vcol also needs to be calculated. In this case
we mark the vcol in the table read_set, and ensure it is computed.
With these changes, we see various improvements, including going from
full table scan + filesort to full index scan + filesort when ORDER BY
uses an indexed vcol (here vc = c + 1 is a vcol and both c and vc are
indexed):
explain select c + 1 from t order by vc;
id select_type table type possible_keys key key_len ref rows Extra
-1 SIMPLE t ALL NULL NULL NULL NULL 10000 Using filesort
+1 SIMPLE t index NULL c 5 NULL 10000 Using index; Using filesort
The substitutions are followed by updates to all_fields, which includes
a copy of the ORDER BY/GROUP BY item pointers, as well as corresponding
updates to ref_pointer_array, so that all_fields and ref_pointer_array
remain in sync.
Another, related change is the recomputation of the table index
covering on substitutions. It not only reflects the correct table index
covering after the substitutions, but also improves execution where the
vcol index can be chosen, such as this example (here vc = c + 1 and vc
is the only index in the table), going from full table scan + filesort
to full index scan:
select vc from t order by c + 1;
We do this in SELECT as well as in single-table DELETE/UPDATE.
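For reference, a minimal schema matching this second example might be
(a sketch; names are taken from the text above):
CREATE TABLE t (c INT, vc INT AS (c + 1) VIRTUAL, KEY (vc));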
When processing queries like
INSERT INTO t1 (..) SELECT .. FROM t1, t2 ...,
there is a single query block (i.e., a single SELECT_LEX) for both INSERT and
SELECT parts. During hints resolution, when hints are attached to particular
TABLE_LIST's, the search is performed by table name across the whole
query block.
So, if a table mentioned in an optimizer hint is present in the INSERT
part, the hint is attached to that table. This is obviously wrong, as
optimizer hints are supposed to affect only the SELECT part of an
INSERT..SELECT statement.
This commit disables attaching hints to tables in the INSERT part and
fixes some other bugs related to INSERT..SELECT statement processing.
The MAX_EXECUTION_TIME hint places a limit N (a timeout value in
milliseconds) on how long a statement is permitted to execute before
the server terminates it.
Syntax:
SELECT /*+ MAX_EXECUTION_TIME(milliseconds) */ ...
Only top-level SELECT statements support the hint.
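For example (a usage sketch; the table name is an assumption):
SELECT /*+ MAX_EXECUTION_TIME(1000) */ * FROM t1;
This statement is terminated by the server if it runs for longer than
roughly one second.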
MDEV-36563 Assertion `!mysql_bin_log.is_open()' failed in
THD::mark_tmp_table_as_free_for_reuse
The purpose of this commit is to ensure that creation of and changes to
temporary tables are properly and predictably logged to the binary
log. It also fixes some bugs where ROW logging was used in MIXED mode
when STATEMENT would be a better (and expected) choice.
In this commit message, STATEMENT stands for logging to the binary log
in STATEMENT format, MIXED stands for MIXED binlog format, and ROW for
ROW binlog format.
New rules for logging of temporary tables
- CREATE of temporary tables is now by default binlogged only if
STATEMENT binlog format is used. If it is binlogged, 1 is stored in
TABLE_SHARE->table_creation_was_logged. The user can change this
behavior by setting create_temporary_table_binlog_formats to
MIXED,STATEMENT, in which case the create is logged in statement
format in MIXED mode as well (as before).
- Changes to temporary tables are binlogged if and only if the CREATE
was logged. The logging happens under STATEMENT or MIXED. If
binlog_format=ROW, temporary table changes are not binlogged. A
temporary table that is changed under ROW is marked as 'not up to
date in binlog' and no future row changes are logged. Any future
statement that uses such a temporary table will force row logging of
the other tables it uses.
- DROP TEMPORARY is binlogged only if the CREATE was binlogged.
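For example, the pre-change MIXED behavior can be restored with (a
sketch using the variable named above):
SET GLOBAL create_temporary_table_binlog_formats='MIXED,STATEMENT';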
Changes done:
- Row logging is forced for any statement using temporary tables that
are not up to date in the binary log.
(Before, row logging was forced if the user had any temporary table.)
- If there are any changes to a temporary table that are not binlogged,
the table is marked as not up to date.
- TABLE_SHARE->table_creation_was_logged has a new definition for
temporary tables:
0 Table creation was not logged to the binary log.
1 Table creation was logged to the binary log and the table is up to date.
2 Table creation was logged to the binary log but some changes were
not logged to the binary log.
'Not up to date in the binary log' is defined as value 0 or 2.
- If a multi-table-update or multi-table-delete fails then
all updated temporary tables are marked as not up to date.
- Enforce row logging if the query is using temporary tables
that are not up to date.
(Before, row logging was enforced if the user had any
temporary tables.)
- When dropping temporary tables, use IF EXISTS. This ensures
that the slave will not stop if it crashed and lost its
temporary tables.
- Removed comment and version from the DROP /*!4000 TEMPORARY.. statement
generated when a connection that has open temporary tables closes. Added
'generated by server' at the end of the DROP.
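The generated statement in the binary log would thus look roughly like
(a sketch; the table name is an assumption):
DROP TEMPORARY TABLE IF EXISTS `tmp` /* generated by server */;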
Bugs fixed:
- When using temporary tables with commands that forced row-based
logging, like INSERT INTO temporary_table VALUES (UUID()), the
statement was never logged, which caused the temporary table to become
inconsistent on master and slave.
- The binlog format used is now clearly defined. It depends only on
the current binlog_format and the tables used.
Before, it depended on whether the user had ANY temporary tables and
on the state of 'current_stmt_binlog_format' set by previous queries.
This also caused temporary tables to be logged to the binary log in
some cases.
- CREATE TABLE t1 LIKE not_logged_temporary_table caused replication
to stop.
- Renames of non-binlogged temporary tables were written to the binary
log, which caused replication to stop.
Changes in behavior:
- By default create_temporary_table_binlog_formats=STATEMENT, which
means that CREATE TEMPORARY is not logged to the binary log under MIXED
binary logging. This can be changed by setting
create_temporary_table_binlog_formats to MIXED,STATEMENT.
- Using temporary tables that were not logged to the binary log causes
any query using them to update other tables to be logged in ROW
format. Before, all queries were logged in ROW format if the user had
any temporary tables, even if they were not used by the query.
- Generated DROP TEMPORARY TABLE is now always using IF EXISTS and
has a "generated by server" comment in the binary log.
The consequence of the above is that manipulating a lot of rows
through temporary tables will by default be slower in mixed mode.
For example:
BEGIN;
CREATE TEMPORARY TABLE tmp AS SELECT a, b, c FROM
large_table1 JOIN large_table2 ON ...;
INSERT INTO other_table SELECT b, c FROM tmp WHERE a < 100;
DROP TEMPORARY TABLE tmp;
COMMIT;
By default this will create a huge entry in the binary log, compared
to just a few hundred bytes in statement mode. However, the change in
this commit makes usage of temporary tables more reliable and
predictable and is thus worth it. Using statement mode or setting
create_temporary_table_binlog_formats can avoid this issue.
The problem is with window functions, which require their own sorting,
but that is impossible because the table is updated while the select is
in progress. That is, multi-update queries do not use temporary tables
to store updated rows and substitute them at the end of the query.
Instead, updates are performed directly on the target table, row by
row, during query execution. MariaDB processes updates in a way that
ensures each row is updated only once, even if it matches multiple
conditions in the query. This behavior avoids redundant updates and
does not require intermediate storage in a temporary table.
The detailed cause of the loop invoked by the window function was
explained by Yuchen Pei in the MDEV-31647 comments.
The fix disables window functions for multi-update.
Note that MySQL throws ER_UPDATE_TABLE_USED in that case, which is the
result of check_unique_table(). But this function is not used for
multi-update in MariaDB, and this check cannot be done because some
level of SELECT expressions is allowed (MDEV-13911).
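A hypothetical sketch of the now-disallowed pattern (table and column
names are assumptions):
UPDATE t1, t2
SET t1.a = ROW_NUMBER() OVER (ORDER BY t2.b)
WHERE t1.id = t2.id;
Such a statement is now rejected, since the window function would need
its own sort over a table that is being updated in place.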
Added the capability to create a trigger associated with several
trigger events. For this goal, the syntax of the CREATE TRIGGER
statement was extended to support the syntax structure
{ event [ OR ... ] } for the `trigger_event` clause. Since one trigger
will be able to handle several events, a way must be provided to
determine what kind of event is being handled on execution of the
trigger. For this goal, support of the clauses INSERTING, UPDATING,
DELETING was added by this patch. These clauses can be used inside a
trigger body to detect what kind of trigger action is currently being
processed, using the following boilerplate:
IF INSERTING THEN ...
ELSIF UPDATING THEN ...
ELSIF DELETING THEN ...
In case one of the clauses INSERTING, UPDATING, DELETING specified in
a trigger's body does not match the trigger's event types, the error
ER_INCOMPATIBLE_EVENT_FLAG is emitted.
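For example, a multi-event trigger might look like this (a sketch;
table and column names are assumptions):
CREATE TRIGGER tr1 BEFORE INSERT OR UPDATE ON t1 FOR EACH ROW
BEGIN
  IF INSERTING THEN
    SET NEW.note = 'inserted';
  ELSIF UPDATING THEN
    SET NEW.note = 'updated';
  END IF;
END;
Since this trigger is not defined for the DELETE event, using DELETING
in its body would raise ER_INCOMPATIBLE_EVENT_FLAG.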
After this patch is pushed, one Trigger object can be associated with
several trigger events. It means that the array
Table_triggers_list::triggers
can contain several pointers to the same Trigger object in array members
corresponding to different events. Moreover, support of several trigger
events for the same trigger requires that the data members `next` and
`action_order` of the Trigger class be converted to arrays to store the
related information per trigger event.
The ability to specify the same trigger for different event types makes
it necessary to handle invalid cases on execution of a multi-event
trigger, when the OLD or NEW qualifier doesn't match the current event
type for which the trigger is run. The qualifier OLD should produce
NULL values for the INSERT event, whereas the qualifier NEW should
produce NULL values for the DELETE event.
The placement of calls to optimize_unflattened_subqueries(false) in
MDEV-25008 4b6922a315 reversed this order, causing subquery plans to be
chosen after sending EXPLAIN output in cases where the former order
would have errored out. This causes an assertion failure when
attempting to emit an error after send_eof.
Note that there's no equivalent problem in single table deletion
because Sql_cmd_delete::delete_from_single_table() does not invoke
send_explain().
Avoid an ASAN failure by collecting statistics from Result objects
before cleaning them up. In the related single-table cases, statistics
are maintained directly by the single-table update and delete
functions.
* rpl.rpl_system_versioning_partitions updated for MDEV-32188
* innodb.row_size_error_log_warnings_3 changed error for MDEV-33658
(checks are done in a different order)
This patch includes a few changes to make the code easier to maintain:
- Renamed SQL_I_List::link_in_list to SQL_I_List::insert. link_in_list
was ambiguous as it could refer to a link or to a node
- Removed the field_name local variable in multi_update::initialize_tables
because it is not used when creating the temporary tables
- multi_update changes:
- Moved temp table callocs to init, a more natural location for them, and
moved tables_to_update to a const member variable so we don't recompute it.
- Filter out jtbm tables and tables not in the update map, pushing those
that will be updated into an update_targets container. This simplifies
checks and loops in initialize_tables.
Implementation of this task adds the ability to raise a signal with
SQLSTATE '02TRG' from a BEFORE INSERT/UPDATE/DELETE trigger and handles
this signal as an indicator meaning 'throw away the current row' when
processing the INSERT/UPDATE/DELETE statement. The signal with
SQLSTATE '02TRG' has special meaning only in case it is raised inside
BEFORE triggers; for AFTER triggers this value of SQLSTATE isn't
treated in any special way. In accordance with the SQL standard, the
SQLSTATE class '02' means NO DATA, and sql_errno for this class is set
to the value ER_SIGNAL_NOT_FOUND by the current implementation of
MariaDB server. Implementation of this task assigns the value
ER_SIGNAL_SKIP_ROW_FROM_TRIGGER to sql_errno in Diagnostics_area in
case the signal is raised from a trigger and SQLSTATE has the value
'02TRG'.
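For example, a BEFORE INSERT trigger could skip unwanted rows like
this (a sketch; the table and the skip condition are assumptions):
CREATE TRIGGER skip_rows BEFORE INSERT ON t1 FOR EACH ROW
BEGIN
  IF NEW.a < 0 THEN
    SIGNAL SQLSTATE '02TRG';  -- throw away the current row
  END IF;
END;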
To catch a signal with SQLSTATE '02TRG' and handle it in a special way,
the methods
Table_triggers_list::process_triggers
select_insert::store_values
select_create::store_values
Rows_log_event::process_triggers
and the overloaded function
fill_record_n_invoke_before_triggers
were extended with an extra out parameter returning a flag telling
whether to skip the current values being processed by the
INSERT/UPDATE/DELETE statement. This extra parameter is passed as
nullptr in case of an AFTER trigger; for a BEFORE trigger it points to
a variable that stores a marker for whether to skip the current record
or store it by calling write_record().
(Review input addressed)
After this patch, the optimizer can handle virtual column expressions
in WHERE/ON clauses. If the table has an indexed virtual column:
ALTER TABLE t1
ADD COLUMN vcol INT AS (col1+1),
ADD INDEX idx1(vcol);
and the query uses the exact virtual column expression:
SELECT * FROM t1 WHERE col1+1 <= 100
then the optimizer will be able to use index idx1 for it.
This is achieved by walking the WHERE/ON clauses and replacing
instances of the virtual column expression (like "col1+1" above) with
the virtual column's Item_field (like "vcol"). The latter can be
processed by the optimizer. Replacement is considered (and done) only
in items that are potentially usable by the range optimizer.
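Conceptually, the query above is then treated as if it had been
written as (a sketch of the effective rewritten form):
SELECT * FROM t1 WHERE vcol <= 100;
which makes range access on idx1 possible.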
Added support for the clause `UPDATE OF <columns>` for
BEFORE/AFTER UPDATE triggers. Triggers defined with this clause are
fired and run their actions only in case an UPDATE statement affects
any of the listed columns. An UPDATE statement targeting only columns
not specified in the clause `UPDATE OF <columns>` doesn't result in
the trigger being run.
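For example (a sketch; table and column names are assumptions):
CREATE TRIGGER tr2 BEFORE UPDATE OF c1, c2 ON t1 FOR EACH ROW
  SET NEW.c3 = 'changed';
This trigger fires for UPDATE t1 SET c1 = 1 but not for
UPDATE t1 SET c4 = 1.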
Output of SHOW TRIGGERS isn't affected by this task. Output of
the statement SHOW CREATE TRIGGER shows the clause `UPDATE OF <columns>`
if it was specified on trigger creation.
Tests accompanying this task don't include checks for cooperation of
the statement LOAD DATA and the clause `UPDATE OF <columns>` for
BEFORE/AFTER UPDATE triggers, since the statement LOAD DATA is treated
like the statement INSERT INTO and therefore doesn't fire BEFORE/AFTER
UPDATE triggers.
It's incorrect to zero out table->triggers->extra_null_bitmap
before a statement, because if an insert uses an explicit field list
and omits a field that has no default value, the field should
get NULL implicitly. So extra_null_bitmap should have 1s for all
fields that have no defaults
* create extra_null_bitmap_init and initialize it as above
* copy extra_null_bitmap_init to extra_null_bitmap for inserts
* still zero out extra_null_bitmap for updates/deletes where
all fields definitely have a value
* make not_null_fields_have_null_values() send
ER_NO_DEFAULT_FOR_FIELD for fields with no default and no value;
otherwise creation of a trigger with an empty body would change the
error message
The problems were that:
1) Resources were freed asymmetrically: in normal execution in
send_eof, but in the destructor in case of error.
2) The destructor was not called for result objects in the case of SP.
(So if the last SP execution ended with an error, resources were not
freed on reinit before the next execution (cleanup() is called before
the next execution), and the destructor was also not called due to the
lack of a delete call for the object.)
Result cleanup() was renamed to reset_for_next_ps_execution() to better
reflect its function.
All result methods were revised and the freeing of resources made
symmetric.
The destructor of the result object is now called for SP.
Added previously skipped invalidation in case of error in insert.
Removed the misleading naming of reset(thd) (which could be confused
with reset()).
This bug has the same nature as the issues
MDEV-34718: Trigger doesn't work correctly with bulk update
MDEV-24411: Trigger doesn't work correctly with bulk insert
To fix the issue covering all use cases, resetting thd->bulk_param
temporarily to the value nullptr before invoking triggers and restoring
its original value on finishing execution of a trigger is moved to the
method
Table_triggers_list::process_triggers
which is ultimately invoked for any kind of trigger.
Create templates
thd->alloc<X>(n) to use instead of (X*)thd->alloc(sizeof(X)*n)
and the same for thd->calloc(). By default the type is char,
so the old usage thd->alloc(size) works too.
MDEV-34447 removed the setting of first_cond_optimization to 0 in
update and delete when leaf_tables_saved. This can cause problems when
two ps executions of an update go through different paths, where the
first ps execution goes through single-table update only and the second
ps execution also goes through multi-table update. When this happens,
the first_cond_optimization of the outer query is not set to false
during the first ps execution because optimize() is not called for the
outer query. But the second ps execution will then call optimize() on
the outer query, which with first_cond_optimization==true trips the
2nd-ps mem leak detection.
This is not a problem in higher versions, as both executions go through
multi-table update, possibly due to MDEV-28883.
We fix this problem by restoring the FALSE assignments to
first_cond_optimization.
Single-table UPDATE/DELETE didn't provide an outer_lookup_keys value
for subqueries. This didn't allow making a meaningful choice between
the IN->EXISTS and Materialization strategies for subqueries.
Fix this:
* Make UPDATE/DELETE save Sql_cmd_dml::scanned_rows,
* Then, the subquery's JOIN::choose_subquery_plan() can fetch it from
there for outer_lookup_keys.
Details:
UPDATE/DELETE now calls select_lex->optimize_unflattened_subqueries()
twice, like SELECT does (which first calls optimize_constant_subqueries()
in JOIN::optimize_inner(), then calls optimize_unflattened_subqueries()
in JOIN::optimize_stage2()):
1. Call with const_only=true before any optimizations. This allows the
range optimizer and others to use the values of cheap const
subqueries.
2. Call with const_only=false after the range optimizer, partition
pruning, etc. The outer_lookup_keys value is provided, so it's possible
to pick a good subquery strategy.
Note: PROTECT_STATEMENT_MEMROOT requires that the first SP execution
performs subquery optimization for all subqueries, even for degenerate
query plans like "Impossible WHERE". Due to that, we ensure that the
call to optimize_unflattened_subqueries (with const_only=false) still
happens even for degenerate query plans, as was the case before this
change.