Commit Graph

2068 Commits

Author SHA1 Message Date
Oleksandr Byelkin
4d41ec081e Merge branch '10.6' into 10.11 2025-04-26 10:47:03 +02:00
Oleksandr Byelkin
19644f6821 Merge branch '10.5' into 10.6 2025-04-26 10:41:52 +02:00
Aleksey Midenkov
837fad6e41 MDEV-31647 Stack looping and SIGSEGV in Item_args::walk_args on UPDATE
The problem is with a window function, which requires its own sorting,
but that is impossible as the table is updated while the select is in
progress. Specifically, multi-update queries do not use temporary
tables to store updated rows and substitute them at the end of the
query. Instead, updates are performed directly on the target table,
row by row, during query execution. MariaDB processes updates in a way
that ensures each row is updated only once, even if it matches
multiple conditions in the query. This behavior avoids redundant
updates and does not require intermediate storage in a temporary table.

The detailed cause of the loop triggered by the window function was
explained by Yuchen Pei in the MDEV-31647 comments.

The fix disables window functions for multi-update.

Note that MySQL throws ER_UPDATE_TABLE_USED in that case, as a result
of check_unique_table(). But that function is not used for
multi-update in MariaDB, and the check cannot be done because some
level of SELECT expressions is allowed (MDEV-13911).
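
A minimal sketch of the affected statement shape (table and column
names are hypothetical); after the fix such statements are rejected:

  UPDATE t1, t2
     SET t1.a = ROW_NUMBER() OVER (ORDER BY t2.b)
   WHERE t1.id = t2.id;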
2025-04-21 21:13:52 +02:00
Oleksandr Byelkin
20b818f45e Merge branch '10.6' into 10.11 2025-04-21 11:23:11 +02:00
Oleksandr Byelkin
a135551569 Merge branch '10.5' into 10.6 2025-04-21 10:43:17 +02:00
Sergei Golubchik
7f1492d0bc cleanup: rename hide_view_error->replace_view_error_with_generic
as requested by Monty
2025-04-17 17:22:56 +02:00
Sergei Golubchik
e69f8cae1a Merge branch '10.6' into 10.11 2025-01-30 11:55:13 +01:00
Marko Mäkelä
98dbe3bfaf Merge 10.5 into 10.6 2025-01-20 09:57:37 +02:00
Sergei Golubchik
a69da0c31e MDEV-19761 Before Trigger not processed for Not Null Columns if no explicit value and no DEFAULT
it's incorrect to zero out table->triggers->extra_null_bitmap
before a statement, because if an insert uses an explicit field list
and omits a field that has no default value, the field should
get NULL implicitly. So extra_null_bitmap should have 1s for all
fields that have no defaults

* create extra_null_bitmap_init and initialize it as above
* copy extra_null_bitmap_init to extra_null_bitmap for inserts
* still zero out extra_null_bitmap for updates/deletes where
  all fields definitely have a value
* make not_null_fields_have_null_values() send
  ER_NO_DEFAULT_FOR_FIELD for fields with no default and no value,
  otherwise creation of a trigger with an empty body would change the
  error message
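
A minimal sketch of the scenario (schema is hypothetical): the field
omitted from the insert list has no default, so the BEFORE trigger must
see it as NULL and be able to assign it:

  CREATE TABLE t1 (a INT, b INT NOT NULL);
  CREATE TRIGGER t1_bi BEFORE INSERT ON t1
    FOR EACH ROW SET NEW.b = 10;
  -- b is omitted: the trigger sees NULL in NEW.b and supplies the value
  INSERT INTO t1 (a) VALUES (1);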
2025-01-17 23:42:56 +01:00
Oleksandr Byelkin
0d35fe6e57 MDEV-35326: Memory Leak in init_io_cache_ext upon SHUTDOWN
The problems were that:
1) resources were freed asymmetrically: in send_eof on normal
 execution, but in the destructor in case of error.
2) the destructor was not called for result objects in case of SP
(so if the last SP execution ended with an error, resources were not
freed on reinit before execution (cleanup() is called before the next
execution) and the destructor was also not called due to the lack of a
delete call for the object).

Result cleanup() was renamed to reset_for_next_ps_execution() to better
reflect its function.

All result methods were revised and the freeing of resources was made
symmetric.

The destructor of the result object is now called for SP.

Added the previously skipped invalidation in case of an error in INSERT.

Removed the misleading naming of reset(thd) (which could be confused
with reset()).
2025-01-13 10:04:27 +01:00
Marko Mäkelä
3f914afd3a Merge 10.6 into 10.11 2025-01-02 12:39:56 +02:00
Yuchen Pei
671f80c738 Merge branch '10.5' into 10.6 2024-12-17 11:06:09 +11:00
Dmitry Shulga
54c1031b74 MDEV-34958: after Trigger doesn't work correctly with bulk insert
This bug has the same nature as the issues
  MDEV-34718: Trigger doesn't work correctly with bulk update
  MDEV-24411: Trigger doesn't work correctly with bulk insert

To fix the issue covering all use cases, resetting thd->bulk_param
temporarily to nullptr before invoking triggers and restoring
its original value on finishing execution of a trigger is moved to
the method
  Table_triggers_list::process_triggers
which is ultimately invoked for any kind of trigger.
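
A sketch of the kind of scenario now handled uniformly (schema is
hypothetical); the bulk part is the client binding an array of values
to the placeholder:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  CREATE TRIGGER t1_ai AFTER INSERT ON t1
    FOR EACH ROW INSERT INTO t2 VALUES (NEW.a);
  -- client side: 'INSERT INTO t1 VALUES (?)' executed with an array
  -- bound to ?; thd->bulk_param is now reset around the trigger call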
2024-12-13 16:19:39 +07:00
Oleksandr Byelkin
3d0fb15028 Merge branch '10.6' into 10.11 2024-10-29 15:24:38 +01:00
Oleksandr Byelkin
f00711bba2 Merge branch '10.5' into 10.6 2024-10-29 14:20:03 +01:00
Yuchen Pei
b8c2bd9f69 MDEV-35249 Fix regression caused by MDEV-34447
MDEV-34447 removed setting first_cond_optimization to 0 in update and
delete when leaf_tables_saved. This can cause problems when two PS
executions of an update go through different paths: the first PS
execution goes through single-table update only, while the second PS
execution also goes through multi-table update. When this happens, the
first_cond_optimization of the outer query is not set to false during
the first PS execution because optimize() is not called for the outer
query. The second PS execution then calls optimize() on the outer
query, which with first_cond_optimization==true trips the 2nd-PS memory
leak detection.

This is not a problem in higher versions, as both executions go through
multi-table updates, possibly due to MDEV-28883.

We fix this problem by restoring the FALSE assignments to
first_cond_optimization.
2024-10-25 18:03:40 +11:00
Sergei Golubchik
3a1cf2c85b MDEV-34679 ER_BAD_FIELD uses non-localizable substrings 2024-10-17 21:37:37 +02:00
Yuchen Pei
d002b1f503 Merge branch '10.6' into 10.11 2024-09-09 11:34:19 +10:00
Yuchen Pei
60b93cdd30 Merge branch '10.5' into 10.6 2024-09-06 13:52:57 +10:00
Yuchen Pei
e886c2ba02 MDEV-34757 Check leaf_tables_saved in partition pruning in UPDATE and DELETE 2024-09-06 11:41:59 +10:00
Yuchen Pei
2c3e07df47 MDEV-34447: Memory leakage is detected on running the test main.ps against the server 11.1
The memory leak happened on the second execution of a prepared statement
that runs an UPDATE statement with a correlated subquery in the right
hand side of the SET clause. In this case, invocation of the method
  table->stat_records()
could return zero, which results in going into the 'if' branch that
handles an impossible where condition. The issue is that this branch
missed saving the leaf tables, which has to be performed as the first
condition optimization activity. Later the PS statement memory root
is marked as read only on finishing the first execution of the prepared
statement. The next time the same statement is executed, it hits the
assertion on an attempt to allocate memory on the PS memory root marked
as read only. This memory allocation takes place in the sequence of the
following invocations:
 Prepared_statement::execute
  mysql_execute_command
   Sql_cmd_dml::execute
    Sql_cmd_update::execute_inner
     Sql_cmd_update::update_single_table
      st_select_lex::save_leaf_tables
       List<TABLE_LIST>::push_back

To fix the issue, add the flag SELECT_LEX::leaf_tables_saved to control
whether the method SELECT_LEX::save_leaf_tables() has to be called or
whether it has already been invoked and no further invocation is
required.

A similar issue could take place when running a DELETE statement with a
LIMIT clause in PS/SP mode. The cause of the memory leak is the same as
in the UPDATE case and it is fixed in the same way.
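
A sketch of the failing pattern (tables are hypothetical; assumes t1's
statistics report zero rows so the impossible-where branch is taken):

  PREPARE stmt FROM
    'UPDATE t1 SET a = (SELECT MAX(b) FROM t2 WHERE t2.id = t1.id)';
  EXECUTE stmt;  -- leaf tables not saved; memory root becomes read only
  EXECUTE stmt;  -- used to assert on push_back into the read-only root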
2024-09-06 11:41:58 +10:00
Oleksandr Byelkin
70afc62750 Merge branch '10.6' into 10.11 2024-08-20 10:00:39 +02:00
Oleksandr Byelkin
fc5772ce17 Merge branch '10.5' into 10.6 2024-08-20 09:11:34 +02:00
Dmitry Shulga
ba5482ffc2 MDEV-34718: Trigger doesn't work correctly with bulk update
Running an UPDATE statement in PS mode, having positional
parameter(s) bound with an array of actual values (that is,
prepared to be run in bulk mode), results in incorrect behaviour
in the presence of an on update trigger that also executes an UPDATE
statement. The same is true for handling a DELETE statement in the
presence of an on delete trigger. Typically, the visible effect of
such incorrect behaviour is a wrong number of
updated/deleted rows in the target table. Additionally, in the case of
an UPDATE statement, the number of modified rows and the status message
returned by the statement contain wrong information about the number of
modified rows.

The reason for the incorrect number of updated/deleted rows is that
the data structure used for binding positional arguments with their
actual values is stored in THD (this is thd->bulk_param) and reused
on processing every INSERT/UPDATE/DELETE statement. This leads to the
actual values bound with the top-level UPDATE/DELETE statement being
consumed by other DML statements used in the triggers' bodies.

To fix the issue, reset thd->bulk_param temporarily to
nullptr before invoking triggers and restore its value on finishing
their execution.

The second part of the problem, relating to the wrong value of affected
rows reported by the Connector/C API, is caused by the fact that the
diagnostics area is reused by an original DML statement and a statement
invoked by a trigger. This fact should be taken into account on
finalizing the state of the diagnostics area on completion of running
a statement.

Important remark: in case the macro DBUG_OFF is on, a call of the method
  Diagnostics_area::reset_diagnostics_area()
results in a reset of the data members
  m_affected_rows, m_statement_warn_count.
Values of these data members of the class Diagnostics_area are used on
sending OK and EOF messages. In case a DML statement is executed in PS
bulk mode, such resetting results in sending wrong result values to a
client for affected rows in case the DML statement fires triggers. So,
reset these data members only in case the current statement being
processed is not run in bulk mode.
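
A sketch of the server-side setup (schema is hypothetical); the bulk
part is the client binding arrays of values to the placeholders of the
top-level UPDATE:

  CREATE TABLE t1 (id INT PRIMARY KEY, a INT);
  CREATE TABLE audit_t (id INT, old_a INT);
  CREATE TRIGGER t1_bu BEFORE UPDATE ON t1
    FOR EACH ROW UPDATE audit_t SET old_a = OLD.a WHERE id = OLD.id;
  -- client side: 'UPDATE t1 SET a = ? WHERE id = ?' executed with
  -- arrays bound to the parameters; the trigger's inner UPDATE used to
  -- consume the outer statement's bound values via thd->bulk_param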
2024-08-19 12:13:43 +07:00
Marko Mäkelä
27a3366663 Merge 10.6 into 10.11 2024-06-27 10:26:09 +03:00
Marko Mäkelä
0076eb3d4e Merge 10.5 into 10.6 2024-06-24 13:09:47 +03:00
Dave Gosselin
db0c28eff8 MDEV-33746 Supply missing override markings
Find and fix missing virtual override markings.  Updates cmake
maintainer flags to include -Wsuggest-override and
-Winconsistent-missing-override.
2024-06-20 11:32:13 -04:00
Marko Mäkelä
788953463d Merge 10.6 into 10.11
Some fixes related to commit f838b2d799 and
Rows_log_event::do_apply_event() and Update_rows_log_event::do_exec_row()
for system-versioned tables were provided by Nikita Malyavin.
This was required by test versioning.rpl,trx_id,row.
2024-03-28 09:16:57 +02:00
Monty
cfa8268ef9 MDEV-33622 Server crashes when the UPDATE statement (which has duplicate key) is run after setting a low thread_stack
This was caused by wrong allocation of a variable on the stack
(it was allocating 4K of data instead of 512 bytes).

No test case, as the original MDEV test case is not usable for mtr.
2024-03-12 19:00:41 +02:00
Marko Mäkelä
64cce8d5bf Merge 10.6 into 10.11 2024-02-14 16:12:53 +02:00
Marko Mäkelä
691f923906 Merge 10.5 into 10.6 2024-02-13 20:42:59 +02:00
Marko Mäkelä
8ec12e0d6d Merge 10.4 into 10.5 2024-02-12 11:38:13 +02:00
Dmitry Shulga
e48bd474a2 MDEV-15703: Crash in EXECUTE IMMEDIATE 'CREATE OR REPLACE TABLE t1 (a INT DEFAULT ?)' USING DEFAULT
This patch fixes the issue with passing the DEFAULT or IGNORE values to
positional parameters for some kinds of SQL statements to be executed
as prepared statements.

The main idea of the patch is to associate the actual value being passed
by the USING clause with the positional parameter represented by
the Item_param class. Such association must be performed on execution of
an UPDATE statement in PS/SP mode. Another corner case that results in a
server crash is handling CREATE TABLE when a positional parameter is
placed after the DEFAULT clause, or a CALL statement passing either
the value DEFAULT or IGNORE as the actual value for the positional
parameter. This case is fixed by checking whether an error is set in the
diagnostics area at the function pack_vcols() on return from the
function pack_expression()
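
The statement from the bug title illustrates the CREATE TABLE corner
case:

  EXECUTE IMMEDIATE
    'CREATE OR REPLACE TABLE t1 (a INT DEFAULT ?)' USING DEFAULT;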
2024-02-08 09:21:54 +01:00
Sergei Golubchik
87e13722a9 Merge branch '10.6' into 10.11 2024-02-01 18:36:14 +01:00
Monty
3c63e2f890 Temporary table name used by multi-update has 'null' as part of the name
The problem was that TMP_TABLE_PARAM was not properly initialized
2024-01-23 13:03:12 +02:00
Sergei Golubchik
fd0b47f9d6 Merge branch '10.6' into 10.11 2023-12-18 11:19:04 +01:00
Alexander Barkov
4ced4898fd MDEV-32958 Unusable key notes do not get reported for some operations
Enable unusable key notes for non-equality predicates:
   <, <=, >=, >, BETWEEN, IN, LIKE

Note, in some scenarios it displays duplicate notes, e.g.
for queries with ORDER BY:

  SELECT * FROM t1
  WHERE    indexed_string_column >= 10
  ORDER BY indexed_string_column
  LIMIT 5;

This should be tolerable. Getting rid of the duplicate note
completely would need a much more complex patch, which is
not desirable in 10.6.

Details:

- Changing RANGE_OPT_PARAM::note_unusable_keys from bool
  to a new data type Item_func::Bitmap, so the caller can
  choose with better granularity which predicates
  should raise unusable key notes inside the range optimizer:
    a. all predicates (=, <=>, <, <=, >=, >, BETWEEN, IN, LIKE)
    b. all predicates except equality (=, <=>)
    c. none of the predicates

  "b." is needed because in some scenarios equality predicates (=, <=>)
  send unusable key notes at an earlier stage, before the range optimizer,
  during update_ref_and_keys(). Calling the range optimizer with
  "all predicates" would produce duplicate notes for = and <=> in such cases.

- Fixing get_quick_record_count() to call the range optimizer
  with "all predicates except equality" instead of "none of the predicates".
  Before this change the range optimizer suppressed all notes for
  non-equality predicates: <, <=, >=, >, BETWEEN, IN, LIKE.
  This actually fixes the reported problem.

- Fixing JOIN::make_range_rowid_filters() to call the range optimizer
  with "all predicates except equality" instead of "all predicates".
  Before this change the range optimizer produced duplicate notes
  for = and <=> during a rowid_filter optimization.

- Cleanup:
  Adding the op_collation argument to Field::raise_note_cannot_use_key_part()
  and displaying the operation collation rather than the argument collation
  in the unusable key note. This is important for operations with more than
  two arguments: BETWEEN and IN, e.g.:

    SELECT * FROM t1
    WHERE column_utf8mb3_general_ci
          BETWEEN 'a' AND 'b' COLLATE utf8mb3_unicode_ci;

    SELECT * FROM t1
    WHERE column_utf8mb3_general_ci
          IN ('a', 'b' COLLATE utf8mb3_unicode_ci);

    The note for 'a' now prints utf8mb3_unicode_ci as the collation,
    which is the collation of the entire operation:

      Cannot use key key1 part[0] for lookup:
      "`column_utf8mb3_general_ci`" of collation `utf8mb3_general_ci` >=
      "'a'" of collation `utf8mb3_unicode_ci`

    Before this change it printed the collation of 'a',
    so the note was confusing:

      Cannot use key key1 part[0] for lookup:
      "`column_utf8mb3_general_ci`" of collation `utf8mb3_general_ci` >=
      "'a'" of collation `utf8mb3_general_ci`"
2023-12-11 08:55:27 +04:00
Oleksandr Byelkin
036df5f970 Merge branch '10.10' into 10.11 2023-08-08 14:57:31 +02:00
Oleksandr Byelkin
34a8e78581 Merge branch '10.6' into 10.9 2023-08-04 08:01:06 +02:00
Oleksandr Byelkin
6bf8483cac Merge branch '10.5' into 10.6 2023-08-01 15:08:52 +02:00
Oleksandr Byelkin
7564be1352 Merge branch '10.4' into 10.5 2023-07-26 16:02:57 +02:00
Marko Mäkelä
bce3ee704f Merge 10.10 into 10.11 2023-07-26 14:44:43 +03:00
Aleksey Midenkov
14cc7e7d6e MDEV-25644 UPDATE not working properly on transaction precise system versioned table
The first UPDATE under START TRANSACTION does nothing (nstate= nstate)
but generates history anyway. Since the update vector is empty we get
into the (!uvect->n_fields) branch which only adds a history row but
does not do the update. After that we get the current row with a wrong
(old) row_start value, and because of that the second UPDATE tries to
insert the history row again: it sees trx->id != row_start, which is
the guard to avoid inserting multiple trx_id-based history rows under
the same transaction (because we have the same trx_id we get a
duplicate error, and this bug demonstrates that). But this attempt
fails anyway because the PK is based on row_end, which is constant
under the same transaction, so the PK didn't change.

The fix moves vers_make_update() to an earlier stage of
calc_row_difference(). Therefore it prepares the update vector before
the (!uvect->n_fields) check and never gets into that branch, hence
there is no need to handle versioning inside that condition anymore.

Now trx->id and row_start are equal after the first UPDATE and we don't
try to insert a second history row.

== Cleanups and improvements ==

ha_innobase::update_row():

vers_set_fields and vers_ins_row are cleaned up into a direct condition
check. The SQLCOM_ALTER_TABLE check is not used anymore as it was dead
code; an assertion is done instead.

upd_node->is_delete is set in calc_row_difference() just to keep
versioning code in one place as much as possible. vers_make_delete()
is still located in row_update_for_mysql() as this is required for
ha_innobase::delete_row() as well.

row_ins_duplicate_error_in_clust():

Restrict DB_FOREIGN_DUPLICATE_KEY to narrower conditions.
VERSIONED_DELETE is used specifically to help the lower stack
understand what caused the current insert. Related to MDEV-29813.
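
A sketch of the failure pattern (DDL for the transaction-precise
system-versioned table t1 is omitted; names are hypothetical):

  START TRANSACTION;
  UPDATE t1 SET a = a;      -- empty update vector; history row written
  UPDATE t1 SET a = a + 1;  -- saw a stale row_start and tried to insert
                            -- a second history row for the same trx_id
  COMMIT;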
2023-07-20 18:22:31 +03:00
Marko Mäkelä
7cde5c539b Merge 10.6 into 10.9 2023-07-10 11:22:21 +03:00
Monty
99bd226059 MDEV-31558 Add InnoDB engine information to the slow query log
The new statistics are enabled by adding the "engine", "innodb" or
"full" option to --log-slow-verbosity
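
For example, the new engine statistics can be enabled at runtime
(assuming the slow query log itself is already enabled):

  SET GLOBAL log_slow_verbosity = 'engine';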

Example output:

 # Pages_accessed: 184  Pages_read: 95  Pages_updated: 0  Old_rows_read: 1
 # Pages_read_time: 17.0204  Engine_time: 248.1297

Pages_read_time is the time spent doing physical reads inside a storage engine.
(Writes cannot be tracked as these are usually done in the background).
Engine_time is the time spent inside the storage engine for the full
duration of the read/write/update calls. It uses the same code as
'analyze statement' for calculating the time spent.

The engine statistics are collected with a generic interface that
should be easy for any engine to use. It can also easily be extended to
provide even more statistics.

Currently only InnoDB has counters for Pages_% and Undo_% status.
Engine_time works for all engines.

Implementation details:

class ha_handler_stats holds all engine stats.  This class is included
in handler and THD classes.
While a query is running, all statistics are updated in the handler. In
close_thread_tables() the statistics are added to the THD.

handler::handler_stats is a pointer to where statistics should be
collected. This is set to point to handler::active_handler_stats if
stats are requested. If not, it is set to 0.
handler_stats also has an element, 'active', that is 1 if stats are
requested. This allows engines to avoid doing any 'if's while
updating the statistics.

Cloned or partition tables have the pointer set to the base table if
stats are requested.

There is a small performance impact when using --log-slow-verbosity=engine:
- All engine calls in 'select' will be timed.
- IO calls for InnoDB reads will be timed.
- Incrementing counters is done on local variables and accesses
  are inline, so these should have very little impact.
- Statistics have to be reset for each statement for the THD and each
  used handler. This is only 40 bytes, which should be negligible.
- For partition tables we have to loop over all partitions to update
  the handler_status as part of table_init(). This can be optimized in
  the future to only do it when log-slow-verbosity changes. For this to
  work we have to update handler_status for all opened partitions and
  also for all partitions opened in the future.

Other things:
- Added options 'engine' and 'full' to log-slow-verbosity.
- Some of the new files in the test suite come from Percona Server,
  which has similar status information.
- buf_page_optimistic_get(): Do not increment any counter, since we are
  only validating a pointer, not performing any buf_pool.page_hash lookup.
- Added THD argument to save_explain_data_intern().
- Switched arguments for save_explain_.*_data() to have
  always THD first (generates better code as other functions also have THD
  first).
2023-07-07 12:53:18 +03:00
Marko Mäkelä
656c2e18b1 Merge 10.10 into 10.11 2023-04-14 13:08:28 +03:00
Marko Mäkelä
44281b88f3 Merge 10.8 into 10.9 2023-04-14 11:32:36 +03:00
Marko Mäkelä
1d1e0ab2cc Merge 10.6 into 10.8 2023-04-12 15:50:08 +03:00
Marko Mäkelä
5bada1246d Merge 10.5 into 10.6 2023-04-11 16:15:19 +03:00
Oleksandr Byelkin
ac5a534a4c Merge remote-tracking branch '10.4' into 10.5 2023-03-31 21:32:41 +02:00