mirror of https://github.com/MariaDB/server.git synced 2025-11-08 00:28:29 +03:00
Commit Graph

1063 Commits

Author SHA1 Message Date
Aleksey Midenkov
ff33f49d9a Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00
Marko Mäkelä
e8ef8c0055 Merge 10.11 into 11.4 2025-09-24 13:40:09 +03:00
bsrikanth-mariadb
573d3ad1c6 MDEV-33309: for update|delete analyze format=json doesn't show r_other_time_ms
Single-table UPDATE/DELETE statements only show r_total_time_ms in the top-level query block.
Replace it with r_table_time_ms and r_other_time_ms.
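A minimal sketch of the statements affected, assuming a hypothetical table t1
(after the fix, ANALYZE FORMAT=JSON for these reports r_table_time_ms and
r_other_time_ms in the top-level query block):

  CREATE TABLE t1 (id INT PRIMARY KEY, a INT);
  INSERT INTO t1 VALUES (1,1),(2,2),(3,3);
  ANALYZE FORMAT=JSON UPDATE t1 SET a = a + 1 WHERE id > 1;
  ANALYZE FORMAT=JSON DELETE FROM t1 WHERE id = 1;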
2025-09-22 10:24:14 +05:30
Nikita Malyavin
28472359b1 MDEV-15990 versioning: don't allow changes in the past 2025-09-17 18:47:25 +03:00
Nikita Malyavin
8001679af6 MDEV-15990 handle timestamp-based collisions as well
Timestamp-versioned row deletion was prone to a collision problem: if the
current timestamp hadn't changed, a sequence of row delete+insert could fail
with a duplicate-key error. The row delete would find another conflicting
history row and return an error.

This is true for both REPLACE and DELETE statements; in REPLACE, however, the
"optimized" path is usually taken, especially in the tests. There, a single
versioned row update is substituted for the delete+insert. In the end, both
paths end up as ha_update_row + ha_write_row.

The solution is to handle the history collision explicitly.

From the design perspective, the user shouldn't experience loss of history
rows, unless there's a technical limitation.

By contrast, trxid-based changes should never generate history for the same
transaction, see MDEV-15427.

If two operations on the same row happen so quickly that they get the same
timestamp, the history row shouldn't be lost. We can still write a history
row, though it'll have row_start == row_end.

We cannot store more than one such historical row, as that would violate the
unique constraint on row_end. So we have to physically delete the row if the
history row is already present.

In this commit:
1. Improve TABLE::delete_row to handle the history collision: if an update
   results in a duplicate error, delete the row for real.
2. Use TABLE::delete_row in the non-optimistic path of REPLACE, where the
   system-versioned case now belongs entirely.
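A sketch of the collision scenario on a timestamp-versioned table (table and
values are illustrative, not taken from the patch):

  CREATE TABLE t1 (id INT PRIMARY KEY, a INT) WITH SYSTEM VERSIONING;
  INSERT INTO t1 VALUES (1, 10);
  -- If the next two statements fall into the same timestamp tick, the second
  -- delete+insert used to fail with a duplicate error on row_end; now the
  -- colliding history row (row_start == row_end) is deleted for real.
  REPLACE INTO t1 VALUES (1, 20);
  REPLACE INTO t1 VALUES (1, 30);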
2025-09-17 18:29:47 +03:00
Nikita Malyavin
aeb25743af MDEV-15990 REPLACE on a precise-versioned table returns ER_DUP_ENTRY
We had a protection against it, by allowing a versioned delete only if:
trx->id != table->vers_start_id()

For REPLACE this check fails: REPLACE calls ha_delete_row(record[2]), but
table->vers_start_id() returns the value from record[0], which is irrelevant.

The same problem hits Field::is_max, which may have checked the wrong record.

Fix:
* Refactor Field::is_max to optionally accept a pointer as an argument.
* Refactor vers_start_id and vers_end_id to always accept a pointer to the
record. The difference from is_max is that is_max accepts a pointer to the
field data, rather than to the record.

Method val_int() would be too effortful to refactor to accept the argument, so
instead the value in the record is fetched directly, as is done in
Field_longlong.
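A sketch of the failing case, assuming a trx-id ("precise") versioned InnoDB
table with illustrative names:

  CREATE TABLE t1 (
    id INT PRIMARY KEY, a INT,
    row_start BIGINT UNSIGNED GENERATED ALWAYS AS ROW START,
    row_end BIGINT UNSIGNED GENERATED ALWAYS AS ROW END,
    PERIOD FOR SYSTEM_TIME(row_start, row_end)
  ) WITH SYSTEM VERSIONING ENGINE=InnoDB;
  INSERT INTO t1 (id, a) VALUES (1, 10);
  REPLACE INTO t1 (id, a) VALUES (1, 20); -- returned ER_DUP_ENTRY before the fix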
2025-09-17 11:38:55 +03:00
Oleksandr Byelkin
15b1426c3a Merge branch '10.11' into bb-11.4-release 2025-09-15 16:17:33 +02:00
Oleksandr Byelkin
0707dac202 Merge branch '10.6' into 10.11 2025-09-12 13:08:40 +02:00
Sergei Golubchik
5743435954 MDEV-37397 Assertion `bitmap_is_set(&read_partitions, next->id)' failed in int partition_info::vers_set_hist_part(THD *)
After 633417308f (MDEV-37312) lookup_handler is locked with F_WRLCK,
because it may be used for deleting rows.

And lookup_handler is locked with F_WRLCK after prune_partitions(),
but the main handler is locked before that, and might expect all
partitions to be in the read list, non-pruned.

Let's prepare the lookup handler before prune_partitions().
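A sketch of the statement shape involved, assuming a hypothetical
system-versioned table with history partitions:

  CREATE TABLE t1 (a INT) WITH SYSTEM VERSIONING
    PARTITION BY SYSTEM_TIME LIMIT 100 (
      PARTITION p0 HISTORY,
      PARTITION pn CURRENT);
  INSERT INTO t1 VALUES (1);
  DELETE FROM t1; -- history row placement goes through vers_set_hist_part()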
2025-09-04 17:20:02 +02:00
Monty
882f6fa3aa Fixed typos
- Removed duplicate words, like "the the" and "to to"
- Removed duplicate lines (one duplicated sort line found in mysql.cc)
- Fixed some typos found while searching for duplicate words.

Command used to find duplicate words:
egrep -rI "\s([a-zA-Z]+)\s+\1\s" | grep -v param

Thanks to Artjoms Rimdjonoks for the command and pointing out the
spelling errors.
2025-09-04 18:08:39 +03:00
Nikita Malyavin
0108664a8a Merge branch 10.11 into 11.4
# Conflicts:
#	sql/handler.h
#	sql/log_event.h
#	sql/log_event_server.cc
2025-09-02 15:58:39 +02:00
Nikita Malyavin
db26bf7ada MDEV-15990 versioning: don't allow changes in the past 2025-08-04 17:44:05 +02:00
Nikita Malyavin
e56f6c585e MDEV-15990 handle timestamp-based collisions as well
Timestamp-versioned row deletion was prone to a collision problem: if the
current timestamp hadn't changed, a sequence of row delete+insert could fail
with a duplicate-key error. The row delete would find another conflicting
history row and return an error.

This is true for both REPLACE and DELETE statements; in REPLACE, however, the
"optimized" path is usually taken, especially in the tests. There, a single
versioned row update is substituted for the delete+insert. In the end, both
paths end up as ha_update_row + ha_write_row.

The solution is to handle the history collision explicitly.

From the design perspective, the user shouldn't experience loss of history
rows, unless there's a technical limitation.

By contrast, trxid-based changes should never generate history for the same
transaction, see MDEV-15427.

If two operations on the same row happen so quickly that they get the same
timestamp, the history row shouldn't be lost. We can still write a history
row, though it'll have row_start == row_end.

We cannot store more than one such historical row, as that would violate the
unique constraint on row_end. So we have to physically delete the row if the
history row is already present.

In this commit:
1. Improve TABLE::delete_row to handle the history collision: if an update
   results in a duplicate error, delete the row for real.
2. Use TABLE::delete_row in the non-optimistic path of REPLACE, where the
   system-versioned case now belongs entirely.
2025-08-04 17:44:05 +02:00
Nikita Malyavin
6353a80ef5 MDEV-15990 REPLACE on a precise-versioned table returns ER_DUP_ENTRY
We had a protection against it, by allowing a versioned delete only if:
trx->id != table->vers_start_id()

For REPLACE this check fails: REPLACE calls ha_delete_row(record[2]), but
table->vers_start_id() returns the value from record[0], which is irrelevant.

The same problem hits Field::is_max, which may have checked the wrong record.

Fix:
* Refactor Field::is_max to optionally accept a pointer as an argument.
* Refactor vers_start_id and vers_end_id to always accept a pointer to the
record. The difference from is_max is that is_max accepts a pointer to the
field data, rather than to the record.

Method val_int() would be too effortful to refactor to accept the argument, so
instead the value in the record is fetched directly, as is done in
Field_longlong.
2025-08-04 17:44:05 +02:00
Nikita Malyavin
f8ac3a7c0e multi_delete: fix unlikely -> likely 2025-08-04 17:44:05 +02:00
Dave Gosselin
dbd7017110 MDEV-36997 Assertion failed in ha_tina::delete_row on multi delete
Multi-delete code invokes ha_rnd_end after applying deferred deletes, following
the same pattern as multi-update.

The CSV engine batches deletes made via calls to delete_row, then applies them
all at once during ha_rnd_end.  For each row batched, the CSV engine decrements
an internal counter of the total rows in the table.  The multi-delete code was
not calling ha_rnd_end, so this internal counter was not consistent with the
total number of rows in the table, triggering the assertion.

In the CSV engine, explicitly delete the destination file before renaming the
source file over it.  This avoids a file rename problem on Windows.  This
change would have been necessary before the latest multi-delete changes, had
this test case existed at that point in time, because the end_read_record call
in do_table_deletes swallowed the rename failure on Windows.
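A sketch of the multi-delete shape that exercised the assertion, assuming
hypothetical CSV tables (CSV requires NOT NULL columns):

  CREATE TABLE t1 (a INT NOT NULL) ENGINE=CSV;
  CREATE TABLE t2 (b INT NOT NULL) ENGINE=CSV;
  INSERT INTO t1 VALUES (1),(2);
  INSERT INTO t2 VALUES (1),(2);
  -- Deferred deletes are applied and ha_rnd_end is now invoked, keeping the
  -- engine's internal row counter consistent with the table.
  DELETE t1, t2 FROM t1 JOIN t2 ON t1.a = t2.b;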
2025-06-17 10:34:52 -04:00
Aleksey Midenkov
f1f9284181 MDEV-34046 Parameterized PS converts error to warning, causes
replication problems

DELETE HISTORY did not process a parameterized PS properly, as the
history expression was checked at prepare stage, when the parameters
were not yet substituted. In that case check_units() succeeded, as there
is no invalid type: Item_param has type_handler_null, which is
inherited from the string type, and this is a valid type for a history
expression. The warning was thrown when the expression was evaluated
for comparison on delete execution (when the parameter was already
substituted).

The fix postpones check_units() until the first PS execution. We have
to postpone the WHERE condition processing until the first execution and
update select_lex.where on every execution, as it is reset to its
post-prepare state.
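A sketch of the parameterized statement shape in question, with a hypothetical
versioned table (before the fix, a problem in the history expression surfaced
only as a warning on EXECUTE):

  CREATE TABLE t1 (a INT) WITH SYSTEM VERSIONING;
  PREPARE stmt FROM 'DELETE HISTORY FROM t1 BEFORE SYSTEM_TIME ?';
  SET @ts = '2024-01-01 00:00:00';
  EXECUTE stmt USING @ts;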
2025-06-12 14:52:00 +03:00
Sergei Golubchik
237e24497b Merge remote-tracking branch 'github/bb-11.4-release' into bb-11.8-serg 2025-04-27 19:40:00 +02:00
Dave Gosselin
d3c9a2ee21 MDEV-35510 ASAN build crashes during bootstrap
Avoid ASAN failure by collecting statistics from Result objects
before cleaning them up.  In related single-table cases, statistics
are maintained directly by the single-table update and delete
functions.
2025-04-14 12:56:39 -04:00
ParadoxV5
d5ba6f71b9 Tag push_warning_printf with ATTRIBUTE_FORMAT
* Let GCC `-Wformat` check formats sent to these `my_vsnprintf_ex` users
* Migrate them from the old extension specifiers
  to the new `-Wformat`-compatible suffixes
2025-02-12 10:17:44 +01:00
Sergei Golubchik
9ee09a33bb Merge branch '11.7' into 11.8 2025-02-11 20:29:43 +01:00
Sergei Golubchik
ba01c2aaf0 Merge branch '11.4' into 11.7
* rpl.rpl_system_versioning_partitions updated for MDEV-32188
* innodb.row_size_error_log_warnings_3 changed error for MDEV-33658
  (checks are done in a different order)
2025-02-06 16:46:36 +01:00
Dave Gosselin
edd52b7fc7 MDEV-30469 Feature rebase
Upon rebase, some error codes changed order, so recorded tests were
updated accordingly. Merged with trigger changes from MDEV-34724.
2025-02-05 11:31:57 -05:00
Dave Gosselin
5e07d1abd4 MDEV-35848, MDEV-35568 Reintroduce delete_while_scanning for multi_delete
Reintroduces the delete_while_scanning optimization for multi_delete.
Reverses some test changes from the initial feature development now
that we delete on the fly once again.
2025-02-05 10:12:30 -05:00
Dave Gosselin
5001300bd4 MDEV-30469 Support ORDER BY and LIMIT for multi-table DELETE, index hints for single-table DELETE
We now allow multitable queries with order by and limit, such as:
  delete t1.*, t2.* from t1, t2 order by t1.id desc limit 3;
To predict what rows will be deleted, run the equivalent select:
  select t1.*, t2.* from t1, t2 order by t1.id desc limit 3;
Additionally, index hints are now supported with single table delete statements:
  delete from t2 use index(xid) order by (id) limit 2;

This approach changes the multi_delete SELECT result interceptor to use a temporary
table to collect row ids pertaining to the rows that will be deleted, rather than
directly deleting rows from the target table(s).  Row ids are collected during
send_data, then read during send_eof to delete target rows.  In the event that the
temporary table created in memory is not big enough for all matching rows, it is
converted to an Aria table.

Other changes:
  - Deleting from a sequence now affects zero rows instead of emitting an error

Limitations:
  - The federated connector does not create implicit row ids, so we have to use
  a key when conditionally deleting.  See the change in federated_maybe_16324629.test
2025-02-05 10:12:27 -05:00
Sergei Golubchik
7d657fda64 Merge branch '10.11' into 11.4 2025-01-30 12:01:11 +01:00
Sergei Golubchik
e69f8cae1a Merge branch '10.6' into 10.11 2025-01-30 11:55:13 +01:00
Sergei Golubchik
03d2328785 MDEV-35944 DELETE fails to notice transaction abort, violating ACID
Process errors of read_record().

Also, add an assert that Marko requested
2025-01-29 10:43:29 +01:00
Dmitry Shulga
4c956fa15b MDEV-34724: Skipping a row operation from a trigger
Implementation of this task adds the ability to raise a signal with
SQLSTATE '02TRG' from a BEFORE INSERT/UPDATE/DELETE trigger and handles
this signal as an indicator meaning 'throw away the current row'
while processing the INSERT/UPDATE/DELETE statement. The signal with
SQLSTATE '02TRG' has special meaning only in case it is raised inside
BEFORE triggers; for AFTER triggers this value of SQLSTATE isn't treated
in any special way. In accordance with the SQL standard, the SQLSTATE class '02'
means NO DATA, and sql_errno for this class is set to the value
ER_SIGNAL_NOT_FOUND by the current implementation of MariaDB server.
The implementation assigns the value ER_SIGNAL_SKIP_ROW_FROM_TRIGGER
to sql_errno in the Diagnostics_area in case the signal is raised from a trigger
and SQLSTATE has the value '02TRG'.

To catch a signal with SQLSTATE '02TRG' and handle it in a special way, the methods
 Table_triggers_list::process_triggers
 select_insert::store_values
 select_create::store_values
 Rows_log_event::process_triggers
and the overloaded function
 fill_record_n_invoke_before_triggers
were extended with an extra out parameter for returning a flag on whether
to skip the current values being processed by the INSERT/UPDATE/DELETE
statement. This extra parameter is passed as nullptr in case of an AFTER
trigger; for a BEFORE trigger it points to a variable that stores a marker of
whether to skip the current record or store it by calling write_record().
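A minimal sketch of the new behavior, with an illustrative table and trigger:
a BEFORE trigger raising SQLSTATE '02TRG' silently skips the row instead of
failing the statement:

  CREATE TABLE t1 (a INT);
  CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW
    SIGNAL SQLSTATE '02TRG'; -- throw away every inserted row
  INSERT INTO t1 VALUES (1),(2);
  SELECT COUNT(*) FROM t1;   -- 0: rows were skipped, no error was raised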
2025-01-27 16:30:27 +07:00
Sergei Petrunia
1c2a83179d MDEV-35616: Add basic optimizer support for virtual column
(Review input addressed)

After this patch, the optimizer can handle virtual column expressions
in WHERE/ON clauses. If the table has an indexed virtual column:

  ALTER TABLE t1
    ADD COLUMN vcol INT AS (col1+1),
    ADD INDEX idx1(vcol);

and the query uses the exact virtual column expression:

  SELECT * FROM t1 WHERE col1+1 <= 100

then the optimizer will be able to use index idx1 for it.

This is achieved by walking the WHERE/ON clauses and replacing instances
of the virtual column expression (like "col1+1" above) with the virtual
column's Item_field (like "vcol"). The latter can be processed by the optimizer.

Replacement is considered (and done) only in items that are potentially
usable by the range optimizer.
2025-01-25 10:50:52 +02:00
Marko Mäkelä
98dbe3bfaf Merge 10.5 into 10.6 2025-01-20 09:57:37 +02:00
Sergei Golubchik
a69da0c31e MDEV-19761 Before Trigger not processed for Not Null Columns if no explicit value and no DEFAULT
it's incorrect to zero out table->triggers->extra_null_bitmap
before a statement, because if an insert uses an explicit field list
and omits a field that has no default value, the field should
get NULL implicitly. So extra_null_bitmap should have 1s for all
fields that have no defaults.

* create extra_null_bitmap_init and initialize it as above
* copy extra_null_bitmap_init to extra_null_bitmap for inserts
* still zero out extra_null_bitmap for updates/deletes where
  all fields definitely have a value
* make not_null_fields_have_null_values() send
  ER_NO_DEFAULT_FOR_FIELD for fields with no default and no value,
  otherwise creation of a trigger with an empty body would change the
  error message
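A sketch of the two cases the fix distinguishes (table and trigger are
illustrative):

  CREATE TABLE t1 (a INT NOT NULL, b INT);
  -- no trigger supplies `a`: the insert fails with ER_NO_DEFAULT_FOR_FIELD
  INSERT INTO t1 (b) VALUES (1);
  -- a BEFORE trigger that sets the omitted field lets the insert succeed
  CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW SET NEW.a = 10;
  INSERT INTO t1 (b) VALUES (2);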
2025-01-17 23:42:56 +01:00
Oleksandr Byelkin
0d35fe6e57 MDEV-35326: Memory Leak in init_io_cache_ext upon SHUTDOWN
The problems were that:
1) resources were freed asymmetrically: in send_eof() on normal execution,
 but in the destructor in case of error.
2) the destructor was not called for result objects in case of an SP
(so if the last SP execution ended with an error, resources were not
freed on reinit before the next execution (cleanup() is called before
the next execution), and the destructor also was not called due to the
lack of a delete call for the object).

Result cleanup() was renamed to reset_for_next_ps_execution() to better
reflect its function.

All result methods were revised and the freeing of resources made symmetric.

The destructor of the result object is now called for SPs.

Added the previously skipped invalidation in case of an error in insert.

Removed the misleading naming of reset(thd) (which could be confused
with reset()).
2025-01-13 10:04:27 +01:00
Marko Mäkelä
15700f54c2 Merge 11.4 into 11.7 2025-01-09 09:41:38 +02:00
Marko Mäkelä
17f01186f5 Merge 10.11 into 11.4 2025-01-09 07:58:08 +02:00
Marko Mäkelä
3f914afd3a Merge 10.6 into 10.11 2025-01-02 12:39:56 +02:00
Yuchen Pei
671f80c738 Merge branch '10.5' into 10.6 2024-12-17 11:06:09 +11:00
Dmitry Shulga
54c1031b74 MDEV-34958: after Trigger doesn't work correctly with bulk insert
This bug has the same nature as the issues
  MDEV-34718: Trigger doesn't work correctly with bulk update
  MDEV-24411: Trigger doesn't work correctly with bulk insert

To fix the issue covering all use cases, resetting thd->bulk_param
temporarily to nullptr before invoking triggers and restoring
its original value on finishing execution of a trigger is moved to the method
  Table_triggers_list::process_triggers
which is ultimately invoked for any kind of trigger.
2024-12-13 16:19:39 +07:00
Marko Mäkelä
33907f9ec6 Merge 11.4 into 11.7 2024-12-02 17:51:17 +02:00
Marko Mäkelä
2719cc4925 Merge 10.11 into 11.4 2024-12-02 11:35:34 +02:00
Marko Mäkelä
3d23adb766 Merge 10.6 into 10.11 2024-11-29 13:43:17 +02:00
Marko Mäkelä
7d4077cc11 Merge 10.5 into 10.6 2024-11-29 12:37:46 +02:00
Brandon Nesterenko
dbfee9fc2b MDEV-34348: Consolidate cmp function declarations
Partial commit of the greater MDEV-34348 scope.
MDEV-34348: MariaDB is violating clang-16 -Wcast-function-type-strict

The functions queue_compare, qsort2_cmp, and qsort_cmp2
all had similar interfaces, and were used interchangeably
and unsafely cast to one another.

This patch consolidates all the functions into the
qsort_cmp2 interface.

Reviewed By:
============
Marko Mäkelä <marko.makela@mariadb.com>
2024-11-23 08:14:22 -07:00
Oleksandr Byelkin
b12ff287ec Merge branch '11.6' into 11.7 2024-11-10 19:22:21 +01:00
Oleksandr Byelkin
9e1fb104a3 Merge tag '11.4' into 11.6
MariaDB 11.4.4 release
2024-11-08 07:17:00 +01:00
Sergei Golubchik
f2512c0fa8 cleanup: prepare_for_insert() -> prepare_for_modify()
make handler::prepare_for_insert() be called to prepare
the handler for writes: INSERT/UPDATE/DELETE.
2024-11-05 14:00:49 -08:00
Sergei Golubchik
44c6328cbb cleanup: thd->alloc<>() and thd->calloc<>()
create templates

  thd->alloc<X>(n) to use instead of (X*)thd->alloc(sizeof(X)*n)

and the same for thd->calloc(). By default the type is char,
so the old usage thd->alloc(size) works too.
2024-11-05 14:00:48 -08:00
Sergei Golubchik
eff16d7593 Revert "MDEV-15458 Segfault in heap_scan() upon UPDATE after ADD SYSTEM VERSIONING"
This partially reverts 43623f04a9

Engines have to set ::position() after ::write_row(), otherwise
the server won't be able to refer to the row just inserted.
This is important for high-level indexes.

The heap part isn't reverted, so heap doesn't support high-level indexes.
To fix this, it'll need info->lastpos in addition to info->current_ptr
2024-11-05 14:00:48 -08:00
Oleksandr Byelkin
c770bce898 Merge branch '11.2' into 11.4 2024-10-30 15:11:17 +01:00
Oleksandr Byelkin
69d033d165 Merge branch '10.11' into 11.2 2024-10-29 16:42:46 +01:00