Commit Graph

1073 Commits

Oleksandr Byelkin
c976b527db Merge branch '11.8' into bb-12.1-release 2025-10-08 09:05:38 +02:00
Aleksey Midenkov
ff33f49d9a Merge 11.4 into 11.8 2025-09-29 18:25:09 +03:00
Marko Mäkelä
e8ef8c0055 Merge 10.11 into 11.4 2025-09-24 13:40:09 +03:00
bsrikanth-mariadb
573d3ad1c6 MDEV-33309: for update|delete analyze format=json doesn't show r_other_time_ms
Single-table UPDATE/DELETE statements only showed r_total_time_ms in the top-level query block.
Replace it with r_table_time_ms and r_other_time_ms.
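
A sketch of how the change could be observed (schema assumed):

  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 1), (2, 2);
  ANALYZE FORMAT=JSON UPDATE t1 SET b = b + 1 WHERE a > 0;
  -- the top-level query block now reports "r_table_time_ms" and
  -- "r_other_time_ms" in place of "r_total_time_ms"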
2025-09-22 10:24:14 +05:30
Nikita Malyavin
28472359b1 MDEV-15990 versioning: don't allow changes in the past 2025-09-17 18:47:25 +03:00
Nikita Malyavin
8001679af6 MDEV-15990 handle timestamp-based collisions as well
Timestamp-versioned row deletion was subject to a collision problem: if the
current timestamp hasn't changed, a sequence of row delete+insert could
get a duplicate-key error. A row delete would find a conflicting history row
and return an error.

This is true both for REPLACE and DELETE statements; however, in REPLACE the
"optimized" path is usually taken, especially in the tests. There, a single
versioned row update is substituted for the delete+insert. In the end, both
paths end up as ha_update_row + ha_write_row.

The solution is to handle the history collision explicitly.

From the design perspective, the user shouldn't experience loss of history
rows, unless there's a technical limitation.

By contrast, trxid-based changes should never generate history for the same
transaction, see MDEV-15427.

If two operations on the same row happen so quickly that they share the
same timestamp, the history row shouldn't be lost. We can still write a
history row, though it'll have row_start == row_end.

We cannot store more than one such historical row, as this would violate the
unique constraint on row_end. So we will have to physically delete the row if
such a history row is already present.

In this commit:
1. Improve TABLE::delete_row to handle the history collision: if an update
   results in a duplicate error, delete the row for real.
2. Use TABLE::delete_row in the non-optimistic path of REPLACE, where the
   system-versioned case now belongs entirely.
2025-09-17 18:29:47 +03:00
Nikita Malyavin
aeb25743af MDEV-15990 REPLACE on a precise-versioned table returns ER_DUP_ENTRY
We had protection against it, by allowing a versioned delete only if:
trx->id != table->vers_start_id()

For REPLACE this check fails: REPLACE calls ha_delete_row(record[2]), but
table->vers_start_id() returns the value from record[0], which is irrelevant.

The same problem affects Field::is_max, which may have checked the wrong record.

Fix:
* Refactor Field::is_max to optionally accept a pointer as an argument.
* Refactor vers_start_id and vers_end_id to always accept a pointer to the
record. The difference from is_max is that is_max accepts a pointer to
the field data, rather than to the record.

Refactoring val_int() to accept the argument would be too effortful, so
instead the value in the record is fetched directly, as is done in
Field_longlong.
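
The user-visible symptom, sketched for a trxid-based (precise) versioned
table (InnoDB assumed; column names arbitrary):

  CREATE TABLE t (
    id INT PRIMARY KEY, v INT,
    row_start BIGINT UNSIGNED GENERATED ALWAYS AS ROW START,
    row_end BIGINT UNSIGNED GENERATED ALWAYS AS ROW END,
    PERIOD FOR SYSTEM_TIME (row_start, row_end)
  ) WITH SYSTEM VERSIONING ENGINE=InnoDB;
  BEGIN;
  INSERT INTO t (id, v) VALUES (1, 10);
  REPLACE INTO t (id, v) VALUES (1, 20);  -- could return ER_DUP_ENTRY before the fix
  COMMIT;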
2025-09-17 11:38:55 +03:00
Oleksandr Byelkin
15b1426c3a Merge branch '10.11' into bb-11.4-release 2025-09-15 16:17:33 +02:00
Oleksandr Byelkin
0707dac202 Merge branch '10.6' into 10.11 2025-09-12 13:08:40 +02:00
Sergei Golubchik
5743435954 MDEV-37397 Assertion `bitmap_is_set(&read_partitions, next->id)' failed in int partition_info::vers_set_hist_part(THD *)
after 633417308f (MDEV-37312) lookup_handler is locked with F_WRLCK,
because it may be used for deleting rows.

And lookup_handler is locked with F_WRLCK after prune_partitions(),
but the main handler is locked before it, and may expect all
partitions to be in the read set, non-pruned.

Let's prepare the lookup handler before prune_partitions().
2025-09-04 17:20:02 +02:00
Monty
882f6fa3aa Fixed typos
- Removed duplicate words, like "the the" and "to to"
- Removed duplicate lines (one duplicated sort line found in mysql.cc)
- Fixed some typos found while searching for duplicate words.

Command used to find duplicate words:
egrep -rI "\s([a-zA-Z]+)\s+\1\s" | grep -v param

Thanks to Artjoms Rimdjonoks for the command and pointing out the
spelling errors.
2025-09-04 18:08:39 +03:00
Nikita Malyavin
0108664a8a Merge branch 10.11 into 11.4
# Conflicts:
#	sql/handler.h
#	sql/log_event.h
#	sql/log_event_server.cc
2025-09-02 15:58:39 +02:00
Nikita Malyavin
db26bf7ada MDEV-15990 versioning: don't allow changes in the past 2025-08-04 17:44:05 +02:00
Nikita Malyavin
e56f6c585e MDEV-15990 handle timestamp-based collisions as well
Timestamp-versioned row deletion was subject to a collision problem: if the
current timestamp hasn't changed, a sequence of row delete+insert could
get a duplicate-key error. A row delete would find a conflicting history row
and return an error.

This is true both for REPLACE and DELETE statements; however, in REPLACE the
"optimized" path is usually taken, especially in the tests. There, a single
versioned row update is substituted for the delete+insert. In the end, both
paths end up as ha_update_row + ha_write_row.

The solution is to handle the history collision explicitly.

From the design perspective, the user shouldn't experience loss of history
rows, unless there's a technical limitation.

By contrast, trxid-based changes should never generate history for the same
transaction, see MDEV-15427.

If two operations on the same row happen so quickly that they share the
same timestamp, the history row shouldn't be lost. We can still write a
history row, though it'll have row_start == row_end.

We cannot store more than one such historical row, as this would violate the
unique constraint on row_end. So we will have to physically delete the row if
such a history row is already present.

In this commit:
1. Improve TABLE::delete_row to handle the history collision: if an update
   results in a duplicate error, delete the row for real.
2. Use TABLE::delete_row in the non-optimistic path of REPLACE, where the
   system-versioned case now belongs entirely.
2025-08-04 17:44:05 +02:00
Nikita Malyavin
6353a80ef5 MDEV-15990 REPLACE on a precise-versioned table returns ER_DUP_ENTRY
We had protection against it, by allowing a versioned delete only if:
trx->id != table->vers_start_id()

For REPLACE this check fails: REPLACE calls ha_delete_row(record[2]), but
table->vers_start_id() returns the value from record[0], which is irrelevant.

The same problem affects Field::is_max, which may have checked the wrong record.

Fix:
* Refactor Field::is_max to optionally accept a pointer as an argument.
* Refactor vers_start_id and vers_end_id to always accept a pointer to the
record. The difference from is_max is that is_max accepts a pointer to
the field data, rather than to the record.

Refactoring val_int() to accept the argument would be too effortful, so
instead the value in the record is fetched directly, as is done in
Field_longlong.
2025-08-04 17:44:05 +02:00
Nikita Malyavin
f8ac3a7c0e multi_delete: fix unlikely -> likely 2025-08-04 17:44:05 +02:00
bsrikanth-mariadb
db937cc971 MDEV-37207: dumping tables for multi delete query doesn't work always
It was observed that, when doing a multi-table DELETE:
-> if there was duplicate data to delete, then no DDLs were dumped to the trace.
-> however, when tested with no data being deleted from the tables,
   the DDLs of the tables were dumped to the trace.

The problem is that store_tables_context_in_trace() is not invoked
from mysql_execute_command() in all situations.

The reason is that multi-table DELETE returned an error when it encountered duplicate rows to delete.
Multi-table DELETE finds record combinations to delete,
and when it has found, say, {t1.rowX, t2.rowY, t3.rowZ}, it attempts to
save t1.rowX in the temp table for t1, t2.rowY in the temp table for t2, and so forth.
When saving a row to be deleted in the temp table, it can encounter error
121 (HA_ERR_FOUND_DUPP_KEY), which is propagated to the caller.

As a fix, HA_ERR_FOUND_DUPP_KEY is no longer treated as an error in
multi_delete::send_data()
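
A sketch of the failing pattern (schema assumed): the same t1 row appears in
several join combinations, so saving its rowid in the temp table hits
HA_ERR_FOUND_DUPP_KEY:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  INSERT INTO t1 VALUES (1);
  INSERT INTO t2 VALUES (1), (1);  -- t1's row matches twice
  DELETE t1, t2 FROM t1 JOIN t2 ON t1.a = t2.b;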
2025-06-28 07:35:08 -04:00
Yuchen Pei
8cdee25952 MDEV-36132 Substitute vcol expressions with indexed vcol fields in ORDER BY and GROUP BY
Also expand vcol field index coverings to include indexes covering all
the fields in the expression. The reasoning goes as follows: let f(c1,
c2, ..., cn) be a function applied to columns c1, c2, ..., cn; if
f(...) is covered by an index, then so should be a vcol vc whose
expression is f(...).

For example, suppose t.vf = t.c1 + t.c2, and t has three indexes: (vf),
(c1, c2), (c1).

Before this change, vf's index covering is a singleton {(vf)}. Let's call
that the "conventional" index covering.

After this change, vf's index covering is now {(vf), (c1, c2)}, since
(c1, c2) covers both c1 and c2. Let's call (c1, c2) in this case the
"extra" covering.

With the coverings updated, when an index in the "extra" covering is
chosen for keyread, the vcol also needs to be calculated. In this case
we mark vcol in the table read_set, and ensure it is computed.

With these changes, we see various improvements, including from using
full table scan + filesort to full index scan + filesort when ORDER BY
an indexed vcol (here vc = c + 1 is a vcol and both c and vc are
indexed):

 explain select c + 1 from t order by vc;
 id	select_type	table	type	possible_keys	key	key_len	ref	rows	Extra
-1	SIMPLE	t	ALL	NULL	NULL	NULL	NULL	10000	Using filesort
+1	SIMPLE	t	index	NULL	c	5	NULL	10000	Using index; Using filesort

The substitutions are followed by updates to all_fields, which includes a
copy of the ORDER BY/GROUP BY item pointers, as well as corresponding
updates to ref_pointer_array so that all_fields and
ref_pointer_array remain in sync.

Another, related change is the recomputation of table index covering
on substitutions. It not only reflects the correct table index
covering after the substitutions, but also improves executions where
the vcol index can be chosen, such as this example (here vc = c + 1
and vc is the only index in the table), from full table scan +
filesort to full index scan:

select vc from t order by c + 1;

We do it in SELECT as well as in single-table DELETE/UPDATE.
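
A hypothetical schema matching the description above:

  CREATE TABLE t (
    c1 INT, c2 INT,
    vf INT GENERATED ALWAYS AS (c1 + c2) VIRTUAL,
    INDEX (vf), INDEX (c1, c2), INDEX (c1)
  );
  -- (c1, c2) is now part of vf's "extra" covering, so plans reading
  -- only (c1, c2) can also compute vf
  SELECT vf FROM t ORDER BY c1 + c2;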
2025-07-22 10:44:12 +10:00
Oleksandr Byelkin
dfcb5c91e0 Merge branch '11.8' into 12.0 2025-06-18 07:50:39 +02:00
Dave Gosselin
dbd7017110 MDEV-36997 Assertion failed in ha_tina::delete_row on multi delete
Multi-delete code invokes ha_rnd_end after applying deferred deletes, following
the same pattern as multi-update.

The CSV engine batches deletes made via calls to delete_row, then applies them
all at once during ha_rnd_end.  For each row batched, the CSV engine decrements
an internal counter of the total rows in the table.  The multi-delete code was
not calling ha_rnd_end, so this internal counter was not consistent with the
total number of rows in the table, triggering the assertion.

In the CSV engine, explicitly delete the destination file before renaming the
source file over it.  This avoids a file rename problem on Windows.  This
change would have been necessary before the latest multi-delete changes, had
this test case existed at that point in time, because the end_read_record call
in do_table_deletes swallowed the rename failure on Windows.
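
A minimal repro sketch (the CSV engine requires NOT NULL columns):

  CREATE TABLE t1 (a INT NOT NULL) ENGINE=CSV;
  CREATE TABLE t2 (b INT NOT NULL) ENGINE=CSV;
  INSERT INTO t1 VALUES (1);
  INSERT INTO t2 VALUES (1);
  -- ha_rnd_end must run so the engine's internal row counter matches
  -- the rows actually left in the table
  DELETE t1, t2 FROM t1 JOIN t2 ON t1.a = t2.b;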
2025-06-17 10:34:52 -04:00
Aleksey Midenkov
f1f9284181 MDEV-34046 Parameterized PS converts error to warning, causes
replication problems

DELETE HISTORY did not process parameterized PS properly, as the
history expression was checked at the prepare stage, when the parameters
were not yet substituted. In that case check_units() succeeded, as there
is no invalid type: Item_param has type_handler_null, which is
inherited from the string type and is a valid type for a history
expression. The warning was thrown when the expression was evaluated
for comparison on delete execution (when the parameter was already
substituted).

The fix postpones check_units() until the first PS execution. We have
to postpone WHERE condition processing until the first execution and
update select_lex.where on every execution, as it is reset to its
state after prepare.
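
The affected pattern, sketched (table name assumed):

  CREATE TABLE t1 (a INT) WITH SYSTEM VERSIONING;
  PREPARE stmt FROM 'DELETE HISTORY FROM t1 BEFORE SYSTEM_TIME ?';
  SET @ts = '2025-01-01 00:00:00';
  EXECUTE stmt USING @ts;  -- check_units() now runs here, on first execution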
2025-06-12 14:52:00 +03:00
Oleksandr Byelkin
f1102da37a Merge branch '11.8' into 12.0 2025-05-22 09:22:55 +02:00
Oleg Smirnov
fa929a2be6 MDEV-36486 Optimizer hints are resolved against the INSERT part of INSERT..SELECT
When processing queries like
  INSERT INTO t1 (..) SELECT .. FROM t1, t2 ...,

there is a single query block (i.e., a single SELECT_LEX) for both INSERT and
SELECT parts. During hint resolution, when hints are attached to particular
TABLE_LIST's, the search is performed by table name across the whole
query block.
So, if a table mentioned in an optimizer hint is present in the INSERT part,
the hint is attached to that table. This is obviously wrong, as
optimizer hints are supposed to affect only the SELECT part of
an INSERT..SELECT statement.

This commit disables attaching hints to tables in the INSERT part
and fixes some other bugs related to INSERT..SELECT statement processing.
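
For instance (a sketch; NO_BNL stands in for any table-level hint), with t1
in both parts the hint should bind only to the SELECT-part t1:

  INSERT INTO t1 (a)
  SELECT /*+ NO_BNL(t1) */ t1.a
  FROM t1 JOIN t2 ON t1.a = t2.b;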
2025-05-05 12:02:48 +07:00
Oleg Smirnov
67319f3e8d MDEV-34860 Implement MAX_EXECUTION_TIME hint
It places a limit N (a timeout value in milliseconds) on how long
a statement is permitted to execute before the server terminates it.

Syntax:
SELECT /*+ MAX_EXECUTION_TIME(milliseconds) */ ...

Only top-level SELECT statements support the hint.
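
A usage sketch (timeout and table arbitrary):

  -- give up after one second
  SELECT /*+ MAX_EXECUTION_TIME(1000) */ * FROM big_table ORDER BY RAND();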
2025-05-05 12:02:47 +07:00
Monty
f8ba5ced55 MDEV-36099 Ensure that creation and usage of temporary tables in replication is predictable
MDEV-36563 Assertion `!mysql_bin_log.is_open()' failed in
           THD::mark_tmp_table_as_free_for_reuse

The purpose of this commit is to ensure that creation and changes of
temporary tables are properly and predictably logged to the binary
log.  It also fixes some bugs where ROW logging was used in MIXED mode,
when STATEMENT would be a better (and expected) choice.

In this commit message, STATEMENT stands for logging to the binary log in
STATEMENT format, MIXED stands for the MIXED binlog format, and ROW for the
ROW binlog format.

New rules for logging of temporary tables:
- CREATE of temporary tables is now by default binlogged only if the
  STATEMENT binlog format is used. If it is binlogged, 1 is stored in
  TABLE_SHARE->table_creation_was_logged. The user can change this
  behavior by setting create_temporary_table_binlog_formats to
  MIXED,STATEMENT, in which case the CREATE is logged in statement
  format also in MIXED mode (as before).
- Changes to temporary tables are binlogged if and only if
  the CREATE was logged. The logging happens under STATEMENT or MIXED.
  If binlog_format=ROW, temporary table changes are not binlogged. A
  temporary table that is changed under ROW is marked as 'not up to
  date in binlog' and no future row changes are logged. Any usage of
  this temporary table will force row logging of the other tables in
  any future statements that use it.
- DROP TEMPORARY is binlogged only if the CREATE was binlogged.

Changes done:
- Row logging is forced for any statement using temporary tables that
  are not up to date in the binary log.
  (Before, row logging was forced if the user had a temporary table.)
- If there are any changes to a temporary table that are not binlogged,
  the table is marked as not up to date.
- TABLE_SHARE->table_creation_was_logged has a new definition for
  temporary tables:
  0  Table creation was not logged to the binary log
  1  Table creation was logged to the binary log and the table is up to date.
  2  Table creation was logged to the binary log but some changes were
     not logged to the binary log.
  'Not up to date in the binary log' is defined as value 0 or 2.
- If a multi-table-update or multi-table-delete fails then
  all updated temporary tables are marked as not up to date.
- Enforce row logging if the query is using temporary tables
  that are not up to date.
  (Before, row logging was enforced if the user had any
  temporary tables.)
- When dropping temporary tables, use IF EXISTS. This ensures
  that the slave will not stop if it crashed and lost its
  temporary tables.
- Removed the comment and version from the DROP /*!4000 TEMPORARY.. generated
  when a connection that has open temporary tables closes. Added 'generated
  by server' at the end of the DROP.

Bugs fixed:
- When using temporary tables with commands that forced row based logging,
  like INSERT INTO temporary_table VALUES (UUID()), this was never
  logged, which caused the temporary table to be inconsistent on
  master and slave.
- The binlog format used is now clearly defined. It now depends only
  on the current binlog_format and the tables used.
  Before, it depended on whether the user had ANY temporary tables and
  on the state of 'current_stmt_binlog_format' set by previous queries.
  This also caused temporary tables to be logged to the binary log in
  some cases.
- CREATE TABLE t1 LIKE not_logged_temporary_table caused replication
  to stop.
- Renames of non-binlogged temporary tables were written to the binary log,
  which caused replication to stop.

Changes in behavior:

- By default create_temporary_table_binlog_formats=STATEMENT, which
  means that CREATE TEMPORARY is not logged to the binary log under MIXED
  binary logging. This can be changed by setting
  create_temporary_table_binlog_formats to MIXED,STATEMENT.
- Using temporary tables that were not logged to the binary log will
  cause any query using them to update other tables to be logged in
  ROW format. Before, all queries were logged in ROW format if the user
  had any temporary tables, even if they were not used by the query.
- Generated DROP TEMPORARY TABLE is now always using IF EXISTS and
  has a "generated by server" comment in the binary log.

The consequence of the above is that manipulating many rows
through temporary tables will by default be slower in MIXED mode.

For example:
  BEGIN;
  CREATE TEMPORARY TABLE tmp AS SELECT a, b, c FROM
  large_table1 JOIN large_table2 ON ...;
  INSERT INTO other_table SELECT b, c FROM tmp WHERE a <100;
  DROP TEMPORARY TABLE tmp;
  COMMIT;

By default this will create a huge entry in the binary log, compared
to just a few hundred bytes in STATEMENT mode. However, the change in
this commit makes the usage of temporary tables more reliable and
predictable and is thus worth it. Using STATEMENT mode or setting
create_temporary_table_binlog_formats can avoid this issue.
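
For example, to restore the pre-change MIXED-mode behavior described above
(the variable name comes from this commit):

  SET GLOBAL create_temporary_table_binlog_formats = 'MIXED,STATEMENT';
  -- the new default logs CREATE TEMPORARY only under STATEMENT:
  SET GLOBAL create_temporary_table_binlog_formats = 'STATEMENT';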
2025-04-28 12:59:38 +03:00
Sergei Golubchik
237e24497b Merge remote-tracking branch 'github/bb-11.4-release' into bb-11.8-serg 2025-04-27 19:40:00 +02:00
Dmitry Shulga
ecb7c9b692 MDEV-10164: Add support for TRIGGERS that fire on multiple events
Added the capability to create a trigger associated with several trigger
events. For this goal, the syntax of the CREATE TRIGGER statement
was extended to support the syntax structure { event [ OR ... ] }
for the `trigger_event` clause. Since one trigger will be able to
handle several events, a way is needed to determine what kind of
event is being handled on execution of a trigger. For this goal,
support for the clauses INSERTING, UPDATING, and DELETING was added by
this patch. These clauses can be used inside a trigger body to detect
what kind of trigger action is currently being processed, using the
following boilerplate:
  IF INSERTING THEN ...
  ELSIF UPDATING THEN ...
  ELSIF DELETING THEN ...
In case one of the clauses INSERTING, UPDATING, DELETING specified in
a trigger's body does not match the trigger event type, the error
ER_INCOMPATIBLE_EVENT_FLAG is emitted.

After this patch is pushed, one Trigger object will be associated with
several trigger events. This means that the array
  Table_triggers_list::triggers
can contain several pointers to the same Trigger object in array members
corresponding to different events. Moreover, support of several trigger
events for the same trigger requires that the data members `next` and
`action_order` of the Trigger class be converted to arrays that store
the related information per trigger event.

The ability to specify the same trigger for different event types results in
the necessity to handle invalid cases on execution of the multi-event
trigger, when the OLD or NEW qualifiers don't match the current event
type against which the trigger is run. The qualifier OLD should produce
the NULL value for an INSERT event, whereas the qualifier NEW should
produce the NULL value for a DELETE event.
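
Putting the pieces together, a hypothetical multi-event trigger (table and
columns assumed; client delimiter handling omitted):

  CREATE TRIGGER t1_log
  BEFORE INSERT OR UPDATE OR DELETE ON t1 FOR EACH ROW
  BEGIN
    IF INSERTING THEN
      SET NEW.note = 'inserted';
    ELSIF UPDATING THEN
      SET NEW.note = 'updated';
    ELSIF DELETING THEN
      SET @deleted_rows = @deleted_rows + 1;  -- NEW is NULL for DELETE
    END IF;
  END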
2025-04-19 18:36:03 +07:00
Dave Gosselin
d3c9a2ee21 MDEV-35510 ASAN build crashes during bootstrap
Avoid ASAN failure by collecting statistics from Result objects
before cleaning them up.  In related single-table cases, statistics
are maintained directly by the single-table update and delete
functions.
2025-04-14 12:56:39 -04:00
Vasilii Lakhin
717c12de0e Fix typos in C comments inside sql/ 2025-03-14 12:08:56 +04:00
ParadoxV5
d5ba6f71b9 Tag push_warning_printf with ATTRIBUTE_FORMAT
* Let GCC `-Wformat` check formats sent to these `my_vsnprintf_ex` users
* Migrate them from the old extension specifiers
  to the new `-Wformat`-compatible suffixes
2025-02-12 10:17:44 +01:00
Sergei Golubchik
9ee09a33bb Merge branch '11.7' into 11.8 2025-02-11 20:29:43 +01:00
Sergei Golubchik
ba01c2aaf0 Merge branch '11.4' into 11.7
* rpl.rpl_system_versioning_partitions updated for MDEV-32188
* innodb.row_size_error_log_warnings_3 changed error for MDEV-33658
  (checks are done in a different order)
2025-02-06 16:46:36 +01:00
Dave Gosselin
edd52b7fc7 MDEV-30469 Feature rebase
Upon rebase, some error codes changed order, so tests were re-recorded
accordingly.  Merge with trigger changes from MDEV-34724.
2025-02-05 11:31:57 -05:00
Dave Gosselin
5e07d1abd4 MDEV-35848, MDEV-35568 Reintroduce delete_while_scanning for multi_delete
Reintroduces delete_while_scanning optimization for multi_delete.
Reverse some test changes from the initial feature development now
that we delete-on-the-fly once again.
2025-02-05 10:12:30 -05:00
Dave Gosselin
5001300bd4 MDEV-30469 Support ORDER BY and LIMIT for multi-table DELETE, index hints for single-table DELETE
We now allow multi-table queries with ORDER BY and LIMIT, such as:
  delete t1.*, t2.* from t1, t2 order by t1.id desc limit 3;
To predict what rows will be deleted, run the equivalent select:
  select t1.*, t2.* from t1, t2 order by t1.id desc limit 3;
Additionally, index hints are now supported with single-table DELETE statements:
  delete from t2 use index(xid) order by (id) limit 2;

This approach changes the multi_delete SELECT result interceptor to use a temporary
table to collect row ids pertaining to the rows that will be deleted, rather than
directly deleting rows from the target table(s).  Row ids are collected during
send_data, then read during send_eof to delete target rows.  In the event that the
temporary table created in memory is not big enough for all matching rows, it is
converted to an Aria table.

Other changes:
  - Deleting from a sequence now affects zero rows instead of emitting an error

Limitations:
  - The federated connector does not create implicit row ids, so we need to
  use a key when conditionally deleting.  See the change in federated_maybe_16324629.test
2025-02-05 10:12:27 -05:00
Sergei Golubchik
7d657fda64 Merge branch '10.11' into 11.4 2025-01-30 12:01:11 +01:00
Sergei Golubchik
e69f8cae1a Merge branch '10.6' into 10.11 2025-01-30 11:55:13 +01:00
Sergei Golubchik
03d2328785 MDEV-35944 DELETE fails to notice transaction abort, violating ACID
Process errors of read_record().

Also, add an assert that Marko requested
2025-01-29 10:43:29 +01:00
Dmitry Shulga
4c956fa15b MDEV-34724: Skipping a row operation from a trigger
The implementation of this task adds the ability to raise a signal with
SQLSTATE '02TRG' from a BEFORE INSERT/UPDATE/DELETE trigger and handles
this signal as an indicator meaning 'throw away the current row'
when processing the INSERT/UPDATE/DELETE statement. The signal with
SQLSTATE '02TRG' has special meaning only in case it is raised inside
BEFORE triggers; for AFTER triggers this value of SQLSTATE isn't treated
in any special way. In accordance with the SQL standard, the SQLSTATE class '02'
means NO DATA, and sql_errno for this class is set to the value
ER_SIGNAL_NOT_FOUND by the current implementation of MariaDB server.
The implementation of this task assigns the value ER_SIGNAL_SKIP_ROW_FROM_TRIGGER
to sql_errno in the Diagnostics_area in case the signal is raised from a trigger
and SQLSTATE has the value '02TRG'.

To catch a signal with SQLSTATE '02TRG' and handle it in a special way, the methods
 Table_triggers_list::process_triggers
 select_insert::store_values
 select_create::store_values
 Rows_log_event::process_triggers
and the overloaded function
 fill_record_n_invoke_before_triggers
were extended with an extra out parameter returning a flag for whether
to skip the current values being processed by the INSERT/UPDATE/DELETE
statement. This extra parameter is passed as nullptr in case of an AFTER
trigger; for a BEFORE trigger it points to a variable that stores a marker
of whether to skip the current record or store it by calling write_record().
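
A sketch of the skip-row behavior (schema assumed; client delimiter handling
omitted):

  CREATE TABLE t1 (a INT);
  CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW
    IF NEW.a < 0 THEN
      SIGNAL SQLSTATE '02TRG';  -- throw away this row, no error to the client
    END IF;
  INSERT INTO t1 VALUES (-1), (2);  -- only the row (2) is stored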
2025-01-27 16:30:27 +07:00
Sergei Petrunia
1c2a83179d MDEV-35616: Add basic optimizer support for virtual column
(Review input addressed)

After this patch, the optimizer can handle virtual column expressions
in WHERE/ON clauses. If the table has an indexed virtual column:

  ALTER TABLE t1
    ADD COLUMN vcol INT AS (col1+1),
    ADD INDEX idx1(vcol);

and the query uses the exact virtual column expression:

  SELECT * FROM t1 WHERE col1+1 <= 100

then the optimizer will be able to use index idx1 for it.

This is achieved by walking the WHERE/ON clauses and replacing instances
of virtual column expression (like "col1+1" above) with virtual column's
Item_field (like "vcol"). The latter can be processed by the optimizer.

Replacement is considered (and done) only in items that are potentially
usable by the range optimizer.
2025-01-25 10:50:52 +02:00
Marko Mäkelä
98dbe3bfaf Merge 10.5 into 10.6 2025-01-20 09:57:37 +02:00
Sergei Golubchik
a69da0c31e MDEV-19761 Before Trigger not processed for Not Null Columns if no explicit value and no DEFAULT
it's incorrect to zero out table->triggers->extra_null_bitmap
before a statement, because if an insert uses an explicit field list
and omits a field that has no default value, the field should
get NULL implicitly. So extra_null_bitmap should have 1s for all
fields that have no defaults

* create extra_null_bitmap_init and initialize it as above
* copy extra_null_bitmap_init to extra_null_bitmap for inserts
* still zero out extra_null_bitmap for updates/deletes where
  all fields definitely have a value
* make not_null_fields_have_null_values() send
  ER_NO_DEFAULT_FOR_FIELD for fields with no default and no value,
  otherwise the creation of a trigger with an empty body would change the
  error message
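
The case being fixed, sketched (names assumed):

  CREATE TABLE t1 (a INT NOT NULL, b INT);
  CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW
    SET NEW.a = COALESCE(NEW.a, 0);
  -- 'a' is omitted and has no default: the BEFORE trigger must see
  -- NULL in NEW.a (extra_null_bitmap has a 1 for it) and may fill it
  INSERT INTO t1 (b) VALUES (1);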
2025-01-17 23:42:56 +01:00
Oleksandr Byelkin
0d35fe6e57 MDEV-35326: Memory Leak in init_io_cache_ext upon SHUTDOWN
The problems were that:
1) resources were freed asymmetrically: on normal execution in send_eof,
 but in case of error, in the destructor.
2) the destructor was not called for result objects in case of SPs
(so if the last SP execution ended with an error, resources were not
freed on reinit before execution (cleanup() is called before the next
execution) and the destructor was also not called due to the lack of a
delete call for the object).

Result cleanup() was renamed to reset_for_next_ps_execution() to better
reflect its function.

All result methods were revised and the freeing of resources made
symmetric.

The destructor of the result object is now called for SPs.

Added previously skipped invalidation in case of error in INSERT.

Removed the misleading naming of reset(thd) (which could be confused
with reset()).
2025-01-13 10:04:27 +01:00
Marko Mäkelä
15700f54c2 Merge 11.4 into 11.7 2025-01-09 09:41:38 +02:00
Marko Mäkelä
17f01186f5 Merge 10.11 into 11.4 2025-01-09 07:58:08 +02:00
Marko Mäkelä
3f914afd3a Merge 10.6 into 10.11 2025-01-02 12:39:56 +02:00
Yuchen Pei
671f80c738 Merge branch '10.5' into 10.6 2024-12-17 11:06:09 +11:00
Dmitry Shulga
54c1031b74 MDEV-34958: after Trigger doesn't work correctly with bulk insert
This bug has the same nature as the issues
  MDEV-34718: Trigger doesn't work correctly with bulk update
  MDEV-24411: Trigger doesn't work correctly with bulk insert

To fix the issue covering all use cases, resetting thd->bulk_param
temporarily to nullptr before invoking triggers and restoring
its original value on finishing execution of a trigger is moved to the method
  Table_triggers_list::process_triggers
which is ultimately invoked for any kind of trigger.
2024-12-13 16:19:39 +07:00
Marko Mäkelä
33907f9ec6 Merge 11.4 into 11.7 2024-12-02 17:51:17 +02:00
Marko Mäkelä
2719cc4925 Merge 10.11 into 11.4 2024-12-02 11:35:34 +02:00