Commit Graph

2907 Commits

Author SHA1 Message Date
Oleksandr Byelkin
fc5772ce17 Merge branch '10.5' into 10.6 2024-08-20 09:11:34 +02:00
Dmitry Shulga
ba5482ffc2 MDEV-34718: Trigger doesn't work correctly with bulk update
Running an UPDATE statement in PS mode with positional
parameter(s) bound to an array of actual values (that is,
prepared to be run in bulk mode) results in incorrect behaviour
in the presence of an ON UPDATE trigger that also executes an
UPDATE statement. The same is true for handling a DELETE
statement in the presence of an ON DELETE trigger. Typically,
the visible effect of such incorrect behaviour is a wrong number
of updated/deleted rows in the target table. Additionally, for an
UPDATE statement, the number of modified rows and the status
message returned by the statement contain wrong information about
the number of modified rows.

The reason for the incorrect number of updated/deleted rows is that
the data structure used for binding positional arguments to their
actual values is stored in THD (thd->bulk_param) and reused when
processing every INSERT/UPDATE/DELETE statement. This leads to the
actual values bound to the top-level UPDATE/DELETE statement being
consumed by the DML statements executed from the triggers' bodies.

To fix the issue, thd->bulk_param is temporarily reset to nullptr
before invoking triggers and restored once they finish executing.

The second part of the problem, the wrong number of affected rows
reported through the Connector/C API, is caused by the fact that the
diagnostics area is reused by the original DML statement and by the
statements invoked from a trigger. This has to be taken into account
when finalizing the state of the diagnostics area on statement
completion.

Important remark: when the macro DBUG_OFF is defined, calling the method
  Diagnostics_area::reset_diagnostics_area()
resets the data members
  m_affected_rows, m_statement_warn_count.
Values of these data members of the class Diagnostics_area are used when
sending OK and EOF packets. If a DML statement is executed in PS bulk
mode, such resetting results in sending wrong affected-rows values to the
client in case the DML statement fires triggers. Therefore, reset
these data members only if the current statement being processed
is not run in bulk mode.
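
A minimal standalone sketch of the save/clear/restore pattern described
above; THD, bulk_param and the trigger body below are simplified
stand-ins, not the real server classes.

  #include <cstdio>

  struct THD { void *bulk_param= nullptr; };

  // Clears thd->bulk_param for the duration of trigger execution and
  // restores the saved value afterwards.
  class Bulk_param_guard
  {
    THD *m_thd;
    void *m_saved;
  public:
    explicit Bulk_param_guard(THD *thd)
      : m_thd(thd), m_saved(thd->bulk_param)
    {
      m_thd->bulk_param= nullptr;       // triggers must not see the bound array
    }
    ~Bulk_param_guard() { m_thd->bulk_param= m_saved; }  // restore for the next bulk row
  };

  static void fire_trigger(THD *thd)
  {
    // A trigger body running its own DML must not consume the array bound
    // to the top-level statement.
    std::printf("trigger sees bulk_param=%p\n", thd->bulk_param);
  }

  int main()
  {
    THD thd;
    int bound_values[3]= {1, 2, 3};
    thd.bulk_param= bound_values;       // bound to the top-level bulk UPDATE
    {
      Bulk_param_guard guard(&thd);
      fire_trigger(&thd);               // prints a null pointer
    }
    std::printf("after triggers bulk_param=%p\n", thd.bulk_param);  // restored
    return 0;
  }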
2024-08-19 12:13:43 +07:00
Yuchen Pei
d7042ec4da Merge branch '10.5' into 10.6 2024-06-26 09:16:54 +08:00
Dmitry Shulga
8b169949d6 MDEV-24411: Trigger doesn't work correctly with bulk insert
Executing an INSERT statement in PS mode with a positional parameter
bound to an array could result in an incorrect number of inserted rows
when there is a BEFORE INSERT trigger that executes yet another
INSERT statement to put a copy of the row being inserted into some table.

The reason for the incorrect number of inserted rows is that the data
structure used for binding positional arguments to their actual values
is stored in THD (thd->bulk_param) and reused when processing every
INSERT statement. This leads to the actual values bound to the top-level
INSERT statement being consumed by the INSERT statements executed from
the triggers' bodies.

To fix the issue, thd->bulk_param is temporarily reset to nullptr
before invoking triggers and restored once they finish executing.
2024-06-25 09:52:52 +07:00
Marko Mäkelä
0076eb3d4e Merge 10.5 into 10.6 2024-06-24 13:09:47 +03:00
Dave Gosselin
db0c28eff8 MDEV-33746 Supply missing override markings
Find and fix missing virtual override markings.  Updates cmake
maintainer flags to include -Wsuggest-override and
-Winconsistent-missing-override.
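
A small sketch of what the added markings and warning flags catch; the
class names are hypothetical, not taken from the server source.

  // Compile with -Wsuggest-override (and, with clang,
  // -Winconsistent-missing-override) to get a warning for close(), which
  // overrides a virtual function but is not marked 'override'.
  struct handler_base
  {
    virtual int open()  { return 0; }
    virtual int close() { return 0; }
    virtual ~handler_base() = default;
  };

  struct my_handler : public handler_base
  {
    int open() override { return 1; }   // explicitly marked: good
    int close()         { return 1; }   // missing 'override': warning
  };

  int main()
  {
    my_handler h;
    return h.open() - h.close();        // 0
  }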
2024-06-20 11:32:13 -04:00
Sergei Golubchik
7b53672c63 Merge branch '10.5' into 10.6 2024-05-08 20:06:00 +02:00
Sergei Golubchik
4f5dea43df cleanup
* remove dead code
* simplify the check for table->s->next_number_index
* misc
2024-05-05 21:37:08 +02:00
Nikita Malyavin
72429cad7f MDEV-30046 wrong row targeted with "insert ... on duplicate" and "replace"
When HA_DUPLICATE_POS is not supported, the row to replace was located
with ha_index_read_idx_map, which navigates by the hash alone.

Therefore, given a hash collision, it may choose an incorrect row.

handler::position would be correct and very convenient to use here.

dup_ref is already set by the handler independently of the engine
capabilities; when an extra lookup is made (for a long unique or
something else, for example WITHOUT OVERLAPS), such an error is
indicated by file->lookup_errkey != -1.
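
A toy illustration of the failure mode and of why an exact row position
is the right tool; all types here are hypothetical, while dup_ref and
handler::position() are the real members mentioned above.

  #include <cstdint>
  #include <iostream>
  #include <string>
  #include <vector>

  struct Row { std::string key; std::string payload; };

  // Deliberately weak hash so that two different keys collide.
  static uint32_t weak_hash(const std::string &s) { return s.empty() ? 0 : s[0]; }

  int main()
  {
    std::vector<Row> table= {{"apple", "old apple"}, {"avocado", "old avocado"}};
    std::string new_key= "avocado";

    // Buggy lookup: take the first row whose hash matches -- on a collision
    // this can be the wrong row.
    size_t by_hash= 0;
    for (size_t i= 0; i < table.size(); i++)
      if (weak_hash(table[i].key) == weak_hash(new_key)) { by_hash= i; break; }

    // Fix: the insert attempt that detected the duplicate already knows the
    // exact position of the conflicting row (dup_ref in the server); target
    // that row instead of re-searching by hash.
    size_t dup_pos= 0;
    for (size_t i= 0; i < table.size(); i++)
      if (table[i].key == new_key) { dup_pos= i; break; }

    std::cout << "hash-only pick: " << table[by_hash].key << "\n";   // apple (wrong)
    std::cout << "position pick:  " << table[dup_pos].key << "\n";   // avocado (right)
    return 0;
  }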
2024-05-05 18:38:34 +02:00
Monty
b5d65fc105 Optimize performance of my_bitmap
MDEV-33502 Slowdown when running nested statement with many partitions

This change was triggered to help some MariaDB users with close to
10000 bits in their bitmaps.

- Change the underlying storage to be 64-bit instead of 32-bit.
  - This reduces the number of loops needed to scan a bitmap.
  - This can make some bitmaps 4 bytes larger.
- Ensure that all unused top bits are always 0 (this simplifies the code,
  as the last 64-bit word is no longer a special case).
- Use my_find_first_bit() to find the first set bit, which is much faster
  than scanning byte by byte and then bit by bit (see the sketch after
  this list).

Other things:
- Added a bool to remember whether my_bitmap_init() allocated the bitmap
  array. my_bitmap_free() will only free arrays it allocated itself.
  This allowed removing the 'bitmap=0' assignments before calling
  my_bitmap_free() in cases where the bitmaps were allocated externally.
- my_bitmap_init() sets bitmap to 0 in case of failure.
- Added 'universal' asserts to most bitmap functions.
- Change all remaining calls to bitmap_init() to my_bitmap_init().
  - To finish the change from 2014.
- Changed all usage of uint32 in my_bitmap.h to my_bitmap_map.
- Updated bitmap_copy() to handle bitmaps of different size.
- Removed const from bitmap_exists_intersection() as this caused casts
  on all usage.
- Removed the unused function bitmap_set_above().
- Renamed create_last_word_mask() to create_last_bit_mask() (to match
  name changes in my_bitmap.cc)
- Extended bitmap-t with tests for more bitmap functions.
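
A minimal sketch of the two core ideas above: 64-bit words whose unused
top bits are kept at zero, and word-at-a-time search for the first set
bit. This is standalone illustration code, not the my_bitmap API.

  #include <cstdint>
  #include <iostream>
  #include <vector>

  struct Tiny_bitmap
  {
    std::vector<uint64_t> words;
    uint64_t last_word_mask;                 // keeps unused top bits at 0

    explicit Tiny_bitmap(uint32_t n_bits)
      : words((n_bits + 63) / 64, 0)
    {
      uint32_t used= n_bits % 64;
      last_word_mask= used ? (~uint64_t{0} >> (64 - used)) : ~uint64_t{0};
    }
    void set_bit(uint32_t bit) { words[bit / 64]|= uint64_t{1} << (bit % 64); }
    void set_all()
    {
      for (auto &w : words) w= ~uint64_t{0};
      words.back()&= last_word_mask;         // last word is not a special case later
    }
    // First set bit, scanning one 64-bit word at a time instead of going
    // byte by byte and then bit by bit.
    int64_t find_first_bit() const
    {
      for (size_t i= 0; i < words.size(); i++)
        if (words[i])
          return int64_t(i) * 64 + __builtin_ctzll(words[i]);
      return -1;                             // no bit set
    }
  };

  int main()
  {
    Tiny_bitmap map(100);                    // 100 bits -> two 64-bit words
    map.set_bit(77);
    std::cout << map.find_first_bit() << "\n";  // 77
    return 0;
  }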
2024-02-27 14:51:33 +02:00
Sergei Golubchik
3f6038bc51 Merge branch '10.5' into 10.6 2024-01-31 18:04:03 +01:00
Alexander Barkov
97fcafb9ec MDEV-32837 long unique does not work like unique key when using replace
write_record() when performing REPLACE has an optimization:
- if the unique violation happened in the last unique key, then do UPDATE
- otherwise, do DELETE+INSERT

This patch changes the way of detecting if this optimization
can be applied if the table has long (hash based) unique
(i.e. UNIQUE..USING HASH) constraints.

Problem:

The old condition did not take into account that
TABLE_SHARE and TABLE see long uniques differently:
- TABLE_SHARE sees them as HA_KEY_ALG_LONG_HASH and HA_NOSAME
- TABLE sees them as usual non-unique indexes
So the old condition could erroneously decide that the UPDATE optimization
is possible when there are still some unique hash constraints in the table.

Fix:

- If the current key is a long unique, it now works as follows:

  UPDATE can be done if the current long unique is the last
  long unique, and there are no in-engine (normal) uniques.

- For in-engine uniques nothing changes, it still works as before:

  If the current key is an in-engine (normal) unique:
  UPDATE can be done if it is the last normal unique (see the sketch
  below).
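
A sketch of the resulting decision logic, using hypothetical key
descriptors; the real check inspects TABLE/TABLE_SHARE key flags such as
HA_NOSAME and HA_KEY_ALG_LONG_HASH.

  #include <iostream>
  #include <vector>

  struct Key_desc
  {
    bool is_unique;        // enforces uniqueness (in-engine or long hash)
    bool is_long_hash;     // UNIQUE ... USING HASH constraint
  };

  // REPLACE hit a duplicate on keys[dup_key]: may write_record() update the
  // found row in place, or must it fall back to DELETE+INSERT?
  static bool can_do_update(const std::vector<Key_desc> &keys, size_t dup_key)
  {
    bool engine_unique_exists= false, engine_unique_after= false,
         long_unique_after= false;
    for (size_t i= 0; i < keys.size(); i++)
    {
      if (keys[i].is_unique && !keys[i].is_long_hash)
      {
        engine_unique_exists= true;
        if (i > dup_key) engine_unique_after= true;
      }
      if (keys[i].is_long_hash && i > dup_key)
        long_unique_after= true;
    }
    if (keys[dup_key].is_long_hash)
      // Long unique: must be the last long unique, and no in-engine
      // (normal) uniques may exist at all.
      return !long_unique_after && !engine_unique_exists;
    // In-engine unique: must be the last in-engine unique.
    return !engine_unique_after;
  }

  int main()
  {
    std::vector<Key_desc> keys= {{true, false},    // normal UNIQUE
                                 {true, true}};    // UNIQUE ... USING HASH
    std::cout << can_do_update(keys, 0) << " "     // 1: last normal unique
              << can_do_update(keys, 1) << "\n";   // 0: a normal unique exists
    return 0;
  }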
2024-01-24 17:19:54 +04:00
Sergei Golubchik
e95bba9c58 Merge branch '10.5' into 10.6 2023-12-17 11:20:43 +01:00
Sergei Golubchik
98a39b0c91 Merge branch '10.4' into 10.5 2023-12-02 01:02:50 +01:00
Monty
acdb8b6779 Fixed crash in Delayed_insert::get_local_table()
This was a bug in my previous commit, found by buildbot
2023-11-28 15:31:44 +02:00
Monty
08e6431c8c Fixed memory leak introduced by a fix for MDEV-29932
The leaks are all 40 bytes and happen in this call stack when running
mtr vcol.vcol_syntax:

alloc_root()
...
Virtual_column_info::fix_and_check_exp()
...
Delayed_insert::get_local_table()

The problem was that a MEM_ROOT was copied from THD to a TABLE without
taking into account that new blocks would be allocated through the
TABLE memroot (and would thus be leaked).
In general, one should NEVER copy a MEM_ROOT from one object to another
without clearing the copied memroot!

Fixed by copying, at the end of get_local_table(), all newly allocated
objects to client_thd->mem_root.

Other things:
- Removed references to MEM_ROOT::total_alloc that were wrongly left
  after a previous commit
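
A stripped-down arena showing why the shallow copy leaks: blocks
allocated through the copy are never reachable from the original's free
list. Arena below is a hypothetical type, not the real MEM_ROOT.

  #include <cstddef>
  #include <cstdio>
  #include <cstdlib>

  struct Block { Block *next; };

  struct Arena
  {
    Block *blocks= nullptr;
    void *alloc(std::size_t size)
    {
      Block *b= static_cast<Block*>(std::malloc(sizeof(Block) + size));
      b->next= blocks;
      blocks= b;                    // tracked by *this* arena only
      return b + 1;
    }
    void free_all()
    {
      while (blocks) { Block *next= blocks->next; std::free(blocks); blocks= next; }
    }
  };

  int main()
  {
    Arena thd_root;
    thd_root.alloc(40);

    Arena table_root= thd_root;     // shallow copy: same block list head
    table_root.alloc(40);           // new block tracked only in table_root

    thd_root.free_all();            // frees only the block it knows about
    // The block allocated through table_root is never freed (and its chain
    // now runs into already-freed memory) -- the 40-byte leak described
    // above.
    std::printf("dangling copy list head: %p\n", (void*)table_root.blocks);
    return 0;
  }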
2023-11-27 19:08:14 +02:00
Denis Protivensky
e39c497c80 MDEV-22232: Fix CTAS replay & retry in case it gets BF-aborted
- Add selected tables as shared keys for CTAS certification
- Set proper security context on the replayer thread
- Disallow CTAS command retry

Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
2023-11-21 08:02:23 +01:00
Aleksey Midenkov
f7552313d4 MDEV-29932 Invalid expr in cleanup_session_expr() upon INSERT DELAYED
There are two TABLE objects in each thread: first one is created in
delayed thread by Delayed_insert::open_and_lock_table(), second one is
created in connection thread by Delayed_insert::get_local_table(). It
is copied from the delayed thread table.

When the second table is copied, the copy-assignment operator copies
vcol_refix_list, which is already filled with an item from the delayed
thread. Then get_local_table() adds its own item. Thus both tables
contain the same list with two items, which is wrong. Then the
connection thread finishes and its item is freed, and the delayed
thread tries to access it in vcol_cleanup_expr().

The fix just clears vcol_refix_list in the copied table.

Another problem is that the copied table contains the same mem_root;
any allocations on it become invalid if the original table is freed
(and that is non-deterministic, as it is done in another thread). Since
the copied table is allocated in the connection THD and lives no longer
than thd->mem_root, we may assign its mem_root from thd->mem_root.

Third, it doesn't make sense to call open_and_lock_tables() on a NULL
pointer.
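
A sketch of the shared-list problem and of the fix (clearing the list in
the copy), with hypothetical types; the real member is
TABLE::vcol_refix_list, and in the server the copied list ends up shared
between the two tables.

  #include <iostream>
  #include <list>

  struct Expr { int id; };

  struct Table_copy
  {
    std::list<Expr*> vcol_refix_list;   // items owned by the thread that added them
  };

  int main()
  {
    Expr delayed_item{1};
    Table_copy delayed_table;
    delayed_table.vcol_refix_list.push_back(&delayed_item);

    // Default copy-assignment drags the delayed thread's item into the new
    // table, so the connection thread ends up touching an item it does not
    // own once that item is freed.
    Table_copy local_table= delayed_table;

    // Fix: the copied table starts with an empty list and only tracks the
    // items it adds itself.
    local_table.vcol_refix_list.clear();
    Expr local_item{2};
    local_table.vcol_refix_list.push_back(&local_item);

    std::cout << "delayed items: " << delayed_table.vcol_refix_list.size()   // 1
              << ", local items: " << local_table.vcol_refix_list.size()     // 1
              << "\n";
    return 0;
  }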
2023-11-10 15:46:15 +03:00
Oleksandr Byelkin
b83c379420 Merge branch '10.5' into 10.6 2023-11-08 15:57:05 +01:00
Oleksandr Byelkin
6cfd2ba397 Merge branch '10.4' into 10.5 2023-11-08 12:59:00 +01:00
Jan Lindström
f57deb314f MDEV-31660 : Assertion `client_state.transaction().active() in wsrep_append_key
At the moment we cannot support
wsrep_forced_binlog_format=[MIXED|STATEMENT]
during CREATE TABLE AS SELECT.
The statement will use ROW instead and give
a warning.

Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
2023-09-29 12:54:04 +02:00
Marko Mäkelä
0dd25f28f7 Merge 10.5 into 10.6 2023-09-11 14:46:39 +03:00
Marko Mäkelä
f8f7d9de2c Merge 10.4 into 10.5 2023-09-11 11:29:31 +03:00
Sergei Golubchik
65b3c89430 MDEV-32015 insert into an empty table fails with hash unique
don't enable bulk insert when table->s->long_unique_table
2023-09-06 22:38:41 +02:00
Thirunarayanan Balathandayuthapani
c438284863 MDEV-31835 Remove unnecessary extra HA_EXTRA_IGNORE_INSERT call
- The HA_EXTRA_IGNORE_INSERT call was being made for every inserted row,
and on partitioned tables for every row * every partition.
This leads to slowness during the load..data operation.

- Under a bulk operation, error handling of a multi-row insert statement
will end up emptying the table. This behaviour was introduced by
commit 8ea923f55b (MDEV-24818).
It makes the HA_EXTRA_IGNORE_INSERT call redundant, and we can
use the same behaviour for the insert..ignore statement as well.

- Removed the extra HA_EXTRA_IGNORE_INSERT call as the solution
to improve the performance of the load command.
2023-08-25 17:22:17 +05:30
Monty
a6bf4b5807 MDEV-29693 ANALYZE TABLE still flushes table definition cache when engine-independent statistics is used
This commit enables reloading of engine-independent statistics
without flushing the table from the table definition cache.

This is achieved by allowing multiple versions of the
TABLE_STATISTICS_CB object and having independent pointers to it in
TABLE and TABLE_SHARE. The TABLE_STATISTICS_CB objects are reference
counted and are freed when no one points to them anymore.

TABLE's TABLE_STATISTICS_CB pointer is updated to use the
TABLE_SHARE's pointer when read_statistics_for_tables() is called at
the beginning of a query.
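
A sketch of the sharing scheme, using std::shared_ptr for the reference
counting; the server keeps its own counter inside TABLE_STATISTICS_CB,
and the types below are simplified stand-ins.

  #include <iostream>
  #include <memory>

  struct Stats_cb { double avg_row_length; };

  struct Table_share { std::shared_ptr<Stats_cb> stats_cb; };
  struct Table       { std::shared_ptr<Stats_cb> stats_cb; Table_share *s; };

  // Called at the beginning of a query: the TABLE picks up whatever version
  // the share currently holds, without flushing the table definition cache.
  static void read_statistics_for_table(Table *table)
  {
    table->stats_cb= table->s->stats_cb;
  }

  int main()
  {
    Table_share share;
    share.stats_cb= std::make_shared<Stats_cb>(Stats_cb{100.0});

    Table t1{nullptr, &share};
    read_statistics_for_table(&t1);          // t1 now uses version 1

    // ANALYZE TABLE installs a new version in the share; t1 keeps the old
    // version until its next query, and the old version is freed only when
    // the last user drops its reference.
    share.stats_cb= std::make_shared<Stats_cb>(Stats_cb{120.0});

    std::cout << "t1 sees " << t1.stats_cb->avg_row_length
              << ", share has " << share.stats_cb->avg_row_length << "\n";  // 100, 120
    return 0;
  }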

Main changes:
- read_statistics_for_table() will allocate a new TABLE_STATISTICS_CB
  object.
- All get_stat_values() functions have a new parameter that tells
  where the collected data should be stored. get_stat_values() no
  longer uses the table_field object to store data.
- All get_stat_values() functions return 1 if they found any
  data in the statistics tables.

Other things:
- Fixed INSERT DELAYED to not read statistics tables.
- Removed Statistics_state from TABLE_STATISTICS_CB as this is not
  needed anymore, as we are not changing TABLE_SHARE->stats_cb while
  calculating or loading statistics.
- Store values used with store_from_statistical_minmax_field() in
  TABLE_STATISTICS_CB::mem_root. This allowed me to remove the function
  delete_stat_values_for_table_share().
  - Field_blob::store_from_statistical_minmax_field() is implemented
    but is not normally used as we do not yet support EIS statistics
    for blobs. For example Field_blob::update_min() and
    Field_blob::update_max() are not implemented.
    Note that the function can be called if there is a concurrent
    "ALTER TABLE MODIFY field BLOB" running because of a bug in
    ALTER TABLE where it deletes entries from column_stats
    before it has an exclusive lock on the table.
- Use the result of field->val_str(&val) as a pointer to the result
  instead of val (safety fix).
- Allocate memory for collected statistics in THD::mem_root, not in
  TABLE::mem_root. The latter could cause the TABLE object to grow if
  ANALYZE TABLE was run many times on the same table.
  This was done in allocate_statistics_for_table(),
  create_min_max_statistical_fields_for_table() and
  create_min_max_statistical_fields_for_table_share().
- Store in TABLE_STATISTICS_CB::stats_available which statistics were
  found in the statistics tables.
- Removed index_table from class Index_prefix_calc as it was not used.
- Added TABLE_SHARE::LOCK_statistics to ensure we don't load EITS
  in parallel. The first thread will load it; others will reuse the
  loaded data.
- Eliminate read_histograms_for_table(). The loading happens within
  read_statistics_for_tables() if histograms are needed.
  One downside is that if we have read statistics without histograms
  before and someone requires histograms, we have to read all statistics
  again (once) from the statistics tables.
  A smaller downside is the need to call alloc_root() for each
  individual histogram. Before, we could allocate all the space for
  histograms with a single alloc_root() call.
- Fixed a bug in MyISAM and Aria where they did not properly notice
  that the table had changed after ANALYZE TABLE. This was not a problem
  before this patch, as the MyISAM and Aria tables were then flushed
  as part of ANALYZE TABLE, which hid this issue.
- Fixed a bug in ANALYZE TABLE where table->records could be seen as 0
  in collect_statistics_for_table(). The effect of this unlikely bug
  was that a full table scan could be done even if
  analyze_sample_percentage was not set to 1.
- Changed multiple mallocs in a row to use multi_alloc_root().
- Added mutex protection in update_statistics_for_table() to ensure
  that the statistics are not updated by several threads at the same
  time.

Some of the changes in sql_statistics.cc are based on a patch from
Oleg Smirnov <olernov@gmail.com>

Co-authored-by: Oleg Smirnov <olernov@gmail.com>
Co-authored-by: Vicentiu Ciorbaru <cvicentiu@gmail.com>
Reviewer: Sergei Petrunia <sergey@mariadb.com>
2023-08-18 13:28:39 +03:00
Kristian Nielsen
7c9837ce74 Merge 10.4 into 10.5
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
2023-08-15 18:02:18 +02:00
Kristian Nielsen
18acbaf416 MDEV-31655: Parallel replication deadlock victim preference code erroneously removed
Restore code to make InnoDB choose the second transaction as a deadlock
victim if two transactions deadlock that need to commit in-order for
parallel replication. This code was erroneously removed when VATS was
implemented in InnoDB.

Also add a test case for InnoDB choosing the right deadlock victim.
Also fixes this bug, with a test case that reliably reproduces it:

MDEV-28776: rpl.rpl_mark_optimize_tbl_ddl fails with timeout on sync_with_master

Reviewed-by: Marko Mäkelä <marko.makela@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
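
A sketch of the restored preference: when both transactions must commit
in a fixed order for parallel replication, the one later in commit order
has to be the deadlock victim, overriding the usual weight/VATS
heuristic. Types and fields are simplified stand-ins.

  #include <cstdint>
  #include <iostream>

  struct Trx
  {
    uint64_t commit_seq;     // in-order commit position, 0 = not replication-ordered
    uint64_t weight;         // normal heuristic (locks held, undo size, ...)
  };

  static const Trx *choose_victim(const Trx *a, const Trx *b)
  {
    if (a->commit_seq && b->commit_seq)
      // Both are ordered replication transactions: the earlier one must be
      // allowed to commit first, so the later one is the victim.
      return a->commit_seq < b->commit_seq ? b : a;
    // Otherwise fall back to the cheapest-to-roll-back heuristic.
    return a->weight <= b->weight ? a : b;
  }

  int main()
  {
    Trx t1{10, 1000}, t2{11, 1};
    std::cout << "victim commit_seq: "
              << choose_victim(&t1, &t2)->commit_seq << "\n";
    // Prints 11: the later transaction is chosen even though it is far
    // cheaper to roll back.
    return 0;
  }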
2023-08-15 16:39:49 +02:00
Kristian Nielsen
900c4d6920 MDEV-31655: Parallel replication deadlock victim preference code erroneously removed
Restore code to make InnoDB choose the second transaction as a deadlock
victim if two transactions deadlock that need to commit in-order for
parallel replication. This code was erroneously removed when VATS was
implemented in InnoDB.

Also add a test case for InnoDB choosing the right deadlock victim.
Also fixes this bug, with a test case that reliably reproduces it:

MDEV-28776: rpl.rpl_mark_optimize_tbl_ddl fails with timeout on sync_with_master

Note: This should be null-merged to 10.6, as a different fix is needed
there due to InnoDB locking code changes.

Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
2023-08-15 16:35:30 +02:00
Oleksandr Byelkin
5ea5291d97 Merge branch '10.5' into 10.6 2023-08-04 07:52:54 +02:00
Sergei Golubchik
010f535b7f cleanup: remove redundant arguments 2023-08-01 22:33:56 +02:00
Oleksandr Byelkin
6bf8483cac Merge branch '10.5' into 10.6 2023-08-01 15:08:52 +02:00
Oleksandr Byelkin
7564be1352 Merge branch '10.4' into 10.5 2023-07-26 16:02:57 +02:00
Aleksey Midenkov
c5a8341115 MDEV-23100 ODKU of non-versioning column inserts history row
Use vers_check_update() to avoid inserting a history row for ODKU if no
versioned fields are specified in update_fields.
2023-07-20 18:22:30 +03:00
Oleksandr Byelkin
db3342b325 Merge branch '10.5' into 10.6 2023-05-04 18:47:11 +02:00
Oleksandr Byelkin
ba0433dc1c Merge branch '10.4' into 10.5 2023-05-04 18:19:47 +02:00
Sergei Golubchik
4d6e458f9f MDEV-31164 default current_timestamp() not working when INSERT ON DUPLICATE KEY is used in some cases
select_insert::store_values() must reset the
has_value_set bitmap before every row, just like mysql_insert() does,
because ON DUPLICATE KEY UPDATE and triggers modify it
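
A sketch of why the per-row reset matters: ON DUPLICATE KEY UPDATE (or a
trigger) marks extra fields while handling one row, and without the reset
the next row would wrongly appear to have assigned those fields
explicitly, suppressing their DEFAULT (e.g. current_timestamp()). This is
standalone illustration code, not the server's bitmap type.

  #include <bitset>
  #include <iostream>

  enum { FIELD_A, FIELD_TS, N_FIELDS };

  int main()
  {
    std::bitset<N_FIELDS> has_value_set;

    for (int row= 0; row < 2; row++)
    {
      has_value_set.reset();               // the fix: reset before every row
      has_value_set.set(FIELD_A);          // the row assigns only FIELD_A

      if (row == 0)
        has_value_set.set(FIELD_TS);       // ODKU/trigger touches FIELD_TS

      std::cout << "row " << row << ": FIELD_TS explicitly set = "
                << has_value_set.test(FIELD_TS) << "\n";   // 1, then 0
    }
    return 0;
  }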
2023-05-04 16:07:39 +02:00
Oleksandr Byelkin
043d69bbcc Merge branch '10.5' into 10.6 2023-05-03 09:51:25 +02:00
Oleksandr Byelkin
edf8ce5b97 Merge branch 'bb-10.4-release' into bb-10.5-release 2023-05-02 13:54:54 +02:00
Sergei Golubchik
bc970573b3 MDEV-22756 SQL Error (1364): Field 'DB_ROW_HASH_1' doesn't have a default value
exclude generated columns from the "has default value" check
2023-04-28 14:11:59 +02:00
Marko Mäkelä
5bada1246d Merge 10.5 into 10.6 2023-04-11 16:15:19 +03:00
Oleksandr Byelkin
ac5a534a4c Merge remote-tracking branch '10.4' into 10.5 2023-03-31 21:32:41 +02:00
Igor Babaev
f33fc2fae5 MDEV-30539 EXPLAIN EXTENDED: no message with queries for DML statements
EXPLAIN EXTENDED for an UPDATE/DELETE/INSERT/REPLACE statement did not
produce the warning containing the text representation of the query
obtained after the optimization phase. Such a warning was produced for
SELECT statements, but not for DML statements.
The patch fixes this defect of EXPLAIN EXTENDED for DML statements.
2023-03-25 12:36:59 -07:00
Oleksandr Byelkin
c3a5cf2b5b Merge branch '10.5' into 10.6 2023-01-31 09:31:42 +01:00
Oleksandr Byelkin
7fa02f5c0b Merge branch '10.4' into 10.5 2023-01-27 13:54:14 +01:00
Sergei Golubchik
fc292f42be MDEV-29199 Unique hash key is ignored upon INSERT ... SELECT into non-empty MyISAM table
disable the bulk insert optimization if long uniques are used, because
they need to read the table (index_read) after every inserted row. And
the bulk insert optimization might disable indexes.

bulk insert is already disabled in other cases when there are chances
that the table will be read during the bulk insert.
2023-01-20 15:44:15 +01:00
Marko Mäkelä
3386b30975 Merge 10.5 into 10.6 2023-01-13 10:45:41 +02:00
Marko Mäkelä
73ecab3d26 Merge 10.4 into 10.5 2023-01-13 10:18:30 +02:00
Marko Mäkelä
71e8e4934d Merge 10.3 into 10.4 2023-01-13 09:28:25 +02:00
Nikita Malyavin
7a98d232e4 MDEV-30378 Versioned REPLACE succeeds with ON DELETE RESTRICT constraint
node->is_delete was incorrectly set to NO_DELETE for a set of operations.

In general we shouldn't rely on sql_command, and should look for more
abstract ways to control the behavior.

trg_event_map seems to be a suitable way. To cover replica nodes, it is
ORed with slave_fk_event_map, which stores trg_event_map when the
replica has triggers disabled.
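
A sketch of deciding behaviour from an ORed event map rather than from
sql_command; the flag names only mirror the trigger event bits
conceptually and are not the server's definitions.

  #include <cstdint>
  #include <iostream>

  enum trg_event_bit : uint8_t
  {
    TRG_INSERT= 1 << 0,
    TRG_UPDATE= 1 << 1,
    TRG_DELETE= 1 << 2,
  };

  int main()
  {
    uint8_t trg_event_map=      TRG_INSERT | TRG_DELETE;  // e.g. a versioned REPLACE
    uint8_t slave_fk_event_map= 0;   // filled on a replica with triggers disabled

    uint8_t effective= trg_event_map | slave_fk_event_map;

    // The row operation is treated as a delete whenever the statement can
    // delete, regardless of which sql_command produced it.
    bool is_delete= (effective & TRG_DELETE) != 0;
    std::cout << "is_delete=" << is_delete << "\n";        // 1
    return 0;
  }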
2023-01-12 21:51:48 +03:00