mirror of https://github.com/MariaDB/server.git synced 2025-11-09 11:41:36 +03:00
Commit Graph

4372 Commits

Sergei Golubchik
9f80e3fbb7 MDEV-35032 streaming mode for mhnsw search
support SQL semantics for SELECT ... WHERE ... ORDER BY ... LIMIT

* switch from returning k nearest neighbors to returning
  as many as needed, in k-neighbor chunks, with increasing distance
* make search_layer() skip nodes that are closer than a threshold
* read_next keeps a search context - list of k found nodes,
  threshold, ctx, etc.
* when the list of found nodes is exhausted, the search is repeated
  starting from the last found nodes and the threshold
* the search context keeps ctx->refcount incremented, so ctx won't go away
* but commit_lock is unlocked between calls, so InnoDB can modify the table
* use ctx version to detect that, switch to MHNSW_Trx when it happens
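
A toy, self-contained sketch of the chunked retrieval described above, assuming
a flat array of candidate distances instead of a real graph walk; all names are
illustrative, not the server's:

  #include <algorithm>
  #include <cstdio>
  #include <vector>

  struct Search_context                // stand-in for the real search context
  {
    std::vector<double> distances;     // candidate distances to the target
    double threshold;                  // largest distance already returned
  };

  // Return the next chunk of up to k results farther than the threshold,
  // then advance the threshold so the following call continues the stream.
  static std::vector<double> read_next_chunk(Search_context &ctx, size_t k)
  {
    std::vector<double> chunk;
    for (double d : ctx.distances)
      if (d > ctx.threshold)
        chunk.push_back(d);
    std::sort(chunk.begin(), chunk.end());
    if (chunk.size() > k)
      chunk.resize(k);
    if (!chunk.empty())
      ctx.threshold= chunk.back();
    return chunk;
  }

  int main()
  {
    Search_context ctx{{0.9, 0.1, 0.5, 0.3, 0.7, 0.2}, -1.0};
    for (;;)
    {
      std::vector<double> chunk= read_next_chunk(ctx, 2);
      if (chunk.empty())
        break;                         // stream exhausted
      for (double d : chunk)
        printf("%.1f ", d);            // chunks: "0.1 0.2", "0.3 0.5", "0.7 0.9"
      printf("\n");
    }
    return 0;
  }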

bugfix:
* use the correct lock in ha_external_lock() for the graph table
* InnoDB didn't reset locks on ha_external_lock(F_UNLCK) and the
  previous LOCK_X leaked into the next statement
2024-11-05 14:00:51 -08:00
Sergei Golubchik
97b2392ede cleanup: TABLE_SHARE::lock_share() helper
also: renames, s/const/constexpr/ for consistency
2024-11-05 14:00:50 -08:00
Sergey Vojtovich
3283688797 Simplified quick_rm_table() and mysql_rename_table()
Replaced obscure FRM_ONLY, NO_FRM_RENAME, NO_HA_TABLE, NO_PAR_TABLE with
straightforward explicit flags:

QRMT_FRM - [re]moves .frm
QRMT_PAR - [re]moves .par
QRMT_HANDLER - calls ha_delete_table()/ha_rename_table() and [re]moves
               high-level indexes
QRMT_DEFAULT - same as QRMT_FRM | QRMT_HANDLER, which is a regular
               table drop/rename.
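
A hedged sketch of how such explicit flags compose; the flag names follow the
list above, but the function body is only illustrative, not the server's
implementation:

  #include <cstdio>

  enum quick_rm_table_flags
  {
    QRMT_FRM=     1U << 0,                 // [re]move the .frm file
    QRMT_PAR=     1U << 1,                 // [re]move the .par file
    QRMT_HANDLER= 1U << 2,                 // ha_delete_table()/ha_rename_table()
    QRMT_DEFAULT= QRMT_FRM | QRMT_HANDLER  // regular table drop/rename
  };

  static void quick_rm_table_sketch(const char *name, unsigned flags)
  {
    if (flags & QRMT_HANDLER)
      printf("ha_delete_table(%s), drop high-level indexes\n", name);
    if (flags & QRMT_FRM)
      printf("remove %s.frm\n", name);
    if (flags & QRMT_PAR)
      printf("remove %s.par\n", name);
  }

  int main()
  {
    quick_rm_table_sketch("t1", QRMT_DEFAULT);            // regular drop
    quick_rm_table_sketch("t2", QRMT_FRM);                // .frm only
    quick_rm_table_sketch("t3", QRMT_DEFAULT | QRMT_PAR); // partitioned table
    return 0;
  }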
2024-11-05 14:00:50 -08:00
Sergey Vojtovich
a90fa3f397 ALTER TABLE fixes for high-level indexes (i)
Fixes for ALTER TABLE ... ADD/DROP COLUMN, ALGORITHM=COPY.

Let quick_rm_table() remove high-level indexes along with original table.

Avoid locking uninitialized LOCK_share for INTERNAL_TMP_TABLEs.

Don't enable bulk insert when altering a table containing a vector index.
InnoDB can't handle the situation where bulk insert is enabled for one
table but disabled for another. We can't do bulk insert on a vector index
as it currently does table updates.
2024-11-05 14:00:50 -08:00
Sergei Golubchik
ebcbed6d74 post-fixes for TRUNCATE
* fix the truncate-by-handler variant, used by InnoDB
* test that insert works after truncate, meaning the graph table was emptied
* test that the vector index size is zero after truncate in MyISAM
2024-11-05 14:00:49 -08:00
Sergei Golubchik
f44989ff0f UPDATE/DELETE post-fixes 2024-11-05 14:00:49 -08:00
Hugo Wen
0e2b9e7621 MDEV-33408 Initial support for vector DELETE and UPDATE
When the source row is deleted, mark the corresponding node in the HNSW
index by setting `tref` to null. An index is added on `tref` in the
secondary table for faster lookup of the to-be-marked nodes.

The nodes marked as deleted will still be used for search, but will not
be included in the final query results.

Since skipping deleted nodes during traversal and excluding them from
new-inserted nodes' neighbor lists could impact performance, we now only
skip these nodes in the search results.
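
A toy illustration of that rule (not the real index code): deleted nodes still
keep the graph connected during traversal, but are filtered out of the results.

  #include <cstdio>
  #include <vector>

  struct Node
  {
    bool deleted;                    // tref set to null in the real index
    std::vector<int> neighbors;
  };

  // Traversal walks through deleted nodes too (they keep the graph
  // connected); only live nodes are reported as results.
  static std::vector<int> collect_results(const std::vector<Node> &graph, int start)
  {
    std::vector<int> results;
    std::vector<bool> seen(graph.size(), false);
    std::vector<int> stack{start};
    while (!stack.empty())
    {
      int id= stack.back();
      stack.pop_back();
      if (seen[id])
        continue;
      seen[id]= true;
      if (!graph[id].deleted)        // the only place the tombstone matters
        results.push_back(id);
      for (int n : graph[id].neighbors)
        stack.push_back(n);
    }
    return results;
  }

  int main()
  {
    // 0 -- 1(deleted) -- 2 : node 2 is only reachable through node 1
    std::vector<Node> graph= {{false, {1}}, {true, {0, 2}}, {false, {1}}};
    for (int id : collect_results(graph, 0))
      printf("%d ", id);             // prints "0 2"
    printf("\n");
    return 0;
  }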

- for some reason the bitmap is not set for hlindex during the delete so
  I had to temporarily comment out one line

All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
2024-11-05 14:00:49 -08:00
Sergei Golubchik
049d839350 mhnsw: inter-statement shared cache
* preserve the graph in memory between statements
* keep it in a TABLE_SHARE, available for concurrent searches
* nodes are generally read-only, walking the graph doesn't change them
* distance to target is cached, calculated only once
* SIMD-optimized bloom filter detects visited nodes (sketched after this list)
* nodes are stored in an array, not List, to better utilize bloom filter
* auto-adjusting heuristic to estimate the number of visited nodes
  (to configure the bloom filter)
* many threads can concurrently walk the graph. MEM_ROOT and Hash_set
  are protected with a mutex, but walking doesn't need them
* up to 8 threads can concurrently load nodes into the cache,
  nodes are partitioned into 8 mutexes (8 is chosen arbitrarily, might
  need tuning)
* concurrent editing is not supported though
* this is fine for MyISAM, TL_WRITE protects the TABLE_SHARE and the
  graph (note that TL_WRITE_CONCURRENT_INSERT is not allowed, because an
  INSERT into the main table means multiple UPDATEs in the graph)
* InnoDB uses secondary transaction-level caches linked in a list
  in thd->ha_data via a fake handlerton
* on rollback the secondary cache is discarded, on commit nodes
  from the secondary cache are invalidated in the shared cache
  while it is exclusively locked
* on savepoint rollback both caches are flushed. this can be improved
  in the future with a row visibility callback
* graph size is controlled by @@mhnsw_cache_size, the cache is flushed
  when it reaches the threshold
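
A plain scalar sketch of such a visited-node filter; the real one is
SIMD-optimized and auto-sized by the heuristic above, and the hashing and
sizing here are purely illustrative:

  #include <cstdint>
  #include <vector>

  struct Visited_filter
  {
    std::vector<uint64_t> bits;

    explicit Visited_filter(size_t expected_nodes)
      : bits((expected_nodes * 8 + 63) / 64)       // ~8 bits per expected node
    {}

    static uint64_t hash(const void *node, uint64_t seed)
    {
      uint64_t h= (uint64_t) (uintptr_t) node ^ seed;
      h*= 0x9E3779B97F4A7C15ULL;                   // cheap mixing, illustrative
      return h ^ (h >> 32);
    }

    void set(const void *node)
    {
      for (uint64_t seed= 1; seed <= 2; seed++)
      {
        uint64_t bit= hash(node, seed) % (bits.size() * 64);
        bits[bit / 64]|= 1ULL << (bit % 64);
      }
    }

    bool maybe_visited(const void *node) const     // false positives possible,
    {                                              // false negatives are not
      for (uint64_t seed= 1; seed <= 2; seed++)
      {
        uint64_t bit= hash(node, seed) % (bits.size() * 64);
        if (!(bits[bit / 64] & (1ULL << (bit % 64))))
          return false;
      }
      return true;
    }
  };

  int main()
  {
    int nodes[100];
    Visited_filter filter(100);
    filter.set(&nodes[0]);
    return filter.maybe_visited(&nodes[0]) ? 0 : 1; // a node never set would
  }                                                 // most likely report false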
2024-11-05 14:00:49 -08:00
Sergei Golubchik
25b4000290 InnoDB support for hlindexes and mhnsw
* mhnsw:
  * use the primary key, InnoDB loves it (and the index cannot have dupes anyway)
    * MyISAM is ok with that, performance-wise
  * must be ha_rnd_init(0) because we aren't going to scan
    * MyISAM resets the position on ha_rnd_init(0) so query it before
    * oh, and use the correct handler, just in case
  * HA_ERR_RECORD_IS_THE_SAME is no error
* innodb:
  * return ref_length on create
  * don't assume table->pos_in_table_list is set
  * ok, assume away, but only for system versioned tables
* set alter_info on create (InnoDB needs to check for FKs)
* pair external_lock/external_unlock correctly
2024-11-05 14:00:49 -08:00
Sergei Golubchik
613542dceb mhnsw: build indexes with columns of exactly the right size 2024-11-05 14:00:49 -08:00
Vicențiu Ciorbaru
88839e71a3 Initial HNSW implementation
This commit includes the work done in collaboration with Hugo Wen from
Amazon:

    MDEV-33408 Alter HNSW graph storage and fix memory leak

    This commit changes the way HNSW graph information is stored in the
    second table. Instead of storing connections as separate records, it now
    stores neighbors for each node, leading to significant performance
    improvements and storage savings.

    Compared with the previous approach, the insert speed is 5 times faster,
    search speed improves by 23%, and storage usage is reduced by 73%, based
    on ann-benchmark tests with random-xs-20-euclidean and
    random-s-100-euclidean datasets.

    Additionally, in previous code, vector objects were not released after
    use, resulting in excessive memory consumption (over 20GB for building
    the index with 90,000 records), preventing tests with large datasets.
    Now the code ensures that vectors are released appropriately during the
    insert and search functions. Note that there are still some vectors that
    need to be cleaned up after search query completion; this needs to be
    addressed in a future commit.

    All new code of the whole pull request, including one or several files
    that are either new files or modified ones, are contributed under the
    BSD-new license. I am contributing on behalf of my employer Amazon Web
    Services, Inc.

As well as the commit:

    Introduce session variables to manage HNSW index parameters

    Three variables:

    hnsw_max_connection_per_layer
    hnsw_ef_constructor
    hnsw_ef_search

    The ann-benchmark tool is also updated to support these variables in commit
    https://github.com/HugoWenTD/ann-benchmarks/commit/e09784e for branch
    https://github.com/HugoWenTD/ann-benchmarks/tree/mariadb-configurable

    All new code of the whole pull request, including one or several files
    that are either new files or modified ones, are contributed under the
    BSD-new license. I am contributing on behalf of my employer Amazon Web
    Services, Inc.

Co-authored-by: Hugo Wen <wenhug@amazon.com>
2024-11-05 14:00:48 -08:00
Sergei Golubchik
d6add9a03d initial support for vector indexes
MDEV-33407 Parser support for vector indexes

The syntax is

  create table t1 (... vector index (v) ...);

limitations:
* v is a binary string and NOT NULL
* only one vector index per table
* temporary tables are not supported

MDEV-33404 Engine-independent indexes: subtable method

added support for so-called "high-level indexes"; they are not visible
to the storage engine and are implemented at the SQL level. For every
such index in a table, say, t1, the server implicitly creates a second
table named like t1#i#05 (where "05" is the index number in t1).
This table has a fixed structure, no frm, is not accessible directly,
doesn't go into the table cache, and needs no MDLs.

MDEV-33406 basic optimizer support for k-NN searches

for a query like SELECT ... ORDER BY func() the optimizer will use
item_func->part_of_sortkey() to decide what keys can be used
to resolve the ORDER BY.
2024-11-05 14:00:48 -08:00
Sergei Golubchik
08a7f18b19 cleanup: init_tmp_table_share(bool thread_specific)
let the caller tell init_tmp_table_share() whether the table
should be thread_specific or not.

In particular, internal tmp tables created in the slave thread
are perfectly thread specific.
2024-11-05 14:00:48 -08:00
Sergei Golubchik
44c6328cbb cleanup: thd->alloc<>() and thd->calloc<>()
create templates

  thd->alloc<X>(n) to use instead of (X*)thd->alloc(sizeof(X)*n)

and the same for thd->calloc(). By default the type is char,
so the old usage of thd->alloc(size) works too.
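
A minimal standalone sketch of the idea, with Arena standing in for the real
THD/MEM_ROOT allocation; only the template shape matters here:

  #include <cstddef>
  #include <cstdlib>
  #include <cstring>

  struct Arena                           // stand-in for THD's memory root
  {
    // (a real memory root frees everything at once; leaks ignored in this sketch)
    void *alloc_bytes(size_t size) { return std::malloc(size); }

    template <typename T= char>
    T *alloc(size_t count= 1)            // thd->alloc<X>(n) style helper
    { return static_cast<T*>(alloc_bytes(sizeof(T) * count)); }

    template <typename T= char>
    T *calloc(size_t count= 1)           // zero-initializing variant
    {
      void *p= alloc_bytes(sizeof(T) * count);
      if (p)
        std::memset(p, 0, sizeof(T) * count);
      return static_cast<T*>(p);
    }
  };

  int main()
  {
    Arena arena;
    int *ten_ints= arena.alloc<int>(10); // instead of (int*)alloc(sizeof(int)*10)
    char *buf= arena.alloc(64);          // default T=char keeps old calls working
    return ten_ints && buf ? 0 : 1;
  }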
2024-11-05 14:00:48 -08:00
Sergei Golubchik
07ec1a9e37 cleanup: unused function argument 2024-11-05 14:00:48 -08:00
Yuchen Pei
4b6922a315 MDEV-25008: UPDATE/DELETE: Cost-based choice IN->EXISTS vs Materialization
Single-table UPDATE/DELETE didn't provide an outer_lookup_keys value for
subqueries. This made it impossible to make a meaningful choice between
the IN->EXISTS and Materialization strategies for subqueries.

Fix this:
* Make UPDATE/DELETE save Sql_cmd_dml::scanned_rows,
* Then, subquery's JOIN::choose_subquery_plan() can fetch it from
there for outer_lookup_keys

Details:
UPDATE/DELETE now calls select_lex->optimize_unflattened_subqueries()
twice, like SELECT does (first call optimize_constant_subqueries() in
JOIN::optimize_inner(), then call optimize_unflattened_subqueries() in
JOIN::optimize_stage2()):
1. Call it with const_only=true before any optimizations. This allows
the range optimizer and others to use the values of cheap const
subqueries.
2. Call it with const_only=false after the range optimizer, partition
pruning, etc. The outer_lookup_keys value is now provided, so it's
possible to pick a good subquery strategy.
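
A schematic of that two-pass call order, using plain standalone code rather
than the optimizer's real interfaces:

  #include <cstdio>

  struct Subquery { bool is_constant; bool optimized; };

  static void optimize_unflattened_subqueries(Subquery *subq, int count,
                                              bool const_only,
                                              double outer_lookup_keys)
  {
    for (int i= 0; i < count; i++)
    {
      if (subq[i].optimized || (const_only && !subq[i].is_constant))
        continue;                      // pass 1 handles cheap const subqueries
      subq[i].optimized= true;         // pass 2: cost-based IN->EXISTS vs
      printf("subquery %d optimized with outer_lookup_keys=%.0f\n",
             i, outer_lookup_keys);    //         materialization choice
    }
  }

  int main()
  {
    Subquery subqueries[2]= {{true, false}, {false, false}};
    // before the range optimizer / partition pruning:
    optimize_unflattened_subqueries(subqueries, 2, true, 0);
    // after them, with the saved Sql_cmd_dml::scanned_rows estimate:
    double scanned_rows= 1000;
    optimize_unflattened_subqueries(subqueries, 2, false, scanned_rows);
    return 0;
  }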

Note: PROTECT_STATEMENT_MEMROOT requires that the first SP execution
performs subquery optimization for all subqueries, even for degenerate
query plans like "Impossible WHERE". Due to that, we ensure that the
call to optimize_unflattened_subqueries() (with const_only=false) still
happens even for degenerate query plans, as was the case before this
change.
2024-10-23 23:51:24 +11:00
Rex
10008b3d3e MDEV-31466 Add optional correlation column list for derived tables
Extend derived table syntax to support column name assignment.
(subquery expression) [as|=] ident [comma separated column name list].
Prior to this patch, the optional comma-separated column name list
was not supported.

Processing within the unit of the subquery expression uses the original
column names; outside the unit the new names are used.

For example, in the query

select a1, a2 from
  (select c1, c2, c3 from t1 where c2 > 0) as dt (a1, a2, a3)
where a2 > 10;

we see the second column of the derived table dt being used both within
(where c2 > 0) and outside (where a2 > 10) the specification.
Both conditions apply to t1.c2.

When multiple unit preparations are required, such as when being used within
a prepared statement or procedure, original column names are needed for
correct resolution. Original names are reset within mysql_derived_reinit().

Item_holder items, used for result tables in both TVC and union
preparations, are renamed before use within st_select_lex_unit::prepare().

During wildcard expansion, if column names are present, item names are
set directly after creation.

Reviewed by Igor Babaev (igor@mariadb.com)
2024-10-15 06:08:46 +12:00
Kristian Nielsen
db5d1cde45 MDEV-34857: Implement --slave-abort-blocking-timeout
If a slave replicating an event has waited for more than
@@slave_abort_blocking_timeout for a conflicting metadata lock held by a
non-replication thread, the blocking query is killed to allow replication to
proceed and not be blocked indefinitely by a user query.

Reviewed-by: Monty <monty@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
2024-09-04 11:44:14 +02:00
Oleksandr Byelkin
342fa29615 Merge branch '11.4' into 11.5 2024-08-21 11:52:54 +02:00
Oleksandr Byelkin
eb70e0d6e2 Merge branch '11.2' into 11.4 2024-08-21 09:30:54 +02:00
Oleksandr Byelkin
6197e6abc4 Merge branch '10.11' into 11.2 2024-08-21 07:58:46 +02:00
Oleksandr Byelkin
70afc62750 Merge branch '10.6' into 10.11 2024-08-20 10:00:39 +02:00
Oleksandr Byelkin
fc5772ce17 Merge branch '10.5' into 10.6 2024-08-20 09:11:34 +02:00
Dmitry Shulga
ba5482ffc2 MDEV-34718: Trigger doesn't work correctly with bulk update
Running an UPDATE statement in PS mode with positional parameter(s)
bound to an array of actual values (that is, prepared to be run in bulk
mode) results in incorrect behaviour in the presence of an update
trigger that also executes an UPDATE statement. The same is true for
handling a DELETE statement in the presence of a delete trigger.
Typically, the visible effect of such incorrect behaviour is a wrong
number of updated/deleted rows in the target table. Additionally, in the
case of an UPDATE statement, the number of modified rows and the state
message returned by the statement contain wrong information about the
number of modified rows.

The reason for the incorrect number of updated/deleted rows is that
the data structure used for binding a positional argument to its
actual values is stored in THD (this is thd->bulk_param) and reused
when processing every INSERT/UPDATE/DELETE statement. As a result, the
actual values bound to the top-level UPDATE/DELETE statement are
consumed by other DML statements executed by the triggers' bodies.

To fix the issue, temporarily reset thd->bulk_param to nullptr before
invoking triggers and restore its value once their execution has
finished.
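
A minimal sketch of that save-and-restore pattern around trigger invocation;
Session and Bulk_param_guard are illustrative stand-ins, not the server's
actual classes:

  struct Session                       // stand-in for THD
  {
    void *bulk_param= nullptr;         // array binding of the top-level statement
  };

  struct Bulk_param_guard
  {
    Session &session;
    void *saved;

    explicit Bulk_param_guard(Session &s) : session(s), saved(s.bulk_param)
    { session.bulk_param= nullptr; }   // triggers must not consume these values

    ~Bulk_param_guard() { session.bulk_param= saved; }
  };

  static void run_trigger_bodies(Session &s)
  {
    // any DML executed here sees bulk_param == nullptr, so it cannot eat
    // the value array bound to the top-level UPDATE/DELETE statement
    (void) s;
  }

  static void update_one_row(Session &s)
  {
    Bulk_param_guard guard(s);         // hide bulk bindings from the triggers
    run_trigger_bodies(s);
  }                                    // restored here for the next bulk iteration

  int main()
  {
    Session s;
    int bindings[3]= {1, 2, 3};
    s.bulk_param= bindings;
    update_one_row(s);
    return s.bulk_param == bindings ? 0 : 1;
  }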

The second part of the problem, the wrong number of affected rows
reported via the Connector/C API, is caused by the fact that the
diagnostics area is reused by the original DML statement and a statement
invoked by a trigger. This fact should be taken into account when
finalizing the state of the diagnostics area on statement completion.

Important remark: in case the macro DBUG_OFF is defined, a call to the method
  Diagnostics_area::reset_diagnostics_area()
resets the data members
  m_affected_rows, m_statement_warn_count.
Values of these data members of the class Diagnostics_area are used when
sending OK and EOF messages. If a DML statement is executed in PS bulk
mode, such resetting results in sending wrong affected-rows values to the
client when the DML statement fires triggers. So, reset these data
members only if the current statement being processed is not run in bulk
mode.
2024-08-19 12:13:43 +07:00
Oleksandr Byelkin
ea75a0b600 Merge branch '11.4' into 11.5 2024-08-05 17:50:18 +02:00
Oleksandr Byelkin
1640c9b06e Merge branch '11.2' into 11.4 2024-08-04 17:27:48 +02:00
Oleksandr Byelkin
dced6cbdb6 Merge branch '11.1' into 11.2 2024-08-03 09:50:16 +02:00
Oleksandr Byelkin
80abd847da Merge branch '10.11' into 11.1 2024-08-03 09:32:42 +02:00
Oleksandr Byelkin
0fe39d368a Merge branch '10.6' into 10.11 2024-07-22 15:14:50 +02:00
Dave Gosselin
02e38e2ece MDEV-33971 NAME_CONST in WHERE clause replaced by inner item
Improve performance of queries like
  SELECT * FROM t1 WHERE field = NAME_CONST('a', 4);
by, in this example, replacing the WHERE clause with field = 4
in the case of ref access.

The rewrite is done during fix_fields and we disambiguate this
case from other cases of NAME_CONST by inspecting where we are
in parsing.  We rely on THD::where to accomplish this.  To
improve performance there, we change the type of THD::where to
be an enumeration, so we can avoid string comparisons during
Item_name_const::fix_fields.  Consequently, this patch also
changes all usages of THD::where accordingly.
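
A hedged illustration of the string-vs-enumeration point; the identifiers
below are invented for the example and are not the server's:

  #include <cstring>

  enum class Where_context { DEFAULT_WHERE, IN_WHERE_CLAUSE, IN_ORDER_BY };

  // Old style: a descriptive string, compared with strcmp() on every
  // Item_name_const::fix_fields() call.
  static bool may_substitute_old(const char *where)
  { return strcmp(where, "WHERE clause") == 0; }

  // New style: a single integer comparison against an enum value.
  static bool may_substitute_new(Where_context where)
  { return where == Where_context::IN_WHERE_CLAUSE; }

  int main()
  {
    bool old_answer= may_substitute_old("WHERE clause");
    bool new_answer= may_substitute_new(Where_context::IN_WHERE_CLAUSE);
    return old_answer == new_answer ? 0 : 1;
  }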
2024-07-10 17:23:43 -04:00
Alexander Barkov
4e805aed85 Merge remote-tracking branch 'origin/11.4' into 11.5 2024-07-10 12:17:09 +04:00
Alexander Barkov
5fb07d942b Merge remote-tracking branch 'origin/11.2' into 11.4 2024-07-09 21:45:37 +04:00
Alexander Barkov
8aad19ddfc Merge remote-tracking branch 'origin/11.1' into 11.2 2024-07-09 14:04:11 +04:00
Alexander Barkov
44af9bfc67 Merge remote-tracking branch 'origin/10.11' into 11.1 2024-07-09 10:45:47 +04:00
Oleksandr Byelkin
2447dda2c0 Merge branch '10.11' into 11.1 2024-07-08 22:40:16 +02:00
Alexander Barkov
4d71a117a3 Merge remote-tracking branch 'origin/10.6' into 10.11 2024-07-08 21:52:08 +04:00
Alexander Barkov
e56040fee8 Merge remote-tracking branch 'origin/10.5' into 10.6 2024-07-08 18:59:04 +04:00
Alexander Barkov
8f4ec79d09 Merge remote-tracking branch 'origin/11.4' into 11.5 2024-07-08 12:25:04 +04:00
Monty
29d9467641 Fixed core dump when using --debug
The problem was using safe_table_name() instead of safe_table_name().str
with DBUG_PRINT.
2024-07-06 15:28:37 +03:00
Brandon Nesterenko
cbc1898e82 MDEV-25607: Auto-generated DELETE from HEAP table can break replication
The special logic used by the memory storage engine
to keep slaves in sync with the master on a restart can
break replication. In particular, after a restart, the
master writes DELETE statements in the binlog for
each MEMORY-based table so the slave can empty its
data. If the DELETE is not executable, e.g. due to
invalid triggers, the slave will error and fail, whereas
the master will never see the problem.

Instead of DELETE statements, use TRUNCATE to
keep slaves in sync with the master, thereby bypassing
triggers.

Reviewed By:
===========
Kristian Nielsen <knielsen@knielsen-hq.org>
Andrei Elkin <andrei.elkin@mariadb.com>
2024-07-05 12:00:09 -06:00
Marko Mäkelä
27a3366663 Merge 10.6 into 10.11 2024-06-27 10:26:09 +03:00
Dmitry Shulga
77c465d5aa MDEV-34171: Memory leakage is detected on running the test versioning.partition
One of the possible use cases that reproduces the memory leakage is listed below:

  set timestamp= unix_timestamp('2000-01-01 00:00:00');
  create or replace table t1 (x int) with system versioning
    partition by system_time interval 1 hour auto
    partitions 3;

  create table t2 (x int);

  create trigger tr after insert on t2 for each row update t1 set x= 11;
  create or replace procedure sp2() insert into t2 values (5);

  set timestamp= unix_timestamp('2000-01-01 04:00:00');
  call sp2;

  set timestamp= unix_timestamp('2000-01-01 13:00:00');
  call sp2; # <<=== Memory leak happens here. If the MariaDB server is built
                    with the option -DWITH_PROTECT_STATEMENT_MEMROOT,
                    the second execution hits an assert failure.

The reason for the memory leak is that once a new partition is created,
the table should be closed and re-opened. This results in calling the
function extend_table_list() that indirectly invokes the function
sp_add_used_routine() to add routines implicitly used by the statement,
which makes a new memory allocation.

To fix it, don't remove routines and tables the statement implicitly
depends on when a table is being closed for subsequent re-opening.
2024-06-25 11:11:36 +07:00
Marko Mäkelä
0076eb3d4e Merge 10.5 into 10.6 2024-06-24 13:09:47 +03:00
Dave Gosselin
db0c28eff8 MDEV-33746 Supply missing override markings
Find and fix missing virtual override markings.  Updates cmake
maintainer flags to include -Wsuggest-override and
-Winconsistent-missing-override.
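
A small example of the kind of omission these warnings catch; the handler
names are illustrative:

  struct handler_base
  {
    virtual ~handler_base()= default;
    virtual int open()= 0;
    virtual int close()= 0;
  };

  struct example_handler : handler_base
  {
    // 'override' makes the compiler verify the signature really overrides
    // the base method; leaving it off one of them is what
    // -Wsuggest-override / -Winconsistent-missing-override flag.
    int open() override { return 0; }
    int close() override { return 0; }
  };

  int main()
  {
    example_handler h;
    return h.open() + h.close();
  }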
2024-06-20 11:32:13 -04:00
Alexander Barkov
c4bf4ce948 Merge remote-tracking branch 'origin/11.2' into 11.4 2024-06-17 15:46:39 +04:00
Marko Mäkelä
a21e49cbcc Merge 11.1 into 11.2 2024-06-17 12:02:03 +03:00
Marko Mäkelä
d34289a3e2 Merge 10.11 into 11.1 2024-06-17 09:21:50 +03:00
Marko Mäkelä
b81d717387 Merge 10.6 into 10.11 2024-06-11 12:50:10 +03:00
Marko Mäkelä
27834ebc91 Merge 10.5 into 10.6 2024-06-10 15:22:15 +03:00
Marko Mäkelä
a2bd936c52 MDEV-33161 Function pointer signature mismatch in LF_HASH
In cmake -DWITH_UBSAN=ON builds with clang, but not with GCC,
-fsanitize=undefined will flag several runtime errors caused by function
pointer mismatches related to the lock-free hash table LF_HASH.

Let us use matching function signatures and remove function pointer
casts in order to avoid potential bugs due to undefined behaviour.

These errors could be caught at compilation time by
-Wcast-function-type-strict, which is available starting with clang-16,
but not available in any version of GCC as of now. The old GCC flag
-Wcast-function-type is enabled as part of -Wextra, but it specifically
does not catch these errors.
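
A hedged, generic illustration of the pattern (not the actual LF_HASH API):
declare the callback with the exact signature the hash table expects and cast
the void* arguments inside, instead of casting the function pointer itself.

  #include <cstddef>

  struct Node { int key; };

  typedef void (*walk_action)(void *element, void *arg);

  // Problematic: a callback written for concrete types...
  //   static void count_typed(Node *n, int *counter);
  // ...cast to walk_action at the call site:
  //   walk(nodes, n, (walk_action) count_typed, &counter);  // UB on mismatch

  // Fixed: the callback has exactly the expected signature and casts inside.
  static void count_node(void *element, void *arg)
  {
    Node *n= static_cast<Node*>(element);
    int *counter= static_cast<int*>(arg);
    (void) n;
    ++*counter;
  }

  static void walk(Node *nodes, size_t count, walk_action action, void *arg)
  {
    for (size_t i= 0; i < count; i++)
      action(&nodes[i], arg);          // called through the declared type only
  }

  int main()
  {
    Node nodes[3]= {{1}, {2}, {3}};
    int counter= 0;
    walk(nodes, 3, count_node, &counter);   // no function-pointer cast needed
    return counter == 3 ? 0 : 1;
  }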

Reviewed by: Vladislav Vaintroub
2024-06-10 12:35:33 +03:00