Avoid accessing MDL_lock::key from outside of the MDL_lock class directly;
use MDL_lock::get_key() instead.
This is part of a broader cleanup which aims to make a large part of
MDL_lock members private. It is needed to simplify further work on
MDEV-19749 - MDL scalability regression after backup locks.
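A minimal sketch of the accessor pattern (illustrative stub, not the
actual class layout):

  struct MDL_key { /* ... */ };          // stub for illustration

  class MDL_lock
  {
  public:
    // read-only access for code outside the class
    const MDL_key *get_key() const { return &key; }
  private:
    MDL_key key;                         // no longer accessed directly
  };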
never estimate that a graph search will visit more nodes than there
are in the graph. In fact, let's reduce the assumed graph size by 30%:
it'll increase the false positive rate of the bloom filter by 2% when
visiting the whole graph, but it doesn't affect recall noticeably.
we need to read the shared graph size under a lock, so let's cache it
in the unused, thread-local TABLE::used_stat_records member.
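a sketch of the capping idea (hypothetical helper name):

  #include <algorithm>
  #include <cstddef>

  // never let the estimate exceed the graph size; in fact, assume only
  // 70% of the graph (the 30% reduction described above)
  static size_t estimate_visited_nodes(size_t estimate, size_t graph_size)
  {
    return std::min(estimate, graph_size * 7 / 10);
  }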
This patch optimises the dot_product function by leveraging
vectorisation through SIMD intrinsics. This transformation enables
parallel execution of multiple operations, significantly improving the
performance of dot product computation on supported architectures.
The original dot_product function does undergo auto-vectorisation when
compiled with -O3. However, performance analysis has shown that the
newly optimised implementation performs better on Power10 and achieves
comparable performance on Power9 machines.
Benchmark tests were conducted on both Power9 and Power10 machines,
comparing the time taken by the original (auto-vectorised) code and the
new vectorised code. GCC 11.5.0 with -O3 on RHEL 9.5 was used. The
benchmarks were performed using a sample test code with
a vector size of 4096 and 10⁷ loop iterations. Here are the average
execution times (in seconds) over multiple runs:
Power9:
Before change: ~16.364 s
After change: ~16.180 s
Performance gain is modest but measurable.
Power10:
Before change: ~8.989 s
After change: ~6.446 s
Significant improvement, roughly 28–30% faster.
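For reference, a VSX kernel of this general shape (a sketch assuming
plain float inputs; the actual function and types differ):

  #include <altivec.h>
  #include <stddef.h>

  // process 4 floats per iteration with a fused multiply-add, then
  // reduce the partial sums and handle the scalar tail
  static float dot_product_vsx(const float *a, const float *b, size_t n)
  {
    vector float acc = vec_splats(0.0f);
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
      acc = vec_madd(vec_xl(0, a + i), vec_xl(0, b + i), acc);
    float sum = acc[0] + acc[1] + acc[2] + acc[3];
    for (; i < n; i++)
      sum += a[i] * b[i];
    return sum;
  }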
Signed-off-by: Manjul Mohan <manjul.mohan@ibm.com>
a start node in the shared context must be atomically assigned
to a valid and fully initialized node, otherwise a concurrent
thread might use it before it's safe.
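a sketch of the publication pattern (hypothetical names):

  #include <atomic>

  struct Node { /* fully initialized before publishing */ };

  std::atomic<Node*> start_node{nullptr};  // start node in the shared context

  void publish_start(Node *node)
  {
    // every write initializing *node happens-before this release store,
    // so a thread that acquires the pointer sees a complete node
    start_node.store(node, std::memory_order_release);
  }

  Node *get_start()
  {
    return start_node.load(std::memory_order_acquire);
  }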
SIMD implementations of bloom filters and dot product calculation.
A microbenchmark shows 1.7x dot product performance improvement compared to
regular -O2/-O3 builds and 2.4x compared to builds with auto-vectorization
disabled.
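For illustration, an AVX2 dot product kernel of this general shape (a
sketch assuming int16_t inputs, float accumulation, and a length that
is a multiple of 16; not the actual implementation):

  #include <immintrin.h>
  #include <stdint.h>
  #include <stddef.h>

  static float dot_product_avx2(const int16_t *a, const int16_t *b, size_t n)
  {
    __m256 acc = _mm256_setzero_ps();
    for (size_t i = 0; i < n; i += 16)
    {
      __m256i va = _mm256_loadu_si256((const __m256i *) (a + i));
      __m256i vb = _mm256_loadu_si256((const __m256i *) (b + i));
      // multiply int16 pairs and add adjacent products -> 8 x int32
      __m256i prod = _mm256_madd_epi16(va, vb);
      // accumulate in float to avoid overflowing int32 on long vectors
      acc = _mm256_add_ps(acc, _mm256_cvtepi32_ps(prod));
    }
    // horizontal sum of the 8 float lanes
    __m128 lo = _mm_add_ps(_mm256_castps256_ps128(acc),
                           _mm256_extractf128_ps(acc, 1));
    lo = _mm_hadd_ps(lo, lo);
    lo = _mm_hadd_ps(lo, lo);
    return _mm_cvtss_f32(lo);
  }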
Performance improvement (microbenchmark) for bloom filters is less exciting,
in the 10-30% ballpark depending on compiler options and load.
Misc implementation notes:
CalcHash: no _mm256_shuffle_epi8(), use explicit XOR/shift.
CalcHash: no 64bit multiplication, do scalar multiplication.
ConstructMask/Query: no _mm256_i64gather_epi64, access array elements explicitly.
Query: no _mm256_movemask_epi8, accumulate bits manually.
Closes #3671
x86 builds don't use SIMD; fast math and inlining cause distances
to be quite unstable, and
1) comparison with the threshold no longer works, the distance calculated
twice between the same two vectors comes out differently
2) a bunch of identical vectors get a non-zero distance between
them and HNSW cross-links them with no outbound links (if there are
more than 2M identical vectors). Let's strengthen the select_neighbors
heuristic to skip neighbors that are too close to each other.
MDEV-35418 suggests a better solution for this.
now with streaming (MDEV-35032) we can no longer free MHNSW_Trx
at the end of the search. We cannot even free it at the end of
mhnsw_insert, because there can be a search running (INSERT ... SELECT).
Let's do reference counting, even though it's a thread-local object.
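a sketch of the lifetime rule (illustrative; the object is
thread-local, so plain counters are enough, no atomics):

  struct MHNSW_Trx_sketch               // not the real class
  {
    unsigned refcount = 1;
    void ref()   { refcount++; }
    void unref() { if (--refcount == 0) delete this; }
  };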
considering that users don't interact with MariaDB vector search directly,
but primarily use AI frameworks, we should use names familiar
to vector store connector writers and AI framework users,
that is, the industry-standard M and ef.
mhnsw_cache_size -> mhnsw_max_cache_size
mhnsw_distance_function -> mhnsw_default_distance
mhnsw_max_edges_per_node -> mhnsw_default_m
mhnsw_min_limit -> mhnsw_ef_search
inside CREATE TABLE:
max_edges_per_node -> m
distance_function -> distance
with streaming implemented, mhnsw no longer needs to know
the LIMIT in advance. Let's just cap it to avoid allocating
too much memory for the one-step result set.
MHNSW_Trx cannot store a pointer to the TABLE_SHARE for the duration
of a transaction, because the share can be evicted from the cache any
time.
Use a pointer to the MDL_ticket instead, it is stored in the THD
and has a duration of MDL_TRANSACTION, so won't go away.
When we need a TABLE_SHARE - on commit - we get a new one from the tdc.
Normally it'll already be in the cache, so it'll be fast.
We don't optimize for the edge case when TABLE_SHARE was evicted.
support SQL semantics for SELECT ... WHERE ... ORDER BY ... LIMIT
* switch from returning k nearest neighbors to returning
as many as needed, in k-neighbor chunks, with increasing distance
* make search_layer() skip nodes that are closer than a threshold
* read_next keeps a search context - list of k found nodes,
threshold, ctx, etc.
* when the list of found nodes is exhausted, it repeats the search
starting from the last found nodes and the threshold
* the search context keeps ctx->refcount incremented, so ctx won't go away
* but commit_lock is unlocked between calls, so InnoDB can modify the table
* use ctx version to detect that, switch to MHNSW_Trx when it happens
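roughly, the streaming loop described above could be sketched like this
(hypothetical types and names):

  #include <vector>
  #include <cstddef>

  struct Node { float distance; };      // hypothetical

  struct Search_context
  {
    std::vector<Node*> found;           // current chunk of k nodes
    float threshold = 0.0f;             // skip nodes closer than this
    size_t pos = 0;

    void next_chunk();                  // hypothetical: re-runs the search
                                        // from the last found nodes

    Node *read_next()
    {
      if (pos == found.size())          // chunk exhausted
      {
        if (!found.empty())
          threshold = found.back()->distance;
        next_chunk();                   // repeat search, skip < threshold
        pos = 0;
        if (found.empty())
          return nullptr;               // nothing left
      }
      return found[pos++];
    }
  };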
bugfix:
* use the correct lock in ha_external_lock() for the graph table
* InnoDB didn't reset locks on ha_external_lock(F_UNLCK) and previous
LOCK_X leaked into the next statement
make generosity depend on
1. M. Keep small M's fast, increase generosity for larger M's to get
better recall.
2. distance. Keep generosity small when vectors are far from the
target, increase generosity when the search gets closer. This
allows examining more relevant vectors without wasting time on
irrelevant ones. Particularly important with the cosine metric,
where the distance is bounded.
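purely for illustration, a factor with this shape might look as follows
(hypothetical formula and names, not the actual code):

  // grows with M, and ramps up as the candidate gets as close to the
  // target as the best node found so far
  static float generosity(unsigned M, float dist, float best_dist)
  {
    float closeness = best_dist < dist ? best_dist / dist : 1.0f;
    return 1.0f + M / 32.0f * closeness;
  }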
Fixes for ALTER TABLE ... ADD/DROP COLUMN, ALGORITHM=COPY.
Let quick_rm_table() remove high-level indexes along with original table.
Avoid locking uninitialized LOCK_share for INTERNAL_TMP_TABLEs.
Don't enable bulk insert when altering a table containing vector index.
InnoDB can't handle the situation when bulk insert is enabled for one
table but disabled for another. We can't do bulk insert on a vector
index as it currently does table updates.
into a separate transaction_participant structure
handlerton inherits it, so handlerton itself doesn't change.
but entities that only need to participate in a transaction,
like binlog or online alter log, use a transaction_participant
and no longer need to pretend to be a full-blown but invisible
storage engine which doesn't support create table.
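schematically (a sketch, not the exact declarations):

  class THD;

  // transaction hooks live in their own structure...
  struct transaction_participant
  {
    int (*prepare)(THD *thd, bool all);
    int (*commit)(THD *thd, bool all);
    int (*rollback)(THD *thd, bool all);
  };

  // ...and handlerton inherits it, so engines are unaffected, while
  // binlog or the online alter log can be a bare participant
  struct handlerton : transaction_participant
  {
    // engine-only members (create table, flags, ...) stay here
  };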
* fix the truncate-by-handler variant, used by InnoDB
* test that insert works after truncate, meaning graph table was emptied
* test that the vector index size is zero after truncate in MyISAM
introduced a generosity factor that makes the search less greedy.
it dramatically improves the recall by making the search a bit slower
(for the same recall one can use half the M and smaller ef).
had to add a Queue::safe_push() method that removes one of the
furthest elements (not necessarily the furthest) from the queue
to keep it from overflowing.
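the idea, sketched with a std::vector-based min-heap (illustrative, not
the actual Queue implementation):

  #include <algorithm>
  #include <functional>
  #include <vector>
  #include <cstddef>

  // on overflow drop the last leaf: a leaf is never the closest
  // element, so this evicts one of the further elements (though not
  // necessarily the furthest) in O(1)
  template <typename T>
  void safe_push(std::vector<T> &heap, const T &x, size_t max_size)
  {
    if (heap.size() >= max_size)
      heap.pop_back();
    heap.push_back(x);
    std::push_heap(heap.begin(), heap.end(), std::greater<T>());
  }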
use int16_t instead of floats, they're faster and smaller.
but perform intermediate SIMD calculations with floats to avoid overflows.
the recall drop with such a scheme is below 0.002, often none.
int8_t would've been better but the precision loss is too big
and recall degrades too much.
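for illustration, the storage conversion could look like this
(hypothetical sketch; choosing the scale is omitted):

  #include <stdint.h>
  #include <math.h>

  // scale a float component into the int16 range, saturating at the
  // edges; intermediate dot-product math is then done in floats
  static int16_t to_int16(float x, float scale)
  {
    float v = rintf(x * scale);
    if (v > 32767.0f)  return 32767;
    if (v < -32768.0f) return -32768;
    return (int16_t) v;
  }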
When the source row is deleted, mark the corresponding node in the HNSW
index by setting `tref` to null. An index is added for the `tref` in
the secondary table for faster searching of the to-be-marked nodes.
The nodes marked as deleted will still be used for search, but will not
be included in the final query results.
As skipping deleted nodes during the search and excluding them from
newly inserted nodes' neighbor lists could impact the performance, we
now only skip these nodes in the search results.
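Schematically (hypothetical names):

  #include <vector>

  struct Node { bool tref_is_null; };   // hypothetical

  // deleted nodes (tref set to null) still guide the graph walk, but
  // are filtered out when building the final result
  static void collect_results(const std::vector<Node*> &candidates,
                              std::vector<Node*> &result)
  {
    for (Node *node : candidates)
      if (!node->tref_is_null)
        result.push_back(node);
  }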
- for some reason the bitmap is not set for hlindex during the delete,
so I had to temporarily comment out one line
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
* preserve the graph in memory between statements
* keep it in a TABLE_SHARE, available for concurrent searches
* nodes are generally read-only, walking the graph doesn't change them
* distance to target is cached, calculated only once
* SIMD-optimized bloom filter detects visited nodes
* nodes are stored in an array, not List, to better utilize bloom filter
* auto-adjusting heuristic to estimate the number of visited nodes
(to configure the bloom filter)
* many threads can concurrently walk the graph. MEM_ROOT and Hash_set
are protected with a mutex, but walking doesn't need them
* up to 8 threads can concurrently load nodes into the cache;
nodes are partitioned over 8 mutexes (8 is chosen arbitrarily, might
need tuning; see the sketch after this list)
* concurrent editing is not supported though
* this is fine for MyISAM, TL_WRITE protects the TABLE_SHARE and the
graph (note that TL_WRITE_CONCURRENT_INSERT is not allowed, because an
INSERT into the main table means multiple UPDATEs in the graph)
* InnoDB uses secondary transaction-level caches linked in a list
in thd->ha_data via a fake handlerton
* on rollback the secondary cache is discarded, on commit nodes
from the secondary cache are invalidated in the shared cache
while it is exclusively locked
* on savepoint rollback both caches are flushed. this can be improved
in the future with a row visibility callback
* graph size is controlled by @@mhnsw_cache_size, the cache is flushed
when it reaches the threshold
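As referenced above, the node-load partitioning could be sketched as
(illustrative):

  #include <mutex>
  #include <cstdint>

  // a node's load is serialized by one of 8 mutexes picked by hash,
  // so up to 8 loads can proceed in parallel
  struct Node_cache_partitions
  {
    static const int PARTS = 8;         // arbitrary, might need tuning
    std::mutex part[PARTS];

    std::mutex &mutex_for(uint64_t node_id)
    { return part[node_id % PARTS]; }
  };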
instead of one row per node per layer, have one row per node.
store all neighbors for all layers in that row, and the vector itself too.
This completely avoids searches in the graph table and
will allow implementing deletions in the future.
1. introduce alpha. the value of 1.1 is optimal, so hard-code it.
2. hard-code ef_construction=10, best by test
3. rename hnsw_max_connection_per_layer to mhnsw_max_edges_per_node
(max_connection is rather ambiguous in MariaDB) and add a help text
4. rename hnsw_ef_search to mhnsw_min_limit and add a help text
* mhnsw:
* use the primary key, innodb loves it (and the index cannot have dupes anyway)
* MyISAM is ok with that, performance-wise
* must be ha_rnd_init(0) because we aren't going to scan
* MyISAM resets the position on ha_rnd_init(0) so query it before
* oh, and use the correct handler, just in case
* HA_ERR_RECORD_IS_THE_SAME is no error
* innodb:
* return ref_length on create
* don't assume table->pos_in_table_list is set
* ok, assume away, but only for system versioned tables
* set alter_info on create (InnoDB needs to check for FKs)
* pair external_lock/external_unlock correctly