Also expand vcol field index coverings to include indexes covering all
the fields in the expression. The reasoning goes as follows: let f(c1,
c2, ..., cn) be a function applied to columns c1, c2, ..., cn; if all
of c1, c2, ..., cn are covered by an index, then so is the vcol vc
whose expression is f(...).
For example, if t.vf = t.c1 + t.c2, and t has three indexes (vf), (c1,
c2), (c1).
Before this change, vf's index covering is a singleton {(vf)}. Let's call
that the "conventional" index covering.
After this change vf's index covering is now {(vf), (c1, c2)}, since
(c1, c2) covers both c1 and c2. Let's call (c1, c2) in this case the
"extra" covering.
With the coverings updated, when an index in the "extra" covering is
chosen for keyread, the vcol also needs to be calculated. In this case
we mark the vcol in the table's read_set and ensure it is computed.
With these changes, we see various improvements, including going from
full table scan + filesort to full index scan + filesort when ORDER BY
uses an indexed vcol (here vc = c + 1 is a vcol and both c and vc are
indexed):
explain select c + 1 from t order by vc;
id select_type table type possible_keys key key_len ref rows Extra
-1 SIMPLE t ALL NULL NULL NULL NULL 10000 Using filesort
+1 SIMPLE t index NULL c 5 NULL 10000 Using index; Using filesort
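A minimal schema sketch matching the example above (table and column
names are illustrative, not the actual test's):

create table t (
  c int,
  vc int as (c + 1) virtual,
  key (c),
  key (vc)
);
explain select c + 1 from t order by vc;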
The substitutions are followed by updates to all_fields, which
includes a copy of the ORDER BY/GROUP BY item pointers, as well as
corresponding updates to ref_pointer_array, so that all_fields and
ref_pointer_array remain in sync.
Another, related change is the recomputation of the table index
covering on substitutions. It not only reflects the correct table
index covering after the substitutions, but also improves execution
where the vcol index can be chosen, such as in this example (here
vc = c + 1 and vc is the only index in the table), going from full
table scan + filesort to full index scan:
select vc from t order by c + 1;
We do this in SELECT as well as in single-table DELETE/UPDATE.
DROP USER looks for sessions by the to-be-dropped user and, if found:
* fails with ER_CANNOT_USER in Oracle mode
* continues with ER_ACTIVE_CONNECTIONS_FOR_USER_TO_DROP warning otherwise
Every user being dropped is marked with a flag that disallows
establishing new connections on behalf of this user.
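A hedged SQL sketch of the two behaviors (user name is illustrative;
exact error/warning texts omitted):

SET SQL_MODE=ORACLE;
DROP USER u1;  -- fails with ER_CANNOT_USER while u1 has open sessions
SET SQL_MODE=DEFAULT;
DROP USER u1;  -- succeeds, warns ER_ACTIVE_CONNECTIONS_FOR_USER_TO_DROP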
Let's always disconnect a user's connections before dropping the said
user. MariaDB is traditionally very tolerant of active connections of
the dropped user, which isn't the case for most other databases.
Let's avoid unintentionally spreading incompatible behavior
and disconnect before drop.
Except in cases when the test specifically tests such a behavior.
Remove one of the major sources of race conditions in mariadb-test.
Normally, mariadb_close() sends COM_QUIT to the server and immediately
disconnects. In mariadb-test this means the test can switch to another
connection and send queries to the server before the server has even
started parsing the COM_QUIT packet, and these queries can see the old
connection as fully active, as it hasn't reached dispatch_command yet.
This is a major source of instability in tests, and many - but not
all, still fewer than half - of the tests employ workarounds. The
correct one is the pair count_sessions.inc/wait_until_count_sessions.inc.
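The correct pattern looks like this (a sketch; the include files are
the real ones named above, the connection itself is illustrative):

--source include/count_sessions.inc
connect (con1,localhost,root,,);
# ... run the test payload on con1 ...
disconnect con1;
connection default;
--source include/wait_until_count_sessions.inc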
Also very popular was wait_until_disconnected.inc, which was
completely useless: it verifies that the connection is closed, and
after disconnect it always is; it didn't verify whether the server had
processed COM_QUIT. Sadly, the placebo was as widely used as the real
thing.
Let's fix this by making the mariadb-test `disconnect` command _wait_
for the server to confirm. This makes almost all workarounds redundant.
In some cases count_sessions.inc/wait_until_count_sessions.inc is still
needed, though, as only `disconnect` command is changed:
* after external tools, like `exec $MYSQL`
* after failed `connect` command
* replication, after `STOP SLAVE`
* Federated/CONNECT/SPIDER/etc after `DROP TABLE`
and also in some XA tests, because an XA transaction is dissociated from
the THD very late, after the server has closed the client connection.
Collateral cleanups: fix comments, remove some redundant statements:
* DROP IF EXISTS if nothing is known to exist
* DROP table/view before DROP DATABASE
* REVOKE privileges before DROP USER
etc
Statements that intend to modify data have to acquire protection
against an ongoing backup. Prior to backup locks, protection against
FTWRL was acquired in the form of two shared metadata locks in the
GLOBAL (global read lock) and COMMIT namespaces. These two namespaces
were separate entities: they didn't share data structures or locking
primitives, and thus they were separate contention points.
With backup locks, introduced by 7a9dfdd, these namespaces were
combined into a single BACKUP namespace. It became a single contention
point, which doubled the load on the BACKUP namespace data structures
and locking primitives compared to the GLOBAL and COMMIT namespaces.
In other words, system throughput was halved.
MDL fast lanes solve this problem by allowing multiple contention
points for a single MDL_lock. A fast lane is a scalable,
multi-instance registry for lightweight locks. Internally it is just a
list of granted tickets, a close counter, and a mutex.
The number of fast lanes (or contention points) is defined by the
metadata_locks_instances system variable. A value of 1 disables fast
lanes, and lock requests are served by the conventional MDL_lock data
structures.
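A minimal configuration sketch, assuming the variable is set at server
startup (the value is illustrative):

[mariadbd]
metadata_locks_instances=8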
Since fast lanes allow an arbitrary number of contention points, they
outperform the pre-backup-locks GLOBAL and COMMIT namespaces.
Fast lanes are enabled only for BACKUP namespace. Support for other
namespaces is to be implemented separately.
Lock types are divided into two categories: lightweight and
heavyweight. Lightweight lock types represent DML: MDL_BACKUP_DML,
MDL_BACKUP_TRANS_DML, MDL_BACKUP_SYS_DML, MDL_BACKUP_DDL,
MDL_BACKUP_ALTER_COPY, MDL_BACKUP_COMMIT. They are fully compatible
with each other and are normally served by the corresponding fast
lane, which is determined by thread_id % metadata_locks_instances.
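For example, with metadata_locks_instances=8, a connection with
thread_id 42 has its lightweight lock requests served by lane
42 % 8 = 2.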
Heavyweight lock types represent an ongoing backup: MDL_BACKUP_START,
MDL_BACKUP_FLUSH, MDL_BACKUP_WAIT_FLUSH, MDL_BACKUP_WAIT_DDL,
MDL_BACKUP_WAIT_COMMIT, MDL_BACKUP_FTWRL1, MDL_BACKUP_FTWRL2,
MDL_BACKUP_BLOCK_DDL. These locks are always served by the
conventional MDL_lock data structures. Whenever such a lock is
requested, the fast lanes are closed and all tickets registered in
fast lanes are moved to the conventional MDL_lock data structures.
Until such locks are released or aborted, lightweight lock requests
are served by the conventional MDL_lock data structures.
Strictly speaking, moving tickets from fast lanes to the conventional
MDL_lock data structures is not required. But it reduces complexity
and keeps methods like MDL_lock::visit_subgraph(),
MDL_lock::notify_conflicting_locks(), MDL_lock::reschedule_waiters(),
and MDL_lock::can_grant_lock() intact.
It is not even required to register tickets in fast lanes. They could
be implemented based on an atomic variable that holds two counters:
granted lightweight locks and granted/waiting heavyweight locks,
similar to the MySQL solution, which roughly speaking has a "single
atomic fast lane". However, it appears this wouldn't bring any better
performance, while code complexity would be much higher.
Before MySQL 4.0.18, user-specified constraint names were ignored.
Starting with MySQL 4.0.18, the specified constraint name was
prepended with the schema name and '/'. Now we are transforming the
names into a format where the constraint name is prefixed with
dict_table_t::name and the impossible UTF-8 sequence 0xff.
Generated constraint names will be ASCII decimal numbers.
On upgrade, old FOREIGN KEY constraint names will be displayed
without any schema name prefix. They will be updated to the new
format on DDL operations.
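For illustration (hypothetical names): a user-specified constraint c1
on table test.t was stored as test/c1 in the 4.0.18+ format, and is
now stored as test/t\377c1, that is, dict_table_t::name ("test/t"),
the 0xff byte, and the constraint name.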
dict_foreign_t::sql_id(): Return the SQL constraint name
without any schemaname/tablename\377 or schemaname/ prefix.
row_rename_table_for_mysql(), dict_table_rename_in_cache():
Simplify the logic: Just rename constraints to the new format.
dict_table_get_foreign_id(): Replaces dict_table_get_highest_foreign_id().
innobase_get_foreign_key_info(): Let my_error() refer to erroneous
anonymous constraints as "(null)".
row_delete_constraint(): Try to drop all 3 constraint name variants.
Reviewed by: Thirunarayanan Balathandayuthapani
Tested by: Matthias Leich
- The test was inadvertently skipped on Windows CI, due to the
unjustified addition of include/have_tlsv13.inc in MDEV-33834 (log TLS
version). That include didn't make sense here and just reduced
coverage.
- Once skipped, the test got broken later by MDEV-12182 changes.
Originally it expected only one localhost:PORT line in the audit log,
assuming Unix socket connections. But on Windows, MTR uses TCP by
default, so all entries had :PORT, and the diff failed.
Fix:
- Forced a TCP connection for server_audit.test via a .cnf file and
  re-recorded the result. unix_socket + server_audit is still covered
  by other tests.
- Dropped the have_tlsv13.inc include to restore coverage; it wasn't
  testing TLS versions or ciphers anyway.
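A minimal sketch of such a .cnf override (the actual file contents may
differ):

[client]
protocol=tcp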
Added option 'aria-pagecache-segments', default 1.
For values > 1, this splits the aria-pagecache-buffer into the given
number of segments, each independent of the others. Having multiple
pagecaches improves performance when multiple connections run queries
concurrently on different tables.
Each pagecache will use aria-pagecache-buffer/segments amount of
memory, however at least 128K.
Each opened table has its index and data file use the segments in a
round-robin fashion.
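For example, with aria-pagecache-buffer=1G and
aria-pagecache-segments=8, each of the 8 pagecaches gets 128M; with a
4M buffer and 64 segments, each segment would be clamped to the 128K
minimum.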
Internal changes:
- All programs allocating the maria pagecache themselves should now
call multi_init_pagecache() instead of init_pagecache().
- pagecache statistics are now stored in 'pagecache_stats' instead of
  maria_pagecache. One must call multi_update_pagecache_stats() to
  update the statistics.
- Added to PAGECACHE_FILE a pointer to the file's pagecache. This was
  done to ensure that the index and data files use the same pagecache,
  and it simplified the checkpoint code.
I kept pagecache in TABLE_SHARE to minimize the changes.
- really_execute_checkpoint() was updated to handle a dynamic number
  of pagecaches.
- pagecache_collect_changed_blocks_with_lsn() was slightly changed to
  allow it to be called for each pagecache.
- Removed the unused functions maria_assign_pagecache() and
  maria_change_pagecache().
- ma_pagecaches.c is totally rewritten. It now contains all the
  multi_pagecache functions.
Errors found by QA that are fixed:
MDEV-36872 UBSAN errors in ma_checkpoint.c
MDEV-36874 Behavior upon too small aria_pagecache_buffer_size in case of
multiple segments is not very user-friendly
MDEV-36914 ma_checkpoint.c(285,9): conversion from '__int64' to 'uint'
treated as an error
MDEV-36912 sys_vars.sysvars_server_embedded and
sys_vars.sysvars_server_notembedded fail on x86
MTR .result files currently do not contain output to indicate whether
a change_user command has been executed in the corresponding .test
file. Record the change_user command in the MTR output, but only if
disable_query_log is set to false, in the following format:
change_user <user>,<password>,<db>;
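For example, a hypothetical test line
  --change_user root,,test
would be recorded in the .result file as:
  change_user root,,test;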
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
Currently, there are multiple error codes reported for the issue
'Can't execute init_slave query'. Those error codes reflect the
underlying reasons the init_slave query cannot be executed, but this
makes it difficult to detect the issue in an automated way.
This patch introduces a new error code, ER_INIT_SLAVE_ERROR, to unify
all the errors related to the init_slave query. The ER_INIT_SLAVE_ERROR
error is raised for any issue related to the init_slave query, and the
underlying error code and message are included in the Last_SQL_Error
field.
Reviewed by:
Jimmy Hu <jimmy.hu@mariadb.com>
Brandon Nesterenko <brandon.nesterenko@mariadb.com>
InnoDB does the following checks for a sequence table during the
CHECK TABLE command:
- Only one index should exist on the sequence table
- Only one row should exist in the sequence table
- The leaf page must be the root page for the sequence table
- No delete-marked record should exist
- DB_TRX_ID and DB_ROLL_PTR of the record should be 0 and 1U << 55
To check the rows, the table needs to be opened. To that end, and like
MDEV-36038, we force COPY algorithm on ALTER TABLE ... SEQUENCE=1.
This also results in checking the sequence state / metadata.
The table structure was already validated before this patch.
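A hedged sketch of the statements involved (names are illustrative):

CREATE SEQUENCE s1;
CHECK TABLE s1;             -- runs the InnoDB checks listed above
ALTER TABLE t1 SEQUENCE=1;  -- now forced to use ALGORITHM=COPY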
(cherry picked from commit 6f8ef26885)
Two new error codes ER_SEQUENCE_TABLE_HAS_TOO_FEW_ROWS and
ER_SEQUENCE_TABLE_HAS_TOO_MANY_ROWS were introduced in MDEV-36032 in
both 10.11 and, as part of MDEV-22491, 12.0. Here we remove them from
10.11, but they should remain in 12.0.
is_bulk_op())' failed after ALTER TABLE of versioned table
A missed error code resulted in my_ok() being called at a higher
frame, which failed the assertion on m_status being in the error
state.
The handler::clone() call did not work with read-only tables like S3.
It gave a wrong error message (out of memory instead of a permission
error) and aborted the query.
The issue was that the clone call passed a wrong parameter to
ha_open(). This is now fixed. I also changed the clone call to provide
the correct error message if things fail.
This patch fixes an 'out of memory' error when using the S3 engine
for queries that could use multiple indexes together to find the matching
rows, like the following:
SELECT * FROM t1 WHERE key1 = 99 OR key2 = 2
forever, cannot be killed
mysql_rm_table_no_locks() does TDC_RT_REMOVE_ALL, which waits while
the share is closed. The table is normally open only as an OPEN_STUB;
this is what the parser does for CREATE TABLE. But for SELECT the
table is opened not as a stub. If it is the same table name, we anyway
have two TABLE_LIST objects: stub and not stub. So for the "not stub"
one, TDC_RT_REMOVE_ALL sees a nonzero open count and decides to wait
until it is closed. And it hangs, because it was opened in the same
thread.
The fix disables subqueries in CHECK expressions at the parser level.
Thanks to Sergei Golubchik <serg@mariadb.org> for the patch.
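A hedged illustration of a statement that is now rejected at parse
time (names are illustrative):

CREATE TABLE t1 (a INT CHECK (a IN (SELECT 1)));  -- subquery in CHECK
                                                  -- is now a parse error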
buf_pool_t::resize(): After successfully shrinking the buffer pool,
announce the success. The size had already been updated in shrunk().
After failing to shrink the buffer pool, re-enable the adaptive
hash index if it had been enabled.
Reviewed by: Debarun Banerjee
Without this increase, the mtr test case pre/post conditions will
fail, as the stack usage has increased under MSAN with clang-20.1.
ASAN takes an 11M stack; however, there was no obvious gain in MSAN
test success beyond 2M.
The behaviour observed with a smaller stack size was normally a SEGV.
Hide the default stack size from the sysvar tests that expose
thread-stack as a variable with its default value.
Problem:
=======
- In 10.11, during the COPY algorithm, InnoDB uses bulk insert for
  row-by-row insert operations. When the temporary directory runs out
  of space, row_mysql_handle_errors() fails to handle
  DB_TEMP_FILE_WRITE_FAIL.
- During the INPLACE algorithm, concurrent DML fails to write the log
  operation into the temporary file, and InnoDB fails to mark the
  error for the online log.
- ddl_log_write() releases the global ddl lock prematurely, before
  releasing the log memory entry.
Fix:
===
row_mysql_handle_errors(): Roll back the transaction when InnoDB
encounters DB_TEMP_FILE_WRITE_FAIL.
convert_error_code_to_mysql(): Report an aborted transaction when
InnoDB encounters DB_TEMP_FILE_WRITE_FAIL during ALTER TABLE
ALGORITHM=COPY or an InnoDB bulk insert operation.
row_log_online_op(): Mark the error in the online log when InnoDB
runs out of temporary space.
fil_space_extend_must_retry(): Set os_has_said_disk_full to true if
os_file_set_size() fails.
btr_cur_pessimistic_update(): Return the error code when
btr_cur_pessimistic_insert() fails.
ddl_log_write(): Release the global ddl lock after releasing the log
memory entry when an error was encountered.
btr_cur_optimistic_update(): Relax the assertion so that the blob
pointer can be null during rollback, because InnoDB can run out of
space while allocating the external page.
ha_innobase::extra(): Roll back the transaction during DDL before
calling convert_error_code_to_mysql().
row_undo_mod_upd_exist_sec(): Remove the assertion which says that
InnoDB should fail to build the index entry when rolling back an
incomplete transaction after crash recovery. This scenario can happen
when InnoDB runs out of space.
row_upd_changes_ord_field_binary_func(): Relax the assertion so that
an externally stored field can be null when InnoDB runs out of space.
ER_DUP_ENTRY on partitioned table
Now that c1492f3d07 (MDEV-36115) restores m_last_part, table->file
points to partition p0 while the error happens in p1, so the error
index does not match ib_table in
innobase_get_mysql_key_number_for_index(). This case is handled by a
separate code block in innobase_get_mysql_key_number_for_index(),
which wrongly used a secondary index for
dict_index_is_auto_gen_clust() and was not covered by the tests.
After a membership change on a newly synced node, this is just a
warning and is safe to suppress.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Test changes only. Both warnings are expected and
should be suppressed because we intentionally inject
different inconsistencies on two nodes and then join
them back with membership change.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Test changes only. Add a wait_condition so that all nodes are in the
expected state, and add debug output in case the issue reproduces.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Issue:
MariaDB acquires additional MDL locks for UPDATE/INSERT/DELETE
statements on tables with foreign keys. For example, if table t1
references t2, an UPDATE to t1 will MDL-lock t2 in addition to t1.
A replica may deliver an ALTER on t1 and an UPDATE on t2 concurrently
for applying. The UPDATE may then acquire an MDL lock on t1, followed
by a conflict when the ALTER attempts to MDL-lock t1, causing a BF-BF
conflict.
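A hedged illustration of the extra MDL (the schema is illustrative):

CREATE TABLE t2 (a INT PRIMARY KEY);
CREATE TABLE t1 (a INT, FOREIGN KEY (a) REFERENCES t2 (a));
UPDATE t1 SET a = 1;  -- MDL-locks t2 in addition to t1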
Solution:
Additional keys for the referenced/foreign table need to be added to
avoid potential MDL conflicts with concurrent updates and DDL.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
Add a wait_condition to wait until all inserted rows are replicated,
so that SHOW CREATE TABLE is deterministic.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
page_is_corrupted(): Do not allocate the buffers from stack,
but from the heap, in xb_fil_cur_open().
row_quiesce_write_cfg(): Issue one type of message when we
fail to create the .cfg file.
update_statistics_for_table(), read_statistics_for_table(),
delete_statistics_for_table(), rename_table_in_stat_tables():
Use a common stack buffer for Index_stat, Column_stat, Table_stat.
ha_connect::FileExists(): Invoke push_warning_printf() so that
we can avoid allocating a buffer for snprintf().
translog_init_with_table(): Do not duplicate TRANSLOG_PAGE_SIZE_BUFF.
Let us also globally enable the GCC 4.4 and clang 3.0 option
-Wframe-larger-than=16384 to reduce the possibility of introducing
such stack overflow in the future. For RocksDB and Mroonga we relax
these limits.
Reviewed by: Vladislav Lesin
Adding a new column to INFORMATION_SCHEMA.COLLATIONS:
PAD_ATTRIBUTE ENUM('PAD SPACE','NO PAD')
and a new column Pad_attribute to SHOW COLLATION.
The new column has been added after SORTLEN but before COMMENT.
This order is compatible with the MySQL 8.0 order,
with the exception that MariaDB has an extra last column, COMMENT:
MariaDB [test]> desc information_schema.collations;
+--------------------+----------------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------------------+----------------------------+------+-----+---------+-------+
| COLLATION_NAME | varchar(64) | NO | | NULL | |
| CHARACTER_SET_NAME | varchar(32) | YES | | NULL | |
| ID | bigint(11) | YES | | NULL | |
| IS_DEFAULT | varchar(3) | YES | | NULL | |
| IS_COMPILED | varchar(3) | NO | | NULL | |
| SORTLEN | bigint(3) | NO | | NULL | |
| PAD_ATTRIBUTE | enum('PAD SPACE','NO PAD') | NO | | NULL | |
| COMMENT | varchar(80) | NO | | NULL | |
+--------------------+----------------------------+------+-----+---------+-------+
The new Pad_attribute in SHOW COLLATION has been added as the last column.
This is also compatible with MySQL:
MariaDB [test]> show collation like 'utf8mb4_bin';
+-------------+---------+------+---------+----------+---------+---------------+
| Collation | Charset | Id | Default | Compiled | Sortlen | Pad_attribute |
+-------------+---------+------+---------+----------+---------+---------------+
| utf8mb4_bin | utf8mb4 | 46 | | Yes | 1 | PAD SPACE |
+-------------+---------+------+---------+----------+---------+---------------+