In commit 1bd681c8b3 (MDEV-25506 part 3)
we introduced a "fake instant timeout" when a transaction would wait
for a table or record lock while holding dict_sys.latch. This prevented
a deadlock of the server but could cause bogus errors for operations
on the InnoDB persistent statistics tables.
A better fix is to ensure that whenever a transaction is being
executed in the InnoDB internal SQL parser (which will for now
require dict_sys.latch to be held), it will already have acquired
all locks that could be required for the execution. So, we will
acquire the following locks upfront, before acquiring dict_sys.latch
(see the sketch after this list):
(1) MDL on the affected user table (acquired by the SQL layer)
(2) If applicable (not for RENAME TABLE): InnoDB table lock
(3) If persistent statistics are going to be modified:
(3.a) MDL_SHARED on mysql.innodb_table_stats, mysql.innodb_index_stats
(3.b) exclusive table locks on the statistics tables
(4) Exclusive table locks on the InnoDB data dictionary tables
(not needed in ANALYZE TABLE and the like)
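The intended ordering can be sketched as follows. This is only an
illustrative sketch, not the actual server code: the helpers
lock_user_table_for_ddl() and lock_statistics_tables() are hypothetical
names, while lock_sys_tables() and row_mysql_lock_data_dictionary() are
described later in this message.

  /* Illustrative only: acquire everything that the execution inside the
     InnoDB internal SQL parser may need to lock before dict_sys.latch,
     so that no lock wait can occur while the latch is held. */
  dberr_t ddl_acquire_locks(THD *thd, trx_t *trx, dict_table_t *table)
  {
    dberr_t err;
    /* (1) MDL on the user table was already acquired by the SQL layer. */
    /* (2) InnoDB table lock on the user table
       (RENAME TABLE would skip this step). */
    if ((err= lock_user_table_for_ddl(trx, table)) != DB_SUCCESS)
      return err;
    /* (3) MDL_SHARED plus exclusive InnoDB table locks on
       mysql.innodb_table_stats and mysql.innodb_index_stats,
       if persistent statistics are going to be modified. */
    if ((err= lock_statistics_tables(thd, trx)) != DB_SUCCESS)
      return err;
    /* (4) Exclusive locks on the InnoDB data dictionary tables. */
    if ((err= lock_sys_tables(trx)) != DB_SUCCESS)
      return err;
    /* Only now acquire dict_sys.latch; all waits happened above,
       so no "fake instant timeout" is needed any more. */
    row_mysql_lock_data_dictionary(trx);
    return DB_SUCCESS;
  }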
Note: Acquiring exclusive locks on the statistics tables may cause
more locking conflicts between concurrent DDL operations.
Notably, RENAME TABLE will lock the statistics tables
even if no persistent statistics are enabled for the table.
DROP DATABASE will only acquire locks on statistics tables if
persistent statistics are enabled for the tables on which the
SQL layer is invoking ha_innobase::delete_table().
For any "garbage collection" in innodb_drop_database(), a timeout
while acquiring locks on the statistics tables will result in any
statistics not being deleted for any tables that the SQL layer
did not know about.
If innodb_defragment=ON, information may be written to the statistics
tables even for tables for which InnoDB persistent statistics are
disabled. But, DROP TABLE will no longer attempt to delete that
information if persistent statistics are not enabled for the table.
This change should also fix the hangs related to InnoDB persistent
statistics and STATS_AUTO_RECALC (MDEV-15020) as well as
a bug where running ALTER TABLE on the statistics tables
concurrently with running ALTER TABLE on InnoDB tables could
cause trouble.
lock_rec_enqueue_waiting(), lock_table_enqueue_waiting():
Do not issue a fake instant timeout error when the transaction
is holding dict_sys.latch. Instead, assert that the dict_sys.latch
is never being held here.
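In sketch form, the former fake-timeout path reduces to a debug
assertion at the start of the wait (assuming the bool
trx_t::dict_operation_lock_mode flag described later in this message):

  ut_ad(!trx->dict_operation_lock_mode); /* caller must not hold dict_sys.latch */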
lock_sys_tables(): A new function to acquire exclusive locks on all
dictionary tables, in case DROP TABLE or similar operation is
being executed. Locking non-hard-coded tables is optional to avoid
a crash in row_merge_drop_temp_indexes(). The SYS_VIRTUAL table was
introduced in MySQL 5.7 and MariaDB Server 10.2. Normally, we require
all these dictionary tables to exist before executing any DDL, but
the function row_merge_drop_temp_indexes() is an exception.
When upgrading from MariaDB Server 10.1 or MySQL 5.6 or earlier,
the table SYS_VIRTUAL would not exist at this point.
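A condensed sketch of the idea; the real function differs in details,
and lock_dict_table() and find_dict_table() are hypothetical stand-ins,
not real InnoDB symbols:

  dberr_t lock_sys_tables_sketch(trx_t *trx)
  {
    /* Dictionary tables with hard-coded IDs; these must always exist. */
    static const char *const required[]=
    {"SYS_TABLES", "SYS_COLUMNS", "SYS_INDEXES", "SYS_FIELDS"};
    /* Non-hard-coded tables; in particular, SYS_VIRTUAL may be missing
       while upgrading from MariaDB 10.1 or MySQL 5.6 or earlier. */
    static const char *const optional[]=
    {"SYS_FOREIGN", "SYS_FOREIGN_COLS", "SYS_VIRTUAL"};

    dberr_t err;
    for (const char *name : required)
      if ((err= lock_dict_table(trx, name)) != DB_SUCCESS) /* LOCK_X */
        return err;
    for (const char *name : optional)
      if (find_dict_table(name) /* skip tables that do not exist */
          && (err= lock_dict_table(trx, name)) != DB_SUCCESS)
        return err;
    return DB_SUCCESS;
  }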
ha_innobase::commit_inplace_alter_table(): Invoke
log_write_up_to() while not holding dict_sys.latch.
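One possible shape of that change, as a sketch (lsn stands for the LSN
up to which the DDL commit must be made durable):

  row_mysql_unlock_data_dictionary(trx);
  /* Flush the redo log only after releasing dict_sys.latch, so that a
     slow log write cannot stall other users of the data dictionary. */
  log_write_up_to(lsn, true);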
dict_sys_t::remove(), dict_table_close(): No longer try to
drop index stubs that were left behind by aborted online ADD INDEX.
Such indexes should be dropped from the InnoDB data dictionary by
row_merge_drop_indexes() as part of the failed DDL operation.
Stubs for aborted indexes may only be left behind in the
data dictionary cache.
dict_stats_fetch_from_ps(): Use a normal read-only transaction.
ha_innobase::delete_table(), ha_innobase::truncate(), fts_lock_table():
While waiting for purge to stop using the table,
do not hold dict_sys.latch.
ha_innobase::delete_table(): Implement a work-around for the rollback
of ALTER TABLE...ADD PARTITION. MDL_EXCLUSIVE would not be held if
ALTER TABLE hits lock_wait_timeout while trying to upgrade the MDL
due to a conflicting LOCK TABLES, such as in the first ALTER TABLE
in the test case of Bug#53676 in parts.partition_special_innodb.
Therefore, we must explicitly stop purge, because it would not be
stopped by MDL.
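The work-around has roughly the following shape; the
purge_sys.stop()/purge_sys.resume() calls are written from memory and
should be treated as assumptions:

  purge_sys.stop();   /* MDL_EXCLUSIVE may not be held; stop purge ourselves */
  /* ... remove the table from the InnoDB data dictionary ... */
  purge_sys.resume();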
dict_stats_func(), btr_defragment_chunk(): Allocate a THD so that
we can acquire MDL on the InnoDB persistent statistics tables.
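A sketch of the pattern in those background tasks; the names of the
helpers that create and destroy the background THD are assumptions, not
verified signatures:

  static void dict_stats_background_task()
  {
    /* A real THD is needed so that MDL can be acquired on
       mysql.innodb_table_stats and mysql.innodb_index_stats. */
    THD *thd= innobase_create_background_thd("InnoDB statistics");
    /* ... acquire MDL, open the statistics tables, recalculate and
       persist the statistics, commit ... */
    innobase_destroy_background_thd(thd);
  }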
mysqltest_embedded: Invoke ha_pre_shutdown() before free_used_memory()
in order to avoid ASAN heap-use-after-free related to acquire_thd().
trx_t::dict_operation_lock_mode: Changed the type to bool.
row_mysql_lock_data_dictionary(), row_mysql_unlock_data_dictionary():
Implemented as macros.
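A sketch of the macro form; the exact latch calls (written here as
dict_sys.lock() and dict_sys.unlock()) are assumptions, and the real
definitions in the source may differ:

  #define row_mysql_lock_data_dictionary(trx) do {   \
    ut_ad(!(trx)->dict_operation_lock_mode);         \
    dict_sys.lock();                                 \
    (trx)->dict_operation_lock_mode= true;           \
  } while (0)

  #define row_mysql_unlock_data_dictionary(trx) do { \
    ut_ad((trx)->dict_operation_lock_mode);          \
    (trx)->dict_operation_lock_mode= false;          \
    dict_sys.unlock();                               \
  } while (0)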
rollback_inplace_alter_table(): Apply an infinite timeout to lock waits.
innodb_thd_increment_pending_ops(): Wrapper for
thd_increment_pending_ops(). Never attempt async operation for
InnoDB background threads, such as the trx_t::commit() in
dict_stats_process_entry_from_recalc_pool().
lock_sys_t::cancel(trx_t*): Make dictionary transactions immune to KILL.
lock_wait(): Make dictionary transactions immune to KILL, and to
lock wait timeout when waiting for locks on dictionary tables.
parts.partition_special_innodb: Use lock_wait_timeout=0 to instantly
get ER_LOCK_WAIT_TIMEOUT.
main.mdl: Filter out MDL on InnoDB persistent statistics tables
Reviewed by: Thirunarayanan Balathandayuthapani
This directory contains test suites for the MariaDB server. To run currently
existing test cases, execute ./mysql-test-run in this directory. Some tests
are known to fail on some platforms or be otherwise unreliable. In the file
collections/smoke_test there is a list of tests that are expected to be
stable.

In general you do not have to do "make install", and you can have a
co-existing MariaDB installation; the tests will not conflict with it.
To run the tests in a source directory, you must do "make" first.

In Red Hat distributions, you should run the script as user "mysql". The
user is created with a nologin shell, so the best bet is something like
# su -
# cd /usr/share/mysql-test
# su -s /bin/bash mysql -c ./mysql-test-run
This will use the installed MariaDB executables, but will run a private copy
of the server process (using data files within /usr/share/mysql-test), so
you need not start the mysqld service beforehand. You can omit the
--skip-test-list option if you want to check whether the listed failures
occur for you. To clean up afterwards, remove the created "var"
subdirectory, e.g.
# su -s /bin/bash - mysql -c "rm -rf /usr/share/mysql-test/var"

If tests fail on your system, please read the following manual section for
instructions on how to report the problem:
https://mariadb.com/kb/en/reporting-bugs

If you want to use an already running MySQL server for specific tests, use
the --extern option to mysql-test-run. Please note that in this mode, you
are expected to provide names of the tests to run. For example, here is the
command to run the "alias" and "analyze" tests with an external server:
# mysql-test-run --extern socket=/tmp/mysql.sock alias analyze
To match your setup, you might need to provide other relevant options. With
no test names on the command line, mysql-test-run will attempt to execute
the default set of tests, which will certainly fail, because many tests
cannot run with an external server (they need to control the options with
which the server is started, restart the server during execution, etc.).

You can create your own test cases. To create a test case, create a new
file in the main subdirectory using a text editor. The file should have a
.test extension. For example:
# xemacs t/test_case_name.test
In the file, put a set of SQL statements that create some tables, load test
data, and run some queries to manipulate it. Your test should begin by
dropping the tables you are going to create and end by dropping them again.
This ensures that you can run the test over and over again.

If you are using mysqltest commands in your test case, you should create
the result file as follows:
# mysql-test-run --record test_case_name
or
# mysqltest --record < t/test_case_name.test

If you only have a simple test case consisting of SQL statements and
comments, you can create the result file in one of the following ways:
# mysql-test-run --record test_case_name
# mysql test < t/test_case_name.test > r/test_case_name.result
# mysqltest --record --database test --result-file=r/test_case_name.result < t/test_case_name.test

When this is done, take a look at r/test_case_name.result. If the result is
incorrect, you have found a bug. In this case, you should edit the test
result to the correct results so that we can verify that the bug is
corrected in future releases.

If you want to submit your test case, you can send it to
maria-developers@lists.launchpad.net or attach it to a bug report on
https://mariadb.org/jira/.

If the test case is really big or if it contains 'not public' data, then
put your .test file and .result file(s) into a tar.gz archive, add a README
that explains the problem, ftp the archive to ftp://ftp.askmonty.org/private
and submit a report to https://mariadb.org/jira about it.

The latest information about mysql-test-run can be found at:
https://mariadb.com/kb/en/mariadb/mysqltest/

If you want to create .rdiff files, check
https://mariadb.com/kb/en/mariadb/mysql-test-auxiliary-files/