* Mitigate race condition when got_no_such_table remains uncleared.
* Remove warnings about deprecated SELECT .. FROM .. INTO ...
MDEV-16222 Assertion `0' failed in row_purge_remove_sec_if_poss_leaf on table with virtual columns and indexes
Shorten some VARCHAR attributes to a more reasonable length.
INNODB_METRICS: Rename the column STATUS to ENABLED, and make it Boolean.
Replace many Boolean attributes with INT(1); these were declared as VARCHAR
containing 'NO','YES','disabled','enabled','Uninitialized','Initialized'.
Replace some VARCHAR attributes with ENUM.
Replace some BIGINT with INT when 32 bits are sufficient.
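For example, with the INNODB_METRICS change above, enabled counters can be
selected with a plain Boolean predicate (a sketch; NAME and COUNT are
pre-existing columns):

-- ENABLED is now INT(1) instead of a VARCHAR status string
SELECT NAME, COUNT, ENABLED
FROM information_schema.INNODB_METRICS
WHERE ENABLED = 1;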
Remove INNODB_SYS_TABLESPACES.SPACE_TYPE. The type of a tablespace
can be derived from the tablespace ID. A fixed number is used for
the system tablespace and the temporary tablespace. All other tablespaces
are single-table or single-partition tablespaces.
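After this change, a reader classifies a tablespace from its ID alone,
e.g. (a sketch against the remaining columns):

-- the tablespace kind is now inferred from SPACE; SPACE_TYPE is gone
SELECT SPACE, NAME FROM information_schema.INNODB_SYS_TABLESPACES;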
i_s_locks_row_t::lock_type, lock_get_type_str(): Remove.
This is a redundant field. Table and record locks can be
distinguished by whether i_s_locks_row_t::lock_index is NULL.
fill_trx_row(): Do not unnecessarily copy the constant strings that
trx->op_info is pointing to.
i_s_locks_row_t::lock_mode: Replace string with integer.
lock_get_mode_str(), lock_get_trx_id(), lock_get_trx(): Remove.
field_store_ulint(): Remove.
There was a bug in the page cache: it did not take into account that
another thread could be waiting for a page to be read by read_big_block().
Fixed by releasing all waiters in read_big_block().
There are two options when copying S3 tables with mysqldump
(there is a boolean startup option --copy_s3_tables, default no):
1) Ignore all tables with engine S3, as the data is already safe in S3 and any
computer where you restore the backup will automatically discover the S3 table.
2) Copy the table as a normal table with the following 2 changes:
- Change ENGINE=S3 to ENGINE=ARIA;
- After the copy, append 'ALTER TABLE table_name ENGINE=S3' to the output,
as sketched below.
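With option 2, the relevant part of the dump would look roughly like this
(illustrative table name and data):

-- engine rewritten from S3 to ARIA in the dumped definition
CREATE TABLE t1 (a INT) ENGINE=ARIA;
INSERT INTO t1 VALUES (1),(2);
-- appended after the copy so that a restore converts the table back
ALTER TABLE t1 ENGINE=S3;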
MDEV-19585 Assertion with S3 table and flush_tables
The limit has to be increased so that MariaDB can create system tables.
It should not have any notable impact on performance.
There should not be any notable performance differences between 1K and 4K,
especially for temporary tables. In most cases using bigger blocks is also
faster (with the possible exception of key reads of non-fixed-length keys).
The error occurred because the aria_copy_to_s3() function tried to copy the
.frm file of a partition, but a partition does not have its own .frm file.
The same is true for aria_rename_s3().
To fix this issue, a new parameter was added to those two functions to specify
whether the .frm file must be copied or not. The parameter is set to 'false'
for partitions.
There was also another issue with EXCHANGE PARTITION. Briefly, there is the
following sequence of operations (see exchange_name_with_ddl_log() for details):
1) rename the swap table to a temporary table,
2) rename the partition to the swap table,
3) rename the temporary table to the partition.
In step (1) the .frm file is renamed too. In step (2) the swap table does not
get a .frm file, as the partition does not have one. After step (3) the
partition will have a .frm file, because it is renamed from the temporary
table. All of this causes errors at different stages of table access. To fix
it, the .frm file is not touched at all for S3 during the EXCHANGE PARTITION
operation. This is implemented in ha_s3::rename_table() by additionally
checking current_thd->lex->alter_info.partition_flags (see also
ALTER_PARTITION_EXCHANGE in sql_yacc.yy).
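The statement exercising this path is of the form (illustrative names):

-- during this operation, ha_s3::rename_table() now leaves .frm files alone
ALTER TABLE s3_partitioned EXCHANGE PARTITION p0 WITH TABLE swap_table;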
The patch is about two cases:
1) On some collation changes it is possible to rebuild only secondary indexes.
2) For non-indexed columns, the collation can be changed INSTANTly.
Implemented mostly in Field_{string,varstring,blob}::is_equal().
Make this method return how exactly the collations differ.
This information is later used by fill_alter_inplace_info() to pass
correct info to the engine.
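For example, case 2 lets a statement like the following skip the table
rebuild (illustrative table and column names):

-- c is not indexed, so only metadata needs to change
ALTER TABLE t1
MODIFY c VARCHAR(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci,
ALGORITHM=INSTANT;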
The test cases for the MDEV found several independent bugs
in MariaDB server and Aria:
- If a temporary table was marked as crashed, it could never
be deleted.
- Opening a crashed temporary table gave an error message,
but the error was never forwarded to the caller, which caused
an assert() in my_ok().
- init_read_record() did mmap of all temporary tables, which is
probably not a good idea as this area can potentially be
very big. Changed the code to only mmap internal temporary tables.
- mmap-ed tables were not unmapped in case of repair/optimize,
which caused bad data in the table and crashes if the original
table files were replaced with new ones (as the old mmap
was still in place). Fixed by removing the mmap in case
of repair.
- Cleaned up usage of code that disabled mmap in Aria.
In collaboration with Sergey Vojtovich <svoj@mariadb.org>
The COMPRESSED clause is now a part of the data type and goes immediately
after the data type and length, but before the CHARACTER SET clause,
and before column attributes such as DEFAULT, COLLATE, ON UPDATE,
SYSTEM VERSIONING, engine specific column attributes.
In the old reduction, the COMPRESSED clause was a column attribute.
New syntax:
<varchar or text data type> <length> <compression> <character set> <column attributes>
<varbinary or blob data type> <length> <compression> <column attributes>
New syntax examples:
VARCHAR(1000) COMPRESSED CHARACTER SET latin1 DEFAULT ''
BLOB COMPRESSED DEFAULT ''
Deprecated syntax examples:
VARCHAR(1000) CHARACTER SET latin1 COMPRESSED DEFAULT ''
TEXT CHARACTER SET latin1 DEFAULT '' COMPRESSED
VARBINARY(1000) DEFAULT '' COMPRESSED
As a side effect:
- COMPRESSED is not valid as an SP label name in SQL/PSM routines any more
(but it's still valid as an SP label name in sql_mode=ORACLE)
- COMPRESSED is now allowed in combination with GENERATED ALWAYS AS:
TEXT COMPRESSED GENERATED ALWAYS AS REPEAT('a',1000)
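Putting the new syntax together, a complete definition could look like this
(a sketch; the generated-column expression is parenthesized as the parser
expects):

CREATE TABLE t1 (
  a VARCHAR(1000) COMPRESSED CHARACTER SET latin1 DEFAULT '',
  b BLOB COMPRESSED DEFAULT '',
  c TEXT COMPRESSED GENERATED ALWAYS AS (REPEAT('a',1000))
);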
There were two separate problems:
- The Aria pagecache didn't properly handle re-reading of blocks
that had given errors before (this triggered an assert).
- Temporary tables that were opened several times were
not properly closed in ALTER, REPAIR or OPTIMIZE TABLE.
Other things:
- Added a couple of asserts that will make it easier to
find problems like this in the future.
In HEAP btree indexes, the address of a record in memory is part of the
key. So, when inserting many identical keys, the actual btree
shape is defined by how and where records in memory are allocated.
records_in_range uses floats to estimate the size of the chunk of the
btree between the min and max records; the estimate depends on the btree
shape and, thus, is not portable either. Neither are optimizer decisions
that are based on records_in_range estimations, if the number happens
to be close to a tipping point.
As a fix, reduce the number of matching rows, so that even with
system-specific variations the optimizer will still pick the
expected plan.
Fixes heap.heap failure (range vs ALL) on ppc64
The server and command line tools now support the option --tls_version to
specify the TLS version between client and server. Valid values are TLSv1.0,
TLSv1.1, TLSv1.2, TLSv1.3, or a combination of them, e.g.
--tls_version=TLSv1.3
--tls_version=TLSv1.2,TLSv1.3
In case there is a gap between versions, the lowest version will be used:
--tls_version=TLSv1.1,TLSv1.3 -> Only TLSv1.1 will be available.
If the used TLS library doesn't support the specified TLS version, it will use
the default configuration.
Limitations:
SSLv3 is not supported. The default configuration doesn't support TLSv1.0 anymore.
The TLSv1.3 protocol is currently only supported by OpenSSL 1.1.0 (client and
server) and GnuTLS 3.6.5 (client only).
Overview of TLS implementations and protocols
Server:
+-----------+-----------------------------------------+
| Library   | Supported TLS versions                  |
+-----------+-----------------------------------------+
| WolfSSL   | TLSv1.1, TLSv1.2                        |
+-----------+-----------------------------------------+
| OpenSSL   | (TLSv1.0), TLSv1.1, TLSv1.2, TLSv1.3    |
+-----------+-----------------------------------------+
| LibreSSL  | (TLSv1.0), TLSv1.1, TLSv1.2, TLSv1.3    |
+-----------+-----------------------------------------+
Client (MariaDB Connector/C):
+-----------+-----------------------------------------+
| Library   | Supported TLS versions                  |
+-----------+-----------------------------------------+
| GnuTLS    | (TLSv1.0), TLSv1.1, TLSv1.2, TLSv1.3    |
+-----------+-----------------------------------------+
| Schannel  | (TLSv1.0), TLSv1.1, TLSv1.2             |
+-----------+-----------------------------------------+
| OpenSSL   | (TLSv1.0), TLSv1.1, TLSv1.2, TLSv1.3    |
+-----------+-----------------------------------------+
| LibreSSL  | (TLSv1.0), TLSv1.1, TLSv1.2, TLSv1.3    |
+-----------+-----------------------------------------+
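To check which protocol version a connection actually negotiated, the
standard session status variable can be queried:

SHOW STATUS LIKE 'Ssl_version';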
Problem:
========
One of the purge threads accesses the corrupted page and tries to remove it
from the LRU list. In the meantime, other purge threads are waiting for the
same page in buf_wait_for_read(). The assertion (buf_fix_count == 0) fails
for the purge thread which tries to remove the page from the LRU list.
Solution:
=========
- Set the page id to FIL_NULL to indicate that the page is corrupted before
removing the block from the LRU list. Acquire the hash lock for the particular
page id and wait for the other threads to release the buf_fix_count
for the block.
- Added the error check for btr_cur_open() in row_search_on_row_ref().
Before killing the server, ensure that the incomplete state of
the transaction will be made durable and will be applied and
rolled back on recovery, so that each time, roughly the same
amount of work will be done.
Remove DML statements after the recovery, and execute
CHECK TABLE instead.
Remove the test, because it easily fails with a result difference.
Analysis by Thirunarayanan Balathandayuthapani:
By default, innodb_encrypt_tables=0.
1) The test case creates 100 tables in innodb_encrypt_1,
2) creates another 100 unencrypted tables (encryption=off) in innodb_encrypt_2,
3) creates another 100 encrypted tables (encryption=on) in innodb_encrypt_3,
4) enables innodb_encrypt_tables=1 and checks that only
100 encrypted tables exist (we already have 100 in the dictionary),
5) opens all tables again (no idea why),
6) after that, sets innodb_encrypt_tables=0 and waits for 100 tables
to be decrypted (we already have 100 unencrypted tables), and
7) drops all databases.
The sporadic failure happens because after step 4 the encryption threads could
encrypt the normal tables too, because innodb_encryption_threads=4.
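The check in step 4 presumably amounts to counting encrypted tablespaces,
along these lines (an assumption about the test, not a quote from it):

-- tablespaces with a nonzero key version are currently encrypted
SELECT COUNT(*) FROM information_schema.INNODB_TABLESPACES_ENCRYPTION
WHERE MIN_KEY_VERSION <> 0;

With innodb_encryption_threads=4 working in the background, this count can
transiently exceed the expected 100.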
This test was added in MDEV-9931, which was about InnoDB startup being
slow due to all .ibd files being opened. There have been a number of
later fixes to this problem. Currently the latest one is
commit cad56fbaba, in which some tests
(in particular the test innodb.alter_kill) could fail if all InnoDB
.ibd files are read during startup. That could make this test redundant.
Let us remove the test, because it is big, slow, unreliable, and
does not seem to reliably catch the problem that all files are being
read on InnoDB startup.