MariaDB Server 10.0 and 10.1 support non-indexed virtual columns,
which are hidden from the storage engine. Starting with MDEV-5800
in MariaDB 10.2.2, the virtual columns are visible to storage engines.
calc_row_difference(): Follow up on the MDEV-17199 fix, which forgot
to increment num_v when skipping virtual columns in tables that
were created before MariaDB 10.2.2. This caused corruption of
the update vector when an updated persistent column was preceded
by virtual columns.
The macro innobase_is_v_fld() turns out to be equivalent to
the opposite of Field::stored_in_db(). Remove the macro and
invoke the member function directly.
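For illustration, a tiny self-contained sketch of the equivalence (Field is
stubbed here; in the server it is the real Field class):

  // stub of the relevant part of the server's Field class
  struct Field
  {
    bool is_virtual_column;                    // non-persistent (virtual) column
    bool stored_in_db() const { return !is_virtual_column; }
  };

  // old call sites: innobase_is_v_fld(field)
  // new call sites:
  bool innobase_is_virtual(const Field *field) { return !field->stored_in_db(); }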
innodb_base_col_setup_for_stored(): Simplify a condition to only
check Field::vcol_info.
innobase_create_index_def(): Replace some redundant code with
DBUG_ASSERT().
Before MDEV-5800 in MariaDB 10.2.2, indexed virtual columns were
not supported. Non-persistent virtual columns were
hidden from storage engines. Only starting with MDEV-5800 does InnoDB
create internal metadata on virtual columns.
On TRUNCATE TABLE, an old .frm file from before MDEV-5800 may be
used as the table schema. When the table is being re-created by
InnoDB, the old schema must be used. That is, we may hide
the existence of virtual columns from InnoDB.
create_table_check_doc_id_col(): Remove the assertion that failed.
This function can actually correctly deal with virtual columns
that could have been created before MariaDB 10.2.2 introduced MDEV-5800.
create_table_info_t::create_table_def(): Do not create metadata for
virtual columns if the table definition was created before MariaDB 10.2.2.
This is a follow-up task to MDEV-12026, which introduced
innodb_checksum_algorithm=full_crc32 and a simpler page format.
MDEV-12026 did not enable full_crc32 for page_compressed tables,
which we will be doing now.
This is joint work with Thirunarayanan Balathandayuthapani.
For innodb_checksum_algorithm=full_crc32 we change the
page_compressed format as follows:
FIL_PAGE_TYPE: The most significant bit will be set to indicate
page_compressed format. The least significant bits will contain
the compressed page size, rounded up to a multiple of 256 bytes.
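A small self-contained sketch of this FIL_PAGE_TYPE encoding (the constant
and helper names below are illustrative, not the exact InnoDB identifiers):

  #include <cstdint>

  // The most significant bit of the 16-bit field marks page_compressed.
  constexpr uint16_t PAGE_COMPRESSED_BIT = 1U << 15;

  // Encode: flag bit plus the compressed length rounded up to a multiple
  // of 256 bytes, stored in units of 256 bytes in the low bits.
  inline uint16_t page_type_for_compressed(uint32_t compressed_len)
  {
    return uint16_t(PAGE_COMPRESSED_BIT | ((compressed_len + 255) / 256));
  }

  // Decode the physical payload size back from the field.
  inline uint32_t compressed_size_from_page_type(uint16_t page_type)
  {
    return uint32_t(page_type & ~PAGE_COMPRESSED_BIT) * 256;
  }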
The checksum will be stored in the last 4 bytes of the page
(whether it is the full page or a page_compressed page whose
size is determined by FIL_PAGE_TYPE), covering all preceding
bytes of the page. If encryption is used, then the page will
be encrypted between compression and computing the checksum.
For page_compressed, FIL_PAGE_LSN will not be repeated at
the end of the page.
FSP_SPACE_FLAGS (already implemented as part of MDEV-12026):
We will store the innodb_compression_algorithm that may be used
to compress pages. Previously, the choice of algorithm was written
to each compressed data page separately, and one would be unable
to know in advance which compression algorithm(s) are used.
fil_space_t::full_crc32_page_compressed_len(): Determine if the
page_compressed algorithm of the tablespace needs to know the
exact length of the compressed data. If yes, we will reserve and
write an extra byte for this right before the checksum.
buf_page_is_compressed(): Determine if a page uses page_compressed
(in any innodb_checksum_algorithm).
fil_page_decompress(): Pass also fil_space_t::flags so that the
format can be determined.
buf_page_is_zeroes(): Check if a page is full of zero bytes.
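For illustration, a minimal standalone version of such a check (the real
function takes the page size from the tablespace):

  #include <cstddef>

  bool page_is_all_zero(const unsigned char *page, size_t page_size)
  {
    for (size_t i = 0; i < page_size; i++)
      if (page[i] != 0)
        return false;
    return true;
  }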
buf_page_full_crc32_is_corrupted(): Renamed from
buf_encrypted_full_crc32_page_is_corrupted(). For full_crc32,
we always simply validate the checksum against the page contents,
while the physical page size is explicitly specified by an
unencrypted part of the page header.
buf_page_full_crc32_size(): Determine the size of a full_crc32 page.
buf_dblwr_check_page_lsn(): Make this a debug-only function, because
it involves potentially costly lookups of fil_space_t.
create_table_info_t::check_table_options(),
ha_innobase::check_if_supported_inplace_alter(): Do allow the creation
of SPATIAL INDEX with full_crc32 also when page_compressed is used.
commit_cache_norebuild(): Preserve the compression algorithm when
updating the page_compression_level.
dict_tf_to_fsp_flags(): Set the flags for page compression algorithm.
FIXME: Maybe there should be a table option page_compression_algorithm
and a session variable to back it?
This patch contains a fix for the MDEV-17262/17243 issues and
a new mtr test.
These issues (MDEV-17262/17243) have two causes:
1) After an intermediate commit, a transaction loses its status
of a "transaction registered with the MySQL 2pc coordinator"
(inside InnoDB), because since version 10.2 the write_row()
function (located in ha_innodb.cc) does not call
trx_register_for_2pc(m_prebuilt->trx) while processing split
transactions. It is necessary to restore this call inside
write_row() when an intermediate commit has been made (for a split
transaction).
Similarly, we need to set the flag of the started transaction
(m_prebuilt->sql_stat_start) after an intermediate commit.
The table->file->extra(HA_EXTRA_FAKE_START_STMT) call made from the
wsrep_load_data_split() function (located in sql_load.cc)
would also do this, but it would be too late. As a result, the call
to the wsrep_append_keys() function from the InnoDB engine may be
lost, or the function may be called with an invalid transaction identifier.
2) If a transaction with the LOAD DATA statement is divided into
logical mini-transactions (of 10K rows each) and the binlog is rotated,
then in rare cases, due to the wsrep handler re-registration at the
boundary of the split, the last portion of data may be lost. Since
the splitting of LOAD DATA into mini-transactions is purely an
internal detail, I believe that we should not allow these
mini-transactions to fall into separate binlogs. Therefore, it is
necessary to prohibit the rotation of the binlog in the middle of
processing a LOAD DATA statement.
https://jira.mariadb.org/browse/MDEV-17262 and
https://jira.mariadb.org/browse/MDEV-17243
MDEV-15418 changed InnoDB to use the READ UNCOMMITTED isolation level
when the transaction recovery is disabled by setting
innodb_force_recovery to 5 or 6. Alas, CHECK TABLE would still
internally use REPEATABLE READ. If the server was started before
purge had run to completion (and reset all DB_TRX_ID, thanks
to MDEV-12288), this would cause errors about any DB_TRX_ID that
had not been reset to 0.
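A self-contained sketch of the rule (the constant 5 mirrors
SRV_FORCE_NO_UNDO_LOG_SCAN; the names here are illustrative, not the
server's code):

  enum isolation_level { READ_UNCOMMITTED, REPEATABLE_READ };

  // With innodb_force_recovery >= 5 the undo logs are not scanned, so
  // consistent reads cannot look up old row versions; CHECK TABLE must
  // therefore read uncommitted data instead of failing on stale DB_TRX_ID.
  isolation_level check_table_isolation(unsigned force_recovery)
  {
    return force_recovery >= 5 ? READ_UNCOMMITTED : REPEATABLE_READ;
  }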
row_drop_tables_for_mysql_in_background(): Copy the table name
before closing the table handle, to avoid heap-use-after-free if
another thread succeeds in dropping the table before
row_drop_table_for_mysql_in_background() completes the table name lookup.
dict_mem_create_temporary_tablename(): With innodb_safe_truncate=ON
(the default), generate a simple, unique, collision-free table name
using only the id, no pseudorandom component. This is safe, because
on startup, we will drop any #sql tables that might exist in InnoDB.
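A self-contained sketch of such an id-based name (the exact prefix and
format used by InnoDB may differ):

  #include <cinttypes>
  #include <cstddef>
  #include <cstdint>
  #include <cstdio>

  // Build e.g. "test/#sql-ib12345": the name is derived from the table id
  // alone, so it is unique without any pseudorandom component.
  void make_temp_tablename(char *buf, size_t size,
                           const char *dbname, uint64_t table_id)
  {
    snprintf(buf, size, "%s/#sql-ib%" PRIu64, dbname, table_id);
  }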
This is a backport from 10.3. It should have been backported already
as part of backporting MDEV-14717,MDEV-14585 which were prerequisites
for the MDEV-13564 backup-friendly TRUNCATE TABLE.
This seems to reduce the chance of table creation failures in
ha_innobase::truncate().
ha_innobase::truncate(): Do not invoke close(), but instead
mimic it, so that we can restore to the original table handle
in case opening the truncated copy of the table failed.
The problem with the InnoDB table attribute encryption_key_id is that it is
not persisted anywhere in InnoDB unless the table attribute
encryption is specified and is something other than encryption=default.
MDEV-17320 made it a hard error if encryption_key_id is specified to be
anything other than 1 in that case.
Ideally, we would always persist encryption_key_id in InnoDB. But then we
would have to be prepared for the case that encryption is being enabled
for a table whose encryption_key_id attribute refers to a non-existent key.
In MariaDB Server 10.1, our best option remains to not store anything
inside InnoDB. But, instead of returning the error that MDEV-17320
introduced, we should merely issue a warning that the specified
encryption_key_id is going to be ignored if encryption=default.
To improve the situation a little more, we will issue a warning if
SET [GLOBAL|SESSION] innodb_default_encryption_key_id is being set
to something that does not refer to an available encryption key.
Starting with MariaDB Server 10.2, thanks to MDEV-5800, we could open the
table definition from InnoDB side when the encryption is being enabled,
and actually fix the root cause of what was reported in MDEV-17320.
If we have a 2+ node cluster which is replicating from an async master
and the binlog_format is set to STATEMENT and multi-row inserts are executed
on a table with an auto_increment column such that values are automatically
generated by MySQL, then the server node generates wrong auto_increment
values, which are different from what was generated on the async master.
In the title of MDEV-9519 it was proposed to ban START SLAVE on a Galera node
if the master's binlog_format = statement and wsrep_auto_increment_control = 1,
but the problem can be solved without such a restriction.
The causes and fixes:
1. We need to improve the processing of auto-increment value changes
after the cluster size changes.
2. If wsrep_auto_increment_control is switched on while the node is
running, then we should immediately update the auto_increment_increment
and auto_increment_offset global variables, without waiting for the next
invocation of the wsrep_view_handler_cb() callback. In the current version
these variables retain their initial values if wsrep_auto_increment_control
is switched on while the node is running, which leads to inconsistent
results on different nodes in some scenarios (see the sketch after this list).
3. If wsrep_auto_increment_control is switched off while the node is running,
then we must restore the original values of the auto_increment_increment and
auto_increment_offset global variables, as set by the user. To make this
possible, we need to add "shadow copies" of these variables (which store
the latest values set by the user).
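A small self-contained sketch of the offset/increment convention referred
to above (not the server's code):

  // With wsrep_auto_increment_control=ON every node derives its session
  // auto-increment settings from the current cluster view, so that the
  // nodes generate interleaved, non-conflicting values.
  void update_auto_increment_settings(unsigned long cluster_size,
                                      unsigned long node_index,   // 0-based
                                      unsigned long *auto_increment_increment,
                                      unsigned long *auto_increment_offset)
  {
    *auto_increment_increment = cluster_size;   // one "slot" per node
    *auto_increment_offset    = node_index + 1; // offsets are 1-based
  }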
https://jira.mariadb.org/browse/MDEV-9519
MariaDB data-at-rest encryption (innodb_encrypt_tables)
had repurposed the same unused data field that was repurposed
in MySQL 5.7 (and MariaDB 10.2) for the Split Sequence Number (SSN)
field of SPATIAL INDEX. Because of this, MariaDB was unable to
support encryption on SPATIAL INDEX pages.
Furthermore, InnoDB page checksums skipped some bytes, and there
are multiple variations and checksum algorithms. By default,
InnoDB accepts all variations of all algorithms that ever existed.
This unnecessarily weakens the page checksums.
We hereby introduce two more innodb_checksum_algorithm variants
(full_crc32, strict_full_crc32) that are special in a way:
When either setting is active, newly created data files will
carry a flag (fil_space_t::full_crc32()) that indicates that
all pages of the file will use a full CRC-32C checksum over the
entire page contents (excluding the bytes where the checksum
is stored, at the very end of the page). Such files will always
use that checksum, no matter what the parameter
innodb_checksum_algorithm is assigned to.
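A minimal self-contained sketch of this checksum rule (the CRC-32C below is
a slow bitwise version for illustration; InnoDB uses its own optimized
implementation, and the byte order shown is illustrative):

  #include <cstddef>
  #include <cstdint>

  // Bitwise CRC-32C (Castagnoli polynomial, reflected).
  static uint32_t crc32c(const unsigned char *buf, size_t len)
  {
    uint32_t crc = 0xFFFFFFFF;
    for (size_t i = 0; i < len; i++)
    {
      crc ^= buf[i];
      for (int bit = 0; bit < 8; bit++)
        crc = (crc >> 1) ^ (0x82F63B78 & (0U - (crc & 1)));
    }
    return ~crc;
  }

  // The checksum covers every byte of the page except the last 4,
  // and is stored in those last 4 bytes.
  void page_checksum_write(unsigned char *page, size_t page_size)
  {
    const uint32_t crc = crc32c(page, page_size - 4);
    page[page_size - 4] = (unsigned char)(crc >> 24);
    page[page_size - 3] = (unsigned char)(crc >> 16);
    page[page_size - 2] = (unsigned char)(crc >> 8);
    page[page_size - 1] = (unsigned char)(crc);
  }

  bool page_checksum_verify(const unsigned char *page, size_t page_size)
  {
    const uint32_t stored = (uint32_t)page[page_size - 4] << 24
      | (uint32_t)page[page_size - 3] << 16
      | (uint32_t)page[page_size - 2] << 8
      | (uint32_t)page[page_size - 1];
    return stored == crc32c(page, page_size - 4);
  }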
For old files, the old checksum algorithms will continue to be
used. The value strict_full_crc32 will be equivalent to strict_crc32
and the value full_crc32 will be equivalent to crc32.
ROW_FORMAT=COMPRESSED tables will only use the old format.
These tables do not support new features, such as larger
innodb_page_size or instant ADD/DROP COLUMN. They may be
deprecated in the future. We do not want an unnecessary
file format change for them.
The new full_crc32() format also cleans up the MariaDB tablespace
flags. We will reserve flags to store the page_compressed
compression algorithm, and to store the compressed payload length,
so that checksum can be computed over the compressed (and
possibly encrypted) stream and can be validated without
decrypting or decompressing the page.
In the full_crc32 format, there are no longer separate before-encryption
and after-encryption checksums for pages. The single checksum is
computed over the page contents as written to the file.
We do not make the new algorithm the default for two reasons.
First, MariaDB 10.4.2 was a beta release, and the default values
of parameters should not change after beta. Second, we did not
yet implement the full_crc32 format for page_compressed pages.
This will be fixed in MDEV-18644.
This is joint work with Marko Mäkelä.
If we instantly change the size of a fixed-length field
and treat it as kind-of variable-length, then we will need
conversions between old column values and new ones.
I tried adding such a conversion to row_build(), but then I
noticed that more conversions would be needed, because
old values still appeared in a freshly rebuilt secondary index,
causing a mismatch when trying to search with the correct
longer value that was converted in my provisional fix to row_build().
So, we will revert the essential part of
MDEV-15563: Instant ROW_FORMAT=REDUNDANT column extension
(commit 22feb179ae), but not
remove any tests.
This was developed by Aleksey Midenkov based on my design.
In the original InnoDB storage format (that was retroactively named
ROW_FORMAT=REDUNDANT in MySQL 5.0.3), the length of each index field
is stored explicitly.
Because of this, we can and now will allow instant conversion from
VARCHAR to CHAR or VARBINARY to BINARY of equal or greater size,
as well as instant conversion of TINYINT to SMALLINT to MEDIUMINT
to INT to BIGINT (while not changing between signed and unsigned).
Theoretically, we could allow changing from an unsigned integer to
a bigger unsigned integer, as well as changing CHAR to VARCHAR, but
that would require additional metadata and conversions whenever
reading old records.
Field_str::is_equal(), Field_varstring::is_equal(), Field_num::is_equal():
Return the new result IS_EQUAL_PACK_LENGTH_EXT if the table advertises
HA_EXTENDED_TYPES_CONVERSION capability and we are considering the
above-mentioned conversions.
ALTER_COLUMN_EQUAL_PACK_LENGTH_EXT: A new ALTER TABLE flag, similar
to ALTER_COLUMN_EQUAL_PACK_LENGTH but requiring conversions when
reading the data. The Field::is_equal() result IS_EQUAL_PACK_LENGTH_EXT
will map to this flag.
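A self-contained sketch of the decision for integer columns (simplified;
the real Field::is_equal() implementations also check other attributes):

  enum alter_inplace_result
  {
    NOT_EQUAL,              // requires copying the table
    EQUAL_PACK_LENGTH,      // instant, no value conversion needed
    EQUAL_PACK_LENGTH_EXT   // instant, but old values need re-encoding on read
  };

  // Can a column be widened instantly (TINYINT->SMALLINT->...->BIGINT)?
  alter_inplace_result int_column_is_equal(unsigned old_bytes,
                                           unsigned new_bytes,
                                           bool old_is_unsigned,
                                           bool new_is_unsigned,
                                           bool extended_conversions)
  {
    if (old_is_unsigned != new_is_unsigned)
      return NOT_EQUAL;                 // signedness must not change
    if (new_bytes == old_bytes)
      return EQUAL_PACK_LENGTH;
    if (new_bytes > old_bytes && extended_conversions)
      return EQUAL_PACK_LENGTH_EXT;     // maps to ALTER_COLUMN_EQUAL_PACK_LENGTH_EXT
    return NOT_EQUAL;
  }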
dtype_get_fixed_size_low(): For BINARY, CHAR and integer columns
in ROW_FORMAT=REDUNDANT, return 0 (variable length) from now on.
dtype_get_sql_null_size(): Keep returning the current size for
BINARY, CHAR and integer columns, so that in ROW_FORMAT=REDUNDANT
it will remain possible to update in place between NULL and NOT NULL
values.
btr_index_rec_validate(): Relax a CHECK TABLE length check for
ROW_FORMAT=REDUNDANT tables.
btr_cur_instant_init_low(): No longer trust fixed_len
for ROW_FORMAT=REDUNDANT tables.
We cannot rely on fixed_len anymore, because a record can have a shorter
length from before the instant extension. Note that importing such a tablespace
into earlier MariaDB versions produces ER_TABLE_SCHEMA_MISMATCH when
using a .cfg file.
In the original InnoDB storage format (which was retroactively named
ROW_FORMAT=REDUNDANT in MySQL 5.0.3), the length of each index field
is stored explicitly. Thus, we can and from now on will allow arbitrary
extension of VARBINARY and VARCHAR columns when the table is in
ROW_FORMAT=REDUNDANT.
ha_innobase::open(): Advertise a new HA_EXTENDED_TYPES_CONVERSION
capability for ROW_FORMAT=REDUNDANT tables.
Field_varstring::is_equal(): If the HA_EXTENDED_TYPES_CONVERSION
capability is advertised for the table, return IS_EQUAL_PACK_LENGTH
for any length extension.
When importing a tablespace, we must initialize dummy DEFAULT NULL
values for any instantly added columns in order to avoid a debug
assertion failure when PageConverter::update_records() invokes
rec_get_offsets(). Finally, when the operation completes, we must
evict and reload the table definition, so that the correct
default values for instantly added columns will be loaded.
ha_innobase::discard_or_import_tablespace(): On successful
IMPORT TABLESPACE, evict and reload the table definition,
so that btr_cur_instant_init() will load the correct metadata.
PageConverter::update_index_page(): Fill in dummy DEFAULT NULL values
for instantly added columns. These will be replaced upon the
completion of the operation by evicting and reloading the metadata.
row_discard_tablespace(): Invoke dict_table_t::remove_instant().
After DISCARD TABLESPACE, the table is no longer in "instant ALTER"
format, because there is no data file attached.
MySQL 5.7 introduced the class page_size_t and increased the size of
buffer pool page descriptors by introducing this object to them.
Maybe the intention of this exercise was to prepare for a future
where the buffer pool could accommodate multiple page sizes.
But that future never arrived, not even in MySQL 8.0. It is much
easier to manage a pool of a single page size, and typically all
storage devices of an InnoDB instance benefit from using the same
page size.
Let us remove page_size_t from MariaDB Server. This will make it
easier to remove support for ROW_FORMAT=COMPRESSED (or make it a
compile-time option) in the future, just by removing various
occurrences of zip_size.
This patch contains a full implementation of the optimization
that allows using in-memory rowid / primary key filters built for range
conditions over indexes. In many cases the usage of such filters reduces
the number of disk seeks spent on fetching table rows.
In this implementation the choice of which filter to apply
(if any) is made purely on cost-based considerations.
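A very small self-contained sketch of the rowid filter idea (not the
server's Rowid_filter classes):

  #include <algorithm>
  #include <string>
  #include <vector>

  // Rowids satisfying a range condition on one index are collected into a
  // sorted in-memory container. Before the executor fetches a table row
  // found through another access path, it probes the container first: a
  // miss means the row cannot match, and its disk seek is skipped.
  class rowid_filter_sketch
  {
    std::vector<std::string> rowids;
  public:
    void add(std::string rowid) { rowids.push_back(std::move(rowid)); }
    void done_building() { std::sort(rowids.begin(), rowids.end()); }
    bool may_match(const std::string &rowid) const
    { return std::binary_search(rowids.begin(), rowids.end(), rowid); }
  };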
This implementation re-architected the partial implementation of
the feature pushed by Galina Shalygina in commit
8d5a11122c.
Besides this, the patch contains a better implementation of the generic
handler function handler::multi_range_read_info_const() that
takes into account gaps between ranges when calculating the cost of
range index scans. It also contains some corrections of the
implementation of the handler function records_in_range() for MyISAM.
This patch supports the feature for InnoDB and MyISAM.
The parameters innodb_file_format and innodb_large_prefix were overridden
in the Debian-distributed configuration files, because the default values
of these parameters between MariaDB 5.5 and MariaDB 10.2
did not make any sense.
To allow a more seamless upgrade from MariaDB 10.1 to later versions,
let InnoDB recognize the parameters innodb_file_format and
innodb_large_prefix and issue deprecation warnings for them if they
are specified. A deprecation period of only one major release
(one year between the MariaDB 10.2 and 10.3 releases) is insufficient
for these widely used parameters.
Clear the wsrep XID in innobase_rollback_by_xid() for a recovered wsrep
transaction in order to avoid resetting the XID storage when rolling back
a wsrep transaction during recovery.
Sort the wsrep XIDs read from the storage engine in ascending order and
verify that the range is continuous during crash recovery. If the binlog is off,
commit all recovered transactions for the continuous seqno range. This is safe
because all transactions with a wsrep XID have been certified and must be
committed in the cluster. On the other hand, if the binlog is on, respect the
binlog as the transaction coordinator in order to avoid missing transactions
in the binlog that have been committed into the storage engine.
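A self-contained sketch of that continuity check (simplified; the server
works with full XIDs rather than bare seqnos):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  // After sorting the recovered wsrep seqnos in ascending order, the range
  // must be gapless before all of the transactions may be committed.
  bool seqno_range_is_continuous(std::vector<int64_t> seqnos)
  {
    std::sort(seqnos.begin(), seqnos.end());
    for (size_t i = 1; i < seqnos.size(); i++)
      if (seqnos[i] != seqnos[i - 1] + 1)
        return false;   // a gap: cannot blindly commit everything
    return true;
  }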
thd_destructor_proxy() may miss the abort signal if innobase_end() is running
concurrently, which causes the server to hang in pthread_join() on shutdown.
The problem was that the abort flag was not protected by a mutex:
proxy thr: while (!myvar->abort)
end thr: running->abort = 1;
end thr: mysql_cond_broadcast(...);
proxy thr: mysql_cond_wait(...); // nobody to awake it
end thr: pthread_join(...); // waits for proxy thr
Also made main.mysqld_option_err reentrant.
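For reference, the standard pattern that avoids the race shown above (a
self-contained sketch using std::condition_variable, not the server's
THD/mysys types):

  #include <condition_variable>
  #include <mutex>

  struct shutdown_sync
  {
    std::mutex mtx;
    std::condition_variable cond;
    bool abort = false;
  };

  // proxy thread: the predicate is re-checked while holding the mutex,
  // so a broadcast between the check and the wait cannot be missed
  void proxy_thread_wait(shutdown_sync &s)
  {
    std::unique_lock<std::mutex> lock(s.mtx);
    while (!s.abort)
      s.cond.wait(lock);
  }

  // shutdown thread: the flag is flipped under the same mutex before the
  // broadcast, so the waiter either sees it or is woken up
  void shutdown_thread_signal(shutdown_sync &s)
  {
    std::lock_guard<std::mutex> lock(s.mtx);
    s.abort = true;
    s.cond.notify_all();
  }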
The second set of changes is related to replacing xtrabackup with
mariabackup:
1) All unnecessary references to xtrabackup are removed from
the documentation, from some comments, from the control files
that are used to prepare the packages.
2) Corrected the tests in the galera_3nodes suite
that mentioned xtrabackup or the old (xtrabackup-associated)
version of innobackupex.
3) Fixed flaws in the galera_3nodes mtr suite control scripts,
because of which they could not work with mariabackup.
4) Fixed numerous bugs in the SST scripts and in the mtr test
files (galera_3nodes mtr suite) that prevented the use of Galera
with IPv6 addresses.
5) Fixed flaws in the tests for rsync and mysqldump (in the galera_3nodes
mtr test suite). These tests did not pass without
these fixes.
https://jira.mariadb.org/browse/MDEV-17835
To avoid a hang on server shutdown:
prior to adding the current thd to the global thd list, initialize
thd->mysys_var->current_cond / thd->mysys_var->current_mutex.
Otherwise thd_destructor_proxy can miss the abort condition, which is set
by the shutdown thread.