btr_cur_pessimistic_insert(): Do not attempt to convert the
metadata BLOB to external storage if it has already been converted.
It could have been converted by btr_cur_pessimistic_update().
Relax some over-zealous assertions.
dtuple_convert_big_rec(): Assert that the metadata BLOB has
not been converted yet.
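For illustration, a minimal standalone sketch of the guard described
above; the function and both flags are hypothetical, not the actual
InnoDB code:

    /* Hypothetical sketch: conversion to external storage must be
       skipped when btr_cur_pessimistic_update() has already converted
       the metadata BLOB. */
    static bool need_big_rec_conversion(bool is_metadata_rec,
                                        bool blob_already_converted)
    {
        /* dtuple_convert_big_rec() may now assert
           !(is_metadata_rec && blob_already_converted). */
        return !(is_metadata_rec && blob_already_converted);
    }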
dict_table_t::instant_column(): Correctly compute the value of
metadata_changed. The original computation in
commit 003720755f would essentially
invoke memcmp(x,x,y), which can only return 0.
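A minimal standalone illustration of the flaw (buffer names are
hypothetical): a memcmp() of a buffer against itself can never report
a difference, so metadata_changed was always false.

    #include <cstring>

    /* Buggy form: both arguments point to the same bytes,
       so the result is always 0 (no change detected). */
    static bool metadata_changed_buggy(const unsigned char* meta, size_t len)
    {
        return std::memcmp(meta, meta, len) != 0; /* always false */
    }

    /* Fixed form: compare the old metadata with the new metadata. */
    static bool metadata_changed_fixed(const unsigned char* old_meta,
                                       const unsigned char* new_meta,
                                       size_t len)
    {
        return std::memcmp(old_meta, new_meta, len) != 0;
    }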
This is a regression after MDEV-13671.
The bug is related to key part prefix lengths which are stored in
SYS_FIELDS. The storage format is not obvious and was handled
incorrectly, which led to data dictionary corruption.
SYS_FIELDS.POS actually contains the prefix length too, whenever any
key part of the index has a prefix length (see the sketch below).
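A minimal decoding sketch of that format; the helper is hypothetical,
and the 16/16 bit split reflects the packed encoding described above:

    #include <cstdint>

    /* Hypothetical helper: when at least one key part of the index has
       a prefix length, SYS_FIELDS.POS packs the key part number into
       the high 16 bits and the prefix length into the low 16 bits. */
    static void decode_sys_fields_pos(uint32_t pos, bool index_has_prefixes,
                                      uint32_t& field_no, uint32_t& prefix_len)
    {
        if (index_has_prefixes) {
            field_no   = pos >> 16;
            prefix_len = pos & 0xFFFF;
        } else {
            field_no   = pos;
            prefix_len = 0;
        }
    }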
innobase_rename_column_try(): fixed the handling of prefixes.
Tests for prefixed indexes were added as well.
Closes #1063
Orphan #sql* tables may remain after ALTER TABLE
was interrupted by a timeout, a KILL, or a client disconnect.
This is a regression caused by MDEV-16515.
Similar to temporary tables (MDEV-16647), we had better ignore the
KILL when dropping the original table in the final part of ALTER TABLE.
Closes #1020
The fix for MDEV-17901 did not cover cases where the AUTO_INCREMENT
column was not dropped, but some other columns before it were.
commit_cache_norebuild(): Revert the MDEV-17901 fix.
dict_index_t::clear_instant_alter(): Update table->persistent_autoinc.
innobase_instant_try(): Only try to update the hidden metadata
record if the number of columns is changing (increasing) or
a metadata BLOB is being added due to permuting or dropping columns
for the first time.
dict_table_t::instant_column(), ha_innobase_inplace_ctx::instant_column():
Return whether the metadata record needs to be updated.
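As a rough standalone sketch of that decision (the struct and function
names are hypothetical):

    /* Hypothetical sketch: the hidden metadata record is updated only
       when columns are being added, or when a metadata BLOB is created
       for the first time because columns are permuted or dropped. */
    struct instant_alter_info {
        unsigned n_cols_before;
        unsigned n_cols_after;
        bool     had_metadata_blob;  /* BLOB existed before this ALTER */
        bool     permutes_or_drops;  /* ALTER permutes or drops columns */
    };

    static bool metadata_record_needs_update(const instant_alter_info& i)
    {
        return i.n_cols_after > i.n_cols_before
            || (i.permutes_or_drops && !i.had_metadata_blob);
    }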
This assertion should have been relaxed when implementing the first part of
MDEV-15563: instant removal of NOT NULL attribute for ROW_FORMAT=REDUNDANT
tables.
For ROW_FORMAT=REDUNDANT, there is no bitmap of null columns;
the null flags are encoded in the end offset of each field.
We do not really care about the number of fields that can be NULL.
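To illustrate, a standalone sketch of how a field's NULL flag lives in
its end offset in the REDUNDANT format (the helper is hypothetical; the
masks follow the conventional REDUNDANT offset encoding):

    #include <cstdint>

    /* In ROW_FORMAT=REDUNDANT there is no null-column bitmap; each
       field's stored end offset carries its own SQL NULL flag. */
    static const uint16_t ONE_BYTE_SQL_NULL_MASK = 0x80;
    static const uint16_t TWO_BYTE_SQL_NULL_MASK = 0x8000;

    static bool redundant_field_is_null(uint16_t end_offset,
                                        bool two_byte_offsets)
    {
        return two_byte_offsets
            ? (end_offset & TWO_BYTE_SQL_NULL_MASK) != 0
            : (end_offset & ONE_BYTE_SQL_NULL_MASK) != 0;
    }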
Ensure that the 'auxiliary transactions' that exist for flushing
the incomplete undo log of the to-be-recovered DDL transactions
are actually making modifications.
- Refactor code to isolate page validation in page_is_corrupted() function.
- Introduce the --extended-validation parameter (default OFF) for
mariabackup --backup to enable decryption of encrypted uncompressed
pages during backup.
- mariabackup will still always verify the checksum on encrypted data;
this is needed to detect partially written pages (see the sketch below).
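A rough standalone sketch of the resulting flow (the function and its
boolean inputs are hypothetical stand-ins for the real checks); the
point is that the checksum of encrypted data is always verified, while
decryption happens only under --extended-validation:

    /* Hypothetical sketch of the page_is_corrupted() dispatch
       during --backup. */
    static bool page_is_corrupted_sketch(bool encrypted,
                                         bool checksum_ok,
                                         bool extended_validation,
                                         bool decrypted_ok)
    {
        if (encrypted) {
            if (!checksum_ok)
                return true;  /* catches partially written pages */
            /* full decryption only with --extended-validation */
            return extended_validation && !decrypted_ok;
        }
        return !checksum_ok;
    }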
use frm_version, not mysql_version, when parsing frm
In particular, virtual columns are stored according to
frm_version. Moreover, CHECK TABLE will overwrite mysql_version
with the current server version, so mysql_version cannot reliably
describe the frm format.
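As a small illustration of the rule (the struct, the function, and the
version threshold below are all hypothetical):

    /* Hypothetical sketch: the storage format of virtual columns is
       decided by frm_version; mysql_version merely records the last
       server that wrote the file (CHECK TABLE overwrites it). */
    struct frm_header_sketch {
        unsigned frm_version;   /* determines the on-disk format */
        unsigned mysql_version; /* unreliable for format decisions */
    };

    static bool parse_vcols_as_expressions(const frm_header_sketch& h)
    {
        const unsigned FRM_VER_WITH_EXPRESSIONS = 11; /* hypothetical */
        return h.frm_version >= FRM_VER_WITH_EXPRESSIONS;
    }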
Write a test case that computes valid crc32 checksums for
an encrypted page, but zeroes out the payload area, so
that the checksum after decryption fails.
xb_fil_cur_read(): Validate the page number before attempting
any checksum calculation, decryption, or decompression.
Also, skip zero-filled pages. For page_compressed pages,
ensure that the FIL_PAGE_TYPE was changed. Also, reject
FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED if no decryption was attempted.
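A standalone sketch of that validation order (the enum, function, and
flags are hypothetical):

    /* Hypothetical sketch of the per-page checks in the order
       described above. */
    enum class page_verdict { OK, SKIP_ZERO_FILLED, CORRUPTED };

    static page_verdict validate_page_sketch(
        unsigned page_no_in_header, unsigned expected_page_no,
        bool zero_filled, bool is_page_compressed,
        bool page_type_updated, bool is_compressed_encrypted,
        bool decryption_attempted)
    {
        if (page_no_in_header != expected_page_no)
            return page_verdict::CORRUPTED;  /* before any checksum work */
        if (zero_filled)
            return page_verdict::SKIP_ZERO_FILLED;
        if (is_page_compressed && !page_type_updated)
            return page_verdict::CORRUPTED;  /* FIL_PAGE_TYPE must change */
        if (is_compressed_encrypted && !decryption_attempted)
            return page_verdict::CORRUPTED;
        return page_verdict::OK;
    }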
Problem:
=======
Mariabackup fails to verify the pages of compressed tables.
The reason is that both fil_space_verify_crypt_checksum() and
buf_page_is_corrupted() will skip the validation for compressed pages.
Fix:
====
Mariabackup should call fil_page_decompress() for compressed and
encrypted compressed pages. After that, call buf_page_is_corrupted()
to check for page corruption (see the sketch below).
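A minimal sketch of the fixed order of operations; fil_page_decompress()
and buf_page_is_corrupted() are the functions named above, but the stubs
below stand in for them so the sketch is self-contained:

    /* Stand-ins for the real InnoDB functions. */
    static bool decompress_ok(unsigned char* /*tmp*/, unsigned char* /*page*/)
    { return true; }   /* stands in for fil_page_decompress() */

    static bool page_corrupted(const unsigned char* /*page*/)
    { return false; }  /* stands in for buf_page_is_corrupted() */

    /* Hypothetical wrapper: decompress first, then run the regular
       corruption check on the decompressed page. */
    static bool backup_page_is_corrupted(unsigned char* tmp,
                                         unsigned char* page)
    {
        if (!decompress_ok(tmp, page))
            return true;          /* decompression failure => corrupted */
        return page_corrupted(page);
    }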