If a table is altered using ALGORITHM=INSTANT (MDEV-11369, MDEV-15562,
MDEV-13134), it can force the table to use a non-canonical
format:
* A hidden metadata record at the start of the clustered index
is used to store each column's DEFAULT value. This makes it possible
to add new columns that have default values without rebuilding the table.
* Starting with MDEV-15562 in MariaDB Server 10.4, a BLOB in the
hidden metadata record is used to store column mappings. This makes
it possible to drop or reorder columns, and to add or drop columns
at any position, without rebuilding the table.
If a column is dropped without rebuilding the table, old records
will contain garbage in that column's former position, and new records
will be written with NULL values, empty strings, or dummy values.
This is generally not a problem. However, in some cases
users may want to avoid putting a table into this format.
For example, users may want to ensure that future UPDATE operations
after an ADD COLUMN will be performed in-place, to reduce write
amplification. (Instantly added columns are essentially always
variable-length.) Users might also want to avoid bugs similar to
MDEV-19916, or they may want to be able to export tables to
older versions of the server.
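To make the metadata record described above concrete, here is a conceptual
C++ sketch of the information it carries; the names and layout are
illustrative assumptions only, not InnoDB's actual classes or on-disk format.

    // Conceptual sketch only -- not InnoDB's on-disk layout.
    #include <cstddef>
    #include <string>
    #include <vector>

    struct instant_metadata
    {
      // DEFAULT values of instantly added columns (MDEV-11369).
      std::vector<std::string> added_defaults;

      // Column map stored in a BLOB since MDEV-15562: one entry per logical
      // column, giving its physical field position, or DROPPED if the column
      // was dropped without a rebuild.
      static constexpr std::size_t DROPPED = std::size_t(-1);
      std::vector<std::size_t> field_map;
    };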
We will introduce the option innodb_instant_alter_column_allowed,
with the following values:
* never (0): Do not allow instant add/drop/reorder,
to maintain format compatibility with MariaDB 10.x and MySQL 5.x.
If the table (or partition) is not in the canonical format, then
any ALTER TABLE (even one that does not involve instant column
operations) will force a table rebuild.
* add_last (1, default in 10.3): Store a hidden metadata record that
allows columns to be appended to the table instantly (MDEV-11369).
In 10.4 or later, if the table (or partition) is not in this format,
then any ALTER TABLE (even one that does not involve column changes)
will force a table rebuild.
Starting with 10.4:
* add_drop_reorder (2, default): Like 'add_last', but allow the
metadata record to store a column map, to support instant
add/drop/reorder of columns (MDEV-15562).
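A minimal sketch of how the three values above are ordered and when a
rebuild is forced; the enum names mirror the option values, but the decision
function is illustrative only, not the server's implementation.

    // Sketch only: the three settings form an ordering, and a table whose
    // format exceeds what the setting allows is rebuilt by any ALTER TABLE.
    enum instant_alter_column_allowed
    {
      NEVER = 0,            // canonical format only (MariaDB 10.x / MySQL 5.x)
      ADD_LAST = 1,         // hidden metadata record, append-only (MDEV-11369)
      ADD_DROP_REORDER = 2  // metadata record with column map (MDEV-15562)
    };

    static bool alter_forces_rebuild(instant_alter_column_allowed setting,
                                     instant_alter_column_allowed table_format)
    {
      // Any ALTER TABLE, even one without instant column operations,
      // rebuilds the table if its current format is not allowed.
      return table_format > setting;
    }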
- multi_range_read_info_const now uses the new records_in_range interface
- Added handler::avg_io_cost()
- Don't calculate avg_io_cost() in get_sweep_read_cost() if avg_io_cost()
is not 1.0; in that case we trust the avg_io_cost() from the handler.
- Changed test_quick_select to use TIME_FOR_COMPARE instead of
TIME_FOR_COMPARE_IDX to align this with the rest of the code.
- Fixed a bug in test_if_cheaper_ordering() where we didn't use
keyread when the index was changed
- Fixed a bug where we didn't use index-only read when using order-by-index
- Added keyread_time() to HEAP.
The default keyread_time() was optimized for blocks and is not suitable for
HEAP. The effect was that HEAP preferred table scans over ranges for btree
indexes.
- Fixed get_sweep_read_cost() for HEAP tables
- Ensure that range and ref have the same cost for simple ranges
Added a small cost (MULTI_RANGE_READ_SETUP_COST) to ranges to ensure
we favor ref over range for simple queries (see the small illustration
after this list).
- Fixed that matching_candidates_in_table() uses the same number of records
as the rest of the optimizer
- Added avg_io_cost() to JT_EQ_REF cost. This helps calculate the cost for
HEAP and temporary tables better. A few tests changed because of this.
- heap::read_time() and heap::keyread_time() adjusted to not add +1.
This was to ensure that handler::keyread_time() doesn't give
higher cost for heap tables than for normal tables. One effect of
this is that heap and derived tables stored in heap will prefer
key access as this is now regarded as cheap.
- Changed the cost for index read in sql_select.cc to match
multi_range_read_info_const(). All index cost calculation is now
done through one function.
- 'ref' will now use quick_cost for keys if it exists. This is done
so that for '=' ranges, 'ref' is preferred over 'range'.
- scan_time() now takes avg_io_cost() into account
- get_delayed_table_estimates() uses block_size and avg_io_cost()
- Removed default argument to test_if_order_by_key(); simplifies code
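The MULTI_RANGE_READ_SETUP_COST tie-break mentioned above can be shown with a
tiny self-contained example; the cost numbers and the constant's value are
made up, only the comparison reflects the intent of the change.

    // Illustration only: with a small setup cost added to ranges, 'ref' wins
    // whenever both access methods would otherwise cost the same.
    #include <iostream>

    int main()
    {
      const double MULTI_RANGE_READ_SETUP_COST = 0.2; // assumed value
      const double ref_cost   = 10.0;                 // cost of 'ref' on the key
      const double range_cost = 10.0 + MULTI_RANGE_READ_SETUP_COST;

      std::cout << (ref_cost < range_cost ? "prefer ref" : "prefer range")
                << '\n';
      return 0;
    }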
page_cur_insert_rec_low(): Check the array bounds before comparing.
We used to read one byte beyond the end of the 'rec' payload.
The incorrect logic was originally introduced in
commit 7ae21b18a6
and modified in commit 138cbec5f2.
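A minimal sketch of the kind of bounds-checked comparison implied here
(illustrative only, not the actual page_cur_insert_rec_low() code): compute
the common prefix of two byte sequences without reading past either end.

    #include <algorithm>
    #include <cstddef>

    // Returns the length of the common prefix of a[0..a_len) and b[0..b_len),
    // bounding the loop before any byte is compared.
    static std::size_t common_prefix(const unsigned char *a, std::size_t a_len,
                                     const unsigned char *b, std::size_t b_len)
    {
      const std::size_t limit = std::min(a_len, b_len);
      std::size_t i = 0;
      while (i < limit && a[i] == b[i])
        i++;
      return i;
    }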
FOREIGN_KEY_CHECKS is disabled
- dict_foreign_find_index() can return NULL if InnoDB already dropped
the foreign index when FOREIGN_KEY_CHECKS is disabled.
redo log during recovery
- InnoDB unnecessarily reads the page even though buffered redo log records
fully initialize it. Allow the page-initialization redo log records to be
applied to the page in buf_page_get_gen() during recovery.
- Renamed buf_page_get_gen() to buf_page_get_low()
- Newly added buf_page_get_gen() will check for buffered redo log for
the particular page id during recovery
- Added new function buf_page_mtr_lock() which basically latches the page
for the given latch type.
- recv_recovery_create_page() is an inline function that creates a page
if there are page-initialization redo log records for it.
Remove CREATE/DROP database.
Remove some unnecessary suppressions, replacements, and
SQL statements.
Populate tables via have_sequence.inc to avoid the creation of
explicit InnoDB record locks in INSERT...SELECT. This will remove
some gaps in AUTO_INCREMENT values.
fil_delete_tablespace(): Remove the unused parameter drop_ahi,
and add the parameter if_exists=false. We want to suppress
error messages if we know that the tablespace has been discarded.
dict_table_rename_in_cache(): Pass the new parameter to
fil_delete_tablespace(), that is, do not complain about
missing tablespace if the tablespace has been discarded.
row_make_new_pathname(): Declare as static.
row_drop_table_for_mysql(): Tolerate !table->data_dir_path
when the tablespace has been discarded.
row_rename_table_for_mysql(): Skip part of the RENAME TABLE
when fil_space_get_first_path() returns NULL.
buf_pool_resize(): Simplify the fault injection
for innodb.buf_pool_resize_oom.
innodb.buf_pool_resize_oom: Use a small buffer pool.
innodb.innodb_buffer_pool_load_now: Make use of the sequence engine,
to avoid creating explicit InnoDB record locks. Clean up the
accesses to information_schema.innodb_buffer_page_lru.
Adapt from 10.2: git cherry-pick bfb5e1c3f0
buf_pool_t::chunk_t::create(), buf_pool_t::resize():
Restore or simplify the debug instrumentation.
buf_resize_callback(): Add DBUG_ENTER/DBUG_VOID_RETURN so that
the DBUG_EXECUTE_IF in buf_pool_t::resize() can be triggered.
Temporary tables are typically short-lived and are assumed to be
accessed only by the thread that is handling the owning connection.
Hence, they must not be subject to defragmentation.
ha_innobase::optimize(): Do not add temporary tables to
the defragment_table() queue.
This was done to remove a performance regression between 10.3 and 10.4.
In 10.5 we will have a better implementation of records_in_range
that will enable us to get more statistics.
This change was not done in 10.4 because the 10.5 version will be part of
a larger change that is not suitable for the GA 10.4 release.
Other things:
- Changed the default handler block_size to 8192 to fix the statistics
for engines that don't set the block size.
- Fixed a bug in Spider when using multiple part const ranges
(Patch from Kentoku)
PageBulk::insertPage(): Check the array bounds before comparing.
We used to read one byte beyond the end of the 'rec' payload.
The incorrect logic was originally introduced in
commit 7ae21b18a6.
The following parameters are deprecated:
innodb-background-scrub-data-uncompressed
innodb-background-scrub-data-compressed
innodb-background-scrub-data-interval
innodb-background-scrub-data-check-interval
Removed the scrubbing code completely (btr0scrub.h, btr0scrub.cc)
Removed the information_schema.innodb_tablespaces_scrubbing table
Removed the scrubbing logic from fil_crypt_thread()
Compute MONITOR_LSN_CHECKPOINT_AGE on demand in
srv_mon_process_existing_counter().
This allows us to remove the overhead of MONITOR_SET
calls for the counter.
Simplify the logging of ALTER TABLE operations, by making use of the
TRX_UNDO_RENAME_TABLE undo log record that was introduced in
commit 0bc36758ba.
commit_try_rebuild(): Invoke row_rename_table_for_mysql() and
actually rename the files before committing the transaction.
fil_mtr_rename_log(), commit_cache_rebuild(),
log_append_on_checkpoint(), row_merge_rename_tables_dict(): Remove.
mtr_buf_copy_t, log_t::append_on_checkpoint: Remove.
row_rename_table_for_mysql(): If !use_fk, ignore missing foreign
keys. Remove a call to dict_table_rename_in_cache(), because
trx_rollback_to_savepoint() should invoke the function if needed.
This patch adds support for the RENAME INDEX operation in the ALTER TABLE
statement. The code which determines whether ALTER TABLE can be done in-place
for "simple" storage engines such as MyISAM, Heap, etc. was updated to
handle ALTER TABLE ... RENAME INDEX as an in-place operation. Support
for in-place ALTER TABLE ... RENAME INDEX for InnoDB was covered by
MDEV-13301.
Syntax changes
==============
A new type of <alter_specification> is added:
<rename index clause> ::= RENAME ( INDEX | KEY ) <oldname> TO <newname>
Where <oldname> and <newname> are identifiers for the old and new
names of the index.
Semantic changes
================
The result of "ALTER TABLE t1 RENAME INDEX a TO b" is a table which
contents and structure are identical to the old version of 't1' with
the only exception index 'a' being called 'b'.
Neither <oldname> nor <newname> can be "primary". The index being
renamed should exist and its new name should not be occupied
by another index on the same table.
Related to: WL#6555, MDEV-13301
Now there can be only one log file instead of several that
logically worked as a single file.
Possible names of redo log files: ib_logfile0, and
ib_logfile101 (for a just-created one).
innodb_log_files_in_group: the value of this variable is no longer used
by InnoDB. Possible values are still 1..100, so as not to break upgrades.
LOG_FILE_NAME: add constant of value "ib_logfile0"
LOG_FILE_NAME_PREFIX: add constant of value "ib_logfile"
get_log_file_path(): convenience function that returns the full
path of a redo log file (a small illustrative sketch follows this list)
SRV_N_LOG_FILES_MAX: removed
srv_n_log_files: we can't remove this for compatibility reasons,
but the server no longer uses this variable
log_sys_t::file::fd: now just one, not a std::vector
log_sys_t::log_capacity: removed the word 'group' from the name
find_and_check_log_file(): part of the logic from the huge srv_start()
was moved here
recv_sys_t::files: file descriptors of redo log files.
There can be several of those in case we're upgrading
from an older MariaDB version.
recv_sys_t::remove_extra_log_files: whether to remove
ib_logfile{1,2,3...} after a successful upgrade.
recv_sys_t::read(): open if needed and read from one
of several log files
recv_sys_t::files_size(): open if needed and return the number of files
redo_file_sizes_are_correct(): check that the redo log files'
sizes are equal, just to log an error for the user.
The corresponding check was moved from srv0start.cc
namespace deprecated: put all deprecated variables here to
prevent us, the developers, from using them
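As noted at get_log_file_path() above, here is a hedged sketch of what such a
helper amounts to; the helper name, parameter, and directory handling are
simplified assumptions, while the "ib_logfile" prefix and the default name
ib_logfile0 come from the constants listed above.

    #include <string>

    // Compose the full path of a redo log file from the log directory and an
    // optional sequence number (0 yields the usual ib_logfile0).
    static std::string make_log_file_path(const std::string &log_dir,
                                          unsigned file_no = 0)
    {
      std::string path = log_dir;
      if (!path.empty() && path.back() != '/')
        path += '/';
      return path + "ib_logfile" + std::to_string(file_no);
    }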
Do not rebuild an index when its key part is converted from utf8mb3 to
utf8mb4 but the key part stays the same.
dict_index_add_to_cache(): assert that prefix_len is divisible by mbmaxlen
ha_innobase::compare_key_parts(): compare key part lengths in characters
instead of bytes.
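A sketch of the idea only (not ha_innobase::compare_key_parts()'s real
signature): a prefix key part stores its length in bytes, so to decide
whether a utf8mb3 to utf8mb4 conversion really changes the key part, compare
the lengths in characters, i.e. bytes divided by mbmaxlen.

    // Illustrative structure; the server keeps this information elsewhere.
    struct key_part_len
    {
      unsigned length_bytes; // stored prefix length in bytes
      unsigned mbmaxlen;     // maximum bytes per character of the charset
    };

    // The assertion mentioned above guarantees length_bytes % mbmaxlen == 0.
    static bool same_prefix_in_chars(const key_part_len &old_part,
                                     const key_part_len &new_part)
    {
      return old_part.length_bytes / old_part.mbmaxlen ==
             new_part.length_bytes / new_part.mbmaxlen;
    }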
commit 08ba388713 of MDEV-12353
introduced an incorrect assumption, which was documented by
the failing assertion.
After instant ADD COLUMN, we can have a null (and in-place) UPDATE
of NULL to NULL. No data needs to be written for such updates.
For ROW_FORMAT=REDUNDANT, we reserve space for the NULL values,
and to be compatible with existing behaviour, we will zerofill
the unused data bytes when updating to a NULL value.
log_t::FORMAT_10_5: physical redo log format tag
log_phys_t: Buffered records in the physical format.
The log record bytes will follow the last data field,
making use of alignment padding that would otherwise be wasted.
If there are multiple records for the same page, also those
may be appended to an existing log_phys_t object if the memory
is available.
In the physical format, the first byte of a record identifies the
record and its length (up to 15 bytes). For longer records, the
immediately following bytes will encode the remaining length
in a variable-length encoding. Usually, a variable-length-encoded
page identifier will follow, followed by optional payload, whose
length is included in the initially encoded total record length.
When a mini-transaction is updating multiple fields in a page,
it can avoid repeating the tablespace identifier and page number
by setting the same_page flag (most significant bit) in the first
byte of the log record. The byte offset of the record will be
relative to where the previous record for that page ended.
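To make the encoding concrete, here is an illustrative decoder for the first
byte; only the same_page bit (most significant) and the length-up-to-15 field
are taken from the description above, and the exact bit positions of the type
field are an assumption.

    #include <cstdint>

    struct first_byte
    {
      bool         same_page; // offsets relative to the previous record of the page
      std::uint8_t type;      // identifies the record (assumed to sit in bits 4..6)
      std::uint8_t length;    // 1..15; longer records append a variable-length
                              // "remaining length" right after this byte
    };

    static first_byte decode_first_byte(std::uint8_t b)
    {
      first_byte r;
      r.same_page = (b & 0x80U) != 0;
      r.type      = std::uint8_t((b >> 4) & 0x07U);
      r.length    = std::uint8_t(b & 0x0fU);
      return r;
    }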
Until MDEV-14425 introduces a separate file-level log for
redo log checkpoints and file operations, we will write the
file-level records in the page-level redo log file.
The record FILE_CHECKPOINT (which replaces MLOG_CHECKPOINT)
will be removed in MDEV-14425, and one sequential scan of the
page recovery log will suffice.
Compared to MLOG_FILE_CREATE2, FILE_CREATE will not include any flags.
If the information is needed, it can be parsed from WRITE records that
modify FSP_SPACE_FLAGS.
MLOG_ZIP_WRITE_STRING: Remove. The record was only introduced temporarily
as part of this work, before being replaced with WRITE (along with
MLOG_WRITE_STRING, MLOG_1BYTE, MLOG_nBYTES).
mtr_buf_t::empty(): Check if the buffer is empty.
mtr_t::m_n_log_recs: Remove. It suffices to check if m_log is empty.
mtr_t::m_last, mtr_t::m_last_offset: End of the latest m_log record,
for the same_page encoding.
page_recv_t::last_offset: Reflects mtr_t::m_last_offset.
Valid values for last_offset during recovery should be 0 or above 8.
(The first 8 bytes of a page are the checksum and the page number,
and neither are ever updated directly by log records.)
Internally, the special value 1 indicates that the same_page form
will not be allowed for the subsequent record.
mtr_t::page_create(): Take the block descriptor as a parameter,
so that it can be compared to mtr_t::m_last. The INIT_INDEX_PAGE
record will always be followed by a subtype byte, because same_page
records must be longer than 1 byte.
trx_undo_page_init(): Combine the writes in WRITE record.
trx_undo_header_create(): Write 4 bytes using a special MEMSET
record that includes 1 byte of length and 2 bytes of payload.
flst_write_addr(): Define as a static function. Combine the writes.
flst_zero_both(): Replaces two flst_zero_addr() calls.
flst_init(): Do not inline the function.
fsp_free_seg_inode(): Zerofill the whole inode.
fsp_apply_init_file_page(): Initialize FIL_PAGE_PREV,FIL_PAGE_NEXT
to FIL_NULL when using the physical format.
btr_create(): Assert !page_has_siblings() because fsp_apply_init_file_page()
must have been invoked.
fil_ibd_create(): Do not write FILE_MODIFY after FILE_CREATE.
fil_names_dirty_and_write(): Remove the parameter mtr.
Write the records using a separate mini-transaction object,
because any FILE_ records must be at the start of a mini-transaction log.
recv_recover_page(): Add a fil_space_t* parameter.
After applying log to a ROW_FORMAT=COMPRESSED page,
invoke buf_zip_decompress() to restore the uncompressed page.
buf_page_io_complete(): Remove the temporary hack to discard the
uncompressed page of a ROW_FORMAT=COMPRESSED page.
page_zip_write_header(): Remove. Use mtr_t::write() or
mtr_t::memset() instead, and update the compressed page frame
separately.
trx_undo_header_add_space_for_xid(): Remove.
trx_undo_seg_create(): Perform the changes that were previously
made by trx_undo_header_add_space_for_xid().
btr_reset_instant(): New function: Reset the table to MariaDB 10.2
or 10.3 format when rolling back an instant ALTER TABLE operation.
page_rec_find_owner_rec(): Merge with the only callers.
page_cur_insert_rec_low(): Combine writes by using a local buffer.
MEMMOVE data from the preceding record whenever feasible
(copying at least 3 bytes).
page_cur_insert_rec_zip(): Combine writes to page header fields.
PageBulk::insertPage(): Issue MEMMOVE records to copy a matching
part from the preceding record.
PageBulk::finishPage(): Combine the writes to the page header
and to the sparse page directory slots.
mtr_t::write(): Only log the least significant (last) bytes
of multi-byte fields that actually differ.
For updating FSP_SIZE, we must always write all 4 bytes to the
redo log, so that the fil_space_set_recv_size() logic in
recv_sys_t::parse() will work.
mtr_t::memcpy(), mtr_t::zmemcpy(): Take a pointer argument
instead of a numeric offset to the page frame. Only log the
last bytes of multi-byte fields that actually differ.
In fil_space_crypt_t::write_page0(), we must log also any
unchanged bytes, so that recovery will recognize the record
and invoke fil_crypt_parse().
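One way to realize the "log only the trailing bytes that differ" rule for a
big-endian multi-byte page field is sketched below; this is illustrative
only, not mtr_t::write(), and the exceptions noted above (FSP_SIZE,
fil_space_crypt_t::write_page0()) would bypass it.

    #include <cstddef>
    #include <cstdint>

    // Returns the offset of the first byte that differs; the bytes from this
    // offset to the end are the least significant ones and must be logged.
    // A return value equal to len means nothing changed, so nothing is logged.
    static std::size_t first_changed_byte(const std::uint8_t *old_val,
                                          const std::uint8_t *new_val,
                                          std::size_t len)
    {
      std::size_t i = 0;
      while (i < len && old_val[i] == new_val[i])
        i++;
      return i;
    }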
Future work:
MDEV-21724 Optimize page_cur_insert_rec_low() redo logging
MDEV-21725 Optimize btr_page_reorganize_low() redo logging
MDEV-21727 Optimize redo logging for ROW_FORMAT=COMPRESSED
Instead of writing the high-level redo log records
MLOG_LIST_END_COPY_CREATED and MLOG_COMP_LIST_END_COPY_CREATED,
write a log record for each individual insert of a record.
page_copy_rec_list_end_to_created_page(): Remove.
This will improve the fill factor of some pages.
Adjust some tests accordingly.
PageBulk::init(), PageBulk::finish(): Avoid setting bogus limits
to PAGE_HEAP_TOP and PAGE_N_DIR_SLOTS. Avoid accessor functions
that would enforce these limits before the correct ones are set
at the end of PageBulk::finish().
compare_keys_but_name(): do not use KEY_PART_INFO::field for
Field::is_equal(). Following the logic of that code we need to
compare the fields of a table, but KEY_PART_INFO::field is sometimes
(when the key part is shorter than the table field) a different field.
In that case Field::is_equal() returns an incorrect result and
problems occur.
KEY_PART_INFO::field may become some strange field in
open_frm_error open_table_from_share(). I think this is
incorrect logic, some technical debt. I'm not fixing it right now,
because I don't have time. But I'm making Field::field_length
a const class member. Then the only fishy code that changed that
field now requires a const_cast<>, which brings attention to that
code. This change should not affect the logic of the
program in any way.
Remove usage of the deprecated variable storage_engine. It was deprecated
in 5.5 but never issued a deprecation warning. Make it issue a warning in
10.5.1. It has been replaced with default_storage_engine.
For compatibility with diagnostic software, let us
return a dummy buffer pool identifier 0 and restore
the columns that were initially deleted in
commit 1a6f708ec5:
information_schema.innodb_buffer_page.pool_id
information_schema.innodb_buffer_page_lru.pool_id
information_schema.innodb_buffer_pool_stats.pool_id
information_schema.innodb_cmpmem.buffer_pool_instance
information_schema.innodb_cmpmem_reset.buffer_pool_instance
Thanks to Vladislav Vaintroub for pointing this out.
Our benchmarking efforts indicate that the reasons for splitting the
buf_pool in commit c18084f71b
have mostly gone away, possibly as a result of
mysql/mysql-server@ce6109ebfd
or similar work.
Only in one write-heavy benchmark, where the working set size is
ten times the buffer pool size, was buf_pool->mutex
less contended with 4 buffer pool instances than with 1 instance,
in buf_page_io_complete(). That contention could be alleviated
further by making more use of std::atomic and by splitting
buf_pool_t::mutex further (MDEV-15053).
We will deprecate and ignore the following parameters:
innodb_buffer_pool_instances
innodb_page_cleaners
There will be only one buffer pool and one page cleaner task.
In a number of INFORMATION_SCHEMA views, columns that indicated
the buffer pool instance will be removed:
information_schema.innodb_buffer_page.pool_id
information_schema.innodb_buffer_page_lru.pool_id
information_schema.innodb_buffer_pool_stats.pool_id
information_schema.innodb_cmpmem.buffer_pool_instance
information_schema.innodb_cmpmem_reset.buffer_pool_instance
During native table rebuild or index creation, InnoDB used to skip
redo logging and write MLOG_INDEX_LOAD records to inform crash recovery
and Mariabackup of the gaps in redo log. This is fragile and prohibits
some optimizations, such as skipping the doublewrite buffer for
newly (re)initialized pages (MDEV-19738).
row_merge_write_redo(): Remove. We do not write MLOG_INDEX_LOAD
records any more. Instead, we write full redo log.
FlushObserver: Remove.
fseg_free_page_func(): Remove the parameter log. Redo logging
cannot be disabled.
fil_space_t::redo_skipped_count: Remove.
We cannot remove buf_block_t::skip_flush_check, because PageBulk
will temporarily generate invalid B-tree pages in the buffer pool.
- The n_ext value may be less than dtuple_get_n_ext(dtuple) when the PK is
being updated and the new record inherits the externally stored fields from
the delete-marked old record.
row_log_table_get_pk_old_col(): When replacing a NULL value for a
column of the primary key that is being added, look up the correct
default value, even if columns had been instantly reordered or
dropped earlier. This ought to have been broken ever since
commit 0e5a4ac253 (MDEV-15562).
ha_innobase::commit_inplace_alter_table(): After
ALTER_STORED_COLUMN_ORDER, ensure that the virtual column metadata
will be reloaded also when the table is not being rebuilt.