This is a continuation of MDEV-22153: the bug appears when the contiguity of
history partitions is broken. ha_partition::open_read_partitions() cannot
handle a non-contiguous list of default partitions.
Fix: when a default partition is dropped, convert the list of partitions to
non-default.
The test was broken in commit f40ca33bbc.
The background DROP TABLE queue in InnoDB will continue to
use names like #sql-ib, and we must filter out those file names.
The solution is to read the system variable value on startup and to fill
databases_exclude_hash.
xb_load_list_string() became non-static and was reformatted. The system
variable value is read and processed in get_mysql_vars(), which was also
reformatted.
The reason for this is to make all temporary file names similar and
also to be able to figure out from where a #sql-xxx name originates.
The new format is, for most cases:
'#sql-name-current_pid-thread_id[-increment]'
where 'name' is one of subselect, alter, exchange, temptable or backup.
The exceptions are:
ALTER PARTITION shadow files:
'#sql-shadow-thread_id-original_table_name'
Names used with the temp pool:
'#sql-name-current_pid-pool_number'
MDEV-22088 S3 partitioning support
All ALTER PARTITION commands should now work on S3 tables except
REBUILD PARTITION
TRUNCATE PARTITION
REORGANIZE PARTITION
In addition, partitioned S3 tables can also be replicated.
This is achieved by storing the partitioned table's .frm and .par files
on S3 for partitioned shared (S3) tables.
The discovery methods are enhanced by allowing engines that support
discovery to also discover the partitioned table's .frm and .par files.
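As an illustration of the user-visible effect (table, column and partition
names below are made up, not taken from the patch, and a configured S3
connection is assumed), something along these lines is now expected to work:

  CREATE TABLE archive_data (a INT, b INT)
    PARTITION BY RANGE (a)
    (PARTITION p1 VALUES LESS THAN (100),
     PARTITION p2 VALUES LESS THAN (200));
  ALTER TABLE archive_data ENGINE=S3;             -- move the partitioned table to S3
  ALTER TABLE archive_data ADD PARTITION
    (PARTITION p3 VALUES LESS THAN (300));        -- ALTER PARTITION now works on S3 tables
  ALTER TABLE archive_data REBUILD PARTITION p1;  -- one of the listed exceptions, still fails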
Things in more detail
- The .frm and .par files of partitioned tables are stored in S3 and kept
in sync.
- Added the hton callback create_partitioning_metadata to inform the handler
that metadata for a partitioned file has changed
- Added back handler::discover_check_version() to be able to check if
a table's or a part table's definition has changed.
- Added handler::check_if_updates_are_ignored(). Needed for partitioning.
- Renamed rebind() -> rebind_psi(), as it was before.
- Changed the CHF_xxx handler flags to an enum
- Changed some checks from using table->file->ht to use
table->file->partition_ht() to get discovery to work with partitioning.
- If TABLE_SHARE::init_from_binary_frm_image() fails, ensure that we
don't leave any .frm or .par files around.
- Fixed that writefrm() doesn't leave unusable .frm files around
- Appended the extension to the path for writefrm() to be able to reuse the
function for creating .par files.
- Added DBUG_PUSH("") to a few functions that caused a lot of non-critical
tracing.
MDEV-22199 Add VISIBLE attribute for indexes in CREATE TABLE
This was done to make it easier to read in dumps from MySQL 8.0 generated
with MySQL Workbench.
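For example, a dump line of roughly this shape (index and column names made
up) is now accepted; the attribute is parsed for MySQL 8.0 compatibility:

  CREATE TABLE t1 (
    a INT,
    b INT,
    INDEX idx_b (b) VISIBLE   -- VISIBLE is accepted as emitted in MySQL 8.0 dumps
  );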
>= M_TOT_PARTS' FAILED.
This patch is taken from MySQL, originally written by Mattias Jonsson
Here follows the original commit message:
Problem: in handle_alter_part_error(), the altered partition_info object
was still used if the table was under LOCK TABLES.
Solution: always close and destroy all table
and table_share instances if an exclusive MDL lock was
possible.
If an exclusive lock could not be acquired (only possible
during rollback of DDL), at least close and destroy this
table instance.
rb#7361.
Approved by Mikael and Aditya.
If a transaction had no effect due to INSERT IGNORE and a new
transaction was started with START TRANSACTION without committing
the previous one, the server crashed on assertion when starting
a new wsrep transaction.
As a fix, refined the condition for doing wsrep_commit_empty() at the end
of ha_commit_trans().
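The scenario is roughly the following (table name and values are
illustrative), executed on a Galera node:

  CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1);
  START TRANSACTION;
  INSERT IGNORE INTO t1 VALUES (1);  -- duplicate ignored, the transaction has no effect
  START TRANSACTION;                 -- implicitly commits the previous (empty) transaction;
                                     -- this used to hit the wsrep assertion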
In main.index_merge_myisam we remove the test that was added in
commit a2d24def8c because
it duplicates the test case that was added in
commit 5af12e4635.
The test innodb.innodb_wl6326 that had been disabled in 10.4 due to
MDEV-21535 is failing on 10.5 due to a different reason: the removal
of the MLOG_COMP_END_COPY_CREATED operations in MDEV-12353
commit 276f996af9 caused PAGE_LAST_INSERT
to be set to something nonzero by the function page_copy_rec_list_end().
This in turn would cause btr_page_get_split_rec_to_right() to behave
differently: we would not attempt to split the page at all, but simply
insert the new record into the new, empty, right leaf page.
Even though the change reduced the sizes of some tables, it is better
to aim for balanced trees.
page_copy_rec_list_end(), PageBulk::finishPage():
Preserve PAGE_LAST_INSERT, PAGE_N_DIRECTION, PAGE_DIRECTION.
PageBulk::finish(): Move some common code from PageBulk::finishPage().
Fixed a couple of race conditions in the test case to ensure stable order
of events. Also removed all sleeps. Test execution time is down from 18s
to 0.15s.
The disconnect audit event is triggered after control is returned to the
mysqltest client, which means mysqltest may issue more commands
concurrently before the disconnect is actually logged.
A similar problem happens with regular query execution: an event is
triggered after control is returned to the client, which may end
up with an unstable order of events in different connections.
Delayed-insert rows are enqueued separately and can either be combined
into a single event or go as separate events. Reduced the number of inserted
rows to 1 to stabilize the result.
Also backported 2b3f6ab from 10.5.
Changed the wording in error messages from MySQL to MariaDB. In
cases where the word "server" could be used instead, that was done.
Tests that have these errors recorded were updated.
Optionally roll back prepared XA transactions on "mariabackup --prepare".
The fix MUST NOT be ported to 10.5+, as the MDEV-742 fix solves the issue for
slaves.
ADD default history partitions generated a wrong partition name,
e.g. p2 instead of p1. A gap in the sequence of partition names leads to
an ha_partition::open_read_partitions() failure on a nonexistent name.
Manually fixing such a broken table requires:
1. create an empty table under any name (t_empty) with the correct number
of partitions;
2. stop the server;
3. rename the data files (.myd, .myi or .ibd) of the broken table to t_empty,
fixing the partition sequence (#p2 to #p1, #p3 to #p2);
4. start the server;
5. drop the broken table;
6. rename t_empty to the correct table name.
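As a rough sketch of that procedure (the table name t1 and the SYSTEM_TIME
definition are hypothetical; the non-SQL steps are shown as comments):

  -- 1. create an empty table with the correct number of partitions
  CREATE TABLE t_empty (a INT) WITH SYSTEM VERSIONING
    PARTITION BY SYSTEM_TIME LIMIT 1000 PARTITIONS 3;
  -- 2.-4. stop the server, rename the data files of the broken table t1 onto
  --       t_empty while shifting the partition suffixes (#P#p2 -> #P#p1,
  --       #P#p3 -> #P#p2), then start the server again
  -- 5. drop the broken table
  DROP TABLE t1;
  -- 6. rename t_empty to the correct table name
  RENAME TABLE t_empty TO t1;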
Turn the read cache off for period (FOR PORTION) updates.
As commit 498a96a4 says:
Aria with row_format=fixed uses an IO_CACHE of type READ_CACHE for
the sequential read in the update loop. When a history row is inserted inside
this loop, the cache misses it and fails with an error.
This is applicable to any additional row inserts on UPDATE. In this case
it was initiated by UPDATE FOR PORTION.
Related to MDEV-20441.
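An illustrative sketch of the affected kind of statement, assuming an Aria
table with ROW_FORMAT=FIXED and an application-time period (names and dates
are made up):

  CREATE TABLE t1 (
    id INT,
    s DATE,
    e DATE,
    PERIOD FOR p (s, e)
  ) ENGINE=Aria ROW_FORMAT=FIXED;
  INSERT INTO t1 VALUES (1, '2020-01-01', '2020-12-31');
  -- the UPDATE below inserts extra rows for the cut-off portions of the period
  -- while the update loop is still reading through the READ_CACHE
  UPDATE t1 FOR PORTION OF p
    FROM '2020-03-01' TO '2020-06-01'
    SET id = 2;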
- Fixed mysql_prepare_create_table() constraint duplicate checking;
- Refactored period constraint handling in mysql_prepare_alter_table():
* No need to allocate new objects;
* Keep old constraint name but exclude it from dup checking by automatic_name;
- Some minor memory leaks fixed;
- Some conceptual TODOs.
Cause:
The join tmp table gets a NULL row because of the OUTER JOIN; that is
expected. Since `multi_update::prepare2()` converted
`Item_temptable_rowid` into `Item_field` (28dbdf3),
`multi_update::send_data()` accesses the join tmp record directly and
treats it as a normal row, ignoring the null status of the ref field. The NULL
ref field is then treated as normal in `multi_update::do_updates()`, which
tries to position the updated table by reference 0.
Note that reference 0 may be a valid reference and the first row of the
table can be wrongly updated (see multi_update.test).
Fix:
Do not add a row to the multi-update tmp table in case of a NULL ref
field. The join tmp table does not have the null_row status at this time (nor
`STATUS_NULL_ROW`) and cannot be skipped by these properties
(see the first comment in multi_update::send_data()). But it has all-NULL
fields (including the ref field).
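A sketch of the shape of an affected statement (tables and data are made up;
whether the buffered tmp-table path is taken depends on the optimizer): a
multi-table UPDATE where the updated table is on the NULL-complemented side
of an OUTER JOIN:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT, b INT);
  INSERT INTO t1 VALUES (1), (2);
  INSERT INTO t2 VALUES (1, 0);
  -- t1.a = 2 has no match in t2, so the join produces a NULL row for t2;
  -- before the fix such a NULL row could end up updating the first row of t2
  UPDATE t1 LEFT JOIN t2 ON t1.a = t2.a SET t2.b = t1.a;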
SQL_SELECT::check_quick() returns an error status only when
test_quick_select() returns -1. Fix the error handling when lower frames
throw an error but it is ignored by test_quick_select(). Fix the return
status for out-of-memory errors, which obviously must be processed
as errors in the upper frames.
Assertion `old_part_id == m_last_part' failed in ha_partition::update_row or `part_id == m_last_part' in ha_partition::delete_row upon UPDATE/DELETE after dropping versioning
A PRIMARY KEY change had not been treated as partition reorganization in the
case of partitioning by KEY() (without parameters).
* set `*partition_changed= true` in the described case.
* since adding/dropping system versioning does not affect alter_info->key_list,
it required separate attention.
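A sketch of the shape of the failing scenario (the exact reproduce case may
differ): the table is partitioned by KEY() without parameters, so the
partitioning key follows the primary key, which implicitly changes when
system versioning is dropped:

  CREATE TABLE t1 (a INT PRIMARY KEY, b INT)
    WITH SYSTEM VERSIONING
    PARTITION BY KEY() PARTITIONS 2;
  INSERT INTO t1 VALUES (1, 10), (2, 20);
  ALTER TABLE t1 DROP SYSTEM VERSIONING;  -- must now be treated as partition reorganization
  UPDATE t1 SET b = b + 1;                -- used to hit the m_last_part assertion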
In commit a5584b13d1
some scrubbing-related status variables were removed along with
the background scrubbing code.
The status variable INNODB_ENCRYPTION_NUM_KEY_REQUESTS
was inadvertently removed as part of that.
innodb_status_variables[]: Restore "encryption_num_key_requests".
We introduce the test innodb.innodb_status_variables
in order to catch similar regressions in the future.
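The restored variable is again visible via, for example:

  SHOW GLOBAL STATUS LIKE 'Innodb_encryption_num_key_requests';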
Improve the test that was imported and adapted for MariaDB in
commit fb217449dc.
row_undo_step(): Move the DEBUG_SYNC point from trx_rollback_for_mysql().
This DEBUG_SYNC point is executed after rolling back one row.
trx_rollback_for_mysql(): Clarify the comments that describe the scenario,
and remove the DEBUG_SYNC point.
If the statement "if (trx->has_logged_persistent())" and its body are
removed from trx_rollback_for_mysql(), then the test
innodb.xa_recovery_debug will fail because the transaction would still
exist in the XA PREPARE state. If we allow the XA COMMIT statement
to succeed in the test, we would observe an incorrect state of the
XA transaction where the table would contain row (1,NULL). Depending
on whether the XA transaction was committed, the table should either
be empty or contain the record (1,1). The intermediate state of
(1,NULL) should never be observed after completed recovery.
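The described scenario corresponds to an XA transaction of roughly this shape
(statement details are illustrative, not copied from innodb.xa_recovery_debug):

  CREATE TABLE t1 (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;
  XA START 'x1';
  INSERT INTO t1 VALUES (1, NULL);
  UPDATE t1 SET b = 1 WHERE a = 1;
  XA END 'x1';
  XA PREPARE 'x1';
  -- the server is killed and restarted here; after recovery,
  -- XA COMMIT 'x1' must yield the row (1,1) and XA ROLLBACK 'x1' an empty
  -- table; the intermediate state (1,NULL) must never be observable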
* The overlaps check is implemented at the handler level, per row command.
It creates a separate cursor (actually, another handler instance) and
caches it inside the original handler when ha_update_row or
ha_insert_row is issued. The cursor is closed when the handler is unlocked.
* Having the same key in the index means a unique constraint violation
even in the usual sense. So we fetch the left and right neighbours and check
that they have the same key prefix, excluding only the period part from the key.
If it doesn't match, then there is no such neighbour, and the check passes.
Otherwise, we check whether this neighbour intersects with the considered key
(see the SQL sketch after this list).
* The check does not introduce a new error and fails with the ER_DUPP_KEY error.
This might break the REPLACE workflow and should be fixed separately
* rename to a generic name
* move remaining initializations from query exec to prepare time
* simplify/unify key handling in open_table_from_share and delayed
* remove dead code
* move tests where they belong
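A sketch of the user-visible behaviour of the constraint (table, column and
period names are made up):

  CREATE TABLE booking (
    id INT,
    s DATE,
    e DATE,
    PERIOD FOR p (s, e),
    UNIQUE (id, p WITHOUT OVERLAPS)
  );
  INSERT INTO booking VALUES (1, '2020-01-01', '2020-03-01');
  INSERT INTO booking VALUES (1, '2020-03-01', '2020-05-01');  -- ok: [s,e) periods only touch
  INSERT INTO booking VALUES (1, '2020-02-01', '2020-04-01');  -- fails with a duplicate-key error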