Remove the dependency on unzip. Instead, generate the InnoDB files
with perl.
log_block_checksum_is_ok(): Correct the error message.
recv_scan_log_recs(): Remove the duplicated error message for
log block checksum mismatch.
innobase_start_or_create_for_mysql(): If the server is in read-only
mode or if innodb_force_recovery>=3, do not try to modify the system
tablespace. (If the doublewrite buffer or the non-core system tables
do not exist, do not try to create them.)
innodb_shutdown(): Relax a debug assertion. If the system tablespace
did not contain a doublewrite buffer and if we started up in
innodb_read_only mode or with innodb_force_recovery>=3, it will not
be created.
dict_create_or_check_sys_tablespace(): Set the flag
srv_sys_tablespaces_open when the tables exist.
recv_scan_log_recs(): Remember if redo log apply is needed,
even if starting up in innodb_read_only mode.
recv_recovery_from_checkpoint_start_func(): Refuse
innodb_read_only startup if redo log apply is needed.
[0-9]*[.]?[0-9]* wasn't a sufficient regex to cover the
%lg format used in Json_writer::add_double; exponent
formats were missed.
Here we normalize all the replace_regex expressions for
ANALYZE FORMAT=JSON into one include file.
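For illustration only, a standalone C++ check (not part of the test framework)
of why the old pattern misses %lg exponent output; the extended pattern shown
is just one possible form, not necessarily the exact expression placed in the
new include file:

    #include <cassert>
    #include <regex>
    #include <string>

    int main() {
        const std::string value = "1.5e-05";  // typical %lg output with an exponent
        const std::regex old_pattern("[0-9]*[.]?[0-9]*");
        const std::regex with_exponent("[-+]?[0-9]*[.]?[0-9]+([eE][-+]?[0-9]+)?");

        assert(!std::regex_match(value, old_pattern));   // the exponent form is missed
        assert(std::regex_match(value, with_exponent));  // covered once exponents are allowed
        return 0;
    }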
Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
Test crash recovery from an encrypted redo log with innodb_encrypt_log=0.
Previously, we did a clean shutdown, so only the log checkpoint
information would have been read from the redo log. With this change,
we will be reading and applying encrypted redo log records.
include/start_mysqld.inc: Observe $restart_parameters.
encryption.innodb-log-encrypt: Remove some unnecessary statements,
and instead of restarting the server and concurrently accessing
the files while the server is running, kill the server, check the
files, and finally start up the server.
innodb.log_data_file_size: Use start_mysqld.inc with $restart_parameters.
- MDEV-11621 rpl.rpl_gtid_stop_start fails sporadically in buildbot
- MDEV-11620 rpl.rpl_upgrade_master_info fails sporadically in buildbot
The issues above were probably caused by the build machine being overworked, so
that the shutdown took longer than 30 and 10 seconds respectively, which caused
MyISAM tables to be marked as crashed.
Fixed by flushing MyISAM tables before doing a forced shutdown/kill.
Also increased the timeout for forced shutdown from 10 seconds to 60 seconds
to fix other possible issues on slow machines.
Also fixed some compiler warnings.
MTR raises default wait_for_pos_timeout from 300 to 1500 when tests
are run with valgrind. The same needs to be done for other
replication-related waits.
The change should fix one of the failures mentioned in MDEV-10653
(rpl.rpl_parallel fails in buildbot with timeout), the one
on the valgrind builder; but not all of them.
encryption.innodb_scrub: Clean up. Make it also cover ROW_FORMAT=COMPRESSED,
removing the need for encryption.innodb_scrub_compressed.
Add a FIXME comment saying that we should create a secondary index, to
demonstrate that undo log pages also get scrubbed. Currently that is
not working!
Also clean up encryption.innodb_scrub_background, but keep it disabled,
because the background scrubbing does not work reliably.
Fix both tests so that if something is not scrubbed, the test will be
aborted, so that the data files will be preserved. Allow the tests to
run on Windows as well.
* some of these tests run just fine with InnoDB:
-> s/have_xtradb/have_innodb/
* sys_var tests did basic tests for xtradb only variables
-> remove them, they're useless anyway (sysvar_innodb does it better)
* multi_update had innodb specific tests
-> move to multi_update_innodb.test
This change is a backport from 10.0 to 5.5 for:
1. The full patch for:
MDEV-4841 Wrong character set of ADDTIME() and DATE_ADD()
9adb6e991e
2. A small fragment of:
MDEV-5298 Illegal mix of collations on timestamp
03f6778d61
which overrides Item_temporal_hybrid_func::cmp_type(),
and adds a new line into cache_temporal_4265.result.
This should be functionally equivalent to WL#6204 in MySQL 8.0.0, with
the notable difference that the file format changes are limited to
repurposing a previously unused data field in B-tree pages.
For persistent InnoDB tables, write the last used AUTO_INCREMENT
value to the root page of the clustered index, in the previously
unused (0) PAGE_MAX_TRX_ID field, now aliased as PAGE_ROOT_AUTO_INC.
Unlike some other previously unused InnoDB data fields, this one was
actually always zero-initialized, at least since MySQL 3.23.49.
The writes to PAGE_ROOT_AUTO_INC are protected by SX or X latch on the
root page. The SX latch will allow concurrent read access to the root
page. (The field PAGE_ROOT_AUTO_INC will only be read on the
first-time call to ha_innobase::open() from the SQL layer. The
PAGE_ROOT_AUTO_INC can only be updated when executing SQL, so
read/write races are not possible.)
During INSERT, the PAGE_ROOT_AUTO_INC is updated by the low-level
function btr_cur_search_to_nth_level(), adding no extra page
access. [Adaptive hash index lookup will be disabled during INSERT.]
If some rare UPDATE modifies an AUTO_INCREMENT column, the
PAGE_ROOT_AUTO_INC will be adjusted in a separate mini-transaction in
ha_innobase::update_row().
When a page is reorganized, we have to preserve the PAGE_ROOT_AUTO_INC
field.
During ALTER TABLE, the initial AUTO_INCREMENT value will be copied
from the table. ALGORITHM=COPY and online log apply in LOCK=NONE will
update PAGE_ROOT_AUTO_INC in real time.
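As a rough, self-contained C++ toy model of the above (not InnoDB code; all
names are illustrative stand-ins), the idea is a single counter field on the
root page that the INSERT path records while the root is already latched, and
that page reorganization carries over:

    #include <cstdint>

    // Toy stand-in for the clustered index root page; in InnoDB the value lives
    // in the formerly unused PAGE_MAX_TRX_ID field, aliased PAGE_ROOT_AUTO_INC.
    struct RootPage { uint64_t root_auto_inc = 0; };

    // INSERT path: the B-tree descent already latches the root page, so the
    // last used AUTO_INCREMENT value can be recorded without extra page access.
    void record_auto_inc_on_insert(RootPage& root, uint64_t last_used) {
        root.root_auto_inc = last_used;
    }

    // Page reorganization rebuilds the page but must preserve the field.
    RootPage reorganize(const RootPage& old_root) {
        RootPage rebuilt;
        rebuilt.root_auto_inc = old_root.root_auto_inc;  // keep PAGE_ROOT_AUTO_INC
        return rebuilt;
    }

    int main() {
        RootPage root;
        record_auto_inc_on_insert(root, 42);
        return reorganize(root).root_auto_inc == 42 ? 0 : 1;
    }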
innodb_col_no(): Determine the dict_table_t::cols[] element index
corresponding to a Field of a non-virtual column.
(The MySQL 5.7 implementation of virtual columns breaks the 1:1
relationship between Field::field_index and dict_table_t::cols[].
Virtual columns are omitted from dict_table_t::cols[]. Therefore,
we must translate the field_index of AUTO_INCREMENT columns into
an index of dict_table_t::cols[].)
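A minimal self-contained sketch of that translation, using toy types rather
than the server's Field/dict_table_t: because virtual columns occupy no slot in
dict_table_t::cols[], the cols[] position of a stored column is the number of
stored columns preceding it.

    #include <vector>

    struct ColumnDef { bool is_virtual; };  // stand-in for a table column definition

    // Map a field_index (position among all columns, virtual ones included) to
    // the index of the corresponding stored column, skipping virtual columns.
    unsigned stored_col_no(const std::vector<ColumnDef>& all_cols, unsigned field_index) {
        unsigned col_no = 0;
        for (unsigned i = 0; i < field_index; i++) {
            if (!all_cols[i].is_virtual) {
                col_no++;  // only stored columns occupy a cols[] slot
            }
        }
        return col_no;
    }

    int main() {
        // c0 stored, c1 virtual, c2 stored (say, the AUTO_INCREMENT column).
        std::vector<ColumnDef> cols = {{false}, {true}, {false}};
        return stored_col_no(cols, 2) == 1 ? 0 : 1;  // c2 maps to cols[1]
    }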
Upgrade from old data files:
By default, the AUTO_INCREMENT sequence in old data files would appear
to be reset, because PAGE_MAX_TRX_ID or PAGE_ROOT_AUTO_INC would contain
the value 0 in each clustered index page. In new data files,
PAGE_ROOT_AUTO_INC can only be 0 if the table is empty or does not contain
any AUTO_INCREMENT column.
For backward compatibility, we use the old method of
SELECT MAX(auto_increment_column) for initializing the sequence.
btr_read_autoinc(): Read the AUTO_INCREMENT sequence from a new-format
data file.
btr_read_autoinc_with_fallback(): A variant of btr_read_autoinc()
that will resort to reading MAX(auto_increment_column) for data files
that did not use AUTO_INCREMENT yet. It was manually tested that during
the execution of innodb.autoinc_persist the compatibility logic is
not activated (for new files, PAGE_ROOT_AUTO_INC is never 0 in nonempty
clustered index root pages).
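The fallback decision can be summarized by this hedged, self-contained C++
sketch (the scan callback is a hypothetical placeholder for the
MAX(auto_increment_column) based initialization, not a real InnoDB function):

    #include <cstdint>
    #include <functional>

    uint64_t read_autoinc_with_fallback_sketch(
            uint64_t page_root_auto_inc,                // value read from the root page
            const std::function<uint64_t()>& scan_max)  // stands in for the MAX(col) scan
    {
        if (page_root_auto_inc != 0) {
            return page_root_auto_inc;  // new-format file: trust the persisted value
        }
        // Old data file (field always 0) or a table that never used AUTO_INCREMENT:
        // fall back to the legacy MAX(auto_increment_column) method.
        return scan_max();
    }

    int main() {
        auto scan = [] { return uint64_t{41}; };
        bool ok = read_autoinc_with_fallback_sketch(100, scan) == 100
               && read_autoinc_with_fallback_sketch(0, scan) == 41;
        return ok ? 0 : 1;
    }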
initialize_auto_increment(): Replaces
ha_innobase::innobase_initialize_autoinc(). This initializes
the AUTO_INCREMENT metadata. Only called from ha_innobase::open().
ha_innobase::info_low(): Do not try to lazily initialize
dict_table_t::autoinc. It must already have been initialized by
ha_innobase::open() or ha_innobase::create().
Note: The adjustments to class ha_innopart were not tested, because
the source code (native InnoDB partitioning) is not being compiled.
* remove old 5.2+ InnoDB support for virtual columns
* enable corresponding parts of the innodb-5.7 sources
* copy corresponding test cases from 5.7
* copy detailed Alter_inplace_info::HA_ALTER_FLAGS flags from 5.7
- and more detailed detection of changes in fill_alter_inplace_info()
* more "innodb compatibility hooks" in sql_class.cc to
- create/destroy/reset a THD (used by background purge threads)
- find a prelocked table by name
- open a table (from a background purge thread)
* different from 5.7:
- new service thread "thd_destructor_proxy" to make sure all THDs are
destroyed at the correct point in time during the server shutdown
- proper opening/closing of tables for vcol evaluations in
+ FK checks (use already opened prelocked tables)
+ purge threads (open the table, MDLock it, add it to tdc, close
when not needed)
- cache open tables in vc_templ
- avoid unnecessary allocations, reuse table->record[0] and table->s->default_values
- not needed in 5.7, because it overcalculates:
+ tell the server to calculate vcols for an on-going inline ADD INDEX
+ calculate vcols for correct error messages
* update other engines (mroonga/tokudb) accordingly
Cannot use TABLE::merge_keys for that, because Field::part_of_key
was designed to mark fields for KEY_READ, so a field is not a
"part of key" if only a prefix of the field is.
- created binlog_encryption test suite and added it to the default list
- moved some tests from rpl, binlog and multisource suites to extra
so that they could be re-used in different suites
- made minor changes in include files
A number of tests used to fail due to just not being able to
access MyRocks' I_S plugins:
- cons_snapshot_repeatable_read
- drop_table3
- index_file_map
- index_key_block_size
- issue100_delete
- truncate_table3
for InnoDB tables"
Don't use thr_lock.c locks for InnoDB tables. Below is a list of changes that
were needed to implement this:
- HANDLER OPEN acquires MDL_SHARED_READ instead of MDL_SHARED
- HANDLER READ calls external_lock() even if SE is not going to be locked by
THR_LOCK
- InnoDB lock wait timeouts are now honored; they are much shorter by default
than server lock wait timeouts (50 seconds vs. 1 year)
- with @@autocommit=1, LOCK TABLES disables autocommit implicitly, though the
user still sees @@autocommit=1
- the above starts an implicit transaction
- transactions started by LOCK TABLES are now rolled back on disconnect
(previously everything was committed due to autocommit)
- transactions started by LOCK TABLES are now rolled back by ROLLBACK
(previously everything was committed due to autocommit)
- it is now impossible to change BINLOG_FORMAT under LOCK TABLES (at least
to STATEMENT) due to the running transaction
- LOCK TABLES WRITE is additionally handled by MDL
- ...in contrast, LOCK TABLES READ protection against DML is pure InnoDB
- combining transactional and non-transactional tables under LOCK TABLES
may cause rolled back changes in the transactional table and "committed"
changes in the non-transactional table
- the user may disable innodb_table_locks, which basically makes LOCK TABLES
a no-op
Removed tests for BUG#45143 and BUG#55930, which cover InnoDB + THR_LOCK. To
operate properly, these tests require the code flow to go through THR_LOCK debug
sync points, which is not the case after this patch. These tests are removed
by WL#6671 as well. An alternative is to port them to a different storage engine.
- Add include/have_rocksdb.inc (TODO: is there any way to have this
file somewhere under storage/rocksdb/mysql-test ?)
- Make rocksdb.test require have_partition.inc because it uses
partitioned tables
Merge feature into 10.2 from feature branch.
Delayed replication adds an option
CHANGE MASTER TO master_delay=<seconds>
Replication will then delay applying events by that many
seconds. This creates a replication slave that reflects the state of
the master some time in the past.
Feature is ported from MySQL source tree.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Problem:
When using the delayed slave feature, if the user issues STOP SLAVE
while the SQL thread is delaying, the event being waited for is still
executed. It should not be executed.
Fix:
Check the return value from the delay function,
slave.cc:slave_sleep(). If the return value is 1, it means the thread
has been stopped; in that case we don't execute the statement.
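A minimal sketch of that control flow in self-contained C++ (the stub below is
only a stand-in for slave.cc:slave_sleep(), not its real signature):

    #include <cstdio>

    // Hypothetical stand-in for slave_sleep(): returns 1 when the SQL thread
    // was stopped while waiting out the configured delay.
    static int slave_sleep_stub(bool stop_requested) {
        return stop_requested ? 1 : 0;
    }

    static void apply_event() { std::printf("event applied\n"); }

    static void maybe_apply_delayed_event(bool stop_requested) {
        if (slave_sleep_stub(stop_requested) == 1) {
            // STOP SLAVE arrived while delaying: skip the event instead of executing it.
            std::printf("thread stopped during delay; event skipped\n");
            return;
        }
        apply_event();
    }

    int main() {
        maybe_apply_delayed_event(false);  // delay elapses normally, event runs
        maybe_apply_delayed_event(true);   // STOP SLAVE during delay, event skipped
        return 0;
    }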
Also, refactored the test case for delayed slave a little: added the
test script include/rpl_assert.inc, which asserts that a condition holds
and prints a message if not. Made rpl_delayed_slave.test use this. The
advantage is that the test file is much easier to read and maintain,
because it is clear what is an assertion and what is not, and also the
expected result can be found in the test file; you don't have to compare
it to the result file.
Manually merged into MariaDB from MySQL commit
fd2b210383358fe7697f201e19ac9779879ba72a
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>