- multi_range_read_info_const now uses the new records_in_range interface
- Added handler::avg_io_cost()
- Don't recalculate avg_io_cost() in get_sweep_read_cost if avg_io_cost is
not 1.0; in this case we trust the avg_io_cost() from the handler.
- Changed test_quick_select to use TIME_FOR_COMPARE instead of
TIME_FOR_COMPARE_IDX to align this with the rest of the code.
- Fixed a bug in test_if_cheaper_ordering where we didn't use
keyread if the index was changed
- Fixed a bug where we didn't use index-only read when using order-by-index
- Added keyread_time() to HEAP.
The default keyread_time() was optimized for blocks and not suitable for
HEAP. The effect was that HEAP preferred table scans over ranges for btree
indexes.
- Fixed get_sweep_read_cost() for HEAP tables
- Ensure that range and ref have same cost for simple ranges
Added a small cost (MULTI_RANGE_READ_SETUP_COST) to ranges to ensure
we favor ref over range for simple queries.
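For illustration (not from the patch; table and index names are made up),
the intended effect is that a simple equality predicate is resolved via 'ref':

    -- assuming a table t1 with an index on key1:
    EXPLAIN SELECT * FROM t1 WHERE key1=5;
    -- expected access type: ref (not range)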
- Fixed that matching_candidates_in_table() uses the same number of records
as the rest of the optimizer
- Added avg_io_cost() to JT_EQ_REF cost. This helps calculate the cost for
HEAP and temporary tables better. A few tests changed because of this.
- heap::read_time() and heap::keyread_time() adjusted to not add +1.
This was to ensure that handler::keyread_time() doesn't give a
higher cost for heap tables than for normal tables. One effect of
this is that heap and derived tables stored in heap will prefer
key access as this is now regarded as cheap.
- Changed cost for index read in sql_select.cc to match
multi_range_read_info_const(). All index cost calculation is now
done through one function.
- 'ref' will now use quick_cost for keys if it exists. This is done
so that for '=' ranges, 'ref' is preferred over 'range'.
- scan_time() now takes avg_io_cost() into account
- get_delayed_table_estimates() uses block_size and avg_io_cost()
- Removed default argument to test_if_order_by_key(); simplifies code
- Only indentation changes in sql_rename.cc
- Ignore some WSREP error messages when there isn't an internet connection
- Force restart of stat_tables_part.test to make result stable
- Fixed compiler warnings in CONNECT
MDEV-19964 S3 replication support
Added new configure options:
s3_slave_ignore_updates
"If the slave has shares same S3 storage as the master"
s3_replicate_alter_as_create_select
"When converting S3 table to local table, log all rows in binary log"
This allows one to configure slaves to have the S3 storage shared with
or independent from the master.
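As a hedged illustration, the new options can be inspected from SQL
(they are meant to be set at server startup in the configuration file):

    SHOW GLOBAL VARIABLES LIKE 's3_slave_ignore_updates';
    SHOW GLOBAL VARIABLES LIKE 's3_replicate_alter_as_create_select';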
Other things:
Added a new session variable '@@sql_if_exists' to force IF EXISTS behavior
on DDL statements.
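A minimal sketch of the variable's intended effect (table name is made up):

    SET @@sql_if_exists=1;
    DROP TABLE no_such_table;  -- succeeds with a note, as if IF EXISTS
                               -- had been written explicitly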
This was to remove a performance regression between 10.3 and 10.4.
In 10.5 we will have a better implementation of records_in_range
that will enable us to get more statistics.
This change was not done in 10.4 because in 10.5 it will be part of
a larger change that is not suitable for the GA 10.4 version.
Other things:
- Changed the default handler block_size to 8192 to fix statistics
for engines that don't set the block size.
- Fixed a bug in spider when using multiple part const ranges
(Patch from Kentoku)
e.g.
- dont -> don't
- occurence -> occurrence
- succesfully -> successfully
- easyly -> easily
Also remove trailing space in selected files.
These changes span:
- server core
- Connect and Innobase storage engine code
- OQgraph, Sphinx and TokuDB storage engines
Related to MDEV-21769.
Lifted the long-standing limitation that an XA transaction is rolled back
at its connection's close even when the XA is prepared.
A prepared XA transaction is now made to survive connection close or server
restart.
The patch consists of
- a binary logging extension to write the prepared XA part of the
transaction, signified with its XID, in a new XA_prepare_log_event.
The conclusion part - with the Commit or Rollback decision - is
logged separately as a Query_log_event.
That is, in the binlog the XA transaction consists of two separate
groups of events.
The whole XA transaction can therefore interleave in the binlog with
other XA or regular transactions, but with no harm to replication
and data consistency.
Gtid_log_event receives two more flags to identify which of the
two XA phases of the transaction it represents. With either flag
set, XID info is also added to the event.
When the binlog is ON on the server, XID::formatID is
constrained to 4 bytes.
- engines are made aware of the server policy to keep prepared user
XA transactions, so they (InnoDB, RocksDB) no longer roll them back
in their disconnect methods.
- the slave applier is refined to cope with two-phase-logged XA
transactions, including parallel modes of execution.
This patch does not address crash-safe logging of the new events which
is being addressed by MDEV-21469.
CORNER CASES: read-only, pure MyISAM, binlog-*, @@skip_log_bin, etc.
are addressed by the following policies.
1. Read-only at reconnect marks the XID to fail any future
completion with ER_XA_RBROLLBACK.
2. A binlog-* filtered XA that changes engine data is regarded as
loggable even when nothing got cached for the binlog. An empty
XA-prepare group is recorded. The consequent Commit-or-Rollback
succeeds in the engine(s) and is recorded into the binlog as well.
3. The same applies to the non-transactional engine XA.
4. @@skip_log_bin=OFF does not record anything at XA-prepare
(obviously), but the completion event is recorded into the binlog
to admit inconsistency with the slave.
The following actions are taken by the patch.
At XA-prepare:
when the binlog cache is empty, don't write anything to the binlog
if read-only; otherwise write an empty XA_prepare group
(assert(binlog-filter case)).
At Disconnect:
when Prepared && RO (=> no binlogging was done),
set Xid_cache_element::error := ER_XA_RBROLLBACK,
*keep* the XID in the cache, and roll back the transaction.
At XA-"complete":
discover the error; if there is one, don't binlog the "complete",
and return the error to the user.
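A hedged end-to-end sketch of the new behavior (table name and xid are
made up):

    XA START 'xid1';
    INSERT INTO t1 VALUES (1);
    XA END 'xid1';
    XA PREPARE 'xid1';
    -- disconnect here; the prepared transaction is no longer rolled back
    -- then, from a new connection (or after a server restart):
    XA RECOVER;                -- still lists 'xid1'
    XA COMMIT 'xid1';          -- or: XA ROLLBACK 'xid1'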
Kudos
-----
Alexey Botchkov initially took on driving this work.
Sergei Golubchik, Sergei Petrunja, Marko Mäkelä provided a number of
good recommendations.
Sergei Voitovich made a magnificent review and improvements to the code.
They all deserve a bunch of thanks for getting this work done!
join_cache_level=6+
The patch fixes two similar bugs in the commit 8eeb689e9f
that added multi_range_read support to partitions. The commit opened
the possibility of joining a partitioned table using BKA+MRR. However,
in some cases it could lead to wrong results or even crashes.
This could happen when
- index condition pushdown was used to join the table, or
- the joined table was an inner table of an outer join and the
'not exists' optimization was applied, or
- the joined table was the inner table of a semi-join and the first
match optimization was applied.
The bugs were in the code of the call-back functions
- partition_multi_range_key_skip_record() and
- partition_multi_range_key_skip_index_tuple().
Each of these functions consists only of an invocation of another
function, yet a wrong parameter was passed in this invocation.
The fix was suggested by Sergey Petrunia and it is apparently in line
with the original design.
The corresponding comprehensive test cases demonstrating the problems
caused by the bugs were constructed by me.
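For reference, a hedged sketch of a query shape that exercises the fixed
code paths (table names are made up; t2p is a partitioned table with an
index on key1):

    SET SESSION join_cache_level=6;
    SET SESSION optimizer_switch='mrr=on,join_cache_bka=on';
    -- outer join where the 'not exists' optimization can apply:
    EXPLAIN SELECT * FROM t1 LEFT JOIN t2p ON t2p.key1=t1.a
    WHERE t2p.key1 IS NULL;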
Now there can be only one log file instead of several that
logically worked as a single file.
Possible names of redo log files: ib_logfile0,
ib_logfile101 (for a just-created one)
innodb_log_files_in_group: the value of this variable is not used
by InnoDB. Possible values are still 1..100, so as not to break upgrades
LOG_FILE_NAME: add constant of value "ib_logfile0"
LOG_FILE_NAME_PREFIX: add constant of value "ib_logfile"
get_log_file_path(): convenience function that returns full
path of a redo log file
SRV_N_LOG_FILES_MAX: removed
srv_n_log_files: we can't remove this for compatibility reasons,
but the server no longer uses this variable
log_sys_t::file::fd: now just one, not std::vector
log_sys_t::log_capacity: removed the word 'group' from the name
find_and_check_log_file(): part of logic from huge srv_start()
moved here
recv_sys_t::files: file descriptors of redo log files.
There can be several of those in case we're upgrading
from older MariaDB version.
recv_sys_t::remove_extra_log_files: whether to remove
ib_logfile{1,2,3...} after a successful upgrade.
recv_sys_t::read(): open if needed and read from one
of several log files
recv_sys_t::files_size(): open if needed and return the file count
redo_file_sizes_are_correct(): check that the redo log files'
sizes are equal, just to log an error for the user.
The corresponding check was moved from srv0start.cc
namespace deprecated: put all deprecated variables here to
prevent their use by us, the developers
Remove usage of the deprecated variable storage_engine. It was deprecated
in 5.5 but never issued a deprecation warning. Make it issue a warning in
10.5.1. It is replaced with default_storage_engine.
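A hedged example of the replacement:

    SET SESSION default_storage_engine=InnoDB;  -- instead of the
                                                -- deprecated @@storage_engine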
Description:
============
Change the default of @@global.slave_parallel_mode from 'CONSERVATIVE'
to 'OPTIMISTIC' in 10.5.
@sql/sys_vars.cc
Changed default parallel_mode to 'OPTIMISTIC'
@sql/rpl_filter.cc
Changed default parallel_mode to 'OPTIMISTIC'
@sql/mysqld.cc
Removed the initialization of the 'opt_slave_parallel_mode' variable
to 'SLAVE_PARALLEL_CONSERVATIVE'.
@mysql-test/suite/rpl/t/rpl_parallel_mdev6589.test
@mysql-test/suite/rpl/t/rpl_mdev6386.test
Added 'mtr' suppression to ignore 'ER_PRIOR_COMMIT_FAILED'. In
'OPTIMISTIC' mode, if a transaction gets killed during "wait_for_prior_commit"
it results in the above error (1964). Hence a suppression needs to be added
for this error.
@mysql-test/suite/rpl/t/rpl_parallel_conflicts.test
The test has a 'slave.opt' which explicitly sets slave_parallel_mode to
'conservative'. When the test ends, this mode conflicts with the new default
mode, hence the check-testcase step reports an error. The 'slave.opt' is
removed and the options are set and reset within the test.
@mysql-test/suite/multi_source/info_logs.result
@mysql-test/suite/multi_source/reset_slave.result
@mysql-test/suite/multi_source/simple.result
Result content mismatch in "show slave status" output. This is expected
with the new slave_parallel_mode='OPTIMISTIC'.
@mysql-test/include/check-testcase.test
Updated default 'slave_parallel_mode' to 'optimistic'.
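As a hedged illustration, the previous behaviour can be restored per slave
(the mode can only be changed while the slave is stopped):

    STOP SLAVE;
    SET GLOBAL slave_parallel_mode='conservative';
    START SLAVE;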
Refactored rpl_parallel.test into the following test cases.
Test case 1: @mysql-test/suite/rpl/t/rpl_parallel_domain.test
Test case 2: @mysql-test/suite/rpl/t/rpl_parallel_domain_slave_single_grp.test
Test case 3: @mysql-test/suite/rpl/t/rpl_parallel_single_grpcmt.test
Test case 4: @mysql-test/suite/rpl/t/rpl_parallel_stop_slave.test
Test case 5: @mysql-test/suite/rpl/t/rpl_parallel_slave_bgc_kill.test
Test case 6: @mysql-test/suite/rpl/t/rpl_parallel_gco_wait_kill.test
Test case 7: @mysql-test/suite/rpl/t/rpl_parallel_free_deferred_event.test
Test case 8: @mysql-test/suite/rpl/t/rpl_parallel_missed_error_handling.test
Test case 9: @mysql-test/suite/rpl/t/rpl_parallel_innodb_lock_conflict.test
Test case 10: @mysql-test/suite/rpl/t/rpl_parallel_gtid_slave_pos_update_fail.test
Test case 11: @mysql-test/suite/rpl/t/rpl_parallel_wrong_exec_master_pos.test
Test case 12: @mysql-test/suite/rpl/t/rpl_parallel_partial_binlog_trans.test
Test case 13: @mysql-test/suite/rpl/t/rpl_parallel_ignore_error_on_rotate.test
Test case 14: @mysql-test/suite/rpl/t/rpl_parallel_wrong_binlog_order.test
Test case 15: @mysql-test/suite/rpl/t/rpl_parallel_incorrect_relay_pos.test
Test case 16: @mysql-test/suite/rpl/t/rpl_parallel_retry_deadlock.test
Test case 17: @mysql-test/suite/rpl/t/rpl_parallel_deadlock_corrupt_binlog.test
Test case 18: @mysql-test/suite/rpl/t/rpl_parallel_mode.test
Test case 19: @mysql-test/suite/rpl/t/rpl_parallel_analyze_table_hang.test
Test case 20: @mysql-test/suite/rpl/t/rpl_parallel_record_gtid_wakeup.test
Test case 21: @mysql-test/suite/rpl/t/rpl_parallel_stop_on_con_kill.test
Test case 22: @mysql-test/suite/rpl/t/rpl_parallel_rollback_assert.test
Apparently, regular expression operations that remove entire lines
of output do not work with list_files, and hence the adjustments in
commit 1c282d4bc4 were ineffective.
For cat_file (preceded by list_files_write_file) the replace_regex
does work.
For some reason, for suite/parts/inc/partition_crash_exchange.inc
some file names will be lost when using list_files_write_file
instead of list_files.
We use a precise pattern match instead: dict_mem_create_temporary_tablename()
generates #sql-ib names followed by decimal digits only.
Problem:
=======
Test "binlog.binlog_parallel_replication_marks_row" fails sporadically due to
result length mismatch.
Analysis:
=========
The test generates a binary log, looks for certain words within the
binary log file, and prints them; for example, words like "GTID",
"BEGIN", "COMMIT".
Binary log output contains base64-encoded characters. Occasionally the
encoded characters match the above words and result in a test failure,
e.g.:
+XwoFWxMBAAAALgAAAGEDAAAAAB8AAAAAAAEABHRlc3QAAnQxAAIDAwACFGTIDQ==
+AAAAAAAAAAAEEwQADQgICAoKCgGTIDw9
Fix:
===
Improve the regular expression to match exact words.
Fix partitioning and DS-MRR to work together
- In ha_partition::index_end(): take into account that ha_innobase (and
other engines using DS-MRR) will have inited=RND when initialized for
DS-MRR scan.
- In ha_partition::multi_range_read_next(): if the MRR scan is using
HA_MRR_NO_ASSOCIATION mode, it is not guaranteed that the partition's
handler will store anything into *range_info.
- In DsMrr_impl::choose_mrr_impl(): ha_partition will inquire partitions
about how much memory their MRR implementation needs by passing
*buffer_size=0. The DS-MRR code didn't know about this (actually it
used a uint for the buffer size calculation and would have had an
underflow).
Returning *buffer_size=0 made ha_partition assume that partitions do
not need MRR memory and pass the same buffer to each of them.
Now, this is fixed. If DS-MRR gets *buffer_size=0, it will return
the amount of buffer space needed, but not more than about
@@mrr_buffer_size.
* Fix ha_{innobase,maria,myisam}::clone. If ha_partition uses MRR on its
partitions, and the partitions use DS-MRR, the code will call handler->clone
with the TABLE (*NOT partition*) name as an argument.
DS-MRR has no way of knowing the partition name, so the solution was
to have the ::clone() functions of the affected storage engines ignore
the name argument and get it elsewhere.
Count the "gap" time between table accesses and display it as
r_other_time_ms in the "table" element.
* The advantage of this approach is that it doesn't add any new
my_timer_cycles() calls.
* The disadvantage is that the definition of what is done during
"other time" is not that clear: it includes checking the WHERE
(for this table), constructing the index lookup tuple (for the next
table), writing to the GROUP BY temporary table (as we don't account
for that time separately [yet]), etc.
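A minimal sketch of where the new counter shows up (query and tables are
made up):

    ANALYZE FORMAT=JSON SELECT * FROM t1, t2 WHERE t1.a=t2.a;
    -- each "table" element of the JSON output now carries r_other_time_ms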
Historically, InnoDB split the redo log into at least 2 files.
MDEV-12061 allowed the minimum to be innodb_log_files_in_group=1,
but it kept the default at innodb_log_files_in_group=2.
Because performance seems to be slightly better with only one log file,
and because implementing an append-only variant of the log would require
a single file, let us define the default to be 1, and have
innodb_log_file_size=96M, to retain the same default total size.
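A hedged way to verify the new defaults:

    SELECT @@innodb_log_files_in_group, @@innodb_log_file_size;
    -- expected: 1 and 100663296 (96M)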
The test main.index_merge_innodb takes a very long time,
especially on later versions (10.2 and 10.3).
Some of this could be attributed to the use of INSERT...SELECT,
which spends considerable time creating explicit record locks in InnoDB
for the locking read in the SELECT part.
In 10.3 and later, some slowness can be attributed to MDEV-12288,
which makes the InnoDB purge thread spend time to reset transaction
identifiers in the inserted records. If we prevent purge from
running before all tables are dropped, the test seems to be
10% faster on an unoptimized debug build on 10.5. (A proper fix
would be to implement MDEV-515 and stop writing row-level undo log
records for inserts into an empty table or partition.)
At the same time, it should not hurt to make main.index_merge_myisam
use the sequence engine. Not only could it be a little faster,
but the test would be slightly more readable.
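For reference, a hedged sketch of the sequence-engine style of row
generation (table definition is made up):

    CREATE TABLE t1 (a INT, b INT) ENGINE=MyISAM;
    INSERT INTO t1 SELECT seq, seq FROM seq_1_to_1000;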
mark big_tables deprecated; the server can put temp tables on disk
as needed, avoiding "table full" errors.
in case someone really needs to force a tmp table to be created
on disk from the start, and for testing, allow tmp_memory_table_size
to be set to 0.
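a minimal sketch of that escape hatch:

    SET SESSION tmp_memory_table_size=0;  -- internal tmp tables are
                                          -- created on disk from the start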
fix tests to use that instead (and add a test that it actually
works).
make sure the in-memory TREE size limit is never 0 (it's [ab]using
tmp_memory_table_size at the moment).
remove a few sys_vars.*_basic tests.
MDEV-20589: Server still crashes in Field::set_warning_truncated_wrong_value
- Use dbug_tmp_use_all_columns() to mark that all fields can be used
- Remove field->is_stat_field (not needed)
- Remove extra arguments to Field::clone() that should not be there
- Safety fix for Field::set_warning_truncated_wrong_value() to not crash
if table is NULL in production builds (we have had crashes several times
here, so better to be safe than sorry).
- Treat wrong character string warnings identically to other field
conversion warnings. This removes some warnings we previously got from
internal conversion errors. There is no good reason why a user would
get an error in the case of key_field='wrong-utf8-string' but not for
field='wrong-utf8-string'. The old code could also easily give
thousands of nonsense warnings for one single statement.
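An illustrative sketch (hypothetical table; exact warning texts may
differ) of the two comparisons that should now warn the same way:

    CREATE TABLE t1 (key_field VARCHAR(10) CHARACTER SET utf8,
                     field VARCHAR(10) CHARACTER SET utf8,
                     KEY(key_field));
    SELECT * FROM t1 WHERE key_field=x'ff';  -- invalid utf8 byte
    SELECT * FROM t1 WHERE field=x'ff';      -- same style of warning now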
Cherry-pick the commits from MySQL, plus some changes.
WL#4618 RBR: extended table metadata in the binary log
This patch extends Table Map Event. It appends some new fields for
more metadata. The new metadata includes:
- Signedness of Numeric Columns
- Character Set of Character Columns and Binary Columns
- Column Name
- String Value of SET Columns
- String Value of ENUM Columns
- Primary Key
- Character Set of SET Columns and ENUM Columns
- Geometry Type
Some of them are optional, the patch introduces a GLOBAL system
variable to control it. It is binlog_row_metadata.
- Scope: GLOBAL
- Dynamic: Yes
- Type: ENUM
- Values: {NO_LOG, MINIMAL, FULL}
- Default: NO_LOG
Only Signedness, character set and geometry type are logged if it is MINIMAL.
Otherwise all of them are logged.
Also add binlog_type_info() to Field, so that we can extract the
relevant binlog info from a field.
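A hedged usage example of the new variable:

    SET GLOBAL binlog_row_metadata='FULL';     -- log all extended metadata
    SET GLOBAL binlog_row_metadata='MINIMAL';  -- signedness, character set
                                               -- and geometry type only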