my_real_read() detects if the connection was killed and sets error to
ER_CONNECTION_KILLED. However, net_real_write() overrides the error
with ER_NET_READ_INTERRUPTED or ER_NET_READ_ERROR when it tries to
send the 'Connection was killed' message to the user on a closed
connection.
Fixed by not overwriting the original error code if the connection was
shut down.
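For example, a minimal sketch of the now-expected behavior (session id is
hypothetical):
  -- session A:
  SELECT SLEEP(600);
  -- session B:
  KILL 42;  -- 42 = session A's id
  -- session A now receives ER_CONNECTION_KILLED ('Connection was killed')
  -- rather than a generic net read error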
Reviewed-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Something causes the aria_log_control file to be larger than the expected
52 bytes. The control file has the correct information, but somehow it
is filled up with 0x00 bytes up to 512 bytes.
This could have happened in case of a file system crash that enlarged
the file to the sector boundary.
Fixed so that Aria will ignore bytes outside of its expected size.
Other things:
- Fixed wrong DBUG_ASSERT() in my_malloc_size_cb_func() that could cause
crashes in debug binaries during Aria recovery.
The effect is that 'show processlist' will show the Slave SQL thread
until the thread ends. This may help find cases where the Slave SQL
thread could hang for some time during the cleanup part.
The Slave SQL thread will have the state 'Slave SQL thread ending' during
this stage.
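For example, the cleanup phase can now be observed via the processlist
(a sketch using the state name set by this commit):
  SELECT ID, STATE FROM information_schema.PROCESSLIST
  WHERE STATE = 'Slave SQL thread ending';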
Reviewed-by: Kristian Nielsen <knielsen@knielsen-hq.org>
When running mariabackup with --prepare --export options, a deprecation
warning appears because the program name is set to "mysqld" instead of
"mariadbd".
The fix ensures that when running in mysqld mode, we properly set the
program name to "mariadbd" to avoid the deprecation warning while
maintaining the original functionality.
In commit 6e6a1b316ca8df5116613fbe4ca2dc37b3c73bd1 (MDEV-35000)
a race condition was exposed.
ha_innobase::check_if_incompatible_data(): If the statistics have
already been initialized for the table, skip the invocation of
innobase_copy_frm_flags_from_create_info() in order to avoid
unexpectedly ruining things for other threads that are concurrently
accessing the table.
dict_stats_save(): Add debug instrumentation that is necessary for
reproducing the interlocking of the failure scenario.
The problem with MariaDB waiting was fixed earlier.
However, in case of disk full, the server still gives the old error
that includes "waiting for someone to free some space", even if
there is no wait.
This commit changes the error message for the non-waiting case to:
Disk got full writing 'db.table' (Errcode: 28 "No space left on device")
Disk got full writing 'test.t1' (Errcode: 28 "No space left on device")
This commit introduces an additional command line option to the mariadb
client.
--script-dir=<directory> will cause the `source` command to look for
files initially in CWD, then in <script-dir> if not found in CWD.
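A hypothetical usage sketch (directory and file names are examples only):
  -- start the client as: mariadb --script-dir=/opt/scripts
  -- inside the client, 'source' now searches CWD first, then the script dir:
  source setup.sql;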
Update pre-install script to fix package detection and standardize
MariaDB/MySQL terminology in messages for improved upgrade reliability
and reduced user confusion.
All new code of the whole pull request, including one or several files that are
either new files or modified ones, are contributed under the BSD-new license. I
am contributing on behalf of my employer Amazon Web Services, Inc.
The test is killing and restarting the server very many times.
This may lead to timeouts on architectures or builds that lack
a SIMD-based encryption implementation, such as IBM System Z (s390x)
or cmake -DWITH_MSAN=ON builds.
Problem: current coding standards explicitly discourage the
use of int, char, short etc, and recommend fixed-size types instead.
This is overly pedantic. The real problem, and the only portability
problem we have with types, is that the inappropriate use of `long` is
too easy to overlook.
Thus, un-deprecate the types that are portable for all practical purposes,
i.e. int, short, long long. Keep a warning that char might be unsigned,
though; all compilers have appropriate flags to control this.
Yet, use strong wording to deprecate long and ulong, as those are the
types that create real portability problems.
Added an explicit checkpoint wait instead of the implicit assumption
that a statement sent via --send will already have been executed when
the lock-contending statement is started in another session.
When MSAN adds the -fsanitize=memory flag, this significantly affects
compile and link tests. Whether a test compiles, runs, or looks for a
symbol in a library, these cmake tests require the same flags to be set.
Ideally the minimum invocation of cmake to create a MSAN build as
investigated in MDBF-793 should be:
-DWITH_MSAN=ON \
-DCMAKE_{EXE,MODULE}_LINKER_FLAGS="-L${MSAN_LIBDIR} -Wl,-rpath=${MSAN_LIBDIR}"
On the assumption that the compiler supports msan and the instrumented
libraries are in MSAN_LIBDIR (maybe later can be made a cmake option).
Without the cmake policy below, the checking of everything from libc++ to
libfmt will not have the supplied linker flags or the compile options
that WITH_MSAN=ON invokes. Many try_compile-based checks, and the CMake
functions that wrap them, failed to recognise headers due to missing msan
symbols when linking. Also, without the -L path, they were applying the
link test to the default-path libraries rather than the MSAN-instrumented
ones.
The CMake policy enabled is CMP0056 (added in CMake 3.2), which applies
CMAKE_EXE_LINKER_FLAGS to try_compile.
With this change, MY_CHECK_AND_SET_COMPILER_FLAG removes explicit build
types, resulting in just CMAKE_{C,CXX}_FLAGS being set rather than
CMAKE_{C,CXX}_FLAGS_{DEBUG,RELWITHDEBINFO}. These are needed for the
default CMP0066 policy to be correctly applied, so that the msan flag
-fsanitize=memory is applied to all compile checks. The same holds for
MY_CHECK_AND_SET_LINKER_FLAG and CMAKE_{EXE,MODULE,SHARED}_LINKER_FLAGS,
for those checks that involve full linking, such as
CHECK_CXX_SOURCE_RUNS.
stats_deinit(): Replaces dict_stats_deinit().
Deinitialize the statistics for persistent tables,
so that they will be reloaded or recalculated
on a subsequent ha_innobase::open().
ha_innobase::rename_table(): Invoke stats_deinit() so that the
subsequent ha_innobase::open() will reload the InnoDB persistent
statistics. That is, it will remain possible to have the InnoDB
persistent statistics reloaded by executing the following:
RENAME TABLE t TO tmp, tmp TO t;
dict_table_close(table): Replaced with table->release().
There will no longer be any logic that would attempt to ensure
that the InnoDB persistent statistics will be reloaded after
FLUSH TABLES has been executed. This also fixes the problem that
dict_table_t::stat_modified_counter would be frequently reset to 0,
whenever ha_innobase::open() is invoked after the table reference
count had dropped to 0.
dict_table_close(table, thd, mdl): Remove the parameter "dict_locked".
Do not try to invalidate the statistics.
ha_innobase::statistics_init(): Replaces dict_stats_init(table).
Reviewed by: Thirunarayanan Balathandayuthapani
innodb_stats_transient_sample_pages, innodb_stats_persistent_sample_pages:
Change the type to UNSIGNED, because the number of pages in a table
is limited to 32 bits by the InnoDB file format.
btr_get_size_and_reserved(), fseg_get_n_frag_pages(),
fseg_n_reserved_pages_low(), fseg_n_reserved_pages(): Return uint32_t.
The file format limits page numbers to 32 bits.
dict_table_t::stat: An Atomic_relaxed<uint32_t> that combines a
number of metadata fields.
innodb_copy_stat_flags(): Copy the statistics flags from
TABLE_SHARE or HA_CREATE_INFO.
dict_table_t::stats_initialized(), dict_table_t::stats_is_persistent():
Accessors to dict_table_t::stat.
Reviewed by: Thirunarayanan Balathandayuthapani
If a query has many OR-ed constructs which can use multiple indexes
(key1=1 AND key2=10) OR
(key1=2 AND key2=20) OR
(key1=3 AND key2=30) OR
...
The range optimizer would construct and then discard a lot of potential
index_merge plans. This process
1. is CPU-intensive
2. can hit the @@optimizer_max_sel_args limitation after which all
potential range or index_merge plans are discarded.
The fix is to apply a heuristic: if there is an OR clause with more than
MAX_OR_ELEMENTS_FOR_INDEX_MERGE=100 branches (hard-coded constant),
disallow construction of index_merge plans for the OR branches.
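As a sketch, the query shape that triggers the heuristic (values made up):
  SELECT * FROM t1
  WHERE (key1=1 AND key2=10)
     OR (key1=2 AND key2=20)
     -- ... more than 100 such OR branches ...
     OR (key1=101 AND key2=1010);
  -- the limit that such plans could previously exhaust:
  SELECT @@optimizer_max_sel_args;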
In the `check_join_cache_usage()` function there is a branching issue
where an accidental fall-through to BKA/BKAH buffers may occur, even
when the join_cache_level setting does not permit their use.
This patch corrects the condition to ensure that BKA/BKAH join caching
is only enabled when explicitly allowed by join_cache_level.
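A sketch of the user-visible knob (assuming the documented mapping where
BKA/BKAH variants require join_cache_level 5 or above):
  SET SESSION join_cache_level = 4;
  -- with the fix, BKA/BKAH buffers can no longer be picked accidentally
  -- at levels that permit only BNL/BNLH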
Reviewer: Sergei Petrunia <sergey@mariadb.com>
* `get_master_version_and_clock()` de-duplicate label using fall-through
* `io_slave_killed()` & `check_io_slave_killed()`:
* reüse the result from the level lower
* add distinguishing docs
* `try_to_reconnect()`: extract `'` from `if`-`else`
* `handle_slave_io()`: Both `while`s have the same condition;
looks like the outer `while` can simply be an `if`.
* `connect_to_master()`:
* assume `mysql_errno()` is not 0 on connection error
* utilize 0’s falsiness in the loop
* extend docs
* `sql/sql_show.cc`: refactor SHOW ALL REPLICAS filter’s condition
* `sql/mysqld.cc`: init `master-retry-count` with `master_retry_count`
Reviewed-by: Kristian Nielsen <knielsen@knielsen-hq.org>
When the IO thread (re)connects to a primary,
no updates are available besides unique errors that cause the failure.
These new `Master_info` numbers supplement SHOW SLAVE STATUS’s (most-
recent) ‘Connecting’ state with statistics on (re)connect attempts:
* `Connects_Tried`: how many retries have been attempted so far
This was previously a local variable that only counted re-attempts;
it’s now meaningful even after the “Connecting” state concludes.
* `Master_Retry_Count` (from MDEV-25674): out of how many configured
Side-note: Some of the tests updated by this commit dump the entire
SHOW SLAVE STATUS, which might include non-deterministic entries.
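For instance (a sketch; only the new columns are noted):
  SHOW SLAVE STATUS;
  -- the output now also carries Connects_Tried and Master_Retry_Count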
Reviewed-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
This new CHANGE MASTER TO field specifies the `--master-retry-count`
(global option: the number of Primary connection attempts)
for each multi-source replica; i.e., per-channel `performance_schema.`
`replication_connection_configuration.CONNECTION_RETRY_COUNT`.
`--master-retry-count` remains the default for new `CHANGE MASTER TO`s.
This new keyword and `master-info` entry
match those of pre-‘REPLICATION SOURCE’ MySQL.
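For example (channel name and value are hypothetical):
  CHANGE MASTER 'channel1' TO master_retry_count = 10;
  SELECT CHANNEL_NAME, CONNECTION_RETRY_COUNT
  FROM performance_schema.replication_connection_configuration;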
`try_to_reconnect()` wraps `safe_reconnect()` with logging, but the
latter already loops reconnection attempts up to `master_retry_count`
times with `mi->connect_retry`-msec sleeps in between.
This means `try_to_reconnect()` has been counting disconnects of the
replication session (since it doesn’t loop) while `safe_reconnect()`
was counting actual tries (which may be multiple per disconnect).
In practice, this outer counter’s only benefit was to override, with 1,
the edge case `--master-retry-count=0` that the inner loop doesn’t cover.
Writing the resized redo log will write uninitialized data. There is
a MEM_MAKE_DEFINED construct in the code to bless this; however, it was
correct for the initial length, but not for the changed length.
The MEM_MAKE_DEFINED is moved earlier in the code, to where the length
contains the correct value.
Added caching of database directories that did not have a db.opt file.
This was common for older MariaDB installations or if a user created
a database with 'mkdir'.
Other things:
- Give a note "no db.opt file" if one uses SHOW CREATE DATABASE on
a database without a db.opt file.
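For example (database name hypothetical):
  SHOW CREATE DATABASE legacy_db;
  -- if legacy_db's directory has no db.opt file, the statement now also
  -- generates the note 'no db.opt file'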
In MDEV-26345 77ed235d50bd9b1480f26d18ea0b70ca7480af23 a bitmap is
introduced to skip spider GBH SELECTed constant fields when storing
the results from the data node. Unfortunately this bitmap was not used
in all applicable calls. This patch fixes it. The test covers most of
the calls, with the exception of
spider_db_store_result_for_reuse_cursor(), which is not covered in
existing tests, because it is only called when limit_mode()==1, which
is not the case for any spider backend wrapper.
This is for preparing MariaDB for catalogs.
mtr tests should in the future use MARIADB_TOPDIR or MARIADB_DATADIR
instead of '@@datadir'. This is especially true for replication tests
that access binlog files.
MARIADB_TOPDIR is the top directory where binary log and engine log files
reside.
MARIADB_DATADIR is the directory where the database/schema directories
reside.
MARIADB_TOPDIR does not depend on catalogs.
When catalogs are used, MARIADB_DATADIR will point to the directory of the
current catalog. If catalogs are not used, then
MARIADB_DATADIR=MARIADB_TOPDIR.
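A sketch of the intended mtr usage (a .test excerpt; the variable names are
per this commit, the rest is hypothetical):
  --let $top= $MARIADB_TOPDIR
  --let $data= $MARIADB_DATADIR
  --echo # binlogs live under $top, schema directories under $data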
recv_sys_t::parse(): Allocate decrypt_buf also for storing==BACKUP
but limit its size to correspond to 1 byte of record payload.
Ensure that last_offset=0 for storing==BACKUP.
When parsing INIT_PAGE or FREE_PAGE, avoid an unnecessary
l.copy_if_needed() for storing!=YES.
When parsing EXTENDED in storing==BACKUP, properly invoke
l.copy_if_needed() on a large enough decrypt_buf.
When parsing WRITE, MEMMOVE, MEMSET in storing==BACKUP,
skip further validation (and potential overflow of the tiny decrypt_buf),
like we used to do before commit 46aaf328ce424aededdb61e59a48db05630563d5
(MDEV-35830).
Reviewed by: Debarun Banerjee
A spider_conn may outlive its associated ha_spider (in the field
queued_ping_spider) used for connecting to and pinging the data
node (a call to spider_db_ping(), guarded by the boolean field
queued_ping). In a call to ha_spider::close() (which is often preceded
with the deletion of the ha_spider itself), many cleanups happen,
including freeing the associated spider_share, which is used by the
spider_conn in spider_db_ping. Therefore it is necessary to reset both
the queued_ping_spider and queued_ping fields, so that any further
spider interaction with the data node will not trigger the call using
the ha_spider including its freed spider_share.
Also, out of caution, added an assert and an internal error in case a
connection has not been established (the db_conn field of type
MYSQL * is NULL) and the attempt to connect is skipped because both
queued_connect and queued_ping are false. Note that this unlikely (if
not impossible) scenario would not be a regression caused by this
change, as it strictly falls under the scenario of this bug.
This fixes a server startup segv introduced in MDEV-34716
d2eba35653b87a8fbd3bffe3ac4b4eb0ab7c0ca9 when upgrading from server
versions lower than 11.7.
Also construct the JSON Options column when upgrading from a version
without the column.
Fix a regression introduced in MDEV-35840
78157c4765f2c086fabe183d51d7734ecffdbdd8.
Also tested, by compiling with -O3, that the -Warray-bounds warning fixed
in MDEV-35840 does not resurface.
my_getopt compares option names case-sensitively, causing
"Unknown option" errors when users type mixed-case options such as
wsrep_slave_UK_checks or wsrep_slave_FK_checks in lowercase.
Made the comparison in the getopt_compare_strings() case-insensitive.
Skip printing the error if it is expected — that is, when mariadb-import
is aborting.
In a multithreading scenario, another thread may encounter an error
because the corresponding connection was killed. Do not print this error.
Some jobs in the test stage of the pipeline are failing due to
mysql_install_db not being found in the expected location for newer
versions (11.4) of mariadb when executing test_upgrade.sh.
Fix missing binary failures in test_upgrade.sh:
- Modify commands that look for variables and binaries to account for
both mysql and mariadb prefixes of the binaries
- Add more directories to the list of locations where the binaries are
searched for
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
Make sure the table mysql.gtid_slave_pos is altered to InnoDB before
starting parallel replication. The parallel replication of the suppression
insertion in the test case was trying to update the GTID position in
parallel with the ALTER TABLE, which could occasionally deadlock on the MDL
lock.
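A sketch of the ordering the test now ensures:
  ALTER TABLE mysql.gtid_slave_pos ENGINE=InnoDB;
  -- only after this, start the parallel replication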
Reviewed-by: Monty <monty@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
The problem was that the initial GTID was set in wsrep_before_prepare
out of order. In practice, the GTID was set to the same value as the
GTID of the previously executed transaction. In recovery, a valid GTID
was found in the prepared transaction, and this transaction was
committed, leading to the same GTID being executed twice.
This is fixed by setting an invalid GTID in wsrep_before_prepare;
later, in wsrep_before_commit, the actual correct GTID is set, and this
is done while we are in the commit monitor, i.e. the assignment is done
in replication order.
In recovery, if a prepared transaction is found, we check its GTID:
if it is invalid, the transaction will be rolled back, and if it is
valid, it will be committed.
Initialize gtid seqno from recovered seqno when
bootstrapping a new cluster.
Added two test cases, for both the mariabackup and rsync SST methods,
to show that GTIDs remain consistent on the cluster and that
all expected rows are in the table.
Added tests for wsrep GTID recovery with binlog on and off.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>