According to the ISO 8601 standard, 'T' should be followed by the time
of day. If a datetime string ends with a bare 'T', raise an error in
strict mode and a warning in other modes.
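A sketch of the intended behavior, assuming a table t1 with a
DATETIME column (the exact error and warning texts may differ):

SET sql_mode='';
INSERT INTO t1 VALUES ('2001-01-01T');  -- accepted, with a warning
SET sql_mode='STRICT_ALL_TABLES';
INSERT INTO t1 VALUES ('2001-01-01T');  -- rejected with an error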
All new code of the whole pull request, including one or several files
that are either new files or modified ones, is contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
Item_func_sp::execute() was called two times per row in this scenario:
SELECT ROW(f1(),1) = ROW(1,1), @counter FROM seq_1_to_5;
- the first time from Item_func_sp::bring_value()
- the second time from Item_func_sp::val_int()
Fix:
Change Item_func_sp::bring_value() to call execute() only
when the result type is ROW_RESULT.
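A hypothetical way to observe the fix, assuming f1() increments the
user variable @counter and the sequence engine provides seq_1_to_5:

DELIMITER //
CREATE FUNCTION f1() RETURNS INT
BEGIN
  SET @counter = @counter + 1;
  RETURN 1;
END//
DELIMITER ;
SET @counter = 0;
SELECT ROW(f1(),1) = ROW(1,1), @counter FROM seq_1_to_5;
-- before the fix @counter ended at 10 (two executions per row);
-- with the fix it ends at 5 (one execution per row)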
The fix for MDEV-34413 added support for Index Condition Pushdown with reverse
ordered scans. This makes Rowid filtering work with reverse-ordered scans, too,
so enable it. For example, InnoDB can now check the pushed index condition and
then check the rowid filter on success, in the ORDER BY ... DESC case.
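A hypothetical query shape that can now combine both features
(table, data and plan details are illustrative only):

CREATE TABLE t1 (pk INT PRIMARY KEY, a INT, b INT,
                 KEY(a), KEY(b)) ENGINE=InnoDB;
EXPLAIN SELECT * FROM t1
WHERE a BETWEEN 10 AND 20 AND b BETWEEN 30 AND 40
ORDER BY a DESC;
-- the Extra column may now show "Using index condition" together
-- with "Using rowid filter" for this reverse-ordered scan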
Allows index condition pushdown for reverse-ordered scans, a feature
previously disabled due to poor performance. This patch adds a new
API to the handler class called set_end_range which allows callers to
tell the handler what the end of the index range will be when scanning.
Combined with a pushed index condition, the handler can scan the index
efficiently and not read beyond the end of the given range. When
checking if the pushed index condition matches, the handler will also
check if scanning has reached the end of the provided range and stop if
so.
If we instead only enabled ICP for reverse ordered scans without
also calling this new API, then the handler would perform unnecessary
index condition checks. In fact this would continue until the end of
the index is reached.
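For example (hypothetical table and query; the plan details are
illustrative):

CREATE TABLE t2 (pk INT PRIMARY KEY, a INT, b INT,
                 KEY(a,b)) ENGINE=InnoDB;
EXPLAIN SELECT * FROM t2
WHERE a BETWEEN 10 AND 20 AND b % 2 = 0
ORDER BY a DESC, b DESC;
-- Extra may now show "Using index condition" for this descending
-- scan; thanks to set_end_range, the engine stops evaluating the
-- pushed condition once the scan passes below a=10 instead of
-- continuing to the start of the index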
These changes are agnostic of storage engine. That is, any storage
engine that supports index condition pushdown will inherit this new
behavior as it is implemented in the SQL and storage engine
API layers.
The partitioned tables storage meta-engine (ha_partition) adds an
override of set_end_range which recursively calls set_end_range on its
child storage engine (handler) implementations.
This commit updates the test made in an earlier commit to show that
ICP matches happen for the reverse ordered case.
This patch is based on changes written by Olav Sandstaa in
MySQL commit da1d92fd46071cd86de61058b6ea39fd9affcd87
Adds tests which show that ICP was not enabled for reverse-ordered
scans prior to this MDEV. A later commit for this same MDEV re-records
these tests, showing that ICP for reverse-ordered scans is enabled and
working.
This commit introduces an additional command line option to the mariadb
client.
--script-dir=<directory> will cause the `source` command to look for
files initially in CWD, then in <script-dir> if not found in CWD.
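Hypothetical usage (paths and file name are illustrative):

$ mariadb --script-dir=/usr/local/share/sql-scripts
MariaDB [(none)]> source setup.sql
-- the client looks for ./setup.sql first, then for
-- /usr/local/share/sql-scripts/setup.sql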
The test kills and restarts the server very many times.
This may lead to timeouts on architectures or builds that lack
a SIMD-based encryption implementation, such as IBM System Z (s390x)
or cmake -DWITH_MSAN=ON builds.
Problem: the current coding standards explicitly discourage the
use of int, char, short, etc., and recommend fixed-width types instead.
This is overly pedantic. The real problem, and the only portability
problem we have with types, is that inappropriate use of `long` is
too easy to overlook.
Thus, un-deprecate the types that are portable for all practical
purposes, i.e. int, short, and long long. Warn that char might be
unsigned; all compilers have appropriate flags for that.
Yet use strong wording to deprecate long and ulong, the types
that create real portability problems.
stats_deinit(): Replaces dict_stats_deinit().
Deinitialize the statistics for persistent tables,
so that they will be reloaded or recalculated
on a subsequent ha_innobase::open().
ha_innobase::rename_table(): Invoke stats_deinit() so that the
subsequent ha_innobase::open() will reload the InnoDB persistent
statistics. That is, it will remain possible to have the InnoDB
persistent statistics reloaded by executing the following:
RENAME TABLE t TO tmp, tmp TO t;
dict_table_close(table): Replaced with table->release().
There will no longer be any logic that would attempt to ensure
that the InnoDB persistent statistics will be reloaded after
FLUSH TABLES has been executed. This also fixes the problem that
dict_table_t::stat_modified_counter would be frequently reset to 0,
whenever ha_innobase::open() is invoked after the table reference
count had dropped to 0.
dict_table_close(table, thd, mdl): Remove the parameter "dict_locked".
Do not try to invalidate the statistics.
ha_innobase::statistics_init(): Replaces dict_stats_init(table).
Reviewed by: Thirunarayanan Balathandayuthapani
innodb_stats_transient_sample_pages, innodb_stats_persistent_sample_pages:
Change the type to UNSIGNED, because the number of pages in a table
is limited to 32 bits by the InnoDB file format.
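A sketch of the user-visible effect, under the assumption that
out-of-range values are adjusted with a warning as for other system
variables:

SET GLOBAL innodb_stats_persistent_sample_pages = 1 << 40;
SHOW WARNINGS;  -- expect the value to be clamped to the 32-bit maximum
SELECT @@innodb_stats_persistent_sample_pages;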
btr_get_size_and_reserved(), fseg_get_n_frag_pages(),
fseg_n_reserved_pages_low(), fseg_n_reserved_pages(): Return uint32_t.
The file format limits page numbers to 32 bits.
dict_table_t::stat: An Atomic_relaxed<uint32_t> that combines a
number of metadata fields.
innodb_copy_stat_flags(): Copy the statistics flags from
TABLE_SHARE or HA_CREATE_INFO.
dict_table_t::stats_initialized(), dict_table_t::stats_is_persistent():
Accessors to dict_table_t::stat.
Reviewed by: Thirunarayanan Balathandayuthapani
When a query has many OR-ed constructs which can use multiple indexes,
e.g.
(key1=1 AND key2=10) OR
(key1=2 AND key2=20) OR
(key1=3 AND key2=30) OR
...
the range optimizer constructs and then discards many potential
index_merge plans. This process
1. is CPU-intensive
2. can hit the @@optimizer_max_sel_args limitation after which all
potential range or index_merge plans are discarded.
The fix is to apply a heuristic: if there is an OR clause with more than
MAX_OR_ELEMENTS_FOR_INDEX_MERGE=100 branches (hard-coded constant),
disallow construction of index_merge plans for the OR branches.
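A hypothetical way to observe the heuristic (table, data and plan
details are illustrative):

CREATE TABLE t1 (key1 INT, key2 INT, KEY(key1), KEY(key2));
-- with more than 100 OR branches of the form
--   (key1=... AND key2=...)
-- EXPLAIN is expected to show a plain range scan (or a full scan)
-- instead of an index_merge plan, and optimization completes faster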
In the `check_join_cache_usage()` function there is a branching issue
where an accidental fall-through to BKA/BKAH buffers may occur, even
when the join_cache_level setting does not permit their use.
This patch corrects the condition to ensure that BKA/BKAH join caching
is only enabled when explicitly allowed by join_cache_level.
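A sketch of the intended behavior, assuming the usual mapping where
BKA/BKAH correspond to the higher join_cache_level values:

SET join_cache_level = 4;  -- BNL/BNLH only
EXPLAIN SELECT * FROM t1 JOIN t2 ON t1.a = t2.a;
-- Extra must not show "Using join buffer (... BKA ...)" at this level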
Reviewer: Sergei Petrunia <sergey@mariadb.com>
* `get_master_version_and_clock()` de-duplicate label using fall-through
* `io_slave_killed()` & `check_io_slave_killed()`:
* reüse the result from the level below
* add distinguishing docs
* `try_to_reconnect()`: extract `'` from `if`-`else`
* `handle_slave_io()`: Both `while`s have the same condition;
looks like the outer `while` can simply be an `if`.
* `connect_to_master()`:
* assume `mysql_errno()` is not 0 on connection error
* utilize 0’s falsiness in the loop
* extend docs
* `sql/sql_show.cc`: refactor SHOW ALL REPLICAS filter’s condition
* `sql/mysqld.cc`: init `master-retry-count` with `master_retry_count`
Reviewed-by: Kristian Nielsen <knielsen@knielsen-hq.org>
When the IO thread (re)connects to a primary, no status updates are
available besides the unique errors that cause the failure.
These new `Master_info` numbers supplement SHOW SLAVE STATUS’s (most-
recent) ‘Connecting’ state with statistics on (re)connect attempts:
* `Connects_Tried`: how many retries have been attempted so far
This was previously a local variable that only counted re-attempts;
it’s now meaningful even after the “Connecting” state concludes.
* `Master_Retry_Count` (from MDEV-25674): out of how many configured
  attempts
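Hypothetical excerpt (field values are illustrative):

SHOW SLAVE STATUS\G
--       Connects_Tried: 3
--   Master_Retry_Count: 100000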
Side-note: Some of the tests updated by this commit dump the entire
SHOW SLAVE STATUS, which might include non-deterministic entries.
Reviewed-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
This new CHANGE MASTER TO field specifies the `--master-retry-count`
(global option: the number of Primary connection attempts)
for each multi-source replica; i.e., per-channel `performance_schema.`
`replication_connection_configuration.CONNECTION_RETRY_COUNT`.
`--master-retry-count` remains the default for new `CHANGE MASTER TO`s.
This new keyword and `master-info` entry
match those of pre-‘REPLICATION SOURCE’ MySQL.
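For example (connection name and value are illustrative):

CHANGE MASTER 'channel1' TO master_retry_count = 20;
SELECT CONNECTION_RETRY_COUNT
  FROM performance_schema.replication_connection_configuration;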
`try_to_reconnect()` wraps `safe_reconnect()` with logging, but the
latter already loops reconnection attempts up to `master_retry_count`
times with `mi->connect_retry`-msec sleeps in between.
This means `try_to_reconnect()` has been counting disconnects of the
replication session (since it doesn’t loop) while `safe_reconnect()`
was counting actual tries (which may be multiple per disconnect).
In practice, the only benefit of this outer counter was to substitute 1
for the edge case `--master-retry-count=0`, which the inner loop
doesn’t cover.
Writing the resized redo log would write uninitialized data. There is
a MEM_MAKE_DEFINED construct in the code to bless this; however, it was
applied to the initial length, not the changed length.
The MEM_MAKE_DEFINED call is moved earlier in the code, to a point
where the length contains the correct value.
Added caching of database directories that did not have a db.opt file.
This was common for older MariaDB installations or if a user created
a database with 'mkdir'.
Other things:
- Give a note "no db.opt file" if one uses SHOW CREATE DATABASE on
a database without a db.opt file.
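For example (database name is illustrative; the exact note text may
differ):

SHOW CREATE DATABASE legacydb;
SHOW WARNINGS;  -- Note: no db.opt file for database 'legacydb'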
This is for preparing MariaDB for catalogs.
mtr tests should in the future use MARIADB_TOPDIR or MARIADB_DATADIR
instead of '@@datadir'. This is especially true for replication tests
that access binlog files.
MARIADB_TOPDIR is the top directory where binary log and engine log
files reside.
MARIADB_DATADIR is the directory where the database/schema directories
reside.
MARIADB_TOPDIR does not depend on catalogs.
When catalogs are used, MARIADB_DATADIR points to the directory of the
current catalog. If catalogs are not used, then
MARIADB_DATADIR=MARIADB_TOPDIR.
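A hypothetical mtr-style snippet using the new variables:

# old style, not catalog-aware:
--let $datadir= `SELECT @@datadir`
# new style:
--echo # binlog files live under $MARIADB_TOPDIR
--echo # schema directories live under $MARIADB_DATADIR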
recv_sys_t::parse(): Allocate decrypt_buf also for storing==BACKUP
but limit its size to correspond to 1 byte of record payload.
Ensure that last_offset=0 for storing==BACKUP.
When parsing INIT_PAGE or FREE_PAGE, avoid an unnecessary
l.copy_if_needed() for storing!=YES.
When parsing EXTENDED in storing==BACKUP, properly invoke
l.copy_if_needed() on a large enough decrypt_buf.
When parsing WRITE, MEMMOVE, MEMSET in storing==BACKUP,
skip further validation (and potential overflow of the tiny decrypt_buf),
like we used to do before commit 46aaf328ce424aededdb61e59a48db05630563d5
(MDEV-35830).
Reviewed by: Debarun Banerjee
A spider_conn may outlive its associated ha_spider (in the field
queued_ping_spider) used for connecting to and pinging the data
node (a call to spider_db_ping(), guarded by the boolean field
queued_ping). In a call to ha_spider::close() (which is often preceded
with the deletion of the ha_spider itself), many cleanups happen,
including freeing the associated spider_share, which is used by the
spider_conn in spider_db_ping(). Therefore it is necessary to reset
both the queued_ping_spider and queued_ping fields, so that any
further spider interaction with the data node will not trigger the
call using the ha_spider, including its freed spider_share.
Also, out of caution, added an assertion and an internal error for the
case where a connection has not been established (the db_conn field of
type MYSQL * is NULL) and the attempt to connect is skipped because
both queued_connect and queued_ping are false. Note that this unlikely
(if not impossible) scenario would not be a regression caused by this
change, as it strictly falls under the scenario of this bug.
This fixes a server startup SIGSEGV introduced in MDEV-34716
(commit d2eba35653b87a8fbd3bffe3ac4b4eb0ab7c0ca9) when upgrading from
server versions lower than 11.7.
Also construct the JSON Options column when upgrading from a version
without the column.
Skip printing the error if it is expected, that is, when mariadb-import
is aborting.
In a multi-threaded scenario, another thread may encounter an error
because the corresponding connection was killed. Do not print this
error.
Some jobs in the test stage of the pipeline are failing due to
mysql_install_db not being found in the expected location for newer
versions (11.4) of mariadb when executing test_upgrade.sh.
Fix missing binary failures in test_upgrade.sh:
- Modify commands that look for variables and binaries to account for
both mysql and mariadb prefixes of the binaries
- Add more directories to the list of locations where the binaries are
searched for
All new code of the whole pull request, including one or several files
that are either new files or modified ones, is contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
Make sure the table mysql.gtid_slave_pos is altered to InnoDB before
starting parallel replication. The parallel replication of the suppression
insertion in the test case was trying to update the GTID position in
parallel with the ALTER TABLE, which could occasionally deadlock on the MDL
lock.
Reviewed-by: Monty <monty@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
This fixes compilation when using gcc 7.5.0.
Apparently this version of gcc does not support
enum privilege_t: unsigned long long for printf
argument checking.
log_t::writer_update(): Add the parameter bool resizing,
to indicate whether log resizing is in progress.
We must enable log_writer_resizing only if resize_lsn>1,
to ensure that log_t::resize_abort() will not choose the wrong
log_sys.log_writer.
log_t::resize_initiator: The thread that successfully invoked
resize_start().
log_t::resize_start(): Simplify some logic, and assign resize_initiator
if we successfully started log resizing.
log_t::resize_abort(): Abort log resizing if we are the
resize_initiator.
innodb_log_file_size_update(): Clean up some logic.
Reviewed by: Debarun Banerjee
dict_stats_fetch_from_ps(): Acquire dict_sys.latch as few times as
possible, and release dict_sys.latch after invoking pars_sql(),
so that we will not be unnecessarily holding dict_sys.latch while
possibly waiting for data to be read into the buffer pool.
row_create_index_for_mysql(): Tolerate DB_LOCK_TABLE_FULL better.
fts_create_one_common_table(), fts_create_one_index_table():
Do not corrupt the error state of a non-active transaction object.
fts_config_set_value(): Only run another statement if there was
no error yet.
row_parse_int(): Refactor the code and define the function static in
one compilation unit. For any negative values, we must return 0.
row_search_get_max_rec(), row_search_max_autoinc(): Moved to the same
compilation unit as row_parse_int().
We also remove a work-around of an internal compiler error when
targeting ARMv8 on GCC 4.8.5, a compiler that is no longer supported.
Reviewed by: Debarun Banerjee