is_bulk_op())' fails on UPDATE on a partitioned table with subquery
(MySQL:71630)
Analysis: The error is not checked, so the correct error state is not
returned.
Fix: Check for the error and return the error state.
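A hypothetical statement shape (table and column names assumed, not from
the original report) exercising this path, where t_part is a partitioned
table:
  UPDATE t_part SET a = (SELECT b FROM t2 WHERE t2.id = t_part.id);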
Diagnostics_area::set_error_status
Analysis: When strict mode is enabled, all warnings are converted to errors,
including those which do not occur because of bad data.
Fix: The query should not be aborted when the warning says that the limit on
examined rows was reached, because that warning is not caused by bad data.
So thd->abort_on_warning should be false.
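A minimal sketch of the intended behavior, assuming a table t1 with more
than ten rows:
  SET SESSION sql_mode = 'STRICT_ALL_TABLES';
  SELECT * FROM t1 LIMIT ROWS EXAMINED 10;
  -- with the fix, the statement completes with a warning that the
  -- examined-rows limit was reached instead of aborting with an error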
- row_search_mvcc() should return DB_INTERRUPTED when the thread is killed.
- Add a syncpoint for the ICP check.
- Add test coverage for killed-during-ICP-check scenario
Backport of MDEV-22761 fixes for ICP from 10.4 commits:
* a6f956488c712bef3b13660584d1b905e0c676cc
* c03885cd9ceb1ede7f49a9e218022b401b3a1e28
XtraDB was fixed in deb3b9a17498
Reviewer: Daniel Black
EVEN IF I LOG TO FILE.
Analysis:
----------
Running mysql_upgrade on the master breaks replication when
query logging is enabled on the slave with the 'log-output'
option set to FILE/NONE.
mysql_upgrade modifies the 'general_log' and 'slow_log'
tables after disabling the logging, as below:
SET @old_log_state = @@global.general_log;
SET GLOBAL general_log = 'OFF';
ALTER TABLE general_log
MODIFY event_time TIMESTAMP NOT NULL,
( .... );
SET GLOBAL general_log = @old_log_state;
and
SET @old_log_state = @@global.slow_query_log;
SET GLOBAL slow_query_log = 'OFF';
ALTER TABLE slow_log
MODIFY start_time TIMESTAMP NOT NULL,
( .... );
SET GLOBAL slow_query_log = @old_log_state;
In the binary log, only the ALTER statements are logged,
but not the SET statements which turn the logging ON/OFF.
So when the slave replays the binary log, the ALTER of the log
tables throws an error since the logging is enabled there. Also,
the 'log-output' option is not checked to determine
whether to allow/disallow the ALTER operation.
Fix:
----
The 'log-output' option is now included in the check that
determines whether the query logging happens using the
log tables.
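A minimal sketch of the slave-side behavior after the fix, assuming
logging goes to FILE only:
  SET GLOBAL log_output = 'FILE';
  SET GLOBAL general_log = 'ON';
  ALTER TABLE mysql.general_log MODIFY event_time TIMESTAMP NOT NULL;
  -- accepted: with log_output=FILE the general query log does not use
  -- the log table, so the ALTER is no longer rejected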
Picked from the MySQL repository at 0daaf8aecd8f84ff1fb400029139222ea1f0d812
The crash was caused by improperly raising an error on replication checksum
verification failure at server initialization time. As there is no THD object
associated with the main initializing thread yet, the error text should be
assigned by calling a respective macro that is aware of that possibility.
Fixed accordingly.
[When merging to 10.4, the new test result file needs the line
+# restart: --master_verify_checksum=ON --debug_dbug=+d,corrupt_read_log_event_char
which an mtr run will hint at.]
Analysis:
========
"mysqlbinlog -v" option will reconstruct row events and display them as
commented SQL statements. If this option is given twice, the output includes
comments to indicate column data types and some metadata.
`log_event_print_value` is the function responsible for printing values and
their types. This function doesn't handle the GEOMETRY type, hence the above
error gets printed.
Fix:
===
Add support for GEOMETRY datatype.
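A minimal sketch, assuming row-based binary logging, of a statement whose
row events "mysqlbinlog -vv" can now decode (table name assumed):
  CREATE TABLE t1 (g GEOMETRY);
  INSERT INTO t1 VALUES (POINT(1, 1));
  -- mysqlbinlog -vv now annotates the row value as a GEOMETRY column
  -- instead of failing on the unhandled type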
Apologies, had a dirty branch before pushing:
This reverts commit 053653a23cac6f3f2e5288979438de27c9d0100a.
This reverts commit 0ff897807fc2f4a32e1ba1ae148005930ea604b5.
This reverts commit 85b085972b729f6c049050f851692c9a5b86f3d5.
This reverts commit f3f45e46b614bddcef0a37f4352c5909ca565d1d.
This reverts commit a470b3570a7ce2534c9021f3b84d7457a3ba08e1.
This reverts commit f8b8d202bc83d3de46c89ef86333fe602e711265.
This reverts commit 6b6f066fdd9f5f64813ded550e7dbda176ee3c82.
This reverts commit a701e9e6c390c3cbac69872e95b1aec565341d30.
This reverts commit c169838611e13c9f0559b2f49ba8c36aec11a78b.
Problem:
=======
SHOW BINLOG EVENTS FROM <"random"-pos> caused a variety of failures as
reported in MDEV-18046. They were fixed, but that approach is not
future-proof, and adding extra checks for the parameters of events being
constructed is not optimal.
Analysis:
=========
"show binlog events from <pos>" code considers the user given position as a
valid event start position. The code starts reading data from this event start
position onwards and tries to map it to a set of known events. Each event has
a specific event structure and asserts have been added to ensure that, read
event data, satisfies the event specific requirements. When a random position
is supplied to "show binlog events command" the event structure specific
checks will fail and they result in assert.
For example: https://jira.mariadb.org/browse/MDEV-18046
In the bug description user executes CREATE TABLE/INSERT and ALTER SQL
commands.
When a crazy offset like "SHOW BINLOG EVENTS FROM 365" is provided, the code
assumes offset 365 is a valid event beginning, proceeds to read the length at
EVENT_LEN_OFFSET, gets some random length, and comes up with a crazy event
which doesn't exist in the binary log. In the quoted example scenario, the
event read at offset 365 is considered an "Update_rows_log_event", which is
not present in the binary log. Since this is a random event, its validation
fails and the code ends in an assert or segmentation fault, as shown below.
mysqld: /data/src/10.4/sql/log_event.cc:10863: Rows_log_event::Rows_log_event(
const char*, uint, const Format_description_log_event*):
Assertion `var_header_len >= 2' failed.
181220 15:27:02 [ERROR] mysqld got signal 6 ;
#7 0x00007fa0d96abee2 in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
#8 0x000055e744ef82de in Rows_log_event::Rows_log_event (this=0x7fa05800d390,
buf=0x7fa05800d080 "", event_len=254, description_event=0x7fa058006d60) at
/data/src/10.4/sql/log_event.cc:10863
#9 0x000055e744f00cf8 in Update_rows_log_event::Update_rows_log_event
Since we are reading random data, repeating the same SHOW BINLOG EVENTS
FROM 365 command produces different types of crashes with different events.
MDEV-18046 reported 10 such crashes.
In order to avoid such scenarios, the user-provided starting offset needs to
be validated for correctness. The best way of doing this is to make use of
checksums if they are available. The MDEV-18046 fix introduced this
checksum-based validation.
The issue still remains in cases where binlog checksums are disabled. See
the following bug reports.
MDEV-22473: binlog.binlog_show_binlog_event_random_pos failed in buildbot,
server crashed in read_log_event
MDEV-22455: Server crashes in Table_map_log_event,
binlog.binlog_invalid_read_in_rotate failed in buildbot
Fix:
====
When the binlog checksum is disabled, perform a scan (reading event by
event) to validate the requested FROM <pos> offset. Starting from offset 4,
read the event_length of the next event in the binary log and use it to
advance the current offset to the start of the following event. Repeat this
process while the current offset is less than the requested offset. If the
current offset ends up higher than the requested offset, report an
appropriate invalid-offset error.
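A minimal sketch of the user-visible behavior after the fix:
  SHOW BINLOG EVENTS FROM 365;
  -- if offset 365 does not coincide with an event boundary, the server
  -- now reports an invalid-offset error instead of asserting or crashing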
This piece of code in Item_func_or_sum::agg_item_set_converter:
  if (!conv && ((*arg)->collation.repertoire == MY_REPERTOIRE_ASCII))
    conv= new (thd->mem_root) Item_func_conv_charset(thd, *arg, coll.collation, 1);
was wrong because:
1. It could change an Item_cache into an Item_func_conv_charset
(with the old Item_cache in args[0]).
Such an Item type change is not always supported, e.g.
the code in Item_singlerow_subselect::reset() expects only an Item_cache,
to be able to call Item_cache::set_null(). So it erroneously
reinterpreted the Item_func_conv_charset as an Item_cache and called
a non-existing method set_null(), which crashed the server.
2. The 1 in the last parameter to Item_func_conv_charset() was also
a problem. In MariaDB versions where the reported query did not crash,
it erroneously returned "empty set" instead of one row, because
the 1 made subselects execute too early and return NULL.
Fix:
1. Removing the above two lines from Item_func_or_sum::agg_item_set_converter()
2. Adding the repertoire test inside the constructor of Item_func_conv_charset,
so it now detects itself as "safe" in more cases than before.
This is needed to avoid new "Illegal mix of collations" errors after
removing the wrong code, in the various scenarios where character set
conversion from pure ASCII happens, including the reported scenario.
So now this sequence:
Item_cache -> Item_func_concat
is replaced with this compatible sequence (the top Item is still an Item_cache):
new Item_cache -> Item_func_conv_charset -> Item_func_concat
Before the fix it was replaced with this incompatible sequence:
Item_func_conv_charset -> old Item_cache -> Item_func_concat
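A hypothetical query shape (not the reported query; table and column names
assumed) exercising this path: a single-row subquery, wrapped in an
Item_cache, whose pure-ASCII result must be converted to the character set
of the other side of the comparison:
  SELECT * FROM t1 WHERE (SELECT 'abc') = ucs2_col;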
Passing a null pointer to a nonnull argument is not only undefined
behaviour, but it also grants the compiler the permission to optimize
away further checks whether the pointer is null. GCC -O2 at least
starting with version 8 may do that, potentially causing SIGSEGV.
These problems were caught in a WITH_UBSAN=ON build with the
Bug#7024 test in main.view.
Passing a null pointer to the "%s" argument of a printf-like
function is undefined behaviour. In the GNU libc implementation
of the printf() family of functions, it happens to work.
GCC 10.2.0 would diagnose this with -Wformat-overflow -Og.
In -fsanitize=undefined (WITH_UBSAN=ON) builds, a runtime error
would be generated. In some other builds, GCC 8 or later might infer
that the parameter is nonnull and optimize away further checks whether
the parameter is null, leading to SIGSEGV.
(This commit is exclusively for 10.1 branch, do not merge it to upper ones)
In the case of a pattern where a non-STMT_END-marked Rows-log-event (A) is
followed by a STMT_END-marked one (B), mysqlbinlog mixes up the base64-encoded
rows events with their pseudo-SQL representation produced by the verbose
option:
BINLOG '
base64 encoded data for A
### verbose section for A
base64 encoded data for B
### verbose section for B
'/*!*/;
In effect the produced BINLOG '...' query is not valid and is rejected with
an error.
Examples of BINLOG statements malformed this way could be found in
binlog_row_annotate.result, which gets corrected with the patch.
The issue is fixed by introducing an auxiliary IO_CACHE to hold the verbose
comments until the terminal STMT_END event is found. The new cache is emptied
out after the two pre-existing ones are done at that point.
The correctly produced output for the above case now looks as follows:
BINLOG '
base64 encoded data for A
base64 encoded data for B
'/*!*/;
### verbose section for A
### verbose section for B
Thanks to Alexey Midenkov for recognizing the problem and attempting to
tackle it, to Venkatesh Duggirala, who produced a patch for the upstream
whose idea is exploited here, as well as to MDEV-23077 reporter LukeXwang,
who also contributed a piece of a patch aiming at this issue.
Extra: mysqlbinlog_row_minimal was refined to not produce mutable numeric
values in the result file.
The issue here was that the query was using the ORDER BY LIMIT optimization,
where the access method was changed from EQ_REF access to an index scan (on
an index that would resolve the ORDER BY clause).
But the parameter READ_RECORD::unlock_row was not reset to rr_unlock_row,
which is used when the access method is not EQ_REF.
Since MDEV-18778, the time zone tables are changed to InnoDB
to allow them to be replicated to other Galera nodes.
Even without Galera, the time zone tables could be declared InnoDB.
With standalone InnoDB tables, mysql_tzinfo_to_sql takes
approximately 27 seconds.
With the transactions enabled in this patch, the approximate load
time is 1.2 seconds.
While explicit checks for the engine of the time zone tables could be
done, or checks against !opt_skip_write_binlog, non-transactional
storage engines will just ignore the transactional state without
even a warning, so it is safe to enact globally; see the sketch below.
Leap seconds are pretty much ignored, as they are a single INSERT
statement, and they have gone out of favour after causing MariaDB
stalls in the past.
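A sketch of the assumed shape of the generated load script after this patch
(not the verbatim tool output; TRUNCATE is DDL and commits implicitly, so
the transaction wraps the bulk INSERTs):
  TRUNCATE TABLE time_zone;
  TRUNCATE TABLE time_zone_name;
  TRUNCATE TABLE time_zone_transition;
  TRUNCATE TABLE time_zone_transition_type;
  BEGIN;
  -- ... per-zone INSERT statements generated by mysql_tzinfo_to_sql ...
  COMMIT;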
Remove the ORDER BY clause from a UNION when the UNION is inside an
IN/ALL/ANY/EXISTS subquery. This rewrite was already done for subqueries,
but not for the fake_select of the UNION; see the sketch below.
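A minimal sketch (table and column names assumed): the ORDER BY in the
UNION below cannot affect the result of the IN predicate and is now removed:
  SELECT * FROM t1
  WHERE a IN (SELECT b FROM t2 UNION SELECT c FROM t3 ORDER BY 1);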
Problem:- rpl_parallel2 was failing non-deterministically
Analysis:-
When FLUSH TABLES WITH READ LOCK is executed, it will allow all worker
threads to complete their ongoing transactions and then it will pause them.
At this point FTWRL proceeds to acquire the global read lock. FTWRL first
blocks threads from starting new commits, then upgrades the lock to block
commits of existing transactions.
Step1:
FLUSH TABLES WITH READ LOCK - Blocks new commits
Step2:
* STOP SLAVE command enables 'force_abort=1' which unblocks workers,
they continue to execute events.
* T1: Waits in the 'record_gtid' call to update the 'gtid_slave_pos' table
with its current GTID, but it is blocked because of Step1.
* T2: Holds COMMIT lock and waits for T1 to commit.
Step3:
FLUSH TABLES WITH READ LOCK - Waiting to get BLOCK_COMMIT.
This results in a deadlock. When the STOP SLAVE command allows paused
workers to proceed, the workers should skip the execution of all further
events, similar to the 'conservative' parallel mode.
Solution:-
We will assign 1 to skip_event_group when we are aborted in do_ftwrl_wait.
rpl_parallel_entry->pause_sub_id is only reset when force_abort is off in
rpl_pause_after_ftwrl.
Do not collect EITS statistics for this statement:
ALTER TABLE t ANALYZE PARTITION p
EITS stats are currently global, not per-partition.
Collecting global stats when we are asked to process just one partition
causes issues for DBAs.
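A minimal sketch of the behavior after the fix:
  ALTER TABLE t ANALYZE PARTITION p;   -- no longer collects global EITS stats
  ANALYZE TABLE t PERSISTENT FOR ALL;  -- still collects them when asked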
Fix prefix key comparison in partitioning. Comparisons must
take into account no more than prefix_len characters.
It used to compare prefix_len*mbmaxlen bytes.
* Fix the crash: if the IN-to-EXISTS rewrite causes an error (and so
JOIN::optimize() fails with an error, too), don't call
update_used_tables(). Terminate the query execution instead.
* Fix the cause of the error in the IN-to-EXISTS rewrite: don't do
the rewrite if doing it will cause an error of this kind:
This version of MariaDB doesn't yet support 'SUBQUERY in ROW in left
expression of IN/ALL/ANY'
* Fix another issue exposed by this testcase:
JOIN::setup_subquery_caches() may be invoked before any select has
saved its query plan, and will crash because none of the SELECTs
has called create_explain_query_if_not_exists() to create the Explain
Data Structure for this SELECT.
TODO: When merging this to 10.2, remove the poorly-placed call to
create_explain_query_if_not_exists made by the fix for MDEV-16153
The issue occurs when the subquery cache is enabled.
On a cache miss the division produced a value with scale 9, and on a cache
hit the stored value also came back with scale 9, while Item::decimals
announces a smaller scale for the item. Because of the differing scales the
WHERE condition evaluated to FALSE, hence the output was incomplete.
To fix this problem we need to round the decimal to the limit mentioned in
Item::decimals. This makes sure the values are compared with the same
scale.
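A hypothetical shape of an affected query (names assumed): a DECIMAL
division inside a correlated subquery whose cached result is compared in
the WHERE clause, so both the cached and freshly computed values must be
rounded to the same Item::decimals scale:
  SELECT * FROM t1
  WHERE t1.d = (SELECT t2.a / t2.b FROM t2 WHERE t2.id = t1.id);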
The server auto-sets the lower_case_file_system value based on the behavior
of the default datadir instead of the directory specified by the user
through the configuration file or command-line options.
This patch fixes this problem.
An overflow was happening on Windows because there sizeof(ulong) is 4 bytes,
while it is 8 bytes on Linux.
Switched avg_frequency and avg_length for column statistics to ulonglong.
Switched avg_frequency for index statistics to ulonglong.
failed in Diagnostics_area::set_ok_status on FUNCTION replace
When the statement contains REPLACE, sp_drop_routine_internal() returns
0 (SP_OK) on success, which is then assigned to ret. So ret becomes false
and the error state is lost. The expression inside DBUG_ASSERT()
then evaluates to false, causing the assertion failure.
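A minimal sketch of the statement shape that hit the assertion (function
name and body assumed):
  CREATE FUNCTION f() RETURNS INT RETURN 1;
  CREATE OR REPLACE FUNCTION f() RETURNS INT RETURN 2;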
While trying to detect the datadir, take into account that for a Windows
service one can use the service name as the section name in the options
file. This historical obscurity is used by WAMP installations.
Make sure that the allocated sort_buffer has space for at least MERGEBUFF2
keys. The issue here was that the record length was quite high and the sort
buffer size very small, due to which we ended up with zero keys in the sort
buffer. Sort_param::max_keys_per_buffer was zero in such a case, due to
which we were flushing an empty sort_buffer to disk.
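A sketch of the triggering conditions (values assumed; the server may clamp
very small settings upward with a warning):
  SET SESSION sort_buffer_size = 1024;
  -- with rows wider than the buffer, max_keys_per_buffer could become 0,
  -- and an empty sort buffer used to be flushed to disk
  SELECT * FROM t1 ORDER BY wide_text_col;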
- a service not using "--defaults-file" can have any name, not just "MySQL"
- a service with "--defaults-file" but without a datadir entry in it
  uses the default datadir (install_root\data)
The issue here was that the left expr and right expr of the ANY subquery
had different character sets, so we were converting the left expr to the
utf8 character set. When this conversion happened we were actually
converting the item inside the cache, so it looked like
<cache>(convert(t1.l1 using utf8)), which is incorrect.
To fix this problem we store a reference to the left expr and convert that
to the utf8 character set, so it looks like
convert(<cache>(`test`.`t1`.`l1`) using utf8)
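A minimal sketch of the statement shape, assuming t1.l1 and t2.r1 have
different character sets:
  SELECT * FROM t1 WHERE t1.l1 = ANY (SELECT t2.r1 FROM t2);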