* It uses the stack a lot, so limit the recursion depth as
we used to do for pcre.
* But don't do that for newer pcre2, which uses very little stack
(and doesn't support limiting the recursion depth anyway).
* Fix tests to verify only that deep recursion doesn't crash,
ignoring results and warnings (which depend on the pcre2 version).
* Also, different pcre2 versions report different offsets in error messages;
take that into account.
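For reference, a minimal sketch of how the legacy pcre API caps the recursion depth (the pattern and the limit value are illustrative, not what the server uses):

```cpp
#include <pcre.h>
#include <cstdio>

int main()
{
  const char *error;
  int erroffset;
  pcre *re= pcre_compile("(a|b)+$", 0, &error, &erroffset, NULL);
  if (!re)
    return 1;

  /* Cap the match-time recursion depth so a pathological subject
     cannot overflow the thread stack. */
  pcre_extra extra;
  extra.flags= PCRE_EXTRA_MATCH_LIMIT_RECURSION;
  extra.match_limit_recursion= 1000;   /* illustrative value */

  int ovector[30];
  int rc= pcre_exec(re, &extra, "ababab", 6, 0, 0, ovector, 30);
  /* rc == PCRE_ERROR_RECURSIONLIMIT when the cap is exceeded */
  printf("rc=%d\n", rc);
  pcre_free(re);
  return 0;
}
```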
* size represents the size of an element in the Unique class
* full_size is used when the Unique class counts the number of
duplicates stored per element. This requires additional space per Unique
element.
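A hypothetical illustration of the size relationship (these are not the real Unique members, and the counter width is an assumption):

```cpp
#include <cstddef>
#include <cstdint>

/* Illustrative only: when duplicate counting is on, each element is
   stored together with a per-element counter, so full_size exceeds
   size by the counter's width; otherwise the two are equal. */
struct unique_sizes
{
  size_t size;       // bytes of the element itself
  size_t full_size;  // element plus the optional duplicate counter
};

unique_sizes make_sizes(size_t element_size, bool with_counters)
{
  size_t counter= with_counters ? sizeof(uint64_t) : 0;  // assumed width
  return { element_size, element_size + counter };
}
```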
Removing the redundant space character after the data type comment
in SHOW CREATE TABLE, so the output changes from e.g.:
a TIME /* mariadb-5.3 */  DEFAULT NULL
to
a TIME /* mariadb-5.3 */ DEFAULT NULL
This is a prerequisite for MDEV-19906.
- Making classes Field_time, Field_datetime, Field_timestamp abstract
- Adding instantiable Field_time0, Field_datetime0, Field_timestamp0 classes
- Removing redundant casts in field_conv.cc, item_timefunc.cc and sp.cc in calls to set_time() and get_timestamp()
- Replacing store_TIME() with store_timestamp() in log.cc and removing a redundant cast
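A minimal sketch of the abstract-base/instantiable-subclass pattern described above (only the class names come from this commit; the member function is an invented placeholder):

```cpp
// The base class keeps the shared behaviour but has a pure virtual
// member, so it cannot be instantiated; the "0" class is the concrete
// variant that callers create.
class Field_time
{
public:
  virtual ~Field_time() {}
  virtual int store_native(long long value)= 0;  // invented placeholder
};

class Field_time0 : public Field_time
{
public:
  int store_native(long long value) override
  {
    (void) value;   // the concrete storage format would go here
    return 0;
  }
};
```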
with condition_pushdown_from_having
This bug could manifest itself for queries with GROUP BY and HAVING clauses
when the HAVING clause was a conjunctive condition that depended
exclusively on grouping fields and at least one conjunct contained an
equality of the form fld=sq where fld is a grouping field and sq is a
constant subquery.
In this case the optimizer tries to perform a pushdown of the HAVING
condition into WHERE. To construct the pushable condition the optimizer
first transforms all multiple equalities in HAVING into simple equalities.
This has to be done for proper processing of the pushed conditions
in WHERE. The multiple equalities at all AND/OR levels must be converted
to simple equalities because any multiple equality may refer to a multiple
equality at an upper level.
Before this patch the conversion was performed like this:
multiple_equality(x,f1,...,fn) => x=f1 and ... and x=fn.
When the equality item for x=fi was constructed, both the items for x and fi
were cloned. If x happened to be a constant subquery that could not be
cloned, the conversion failed. If the conversions of multiple equalities
performed earlier had succeeded, the whole condition was left in an
inconsistent state that could cause various failures.
The solution provided by the patch is:
1. to use a different conversion rule if x is a constant:
multiple_equality(x,f1,...,fn) => f1=x and f2=f1 and ... and fn=f1
2. not to clone x if it's a constant.
Such conversions cannot fail, and besides, the result of the conversion
preserves the equivalence of f1,...,fn, which can be used for other
optimizations (see the sketch below).
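A self-contained sketch of the two conversion rules (std::string stands in for Item trees; this is not the optimizer code). It shows why the constant form references x exactly once, so the subquery never needs to be cloned:

```cpp
#include <string>
#include <vector>

std::vector<std::string>
expand(const std::string &x, const std::vector<std::string> &f, bool x_is_const)
{
  std::vector<std::string> conjuncts;
  if (f.empty())                              // nothing to expand
    return conjuncts;
  if (!x_is_const)
  {
    // multiple_equality(x,f1,...,fn) => x=f1 and ... and x=fn
    for (const std::string &fi : f)
      conjuncts.push_back(x + "=" + fi);      // x is cloned per conjunct
  }
  else
  {
    // multiple_equality(x,f1,...,fn) => f1=x and f2=f1 and ... and fn=f1
    conjuncts.push_back(f[0] + "=" + x);      // the only reference to x
    for (size_t i= 1; i < f.size(); i++)
      conjuncts.push_back(f[i] + "=" + f[0]); // keeps f1,...,fn equivalent
  }
  return conjuncts;
}
```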
This patch also made sure that expensive predicates are not pushed from
HAVING to WHERE.
Item_cond inherits from Item_args but doesn't store its arguments
as function arguments, which means it has zero arguments.
Don't call memcpy in this case.
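The shape of such a guard, as a simplified sketch: memcpy with a null source pointer is undefined behaviour even for a zero-byte copy, so the call must be skipped when there are no arguments:

```cpp
#include <cstring>

void copy_arguments(void **dst, void *const *src, unsigned arg_count)
{
  // Item_cond ends up with arg_count == 0 and a null src; calling
  // memcpy(dst, NULL, 0) would still be undefined behaviour.
  if (arg_count)
    memcpy(dst, src, arg_count * sizeof(void *));
}
```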
(Variant #2 of the patch, which keeps the sp_head object inside the
MEM_ROOT that the sp_head object owns)
(10.3 requires extra work due to sp_package, will commit a separate
patch for it)
sp_head::operator new() and operator delete() were dereferencing sp_head*
pointers to memory that didn't hold a valid sp_head object (it was
not constructed yet or was already destroyed).
This caused UBSan to crash when looking up type information.
Fixed by providing static sp_head::create() and sp_head::destroy() methods.
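A simplified sketch of the create()/destroy() pattern (the real code places the object inside the MEM_ROOT it owns; malloc stands in for that here):

```cpp
#include <new>
#include <cstdlib>

struct sp_head_like
{
  static sp_head_like *create()
  {
    // Allocate raw memory first, then run the constructor in place,
    // so no pointer to a not-yet-constructed object is dereferenced.
    void *mem= malloc(sizeof(sp_head_like));
    return mem ? new (mem) sp_head_like() : NULL;
  }

  static void destroy(sp_head_like *sp)
  {
    if (sp)
    {
      sp->~sp_head_like();  // run the destructor first...
      free(sp);             // ...then release the raw memory
    }
  }
};
```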
A certification failure followed by a clean shutdown would cause an
inconsistency between the sequence number stored in innodb and the
sequence number stored in the provider.
This happened both in the case of a local certification failure and in
the case where a dummy writeset is applied.
The fix consists of:
- updating wsrep position after dummy writeset is delivered in
`Wsrep_high_priority_service::log_dummy_write_set()`
- updating wsrep position while releasing commit order in wsrep-lib
side
Added two tests which stress the situation where a server is shut down
after a certification failure.
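The essence of the first change, with stand-in types (this is not the wsrep-lib API): a dummy writeset still consumes a seqno, so delivering it must advance the stored position as well:

```cpp
#include <cstdint>

struct stored_position { int64_t seqno; };

void log_dummy_write_set(stored_position &pos, int64_t ws_seqno)
{
  // Nothing is applied: the writeset failed certification.
  // The added step: persist the seqno anyway, so a clean shutdown
  // right after this point leaves innodb and provider in agreement.
  pos.seqno= ws_seqno;
}
```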
The string doesn't appear to be null-terminated when binlog checksums are
enabled. This causes a corrupt binlog name in the error message when a
slave is ahead of the master.
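The usual shape of such a fix, as a sketch with invented names: copy a bounded number of bytes and terminate explicitly, so the name stays valid even when the source bytes are followed by a checksum rather than a NUL:

```cpp
#include <cstring>

void copy_binlog_name(char *dst, size_t dst_size, const char *src, size_t len)
{
  if (dst_size == 0)
    return;
  size_t n= len < dst_size - 1 ? len : dst_size - 1;
  memcpy(dst, src, n);  // src need not be null-terminated
  dst[n]= '\0';         // terminate explicitly
}
```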
(Variant #2 of the patch, which keeps the sp_head object inside the
MEM_ROOT that the sp_head object owns)
(10.3 version of the fix, with handling for class sp_package)
sp_head::operator new() and operator delete() were dereferencing sp_head*
pointers to memory that didn't hold a valid sp_head object (it was
not constructed yet or was already destroyed).
This caused UBSan to crash when looking up type information.
Fixed by providing static sp_head::create() and sp_head::destroy() methods.
In this scenario:
- There is a possible range access for table T
- And there is a ref access on the same index which uses fewer key parts
- The join optimizer picks the ref access (because it is cheaper)
- make_join_select applies this heuristic to switch to range:
/* Range uses longer key; Use this instead of ref on key */
When this happens, the join buffer will be used without
JOIN_TAB::make_scan_filter() having been called. This means that conditions
which should be checked when reading table T are instead checked after T is
joined with the contents of the join buffer.
Fixed this by adding a make_scan_filter() check.
(updated patch after backport to 10.3)
(Fix testcase on Windows)
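A stubbed illustration of the fix shape (JOIN_TAB_stub is a stand-in; only the make_scan_filter() name mirrors the real method): when the heuristic switches to range under join buffering, the scan filter must be attached so the condition is checked while reading T:

```cpp
#include <cstdio>

struct JOIN_TAB_stub
{
  bool uses_join_buffer;
  bool make_scan_filter()
  {
    // attach the table condition to the scan over T
    printf("scan filter attached\n");
    return false;  // false == success
  }
};

bool switch_ref_to_range(JOIN_TAB_stub *tab)
{
  /* Range uses longer key; Use this instead of ref on key */
  if (tab->uses_join_buffer && tab->make_scan_filter())
    return true;  // error
  return false;
}
```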
Analysis:
========
'max_binlog_cache_size' is configured and a huge transaction is executed. When
the size of the transaction's events exceeds 'max_binlog_cache_size', the events
cannot be written to the binary log cache and a cache write error is raised.
Upon a cache write error the statement is rolled back and the transaction cache
should be truncated to the previous statement-specific position. The truncate
operation should reset the cache to the earlier valid position and flush the new
changes. Even though the flush is successful, the cache write error is still
marked. The truncate code interprets the cache write error as a cache flush
failure and returns abruptly without modifying the write cache parameters.
Hence the cache is in an invalid state. When a COMMIT statement is executed in
this session it tries to flush the contents of the transaction cache to the
binary log. Since the cache has partial events, the cache write operation
reports the 'writer.remains' assert.
Fix:
===
The binlog truncate function resets the cache to a specified size. As a first
step of truncation, clear the cache write error flag that was raised during the
earlier execution. With this, new errors that surface during cache truncation
can be clearly identified.
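The fix shape as a self-contained sketch (the names are invented; the real code operates on the binlog transaction cache): clear the stale error flag before truncating, so only genuinely new errors get reported:

```cpp
#include <cstddef>

struct cache_stub
{
  bool   write_error;  // set when an event could not be written
  size_t length;       // bytes currently in the cache
};

bool truncate_cache(cache_stub *cache, size_t pos)
{
  cache->write_error= false;   // the added first step of truncation
  if (pos > cache->length)
    return true;               // any error seen now is a truncation error
  cache->length= pos;          // reset to the previous valid position
  return false;
}
```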
MDEV-18046: Assortment of crashes, assertion failures and ASAN errors in mysql_show_binlog_events
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following assert when ASAN is enabled:
uint32 binlog_get_uncompress_len(const char*):
Assertion `(buf[0] & 0xe0) == 0x80' failed
Fix:
===
**Part11: Converted debug assert to error handler code**
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following ASAN error:
AddressSanitizer: heap-buffer-overflow on address
READ of size 1 at 0x60e00009cf71 thread T28
#0 0x55e37e034ae2 in net_field_length
Fix:
===
**Part10: Avoid reading out of buffer**
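A self-contained sketch of this kind of bounds check for length-encoded integers (simplified from what net_field_length decodes; the NULL marker 251 is rejected for brevity): the first byte determines how many bytes the field occupies, and decoding refuses to run past the buffer end:

```cpp
#include <cstddef>
#include <cstdint>

bool decode_length(const unsigned char **pp, const unsigned char *end,
                   uint64_t *out)
{
  const unsigned char *p= *pp;
  if (p >= end)
    return false;
  unsigned char first= *p;
  if (first == 251)
    return false;                      // NULL marker, not handled here
  size_t total= first < 251 ? 1 : first == 252 ? 3 : first == 253 ? 4 : 9;
  if ((size_t)(end - p) < total)
    return false;                      // would read out of the buffer
  if (first < 251)
    *out= first;
  else
  {
    *out= 0;
    for (size_t i= 1; i < total; i++)  // little-endian payload
      *out|= (uint64_t) p[i] << (8 * (i - 1));
  }
  *pp= p + total;
  return true;
}
```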
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following ASAN error:
AddressSanitizer: SEGV on unknown address
The signal is caused by a READ memory access.
User_var_log_event::User_var_log_event(char const*, unsigned int,
Format_description_log_event const*)
Implemented part of the upstream patch.
commit: mysql/mysql-server@a3a497ccf7
Fix:
===
**Part8: added checks to avoid reading out of buffer limits**
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following ASAN error,
"heap-buffer-overflow on address", and sometimes it asserts.
Table_map_log_event::Table_map_log_event(const char*, uint,
const Format_description_log_event*)
Assertion `m_field_metadata_size <= (m_colcnt * 2)' failed.
Fix:
===
**Part7: Avoid reading out of buffer**
Converted debug assert to error handler code.
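The general shape of converting a debug assert into error handling, as a simplified sketch: an assert vanishes in release builds and aborts debug builds on corrupt input, while the error path rejects the event gracefully in both:

```cpp
// old: DBUG_ASSERT(m_field_metadata_size <= (m_colcnt * 2));
bool metadata_size_ok(unsigned field_metadata_size, unsigned colcnt)
{
  if (field_metadata_size > colcnt * 2)
    return false;  // caller reports a corrupt-event error instead
  return true;
}
```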
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following ASAN error:
AddressSanitizer: heap-buffer-overflow on address 0x60400002acb8
Load_log_event::copy_log_event(char const*, unsigned long, int,
Format_description_log_event const*)
Fix:
===
**Part6: Moved the event_len validation to the beginning of the copy_log_event function**
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following ASAN error:
AddressSanitizer: heap-buffer-overflow on address
String::append(char const*, unsigned int)
Query_log_event::pack_info(Protocol*)
Fix:
===
**Part5: Added check to catch buffer overflow**
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following ASAN error:
heap-buffer-overflow within "my_strndup" in Rotate_log_event
my_strndup /mysys/my_malloc.c:254
Rotate_log_event::Rotate_log_event(char const*, unsigned int,
Format_description_log_event const*)
Fix:
===
**Part4: Improved the check for event_len validation**
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following crash when ASAN is enabled:
SEGV on unknown address
in inline_mysql_mutex_destroy
in my_bitmap_free
in Update_rows_log_event::~Update_rows_log_event()
Fix:
===
**Part3: Initialize m_cols_ai.bitmap to NULL**
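Why a NULL initialization is enough, as a sketch with stand-in types (MY_BITMAP and the mutex handling are simplified away): freeing a NULL pointer is a no-op, so a destructor that unconditionally frees stays safe even when the constructor bailed out early:

```cpp
#include <cstdlib>

struct bitmap_stub { unsigned *bitmap; };

void bitmap_free(bitmap_stub *b)
{
  free(b->bitmap);    // no-op when bitmap == NULL
  b->bitmap= NULL;
}

struct event_stub
{
  bitmap_stub m_cols_ai;
  event_stub() : m_cols_ai{NULL} {}            // the added initialization
  ~event_stub() { bitmap_free(&m_cols_ai); }   // now safe on all paths
};
```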
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following assert when ASAN is enabled:
Rows_log_event::Rows_log_event(const char*, uint,
const Format_description_log_event*):
Assertion `var_header_len >= 2'
Implemented part of the upstream patch.
commit: mysql/mysql-server@a3a497ccf7
Fix:
===
**Part2: Avoid reading out of buffer limits**
Problem:
========
SHOW BINLOG EVENTS FROM <pos> causes a variety of failures, some of which are
listed below. It is not a race condition issue, but there is some
non-determinism in it.
Analysis:
========
"show binlog events from <pos>" code considers the user given position as a
valid event start position. The code starts reading data from this event start
position onwards and tries to map it to a set of known events. Each event has
a specific event structure and asserts have been added to ensure that read
event data satisfies the event specific requirements. When a random position
is supplied to "show binlog events command" the event structure specific
checks will fail and they result in assert.
Fix:
====
The fix is split into different parts. Each part addresses either an ASAN
issue or an assert/crash.
**Part1: Checksum based position validation when checksum is enabled**
Using the checksum, validate the very first event read at the user-specified
position. If there is a checksum mismatch, report an appropriate error for the
invalid event, as sketched below.
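A sketch of checksum-based event validation (the layout is simplified, assuming a little-endian host; binlog checksums are CRC32, for which zlib's crc32() is a match): the last 4 bytes of an event hold a CRC32 of the preceding bytes, so a mismatch at the user-supplied offset means it is not a real event boundary:

```cpp
#include <zlib.h>     // crc32()
#include <cstdint>
#include <cstring>

bool event_checksum_ok(const unsigned char *ev, size_t ev_len)
{
  if (ev_len < 4)
    return false;
  uint32_t stored;                      // little-endian on disk
  memcpy(&stored, ev + ev_len - 4, 4);  // assumes a little-endian host
  uint32_t computed= (uint32_t) crc32(0L, ev, (uInt) (ev_len - 4));
  return stored == computed;
}
```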
An assertion failed in wsrep-lib after a transaction replay which
failed due to a conflict in certification.
- Implemented a test case for MDEV-20793 that reproduces the crash.
- Fixed wsrep-lib to deal with a certification error during replay.
Cause:
* when autocommit=0 (or the transaction is issued by the user),
`ha_commit_trans` is called twice on ALTER TABLE, causing a duplicate
insert into `transaction_registry` (ER_DUP_ENTRY).
Solution:
* ALTER TABLE makes an implicit commit with the second call. We actually
need to make the insert only when it is a real commit, so the is_real
variable is additionally checked (see the stub below).
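A stub illustration (only is_real comes from this commit): the registry insert must run once, on the real commit, and be skipped for the implicit commit that ALTER TABLE performs:

```cpp
#include <cstdio>

void commit_versioned(bool is_real)
{
  if (!is_real)
    return;   // implicit commit from ALTER TABLE: no second insert
  printf("INSERT INTO mysql.transaction_registry ...\n");
}
```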
Set the read_set bitmap for a view from the JOIN::all_fields list instead of
JOIN::fields_list, as split_sum_func would have added items to the all_fields
list.
The issue here is that for degenerate joins we should execute the window
functions, but they are not executed in all cases.
To get the window function values, the window functions always need to be
executed. This currently does not happen in a few cases
where the join would return 0 or 1 rows, like:
1) IMPOSSIBLE WHERE
2) MIN/MAX optimization
3) EMPTY CONST TABLE
The fix is to make sure that window functions get executed
and that the temporary table is set up for their execution, as sketched below.
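A simplified control-flow sketch (the helper structure and names are invented): even when the join shortcuts to 0 or 1 rows, the window-function step must still run, which requires the temporary table to be set up:

```cpp
#include <cstdio>

struct join_stub
{
  bool have_window_funcs;

  void exec_degenerate()
  {
    // ... produce the zero- or one-row result of the degenerate join ...
    if (have_window_funcs)
    {
      printf("set up window function tmp table\n");  // previously skipped
      printf("compute window functions\n");
    }
  }
};
```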