Fixes a scenario where an IN subquery returned the wrong result
because the pushed WHERE clause was not retained for downstream
result filtering. For example:
CREATE TABLE t1 (c1 TEXT, UNIQUE (c1(1)));
INSERT INTO t1 (c1) VALUES ('a');
SELECT 'abc' IN (SELECT c1 FROM t1);
Internally, the 'abc' IN subquery condition becomes the constant
condition:
'abc' = t1.c1 or t1.c1 is null
Prior to this patch, this condition was incorrectly removed when
converting the subquery engine to an index lookup-based engine.
Now eligible conditions are preserved during such engine rewrites.
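For illustration, the observable behaviour with the table above (the
result values are inferred from the description, not quoted from the
original report):
SELECT 'abc' IN (SELECT c1 FROM t1);
-- correct result: 0, since t1 only contains 'a'
-- before the fix, the lookup through the one-character prefix index matched
-- 'a', and with the filtering condition dropped the query could return 1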
SplM_opt_info::last_refills is never set, so remove it. It was used in
JOIN_TAB::choose_best_splitting(); use a local variable "refills" there
instead.
The logic in the function is that we compute
startup_cost= refills * spl_plan->cost;
only when we have a splitting query plan, which implies that "refills"
has been set accordingly.
Also
- Added a comment about what "refills" is.
- Updated derived_cond_pushdown.result: the change makes the
Split-Materialized plans more expensive, so the join order changes from
"t3, <split-materialized(outer_ref)>" to "<split-materialized>, t3"
In 11.4, multi_update::prepare() is called for both single-table and
multi-table UPDATE. As it is invoked before lock_tables(), it cannot do
prepare_for_insert(), which calls external_lock() and can only be used
on an already locked table.
During the optimization phase, when determining whether to materialize
a derived table all at once or in groups (split materialization), all key
equalities that reference the derived table have their cond Items pushed into
JOIN::spl_opt_info->inj_cond_list.
From there they are filtered (tables in join prefix) and injected into the
JOIN conditions (in JOIN::inject_best_splitting_cond()), where they might
be removed because they are involved in ref access and are not needed.
These pushed condition items were always Item_func_eq, whether or not
the key usage specified null-safe equality (<=>).
The fix is to create an Item_func_equal condition when the key equality
is specified using <=>.
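A rough sketch of the gist of the change (thd, left, right and
uses_null_safe_equality are illustrative names, not the exact server code):
/* when injecting the pushed key equality, pick the Item class according
   to how the equality was written in the query */
Item_bool_func2 *eq;
if (uses_null_safe_equality)    /* the key part was matched with <=> */
  eq= new (thd->mem_root) Item_func_equal(thd, left, right);
else
  eq= new (thd->mem_root) Item_func_eq(thd, left, right);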
approved by Sergei Petrunia (sergey@mariadb.com) PR#4198
When populating the structure spl_opt_info for a TABLE, and evaluating a
key_field for inclusion in spl_opt_info->added_key_fields, the null_rejecting
attribute may be incorrectly set. Originally, this attribute was
assumed to be TRUE; later it was changed to:
Item *real= key_field->val->real_item();
if ((real->type() == Item::FIELD_ITEM) &&
    ((Item_field*)real)->field->maybe_null())
  added_key_field->null_rejecting= true;
else
  added_key_field->null_rejecting= false;
which also incorrectly assumed that the added key field's null_rejecting
value depended on whether the field could be set to NULL.
The correct setting for this attribute is simply to pass it through from
the key being evaluated.
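A minimal sketch of what passing it through means here, using the names
from the snippet above (illustrative, not necessarily the exact patch):
added_key_field->null_rejecting= key_field->null_rejecting;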
The result of an incorrect value is, in this test case, incorrect
equality conditions being pushed into our (lateral) derived table,
excluding rows that might legitimately contain NULL and thus returning
a wrong result.
Approved by Sergei Petrunia, PR#4140
Since MDEV-33209 (09ea2dc788),
stack overflow errors are simply injected instead of relying on
frailer mechanisms that actually consume stack. These mechanisms were
not carried forward to JSON_TABLE or JSON_SCHEMA_VALID, where
the pattern was the same.
add_extra_deps also no longer recursively iterates under
out-of-stack conditions.
Tests performed in json_debug_nonembedded(_noasan).
This is the same as MDEV-35368, which was previously incompletely fixed
(on *nix only, for unix socket connections).
This time, we fix it compatibly with Connector/C, by not verifying the
server certificate for local connections, which, in addition to socket
and named pipe, also include "127.0.0.1" and "::1", and on Windows
"localhost" as well.
The corresponding code in Connector/C was added by
1287c901dc8515823d28edcebfe4be65e6c5a6b3.
It remains a good question whether mariabackup should use SSL at all,
since all it makes are local connections, for the "BACKUP STAGE" stuff.
An anonymous block is represented internally by the class sp_head,
so every statement inside an anonymous block is an SP instruction.
On the other hand, the anonymous block specified in the FROM clause of
the PREPARE statement is treated as a single statement. As a result,
all parameter markers (represented by the character ?) are part of
the anonymous block specified in the prepared statement, and at the same
time these parameter markers, internally represented by instances of
the class Item_param, are distributed among the SP instructions
representing SQL statements (every SQL statement is represented by an
instance of the class sp_instr_stmt).
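For illustration, a prepared anonymous block of roughly this shape
(hypothetical statement and tables, not taken from the bug report):
PREPARE stmt FROM 'BEGIN NOT ATOMIC
                     UPDATE t1 SET a= ? WHERE b= ?;
                     INSERT INTO t2 VALUES (?);
                   END';
EXECUTE stmt USING 1, 2, 3;
-- the three ? markers belong to the single prepared statement, while the
-- corresponding Item_param objects live inside the sp_instr_stmt
-- instructions of the anonymous block's sp_head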
In case table metadata changed while running an anonymous block in prepared
statement mode, only the affected SP instruction's statement is re-parsed.
Before re-parsing an SP instruction's statement, all its items are cleaned
up, including the instances of the class Item_param that represent
positional parameters. Unfortunately, this leaves dangling pointers in
Prepared_statement::param_array that reference the deleted Item_param
objects when reset_stmt_params() is invoked, which happens on every
execution of a prepared statement.
To fix the issue, no new instances of Item_param are created when
re-parsing the statement of a failed SP instruction; instead, the
instances of Item_param left from the initial parsing are re-used.
As a consequence, all pointers to instances of the class Item_param
stored in the array Prepared_statement::param_array and possibly spread
across the code base (e.g. select_lex->limit_params.select_limit)
still point to valid Items.
Check stmt_da::is_error() before calling stmt_da::sql_errno() (which is
what the assertion ensures).
The bug did not have any negative effects in optimized builds.
Execution of an UPDATE statement on a table that has an associated trigger,
where the UPDATE statement modifies a column using the DEFAULT
clause, could result in either an assertion failure or the
ER_BAD_NULL_ERROR error being issued incorrectly.
The reason for this behaviour is that on opening a table that has
an associated trigger, the method
Table_triggers_list::prepare_record_accessors
is called to prepare Field objects referencing TABLE::record[1] instead
of record[0]. This method allocates a new array of Field objects
as copies of the original table fields, but with updated null_ptr data
members pointing into the extra_null_bitmap array allocated before
that on the table's mem_root. Later switch_to_nullable_trigger_fields()
is called, where the table's fields are switched to the new array
allocated in Table_triggers_list::prepare_record_accessors().
After that, when fill_record() is invoked to fill the table fields with
values, make_default_field() is called to handle the DEFAULT clause and
create a field object. The function make_default_field() creates a copy
of the Field object and updates its ptr/null_ptr data members to point
to the right place in the table's record buffer, but since the method
Table_triggers_list::prepare_record_accessors
has been invoked before, the expression
def_field->table->s->default_values - def_field->table->record[0]
used for pointer adjustment ends up pointing to arbitrary memory not
associated with the table.
To fix the issue, use the TABLE_SHARE fields for referencing the
columns' default values.
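A minimal shape of the affected scenario (table, column and trigger names
are illustrative, not the actual test case):
CREATE TABLE t1 (a INT NOT NULL DEFAULT 1, b INT);
CREATE TRIGGER t1_bu BEFORE UPDATE ON t1 FOR EACH ROW SET NEW.b= 0;
INSERT INTO t1 (a, b) VALUES (5, 5);
UPDATE t1 SET a= DEFAULT;
-- before the fix, this kind of UPDATE could fire the assertion (debug
-- builds) or incorrectly report ER_BAD_NULL_ERROR, even though the
-- DEFAULT of column a is not NULL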
Include PATH in DLL search paths to support broader scenarios.
Enables use of common distros like OpenSSL from Shining Light Productions
(used by Chocolatey, AppVeyor, etc).
Previously, only vcpkg libraries were detected.
Background:
In MDEV-33474, we introduced runtime dependency packaging primarily to
support libcurl and other potential third-party dependencies from vcpkg.
Problem:
The INSTALL(RUNTIME_DEPENDENCY_SET) command was failing at the packaging
step unless shared libraries from the same build were explicitly excluded
via PRE_EXCLUDE_REGEXES. While initially only server.dll was excluded this
way, this turned out to be insufficient for users compiling their own
plugins.
Solution:
Exclude all linked shared libraries from the same build via
PRE_EXCLUDE_REGEXES. Move dependency detection and install to the end of
CMake processing, after all add_library/add_executable calls, when all
targets are known.
Also made the INSTALL_RUNTIME_DEPENDENCIES variable independent of vcpkg
detection, for simplicity.
Two new error codes ER_SEQUENCE_TABLE_HAS_TOO_FEW_ROWS and
ER_SEQUENCE_TABLE_HAS_TOO_MANY_ROWS were introduced in MDEV-36032 in
both 10.11 and, as part of MDEV-22491, 12.0. Here we remove them from
10.11, but they should remain in 12.0.
MariaDB server crashes when a query includes a derived table
containing an unnamed column (e.g. `SELECT '' FROM t`). When the `Item`
object representing such an unnamed column was checked for a valid,
non-empty name in `TABLE_LIST::create_field_translation`, the
server crashed (assertion `item->name.str && item->name.str[0]`
failed).
This fix removes the redundant assertion. The assert was a strict
debug guard that's no longer needed because the code safely handles
empty strings without it.
Selecting `''` from a derived table caused `item->name.str`
to be an empty string. While the pointer itself wasn't `NULL`
(so `item->name.str` evaluated to `true`), its first character
(`item->name.str[0]`) was the null terminator, which evaluates to `false`
and eventually made the assert fail. The code immediately after the
assert can safely handle empty strings, so the assert was guarding
against something the code already handles.
Includes `mysql-test/main/derived.test` to verify the fix.
is_bulk_op())' failed after ALTER TABLE of versioned table
A missed error code resulted in my_ok() being called in a higher frame,
which failed on the assertion for m_status while in an error state.
As of CMake 3.24, CMAKE_COMPILER_IS_GNU(CC|CXX) are deprecated and should
be replaced with CMAKE_(C|CXX)_COMPILER_ID, which were introduced in
CMake 2.6.
MDEV-33813 caused a regression: when a disk got full while
writing to a MyISAM or Aria table, the MariaDB connection would, instead
of retrying after 60 seconds, hang until the query was killed.
Fixed by changing mysql_cond_wait() to mysql_cond_timedwait().
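Roughly the shape of the change (variable names are illustrative, not the
exact patch):
struct timespec abstime;
set_timespec(abstime, 60);                      /* give up waiting after 60 seconds */
mysql_cond_timedwait(&cond, &mutex, &abstime);  /* was: mysql_cond_wait(&cond, &mutex) */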
Author: Thomas Stangner
handler::clone() call did not work with read only tables like S3.
It gave a wrong error message (out of memory instead of a permission
error) and aborted the query.
The issue was that the clone call passed a wrong parameter to ha_open().
This is now fixed. I also changed the clone call to provide the correct
error message if things fail.
This patch fixes an 'out of memory' error when using the S3 engine
for queries that could use multiple indexes together to find the matching
rows, like the following:
SELECT * FROM t1 WHERE key1 = 99 OR key2 = 2
This commit fixes a bug where Aria tables are used in a replication chain
(master->slave1->slave2) and a backup is taken on slave2. In this case
it is possible that the replication position in the backup, stored in
mysql.gtid_slave_pos, will be wrong. This will lead to replication
errors if one tries to use the backup as a new slave.
Analysis:
Replicated row events are committed with trans_commit_stmt() and
thd->transaction->all.ha_list != 0.
This means that backup_commit_lock is not taken for Aria tables,
which means the rows are committed and binary logged on the slave
during BLOCK_COMMIT, which should not happen.
This issue does not occur on the master, as thd->transaction->all.ha_list
is == 0 under AUTO_COMMIT, which sets 'is_real_trans' and 'rw_trans',
which in turn causes backup_commit_lock to be taken.
Fixed by checking in ha_check_and_coalesce_trx_read_only() whether all
handlers support rollback and, if not, waiting for BLOCK_COMMIT also for
statement commit.
forever, cannot be killed
mysql_rm_table_no_locks() does TDC_RT_REMOVE_ALL, which waits while the
share is closed. The table is normally opened only as an OPEN_STUB; this
is what the parser does for CREATE TABLE. But for SELECT the table is
opened not as a stub. If it is the same table name we anyway have two
TABLE_LIST objects: stub and not stub. So for the "not stub" one
TDC_RT_REMOVE_ALL sees the open count and decides to wait until it is
closed. And it hangs because that table was opened in the same thread.
The fix disables subqueries in CHECK expressions at the parser
level. Thanks to Sergei Golubchik <serg@mariadb.org> for the patch.
Oracle mode has different set operator precedence and handling (not per
the standard). In Oracle mode the test case below is handled as-is, in
plain order from left to right. MariaDB default mode follows the SQL
standard and gives INTERSECT higher priority, so UNION takes its input
from a derived table which is the INTERSECT result (here and below the
same applies to EXCEPT).
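As a sketch of the difference (the results follow from the precedence
rules above; this is not necessarily the exact test case from the patch):
SELECT 1 UNION SELECT 2 INTERSECT SELECT 2;
-- default mode: INTERSECT binds tighter, i.e. 1 UNION (2 INTERSECT 2) -> rows 1 and 2
-- Oracle mode:  evaluated left to right, i.e. (1 UNION 2) INTERSECT 2 -> row 2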
Non-distinct set operators (UNION ALL/INTERSECT ALL) work via unique
key release, but that can be done only once. We cannot add an index to a
non-empty heap table (see heap_enable_indexes()). So every UNION ALL
before the rightmost UNION DISTINCT works as UNION DISTINCT. That is
common behaviour; MySQL, MSSQL and Oracle work that way.
There is a union_distinct property which indicates the rightmost
distinct UNION, so the algorithm works simply: it releases
the unique key after union_distinct in the loop
(st_select_lex_unit::exec()).
The INTERSECT ALL code (implemented by MDEV-18844 in a896beb) does not
know about Oracle mode and treats union_distinct as the last
operation; that is why it releases the unique key on the union_distinct
operation. INTERSECT ALL requires the unique key to work, so the unique
key must not be released before any INTERSECT ALL (see
select_unit_ext::send_data()).
The patch tweaks the INTERSECT ALL code for Oracle mode. In
disable_index_if_needed() it does not allow unique key release before
the last operation, and it allows unfold on the last operation. The test
case with UNION DISTINCT following INTERSECT ALL at least does not
include invalid data, but in fact the whole INTERSECT ALL code could
be refactored for better semantic triggers.
The patch fixes a typo in st_select_lex_unit::prepare() where a local
have_except_all_or_intersect_all masked the eponymous data member, which
wrongly triggered unique key release in st_select_lex_unit::prepare().
The patch also fixes an unknown error being returned in case
ha_disable_indexes() fails.
Note: optimize_bag_operation() does some operator substitutions, but
it does not run under PS. So if there is a difference in a test with
--ps, that means the non-optimized (have_except_all_or_intersect_all ==
true) code path is not good.
Note 2: a VIEW is stored and executed in normal mode (see
Sql_mode_save_for_frm_handling), hence when the SELECT order is different
in Oracle mode (defined by parsed_select_expr_cont()) it must be
covered by --disable_view_protocol.
In the main.plugin test this function is called assuming the function
prototype 'int (*)(THD *, st_mysql_show_var *, void *, system_status_var *, enum_var_type)'
as changed in b4ff64568c.
We update ha_example::show_func_example to match the prototype
with which it is called.
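Under that prototype, the show function in ha_example would be declared
roughly as follows (a sketch; the argument names are illustrative):
static int show_func_example(THD *thd, struct st_mysql_show_var *var,
                             void *buff, struct system_status_var *status_var,
                             enum enum_var_type scope);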
If the execution of the two reads in log_t::get_lsn_approx() is
interleaved with concurrent writes of those fields in
log_t::write_buf() or log_t::persist(), the returned approximation
will be an upper bound. If log_t::append_prepare_wait() is pending,
the approximation could be a lower bound.
We must adjust each caller of log_t::get_lsn_approx() for the
possibility that the return value is larger than
MAX(oldest_modification) in buf_pool.flush_list.
af_needed_for_redo(): Add a comment that explains why the glitch
is not a problem.
page_cleaner_flush_pages_recommendation(): Revise the logic for
the unlikely case cur_lsn < oldest_lsn. The original logic would have
invoked af_get_pct_for_lsn() with a very large age value, which
would likely cause an overflow of the local variable lsn_age_factor,
and make pct_for_lsn a "random number". Based on that value,
total_ratio would be normalized to something between 0.0 and 1.0.
Nothing extremely bad should have happened in this case;
the innodb_io_capacity_max should not be exceeded.
buf_pool_t::resize(): After successfully shrinking the buffer pool,
announce the success. The size had already been updated in shrunk().
After failing to shrink the buffer pool, re-enable the adaptive
hash index if it had been enabled.
Reviewed by: Debarun Banerjee
Calling SetArrayOptions with Nodes[i - 1].Key as its nm argument
exposed a MemorySanitizer error when i=0 for the tests:
* connect.bson_udf
* connect.json_udf
* connect.json_udf_bin
It is assumed that a basic optimization would have eliminated
these invalid expressions.
As the nm argument was unused, it has been removed.
THD::reset_sub_statement_state and THD::restore_sub_statement_state
swap the THD's auto_inc_intervals_forced (a Discrete_intervals_list)
with a temporary local variable in order to execute other things before
restoring it at the end of Table_triggers_list::process_triggers, under a
rpl_master_erroneous_autoinc(true) condition, as exposed by the
rpl.rpl_trigger test.
The uninitialized data isn't used and the only required action is to
copy the data in one direction. As the intent is for the auto_inc_intervals_forced
value to be overwritten or unused, MEM_UNDEFINED is used on it to
ensure the previous state is considered invalid.
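A sketch of the idea (the exact placement in the function is not shown):
/* the destination will be overwritten or left unused, so tell MSAN that
   reading its current contents is invalid */
MEM_UNDEFINED(&auto_inc_intervals_forced, sizeof(auto_inc_intervals_forced));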
The other uses of reset_sub_statement_state in Item_sp::execute_impl
also follow the same pattern of taking a copy to restore within the
same function.
Without this increase, the mtr test case pre/post conditions will
fail, as stack usage has increased under MSAN with clang-20.1.
ASAN takes an 11M stack; however, there was no obvious gain in MSAN
test success beyond 2M.
The behaviour observed with a smaller stack size was normally a SEGV.
Hide the default stack size from the sysvar tests that expose
thread-stack as a variable with its default value.
Despite being included in the HAVE_valgrind define, MSAN is best
differentiated from valgrind in the server identifier, as for these
purposes they have a distinct and different set of behaviours.
MSAN has its own set of test inclusions that are different
from valgrind's, and so including "valgrind" in a server string that
gets tested for valgrind would incorrectly exclude some tests
that are suitable for MSAN but not valgrind.
There's a have_sanitizer system variable for exposing
the sanitizer being used, so there's no need for
version verboseness.
Correct the have_sanitizer system variable description to
include MSAN, which has been possible for a while.
In the original fix, commit 82d7419e06,
a 16k stack frame limit was imposed. Under MSAN the stack usage is
doubled. Debug builds without optimization can use more as well.
ASAN Debug builds also exceeded the 16k stack frame limit.
To keep some safety limit, a 64k limit is imposed on the compiler
under MSAN or ASAN with CMAKE_BUILD_TYPE=Debug.
Clang ~16+ with MSAN became quite strict about uninitialized
data being passed to and returned from functions. Non-debug builds
have a basic optimization that hides these cases from those builds.
Two InnoDB cases violate the assumptions; however, once inlined
with a basic optimization, the accesses to uninitialized values
that existed are removed.
(MDEV-36316) rec_set_bit_field_2 calling mach_read_from_2 hits a read of
bits it wasn't actually changing.
(MDEV-36327) The function dict_process_sys_columns_rec left
nth_v_col uninitialized unless it was a virtual column. This was
ok as the function i_s_sys_columns_fill_table also didn't read
this value unless it was a virtual column.
my_time_fraction_remainder(): Remove a DBUG_ASSERT, because there is
none in sec_part_shift() or sec_part_unshift() either. A buffer
overflow should be caught by cmake -DWITH_ASAN=ON in all three.
This fixes a build with GCC 14.2 and
cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_CXX_FLAGS=-Og.
Reviewed by: Daniel Black
New warnings come from 3 places:
1. Warning C5287: comes from json_lib.c, from code like
compile_time_assert((int) JSON_VALUE_NULL == (int) JSV_NULL);
2. Warning C5287: a similar warning comes from wc_static_assert() in
wolfSSL's header file.
3. Warning C5286 in WolfSSL code, where -enum_value
(i.e. multiplying an enum by -1) is used.
To fix:
- Disable warnings in WolfSSL code, using the /wd<num> flag.
- Work around the warning for users of WolfSSL by disabling
wc_static_assert() with the -DWC_NO_STATIC_ASSERT compile flag.
- Rewrite some compile_time_assert calls in json_lib.c to avoid the warning.
- Add target_link_libraries(vio ${SSL_LIBRARIES}) so that
vio picks up -DWC_NO_STATIC_ASSERT.