When you only need the view structure, don't call handle_derived with
DT_CREATE and rely on its internal hackish check to skip DT_CREATE:
handle_derived is called from many different places, and this internal
check is indiscriminate. Instead, simply don't ask handle_derived to do
DT_CREATE if you don't want it to do DT_CREATE.
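A minimal, self-contained sketch of this calling convention; the phase flags
and the handle_derived() signature here are simplified stand-ins, not the real
server API:

#include <cstdio>

/* Simplified stand-ins for the real derived-table phase flags */
enum dt_phase : unsigned { DT_INIT= 1, DT_PREPARE= 2, DT_CREATE= 4 };

/* The handler runs exactly the phases the caller asked for,
   with no hidden "skip DT_CREATE" logic inside */
static void handle_derived(unsigned phases)
{
  if (phases & DT_INIT)    std::puts("init derived tables");
  if (phases & DT_PREPARE) std::puts("prepare derived tables");
  if (phases & DT_CREATE)  std::puts("create derived tables");
}

int main()
{
  /* A caller that only needs the view structure simply does not request
     DT_CREATE, instead of requesting it and relying on an internal check
     to skip it */
  handle_derived(DT_INIT | DT_PREPARE);
  return 0;
}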
Plugin variables in SET only locked the plugin until the end of the
statement. If a SET with a plugin variable was prepared, it was possible
to uninstall the plugin before EXECUTE. Then EXECUTE would crash,
trying to resolve a now-invalid pointer to a variable that no longer exists.
Fix: keep plugins locked until the prepared statement is closed.
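A minimal, self-contained sketch of the idea behind the fix; the types below
are stand-ins for the server's plugin_ref machinery, not the real API:

#include <cstdio>
#include <memory>
#include <vector>

/* Stand-in for a locked plugin reference; its destruction models the
   plugin finally being unlocked */
struct PluginRef
{
  const char *name;
  ~PluginRef() { std::printf("plugin %s unlocked\n", name); }
};

struct PreparedStatement
{
  /* Plugins referenced by bound system variables stay locked here until
     the statement is deallocated, not just until the end of PREPARE */
  std::vector<std::shared_ptr<PluginRef>> locked_plugins;
};

int main()
{
  auto plugin= std::shared_ptr<PluginRef>(new PluginRef{"example_plugin"});
  PreparedStatement stmt;
  stmt.locked_plugins.push_back(plugin);   /* PREPARE "SET plugin_var=..." */
  plugin.reset();                          /* UNINSTALL PLUGIN drops its own ref */
  /* EXECUTE can still safely resolve the variable: the statement's reference
     keeps the plugin alive until the statement is closed */
  std::printf("EXECUTE uses %s\n", stmt.locked_plugins[0]->name);
  return 0;
}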
A user connected to a server with an expired password
can't change the password with the statement "SET password=..."
if this statement is run in PS mode. In this use case the user
gets the error ER_MUST_CHANGE_PASSWORD on the attempt to run
the statement PREPARE stmt FROM "SET password=...";
The reason the password reset fails for such a user with the
statement PREPARE stmt FROM "SET password=..." is that PS-related
statements are not listed among the commands allowed for execution
by a user with an expired password. However, simply adding the PS-related
statements (PREPARE FROM/EXECUTE/DEALLOCATE PREPARE) to the list of
statements allowed for execution by a user with an expired password is not
enough to solve the problem, since it would open the opportunity for such a
user to execute any statement in PS mode.
To exclude this opportunity, an additional check that the statement
being prepared for execution in PS mode is a SET statement has to be added.
This extra check has been added by this patch to the method
Prepared_statement::prepared(), which is executed when any statement is
prepared for execution in PS mode.
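A minimal, self-contained model of that extra check; the real check lives in
the server's prepare code and inspects the parsed command, and the names below
are illustrative:

#include <cstdio>

/* Illustrative subset of SQL command codes */
enum class SqlCommand { SET_OPTION, SELECT_STMT, INSERT_STMT };

/* Returns true if preparing this command is allowed for the session */
static bool allowed_to_prepare(bool password_expired, SqlCommand cmd)
{
  /* PREPARE/EXECUTE/DEALLOCATE themselves are allowed for a user with an
     expired password, but the statement being prepared must be SET
     (e.g. SET password=...); otherwise the expired-password restriction
     could be bypassed through the PS interface */
  if (password_expired && cmd != SqlCommand::SET_OPTION)
    return false;                              /* ER_MUST_CHANGE_PASSWORD */
  return true;
}

int main()
{
  std::printf("PREPARE 'SET password=...' -> %s\n",
              allowed_to_prepare(true, SqlCommand::SET_OPTION) ? "ok" : "denied");
  std::printf("PREPARE 'SELECT ...'       -> %s\n",
              allowed_to_prepare(true, SqlCommand::SELECT_STMT) ? "ok" : "denied");
  return 0;
}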
Running statements with a SET STATEMENT FOR clause is handled incorrectly
when the whole statement is executed in prepared statement mode.
For example, running the following statement
SET STATEMENT sql_mode = 'NO_ENGINE_SUBSTITUTION' FOR CREATE TABLE t1 AS SELECT CONCAT('abc') AS c1;
results in different definitions of the table t1 depending on whether
the statement is executed as a prepared or as a regular statement.
In the first case the column c1 is defined as
`c1` varchar(3) DEFAULT NULL
whereas in the second case the column c1 is defined as
`c1` varchar(3) NOT NULL
The different definitions of the column c1 arise from the fact that
the value of the data member Item_func_concat::maybe_null depends on
whether strict mode is on or off. Below is the definition of the method
fix_fields() of the class Item_str_func, which is the base class of the
class Item_func_concat created on parsing the
SET STATEMENT FOR clause.
bool Item_str_func::fix_fields(THD *thd, Item **ref)
{
  bool res= Item_func::fix_fields(thd, ref);
  /*
    In Item_str_func::check_well_formed_result() we may set null_value
    flag on the same condition as in test() below.
  */
  maybe_null= maybe_null || thd->is_strict_mode();
  return res;
}
Although the clause SET STATEMENT sql_mode = 'NO_ENGINE_SUBSTITUTION' FOR
is parsed in the PREPARE phase during processing of the prepared statement,
the actual setting of the sql_mode system variable is done in the EXECUTE phase.
On the other hand, the method Item_str_func::fix_fields is called in the PREPARE
phase. As a result, thd->is_strict_mode() returns true while the method
Item_str_func::fix_fields() is being called, the data member maybe_null is
assigned the value true, and the column c1 is defined as DEFAULT NULL.
To fix the issue, the system variables listed in the SET STATEMENT FOR clause
are now set at the beginning of the PREPARE phase, right before
calling the function check_prepared_statement(), and their original values
are restored immediately after return from this function.
Additionally, to avoid code duplication, the source code used in the function
mysql_execute_command() for setting the variables specified by a SET STATEMENT
clause was extracted into the standalone function
run_set_statement_if_requested(). This new function is called from
the function mysql_execute_command() and the method
Prepared_statement::prepare().
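A minimal, self-contained sketch of this flow; apart from the name
run_set_statement_if_requested(), the types are simplified stand-ins for the
real THD/LEX structures:

#include <cstdio>

struct SessionVars { bool strict_mode; };

struct ParsedStatement
{
  bool has_set_statement_clause;   /* a SET STATEMENT ... FOR clause was used */
  bool requested_strict_mode;      /* value requested by that clause */
};

/* Applies the variables from a SET STATEMENT ... FOR clause, if any */
static void run_set_statement_if_requested(SessionVars *vars,
                                           const ParsedStatement &stmt)
{
  if (stmt.has_set_statement_clause)
    vars->strict_mode= stmt.requested_strict_mode;
}

/* Models the PREPARE path: set the variables before the PREPARE-time
   fix_fields() work, restore them right after */
static void prepare_statement(SessionVars *vars, const ParsedStatement &stmt)
{
  SessionVars saved= *vars;
  run_set_statement_if_requested(vars, stmt);
  /* check_prepared_statement() would run here and now sees the same sql_mode
     that EXECUTE will later see, so maybe_null (and hence the column
     definition) is computed consistently */
  std::printf("fix_fields() sees strict_mode=%d\n", vars->strict_mode);
  *vars= saved;
}

int main()
{
  SessionVars vars= { true };                 /* strict mode is the session default */
  ParsedStatement stmt= { true, false };      /* sql_mode = 'NO_ENGINE_SUBSTITUTION' */
  prepare_statement(&vars, stmt);
  std::printf("after PREPARE strict_mode=%d\n", vars.strict_mode);
  return 0;
}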
The reason for the failure is that
thd->mdl_context.release_transactional_locks()
was called after commit & rollback even in cases where the current
transaction is still active.
For 10.2, 10.3 and 10.4 the fix is simple:
- Replace all calls to thd->mdl_context.release_transactional_locks() with
  thd->release_transactional_locks(). The thd function will only call
  the mdl_context function if there is no active transaction (see the
  sketch below).
In 10.6 we will apply a better fix, where the return value of some
trans_xxx() functions will be changed to indicate whether the call closed
the transaction or not. This will avoid the need for the indirect call.
Other things:
- trans_xa_commit() and trans_xa_rollback() will automatically
  call release_transactional_locks() if the transaction is closed.
- We can't do that for the other functions, as the callers of many of these
  do additional work (like close_thread_tables) before calling
  release_transactional_locks().
- Added missing abort_result_set() and missing DBUG_RETURN in
  select_create::send_eof()
- Fixed wrong indentation in injector::transaction::commit()
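A minimal, self-contained sketch of the wrapper described in the first bullet;
the types are stand-ins, and the real code checks the server's transaction
state flags rather than a plain bool:

#include <cstdio>

struct MDL_context_stub
{
  void release_transactional_locks() { std::puts("MDL locks released"); }
};

struct THD_stub
{
  MDL_context_stub mdl_context;
  bool in_active_transaction;

  /* Only forward to the MDL context when the current transaction
     is really closed */
  void release_transactional_locks()
  {
    if (!in_active_transaction)
      mdl_context.release_transactional_locks();
    else
      std::puts("transaction still active, keeping MDL locks");
  }
};

int main()
{
  THD_stub thd;
  thd.in_active_transaction= true;
  thd.release_transactional_locks();   /* keeps the locks */
  thd.in_active_transaction= false;
  thd.release_transactional_locks();   /* now releases them */
  return 0;
}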
Relaxed the check for garbage at the end of the packet in the case of no
parameters; such trailing data is sent by:
- mysqlnd from PHP < 7.3
- mysql-connector-python (any version)
- mysql-connector-java (any version)
Added a check for array binding.
Fixed the test according to the new paradigm (junk at the end of the packet
is allowed).
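A minimal, self-contained sketch of the relaxed check; the packet layout is
simplified and the strict treatment of array binding is an illustrative
reading of the description above:

#include <cstdint>

/* Validates the tail of a COM_STMT_EXECUTE-style packet when the prepared
   statement has no parameters */
static bool check_no_param_tail(const uint8_t *pos, const uint8_t *end,
                                bool array_binding)
{
  if (!array_binding)
    return true;            /* trailing junk is tolerated: the clients listed
                               above send extra bytes here */
  return pos == end;        /* array (bulk) binding is checked strictly */
}

int main()
{
  uint8_t buf[4]= { 0, 1, 2, 3 };
  /* no parameters, non-bulk execution: 4 junk bytes at the end are accepted */
  return check_no_param_tail(buf, buf + 4, false) ? 0 : 1;
}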
Prepared statements which were run over the binary protocol crashed
the server if the statement did not have the CF_PS_ARRAY_BINDING_OPTIMIZED
flag, was executed in bulk mode, and a BF abort occurred.
This was because the bulk execution resulted in several statements without
wsrep_after_statement() being called in between, which confused wsrep
transaction state tracking.
As a fix, call wsrep_after_statement() in the bulk loop after each execution
if CF_PS_ARRAY_BINDING_OPTIMIZED is not set.
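A minimal, self-contained sketch of the bulk loop with this fix applied;
CF_PS_ARRAY_BINDING_OPTIMIZED and wsrep_after_statement() follow the
description above, everything else is a stand-in:

#include <cstdio>

static const unsigned long CF_PS_ARRAY_BINDING_OPTIMIZED= 1UL << 0;

static bool execute_one_set_of_parameters(unsigned i)
{ std::printf("execute row %u\n", i); return false; }       /* stub */

static bool wsrep_after_statement()
{ std::puts("wsrep_after_statement()"); return false; }      /* stub */

/* Bulk (array-bound) execution of a prepared statement */
static bool execute_bulk(unsigned long stmt_flags, unsigned row_count)
{
  const bool optimized= (stmt_flags & CF_PS_ARRAY_BINDING_OPTIMIZED) != 0;
  for (unsigned i= 0; i < row_count; i++)
  {
    if (execute_one_set_of_parameters(i))
      return true;
    /* Without this call the wsrep transaction tracker sees several
       statements back to back, and a BF abort in between crashes the
       server; the optimized array-binding path handles this itself */
    if (!optimized && wsrep_after_statement())
      return true;
  }
  return false;
}

int main() { return execute_bulk(0, 3) ? 1 : 0; }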
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
In the case of direct execution (stmtid=-1, mariadb_stmt_execute_direct in the
C API) the application is in control of how many parameters the client sends
to the server. If this number is not equal to the actual number of query
parameters, the server may start to interpret the packet data incorrectly,
e.g. starting from the size of the null bitmap, and that could cause it to
crash at some point. This commit introduces some additional COM_STMT_EXECUTE
packet sanity checks (a sketch follows the list):
- checking that the "types sent" byte is set and that its value is equal to 1;
  if it's not direct execution, then the value is 0 or 1
- checking that the parameter type value is a valid type, and that the
  parameter flags value is 0 or has only the "unsigned" bit set
- added more checks that reads do not go beyond the end of the packet
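A minimal, self-contained sketch of the kind of checks described above; the
packet layout is simplified and the type validation is a stand-in for the
server's real type enum:

#include <cstddef>
#include <cstdint>

static bool is_known_param_type(uint8_t) { return true; }   /* stand-in */

/* Validates the parameter section of a COM_STMT_EXECUTE-style packet */
static bool check_parameter_section(const uint8_t *pos, const uint8_t *end,
                                    size_t param_count, bool direct_execution)
{
  size_t bitmap_len= (param_count + 7) / 8;                  /* null bitmap */
  if (static_cast<size_t>(end - pos) < bitmap_len + 1)
    return false;                      /* read would go past the packet end */
  pos+= bitmap_len;

  uint8_t types_sent= *pos++;
  if (direct_execution ? (types_sent != 1) : (types_sent > 1))
    return false;                      /* must be 1, or 0/1 if not direct */

  if (types_sent)
  {
    if (static_cast<size_t>(end - pos) < param_count * 2)
      return false;                    /* 2 bytes (type, flags) per parameter */
    for (size_t i= 0; i < param_count; i++)
    {
      uint8_t type=  pos[2 * i];
      uint8_t flags= pos[2 * i + 1];
      if (!is_known_param_type(type))
        return false;                  /* reject unknown parameter types */
      if (flags != 0 && flags != 0x80)
        return false;                  /* only the "unsigned" bit may be set */
    }
  }
  return true;
}

int main()
{
  const uint8_t pkt[]= { 0x00, 0x01, 0x08, 0x00 };  /* bitmap, types_sent, (type, flags) */
  return check_parameter_section(pkt, pkt + sizeof(pkt), 1, true) ? 0 : 1;
}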
This commit fixes the problems with S3 after the "DROP TABLE FORCE" changes.
It also fixes all failing replication S3 tests.
A slave is delayed if it is trying to execute replicated queries on a
table that the master has already converted to S3 later in the binlog.
Fixes for replication events on S3 tables for delayed slaves:
- INSERT, INSERT ... SELECT and CREATE TABLE are ignored but written
  to the binary log. UPDATE & DELETE will be fixed in a future commit.
Other things:
- On slaves with --s3-slave-ignore-updates set, allow S3 tables to be
  opened in read-write mode (see the sketch after this list). This was done
  to be able to ignore-but-replicate queries like INSERT. Without this change
  any open of an S3 table failed with 'Table is read only', which is too
  early to be able to replicate the original query.
- Errors are now printed if a handler::extra() call fails in
  wait_while_tables_are_used().
- The error for row changes is changed from HA_ERR_WRONG_COMMAND
  to HA_ERR_TABLE_READONLY.
- Disabled some maria_extra() calls for S3 tables, as these could cause
  S3 tables to fail in some cases.
- Added the missing thr_lock_delete() to ma_open() in case of failure.
- Removed the unneeded argument 'table' from mysql_prepare_insert().
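A minimal, self-contained sketch of the open-mode change from the first item
under "Other things"; the option name follows the description above, the rest
is a stand-in for the real open path:

#include <cstdio>

static bool s3_slave_ignore_updates= true;        /* server option (stand-in) */

/* Decides whether an S3 table may be opened for writing */
static const char *open_s3_table(bool want_write, bool is_slave_thread)
{
  /* On a slave with --s3-slave-ignore-updates, allow a read-write open so
     the replicated statement can be recognized, ignored and still written
     to the binary log, instead of failing early with "Table is read only" */
  if (want_write && !(is_slave_thread && s3_slave_ignore_updates))
    return "ERROR: Table is read only";
  return "OK";
}

int main()
{
  std::printf("slave INSERT open : %s\n", open_s3_table(true, true));
  std::printf("client INSERT open: %s\n", open_s3_table(true, false));
  return 0;
}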
Protocol_local was fixed so it can be used now.
Some Protocol:: methods, as well as the net_ok and net_send_error functions,
were made virtual so they can be adapted.
The execute_sql_string function is exported to the plugins.
It is to be changed later to work with mysql_use_result.
An alternative implementation (replacing the one based on repertoire).
This implementation makes Field send itself to Protocol_text using
data type specific Protocol methods rather than field->val_str()
followed by protocol_text->store_str().
As Field now sends itself in the same way to all protocol types
(e.g. Protocol_binary, Protocol_text, Protocol_local),
the method Field::send_binary() was renamed to just Field::send().
Note, this change introduces symmetry between Field and Item,
because Items also send themselves using a single method, Item::send(),
which is used for *all* protocol types.
The performance improvement comes from the fact that Protocol_text
implements these data type specific methods using store_numeric_string_aux()
rather than store_string_aux(). The conversion now happens only when
character_set_results is a character set that is not ASCII compatible
(e.g. UCS2, UTF16, UTF32).
In the old code (before any MDEV-23162 work, e.g. as of 10.5.4),
Protocol_text::store(Field*) used val_str() for all data types,
so the execution went through the character set conversion routines
even for numeric and temporal data types.
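A minimal, self-contained model of this dispatch: each field type sends itself
through a data type specific Protocol method instead of converting to a string
first. The class and method names below are illustrative, not the real server
hierarchy:

#include <cstdio>
#include <string>

struct Protocol
{
  virtual ~Protocol() {}
  virtual void store_longlong(long long v)= 0;
  virtual void store_str(const std::string &s)= 0;
};

struct Protocol_text : Protocol
{
  /* Numeric values only need a number-to-ASCII conversion; no
     character_set_results conversion is involved */
  void store_longlong(long long v) override { std::printf("%lld\n", v); }
  void store_str(const std::string &s) override { std::printf("%s\n", s.c_str()); }
};

struct Field
{
  virtual ~Field() {}
  /* One send() used for all protocol types, mirroring Item::send() */
  virtual void send(Protocol *p)= 0;
};

struct Field_long : Field
{
  long long value= 42;
  void send(Protocol *p) override { p->store_longlong(value); }  /* no val_str() round trip */
};

int main()
{
  Protocol_text text;
  Field_long f;
  f.send(&text);   /* the field picks the type-specific method itself */
  return 0;
}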
Benchmarking summary (see details in MDEV-23478):
The new approach stably demonstrates an additional improvement compared
to the previous implementation (the smaller the time, the better):
Original (the commit before MDEV-23162, be98036f25):  1m9.336s  1m9.290s  1m9.300s
MDEV-23162 (the repertoire optimization):             1m6.101s  1m5.988s  1m6.264s
MDEV-23478 (this commit):                             1m2.150s  1m2.079s  1m2.099s