The EXPLAIN EXTENDED statement run as a prepared statement can produce an extra
warning compared with the case when the EXPLAIN EXTENDED statement is run as
a regular statement. For example, the following test case
CREATE TABLE t1 (c int);
CREATE TABLE t2 (d int);
EXPLAIN EXTENDED SELECT (SELECT 1 FROM t2 WHERE d = c) FROM t1;
produces the extra warning
"Field or reference 'c' of SELECT #2 was resolved in SELECT #1"
in case the above-mentioned "EXPLAIN EXTENDED" statement is executed
in PS mode, that is, by submitting the following statements:
PREPARE stmt FROM "EXPLAIN EXTENDED SELECT (SELECT 1 FROM t2 WHERE d = c) FROM t1";
EXECUTE stmt;
The reason for the extra warning being emitted lies in the way items
are handled (fixed) during execution of the JOIN::prepare() method.
The method Item_field::fix_fields() calls the find_field_in_tables()
function in case a field hasn't yet been associated with the item.
The implementation of the find_field_in_tables() function first checks whether
a table containing the required field has already been opened and cached.
This is done by checking the data member item->cached_table. This data member
is set while handling the PREPARE FROM statement and checked while executing
the EXECUTE statement. If the data member item->cached_table is set,
the find_field_in_tables() function is invoked and the
mark_select_range_as_dependent() function is called if the field
is an outer reference. The mark_select_range_as_dependent() function
calls the mark_as_dependent() function, which finally invokes
the push_warning_printf() function that produces the extra warning.
To fix the issue, the call to push_warning_printf() is eliminated in case
it is run indirectly as a result of handling an already opened table from
the Item_field::fix_fields() method.
Adds an implementation for SELECT ... FOR UPDATE SKIP LOCKED /
SELECT ... LOCK IN SHARE MODE SKIP LOCKED
This is implemented only in InnoDB at the moment, not in RocksDB yet.
This adds a new handler flag, HA_CAN_SKIP_LOCKED, that a storage engine
advertises when it supports the feature.
When a storage engine indicates this flag it will get
TL_WRITE_SKIP_LOCKED and TL_READ_SKIP_LOCKED lock types.
The Lex structure has been updated to store both the FOR UPDATE/LOCK IN
SHARE MODE and the SKIP LOCKED clauses so that the SHOW CREATE VIEW
implementation is simpler.
"SELECT FOR UPDATE ... SKIP LOCKED" combined with CREATE TABLE AS or
INSERT.. SELECT on the result set is not safe for STATEMENT based
replication. MIXED replication will replicate this as row based events."
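For illustration, a minimal usage sketch (the queue_jobs table and its rows
are hypothetical, not part of this change):
CREATE TABLE queue_jobs (id INT PRIMARY KEY, payload INT) ENGINE=InnoDB;
# connection 1: lock one row
BEGIN;
SELECT * FROM queue_jobs WHERE id = 1 FOR UPDATE;
# connection 2: return only the rows that are not locked, instead of waiting
SELECT * FROM queue_jobs FOR UPDATE SKIP LOCKED;
SELECT * FROM queue_jobs LOCK IN SHARE MODE SKIP LOCKED;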
Thanks to guidance from Facebook commit
193896c466
This helped verify the basic test case and the components that needed implementing
(even though every part was implemented differently).
Thanks to Marko for guidance on a simpler InnoDB implementation.
Reviewers: Marko, Monty
Use < TL_FIRST_WRITE for determining a READ transaction.
Use TL_FIRST_WRITE as the relative operator, replacing TL_WRITE_ALLOW_WRITE
as the minimum WRITE lock type.
Added a new wsrep_mode feature DISALLOW_LOCAL_GTID for this.
Nodes can have GTIDs for local transactions in the following scenarios:
- A DDL statement is executed with wsrep_OSU_method=RSU set.
- A DML statement writes to a non-InnoDB table.
- A DML statement writes to an InnoDB table with wsrep_on=OFF set.
If the user has set wsrep_mode=DISALLOW_LOCAL_GTID, these operations
produce the error ERROR HY000: Galera replication not supported
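A hedged illustration of the intended behaviour (the table name is hypothetical):
SET GLOBAL wsrep_mode = 'DISALLOW_LOCAL_GTID';
CREATE TABLE t_aria (a INT) ENGINE=Aria;   # hypothetical non-InnoDB table
INSERT INTO t_aria VALUES (1);
# ERROR HY000: Galera replication not supported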
Two new features for Galera:
* Write a warning to the error log if Galera replicates a table with a storage
engine not supported by Galera (at the moment only InnoDB is supported).
** The warning is pushed to the client also.
** MyISAM is allowed if wsrep_replicate_myisam=ON.
* Write a warning to the error log if Galera replicates a table with no primary key.
** The warning is pushed to the client also.
** MyISAM is allowed if wsrep_replicate_myisam=ON.
* In both cases, apply flood control if more than 10 identical warnings are written
to the error log (requires log_warnings > 1); flood control will suppress the
warnings for 300 seconds.
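A hedged sketch of one of the warning scenarios (the table name is hypothetical):
CREATE TABLE t_nopk (a INT) ENGINE=InnoDB;   # replicated table without a primary key
INSERT INTO t_nopk VALUES (1);
# a warning is written to the error log and pushed to the client;
# with log_warnings > 1, more than 10 identical warnings trigger
# flood control and further warnings are suppressed for 300 seconds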
Added a new enum variable `wsrep_mode` which can be used to turn on WSREP
features that are not part of the default behaviour.
Added the enum values `BINLOG_ROW_FORMAT_ONLY`, `REQUIRED_PRIMARY_KEY` and
`STRICT_REPLICATION`. `wsrep-mode=STRICT_REPLICATION` behaves
like the variable `wsrep_strict_ddl`.
The variable wsrep_strict_ddl is deprecated and, if set, the
new wsrep_mode setting is used instead.
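For illustration, a hedged sketch of setting the new variable (exact value syntax
per the server documentation):
SET GLOBAL wsrep_mode = 'STRICT_REPLICATION';
# the deprecated variable still works and maps to the new setting:
SET GLOBAL wsrep_strict_ddl = ON;   # now equivalent to wsrep_mode=STRICT_REPLICATION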
Reviewed and improved by: Jan Lindström <jan.lindstrom@mariadb.com>
Problem:
=======
Upon deleting or updating a row in a parent table (with a primary key), if
the child table has a virtual column and an associated key with ON UPDATE
CASCADE/ON DELETE CASCADE, it will result in a slave crash.
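A hedged reproduction sketch (table and column names are hypothetical), run on a
master with a row-based slave attached:
CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
CREATE TABLE child (id INT PRIMARY KEY, parent_id INT,
                    v INT AS (id * 10) VIRTUAL, KEY (v),
                    FOREIGN KEY (parent_id) REFERENCES parent (id)
                      ON DELETE CASCADE ON UPDATE CASCADE) ENGINE=InnoDB;
INSERT INTO parent VALUES (1);
INSERT INTO child (id, parent_id) VALUES (1, 1);
DELETE FROM parent WHERE id = 1;   # the cascading delete crashed the slave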
Analysis:
========
Tables which are related through a foreign key require prelocking similar to
triggers, i.e. if a table has triggers/foreign keys we should add all tables
and routines used by them to the prelocking set. This prelocking happens
during the 'open_and_lock_tables' call. Each table being opened is checked for
foreign key references. If a foreign key reference exists, then the child
table is opened and linked to the table_list. Upon any modification
to the parent table, its corresponding child tables are retrieved from the
table_list and updated accordingly. This prelocking works fine on the master.
On the slave, prelocking works for the following cases:
- Statement/mixed-based replication
- Row-based replication when trigger execution is enabled through
'slave_run_triggers_for_rbr=YES/LOGGING/ENFORCE'
Otherwise it results in an assert/crash, as the parent table will not find
the corresponding child table and it will be NULL. Dereferencing the NULL
pointer leads to slave server exit.
Fix:
===
Introduce a new 'slave_fk_event_map' flag similar to 'trg_event_map'. This
flag will ensure that when foreign keys are enabled in row-based replication,
all the parent and child tables are prelocked, so that the parent is able to
locate the child table.
Note: This issue is specific to the slave, hence only the slave needs to be
upgraded.
When a query using a recursive CTE whose definition contained wildcard
symbols in the recursive part was processed at the prepare stage, an
assertion was hit if the query was executed without any default database
set. The failure happened when the function insert_fields() tried to check
column privileges for the temporary table created for a recursive
reference to the CTE. No ACL checks are needed for any CTE, so this
check should be blocked as well. The patch formulates a stricter condition
under which this check is blocked, covering the case when a query
using recursive CTEs is executed with no default database set.
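A hedged reproduction sketch (not the original test case), run in a connection
with no default database selected:
WITH RECURSIVE cte AS
  (SELECT 1 AS a UNION ALL SELECT r.* FROM cte AS r WHERE r.a < 3)
SELECT * FROM cte;
# before the fix, resolving the wildcard r.* over the recursive
# reference to the CTE hit the assertion at the prepare stage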
Approved by Oleksandr Byelkin <sanja@mariadb.com>
The reason for the failure is that
thd->mdl_context.release_transactional_locks()
was called after commit & rollback even in cases where the current
transaction is still active.
For 10.2, 10.3 and 10.4 the fix is simple:
- Replace all calls to thd->mdl_context.release_transactional_locks() with
thd->release_transactional_locks(). The thd function will only call
the mdl_context function if there are no active transactional locks.
In 10.6 there will be a better fix, where the return value of some
trans_xxx() functions will be changed to indicate whether the call
closed the transaction or not. This will avoid the need for the indirect call.
Other things:
- trans_xa_commit() and trans_xa_rollback() will automatically
call release_transactional_locks() if the transaction is closed.
- We can't do that for the other functions, as the callers of many of these
do additional work (like close_thread_tables) before calling
release_transactional_locks().
- Added missing abort_result_set() and missing DBUG_RETURN in
select_create::send_eof()
- Fixed wrong indentation in injector::transaction::commit()
A deadlock is possible between the applier thread and a local committing thread
when there is an active FLUSH TABLE.
The applier thread should skip table share checks and locks when opening a table.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
This commit fixes the problems with S3 after the "DROP TABLE FORCE" changes.
It also fixes all failing replication S3 tests.
A slave is delayed if it is trying to execute replicated queries on a
table that has already been converted to S3 by the master later in the binlog.
Fixes for replication events on S3 tables for delayed slaves:
- INSERT, INSERT ... SELECT and CREATE TABLE are ignored but written
to the binary log. UPDATE & DELETE will be fixed in a future commit.
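A hedged sketch of the timeline this addresses (table name is hypothetical; the
slave is assumed to run with --s3-slave-ignore-updates=1):
# master binary log, in order:
INSERT INTO t1 VALUES (1);     # not yet executed by the delayed slave
ALTER TABLE t1 ENGINE=S3;      # master converts the table to S3
# when the delayed slave later reaches the INSERT, the row change is
# ignored for the now read-only S3 table but still written to the
# slave's binary log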
Other things:
- On slaves with --s3-slave-ignore-updates set, allow S3 tables to be
opened in read-write mode. This was done to be able to
ignore-but-replicate queries like insert. Without this change any
open of an S3 table failed with 'Table is read only' which is too
early to be able to replicate the original query.
- Errors are now printed if handler::extra() call fails in
wait_while_tables_are_used().
- The error message for row changes is changed from HA_ERR_WRONG_COMMAND
to HA_ERR_TABLE_READONLY.
- Disabled some maria_extra() calls for S3 tables, as these calls could cause
S3 tables to fail in some cases.
- Added missing thr_lock_delete() to ma_open() in case of failure.
- Removed the unneeded argument 'table' from mysql_prepare_insert().
The problem was that the server was calling virtual functions on a record
that was not initialized with new data.
This happened when fill_record() was aborted in the middle because of an
error in save_val() or save_in_field().
* Allocate items on thd->mem_root while refixing vcol exprs
* Register vcol tree changes and roll them back after the statement is executed.
Explanation:
Due to collation implementation specifics, an Item tree could change while being fixed.
The tricky thing here is to do it on a proper arena.
It's usually not a problem when a field is deterministic; however, it becomes a pain
otherwise, when allocations happen during refixing.
A non-deterministic field should be refixed on each statement, since it depends on
the environment state.
Changing the tree is temporary and therefore it should be reverted after the
statement execution.
First step in moving drop table out of the handler.
TODO: other methods that don't need an open table.
For now hton->drop_table is optional, for backward compatibility
reasons.
After this code
end_inplace:
if (thd->locked_tables_list.reopen_tables(thd, false))
goto err_with_mdl_after_alter;
the table is not reopened (need_reopen is false) but
some_table_marked_for_reopen is reset to false.
The Item_field is allocated at table lock time and assigned a new name on the first
ALTER, which is then freed at the end of the command. The second ALTER
accesses this Item_field and gets a garbage value.
The problem was that FLUSH TABLES was trying to read the latest sequence state,
which conflicted with a running ALTER SEQUENCE. Removed the reading
of the state when opening a table for FLUSH, as it's not needed in this
case.
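A hedged sketch of the conflicting pattern (the sequence name is hypothetical):
CREATE SEQUENCE s1;
# connection 1:
ALTER SEQUENCE s1 MAXVALUE 1000;
# connection 2, concurrently:
FLUSH TABLES;   # previously tried to read the latest sequence state and
                # could conflict with the running ALTER SEQUENCE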
Other thing:
- Fixed a potential issue with concurrently running ALTER SEQUENCE where
the later ALTER could potentially read old data