At the second execution of the PS:
1. mark_as_dependent() is called with the same parameters as at the first
execution (select#4 and select#3)
2. as outer_select (select#3) has already been merged at the first
execution of the PS, it can no longer be reached via the outer_select()
function (and so cannot stop the iteration).
3. as a result all selects up to the top-level select, including the
select for 'ca', are marked as uncacheable.
4. marked uncacheable, it was executed incorrectly, filling its
temporary table several times and, in the end, using freed memory.
To avoid the problem we use name resolution context to go "up".
NOTE: the problem also exists in 10.2 but has no visible effect on
execution there; that is why the problem is only fixed starting with 10.3.
The patch also adds debug logging of important procedures and better
specifies the parameter types of st_select_lex::mark_as_dependent.
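As a purely illustrative, hedged sketch (the tables, columns and the alias
'ca' below are invented, not the original test case), a prepared statement of
roughly this shape, with a mergeable derived table and a correlated subquery
nested two levels below it, could exercise the scenario described above:

CREATE TABLE t1 (a INT);
CREATE TABLE t2 (b INT);
CREATE TABLE t3 (c INT);
PREPARE stmt FROM
  'SELECT * FROM (SELECT a FROM t1) ca
     WHERE ca.a IN (SELECT b FROM t2
                    WHERE b IN (SELECT c FROM t3 WHERE c = ca.a))';
EXECUTE stmt; -- first execution: the derived table and outer select get merged
EXECUTE stmt; -- second execution used to mark selects uncacheable up to the top level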
Adds an implementation for SELECT ... FOR UPDATE SKIP LOCKED /
SELECT ... LOCK IN SHARE MODE SKIP LOCKED.
This is implemented only in InnoDB at the moment, not in RocksDB yet.
This adds a new handler flag, HA_CAN_SKIP_LOCKED, that is used when the
storage engine advertises it. When a storage engine indicates this flag
it will get the TL_WRITE_SKIP_LOCKED and TL_READ_SKIP_LOCKED lock types.
The Lex structure has been updated to store both the FOR UPDATE/LOCK IN
SHARE MODE part and the SKIP LOCKED part, so the SHOW CREATE VIEW
implementation is simpler.
"SELECT FOR UPDATE ... SKIP LOCKED" combined with CREATE TABLE AS or
INSERT.. SELECT on the result set is not safe for STATEMENT based
replication. MIXED replication will replicate this as row based events."
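As a usage sketch (the job_queue table here is just an illustration), a second
session can skip rows on which another transaction holds row locks instead of
waiting for them:

-- session 1
START TRANSACTION;
SELECT * FROM job_queue WHERE id = 1 FOR UPDATE;

-- session 2: returns only the rows not locked by session 1, without waiting
SELECT * FROM job_queue FOR UPDATE SKIP LOCKED;
SELECT * FROM job_queue LOCK IN SHARE MODE SKIP LOCKED;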
Thanks to guidance from Facebook commit
193896c466
which helped verify the basic test case and the components that needed
implementing (even though every part was implemented differently).
Thanks to Marko for guidance on a simpler InnoDB implementation.
Reviewers: Marko, Monty
Use < TL_FIRST_WRITE for determining a READ transaction.
Use TL_FIRST_WRITE as the comparison point, replacing TL_WRITE_ALLOW_WRITE
as the minimum WRITE lock type.
The bug caused crashes of the server when processing queries with nested
table value constructors (TVC). It happened because the grammar rules used
to parse TVCs employed the same global lists for both the nested TVC and
the nesting TVC. As a result invalid select trees were constructed for
queries with nested TVCs, and this led to crashes at the prepare stage.
This patch provides separate list structures for each TVC nesting level.
Besides this, the patch fixes a bug in the function wrap_tvc() that missed
the inheritance of SELECT_LEX::exclude_from_table_unique_test for the
selects that wrapped TVCs. This inheritance is critical for specifications
of derived tables that employ nested TVCs.
Approved by dmitry.shulga@mariadb.com
This bug caused crashes of the server when processing queries with table
value constructors (TVC) that contained subqueries and were themselves used
as subselects. For such TVCs the following transformation is applied at the
prepare stage:
VALUES (v1), ... (vn) => SELECT * FROM (VALUES (v1), ... (vn)) tvc_x.
This transformation reduces the problem of evaluating TVCs used as
subselects to the problem of evaluating regular subselects.
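For example (a hedged sketch with illustrative names), a TVC of this kind
used as a subselect:

SELECT * FROM t1
WHERE t1.a IN (VALUES (1), ((SELECT MAX(b) FROM t2)));

is conceptually handled as:

SELECT * FROM t1
WHERE t1.a IN (SELECT * FROM (VALUES (1), ((SELECT MAX(b) FROM t2))) tvc_x);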
The transformation is implemented in wrap_tvc(). The code of the function
tries to mimic the behaviour of the parser when processing the result of
the transformation. However this imitation was not free of flaws. First,
the function called the method exclude(), which completely destroyed the
select tree structures below the transformed TVC. Second, the function
used the procedure mysql_new_select() to create st_select_lex nodes for
both the wrapping select of the transformation and the TVC. This also led
to the construction of invalid select tree structures.
The patch actually re-engineers the code of wrap_tvc().
Approved by Oleksandr Byelkin <sanja@mariadb.com>
through 2nd execution of SP
This bug caused a server crash on the second call of any stored procedure
that contained an UPDATE statement over a multi-table view for which an
error was reported at the prepare stage.
On the first call of the stored procedure, after this error was reported,
the preparation of the UPDATE statement finished without calling the
function SELECT_LEX::save_prep_leaf_tables() for the SELECT used as the
definition of the view. This left the SELECT_LEX structure used by the
UPDATE statement in an inconsistent state for the second call of the stored
procedure.
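One possible reproduction shape, sketched here with illustrative names (the
error in this sketch is the "can not modify more than one base table through
a join view" kind raised while preparing the UPDATE):

CREATE TABLE t1 (a INT);
CREATE TABLE t2 (b INT);
CREATE VIEW v1 AS SELECT a, b FROM t1, t2;
CREATE PROCEDURE p1()
  UPDATE v1 SET a = 1, b = 2;
CALL p1(); -- reports the error
CALL p1(); -- used to crash on the second call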
Approved by Oleksandr Byelkin <sanja@mariadb.com>
On parsing statements where a starting backtick (`) delimiter doesn't have
a corresponding ending backtick, the current pointer to a position inside the
pre-processed buffer could go beyond the end of the buffer.
The reported bug was caused by the commit d496765903
"MDEV-22022 Various mangled SQL statements will crash 10.3 to 10.5 debug builds".
To fix the issue, both pointers m_ptr and m_cpp_ptr must be rolled back to
the previous position in the raw and pre-processed input streams,
respectively, in case the end of the query is reached during parsing.
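A minimal sketch of the malformed input in question, where the opening
backtick is never closed and the scanner runs up to the end of the query:

SELECT `a FROM t1;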
Due to a premature cleanup of the unit that specified a recursive CTE
used in the second operand of a union, the server fell into an infinite
loop on the reported test case. In other cases this premature cleanup
could cause other problems.
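A hedged sketch of the query shape in question (names are illustrative),
with the recursive CTE referenced only in the second operand of the union:

WITH RECURSIVE r AS (
  SELECT 1 AS a
  UNION ALL
  SELECT a + 1 FROM r WHERE a < 3
)
SELECT a FROM t1
UNION
SELECT a FROM r;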
The bug is the result of a not quite correct fix for MDEV-17024. The
unit that specifies a recursive CTE has to be cleaned up only after the
cleanup of the last external reference to this CTE. This means that
cleanups of the unit triggered by anything other than the cleanup of an
external reference to the CTE must be blocked.
The usage of local table chains in selects to get at external references
to recursive CTEs was not correct either, because of possible merges of
some selects.
Also fixed a minor bug in st_select_lex::set_explain_type() that caused
'RECURSIVE UNION' to be printed instead of 'UNION' in EXPLAIN output for
external references to a recursive CTE.
Reimplement MDEV-14275 Improving memory utilization for information schema
Postpone temp table instantiation until after setup_fields().
Replace all unused (not marked in read_set) columns in an I_S table
with CHAR(0). This can drastically reduce the footprint of a MEMORY
table (a TABLE_CATALOG alone is 1538 bytes per row).
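For example, in a query like

SELECT table_name FROM information_schema.tables WHERE table_schema = 'test';

roughly only the referenced columns (TABLE_SCHEMA, TABLE_NAME) need their real
types; the remaining columns of the intermediate temporary table can be
created as CHAR(0).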
This does not change the engine. If the table was decided to be Aria
(because of, say, blobs) then after optimization it'll stay Aria
even if all blobs were removed.
Note 1: when transforming the table structure, share->blob_fields is
preserved, otherwise Aria might switch from DYNAMIC to STATIC row format
and expect a special field for a deleted mark, which create_tmp_table
didn't provide.
Note 2: the optimizer was calling handler::info() (to learn the number of
rows) before the temp table was populated. That didn't make much sense.
Now it's done before the table is even instantiated. Preserve the old
behavior and report 0 rows.
This reverts e2664ee836 and a8458a2345
Lex_input_stream::scan_ident_delimited() could go beyond the end
of the input when a starting backtick (`) delimiter did not have a
corresponding ending backtick.
Fix: catch the case when yyGet() returns 0, which means
either end-of-query or a straight 0x00 byte inside backticks,
and make the parser fail on a syntax error, displaying the left
backtick as the syntax error position.
In the case of the 'filename' character set, in a script like this:
SET CHARACTER_SET_CLIENT=17; -- 17 is 'filename'
SELECT doc.`Children`.0 FROM t1;
the ending backtick was not recognized as such, because my_charlen() returns
0 for a straight backtick (backticks must normally be encoded as @0060 in
'filename').
The same fix works for 'filename': the execution skips the backtick
and reaches the end of the query, then yyGet() returns 0.
This fix is OK for now. But eventually 'filename' should either be disallowed
as a parser character set, or fixed to handle encoded punctuation properly.
- Adding optional qualifiers to data types:
CREATE TABLE t1 (a schema.DATE);
Qualifiers now work only for three pre-defined schemas:
mariadb_schema
oracle_schema
maxdb_schema
These schemas are virtual (hard-coded) for now, but may turn into real
databases on disk in the future.
- mariadb_schema.TYPE now always resolves to a true MariaDB data
type TYPE without sql_mode specific translations.
- oracle_schema.DATE translates to MariaDB DATETIME.
- maxdb_schema.TIMESTAMP translates to MariaDB DATETIME.
- Fixing SHOW CREATE TABLE to use a qualifier for a data type TYPE
if the current sql_mode translates TYPE to something else.
The above changes fix the reported problem, so this script:
SET sql_mode=ORACLE;
CREATE TABLE t2 AS SELECT mariadb_date_column FROM t1;
is now replicated as:
SET sql_mode=ORACLE;
CREATE TABLE t2 (mariadb_date_column mariadb_schema.DATE);
and the slave can unambiguously treat DATE as the true MariaDB DATE
without the ORACLE-specific translation to DATETIME.
Similarly,
SET sql_mode=MAXDB;
CREATE TABLE t2 AS SELECT mariadb_timestamp_column FROM t1;
is now replicated as:
SET sql_mode=MAXDB;
CREATE TABLE t2 (mariadb_timestamp_column mariadb_schema.TIMESTAMP);
so the slave treats TIMESTAMP as the true MariaDB TIMESTAMP
without the MAXDB-specific translation to DATETIME.
* Fix the crash: if the IN-to-EXISTS rewrite causes an error (and so
JOIN::optimize() fails with an error, too), don't call
update_used_tables(). Terminate the query execution instead.
* Fix the cause of the error in the IN-to-EXISTS rewrite: don't do
the rewrite if doing it would cause an error of this kind (see the
sketch after this list):
This version of MariaDB doesn't yet support 'SUBQUERY in ROW in left
expression of IN/ALL/ANY'
* Fix another issue exposed by this testcase:
JOIN::setup_subquery_caches() may be invoked before any select has
saved its query plan, and will crash because none of the SELECTs
has called create_explain_query_if_not_exists() to create the Explain
Data Structure for this SELECT.
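A hedged sketch (with illustrative names) of a query shape that hits this
unsupported case, where the left part of the IN predicate is a row that
contains a subquery:

SELECT * FROM t1
WHERE ((SELECT MAX(a) FROM t2), t1.b) IN (SELECT c, d FROM t3);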
TODO: When merging this to 10.2, remove the poorly-placed call to
create_explain_query_if_not_exists() made by the fix for MDEV-16153.