If compiling a non-DBUG binary with
-DDBUG_ASSERT_AS_PRINTF, asserts will be
changed to printf + stack trace (if stack
traces are enabled).
- Changed #ifndef DBUG_OFF to
#ifdef DBUG_ASSERT_EXISTS
for those uses of DBUG_OFF that were only there to enable
asserts
- Assert checks that could greatly impact
performance were changed to DBUG_ASSERT_SLOW, which
is not affected by DBUG_ASSERT_AS_PRINTF
- Added one extra option to my_print_stacktrace() to
make it more silent when the stack trace is printed as
part of an assert.
- Added sql/mariadb.h file that should be included first by files in the sql
directory, if sql_plugin.h is not used (sql_plugin.h adds SHOW variables
that must be added before my_global.h is included)
- Removed a lot of includes of my_global.h from include files
- Removed includes of some files that my_global.h automatically includes
- Removed duplicated includes of my_sys.h
- Replaced includes of my_config.h with my_global.h
Condition pushdown into materialized derived tables / views
with window functions (mdev-10855).
This patch just modified the function pushdown_cond_for_derived()
to support this feature.
Some test cases demonstrating this optimization were added to
derived_cond_pushdown.test.
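As a schematic illustration (hypothetical tables and columns, not taken
from the test file), a condition on a derived table whose select list
contains window functions can now be considered for pushdown:

select *
from (select a, b, sum(b) over (partition by a) as s
      from t2) as dt
where dt.a > 10;

Here dt.a > 10 refers only to a column used in the PARTITION BY clause,
so it presumably can be pushed into the WHERE clause of the materialized
derived table dt before the window function is computed.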
"Optimization for equi-joins of derived tables with GROUP BY"
should be considered rather as a 'proof of concept'.
The task itself is targeted at an optimization that employs re-writing
equi-joins with grouping derived tables / views into lateral
derived tables. Here's an example of such a transformation:
select t1.a,t.max,t.min
from t1 [left] join
(select a, max(t2.b) max, min(t2.b) min from t2
group by t2.a) as t
on t1.a=t.a;
=>
select t1.a,tl.max,tl.min
from t1 [left] join
lateral (select a, max(t2.b) max, min(t2.b) min from t2
where t1.a=t2.a) as tl
on 1=1;
The transformation pushes the equi-join condition t1.a=t.a into the
derived table making it dependent on table t1. It means that for
every row from t1 a new derived table must be filled out. However
the size of any of these derived tables is just a fraction of the
original derived table t. One could say that the transformation 'splits'
the rows used for the GROUP BY operation into separate groups
performing aggregation for a group only in the case when there is
a match for the current row of t1.
Apparently the transformation may produce a query with better
performance only in the case when
- the GROUP BY list refers only to fields returned by the derived table
- there is an index I on one of the tables T used in the FROM list of
the specification of the derived table whose prefix covers
the fields from the proper beginning of the GROUP BY list or
fields that are equal to those fields.
Whether the result of the re-writing can be executed faster depends
on many factors:
- the size of the original derived table
- the size of the table T
- whether the index I is clustering for table T
- whether the index I fully covers the GROUP BY list.
This patch only tries to improve the chosen execution plan using
this transformation. It tries to do it only when the chosen
plan reaches the derived table by a key whose prefix covers
all the fields of the derived table produced by the fields of
the table T from the GROUP BY list.
The code of the patch does not evaluate the cost of the improved
plan. If certain conditions are met the transformation is applied.
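For the example above, the precondition on indexes could be satisfied,
for instance, by a hypothetical index on the grouping column of t2:

create index i_t2_a on t2(a);

If the chosen plan accesses the derived table t through a key on t.a,
the prefix of this index covers the GROUP BY list and the rewriting
into a lateral derived table may be applied.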
Make st_select_lex::set_explain_type() take into account that JOIN_TABs
it is traversing may also be post-join aggregation JOIN_TABs (which
have pos_in_table_list=NULL, etc).
This patch fixes a serious flaw in the
code that supports condition pushdown into
materialized views / derived tables.
If a predicate happened to contain a reference
to a mergeable view / derived table and did
not depend directly on the target materialized
view / derived table, then the predicate was not
considered as a subject for pushdown to this view
/ derived table.
IB: Fixes in the logic deciding when to do versioned or usual row updates. Now it is
able to do unversioned updates for versioned tables just by disabling the
`TABLE_SHARE::versioned` flag.
SQL: DDL tracking for:
* RENAME TABLE, ALTER TABLE .. RENAME TO;
* DROP TABLE;
* data-modifying operations (f.ex. ALTER TABLE .. ADD/DROP COLUMN).
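For example, assuming a hypothetical system-versioned table t, the
tracked statements are of these forms:

rename table t to t_new;
alter table t_new rename to t;
alter table t add column c int;
drop table t;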
Significantly reduce the amount of InnoDB, XtraDB and Mariabackup
code changes by defining pfs_os_file_t as something that is
transparently compatible with os_file_t.
- Adding new virtual methods in Type_handler:
* Column_definition_prepare_stage1()
* Column_definition_prepare_stage2()
* calc_pack_length()
- Using new methods to remove type specific code in:
* Global function calc_pack_length()
* Column_definition::prepare_create_field()
* The loop body of mysql_prepare_create_table()
* Column_definition::sp_prepare_create_field()
Do not silence uncertain cases, or fix any bugs.
The only functional change should be that ha_federated::extra()
is not calling DBUG_PRINT to report an unhandled case for
HA_EXTRA_PREPARE_FOR_DROP.
Under some conditions the function opt_sum_query() can apply MIN/MAX
optimizations to Item_sum objects of a select. These optimizations
become invalid if this select is the subquery of an IN subquery
predicate that is converted to an EXISTS subquery. Thus in this case
the MIN/MAX optimizations that have been applied in opt_sum_query()
must be rolled back.
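A schematic example of the affected case (hypothetical tables t1 and t2):

select * from t1
where t1.a in (select max(t2.b) from t2);

opt_sum_query() may replace MAX(t2.b) in the non-correlated subquery with
a value read directly from an index on t2.b; if the optimizer later
chooses the in-to-exists strategy for this subquery, the injected equality
makes the subquery dependent on t1 and the earlier MIN/MAX rewrite must
be undone.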
This bug appeared in 5.3 when the code for the cost-based choice between
materialization and in-to-exists transformation of non-correlated
IN subqueries was introduced. Before that, in-to-exists
transformations were always performed before the call of opt_sum_query().
- SETVAL(sequence_name, next_value, is_used, round)
- ALTER SEQUENCE, including RESTART WITH (usage sketch below)
Other things:
- Added handler::extra() option HA_EXTRA_PREPARE_FOR_ALTER_TABLE to signal
ha_sequence() that it should allow write_row statements.
- ALTER ONLINE TABLE now works with SEQUENCEs
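A short usage sketch of the new syntax (hypothetical sequence s1, default
increment of 1 assumed):

create sequence s1 start with 100;
select setval(s1, 200);       -- 200 is marked as used; next value is 201
select setval(s1, 300, 0);    -- 300 is not marked as used; next value is 300
alter sequence s1 restart with 100;
select nextval(s1);           -- returns 100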