When there are no bounds on the upper or lower part of the window,
it does not matter whether the type is numeric. Nor does it matter
how many ORDER BY items there are in the query.
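For illustration, a hedged sketch of the kind of window specification this
concerns (table and column names are invented, not from the original test
case):

  CREATE TABLE t1 (d DATE, grp VARCHAR(10), val INT);

  -- Both frame bounds are UNBOUNDED, so no numeric RANGE offset is
  -- involved: the ORDER BY list may use a non-numeric type (DATE) and
  -- may contain more than one item.
  SELECT grp, d, val,
         SUM(val) OVER (PARTITION BY grp
                        ORDER BY d, val
                        RANGE BETWEEN UNBOUNDED PRECEDING
                              AND UNBOUNDED FOLLOWING) AS grp_total
  FROM t1;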
Reviewers: Sergei Petrunia and Oleg Smirnov
This bug could affect queries containing a join of derived tables over
grouping views such that one of the derived tables contains a window
function while another uses a view V with a dependent subquery DSQ that
contains a set function aggregated outside of the subquery, in the view V
itself. The subquery also refers to fields from the GROUP BY clause of the
view. Due to this bug, execution of such queries could produce wrong
result sets.
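A hedged sketch of the query shape described above (all names are invented
here; this is not the original test case):

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (c INT);

  -- View V: a grouping view whose dependent subquery DSQ contains MAX(b),
  -- which is aggregated in the view's own SELECT (outside the subquery)
  -- and also refers to the grouping column a.
  CREATE VIEW v AS
    SELECT a,
           (SELECT CONCAT(a, ':', MAX(b)) FROM t2 LIMIT 1) AS s
    FROM t1
    GROUP BY a;

  -- Join of derived tables: one contains a window function, the other
  -- uses the view v.
  SELECT *
  FROM (SELECT a, ROW_NUMBER() OVER (ORDER BY a) AS rn FROM t1) dt1
  JOIN (SELECT a, s FROM v) dt2 ON dt1.a = dt2.a;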
When the fix_fields() method performs context analysis of a set function
AF, it calls Item_sum::init_sum_func_check() at the very beginning. This
function copies the pointer to the embedding set function, if any, stored
in THD::LEX::in_sum_func into the corresponding field of the set function
AF, and simultaneously changes THD::LEX::in_sum_func to point to AF. When
Item_sum::check_sum_func() is called at the very end of fix_fields(), it
is supposed to restore THD::LEX::in_sum_func to point back to the
embedding set function. Item_sum::check_sum_func() did do this, but only
for regular set functions, not for those used in window functions. As a
result, after the context analysis of AF had finished,
THD::LEX::in_sum_func still pointed to AF. This confused the subsequent
context analysis. In particular, it led to wrong resolution of
Item_outer_ref objects in the fix_inner_refs() function. This wrong
resolution forced the values of the grouping fields referred to in DSQ
to be read not from the temporary table used for aggregation, from which
they were supposed to be read, but from the table used as the source
table for aggregation.
This patch guarantees that the value of THD::LEX::in_sum_func is properly
restored after the call to fix_fields() for any set function.
Part#2, variant 2: Make the printed r_ values in JSON output consistent.
After this patch, ANALYZE output has:
- r_index_rows (NEW) - Observed number of rows before ICP or Rowid Filtering
  checks. This is a per-scan average, like r_rows and "rows" are.
- r_rows (AS BEFORE) - Observed number of rows after ICP and Rowid Filtering.
- r_icp_filtered (NEW) - Observed selectivity of ICP condition.
- (AS BEFORE) observed selectivity of Rowid Filter is in
$.rowid_filter.r_selectivity_pct
- r_total_filtered (NEW) - Observed combined selectivity: the fraction of
  rows left after applying the ICP condition, the Rowid Filter, and the
  attached_condition.
  This is now comparable with "filtered" and is printed right after it.
- r_filtered (AS BEFORE) - Observed selectivity of "attached_condition".
Tabular ANALYZE output is not changed. Note that JSON's r_filtered and
r_rows have the same meaning as before, which is also the same as in the
tabular output.
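A hedged usage sketch (table, column and condition names are invented;
the exact JSON tree may differ by version):

  -- A query where ICP, a Rowid Filter and an attached condition can all
  -- play a role:
  ANALYZE FORMAT=JSON
  SELECT * FROM orders
  WHERE customer_id BETWEEN 10 AND 20     -- index condition (ICP candidate)
    AND total > 100;                      -- attached_condition
  -- The new counters appear in the per-table node of the JSON output,
  -- alongside the optimizer's "rows" and "filtered" estimates; the Rowid
  -- Filter selectivity stays under $.rowid_filter.r_selectivity_pct.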
The crash was caused by dereferencing a null pointer when getting the
number of nesting levels of the set function for the current select_lex
in the method Item_field::fix_fields().
The current select being processed is taken from the Name_resolution_context
that is filled in by the function set_new_item_local_context(), where the
initialization of the data member Name_resolution_context::select_lex
was mistakenly removed by the commit
d6ee351bbb
(Revert "MDEV-24454 Crash at change_item_tree").
To fix the issue, the initialization of the data member
Name_resolution_context::select_lex
that was removed by the commit d6ee351bbb
is restored.
ANALYZE FORMAT=JSON output now includes table.r_engine_stats which
has the engine statistics. Only non-zero members are printed.
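A hedged usage sketch (table and column names are invented; which members
appear depends on the storage engine and on which counters are non-zero):

  ANALYZE FORMAT=JSON
  SELECT COUNT(*) FROM t1 WHERE non_indexed_col = 5;
  -- Each table node is expected to carry an "r_engine_stats" member with
  -- the engine's counters (e.g. page access counters); members whose
  -- value is zero are omitted.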
Internally: the EXPLAIN data structures Explain_table_access and
Explain_update now have a handler *handler_for_stats pointer.
It is used to read statistics from handler_for_stats->handler_stats.
The following applies only to 10.9+; the backport doesn't use it:
Explain data structures exist after the tables are closed. We avoid
walking invalid pointers as follows:
- The SQL layer calls Explain_query::notify_tables_are_closed() before
  closing the tables.
- After that call, printing of JSON output is disabled. Non-JSON output
  can still be printed, but we don't access handler_for_stats when doing
  that.
(Initial patch by Varun Gupta; amended and comments added.)
When the query has both
1. Aggregate functions that require sorting data by group, and
2. Window functions,
we need to use two temporary tables. The first temp. table holds the
join output. It is then passed to filesort(); reading it in sorted
order allows us to compute the aggregate functions.
Their values are then written into the second temp. table, which the
window function computation step can pass to filesort() and read in
the order it needs.
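A hedged sketch of a query that needs both steps (names are invented;
whether sorting actually happens depends on the chosen plan):

  CREATE TABLE t1 (customer INT, amount DECIMAL(10,2));

  -- SUM(amount) needs the join output ordered by the GROUP BY column
  -- (first temp. table + filesort), and the window function then needs
  -- the aggregated rows re-sorted by its own criteria
  -- (second temp. table + filesort).
  SELECT customer,
         SUM(amount) AS total,
         RANK() OVER (ORDER BY SUM(amount) DESC) AS rnk
  FROM t1
  GROUP BY customer;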
Failure to create the second temp. table would cause an assertion
failure: the window function code would not find where to get the values
of the aggregate functions.