Part#2, variant 2: Make the printed r_ values in JSON output consistent.
After this patch, ANALYZE output has:
- r_index_rows (NEW) - Observed number of rows before the ICP or Rowid Filtering
checks. This is a per-scan average, like r_rows and "rows" are.
- r_rows (AS BEFORE) - Observed number of rows after ICP and Rowid Filtering.
- r_icp_filtered (NEW) - Observed selectivity of ICP condition.
- (AS BEFORE) Observed selectivity of the Rowid Filter is in
$.rowid_filter.r_selectivity_pct
- r_total_filtered (NEW) - Observed combined selectivity: the fraction of rows
left after applying the ICP condition, the Rowid Filter, and the
attached_condition. This value is comparable with "filtered" and is printed
right after it.
- r_filtered (AS BEFORE) - Observed selectivity of "attached_condition".
Tabular ANALYZE output is not changed. Note that in the JSON output,
r_filtered and r_rows keep the same meaning as before, which is also
their meaning in the tabular output.
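For illustration, here is a rough sketch of a single table node in ANALYZE
FORMAT=JSON output after this patch. The statement, the table and the
numbers are made up, and the set and order of the other members is only
approximate; the r_* member names and the placement of r_total_filtered
right after "filtered" are what this patch produces:

  -- illustrative statement; (key1, key2) is assumed to be a composite index
  ANALYZE FORMAT=JSON
  SELECT * FROM t1
  WHERE key1 < 100 AND key2 LIKE '%foo%' AND non_key_col < 3;

  "table": {
    "table_name": "t1",
    "access_type": "range",
    ...
    "r_index_rows": 100,
    "r_rows": 40,
    "r_icp_filtered": 40,
    "filtered": 50,
    "r_total_filtered": 10,
    "attached_condition": "t1.non_key_col < 3",
    "r_filtered": 25
  }

With these made-up numbers: the index enumerates 100 rows per scan
(r_index_rows), the pushed index condition lets 40% of them through
(r_icp_filtered), so 40 rows remain (r_rows); the attached_condition then
lets 25% of those through (r_filtered), so overall 10% of the 100 rows
survive (r_total_filtered = 40% * 25%).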
ANALYZE FORMAT=JSON output now includes table.r_engine_stats, which shows
the storage engine statistics. Only non-zero members are printed.
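The exact set of members depends on what the storage engine has filled into
handler_stats. A hypothetical fragment, just to show the shape (the member
names and values below are illustrative, not an authoritative list):

  "r_engine_stats": {
    "pages_accessed": 128,
    "pages_read_count": 12,
    "pages_read_time_ms": 0.524
  }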
Internally: the EXPLAIN data structures Explain_table_access and
Explain_update now have a handler* handler_for_stats pointer.
It is used to read statistics from handler_for_stats->handler_stats.
The following applies only to 10.9+; the backport doesn't use it:
Explain data structures continue to exist after the tables are closed. We
avoid walking invalid pointers as follows:
- SQL layer calls Explain_query::notify_tables_are_closed() before
closing tables.
- After that call, printing of JSON output is disabled. Non-JSON output
can be printed but we don't access handler_for_stats when doing that.

Introduce a new @@optimizer_switch flag: hash_join_cardinality
When this option is on, use EITS statistics to produce tighter bounds
for hash join output cardinality.
This patch is an extension of / a replacement for a similar patch in 10.6.
New features:
- optimizer_switch hash_join_cardinality is on by default
- records_out is set to fanout when HASH is used
- Fixed a bug in is_eits_usable(): the function did not work with views
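A minimal way to try the new flag (the table and column names below are
hypothetical, and a hash join additionally needs a join cache configuration
that allows hashed join buffers, which is not shown):

  -- collect EITS statistics so the optimizer has column statistics to use
  ANALYZE TABLE t1 PERSISTENT FOR ALL;
  ANALYZE TABLE t2 PERSISTENT FOR ALL;

  -- the flag is on by default; toggle it to compare the estimated
  -- cardinality of the hash join
  SET optimizer_switch='hash_join_cardinality=off';
  EXPLAIN EXTENDED SELECT * FROM t1, t2 WHERE t1.non_key_col = t2.non_key_col;
  SET optimizer_switch='hash_join_cardinality=on';
  EXPLAIN EXTENDED SELECT * FROM t1, t2 WHERE t1.non_key_col = t2.non_key_col;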

After MDEV-30830 added block-nl-join.r_unpack_time_ms, it became apparent
that there is some unaccounted-for time in the BNL join operation, namely
the time spent after unpacking a join buffer record.
Fix this by adding a Gap_time_tracker that tracks the time spent after
unpacking a join buffer record and before any other time tracking starts.
The collected time is printed in block-nl-join.r_other_time_ms.
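For example, the block-nl-join node in ANALYZE FORMAT=JSON output can now
contain a fragment like the one below (the numbers are made up; only the
member names come from MDEV-30830 and this patch):

  "block-nl-join": {
    ...
    "r_unpack_time_ms": 1.264,
    "r_other_time_ms": 0.187,
    ...
  }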
Reviewed by: Monty <monty@mariadb.org>