For an index merge union (or sort-union), the estimates are not taken into
account while calculating the selectivity of a condition. So instead of
reflecting the estimates of the index merge union (or sort-union), the shown
estimate equals the total number of records in the table.
The fix for the issue is to include the selectivity of the index merge
union (or sort-union) while calculating the selectivity of a condition.
This applies with optimizer_use_condition_selectivity >= 3.
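Roughly, the idea can be sketched like this (a hypothetical illustration with
invented names, not the actual selectivity-calculation code):

    #include <algorithm>

    // Hypothetical sketch: derive the condition's selectivity from the
    // index merge union's row estimate instead of assuming that all
    // records of the table match.
    static double cond_selectivity_with_index_merge(double table_records,
                                                    double index_merge_records,
                                                    bool have_index_merge)
    {
      if (table_records <= 0)
        return 1.0;
      if (!have_index_merge)
        return 1.0;               // old behaviour: as if all records match
      return std::min(1.0, index_merge_records / table_records);
    }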
Selectivity analysis should be disabled for geometry columns
for cases like geometric_field = string_constant.
Currently, for the selectivity calculation we perform range analysis for a column even when we have no statistics (EITS) for it.
This makes little sense, but it is used to catch contradictions in the WHERE condition.
So the solution is to not perform range analysis for the selectivity calculation for columns that have no statistics.
This applies with use_stat_tables = PREFERABLY.
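A minimal sketch of the intended checks, assuming a hypothetical ColumnInfo
descriptor (none of these names exist in the server code):

    enum ColumnType { COLUMN_GEOMETRY, COLUMN_OTHER };

    struct ColumnInfo
    {
      ColumnType type;
      bool has_eits_statistics;   // EITS = engine-independent table statistics
    };

    // Decide whether range analysis purely for selectivity estimation
    // makes sense for this column.
    static bool use_range_analysis_for_selectivity(const ColumnInfo &col)
    {
      if (col.type == COLUMN_GEOMETRY)  // e.g. geometric_field = string_constant
        return false;
      if (!col.has_eits_statistics)     // no statistics collected for the column
        return false;
      return true;
    }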
Currently the code that calculates the selectivity for a table does not take into account the case when
we can have a GROUP BY optimization (loose index scan).
This patch introduces support for the system variable eq_range_index_dive_limit
that has existed in MySQL since 5.6. The variable sets a limit on
index dives into equality ranges. Index dives are performed by the optimizer
to estimate the number of rows in range scans. Index dives usually provide
a good estimate, but they are pretty expensive. To estimate the number of rows
in equality ranges, statistical data on indexes can be employed instead. This gives
less accurate estimates, but it is cheap. So if the number of equality ranges
of an index scan exceeds the set limit, no dives for equality
ranges are performed by the optimizer for this index.
As the new system variable is introduced in a stable version, its default
value is set to a special value meaning there is no limit on the number
of index dives performed by the optimizer.
The patch partially uses the MySQL code for WL#5957
'Statistics-based Range optimization for many ranges'.
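As a rough illustration of the rule (hypothetical function and parameter
names; the exact boundary check in the real optimizer may differ):

    // Hypothetical sketch of the eq_range_index_dive_limit rule.
    static bool use_index_dives_for_eq_ranges(unsigned n_equality_ranges,
                                              unsigned eq_range_index_dive_limit)
    {
      if (eq_range_index_dive_limit == 0)   // assumed special value: no limit
        return true;                        // always do index dives
      // Too many equality ranges: fall back to index statistics, which is
      // cheaper but gives rougher estimates.
      return n_equality_ranges <= eq_range_index_dive_limit;
    }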
TRASH() was mapped to TRASH_FREE() and was supposed to be used for memory
that should not be accessed anymore, while TRASH_ALLOC() is to be
used for uninitialized but soon-to-be-used memory.
But sometimes TRASH() was used in the latter sense.
Remove the TRASH() macro; always use explicit TRASH_ALLOC() or TRASH_FREE().
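The intended distinction can be sketched with simplified stand-ins for the
macros (memset-based poisoning here is only an approximation of the real
definitions):

    #include <string.h>

    // Simplified stand-ins: poison memory with recognizable byte patterns
    // so that misuse shows up quickly in debugging.
    #define TRASH_ALLOC(ptr, size) memset((ptr), 0xA5, (size)) /* about to be used */
    #define TRASH_FREE(ptr, size)  memset((ptr), 0x8F, (size)) /* must not be touched */

    struct Buffer { char data[64]; };

    int main()
    {
      Buffer buf;
      TRASH_ALLOC(&buf, sizeof(buf));   // uninitialized but to-be-used memory
      // ... use buf.data ...
      TRASH_FREE(&buf, sizeof(buf));    // memory that should not be accessed anymore
      return 0;
    }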
Other things, mainly to get
create_mysqld_error_find_printf_error tool to work:
- Added protection to not include mysqld_error.h twice
- Include "unireg.h" instead of "mysqld_error.h" in server
- Added protection if ER_XX messages are already defined
- Removed wrong calls to my_error(ER_OUTOFMEMORY) as
my_malloc() and my_alloc will do this automatically
- Added missing %s to ER_DUP_QUERY_NAME
- Removed old and wrong calls to my_strerror() when using
MY_ERROR_ON_RENAME (wrong merge)
- Fixed the deadlock error message from Galera. Before, the extra
  information passed for ER_LOCK_DEADLOCK was missing from the output
  because the ER_LOCK_DEADLOCK message does not accept any extra information.
I kept #ifdef mysqld_error_find_printf_error_used in sql_acl.h
to make it easy to do this kind of check again in the future.
Code in QUICK_RANGE_SELECT::init_ror_merged_scan() could theoretically
have caused crashes if it was ever called from an UPDATE or DELETE.
This also uncovered a bug in the vcol/range.result file.
- Fix win64 pointer truncation warnings
(usually coming from misusing 0x%lx and a long cast in DBUG)
- Also fix printf-format warnings.
Make the above-mentioned warnings fatal.
- Fix pthread_join on Windows to set the return value.
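The typical offending pattern and its fix look roughly like this (a
standalone illustration, not the actual DBUG code):

    #include <stdio.h>

    int main()
    {
      void *ptr = &ptr;
      // Wrong on Win64: long is 32 bits there, so the cast truncates the pointer.
      printf("addr: 0x%lx\n", (long) ptr);
      // Portable: print pointers with %p instead.
      printf("addr: %p\n", ptr);
      return 0;
    }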
In get_mm_tree() we have to change Field_geom::geom_type to
GEOMETRY, as we have to allow storing all types of spatial features
in the field. So now we restore the original geom_type once that is
done.
The patch actually fixes an old defect of the optimizer: it
could not extract keys for range access from IN predicates
with row arguments.
This problem was resolved in the mysql-5.7 code. The patch
supersedes what was done there:
- it can build range access when not all components of
  the first row argument refer to the columns of the table
  for which the range access is constructed;
- it can use equality predicates to build range access
  to a table that is not referred to in this argument.
Also, implement MDEV-11027 a little differently from 5.5 and 10.0:
recv_apply_hashed_log_recs(): Change the return type back to void
(DB_SUCCESS was always returned).
Report progress also via systemd using sd_notifyf().
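Progress reporting through sd_notifyf() looks roughly like the sketch below;
the helper name, the message text and the counters are made up, only the
sd_notifyf() call itself is the actual libsystemd API:

    #include <systemd/sd-daemon.h>

    // Report recovery progress to systemd so a long startup does not look hung.
    // Passing 0 as the first argument keeps NOTIFY_SOCKET in the environment.
    static void report_recovery_progress(unsigned batch, unsigned total_batches)
    {
      sd_notifyf(0, "STATUS=Applying recovered log records, batch %u of %u",
                 batch, total_batches);
    }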
* rename to "keyread" (to avoid conflicts with tokudb),
* change from bool to uint and store the keyread index number there
* provide a bool accessor to check if keyread is enabled
mark_columns_used_by_index used to do
reset + mark_columns_used_by_index_no_reset + start keyread + set bitmaps.
Now prepare_for_keyread does that, while mark_columns_used_by_index
does only reset + mark_columns_used_by_index_no_reset,
just as its name suggests.
Move TABLE::key_read into handler, because in index merge and DS-MRR
there can be many handlers per table, and some of them use
keyread while others don't: "keyread" is really a per-handler
property, not a per-TABLE one.
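Schematically (a hypothetical sketch, not the actual handler class; the
MAX_KEY_SKETCH marker is invented):

    static const unsigned MAX_KEY_SKETCH = 64;    // assumed "no index" marker

    class handler_sketch
    {
      unsigned keyread;                           // keyread index number, or the marker
    public:
      handler_sketch() : keyread(MAX_KEY_SKETCH) {}
      void start_keyread(unsigned index_no) { keyread = index_no; }
      void end_keyread()                    { keyread = MAX_KEY_SKETCH; }
      // bool accessor to check whether keyread is enabled for this handler
      bool keyread_enabled() const          { return keyread < MAX_KEY_SKETCH; }
    };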
When building different range and index-merge trees, the range optimizer
could build an index-merge tree with an index scan containing fewer ranges
than needed. This index-merge could be chosen as the best plan. Following this
index-merge, the executor missed some rows in the result set.
The invalid index scan was built due to an inconsistency in the code
back-ported from MySQL into 5.3 that fixed MySQL bug #11765831:
the code added to key_or() could change shared keys of the second
ORed tree. The problem was partially addressed by the patch for MariaDB
bug #823301, but it turned out the fix was only partial.
Found and fixed 2 problems:
- Filesort addon fields didn't mark virtual columns properly
- multi-range-read calculated the vcol bitmap but was not using it.
This caused the wrong vcol field to be calculated on read, which caused the assert.
In sql/opt_range.cc, when calculate_cond_selectivity_for_table() is called with optimizer_use_condition_selectivity=4:
- thd->no_errors is set to 1
- thd->no_errors is not restored to its original value
- this causes the assertion to fail in subsequent queries
Fixed by restoring the original value of thd->no_errors.
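The fix follows the usual save/restore pattern, sketched here with invented
types (not the actual patch):

    // Schematic sketch: remember the caller's value of thd->no_errors and
    // put it back before returning, instead of leaving it set to 1.
    struct THD_sketch { bool no_errors; };

    static void calculate_cond_selectivity_sketch(THD_sketch *thd)
    {
      bool saved_no_errors = thd->no_errors;   // remember the caller's value
      thd->no_errors = true;                   // suppress errors during the calculation
      /* ... selectivity calculation ... */
      thd->no_errors = saved_no_errors;        // restore before returning
    }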