mirror of https://github.com/MariaDB/server.git synced 2025-11-09 11:41:36 +03:00
Commit Graph

995 Commits

Author SHA1 Message Date
Marko Mäkelä
c04284e747 Merge 10.10 into 10.11 2023-06-07 15:01:43 +03:00
Marko Mäkelä
82230aa423 Merge 10.9 into 10.10 2023-06-07 14:48:37 +03:00
Sergei Golubchik
cbabb95915 Merge branch '11.0' into 11.1 2023-06-05 20:15:15 +02:00
Sergei Golubchik
0005f2f06c Merge branch 'bb-10.11-release' into bb-11.0-release 2023-06-05 19:27:00 +02:00
Sergei Golubchik
4e2b93dffe Merge branch 'bb-10.10-release' into bb-10.11-release 2023-06-05 19:04:58 +02:00
Sergei Golubchik
30bba8e275 Merge branch 'github/bb-10.9-release' into bb-10.10-release 2023-06-05 18:59:43 +02:00
Sergei Golubchik
33fd519ca7 Merge branch 'github/bb-10.6-release' into bb-10.9-release 2023-06-05 18:55:26 +02:00
Sergei Golubchik
a42a6fa99b Merge branch 'bb-10.5-release' into bb-10.6-release 2023-06-05 18:53:02 +02:00
Sergei Golubchik
bed70468ea Merge branch 'bb-10.4-release' into bb-10.5-release 2023-06-05 17:50:51 +02:00
Sergei Petrunia
928012a27a MDEV-31403: Server crashes in st_join_table::choose_best_splitting
The code in choose_best_splitting() assumed that the join prefix is
in join->positions[].

This is not necessarily the case. This function might be called when
the join prefix is in join->best_positions[], too.
The fix follows the approach of best_access_path(), which calls this
function: pass the current join prefix as an argument
("const POSITION *join_positions") and use it.
2023-06-05 18:24:39 +03:00
Marko Mäkelä
31be25349f Merge 10.6 into 10.9 2023-05-25 09:24:32 +03:00
Marko Mäkelä
270eeeb523 Merge 10.5 into 10.6 2023-05-23 12:25:39 +03:00
Monty
16258677b3 MDEV-6768 Wrong result with aggregate with join with no result set
When a query does implicit grouping and the join operation produces an empty
result set, a NULL-complemented row combination is generated.
However, constant table fields still show non-NULL values.
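
A minimal sketch of the problematic query shape (hypothetical tables and
columns, not the test case from this commit): a const table joined against
an empty table, with implicit grouping forcing a single output row:

    CREATE TABLE t1 (a INT PRIMARY KEY, v INT);
    INSERT INTO t1 VALUES (1, 10);
    CREATE TABLE t2 (b INT);   -- left empty on purpose
    -- t1 is resolved as a const table (unique lookup on the primary key),
    -- t2 produces no rows, so the join result is empty while MAX() still
    -- forces one output row:
    SELECT MAX(t2.b), t1.v FROM t1 JOIN t2 ON t2.b = t1.a WHERE t1.a = 1;
    -- With the fix, the const table field is NULL-complemented as well,
    -- so the row is expected to be (NULL, NULL).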

What happens is that end_send_group() is called with a
const row but without any rows matching the WHERE clause.
This last part is shown by 'join->first_record' not being set.

This causes item->no_rows_in_result() to be called for all items to reset
all sum functions to their initial state. However, fields are not set
to NULL.

The chosen fix is to produce NULL-complemented records for constant tables
as well. Also, reset the constant table's records back in case we're
in a subquery which may get re-executed.
An alternative fix would have item->no_rows_in_result() also work
with Item_field objects.

There are some other issues with the code:
- join->no_rows_in_result_called is used but never set.
- Tables that are used with group functions are not properly marked as
  maybe_null, which is required if the table rows should be regarded as
  null-complemented (not existing).
- The code that tries to detect if mixed_implicit_grouping should be set
  didn't take into account all usage of fields and sum functions.
- Item_func::restore_to_before_no_rows_in_result() called the wrong
  function.
- join->clear() does not use a table_map argument to clear_tables(),
  which caused it to ignore constant tables.
- unclear_tables() does not correctly restore the status to what it
  was before clear_tables().

Main bug fix was to always use a table_map argument to clear_tables() and
always use join->clear() and clear_tables() together with unclear_tables().

Other fixes:
- Fixed Item_func::restore_to_before_no_rows_in_result()
- Set 'join->no_rows_in_result_called' when no_rows_in_result_set()
  is called.
- Removed an unused argument from setup_end_select_func().
- Added more code comments.
- Ensure that end_send_group() modifies the same fields as are in the
  result set.
- Changed return_zero_rows() to use pointers instead of references,
  similar to the rest of the code.

Reviewer: Sergei Petrunia <sergey@mariadb.com>
2023-05-22 17:15:46 +03:00
Oleg Smirnov
60f0765b58 MDEV-30143 Segfault on select query using index for group-by and filesort
The problem was trying to access JOIN_TAB::select, which is set to NULL
when filesort is used. The correct way is to access either
JOIN_TAB::select or JOIN_TAB::filesort->select, depending on whether
filesort is used.
This commit introduces the member function JOIN_TAB::get_sql_select(),
which encapsulates that check so that the code duplication is eliminated.

The new condition (s->table->quick_keys.is_set(best_key->key))
was added to best_access_path() to eliminate a Valgrind error.
The cause of that error was using TRASH_ALLOC(quick_key_parts)
instead of bzero(quick_key_parts); hence, accessing
s->table->quick_key_parts[best_key->key] without prior checking
for quick_keys.is_set() might have caused reading "dirty" memory.
2023-05-20 09:53:43 +07:00
Oleksandr Byelkin
de703a2b21 Merge branch '10.4' into 10.4.29 release 2023-05-11 09:07:45 +02:00
Monty
08a4732860 MDEV-28217 Incorrect Join Execution When Controlling Join Buffer Size
The problem was that join_buffer_size conflicted with
join_buffer_space_limit, which caused the query to be run without a join
buffer. However, this caused wrong results, as the optimizer assumed
that the hash join buffer would ensure that the equi-join condition
would be satisfied, and didn't check it itself.

Fixed by not using join_buffer_space_limit when
optimize_join_buffer_size=off. This matches the documentation at
https://mariadb.com/kb/en/block-based-join-algorithms
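
A hedged illustration of the variable interaction, with hypothetical
values (with optimize_join_buffer_size=off the requested join_buffer_size
is now taken as-is instead of being rejected because of
join_buffer_space_limit):

    SET SESSION optimize_join_buffer_size = OFF;
    SET SESSION join_buffer_space_limit = 2*1024*1024;   -- 2MB total limit
    SET SESSION join_buffer_size = 16*1024*1024;         -- larger than the limit
    -- Before the fix, a hash (BNL-H) join could silently run without its
    -- join buffer here and return wrong results; now the buffer is used
    -- (or an error is returned if it cannot be allocated).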

Other things:
- Removed the unused variable JOIN_TAB::join_buffer_size_limit
- Give an error if we cannot allocate a join buffer. This can
  only happen if the join_buffer variables are wrongly configured or
  we are running out of memory.
  In the future, instead of returning an error, we could properly
  convert the query plan that uses BNL-H join into one that doesn't
  use join buffering:
  make sure the equi-join condition is checked where appropriate.

Reviewer: Sergei Petrunia <sergey@mariadb.com>
2023-05-04 18:40:28 +03:00
Oleksandr Byelkin
1c60c7ab4b Merge branch '10.10' into 10.11 2023-05-04 11:56:52 +02:00
Oleksandr Byelkin
16e5bc4cbc Merge branch '10.9' into 10.10 2023-05-04 11:50:34 +02:00
Oleksandr Byelkin
d7fae797f4 Merge branch '10.8' into 10.9 2023-05-04 11:39:51 +02:00
Oleksandr Byelkin
652d54bf00 Merge branch '10.5' into 10.6 2023-05-04 07:36:37 +02:00
Oleksandr Byelkin
e87440b79e Merge branch '10.4' into 10.5 2023-05-03 15:53:14 +02:00
Igor Babaev
ce7ffe61d8 MDEV-26301 Split optimization refills temporary table too many times
This patch optimizes the number of refills for the lateral derived table
to which a materialized derived table subject to split optimization is
converted. This optimized number of refills is now considered as the
expected number of refills of the materialized derived table when searching
for the best possible splitting of the table.
2023-05-03 14:11:11 +02:00
Monty
7f96dd50e2 MDEV-6768 Wrong result with aggregate with join with no result set
When a query does implicit grouping and the join operation produces an empty
result set, a NULL-complemented row combination is generated.
However, constant table fields still show non-NULL values.

What happens is that end_send_group() is called with a
const row but without any rows matching the WHERE clause.
This last part is shown by 'join->first_record' not being set.

This causes item->no_rows_in_result() to be called for all items to reset
all sum functions to their initial state. However, fields are not set
to NULL.

The chosen fix is to produce NULL-complemented records for constant tables
as well. Also, reset the constant table's records back in case we're
in a subquery which may get re-executed.
An alternative fix would have item->no_rows_in_result() also work
with Item_field objects.

There are some other issues with the code:
- join->no_rows_in_result_called is used but never set.
- Tables that are used with group functions are not properly marked as
  maybe_null, which is required if the table rows should be regarded as
  null-complemented (not existing).
- The code that tries to detect if mixed_implicit_grouping should be set
  didn't take into account all usage of fields and sum functions.
- Item_func::restore_to_before_no_rows_in_result() called the wrong
  function.
- join->clear() does not use a table_map argument to clear_tables(),
  which caused it to ignore constant tables.
- unclear_tables() does not correctly restore the status to what it
  was before clear_tables().

Main bug fix was to always use a table_map argument to clear_tables() and
always use join->clear() and clear_tables() together with unclear_tables().

Other fixes:
- Fixed Item_func::restore_to_before_no_rows_in_result()
- Set 'join->no_rows_in_result_called' when no_rows_in_result_set()
  is called.
- Removed an unused argument from setup_end_select_func().
- Added more code comments.
- Ensure that end_send_group() modifies the same fields as are in the
  result set.
- Changed return_zero_rows() to use pointers instead of references,
  similar to the rest of the code.
2023-05-02 23:43:12 +03:00
Oleg Smirnov
f0b665f880 MDEV-8320 Allow index usage for DATE(col) <=> const and YEAR <=> const
Rewrite datetime comparison conditions into sargeable ones. For example,
    YEAR(col) <= val  ->  col <= YEAR_END(val)
    YEAR(col) <  val  ->  col <  YEAR_START(val)
    YEAR(col) >= val  ->  col >= YEAR_START(val)
    YEAR(col) >  val  ->  col >  YEAR_END(val)
    YEAR(col) =  val  ->  col BETWEEN YEAR_START(val) AND YEAR_END(val)
Do the same with DATE(col), for example:
    DATE(col) <= val  ->  col <= DAY_END(val)

After such a rewrite, an index lookup on column "col" can be employed.
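
A concrete instance of the rewrite with hypothetical table and column
names (YEAR_START/YEAR_END above stand for the first and last point of
the year):

    CREATE TABLE orders (id INT PRIMARY KEY, created DATETIME, KEY (created));
    -- Conceptually, YEAR(created) = 2022 is now treated as a range such as
    --   created BETWEEN '2022-01-01 00:00:00' AND '2022-12-31 23:59:59.999999'
    -- so a range scan on the index over "created" can be used:
    EXPLAIN SELECT * FROM orders WHERE YEAR(created) = 2022;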
2023-04-25 20:21:35 +07:00
Sergei Petrunia
c7fe8e51de Merge 10.11 into 11.0 2023-04-17 16:50:01 +03:00
Marko Mäkelä
656c2e18b1 Merge 10.10 into 10.11 2023-04-14 13:08:28 +03:00
Marko Mäkelä
a009280e60 Merge 10.9 into 10.10 2023-04-14 12:24:14 +03:00
Marko Mäkelä
44281b88f3 Merge 10.8 into 10.9 2023-04-14 11:32:36 +03:00
Sergei Petrunia
0269d82d53 ANALYZE FORMAT=JSON: Backport block-nl-join.r_unpack_time_ms from 11.0 +fix MDEV-30830.
Also fix it to work with hashed join (MDEV-30830).

Reviewed by: Monty <monty@mariadb.org>
2023-04-04 12:18:29 +03:00
Sergei Petrunia
dc1d6213f9 MDEV-30806: ANALYZE FORMAT=JSON: better support for BNL and BNL-H joins
In block-nl-join, add:

- r_loops - this shows how many incoming record combinations this
  query plan node had.

- r_effective_rows - this shows the average number of matching rows
  that this table had for each incoming record combination. This is
  comparable with r_rows in non-blocked access methods.
  For BNL-joins, it is always equal to
   $.table.r_rows * $.table.r_filtered
  For BNL-H joins the value cannot be computed from other values
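
A small worked example of the relationship above, with hypothetical
numbers for one block-nl-join node:

    -- $.table.r_rows     = 50   (rows read from the inner table per scan)
    -- $.table.r_filtered = 10   (percent of them passing the attached condition)
    -- r_effective_rows   = 50 * 10/100 = 5
    -- i.e. on average 5 matching rows per incoming record combination
    -- (derived for BNL; for BNL-H the value is measured, not derived).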

Reviewed by: Monty <monty@mariadb.org>
2023-03-31 14:11:32 +03:00
Monty
7a277a3352 Allow firstmatch to use HASH joins
Firstmatch_picker::check_qep() has an optimization that allows firstmatch
to be used together with join buffer under some conditions. In this
case the cost was assumed to be the same as what best_access_path()
had calculated.

However, if HASH+join_buffer was used, then
fix_semijoin_strategies_for_picked_join_order() would remove the
join_buffer (which would cause a full join to be used) and the cost
assumption made by Firstmatch_picker::check_qep() would be wrong.
Later, check_join_cache_usage() sees that it is a full scan and decides
it can use join buffering (but not the hash join).

Fixed by also allowing HASH joins with firstmatch.
This removes the need to disable and re-enable the join buffer.
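
A hypothetical semi-join of the shape this affects; depending on
statistics the plan can now combine FirstMatch with a hashed join buffer:

    SET SESSION join_cache_level = 4;   -- enable hashed (BNLH) join buffers
    CREATE TABLE t1 (a INT, b INT);
    CREATE TABLE t2 (a INT, c INT);
    -- With the fix, EXPLAIN may show FirstMatch together with
    -- "Using join buffer (flat, BNLH join)" for the subquery table:
    EXPLAIN SELECT * FROM t1 WHERE t1.a IN (SELECT a FROM t2 WHERE c > 0);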

Test case changes:
- HASH join used with firstmatch (Using join buffer (flat, BNLH join))
- 'Filtered' could change with firstmatch, as the conversion with and
  without join buffering lost the filtering information.
- The fact that the join buffer is not re-enabled is shown in
  main.optimizer_trace

Original code by Sergei, optimized by Monty.

Author: Sergei Petrunia <sergey@mariadb.com>, monty@mariadb.org
2023-03-07 14:27:26 +02:00
Monty
ae05097714 Fixed crashing bug in recursive SQL if write to tmp table would fail
This error was discovered while working on
MDEV-30540 Wrong result with IN list length reaching
           IN_PREDICATE_CONVERSION_THRESHOLD

If there is a read error from handler::ha_rnd_next() during a recursive
query, st_select_lex_unit::exec_recursive() will crash as it will try to
get the error code from a structure that was deleted by the callee.
The code was using the construct:
   sl->join->exec();
   saved_error=sl->join->error;
This does not work as sl->join was freed by the exec() and sl->join would
be set to 0.
Fixed by having JOIN::exec() return the error code.
The included test case simulates the error in ha_rnd_next(), which causes
a crash without the patch.
2023-03-02 13:11:54 +02:00
Monty
15e889c300 MDEV-30699: Updated prev_record_reads() to be more exact
The old code in prev_record_reads() gave wrong estimates when a
join buffer was used or if the table depended on more than one
other table. When join_cache is used, it causes a reordering of row
combinations, which results in more calls to the engine for tables
that depend on tables before the join-cached one.

The new prev_record_reads() code provides more exact estimates and
should never give a 'too low' estimate, assuming that the data given
to the function is correct.

The definition of prev_record_reads() is also updated.
The new definition is:
  "Estimate the number of engine ha_index_read_calls for EQ_REF tables
  when taking into account the one-row-cache in join_read_always_key()"

The cost of using the prev_record_reads() value is changed. The value is
now used similarly to before to calculate the cost of the storage engine
calls. However, the WHERE cost is changed to take into account the total
number of row combinations, as the WHERE has to be checked even if the
one-row cache is used. This makes the cost slightly higher than before
(for the same prev_record_reads() value).

Other things:
- Cached the return value of prev_record_reads() in best_access_path() to
  avoid some function calls.
- Fixed a bug where position[].use_join_buffer was set in
  best_access_path() when the join buffer was not used. This confused the
  semi-join optimizer into trying to reoptimize plans that did not need to
  be reoptimized.
  The effect of the bug fix is that we avoid doing some re-optimizations
  with semi-joins when join_buffer is not used. In these cases the value
  shown for the 'Filtering' column in EXPLAIN EXTENDED may change.
- Added 'prev_record.cc' that was used to verify the logic in
  prev_record_reads().

Changes in test suite:
- EQ_REF tables are moved up to be earlier. This is because of either the
  higher WHERE cost when EQ_REF is used with more row combinations or the
  change of cost when using join_cache.
- Filtered has changed (to the better) for some cases using semi-joins
  subselect_sj.test subselect_sj_jcl6.test
2023-02-21 15:36:39 +03:00
Sergei Petrunia
d61bc94fa0 MDEV-30659 Server crash on EXPLAIN SELECT/SELECT on table with engine Aria for LooseScan Strategy
Amended patch from Monty:

The issue was that Loose_scan_opt::save_to_position() did not take
into account records_out from best_access_path().

Make sure that the POSITION object filled by Loose_scan_opt::save_to_position()
has records_out not higher than that of any other possible access method.
2023-02-21 15:27:23 +03:00
Marko Mäkelä
2e431ff7e6 Merge 10.11 into 11.0 2023-02-16 13:34:45 +02:00
Marko Mäkelä
1fd0099839 Merge 10.10 into 10.11 2023-02-16 11:41:18 +02:00
Marko Mäkelä
345356b868 Merge 10.9 into 10.10 2023-02-16 11:36:38 +02:00
Marko Mäkelä
0d55914d96 Merge 10.8 into 10.9 2023-02-16 10:25:34 +02:00
Marko Mäkelä
6aec87544c Merge 10.5 into 10.6 2023-02-10 13:03:01 +02:00
Monty
3316a54db3 Code cleanups and add some caching of functions to speed up things
Detailed description:
- Added more function comments and fixed types in some old comments
- Removed an outdated comment
- Cleaned up some functions in records.cc
  - Replaced "while" with "if"
  - Reused error code
  - Made functions similar
- Added caching of pfs_batch_update()
- Simplified some rowid_filter code
  - Only call build_range_rowid_filter() if rowid filter will be used
  - Replaced tab->is_rowid_filter_built with need_to_build_rowid_filter.
    We only have to test need_to_build_rowid_filter to know if we have
    to build the filter. Old code needed two tests
  - Added function 'clear_range_rowid_filter' to disable rowid filter.
    Made things simpler as we can now clear all rowid filter variables
    in one place.
- Removed some 'if' in sub_select()
2023-02-10 12:59:36 +02:00
Marko Mäkelä
c41c79650a Merge 10.4 into 10.5 2023-02-10 12:02:11 +02:00
Vicențiu Ciorbaru
08c852026d Apply clang-tidy to remove empty constructors / destructors
This patch is the result of running
run-clang-tidy -fix -header-filter=.* -checks='-*,modernize-use-equals-default' .

Code style changes have been done on top. The result of this change
leads to the following improvements:

1. Binary size reduction.
* For a -DBUILD_CONFIG=mysql_release build, the binary size is reduced by
  ~400kb.
* A raw -DCMAKE_BUILD_TYPE=Release reduces the binary size by ~1.4kb.

2. The compiler can better understand the intent of the code, which leads
   to more optimization possibilities. Additionally, it enables detecting
   unused variables that had an empty default constructor but were not
   marked so explicitly.

   A particular change was required in sql/opt_range.cc following this patch:

   result_keys, an unused variable of the template class Bitmap, now
   correctly issues an unused-variable warning.

   Setting the Bitmap template class constructor to default allows the
   compiler to identify that there are no side effects when instantiating
   the class. Previously the compiler could not issue the warning, as it
   assumed the Bitmap class (being a template) would not be performing a
   no-op for its default constructor. This prevented the "unused variable"
   warning.
2023-02-09 16:09:08 +02:00
Sergei Petrunia
6c4076fac4 MDEV-30032: EXPLAIN FORMAT=JSON output: part #2: print 'loops'. 2023-02-03 11:22:17 +03:00
Sergei Petrunia
ffe0beca25 MDEV-30032: EXPLAIN FORMAT=JSON output: print costs
Basic printout for join and table execution costs.
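
For instance (hypothetical query), the new cost figures appear in the
JSON plan for the join and for each table node:

    EXPLAIN FORMAT=JSON SELECT * FROM t1 JOIN t2 ON t2.a = t1.a;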
2023-02-03 11:01:24 +03:00
Monty
66d9c1b22d Fixes for 'Filtering'
- table_after_join_selectivity() should use records_init (new bug)
- get_examined_rows() changed to double to get similar results
  as in MariaDB 10.11
- Fixed a bug where table_after_join_selectivity() did not correct the
  selectivity in the case where a RANGE is used instead of a REF.
  This can happen if the range can use more key parts than the REF, e.g.
  WHERE key_part1=10 AND key_part2 < 10
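
A sketch of that RANGE-vs-REF situation with hypothetical names:

    CREATE TABLE t (key_part1 INT, key_part2 INT, filler INT,
                    KEY k1 (key_part1, key_part2));
    -- REF on k1 can only use key_part1 = 10, while RANGE can use both key
    -- parts, so the selectivity correction has to be based on the range:
    EXPLAIN SELECT * FROM t WHERE key_part1 = 10 AND key_part2 < 10;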

Other things:
- Use JT_RANGE instead of JT_ALL for RANGE access in all parts of the code.
  Before we used JT_ALL for RANGE.
- Force RANGE to be used in best_access_path() if the range uses more key
  parts than ref. In the original code, this was done much later in
  make_join_select(). However, we need to know in
  table_after_join_selectivity() whether we have used RANGE or not.
- Added more information about filtering to optimizer_trace.
2023-02-02 23:59:44 +03:00
Monty
0fada9c2ab Removed worst_seek argument for cost_for_index_read()
The argument was not used.
2023-02-02 23:59:18 +03:00
Monty
b66cdbd1ea Changing all cost calculation to be given in milliseconds
This makes it easier to compare different costs and also allows
the optimizer to optimize for different storage engines more reliably.

- Added tests/check_costs.pl, a tool to verify optimizer cost calculations.
  - Most engine costs have been found with this program. All steps to
    calculate the new costs are documented in Docs/optimizer_costs.txt

- User optimizer_cost variables are given in microseconds (as individual
  costs can be very small). Internally they are stored in ms.
- Changed DISK_READ_COST (was DISK_SEEK_BASE_COST) from a hard disk cost
  (9 ms) to common SSD cost (400MB/sec).
- Removed cost calculations for hard disks (rotation etc).
- Changed the following handler functions to return IO_AND_CPU_COST.
  This makes it easy to apply different cost modifiers in ha_..time()
  functions for io and cpu costs.
  - scan_time()
  - rnd_pos_time() & rnd_pos_call_time()
  - keyread_time()
- Enhanced keyread_time() to calculate the full cost of reading a set
  of keys with a given number of ranges and an optional number of blocks that
  need to be accessed.
- Removed read_time() as keyread_time() + rnd_pos_time() can do the same
  thing and more.
- Tuned cost for: heap, myisam, Aria, InnoDB, archive and MyRocks.
  Used heap table costs for json_table. The rest are using default engine
  costs.
- Added the following new optimizer variables:
  - optimizer_disk_read_ratio
  - optimizer_disk_read_cost
  - optimizer_key_lookup_cost
  - optimizer_row_lookup_cost
  - optimizer_row_next_find_cost
  - optimizer_scan_cost
- Moved all engine specific cost to OPTIMIZER_COSTS structure.
- Changed costs to use 'records_out' instead of 'records_read' when
  recalculating costs.
- Split optimizer_costs.h to optimizer_costs.h and optimizer_defaults.h.
  This allows one to change costs without having to compile a lot of
  files.
- Updated costs for filter lookup.
- Use a better cost estimate in best_extension_by_limited_search()
  for the sorting cost.
- Fixed previous issues with 'filtered' explain column as we are now
  using 'records_out' (min rows seen for table) to calculate filtering.
  This greatly simplifies the filtering code in
  JOIN_TAB::save_explain_data().

This change caused a lot of queries to be optimized differently than
before, which exposed different issues in the optimizer that need to
be fixed.  These fixes are in the following commits.  To not have to
change the same test case over and over again, the changes in the test
cases are done in a single commit after all the critical change sets
are done.

InnoDB changes:
- Updated InnoDB to not divide the big range cost by 2.
- Added cost for InnoDB (innobase_update_optimizer_costs()).
- Don't mark the clustered primary key with HA_KEYREAD_ONLY. This
  prevents the optimizer from trying to use index-only scans on
  the clustered key.
- Disabled ha_innobase::scan_time() and ha_innobase::read_time() and
  ha_innobase::rnd_pos_time() as the default engine cost functions now
  work well for InnoDB.

Other things:
- Added the --show-query-costs (\Q) option to mysql.cc to show the query
  cost after each query (good when working with query costs).
- Extended my_getopt with GET_ADJUSTED_VALUE, which allows one to adjust
  the value that the user gives. This is used to change the cost from
  microseconds (user input) to milliseconds (what the server is
  internally using).
- Added include/my_tracker.h, a useful include file to quickly test
  the costs of a function.
- Use handler::set_table() in all places instead of 'table= arg'.
- Added SHOW_OPTIMIZER_COSTS to sys variables. These are input and
  shown in microseconds for the user but stored as milliseconds.
  This is to make the numbers easier to read for the user (fewer
  leading zeros).  Implemented in the 'Sys_var_optimizer_cost' class.
- In test_quick_select() do not use index scans if 'no_keyread' is set
  for the table. This is what we do in other places of the server.
- Added THD parameter to Unique::get_use_cost() and
  check_index_intersect_extension() and similar functions to be able
  to provide costs to called functions.
- Changed 'records' to 'rows' in optimizer_trace.
- Write more information to optimizer_trace.
- Added INDEX_BLOCK_FILL_FACTOR_MUL (4) and INDEX_BLOCK_FILL_FACTOR_DIV (3)
  to calculate usage space of keys in b-trees. (Before we used numeric
  constants).
- Removed code that assumed that b-trees have similar costs as binary
  trees. Replaced with engine calls that returns the cost.
- Added Bitmap::find_first_bit()
- Added timings to join_cache for ANALYZE table (patch by Sergei Petrunia).
- Added records_init and records_after_filter to POSITION to remember
  more of what best_access_path() calculates.
- table_after_join_selectivity() changed to recalculate 'records_out'
  based on the new fields from best_access_path()

Bug fixes:
- Some queries did not update last_query_cost (it was 0). Fixed by moving
  the setting of thd->...last_query_cost into JOIN::optimize().
- Write '0' as number of rows for const tables with a matching row.

Some internals:
- Engine costs are stored in the OPTIMIZER_COSTS structure.  When a
  handlerton is created, we also create a new cost variable for the
  handlerton. We also create a new variable if the user changes an
  optimizer cost for a not-yet-loaded handlerton, either with
  command-line arguments or with SET
  @@global.engine.optimizer_cost_variable=xx.
- There are 3 global OPTIMIZER_COSTS variables:
  default_optimizer_costs   The default costs + changes from the
                            command line without an engine specifier.
  heap_optimizer_costs      Heap table costs, used for temporary tables
  tmp_table_optimizer_costs The cost for the default on disk internal
                            temporary table (MyISAM or Aria)
- The engine costs for a table are stored in the table_share. To speed up
  access, the handler has a pointer to them. The costs are copied
  to the table on first access. If one wants to change a cost, one
  must first update the global engine cost and then do a FLUSH TABLES
  (see the example after this list).
  This was done to be able to access the costs for an open table
  without any locks.
- When a handlerton is created, the costs are updated the following way
  (see sql/keycaches.cc for details):
  - Use 'default_optimizer_costs' as a base
  - Call hton->update_optimizer_costs() to override with the engines
    default costs.
  - Override the costs that the user has specified for the engine.
  - On handler open, copy the engine cost from the handlerton to the TABLE_SHARE.
  - Call handler::update_optimizer_costs() to allow the engine to update
    cost for this particular table.
  - There are two costs stored in THD. These are copied to the handler
    when the table is used in a query:
    - optimizer_where_cost
    - optimizer_scan_setup_cost
- Simplified code in best_access_path() by storing all cost results in a
  structure. (Idea/Suggestion by Igor)
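
A hedged example of changing an engine cost at runtime (hypothetical
value; the variable name and the SET form are the ones listed above):

    -- Costs are entered in microseconds and stored internally in milliseconds.
    SET @@global.innodb.optimizer_disk_read_cost = 10.0;  -- hypothetical value
    -- Already-open tables keep the costs copied into their TABLE_SHARE,
    -- so a FLUSH TABLES is needed before they pick up the change:
    FLUSH TABLES;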
2023-02-02 23:54:45 +03:00
Michael Widenius
33fc8037e0 Fixed some issues with FORCE INDEX
Added code to support that FORCE INDEX can be used to force an index scan
instead of a full table scan. Currently this code is disabled, but I added
a test to verify that things work if the code is ever enabled.

Other things:

- FORCE INDEX will now work with "Range checked for each record" and
  join cache (see main/type_time_6065)
- Removed code ifdef'ed with BAD_OPTIMIZATION (the new cost calculations
  should fix this).
- Removed TABLE_LIST->force_index and the comment that it should be removed.
- Added TABLE->force_index_join and use it in the corresponding places.
  This means that FORCE INDEX FOR ORDER BY no longer affects keys used
  in joins (see the example after this list).
  Removed the TODO that the above should be added.
  I still kept TABLE->force_index as it's used in
  test_if_cheaper_ordering() and opt_range.cc
- Removed setting table->force_index when calling test_quick_select() as
  it's not needed (force_index is an argument to test_quick_select())
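
A hypothetical illustration of the FORCE INDEX FOR ORDER BY scoping
described above:

    CREATE TABLE t1 (a INT, b INT, KEY k_a (a), KEY k_b (b));
    CREATE TABLE t2 (a INT, KEY (a));
    -- The hint below only constrains the index considered for resolving
    -- the ORDER BY; it no longer affects which keys the optimizer may use
    -- for joining t1 and t2:
    SELECT * FROM t1 FORCE INDEX FOR ORDER BY (k_b)
    JOIN t2 ON t2.a = t1.a
    ORDER BY t1.b;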
2023-02-02 23:12:46 +03:00
Monty
4515a89814 Fixed cost calculations for materialized tables
One effect of this change on the test suite is that tests with very few
rows changed to use subqueries instead of materialization. This is
correct and expected, as for these the materialization overhead is too high.

A lot of tests were fixed to still use materialization by adding a
few rows to the tables (most tests have only 2-3 rows and are thus easily
affected when cost computations are changed).

Other things:
- Added more variables to TMPTABLE_COSTS for better cost calculation
- Added cost of copying rows to TMPTABLE_COSTS lookup and write
- Added THD::optimizer_cache_hit_ratio for easier cost calculations
- Added DISK_FAST_READ_SIZE to be used when calculating costs when
  reading big blocks from a disk
2023-02-02 22:58:38 +03:00
Monty
1d82e5daf7 Move join->emb_smj_nest setting to choose_plan()
This cleans up the interface for choose_plan(), as it no longer depends
on the caller setting join->emb_sj_nest.

choose_plan() now sets up join->emb_sj_nest and join->allowed_tables before
calling optimize_straight_join() and best_extension_by_limited_search().

Other things:
- Converted some 'if' to DBUG_ASSERT() as these should always be true.
- Calculate 'allowed_tables' in choose_plan() as this never changes in
  the children.
- Added an assert to check that next_emb->nested_join->n_tables doesn't
  end up with a wrong value.
- Documented some variables in sql_select.h
2023-02-02 22:55:21 +03:00