Heap tables allocate blocks to store rows with a size based on
my_default_record_cache (mapped to the server global variable
read_buffer_size).
This causes performance issues when the record length is big
(> 1000 bytes) and my_default_record_cache is small.
Changed to instead split the default heap allocation into 1/16 of the
allowed space and not use my_default_record_cache anymore when creating
the heap. The allocation is also aligned to be just under a power of 2.
For one test that I have been running, which used a record length of 633,
the speed of the query doubled thanks to this change.
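A rough sketch of the new sizing (illustrative C++ only; the helper names
and the exact overhead handling here are assumptions, not the actual code):

  #include <cstddef>

  // Round v up to the next power of two (illustrative helper).
  static size_t round_up_to_power_of_2(size_t v)
  {
    size_t p= 1;
    while (p < v)
      p<<= 1;
    return p;
  }

  // Block size: 1/16 of the allowed table size, trimmed so that the
  // allocation plus malloc overhead stays just under a power of 2 and
  // does not spill into the next allocator size class.
  static size_t heap_block_size(size_t max_table_size, size_t malloc_overhead)
  {
    size_t target= max_table_size / 16;
    return round_up_to_power_of_2(target) - malloc_overhead;
  }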
Other things:
- Fixed calculation of max_records passed to hp_create() to take
  into account padding between records (see the sketch after this list).
- Updated the calculation of memory needed by heap tables. Before, we
  did not take into account the internal structures needed to access rows.
- Changed block size for memory_table from 1 to 16384 to get less
  fragmentation. This also avoids a problem where we need 1K
  to manage index and row storage, which was not accounted for before.
- Moved heap memory usage to a separate test for 32 bit.
- Allocate all data blocks in heap in powers of 2. Change reported
memory usage for heap to reflect this.
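For the max_records fix above, the idea is roughly the following
(illustrative only; the names are made up for this sketch):

  #include <cstddef>

  // Estimate how many rows fit in the allowed memory: each record costs
  // its length rounded up to the alignment boundary, i.e. the padding
  // between records is part of the per-row cost.
  static size_t max_records_estimate(size_t max_table_size, size_t reclength,
                                     size_t alignment)
  {
    size_t padded_length= (reclength + alignment - 1) / alignment * alignment;
    return max_table_size / padded_length;
  }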
Reviewed-by: Sergei Golubchik <serg@mariadb.org>
The idea is that instead of marking all select_lex's with DISTINCT, we
only mark those that really need a distinct result.
Benefits of this change:
- Temporary tables used with derived tables, UNION and IN are now smaller,
  as duplicates are removed already during the insert phase (see the
  sketch after this list).
- The optimizer can now produce better plans with EQ_REF. This can be
  seen in the tests, where several queries no longer materialize
  derived tables twice.
- Queries affected by 'in_predicate_conversion_threshold', where large IN
  lists are converted to subqueries, now produce better plans.
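To illustrate why the temporary tables get smaller (a minimal standalone
sketch, not MariaDB code): with DISTINCT set, a duplicate row is rejected
at the moment it is written into the materialized table, instead of being
stored and filtered out afterwards.

  #include <cstdio>
  #include <initializer_list>
  #include <string>
  #include <unordered_set>
  #include <vector>

  struct Tmp_table
  {
    bool distinct;                        // the select needs a distinct result
    std::unordered_set<std::string> key;  // stands in for a unique key over all columns
    std::vector<std::string> rows;

    void write_row(const std::string &row)
    {
      if (distinct && !key.insert(row).second)
        return;                           // duplicate: dropped on the insert phase
      rows.push_back(row);
    }
  };

  int main()
  {
    Tmp_table t{true, {}, {}};
    for (const char *r : {"a", "b", "a", "c", "b"})
      t.write_row(r);
    std::printf("%zu rows materialized\n", t.rows.size());  // prints 3, not 5
    return 0;
  }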
Other things:
- Removed a duplicate call to sel->init_select() in
  LEX::add_primary_to_query_expression_body()
- I moved the test of
  tab->table->pos_in_table_list->is_materialized_derived()
  from join_read_const_table() to the caller, as it caused problems for
  derived tables that could be proven to be const tables.
  This is also likely to fix some bugs: if join_read_const_table()
  was aborted, the table was left marked as JT_CONST, which cannot
  be good. I added an ASSERT there for now that can be removed when
  the code has been properly tested.
MDEV-27036: resolve the repeated "table" key for print_explain_json
MDEV-27036: duplicated keys in best_access_path
MDEV-27036: Explain_aggr_filesort::print_json_members: resolve duplicated "filesort" member in Json object
MDEV-27036: Explain_basic_join::print_explain_json_interns: fixed the
start_dups_weedout case for the main.explain_json test
When executing set operations in a pipeline using only one temporary table,
additional scans of intermediate results may be needed. These scans are
performed with the rnd_next() handler function, which might leave the
record buffers used for the temporary table in a state that is not suitable
for subsequent writes into the table. For example, this happens for the
Aria engine when the last call of rnd_next() encounters only deleted
records. Thus a cleanup of the record buffers is needed after each such
scan of the temporary table.
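The cleanup boils down to the following pattern (a minimal sketch with
assumed names, not the actual patch): after a scan with rnd_next(), the
shared row buffer is restored to the default row image before the next
round of writes.

  #include <cstring>

  struct Table_like
  {
    unsigned char record[64];          // shared read/write row buffer
    unsigned char default_values[64];  // pristine default row image
  };

  // Mirrors the idea of restoring the record buffer after a scan, so that
  // a scan that ended on deleted rows cannot poison the following writes.
  static void cleanup_record_buffer(Table_like *t)
  {
    memcpy(t->record, t->default_values, sizeof(t->record));
  }

  // Usage pattern during set operation execution (pseudo-flow):
  //   scan the temporary table with rnd_next() ...;
  //   cleanup_record_buffer(table);   // after each such scan
  //   continue writing intermediate results into the same table;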
Approved by Oleksandr Byelkin <sanja@mariadb.com>