Removed some redundant hint-related string literals from
spd_db_conn.cc
Clean up SPIDER_PARAM_*_[CHAR]LEN[S]
Add tests covering monitoring_kind=2. With this setting, spider reads
rows from mysql.spider_link_mon_servers with matching db_name,
table_name and link_id, but takes no further action on the result.
How monitoring_* can be useful: in the deprecated spider high
availability feature, when one remote fails, spider tries another
remote, a mechanism that apparently makes use of these table
parameters.
A test covering the query_cache_sync table param, and some further
tests on other spider table params.
Wrapper should be case insensitive.
Code documentation on spider priority binary tree.
Add an assertion that static_key_cardinality is always -1. All tests
still pass.
The direct aggregate (DA) mechanism seems to be intended to work only
when the alternative is a full table scan executed from the spider
node with the aggregation also done at the spider node. Typically this
happens in sub_select(). In the test spider.direct_aggregate_part,
direct aggregate allows sending COUNT statements directly to the data
nodes and adding up the results at the spider node, instead of
iterating over the rows one by one at the spider node.
By contrast, the group by handler (GBH) typically sends aggregated
queries directly to the data nodes, in which case DA does not improve
the situation. That is why we should fix the bug by disabling DA when
GBH is used.
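As a minimal sketch of the intended guard (hypothetical names, not
Spider's actual members):

  // Hypothetical sketch: if a group by handler was created, the
  // aggregation is already pushed down, so skip direct aggregate.
  static bool spider_use_direct_aggregate(bool group_by_handler_created,
                                          bool da_requested)
  {
    if (group_by_handler_created)
      return false;            /* GBH already aggregates on data nodes */
    return da_requested;
  }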
There are other reasons supporting this change. First, the creation of
GBH results in a call to change_to_use_tmp_fields() (as opposed to
setup_copy_fields()) which causes the spider DA function
spider_db_fetch_for_item_sum_funcs() to work on wrong items. Second,
the spider DA function only calls direct_add() on the items, and the
follow-up add() needs to be called by the sql layer code. In
do_select(), after executing the query with the GBH, it seems that the
required add() would not necessarily be called.
Disabling DA when GBH is used does fix the bug. There are a few
other things included in this commit to improve the situation with
spider DA:
1. Add a session variable that allows the user to disable DA
completely. This will help as a temporary measure if/when further bugs
with DA emerge.
2. Move the increment of direct_aggregate_count to the spider DA
function. Currently this is done in rather bizarre and random
locations.
3. Fix the spider_db_mbase_row creation so that the last of its row
fields (the sentinel) is NULL. The code already does a null check, but
the sentinel field was at an invalid address, causing the segfaults.
With a correct implementation of the row creation we avoid such
segfaults (see the sketch below).
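A simplified sketch of the fixed row creation (illustrative only, not
spider_db_mbase_row's actual layout): allocate one extra slot and
NULL-terminate it, so code that walks the row until it sees NULL stops
at a valid sentinel instead of reading past the buffer.

  #include <cstdlib>
  #include <cstring>

  // Clone a result row of num_fields column pointers, reserving one
  // extra slot for the NULL sentinel that the null check relies on.
  char **clone_row(char **src, size_t num_fields)
  {
    char **row= (char **) malloc((num_fields + 1) * sizeof(char *));
    if (!row)
      return NULL;
    memcpy(row, src, num_fields * sizeof(char *));
    row[num_fields]= NULL;     /* sentinel: iteration stops here */
    return row;
  }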
A spider connection string is a comma-separated list of parameter
definitions, where each definition is of the form
"<param_title> <param_value>", with <param_value> quote-delimited on
both ends and backslashes acting as an escaping prefix inside the
quotes.
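For illustration, a self-contained sketch of such a parser (simplified
error handling, std::string instead of the server's string types; not
Spider's actual parser):

  #include <cctype>
  #include <string>
  #include <utility>
  #include <vector>

  // Parse a connection string such as:
  //   host "h1", port "3306", comment "a \"quoted\" value"
  // into (title, value) pairs. A backslash escapes the next character
  // inside the quote-delimited value.
  std::vector<std::pair<std::string, std::string>>
  parse_connect_string(const std::string &s)
  {
    std::vector<std::pair<std::string, std::string>> params;
    size_t i= 0, n= s.size();
    while (i < n)
    {
      while (i < n && (isspace((unsigned char) s[i]) || s[i] == ','))
        i++;                               /* skip separators */
      size_t start= i;
      while (i < n && !isspace((unsigned char) s[i]))
        i++;                               /* param title */
      std::string title= s.substr(start, i - start);
      while (i < n && isspace((unsigned char) s[i]))
        i++;
      if (i >= n || title.empty())
        break;                             /* malformed tail: stop */
      char quote= s[i++];                  /* opening quote */
      std::string value;
      while (i < n && s[i] != quote)
      {
        if (s[i] == '\\' && i + 1 < n)
          i++;                             /* escaping prefix */
        value+= s[i++];
      }
      if (i < n)
        i++;                               /* closing quote */
      params.emplace_back(title, value);
    }
    return params;
  }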
Despite the simple syntax, the existing spider connection string
parser was poorly written: complex, hard to reason about and
error-prone, causing issues like the one described in MDEV-31117. For
example, it treated the param title the same way as the param value
when assigning, and had nonsensical fields like delim_title_len and
delim_title.
Thus, as part of the bugfix, we clean up the spider comment connection
string parsing, including:
- Factoring out some code from the parsing function
- Simplifying the struct `st_spider_param_string_parse`
- Making any changes necessitated by the above
The existing (incorrect) overriding mechanism is: a non-minus-one
variable value overrides the table param, which overrides the default
value.
Before MDEV-27169, an unspecified variable value is -1. So if the user
sets both the variable (to a value other than -1) and the table param,
the variable value prevails, which is incorrect.
After MDEV-27169, an unspecified variable value is the default value.
So if the user does not set the variable but sets the table param, the
default value prevails, which is even more incorrect.
This patch fixes it so that the table param, if specified, always
overrides the variable value, and the latter, if not specified or set
to -1, falls back to the default value.
We achieve this by replacing all such overriding in spd_param.cc with
macros that override in the correct way, and by removing all the
"overriding -1" lines involving table params in
spider_set_connect_info_default(), except for those table params not
defined as sysvar/thdvar in spd_params.cc.
We also introduce macros for non-overriding sysvars and thdvars, so
that the code is cleaner and less error-prone.
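The corrected precedence, as a sketch (hypothetical helper; the real
macros in spd_param.cc differ in detail):

  // A table param the user specified (marked by != -1) always wins;
  // otherwise fall back to the sysvar/thdvar value, whose unspecified
  // value is its real default after MDEV-27169.
  static long long spider_param_value(long long table_param_value,
                                      long long sysvar_value)
  {
    return table_param_value != -1 ? table_param_value : sysvar_value;
  }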
In server versions where MDEV-27169 has not been applied, we also
backport that patch, that is, we replace -1 default values with real
default values.
In server versions where MDEV-28006 has not been applied, we do the
same for the udf params.
Set the lock wait timeout to 1 beforehand and reset it afterwards, to
avoid the lock conflict caused by opening the same table twice in the
case of self-reference.
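The pattern, as a generic sketch (hypothetical names; the actual code
works on the session's lock wait timeout):

  // Save the timeout, set it to a small value (e.g. 1) so a
  // self-referencing open fails fast, and always restore it.
  struct Lock_wait_timeout_guard
  {
    unsigned long &timeout;
    unsigned long saved;
    Lock_wait_timeout_guard(unsigned long &timeout_arg,
                            unsigned long tmp_value)
      : timeout(timeout_arg), saved(timeout_arg)
    { timeout= tmp_value; }
    ~Lock_wait_timeout_guard() { timeout= saved; }
  };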
Extracted common subroutines, gave more meaningful names, added
comments, etc.
Also:
- Documented active servers for load-balancing reads, and other fields
in SPIDER_SHARE etc.
- Removed commented out code
- Documented and refactored self-reference check
- Removed some unnecessary functions
- Renamed unhelpful roop_count
- Refactored spider_get_{sts,crd}, where we turn get_type into an enum
- Cleaned up spider_mbase_handler::show_table_status() and
spider_mbase_handler::show_index()
This makes it easier to compare different costs and also allows
the optimizer to optimize different storage engines more reliably.
- Added tests/check_costs.pl, a tool to verify optimizer cost calculations.
- Most engine costs have been found with this program. All steps to
calculate the new costs are documented in Docs/optimizer_costs.txt
- User optimizer_cost variables are given in microseconds (as individual
costs can be very small). Internally they are stored in ms.
- Changed DISK_READ_COST (was DISK_SEEK_BASE_COST) from a hard disk cost
(9 ms) to common SSD cost (400MB/sec).
- Removed cost calculations for hard disks (rotation etc).
- Changed the following handler functions to return IO_AND_CPU_COST.
This makes it easy to apply different cost modifiers in ha_..time()
functions for io and cpu costs (see the sketch after this list).
- scan_time()
- rnd_pos_time() & rnd_pos_call_time()
- keyread_time()
- Enhanced keyread_time() to calculate the full cost of reading a set
of keys with a given number of ranges and an optional number of blocks
that need to be accessed.
- Removed read_time() as keyread_time() + rnd_pos_time() can do the same
thing and more.
- Tuned cost for: heap, myisam, Aria, InnoDB, archive and MyRocks.
Used heap table costs for json_table. The rest are using default engine
costs.
- Added the following new optimizer variables:
- optimizer_disk_read_ratio
- optimizer_disk_read_cost
- optimizer_key_lookup_cost
- optimizer_row_lookup_cost
- optimizer_row_next_find_cost
- optimizer_scan_cost
- Moved all engine specific cost to OPTIMIZER_COSTS structure.
- Changed costs to use 'records_out' instead of 'records_read' when
recalculating costs.
- Split optimizer_costs.h to optimizer_costs.h and optimizer_defaults.h.
This allows one to change costs without having to compile a lot of
files.
- Updated costs for filter lookup.
- Use a better cost estimate in best_extension_by_limited_search()
for the sorting cost.
- Fixed previous issues with 'filtered' explain column as we are now
using 'records_out' (min rows seen for table) to calculate filtering.
This greatly simplifies the filtering code in
JOIN_TAB::save_explain_data().
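For reference, roughly the shape of the new cost type together with an
illustrative engine cost function (the formula here is a sketch, not
any particular engine's implementation):

  // The new return type keeps io and cpu costs separate so callers
  // can apply different modifiers to each part.
  typedef struct
  {
    double io;    /* expected block reads, scaled by caching */
    double cpu;   /* engine cpu cost, in milliseconds */
  } IO_AND_CPU_COST;

  // Illustrative scan cost: io grows with blocks read, cpu with the
  // number of rows that must be stepped over.
  IO_AND_CPU_COST example_scan_time(double blocks, double rows,
                                    double row_next_find_cost)
  {
    IO_AND_CPU_COST cost;
    cost.io=  blocks;
    cost.cpu= rows * row_next_find_cost;
    return cost;
  }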
This change caused a lot of queries to be optimized differently than
before, which exposed different issues in the optimizer that need to
be fixed. These fixes are in the following commits. To avoid changing
the same test case over and over again, the changes in the test cases
are done in a single commit after all the critical change sets are
done.
InnoDB changes:
- Updated InnoDB to not divide the big range cost by 2.
- Added cost for InnoDB (innobase_update_optimizer_costs()).
- Don't mark the clustered primary key with HA_KEYREAD_ONLY. This
prevents the optimizer from trying to use index-only scans on the
clustered key.
- Disabled ha_innobase::scan_time(), ha_innobase::read_time() and
ha_innobase::rnd_pos_time() as the default engine cost functions now
work well for InnoDB.
Other things:
- Added --show-query-costs (\Q) option to mysql.cc to show the query
cost after each query (good when working with query costs).
- Extended my_getopt with GET_ADJUSTED_VALUE, which allows one to
adjust the value the user gives. This is used to change costs from
microseconds (user input) to milliseconds (what the server uses
internally); see the sketch after this list.
- Added include/my_tracker.h ; Useful include file to quickly test
costs of a function.
- Use handler::set_table() in all places instead of 'table= arg'.
- Added SHOW_OPTIMIZER_COSTS to sys variables. These are input and
shown in microseconds for the user but stored as milliseconds.
This makes the numbers easier to read for the user (fewer leading
zeros). Implemented in the 'Sys_var_optimizer_cost' class.
- In test_quick_select() do not use index scans if 'no_keyread' is set
for the table. This is what we do in other places of the server.
- Added THD parameter to Unique::get_use_cost() and
check_index_intersect_extension() and similar functions to be able
to provide costs to called functions.
- Changed 'records' to 'rows' in optimizer_trace.
- Write more information to optimizer_trace.
- Added INDEX_BLOCK_FILL_FACTOR_MUL (4) and INDEX_BLOCK_FILL_FACTOR_DIV (3)
to calculate usage space of keys in b-trees. (Before we used numeric
constants).
- Removed code that assumed that b-trees have similar costs to binary
trees. Replaced it with engine calls that return the cost.
- Added Bitmap::find_first_bit()
- Added timings to join_cache for ANALYZE table (patch by Sergei Petrunia).
- Added records_init and records_after_filter to POSITION to remember
more of what best_access_path() calculates.
- table_after_join_selectivity() changed to recalculate 'records_out'
based on the new fields from best_access_path().
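The unit convention, as a tiny sketch (hypothetical helper names;
GET_ADJUSTED_VALUE and Sys_var_optimizer_cost implement this idea in
my_getopt and the sys-var layer):

  // Users enter costs in microseconds; the server stores milliseconds.
  static inline double cost_from_user(double microseconds)
  { return microseconds / 1000.0; }

  static inline double cost_to_user(double milliseconds)
  { return milliseconds * 1000.0; }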
Bug fixes:
- Some queries did not update last_query_cost (was 0). Fixed by moving
the setting of thd->...last_query_cost into JOIN::optimize().
- Write '0' as number of rows for const tables with a matching row.
Some internals:
- Engine costs are stored in the OPTIMIZER_COSTS structure. When a
handlerton is created, we also create a new cost variable for the
handlerton. We also create a new variable if the user changes an
optimizer cost for a not yet loaded handlerton, either with command
line arguments or with SET
@@global.engine.optimizer_cost_variable=xx.
- There are 3 global OPTIMIZER_COSTS variables:
  default_optimizer_costs    The default costs + changes from the
                             command line without an engine specifier.
  heap_optimizer_costs       Heap table costs, used for temporary tables.
  tmp_table_optimizer_costs  The costs for the default on-disk internal
                             temporary table (MyISAM or Aria).
- The engine costs for a table are stored in the table_share. To speed
up access, the handler has a pointer to them. The costs are copied
to the table on first access. If one wants to change the costs, one
must first update the global engine costs and then do a FLUSH TABLES.
This was done so that the costs for an open table can be accessed
without any locks.
- When a handlerton is created, the costs are updated the following
way (see sql/keycaches.cc for details, and the sketch after this
list):
  - Use 'default_optimizer_costs' as a base.
  - Call hton->update_optimizer_costs() to override with the engine's
    default costs.
  - Override the costs that the user has specified for the engine.
- On handler open, copy the engine costs from the handlerton to the
TABLE_SHARE.
- Call handler::update_optimizer_costs() to allow the engine to update
the costs for this particular table.
- There are two costs stored in THD. These are copied to the handler
when the table is used in a query:
- optimizer_where_cost
- optimizer_scan_setup_cost
- Simplify code in best_access_path() by storing all cost results in a
structure. (Idea/Suggestion by Igor)
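The initialization order above, as a self-contained sketch (all types
and names here are simplified stand-ins for the server's handlerton
machinery):

  // Simplified stand-in for OPTIMIZER_COSTS.
  struct Costs { double disk_read_cost; double row_lookup_cost; };

  static Costs default_costs= { 10.0, 0.001 };

  struct Engine
  {
    Costs costs;
    void (*update_optimizer_costs)(Costs *);  /* engine hook */
    const Costs *user_overrides;      /* costs the user SET, if any */
  };

  static void init_engine_costs(Engine *eng)
  {
    eng->costs= default_costs;                  /* 1. base defaults */
    if (eng->update_optimizer_costs)
      eng->update_optimizer_costs(&eng->costs); /* 2. engine defaults */
    if (eng->user_overrides)
      eng->costs= *eng->user_overrides;         /* 3. user wins last */
  }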
This is Kentoku's patch for MDEV-22979 (e6e41f04f4 + 22a0097727),
which fixes MDEV-30370.
It changes the wait to a timed wait for the first sts thread, which
waits on server start to execute the init queries for spider. It also
flips the init_command flag to false when the sts thread is being
freed. With these changes the sts thread can check the flag regularly
and abort the init queries when it finds that init_command is false.
This avoids the deadlock that causes the problem in MDEV-30370.
It also fixes MDEV-22979 for 10.4, but not 10.5. I have not tested
higher versions for MDEV-22979.
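The shape of the change, as a hedged sketch (standard C++ primitives
standing in for the server's thread and condition APIs):

  #include <chrono>
  #include <condition_variable>
  #include <mutex>

  std::mutex mtx;
  std::condition_variable cond;
  bool init_command= true; /* flipped to false when the thread is freed */

  void sts_thread_wait_loop()
  {
    std::unique_lock<std::mutex> lk(mtx);
    while (init_command /* && init queries not yet runnable */)
    {
      // Timed wait instead of an indefinite one: wake up regularly,
      // re-check init_command, and abort the init queries when it
      // has been flipped to false.
      cond.wait_for(lk, std::chrono::seconds(1));
    }
  }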
A test has also been done on MDEV-29904 to avoid regression, given
MDEV-27233 is a similar problem and its patch caused the
regression. The test passes for 10.4-11.0.
However, this ad hoc test only works consistently when placed in the
main testsuite. We should not place spider tests in the main suite, so
we do not include it in this commit. A patch for MDEV-27912 should fix
this problem and allow a proper test for MDEV-29904. See the comments
in the jira ticket MDEV-30370/29904 for the ad hoc testcase used for
this commit.
The conn_kind, which stands for "connection kind", is no longer
useful because HandlerSocket support has been removed and Spider now
has only one connection kind, SPIDER_CONN_KIND_MYSQL. Remove conn_kind
and related code.
Signed-off-by: Yuchen Pei <yuchen.pei@mariadb.com>
Reviewed-by: Nayuta Yanagisawa <nayuta.yanagisawa@mariadb.com>