perfschema thread walker needs to take the thread's LOCK_thd_kill to prevent
the thread from disappearing while it's being looked at.
But there's no need to lock it for the current thread.
In fact, it was harmful, as code further down the stack may take
LOCK_thd_kill itself (e.g. set_killed() does it, and my_malloc_size_cb_func()
calls set_killed()). It also caused a bunch of mutexes to be locked under
LOCK_thd_kill, which created problems later when my_malloc_size_cb_func()
called set_killed() at some unpredictable point while holding arbitrary
mutexes.
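A minimal standalone sketch of the resulting pattern (std::mutex stands in
for LOCK_thd_kill; the names are illustrative, not the actual patch):

    #include <mutex>

    struct ThreadCtl                    // models THD
    {
      std::mutex lock_kill;             // models THD::LOCK_thd_kill
      int state= 0;
    };

    void inspect(ThreadCtl *thd, ThreadCtl *current)
    {
      // Lock only when looking at another thread: the current thread cannot
      // disappear under us, and locking here can conflict with code further
      // down the stack that takes the same lock (e.g. via set_killed()).
      const bool need_lock= (thd != current);
      if (need_lock)
        thd->lock_kill.lock();
      int s= thd->state;                // ... examine the thread ...
      (void) s;
      if (need_lock)
        thd->lock_kill.unlock();
    }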
To iterate over all status variables one should use
LOCK_all_status_vars, not LOCK_status.
This fixes a sporadic mutex lock inversion in plugins.password_reuse_check:
* acl_cache->lock is taken over complex operations that might increment
status counters (under LOCK_status).
* acl_cache->lock is needed to get the values of Acl% status variables
when iterating over status variables
Fixes the failure of
  ./mtr --ps sys_vars.gtid_slave_pos_grant sysschema.v_session_ssl_status
which reported:
  safe_mutex: Found wrong usage of mutex 'LOCK_thd_data' and 'LOCK_active_mi'
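A standalone model of the intended locking (names illustrative): the list of
status variables has its own lock, separate from the lock that protects the
counters, so evaluating a variable may take further locks (e.g.
acl_cache->lock) without nesting them under LOCK_status.

    #include <mutex>
    #include <string>
    #include <vector>

    std::mutex LOCK_all_status_vars;           // protects the list itself
    std::mutex LOCK_status;                    // protects global counters
    std::vector<std::string> all_status_vars;  // models the registered variables

    void for_each_status_var(void (*fn)(const std::string&))
    {
      // Iterate under LOCK_all_status_vars only; LOCK_status is not needed
      // to walk the list, which avoids the lock-order inversion above.
      std::lock_guard<std::mutex> guard(LOCK_all_status_vars);
      for (const auto &var : all_status_vars)
        fn(var);
    }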
* treat FUNC/ARRAY variables as SESSION (otherwise they won't be shown)
* allow SHOW_SIMPLE_FUNC everywhere where SHOW_FUNC is
* increase row buffer size to avoid "too short" assert
This reverts commit 03c9a4ef4a.
The fix was wrong. It effectively said: if the uninitialized
wait->m_class happens to have some specific value, then don't initialize it.
Remove old deprecation helpers that were not used anywhere.
Create new deprecation helpers and enforce their usage.
This also removes inconsistencies in reporting deprecation:
sometimes it was ER_WARN_DEPRECATED_SYNTAX (1287),
sometimes ER_WARN_DEPRECATED_SYNTAX_NO_REPLACEMENT (1681),
sometimes a warning, sometimes a note.
It should always be:
* ER_WARN_DEPRECATED_SYNTAX
* a warning (because it's something actionable, not purely informational)
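A standalone sketch of such a helper (the name and the printf-style reporting
are illustrative; the real helpers push a warning through the server's
condition machinery):

    #include <cstdio>

    enum { ER_WARN_DEPRECATED_SYNTAX= 1287 };    // the one code to use

    void warn_deprecated(const char *old_syntax, const char *replacement)
    {
      // Always the same error code, always a warning (never a note, never
      // the ..._NO_REPLACEMENT variant).
      std::fprintf(stderr,
                   "Warning %d: '%s' is deprecated and will be removed in a "
                   "future release. Please use %s instead\n",
                   ER_WARN_DEPRECATED_SYNTAX, old_syntax, replacement);
    }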
- Replacing the old-style in-place check_db_name() in make_table_name_list()
  with new-style non-modifying code
- Adding a "const" qualifier to the "db" parameter of ha_discover_table_names()
  and the functions it depends on.
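A standalone sketch of the style difference (both helpers here are
illustrative stand-ins, not the real server functions):

    #include <cctype>

    // old style: checks the name while modifying the buffer in place
    bool check_name_inplace(char *db)
    {
      if (!db || !*db)
        return false;
      for (char *p= db; *p; p++)
        *p= (char) std::tolower((unsigned char) *p);   // side effect
      return true;
    }

    // new style: a pure predicate, so callers can pass a const pointer
    bool db_name_is_valid(const char *db)
    {
      return db != nullptr && *db != '\0';
    }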
…with: Test assertion failed
Problem:
=======
Assertion text: 'Value returned by SSS and PS table for Last_Error_Number
should be same.'
Assertion condition: '"1146" = "0"'
Assertion condition, interpolated: '"1146" = "0"'
Assertion result: '0'
Analysis:
========
In parallel replication, the worker pool is activated when the slave is
started and cleared when the slave stops. Each time the worker pool is
activated, a backup worker pool is also created to store worker-specific
performance schema information in case of errors. On error, all relevant
information is copied from rpl_parallel_thread to rli and cleared from the
thread. Then the server waits for all workers to complete their work;
during this stage the worker info specific to the performance schema table
is stored into the backup pool, and finally the actual pool is cleared. If
users query the performance schema table for the status of the workers, the
information from the backup pool is used. The test simulates an
ER_NO_SUCH_TABLE error and verifies the worker information in the pfs table.
The test works fine if execution occurs in the following order.
Step 1. An error occurs; worker information is copied to the backup pool.
Step 2. handle_slave_sql invokes 'rpl_parallel_resize_pool_if_no_slaves' to
deactivate the worker pool; it sets pool->count=0.
Step 3. The PFS table is queried; since the actual pool is deactivated, the
backup pool information is read.
If Step 3 happens prior to Step 2, the pool is yet to be deactivated and
the actual pool is read, which doesn't have any error details as they were
already cleared. Hence the test occasionally fails.
Fix:
===
Upon error, mark the backup pool as valid, so that when the PFS table is
queried its information is read from the backup pool; when the flag is not
set, the regular pool is read.
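A standalone model of the fix (names illustrative, not the actual rpl
structures):

    #include <atomic>

    struct worker_info { int last_error_number= 0; };

    struct pools
    {
      worker_info actual;                   // models the active worker pool
      worker_info backup;                   // models the backup worker pool
      std::atomic<bool> backup_valid{false};
    };

    void on_worker_error(pools &p, int errcode)
    {
      p.backup.last_error_number= errcode;  // copy worker info to the backup
      p.backup_valid.store(true);           // flag it before the pool is cleared
    }

    const worker_info &pfs_read(const pools &p)
    {
      // Readers prefer the backup pool whenever it is flagged valid,
      // regardless of whether the actual pool has been deactivated yet.
      return p.backup_valid.load() ? p.backup : p.actual;
    }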
This work is one of the last pieces created by the late Sujatha Sivakumar.
- agressively -> aggressively
- exising -> existing
- occured -> occurred
- releated -> related
- seperated -> separated
- sucess -> success
- use use -> use
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
When one session SELECT ... FOR UPDATE and holds the lock, subsequent
sessions that SELECT ... FOR UPDATE will wait to get the lock.
Currently, that event is labeled as `wait/io/table/sql/handler`, which
is incorrect. Instead, it should have been
`wait/lock/table/sql/handler`.
Two factors contribute to this bug:
1. The instrumentation interface and the heavy usage of `TABLE_IO_WAIT` in
the `sql/handler.cc` file. See the interface [^1] for a better understanding;
2. The balancing act [^2] of doing instrumentation aggregation _AND_
having good performance. For example, EVENTS_WAITS_SUMMARY... is
aggregated using EVENTS_WAITS_CURRENT. Aggregation needs to be based
on the same wait class, and the code was overly aggressive in labeling a
LOCK operation as an IO operation in this case.
The proposed fix is pretty simple, but understanding the bug took a
while. Hence the footnotes below. For future improvement and
refactoring, we may want to consider renaming `TABLE_IO_WAIT` and making
it less coarse and more targeted.
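A standalone sketch of the distinction only (names illustrative; the real
change adjusts which performance_schema wait class the handler
instrumentation records):

    enum wait_class { WAIT_IO_TABLE, WAIT_LOCK_TABLE };
    enum table_op   { OP_ROW_READ, OP_ROW_WRITE, OP_TABLE_LOCK };

    // The wait class should follow what the operation actually is, not the
    // macro that happens to wrap it: a blocked lock acquisition is a
    // wait/lock/table event, not a wait/io/table one.
    wait_class classify(table_op op)
    {
      return op == OP_TABLE_LOCK ? WAIT_LOCK_TABLE : WAIT_IO_TABLE;
    }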
Note that the newly added test case, events_waits_current_MDEV-29091,
initially didn't pass Buildbot CI for the embedded build tests. Further
research showed that the other affected tests all include not_embedded.inc.
This oversight was fixed later.
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
[^1]: To understand the `performance_schema` instrumentation interface, I
found this URL the most helpful:
https://dev.mysql.com/doc/dev/mysql-server/latest/PAGE_PFS_PSI.html
[^2]: The best place to understand instrumentation projection,
composition, and aggregation is the source file. I prefer reading the
Doxygen-produced HTML, but for whatever reason the rendering is not ideal.
Here is a link to 10.6's pfs.cc:
https://github.com/MariaDB/server/blob/10.6/storage/perfschema/pfs.cc
This patch is the result of running
run-clang-tidy -fix -header-filter=.* -checks='-*,modernize-use-equals-default' .
Code style changes have been done on top. The result of this change
leads to the following improvements:
1. Binary size reduction.
* For a -DBUILD_CONFIG=mysql_release build, the binary size is reduced by
~400kb.
* A raw -DCMAKE_BUILD_TYPE=Release reduces the binary size by ~1.4kb.
2. The compiler can better understand the intent of the code, which leads
to more optimization possibilities. Additionally, it enables detecting
unused variables whose default constructor was empty but not explicitly
marked as default.
One particular change was required following this patch: in sql/opt_range.cc,
result_keys, an unused variable of the template class Bitmap, now correctly
issues an unused-variable warning.
Setting the Bitmap template class constructor to default allows the compiler
to see that instantiating the class has no side effects. Previously the
compiler could not issue the warning, as it assumed that the Bitmap class
(being a template) might not perform a no-op in its default constructor.
This prevented the unused-variable warning.
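A standalone illustration of the effect (the class here just models the
pattern):

    // With "= default" the constructor is trivial, so the compiler knows
    // constructing an unused instance has no side effects and can warn.
    template <unsigned N>
    class Bitmap_model                    // stands in for the Bitmap template
    {
      unsigned long buf[(N + 31) / 32];
    public:
      Bitmap_model()= default;            // was: Bitmap_model() {}
    };

    void f()
    {
      Bitmap_model<64> result_keys;       // -Wunused-variable now fires here
    }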
This makes it easier to compare different costs and also allows
the optimizer to optimize for different storage engines more reliably.
- Added tests/check_costs.pl, a tool to verify optimizer cost calculations.
- Most engine costs have been found with this program. All steps to
calculate the new costs are documented in Docs/optimizer_costs.txt
- User optimizer_cost variables are given in microseconds (as individual
costs can be very small). Internally they are stored in ms.
- Changed DISK_READ_COST (was DISK_SEEK_BASE_COST) from a hard disk cost
(9 ms) to common SSD cost (400MB/sec).
- Removed cost calculations for hard disks (rotation etc).
- Changed the following handler functions to return IO_AND_CPU_COST.
This makes it easy to apply different cost modifiers in ha_..time()
functions for io and cpu costs (see the sketch further below).
- scan_time()
- rnd_pos_time() & rnd_pos_call_time()
- keyread_time()
- Enhanced keyread_time() to calculate the full cost of reading of a set
of keys with a given number of ranges and optional number of blocks that
need to be accessed.
- Removed read_time() as keyread_time() + rnd_pos_time() can do the same
thing and more.
- Tuned cost for: heap, myisam, Aria, InnoDB, archive and MyRocks.
Used heap table costs for json_table. The rest are using default engine
costs.
- Added the following new optimizer variables:
- optimizer_disk_read_ratio
- optimizer_disk_read_cost
- optimizer_key_lookup_cost
- optimizer_row_lookup_cost
- optimizer_row_next_find_cost
- optimizer_scan_cost
- Moved all engine-specific costs to the OPTIMIZER_COSTS structure.
- Changed costs to use 'records_out' instead of 'records_read' when
recalculating costs.
- Split optimizer_costs.h to optimizer_costs.h and optimizer_defaults.h.
This allows one to change costs without having to compile a lot of
files.
- Updated costs for filter lookup.
- Use a better cost estimate in best_extension_by_limited_search()
for the sorting cost.
- Fixed previous issues with 'filtered' explain column as we are now
using 'records_out' (min rows seen for table) to calculate filtering.
This greatly simplifies the filtering code in
JOIN_TAB::save_explain_data().
This change caused a lot of queries to be optimized differently than
before, which exposed different issues in the optimizer that need to
be fixed. These fixes are in the following commits. To not have to
change the same test case over and over again, the changes in the test
cases are done in a single commit after all the critical change sets
are done.
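A standalone sketch of the new cost interface (the struct member names are
assumed here, and the functions are illustrative, not actual engine code):

    struct IO_AND_CPU_COST { double io; double cpu; };

    // A hypothetical engine scan cost: blocks to read plus per-row CPU work.
    IO_AND_CPU_COST example_scan_time(double blocks, double rows,
                                      double disk_read_cost, double row_cost)
    {
      return { blocks * disk_read_cost, rows * row_cost };
    }

    // A ha_..time()-style wrapper can then weight the io component by
    // optimizer_disk_read_ratio (the chance a block must come from disk)
    // and combine it with the cpu component, without the engine having to
    // know about those modifiers.
    double example_total_cost(IO_AND_CPU_COST c, double disk_read_ratio)
    {
      return c.io * disk_read_ratio + c.cpu;
    }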
InnoDB changes:
- Updated InnoDB to not divide big range cost by 2.
- Added cost for InnoDB (innobase_update_optimizer_costs()).
- Don't mark clustered primary key with HA_KEYREAD_ONLY. This prevents
the optimizer from trying to use index-only scans on the clustered key.
- Disabled ha_innobase::scan_time() and ha_innobase::read_time() and
ha_innobase::rnd_pos_time() as the default engine cost functions now
work well for InnoDB.
Other things:
- Added --show-query-costs (\Q) option to mysql.cc to show the query
cost after each query (good when working with query costs).
- Extended my_getopt with GET_ADJUSTED_VALUE, which allows one to adjust
the value given by the user. This is used to change costs from
microseconds (user input) to milliseconds (what the server uses
internally).
- Added include/my_tracker.h ; Useful include file to quickly test
costs of a function.
- Use handler::set_table() in all places instead of 'table= arg'.
- Added SHOW_OPTIMIZER_COSTS to sys variables. These are entered and
shown in microseconds for the user but stored as milliseconds.
This is to make the numbers easier to read for the user (fewer
leading zeros). Implemented in the 'Sys_var_optimizer_cost' class.
- In test_quick_select() do not use index scans if 'no_keyread' is set
for the table. This is what we do in other places of the server.
- Added THD parameter to Unique::get_use_cost() and
check_index_intersect_extension() and similar functions to be able
to provide costs to called functions.
- Changed 'records' to 'rows' in optimizer_trace.
- Write more information to optimizer_trace.
- Added INDEX_BLOCK_FILL_FACTOR_MUL (4) and INDEX_BLOCK_FILL_FACTOR_DIV (3)
to calculate usage space of keys in b-trees. (Before we used numeric
constants).
- Removed code that assumed that b-trees have similar costs to binary
trees. Replaced with engine calls that return the cost.
- Added Bitmap::find_first_bit()
- Added timings to join_cache for ANALYZE table (patch by Sergei Petrunia).
- Added records_init and records_after_filter to POSITION to remember
more of what best_access_path() calculates.
- table_after_join_selectivity() changed to recalculate 'records_out'
based on the new fields from best_access_path().
Bug fixes:
- Some queries did not update last_query_cost (it was 0). Fixed by moving
the setting of thd->...last_query_cost into JOIN::optimize().
- Write '0' as number of rows for const tables with a matching row.
Some internals:
- Engine costs are stored in the OPTIMIZER_COSTS structure. When a
handlerton is created, we also create a new cost variable for the
handlerton. We also create a new variable if the user changes an
optimizer cost for a not yet loaded handlerton, either with command
line arguments or with SET
@@global.engine.optimizer_cost_variable=xx.
- There are 3 global OPTIMIZER_COSTS variables:
default_optimizer_costs The default costs + changes from the
command line without an engine specifier.
heap_optimizer_costs Heap table costs, used for temporary tables
tmp_table_optimizer_costs The cost for the default on disk internal
temporary table (MyISAM or Aria)
- The engine cost for a table is stored in table_share. To speed up
accesses the handler has a pointer to this. The cost is copied
to the table on first access. If one wants to change the cost one
must first update the global engine cost and then do a FLUSH TABLES.
This was done to be able to access the costs for an open table
without any locks.
- When a handlerton is created, the costs are updated in the following way
(see sql/keycaches.cc for details; a sketch follows at the end of this list):
- Use 'default_optimizer_costs' as a base
- Call hton->update_optimizer_costs() to override with the engine's
default costs.
- Override the costs that the user has specified for the engine.
- On handler open, copy the engine cost from the handlerton to the TABLE_SHARE.
- Call handler::update_optimizer_costs() to allow the engine to update
cost for this particular table.
- There are two costs stored in THD. These are copied to the handler
when the table is used in a query:
- optimizer_where_cost
- optimizer_scan_setup_cost
- Simplify code in best_access_path() by storing all cost results in a
structure. (Idea/suggestion by Igor)
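A standalone sketch of the cost resolution order described in the list above
(the helper names are illustrative, not the actual server functions):

    struct Costs { double disk_read_cost; double row_lookup_cost; };

    // 1. default_optimizer_costs (defaults + command-line overrides without
    //    an engine specifier) is the base.
    // 2. The engine's own defaults override it (hton->update_optimizer_costs()).
    // 3. Values the user has set for this engine override those.
    Costs resolve_engine_costs(const Costs &defaults,
                               void (*engine_defaults)(Costs*),
                               void (*user_overrides)(Costs*))
    {
      Costs c= defaults;
      engine_defaults(&c);
      user_overrides(&c);
      return c;
    }

    // 4. On handler open the result is copied to the table (TABLE_SHARE) and
    //    handler::update_optimizer_costs() may adjust it for that table.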