mirror of https://github.com/MariaDB/server.git synced 2025-08-09 22:24:09 +03:00
Commit Graph

4799 Commits

Author SHA1 Message Date
Monty
381e9adb6c MDEV-34150 Assertion failure in Diagnostics_area::set_error_status upon binary logging hitting tmp space limit
- Moved writing to binlog_cache from close_thread_tables() to
  binlog_commit().
- In select_create(), delete cached row events instead of flushing them
  to disk. This was done to avoid a possible disk write error in this code.
2024-05-27 12:39:04 +02:00
Monty
d2304554ac MDEV-33751 Assertion `thd' failed in int temp_file_size_cb_func(tmp_file_tracking*, int)
Changes:
- Fixed so that MyISAM and Aria parallel repair work with the tmp file limit.
  This required adding current_thd to all parallel workers and adding
  protection in my_malloc_size_cb_func() and temp_file_size_cb_func() so
  they can handle shared THDs.  I removed the old code in MyISAM that
  set current_thd(), as it only worked with virtual indexed
  columns and I wanted to keep the Aria and MyISAM code identical.

Other things:
- Improved error messages from Aria parallel repair and
  create_internal_tmp_table_from_heap().
2024-05-27 12:39:04 +02:00
Monty
b9f5793176 MDEV-9101 Limit size of created disk temporary files and tables
Two new variables added:
- max_tmp_space_usage : Limits the temporary space allowance per user
- max_total_tmp_space_usage: Limits the temporary space allowance for
  all users.

New status variables: tmp_space_used & max_tmp_space_used
New field in information_schema.process_list: TMP_SPACE_USED
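
For illustration, a minimal usage sketch (variable, status and column names as
listed above, shown here against information_schema.PROCESSLIST; treating the
values as bytes and max_tmp_space_usage as settable per session is an
assumption):

  SET GLOBAL max_total_tmp_space_usage = 10*1024*1024*1024;  -- limit for all users
  SET SESSION max_tmp_space_usage      = 1*1024*1024*1024;   -- limit for this user
  SHOW STATUS LIKE 'tmp_space_used';
  SHOW GLOBAL STATUS LIKE 'max_tmp_space_used';
  SELECT ID, TMP_SPACE_USED FROM information_schema.PROCESSLIST;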

The temporary space is counted for:
- All SQL level temporary files. This includes files for filesort,
  transaction temporary space, analyze, binlog_stmt_cache etc.
  It does not include engine internal temporary files used for repair,
  alter table, index pre sorting etc.
- All internal on disk temporary tables created as part of resolving a
  SELECT, multi-source update etc.

Special cases:
- When doing a commit, the last flush of the binlog_stmt_cache
  will not cause an error even if the temporary space limit is exceeded.
  This is to avoid giving errors on commit. This means that a user
  can temporarily go over the limit with up to binlog_stmt_cache_size.

Noteworthy issue:
- One has to be careful when using small values for max_tmp_space_usage
  together with binary logging and with non-transactional tables.
  If the binary log entry for the query is bigger than
  binlog_stmt_cache_size and one hits the limit of max_tmp_space_usage
  when flushing the entry to disk, the query will abort and the
  binary log will not contain the last changes to the table.
  This will also stop the slave!
  This is also true for all Aria tables as Aria cannot do rollback
  (except in case of crashes)!
  One way to avoid this is to use @@binlog_format=statement for
  queries that update a lot of rows, as in the sketch below.
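
A minimal sketch of that workaround (the table name is hypothetical; switching
@@binlog_format needs suitable privileges and is only safe when statement
logging is correct for the query):

  SET SESSION binlog_format=STATEMENT;
  UPDATE big_aria_table SET counter = counter + 1;  -- logged as one statement,
                                                    -- not as a large row-event cache
  SET SESSION binlog_format=ROW;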

Implementation:
- All writes to temporary files or internal temporary tables, that
  increases the file size, are routed through temp_file_size_cb_func()
  which updates and checks the temp space usage.
- Most of the temporary file monitoring is done inside IO_CACHE.
  Temporary file monitoring is done inside the Aria engine.
- MY_TRACK and MY_TRACK_WITH_LIMIT are new flags for init_io_cache().
  MY_TRACK means that we track the file usage. MY_TRACK_WITH_LIMIT means
  that we track the file usage and we give an error if the limit is
  breached. This is used to not give an error on commit when
  binlog_stmt_cache is flushed.
- global_tmp_space_used contains the total tmp space used so far.
  This is needed to quickly check against max_total_tmp_space_usage.
- Temporary space errors are using EE_LOCAL_TMP_SPACE_FULL and
  handler errors are using HA_ERR_LOCAL_TMP_SPACE_FULL.
  This is needed until we move general errors to their own error space
  so that they cannot conflict with system error numbers.
- Return value of my_chsize() and mysql_file_chsize() has changed
  so that -1 is returned in the case my_chsize() could not decrease
  the file size (very unlikely and will not happen on modern systems).
  All calls to _chsize() are updated to check for > 0 as the error
  condition.
- At the destruction of THD we check that THD::tmp_file_space == 0
- At server end we check that global_tmp_space_used == 0
- As a precaution against errors in the tmp_space_used code, one can set
  max_tmp_space_usage and max_total_tmp_space_usage to 0 to disable
  the tmp space quota errors.
- truncate_io_cache() function added.
- Aria tables using static or dynamic row length are registered in 8K
  increments to avoid some calls to update_tmp_file_size().

Other things:
- Ensure that all handler errors are registered.  Before, some engine
  errors could be printed as "Unknown error".
- Fixed bug in filesort() that caused an assert if there was an error
  when writing to the temporary file.
- Fixed that compute_window_func() now takes into account write errors.
- In case of parallel replication, rpl_group_info::cleanup_context()
  could call trans_rollback() with thd->error set, which would cause
  an assert. Fixed by resetting the error before calling trans_rollback().
- Fixed bug in subselect3.inc which caused the following test to use
  heap tables with a low value for max_heap_table_size.
- Fixed bug in sql_expression_cache where it did not overflow the
  heap table to an Aria table.
- Added Max_tmp_disk_space_used to slow query log.
- Fixed some bugs in log_slow_innodb.test
2024-05-27 12:39:04 +02:00
Sergei Golubchik
9293d40fa7 MDEV-33145 support for old-mode=OLD_FLUSH_STATUS
add old-mode that restores inconsistent legacy behavior for FLUSH STATUS.
It doesn't affect FLUSH { SESSION | GLOBAL } STATUS.
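
A hedged usage sketch (assuming the new value is set through the @@old_mode
system variable like other old-mode flags):

  SET GLOBAL old_mode = 'OLD_FLUSH_STATUS';
  FLUSH STATUS;           -- behaves like the legacy FLUSH STATUS again
  FLUSH SESSION STATUS;   -- not affected by the old-mode flag
  FLUSH GLOBAL STATUS;    -- not affected by the old-mode flag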
2024-05-27 12:39:03 +02:00
Sergei Golubchik
9ecec1f730 cleanup: old_mode_deprecated
make it easier to add more old-mode values
2024-05-27 12:39:03 +02:00
Monty
775cba4d0f MDEV-33145 Add FLUSH GLOBAL STATUS
- FLUSH GLOBAL STATUS now resets most global_status_vars.
  At this stage, this is mainly to be used for testing.
- FLUSH SESSION STATUS added as an alias for FLUSH STATUS.
- FLUSH STATUS does not require any privilege (before it required RELOAD).
- FLUSH GLOBAL STATUS requires RELOAD privilege.
- All global status reset moved to FLUSH GLOBAL STATUS.
- Replication semisync status variables are now reset by
  FLUSH GLOBAL STATUS.
- In test cases, the only changes are:
  - Replace FLUSH STATUS with FLUSH GLOBAL STATUS
  - Replace FLUSH STATUS with FLUSH STATUS; FLUSH GLOBAL STATUS.
    This was only done in a few tests where the test was using SHOW STATUS
    for both local and global variables.
- Uptime_since_flush_status is now always provided, independent of whether
  ENABLED_PROFILING is enabled when compiling MariaDB.
- @@global.Uptime_since_flush_status is reset on FLUSH GLOBAL STATUS
  and @@session.Uptime_since_flush_status is reset on FLUSH SESSION STATUS.
- When connected, @@session.Uptime_since_flush_status is set to 0.
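
For illustration, a short sketch of the commands and privileges described above
(status variable names as in this message):

  FLUSH SESSION STATUS;   -- alias for FLUSH STATUS, no privilege required,
                          -- resets @@session.Uptime_since_flush_status
  FLUSH GLOBAL STATUS;    -- requires RELOAD, resets most global status variables
                          -- and @@global.Uptime_since_flush_status
  SHOW GLOBAL STATUS LIKE 'Uptime_since_flush_status';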
2024-05-27 12:39:03 +02:00
Monty
2464ee758a MDEV-33655 Remove alter_algorithm
Remove alter_algorithm but keep the variable as no-op (with a warning).

The reasons for removing alter_algorithm are:
- alter_algorithm was introduced as a replacement for the
  old_alter_table that was used to force the usage of the original
  alter table algorithm (copy) in the cases where the new alter
  algorithm did not work. The new option was added as a way to force
  the usage of a specific algorithm when it should instead have made
  it possible to disable algorithms that would not work for some
  reason.
- alter_algorithm introduced some cases where ALTER TABLE would not
  work without specifying the ALGORITHM=XXX option together with
  ALTER TABLE.
- Having different values of alter_algorithm on master and slave could
  cause slave to stop unexpectedly.
- ALTER TABLE FORCE, as used by mariadb-upgrade, would not always work
  if alter_algorithm was set for the server.
- As part of MDEV-33449 "improving repair of tables" it became
  clear that alter_algorithm made it harder to provide a better and
  more consistent ALTER TABLE FORCE and REPAIR TABLE, and it would be
  better to remove it.
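
A small sketch of the resulting behaviour (table and column are hypothetical;
the assumption is that the variable is still accepted as a no-op and only
raises a warning, as described above):

  SET SESSION alter_algorithm='INSTANT';               -- accepted, no effect, warning
  SHOW WARNINGS;
  ALTER TABLE t1 ADD COLUMN c INT, ALGORITHM=INPLACE;  -- per-statement ALGORITHM still works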
2024-05-27 12:39:03 +02:00
Monty
b879b8a5c8 More Windows changes for 32-bit unsigned timestamp:
MDEV-32188 make TIMESTAMP use whole 32-bit unsigned range

- Changed usage of timeval to my_timeval as the timeval parts on Windows
  are 32-bit long, which causes some compiler issues on Windows.
2024-05-27 12:39:02 +02:00
Sergei Golubchik
869e67c92f cleanup: remove thd->stmt_changes_data
what is done in the plugin - stays in the plugin
2024-05-27 12:39:02 +02:00
Sergei Golubchik
3781848bca mark the deprecated sysvar deprecated
and adjust the copyright year
2024-05-27 12:39:02 +02:00
Monty
243b9f3cd2 MDEV-33501 Extend query_response_time plugin to be compatible with Percona server
This is to update the plugin to be compatible with Percona's
query_response_time plugin, with some additions.
Some of the tests are taken from Percona server.

- Added plugins QUERY_RESPONSE_TIME_READ, QUERY_RESPONSE_TIME_WRITE and
  QUERY_RESPONSE_TIME_READ_WRITE.
- Added option query_response_time_session_stats, with possible values
  GLOBAL, ON or OFF, to the query_response_time plugin.

Notes:
- All modules are dependent on QUERY_RESPONSE_READ_TIME. This must always
  be enabled if any of the other modules are used.
  This will be auto-enabled in the near future.
- Accounting is done per statement. Stored functions are regarded
  as part of the original statement.
- For stored procedures the accounting is done per statement executed
  in the stored procedure. CALL will not be accounted for because of this.
- FLUSH commands will not be accounted for. This is to ensure that
  FLUSH QUERY_RESPONSE_TIME is not part of the statistics.
  (This helps when testing with mtr and otherwise).
- FLUSH QUERY_RESPONSE_TIME_READ and FLUSH QUERY_RESPONSE_TIME_WRITE
  only reset the corresponding status.
- FLUSH QUERY_RESPONSE_TIME and FLUSH QUERY_RESPONSE_TIME_READ_WRITE, or
  changing the value of query_response_time_range_base followed by
  any FLUSH of QUERY_RESPONSE_TIME, reset all statuses.
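
A hedged usage sketch (plugin and option names as listed above; the INSTALL
SONAME library name and the information_schema table names are assumptions):

  INSTALL SONAME 'query_response_time';                -- assumed library name
  SET GLOBAL query_response_time_stats=ON;
  SET GLOBAL query_response_time_session_stats='ON';   -- new option: GLOBAL, ON or OFF
  SELECT * FROM information_schema.QUERY_RESPONSE_TIME_READ;
  FLUSH QUERY_RESPONSE_TIME_READ_WRITE;                -- resets only that statistic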
2024-05-27 12:39:02 +02:00
Oleksandr Byelkin
dd7d9d7fb1 Merge branch '11.4' into 11.5 2024-05-23 17:01:43 +02:00
Oleksandr Byelkin
99b370e023 Merge branch '11.2' into 11.4 2024-05-21 19:38:51 +02:00
Sergei Golubchik
bf5da43e50 Merge branch '11.1' into 11.2 2024-05-13 10:00:26 +02:00
Sergei Golubchik
f0a5412037 Merge branch '11.0' into 11.1 2024-05-13 09:52:30 +02:00
Sergei Golubchik
f9807aadef Merge branch '10.11' into 11.0 2024-05-12 12:18:28 +02:00
Nikita Malyavin
398ae9ee3c MDEV-33330 Server crash or assertion failure in binlog_get_pending_rows_event
When binlog_get_pending_rows_event was refactored, one usage in
binlog_need_stmt_format was not taken into account.

As binlog_get_pending_rows_event now requires an existing cache_mngr, this
check is now made first.
2024-05-03 23:01:10 +02:00
Alexander Barkov
001f93df2b MDEV-12668 SRID is not preserved in UNION, VIEW, MIN, MAX
Fixing the problem that an operation involving a mix of
two or more GEOMETRY operands did not preserve their SRIDs.
Now SRIDs are preserved by hybrid functions, subqueries, TVCs, UNIONs, VIEWs.

Incompatible change:
  An attempt to mix two different SRIDs now raises an error.
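
A hedged example of the behaviour change (SRID values and literals are
illustrative only):

  -- SRID is now preserved by hybrid functions, UNION, VIEWs, subqueries, TVCs:
  SELECT SRID(COALESCE(ST_GeomFromText('POINT(0 0)', 4326),
                       ST_GeomFromText('POINT(1 1)', 4326)));   -- returns 4326
  -- Mixing two different SRIDs now raises an error:
  SELECT COALESCE(ST_GeomFromText('POINT(0 0)', 4326),
                  ST_GeomFromText('POINT(1 1)', 0));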

Details:

- Adding a new class Type_extra_attributes. It's a generic
  container which can store very specific data type attributes.
  For now it can store one uint32 and one const pointer attribute
  (for GEOMETRY's SRID and for ENUM/SET TYPELIB respectively).
  In the future it can grow as needed.

  Type_extra_attributes will also be reused soon to store "const Type_zone*"
  pointers for the TIMESTAMP's "WITH TIME ZONE 'tz'" attribute
  (a timestamp data type with a fixed time zone independent from @@time_zone).
  The time zone attribute will be stored in exactly the same way as
  a TYPELIB pointer is stored by ENUM/SET.

- Removing Column_definition_attributes members "interval" and "srid".
  Deriving Column_definition_attributes from the generic attribute container
  Type_extra_attributes instead.

- Adding a new class Type_typelib_attributes, to store
  the TYPELIB of the ENUM and SET data types. Deriving Field_enum from it.
  Removing the member Field_enum::typelib.

- Adding a new class Type_geom_attributes, to store
  the GEOMETRY related attributes. Deriving Field_geom from it.
  Removing the member Field_geom::srid.

- Removing virtual methods:
    Field::get_typelib()
    Type_all_attributes::get_typelib() and
    Type_all_attributes::set_typelib()
  They were very specific to TYPELIB.
  Adding more generic virtual methods instead:
  * Field::type_extra_attributes() - to get extra attributes
  * Type_all_attributes::type_extra_attributes() - to get extra attributes
  * Type_all_attributes::type_extra_attributes_addr() - to set extra attributes

- Removing Item_type_holder::enum_set_typelib. Deriving Item_type_holder
  from the generic attribute container Type_extra_attributes instead.
  This makes it possible for UNION to preserve SRID
  (in addition to preserving TYPELIB).

- Deriving Item_hybrid_func from Type_extra_attributes.
  This makes it possible for hybrid functions (e.g. CASE, COALESCE,
  LEAST, GREATEST etc) to preserve SRID.

- Deriving Item_singlerow_subselect from Type_extra_attributes and
  overriding methods:
  * Item_cache::type_extra_attributes()
  * subselect_single_select_engine::fix_length_and_dec()
  * Item_singlerow_subselect::type_extra_attributes()
  * Item_singlerow_subselect::type_extra_attributes_addr()
  This is needed to preserve SRID in subqueries and TVCs

- Cleanup: fixing the data type of members
  * Binlog_type_info::m_enum_typelib
  * Binlog_type_info::m_set_typelib
  from "TYPELIB *" to "const TYPELIB *"
2024-04-27 23:15:49 +04:00
Sergei Golubchik
018d537ec1 Merge branch '10.6' into 10.11 2024-04-22 15:23:10 +02:00
Sergei Golubchik
ea6975b1f1 MDEV-30366 Permit bulk implementation to return ALL individual results
COM_STMT_BULK_STMT: new flag to the server to return all unitary results
2024-04-22 14:57:29 +02:00
Alexander Barkov
fd247cc21f MDEV-31340 Remove MY_COLLATION_HANDLER::strcasecmp()
This patch also fixes:
  MDEV-33050 Build-in schemas like oracle_schema are accent insensitive
  MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
  MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
  MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
  MDEV-33088 Cannot create triggers in the database `MYSQL`
  MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
  MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
  MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
  MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
  MDEV-33120 System log table names are case insensitive with lower-cast-table-names=0

- Removing the virtual function strcasecmp() from MY_COLLATION_HANDLER

- Adding a wrapper function CHARSET_INFO::streq(), to compare
  two strings for equality. For now it calls strnncoll() internally.
  In the future it will turn into a virtual function.

- Adding new accent sensitive case insensitive collations:
    - utf8mb4_general1400_as_ci
    - utf8mb3_general1400_as_ci
  They implement accent sensitive case insensitive comparison.
  The weight of a character is equal to the code point of its
  upper case variant. These collations use Unicode-14.0.0 casefolding data.

  The result of
     my_charset_utf8mb3_general1400_as_ci.strcoll()
  is very close to the former
     my_charset_utf8mb3_general_ci.strcasecmp()

  There is only a difference in a couple dozen rare characters, because:
    - the switch from "tolower" to "toupper" comparison, to make
      utf8mb3_general1400_as_ci closer to utf8mb3_general_ci
    - the switch from Unicode-3.0.0 to Unicode-14.0.0
  This difference should be tolerable. See the list of affected
  characters in the MDEV description.

  Note, utf8mb4_general1400_as_ci correctly handles non-BMP characters!
  Unlike utf8mb4_general_ci, it does not treat all non-BMP characters
  as equal.

- Adding classes representing names of the file based database objects:

    Lex_ident_db
    Lex_ident_table
    Lex_ident_trigger

  Their comparison collation depends on the underlying
  file system case sensitivity and on --lower-case-table-names
  and can be either my_charset_bin or my_charset_utf8mb3_general1400_as_ci.

- Adding classes representing names of other database objects,
  whose names have case insensitive comparison style,
  using my_charset_utf8mb3_general1400_as_ci:

  Lex_ident_column
  Lex_ident_sys_var
  Lex_ident_user_var
  Lex_ident_sp_var
  Lex_ident_ps
  Lex_ident_i_s_table
  Lex_ident_window
  Lex_ident_func
  Lex_ident_partition
  Lex_ident_with_element
  Lex_ident_rpl_filter
  Lex_ident_master_info
  Lex_ident_host
  Lex_ident_locale
  Lex_ident_plugin
  Lex_ident_engine
  Lex_ident_server
  Lex_ident_savepoint
  Lex_ident_charset
  engine_option_value::Name

- All the mentioned Lex_ident_xxx classes implement a method streq():

  if (ident1.streq(ident2))
     do_equal();

  This method works as a wrapper for CHARSET_INFO::streq().

- Changing a lot of "LEX_CSTRING name" to "Lex_ident_xxx name"
  in class members and in function/method parameters.

- Replacing all calls like
    system_charset_info->coll->strcasecmp(ident1, ident2)
  to
    ident1.streq(ident2)

- Taking advantage of the C++11 user-defined literal operator
  for LEX_CSTRING (see m_strings.h) and Lex_ident_xxx (see lex_ident.h)
  data types. Use example:

  const Lex_ident_column primary_key_name= "PRIMARY"_Lex_ident_column;

  is now a shorter version of:

  const Lex_ident_column primary_key_name=
    Lex_ident_column({STRING_WITH_LEN("PRIMARY")});
2024-04-18 15:22:10 +04:00
Marko Mäkelä
829cb1a49c Merge 10.5 into 10.6 2024-04-17 14:14:58 +03:00
Dave Gosselin
a8a75ba2d0 Factor TABLE_LIST creation from add_table_to_list
Ideally our methods and functions should do one thing, do that well,
and do only that.  add_table_to_list does far more than adding a
table to a list, so this commit factors the TABLE_LIST creation out
to a new TABLE_LIST constructor.  It then uses placement new()
to create it in the correct memory area (result of thd->calloc).
Benefits of this approach:
 1. add_table_to_list now returns as early as possible on an error
 2. fewer side-effects incurred on creating the TABLE_LIST object
 3. TABLE_LIST won't be calloc'd if copy_to_db fails
 4. local declarations moved closer to their respective first uses
 5. improved code readability and logical flow
Also factored a couple of other functions to keep the happy path
more to the left, which makes them easier to follow at a glance.
2024-04-16 10:09:43 -04:00
Kristian Nielsen
16aa4b5f59 Merge from 10.4 to 10.5
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
2024-04-15 17:46:49 +02:00
Monty
3655cefc42 MDEV-33813 ERROR 1021 (HY000): Disk full (./org/test1.MAI); waiting for someone to free some space
Fixed that internal temporary tables are not waiting for freed disk space.

Other things:
- 'kill id' will now kill a query waiting for free disk space instantly.
  Before, it could take up to 60 seconds before the kill was noticed.
- Fixed that sorting one index is not using MY_WAIT_IF_FULL for temp files.
- Fixed bug where share->write_flag set MY_WAIT_IF_FULL for temp files.

It is quite hard to do a test case for this. Instead I tested all
combinations interactively.
2024-04-10 17:01:24 +03:00
Oleksandr Byelkin
cd28b2479c Merge branch '11.1' into 11.2 2024-04-09 12:12:33 +02:00
Marko Mäkelä
683fbced6b Merge 11.0 into 11.1 2024-03-28 12:15:36 +02:00
Marko Mäkelä
fec2fd6add Merge 10.11 into 11.0 2024-03-28 10:51:36 +02:00
Marko Mäkelä
788953463d Merge 10.6 into 10.11
Some fixes related to commit f838b2d799 and
Rows_log_event::do_apply_event() and Update_rows_log_event::do_exec_row()
for system-versioned tables were provided by Nikita Malyavin.
This was required by test versioning.rpl,trx_id,row.
2024-03-28 09:16:57 +02:00
Vladislav Vaintroub
d695e2de54 MDEV-33506 Show original IP in the "aborted" message.
Add "real ip:<ip_or_localhost>" part to the aborted message
Only for proxy-protocoled connection, so it does not  not to cause
confusion to normal users.
2024-03-26 13:10:36 +01:00
Vladislav Vaintroub
318000cffc MDEV-33506 Show original IP in the "aborted" message.
Add "real ip:<ip_or_localhost>" part to the aborted message
Only for proxy-protocoled connection, so it does not  not to cause
confusion to normal users.
2024-03-26 11:11:03 +01:00
Brandon Nesterenko
75c7c6dc39 MDEV-33551: Semi-sync Wait Point AFTER_COMMIT Slow on Workloads with Heavy Concurrency
When using semi-sync replication with
rpl_semi_sync_master_wait_point=AFTER_COMMIT, the performance of the
primary can be significantly reduced compared to AFTER_SYNC's
performance for workloads with many concurrent users executing
transactions. This is because all connections on the primary share
the same cond_wait variable/mutex pair, so any time an ACK is
received from a replica, all waiting connections are awoken to check
whether the ACK was for them, which is done under mutual exclusion.

This patch changes this such that the waiting THD will use its own
local condition variable, and the ACK receiver thread only signals
connections which have been ACKed for wakeup. That is, the
THD::LOCK_wakeup_ready condition variable is re-used for this
purpose, and the Active_tranx queue nodes are extended to hold the
waiting thread, so it can be signalled once ACKed.

Additionally:

 1)  Removed part of MDEV-11853 additions, which allowed suspended
connection threads awaiting their semi-sync ACKs to live until their
ACKs had been received. This part, however, wasn't needed.  That is,
all that was needed was for the Ack_thread to survive.  So now the
connection threads are killed during phase 1. Thereby
THD::is_awaiting_semisync_ack and all its related code were removed.

 2) COND_binlog_send is repurposed to signal on the condition when
Active_tranx is emptied during clear_active_tranx_nodes.

 3) At master shutdown (when waiting for slaves), instead of the
main loop individually waiting for each ACK, await_slave_reply()
(renamed await_all_slave_replies()) just waits once for the
repurposed COND_binlog_send to signal it is empty.

 4)  Test rpl_semi_sync_shutdown_await_ack is updated as follows:
   4.1) Added test case (adapted from Kristian Nielsen) to ensure
that if a thread awaiting its ACK is killed while SHUTDOWN WAIT FOR
ALL SLAVES is issued, the primary will still wait for the ACK from
the killed thread.
   4.2) As connections which bypassed phase 1 of thread killing are no
longer delayed for kill until phase 2, we can no longer query
yes/no tx after receiving an ACK/timeout. The check for these
variables is removed.
   4.3) Comment descriptions which mention that the connection is
alive are updated and adjusted to refer to the Ack_thread.

Reviewed By:
============
Kristian Nielsen <knielsen@knielsen-hq.org>
2024-03-21 08:42:18 -06:00
Dmitry Shulga
d7758debae MDEV-33218: Assertion `active_arena->is_stmt_prepare_or_first_stmt_execute() || active_arena->state == Query_arena::STMT_SP_QUERY_ARGUMENTS' failed in st_select_lex::fix_prepare_information
In case there is a view that is queried from a stored routine or
a prepared statement, and this temporary table is dropped between
executions of the SP/PS, it leads to hitting an assertion
in SELECT_LEX::fix_prepare_information. The fired assertion
was added by the commit 85f2e4f8e8
(MDEV-32466: Potential memory leak on executing of create view statement).
Firing of this assertion means memory is leaked on execution of the SP/PS.
Moreover, if the added assert is commented out, different result sets
can be produced by the statement SELECT * FROM the hidden table.

Both hitting the assertion and different result sets have the same root
cause. This cause is usage of the temporary table's metadata after the table
itself has been dropped. To fix the issue, the cache of stored routines is
reloaded: it is reset at the end of execution of the function
dispatch_command(). The next time any stored routine is called it will be
loaded from the table mysql.proc. This happens inside the method
Sp_handler::sp_cache_routine, where loading of a stored routine is performed
in case it is missing from the cache. Loading is now performed
unconditionally, while previously it was controlled by the parameter
lookup_only. For that reason the signature of the method
Sroutine_hash_entry::sp_cache_routine was changed by removing the unused
parameter lookup_only.

Clearing of sp caches affects the test main.lock_sync since it forces
opening and locking the table mysql.proc but the test assumes that each
statement locks its tables once during its execution. To keep this invariant
the debug sync points with names "before_lock_tables_takes_lock" and
"after_lock_tables_takes_lock" are not activated when handling the table
mysql.proc.
2024-03-14 15:43:03 +07:00
Marko Mäkelä
c3a00dfa53 Merge 10.5 into 10.6 2024-03-12 09:19:57 +02:00
Sergei Petrunia
48f42ab2e5 Fix comment. 2024-03-04 14:53:57 +03:00
Alexander Barkov
929c2e06aa MDEV-31531 Remove my_casedn_str() and my_caseup_str()
Under the terms of MDEV-27490 we'll add support for non-BMP identifiers
and upgrade casefolding information to Unicode version 14.0.0.
In Unicode-14.0.0, conversion to lower and upper case can increase the octet
length of the string, so conversion won't be possible in-place any more.

This patch removes virtual functions performing in-place casefolding:
  - my_charset_handler_st::casedn_str()
  - my_charset_handler_st::caseup_str()
and fixes the code to use the non-inplace functions instead:
  - my_charset_handler_st::casedn()
  - my_charset_handler_st::caseup()
2024-02-28 22:20:29 +04:00
Oleksandr Byelkin
fa69b085b1 Merge branch '11.3' into 11.4 2024-02-15 13:53:21 +01:00
Marko Mäkelä
64cce8d5bf Merge 10.6 into 10.11 2024-02-14 16:12:53 +02:00
Alexey Botchkov
85517f609a MDEV-33393 audit plugin does not report the user who did the action
The '<replication_slave>' user is assigned to the slave replication
thread so this name appears in the auditing logs.
2024-02-14 00:02:29 +04:00
Nikita Malyavin
62d35a074f Add Statement::sql_command_flags() function.
Now one can call thd->sql_command_flags() instead of
sql_command_flags[thd->lex->sql_command].
2024-02-12 22:26:06 +01:00
Monty
3907345e22 MDEV-33306 Optimizer choosing incorrect index in 10.6, 10.5 but not in 10.4
In MariaDB up to 10.11, the test_if_cheaper_ordering() code (that tries
to optimize how GROUP BY is executed) assumes that if a table scan is used
and there is any index usable by GROUP BY, it will be used.

The reason MariaDB 10.4 provides a better plan is because of two differences:
- Plans using 'ref' have a cost of 1/10 of what it should be (as a
  protection against table scans). This is why 'ref' is used in 10.4
  and not in 10.5.
- When 'ref' is used, GROUP BY will not use an index.

In MariaDB 10.5 the chosen plan is a table scan (as it is calculated to be
faster), but as 'ref' is not used, the test_if_cheaper_ordering()
optimizer phase decides to use an index for GROUP BY,
which has bad performance.

Description of fix:
- All new code is protected by the "optimizer_adjust_secondary_key_costs"
  variable, which is now a bit map, and is only executed if the option
  "disable_forced_index_in_group_by" is set (see the sketch after this list).
- Corrects GROUP BY handling in test_if_cheaper_ordering() by making
  the choice of using an index for GROUP BY cost-based instead of
  rule-based.
- Adds TIME_FOR_COMPARE to all costs, when using group by, to make
  read_time, index_scan_time and range_cost comparable.
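
For illustration, a hedged sketch of enabling the new behaviour (the exact
syntax accepted for the bit-map value is an assumption; table and query are
hypothetical):

  SET SESSION optimizer_adjust_secondary_key_costs='disable_forced_index_in_group_by';
  EXPLAIN SELECT a, COUNT(*) FROM t1 GROUP BY a;  -- index vs. table scan now chosen by cost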

Other things:
- Made optimizer_adjust_secondary_key_costs a bit map (compatible with old
  code).

Notes:
Current code ignores costs for the algorithm used when doing GROUP
BY on the first table:
  - Create an in-memory temporary table for handling group by and doing a
    filesort of the result file
We can probably in 10.6 continue to ignore this cost.

This patch should NOT be merged to 11.0 series (not needed in 11.0).
2024-02-12 16:43:00 +02:00
Oleksandr Byelkin
d21cb43db1 Merge branch '11.2' into 11.3 2024-02-04 16:42:31 +01:00
Sergei Golubchik
79580f4f96 Merge branch '11.1' into 11.2 2024-02-02 17:43:57 +01:00
Sergei Golubchik
b6680e0101 Merge branch '11.0' into 11.1 2024-02-02 11:30:47 +01:00
Sergei Golubchik
87e13722a9 Merge branch '10.6' into 10.11 2024-02-01 18:36:14 +01:00
Oleksandr Byelkin
fe490f85bb Merge branch '10.11' into 11.0 2024-01-30 08:54:10 +01:00
Oleksandr Byelkin
14d930db5d Merge branch '10.6' into 10.11 2024-01-30 08:17:58 +01:00
Monty
6f65e08277 MDEV-33118 optimizer_adjust_secondary_key_costs variable
optimizer_adjust_secondary_key_costs is added to provide two small
adjustments to the 10.x optimizer cost model. This can be used in the
case where the optimizer wrongly uses a secondary key instead of a
clustered primary key.

The reason behind this change is that MariaDB 10.x does not take into
account that for engines like InnoDB, scanning a primary key can be
up to 7x faster than scanning a secondary key and reading the row data
through the primary key.

The different values for optimizer_adjust_secondary_key_costs are:

optimizer_adjust_secondary_key_costs=0
- No changes to current model

optimizer_adjust_secondary_key_costs=1
- Ensure that secondary indexes have a cost of at
  least 5x the cost of a clustered primary key (if one exists).
  This disables part of the worst_seek optimization described below.

optimizer_adjust_secondary_key_costs=2
- Disable "worst_seek optimization" and adjust filter cost slightly
  (add cost of 1 if filter is used).
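
A minimal usage sketch of the values described above (table and query are
hypothetical):

  SET SESSION optimizer_adjust_secondary_key_costs=2;  -- no worst_seek, adjusted filter cost
  EXPLAIN SELECT * FROM t1 WHERE key_col BETWEEN 1 AND 100;
  SET SESSION optimizer_adjust_secondary_key_costs=0;  -- back to the unchanged cost model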

The idea behind the 'worst_seek optimization' is that we limit the
cost for all non-clustered ref accesses to the least of:
- best-rows-by-range (or all rows if no range found) / 10
- scan-time-table (roughly the number of file blocks to scan the table) * 3

In addition we also do not try to use rowid_filter if number of rows
estimated for 'ref' access is less than the worst_seek limitation.

The idea is that worst_seek is trying to take into account that if
we do a lot of accesses through a key, this is likely to be cached.
However it only does this for secondary keys, and not for clustered
keys or index only reads.

The effect of the worst_seek are:
- In some cases 'ref' will have a much lower cost than range or using
  a clustered key.
- Some possible rowid filters for secondary keys will be ignored.

When implementing optimizer_adjust_secondary_key_costs=2, I noticed
that there are slightly different costs for how ref+filter and
range+filter are calculated.  This caused a lot of range and
range+filter to change to ref+filter, which is not good as
range+filter provides the optimizer with a better estimate of how many
accepted rows there will be in the result set.
Adding an extra small cost (1 seek) when using a filter mitigated the
above problems in almost all cases.

This patch should not be applied to MariaDB 11.0 as worst_seeks is
removed in 11.0 and the cost calculation for clustered keys, secondary
keys, index scan and filter is more exact.

Test case changes for --optimizer-adjust_secondary_key_costs=1
(Fix secondary key costs to be 5x of primary key):

- stat_tables_innodb:
  - Complex change (probably ok as number of rows are really small)
    - ref over 1 row changed to range over 10 rows with join buffer
    - ref over 5 rows changed to eq_ref
    - secondary ref over 1 row changed to ref of primary key over 4 rows
    - Change of key to use longer key with index pushdown (a little
      bit worse but not significant).
  - Change to use secondary (1 row) -> primary (4 rows)
- rowid_filter_innodb:
  - index_merge (2 rows) & ref (1) -> all (23 rows) -> primary eq_ref.

Test case changes for --optimizer-adjust_secondary_key_costs=2
(remove of worst_seeks & adjust filter cost):

- stat_tables_innodb:
  - Join order change (probably ok as number of rows are really small)
  - ref (5 rows) & ref(1 row) changed to range (10 rows & join buffer)
    & eq_ref.
- selectivity_innodb:
  - ref -> ref|filter  (ok)
- rowid_filter_innodb:
  - ref -> ref|filter (ok)
  - range|filter (64 rows) changed to ref|filter (128 rows).
    ok as ref|filter outputs wrong number of rows in explain.
- range, range_mrr_icp:
  - ref (500 rows) -> ALL (1000 rows) (ok)
- select_pkeycache, select, select_jcl6:
  - ref|filter (2 rows) -> ref (2 rows) (ok)
- selectivity:
  - ref -> ref_filter (ok)
- range:
  - Change of 'filtered' but no stat or plan change (ok)
- selectivity:
 - ref -> ref+filter (ok)
 - Change of filtered but no plan change (ok)
- join_nested_jcl6:
  - range -> ref|filter (ok as only 2 rows)
- subselect3, subselect3_jcl6:
  - ref_or_null (4 rows) -> ALL (10 rows) (ok)
  - Index_subquery (4 rows) -> ALL (10 rows)  (ok)
- partition_mrr_myisam, partition_mrr_aria and partition_mrr_innodb:
  - Uses ALL instead of REF for a key value that is the same for > 50%
    of rows.  (good)
order_by_innodb:
  - range (200 rows) -> ref (20 rows)+filesort (ok)
- subselect_sj2_mat:
  - One test changed. One ALL removed and replaced with eq_ref. Likely
    to be better.
- join_cache:
  - Changed ref over 60% of the rows to use hash join (ok)
- opt_tvc:
  - Changed to use eq_ref instead of ref with plan change (probably ok)
- opt_trace:
  - No worst/max seeks clipping (good).
  - Almost double range_scan_time and index_scan_time (ok).
- rowid_filter:
  - ref -> ref|filtered (ok)
  - range|filter (77 rows) changed to ref|filter (151 rows).  Probably
    ok as ref|filter outputs wrong number of rows in explain.

Reviewer: Sergei Petrunia <sergey@mariadb.com>
2024-01-23 13:03:11 +02:00
Michael Widenius
7af50e4df4 MDEV-32551: "Read semi-sync reply magic number error" warnings on master
rpl_semi_sync_slave_enabled_consistent.test and the first part of
the commit message comes from Brandon Nesterenko.

A test to show how to induce the "Read semi-sync reply magic number
error" message on a primary. In short, if semi-sync is turned on
during the hand-shake process between a primary and replica, but
later a user negates the rpl_semi_sync_slave_enabled variable while
the replica's IO thread is running; if the io thread exits, the
replica can skip a necessary call to kill_connection() in
repl_semisync_slave.slave_stop() due to its reliance on a global
variable. Then, the replica will send a COM_QUIT packet to the
primary on an active semi-sync connection, causing the magic number
error.

The test in this patch exits the IO thread by forcing an error;
though note a call to STOP SLAVE could also do this, but it ends up
needing more synchronization. That is, the STOP SLAVE command also
tries to kill the VIO of the replica, which makes a race with the IO
thread to try and send the COM_QUIT before this happens (which would
need more debug_sync to get around). See THD::awake_no_mutex for
details as to the killing of the replica’s vio.

Notes:
- The MariaDB documentation does not make it clear that when one
  enables semi-sync replication it does not matter if one enables
  it first in the master or slave. Any order works.
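
For illustration, a hedged sketch of enabling semi-sync in either order
(standard plugin variable names; privileges and plugin availability are
assumed):

  -- On the master/primary:
  SET GLOBAL rpl_semi_sync_master_enabled=ON;
  -- On the slave/replica (before or after the master; the order does not matter):
  SET GLOBAL rpl_semi_sync_slave_enabled=ON;
  STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;   -- restart IO thread to pick up the setting
  SHOW STATUS LIKE 'Rpl_semi_sync_slave_status';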

Changes done:
- The rpl_semi_sync_slave_enabled variable is now a default value for
  when semisync is started. The variable no longer affects
  semisync if it is already running. This fixes the originally reported
  bug.  Internally we now use repl_semisync_slave.get_slave_enabled()
  instead of rpl_semi_sync_slave_enabled. To check if semisync is
  active one should check the @@rpl_semi_sync_slave_status variable (as
  before).
- The semisync protocol conflicts with the way the original
  MySQL/MariaDB client-server protocol was designed (client-server
  send and reply packets are strictly ordered and include a packet
  number to allow one to check if a packet is lost). When using
  semi-sync the master and slave can send packets at 'any time', so
  packet numbering does not work. The 'solution' has been that each
  communication starts with packet number 1, but in some cases there
  is still a chance that the packet number check can fail.  Fixed by
  adding a flag (pkt_nr_can_be_reset) in the NET struct that one can
  use to signal that packet number checking should not be done. This
  flag is set when semi-sync is used.
- Added Master_info::semi_sync_reply_enabled to allow one to configure
  some slaves with semisync and other slaves without semisync.
  Removed global variable semi_sync_need_reply that would not work
  with multi-master.
- Repl_semi_sync_master::report_reply_packet() can now recognize
  the COM_QUIT packet from semisync slave and not give a
  "Read semi-sync reply magic number error" error for this case.
  The slave will be removed from the Ack listener.
- On Windows, don't stop semisync Ack listener just because one
  slave connection is using socket_id > FD_SETSIZE.
- Removed busy loop in Ack_receiver::run() by using
 "Self-pipe trick" to signal new slave and stop Ack_receiver.
- Changed some Repl_semi_sync_slave functions that always returns 0
  from int to void.
- Added Repl_semi_sync_slave::slave_reconnect().
- Removed dummy_function Repl_semi_sync_slave::reset_slave().
- Removed some duplicate semisync notes from the error log.
- Add test of "if (get_slave_enabled() && semi_sync_need_reply)"
  before calling Repl_semi_sync_slave::slave_reply().
  (Speeds up the code as we can skip all initializations).
- If repl_semisync_slave.slave_reply() fails, we disable semisync
  for that connection.
- We do not call semisync.switch_off() if there are no active slaves.
  Instead we check in Repl_semi_sync_master::commit_trx() if there are
  no active threads. This simplifies the code.
- Changed assert() to DBUG_ASSERT() to ensure that the DBUG log is
  flushed in case of asserts.
- Removed the internal rpl_semi_sync_slave_status as it is not needed
  anymore. The @@rpl_semi_sync_slave_status status variable is now
  mapped to rpl_semi_sync_enabled.
- Removed rpl_semi_sync_slave_enabled  as it is not needed anymore.
  Repl_semi_sync_slave::get_slave_enabled() contains the active status.
- Added checking that we do not add a slave twice with
  Ack_receiver::add_slave(). This could happen with old code.
- Removed Repl_semi_sync_master::check_and_switch() as it is not
  needed anymore.
- Ensure that when we call Ack_receiver::remove_slave() that the slave
  is removed from the listener before function returns.
- Call listener.listen_on_sockets() outside of mutex for better
  performance and less contested mutex.
- Ensure that listening is ignoring newly added slaves when checking for
  responses.
- Fixed so that the master ack_receiver listener is not killed if there are
  no connected slaves (which would stop semisync handling of future
  connections). This could happen if all slave sockets would be
  marked as unreliable.
- Added unlink() to base_ilist_iterator and remove() to
  I_List_iterator. This enables us to remove 'dead' slaves in
  Ack_receiver::run().
- kill_zombie_dump_threads() now does killing of dump threads properly.
  - It can now kill several threads (should be impossible but could
    happen if IO slaves reconnect very fast).
  - We now wait until the dump thread is done before starting the
    dump.
- Added an error if kill_zombie_dump_threads() fails.
- Set thd->variables.server_id before calling
  kill_zombie_dump_threads(). This simplifies the code.
- Added a lot of comments both in code and tests.
- Removed DBUG_EVALUATE_IF "failed_slave_start" as it is not used.

Test changes:
- rpl.rpl_session_var2 added which runs rpl.rpl_session_var test with
  semisync enabled.
- Some timings changed slightly with the startup of the slave, which caused
  rpl_binlog_dump_slave_gtid_state_info.test to fail as it checked the
  error log file before the slave had started properly. Fixed by
  adding wait_for_pattern_in_file.inc that allows waiting for the
  pattern to appear in the log file.
- Tests have been updated so that we first set
  rpl_semi_sync_master_enabled on the master and then set
  rpl_semi_sync_slave_enabled on the slaves (this is according to how
  the MariaDB documentation document how to setup semi-sync).
- Error text "Master server does not have semi-sync enabled" has been
  replaced with "Master server does not support semi-sync" for the
  case when the master supports semi-sync but semi-sync is not
  enabled.

Other things:
- Some trivial cleanups in Repl_semi_sync_master::update_sync_header().
- We should in 11.3 change the default value for
  rpl-semi-sync-master-wait-no-slave from TRUE to FALSE as TRUE
  does not make much sense as a default. The main difference with using
  FALSE is that we do not wait for semisync Ack if there are no slave
  threads.  In the case of TRUE we wait once, which did not bring any
  notable benefits except slower startup of master configured for
  using semisync.

Co-author: Brandon Nesterenko <brandon.nesterenko@mariadb.com>

This solves the problem reported in MDEV-32960 where a new
slave may not be registered in time and the master disables
semi-sync because of that.
2024-01-23 13:03:11 +02:00
Marko Mäkelä
9374772ecd Merge 10.11 into 11.0 2024-01-19 09:07:48 +02:00