mirror of https://github.com/MariaDB/server.git synced 2025-11-09 11:41:36 +03:00
Commit Graph

3197 Commits

Author SHA1 Message Date
Marko Mäkelä
0076eb3d4e Merge 10.5 into 10.6 2024-06-24 13:09:47 +03:00
Dave Gosselin
db0c28eff8 MDEV-33746 Supply missing override markings
Find and fix missing virtual override markings.  Updates cmake
maintainer flags to include -Wsuggest-override and
-Winconsistent-missing-override.
2024-06-20 11:32:13 -04:00
Alexander Barkov
c4bf4ce948 Merge remote-tracking branch 'origin/11.2' into 11.4 2024-06-17 15:46:39 +04:00
Marko Mäkelä
a21e49cbcc Merge 11.1 into 11.2 2024-06-17 12:02:03 +03:00
Yuchen Pei
2d3e2c58b6 Merge branch '10.11' into 11.1 2024-05-31 10:54:31 +10:00
Marko Mäkelä
22ba7e4ff8 Merge 10.6 into 10.11 2024-05-30 16:04:00 +03:00
Marko Mäkelä
5ba542e9ee Merge 10.5 into 10.6 2024-05-30 14:27:07 +03:00
Monty
94033fcf83 MDEV-33151 Add more columns to TABLE_STATISTICS and USER STATS
Columns added to TABLE_STATISTICS
- ROWS_INSERTED, ROWS_DELETED, ROWS_UPDATED, KEY_READ_HITS and
  KEY_READ_MISSES.

Columns added to CLIENT_STATISTICS and USER_STATISTICS:
- KEY_READ_HITS and KEY_READ_MISSES.

User visible changes (except new columns):
- CLIENT_STATISTICS and USER_STATISTICS has columns KEY_READ_HITS and
  KEY_READ_MISSES added after column ROWS_UPDATED before SELECT_COMMANDS.

Other changes:
- Do not collect table statistics for system tables like index_stats,
  table_stats, performance_schema, information_schema etc. as the user
  has no control over these and they generate noise in the statistics.
- All row variables that are part of user_stats are moved to
  'struct rows_stats' to make it easy to clear all of them at once.
- ha_read_key_misses added to STATUS_VAR

Notes:
- userstat.result has a change in the number of rows for handler_read_key.
  This is because use-stat-tables is now disabled for the test.
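
For illustration only, a minimal sketch of reading the new columns
(the 'test' schema is a hypothetical example; assumes userstat is
enabled so that statistics are collected):

  SELECT TABLE_SCHEMA, TABLE_NAME, ROWS_INSERTED, ROWS_DELETED,
         ROWS_UPDATED, KEY_READ_HITS, KEY_READ_MISSES
  FROM information_schema.TABLE_STATISTICS
  WHERE TABLE_SCHEMA = 'test';

  SELECT USER, KEY_READ_HITS, KEY_READ_MISSES
  FROM information_schema.USER_STATISTICS;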
2024-05-27 12:39:04 +02:00
Monty
b9f5793176 MDEV-9101 Limit size of created disk temporary files and tables
Two new variables added:
- max_tmp_space_usage: Limits the temporary space allowance per user
- max_total_tmp_space_usage: Limits the temporary space allowance for
  all users.

New status variables: tmp_space_used & max_tmp_space_used
New field in information_schema.process_list: TMP_SPACE_USED
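
A minimal usage sketch (the limit value and the use of session scope
are assumptions, not part of this commit):

  SET max_tmp_space_usage = 100*1024*1024;   -- assumed example limit
  SHOW STATUS LIKE 'tmp_space_used';
  SHOW STATUS LIKE 'max_tmp_space_used';
  SELECT ID, TMP_SPACE_USED FROM information_schema.PROCESSLIST;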

The temporary space is counted for:
- All SQL level temporary files. This includes files for filesort,
  transaction temporary space, analyze, binlog_stmt_cache etc.
  It does not include engine internal temporary files used for repair,
  alter table, index pre sorting etc.
- All internal on disk temporary tables created as part of resolving a
  SELECT, multi-source update etc.

Special cases:
- When doing a commit, the last flush of the binlog_stmt_cache
  will not cause an error even if the temporary space limit is exceeded.
  This is to avoid giving errors on commit. This means that a user
  can temporarily go over the limit by up to binlog_stmt_cache_size.

Noteworthy issue:
- One has to be careful when using small values for max_tmp_space_usage
  together with binary logging and with non-transactional tables.
  If the binary log entry for the query is bigger than
  binlog_stmt_cache_size and one hits the limit of max_tmp_space_usage
  when flushing the entry to disk, the query will abort and the
  binary log will not contain the last changes to the table.
  This will also stop the slave!
  This is also true for all Aria tables, as Aria cannot do rollback
  (except in case of crashes)!
  One way to avoid it is to use @@binlog_format=statement for
  queries that update a lot of rows.

Implementation:
- All writes to temporary files or internal temporary tables, that
  increases the file size, are routed through temp_file_size_cb_func()
  which updates and checks the temp space usage.
- Most of the temporary file monitoring is done inside IO_CACHE.
  Temporary file monitoring is done inside the Aria engine.
- MY_TRACK and MY_TRACK_WITH_LIMIT are new flags for init_io_cache().
  MY_TRACK means that we track the file usage. MY_TRACK_WITH_LIMIT means
  that we track the file usage and give an error if the limit is
  breached. This is used to not give an error on commit when
  binlog_stmt_cache is flushed.
- global_tmp_space_used contains the total tmp space used so far.
  This is needed to quickly check against max_total_tmp_space_usage.
- Temporary space errors use EE_LOCAL_TMP_SPACE_FULL and
  handler errors use HA_ERR_LOCAL_TMP_SPACE_FULL.
  This is needed until we move general errors to their own error space
  so that they cannot conflict with system error numbers.
- Return value of my_chsize() and mysql_file_chsize() has changed
  so that -1 is returned in the case my_chsize() could not decrease
  the file size (very unlikely and will not happen on modern systems).
  All calls to _chsize() are updated to check for > 0 as the error
  condition.
- At the destruction of THD we check that THD::tmp_file_space == 0
- At server end we check that global_tmp_space_used == 0
- As a precaution against errors in the tmp_space_used code, one can set
  max_tmp_space_usage and max_total_tmp_space_usage to 0 to disable
  the tmp space quota errors.
- truncate_io_cache() function added.
- Aria tables using static or dynamic row length are registered in 8K
  increments to avoid some calls to update_tmp_file_size().

Other things:
- Ensure that all handler errors are registered.  Before, some engine
  errors could be printed as "Unknown error".
- Fixed bug in filesort() that caused an assert if there was an error
  when writing to the temporary file.
- Fixed that compute_window_func() now takes into account write errors.
- In case of parallel replication, rpl_group_info::cleanup_context()
  could call trans_rollback() with thd->error set, which would cause
  an assert. Fixed by resetting the error before calling trans_rollback().
- Fixed bug in subselect3.inc which caused the following test to use
  heap tables with a low value for max_heap_table_size.
- Fixed bug in sql_expression_cache where it did not overflow
  the heap table to an Aria table.
- Added Max_tmp_disk_space_used to slow query log.
- Fixed some bugs in log_slow_innodb.test
2024-05-27 12:39:04 +02:00
Monty
24c57165d5 ALTER TABLE and replication should convert old row_end timestamps to new timestamp range
MDEV-32188 make TIMESTAMP use whole 32-bit unsigned range

- Added --update-history option to mariadb-dump to change 2038
  row_end timestamp to 2106.
- Updated ALTER TABLE ... to convert old row_end timestamps to
  2106 timestamp for tables created before MariaDB 11.4.0.
- Fixed bug in CHECK TABLE where we wrongly suggested to use REPAIR
  TABLE when ALTER TABLE...FORCE is needed.
- mariadb-check printed table names that were used with REPAIR TABLE but
  did not print table names used with ALTER TABLE or with name repair.
  Fixed by always printing a table that is fixed if --silent is not
  used.
- Added TABLE::vers_fix_old_timestamp() that will change max-timestamp
  for versioned tables when replicating from a pre-11.4.0 server.
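
For illustration (t1 is a hypothetical system-versioned table created
before MariaDB 11.4.0):

  ALTER TABLE t1 FORCE;  -- converts old 2038 row_end timestamps to 2106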

A few test cases changed. This is caused by:
- CHECK TABLE now prints 'Please do ALTER TABLE...' instead of
  'Please do REPAIR TABLE' when there is a problem with the information
  in the .frm file (for example a very old frm file).
- mariadb-check now prints repaired table names.
- mariadb-check also now prints a nicer error message in case ALTER TABLE
  is needed to repair a table.
2024-05-27 12:39:03 +02:00
Monty
c4cad8d50c MDEV-33449 improving repair of tables
This task is to ensure we have a clear definition and rules of how to
repair or optimize a table.

The rules are:

- REPAIR should be used with tables that are crashed and are
  unreadable (hardware issues with unreadable blocks, blocks with
  'unexpected data' etc).
- OPTIMIZE TABLE should be used to optimize the storage layout for the
  table (recover space from deleted rows and optimize the index
  structure).
- ALTER TABLE table_name FORCE should be used to rebuild the .frm file
  (the table definition) and the table (with the original table row
  format). If the table is from an older MariaDB/MySQL release with a
  different storage format, it will convert the data to the new
  format. ALTER TABLE ... FORCE is used as part of mariadb-upgrade
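
For illustration, the three statements applied to a hypothetical table t1:

  REPAIR TABLE t1;       -- recover from crashed/unreadable data
  OPTIMIZE TABLE t1;     -- reclaim space and optimize the index structure
  ALTER TABLE t1 FORCE;  -- rebuild the .frm file and the table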

Here follows some more background:

The 3 ways to repair a table are:
1) "ALTER TABLE table_name FORCE" (no other options).
   As an alias we allow: "ALTER TABLE table_name ENGINE=original_engine"
2) "REPAIR TABLE" (without FORCE)
3) "OPTIMIZE TABLE"

All of the above commands will optimize row space usage (which means that
space will be needed to hold a temporary copy of the table) and
re-generate all indexes. They will also try to replicate the original
table definition as exactly as possible.

For ALTER TABLE and "REPAIR TABLE without FORCE", the following will hold:
If the table is from an older MariaDB version and data conversion is
needed (for example for old type HASH columns, MySQL JSON type or new
TIMESTAMP format) "ALTER TABLE table_name FORCE, algorithm=COPY" will be
used.

The differences between the three commands are:
1) Will use the fastest algorithm the engine supports to do a full repair
   of the table (except if data conversions are needed).
2) Will use the storage engine internal REPAIR facility (MyISAM, Aria).
   If the engine does not support REPAIR then
   "ALTER TABLE FORCE, ALGORITHM=COPY" will be used.
   If there were data incompatibilities (which means that FORCE was used)
   then there will be a warning after REPAIR that ALTER TABLE FORCE is
   still needed.
   The reason for this is that REPAIR may be able to work around data
   errors (wrong incompatible data, crashed or unreadable sectors) that
   ALTER TABLE cannot handle.
3) Will use the storage engine internal OPTIMIZE. If the engine does not
   support OPTIMIZE, then "ALTER TABLE FORCE" is used.

The above will ensure that ALTER TABLE FORCE is able to
correct almost any errors in the row or index data.  In case of
corrupted blocks, REPAIR, possibly followed by ALTER TABLE, is needed.
This is important as mariadb-upgrade executes ALTER TABLE table_name
FORCE for any table that must be re-created.

Bugs fixed with InnoDB tables when using ALTER TABLE FORCE:
- No error for INNODB_DEFAULT_ROW_FORMAT=COMPACT even if row length
  would be too wide. (Independent of innodb_strict_mode).
- Tables using symlinks will be symlinked after any of the above commands
  (Independent of the setting of --symbolic-links)

If one specifies an algorithm together with ALTER TABLE FORCE, things
will work as before (except if data conversion is required as then
the COPY algorithm is enforced).

ALTER TABLE .. OPTIMIZE ALL PARTITIONS will work as before.

Other things:
- FORCE argument added to REPAIR to allow one to first run internal
  repair to fix damaged blocks and then follow it with ALTER TABLE.
- REPAIR will not update frm_version if ha_check_for_upgrade() finds
  that the table is still incompatible with the current version. In this
  case the REPAIR will end with an error.
- REPAIR for storage engines that do not have native repair, like InnoDB,
  now uses ALTER TABLE FORCE.
- REPAIR csv-table USE_FRM now works.
  - It did not work before as CSV tables had the extension list in the
    wrong order.
- Default error message length for %M increased from 128 to 256 to not
  cut information from REPAIR.
- Documented HA_ADMIN_XX variables related to repair.
- Added HA_ADMIN_NEEDS_DATA_CONVERSION to signal that we have to
  do data conversions when converting the table (and thus ALTER TABLE
  copy algorithm is needed).
- Fixed typo in error message (caused test changes).
2024-05-27 12:39:03 +02:00
Monty
a00e99acca MDEV-33152 Add QUERIES to INDEX_STATISTICS
Other changes:
- Do not collect index statistics for system tables like index_stats,
  table_stats, performance_schema, information_schema etc. as the user
  has no control over these and they generate noise in the statistics.
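
For illustration (assumes the pre-existing TABLE_SCHEMA, TABLE_NAME,
INDEX_NAME and ROWS_READ columns; the 'test' schema is hypothetical):

  SELECT TABLE_SCHEMA, TABLE_NAME, INDEX_NAME, ROWS_READ, QUERIES
  FROM information_schema.INDEX_STATISTICS
  WHERE TABLE_SCHEMA = 'test';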
2024-05-27 12:39:02 +02:00
Sergei Golubchik
869e67c92f cleanup: remove thd->stmt_changes_data
what is done in the plugin - stays in the plugin
2024-05-27 12:39:02 +02:00
Sergei Golubchik
3781848bca mark the deprecated sysvar deprecated
and adjust the copyright year
2024-05-27 12:39:02 +02:00
Monty
243b9f3cd2 MDEV-33501 Extend query_response_time plugin to be compatible with Percona server
This is to update the plugin to be compatible with Percona's
query_response_time plugin, with some additions.
Some of the tests are taken from Percona server.

- Added plugins QUERY_RESPONSE_TIME_READ, QUERY_RESPONSE_TIME_WRITE and
  QUERY_RESPONSE_TIME_READ_WRITE.
- Added option query_response_time_session_stats, with possible values
  GLOBAL, ON or OFF, to the query_response_time plugin.
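
A minimal usage sketch (assumes the new plugins expose
information_schema tables under the same names, and that the existing
query_response_time_stats variable enables collection):

  SET GLOBAL query_response_time_stats = ON;
  SET GLOBAL query_response_time_session_stats = 'GLOBAL';  -- or ON / OFF
  FLUSH QUERY_RESPONSE_TIME;
  SELECT * FROM information_schema.QUERY_RESPONSE_TIME_READ;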

Notes:
- All modules are dependent on QUERY_RESPONSE_READ_TIME. This must always
  be enabled if any of the other modules are used.
  This will be auto-enabled in the near future.
- Accounting is done per statement. Stored functions are regarded
  as part of the original statement.
- For stored procedures the accounting is done per statement executed
  in the stored procedure. CALL will not be accounted because of this.
- FLUSH commands will not be accounted for. This is to ensure that
  FLUSH QUERY_RESPONSE_TIME is not part of the statistics.
  (This helps when testing with mtr and otherwise).
- FLUSH QUERY_RESPONSE_TIME_READ and FLUSH QUERY_RESPONSE_TIME_WRITE
  only reset the corresponding status.
- FLUSH QUERY_RESPONSE_TIME and FLUSH QUERY_RESPONSE_TIME_READ_WRITE, or
  changing the value of query_response_time_range_base followed by
  any FLUSH of QUERY_RESPONSE_TIME, reset all status.
2024-05-27 12:39:02 +02:00
Vladislav Vaintroub
736449d30f MDEV-34205: ASAN stack buffer overflow in strxnmov() in frm_file_exists
Correct the second parameter for strxnmov to prevent potential buffer
overflows. The second parameter must be one less than the size of the
input buffer to avoid writing past the end of the buffer.

While the second parameter is usually correct, there are exceptions
that need fixing.

This commit addresses the issue within frm_file_exists() and other
affected places.
2024-05-23 22:08:27 +02:00
Oleksandr Byelkin
dd7d9d7fb1 Merge branch '11.4' into 11.5 2024-05-23 17:01:43 +02:00
Oleksandr Byelkin
99b370e023 Merge branch '11.2' into 11.4 2024-05-21 19:38:51 +02:00
Sergei Golubchik
bf5da43e50 Merge branch '11.1' into 11.2 2024-05-13 10:00:26 +02:00
Yuchen Pei
a6ae1c2dfb MDEV-32487 Check plugin is ready when resolving storage engine
This handles the situation when one thread is still initialising a
storage engine plugin, while another is creating a table using it.
2024-05-13 09:15:14 +10:00
Sergei Golubchik
f9807aadef Merge branch '10.11' into 11.0 2024-05-12 12:18:28 +02:00
Sergei Golubchik
a6b2f820e0 Merge branch '10.6' into 10.11 2024-05-10 20:02:18 +02:00
Sergei Golubchik
7b53672c63 Merge branch '10.5' into 10.6 2024-05-08 20:06:00 +02:00
Sergei Golubchik
22b3ba9312 MDEV-25102 UNIQUE USING HASH error after ALTER ... DISABLE KEYS
On disable_indexes(HA_KEY_SWITCH_NONUNIQ_SAVE) the engine does
not know that the long unique is logically unique, because on the
engine level it is not. So the engine disables it.

Change the disable_indexes/enable_indexes API. Instead of the enum
mode, send a key_map of indexes that should be enabled. This way the
server will decide what is unique, not the engine.
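
A sketch of the kind of statement sequence affected (table, column and
engine are hypothetical):

  CREATE TABLE t1 (a BLOB, UNIQUE KEY (a) USING HASH) ENGINE=MyISAM;
  ALTER TABLE t1 DISABLE KEYS;  -- the long unique must stay enforced
  ALTER TABLE t1 ENABLE KEYS;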
2024-05-06 17:16:10 +02:00
Sergei Golubchik
4f5dea43df cleanup
* remove dead code
* simplify the check for table->s->next_number_index
* misc
2024-05-05 21:37:08 +02:00
Sergei Golubchik
947eeaa6dc MDEV-29345 update case insensitive (large) unique key with insensitive change of value - duplicate key
use collation-sensitive comparison when comparing fields
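
A sketch of the reported scenario (names and collation are assumptions):

  CREATE TABLE t1 (a VARCHAR(1000) COLLATE utf8mb4_general_ci,
                   UNIQUE KEY (a) USING HASH);
  INSERT INTO t1 VALUES ('abc');
  UPDATE t1 SET a = 'ABC';  -- case-only change; must not be a duplicate key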
2024-05-05 21:37:08 +02:00
Nikita Malyavin
72429cad7f MDEV-30046 wrong row targeted with "insert ... on duplicate" and "replace"
When HA_DUPLICATE_POS is not supported, the row to replace was located by
ha_index_read_idx_map, which navigates only by the hash.

Therefore, given a hash collision, it may choose an incorrect row.

handler::position would be correct and very convenient to use here.

dup_ref is already set by the handler independently of the engine
capabilities; when an extra lookup is made (for a long unique or something
else, for example WITHOUT OVERLAPS), such an error is indicated by
file->lookup_errkey != -1.
2024-05-05 18:38:34 +02:00
Yuchen Pei
b84d335d9d MDEV-33538 make auxiliary spider plugins init depend on actual spider
The two I_S plugins SPIDER_ALLOC_MEM and SPIDER_WRAPPER_PROTOCOL
only make sense if the main SPIDER plugin is installed. Further,
SPIDER_ALLOC_MEM requires a mutex that requires SPIDER init to fill
the table.

We also update the spider init query to override
--transaction_read_only=on so that it does not affect the spider init.

Also fixed error handling in spider_db_init() so that failure in
spider table init does not result in a memory leak.
2024-05-03 14:47:54 +10:00
Sergei Petrunia
486d42d812 MDEV-18478 ANALYZE for statement should show selectivity of ICP, part#3
Fix the previous patch:
- Only enable handler_stats if thd->should_collect_handler_stats()==true.
- Make handler_index_cond_check() work when handler_stats are not enabled.
2024-04-23 22:55:22 +03:00
Dave Gosselin
a11a10191a accrue statistics to correct handler 2024-04-23 22:55:22 +03:00
Sergei Petrunia
e87d1e391b MDEV-18478 ANALYZE for statement should show selectivity of ICP, part#1
(Based on the original patch by Jason Cu)

Part #1:
- Add ha_handler_stats::{icp_attempts,icp_match}, make
  handler_index_cond_check() increment them.
- ANALYZE FORMAT=JSON now prints r_icp_filtered based on these counters.
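
For illustration (hypothetical table and predicates; an ICP-eligible
plan is assumed):

  ANALYZE FORMAT=JSON
  SELECT * FROM t1 WHERE key_col BETWEEN 10 AND 20 AND non_key_col = 5;
  -- the JSON output now includes r_icp_filtered for the pushed condition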
2024-04-23 22:55:22 +03:00
Sergei Golubchik
018d537ec1 Merge branch '10.6' into 10.11 2024-04-22 15:23:10 +02:00
Alexander Barkov
fd247cc21f MDEV-31340 Remove MY_COLLATION_HANDLER::strcasecmp()
This patch also fixes:
  MDEV-33050 Build-in schemas like oracle_schema are accent insensitive
  MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
  MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
  MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
  MDEV-33088 Cannot create triggers in the database `MYSQL`
  MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
  MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
  MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
  MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
  MDEV-33120 System log table names are case insensitive with lower-case-table-names=0

- Removing the virtual function strcasecmp() from MY_COLLATION_HANDLER

- Adding a wrapper function CHARSET_INFO::streq(), to compare
  two strings for equality. For now it calls strnncoll() internally.
  In the future it will turn into a virtual function.

- Adding new accent sensitive case insensitive collations:
    - utf8mb4_general1400_as_ci
    - utf8mb3_general1400_as_ci
  They implement accent sensitive case insensitive comparison.
  The weight of a character is equal to the code point of its
  upper case variant. These collations use Unicode-14.0.0 casefolding data.

  The result of
     my_charset_utf8mb3_general1400_as_ci.strcoll()
  is very close to the former
     my_charset_utf8mb3_general_ci.strcasecmp()

  There is only a difference in a couple dozen rare characters, because:
    - the switch from "tolower" to "toupper" comparison, to make
      utf8mb3_general1400_as_ci closer to utf8mb3_general_ci
    - the switch from Unicode-3.0.0 to Unicode-14.0.0
  This difference should be tolerable. See the list of affected
  characters in the MDEV description.

  Note, utf8mb4_general1400_as_ci correctly handles non-BMP characters!
  Unlike utf8mb4_general_ci, it does not treat all non-BMP characters
  as equal.

- Adding classes representing names of the file based database objects:

    Lex_ident_db
    Lex_ident_table
    Lex_ident_trigger

  Their comparison collation depends on the underlying
  file system case sensitivity and on --lower-case-table-names
  and can be either my_charset_bin or my_charset_utf8mb3_general1400_as_ci.

- Adding classes representing names of other database objects,
  whose names have case insensitive comparison style,
  using my_charset_utf8mb3_general1400_as_ci:

  Lex_ident_column
  Lex_ident_sys_var
  Lex_ident_user_var
  Lex_ident_sp_var
  Lex_ident_ps
  Lex_ident_i_s_table
  Lex_ident_window
  Lex_ident_func
  Lex_ident_partition
  Lex_ident_with_element
  Lex_ident_rpl_filter
  Lex_ident_master_info
  Lex_ident_host
  Lex_ident_locale
  Lex_ident_plugin
  Lex_ident_engine
  Lex_ident_server
  Lex_ident_savepoint
  Lex_ident_charset
  engine_option_value::Name

- All the mentioned Lex_ident_xxx classes implement a method streq():

  if (ident1.streq(ident2))
     do_equal();

  This method works as a wrapper for CHARSET_INFO::streq().

- Changing a lot of "LEX_CSTRING name" to "Lex_ident_xxx name"
  in class members and in function/method parameters.

- Replacing all calls like
    system_charset_info->coll->strcasecmp(ident1, ident2)
  with
    ident1.streq(ident2)

- Taking advantage of the C++11 user-defined literal operator
  for LEX_CSTRING (see m_strings.h) and Lex_ident_xxx (see lex_ident.h)
  data types. Usage example:

  const Lex_ident_column primary_key_name= "PRIMARY"_Lex_ident_column;

  is now a shorter version of:

  const Lex_ident_column primary_key_name=
    Lex_ident_column({STRING_WITH_LEN("PRIMARY")});
2024-04-18 15:22:10 +04:00
Sergei Petrunia
159b7ca3f2 MDEV-12404: Add assertions about Index Condition Pushdown use
Add assertions about limitations one has when using Index Condition
Pushdown:
- add handler::assert_icp_limitations()
- call this function from functions that may attempt violations.

Verified that assert_icp_limitations() as well as calls to it are
compiled away in release build.
2024-04-18 11:35:59 +03:00
Marko Mäkelä
829cb1a49c Merge 10.5 into 10.6 2024-04-17 14:14:58 +03:00
Sergei Golubchik
41e7ceb0ac MDEV-33889 Read only server throws error when running a create temporary table as select statement
create_partitioning_metadata() should only mark the transaction r/w
if it actually did anything (that is, the table is partitioned).

Otherwise it's a no-op, called even for temporary tables, and
it shouldn't do anything at all.
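
A sketch of the previously failing statement (hypothetical tables; run
by a user that is subject to the read_only setting):

  CREATE TEMPORARY TABLE tmp1 AS SELECT * FROM t1;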
2024-04-16 20:43:31 +02:00
Sergei Golubchik
41296a07c8 Merge branch '10.5' into 10.6 2024-04-11 13:58:22 +02:00
Jan Lindström
7aa86eb1e1 MDEV-33828 : Transactional commit not supported by involved engine(s)
The problem was a too tight condition on ha_commit_trans that did not
allow non-transactional storage engines to participate in 2PC
in the Galera case. This is required because a transaction
using e.g. procedures might read the mysql.proc table inside
a transaction, and these tables currently use the Aria
storage engine, which does not support 2PC.

Fixed by allowing read-only transactions on storage
engines that do not support two-phase commit to participate
in a 2PC transaction. These will be committed later separately.

Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
2024-04-09 12:21:53 +02:00
Oleksandr Byelkin
cd28b2479c Merge branch '11.1' into 11.2 2024-04-09 12:12:33 +02:00
sjaakola
2fcf2ec229 MDEV-33749 hyphen in table name can cause galera certification failures
The fix in this commit handles foreign key value appending into the write set
so that db and table names are converted from the file path format
to the table name format. This is compatible with key values appended from
elsewhere in the code base.

There is an mtr test, galera.galera_table_with_hyphen, for regression testing.

Reviewer: monty@mariadb.com
2024-04-04 17:12:09 +03:00
Marko Mäkelä
fec2fd6add Merge 10.11 into 11.0 2024-03-28 10:51:36 +02:00
Marko Mäkelä
788953463d Merge 10.6 into 10.11
Some fixes related to commit f838b2d799 and
Rows_log_event::do_apply_event() and Update_rows_log_event::do_exec_row()
for system-versioned tables were provided by Nikita Malyavin.
This was required by test versioning.rpl,trx_id,row.
2024-03-28 09:16:57 +02:00
Sergei Golubchik
f71d7f2f0f Merge branch '10.5' into 10.6 2024-03-13 21:02:34 +01:00
Marko Mäkelä
f703e72bd8 Merge 10.4 into 10.5 2024-03-11 10:08:20 +02:00
Alexander Barkov
7246054cbb MDEV-33442 REPAIR TABLE corrupts UUIDs
Problem:
REPAIR TABLE executed for a pre-MDEV-29959 table (with the old UUID format)
updated the server version in the FRM file without rewriting the data,
so it created a new FRM for old UUIDs. After that MariaDB could not
read UUIDs correctly.
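
For illustration (t1 is a hypothetical table with a UUID column created
on a pre-MDEV-29959 server):

  REPAIR TABLE t1;       -- before the fix this rewrote only the FRM,
                         -- after which UUIDs could not be read correctly
  ALTER TABLE t1 FORCE;  -- with the fix, old UUID columns are implicitly
                         -- upgraded during REPAIR TABLE or ALTER TABLE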

Fix:

- Adding a new virtual method in class Type_handler:

      virtual bool type_handler_for_implicit_upgrade() const;

  * For the up-to-date data types it returns "this".
  * For the data types which need to be implicitly upgraded
    during REPAIR TABLE or ALTER TABLE, it returns a pointer
    to a new replacement data type handler.

    Old VARCHAR and old UUID type handlers override this method.
    See more comments below.

- Changing the semantics of the method

    Type_handler::Column_definition_implicit_upgrade(Column_definition *c)

  to the opposite, so now:
    * c->type_handler() references the old data type (to upgrade from)
    * "this" references the new data type (to upgrade to).

  Before this change Column_definition_implicit_upgrade() was supposed
  to be called with the old data type handler (to upgrade from).

  Renaming the method to Column_definition_implicit_upgrade_to_this(),
  to avoid automatic merges in this method.

  Reflecting this change in Create_field::upgrade_data_types().

- Replacing the hard-coded data type tests inside handler::check_old_types()
  to a call for the new virtual method
  Type_handler::type_handler_for_implicit_upgrade()

- Overriding Type_handler_fbt::type_handler_for_implicit_upgrade()
  to call a new method FbtImpl::type_handler_for_implicit_upgrade().

  Reasoning:

  Type_handler_fbt is a template, so it has access only to "this".
  So in case of UUID data types, the type handler for old UUID
  knows nothing about the type handler of new UUID inside sql_type_fixedbin.h.
  So let's have Type_handler_fbt delegate type_handler_for_implicit_upgrade()
  to its Type_collection, which knows both new UUID and old UUID.

- Adding Type_collection_uuid::type_handler_for_implicit_upgrade().
  It returns a pointer to the new UUID type handler.

- Overriding Type_handler_var_string::type_handler_for_implicit_upgrade()
  to return a pointer to type_handler_varchar (true VARCHAR).

- Cleanup: these two methods:
    handler::check_old_types()
    handler::ha_check_for_upgrade()
  were always called consecutively.
  So moving the call for check_old_types() inside ha_check_for_upgrade(),
  and making check_old_types() private.

- Cleanup: removing the "bool varchar" parameter from fill_alter_inplace_info(),
  as it's not used any more.
2024-02-26 19:00:45 +04:00
Marko Mäkelä
d73baa402a Merge 10.11 into 11.0 2024-02-20 12:02:01 +02:00
Kristian Nielsen
c73c6aea63 MDEV-33426: Aria temptables wrong thread-specific memory accounting in slave thread
Aria temporary tables account allocated memory as specific to the current
THD. But this fails for slave threads, where the temporary tables need to be
detached from any specific THD.

Introduce a new flag to mark temporary tables in replication as "global",
and use that inside Aria to not account memory allocations as thread
specific for such tables.

Based on original suggestion by Monty.

Reviewed-by: Monty <monty@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
2024-02-16 12:48:30 +01:00
Marko Mäkelä
64cce8d5bf Merge 10.6 into 10.11 2024-02-14 16:12:53 +02:00
Marko Mäkelä
691f923906 Merge 10.5 into 10.6 2024-02-13 20:42:59 +02:00
Marko Mäkelä
8ec12e0d6d Merge 10.4 into 10.5 2024-02-12 11:38:13 +02:00