mirror of https://github.com/MariaDB/server.git synced 2025-11-09 11:41:36 +03:00
Commit Graph

8684 Commits

Author SHA1 Message Date
Sergei Petrunia
ef9e3e73ed MDEV-30651: Assertion `sel->quick' in make_range_rowid_filters
(Variant for 10.6: return error code from SQL_SELECT::test_quick_select)
The optimizer deals with Rowid Filters this way:

1. First, range optimizer is invoked. It saves information
   about all potential range accesses.
2. A query plan is chosen. Suppose, it uses a Rowid Filter on
   index $IDX.
3. JOIN::make_range_rowid_filters() calls the range optimizer
   again to create a quick select on index $IDX which will be used
   to populate the rowid filter.

The problem: a KILL command catches the query in step #3. The quick
select is not created, which causes a crash.

Fixed by checking if query was killed.
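
A standalone sketch of the shape of the fix (illustrative names, not the
actual 10.6 code): once the query has been killed, the second call into the
range optimizer may legitimately produce no quick select, so the caller has
to see an error code instead of assuming sel->quick was set.

    #include <atomic>
    #include <memory>

    struct QuickSelect {};                  // stand-in for the quick select that
                                            // would populate the rowid filter
    struct QueryState {
      std::atomic<bool> killed{false};      // flipped asynchronously by KILL
    };

    // Hypothetical counterpart of the patched call: propagate an error code
    // instead of silently leaving the quick select unset.
    int build_filter_quick_select(QueryState &q, std::unique_ptr<QuickSelect> &out)
    {
      if (q.killed.load())
        return 1;                           // killed: caller aborts, no crash
      out= std::make_unique<QuickSelect>();
      return 0;
    }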
2024-06-17 12:50:43 +03:00
Sergei Petrunia
b47bd3f8bf MDEV-33875: ORDER BY DESC causes ROWID Filter slowdown
Rowid Filter cannot be used with reverse-ordered scans, for the
same reason as IndexConditionPushdown cannot be.

test_if_skip_sort_order() already has logic to disable ICP when
setting up a reverse-ordered scan. Added logic to also disable
Rowid Filter in this case, factored out the code into
prepare_for_reverse_ordered_access(), and added a comment describing
the cause of this limitation.
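
A minimal sketch of the described behaviour, with invented type and member
names (the real logic lives in prepare_for_reverse_ordered_access()): when a
scan is set up in reverse order, both ICP and the Rowid Filter are dropped.

    struct AccessSetup {
      bool reverse_scan= false;
      bool use_icp= false;
      bool use_rowid_filter= false;
    };

    void setup_reverse_ordered_access(AccessSetup &a)
    {
      a.reverse_scan= true;
      a.use_icp= false;           // ICP cannot run on a reverse-ordered scan
      a.use_rowid_filter= false;  // the same limitation applies to Rowid Filter
    }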
2024-06-17 09:50:32 +03:00
Marko Mäkelä
d34289a3e2 Merge 10.11 into 11.1 2024-06-17 09:21:50 +03:00
Marko Mäkelä
b81d717387 Merge 10.6 into 10.11 2024-06-11 12:50:10 +03:00
Marko Mäkelä
27834ebc91 Merge 10.5 into 10.6 2024-06-10 15:22:15 +03:00
Alexander Barkov
246c0b3a35 MDEV-34227 On startup: UBSAN: runtime error: applying non-zero offset in JOIN::make_aggr_tables_info in sql/sql_select.cc
Avoid undefined behaviour (applying offset to nullptr).
The reported scenario is covered in mysql-test/connect-no-db.test;
no new tests are needed.
2024-06-10 12:50:52 +04:00
Yuchen Pei
2d3e2c58b6 Merge branch '10.11' into 11.1 2024-05-31 10:54:31 +10:00
Marko Mäkelä
22ba7e4ff8 Merge 10.6 into 10.11 2024-05-30 16:04:00 +03:00
Monty
fcb3183479 MDEV-34142 Server crashes in create_internal_tmp_table with low tmp space limit 2024-05-27 12:39:04 +02:00
Monty
46751d4b81 MDEV-34060 Unexpected behavior upon reading I_S.ALL_PLUGINS under limited tmp space. 2024-05-27 12:39:04 +02:00
Monty
d2304554ac MDEV-33751 Assertion `thd' failed in int temp_file_size_cb_func(tmp_file_tracking*, int)
Changes:
- Fixed MyISAM and Aria parallel repair to work with the tmp file limit.
  This required adding current_thd to all parallel workers and adding
  protection in my_malloc_size_cb_func() and temp_file_size_cb_func() so
  that they can handle shared THDs.  I removed the old code in MyISAM that
  set current_thd(), as it only worked with virtual indexed columns and I
  wanted to keep the Aria and MyISAM code identical.

Other things:
- Improved error messages from Aria parallel repair and
  create_internal_tmp_table_from_heap().
2024-05-27 12:39:04 +02:00
Monty
865ef0f567 MDEV-33680 Server hangs or assertion fails upon SELECT with limited max_tmp_space_usage
The bug was that Aggregator_distinct::add() did not properly handle
write errors. (Old bug exposed by the new code).
2024-05-27 12:39:04 +02:00
Monty
b9f5793176 MDEV-9101 Limit size of created disk temporary files and tables
Two new variables added:
- max_tmp_space_usage: Limits the temporary space allowance per user.
- max_total_tmp_space_usage: Limits the temporary space allowance for
  all users.

New status variables: tmp_space_used & max_tmp_space_used
New field in information_schema.process_list: TMP_SPACE_USED

The temporary space is counted for:
- All SQL level temporary files. This includes files for filesort,
  transaction temporary space, analyze, binlog_stmt_cache etc.
  It does not include engine internal temporary files used for repair,
  alter table, index pre sorting etc.
- All internal on disk temporary tables created as part of resolving a
  SELECT, multi-source update etc.

Special cases:
- When doing a commit, the last flush of the binlog_stmt_cache
  will not cause an error even if the temporary space limit is exceeded.
  This is to avoid giving errors on commit. This means that a user
  can temporarily go over the limit by up to binlog_stmt_cache_size.

Noteworthy issue:
- One has to be careful when using small values for max_tmp_space_usage
  together with binary logging and with non-transactional tables.
  If the binary log entry for the query is bigger than
  binlog_stmt_cache_size and one hits the limit of max_tmp_space_usage
  when flushing the entry to disk, the query will abort and the
  binary log will not contain the last changes to the table.
  This will also stop the slave!
  This is also true for all Aria tables, as Aria cannot do rollback
  (except in the case of crashes)!
  One way to avoid this is to use @@binlog_format=statement for
  queries that update a lot of rows.

Implementation:
- All writes to temporary files or internal temporary tables that
  increase the file size are routed through temp_file_size_cb_func(),
  which updates and checks the temp space usage.
- Most of the temporary file monitoring is done inside IO_CACHE;
  the remaining temporary file monitoring is done inside the Aria engine.
- MY_TRACK and MY_TRACK_WITH_LIMIT are new flags for init_io_cache().
  MY_TRACK means that we track the file usage. MY_TRACK_WITH_LIMIT means
  that we track the file usage and give an error if the limit is
  breached. This is used to not give an error on commit when the
  binlog_stmt_cache is flushed.
- global_tmp_space_used contains the total tmp space used so far.
  This is needed to quickly check against max_total_tmp_space_usage
  (see the sketch after this list).
- Temporary space errors are using EE_LOCAL_TMP_SPACE_FULL and
  handler errors are using HA_ERR_LOCAL_TMP_SPACE_FULL.
  This is needed until we move general errors to their own error space
  so that they cannot conflict with system error numbers.
- The return values of my_chsize() and mysql_file_chsize() have changed
  so that -1 is returned in the case my_chsize() could not decrease
  the file size (very unlikely and will not happen on modern systems).
  All calls to _chsize() are updated to check for > 0 as the error
  condition.
- At the destruction of THD we check that THD::tmp_file_space == 0
- At server end we check that global_tmp_space_used == 0
- As a precaution against errors in the tmp_space_used code, one can set
  max_tmp_space_usage and max_total_tmp_space_usage to 0 to disable
  the tmp space quota errors.
- truncate_io_cache() function added.
- Aria tables using static or dynamic row length are registered in 8K
  increments to avoid some calls to update_tmp_file_size().
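
A simplified, self-contained sketch of the accounting flow described above
(names and structure are illustrative, not the server implementation): every
size increase goes through one callback that updates the per-user and global
counters, and a quota breach is reported only when limit checking was
requested, which is how the binlog commit flush can stay tracked without
turning into an error.

    #include <atomic>
    #include <cstdint>

    struct TmpSpaceQuota {
      std::atomic<int64_t> user_used{0};
      std::atomic<int64_t> global_used{0};
      int64_t user_limit= 0;      // 0 means "no limit" (quota checking disabled)
      int64_t global_limit= 0;
    };

    enum TrackMode { TRACK, TRACK_WITH_LIMIT };   // mirrors the MY_TRACK idea

    // Returns true when the growth must be rejected because a limit was hit.
    bool update_tmp_file_size(TmpSpaceQuota &q, int64_t growth, TrackMode mode)
    {
      int64_t user=   q.user_used.fetch_add(growth) + growth;
      int64_t global= q.global_used.fetch_add(growth) + growth;
      if (mode != TRACK_WITH_LIMIT)
        return false;             // tracked, but never turned into an error
      if ((q.user_limit   && user   > q.user_limit) ||
          (q.global_limit && global > q.global_limit))
        return true;              // caller maps this to EE_LOCAL_TMP_SPACE_FULL
      return false;
    }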

Other things:
- Ensure that all handler errors are registered.  Before, some engine
  errors could be printed as "Unknown error".
- Fixed a bug in filesort() that caused an assert if there was an error
  when writing to the temporary file.
- Fixed that compute_window_func() now takes into account write errors.
- In case of parallel replication, rpl_group_info::cleanup_context()
  could call trans_rollback() with thd->error set, which would cause
  an assert. Fixed by resetting the error before calling trans_rollback().
- Fixed a bug in subselect3.inc which caused the following test to use
  heap tables with a low value for max_heap_table_size.
- Fixed a bug in sql_expression_cache where it did not overflow the
  heap table to an Aria table.
- Added Max_tmp_disk_space_used to slow query log.
- Fixed some bugs in log_slow_innodb.test
2024-05-27 12:39:04 +02:00
Sergei Golubchik
3781848bca mark the deprecated sysvar deprecated
and adjust the copyright year
2024-05-27 12:39:02 +02:00
Monty
243b9f3cd2 MDEV-33501 Extend query_response_time plugin to be compatible with Percona server
This is to update the plugin to be compatible with Percona's
query_response_time plugin, with some additions.
Some of the tests are taken from Percona server.

- Added plugins QUERY_RESPONSE_TIME_READ, QUERY_RESPONSE_TIME_WRITE and
  QUERY_RESPONSE_TIME_READ_WRITE.
- Added option query_response_time_session_stats, with possible values
  GLOBAL, ON or OFF, to the query_response_time plugin.

Notes:
- All modules are dependent on QUERY_RESPONSE_READ_TIME. This must always
  be enabled if any of the other modules are used.
  This will be auto-enabled in the near future.
- Accounting is done per statement. Stored functions are regarded
  as part of the original statement.
- For stored procedures, accounting is done per statement executed
  in the stored procedure. Because of this, CALL itself is not accounted.
- FLUSH commands are not accounted for. This is to ensure that
  FLUSH QUERY_RESPONSE_TIME is not part of the statistics.
  (This helps when testing with mtr and otherwise.)
- FLUSH QUERY_RESPONSE_TIME_READ and FLUSH QUERY_RESPONSE_TIME_WRITE
  only reset the corresponding status.
- FLUSH QUERY_RESPONSE_TIME and FLUSH QUERY_RESPONSE_TIME_READ_WRITE, or
  changing the value of query_response_time_range_base followed by
  any FLUSH of QUERY_RESPONSE_TIME, reset all statuses.
2024-05-27 12:39:02 +02:00
Oleksandr Byelkin
dd7d9d7fb1 Merge branch '11.4' into 11.5 2024-05-23 17:01:43 +02:00
Oleksandr Byelkin
99b370e023 Merge branch '11.2' into 11.4 2024-05-21 19:38:51 +02:00
Marko Mäkelä
6fd4fa7d71 Merge 11.0 into 11.1 2024-05-20 11:05:03 +03:00
Hugo Wen
d1e230d9db MDEV-34112 Replace one operator name keyword
Alternative operator name keywords like `and`, `or`, `xor`, etc., are
uncommon in MariaDB and can cause obscure build errors when the GCC
flag `-fno-operator-names` is applied.

Description of `-fno-operator-names`:
https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html
> Do not treat the operator name keywords `and`, `bitand`, `bitor`,
> `compl`, `not`, `or` and `xor` as synonyms as keywords.

Part of the build errors:

    /local/p4clients/pkgbuild-LdLa_/workspace/src/RDSMariaDB/sql/sql_select.cc:11171:28: error: expected ‘)’ before ‘and’
    11171 |     DBUG_ASSERT(sel >= 0.0 and sel <= 1.00001);
          |                            ^~~
    /local/p4clients/pkgbuild-LdLa_/workspace/src/RDSMariaDB/include/my_global.h:372:44: note: in definition of macro ‘unlikely’
      372 | #define unlikely(x)     __builtin_expect(((x) != 0),0)
          |                                            ^
    ...

The build failure is caused by using the alternative operator name keyword
`and`, introduced in commit b66cdbd1e.
Replace the `and` keyword with `&&`, targeting the MariaDB 11.0+ branches
which include that commit.
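
For illustration only (this snippet is not part of the patch): why the
alternative token breaks under -fno-operator-names, and the portable
spelling the fix switches to.

    #include <cassert>

    static bool selectivity_in_range(double sel)
    {
      // return sel >= 0.0 and sel <= 1.00001;  // fails with -fno-operator-names
      return sel >= 0.0 && sel <= 1.00001;      // spelling used by the fix
    }

    int main()
    {
      assert(selectivity_in_range(0.5));
      return 0;
    }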

All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer
Amazon Web Services, Inc.
2024-05-15 15:10:05 +02:00
Dmitry Shulga
5e6c122427 MDEV-33769: Memory leak found in the test main.rownum run with --ps-protocol against a server built with the option -DWITH_PROTECT_STATEMENT_MEMROOT
A memory leak happens on the second execution of a query that is run in
PS mode and uses the function ROWNUM().

The memory leak took place on allocation of an instance of the class
Item_int for storing a limit value, which is performed in the function
set_limit_for_unit, indirectly called from JOIN::optimize_inner. A typical
trace to the place where the memory leak occurred is below:
 JOIN::optimize_inner
  optimize_rownum
   process_direct_rownum_comparison
    set_limit_for_unit
     new (thd->mem_root) Item_int(thd, lim, MAX_BIGINT_WIDTH);

To fix this memory leak, the function optimize_rownum() has to be called
only once, on the first execution, and never called after that. To control
this, the new data member
  first_rownum_optimization
was added into the structure st_select_lex.
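
A minimal sketch of the run-once control (simplified, not the actual server
code; only the member name first_rownum_optimization comes from the patch):
the rewrite that allocates new items on the statement mem_root is guarded so
it only happens on the first execution.

    struct SelectLexSketch {
      bool first_rownum_optimization= true;   // new member from this fix
    };

    void maybe_optimize_rownum(SelectLexSketch &sel)
    {
      if (!sel.first_rownum_optimization)
        return;                               // re-execution: skip the rewrite
      sel.first_rownum_optimization= false;
      // ... perform the ROWNUM() rewrite; the Item_int objects are allocated
      //     only this one time, so repeated executions no longer leak ...
    }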
2024-05-13 17:07:48 +07:00
Sergei Golubchik
f0a5412037 Merge branch '11.0' into 11.1 2024-05-13 09:52:30 +02:00
Sergei Golubchik
f9807aadef Merge branch '10.11' into 11.0 2024-05-12 12:18:28 +02:00
Sergei Golubchik
a6b2f820e0 Merge branch '10.6' into 10.11 2024-05-10 20:02:18 +02:00
Sergei Golubchik
7b53672c63 Merge branch '10.5' into 10.6 2024-05-08 20:06:00 +02:00
Sergei Petrunia
40b3525fcc MDEV-28621: group by optimization incorrectly removing subquery where subject buried in a function
Workaround patch: Do not remove GROUP BY clause when it has
subquer(ies) in it.

remove_redundant_subquery_clauses() removes a redundant GROUP BY clause
from queries of the form:
  expr IN (SELECT no_aggregates GROUP BY ...)
  expr {CMP} {ALL|ANY|SOME} (SELECT no_aggregates GROUP BY ...)
This hits problems when the GROUP BY clause itself has subquer(y/ies).

This patch is just a workaround: it disables removal of GROUP BY clause
if the clause has one or more subqueries in it.

Tests:
- subselect_elimination.test has all known crashing cases.
- subselect4.result, insert_select.result are updated.
Note that in some cases results of SELECT are changed too (not just
EXPLAINs). These are caused by non-deterministic SQL: when running a
query like:

  x > ANY( SELECT col1 FROM t1 GROUP BY constant_expression)

without removing the GROUP BY, the executor is free to pick the value
of t1.col1 from any row in the GROUP BY group (denote it $COL1_VAL).
Then, it computes x > ANY(SELECT $COL1_VAL).

When running the same query and removing the GROUP BY:

   x > ANY( SELECT col1 FROM t1)

the executor will actually check all rows of t1.
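
A rough sketch of the workaround's guard, using an invented representation
of the GROUP BY list: removal is skipped as soon as any GROUP BY element
contains a subquery.

    #include <vector>

    struct GroupByElement { bool contains_subquery; };

    bool may_remove_group_by(const std::vector<GroupByElement> &group_list)
    {
      for (const GroupByElement &e : group_list)
        if (e.contains_subquery)
          return false;           // workaround: keep the GROUP BY clause
      return true;                // safe to remove, as before
    }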
2024-05-07 21:25:22 +02:00
Sergei Golubchik
22b3ba9312 MDEV-25102 UNIQUE USING HASH error after ALTER ... DISABLE KEYS
On disable_indexes(HA_KEY_SWITCH_NONUNIQ_SAVE) the engine does
not know that the long unique key is logically unique, because on the
engine level it is not, so the engine disables it.

Change the disable_indexes/enable_indexes API. Instead of the enum
mode, send a key_map of indexes that should be enabled. This way the
server will decide what is unique, not the engine.
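
A sketch of the direction of the API change, with std::bitset standing in
for the server's key_map type (the names below are invented): the server
hands the engine the exact set of indexes to keep enabled, so long unique
keys are no longer disabled by mistake.

    #include <bitset>
    #include <cstddef>

    constexpr std::size_t MAX_INDEXES_SKETCH= 64;
    using key_map_sketch= std::bitset<MAX_INDEXES_SKETCH>;

    struct EngineSketch {
      key_map_sketch enabled_indexes;

      // Replaces the old enum-mode disable_indexes()/enable_indexes() pair:
      // the server decides which indexes stay on, the engine just applies it.
      void set_enabled_indexes(const key_map_sketch &keep)
      {
        enabled_indexes= keep;
      }
    };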
2024-05-06 17:16:10 +02:00
Alexander Barkov
c6e3fe29d4 MDEV-30646 View created via JSON_ARRAYAGG returns incorrect json object
Backporting add782a13e from 10.6, this fixes the problem.
2024-04-29 13:47:45 +04:00
Sergei Golubchik
018d537ec1 Merge branch '10.6' into 10.11 2024-04-22 15:23:10 +02:00
Alexander Barkov
fd247cc21f MDEV-31340 Remove MY_COLLATION_HANDLER::strcasecmp()
This patch also fixes:
  MDEV-33050 Build-in schemas like oracle_schema are accent insensitive
  MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
  MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
  MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
  MDEV-33088 Cannot create triggers in the database `MYSQL`
  MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
  MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
  MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
  MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
  MDEV-33120 System log table names are case insensitive with lower-case-table-names=0

- Removing the virtual function strnncoll() from MY_COLLATION_HANDLER

- Adding a wrapper function CHARSET_INFO::streq(), to compare
  two strings for equality. For now it calls strnncoll() internally.
  In the future it will turn into a virtual function.

- Adding new accent sensitive case insensitive collations:
    - utf8mb4_general1400_as_ci
    - utf8mb3_general1400_as_ci
  They implement accent sensitive case insensitive comparison.
  The weight of a character is equal to the code point of its
  upper case variant. These collations use Unicode-14.0.0 casefolding data.

  The result of
     my_charset_utf8mb3_general1400_as_ci.strcoll()
  is very close to the former
     my_charset_utf8mb3_general_ci.strcasecmp()

  There is only a difference in a couple dozen rare characters, because:
    - the switch from "tolower" to "toupper" comparison, to make
      utf8mb3_general1400_as_ci closer to utf8mb3_general_ci
    - the switch from Unicode-3.0.0 to Unicode-14.0.0
  This difference should be tolerable. See the list of affected
  characters in the MDEV description.

  Note, utf8mb4_general1400_as_ci correctly handles non-BMP characters!
  Unlike utf8mb4_general_ci, it does not treat all BMP characters
  as equal.

- Adding classes representing names of the file based database objects:

    Lex_ident_db
    Lex_ident_table
    Lex_ident_trigger

  Their comparison collation depends on the underlying
  file system case sensitivity and on --lower-case-table-names
  and can be either my_charset_bin or my_charset_utf8mb3_general1400_as_ci.

- Adding classes representing names of other database objects,
  whose names have case insensitive comparison style,
  using my_charset_utf8mb3_general1400_as_ci:

  Lex_ident_column
  Lex_ident_sys_var
  Lex_ident_user_var
  Lex_ident_sp_var
  Lex_ident_ps
  Lex_ident_i_s_table
  Lex_ident_window
  Lex_ident_func
  Lex_ident_partition
  Lex_ident_with_element
  Lex_ident_rpl_filter
  Lex_ident_master_info
  Lex_ident_host
  Lex_ident_locale
  Lex_ident_plugin
  Lex_ident_engine
  Lex_ident_server
  Lex_ident_savepoint
  Lex_ident_charset
  engine_option_value::Name

- All the mentioned Lex_ident_xxx classes implement a method streq():

  if (ident1.streq(ident2))
     do_equal();

  This method works as a wrapper for CHARSET_INFO::streq().

- Changing a lot of "LEX_CSTRING name" to "Lex_ident_xxx name"
  in class members and in function/method parameters.

- Replacing all calls like
    system_charset_info->coll->strcasecmp(ident1, ident2)
  to
    ident1.streq(ident2)

- Taking advantage of the C++11 user-defined literal operator
  for LEX_CSTRING (see m_strings.h) and Lex_ident_xxx (see lex_ident.h)
  data types. Use example:

  const Lex_ident_column primary_key_name= "PRIMARY"_Lex_ident_column;

  is now a shorter version of:

  const Lex_ident_column primary_key_name=
    Lex_ident_column({STRING_WITH_LEN("PRIMARY")});
2024-04-18 15:22:10 +04:00
Marko Mäkelä
829cb1a49c Merge 10.5 into 10.6 2024-04-17 14:14:58 +03:00
Oleksandr Byelkin
9b18275623 Merge branch '10.4' into 10.5 2024-04-16 11:04:14 +02:00
Yuchen Pei
f9e0ebeca4 MDEV-33742 Do not create group by handler when all tables are constant 2024-04-08 14:35:36 +10:00
Sergei Petrunia
8cc36fb743 MDEV-21102: Server crashes in JOIN_CACHE::write_record_data upon EXPLAIN with subqueries
JOIN_CACHE has a light-weight initialization mode that's targeted at
EXPLAINs. In that mode, JOIN_CACHE objects are not able to execute.

Light-weight mode was used whenever the statement was an EXPLAIN. However,
EXPLAIN can execute subqueries, provided they enumerate fewer than
@@expensive_subquery_limit rows.

Make sure we use light-weight initialization mode only when the select is
more expensive than @@expensive_subquery_limit.

Also add an assert into JOIN_CACHE::put_record() which prevents its use
if it was initialized for EXPLAIN only.
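
A simplified sketch of the two pieces of the fix (invented names): the
light, EXPLAIN-only initialization is chosen only for selects that will not
be executed during EXPLAIN, and writing into a cache initialized that way
now asserts.

    #include <cassert>

    struct JoinCacheSketch {
      bool explain_only= false;

      void init(bool light_mode) { explain_only= light_mode; }

      void put_record()
      {
        assert(!explain_only);    // new assert: this cache must never execute
        // ... buffer the joined record ...
      }
    };

    // EXPLAIN only executes subqueries that enumerate fewer rows than
    // @@expensive_subquery_limit, so the non-executable light mode is safe
    // only above that limit.
    bool use_light_init(bool is_explain, double select_rows, double limit_rows)
    {
      return is_explain && select_rows > limit_rows;
    }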
2024-04-04 10:30:09 +03:00
Marko Mäkelä
683fbced6b Merge 11.0 into 11.1 2024-03-28 12:15:36 +02:00
Marko Mäkelä
fec2fd6add Merge 10.11 into 11.0 2024-03-28 10:51:36 +02:00
Marko Mäkelä
788953463d Merge 10.6 into 10.11
Some fixes related to commit f838b2d799 and
Rows_log_event::do_apply_event() and Update_rows_log_event::do_exec_row()
for system-versioned tables were provided by Nikita Malyavin.
This was required by test versioning.rpl,trx_id,row.
2024-03-28 09:16:57 +02:00
Sergei Golubchik
55cea0c2a6 Merge branch '10.5' into 10.6 2024-03-14 19:52:08 +01:00
Sergei Petrunia
9d5a8bd663 MDEV-33665: MSAN failure due to uninitialized Item_func::not_null_tables_cache
eliminate_item_equal() uses quick_fix_field() for Item objects it creates.
It computes some of their attributes on its own (see update_used_tables()
call) but it doesn't update not_null_tables_cache.

Recompute not_null_tables_cache as well. Not computing it is currently
harmless, except for producing an MSAN error when some other code
propagates the wrong value of not_null_tables_cache to another item.
2024-03-14 16:11:40 +03:00
Sergei Golubchik
f71d7f2f0f Merge branch '10.5' into 10.6 2024-03-13 21:02:34 +01:00
Marko Mäkelä
f703e72bd8 Merge 10.4 into 10.5 2024-03-11 10:08:20 +02:00
Marko Mäkelä
d73baa402a Merge 10.11 into 11.0 2024-02-20 12:02:01 +02:00
Kristian Nielsen
5707f1efda MDEV-33468: Crash due to missing stack overrun check in two recursive functions
Thanks to Yury Chaikou for finding this problem (and the fix).

Reviewed-by: Monty <monty@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
2024-02-16 12:48:30 +01:00
Marko Mäkelä
64cce8d5bf Merge 10.6 into 10.11 2024-02-14 16:12:53 +02:00
Marko Mäkelä
691f923906 Merge 10.5 into 10.6 2024-02-13 20:42:59 +02:00
Monty
3907345e22 MDEV-33306 Optimizer choosing incorrect index in 10.6, 10.5 but not in 10.4
In MariaDB up to 10.11, the test_if_cheaper_ordering() code (which tries
to optimize how GROUP BY is executed) assumes that if a table scan is used
and there is any index usable by GROUP BY, that index will be used.

The reason MariaDB 10.4 provides a better plan comes down to two differences:
- Plans using 'ref' have a cost of 1/10 of what they should have (as a
  protection against table scans). This is why 'ref' is used in 10.4
  and not in 10.5.
- When 'ref' is used, GROUP BY will not use an index.

In MariaDB 10.5 the chosen plan is a table scan (as it is calculated to be
faster), but as 'ref' is not used, the test_if_cheaper_ordering()
optimizer phase decides to use an index for GROUP BY,
which has bad performance.

Description of fix:
- All new code is protected by the "optimizer_adjust_secondary_key_costs"
  variable, which is now a bit map, and is only executed if the option
  "disable_forced_index_in_group_by" is set.
- Corrects GROUP BY handling in test_if_cheaper_ordering() by making
  the choice of using an index for GROUP BY cost based instead of rule
  based.
- Adds TIME_FOR_COMPARE to all costs, when using group by, to make
  read_time, index_scan_time and range_cost comparable.
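
A very rough sketch of the cost-based choice (formulas and names are only
illustrative of the idea, not the server's cost model): instead of always
switching to a usable index, both alternatives get a per-row comparison cost
added and the cheaper one wins.

    // Dividing the row estimate by a compare factor is how the per-row
    // evaluation cost is modelled in this sketch.
    bool prefer_index_for_group_by(double read_time, double table_rows,
                                   double index_scan_time, double index_rows,
                                   double time_for_compare)
    {
      double scan_cost=  read_time       + table_rows / time_for_compare;
      double index_cost= index_scan_time + index_rows / time_for_compare;
      return index_cost < scan_cost;      // cost based, not rule based
    }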

Other things:
- Made optimizer_adjust_secondary_key_costs a bit map (compatible with old
  code).

Notes:
Current code ignores the cost of the algorithm used when doing GROUP
BY on the first table:
  - Creating an in-memory temporary table for handling the GROUP BY and
    doing a filesort of the result file.
We can probably continue to ignore this cost in 10.6.

This patch should NOT be merged to 11.0 series (not needed in 11.0).
2024-02-12 16:43:00 +02:00
Marko Mäkelä
8ec12e0d6d Merge 10.4 into 10.5 2024-02-12 11:38:13 +02:00
Rex
36f51d9748 MDEV-29179 Condition pushdown from HAVING into WHERE is not shown in optimizer trace
In JOIN::optimize_inner(), condition pushdown from HAVING into WHERE
was not shown in the optimizer trace.
2024-02-11 22:21:32 +01:00
Oleg Smirnov
15623c7f29 MDEV-30660 Aggregation functions fail to leverage uniqueness property
When executing a statement of the form
  SELECT AGGR_FN(DISTINCT c1, c2,..,cn) FROM t1,
where AGGR_FN is an aggregate function such as COUNT(), AVG() or SUM(),
and a unique index exists on table t1 covering some or all of the
columns (c1, c2,..,cn), the retrieved values are inherently unique.
Consequently, the need for de-duplication imposed by the DISTINCT
clause can be eliminated, leading to optimization of aggregation
operations.
This optimization applies under the following conditions:
  - only one table involved in the join (not counting const tables)
  - some arguments of the aggregate function are fields
        (not functions/subqueries)

This optimization extends to queries of the form
  SELECT AGGR_FN(c1, c2,..,cn) GROUP BY cx,..cy
when a unique index covers some or all of the columns
(c1, c2,..cn, cx,..cy)
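
A standalone sketch of the applicability test (the data representation is
invented): DISTINCT is redundant when all columns of some unique index on
the table appear among the aggregate's field arguments.

    #include <set>
    #include <string>
    #include <vector>

    bool distinct_is_redundant(const std::vector<std::vector<std::string>> &unique_keys,
                               const std::set<std::string> &aggr_field_args)
    {
      for (const auto &key : unique_keys)
      {
        bool covered= true;
        for (const auto &col : key)
          if (!aggr_field_args.count(col)) { covered= false; break; }
        if (covered)
          return true;            // values are already unique: skip de-duplication
      }
      return false;
    }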
2024-02-10 14:54:03 +07:00
Dmitry Shulga
e48bd474a2 MDEV-15703: Crash in EXECUTE IMMEDIATE 'CREATE OR REPLACE TABLE t1 (a INT DEFAULT ?)' USING DEFAULT
This patch fixes the issue with passing the DEFAULT or IGNORE values to
positional parameters for some kinds of SQL statements executed
as prepared statements.

The main idea of the patch is to associate an actual value being passed
by the USING clause with the positional parameter represented by
the Item_param class. Such an association must be performed on execution of
an UPDATE statement in PS/SP mode. Other corner cases that result in a
server crash are handling CREATE TABLE with a positional parameter
placed after the DEFAULT clause, or a CALL statement, passing either
the value DEFAULT or IGNORE as an actual value for the positional parameter.
These cases are fixed by checking whether an error is set in the diagnostics
area at the function pack_vcols() on return from the function pack_expression()
2024-02-08 09:21:54 +01:00
Oleksandr Byelkin
d21cb43db1 Merge branch '11.2' into 11.3 2024-02-04 16:42:31 +01:00