Commit Graph

2936 Commits

Author SHA1 Message Date
Sergei Golubchik
c27d78beb5 MDEV-36870 Spurious unrelated permission error when selecting from table with default that uses nextval(sequence)
Lots of different cases, SELECT, SELECT DEFAULT(),
UPDATE t SET x=DEFAULT, prepared statements,
opening of a table for the I_S, prelocking (so TL_WRITE),
insert with subquery (so SQLCOM_SELECT), etc.

Don't check NEXTVAL privileges in fix_fields() anymore, it cannot
possibly handle all the cases correctly. Make a special method
Item_func_nextval::check_access() for that and invoke it from

* fix_fields on explicit SELECT NEXTVAL()
  (but not if NEXTVAL() is used in a DEFAULT clause)
* when the DEFAULT bareword is used in, say, UPDATE t SET x=DEFAULT
  (but not if DEFAULT() itself is used in a DEFAULT clause)
* in CREATE TABLE
* in ALTER TABLE ALGORITHM=INPLACE (that doesn't go CREATE TABLE path)
* on INSERT

helpers
* Virtual_column_info::check_access() to walk the item tree and invoke
  Item::check_access()
* TABLE::check_sequence_privileges() to iterate default expressions
  and invoke Virtual_column_info::check_access()

also, single-table UPDATE in prepared statements now associates
value items with fields just as multi-update already did; this fixes the
case of PREPARE s "UPDATE t SET x=?"; EXECUTE s USING DEFAULT.
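As a rough illustration of the affected pattern (table and sequence
names here are hypothetical, not from the patch):

  CREATE SEQUENCE s1;
  CREATE TABLE t1 (id INT DEFAULT NEXTVAL(s1), a INT);
  -- for a user with privileges on t1 but not on s1:
  SELECT a FROM t1;               -- must not fail, the default is unused
  INSERT INTO t1 (a) VALUES (1);  -- must check NEXTVAL privilege on s1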
2025-07-09 18:04:46 +02:00
Oleksandr Byelkin
19644f6821 Merge branch '10.5' into 10.6 2025-04-26 10:41:52 +02:00
Oleksandr Byelkin
4fc9dc84b0 MDEV-32086 (part 2) Server crash when inserting from derived table containing insert target table
Get rid of the need for materialization for usual INSERT (cache results
in Item_cache* if needed)

- subqueries in VALUES do not see new records in the table we are
  inserting into
- subqueries in RETURNING are prohibited from using the table we are
  inserting into
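A sketch of the two rules, with a hypothetical table t1:

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1);
  -- the subquery in VALUES sees only the rows that existed before
  -- this INSERT started:
  INSERT INTO t1 VALUES ((SELECT MAX(a) FROM t1));
  -- using the insert target in RETURNING is prohibited:
  INSERT INTO t1 VALUES (2) RETURNING (SELECT COUNT(*) FROM t1);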
2025-04-25 15:10:36 +02:00
Oleksandr Byelkin
9b313d2de1 MDEV-32086 Server crash when inserting from derived table containing insert target table
Use the original solution for INSERT ... SELECT - select result bufferisation.

Also fix MDEV-36447 and MDEV-33139
2025-04-25 12:58:24 +02:00
Julius Goryavsky
e3d7d5ca26 Merge branch '10.5' into '10.6' 2025-02-27 04:02:33 +01:00
Sergei Golubchik
cd7513995d MDEV-36026 Problem with INSERT SELECT on NOT NULL columns while having BEFORE UPDATE trigger
MDEV-8605 and MDEV-19761 didn't handle INSERT (columns) SELECT

followup for a69da0c31e
2025-02-10 15:55:00 +01:00
Sergei Golubchik
066e8d6aea Merge branch '10.5' into 10.6 2025-01-29 11:17:38 +01:00
Sergei Golubchik
9a0ac0cdf7 MDEV-35911 Assertion `marked_for_write_or_computed()' failed in bool Field_new_decimal::store_value(const my_decimal*, int*)
disable the assert.

also, use the same check for check_that_all_fields_are_given_values()
as it's used in not_null_fields_have_null_values() - to avoid
issuing the same warning twice.
2025-01-28 19:29:54 +01:00
Jan Lindström
43c36b3c88 MDEV-35852 : ASAN heap-use-after-free in WSREP_DEBUG after INSERT DELAYED
The problem was that in the case of INSERT DELAYED, thd->query() is
freed before we call trans_rollback, where WSREP_DEBUG
could access thd->query() in wsrep_thd_query().

The fix is to reset thd->query() to NULL in the delayed_insert
destructor after it is freed. There is already a
null guard in wsrep_thd_query().

Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
2025-01-20 12:19:31 +01:00
Marko Mäkelä
98dbe3bfaf Merge 10.5 into 10.6 2025-01-20 09:57:37 +02:00
Sergei Golubchik
a69da0c31e MDEV-19761 Before Trigger not processed for Not Null Columns if no explicit value and no DEFAULT
it's incorrect to zero out table->triggers->extra_null_bitmap
before a statement, because if an insert uses an explicit field list
and omits a field that has no default value, the field should
get NULL implicitly. So extra_null_bitmap should have 1s for all
fields that have no defaults

* create extra_null_bitmap_init and initialize it as above
* copy extra_null_bitmap_init to extra_null_bitmap for inserts
* still zero out extra_null_bitmap for updates/deletes where
  all fields definitely have a value
* make not_null_fields_have_null_values() send
  ER_NO_DEFAULT_FOR_FIELD for fields with no default and no value,
  otherwise creation of a trigger with an empty body would change the
  error message
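A minimal sketch of the scenario (names are illustrative):

  CREATE TABLE t1 (a INT NOT NULL, b INT NOT NULL DEFAULT 5);
  CREATE TRIGGER tr BEFORE INSERT ON t1 FOR EACH ROW SET NEW.a = 1;
  -- a is omitted and has no default: the trigger must see NULL in
  -- NEW.a and gets the chance to assign it before the NOT NULL check
  INSERT INTO t1 (b) VALUES (10);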
2025-01-17 23:42:56 +01:00
Aleksey Midenkov
78c192644c MDEV-35343 unexpected replace behaviour when long unique index on
system versioned table

For a versioned table, REPLACE first tries to insert a row; if it gets a
duplicate key error and the optimization is possible, it does UPDATE +
INSERT history. If the optimization is not possible, it goes through the
normal branch for UPDATE to history and repeats the INSERT cycle.

The failure was in the normal branch, when we tried UPDATE to history
but such history already existed from previous cycles. There are no such
failures in the optimized branch because vers_insert_history_row()
already ignores duplicates.

The fix ignores duplicate errors for UPDATE to history and does DELETE
instead.
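A rough sketch of the kind of statement sequence involved (schema is
hypothetical):

  CREATE TABLE t1 (a INT, b VARCHAR(1000), UNIQUE(b) USING HASH)
    WITH SYSTEM VERSIONING;
  INSERT INTO t1 VALUES (1, 'x');
  REPLACE INTO t1 VALUES (2, 'x');  -- first cycle writes a history row
  REPLACE INTO t1 VALUES (3, 'x');  -- next cycle hit the existing history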
2025-01-14 18:56:13 +03:00
Aleksey Midenkov
92383f8db1 MDEV-26891 Segfault in Field::register_field_in_read_map upon INSERT
DELAYED with virtual columns

The segfault was caused by two different copies of the same Field
instance in a prepared delayed insert. One was made by
Delayed_insert::get_local_table() (see make_new_field()). That copy
went through parse_vcol_defs() and received a new vcol_info->expr.

Another one was made by copy_keys_from_share() by this code:

        /*
          We are using only a prefix of the column as a key:
          Create a new field for the key part that matches the index
        */
        field= key_part->field=field->make_new_field(root, outparam, 0);
        field->field_length= key_part->length;

So, key_part and table got different objects of the same field, and the
crash happened because key_part->field->vcol_info->expr was NULL.

The fix introduces update_keypart_vcol_info() to update vcol_info->expr
in key_part->field.

Cleanup: memdup_vcol() is now static inline instead of a macro, and it
checks for OOM.
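A hypothetical table layout that exercises both copies, a prefix key
over a virtual column in a delayed insert:

  CREATE TABLE t1 (
    a VARCHAR(100),
    v VARCHAR(100) AS (a) PERSISTENT,
    KEY (v(10))              -- prefix key: copy_keys_from_share()
  ) ENGINE=MyISAM;           -- makes its own copy of the Field
  INSERT DELAYED INTO t1 (a) VALUES ('x');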
2025-01-14 18:56:13 +03:00
Oleksandr Byelkin
0d35fe6e57 MDEV-35326: Memory Leak in init_io_cache_ext upon SHUTDOWN
The problems were that:
1) resources were freed asymmetrically: in send_eof in normal execution,
 in the destructor in case of error.
2) the destructor was not called for result objects in case of SP
(so if the last SP execution ended with an error, resources were not
freed on reinit before execution (cleanup() is called before the next
execution) and the destructor was also not called due to the lack of a
delete call for the object).

Result cleanup() is renamed to reset_for_next_ps_execution() to better
reflect its function.

All result methods were revised and freeing of resources made symmetric.

The destructor of the result object is now called for SP.

Added the invalidation that was skipped in case of error in insert.

Removed the misleading naming of reset(thd) (which could be confused
with reset()).
2025-01-13 10:04:27 +01:00
Yuchen Pei
671f80c738 Merge branch '10.5' into 10.6 2024-12-17 11:06:09 +11:00
Oleg Smirnov
d98ac8511e MDEV-26247 MariaDB Server SEGV on INSERT .. SELECT
This problem occurred for statements like `INSERT INTO t1 SELECT 1`,
which do not have tables in the SELECT part. In such scenarios
SELECT_LEX::insert_tables was not properly set at `setup_tables()`,
and this led to either incorrect execution or a crash.

Reviewer: Oleksandr Byelkin <sanja@mariadb.com>
2024-12-14 14:04:21 +07:00
Oleg Smirnov
e640373389 Revert "MDEV-26427 MariaDB Server SEGV on INSERT .. SELECT"
This reverts commit 49e14000ee
as it introduces the regression MDEV-29935 and has to be reconsidered
in general
2024-12-14 13:08:17 +07:00
Dmitry Shulga
54c1031b74 MDEV-34958: after Trigger doesn't work correctly with bulk insert
This bug has the same nature as the issues
  MDEV-34718: Trigger doesn't work correctly with bulk update
  MDEV-24411: Trigger doesn't work correctly with bulk insert

To fix the issue covering all use cases, resetting thd->bulk_param
temporarily to the value nullptr before invoking triggers and restoring
its original value on finishing execution of a trigger is moved to the
method
  Table_triggers_list::process_triggers
that is ultimately invoked for any kind of trigger.
2024-12-13 16:19:39 +07:00
Oleksandr Byelkin
f00711bba2 Merge branch '10.5' into 10.6 2024-10-29 14:20:03 +01:00
Sergei Golubchik
3a1cf2c85b MDEV-34679 ER_BAD_FIELD uses non-localizable substrings 2024-10-17 21:37:37 +02:00
Monty
bddbef3573 MDEV-34533 asan error about stack overflow when writing record in Aria
The problem was that when using clang + asan, we do not get a correct
value for the thread stack, as some local variables are not allocated on
the normal stack.

It looks like, for example, clang 18.1.3, when compiling with
-O2 -fsanitize=address, puts local variables and things allocated by
alloca() in areas other than on the stack.

The following code shows the issue

Thread 6 "mariadbd" hit Breakpoint 3, do_handle_one_connection
    (connect=0x5080000027b8,
    put_in_cache=<optimized out>) at sql/sql_connect.cc:1399

THD *thd;
1399      thd->thread_stack= (char*) &thd;
(gdb) p &thd
(THD **) 0x7fffedee7060
(gdb) p $sp
(void *) 0x7fffef4e7bc0

The address of thd is 24M away from the stack pointer

(gdb) info reg
...
rsp            0x7fffef4e7bc0      0x7fffef4e7bc0
...
r13            0x7fffedee7060      140737185214560

r13 is pointing to the address of the thd. Probably some kind of
"local stack" used by the sanitizer

I have verified this with gdb on a recursive call that calls alloca()
in a loop. In this case all objects were stored in a local heap,
not on the stack.

To solve this issue in a portable way, I have added two functions:

my_get_stack_pointer() returns the address of the current stack pointer.
The code is using asm instructions for intel 32/64 bit, powerpc,
arm 32/64 bit and sparc 32/64 bit.
Supported compilers are gcc, clang and MSVC.
For MSVC 64 bit we are using _AddressOfReturnAddress()

As a fallback for other compilers/arch we use the address of a local
variable.

my_get_stack_bounds() returns the address of the stack base and the
stack size using pthread_attr_getstack() or NtCurrentTeb(), with a
fallback to using the address of a local variable and a user-provided
stack size.

Server changes are:

- Moving setting of thread_stack to THD::store_globals() using
  my_get_stack_bounds().
- Removing the setting of thd->thread_stack, except in functions that
  allocate a lot on the stack before calling store_globals().  When
  using estimates for the stack start, we reduce stack_size by
  MY_STACK_SAFE_MARGIN (8192) to take into account the stack used
  before calling store_globals().

I also added a unittest, stack_allocation-t, to verify the new code.

Reviewed-by: Sergei Golubchik <serg@mariadb.org>
2024-10-16 17:24:46 +03:00
Oleksandr Byelkin
1d0e94c55f Merge branch '10.5' into 10.6 2024-10-09 08:38:48 +02:00
Sergei Golubchik
3ea71a2c8e MDEV-16699 heap-use-after-free in group_concat with compressed or GIS columns
Field_blob::store() has special code for GROUP_CONCAT temporary table
(to store blob values in Blob_mem_storage - this prevents them
from being freed/overwritten when a next row is read).

Field_geom and Field_blob_compressed inherit from Field_blob but they
have their own ::store() method without this special Blob_mem_storage
support.

Considering that non-grouping CONCAT() of such fields converts
them to plain BLOB, let's do the same for GROUP_CONCAT. To do it,
Item_func_group_concat::setup will signal that it's creating
a temporary table for GROUP_CONCAT, and the Field_blob::make_new_field()
override will create a base Field_blob when under GROUP_CONCAT.
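For illustration, a sketch with both affected field kinds (schema is
hypothetical):

  CREATE TABLE t1 (g POINT, c TEXT COMPRESSED);
  INSERT INTO t1 VALUES (POINT(1,1), REPEAT('x', 100));
  -- both columns are now converted to plain BLOB in the GROUP_CONCAT
  -- temporary table, so the Blob_mem_storage handling applies:
  SELECT GROUP_CONCAT(g), GROUP_CONCAT(c) FROM t1;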
2024-10-08 15:31:02 +02:00
Aleksey Midenkov
706a8bcb5b MDEV-33470 Unique hash index is broken on DML for system-versioned table
The hash index is a vcol-based wrapper (MDEV-371). row_end is added to
the unique index, so when row_end is updated the unique hash index must
be recalculated via vcol_update_fields(). DELETE did not update virtual
fields, so DELETE HISTORY was getting a wrong hash value.

The fix calls update_virtual_fields() in vers_update_end(), so that in
every case where row_end is updated the virtual fields are updated as
well.
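A sketch of the affected sequence (hypothetical schema):

  CREATE TABLE t1 (a INT, b VARCHAR(1000), UNIQUE(b) USING HASH)
    WITH SYSTEM VERSIONING;
  INSERT INTO t1 VALUES (1, 'x');
  DELETE FROM t1;          -- updates row_end; the hash must be recomputed
  DELETE HISTORY FROM t1;  -- used to see a wrong hash value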
2024-10-08 13:08:10 +03:00
Aleksey Midenkov
d37bb140b1 MDEV-31297 Create table as select on system versioned tables do not
work consistently on replication

Row-based replication does not execute CREATE .. SELECT but instead
CREATE TABLE. CREATE .. SELECT creates implicit system fields in an
unusual place: in between the declared fields and the select fields.
That was done because the select_field_pos logic requires select fields
to go last in create_list.

So, CREATE .. SELECT on the master and CREATE TABLE on the slave create
system fields at different positions, and replication gets a field
mismatch.

To fix this we've changed CREATE .. SELECT to create implicit system
fields in the usual place at the end, and updated select_field_pos to
handle this case.
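A sketch of the field-order problem (t1/t2 are hypothetical):

  CREATE TABLE t2 (b INT) WITH SYSTEM VERSIONING AS SELECT a FROM t1;
  -- master, CREATE .. SELECT: row_start/row_end used to be created
  --   between b and a
  -- slave, plain CREATE TABLE from row-based events: row_start/row_end
  --   at the end, after a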
2024-10-08 13:08:10 +03:00
Julius Goryavsky
80fff4c6b1 Merge branch '10.5' into '10.6' 2024-09-16 16:39:59 +02:00
Marko Mäkelä
b331cde26b MDEV-34921 MemorySanitizer reports errors for non-debug builds
my_b_encr_write(): Initialize also block_length, and at the same time
last_block_length, so that all 128 bits can be initialized with fewer
writes. This fixes an error that was caught in the test
encryption.tempfiles_encrypted.

test_my_safe_print_str(): Skip a test that would attempt to
display uninitialized data in the test unit.stacktrace.
Previously, our CI did not build unit tests with MemorySanitizer.

handle_delayed_insert(): Remove a redundant call to pthread_exit(0),
which would for some reason cause MemorySanitizer in clang-19 to
report a stack overflow in a RelWithDebInfo build. This fixes a
failure of several tests.

Reviewed by: Vladislav Vaintroub
2024-09-13 14:34:08 +03:00
Yuchen Pei
60b93cdd30 Merge branch '10.5' into 10.6 2024-09-06 13:52:57 +10:00
Yuchen Pei
2c3e07df47 MDEV-34447: Memory leakage is detected on running the test main.ps against the server 11.1
The memory leak happened on the second execution of a prepared statement
that runs an UPDATE statement with a correlated subquery on the right
hand side of the SET clause. In this case, invocation of the method
  table->stat_records()
could return a zero value, resulting in going into the 'if' branch that
handles an impossible WHERE condition. The issue is that this branch
missed the saving of leaf tables, which has to be performed as the first
condition optimization activity. Later the PS statement memory root is
marked as read only on finishing the first execution of the prepared
statement. Next time the same statement is executed it hits the assertion
on an attempt to allocate memory on the PS memory root marked as read
only. This memory allocation takes place in the following sequence of
invocations:
 Prepared_statement::execute
  mysql_execute_command
   Sql_cmd_dml::execute
    Sql_cmd_update::execute_inner
     Sql_cmd_update::update_single_table
      st_select_lex::save_leaf_tables
       List<TABLE_LIST>::push_back

To fix the issue, add the flag SELECT_LEX::leaf_tables_saved to control
whether the method SELECT_LEX::save_leaf_tables() has to be called or
has already been invoked and no more invocations are required.

A similar issue could take place on running a DELETE statement with a
LIMIT clause in PS/SP mode. The reason for the memory leak is the same
as in the UPDATE case and it is fixed in the same way.
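A minimal sketch of the UPDATE case (tables are hypothetical; t2 being
empty makes stat_records() return zero):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT, b INT);
  PREPARE s FROM
    'UPDATE t1 SET a = (SELECT MAX(b) FROM t2 WHERE t2.a = t1.a)';
  EXECUTE s;  -- first execution; the PS memory root becomes read only
  EXECUTE s;  -- used to hit the allocation on the read-only root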
2024-09-06 11:41:58 +10:00
Oleksandr Byelkin
fc5772ce17 Merge branch '10.5' into 10.6 2024-08-20 09:11:34 +02:00
Dmitry Shulga
ba5482ffc2 MDEV-34718: Trigger doesn't work correctly with bulk update
Running an UPDATE statement in PS mode, with positional parameter(s)
bound to an array of actual values (that is, prepared to be run in bulk
mode), results in incorrect behaviour in the presence of an ON UPDATE
trigger that also executes an UPDATE statement. The same is true for
handling a DELETE statement in the presence of an ON DELETE trigger.
Typically, the visible effect of such incorrect behaviour is a wrong
number of updated/deleted rows in the target table. Additionally, in the
case of an UPDATE statement, the number of modified rows and the state
message returned by the statement contain wrong information about the
number of modified rows.

The reason for the incorrect number of updated/deleted rows is that the
data structure used for binding a positional argument to its actual
values is stored in THD (this is thd->bulk_param) and reused on
processing every INSERT/UPDATE/DELETE statement. This leads to the
actual values bound to the top-level UPDATE/DELETE statement being
consumed by other DML statements used in the triggers' bodies.

To fix the issue, reset thd->bulk_param temporarily to the value
nullptr before invoking triggers and restore its value on finishing
their execution.

The second part of the problem, the wrong value of affected rows
reported through the Connector/C API, is caused by the fact that the
diagnostics area is reused by the original DML statement and a statement
invoked by a trigger. This fact should be taken into account when
finalizing the state of the diagnostics area on completion of running a
statement.

Important remark: in case the macro DBUG_OFF is on, a call of the method
  Diagnostics_area::reset_diagnostics_area()
results in a reset of the data members
  m_affected_rows, m_statement_warn_count.
Values of these data members of the class Diagnostics_area are used when
sending OK and EOF messages. In case a DML statement is executed in PS
bulk mode, such resetting results in sending wrong result values to the
client for affected rows in case the DML statement fires triggers. So,
reset these data members only in case the current statement being
processed is not run in bulk mode.
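The server-side half of the scenario can be sketched in SQL (the bulk
part itself is driven from the client, which binds an array of values to
the placeholder before executing):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t1_log (a INT);
  CREATE TRIGGER tr AFTER UPDATE ON t1 FOR EACH ROW
    INSERT INTO t1_log VALUES (NEW.a);
  -- client: PREPARE 'UPDATE t1 SET a = a + 1 WHERE a = ?' and execute
  -- it with an array bound to ?; the trigger's INSERT used to consume
  -- thd->bulk_param and throw off the affected-rows accounting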
2024-08-19 12:13:43 +07:00
Yuchen Pei
d7042ec4da Merge branch '10.5' into 10.6 2024-06-26 09:16:54 +08:00
Dmitry Shulga
8b169949d6 MDEV-24411: Trigger doesn't work correctly with bulk insert
Executing an INSERT statement in PS mode with a positional parameter
bound to an array could result in an incorrect number of inserted rows,
in case there is a BEFORE INSERT trigger that executes yet another
INSERT statement to put a copy of the row being inserted into some table.

The reason for the incorrect number of inserted rows is that the data
structure used for binding a positional argument to its actual values is
stored in THD (this is thd->bulk_param) and reused on processing every
INSERT statement. This leads to the actual values bound to the top-level
INSERT statement being consumed by the INSERT statements used in the
triggers' bodies.

To fix the issue, reset thd->bulk_param temporarily to the value nullptr
before invoking triggers and restore its value on finishing their
execution.
2024-06-25 09:52:52 +07:00
Marko Mäkelä
0076eb3d4e Merge 10.5 into 10.6 2024-06-24 13:09:47 +03:00
Dave Gosselin
db0c28eff8 MDEV-33746 Supply missing override markings
Find and fix missing virtual override markings.  Updates cmake
maintainer flags to include -Wsuggest-override and
-Winconsistent-missing-override.
2024-06-20 11:32:13 -04:00
Sergei Golubchik
7b53672c63 Merge branch '10.5' into 10.6 2024-05-08 20:06:00 +02:00
Sergei Golubchik
4f5dea43df cleanup
* remove dead code
* simplify the check for table->s->next_number_index
* misc
2024-05-05 21:37:08 +02:00
Nikita Malyavin
72429cad7f MDEV-30046 wrong row targeted with "insert ... on duplicate" and "replace"
When HA_DUPLICATE_POS is not supported, the row to replace was located
by ha_index_read_idx_map, which uses only the hash to navigate.

Thus, given a hash collision, it may choose an incorrect row.

handler::position would be correct and very convenient to use here.

dup_ref is already set by the handler independently of the engine
capabilities; when an extra lookup is made (for a long unique or
something else, for example WITHOUT OVERLAPS) such an error is indicated
by file->lookup_errkey != -1.
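The affected statement shapes look like this (hitting the bug needs two
distinct values whose long-unique hashes collide, which cannot easily be
constructed by hand):

  CREATE TABLE t1 (a INT, b TEXT, UNIQUE(b) USING HASH);
  REPLACE INTO t1 VALUES (1, 'value-1');
  INSERT INTO t1 VALUES (2, 'value-2')
    ON DUPLICATE KEY UPDATE a = 2;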
2024-05-05 18:38:34 +02:00
Monty
b5d65fc105 Optimize performance of my_bitmap
MDEV-33502 Slowdown when running nested statement with many partitions

This change was triggered to help some MariaDB users with close to
10000 bits in their bitmaps.

- Change the underlying storage to be 64 bit instead of 32 bit.
  - This reduces the number of loops needed to scan bitmaps.
  - This can cause some bitmaps to become 4 bytes larger.
- Ensure that all unused top bits are always 0 (this simplifies the code
  as the last 64-bit storage word is not a special case anymore).
- Use my_find_first_bit() to find the first set bit, which is much faster
  than scanning through things byte by byte and then bit by bit.

Other things:
- Added a bool to remember if my_bitmap_init() allocated the bitmap
  array. my_bitmap_free() will only free arrays it did allocate.
  This allowed me to remove setting 'bitmap=0' before calling
  my_bitmap_free() for cases where the bitmaps were allocated externally.
- my_bitmap_init() sets bitmap to 0 in case of failure.
- Added 'universal' asserts to most bitmap functions.
- Changed all remaining calls to bitmap_init() to my_bitmap_init()
  - to finish the change from 2014.
- Changed all usage of uint32 in my_bitmap.h to my_bitmap_map.
- Updated bitmap_copy() to handle bitmaps of different sizes.
- Removed const from bitmap_exists_intersection() as this caused casts
  on all usages.
- Removed the unused function bitmap_set_above().
- Renamed create_last_word_mask() to create_last_bit_mask() (to match
  name changes in my_bitmap.cc).
- Extended bitmap-t with tests for more bitmap functions.
2024-02-27 14:51:33 +02:00
Sergei Golubchik
3f6038bc51 Merge branch '10.5' into 10.6 2024-01-31 18:04:03 +01:00
Alexander Barkov
97fcafb9ec MDEV-32837 long unique does not work like unique key when using replace
write_record() when performing REPLACE has an optimization:
- if the unique violation happened in the last unique key, then do UPDATE
- otherwise, do DELETE+INSERT

This patch changes the way of detecting whether this optimization
can be applied when the table has long (hash-based) unique
constraints (i.e. UNIQUE..USING HASH).

Problem:

The old condition did not take into account that
TABLE_SHARE and TABLE see long uniques differently:
- TABLE_SHARE sees them as HA_KEY_ALG_LONG_HASH and HA_NOSAME
- TABLE sees them as usual non-unique indexes
So the old condition could erroneously decide that the UPDATE optimization
is possible when there are still some unique hash constraints in the table.

Fix:

- If the current key is a long unique, it now works as follows:

  UPDATE can be done if the current long unique is the last
  long unique, and there are no in-engine (normal) uniques.

- For in-engine uniques nothing changes, it still works as before:

  If the current key is an in-engine (normal) unique:
  UPDATE can be done if it is the last normal unique.
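A sketch of a table where the new condition matters (hypothetical
schema):

  CREATE TABLE t1 (
    a INT,
    b TEXT,
    UNIQUE(a),             -- in-engine unique
    UNIQUE(b) USING HASH   -- long unique
  );
  INSERT INTO t1 VALUES (1, 'x');
  -- violates the long unique; an in-engine unique also exists, so the
  -- UPDATE optimization must not be applied, DELETE+INSERT is used:
  REPLACE INTO t1 VALUES (2, 'x');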
2024-01-24 17:19:54 +04:00
Sergei Golubchik
e95bba9c58 Merge branch '10.5' into 10.6 2023-12-17 11:20:43 +01:00
Sergei Golubchik
98a39b0c91 Merge branch '10.4' into 10.5 2023-12-02 01:02:50 +01:00
Monty
acdb8b6779 Fixed crash in Delayed_insert::get_local_table()
This was a bug in my previous commit, found by buildbot
2023-11-28 15:31:44 +02:00
Monty
08e6431c8c Fixed memory leak introduced by a fix for MDEV-29932
The leaks are all 40 bytes and happen in this call stack when running
mtr vcol.vcol_syntax:

alloc_root()
...
Virtual_column_info::fix_and_check_exp()
...
Delayed_insert::get_local_table()

The problem was that one copied a MEM_ROOT from THD to a TABLE without
taking into account that new blocks would be allocated through the
TABLE memroot (and would thus be leaked).
In general, one should NEVER copy MEM_ROOT from one object to another
without clearing the copied memroot!

Fixed by copying, at the end of get_local_table(), all newly allocated
objects to client_thd->mem_root.

Other things:
- Removed references to MEM_ROOT::total_alloc that were wrongly left
  after a previous commit
2023-11-27 19:08:14 +02:00
Denis Protivensky
e39c497c80 MDEV-22232: Fix CTAS replay & retry in case it gets BF-aborted
- Add selected tables as shared keys for CTAS certification
- Set proper security context on the replayer thread
- Disallow CTAS command retry

Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
2023-11-21 08:02:23 +01:00
Aleksey Midenkov
f7552313d4 MDEV-29932 Invalid expr in cleanup_session_expr() upon INSERT DELAYED
There are two TABLE objects, one in each thread: the first one is
created in the delayed thread by Delayed_insert::open_and_lock_table(),
the second one is created in the connection thread by
Delayed_insert::get_local_table(), copied from the delayed thread's
table.

When the second table is copied, the copy-assignment operator copies
vcol_refix_list, which is already filled with an item from the delayed
thread. Then get_local_table() adds its own item. Thus both tables
contain the same list with two items, which is wrong. Then the
connection thread finishes and its item is freed. Then the delayed
thread tries to access it in vcol_cleanup_expr().

The fix just clears vcol_refix_list in the copied table.

Another problem is that the copied table contains the same mem_root; any
allocations on it will be invalid if the original table is freed (and
that is non-deterministic, as it is done in another thread). Since the
copied table is allocated in the connection THD and lives no longer than
thd->mem_root, we may assign its mem_root from thd->mem_root.

Third, it doesn't make sense to do open_and_lock_tables() on a NULL
pointer.
2023-11-10 15:46:15 +03:00
Oleksandr Byelkin
b83c379420 Merge branch '10.5' into 10.6 2023-11-08 15:57:05 +01:00
Oleksandr Byelkin
6cfd2ba397 Merge branch '10.4' into 10.5 2023-11-08 12:59:00 +01:00
Jan Lindström
f57deb314f MDEV-31660 : Assertion `client_state.transaction().active() in wsrep_append_key
At the moment we cannot support
wsrep_forced_binlog_format=[MIXED|STATEMENT]
during CREATE TABLE AS SELECT.
The statement will use ROW instead and give
a warning.
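Roughly:

  SET GLOBAL wsrep_forced_binlog_format = STATEMENT;
  -- the CTAS is replicated in ROW format and a warning is raised:
  CREATE TABLE t2 AS SELECT * FROM t1;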

Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
2023-09-29 12:54:04 +02:00