Commit Graph

3192 Commits

Author SHA1 Message Date
Sergei Golubchik
aab83aecdc Merge branch '11.8' into 12.0
main/statistics_json.result is updated for f8ba5ced55 (MDEV-36099)

The test uses 'delete from t1' in many places and then populates
the table again. The natural order of rows in a MyISAM table is well
defined and the test was implicitly relying on that.

Before f8ba5ced55, DELETE was deleting rows one by one, using
ha_myisam::delete_row(), because the connection was stuck in RBR mode.
This caused rows to be shown in reverse insertion order (because of
the deleted-rows link list).

MDEV-36099 fixes this bug and the server now correctly uses
ha_myisam::delete_all_rows(). This makes rows appear in the
insertion order, as expected.
2025-07-31 20:55:47 +02:00
Sergei Golubchik
b565b3e7e0 Merge branch '11.4' into 11.8 2025-07-28 21:29:29 +02:00
Sergei Golubchik
c4ed889b74 Merge branch '10.11' into 11.4 2025-07-28 19:40:10 +02:00
Sergei Golubchik
053f9bcb5b Merge branch '10.6' into 10.11 2025-07-28 18:06:31 +02:00
Sergei Golubchik
633417308f MDEV-37312 ASAN errors or assertion failure upon attempt to UPDATE FOR PORTION violating long unique under READ COMMITTED
in case of a long unique conflict, ha_write_row() used delete_row()
to remove the newly inserted row, and it used rnd_pos() to position
the cursor before deletion. This rnd_pos() was freeing and reallocating
blobs in record[0]. So when the code for FOR PORTION OF did
  store_record(record[2]);
  ha_write_row()
  restore_record(record[2]);
it ended up with blob pointers to freed memory.

Let's use lookup_handler for deletion.
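
A minimal sketch of the kind of statement that exercises this path
(table, period and column names are hypothetical):
  SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
  CREATE TABLE t1 (a INT, b BLOB, s DATE NOT NULL, e DATE NOT NULL,
                   PERIOD FOR p(s,e),
                   UNIQUE KEY(b) USING HASH) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1, 'x', '2000-01-01', '2010-01-01');
  -- the "leftover" portions are inserted via ha_write_row() and hit a
  -- long unique conflict, which is where the freed blob pointers were used
  UPDATE t1 FOR PORTION OF p FROM '2002-01-01' TO '2005-01-01' SET a=2;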
2025-07-26 10:54:28 +02:00
Sergei Golubchik
fb2f324f85 MDEV-37310 Non-debug failing assertion node->pcur->rel_pos == BTR_PCUR_ON upon violating long unique under READ-COMMITTED
let's disallow UPDATE IGNORE in READ COMMITTED when the table
has a UNIQUE constraint that is USING HASH or is WITHOUT OVERLAPS

This rarely-used combination should not block a release; it will be
fixed in MDEV-37233
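
A sketch of the now-disallowed combination (hypothetical table):
  SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
  CREATE TABLE t1 (a INT, b TEXT, UNIQUE KEY(b) USING HASH) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1,'x'),(2,'y');
  UPDATE IGNORE t1 SET b='x';  -- now rejected instead of hitting the InnoDB assertion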
2025-07-25 12:28:30 +02:00
Sergei Golubchik
a158bbb1a2 MDEV-36777 create vector table failed with VECTOR INDEX when innodb_force_primary_key=on
mark creating an index as an SQLCOM_CREATE_INDEX operation.

currently the only effect of this is to tell InnoDB that it's not
a "CREATE TABLE" as such, so not affected by innodb_force_primary_key
2025-07-21 10:24:14 +02:00
Sergei Golubchik
5622f3f5e8 MDEV-37268 HA_ERR_KEY_NOT_FOUND upon UPDATE or partitioned table with unique hash under READ-COMMITTED
followup for 9703c90712 (MDEV-37199 UNIQUE KEY USING HASH accepting duplicate records)

ha_partition can return HA_ERR_KEY_NOT_FOUND even in the middle
of the index scan
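
A sketch of the affected scenario, assuming a hypothetical partitioned
table whose unique hash key includes the partitioning column:
  SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
  CREATE TABLE t1 (a INT, b TEXT, UNIQUE KEY(a,b) USING HASH) ENGINE=InnoDB
    PARTITION BY HASH(a) PARTITIONS 4;
  INSERT INTO t1 VALUES (1,'x'),(2,'y');
  -- the post-write duplicate check scans via ha_partition, which may
  -- return HA_ERR_KEY_NOT_FOUND before the scan is really over
  UPDATE t1 SET b=CONCAT(b,'z');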
2025-07-21 08:59:08 +02:00
Sergei Golubchik
2b11a0e991 MDEV-37268 assert upon UPDATE or partitioned table with unique hash under READ-COMMITTED
followup for 9703c90712 (MDEV-37199 UNIQUE KEY USING HASH accepting duplicate records)

maintain the invariant that handler::ha_update_row()
is always invoked as handler::ha_update_row(record[0], record[1])
2025-07-21 08:59:08 +02:00
Sergei Golubchik
774039e410 MDEV-37268 ER_DUP_ENTRY upon REPLACE into table with unique hash under READ-COMMITTED
followup for 9703c90712 (MDEV-37199 UNIQUE KEY USING HASH accepting duplicate records)

when looking for long unique duplicates and the new row is already
inserted, we cannot simply "skip one conflict"; we must skip exactly
the new row and find a conflict that isn't the new row - otherwise
table->file->dup_ref can be set incorrectly and REPLACE won't work.
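
A sketch of the REPLACE case (hypothetical table): dup_ref must point at
the pre-existing conflicting row, not at the row REPLACE just wrote.
  SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
  CREATE TABLE t1 (a INT, b TEXT, UNIQUE KEY(b) USING HASH) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1,'x');
  REPLACE INTO t1 VALUES (2,'x');  -- must replace (1,'x'), not fail with ER_DUP_ENTRY
  SELECT * FROM t1;                -- expected: the single row (2,'x')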
2025-07-21 08:59:08 +02:00
Sergei Golubchik
3a2e1f87a1 MDEV-37268 ER_NOT_KEYFILE or assertion failure upon REPLACE into table with unique hash under READ-COMMITTED
followup for 9703c90712 (MDEV-37199 UNIQUE KEY USING HASH accepting duplicate records)

don't forget to rnd_init()/rnd_end() around rnd_pos()
2025-07-21 08:59:08 +02:00
Sergei Golubchik
cb7978a12d MDEV-36720 Possible memory leak on updating table with index without overlaps
when closing a lookup_handler, don't forget to close it in PSI too
2025-07-17 09:18:17 +02:00
Sergei Golubchik
9703c90712 MDEV-37199 UNIQUE KEY USING HASH accepting duplicate records
Server-level UNIQUE constraints (namely, WITHOUT OVERLAPS and USING HASH)
only worked with InnoDB in REPEATABLE READ isolation mode, when the
constraint was checked first and then the row was inserted or updated.
Gap locks prevented race conditions when a concurrent connection
could've also checked the constraint and inserted/updated a row
at the same time.

In READ COMMITTED there are no gap locks. To avoid race conditions,
we now check the constraint *after* the row operation. This is
enabled by the HA_CHECK_UNIQUE_AFTER_WRITE table flag that InnoDB
sets for READ COMMITTED transactions.

Checking the constraint after the row operation is more complex.
First, the constraint will see the current (inserted/updated) row,
and needs to skip it. Second, IGNORE operations become tricky,
as we need to revert the insert/update and continue statement execution.

write_row() (INSERT IGNORE) is reverted with delete_row(). Conveniently
it deletes the current row, that is, the last inserted row.

update_row(a,b) (UPDATE IGNORE) is reverted with a reversed update,
update_row(b,a). Conveniently, it updates the current row too.

Except in InnoDB when the PK is updated - in this case InnoDB internally
performs delete+insert, but does not move the cursor, so the "current"
row is the deleted one and the reverse update doesn't work.
This combination now throws an "unsupported" error and will
be fixed in MDEV-37233
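
A sketch of the race this addresses, assuming two connections and a
hypothetical table:
  CREATE TABLE t1 (a INT, b TEXT, UNIQUE KEY(b) USING HASH) ENGINE=InnoDB;
  -- connection 1:
  SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
  START TRANSACTION;
  INSERT INTO t1 VALUES (1,'x');
  -- connection 2: no gap locks at this isolation level, so a check done
  -- before the write could miss connection 1's uncommitted row
  SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
  START TRANSACTION;
  INSERT INTO t1 VALUES (2,'x');
  -- with HA_CHECK_UNIQUE_AFTER_WRITE the constraint is verified after the
  -- write, so committing both transactions can no longer leave duplicates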
2025-07-16 13:02:44 +02:00
Sergei Golubchik
2746c19a9c MDEV-37203 UBSAN: applying zero offset to null pointer in strings/ctype-uca.inl | my_uca_strnncollsp_onelevel_utf8mb4 | handler::check_duplicate_long_entries_update 2025-07-16 13:02:44 +02:00
Sergei Golubchik
d8c2362912 cleanup: long unique checks
consolidate and unify long unique checks.

fix a bug where an update of a long unique blob
was ignoring the prefix length
2025-07-16 13:02:44 +02:00
Sergei Golubchik
c27d78beb5 MDEV-36870 Spurious unrelated permission error when selecting from table with default that uses nextval(sequence)
Lots of different cases: SELECT, SELECT DEFAULT(),
UPDATE t SET x=DEFAULT, prepared statements,
opening of a table for the I_S, prelocking (so TL_WRITE),
insert with subquery (so SQLCOM_SELECT), etc.

Don't check NEXTVAL privileges in fix_fields() anymore; it cannot
possibly handle all the cases correctly. Make a special method
Item_func_nextval::check_access() for that and invoke it from

* fix_fields on explicit SELECT NEXTVAL()
  (but not if NEXTVAL() is used in a DEFAULT clause)
* when the DEFAULT bareword is used in, say, UPDATE t SET x=DEFAULT
  (but not if DEFAULT() itself is used in a DEFAULT clause)
* in CREATE TABLE
* in ALTER TABLE ALGORITHM=INPLACE (that doesn't go CREATE TABLE path)
* on INSERT

helpers
* Virtual_column_info::check_access() to walk the item tree and invoke
  Item::check_access()
* TABLE::check_sequence_privileges() to iterate default expressions
  and invoke Virtual_column_info::check_access()

also, single-table UPDATE in prepared statements now associates
value items with fields just as multi-update already did; this fixes the
case of PREPARE s "UPDATE t SET x=?"; EXECUTE s USING DEFAULT.
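
A sketch of the simplest affected case (hypothetical object names):
  CREATE SEQUENCE s1;
  CREATE TABLE t1 (id BIGINT DEFAULT NEXTVAL(s1), v INT);
  -- as a user who has SELECT on t1 but no privileges on s1:
  SELECT v FROM t1;               -- must not fail with a sequence permission error
  INSERT INTO t1 (v) VALUES (1);  -- this one still needs NEXTVAL access on s1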
2025-07-09 18:04:46 +02:00
Monty
2d5dfc47a9 Define error message for HA_ERR_INCOMPATIBLE_DEFINITION 2025-06-30 18:34:47 +03:00
Oleksandr Byelkin
dfcb5c91e0 Merge branch '11.8' into 12.0 2025-06-18 07:50:39 +02:00
Oleksandr Byelkin
a65f7dc71d Merge branch '11.4' into 11.8 2025-06-18 07:43:24 +02:00
Oleksandr Byelkin
89c7e2b9c7 Merge branch '10.11' into 11.4 2025-06-17 09:50:22 +02:00
Oleksandr Byelkin
28d6530571 Merge branch '10.6' into 10.11 2025-06-04 14:09:23 +02:00
Monty
ce4f83e6b9 MDEV-29157 SELECT using ror_merged scan fails with s3 tables
handler::clone() call did not work with read only tables like S3.
It gave a wrong error message (out of memory instead of a permission
error) and aborted the query.

The issue was that the clone call had a wrong parameter to ha_open().
This is now fixed. I also changed the clone call to provide the correct
error message if things fail.

This patch fixes an 'out of memory' error when using the S3 engine
for queries that could use multiple indexes together to find the matching
rows, like the following:
SELECT * FROM t1 WHERE key1 = 99 OR key2 = 2
2025-06-02 14:02:53 +03:00
Monty
22024da64e MDEV-36143 Row event replication with Aria does not honour BLOCK_COMMIT
This commit fixes a bug where Aria tables are used in
(master->slave1->slave2) and a backup is taken on slave2. In this case
it is possible that the replication position in the backup, stored in
mysql.gtid_slave_pos, will be wrong. This will lead to replication
errors if one is trying to use the backup as a new slave.

Analysis:
Replicated row events are committed with trans_commit_stmt() and
thd->transaction->all.ha_list != 0.
This means that backup_commit_lock is not taken for Aria tables,
which means the rows are committed and binary logged on the slave
under BLOCK_COMMIT, which should not happen.

This issue does not occur on the master as thd->transaction->all.ha_list
is == 0 under AUTO_COMMIT, which sets 'is_real_trans' and 'rw_trans'
which in turn causes backup_commit_lock to be taken.

Fixed by checking in ha_check_and_coalesce_trx_read_only() whether all
handlers support rollback and, if not, waiting for BLOCK_COMMIT also for
statement commit.
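
A sketch of what this guarantees on slave2 during a backup (the backup
tool drives these stages; shown here only to illustrate the lock):
  BACKUP STAGE START;
  BACKUP STAGE BLOCK_COMMIT;
  -- replicated Aria row events must now wait on backup_commit_lock here,
  -- so mysql.gtid_slave_pos in the backup matches the backed-up data
  BACKUP STAGE END;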
2025-06-02 14:02:53 +03:00
Oleksandr Byelkin
f1102da37a Merge branch '11.8' into 12.0 2025-05-22 09:22:55 +02:00
Vasilii Lakhin
40c5b62531 Fix remaining typos 2025-04-29 11:18:00 +10:00
Monty
ce8a74f235 MDEV-36425 Extend read_only to also block share locks and super user
The main purpose of this is to allow one to use the --read-only
option to ensure that no one can issue a query that can
block replication.

The --read-only option can now take 4 different values:
0  No read only (as before).
1  Blocks changes for users without the 'READ ONLY ADMIN'
   privilege (as before).
2  Additionally blocks LOCK TABLES and SELECT ... LOCK IN SHARE MODE
   for users without the 'READ ONLY ADMIN' privilege.
3  Additionally blocks 'READ_ONLY_ADMIN' users from all the
   previous statements.

read_only is changed to an enum and one can use the following
names for the lock levels:
OFF, ON, NO_LOCK, NO_LOCK_NO_ADMIN

To keep things compatible with older versions' config files, one can
still use the values FALSE and TRUE, which are mapped to OFF and ON.

The main visible changes are:
- 'show variables like "read_only"' now returns a string
   instead of a number.
- Error messages related to read_only violations now contain
  the current value of read_only.
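
For example, using the level names above:
  SET GLOBAL read_only=NO_LOCK;           -- also blocks LOCK TABLES / SELECT ... LOCK IN SHARE MODE
  SET GLOBAL read_only=NO_LOCK_NO_ADMIN;  -- also blocks 'READ ONLY ADMIN' users
  SHOW VARIABLES LIKE 'read_only';        -- reports the level name, not 0/1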

Other things:
- is_read_only_ctx() renamed to check_read_only_with_error()
- Moved TL_READ_SKIP_LOCKED to its logical place

Reviewed by: Sergei Golubchik <serg@mariadb.org>
2025-04-28 12:59:39 +03:00
Aleksey Midenkov
1f85eeeb53 MDEV-25292 Refactoring: moved select_field_count into Alter_info.
There is a need in MDEV-25292 to have both C_ALTER_TABLE and
select_field_count in one call. Semantically, creation mode and field
count are two different things. Packing the creation mode (negative
constants) and the field count (a positive variable) into one parameter
seems to be a lazy hack to avoid adding a second parameter.

select_count does not make sense without alter_info->create_list, so
the natural way is to hold it in Alter_info too. select_count is now
stored in member select_field_count.

Merged and updated by: Monty
2025-04-28 12:59:39 +03:00
Yuchen Pei
6f8ef26885 MDEV-36032 Check whether a table can be a sequence when ALTERed with SEQUENCE=1
To check the rows, the table needs to be opened. To that end, and like
MDEV-36038, we force COPY algorithm on ALTER TABLE ... SEQUENCE=1.
This also results in checking the sequence state / metadata.

The table structure was already validated before this patch.
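
A sketch of the round trip this affects (hypothetical sequence name):
  CREATE SEQUENCE s1;
  ALTER TABLE s1 SEQUENCE=0;  -- demote the sequence to a plain table
  ALTER TABLE s1 SEQUENCE=1;  -- promoting it back now forces ALGORITHM=COPY,
                              -- opening the table and checking its single row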
2025-04-29 16:28:01 +10:00
Sergei Golubchik
237e24497b Merge remote-tracking branch 'github/bb-11.4-release' into bb-11.8-serg 2025-04-27 19:40:00 +02:00
Oleksandr Byelkin
a8d4642375 Merge branch '10.11' into 11.4 2025-04-26 10:53:02 +02:00
Sergei Golubchik
63a69ab936 cleanup: remove automatic conversion char* -> Lex_ident
considered harmful, see e.g. changes in check_period_fields()
2025-04-22 12:03:05 +02:00
Sergei Golubchik
9b824e62d4 Merge branch '11.8' into main 2025-04-18 17:11:01 +02:00
Sergey Vojtovich
29823f3b96 MDEV-35152 DATA/INDEX DIRECTORY options are ignored for vector index
Let high-level indexes honor INDEX DIRECTORY table option
2025-04-18 09:41:24 +02:00
Sergei Golubchik
7db60533c7 MDEV-36188 assert with vector index and very long PK
limit PK length if the table has a vector index
2025-04-18 09:41:23 +02:00
Julius Goryavsky
1a013cea95 Merge branch '10.6' into '10.11' 2025-04-16 03:34:40 +02:00
Julius Goryavsky
88dfa6bcee Merge branch '10.5' into '10.6' 2025-04-15 01:49:48 +02:00
Nikita Malyavin
e6ea5d568c MDEV-36507 fix dbug_print_row concurrent access
7544fd4cae had to make use of a static array to avoid memory
use-after-free or a leak.

Instead, let us make a function returning String; this is the only way
to automatically manage the memory after the function returns.

To make it all correct, a move constructor is added. Normally, it is
expected that the constructor will be elided upon return of an object
by value, but if something goes differently, or -fno-elide-constructors
is used, we can have a problem. So this way a move constructor avoids
copy elision-related UB.

dbug_print_row returning char* is still there for convenient use in a
debugger.
2025-04-11 13:42:53 +02:00
Andrei Elkin
c06c36218a MDEV-35506 commit policy of one-phase-commit even at errored-out binlogging leads to assert
Currently, execution of commit in one phase proceeds to commit by
the engines even when binlog_commit() does not succeed.

There are two issues with that:

1. the absence of binlog_rollback() or of the lower-level
`binlog_cache_data::reset()` in the execution that follows the
failing statement will eventually raise an assert on a non-empty binlog
cache; see the MDEV description:

  # --error assert(sql/log.cc:1712(binlog_close_connection))
  # --disconnect default

2. engines, including ones that are rollback capable, commit in this
particular error situation.

Both effects can be observed with a new mtr test that would fail when run on
a BASE of this commit.
The BASE has to include the MDEV-35207 et al. fixes because the test is written
with CREATE-TABLE-SELECTs.

A new test file verifies the new rollback behaviour, including
cases with a side effect of a modified non-transactional engine, which
exposes another issue, MDEV-36027 (TODO: fix).
2025-04-03 20:13:10 +03:00
Marko Mäkelä
bb1d88b6dc Merge 11.4 into 11.8 2025-04-02 14:07:01 +03:00
Julius Goryavsky
74f0b99edf Merge branch '10.6' into '10.11' 2025-04-02 06:33:39 +02:00
Julius Goryavsky
b983a911e9 galera mtr tests: synchronization between branches and editions 2025-04-02 04:50:11 +02:00
Julius Goryavsky
03c31ab099 Merge branch '10.5' into '10.6' 2025-04-02 04:43:24 +02:00
Julius Goryavsky
41565615c5 galera: synchronization changes to stop random test failures 2025-04-02 04:29:34 +02:00
Marko Mäkelä
f5bd250f5b Merge 10.11 into 11.4 2025-03-28 13:55:21 +02:00
Julius Goryavsky
c61345169a galera tests: synchronization after merge 2025-03-28 02:53:59 +01:00
Marko Mäkelä
ab0f2a00b6 Merge 10.6 into 10.11 2025-03-27 08:01:47 +02:00
Dave Gosselin
7e4233746e MDEV-34413 Index Condition Pushdown for reverse ordered scans
Allows index condition pushdown for reverse ordered scans, a previously
disabled feature due to poor performance.  This patch adds a new
API to the handler class called set_end_range which allows callers to
tell the handler what the end of the index range will be when scanning.
Combined with a pushed index condition, the handler can scan the index
efficiently and not read beyond the end of the given range.  When
checking if the pushed index condition matches, the handler will also
check if scanning has reached the end of the provided range and stop if
so.
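
A sketch of a query shape that can benefit (hypothetical table; the actual
plan depends on the optimizer):
  CREATE TABLE t1 (a INT, b INT, c INT, KEY k(a,b)) ENGINE=InnoDB;
  -- descending range scan on k: the condition on b is pushed down (ICP)
  -- and, with the end of the range known, the engine stops at a=10
  -- instead of walking on toward the start of the index
  SELECT c FROM t1
  WHERE a BETWEEN 10 AND 20 AND b % 2 = 0
  ORDER BY a DESC, b DESC;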

If we instead only enabled ICP for reverse ordered scans without
also calling this new API, then the handler would perform unnecessary
index condition checks.  In fact this would continue until the end of
the index is reached.

These changes are agnostic of storage engine.  That is, any storage
engine that supports index condition pushdown will inherit this new
behavior as it is implemented in the SQL and storage engine
API layers.

The partitioned tables storage meta-engine (ha_partition) adds an
override of set_end_range which recursively calls set_end_range on its
child storage engine (handler) implementations.

This commit updates the test made in an earlier commit to show that
ICP matches happen for the reverse ordered case.

This patch is based on changes written by Olav Sandstaa in
MySQL commit da1d92fd46071cd86de61058b6ea39fd9affcd87
2025-03-19 16:03:29 -04:00
Vasilii Lakhin
717c12de0e Fix typos in C comments inside sql/ 2025-03-14 12:08:56 +04:00
Monty
cc4d9200c4 MDEV-33813 ERROR 1021 (HY000): Disk full (./org/test1.MAI); waiting for someone to free some space... (errno: 28 "No space left on device")
The problem with MariaDB waiting was fixed earlier.
However the server still gives the old error, in case of disk full,
that includes "waiting for someone to free some space" even if
there is no wait.

This commit changes the error message for the non waiting case to:
Disk got full writing 'db.table' (Errcode: 28 "No space left on device")

Disk got full writing 'test.t1' (Errcode: 28 "No space left on device")
2025-03-06 09:40:55 +02:00
Julius Goryavsky
15139c88a8 Merge branch '10.5' into '10.6' 2025-03-05 01:54:40 +01:00