The main purpose of this is to allow one to use the --read-only
option to ensure that no one can issue a query that can
block replication.
The --read-only option can now take 4 different values:
0 No read only (as before).
1 Blocks changes for users without the 'READ_ONLY ADMIN'
privilege (as before).
2 Additionally blocks LOCK TABLES and SELECT ... IN SHARE MODE
for users without the 'READ_ONLY ADMIN' privilege.
3 Additionally blocks also 'READ_ONLY ADMIN' users from all the
previous statements.
read_only is changed to an enum and one can use the following
names for the lock levels:
OFF, ON, NO_LOCK, NO_LOCK_NO_ADMIN
To keep things compatible with config files from older versions, one
can still use the values FALSE and TRUE, which are mapped to OFF and ON.
The main visible changes are:
- 'show variables like "read_only"' now returns a string
instead of a number.
- Error messages related to read_only violations now contain
the current value of read_only.
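For illustration, a minimal sketch of the new behavior (the exact
output shape is an assumption):
SET GLOBAL read_only=NO_LOCK;
SHOW VARIABLES LIKE "read_only";
-- Variable_name  Value
-- read_only      NO_LOCK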
Other things:
- is_read_only_ctx() renamed to check_read_only_with_error()
- Moved TL_READ_SKIP_LOCKED to its logical place
Reviewed by: Sergei Golubchik <serg@mariadb.org>
REPAIR of InnoDB tables was logging both ALTER TABLE and REPAIR to the
ddl log. The ALTER TABLE entry contained the new tableid and the
REPAIR entry, wrongly, contained the old rowid.
Now only REPAIR is logged.
ddl.log changes:
REPAIR TABLE and OPTIMIZE TABLE that are done through ALTER TABLE will
now contain the old and new table id. If not done through ALTER TABLE,
only the current rowid will be shown (as before).
MDEV-36563 Assertion `!mysql_bin_log.is_open()' failed in
THD::mark_tmp_table_as_free_for_reuse
The purpose of this commit is to ensure that creation and changes of
temporary tables are properly and predictably logged to the binary
log. It also fixes some bugs where ROW logging was used in MIXED mode,
when STATEMENT would be a better (and expected) choice.
In this commit message, STATEMENT stands for logging to the binary log
in STATEMENT format, MIXED stands for the MIXED binlog format and ROW
for the ROW binlog format.
New rules for logging of temporary tables
- CREATE of temporary tables is now by default binlogged only if
STATEMENT binlog format is used. If it is binlogged, 1 is stored in
TABLE_SHARE->table_creation_was_logged. The user can change this
behavior by setting create_temporary_table_binlog_formats to
MIXED,STATEMENT, in which case the create is logged in statement
format also in MIXED mode (as before); see the sketch after this list.
- Changes to temporary tables are binlogged if and only if
the CREATE was logged. The logging happens under STATEMENT or MIXED.
If binlog_format=ROW, temporary table changes are not binlogged. A
temporary table that is changed under ROW is marked as 'not up to
date in binlog' and no future row changes are logged. Any future
statement using this temporary table will be forced to row-log the
other tables it uses.
- DROP TEMPORARY is binlogged only if the CREATE was binlogged.
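For example, a sketch of restoring the old MIXED-mode behavior for
CREATE (the session shown is hypothetical):
SET GLOBAL create_temporary_table_binlog_formats='MIXED,STATEMENT';
-- With binlog_format=MIXED, this CREATE is again logged in
-- statement format (the pre-change behavior):
CREATE TEMPORARY TABLE tmp1 (a INT);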
Changes done:
- Row logging is forced for any statement using temporary tables that
are not up to date in the binary log.
(Before, row logging was forced if the user had a temporary table.)
- If there are any changes to the temporary table that are not binlogged,
the table is marked as not up to date.
- TABLE_SHARE->table_creation_was_logged has a new definition for
temporary tables:
0 Table creation was not logged to the binary log.
1 Table creation was logged to the binary log and the table is up to date.
2 Table creation was logged to the binary log but some changes were
not logged to the binary log.
'Not up to date in the binary log' is defined as value 0 or 2.
- If a multi-table-update or multi-table-delete fails then
all updated temporary tables are marked as not up to date.
- Enforce row logging if the query is using temporary tables
that are not up to date.
(Before, row logging was enforced if the user had any
temporary tables.)
- When dropping temporary tables, use IF EXISTS. This ensures
that the slave will not stop if it had crashed and lost the
temporary tables.
- Removed the comment and version from the DROP /*!4000 TEMPORARY..
statement generated when a connection with open temporary tables
closes. Added 'generated by server' at the end of the DROP.
Bugs fixed:
- When using temporary tables with commands that forced row-based
logging, like INSERT INTO temporary_table VALUES (UUID()), the
statement was never logged, which caused the temporary table to be
inconsistent on master and slave.
- The binlog format used is now clearly defined. It now depends only
on the current binlog_format and the tables used.
Before, it depended on whether the user had ANY temporary tables and
on the state of 'current_stmt_binlog_format' set by previous queries.
This also caused temporary tables to be logged to the binary log in
some cases.
- CREATE TABLE t1 LIKE not_logged_temporary_table caused replication
to stop.
- Renames of non-binlogged temporary tables were written to the binary
log, which caused replication to stop.
Changes in behavior:
- By default create_temporary_table_binlog_formats=STATEMENT, which
means that CREATE TEMPORARY is not logged to binary log under MIXED
binary logging. This can be changed by setting
create_temporary_table_binlog_formats to MIXED,STATEMENT.
- Using temporary tables that were not logged to the binary log will
cause any query using them to update other tables to be logged in
ROW format. Before, all queries were logged in ROW format if the user
had any temporary tables, even if they were not used by the query.
- Generated DROP TEMPORARY TABLE is now always using IF EXISTS and
has a "generated by server" comment in the binary log.
The consequence of the above is that manipulating a lot of rows
through temporary tables will by default be slower in mixed mode.
For example:
BEGIN;
CREATE TEMPORARY TABLE tmp AS SELECT a, b, c FROM
large_table1 JOIN large_table2 ON ...;
INSERT INTO other_table SELECT b, c FROM tmp WHERE a <100;
DROP TEMPORARY TABLE tmp;
COMMIT;
By default this will create a huge entry in the binary log, compared
to just a few hundred bytes in statement mode. However, the change in
this commit makes usage of temporary tables more reliable and
predictable and is thus worth it. Using statement mode or setting
create_temporary_table_binlog_formats can be used to avoid this issue.
This is needed to make it easy for users to automatically ignore long
char and varchars when using ANALYZE TABLE PERSISTENT.
These fields can cause problems as they will consume
'CHARACTERS * MAX_CHARACTER_LENGTH * 2 * number_of_rows' space on disk
during analyze, which can easily be much bigger than the analyzed table.
This commit adds a new variable, analyze_max_length, with a default
value of 4G. Any field that is bigger than this in bytes will be
ignored by ANALYZE TABLE PERSISTENT unless it is specified in FOR
COLUMNS().
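For example, a sketch (table and column names are made up; session
scope is assumed):
SET SESSION analyze_max_length=1024;
-- char/varchar fields wider than 1024 bytes are now skipped:
ANALYZE TABLE t1 PERSISTENT FOR ALL;
-- but naming a column explicitly still includes it:
ANALYZE TABLE t1 PERSISTENT FOR COLUMNS (long_text_col) INDEXES ALL;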
While doing this patch, I noticed that we do not skip GEOMETRY columns
in ANALYZE TABLE, like we do with BLOB. This should be fixed when
merging to the 'main' branch. At the same time we should add a
reasonable default value for analyze_max_length, probably 1024, like
we have for max_sort_length.
There were two issues with the test:
1. A race between race_condition.inc and a status variable, where the
status variable Rpl_semi_sync_master_status could be ON before the
semi-sync connection finished establishing, resulting in
Rpl_semi_sync_master_clients showing 0 (instead of 1). To fix this,
we instead simply wait for Rpl_semi_sync_master_clients to be 1
before proceeding.
2. Another race between race_condition.inc and a status variable,
where the wait_condition waited for a processlist command of
'BINLOG DUMP' to disappear to infer that the binlog dump thread was
killed, after which we verified that the semi-sync state was correct
using status variables. However, the 'BINLOG DUMP' command is
overridden with a killed status before the semi-sync tear-down
happens, and thereby we could see invalid values. The fix for
this is to change the wait_condition to instead wait until the
connection with the replication user is gone, because that
connection stays through the whole binlog dump thread tear-down
life-cycle (see the sketch below).
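A sketch of the revised wait (the replication user name is
hypothetical):
--------------------------------------------------------------------------
let $wait_condition=
select count(*) = 0 from information_schema.processlist
where user = 'rpl_user';
--source include/wait_condition.inc
--------------------------------------------------------------------------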
To check the rows, the table needs to be opened. To that end, and like
MDEV-36038, we force COPY algorithm on ALTER TABLE ... SEQUENCE=1.
This also results in checking the sequence state / metadata.
The table structure was already validated before this patch.
Add two new variables:
file_key_management_digest={sha1|sha224|sha256|sha384|sha512}
This specifies the digest function to use in the key derivation of the
key used for decryption of the keyfile.
file_key_management_use_pbkdf2=N
This specifies whether pbkdf2 is used in the key derivation, and if
so (N != 0), how many iterations.
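A hypothetical my.cnf fragment using both options (values chosen for
illustration only):
[mariadbd]
file_key_management_digest = sha256
file_key_management_use_pbkdf2 = 10000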
Extend loose index scan to support descending indexes.
This is achieved by removing a block that skipped creating loose index
scan plans for descending indexes, as well as by generalising the
execution of such plans.
The generalisation applies to all levels looking for min/max in loose
index scan. In the highest level (get_next), generalise min and max to
first and last, so that it still proceeds in the direction agreeing
with the index parity. In the lower levels, combine next_min and
next_max methods into next_min_max, and combine next_min_in_range and
next_max_in_range into next_min_max_in_range. This retains existing
logic of these functions and reduces code duplication, while allowing
handling of all four combinations (min, max) x (asc index, desc
index).
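For instance, a query shape that can now use loose index scan on a
descending index (a sketch; table and key names are made up):
CREATE TABLE t1 (a INT, b INT, KEY k1 (a DESC, b DESC));
SELECT a, MIN(b), MAX(b) FROM t1 GROUP BY a;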
Get rid of the need for materialization for usual INSERT (cache
results in Item_cache* if needed):
- subqueries in VALUES do not see new records in the table we are
inserting into
- subqueries in RETURNING are prohibited from using the table we are
inserting into
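A sketch of the described VALUES semantics (made-up table):
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1);
-- The subquery sees only the pre-statement contents of t1,
-- not the rows added by this INSERT:
INSERT INTO t1 VALUES ((SELECT MAX(a) FROM t1) + 1),
                      ((SELECT MAX(a) FROM t1) + 2);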
According to the buildbot cross-reference, atomic.alter_table is
failing ~10 times a day due to the 900-second test case timeout. Split
it into two tests: atomic.alter_table_myisam and
atomic.alter_table_innodb.
It should reduce failure frequency down to once a day or so, similarly
to atomic.alter_table_aria test.
Disabling InnoDB for MyISAM/Aria/RocksDB tests makes them 20-35% faster.
When a replica stops an established semi-sync connection, it is
supposed to kill the corresponding binlog dump thread on the primary
server. However, when connections are configured to use SSL, this new
connection created by the replica to kill the dump thread doesn't have
any logic to configure SSL options, and thereby the connection can't be
made, and the dump thread will never be killed.
This patch adds logic to configure the semi-sync kill connection with
SSL. The existing logic to set up the connection options for the
regular connection was extracted into a function that the semi-sync
kill connection invokes.
Co-author: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
Test changes only. The idea of the test is to exercise two concurrent
non-conflicting DDL statements. Therefore, use debug sync to
really execute the two DDL statements as concurrently as Galera
allows.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
CHANGE MASTER can set `Master_Retry_Count` to be lower than
`Connects_Tried`, which will look strange to those unaware of that
CHANGE. Now, setting `Master_Retry_Count` also resets
`Connects_Tried` to 0.
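For example (the retry count value is arbitrary):
CHANGE MASTER TO master_retry_count=100;
-- SHOW SLAVE STATUS now reports Connects_Tried as 0 again.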
Reviewed-by: Susil Behera <susil.behera@mariadb.com>
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
The solution is to check whether the transaction which modified a
record is still active in lock_clust_rec_read_check_and_lock(). If
yes, then just request a lock. If no, then, depending on whether the
current transaction's read view can see the changes, either return
DB_RECORD_CHANGED or request a lock.
We can do the check in lock_clust_rec_read_check_and_lock() because
the transaction tries to set a lock on the record the cursor points to
after the transaction is resumed and the cursor position is restored.
If the lock already exists, we don't request the lock again. But for
the current commit it's important that
lock_clust_rec_read_check_and_lock() is invoked again for the same
record, so we can do the check again after the transaction which
modified the record has been committed or rolled back.
MDEV-33802 (4aa9291) is partially reverted. If some transaction holds
an implicit lock on some record and a transaction with snapshot
isolation level requests a conflicting lock on the same record, it
should be blocked instead of getting DB_RECORD_CHANGED, to retain the
ability to continue execution when the implicit lock owner is rolled
back.
The construction
--------------------------------------------------------------------------
let $wait_condition=
select count(*) = 1 from information_schema.processlist
where state = 'Updating' and info = 'UPDATE t SET b = 2 WHERE a';
--source include/wait_condition.inc
--------------------------------------------------------------------------
is not reliable enough to make sure the transaction is blocked in the
test case; the test failed sporadically with
--------------------------------------------------------------------------
./mtr --max-test-fail=1 --parallel=96 lock_isolation{,,,,,,,}{,,,}{,,} \
--repeat=500
--------------------------------------------------------------------------
command. That's why it was replaced with debug sync-points.
Reviewed by: Marko Mäkelä
- InnoDB fails to check whether the table is being dropped or evicted
while acquiring the MDL for the table when the table open operation
mode is DICT_TABLE_OP_OPEN_ONLY_IF_CACHED. This is caused by
commit 337bf8ac4b (MDEV-36122).
Fix:
===
dict_acquire_mdl_shared(): If the table is evicted or dropped while
the table operation mode is DICT_TABLE_OP_OPEN_ONLY_IF_CACHED, then
return nullptr.
Improve the `multi_source.connects_tried` feature test:
* Replace the main course from
*checking `Connects_Tried` after sleeping a specific time* to
*checking the duration after reaching a minimum `Connects_Tried`*
* Not using a specific SLEEP eliminates the possibility (of failure)
that the test awakens before the `Connects_Tried` counter increments.
* In replacement-based tests of conventional views (SHOW & variants),
assert the parallel values without distinguishing which is which
* There were rare cases where both values end up equal (even though
one was configured to tick more frequently than the other).
* Comment touchups
Reviewed-by: Kristian Nielsen <knielsen@knielsen-hq.org>
MDEV-27037 added functionality to warn users when a specified
stop-position or stop-datetime was never reached. It only worked for
local files though. The patch for MDEV-35528 changed the
stop-datetime implementation to also provide the warning
when using --read-from-remote-server. The PR for that MDEV (#3670)
was limited to only the stop-datetime field.
This patch updates the --stop-position warning to also work with
--read-from-remote-server.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
Jimmy Hu <jimmy.hu@mariadb.com>
Added the capability to create a trigger associated with several
trigger events. For this goal, the syntax of the CREATE TRIGGER
statement was extended to support the syntax structure
{ event [ OR ... ] } for the `trigger_event` clause. Since one trigger
will be able to handle several events, a way must be provided to
determine what kind of event is being handled on execution of the
trigger. For this goal, support for the clauses INSERTING, UPDATING
and DELETING was added by this patch. These clauses can be used inside
a trigger body to detect what kind of trigger action is currently
being processed, using the following boilerplate:
IF INSERTING THEN ...
ELSIF UPDATING THEN ...
ELSIF DELETING THEN ...
In case one of the clauses INSERTING, UPDATING, DELETING specified in
a trigger's body does not match the trigger event type, the error
ER_INCOMPATIBLE_EVENT_FLAG is emitted.
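A fuller sketch of the extended syntax (table, trigger and audit
table names are made up):
CREATE TRIGGER t1_audit
AFTER INSERT OR UPDATE OR DELETE ON t1
FOR EACH ROW
BEGIN
  IF INSERTING THEN
    INSERT INTO audit VALUES ('INSERT', NEW.a);
  ELSIF UPDATING THEN
    INSERT INTO audit VALUES ('UPDATE', NEW.a);
  ELSIF DELETING THEN
    INSERT INTO audit VALUES ('DELETE', OLD.a);
  END IF;
END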
After this patch is pushed, one Trigger object will be associated with
several trigger events. This means that the array
Table_triggers_list::triggers
can contain several pointers to the same Trigger object in array members
corresponding to different events. Moreover, support of several trigger
events for the same trigger requires that the data members `next` and
`action_order` of the Trigger class be converted to arrays to store the
related information per trigger event base.
The ability to specify the same trigger for different event types
results in the necessity of handling invalid cases on execution of a
multi-event trigger, when the OLD or NEW qualifier doesn't match the
current event type against which the trigger is run. The OLD clause
should produce the NULL value for the INSERT event, whereas the NEW
clause should produce the NULL value for the DELETE event.
Add the ssl_passphrase server parameter, which works similarly
to the --passout/--passin openssl command line parameters.
The pass phrase value can be formatted as follows:
- pass:password
Provides the actual password after the pass: prefix.
- env:var
Obtains the password from the environment variable 'var'.
- file:pathname
Reads the password from the specified file pathname.
Only the first line, up to the newline character, is read from the stream.
If ssl_passphrase was set, SHOW VARIABLES will show "file:", "env:" or
"pass:" (but won't reveal sensitive data).
This patch adds support for SYS_REFCURSOR (a weakly typed cursor)
for both sql_mode=ORACLE and sql_mode=DEFAULT.
Works as a regular stored routine variable, parameter and return value:
- can be passed as an IN parameter to stored functions and procedures
- can be passed as an INOUT and OUT parameter to stored procedures
- can be returned from a stored function
Note, strongly typed REF CURSOR will be added separately.
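A usage sketch under sql_mode=ORACLE (the procedure body is made up):
SET sql_mode=ORACLE;
DELIMITER $$
CREATE PROCEDURE p1 AS
  c SYS_REFCURSOR;
  v INT;
BEGIN
  OPEN c FOR SELECT 1;
  FETCH c INTO v;
  CLOSE c;
END;
$$
DELIMITER ;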
Note, to make dependencies easier to maintain, some parts of
sql_class.h and item.h were moved to new header files:
- select_results.h:
class select_result_sink
class select_result
class select_result_interceptor
- sp_cursor.h:
class sp_cursor_statistics
class sp_cursor
- sp_rcontext_handler.h
class Sp_rcontext_handler and its descendants
The implementation consists of the following parts:
- A new class sp_cursor_array deriving from Dynamic_array
- A new class Statement_rcontext which contains data shared
between sub-statements of a compound statement.
It has a member m_statement_cursors of the sp_cursor_array data type,
as well as an open cursor counter. THD inherits from Statement_rcontext.
- A new data type handler Type_handler_sys_refcursor in plugins/type_cursor/
It is designed to store uint16 references -
positions of the cursor in THD::m_statement_cursors.
- Type_handler_sys_refcursor suppresses some derived numeric features.
When a SYS_REFCURSOR variable is used as an integer an error is raised.
- A new abstract class sp_instr_fetch_cursor. It's needed to share
the common code between "OPEN cur" (for static cursors) and
"OPEN cur FOR stmt" (for SYS_REFCURSORs).
- New sp_instr classes:
* sp_instr_copen_by_ref - OPEN sys_ref_cursor FOR stmt;
* sp_instr_cfetch_by_ref - FETCH sys_ref_cursor INTO targets;
* sp_instr_cclose_by_ref - CLOSE sys_ref_cursor;
* sp_instr_destruct_variable - to destruct SYS_REFCURSOR variables when
the execution goes out of the BEGIN..END block
where SYS_REFCURSOR variables are declared.
- New methods in LEX:
* sp_open_cursor_for_stmt - handles "OPEN sys_ref_cursor FOR stmt".
* sp_add_instr_fetch_cursor - "FETCH cur INTO targets" for both
static cursors and SYS_REFCURSORs.
* sp_close - handles "CLOSE cur" both for static cursors and SYS_REFCURSORs.
- Changes in cursor functions to handle both static cursors and SYS_REFCURSORs:
* Item_func_cursor_isopen
* Item_func_cursor_found
* Item_func_cursor_notfound
* Item_func_cursor_rowcount
- A new system variable @@max_open_cursors - to limit the number
of cursors (static and SYS_REFCURSORs) opened at the same time.
Its allowed range is [0-65536], with 50 by default.
- A new virtual method Type_handler::can_return_bool() telling
whether calling item->val_bool() is allowed for Items of this data
type, or whether the "Illegal parameter for operation" error should
be raised at fix_fields() time.
- New methods in Sp_rcontext_handler:
* get_cursor()
* get_cursor_by_ref()
- A new class Sp_rcontext_handler_statement to handle top level statement
wide cursors which are shared by all substatements.
- A new virtual method expr_event_handler() in classes Item and Field.
It's needed to close (and make available for a new OPEN)
unused THD::m_statement_cursors elements which do not have any references
any more. This can happen at various moments in time, e.g.:
* after evaluating the parameters of an SQL routine
* after assigning a cursor expression into a SYS_REFCURSOR variable
* when leaving a BEGIN..END block with SYS_REFCURSOR variables
* after setting OUT/INOUT routine actual parameters from formal
parameters.
consistently issue a
Note 1618 DATA DIRECTORY option ignored
Note 1618 INDEX DIRECTORY option ignored
in archive/csv/innodb/rocksdb whenever an option is ignored.
Note that csv doesn't say "INDEX DIRECTORY option ignored"
because it does not create index files at all anywhere.
Other engines don't say "INDEX DIRECTORY option ignored"
if the table has no indexes.
Additionally, InnoDB doesn't say that if INDEX DIRECTORY is
the same as DATA DIRECTORY, because in that case indexes are
technically stored in the INDEX DIRECTORY.
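For example (a sketch; paths are made up — InnoDB supports DATA
DIRECTORY but ignores a differing INDEX DIRECTORY):
CREATE TABLE t1 (a INT, KEY(a)) ENGINE=InnoDB
DATA DIRECTORY='/tmp/data' INDEX DIRECTORY='/tmp/index';
SHOW WARNINGS;
-- Note 1618 INDEX DIRECTORY option ignored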
collateral fix: use strmake to zero-terminate the string
is_package_public_routine() ignores errors that might've
happened during its execution and does not return an error status
to the caller. Thus there must be no errors during its execution.
MTR's internal check of the test case 'multi_source.master_info_file' failed.
...
-Master_SSL_Verify_Server_Cert No
+Master_SSL_Verify_Server_Cert Yes
also simplify replace/eval pattern
Under `@@rpl_semi_sync_master_wait_no_slave = 0`,
when `rpl_semi_sync_master_clients` decrements to zero, the primary
reverts to async replication. This code did not check whether Semi-
Sync is still globally enabled or not as it didn’t matter before.
However, after MDEV-33551 (#3089) split the transactions' ACK condition
variables to per-transaction, this function now needs Semi-Sync's
transaction tracker to unblock these condition variables in batch,
but this tracker is `NULL` when Semi-Sync Primary is disabled.
Co-authored-by: Kristian Nielsen <knielsen@knielsen-hq.org>