Modified the bug fix for MDEV-36554 by replacing the general
mechanism for disabling assertions in InnoDB code with a solution
specific to this rollback issue.
The applier sets a flag in the applier thread for the duration of the
local rollback. This flag is used in InnoDB code to detect a local
rollback.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
This bug fix prevents a debug assertion failure when the Galera
apply-retry feature is enabled. This patch also introduces a general
mechanism for temporarily disabling some debug assertions in InnoDB
code.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
- Checking that the key expression is compatible with the INDEX BY data type
for assignment in expressions:
assoc_array_variable(key_expr)
assoc_array_variable(key_expr).field
in all contexts: SELECT, assignment target, INTO target.
Raising an error in case it's not compatible.
- Disallowing non-constant expressions as a key,
as the key is evaluated at fix_fields() time.
- Disallowing stored functions as a key:
assoc_array(stored_function())
assoc_array(stored_function()).field
The underlying MariaDB code is not ready to call a stored function
at fix_fields() time. This will be fixed in a separate MDEV.
- Removing Assoc_array_data's move constructor.
Using the usual constructor instead.
- Setting m_key.thread_specific and m_value.thread_specific to true
in the Assoc_array_data constructor. This is needed to get assoc array
element data counted by the @@session.memory_used status variable.
Adding DBUG_ASSERTs to make sure the thread_specific flag never
disappears in Assoc_array_data members.
- Removing my_free(item) from Field_assoc_array::element_by_key.
It was a leftover from an earlier patch version.
In the current patch version all Items behind an assoc array are
created on a mem_root. It's wrong to use my_free() with them.
- Adding a helper method Field_assoc_array::assoc_tree_search()
- Fixing assoc_array_var.delete() to work as a procedure
rather than a function. It does not need SELECT/DO any more.
- Fixing the crash in a few ctype_xxx tests, caused by the grammar change.
- Fixing compilation failure on Windows
- Adding a new method LEX::set_field_type_udt_or_typedef()
and removing duplicate code from sql_yacc.yy
- Renaming the grammar rule field_type_all_with_composites to
field_type_all_with_typedefs
- Removing the grammar rule assoc_array_index_types.
Changing the grammar to "INDEX_SYM BY field_type".
Removing the grammar rule field_type_all_with_record.
Allowing field_type_all_with_typedefs as an assoc array element.
Catching wrong index and element data types has been moved to
Type_handler_assoc_array::Column_definition_set_attributes().
It raises an SQL error on things like:
* assoc array of assoc arrays in TABLE OF
* index by a non-supported type in INDEX BY
- Removing four methods:
* sp_type_def_list::type_defs_add_record()
* sp_type_def_list::type_defs_add_composite2()
* sp_pcontext::type_defs_declare_record()
* sp_type_def_list::type_defs_declare_composite2()
Adding two methods instead:
* sp_type_def_list::type_defs_add()
* sp_pcontext::type_defs_add()
This makes it possible to get rid of the duplicate code detecting data type
declarations with the same name in the same sp_pcontext frame.
- Adding new methods:
* LEX::declare_type_assoc_array()
* LEX::declare_type_record()
They create a type-specific sp_type_def_xxx and then call the generic
sp_pcontext::type_defs_add().
- m_key_def.sp_prepare_create_field() inside
Field_assoc_array::create_fields() is now called for all key data types
(not only for integers)
- Removing the assignment of key_def->charset in
Type_handler_assoc_array::sp_variable_declarations_finalize().
The charset is now evaluated in m_key_def.sp_prepare_create_field().
- Fixing Item_assoc_array::get_key() to set the character set of the "key"
to utf8mb3 instead of binary
- Fixing Field_assoc_array::copy_and_convert_key() to set the key length
limit in terms of the character length as specified in
INDEX BY VARCHAR(N), instead of octet length. This is needed to make
keys with multi-byte characters work correctly.
Also, it now raises different errors depending on the reason for the
key conversion failure:
* ER_INVALID_CHARACTER_STRING
* ER_CANNOT_CONVERT_CHARACTER
- Changing the prototype for Type_handler_composite::key_to_lex_cstring() to
virtual LEX_CSTRING key_to_lex_cstring(THD *thd,
const sp_rcontext_addr &var,
Item **key,
String *buffer) const;
* Now it returns a LEX_CSTRING, instead of getting it as an out parameter.
* Gets an sp_rcontext_addr instead of "name" and "def"
* Gets a String buffer which can be used to be passed to val_str(),
or for character set conversion purposes.
- Removing Field_assoc_array::m_key_def, as all required information
is available from Field_assoc_array::m_key_field.
In Field_assoc_array::create_fields, m_key_def is turned into a local
variable key_def.
- Fixing Field_assoc_array::copy_and_convert_key() to follow MariaDB coding
style: only constants can be passed by reference; non-constants should
be passed by pointer.
- Adding DBUG_ASSERTs into Type_handler_assoc_array::get_item()
and Type_handler_assoc_array::get_or_create_item() that the passed
key in "name" is well formed according to the charset of INDEX BY.
- Changing the error ER_TOO_LONG_KEY to ER_WRONG_STRING_LENGTH.
The former prints the length limit in bytes, which is not applicable
to INDEX BY values, because their limit is in characters.
Also, the latter is more verbose.
- Fixing the problem that these wrong uses of an assoc array variable:
BEGIN
assoc_var;
assoc_var(1);
END;
raised a weird error message:
ERROR 1054 (42S22): Unknown column 'assoc_var' in '(null)'
Now a more readable parse error is raised.
- Adding a "Duplicate key" warning for the cases when assigning
between two assoc arrays rejects some records due to different
collations in their INDEX BY key definitions.
- Disallowing INDEX BY propagation from VARCHAR to TEXT.
The underlying code cannot handle TEXT.
Adding tests.
- Adding a helper class StringBufferKey to pass to val_str() when
a key value is evaluated.
Fixing all val_str() calls to val_str(&buffer), as the former is
not desirable.
- Fixing a wrong use of args[0]->null_value in
Item_func_assoc_array_exists::val_bool()
- Fixing a problem that using TABLE OF TEXT crashed the server.
Thanks to Iqbal Hassan for the proposed patch.
- Changes in Qualified_ident:
* Fixing the Qualified_ident constructors to get all parts as
Lex_ident_cli_st, rather than the first part as Lex_ident_cli_st
and the following parts as Lex_ident_sys.
This makes the code more symmetric.
* Fixing the grammar in sql_yacc.yy accordingly.
* Changing the data type storing the position in the client query
from "const char *" to Lex_ident_cli.
* Adding a new method Qualified_ident::is_sane().
It allows reducing the code size in sql_yacc.yy.
Thanks to Iqbal Hassan for the idea.
- Replacing qs_append() with append_ulonglong() in:
* Item_method_func::print()
* Item_splocal_assoc_array_element::print()
* Item_splocal_assoc_array_element_field::print()
These methods do not use reserve()/alloc(), so calling qs_append()
was wrong and caused a crash.
- Changing the output formats of these methods:
* Item_splocal_assoc_array_element::print()
* Item_splocal_assoc_array_element_field::print()
so that the key is not printed twice.
Also moving the `@123` part (the variable offset) immediately
after the variable name and before the `[key]` part.
- Fixing a memory leak that happened when trying to insert a duplicate
key into an assoc array. Also adding a new "THD *" parameter to
Field_assoc_array::insert_element(). Thanks to Iqbal Hassan for the fix.
Adding a test into sp-assoc-array-ctype.test.
- In Field_assoc_array::create_fields: m_element_field->field_name is now
set for all element data types (not only for records).
This fixed a wrong variable name in warnings. Adding tests.
- Adding tests:
* Adding tests for assoc array elements in UNIONs.
* Copying from an assoc array with a varchar key
to an assoc array with a shorter varchar key.
* A relatively big associative array.
* Memory usage for x86_64.
* Package variable as assoc array keys.
* Character set conversion
* TABLE OF TEXT
* TABLE OF VARCHAR(>64k bytes) propagation to TABLE OF TEXT.
* TEXT element fields in an array of records.
* VARCHAR->TEXT propagation in elements in an array of records.
* Some more tests
This patch adds support for associative arrays in stored procedures
for sql_mode=ORACLE.
The syntax follows Oracle's PL/SQL syntax for associative arrays -
TYPE assoc_array_t IS TABLE OF VARCHAR2(100) INDEX BY INTEGER;
or
TYPE assoc_array_t IS TABLE OF record_t INDEX BY VARCHAR2(100);
where record_t is a record type.
The following functions were added for associative arrays:
- COUNT - Retrieve the number of elements within the array
- EXISTS - Check whether given key exists in the array
- FIRST - Retrieve the first key in the array
- LAST - Retrieve the last key in the array
- PRIOR - Retrieve the key before the given key
- NEXT - Retrieve the key after the given key
- DELETE - Remove the element with the given key or remove all elements
if no key is given
The arrays/elements can be initialized with the following methods:
- Constructor
i.e. array:= assoc_array_t('key1'=>1, 'key2'=>2, 'key3'=>3)
- Assignment
i.e. array(key):= record_t(1, 2)
- SELECT INTO
i.e. SELECT x INTO array(key)
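A rough usage sketch under sql_mode=ORACLE (type, variable and key names
are illustrative; the exact method-call spelling is an assumption, not
copied from the test suite):
  DECLARE
    TYPE marks_t IS TABLE OF INT INDEX BY VARCHAR2(20);
    marks marks_t;
    total INT;
  BEGIN
    marks := marks_t('alice' => 10, 'bob' => 20);  -- constructor
    marks('carol') := 30;                          -- assignment to an element
    total := marks.COUNT;                          -- 3
    IF marks.EXISTS('bob') THEN
      marks.DELETE('bob');                         -- DELETE used as a procedure
    END IF;
  END;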
TODOs:
- Nested tables are not supported yet.
i.e. TYPE assoc_array_t IS TABLE OF other_assoc_array_t INDEX BY INTEGER;
- Associative arrays comparisons are not supported yet.
- Moving the definition of "class Type_handler_row" into a new file
sql_type_row.h. Also moving *some* of its methods into sql_type_row.cc.
The rest of the methods will be moved in the patch for MDEV-34319.
Moving the definition of my_var_sp_row_field into sql_type_row.cc.
- Fixing the grammar for function_call_generic to get the first
production as "ident_cli_func" rather than "ident_func".
The upcoming patch needs to know the position of the function name
within the client query.
- Adding new data types to store data types defined by "TYPE" declarations:
* sp_type_def
* sp_type_def_list
sp_pcontext now derives from sp_type_def_list
- A new virtual method in Field:
virtual Item_field *make_item_field_spvar(THD *thd,
const Spvar_definition &def);
Using it in sp_rcontext::init_var_items().
- Fixing my_var_sp to get sp_rcontext_addr in the parameter
instead of two separate parameters (rcontext_handler + offset).
- Adding new virtual methods in my_var:
virtual bool set_row(THD *thd, List<Item> &select_list);
It's used when a select_list record is assigned to a
single composite variable, such as ROW, specified in the INTO clause.
Using it in select_dumpvar::send_data().
virtual bool check_assignability(THD *thd,
const List<Item> &select_list,
bool *assign_as_row) const;
It's used to check if the select_list is compatible with
a single INTO variable, in select_dumpvar::prepare().
- Fixing the LEX methods create_outvar() to get identifiers
as Lex_ident_sys_st values instead of generic LEX_CSTRING values.
- Adding virtual methods in Type_handler:
// Used in Item_func_null_predicate::check_arguments()
virtual bool has_null_predicate() const;
// Used in LEX::sp_variable_declarations_finalize()
virtual bool sp_variable_declarations_finalize(THD *thd,
LEX *lex, int nvars,
const Column_definition &def)
const;
// Handle SELECT 1 INTO spvar;
virtual my_var *make_outvar(THD *thd,
const Lex_ident_sys_st &name,
const sp_rcontext_addr &addr,
sp_head *sphead,
bool validate_only) const;
// Handle SELECT 1 INTO spvar.field;
virtual my_var *make_outvar_field(THD *thd,
const Lex_ident_sys_st &name,
const sp_rcontext_addr &addr,
const Lex_ident_sys_st &field,
sp_head *sphead,
bool validate_only) const;
// create the value in: DECLARE var rec_t DEFAULT rec_t(1,'c');
virtual Item *make_typedef_constructor_item(THD *thd,
const sp_type_def &def,
List<Item> *arg_list) const;
- A new helper method:
Row_definition_list *Row_definition_list::deep_copy(THD *thd) const;
main/statistics_json.result is updated for f8ba5ced55 (MDEV-36099)
The test uses 'delete from t1' in many places and then populates
the table again. The natural order of rows in a MyISAM table is well
defined and the test was implicitly relying on that.
Before f8ba5ced55, DELETE was deleting rows one by one, using
ha_myisam::delete_row(), because the connection was stuck in rbr mode.
This caused rows to be shown in reverse insertion order (because of
the delete link list).
MDEV-36099 fixes this bug and the server now correctly uses
ha_myisam::delete_all_rows(). This makes rows be shown in
insertion order, as expected.
MDEV-6247 added PROCESSLIST states for when a Replication
SQL thread processes Row events, including a WSRep variant
that dynamically includes the Galera Sequence Number.
MDEV-7409 further expanded on it by adding the table name to the states.
However, PROCESSLIST __cannot__ support generated states.
Because it loads the state texts asynchronously,
only permanently static strings are safe.
Even thread-local memory can become invalid when the thread terminates,
which can happen in the middle of generating a PROCESSLIST.
To prioritize memory safety, this commit reverts both variants to
static strings as the non-WSRep variant was before MDEV-7409.
* __Fully__ revert MDEV-7409 (d9898c9a71)
* Remove the WSRep override from MDEV-6247
* Remove `THD::wsrep_info` and its compiler
flag `WSREP_PROC_INFO` as they are now unused
This commit also includes small optimizations
from MDEV-36839’s previous draft, #4133.
Reviewed-by: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
An anonymous block is represented internally by the class sp_head,
so every statement inside an anonymous block is an SP instruction.
On the other hand, the anonymous block specified in the FROM clause of
the PREPARE statement is treated as a single statement. As a result,
all parameter markers (represented by the character ?) are parts of
the anonymous block specified in the prepared statement, and at the same
time these parameter markers are internally represented by instances of
the class Item_param and distributed among the SP instructions representing
SQL statements (every SQL statement is represented by an instance of
the class sp_instr_stmt).
In case table metadata changed while running an anonymous block in prepared
statement mode, only the failed SP instruction's statement is re-parsed. Before
re-parsing an SP's statement, all items are cleaned up, including
instances of the class Item_param that represent positional parameters.
Unfortunately, this leaves a dangling pointer in
Prepared_statement::param_array that references the deleted
Item_param when reset_stmt_params is invoked, which happens on every execution
of a prepared statement.
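A minimal sketch of the affected scenario (table and column names are
illustrative; sql_mode=DEFAULT is assumed, where an anonymous block is
written as BEGIN NOT ATOMIC ... END):
  PREPARE stmt FROM 'BEGIN NOT ATOMIC
                       INSERT INTO t1 (a) VALUES (?);
                       UPDATE t2 SET b = ? WHERE a = 1;
                     END';
  SET @v1 = 10, @v2 = 20;
  EXECUTE stmt USING @v1, @v2;
  ALTER TABLE t1 ADD COLUMN c INT;  -- metadata change forces re-parsing of one sp_instr_stmt
  EXECUTE stmt USING @v1, @v2;      -- previously could touch a dangling Item_param pointer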
To fix the issue, no new instances of Item_param are created on re-parsing
the statement of a failed SP instruction; instead, the Item_param instances
left from the first-time parsing are re-used. As a consequence, all pointers
to instances of the class Item_param stored in the array
Prepared_statement::param_array and possibly spread along the code base
(e.g. select_lex->limit_params.select_limit)
still point to valid Items.
This feature stores the DDLs of the tables/views that are used in
a query in the optimizer trace. It is currently controlled by a
system variable store_ddls_in_optimizer_trace, and is not enabled
by default. All the DDLs are stored in a single JSON array, with each
element holding the table/view name and the associated create definition
of the table/view.
The approach taken is to read the global query_tables list from thd->lex
in reverse order, create a record with the table name and the DDL of
the table, add the table name to a hash,
and dump the information to the trace.
dbName_plus_tableName is used as the hash key,
and duplicate entries are not added to the hash.
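A hedged example of how the feature would be used (assuming
store_ddls_in_optimizer_trace is settable per session and the trace is
read from the usual INFORMATION_SCHEMA table):
  SET optimizer_trace='enabled=on';
  SET store_ddls_in_optimizer_trace=1;
  SELECT t1.a FROM t1 JOIN v1 ON t1.a = v1.a;
  -- the JSON array with the CREATE TABLE / CREATE VIEW text of t1 and v1
  -- is expected to appear in the trace:
  SELECT trace FROM information_schema.optimizer_trace;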
The main suite tests are also run with the feature enabled, and they
all succeed.
Assertion fails because table is opened by admin_recreate_table():
71 result_code= (thd->open_temporary_tables(table_list) ||
72 mysql_recreate_table(thd, table_list, recreate_info, false));
And that is called because t2 is failed with HA_ADMIN_NOT_IMPLEMENTED:
1093 if (result_code == HA_ADMIN_NOT_IMPLEMENTED && need_repair_or_alter)
1094 {
1095 /*
1096 repair was not implemented and we need to upgrade the table
1097 to a new version so we recreate the table with ALTER TABLE
1098 */
1099 result_code= admin_recreate_table(thd, table, &recreate_info);
1100 }
Actually 'table' is t2, but open_temporary_tables() opens the whole list,
i.e. t2 and everything that follows it up to first_not_own_table().
Therefore t3 is also opened during t2 processing, which is wrong.
The fix opens exactly one specific table for HA_ADMIN_NOT_IMPLEMENTED.
Problem:
The empty-query counter is incremented if no rows are sent to the client in the
EXECUTE phase of a select query. With the cursor protocol, rows are not sent
during the EXECUTE phase; they are sent later in the FETCH phase. Hence,
queries executed with the cursor protocol were always falsely treated as
empty in the EXECUTE phase.
Fix:
For the cursor protocol, empty queries are now counted during the FETCH
phase. This ensures the counter correctly reflects whether any rows were
actually sent to the client.
Tests included in `mysql-test/main/show.test`.
1. Disable opt_hint_timeout.test for embedded server. Make sure
the test does not crash even when started for embedded server.
Disable view-protocol since hints are not supported inside views.
2. Hints are designed to behave like regular /* ... */ comments:
`SELECT /*+ " */ 1;` -- a valid SQL query
However, the mysql client program waits for the closing double-quote
character.
Another problem is observed when there is no space character between
closing `*/` of the hint and the following `*`:
`SELECT /*+ some_hints(...) */* FROM t1;`
In this case the client treats `/*` as a comment section opening and
waits for the closing `*/` sequence.
This commit fixes all of these issues.
The MAX_EXECUTION_TIME hint places a limit N (a timeout value in milliseconds)
on how long a statement is permitted to execute before the server terminates it.
Syntax:
SELECT /*+ MAX_EXECUTION_TIME(milliseconds) */ ...
Only top-level SELECT statements support the hint.
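For example (the timeout value is illustrative):
  SELECT /*+ MAX_EXECUTION_TIME(1000) */ * FROM t1 ORDER BY a;
  -- the server terminates the statement if it runs longer than 1000 ms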
Valgrind is single threaded and only changes threads as part of
system calls or waits.
Some busy loops were identified and fixed where the server assumes
that some other thread will change the state, which will not happen
with valgrind.
Based on a patch by Monty. The original patch introduced VALGRIND_YIELD,
which emits pthread_yield() only in valgrind builds. However, it was
agreed that it is a good idea to emit yield() unconditionally, so
that other affected schedulers (like SCHED_FIFO) benefit from this
change. Also avoid pthread_yield() in favour of the standard
std::this_thread::yield().
The main purpose of this is to allow one to use the --read-only
option to ensure that no one can issue a query that can
block replication.
The --read-only option can now take 4 different values:
0 No read only (as before).
1 Blocks changes for users without the 'READ ONLY ADMIN'
privilege (as before).
2 In addition, blocks LOCK TABLES and SELECT IN SHARE MODE
for users without the 'READ ONLY ADMIN' privilege.
3 In addition, blocks 'READ_ONLY_ADMIN' users from all the
previously listed statements.
read_only is changed to an enum and one can use the following
names for the lock levels:
OFF, ON, NO_LOCK, NO_LOCK_NO_ADMIN
To keep things compatible with config files from older versions, one can
still use the values FALSE and TRUE, which are mapped to OFF and ON.
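A brief sketch of setting the new levels, assuming the symbolic names map
to the numeric values exactly as listed above:
  SET GLOBAL read_only=NO_LOCK;            -- same as read_only=2
  SET GLOBAL read_only=NO_LOCK_NO_ADMIN;   -- same as read_only=3
  SHOW GLOBAL VARIABLES LIKE 'read_only';  -- now shows the name, not a number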
The main visible changes are:
- 'show variables like "read_only"' now returns a string
instead of a number.
- Error messages related to read_only violations now contain
the current value of read_only.
Other things:
- is_read_only_ctx() renamed to check_read_only_with_error()
- Moved TL_READ_SKIP_LOCKED to its logical place
Reviewed by: Sergei Golubchik <serg@mariadb.org>
REPAIR of InnoDB tables was logging both ALTER TABLE and REPAIR to the ddl log.
The ALTER TABLE entry contained the new table id and the REPAIR entry, wrongly,
contained the old table id.
Now only REPAIR is logged.
ddl.log changes:
REPAIR TABLE and OPTIMIZE TABLE that are done through ALTER TABLE will
now contain the old and new table id. If not done through ALTER TABLE,
only the current table id will be shown (as before).
MDEV-36563 Assertion `!mysql_bin_log.is_open()' failed in
THD::mark_tmp_table_as_free_for_reuse
The purpose of this commit is to ensure that creation and changes of
temporary tables are properly and predictably logged to the binary
log. It also fixes some bugs where ROW logging was used in MIXED mode,
when STATEMENT would be a better (and expected) choice.
In this commit message, STATEMENT stands for logging to the binary log in
STATEMENT format, MIXED stands for the MIXED binlog format and ROW for the
ROW binlog format.
New rules for logging of temporary tables
- CREATE of temporary tables is now by default binlogged only if
the STATEMENT binlog format is used. If it is binlogged, 1 is stored in
TABLE_SHARE->table_creation_was_logged. The user can change this
behavior by setting create_temporary_table_binlog_formats to
MIXED,STATEMENT, in which case the create is logged in statement
format also in MIXED mode (as before).
- Changes to temporary tables are binlogged if and only if
the CREATE was logged. The logging happens under STATEMENT or MIXED.
If binlog_format=ROW, temporary table changes are not binlogged. A
temporary table that is changed under ROW is marked as 'not up to
date in binlog', and no future row changes are logged. Any future
statement that uses such a temporary table will force the other
tables it uses to be row logged.
- DROP TEMPORARY is binlogged only if the CREATE was binlogged.
Changes done:
- Row logging is forced for any statement using temporary tables that
are not up to date in the binary log.
(Before, row logging was forced if the user had a temporary table.)
- If there are any changes to the temporary table that are not binlogged,
the table is marked as not up to date.
- TABLE_SHARE->table_creation_was_logged has a new definition for
temporary tables:
0 Table creation was not logged to the binary log
1 Table creation was logged to the binary log and the table is up to date.
2 Table creation was logged to the binary log but some changes were
not logged to the binary log.
'Not up to date in the binary log' is defined as value 0 or 2.
- If a multi-table-update or multi-table-delete fails then
all updated temporary tables are marked as not up to date.
- Enforce row logging if the query is using temporary tables
that are not up to date.
Before, row logging was enforced if the user had any
temporary tables.
- When dropping temporary tables, use IF EXISTS. This ensures
that the slave will not stop if it had crashed and lost the
temporary tables.
- Remove the comment and version from the DROP /*!4000 TEMPORARY.. generated
when a connection that has open temporary tables closes. Added 'generated by
server' at the end of the DROP.
Bugs fixed:
- When using temporary tables with commands that forced row based logging,
like INSERT INTO temporary_table VALUES (UUID()), this was never
logged, which caused the temporary table to be inconsistent on
master and slave.
- The binlog format used is now clearly defined. It now depends only
on the current binlog_format and the tables used.
Before, it depended on whether the user had ANY temporary tables and on
the state of 'current_stmt_binlog_format' set by previous queries.
This also caused temporary tables to be logged to the binary log in
some cases.
- CREATE TABLE t1 LIKE not_logged_temporary_table caused replication
to stop.
- Renames of not-binlogged temporary tables were written to the binary log,
which caused replication to stop.
Changes in behavior:
- By default create_temporary_table_binlog_formats=STATEMENT, which
means that CREATE TEMPORARY is not logged to binary log under MIXED
binary logging. This can be changed by setting
create_temporary_table_binlog_formats to MIXED,STATEMENT.
- Using temporary tables that were not logged to the binary log will
cause any query using them for updating other tables to be logged in
ROW format. Before, all queries were logged in ROW format if the user had
any temporary tables, even if they were not used by the query.
- Generated DROP TEMPORARY TABLE is now always using IF EXISTS and
has a "generated by server" comment in the binary log.
The consequence of the above is that manipulating a lot of rows
through temporary tables will by default be slower in mixed mode.
For example:
BEGIN;
CREATE TEMPORARY TABLE tmp AS SELECT a, b, c FROM
large_table1 JOIN large_table2 ON ...;
INSERT INTO other_table SELECT b, c FROM tmp WHERE a <100;
DROP TEMPORARY TABLE tmp;
COMMIT;
By default this will create a huge entry in the binary log, compared
to just a few hundred bytes in statement mode. However, the change in
this commit makes the usage of temporary tables more reliable and
predictable and is thus worth it. Using statement mode or
create_temporary_table_binlog_formats can be used to avoid this issue.
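For example, to restore the old behaviour of also logging CREATE TEMPORARY
under MIXED (the GLOBAL scope shown here is an assumption):
  SET GLOBAL create_temporary_table_binlog_formats='MIXED,STATEMENT';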
This is needed to make it easy for users to automatically ignore long
char and varchars when using ANALYZE TABLE PERSISTENT.
These fields can cause problems as they will consume
'CHARACTERS * MAX_CHARACTER_LENGTH * 2 * number_of_rows' space on disk
during analyze, which can easily be much bigger than the analyzed table.
This commit adds a new variable, analyze_max_length, with a default value of 4G.
Any field that is bigger than this in bytes will be ignored by
ANALYZE TABLE PERSISTENT unless it is specified in FOR COLUMNS().
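A hedged example, assuming analyze_max_length is settable per session and
compared against the column size in bytes as described above (the column
name is illustrative):
  SET SESSION analyze_max_length=1024;
  ANALYZE TABLE t1 PERSISTENT FOR ALL;   -- columns wider than 1024 bytes are skipped
  ANALYZE TABLE t1 PERSISTENT FOR COLUMNS (long_text_col) INDEXES ();  -- forces the wide column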
While doing this patch, I noticed that we do not skip GEOMETRY columns from
ANALYZE TABLE, like we do with BLOB. This should be fixed when merging
to the 'main' branch. At the same time we should add a reasonable default
value for analyze_max_length, probably 1024, like we have for
max_sort_length.
When the SELECT sub-statement executes a stored function that is defined
to modify a non-transactional table, like
delimiter |;
create function f_ia(arg int)
returns integer
begin
insert into ti_pk set a=1;
insert into ta set a=1;
insert into ti_pk set a=arg;
return 1;
end |
delimiter ;|
any modified records that the function has succeeded
on must be binlogged as a "side effect" of CREATE-SELECT.
It is expected that a failing CREATE-SELECT like
--error ER_DUP_ENTRY
set statement binlog_format = ROW for create table t_y (a int) engine=aria select f_ia(1 /* err in Innodb after Aria stmt is done */) as a;
leaves behind the following state:
include/show_binlog_events.inc
Log_name Pos Event_type Server_id End_log_pos Info
master-bin.000001 # Gtid # # BEGIN GTID #-#-#
master-bin.000001 # Table_map # #
table_id: # (test. ta)
master-bin.000001 # Write_rows_v1 # #
table_id: # flags: STMT_END_F
master-bin.000001 # Query # # COMMIT
select * from ta;
a
1
select count(*) = 0 from ti_pk;
true
However it's not so for the binlog part.
The reason is that prior to the MDEV-34150 fixes, the CREATE-SELECT's
errored phase left the binlog caches intact (the file:pos is from 10.11 c06c36218a),
deferring their reset to the rollback phase of the top-level statement,
/* the statement cache gets binlogged */
where the side-effect changes get binlogged.
The MDEV-34150 fixes harmed (the +#4 frame below) the statement cache in
particular in the error phase (file:pos are from 395db6f1d5, the current 11.8):
/* The caches incl the statement cache are gone */
/* 'cos of MDEV-34150 */
+#4 0x00005d75f9b6a92e in THD::binlog_remove_rows_events (this=0x52c000240288) at log.cc:579
Apparently this call should not have been there, as proper emptying (either
with a reset for the transactional cache, or a flush and then reset for the
statement cache) is (must be) always done via binlog_rollback of the
top-level statement.
To observe the above requirement, the case is fixed with the removal of
the thd->binlog_remove_rows_events() call and its definition.
Tested with rpl.rpl_create_select_row.
Reviewed-by Brandon Nesterenko.
Added the capability to create a trigger associated with several trigger
events. For this goal, the syntax of the CREATE TRIGGER statement
was extended to support the syntax structure { event [ OR ... ] }
for the `trigger_event` clause. Since one trigger will be able to
handle several events, a way must be provided to determine what
kind of event is being handled on execution of a trigger. For this goal,
support of the clauses INSERTING, UPDATING, DELETING was added by
this patch. These clauses can be used inside a trigger body to detect
what kind of trigger action is currently processed, using the following
boilerplate:
IF INSERTING THEN ...
ELSIF UPDATING THEN ...
ELSIF DELETING THEN ...
In case one of the clauses INSERTING, UPDATING, DELETING specified in
a trigger's body does not match the trigger event type, the error
ER_INCOMPATIBLE_EVENT_FLAG is emitted.
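A sketch of a multi-event trigger using the new syntax (sql_mode=ORACLE
is assumed for the ELSIF form; table names are illustrative):
  CREATE TRIGGER t1_audit
    BEFORE INSERT OR UPDATE OR DELETE ON t1
    FOR EACH ROW
  BEGIN
    IF INSERTING THEN
      INSERT INTO audit_log VALUES ('insert');
    ELSIF UPDATING THEN
      INSERT INTO audit_log VALUES ('update');
    ELSIF DELETING THEN
      INSERT INTO audit_log VALUES ('delete');
    END IF;
  END;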
After this patch is pushed, one Trigger object can be associated with
several trigger events. It means that the array
Table_triggers_list::triggers
can contain several pointers to the same Trigger object in array members
corresponding to different events. Moreover, support of several trigger
events for the same trigger requires that the data members `next` and
`action_order` of the Trigger class be converted to arrays that store
the related information on a per-trigger-event basis.
The ability to specify the same trigger for different event types results in
the necessity to handle invalid cases on execution of the multi-event
trigger, when the OLD or NEW qualifier doesn't match the current event
type the trigger is run for. The clause OLD should produce
the NULL value for the INSERT event, whereas the clause NEW should produce
the NULL value for the DELETE event.
This patch adds support for SYS_REFCURSOR (a weakly typed cursor)
for both sql_mode=ORACLE and sql_mode=DEFAULT.
Works as a regular stored routine variable, parameter and return value:
- can be passed as an IN parameter to stored functions and procedures
- can be passed as an INOUT and OUT parameter to stored procedures
- can be returned from a stored function
Note, strongly typed REF CURSOR will be added separately.
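A short sketch under sql_mode=ORACLE (the function and block below are
illustrative):
  CREATE FUNCTION f_cur RETURN SYS_REFCURSOR AS
    c SYS_REFCURSOR;
  BEGIN
    OPEN c FOR SELECT 1 FROM DUAL;
    RETURN c;                 -- SYS_REFCURSOR returned from a stored function
  END;

  DECLARE
    c SYS_REFCURSOR;
    v INT;
  BEGIN
    c := f_cur();             -- the cursor stays open across the assignment
    FETCH c INTO v;
    CLOSE c;
  END;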
Note, to maintain dependencies easier, some parts of sql_class.h
and item.h were moved to new header files:
- select_results.h:
class select_result_sink
class select_result
class select_result_interceptor
- sp_cursor.h:
class sp_cursor_statistics
class sp_cursor
- sp_rcontext_handler.h
class Sp_rcontext_handler and its descendants
The implementation consists of the following parts:
- A new class sp_cursor_array deriving from Dynamic_array
- A new class Statement_rcontext which contains data shared
between sub-statements of a compound statement.
It has a member m_statement_cursors of the sp_cursor_array data type,
as well as an open cursor counter. THD inherits from Statement_rcontext.
- A new data type handler Type_handler_sys_refcursor in plugins/type_cursor/
It is designed to store uint16 references -
positions of the cursor in THD::m_statement_cursors.
- Type_handler_sys_refcursor suppresses some derived numeric features.
When a SYS_REFCURSOR variable is used as an integer, an error is raised.
- A new abstract class sp_instr_fetch_cursor. It's needed to share
the common code between "OPEN cur" (for static cursors) and
"OPEN cur FOR stmt" (for SYS_REFCURSORs).
- New sp_instr classes:
* sp_instr_copen_by_ref - OPEN sys_ref_cursor FOR stmt;
* sp_instr_cfetch_by_ref - FETCH sys_ref_cursor INTO targets;
* sp_instr_cclose_by_ref - CLOSE sys_ref_cursor;
* sp_instr_destruct_variable - to destruct SYS_REFCURSOR variables when
the execution goes out of the BEGIN..END block
where SYS_REFCURSOR variables are declared.
- New methods in LEX:
* sp_open_cursor_for_stmt - handles "OPEN sys_ref_cursor FOR stmt".
* sp_add_instr_fetch_cursor - "FETCH cur INTO targets" for both
static cursors and SYS_REFCURSORs.
* sp_close - handles "CLOSE cur" both for static cursors and SYS_REFCURSORs.
- Changes in cursor functions to handle both static cursors and SYS_REFCURSORs:
* Item_func_cursor_isopen
* Item_func_cursor_found
* Item_func_cursor_notfound
* Item_func_cursor_rowcount
- A new system variable @@max_open_cursors - to limit the number
of cursors (static and SYS_REFCURSORs) opened at the same time.
Its allowed range is [0-65536], with 50 by default.
- A new virtual method Type_handler::can_return_bool() telling
if calling item->val_bool() is allowed for Items of this data type,
or if otherwise the "Illegal parameter for operation" error should be raised
at fix_fields() time.
- New methods in Sp_rcontext_handler:
* get_cursor()
* get_cursor_by_ref()
- A new class Sp_rcontext_handler_statement to handle top level statement
wide cursors which are shared by all substatements.
- A new virtual method expr_event_handler() in classes Item and Field.
It's needed to close (and make available for a new OPEN)
unused THD::m_statement_cursors elements which do not have any references
any more. This can happen at various moments in time, e.g.
* after evaluating the parameters of an SQL routine
* after assigning a cursor expression into a SYS_REFCURSOR variable
* when leaving a BEGIN..END block with SYS_REFCURSOR variables
* after setting OUT/INOUT routine actual parameters from formal
parameters.
wait_for_prior_commit() can be called multiple times per event group,
only do my_error() the first time the call fails.
Remove redundant set_overwrite_status() calls.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Reviewed-by: Monty <monty@mariadb.org>
It was not possible to use a package body variable as a
fetch target:
CREATE PACKAGE BODY pkg AS
vc INT := 0;
FUNCTION f1 RETURN INT AS
CURSOR cur IS SELECT 1 AS c FROM DUAL;
BEGIN
OPEN cur;
FETCH cur INTO vc; -- this returned "Undeclared variable: vc" error.
CLOSE cur;
RETURN vc;
END;
END;
FETCH assumed that all fetch targets reside in the same sp_rcontext
instance as the cursor. This patch fixes the problem.
Now a cursor and its fetch target can reside in different sp_rcontext
instances.
Details:
- Adding a helper class sp_rcontext_addr
(a combination of Sp_rcontext_handler pointer and an offset in the rcontext)
- Adding a new class sp_fetch_target deriving from sp_rcontext_addr.
Fetch targets in "FETCH cur INTO target1, target2 ..." are now collected
into this structure instead of sp_variable.
sp_variable cannot be used any more to store fetch targets,
because it does not have a pointer to Sp_rcontext_handler
(it only has the current rcontext offset).
- Removing the sp_instr_set members m_rcontext_handler and m_offset.
Deriving sp_instr_set from sp_rcontext_addr instead.
- Renaming sp_instr_cfetch member "List<sp_variable> m_varlist"
to "List<sp_fetch_target> m_fetch_target_list".
- Fixing LEX::sp_add_cfetch() to return the pointer to the
created sp_fetch_target instance (instead of returning bool).
This helps to make the grammar in sql_yacc.yy simpler.
- Renaming LEX::sp_add_cfetch() to LEX::sp_add_instr_cfetch(),
as the meaning of `if(sp_add_cfetch())` changed to the opposite;
the rename avoids a wrong automatic merge from earlier versions.
- Changing the "List<sp_variable> *vars" parameter to sp_cursor::fetch
to have the data type "List<sp_fetch_target> *".
- Changing the data type of "List<sp_variable> &vars" in
sp_cursor::Select_fetch_into_spvars::send_data_to_variable_list()
to "List<sp_fetch_target> &".
- Adding THD helper methods get_rcontext() and get_variable().
- Moving the code from sql_yacc.yy into a new LEX method
LEX::make_fetch_target().
- Simplifying the grammar in sql_yacc.yy using the new LEX method.
Changing the data type of the bison rule sp_fetch_list from "void"
to "List<sp_fetch_target> *".
* rpl.rpl_system_versioning_partitions updated for MDEV-32188
* innodb.row_size_error_log_warnings_3 changed error for MDEV-33658
(checks are done in a different order)
Reintroduces delete_while_scanning optimization for multi_delete.
Reverts some test changes from the initial feature development now
that we delete on the fly once again.
We now allow multitable queries with order by and limit, such as:
delete t1.*, t2.* from t1, t2 order by t1.id desc limit 3;
To predict what rows will be deleted, run the equivalent select:
select t1.*, t2.* from t1, t2 order by t1.id desc limit 3;
Additionally, index hints are now supported with single table delete statements:
delete from t2 use index(xid) order by (id) limit 2;
This approach changes the multi_delete SELECT result interceptor to use a temporary
table to collect row ids pertaining to the rows that will be deleted, rather than
directly deleting rows from the target table(s). Row ids are collected during
send_data, then read during send_eof to delete target rows. In the event that the
temporary table created in memory is not big enough for all matching rows, it is
converted to an aria table.
Other changes:
- Deleting from a sequence now affects zero rows instead of emitting an error
Limitations:
- The federated connector does not create implicit row ids, so we have to use a key
when conditionally deleting. See the change in federated_maybe_16324629.test
This patch includes a few changes to make the code easier to maintain:
- Renamed SQL_I_List::link_in_list to SQL_I_List::insert. link_in_list was
ambiguous as it could refer to a link or it could refer to a node
- Remove field_name local variable in multi_update::initialize_tables because
it is not used when creating the temporary tables
- multi_update changes:
- Move temp table callocs to init, a more natural location for them, and move
tables_to_update to a const member variable so we don't recompute it.
- Filter out jtbm tables and tables not in the update map, pushing those that
will be updated into an update_targets container. This simplifies checks and
loops in initialize_tables.
Implementation of this task adds the ability to raise a signal with
SQLSTATE '02TRG' from a BEFORE INSERT/UPDATE/DELETE trigger and handles
this signal as an indicator meaning 'throw away the current row'
when processing the INSERT/UPDATE/DELETE statement. The signal with
SQLSTATE '02TRG' has special meaning only in case it is raised inside
BEFORE triggers; for AFTER triggers this value of SQLSTATE isn't treated
in any special way. In accordance with the SQL standard, the SQLSTATE class '02'
means NO DATA, and sql_errno for this class is set to the value
ER_SIGNAL_NOT_FOUND by the current implementation of MariaDB server.
Implementation of this task assigns the value ER_SIGNAL_SKIP_ROW_FROM_TRIGGER
to sql_errno in the Diagnostics_area in case the signal is raised from a trigger
and SQLSTATE has the value '02TRG'.
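A minimal sketch of a BEFORE trigger that skips rows this way (table and
condition are illustrative):
  CREATE TRIGGER t1_skip_nulls BEFORE INSERT ON t1 FOR EACH ROW
  BEGIN
    IF NEW.a IS NULL THEN
      SIGNAL SQLSTATE '02TRG';   -- the current row is silently thrown away
    END IF;
  END;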
To catch a signal with SQLSTATE '02TRG' and handle it in a special way, the methods
Table_triggers_list::process_triggers
select_insert::store_values
select_create::store_values
Rows_log_event::process_triggers
and the overloaded function
fill_record_n_invoke_before_triggers
were extended with an extra out parameter returning a flag that tells whether
to skip the current values being processed by the INSERT/UPDATE/DELETE
statement. This extra parameter is passed as nullptr in case of an AFTER trigger;
for a BEFORE trigger this parameter points to a variable that stores a marker of
whether to skip the current record or store it by calling write_record().