If find_uniq_filename returns an error while generating a new
binary log file name, the error is not propagated upwards and
execution does not report it to the user (although an entry is
written to the error log).
Additionally, some more errors were ignored in new_file_impl:
- when writing the rotate event
- when reopening the index and binary log file
This patch addresses this by propagating the error up the
execution stack. Furthermore, when rotation of the binary log
fails, an incident event is written, because there is a chance
that some changes for a given statement were not properly
logged. For example, in SBR a LOAD DATA INFILE statement requires
more than one event to be logged; should rotation fail while
logging part of the LOAD DATA events, the logged data would
become inconsistent with the data in the storage engine.
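As a rough illustration of the intended control flow (the functions below
are simplified stand-ins, not the actual MYSQL_BIN_LOG code), a rotation
sketch that checks every step, writes an incident event on failure and
propagates the error could look like this:

  #include <cstddef>
  #include <cstdio>

  // Hypothetical stand-ins for the real binlog routines; only the error
  // handling pattern is the point here.
  enum log_error { LOG_OK = 0, LOG_ERR_NAME, LOG_ERR_ROTATE, LOG_ERR_REOPEN };

  static log_error find_uniq_filename_sketch(char *, std::size_t) { return LOG_OK; }
  static log_error write_rotate_event_sketch(const char *)        { return LOG_OK; }
  static log_error reopen_index_and_log_sketch(const char *)      { return LOG_OK; }

  static void write_incident_event_sketch(const char *msg)
  {
    std::fprintf(stderr, "incident: %s\n", msg);
  }

  // Every step is checked and the first failure is propagated to the caller;
  // on failure an incident event records that part of a statement may be
  // missing from the binary log.
  static log_error new_file_sketch()
  {
    char next_name[512];

    log_error err = find_uniq_filename_sketch(next_name, sizeof(next_name));
    if (err == LOG_OK)
      err = write_rotate_event_sketch(next_name);
    if (err == LOG_OK)
      err = reopen_index_and_log_sketch(next_name);

    if (err != LOG_OK)
      write_incident_event_sketch("binary log rotation failed");

    return err;  // propagated instead of being swallowed
  }

  int main() { return new_file_sketch() == LOG_OK ? 0 : 1; }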
InnoDB does not attempt to handle lower_case_table_names == 2 when looking
up foreign table names and referenced table names. It turned that server
variable into a boolean and ignored the possibility of it being '2'.
The setting lower_case_table_names == 2 means that table names should be
stored and displayed in mixed case as given, but compared internally in lower case.
Normally the server deals with this since it stores table names. But
InnoDB stores referential constraints for the server, so it needs to keep
track of both lower case and given names.
This solution creates two table name pointers for each foreign and referenced
table name: one to display the name, and one to look it up. Both pointers
point to the same allocated string unless this setting is 2, so the added
overhead is small.
Two functions are created in dict0mem.c to populate the ..._lookup versions
of these pointers. Both dict_mem_foreign_table_name_lookup_set() and
dict_mem_referenced_table_name_lookup_set() are called 5 times each.
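A minimal sketch of the two-pointer idea follows; the struct and helper are
simplified stand-ins for the dict0mem.c code, not its actual definitions:

  #include <cctype>
  #include <cstdio>
  #include <cstring>

  // Simplified stand-in for the relevant part of dict_foreign_t: one pointer
  // holds the name as given (for display), the other the name used for
  // comparisons.
  struct foreign_sketch {
    char *foreign_table_name;         // name as given
    char *foreign_table_name_lookup;  // name used for lookups
  };

  static int lower_case_table_names = 2;  // pretend the server setting is 2

  // Sketch of what a ..._lookup_set() helper does: reuse the display string
  // unless the setting is 2, in which case allocate a lower-cased copy.
  static void foreign_table_name_lookup_set(foreign_sketch *f)
  {
    if (lower_case_table_names == 2) {
      std::size_t len = std::strlen(f->foreign_table_name);
      char *copy = new char[len + 1];
      for (std::size_t i = 0; i <= len; i++)
        copy[i] = (char) std::tolower((unsigned char) f->foreign_table_name[i]);
      f->foreign_table_name_lookup = copy;
    } else {
      // Both pointers share one allocation, so the overhead stays small.
      f->foreign_table_name_lookup = f->foreign_table_name;
    }
  }

  int main()
  {
    char given[] = "test/Parent_Table";
    foreign_sketch f = { given, 0 };
    foreign_table_name_lookup_set(&f);
    std::printf("display: %s\nlookup:  %s\n",
                f.foreign_table_name, f.foreign_table_name_lookup);
    if (f.foreign_table_name_lookup != f.foreign_table_name)
      delete[] f.foreign_table_name_lookup;
    return 0;
  }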
InnoDB AUTOINC code expects the locks to be released in strict reverse order
at the end of the statement. However, nested stored procedures and partitioned
tables break this rule. We now allow the locks to be deleted from the
trx->autoinc_locks vector in any order, but optimise for the common (old) case.
rb://441 Approved by Marko Makela
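A sketch of the idea, with a plain std::vector standing in for
trx->autoinc_locks (the real code uses InnoDB's own vector type): scanning
from the back keeps the common reverse-order case cheap while still allowing
removal from any position.

  #include <cstddef>
  #include <cstdio>
  #include <vector>

  // Stand-in for an AUTOINC table lock held by a transaction.
  struct autoinc_lock { int table_id; };

  // Remove the lock on the given table from the vector standing in for
  // trx->autoinc_locks.  Scanning from the back keeps the common case
  // (strict reverse-order release) cheap, while still allowing removal
  // from any position, as nested stored procedures and partitioned
  // tables require.
  static bool autoinc_lock_remove(std::vector<autoinc_lock> &locks, int table_id)
  {
    for (std::size_t i = locks.size(); i-- > 0; ) {
      if (locks[i].table_id == table_id) {
        locks.erase(locks.begin() + (std::ptrdiff_t) i);
        return true;
      }
    }
    return false;  // the lock was not held
  }

  int main()
  {
    std::vector<autoinc_lock> locks = { {1}, {2}, {3} };
    autoinc_lock_remove(locks, 3);  // common case: last lock released first
    autoinc_lock_remove(locks, 1);  // out-of-order release is now allowed too
    for (const autoinc_lock &l : locks)
      std::printf("still held: %d\n", l.table_id);
    return 0;
  }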
It is not necessary to support INSERT DELAYED for a single-value insert,
given that we already do not support it for multi-value inserts when the
binlog is enabled in SBR.
The lock_type is upgraded from TL_WRITE_DELAYED to TL_WRITE for a
single-value INSERT DELAYED, just as is already done for multi-value
inserts when the binlog is enabled. This makes it safe, and the statement
is binlogged as INSERT without DELAYED.
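Conceptually the change is a condition on the requested lock type. The
sketch below uses simplified stand-ins for the server's lock types and
binlog state; the exact condition in the server code differs:

  #include <cstdio>

  // Simplified stand-ins for the server's lock types and binlog state.
  enum lock_type { TL_WRITE_DELAYED, TL_WRITE };

  struct session_sketch {
    bool binlog_enabled;
    bool binlog_format_stmt;  // SBR
  };

  // Sketch of the rule: with the binlog enabled in statement format, a
  // single-value INSERT DELAYED asks for a plain write lock, just as
  // multi-value inserts already did, and is later logged as INSERT
  // without the DELAYED keyword.
  static lock_type insert_delayed_lock_type(const session_sketch &s)
  {
    if (s.binlog_enabled && s.binlog_format_stmt)
      return TL_WRITE;         // upgrade: behave like a normal INSERT
    return TL_WRITE_DELAYED;   // DELAYED semantics remain possible
  }

  int main()
  {
    session_sketch s = { true, true };
    std::printf("lock: %s\n",
                insert_delayed_lock_type(s) == TL_WRITE ? "TL_WRITE"
                                                        : "TL_WRITE_DELAYED");
    return 0;
  }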
When the BINLOG statement is used to execute row log events, the session
variables foreign_key_checks and unique_checks are changed temporarily:
each row log event carries its own session environment, whose
foreign_key_checks and unique_checks settings can differ from those of the
session executing the BINLOG statement. However, these variables were not
restored correctly after the BINLOG statement, which could cause subsequent
statements to fail or generate unexpected data.
In this patch, code is added to back up and restore these two variables,
so the BINLOG statement no longer affects the current session's variables.
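A hedged sketch of the backup-and-restore idea, using a small RAII guard
over a simplified session struct (the real fix operates on the corresponding
members of THD):

  #include <cstdio>

  // Simplified stand-in for the two session variables touched by BINLOG.
  struct session_vars {
    bool foreign_key_checks;
    bool unique_checks;
  };

  // RAII guard: remember the caller's settings and put them back no matter
  // how executing the BINLOG statement leaves them.
  class checks_guard {
  public:
    explicit checks_guard(session_vars &v) : vars_(v), saved_(v) {}
    ~checks_guard() { vars_ = saved_; }
  private:
    session_vars &vars_;
    session_vars saved_;
  };

  // Pretend to apply a row log event that carries its own settings.
  static void apply_row_event(session_vars &v, bool fk, bool uniq)
  {
    v.foreign_key_checks = fk;
    v.unique_checks = uniq;
    // ... apply the row changes under these settings ...
  }

  int main()
  {
    session_vars session = { true, true };
    {
      checks_guard guard(session);             // back up the current values
      apply_row_event(session, false, false);  // the event overrides them
    }                                          // restored here
    std::printf("foreign_key_checks=%d unique_checks=%d\n",
                session.foreign_key_checks, session.unique_checks);
    return 0;
  }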
win x86 debug_max
The Windows MTR run exhibited a different test execution
ordering (because on these platforms MTR is invoked with
--parallel > 1). This uncovered a bug in the aforementioned
test case, which is triggered by the following conditions:
1. the server is not restarted between two different tests;
2. the test before binlog.binlog_row_failure_mixing_engines
issues FLUSH LOGS;
3. binlog.binlog_row_failure_mixing_engines uses binlog
positions to limit the output of show_binlog_events;
4. binlog.binlog_row_failure_mixing_engines does not state which
binlog file to use, hence it ends up using the wrong binlog file
with the correct position.
There are two possible fixes: 1. make sure that the test starts
from a clean slate, binlog-wise; 2. in addition to the position,
also state the binary log file before sourcing
show_binlog_events.inc.
We go for fix #1, i.e., deploy a RESET MASTER before the test is
actually started.
OPTIMIZE TABLE recreates the whole table. That is why the counter gets reset.
Making the next autoinc column persistent is a separate issue from resetting
the value after an OPTIMIZE TABLE. We already have a check for ALTER TABLE
and CREATE INDEX to preserve the value on table recreate. We should be able to
add an additional check for OPTIMIZE TABLE to preserve the next value.
rb://519 Approved by Jimmy Yang.
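A hedged sketch of the additional check, with a simplified enum standing in
for the real conditions under which InnoDB rebuilds a table:

  #include <cstdio>

  // Simplified reasons for rebuilding a table; stand-ins only.
  enum recreate_reason { REASON_ALTER_TABLE, REASON_CREATE_INDEX,
                         REASON_OPTIMIZE_TABLE, REASON_OTHER };

  // Sketch of the check: carry the next AUTO_INCREMENT value over to the
  // rebuilt table for ALTER TABLE and CREATE INDEX (the existing cases) and
  // now also for OPTIMIZE TABLE.
  static unsigned long long
  next_autoinc_after_rebuild(recreate_reason reason, unsigned long long old_next)
  {
    switch (reason) {
    case REASON_ALTER_TABLE:
    case REASON_CREATE_INDEX:
    case REASON_OPTIMIZE_TABLE:   // the added case
      return old_next;            // preserve the counter
    default:
      return 1;                   // other recreations start the counter over
    }
  }

  int main()
  {
    std::printf("after OPTIMIZE TABLE: %llu\n",
                next_autoinc_after_rebuild(REASON_OPTIMIZE_TABLE, 101ULL));
    return 0;
  }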
When the sort buffer is low on memory, QUICK_INDEX_MERGE_SELECT creates a
temporary file in which it stores the row ids that match the QUICK_SELECT
ranges, except for the clustered PK range; the clustered range is processed
separately.
In init_read_record we check whether a temporary file is used and choose
the appropriate record access method. This check does not take into account
that the temporary file contains only a partial result when
QUICK_INDEX_MERGE_SELECT is used with a clustered PK range.
The fix is to always use rr_quick when QUICK_INDEX_MERGE_SELECT
with a clustered PK range is used.
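A hedged sketch of the record-access decision, in the spirit of
init_read_record but with simplified flags rather than the server's actual
structures:

  #include <cstdio>

  // Simplified description of the state init_read_record looks at.
  struct read_source {
    bool have_temp_file;             // row ids were spilled to a temporary file
    bool index_merge_with_pk_range;  // QUICK_INDEX_MERGE_SELECT + clustered PK range
  };

  enum access_method { RR_FROM_TEMPFILE, RR_QUICK };

  // Sketch of the fix: the temporary file holds only a partial result when
  // an index-merge scan also has a clustered PK range, so in that case the
  // rows must be read through the quick select (rr_quick) instead.
  static access_method choose_access_method(const read_source &src)
  {
    if (src.index_merge_with_pk_range)
      return RR_QUICK;
    return src.have_temp_file ? RR_FROM_TEMPFILE : RR_QUICK;
  }

  int main()
  {
    read_source src = { true, true };
    std::printf("%s\n", choose_access_method(src) == RR_QUICK
                            ? "rr_quick" : "rr_from_tempfile");
    return 0;
  }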
Text conflict in mysql-test/suite/perfschema/r/dml_setup_instruments.result
Text conflict in mysql-test/suite/perfschema/r/global_read_lock.result
Text conflict in mysql-test/suite/perfschema/r/server_init.result
Text conflict in mysql-test/suite/perfschema/t/global_read_lock.test
Text conflict in mysql-test/suite/perfschema/t/server_init.test
bug #57006 "Deadlock between HANDLER and FLUSH TABLES WITH READ
LOCK" and bug #54673 "It takes too long to get readlock for
'FLUSH TABLES WITH READ LOCK'".
The first bug manifested itself as a deadlock which occurred
when a connection that had some table open through a HANDLER
statement tried to update data through a DML statement while
another connection tried to execute FLUSH TABLES WITH READ LOCK
concurrently.
What happened was that FTWRL in the second connection managed
to perform the first step of GRL acquisition and thus blocked all
upcoming DML. After that it started to wait for the table opened
through the HANDLER statement to be flushed. When the first
connection tried to execute its DML, it started to wait for the
GRL/the second connection, creating a deadlock.
The second bug manifested itself as starvation of FLUSH TABLES
WITH READ LOCK statements in cases when there was a constant
stream of concurrent DML statements (in two or more
connections).
This happened because requests for protection against GRL
acquired by DML statements ignored the presence of a pending GRL,
and thus the latter was starved.
This patch solves both these problems by re-implementing GRL
using metadata locks.
As in the old implementation, acquisition of GRL in the new
implementation is a two-step process. During the first step we
block all concurrent DML and DDL statements by acquiring a global
S metadata lock (each DML and DDL statement acquires a global IX
lock for its duration). During the second step we block commits
by acquiring a global S lock in the COMMIT namespace (the commit
code acquires a global IX lock in this namespace).
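A very schematic sketch of the two-step acquisition; the namespaces, lock
types and functions below are illustrative stand-ins, not the server's MDL
API:

  #include <cstdio>

  // Illustrative two-namespace lock model; these types and functions are
  // stand-ins, not the server's MDL API.
  enum mdl_namespace { NS_GLOBAL, NS_COMMIT };
  enum mdl_type { MDL_INTENTION_EXCLUSIVE, MDL_SHARED };

  struct lock_request {
    mdl_namespace ns;
    mdl_type type;
  };

  // Placeholder for acquiring a lock in the metadata locking subsystem.
  static bool acquire(const lock_request &req)
  {
    std::printf("acquire %s %s\n",
                req.ns == NS_GLOBAL ? "GLOBAL" : "COMMIT",
                req.type == MDL_SHARED ? "S" : "IX");
    return true;
  }

  // Sketch of FLUSH TABLES WITH READ LOCK in the new implementation:
  // step 1 blocks DML/DDL (which hold GLOBAL IX for their duration),
  // step 2 blocks commits (which hold COMMIT IX while committing).
  static bool flush_tables_with_read_lock()
  {
    lock_request step1 = { NS_GLOBAL, MDL_SHARED };
    if (!acquire(step1))
      return false;
    lock_request step2 = { NS_COMMIT, MDL_SHARED };
    return acquire(step2);
  }

  int main() { return flush_tables_with_read_lock() ? 0 : 1; }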
Note that, unlike in the old implementation, acquisition of
protection against GRL in DML and DDL is semi-automatic.
We assume that any statement which should be blocked by GRL
will either open and acquire write locks on tables or acquire
metadata locks on the objects it is going to modify. For any such
statement, a global IX metadata lock is automatically acquired
for its duration.
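The "for its duration" part can be pictured as a scope guard; again the
functions are stand-ins, not the server's MDL API:

  #include <cstdio>

  // Illustrative stand-ins only; not the server's MDL API.
  static void acquire_global_ix() { std::printf("acquire GLOBAL IX\n"); }
  static void release_global_ix() { std::printf("release GLOBAL IX\n"); }

  // The semi-automatic protection against GRL: any statement that
  // write-locks tables or takes metadata locks on the objects it modifies
  // holds a global IX lock for its whole duration, e.g. via a scope guard.
  class global_ix_guard {
  public:
    global_ix_guard()  { acquire_global_ix(); }
    ~global_ix_guard() { release_global_ix(); }
  };

  static void run_dml_statement()
  {
    global_ix_guard protection;  // waits here if FTWRL already holds GLOBAL S
    std::printf("execute DML under GRL protection\n");
  }                              // IX released when the statement ends

  int main()
  {
    run_dml_statement();
    return 0;
  }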
The first problem is solved because waits for GRL become
visible to the deadlock detector in the metadata locking
subsystem, and thus deadlocks like the one in the first bug
become impossible. The second problem is solved because the
global S locks used for the GRL implementation are given
preference over the IX locks acquired by concurrent DML (and we
can switch to fair scheduling in the future if needed).
Important change:
FTWRL/GRL no longer blocks DML and DDL on temporary tables.
Before this patch, behavior was not consistent in this respect:
in some cases DML/DDL statements on temporary tables were
blocked, while in others they were not. Since the main use cases
for FTWRL are various forms of backup, and temporary tables are
not preserved during backups, we have opted for consistently
allowing DML/DDL on temporary tables during FTWRL/GRL.
Important change:
This patch changes the thread state names which are used when
DML/DDL or FTWRL is waiting for the global read lock. The state
is now either "Waiting for global read lock" or "Waiting for
commit lock", depending on the stage FTWRL is at.
Incompatible change:
To solve a deadlock in the events code which was exposed by this
patch, we have to replace the LOCK_event_metadata mutex with
metadata locks on events. As a result, we have to prohibit
DDL on events under LOCK TABLES.
This patch also adds extensive test coverage for interaction
of DML/DDL and FTWRL.
The performance of the new and old global read lock
implementations was compared in sysbench tests. There was no
significant difference between the new and old implementations.
with on duplicate key update
There was a missed corner case in the partitioning
handler which caused next_insert_id to be changed
in the second-level handlers (i.e. the handler of a partition),
which triggered this debug assertion.
The solution was to ensure that only the partitioning
level generates auto_increment values, since if a value were
generated within a partition it might fail to match the
partition function.
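A hedged sketch of the resulting invariant, with simplified classes rather
than the server's ha_partition code: only the top-level partitioning handler
hands out auto_increment values, and partitions merely store what they are
given.

  #include <cstdio>
  #include <vector>

  // Simplified stand-ins for the partitioning handler and its per-partition
  // handlers; this is not the server's ha_partition interface.
  struct partition_handler {
    // A per-partition handler never generates values itself; it only stores
    // the value chosen at the partitioning level.
    void write_row(unsigned long long autoinc_value, unsigned long part_id)
    {
      std::printf("partition %lu stores id %llu\n", part_id, autoinc_value);
    }
  };

  class partitioned_table {
  public:
    explicit partitioned_table(unsigned long parts)
      : next_insert_id_(1), partitions_(parts) {}

    // Only this level hands out auto_increment values: the value is chosen
    // before the partition function decides where the row goes, and
    // next_insert_id of the per-partition handlers is never touched.
    void insert_row()
    {
      unsigned long long id = next_insert_id_++;
      unsigned long part = (unsigned long) (id % partitions_.size());  // toy partition function
      partitions_[part].write_row(id, part);
    }

  private:
    unsigned long long next_insert_id_;
    std::vector<partition_handler> partitions_;
  };

  int main()
  {
    partitioned_table t(4);
    for (int i = 0; i < 5; i++)
      t.insert_row();
    return 0;
  }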