The system variable 'thread_concurrency' has been
(re-)enabled on all platforms, to prevent startup
errors.
'thread_concurrency' is unused and has no effect on
any platform in MySQL 5.1 and later versions. It
will be deprecated and removed in the context of
worklog WL#5265.
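For illustration, a configuration file that still sets the variable
no longer makes the server fail at startup (hypothetical value):

    [mysqld]
    # Accepted on all platforms again, but silently ignored in 5.1+:
    thread_concurrency=8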
After BUG#36649, warnings for sub-statements are cleared when a
new sub-statement is started. This is problematic since it suppresses
warnings for unsafe statements in some cases. It is important that we
always give a warning to the client, because the user needs to know
when there is a risk that the slave goes out of sync.
We fixed the problem by generating warning messages for unsafe
statements when returning from a stored procedure, function, or
trigger, or when executing a top-level statement.
We also started checking unsafeness when both performance and log tables are
used. This is necessary now that the Performance Schema distinguishes
between performance and log tables.
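For example (hypothetical table, assuming binlog_format=STATEMENT;
the exact warning code and text depend on the server version), a
statement whose effect depends on nondeterministic row order now
reliably reaches the client as a warning:

    CREATE TABLE t1 (a INT);
    INSERT INTO t1 VALUES (1), (2);
    DELETE FROM t1 LIMIT 1;  -- LIMIT without ORDER BY: unsafe for
                             -- statement-based replication
    SHOW WARNINGS;  -- expect "Statement may not be safe to log in
                    -- statement format"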
A fix for the bug "FLUSH TABLES WITH READ LOCK and FLUSH TABLES
<list> WITH READ LOCK are incompatible" is to be pushed as a
separate patch.
Replaced the thread state name "Waiting for table", which was
used by threads waiting for a metadata lock or table flush,
with a set of names which better reflect the types of resources
being waited for.
Also replaced the "Table lock" thread state name, which was used
by threads waiting on a thr_lock.c table-level lock, with the
more elaborate "Waiting for table level lock", to make it
more consistent with the other thread state names.
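Illustrative SHOW PROCESSLIST output under the new names (the
exact rows depend on the workload; state names as introduced by
this change):

    SHOW PROCESSLIST;
    -- State: Waiting for table metadata lock   (was: Waiting for table)
    -- State: Waiting for table flush           (was: Waiting for table)
    -- State: Waiting for table level lock      (was: Table lock)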
Updated test cases and their results according to these
changes.
Fixed the sys_vars.query_cache_wlock_invalidate_func test so that
it does not wait for the timeout of the wait_condition.inc script.
******
This patch fixes the following bugs:
- Bug#5889: Exit handler for a warning doesn't hide the warning in
trigger
- Bug#9857: Stored procedures: handler for sqlwarning ignored
- Bug#23032: Handlers declared in a SP do not handle warnings generated
in sub-SP
- Bug#36185: Incorrect precedence for warning and exception handlers
The problem was in the way warnings/errors during stored routine execution
were handled. Prior to this patch the logic was as follows:
- when a warning or error occurs: if we're executing a stored routine,
and there is a handler for that condition, remember the handler,
ignore the warning/error and continue execution.
- after a stored routine instruction is executed: check for a remembered
handler and activate it (if any).
This logic caused several problems:
- if one instruction generated several warnings (or errors), it was
impossible to choose the right handler: the handler for the first
generated condition was chosen and remembered for activation.
- conditions were sometimes handled in scopes other than the current one.
- not putting generated warnings/errors into the Warning Info (Diagnostics
Area) violates the SQL standard.
The patch changes the logic as follows:
- the Diagnostics Area is cleared at the beginning of each statement
that either is able to generate warnings or is able to work with tables.
- at the end of a stored routine instruction, the Diagnostics Area is
left intact.
- the Diagnostics Area is checked after each stored routine instruction.
If an instruction generates several conditions, it's now possible to
look at all of them and determine an appropriate handler.
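A minimal sketch of the new behavior (hypothetical table and
procedure names; assumes a non-strict sql_mode so the INSERT raises
warnings rather than an error):

    CREATE TABLE t1 (a VARCHAR(2));

    DELIMITER //
    CREATE PROCEDURE p1()
    BEGIN
      DECLARE CONTINUE HANDLER FOR SQLWARNING
        SELECT 'SQLWARNING handler activated';
      -- One instruction, several conditions: each value is truncated
      -- and raises its own warning (SQLSTATE 01000). All of them now
      -- reach the Diagnostics Area, so an appropriate handler can be
      -- determined.
      INSERT INTO t1 VALUES ('abc'), ('defg');
    END//
    DELIMITER ;

    CALL p1();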
/*![:version:] Query Code */, where [:version:] is a sequence of 5
digits representing the MySQL server version (e.g. /*!50200 ... */),
is a special comment: the query inside it is executed only on
servers whose version is at least the one appearing in the
comment. This leads to a security issue when the slave's version is
greater than the master's: because the slave SQL thread runs with
SUPER privileges, a malicious user can make it execute queries for
which he/she has no privileges on the master, thereby escalating
his/her privileges on the slaves.
This bug is fixed with the following logic:
- Replace '!' with ' ' in version comments that were not applied on
the master. They thus become ordinary comments and will not be
applied on the slave either.
- Example:
'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /*!99999 ,(3)*/'
will be binlogged as
'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /* 99999 ,(3)*/'
This is a fix for the bug "FLUSH TABLES WITH READ LOCK and FLUSH
TABLES <list> WITH READ LOCK are incompatible".
The problem was that FLUSH TABLES <list> WITH READ LOCK,
when issued while another connection held the global read
lock acquired with FLUSH TABLES WITH READ LOCK, was blocked
and had to wait until the global read lock was released.
This issue stemmed from the fact that the FLUSH TABLES <list>
WITH READ LOCK implementation acquired X metadata locks
on the tables to be flushed. Since these locks require the
global IX lock to be acquired first, the statement was
incompatible with the global read lock.
This patch addresses the problem by using the SNW metadata lock
type for the tables to be flushed by FLUSH TABLES <list> WITH
READ LOCK. It is OK to acquire these locks without the global IX
lock as long as we don't try to upgrade them. Since SNW
locks allow concurrent statements using the same table, FLUSH
TABLES <list> WITH READ LOCK now has to wait, after acquiring the
metadata locks, until old versions of the tables to be flushed
go away. Since such waiting can lead to deadlocks, the
MDL deadlock detector was extended to take into account
waits for flushes and to resolve such deadlocks.
As a bonus, the code in open_tables() which was responsible for
waiting for old versions of tables to go away was refactored.
Now when we encounter an old version of a table in open_table()
we don't back off and wait for all old versions to go away,
but instead wait for this particular table to be flushed.
This approach, supported by deadlock detection, should reduce the
number of scenarios in which FLUSH TABLES aborts concurrent
multi-statement transactions.
Note that an active FLUSH TABLES <list> WITH READ LOCK still
blocks a concurrent FLUSH TABLES WITH READ LOCK statement,
as the former keeps tables open and thus prevents the
latter statement from doing the flush.
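A minimal illustration of the improved compatibility (hypothetical
table names):

    -- Connection 1:
    FLUSH TABLES WITH READ LOCK;
    -- Connection 2: previously blocked until connection 1 released
    -- the global read lock; with SNW locks it no longer needs the
    -- global IX lock and can proceed:
    FLUSH TABLES t1, t2 WITH READ LOCK;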
This patch also fixes Bug#55452 "SET PASSWORD is
replicated twice in RBR mode".
The goal of this patch is to remove the release of
metadata locks from close_thread_tables().
This is necessary to avoid mistakenly releasing
the locks in the course of a multi-step
operation that involves multiple calls to close_thread_tables()
or close_tables_for_reopen().
By the same token, statement commit is moved outside
close_thread_tables().
Other cleanups:
Cleanup COM_FIELD_LIST.
Don't call close_thread_tables() in COM_SHUTDOWN -- there
are no open tables there that can be closed (we leave
the locked tables mode in THD destructor, and this
close_thread_tables() won't leave it anyway).
Make open_and_lock_tables() and open_and_lock_tables_derived()
call close_thread_tables() upon failure.
Remove the calls to close_thread_tables() that are now
unnecessary.
Simplify the back off condition in Open_table_context.
Streamline metadata lock handling in LOCK TABLES
implementation.
Add asserts to ensure correct life cycle of
statement transaction in a session.
Remove a piece of dead code that has also become redundant
after the fix for Bug 37521.
A change in the default values of some config parameters
caused this test to fail; adjust the test and make it more
robust so that it does not fail for the same reason in the future.
The reason for the bug above is unclear, but the following changes
- Modify pfs_upgrade so that its result is easier to analyze in case
something fails.
- Fix several minor weaknesses which could cause a subsequent test
(either an already existing one or one yet to be developed) to fail
because of imperfect cleanup, too-slow disconnecting sessions, etc.
should either fix the bug, reduce its probability, or at least
make the analysis of failures easier.
table with active trx
Essentially, the problem is that InnoDB does an implicit commit
when a cursor (table handler) is unlocked/closed, creating
a dissonance between the transaction state within the server
layer and the storage engine layer. Theoretically, a statement
transaction can encompass several table instances in a similar
manner to a multiple-statement transaction; hence it does not
make sense to limit a statement transaction to the lifetime of
the table instances (cursors) used within it.
Since this particular instance of the problem is only triggerable
on 5.1 and is masked on 5.5 due to 2PC being skipped (the assertion
is in the prepare phase of 2PC), the solution (which is less risky)
is to explicitly end the transaction before the cached table is
unlocked during rename table.
The patch is to be null merged into trunk.
Problem: when SHOW BINLOG EVENTS was issued, it increased the value of
@@session.max_allowed_packet. This allowed a non-root user to increase
the amount of memory used by her thread arbitrarily, removing
the bound on the amount of system resources used by a client and thus
presenting a security risk (DoS attack).
Fix: it is correct to increase the value of @@session.max_allowed_packet
while executing SHOW BINLOG EVENTS (see BUG 30435); however, the
increase should only be temporary. Thus, the fix is to restore the value
when SHOW BINLOG EVENTS ends.
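A sketch of the observable effect after the fix (the user variable
name is illustrative):

    SET @before = @@session.max_allowed_packet;
    SHOW BINLOG EVENTS;
    -- The temporary increase is undone when the statement ends:
    SELECT @@session.max_allowed_packet = @before;  -- returns 1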
The value of @@session.max_allowed_packet is also increased in
mysql_binlog_send (i.e., the binlog dump thread). It is not clear if this
can cause any trouble, since normally the client that issues
COM_BINLOG_DUMP will not issue any other commands that would be affected
by the increased value of @@session.max_allowed_packet. However, we
restore the value just in case.