In fact, in MariaDB it cannot, but it can show spurious slaves
in SHOW SLAVE HOSTS.
A slave was registered in COM_REGISTER_SLAVE and unregistered after
COM_BINLOG_DUMP. If there was no COM_BINLOG_DUMP, it would never
unregister.
If a translation table is present when we materialize the derived table,
change it to point to the materialized table.
Added debug info to see what really happens with which derived table.
This is a backport of WL#7195 to MySQL-5.5. In 5.5, we
offload connection authentication from the acceptor
thread to tp worker threads.
Connection authentication happens in the acceptor thread that
accepts the connection for the thread pool plugin. Connection
authentication involves exchanging packets with the client and doing
disk I/O, which is time consuming. This can cause other client
connections to starve and wait in the queue, possibly increasing
connect latency and decreasing throughput. In the worst case, some
connections could be dropped. In addition, SSL handshakes are quite
expensive and can stall connections in the accept queue.
This patch offloads connection authentication when the thread pool
plugin is used for client connections. Each thread group
has a queue of connection_context objects, which represent
new connections that need to be processed by thread group threads.
The connection context is composed of the THD object and list pointers
for an intrusive queue implementation. Whenever a new connection
arrives, a connection context object is created and added to the
queue. A new connect handler thread is created or woken up to handle
the authentication task. The worker thread loop is modified to
process connection events on connect handler threads in addition to
checking for query processing events. The initial number of connect
handler threads is one per thread group, and it is restricted to
a maximum of 4 threads per thread group.
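A minimal sketch of the intrusive queue described above, assuming
simplified stand-ins for THD and the thread-group structures (the real
server types and wakeup logic differ):

  #include <mutex>

  struct THD;  // server connection descriptor (opaque here)

  // Connection context: a THD plus an intrusive list pointer, so
  // queueing needs no extra allocation beyond the context itself.
  struct connection_context {
    THD *thd;
    connection_context *next = nullptr;
  };

  // One such queue per thread group; connect handler threads pop from it.
  struct connection_queue {
    std::mutex lock;
    connection_context *head = nullptr, *tail = nullptr;

    void push(connection_context *c) {  // called on accept
      std::lock_guard<std::mutex> g(lock);
      c->next = nullptr;
      if (tail) tail->next = c; else head = c;
      tail = c;
      // ...wake or create a connect handler thread (max 4 per group)...
    }

    connection_context *pop() {  // called by connect handler threads
      std::lock_guard<std::mutex> g(lock);
      connection_context *c = head;
      if (c) { head = c->next; if (!head) tail = nullptr; }
      return c;
    }
  };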
it used current_thd->alloc() and allocated on the thd's execution arena,
not on table->expr_arena.
Remove THD::arena_for_cached_items, which was temporarily set in
update_virtual_fields() and replaced the THD arena in get_datetime_value().
Instead, set the THD arena to table->expr_arena for the whole duration
of update_virtual_fields().
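An illustrative sketch of the arena switch, assuming stand-in Arena and
THD types rather than the server's Query_arena machinery:

  // RAII guard: make table->expr_arena the active arena for the whole
  // duration of update_virtual_fields(), then restore the old one.
  struct Arena { /* memroot-backed allocator */ };

  struct THD_like {
    Arena *active_arena;
  };

  class Arena_guard {
    THD_like *thd;
    Arena *saved;
   public:
    Arena_guard(THD_like *t, Arena *a) : thd(t), saved(t->active_arena) {
      thd->active_arena = a;  // allocations now land on expr_arena
    }
    ~Arena_guard() { thd->active_arena = saved; }  // restore on exit
  };

  void update_virtual_fields_sketch(THD_like *thd, Arena *expr_arena) {
    Arena_guard guard(thd, expr_arena);
    // ...evaluate virtual column expressions; cached items created here
    // now live on expr_arena, matching the table's lifetime...
  }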
RESTRICTED IN ALL GA RELEASES
Backport of WL#6782 to 5.5 and 5.6. This also includes
backports of Bug#20771331, Bug#20741572 and Bug#20770671.
Bug#24695274 and Bug#24679907 are also handled along with
this.
Problem:
In debug builds, there is a chance that an out-of-bounds
read is performed when tables are locked in
LTM_PRELOCKED_UNDER_LOCK_TABLES mode. It can happen because
the debug code uses enum values as an index into an array of
mode descriptions, but it only takes into consideration 3
out of the 4 enum values.
Fix:
This patch fixes it by implementing a getter for the enum which
returns a string representation of the enum,
effectively removing the out-of-bounds read.
Moreover, it also fixes the lock mode descriptions that
would be printed out in debug builds.
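A minimal sketch of the switch-based getter: unlike indexing a
3-element description array with a 4-value enum, a switch covers every
enumerator and cannot read out of bounds (the enum names mirror the
server's; the function name is illustrative):

  enum enum_locked_tables_mode {
    LTM_NONE,
    LTM_LOCK_TABLES,
    LTM_PRELOCKED,
    LTM_PRELOCKED_UNDER_LOCK_TABLES
  };

  static const char *locked_tables_mode_name(enum_locked_tables_mode m) {
    switch (m) {
      case LTM_NONE:        return "LTM_NONE";
      case LTM_LOCK_TABLES: return "LTM_LOCK_TABLES";
      case LTM_PRELOCKED:   return "LTM_PRELOCKED";
      case LTM_PRELOCKED_UNDER_LOCK_TABLES:
                            return "LTM_PRELOCKED_UNDER_LOCK_TABLES";
    }
    return "<unknown>";  // debug output stays safe even for bad values
  }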
Problem:
At the end of the first execution, select_lex->prep_where is pointing to
a runtime-created object (a temporary table field). As a result, the
server exits trying to access an invalid pointer during the second
execution.
Analysis:
While optimizing the join conditions for the query, after the
permanent transformation, the optimizer makes a copy of the new
where conditions in select_lex->prep_where. "prep_where" is what
is used as the "where condition" for the query at the start of execution.
For the query in question, the "where" condition is actually pointing
to a field in the temporary table. As a result, for the second
execution the pointer is no longer valid, resulting in a server exit.
Fix:
At the end of the first execution, select_lex->where will have the
original item of the where condition.
Make prep_where the new place to which the original item of select->where
has to be rolled back.
Fixed in 5.7 with WL#7082 - Move permanent transformations from
JOIN::optimize to JOIN::prepare.
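An illustrative sketch of the rollback idea, with Item and SELECT_LEX as
opaque stand-ins for the server types:

  struct Item;  // parse-tree node (opaque here)

  struct SELECT_LEX_sketch {
    Item *where;       // may end up pointing into a temporary table
    Item *prep_where;  // per the fix: holds the permanent original item
  };

  void end_of_execution_sketch(SELECT_LEX_sketch *sl, Item *original_where) {
    // Roll the statement back to its permanent form so the next
    // execution never starts from a pointer into a dropped temp table.
    sl->prep_where = original_where;
    sl->where = original_where;
  }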
Patch for 5.5 includes the following backports from 5.6:
Bugfix for Bug12603141 - This makes the first execute statement in the
testcase pass in 5.5.
However, it was noted later in Bug16163596 that the above bugfix needed to
be modified. Although Bug16163596 is reproducible only with the changes
done for Bug12582849, we have decided to include the fix.
Considering that Bug12582849 is related to Bug12603141, its fix is
also included here. However, this results in Bug16317817, Bug16317685 and
Bug16739050, so the fix for those three bugs is also part of this patch.
Problem & Analysis: If DML invokes a trigger or a
stored function that inserts into an AUTO_INCREMENT column,
that DML has to be marked as an 'unsafe' statement. If the
tables are locked in the transaction prior to the DML statement
(using LOCK TABLES), then the same statement is not marked as
'unsafe'. The unsafeness check is guarded by
if (!thd->locked_tables_mode). Hence, if
we lock the tables prior to the DML statement, execution never
enters this if condition, and the statement is not marked
as unsafe.
Fix: The unsafeness check should be done irrespective of the
locked_tables_mode value. With this patch, the code is moved
into the decide_logging_format() function, where all these checks
happen, and without the 'if (!thd->locked_tables_mode)' guard.
Along with the test case specified in the bug scenario
(BINLOG_STMT_UNSAFE_AUTOINC_COLUMNS), we also identified that the
other cases BINLOG_STMT_UNSAFE_AUTOINC_NOT_FIRST,
BINLOG_STMT_UNSAFE_WRITE_AUTOINC_SELECT and BINLOG_STMT_UNSAFE_INSERT_TWO_KEYS
were also guarded by thd->locked_tables_mode, which is not right. All
of those checks were also moved to the decide_logging_format() function.
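A sketch of the shape of the fix, with THD and the unsafe flags as
simplified stand-ins: the check runs unconditionally in
decide_logging_format(), not behind if (!thd->locked_tables_mode):

  struct THD_sketch {
    bool locked_tables_mode = false;
    unsigned binlog_unsafe_flags = 0;
    void set_stmt_unsafe(unsigned flag) { binlog_unsafe_flags |= flag; }
  };

  enum { BINLOG_STMT_UNSAFE_AUTOINC_COLUMNS = 1 << 0 };

  void decide_logging_format_sketch(THD_sketch *thd,
                                    bool writes_autoinc_via_trigger_or_func) {
    // Before the fix this check sat inside if (!thd->locked_tables_mode),
    // so LOCK TABLES made the statement silently "safe". Now it always runs.
    if (writes_autoinc_via_trigger_or_func)
      thd->set_stmt_unsafe(BINLOG_STMT_UNSAFE_AUTOINC_COLUMNS);
  }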
The old code used pthread_setspecific() to store temporary data used by
the thread. This is not safe when used with the thread pool, as the thread
may change for the transaction.
The fix is to save the data in the THD, which is guaranteed to be properly
freed. I also fixed the code so that we don't do a malloc() for every
transaction.
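An illustrative contrast, not the actual server code: per-transaction
scratch data keyed off the OS thread breaks when the thread pool resumes
the transaction on a different thread, while data kept in the THD follows
the connection and is freed with it:

  #include <cstddef>
  #include <vector>

  struct TxnScratch {
    std::vector<char> buf;  // reused buffer: no malloc per transaction
  };

  struct THD_sketch2 {
    TxnScratch scratch;     // owned by the connection, not the OS thread
  };

  void do_transaction_work(THD_sketch2 *thd, std::size_t needed) {
    if (thd->scratch.buf.size() < needed)
      thd->scratch.buf.resize(needed);  // grows once, then gets reused
    // ...any pooled thread running this THD sees the same scratch data...
  }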
On disconnect, the THD must clean the user_var_events array before
dropping temporary tables. Otherwise, when binlogging a DROP,
it would access user_var_events, but they were allocated
in the already-freed memroot.
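A sketch of the ordering constraint on disconnect (simplified; the real
cleanup path differs): the events must be cleared before the temp-table
drop that may binlog and read them:

  #include <vector>

  struct THD_sketch3 {
    std::vector<void *> user_var_events;  // entries point into a memroot

    void free_user_var_memroot() { /* releases the backing memory */ }
    void drop_temporary_tables() { /* may binlog DROP statements */ }

    void cleanup_on_disconnect() {
      user_var_events.clear();   // the fix: clear first
      free_user_var_memroot();   // now nothing dangles into this memroot
      drop_temporary_tables();   // binlogging no longer reads freed events
    }
  };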
~40% of the bugfixes(*) applied
~40% of the bugfixes reverted (incorrect, or we're not buggy)
~20% of the bugfixes applied, despite us not being buggy
(*) only changes in the server code, e.g. not cmakefiles
SHOW PROCESSLIST, SHOW BINLOGS
Problem: A deadlock was occurring when 4 threads were
involved in acquiring locks in the following way:
Thread 1: Dump thread (the slave is reconnecting, so on the
          master a new dump thread is trying to kill
          zombie dump threads). It acquired the thread's
          LOCK_thd_data and is about to acquire
          mysys_var->current_mutex (which is LOCK_log).
Thread 2: Application thread executing SHOW BINLOGS; it
          acquired LOCK_log and is about to acquire
          LOCK_index.
Thread 3: Application thread executing PURGE BINARY LOGS; it
          acquired LOCK_index and is about to
          acquire LOCK_thread_count.
Thread 4: Application thread executing SHOW PROCESSLIST; it
          acquired LOCK_thread_count and is
          about to acquire the zombie dump thread's
          LOCK_thd_data.
Deadlock Cycle:
Thread 1 -> Thread 2 -> Thread 3 -> Thread 4 -> Thread 1
The same deadlock was observed even when thread 4 was
executing 'SELECT * FROM information_schema.processlist', having
acquired LOCK_thread_count and being about to acquire the zombie
dump thread's LOCK_thd_data.
Analysis:
There are four locks involved in the deadlock: LOCK_log,
LOCK_thread_count, LOCK_index and LOCK_thd_data.
LOCK_log, LOCK_thread_count and LOCK_index are global mutexes,
whereas LOCK_thd_data is local to a thread.
We can divide these four locks into two groups.
Group 1 consists of LOCK_log and LOCK_index, and the order
should be LOCK_log followed by LOCK_index.
Group 2 consists of the other two mutexes,
LOCK_thread_count and LOCK_thd_data, and the order should
be LOCK_thread_count followed by LOCK_thd_data.
Unfortunately, there is no predefined lock order
to follow in the MySQL system when it comes to locks across these
two groups. In the problematic example above,
there is no problem in the way the locks are acquired
if you look at each thread individually.
But if you combine all 4 threads, they end up in a deadlock.
Fix:
Since the way each thread takes locks seems fine individually,
this patch changes the duration of the locks in Thread 4
to break the deadlock. Before the patch, Thread 4's
('SHOW PROCESSLIST' command) mysqld_list_processes()
function acquired LOCK_thread_count for the complete duration
of the function, and it also acquired/released
each thread's LOCK_thd_data.
LOCK_thread_count is used to protect addition and
deletion of threads in the global threads list. While SHOW
PROCESSLIST is looping through all the existing threads,
it is a problem if a thread exits, but there is no problem
if a new thread is added to the system. Hence, a new mutex,
"LOCK_thd_remove", is introduced to protect deletion
of a thread from the global threads list. All exiting threads
should acquire LOCK_thd_remove
followed by LOCK_thread_count. (They should take LOCK_thread_count
as well because other places in the code still assume that thread exit
is protected by LOCK_thread_count. In this fix, we change
only the 'SHOW PROCESSLIST' query logic.)
(E.g.: the unlink_thd logic will be protected with
LOCK_thd_remove.)
The logic of mysqld_list_processes (or fill_schema_processlist)
will now be protected with 'LOCK_thd_remove' instead of
'LOCK_thread_count'.
The new locking order after this patch is:
LOCK_thd_remove -> LOCK_thd_data -> LOCK_log ->
LOCK_index -> LOCK_thread_count
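A sketch of the post-fix iteration in the spirit of
mysqld_list_processes(): hold the narrower LOCK_thd_remove (blocking only
thread removal) while walking the list and taking each THD's
LOCK_thd_data. The mutex names match the description above; the
surrounding code is simplified:

  #include <list>
  #include <mutex>

  struct THD_entry { std::mutex LOCK_thd_data; /* ...state... */ };

  std::mutex LOCK_thd_remove;  // new: guards removal from the list
  std::list<THD_entry *> global_thread_list;

  void list_processes_sketch() {
    std::lock_guard<std::mutex> g(LOCK_thd_remove);  // not LOCK_thread_count
    for (THD_entry *thd : global_thread_list) {
      std::lock_guard<std::mutex> d(thd->LOCK_thd_data);
      // ...copy the fields needed for one processlist row...
    }
  }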
The fill_schema_table() function used to call get_table_share() for a
table named in WHERE and then clear the error list. As a result, plugins
received a superfluous error notification if an error happened there.
Also, the error handler didn't prevent the suppressed error message from
being logged anyway, as the logging happens in THD::raise_condition
before the handler call.
Trigger_error_handler is remade into Warnings_only_error_handler, which
stores the error message in thd->stmt_da in all cases.
The stored error is then raised later.
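A sketch of the Warnings_only_error_handler idea with a simplified
interface (the server's handler hooks differ):

  #include <string>

  struct Stored_error { unsigned code = 0; std::string msg; };

  class Warnings_only_error_handler_sketch {
    Stored_error last;
   public:
    // Returning true means "handled": the condition is not raised (or
    // logged) immediately; it is stored in all cases instead.
    bool handle_condition(unsigned code, const char *msg) {
      last.code = code;
      last.msg = msg;
      return true;
    }
    const Stored_error &stored() const { return last; }  // raised later
  };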
The reason for the bug was an optimization for higher connect speed, where
we moved where the global status was updated
but forgot to update the status when a slave thread dies.
Fixed by adding thd->add_status_to_global() before deleting the slave
thread's thd.
mysys/my_delete.c:
Added missing newline
sql/mysqld.cc:
Use add_status_to_global()
sql/slave.cc:
Added missing add_status_to_global()
sql/sql_class.cc:
Use add_status_to_global()
sql/sql_class.h:
Simplify adding local status to global by adding add_status_to_global()
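A hypothetical sketch of the add_status_to_global() shape: fold the
connection's local counters into the globals under LOCK_status, then
reset the local copy (field names are illustrative):

  #include <mutex>

  struct Status_vars { unsigned long long questions = 0, bytes_sent = 0; };

  std::mutex LOCK_status;
  Status_vars global_status;

  struct THD_sketch4 {
    Status_vars status;

    void add_status_to_global() {
      std::lock_guard<std::mutex> g(LOCK_status);
      global_status.questions  += status.questions;
      global_status.bytes_sent += status.bytes_sent;
      status = Status_vars{};  // don't count the same deltas twice
    }
  };

  // The slave fix above: call thd->add_status_to_global() right before
  // the slave thread's THD is deleted, so its counters aren't lost.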
on server shutdown after SELECT with CONVERT_TZ
It's wrong to return my_empty_string from val_str().
Removing my_empty_string. Using make_empty_result() instead.
Plugins get error notifications only when my_message_sql() is called,
but errors are raised with THD::raise_condition() calls in other
places: push_warning() and the implementations of the SIGNAL and
RESIGNAL commands.
So it makes sense to notify plugins there too, in THD::raise_condition().
(and valgrind warnings)
* move thd userstat initialization to the same function
that was adding thd userstat to global counters.
* initialize thd->start_bytes_received in THD::init
(when thd->userstat_running is set)
ORDER BY does not work
Use "dynamic" row format (instead of "block") for MARIA internal
temporary tables created for cursors.
With "block" row format MARIA may shuffle rows, with "dynamic" row
format records are inserted sequentially (there are no gaps in data
file while we fill temporary tables).
This is needed to preserve row order when scanning materialized cursors.
Description:
The original fix for Bug#11765744 changed a mutex to a read-write lock
to avoid multiple recursive lock acquisitions on the
LOCK_status mutex.
On Windows, locking a read-write lock recursively is not safe.
Slim read-write locks, which MySQL uses if they are supported by the
Windows version, do not support recursion according to their
documentation. For our own implementation of read-write locks,
which is used when the Windows version doesn't support SRW locks,
recursive locking can easily lead to deadlock
if there are concurrent lock requests.
Fix:
This patch reverts the previous fix for Bug#11765744 that used
read-write locks. Instead, the problem of recursive locking of the
LOCK_status mutex is solved by tracking the recursion level with a
counter in the THD object and acquiring the lock only once, when we
enter the fill_status() function for the first time.
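A minimal sketch of the recursion-counter pattern the fix describes: a
per-THD depth counter so LOCK_status is taken only by the outermost
fill_status() call (names simplified):

  #include <mutex>

  std::mutex LOCK_status;

  struct THD_sketch5 { int fill_status_recursion_level = 0; };

  void fill_status_sketch(THD_sketch5 *thd) {
    if (thd->fill_status_recursion_level++ == 0)
      LOCK_status.lock();    // outermost call only
    // ...collect status variables; may re-enter fill_status_sketch()...
    if (--thd->fill_status_recursion_level == 0)
      LOCK_status.unlock();  // released when the outermost call returns
  }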
SERIALIZABLE
Problem:
The documentation claims that WITH CONSISTENT SNAPSHOT will work for both
the REPEATABLE READ and SERIALIZABLE isolation levels, but it works only
for the REPEATABLE READ isolation level. Also, the WITH CONSISTENT
SNAPSHOT clause is silently ignored when it is not applicable to the
given isolation level.
Solution:
Generate a warning when the clause WITH CONSISTENT SNAPSHOT is ignored.
rb#2797 approved by Kevin.
Note: Support team wanted to push this to 5.5+.
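A sketch (not the server's actual code) of where the warning belongs:
when the snapshot clause is requested under an isolation level other than
REPEATABLE READ, report it instead of ignoring it silently:

  #include <cstdio>

  enum enum_tx_isolation {
    ISO_READ_COMMITTED, ISO_REPEATABLE_READ, ISO_SERIALIZABLE
  };

  void begin_consistent_snapshot_sketch(enum_tx_isolation iso) {
    if (iso != ISO_REPEATABLE_READ) {
      // In the server this would be a push_warning()-style call on the THD.
      std::fprintf(stderr, "Warning: WITH CONSISTENT SNAPSHOT was ignored "
                           "at this isolation level\n");
      return;  // clause ignored, but now visibly
    }
    // ...open the consistent read view here...
  }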