Problem:-
The condition that checks for node readiness is too strict, as it does
not allow SELECTs even when they do not access any tables.
For example, if we run
SELECT 1;
or
SELECT @@max_allowed_packet;
Solution:-
We need not report this error when all_tables (lex->query_tables)
is NULL.
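A minimal standalone sketch of the intended check, using simplified
stand-ins for the server's LEX / TABLE_LIST structures (the real wsrep
readiness logic and types are not reproduced here):

  #include <cstddef>

  // Simplified stand-ins for the server's LEX / TABLE_LIST structures.
  struct TABLE_LIST { TABLE_LIST *next; };
  struct LEX { TABLE_LIST *query_tables; };  // tables the statement touches

  // Sketch: only reject the statement on an unready node when it actually
  // accesses tables; SELECT 1 or SELECT @@max_allowed_packet pass through.
  bool must_reject_for_unready_node(const LEX *lex, bool node_ready)
  {
    if (node_ready)
      return false;                   // node is ready, nothing to check
    return lex->query_tables != NULL; // table-less statements are allowed
  }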
ANALYSIS:
=========
To set the query 'start_time' in THD, the current time is
obtained by calling 'gettimeofday()'. On the Solaris
platform, due to system-level issues, the time obtained can
be invalid, i.e. it is either beyond 2038 (the maximum year
a signed 32-bit seconds-since-1970 value can represent) or
exactly 1970 (0 seconds since 1970). In these cases, the
validation checks infer that 'start_time' is invalid and the
mysql server initiates the shutdown process, but the reason
for the shutdown is not logged.
FIX:
====
We now log an appropriate message when shutdown is triggered
in the scenarios mentioned above. In addition, even if the
initial validation checks infer that 'start_time' is
invalid, server shutdown is not initiated immediately.
Before initiating the shutdown, the process of setting
'start_time' and validating it is retried (at most 5 times).
If a correct time is obtained within these 5 retries, the
server continues to run.
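A minimal standalone sketch of the retry idea, assuming a hypothetical
is_valid_start_time() check and MAX_RETRIES constant (this is not the
actual server code, which works on THD):

  #include <sys/time.h>
  #include <cstdio>

  // Hypothetical validity window: reject 0 (the 1970 epoch) and anything
  // past the signed 32-bit seconds-since-1970 limit (year 2038).
  static bool is_valid_start_time(const struct timeval &tv)
  {
    return tv.tv_sec > 0 && tv.tv_sec < 0x7FFFFFFF;
  }

  // Sketch of the retry loop: re-read the clock up to 5 times before
  // treating the time as unrecoverably invalid.
  static bool set_start_time(struct timeval *start_time)
  {
    const int MAX_RETRIES= 5;
    for (int attempt= 0; attempt < MAX_RETRIES; attempt++)
    {
      gettimeofday(start_time, nullptr);
      if (is_valid_start_time(*start_time))
        return true;                  // got a sane time, keep running
      fprintf(stderr, "invalid time from gettimeofday(), retrying (%d/%d)\n",
              attempt + 1, MAX_RETRIES);
    }
    return false;                     // caller logs the reason and shuts down
  }

  int main()
  {
    struct timeval tv;
    if (!set_start_time(&tv))
      fprintf(stderr, "could not obtain a valid start_time; shutting down\n");
    return 0;
  }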
Use get_current_user() to distinguish a user name without
a hostname from a role name.
Move the privilege checks inside mysql_show_grants() to remove
duplicate get_current_user() calls.
This change also fixes a connection hang when trying to do INSERT DELAYED on a crashed table.
Added crash_mysqld.inc to allow easy crash + restart of mysqld.
The cause of the issue: while DROP DATABASE holds the
metadata lock and is partway through its execution, a
concurrently running CREATE FUNCTION checks for the
existence of the database, which succeeds, and then waits
on the metadata lock. Once DROP DATABASE writes to the
binlog and finally releases the metadata lock on the schema
object, the waiting CREATE FUNCTION resumes its code path,
succeeds, and writes to the binlog.
Fix remaining issues with wsrep_sync_wait and the query cache.
- Fix a misplaced call to invalidate the query cache in
  Rows_log_event::do_apply_event().
  The query cache was invalidated too early, which allowed old
  entries to be inserted into the cache.
- Reset thd->wsrep_sync_wait_gtid on a query cache hit.
  THD::cleanup_after_query() is not called in that case,
  so thd->wsrep_sync_wait_gtid remained initialized.
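A minimal standalone sketch of the second point, with a simplified THD
stand-in (the field type, the cache lookup and dispatch_select() here are
modeled for illustration, not the real implementations):

  #include <cstdint>
  #include <string>

  // Simplified stand-in for the relevant THD state.
  struct THD
  {
    uint64_t wsrep_sync_wait_gtid= 0;  // GTID the statement waited for
    void cleanup_after_query() { wsrep_sync_wait_gtid= 0; }
  };

  // Hypothetical cache lookup stub: pretend every query misses the cache.
  static bool query_cache_send_result(THD *, const std::string &)
  {
    return false;
  }

  // Sketch: on a cache hit the statement short-circuits and
  // cleanup_after_query() never runs, so the per-statement GTID must be
  // reset explicitly before returning.
  bool dispatch_select(THD *thd, const std::string &query)
  {
    if (query_cache_send_result(thd, query))
    {
      thd->wsrep_sync_wait_gtid= 0;   // would otherwise leak into next query
      return true;
    }
    // ... execute the statement normally; cleanup_after_query() resets state.
    thd->cleanup_after_query();
    return true;
  }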
The admin commands in question are:
> OPTIMIZE
> REPAIR
> ANALYZE
For LOCAL or NO_WRITE_TO_BINLOG invocations of these commands, e.g.
OPTIMIZE LOCAL TABLE <t1>
they are not binlogged, as expected.
In addition, they are not executed under TOI.
Hence, they are not propagated to other nodes.
The effect is the same as that of wsrep_on=0 (see the sketch below).
Tests for this have also been added.
A WSREP_DEBUG message for wsrep_register_hton has also been added.
The galera_flush_local test has also been updated to verify that the effects
of NO_WRITE_TO_BINLOG / LOCAL are equivalent to wsrep_on=0 from the wsrep
perspective.
(cherry picked from commit 5065122f94a8002d4da231528a46f8d9ddbffdc2)
Conflicts:
sql/sql_admin.cc
sql/sql_reload.cc
sql/wsrep_hton.cc
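A minimal standalone sketch of the behaviour described above (skipping TOI
for LOCAL / NO_WRITE_TO_BINLOG admin commands), using hypothetical names
rather than the real wsrep/TOI plumbing:

  // Hypothetical types modeling the decision; not the real server symbols.
  enum class AdminCmd { OPTIMIZE, REPAIR, ANALYZE };

  struct AdminStatement
  {
    AdminCmd cmd;
    bool no_write_to_binlog;  // true for OPTIMIZE/REPAIR/ANALYZE LOCAL ...
  };

  // Sketch: a LOCAL / NO_WRITE_TO_BINLOG admin command is neither binlogged
  // nor run under total order isolation (TOI), so it stays node-local --
  // the same effect as running it with wsrep_on=0.
  bool should_run_under_toi(const AdminStatement &stmt, bool wsrep_on)
  {
    if (!wsrep_on)
      return false;               // replication disabled for this session
    if (stmt.no_write_to_binlog)
      return false;               // LOCAL: keep the command on this node only
    return true;                  // otherwise replicate via TOI
  }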
- Moves the call to wsrep_free_status() into THD::cleanup_after_query().
  Wsrep status variables were previously freed only on SHOW STATUS.
- Removes the valgrind suppression from mysql-test/valgrind.
- Added a check for whether the connection is killed, to short-circuit
  reading of connection data.
  This will allow us later in 10.2 to do a cleaner shutdown of slaves
  (fewer errors in the log).
- Add new status variables: Slaves_connected, Slaves_running and Slave_connections.
- Use MYSQL_SLAVE_NOT_RUN instead of 0 with slave_running.
- Don't print obvious extra warnings to the error log when the slave is shut down normally.
The problem is that FLUSH TABLES WITH READ LOCK first blocks threads from
starting new commits, then waits for running commits to complete. But
in-order parallel replication needs commits to happen in a particular
order, so this can easily deadlock.
To fix this problem, this patch introduces a way to temporarily pause
the parallel replication worker threads. Before starting FTWRL, we let
all worker threads complete in-progress transactions, and then
wait. Then we proceed to take the global read lock. Once the lock is
obtained, we unpause the worker threads. Now commits are blocked from
starting by the global read lock, so the deadlock will no longer occur.
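A minimal standalone sketch of the pause/resume idea using a mutex and
condition variable (the names and structure here are illustrative, not the
actual rpl_parallel code):

  #include <condition_variable>
  #include <mutex>

  // Illustrative pause gate for replication worker threads.
  class WorkerPauseGate
  {
    std::mutex m;
    std::condition_variable cv;
    bool paused= false;

  public:
    // Called by the FTWRL path once in-progress transactions have been
    // allowed to finish (that waiting is omitted from this sketch); holds
    // workers here while the global read lock is being taken.
    void pause()
    {
      std::lock_guard<std::mutex> lk(m);
      paused= true;
    }

    // Called once the global read lock is obtained; from then on the lock
    // itself blocks new commits, so the workers can be released.
    void resume()
    {
      { std::lock_guard<std::mutex> lk(m); paused= false; }
      cv.notify_all();
    }

    // Called by each worker thread between transactions.
    void wait_if_paused()
    {
      std::unique_lock<std::mutex> lk(m);
      cv.wait(lk, [this] { return !paused; });
    }
  };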