Problem:
=======
Executing command, "mysqlbinlog --read-from-remote-server --host='xx.xx.xx.xx'
--port=3306 --user=xxx --password=xxx --database=mysql --to-last-log
mysql-bin.000001 --start-position=1098699 --stop-never |mysql -uxxx -pxxx", we
found that last data read from remote couldn't commit.
Analysis:
========
The purpose of 'Write_on_release_cache' is that the contents of the cache are
automatically written to a dedicated result file on destruction. The flush
operation on the result file is controlled by a flag 'FLUSH_F'. Events that
require a forced flush upon their destruction have to enable
'Write_on_release_cache::FLUSH_F'. At present the 'FLUSH_F' flag is defined as
an enum as shown below.
enum flag
{
  FLUSH_F
};
Since 'FLUSH_F' is the first member and has no explicit initializer, it gets
the default value '0'. Because of this the following flush condition never
succeeds.
if (m_flags & FLUSH_F)
  fflush(m_file);
At present the file gets flushed only during the my_fclose(result_file)
operation. When continuous streaming is enabled through the --stop-never
option the file is never closed, so it is never flushed and hence events are
not replicated.
Fix:
===
Initialize the enum value to a non-zero value.
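A minimal sketch of the fix (any non-zero bit value works for the flag test; the exact value chosen may differ):

enum flag
{
  FLUSH_F= 1
};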
The problem was that sp_head::MULTI_RESULTS was not set correctly for an ANALYZE statement
with SELECT ... INTO variable.
This is a follow-up fix for MDEV-7023
cmake -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Debug
Maintainer mode makes all warnings errors. This patch fixes the warnings,
mostly about the deprecated `register` keyword.
Too many warnings came from Mroonga, so I gave up on it.
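An illustration of the typical change (the function and variable names here are hypothetical): dropping the obsolete `register` storage-class specifier silences clang's -Wdeprecated-register warning, which maintainer mode turns into an error.

static int sum(const int *arr, int n)
{
  int total= 0;                  /* was: register int total= 0; */
  for (int i= 0; i < n; i++)
    total+= arr[i];
  return total;
}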
in where clause
The classes Item_func_isnottrue and Item_func_isnotfalse inherited the
implementation of the eval_not_null_tables method from the Item_func
class. As a result the not_null_tables_cache was set incorrectly for
objects of these classes. This led to improper conversion of outer
joins to inner joins when the WHERE clause of the processed query
contained IS NOT TRUE or IS NOT FALSE predicates. The converted query
in many cases produced a wrong result set.
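One plausible shape of the fix (the signature is assumed from the surrounding Item hierarchy; the committed code may differ): override the inherited method so that these predicates contribute no not-null tables, since IS NOT TRUE / IS NOT FALSE can be satisfied even when the argument is NULL.

bool Item_func_isnottrue::eval_not_null_tables(void *opt_arg)
{
  /* Hypothetical override: an IS NOT TRUE predicate does not imply
     that any table of its argument produces non-NULL rows. */
  not_null_tables_cache= 0;
  return false;
}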
Plugin fixed to not lock the LOCK_operations when not active.
Server fixed to lock the LOCK_plugin less - do it once per
thread and then only if a plugin was installed/uninstalled.
This patch fixes 10.2 issue reported in MDEV-16467 by partial backport of
c2118a0. Specifically "Remove not needed LOCK_thread_count from
thd_get_error_context_description()".
Handling of top level conjuncts in WHERE whose used_tables() contained
RAND_TABLE_BIT in the function make_join_select() was incorrect.
As a result, if such a conjunct referred to fields none of which belonged
to the last joined table, it was pushed twice. (This could be seen
for a test case from subselect.test whose output was changed after this
patch had been applied. In 10.1 when running EXPLAIN FORMAT=JSON for
the query from this test case we clearly see that one of the conjuncts
is pushed twice.) This fact by itself was not good. Besides, if such a
conjunct was pushed to a table that was the result of materialization
of a semi-join the query could return a wrong result set. In particular
we could observe it for queries with semi-join subqueries whose left parts
used stored functions without the 'deterministic' specifier.
as well as
MDEV-19500 Update with join stopped working if there is a call to a procedure in a trigger
MDEV-19521 Update Table Fails with Trigger and Stored Function
MDEV-19497 Replication stops because table not found
MDEV-19527 UPDATE + JOIN + TRIGGERS = table doesn't exists error
Reimplement the fix for (5d510fdbf0)
MDEV-18507 can't update temporary table when joined with table with triggers on read-only
instead of calling open_tables() twice, put the multi-update
prepare code inside the open_tables() loop.
Add a test for an MDL backoff-and-retry loop inside open_tables()
across the multi-update prepare code.
Problem:
========
Following typo in error log:
2019-03-13 15:58:10 0 [Note] Reading of all Master_info entries succeded
Should be 'succeeded'
Fix:
===
Fixed the typo with the right word 'succeeded'.
This patch complements the patch that fixes bug MDEV-18479.
This patch takes care of possible overflow when calculating the
estimated number of rows in a materialized derived table / view.
This bug could happen when queries with nested outer joins were
executed employing join buffers. In such an execution, if the method
JOIN_CACHE::join_records() is called when a join buffer has become
full, the 'first_unmatched' field must not be cleaned up in the JOIN_TAB
structure to which the join cache with this buffer is attached.
or server crashes in JOIN::fix_all_splittings_in_plan after EXPLAIN
This patch resolves the problem of overflow when performing
calculations to estimate the cost of an evaluated query execution plan.
In a non-debug build the overflow could cause different kinds of
problems, including server crashes.
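A generic illustration of the technique, not the server's actual code: computing the product in double and clamping it to a finite ceiling avoids the infinite or absurd values an unguarded multiplication can produce.

#include <cmath>

/* Hypothetical helper: multiply two estimates and clamp the result. */
static double safe_mul(double a, double b, double ceiling)
{
  double product= a * b;
  if (!std::isfinite(product) || product > ceiling)
    return ceiling;
  return product;
}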
This patch corrects the patch for bug 10006. The latter incorrectly
calculates the attribute TABLE_LIST::dep_tables for inner tables
of outer joins that are to be converted into inner joins.
As a result after the patch some valid join orders were not evaluated
and the optimizer could choose an execution plan that was far from
being optimal.
In the best_access_path function, when no key suitable for ref access is found
and join_cache_level is set to a value that makes hash join possible, we build a hash key.
Later in the function we compare the cost of ref access with a table scan (or index scan
or quick select). There is no need to do this when we have got the hash key.
This patch complements the original patch for MDEV-18896 that prevents
conversions to semi-joins in tableless selects used in INSERT statements
in post-5.5 versions of the server.
The test case was corrected as well to ensure that potential conversion
to jtbm semi-joins is also checked (the problem was that one of
the preceding test cases in subselect_sj.test did not restore the
state of the optimizer switch, leaving 'materialization' in the state
'off' and so blocking this check).
Noticed an inconsistency in the state of select_lex::table_list used
in INSERT statements and left a comment about this.
we had the statistics tables in the FROM list of the select.
The statistics for tables are not read in such cases, so we need
to check this case separately.
Some places didn't match the previous rules, making the resulting
Fifth Floor address wrong.
Additional sed rules:
sed -i -e 's/Place.*Suite .*, Boston/Street, Fifth Floor, Boston/g'
sed -i -e 's/Suite .*, Boston/Fifth Floor, Boston/g'
This commit is based on the work of Michal Schorm, rebased on the
earliest MariaDB version.
The command line used to generate this diff was:
find ./ -type f \
-exec sed -i -e 's/Foundation, Inc., 59 Temple Place, Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
-exec sed -i -e 's/Foundation, Inc. 59 Temple Place.* Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
-exec sed -i -e 's/MA.*.....-1307.*USA/MA 02110-1335 USA/g' {} \; \
-exec sed -i -e 's/Foundation, Inc., 59 Temple/Foundation, Inc., 51 Franklin/g' {} \; \
-exec sed -i -e 's/Place, Suite 330, Boston, MA.*02111-1307.*USA/Street, Fifth Floor, Boston, MA 02110-1335 USA/g' {} \; \
-exec sed -i -e 's/MA.*.....-1307/MA 02110-1335/g' {} \;
No functional change.
Call my_timer_init() only once and then reuse it from InnoDB and
perfschema storage engines.
This patch speeds up an empty test run for me like this:
./mtr -mem innodb.kevg,xtradb 1.21s user 0.84s system 34% cpu 5.999 total
./mtr -mem innodb.kevg,xtradb 1.12s user 0.60s system 31% cpu 5.385 total
The crash happens when writing into the log file.
The reason is likely that the call to WriteFile() was missing a valid
parameter for lpNumberOfBytesWritten. This seems to happen only on ancient
versions of Windows.
Since the fix for MDEV-16430 in 141bc58ac9, a null pointer was passed
instead of a valid pointer.
The fix is to provide a valid lpNumberOfBytesWritten parameter.
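A minimal sketch of the fixed call shape (the handle and buffer names are hypothetical): a real DWORD out-parameter is passed, since the Windows API allows lpNumberOfBytesWritten to be NULL only for overlapped I/O, and old Windows versions may crash otherwise.

#include <windows.h>

static bool write_log(HANDLE log_handle, const void *buf, size_t len)
{
  DWORD written= 0;  /* valid lpNumberOfBytesWritten instead of NULL */
  return WriteFile(log_handle, buf, (DWORD) len, &written, NULL) &&
         written == (DWORD) len;
}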
A sequel to 9180e86 and 149b754.
ALTER TABLE ... ADD FOREIGN KEY may crash if parent table is updated
concurrently.
Block FK parent table updates even earlier, before intermediate child
table is created.
Use proper charset info for my_casedn_str() and don't update original
identifiers so that lower_case_table_names == 2 is honoured.
To read histograms for a table, we should check whether the allocation of
statistics was done or not; if it was not done, we should not try to read
histograms for such a table.
Restore the EXPLAIN flag in SELECT_LEX before executing a multi-update, based on the flag in LEX
(the same is done, but in a different way, before INSERT/DELETE/SELECT).
Without it, mysql_update() didn't know that there would be an EXPLAIN result set and was sending OK at the end of the update, which conflicted with the EOF sent later by EXPLAIN.
copy_if_not_alloced() did not handle situations when
"from" is a constant string pointing to a substring of "to",
so this code freed "to" but then tried to copy its old (already freed)
content to a new buffer:
if (to->realloc(from_length))
  return from;
if ((to->str_length=MY_MIN(from->str_length,from_length)))
  memcpy(to->Ptr,from->Ptr,to->str_length);
Adding a new code piece that catches such constant substrings
and properly reallocs "to" to preserve its important part referenced
by "from".