Remove Aria state history for drop/rename
mysql-test/suite/maria/r/maria-recovery2.result:
Updated old (wrong) test result
sql/handler.cc:
Fixed wrong argument to implicit_commit
storage/maria/ha_maria.cc:
Ensure that we don't use file->trn if THD_TRN is 0. (This means that implicit_commit() has been called and the trn object is not ours anymore.)
storage/maria/ma_extra.c:
Remove Aria state history for drop/rename
storage/maria/ma_rename.c:
Remove Aria state history for rename
storage/maria/ma_state.c:
More DBUG_PRINT
- The problem was that convert_subq_to_jtbm() attached the semi-join
TABLE_LIST object to the wrong list: it used to attach it to the
end of parent_lex->leaf_tables.head()->next_local->...->next_local.
This was incorrect, as one can construct an example where the
JTBM nest is attached to a table that is inside some mergeable VIEW, which
breaks (crashes) name resolution on subsequent statement
re-execution.
- Solution: attach the nest to the right list. The code was modeled on
st_select_lex::handle_derived.
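A hypothetical repro sketch (not the actual test case) of the query shape
this affects: a mergeable view combined with a degenerate IN subquery that
gets converted to a JTBM nest, re-executed through a prepared statement:

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  CREATE VIEW v1 AS SELECT a FROM t1;    -- mergeable view
  PREPARE stmt FROM
    'SELECT * FROM v1 WHERE a IN (SELECT MAX(b) FROM t2)';
  EXECUTE stmt;   -- first execution converts the subquery to a JTBM nest
  EXECUTE stmt;   -- re-execution exercises name resolution on the attached nest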
- do not attempt to load federatedx dynamically - it does not work on Windows embedded
- fix a race condition in rpl_start_stop_slave
- fix the exclusion rule to catch the warning in the partition test
The test case must insert all the records using a single transaction; otherwise the test
case takes more than 15 minutes and times out in PB2 and MTR.
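A minimal sketch of the idea (table name and values are illustrative):

  BEGIN;
  INSERT INTO t1 VALUES (1), (2), (3) /* ... all remaining rows ... */;
  COMMIT;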
AND SAVEPOINT.
The bug was introduced by the patch for bug#11766752. That patch set too
strong a condition on the XA state for the SAVEPOINT statement, disallowing
its execution during an XA transaction. But since the SAVEPOINT statement
does not imply an implicit commit, we can allow it during an XA transaction.
The patch explicitly checks the transaction state against the states XA_NOTR
and XA_ACTIVE, for which handling of the SAVEPOINT statement within an XA
transaction is allowed.
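For illustration, the kind of statement sequence that is now accepted
(table name is made up):

  XA START 'xid1';
  INSERT INTO t1 VALUES (1);
  SAVEPOINT sp1;              -- previously rejected inside an XA transaction
  INSERT INTO t1 VALUES (2);
  XA END 'xid1';
  XA COMMIT 'xid1' ONE PHASE;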
mysql-test/t/xa.test:
Test case adjusted for bug#13737343. SAVEPOINT is now allowed for XA
transactions in the ACTIVE state.
The table contains one TIME value: '00:00:32'.
This value is converted to a timestamp by a subquery.
In convert_constant_item() we call (*item)->is_null(),
which triggers execution of the Item_singlerow_subselect subquery,
and the string "0000-00-00 00:00:32" is cached
by Item_cache_datetime.
We continue execution and call update_null_value(), which calls val_int()
on the cached item, which converts the time value to ((longlong) 32).
Then we continue to do (*item)->save_in_field(),
which ends up in Item_cache_datetime::val_str(), which fails,
since (32 < 101) in number_to_datetime(), and val_str() returns NULL.
Item_singlerow_subselect::val_str() isn't prepared for this:
if exec() succeeds and returns !null_value, then val_str()
*must* succeed.
Solution: refuse to cache strings like "0000-00-00 00:00:32"
in Item_cache_datetime::cache_value, and return NULL instead.
This is similar to the solution for
Bug#11766860 - 60085: CRASH IN ITEM::SAVE_IN_FIELD() WITH TIME DATA TYPE
This patch is for 5.5 only.
The issue is not present after WL#946, since a time value
will be converted to a proper timestamp, with the current date
rather than "0000-00-00".
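A hypothetical repro sketch (not the regression test) of the scenario
described above: a DATETIME column compared against a single-row subquery
that yields a TIME value:

  CREATE TABLE t1 (t TIME);
  CREATE TABLE t2 (dt DATETIME);
  INSERT INTO t1 VALUES ('00:00:32');
  INSERT INTO t2 VALUES ('2012-01-01 00:00:32');
  -- convert_constant_item() evaluates the subquery and used to cache the
  -- invalid string "0000-00-00 00:00:32" in Item_cache_datetime.
  SELECT * FROM t2 WHERE dt = (SELECT t FROM t1);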
mysql-test/r/subselect.result:
New test case.
mysql-test/t/subselect.test:
New test case.
sql/item.cc:
Verify proper date format before caching timestamps.
sql/item_timefunc.cc:
Use named constant for readability.
We are trying to sort a lot of text/blob fields,
so the buffer is indeed too small.
Memory available = thd->variables.sortbuff_size = 262144
min_sort_memory = param.sort_length*MERGEBUFF2 = 292245
So the decision to abort the query is correct.
filesort() calls my_error() and the error is reported.
But, since we have DELETE IGNORE ..., the error is converted to a warning by
THD::raise_condition().
filesort currently expects an error to be recorded in the THD diagnostics
area.
If we lift this restriction (remove the assert), we end up in the familiar
  void Protocol::end_statement()
    default:
      DBUG_ASSERT(0);
The solution seems to be to call my_error(ME_FATALERROR) in filesort,
so that the error is propagated as an error rather than a warning.
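An illustrative sketch of the failing pattern (table, columns and sizes are
made up, and whether the error actually triggers depends on row contents and
max_sort_length): sorting wide TEXT columns with a 256K sort buffer under
DELETE IGNORE:

  SET SESSION sort_buffer_size = 262144;
  CREATE TABLE t1 (a TEXT, b TEXT, c TEXT, d TEXT);
  -- ... populate t1 with long values ...
  DELETE IGNORE FROM t1 ORDER BY a, b, c, d LIMIT 10;
  -- Before the fix, IGNORE downgraded the "sort buffer too small" error to a
  -- warning and the statement hit the DBUG_ASSERT in Protocol::end_statement().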
mysql-test/r/filesort_debug.result:
New test case.
mysql-test/t/filesort_debug.test:
New test case.
ENOUGH - CONCAT() HACKS. ALSO WRONG
ERROR MESSAGE WHILE TRYING TO CREATE
A VIEW ON A NON EXISTING DATABASE
PROBLEM:
The first part of the problem is concluded to be not a
bug, as 'concat' is not a reserved word and it is
completely valid to create a view with the name
'concat'.
The second issue is that, while trying to create a view on
a non-existent database, we do not give a proper error
message.
FIX:
We have added a check for database existence while
trying to create a view. This check gives an
'unknown database' error when the database does not exist.
This patch is a backport of the patch for Bug#13601606
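A minimal example of the corrected behaviour (database name is made up):

  CREATE VIEW no_such_db.v1 AS SELECT 1;
  -- ERROR 1049 (42000): Unknown database 'no_such_db'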
mysql-test/r/view.result:
Added test case result of Bug#12626844
mysql-test/t/view.test:
Added test case for Bug#12626844
sql/sql_view.cc:
Added a check for database existence in mysql_create_view
Do not call, directly or indirectly, SQL_SELECT::test_quick_select()
for derived materialized tables / views when optimizing joins referring
to these tables / views in order to get cost estimates of materialization.
The current code does not create B-tree indexes for materialized
derived tables / views, so it is not possible to get any estimates
for range conditions over the results of the materialization.
The function mysql_derived_create() must take into account the fact
that the array of KEY structures specifying the keys over a derived
table / view may be moved after the optimization phase if the
derived table / view is materialized.
Added 'from_end' as an extra parameter to Field::unpack() to detect malformed data in the 'from' buffer.
Changed ha_archive::unpack_row() to detect wrong field lengths.
Changed the replication code to detect wrong field information in events.
mysql-test/r/archive.result:
Added test case for lp:917689
sql/field.cc:
Added 'from_end' as an extra parameter to Field::unpack() to detect malformed data in the 'from' buffer.
Removed unused 'unpack_key' functions.
sql/field.h:
Added 'from_end' as an extra parameter to Field::unpack() to detect malformed data in the 'from' buffer.
Removed unused 'unpack_key' functions.
Removed some unneeded unpack() functions.
sql/filesort.cc:
Added buffer end parameter to unpack_addon_fields()
sql/log_event.h:
Added end of buffer argument to unpack_row()
sql/log_event_old.cc:
Added end of buffer argument to unpack_row()
sql/log_event_old.h:
Added end of buffer argument to unpack_row()
sql/records.cc:
Added buffer end parameter to unpack_addon_fields()
sql/rpl_record.cc:
Added end of buffer argument to unpack_row()
Added detection of wrong field information in events
sql/rpl_record.h:
Added end of buffer argument to unpack_row()
sql/rpl_record_old.cc:
Added end of buffer argument to unpack_row()
Added detection of wrong field information in events
sql/rpl_record_old.h:
Added end of buffer argument to unpack_row()
sql/table.h:
Added buffer end parameter to unpack()
storage/archive/ha_archive.cc:
Changed ha_archive::unpack_row() to detect wrong field lengths.
This fixes lp:917689
- The bug would show up
  - when using PS (so that we get re-execution)
  - when the left_expr of the subquery is a reference to viewname.column_name, so that it crashes
    when one tries to use it without having called fix_fields() for it
  - when using SJ-Materialization, which makes use of the sj_subq_pred->left_expr expression
- The fix is to have setup_conds() fix sj_subq_pred->left_expr for the semi-join nests it finds.
BUG#64503: mysql frequently ignores --relay-log-space-limit
When the SQL thread goes to sleep, waiting for more events, it sets
the flag ignore_log_space_limit to true. This gives the IO thread a
chance to queue some more events, and ultimately the SQL thread will be
able to purge the log once it is rotated. The SQL thread then
resets ignore_log_space_limit to false. However, between the time
the SQL thread sets the ignore flag and the time it resets it, the
IO thread will be queuing events in the relay log, possibly going way
over the limit.
This patch makes the IO and SQL threads synchronize when they reach
the space limit and only ask for one event at a time. Thus the SQL
thread sets the ignore_log_space_limit flag and the IO thread resets it to
false every time it processes one more event. In addition, every time
the SQL thread processes the next event while the limit has been
reached, it checks whether the IO thread should rotate. If it should, it
instructs the IO thread to rotate, giving the SQL thread a chance to
purge the logs (freeing space). Finally, this patch removes the
resetting of the ignore_log_space_limit flag from purge_first_log,
because it is now reset by the IO thread every time it processes the
next event when the limit has been reached.
If the SQL thread is in a transaction, it cannot purge, so there is no
point in asking the IO thread to rotate. The only thing it can do is
ask for more events until the transaction is over (then it can ask
the IO thread to rotate and purge the log right away). Otherwise, there
would be a deadlock (the SQL thread would not be able to purge and the IO
thread would not be able to queue events, so the SQL thread could not
finish the transaction).
Problem: Grouping results by VALUES(alias for string literal) causes
the server to crash.
Item_insert_values is not constructed to handle other types of
arguments than fields and references to fields. In this case, the
argument is an Item_string, and this causes
Item_insert_values::fix_fields() to crash.
Fix: Issue an error message when the argument to Item_insert_values is
not a field or a reference to a field.
This slightly deviates from the documentation, which states that
VALUES() should return NULL, but the error message is only issued in
cases where the server would otherwise crash, so there is no change in
behavior for queries that already work. Future versions will restrict
the syntax so that using VALUES() in this way is illegal.
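A hypothetical repro sketch (not the actual test case) of the pattern that
used to crash: VALUES() applied to an alias of a string literal and used in
GROUP BY:

  CREATE TABLE t1 (i INT);
  INSERT INTO t1 VALUES (1), (2);
  -- 'a' is an alias for a string literal, not a field; this used to crash in
  -- Item_insert_values::fix_fields() and now raises an error instead.
  SELECT 'text' AS a FROM t1 GROUP BY VALUES(a);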
mysql-test/r/errors.result:
Add test case for bug #13031606.
mysql-test/t/errors.test:
Add test case for bug #13031606.
sql/item.cc:
Issue error message if argument is not field or reference to field.
https://mariadb.atlassian.net/browse/MDEV-28
This task implements a new clause, LIMIT ROWS EXAMINED <num>,
as an extension to the ANSI LIMIT clause. This extension
allows limiting the number of rows and/or keys a query
would access (read and/or write) during query execution.
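Illustrative usage (tables t1 and t2 are hypothetical); the clause can stand
alone or be combined with a regular row limit:

  SELECT * FROM t1, t2 WHERE t1.a = t2.b
  LIMIT ROWS EXAMINED 10000;

  SELECT * FROM t1, t2 WHERE t1.a = t2.b
  LIMIT 10 ROWS EXAMINED 10000;
  -- When the examined-rows limit is reached, execution stops and the partial
  -- result is returned together with a warning.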
This bug was introduced into MariaDB 5.2 in December 2010 with
the patch that added a new engine property: the ability to support
virtual columns.
As a result of this bug, the information from .frm files for tables
that contained virtual columns did not appear in the INFORMATION_SCHEMA
tables.
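An illustrative check (column definition is made up) that the metadata for a
table with a virtual column is visible again:

  CREATE TABLE t1 (a INT, b INT AS (a * 2) VIRTUAL);
  SELECT table_name, engine
    FROM information_schema.tables
   WHERE table_schema = DATABASE() AND table_name = 't1';
  SELECT column_name, extra
    FROM information_schema.columns
   WHERE table_schema = DATABASE() AND table_name = 't1';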
USER VARIABLE = CRASH
Moved the preparation of the variables that receive the output from
SELECT INTO from execution time (JOIN::execute) to compile time
(JOIN::prepare). This ensures that if the same variable is used in the
SELECT part of SELECT INTO, it will be properly marked as non-const
for this query.
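A hypothetical sketch (not the actual test case) of the problematic pattern:
the same user variable is read in the SELECT list and used as the INTO
target:

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2), (3);
  SET @v = 0;
  -- @v must not be treated as a constant here, since it also receives output.
  SELECT MAX(a), @v INTO @v, @w FROM t1;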
Test case added.
Used proper fast iterator.