Follow-up for 97c2a7354b6: don't use thd->is_error(), because the error
could have been set before TABLE_LIST::cleanup_items.
Use the error handler to count errors instead.
This fixes rpl.rpl_row_binlog_max_cache_size, which was failing when
ER_STMT_CACHE_FULL happened during a multi-table update:
multi_update::abort_result_set() calls do_updates() to update
as much as possible, so one cannot rely on thd->is_error() after that.
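A minimal standalone sketch of the counting idea; the class below is only an
analog of the server's Internal_error_handler, not the actual implementation:

    // Count every raised error, so the caller can ask "did anything go wrong
    // during this phase?" instead of relying on a single thd->is_error() flag
    // that may have been set earlier or consumed by cleanup code.
    #include <cstdio>

    struct Error_handler {
      virtual bool handle(int errcode, const char *msg) = 0;
      virtual ~Error_handler() = default;
    };

    struct Counting_error_handler : Error_handler {
      unsigned errors = 0;
      bool handle(int errcode, const char *msg) override {
        errors++;                       // remember that an error was raised
        std::fprintf(stderr, "error %d: %s\n", errcode, msg);
        return true;                    // condition handled
      }
    };

    int main() {
      Counting_error_handler h;
      h.handle(1197, "ER_STMT_CACHE_FULL");
      return h.errors ? 1 : 0;          // decide based on the counter
    }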
For running the Galera tests, the variable my_disable_leak_check
was set to true in order to avoid assertions due to memory leaks
at shutdown.
Some adjustments due to MDEV-13625 (merge InnoDB tests from MySQL 5.6)
were performed. The most notable behaviour changes from 10.0 and 10.1
are the following:
* innodb.innodb-table-online: adjustments for the DROP COLUMN
behaviour change (MDEV-11114, MDEV-13613)
* innodb.innodb-index-online-fk: the removal of a (1,NULL) record
from the result; originally removed in MySQL 5.7 in the
Oracle Bug #16244691 fix
377774689b
* innodb.create-index-debug: disabled due to MDEV-13680
(the MySQL Bug #77497 fix was not merged from 5.6 to 5.7.10)
* innodb.innodb-alter-autoinc: MariaDB 10.2 behaves like MySQL 5.6/5.7,
while MariaDB 10.0 and 10.1 assign different values when
auto_increment_increment or auto_increment_offset are used.
Also, MySQL 5.6/5.7 exhibit different behaviour between
ALGORITHM=INPLACE and ALGORITHM=COPY, so something needs to be tested
and fixed in both MariaDB 10.0 and 10.2.
* innodb.innodb-wl5980-alter: disabled because it would trigger an
InnoDB assertion failure (MDEV-13668 may need additional effort in 10.2)
Fixed by making sure that the sort buffer would have at least MERGEBUFF2 keys.
Also fixed MDEV-13457 by making sure that an empty tree is never dumped to disk.
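A hedged illustration of both points; the constant name MERGEBUFF2 follows the
server source, but the functions are hypothetical simplifications:

    #include <algorithm>
    #include <cstddef>

    // Minimum number of keys per merge chunk (value assumed from the server's
    // MERGEBUFF2 define).
    static const size_t MERGEBUFF2 = 15;

    // Clamp the requested sort buffer so it can always hold MERGEBUFF2 keys.
    size_t adjust_sort_buffer(size_t requested_bytes, size_t key_length)
    {
      return std::max(requested_bytes, MERGEBUFF2 * key_length);
    }

    // MDEV-13457: never write an empty in-memory tree to disk.
    bool maybe_dump_tree(size_t keys_in_tree)
    {
      if (keys_in_tree == 0)
        return false;                   // nothing to dump
      /* ... write the keys to the temporary merge file ... */
      return true;
    }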
The problem was that the introduction of max-thread-mem-used can cause
an allocation error very early, even before mysql_parse() is called.
As mysql_parse() calls thd->reset_for_next_command(), which called
clear_error(), the error number was lost.
Fixed by adding an option to have unique messages for each KILL
signal and changing max-thread-mem-used to use this new feature.
This removes a lot of problems with the original approach, where
one could get errors signaled silently almost any time.
The loss of the error number itself was fixed by moving clear_error()
from reset_for_next_command() to do_command(), before any memory
allocation for the thread.
Related changes:
- reset_for_next_command() now has an optional parameter that says whether
clear_error() should be called. By default it is called, but no longer from
dispatch_command(), which was the original problem. (See the sketch after
this list.)
- Added an optional parameter to clear_error() to force calling of
reset_diagnostics_area(). Before, clear_error() only called
reset_diagnostics_area() if there was no error, so we normally
called reset_diagnostics_area() twice.
- This change removed several duplicated calls to clear_error()
when starting a query.
- Reset max_mem_used on COM_QUIT, to protect against kill during
quit.
- Use fatal_error() instead of setting is_fatal_error (cleanup)
- Set fatal_error if max_thread_mem_used is signaled.
(The same logic we use in other places where we run out of resources.)
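A minimal standalone analog of the new control flow; the real functions live
in sql_parse.cc and on THD, and the types here are simplified assumptions:

    struct Session {                      // stand-in for THD
      int last_errno = 0;
      bool is_fatal_error = false;
      void clear_error() { last_errno = 0; }
      void fatal_error() { is_fatal_error = true; }
    };

    // Optional parameter: callers that must preserve an already-raised error
    // (e.g. dispatch_command()) pass false.
    void reset_for_next_command(Session *s, bool do_clear_error = true)
    {
      if (do_clear_error)
        s->clear_error();
      /* ... reset other per-statement state ... */
    }

    void do_command(Session *s)
    {
      s->clear_error();                   // before any per-command allocation
      /* read the client packet, allocate, then: */
      reset_for_next_command(s, /* do_clear_error= */ false);
    }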
CREATE/DROP TEMPORARY TABLE are not safe to optimistically replicate in
parallel with other transactions, so they need to be marked as "ddl" in the
binlog.
This was already done for stand-alone CREATE/DROP TEMPORARY. But temporary
tables can also be created and dropped inside a BEGIN...END transaction, and
such transactions were not marked as ddl. Nor was the DROP TEMPORARY TABLE
statement emitted implicitly when a client connection is closed.
So this patch adds such ddl mark for the missing cases.
The difference to Kristian's original patch is mainly a fix in
mysql_trans_commit_alter_copy_data() to remember the unsafe_rollback_flags
over the temporary commit.
- Added variable tmp_disk_table_size
- Added variable tmp_memory_table_size as an alias for tmp_table_size
- Changed internal variable tmp_table_size to tmp_memory_table_size
- create_info.data_file_length is now set with tmp_disk_table_size
- Fixed Aria not to reset max_data_file_length for internal tables
- Added a status flag marking the table as full so that we can detect this on
the next insert. This ensures that the table is always 'correct', but we get
the error one row after the row that grew the table too big (sketched below).
- Removed some mutex locks for internal temporary tables
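A hedged standalone sketch of the 'table is full' flag behaviour; this is not
the Aria code, and the error code used here is only an assumption:

    #include <cstddef>

    struct Internal_tmp_table {
      size_t bytes_used = 0;
      size_t max_bytes;
      bool is_full = false;
      explicit Internal_tmp_table(size_t limit) : max_bytes(limit) {}

      int write_row(size_t row_size)
      {
        if (is_full)
          return 135;               // e.g. HA_ERR_RECORD_FILE_FULL (assumed)
        bytes_used += row_size;
        if (bytes_used > max_bytes)
          is_full = true;           // row is kept; error comes on the next insert
        return 0;
      }
    };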
Define my_thread_id as an unsigned type, to avoid mismatch with
ulonglong. Change some parameters to this type.
Use size_t in a few more places.
Declare many flag constants as unsigned to avoid sign mismatch
when shifting bits or applying the unary ~ operator.
When applying the unary ~ operator to enum constants, explicitly
cast the result to an unsigned type, because enum constants can
be treated as signed.
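A self-contained example of the cast; the flag names are illustrative, not the
server's actual constants:

    #include <cstdint>

    enum alter_flag { FLAG_ADD_INDEX = 1 << 0, FLAG_DROP_INDEX = 1 << 1 };

    void clear_add_index(uint64_t &flags)
    {
      // flags &= ~FLAG_ADD_INDEX;          // ~ yields a negative int (sign mismatch)
      flags &= ~uint64_t(FLAG_ADD_INDEX);   // explicit unsigned cast avoids it
    }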
In InnoDB, change the source code line number parameters from
ulint to unsigned type. Also, make some InnoDB functions return
a narrower type (unsigned or uint32_t instead of ulint;
bool instead of ibool).
The temporary tables created for recursive table references
should be closed in close_thread_tables(), because they might
be used in statements like ANALYZE WITH r AS (...) SELECT * FROM r
where r is defined through recursion.
Most notably, this includes MDEV-11623, which includes a fix and
an upgrade procedure for the InnoDB file format incompatibility
that is present in MariaDB Server 10.1.0 through 10.1.20.
In other words, this merge should address
MDEV-11202 InnoDB 10.1 -> 10.2 migration does not work
The code used current_thd->alloc() and allocated on the thd's execution
arena, not on table->expr_arena.
Remove THD::arena_for_cached_items, which is temporarily set in
update_virtual_fields() and replaces the THD arena in get_datetime_value().
Instead, set the THD arena to table->expr_arena for the whole duration
of update_virtual_fields().
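A simplified standalone analog of the arena swap; the real code uses the
THD arena backup/restore machinery, and the types below are assumptions:

    struct Arena { /* memory root, item list, ... */ };

    struct Session {                          // stand-in for THD
      Arena *active_arena;
      Arena *swap_arena(Arena *a)
      {
        Arena *old = active_arena;
        active_arena = a;
        return old;
      }
    };

    void update_virtual_fields(Session *s, Arena *table_expr_arena)
    {
      // Everything allocated while evaluating virtual columns now lands on
      // the table's expression arena instead of the statement arena.
      Arena *backup = s->swap_arena(table_expr_arena);
      /* ... evaluate each virtual column ... */
      s->swap_arena(backup);                  // restore the statement arena
    }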
- Changed the error handler interface so that a handler can change the error
level
- Give warnings and errors when calculating virtual columns
- On INSERT/UPDATE an error is fatal in strict mode.
- SELECT and DELETE only give a warning if a virtual field generates an error
- Added VCOL_UPDATE_FOR_DELETE and VCOL_UPDATE_INDEX_FOR_REPLACE to be able to
easily detect in update_virtual_fields() whether we should use an error
handler to mask errors or not.
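A hedged sketch of the masking idea only; the server's real interface is an
Internal_error_handler whose callback may now adjust the severity, and the
types below are simplified:

    enum Severity { SEV_WARNING, SEV_ERROR };

    struct Vcol_error_handler {
      bool mask_errors;                  // true for the SELECT/DELETE paths
      explicit Vcol_error_handler(bool mask) : mask_errors(mask) {}

      // Downgrade an error raised while computing a virtual column to a
      // warning when masking is requested; for strict-mode INSERT/UPDATE the
      // handler is installed with mask_errors == false and the error stays.
      bool handle(int /*errcode*/, Severity *level)
      {
        if (mask_errors && *level == SEV_ERROR)
          *level = SEV_WARNING;
        return true;                     // condition handled
      }
    };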
Implementation of MDEV-7660 introduced an unwanted incompatible change:
modifications under LOCK TABLES with autocommit enabled are rolled back on
disconnect. Previously everything was committed, because LOCK TABLES didn't
adjust the autocommit setting.
This patch restores original behavior by reverting some changes done in
MDEV-7660:
- sql/sql_parse.cc: do not reset autocommit on LOCK TABLES
- sql/sql_base.cc: do not set autocommit on UNLOCK TABLES
- test cases: main.lock_tables_lost_commit, main.partition_explicit_prune,
rpl.rpl_switch_stm_row_mixed, tokudb.nested_txn_implicit_commit,
tokudb_bugs.db806
But this leaves InnoDB tables under LOCK TABLES ... READ [LOCAL] unprotected
against DML. To restore the protection, some changes from WL#6671 were merged,
specifically MDL_SHARED_READ_ONLY and its test cases.
WL#6671 merge highlights:
- Not all tests merged.
- In MySQL, LOCK TABLES ... READ acquires MDL_SHARED_READ_ONLY for all engines;
in MariaDB, MDL_SHARED_READ is always acquired first and then upgraded to
MDL_SHARED_READ_ONLY for InnoDB only (see the sketch after this list).
- The above allows us to omit MDL_SHARED_WRITE_LOW_PRIO implementation in
MariaDB, which is rather useless with InnoDB. In MySQL it is needed to
preserve locking behavior between low priority writes and LOCK TABLES ... READ
for non-InnoDB engines (covered by sys_vars.sql_low_priority_updates_func).
- Omitted HA_NO_READ_LOCAL_LOCK, we rely on lock_count() instead.
- Omitted "piglets": in MariaDB stream of DML against InnoDB table may lead to
concurrent LOCK TABLES ... READ starvation.
- HANDLER ... OPEN acquires MDL_SHARED_READ instead of MDL_SHARED in MariaDB.
- Omitted SNRW->X MDL lock upgrade for IMPORT/DISCARD TABLESPACE under LOCK
TABLES.
- Omitted strong locks for views, triggers and SP under LOCK TABLES.
- Omitted IX schema lock for LOCK TABLES READ.
- Omitted deadlock weight juggling for LOCK TABLES.
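A hypothetical standalone sketch of the upgrade path highlighted above; the
real code uses the MDL_context API with MDL_SHARED_READ_ONLY, while the types
and control flow here are simplified assumptions:

    enum mdl_type { MDL_SHARED_READ, MDL_SHARED_READ_ONLY };

    struct Mdl_ticket { mdl_type type; };

    // LOCK TABLES ... READ: every engine gets MDL_SHARED_READ first; only
    // engines that need MDL to keep concurrent DML out (InnoDB) get the
    // stronger MDL_SHARED_READ_ONLY via an upgrade.
    bool lock_table_for_read(Mdl_ticket &ticket, bool engine_needs_mdl_protection)
    {
      ticket.type = MDL_SHARED_READ;           // acquired for all engines
      if (engine_needs_mdl_protection)
        ticket.type = MDL_SHARED_READ_ONLY;    // upgraded for InnoDB only
      return true;
    }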
Full WL#6671 merge status:
- innodb.innodb-lock: fully merged
- main.alter_table: not merged due to different HANDLER solution
- main.debug_sync: fully merged
- main.handler_innodb: not merged due to different HANDLER solution
- main.handler_myisam: not merged due to different HANDLER solution
- main.innodb_mysql_lock: fully merged
- main.insert_notembedded: fully merged
- main.lock: not merged (due to no strong locks for views)
- main.lock_multi: not merged
- main.lock_sync: fully merged (partially in MDEV-7660)
- main.mdl_sync: not merged
- main.partition_debug_sync: not merged due to different HANDLER solution
- main.status: fully merged
- main.view: fully merged
- perfschema.mdl_func: not merged (no such test in MariaDB)
- perfschema.table_aggregate_global_2u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_global_2u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_global_4u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_global_4u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_hist_2u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_hist_2u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_hist_4u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_hist_4u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_thread_2u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_thread_2u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_thread_4u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_aggregate_thread_4u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_global_2u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_global_2u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_global_4u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_global_4u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_hist_2u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_hist_2u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_hist_4u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_hist_4u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_thread_2u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_thread_2u_3t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_thread_4u_2t: not merged (didn't fail in MariaDB)
- perfschema.table_lock_aggregate_thread_4u_3t: not merged (didn't fail in MariaDB)
- sys_vars.sql_low_priority_updates_func: not merged
- include/thr_rwlock.h: not merged, rw_pr_lock_assert_write_owner and
rw_pr_lock_assert_not_write_owner are macros in MariaDB
- sql/handler.h: not merged (HA_NO_READ_LOCAL_LOCK)
- sql/mdl.cc: partially merged (MDL_SHARED_READ_ONLY only)
- sql/mdl.h: partially merged (MDL_SHARED_READ_ONLY only)
- sql/lock.cc: fully merged
- sql/sp_head.cc: not merged
- sql/sp_head.h: not merged
- sql/sql_base.cc: partially merged (MDL_SHARED_READ_ONLY only)
- sql/sql_base.h: not merged
- sql/sql_class.cc: fully merged
- sql/sql_class.h: fully merged
- sql/sql_handler.cc: merged partially (different solution in MariaDB)
- sql/sql_parse.cc: partially merged, mostly omitted low priority write part
- sql/sql_reload.cc: not merged comment change
- sql/sql_table.cc: not merged SNRW->X upgrade for IMPORT/DISCARD TABLESPACE
- sql/sql_view.cc: not merged
- sql/sql_yacc.yy: not merged (MDL_SHARED_WRITE_LOW_PRIO, MDL_SHARED_READ_ONLY)
- sql/table.cc: not merged (MDL_SHARED_WRITE_LOW_PRIO)
- sql/table.h: not merged (MDL_SHARED_WRITE_LOW_PRIO)
- sql/trigger.cc: not merged
- storage/innobase/handler/ha_innodb.cc: merged store_lock()/lock_count()
changes (in MDEV-7660), didn't merge HA_NO_READ_LOCAL_LOCK
- storage/innobase/handler/ha_innodb.h: fully merged in MDEV-7660
- storage/myisammrg/ha_myisammrg.cc: not merged comment change
- storage/perfschema/table_helper.cc: not merged (no MDL support in MariaDB PFS)
- unittest/gunit/mdl-t.cc: not merged
- unittest/gunit/mdl_sync-t.cc: not merged
MariaDB specific changes:
- handler.heap: different HANDLER solution, MDEV-7660
- handler.innodb: different HANDLER solution, MDEV-7660
- handler.interface: different HANDLER solution, MDEV-7660
- handler.myisam: different HANDLER solution, MDEV-7660
- main.mdl_sync: MDEV-7660 specific changes
- main.partition_debug_sync: removed test due to different HANDLER solution,
MDEV-7660
- main.truncate_coverage: removed test due to different HANDLER solution,
MDEV-7660
- mysql-test/include/mtr_warnings.sql: additional cleanup, MDEV-7660
- mysql-test/lib/v1/mtr_report.pl: additional cleanup, MDEV-7660
- plugin/metadata_lock_info/metadata_lock_info.cc: not in MySQL
- sql/sql_handler.cc: MariaDB specific fix for mysql_ha_read(), MDEV-7660
* remove old 5.2+ InnoDB support for virtual columns
* enable corresponding parts of the innodb-5.7 sources
* copy corresponding test cases from 5.7
* copy detailed Alter_inplace_info::HA_ALTER_FLAGS flags from 5.7
- and more detailed detection of changes in fill_alter_inplace_info()
* more "innodb compatibility hooks" in sql_class.cc to
- create/destroy/reset a THD (used by background purge threads)
- find a prelocked table by name
- open a table (from a background purge thread)
* different from 5.7:
- new service thread "thd_destructor_proxy" to make sure all THDs are
destroyed at the correct point in time during the server shutdown
- proper opening/closing of tables for vcol evaluations in
+ FK checks (use already opened prelocked tables)
+ purge threads (open the table, MDLock it, add it to tdc, close
when not needed)
- cache open tables in vc_templ
- avoid unnecessary allocations, reuse table->record[0] and table->s->default_values
- not needed in 5.7, because it overcalculates:
+ tell the server to calculate vcols for an on-going inline ADD INDEX
+ calculate vcols for correct error messages
* update other engines (mroonga/tokudb) accordingly
When updating a table with virtual BLOB columns, the following might happen:
- an old record is read from the table; it has no virtual blob values
- update_virtual_fields() is run and the vcol blob gets its value into the
record. But only a pointer to the value is in table->record[0];
the value itself is in the Field_blob::value String (though it doesn't have
to be: it can be in the record, if the column is just a copy of another
column, as in ... b VARCHAR, c BLOB AS (b) ...)
- store_record(table, record[1]): the old record is now in record[1]
- fill_record() prepares new values in record[0], vcol blob is updated,
new value replaces the old one in the Field_blob::value
- now both record[1] and record[0] have a pointer that points to the
*new* vcol blob value. Or record[1] has a pointer to nowhere if
Field_blob::value had to realloc.
To resolve this we unlink vcol blobs from the pointer to the
data (in the record[1]). Because the value is not *always* in
the Field_blob::value String, we need to remember what blobs
were unlinked. The orphan memory must be freed manually.
To complicate the matter, ha_update_row() is also used in
multi-update, in REPLACE, in INSERT ... ON DUPLICATE KEY UPDATE,
and in REPLACE ... SELECT, REPLACE DELAYED, LOAD DATA REPLACE, etc.
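A standalone illustration of the dangling-pointer scenario described above; it
mimics the blob field with std::string and is not server code:

    #include <string>

    struct Record { const char *blob_ptr; };   // the record holds only a pointer

    int main()
    {
      std::string value = "old blob value";    // shared value buffer (Field_blob::value)
      Record rec0{ value.data() };             // current row, record[0]
      Record rec1 = rec0;                      // store_record(): old row saved in record[1]

      value.assign(std::string(1000, 'x'));    // new vcol value; the buffer may realloc
      rec0.blob_ptr = value.data();            // record[0] follows the new value

      // rec1.blob_ptr now points to the *new* value, or to freed memory if the
      // buffer reallocated. The fix "unlinks" such blobs from record[1] and
      // remembers them so the orphaned old value can be freed explicitly.
      (void) rec1;
      return 0;
    }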
multi-update was setting up read_set/vcol_set in
multi_update::initialize_tables(), which is invoked after
the optimizer (JOIN::optimize_inner()). But some rows - those from
const tables - are already read inside the optimizer, and these rows
will not have all the necessary column/vcol values.
* multi_update::initialize_tables() uses results from the optimizer
and cannot be moved to be called earlier.
* multi_update::prepare() is called before the optimizer, but
it cannot set up read_set/vcol_set, because the optimizer
might reset them (see SELECT_LEX::update_used_tables()).
As a fix I've added a new method, select_result::prepare_to_read_rows(),
which is called from inside the optimizer just before make_join_statistics().
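A simplified standalone analog of the new hook; the real method is
select_result::prepare_to_read_rows(), and the surrounding classes here are
assumptions:

    struct select_result {
      virtual void prepare_to_read_rows() {}  // default: nothing to set up
      virtual ~select_result() {}
    };

    struct multi_update_result : select_result {
      void prepare_to_read_rows() override
      {
        /* mark the bits in read_set/vcol_set for the columns the UPDATE needs */
      }
    };

    void optimize_inner(select_result *result)
    {
      result->prepare_to_read_rows();         // before any const-table row is read
      /* make_join_statistics(...); rest of the optimization */
    }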
The idea of this fix was taken from the patch by Roy Lyseng
for mysql-5.6 Bug#14740889: "Wrong result for aggregate
functions when executing query through cursor".
Here's Roy's comment for his patch:
"
The problem was that a grouped query did not behave properly when
executed using a cursor. On further inspection, the query used one
intermediate temporary table for the grouping.
Then, Select_materialize::send_result_set_metadata created a temporary
table for storing the query result. Notice that get_unit_column_types()
is used to retrieve column meta-data for the query. The items contained
in this list are later modified so that their result_field points to
the row buffer of the materialized temporary table for the cursor.
But prior to this, these result_field objects have been prepared for
use in the grouping operation, by JOIN::make_tmp_tables_info(), hence
the grouping operation operates on wrong column buffers.
The problem is solved by using the list JOIN::fields when copying data
to the materialized table. This list is set by JOIN::make_tmp_tables_info()
and points to the columns of the last intermediate temporary table of
the executed query. For a UNION, it points to the temporary table
that is the result of the UNION query.
Notice that we have to assign a value to ::fields early in JOIN::optimize()
in case the optimization shortcuts due to a const plan detection.
A more optimal solution might be to avoid creating the final temporary
table when the query result is already stored in a temporary table.
"
The patch does not contain a test case, but the description of the
problem corresponds exactly to what could be observed in the test
case for MDEV-11081.
Add new event types for compressed events:
QUERY_COMPRESSED_EVENT,
WRITE_ROWS_COMPRESSED_EVENT_V1,
UPDATE_ROWS_COMPRESSED_EVENT_V1,
DELETE_ROWS_COMPRESSED_EVENT_V1,
WRITE_ROWS_COMPRESSED_EVENT,
UPDATE_ROWS_COMPRESSED_EVENT,
DELETE_ROWS_COMPRESSED_EVENT.
These events inherit from the corresponding uncompressed events. One of their
constructors and the write function are overridden to handle decompression and
compression; everything else is the same. On the slave, the I/O thread
decompresses and converts the events when it receives them from the master, so
the SQL and worker threads can stay unchanged.
Zlib is currently used as the compression algorithm; other algorithms may be
supported in the future.
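A hedged sketch of the payload handling only, using the zlib calls the commit
relies on; event class layout, headers and checksums are omitted (link with
-lz):

    #include <zlib.h>
    #include <cassert>
    #include <vector>

    // Compress an event body on the master side.
    std::vector<unsigned char> compress_event_body(const unsigned char *body, uLong len)
    {
      uLongf out_len = compressBound(len);
      std::vector<unsigned char> out(out_len);
      int rc = compress2(out.data(), &out_len, body, len, Z_DEFAULT_COMPRESSION);
      assert(rc == Z_OK);
      out.resize(out_len);
      return out;
    }

    // Inflate it back in the slave I/O thread; the original length is assumed
    // to be stored alongside the compressed body.
    std::vector<unsigned char> uncompress_event_body(const unsigned char *body, uLong len,
                                                     uLong original_len)
    {
      std::vector<unsigned char> out(original_len);
      uLongf out_len = original_len;
      int rc = uncompress(out.data(), &out_len, body, len);
      assert(rc == Z_OK);
      out.resize(out_len);
      return out;
    }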