This patch contains a fix for the MDEV-17262/17243 issues and
a new mtr test.
These issues (MDEV-17262/17243) have two causes:
1) After an intermediate commit, a transaction loses its status
of "transaction registered with the MySQL 2PC coordinator"
(in InnoDB), because since version 10.2 the write_row() function
(located in ha_innodb.cc) does not call
trx_register_for_2pc(m_prebuilt->trx) while processing split
transactions. This call must be restored inside write_row()
after an intermediate commit has been made (for a split
transaction).
Similarly, we need to set the statement-start flag
(m_prebuilt->sql_stat_start) after an intermediate commit.
The table->file->extra(HA_EXTRA_FAKE_START_STMT) called from the
wsrep_load_data_split() function (located in sql_load.cc)
would also do this, but too late. As a result, the call to the
wsrep_append_keys() function in the InnoDB engine may be lost,
or the function may be called with an invalid transaction identifier.
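A minimal sketch of the restored logic, assuming the split point is
detected where the intermediate commit is made (the surrounding
condition is simplified and not the literal patch):

  /* ha_innobase::write_row(): after the intermediate commit of a
     LOAD DATA chunk, re-register the transaction with the 2PC
     coordinator and mark the statement as freshly started, so the
     later wsrep_append_keys() call sees a valid, registered trx. */
  if (wsrep_on(m_user_thd))
  {
    trx_register_for_2pc(m_prebuilt->trx);
    m_prebuilt->sql_stat_start= TRUE;
  }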
2) If a transaction with a LOAD DATA statement is divided into
logical mini-transactions (of 10K rows each) and the binlog is
rotated, then in rare cases, due to the wsrep handler
re-registration at the boundary of the split, the last portion of
data may be lost. Since the splitting of LOAD DATA into
mini-transactions is purely technical, I believe we should not
allow these mini-transactions to end up in separate binlogs.
Therefore, it is necessary to prohibit binlog rotation in the
middle of processing a LOAD DATA statement.
https://jira.mariadb.org/browse/MDEV-17262 and
https://jira.mariadb.org/browse/MDEV-17243
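A hypothetical sketch of the rotation guard; the flag name below is
illustrative only and not the identifier used in the patch:

  /* On the binlog rotation path: do not rotate while a LOAD DATA
     statement is being split into mini-transactions, so that all
     chunks of one statement end up in the same binlog file. */
  if (thd && thd->load_data_split_in_progress)   /* hypothetical flag */
    return false;                                /* postpone rotation */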
st_select_lex::handle_derived() and mysql_handle_list_of_derived() had
exactly the same implementations.
- Adding a new method LEX::handle_list_of_derived() instead
- Removing public function mysql_handle_list_of_derived()
- Reusing LEX::handle_list_of_derived() in st_select_lex::handle_derived()
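Roughly, the shared loop now lives in LEX and st_select_lex delegates
to it (member names and signatures approximate):

  bool LEX::handle_list_of_derived(TABLE_LIST *tables, uint phases)
  {
    for (TABLE_LIST *tl= tables; tl; tl= tl->next_local)
    {
      if (tl->is_view_or_derived() && tl->handle_derived(this, phases))
        return true;
    }
    return false;
  }

  bool st_select_lex::handle_derived(LEX *lex, uint phases)
  {
    return lex->handle_list_of_derived(get_table_list(), phases);
  }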
If wsrep_load_data_splitting is configured, change the streaming
replication parameters internally to match the original behavior,
i.e. replicate on every 10000 rows. After the LOAD DATA statement
is over, restore the original streaming replication settings.
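Conceptually (simplified; the real code goes through the wsrep service
layer, and the fragment-unit enum name below is an assumption):

  /* On entry to LOAD DATA, when wsrep_load_data_splitting is set:
     save the session streaming-replication settings and fragment on
     every 10000 rows; the saved values are restored when the load ends. */
  saved_unit= thd->variables.wsrep_trx_fragment_unit;
  saved_size= thd->variables.wsrep_trx_fragment_size;
  thd->variables.wsrep_trx_fragment_unit= WSREP_FRAG_ROWS;  /* assumed name */
  thd->variables.wsrep_trx_fragment_size= 10000;
  /* ... execute the load ... */
  thd->variables.wsrep_trx_fragment_unit= saved_unit;
  thd->variables.wsrep_trx_fragment_size= saved_size;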
Removed redundant wsrep_tc_log_commit().
Methods:
- Item_user_var_as_out_param::print_for_load()
- sql_exchange::escaped_given(void)
Parameters:
- sql_exchange in write_execute_load_query_log_event()
- sql_exchange in mysql_load()
- sql_exchange in Load_log_event::Load_log_event()
Also, removing casts to "char*" in a few places in
Load_log_event::Load_log_event()
- Adding a new virtual method Field::load_data_set_no_data()
  (a sketch follows this list).
- Overriding Field_timestamp::load_data_set_no_data() and moving
the TIMESTAMP specific code there.
- Overriding Field_geom::load_data_set_no_data() and implementing
GEOMETRY specific behavior, to prevent writing empty strings
when the loaded file ends unexpectedly. This fixes the bug.
- Adding a new test gis-loaddata.test.
- The test in loaddata.test for CHAR was added simply to record behaviour.
The CHAR data type did not change its behaviour (only GEOMETRY did).
- Additionally, moving duplicate code into a new method
Field::load_data_set_value() and reusing it in three places.
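A hedged sketch of the new hook; the default and the GEOMETRY override
are shown in simplified form, and the exact signature and error
reported may differ from the actual patch:

  /* Default: keep the old behaviour, reset the field and continue. */
  bool Field::load_data_set_no_data(THD *thd, bool fixed_format)
  {
    reset();
    return false;                       /* no error */
  }

  /* GEOMETRY: an empty string is not a valid value, so fail instead
     of silently storing it when the loaded file ends unexpectedly. */
  bool Field_geom::load_data_set_no_data(THD *thd, bool fixed_format)
  {
    my_error(ER_CANT_CREATE_GEOMETRY_OBJECT, MYF(0));
    return true;                        /* abort the load with an error */
  }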
Also, allow the MariaDB 10.2 server to load InnoDB dynamically,
building the plugin as ha_innodb.so (which is the name that
mysql-test-run.pl expects, instead of the default ha_innobase.so).
wsrep_load_data_split(): Instead of referring to innodb_hton_ptr,
check the handlerton::db_type. This was recently broken by me in
MDEV-11415.
innodb_lock_schedule_algorithm: Define as a weak global symbol,
so that WITH_WSREP will not depend on InnoDB being linked statically.
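For example, a weak definition might look like this (GCC/Clang
attribute syntax; the placement and exact form in the patch may differ):

  /* Weak definition: overridden by the strong definition when InnoDB
     is linked statically; still resolvable when InnoDB is a dynamic
     plugin, so WSREP code has no hard link-time dependency on the
     engine. */
  unsigned long innodb_lock_schedule_algorithm __attribute__((weak));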
I tested this manually. Notably, a test that only does
SET GLOBAL wsrep_on=1;
with either a static or a dynamic InnoDB, run with
./mtr --mysqld=--loose-innodb-lock-schedule-algorithm=fcfs
will crash with SIGSEGV at shutdown. With the default VATS
algorithm, enabling wsrep_on is properly refused for both the
static and the dynamic InnoDB.
ha_close_connection(): Do invoke the method also for plugins
for which UNINSTALL PLUGIN was deferred due to open connections.
Thanks to @svoj for pointing this out.
thd_to_trx(): Return a pointer, not a reference to a pointer.
check_trx_exists(): Invoke thd_set_ha_data() for assigning a transaction.
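Approximately (simplified from the shape of the code in ha_innodb.cc):

  /* Return the InnoDB transaction attached to this THD, or NULL. */
  static trx_t* thd_to_trx(THD* thd)
  {
    return static_cast<trx_t*>(thd_get_ha_data(thd, innodb_hton_ptr));
  }

  /* Allocate the transaction on first use and attach it through the
     plugin API, instead of writing through a reference into the THD. */
  static trx_t* check_trx_exists(THD* thd)
  {
    trx_t* trx= thd_to_trx(thd);
    if (trx == NULL)
    {
      trx= innobase_trx_allocate(thd);
      thd_set_ha_data(thd, innodb_hton_ptr, trx);
    }
    return trx;
  }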
log_write_checkpoint_info(): Remove an unused DEBUG_SYNC point
that would cause an assertion failure on shutdown after deferred
UNINSTALL PLUGIN.
This was tested as follows:
cmake -DWITH_WSREP=1 -DPLUGIN_INNOBASE:STRING=DYNAMIC \
-DWITH_MARIABACKUP:BOOL=OFF ...
make
cd mysql-test
./mtr innodb.innodb_uninstall
Changing db, table name and alias to LEX_CSTRING was done in,
among other things:
- thd->db and thd->db_length
- TABLE_LIST tablename, db, alias and schema_name
- Audit plugin database name
- lex->db
- All db and table names in Alter_table_ctx
- st_select_lex db
Other things:
- Changed a lot of functions to take const LEX_CSTRING* as argument
for db, table_name and alias. See init_one_table() as an example
(sketched after this list).
- Changed some function arguments from LEX_CSTRING to const LEX_CSTRING
- Changed some lists from LEX_STRING to LEX_CSTRING
- threads_mysql.result changed because process list_db wasn't always
correctly updated
- New append_identifier() function that takes LEX_CSTRING* as arguments
- Added new element tmp_buff to Alter_table_ctx to separate temp name
handling from temporary space
- Ensure we store the length of table/db names after my_casedn_str()
- Removed unused version of rename_table_in_stat_tables()
- Changed Natural_join_column::table_name and db_name() to never return
NULL (used for print)
- thd->get_db() now returns db as a printable string (thd->db.str or "")
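For example, init_one_table() and get_db() now look roughly like this
(abridged sketch, not the exact declarations):

  /* Callers pass const LEX_CSTRING pointers for db, table name and alias: */
  void TABLE_LIST::init_one_table(const LEX_CSTRING *db_arg,
                                  const LEX_CSTRING *table_name_arg,
                                  const LEX_CSTRING *alias_arg,
                                  enum thr_lock_type lock_type_arg);

  /* In class THD: db is a LEX_CSTRING and get_db() never returns NULL. */
  const char *get_db() { return db.str ? db.str : ""; }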
MDEV-11415 Remove excessive undo logging during ALTER TABLE…ALGORITHM=COPY
Move a test from innodb.rename_table_debug to innodb.alter_copy.
ha_innobase::extra(HA_EXTRA_BEGIN_ALTER_COPY): Register id-versioned
tables so that mysql.transaction_registry will be updated, even for
empty tables that are subjected to ALTER TABLE…ALGORITHM=COPY.
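In outline, the extra() handling looks roughly as follows (the member
name is an assumption, the id-versioned table registration is omitted,
and details are simplified):

  case HA_EXTRA_BEGIN_ALTER_COPY:
    /* Inserts into the intermediate copy of the table need no undo
       logging: after a crash the copy is dropped, not rolled back. */
    m_prebuilt->table->skip_alter_undo = 1;
    break;
  case HA_EXTRA_END_ALTER_COPY:
    m_prebuilt->table->skip_alter_undo = 0;
    break;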
If a crash occurs during ALTER TABLE…ALGORITHM=COPY, InnoDB would spend
a lot of time rolling back writes to the intermediate copy of the table.
To reduce the amount of busy work done, a work-around was introduced in
commit fd069e2bb36a3c1c1f26d65dd298b07e6d83ac8b in MySQL 4.1.8 and 5.0.2,
to commit the transaction after every 10,000 inserted rows.
A proper fix would have been to disable the undo logging altogether and
to simply drop the intermediate copy of the table on subsequent server
startup. This is what happens in MariaDB 10.3 with MDEV-14717, MDEV-14585.
In MariaDB 10.2, the intermediate copy of the table would be left behind
with a name starting with the string #sql.
This is a backport of a bug fix from MySQL 8.0.0 to MariaDB,
contributed by jixianliang <271365745@qq.com>.
Unlike recent MySQL, MariaDB supports ALTER IGNORE. For that operation
InnoDB must for now keep the undo logging enabled, so that the latest
row can be rolled back in case of an error.
In Galera cluster, the LOAD DATA statement will retain the existing
behaviour and commit the transaction after every 10,000 rows if
the parameter wsrep_load_data_splitting=ON is set. The logic to do
so (the wsrep_load_data_split() function and the call
handler::extra(HA_EXTRA_FAKE_START_STMT)) is joint work
by Ji Xianliang and Marko Mäkelä.
The original fix:
Author: Thirunarayanan Balathandayuthapani <thirunarayanan.balathandayuth@oracle.com>
Date: Wed Dec 2 16:09:15 2015 +0530
Bug#17479594 AVOID INTERMEDIATE COMMIT WHILE DOING ALTER TABLE ALGORITHM=COPY
Problem:
During ALTER TABLE, we commit and restart the transaction for every
10,000 rows, so that the rollback after recovery would not take so long.
Fix:
Suppress the undo logging during copy alter operation. If fts_index is
present then insert directly into fts auxiliary table rather
than doing it at commit time.
ha_innobase::num_write_row: Remove the variable.
ha_innobase::write_row(): Remove the hack for committing every 10000 rows.
row_lock_table_for_mysql(): Remove the extra 2 parameters.
lock_get_src_table(), lock_is_table_exclusive(): Remove.
Reviewed-by: Marko Mäkelä <marko.makela@oracle.com>
Reviewed-by: Shaohua Wang <shaohua.wang@oracle.com>
Reviewed-by: Jon Olav Hauglid <jon.hauglid@oracle.com>
The loop in read_xml_field(), unlike the same loop in read_sep_field(),
cannot end with item != NULL, as it does not have any "break" statements.
The entire block "if (item) {...}" was dead code.
The fixes for these bugs:
Bug#27586 Wrong autoinc value assigned by LOAD DATA in the NO_AUTO_VALUE_ON_ZERO mode
Bug#22372 Disable spatial key, load data, enable spatial key, crashes table
fixed only LOAD DATA INFILE, but did not fix LOAD XML INFILE.
This patch does for LOAD XML INFILE what the patches for Bug#27586 and Bug#22372
earlier did for LOAD DATA INFILE.
1. Fixing the auto_increment problem:
a. table->auto_increment_field_not_null is not set to TRUE
anymore when a column does not have a corresponding XML tag.
b. Adding "table->auto_increment_field_not_null= false"
at the end of read_xml_field().
These two changes resemble the patch for Bug#27586.
2. Fixing the GEOMETRY problem:
The result of "reset()" was not tested for errors in read_xml_field(),
which made it possible for an empty string to sneak into a
"GEOMETRY NOT NULL" column when this column does not have a
corresponding XML tag with data.
After this patch the result of reset() is tested, and an error is
returned in such cases.
This change effectively resembles the patch for Bug#22372.
3. Splitting the code into a new virtual method Field::load_data_set_null().
Rationale:
a. To avoid duplicate code in read_sep_field() and read_xml_field():
Changes #1 and #2 made the code handling NULL values for Field
exactly the same in read_sep_field() and read_xml_field().
b. To avoid tests for field_type(), which is not friendly to
upcoming data type plugins.
This change makes it possible for data type plugins
to implement their own special way for handling NULL values in LOAD DATA
by overriding Field_xxx::load_data_set_null(),
like Field_geom and Field_timestamp do.
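A simplified sketch of the hook and two overrides (signatures, return
convention and the exact error reported are approximations):

  /* Default: a missing value becomes SQL NULL. */
  bool Field::load_data_set_null(THD *thd)
  {
    reset();
    set_null();
    return false;                        /* no error */
  }

  /* TIMESTAMP keeps its historical LOAD DATA behaviour:
     the current timestamp instead of NULL. */
  bool Field_timestamp::load_data_set_null(THD *thd)
  {
    set_notnull();
    set_time();
    return false;
  }

  /* GEOMETRY NOT NULL: refuse the missing value instead of letting
     an empty string into the column. */
  bool Field_geom::load_data_set_null(THD *thd)
  {
    if (reset())                         /* fails for NOT NULL geometry */
    {
      my_error(ER_CANT_CREATE_GEOMETRY_OBJECT, MYF(0));
      return true;
    }
    set_null();
    return false;
  }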
This was done to get more information about where time is spent.
Now we can get proper timing for time spent in commit, rollback,
binlog write etc.
The following stages were added (a sketch of how a stage is defined
and used follows this list):
- Commit
- Commit_implicit
- Rollback
- Rollback implicit
- Binlog write
- Init for update
- This is used instead of "Init" for insert, update and delete.
- Starting cleanup
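Roughly, each new stage follows the existing PSI_stage_info pattern and
is set with THD_STAGE_INFO around the corresponding code (names shown
to illustrate the pattern, not the literal diff):

  PSI_stage_info stage_commit= {0, "Commit", 0};
  PSI_stage_info stage_rollback= {0, "Rollback", 0};
  PSI_stage_info stage_binlog_write= {0, "Binlog write", 0};

  /* ... and at a call site, e.g. around the commit path: */
  THD_STAGE_INFO(thd, stage_commit);
  error= ha_commit_trans(thd, all);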
The following stages were changed:
- The "Unlocking tables" stage now resets the stage to the previous stage at its end
- The "Binlog write" stage now resets the stage to the previous stage at its end
- "end" -> "end of update loop"
- "cleaning up" -> "Reset for next command"
- Added stage_searching_rows_for_update when searching for rows
to be deleted.
Other things:
- Renamed all stages to start with a capital letter (before there was no
  consistency)
- Increased performance_schema_max_stage_classes from 150 to 160.
- Most of the test changes in performance schema comes from renaming of
stages.
- Removed duplicate output of variables and initial state in a lot of
performance schema tests.
This was done to make it easier to change a default value for a
performance variable without affecting all tests.
- Added start_server_variables.test to check configuration
- Removed some duplicate "closing tables" stages
- Updated position for "stage_init_update" and "stage_updating" for
delete, insert and update to be just before update loop (for more
exact timing).
- Don't set "Checking permissions" twice in a row.
- Remove stage_end stage from creating views (not done for create table
either).
- Updated default performance history size from 10 to 20 because of new
stages
- Ensure that ps_enabled is correct (to be used in a later patch)