* remove old 5.2+ InnoDB support for virtual columns
* enable corresponding parts of the innodb-5.7 sources
* copy corresponding test cases from 5.7
* copy detailed Alter_inplace_info::HA_ALTER_FLAGS flags from 5.7
- and more detailed detection of changes in fill_alter_inplace_info()
* more "innodb compatibility hooks" in sql_class.cc to
- create/destroy/reset a THD (used by background purge threads)
- find a prelocked table by name
- open a table (from a background purge thread)
* different from 5.7:
- new service thread "thd_destructor_proxy" to make sure all THDs are
destroyed at the correct point in time during the server shutdown
(see the sketch after this list)
- proper opening/closing of tables for vcol evaluations in
+ FK checks (use already opened prelocked tables)
+ purge threads (open the table, MDLock it, add it to tdc, close
when not needed)
- cache open tables in vc_templ
- avoid unnecessary allocations, reuse table->record[0] and table->s->default_values
- not needed in 5.7, because it overcalculates:
+ tell the server to calculate vcols for an on-going inline ADD INDEX
+ calculate vcols for correct error messages
* update other engines (mroonga/tokudb) accordingly
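To illustrate the thd_destructor_proxy idea mentioned in the list above,
here is a minimal self-contained C++ model. All names and types here are
invented for the sketch; the real server code uses THD and its own thread
primitives. The point is only that background threads register their
sessions with a proxy, and the proxy destroys whatever is still alive at
one well-defined point during shutdown:

    #include <condition_variable>
    #include <cstdio>
    #include <list>
    #include <memory>
    #include <mutex>
    #include <thread>

    struct Session {                         // stand-in for a THD
      int id;
      explicit Session(int i) : id(i) { std::printf("THD %d created\n", id); }
      ~Session()                { std::printf("THD %d destroyed\n", id); }
    };

    class ThdDestructorProxy {
      std::mutex m_;
      std::condition_variable cv_;
      std::list<std::unique_ptr<Session>> sessions_;
      bool shutdown_ = false;
      std::thread worker_{[this] { run(); }};

      void run() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return shutdown_; });
        sessions_.clear();                   // all THDs die here, at one point
      }

    public:
      Session *create_session(int id) {      // e.g. a background purge thread
        std::lock_guard<std::mutex> g(m_);
        sessions_.push_back(std::make_unique<Session>(id));
        return sessions_.back().get();
      }

      void shutdown() {                      // called during server shutdown
        { std::lock_guard<std::mutex> g(m_); shutdown_ = true; }
        cv_.notify_one();
        worker_.join();
      }
    };

    int main() {
      ThdDestructorProxy proxy;
      proxy.create_session(1);
      proxy.create_session(2);
      proxy.shutdown();                      // THDs destroyed exactly here
    }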
When updating a table with virtual BLOB columns, the following might happen:
- an old record is read from the table; it has no virtual blob values yet
- update_virtual_fields() is run, and the vcol blob gets its value into
the record. But only a pointer to the value is in table->record[0];
the value itself is in the Field_blob::value String (though it doesn't
have to be! it can be in the record, if the column is just a copy of
another column: ... b VARCHAR, c BLOB AS (b) ...)
- store_record(table, record[1]); the old record is now in record[1]
- fill_record() prepares new values in record[0], vcol blob is updated,
new value replaces the old one in the Field_blob::value
- now both record[1] and record[0] have a pointer that points to the
*new* vcol blob value. Or record[1] has a pointer to nowhere if
Field_blob::value had to realloc.
To resolve this we unlink vcol blobs from the pointer to the
data (in record[1]). Because the value is not *always* in
the Field_blob::value String, we need to remember which blobs
were unlinked. The orphan memory must be freed manually.
To complicate the matter, ha_update_row() is also used in
multi-update, in REPLACE, in INSERT ... ON DUPLICATE KEY UPDATE,
and also in REPLACE ... SELECT, REPLACE DELAYED, LOAD DATA REPLACE, and so on.
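A toy program showing the aliasing problem and the unlink-and-remember fix
described above. These are simplified, invented types, not the real
Field_blob code; the only real idea is that both record buffers can end up
pointing into the single Field_blob::value buffer:

    #include <cstdio>
    #include <cstring>
    #include <string>
    #include <vector>

    struct BlobField {
      std::string value;          // stand-in for Field_blob::value
      const char *ptr = nullptr;  // what is stored inside a record buffer

      void store(const char *data) { value = data; ptr = value.data(); }
    };

    int main() {
      BlobField vcol;
      vcol.store("old-blob");            // update_virtual_fields() on old row
      const char *record0 = vcol.ptr;    // record[0] points into value
      const char *record1 = record0;     // store_record(): record[1]=record[0]

      // The fix: before computing the new value, "unlink" the old pointer
      // by giving record[1] its own copy, and remember the copy so the
      // orphan memory can be freed manually later.
      std::vector<char*> orphans;
      char *copy = new char[std::strlen(record1) + 1];
      std::strcpy(copy, record1);
      orphans.push_back(copy);
      record1 = copy;

      vcol.store("new-blob-value");      // fill_record(): may realloc value
      std::printf("new: %s, old: %s\n", vcol.ptr, record1);

      for (char *p : orphans) delete[] p;
    }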
multi-update was setting up read_set/vcol_set in
multi_update::initialize_tables(), which is invoked after
the optimizer (JOIN::optimize_inner()). But some rows - if they come
from const tables - are already read by the optimizer, and those rows
will not have all the necessary column/vcol values.
* multi_update::initialize_tables() uses results from the optimizer
and cannot be moved to be called earlier.
* multi_update::prepare() is called before the optimizer, but
it cannot set up read_set/vcol_set, because the optimizer
might reset them (see SELECT_LEX::update_used_tables()).
As a fix I've added a new method, select_result::prepare_to_read_rows(),
which is called from inside the optimizer just before make_join_statistics().
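A minimal sketch of this hook. Only select_result::prepare_to_read_rows()
is a name from the patch; everything else is a simplified stand-in type
for illustration:

    #include <cstdio>
    #include <set>
    #include <string>

    struct select_result {
      // Called from inside the optimizer, before any row is read.
      virtual void prepare_to_read_rows() {}
      virtual ~select_result() = default;
    };

    struct multi_update_like : select_result {
      std::set<std::string> read_set;     // stand-in for read_set/vcol_set
      void prepare_to_read_rows() override {
        read_set.insert("b");             // base column the vcol depends on
        read_set.insert("vcol_c");        // the virtual column itself
      }
    };

    void optimize(select_result *result) {
      result->prepare_to_read_rows();     // before "make_join_statistics()"
      std::puts("optimizer reads const-table rows; columns already marked");
    }

    int main() {
      multi_update_like mu;
      optimize(&mu);
      std::printf("read_set size: %zu\n", mu.read_set.size());
    }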
The idea of this fix was taken from the patch by Roy Lyseng
for mysql-5.6 Bug#14740889: "Wrong result for aggregate
functions when executing query through cursor".
Here's Roy's comment for his patch:
"
The problem was that a grouped query did not behave properly when
executed using a cursor. On further inspection, the query used one
intermediate temporary table for the grouping.
Then, Select_materialize::send_result_set_metadata created a temporary
table for storing the query result. Notice that get_unit_column_types()
is used to retrieve column meta-data for the query. The items contained
in this list are later modified so that their result_field points to
the row buffer of the materialized temporary table for the cursor.
But prior to this, these result_field objects have been prepared for
use in the grouping operation, by JOIN::make_tmp_tables_info(), hence
the grouping operation operates on wrong column buffers.
The problem is solved by using the list JOIN::fields when copying data
to the materialized table. This list is set by JOIN::make_tmp_tables_info()
and points to the columns of the last intermediate temporary table of
the executed query. For a UNION, it points to the temporary table
that is the result of the UNION query.
Notice that we have to assign a value to ::fields early in JOIN::optimize()
in case the optimization shortcuts due to a const plan detection.
A more optimal solution might be to avoid creating the final temporary
table when the query result is already stored in a temporary table.
"
The patch does not contain a test case, but the description of the
problem corresponds exactly to what could be observed in the test
case for mdev-11081.
Add new event types for compressed events; these are:
QUERY_COMPRESSED_EVENT,
WRITE_ROWS_COMPRESSED_EVENT_V1,
UPDATE_ROWS_COMPRESSED_EVENT_V1,
DELETE_ROWS_COMPRESSED_EVENT_V1,
WRITE_ROWS_COMPRESSED_EVENT,
UPDATE_ROWS_COMPRESSED_EVENT,
DELETE_ROWS_COMPRESSED_EVENT.
These events inherit from the corresponding uncompressed events. One of
their constructors and the write function are overridden to handle
compression and decompression; everything else is exactly the same.
On the slave, the IO thread uncompresses and converts the events when it
receives them from the master, so the SQL and worker threads can stay
unchanged.
Currently zlib is used as the compression algorithm; other algorithms may
be supported in the future.
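A hedged sketch of the compress-on-write / decompress-on-receive idea using
plain zlib. The real event classes add their own framing and store the
original length inside the event, so this is only a model (compile with -lz):

    #include <zlib.h>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    // "write": compress the event body before it goes into the binlog.
    std::vector<unsigned char> compress_event(const unsigned char *body,
                                              uLong len) {
      uLongf bound = compressBound(len);
      std::vector<unsigned char> out(bound);
      if (compress(out.data(), &bound, body, len) != Z_OK)
        out.clear();
      out.resize(bound);
      return out;
    }

    // "IO thread": uncompress the body back to its original form, so the
    // SQL and worker threads can stay unchanged.
    std::vector<unsigned char>
    uncompress_event(const std::vector<unsigned char> &in, uLong orig_len) {
      std::vector<unsigned char> out(orig_len);
      uLongf out_len = orig_len;
      if (uncompress(out.data(), &out_len, in.data(), in.size()) != Z_OK)
        out.clear();
      return out;
    }

    int main() {
      const char *query = "INSERT INTO t1 VALUES (1),(2),(3)";
      uLong len = std::strlen(query) + 1;
      auto packed = compress_event(
          reinterpret_cast<const unsigned char*>(query), len);
      auto restored = uncompress_event(packed, len);
      std::printf("restored: %s (%zu -> %zu bytes)\n",
                  reinterpret_cast<char*>(restored.data()),
                  (size_t)len, packed.size());
    }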
This is similar to MySQL Worklog 3253, but with
a different implementation. The disk format and
SQL syntax are identical with MySQL 5.7.
Features supported:
- "Any" number of any trigger type
- Supports FOLLOWS and PRECEDES to be
able to put triggers in a certain execution order.
Implementation details:
- Class Trigger added to hold information about a trigger.
Before this, trigger information was stored in a set of lists in
Table_triggers_list and in Table_triggers_list::bodies.
- Each Trigger has a next field that points to the next Trigger with the
same action and time (see the sketch after this list).
- When accessing a trigger, we now always access all linked triggers
- The lists are now only used to load and save trigger files.
- The MySQL trigger test case (trigger_wl3253) was added, and we execute
it identically.
- Even more graceful handling of wrong trigger files than before. This
is useful if a trigger file uses functions or syntax not provided by
the server.
- Each trigger now has a "Created" field that shows when the trigger was
created, with 2 decimals.
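As referenced in the list above, here is a simplified model of the Trigger
chaining. The types are invented and minimal (the real classes are Trigger
and Table_triggers_list); it shows the per-(action, time) linked list and
execution that always walks the whole chain:

    #include <cstdio>
    #include <string>

    enum trg_event { TRG_INSERT, TRG_UPDATE, TRG_DELETE, TRG_EVENT_MAX };
    enum trg_time  { TRG_BEFORE, TRG_AFTER, TRG_TIME_MAX };

    struct Trigger {
      std::string name;
      Trigger *next = nullptr;  // next trigger with same action and time
    };

    struct Table_triggers {
      Trigger *heads[TRG_EVENT_MAX][TRG_TIME_MAX] = {};

      // Append at the end of the chain; FOLLOWS/PRECEDES would instead
      // splice the new trigger in at a chosen position in this same list.
      void add(trg_event e, trg_time t, Trigger *trg) {
        Trigger **p = &heads[e][t];
        while (*p) p = &(*p)->next;
        *p = trg;
      }

      void fire(trg_event e, trg_time t) {
        for (Trigger *trg = heads[e][t]; trg; trg = trg->next)
          std::printf("executing trigger %s\n", trg->name.c_str());
      }
    };

    int main() {
      Table_triggers triggers;
      Trigger a{"t1_bi_first"}, b{"t1_bi_second"};
      triggers.add(TRG_INSERT, TRG_BEFORE, &a);
      triggers.add(TRG_INSERT, TRG_BEFORE, &b); // as if created with FOLLOWS
      triggers.fire(TRG_INSERT, TRG_BEFORE);    // walks all linked triggers
    }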
Other comments:
- Many of the changes in test files were done because of the new "Created"
field in the trigger file. This shows up in SHOW ... TRIGGER and when
using information_schema.triggers.
- Don't check if all memory is released when one uses --gdb; this is
needed to be able to get a list of not-freed memory from safemalloc
while debugging.
- Added an option to trim_whitespace() to report how many prefix characters
were skipped.
- Changed a few ulonglong sql_mode variables to sql_mode_t, to find some
wrong usages of sql_mode.
* remove new InnoDB-specific ER_ and HA_ERR_ codes
* renamed a few old ER_ and HA_ERR_ error messages to be less MyISAM-specific
* remove duplicate enum definitions (durability_properties, icp_result)
* move new mysql-test include files to their owner suite
* rename xtradb.rdiff files to *-disabled
* remove mistakenly committed helper perl module
* remove long obsolete handler::ha_statistic_increment() method
* restore the standard C xid_t structure to not have setters and getters
* remove xid_t::reset that was cleaning too much
* move MySQL-5.7 ER_ codes where they belong
* fix innodb to include service_wsrep.h, not internal wsrep headers
* update tests and results
When a deadlock kill is detected inside the storage engine, the kill
is not done immediately, to avoid calling back into the storage engine
kill_query method with various lock subsystem mutexes held. Instead the
kill is queued and done later by a slave background thread.
This patch is in preparation for fixing TokuDB optimistic parallel
replication, as well as for removing locking hacks in InnoDB/XtraDB in
10.2.
Signed-off-by: Kristian Nielsen <knielsen at knielsen-hq.org>
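A self-contained model of the queued-kill approach described above. The
names are illustrative; the real code queues the kill to the slave
background thread instead of calling the engine's kill_query method while
lock-subsystem mutexes are held:

    #include <chrono>
    #include <condition_variable>
    #include <cstdio>
    #include <deque>
    #include <mutex>
    #include <thread>

    class KillQueue {
      std::mutex m_;
      std::condition_variable cv_;
      std::deque<int> victims_;        // thread ids to kill
      bool stop_ = false;
      std::thread worker_{[this] { run(); }};

      void run() {
        std::unique_lock<std::mutex> lk(m_);
        for (;;) {
          cv_.wait(lk, [this] { return stop_ || !victims_.empty(); });
          while (!victims_.empty()) {
            int victim = victims_.front();
            victims_.pop_front();
            lk.unlock();               // no lock held during the callback
            std::printf("killing query of thd %d\n", victim);
            lk.lock();
          }
          if (stop_) return;
        }
      }

    public:
      // Safe to call with lock-subsystem mutexes held: it only enqueues.
      void queue_kill(int thd_id) {
        { std::lock_guard<std::mutex> g(m_); victims_.push_back(thd_id); }
        cv_.notify_one();
      }

      ~KillQueue() {
        { std::lock_guard<std::mutex> g(m_); stop_ = true; }
        cv_.notify_one();
        worker_.join();
      }
    };

    int main() {
      KillQueue q;
      q.queue_kill(42);                // deadlock detected inside the engine
      std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }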
Added comments.
Added a reaction for exceeding the maximum number of elements in a WITH clause.
Added a test case to check this reaction.
Added a test case where the specification of a recursive table
uses two non-recursive with tables.
- When waiting for events, start time is now counted from start of wait
- Instead of having "Connect" as "Command" for all replication threads we
now have:
- Slave_IO for the slave thread reading events from the master into the
relay log
- Slave_SQL for the slave thread executing SQL commands or distributing
queries to slave workers
- Slave_worker for slave threads executing SQL commands in parallel
replication
- Reverted from tracking the donor servicing thread. With xtrabackup SST,
the xtrabackup thread will call FTWRL and the node is desynced upfront.
- Skipping desync in FTWRL if the node is operating as a donor.
- Enveloped FTWRL processing with wsrep desync/resync calls (see the
sketch below). This way the FTWRL-processing node will not cause flow
control to kick in.
- The donor servicing thread is an unfortunate exception: we must let it
pause the provider as part of the FTWRL phase, but not desync/resync,
as this is done as part of donor control at a higher level.
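A sketch of the desync/resync enveloping mentioned above. The wsrep
provider calls are stubbed out here; the real code goes through the
provider API and handles errors:

    #include <cstdio>

    void provider_desync() { std::puts("wsrep: desync node"); }  // stub
    void provider_resync() { std::puts("wsrep: resync node"); }  // stub

    class Ftwrl_desync_guard {
      bool active_;
    public:
      // A donor node is already desynced by the SST logic, so skip it there.
      explicit Ftwrl_desync_guard(bool is_donor) : active_(!is_donor) {
        if (active_) provider_desync();
      }
      ~Ftwrl_desync_guard() { if (active_) provider_resync(); }
    };

    void flush_tables_with_read_lock(bool is_donor) {
      Ftwrl_desync_guard guard(is_donor);
      std::puts("FTWRL: tables flushed, global read lock held");
      // ... node stays paused without stalling the rest of the cluster ...
    }

    int main() {
      flush_tables_with_read_lock(false);  // regular node: desync + resync
      flush_tables_with_read_lock(true);   // donor: desync handled elsewhere
    }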
- Fixed typos
- Added --core-on-failure to mysql-test-run
- More DBUG_PRINT in viosocket.c
- Don't forget CLIENT_REMEMBER_OPTIONS for compressed slave protocol
- Removed unused stage variables
This patch helps detect bugs in parallel replication where a DDL
(like DROP TABLE) has been scheduled before conflicting DML's (like INSERT)
are committed.
What makes these bugs hard to detect is that in most cases any wrong
scheduling is caught by MDL locks. It's only when there are timing issues
that the bugs (usually deadlocks) are noticed.
This patch adds a DBUG_ASSERT() that detects, in parallel replication,
if a DDL is scheduled before any dependent DML's are committed.
It does this by checking if there are any conflicting replication locks
when the DDL is about to wait for its MDL lock.
I also did some minor code cleanups in sql_base.cc to make this code
similar to other related code.
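A toy model of the added safety check, with all types invented for the
sketch: before a DDL in a parallel replication worker blocks on an MDL
lock, assert that no earlier, still-uncommitted event group holds a
conflicting lock:

    #include <cassert>
    #include <cstdio>
    #include <vector>

    struct Event_group {
      unsigned long long seqno;  // commit order position
      bool committed;
      bool holds_conflicting_lock;
    };

    void ddl_wait_for_mdl(unsigned long long ddl_seqno,
                          const std::vector<Event_group> &groups) {
      for (const Event_group &g : groups) {
        // A conflicting lock held by an earlier uncommitted group means
        // the DDL was scheduled too early: it would wait on MDL while
        // commit ordering waits on it (a deadlock).
        bool bad = g.seqno < ddl_seqno && !g.committed &&
                   g.holds_conflicting_lock;
        assert(!bad && "DDL scheduled before dependent DML committed");
      }
      std::puts("safe to wait for MDL lock");
    }

    int main() {
      std::vector<Event_group> groups = {{1, true, false}, {2, true, true}};
      ddl_wait_for_mdl(3, groups);  // fine: all earlier groups committed
    }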
Moved checking whether the limit set for the number of iterations
when executing a recursive query has been reached from
st_select_lex_unit::exec_recursive to TABLE_LIST::fill_recursive.
Changed the name of the system variable max_recursion_level to
max_recursive_iterations.
Adjusted test cases.
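A small self-contained illustration of the iteration cap: an iterative
fixpoint loop, which is roughly how a recursive CTE is materialized, that
fails once max_recursive_iterations is exceeded. Only the variable name is
from the patch; the rest is illustrative:

    #include <cstdio>
    #include <set>

    static unsigned long max_recursive_iterations = 1000; // system variable

    bool fill_recursive(std::set<int> &rows) {
      unsigned long iteration = 0;
      bool changed = true;
      while (changed) {
        if (++iteration > max_recursive_iterations) {
          std::fprintf(stderr,
                       "ERROR: recursive query exceeded %lu iterations\n",
                       max_recursive_iterations);
          return false;            // the check now lives in the filler
        }
        changed = false;
        // recursive step: n -> n + 1, cycle-free but effectively unbounded
        std::set<int> next;
        for (int n : rows)
          if (n < 100000) next.insert(n + 1);
        for (int n : next)
          changed |= rows.insert(n).second;
      }
      return true;
    }

    int main() {
      std::set<int> rows = {0};    // anchor part of the CTE
      if (!fill_recursive(rows))
        return 1;                  // limit exceeded
      std::printf("materialized %zu rows\n", rows.size());
    }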
Transaction replay causes the THD to re-apply the replication
events from the transaction's execution, using the same path the
appliers do. While applying the log events, the THD's timestamp is
set to the timestamp of the event.
Setting the timestamp explicitly causes the NOW() function to
always return the timestamp that was set. To avoid this behavior we
reset the timestamp after replaying is done.
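A minimal model of the fix, with simplified names (the real code deals
with the THD's explicitly set time): while replaying, NOW() must return
the event's timestamp, and the timestamp is reset once the replay
finishes:

    #include <cstdio>
    #include <ctime>

    struct Session {
      time_t fixed_time = 0;   // 0 means "not set"

      time_t now() const {     // what NOW() would return
        return fixed_time ? fixed_time : time(nullptr);
      }
    };

    void replay_events(Session &thd, time_t event_timestamp) {
      thd.fixed_time = event_timestamp;  // applier path sets the event time
      std::printf("replay: NOW() = %ld\n", (long)thd.now());
      thd.fixed_time = 0;                // the fix: reset after replay
    }

    int main() {
      Session thd;
      replay_events(thd, 1000000000);    // timestamp from replicated event
      std::printf("after replay: NOW() = %ld\n", (long)thd.now());
    }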
* remove a confusing method name - Field::set_default_expression()
* remove handler::register_columns_for_write()
* rename stuff
* add asserts
* remove unlikely unlikely
* remove redundant if() conditions
* fix mark_unsupported_function() to report the most important violation
* don't scan vfield list for default values (vfields don't have defaults)
* move the handling of DROP CONSTRAINT IF EXISTS where it belongs
* don't protect engines from Alter_inplace_info::ALTER_ADD_CONSTRAINT
* comments
- Force usage of () around complex DEFAULT expressions
- Give an error if a DEFAULT expression contains invalid characters
- Don't use const_charset_conversion for stored Item_func_sysconst
expressions, as the result is not constant over different executions
- Fixed Item_func_user() to not store the calculated value in str_value