Currently InnoDB uses an internal parser for adding foreign keys. Remove
the internal parser and use the data parsed by the SQL parser (sql_yacc)
for adding foreign keys.
- create_table_info_t::create_foreign_keys() replacement for
dict_create_foreign_constraints_low();
- Pass constraint name via Foreign_key object.
Temporary until MDEV-20865:
- Pass alter_info as part of create_info.
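As a hedged illustration of the new direction (Key, Foreign_key and
Alter_info::key_list exist in the server; the loop body below is an
assumption for illustration, not the patch's actual code), the data
produced by sql_yacc can be consumed like this:

    // Sketch: walk parser output instead of re-parsing the SQL text.
    List_iterator_fast<Key> it(alter_info->key_list);
    while (Key *key= it++)
    {
      if (key->type != Key::FOREIGN_KEY)
        continue;
      Foreign_key *fk= static_cast<Foreign_key*>(key);
      // Build the InnoDB foreign key descriptor from fk->ref_db,
      // fk->ref_table, fk->columns and fk->ref_columns here.
    }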
The fix consists of three commits backported from 10.3:
1) Cleanup isnan() portability checks
(cherry picked from commit 7ffd7fe962)
2) Cleanup isinf() portability checks
Original problem reported by Wlad: re-compilation of 10.3 on top of a 10.2
build would cache an undefined HAVE_ISINF from 10.2, whereas it is expected
to be 1 in 10.3.
std::isinf() seems to be available on all supported platforms.
(cherry picked from commit bc469a0bdf)
3) Use std::isfinite in C++ code
This is an addition to the parent revision, fixing build failures.
(cherry picked from commit 54999f4e75)
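The end state the series aims at is plain standard-library calls; a
standalone example (not code from the patch):

    #include <cmath>
    #include <cstdio>
    #include <limits>

    int main()
    {
      double values[]= { 1.0,
                         std::numeric_limits<double>::infinity(),
                         std::numeric_limits<double>::quiet_NaN() };
      for (double v : values)
        std::printf("%g: finite=%d inf=%d nan=%d\n",
                    v, (int) std::isfinite(v), (int) std::isinf(v),
                    (int) std::isnan(v));
      return 0;
    }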
In order to unify the two *.yy files more easily,
this patch collects all differing rules at the end of the *.yy files,
so the rules section looks like this:
%%
common rules
different rules
This PR contains an mtr test for reproducing a failure when replicating a
CREATE TABLE ... SELECT (CTAS) statement through asynchronous MariaDB
replication to a MariaDB Galera cluster.
The problem happens when the CTAS replication contains a CREATE TABLE
statement followed by row events for populating the table. In this
situation, the Galera node operating as a MariaDB replication slave will
first replicate only the CREATE TABLE part into the cluster, and then
perform another replication containing both the CREATE TABLE and the row
events. This leads all other nodes to fail on the duplicate table create
attempt, and to crash due to this failure.
The PR also contains a fix, which identifies the situation where a CTAS has
been replicated and scans further in the async replication stream to see if
row events follow. The slave node replicates either a single TOI in case
the CTAS table is empty, or, if the CTAS table contains rows, a single
bundled write set with the CREATE TABLE and the row events.
This fix should keep the master server's GTIDs for CTAS replication in sync
with the GTIDs in the Galera cluster.
Make sure that the sort buffers can store at least one sort key.
This is needed to make sure that all merge buffers are read; otherwise,
with no sort keys, some merge buffers are skipped because the code
concludes there is no data to be read.
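A minimal sketch of the invariant, with assumed variable names (not the
actual sort code):

    // Guarantee room for at least one complete sort key, so that no
    // merge buffer ends up holding zero keys and gets skipped as "empty".
    size_t keys_per_buffer= buffer_size / sort_key_length;
    if (keys_per_buffer == 0)
    {
      keys_per_buffer= 1;
      buffer_size= sort_key_length;
    }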
Adding:
- new class sp_expr_lex
- new grammar rule expr_lex, which includes both reset_lex()
and its corresponding restore_lex()
Also:
- Moving a few methods from LEX to sp_expr_lex.
- Moving the code from *.yy into the new sp_expr_lex methods
sp_repeat_loop_finalize() and sp_if_expr().
This change makes it easier to edit the related grammar
(and makes it easier to unify sql_yacc.yy and sql_yacc_ora.yy later).
Fix partitioning and DS-MRR to work together
- In ha_partition::index_end(): take into account that ha_innobase (and
other engines using DS-MRR) will have inited=RND when initialized for
DS-MRR scan.
- In ha_partition::multi_range_read_next(): if the MRR scan is using
HA_MRR_NO_ASSOCIATION mode, it is not guaranteed that the partition's
handler will store anything into *range_info.
- In DsMrr_impl::choose_mrr_impl(): ha_partition inquires of its partitions
how much memory their MRR implementation needs by passing
*buffer_size=0. The DS-MRR code didn't know about this (it actually used
uint for the buffer size calculation and would have an underflow).
Returning *buffer_size=0 made ha_partition assume that partitions do
not need MRR memory and pass the same buffer to each of them.
Now this is fixed. If DS-MRR gets *buffer_size=0, it will return
the amount of buffer space needed, but not more than about
@@mrr_buffer_size (see the sketch after this list).
* Fix ha_{innobase,maria,myisam}::clone. If ha_partition uses MRR on its
partitions, and the partitions use DS-MRR, the code will call handler->clone
with the TABLE (*NOT partition*) name as an argument.
DS-MRR has no way of knowing the partition name, so the solution was
to have the ::clone() function of the affected storage engines ignore
the name argument and get it elsewhere.
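A hedged sketch of the *buffer_size handshake described in the list above
(the function and variable names are assumptions for illustration, not the
actual DS-MRR code):

    #include <algorithm>
    #include <cstddef>

    // ha_partition asks "how much memory do you need?" by passing 0.
    void report_mrr_buffer_size(size_t *buffer_size,
                                size_t needed,          // what DS-MRR wants
                                size_t mrr_buffer_size) // @@mrr_buffer_size
    {
      if (*buffer_size == 0)
      {
        // Never answer 0 here: report the needed amount, capped by
        // @@mrr_buffer_size, so ha_partition can size per-partition buffers.
        *buffer_size= std::min(needed, mrr_buffer_size);
      }
    }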
This is a prerequisite patch required to remove InnoDB's
thd_destructor_proxy thread.
The patch implements pre-shutdown functionality for handlers.
A storage engine might need to perform some work after all user
connections are shut down, but before killing off the plugins.
The reason is that an SE could still be using some of the
server infrastructure. In the case of InnoDB these would be purge threads,
which call into the server to calculate the results of virtual functions,
acquire MDL locks on tables, or possibly also use the audit plugins.
Create background THDs that are not counted and do not block
close_connections(); in other words, they are just something that a plugin
can use internally.
These THDs will be used later by InnoDB purge threads, which need a THD
to calculate virtual functions.
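A hedged sketch of the shape of such a hook (the member name and the call
site are assumptions for illustration; the real handlerton is far larger):

    // A hook the server invokes after all user connections are closed,
    // but before plugins are deinitialized.
    struct handlerton_sketch
    {
      void (*pre_shutdown)(void);
    };

    // Shutdown order, conceptually:
    //   close_connections();     // user THDs are gone
    //   hton->pre_shutdown();    // SE stops using server infrastructure
    //   plugin_shutdown();       // now it is safe to unload plugins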
revision-id: 673e253724979fd9fe43a4a22bd7e1b2c3a5269e
Author: Kristian Nielsen
Fix missing memory barrier in wait_for_commit.
The function wait_for_commit::wait_for_prior_commit() has a fast path where it
checks without locks if wakeup_subsequent_commits() has already been called.
This check was missing a memory barrier. The waitee thread does two writes to
variables `waitee' and `wakeup_error', and if the waiting thread sees the
first write it _must_ also see the second or incorrect behavior will occur.
This requires memory barriers between both the writes (release semantics) and
the reads (acquire semantics) of those two variables.
Other accesses to these variables are done under lock or where only one thread
will be accessing them, and can be done without barriers (relaxed semantics).
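Reduced to a standalone example, the required pairing looks like this
(wait_for_commit itself is more involved; only the barrier placement is
shown):

    #include <atomic>

    struct waiter_state
    {
      std::atomic<void*> waitee{nullptr};  // non-null while still waiting
      int wakeup_error= 0;
    };

    void wakeup(waiter_state *w, int err)
    {
      w->wakeup_error= err;
      // Release: everything written before this store is visible to a
      // thread that reads `waitee` with acquire and sees the new value.
      w->waitee.store(nullptr, std::memory_order_release);
    }

    bool fast_path_done(waiter_state *w, int *err)
    {
      // Acquire: if we observe waitee == nullptr, we are guaranteed to
      // also observe the preceding write to wakeup_error.
      if (w->waitee.load(std::memory_order_acquire) == nullptr)
      {
        *err= w->wakeup_error;
        return true;
      }
      return false;  // take the slow, locked path
    }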
Fix incorrect change introduced in the fix for MDEV-20109.
The patch tried to compute a more precise estimate for the record_count
value in SJ-Materialization-Scan strategy (in
Sj_materialization_picker::check_qep). However, the new formula is worse,
as it produces extremely optimistic results in common cases where
SJ-Materialization-Scan should be used.
The old formula produces pessimistic results in cases when SJ-Materialization-
Scan is unlikely to be a good choice anyway. So the old behavior is better.
Count the "gap" time between table accesses and display it as
r_other_time_ms in the "table" element.
* The advantage of this approach is that it doesn't add any new
my_timer_cycles() calls.
* The disadvantage is that the definition of what is done during
"other time" is not that clear: it includes checking the WHERE
(for this table), constructing the index lookup tuple (for the next table),
writing to the GROUP BY temporary table (as we don't account for that time
separately [yet]), etc.
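A minimal sketch of the accounting (the names are illustrative, not the
actual tracker code):

    // Reuse the cycle counts already taken around each table's accesses;
    // the "gap" is simply the span between leaving one table and
    // entering the next one.
    void account_gap(unsigned long long prev_table_end,
                     unsigned long long next_table_start,
                     unsigned long long *r_other_time)
    {
      *r_other_time+= next_table_start - prev_table_end;
    }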
The new iterator has the fastest possible implementation: it just moves one pointer.
It's faster than List_iterator and List_iterator_fast: both do more on increment.
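For comparison, an increment that is literally one pointer move (standalone
illustration, not the patch's exact class):

    struct list_node { list_node *next; };

    struct node_iterator
    {
      list_node *node;
      // Increment is a single pointer move, with no end-marker checks
      // or prev-pointer tracking along the way.
      node_iterator &operator++() { node= node->next; return *this; }
      list_node &operator*() const { return *node; }
      bool operator!=(const node_iterator &o) const { return node != o.node; }
    };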
Overall the patch brings:
1) worse compile times
2) possibly(!) worse debug build performance
3) definitely better optimized build performance
4) ability to write less code
5) ability to write less bug-prone code
The assert indicates that the current transaction got caught uncleaned from
the semisync master's cache when it is signaled to proceed upon receiving
its ack.
The reason for the missed cleanup turns out to be a flaw in the gtid
connect mode.
The binlog file *name* of the last received event, as submitted by the
connecting slave, was adopted into
Repl_semi_sync_master::m_reply_file_name as part of the semisync
initialization.
Notice that the initialization still refines the position part of the
submitted last received event's binlog coordinates.
The master-side binlog filename:pos refinement is
specific to the gtid connect mode, for the purpose of computing the latest
binlog file to resume slave feeding from.
Effectively, in the gtid connect mode the computed resumption filename:pos
may turn out to be smaller than the submitted coordinates. In that case a
transaction committing after connect time may be logged with a filename:pos
that is also less than the submitted coordinates, and that triggers the
assert.
Fixed by making the semisync initialization use the refined filename:pos.
It is guaranteed to be less than any newly generated transaction's binlog:pos.
* Explicit STARTS syntax
* SHOW CREATE
* Default STARTS rounding depending on INTERVAL type
* Warn when STARTS timestamp is later than query time
* Fix uninitialized Lex->create_last_non_select_table under
mysql_unpack_partition()
Default STARTS rounding depending on INTERVAL type
If the STARTS clause is omitted, a default one is assigned, with its value
derived from the query timestamp. The rounding is done on the STARTS value
depending on the INTERVAL type:
SECOND: no rounding is done;
MINUTE: the timestamp's seconds are set to 0;
HOUR: the timestamp's seconds and minutes are set to 0;
DAY, WEEK, MONTH and YEAR: the timestamp's seconds, minutes and hours are
set to 0 (the date of rotation is kept as the current date).
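The rules above, expressed as a standalone sketch (the server uses its own
datetime types rather than struct tm):

    #include <ctime>

    enum interval_kind { SECOND, MINUTE, HOUR, DAY_OR_LONGER };

    // Round a default STARTS timestamp down according to the INTERVAL
    // type; the date part is always kept.
    void round_starts(struct tm *t, interval_kind kind)
    {
      switch (kind)
      {
      case SECOND:            // no rounding
        break;
      case DAY_OR_LONGER:
        t->tm_hour= 0;        // fall through
      case HOUR:
        t->tm_min= 0;         // fall through
      case MINUTE:
        t->tm_sec= 0;
        break;
      }
    }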
The issue here is a wrong estimate of the cardinality of a partial join:
the cardinality is too high because the function table_cond_selectivity()
returns an absurd number (100) while selectivity cannot be greater than 1.
When accessing table t by outer reference t1.a via an index we do not perform
any range analysis for t. Yet we see that TABLE::quick_key_parts[key] and
TABLE::quick_rows[key] contain non-zero values though these should have
remained untouched and equal to 0.
Thus the real cause of the problem is that TABLE::init does not clean the
arrays TABLE::quick_key_parts[] and TABLE::quick_rows[].
It should have done so because the TABLE structure created for any
instance of a table can be reused for many queries.
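The corresponding fix direction, as a minimal sketch (assuming the members
are fixed-size arrays inside struct TABLE):

    // In TABLE::init(): reset the per-query range-analysis results, so a
    // reused TABLE object cannot leak estimates into the next query.
    memset(quick_key_parts, 0, sizeof(quick_key_parts));
    memset(quick_rows, 0, sizeof(quick_rows));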
A conflict between MDEV-19514 (b42294bc64)
and MDEV-20934 (d7a2401750)
was resolved. We will not invoke the function ibuf_delete_recs()
from ibuf_merge_or_delete_for_page(). Instead, we will add that
logic to the function ibuf_read_merge_pages().
wsrep_init_provider_status_variables(): Always set wsrep_inited
to ensure that the memory will be freed.
The initial patch was provided by Julius Goryavsky.
The function prev_record_reads, which finds the number of different row
combinations for a subset of a partial join, did not take into account the
selectivity of the tables involved in that subset.
An unfortunate DROP TEMPORARY..IF EXISTS on a regular table may allow
subsequent CREATE TABLE statements to steal away the PFS_table_share
instance from the dropped table.
For a partitioned table with an AUTO_INCREMENT column we have to check that
the max value is properly loaded. So we need to open all tables in an INSERT
PARTITION statement if necessary. We also need to check if some
tables are pruned away, and not count the max autoincrement in that case.