* remove one level of virtual functions
* remove redundant checks
* remove an if() as the value is always known at compile time
don't pretend that "DEFAULT expr" and "ON UPDATE DEFAULT NOW"
are "basically the same thing"
This patch allows the server to open old tables that have
"bad" generated columns (i.e. indexed virtual generated columns
or persistent generated columns that depend on sql_mode)
for general operations like SELECT, INSERT, DROP, etc.
Warnings are issued in such cases.
Only these commands are now disallowed and return an error:
- CREATE TABLE introducing a "bad" generated column
- ALTER TABLE introducing a "bad" generated column
- CREATE INDEX introducing a "bad" generated column
(i.e. adding an index on a virtual generated column
that depends on sql_mode).
Note, these commands are allowed:
- ALTER TABLE removing a "bad" generated column
- ALTER TABLE removing an index from a "bad" virtual generated column
- DROP INDEX removing an index from a "bad" virtual generated column
but only if the table does not have any "bad" columns as a result.
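A hypothetical sketch of the new behavior (the table, the column names,
and the choice of PAD_CHAR_TO_FULL_LENGTH as the sql_mode dependency are
illustrative assumptions, not taken verbatim from the patch):
  -- v reads a CHAR column, so its value depends on the
  -- PAD_CHAR_TO_FULL_LENGTH sql_mode:
  CREATE TABLE t1 (c CHAR(5),
                   v VARCHAR(5) AS (CONCAT(c)) VIRTUAL,
                   KEY (v));   -- now rejected: index on a "bad" column
  -- an old table with this structure can still be opened and queried
  -- (a warning is issued), and the offending index can be dropped:
  SELECT * FROM t1;
  ALTER TABLE t1 DROP INDEX v;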
This patch introduces an optimization that allows the range optimizer
to consider index range scans built using NOT NULL predicates
inferred from WHERE conditions and ON expressions.
The patch adds a new optimizer switch not_null_range_scan.
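A minimal sketch of how the switch is meant to be used (the tables, the
columns, and the index on t2(b) are assumed for illustration):
  SET optimizer_switch='not_null_range_scan=on';
  -- t2.b IS NOT NULL can be inferred from t1.a = t2.b, so a range
  -- scan over an index on t2(b) may skip the NULL entries:
  EXPLAIN SELECT * FROM t1, t2 WHERE t1.a = t2.b;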
The parser returned a syntax error for queries with join expressions
like t1 JOIN t2 [LEFT | RIGHT] JOIN t3 ON ... ON ... where the second
operand of the outer JOIN operation with an ON clause was another
join expression with an ON clause. In such an expression the JOIN
operator is right-associative, i.e. the expression has to be parsed as
t1 JOIN (t2 [LEFT | RIGHT] JOIN t3 ON ... ) ON ...
Such join expressions are hard to parse because the outer JOIN is
left-associative if there is no ON clause for the first outer JOIN
operator.
The patch implements a solution in which the JOIN operator is always
parsed as right-associative and the right-associative tree is built
first. If it turns out that there is no corresponding ON clause for
the operator, the tree is converted to left-associative form.
The idea of the solution was taken from the patch by Martin Hansson
"WL#8083: Fixed the join_table rule" from MySQL-8.0 code line.
As the grammar rules related to join expressions in MySQL-8.0 and
MariaDB-5.5+ are quite different, the MariaDB solution could not
borrow any code from the MySQL-8.0 one.
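For example, assuming tables t1, t2, t3 with columns a and b (an
illustrative query, not one taken from the patch's test cases):
  -- previously rejected with a syntax error; now parsed as
  -- t1 JOIN (t2 LEFT JOIN t3 ON t2.b = t3.b) ON t1.a = t2.a:
  SELECT *
  FROM t1 JOIN t2 LEFT JOIN t3 ON t2.b = t3.b ON t1.a = t2.a;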
Removed Field_map, since it was used only in a single function.
Fixed is_indexed_agg_distinct(), since it relied on the initialization
of Bitmap in its constructor.
The MDEV-17262 commit 26432e49d3
was skipped. In Galera 4, the implementation would seem to require
changes to streaming replication.
In the tests archive.rnd_pos and main.profiling, use
disable_ps_protocol for the SHOW STATUS and SHOW PROFILE commands
until MDEV-18974 has been fixed.
There were two newly enabled warnings:
1. casts of function pointers. Affected sql_analyse.h, mi_write.c,
ma_write.cc, mf_iocache-t.cc, mysqlbinlog.cc, encryption.cc, etc.
2. memcpy/memset of non-trivial structures. Fixed as follows:
* the warning is disabled for InnoDB
* TABLE, TABLE_SHARE, and TABLE_LIST got a new method reset() that
does the bzero(), which is safe for these classes; any other
bzero() will still cause a warning
* Table_scope_and_contents_source_st uses `TABLE_LIST *` (trivial)
instead of `SQL_I_List<TABLE_LIST>` (not trivial) so it's safe to
bzero now.
* added casts in debug_sync.cc and sql_select.cc (for JOIN)
* a move assignment method for MDL_request instead of memcpy()
* PARTIAL_INDEX_INTERSECT_INFO::init() instead of bzero()
* removed the constructor from READ_RECORD to make it trivial
* replaced some memcpy() calls with C++ copy assignments
This patch implements an engine-independent unique hash index.
Usage: a unique HASH index can be created automatically for
blob/varchar/text columns whose key length > handler->max_key_length(),
or it can be specified explicitly.
Automatic creation:
CREATE TABLE t1 (a blob unique);
Explicit creation:
CREATE TABLE t1 (a int, unique(a) using HASH);
Internal KEY_PART representations:
A long unique key_info has two representations
(take the example CREATE TABLE t1 (a blob, b blob, unique(a, b));):
1. User-given representation: the key_info->key_part array matches
what the user defined, so in the example it has two key_parts (a, b).
2. Storage engine representation: there is only one key_part and it
points to the HASH_FIELD. This key_part is always placed after the
user-defined key_parts.
So: user-given representation     [a] [b] [hash_key_part]
    key_info->key_part -----------^
    storage engine representation [a] [b] [hash_key_part]
    key_info->key_part -------------------^
table->s->key_info holds the user-given representation, while
table->key_info holds the storage engine representation. One
representation can be converted into the other by calling the
re/setup_keyinfo_hash functions.
How it works:
1. When the user specifies a HASH index or the key length is >
handler->max_key_length(), one extra vfield is added in
mysql_prepare_create_table() (one per long unique key), and
key_info->algorithm is set to HA_KEY_ALG_LONG_HASH.
2. In init_from_binary_frm_image the values for the hash key_part are
set (fieldnr, field and flags).
3. In parse_vcol_defs, HASH_FIELD->vcol_info is created. Item_func_hash
is used with the list of Item_fields; when an explicit length is given
by the user, Item_left is used to truncate the Item_field values.
4. In ha_write_row/ha_update_row, check_duplicate_long_entry_key is
called, which creates the hash key from table->record[0] and then calls
ha_index_read_map; if a duplicate hash is found, the result is compared
field by field.
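A short usage sketch: because the final comparison is done field by
field, a hash collision does not produce a spurious duplicate-key
error; only genuine duplicates are rejected:
  CREATE TABLE t1 (a BLOB UNIQUE);
  INSERT INTO t1 VALUES (REPEAT('x', 10000));
  INSERT INTO t1 VALUES (REPEAT('x', 10000));  -- duplicate key error
  INSERT INTO t1 VALUES (REPEAT('y', 10000));  -- ok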
* inject portion of time updates into mysql_delete main loop
* triggered case emits delete+insert, no updates
* PORTION OF `SYSTEM_TIME` is forbidden
* `DELETE HISTORY .. FOR PORTION OF ...` is forbidden as well
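A minimal sketch of the affected statement (the table and period names
are assumptions for illustration):
  CREATE TABLE t1 (id INT, s DATE, e DATE,
                   PERIOD FOR apptime(s, e));
  -- rows only partially covered by the interval are shrunk via the
  -- update path injected into mysql_delete(), not deleted outright:
  DELETE FROM t1 FOR PORTION OF apptime
         FROM '2000-01-01' TO '2001-01-01';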
Due to inconsistent usage of different cost models to calculate
the cost of ref accesses, we have to make the calculation of the
gain promised by the usage of a range filter more complex.
Find the indexes of a table whose parts participate in the same
constraint. Such indexes are called constraint-correlated.
New methods were added: TABLE::find_constraint_correlated_indexes()
and the virtual method check_index_dependence().
For each index, its own constraint-correlated index map is created,
in which all indexes that are constraint-correlated with the current
one are marked.
The results of this task are used for MDEV-16188 (Use in-memory
PK filters built from range index scans).
MDEV-17631 select_handler for a full query pushdown
Interfaces + Proof of Concept for federatedx with test cases.
The interfaces have been developed for the integration of the
ColumnStore engine.
remove TABLE_SHARE::error_table_name() and TABLE_SHARE::orig_table_name
(which was allocated in the wrong memroot in this bug).
instead, simply set TABLE_SHARE::table_name correctly.
This patch contains a full implementation of the optimization
that allows using in-memory rowid / primary key filters built for
range conditions over indexes. In many cases usage of such filters
reduces the number of disk seeks spent fetching table rows.
In this implementation the choice of which filter to apply (if any)
is made on purely cost-based considerations.
This implementation re-architected the partial implementation of
the feature pushed by Galina Shalygina in the commit
8d5a11122c.
Besides this, the patch contains a better implementation of the
generic handler function handler::multi_range_read_info_const() that
takes into account the gaps between ranges when calculating the cost
of range index scans. It also contains some corrections to the
implementation of the handler function records_in_range() for MyISAM.
This patch supports the feature for InnoDB and MyISAM.
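A minimal usage sketch (assuming the feature is controlled by the
rowid_filter optimizer switch and a table t1 with indexes on columns
a and b):
  SET optimizer_switch='rowid_filter=on';
  -- if the filter built from the range on t1.b looks cheap enough,
  -- EXPLAIN shows it applied on top of the ref access on t1.a:
  EXPLAIN SELECT * FROM t1
  WHERE t1.a = 10 AND t1.b BETWEEN 100 AND 200;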