In main.index_merge_myisam we remove the test that was added in
commit a2d24def8c because
it duplicates the test case that was added in
commit 5af12e4635.
Turn read cache off for periodic update.
As 498a96a4 says:
Aria with row_format=fixed uses an IO_CACHE of type READ_CACHE for
sequential reads in the update loop. When a history row is inserted
inside this loop, the cache misses it and the update fails with an
error.
This applies to any additional row inserts on UPDATE. In this case it
was initiated by UPDATE ... FOR PORTION.
Related to MDEV-20441.
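
A minimal sketch of the failing pattern (not from the patch; table,
column and period names are hypothetical):

  CREATE TABLE t1 (
    x INT,
    s DATE, e DATE,
    PERIOD FOR app(s, e)
  ) ENGINE=Aria ROW_FORMAT=FIXED;
  INSERT INTO t1 VALUES (1, '2000-01-01', '2020-01-01');
  -- splitting the period inserts extra rows while the update loop
  -- is still reading t1 through the READ_CACHE:
  UPDATE t1 FOR PORTION OF app FROM '2005-01-01' TO '2010-01-01'
    SET x = 2;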
Cause
Join tmp table inserts a null row because of OUTER JOIN; that's
expected. Since `multi_update::prepare2()` converted
`Item_temptable_rowid` into `Item_field` (28dbdf3),
`multi_update::send_data()` accesses the join tmp record directly and
treats it as a normal row, ignoring the null status of the ref field.
The NULL ref field is then treated as normal in
`multi_update::do_updates()`, which tries to position the updated
table by reference 0.
Note that reference 0 may be a valid reference and the first row of
the table can be wrongly updated (see multi_update.test).
Fix
Do not add a row into the multi-update tmp table in case of a null ref
field. The join tmp table does not have null_row status at this time
(nor `STATUS_NULL_ROW`) and cannot be skipped by these properties
(see the first comment in multi_update::send_data()). But all of its
fields are null (including the ref field).
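
A minimal sketch of the triggering scenario (hypothetical tables, not
from the patch):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  INSERT INTO t1 VALUES (1), (2);
  INSERT INTO t2 VALUES (2);
  -- the LEFT JOIN produces a null-complemented row for t2; its ref
  -- field is NULL and must not be added to the multi-update tmp table:
  UPDATE t1 LEFT JOIN t2 ON t1.a = t2.b SET t2.b = 10;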
Wrong assertion condition. SYSTEM_TIME_ALL indicates that
vers_setup_conds() is done. In case FOR SYSTEM_TIME ALL is specified
in the command, the assertion passes but does not check anything.
Turn read cache off for update and multi-update of versioned
tables. no_cache is reinited on each TABLE open because it is
applicable to specific algorithms only.
As a side fix, vers_insert_history_row() now honors the vers_write
setting.
Aria with row_format=fixed uses an IO_CACHE of type READ_CACHE for
sequential reads in the update loop. When a history row is inserted
inside this loop, the cache misses it and the update fails with an
error.
TODO:
Currently maria_extra() does not support SEQ_READ_APPEND. It might be
possible to use this type of cache instead.
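
A minimal sketch of the affected scenario (hypothetical table name,
not from the patch):

  CREATE TABLE t1 (x INT) WITH SYSTEM VERSIONING
    ENGINE=Aria ROW_FORMAT=FIXED;
  INSERT INTO t1 VALUES (1);
  -- the UPDATE inserts a history row while sequentially reading t1;
  -- with READ_CACHE enabled, the cache missed the insert and failed:
  UPDATE t1 SET x = x + 1;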
MDEV-18957 UPDATE with LIMIT clause is wrong for versioned partitioned tables
UPDATE, DELETE: replace the linear search of current/historical
records with vers_setup_conds().
Additional DML cases in view.test
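
A sketch of the kind of statement that was affected (hypothetical
table and partition names):

  CREATE TABLE t1 (x INT) WITH SYSTEM VERSIONING
    PARTITION BY SYSTEM_TIME (PARTITION p0 HISTORY,
                              PARTITION pn CURRENT);
  INSERT INTO t1 VALUES (1), (2), (3);
  -- LIMIT must apply to current rows only, not to history rows
  -- encountered while scanning the partitions:
  UPDATE t1 SET x = x + 1 LIMIT 2;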
- Any temporary tables created under read-only mode will never be logged
  to the binary log. Any usage of these tables to update normal tables,
  even after read-only has been disabled, will use row-based logging (as
  the temporary table will not exist on the slave); see the sketch after
  this list.
- Analyze, check and repair table will not be logged in read-only mode.
Other things:
- Removed unused variables in
  MYSQL_BIN_LOG::flush_and_set_pending_rows_event.
- Set table_share->table_creation_was_logged for all normal tables.
- THD::binlog_query() now returns -1 if the statement was not logged. This
  is used to update table_share->table_creation_was_logged.
- Don't log admin statements if opt_readonly is set.
- Tables that don't have table_creation_was_logged set will switch the
  binlog format to row-based logging.
- Removed an unneeded/wrong setting of table->s->table_creation_was_logged
  in create_table_from_items().
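
A minimal sketch of the temporary-table case above (hypothetical table
names, not from the patch):

  SET GLOBAL read_only = 1;
  CREATE TEMPORARY TABLE tmp (a INT);    -- never written to the binlog
  SET GLOBAL read_only = 0;
  -- tmp does not exist on the slave, so this statement must be logged
  -- row-based even under binlog_format=STATEMENT:
  INSERT INTO t1 SELECT a FROM tmp;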
Three issues here:
* ON UPDATE DEFAULT NOW columns were updated after generated columns
were computed - this broke indexed virtual columns
* ON UPDATE DEFAULT NOW columns were updated after BEFORE triggers,
so triggers didn't see the correct NEW value
* in case of a multi-update generated columns were also updated
after BEFORE triggers
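
A sketch of the first issue (hypothetical table; assumes an indexed
virtual column may depend on the ON UPDATE DEFAULT NOW column):

  CREATE TABLE t1 (
    a INT,
    ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    d DATE AS (DATE(ts)) VIRTUAL,
    KEY (d)
  );
  INSERT INTO t1 (a) VALUES (1);
  -- if ts is set only after d was computed, the index on d no longer
  -- matches the stored ts value:
  UPDATE t1 SET a = 2;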
On UPDATE, compare_record() was comparing all columns that are marked
for writing. But generated columns that are written to the table are
always deterministic and cannot change unless normal non-generated
columns change. So it's enough to compare only non-generated columns
that were explicitly assigned values in the SET clause.
* remove one level of virtual functions
* remove redundant checks
* remove an if() as the value is always known at compilation time
don't pretend that "DEFAULT expr" and "ON UPDATE DEFAULT NOW"
are "basically the same thing"
as well as
MDEV-19500 Update with join stopped worked if there is a call to a procedure in a trigger
MDEV-19521 Update Table Fails with Trigger and Stored Function
MDEV-19497 Replication stops because table not found
MDEV-19527 UPDATE + JOIN + TRIGGERS = table doesn't exists error
Reimplement the fix (5d510fdbf0) for
MDEV-18507 can't update temporary table when joined with table with triggers on read-only:
instead of calling open_tables() twice, put the multi-update
prepare code inside the open_tables() loop.
Add a test for an MDL backoff-and-retry loop inside open_tables()
across the multi-update prepare code.
Triggers are opened and tables used in triggers are prelocked in
open_tables(). But multi-update can detect which tables will actually
be updated only later, after all main tables are opened.
Meaning, if a table is used in multi-update but is not actually
updated, its on-update triggers will be opened and their tables will
be prelocked, even though it's unnecessary. This can cause more tables
to be write-locked than needed, causing read_only errors, privilege
errors and lock waits; see the sketch below.
Fix: don't open/prelock triggers unless table->updating is true.
In multi-update, after setting table->updating=true, do a second
open_tables() for newly added tables, if any.
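
A sketch of the problem (hypothetical tables and trigger, not from the
patch):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  CREATE TABLE log (msg VARCHAR(64));
  CREATE TRIGGER t2_upd BEFORE UPDATE ON t2
    FOR EACH ROW INSERT INTO log VALUES ('t2 updated');
  -- t2 is only read here, but its on-update trigger (and therefore
  -- log) used to be opened and write-locked anyway:
  UPDATE t1 JOIN t2 ON t1.a = t2.b SET t1.a = 0;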
It always required the UPDATE privilege on views, not being able to
detect when a view was not actually updated in multi-update.
Fix: instead of marking all tables as "updating" by default,
only set "updating" on tables that will actually be updated
by multi-update, and mark the view "updating" if any of the
view's tables is.
For single-table and multi-table updates, engine-independent statistics
were not being read even if they had been collected.
Fixed it so that when optimizer_use_condition_selectivity > 2, the
available statistics are read for UPDATE queries as well.
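
A sketch of a session where the statistics should now be used
(hypothetical table and predicate):

  SET SESSION optimizer_use_condition_selectivity = 4;
  ANALYZE TABLE t1 PERSISTENT FOR ALL;   -- collect EITS
  -- the WHERE clause selectivity is now estimated from the collected
  -- statistics when planning the update:
  UPDATE t1 SET a = a + 1 WHERE b < 10;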
The main problem was the lack of proper Query_arena handling in
`period_setup_conds`. Since mysql_prepare_update/mysql_prepare_delete
are called during PREPARE statement, period conditions should be
allocated on the statement query arena.
Another problem was incorrect statement state handling in
period_setup_conds, which led to unexpected mysql_update termination.
* mysql_update: move period_setup_conds() to mysql_prepare_update to
  store conditions in the statement's mem_root
* mtr: add the period suite to the default list, since --ps-protocol
  is now fixed
Fixes bugs:
MDEV-18853 Assertion `0' failed in Protocol::end_statement upon DELETE .. FOR PORTION via prepared statement
MDEV-18852 Server crashes in reinit_stmt_before_use upon UPDATE .. FOR PORTION via prepared statement
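
A sketch of the previously crashing pattern (hypothetical table and
period names):

  PREPARE stmt FROM
    'UPDATE t1 FOR PORTION OF app
     FROM ''2000-01-01'' TO ''2010-01-01'' SET x = 1';
  -- conditions built during PREPARE must live in the statement arena
  -- so that repeated EXECUTE does not touch freed items:
  EXECUTE stmt;
  EXECUTE stmt;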
Close table->update_handler in close_thread_tables().
It's not enough to do it in sql_update.cc only, because
sql_insert.cc can also do updates (REPLACE) and even
sql_delete.cc can (DELETE ... FOR PORTION OF).
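
For illustration, both of these statements perform updates internally
without being UPDATE statements (hypothetical table and period names):

  REPLACE INTO t1 VALUES (1, 'x');                   -- sql_insert.cc
  DELETE FROM t1 FOR PORTION OF app
    FROM '2000-01-01' TO '2010-01-01';               -- sql_delete.cc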
This patch implements engine-independent unique hash indexes.
Usage:- A unique HASH index can be created automatically for a
blob/varchar/text column whose key length > handler->max_key_length(),
or it can be explicitly specified.
Automatic creation:-
  CREATE TABLE t1 (a BLOB UNIQUE);
Explicit creation:-
  CREATE TABLE t1 (a INT, UNIQUE (a) USING HASH);
Internal KEY_PART representations:-
Long unique key_info will have 2 representations.
(Let's understand this with an example:
  CREATE TABLE t1 (a BLOB, b BLOB, UNIQUE (a, b));)
1. User-given representation:- the key_info->key_part array will be
similar to what the user has defined. So in the case of the example it
will have 2 key_parts (a, b).
2. Storage engine representation:- in this case there will be only one
key_part and it will point to HASH_FIELD. This key_part is always
placed after the user-defined key_parts.
So:- User-given representation      [a] [b] [hash_key_part]
     key_info->key_part ------------^
     Storage engine representation  [a] [b] [hash_key_part]
     key_info->key_part --------------------^
table->s->key_info holds the user-given representation, while
table->key_info holds the storage engine representation. The
representations can be converted into each other by calling the
re/setup_keyinfo_hash functions.
Working:-
1. When the user specifies a HASH index or the key length is
> handler->max_key_length(), one extra vfield is added (for each long
unique key) in mysql_prepare_create_table, and key_info->algorithm is
set to HA_KEY_ALG_LONG_HASH.
2. In init_from_binary_frm_image, values for the hash key_part are set
(like fieldnr, field and flags).
3. In parse_vcol_defs, HASH_FIELD->vcol_info is created. Item_func_hash
is used with a list of Item_fields; when an explicit prefix length is
given by the user, Item_func_left is used to cut the Item_field values
down to that length.
4. In ha_write_row/ha_update_row, check_duplicate_long_entry_key is
called, which creates the hash key from table->record[0] and then calls
ha_index_read_map; if a duplicate hash is found, the rows are compared
field by field.
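
A quick illustration of the duplicate check in step 4 (a sketch; the
values are hypothetical):

  CREATE TABLE t1 (a BLOB, UNIQUE (a) USING HASH);
  INSERT INTO t1 VALUES (REPEAT('x', 10000));
  -- the hash of the new row matches the existing row's hash, so the
  -- two rows are compared field by field and the insert fails with a
  -- duplicate-key error:
  INSERT INTO t1 VALUES (REPEAT('x', 10000));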