The problem was that the number of NULL bits was recorded wrongly in the
.frm file, because more fields could be marked NOT_NULL after the
number of not_null fields had been recorded.
Fixed by copying the test for virtual fields from prepare_create_field().
Only the test, not the code change, needs to be merged to 10.3,
as this is already fixed there.
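A minimal, self-contained sketch of the counting rule (hypothetical struct
and function names, not the server's real data structures): the NULL bits
must be counted only from fields that are actually stored and still
nullable once all NOT_NULL flags are final, which is what the copied
virtual-field test ensures.

  // Hypothetical illustration only: count NULL bits after all NOT_NULL
  // flags are final, skipping fields that are not stored in the row.
  #include <cstdio>
  #include <vector>

  struct field_def
  {
    bool not_null;   // final NOT NULL flag
    bool stored;     // false for purely virtual (computed) fields
  };

  static unsigned count_null_bits(const std::vector<field_def> &fields)
  {
    unsigned null_bits= 0;
    for (const field_def &f : fields)
      if (f.stored && !f.not_null)   // only stored, nullable fields get a bit
        null_bits++;
    return null_bits;
  }

  int main()
  {
    std::vector<field_def> fields= {{false, true}, {true, true}, {false, false}};
    std::printf("null bits: %u\n", count_null_bits(fields));   // prints 1
    return 0;
  }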
Make all system tables in the mysql database use
engine=Aria.
Privilege tables use transactional=1.
Statistical tables use transactional=0, to allow them
to be quickly updated with low overhead.
Help tables also use transactional=0, as these are only
updated at init time.
Other changes:
- The Aria storage engine is now a required engine
- Update comment for Aria tables to reflect their new usage
- Fixed that _ma_reset_trn_for_table() removes the unlocked table
from the transaction table list. This was needed to allow one
to lock and unlock system tables separately from other
tables, for example when reading a procedure from mysql.proc
- Don't give a warning when using transactional=1 for engines
that use transactions. This is both logical and also
avoids warnings/errors when altering a privilege
table to InnoDB.
- Don't abort on warnings from ALTER TABLE for changes that
would be accepted by CREATE TABLE.
- Newly created Aria transactional tables are marked as not movable
(as they include create_rename_lsn).
- bootstrap.test was changed to kill the original server, as one
can no longer have two servers started at the same time on the same
data directory and data files.
- Disable maria.small_blocksize, as one can no longer change the
Aria block size after the system tables are created.
- Speed up creation of help tables by using lock tables.
- wsrep_sst_resync now also copies Aria redo logs.
- Adding a helper class Sec6 to store (neg,seconds,microseconds)
- Adding a helper class VSec6 (Sec6 with a flag for "IS NULL");
a rough sketch of both classes is shown after the list below.
- Wrapping related functions as methods of Sec6:
* number_to_datetime()
* number_to_time()
* my_decimal2seconds()
* Item::get_seconds()
* A big piece of code in Item_func_sec_to_time::get_date()
- Using the new classes in places where second-to-temporal
conversion takes place:
* Field_timestamp::store(double)
* Field_timestamp::store(longlong)
* Field_timestamp_with_dec::store_decimal(my_decimal)
* Field_temporal_with_date::store(double)
* Field_temporal_with_date::store(longlong)
* Field_time::store(double)
* Field_time::store(longlong)
* Field_time::store_decimal(my_decimal)
* Field_temporal_with_date::store_decimal(my_decimal)
* get_interval_value()
* Item_func_sec_to_time::get_date()
* Item_func_from_unixtime::get_date()
* Item_func_maketime::get_date()
This change simplifies these methods and functions a lot.
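A rough, self-contained sketch of what the Sec6/VSec6 helpers look like
conceptually (simplified member and function names, not the real server
classes):

  #include <cmath>
  #include <cstdio>

  struct Sec6                          // (neg, seconds, microseconds)
  {
    bool neg;
    unsigned long long sec;
    unsigned long usec;

    static Sec6 from_double(double nr) // one possible constructor-like helper
    {
      Sec6 v;
      v.neg= nr < 0;
      double a= std::fabs(nr);
      v.sec= (unsigned long long) a;
      v.usec= (unsigned long) std::llround((a - (double) v.sec) * 1e6);
      if (v.usec >= 1000000)           // carry after rounding
      {
        v.usec-= 1000000;
        v.sec++;
      }
      return v;
    }
  };

  struct VSec6: public Sec6            // Sec6 plus an "IS NULL" flag
  {
    bool is_null;
  };

  int main()
  {
    Sec6 s= Sec6::from_double(-12.25);
    std::printf("neg=%d sec=%llu usec=%lu\n", s.neg, s.sec, s.usec);
    return 0;                          // prints neg=1 sec=12 usec=250000
  }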
- Warnings are now sent at VSec6 initialization time, when the source
data is available in its original data type representation.
If Sec6::to_time() or Sec6::to_datetime() truncate data again during
conversion to MYSQL_TIME, they send warnings, but only if no warnings
were sent during VSec6 initialization. This helps prevent double warnings.
The call for val_str() in Item_func_sec_to_time::get_date() is not
needed any more, so it's removed. This change actually fixes the problem.
As a good side effect, FROM_UNIXTIME() and MAKETIME() now also send warnings
when the seconds argument is out of range. Previously these
functions returned NULL silently.
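A sketch of the "warn only once" rule described above, with hypothetical
names (the real logic lives inside the Sec6/VSec6 methods): the
initialization step remembers whether it already warned, and the later
conversion step only warns when it did not.

  #include <cstdio>

  struct conversion_state
  {
    bool warned_at_init= false;

    void warn(const char *msg) { std::fprintf(stderr, "Warning: %s\n", msg); }

    void init_from_source(bool source_truncated)      // VSec6 initialization
    {
      if (source_truncated)
      {
        warn("truncated incorrect seconds value");
        warned_at_init= true;
      }
    }

    void to_mysql_time(bool conversion_truncated)     // to_time()/to_datetime()
    {
      // Truncation during conversion warns only if init did not warn already.
      if (conversion_truncated && !warned_at_init)
        warn("truncated incorrect datetime value");
    }
  };

  int main()
  {
    conversion_state st;
    st.init_from_source(true);   // warns here ...
    st.to_mysql_time(true);      // ... but not a second time here
    return 0;
  }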
- Splitting the code in the global function make_truncated_value_warning()
into a number of methods THD::raise_warning_xxxx().
This was needed to reuse the logic that chooses between:
* ER_TRUNCATED_WRONG_VALUE
* ER_WRONG_VALUE
* ER_TRUNCATED_WRONG_VALUE_FOR_FIELD
for non-temporal data types (Sec6).
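The exact branch conditions stay in the server code; the following
hypothetical sketch only illustrates the kind of choice that the split
methods factor out, so that non-temporal (Sec6) callers can reuse it. The
enum values stand in for the three error codes listed above.

  enum truncation_warning
  {
    WARN_TRUNCATED_WRONG_VALUE,            // value was partially usable
    WARN_WRONG_VALUE,                      // value was not usable at all
    WARN_TRUNCATED_WRONG_VALUE_FOR_FIELD   // value was stored into a named field
  };

  // Hypothetical dispatcher: each of the split THD::raise_warning_xxxx()
  // methods corresponds to one fixed outcome, so callers that already know
  // the branch can call the specific method directly.
  static truncation_warning
  choose_warning(bool stored_into_field, bool partially_converted)
  {
    if (stored_into_field)
      return WARN_TRUNCATED_WRONG_VALUE_FOR_FIELD;
    return partially_converted ? WARN_TRUNCATED_WRONG_VALUE : WARN_WRONG_VALUE;
  }

  int main()
  {
    return choose_warning(false, true) == WARN_TRUNCATED_WRONG_VALUE ? 0 : 1;
  }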
- Removing:
* Item::get_seconds()
* number_to_time_with_warn()
as this code now resides inside methods of Sec6.
- Cleanup (changes that are not directly related to the fix):
* Removing calls to field_name_or_null() and passing NULL instead
in Item_func_hybrid_field_type::get_date_from_{int|real}_op,
because Item_func_hybrid_field_type::field_name_or_null()
always returns NULL
* Replacing a number of calls to make_truncated_value_warning()
with calls to THD::raise_warning_xxx(). In these places
we know that the execution went through a certain
branch of make_truncated_value_warning()
(e.g. the exact error code is known, or the field name is always NULL,
or the field name is always not-NULL), so calling the entire
make_truncated_value_warning() after the split is not necessary.
A MEMORY table could be renamed into a non-existent database.
rename() is documented to return ENOENT when the source file does not
exist OR when the target directory does not exist. A nonexistent source
.frm file is ok (the table can still exist in the engine), a nonexistent
target directory is not.
Make my_rename use ENOTDIR for the latter case. Make RENAME TABLE
issue an appropriate error ("unknown database" instead of "unknown table").
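A hedged sketch of the errno mapping, using plain POSIX calls and a
hypothetical wrapper name (the real change is inside my_rename and the
RENAME TABLE error reporting):

  #include <cerrno>
  #include <cstdio>
  #include <cstring>
  #include <string>
  #include <sys/stat.h>

  // If rename() fails with ENOENT, check whether the *target* directory is
  // missing and report that case as ENOTDIR, so callers can tell it apart
  // from a missing source file.
  static int rename_with_enotdir(const char *from, const char *to)
  {
    if (std::rename(from, to) == 0)
      return 0;
    if (errno == ENOENT)
    {
      std::string dir(to);
      std::size_t slash= dir.find_last_of('/');
      dir= (slash == std::string::npos) ? "." :
           (slash == 0 ? "/" : dir.substr(0, slash));
      struct stat st;
      if (stat(dir.c_str(), &st) != 0 || !S_ISDIR(st.st_mode))
        errno= ENOTDIR;                     // target directory does not exist
    }
    return -1;
  }

  int main()
  {
    if (rename_with_enotdir("t1.frm", "no_such_db/t1.frm"))
      std::fprintf(stderr, "rename failed: %s\n", std::strerror(errno));
    return 0;
  }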
* ignore CHECK constraint for historical rows;
* FOREIGN KEY test case.
TODO:
MDEV-16301 IB: use real table name for error messages on ALTER
Closes tempesta-tech/mariadb#491
Closes #748
NULL values when there is no DEFAULT
The copy and inplace algorithms work similarly for
NULL to NOT NULL conversion in the following cases:
(1) strict sql mode - should give an error.
(2) non-strict sql mode - should give warnings alone.
(3) ALTER IGNORE TABLE command - should give warnings alone.
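A minimal sketch of the rule spelled out above (hypothetical names; both
the copy and the inplace paths are expected to follow it):

  enum class null_conversion_action { RAISE_ERROR, RAISE_WARNING };

  // NULL found in a column that becomes NOT NULL and has no DEFAULT:
  // only strict sql_mode without IGNORE aborts; everything else warns.
  static null_conversion_action
  on_null_to_not_null(bool strict_sql_mode, bool alter_ignore)
  {
    if (strict_sql_mode && !alter_ignore)
      return null_conversion_action::RAISE_ERROR;
    return null_conversion_action::RAISE_WARNING;
  }

  int main()
  {
    return on_null_to_not_null(true, false) ==
           null_conversion_action::RAISE_ERROR ? 0 : 1;
  }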
Changing columns WITH/WITHOUT SYSTEM VERSIONING doesn't require reading data at
all. Thus it should be an instant operation.
The patch also fixes a bug where ALTER_COLUMN_UNVERSIONED wasn't passed to InnoDB
to change its internal structures.
change_field_versioning_try(): apply WITH/WITHOUT SYSTEM VERSIONING
change in SYS_COLUMNS for one field.
change_fields_versioning_try(): apply WITH/WITHOUT SYSTEM VERSIONING
change in SYS_COLUMNS for every changed field in a table.
change_fields_versioning_cache(): update cache for versioning property
of columns.
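A rough sketch of the per-table loop with a hypothetical column structure
(the real functions update SYS_COLUMNS and the dictionary cache):

  #include <cstddef>
  #include <vector>

  struct col_def { bool versioned; };  // WITH (true) / WITHOUT (false) SYSTEM VERSIONING

  // Count the columns whose versioning attribute changes between the old and
  // the new definition; the real code would update SYS_COLUMNS for each one.
  static std::size_t
  count_versioning_changes(const std::vector<col_def> &old_cols,
                           const std::vector<col_def> &new_cols)
  {
    std::size_t changes= 0;
    for (std::size_t i= 0; i < old_cols.size() && i < new_cols.size(); i++)
      if (old_cols[i].versioned != new_cols[i].versioned)
        changes++;
    return changes;
  }

  int main()
  {
    std::vector<col_def> before= {{true}, {false}};
    std::vector<col_def> after=  {{true}, {true}};
    return count_versioning_changes(before, after) == 1 ? 0 : 1;
  }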
don't read all columns from the source table, but only
those that will be inserted into the target table
and cannot be generated by the target table.
test case is in the following commits
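A sketch of the column-selection idea with hypothetical structures (the
real code works on the table's read bitmap):

  #include <cstddef>
  #include <vector>

  struct column_map
  {
    int source_index;          // -1 if the column has no counterpart in the source
    bool target_is_generated;  // true if the target can compute the value itself
  };

  // Mark for reading only the source columns that are actually copied into
  // the target table and cannot be generated there.
  static std::vector<bool>
  build_source_read_set(const std::vector<column_map> &columns,
                        std::size_t n_source)
  {
    std::vector<bool> read_set(n_source, false);
    for (const column_map &c : columns)
      if (c.source_index >= 0 && !c.target_is_generated)
        read_set[(std::size_t) c.source_index]= true;
    return read_set;
  }

  int main()
  {
    std::vector<column_map> cols= {{0, false}, {1, true}, {-1, false}};
    return build_source_read_set(cols, 2)[0] ? 0 : 1;
  }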
Fixed by deleting the sequence if we were not able to initialize it.
I also noticed that we didn't always set the error message on
check_killed(), which could lead to aborted queries without the error
being properly set. Fixed by setting a default error message if
check_error() noticed that the query had been killed.
This allowed me to remove a lot of calls to thd->send_kill_message().
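A minimal sketch of the fallback with hypothetical flags (the real code
lives around THD and its kill handling): when the kill is noticed and
nothing has set an error yet, a default "query interrupted" style error
is raised, so callers no longer need their own send_kill_message() calls.

  #include <cstdio>

  struct thd_state
  {
    bool killed= false;
    bool error_set= false;

    void raise_error(const char *msg)
    {
      std::fprintf(stderr, "ERROR: %s\n", msg);
      error_set= true;
    }

    // Returns true if the statement must abort; guarantees an error is set.
    bool check_killed()
    {
      if (!killed)
        return false;
      if (!error_set)
        raise_error("Query execution was interrupted");   // default message
      return true;
    }
  };

  int main()
  {
    thd_state thd;
    thd.killed= true;
    return thd.check_killed() ? 0 : 1;
  }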
The crash happened because, during discovery, table->work_part_info was not
properly reset before execution.
Fixed by resetting it before calling execute alter table, create table or
mysql_create_frm_image.
The crash happened when deleting all columns that were part of a CHECK
constraint.
The bug was that the read map for the 'from' table was used when
checking the CHECK constraint and was not properly reset
in copy_data_between_tables().
The problem was that copy_data_between_tables() didn't do proper
cleanup in case of failures:
- the copy object was not properly freed
- end_bulk_insert() was not called
- mysql_trans_prepare_alter_copy_data() set THD->transaction.on to
false, which was not properly restored
The last part caused a crash in Aria, as Aria depends on the THD
being correct.
Other things:
- Reset info->switched_transactional after usage (safety)
- Reset bulk_insert_single_undo (safety)
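A sketch of the intended cleanup shape with hypothetical names (the real
fix is inside copy_data_between_tables() and its Aria-related state):

  struct copy_state
  {
    bool bulk_insert_started= false;
    bool transaction_was_on= true;   // saved THD->transaction.on
    bool transaction_on= true;
  };

  static int do_copy_rows(copy_state &)     // stand-in for the copy loop
  {
    return 1;                               // pretend the copy failed
  }

  static int copy_data_sketch(copy_state &st)
  {
    st.transaction_was_on= st.transaction_on;
    st.transaction_on= false;               // prepare step turns transactions off
    st.bulk_insert_started= true;

    int error= do_copy_rows(st);

    // Cleanup must run on the error path too:
    if (st.bulk_insert_started)
      st.bulk_insert_started= false;        // stands in for end_bulk_insert()
    st.transaction_on= st.transaction_was_on;   // restore THD->transaction.on

    return error;
  }

  int main()
  {
    copy_state st;
    return copy_data_sketch(st) && st.transaction_on ? 0 : 1;
  }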
Part one, non-temporary tables.
Renaming a column can make destructive changes
to the TABLE. This TABLE cannot be used anymore
and needs to be reopened even if ALTER TABLE
was aborted with an error.
10.3+ fix
On ALTER TABLE, if a non-changed column default
might need a charset conversion, it must
be a blob. Because blob defaults are stored
as expressions, and for any other type
a basic_const_item() will be in the record,
so it'll have correct charset and won't need
converting. For the same reason it makes no
sense to convert blob defaults (and it's
unsafe, see MDEV-15746).
test case is already in main/ps.test
don't try to convert a default value string from a user character set
into a column character set, if this particular default value string did
not come from the user at all (that is, if it's an ALTER TABLE and the
default value string is the *old* default value of the unaltered
column).
This used to crash, because old defaults are allocated on the old
table's memroot, which is freed mid-ALTER when the old table is closed.
So thd->rollback_item_tree_changes() at the end of the ALTER was writing
into the freed memory.
Introduced new alter algorithm types called NOCOPY & INSTANT for
inplace alter operations.
NOCOPY - The algorithm refuses any alter operation that would
rebuild the clustered index. It is a subset of the INPLACE algorithm.
INSTANT - The algorithm allows any alter operation that would
modify only metadata. It is a subset of the NOCOPY algorithm.
Introduced a new variable called alter_algorithm. The values are
DEFAULT(0), COPY(1), INPLACE(2), NOCOPY(3), INSTANT(4).
Added a message to deprecate the old_alter_table variable and make it an
alias for the alter_algorithm variable.
The alter_algorithm variable for the slave is always set to DEFAULT.
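The value set, as a hedged illustrative enum (the real definition lives in
the server's system-variable framework; the names here are hypothetical):

  enum enum_alter_algorithm
  {
    ALTER_ALG_DEFAULT= 0,  // let the server pick the best algorithm
    ALTER_ALG_COPY=    1,  // rebuild the table by copying rows
    ALTER_ALG_INPLACE= 2,  // no table copy on the SQL layer
    ALTER_ALG_NOCOPY=  3,  // refuse anything that rebuilds the clustered index
    ALTER_ALG_INSTANT= 4   // allow only metadata-only changes
  };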
As thd->alloc() and new automatically call my_error(ER_OUTOFMEMORY),
there is no reason to call mem_alloc_error().
Other things:
- Fixed a bug in mysql_unpack_partition() where lex.part_info was
changed even if it was a null pointer.