Add support for VARCHAR with 1 or 2 length bytes (see the example after this list)
Enable VARCHAR packing in MyISAM files (previous patch didn't pack data properly)
Give an error if we encounter problems with temporary tables during a SELECT
Don't use new table generated by ALTER TABLE if index generation fails
Fixed a wrong call to range_end() (could cause an ASSERT in debug mode)
Renamed HA_VAR_LENGTH to HA_VAR_LENGTH_PART
Renamed in all files FIELD_TYPE_STRING and FIELD_TYPE_VAR_STRING to MYSQL_TYPE_STRING and MYSQL_TYPE_VAR_STRING to make it easy to catch all possible errors
Added support for VARCHAR KEYS to heap
Removed support for ISAM
Now only long VARCHAR columns are changed to TEXT on demand (not CHAR)
Internal temporary files can now use fixed length tables if the used VARCHAR columns are short
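A minimal sketch of the new length-byte handling described above (table and
column names are illustrative; the actual threshold is the column's maximum
byte length, which also depends on the character set):

  CREATE TABLE t1 (
    a VARCHAR(100),    -- maximum byte length < 256: stored with 1 length byte
    b VARCHAR(1000)    -- maximum byte length >= 256: stored with 2 length bytes
  );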
The CREATE DATABASE statement used the current database instead of the
database being created when checking the conditions for replication.
CREATE/DROP/ALTER DATABASE statements are now replicated based on
the manipulated database.
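For illustration, a hedged sketch of the new filtering behaviour (the database
names and the --replicate-do-db setup are hypothetical):

  -- slave started with --replicate-do-db=db2 (hypothetical setup)
  USE db1;
  CREATE DATABASE db2;   -- now matched against db2 (the created database),
                         -- not against db1 (the current database)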
- add_field_to_list() now uses List<String>
  instead of TYPELIB to be able to distinguish
  string literals such as 'aaa' from hex literals
  such as 0xaabbcc (see the sketch after this list).
- move some code from add_field_to_list() where
we don't know column charset yet, to
mysql_prepare_table(), where we do.
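A hedged sketch of why the distinction matters, assuming hex literals are
accepted in an ENUM value list (the list that add_field_to_list() collects):

  CREATE TABLE t1 (
    c ENUM('aaa', 0xE188B4) CHARACTER SET utf8
    -- 'aaa' is a plain string literal; 0xE188B4 is a hex literal whose
    -- bytes must be interpreted in the column character set, which is
    -- only known later, in mysql_prepare_table()
  );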
out of order". (final version)
Now, instead of binding Item_trigger_field objects to TABLE objects during
trigger definition parsing at table open, we walk through a special list
of all such objects kept for the trigger. This also makes it easy to check
all references to fields in the old/new version of the row during execution
of the CREATE TRIGGER statement (this is more of a courtesy to users,
since we can't check everything anyway).
We also report that such a reference is bad by returning an error from the
Item_trigger_field::fix_fields() method (instead of from setup_field()).
This means that if a trigger is broken, we will complain during trigger
execution instead of during trigger definition parsing at table open
(i.e. we now allow opening tables that have broken triggers).
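As a rough illustration of the new behaviour (table, trigger and column names
are made up):

  CREATE TABLE t1 (a INT, b INT);
  CREATE TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW SET NEW.b = NEW.a + 1;
  ALTER TABLE t1 DROP COLUMN b;   -- the trigger now refers to a missing field
  SELECT * FROM t1;               -- opening the table still works
  INSERT INTO t1 VALUES (1);      -- the error is reported here, when
                                  -- Item_trigger_field::fix_fields() runs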
binlog coordinates corresponding to the dump".
The good news is that mysqldump can now be used to get an online backup of
InnoDB *which works for point-in-time recovery and replication slave creation*.
Formerly, mysqldump --master-data --single-transaction was in fact equivalent
to mysqldump --master-data, so the dump was not an online dump (it held the
big read lock for the whole duration of the dump).
The only lock now taken with this patch is at the beginning of the dump;
mysqldump does:
FLUSH TABLES WITH READ LOCK; START TRANSACTION WITH CONSISTENT SNAPSHOT;
SHOW MASTER STATUS; UNLOCK TABLES;
so the lock time is in fact the time FLUSH TABLES WITH READ LOCK takes to
return (which can be near zero, or very long if a table is in the middle of
a huge update).
I have made some more minor changes, which are listed in the paragraph on mysqldump.c.
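A hedged sketch of the intended slave-creation workflow (host name, binlog file
name and position are placeholders; with --master-data the CHANGE MASTER TO
statement with the real coordinates is written into the dump itself):

  -- on the new slave, after loading the dump produced with
  -- mysqldump --single-transaction --master-data:
  CHANGE MASTER TO
    MASTER_HOST='master_host',
    MASTER_LOG_FILE='master-bin.000042',
    MASTER_LOG_POS=4;
  START SLAVE;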
WL#2237 "WITH CONSISTENT SNAPSHOT clause for START TRANSACTION":
this is a START TRANSACTION which additionally starts a consistent read on all
capable storage engines (i.e. InnoDB), so it can serve as a replacement for
BEGIN; SELECT * FROM some_innodb_table LIMIT 1; which also starts a consistent read.
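For example (a minimal sketch; some_innodb_table is just the placeholder name
used above):

  START TRANSACTION WITH CONSISTENT SNAPSHOT;
  -- all subsequent InnoDB reads in this transaction see the same snapshot,
  -- without needing the dummy SELECT ... LIMIT 1 workaround
  SELECT COUNT(*) FROM some_innodb_table;
  COMMIT;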