the auto_increment value
This is an alternative patch that, instead of allowing RECREATE TABLE
on TRUNCATE TABLE, implements reset_auto_increment(), which is called
after delete_all_rows().
Note: this bug was fixed by Mattias Jonsson:
Pushing this patch: http://lists.mysql.com/commits/70370
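A minimal sketch of that approach, using made-up stand-ins for the handler
interface (Handler, truncate_table() and the method names are illustrative,
not the actual server API): TRUNCATE is mapped to deleting all rows and then
resetting the auto-increment counter, instead of recreating the table.

    #include <cstdio>

    // Simplified stand-in for a storage engine handler (illustrative only).
    class Handler {
    public:
      virtual ~Handler() {}
      virtual int delete_all_rows() = 0;
      // New hook: reset the engine's auto_increment counter to `value`.
      virtual int reset_auto_increment(unsigned long long value) { (void) value; return 0; }
    };

    class ExampleEngine : public Handler {
    public:
      unsigned long long next_insert_id = 42;
      int delete_all_rows() override { return 0; }          // remove all rows
      int reset_auto_increment(unsigned long long value) override {
        next_insert_id = value;
        return 0;
      }
    };

    // TRUNCATE TABLE expressed as "delete all rows, then reset auto_increment",
    // rather than dropping and recreating the table.
    int truncate_table(Handler &h) {
      if (int err = h.delete_all_rows())
        return err;
      return h.reset_auto_increment(1);
    }

    int main() {
      ExampleEngine t;
      std::printf("truncate: %d, next id: %llu\n", truncate_table(t), t.next_insert_id);
    }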
- Added a handler call (prepare_index_scan()) to inform storage engines that an index scan is about to take place.
- Extended the maximum number of key parts for an index from 16 to 32
- Extended MyISAM and Maria engines to support up to 32 parts
Added checks for return value from ha_index_init()
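As a rough illustration of these two interface changes (simplified,
illustrative types rather than the real handler API): a default no-op
notification hook that engines may override, and a caller that checks the
return value of index initialization instead of ignoring it.

    #include <cstdio>

    class Handler {
    public:
      virtual ~Handler() {}
      // Initialize an index scan; returns 0 on success, an error code otherwise.
      virtual int index_init(unsigned key_nr) = 0;
      // New notification hook: an index scan is about to take place.
      // The default is a no-op so existing engines are unaffected.
      virtual int prepare_index_scan() { return 0; }
    };

    class ExampleEngine : public Handler {
    public:
      int index_init(unsigned key_nr) override { return key_nr < 32 ? 0 : 1; }
      int prepare_index_scan() override {
        // e.g. switch on read-ahead, prefetch index blocks, ...
        return 0;
      }
    };

    int start_index_scan(Handler &h, unsigned key_nr) {
      int error = h.index_init(key_nr);   // check the return value instead of ignoring it
      if (error)
        return error;
      return h.prepare_index_scan();      // tell the engine a scan is about to start
    }

    int main() {
      ExampleEngine e;
      std::printf("%d %d\n", start_index_scan(e, 0), start_index_scan(e, 33));
    }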
include/my_handler.h:
Extended number of key parts for MyISAM and Maria from 16 to 32
include/my_pthread.h:
Ensure we always have 256M of stack.
(Required to be able to handle the current number of keys and key parts in MyISAM)
mysql-test/r/create.result:
Extended to test for 32 key parts
mysql-test/r/myisam.result:
Test that we can create 32 but not 33 key parts
mysql-test/r/ps_1general.result:
Length of ref is now 2048 as we can have more key parts
mysql-test/r/ps_2myisam.result:
Length of ref is now 2048 as we can have more key parts
mysql-test/r/ps_3innodb.result:
Length of ref is now 2048 as we can have more key parts
mysql-test/r/ps_4heap.result:
Length of ref is now 2048 as we can have more key parts
mysql-test/r/ps_5merge.result:
Length of ref is now 2048 as we can have more key parts
mysql-test/suite/maria/r/maria.result:
Max key length is now 1208 bytes
mysql-test/suite/maria/r/maria3.result:
Max key length is now 1208 bytes
mysql-test/suite/maria/r/ps_maria.result:
Max key length is now 1208 bytes
mysql-test/t/create.test:
Extended to test for 32 key parts
mysql-test/t/myisam.test:
Test that we can create 32 but not 33 key parts
sql/handler.cc:
Check return value from ha_index_init()
sql/handler.h:
Added a handler call (prepare_index_scan()) to inform storage engines that an index scan is about to take place.
sql/sql_select.cc:
Checks all return values from ha_index_init()
Call prepare_index_scan() to inform storage engines that an index scan is about to take place.
Fixed indentation
sql/table.cc:
Fixed wrong types for key_length (rest of code assumed this was 32 bit)
sql/unireg.h:
Extended the maximum number of key parts for an index from 16 to 32
storage/maria/ha_maria.cc:
Don't allocate HA_CHECK on the stack in functions where we call repair(), as HA_CHECK is huge and would overflow the stack
storage/myisam/ha_myisam.cc:
Don't allocate HA_CHECK on the stack in functions where we call repair(), as HA_CHECK is huge and would overflow the stack
storage/myisam/mi_check.c:
Fixed an incorrect check for value overflow
tests/mysql_client_test.c:
Added fflush() to fix output in case of error
Fixed wrong check of 'ref' length in EXPLAIN
This patch fixes the compilation warning "conversion from 'time_t' to 'ulong',
possible loss of data".
The fix is to cast the time_t value to ulong before assigning it to a ulong variable.
Backported this from 6.0-bugteam tree.
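For illustration only (the field and variable names below are made up), the
warning and the fix look roughly like this:

    #include <cstdio>
    #include <ctime>

    struct EngineStats {
      unsigned long create_time;                        // 'ulong' in the server's terms
    };

    int main() {
      EngineStats stats;
      std::time_t file_ctime = std::time(nullptr);
      // An implicit conversion warns where time_t is wider than ulong:
      //   stats.create_time = file_ctime;              // "possible loss of data"
      stats.create_time = (unsigned long) file_ctime;   // explicit cast silences it
      std::printf("create_time=%lu\n", stats.create_time);
    }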
storage/archive/ha_archive.cc:
type casting time_t to ulong before assigning.
storage/federated/ha_federated.cc:
type casting time_t to ulong before assigning.
storage/innobase/handler/ha_innodb.cc:
type casting time_t to ulong before assigning.
storage/myisam/ha_myisam.cc:
type casting time_t to ulong before assigning.
This patch adds corrections to the original patch
submitted 2009-04-08 (http://lists.mysql.com/commits/71607):
- fixed the original patch, which did not work because of an
incorrect condition;
- added a test case.
mysql-test/r/upgrade.result:
Bug#37631 Incorrect key file for table after upgrading from 5.0 to 5.1
Result file for test case
mysql-test/std_data/bug37631.MYD:
Bug#37631 Incorrect key file for table after upgrading from 5.0 to 5.1
table created in mysql 4.0
mysql-test/std_data/bug37631.MYI:
Bug#37631 Incorrect key file for table after upgrading from 5.0 to 5.1
table created in mysql 4.0
mysql-test/std_data/bug37631.frm:
Bug#37631 Incorrect key file for table after upgrading from 5.0 to 5.1
table created in mysql 4.0
mysql-test/t/upgrade.test:
Bug#37631 Incorrect key file for table after upgrading from 5.0 to 5.1
Adds test for checking that upgrade works
on a table which is created by a mysql
server version <= 4.0.
storage/myisam/ha_myisam.cc:
Bug#37631 Incorrect key file for table after upgrading from 5.0 to 5.1
Fix the conformance checker to relax the version check for tables
created by a MySQL server version <= 4.0
Killing an insert-select statement on MyISAM corrupts the table.
Killing the insert-select statement corrupts the MyISAM table only
when the destination table is empty and has indexes. When we bulk
insert a large amount of data into an empty destination table, we
disable the indexes for fast inserts, insert the data, and re-enable
the indexes after the bulk insert operation.
Killing the query aborts the repair-table operation during the
enable-indexes phase, leading to table corruption.
We now truncate the table when we detect that enable-indexes was
killed for a bulk insert query. As the table was empty before the
operation, truncating it restores a consistent state.
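A sketch of that recovery logic with made-up names standing in for the real
handler and THD types: if re-enabling indexes is aborted by a kill and the
table was empty before the bulk insert, the table is truncated so it ends up
consistent (and empty) instead of corrupted.

    #include <cstdio>

    struct Session { bool killed = false; };     // stand-in for the server's THD

    class MyisamLikeTable {
      bool bulk_started_on_empty_table = false;
    public:
      Session *session = nullptr;

      void start_bulk_insert(bool table_is_empty) {
        bulk_started_on_empty_table = table_is_empty;
        if (table_is_empty)
          disable_indexes();                     // fast path for bulk loading
      }

      int end_bulk_insert() {
        int error = enable_indexes();            // rebuilds the disabled indexes
        if (error && session && session->killed && bulk_started_on_empty_table) {
          // The rebuild was aborted by KILL, so the indexes are half-built and
          // the table would otherwise be left corrupted.  The table was empty
          // before the bulk insert, so truncating restores a consistent state;
          // the killed statement still fails, but nothing is corrupted.
          delete_all_rows();
        }
        return error;
      }

    private:
      void disable_indexes() {}
      int enable_indexes() { return (session && session->killed) ? 1 : 0; }
      void delete_all_rows() {}
    };

    int main() {
      MyisamLikeTable t;
      Session s;
      t.session = &s;
      t.start_bulk_insert(/*table_is_empty=*/true);
      s.killed = true;                           // simulate KILL during the rebuild
      std::printf("end_bulk_insert: %d\n", t.end_bulk_insert());
    }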
mysql-test/r/myisam.result:
Result file for BUG#40827
mysql-test/t/myisam.test:
Testcase for BUG#40827
storage/myisam/ha_myisam.cc:
Fixed the end_bulk_insert() method to truncate the table when we detect
that the enable-indexes operation was killed.
The conformance checker was not taking into
account, and making concessions for, acceptable
incompatibilities in tables created by
versions earlier than 4.1.
The current patch relaxes the conformance
checker to ignore differences in key_alg
and language for tables created by versions
earlier than 4.1.
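A sketch of the relaxed comparison (simplified, with illustrative structs;
the real logic lives in ha_myisam's definition check): differences in the
key algorithm and language fields are ignored when the table was created by
a pre-4.1 server.

    #include <cstdint>

    struct KeyDef {
      uint8_t  key_alg;    // index algorithm (HA_KEY_ALG_* in the real code)
      uint8_t  language;   // charset/collation id used by the key
      uint16_t length;
    };

    // Compare a key definition from the .frm with the one stored in the .MYI.
    // Tables created before 4.1 did not fill in key_alg and language
    // consistently, so those differences are acceptable and ignored.
    bool keys_compatible(const KeyDef &frm, const KeyDef &myi,
                         bool created_before_4_1) {
      if (frm.length != myi.length)
        return false;
      if (created_before_4_1)
        return true;                             // relaxed: skip key_alg/language
      return frm.key_alg == myi.key_alg && frm.language == myi.language;
    }

    int main() {
      KeyDef frm{1, 8, 100}, myi{2, 33, 100};
      return keys_compatible(frm, myi, /*created_before_4_1=*/true) ? 0 : 1;
    }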
storage/myisam/ha_myisam.cc:
Modify check_definition to ignore differences
in key_alg and language for tables created
by versions earlier than 4.1.
The problem is that select queries executed concurrently with
a concurrent insert on a MyISAM table could be cached if the
select started after the query cache invalidation but before
the unlock of tables performed by the concurrent insert. This
race could happen because the concurrent insert was failing
to prevent caching of select queries running at the same time.
The solution is to add an 'uncacheable' status flag to signal
that a concurrent insert is being performed on the table and
that queries executing at the same time shouldn't cache the
results.
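A sketch of the flag protocol (simplified; std::atomic is used only to keep
the illustration self-contained, the real code relies on the server's table
locking): the concurrent insert marks the share as having an insert in
progress, and a SELECT overlapping that window refuses to store its result
in the query cache.

    #include <atomic>
    #include <cstdio>

    struct TableShare {
      // Set while a concurrent insert makes the table state unreliable for
      // the query cache; cleared once the state has been updated back.
      std::atomic<bool> concurrent_insert_in_progress{false};
    };

    void concurrent_insert(TableShare &share) {
      share.concurrent_insert_in_progress = true;   // results are now uncacheable
      // ... append rows, update the table state, unlock ...
      share.concurrent_insert_in_progress = false;  // state is consistent again
    }

    bool may_cache_select_result(const TableShare &share) {
      // A SELECT that overlaps the insert above must not be stored in the
      // query cache, or a stale result could be served later.
      return !share.concurrent_insert_in_progress.load();
    }

    int main() {
      TableShare share;
      std::printf("cacheable: %d\n", may_cache_select_result(share));
    }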
mysql-test/r/query_cache_debug.result:
Add test case result for Bug#41098
mysql-test/t/disabled.def:
Re-enable test case.
mysql-test/t/query_cache_debug.test:
Add test case for Bug#41098
sql/sql_cache.cc:
Debug sync point for regression testing purposes.
sql/sql_insert.cc:
Remove meaningless query cache invalidate. There is already
a preceding invalidate for queries that started before the
concurrent insert.
storage/myisam/ha_myisam.cc:
Check for an active concurrent insert.
storage/myisam/mi_locking.c:
Signal the start of a concurrent insert. Flag is zeroed once
the state is updated back.
storage/myisam/myisamdef.h:
Add a flag to signal an active concurrent insert.
- Remove bothersome warning messages. This change focuses on the warnings
that are covered by the ignore file: support-files/compiler_warnings.supp.
- Strings are guaranteed to be max uint in length
MyISAM copied key statistics incorrectly, which may cause a
server crash or incorrect cardinality values. This can happen only on
platforms where the size of long differs from the size of a pointer.
To determine the number of bytes to be copied from an array of ulong,
MyISAM mistakenly used sizeof(pointer) instead of sizeof(ulong).
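Illustrated with a minimal example (not the actual MyISAM code): on LP64
platforms sizeof(ulong) equals sizeof(void *), so the bug is invisible
there, but on LLP64 platforms such as 64-bit Windows a long is 4 bytes
while a pointer is 8, and the copy runs past the end of both arrays.

    #include <cstring>

    typedef unsigned long ulong;

    void copy_key_stats(ulong *dest, const ulong *src, std::size_t key_parts) {
      // Buggy version: sizeof(src) is the size of a *pointer*, not of a ulong.
      // Where the two sizes differ this copies too many bytes and overruns
      // both buffers:
      //   memcpy(dest, src, key_parts * sizeof(src));
      // Correct version: use the size of one array element.
      std::memcpy(dest, src, key_parts * sizeof(*src));
    }

    int main() {
      ulong src[4] = {1, 2, 3, 4}, dest[4] = {0};
      copy_key_stats(dest, src, 4);
      return (int) dest[3] - 4;   // 0 on success
    }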
code backported from 6.0
per-file messages:
include/my_global.h
Remove SC_MAXWIDTH. This is unused and irrelevant nowadays.
include/my_sys.h
Remove errbuf declaration and unused definitions.
mysys/my_error.c
Remove errbuf definition and move and adjust ERRMSGSIZE.
mysys/my_init.c
Declare buffer on the stack and use my_snprintf.
mysys/safemalloc.c
Use size explicitly. It's more than enough for the message at hand.
sql/sql_error.cc
Use size explicitly. It's more than enough for the message at hand.
sql/sql_parse.cc
Declare the buffer on the stack (see the sketch after this list). Use my_snprintf as it will result in
less stack space being used than by a system provided sprintf --
this allows us to put the buffer on the stack without causing much
trouble. Also, the use of errbuff here was not thread-safe as the
function can be entered concurrently from multiple threads.
sql/sql_table.cc
Use MYSQL_ERRMSG_SIZE. Extra space is not needed as my_snprintf will
nul terminate strings.
storage/myisam/ha_myisam.cc
Use MYSQL_ERRMSG_SIZE.
sql/share/errmsg.txt
Error message truncation in test "innodb" in embedded mode:
the filename in the error message can safely take up to 210 characters.
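A generic illustration of the sql_parse.cc change mentioned above (the real
code uses my_snprintf and MYSQL_ERRMSG_SIZE; plain snprintf and a made-up
size stand in for them here): the shared static buffer is replaced by a
per-call buffer on the stack, which both bounds the output and makes the
code safe to enter from several threads at once.

    #include <cstdio>

    // Before: one static buffer shared by all threads -- not thread-safe, and
    // sprintf() gives no bound on how much it writes:
    //   static char errbuff[2048];
    //   sprintf(errbuff, "Incorrect definition of table %s.%s ...", db, name);

    enum { ERRMSG_SIZE = 512 };                  // stand-in for MYSQL_ERRMSG_SIZE

    void report_definition_error(const char *db, const char *name) {
      char buff[ERRMSG_SIZE];                    // per-call buffer on the stack
      // snprintf truncates instead of overflowing, so a fixed-size stack
      // buffer is safe, and every thread gets its own copy.
      std::snprintf(buff, sizeof(buff), "Incorrect definition of table %s.%s", db, name);
      std::fprintf(stderr, "%s\n", buff);
    }

    int main() { report_definition_error("mysql", "proc"); }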
mmap is indeed slower than caching.
The problem here is that mmap is used even if --myisam-use-mmap=OFF.
Solved by checking the flag in ha_myisam::extra(), which is called
from init_read_record().
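A sketch of the guard (HA_EXTRA_MMAP and opt_myisam_use_mmap are the names
from this commit; the surrounding types are illustrative): extra() simply
ignores the mmap hint when the option is off, so the scan started from
init_read_record() stays on the cached read path.

    #include <cstdio>

    enum ExtraOperation { HA_EXTRA_CACHE, HA_EXTRA_MMAP };

    bool opt_myisam_use_mmap = false;            // --myisam-use-mmap

    struct MyisamLikeHandler {
      bool mmap_used = false;
      int extra(ExtraOperation operation) {
        switch (operation) {
        case HA_EXTRA_MMAP:
          if (!opt_myisam_use_mmap)
            return 0;                            // option off: ignore the hint, keep caching
          mmap_used = true;                      // otherwise switch the data file to mmap
          return 0;
        default:
          return 0;
        }
      }
    };

    int main() {
      MyisamLikeHandler h;
      h.extra(HA_EXTRA_MMAP);                    // as called from init_read_record()
      std::printf("mmap used: %d\n", h.mmap_used);
    }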
per-file comments:
storage/myisam/ha_myisam.cc
Bug#40634 table scan temporary table is 4x slower due to mmap instead of caching
Do nothing for HA_EXTRA_MMAP if opt_myisam_use_mmap is not set
This also adds a check that MyISAM tables with incompatible checksums are detected by CHECK TABLE ... [FOR UPGRADE] and thus also by mysql_upgrade.
The incompatible tables are MyISAM tables with ROW_FORMAT=FIXED that have VARCHAR fields and CHECKSUM enabled.
Before, these tables gave different checksums depending on whether CHECK TABLE was run with or without EXTENDED.
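Roughly why the two checksums could disagree for such tables (a simplified
illustration, not the actual MyISAM algorithms): with ROW_FORMAT=FIXED a
VARCHAR column occupies its full width on disk, so a checksum over the raw
row bytes also covers the unused tail of the column, while a checksum
computed from the actual column values does not.

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // One fixed-format row with a single VARCHAR(8) column holding "ab":
    // a length byte plus 8 data bytes, of which 6 are unused padding.
    struct FixedRow {
      uint8_t len;
      char    data[8];
    };

    static uint32_t add_bytes(uint32_t sum, const void *ptr, std::size_t n) {
      const uint8_t *b = (const uint8_t *) ptr;
      while (n--)
        sum = sum * 31 + *b++;
      return sum;
    }

    int main() {
      FixedRow row;
      std::memset(&row, 0xAA, sizeof(row));      // stale bytes left in the padding
      row.len = 2;
      std::memcpy(row.data, "ab", 2);

      // "Old" style: checksum the raw row image, padding included.
      uint32_t old_sum = add_bytes(0, &row, sizeof(row));
      // "New" style: checksum only the actual column value.
      uint32_t new_sum = add_bytes(0, row.data, row.len);

      std::printf("old=%u new=%u differ=%d\n", old_sum, new_sum, old_sum != new_sum);
    }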
mysql-test/r/old-mode.result:
Now we get same results with and without EXTENDED
mysql-test/r/row-checksum-old.result:
Initial results
mysql-test/r/row-checksum.result:
Initial results
mysql-test/t/old-mode.test:
Added test with QUICK to show that the live checksum is not used when running with --old
mysql-test/t/row-checksum-old-master.opt:
Start mysqld with --old mode to enable old checksum code
mysql-test/t/row-checksum-old.test:
Run row-checksum test under mysqld --old
mysql-test/t/row-checksum.test:
Verify that checksum are calculated the same way with and without EXTENDED
We run this with several storage engines to ensure the results are the same across storage engines
sql/ha_partition.cc:
Use new HA_HAS_xxx_CHECKSUM flags
sql/handler.cc:
Use new HA_HAS_xxx_CHECKSUM flags
sql/handler.h:
Split HA_HAS_CHECKSUM into HA_HAS_NEW_CHECKSUM and HA_HAS_OLD_CHECKSUM flags.
This is a safe API change as only MyISAM and Maria should use these handler flags.
sql/sql_show.cc:
Use new HA_HAS_xxx_CHECKSUM flags
sql/sql_table.cc:
Use file->checksum() for live checksums if the live checksum method corresponds to the mysqld --old flag
storage/maria/ha_maria.cc:
Use new HA_HAS_xxx_CHECKSUM flags
storage/myisam/ha_myisam.cc:
Set the HA_HAS_OLD_CHECKSUM and/or HA_HAS_NEW_CHECKSUM flags depending on whether this is a new or old MyISAM file
Add method check_for_upgrade() to detect whether the table is an old version with a checksum that is incompatible with CHECK TABLE ... EXTENDED
storage/myisam/ha_myisam.h:
Added check_for_upgrade()
storage/myisam/mi_open.c:
Removed an unneeded cast
Initialize share->has_null_fields and share->has_varchar_fields variables
storage/myisam/myisamdef.h:
Added share->has_null_fields and share->has_varchar_fields
Also fixed some similar issues in MyISAM. This was not noticed before, as MyISAM did a second retry without the key cache (which just made the second repair attempt slower)
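The flag handling is roughly as sketched below (the constants are defined
locally for illustration; they mirror the T_QUICK and T_SAFE_REPAIR names
used in the per-file notes that follow): when a quick repair fails, the
retry drops the QUICK flag and adds the safe-repair flag so no rows can be
silently lost.

    #include <cstdio>

    // Local stand-ins for the repair flag bits referred to below.
    enum {
      T_QUICK       = 1 << 0,   // repair without rebuilding the data file
      T_SAFE_REPAIR = 1 << 1    // abort rather than lose rows
    };

    // Pretend the quick repair hits a problem that needs a full repair.
    static int do_repair(unsigned flags) { return (flags & T_QUICK) ? 1 : 0; }

    int repair_with_retry(unsigned flags) {
      int error = do_repair(flags);
      if (error && (flags & T_QUICK)) {
        // Retry without QUICK, but insist on not losing any rows.
        unsigned retry_flags = (flags & ~T_QUICK) | T_SAFE_REPAIR;
        error = do_repair(retry_flags);
      }
      return error;
    }

    int main() {
      std::printf("repair: %d\n", repair_with_retry(T_QUICK));
    }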
storage/maria/ha_maria.cc:
Print information if we retry without quick in case of CHECK TABLE table_name QUICK
Remove the T_QUICK flag when retrying repair, but set T_SAFE_REPAIR to ensure we don't lose any rows
Remember T_RETRY_WITH_QUICK flag when restoring repair flags
Don't print 'checking table' if we are not checking table in auto-repair
Don't use T_QUICK in auto repair (safer)
Changed parameter of type HA_PARAM &param to HA_PARAM *param
storage/maria/ha_maria.h:
Changed parameter of type HA_PARAM &param to HA_PARAM *param
storage/maria/ma_check.c:
Added retry without T_QUICK if there is a problem reading a row in BLOCK_RECORD
storage/myisam/ha_myisam.cc:
Remove the T_QUICK flag when retrying repair, but set T_SAFE_REPAIR to ensure we don't lose any rows
Remember T_RETRY_WITH_QUICK flag when restoring repair flags