Simplified the myisam_crash_before_flush_keys test and made it
more deterministic.
mysql-test/r/myisam_crash_before_flush_keys.result:
We don't need myisamchk to test this bug fix. CHECK TABLE
after server restart is enough to ensure that the fix
works.
mysql-test/t/myisam_crash_before_flush_keys.test:
We don't need myisamchk to test this bug fix. CHECK TABLE
after server restart is enough to ensure that the fix
works.
columns without where/group
Simple SELECT with implicit grouping used to return many rows if
the query was ordered by the aggregated column in the SELECT
list. This was incorrect because queries with implicit grouping
should only return a single record.
The problem was that when JOIN::exec() decided if execution needed
to handle grouping, it was assumed that sum_func_count==0 meant
that there were no aggregate functions in the query. This
assumption was not correct in JOIN::exec() because the aggregate
functions might have been optimized away during JOIN::optimize().
The reason why queries without ordering behaved correctly was
that sum_func_count is only recalculated if the optimizer chooses
to use temporary tables (which it does in the ordered case).
Hence, non-ordered queries were correctly treated as grouped.
The fix for this bug was to remove the assumption that
sum_func_count==0 means that there is no need for grouping. This
was done by introducing the variable "bool implicit_grouping" in the
JOIN object.
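As a rough illustration of this flag (a hypothetical, heavily simplified sketch, not the actual JOIN class), it can be computed once during optimization from the same expression that used to be tested in several places:
  /* Sketch only: derive implicit_grouping from (!group_list && sum_func_count),
     assuming simplified member names. */
  struct Join_sketch
  {
    void     *group_list;         /* non-NULL when a GROUP BY clause exists */
    unsigned  sum_func_count;     /* aggregate functions found at parse time */
    bool      implicit_grouping;  /* aggregates present, but no GROUP BY */

    void optimize_sketch()
    {
      /* Computed before aggregates can be optimized away, so later phases
         (e.g. choosing the end_select method) no longer have to rely on
         sum_func_count alone. */
      implicit_grouping= (group_list == 0 && sum_func_count > 0);
    }
  };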
mysql-test/r/func_group.result:
Add test for BUG#47280
mysql-test/t/func_group.test:
Add test for BUG#47280
sql/opt_sum.cc:
Improve comment for opt_sum_query()
sql/sql_class.h:
Add comment for variables in TMP_TABLE_PARAM
sql/sql_select.cc:
Introduce and use variable implicit_grouping instead of (!group_list && sum_func_count) in places that need to test if grouping is required. Also added comments for: optimization of aggregate fields for implicitly grouped queries (JOIN::optimize) and choice of end_select method (JOIN::execute)
sql/sql_select.h:
Add variable implicit_grouping, which will be TRUE for queries that contain aggregate functions but no GROUP BY clause. Also added comment to sort_and_group variable.
The problem was in incorrect handling of predicates involving
NULL as a constant value by the range optimizer.
For example, when creating a SEL_ARG node from a condition of
the form "field < const" (which would normally result in the
"NULL < field < const" SEL_ARG), the special case when "const"
is NULL was not taken into account, so "NULL < field < NULL"
was produced for the "field < NULL" condition.
As a result, SEL_ARG structures of this form could not be
further optimized which in turn could lead to incorrectly
constructed SEL_ARG trees. In particular, code assuming SEL_ARG
structures to always form a sequence of ordered disjoint
intervals could enter an infinite loop under some
circumstances.
Fixed by changing get_mm_leaf() so that for any sargable
predicate except "<=>" involving NULL as a constant, an "empty"
SEL_ARG is returned, since such a predicate is always false.
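A minimal sketch of that special case (hypothetical, simplified types; the real logic lives in get_mm_leaf() in sql/opt_range.cc):
  /* Sketch only: any sargable comparison other than <=> against a NULL
     constant can never be true, so return an impossible ("empty") range
     instead of building a bogus "NULL < field < NULL" interval. */
  enum Cmp_func_sketch { LT_FUNC, LE_FUNC, GT_FUNC, GE_FUNC, EQ_FUNC, EQUAL_FUNC /* <=> */ };

  struct Sel_arg_sketch { bool impossible; };

  Sel_arg_sketch make_leaf_sketch(Cmp_func_sketch type, bool value_is_null)
  {
    Sel_arg_sketch leaf= { false };
    if (value_is_null && type != EQUAL_FUNC)
      leaf.impossible= true;   /* "field < NULL" etc. is always false */
    return leaf;
  }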
mysql-test/r/range.result:
Added a test case for bug #47123.
mysql-test/t/range.test:
Added a test case for bug #47123.
sql/opt_range.cc:
Fixed get_mm_leaf() so that for any sargable
predicate except "<=>" involving NULL as a constant, an "empty"
SEL_ARG is returned, since such a predicate is always false.
Problem: using a zero microsecond part (e.g. "YYYY-MM-DD HH:MM:SS.0000")
in a WHERE condition may lead to wrong results due to improper
DATETIME comparison in some cases.
Fix: as we compare DATETIMEs as strings, we must trim trailing 0's
in such cases.
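A hedged sketch of the trimming step (hypothetical helper, not the actual item.cc code):
  #include <string.h>

  /* Sketch only: before DATETIME strings are compared, drop trailing zeros
     (and a then-empty '.') from the fractional part, so that
     "2009-01-01 00:00:00.0000" compares equal to "2009-01-01 00:00:00". */
  size_t trim_datetime_fraction(char *str, size_t length)
  {
    if (!memchr(str, '.', length))
      return length;                        /* no microsecond part at all */
    while (length > 0 && str[length - 1] == '0')
      length--;                             /* strip trailing zeros */
    if (length > 0 && str[length - 1] == '.')
      length--;                             /* strip the now-empty dot */
    str[length]= '\0';
    return length;
  }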
mysql-test/r/innodb_mysql.result:
Fix for bug#47963: Wrong results when index is used
- test result.
mysql-test/t/innodb_mysql.test:
Fix for bug#47963: Wrong results when index is used
- test case.
sql/item.cc:
Fix for bug#47963: Wrong results when index is used
- when comparing DATETIMEs, trim trailing 0's in the
microsecond part.
In MySQL, when the mapping for space is changed to something other than
0x20 by defining a different collation, space is not ignored when
comparing two strings.
This was happening because the function that performs the comparison
of two strings while ignoring ending spaces was comparing the collation
value of a space with the ASCII value of the ' ' character. This should
be changed to do the comparison between the collated values.
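A hedged illustration of the corrected comparison (hypothetical helper; the real change is in my_strnncollsp_simple in strings/ctype-simple.c): when one string is a prefix of the other, the remaining tail has to be weighed against the collated value of ' ', not against the raw byte 0x20.
  #include <stddef.h>

  typedef unsigned char uchar;

  /* Sketch only: compare the tail of the longer string against the
     *collated* weight of the space character. */
  int compare_tail_sketch(const uchar *sort_order, const uchar *tail, size_t len)
  {
    uchar space_weight= sort_order[(uchar) ' '];
    for (size_t i= 0; i < len; i++)
    {
      if (sort_order[tail[i]] != space_weight)
        return sort_order[tail[i]] < space_weight ? -1 : 1;
    }
    return 0;     /* only characters collating as space: strings are equal */
  }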
mysql-test/r/ctype_ldml.result:
Bug#46448 trailing spaces are not ignored when user collation maps space != 0x20
Result file for test case.
mysql-test/std_data/Index.xml:
Bug#46448 trailing spaces are not ignored when user collation maps space != 0x20
Added entry for new test collation in the index file.
mysql-test/std_data/latin1.xml:
Bug#46448 trailing spaces are not ignored when user collation maps space != 0x20
Added support for new collation for test.
mysql-test/t/ctype_ldml.test:
Bug#46448 trailing spaces are not ignored when user collation maps space != 0x20
Added test case to ensure trailing spaces are not ignored.
strings/ctype-simple.c:
Bug#46448 trailing spaces are not ignored when user collation maps space != 0x20
Change my_strnncollsp_simple to compare collated values when checking
for trailing spaces. Previously the comparison happened between a
collated value and the ASCII value.
low myisam_sort_buffer_size
Repair by sort (the default) or parallel repair of a MyISAM table
(partitioned or not), as well as bulk insert and enable indexes,
sometimes didn't fail over to repair with key cache.
The problem was that after an unsuccessful attempt the data file
was closed, whereas repair with key cache requires an open data
file. Fixed by reopening the data file.
Also fixed a valgrind warning that may appear during repair by
sort or parallel repair, depending on the combination of
myisam_sort_buffer_size, number of rows and index entry length.
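A conceptual sketch of the data file reopening mentioned above (hypothetical names; the real fix is in storage/myisam/mi_check.c): if the temporary data file is already gone, the original data file must be reopened before repair with key cache can be attempted.
  #include <fcntl.h>

  struct Myisam_info_sketch
  {
    int         dfile;            /* data file descriptor, -1 if closed */
    const char *data_file_name;
  };

  /* Sketch only: make sure a valid data file is open again after a failed
     repair by sort / parallel repair. */
  bool reopen_data_file_sketch(Myisam_info_sketch *info)
  {
    if (info->dfile >= 0)
      return true;                             /* still open, nothing to do */
    info->dfile= open(info->data_file_name, O_RDWR);
    return info->dfile >= 0;
  }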
mysql-test/r/myisam.result:
A test case for BUG#47073.
mysql-test/t/myisam.test:
A test case for BUG#47073.
storage/myisam/ha_myisam.cc:
Reverted fix for BUG25289. Not needed anymore.
storage/myisam/mi_check.c:
Reopen data file, when repair by sort or parallel repair
fails.
When repair by sort is requested to rebuild data file, data file
gets rebuilt while fixing first index. When rebuild is completed,
info->dfile is pointing to temporary data file, original data file
is closed.
It may happen that repair has successfully fixed first index and
rebuilt data file, but failed to fix second index. E.g.
myisam_sort_buffer_size was big enough to fix first shorter index,
but not enough to fix subsequent longer index.
In this case we end up with info->dfile pointing to temporary file,
which is removed and info->dfile is set to -1.
Though repair by sort failed, the upper layer may still want to
try repair with key cache. But it needs info->dfile pointing to
valid data file.
storage/myisam/sort.c:
When performing a copy of IO_CACHE structure, current_pos and
current_end must be updated separately to point to the memory we're
copying to (not to memory we're copying from).
As t_file2 is always WRITE cache, proper members are write_pos
and write_end accordingly.
We set up DATE and TIMESTAMP differently in field-creation than we
did in field-MD creation (for CREATE). Admirably, ALTER TABLE
detected this and didn't damage any data, but it did initiate a
full copy/conversion, which we don't really need to do.
Now we describe Field and Create_field the same for those types.
As a result, ALTER TABLE that only changes meta-data (like a
field's name) no longer forces a data-copy when there needn't
be one.
mysql-test/r/alter_table.result:
0 rows should be affected when a meta-data change is enough for ALTER TABLE.
mysql-test/t/alter_table.test:
add test-case: show that we don't do a full data-copy on ALTER TABLE
when we don't need to.
sql/field.cc:
Remove Field_str::compare_str_field_flags() (now in Field/Create_field as
field_flags_are_binary()).
Correct some field-lengths!
sql/field.h:
Clean-up: use defined constants rather than numeric literals for certain
field-lengths.
Add enquiry-functions binaryp() to classes Field and Create_field.
This replaces field.cc's Field_str::compare_str_field_flags().
covering index
When two range predicates were combined under an OR
predicate, the algorithm tried to merge overlapping ranges
into one. But the case when a range overlapped several other
ranges was not handled. This led to
1) ranges overlapping, which gave repeated results and
2) a range that overlapped several other ranges was cut off.
Fixed by
1) Making sure that a range got an upper bound equal to the
next range with a greater minimum.
2) Removing a continue statement
mysql-test/r/group_min_max.result:
Bug#42846: Changed query plans
mysql-test/r/range.result:
Bug#42846: Test result.
mysql-test/t/range.test:
Bug#42846: Test case.
sql/opt_range.cc:
Bug#42846: The fix.
Part1: Previously, both endpoints from key2 were copied,
which is not safe. Since ranges are processed in ascending
order of minimum endpoints, it is safe to copy the minimum
endpoint from key2 but not the maximum. The maximum may only
be copied if there is no other range or the other range's
minimum is greater than key2's maximum.
backport for bug#44059 from mysql-pe to mysql-5.1-bugteam
Using the partition with most rows instead of first partition
to estimate the cardinality of indexes.
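A rough sketch of the idea (hypothetical code, not the actual ha_partition.cc change):
  #include <vector>
  #include <cstddef>

  /* Sketch only: pick the partition holding the most rows and use its
     statistics for index cardinality, instead of always using partition 0. */
  size_t partition_with_most_rows_sketch(const std::vector<unsigned long long> &part_rows)
  {
    size_t best= 0;
    for (size_t i= 1; i < part_rows.size(); i++)
      if (part_rows[i] > part_rows[best])
        best= i;
    return best;      /* use this partition's stats for HA_STATUS_CONST */
  }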
mysql-test/r/partition.result:
Bug#44059: Incorrect cardinality of indexes on a partitioned table
Added test result
mysql-test/t/partition.test:
Bug#44059: Incorrect cardinality of indexes on a partitioned table
Added test case
sql/ha_partition.cc:
Bug#44059: Incorrect cardinality of indexes on a partitioned table
Checking which partition has the most rows, and using that
partition for HA_STATUS_CONST instead of the first partition
is reached
The problem was bad error handling, leaving some new temporary
partitions locked and initialized and some not yet initialized
and locked, leading to a crash when trying to unlock the not
yet initialized and locked partitions.
The solution was to unlock the already locked partitions, and not
include any of the new temporary partitions in later unlocks.
mysql-test/r/partition_open_files_limit.result:
Bug#46922: crash when adding partitions and open_files_limit
is reached
New test result
mysql-test/t/partition_open_files_limit-master.opt:
Bug#46922: crash when adding partitions and open_files_limit
is reached
New test opt-file for testing when open_files_limit is reached
mysql-test/t/partition_open_files_limit.test:
Bug#46922: crash when adding partitions and open_files_limit
is reached
New test case testing when open_files_limit is reached
sql/ha_partition.cc:
Bug#46922: crash when adding partitions and open_files_limit
is reached
When cleaning up, the already locked partitions need to be unlocked,
and must not be unlocked/closed again after the cleanup.
can lead to bad memory access
Problem: Field_bit is the only field which returns INT_RESULT
and doesn't have an unsigned flag. As it's not a descendant of
Field_num, using ((Field_num *) field_bit)->unsigned_flag may lead
to unpredictable results.
Fix: check the field type before casting.
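A minimal sketch of that check (hypothetical, simplified types):
  /* Sketch only: check the field type before downcasting; BIT is INT_RESULT
     but is not a Field_num descendant and is always unsigned. */
  enum Field_type_sketch { TYPE_LONG_SKETCH, TYPE_BIT_SKETCH };

  struct Field_sketch     { Field_type_sketch type; };
  struct Field_num_sketch : Field_sketch { bool unsigned_flag; };

  bool field_is_unsigned_sketch(const Field_sketch *field)
  {
    if (field->type == TYPE_BIT_SKETCH)
      return true;                              /* unsigned by nature */
    return static_cast<const Field_num_sketch*>(field)->unsigned_flag;
  }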
mysql-test/r/type_bit.result:
Fix for bug #42803: Field_bit does not have unsigned_flag field,
can lead to bad memory access
- test result.
mysql-test/t/type_bit.test:
Fix for bug #42803: Field_bit does not have unsigned_flag field,
can lead to bad memory access
- test case.
sql/opt_range.cc:
Fix for bug #42803: Field_bit does not have unsigned_flag field,
can lead to bad memory access
- don't cast to (Field_num *) Field_bit, as it's not a Field_num
descendant and is always unsigned by nature.
buffering is used
FORCE INDEX FOR ORDER BY now prevents the optimizer from
using join buffering. As a result the optimizer can use
indexed access on the first table and doesn't need to
sort the complete resultset at the end of the statement.
Mask part of EXPLAIN output with '#' to account for varying row count estimation.
mysql-test/include/mix1.inc:
Mask 'rows' column in EXPLAIN output (number varies sometimes between 1 and 2).
mysql-test/r/innodb_mysql.result:
Update result file after masking of rows estimation in EXPLAIN output.
1. BUG#46000 - using index called GEN_CLUST_INDEX crashes server
Detailed revision comments:
r5895 | jyang | 2009-09-15 03:39:21 +0300 (Tue, 15 Sep 2009) | 5 lines
branches/5.1: Disallow creating index with the name of
"GEN_CLUST_INDEX" which is reserved for the default system
primary index. (Bug #46000) rb://149 approved by Marko Makela.
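A hedged sketch of such a reserved-name check (hypothetical helper, not the actual InnoDB code):
  #include <ctype.h>

  /* Sketch only: reject user-created indexes whose name matches the
     reserved default clustered index name, case-insensitively. */
  bool index_name_is_reserved_sketch(const char *name)
  {
    const char *reserved= "GEN_CLUST_INDEX";
    for (; *name && *reserved; name++, reserved++)
      if (toupper((unsigned char) *name) != *reserved)
        return false;
    return *name == '\0' && *reserved == '\0';
  }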
SELECT ... WHERE ... IN (NULL, ...) does full table scan,
even if the same query without the NULL uses efficient range scan.
The bugfix for bug 18360 introduced an optimization:
if
1) all right-hand arguments of the IN function are constants
2) result types of all right argument items are compatible
enough to use the same single comparison function to
compare all of them to the left argument,
then
we can convert the right-hand list of constant items to an array
of equally-typed constant values for the further
QUICK index access etc. (see Item_func_in::fix_length_and_dec()).
The Item_null constant item objects have the STRING_RESULT
result type, so, since Item_func_in::fix_length_and_dec()
is aware of NULLs in the right-hand list, this improvement
efficiently optimizes IN function calls with a mixed right-hand
list of NULLs and string constants. However, the optimization
doesn't affect mixed lists of NULLs and integers, floats etc.,
because there is no unique common comparator.
A new optimization has been added to ignore the result type
of NULL constants in the static analysis of mixed right-hand lists.
This is safe, because at the execution phase we care about
presence of NULLs anyway.
1. The collect_cmp_types() function has been modified to optionally
ignore NULL constants in the item list.
2. NULL-skipping code of the Item_func_in::fix_length_and_dec()
function has been modified to work not only with in_string
vectors but with in_vectors of other types.
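A simplified sketch of change 1 above (hypothetical types; the actual code is in sql/item_cmpfunc.cc):
  #include <vector>
  #include <cstddef>

  enum Result_type_sketch { STRING_RESULT_S= 0, INT_RESULT_S, REAL_RESULT_S };

  struct Item_sketch
  {
    Result_type_sketch result_type;
    bool               is_null_constant;
  };

  /* Sketch only: collect the set of comparison types needed for the IN()
     arguments, optionally skipping NULL constants, since NULL handling is
     done at execution time anyway. */
  unsigned collect_cmp_types_sketch(const std::vector<Item_sketch> &args,
                                    bool skip_nulls)
  {
    unsigned found_types= 0;
    for (size_t i= 0; i < args.size(); i++)
    {
      if (skip_nulls && args[i].is_null_constant)
        continue;                /* a NULL does not force a comparator type */
      found_types|= 1U << args[i].result_type;
    }
    return found_types;
  }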
mysql-test/r/func_in.result:
Added test case for the bug #44139.
mysql-test/t/func_in.test:
Added test case for the bug #44139.
sql/item_cmpfunc.cc:
Bug #44139: Table scan when NULL appears in IN clause
1. The collect_cmp_types() function has been modified to optionally
ignore NULL constants in the item list.
2. NULL-skipping code of the Item_func_in::fix_length_and_dec()
function has been modified to work not only with in_string
vectors but with in_vectors of other types.
On Mac OS X or Windows, sending a SIGHUP to the server or an
asynchronous flush (triggered by flush_time) would cause the
server to crash.
The problem was that a hook used to detach client API handles
wasn't prepared to handle cases where the thread does not have
an associated session.
The solution is to verify whether the thread has an associated
session before trying to detach a handle.
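In essence (hypothetical sketch, not the actual slave.cc hook):
  #include <cstddef>

  struct Session_sketch { /* per-connection state */ };

  /* Sketch only: the detach hook must tolerate being called from a thread
     (signal handler or flush_time thread) that has no session object. */
  void detach_client_handles_sketch(Session_sketch *thd)
  {
    if (thd == NULL)
      return;                 /* no associated session: nothing to detach */
    /* ... detach the federated client API handles owned by this session ... */
  }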
mysql-test/r/federated_debug.result:
Add test case result for Bug#47525
mysql-test/t/federated_debug-master.opt:
Debug point.
mysql-test/t/federated_debug.test:
Add test case for Bug#47525
sql/slave.cc:
Check whether the thread has an associated session.
sql/sql_parse.cc:
Add debug code to simulate a reload without thread session.
The 'BEGIN/COMMIT/ROLLBACK' log events could be filtered out if the
database was not selected by the --database option of the mysqlbinlog
command. This can result in problems if some statements in the
transaction are not filtered out.
To fix the problem, mysqlbinlog now outputs 'BEGIN/ROLLBACK/COMMIT'
regardless of the database filtering rules.
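A hedged sketch of the filtering rule (hypothetical helper; the real change is in client/mysqlbinlog.cc):
  #include <string.h>

  /* Sketch only: transaction boundary statements are always printed, even
     when --database filtering would otherwise skip the event. */
  bool shall_skip_event_sketch(const char *query, bool database_filtered_out)
  {
    if (!strcmp(query, "BEGIN") ||
        !strcmp(query, "COMMIT") ||
        !strcmp(query, "ROLLBACK"))
      return false;                     /* never filter transaction markers */
    return database_filtered_out;       /* normal --database rule applies */
  }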
client/mysqlbinlog.cc:
Skip the database check for BEGIN/COMMIT/ROLLBACK log events.
mysql-test/r/mysqlbinlog.result:
Test result for bug#46998
mysql-test/suite/binlog/t/binlog_row_mysqlbinlog_db_filter.test:
The test case is updated due to the patch for bug#46998
mysql-test/t/mysqlbinlog.test:
Added a test to verify that 'BEGIN', 'COMMIT' and 'ROLLBACK' are output
regardless of database filtering
Backport from 6.0 to 5.1.
Only those sync points which are used in debug_sync.test are included.
The Debug Sync Facility allows placing synchronization points
in the code:
open_tables(...)
DEBUG_SYNC(thd, "after_open_tables");
lock_tables(...)
When activated, a sync point can
- Send a signal and/or
- Wait for a signal
Nomenclature:
- signal: A value of a global variable that persists
until overwritten by a new signal. The global
variable can also be seen as a "signal post"
or "flag mast". Then the signal is what is
attached to the "signal post" or "flag mast".
- send a signal: Assign the value (the signal) to the global
variable ("set a flag") and broadcast a
global condition to wake those waiting for
a signal.
- wait for a signal: Loop over waiting for the global condition until
the global value matches the wait-for signal.
Please find more information in the top comment in debug_sync.cc
or in the worklog entry.
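A conceptual sketch of the signalling scheme described above (simplified, hypothetical code; the real implementation is in sql/debug_sync.cc):
  #include <pthread.h>
  #include <string>

  static pthread_mutex_t sync_mutex= PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  sync_cond=  PTHREAD_COND_INITIALIZER;
  static std::string     sync_signal;            /* the "signal post" */

  /* Sketch only: assign the signal and wake up all waiters. */
  void send_signal_sketch(const std::string &sig)
  {
    pthread_mutex_lock(&sync_mutex);
    sync_signal= sig;
    pthread_cond_broadcast(&sync_cond);
    pthread_mutex_unlock(&sync_mutex);
  }

  /* Sketch only: loop until the global signal matches what we wait for. */
  void wait_for_signal_sketch(const std::string &sig)
  {
    pthread_mutex_lock(&sync_mutex);
    while (sync_signal != sig)
      pthread_cond_wait(&sync_cond, &sync_mutex);
    pthread_mutex_unlock(&sync_mutex);
  }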
.bzrignore:
WL#4259 - Debug Sync Facility
Added the symbolic link libmysqld/debug_sync.cc.
CMakeLists.txt:
WL#4259 - Debug Sync Facility
Added definition for ENABLED_DEBUG_SYNC.
configure.in:
WL#4259 - Debug Sync Facility
Added definition for ENABLED_DEBUG_SYNC.
include/my_sys.h:
WL#4259 - Debug Sync Facility
Added definition for the DEBUG_SYNC_C macro.
libmysqld/CMakeLists.txt:
WL#4259 - Debug Sync Facility
Added sql/debug_sync.cc.
libmysqld/Makefile.am:
WL#4259 - Debug Sync Facility
Added sql/debug_sync.cc.
mysql-test/include/have_debug_sync.inc:
WL#4259 - Debug Sync Facility
New include file.
mysql-test/mysql-test-run.pl:
WL#4259 - Debug Sync Facility
Added option --debug_sync_timeout.
mysql-test/r/debug_sync.result:
WL#4259 - Debug Sync Facility
New test result.
mysql-test/r/have_debug_sync.require:
WL#4259 - Debug Sync Facility
New require file.
mysql-test/t/debug_sync.test:
WL#4259 - Debug Sync Facility
New test file.
mysys/my_static.c:
WL#4259 - Debug Sync Facility
Added definition for debug_sync_C_callback_ptr.
mysys/thr_lock.c:
WL#4259 - Debug Sync Facility
Added sync point "wait_for_lock".
sql/CMakeLists.txt:
WL#4259 - Debug Sync Facility
Added debug_sync.cc and debug_sync.h.
sql/Makefile.am:
WL#4259 - Debug Sync Facility
Added debug_sync.cc and debug_sync.h.
sql/debug_sync.cc:
WL#4259 - Debug Sync Facility
New source file.
sql/debug_sync.h:
WL#4259 - Debug Sync Facility
New header file.
sql/mysqld.cc:
WL#4259 - Debug Sync Facility
Added opt_debug_sync_timeout.
Added calls to debug_sync_init() and debug_sync_end().
Fixed a purecov comment (unrelated).
sql/set_var.cc:
WL#4259 - Debug Sync Facility
Added server variable "debug_sync".
sql/set_var.h:
WL#4259 - Debug Sync Facility
Added declaration for server variable "debug_sync".
sql/share/errmsg.txt:
WL#4259 - Debug Sync Facility
Added error messages ER_DEBUG_SYNC_TIMEOUT and ER_DEBUG_SYNC_HIT_LIMIT.
sql/sql_base.cc:
WL#4259 - Debug Sync Facility
Added sync points "after_flush_unlock" and "before_lock_tables_takes_lock".
sql/sql_class.cc:
WL#4259 - Debug Sync Facility
Added initialization for debug_sync_control to THD::THD.
Added calls to debug_sync_init_thread() and debug_sync_end_thread().
sql/sql_class.h:
WL#4259 - Debug Sync Facility
Added element debug_sync_control to THD.
storage/myisam/myisamchk.c:
Fixed a typo in an error message string (unrelated).
The problem was that appending values to the end of an existing
ENUM or SET column was being treated as table data modification,
preventing an immediate (fast) table alteration that occurs when
only table metadata is being modified.
The cause was twofold: adding enumeration or set members to the
end of the list of valid member values was not being considered
a "compatible" table alteration, and for SET columns, the check
was being done on the max display length and not the underlying
(pack) length of the field.
The solution is to augment the function that checks whether two ENUM
or SET fields are compatible -- by comparing the pack lengths and
performing a limited comparison of the member values.
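A hedged sketch of such a compatibility test (hypothetical helper, not the actual Field::is_equal code):
  #include <cstddef>
  #include <string>
  #include <vector>

  /* Sketch only: the alteration is metadata-only if the pack length is
     unchanged and every old member is still present, unchanged and in the
     same position, i.e. new members were only appended at the end. */
  bool enum_alter_is_metadata_only_sketch(const std::vector<std::string> &old_members,
                                          const std::vector<std::string> &new_members,
                                          unsigned old_pack_length,
                                          unsigned new_pack_length)
  {
    if (old_pack_length != new_pack_length ||
        new_members.size() < old_members.size())
      return false;
    for (size_t i= 0; i < old_members.size(); i++)
      if (old_members[i] != new_members[i])
        return false;                  /* a member changed or was reordered */
    return true;
  }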
mysql-test/r/alter_table.result:
Add test case result for Bug#45567
mysql-test/t/alter_table.test:
Add test case for Bug#45567
sql/field.cc:
Check whether two fields can be considered 'equal' for table
alteration purposes. Fields are equal if they retain the same
pack length and if new members are added to the end of the list.
sql/field.h:
Add comment and remove method.
The bug is not related to MERGE tables or TRIGGERs. A more accurate
description would be 'assertion on multi-table UPDATE + NATURAL JOIN
+ MERGEABLE VIEW'.
On the PREPARE stage (see test case) we call the mark_common_columns()
function, which creates the ON condition for NATURAL JOIN and sets the
appropriate table read_set bitmaps for fields which are used in the ON
condition.
On the EXECUTE stage mark_common_columns() is not called; we set the
necessary read_set bitmaps in setup_conds(). But the 'B.f1' field is
already processed and the related item already fixed before
setup_conds() as an updated field, so setup_conds() can not set the
read_set bitmap because of that.
The fix is to set the read_set bitmap for the appropriate table field
even if the Item_direct_view_ref item which represents a reference to
this field is already fixed.
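In outline (hypothetical, heavily simplified; the real change is in sql/item.cc):
  /* Sketch only: mark the underlying table field as read even when the
     Item_direct_view_ref wrapping it was already fixed during PREPARE,
     so the read_set is correct again on every EXECUTE. */
  struct Table_sketch { unsigned long long read_set; };

  struct View_ref_sketch
  {
    Table_sketch *table;
    unsigned      field_index;
    bool          fixed;          /* already resolved during PREPARE */
  };

  void mark_field_read_sketch(View_ref_sketch *ref)
  {
    /* Do not bail out just because ref->fixed is already true. */
    ref->table->read_set|= 1ULL << ref->field_index;
  }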
mysql-test/r/join.result:
test result
mysql-test/t/join.test:
test case
sql/item.cc:
The bug is not related to MERGE tables or TRIGGERs. A more accurate
description would be 'assertion on multi-table UPDATE + NATURAL JOIN
+ MERGEABLE VIEW'.
On the PREPARE stage (see test case) we call the mark_common_columns()
function, which creates the ON condition for NATURAL JOIN and sets the
appropriate table read_set bitmaps for fields which are used in the ON
condition.
On the EXECUTE stage mark_common_columns() is not called; we set the
necessary read_set bitmaps in setup_conds(). But the 'B.f1' field is
already processed and the related item already fixed before
setup_conds() as an updated field, so setup_conds() can not set the
read_set bitmap because of that.
The fix is to set the read_set bitmap for the appropriate table field
even if the Item_direct_view_ref item which represents a reference to
this field is already fixed.
"load data" statements were written to the binlog as a mix of the original statement
and bits recreated from parse-info. This relied on implementation details and broke
with IGNORE_SPACES and versioned comments.
We now completely resynthesize the query for LOAD DATA for binlog (which among other
things normalizes them somewhat with regard to case, spaces, etc.).
We have already parsed the query properly, so we make use of that rather
than mix-and-match string literals and parsed items.
This should make us safe with regard to versioned comments, even those
spanning multiple tokens. Also no longer affected by IGNORE_SPACES.
mysql-test/r/mysqlbinlog.result:
LOAD DATA INFILE normalized
mysql-test/suite/binlog/r/binlog_killed_simulate.result:
LOAD DATA INFILE normalized
mysql-test/suite/binlog/r/binlog_row_mix_innodb_myisam.result:
LOAD DATA INFILE normalized
mysql-test/suite/binlog/r/binlog_stm_blackhole.result:
LOAD DATA INFILE normalized
mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result:
LOAD DATA INFILE normalized
mysql-test/suite/rpl/r/rpl_innodb_mixed_dml.result:
LOAD DATA INFILE normalized
mysql-test/suite/rpl/r/rpl_loaddata.result:
LOAD DATA INFILE normalized
mysql-test/suite/rpl/r/rpl_loaddata_fatal.result:
LOAD DATA INFILE normalized; offsets adjusted to reflect that
mysql-test/suite/rpl/r/rpl_loaddata_map.result:
LOAD DATA INFILE normalized
mysql-test/suite/rpl/r/rpl_loaddatalocal.result:
test for #43746 - trying to break LOAD DATA part of parser
mysql-test/suite/rpl/r/rpl_stm_log.result:
LOAD DATA INFILE normalized
mysql-test/suite/rpl/t/rpl_loaddatalocal.test:
try to break the LOAD DATA part of the parser (test for #43746)
mysql-test/t/mysqlbinlog.test:
LOAD DATA INFILE normalized; adjust offsets to reflect that
sql/log_event.cc:
clean up Load_log_event::print_query and friends so they don't print
excess spaces. add support for printing charset names to print_query.
sql/log_event.h:
We already have three places where we synthesize LOAD DATA queries.
Better use one of those!
sql/sql_lex.h:
When binlogging LOAD DATA statements, we make up the statement to
be logged (from the parse-info, rather than substrings of the
original query) now. Consequently, we no longer need (string-)
pointers into the original query.
sql/sql_load.cc:
Completely rewrote write_execute_load_query_log_event() to synthesize the
LOAD DATA statement wholesale, rather than piece it together from
synthesized bits and literal excerpts from the original query. This
will not only give us a nice, normalized statement (all uppercase,
no excess spaces, etc.), it will also handle comments, including
versioned comments right, which is certainly more than we can say
about the previous incarnation.
sql/sql_yacc.yy:
We're no longer assembling LOAD DATA statements from bodyparts of the
original query, so some bookkeeping in the parser can go.
view definition
During SHOW CREATE VIEW there is no reason to 'anonymize'
errors that name objects that a user does not have access
to. Moreover it was inconsistently implemented. For example
base tables being referenced from a view appear to be ok,
but not views. The manual on the other hand is clear: If a
user has the privileges SELECT and SHOW VIEW, the view
definition is available to that user, period. The fix
changes the behavior to support the manual.
mysql-test/r/information_schema_db.result:
Bug#35996: Changed warnings.
mysql-test/r/view_grant.result:
Bug#35996: Changed warnings, test result.
mysql-test/t/information_schema_db.test:
Bug#35996: Changed test case to reflect new behavior.
mysql-test/t/view_grant.test:
Bug#35996: Test case.
sql/sql_acl.cc:
Bug#35996: Code no longer necessary, we may as well exempt
SHOW CREATE VIEW from this check.
sql/sql_show.cc:
Bug#35996: The fix: An Internal_error_handler that hides
most errors raised by access checking as they are not
relevant to SHOW CREATE VIEW.
sql/table.cc:
Bug#35996: Restricting this hack to act only when there is
no Internal_error_handler.
trigger, merge table
The problem with break statements is that they have very
local effects. Hence a break statement within the inner loop
of a nested-loops join caused execution to proceed to the
next table even though a serious error occurred. The problem
was fixed by breaking out the inner loop into its own
method. The change empowers all errors to terminate the
execution.
The errors that will now halt multi-DELETE execution
altogether are
- triggers returning errors
- handler errors
- server being killed
mysql-test/r/delete.result:
Bug#46958: Test result.
mysql-test/t/delete.test:
Bug#46958: Test case.
sql/sql_class.h:
Bug#46958: New method declaration.
sql/sql_delete.cc:
Bug#46958: New method implementation.
Invalid (old?) table or database name in logs
The problem was still not completely fixed, due to
quoting.
This is the server-side-only fix (in explain_filename); the change
from filename_to_tablename to explain_filename in the InnoDB code
must be done before the bug is fully fixed.
mysql-test/include/have_not_innodb_plugin.inc:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
Added include file to allow test for only the
'old' built-in innodb engine
mysql-test/r/not_true.require:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
Added require to match 'not' TRUE
mysql-test/r/partition_innodb_builtin.result:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
New result file for partitioning specific to
the 'old' built-in innodb engine
mysql-test/r/partition_innodb_plugin.result:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
New result file for partitioning specific to
the new plugin innodb engine
mysql-test/t/disabled.def:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
Disabling the new test until the fix is
included in the InnoDB source too.
mysql-test/t/partition_innodb_builtin.test:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
New test file for partitioning specific to
the 'old' built-in innodb engine
mysql-test/t/partition_innodb_plugin.test:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
New test file for partitioning specific to
the new plugin innodb engine
sql/mysql_priv.h:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
Added thd as a parameter to explain_filename
to be able to use the correct quote character
sql/sql_table.cc:
Bug#32430: 'show innodb status' causes errors
Invalid (old?) table or database name in logs
Changed explain_filename so that it does quoting
correctly according to the session's quote character.
value from the index (PRIMARY)
With the fix for BUG#46760, we correctly flag the presence of row_type
only when it is actually changed, which enables the fast ALTER TABLE
that was disabled by BUG#39200.
As a result, the changes made by BUG#46760 cause the MySQL data
dictionaries to go out of sync, but this is already handled by InnoDB
as part of BUG#44030.
The test was originally written to handle this, but we asked InnoDB to
update the test because the data dictionaries were in sync after the
fix for BUG#39200.
Adjusting the innodb-autoinc test case as mentioned in the comments.
mysql-test/lib/mtr_cases.pm:
Re-enable the innodb-autoinc test case for plugin as we have a common
result file.
mysql-test/r/innodb-autoinc.result:
Additional Fix for BUG#44030 - Error: (1500) Couldn't read the MAX(ID) autoinc
value from the index (PRIMARY)
Adjust the innodb-autoinc test case, as the patch for BUG#46760 enables
fast ALTER TABLE and makes the data dictionaries go out of sync. This is
expected in the test case.
mysql-test/t/innodb-autoinc.test:
Additional Fix for BUG#44030 - Error: (1500) Couldn't read the MAX(ID) autoinc
value from the index (PRIMARY)
Adjust the innodb-autoinc test case, as the patch for BUG#46760 enables
fast ALTER TABLE and makes the data dictionaries go out of sync. This is
expected in the test case.
The "socket" variable is not available on Windows even though
the --socket option can be used to specify the pipe name for
local connections that use a named pipe.
The solution is to ensure that the variable is always defined.
mysql-test/r/windows.result:
Add test case result for Bug#45498
mysql-test/t/windows.test:
Add test case for Bug#45498
sql/set_var.cc:
socket variable must always be present.
checksum)"
The problem was that the checksum of the GEOMETRY type used memory
addresses in the computation, making it non-repeatable and thus useless.
(This patch is a backport from 6.0 branch)
mysql-test/r/myisam.result:
test case for bug35570 that same tables give same checksums
mysql-test/t/myisam.test:
test case for bug35570 that same tables give same checksums
sql/sql_table.cc:
Type GEOMETRY is implemented on top of type BLOB, so, just like for BLOB,
its 'field' contains pointers which make no sense to include in the
checksum; instead, the value has to be converted to a string first, and
then the checksum can be computed.
Despite copying the value of the old table's row type,
we don't always have to mark the row type as being specified.
InnoDB uses this to check whether it can do fast ALTER TABLE
or not.
Fixed by correctly flagging the presence of row_type
only when it's actually changed.
Added a test case for 39200.