grants are reapplied.
After renaming a user, re-applying grants results in additional
grants.
This is because the username is used as part of the hash key for the
GRANT_TABLE structure. When the user is renamed, only the stored
username is changed; the hash key still contains the old user name,
which results in the extra privileges.
Fixed by rebuilding the hash key and updating the column_priv_hash
structure when the user is renamed.
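A minimal SQL sketch of the affected scenario (database, table, column and user names are hypothetical):
  CREATE DATABASE db1;
  CREATE TABLE db1.t1 (a INT);
  CREATE USER 'joe'@'localhost';
  GRANT SELECT (a) ON db1.t1 TO 'joe'@'localhost';
  RENAME USER 'joe'@'localhost' TO 'moe'@'localhost';
  -- Re-applying the grant to the renamed user used to add an extra
  -- entry instead of updating the existing one:
  GRANT SELECT (a) ON db1.t1 TO 'moe'@'localhost';
  SHOW GRANTS FOR 'moe'@'localhost';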
mysql-test/r/grant3.result:
Bug #41597 - After rename of user, there are additional grants when
grants are reapplied.
Testcase for BUG#41597
mysql-test/t/grant3.test:
Bug #41597 - After rename of user, there are additional grants when
grants are reapplied.
Testcase for BUG#41597
sql/sql_acl.cc:
Bug #41597 - After rename of user, there are additional grants when
grants are reapplied.
Fixed handle_grant_struct() to update the hash key when the user is renamed.
Added the set_user_details() method to the GRANT_NAME class
Problem: the "caseinfo" member of CHARSET_INFO structure was not
initialized for user-defined Unicode collations, which made the
server crash.
Fix: initializing caseinfo properly.
Using the @@session or @@global prefix with a variable declared
in a stored procedure would lead to a server crash.
The reason was that during the parsing of the
syntactic rule 'option_value' an uninitialized
set_var object was pushed to the parameter stack
of the SET statement. The parent rule
'option_type_value' interpreted the existence of
variables on the parameter stack as an assignment
and wrapped it in a sp_instr_set object.
When the procedure was later executed, an attempt
was made to run the 'check()' method on an
uninitialized member (a NULL value) belonging
to the previously created but uninitialized set_var object.
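A rough sketch of the kind of statement that triggered the crash (procedure and variable names are hypothetical):
  DELIMITER //
  CREATE PROCEDURE p1()
  BEGIN
    DECLARE v INT DEFAULT 0;
    -- Prefixing the SP local variable with @@session (or @@global)
    -- left an uninitialized set_var on the parameter stack:
    SET @@session.v = 1;
  END //
  DELIMITER ;
  CALL p1();  -- used to crash instead of returning an error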
sql/sql_yacc.yy:
* Assign the option_type at once since it is needed by the next
parsing rule (internal_variable_name)
* Rearranged the if statement to reduce negations and gain more
clarity of code.
* Added a check for option_type to better detect whether the current
variable is an SP local variable or a system variable.
Problem: using an all-zero microsecond part in a WHERE condition
(e.g. WHERE date_time_field <= "YYYY-MM-DD HH:MM:SS.0000")
may lead to wrong results due to improper DATETIME
comparison in some cases.
Fix: when comparing DATETIMEs as strings, we must trim trailing 0's
in such cases.
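A minimal sketch of the affected query shape (table and column names are hypothetical):
  CREATE TABLE t1 (d DATETIME, KEY (d)) ENGINE=InnoDB;
  INSERT INTO t1 VALUES ('2009-01-01 00:00:00'), ('2009-01-02 00:00:00');
  -- Without trimming, the constant with the trailing ".0000" did not
  -- compare equal to '2009-01-02 00:00:00' as a string, which could
  -- give wrong results when the index on d was used:
  SELECT * FROM t1 WHERE d <= '2009-01-02 00:00:00.0000';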
mysql-test/r/innodb_mysql.result:
Fix for bug#47963: Wrong results when index is used
- test result.
mysql-test/t/innodb_mysql.test:
Fix for bug#47963: Wrong results when index is used
- test case.
sql/item.cc:
Fix for bug#47963: Wrong results when index is used
- comparing DATETIMEs as strings we must trim trailing 0's in the
microsecond part to ensure
'YYYY-MM-DD HH:MM:SS.000' == 'YYYY-MM-DD HH:MM:SS'
The problem was in incorrect handling of predicates involving
NULL as a constant value by the range optimizer.
For example, when creating a SEL_ARG node from a condition of
the form "field < const" (which would normally result in the
"NULL < field < const" SEL_ARG), the special case when "const"
is NULL was not taken into account, so "NULL < field < NULL"
was produced for the "field < NULL" condition.
As a result, SEL_ARG structures of this form could not be
further optimized which in turn could lead to incorrectly
constructed SEL_ARG trees. In particular, code assuming SEL_ARG
structures to always form a sequence of ordered disjoint
intervals could enter an infinite loop under some
circumstances.
Fixed by changing get_mm_leaf() so that for any sargable
predicate except "<=>" involving NULL as a constant, "empty"
SEL_ARG is returned, since such a predicate is always false.
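A minimal sketch of the kind of predicate involved (table, column and data are hypothetical):
  CREATE TABLE t1 (a INT, KEY (a));
  INSERT INTO t1 VALUES (1), (2), (3);
  -- "a < NULL" is always false; get_mm_leaf() now returns an "empty"
  -- SEL_ARG for it instead of the bogus "NULL < a < NULL" interval:
  SELECT * FROM t1 WHERE a < NULL OR a BETWEEN 1 AND 2;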
mysql-test/r/partition_pruning.result:
Fixed a broken test case.
mysql-test/r/range.result:
Added a test case for bug #47123.
mysql-test/r/subselect.result:
Fixed broken test cases.
mysql-test/t/range.test:
Added a test case for bug #47123.
sql/opt_range.cc:
Fixed get_mm_leaf() so that for any sargable
predicate except "<=>" involving NULL as a constant, "empty"
SEL_ARG is returned, since such a predicate is always false.
view that has Group By
When SELECT'ing from a view that mentions another,
materialized, view, access was being denied. The issue was
resolved by lifting a special case which avoided such access
checking in check_single_table_access. In the past, this was
necessary since if such a check were performed, the error
message would be downgraded to a warning in the case of SHOW
CREATE VIEW. The downgrading of errors was meant to handle
only that scenario, but could not distinguish the two as it
read only the error messages.
The special case was needed in the fix of bug no 36086.
Before that, views were confused with derived tables.
After bug no 35996 was fixed, the manipulation of errors
during SHOW CREATE VIEW execution is not dependent on the
actual error messages in the queue, it rather looks at the
actual cause of the error and takes appropriate
action. Hence the aforementioned special case is now
superfluous and the bug is fixed.
mysql-test/r/view_grant.result:
Bug#46019: Test result.
mysql-test/t/view_grant.test:
Bug#46019: Test case.
sql/sql_parse.cc:
Bug#46019: fix.
Simplified the myisam_crash_before_flush_keys test and made it more
deterministic.
mysql-test/r/myisam_crash_before_flush_keys.result:
We don't need myisamchk to test this bug fix. CHECK TABLE
after server restart is enough to ensure that the fix
works.
mysql-test/t/myisam_crash_before_flush_keys.test:
We don't need myisamchk to test this bug fix. CHECK TABLE
after server restart is enough to ensure that the fix
works.
columns without where/group
Simple SELECT with implicit grouping used to return many rows if
the query was ordered by the aggregated column in the SELECT
list. This was incorrect because queries with implicit grouping
should only return a single record.
The problem was that when JOIN::exec() decided if execution needed
to handle grouping, it was assumed that sum_func_count==0 meant
that there were no aggregate functions in the query. This
assumption was not correct in JOIN::exec() because the aggregate
functions might have been optimized away during JOIN::optimize().
The reason why queries without ordering behaved correctly was
that sum_func_count is only recalculated if the optimizer chooses
to use temporary tables (which it does in the ordered case).
Hence, non-ordered queries were correctly treated as grouped.
The fix for this bug was to remove the assumption that
sum_func_count==0 means that there is no need for grouping. This
was done by introducing variable "bool implicit_grouping" in the
JOIN object.
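A minimal sketch of an implicitly grouped query of the affected shape (table, column and data are hypothetical):
  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 10), (2, 20), (3, 30);
  -- No GROUP BY, but an aggregate in the select list: the query must
  -- return exactly one row, even when ordered by the aggregate:
  SELECT MAX(b) FROM t1 ORDER BY MAX(b);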
mysql-test/r/func_group.result:
Add test for BUG#47280
mysql-test/t/func_group.test:
Add test for BUG#47280
sql/opt_sum.cc:
Improve comment for opt_sum_query()
sql/sql_class.h:
Add comment for variables in TMP_TABLE_PARAM
sql/sql_select.cc:
Introduce and use variable implicit_grouping instead of (!group_list && sum_func_count) in places that need to test if grouping is required. Also added comments for: optimization of aggregate fields for implicitly grouped queries (JOIN::optimize) and choice of end_select method (JOIN::execute)
sql/sql_select.h:
Add variable implicit_grouping, which will be TRUE for queries that contain aggregate functions but no GROUP BY clause. Also added comment to sort_and_group variable.
The problem was in incorrect handling of predicates involving
NULL as a constant value by the range optimizer.
For example, when creating a SEL_ARG node from a condition of
the form "field < const" (which would normally result in the
"NULL < field < const" SEL_ARG), the special case when "const"
is NULL was not taken into account, so "NULL < field < NULL"
was produced for the "field < NULL" condition.
As a result, SEL_ARG structures of this form could not be
further optimized which in turn could lead to incorrectly
constructed SEL_ARG trees. In particular, code assuming SEL_ARG
structures to always form a sequence of ordered disjoint
intervals could enter an infinite loop under some
circumstances.
Fixed by changing get_mm_leaf() so that for any sargable
predicate except "<=>" involving NULL as a constant, "empty"
SEL_ARG is returned, since such a predicate is always false.
mysql-test/r/range.result:
Added a test case for bug #47123.
mysql-test/t/range.test:
Added a test case for bug #47123.
sql/opt_range.cc:
Fixed get_mm_leaf() so that for any sargable
predicate except "<=>" involving NULL as a constant, "empty"
SEL_ARG is returned, since such a predicate is always false.
Problem: using an all-zero microsecond part (e.g. "YYYY-MM-DD HH:MM:SS.0000")
in a WHERE condition may lead to wrong results due to improper
DATETIME comparison in some cases.
Fix: as we compare DATETIMEs as strings, we must trim trailing 0's
in such cases.
mysql-test/r/innodb_mysql.result:
Fix for bug#47963: Wrong results when index is used
- test result.
mysql-test/t/innodb_mysql.test:
Fix for bug#47963: Wrong results when index is used
- test case.
sql/item.cc:
Fix for bug#47963: Wrong results when index is used
- when comparing DATETIMEs as strings, trim trailing 0's in the
microsecond part.
In MySQL, when the mapping for space is changed to something other than
0x20 by defining a different collation, space is not ignored when
comparing two strings.
This was happening because the function that compares two strings
while ignoring trailing spaces compared the collation value of a space
with the ASCII value of the ' ' character. This was changed to compare
the collated values instead.
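A rough sketch of the expected behaviour, assuming a user-defined latin1-based collation (the name latin1_test_cs is hypothetical) that maps space to a weight other than 0x20:
  -- Trailing spaces must still be ignored in the comparison:
  SELECT _latin1 'abc' COLLATE latin1_test_cs = _latin1 'abc   ' AS cmp;
  -- Expected: cmp = 1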
mysql-test/r/ctype_ldml.result:
Bug#46448 trailing spaces are not ignored when user collation maps space != 0x20
Result file for test case.
mysql-test/std_data/Index.xml:
Bug#46448 trailing spaces are not ignored when user collation maps space != 0x20
Added entry for new test collation in the index file.
mysql-test/std_data/latin1.xml:
Bug#46448 trailing spaces are not ignored when user collation maps space != 0x20
Added support for new collation for test.
mysql-test/t/ctype_ldml.test:
Bug#46448 trailing spaces are not ignored when user collation maps space != 0x20
Added test case to ensure trailing spaces are not ignored.
strings/ctype-simple.c:
Bug#46448 trailing spaces are not ignored when user collation maps space != 0x20
Changed my_strnncollsp_simple() to compare collated values when checking
for trailing spaces. Previously the comparison was between a collated
value and the ASCII value.
low myisam_sort_buffer_size
Repair by sort (default) or parallel repair of a MyISAM table
(whether partitioned or not), as well as bulk inserts and enable
indexes, sometimes did not fall back to repair with the key cache.
The problem was that after an unsuccessful attempt the data file was
closed, whereas repair with the key cache requires an open data file.
Fixed by reopening the data file.
Also fixed a valgrind warning that may appear during repair by sort
or parallel repair, depending heavily on the combination of
myisam_sort_buffer_size, number of rows and index entry length.
mysql-test/r/myisam.result:
A test case for BUG#47073.
mysql-test/t/myisam.test:
A test case for BUG#47073.
storage/myisam/ha_myisam.cc:
Reverted fix for BUG25289. Not needed anymore.
storage/myisam/mi_check.c:
Reopen data file, when repair by sort or parallel repair
fails.
When repair by sort is requested to rebuild the data file, the data
file gets rebuilt while fixing the first index. When the rebuild is
completed, info->dfile points to the temporary data file and the
original data file is closed.
It may happen that repair has successfully fixed the first index and
rebuilt the data file, but failed to fix the second index, e.g. because
myisam_sort_buffer_size was big enough to fix the first, shorter index,
but not the subsequent, longer one.
In this case we end up with info->dfile pointing to the temporary file,
which is then removed, and info->dfile is set to -1.
Though repair by sort failed, the upper layer may still want to try
repair with the key cache. But that requires info->dfile to point to a
valid data file.
storage/myisam/sort.c:
When performing a copy of IO_CACHE structure, current_pos and
current_end must be updated separatly to point to memory we're
copying to (not to memory we're copying from).
As t_file2 is always WRITE cache, proper members are write_pos
and write_end accordingly.
covering index
When two range predicates were combined under an OR
predicate, the algorithm tried to merge overlapping ranges
into one. But the case when a range overlapped several other
ranges was not handled. This led to
1) ranges overlapping, which gave repeated results, and
2) a range that overlapped several other ranges being cut off.
Fixed by
1) making sure that a range gets an upper bound equal to the
next range with a greater minimum, and
2) removing a continue statement.
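A rough sketch of the kind of query affected, reading only columns of a covering index (table, column and data are hypothetical):
  CREATE TABLE t1 (a INT, b INT, KEY (a, b));
  INSERT INTO t1 VALUES (1,1), (2,2), (3,3), (4,4), (5,5);
  -- One range (1 <= a <= 5) overlaps several other ranges; merged
  -- ranges used to overlap (duplicate rows) or get cut off (lost rows):
  SELECT a, b FROM t1
  WHERE (a BETWEEN 1 AND 5) OR (a = 2 AND b = 2) OR (a = 4 AND b = 4);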
mysql-test/r/group_min_max.result:
Bug#42846: Changed query plans
mysql-test/r/range.result:
Bug#42846: Test result.
mysql-test/t/range.test:
Bug#42846: Test case.
sql/opt_range.cc:
Bug#42846: The fix.
Part1: Previously, both endpoints from key2 were copied,
which is not safe. Since ranges are processed in ascending
order of minimum endpoints, it is safe to copy the minimum
endpoint from key2 but not the maximum. The maximum may only
be copied if there is no other range or the other range's
minimum is greater than key2's maximum.
Backport of the fix for bug#44059 from mysql-pe to mysql-5.1-bugteam.
Use the partition with the most rows, instead of the first partition,
to estimate the cardinality of indexes.
mysql-test/r/partition.result:
Bug#44059: Incorrect cardinality of indexes on a partitioned table
Added test result
mysql-test/t/partition.test:
Bug#44059: Incorrect cardinality of indexes on a partitioned table
Added test case
sql/ha_partition.cc:
Bug#44059: Incorrect cardinality of indexes on a partitioned table
Check which partition has the most rows, and use that partition
for HA_STATUS_CONST instead of the first partition.
Bug#46922: crash when adding partitions and open_files_limit
is reached
The problem was bad error handling, leaving some new temporary
partitions locked and initialized and some not yet initialized
and locked, leading to a crash when trying to unlock the not
yet initialized and locked partitions.
The solution was to unlock the already locked partitions, and not
include any of the new temporary partitions in later unlocks.
mysql-test/r/partition_open_files_limit.result:
Bug#46922: crash when adding partitions and open_files_limit
is reached
New test result
mysql-test/t/partition_open_files_limit-master.opt:
Bug#46922: crash when adding partitions and open_files_limit
is reached
New test opt-file for testing when open_files_limit is reached
mysql-test/t/partition_open_files_limit.test:
Bug#46922: crash when adding partitions and open_files_limit
is reached
New test case testing when open_files_limit is reached
sql/ha_partition.cc:
Bug#46922: crash when adding partitions and open_files_limit
is reached
When cleaning up, the partitions that were already locked need to be
unlocked, and must not be unlocked/closed again after the cleanup.
Bug #42803: Field_bit does not have unsigned_flag field,
can lead to bad memory access
Problem: Field_bit is the only field which returns INT_RESULT
and doesn't have the unsigned flag. As it is not a descendant of
Field_num, using ((Field_num *) field_bit)->unsigned_flag may lead
to unpredictable results.
Fix: check the field type before casting.
mysql-test/r/type_bit.result:
Fix for bug #42803: Field_bit does not have unsigned_flag field,
can lead to bad memory access
- test result.
mysql-test/t/type_bit.test:
Fix for bug #42803: Field_bit does not have unsigned_flag field,
can lead to bad memory access
- test case.
sql/opt_range.cc:
Fix for bug #42803: Field_bit does not have unsigned_flag field,
can lead to bad memory access
- don't cast Field_bit to (Field_num *), as it's not a Field_num
descendant and is always unsigned by nature.
buffering is used
FORCE INDEX FOR ORDER BY now prevents the optimizer from
using join buffering. As a result, the optimizer can use
indexed access on the first table and doesn't need to
sort the complete result set at the end of the statement.
Mask part of EXPLAIN output with '#' to account for varying row count estimation.
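A rough sketch of the kind of statement this helps (tables, columns and the index name idx_c are hypothetical):
  SELECT t1.c, t2.d
  FROM t1 FORCE INDEX FOR ORDER BY (idx_c)
  JOIN t2 ON t2.t1_id = t1.id
  ORDER BY t1.c
  LIMIT 10;
  -- With join buffering disabled for t1, the optimizer can read t1 in
  -- index order and avoid sorting the complete result set.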
mysql-test/include/mix1.inc:
Mask 'rows' column in EXPLAIN output (number varies sometimes between 1 and 2).
mysql-test/r/innodb_mysql.result:
Update result file after masking of rows estimation in EXPLAIN output.
1. BUG#46000 - using index called GEN_CLUST_INDEX crashes server
Detailed revision comments:
r5895 | jyang | 2009-09-15 03:39:21 +0300 (Tue, 15 Sep 2009) | 5 lines
branches/5.1: Disallow creating index with the name of
"GEN_CLUST_INDEX" which is reserved for the default system
primary index. (Bug #46000) rb://149 approved by Marko Makela.
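Roughly the kind of statement that is now rejected (table and column names are hypothetical):
  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  -- "GEN_CLUST_INDEX" is reserved for InnoDB's implicit clustered
  -- index, so creating a user index with this name now fails:
  CREATE INDEX GEN_CLUST_INDEX ON t1 (a);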
SELECT ... WHERE ... IN (NULL, ...) does full table scan,
even if the same query without the NULL uses efficient range scan.
The bugfix for bug 18360 introduced an optimization:
if
1) all right-hand arguments of the IN function are constants
2) result types of all right argument items are compatible
enough to use the same single comparison function to
compare all of them to the left argument,
then
we can convert the right-hand list of constant items to an array
of equally-typed constant values for the further
QUICK index access etc. (see Item_func_in::fix_length_and_dec()).
The Item_null constant item objects have the STRING_RESULT
result type, so, because Item_func_in::fix_length_and_dec()
is aware of NULLs in the right-hand list, this improvement effectively
optimizes IN function calls with a mixed right-hand list of NULLs and
string constants. However, the optimization doesn't affect mixed
lists of NULLs and integers, floats etc., because there is no
unique common comparator.
A new optimization has been added to ignore the result type
of NULL constants in the static analysis of mixed right-hand lists.
This is safe, because at the execution phase we care about
presence of NULLs anyway.
1. The collect_cmp_types() function has been modified to optionally
ignore NULL constants in the item list.
2. NULL-skipping code of the Item_func_in::fix_length_and_dec()
function has been modified to work not only with in_string
vectors but with in_vectors of other types.
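A minimal sketch of a query that benefits (table, column and data are hypothetical):
  CREATE TABLE t1 (a INT, KEY (a));
  INSERT INTO t1 VALUES (1), (2), (3), (4), (5);
  -- The NULL constant (STRING_RESULT) used to break the common integer
  -- comparator and force a full table scan; its type is now ignored,
  -- so a range scan on the index can be used:
  EXPLAIN SELECT * FROM t1 WHERE a IN (NULL, 1, 2);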
mysql-test/r/func_in.result:
Added test case for the bug #44139.
mysql-test/t/func_in.test:
Added test case for the bug #44139.
sql/item_cmpfunc.cc:
Bug #44139: Table scan when NULL appears in IN clause
1. The collect_cmp_types() function has been modified to optionally
ignore NULL constants in the item list.
2. NULL-skipping code of the Item_func_in::fix_length_and_dec()
function has been modified to work not only with in_string
vectors but with in_vectors of other types.
On Mac OS X or Windows, sending a SIGHUP to the server or an
asynchronous flush (triggered by flush_time) would cause the
server to crash.
The problem was that a hook used to detach client API handles
wasn't prepared to handle cases where the thread does not have
an associated session.
The solution is to verify whether the thread has an associated
session before trying to detach a handle.
mysql-test/r/federated_debug.result:
Add test case result for Bug#47525
mysql-test/t/federated_debug-master.opt:
Debug point.
mysql-test/t/federated_debug.test:
Add test case for Bug#47525
sql/slave.cc:
Check whether the thread has an associated session.
sql/sql_parse.cc:
Add debug code to simulate a reload without thread session.
The 'BEGIN/COMMIT/ROLLBACK' log events could be filtered out if the
database is not selected by the --database option of the mysqlbinlog
command. This can result in a problem if some statements in the
transaction are not filtered out.
To fix the problem, mysqlbinlog now outputs 'BEGIN/ROLLBACK/COMMIT'
regardless of the database filtering rules.
client/mysqlbinlog.cc:
Skip the database check for BEGIN/COMMIT/ROLLBACK log events.
mysql-test/r/mysqlbinlog.result:
Test result for bug#46998
mysql-test/suite/binlog/t/binlog_row_mysqlbinlog_db_filter.test:
The test case is updated due to the patch for bug#46998
mysql-test/t/mysqlbinlog.test:
Added a test to verify that 'BEGIN', 'COMMIT' and 'ROLLBACK' are output
regardless of database filtering
The 'BEGIN/COMMIT/ROLLBACK' log events could be filtered out if the
database is not selected by the --database option of the mysqlbinlog
command. This can result in a problem if some statements in the
transaction are not filtered out.
To fix the problem, mysqlbinlog now outputs 'BEGIN/ROLLBACK/COMMIT'
regardless of the database filtering rules.
client/mysqlbinlog.cc:
Skip the database check for BEGIN/COMMIT/ROLLBACK log events.
mysql-test/r/mysqlbinlog.result:
Test result for bug#46998
mysql-test/t/mysqlbinlog.test:
Added a test to verify that 'BEGIN', 'COMMIT' and 'ROLLBACK' are output
regardless of database filtering
Backport from 6.0 to 5.1.
Only those sync points that are used in debug_sync.test are included.
The Debug Sync Facility allows placing synchronization points
in the code:
open_tables(...)
DEBUG_SYNC(thd, "after_open_tables");
lock_tables(...)
When activated, a sync point can
- Send a signal and/or
- Wait for a signal
Nomenclature:
- signal: A value of a global variable that persists
until overwritten by a new signal. The global
variable can also be seen as a "signal post"
or "flag mast". Then the signal is what is
attached to the "signal post" or "flag mast".
- send a signal: Assign the value (the signal) to the global
variable ("set a flag") and broadcast a
global condition to wake those waiting for
a signal.
- wait for a signal: Loop over waiting for the global condition until
the global value matches the wait-for signal.
Please find more information in the top comment in debug_sync.cc
or in the worklog entry.
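A rough sketch of how the "after_open_tables" sync point from the example above could be used from SQL (signal names are hypothetical; assumes a debug build with ENABLED_DEBUG_SYNC):
  -- Connection 1: pause at the sync point and wait:
  SET DEBUG_SYNC= 'after_open_tables SIGNAL opened WAIT_FOR flushed';
  SELECT * FROM t1;
  -- Connection 2: wait until connection 1 is paused, do whatever the
  -- test needs, then let it continue:
  SET DEBUG_SYNC= 'now WAIT_FOR opened';
  SET DEBUG_SYNC= 'now SIGNAL flushed';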
.bzrignore:
WL#4259 - Debug Sync Facility
Added the symbolic link libmysqld/debug_sync.cc.
CMakeLists.txt:
WL#4259 - Debug Sync Facility
Added definition for ENABLED_DEBUG_SYNC.
configure.in:
WL#4259 - Debug Sync Facility
Added definition for ENABLED_DEBUG_SYNC.
include/my_sys.h:
WL#4259 - Debug Sync Facility
Added definition for the DEBUG_SYNC_C macro.
libmysqld/CMakeLists.txt:
WL#4259 - Debug Sync Facility
Added sql/debug_sync.cc.
libmysqld/Makefile.am:
WL#4259 - Debug Sync Facility
Added sql/debug_sync.cc.
mysql-test/include/have_debug_sync.inc:
WL#4259 - Debug Sync Facility
New include file.
mysql-test/mysql-test-run.pl:
WL#4259 - Debug Sync Facility
Added option --debug_sync_timeout.
mysql-test/r/debug_sync.result:
WL#4259 - Debug Sync Facility
New test result.
mysql-test/r/have_debug_sync.require:
WL#4259 - Debug Sync Facility
New require file.
mysql-test/t/debug_sync.test:
WL#4259 - Debug Sync Facility
New test file.
mysys/my_static.c:
WL#4259 - Debug Sync Facility
Added definition for debug_sync_C_callback_ptr.
mysys/thr_lock.c:
WL#4259 - Debug Sync Facility
Added sync point "wait_for_lock".
sql/CMakeLists.txt:
WL#4259 - Debug Sync Facility
Added debug_sync.cc and debug_sync.h.
sql/Makefile.am:
WL#4259 - Debug Sync Facility
Added debug_sync.cc and debug_sync.h.
sql/debug_sync.cc:
WL#4259 - Debug Sync Facility
New source file.
sql/debug_sync.h:
WL#4259 - Debug Sync Facility
New header file.
sql/mysqld.cc:
WL#4259 - Debug Sync Facility
Added opt_debug_sync_timeout.
Added calls to debug_sync_init() and debug_sync_end().
Fixed a purecov comment (unrelated).
sql/set_var.cc:
WL#4259 - Debug Sync Facility
Added server variable "debug_sync".
sql/set_var.h:
WL#4259 - Debug Sync Facility
Added declaration for server variable "debug_sync".
sql/share/errmsg.txt:
WL#4259 - Debug Sync Facility
Added error messages ER_DEBUG_SYNC_TIMEOUT and ER_DEBUG_SYNC_HIT_LIMIT.
sql/sql_base.cc:
WL#4259 - Debug Sync Facility
Added sync points "after_flush_unlock" and "before_lock_tables_takes_lock".
sql/sql_class.cc:
WL#4259 - Debug Sync Facility
Added initialization for debug_sync_control to THD::THD.
Added calls to debug_sync_init_thread() and debug_sync_end_thread().
sql/sql_class.h:
WL#4259 - Debug Sync Facility
Added element debug_sync_control to THD.
storage/myisam/myisamchk.c:
Fixed a typo in an error message string (unrelated).
The problem was that appending values to the end of an existing
ENUM or SET column was being treated as table data modification,
preventing an immediate (fast) table alteration that occurs when
only table metadata is being modified.
The cause was twofold: adding enumeration or set members to the
end of the list of valid member values was not being considered
a "compatible" table alteration, and for SET columns, the check
was being done on the max display length and not the underlying
(pack) length of the field.
The solution is to augment the function that checks whether two ENUM
or SET fields are compatible -- by comparing the pack lengths and
performing a limited comparison of the member values.
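A minimal sketch of an ALTER that now only changes metadata (table, column and member names are hypothetical):
  CREATE TABLE t1 (c ENUM('a','b') NOT NULL);
  INSERT INTO t1 VALUES ('a'), ('b');
  -- Appending a member to the end of the list keeps the pack length
  -- unchanged, so the table data no longer needs to be copied:
  ALTER TABLE t1 MODIFY c ENUM('a','b','c') NOT NULL;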
mysql-test/r/alter_table.result:
Add test case result for Bug#45567
mysql-test/t/alter_table.test:
Add test case for Bug#45567
sql/field.cc:
Check whether two fields can be considered 'equal' for table
alteration purposes. Fields are equal if they retain the same
pack length and if new members are added to the end of the list.
sql/field.h:
Add comment and remove method.
The bug is not related to MERGE tables or TRIGGERs. A more correct
description would be 'assertion on multi-table UPDATE + NATURAL JOIN +
MERGEABLE VIEW'.
At the PREPARE stage (see the test case) we call the
mark_common_columns() function, which creates the ON condition for the
NATURAL JOIN and sets the appropriate table read_set bitmaps for the
fields used in the ON condition.
At the EXECUTE stage mark_common_columns() is not called; we set the
necessary read_set bitmaps in setup_conds(). But the 'B.f1' field is
already processed and the related item already fixed before
setup_conds(), as an updated field, so setup_conds() cannot set the
read_set bitmap because of that.
The fix is to set the read_set bitmap for the appropriate table field
even if the Item_direct_view_ref item which represents a reference to
this field is already fixed.
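A rough sketch of the statement shape from the description (table, view and column names are hypothetical):
  CREATE TABLE t1 (f1 INT, f2 INT);
  CREATE TABLE t2 (f1 INT, f3 INT);
  CREATE VIEW v1 AS SELECT f1, f3 FROM t2;  -- mergeable view
  PREPARE s FROM 'UPDATE t1 NATURAL JOIN v1 SET f2 = 1';
  -- Executing the prepared multi-table UPDATE over the NATURAL JOIN
  -- with the mergeable view used to hit the assertion:
  EXECUTE s;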
mysql-test/r/join.result:
test result
mysql-test/t/join.test:
test case
sql/item.cc:
The bug is not related to MERGE tables or TRIGGERs. A more correct
description would be 'assertion on multi-table UPDATE + NATURAL JOIN +
MERGEABLE VIEW'.
At the PREPARE stage (see the test case) we call the
mark_common_columns() function, which creates the ON condition for the
NATURAL JOIN and sets the appropriate table read_set bitmaps for the
fields used in the ON condition.
At the EXECUTE stage mark_common_columns() is not called; we set the
necessary read_set bitmaps in setup_conds(). But the 'B.f1' field is
already processed and the related item already fixed before
setup_conds(), as an updated field, so setup_conds() cannot set the
read_set bitmap because of that.
The fix is to set the read_set bitmap for the appropriate table field
even if the Item_direct_view_ref item which represents a reference to
this field is already fixed.
"load data" statements were written to the binlog as a mix of the original statement
and bits recreated from parse-info. This relied on implementation details and broke
with IGNORE_SPACES and versioned comments.
We now completely resynthesize the query for LOAD DATA for binlog (which among other
things normalizes them somewhat with regard to case, spaces, etc.).
We have already parsed the query properly, so we make use of that rather
than mix-and-match string literals and parsed items.
This should make us safe with regard to versioned comments, even those
spanning multiple tokens. Also no longer affected by IGNORE_SPACES.
mysql-test/r/mysqlbinlog.result:
LOAD DATA INFILE normalized
mysql-test/suite/binlog/r/binlog_killed_simulate.result:
LOAD DATA INFILE normalized
mysql-test/suite/binlog/r/binlog_row_mix_innodb_myisam.result:
LOAD DATA INFILE normalized
mysql-test/suite/binlog/r/binlog_stm_blackhole.result:
LOAD DATA INFILE normalized
mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result:
LOAD DATA INFILE normalized
mysql-test/suite/rpl/r/rpl_innodb_mixed_dml.result:
LOAD DATA INFILE normalized
mysql-test/suite/rpl/r/rpl_loaddata.result:
LOAD DATA INFILE normalized
mysql-test/suite/rpl/r/rpl_loaddata_fatal.result:
LOAD DATA INFILE normalized; offsets adjusted to reflect that
mysql-test/suite/rpl/r/rpl_loaddata_map.result:
LOAD DATA INFILE normalized
mysql-test/suite/rpl/r/rpl_loaddatalocal.result:
test for #43746 - trying to break LOAD DATA part of parser
mysql-test/suite/rpl/r/rpl_stm_log.result:
LOAD DATA INFILE normalized
mysql-test/suite/rpl/t/rpl_loaddatalocal.test:
try to break the LOAD DATA part of the parser (test for #43746)
mysql-test/t/mysqlbinlog.test:
LOAD DATA INFILE normalized; adjust offsets to reflect that
sql/log_event.cc:
clean up Load_log_event::print_query and friends so they don't print
excess spaces. add support for printing charset names to print_query.
sql/log_event.h:
We already have three places where we synthesize LOAD DATA queries.
Better use one of those!
sql/sql_lex.h:
When binlogging LOAD DATA statements, we make up the statement to
be logged (from the parse-info, rather than substrings of the
original query) now. Consequently, we no longer need (string-)
pointers into the original query.
sql/sql_load.cc:
Completely rewrote write_execute_load_query_log_event() to synthesize the
LOAD DATA statement wholesale, rather than piece it together from
synthesized bits and literal excerpts from the original query. This
will not only give us a nice, normalized statement (all uppercase,
no excess spaces, etc.), it will also handle comments, including
versioned comments right, which is certainly more than we can say
about the previous incarnation.
sql/sql_yacc.yy:
We're no longer assembling LOAD DATA statements from bodyparts of the
original query, so some bookkeeping in the parser can go.
view definition
During SHOW CREATE VIEW there is no reason to 'anonymize'
errors that name objects that a user does not have access
to. Moreover, it was inconsistently implemented: for example,
base tables referenced from a view appear to be OK,
but views do not. The manual, on the other hand, is clear: if a
user has the privileges SELECT and SHOW VIEW, the view
definition is available to that user, period. The fix
changes the behavior to support the manual.
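A brief sketch of the behaviour the manual describes (database, view and user names are hypothetical):
  GRANT SELECT, SHOW VIEW ON db1.v1 TO 'someuser'@'localhost';
  -- With just these two privileges, 'someuser' can now see the
  -- definition, even without access to the objects the view refers to:
  SHOW CREATE VIEW db1.v1;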
mysql-test/r/information_schema_db.result:
Bug#35996: Changed warnings.
mysql-test/r/view_grant.result:
Bug#35996: Changed warnings, test result.
mysql-test/t/information_schema_db.test:
Bug#35996: Changed test case to reflect new behavior.
mysql-test/t/view_grant.test:
Bug#35996: Test case.
sql/sql_acl.cc:
Bug#35996: Code no longer necessary, we may as well exempt
SHOW CREATE VIEW from this check.
sql/sql_show.cc:
Bug#35996: The fix: An Internal_error_handler that hides
most errors raised by access checking as they are not
relevant to SHOW CREATE VIEW.
sql/table.cc:
Bug#35996: Restricting this hack to act only when there is
no Internal_error_handler.
trigger, merge table
The problem with break statements is that they have very
local effects. Hence a break statement within the inner loop
of a nested-loops join caused execution to proceed to the
next table even though a serious error occurred. The problem
was fixed by breaking out the inner loop into its own
method. The change empowers all errors to terminate the
execution.
The errors that will now halt multi-DELETE execution
altogether are
- triggers returning errors
- handler errors
- server being killed
mysql-test/r/delete.result:
Bug#46958: Test result.
mysql-test/t/delete.test:
Bug#46958: Test case.
sql/sql_class.h:
Bug#46958: New method declaration.
sql/sql_delete.cc:
Bug#46958: New method implementation.