--Bug#52157 various crashes and assertions with multi-table update, stored function
--Bug#54475 improper error handling causes cascading crashing failures in innodb/ndb
--Bug#57703 create view cause Assertion failed: 0, file .\item_subselect.cc, line 846
--Bug#57352 valgrind warnings when creating view
--Recently discovered problem where a nested materialized derived table is used
before being populated, leading to an incorrect result
We have several modes in which subquery evaluation should be disabled,
and the reasons for disabling differ: the evaluation may be useless,
as in the case of 'CREATE VIEW' or 'PREPARE stmt'; the tables may not
be locked yet, as happens in bug#54475; or too early evaluation of
subqueries may lead to a wrong result, as it happened in Bug#19077.
The main problem is that if subquery items are treated as const,
they are evaluated in ::fix_fields() and ::fix_length_and_dec()
of the parent items, as a lot of these methods have
Item::val_...() calls inside.
We have to make subqueries non-const to prevent unnecessary
subquery evaluation. At the moment we have different methods
for this. Here is a list of these modes:
1. PREPARE stmt;
We use UNCACHEABLE_PREPARE flag.
It is set during parsing in sql_parse.cc, mysql_new_select() for
each SELECT_LEX object and cleared at the end of PREPARE in
sql_prepare.cc, init_stmt_after_parse(). If this flag is set,
the subquery becomes non-const and evaluation does not happen.
2. CREATE|ALTER VIEW, SHOW CREATE VIEW, I_S tables which
process FRM files
We use LEX::view_prepare_mode field. We set it before
view preparation and check this flag in
::fix_fields(), ::fix_length_and_dec().
Some bugs are fixed using this approach,
some are not (Bug#57352, Bug#57703). The problem here is
that we have a lot of ::fix_fields() and ::fix_length_and_dec()
methods where we use Item::val_...() calls for const items.
3. Derived tables with subquery = wrong result (Bug#19077)
The reason for this bug is too early subquery evaluation.
It was fixed by adding the Item::with_subselect field.
Checking this field in the appropriate places prevents
const item evaluation if the item has a subquery.
The fix for Bug#19077 fixes only the problem with the
convert_constant_item() function and does not cover
other places (::fix_fields(), ::fix_length_and_dec() again)
where subqueries could be evaluated.
Example:
CREATE TABLE t1 (i INT, j BIGINT);
INSERT INTO t1 VALUES (1, 2), (2, 2), (3, 2);
SELECT * FROM (SELECT MIN(i) FROM t1
WHERE j = SUBSTRING('12', (SELECT * FROM (SELECT MIN(j) FROM t1) t2))) t3;
DROP TABLE t1;
4. Derived tables with a subquery where the subquery
is evaluated before table locking (Bug#54475, Bug#52157)
The suggested solution is the following:
-Introduce a new field LEX::context_analysis_only with the following
possible flags:
#define CONTEXT_ANALYSIS_ONLY_PREPARE 1
#define CONTEXT_ANALYSIS_ONLY_VIEW 2
#define CONTEXT_ANALYSIS_ONLY_DERIVED 4
-Set/clear these flags when we perform a
context analysis operation
-Item_subselect::const_item() returns a
result depending on LEX::context_analysis_only.
If context_analysis_only is set then we return
FALSE, which means that the subquery is non-const.
As all subquery types are wrapped by Item_subselect,
this allows us to make a subquery non-const when
necessary.
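For illustration only, a minimal standalone sketch of the proposed check
(LEX and Item_subselect are replaced by simplified stand-ins; only the flag
names come from the proposal above):

  #include <cassert>

  #define CONTEXT_ANALYSIS_ONLY_PREPARE 1
  #define CONTEXT_ANALYSIS_ONLY_VIEW    2
  #define CONTEXT_ANALYSIS_ONLY_DERIVED 4

  // Simplified stand-ins for the server's LEX and Item_subselect classes.
  struct LEX_stub { unsigned int context_analysis_only; };

  struct Item_subselect_stub {
    LEX_stub *lex;
    // Treat the subquery as potentially const only when no context-analysis
    // flag is set; during PREPARE, view handling or derived-table setup the
    // subquery is reported as non-const, so ::fix_fields() and
    // ::fix_length_and_dec() never call Item::val_...() on it.
    bool const_item() const { return lex->context_analysis_only == 0; }
  };

  int main() {
    LEX_stub lex = { 0 };
    Item_subselect_stub subq = { &lex };
    assert(subq.const_item());                    // normal execution
    lex.context_analysis_only = CONTEXT_ANALYSIS_ONLY_VIEW;
    assert(!subq.const_item());                   // CREATE VIEW: never evaluated
    return 0;
  }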
when generating new name.
If find_uniq_filename returns an error, this error is not
propagated upwards, and execution does not report the error to
the user (although an entry in the error log is generated).
Additionally, some more errors were ignored in new_file_impl:
- when writing the rotate event
- when reopening the index and binary log file
This patch addresses this by propagating the error up in the
execution stack. Furthermore, when rotation of the binary log
fails, an incident event is written, because there may be a
chance that some changes for a given statement were not properly
logged. For example, in SBR, a LOAD DATA INFILE statement requires
more than one event to be logged; should rotation fail while
logging part of the LOAD DATA events, the logged data would
become inconsistent with the data in the storage engine.
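Schematically, the change is consistent error propagation plus an incident
event on failure; a self-contained sketch (none of the helper names below are
the real binlog functions) is:

  #include <cstdio>

  // Hypothetical stand-ins for the rotation steps that previously could fail silently.
  static int generate_new_name()    { return 0; }   // e.g. the unique-name step
  static int write_rotate_event()   { return 0; }
  static int reopen_index_and_log() { return 0; }
  static void write_incident_event()
  { fprintf(stderr, "incident: binlog may be missing events\n"); }

  // Each step's result is now checked and the first error is returned to the
  // caller instead of being swallowed; on failure an incident event records
  // that the binary log may be incomplete.
  static int rotate_binlog()
  {
    int error;
    if ((error = generate_new_name()) ||
        (error = write_rotate_event()) ||
        (error = reopen_index_and_log()))
    {
      write_incident_event();
      return error;              // propagated up the execution stack
    }
    return 0;
  }

  int main() { return rotate_binlog(); }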
This is a regression from the fix for bug no 38999. A storage engine capable
of reading only a subset of a table's columns updates corresponding bits in
the read buffer to signal that it has read NULL values for the corresponding
columns. It cannot, and should not, update any other bits. Bug no 38999
occurred because the implementation of UPDATE statements compares the NULL bits
using memcmp, inadvertently comparing bits that were never requested from the
storage engine. The regression was caused by the storage engine trying to
alleviate the situation by writing to all NULL bits, even those that it had no
knowledge of. This has devastating effects for the index merge algorithm,
which relies on all NULL bits, except those explicitly requested, being left
unchanged.
The fix reverts the fix for bug no 38999 in both InnoDB and InnoDB plugin and
changes the server's method of comparing records. For engines that always read
entire rows, we proceed as usual. For engines capable of reading only select
columns, the record buffers are now compared on a column-by-column basis. An
assertion was also added so that non-comparable buffers are never read. Some
relevant copy-pasted code was also consolidated in a new function.
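A self-contained sketch of the new comparison strategy (Field, read set and
record layout are simplified stand-ins, not the server's actual types):

  #include <cstring>
  #include <cstdio>
  #include <vector>

  // Simplified stand-in for a table column: offset/length of its value in the
  // record buffer plus the index of its NULL bit.
  struct FieldStub { size_t offset, length; unsigned null_bit; };

  static bool is_null(const unsigned char *rec, unsigned null_bit)
  { return rec[null_bit / 8] & (1 << (null_bit % 8)); }

  // For engines that may read only a subset of columns we no longer memcmp()
  // whole records; only the columns in the read set are compared, so NULL bits
  // the engine never touched cannot cause spurious differences.
  static bool records_differ(const std::vector<FieldStub> &fields,
                             const std::vector<bool> &read_set,
                             const unsigned char *old_rec,
                             const unsigned char *new_rec)
  {
    for (size_t i = 0; i < fields.size(); i++)
    {
      if (!read_set[i])
        continue;                                 // column never requested
      bool old_null = is_null(old_rec, fields[i].null_bit);
      bool new_null = is_null(new_rec, fields[i].null_bit);
      if (old_null != new_null)
        return true;
      if (!old_null &&
          memcmp(old_rec + fields[i].offset,
                 new_rec + fields[i].offset, fields[i].length) != 0)
        return true;
    }
    return false;
  }

  int main()
  {
    std::vector<FieldStub> fields = { {1, 4, 0}, {5, 4, 1} }; // byte 0 holds NULL bits
    std::vector<bool> read_set = { true, false };             // only column 0 was read
    unsigned char old_rec[9] = {0}, new_rec[9] = {0};
    new_rec[5] = 42;                       // column 1 differs, but was never read
    printf("%d\n", records_differ(fields, read_set, old_rec, new_rec)); // prints 0
    return 0;
  }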
/*![:version:] Query Code */, where [:version:] is a sequence of 5
digits representing the mysql server version (e.g. /*!50200 ... */),
is a special comment whose enclosed query is executed only on
servers whose version is not lower than the version appearing in the
comment. This leads to a security issue when the slave's version is
higher than the master's: a malicious user can elevate his privileges
on slaves, because the slave SQL thread runs with SUPER privileges
and can therefore execute queries for which he/she has no privileges
on the master.
This bug is fixed with the logic below:
- Replace '!' with ' ' in the magic comments which were not applied on the
master, so that they become ordinary comments and will not be applied on the slave.
- Example:
'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /*!99999 ,(3)*/
will be binlogged as
'INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /* 99999 ,(3)*/
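For illustration, a standalone sketch of this rewriting rule (the
MASTER_VERSION constant and the helper function are hypothetical; the real
change lives in the statement-logging code):

  #include <cctype>
  #include <cstdio>
  #include <cstring>

  // Hypothetical master version, encoded the same way as in /*!50150 ... */.
  static const unsigned long MASTER_VERSION = 50150;

  // Blank out the '!' in conditional comments whose version is above the
  // master's own version, i.e. comments the master itself did not execute.
  // They then replicate as plain comments and are not executed on the slave.
  static void neutralize_high_version_comments(char *query)
  {
    for (char *p = query; (p = strstr(p, "/*!")) != NULL; p += 3)
    {
      unsigned long version = 0;
      int digits = 0;
      while (digits < 5 && isdigit((unsigned char)p[3 + digits]))
        version = version * 10 + (p[3 + digits++] - '0');
      if (digits == 5 && version > MASTER_VERSION)
        p[2] = ' ';                     // "/*!99999" becomes "/* 99999"
    }
  }

  int main()
  {
    char query[] = "INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /*!99999 ,(3)*/";
    neutralize_high_version_comments(query);
    puts(query);  // INSERT INTO t1 VALUES (1) /*!10000, (2)*/ /* 99999 ,(3)*/
    return 0;
  }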
strict aliasing violations.
One somewhat major source of strict-aliasing violations and
related warnings is the SQL_LIST structure. For example,
consider its member function `link_in_list`, which receives
a pointer to a pointer of type T (any type) as a pointer to a
pointer to unsigned char. Dereferencing this pointer, which
is done to reset the next field, violates strict-aliasing
rules and might cause problems for surrounding code that
uses the next field of the object being added to the list.
The solution is to use templates to parametrize the SQL_LIST
structure in order to dereference the pointers with compatible
types. As a side bonus, it becomes possible to remove quite
a few casts related to accessing data members of SQL_LIST.
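A reduced sketch of the parametrization pattern (this is not the actual
SQL_LIST code): the link pointer is kept as T** throughout, so no cast to
unsigned char** is ever dereferenced.

  #include <cstdio>

  struct item { int value; item *next; };

  // Simplified, templated counterpart of a link_in_list() member: because the
  // list is parametrized on the element type, the 'next' field is accessed
  // through a T** and never through an aliasing-unsafe uchar** cast.
  template <typename T>
  struct typed_list
  {
    T  *first;
    T **next;                          // points at the tail's 'next' field
    typed_list() : first(NULL), next(&first) {}
    void link_in_list(T *element, T **element_next)
    {
      *element_next = NULL;            // reset the element's own next field
      *next = element;                 // append at the tail
      next = element_next;
    }
  };

  int main()
  {
    typed_list<item> list;
    item a = { 1, NULL }, b = { 2, NULL };
    list.link_in_list(&a, &a.next);
    list.link_in_list(&b, &b.next);
    for (item *i = list.first; i; i = i->next)
      printf("%d\n", i->value);        // prints 1 then 2
    return 0;
  }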
without FOR UPDATE is causing a lock".
SELECT statements with subqueries referencing InnoDB tables
were acquiring shared locks on rows in these tables when they
were executed in REPEATABLE-READ mode and with statement or
mixed mode binary logging turned on.
This was a regression which was introduced when fixing
bug 39843.
The problem was that for tables belonging to subqueries the
parser set TL_READ_DEFAULT as the lock type. When
statement/mixed binary logging was enabled, this
type of lock was converted to a TL_READ_NO_INSERT lock at
open_tables() time and caused the InnoDB engine to acquire
shared locks on reads from these tables. Although in some
cases such behavior was correct (e.g. for subqueries in
DELETE), in the case of SELECT it caused unnecessary locking.
This patch implements a minimal version of the fix for the
specific problem described in the bug report, which is supposed
to be not too risky for pushing into the 5.1 tree.
The 5.5 tree already contains a more appropriate solution
which also addresses other related issues like bug 53921
"Wrong locks for SELECTs used stored functions may lead
to broken SBR".
This patch tries to solve the problem by ensuring that the
TL_READ_DEFAULT lock which is set in the parser for
tables participating in subqueries is, at open_tables()
time, interpreted as either TL_READ_NO_INSERT or TL_READ.
TL_READ is used only if we know that this is a SELECT
and that this particular table is not used by a stored
function.
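In outline, the interpretation step can be pictured with the following
standalone sketch (the function and its parameters are simplified stand-ins;
only the thr_lock type names come from the description above):

  #include <cstdio>

  enum thr_lock_type { TL_READ, TL_READ_NO_INSERT, TL_READ_DEFAULT };

  // Sketch of how TL_READ_DEFAULT from the parser is resolved at open_tables()
  // time: plain SELECTs on tables not used from stored functions can use the
  // weaker TL_READ, everything else keeps the binlog-safe TL_READ_NO_INSERT.
  static thr_lock_type resolve_read_lock(bool binlog_needs_strong_locks,
                                         bool statement_is_select,
                                         bool table_used_by_stored_function)
  {
    if (!binlog_needs_strong_locks)
      return TL_READ;
    if (statement_is_select && !table_used_by_stored_function)
      return TL_READ;
    return TL_READ_NO_INSERT;
  }

  int main()
  {
    // SELECT with a subquery, statement-based binlogging, no stored functions:
    printf("%d\n", resolve_read_lock(true, true, false) == TL_READ);            // 1
    // Same table referenced from DELETE: shared locks are still required.
    printf("%d\n", resolve_read_lock(true, false, false) == TL_READ_NO_INSERT); // 1
    return 0;
  }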
Test coverage is added for both InnoDB and MyISAM.
This patch introduces an "incompatible" change in locking
scheme for subqueries used in SELECT ... FOR UPDATE and
SELECT .. IN SHARE MODE.
In 4.1 (as well as in 5.0 and 5.1 before fix for bug 39843)
the server would use a snapshot InnoDB read for subqueries
in SELECT FOR UPDATE and SELECT .. IN SHARE MODE statements,
regardless of whether the binary log is on or off.
If the user required a different type of read (i.e. a locking
read), he/she could request it explicitly by providing a FOR
UPDATE/IN SHARE MODE clause for each individual subquery.
The patch for bug 39843 broke this behaviour (which was not
documented or tested), and started to use locking reads for
all subqueries in SELECT ... FOR UPDATE/IN SHARE MODE.
This patch restores 4.1 behaviour.
This patch should be mostly null-merged into 5.5 tree.
data directory name command
The check_db_name function has been modified to validate tails of
#mysql50#-prefixed database names for compliance with MySQL 5.0
database name encoding rules (the check_table_name function call
has been reused).
This is the 5.1 merge and extension of the fix.
The server was happily accepting paths in table names in all places a table
name is accepted (e.g. a SELECT). This allowed all users that have some
privilege over some database to read all tables in all databases in all
mysql server instances that the server file system has access to.
Fixed by:
1. making sure no path elements are allowed in a quoted table name when
constructing the path (note that the path symbols are still valid in table names
when they're properly escaped by the server).
2. checking the #mysql50# prefixed names the same way they're checked for
path elements in mysql-5.0.
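A minimal standalone sketch of the #mysql50# prefix handling (check_table_name
is replaced here by a trivial stand-in that only rejects path separators):

  #include <cstdio>
  #include <cstring>

  #define MYSQL50_TABLE_NAME_PREFIX "#mysql50#"

  // Trivial stand-in for the server's check_table_name(): here we only reject
  // path separators, which is the property the security fix relies on.
  static bool check_table_name_stub(const char *name)
  { return strpbrk(name, "/\\") != NULL; }       // true means "invalid"

  // Names carrying the #mysql50# prefix bypass filename encoding, so the tail
  // after the prefix must be validated with the same rules as ordinary names.
  static bool check_mysql50_name(const char *name)
  {
    size_t prefix_len = strlen(MYSQL50_TABLE_NAME_PREFIX);
    if (strncmp(name, MYSQL50_TABLE_NAME_PREFIX, prefix_len) == 0)
      return check_table_name_stub(name + prefix_len);
    return check_table_name_stub(name);
  }

  int main()
  {
    printf("%d\n", check_mysql50_name("#mysql50#old_table"));     // 0: accepted
    printf("%d\n", check_mysql50_name("#mysql50#../mysql/user")); // 1: rejected
    return 0;
  }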
Iterative patch improvement. The previously committed patch
caused a wrong result on Windows. The previous patch also
broke secure_file_priv for symlinks, since not all file
paths which must be compared against this variable were
normalized using the same norm.
The server variable opt_secure_file_priv wasn't
normalized properly and caused the operations
LOAD DATA INFILE .. INTO TABLE ..
and
SELECT load_file(..)
to do different interpretations of the
--secure-file-priv option.
The patch moves code to the server initialization
routines so that the path is always normalized
once and only once.
It was also intended that setting the option
to an empty string should be equivalent to
lifting all previously set restrictions. This
is also fixed by this patch.
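The essence of the fix can be pictured with a small standalone (POSIX-only)
sketch; realpath() stands in for the server's own path normalization and the
names are illustrative:

  #include <climits>
  #include <cstdlib>
  #include <cstring>
  #include <string>

  // Normalized once at server startup; an empty --secure-file-priv value means
  // "no restriction" rather than "restrict to the empty path".
  static std::string secure_file_priv_norm;

  static void init_secure_file_priv(const char *option_value)
  {
    if (option_value == NULL || option_value[0] == '\0')
    { secure_file_priv_norm.clear(); return; }       // restrictions lifted
    char resolved[PATH_MAX];
    if (realpath(option_value, resolved))
      secure_file_priv_norm = resolved;
    else
      secure_file_priv_norm = option_value;          // keep as-is if unresolvable
  }

  // Both LOAD DATA INFILE and load_file() now go through the same check against
  // the one normalized value, so they can no longer disagree.
  static bool is_secure_path(const char *file_name)
  {
    if (secure_file_priv_norm.empty())
      return true;
    char resolved[PATH_MAX];
    if (!realpath(file_name, resolved))
      return false;
    size_t n = secure_file_priv_norm.size();
    return strncmp(resolved, secure_file_priv_norm.c_str(), n) == 0 &&
           (resolved[n] == '/' || resolved[n] == '\0');
  }

  int main()
  {
    init_secure_file_priv("/tmp");
    return is_secure_path("/tmp") ? 0 : 1;
  }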
DBUG_SYNC_POINT has at least one strong limitation: it is not defined
on all platforms. It also has issues cooperating with @@debug.
All in all its functionality is superseded by DEBUG_SYNC facility and
there is no reason to maintain the old less flexible one.
Fixed by adding a debug_sync_set_action() function as a facility to set up
a sync-action in the server source code and rewriting the existing simulations
(3 were found) to use it.
A couple of tests have been reworked as well.
The patch offers a pattern for setting sync-points in replication threads
where the standard DEBUG_SYNC does not suffice to reach goals.
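Illustratively, the usage pattern looks roughly as follows; the
debug_sync_set_action() signature and the sync action string are assumptions
here, and THD plus the function itself are stubbed so the sketch stands alone:

  #include <cstdio>
  #include <cstring>

  // Stand-ins: in the server these come from sql_class.h / debug_sync.h; the
  // real debug_sync_set_action() parses and registers the action for the thread.
  struct THD {};
  static bool debug_sync_set_action(THD *, const char *action, size_t len)
  { printf("sync action registered: %.*s\n", (int)len, action); return false; }

  // Pattern used to replace DBUG_SYNC_POINT: hard-code a DEBUG_SYNC action at
  // a spot in the sources (here, in a replication thread) instead of relying
  // on the old, platform-dependent macro.
  static void before_commit_in_slave_thread(THD *thd)
  {
    const char act[] = "now SIGNAL reached_commit WAIT_FOR continue_commit";
    debug_sync_set_action(thd, act, sizeof(act) - 1); // error ignored in sketch
  }

  int main() { THD thd; before_commit_in_slave_thread(&thd); return 0; }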
concurrent I_S query
There were two problems:
1) MYSQL_LOCK_IGNORE_FLUSH also ignored name locks
2) there was a race between abort_and_upgrade_locks and
alter_close_tables
(i.e. remove_table_from_cache and
close_data_files_and_morph_locks)
This allowed the table to be opened with the MYSQL_LOCK_IGNORE_FLUSH flag,
resulting in renaming a partition that was already in use,
which could cause the table to become unusable.
The solution was to not allow IGNORE_FLUSH to skip waiting for
a name-locked table,
and to not release the LOCK_open mutex between the
calls to remove_table_from_cache and
close_data_files_and_morph_locks, by merging the functions
abort_and_upgrade_locks and alter_close_tables.
removed in MySQL 6.0
CREATE TABLE... TYPE= returns the warning "The syntax
'TYPE=storage_engine' is deprecated and will be removed in
MySQL 6.0. Please use 'ENGINE=storage_engine' instead"
This syntax has already been deprecated since version 5.4.4, so
the message has been changed.
In addition, the deprecation macro was changed to reflect
the ServerPT decision not to include a version number in the
warning message.
A number of test result files have been changed as a
consequence of the change in the deprecation macro.
Grouping by a subquery in a query with a distinct aggregate
function led to a wrong result (wrong and unordered
grouping values).
There are two related problems:
1) A query like this:
SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) c
FROM t1 GROUP BY aa
returned a wrong result because the outer reference "t1.a"
in the subquery was substituted with the Item_ref item.
The Item_ref item obtains data from the result_field object,
which is refreshed once after the end of each group. This data
is not applicable to filesort, since filesort() doesn't care
about groups (and doesn't update result_field objects with
copy_fields() and so on). Also, that data is not applicable
to the group separation algorithm: end_send_group() checks every
record with test_if_group_changed(), which evaluates Item_ref
items, but it refreshes those Item_ref-s only after the end
of the group. That is a vicious circle, and the grouped column
values in the output are shifted.
Fix: if
a) we are grouping by a subquery and
b) that subquery has outer references to the FROM list
of the grouping query,
then we substitute these outer references with
Item_direct_ref, as is done for references under aggregate
functions: Item_direct_ref obtains data directly
from the current record.
2) The query with a non-trivial grouping expression like:
SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) c
FROM t1 GROUP BY aa+0
also returned a wrong result, since JOIN::exec() substitutes
references to top-level aliases in the SELECT list with Item_copy
caching items. Item_copy items have the same refreshing policy
as Item_ref items, so the whole grouping expression with
Item_copy inside returns a wrong result in filesort() and
end_send_group().
Fix: include the aliased items in the GROUP BY item tree instead
of Item_ref references to them.
Several items said to be deprecated in the 4.1 manual
have never been removed. This worklog adds deprecation
warnings when these items are used, and warns the user
that the items will be removed in MySQL 5.6.
A couple of previous deprecation decisions have been
reversed (see the individual file comments).
Invalid (old?) table or database name in logs
The problem was still not completely fixed, due to
quoting.
This is the server-side-only fix (in explain_filename);
the change in the InnoDB code from filename_to_tablename
to explain_filename must be done before the bug is fully
fixed.
Invalid (old?) table or database name in logs
Post push patch.
The bug was that a non-partitioned table's filename was not
converted to system_charset (because table_name_len was not set).
A DBUG_RETURN was also missing.
Also, InnoDB adds quotes after calling the function,
so I added one more mode in which explain_filename does not
add quotes but still appends the [sub]partition name
as a comment.
Also caught a minor quoting bug: the character '`' was
not quoted in identifiers (so 'a`b' was quoted as `a`b`
and not `a``b`); the fix is multibyte-character aware.
or database name in logs
The problem was that InnoDB used filename_to_tablename,
which does not handle partitions (due to the '#' in
the filename).
The solution is to add a new function for explaining
what the filename means: explain_filename.
It expands the database, table, partition and subpartition
parts and uses errmsg.txt for localization.
It also converts from my_charset_filename to system_charset_info
(i.e. a human-readable form for non-ASCII characters).
See http://lists.mysql.com/commits/70370 (2773 Mattias Jonsson 2009-03-25).
It has three different output styles.
NOTE: This is the server side ONLY part (introducing the explain_filename
function). There will be a patch for InnoDB using this function to solve
the bug.
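A rough standalone sketch of the kind of decoding explain_filename performs
(the '#P#'/'#SP#' markers follow the partition file-name convention; the
output format only approximates one of the styles):

  #include <cstdio>
  #include <string>

  // Decode a path like "dbname/t1#P#p0#SP#sp0" into a readable form.  The real
  // explain_filename() also converts from my_charset_filename to the system
  // character set and uses errmsg.txt for the localized words; this sketch only
  // shows how the database/table/partition parts are split out.
  static std::string explain_filename_sketch(const std::string &path)
  {
    std::string db, table = path, part, subpart;
    std::string::size_type pos;
    if ((pos = table.rfind('/')) != std::string::npos)
    { db = table.substr(0, pos); table.erase(0, pos + 1); }
    if ((pos = table.find("#P#")) != std::string::npos)
    { part = table.substr(pos + 3); table.erase(pos); }
    if ((pos = part.find("#SP#")) != std::string::npos)
    { subpart = part.substr(pos + 4); part.erase(pos); }

    std::string result = "`" + db + "`.`" + table + "`";
    if (!part.empty())
    {
      result += " /* Partition `" + part + "`";
      if (!subpart.empty())
        result += ", Subpartition `" + subpart + "`";
      result += " */";
    }
    return result;
  }

  int main()
  {
    puts(explain_filename_sketch("test/t1#P#p0#SP#sp0").c_str());
    // `test`.`t1` /* Partition `p0`, Subpartition `sp0` */
    return 0;
  }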
old_password() functions
The PASSWORD() and OLD_PASSWORD() functions could lead to
memory reads outside of an internal buffer when used with BLOB
arguments.
String::c_ptr() assumes there is at least one extra byte
in the internally allocated buffer when adding the trailing
'\0'. This, however, may not be the case when a String object
was initialized with externally allocated buffer.
The bug was fixed by adding an additional "length" argument to
make_scrambled_password_323() and make_scrambled_password() in
order to avoid String::c_ptr() calls for
PASSWORD()/OLD_PASSWORD().
However, since the make_scrambled_password[_323] functions are
a part of the client library ABI, the functions with the new
interfaces were implemented with the 'my_' prefix in their
names, with the old functions changed to be wrappers around
the new ones to maintain interface compatibility.
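The ABI-preserving wrapper pattern, sketched with placeholder hashing (only
the shape of the old-wraps-new relationship is shown; these are not the real
library functions):

  #include <cstdio>
  #include <cstring>

  // New-style function: takes an explicit length so callers with non
  // NUL-terminated buffers (e.g. String objects holding BLOB data) never need
  // String::c_ptr() and its assumption of a spare trailing byte.
  static void my_make_scrambled_password_sketch(char *to, const char *password,
                                                size_t pass_len)
  {
    unsigned long hash = 5381;                     // placeholder hash, not SHA1
    for (size_t i = 0; i < pass_len; i++)
      hash = hash * 33 + (unsigned char)password[i];
    snprintf(to, 42, "*%010lu", hash);
  }

  // Old-style function kept with its original signature for ABI compatibility;
  // it simply forwards to the length-aware version.
  static void make_scrambled_password_sketch(char *to, const char *password)
  {
    my_make_scrambled_password_sketch(to, password, strlen(password));
  }

  int main()
  {
    char buf[42];
    make_scrambled_password_sketch(buf, "secret");
    puts(buf);
    return 0;
  }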
always rollsback.
The global variable max_binlog_cache_size cannot be set to more than 4GB on
32-bit systems, limiting transactions of all storage engines to 4GB of changes.
The problem is that max_binlog_cache_size is declared as ulong, which is 4 bytes
on 32-bit and 8 bytes on 64-bit machines.
Fixed by using ulonglong for max_binlog_cache_size, which is 8 bytes on both
32-bit and 64-bit machines. The range for max_binlog_cache_size on 32-bit and
64-bit systems is 4096-18446744073709547520 bytes.
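The underlying type-width issue is easy to demonstrate with a self-contained
snippet (using the plain C types that ulong/ulonglong map to):

  #include <cstdio>

  int main()
  {
    // On a typical 32-bit build 'unsigned long' is 4 bytes, so a cache size
    // above 4GB cannot be represented; 'unsigned long long' is 8 bytes on both
    // 32-bit and 64-bit builds, which is why max_binlog_cache_size was changed.
    printf("sizeof(unsigned long)      = %zu\n", sizeof(unsigned long));
    printf("sizeof(unsigned long long) = %zu\n", sizeof(unsigned long long));

    unsigned long long wanted = 8ULL * 1024 * 1024 * 1024;   // 8GB
    unsigned long      old_type = (unsigned long)wanted;     // truncated on 32-bit
    printf("requested=%llu stored_in_ulong=%lu\n", wanted, old_type);
    return 0;
  }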
UNION could convert fixed-point FLOAT(M,D)/DOUBLE(M,D) columns
to FLOAT/DOUBLE when aggregating data types from the SELECT
substatements. While there is nothing particularly wrong with
this behavior, especially when M is greater than the hardware
precision limits, it could be confusing in cases when all
SELECT statements in a union have the same
FLOAT(M,D)/DOUBLE(M,D) columns with equal precision
specifications listed in the same position.
Since the manual is quite vague on what data type should be
returned in such cases, the bug was fixed by implementing the
most 'expected' behavior: do not convert FLOAT(M,D)/DOUBLE(M,D)
to anything else if all SELECT statements in a UNION have the
same precision for that column.
When the thread executing a DDL was killed after it had finished its
execution but before writing the binlog event, the error code in
the binlog event could wrongly be set to ER_SERVER_SHUTDOWN or
ER_QUERY_INTERRUPTED.
This patch fixed the problem by ignoring the kill status when
constructing the event for DDL statements.
This patch also included the following changes in order to
provide the test case.
1) modified mysqltest to support variables for the connection command
2) modified mysql-test-run.pl to add a new variable, MYSQL_SLAVE, which can be
used to run the mysql client against the slave mysqld.