An assertion abort could occur for some grouping queries that employed
decimal user variables with assignments to them.
The problem appeared in the constructors of the class Field_new_decimal
because the function my_decimal_length_to_precision did not guarantee
that the returned decimal precision would not exceed DECIMAL_MAX_PRECISION.
The cast operation ignored the cases where the precision and/or the scale
exceeded the limits, 65 and 30 respectively, and no errors were reported
in these cases. For some queries this could lead to an assertion abort.
Fixed by throwing errors for such cases.
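Hypothetical statements of the affected kinds (the table and column names,
and the particular out-of-range precision and scale, are assumptions for
illustration only):
SET @dec = 1.5;
SELECT @dec := d, COUNT(*) FROM t1 GROUP BY 1;  -- grouping query assigning to a decimal user variable
SELECT CAST(1 AS DECIMAL(66, 10));              -- precision above the 65 limit; now reports an error
SELECT CAST(1 AS DECIMAL(65, 31));              -- scale above the 30 limit; now reports an error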
For GCov builds, if the server crashes, the normal exit handler for writing
coverage information is not executed due to the abnormal termination.
Fix this by explicitly calling the __gcov_flush function in our crash handler.
A SELECT ... INTO OUTFILE statement with FIELDS ENCLOSED BY a digit or
a minus sign, followed by a LOAD DATA INFILE statement with the same
clause, used a wrong encoding of non-string fields that contained the
enclosed character in their text representation.
Example:
SELECT 15, 9 INTO OUTFILE 'text' FIELDS ENCLOSED BY '5';
Old encoded result in the text file:
5155 595
^ was decoded as the 1st enclosing character of the 2nd field;
^ was skipped as garbage;
^ ^ was decoded as a pair of enclosing characters of the 1st field;
^ was decoded as a trailing space of the 1st field;
^^ was decoded as a doubled enclosed character.
New encoded result in the text file:
51\55 595
^ ^ pair of enclosing characters of the 1st field;
^^ escaped enclosed character.
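A hypothetical round trip of the affected kind (the table name, its column
types, and the file name are assumptions):
CREATE TABLE t1 (a INT, b INT);
SELECT 15, 9 INTO OUTFILE 'text' FIELDS ENCLOSED BY '5';
LOAD DATA INFILE 'text' INTO TABLE t1 FIELDS ENCLOSED BY '5';
-- before the fix, the decoding sketched above reconstructed wrong values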
AsText() needs to know the maximum number of
characters an IEEE double precision value can
occupy to make sure there's enough buffer space.
The number was too small to hold all possible
values, and this caused buffer overruns.
Fixed by correcting the calculation of the
maximum number of digits in a string representation
of an IEEE double precision value as printed by
String::qs_append(double).
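A hypothetical query of the affected kind (the particular coordinate value is
an assumption; any double requiring many characters to print applies):
SELECT AsText(Point(1.7976931348623157e+308, 0));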
Problem: when logging queries not using indexes we check a special flag which
is set only at server startup and does not change together with the
corresponding server variable.
Fix: check the variable value instead of the flag.
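A hypothetical scenario of the affected kind (assuming the relevant server
variable is log_queries_not_using_indexes; the table and column are
assumptions):
SET GLOBAL log_queries_not_using_indexes = ON;  -- changed at runtime, not at startup
SELECT * FROM t1 WHERE unindexed_col = 1;       -- before the fix, not logged as expected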
Problem: in case of a failed 'show binlog events...' we do not inform that
the log is not in use anymore. That may confuse a following 'purge logs...'
command, as it takes logs in use into account.
Fix: always notify that the log is not in use anymore.
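A hypothetical sequence of the affected kind (the binary log file names are
assumptions):
SHOW BINLOG EVENTS IN 'no-such-bin.000001';  -- fails, log stays marked as in use
PURGE MASTER LOGS TO 'master-bin.000010';    -- may then be confused by the stale mark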
This bug may manifest itself for select queries over a multi-table view
that includes an ORDER BY clause in its definition. If the select list of
the query contains references to the same view column with different
aliases, the names of the columns in the result output will nevertheless
be the same, coinciding with one of the aliases.
The bug happened because the method Item_ref::get_tmp_table_item that
was inherited by the class Item_direct_view_ref ignored the fact that
the name of the view column reference must be inherited by the fields
of the temporary table that was created in order to get the result rows
sorted.
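A hypothetical query of the affected kind (view, table, and column names are
assumptions):
CREATE VIEW v1 AS SELECT t1.a FROM t1, t2 WHERE t1.a = t2.a ORDER BY t1.a;
SELECT v1.a AS x, v1.a AS y FROM v1;  -- both result columns came back under one of the aliases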
The SELECT 'r' INTO OUTFILE ... FIELDS ENCLOSED BY 'r' statement
encoded the 'r' string to a 4 byte string of value x'725c7272'
(sequence of 4 characters: r\rr).
The LOAD DATA statement decoded this string to a 1 byte string of
value x'0d' (ASCII Carriage Return character) instead of the original
'r' character.
The same error also happened with the FIELDS ENCLOSED BY clause
followed by special characters: 'n', 't', 'r', 'b', '0', 'Z' and 'N'.
NOTE 1: This is a result of an undocumented feature: LOAD DATA INFILE
recognises 2-byte input sequences like \n, \t, \r and \Z in addition
to the documented 2-byte sequences \0 and \N. This feature should be
documented (here the backslash is the default ESCAPED BY character;
in a real-life example it may be any ESCAPED BY character).
NOTE 2, changed behaviour:
Now the `SELECT INTO OUTFILE' statement with the `FIELDS ENCLOSED BY'
clause followed by one of the characters 'n', 't', 'r', 'b', '0', 'Z' or 'N'
encodes this special character itself by doubling it ('r' --> 'rr'),
not by prepending it with an escape character.
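Under the new behaviour, a round trip like the following should preserve the
original value (the table and file names are assumptions):
CREATE TABLE t1 (a VARCHAR(10));
SELECT 'r' INTO OUTFILE 'text' FIELDS ENCLOSED BY 'r';
LOAD DATA INFILE 'text' INTO TABLE t1 FIELDS ENCLOSED BY 'r';
-- before the fix, the loaded value was x'0d' instead of 'r'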
This bug may manifest itself not only with the queries for which
the index-merge access method is chosen. It may also show up
for queries with DISTINCT.
The bug was in how the Unique::get method used the merge_buffers
function. To compare elements in the queue employed by
merge_buffers() it must use the buffpek_compare function rather
than the function for binary comparison.
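Hypothetical queries of the affected kinds (table, column, and index names are
assumptions; index-merge is chosen only when suitable indexes exist):
SELECT * FROM t1 WHERE key1_col = 1 OR key2_col = 2;  -- index-merge access method
SELECT DISTINCT payload FROM t1;                      -- DISTINCT over a large table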
When a UNION statement forced conversion of a UTF8
charset value to a binary charset value, the byte
length of the result values was truncated to the
CHAR_LENGTH of the original UTF8 value.
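A hypothetical query of the affected kind (the literal values are assumptions;
the utf8 literal is one character but two bytes long):
SELECT _utf8 0xC39F AS c UNION SELECT _binary 'abc';
-- the utf8 value was truncated to its CHAR_LENGTH (1 byte) in the binary result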
query / no aggregate of subquery
The optimizer counts the aggregate functions that
appear as top level expressions (in all_fields) in
the current subquery. Later it makes a list of these
that it uses to actually execute the aggregates in
end_send_group().
That count is used in several places as a flag for whether
there are aggregate functions.
While collecting the above info it must not consider
aggregates that are not aggregated in the current
context. It must treat them as normal expressions
instead. Not doing that leads to incorrect data about
the query, e.g. running a query that actually has no
aggregate functions as if it had some (and hence was
expected to return only one row).
Fixed by ignoring the aggregates that are not aggregated
in the current context.
One other smaller omission discovered and fixed in the
process: the place of aggregation was not calculated for
user defined functions. Fixed by calling
Item_sum::init_sum_func_check() and
Item_sum::check_sum_func() as it's done for the rest of
the aggregate functions.
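A hypothetical query of the affected kind (table and column names are
assumptions); MAX(t1.a) appears inside the subquery but is aggregated in the
outer query through the outer reference, so the subquery itself contains no
aggregates:
SELECT (SELECT MAX(t1.a) FROM t2 LIMIT 1) FROM t1;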
"Federared Transactions Failure"
Bug occurs when the user performs an operation which inserts more than
one row into the federated table and the federated table references a
remote table stored within a transactional storage engine. When the
insert operation for any one row in the statement fails due to
constraint violation, the federated engine is unable to perform
statement rollback and so the remote table contains a partial commit.
The user would expect the whole statement to succeed or fail as a unit, so a
statement rollback is expected.
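A hypothetical scenario of the affected kind (server, database, table names,
and the connection string are assumptions; see the fix description below):
-- on the remote server, a transactional table:
CREATE TABLE t_remote (a INT PRIMARY KEY) ENGINE=InnoDB;
-- on the local server, a federated table pointing at it:
CREATE TABLE t_fed (a INT PRIMARY KEY) ENGINE=FEDERATED
  CONNECTION='mysql://user:pass@remote_host:3306/db_name/t_remote';
-- a multi-row insert that fails on its last row:
INSERT INTO t_fed VALUES (1), (2), (1);
-- before the fix, the remote table could keep rows 1 and 2 despite the error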
This bug was fixed by implementing bulk-insert handling in the
federated storage engine. This alleviates the bug in the most common
situations by generating a multi-row insert into the
remote table and thus permitting the remote table to perform a
statement rollback when necessary.
The multi-row insert is limited to the maximum packet size between
servers and, should that size be exceeded, more than one insert statement
will be sent and this bug will reappear. Multi-row insert is disabled
when an "INSERT...ON DUPLICATE KEY UPDATE" is being performed.
The bulk-insert handling will offer a significant performance boost
when inserting a large number of small rows.
This patch builds on Bug29019 and Bug25511