A change in the default values of some config parameters
caused this test to fail. Adjust the test and make it more
robust so that it does not fail for the same reason in the future.
concurrent SHOW CREATE
The problem was that a SHOW CREATE TABLE statement issued inside
a transaction did not release its metadata locks at the end of the
statement execution. This happened even though SHOW CREATE TABLE is
an information statement.
The consequence was that SHOW CREATE TABLE was able to block other
connections from accessing the table (e.g. using ALTER TABLE).
This patch fixes the problem by explicitly releasing any metadata
locks taken by SHOW CREATE TABLE after the statement completes.
Test case added to show_check.test.
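A minimal sketch of the blocking scenario (table name and the two
connections are illustrative):

    # connection 1
    CREATE TABLE t1 (a INT);
    START TRANSACTION;
    SHOW CREATE TABLE t1;   # before the fix, the metadata lock on t1
                            # was kept until the transaction ended

    # connection 2
    ALTER TABLE t1 ADD COLUMN b INT;   # blocked by connection 1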
SHOW DATABASES LIKE ... was not converting the pattern to lowercase
for the comparison, as the documentation suggests.
Fixed it to behave like SHOW TABLES LIKE ... and updated the
lowercase_table2 test case, which was failing on Mac OS X.
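A hedged illustration (database name is made up; assumes a system
where lower_case_table_names is non-zero, such as Mac OS X):

    CREATE DATABASE MixedCase;
    SHOW DATABASES LIKE 'mixedcase';   # should match, just as
                                       # SHOW TABLES LIKE already did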
The reason for the bug above is unclear, but the following changes
should either fix the bug, reduce its probability, or at least make
the analysis of failures easier:
- Modify pfs_upgrade so that its result is easier to analyze in case
  something fails.
- Fix several minor weaknesses which could cause a succeeding test
  (either an already existing one or one yet to be developed) to
  fail because of imperfect cleanup, sessions that are too slow to
  disconnect, etc.
table with active trx
Essentially, the problem is that InnoDB does an implicit commit
when a cursor (table handler) is unlocked/closed, creating
a dissonance between the transaction state within the server
layer and the storage engine layer. Theoretically, a statement
transaction can encompass several table instances in a similar
manner to a multiple statement transaction, hence it does not
make sense to limit a statement transaction to the lifetime of
the table instances (cursors) used within it.
Since this particular instance of the problem is only triggerable
on 5.1 and is masked on 5.5 due to 2PC being skipped (the assertion
is in the prepare phase of 2PC), the solution (which is less risky)
is to explicitly end the transaction before the cached table is
unlocked during rename table.
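A rough, hypothetical sketch of the statement pattern involved (the
assertion itself only fires on 5.1 debug builds):

    CREATE TABLE t1 (a INT) ENGINE=InnoDB;
    START TRANSACTION;
    SELECT * FROM t1;       # t1 is now used by the active transaction
    RENAME TABLE t1 TO t2;  # unlocking/closing the cached table made
                            # InnoDB commit implicitly, diverging from
                            # the server-layer transaction state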
The patch is to be null merged into trunk.
Problem: when SHOW BINLOG EVENTS was issued, it increased the value
of @@session.max_allowed_packet. This allowed a non-root user to
arbitrarily increase the amount of memory used by her thread,
removing the bound on the amount of system resources used by a
client and thus presenting a security risk (DoS attack).
Fix: it is correct to increase the value of @@session.max_allowed_packet
while executing SHOW BINLOG EVENTS (see Bug#30435). However, the
increase should only be temporary. Thus, the fix is to restore the value
when SHOW BINLOG EVENTS ends.
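A quick sanity check of the fix at the SQL level (a sketch; assumes
a server with a binary log enabled):

    SET @before = @@session.max_allowed_packet;
    SHOW BINLOG EVENTS LIMIT 1;
    # after the fix, the session value is restored:
    SELECT @@session.max_allowed_packet = @before AS value_restored;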
The value of @@session.max_allowed_packet is also increased in
mysql_binlog_send (i.e., the binlog dump thread). It is not clear if this
can cause any trouble, since normally the client that issues
COM_BINLOG_DUMP will not issue any other commands that would be affected
by the increased value of @@session.max_allowed_packet. However, we
restore the value just in case.
This bug is due to a design flaw in the fix for bug#33546. That fix
assumed that an item can be used in only one comparison context, but
this is not actually the case. Item_cache_datetime is used to store
the result of MIN/MAX aggregate functions. Because Arg_comparator
compares datetime values as INTs whenever possible,
Item_cache_datetime most of the time caches only the INT value. But
since all datetime values have the STRING result type, the MIN/MAX
functions are asked for a STRING value when the result is sent to
the client. Item_cache_datetime was designed to avoid conversions
and to get the INT/STRING values from an underlying item, but at the
moment the value is requested the underlying item no longer holds
it, and thus a wrong result is returned.
Besides that, the MIN/MAX aggregate functions were wrongly
initializing the cached result, which also led to wrong results.
The Item::has_compatible_context helper function is added. It checks
whether this item and the given item have the same comparison
context or can be compared as DATETIME values by Arg_comparator. The
equality propagation optimization is adjusted to take into account
that items being compared as DATETIMEs can have different comparison
contexts.
Item_cache_datetime now converts the cached INT value to a correct
STRING DATETIME value by means of the number_to_datetime and
my_TIME_to_str functions.
The Arg_comparator::set_cmp_context_for_datetime helper function is
added. It sets the comparison context of items being compared as
DATETIMEs to INT if the items will be compared as longlong.
The Item_sum_hybrid::setup function now correctly initializes its
result value.
In order to avoid unnecessary conversions, Item_sum_hybrid now
states that it can provide a correct longlong value if the item
being aggregated can do so too.
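A hedged example of the kind of query affected (schema and data are
made up):

    CREATE TABLE t1 (d DATETIME);
    INSERT INTO t1 VALUES ('2001-01-01 00:00:00'),
                          ('2005-05-05 05:05:05');
    # the WHERE comparison can put d into an INT comparison context,
    # while MAX() is asked for a STRING when the result is sent to
    # the client; before the fix the cache could answer with a value
    # the underlying item no longer held
    SELECT MAX(d) FROM t1 WHERE d < '2004-01-01 00:00:00';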
This assert checks that the server does not try to send OK to the
client if there has been some error during processing. This is done
to make sure that the error is in fact sent to the client.
The problem was that view errors raised during processing of WHERE
conditions in UPDATE statements were not detected by the update
code. It therefore tried to send OK to the client, triggering the
assert.
The bug was only noticeable in debug builds.
This patch fixes the problem by making sure that the update code
checks for errors during condition processing and acts accordingly.
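A hypothetical sketch of the failing pattern (debug builds only;
raise_error_during_where() is a made-up stand-in for anything that
makes the view's WHERE condition fail during evaluation):

    CREATE TABLE t1 (a INT);
    CREATE VIEW v1 AS SELECT a FROM t1
      WHERE raise_error_during_where(a);   # hypothetical helper
    UPDATE v1 SET a = 1;   # the error went undetected and the update
                           # code tried to send OK, firing the assert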
Remove --loose-skip-innodb from startup options
This is a simple backport of a change done in WL#5349.
Same as what was shown as the "temporary fix", cherry-picked to the
-mtr branch.
and reverse() function
Three problems fixed:
1. The reported problem: caused by incorrect parsing of the file as
   UCS2 data, resulting in a wrong length for the parsed string.
   Fixed by truncating the invalid trailing bytes (incomplete
   multibyte characters) when reading from the file.
2. LOAD DATA, when reading from a proper UCS2 file, wasn't
   recognizing the newline characters. Fixed by first checking
   whether a byte is a newline (or any other special) character
   before reading it as part of a multibyte character.
3. When using user variables to hold the column data in LOAD DATA,
   the character set of the user variable was incorrectly set to the
   database charset. Fixed by setting it to the charset specified by
   LOAD DATA (if any).
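A sketch of fix 3 in action (file name and schema are illustrative):

    CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET ucs2);
    # the user variable @v now takes the charset given by LOAD DATA
    # (ucs2 here) instead of the database charset
    LOAD DATA INFILE 'data.txt' INTO TABLE t1
      CHARACTER SET ucs2
      (@v) SET a = @v;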
To follow the naming scheme for tests related to functions, rename
analyse.test to func_analyse.test (the test for the ANALYSE()
procedure). This avoids confusion with the ANALYZE statement (tested
in analyze.test).
The problem is that the HAVING condition evaluates const parts of
the condition even though the condition refers to aggregate
functions. Table t1 becomes a const table after make_join_statistics
thanks to the condition table1.pk = 1, and the HAVING condition is
transformed into MAX(1) < 7 and taken away from HAVING.
The fix is to skip evaluation of the const parts of a HAVING
condition if the condition refers to aggregate functions.
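An illustrative query of the shape described above (schema made up):

    CREATE TABLE t1 (pk INT PRIMARY KEY, b INT);
    INSERT INTO t1 VALUES (1, 10);
    # t1 is a const table because of pk = 1, so the HAVING condition
    # used to be folded into MAX(1) < 7 and evaluated prematurely,
    # even though it refers to an aggregate
    SELECT MAX(b) FROM t1 WHERE pk = 1 HAVING MAX(pk) < 7;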
Problem: Item_str_ascii_func::val_str() did not set the charset of
the returned value properly.
mysql-test/include/ctype_numconv.inc
mysql-test/r/ctype_binary.result
mysql-test/r/ctype_cp1251.result
mysql-test/r/ctype_latin1.result
mysql-test/r/ctype_ucs.result
- Adding tests
sql/item_strfunc.cc
- Adding initialization of charset
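A hedged way to observe the fix (HEX() is assumed here as one of the
functions implemented via Item_str_ascii_func; the expected result
follows the connection character set):

    SET NAMES cp1251;
    SELECT CHARSET(HEX('a'));   # should report the connection
                                # charset (cp1251), not binary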
Essentially, the problem is that safemalloc is excruciatingly slow,
as it checks all allocated blocks for overrun at each memory
management primitive, yielding an almost exponential slowdown for
the memory management functions (malloc, realloc, free). The overrun
check basically consists of verifying some bytes of a block for
certain magic keys, which catches some simple forms of overrun.
Other minor problems are that it violates aliasing rules and that
its own internal list of blocks is prone to corruption.
Another issue with safemalloc is its maintenance cost, as the tool
has a significant impact on the server code. Given the number of
memory debuggers available nowadays, especially those provided with
the platform's malloc implementation, maintaining an in-house and
largely obsolete memory debugger becomes a burden that is not worth
the effort, due to its slowness and its lack of support for
detecting the more common forms of heap corruption.
Since there are third-party tools that can provide the same
functionality at a lower or comparable performance cost, the
solution is to simply remove safemalloc.
The removal of safemalloc also allows a simplification of the
malloc wrappers, removing quite a bit of kludge: the redefinition of
my_malloc and my_free, and the unused second argument of my_free.
Since free() always checks whether the supplied pointer is null,
redundant checks are also removed.
Also, this patch adds unit testing for my_malloc and moves
my_realloc implementation into the same file as the other
memory allocation primitives.
Backport from mysql-next-mr-opt-backporting:
Bug#54515: Crash in opt_range.cc::get_best_group_min_max on
SELECT from VIEW with GROUP BY
When handling the grouping items in get_best_group_min_max, the
items need to be of type Item_field. In this bug, an ASSERT
triggered because the item used for grouping was an
Item_direct_view_ref (i.e., the group column is from a view).
The fix is to fetch the real_item, since an Item_ref* pointing to
an Item_field is ok.
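A minimal query of the kind described (names are illustrative; the
composite index makes loose index scan a candidate):

    CREATE TABLE t1 (a INT, b INT, KEY (a, b));
    CREATE VIEW v1 AS SELECT a, b FROM t1;
    # the grouping item is an Item_direct_view_ref wrapping an
    # Item_field; get_best_group_min_max must unwrap it via
    # real_item()
    SELECT a, MIN(b) FROM v1 GROUP BY a;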