'MAX_BINLOG_CACHE_SIZE' ERROR
Problem:
=======
MySQL returns the following error on Win64:
"ERROR 1197 (HY000): Multi-statement transaction required more than
'max_binlog_cache_size' bytes of storage; increase this mysqld variable
and try again"
when a user tries to load a file larger than 4GB, even if
max_binlog_cache_size is set to its maximum value. On Linux everything
works fine.
Analysis:
========
The `max_binlog_cache_size' variable is of type `ulonglong'. This
value is set to `ULONGLONG_MAX' at server startup. It is then stored
in an intermediate variable named `saved_max_binlog_cache_size',
which is of type `ulong'. In the Visual C++ compiler the `ulong' type
is 4 bytes in size, so the value gets truncated to 4GB and the cache
is not able to grow beyond 4GB. The same limitation is observed with
`max_binlog_stmt_cache_size'; the same fix has been applied there as
well.
Fix:
===
As part of the fix, the type "ulong" is replaced with "my_off_t",
which is a typedef for "ulonglong".
mysys/mf_iocache.c:
Added a debug statement to simulate a scenario where the cache
file's current position is set beyond 4GB.
sql/log.cc:
Changed the type of `saved_max_binlog_cache_size' from "ulong" to
"my_off_t", which is a typedef for "ulonglong".
Add a macro "runselftest" to the spec file for RPM builds.
If its value is 1 (the default), the test suite will be run during
the RPM build.
To prevent that, add this to the rpmbuild command line:
--define "runselftest 0"
Failures of the test suite will NOT make the RPM build fail!
support-files/mysql.spec.sh:
Add the "runselftest" macro following the model provided by RedHat.
This code is similar to what we plan to use for ULN RPMs.
ISSUE: "Incorrect key file. Key file is corrupted."
Incorrect key information (keyseg) is read from the
index file when the key definitions in the .MYI and
.FRM files differ. The starting pointer for reading
the keyseg information moves to a value greater than
pack_reclength, so memcpy tries to read keyseg
information from unallocated memory, which causes
the crash.
SOLUTION: One more check is added to compare the
key definitions in the .MYI and .FRM
files. If the definitions differ, the server
produces an error.
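A hypothetical sketch of the added consistency check (the KeyDef
fields are illustrative, not MyISAM's actual structures):

    /* Compare the key definition read from the .MYI index file with
       the one derived from the .FRM file; if they differ, the table
       is rejected with an error instead of reading past the record
       buffer. */
    struct KeyDef {
        unsigned keysegs;       /* number of key segments */
        unsigned flag;          /* key flags */
        unsigned block_length;  /* index block size */
    };

    static bool key_defs_match(const KeyDef &myi, const KeyDef &frm) {
        return myi.keysegs == frm.keysegs &&
               myi.flag == frm.flag &&
               myi.block_length == frm.block_length;
    }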
Problem description:
Giving "help 'contents'" in the mysql client as a first statement
gives error
Analysis:
In the com_server_help() function, the "server_cmd" variable was
initialised with buffer->ptr(). The "server_cmd" variable is not
updated when we pass "'contents'" (with single quotes), so
buffer->ptr() still holds the previous buffer contents, which are
sent to mysql_real_query(); hence the error.
Fix:
We no longer initialise the "server_cmd" variable from the buffer;
instead we update the variable with "server_cmd= cmd_buf" in either
case, i.e. whether the contents are given with or without single
quotes.
As part of error message improvement, a new error message was added
for the "help 'contents'" case.
client/mysql.cc:
com_server_help(): Properly updated the server_cmd variable and improved
the error message.
- index_merge/intersection is unable to work on GIS indexes, because:
1. index scans have no Rowid-Ordered-Retrieval property;
2. when one does an index-only read over a GIS index, one does not
get the index tuple, because the index only contains the bounding box
of the geometry. This is why the key_copy() call crashed.
This patch fixes #1, which makes the problem go away. Theoretically,
it would be nice to check #2 too, but the SE API semantics are not
sufficiently precise to do it.
"ORDER BY" AND "LIMIT BY" CLAUSE
PROBLEM:
When a 'limit' clause is specified in a query along with
group by and order by, the optimizer chooses the wrong index,
thereby examining more rows than required.
Without the 'limit' clause, however, the optimizer chooses
the right index.
ANALYSIS:
With respect to the query specified, the range optimizer chooses
the first index, as there is a range present (on 'a'). The optimizer
then checks for an index which would return records in sorted
order for the 'group by' clause.
While checking, it chooses the second index (on 'c,b,a') based on
the 'limit' specified and the selectivity of
'quick_condition_rows' (the number of rows present in the range)
in the 'test_if_skip_sort_order' function.
But it fails to consider that an order by clause on a
different column will result in scanning the entire index, and
hence the estimated number of rows calculated above is
wrong (which results in choosing the second index).
FIX:
Do not enforce the 'limit' clause in the call to
'test_if_skip_sort_order' if we are creating a temporary
table. Creation of a temporary table indicates that there will be
more post-processing, and hence all the rows will be needed.
This fix is backported from 5.6, where the problem is fixed as
part of the changes for worklog #5558.
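A minimal sketch of the idea (the helper and its parameter names are
simplifications of the sql_select.cc call site, not the actual code;
HA_POS_ERROR is MySQL's "no limit" sentinel):

    #include <cstdio>

    typedef unsigned long long ha_rows;
    static const ha_rows HA_POS_ERROR = ~(ha_rows) 0;  /* "no limit" */

    /* When a temporary table is needed, post-processing will consume
       all rows anyway, so the LIMIT must not bias the index choice
       made in test_if_skip_sort_order(). */
    static ha_rows effective_sort_limit(bool need_tmp, ha_rows select_limit) {
        return need_tmp ? HA_POS_ERROR : select_limit;
    }

    int main() {
        printf("%llu\n", effective_sort_limit(false, 10));  /* 10 */
        printf("%llu\n", effective_sort_limit(true, 10));   /* sentinel */
        return 0;
    }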
mysql-test/r/subselect.result:
Changes for Bug#11762052 result in the correct number of rows.
sql/sql_select.cc:
Do not pass the actual 'limit' value if 'need_tmp' is true.
The partition engine now adds its underlying tables to the query
cache (QC) and asks the underlying tables' engines for permission to
cache the query and to return the result of the query.
Incorrect QC cleanup in case of table registration failure has been
fixed.
The QC interface for the myisammrg and partitioned engines has been
unified.
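A hypothetical sketch of the permission check described above (the
UnderlyingTable type and the loop are illustrative stand-ins, not the
server's query-cache API):

    #include <vector>

    struct UnderlyingTable {
        bool engine_allows_caching;  /* what the table's engine reports */
    };

    /* The partitioned table's query may be cached only if every
       underlying table's engine permits caching. */
    static bool partition_allows_query_cache(
        const std::vector<UnderlyingTable> &tables) {
        for (const UnderlyingTable &t : tables)
            if (!t.engine_allows_caching)
                return false;
        return true;
    }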
rpl_cant_read_event_incident:
The slave applies updates from the bug11747416_32228_binlog.000001
file, which contains a CREATE TABLE t statement and an incident. When
the SQL thread is running slowly, the IO thread may reach the
incident before the SQL thread executes the CREATE TABLE statement.
Execute "DROP TABLE IF EXISTS t" and also perform a RESET MASTER to
clean the slave's binary logs.
rpl_bug41902:
Error "MYSQL_BIN_LOG::purge_logs was called with file
./master-bin.000001 not listed in the index." suppression is not
considering windows path, there is ".\master-bin.000001".
Changed suppression to: "MYSQL_BIN_LOG::purge_logs was called with file
..master-bin.000001 not listed in the index", to match ".\" and "./".
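A quick illustration (using std::regex here rather than the test
framework's matcher) of why the dots in "..master-bin" cover both
path separators:

    #include <cstdio>
    #include <regex>
    #include <string>

    int main() {
        /* In the suppression pattern each unescaped '.' matches any
           character, so ".." matches both "./" and ".\" prefixes. */
        std::regex pattern("..master-bin.000001");

        std::string unix_path = "./master-bin.000001";
        std::string win_path = ".\\master-bin.000001";

        printf("%d %d\n",
               (int) std::regex_search(unix_path, pattern),
               (int) std::regex_search(win_path, pattern));  /* 1 1 */
        return 0;
    }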
RBR AND RC
Description: When scanning and locking rows with < or <=, InnoDB
locks the next row even though row-based binary logging and the
READ COMMITTED isolation level are used.
Solution: In the handler, when a row is identified as falling outside
of the range (as specified in the query predicates), request the
storage engine to unlock the row (if possible). This is done in
handler::read_range_first() and handler::read_range_next().
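A standalone mock of the idea (the Row type and the lock bookkeeping
are illustrative; the real change calls the storage engine's
unlock_row() from the handler's range-reading methods):

    #include <cstdio>
    #include <vector>

    struct Row { int key; bool locked; };

    /* Scan rows with key <= end_key, locking each row as the engine
       hands it over, and unlocking the first row that falls outside
       the range instead of keeping its lock. */
    static int scan_range(std::vector<Row> &rows, int end_key) {
        int matched = 0;
        for (Row &r : rows) {
            r.locked = true;           /* engine locks the row it reads */
            if (r.key > end_key) {     /* row is outside the range:     */
                r.locked = false;      /* ask the engine to unlock it   */
                break;
            }
            ++matched;
        }
        return matched;
    }

    int main() {
        std::vector<Row> rows = {{1, false}, {2, false}, {3, false}};
        int matched = scan_range(rows, 2);
        printf("matched: %d, last row still locked: %d\n",
               matched, (int) rows[2].locked);  /* matched: 2, ... 0 */
        return 0;
    }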
COUNT DISTINCT GROUP BY
PROBLEM:
To calculate the final result of count(distinct(select 1)),
the 'end_send' function is called instead of 'end_send_group'.
'end_send' cannot be called if we have aggregate functions
that need to be evaluated.
ANALYSIS:
While evaluating the possible loose_index_scan option for
the query, the variable 'is_agg_distinct' is set to 'false',
as the item in the distinct clause is not a field. But we
choose loose_index_scan without taking this into
consideration.
So, while setting the final 'select_function' to evaluate
the result, 'precomputed_group_by' is set to TRUE because
loose_index_scan was chosen and the query supposedly has no
agg_distinct (which is clearly wrong, as it has one).
As a result, the 'end_send' function is chosen as the final
select_function instead of 'end_send_group'. The difference
between the two is that 'end_send_group' evaluates the
aggregates while 'end_send' does not. Hence the wrong result.
FIX:
The variable 'is_agg_distinct' always indicates whether
'loose_index_scan' can be chosen for the aggregate-distinct
functions present in the select.
So, we check this variable before continuing with the
loose_index_scan option.
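A hypothetical simplification of the corrected decision (the helper
and its parameters are illustrative; the real logic lives in the
loose-index-scan code in sql/opt_range.cc):

    /* For a query containing an aggregate-distinct function, continue
       with the loose index scan option only if is_agg_distinct
       confirms the scan can handle that function; otherwise the final
       select_function must be end_send_group so the aggregates get
       evaluated. */
    static bool continue_with_loose_index_scan(bool query_has_agg_distinct,
                                               bool is_agg_distinct) {
        return !query_has_agg_distinct || is_agg_distinct;
    }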
sql/opt_range.cc:
Do not continue with loose index scan if 'is_agg_distinct' is
not set in the case of agg_distinct functions.
primary key with innodb tables
The bug was triggered if a single ALTER TABLE statement both
added and dropped indexes and ALTER TABLE failed during the drop
(e.g. because the index was needed in a foreign key constraint).
In such cases, the server's index information would get out of
sync with InnoDB: the added index would be present inside
InnoDB, but not in the server. This could then lead to InnoDB
error messages and/or server crashes.
The root cause is that new indexes are added before old indexes
are dropped. This means that if ALTER TABLE fails while dropping
indexes, the index changes will be reverted in the server but not
inside InnoDB.
This patch fixes the problem by dropping any added indexes
if the drop fails (for ALTER TABLE statements that both add
and drop indexes).
However, this won't work if we added a primary key, as it might
not be possible to drop this key inside InnoDB. Therefore,
we resort to the copy algorithm if a primary key is added
by an ALTER TABLE statement that also drops an index.
In 5.6 this bug is fixed more thoroughly by the handler interface
changes done in the scope of WL#5534 "Online ALTER".
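A hypothetical sketch of the fallback rule (the enum and helper are
illustrative; the real change is in the server's ALTER TABLE path):

    #include <cstdio>

    enum Algorithm { INPLACE, COPY };

    /* Fall back to the copy algorithm when a primary key is added and
       an index is dropped in the same statement, since a failed drop
       could leave a primary key inside InnoDB that cannot be removed
       again. */
    static Algorithm choose_alter_algorithm(bool adds_primary_key,
                                            bool drops_index) {
        return (adds_primary_key && drops_index) ? COPY : INPLACE;
    }

    int main() {
        printf("%s\n", choose_alter_algorithm(true, true) == COPY
                           ? "COPY" : "INPLACE");  /* prints: COPY */
        return 0;
    }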