unix_timestamp() is used in this part of the code in place of current_time().
Also, since the pb2 machines may be extremely fast, instead of looping through the code,
we use sleep(1.1) so that the variables t0 and t1 have different values.
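A minimal sketch of the resulting pattern (variable names are illustrative, not taken from the actual test):
SET @t0 = UNIX_TIMESTAMP();
-- statements under test run here
DO SLEEP(1.1);               -- guarantees the second reading differs even on very fast pb2 machines
SET @t1 = UNIX_TIMESTAMP();
SELECT @t1 > @t0;            -- the comparison the test relies on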
CREATE TABLE bug13510739 (c INTEGER NOT NULL, PRIMARY KEY (c)) ENGINE=INNODB;
INSERT INTO bug13510739 VALUES (1), (2), (3), (4);
DELETE FROM bug13510739 WHERE c=2;
HANDLER bug13510739 OPEN;
HANDLER bug13510739 READ `primary` = (2);
HANDLER bug13510739 READ `primary` NEXT; <-- crash
The bug is that in this particular test case row_search_for_mysql() picked up
a delete-marked record and quit, leaving the cursor in a non-positioned state,
and on the subsequent 'get next' call the code crashed because of the
non-positioned cursor.
In row0sel.cc (line numbers from mysql-trunk):
4653 if (rec_get_deleted_flag(rec, comp)) {
...
4679 if (index == clust_index && unique_search) {
4680
4681 err = DB_RECORD_NOT_FOUND;
4682
4683 goto normal_return;
4684 }
it quit from here, not storing the cursor position.
In contrast, if the record=2 is not found at all (e.g. sleep(1) after DELETE
to let the purge wipe it away completely) then 'get = 2' does find record=3
and quits from here:
4366 if (0 != cmp_dtuple_rec(search_tuple, rec, offsets)) {
...
4394 btr_pcur_store_position(pcur, &mtr);
4395
4396 err = DB_RECORD_NOT_FOUND;
4397 #if 0
4398 ut_print_name(stderr, trx, FALSE, index->name);
4399 fputs(" record not found 3\n", stderr);
4400 #endif
4401
4402 goto normal_return;
Another fix could be to extend the condition on line 4366 to hold only if
search_tuple matches rec AND rec is not delete-marked.
Notice that in the above test case if we wait about 1 second somewhere after
DELETE and before 'get = 2', then the testcase does not crash and returns 4
instead. Not sure if this is the correct behavior, but this bugfix removes
the crash and makes the code return what it also returns in the non-crashing
case (if rec=2 is not found during 'get = 2', e.g. we have sleep(1) there).
Approved by: Marko (http://bur03.no.oracle.com/rb/r/863/)
The time comparison using current_time() stored in an int variable was giving wrong results because
the integer representation of current_time() was changed in mysql-trunk but not in mysql-5.5.
As a 'time' datatype the time is stored in the format hh:mm:ss, but as an int it is stored as hhmmss,
though only on the trunk; on mysql-5.5, as an int, it is stored as hh.
Hence, the current_time() function has been changed to the unix_timestamp() function.
This patch changes the mechanism by which the client enables a
plugin. Instead of using INSERT IGNORE to reload a plugin library,
it now uses REPLACE INTO. This allows users to load a library
multiple times, replacing the existing values in the mysql.plugin
table, and thus to point the symbol reference to a different dl
name in the table. This permits enabling multiple versions of the
same library without first disabling the old version.
A regression test was added to ensure this feature works.
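For illustration, the kind of statement now issued when re-enabling a plugin whose library was rebuilt under a new name (the plugin and library names here are hypothetical):
-- previously an INSERT IGNORE left the old dl value in place
REPLACE INTO mysql.plugin (name, dl) VALUES ('my_plugin', 'my_plugin_v2.so');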
Problem description:
When a view is created using the FORMAT function and the FORMAT function uses the locale
option, the definition of the view saved on the server does not contain that locale information.
Ex:
create table test2 (bb decimal (10,2));
insert into test2 values (10.32),(10009.2),(12345678.21);
create view test3 as select format(bb,1,'sk_SK') as cc from test2;
select * from test3;
+--------------+
| cc |
+--------------+
| 10.3 |
| 10,009.2 |
| 12,345,678.2 |
+--------------+
3 rows in set (0.02 sec)
show create view test3
View: test3
Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost`
SQL SECURITY DEFINER VIEW `test3` AS select format(`test2`.`bb`,1) AS `cc`
from `test2`
character_set_client: latin1
collation_connection: latin1_swedish_ci
1 row in set (0.02 sec)
Problem Analysis:
The function Item_func_format::print(), which prints the query string used to create
the view, does not print the third argument (i.e. the locale information). Hence
the view is created without the locale information.
Problem Solution:
If the argument count is more than 2, we now print the third argument into the query string.
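With the fix, the saved definition for the example above is expected to keep the locale argument, roughly (abridged):
CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER VIEW `test3` AS select format(`test2`.`bb`,1,'sk_SK') AS `cc` from `test2`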
Files changed:
sql/item_strfunc.cc
Function call changes: Item_func_format::print()
mysql-test/t/select.test
Added test case to test the bug
mysql-test/r/select.result
Result of the test case appended here
There was a memory leak when running some tests on PB2.
The reason for the failure is an early return from change_master()
that bypassed the deallocation of a dyn-array.
Actually the same bug58915 was fixed in trunk by relocating the dyn-array
destruction into THD::cleanup_after_query(), which cannot be bypassed.
The current patch backports magne.mahre@oracle.com-20110203101306-q8auashb3d7icxho
and adds two optimizations: a static buffer for the dyn-array to base on,
and calling the array initialization only when it is necessary rather than
on each CHANGE MASTER as before.
The counter handler_read_key (SSV::ha_read_key_count) is incremented
incorrectly.
The mysql server maintains a per thread system_status_var (SSV)
object. This object contains among other things the counter
SSV::ha_read_key_count. The purpose of this counter is to measure the
number of requests to read a row based on a key (or the number of
index lookups).
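At the SQL level this counter is visible as the Handler_read_key status variable, for example:
SHOW SESSION STATUS LIKE 'Handler_read_key';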
This counter was wrongly incremented in
ha_innobase::innobase_get_index(). The fix removes
this increment statement (for both innodb and innodb_plugin).
The various callers of innobase_get_index() were checked to
determine whether any of them must increment this counter (i.e. whether
they first call innobase_get_index() and then perform an index lookup).
It was found that no caller of innobase_get_index() needs to update the
SSV::ha_read_key_count counter.
SMALL KEY CACHE
The server crashed on division by zero because the key cache was not
initialized and the block length, which was used in a division, was 0.
The fix is to not allow CACHE INDEX if the key cache is not initialized,
and thus never to attempt LOAD INDEX INTO CACHE for an uninitialized key cache.
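For illustration, the statements involved (the key cache and table names here are hypothetical); a named key cache becomes initialized only once a non-zero size has been assigned to it:
SET GLOBAL hot_cache.key_buffer_size = 128*1024;
CACHE INDEX t1 IN hot_cache;
LOAD INDEX INTO CACHE t1;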
Also added some windows files/directories to .bzrignore.
WITH LARGE BUFFER POOL
(Note: this a backport of revno:3472 from mysql-trunk)
rb://845
approved by: Marko
When dropping a table (with an .ibd file, i.e. with
innodb_file_per_table set) we scan the entire LRU list to invalidate pages from
that table. This can be painful in the case of large buffer pools, as we hold
the buf_pool->mutex for the scan. Note that the gravity of the problem does
not depend on the size of the table: even with an empty table but a
large and filled-up buffer pool we will end up scanning a very long LRU
list.
The fix is to scan flush_list and just remove the blocks belonging to
the table from the flush_list, marking them as non-dirty. The blocks
are left in the LRU list for eventual eviction due to aging. The
flush_list is typically much smaller than the LRU list but for cases
where it is very long we have the solution of releasing the
buf_pool->mutex after scanning 1K pages.
buf_page_[set|unset]_sticky(): Use the new I/O state BUF_IO_PIN to ensure
that a block stays in the flush_list and the LRU list when we release
buf_pool->mutex. Previously we had been abusing BUF_IO_READ to achieve
this.
The predicate is re-written from
((`test`.`g1`.`a` = geometryfromtext('')) or ...
to
((`test`.`g1`.`a` = <cache>(geometryfromtext(''))) or ...
The range optimizer calls save_in_field_no_warnings() in order to fetch keys.
save_in_field_no_warnings() returns 0 because of the cache wrapper,
and get_mm_leaf() then proceeds to call Field_blob::get_key_image(),
which accesses uninitialized data.
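A sketch of the kind of query involved, assuming an indexed GEOMETRY column (the table definition is illustrative, not the original test case):
CREATE TABLE g1 (a GEOMETRY NOT NULL, SPATIAL KEY(a)) ENGINE=MyISAM;
SELECT 1 FROM g1 WHERE a = GeometryFromText('') OR a = GeometryFromText('POINT(1 1)');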
Refactored the test case: hardened and extended it. Created a test .inc file
to abstract the task of relocating binlogs.
Also, disabled it on Windows so as not to clutter the test case any further,
as it depends heavily on filesystem operations and path handling.
There was a memory leak when running some tests on PB2.
The reason for the failure is an early return from change_master()
that bypassed the deallocation of a dyn-array.
Fixed by relocating the dyn-array's destruction to ~LEX(), that is,
the end of the session, per Gleb's patch idea.
Two optimizations were done: a static buffer for the dyn-array to base on,
and calling the array initialization only when it is necessary rather than
on each CHANGE MASTER as before.
BIN LOG HAS BEEN MOVED
Moving the binary/relay log files from one location to
another and restarting the server with different log-bin or
relay-log paths would cause the startup process to abort. The
root cause was that the server would not be able to find the log
files, because it would consider the old paths for entries in the
index file instead of the new location. What's even worse,
relative paths would not be considered relative to the path
provided in log-bin and relay-log, but to mysql_data_dir.
We fix the cases where the index file contains relative paths. When
the server is reading from the index file, it checks whether the
entry contains relative paths. If it does, we replace it with the
absolute path set in the log-bin/relay-log option. Absolute paths
remain unchanged, and the index must be manually edited to
consider the new log-bin and/or relay-log path (this should be
documented). This is a fix for a GA version that does not break
behavior (that much).
For development versions, we should go with Zhenxing's approach
that removes paths altogether from index files.
Passing an empty user to the connect function causes
valgrind warnings. It seems that the client code is not prepared
to handle empty users. On 5.6 this can even be triggered by
START SLAVE PASSWORD='...'; i.e., without setting USER='...' on
the START SLAVE command (see WL#4143 for details on the new
additional START SLAVE commands).
To fix this, we disallow empty users when configuring the slave
connection parameters (this decision might be revisited if the
client code accepts empty users in the future).
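For example, a configuration like the following is now rejected instead of leaving an empty user for the connect call (the host value is illustrative):
CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_USER='';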
AND HANG IN SHOW TABLE STATUS.
ISSUE: Table corruption due to concurrent queries.
Different threads running an insert and a check
query concurrently can lead to table corruption: without proper
locking, rows are inserted in the middle of the check query.
SOLUTION: In the check query the mutex lock is now acquired
for a longer time to handle concurrent
insert and check queries.
NOTE: Additionally we backported the fix for the CHECKSUM
issue (bug#11758979).
A patch for alter_table-big.test has been committed earlier.
This is a patch for create-big.test:
The test used to time out after 900 seconds.
It relied on debug sleeps that are no longer present in the
code. Since the sleeps are long gone, fixing the problem didn't
involve just updating the result file or using the macro
"show_binlog_events2.inc" instead of the "show binlog events"
statement. The test needed to be rewritten using debug sync
points, and the result then needed to be updated.
So, the sleeps have been replaced by debug_sync points and the test execution time has
been reduced significantly.
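The general shape of the sleep-free synchronization is sketched below (the sync point name is hypothetical, and DEBUG_SYNC requires a debug build):
-- connection 1: make the statement under test pause at a named point in the server
SET DEBUG_SYNC = 'some_sync_point SIGNAL reached WAIT_FOR go';
-- the statement under test is issued here and blocks at the sync point
-- connection 2: wait until connection 1 has reached the point, inspect state, then release it
SET DEBUG_SYNC = 'now WAIT_FOR reached';
SET DEBUG_SYNC = 'now SIGNAL go';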
Setting query_cache_size to larger values might fail depending on the memory
pressure being put on the system. This can be seen on pushbuild as the test
case query_cache_size_basic tries to allocate a +3GB query cache, which
succeeds on some machines and fails on others.
So the part of the test case that tries to allocate a +3GB query cache has been
disabled for now to get the test running on pb2.
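For illustration only (the exact size is not taken from the test, and the outcome depends on available memory):
SET GLOBAL query_cache_size = 3221225472;  -- roughly 3GB; may fail under memory pressure
SHOW WARNINGS;                             -- check for a warning if the cache could not be resized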
rb://816
approved by: Marko Makela
The title is misleading. This bug was actually introduced by
bug 12635227 and was unearthed by a later optimization.
At shutdown we need to free the buf_page_t structs that we allocate
using malloc().
BY CACHING OR REDUCING CREATEEVENT CALLS".
5.5 versions of MySQL server performed worse than 5.1 versions
under single-connection workload in autocommit mode on Windows XP.
Part of this slowdown can be attributed to overhead associated
with constant creation/destruction of MDL_lock objects in the MDL
subsystem. The problem is that creation/destruction of these
objects causes creation and destruction of associated
synchronization primitives, which are expensive on Windows XP.
This patch tries to alleviate this problem by introducing a cache
of unused MDL_object_lock objects. Instead of destroying such
objects we put them into the cache and then reuse them with a new
key when creation of a new object is requested.
To limit the size of this cache, a new --metadata-locks-cache-size
start-up parameter was introduced.
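The new setting is given at server start-up and can be inspected as a system variable, for example (the value shown is only an example):
-- server started with: mysqld --metadata-locks-cache-size=1024
SHOW GLOBAL VARIABLES LIKE 'metadata_locks_cache_size';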
OPTION SKIP-WRITE-BINLOG
System tables were not getting upgraded when
mysql_upgrade was run with the --skip-write-binlog
option. (Same for --write-binlog.) Also, with
this option, the mysql_upgrade_info file was not
getting created after the upgrade.
mysql_upgrade uses the mysql client tool in
order to run the upgrade scripts; while doing so it
passes some of the command-line options (used to
start mysql_upgrade) directly to the mysql client.
The reason behind this bug is that some options,
like skip-write-binlog and upgrade-system-tables,
were being passed to the mysql tool along with other
options, and hence the mysql execution failed due to the
presence of these invalid options.
Fixed this issue by filtering out the above-mentioned
options from the list of options that will be passed to
the mysql and mysqlcheck tools. However, since --write-binlog
is supported by mysqlcheck, this option is used
explicitly while running mysqlcheck (not part of this patch,
already there).
Checking the contents of the general log after the upgrade
is not doable via an mtr test, so a manual test was performed.
Added a test to verify the creation of mysql_upgrade_info.
SCAN/CPU) => SLAVE FAILURE
When a statement containing a large number of rows is to be applied on
the slave, and the slave's table does not contain a PK, it can take a
considerable amount of time to find and change all the rows that are
to be changed.
The proper slave enhancement will be implemented in WL 5597. However,
in this bug we are making it clear to the user what the problem is, by
printing a message to the error log if the execution time for a given
statement in RBR exceeds LONG_FIND_ROW_THRESHOLD (set to 60
seconds). This shall help the DBA diagnose what's happening when
facing a slave server that is quiet for no apparent reason...
The note is only printed to the error log if log_warnings is set to be
greater than 1.
The bug was accidentally fixed by fixing
Bug#11759688 52020: InnoDB can still deadlock on just INSERT...ON DUPLICATE KEY
a.k.a. the reintroduction of
Bug#7975 deadlock without any locking, simple select and update
alter_table-big.test was failing due to the use of the RAND() function, which is no
longer replication-safe.
The test has been modified to use static values instead.
Also, 'sleep' has been replaced with 'debug_sync', and the execution time of the
test has been reduced significantly.
This test has now been taken out of the disabled.def file and is enabled.
a.k.a. Bug#7975 deadlock without any locking, simple select and update
Bug#7975 was reintroduced when the storage engine API was made
pluggable in MySQL 5.1. Instead of looking at thd->lex directly, we
rely on handler::extra(). But, we were looking at the wrong extra()
flag, and we were ignoring the TRX_DUP_REPLACE flag in places where we
should obey it.
innodb_replace.test: Add tests for hopefully all affected statement
types, so that the bug should never resurface. This kind of test
should have been added when fixing Bug#7975 in MySQL 5.0.3 in the
first place.
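Statements of the kind such tests exercise, for illustration (the table and values here are hypothetical, not taken from innodb_replace.test):
CREATE TABLE t (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;
INSERT INTO t VALUES (1, 1) ON DUPLICATE KEY UPDATE b = b + 1;
REPLACE INTO t VALUES (1, 2);
INSERT IGNORE INTO t VALUES (1, 3);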
rb:806 approved by Sunny Bains