Closing temp tables through end_thread
had a flaw in the binlog-off branch of close_temporary_tables, where the
next table to close was fetched via table->next:
for (table= thd->temporary_tables; table; table= table->next)
which was wrong since the current table instance had already been destroyed by
close_temporary(table, 1);
The fix adapts the binlog-on branch's method: it uses the loop's internal
'next' variable, which holds table->next saved before the table's destruction.
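A minimal sketch of the iterate-and-destroy pattern involved (illustrative,
not the server source; the TABLE struct and helpers are simplified stand-ins):

  #include <cstdlib>

  struct TABLE {
    TABLE *next;              /* other fields elided */
  };

  static void close_temporary(TABLE *table)
  {
    free(table);              /* after this, table->next is garbage */
  }

  /*
    Broken (binlog-off branch): "table= table->next" reads freed memory:
      for (table= thd->temporary_tables; table; table= table->next)
        close_temporary(table);
  */
  static void close_all(TABLE *head)
  {
    TABLE *next;
    for (TABLE *table= head; table; table= next)
    {
      next= table->next;      /* saved prior to the table's destruction */
      close_temporary(table);
    }
  }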
a too large value": the bug was that if MySQL generated a value for an
auto_increment column, based on the auto_increment_* variables, and this value
was bigger than the column's maximum possible value, then that maximum
possible value was inserted (after issuing a warning). But this didn't honour
the auto_increment_* variables (and so could cause conflicts in a
master-master replication setup where one master is supposed to generate only
even numbers and the other only odd numbers), so now we "round down" this
maximum possible value to honour the auto_increment_* variables before
inserting it.
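A hedged sketch of the "round down" step (the function name and layout are
illustrative, not the server's code): generated values must have the form
offset + N*increment, so instead of inserting the raw column maximum we clamp
to the largest value of that form not exceeding it.

  #include <cstdint>
  #include <cassert>

  static uint64_t prev_in_sequence(uint64_t max_value, uint64_t increment,
                                   uint64_t offset)
  {
    assert(increment > 0 && offset <= max_value);
    /* largest offset + N*increment that is <= max_value */
    return ((max_value - offset) / increment) * increment + offset;
  }

For example, with auto_increment_increment=2 and auto_increment_offset=2 (an
"even numbers only" master) and a TINYINT UNSIGNED maximum of 255, the value
inserted becomes 254 rather than 255.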
auto_increment breaks binlog":
if the slave's table had a higher auto_increment counter than the master's
(even though all rows of the two tables were identical), then in some cases
REPLACE and INSERT ON DUPLICATE KEY UPDATE failed to replicate
statement-based (they inserted different values on the slave than on the
master).
write_record() contained a "thd->next_insert_id=0" to force an adjustment
of thd->next_insert_id after the update or replacement. But it is this
assignment that introduced nondeterminism of the statement on the slave, and
thus the bug. For ON DUPLICATE, we replace that assignment with a call to
handler::adjust_next_insert_id_after_explicit_value(), which is deterministic
(it does not depend on the slave table's autoinc counter). For REPLACE, the
assignment can simply be removed (as REPLACE can't insert a number larger
than thd->next_insert_id).
We also move a too-early restore_auto_increment() down to the point where we
really know that we can restore the value.
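A sketch of why the replacement call is deterministic, under our assumption
(not spelled out in the message) that it derives the next id purely from the
value the statement itself wrote plus the session variables:

  #include <cstdint>

  struct Session { uint64_t next_insert_id; };  /* stand-in for THD */

  /* smallest offset + N*increment strictly greater than v */
  static uint64_t next_in_sequence(uint64_t v, uint64_t increment,
                                   uint64_t offset)
  {
    if (v < offset)
      return offset;
    return ((v - offset) / increment + 1) * increment + offset;
  }

  static void adjust_after_explicit_value(Session *thd, uint64_t value,
                                          uint64_t increment, uint64_t offset)
  {
    /* deterministic: inputs are the explicit value and the session
       variables, never the (possibly diverged) table autoinc counter */
    if (value >= thd->next_insert_id)
      thd->next_insert_id= next_in_sequence(value, increment, offset);
  }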
It was possible that fetching a record by an exact key value
(including the record pointer) could return a record with a
different key value. This happened only if a concurrent insert
added a record with the searched key value after the fetching
statement locked the table for read.
The search succeeded on the key value, but the record was
rejected as it was past the file length that was remembered
at the start of the fetching statement. In other words, it was
rejected as being a concurrently inserted record.
The action to recover from this problem was to fetch the
record that is pointed at by the next key of the index.
This was repeated until a record below the file length was
found.
I now avoid this loop if an exact match was searched.
If this match is beyond the file length, it is now treated
as "key not found". There cannot be another key with the
same record pointer.
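A minimal sketch of the changed recovery logic (illustrative, not the MyISAM
source; the search/next callbacks stand in for the key-read routines):

  #include <cstdint>
  #include <functional>

  struct Hit { uint64_t record_pos; };

  static bool fetch_visible(std::function<bool(Hit*)> search_key,
                            std::function<bool(Hit*)> next_key,
                            uint64_t saved_data_file_length,
                            bool exact_match)
  {
    Hit hit;
    if (!search_key(&hit))
      return false;
    while (hit.record_pos >= saved_data_file_length)
    {
      /* the hit points into data appended after the statement started */
      if (exact_match)
        return false;            /* new: report "key not found" at once */
      if (!next_key(&hit))
        return false;            /* old recovery loop: try the next key */
    }
    return true;
  }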
and BUG#19208 "Test 'rpl000017' hangs on Windows".
Both bugs are caused by attempting to delete an open
file and to immediately create a new one with the same
name. On Windows this can be supported only on NT platforms
(by using FILE_SHARE_DELETE mode and by renaming the
file before deletion). Because deleting files that are
not closed is not supported on all platforms (e.g. Win 98/ME),
this is to be considered harmful and should be eliminated by
a "code redesign".
Produce a warning if DATA/INDEX DIRECTORY is specified in an
ALTER TABLE statement.
The ignoring of these options is documented in the symbolic links
section of the manual.
- make sure to allocate just enough pages in the fragments by using the actual
row count from the backup, to avoid over-allocation of pages to fragments, and
thus avoid the bug
A UNIQUE KEY consisting of NOT NULL columns
was displayed as PRIMARY KEY in "DESC t1".
According to the code, that was intentional
behaviour, for reasons unknown to me.
This code was written before BitKeeper time,
so I cannot check who made this change and why.
After discussing on dev-public, a decision
was made to remove this code.
The AsBinary function returns the VARCHAR data type with binary collation.
It can cause problems for clients that treat that kind of data as
different from the BLOB type.
So now AsBinary returns BLOB.
This bug in Field_string::cmp resulted in a wrong comparison
with keys in partial indexes over multi-byte character fields.
Given a field 'a' declared as varchar(16) collate utf8_unicode_ci,
INDEX(a(4)) gives us an example of such an index.
Wrong key comparisons could lead to wrong result sets if
the selected query execution plan used a range scan by
a partial index over a utf8 character field.
This also caused wrong results in many other cases.
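A hedged sketch of the pitfall (illustrative, not the server's
Field_string::cmp; the byte-wise comparator stands in for a real collation):
a prefix key of 4 characters over utf8 occupies up to 12 bytes, so both sides
must be clipped to the same number of characters, not bytes, before comparing.

  #include <algorithm>
  #include <cstring>

  /* stand-in for a collation-aware comparison */
  static int collate_cmp(const char *a, size_t alen,
                         const char *b, size_t blen)
  {
    int r= memcmp(a, b, std::min(alen, blen));
    if (r)
      return r;
    return (alen > blen) - (alen < blen);
  }

  /* clip valid utf8 to at most 'chars' characters; returns byte length */
  static size_t utf8_clip(const char *s, size_t len, size_t chars)
  {
    size_t i= 0;
    while (i < len && chars--)
    {
      unsigned char c= (unsigned char) s[i];
      i+= (c < 0x80) ? 1 : (c < 0xE0) ? 2 : (c < 0xF0) ? 3 : 4;
    }
    return std::min(i, len);
  }

  static int prefix_cmp(const char *a, size_t alen,
                        const char *b, size_t blen, size_t prefix_chars)
  {
    return collate_cmp(a, utf8_clip(a, alen, prefix_chars),
                       b, utf8_clip(b, blen, prefix_chars));
  }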
functions in queries
Using MAX()/MIN() on a table with disabled indexes (disabled by ALTER
TABLE) results in error 124 (wrong index) from the storage engine.
The problem was that the optimizer used a disabled index to optimize
MAX()/MIN(). Normally it must skip disabled indexes and perform a
table scan.
This patch skips disabled indexes for the min/max optimization.
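A minimal sketch of the idea (names are illustrative; the real check lives in
the MIN()/MAX() optimizer): only keys still enabled on the table may be
considered as candidates.

  #include <cstdint>

  struct TableInfo
  {
    unsigned keys;          /* number of indexes on the table */
    uint64_t keys_in_use;   /* bit k set => index k is enabled */
  };

  static int find_key_for_minmax(const TableInfo &t,
                                 bool (*covers_column)(unsigned key))
  {
    for (unsigned k= 0; k < t.keys; k++)
    {
      if (!(t.keys_in_use & (1ULL << k)))
        continue;           /* disabled by ALTER TABLE ... DISABLE KEYS:
                               using it would yield error 124 */
      if (covers_column(k))
        return (int) k;     /* first enabled candidate */
    }
    return -1;              /* none usable: fall back to a table scan */
  }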
Added test case for bug#18759 Incorrect string to numeric conversion.
select.test:
Added test case for bug#18759 Incorrect string to numeric conversion.
item_cmpfunc.cc:
Cleanup after removal of the fix for bug#18360
Fixes bug#17264: for ALTER TABLE on win32, successful completion requires
a TL_WRITE (=10) lock instead of TL_WRITE_ALLOW_READ (=6); however, in the
InnoDB handler TL_WRITE was lifted to TL_WRITE_ALLOW_WRITE, which caused a
race condition when several clients did ALTER TABLE simultaneously.
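A hedged sketch of the race (the enum values mirror the numbers quoted above,
matching thr_lock.h; the surrounding logic is illustrative, not
ha_innobase::store_lock itself):

  enum thr_lock_type
  {
    TL_WRITE_ALLOW_WRITE=       5,
    TL_WRITE_ALLOW_READ=        6,
    TL_WRITE_CONCURRENT_INSERT= 7,
    TL_WRITE=                   10
  };

  static thr_lock_type innodb_store_lock(thr_lock_type requested)
  {
    /* Old behaviour downgraded the whole range up to and including
       TL_WRITE to TL_WRITE_ALLOW_WRITE, letting two concurrent ALTER
       TABLEs proceed at once; the fix stops short of TL_WRITE so that
       ALTER TABLE keeps its exclusive lock. */
    if (requested >= TL_WRITE_CONCURRENT_INSERT && requested < TL_WRITE)
      return TL_WRITE_ALLOW_WRITE;
    return requested;
  }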
There were two problems with charsets in the embedded server:
1. mysys/charset.c - the default_charset_info variable defined there is
modified by both server and client code (particularly when the
--default-charset option is handled).
In the embedded server we get two code paths modifying one variable.
I created a separate default_client_charset_info for client code.
2. mysql->charset and mysql->options.charset initialization isn't
done properly for the embedded server - the necessary calls were added.
BUG#18036 - update of table joined to self reports table as crashed
Set the exclude_from_table_unique_test value back to FALSE. It is needed
for a further check in multi_update::prepare of whether to use the record
cache.