Invalid (old?) table or database name in logs
The problem was still not completely fixed, due to
quoting.
This is the server-side-only fix (in explain_filename);
the change from filename_to_tablename to explain_filename
in the InnoDB code must still be done before the bug is
fully fixed.
value from the index (PRIMARY)
With the fix for BUG#46760, we correctly flag the presence of row_type
only when it is actually changed, which re-enables the fast ALTER TABLE
that was disabled by the fix for BUG#39200.
So the changes made for BUG#46760 can make the MySQL and InnoDB data
dictionaries go out of sync, but that case is already handled by InnoDB
with the fix for BUG#44030.
The test was originally written to handle this, but we requested InnoDB to
update the test because the data dictionaries were in sync after the fix for
BUG#39200.
Adjusting the innodb-autoinc testcase as mentioned in the comments.
The "socket" variable is not available on Windows even though
the --socket option can be used to specify the pipe name for
local connections that use a named pipe.
The solution is to ensure that the variable is always defined.
checksum)"
The problem was that the checksum of the GEOMETRY type used memory addresses
in the computation, making it non-repeatable and thus useless.
(This patch is a backport from 6.0 branch)
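The effect is easy to illustrate outside the server: if the bytes being
checksummed include a pointer to the geometry data rather than the data
itself, the result depends on where the value happens to be allocated.
A minimal C++ sketch with hypothetical types (not the actual CHECKSUM
TABLE code):

  #include <cstddef>
  #include <cstdint>

  struct GeometryRecord {
    uint32_t id;
    const char *wkb;   // pointer to the geometry value (address varies per run)
    uint32_t wkb_len;
  };

  // Simple FNV-1a style accumulator, for illustration only.
  static uint64_t hash_bytes(uint64_t h, const void *data, std::size_t len) {
    const unsigned char *p = static_cast<const unsigned char *>(data);
    for (std::size_t i = 0; i < len; i++) { h ^= p[i]; h *= 1099511628211ULL; }
    return h;
  }

  // Broken: hashes the raw record, which contains a memory address.
  uint64_t checksum_unrepeatable(const GeometryRecord &r) {
    return hash_bytes(14695981039346656037ULL, &r, sizeof(r));
  }

  // Fixed: hash the value bytes the pointer refers to, not the pointer.
  uint64_t checksum_repeatable(const GeometryRecord &r) {
    uint64_t h = hash_bytes(14695981039346656037ULL, &r.id, sizeof(r.id));
    return hash_bytes(h, r.wkb, r.wkb_len);
  }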
Despite copying the value of the old table's row type,
we don't always have to mark the row type as being specified.
InnoDB uses this flag to check whether it can do a fast ALTER TABLE
or not.
Fixed by correctly flagging the presence of row_type
only when it is actually changed.
Added a test case for 39200.
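A rough sketch of the intended behaviour, using hypothetical names rather
than the real ALTER TABLE structures: the old row type is copied as a
default, but the "explicitly specified" flag is set only when a new row
format is actually requested.

  #include <cstdint>

  enum RowType { ROW_TYPE_DEFAULT, ROW_TYPE_COMPACT, ROW_TYPE_DYNAMIC };

  struct CreateInfo {
    RowType row_type = ROW_TYPE_DEFAULT;
    uint32_t used_fields = 0;              // bit mask of explicitly given options
  };

  constexpr uint32_t USED_ROW_FORMAT = 1u << 0;  // hypothetical flag

  void prepare_alter(RowType old_row_type, RowType requested, CreateInfo *info) {
    if (requested == ROW_TYPE_DEFAULT) {
      // No row format in the ALTER statement: keep the old value, but do NOT
      // mark it as user-specified, so the engine may still pick the fast path.
      info->row_type = old_row_type;
    } else {
      // Row format actually changed: record the value and set the flag.
      info->row_type = requested;
      info->used_fields |= USED_ROW_FORMAT;
    }
  }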
query
The fix for bug 46749 removed the check for OUTER_REF_TABLE_BIT
and replaced it with a check on the presence of
Item_ident::depended_from.
Removing it altogether was wrong: OUTER_REF_TABLE_BIT should
still be checked in addition to depended_from (because it is not
set in all cases and does not contradict the check of depended_from).
Fixed by restoring the old condition as a complement to the
new one.
But there is no Last_IO_Error reported.
On the master, if a binary log event is larger than max_allowed_packet,
ER_MASTER_FATAL_ERROR_READING_BINLOG and the specific reason for the error are
sent to a slave when it requests a dump from the master, causing
the I/O thread to stop.
On a slave, the I/O thread stops when it receives a packet larger than max_allowed_packet.
In both cases, however, there was no Last_IO_Error reported.
This patch adds code to report the Last_IO_Error and the exact reason before stopping the
I/O thread, and also reports the case where an out-of-memory condition occurs while
handling packets from the master.
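As a rough illustration of the reporting shape only (hypothetical names,
not the replication internals): the idea is to record a descriptive
message that SHOW SLAVE STATUS can surface as Last_IO_Error before the
I/O thread stops, instead of stopping silently.

  #include <cstddef>
  #include <string>

  struct SlaveIoState {
    int last_io_errno = 0;
    std::string last_io_error;   // surfaced to the user as Last_IO_Error
    bool running = true;
  };

  void stop_with_error(SlaveIoState *io, int err, const std::string &reason) {
    io->last_io_errno = err;
    io->last_io_error = reason;  // report the exact cause before stopping
    io->running = false;
  }

  void handle_packet(SlaveIoState *io, const char *packet, std::size_t len,
                     std::size_t max_allowed_packet) {
    if (packet == nullptr) {     // out of memory while reading the packet
      stop_with_error(io, 1 /* hypothetical error code */,
                      "out of memory while handling a packet from the master");
      return;
    }
    if (len > max_allowed_packet) {
      stop_with_error(io, 2 /* hypothetical error code */,
                      "packet larger than max_allowed_packet; "
                      "increase max_allowed_packet on master and slave");
      return;
    }
    // ... queue the event for the SQL thread ...
  }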
1. Fixes BUG#44030 - Error: (1500) Couldn't read the MAX(ID) autoinc value
from the index (PRIMARY)
2. Disables the innodb-autoinc test for the InnoDB plugin temporarily.
The test case for this bug has a different result file for the InnoDB plugin.
The test case should be added to the InnoDB suite with a different result file.
Detailed revision comments:
r5243 | sunny | 2009-06-04 03:17:14 +0300 (Thu, 04 Jun 2009) | 14 lines
branches/5.1: When the InnoDB and MySQL data dictionaries go out of sync, before
the bug fix we would assert on missing autoinc columns. With this fix we allow
MySQL to open the table but set the next autoinc value for the column to the
MAX value. This effectively disables the next value generation. INSERTs will
fail with a generic AUTOINC failure. However, the user should be able to
read/dump the table, set the column values explicitly, use ALTER TABLE to
set the next autoinc value and/or sync the two data dictionaries to resume
normal operations.
Fix Bug#44030 Error: (1500) Couldn't read the MAX(ID) autoinc value from the
index (PRIMARY)
rb://118
r5252 | sunny | 2009-06-04 10:16:24 +0300 (Thu, 04 Jun 2009) | 2 lines
branches/5.1: The version of the result file checked in was broken in r5243.
r5259 | vasil | 2009-06-05 10:29:16 +0300 (Fri, 05 Jun 2009) | 7 lines
branches/5.1:
Remove the word "Error" from the printout because the mysqltest suite
interprets it as an error and thus the innodb-autoinc test fails.
Approved by: Sunny (via IM)
r5466 | vasil | 2009-07-02 10:46:45 +0300 (Thu, 02 Jul 2009) | 6 lines
branches/5.1:
Adjust the failing innodb-autoinc test to conform to the latest behavior
of the MySQL code. The idea and the comment in innodb-autoinc.test come
from Sunny.
The test case rpl_do_grant fails sporadically on PB2 with "Access
denied for user 'create_rout_db'@'localhost' ...". Inspecting the
test case, one may find that it issues a GRANT on the master
connection and immediately afterwards creates two new connections
(one to the master and one to the slave) using the credentials
set with the GRANT.
Unfortunately, there is no synchronization between master and
slave after the grant and before the connections are
established. This can result in the slave not having executed the
GRANT by the time the connection is attempted.
This patch fixes this by deploying a sync_slave_with_master
between the GRANT and the connection attempts.
The test case creates two temporary tables, then closes the
connection, waits for it to disconnect, then syncs the slave with
the master, checks for remaining open temporary tables on the
slave (which should be 0) and finally drops the used
database (mysqltest).
Unfortunately, the test sometimes fails with one open table on
the slave. This seems to be caused by the fact that waiting for
the connection to be closed is not sufficient. The test needs to
wait for the DROP event to be logged and only then synchronize
the slave with the master and proceed with the check. This is
caused by the asynchronous nature of the disconnect with respect
to binlogging of the DROP temporary table statement.
We fix this by deploying a call to wait_for_binlog_event.inc
in the test case, which makes execution wait for the DROP
temporary tables event before synchronizing master and slave.
The problem is that the argument buffer can be used as the result buffer,
which leads to the argument value being changed.
The fix is to use the 'old buffer' as the result buffer only
if the first argument is not a constant item.
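A minimal C++ sketch of the problem and the fix, with a hypothetical
function in place of the real Item code: reusing the first argument's
buffer is only safe when that argument is not a constant whose cached
value would be clobbered.

  #include <string>

  struct Arg {
    std::string value;
    bool is_const;                 // a constant item keeps its value cached
    std::string *val_str() { return &value; }
  };

  // Appends a suffix to the first argument's value and returns the result.
  std::string *append_suffix(Arg *arg, const std::string &suffix,
                             std::string *own_buffer) {
    std::string *res;
    if (!arg->is_const) {
      res = arg->val_str();            // safe: the value is recomputed each row
    } else {
      *own_buffer = *arg->val_str();   // constant: copy instead of modifying it
      res = own_buffer;
    }
    res->append(suffix);
    return res;
  }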
In RBR, there is an inconsistency between slaves and the master.
When an INSERT statement that includes an auto_increment field is executed,
the storage engine on the master checks the value of the auto_increment field.
It generates a sequence number and then replaces the value if the value is NULL or empty.
If the field's value is 0, the storage engine treats it like a NULL value,
unless NO_AUTO_VALUE_ON_ZERO is set in SQL_MODE.
In contrast, if the field's value is 0, the storage engine on the slave always generates a new sequence number,
whether or not NO_AUTO_VALUE_ON_ZERO is set in SQL_MODE.
The SQL mode of the slave SQL thread is always consistent with the master's.
Another variable is related to this bug:
whether to generate a sequence number is decided by the values of
table->auto_increment_field_not_null and SQL_MODE (whether it includes MODE_NO_AUTO_VALUE_ON_ZERO).
On the slave, table->auto_increment_field_not_null is FALSE, which causes this bug to appear.
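The decision the fix is concerned with can be sketched as follows
(hypothetical helper, not the engine code): a sequence number is generated
when no value was supplied, or when the value is 0 and NO_AUTO_VALUE_ON_ZERO
is not set. The "field explicitly set" member below is an analogue of
table->auto_increment_field_not_null.

  #include <cstdint>

  struct AutoIncInput {
    bool value_is_null;           // no value supplied for the auto_increment column
    int64_t value;                // supplied value, meaningful when !value_is_null
    bool no_auto_value_on_zero;   // SQL_MODE contains NO_AUTO_VALUE_ON_ZERO
    bool field_explicitly_set;    // analogue of auto_increment_field_not_null
  };

  bool should_generate_sequence_number(const AutoIncInput &in) {
    if (in.value_is_null || !in.field_explicitly_set)
      return true;                               // nothing supplied: generate one
    if (in.value == 0 && !in.no_auto_value_on_zero)
      return true;                               // 0 treated like NULL by default
    return false;                                // keep the supplied value
  }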
The Archive engine returns wrong values for average record length
and max data length.
With this fix they are calculated as follows:
- max data length is 2^63 where large files are supported
and INT_MAX32 where this is not supported;
- average record length is data length / records in data file.
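In code form, the corrected calculation looks roughly like this
(hypothetical struct, not the handler itself):

  #include <cstdint>

  struct ArchiveStats {
    uint64_t data_file_length = 0;
    uint64_t records = 0;
    uint64_t max_data_file_length = 0;
    uint64_t mean_rec_length = 0;
  };

  void update_stats(ArchiveStats *s, bool large_files_supported) {
    // 2^63 with large-file support, INT_MAX32 (2^31 - 1) otherwise.
    s->max_data_file_length =
        large_files_supported ? (1ULL << 63) : static_cast<uint64_t>(INT32_MAX);
    // Average record length is the data length divided by the row count.
    s->mean_rec_length = s->records ? s->data_file_length / s->records : 0;
  }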
Creating a temporary InnoDB table fails on case-insensitive
filesystems when lower_case_table_names is 2 (e.g. OS X)
and the temporary directory path contains upper case letters.
The problem was that the tmpdir prefix was converted to lower
case when the table was created, but was passed as is when the
table was opened.
Fixed by leaving the tmpdir prefix part intact.
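A small sketch of the idea, using a hypothetical helper rather than the
server function: only the table-name component of the path is lowered,
while the directory prefix is kept exactly as given, so that create and
open agree on the path.

  #include <algorithm>
  #include <cctype>
  #include <string>

  // Lower-case only the part after the last directory separator.
  std::string lowercase_table_part(const std::string &path) {
    std::string result = path;
    std::size_t sep = result.find_last_of("/\\");
    std::size_t start = (sep == std::string::npos) ? 0 : sep + 1;
    std::transform(result.begin() + start, result.end(), result.begin() + start,
                   [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
    return result;   // e.g. "/Volumes/Tmp/#SQL_Temp" -> "/Volumes/Tmp/#sql_temp"
  }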
The parser rule for expressions in a udf parameter list contains
two hacks:
First, the parser input stream is read verbatim, bypassing
the lexer.
Second, the Item::name field is overwritten. If the argument to a
udf was a field, the field's name as seen by name resolution was
overwritten this way.
If the field name was quoted or escaped, it would appear as e.g. "`field`".
Fixed by not overwriting field names.
This test case uses mysqlbinlog to dump the content of master-bin.000001,
but the content of master-bin.000001 is not what this test needs.
MTR runs a lot of test cases on one server, so when this test starts, the current binlog file
might not be master-bin.000001, or other events may already have been written by earlier tests.
The 'RESET MASTER' command must be called at the beginning; it ensures that the binlog of this test
is written to master-bin.000001 correctly.
Three other tests have the same problem; they were fixed together:
mysqlbinlog-cp932
binlog_incident
binlog_tmp_table
Postfix.
extra/rpl_tests/rpl_row_sp006.test was changed to fix this bug.
extra/rpl_tests/rpl_row_sp006.test is also referenced by rpl_ndb_sp006,
so rpl_row_sp006.result must be changed too.
The outer 'for' loop in remove_dup_with_compare() handled
HA_ERR_RECORD_DELETED by just starting over without advancing
to the next record, which caused an infinite loop.
This condition could be triggered on certain data by a SELECT
query containing DISTINCT, GROUP BY and HAVING clauses.
Fixed remove_dup_with_compare() so that we always advance to
the next record when receiving HA_ERR_RECORD_DELETED from
rnd_next().
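The loop shape, reduced to a hypothetical cursor API (not the server
code), shows why the scan position must advance even when a deleted
record is reported:

  #include <cstddef>
  #include <vector>

  enum FetchStatus { FETCH_OK, FETCH_RECORD_DELETED, FETCH_END_OF_FILE };

  struct Table {
    std::vector<int> rows;   // a negative value marks a deleted row
    FetchStatus read_at(std::size_t pos, int *out) const {
      if (pos >= rows.size()) return FETCH_END_OF_FILE;
      if (rows[pos] < 0) return FETCH_RECORD_DELETED;
      *out = rows[pos];
      return FETCH_OK;
    }
  };

  int count_live_rows(const Table &t) {
    int count = 0, row;
    for (std::size_t pos = 0;; ++pos) {          // fixed: pos advances every pass
      FetchStatus st = t.read_at(pos, &row);
      if (st == FETCH_END_OF_FILE) return count;
      if (st == FETCH_RECORD_DELETED) continue;  // buggy loop restarted w/o ++pos
      ++count;
    }
  }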
on subquery inside a SP
Problem: repeated calls to an SP containing an incorrect query with a
subselect may lead to a failed ASSERT().
Fix: set the proper subselect state in case an error occurred during
subquery transformation.
A SELECT with a join (not only a self-join) from an archive table may
return an incomplete result set when the result set size exceeds the
join buffer size.
The problem was that the archive row counter was initialized too
early, when the ha_archive::info() method was called. Later,
when the optimizer exceeds the join buffer, it attempts to reuse the
handler without calling ha_archive::info() again (which is
correct).
Fixed by moving row counter initialization from
ha_archive::info() to ha_archive::rnd_init().
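The idea can be sketched with a hypothetical handler shape (not
ha_archive itself): the per-scan row counter belongs in the scan-start
call, not in the statistics call, because the optimizer may restart the
scan without refreshing statistics.

  #include <cstdint>

  struct ArchiveLikeHandler {
    uint64_t total_rows = 0;       // maintained by the storage layer
    uint64_t scan_rows = 0;        // how many rows the current scan may return

    void info() {
      // Refresh statistics only; no per-scan state here (the bug was
      // initializing the scan counter in this call).
    }

    int rnd_init() {
      scan_rows = total_rows;      // fixed: per-scan counter set at scan start
      return 0;
    }

    bool rnd_next() {
      if (scan_rows == 0) return false;   // end of scan
      --scan_rows;
      return true;
    }
  };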
name as existing view
When trying to create a table with the same name as an existing view with
a join, the mysql server crashes.
The problem is that when CREATE TABLE is issued with the same name as a view,
while verifying against the existing tables, we assume that the base table
object is always created.
In this case, since it is a view over multiple tables, we don't have the
mysql derived table object.
Fixed the logic that checks whether there is an existing table so that it
does not assume the table object has been created when the base table is a
view over multiple tables.
Inserting a negative value into the autoincrement column of a
partitioned InnoDB table was causing the value of the auto
increment counter to wrap around into a very large positive
value. The consequences are the same as if a very large positive
value was inserted into the column, e.g. a reduced autoincrement
range and failure to read the autoincrement counter.
The current patch ensures that, before calculating the next
auto increment value, the current value is within the positive
maximum allowed limit.
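A minimal sketch of such a guard, with a hypothetical function rather
than the actual InnoDB code:

  #include <cstdint>

  uint64_t next_autoinc(uint64_t current, uint64_t increment, uint64_t col_max) {
    if (current >= col_max)          // counter already at the positive limit
      return col_max;                // do not wrap around
    uint64_t next = current + increment;
    // Guard against unsigned overflow as well as exceeding the column maximum.
    return (next < current || next > col_max) ? col_max : next;
  }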