Bug#49836 reports that the geometry type does not work
with WL#5151 applied.
The GEOMETRY type inherits the blob comparison function,
which reads the pack length from the metadata. The GEOMETRY
type does not fill in the metadata with anything sensible,
so it is always zero, meaning that the pack length for the
source type is considered zero, rendering it always "smaller"
than the target type, which has pack length 4 (without pointer).
This patch fixes the problem by defining
Field_geom::pack_length_from_metadata() to always return the
same value as Field_geom::row_pack_length().
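A minimal standalone sketch of the comparison described above; the
struct and member names are illustrative, not the actual Field_geom code:

    #include <cstdio>

    /* Illustrative model only: a blob-like source column is considered
       compatible when its pack length is at least that of the target. */
    struct GeomColumn {
      unsigned row_pack_length() const { return 4; }  /* pack length without pointer */
      /* Inherited (pre-fix) behaviour: trust the replicated metadata. */
      unsigned pack_length_from_metadata_old(unsigned metadata) const { return metadata; }
      /* Fixed behaviour: GEOMETRY never fills in its metadata, so use
         row_pack_length() instead. */
      unsigned pack_length_from_metadata_new(unsigned) const { return row_pack_length(); }
    };

    int main() {
      GeomColumn col;
      unsigned metadata = 0;  /* the metadata is always zero for GEOMETRY */
      std::printf("pre-fix source pack length:  %u (always \"smaller\" than 4)\n",
                  col.pack_length_from_metadata_old(metadata));
      std::printf("post-fix source pack length: %u (matches the target)\n",
                  col.pack_length_from_metadata_new(metadata));
      return 0;
    }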
When the $diff_statement variable for diff_master_slave.inc was
split across multiple lines, the trailing part of the statement
was lost when executed on Windows systems.
Fixed the problem by always putting the value of $diff_statement
on a single line.
when replicating
The function create_virtual_tmp_table does not
set db_low_byte_first in the same way as
create_tmp_table does, causing copies from
the virtual table to a real table to produce strange
values for SET types on big-endian machines.
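The byte-order issue can be illustrated with a small standalone
example (not the server code): a two-byte SET value written
low-byte-first reads back as a very different number if the reader
assumes the opposite byte order.

    #include <cstdint>
    #include <cstdio>

    int main() {
      unsigned char buf[2];
      uint16_t set_bits = 0x0005;     /* elements 1 and 3 of a SET column */
      buf[0] = set_bits & 0xFF;       /* writer stores the low byte first */
      buf[1] = set_bits >> 8;

      uint16_t low_first  = (uint16_t)(buf[0] | (buf[1] << 8)); /* matching order */
      uint16_t high_first = (uint16_t)((buf[0] << 8) | buf[1]); /* mismatched order */

      std::printf("read low-byte-first:  0x%04x\n", low_first);   /* 0x0005 */
      std::printf("read high-byte-first: 0x%04x\n", high_first);  /* 0x0500 */
      return 0;
    }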
The bug is caused by a race condition between the
INSERT DELAYED thread and the client thread's FLUSH TABLE.
FLUSH TABLE does not guarantee (as the test case wrongly
assumes) that the INSERT DELAYED is ever executed, so the
execution of the test case is not deterministic.
The fix is to verify deterministically that both
threads have completed by checking the content of the table.
This test case tests circular replication among four hosts:
A--->B--->C--->D--->A
Replication around the circle is slow and needs extra time to propagate all data.
The time it takes sometimes exceeds the time that
wait_condition.inc waits for all data to be replicated, which
causes sporadic failures of this test case.
This patch uses sync_slave_with_master to ensure that all data is replicated
successfully around the circle.
for InnoDB
The class Field_bit_as_char stores the metadata for the
field incorrectly because bytes_in_rec and bit_len are set
to (field_length + 7) / 8 and 0 respectively, while
Field_bit has the correct values field_length / 8 and
field_length % 8.
Solved the problem by re-computing the values for the
metadata based on field_length instead of using the
bytes_in_rec and bit_len variables.
To handle compatibility with old servers, a table map
flag was added to indicate that the bit computation is
exact. If the flag is clear, the slave computes the
number of bytes required to store the bit field and
compares that instead, effectively allowing replication
*without conversion* from any field length that requires
the same number of bytes to store.
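A small standalone illustration of the two computations (not the
actual Field_bit code): the exact metadata derived from field_length
versus the Field_bit_as_char values, plus the byte count a slave
falls back to comparing when the "exact" flag is clear.

    #include <cstdio>

    int main() {
      const unsigned lengths[] = {1, 7, 8, 12};
      for (unsigned field_length : lengths) {
        unsigned exact_bytes   = field_length / 8;        /* Field_bit */
        unsigned exact_bits    = field_length % 8;
        unsigned as_char_bytes = (field_length + 7) / 8;  /* Field_bit_as_char */
        unsigned as_char_bits  = 0;
        unsigned bytes_needed  = (field_length + 7) / 8;  /* fallback comparison */
        std::printf("BIT(%u): exact=(%u,%u)  as_char=(%u,%u)  bytes needed=%u\n",
                    field_length, exact_bytes, exact_bits,
                    as_char_bytes, as_char_bits, bytes_needed);
      }
      return 0;
    }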
A 'LOAD DATA CONCURRENT [LOCAL] INFILE ...' statement is binlogged as only
'LOAD DATA [LOCAL] INFILE ...' in SBR and MBR. As a result, when replication is on,
queries on slaves can be blocked by the replication SQL thread.
This patch writes 'CONCURRENT' into the log event if the 'CONCURRENT' option
is present in the original statement in SBR and MBR.
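A sketch of the rewrite, using illustrative names rather than the
actual log-event code: the logged statement keeps the CONCURRENT
keyword whenever the original statement used it.

    #include <cstdio>
    #include <string>

    static std::string rewritten_load_data(bool concurrent, bool local) {
      std::string q = "LOAD DATA ";
      if (concurrent) q += "CONCURRENT ";   /* previously dropped from the binlog */
      if (local)      q += "LOCAL ";
      q += "INFILE 'data.txt' INTO TABLE t1";
      return q;
    }

    int main() {
      std::printf("%s\n", rewritten_load_data(true, false).c_str());
      return 0;
    }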
Row-based replication requires the types of columns on the
master and slave to be approximately the same (some safe
conversions between strings are allowed), but does not
allow safe conversions between fields of similar types such
as TINYINT and INT.
This patch implements type conversions between similar fields
on the master and slave.
The conversions are controlled using a new variable
SLAVE_TYPE_CONVERSIONS of type SET('ALL_LOSSY','ALL_NON_LOSSY').
Non-lossy conversions are any conversions that do not run the
risk of losing any information, while lossy conversions can
potentially truncate the value. The column definitions are
checked to decide if the conversion is acceptable.
If neither conversion is enabled, it is required that the
definitions of the columns are identical on master and slave.
Conversion is done by creating an internal conversion table,
unpacking the master data into it, and then copying the data to
the real table on the slave.
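The lossy/non-lossy distinction can be illustrated with plain integer
types; this models the idea only, not the server's conversion-table
machinery.

    #include <cstdint>
    #include <cstdio>
    #include <limits>

    /* Lossy direction: an INT value may not fit into a TINYINT. */
    static int8_t store_as_tinyint(int32_t v) {
      if (v > std::numeric_limits<int8_t>::max()) return std::numeric_limits<int8_t>::max();
      if (v < std::numeric_limits<int8_t>::min()) return std::numeric_limits<int8_t>::min();
      return static_cast<int8_t>(v);
    }

    int main() {
      int8_t  master_tiny = 100;
      int32_t slave_int   = master_tiny;                   /* TINYINT -> INT: never loses data */
      int32_t master_int  = 1000;
      int8_t  slave_tiny  = store_as_tinyint(master_int);  /* INT -> TINYINT: value is clipped */
      std::printf("non-lossy: %d -> %d\n", (int)master_tiny, (int)slave_int);
      std::printf("lossy:     %d -> %d\n", (int)master_int,  (int)slave_tiny);
      return 0;
    }

On the slave, the conversions are enabled through the new variable,
for example with SET GLOBAL SLAVE_TYPE_CONVERSIONS = 'ALL_NON_LOSSY'.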
Bug #39675 rename tables on innodb tables with pending
transactions causes slave data issue
The bug was already fixed as part of the patch for Bug#989
(If DROP TABLE while there's an active transaction,
wrong binlog order)
Test case added to rpl_innodb.test.
{PROCEDURE|FUNCTION} FROM ...'
The master would hit an assertion when the binary log was
active. This was because the thread's diagnostics
area was being cleared before writing to the binlog,
regardless of whether mysql_routine_grant returned an error or
not. When mysql_routine_grant was about to return an error, the
return value and the diagnostics area contents would therefore
mismatch. Consequently, neither would my_ok be called nor an
error be signaled in the diagnostics area, eventually
triggering the assertion in net_end_statement.
We fix this by not clearing the diagnostics area at binlogging
time.
escaped field names
When in mixed or statement mode, the master logs LOAD DATA
queries by resorting to an Execute_load_query_log_event. This
event does not contain the original query, but a rewritten
version of it, which includes the table field names. However, the
rewrite does not escape the field names. If these names match a
reserved keyword, then the slave will stop with a syntax error
when executing the event.
We fix this by escaping the field names, as is already done
for the table name.
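A standalone sketch of the kind of quoting involved (illustrative
only; the server uses its own identifier-quoting helper): wrap the
field name in backquotes and double any embedded backquote, so a
name such as desc no longer reads as the reserved keyword.

    #include <cstdio>
    #include <string>

    static std::string quote_identifier(const std::string &name) {
      std::string out = "`";
      for (char c : name) {
        if (c == '`') out += "``";   /* double embedded backquotes */
        else          out += c;
      }
      out += "`";
      return out;
    }

    int main() {
      std::printf("%s\n", quote_identifier("desc").c_str());   /* prints `desc` */
      return 0;
    }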
2617.31.12, 2617.31.15, 2617.31.15, 2617.31.16, 2617.43.1
- initial changeset that introduced the fix for
Bug#989 and follow up fixes for all test suite failures
introduced in the initial changeset.
------------------------------------------------------------
revno: 2617.31.1
committer: Davi Arnaut <Davi.Arnaut@Sun.COM>
branch nick: 4284-6.0
timestamp: Fri 2009-03-06 19:17:00 -0300
message:
Bug#989: If DROP TABLE while there's an active transaction, wrong binlog order
WL#4284: Transactional DDL locking
Currently the MySQL server does not keep metadata locks on
schema objects for the duration of a transaction, thus failing
to guarantee the integrity of the schema objects being used
during the transaction and to protect them from concurrent
DDL operations. This also poses a problem for replication, as
a DDL operation might be replicated even though there are
active transactions using the object being modified.
The solution is to defer the release of metadata locks until
an active transaction is either committed or rolled back. This
prevents other statements from modifying the table for the
entire duration of the transaction. This provides commitment
ordering for guaranteeing serializability across multiple
transactions.
- Incompatible change:
If MySQL's metadata locking system encounters a lock conflict,
the usual scheme is to use the try-and-back-off technique to
avoid deadlocks -- this scheme consists of releasing all locks
and trying to acquire them all in one go.
But in a transactional context this algorithm can't be used,
as it is not possible to release locks acquired during the course
of the transaction without breaking the transaction's commitments.
To avoid deadlocks in this case, an ER_LOCK_DEADLOCK error is
returned if a lock conflict is encountered during a transaction.
Let's consider an example:
A transaction has two statements that modify table t1, then table
t2, and then commits. The first statement of the transaction will
acquire a shared metadata lock on table t1, and it will be kept
until COMMIT to ensure serializability.
At the moment when the second statement attempts to acquire a
shared metadata lock on t2, a concurrent ALTER or DROP statement
might have locked t2 exclusively. The prescription of the current
locking protocol is that the acquirer of the shared lock backs off
-- gives up all its current locks and retries. This implies that
the entire multi-statement transaction has to be rolled back.
- Incompatible change:
FLUSH commands such as FLUSH PRIVILEGES and FLUSH TABLES WITH READ
LOCK won't cause locked tables to be implicitly unlocked anymore.
Before this patch, semisync assumed that the number of transactions
running in parallel could not exceed max_connections, but this is not
true when the event scheduler is executing events, which caused
semisync to run out of preallocated transaction nodes.
Fix the problem by allocating transaction nodes dynamically.
This patch also fixes a possible deadlock when UNINSTALL
PLUGIN rpl_semi_sync_master runs in parallel with updates. Fixed by
releasing the internal Delegate lock before unlocking the plugins.
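The deadlock-avoidance part follows the usual lock-ordering rule; a
minimal sketch with generic mutexes, using illustrative names rather
than the semisync code:

    #include <mutex>

    std::mutex delegate_lock;   /* stands in for the internal Delegate lock */
    std::mutex plugin_lock;     /* stands in for the plugin lock */

    void remove_observer_safely() {
      {
        std::lock_guard<std::mutex> guard(delegate_lock);
        /* ... detach the observer while holding only the Delegate lock ... */
      }                          /* release it BEFORE touching the plugin lock */
      std::lock_guard<std::mutex> plugin_guard(plugin_lock);
      /* ... release the plugin without holding the Delegate lock, so
         UNINSTALL PLUGIN and a concurrent update can never wait on
         each other's locks in opposite orders ... */
    }

    int main() { remove_observer_safely(); return 0; }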
Support for flushing individual logs, so that the user can
selectively flush a subset of the server logs.
Flush of individual logs is done according to the
following syntax:
FLUSH <log_category> LOGS;
The syntax is extended so that the user is able to flush a
subset of the logs in one statement:
FLUSH <log_category> LOGS [, <log_category> LOGS] ...;
(for example, FLUSH ERROR LOGS, BINARY LOGS;)
where log_category is one of:
SLOW
ERROR
BINARY
ENGINE
GENERAL
RELAY.