can't find record
Some engines return data for the record: even though the null bit
is set for some fields, their old value may still be in the row.
This can happen when unpacking an after-image (AI) from the binlog
on top of a previous record in which a field that previously
contained a value is set to NULL. Ultimately, this may cause the
comparison of records to fail when the slave is doing an index or
range scan.
We fix this by issuing a call to reset() for each field that is
set to null while unpacking a row from the binary log, as in the
sketch below. Furthermore, we add a mixed-mode test case covering
the scenario where a field is updated and set to NULL through a
Query event and later searched for through a Rows event,
verifying that the search succeeds.
Finally, we also change the reset() method of the Field_bit class
so that it takes into account the bits stored among the null bits,
not only the ones stored in the record.
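A minimal sketch of the unpack-time reset, assuming illustrative
names for the loop and null-bit helpers (the real code is in the
row-event unpacking path of sql/log_event.cc):

    /*
      Illustrative sketch only: when the null bit for a field is set
      in the row image, reset() wipes any stale value left in the
      record by a previous row.
    */
    static void unpack_row_fields(Field **fields, uint n_fields,
                                  const uchar *null_bits)
    {
      for (uint i= 0; i < n_fields; i++)
      {
        Field *f= fields[i];
        if (null_bits[i / 8] & (1 << (i % 8)))
        {
          f->set_null();
          f->reset();          /* clear the old value in the record */
        }
        else
        {
          f->set_notnull();
          /* ... unpack the field value from the event data ... */
        }
      }
    }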
Conflicts:
Text conflict in .bzr-mysql/default.conf
Text conflict in mysql-test/extra/rpl_tests/rpl_loaddata.test
Text conflict in mysql-test/r/mysqlbinlog2.result
Text conflict in mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result
Text conflict in mysql-test/suite/binlog/r/binlog_unsafe.result
Text conflict in mysql-test/suite/rpl/r/rpl_insert_id.result
Text conflict in mysql-test/suite/rpl/r/rpl_loaddata.result
Text conflict in mysql-test/suite/rpl/r/rpl_stm_auto_increment_bug33029.result
Text conflict in mysql-test/suite/rpl/r/rpl_udf.result
Text conflict in mysql-test/suite/rpl/t/rpl_slow_query_log.test
Text conflict in sql/field.h
Text conflict in sql/log.cc
Text conflict in sql/log_event.cc
Text conflict in sql/log_event_old.cc
Text conflict in sql/mysql_priv.h
Text conflict in sql/share/errmsg.txt
Text conflict in sql/sp.cc
Text conflict in sql/sql_acl.cc
Text conflict in sql/sql_base.cc
Text conflict in sql/sql_class.h
Text conflict in sql/sql_db.cc
Text conflict in sql/sql_delete.cc
Text conflict in sql/sql_insert.cc
Text conflict in sql/sql_lex.cc
Text conflict in sql/sql_lex.h
Text conflict in sql/sql_load.cc
Text conflict in sql/sql_table.cc
Text conflict in sql/sql_update.cc
Text conflict in sql/sql_view.cc
Conflict adding files to storage/innobase. Created directory.
Conflict because storage/innobase is not versioned, but has versioned children. Versioned directory.
Conflict adding file storage/innobase. Moved existing file to storage/innobase.moved.
Conflict adding files to storage/innobase/handler. Created directory.
Conflict because storage/innobase/handler is not versioned, but has versioned children. Versioned directory.
Contents conflict in storage/innobase/handler/ha_innodb.cc
for InnoDB
The class Field_bit_as_char stores the metadata for the
field incorrectly, because bytes_in_rec and bit_len are set
to (field_length + 7) / 8 and 0 respectively, while
Field_bit has the correct values field_length / 8 and
field_length % 8.
Solved the problem by re-computing the values for the
metadata based on the field_length instead of using the
bytes_in_rec and bit_len variables.
To handle compatibility with old servers, a table map
flag was added to indicate that the bit computation is
exact. If the flag is clear, the slave computes the
number of bytes required to store the bit field and
compares that instead, effectively allowing replication
*without conversion* from any field length that requires
the same number of bytes to store.
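A minimal sketch of the corrected computation, assuming the two
metadata bytes carry the remainder bits and the byte count (the
helper name is illustrative):

    /*
      Illustrative only: recompute BIT metadata from field_length
      instead of trusting bytes_in_rec / bit_len, which
      Field_bit_as_char fills in differently.
    */
    static void save_bit_metadata(uint field_length, uchar *metadata)
    {
      metadata[0]= field_length % 8;   /* bits kept among null bits */
      metadata[1]= field_length / 8;   /* whole bytes in the record */
    }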
Row-based replication requires the types of columns on the
master and slave to be approximately the same (some safe
conversions between strings are allowed), but does not
allow conversions between fields of similar types, such
as TINYINT and INT, even when they would be safe.
This patch implements type conversions between similar fields
on the master and slave.
The conversions are controlled using a new variable
SLAVE_TYPE_CONVERSIONS of type SET('ALL_LOSSY','ALL_NON_LOSSY').
Non-lossy conversions are any conversions that do not run the
risk of losing any information, while lossy conversions can
potentially truncate the value. The column definitions are
checked to decide if the conversion is acceptable.
If neither conversion is enabled, it is required that the
definitions of the columns are identical on master and slave.
Conversion is done by creating an internal conversion table,
unpacking the master data into it, and then copying the data
to the real table on the slave.
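A rough sketch of the resulting decision, assuming the SET variable
is mapped to two option bits (names are illustrative; the real check
also inspects the column definitions on both sides):

    /* Illustrative only: may a master->slave column conversion
       proceed, given the SLAVE_TYPE_CONVERSIONS setting? */
    enum { TC_ALL_LOSSY= 1, TC_ALL_NON_LOSSY= 2 };

    static bool conversion_allowed(bool lossy, ulonglong type_conversions)
    {
      return lossy ? (type_conversions & TC_ALL_LOSSY) != 0
                   : (type_conversions & TC_ALL_NON_LOSSY) != 0;
    }

If neither bit is set, no conversion is allowed and the column
definitions must match exactly, as described above.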
----------------------------------------------------------
revno: 2630.10.1
committer: Konstantin Osipov <konstantin@mysql.com>
branch nick: mysql-6.0-lock-tables-tidyup
timestamp: Wed 2008-06-11 15:49:58 +0400
message:
WL#3726, review fixes.
Now that we have metadata locks, we don't need to keep a crippled
TABLE instance in the table cache to indicate that a table is locked.
Remove all code that used this technique. Instead, rely on metadata
locks and use the standard open_table() and close_thread_table()
to manipulate tables in the table cache.
This removes a number of functions that have become unused (see
the comment in sql_base.cc for details).
Under LOCK TABLES, keep a TABLE_LIST instance for each table
that may be temporarily closed. For that, implement a dedicated
class for LOCK TABLES mode, Locked_tables_list, sketched below.
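A minimal sketch of the shape of such a class (the member list is
abbreviated and illustrative, not the full interface):

    /*
      Illustrative only: under LOCK TABLES, keep one TABLE_LIST node
      per locked table so a temporarily closed table can be reopened.
    */
    class Locked_tables_list
    {
      TABLE_LIST *m_locked_tables;         /* one node per locked table */
    public:
      bool init_locked_tables(THD *thd);   /* snapshot the open tables */
      bool reopen_tables(THD *thd);        /* reopen any closed tables */
      void unlock_locked_tables(THD *thd); /* leave LOCK TABLES mode */
    };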
This is a pre-requisite patch for WL#4144.
This is not exactly a backport: there is no new
online ALTER table in Celosia, so the old alter table
code was changed to work with the new table cache API.
Bug #48370 Absolutely wrong calculations with GROUP BY and
decimal fields when using IF
Added the test cases in the above two bugs for regression
testing.
Added additional tests that demonstrate an incomplete fix.
Added a new factory method for Field_new_decimal to
create a field from a (decimal-returning) Item.
The new method makes sure that all the precision and
length variables are capped in a proper way.
This is required because Items can have a larger precision
than decimal fields, and thus need to be capped when
creating a field based on an Item type.
Also fixed an incorrect typecast to Item_decimal.
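A minimal sketch of the capping, assuming the usual server limits
DECIMAL_MAX_PRECISION (65) and DECIMAL_MAX_SCALE (30); the factory
signature here is illustrative:

    /* Illustrative only: build a Field_new_decimal from a
       decimal-returning Item, capping scale and precision. */
    Field_new_decimal *create_from_item(Item *item)
    {
      uint dec= min((uint) item->decimals, (uint) DECIMAL_MAX_SCALE);
      uint prec= min(item->decimal_precision(),
                     (uint) DECIMAL_MAX_PRECISION);
      uint32 len= my_decimal_precision_to_length(prec, dec,
                                                 item->unsigned_flag);
      return new Field_new_decimal(len, item->maybe_null, item->name,
                                   dec, item->unsigned_flag);
    }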
STRING_RESULT argument
There is a "magic" number for precision : NOT_FIXED_DEC.
This means that the precision is not a fixed number.
But this constant was re-defined in several files and
was not available to the UDF developers.
Moved the NOT_FIXED_DEC definition to the correct header
and removed the redundant definitions.
Backported to 5.6.0 (mysql-next-mr-runtime)
Conflicts
=========
Text conflict in .bzr-mysql/default.conf
Text conflict in libmysqld/CMakeLists.txt
Text conflict in libmysqld/Makefile.am
Text conflict in mysql-test/collections/default.experimental
Text conflict in mysql-test/extra/rpl_tests/rpl_row_sp006.test
Text conflict in mysql-test/suite/binlog/r/binlog_tmp_table.result
Text conflict in mysql-test/suite/rpl/r/rpl_loaddata.result
Text conflict in mysql-test/suite/rpl/r/rpl_loaddata_fatal.result
Text conflict in mysql-test/suite/rpl/r/rpl_row_create_table.result
Text conflict in mysql-test/suite/rpl/r/rpl_row_sp006_InnoDB.result
Text conflict in mysql-test/suite/rpl/r/rpl_stm_log.result
Text conflict in mysql-test/suite/rpl_ndb/r/rpl_ndb_circular_simplex.result
Text conflict in mysql-test/suite/rpl_ndb/r/rpl_ndb_sp006.result
Text conflict in mysql-test/t/mysqlbinlog.test
Text conflict in sql/CMakeLists.txt
Text conflict in sql/Makefile.am
Text conflict in sql/log_event_old.cc
Text conflict in sql/rpl_rli.cc
Text conflict in sql/slave.cc
Text conflict in sql/sql_binlog.cc
Text conflict in sql/sql_lex.h
21 conflicts encountered.
NOTE
====
mysql-5.1-rpl-merge has been made a mirror of mysql-next-mr:
- "mysql-5.1-rpl-merge$ bzr pull ../mysql-next-mr"
This is the first cset (merge/...) committed after pulling
from mysql-next-mr.
----------------------------------------------------------
revno: 2630.22.8
committer: Konstantin Osipov <konstantin@mysql.com>
branch nick: mysql-6.0-runtime
timestamp: Sun 2008-08-10 18:49:52 +0400
message:
Get rid of typedef struct for the most commonly used types:
TABLE, TABLE_SHARE, LEX. This simplifies use of tags
and forward declarations.
The problem was that appending values to the end of an existing
ENUM or SET column was being treated as a table data modification,
preventing the immediate (fast) table alteration that occurs when
only table metadata is being modified.
The cause was twofold: adding enumeration or set members to the
end of the list of valid member values was not being considered
a "compatible" table alteration, and for SET columns, the check
was being done on the max display length rather than the underlying
(pack) length of the field.
The solution is to augment the function that checks whether two
ENUM or SET fields are compatible, by comparing the pack lengths
and performing a limited comparison of the member values.
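A rough sketch of the augmented check, assuming members may only be
appended at the end (the helper name is illustrative):

    /*
      Illustrative only: two ENUM/SET fields allow a fast ALTER if
      their pack lengths match and the old member values are an
      exact prefix of the new member values.
    */
    static bool enum_set_compatible(const TYPELIB *old_def, uint old_pack,
                                    const TYPELIB *new_def, uint new_pack)
    {
      if (old_pack != new_pack || new_def->count < old_def->count)
        return false;
      for (uint i= 0; i < old_def->count; i++)
        if (strcmp(old_def->type_names[i], new_def->type_names[i]))
          return false;               /* an existing member changed */
      return true;                    /* only appended members differ */
    }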
The problem was that creating a DECIMAL column from a decimal
value could lead to a failed assertion as decimal values can
have a higher precision than those attached to a table. The
assert could be triggered by creating a table from a decimal
with a large (> 30) scale. Also, there was a problem in
calculating the number of digits in the integral and fractional
parts if both exceeded the maximum number of digits permitted
by the new decimal type.
The solution is to ensure that the truncation procedure is
executed when deducing a DECIMAL column from a decimal value of
higher precision. If the integer part is equal to or bigger than
the maximum precision for the DECIMAL type (65), the integer part
is truncated to fit and the fractional part becomes zero.
Otherwise, the fractional part is truncated to fit into the space
left after the integer part is copied.
This patch borrows code and ideas from Martin Hansson's patch.
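A minimal sketch of the capping arithmetic, assuming the DECIMAL
maximum precision of 65 (variable names are illustrative):

    /* Illustrative only: fit intg integer digits and frac fractional
       digits into a DECIMAL with at most 65 digits in total. */
    static void cap_decimal_digits(uint *intg, uint *frac)
    {
      const uint max_prec= 65;        /* DECIMAL_MAX_PRECISION */
      if (*intg >= max_prec)
      {
        *intg= max_prec;              /* integer part truncated to fit */
        *frac= 0;                     /* nothing left for the fraction */
      }
      else if (*intg + *frac > max_prec)
        *frac= max_prec - *intg;      /* fraction gets the space left */
    }

For example, under this rule a value with 70 integer digits becomes
DECIMAL(65,0), while one with 62 integer and 10 fractional digits
keeps only 3 fractional digits.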
Altering a table to update a column with types DATE or TIMESTAMP
would incorrectly be seen as a significant change that necessitates
a slow copy+rename operation instead of a fast update.
There were two problems:
The character set is magically set to "binary" for TIMESTAMP,
but that was done too deep in the field-handling code for ALTER
TABLE to know of it. Now, this is done in the constructor for
Field_timestamp.
Also, when we set the character set for the new replacement/
comparison field, we raise the "binary" field flag that tells us
we should compare it exactly. That is necessary to match the old
stored definition.
Next is the problem that the default length for TIMESTAMP and DATE
fields is different from the length read from the .frm file. The
compressed size is written to the file, but the human-readable,
part-delimited length is used as the default length. IIRC, for
TIMESTAMP it was 19 != 14, and for DATE it was 8 != 10. The length
mismatch causes a table copy.
Also, clean up a place where a comparison function altered one of
its parameters: the mutation is replaced with an assertion of the
condition it used to establish.
Problem: mysqld doesn't detect that enum data must be reinserted
when performing 'ALTER TABLE' in some cases.
Fix: reinsert the data when altering an enum field if the enum
values are changed.
A stored procedure involving substrings could crash the server on certain
platforms because of invalid memory reads.
While storing the new blob-field value, the cached value's address
range overlapped that of the new field value. This caused problems
when the cached value storage was reallocated to provide access for
a new character set representation. The patch checks the address
ranges, and if they overlap, the new field value is copied to new
storage before it is converted to the new character set.
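A minimal sketch of the overlap test (illustrative names):

    /* Illustrative only: half-open ranges [a, a+alen) and
       [b, b+blen) overlap iff each starts before the other ends. */
    static bool ranges_overlap(const char *a, size_t alen,
                               const char *b, size_t blen)
    {
      return a < b + blen && b < a + alen;
    }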
returns unexpected result
If:
1. a table has a non-nullable BIT column c1 with a length
shorter than 8 bits and some additional non-nullable
columns c2 etc., and
2. the WHERE clause is like: (c1 = constant) AND c2 ...,
the SELECT query returns an unexpected result set.
The server stores BIT columns in a tricky way to save disk
space: if the column's bit length is not divisible by 8, the
server places the remainder bits among the null bits at the start
of a record. The remaining bytes are stored in the record itself,
and Field::ptr points to these bytes.
However, if the bit length of the whole column is less than 8,
there are no remaining bytes, and there is nothing to store in
the record at its regular place. In this case Field::ptr points
to bytes actually occupied by the next column in the record.
If both columns (BIT and the next column) are NOT NULL,
the Field::eq function incorrectly deduces that this is the
same column, so query transformation/equal item elimination
code (see build_equal_items_for_cond) may mix these columns
and damage conditions containing references to them.
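A rough sketch of the kind of check this requires, assuming
Field_bit records the location of its null-bits fragment in
bit_ptr/bit_ofs, as the server does:

    /*
      Illustrative only: comparing ptr alone is not enough for BIT
      columns shorter than 8 bits, whose ptr aliases the next column;
      the position of the bits among the null bits must match too.
    */
    bool Field_bit::eq(Field *field)
    {
      return Field::eq(field) &&
             bit_ptr == ((Field_bit*) field)->bit_ptr &&
             bit_ofs == ((Field_bit*) field)->bit_ofs;
    }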
This fix is for 5.0 only: backporting the 6.0 patch manually.
The parser code in sql/sql_yacc.yy needs to be more robust to
out-of-memory (OOM) conditions, so that when parsing a query fails
due to OOM, the thread gracefully returns an error.
Before this fix, a new/alloc returning NULL could:
- cause a crash, if dereferencing the NULL pointer,
- produce a corrupted parsed tree, containing NULL nodes,
- alter the semantics of a query, by silently dropping token values or nodes
With this fix:
- C++ constructors are *not* executed with a NULL "this" pointer
when operator new fails.
This is achieved by declaring "operator new" with a "throw ()" clause,
so that a failed new gracefully returns NULL on OOM conditions.
- calls to new/alloc are tested for a NULL result,
- The thread diagnostic area is set to an error status when OOM occurs.
This ensures that a request failing in the server properly returns an
ER_OUT_OF_RESOURCES error to the client.
- OOM conditions cause the parser to stop immediately (MYSQL_YYABORT).
This prevents causing further crashes when using a partially built parsed
tree in further rules in the parser.
No test scripts are provided, since automating OOM failures is not
instrumented in the server.
Tested under the debugger, to verify that an error in alloc_root
causes the thread to return gracefully all the way to the client
application, with an ER_OUT_OF_RESOURCES error.
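A minimal sketch of the two halves of the pattern (the grammar
fragment is illustrative):

    /*
      Declaring operator new with throw () makes a failed allocation
      return NULL instead of raising an exception, so a constructor
      is never executed with a NULL "this" pointer.
    */
    class Sql_alloc
    {
    public:
      static void *operator new(size_t size) throw ()
      {
        return sql_alloc(size);         /* returns NULL on OOM */
      }
    };

    /* In a grammar rule, every allocation result is checked: */
    item= new Item_int(value);
    if (item == NULL)
      MYSQL_YYABORT;      /* diagnostics area already set to an error */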
Tables in the table definition cache keep a cache buffer for blob
fields, which can consume a lot of memory.
This patch introduces a maximum size threshold for these buffers.
In order to handle CHAR() fields, 8 bits were reserved for
the size of the CHAR field. However, instead of denoting the
number of characters in the field, field_length was used, which
denotes the number of bytes in the field.
Since UTF-8 fields can have three bytes per character (extended
to four bytes per character in 6.0), an extra two bits have been
encoded in the field metadata word for fields of type
Field_string (i.e., CHAR fields).
Since the metadata word is already full, the extra bits have been
encoded in the upper 4 bits of the real type (the most
significant byte of the metadata word) by computing the
bitwise XOR with the extra two bits. Since the upper 4 bits
of the real type are always 1111 for Field_string, this
means that for fields of length < 256 the encoding is
identical to the encoding used in pre-5.1.26 servers, but
for lengths of 256 or more, an unrecognized type is formed,
causing an old slave (that does not handle lengths of 256
or more) to stop.
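A minimal sketch of the encoding and its inverse, with illustrative
helper names (Field_string's real type byte is 0xFE, so its upper
nibble is 1111):

    /*
      Illustrative only: pack a CHAR byte length of up to 1023 into
      two metadata bytes; bits 8-9 of the length are XORed into the
      real-type byte, leaving lengths < 256 encoded exactly as on
      pre-5.1.26 servers.
    */
    static void save_char_metadata(uchar *m, uchar real_type,
                                   uint field_length)
    {
      m[0]= real_type ^ ((field_length & 0x300) >> 4);
      m[1]= field_length & 0xff;
    }

    static uint read_char_length(const uchar *m, uchar real_type)
    {
      return (((m[0] ^ real_type) << 4) & 0x300) | m[1];
    }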
or incorrect.
For better conformance with the standard, the truncation procedure
for CHAR columns has been changed to ignore truncation of trailing
whitespace characters (the Note has been removed).
Finally, for columns with non-binary charsets:
1. CHAR(N) columns silently ignore trailing-whitespace truncation;
2. VARCHAR and TEXT columns issue a Note about truncation.
BLOBs and other columns with a BINARY charset are unaffected.
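A rough sketch of the resulting Note decision (illustrative names;
the real logic lives in the field store/report path):

    /*
      Illustrative only: should storing a too-long string raise a
      truncation Note? cut_start..cut_end is the truncated tail.
    */
    static bool report_truncation(bool is_char_column, bool binary_cs,
                                  const char *cut_start,
                                  const char *cut_end)
    {
      if (binary_cs)
        return true;            /* BINARY/BLOB behavior unchanged */
      if (!is_char_column)
        return true;            /* VARCHAR/TEXT: Note on truncation */
      for (const char *p= cut_start; p < cut_end; p++)
        if (*p != ' ')
          return true;          /* CHAR: report only non-space loss */
      return false;             /* CHAR losing only spaces: silent */
    }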