Issue:
When libmysqld/example/mysql_embedded was executed, it aborted. This is a
regression: it worked in 5.1 but failed in 5.5. The issue was that
remaining_argc/remaining_argv were not assigned correctly in
init_embedded_server(), and these values were used later in
init_common_variable().
Solution:
Rectified the code to pass the correct argc/argv for use in
init_common_variable().
This test case was failing on 5.5 and trunk for two reasons:
1) It waited for the "Waiting for table level lock" process
state, but this state was renamed "Waiting for table
metadata lock" with the introduction of MDL in 5.5.
2) SET GLOBAL query_cache_size= 100000; gave a warning since
query_cache_size must be a multiple of 1024.
This patch fixes these two issues and re-enables the test case.
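For reference, a value that is not a multiple of 1024 is truncated down
to the nearest multiple with a warning, so the test needs an aligned
value (a minimal sketch; the exact value chosen by the test is an
assumption):
  SET GLOBAL query_cache_size= 100000;  -- truncated to 99328, with a warning
  SET GLOBAL query_cache_size= 102400;  -- multiple of 1024, no warning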
RESULT CONSISTED OF MORE THAN ONE ROW
MySQL converts incorrect DATEs and DATETIMEs to '0000-00-00' on
insertion by default. This means that this sequence is possible:
CREATE TABLE t1(date_notnull DATE NOT NULL);
INSERT INTO t1 values (NULL);
SELECT * FROM t1;
0000-00-00
At the same time, ODBC drivers do not (or at least did not in the
90's) understand the DATE and DATETIME value '0000-00-00'. Thus,
to be able to query for the value 0000-00-00 it was decided in
MySQL 4.x (or maybe even before that) that for the special case
of DATE/DATETIME NOT NULL columns, the query "SELECT ... WHERE
date_notnull IS NULL" should return rows with date_notnull ==
'0000-00-00'. This is documented misbehavior that we do not want
to change.
The hack used to make MySQL return these rows is to convert
"date_notnull IS NULL" to "date_notnull = 0". This is, however,
only done if the table date_notnull belongs to is not an inner
table of an outer join. The rationale for this seems to be that
if there is no join match for the row in the outer table,
null-complemented rows would otherwise not be returned because
the null-complemented DATE value is actually NULL. On the other
hand, this means that the "return rows with 0000-00-00 when the
query asks for IS NULL"-hack is not in effect for outer joins.
In this bug, we have a LEFT JOIN that does not exhibit the misbehavior
the documentation describes. The fix is to rewrite
"date_notnull IS NULL" to "date_notnull IS NULL OR
date_notnull = 0"
if dealing with an OUTER JOIN, and otherwise
"date_notnull IS NULL" to "date_notnull = 0"
as was done before.
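A hedged sketch of the query shape involved (table and column names are
illustrative, and inserting a zero date assumes an sql_mode that permits
it):
  CREATE TABLE t1 (pk INT PRIMARY KEY);
  CREATE TABLE t2 (pk INT PRIMARY KEY, d DATE NOT NULL);
  INSERT INTO t1 VALUES (1), (2);
  INSERT INTO t2 VALUES (1, '0000-00-00');
  -- With the fix, this returns both the matched row where d = '0000-00-00'
  -- (the documented misbehavior) and the null-complemented row for t1.pk = 2:
  SELECT * FROM t1 LEFT JOIN t2 ON t1.pk = t2.pk WHERE t2.d IS NULL;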
Note:
The bug was originally reported as a different result on first
and second execution of an SP. The reason was that during the first
execution the query was correctly rewritten to an inner join
due to a null-rejecting predicate. On the second execution the
"IS NULL" -> "= 0" rewrite was done because there was no outer
join. The real problem, though, was the incorrect date/datetime
IS NULL handling for OUTER JOINs.
SYNTAX TRIGGERS IN ANY WAY
A table with triggers that used deprecated (5.0-only) syntax became
unavailable for any DML and DDL after an upgrade to the 5.1 version of
the server. An attempt to execute any statement on such a table resulted
in a parsing error being reported. Since this included DROP TRIGGER and
DROP TABLE statements (actually, the latter was allowed but did not
function properly for such tables), it was impossible to fix the problem
without manual operations on the .TRG and .TRN files in the data
directory.
The problem was that a failure to parse the trigger body (due to
5.0-only syntax) when opening the trigger file for a table prevented the
table from being opened. This made all operations on the table
impossible (except DROP TABLE, which due to a peculiarity in its
implementation dropped the table but left the trigger files around).
This patch solves the problem by silencing the error which occurs when
we parse the trigger body during table open. The error message is
preserved for future use, and the table is marked as having a broken
trigger. We also try to analyze the parse tree to recover the trigger
name, which will be needed in order to drop the broken trigger. DML
statements which invoke triggers on a table marked as having a broken
trigger are prohibited and emit the saved error message. The same
happens for DDL statements which change triggers, except DROP TRIGGER
and DROP TABLE, which try their best to do what was requested. The table
is no longer marked as having a broken trigger once the last such
trigger is dropped.
THE EVENT STATUS.
Any ALTER EVENT statement on a disabled event re-enabled it
(unless the ALTER EVENT statement explicitly disabled the event).
The problem was that during processing of an ALTER EVENT statement
the value of the status field was overwritten unconditionally, even if a
new value was not specified explicitly. As a consequence the field
was set to the default value for status, which corresponds to ENABLE.
The solution is to check whether the status field was explicitly
specified in the ALTER EVENT statement before assigning a new value to
it.
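A minimal sketch of the faulty behavior (the event name, schedule, and
body are illustrative):
  CREATE EVENT e1 ON SCHEDULE EVERY 1 HOUR DISABLE
    DO DELETE FROM t1 WHERE created < NOW() - INTERVAL 1 DAY;
  -- Before the fix, an unrelated change re-enabled the event:
  ALTER EVENT e1 COMMENT 'housekeeping';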
STATEMENTS FAIL".
An attempt to execute a CREATE TABLE ... LIKE statement with a MyISAM
table that had INDEX or DATA DIRECTORY options specified as the
source resulted in a "MyISAM table '...' is in use..." error.
According to our documentation such a statement should create
a copy of the source table with the DATA/INDEX DIRECTORY options
omitted.
The problem was that the new implementation of CREATE TABLE ... LIKE
in 5.5 tried to copy the values of the INDEX and DATA DIRECTORY
parameters from the source table. Since in the description of the source
table these parameters also included the name of that table, the attempt
to create the target table with these parameters led to a file-name
conflict and an error.
This fix addresses the problem by preserving the documented and
backward-compatible behavior, i.e. by ensuring that the contents
of the DATA/INDEX DIRECTORY clauses of the source table are
ignored when the target table is created.
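A hedged sketch of the failing scenario (the directory paths and table
names are illustrative):
  CREATE TABLE t1 (a INT) ENGINE=MyISAM
    DATA DIRECTORY='/tmp/mysql_data' INDEX DIRECTORY='/tmp/mysql_index';
  -- Failed with "MyISAM table 't2' is in use..." before the fix; now
  -- creates t2 with the DATA/INDEX DIRECTORY clauses ignored:
  CREATE TABLE t2 LIKE t1;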
FUNCTION DOES NOT EXIST IF NOT-PRIV USER RECONNECTS".
The bug itself was fixed by the same patch as bug@11747137
"30977: CONCURRENT STATEMENT USING STORED FUNCTION AND DROP
FUNCTION BREAKS SBR".
Problem: when invalid data was inserted into indexed GEOMETRY fields
(e.g. a NULL value for a NOT NULL field), MyISAM reported
"ERROR 126 (HY000): Incorrect key file for table; try to repair it"
due to misuse of the key deletion function.
Fix: always use R-tree key functions for R-tree based indexes
and B-tree key functions for B-tree based indexes.
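A hedged repro sketch (the table layout is illustrative):
  CREATE TABLE t1 (
    g GEOMETRY NOT NULL,
    SPATIAL INDEX (g)
  ) ENGINE=MyISAM;
  -- The insert of NULL into the NOT NULL spatial column must fail
  -- cleanly; before the fix the failure path misused the key deletion
  -- function and a later statement reported ERROR 126:
  INSERT INTO t1 VALUES (NULL);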
SECONDARY INDEX IN INNODB
This is a follow-up patch.
This patch moves part of the new test coverage to a test
file that is only run on debug builds, since the coverage used
debug-only features and therefore broke the test case on
release builds.
SECONDARY INDEX IN INNODB
The patches for Bug#11751388 and Bug#11784056 enabled concurrent
reads while creating secondary indexes in InnoDB. However, they
introduced a regression. This regression occurred if ALTER TABLE
failed after the index had been added, for example during the
lock upgrade needed to update the .FRM file. If this happened, InnoDB
and the server got out of sync with regard to which indexes
actually existed. Therefore the patch for Bug#11815600 disabled
concurrent reads again.
This patch re-enables concurrent reads. The original regression
is fixed by splitting the ADD INDEX operation into two parts.
First the new index is created but not made active. This is
done while concurrent reads are allowed. The second part of
the operation makes the index active (or reverts the change).
This is done after the lock upgrade, which prevents the original
regression.
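For context, a hedged sketch of the statement shape affected (table and
index names are illustrative):
  CREATE TABLE t1 (a INT, b INT) ENGINE=InnoDB;
  ALTER TABLE t1 ADD INDEX idx_b (b);
  -- In another connection, while the index is being built:
  SELECT * FROM t1 WHERE a = 1;  -- concurrent reads are allowed again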
In order to implement this change, the patch changes the storage
API for in-place index creation. handler::add_index() is split
into two functions, handler_add_index() and
handler::final_add_index(): the former creates indexes without
making them visible, and the latter commits (i.e. makes visible)
new indexes or reverts the changes.
Large parts of this patch were written by Marko Mäkelä.
Test case added to innodb_mysql_lock.test.
The test case problem stemmed from the fact that a debug sync
signal is a global variable that persists until overwritten
by a new signal. This means that if two different signals
are raised in sequence, a thread waiting for the first signal
might miss it if the second signal sets the global variable
before the thread wakes up.
The solution is to deliver a subsequent signal only after the
waiting thread has received it.
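For illustration, a hedged sketch using the debug-build DEBUG_SYNC
facility (the signal names are made up):
  -- Connection 1 waits for a signal:
  SET DEBUG_SYNC = 'now WAIT_FOR sig1';
  -- Connection 2 raises two signals back to back. Before the fix, the
  -- second assignment could overwrite 'sig1' before connection 1 woke
  -- up, leaving it waiting until timeout:
  SET DEBUG_SYNC = 'now SIGNAL sig1';
  SET DEBUG_SYNC = 'now SIGNAL sig2';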
will create multiple running events.
A CREATE EVENT IF NOT EXISTS statement on an event that existed and was
enabled caused multiple instances of the event to run. Disabling the
event didn't help. If the event was dropped, the event stopped running,
but when created again, multiple instances of the event were still
running. The only way out of this situation was to restart the server.
The problem was that Event_db_repository::create_event() didn't return
enough information to discriminate between the situation when the event
didn't exist and was created, and the situation when the event did exist
and was not created (but a warning was emitted). As a result, in the
latter case the event was added to the in-memory queue of events a
second time, which led to unwarranted multiple executions of the same
event.
The solution is to add an out-parameter to the
Event_db_repository::create_event() method which signals that the event
was not created because it already exists and therefore should not be
added to the in-memory queue.
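A hedged sketch of the trigger scenario (the event name, schedule, and
body are illustrative):
  CREATE EVENT e1 ON SCHEDULE EVERY 1 MINUTE
    DO INSERT INTO log_table VALUES (NOW());
  -- Before the fix, this added a second copy of e1 to the in-memory
  -- queue (only an "already exists" warning was emitted), so the event
  -- body then ran twice per minute:
  CREATE EVENT IF NOT EXISTS e1 ON SCHEDULE EVERY 1 MINUTE
    DO INSERT INTO log_table VALUES (NOW());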
The assertion happened due to a missing NULL value check in the
Item_func_round::fix_length_and_dec() function.
The fix: added a NULL value check for the second parameter.
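A hedged one-line repro sketch (the literal values are illustrative):
  -- A NULL second argument to ROUND() hit the assertion before the fix:
  SELECT ROUND(1.5, NULL);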
LEAK WITH PARTITIONED ARCHIVE TABLES
CHECK TABLE against an archive table, when file descriptors
were exhausted, caused a server crash.
Archive didn't handle errors when opening the data file for
CHECK TABLE.
There are two problems:
1. There was a missing check on the 'year' parameter (year cannot be
greater than 9999) in the MAKEDATE() function.
Fix: added a check that year cannot be greater than 9999.
2. There was a missing check for a zero date in the FROM_DAYS() function.
Fix: added a zero-date check to the Item_func_from_days::get_date()
function.
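Hedged repro sketches for both checks (the argument values are
illustrative):
  -- 1. A year above 9999 must be rejected instead of producing an
  --    out-of-range date:
  SELECT MAKEDATE(10000, 1);
  -- 2. A day number of zero exercises the zero-date path in
  --    Item_func_from_days::get_date():
  SELECT FROM_DAYS(0);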
Implementing test review comments.
Bug test scenario:
SELECT did not return a result set for the equal (=) and NULL-safe equal
(<=>) operators on the BIT data type. The scenario is extended to cover
all data types and InnoDB.
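A hedged sketch of the scenario (the table layout and value are
illustrative):
  CREATE TABLE t1 (b BIT(8)) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (b'10101010');
  -- Both comparisons should return the inserted row:
  SELECT * FROM t1 WHERE b = b'10101010';
  SELECT * FROM t1 WHERE b <=> b'10101010';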
The 5.5 version of the patch.
The server doesn't restrict the data that can be inserted into integer
columns with an explicitly specified display length that's smaller than
what the type can handle, e.g. 1234 can be inserted into an INT(2)
column just fine.
Thus, when calculating the maximum width of expressions involving such
restricted integer columns we need to use the implicit maximum width of
the field instead of the explicitly specified one.
Fixed the server to use the implicit maximum in such cases and made sure
the implicit maximum is adjusted the same way as the explicit one with
regard to signedness.
Fixed several test case results (ctype_*.result, metadata.result and
type_ranges.result) to reflect the extended column widths.
Added a regression test case in distinct.test.
Note: this is the behavior-preserving fix that makes 5.5 behave as 5.1
and earlier. In the mysql trunk we'll add an insert-time check for the
explicit maximum size.
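A hedged illustration of the behavior involved (the table name is
illustrative):
  CREATE TABLE t1 (a INT(2));
  -- Accepted: INT(2) is only a display width, not a storage limit:
  INSERT INTO t1 VALUES (1234);
  -- Result metadata for the column must use the implicit maximum width
  -- of INT, not the declared display width of 2:
  SELECT DISTINCT a FROM t1;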
Attempts to assign a value to a table column from a trigger by using the
NEW.column_name pseudo-variable might result in garbled data.
That happened when:
- the column had a BLOB-based type (e.g. TEXT)
and
- the value being assigned was retrieved from a stored routine variable
of the same type.
The problem was that BLOB values were not copied correctly in this
case. Instead of a copy of the real value being made, the value's
representation in the record buffer was copied. This representation is
essentially a pointer to a buffer associated with the virtual table for
routine variables where the real value is stored. Since this buffer was
freed once the trigger was left, or could have its contents changed when
a new value was assigned to the corresponding routine variable, such
shallow copying resulted in garbled data in the NEW.column_name column.
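A hedged repro sketch of the affected pattern (the names and values are
illustrative):
  CREATE TABLE t1 (txt TEXT);
  DELIMITER |
  CREATE TRIGGER trg BEFORE INSERT ON t1 FOR EACH ROW
  BEGIN
    DECLARE v TEXT;
    SET v = REPEAT('a', 1000);
    -- Before the fix this copied only a pointer into the routine-variable
    -- buffer, so the stored value could end up garbled:
    SET NEW.txt = v;
  END|
  DELIMITER ;
  INSERT INTO t1 VALUES ('x');
  SELECT LENGTH(txt) FROM t1;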
It worked in 5.1 due to a subtle bug in create_virtual_tmp_table():
- in 5.1 create_virtual_tmp_table() returned a table which
had db_low_byte_first == false.
- in 5.5 and up create_virtual_tmp_table() returns a table which
has db_low_byte_first == true.
Actually, db_low_byte_first is false only for the ISAM storage engine,
which was deprecated and removed in 5.0.
Having db_low_byte_first == false led to getting false in the
complex condition for the 2nd "if" in field_conv(), which in turn
led to copy-blob behavior as a fall-back strategy:
- to->table->s->db_low_byte_first was true (correct value)
- from->table->s->db_low_byte_first was false (incorrect value)
In 5.5 and up that condition is true, which means blob values are
not copied.
WORK WITH --START-POSITION
If --start-position is set to start after the FD (format description)
event, mysqlbinlog outputs an error stating that it has not found an FD
event. However, it is not that mysqlbinlog does not find it, but rather
that it does not process it in the regular way (i.e., it does not print
it). Given that --base64-output=DECODE-ROWS is in use, not printing it
is actually fine.
To fix this, we make mysqlbinlog not complain when it has not printed
the FD event but is outputting in base64 and decoding the rows.
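A hedged example invocation (the binlog file name and position are
illustrative):
  mysqlbinlog --base64-output=DECODE-ROWS --start-position=4096 binlog.000001
Before the fix this could emit the spurious FD-event error even though,
with the rows being decoded, printing the FD event is unnecessary.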
The problem was that a wrong structure of mysql.event was not detected
and the server continued to use the wrongly-structured data.
The fix is to check the structure of mysql.event after opening it and
before any use. This makes operations with events more strict -- some
operations that might have worked before throw errors now. That seems
to be OK.
Another side-effect of the patch is that if mysql.event is corrupted,
unrelated DROP DATABASE statements issue an SQL warning about the
inability to open the mysql.event table.
The partitioning engine checked the auto_increment column even if it was
not going to be written, triggering a DBUG_ASSERT.
Fixed by checking whether table->write_set was set for that column.