primary_key_no == 0".
An attempt to create an InnoDB table with a non-nullable column of
geometry type having a unique key of length 12 on it, together with
some other candidate key, led to a server crash due to an assertion
failure in both non-debug and debug builds.
The problem was that such a non-candidate key could have
been sorted as the first key in the table/.FRM, before any
legitimate candidate keys. This resulted in an assertion failure in
the InnoDB engine, which assumes that the primary key should either
be the first key in the table/.FRM or not exist at all.
The reason behind this incorrect sorting was a wrong
value of the Create_field::key_length member for geometry fields
(which was set to its pack_length == 12), which confused the code
in mysql_prepare_create_table(), so it skipped marking
such a key as a key with partial segments.
This patch fixes the problem by ensuring that
Create_field::key_length gets the same value for geometry fields as
for other blob fields (from which the geometry field class is
inherited). As a result, unique keys on geometry fields
are correctly marked as having partial segments.
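A minimal, self-contained C++ sketch of the key-ordering rule described
above; KeyInfo, is_candidate_key() and sort_keys() are hypothetical
stand-ins, not the real server structures:

  #include <algorithm>
  #include <vector>

  // Hypothetical, simplified key descriptor (not the real server structures).
  struct KeyInfo {
    bool is_unique;
    bool has_nullable_part;
    bool has_partial_segments;  // key part covers only a prefix of a blob/geometry column
  };

  // A key can serve as a candidate (primary) key only if it is unique, has no
  // nullable parts and covers full columns, i.e. has no partial segments.
  static bool is_candidate_key(const KeyInfo &k) {
    return k.is_unique && !k.has_nullable_part && !k.has_partial_segments;
  }

  // Candidate keys must sort before non-candidate keys so that the key at
  // index 0 is the one a storage engine may treat as the primary key.  With
  // the bug, has_partial_segments stayed false for a prefix key on a geometry
  // column, so such a key could end up first and trip the InnoDB assertion.
  static void sort_keys(std::vector<KeyInfo> &keys) {
    std::stable_sort(keys.begin(), keys.end(),
                     [](const KeyInfo &a, const KeyInfo &b) {
                       return is_candidate_key(a) && !is_candidate_key(b);
                     });
  }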
Write an additional warning message to the server log,
explaining why a sort operation is aborted.
The output in mysqld.err will look something like:
110127 15:07:54 [ERROR] mysqld: Sort aborted: Out of memory (Needed 24 bytes)
110127 15:07:54 [ERROR] mysqld: Out of sort memory, consider increasing server sort buffer size
110127 15:07:54 [ERROR] mysqld: Sort aborted: Out of sort memory, consider increasing server sort buffer size
110127 15:07:54 [ERROR] mysqld: Sort aborted: Incorrect number of arguments for FUNCTION test.f1; expected 0, got 1
If --log-warnings=2 is enabled, we output information about the host/user/query as well.
in combination with IS NULL'
As this bug is a duplicate of bug#49322, this patch also includes
test cases covering that bug report.
Qualifying an OUTER JOIN with the condition 'WHERE <column> IS NULL',
where <column> is declared as 'NOT NULL', causes the
'not_exists_optimize' to be enabled by the optimizer.
In evaluate_join_record() the 'not_exists_optimize' caused
'NESTED_LOOP_NO_MORE_ROWS' to be returned immediately
when a matching row was found.
However, as the 'not_exists_optimize' is derived from
'JOIN_TAB::select_cond', the usual rules for condition guards
also apply to 'not_exists_optimize'. It is therefore incorrect
to check 'not_exists_optimize' without ensuring that all guards
protecting it are 'open'.
This fix uses the fact that 'not_exists_optimize' is derived from
an 'is_null' predicate term in 'tab->select_cond'. Furthermore,
'is_null' will evaluate to 'false' for any non-null rows
once all guards protecting the 'is_null' are open.
We can use this knowledge as an implicit guard check for the
'not_exists_optimize' by moving 'if (...not_exists_optimize)'
inside the handling of 'select_cond==false'. It will then
not take effect before its guards are open.
We also add an assert which requires that
'not_exists_optimize' always comes together with
a select_cond (containing 'is_null').
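A simplified, self-contained C++ sketch of the change described above;
join_tab_sketch and evaluate_record_sketch() are hypothetical names,
not the actual evaluate_join_record() code:

  #include <cassert>

  enum nested_loop_state { NESTED_LOOP_OK, NESTED_LOOP_NO_MORE_ROWS };

  // Hypothetical, simplified stand-in for JOIN_TAB.
  struct join_tab_sketch {
    bool has_select_cond;      // a select_cond (containing 'is_null') is attached
    bool not_exists_optimize;  // derived from that 'is_null' term
  };

  // 'select_cond_value' is what tab->select_cond evaluated to for the current
  // row.  While the guards protecting the 'is_null' term are closed, the
  // condition evaluates to true, so the early return below cannot be reached.
  static nested_loop_state evaluate_record_sketch(const join_tab_sketch &tab,
                                                  bool select_cond_value) {
    // The flag only makes sense together with a select_cond containing 'is_null'.
    assert(!tab.not_exists_optimize || tab.has_select_cond);

    const bool found = !tab.has_select_cond || select_cond_value;
    if (!found && tab.not_exists_optimize)
      return NESTED_LOOP_NO_MORE_ROWS;  // real match exists: no NULL-complemented row possible
    return NESTED_LOOP_OK;
  }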
The root cause of this bug is that the optimizer tries to detect and
optimize the special case
'<field> BETWEEN c1 AND c1' and handle it as the condition '<field> = c1'.
This was implemented inside add_key_field(.. *field, *value[]...),
which assumed 'field' to refer to the key Field and 'value[]' to refer
to a [low...high] constant pair. value[0] and value[1] were then
compared for equality.
In a 'normal' BETWEEN condition of the form '<field> BETWEEN val1 AND val2' the
BETWEEN operation is represented with an argument list containing the
values [<field>, val1, val2] - add_key_field() is then called with
parameters field=<field>, *value=val1.
However, if the BETWEEN predicate specified:
1) '<const1> BETWEEN <const2> AND <field>'
the 'field' and 'value' arguments to add_key_field() had to be swapped.
This was implemented by trying to trick add_key_field() into handling it like:
2) '<const1> GE <const2> AND <const1> LE <field>'
As we didn't really replace the BETWEEN operation with 'ge' and 'le',
add_key_field() still handled it as a 'BETWEEN' and compared the (swapped)
arguments <const1> and <const2> for equality. If they were equal, the
condition 1) was incorrectly 'optimized' to:
3) '<field> EQ <const1>'
This fix moves the optimization of '<field> BETWEEN c1 AND c1' into
add_key_fields(), which then calls add_key_equal_fields() to collect
key equality / comparison for the key fields in the BETWEEN condition.
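A small, self-contained C++ sketch of where the equality special case
now belongs; item_sketch and between_is_equality() are hypothetical
illustrations, not the real Item or add_key_fields() code:

  #include <vector>

  // Hypothetical argument representation: args[0] BETWEEN args[1] AND args[2].
  struct item_sketch {
    bool is_field;     // true for <field>, false for a constant
    long const_value;  // meaningful only when is_field == false
  };

  // The rewrite to '<field> = c1' is only valid for the
  // '<field> BETWEEN c1 AND c2' form with c1 == c2; the swapped form
  // '<const1> BETWEEN <const2> AND <field>' must never be treated this way.
  static bool between_is_equality(const std::vector<item_sketch> &args) {
    return args.size() == 3 &&
           args[0].is_field &&
           !args[1].is_field && !args[2].is_field &&
           args[1].const_value == args[2].const_value;
  }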
Third updated patch - this version also includes a copyright notice in the added Perl script.
This patch implements a runtime check for the required Perl modules (DBI and
DBD::mysql). If the modules are not found or cannot be loaded, the test is skipped with
the following message:
[ skipped ] Test needs Perl modules DBI and DBD::mysql
Checks are done via a helper Perl script which looks for the modules in a runtime environment that is as similar as possible to that of the
mysqlhotcopy script (thus not intended for Windows environments at this time).
The helper script tells mysql-test about the result by writing information to a temporary file that is later read by mysql-test.
See comments in added files (have_dbi_dbd-mysql.inc and checkDBI_DBD-mysql.pl) for details.
The patch also removes the mysqlhotcopy tests from the list of disabled tests.
Do as mysqld_safe: if running mysqld-debug, plugins are in debug subdirs
NB mtr --debug won't work in this context until 47141 is fixed
Also moved read_plugin_defs; no point running this in all worker threads
In SBR, if a statement does not fail, it is always written to the binary
log, regardless of whether rows are changed or not. If there is a failure, a
statement is only written to the binary log if a non-transactional (e.g.
MyISAM) engine is updated.
INSERT ON DUPLICATE KEY UPDATE and INSERT IGNORE were not following the
rule above and were not written to the binary log when the engine was
InnoDB.
There are two calls to read_log_event() on the master in mysql_binlog_send().
Each call reads 19 bytes in this test case, and the error of the second
read_log_event() is reported to the slave.
The second read_log_event() reads from position 94 (75 + 19) to 113
(75 + 19 + 19). Usually, there are two events in the binary log:
. 0 - 3 - Header
. 4 - 105 - Format Descriptor Event
. 106 - 304 - Query Event
and both reads fail because operations are reading from invalid positions
as expected.
However, mysql_binlog_send() does not use the same IO_CACHE that is used to
write into the binary log (i.e. mysql_bin_log.log_file) for the hot binary log.
It opens the binary log file directly by calling open_binlog() and creates a
separate IO_CACHE. So there is a possibility that after the master has flushed
the binary log file, the content has been cached by the filesystem and has
not yet been written to the disk file. If this happens, a slave will only see
part of the file, and thus the second read_log_event() will report an
'event truncated' error.
To fix the problem, if the first read_log_event() has failed, we ensure that
the second one will try to read from the same position.
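A hedged, self-contained C++ sketch of the retry idea; read_event_sketch()
and send_events_sketch() are illustrative stand-ins for read_log_event()
and mysql_binlog_send(), not the real code:

  #include <cstdio>

  // Try to read one 19-byte event header starting at *pos; *pos advances by
  // however many bytes were actually read (possibly fewer than 19).
  static bool read_event_sketch(FILE *log, long *pos) {
    unsigned char header[19];
    if (fseek(log, *pos, SEEK_SET) != 0) return false;
    size_t n = fread(header, 1, sizeof(header), log);
    *pos += (long)n;
    return n == sizeof(header);
  }

  static void send_events_sketch(FILE *log) {
    long pos = 75;                 // position of the first event (as in this test case)
    const long first_pos = pos;
    if (!read_event_sketch(log, &pos))
      pos = first_pos;             // the fix: retry from the same position instead of
                                   // continuing from wherever the truncated read stopped
    read_event_sketch(log, &pos);  // the second read now covers the same event
  }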
Added --mysqld-env option, propagated via safe_process
Simplified: should be safe to set in the parent safe_process after it's started
Addendum: catch cases of --mysqld-env without a value; assume an env. var
name never begins with "--"
that implement add_index
The problem was that ALTER TABLE blocked reads on an InnoDB table
while adding a secondary index, even if this was not needed. It is
only needed for the final step where the .frm file is updated.
The reason queries were blocked was that ALTER TABLE upgraded the
metadata lock from MDL_SHARED_NO_WRITE (which blocks writes) to
MDL_EXCLUSIVE (which blocks all accesses) before index creation.
The way the server handles index creation is that storage engines
publish their capabilities to the server and the server determines
which of the following three ways this can be handled: 1) build a
new version of the table; 2) change the existing table, but with an
exclusive metadata lock; 3) change the existing table without a
metadata lock upgrade.
For InnoDB and secondary index creation, option 3) should have been
selected. However, this failed for two reasons. First, InnoDB did
not publish this capability properly.
Second, the ALTER TABLE code failed to make proper use of the
information supplied by the storage engine. A variable
need_lock_for_indexes was set accordingly, but was not used later.
This patch fixes this problem by only doing metadata lock upgrade
before index creation/deletion if this variable has been set.
This patch also changes some of the related terminology used
in the code, specifically the use of "fast" and "online" with
respect to ALTER TABLE. "Fast" was used to indicate that an
ALTER TABLE operation could be done without involving a
temporary table. "Fast" has been renamed "in-place" to more
accurately describe the behavior.
"Online" meant that the operation could be done without taking
a table lock. However, in the current implementation writes
are always prohibited during ALTER TABLE and an exclusive
metadata lock is held while updating the .frm, so ALTER TABLE
is not completely online. This patch replaces "online" with
"in-place", with additional comments indicating if concurrent
reads are allowed during index creation/deletion or not.
An important part of this update of terminology is the renaming
of the handler flags used by handlers to indicate whether index
creation/deletion can be done in-place and whether concurrent reads
are allowed. For example, the HA_ONLINE_ADD_INDEX_NO_WRITES
flag has been renamed to HA_INPLACE_ADD_INDEX_NO_READ_WRITE,
while HA_ONLINE_ADD_INDEX is now HA_INPLACE_ADD_INDEX_NO_WRITE.
Note that this is a rename to clarify current behavior; the
flag values have not changed and no flags have been removed or
added.
Test case added to innodb_mysql_sync.test.
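A hedged C++ sketch of the decision described in this patch; only the
flag names come from the patch, while the flag values and the helper
below are illustrative assumptions, not the real server API:

  // Hypothetical flag values; only the names are taken from the patch.
  enum : unsigned {
    HA_INPLACE_ADD_INDEX_NO_WRITE      = 1u << 0,  // in-place build, concurrent reads allowed
    HA_INPLACE_ADD_INDEX_NO_READ_WRITE = 1u << 1   // in-place build, reads and writes blocked
  };

  // Illustrative mapping (assumption): decide whether the metadata lock must
  // be upgraded to MDL_EXCLUSIVE before building the index.
  static bool need_lock_for_indexes(unsigned handler_flags) {
    // If the engine can build the index while reads continue, keep
    // MDL_SHARED_NO_WRITE for the build; the exclusive lock is then only
    // taken later, for the final .frm update.
    return (handler_flags & HA_INPLACE_ADD_INDEX_NO_WRITE) == 0;
  }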
Regression introduced in bug#52455. The problem was that the
fixed function did not set the last used partition variable, resulting
in the wrong partition being used when storing the position of the newly
retrieved row.
Fixed by setting the last used partition in ha_partition::index_read_idx_map.
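A minimal, hypothetical C++ sketch of the missing assignment;
partition_handler_sketch and its members are illustrative only, not the
real ha_partition API:

  #include <cstdint>

  struct partition_handler_sketch {
    uint32_t last_used_partition = 0;  // later read when storing the row position

    bool index_read_idx_map_sketch(uint32_t part_id /*, key, keypart_map, find_flag ... */) {
      const bool found = true;  // pretend the row was found in partition 'part_id'
      if (found)
        last_used_partition = part_id;  // the missing assignment: without it the
                                        // stored position refers to the wrong partition
      return found;
    }
  };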
Race condition may occur: mtr sees the .expect file but it's empty
Fix: wait and try again if file is empty
Addendum: try again if line isn't 'wait' or 'restart'
Also added verbose printout of extra restart options