DATE and DATETIME values can be compared either as strings or as integers. Both
methods have their disadvantages. A string can contain a valid DATETIME value
with insignificant zeros omitted, which makes it non-comparable with other
DATETIME strings. Comparison as integers usually requires conversion from the
string representation, and in most cases this automatic conversion is carried
out incorrectly, producing a wrong comparison result. Another problem occurs
when a DATE field is compared with a DATETIME constant: the constant is
converted to DATE and thus loses its precision, i.e. its time part.
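For example (a hypothetical illustration; the table and column names are made
up):

  CREATE TABLE t1 (d DATE);
  INSERT INTO t1 VALUES ('2001-01-01');
  -- The DATETIME constant used to be truncated to a DATE, so this row matched
  -- even though the time parts differ:
  SELECT * FROM t1 WHERE d = '2001-01-01 12:00:00';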
This fix addresses the problems described above by adding a special
DATE/DATETIME comparator. The comparator converts DATE/DATETIME string values
to integers when necessary and adds a zero time part (00:00:00) to DATE values
so that they can be compared correctly with DATETIME values. Thanks to the
correct conversion, malformed DATETIME string values are now compared
correctly with other DATE/DATETIME values.
As of this patch a DATE value is equal to a DATETIME value with a zero time
part. For example, '2001-01-01' equals '2001-01-01 00:00:00'.
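Continuing the hypothetical example above, the new comparator keeps the time
part of the constant:

  SELECT * FROM t1 WHERE d = '2001-01-01 00:00:00';  -- matches (zero time part)
  SELECT * FROM t1 WHERE d = '2001-01-01 12:00:00';  -- no longer matches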
The compare_datetime() function is added to the Arg_comparator class.
It implements the correct comparator for DATE/DATETIME values.
Two supplementary functions, get_date_from_str() and get_datetime_value(),
are added. The first one extracts a DATE/DATETIME value from a string and the
second one retrieves the correct DATE/DATETIME value from an item.
The new Arg_comparator::can_compare_as_dates() function is added and used
to check whether two given items can be compared by the compare_datetime()
comparator.
Two caching variables are added to the Arg_comparator class to speed up
DATE/DATETIME comparison.
One more store() method is added to the Item_cache_int class to cache int
values.
The new is_datetime() function is added to the Item class. It indicates
whether the item returns a DATE/DATETIME value.
Added a test case.
The problem was fixed by the fix for bug #17379.
The problem was that, under certain conditions, the optimizer always preferred
range or full index scan access methods over lookup access methods, even when
the latter were much cheaper.
The optimizer transforms DISTINCT into a GROUP BY
when possible.
It does that by constructing the same structure (a list of ORDER instances)
that the parser builds when parsing GROUP BY, eliminating duplicates along
the way.
However, when a duplicate is found, the pointer into the ref_pointer_array
is not advanced, so the next (and subsequent) ORDER structures point to the
wrong elements of the SELECT list.
Fixed by advancing the pointer in ref_pointer_array
even in the case of a duplicate.
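A hypothetical query of the affected shape (the names are made up): the
duplicate 'a' in the SELECT list is eliminated during the DISTINCT-to-GROUP-BY
rewrite, and before the fix the column following it could be resolved against
the wrong SELECT-list element:

  SELECT DISTINCT a, a, b FROM t1;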
NO_AUTO_VALUE_ON_ZERO mode.
The table->auto_increment_field_not_null variable wasn't reset after reading
a row, which could lead to a wrong value being inserted into the
auto-increment field of the following row.
The table->auto_increment_field_not_null variable is now reset right after a
row is written in the read_fixed_length() and read_sep_field() functions.
Removed the incorrect setting of the table->auto_increment_field_not_null
variable in the read_sep_field() function.
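A minimal scenario of the kind described above (a hypothetical illustration,
assuming the row-reading functions mentioned above belong to LOAD DATA INFILE
processing; the table, file and data are made up):

  SET sql_mode = 'NO_AUTO_VALUE_ON_ZERO';
  CREATE TABLE t1 (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, a INT);
  -- data.txt mixes rows where the id column is a literal 0 (must be stored
  -- as 0 in this mode) with rows where it is NULL (must still generate the
  -- next sequence number).
  LOAD DATA INFILE 'data.txt' INTO TABLE t1 FIELDS TERMINATED BY ',' (id, a);

Because the flag was not reset between rows, the decision made for one row
could leak into the next one.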
In certain cases AFTER UPDATE/DELETE triggers on NDB tables that referenced
the subject table didn't see the results of the operation which caused the
invocation of those triggers. In other words, an AFTER trigger invoked as a
result of the update (or deletion) of a particular row saw the version of
this row from before the update (or deletion).
The problem occurred because in those cases the NDB handler postponed the
actual update/delete operations in order to perform them later as one batch.
This fix solves the problem by disabling this optimization for a particular
operation if the subject table has an AFTER trigger defined for that
operation.
To achieve this, two new flags are introduced for the handler::extra() method:
HA_EXTRA_DELETE_CANNOT_BATCH and HA_EXTRA_UPDATE_CANNOT_BATCH.
They are issued when AFTER DELETE/UPDATE triggers exist for a statement that
can potentially generate calls to delete_row()/update_row(). This includes
multi_delete/multi_update statements as well as INSERT statements that do a
delete/update as part of an ON DUPLICATE KEY UPDATE clause.
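A hypothetical illustration of the affected scenario (the table, trigger and
column names are made up):

  CREATE TABLE t1 (pk INT PRIMARY KEY, a INT) ENGINE=NDBCLUSTER;
  CREATE TABLE t1_log (op CHAR(1), remaining INT) ENGINE=NDBCLUSTER;
  CREATE TRIGGER t1_ad AFTER DELETE ON t1 FOR EACH ROW
    INSERT INTO t1_log VALUES ('D', (SELECT COUNT(*) FROM t1));
  -- Before the fix the delete could still be sitting in the handler's batch
  -- when the trigger ran, so the SELECT COUNT(*) still saw the deleted row.
  DELETE FROM t1 WHERE pk = 1;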
IN/BETWEEN predicates in sorting expressions.
Wrong results could occur when the select list contains an expression with an
IN/BETWEEN predicate that differs from a sorting expression only by an
additional NOT.
Added the Item_func_opt_neg::eq method to correctly compare expressions
containing [NOT] IN/BETWEEN.
The eq method inherited from Item_func returns TRUE when comparing
'a IN (1,2)' with 'a NOT IN (1,2)', which is, of course, not correct.
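A hypothetical query of the affected shape (the names are made up):

  SELECT a NOT IN (1, 2) FROM t1 ORDER BY a IN (1, 2);

Because the inherited eq method treated the two predicates as equal, the
sorting expression could be wrongly matched with the SELECT-list expression
and the result was sorted by the wrong value.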
LEFT JOIN
Fixed that in certain situations MATCH ... AGAINST returned false hits for
NULLs produced by a LEFT JOIN when no fulltext index is available.
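A hypothetical illustration (the table and column names are made up), assuming
a storage engine that allows boolean-mode MATCH ... AGAINST on columns without
a FULLTEXT index (e.g. MyISAM):

  CREATE TABLE t1 (id INT) ENGINE=MyISAM;
  CREATE TABLE t2 (id INT, txt TEXT) ENGINE=MyISAM;
  SELECT t1.id, t2.txt
  FROM t1 LEFT JOIN t2 ON t1.id = t2.id
  WHERE MATCH (t2.txt) AGAINST ('+word' IN BOOLEAN MODE);

For t1 rows with no matching t2 row, t2.txt is NULL and should not satisfy the
predicate; before the fix such NULL-complemented rows could be returned as
false hits.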