This pads the value of CHAR columns with spaces up to the full column length (according to ANSI).
It's not made part of the oracle or ansi modes yet, as this would cause a notable behaviour change.
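A minimal SQL sketch of the intended behaviour; the sql_mode name
PAD_CHAR_TO_FULL_LENGTH used below is an assumption, since this entry does
not name the switch:

  CREATE TABLE t (c CHAR(5));
  INSERT INTO t VALUES ('ab');
  SELECT CONCAT('<', c, '>') FROM t;  -- '<ab>': trailing spaces stripped
  SET sql_mode = 'PAD_CHAR_TO_FULL_LENGTH';  -- assumed mode name
  SELECT CONCAT('<', c, '>') FROM t;  -- '<ab   >': padded to full length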
Added uuid_short(), a generator for increasing 'unique' longlong integers (8 bytes).
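A quick usage sketch; the exact value depends on the server instance and
its start time:

  SELECT UUID_SHORT();  -- e.g. 92395783831158784; each call returns a
                        -- larger value than the previous one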
The BETWEEN function was comparing DATE/DATETIME values either as ints or as
strings. Both methods have their disadvantages and may lead to a wrong
result.
Now the BETWEEN function checks whether all of its arguments have the STRING
result type and at least one of them is a DATE/DATETIME item. If so, it sets
up two Arg_comparator objects with the compare_datetime() comparator and
uses them to compare such items.
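A hedged sketch of a comparison this affects (table t and column d are
assumptions):

  -- d is a DATETIME column; the lower bound has insignificant zeros omitted.
  SELECT * FROM t WHERE d BETWEEN '2001-1-1' AND '2001-01-02';
  -- All three operands are now compared as DATE/DATETIME values rather
  -- than as plain strings or ints.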
Added two Arg_comparator object members and one flag to the
Item_func_between class for the correct DATE/DATETIME comparison.
The Item_func_between::fix_length_and_dec() function now detects whether
it's used for DATE/DATETIME comparison and sets up the newly added
Arg_comparator objects to do this.
Item_func_between::val_int() now uses these Arg_comparator objects to
perform correct DATE/DATETIME comparison.
The owner variable of the Arg_comparator class can now be set to NULL if the
caller wants to handle NULL values itself.
Now the Item_date_add_interval::get_date() function adjusts cached_field_type according to the detected type.
DATE and DATETIME can be compared either as strings or as ints. Both
methods have their disadvantages. Strings can contain a valid DATETIME
value but have insignificant zeros omitted, thus becoming non-comparable
with other DATETIME strings. Comparison as ints usually requires a
conversion from the string representation, and this automatic conversion is
in most cases carried out in a wrong way, thus producing a wrong comparison
result. Another problem occurs when one tries to compare a DATE field with
a DATETIME constant: the constant is converted to DATE, losing its
precision, i.e. losing the time part.
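A short illustration of the last problem (table t and column d are
assumptions):

  -- d is a DATE column. Before this fix the DATETIME constant was
  -- truncated to a DATE, silently dropping its time part:
  SELECT * FROM t WHERE d = '2001-01-01 12:00:00';
  -- could wrongly match rows where d = '2001-01-01'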
This fix addresses the problems described above by adding a special
DATE/DATETIME comparator. The comparator correctly converts DATE/DATETIME
string values to ints when necessary and adds a zero time part (00:00:00)
to DATE values to compare them correctly to DATETIME values. Due to the
correct conversion, malformed DATETIME string values are correctly compared
to other DATE/DATETIME values.
As of this patch a DATE value equals a DATETIME value with a zero time
part. For example '2001-01-01' equals '2001-01-01 00:00:00'.
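A minimal sketch of the new semantics, grounded in the example above:

  SELECT CAST('2001-01-01' AS DATE) = CAST('2001-01-01 00:00:00' AS DATETIME);
  -- returns 1: the DATE value gets a zero time part before the comparison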
The compare_datetime() function is added to the Arg_comparator class.
It implements the correct comparator for DATE/DATETIME values.
Two supplementary functions called get_date_from_str() and get_datetime_value()
are added. The first one extracts a DATE/DATETIME value from a string and the
second one retrieves the correct DATE/DATETIME value from an item.
The new Arg_comparator::can_compare_as_dates() function is added and used
to check whether two given items can be compared by the compare_datetime()
comparator.
Two caching variables were added to the Arg_comparator class to speed up the
DATE/DATETIME comparison.
One more store() method was added to the Item_cache_int class to cache int
values.
The new is_datetime() function was added to the Item class. It indicates
whether the item returns a DATE/DATETIME value.
Reason:
This test executes DML statements on an NDB table to detect whether certain SQL statements of special interest commit the ongoing transaction.
When running in MIXED mode, automatic switching from statement-based to row-based replication takes place when a DML statement
updates an NDB table.
That means running this test on NDB with binlog-format=mixed and binlog-format=row mostly checks the same routines twice.
Therefore we skip the variant with binlog-format=mixed.
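A hedged illustration of the pattern the test checks (table names are
assumptions):

  BEGIN;
  INSERT INTO t1 VALUES (1);          -- DML inside an open transaction
  CREATE TABLE t2 (a INT) ENGINE=NDB; -- a statement of special interest:
                                      -- DDL implicitly commits the
                                      -- ongoing transaction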
Validity checks for nested set functions
were not taking into account that the enclosed
set function may be on a nest level that is
lower than the nest level of the enclosing set
function (see the query sketch after the list below).
Fixed by:
- propagating max_sum_func_level
up the enclosing set functions chain.
- updating the max_sum_func_level of the
enclosing set function when the enclosed set
function is aggregated above or on the same
nest level as the level of the enclosing
set function.
- updating the max_arg_level of the enclosing
set function on a reference that refers to
an item above or on the same nest level
as the level of the enclosing set function.
- Treating both Item_field and Item_ref as possibly
referencing items from outer nest levels.
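A hypothetical query sketch (tables and columns are assumptions): the
enclosed SUM(t1.a) is aggregated in the outer query, i.e. on a lower nest
level than the enclosing AVG, which is aggregated in the subquery:

  SELECT t1.b FROM t1
  GROUP BY t1.b
  HAVING (SELECT AVG(SUM(t1.a)) FROM t2) > 0;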
INSERT into an InnoDB table may cause "ERROR 1062 (23000): Duplicate entry..."
errors or lost records after a multi-row INSERT of the form:
"INSERT INTO t (id...) VALUES (NULL...) ON DUPLICATE KEY UPDATE id=VALUES(id)",
where "id" is an AUTO_INCREMENT column.
It happens because the InnoDB handler forgets to save the next insert id after
updating an auto_increment column with new values. As a result, the last
insert id stored inside InnoDB's dictionary tables differs from its cached
thd->next_insert_id value.
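A hedged reproduction sketch based on the statement form above (the table
definition is an assumption):

  CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, v INT) ENGINE=InnoDB;
  INSERT INTO t (id, v) VALUES (NULL, 1), (NULL, 2)
    ON DUPLICATE KEY UPDATE id = VALUES(id);
  -- Without the fix the next insert id is not saved after the statement
  -- above, so a later INSERT may reuse a taken id and fail:
  INSERT INTO t (v) VALUES (3);  -- may raise ERROR 1062 or lose a record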
---
Merge pippilotta.erinye.com:/shared/home/df/mysql/build/mysql-5.0-build-work-vanilla-building
into pippilotta.erinye.com:/shared/home/df/mysql/build/mysql-5.1-build-work-vanilla-building
---
Fix test cases to pass for a plain ./configure && make build. This includes disabling two test cases when certain features are not present in the server. We're not losing coverage from this because these features are usually present; disabling them here only serves to make the test cases work in the unlikely case that they aren't.
---
fixes
When fields are inserted instead of * in the select list, they were not
marked for checking in ONLY_FULL_GROUP_BY mode.
The Field_iterator_table::create_item() function now marks newly created
items for checking when in ONLY_FULL_GROUP_BY mode.
The setup_wild() and insert_fields() functions now maintain the
cur_pos_in_select_list counter for the ONLY_FULL_GROUP_BY mode.
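A hypothetical illustration of the behaviour being fixed (table t with
columns a and b is an assumption):

  SET sql_mode = 'ONLY_FULL_GROUP_BY';
  SELECT * FROM t GROUP BY a;
  -- With the fix, the fields expanded from * are checked like explicitly
  -- listed fields, so the non-grouped column b makes this statement fail.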
It was later disabled because the test failed with timeout on one testing box.
The reason for this test failure could not be found because we do not have information about the conditions on the box during this test.
Jeb and I tried this test on other boxes and it passed.
My experience is that
- tests using NDB in general often need significantly more runtime
  than comparable tests of other storage engines
- the actual load of the box where the test is running and the
  filesystem (NFS can be extremely slow) where the tests are
  executed might have a huge impact on the test performance
  (2 to 3 times the runtime)
- there are sometimes problems with the ports, most probably
  caused by OS properties (NDB+RPL need many ports) or
  parallel tests accidentally running with the same ports.
AFAIK these are the reasons why the NDB tests sometimes fail with a timeout.
Conclusion: We enable rpl_ndb_ddl again because the failure happens in rare cases
and does not seem to be caused by errors within the server or test code.