- Make it possible to use MTR_MAXNDB to set the upper limit of the number of
parallel ndb tests to run.
- Very useful on machines with many cores and lots of RAM
Enabled test snippet for bug 4374, tested on Mac OS X 10.6 as well as Solaris.
Moved the test snippet to a different place in the file, in order to avoid
having to save and restore the "SET NAMES" setting. The new surroundings expect
latin1, as is used in the test snippet.
An extra copy of the commented test snippet is removed, a comment is added,
SQL keywords are converted to uppercase, and engine name "heap" is updated to
"Memory".
Also added a Copyright statement and a notice about the file's encoding(s).
For all the boolean system variables we now issue a warning if the
value is not recognized. Previously we just silently set them
to FALSE in this case.
per-file comments:
mysys/my_getopt.c
Bug #46393 If for slow_query_log a string is entered it does not complain.
A warning is issued if no documented value was specified.
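For illustration, a minimal standalone sketch of the idea (hypothetical helper
and message text, not the actual my_getopt.c code): only the documented boolean
spellings are accepted, and anything else still falls back to FALSE but now
produces a warning instead of being taken silently.

  #include <cstdio>
  #include <strings.h>   /* strcasecmp() (POSIX) */

  /* Hypothetical helper, loosely modelled on the described fix. */
  static bool get_bool_value(const char *opt_name, const char *value,
                             bool *unrecognized)
  {
    *unrecognized= false;
    if (!strcasecmp(value, "1") || !strcasecmp(value, "on") ||
        !strcasecmp(value, "true"))
      return true;
    if (!strcasecmp(value, "0") || !strcasecmp(value, "off") ||
        !strcasecmp(value, "false"))
      return false;
    /* Undocumented value: still treated as FALSE, but no longer silently. */
    *unrecognized= true;
    fprintf(stderr,
            "Warning: option '%s': boolean value '%s' wasn't recognized."
            " Set to OFF.\n", opt_name, value);
    return false;
  }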
When the server sends the name of the plugin it is using in the handshake
packet, it null-terminated the name in its buffer, but sent a packet length
that was 1 byte short.
Fixed to send the terminating 0 as well by increasing the length of the
packet to include it.
In this way the handshake packet becomes similar to the change user packet
where the plugin name is null terminated.
No test suite added as the fix can only be observed by analyzing the bytes
sent over the wire.
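The essence of the fix, as a hedged standalone sketch (buffer handling and
names are made up, not the server's actual handshake code): when the plugin
name is written into the packet, the terminating 0 has to be counted in the
length that is sent.

  #include <cstring>
  #include <cstddef>

  /* Sketch: append a null-terminated plugin name to the packet and return
     the number of bytes to account for in the packet length -- including
     the terminating 0.  Returning strlen(plugin_name) alone is the
     "1 byte short" mistake described above. */
  static size_t append_plugin_name(char *packet_end, const char *plugin_name)
  {
    size_t len= strlen(plugin_name) + 1;   /* + 1 for the terminating 0 */
    memcpy(packet_end, plugin_name, len);  /* copies the 0 as well */
    return len;
  }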
The problem, from a user point of view, was that on Solaris the
time-related functions (e.g. NOW(), SYSDATE(), etc.) would always
return a fixed time.
This bug was happening due to logic in the time-retrieving
wrapper function which would only call the time() function every
half second. The interval between calls was calculated
using gethrtime(), and the logic relied on the fact that the time
returned by it is monotonic.
Unfortunately, due to bugs in the gethrtime() implementation,
there are some cases where the time returned by it can drift
(See Solaris bug id 6600939), potentially causing the interval
calculation logic to fail.
Since newer versions of Solaris (10+) have alleviated the
performance degradation associated with time(2), the solution is
to simply directly rely on time() at each invocation.
This simplification has an upside that it allows us to eliminate
a lock which was used to control access to the variables used
to track the half second interval, thus improving the overall
scalability of timekeeping related functions (e.g. NOW()).
Benchmarks runs have shown no significant degradation associated
with this change. With this, there are actually improvements in
performance for cases involving many connections.
In summary, the changes introduced by this patch are:
a) my_time() and my_micro_time_and_time() no longer use gethrtime().
Instead, time() and gettimeofday() are used, respectively.
b) my_micro_time() is changed to not use gethrtime() so as to
have the same time source as my_micro_time_and_time().
There shouldn't be any performance impact from this change
since this function is used only a few times during statement
execution and, on Solaris, gettimeofday() shows acceptable
performance.
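As a rough sketch of the resulting behaviour (simplified, with made-up names;
the real functions live in mysys and take flags not shown here):

  #include <ctime>
  #include <sys/time.h>   /* gettimeofday() */

  /* Sketch of my_time() after the change: ask the OS directly; no
     gethrtime()-based half-second cache and no lock around shared state. */
  static time_t my_time_sketch(void)
  {
    return time(NULL);
  }

  /* Companion sketch for my_micro_time_and_time(): both the microsecond
     value and the seconds value come from one gettimeofday() call, so the
     two time sources cannot drift apart. */
  static unsigned long long my_micro_time_and_time_sketch(time_t *time_arg)
  {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    *time_arg= tv.tv_sec;
    return (unsigned long long) tv.tv_sec * 1000000ULL +
           (unsigned long long) tv.tv_usec;
  }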
Fixed incorrect checks in join_read_const_table() for when to
accept a non-existing or empty const row as part of the const'ified
set of tables.
The intention of this check is to accept NULL-rows only if this table is outer
joined into the result set. (In case of an inner join we can conclude at this
point that the result set will be empty, and we want to return 'error' to
signal this.)
Initially 'maybe_null' is set to the same value as 'outer_join' in
setup_table_map(), mysql_priv.h ~line 2424. Later, simplify_joins() will
attempt to replace outer joins by inner joins whenever possible. This
will cause 'outer_join' to be updated. However, 'maybe_null' is *not* updated
to reflect this rewrite, as this field is used to correctly set the 'nullability'
property for the columns in the result set.
We should therefore change join_read_const_table() to check the 'outer_join'
property instead of 'maybe_null', as this correctly reflects the nullability of
the *execution plan* (not of the *result set*).
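Condensed into an illustrative fragment (hypothetical names, not the actual
join_read_const_table() body), the decision on a missing const row becomes:

  /* Illustrative sketch only. */
  enum const_row_action { ACCEPT_NULL_ROW, REPORT_EMPTY_RESULT };

  static const_row_action on_missing_const_row(bool outer_join)
  {
    /* Decide on the execution plan's 'outer_join' property, not on
       'maybe_null', which only describes nullability of the result set. */
    return outer_join ? ACCEPT_NULL_ROW       /* NULL-complemented row is ok */
                      : REPORT_EMPTY_RESULT;  /* inner join: empty result,
                                                 signal 'error' to caller */
  }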
An incorrect 'table_map', containing both the table itself
and possibly any outer-refs if this was the last table in
the subquery, was presented to make_cond_for_table().
As a pushed condition is only able to refer to columns from the table
the condition is pushed to, nothing other than columns from the
table itself (tab->table->map) may be referred to in the pushed condition
constructed by 'push_cond= make_cond_for_table()'.
Also fix a minor 'copy and paste' bug in a comment
inside make_cond_for_table().
No test case is possible on the main branch, as the NDB engine is not
available (yet) in mysql >= 5.5.
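The constraint can be stated as a small bitmask check (a sketch with a local
typedef and a hypothetical helper name, not the server's actual code):

  #include <cstdint>

  typedef uint64_t table_map;   /* simplified stand-in: one bit per table */

  /* The pushed condition may only refer to columns of the table it is
     pushed to, so every table bit it uses must be covered by
     tab->table->map; outer-refs of the subquery must not be included. */
  static bool refers_only_to_pushed_table(table_map cond_used_tables,
                                          table_map table_self_map)
  {
    return (cond_used_tables & ~table_self_map) == 0;
  }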
Item_equal::val_int() checked for NULL values by checking Item::null_value
*before* the respective ::store_value() and ::cmp(Item*) methods were called.
As Item::null_value is set by these methods, the value of 'null_value'
is not valid until *after* ::store_value() or ::cmp() has
been called for the Item object.
The fix is to swap the order of ::store_value()/::cmp() and the check of
Item::null_value. This pattern is widely used in other places inside
item_cmpfunc.cc.
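The corrected ordering, shown on a stripped-down stand-in (hypothetical
struct, not the real Item classes): ::cmp()/::store_value() must run before
'null_value' is inspected, because they are what set it.

  /* Stand-in for the pattern described above. */
  struct comparator_sketch
  {
    bool null_value;                     /* set as a side effect of cmp() */

    int cmp(int a, int b, bool a_is_null)
    {
      null_value= a_is_null;             /* only now is null_value meaningful */
      return a_is_null ? 0 : (a < b ? -1 : (a > b ? 1 : 0));
    }

    long long val_int(int a, int b, bool a_is_null)
    {
      int result= cmp(a, b, a_is_null);  /* 1) compare first ...            */
      if (null_value)                    /* 2) ... then check null_value    */
        return 0;
      return result == 0;                /* '1' when the values are equal   */
    }
  };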
The test started failing on the same day patch for BUG 49978 was
pushed. BUG 49978 changed part of the replication testing
infrastructure in mysql-test-run. This caused the test to fail
sporadically with result differences on relay log file
names. When the test fails, the relay-log filenames are shifted by
one, e.g.:
-show relaylog events in 'slave-relay-bin.000002' from <binlog_start>;
+show relaylog events in 'slave-relay-bin.000003' from <binlog_start>;
The problem was caused by a bad cleanup when using the include
files:
- include/setup_fake_relay_log.inc
- include/cleanup_fake_relay_log.inc
These would leave a spurious relay-log file around (not listed in
slave-relay-bin.index), causing the server to shift the names of
the relay logs by one, even after cleaning up with RESET SLAVE.
We fix this by removing the relay-log file when it is no longer
needed, i.e. at setup time and after recreating the fake relay-log
index.
Additionally, to make the affected test more resilient, we
deployed a call to rpl_reset.inc (which resets both master and
slave, including log files) before actually running the test
case.
Finally, apart from the reported bug, we also fix: (a) an
unrelated issue with the failing test itself - in some cases the
test was not setting the log file name to use when it should;
(b) one typo.
TIMESTAMP.
Item_cache::get_cache() wasn't treating TIMESTAMP as a DATETIME value, thus
returning a string cache for items of TIMESTAMP type. This led to an incorrect
TIMESTAMP -> INT conversion and to a wrong query result.
Fixed by using the Item::is_datetime() function to check for the DATETIME type
group.
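A condensed, hypothetical dispatch (made-up enum and function names, not the
actual Item_cache::get_cache() code): the DATETIME type group, which includes
TIMESTAMP, is checked first so such items no longer fall through to the
string cache.

  enum cache_kind { CACHE_STRING, CACHE_INT, CACHE_REAL, CACHE_DATETIME };
  enum type_group { TG_STRING, TG_INT, TG_REAL, TG_DATETIME };

  static cache_kind get_cache_kind(type_group group)
  {
    if (group == TG_DATETIME)     /* TIMESTAMP belongs to this group, so it */
      return CACHE_DATETIME;      /* no longer ends up in the string cache  */
    if (group == TG_INT)
      return CACHE_INT;
    if (group == TG_REAL)
      return CACHE_REAL;
    return CACHE_STRING;
  }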
If ::single_value_transformer() finds an existing HAVING condition, it used
to do the transformation:
1) HAVING cond -> (HAVING cond) AND (cond_guard (Item_ref_null_helper(...)))
As the AND condition in 1) is McCarthy (short-circuit) evaluated, the
right side of the AND condition should be evaluated only if the
original HAVING condition evaluated to TRUE.
However, as we failed to set 'top_level' for the transformed HAVING condition,
'abort_on_null' was FALSE after the transformation. An
UNKNOWN HAVING condition would then not terminate evaluation of the
transformed HAVING condition, and we incorrectly continued
into the Item_ref_null_helper() part.
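To make the short-circuit behaviour concrete, a generic three-valued-logic
sketch (an illustration, not Item_cond_and itself): with 'abort_on_null' set,
which is what marking the transformed HAVING as top-level achieves, an UNKNOWN
left side stops evaluation, so the Item_ref_null_helper() side never runs.

  enum bool3 { B3_FALSE, B3_TRUE, B3_UNKNOWN };   /* SQL three-valued logic */

  /* Generic sketch of left-to-right AND evaluation. */
  template <class RhsFn>
  static bool3 and_eval(bool3 lhs, bool abort_on_null, RhsFn eval_rhs)
  {
    if (lhs == B3_FALSE)
      return B3_FALSE;
    if (lhs == B3_UNKNOWN && abort_on_null)
      return B3_FALSE;             /* top level: UNKNOWN acts as FALSE and
                                      the right side is never evaluated    */
    bool3 rhs= eval_rhs();
    if (rhs == B3_FALSE)
      return B3_FALSE;
    return (lhs == B3_UNKNOWN || rhs == B3_UNKNOWN) ? B3_UNKNOWN : B3_TRUE;
  }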