makedate() will fold years below 100 into the 1970-2069 range. This
changeset removes code that also wrongly folded years between 100 and 200
into that range; those years should be left unchanged. Backport from 5.1.
Before this patch, failures to write to the log tables (mysql.slow_log
and mysql.general_log) were improperly printed (the time was printed twice)
or not printed at all.
With this patch, all failures to write to the log tables are reported in
the error log.
"DECLARE CURSOR FOR SHOW ..." is a syntax that currently appears to work,
but is untested for some SHOW commands and does not work for other SHOW
commands.
Since this is an unintended feature that leaked as a result of a coding bug
in the parser grammar, the correct fix is to change the grammar so that it
no longer accepts this construct.
In other words, "DECLARE CURSOR FOR SHOW <other commands that don't work>"
is not considered a bug, and we will not implement other features to make all
the SHOW commands usable inside a cursor just because someone exploited a bug.
Refactoring code to add a parameter to the pack() and unpack() functions
indicating whether data should be packed in little-endian or native order.
Using the new functions to always pack data for the binary log in
little-endian order. The purpose of this refactoring is to allow a proper
implementation of endian-agnostic pack() and unpack() functions.
Eliminating several versions of the virtual pack() and unpack() functions
in favor of a single virtual function which is overridden in subclasses.
Implementing pack() and unpack() functions for the field types that packed
data in native format regardless of the value of the
st_table_share::db_low_byte_first flag: Field_real, Field_decimal,
Field_tiny, Field_short, Field_medium, Field_long, Field_longlong, and
Field_blob.
Before the patch, row-based logging wrote the rows incorrectly on
big-endian machines where the storage engine defined its own
low_byte_first() to return FALSE (the default is TRUE), while
little-endian machines wrote the fields in the correct order. The only
known storage engine that does this is NDB. In effect, this means that
row-based replication from or to a big-endian machine where the table
used NDB as the storage engine failed if the other end was either
non-NDB or on a little-endian machine.
With this patch, row-based logging is now always done in little-endian
order, while ORDER BY uses the native order if the storage engine
defines low_byte_first() to return FALSE for big-endian machines.
In addition, the max_data_length() function available in Field_blob
was generalized to the entire Field hierarchy to give the maximum
number of bytes that Field::pack() will write.
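A minimal standalone sketch of the endian-agnostic idea (these are not the
server's actual signatures): building the bytes with shifts produces a
little-endian layout regardless of the host's byte order.

    #include <cstdint>
    #include <cassert>

    // Pack a 32-bit value in little-endian order, independent of host order.
    static unsigned char *pack_le32(unsigned char *to, std::uint32_t v)
    {
      to[0] = static_cast<unsigned char>(v);
      to[1] = static_cast<unsigned char>(v >> 8);
      to[2] = static_cast<unsigned char>(v >> 16);
      to[3] = static_cast<unsigned char>(v >> 24);
      return to + 4;
    }

    static std::uint32_t unpack_le32(const unsigned char *from)
    {
      return  static_cast<std::uint32_t>(from[0])
            | static_cast<std::uint32_t>(from[1]) << 8
            | static_cast<std::uint32_t>(from[2]) << 16
            | static_cast<std::uint32_t>(from[3]) << 24;
    }

    int main()
    {
      unsigned char buf[4];
      pack_le32(buf, 0x01020304U);
      assert(buf[0] == 0x04 && buf[3] == 0x01);   // little-endian layout
      assert(unpack_le32(buf) == 0x01020304U);
      return 0;
    }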
Problem: GROUP_CONCAT(DISTINCT BIT_FIELD...) uses a tree to store keys,
which are constructed from temporary table fields; see
Item_func_group_concat::setup().
Because a) the null bits, in which the bit fields store part of their data,
are not stored in the tree, and b) there is no method to properly compare
two table records, we have a problem.
Fix: convert BIT fields to INT in the temporary table used.
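A standalone toy illustrating point a) (the layout and names are invented
for the example): when part of a BIT value lives in the null-bits area, a
comparison over the data portion alone cannot tell two distinct values
apart; storing the value as a plain integer avoids this.

    #include <cstdio>
    #include <cstring>
    #include <cstdint>

    // Toy record: the BIT column's "uneven" high bits live in null_bits,
    // the remaining bits in the data area (as in MyISAM-style records).
    struct Record
    {
      std::uint8_t null_bits;   // holds the high bits of the BIT column
      std::uint8_t data[2];     // holds the low bits of the BIT column
    };

    int main()
    {
      Record a = { 0x01, { 0x12, 0x34 } };
      Record b = { 0x00, { 0x12, 0x34 } }; // different BIT value, same data area
      // A tree key built only from the data portion sees the two as equal:
      std::printf("%s\n",
                  std::memcmp(a.data, b.data, sizeof a.data) == 0
                  ? "equal" : "different");   // prints "equal"
      return 0;
    }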
Bug#30982 CHAR(..USING..) can return a not-well-formed string
Bug#30986 Character set introducer followed by a HEX string can return bad result
Moved check_well_formed_result from Item_str_func to Item.
Fixed Item_func_char::val_str to properly convert UCS symbols.
Added a well-formedness check for the correct conversion of constants with
an underscore charset introducer.
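For illustration, a minimal standalone well-formedness check in the spirit
of check_well_formed_result (a generic UTF-8 length/continuation validator,
not the server's charset-aware code):

    #include <cstddef>
    #include <cassert>

    // True if s[0..len) contains only complete, structurally valid
    // UTF-8 sequences (lead byte plus the right number of continuations).
    static bool well_formed_utf8(const unsigned char *s, std::size_t len)
    {
      for (std::size_t i = 0; i < len; )
      {
        std::size_t n;
        if      (s[i] < 0x80)           n = 1;
        else if ((s[i] & 0xE0) == 0xC0) n = 2;
        else if ((s[i] & 0xF0) == 0xE0) n = 3;
        else if ((s[i] & 0xF8) == 0xF0) n = 4;
        else return false;                       // invalid lead byte
        if (i + n > len) return false;           // truncated sequence
        for (std::size_t k = 1; k < n; k++)
          if ((s[i + k] & 0xC0) != 0x80)
            return false;                        // bad continuation byte
        i += n;
      }
      return true;
    }

    int main()
    {
      const unsigned char ok[]  = { 0xC3, 0xA9 };   // "é"
      const unsigned char bad[] = { 0xC3 };         // truncated
      assert(well_formed_utf8(ok, sizeof ok));
      assert(!well_formed_utf8(bad, sizeof bad));
      return 0;
    }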
myisam_sort_buffer_size.
An incorrect length of the sort buffer was used when calculating the
maximum number of keys. When myisam_sort_buffer_size is small enough,
this could result in the number of keys being smaller than the number of
BUFFPEK structures, which in turn led to the use of uninitialized BUFFPEKs.
Fixed by correcting the buffer length calculation.
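The arithmetic behind the bug, as a standalone sketch (the names are
invented; the real calculation lives in the MyISAM sort code): each BUFFPEK
needs room for at least one key, so the key count derived from the buffer
length must be checked against the number of BUFFPEKs.

    #include <cstdio>

    // Keys that fit in one merge chunk when `buffers` BUFFPEKs share the
    // sort buffer; each key slot also costs one pointer for bookkeeping.
    static unsigned long keys_per_buffer(unsigned long buffer_bytes,
                                         unsigned long key_length,
                                         unsigned long buffers)
    {
      return buffer_bytes / buffers / (key_length + sizeof(char *));
    }

    int main()
    {
      // A small sort buffer can yield fewer keys than BUFFPEKs (here: 0
      // keys per buffer), leaving some BUFFPEKs uninitialized unless the
      // calculation guards against it.
      unsigned long keys = keys_per_buffer(4096, 256, 32);
      std::printf("keys per buffer: %lu%s\n", keys,
                  keys == 0 ? "  <- must be caught" : "");
      return 0;
    }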
The special case of NULL as a regular expression
was handled at prepare time, but in this special case
the item was not marked as fixed. This caused an assertion
failure at execution time.
Fixed by marking the item as fixed even when it is known
at prepare time to return NULL.
Bug#31481 test suite parts: Many tests fail because of changed server error codes
Bug#31243 Test "partition_basic_myisam" truncates path names
+ minor cleanup
Problem: rpl_stm_mystery22 is unstable.
Reason: At one place, the test case *should* wait until the SQL thread on the
slave receives an error, but instead it waits until the SQL thread stops. The
SQL thread may stop before the error flag is set, so that when the test case
continues to execute, the error flag is not set.
Fix: Introduce the subroutine mysql-test/include/wait_for_slave_sql_error.inc,
which waits until there is an error in the SQL thread of the slave.
Re-commit: fixed one logical error and two smaller things noted by Mats.
precision > 0 && scale <= precision'.
The sign of the resulting item of the IFNULL function was not
updated, and the maximal length of this result was calculated
improperly. The correct algorithm was copied from the IF
function implementation.
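A standalone sketch of the aggregation rule borrowed from IF() (the names
are hypothetical): the result is unsigned only if both arguments are, and
an unsigned argument's display length gains a sign position when the result
can be negative.

    #include <algorithm>
    #include <cassert>

    struct IntAttrs { bool is_unsigned; unsigned max_length; };

    static IntAttrs agg_ifnull_attrs(IntAttrs a, IntAttrs b)
    {
      IntAttrs r;
      r.is_unsigned = a.is_unsigned && b.is_unsigned;
      // Unsigned lengths include no sign position; add one if the
      // result may be negative.
      unsigned la = a.max_length + ((a.is_unsigned && !r.is_unsigned) ? 1 : 0);
      unsigned lb = b.max_length + ((b.is_unsigned && !r.is_unsigned) ? 1 : 0);
      r.max_length = std::max(la, lb);
      return r;
    }

    int main()
    {
      IntAttrs u = { true, 10 };    // e.g. INT UNSIGNED
      IntAttrs s = { false, 11 };   // e.g. INT
      IntAttrs r = agg_ifnull_attrs(u, s);
      assert(!r.is_unsigned && r.max_length == 11);
      return 0;
    }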
Delete: mysql-test/suite/rpl/t/rpl_stm_extraColmaster_ndb.test
Delete: mysql-test/suite/rpl/r/rpl_row_extraColmaster_ndb.result
Delete: mysql-test/suite/rpl/t/rpl_row_extraColmaster_ndb.test
Many files:
merged and cleaned up test cases
Fixed the usage of spatial data (and Point specifically) with
non-spatial indexes.
Several problems:
- The length of the Point class was not updated to include the
  spatial reference system identifier. Fixed by increasing it by 4
  bytes (see the sketch after this list).
- The storage length of the spatial columns was not accounting for
  the length that is prepended to them. Fixed by treating the
  spatial data columns as blobs (and thus increasing the storage
  length).
- When creating the key image for comparison in an index read, the
  wrong key image was created (the one needed for an R-tree search,
  not the one for a B-tree or other search). Fixed by treating the
  spatial data columns as blobs (and creating the correct kind of
  image based on the index type).
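Back-of-the-envelope for the 4-byte fix, assuming the internal GEOMETRY
format of a 4-byte SRID prefix followed by the WKB encoding:

    #include <cstdio>

    int main()
    {
      const unsigned wkb_point = 1 /* byte order */ + 4 /* wkb type */
                               + 2 * 8 /* x, y doubles */;   // = 21
      const unsigned srid_prefix = 4;  // spatial reference system identifier
      std::printf("Point storage: %u bytes (was computed as %u)\n",
                  srid_prefix + wkb_point, wkb_point);        // 25 vs 21
      return 0;
    }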
Bug#29816 Syntactically wrong query fails with misleading error message
The core problem is that an SQL-invoked function name can be a <schema
qualified routine name> that contains no <schema name>, but the MySQL
parser insists that all stored routines (functions, procedures and
triggers) have a <schema name>, which is not true for functions.
This problem is especially visible when trying to create a function,
or when a query contains a syntax error after a function call (in the
same query): both fail with a "No database selected" message if the
session is not attached to a particular schema, but the first
should succeed and the second should fail with a "syntax error" message.
Part of the fix is to revamp the stored routine name handling so that a
schema name may be omitted for functions -- this means that the internal
function name representation may contain no dot, which indicates that
the function has no schema name. The other part is to perform the schema
checks after the type (function, trigger or procedure) of the routine
is known.
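A standalone sketch of the revised name handling (the types and names are
invented for the example): no dot in the internal representation means no
explicit schema, and the "schema required" check runs only once the routine
type is known.

    #include <string>
    #include <stdexcept>

    enum class RoutineType { FUNCTION, PROCEDURE, TRIGGER };

    struct RoutineName
    {
      std::string db;     // empty: no explicit schema given
      std::string name;
    };

    static RoutineName parse_routine_name(const std::string &qname)
    {
      std::string::size_type dot = qname.find('.');
      if (dot == std::string::npos)
        return { "", qname };                     // unqualified name
      return { qname.substr(0, dot), qname.substr(dot + 1) };
    }

    // Deferred check: only raise "No database selected" when the routine
    // type actually requires a resolvable schema at this point.
    static void check_routine_schema(const RoutineName &n, RoutineType type,
                                     bool have_default_db)
    {
      if (n.db.empty() && !have_default_db && type != RoutineType::FUNCTION)
        throw std::runtime_error("No database selected");
    }

    int main()
    {
      RoutineName f = parse_routine_name("my_func");
      check_routine_schema(f, RoutineType::FUNCTION, false);  // no error
      return 0;
    }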