"Federated Denial of Service"
The Federated storage engine used to attempt to open connections within
the ::create() and ::open() methods, which are invoked while the LOCK_open
mutex is held by mysqld. As a result, no other client sessions could open
tables while Federated was attempting to open a connection. Long DNS lookup
times would stall mysqld's operation, and a rogue connection string pointing
to a remote server that simply stalls during the handshake could block
mysqld for a much longer period of time.
This patch moves the opening of the connection to much later, when Federated
actually issues queries, by which time the LOCK_open mutex is no longer held.
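For illustration, a minimal, self-contained C++ sketch of the deferred-connection
pattern described above, not the actual ha_federated code; RemoteConnection and
FederatedHandler are hypothetical stand-ins:

    #include <memory>
    #include <string>

    // Hypothetical stand-in: the real engine opens a MySQL client
    // connection to the remote server, which can block on DNS/handshake.
    struct RemoteConnection {
      explicit RemoteConnection(const std::string &uri) { (void)uri; }
    };

    class FederatedHandler {
      std::string connect_string_;
      std::unique_ptr<RemoteConnection> conn_;  // empty until first query

     public:
      explicit FederatedHandler(std::string cs)
          : connect_string_(std::move(cs)) {}

      // open() runs under LOCK_open, so it must stay cheap: record and
      // validate the connection string only, never touch the network.
      void open() { /* parse connect_string_ */ }

      // The first query lazily opens the connection, by which time
      // LOCK_open has been released.
      RemoteConnection &connection() {
        if (!conn_)
          conn_ = std::make_unique<RemoteConnection>(connect_string_);
        return *conn_;
      }
    };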
Stopping the MySQL server could result in an entry in the MySQL error
log: "InnoDB: Error: MySQL is freeing a thd".
This happened because InnoDB assumes that the server will never
call external_lock(F_UNLCK) if external_lock(READ/WRITE) failed.
Prior to this patch there was no strict definition of whether
external_lock(F_UNLCK) must be called when external_lock(READ/WRITE)
fails.
This patch establishes that external_lock(F_UNLCK) is never called
if external_lock(READ/WRITE) fails.
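A minimal sketch of the contract as fixed, under simplified assumptions;
Handler, LockMode and locked_read are hypothetical stand-ins for
handler::external_lock() and its caller:

    #include <cstdio>

    enum LockMode { LOCK_READ, LOCK_WRITE, LOCK_UNLOCK };  // simplified stand-ins

    // Hypothetical handler modeling handler::external_lock(): returns 0 on
    // success, non-zero on failure (an engine may refuse to take the lock).
    struct Handler {
      bool refuse_locks = false;
      int external_lock(LockMode mode) {
        return (mode != LOCK_UNLOCK && refuse_locks) ? 1 : 0;
      }
    };

    // The contract this patch establishes: if the lock call fails, the
    // caller must NOT issue the matching external_lock(F_UNLCK).
    void locked_read(Handler &h) {
      if (h.external_lock(LOCK_READ) != 0) {
        std::puts("lock failed: no unlock call is made");
        return;
      }
      /* ... read rows ... */
      h.external_lock(LOCK_UNLOCK);  // only after a successful lock
    }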
Creating an EVENT to be executed at a time close to the end of the allowed
range (2038.01.19 03:14:07 UTC) would cause the server to crash. The
expected behavior is to accept all calendar times within the interval and
reject all other values without crashing.
This patch replaces the function 'sec_to_epoch_TIME' with a Time_zone API call.
The function was broken because it invoked the internal function 'sec_to_epoch'
without respecting the restrictions on that function's parameters, which caused
an assertion failure. It was also used as the inverse of
Time_zone_utc::gmt_sec_to_TIME, which it is not.
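A sketch of the boundary check such a conversion needs, using the largest
32-bit signed time_t (2038-01-19 03:14:07 UTC) as the upper bound; the lower
bound of one second past the epoch is an assumption here, and POSIX gmtime_r
stands in for the server's own conversion:

    #include <cstdint>
    #include <ctime>

    // 2038-01-19 03:14:07 UTC, the largest 32-bit signed time_t value.
    constexpr std::int64_t kTimestampMax = 2147483647;
    constexpr std::int64_t kTimestampMin = 1;  // assumed lower bound: epoch + 1s

    // Sketch of the boundary check the conversion needs: out-of-range
    // values are rejected cleanly instead of tripping an assertion in
    // the low-level epoch routine.
    bool epoch_to_calendar(std::int64_t epoch, std::tm *out) {
      if (epoch < kTimestampMin || epoch > kTimestampMax)
        return false;  // reject without crashing
      std::time_t t = static_cast<std::time_t>(epoch);
      return gmtime_r(&t, out) != nullptr;  // UTC calendar time
    }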
Now we don't take any mutexes when creating or dropping internal HEAP tables during SELECT.
Changed buffer sizes to size_t to make the keycache 64-bit safe on
platforms where sizeof(ulong) != sizeof(size_t).
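For illustration, this is where the two types diverge:

    #include <cstdio>

    int main() {
      // On LP64 Unix both types are 8 bytes, but on LLP64 platforms such
      // as 64-bit Windows unsigned long stays 4 bytes while size_t is 8,
      // so a key cache sized in ulong would silently truncate there.
      std::printf("sizeof(unsigned long) = %zu\n", sizeof(unsigned long));
      std::printf("sizeof(size_t)        = %zu\n", sizeof(size_t));
    }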
When the SQL_BIG_RESULT flag is specified, SELECT should store items from the
select list in the filesort data and use them when sending to a client.
The get_addon_fields function is responsible for creating the necessary
structures for that, but it was allowed to do so only for SELECT and
INSERT .. SELECT queries. This made SQL_BIG_RESULT useless for
CREATE .. SELECT queries.
Now get_addon_fields allows storing select list items in the filesort
data for CREATE .. SELECT queries as well.
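A simplified sketch of the relaxed command gate; the SQLCOM_* values mirror
MySQL's sql_command codes, but may_pack_select_items is a hypothetical
stand-in, not the real get_addon_fields() signature:

    enum SqlCommand { SQLCOM_SELECT, SQLCOM_INSERT_SELECT, SQLCOM_CREATE_TABLE };

    bool may_pack_select_items(SqlCommand cmd) {
      switch (cmd) {
        case SQLCOM_SELECT:
        case SQLCOM_INSERT_SELECT:
        case SQLCOM_CREATE_TABLE:  // newly allowed: CREATE .. SELECT
          return true;             // store select-list items as filesort addons
        default:
          return false;
      }
    }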
"getGeneratedKeys() does not work with FEDERATED table"
mysql_insert() expected the storage engine to update the row data
during the write_row() operation with the value of the new
auto-increment field. The field must be updated when only one row
has been inserted, since mysql_insert() ignores thd->last_insert
in that case.
This patch implements HA_STATUS_AUTO support in ha_federated::info()
and ensures that ha_federated::write_row() does update the row's
auto-increment value.
The test case was written in C as the protocol's 'id' value is
accessible through libmysqlclient and not via SQL statements.
mysql-test-run.pl was extended to enable running the test binary.
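Roughly what such a test exercises through the C API: the generated key
travels in the protocol's OK packet and is read with mysql_insert_id().
The credentials and table name below are placeholders; the real test runs
under mysql-test-run.pl against a FEDERATED table:

    #include <mysql.h>
    #include <cstdio>

    int main() {
      MYSQL *conn = mysql_init(nullptr);
      if (!mysql_real_connect(conn, "localhost", "user", "password",
                              "test", 0, nullptr, 0)) {
        std::fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
      }
      if (mysql_query(conn, "INSERT INTO fed_t (payload) VALUES ('x')") == 0)
        std::printf("generated key: %llu\n",
                    (unsigned long long) mysql_insert_id(conn));
      mysql_close(conn);
      return 0;
    }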
If a primary key was defined over a column c of enum type, then
the EXPLAIN command for a look-up query of the form
SELECT * FROM t WHERE c=0
reported an impossible WHERE condition, even though the query
correctly returned a non-empty result set when the table contained
rows with the error empty string for column c.
This misbehavior was due to a bug in the function
Field_enum::store(longlong,bool) that erroneously returned 1 when
the value to be stored was equal to 0.
Note that the method
Field_enum::store(const char *from,uint length,CHARSET_INFO *cs)
correctly returned 0 when the error empty string was stored.
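A simplified model of the agreed semantics of the two overloads;
FieldEnumModel is a stand-in, not the real Field_enum:

    // Value 0 denotes the "error empty string" that every enum column
    // can hold, so storing it must report success.
    struct FieldEnumModel {
      unsigned member_count;   // number of declared enum members
      long long stored = 0;

      // Returns 0 on success, 1 if the value is out of range; before the
      // fix the longlong overload wrongly returned 1 for value == 0.
      int store(long long value) {
        if (value < 0 || value > (long long) member_count)
          return 1;      // genuinely out of range
        stored = value;  // 0 (error empty string) is accepted
        return 0;
      }
    };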
This bug manifested itself in join queries with GROUP BY and HAVING clauses
whose SELECT lists contained DISTINCT. It occurred when the optimizer could
deduce that the result set would contain at most one row.
The bug could lead to wrong result sets for queries of this type because
HAVING conditions were in some cases erroneously ignored in the function
remove_duplicates.
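A minimal model of the required behavior, with Row and the predicate as
illustrative stand-ins for the server's structures: rows failing HAVING are
filtered out before, and regardless of, the one-row shortcut:

    #include <algorithm>
    #include <functional>
    #include <vector>

    using Row = std::vector<long>;  // illustrative stand-in for a result row

    // Apply the HAVING predicate unconditionally, so rows failing it are
    // never returned even when duplicate removal itself is trivial.
    void remove_duplicates(std::vector<Row> &rows,
                           const std::function<bool(const Row &)> &having) {
      rows.erase(std::remove_if(rows.begin(), rows.end(),
                                [&](const Row &r) { return !having(r); }),
                 rows.end());
      if (rows.size() <= 1) return;  // at most one row left: nothing to dedup
      std::sort(rows.begin(), rows.end());
      rows.erase(std::unique(rows.begin(), rows.end()), rows.end());
    }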