on decimal column
The problem was that there was no check to disallow DECIMAL
columns in the code (they were accepted as if they were INTEGER).
The solution is to correctly disallow DECIMAL columns in
COLUMNS partitioning, as documented.
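For illustration, a definition of the kind that is now rejected
(hypothetical table; previously the DECIMAL column was accepted as
if it were INTEGER):

    CREATE TABLE t1 (d DECIMAL(10,2))
    PARTITION BY RANGE COLUMNS (d)
    (PARTITION p0 VALUES LESS THAN (10));
    -- now fails: DECIMAL is not an allowed type for COLUMNS partitioning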
work in 5.1.40)
MERGE engine fails to open a child table from a different
database if the child table/database name contains characters
that are subject to table name to filename encoding
(WL#1324).
Another problem is that the MERGE engine didn't properly open
a child table from the same database if the child table name
contains characters such as '/' or '#'.
The problem was that table name to file name encoding was
applied inconsistently:
* On CREATE: encode table name + database name if the child
  table is in a different database; do not encode the table
  name if the child table is in the same database;
* No decoding on open.
With this fix, child table/database names are always
encoded on CREATE and decoded on open. Compatibility
with older tables is preserved.
Along with this patch comes a fix for SHOW CREATE TABLE,
which used to show the child table/database path instead
of the child table/database names.
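A sketch of the affected case (table names hypothetical; '#' is one
of the characters subject to filename encoding):

    CREATE TABLE `t#1` (a INT) ENGINE=MyISAM;
    CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(`t#1`);
    SELECT * FROM m1;        -- previously failed to open the child table
    SHOW CREATE TABLE m1;    -- previously showed the encoded child path
                             -- instead of the child table name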
Diagnostics_area::set_ok_status on DROP FUNCTION
This assert tests that the server is not trying to send "ok" to
the client if an error has occurred during statement processing.
In this case, the assert was triggered by lock timeout errors when
accessing system tables to do an implicit REVOKE after executing
DROP FUNCTION/PROCEDURE. In practice, this was only likely to
happen with very low values for "lock_wait_timeout" (in the bug report
1 second was used). These errors were ignored and the server tried
to send "ok" to the client, triggering the assert.
The patch for Bug#45225 introduced lock timeouts for metadata locks.
This made it possible to get timeouts when accessing system tables.
Note that a follow-up patch for Bug#45225, pushed after this
bug was reported, changed the accessing of system tables such
that the user-supplied timeout value is ignored and the maximum
timeout value is used instead. This exact bug was therefore
only noticeable in the period between the initial Bug#45225 patch
and the follow-up patch.
However, the same problem could occur for any errors during revoking
of privileges - not just timeouts. This patch fixes the problem by
making sure that any errors during revoking of privileges are
reported to the client.
Test case added to sp-destruct.test. Since the original bug is not
reproducible now that system tables are accessed using a long
timeout value, this test instead calls DROP FUNCTION with a grant
system table missing.
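A rough sketch of the scenario the test exercises (names
hypothetical; sp-destruct.test simulates the missing grant table in
its own way):

    CREATE FUNCTION f1() RETURNS INT RETURN 1;
    RENAME TABLE mysql.procs_priv TO mysql.procs_priv_backup;
    DROP FUNCTION f1;  -- the error from the implicit REVOKE is now
                       -- reported instead of a silent "ok"
    RENAME TABLE mysql.procs_priv_backup TO mysql.procs_priv;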
If an outer query is broken, a subquery might not even get set up.
EXPLAIN EXTENDED did not expect this and merrily tried to de-ref all
of the half-set-up info.
We now catch this case and print as much as we have, as it doesn't cost
anything (it doesn't make regular execution slower).
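A hypothetical shape of such a query, for illustration only (the
outer query fails name resolution, leaving the subquery half set up):

    EXPLAIN EXTENDED SELECT 1 FROM t1
    WHERE no_such_column = (SELECT MAX(a) FROM t2);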
backport from 5.1
insert...select
Queries following a bulk insert into an empty MyISAM table
could break it. This was a pure MyISAM problem.
When a bulk insert into an empty table is complete, MyISAM
may want to enable indexes via repair by sort. If repair
by sort fails (e.g. due to an insufficient buffer), MyISAM falls
back to repair with the key cache, requesting repair of the data file.
Repair of the data file performs data file substitution. This
means that the current table instance will point to the new data
file. Other cached table instances are still pointing to
the old, deleted data file.
This is fixed by not requesting repair of the data file
during enable indexes.
Explicit REPAIR is not affected, since it flushes all
table instances.
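A sketch of the affected sequence (assumes repair by sort fails,
e.g. due to a deliberately small sort buffer):

    SET SESSION myisam_sort_buffer_size = 4096;  -- assumption: small enough
                                                 -- to make repair by sort fail
    INSERT INTO t1 SELECT * FROM t2;  -- bulk insert into the empty MyISAM
                                      -- table t1; indexes re-enabled at the end
    -- another connection's cached instance of t1 could previously still
    -- point to the old, deleted data file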
for same data when using bit fields
Problem: the checksum for BIT fields could be computed incorrectly
in some cases due to the peculiarities of their storage.
Fix: convert a BIT field to a string, then calculate its checksum.
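For illustration (hypothetical table):

    CREATE TABLE t1 (a BIT(10)) ENGINE=MyISAM;
    INSERT INTO t1 VALUES (b'0101010101');
    CHECKSUM TABLE t1 EXTENDED;  -- previously could report different
                                 -- checksums for identical data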
Extend and implement the grammar that allows FLUSH TABLES WITH READ
LOCK to be applied to a list of tables, rather than all of them.
Incompatible grammar change:
Previously one could perform FLUSH TABLES, HOSTS, PRIVILEGES in a single
statement.
After this change, FLUSH TABLES must always be alone in the list.
Judging by the test suite, however, the old extended syntax
was never or very rarely used.
The new statement requires RELOAD ACL global privilege and
LOCK_TABLES_ACL | SELECT_ACL on individual tables.
In other words, it's an atomic combination of LOCK TABLES <list> READ
and FLUSH TABLES <list>, and requires the respective privileges.
For additional information about the semantics, please
see WL#5000 and the comment for the flush_tables_with_read_lock()
function in sql_parse.cc.
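Usage sketch (table names hypothetical):

    FLUSH TABLES t1, t2 WITH READ LOCK;
    -- t1 and t2 are flushed and remain read-locked, as with
    -- LOCK TABLES t1 READ, t2 READ
    UNLOCK TABLES;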
The problem was that ALTER TABLE on a merge table which was locked
using LOCK TABLE ... WRITE, by mistake gave
ER_TABLE_NOT_LOCKED_FOR_WRITE.
During opening of the table to be ALTERed, open_table() tried to
get an upgradable metadata lock. In LOCK TABLES mode, this lock
must already exist (i.e. have been taken by LOCK TABLE) as new locks
of this type cannot be acquired for fear of deadlock. So in LOCK
TABLES mode, open_table() tried to find an existing upgradable lock
for the table to be altered.
The problem was that open_table() also tried to find upgradable
metadata locks for children of merge tables even if no such
locks are needed to execute ALTER TABLE on merge tables.
This patch fixes the problem by making sure that open tables code
only searches for upgradable metadata locks for the merge table
and not for the merge children tables.
The patch also fixes a related bug where an upgradable metadata
lock was acquired outside of LOCK TABLES mode even if the table in
question was temporary. This bug meant that LOCK TABLES or DDL on
temporary tables could by mistake be blocked/aborted by locks held
on base tables with the same table name by other connections.
Test cases added to merge.test and lock_multi.test.
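A sketch of the previously failing case (names hypothetical; see
merge.test for the real test):

    CREATE TABLE t1 (a INT) ENGINE=MyISAM;
    CREATE TABLE m1 (a INT) ENGINE=MERGE UNION=(t1);
    LOCK TABLE m1 WRITE;
    ALTER TABLE m1 UNION=(t1);  -- no longer fails with
                                -- ER_TABLE_NOT_LOCKED_FOR_WRITE
    UNLOCK TABLES;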
The problem was that the CSV storage engine does not support nullable
columns, yet in some early 5.1 versions the log tables (general_log
and slow_log) were created with nullable columns. On top of this, when
altering a CSV table column, all columns of the table must be NOT
NULL, otherwise the alteration fails.
The solution is to ensure that during upgrade all columns of the
log tables are NOT NULL.
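Roughly, the upgrade now applies statements along these lines (a
sketch; the exact column list lives in the upgrade script):

    ALTER TABLE mysql.general_log
        MODIFY thread_id INT(11) NOT NULL,
        MODIFY argument MEDIUMTEXT NOT NULL;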
The problem is that cond->fix_fields(thd, 0) breaks the
condition (it cuts off the 'having' part). The reason is
that a NULL-valued Item pointer is present in the
middle of the Item list, and it breaks the Item
processing loop.
performance degradation.
The filesort + join cache combination is preferred to a full index scan
because it is usually faster. But this is not the case when the index
is a clustered one.
Now the test_if_skip_sort_order function prefers filesort only if the
index isn't clustered.
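A sketch of the shape of the affected query (assumes InnoDB, whose
primary key index is clustered):

    CREATE TABLE t1 (pk INT PRIMARY KEY, b INT) ENGINE=InnoDB;
    CREATE TABLE t2 (c INT) ENGINE=InnoDB;
    EXPLAIN SELECT * FROM t1, t2 WHERE t1.b = t2.c ORDER BY t1.pk;
    -- an ordered scan of t1's clustered primary key is now preferred
    -- over filesort + join cache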
Attempts to execute RESET statements within a transaction that
had acquired metadata locks, led to an assertion failure on
debug servers. This bug didn't cause any problems on release
builds.
The triggered assert is designed to check that caches are not
flushed or reset while there are active transactions. It is triggered
if acquired metadata locks exist that are not from LOCK TABLE or
HANDLER statements.
In this case it was triggered by RESET QUERY CACHE while having
an active transaction that had acquired locks. The reason the
assertion was triggered was that RESET statements, unlike the
similar FLUSH statements, did not cause an implicit commit.
This patch fixes the problem by making sure RESET statements
commit the current transaction before executing. The commit
causes acquired metadata locks to be released, preventing the
assertion from being triggered.
Incompatible change: This patch changes RESET statements so
that they cause an implicit commit.
Test case added to query_cache.test.
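A sketch of the previously asserting sequence on debug builds (table
name hypothetical):

    BEGIN;
    SELECT * FROM t1;    -- the transaction acquires a metadata lock on t1
    RESET QUERY CACHE;   -- now implicitly commits first, releasing the lock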
Detailed revision comments:
r6538 | sunny | 2010-01-30 00:43:06 +0200 (Sat, 30 Jan 2010) | 6 lines
branches/5.1: Check *first_value every time against the column max
value and set *first_value to next autoinc if it's > col max value.
i.e. not rely on what is passed in from MySQL.
[49497] Error 1467 (ER_AUTOINC_READ_FAILED) on inserting a negative value
rb://236
Detailed revision comments:
r6536 | sunny | 2010-01-30 00:13:42 +0200 (Sat, 30 Jan 2010) | 6 lines
branches/5.1: Check *first_value every time against the column max
value and set *first_value to next autoinc if it's > col max value.
i.e. not rely on what is passed in from MySQL.
[49497] Error 1467 (ER_AUTOINC_READ_FAILED) on inserting a negative value
rb://236
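A sketch of the reported failure from Bug#49497 (exact schema
hypothetical):

    CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
    INSERT INTO t1 VALUES (-1);    -- a negative, in-range value
    INSERT INTO t1 VALUES (NULL);  -- previously could fail with error 1467
                                   -- (ER_AUTOINC_READ_FAILED)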
Propagation of a large unsigned numeric constant
in the WHERE expression led to a wrong result.
For example,
"WHERE a = CAST(0xFFFFFFFFFFFFFFFF AS UNSIGNED) AND FOO(a)",
where a is an UNSIGNED BIGINT and FOO() accepts strings,
was transformed to "... AND FOO('-1')".
That has been fixed.
Also, EXPLAIN EXTENDED printed incorrect numeric constants in
transformed WHERE expressions like the above. That has been
fixed too.
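A runnable variant of the example above, with LENGTH() standing in
for the hypothetical string-accepting FOO():

    SELECT * FROM t1
    WHERE a = CAST(0xFFFFFFFFFFFFFFFF AS UNSIGNED) AND LENGTH(a) > 0;
    -- the constant propagated into LENGTH() is no longer the signed
    -- string '-1'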
The problem was in an incorrect debug assertion. The expression
used in the failing assertion states that when finding
references matching ORDER BY expressions, there can be only one
reference to a single table. But that does not make any sense:
all test cases for this bug are valid examples with multiple
identical WHERE expressions referencing the same table which
are also present in the ORDER BY list.
Fixed by removing the failing assertion. We also have to take
care of the 'found' counter so that we count multiple
references only once. We rely on this fact later in
eq_ref_table().
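A sketch of the kind of query that tripped the assert on debug
builds (names hypothetical):

    SELECT t1.a FROM t1, t2
    WHERE t1.a = t2.a AND t1.a = t2.b
    ORDER BY t1.a;
    -- multiple identical references to t1.a appear in WHERE and are
    -- also present in the ORDER BY list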