Creating an IBMDB2I table with the macce character set
succeeded, but any attempt to insert data into the
table failed.
This was happening because the character set name "macce"
is not a valid iconv descriptor for IBM i PASE. This patch
adds an override to convertTextDesc to use the equivalent
valid iconv descriptor "IBM-1282" instead.
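As an illustration only (the helper name below is made up; the real
override lives in convertTextDesc in db2i_charsetSupport.cc), the idea
is to translate the MySQL charset name into a descriptor iconv will
accept before calling iconv_open:

    #include <iconv.h>
    #include <cstdio>
    #include <cstring>

    // Hypothetical helper: map a MySQL character set name to a descriptor
    // that iconv on IBM i PASE accepts.  "macce" has no descriptor of its
    // own, so it is mapped to the equivalent "IBM-1282".
    static const char *to_iconv_descriptor(const char *csname)
    {
      if (strcmp(csname, "macce") == 0)
        return "IBM-1282";
      return csname;               // other names are passed through unchanged
    }

    int main()
    {
      iconv_t cd= iconv_open("UTF-8", to_iconv_descriptor("macce"));
      if (cd == (iconv_t) -1)
        perror("iconv_open");      // "macce" itself would fail here on PASE
      else
        iconv_close(cd);
      return 0;
    }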
mysql-test/suite/ibmdb2i/r/ibmdb2i_bug_45793.result:
Bug#45793 macce charset causes error with IBMDB2I
Result file for the test case.
mysql-test/suite/ibmdb2i/t/ibmdb2i_bug_45793.test:
Bug#45793 macce charset causes error with IBMDB2I
Add a test case for the macce character set.
storage/ibmdb2i/db2i_charsetSupport.cc:
Bug#45793 macce charset causes error with IBMDB2I
The character set name "macce" is not a valid iconv
descriptor for IBM i PASE. Add an override to convertTextDesc
to use the equivalent valid iconv descriptor "IBM-1282"
instead.
Some collations, including cp1250_czech_cs, latin2_czech_cs,
ucs2/utf8_czech_ci and ucs2/utf8_danish_ci, were not being
sorted correctly by the IBMDB2I storage engine. This
was caused by the sort order used by DB2 being
incompatible with the order expected by MySQL.
This patch removes support for the cp1250_czech_cs and
latin2_czech_cs collations because it has been determined
that the sort order used by DB2 is incompatible with the
order expected by MySQL. Users needing a Czech collation
with IBMDB2I are encouraged to use a Unicode-based collation
instead of these single-byte collations. This patch also
modifies the DB2 sort sequence used for ucs2/utf8_czech_ci
and ucs2/utf8_danish_ci collations to better match the
sorting expected by MySQL. This will only affect indexes
or tables that are newly created through the IBMDB2I storage
engine. Existing IBMDB2I tables will retain the old sort
sequence until recreated.
mysql-test/suite/ibmdb2i/r/ibmdb2i_bug_45196.result:
Bug#45196 Some collations do not sort correctly with IBMDB2I
Result file for the test case.
mysql-test/suite/ibmdb2i/t/ibmdb2i_bug_45196.test:
Bug#45196 Some collations do not sort correctly with IBMDB2I
Add tests for the sort order of the modified collations.
storage/ibmdb2i/db2i_collationSupport.cc:
Bug#45196 Some collations do not sort correctly with IBMDB2I
Remove the support for the cp1250_czech_cs and latin2_czech_cs
collations because it has been determined that the sort order
used by DB2 is incompatible with the order expected by MySQL.
Users needing a Czech collation with IBMDB2I are encouraged to
use a Unicode-based collation instead of these single-byte
collations. This patch also modifies the DB2 sort sequence
used for ucs2/utf8_czech_ci and ucs2/utf8_danish_ci collations
to better match the sorting expected by MySQL. This will only
affect indexes or tables that are newly created through the
IBMDB2I storage engine. Existing IBMDB2I tables will retain
the old sort sequence until recreated.
format." warnings
Despite the fact that a statement would be filtered out from binlog, a
warning would still be thrown if it was issued with the LIMIT.
This patch addresses this issue by checking the filtering rules before
printing out the warning.
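A minimal sketch of the shape of the fix, with made-up stand-ins for
the server's binlog filter and warning machinery (the real change is in
sql/sql_class.cc): the unsafe-statement warning is only raised when the
statement would actually be logged.

    #include <cstdio>

    // Hypothetical stand-ins; the real code consults the binlog filtering rules.
    struct BinlogFilter { bool db_ok(const char *) const { return false; } };  // db is filtered out

    static void maybe_warn_unsafe(const BinlogFilter &filter, const char *db, bool unsafe)
    {
      // Before the fix the warning was raised for every unsafe statement;
      // with the fix the filtering rules are checked first.
      if (unsafe && filter.db_ok(db))
        std::puts("Warning: statement is unsafe for statement-based logging");
    }

    int main()
    {
      BinlogFilter filter;
      maybe_warn_unsafe(filter, "b42851", true);   // filtered database: no warning
      return 0;
    }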
mysql-test/suite/binlog/t/binlog_stm_unsafe_warning-master.opt:
Parameter to filter out database: "b42851".
mysql-test/suite/binlog/t/binlog_stm_unsafe_warning.test:
Added a new test case.
sql/sql_class.cc:
Added a check of the filtering rules to the condition used to decide
whether to print the warning or not.
The TABLE::reginfo.impossible_range flag is used by the optimizer to indicate
that the condition applied to the table is impossible. It was not initialized
at table opening, and this could lead to an empty result on complex queries:
a query might set the impossible_range flag on a table, and when the query finishes,
all tables are returned to the table cache. The next query that uses the table
with the stale impossible_range flag and an index over the table will see the flag
and thus return an empty result.
The open_table function now initializes the TABLE::reginfo.impossible_range
variable.
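A toy illustration of the stale-flag scenario and the one-line reset
(simplified structures; the real reset is in open_table() in
sql/sql_base.cc):

    #include <cassert>

    // Simplified stand-in: a cached TABLE object is reused across queries.
    struct TABLE { struct { bool impossible_range; } reginfo; };

    static TABLE table_cache_entry;               // survives between statements

    static TABLE *open_table()
    {
      TABLE *table= &table_cache_entry;
      table->reginfo.impossible_range= false;     // the fix: clear the stale flag
      return table;
    }

    int main()
    {
      TABLE *t= open_table();
      t->reginfo.impossible_range= true;          // query 1 proves its range is impossible
      // ... query 1 finishes, the TABLE goes back to the table cache ...
      t= open_table();                            // query 2 reuses the cached TABLE
      assert(!t->reginfo.impossible_range);       // without the reset this would fail
      return 0;
    }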
mysql-test/r/select.result:
A test case for the bug#45266: Uninitialized variable lead to an empty result.
mysql-test/t/select.test:
A test case for the bug#45266: Uninitialized variable lead to an empty result.
sql/sql_base.cc:
Bug#45266: Uninitialized variable lead to an empty result.
The open_table function now initializes the TABLE::reginfo.impossible_range
variable.
sql/sql_select.cc:
Bug#45266: Uninitialized variable lead to an empty result.
The open_table function now initializes the TABLE::reginfo.impossible_range
variable.
sql/structs.h:
Bug#45266: Uninitialized variable lead to an empty result.
A comment is added.
The problem is that the one-phase commit function failed to
properly end an empty transaction. The solution is to ensure
that the transaction cleanup procedure is invoked even for
empty transactions.
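In outline the change looks like the sketch below (simplified model,
illustrative names; the real code is in sql/handler.cc): the cleanup
runs unconditionally, so an empty transaction is ended properly too.

    #include <cstdio>
    #include <vector>

    // Simplified model of a transaction and its participating engines.
    struct Transaction { std::vector<int> engines; bool cleaned_up= false; };

    static void cleanup(Transaction &trx) { trx.cleaned_up= true; trx.engines.clear(); }

    static void commit_one_phase(Transaction &trx)
    {
      if (!trx.engines.empty())
      {
        // ... ask each registered engine to commit ...
      }
      cleanup(trx);    // the fix: always invoked, even when no engine was registered
    }

    int main()
    {
      Transaction empty_trx;                      // e.g. an empty XA transaction
      commit_one_phase(empty_trx);
      std::printf("cleaned up: %s\n", empty_trx.cleaned_up ? "yes" : "no");
      return 0;
    }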
mysql-test/r/xa.result:
Add test case result for Bug#45548
mysql-test/t/xa.test:
Add test case for Bug#45548
sql/handler.cc:
Invoke transaction cleanup function whenever a transaction is ended.
The test case added failed sporadically on PB. This is because the
user thread in some cases waits for the slave IO thread to stop and
only then checks the error number, so the user thread would sometimes
race with the IO thread for the error number.
This post-push fix addresses the issue by replacing the wait for the
slave IO thread to stop with a wait for a slave IO error (as was
apparently also done in 6.0 after the patch this one is based on was
pushed). This implied backporting wait_for_slave_io_error.inc
from 6.0 as well.
Added privilege checking to SHOW CREATE TRIGGER code.
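Roughly, the change amounts to verifying the caller's privilege on the
subject table before revealing the trigger definition. A sketch with a
made-up privilege model (the real check lives in sql/sql_show.cc):

    #include <cstdio>
    #include <set>
    #include <string>

    // Hypothetical privilege model: tables the user may see trigger definitions for.
    static bool show_create_trigger(const std::set<std::string> &grants,
                                    const std::string &table)
    {
      if (grants.count(table) == 0)
      {
        std::puts("ERROR: access denied");        // the added privilege check
        return false;
      }
      std::puts("CREATE TRIGGER ...");            // definition shown only when allowed
      return true;
    }

    int main()
    {
      std::set<std::string> grants= { "db1.t1" };
      show_create_trigger(grants, "db1.t1");      // allowed
      show_create_trigger(grants, "db2.t2");      // denied
      return 0;
    }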
mysql-test/r/trigger_notembedded.result:
test result
mysql-test/t/trigger_notembedded.test:
test case
sql/sql_show.cc:
Added privilege checking to SHOW CREATE TRIGGER code.
BUG#40565 - Update Query Results in "1 Row Affected" But Should Be "Zero Rows"
Detailed revision comments:
r5232 | marko | 2009-06-03 14:31:04 +0300 (Wed, 03 Jun 2009) | 21 lines
branches/5.0: Merge r3590 from branches/5.1 in order to fix Bug #40565
(Update Query Results in "1 Row Affected" But Should Be "Zero Rows").
Also, add a test case for Bug #40565.
rb://128 approved by Heikki Tuuri
------------------------------------------------------------------------
r3590 | marko | 2008-12-18 15:33:36 +0200 (Thu, 18 Dec 2008) | 11 lines
branches/5.1: When converting a record to MySQL format, copy the default
column values for columns that are SQL NULL. This addresses failures in
row-based replication (Bug #39648).
row_prebuilt_t: Add default_rec, for the default values of the columns in
MySQL format.
row_sel_store_mysql_rec(): Use prebuilt->default_rec instead of
padding columns.
rb://64 approved by Heikki Tuuri
------------------------------------------------------------------------
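The idea behind r3590, reduced to a sketch (default_rec and
row_sel_store_mysql_rec are named in the revision comment; everything
else below is illustrative): a column that is SQL NULL gets its bytes
copied from a prepared default-values record instead of being left as
padding, so the buffer contents are deterministic for row-based
replication.

    #include <cstdio>
    #include <cstring>

    enum { N_COLS= 2, COL_LEN= 4 };

    // Simplified row in "MySQL format": fixed-width columns.
    struct Row { unsigned char col[N_COLS][COL_LEN]; };

    // Counterpart of prebuilt->default_rec: the table's default values,
    // already laid out in MySQL row format.
    static const Row default_rec= { { { 'd','e','f','0' }, { 'd','e','f','1' } } };

    static void store_mysql_rec(Row *out, const Row *stored, const bool is_null[N_COLS])
    {
      for (int i= 0; i < N_COLS; i++)
      {
        // SQL NULL columns: copy the default value instead of leaving padding.
        const Row *src= is_null[i] ? &default_rec : stored;
        std::memcpy(out->col[i], src->col[i], COL_LEN);
      }
    }

    int main()
    {
      Row stored= { { { 'a','b','c','d' }, { 'e','f','g','h' } } };
      bool is_null[N_COLS]= { false, true };
      Row out;
      store_mysql_rec(&out, &stored, is_null);
      std::printf("%.4s %.4s\n", (const char*) out.col[0], (const char*) out.col[1]);
      return 0;
    }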
In Item_param::set_from_user_var,
value.cs_info.character_set_client is set
to the 'fromcs' value. This is wrong; it should be set to
thd->variables.character_set_client.
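In essence the fix is a one-line change in sql/item.cc; sketched below
with simplified stand-in types (THD, CHARSET_INFO and cs_info are
modelled, not the real structures):

    #include <cstdio>

    struct CHARSET_INFO { const char *name; };
    static CHARSET_INFO gbk=    { "gbk" };
    static CHARSET_INFO latin1= { "latin1" };

    struct THD { struct { CHARSET_INFO *character_set_client; } variables; };

    struct Item_param
    {
      struct { CHARSET_INFO *character_set_client; } cs_info;

      void set_from_user_var(THD *thd, CHARSET_INFO *fromcs)
      {
        // Wrong:  cs_info.character_set_client= fromcs;
        // Right:  the client character set comes from the session, not from
        //         the character set of the user variable's value.
        cs_info.character_set_client= thd->variables.character_set_client;
        (void) fromcs;
      }
    };

    int main()
    {
      THD thd;
      thd.variables.character_set_client= &latin1;
      Item_param p;
      p.set_from_user_var(&thd, &gbk);
      std::printf("client cs: %s\n", p.cs_info.character_set_client->name);  // latin1
      return 0;
    }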
mysql-test/r/ctype_gbk_binlog.result:
test result
mysql-test/t/ctype_gbk_binlog.test:
test case
sql/item.cc:
In Item_param::set_from_user_var,
value.cs_info.character_set_client is set
to the 'fromcs' value. This is wrong; it should be set to
thd->variables.character_set_client.
When opening a table, it is imperative that the flag
TABLE::auto_increment_field_not_null be false. But if an error occurred during
the creation of a table (e.g. the table exists already) with an auto_increment
column and a BEFORE trigger that used the INSERT ... SELECT construct, the
flag was not reset until after error checking. Thus if an error occurred,
select_insert::send_data() returned immediately and the flag was not reset (see * in
the pseudocode below). A crash happened if the table was opened again. Fixed by
making sure the flag is also reset on the error path.
nested-loops_join():
  for each row in SELECT table {
    select_insert::send_data():
      if a value is supplied for the AUTO_INCREMENT column
        table->auto_increment_field_not_null= TRUE
      else
        table->auto_increment_field_not_null= FALSE
      if (error)
        return 1; *
      if (table->auto_increment_field_not_null == FALSE)
        ...
      table->auto_increment_field_not_null= FALSE
  }
<-- table returned to table cache and later retrieved by open_table:
open_table():
  assert(table->auto_increment_field_not_null == FALSE)
mysql-test/r/trigger.result:
Bug#44653: Test result
mysql-test/t/trigger.test:
Bug#44653: Test case
sql/sql_insert.cc:
Bug#44653: Fix: Make sure to unset this field before returning in case of error
1. BUG#45357 - 5.1.35 crashes with Failing assertion: index->type & DICT_CLUSTERED
2. Also fixes a compilation problem when the flag -DUNIV_MUST_NOT_INLINE is used
Detailed revision comments:
r5340 | marko | 2009-06-17 12:11:49 +0300 (Wed, 17 Jun 2009) | 4 lines
branches/5.1: row_unlock_for_mysql(): When the clustered index is unknown,
refuse to unlock the record.
(Bug #45357, caused by the fix of Bug #39320).
rb://132 approved by Sunny Bains.
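The guard added by r5340, in outline (illustrative structures; the real
code operates on InnoDB's prebuilt struct and dict_index_t):

    #include <cstdio>

    struct Index    { bool is_clustered; };
    struct Prebuilt { Index *clust_index; };   // NULL when the clustered index is unknown

    static void row_unlock_for_mysql(Prebuilt *prebuilt)
    {
      // The fix: when the clustered index is not known, refuse to unlock
      // the record instead of tripping the assertion later.
      if (prebuilt->clust_index == NULL || !prebuilt->clust_index->is_clustered)
      {
        std::puts("clustered index unknown: keeping the lock");
        return;
      }
      std::puts("unlocking the clustered index record");
    }

    int main()
    {
      Prebuilt unknown= { NULL };
      row_unlock_for_mysql(&unknown);           // refused
      Index clust= { true };
      Prebuilt known= { &clust };
      row_unlock_for_mysql(&known);             // proceeds
      return 0;
    }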
r5339 | marko | 2009-06-17 11:01:37 +0300 (Wed, 17 Jun 2009) | 2 lines
branches/5.1: Add missing #include "mtr0log.h" so that the code compiles
with -DUNIV_MUST_NOT_INLINE.
Details:
- Limit the queries to character sets and collations
which are most probably available in all build types.
But try to preserve the intention of the tests.
- Remove the variants adjusted to some build types.
Note:
1. The results of the review by Bar are included.
2. I am not able to check the correctness of this patch
   on every existing build type and MySQL version,
   so it could happen that the new test fails somewhere.
Inconsistent behavior of session variable max_allowed_packet
(and net_buffer_length); only assignment to the global variable
has any effect, without this being obvious to the user.
The patch for Bug#22891 is backported to 5.0, making the two
session variables read-only. As this is a backport to GA
software, the error used when trying to assign to the read-
only variable is ER_UNKNOWN_ERROR. The error message is the
same as in 5.1+.
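A sketch of the pattern (the class name follows the commit; the rest is
illustrative): the variable accepts SET GLOBAL but rejects SET SESSION,
reporting an error on the session path.

    #include <cstdio>

    enum var_scope { SCOPE_GLOBAL, SCOPE_SESSION };

    // Modelled loosely on sys_var_thd_ulong_session_readonly: the session value
    // is visible but may only be changed through the global variable.
    struct SessionReadonlyVar
    {
      const char *name;
      unsigned long global_value;

      bool set(var_scope scope, unsigned long value)
      {
        if (scope == SCOPE_SESSION)
        {
          // The 5.0 backport reuses the generic ER_UNKNOWN_ERROR code here;
          // the message text below is only illustrative.
          std::printf("ERROR: SESSION variable '%s' is read-only\n", name);
          return false;
        }
        global_value= value;
        return true;
      }
    };

    int main()
    {
      SessionReadonlyVar max_allowed_packet= { "max_allowed_packet", 1048576 };
      max_allowed_packet.set(SCOPE_GLOBAL, 16777216);    // accepted
      max_allowed_packet.set(SCOPE_SESSION, 16777216);   // rejected
      return 0;
    }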
mysql-test/t/variables.test:
Tests are changed to account for the new semantics, and assignment to the read-only variables is added to test
the emission of the correct error message.
sql/set_var.cc:
Both max_allowed_packet and net_buffer_length are changed
to be of type sys_var_thd_ulong_session_readonly. ER_UNKNOWN_ERROR is used to indicate an attempt to assign
to an instance of a read-only variable.
sql/set_var.h:
Class sys_var_thd_ulong_session_readonly is added.
Large transactions and statements may corrupt the binary log if the size of the
cache, which is set by max_binlog_cache_size, is not enough to store
the changes.
In a nutshell, to fix the bug, we save the position of the next character in the
cache before starting to process a statement. If there is a problem, we simply
restore the position, thus removing any effect of the statement from the cache.
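The save/restore idea, sketched on a plain byte buffer (the real code
works on the IO_CACHE behind the binlog cache; all names below are
illustrative):

    #include <cstdio>
    #include <string>

    struct BinlogCache
    {
      std::string bytes;                                  // stand-in for the IO_CACHE
      size_t save_pos() const     { return bytes.size(); }
      void   truncate(size_t pos) { bytes.resize(pos); }
      bool   write(const std::string &ev, size_t max_size) // fails when the cache is "full"
      { return bytes.size() + ev.size() <= max_size ? (bytes+= ev, true) : false; }
    };

    int main()
    {
      BinlogCache cache;
      const size_t max_binlog_cache_size= 16;

      size_t before= cache.save_pos();          // saved before processing the statement
      if (!cache.write("a very large statement", max_binlog_cache_size))
      {
        cache.truncate(before);                 // restore: the statement leaves no trace
        std::puts("statement not logged; an incident event would follow if needed");
      }
      std::printf("cache size after restore: %zu\n", cache.bytes.size());
      return 0;
    }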
Unfortunately, to avoid corrupting the binary log, we may end up losing changes
on non-transactional tables if they do not fit in the cache. In such cases, we
store an Incident_log_event in order to stop the slave and alert users that some
changes were not logged.
Precisely, for non-transactional changes that do not fit into the cache,
we do the following:
a) the statement is *not* logged
b) an incident event is logged after committing/rolling back the transaction,
if any. Note that if a failure happens before writing the incident event to
the binary log, the slave will not stop and the master will not have reported
any error.
c) its respective statement gives an error
For transactional changes that do not fit into the cache, we do the following:
a) the statement is *not* logged
b) its respective statement gives an error
To work properly, this patch requires two additional things. Firstly, callers to
MYSQL_BIN_LOG::write and THD::binlog_query must handle any error returned and
take the appropriate actions such as undoing the effects of a statement. We
already changed some calls in the sql_insert.cc, sql_update.cc and sql_insert.cc
modules but the remaining calls spread all over the code should be handled in
BUG#37148. Secondly, statements must be either classified as DDL or DML because
DDLs that do not get into the cache must generate an incident event since they
cannot be rolled back.
Item_func_spatial_collection::val_str:
when the concatenation function for geometry data collections
reads the binary data, it was not rigorous in checking that
data is available, leading to invalid reads and crashes.
Fixed by making the checking stricter.
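The shape of the stricter checking, as a sketch over a raw WKB-like
buffer (simplified, illustrative helpers; the real checks are in
Item_func_spatial_collection::val_str):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Read a 4-byte type code, refusing to read past the end of the buffer.
    static bool read_type_code(const unsigned char *data, size_t len,
                               size_t *pos, uint32_t *type)
    {
      if (len < *pos + 4)                 // the added check: 4 bytes must be available
        return false;
      std::memcpy(type, data + *pos, 4);
      *pos+= 4;
      return true;
    }

    // Check that n_points points of point_size bytes each are really present,
    // and that the minimum point count for the shape is met
    // (e.g. >= 1 for a linestring, >= 2 for a polygon ring).
    static bool check_points(size_t len, size_t pos, uint32_t n_points,
                             size_t point_size, uint32_t min_points)
    {
      if (n_points < min_points)
        return false;
      return (len - pos) / point_size >= n_points;
    }

    int main()
    {
      unsigned char truncated[2]= { 0x01, 0x00 };   // too short to hold a type code
      size_t pos= 0;
      uint32_t type;
      if (!read_type_code(truncated, sizeof(truncated), &pos, &type))
        std::puts("malformed geometry rejected instead of causing an invalid read");
      std::printf("points available: %d\n", check_points(100, 4, 2, 16, 2));
      return 0;
    }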
mysql-test/r/gis.result:
Bug#44684: Test result
mysql-test/t/gis.test:
Bug#44684: Test case
sql/item_geofunc.cc:
Bug#44684: fix(es)
- Check that there are 4 bytes available for type code.
- Check that there is at least one point available for linestring.
- Check that there are at least 2 points in a polygon and
data for all the points.
This patch corrects a mistake in the test case for the patch for bug 43658.
There was a race in the test case when the thread id was retrieved from the
processlist: the same thread id was signalled twice and one thread id wasn't
signalled at all.
The affected platforms appear to be limited to Linux.
mysql-test/r/query_cache_debug.result:
There was a race in the test case when the thread id was retrieved from the processlist.
The result was that the same thread id was signalled twice and one thread id wasn't
signalled at all.
mysql-test/t/query_cache_debug.test:
There was a race in the test case when the thread id was retrieved from the processlist.
The result was that the same thread id was signalled twice and one thread id wasn't
signalled at all.
The assertion in String::copy was added in order to avoid
valgrind errors when the destination was the same as the source.
The restriction is eased to also allow the case when str == NULL.
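The relaxed precondition, in outline (a simplified model of the
assertion; the real one is in String::copy in sql/sql_string.cc):

    #include <cassert>
    #include <cstddef>

    // Copying from a NULL source is now allowed; copying a buffer onto
    // itself is still rejected (that is what caused the valgrind errors).
    static void copy_precondition(const char *str, const char *own_ptr)
    {
      assert(!str || str != own_ptr);
    }

    int main()
    {
      char buf[8]= "abc";
      copy_precondition(NULL, buf);     // the newly permitted case
      copy_precondition("xyz", buf);    // still fine
      return 0;
    }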
mysql-test/r/func_set.result:
Bug#45168: Test result
mysql-test/t/func_set.test:
Bug#45168: Test case
sql/item_strfunc.cc:
Bug#45168: Code cleanup and grammar correction in comment
sql/sql_string.cc:
Bug#45168: Fix
Early patch submitted for discussion.
It is possible for more than one thread to enter the condition
in query_cache_insert(), but the condition predicate is to
signal one thread each time the cache status changes between
the following states: {NO_FLUSH_IN_PROGRESS, FLUSH_IN_PROGRESS,
TABLE_FLUSH_IN_PROGRESS}
Consider three threads THD1, THD2, THD3
THD2: select ... => Got a writer in ::store_query
THD3: select ... => Got a writer in ::store_query
THD1: flush tables => qc status= FLUSH_IN_PROGRESS;
new writers are blocked.
THD2: select ... => Still got a writer and enters cond in
query_cache_insert
THD3: select ... => Still got a writer and enters cond in
query_cache_insert
THD1: flush tables => finished and signal status change.
THD2: select ... => Wakes up and completes the insert.
THD3: select ... => Happily waiting for better times. Why hurry?
This patch is a refactoring of this lock system. It introduces four new methods:
Query_cache::try_lock()
Query_cache::lock()
Query_cache::lock_and_suspend()
Query_cache::unlock()
This change also deprecates wait_while_table_flush_is_in_progress(). All threads are
queued and put on a conditional wait. On each unlock the queue is signalled. This resolves
the issues with left-over threads. To ensure that no threads spend unnecessary
time waiting, a signal broadcast is issued every time a lock is taken before a full
cache flush.
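A condensed sketch of the locking scheme the new methods implement (a
standard mutex/condition-variable pattern; the method names mirror the
commit, the body is illustrative):

    #include <condition_variable>
    #include <mutex>

    class QueryCacheLock
    {
      std::mutex mtx;
      std::condition_variable cond;
      enum State { UNLOCKED, LOCKED_NO_WAIT, LOCKED };
      State state= UNLOCKED;

    public:
      void lock()                        // Query_cache::lock(): queue up and wait
      {
        std::unique_lock<std::mutex> guard(mtx);
        cond.wait(guard, [this] { return state == UNLOCKED; });
        state= LOCKED;
      }

      bool try_lock()                    // Query_cache::try_lock(): fail instead of waiting
      {
        std::lock_guard<std::mutex> guard(mtx);
        if (state != UNLOCKED)           // e.g. held as LOCKED_NO_WAIT during a full flush
          return false;
        state= LOCKED;
        return true;
      }

      void unlock()                      // Query_cache::unlock(): wake all queued waiters
      {
        { std::lock_guard<std::mutex> guard(mtx); state= UNLOCKED; }
        cond.notify_all();               // broadcast so no waiter is left behind
      }
    };

    int main()
    {
      QueryCacheLock qc;
      qc.lock();
      bool got_it= qc.try_lock();        // false while the first lock is held
      qc.unlock();
      return got_it ? 1 : 0;
    }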
mysql-test/r/query_cache_debug.result:
* Added test case for bug43758
mysql-test/t/query_cache_debug.test:
* Added test case for bug43758
sql/sql_cache.cc:
* Replaced calls to wait_while_table_flush_is_in_progress() with
calls to try_lock(), lock_and_suspend() and unlock().
* Renamed enumeration Cache_status to Cache_lock_status.
* Renamed enumeration items to UNLOCKED, LOCKED_NO_WAIT and LOCKED.
If the LOCKED_NO_WAIT lock type is used to lock the query cache, other
threads using try_lock() will fail to acquire the lock.
This is useful if the query cache is temporarily disabled due to
a full table flush.
sql/sql_cache.h:
* Replaced calls to wait_while_table_flush_is_in_progress() with
calls to try_lock(), lock_and_suspend() and unlock().
* Renamed enumeration Cache_status to Cache_lock_status.
* Renamed enumeration items to UNLOCKED, LOCKED_NO_WAIT and LOCKED.
If the LOCKED_NO_WAIT lock type is used to lock the query cache, other
threads using try_lock() will fail to acquire the lock.
This is useful if the query cache is temporarily disabled due to
a full table flush.
The test did not handle log timestamps in which the hour is printed
without a leading zero (ie: times like 2:16:20).
mysql-test/r/log_tables_debug.result:
Update test case result.
mysql-test/t/log_tables_debug.test:
Skip spaces and handle the case when a leading zero is not printed.
Statements were missed from the general log.
A FLUSH LOGS is added to ensure that the log info hits
the file before attempting to process it.
mysql-test/t/log_tables_debug.test:
A FLUSH LOGS is added, and in the event that a match is
not found, <FILE> is reset and the contents of the log
file are dumped for debugging purposes.
The server crashed in a scenario where an index merge is followed by a filesort
and the sort buffer is not big enough for all the sort keys.
In this case the filesort function will read the data to the end through the
index merge quick access method (thus closing the cursor etc.),
but will leave the pointer to the quick select method in place.
It will then create a temporary file to hold the results of the filesort and
will add it as a sort output file (in sort.io_cache).
Note that filesort will copy the original 'sort' structure into an automatic
variable and restore it after it's done.
As a result, when exiting filesort() we have sort.io_cache filled in and
nothing else (the cursors were closed at the end of reading the data
through index merge).
Now create_sort_index() will note that there is a select and will clean it up
(as it has already been used by filesort() to read the data in). While doing that,
a special case in the index merge destructor will clean up sort.io_cache,
assuming it is an output of the index merge method and is not needed anymore.
As a result the code that tries to read the data back from the filesort output
will get no data in both memory and disk and will crash.
Fixed similarly to how filesort() does it: by copying the sort.io_cache structure
to a local variable, removing the pointer to the io_cache (so that it's not freed
by QUICK_INDEX_MERGE_SELECT::~QUICK_INDEX_MERGE_SELECT) and restoring the original
structure (together with the valid pointer) after the cleanup is done.
This is safe because all the structures have already been cleaned up by
hitting the end of the index merge's read method (QUICK_INDEX_MERGE_SELECT::get_next()),
and the cleanup code is written in a way that tolerates repeated cleanups.
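The fix pattern, reduced to a sketch (illustrative structures; the real
change is in create_sort_index() in sql/sql_select.cc): the io_cache
produced by filesort is moved aside before the quick select is
destroyed and put back afterwards.

    #include <cstdio>

    struct IO_CACHE  { bool has_data; };
    struct SORT_INFO { IO_CACHE *io_cache; };

    // The index merge destructor frees any io_cache it still sees,
    // assuming it is its own leftover output.
    struct QuickIndexMerge
    {
      SORT_INFO *sort;
      ~QuickIndexMerge()
      {
        if (sort && sort->io_cache)
        {
          sort->io_cache->has_data= false;   // would destroy the filesort result
          sort->io_cache= NULL;
        }
      }
    };

    int main()
    {
      IO_CACHE filesort_result= { true };
      SORT_INFO sort= { &filesort_result };

      IO_CACHE *saved= sort.io_cache;        // hide the filesort output ...
      sort.io_cache= NULL;
      {
        QuickIndexMerge quick= { &sort };
      }                                      // ... destructor runs, finds nothing to free
      sort.io_cache= saved;                  // ... and restore it afterwards

      std::printf("filesort output still available: %s\n",
                  sort.io_cache && sort.io_cache->has_data ? "yes" : "no");
      return 0;
    }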
mysql-test/r/index_merge.result:
Bug #44810: test case
mysql-test/t/index_merge.test:
Bug #44810: test case
sql/sql_select.cc:
Bug #44810: preserve the io_cache produced by filesort while cleaning up
the index merge quick access method (QUICK_INDEX_MERGE_SELECT).