The assertion in String::copy() was added to avoid valgrind
errors when the destination was the same as the source.
The restriction was then eased to allow the case when str == NULL.
Early patch submitted for discussion.
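A minimal sketch of what the eased check amounts to (Ptr is String's
internal buffer pointer; the exact form of the assertion is an
assumption):

  /* Catch the overlapping-copy case that produced the valgrind
     errors, while allowing a NULL source string. */
  DBUG_ASSERT(!str || str != Ptr);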
It is possible for more than one thread to enter the condition
wait in query_cache_insert(), but the condition variable is
signalled for only one thread each time the cache status changes
between the following states: {NO_FLUSH_IN_PROGRESS,
FLUSH_IN_PROGRESS, TABLE_FLUSH_IN_PROGRESS}.
Consider three threads THD1, THD2, THD3
THD2: select ... => Got a writer in ::store_query
THD3: select ... => Got a writer in ::store_query
THD1: flush tables => qc status= FLUSH_IN_PROGRESS;
new writers are blocked.
THD2: select ... => Still got a writer and enters cond in
query_cache_insert
THD3: select ... => Still got a writer and enters cond in
query_cache_insert
THD1: flush tables => finished and signals status change.
THD2: select ... => Wakes up and completes the insert.
THD3: select ... => Happily waiting for better times. Why hurry?
This patch is a refactoring of this lock system. It introduces four new methods:
Query_cache::try_lock()
Query_cache::lock()
Query_cache::lock_and_suspend()
Query_cache::unlock()
This change also deprecates wait_while_table_flush_is_in_progress(). All threads are
queued and put on a condition wait. On each unlock the queue is signalled. This resolves
the issue with left-over threads. To ensure that no thread spends unnecessary time
waiting, a broadcast is issued every time a lock is taken before a full cache flush.
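A condensed sketch of the new scheme (the mutex and condition member
names here are illustrative; only the four method names above come
from the patch):

  void Query_cache::lock(void)
  {
    pthread_mutex_lock(&structure_guard_mutex);
    /* All waiters queue on one condition; each unlock signals it. */
    while (m_cache_status != Query_cache::UNLOCKED)
      pthread_cond_wait(&COND_cache_lock, &structure_guard_mutex);
    m_cache_status= Query_cache::LOCKED;
    pthread_mutex_unlock(&structure_guard_mutex);
  }

  void Query_cache::unlock(void)
  {
    pthread_mutex_lock(&structure_guard_mutex);
    m_cache_status= Query_cache::UNLOCKED;
    pthread_cond_signal(&COND_cache_lock);   /* wake the next waiter */
    pthread_mutex_unlock(&structure_guard_mutex);
  }

Before a full cache flush, the locking thread would additionally issue
pthread_cond_broadcast() so that no waiter is left sleeping across the
flush.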
crashes server!
The problem affects the scenario when index merge is followed by a filesort
and the sort buffer is not big enough for all the sort keys.
In this case the filesort function will read the data to the end through the
index merge quick access method (thus closing the cursors etc.),
but will leave the pointer to the quick select method in place.
It will then create a temporary file to hold the results of the filesort and
will add it as a sort output file (in sort.io_cache).
Note that filesort will copy the original 'sort' structure into an automatic
variable and restore it after it's done.
As a result, on exit from filesort() we have sort.io_cache filled in and
nothing else (because the cursors were closed at the end of reading the data
through index merge).
Now create_sort_index() will note that there is a select and will clean it up
(as it has already been used by filesort() to read the data in). While doing that,
a special case in the index merge destructor will clean up sort.io_cache,
assuming it is an output of the index merge method and is not needed anymore.
As a result, the code that tries to read the data back from the filesort output
will find no data either in memory or on disk, and will crash.
Fixed similarly to how filesort() does it: by copying the sort.io_cache structure
to a local variable, removing the pointer to the io_cache (so that it is not freed
by QUICK_INDEX_MERGE_SELECT::~QUICK_INDEX_MERGE_SELECT) and restoring the original
structure (together with the valid pointer) after the cleanup is done.
This is safe because all the structures have already been cleaned up by
hitting the end of the index merge's read method (QUICK_INDEX_MERGE_SELECT::get_next()),
and because the cleanup code is written in a way that tolerates repeated cleanups.
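A sketch of the save/restore pattern described above (the surrounding
cleanup calls are illustrative; only the idea of hiding and restoring
sort.io_cache comes from the description):

  /* Hide the filesort result file from the index merge destructor,
     which would otherwise free it as if it were its own output. */
  IO_CACHE *save_io_cache= table->sort.io_cache;
  table->sort.io_cache= 0;

  delete select->quick;    /* runs ~QUICK_INDEX_MERGE_SELECT */
  select->quick= 0;

  /* Restore the valid pointer so the filesort output can be read. */
  table->sort.io_cache= save_io_cache;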
The SQL mode PAD_CHAR_TO_FULL_LENGTH could prevent a DROP USER
statement from removing the privileges associated with the user
being dropped.
What occurred was that reading from the User and Host fields of
the tables tables_priv or columns_priv would yield values padded
with spaces, causing a failure to match a specified user or host
('user' != 'user ').
The solution is to disregard the PAD_CHAR_TO_FULL_LENGTH mode
when iterating over and matching values in the privilege tables
for a DROP USER statement.
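In sketch form, assuming the usual sql_mode bit manipulation (the
exact spot in the ACL code is not shown):

  /* Drop the padding mode while scanning the grant tables so that
     CHAR values compare without trailing spaces. */
  ulong old_sql_mode= thd->variables.sql_mode;
  thd->variables.sql_mode&= ~MODE_PAD_CHAR_TO_FULL_LENGTH;
  /* ... iterate over tables_priv / columns_priv and match entries ... */
  thd->variables.sql_mode= old_sql_mode;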
statements missed from general log
A refinement of the test in the previous patch to avoid
using sleep as a means to ensure that timestamps are
added to the log entries.
WHERE and GROUP BY clause
Loose index scan may use range conditions on the argument of
the MIN/MAX aggregate functions to find the beginning/end of
the interval that satisfies the range conditions in a single go.
These range conditions may have open or closed minimum/maximum
values. When the comparison returns 0 (equal) the code should
check the type of the min/max values of the current interval
and accept or reject the row based on whether the limit is
open or not.
The composite condition that checked this was wrong and did not
work in all cases.
Fixed by simplifying the conditions and reversing the logic.
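Schematically, the corrected check behaves like this (NEAR_MIN and
NEAR_MAX are the range optimizer's open-bound flags; the surrounding
code is an assumption):

  if (cmp == 0)
  {
    /* The key equals the interval's min/max value: accept the row
       only for a closed bound (>=, <=); reject it for an open one. */
    if (cur_range->flag & (NEAR_MIN | NEAR_MAX))
      result= HA_ERR_KEY_NOT_FOUND;   /* open bound: outside interval */
    else
      result= 0;                      /* closed bound: row accepted */
  }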
Merge the test case from 5.0 to 5.1 for BUG#40565
Detailed revision comments:
r5233 | marko | 2009-06-03 15:12:44 +0300 (Wed, 03 Jun 2009) | 11 lines
branches/5.1: Merge the test case from r5232 from branches/5.0:
------------------------------------------------------------------------
r5232 | marko | 2009-06-03 14:31:04 +0300 (Wed, 03 Jun 2009) | 21 lines
branches/5.0: Merge r3590 from branches/5.1 in order to fix Bug #40565
(Update Query Results in "1 Row Affected" But Should Be "Zero Rows").
Also, add a test case for Bug #40565.
rb://128 approved by Heikki Tuuri
------------------------------------------------------------------------
the mysql test suite.
Tests removed:
1. innodb_trx_weight.test
2. innodb_bug35220.test
Include files removed:
1. have_innodb.inc
2. ctype_innodb_like.inc
3. innodb_trx_weight.inc
Also add the missing opt file for the test innodb-use-sys-malloc.test
Created a test suite 'innodb' under mysql-test/suite/innodb for the InnoDB plugin tests.
The test suite 'innodb' contains only tests that are not part of any other mysql-test suite.
In total, 14 test cases are added to the suite.
Note: the patches in storage/innodb_plugin/mysql-test/patches are not applied yet
Range analysis did not request sorted output from the storage engine,
which caused partitioned handlers to process one partition at a time
while reading key prefixes in ascending order, causing values to be
missed. Fixed by always requesting sorted order during range analysis.
This fix was introduced in 6.0 by the fix for bug no. 41136.
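In sketch form, the change amounts to passing sorted=true when the
index scan used by range analysis is initialized (assuming the
handler::ha_index_init(index, sorted) interface; the exact call site
is not shown):

  /* Request sorted output so a partitioned handler merges rows from
     all partitions in key order instead of one partition at a time. */
  if ((error= file->ha_index_init(keynr, 1 /* sorted */)))
    return error;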
This test uses SHOW STATUS and the like, which may be unstable in the face
of logging to table, since the CSV handler is actively executing operations
and thus incrementing the counters.
Fixed by disabling logging to table for the duration of the test and restoring
it afterwards. This causes various counters to properly start counting from zero
and never advance due to CSV operations.
memory issue?
The mysql command line client could misinterpret some character
sequences as commands under some circumstances.
The upper limit for the internal readline buffer was raised to 1 GB
(the same as the server's max_allowed_packet) so that any input
line is processed by add_line() as a whole rather than in chunks.
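In sketch form (the constant name is illustrative):

  /* Upper limit for the readline input buffer: 1 GB, matching the
     server-side maximum for max_allowed_packet, so a whole input
     line fits into a single add_line() call. */
  #define MAX_BATCH_BUFFER_SIZE (1024L * 1024L * 1024L)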
The mysqlbinlog --database parameter was being ignored when processing
row events, so no event filtering would take place.
This patch addresses this by deploying a call to shall_skip_database
when Table_map events are handled (as these also contain the name of
the database). All other row events referencing the table id of the
filtered map event will also be skipped.
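A simplified sketch of the filtering logic (accessor and container
names here are illustrative):

  /* On a Table_map event, consult the --database filter and remember
     the verdict for this table id; later row events that refer to the
     same table id are then skipped as well. */
  Table_map_log_event *map_ev= (Table_map_log_event*) ev;
  if (shall_skip_database(map_ev->get_db()))
  {
    skipped_table_ids.insert(map_ev->get_table_id());
    goto end;   /* do not print this event */
  }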
uninitialized variable used as subscript
A grouping select from a "constant" InnoDB table (a table
of a single row) joined with other tables caused a crash.
The problem is that when an optimization of read-only transactions
(bypassing 2-phase commit) was implemented, it removed the code that
reset the XID once a transaction was no longer active:
sql/sql_parse.cc:
- bzero(&thd->transaction.stmt, sizeof(thd->transaction.stmt));
- if (!thd->active_transaction())
- thd->transaction.xid_state.xid.null();
+ thd->transaction.stmt.reset();
This mostly worked fine, as the transaction commit and rollback
functions (in handler.cc) reset the XID once the transaction is
ended. But those functions would not reset the XID in the case of
an empty transaction, leading to an assertion failure when starting
a new XA transaction.
The solution is to ensure that the XID state is reset when empty
transactions are ended (by either commit or rollback). This is
achieved by reorganizing the code so that the transaction cleanup
routine is invoked whenever a transaction is ended.
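A sketch of the reorganized flow (the wrapper function is
illustrative; the key point is that the cleanup, which nulls the XID,
now runs even for empty transactions):

  void trans_end(THD *thd, bool commit)   /* illustrative wrapper */
  {
    if (commit)
      ha_commit_trans(thd, TRUE);
    else
      ha_rollback_trans(thd, TRUE);
    /* Always run the cleanup, even if no engine participated, so
       the stale xid_state cannot leak into the next XA transaction. */
    thd->transaction.cleanup();           /* calls xid_state.xid.null() */
  }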