* Finished Monty and Jani's merge
* Some InnoDB tests still fail (because it's old xtradb code run against a
newer testsuite). They are expected to go away after merging with the latest
xtradb.
The external 'for' loop in remove_dup_with_compare() handled
HA_ERR_RECORD_DELETED by just starting over without advancing
to the next record, which caused an infinite loop.
This condition could be triggered on certain data by a SELECT
query containing DISTINCT, GROUP BY and HAVING clauses.
Fixed remove_dup_with_compare() so that we always advance to
the next record when receiving HA_ERR_RECORD_DELETED from
rnd_next().
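To make the failure mode concrete, here is a minimal, self-contained C++ sketch
of the scan pattern described above. All names (toy_scan, read(),
SCAN_RECORD_DELETED) are stand-ins and this is not the actual sql_select.cc or
handler code; the point is only that a "record deleted" status must still
advance the scan position before retrying.

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    enum scan_status { SCAN_OK, SCAN_RECORD_DELETED, SCAN_END_OF_FILE };

    struct toy_scan {
      std::vector<int> rows{1, -1, 2, -1, 3};   // negative value == deleted row
      std::size_t pos = 0;

      scan_status read(int *out) {              // reads the current slot only
        if (pos >= rows.size()) return SCAN_END_OF_FILE;
        if (rows[pos] < 0) return SCAN_RECORD_DELETED;
        *out = rows[pos];
        return SCAN_OK;
      }
    };

    int main() {
      toy_scan scan;
      int row;
      for (;;) {
        scan_status rc = scan.read(&row);
        if (rc == SCAN_END_OF_FILE) break;
        if (rc == SCAN_RECORD_DELETED) {
          scan.pos++;        // the fix: advance past the deleted record;
          continue;          // retrying without the advance loops forever
        }
        std::printf("kept row %d\n", row);
        scan.pos++;
      }
      return 0;
    }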
mysql-test/r/distinct.result:
Added a test case for bug #46159.
mysql-test/t/distinct.test:
Added a test case for bug #46159.
sql/sql_select.cc:
Fixed remove_dup_with_compare() so that we always advance to
the next record when receiving HA_ERR_RECORD_DELETED from
rnd_next().
Memory allocated in TMP_TABLE_PARAM::copy_field is not cleaned up.
The fix is to clean up the TMP_TABLE_PARAM::copy_field array in JOIN::destroy.
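As a hedged illustration of the leak pattern and the fix, here is a simplified
C++ sketch; Tmp_table_param_sketch, Join_sketch and their members are stand-ins
and not the server's actual JOIN / TMP_TABLE_PARAM definitions.

    struct Copy_field_sketch { /* per-column copy state */ };

    struct Tmp_table_param_sketch {
      Copy_field_sketch *copy_field = nullptr;  // array allocated during tmp-table setup
      void cleanup() {                          // must run on every exit path
        delete[] copy_field;
        copy_field = nullptr;
      }
    };

    struct Join_sketch {
      Tmp_table_param_sketch tmp_table_param;
      void destroy() {
        // the reported leak: destroy() returned without releasing copy_field;
        // the fix is to always clean it up here
        tmp_table_param.cleanup();
      }
    };

    int main() {
      Join_sketch join;
      join.tmp_table_param.copy_field = new Copy_field_sketch[4];
      join.destroy();                           // no leak: the array is freed
      return 0;
    }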
mysql-test/r/explain.result:
test result
mysql-test/t/explain.test:
test case
sql/sql_select.cc:
Memory allocated in TMP_TABLE_PARAM::copy_field is not cleaned up.
The fix is to clean up TMP_TABLE_PARAM::copy_field array in JOIN::destroy.
function,file sql_base.cc
When uncacheable queries are written to a temp table the optimizer must
preserve the original JOIN structure, because it is re-using the JOIN
structure to read from the resulting temporary table.
This was done only for uncacheable sub-queries.
But top-level queries can also benefit from this mechanism, especially if
they're using index access and need a reset.
Fixed by not limiting the saving of JOIN structure to subqueries
exclusively.
Added a new test file to extend the existing (large) subquery.test.
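A compact C++ sketch of the rule change described above (query_block_sketch and
its flags are illustrative stand-ins, not the server's actual structures): the
decision to keep the saved JOIN should depend on the query being uncacheable,
not on it being a subquery.

    #include <cassert>

    struct query_block_sketch {
      bool uncacheable;     // e.g. contains non-deterministic functions
      bool is_subquery;
    };

    // old rule: only uncacheable sub-queries kept the saved JOIN structure
    static bool keep_saved_join_old(const query_block_sketch &q)
    { return q.uncacheable && q.is_subquery; }

    // new rule: any uncacheable query materialized into a tmp table keeps it,
    // so a top-level query using index access can be reset and re-read
    static bool keep_saved_join_new(const query_block_sketch &q)
    { return q.uncacheable; }

    int main() {
      query_block_sketch top_level = {true, false};
      assert(!keep_saved_join_old(top_level));   // old behaviour: structure lost
      assert(keep_saved_join_new(top_level));    // fixed behaviour: structure kept
      return 0;
    }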
Added (rewritten) patch from Percona to get extended statistics in slow.log:
- Added handling of 'set' variables to set_var.cc. Changed sql_mode to use this
- Added extra logging to slow log of 'Thread_id, Schema, Query Cache hit, Rows sent and Rows examined'
- Added optional logging to slow log, through log_slow_verbosity, of query plan statistics
- Added new user variables log_slow_rate_limit, log_slow_verbosity, log_slow_filter
- Added log-slow-file as synonym for 'slow-log-file', as most slow-log variables start with 'log-slow'
- Added log-slow-time as synonym for long-query-time
Some trivial MyISAM optimizations:
- In prepare for drop, flush key blocks
- Don't call mi_lock_database if my_disable_locking is used
******
Automatic merge with trunk
******
Updated documentation files to reflect MariaDB and not the Maria storage engine or MySQL
KNOWN_BUGS.txt:
Updated file to reflect MariaDB and not the Maria storage engine
README:
Updated file to reflect MariaDB
mysql-test/r/log_slow.result:
Test new options for slow query log
mysql-test/r/variables.result:
Updated result (old version cut off things at 79 characters)
mysql-test/t/log_slow.test:
Test new options for slow query log
sql/Makefile.am:
Added log_slow.h
sql/event_data_objects.cc:
Removed unneeded test for enable_slow_log (it is done when the flag is tested elsewhere)
sql/events.cc:
Use the general make_set() function instead of 'symbolic_mode_representation'
sql/filesort.cc:
Added status for used query plans
sql/log.cc:
Reset counters if no query_length (from Percona's patch; Not sure if needed, but can do no harm)
Added extra logging to slow log of 'Thread_id, Schema, Query Cache hit, Rows sent and Rows examined'
Added optional logging to slow log, through log_slow_verbosity, of query plan statistics
Fixed wrong test of error condition
sql/log_slow.h:
Defines and variables for log_slow_verbosity and log_slow_filter
sql/mysql_priv.h:
Include log_slow.h
sql/mysqld.cc:
Added new user variables log_slow_rate_limit, log_slow_verbosity, log_slow_filter
Added log-slow-file as synonym for 'slow-log-file', as most slow-log variables start with 'log-slow'
Added log-slow-time as synonym for long-query-time
Added note that one should use log-slow-filter instead of log-slow-admin-statements
Updated comment from 'slow_query_log_file'
sql/set_var.cc:
Added log_slow_time as synonym for long_query_time
Added new user variables log_slow_rate_limit, log_slow_verbosity, log_slow_filter
Added handling of 'set' variables to set_var.cc. Changed sql_mode to use this
sql/set_var.h:
- Added handling of 'set' variables. Changed sql_mode to use this
sql/slave.cc:
Use global filter also for slaves
sql/sp_head.cc:
Simplify saving of general_slow_log state
Use the general make_set() function instead of 'symbolic_mode_representation'
sql/sql_cache.cc:
Added status for used query plans
sql/sql_class.cc:
Remember/restore query_plan_flags over complex statements
sql/sql_class.h:
Added variables to handle extended slow log statistics
sql/sql_parse.cc:
Added status for used query plans
Added test for filtering slow_query_log
sql/sql_select.cc:
Added status for used query plans
sql/sql_show.cc:
Use the general make_set() function instead of 'symbolic_mode_representation'
sql/strfunc.cc:
Report first error (not last) if something is wrong in a set
Removed compiler warning
storage/myisam/mi_extra.c:
In prepare for drop, flush key blocks (speed optimization)
storage/myisam/mi_locking.c:
Don't call mi_lock_database if my_disable_locking is used (speed optimization)
field references
This error requires a combination of factors:
1. An "impossible where" in the outermost SELECT
2. An aggregate in the outermost SELECT
3. A correlated subquery with a WHERE clause that includes an outer
field reference as a top level WHERE sargable predicate
When JOIN::optimize detects an "impossible WHERE" it bails out
without doing the rest of the work and initializations. In particular
it does not call make_join_statistics(), which fills in various
structures for each table referenced.
When processing the result of the "impossible WHERE" the query must
send a single row of data if there are aggregate functions in it.
In this case the server marks all the aggregates as having received
no rows and calls the relevant Item::val_xxx() method on the SELECT
list. However, if this SELECT list happens to contain a correlated
subquery, that subquery is evaluated in the normal evaluation mode.
If this correlated subquery has a reference to a field from the
outermost "impossible WHERE" SELECT, add_key_fields() will mistakenly
consider the outer field reference as a "local" field reference when
looking for sargable predicates.
But since the SELECT that the outer field reference refers to is not
completely initialized due to the "impossible WHERE" at this level,
we'll get a NULL pointer dereference.
Fixed by making a better condition for discovering if a field is "local"
to the SELECT level being processed.
It's not enough to look for OUTER_REF_TABLE_BIT in this case since
for outer references to constant tables the Item_field::used_tables()
will return 0 regardless of whether the field reference is from the
local SELECT or not.
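A simplified, self-contained C++ model of the check described above;
field_ref_sketch, resolved_in_level and the bit value are assumptions for
illustration, not the actual Item_field code. It shows why the used_tables()
bit alone is not enough: an outer reference to a constant table also reports 0.

    #include <cassert>

    typedef unsigned long long table_map;
    static const table_map OUTER_REF_TABLE_BIT_SKETCH = 1ULL << 63;  // stand-in bit

    struct field_ref_sketch {
      table_map used_tables;     // 0 for fields of constant tables
      int resolved_in_level;     // SELECT nesting level the field belongs to
    };

    // old check: misclassifies outer references to constant tables as local
    static bool is_local_old(const field_ref_sketch &f, int /*current_level*/)
    { return !(f.used_tables & OUTER_REF_TABLE_BIT_SKETCH); }

    // fixed check: a field is local only if it was resolved in the SELECT
    // level currently being optimized
    static bool is_local_new(const field_ref_sketch &f, int current_level)
    {
      return !(f.used_tables & OUTER_REF_TABLE_BIT_SKETCH) &&
             f.resolved_in_level == current_level;
    }

    int main() {
      field_ref_sketch outer_const_ref = {0 /* const table */, 0 /* outer SELECT */};
      assert(is_local_old(outer_const_ref, 1));    // wrong: treated as local
      assert(!is_local_new(outer_const_ref, 1));   // right: recognized as outer
      return 0;
    }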
The problem was that creating a DECIMAL column from a decimal
value could lead to a failed assertion as decimal values can
have a higher precision than those attached to a table. The
assert could be triggered by creating a table from a decimal
with a large (> 30) scale. Also, there was a problem in
calculating the number of digits in the integral and fractional
parts if both exceeded the maximum number of digits permitted
by the new decimal type.
The solution is to ensure that the truncation procedure is executed
when deducing a DECIMAL column from a decimal value of higher
precision. If the integer part is equal to or bigger than the
maximum precision for the DECIMAL type (65), the integer part
is truncated to fit and the fractional part becomes zero. Otherwise,
the fractional part is truncated to fit into the space left
after the integer part is copied.
This patch borrows code and ideas from Martin Hansson's patch.
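The truncation rule above can be shown with a small worked C++ sketch
(fit_to_column() and decimal_shape are illustrative names; the only fact taken
from the text is the maximum precision of 65).

    #include <algorithm>
    #include <cstdio>

    struct decimal_shape { int int_digits; int frac_digits; };

    static decimal_shape fit_to_column(int int_digits, int frac_digits) {
      const int MAX_PRECISION = 65;
      if (int_digits >= MAX_PRECISION)
        return { MAX_PRECISION, 0 };          // whole budget goes to the integer part
      int frac = std::min(frac_digits, MAX_PRECISION - int_digits);
      return { int_digits, frac };            // fractional part truncated to fit
    }

    int main() {
      decimal_shape a = fit_to_column(70, 10);   // -> (65, 0)
      decimal_shape b = fit_to_column(40, 35);   // -> (40, 25)
      std::printf("(%d,%d) (%d,%d)\n", a.int_digits, a.frac_digits,
                  b.int_digits, b.frac_digits);
      return 0;
    }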
mysql-test/r/type_newdecimal.result:
Add test case result for Bug#45261. Also, update test case to
reflect that an additive operation increases the precision of
the resulting type by 1.
mysql-test/t/type_newdecimal.test:
Add test case for Bug#45261
sql/field.cc:
Added DBUG_ASSERT to ensure object's invariant is maintained.
Implement method to create a field to hold a decimal value
from an item.
sql/field.h:
Explain member variable. Add method to create a new decimal field.
sql/item.cc:
The precision should only be capped when storing the value
in a table. Also, this makes it impossible to calculate the
integer part if Item::decimals (the scale) is larger than the
precision.
sql/item.h:
Simplify calculation of integer part.
sql/item_cmpfunc.cc:
Do not limit the precision. It will be capped later.
sql/item_func.cc:
Use new method for allocating a new decimal field.
Add a specialized method for retrieving the precision
of a user variable item.
sql/item_func.h:
Add method to return the precision of a user variable.
sql/item_sum.cc:
Use new method for allocating a new decimal field.
sql/my_decimal.h:
The integer part could be improperly calculated for a decimal
with 31 digits in the fractional part.
sql/sql_select.cc:
Use new method which truncates the integer or decimal parts
as needed.
Problem 1:
When the 'Using index' optimization is used, the optimizer may still - after
cost-based optimization - decide to use another index in order to avoid using
a temporary table. But when this happened, the flag telling the storage engine
to read the index only (not the table) was left set. Fixed by resetting the flag
in the storage engine and the TABLE structure in the above scenario, unless the
new index allows for the same optimization.
Problem 2:
When a 'ref' access method was chosen by the cost-based optimizer (when the
column is non-NULLable), it was assumed that the 'quick' access methods needed
no initialization (since they are based on range scans). When the ORDER BY
optimization overrides that decision, however, it expects 'quick' to be
initialized and hence crashes.
Fixed in 5.1 (already fixed in 6.0) by initializing 'quick' even when 'ref'
access is used.
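A hedged C++ sketch of the flag handling for problem 1 (index_sketch, covers()
and switch_index() are illustrative, not the actual make_join_select() code):
when the ordering optimization replaces the chosen index, the "read index only"
hint must be cleared unless the new index also covers every column the query
needs.

    #include <set>

    struct index_sketch { std::set<int> columns; };      // column ids in the index

    static bool covers(const index_sketch &idx, const std::set<int> &needed) {
      for (int col : needed)
        if (!idx.columns.count(col)) return false;
      return true;
    }

    static void switch_index(bool &keyread, const index_sketch &new_idx,
                             const std::set<int> &needed_columns) {
      // the bug: this reset was missing, so the engine kept reading only the
      // index even though the new index lacked some needed columns
      if (keyread && !covers(new_idx, needed_columns))
        keyread = false;
    }

    int main() {
      bool keyread = true;                      // index-only reads were enabled
      index_sketch new_idx = { {1, 2} };        // hypothetical column ids
      std::set<int> needed = {1, 2, 3};         // the query also needs column 3
      switch_index(keyread, new_idx, needed);
      return keyread ? 1 : 0;                   // returns 0: flag correctly cleared
    }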
mysql-test/r/order_by.result:
Bug#46454: Test result.
mysql-test/t/order_by.test:
Bug#46454: Test case.
sql/sql_select.cc:
Bug#46454:
Problem 1 fixed in make_join_select()
Problem 2 fixed in test_if_skip_sort_order()
sql/table.h:
Bug#46454: Added comment to field.
bzr branch mysql-5.1-performance-version mysql-trunk # Summit
cd mysql-trunk
bzr merge mysql-5.1-innodb_plugin # which is 5.1 + Innodb plugin
bzr rm innobase # remove the builtin
Next step: build, test fixes.
In create_myisam_from_heap() mark all errors as fatal except
HA_ERR_RECORD_FILE_FULL for a HEAP table.
Not doing so could lead to problems, e.g. when a temporary MyISAM table gets
overrun due to its MAX_ROWS limit while executing INSERT/REPLACE IGNORE ... SELECT.
The SELECT execution was aborted, but the error was converted to a warning due
to the IGNORE clause, so neither an 'ok' nor an 'error' packet could be sent
back to the client. This condition led to a hanging client when using a 5.0
server, or an assertion failure in 5.1.
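A hedged C++ sketch of the error classification described above; the constant
and function names are stand-ins rather than the real my_base.h /
create_myisam_from_heap() code.

    #include <cstdio>

    static const int SKETCH_ERR_RECORD_FILE_FULL = 1;   // stands in for HA_ERR_RECORD_FILE_FULL

    // Every error raised while converting the in-memory HEAP table to MyISAM is
    // treated as fatal, except the HEAP "table is full" condition that the
    // conversion exists to handle.  A fatal error can no longer be downgraded to
    // a warning by INSERT/REPLACE IGNORE, so the client always gets an error packet.
    static bool error_is_fatal(int error, bool raised_by_heap_table) {
      return !(raised_by_heap_table && error == SKETCH_ERR_RECORD_FILE_FULL);
    }

    int main() {
      std::printf("%d %d\n",
                  error_is_fatal(SKETCH_ERR_RECORD_FILE_FULL, true),   // 0: recoverable
                  error_is_fatal(2 /* any other error */, true));      // 1: fatal
      return 0;
    }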
mysql-test/r/insert_select.result:
Added a test case for bug #46075.
mysql-test/t/insert_select.test:
Added a test case for bug #46075.
sql/sql_select.cc:
In create_myisam_from_heap() mark all errors as fatal except
HA_ERR_RECORD_FILE_FULL for a HEAP table.
use partial primary key if another index can prevent filesort
The fix for bug #28404 causes covering ordering indexes to be
preferred unconditionally over non-covering and ref indexes.
Fixed by comparing the cost of using a covering index to the cost of
using a ref index even for covering ordering indexes.
Added an assertion to clarify the condition the local variables should
be in.
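A toy C++ version of the decision described above (access_cost_sketch and
prefer_ordering_index() are illustrative; this is not the server's cost model):
even when the ordering index is covering, its scan cost is still compared
against the cost of the ref access it would replace.

    #include <cstdio>

    struct access_cost_sketch { double rows_to_read; };

    // before the fix the decision was effectively "a covering ordering index wins";
    // after the fix the two costs are compared even for covering ordering indexes
    static bool prefer_ordering_index(access_cost_sketch ordering_scan,
                                      access_cost_sketch ref_access) {
      return ordering_scan.rows_to_read < ref_access.rows_to_read;
    }

    int main() {
      access_cost_sketch ordering_scan = {100000};   // long scan along the ordering index
      access_cost_sketch ref_access    = {10};       // a few rows through the ref lookup
      std::printf("%s\n", prefer_ordering_index(ordering_scan, ref_access)
                          ? "ordering index" : "ref access");       // prints "ref access"
      return 0;
    }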
mysql-test/include/mix1.inc:
Bug #36259: fixed a non-stable test case
mysql-test/r/innodb_mysql.result:
Bug #36259 and #45828: test case
mysql-test/t/innodb_mysql.test:
Bug #36259 and #45828: test case
sql/sql_select.cc:
Bug #36259 and #45828: don't consider covering indexes superior to
ref keys.