Fixed an incomplete historical ALTER TABLE MODIFY that trimmed the Trigger
privilege bit from the mysql.tables_priv.Table_priv column.
Removed the duplicate ALTER TABLE MODIFY.
Test suite added.
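A quick sanity check (illustrative only; the expected SET members are assumed
from the 5.1 privilege-table layout rather than quoted from this patch):

  -- The Type of the column should still list the Trigger bit after upgrade:
  SHOW COLUMNS FROM mysql.tables_priv LIKE 'Table_priv';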
The problem is that QUICK_SELECT_DESC behaviour depends
on the used_key_parts value, which can be bigger than the selected
best_key_parts value if the engine supports clustered keys.
However, used_key_parts was overwritten with the best_key_parts
value, which prevented correct selection of the index
access method. The fix is to preserve the used_key_parts
value for later use in QUICK_SELECT_DESC.
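A hypothetical query shape that exercises this path (table, column and index
names are illustrative, not taken from the actual test case):

  -- Secondary index on an InnoDB table; InnoDB extends it with the
  -- clustered primary key, so used_key_parts can exceed best_key_parts.
  CREATE TABLE t1 (
    pk INT PRIMARY KEY,
    a  INT,
    KEY k_a (a)
  ) ENGINE=InnoDB;

  -- Reverse-ordered range read that should use QUICK_SELECT_DESC:
  SELECT * FROM t1 WHERE a BETWEEN 1 AND 10 ORDER BY a DESC, pk DESC;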
mysql-test/r/innodb_mysql.result:
test case
mysql-test/t/innodb_mysql.test:
test case
sql/sql_select.cc:
preserve used_key_parts value for further use in QUICK_SELECT_DESC
5.3 had
handler::index_only_read_time(uint keynr, double records)
while 5.2 got:
handler::keyread_read_time(uint index, uint ranges, ha_rows rows)
which causes the rows parameter to be floor()'ed, making all subsequent cost calculations differ.
This deadlock happened if DROP DATABASE was blocked due to an open
HANDLER table from a different connection. While DROP DATABASE
is blocked, it holds the LOCK_mysql_create_db mutex. This results
in a deadlock if the connection with the open HANDLER table tries
to execute a CREATE/ALTER/DROP DATABASE statement as they all
try to acquire LOCK_mysql_create_db.
This patch makes this deadlock scenario very unlikely by closing and
marking for re-open all HANDLER tables for which there are pending
conflicting locks, before LOCK_mysql_create_db is acquired.
However, there is still a very slight possibility that a connection
could access one of these HANDLER tables between closing/marking for
re-open and the acquisition of LOCK_mysql_create_db.
This patch is for 5.1 only, a separate and complete fix will be
made for 5.5+.
Test case added to schema.test.
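The scenario, sketched with two connections (database and table names are
illustrative):

  -- Connection 1: keep a HANDLER open on a table in db1.
  USE db1;
  HANDLER t1 OPEN;

  -- Connection 2: blocks on the open HANDLER while holding
  -- LOCK_mysql_create_db.
  DROP DATABASE db1;

  -- Connection 1: also tries to acquire LOCK_mysql_create_db,
  -- which deadlocked before this patch.
  CREATE DATABASE db2;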
returns nothing
When looking for table or database names inside INFORMATION_SCHEMA
we must convert the table and database names to lowercase (just as it's
done in the rest of the server) when lower_case_table_names is non-zero.
This allows us to find the same tables that we would find if there
were no condition.
Fixed by converting to lowercase when extracting the database and
table name conditions.
Test case added.
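For example (illustrative only; assumes lower_case_table_names=1 so that the
table was stored as 'mytable' on disk):

  -- Before the fix this returned an empty result, because the condition
  -- value 'MyTable' was compared against the lowercased stored name.
  SELECT table_name
  FROM information_schema.tables
  WHERE table_schema = 'test' AND table_name = 'MyTable';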
During creation of the list of processed tables,
the hidden I_S table 'VARIABLES'
is erroneously added to the table list.
This leads to an ER_UNKNOWN_TABLE error in the
TABLE_LIST::add_table_to_list() function.
The fix is to skip adding hidden I_S
tables to the table list.
mysql-test/r/information_schema.result:
test case
mysql-test/t/information_schema.test:
test case
sql/sql_show.cc:
Skip adding hidden I_S tables to the table list.
require O(#scans) memory
When an index merge operation was restarted, it would
re-allocate the Unique object controlling the duplicate row
ID elimination. Fixed by making the Unique object a member
of QUICK_INDEX_MERGE_SELECT and thus reusing it throughout
the lifetime of this object.
Fix for bug #54459: Assertion failed: param.sort_length,
file .\filesort.cc, line 149 (part II)
Problem: the server didn't disregard the sort order
for some zero-length tuples.
Fix: skip the sort order in such a case
(zero-length NOT NULL string functions).
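A hypothetical query of this shape (not the actual test case), where the ORDER
BY expression is a NOT NULL string function whose result has zero length:

  CREATE TABLE t1 (a CHAR(1) NOT NULL);
  INSERT INTO t1 VALUES ('a'), ('b');
  -- SUBSTR(a, 1, 0) is NOT NULL and always zero length, so the sort
  -- order can be disregarded entirely.
  SELECT a FROM t1 ORDER BY SUBSTR(a, 1, 0);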
mysql-test/r/select.result:
Fix for bug #54459: Assertion failed: param.sort_length,
file .\filesort.cc, line 149 (part II)
- test result.
mysql-test/t/select.test:
Fix for bug #54459: Assertion failed: param.sort_length,
file .\filesort.cc, line 149 (part II)
- test case.
sql/sql_select.cc:
Fix for bug #54459: Assertion failed: param.sort_length,
file .\filesort.cc, line 149 (part II)
- disregard sort order for zero length NOT NULL string functions
along with zero length NOT NULL fields.
and reverse() function
Three problems fixed:
1. The reported problem: caused by incorrectly parsing
the file as ucs2 data, resulting in a wrong length of the parsed
string. Fixed by truncating the invalid trailing bytes
(incomplete multibyte characters) when reading from the file.
2. LOAD DATA, when reading from a proper ucs2 file, wasn't
recognizing the new-line characters. Fixed by first checking whether
a byte is a new-line (or any other special) character before
reading it as part of a multibyte character.
3. When using user variables to hold the column data in LOAD
DATA, the character set of the user variable was set incorrectly
to the database charset. Fixed by setting it to the charset
specified by LOAD DATA (if any); see the sketch below.
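A sketch of the third scenario (file name and columns are hypothetical;
LOAD DATA ... CHARACTER SET and the user-variable form are standard syntax):

  -- The user variable @v must use the ucs2 charset given to LOAD DATA,
  -- not the database charset.
  LOAD DATA INFILE 'data.ucs2' INTO TABLE t1
    CHARACTER SET ucs2
    (a, @v)
    SET b = REVERSE(@v);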
The problem is that the constant parts of a HAVING condition are
evaluated even though the condition references aggregate
functions. Table t1 becomes a const table after
make_join_statistics() and, with t1.pk = 1, the HAVING clause is
transformed into MAX(1) < 7 and taken away from HAVING.
The fix is to skip evaluation of the constant parts of HAVING if
the HAVING condition references aggregate functions.
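A sketch of the problematic shape (illustrative, not the actual test case):

  CREATE TABLE t1 (pk INT PRIMARY KEY, a INT);
  INSERT INTO t1 VALUES (1, 10);
  -- t1 is a const table (pk = 1), so before the fix the HAVING clause
  -- was folded to MAX(1) < 7 and evaluated without the aggregation.
  SELECT MAX(pk) FROM t1 WHERE pk = 1 HAVING MAX(pk) < 7;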
mysql-test/r/having.result:
test case
mysql-test/t/having.test:
test case
sql/sql_select.cc:
skip evaluation of the constant parts of HAVING if the
HAVING condition references aggregate functions.
Incorrect handling of NULL arguments could lead to a crash on
the IN or CASE operations when either NULL arguments were
passed explicitly as arguments (IN) or implicitly generated by
the WITH ROLLUP modifier (both IN and CASE).
Item_func_case::find_item() assumed all necessary comparators
to be instantiated in fix_length_and_dec(). However, in the
presence of WITH ROLLUP modifier, arguments could be
substituted with an Item_null leading to an "unexpected"
STRING_RESULT comparator being invoked.
In addition to the problem identical to the above,
Item_func_in::val_int() could crash even with explicitly passed
NULL arguments due to an optimization in fix_length_and_dec()
leading to NULL arguments being ignored during comparators
creation.
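A sketch of the ROLLUP shape (illustrative, not the actual test cases):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1), (2);
  -- WITH ROLLUP substitutes NULL for the grouped column in the extra row,
  -- which reaches the CASE/IN comparators as an Item_null argument.
  SELECT a, CASE a WHEN 1 THEN 'one' ELSE 'other' END FROM t1
    GROUP BY a WITH ROLLUP;
  SELECT a, a IN (1, 2) FROM t1 GROUP BY a WITH ROLLUP;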
mysql-test/r/func_in.result:
Test cases for bug#54477.
mysql-test/t/func_in.test:
Test cases for bug#54477.
sql/item_cmpfunc.cc:
Added additional checks for Item_nulls in
Item_func_case::find_item() and Item_func_in::val_int().
During the record search it is not taken into account
that the initial quick->file->ref value may be inapplicable
to the range interval. After the proper row is found, this value is
stored into the record buffer and later the record is
filtered out at the condition evaluation stage.
The fix is to store a reference to the found row in the handler's ref field.
mysql-test/r/innodb_mysql.result:
test case
mysql-test/std_data/intersect-bug50389.tsv:
test case
mysql-test/t/innodb_mysql.test:
test case
sql/opt_range.cc:
Store a reference to the found row in the handler's ref field.
Problem: a flaw (dereferencing a NULL pointer) in the LIKE optimization
code may lead to a server crash in some rare cases.
Fix: check the pointer before dereferencing it.
mysql-test/r/func_like.result:
Fix for bug #54575: crash when joining tables with unique set column
- test result.
mysql-test/t/func_like.test:
Fix for bug #54575: crash when joining tables with unique set column
- test case.
sql/item_cmpfunc.cc:
Fix for bug #54575: crash when joining tables with unique set column
- check the res2 buffer pointer before dereferencing it,
as it may be NULL in some cases.
- Changed default recovery mode from OFF to NORMAL to get automatic repair of tables that were not properly closed (see the note after this list).
- Fixed a race condition when two threads call external_lock() and thr_lock() in different order. When this happened, the transaction that called external_lock() first
and thr_lock() last did not see the rows from the other transaction, even if it had to wait in thr_lock() for the other to complete.
- Fixed that one can run maria_chk on automatically recovered tables without warnings about too small a transaction id.
- Don't give a warning that a crashed table could not be repaired if repair was disabled (and thus not run).
- Fixed an error result from flush_key_cache() which caused a DBUG_ASSERT() when concurrent reads were used on non-transactional tables that were being updated.
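As a quick reference, the recovery mode in question is exposed as a server
variable (variable name assumed from the Maria documentation of this release,
not quoted from the patch):

  -- Should now report NORMAL rather than OFF by default.
  SHOW VARIABLES LIKE 'maria_recover';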
client/mysqldump.c:
Add "" around error message to make it more readable
client/mysqltest.cc:
Free environment variables
mysql-test/r/mysqldump.result:
Updated results
mysql-test/r/openssl_1.result:
Updated results
mysql-test/suite/maria/r/maria-recover.result:
Updated results
mysql-test/suite/maria/r/maria3.result:
Updated results
mysql-test/suite/maria/t/maria3.test:
Added more test of temporary tables
storage/maria/ha_maria.cc:
Changed default recovery mode from OFF to NORMAL to get automatic repair of tables that were not properly closed.
Start transaction in ma_block_get_status() instead of in ha_maria::external_lock().
- This fixes a race condition when two threads call external_lock() and thr_lock() in different order. When this happened, the transaction that called external_lock() first and thr_lock() last did not see the rows from the other transaction, even if it had to wait in thr_lock() for the other to complete.
Store the latest transaction id in the control file if recovery was done.
- This allows one to run maria_chk on automatically recovered tables without warnings about too small a transaction id.
storage/maria/ha_maria.h:
Don't give a warning that a crashed table could not be repaired if repair was disabled (and thus not run).
storage/maria/ma_blockrec.h:
Added new function "_ma_block_get_status_no_versioning()"
storage/maria/ma_init.c:
Added hook to create trn in ma_block_get_status() if we are using MariaDB
storage/maria/ma_open.c:
Ensure we call _ma_block_get_status_no_versioning() for transactional tables without versioning (like tables with fulltext)
storage/maria/ma_pagecache.c:
Allow one to flush blocks that are pinned for read.
This fixes an error result from flush_key_cache() which caused a DBUG_ASSERT() when concurrent reads were used on non-transactional tables that were being updated.
storage/maria/ma_recovery.c:
Set maria_recovery_changed_data to 1 if recovery changed something.
Set max_trid_in_control_file to the maximum trid found if we found a bigger one.
This ensures that the control file is up to date after recovery, which allows one to run maria_chk on the tables without warnings about too big a transaction id.
storage/maria/ma_state.c:
Call maria_create_trn_hook() in _ma_setup_live_state() instead of in ha_maria::external_lock().
This ensures that 'state' and trn are in sync, and thus fixes the race condition mentioned for ha_maria.cc.
storage/maria/ma_static.c:
Added maria_create_trn_hook() and maria_recovery_changed_data
storage/maria/maria_def.h:
Added MARIA_HANDLER->external_ptr, which is used to hold MariaDB thd.
Added some new external variables
Removed reference to the non-existent function maria_concurrent_inserts()
Item*) at opt_sum.cc:305
Queries applying MIN/MAX functions to indexed columns are
optimized to read directly from the index if all key parts
of the index preceding the aggregated key part are bound to
constants by the WHERE clause. A prefix length is also
produced, equal to the total length of the bound key
parts. If the aggregated column itself is bound to a
constant, however, it is also included in the prefix.
Such full search keys are read as closed intervals for
reasons beyond the scope of this bug. However, the procedure
missed one case where a key part meant for use as range
endpoint was being overwritten with a NULL value destined
for equality checking. In this case the key part was
overwritten but the range flag remained, causing open
interval reading to be performed.
The bug was fixed by adding more stringent checking to the
search-key building procedure (matching_cond()), never
allowing range predicates to be overwritten by non-range
predicates.
An assertion was added to make sure open intervals are never
used with full search keys.
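A hypothetical query of the shape this code path handles (illustrative only;
the aggregated key part itself is bound, here to NULL):

  CREATE TABLE t1 (a INT, b INT, KEY k (a, b));
  -- Both key parts are bound; the binding of b to NULL is an equality-style
  -- check and must not clobber a range endpoint while the open-interval
  -- flag is left in place.
  SELECT MIN(b) FROM t1 WHERE a = 1 AND b IS NULL;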
* fully support --mysqld=--plugin-load=xxxx
* uniformly support all loadable plugins, no need to hard-code
every new plugin in mtr
* autodetect MTR_VS_CONFIG on Windows
Problem: the server missed the fact that one can read from
two indexes alternately using the HANDLER interface.
Fix: check whether the same (initialized) index is used when
reading next/prev values from the index.
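A sketch of the alternating-index access (table and index names are
illustrative):

  CREATE TABLE t1 (a INT, b INT, KEY k_a (a), KEY k_b (b)) ENGINE=MyISAM;
  INSERT INTO t1 VALUES (1, 2), (3, 4);
  HANDLER t1 OPEN;
  HANDLER t1 READ k_a FIRST;
  -- Reading NEXT on a different, not yet initialized index hit the
  -- assertion before the fix.
  HANDLER t1 READ k_b NEXT;
  HANDLER t1 CLOSE;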
mysql-test/r/handler_myisam.result:
Fix for bug #54007: assert in ha_myisam::index_next, HANDLER
- test result.
mysql-test/t/handler_myisam.test:
Fix for bug #54007: assert in ha_myisam::index_next, HANDLER
- test case.
sql/sql_handler.cc:
Fix for bug #54007: assert in ha_myisam::index_next, HANDLER
- check if we use the same (initialized) index
to read next/prev values from the index.
The problem is in the Item_func_isnull::update_used_tables() function:
a bracket is in the wrong place. Because of that, the isnull item is
erroneously treated as a const item. The fix is to put the brackets in the right place.
mysql-test/r/func_isnull.result:
test case
mysql-test/t/func_isnull.test:
test case
sql/item_cmpfunc.h:
Put the brackets in the right place.
* remove handler::index_read_last()
* create handler::keyread_read_time() (was get_index_only_read_time() in opt_range.cc)
* ha_show_status() allows engine's show_status() to fail
* remove HTON_FLUSH_AFTER_RENAME
* fix key_cmp_if_same() to work for floats and doubles
* set table->status in the server, don't force engines to do it
* increment status vars in the server, don't force engines to do it
mysql-test/r/status_user.result:
correct test results - innodb was wrongly counting internal
index searches as handler_read_* calls.
sql/ha_partition.cc:
compensate for handler incrementing status counters -
we want to count only calls to underlying engines
sql/handler.h:
inline methods moved to sql_class.h
sql/key.cc:
simplify the check
sql/opt_range.cc:
move get_index_only_read_time to the handler class
sql/sp.cc:
don't use a key that's stored in the record buffer -
the engine can overwrite the buffer with anything, destroying the key
sql/sql_class.h:
inline handler methods that need to see THD and TABLE definitions
sql/sql_select.cc:
no ha_index_read_last_map anymore
sql/sql_table.cc:
remove HTON_FLUSH_AFTER_RENAME
sql/table.cc:
set HA_CAN_MEMCMP as appropriate
sql/tztime.cc:
don't use a key that's stored in the record buffer -
the engine can overwrite the buffer with anything, destroying the key
storage/myisam/ha_myisam.cc:
engines don't need to update table->status or use ha_statistic_increment anymore
storage/myisam/ha_myisam.h:
index_read_last_map is no more
Some server builds don't support dates later
than 2038 because the internal time type is 32-bit.
Added checks so that the server refuses dates that cannot
be handled: an error is thrown when such a date is set at
runtime, and the server refuses to start (or shuts down) if
the system date cannot be stored in my_time_t.
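For example, with a 32-bit my_time_t the runtime check rejects timestamps past
2038-01-19 03:14:07 UTC (a sketch; the exact error produced is not quoted
here):

  -- One second beyond the 32-bit limit; now refused with an error
  -- instead of being accepted.
  SET TIMESTAMP = 2147483648;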
The Field_time::get_date method does not initialize the MYSQL_TIME::time_type field.
The fix is to initialize this field.
mysql-test/r/type_time.result:
test case
mysql-test/t/type_time.test:
test case
sql/field.cc:
- use Field_time::get_time in Field_time::get_date
- removed duplicated code from the Field_time::get_date method
and .tar.gz, windows vs linux..
On Intel x86 machines index selection by the MySQL query
optimizer could sometimes depend on the compiler version and
optimization flags used to build the server binary.
The problem was a result of a known issue with floating point
calculations on x86: since internal FPU precision (80 bit)
differs from precision used by programs (32-bit float or 64-bit
double), the result of calculating a complex expression may
depend on how FPU registers are allocated by the compiler and
whether intermediate values are spilled from FPU to memory. In
this particular case compiler versions and optimization flags
had an effect on cost calculation when choosing the best index
in best_access_path().
A possible solution to this problem which has already been
implemented in mysql-trunk is to limit FPU internal precision
to 64 bits. So the fix is a backport of the relevant code to
5.1 from mysql-trunk.
configure.in:
Configure check for fpu_control.h
mysql-test/r/explain.result:
Test case for bug #48537.
mysql-test/t/explain.test:
Test case for bug #48537.
sql/mysqld.cc:
Backport of the code to switch FPU on x86 to 64-bit precision.
Bug #46947 "Embedded SELECT without FOR UPDATE is causing a lock".
SELECT statements with subqueries referencing InnoDB tables
were acquiring shared locks on rows in these tables when they
were executed in REPEATABLE-READ mode and with statement or
mixed mode binary logging turned on.
This was a regression introduced by the fix for bug 39843.
The problem was that for tables belonging to subqueries the
parser set TL_READ_DEFAULT as the lock type. When
statement/mixed binary logging was in use, this lock type was
converted to a TL_READ_NO_INSERT lock at open_tables() time,
which caused the InnoDB engine to acquire shared locks on
reads from these tables. Although in some cases such behavior
is correct (e.g. for subqueries in DELETE), for SELECT it
caused unnecessary locking.
This patch implements a minimal version of the fix for the
specific problem described in the bug report, which is supposed
to be not too risky to push into the 5.1 tree.
The 5.5 tree already contains a more appropriate solution
which also addresses other related issues like bug 53921
"Wrong locks for SELECTs used stored functions may lead
to broken SBR".
This patch solves the problem by ensuring that the
TL_READ_DEFAULT lock, which is set in the parser for
tables participating in subqueries, is interpreted at
open_tables() time as TL_READ_NO_INSERT or TL_READ.
TL_READ is used only if we know that this is a SELECT
and that this particular table is not used by a stored
function.
Test coverage is added for both InnoDB and MyISAM.
This patch introduces an "incompatible" change in the locking
scheme for subqueries used in SELECT ... FOR UPDATE and
SELECT ... IN SHARE MODE.
In 4.1 (as well as in 5.0 and 5.1 before the fix for bug 39843)
the server would use a snapshot InnoDB read for subqueries
in SELECT FOR UPDATE and SELECT ... IN SHARE MODE statements,
regardless of whether the binary log is on or off.
If the user required a different type of read (i.e. a locking
read), he/she could request it explicitly by providing a FOR
UPDATE/IN SHARE MODE clause for each individual subquery.
The patch for bug 39843 broke this behaviour (which was not
documented or tested), and started to use locking reads for
all subqueries in SELECT ... FOR UPDATE/IN SHARE MODE.
This patch restores 4.1 behaviour.
This patch should be mostly null-merged into 5.5 tree.
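The problematic statement shape, for reference (illustrative table names;
REPEATABLE READ plus statement/mixed binlog format assumed):

  -- Before the fix the subquery's reads from t2 acquired shared row locks
  -- on InnoDB; after it, a plain SELECT uses a non-locking read.
  SELECT * FROM t1 WHERE t1.a IN (SELECT a FROM t2 WHERE t2.b > 10);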
mysql-test/include/check_concurrent_insert.inc:
Added an auxiliary script which checks that a statement
reading a table allows concurrent inserts into it.
mysql-test/include/check_no_concurrent_insert.inc:
Added an auxiliary script which checks that a statement
reading a table doesn't allow concurrent inserts into it.
mysql-test/include/check_no_row_lock.inc:
Added an auxiliary script which checks that a statement
reading a table doesn't take locks on its rows.
mysql-test/include/check_shared_row_lock.inc:
Added an auxiliary script which checks that a statement
reading a table takes shared locks on some of its rows.
mysql-test/r/bug39022.result:
After bug #46947 'Embedded SELECT without FOR UPDATE is
causing a lock' was fixed, the test case for bug 39022 had to
be adjusted in order to trigger the execution path on which the
original problem was encountered.
mysql-test/r/innodb_mysql_lock2.result:
Added coverage for handling of locking in various cases when
we read data from InnoDB tables (includes test case for
bug #46947 'Embedded SELECT without FOR UPDATE is causing a
lock').
mysql-test/r/lock_sync.result:
Added coverage for handling of locking in various cases when
we read data from MyISAM tables.
mysql-test/t/bug39022.test:
After bug #46947 'Embedded SELECT without FOR UPDATE is
causing a lock' was fixed, the test case for bug 39022 had to
be adjusted in order to trigger the execution path on which the
original problem was encountered.
mysql-test/t/innodb_mysql_lock2.test:
Added coverage for handling of locking in various cases when
we read data from InnoDB tables (includes test case for
bug #46947 'Embedded SELECT without FOR UPDATE is causing a
lock').
mysql-test/t/lock_sync.test:
Added coverage for handling of locking in various cases when
we read data from MyISAM tables.
sql/mysql_priv.h:
Function read_lock_type_for_table() now takes pointers to
LEX and TABLE_LIST elements as its arguments, since to
correctly determine the lock type it needs to know what
statement is being executed and whether the table element for
which the lock type is determined belongs to the prelocking list.
sql/sql_base.cc:
Changed read_lock_type_for_table() to return the weak TL_READ
type of lock when we are executing a SELECT (and so
won't update tables directly) and the table doesn't belong to the
statement's prelocking list and thus can't be used by a
stored function. It is OK to do so since in this case the table
won't be used by a statement or function call that will be
written to the binary log, so the serializability requirements
for it can be relaxed.
One result of this change is that SELECTs on InnoDB
tables no longer take shared row locks on tables which
are used in subqueries (i.e. bug #46947 is fixed).
Another result is that for similar SELECTs on MyISAM tables
concurrent inserts are allowed.
In order to implement this change, the signature of the
read_lock_type_for_table() function was changed to
take pointers to LEX and TABLE_LIST objects.
sql/sql_update.cc:
Function read_lock_type_for_table() now takes pointers to
LEX and TABLE_LIST elements as its arguments, since to
correctly determine the lock type it needs to know what
statement is being executed and whether the table element for
which the lock type is determined belongs to the prelocking list.
Fixed failing test innodb.innodb-autoinc.test
Enabled innodb test suite
mysql-test/mysql-test-run.pl:
Enabled innodb test suite
mysql-test/r/innodb-autoinc.result:
Removed test as it exists in suite innodb
mysql-test/suite/innodb/t/disabled.def:
Removed innodb-autoinc
mysql-test/suite/innodb/t/innodb-autoinc.test:
Updated to be able to run with the plugin
mysql-test/t/innodb-autoinc.test:
Removed test as it exists in suite innodb
sql/filesort.cc:
Removed unused variable
sql/slave.cc:
Remove compiler warnings
storage/pbxt/src/ha_pbxt.cc:
Removed unused variable
storage/xtradb/dict/dict0crea.c:
Fixed compiler warning about unsigned comparison
support-files/compiler_warnings.supp:
Disabled some irrelevant warnings