An assertion failure was triggered by a 6-way join query that uses two
join buffers.
The failure happened because every call of the function flush_cached_records()
saved and restored the status of all tables preceding the table join_tab. It
must do so only for those of them that follow the last table that uses
a join buffer.
other crashes
Some string-manipulating SQL functions use a shared string object intended to
contain an immutable empty string. This object was used by the SQL function
SUBSTRING_INDEX() to return an empty string when one argument was of the wrong
datatype. If the string object was then modified by the SQL function INSERT(),
undefined behavior ensued.
Fixed by instead modifying the string object representing the function's
result value whenever string-manipulating SQL functions return an empty
string.
Relevant code has also been documented.
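For illustration only (not the original test case), a query of the affected
shape might look like this; here the third argument of SUBSTRING_INDEX() has
the wrong datatype, so the function returns the shared empty string, which
INSERT() then tries to modify (the literals and exact trigger are assumptions):

  SELECT INSERT(SUBSTRING_INDEX('a,b,c', ',', 'oops'), 1, 1, 'X');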
The test case fails with an out-of-memory error while updating a table
with several multi-megabyte rows, which is probably too demanding for
the PB2 environment.
The quick fix here is to reduce the size of the biggest row from 256MB
to 64MB.
mysql-test/suite/handler/innodb.result:
Added test case
mysql-test/suite/handler/innodb.test:
Added test case
sql/handler.h:
Move setting/resetting of active_index to ha_index_init()/ha_index_end() to simplify the handler functions index_init()/index_end()
Fixed get_index() to return MAX_KEY if the index is not initialized (this fixed LP#697610)
storage/federated/ha_federated.cc:
Setting of active_index is not needed anymore
storage/maria/ma_pagecache.c:
Added error message if we have too little memory for Maria page cache
INVOKER-security view access check wrong".
When privilege checks were done for tables used from an
INVOKER-security view which in its turn was used from
a DEFINER-security view, the connection's active security
context was incorrectly used instead of the security context
carrying the privileges of the second view's creator.
This meant that users who had sufficient rights to access
the DEFINER-security view, and therefore were supposed to
be able to access it successfully, were unable to do so
when they lacked privileges on the underlying tables
of the INVOKER-security view.
This problem was caused by the fact that for INVOKER-security
views the TABLE_LIST::security_ctx member of the underlying
tables was set to 0 even when the view was used from
another, DEFINER-security view. As a result, when privileges
on these underlying tables were checked in
setup_tables_and_check_access(), the active connection security
context was used instead of the context corresponding to the
creator of the calling view.
This fix addresses the problem by ensuring that the underlying
tables of an INVOKER-security view inherit the security context
from the view, so that the correct security context is used for
privilege checks on the underlying tables when such a view
is used from another view with DEFINER security.
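A minimal sketch of the affected setup (user and object names are
illustrative, not taken from the patch):

  CREATE TABLE db1.t1 (a INT);
  -- INVOKER-security view:
  CREATE SQL SECURITY INVOKER VIEW db1.v_invoker AS
    SELECT a FROM db1.t1;
  -- DEFINER-security view, created by a user who does have access to db1.t1:
  CREATE SQL SECURITY DEFINER VIEW db1.v_definer AS
    SELECT a FROM db1.v_invoker;
  -- A user granted SELECT on v_definer only: before the fix this failed when
  -- the user lacked privileges on db1.t1, because the connection's own
  -- security context was checked against the underlying table.
  SELECT a FROM db1.v_definer;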
mysql-test/r/view_grant.result:
Added coverage for various combinations of DEFINER and
INVOKER-security views, including test for bug #58499
"DEFINER-security view selecting from INVOKER-security
view access check wrong".
mysql-test/t/view_grant.test:
Added coverage for various combinations of DEFINER and
INVOKER-security views, including test for bug #58499
"DEFINER-security view selecting from INVOKER-security
view access check wrong".
sql/sql_view.cc:
When opening a non-suid view, ensure that its underlying
tables get the same security context as is used for
checking privileges on the view itself, i.e. the security
context of the view invoker. This context can differ from the
security context currently active for the connection
when the non-suid view is used from a view with
suid security. Inheriting the security context in this situation
allows the privileges of the suid view's creator to be applied
correctly in checks for tables of the non-suid view (since in this
situation the creator/definer of the suid view serves as the invoker
of the non-suid view).
Item_func_spatial_collection::fix_length_and_dec() didn't call the parent's
method, so maybe_null was left set to 0. But in this case the result could be
just NULL, which caused wrong behaviour.
per-file comments:
mysql-test/r/gis.result
Bug #57321 crashes and valgrind errors from spatial types
test result updated.
mysql-test/t/gis.test
Bug #57321 crashes and valgrind errors from spatial types
test case added.
sql/item_geofunc.h
Bug #57321 crashes and valgrind errors from spatial types
Item_func_geometry::fix_length_and_dec() called in
Item_func_spatial_collection::fix_length_and_dec().
get_year_value() contains code to convert 2-digits year to
4-digits. The fix for Bug#49910 added a check on the size of
the underlying field so that this conversion is not done for
YEAR(4) values. (Since otherwise one would convert invalid
YEAR(4) values to valid ones.)
The existing check does not work when Item_cache is used, since
it is not detected when the cache is based on a Field. The
reported change in behavior is due to Bug#58030 which added
extra cached items in min/max computations.
The elegant solution would be to implement
Item_cache::real_item() to return the underlying Item.
However, some side effects were observed (changes in EXPLAIN
output) that indicate that such a change is not straightforward,
and definitely not appropriate for an MRU.
Instead, an Item_cache::field() method has been added in order
to get access to the underlying field. (This field() method
eliminates the need for Item_cache::eq_def() used in
test_if_ref(), but in order to limit the scope of this fix,
that code has been left as is.)
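A rough sketch of the kind of statement affected (assumed for illustration,
not copied from type_year.test):

  CREATE TABLE t1 (y YEAR(4));
  INSERT INTO t1 VALUES (1999), (2001);
  -- The min/max computation wraps the field value in an Item_cache, and the
  -- comparison code must still recognize that the cached value comes from a
  -- YEAR(4) field so that no 2-digit to 4-digit conversion is applied to it.
  SELECT * FROM t1 WHERE y = (SELECT MIN(y) FROM t1);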
mysql-test/r/type_year.result:
Added test case for Bug#59211.
mysql-test/t/type_year.test:
Added test case for Bug#59211.
sql/item.h:
Added the function Item_cache::field() to get access to the
underlying Field of a cached field value.
sql/item_cmpfunc.cc:
Also check the underlying fields of Item_cache, not just Item_field,
when checking whether the value is of type YEAR(4) or not.
tmptable needed
The function DEFAULT() works by modifying the data buffer pointers (often
referred to as 'record' or 'table record') of its argument. This modification
is done during name resolution (fix_fields()). Unfortunately, the same
modification is done when creating a temporary table, because default values
need to propagate to the new table.
Fixed by skipping the pointer modification for fields that are arguments to
the DEFAULT function.
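An illustrative sketch (assumed, not the original test case) of a statement
where DEFAULT() is evaluated together with a temporary table:

  CREATE TABLE t1 (a INT DEFAULT 42, b INT);
  INSERT INTO t1 (b) VALUES (1), (2);
  -- GROUP BY here may force a temporary table, so both the name resolution
  -- of DEFAULT(a) and the temporary table creation used to touch the record
  -- pointers of field 'a'.
  SELECT DEFAULT(a), COUNT(*) FROM t1 GROUP BY b;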
- The fix for MySQL BUG#52357 added NESTED_JOIN::is_fully_covered(), which did
not take into account that MariaDB's table elimination can eliminate tables
from the join plan (and so, from the join nest).
Fixed the check in the function to compare post-table-elimination numbers
(a sketch of an eliminable join follows this list).
- Added test case for Aria
- Tested HANDLER with HEAP (changes to HEAP code will be pushed in 5.3)
- Moved all HANDLER tests to suite/handler.
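The sketch referenced above, showing a join from which MariaDB's table
elimination can remove a table (table names are illustrative):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (a INT PRIMARY KEY, b INT);
  -- t2 is joined only on its primary key and none of its columns are used
  -- elsewhere in the query, so it can be eliminated from the join plan;
  -- is_fully_covered() must compare post-elimination table counts.
  SELECT t1.a FROM t1 LEFT JOIN t2 ON t2.a = t1.a;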
mysql-test/Makefile.am:
Added suite/handler
mysql-test/mysql-test-run.pl:
Added suite/handler
mysql-test/r/lock_multi.result:
Remove test that is already in handler test suite
mysql-test/suite/handler/aria.result:
Test for HANDLER with Aria storage engine
mysql-test/suite/handler/aria.test:
Test for HANDLER with Aria storage engine
mysql-test/suite/handler/handler.inc:
Extended the general handler test
Moved interface testing to 'interface.test'
mysql-test/suite/handler/init.inc:
Common init for handler tests.
mysql-test/suite/handler/innodb.result:
New results
mysql-test/suite/handler/innodb.test:
Update to use new include files
mysql-test/suite/handler/interface.result:
Test of HANDLER interface (not storage engine dependent parts)
mysql-test/suite/handler/interface.test:
Test of HANDLER interface (not storage engine dependent parts)
mysql-test/suite/handler/myisam.result:
New results
mysql-test/suite/handler/myisam.test:
Update to use new include files
mysql-test/t/lock_multi.test:
Remove test that is already in handler test suite
mysys/tree.c:
Added missing handling of read previous (showed up in HEAP testing)
sql/handler.cc:
Don't mark 'HA_ERR_RECORD_CHANGED' as fatal (it can occur with HANDLER READ, especially with the MEMORY engine)
sql/handler.h:
Added prototype for can_continue_handler_scan()
sql/sql_handler.cc:
Re-initialize search if we switch from key to table search.
Check if handler can continue searching between calls (via can_continue_handler_scan())
Don't write common non-fatal errors to the log
storage/maria/ma_extra.c:
Don't set index 0 as default. This forces call to ma_check_index() to set up index variables.
storage/maria/ma_ft_boolean_search.c:
Ensure that info->last_key.keyinfo is set
storage/maria/ma_open.c:
Don't set index 0 as default. This forces call to ma_check_index() to set up index variables.
storage/maria/ma_rkey.c:
Trivial optimization
storage/maria/ma_rnext.c:
Added missing code from mi_rnext.c to ensure that handler next/prev works.
storage/maria/ma_rsame.c:
Simple optimizations
storage/maria/ma_search.c:
Initialize info->last_key once and for all when we change keys.
storage/maria/ma_unique.c:
Ensure that info->last_key.keyinfo is up to date.
multiple columns in the partition key
NDB crashed if there were duplicate columns in the partitioning key.
Backport from mysql-5.1-telco-7.0; see bug#53354.
Also changed the field name comparison from case-sensitive to
case-insensitive.
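An illustrative example (not the original test) of a definition that the new
check rejects; the comparison is case-insensitive, so 'a' and 'A' count as
duplicates:

  CREATE TABLE t1 (a INT, b INT)
  PARTITION BY KEY (a, A)
  PARTITIONS 2;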
mysql-test/r/partition_error.result:
updated result
mysql-test/t/partition_error.test:
Added a test for the error with a non-NDB partitioned table.
sql/sql_partition.cc:
Added check for duplicated field names in the
partitioning key.
This assert could be triggered if -1 was inserted into
an auto increment column by a statement writing more than
one row.
Unless explicitly given, an interval of auto increment values
is generated when a statement first needs an auto increment
value. The triggered assert checks that the auto increment
counter is equal to or higher than the lower bound of this
interval.
Generally, the auto increment counter starts at 1 and is
incremented by 1 each time it is used. However, inserting an
explicit value into the auto increment column, sets the auto
increment counter to this value + 1 if this value is higher
than the current value of the auto increment counter.
This bug was triggered if the explicit value was -1. Since the
value was converted to unsigned before any comparisons were made,
it was found to be higher than the current value of the auto
increment counter, and the counter was set to -1 + 1. This value
was below the reserved interval and caused the assert to be
triggered the next time the statement tried to write a row.
With the patch for Bug#39828, this bug is no longer repeatable.
Now, -1 + 1 is detected as an "overflow" which causes the auto
increment counter to be set to ULONGLONG_MAX. This avoids hitting
the assert for the next insert and causes a new interval of
auto increment values to be generated. This resolves the issue.
This patch therefore only contains a regression test and no code
changes. Test case added to auto_increment.test.
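An illustrative multi-row statement of the kind described above (assumed, not
copied from auto_increment.test):

  CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY);
  -- The explicit -1 is compared as an unsigned value, so before the fix for
  -- Bug#39828 the counter could end up below the reserved interval and the
  -- second row could hit the assert.
  INSERT INTO t1 VALUES (-1), (NULL);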
The patch also fixes a race in rpl_stop_slave.test.
On machines with lots of CPU and memory, something like `mtr --parallel=10`
can speed up the test suite enormously. However, we have a few test cases
that run for long (several minutes), and if we are unlucky and happen to
schedule those towards the end of the test suite, we end up with most
workers idle while waiting for the last slow test to end, significantly
delaying the finish of the entire suite.
Improve this by marking the offending tests as taking "long", and trying
to schedule those tests early. This reduces the time towards the end of
the test suite run where some workers sit idle, waiting for the remaining
workers each to finish their last test.
Also, the rpl_stop_slave test had a race which could cause it to take
a 300 seconds debug_sync timeout; this is fixed.
Testing on a 4-core 8GB machine, this patch speeds up the test suite by
around 30% for --parallel=10 (debug build), allowing the entire suite to
run in 5 minutes.
mysqlbinlog only prints "use $database" statements to its output stream
when the active default database changes between events. This can cause
a "No Database Selected" error when the output is replayed after that
database has been dropped and recreated.
To fix the problem, we clear print_event_info->db when printing a
CREATE/DROP/ALTER DATABASE event, so that the next Query_log_event after
such statements is printed with a "use `db`" statement anyway (except for
transaction keywords).
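An illustrative statement sequence (assumed, not the exact test) whose binlog
output must replay correctly; without the fix, the output for CREATE TABLE t2
could lack a preceding "use `db1`" and fail with "No Database Selected":

  CREATE DATABASE db1;
  USE db1;
  CREATE TABLE t1 (a INT);
  DROP DATABASE db1;
  CREATE DATABASE db1;
  USE db1;
  CREATE TABLE t2 (a INT);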
mysql-test/r/mysqlbinlog.result:
Test result for Bug#50914.
mysql-test/t/mysqlbinlog.test:
Added a test to verify that the way mysqlbinlog prints
"use $database" statements to its output stream does not cause
a "No Database Selected" error when dropping and recreating
that database.
sql/log_event.cc:
Updated the code to clear print_event_info->db when printing a
CREATE/DROP/ALTER DATABASE event, so that the next Query_log_event
after such statements is printed with a "use `db`" statement anyway
(except for transaction keywords).
- Removed files specific to compiling on OS/2
- Removed files specific to SCO Unix packaging
- Removed "libmysqld/copyright", text is included in documentation
- Removed LaTeX headers for NDB Doxygen documentation
- Removed obsolete NDB files
- Removed "mkisofs" binaries
- Removed the "cvs2cl.pl" script
- Changed a few GPL texts to use "program" instead of "library"
When the optimizer creates items out of other items it does
not have to call the fix_fields method. Usually in these
cases it calls quick_fix_field() that just marks the
created item as fixed. If the created item is an Item_func
object then calling quick_fix_field() works fine if the
arguments of the created functional item are already fixed.
Otherwise some unfixed nodes remain in the item tree and
it triggers an assertion failure whenever the item is
evaluated.
Fixed the problem by making the method quick_fix_field()
virtual and providing an implementation for Item_func
objects that recursively calls the method for any unfixed
arguments of the functional item.
The assert happens due to improper calculation of max_length in the
Item_func_div object: if the dividend has max_length == 0 then
Item_func_div::max_length is set to 0 under some circumstances.
The fix:
If decimals == NOT_FIXED_DEC then set
Item_func_div::max_length to the maximum possible
DOUBLE length value.
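One hypothetical way to obtain a dividend with max_length == 0 (this exact
query is an assumption, not the reported one) is to divide an empty string
literal:

  -- the empty string literal has max_length 0, and the division result has
  -- decimals == NOT_FIXED_DEC
  SELECT '' / 1;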
mysql-test/r/func_math.result:
test case
mysql-test/t/func_math.test:
test case
sql/item_func.cc:
The fix:
If decimals == NOT_FIXED_DEC then set
Item_func_div::max_length to max possible
DOUBLE length value.
Bug#57071: EXTRACT(WEEK from date_col) cannot be allowed as partitioning function
There were functions allowed as partitioning functions that implicitly
allowed casts. That could result in unacceptable behaviour.
The solution was to check that the arguments of date and time functions
have allowed types (a field of date/datetime/time type, depending on the
function).
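An illustrative pair of definitions (assumed, not taken from the test files):
the first relies on an implicit cast from VARCHAR and is rejected by the new
check, while the second uses a proper DATE column:

  CREATE TABLE t_bad (a VARCHAR(10))
  PARTITION BY RANGE (TO_DAYS(a))
  (PARTITION p0 VALUES LESS THAN (100));

  CREATE TABLE t_good (a DATE)
  PARTITION BY RANGE (TO_DAYS(a))
  (PARTITION p0 VALUES LESS THAN (100));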
mysql-test/r/partition.result:
Updated result
mysql-test/r/partition_error.result:
Updated result
mysql-test/suite/parts/inc/part_supported_sql_funcs_main.inc:
Disabled tests with disallowed arguments.
mysql-test/suite/parts/r/part_supported_sql_func_innodb.result:
Updated result
mysql-test/suite/parts/r/part_supported_sql_func_myisam.result:
Updated result
mysql-test/t/partition.test:
Fixed a typo in a bug number and removed a disallowed function (bad argument)
mysql-test/t/partition_error.test:
Added tests to verify correct type of argument.
sql/item.h:
Renamed processor since it is no longer only for timezone
sql/item_func.h:
Added helper functions for checking date/time/datetime arguments.
sql/item_timefunc.h:
Added processors for argument correctness
sql/sql_partition.cc:
Renamed the processor for checking arguments.
BUILD/compile-pentium64:
Added --with-zlib-dir=bundled when doing static build.
mysql-test/suite/maria/r/maria.result:
Added test case
mysql-test/suite/maria/t/maria.test:
Added test case
storage/maria/ma_blockrec.c:
We need to flush blob pages to disk to ensure they overwrite any reused head/tail pages.
If not, REPAIR will find rows that were previously deleted.
Problem: master executed a statement that would fail on slave
(namely, DROP USER 'create_rout_db'@'localhost').
Then the test did:
--let $rpl_only_running_threads= 1
--source include/rpl_reset.inc
rpl_reset.inc calls rpl_sync.inc, which first checks which of
the threads are running and then syncs those threads that are
running. If the SQL thread fails after the check, the sync will
fail. So there was a race in the test and it failed on some
slow hosts.
Fix: Don't replicate the failing statement.
Item_sum_max/Item_sum_min incorrectly set the null_value flag, and
the attempt to get the result in parent functions leads to a crash.
This happens due to double evaluation of the function argument:
the first evaluation happens in the comparator and the second one
in Item_cache::cache_value().
The fix is to introduce a new Item_cache object which
holds the result of the argument, and to use this cached value
as an argument of the comparator.
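A hypothetical shape of an affected query (assumed, not copied from
func_group.test): MIN()/MAX() whose result is consumed by a parent function,
where double evaluation of the argument could leave the cached value and the
null_value flag inconsistent:

  CREATE TABLE t1 (a VARCHAR(10));
  -- with no rows, MAX(a) is NULL; the parent CONCAT() then reads the cached
  -- result
  SELECT CONCAT(MAX(a)) FROM t1;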
mysql-test/r/func_group.result:
test case
mysql-test/t/func_group.test:
test case
sql/item.cc:
Added an assertion that either we have some result or the result is NULL.
sql/item_sum.cc:
Introduced a new Item_cache object which holds the result of the
argument and used this cached value as an argument of the comparator.
sql/item_sum.h:
Introduced a new Item_cache object which holds the result of the
argument and used this cached value as an argument of the comparator.
row_upd_changes_ord_field_binary(): Do not return TRUE if the update
vector changes a column that is covered by a prefix index, but does
not change the column prefix. Add the row_ext_t parameter for
determining whether the prefixes of externally stored columns match.
dfield_datas_are_binary_equal(): Add the parameter len, for comparing
column prefixes when len > 0.
innodb.test: Add a test case where the patch of Bug #55284 failed
without this fix.
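An illustrative update (assumed, not the added innodb.test case) that touches
a column covered by a prefix index without changing the indexed prefix:

  CREATE TABLE t1 (a BLOB, KEY a10 (a(10))) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (CONCAT(REPEAT('a', 10), 'x'));
  -- only bytes beyond the 10-byte indexed prefix change, so
  -- row_upd_changes_ord_field_binary() should not report the ordering
  -- field as changed
  UPDATE t1 SET a = CONCAT(REPEAT('a', 10), 'y');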
rb:537 approved by Jimmy Yang