Item_func_spatial_collection::fix_length_and_dec() was
changed to use the argument's print() method to print
the ER_ILLEGAL_VALUE_FOR_TYPE error.
mysql-test/r/gis.result:
Fix for bug#56679: gis.test: valgrind error
- test result adjusted.
sql/item_geofunc.h:
Fix for bug#56679: gis.test: valgrind error
- use the argument's print() method instead of the improper val_str()
call in Item_func_spatial_collection::fix_length_and_dec(), as
val_str() is applicable only to constant items.
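As an illustration only, here is a minimal, self-contained C++ sketch of the point above; the Item/Item_field classes, report_illegal_value() and the message text are hypothetical stand-ins, not the real server API. At fix_length_and_dec()/prepare time an argument may not be constant, so its value cannot be fetched with val_str(); printing the expression text is always valid.

  // Hypothetical, simplified illustration of the fix above (the Item class
  // here is a stand-in, not the real server API). At fix/prepare time an
  // argument may not be constant, so evaluating it with val_str() is invalid;
  // printing the expression text is always safe.
  #include <cassert>
  #include <iostream>
  #include <string>

  struct Item {                                    // stand-in for the server's Item
    virtual ~Item() = default;
    virtual bool const_item() const = 0;           // true only for literals
    virtual std::string val_str() const = 0;       // valid only when const_item()
    virtual void print(std::string *to) const = 0; // textual form, always valid
  };

  struct Item_field : Item {                       // a non-constant argument
    std::string name;
    explicit Item_field(const std::string &n) : name(n) {}
    bool const_item() const override { return false; }
    std::string val_str() const override {
      assert(!"no value available at prepare time"); // what the old code hit
      return std::string();
    }
    void print(std::string *to) const override { *to += name; }
  };

  // What fix_length_and_dec() should do when an argument has the wrong type:
  void report_illegal_value(const Item &arg) {
    std::string text;
    arg.print(&text);                              // safe for non-constant items
    std::cerr << "ER_ILLEGAL_VALUE_FOR_TYPE: illegal value '" << text
              << "' for geometry collection\n";
  }

  int main() {
    Item_field f("t1.not_a_geometry");
    report_illegal_value(f);                       // would abort if val_str() were used
    return 0;
  }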
Fixed typo that caused warnings from mysql-test-run
mysql-test/mysql-test-run.pl:
Fixed typo
sql/mysqld.cc:
Give a more precise warning about why something fails.
Fixed test failures in buildbot
Don't write errors when failing to send ok packet
mysql-test/suite/pbxt/r/range.result:
Don't write number of rows as it varies.
mysql-test/suite/pbxt/t/range.test:
Don't write number of rows as it varies.
sql/mysqld.cc:
Don't write errors when failing to send ok packet
storage/maria/ma_bitmap.c:
Added DBUG_ASSERT to detect wrong bitmap pages
storage/maria/ma_blockrec.c:
Don't reset the BLOCKUSED_USE_ORG_BITMAP flag. This fixed a bug where the bitmap could be wrong after UNDO of a row with blobs.
Conversion from a floating point number to a string caused a
crash.
Under rare circumstances a String object could crash when
it was requested to allocate new memory.
A crash could occur in Field_double::val_str() because of
a pointer referencing memory inside a String object which was
of unknown size.
And finally, the geometry collection should not accept
arguments that are not geometric.
mysql-test/r/gis.result:
* Test case results change because we intercept the error behind the
previous crashes much earlier.
sql/field.cc:
* It makes no sense to impose a lower limit on the length,
and not setting an upper limit will cause crashes later.
sql/item_geofunc.h:
* Disallow binding with field and item types which
differ from MYSQL_TYPE_GEOMETRY.
The EXISTS transformation has additional switches to catch the known corner
cases that appear when transforming an IN predicate into EXISTS. Guarded
conditions are used which are deactivated when a NULL value is seen in the
outer expression's row. When the inner query block supplies NULL values,
however, they are filtered out because no distinction is made between the
guarded conditions; the guarded NOT x IS NULL conditions in the HAVING clause
that filter out NULL values cannot be deactivated in isolation from those that
match values or NULLs from the outer expression.
The above problem is handled by making the guarded conditions remember whether
they have rejected a NULL value or not, and by making the index access methods
take this into account as well.
The bug consisted of
1) Not resetting the property for every nested loop iteration on the inner
query's result.
2) Not propagating the NULL result properly from the inner query to the IN optimizer.
3) A hack that may or may not have been needed at some point. According to a
comment it was intended to fix #2 by returning NULL when FALSE was actually
the result. This caused failures when #2 was properly fixed. The hack is
now removed.
The fix resolves all three points.
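For clarity, a conceptual, self-contained C++ sketch of the guarded-condition idea follows; Value, Tri, NullGuard and in_predicate() are hypothetical stand-ins and not the server's Item_in_optimizer code. It shows the two properties the fix restores: the NULL-rejection flag is reset for every outer row (point 1), and a row with no match but a rejected NULL yields NULL rather than FALSE (point 2).

  #include <iostream>
  #include <optional>
  #include <vector>

  using Value = std::optional<int>;              // std::nullopt models SQL NULL
  enum class Tri { False, True, Null };          // SQL three-valued result

  struct NullGuard {
    bool rejected_null = false;                  // set when a NULL was filtered out
    void reset() { rejected_null = false; }      // must happen once per outer row
    bool matches(const Value &outer, const Value &inner) {
      if (!outer || !inner) { rejected_null = true; return false; }
      return *outer == *inner;
    }
  };

  Tri in_predicate(const Value &outer, const std::vector<Value> &inner_rows,
                   NullGuard &guard) {
    guard.reset();                               // point 1 of the fix
    for (const Value &inner : inner_rows)
      if (guard.matches(outer, inner)) return Tri::True;
    // point 2 of the fix: report NULL (unknown), not plain FALSE
    return guard.rejected_null ? Tri::Null : Tri::False;
  }

  const char *name(Tri t) {
    return t == Tri::True ? "TRUE" : t == Tri::Null ? "NULL" : "FALSE";
  }

  int main() {
    NullGuard guard;
    std::vector<Value> inner = {Value(1), Value(), Value(3)};
    std::cout << name(in_predicate(Value(2), inner, guard)) << "\n"; // NULL, not FALSE
    std::cout << name(in_predicate(Value(3), inner, guard)) << "\n"; // TRUE
  }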
multi-table UPDATE IGNORE.
The problem was that if there was an active SELECT statement
during trigger execution, an error raised during the execution
could cause a crash. The fix is to temporarily reset LEX::current_select
before trigger execution and restore it afterwards. This way
errors raised during the trigger execution are processed as
if there was no active SELECT.
mysql-test/r/trigger_notembedded.result:
added test case result for bug #55421.
mysql-test/t/trigger_notembedded.test:
added test case for bug #55421.
sql/sql_trigger.cc:
Reset thd->lex->current_select before starting trigger execution
and restore its original value after execution is finished.
This is necessary in order to set the error status in the
diagnostics area in case of trigger execution failure.
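A small, hypothetical sketch of the save/reset/restore pattern described above (simplified THD/LEX stand-ins, not the actual sql_trigger.cc code):

  #include <iostream>

  struct SELECT_LEX {};
  struct LEX { SELECT_LEX *current_select = nullptr; };
  struct THD { LEX *lex; };

  // Returns true on error, following the server convention.
  bool execute_trigger_body(THD *) {
    std::cerr << "trigger raised an error\n";        // error is reported here
    return true;
  }

  bool process_triggers(THD *thd) {
    SELECT_LEX *save_current_select = thd->lex->current_select;
    thd->lex->current_select = nullptr;              // pretend no SELECT is active
    bool err = execute_trigger_body(thd);            // errors now set the error
                                                     // status correctly
    thd->lex->current_select = save_current_select;  // restore unconditionally
    return err;
  }

  int main() {
    LEX lex; SELECT_LEX outer; lex.current_select = &outer;
    THD thd{&lex};
    return process_triggers(&thd) ? 1 : 0;
  }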
inited==INDEX
When an error occurred while sending the data of a temporary table, no
cleanup was performed. This caused a failed assertion in the case when
different access methods were used for populating the table vs. retrieving
the data from the table, if IGNORE was specified and sql_safe_updates = 0.
In this case execution continues, but the handler expects to continue with
the access method used for row retrieval.
Fixed by doing the cleanup even if errors occur.
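A generic, hypothetical illustration of the "clean up on both paths" shape of the fix (not the actual server code): the scan state must be ended even when sending rows fails, so that a later access method does not find the cursor still initialized.

  #include <cassert>
  #include <iostream>

  struct Cursor {
    enum State { NONE, INDEX } inited = NONE;
    void start_index_scan() { inited = INDEX; }
    void end_scan()         { inited = NONE; }   // the cleanup that was skipped
    bool send_rows()        { return true; }     // simulate a failure (true = error)
  };

  bool send_result(Cursor &c) {
    c.start_index_scan();
    bool err = c.send_rows();
    c.end_scan();          // the fix: run the cleanup even when send_rows() failed,
    return err;            // instead of returning early and leaving inited == INDEX
  }

  int main() {
    Cursor c;
    send_result(c);
    assert(c.inited == Cursor::NONE);            // safe to continue (e.g. with IGNORE)
    std::cout << "cursor cleaned up\n";
  }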
Fixed a bug in Aria when replacing short keys with long keys and a key tree both overflowed and underflowed at the same time.
Fixed several bugs in generating recovery logs when using RQG with replacing long keys with short keys and vice versa.
Lots of new DBUG_ASSERT()'s
Added more information to the recovery log to make it easier to know where a log entry originated.
Introduced MARIA_PAGE->org_size that tells what the size of the page was in the last log entry. This allows us to find out if all key changes for an index page were logged.
Small code cleanups:
- Introduced _ma_log_key_changes() to log CRC of key page changes
- Added share->max_index_block_size as the max size of data one can put in a key block (block_size - KEYPAGE_CHECKSUM_SIZE).
This will later simplify adding a directory to index pages.
- Write page number instead of page position to DBUG log
mysql-test/lib/v1/mysql-test-run.pl:
Use --general-log instead of --log to disable warning when using RQG
sql/mysqld.cc:
If we have already sent an OK packet to the client when we get an error, log the error to stderr
Don't disable option --log-output if CSV engine is not supported.
storage/maria/ha_maria.cc:
Log queries to recovery log also in LOCK TABLES
storage/maria/ma_check.c:
If param->max_trid is set, use this value instead of max_trid_in_system().
This is used by recovery to set max_trid to the maximum trid seen so far.
keyinfo->block_length - KEYPAGE_CHECKSUM_SIZE -> max_index_block_size (Style optimization)
storage/maria/ma_delete.c:
Mark tables crashed early
Write page number instead of page position to debug log.
Added a parameter to ma_log_delete() and ma_log_prefix() that is logged so that we can find where wrong log entries were generated.
Fixed a bug where a page was not properly written when the same key tree had both an overflow and an underflow when deleting a key.
keyinfo->block_length - KEYPAGE_CHECKSUM_SIZE => max_index_block_size (Style optimization)
ma_log_delete() now has an extra parameter specifying how many bytes from the end of the page should be appended to the log (for page overflows)
storage/maria/ma_key_recover.c:
Added an extra parameter to ma_log_prefix() to indicate what caused the log entry.
Update MARIA_PAGE->org_size when logging info about a page.
Many more DBUG_ASSERT()'s.
Fix some bugs in maria_log_add() to handle page overflows.
Added _ma_log_key_changes() to log CRC of key page changes.
If EXTRA_STORE_FULL_PAGE_IN_KEY_CHANGES is defined, log the resulting pages so one can trivially
see what the resulting page should have looked like (for errors in CRC values)
storage/maria/ma_key_recover.h:
Added _ma_log_key_changes() which is only called if EXTRA_DEBUG_KEY_CHANGES is defined.
Updated function prototypes.
storage/maria/ma_loghandler.h:
Added more values to en_key_debug, to get a more exact location of where things went wrong when logging to the recovery log.
storage/maria/ma_open.c:
Initialize share->max_index_block_size
storage/maria/ma_page.c:
Added updating and testing of MARIA_PAGE->org_size
Write page number instead of page position to DBUG log
Generate an error if we read a page with wrong data.
Removed wrong assert: key_del_current != share->state.key_del.
Simplify _ma_log_compact_keypage()
storage/maria/ma_recovery.c:
Set param.max_trid to max seen trid before running repair table (used for alter table to create index)
storage/maria/ma_rt_key.c:
Update call to _ma_log_delete()
storage/maria/ma_rt_split.c:
Use _ma_log_key_changes()
Update MARIA_PAGE->org_size
storage/maria/ma_unique.c:
Remove casts
storage/maria/ma_write.c:
keyinfo->block_length - KEYPAGE_CHECKSUM_SIZE => share->max_index_block_size.
Updated calls to _ma_log_prefix()
Changed code to use _ma_log_key_changes()
Update ma_page->org_size
Fixed bug in _ma_log_split() for pages that overflow
Added KEY_OP_DEBUG logging to functions
Log KEYPAGE_FLAG in all log entries
storage/maria/maria_def.h:
Added SHARE->max_index_block_size
Added MARIA_PAGE->org_size
storage/maria/trnman.c:
Reset flags for new transaction.
Cleaned up mysql_upgrade --help and mysqlcheck --help
client/mysql_upgrade.c:
Increased version number.
Marked all options that are not used as 'Not used'.
Don't write 'Looking for tool' if not using --verbose
client/mysqlcheck.c:
Cleanup output for --help
Reset an uninitialized variable
mysql-test/r/log_tables_upgrade.result:
Updated results
mysql-test/r/mysql_upgrade.result:
Updated results
mysql-test/r/mysqlcheck.result:
Updated results
mysql-test/t/log_tables_upgrade.test:
mysql_upgrade is now run without --skip-verbose
mysql-test/t/mysql_upgrade.test:
mysql_upgrade is now run without --skip-verbose
mysql-test/r/not_partition.result:
Test result changed after I fixed the error message for a non-existing engine
mysql-test/suite/funcs_1/r/is_columns_is.result:
Updated results
mysql-test/suite/funcs_1/r/is_engines_innodb.result:
Updated results
mysql-test/suite/funcs_1/r/is_tables_is.result:
Updated results
mysql-test/suite/funcs_1/t/is_tables_is.test:
Test requires innodb as the results depend on innodb
mysql-test/suite/innodb_plugin/t/disabled.def:
Disable test as it shows errors in valgrind
mysql-test/suite/innodb_plugin/t/innodb-use-sys-malloc.test:
Test can't be run under valgrind as mysql-test-run resets the innodb_use_sys_malloc flag
storage/xtradb/buf/buf0buf.c:
Fixed compiler warning by adding casts
Made long file names from previous patch shorter
mysql-test/r/archive.result:
Added testing of repair (for upgrade) of 5.0 tables.
mysql-test/std_data/archive_5_0.ARM:
Archive table created in MySQL 5.0
mysql-test/std_data/archive_5_0.ARZ:
Archive table created in MySQL 5.0
mysql-test/std_data/archive_5_0.frm:
Archive table created in MySQL 5.0
mysql-test/std_data/long_table_name.MYD:
Made long file names shorter
mysql-test/std_data/long_table_name.MYI:
Made long file names shorter
mysql-test/std_data/long_table_name.frm:
Made long file names shorter
mysql-test/t/archive.test:
Added testing of repair (for upgrade) of 5.0 tables.
sql/sql_table.cc:
Allow recreate to open crashed tables.
sql/table.cc:
Fix the error message if the storage engine doesn't exist.
storage/archive/azio.c:
Reset status values in case the archive is of an old version
storage/archive/ha_archive.cc:
Fix to allow one to open old versions of a table during repair
Reset status variables for old-version tables
If the table is of an old version, force an upgrade with ALTER TABLE when doing repair
storage/archive/ha_archive.h:
Added variables to detect old versions
Fall back to using ALTER TABLE for engines that don't support REPAIR when doing repair for upgrade.
Nicer output from mysql_upgrade and mysqlcheck
Updated all arrays that used NAME_LEN to use SAFE_NAME_LEN to ensure that we don't break things accidentally, as names can now have a #mysql50# prefix.
client/mysql_upgrade.c:
If we are using verbose, also run mysqlcheck in verbose mode.
client/mysqlcheck.c:
Add more information if running in verbose mode
Print 'Needs upgrade' instead of a complex error if a table needs to be upgraded
Don't write connect information unless verbose is 2 or above
mysql-test/r/drop.result:
Updated test and results as we now support full table names
mysql-test/r/grant.result:
Now you get a correct error message if using #mysql with paths
mysql-test/r/show_check.result:
Update results as table names can temporarily be bigger than NAME_LEN (during upgrade)
mysql-test/r/upgrade.result:
Test upgrade for long table names.
mysql-test/suite/funcs_1/r/is_tables_is.result:
Updated old test result (had not been updated in a while)
mysql-test/t/drop.test:
Updated test and results as we now support full table names
mysql-test/t/grant.test:
Now you get a correct error message if using #mysql with paths
mysql-test/t/upgrade.test:
Test upgrade for long table names.
sql/ha_partition.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/item.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/log_event.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/mysql_priv.h:
Added SAFE_NAME_LEN
sql/rpl_filter.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sp.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sp_head.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_acl.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_base.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_connect.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_parse.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_prepare.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_select.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_show.cc:
NAME_LEN -> SAFE_NAME_LEN
Enlarge table names for SHOW TABLES to also include the optional #mysql50# prefix
sql/sql_table.cc:
Fall back to using ALTER TABLE for engines that don't support REPAIR when doing repair for upgrade.
sql/sql_trigger.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_udf.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/sql_view.cc:
NAME_LEN -> SAFE_NAME_LEN
sql/table.cc:
Fixed check_table_name() to not count #mysql50# as part of the name
If #mysql50# is part of the name, don't allow path characters in the name.
case than in corr index".
The server was unable to find an existing or explicitly created supporting
index for a foreign key if the corresponding statement clause used field
names in a case different from the one used in the key specification, and
so created yet another supporting index.
In cases when the name of the constraint (and thus the name of the generated
index) was the same as the name of the existing/explicitly created index,
this led to a duplicate key name error.
The problem was that, unlike all other code, Key_part_spec::operator==()
compared field names in a case-sensitive fashion. As a result, the routines
responsible for getting rid of redundant generated supporting indexes
for a foreign key were not working properly for versions of field names
using different cases.
(backported from mysql-trunk)
sql/sql_class.cc:
Make field name comparison case-insensitive like it is
in the rest of the server.
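A hypothetical, self-contained sketch of the one-line idea behind the fix (simplified Key_part type, not the real Key_part_spec; strcasecmp() from POSIX <strings.h> is assumed):

  #include <cassert>
  #include <strings.h>                   // strcasecmp() (POSIX; an assumption here)

  struct Key_part {
    const char *field_name;
    bool operator==(const Key_part &other) const {
      // Before the fix a case-sensitive comparison treated "ID" and "id" as
      // different columns, so a redundant generated index was not recognized.
      return strcasecmp(field_name, other.field_name) == 0;
    }
  };

  int main() {
    Key_part a{"CustomerId"}, b{"customerid"};
    assert(a == b);                      // same column regardless of letter case
    return 0;
  }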
"Access compatibility" syntax
The "wild" "DELETE FROM table_name.* ... USING ..." syntax
for multi-table DELETE statements is documented, but it was
lost in the fix for bug 30234.
The table_ident_opt_wild parser rule has been added
to restore the lost syntax.
mysql-test/r/delete.result:
Test case for bug #53034.
mysql-test/t/delete.test:
Test case for bug #53034.
sql/sql_yacc.yy:
Bug #53034: Multiple-table DELETE statements not accepting
"Access compatibility" syntax
The table_ident_opt_wild parser rule has been added
to restore the lost syntax.
Note: simply extending table_ident with opt_wild in
the table_alias_ref rule is not acceptable, because
a) it adds one more conflict and b) this conflict resolves
in an inappropriate way.
Check for the number of line strings in the incoming polygon data (wkb) and
for the number of points in the incoming linestring wkb.
mysql-test/r/gis.result:
Fix for bug #51875: crash when loading data into geometry function polyfromwkb
- test result.
mysql-test/t/gis.test:
Fix for bug #51875: crash when loading data into geometry function polyfromwkb
- test case.
sql/spatial.cc:
Fix for bug #51875: crash when loading data into geometry function polyfromwkb
- when creating a polygon from wkb, check the number of line strings,
- when creating a linestring from wkb, check the number of points.
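As a conceptual aid, a hypothetical, self-contained sketch of the kind of sanity check added (check_linestring(), POINT_SIZE and the little-endian assumption are illustrative, not the real spatial.cc code): a count read from untrusted WKB is validated against the bytes actually available before it is used.

  #include <cstdint>
  #include <cstring>
  #include <iostream>

  static const std::uint32_t POINT_SIZE = 16;    // two 8-byte doubles (x, y)

  // Returns false if the linestring's declared point count cannot fit in the
  // remaining data, which is the situation that previously caused the crash.
  bool check_linestring(const unsigned char *wkb, std::size_t len) {
    std::uint32_t n_points;
    if (len < sizeof(n_points)) return false;
    std::memcpy(&n_points, wkb, sizeof(n_points));    // little-endian assumed
    if (n_points == 0) return false;
    return (len - sizeof(n_points)) / POINT_SIZE >= n_points;
  }

  int main() {
    unsigned char bogus[4] = {0xff, 0xff, 0xff, 0xff};  // claims ~4 billion points
    std::cout << (check_linestring(bogus, sizeof(bogus)) ? "ok" : "rejected")
              << "\n";                                  // prints "rejected"
  }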
mysql-test/t/bug46080-master.opt:
Lower limits to be able to run tests
regex/main.c:
Fixed compiler warnings
storage/maria/ma_key_recover.c:
Fixed compiler warnings
storage/maria/ma_recovery.c:
Fixed compiler warnings
storage/maria/ma_unique.c:
Fixed compiler warnings
strings/ctype-uca.c:
Added comment
strings/xml.c:
Fixed compiler warnings
support-files/compiler_warnings.supp:
Added suppressions for windows
unittest/strings/strings-t.c:
Added ifdef to fix compilation failure when compiling without UCA
mysql-test/suite/binlog/t/binlog_row_binlog.test:
Don't run test if utf8_unicode_ci is not available
mysql-test/suite/binlog/t/binlog_stm_binlog.test:
Don't run test if utf8_unicode_ci is not available
mysql-test/suite/funcs_1/r/is_columns_is.result:
Update result
mysql-test/suite/innodb/t/innodb_misc1.test:
Don't run test if utf8_unicode_ci is not available
mysql-test/suite/innodb/t/innodb_mysql.test:
Don't run test if utf8_unicode_ci is not available
- Changed to still use bcmp() in certain cases because:
  - it is faster than memcmp() for short unaligned strings
  - it is better when using valgrind
- Changed to use my_sprintf() instead of sprintf() to get higher portability for old systems
- Changed code to use the MariaDB version of select->skip_record()
- Removed -%::SCCS/s.% from Makefile.am:s to remove automake warnings
== MYSQL_TYPE_LONGLONG
A MIN/MAX() function with a subquery as its argument could lead
to a debug assertion on debug builds or wrong data on release
ones.
The problem was a combination of the following factors:
- Item_sum_hybrid::fix_fields() might use the argument
(args[0]) to calculate 'hybrid_field_type' which was later used
to decide how the data should be sent to the client.
- Item_sum::make_field() might use the argument again to
calculate the field's type when sending result set metadata to
the client.
- The argument could be changed in between these two calls via
Item::set_arg() leading to inconsistent metadata being
reported.
Here is what was happening for the bug's test case:
1. Item_sum_hybrid::fix_fields() calculates hybrid_field_type
as MYSQL_TYPE_LONGLONG based on args[0] which is an
Item::SUBSELECT_ITEM at that time.
2. A temporary table is created to execute the
query. create_tmp_field_from_item() creates a Field_long object
according to the subselect's max_length.
3. The subselect item in Item_sum_hybrid is replaced by the
Item_field object referencing the newly created Field_long.
4. Item_sum::make_field() rightfully returns the
MYSQL_TYPE_LONG type when calculating the result set metadata.
5. When sending the actual data, Item::send() relies on the
virtual field_type() function which in our case returns
previously calculated hybrid_field_type == MYSQL_TYPE_LONGLONG.
It looks like the only solution is to never refer to the
argument's metadata after the result metadata has been
calculated in fix_fields(), since the argument itself may be
different by then. In this sense, Item_sum::make_field() should
never be used, because it may rely on the argument's metadata
and is only called after fix_fields(). The "default"
implementation in Item::make_field() should be used instead as
it relies only on field_type(), but not on the argument's type.
Fixed by removing Item_sum::make_field() so that the superclass
implementation Item::make_field() is always used.
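For illustration, a hypothetical, simplified C++ sketch of the metadata-consistency point above (stand-in classes, not the server's Item hierarchy): the result type is cached once at fix time, and all later metadata questions are answered from the cache, so substituting the argument afterwards cannot change the reported type.

  #include <cassert>

  enum class FieldType { LONG, LONGLONG };

  struct Arg { FieldType type; };

  struct MinMaxFunc {
    Arg *arg;
    FieldType hybrid_field_type;                   // decided once, at fix time

    void fix() { hybrid_field_type = arg->type; }  // like fix_fields()

    // Wrong: recomputing metadata from the argument after fix() ("make_field"
    // style) can disagree with hybrid_field_type if the argument was replaced.
    FieldType metadata_from_arg() const { return arg->type; }

    // Right: always derive metadata from the cached type (field_type() style).
    FieldType metadata() const { return hybrid_field_type; }
  };

  int main() {
    Arg subselect{FieldType::LONGLONG};
    MinMaxFunc f{&subselect, FieldType::LONG};
    f.fix();                                       // caches LONGLONG

    Arg tmp_table_field{FieldType::LONG};
    f.arg = &tmp_table_field;                      // argument substituted later

    assert(f.metadata() == FieldType::LONGLONG);     // consistent with the data sent
    assert(f.metadata_from_arg() == FieldType::LONG); // the inconsistency the bug hit
    return 0;
  }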
mysql-test/r/func_group.result:
Added a test case for bug #54465.
mysql-test/t/func_group.test:
Added a test case for bug #54465.
sql/item_sum.cc:
Removed Item_sum::make_field() so that the superclass
implementation Item::make_field() is always used.
sql/item_sum.h:
Removed Item_sum::make_field() so that the superclass
implementation Item::make_field() is always used.
Bug#46754: 'rows' field doesn't reflect partition pruning
Update of test results after fixing the above bugs.
(fix in separate commit).
mysql-test/r/partition.result:
Updated test result after fixing bugs 46754 and 53806
mysql-test/r/partition_hash.result:
Updated test result after fixing bugs 46754 and 53806
mysql-test/r/partition_innodb.result:
Updated test result after fixing bugs 46754 and 53806
mysql-test/r/partition_range.result:
Updated test result after fixing bugs 46754 and 53806
mysql-test/suite/parts/r/partition_alter3_innodb.result:
Updated test result after fixing bugs 46754 and 53806
mysql-test/suite/parts/r/partition_alter3_myisam.result:
Updated test result after fixing bugs 46754 and 53806
Bug#46754: 'rows' field doesn't reflect partition pruning
The 'rows' field in the EXPLAIN output
was evaluated as the number of rows when the table was opened
(not from the table cache), and only the partitions left
after pruning were updated with their correct number
of rows.
The evaluation of the 'rows' field was using handler::records(),
which is a potentially expensive call and ignores
partition pruning.
The fix was to instead use the handler's stats.records after updating it
with ::info(HA_STATUS_VARIABLE).
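A hypothetical, simplified sketch of the idea (generic Handler stand-in, not the storage engine API): refresh the statistics in a pruning-aware way and read stats.records, instead of calling the expensive records() that counts every partition.

  #include <cstddef>
  #include <cstdint>
  #include <iostream>
  #include <vector>

  struct Handler {
    std::vector<std::uint64_t> partition_rows;     // rows per partition
    std::vector<bool>          pruned;             // partitions removed by pruning
    struct { std::uint64_t records = 0; } stats;

    // Expensive full count over every partition; ignores pruning.
    std::uint64_t records() const {
      std::uint64_t n = 0;
      for (std::uint64_t r : partition_rows) n += r;
      return n;
    }

    // Cheap statistics refresh that honours pruning (HA_STATUS_VARIABLE style).
    void info_variable() {
      stats.records = 0;
      for (std::size_t i = 0; i < partition_rows.size(); i++)
        if (!pruned[i]) stats.records += partition_rows[i];
    }
  };

  int main() {
    Handler h{{100, 200, 300}, {false, true, true}};
    std::cout << "old 'rows': " << h.records() << "\n";     // 600, misleading
    h.info_variable();
    std::cout << "new 'rows': " << h.stats.records << "\n"; // 100, only kept partition
  }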
mysql-test/r/partition_pruning.result:
updated result
mysql-test/t/partition_pruning.test:
Added test.
sql/sql_select.cc:
Use ::info + stats.records instead of ::records().
called twice in a row
Queries with nested joins could cause an infinite loop in the
server when used from SP/PS.
When flattening nested joins, simplify_joins() tracks if the
name resolution list needs to be updated by setting
fix_name_res to TRUE if the current loop iteration has done any
transformations to the join table list. The problem was that
the flag was not reset before the next loop iteration, leading
to unnecessary "fixing" of the name resolution list, which in
turn could lead to a loop (i.e. a circularly-linked part) in that
list. This caused problems on subsequent execution when
used together with stored procedures or prepared statements.
Fixed by making sure fix_name_res is reset on every loop
iteration.
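A generic, hypothetical illustration of the loop-flag pattern (not the actual simplify_joins() code): the per-iteration flag is declared and reset inside the loop, so a transformation done in one pass cannot keep re-triggering the name-resolution update in every later pass.

  #include <iostream>
  #include <vector>

  struct JoinTable { bool needs_flattening; };

  void flatten_joins(std::vector<JoinTable> &tables) {
    for (JoinTable &t : tables) {
      bool fix_name_res = false;          // the fix: reset on every iteration
      if (t.needs_flattening) {
        t.needs_flattening = false;       // transformation performed
        fix_name_res = true;
      }
      if (fix_name_res)
        std::cout << "updating name resolution list for this table\n";
    }
  }

  int main() {
    std::vector<JoinTable> tables = {{true}, {false}, {false}};
    flatten_joins(tables);                // only the first table triggers the update
  }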
mysql-test/r/join.result:
Added a test case for bug #53544.
mysql-test/t/join.test:
Added a test case for bug #53544.
sql/sql_select.cc:
Make sure fix_name_res is reset on every loop iteration.