Added new status variables:
feature_dynamic_columns, feature_fulltext, feature_gis, feature_locale, feature_subquery, feature_timezone, feature_trigger, feature_xml,
Opened_views, Executed_triggers, Executed_events
Added new process status 'updating status' as part of 'freeing items'
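For illustration, the new counters can be inspected from SQL (a sketch; the
patterns follow the variable names listed above):

    SHOW GLOBAL STATUS LIKE 'feature%';
    SHOW GLOBAL STATUS LIKE 'Opened_views';
    SHOW GLOBAL STATUS LIKE 'Executed_triggers';
    SHOW GLOBAL STATUS LIKE 'Executed_events';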
mysql-test/r/features.result:
Test of feature_xxx status variables
mysql-test/r/mysqld--help.result:
Removed duplicated 'language' variable.
mysql-test/r/view.result:
Test of opened_views
mysql-test/suite/rpl/t/rpl_start_stop_slave.test:
Write more information on failure
mysql-test/t/features.test:
Test of feature_xxx status variables
mysql-test/t/view.test:
Test of opened_views
sql/event_scheduler.cc:
Increment executed_events status variable
sql/field.cc:
Increment status variable
sql/item_func.cc:
Increment status variable
sql/item_strfunc.cc:
Increment status variable
sql/item_subselect.cc:
Increment status variable
sql/item_xmlfunc.cc:
Increment status variable
sql/mysqld.cc:
Add new status variables to 'show status'
sql/mysqld.h:
Added executed_events
sql/sql_base.cc:
Increment status variable
sql/sql_class.h:
Add new status variables
sql/sql_parse.cc:
Added new process status 'updating status' as part of 'freeing items'
sql/sql_trigger.cc:
Increment status variable
sql/sys_vars.cc:
Increment status variable
sql/tztime.cc:
Increment status variable
Opening system statistical tables and reading statistical data from
them for a regular table should be done after opening and locking
this regular table.
No test case is provided with this patch.
Link view/derived table fields to a real table so that turning the table record into a null row can be checked.
The Item_direct_view_ref wrapper now checks whether the underlying table has been turned into a null row.
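A minimal sketch of the kind of query this affects (schema is made up): when a
view is on the inner side of an outer join and there is no matching row, its
fields, accessed through Item_direct_view_ref, must behave as a null row:

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (a INT, b INT);
    CREATE VIEW v2 AS SELECT a, b FROM t2;
    INSERT INTO t1 VALUES (1),(2);
    INSERT INTO t2 VALUES (1,10);
    -- For t1.a = 2 there is no match in v2, so v2.b must be returned as NULL
    SELECT t1.a, v2.b FROM t1 LEFT JOIN v2 ON t1.a = v2.a;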
Original patch by Eric Herman
client/mysql.cc:
Added LAST_VALUE
mysql-test/r/last_value.result:
Testing of LAST_VALUE
mysql-test/t/last_value.test:
Testing of LAST_VALUE
sql/item_func.cc:
Added LAST_VALUE()
sql/item_func.h:
Added LAST_VALUE()
sql/lex.h:
Added LAST_VALUE()
sql/sql_yacc.yy:
Added LAST_VALUE()
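A usage sketch, assuming LAST_VALUE(expr [, expr ...]) evaluates all of its
arguments and returns the last one:

    SELECT LAST_VALUE(1, 2, 3);           -- returns 3
    SELECT LAST_VALUE(@a := 1, @a + 1);   -- returns 2; setting @a is a side effect

This makes it possible to evaluate expressions purely for their side effects
while returning a chosen value.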
If a table is already in the table cache but without data from persistent
statistical tables, then the function open_and_process_table should not
only allocate memory for this statistical data in the corresponding
TABLE_SHARE object, but should also copy the references to the data into
the corresponding fields of the TABLE structure: KEY::read_stats for each
key of the table and Field::read_stats for each column of the table.
Autointersections of an object were treated as nodes, which led to a wrong result.
per-file comments:
mysql-test/r/gis.result
Bug #1043845 st_distance() results are incorrect depending on variable order.
test result updated.
mysql-test/t/gis.test
Bug #1043845 st_distance() results are incorrect depending on variable order.
test case added.
sql/item.cc
small fix to make compilers happy.
sql/item_geofunc.cc
Bug #1043845 st_distance() results are incorrect depending on variable order.
Skip intersection points when calculating the distance.
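A sketch of the invariant the fix restores (geometries are made up): the
distance must not depend on the argument order, even when one of the
geometries intersects itself:

    SET @ls = GeomFromText('LINESTRING(0 0, 2 2, 0 2, 2 0)');  -- self-intersecting
    SET @pt = GeomFromText('POINT(5 5)');
    SELECT ST_Distance(@ls, @pt) = ST_Distance(@pt, @ls);      -- expected: 1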
THOUGH IT IS NOT.
The following error message is misleading because it claims
that the BLOB space is not counted.
"ERROR 1118 (42000): Row size too large. The maximum row size for
the used table type, not counting BLOBs, is 8126. You have to
change some columns to TEXT or BLOBs"
When ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT is used,
the BLOB prefix is stored inline along with the row. So
the above error message is changed as follows depending on
the row format used:
For ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC, the error
message is as follows:
"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB may help. In current row format,
BLOB prefix of 0 bytes is stored inline."
For ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT, the error
message is as follows:
"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or
ROW_FORMAT=COMPRESSED may help. In current row
format, BLOB prefix of 768 bytes is stored inline."
rb://1252 approved by Marko Makela
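A sketch of a table definition that can run into this limit (column count is
illustrative; whether the statement fails outright or only warns and then fails
on INSERT depends on innodb_strict_mode):

    -- With ROW_FORMAT=COMPACT each BLOB/TEXT column keeps a 768-byte prefix
    -- inline, so 11 TEXT columns already exceed the ~8126-byte limit of a
    -- 16K page; ROW_FORMAT=DYNAMIC stores only 20-byte pointers inline.
    CREATE TABLE t_wide (
      c1 TEXT, c2 TEXT, c3 TEXT, c4 TEXT, c5 TEXT, c6 TEXT,
      c7 TEXT, c8 TEXT, c9 TEXT, c10 TEXT, c11 TEXT
    ) ENGINE=InnoDB ROW_FORMAT=COMPACT;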
1. Field_newdate::get_date should refuse to return a date with zeros when
TIME_NO_ZERO_IN_DATE is set, not when TIME_FUZZY_DATE is unset
2. Item_func_to_days and Item_date_add_interval can only work with valid dates,
no zeros allowed.
Fix Item_func_add_time::get_date() to generate valid dates.
Move the validity check inside get_date_from_daynr()
instead of relying on callers
(5 that had it, and 2 that did not, but should've)
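An illustrative expectation after this change (assuming the default sql_mode,
where a zero date is storable but is not a valid calendar date):

    SELECT TO_DAYS('0000-00-00');                   -- NULL, with a warning
    SELECT DATE_ADD('0000-00-00', INTERVAL 1 DAY);  -- NULL, with a warning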
- Full table scan internally uses LIMIT n and re-starts the scan from
  the last seen rowkey value. Rowkey ranges are inclusive, so we will
  see the same rowkey again and should ignore it.
The function collect_statistics_for_table(), when scanning a table,
did not take into account that the handler function ha_rnd_next
could return the code HA_ERR_RECORD_DELETED, which should not be
considered an indication of an error.
Also fixed a potential memory leak in this function.
- We use HA_MRR_NO_ASSOC ("optimizer_switch=join_cache_hashed") mode
- Not able to use BKA's buffers yet.
- There is a variable to control batch size
- There are status counters.
- Needed to make some fixes in BKA code (to be checked with Igor)
The problem was that the was_null and null_value variables were reset on each re-execution of the IN subquery, but the engine was re-run only for non-constant subqueries.
Fixed the constant check in Item_equal sorting.
Fixed constant reporting in Item_subselect.