An assertion failed in handler::ha_reset upon a SELECT under
READ UNCOMMITTED from a table with an index on a virtual column.
This was a debug-only failure, though the problem is much wider:
* MY_BITMAP is a structure containing my_bitmap_map, the latter being the
  raw bitmap.
* read_set, write_set and vcol_set of TABLE are pointers to MY_BITMAP
* The rest of the MY_BITMAPs are stored in TABLE and TABLE_SHARE
* The addresses of the stored MY_BITMAPs, like orig_read_set etc, and
  sometimes all_set and tmp_set, are assigned to those pointers.
* Sometimes tmp_use_all_columns is used to substitute the raw bitmap
  directly with all_set.bitmap
* Sometimes even the bitmaps are modified directly, like in
  TABLE::update_virtual_field(), which calls bitmap_clear_all(&tmp_set).
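In outline, these relationships look as follows (a simplified sketch, not
the exact definitions from my_bitmap.h and table.h):

  typedef unsigned int uint32;                 /* as in the server headers */
  typedef uint32 my_bitmap_map;                /* a word of the raw bitmap */

  struct MY_BITMAP
  {
    my_bitmap_map *bitmap;                     /* the raw bitmap itself */
    unsigned int n_bits;
    /* ... */
  };

  struct TABLE
  {
    MY_BITMAP *read_set, *write_set, *vcol_set; /* switchable pointers */
    MY_BITMAP tmp_set;                          /* one of the stored bitmaps;
                                                   all_set, orig_* live in
                                                   TABLE or TABLE_SHARE */
    /* ... */
  };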
The last three bullets in the list, when used together (which is almost
always the case), make the program flow cumbersome and impossible to
follow, not to mention the errors they cause, like this MDEV-17556, where
the tmp_set pointer was assigned to read_set, write_set and vcol_set, then
its bitmap was substituted with all_set.bitmap by a
dbug_tmp_use_all_columns() call, and then bitmap_clear_all(&tmp_set) was
applied on top of all this.
To untangle this knot, one rule should be applied:
* Never substitute bitmaps! This is what this patch is about.
The orig_* and all_set bitmaps are already never substituted.
This patch changes the following function prototypes:
* tmp_use_all_columns, dbug_tmp_use_all_columns
  to accept MY_BITMAP** and to return MY_BITMAP* instead of my_bitmap_map*
* tmp_restore_column_map, dbug_tmp_restore_column_maps to accept
  MY_BITMAP* instead of my_bitmap_map*
These functions will now substitute read_set/write_set/vcol_set directly,
and won't touch the underlying bitmaps.
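The resulting calling pattern, sketched (exact signatures live in
sql/table.h; only the names mentioned above are used):

  /* Swap the MY_BITMAP* pointer itself; the raw bitmaps stay untouched. */
  MY_BITMAP *old_map= tmp_use_all_columns(table, &table->read_set);

  /* ... code that may access any column of the table ... */

  tmp_restore_column_map(&table->read_set, old_map);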
Also add a new member Saved_Size to the Global structure.
modified: storage/connect/global.h
modified: storage/connect/plugutil.cpp
modified: storage/connect/user_connect.cc
modified: storage/connect/jsonudf.cpp
- Add session variables json_all_path and default_depth
modified: storage/connect/ha_connect.cc
modified: storage/connect/mongo.cpp
modified: storage/connect/tabjson.cpp
modified: storage/connect/tabxml.cpp
- Add column options JPATH and XPATH
  They work like FIELD_FORMAT but are more readable
modified: storage/connect/ha_connect.cc
modified: storage/connect/ha_connect.h
modified: storage/connect/mysql-test/connect/r/json_java_2.result
modified: storage/connect/mysql-test/connect/r/json_java_3.result
modified: storage/connect/mysql-test/connect/r/json_mongo_c.result
- Handle negative numbers in the option list
modified: storage/connect/ha_connect.cc
- Fix a Json parse that could crash the server.
  This was because it could use THROW outside of a TRY block.
  Also handle all errors by THROW. This is now done by a new class JSON
  (see the sketch after the file list).
modified: storage/connect/json.cpp
modified: storage/connect/json.h
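A throw with no active handler on the call stack calls std::terminate()
and takes the whole server down; a minimal standalone illustration (not
the actual json.cpp code):

  #include <cstdio>
  #include <stdexcept>

  /* Hypothetical stand-in for the parser's THROW. */
  static void parse_token(bool bad)
  {
    if (bad)
      throw std::runtime_error("bad token");  /* fatal without a try above */
  }

  int main()
  {
    try                           /* the fix: every THROW now has an
                                     enclosing TRY somewhere above it */
    {
      parse_token(true);
    }
    catch (const std::exception &e)
    {
      fprintf(stderr, "parse error: %s\n", e.what()); /* report, not abort */
    }
    return 0;
  }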
- Add a new UDF function jfile_translate.
  It translates a Json file to pretty = 0.
  It is fast because it does not do a real parse of the file
  (the idea is sketched after the file list).
modified: storage/connect/jsonudf.cpp
modified: storage/connect/jsonudf.h
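The "no real parse" idea amounts to one streaming pass that drops
whitespace occurring outside string literals (illustrative only;
minify_json is a hypothetical helper, not the jsonudf.cpp code):

  #include <cstdio>

  /* Copy JSON from 'in' to 'out' with insignificant whitespace removed.
     No parse tree is built, so the cost is one pass over the bytes. */
  static void minify_json(FILE *in, FILE *out)
  {
    bool in_string= false, escaped= false;
    int c;

    while ((c= getc(in)) != EOF)
    {
      if (in_string)
      {
        if (escaped)        escaped= false;
        else if (c == '\\') escaped= true;     /* next char is escaped */
        else if (c == '"')  in_string= false;  /* closing quote */
        putc(c, out);
      }
      else if (c == '"')
      {
        in_string= true;                       /* opening quote */
        putc(c, out);
      }
      else if (c != ' ' && c != '\t' && c != '\n' && c != '\r')
        putc(c, out);                   /* drop whitespace between tokens */
    }
  }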
- Add new options JSIZE and STRINGIFY to Json tables.
  STRINGIFY makes Objects or Arrays be returned as their json
  representation instead of as their concatenated values.
  JSIZE allows specifying the LRECL (was 256); it defaults to 1024.
  Also fix a bug in locating a sub-table by its path.
modified: storage/connect/tabjson.cpp
modified: storage/connect/tabjson.h
modified: storage/connect/ha_connect.cc
- Allow JSON columns to be "binary"
  by setting their type to VARBINARY(132)
  and making their names begin with Jbin_
modified: storage/connect/json.h
modified: storage/connect/jsonudf.cpp
modified: storage/connect/tabjson.cpp
modified: storage/connect/value.cpp
modified: storage/connect/value.h
- CHARSET BINARY cannot be used for text columns
modified: storage/connect/mysql-test/connect/r/updelx.result
modified: storage/connect/mysql-test/connect/t/updelx.test
The variable connect_work_size is now ulong, or ulonglong on 64-bit machines.
modified: storage/connect/ha_connect.cc
modified: storage/connect/user_connect.cc
All variables handling sizes that were uint are now size_t.
The variable connect_work_size is now ulong (was uint).
Also make Json functions allocate a larger memory (M=9, was 7).
modified: storage/connect/global.h
modified: storage/connect/ha_connect.cc
modified: storage/connect/json.cpp
modified: storage/connect/jsonudf.cpp
modified: storage/connect/plgdbutl.cpp
modified: storage/connect/plugutil.cpp
modified: storage/connect/user_connect.cc
- Fix an uninitialised variable (pretty) in Json_File.
  Make Jbin_file accept the same arguments as Json_File.
modified: storage/connect/jsonudf.cpp
- Change the Level option to Depth (the word currently used)
  (Level is still accepted)
modified: storage/connect/mongo.cpp
modified: storage/connect/tabjson.cpp
modified: storage/connect/tabxml.cpp
- Remove the default value of the 2nd argument of the MYSQLtoPLG function
modified: storage/connect/myutil.h
- Allow REST tables to be created without specifying a file_name
modified: storage/connect/tabrest.cpp
  and enable using special columns in them.
modified: storage/connect/tabzip.cpp
modified: storage/connect/tabzip.h
- Fix some compiler errors
modified: storage/connect/tabcmg.cpp
The code used is largely based on code from Tencent.
The problem is that in some rare cases there may be a conflict between .frm
files and the files in the storage engine. In this case the DROP TABLE
was not able to properly drop the table.
Some MariaDB/MySQL forks have solved this by adding a FORCE option to
DROP TABLE. After some discussion among MariaDB developers, we concluded
that users expect that DROP TABLE should always work, even if the
table is not consistent. There should not be a need to use a
separate keyword to ensure that the table is really deleted.
The solution used is:
- If the .frm file doesn't exist, try dropping the table from all storage
  engines.
- If the .frm file exists but the table does not exist in its engine,
  try dropping the table from all storage engines.
- Update storage engines using many table files (CSV, MyISAM, Aria) to
  succeed with the drop even if some of the files are missing.
- Add HTON_AUTOMATIC_DELETE_TABLE to handlertons where delete_table()
  is not needed and always succeeds (see the sketch after the next
  paragraph). This is used by ha_delete_table_force() to know which
  handlers to ignore when trying to drop a table without a .frm file.
The disadvantage of this solution is that a DROP TABLE on a non-existing
table will be a bit slower, as we have to ask all active storage engines
whether they know anything about the table.
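How an engine opts in, sketched from the description above (the plugin
init function and engine name are hypothetical boilerplate):

  static int example_init(void *p)
  {
    handlerton *hton= (handlerton *) p;

    /* The engine's files vanish together with the table, so
       ha_delete_table_force() should not probe it for orphans. */
    hton->flags|= HTON_AUTOMATIC_DELETE_TABLE;
    /* ... the rest of the usual handlerton setup ... */
    return 0;
  }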
Other things:
- Added a new flag MY_IGNORE_ENOENT to my_delete() to not give an error
  if the file doesn't exist. This simplifies some of the code (see the
  sketch after this list).
- Don't clear thd->error in ha_delete_table() if there was an active
error. This is a bug fix.
- handler::delete_table() will not abort if the first file doesn't exist.
  This is a bug fix to handle the case when a drop table was aborted in
  the middle.
- Cleaned up mysql_rm_table_no_locks() to ensure that if_exists uses the
  same code path as when it's not used.
- Use non_existing_table_error() to detect if the table didn't exist.
  The old code used different error tests in different places.
- Table_triggers_list::drop_all_triggers() now drops the trigger file if
  it can't be parsed, instead of leaving it hanging around (bug fix).
- InnoDB no longer prints an error about the .frm file being out of sync
  with the InnoDB dictionary if the .frm file does not exist. This change
  was required to be able to try to drop an InnoDB table when the .frm
  file doesn't exist.
- Fixed a bug in mi_delete_table() where the .MYD file would not be
  dropped if the .MYI file didn't exist.
- Fixed a memory leak in Mroonga when deleting a non-existing table.
- Fixed a memory leak in Connect when deleting a non-existing table.
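The MY_IGNORE_ENOENT simplification, sketched (drop_leftover_file is a
hypothetical helper; the flag and my_delete() are as described above):

  #include <my_global.h>
  #include <my_sys.h>

  /* Remove a file that may or may not exist; returns 0 on success.
     Without the flag, callers had to special-case ENOENT by hand. */
  static int drop_leftover_file(const char *path)
  {
    return my_delete(path, MYF(MY_WME | MY_IGNORE_ENOENT));
  }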
Bugs introduced by the original version of this commit and fixed here:
MDEV-22826 Presence of Spider prevents tables from being force-deleted from
other engines
Prototype change:
- virtual ha_rows records_in_range(uint inx, key_range *min_key,
- key_range *max_key)
+ virtual ha_rows records_in_range(uint inx, const key_range *min_key,
+ const key_range *max_key,
+ page_range *res)
The handler can ignore the page_range parameter. In case the handler
updates the parameter, the optimizer can deduce the following:
- Whether the previous range's last key is on the same block as the next
  range's first key
- Whether the current key range is within one block
- We can also assume that the first and last blocks read are cached!
This can be used for a better calculation of IO seeks when we
estimate the cost of a range index scan.
The parameter is fully implemented for MyISAM, Aria and InnoDB.
A separate patch will update handler::multi_range_read_info_const() to
take advantage of this change and also remove the duplicate
records_in_range() calls that are no longer needed (a sketch of a handler
filling the parameter follows).
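A self-contained sketch of an engine filling the new out-parameter (the
stand-in types mirror sql/handler.h; the page_range field names and the
block-number lookups are assumptions):

  /* Minimal stand-ins so the sketch compiles on its own. */
  typedef unsigned long long ha_rows;
  struct key_range;                              /* opaque here */
  struct page_range { unsigned long long first_page, last_page; };

  static ha_rows example_records_in_range(unsigned inx,
                                          const key_range *min_key,
                                          const key_range *max_key,
                                          page_range *pages)
  {
    (void) inx; (void) min_key; (void) max_key;
    /* ... descend the index to the leaves holding the two keys ... */
    unsigned long long first_block= 0, last_block= 0; /* hypothetical */
    ha_rows rows= 10;                                 /* hypothetical */

    if (pages)                     /* optional: reporting the blocks lets
                                      the optimizer merge ranges sharing a
                                      block and treat the edge blocks as
                                      cached */
    {
      pages->first_page= first_block;
      pages->last_page= last_block;
    }
    return rows;
  }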
- Only indentation changes in sql_rename.cc
- Ignore some WSREP error messages when there isn't an internet connection
- Force a restart of stat_tables_part.test to make the result stable
- Fixed compiler warnings in CONNECT
Import complex XML from multiple files in MariaDB.
Some row results were missing, replaced by those of the last file.
That's because the Nx and Sx column members were not reset when changing
files.
modified: storage/connect/tabxml.cpp
modified: storage/connect/tabxml.h
Compilation failed when the XML table type is not supported.
This was because XMLDEF was unconditionally called from REST tables.
modified: storage/connect/tabrest.cpp
- Make cmake less verbose
modified: storage/connect/CMakeLists.txt
- Hide Switch_to_definer_security_ctx, which is not defined for 10.1 and 10.0
modified: storage/connect/ha_connect.cc