Commit Graph

91 Commits

Author SHA1 Message Date
Oleksandr Byelkin
ba00960fda Merge branch 'bb-11.8-release' into bb-12.1-release 2025-11-05 08:58:12 +01:00
Sergei Golubchik
5ce9a03602 Merge branch '11.4' into 11.8 2025-11-04 12:39:27 +01:00
Sergei Golubchik
1093a2f3b8 Merge branch '10.11' into 11.4 2025-11-03 14:23:51 +01:00
Oleksandr Byelkin
4af88ced48 Merge branch '11.8' into bb-12.1-release 2025-10-28 15:26:26 +01:00
Oleksandr Byelkin
6d0be016fa Merge branch '11.4' into bb-11.8-release 2025-10-24 12:25:01 +02:00
Rucha Deodhar
85567aba45 MDEV-34081: View containing JSON_TABLE does not return JSON
Analysis:
While writing the view to the .FRM file, we check the datatype of each
column and append the appropriate type to the string (which will be
written to the frm). This is where the conversion from JSON to longtext
happens, because that is how it is stored internally.
Now, during SELECT, when the frm is read, it has longtext instead of JSON,
which also results in changing the handler type. Since the handler types
don't match, m_format_json becomes false for that specific column.
Now, when filling the values, since the format is not JSON, the value does
not get added to the result. Hence the output is NULL.

Fix:
Before writing the view to the FRM file, check if the datatype for the
column is JSON (which means m_format_json will be true). If it is JSON,
append JSON.
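
For illustration, a hypothetical repro of the shape this describes (names
and data are made up):

  -- The JSON_TABLE column is declared as JSON; before the fix the view's
  -- frm recorded it as longtext, so selecting through the view gave NULL.
  CREATE VIEW v1 AS
    SELECT jt.jcol
    FROM JSON_TABLE('[{"a": [1, 2]}]', '$[*]'
                    COLUMNS (jcol JSON PATH '$.a')) AS jt;
  SELECT jcol FROM v1;  -- expected [1, 2]; was NULL before the fix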
2025-10-22 22:49:26 +05:30
Marko Mäkelä
dfdf572910 Merge 10.11 into 11.4 2025-10-17 09:05:29 +03:00
Daniel Black
5bd91f6c0e MDEV-27898 CREATE VIEW AS SELECT FROM JSON_TABLE column requires global privileges
JSON_TABLE is marked as a special "*any_db*" table. This special
marking is processed all the way through to get_column_grant, where
it is treated as if it were in a database called "*any_db*". As this
database doesn't exist, only users with global privileges could create
views on a JSON_TABLE.

Under the Prepared Statement protocol a Create_tmp_table is
used for the JSON_TABLE, but it gets assigned an "" (empty) database
name.

We correct this to give it "*any_db*", like the SQL parser does,
indicating that no database is needed.

To handle both cases uniformly, fill_effective_table_privileges is
corrected to look explicitly for "*any_db*": tables that
have this as the database name get SELECT privileges.

While correcting the database for the JSON_TABLE, let's give
it the name "json_table" rather than "(temporary)" for
greater clarity in warning messages.
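
A sketch of a statement that failed before the fix when run by a user
holding only database-level privileges (names are hypothetical):

  CREATE VIEW db1.v1 AS
    SELECT jt.a
    FROM JSON_TABLE('{"a": 1}', '$'
                    COLUMNS (a INT PATH '$.a')) AS jt;
  -- Previously this required global privileges, because the grant check
  -- looked for a real database literally named "*any_db*".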
2025-10-16 10:24:29 +11:00
Oleksandr Byelkin
58a2184f5d Add Flags to Item::walk.
Removed the hack of passing flags through parameters.

The order of arguments was changed to prevent uncontrolled merges
(also this order looks better).
2025-08-04 12:05:53 +02:00
Sergei Golubchik
aab83aecdc Merge branch '11.8' into 12.0
main/statistics_json.result is updated for f8ba5ced55 (MDEV-36099)

The test uses 'delete from t1' in many places and then populates
the table again. The natural order of rows in a MyISAM table is well
defined and the test was implicitly relying on that.

Before f8ba5ced55, DELETE was deleting rows one by one, using
ha_myisam::delete_row(), because the connection was stuck in rbr mode.
This caused rows to be shown in reverse insertion order (because of
the deleted-records link list).

MDEV-36099 fixes this bug and the server now correctly uses
ha_myisam::delete_all_rows(). This makes rows be shown in
insertion order, as expected.
2025-07-31 20:55:47 +02:00
Sergei Golubchik
b565b3e7e0 Merge branch '11.4' into 11.8 2025-07-28 21:29:29 +02:00
Sergei Golubchik
c4ed889b74 Merge branch '10.11' into 11.4 2025-07-28 19:40:10 +02:00
Daniel Black
dbeef00562 MDEV-37052 JSON_SCHEMA_VALID stack overflow handling errors
Since MDEV-33209 (09ea2dc788),
stack overflow errors are simply injected instead of relying on
frailer mechanisms that consume actual stack. These mechanisms were
not carried forward to JSON_TABLE or JSON_SCHEMA_VALID, where
the pattern was the same.

add_extra_deps also no longer recursively iterates under
out-of-stack conditions.

Tests performed in json_debug_nonembedded(_noasan).
2025-07-05 10:47:44 +10:00
Daniel Black
96045fb53a MDEV-37052: JSON_TABLE stack overflow handling errors
The recursive nature of add_table_function_dependencies
resolution meant that the detection of a stack overrun
would continue to recursively call itself.
It's quite possible that a user's SQL could get multiple
ER_STACK_OVERRUN_NEED_MORE errors.

Additionally, the result of the stack overrun check
was incorrectly assigned to a table_map result.

It is only because of the "if error" check, performed after
add_table_function_dependencies is called, which detected the
stack overrun error, that a potentially corrupted table_map
was prevented from being processed.

Corrected add_table_function_dependencies to stop and
return on the detection of a stack overrun error.

The add_extra_deps call also returned true on a stack overrun.
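
The dependencies resolved here come from JSON_TABLE arguments that refer
to other tables in the FROM list; a sketch of the shape involved (names
are hypothetical):

  -- jt depends on t1.js, so the table function must be placed after t1;
  -- chains of such references are what the recursive resolution walks.
  SELECT jt.a
  FROM t1,
       JSON_TABLE(t1.js, '$' COLUMNS (a INT PATH '$.a')) AS jt;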
2025-07-09 08:57:47 +10:00
Daniel Black
e79aa9ca38 MDEV-37052: JSON_TABLE stack overflow handling errors
main.json_debug_nonembedded_noasan fails because of stack
overrun on Debug + MSAN testing.

Since MDEV-33209 (09ea2dc788),
stack overflow errors are simply injected instead of relying on
frailer mechanisms that consume actual stack. These mechanisms were
not carried forward to the JSON_TABLE functions, where
the pattern was the same.

Related MDEV-34099 (cf1c381bb8) makes check_stack_overrun never fail
under Address Sanitizer (only).

The previous ALLOCATE_MEM_ON_STACK did consume memory under
MemorySanitizer, but check_stack_overrun failed because its 16000-byte
safety margin was exceeded. The allocation of the 448-byte
ER_STACK_OVERRUN_NEED_MORE error is well within these bounds; however,
under the safemalloc implementation the "backtrace" library call is made,
which does further allocation for every stack frame. This exceeds the stack.

Fixes:

The JSON_TABLE debug instrumentation that triggers on out of memory is
replaced with the mechanism from MDEV-33209.

get_disallowed_table_deps_for_list in a non-Debug build incorrectly
returned 1 instead of -1 to indicate the out-of-memory condition.

In json_table, add_extra_deps never passed the out-of-memory error
condition to the caller and would continue to run in a loop, potentially
recursively, under these near out-of-stack conditions.

The Memory, Undefined Behaviour, Address and Thread sanitizers provide
sufficient instrumentation and a backtrace, so the safemalloc
functionality provides insufficient value with these. As such it is
disabled under WITH_SAFEMALLOC=AUTO.

With all of these corrected, main.json_debug_nonembedded_noasan no
longer needs its ASAN exclusion.

The JSON_TABLE tests in this test case were dropped in a merge from 10.6,
so they are re-added.
2025-07-05 10:44:07 +10:00
Vasilii Lakhin
717c12de0e Fix typos in C comments inside sql/ 2025-03-14 12:08:56 +04:00
Sergei Golubchik
ae59127158 cleanup: lex_string_set3() 2024-11-05 14:00:47 -08:00
Marko Mäkelä
43465352b9 Merge 11.4 into 11.6 2024-10-03 16:09:56 +03:00
Marko Mäkelä
12a91b57e2 Merge 10.11 into 11.2 2024-10-03 13:24:43 +03:00
Marko Mäkelä
63913ce5af Merge 10.6 into 10.11 2024-10-03 10:55:08 +03:00
Rucha Deodhar
753e7d6d7c MDEV-27412: JSON_TABLE doesn't properly unquote strings
Analysis:
The value gets appended as a quoted string instead of the unescaped
JSON value.

Fix:
Append the JSON value into a temporary string and then store it in the
field, instead of storing it directly as a string.
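
A minimal illustration of the unquoting behaviour (data is made up):

  SELECT jt.a
  FROM JSON_TABLE('{"a": "foo\\nbar"}', '$'
                  COLUMNS (a VARCHAR(20) PATH '$.a')) AS jt;
  -- Before the fix the escaped JSON text was stored verbatim; with it,
  -- the unescaped value (foo, newline, bar) is stored in the column.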
2024-10-01 13:45:46 +05:30
Sergei Petrunia
d0a6a7886b MDEV-25822 JSON_TABLE: default values should allow non-string literals
(Polished initial patch by Alexey Botchkov)
Make the code handle DEFAULT values of any datatype

- Make Json_table_column::On_response::m_default be Item*, not LEX_STRING.
- Change the parser to use string literal non-terminals for producing
  the DEFAULT value
-- Also, stop updating json_table->m_text_literal_cs for the DEFAULT
   value literals as it is not used.
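
With this change a non-string DEFAULT literal works, e.g. (hypothetical
data):

  SELECT jt.a
  FROM JSON_TABLE('{}', '$'
                  COLUMNS (a INT PATH '$.a' DEFAULT 42 ON EMPTY)) AS jt;
  -- Previously the DEFAULT value had to be a string literal.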
2024-09-27 13:52:17 +03:00
Oleksandr Byelkin
ea75a0b600 Merge branch '11.4' into 11.5 2024-08-05 17:50:18 +02:00
Oleksandr Byelkin
dced6cbdb6 Merge branch '11.1' into 11.2 2024-08-03 09:50:16 +02:00
Oleksandr Byelkin
80abd847da Merge branch '10.11' into 11.1 2024-08-03 09:32:42 +02:00
Oleksandr Byelkin
0fe39d368a Merge branch '10.6' into 10.11 2024-07-22 15:14:50 +02:00
Dave Gosselin
02e38e2ece MDEV-33971 NAME_CONST in WHERE clause replaced by inner item
Improve performance of queries like
  SELECT * FROM t1 WHERE field = NAME_CONST('a', 4);
by, in this example, replacing the WHERE clause with field = 4
in the case of ref access.

The rewrite is done during fix_fields and we disambiguate this
case from other cases of NAME_CONST by inspecting where we are
in parsing.  We rely on THD::where to accomplish this.  To
improve performance there, we change the type of THD::where to
be an enumeration, so we can avoid string comparisons during
Item_name_const::fix_fields.  Consequently, this patch also
changes all usages of THD::where to conform likewise.
2024-07-10 17:23:43 -04:00
Monty
b9f5793176 MDEV-9101 Limit size of created disk temporary files and tables
Two new variables added:
- max_tmp_space_usage : Limits the temporary space allowance per user
- max_total_tmp_space_usage: Limits the temporary space allowance for
  all users.

New status variables: tmp_space_used & max_tmp_space_used
New field in information_schema.process_list: TMP_SPACE_USED

The temporary space is counted for:
- All SQL level temporary files. This includes files for filesort,
  transaction temporary space, analyze, binlog_stmt_cache etc.
  It does not include engine internal temporary files used for repair,
  alter table, index pre sorting etc.
- All internal on disk temporary tables created as part of resolving a
  SELECT, multi-source update etc.

Special cases:
- When doing a commit, the last flush of the binlog_stmt_cache
  will not cause an error even if the temporary space limit is exceeded.
  This is to avoid giving errors on commit. This means that a user
  can temporarily go over the limit with up to binlog_stmt_cache_size.

Noteworthy issue:
- One has to be careful when using small values for max_tmp_space_usage
  together with binary logging and with non-transactional tables.
  If the binary log entry for the query is bigger than
  binlog_stmt_cache_size and one hits the limit of max_tmp_space_usage
  when flushing the entry to disk, the query will abort and the
  binary log will not contain the last changes to the table.
  This will also stop the slave!
  This is also true for all Aria tables as Aria cannot do rollback
  (except in case of crashes)!
  One way to avoid it is to use @@binlog_format=statement for
  queries that update a lot of rows.

Implementation:
- All writes to temporary files or internal temporary tables that
  increase the file size are routed through temp_file_size_cb_func(),
  which updates and checks the temp space usage.
- Most of the temporary file monitoring is done inside IO_CACHE;
  temporary file monitoring for Aria tables is done inside the Aria engine.
- MY_TRACK and MY_TRACK_WITH_LIMIT are new flags for init_io_cache().
  MY_TRACK means that we track the file usage. MY_TRACK_WITH_LIMIT means
  that we track the file usage and give an error if the limit is
  breached. This is used to not give an error on commit when
  binlog_stmt_cache is flushed.
- global_tmp_space_used contains the total tmp space used so far.
  This is needed to quickly check against max_total_tmp_space_usage.
- Temporary space errors are using EE_LOCAL_TMP_SPACE_FULL and
  handler errors are using HA_ERR_LOCAL_TMP_SPACE_FULL.
  This is needed until we move general errors to their own error space
  so that they cannot conflict with system error numbers.
- Return value of my_chsize() and mysql_file_chsize() has changed
  so that -1 is returned in the case my_chsize() could not decrease
  the file size (very unlikely and will not happen on modern systems).
  All calls to _chsize() are updated to check for > 0 as the error
  condition.
- At the destruction of THD we check that THD::tmp_file_space == 0
- At server end we check that global_tmp_space_used == 0
- As a precaution against errors in the tmp_space_used code, one can set
  max_tmp_space_usage and max_total_tmp_space_usage to 0 to disable
  the tmp space quota errors.
- truncate_io_cache() function added.
- Aria tables using static or dynamic row length are registered in 8K
  increments to avoid some calls to update_tmp_file_size().

Other things:
- Ensure that all handler errors are registered.  Before, some engine
  errors could be printed as "Unknown error".
- Fixed bug in filesort() that caused an assert if there was an error
  when writing to the temporary file.
- Fixed compute_window_func() to take write errors into account.
- In case of parallel replication, rpl_group_info::cleanup_context()
  could call trans_rollback() with thd->error set, which would cause
  an assert. Fixed by resetting the error before calling trans_rollback().
- Fixed bug in subselect3.inc which caused the following test to use
  heap tables with a low value for max_heap_table_size.
- Fixed bug in sql_expression_cache where it did not overflow a
  heap table to an Aria table.
- Added Max_tmp_disk_space_used to slow query log.
- Fixed some bugs in log_slow_innodb.test
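
A usage sketch of the new knobs (variable names as listed above; exact
scopes and values are illustrative):

  SET GLOBAL max_total_tmp_space_usage = 16*1024*1024*1024;  -- all users
  SET max_tmp_space_usage = 1024*1024*1024;                  -- this session
  SHOW STATUS LIKE 'tmp_space_used';
  SELECT ID, TMP_SPACE_USED FROM information_schema.PROCESSLIST;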
2024-05-27 12:39:04 +02:00
Alexander Barkov
fd247cc21f MDEV-31340 Remove MY_COLLATION_HANDLER::strcasecmp()
This patch also fixes:
  MDEV-33050 Build-in schemas like oracle_schema are accent insensitive
  MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
  MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
  MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
  MDEV-33088 Cannot create triggers in the database `MYSQL`
  MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
  MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
  MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
  MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
  MDEV-33120 System log table names are case insensitive with lower-cast-table-names=0

- Removing the virtual function strcasecmp() from MY_COLLATION_HANDLER

- Adding a wrapper function CHARSET_INFO::streq(), to compare
  two strings for equality. For now it calls strnncoll() internally.
  In the future it will turn into a virtual function.

- Adding new accent sensitive case insensitive collations:
    - utf8mb4_general1400_as_ci
    - utf8mb3_general1400_as_ci
  They implement accent sensitive case insensitive comparison.
  The weight of a character is equal to the code point of its
  upper case variant. These collations use Unicode-14.0.0 casefolding data.

  The result of
     my_charset_utf8mb3_general1400_as_ci.strcoll()
  is very close to the former
     my_charset_utf8mb3_general_ci.strcasecmp()

  There is only a difference in a couple dozen rare characters, because:
    - the switch from "tolower" to "toupper" comparison, to make
      utf8mb3_general1400_as_ci closer to utf8mb3_general_ci
    - the switch from Unicode-3.0.0 to Unicode-14.0.0
  This difference should be tolerable. See the list of affected
  characters in the MDEV description.

  Note, utf8mb4_general1400_as_ci correctly handles non-BMP characters!
  Unlike utf8mb4_general_ci, it does not treat all BMP characters
  as equal.

- Adding classes representing names of the file based database objects:

    Lex_ident_db
    Lex_ident_table
    Lex_ident_trigger

  Their comparison collation depends on the underlying
  file system case sensitivity and on --lower-case-table-names
  and can be either my_charset_bin or my_charset_utf8mb3_general1400_as_ci.

- Adding classes representing names of other database objects,
  whose names have case insensitive comparison style,
  using my_charset_utf8mb3_general1400_as_ci:

  Lex_ident_column
  Lex_ident_sys_var
  Lex_ident_user_var
  Lex_ident_sp_var
  Lex_ident_ps
  Lex_ident_i_s_table
  Lex_ident_window
  Lex_ident_func
  Lex_ident_partition
  Lex_ident_with_element
  Lex_ident_rpl_filter
  Lex_ident_master_info
  Lex_ident_host
  Lex_ident_locale
  Lex_ident_plugin
  Lex_ident_engine
  Lex_ident_server
  Lex_ident_savepoint
  Lex_ident_charset
  engine_option_value::Name

- All the mentioned Lex_ident_xxx classes implement a method streq():

  if (ident1.streq(ident2))
     do_equal();

  This method works as a wrapper for CHARSET_INFO::streq().

- Changing a lot of "LEX_CSTRING name" to "Lex_ident_xxx name"
  in class members and in function/method parameters.

- Replacing all calls like
    system_charset_info->coll->strcasecmp(ident1, ident2)
  to
    ident1.streq(ident2)

- Taking advantage of the C++11 user-defined literal operator
  for LEX_CSTRING (see m_strings.h) and Lex_ident_xxx (see lex_ident.h)
  data types. Use example:

  const Lex_ident_column primary_key_name= "PRIMARY"_Lex_ident_column;

  is now a shorter version of:

  const Lex_ident_column primary_key_name=
    Lex_ident_column({STRING_WITH_LEN("PRIMARY")});
2024-04-18 15:22:10 +04:00
Alexander Barkov
75f25e4ca7 MDEV-30164 System variable for default collations
This patch adds a way to override default collations
(or "character set collations") for desired character sets.

The SQL standard says:
> Each collation known in an SQL-environment is applicable to one
> or more character sets, and for each character set, one or more
> collations are applicable to it, one of which is associated with
> it as its character set collation.

In MariaDB, character set collations have been hard-coded so far,
e.g. utf8mb4_general_ci has been the hard-coded character set collation
for utf8mb4.

This patch makes it possible to override (globally per server, or per
session) character set collations, so that, for example, uca1400_ai_ci
can be set as the character set collation for Unicode character sets
(instead of the compiled-in xxx_general_ci).

The array of overridden character set collations is stored in a new
(session and global) system variable @@character_set_collations and
can be set as a comma-separated list of charset=collation pairs, e.g.:

SET @@character_set_collations='utf8mb3=uca1400_ai_ci,utf8mb4=uca1400_ai_ci';

The variable is empty by default, which means the hard-coded
character set collations are used (e.g. utf8mb4_general_ci for utf8mb4).

The variable can also be set globally on the server startup command
line, and/or in my.cnf.
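
The override then affects newly created objects, e.g. (a sketch, assuming
the uca1400_ai_ci collation is available):

  SET @@character_set_collations= 'utf8mb4=uca1400_ai_ci';
  CREATE TABLE t1 (a TEXT) CHARACTER SET utf8mb4;
  -- SHOW CREATE TABLE t1 now reports a uca1400_ai_ci column collation
  -- instead of utf8mb4_general_ci.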
2023-07-17 14:56:17 +04:00
Rucha Deodhar
358b8495f5 MDEV-27128: Implement JSON Schema Validation FUNCTION
Implementation:
The implementation follows JSON Schema validation draft 2020.

A JSON schema basically has the same structure as a JSON object,
consisting of key-value pairs, so it can be parsed in the same manner as
any JSON object.

However, none of the keywords are mandatory, so guessing the JSON value
type based only on the keywords would be incorrect. Hence we need
separate objects denoting each keyword.

So, during create_object_and_handle_keyword() we create appropriate
objects based on the keywords and validate each of them individually
against the JSON document by calling the respective validate() function
if the type matches. If any of them fails, return false; else return true.
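
A quick usage sketch of the resulting function (schema and documents are
examples):

  -- Returns 1 when the document validates against the schema, 0 otherwise.
  SELECT JSON_SCHEMA_VALID('{"type":"array", "maxItems": 2}', '[1, 2]');
  SELECT JSON_SCHEMA_VALID('{"type":"array", "maxItems": 2}', '[1, 2, 3]');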
2023-04-26 11:00:08 +05:30
Marko Mäkelä
4c355d4e81 Merge 10.11 into 11.0 2023-03-17 15:03:17 +02:00
Alexander Barkov
4703638775 MDEV-30805 SIGSEGV in my_convert and UBSAN: member access within null pointer of type 'const struct MY_CHARSET_HANDLER' in my_convert
Type_handler::partition_field_append_value() erroneously
passed the address of my_collation_contextually_typed_binary
to conversion functions copy_and_convert() and my_convert().

This happened because generate_partition_syntax_for_frm()
was called from mysql_create_frm_image() in the stage when
the fields in List<Create_field> can still contain unresolved
contextual collations, like "binary" in the reported crash scenario:

  ALTER TABLE t CHANGE COLUMN a a CHAR BINARY;

Fix:

1. Splitting mysql_prepare_create_table() into two parts:
   - mysql_prepare_create_table_stage1() iterates through
     List<Create_field> and calls Create_field::prepare_stage1(),
     which performs basic attribute initialization, including
     context collation resolution.
   - mysql_prepare_create_table_finalize() - the rest of the
     old mysql_prepare_create_table() code.

2. Changing mysql_create_frm_image():
   It now calls:
   - mysql_prepare_create_table_stage1() in the very
     beginning, before the partition related code.
   - mysql_prepare_create_table_finalize() in the end,
    instead of the old mysql_prepare_create_table() call

3. Adding mysql_prepare_create_table() as a wrapper
   for two calls:
     mysql_prepare_create_table_stage1() ||
     mysql_prepare_create_table_finalize()
   so the code stays unchanged in the other places
   where mysql_prepare_create_table() was used.

4. Changing prototype for Type_handler::Column_definition_prepare_stage1()
   Removing arguments:
   - handler *file
   - ulonglong table_flags
   Adding a new argument instead:
   - column_definition_type_t type
   This allows to call Column_definition_prepare_stage1() and
   therefore to call mysql_prepare_create_table_stage1()
   before instantiation of a handler.
   This simplifies the code, because in the case of a partitioned table,
   mysql_create_frm_image() creates a handler for the underlying
   partition first, then frees it and creates a ha_partition
   instance instead.
   mysql_prepare_create_table() before the fix was called with the final
   (ha_partition) handler.

5. Moving parts of Column_definition_prepare_stage1() which
   need a pointer to handler and table_flags to
   Column_definition_prepare_stage2().
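
The crash scenario needs a partitioned table, so that
generate_partition_syntax_for_frm() runs while the contextual collation
is still unresolved; a minimal sketch:

  CREATE TABLE t (a CHAR(10)) PARTITION BY KEY(a) PARTITIONS 2;
  ALTER TABLE t CHANGE COLUMN a a CHAR BINARY;  -- crashed before the fix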
2023-03-14 11:50:52 +04:00
Monty
c1512b1e7c Added "override" to ha_heap.h, ha_myisam.h, ha_myisammrg.h and ha_sequence.h
Added override to a few functions in ha_partition.h
2023-02-03 10:56:49 +03:00
Monty
b66cdbd1ea Changing all cost calculation to be given in milliseconds
This makes it easier to compare different costs and also allows
the optimizer to optimize different storage engines more reliably.

- Added tests/check_costs.pl, a tool to verify optimizer cost calculations.
  - Most engine costs have been found with this program. All steps to
    calculate the new costs are documented in Docs/optimizer_costs.txt

- User optimizer_cost variables are given in microseconds (as individual
  costs can be very small). Internally they are stored in ms.
- Changed DISK_READ_COST (was DISK_SEEK_BASE_COST) from a hard disk cost
  (9 ms) to common SSD cost (400MB/sec).
- Removed cost calculations for hard disks (rotation etc).
- Changed the following handler functions to return IO_AND_CPU_COST.
  This makes it easy to apply different cost modifiers in ha_..time()
  functions for io and cpu costs.
  - scan_time()
  - rnd_pos_time() & rnd_pos_call_time()
  - keyread_time()
- Enhanced keyread_time() to calculate the full cost of reading a set
  of keys with a given number of ranges and optional number of blocks that
  need to be accessed.
- Removed read_time() as keyread_time() + rnd_pos_time() can do the same
  thing and more.
- Tuned cost for: heap, myisam, Aria, InnoDB, archive and MyRocks.
  Used heap table costs for json_table. The rest are using default engine
  costs.
- Added the following new optimizer variables:
  - optimizer_disk_read_ratio
  - optimizer_disk_read_cost
  - optimizer_key_lookup_cost
  - optimizer_row_lookup_cost
  - optimizer_row_next_find_cost
  - optimizer_scan_cost
- Moved all engine specific cost to OPTIMIZER_COSTS structure.
- Changed costs to use 'records_out' instead of 'records_read' when
  recalculating costs.
- Split optimizer_costs.h to optimizer_costs.h and optimizer_defaults.h.
  This allows one to change costs without having to compile a lot of
  files.
- Updated costs for filter lookup.
- Use a better cost estimate in best_extension_by_limited_search()
  for the sorting cost.
- Fixed previous issues with 'filtered' explain column as we are now
  using 'records_out' (min rows seen for table) to calculate filtering.
  This greatly simplifies the filtering code in
  JOIN_TAB::save_explain_data().

This change caused a lot of queries to be optimized differently than
before, which exposed different issues in the optimizer that needs to
be fixed.  These fixes are in the following commits.  To not have to
change the same test case over and over again, the changes in the test
cases are done in a single commit after all the critical change sets
are done.

InnoDB changes:
- Updated InnoDB to not divide big range cost by 2.
- Added cost for InnoDB (innobase_update_optimizer_costs()).
- Don't mark clustered primary key with HA_KEYREAD_ONLY. This will
  prevent that the optimizer is trying to use index-only scans on
  the clustered key.
- Disabled ha_innobase::scan_time(), ha_innobase::read_time() and
  ha_innobase::rnd_pos_time(), as the default engine cost functions now
  work well for InnoDB.

Other things:
- Added  --show-query-costs (\Q) option to mysql.cc to show the query
  cost after each query (good when working with query costs).
- Extended my_getopt with GET_ADJUSTED_VALUE, which allows one to adjust
  the value that the user gives. This is used to change costs from
  microseconds (user input) to milliseconds (what the server uses
  internally).
- Added include/my_tracker.h; a useful include file to quickly test
  the cost of a function.
- Use handler::set_table() in all places instead of 'table= arg'.
- Added SHOW_OPTIMIZER_COSTS to sys variables. These are input and
  shown in microseconds for the user but stored as milliseconds.
  This is to make the numbers easier to read for the user (fewer
  leading zeros).  Implemented in the 'Sys_var_optimizer_cost' class.
- In test_quick_select() do not use index scans if 'no_keyread' is set
  for the table. This is what we do in other places of the server.
- Added THD parameter to Unique::get_use_cost() and
  check_index_intersect_extension() and similar functions to be able
  to provide costs to called functions.
- Changed 'records' to 'rows' in optimizer_trace.
- Write more information to optimizer_trace.
- Added INDEX_BLOCK_FILL_FACTOR_MUL (4) and INDEX_BLOCK_FILL_FACTOR_DIV (3)
  to calculate usage space of keys in b-trees. (Before we used numeric
  constants).
- Removed code that assumed that b-trees have costs similar to binary
  trees. Replaced with engine calls that return the cost.
- Added Bitmap::find_first_bit()
- Added timings to join_cache for ANALYZE table (patch by Sergei Petrunia).
- Added records_init and records_after_filter to POSITION to remember
  more of what best_access_path() calculates.
- table_after_join_selectivity() changed to recalculate 'records_out'
  based on the new fields from best_access_path().

Bug fixes:
- Some queries did not update last_query_cost (was 0). Fixed by moving
  setting thd->...last_query_cost in JOIN::optimize().
- Write '0' as number of rows for const tables with a matching row.

Some internals:
- Engine costs are stored in the OPTIMIZER_COSTS structure.  When a
  handlerton is created, we also create a new cost variable for the
  handlerton. We also create a new variable if the user changes an
  optimizer cost for a not yet loaded handlerton, either with command
  line arguments or with SET
  @@global.engine.optimizer_cost_variable=xx.
- There are 3 global OPTIMIZER_COSTS variables:
  default_optimizer_costs   The default costs + changes from the
                            command line without an engine specifier.
  heap_optimizer_costs      Heap table costs, used for temporary tables
  tmp_table_optimizer_costs The cost for the default on disk internal
                            temporary table (MyISAM or Aria)
- The engine cost for a table is stored in table_share. To speed up
  accesses the handler has a pointer to this. The cost is copied
  to the table on first access. If one wants to change the cost one
  must first update the global engine cost and then do a FLUSH TABLES.
  This was done to be able to access the costs for an open table
  without any locks.
- When a handlerton is created, the costs are updated the following way:
  See sql/keycaches.cc for details:
  - Use 'default_optimizer_costs' as a base
  - Call hton->update_optimizer_costs() to override with the engine's
    default costs.
  - Override the costs that the user has specified for the engine.
  - On handler open, copy the engine cost from the handlerton to TABLE_SHARE.
  - Call handler::update_optimizer_costs() to allow the engine to update
    cost for this particular table.
  - There are two costs stored in THD. These are copied to the handler
    when the table is used in a query:
    - optimizer_where_cost
    - optimizer_scan_setup_cost
- Simplify code in best_access_path() by storing all cost results in a
  structure. (Idea/suggestion by Igor)
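
A sketch of inspecting and overriding costs per the scheme above
(variable and engine names follow the patterns listed; values are
illustrative):

  SHOW GLOBAL VARIABLES LIKE 'optimizer%cost';
  -- Engine-specific override (given in microseconds), kept even if the
  -- engine is loaded later:
  SET @@global.innodb.optimizer_disk_read_cost= 20;
  FLUSH TABLES;  -- tables pick up the changed costs on next open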
2023-02-02 23:54:45 +03:00
Oleksandr Byelkin
d86ad1f127 Merge branch '10.8' into 10.9 2022-10-17 12:39:25 +02:00
Oleksandr Byelkin
ec2b30e736 Merge branch '10.6' into 10.7 2022-10-16 21:40:33 +02:00
Rucha Deodhar
5a9a80a213 MDEV-28480: Assertion `0' failed in Item_row::illegal_method_call on
SELECT FROM JSON_TABLE

Analysis: When fix_fields_if_needed() is called, it doesn't check whether
operands are valid, because check_cols() is not called. So it doesn't
error out and eventually crashes.
Fix: Use fix_fields_if_needed_for_scalar() instead of
fix_fields_if_needed(). It filters the scalar and returns the error if
it occurs.
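
A plausible shape of the failing statement, reconstructed from the
analysis (hypothetical):

  SELECT *
  FROM JSON_TABLE((1, 2), '$' COLUMNS (a INT PATH '$.a')) AS jt;
  -- A row expression where a scalar JSON document is expected; with the
  -- fix this returns an error instead of hitting the assertion.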
2022-10-13 14:55:27 +05:30
Alexander Barkov
0333ddd3ec 10.9 specific fixes for MDEV-29446 Change SHOW CREATE TABLE to display default collations
Fixing the JSON_TABLE related code according to MDEV-29446.
2022-09-22 12:53:05 +04:00
Marko Mäkelä
97d16c7544 Merge 10.6 into 10.7 2022-08-02 08:30:18 +03:00
Rucha Deodhar
bfdc4ff22e MDEV-28762: recursive call of some json functions without stack control
Fixup to MDEV-28762. Fixes warnings about unused variable "stack_used_up"
during building with RelWithDebInfo
2022-07-29 16:34:57 +05:30
Rucha Deodhar
a6f7c8edc9 MDEV-28762: recursive call of some json functions without stack control
Fixup to MDEV-28762. Fixes warnings about unused variable "stack_used_up"
during building with RelWithDebInfo
2022-07-29 15:58:38 +05:30
Marko Mäkelä
f53f64b7b9 Merge 10.8 into 10.9 2022-07-28 10:47:33 +03:00
Marko Mäkelä
742e1c727f Merge 10.6 into 10.7 2022-07-27 18:26:21 +03:00
Rucha Deodhar
e94902c062 MDEV-28762: recursive call of some json functions without stack control
This commit is a fixup for MDEV-28762

Analysis: Some recursive JSON functions don't check for stack overrun.
Fix: Add check_stack_overrun(). The last argument is NULL because it is
not used.
2022-07-26 10:05:37 +05:30
Rucha Deodhar
0ea221e12b MDEV-28762: recursive call of some json functions without stack control
Analysis: Some recursive JSON functions don't check for stack overrun.
Fix: Add check_stack_overrun(). The last argument is NULL because it is
not used.
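
Illustrative only (which functions recurse, and the nesting depth needed,
depend on the build): with enough nesting the affected functions now fail
gracefully with ER_STACK_OVERRUN_NEED_MORE instead of crashing.

  -- A hypothetical probe using a deeply nested document:
  SELECT JSON_EXTRACT(CONCAT(REPEAT('[', 100000),
                             REPEAT(']', 100000)), '$[0]');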
2022-07-20 19:37:29 +05:30
Marko Mäkelä
5a33a37682 Merge 10.8 into 10.9 2022-06-07 09:20:07 +03:00
Marko Mäkelä
712b443a3c Merge 10.6 into 10.7 2022-06-02 07:48:30 +03:00
Rucha Deodhar
a61603562e MDEV-25875: JSON_TABLE: extract document fragment into JSON column
Fixup
2022-05-31 12:09:29 +05:30
Alexey Botchkov
a9f6abedde MDEV-25875: JSON_TABLE: extract document fragment into JSON column
Accept JSON values for the JSON fields.
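
For example, a JSON-typed column can now receive a document fragment
(names and data are illustrative):

  SELECT jt.frag
  FROM JSON_TABLE('{"obj": {"a": 1, "b": [2, 3]}}', '$'
                  COLUMNS (frag JSON PATH '$.obj')) AS jt;
  -- frag contains the fragment {"a": 1, "b": [2, 3]}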
2022-05-31 12:09:11 +05:30