Similar to the tables SYS_FOREIGN and SYS_FOREIGN_COLS,
the tables mysql.innodb_table_stats and mysql.innodb_index_stats
are updated by the InnoDB internal SQL parser, which fails to
enforce the size limits of the data. Because of this, InnoDB can hang
when persistent statistics are defined on partitioned tables where
the total length of table name, partition name and subpartition name
exceeds the incorrectly defined limit VARCHAR(64). The table_name
column should have been defined as VARCHAR(199).
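For reference, a minimal sketch of the schema fix that the upgrade
script applies (the exact statement form here is an assumption):

  ALTER TABLE mysql.innodb_table_stats MODIFY table_name VARCHAR(199) NOT NULL;
  ALTER TABLE mysql.innodb_index_stats MODIFY table_name VARCHAR(199) NOT NULL;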
btr_node_ptr_max_size(): Interpret the VARCHAR(64) as VARCHAR(199),
to prevent a hang in the case that the upgrade script has not been
run.
dict_table_schema_check(): Ignore difference in the length of the
table_name column.
ha_innobase::max_supported_key_length(): For innodb_page_size=4k,
return a larger value so that the table mysql.innodb_index_stats
can be created. This could allow "impossible" tables to be created,
where nothing can be inserted into a secondary index because both
the secondary key and the primary key are long; but this is the
easiest and most consistent way. The Oracle fix
would only ignore the maximum length violation for the two
statistics tables.
os_file_get_status_posix(), os_file_get_status_win32(): Handle
ENAMETOOLONG as well.
This patch is based on the following change in MySQL 5.7.23.
Not all changes were applied, and our variant allows persistent
statistics to work without hangs even if the table definitions
were not upgraded.
From fdbdce701ab8145ae234c9d401109dff4e4106cb Mon Sep 17 00:00:00 2001
From: Aditya A <aditya.a@oracle.com>
Date: Thu, 17 May 2018 16:11:43 +0530
Subject: [PATCH] Bug #26390736 THE FIELD TABLE_NAME (VARCHAR(64)) FROM
MYSQL.INNODB_TABLE_STATS CAN OVERFLOW.
In the mysql.innodb_index_stats and mysql.innodb_table_stats
tables, the table name column did not take into account
partition names, which can be longer than VARCHAR(64).
Problem:
The logic in store_column_type() with a switch on field type was
hard to follow. The part for MEDIUMINT (MYSQL_TYPE_INT24) was not correct.
It erroneously calculated the precision of MEDIUMINT UNSIGNED
as 7 instead of 8.
A similar hard-to-follow switch doing some type specific calculations
resided in adjust_max_effective_column_length(). It was also wrong for
MEDIUMINT (reported as a separate issue in MDEV-15946).
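To illustrate the expected numbers (hypothetical table t1; the digit
counts follow directly from the type ranges):

  CREATE TABLE t1 (a MEDIUMINT, b MEDIUMINT UNSIGNED);
  SELECT column_name, numeric_precision
  FROM information_schema.columns
  WHERE table_schema = DATABASE() AND table_name = 't1';
  -- a: 7 digits (max 8388607), b: 8 digits (max 16777215);
  -- the old switch wrongly reported 7 for the unsigned case.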
Solution:
1. Introducing a new class Information_schema_numeric_attributes
2. Adding a new virtual method Field::information_schema_numeric_attributes()
3. Splitting the logic in store_column_type() into virtual
implementations of information_schema_numeric_attributes().
4. In order to avoid adding duplicate code for the integer data types,
adding a new virtual method Field_int::numeric_precision(),
which returns the number of digits.
Additional changes:
1. Adding the "const" qualifier to Field::max_display_length()
2. Moving the code from adjust_max_effective_column_length()
directly to Field::max_display_length().
There was no sense in having two implementations:
- a set of wrong virtual implementations for Field_xxx::max_display_length()
- additional code in adjust_max_effective_column_length() fixing
bad results of Field_xxx::max_display_length()
This change is safe:
- The code using Field::max_display_length()
in field.cc, sql_show.cc, sql_type.cc is not affected.
- The code in rpl_utility.cc is also not affected.
See a new DBUG_ASSERT and new comments explaining why.
In the new version, Field_xxx::max_display_length() returns
correct results for all integer types (except MEDIUMINT, see below).
Putting implementations of numeric_precision() and max_display_length()
near each other in field.h made the logic much clearer and thus
helped to reveal bad results for Field_medium::max_display_length(),
which returns 9 instead of 8 for signed MEDIUMINT fields.
This problem will be addressed separately (MDEV-15946).
Note, this change is also useful for pluggable data types (see MDEV-4912),
as now a user defined Field_xxx has a way to control what's returned
in INFORMATION_SCHEMA.COLUMNS.NUMERIC_PRECISION and
INFORMATION_SCHEMA.COLUMNS.NUMERIC_SCALE by implementing
a desired behavior in Field_xxx::information_schema_numeric_attributes().
- CREATE PACKAGE [BODY] statements are now
entirely written to mysql.proc with type='PACKAGE' and type='PACKAGE BODY'.
- CREATE PACKAGE BODY now supports IF NOT EXISTS
- DROP PACKAGE BODY now supports IF EXISTS
- CREATE OR REPLACE PACKAGE [BODY] is now supported
- CREATE PACKAGE [BODY] now supports the DEFINER clause:
CREATE DEFINER user@host PACKAGE pkg ... END;
CREATE DEFINER user@host PACKAGE BODY pkg ... END;
- CREATE PACKAGE [BODY] now supports SQL SECURITY and COMMENT clauses, e.g.:
CREATE PACKAGE p1 SQL SECURITY INVOKER COMMENT "comment" AS ... END;
- Package routines are now created from the CREATE PACKAGE BODY
statement and don't produce individual records in mysql.proc.
- CREATE PACKAGE BODY now supports package-wide variables.
Package variables can be read and set inside package routines.
Package variables are stored in a separate sp_rcontext,
which is cached in THD on the first package routine call
(see the combined sketch after this list).
- CREATE PACKAGE BODY now supports the initialization section.
- All public routines (i.e. declared in CREATE PACKAGE)
must have implementations in CREATE PACKAGE BODY
- Only public package routines are available outside of the package
- {CREATE|DROP} PACKAGE [BODY] now respects CREATE ROUTINE and ALTER ROUTINE
privileges
- "GRANT EXECUTE ON PACKAGE BODY pkg" is now supported
- SHOW CREATE PACKAGE [BODY] is now supported
- SHOW PACKAGE [BODY] STATUS is now supported
- CREATE and DROP for PACKAGE [BODY] now works for non-current databases
- mysqldump now supports packages
- "SHOW {PROCEDURE|FUNCTION) CODE pkg.routine" now works for package routines
- "SHOW PACKAGE BODY CODE pkg" now works (the package initialization section)
- A new package body level MDL was added
- Recursive calls for package procedures are now possible
- Routine forward declarations in CREATE PACKAGE BODY are now supported.
- Package body variables now work as SP OUT parameters
- Package body variables now work as SELECT INTO targets
- Package body variables now support ROW, %ROWTYPE, %TYPE
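A combined sketch of the syntax described above (hypothetical package
pkg; assumes sql_mode=ORACLE, which the package syntax requires):

  SET sql_mode=ORACLE;
  DELIMITER $$
  CREATE OR REPLACE PACKAGE pkg AS  -- public routines
    PROCEDURE p1;
    FUNCTION f1 RETURN INT;
  END;
  $$
  CREATE OR REPLACE PACKAGE BODY pkg AS
    cnt INT := 0;                   -- package-wide variable
    PROCEDURE p1 AS
    BEGIN
      cnt := cnt + 1;
    END;
    FUNCTION f1 RETURN INT AS
    BEGIN
      RETURN cnt;
    END;
  BEGIN
    cnt := 0;                       -- initialization section
  END;
  $$
  DELIMITER ;
  CALL pkg.p1();
  SELECT pkg.f1();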
- Max_index_length is supported by MyISAM and Aria tables (see the
sketch after this list).
- Temporary is a placeholder to signal that a table is a
temporary table. For the moment this is always "N", except
"Y" for generated information_schema tables and NULL for
views. Full temporary table support will be done in another task.
(No reason to have to update a lot of result files twice in a row)
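A sketch of where the new columns are visible (the information_schema
column names used here are an assumption):

  SELECT table_name, max_index_length, temporary
  FROM information_schema.tables
  WHERE table_schema = 'test';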
Standard-compatible behavior for UPDATE: all assignments in SET
are executed "simultaneously", not left-to-right, so `SET a=b, b=a`
swaps the values.
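For example (hypothetical table t1):

  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1, 2);
  UPDATE t1 SET a = b, b = a;
  -- Both assignments read the old values, so the row becomes (2, 1)
  -- instead of the left-to-right result (2, 2).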
This commit implements aggregate stored functions. The basic idea behind
the feature is:
* Implement a special instruction FETCH GROUP NEXT ROW that will pause
the execution of the stored function. When the instruction is reached,
execution of the initial query resumes "as if" the function returned.
This gives the server the opportunity to advance to the next row in the
result set.
* Stored aggregates behave like regular aggregate functions. The
implementation thus resides in the class Item_sum_sp. Because it is
an aggregate function, for each new row in the group, the
Item_sum_sp::add() method will be called. This is when execution resumes
and the function does another iteration to "add" one extra element to
the final result.
* When the end of the group is reached, the val_xxx() method will be called for
the item. This case is handled by another execute step for the stored
function, only with a special flag to force a call to the return
handler. See Item_sum_sp::execute() for details.
To allow these pause-and-resume semantics, we must preserve the function
context across executions. This is stored in Item_sp::sp_query_arena only for
aggregate stored functions, but has no impact for regular functions.
We also require aggregate stored functions to include the "FETCH GROUP NEXT ROW"
instruction.
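A minimal sketch of such a function (hypothetical names; the NOT FOUND
handler returns the result when the group is exhausted):

  DELIMITER $$
  CREATE AGGREGATE FUNCTION f_sum(x INT) RETURNS INT
  BEGIN
    DECLARE total INT DEFAULT 0;
    DECLARE CONTINUE HANDLER FOR NOT FOUND RETURN total;
    LOOP
      FETCH GROUP NEXT ROW;
      SET total = total + x;
    END LOOP;
  END$$
  DELIMITER ;
  SELECT f_sum(a) FROM t1 GROUP BY b;  -- hypothetical table t1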
Signed-off-by: Vicențiu Ciorbaru <vicentiu@mariadb.org>
This was done to get more information about where time is spent.
Now we can get proper timing for time spent in commit, rollback,
binlog write etc.
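A sketch of how the new timings can be read back (assumes the stage
instruments and consumers are enabled in the performance schema):

  SELECT event_name, count_star, sum_timer_wait
  FROM performance_schema.events_stages_summary_global_by_event_name
  WHERE event_name LIKE 'stage/sql/%commit%'
     OR event_name LIKE 'stage/sql/binlog write%';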
The following stages were added:
- Commit
- Commit_implicit
- Rollback
- Rollback implicit
- Binlog write
- Init for update
- This is used instead of "Init" for insert, update and delete.
- Starting cleanup
The following stages were changed:
- "Unlocking tables" stage reset stage to previous stage at end
- "binlog write" stage resets stage to previous stage at end
- "end" -> "end of update loop"
- "cleaning up" -> "Reset for next command"
- Added stage_searching_rows_for_update when searching for rows
to be deleted.
Other things:
- Renamed all stages to start with a capital letter (before there was no
consistency)
- Increased performance_schema_max_stage_classes from 150 to 160.
- Most of the test changes in performance schema comes from renaming of
stages.
- Removed duplicate output of variables and initial state in a lot of
performance schema tests.
This was done to make it easier to change a default value for a
performance variable without affecting all tests.
- Added start_server_variables.test to check configuration
- Removed some duplicate "closing tables" stages
- Updated position for "stage_init_update" and "stage_updating" for
delete, insert and update to be just before update loop (for more
exact timing).
- Don't set "Checking permissions" twice in a row.
- Remove stage_end stage from creating views (not done for create table
either).
- Updated default performance history size from 10 to 20 because of new
stages
- Ensure that ps_enabled is correct (to be used in a later patch)
The background is that one user had a lot of views, and with some complex
queries on the information schema more than 2GB of temporary memory was used.
- Added new element 'total_alloc' to MEM_ROOT for easier debugging.
- Added MAX_MEMORY_USED to information_schema.processlist (see the
sketch after this list).
- Added new status variable "Memory_used_initial" that shows how much MariaDB
uses at startup. This gives the base value for "Memory_used".
- Reuse memory continuously for information schema queries instead of
only freeing memory at query end.
Other things:
- Removed some unneeded set_notnull() calls for NOT NULL columns.
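A sketch of how the new counters can be inspected:

  SELECT id, memory_used, max_memory_used
  FROM information_schema.processlist;
  SHOW GLOBAL STATUS LIKE 'Memory_used%';  -- includes Memory_used_initial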
This is a 10.3 specific part of MDEV-13049.
It disables automatic sorting for
"SELECT .. FROM INFORMATION_SCHEMA.{SCHEMATA|TABLES}"
and adjusts the affected tests accordingly.
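Queries that relied on the implicit order now need an explicit ORDER BY, e.g.:

  SELECT table_schema, table_name
  FROM information_schema.tables
  ORDER BY table_schema, table_name;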
The old behavior of returning the affected rows for the last statement
in a stored procedure was more an accident than design. Having the number
of affected rows for all sub-statements is more useful and will not change
just because one changes the order of statements in the stored procedure.