Commit Graph

3202 Commits

Author SHA1 Message Date
Sergei Golubchik
244f0e6dd8 Merge branch '10.3' into 10.4 2019-09-06 11:53:10 +02:00
Monty
a071e0e029 Merge branch '10.2' into 10.3 2019-09-03 13:17:32 +03:00
Monty
9cba6c5aa3 Updated mtr files to support different compiled-in options
This allows one to run the test suite even if any of the following
options are changed:
- character-set-server
- collation-server
- join-cache-level
- log-basename
- max-allowed-packet
- optimizer-switch
- query-cache-size and query-cache-type
- skip-name-resolve
- table-definition-cache
- table-open-cache
- Some InnoDB options, etc.

Changes:
- Don't print out the value of system variables as one can't depend on
  them being constant.
- Don't set global variables to 'default' as the default may not
  be the same as what the test was started with if there was an additional
  option file. Instead, save the original value and restore it at the end
  of the test (see the sketch below).
- Tests that depend on the latin1 character set should include
  default_charset.inc or set the character set to latin1.
- Tests that depend on the original optimizer switch should include
  default_optimizer_switch.inc.
- Tests that depend on the value of a specific system variable should
  set it in the test (like optimizer_use_condition_selectivity).
- Split subselect3.test into subselect3.test and subselect3.inc to
  make it easier to set and reset system variables.
- Added .opt files for tests that require specific options that could
  be changed by external configuration files.
- Fixed result files in rocksdb & tokudb that had not been updated for
  a while.
2019-09-01 19:17:35 +03:00
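
A minimal sketch of the save-and-restore pattern described above, using one of
the variables listed in this commit (the concrete value is illustrative only):

  -- Save the original value instead of relying on 'default', which may
  -- differ when an extra option file is in effect.
  SET @save_table_open_cache= @@global.table_open_cache;

  -- Set the value the test actually needs.
  SET GLOBAL table_open_cache= 512;

  -- ... test body ...

  -- Restore the original value at the end of the test.
  SET GLOBAL table_open_cache= @save_table_open_cache;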
Monty
d558981e2c Remove tests that were only applicable to MySQL
These were part of an incomplete old merge from MySQL 5.6
2019-09-01 19:17:34 +03:00
Marko Mäkelä
db4a27ab73 Merge 10.3 into 10.4 2019-08-31 06:53:45 +03:00
Marko Mäkelä
e41eb044f1 Merge 10.2 into 10.3 2019-08-28 10:18:41 +03:00
Sujatha
e7b71e0daa MDEV-19925: Column ... cannot be converted from type 'varchar(20)' to type 'varchar(20)'
Cherry picking:
Bug#25135304: RBR: WRONG FIELD LENGTH IN ERROR MESSAGE
commit 47bd3f7cf3c8518f62b1580ec65af2ba7ac13b95

Description:
============
In row-based replication, when replicating from a table with a field whose
character set is UTF8mb3 to the same table where the same field's character
set is UTF8mb4, a confusing error message is produced:

For VARCHAR: VARCHAR(1) 'utf8mb3' to VARCHAR(1) 'utf8mb4'
"Column 0 of table 'test.t1' cannot be converted from type 'varchar(3)' to
type 'varchar(1)'"

Similar issue with CHAR type as well.

Issue with respect to BLOB types:

For BLOB: LONGBLOB to TINYBLOB - Error message displays incorrect blob type.
"Column 0 of table 'test.t1' cannot be converted from type 'tinyblob' to type
'tinyblob'"

For BINARY to BINARY - Error message displays incorrect type for master side
field.
"Column 0 of table 'test.t' cannot be converted from type 'char(1)' to type
'binary(10)'"
Similar issue exists for VARBINARY type. It is displayed as 'VARCHAR'.

Analysis:
=========
In row-based replication, charset information is not sent as part of the
metadata from master to slave.

For a VARCHAR field, its character length is converted into the equivalent
number of octets/bytes and stored internally. When the data is displayed to
the user, it is converted back to the original character length.

For example:
VARCHAR(2) with charset utf8mb3 is stored as 2*3 = VARCHAR(6).
When displaying it to the user:
VARCHAR(6) with charset utf8mb3: 6/3 = VARCHAR(2).

At present the internally converted octet length is sent from master to slave
without the charset information. On the slave side, if the type conversion
fails, the 'show_sql_type' function is used to get the type-specific
information from the metadata. Since no charset information is available,
the field type is displayed as VARCHAR(6).

This results in a confusing error message.

For CHAR fields
CHAR(1)- utf8mb3 - CHAR(3)
CHAR(1)- utf8mb4 - CHAR(4)

The 'show_sql_type' function, which retrieves type information from the
metadata, uses (bytes / local charset length) to get the actual character
length. If the slave's charset is 'utf8mb4' then

CHAR(3/4)-->CHAR(0)
CHAR(4/4)-->CHAR(1).

This results in a confusing error message.

Analysis for BLOB type issue:

BLOB's length is represented in two forms.
1. Actual length
i.e
  (length < 256) type= MYSQL_TYPE_TINY_BLOB;
  (length < 65536) type= MYSQL_TYPE_BLOB; ...

2. packlength - The number of bytes used to represent the length of the blob
  1- tinyblob
  2- blob ...

In row-based replication only the packlength is written to the binary log. On
the slave side this packlength is interpreted as the actual length of the
blob. Hence the length is always < 256 and the type is displayed as tinyblob.

Analysis for BINARY to BINARY type issue:
The character set information is needed to identify a field's type as char or
binary. Since the master-side character set information is not available on
the slave side, both binary and char fields are displayed as char.

Fix:
===
For CHAR and VARCHAR fields, display their length in octets for both source
and target fields. For the target field, display the charset information if
it is relevant.

For BLOB types, the code was changed to use the packlength and display the
appropriate blob type in the error message.

For BINARY and VARBINARY fields, use the slave-side character set as the
reference to map them to binary or varbinary fields.
2019-08-27 13:05:04 +05:30
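
An illustrative pair of definitions for the scenario described above (not
taken from the commit's test suite); 'utf8' here is the 3-byte utf8mb3:

  -- Master: VARCHAR(20) in utf8 (utf8mb3) is stored internally as 20*3 = 60 octets.
  CREATE TABLE t1 (a VARCHAR(20) CHARACTER SET utf8);

  -- Slave: the same column in utf8mb4 is stored internally as 20*4 = 80 octets.
  CREATE TABLE t1 (a VARCHAR(20) CHARACTER SET utf8mb4);

  -- With row-based replication the type conversion fails on the slave, and
  -- before this fix the error reported the lengths without any charset
  -- context, which made both sides look like the same type.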
Monty
c4fd167d5a Fixed access to uninitialized memory when using unique HASH key
Fixed the following issues:
- Call info with HA_STATUS_CONST to ensure that (key_info->rec_per_key)
  contains the latest data
- Don't access rec_per_key if key_info->algorithm == HA_KEY_ALG_LONG_HASH,
  as in this case rec_per_key points to uninitialized data
- Cleaned up code to avoid some extra 'if' and to make things more readable
- Updated test cases that used 'old' rec_per_key values
2019-08-13 17:19:00 +03:00
Oleksandr Byelkin
2792c6e7b0 Merge branch '10.3' into 10.4 2019-07-28 13:43:26 +02:00
Oleksandr Byelkin
d97342b6f2 Merge branch '10.2' into 10.3 2019-07-26 22:42:35 +02:00
Oleksandr Byelkin
32c6f40a63 Merge branch '10.1' into 10.2 2019-07-26 13:39:17 +02:00
Oleksandr Byelkin
4177181e16 Merge branch 'merge-tokudb-5.6' into 10.1 2019-07-26 10:48:12 +02:00
Oleksandr Byelkin
24a0d7c507 5.6.44-86.0 2019-07-26 08:48:46 +02:00
Marko Mäkelä
e9c1701e11 Merge 10.3 into 10.4 2019-07-25 18:42:06 +03:00
Sergei Golubchik
d78a14c599 cmake 3.14.3 warnings 2019-07-12 19:38:10 +02:00
Eugene Kosov
c9aa495fb6 MDEV-19955 make argument of handler::ha_write_row() const
MDEV-19486 and one more similar bug appeared because the handler::write_row()
interface allows the storage engine to modify the buffer. But callers are not
ready for that, so more such bugs are possible in the future.

handler::write_row():
handler::ha_write_row(): make argument const
2019-07-05 13:14:19 +03:00
Eugene Kosov
a82e42fd13 NFC: refactor Field::is_equal() and related stuff
Make Field::is_equal() const and return bool, as it's a naturally fitting
type for it. Also its argument was narrowed to Column_definition.

InnoDB can change the type of some columns by itself. InnoDB-specific code
used to reside in the Field_xxx::is_equal() methods. Now the engine-specific
code was moved to the virtual methods of
handler::can_convert{string,varstring,blob,geom}. These methods are called
by Field::can_be_converted_by_engine(), which is a double dispatch pattern.

Some InnoDB-specific code still resides in compare_keys_but_name(). It should
be moved from here someday to handler::compare_key_parts(...) or similar.

IS_EQUAL_WITH_REINTERPRET_COMPATIBLE_CHARSET
IS_EQUAL_WITH_REINTERPRET_COMPATIBLE_CHARSET_BUT_COLLATE: both were removed.

IS_EQUAL_NO, IS_EQUAL_YES are not needed now and should be removed
along with the deprecated handler::check_if_incompatible_data().

HA_EXTENDED_TYPES_CONVERSION: was removed, as such logic is no longer needed
by the server code.

ALTER_COLUMN_EQUAL_PACK_LENGTH: was renamed to a more generic
ALTER_COLUMN_TYPE_CHANGE_BY_ENGINE
2019-06-22 14:09:12 +03:00
Oleksandr Byelkin
f66d1850ac Merge branch '10.3' into 10.4 2019-06-14 22:10:50 +02:00
Oleksandr Byelkin
4a3d51c76c Merge branch '10.2' into 10.3 2019-06-14 07:36:47 +02:00
Marko Mäkelä
4bbd8be482 Merge 10.1 into 10.2 2019-06-12 10:30:01 +03:00
Sergey Vojtovich
dd939d6f7e MDEV-15734 - calculation inside sizeof() warning
Reverted incorrect change introduced by 548d03d7.

As result is char**, the third qsort() parameter must be sizeof(char*).
Not sizeof(result[0] + 2), which is the same as sizeof(result[0]).
Not even sizeof(result[0]) + 2, which would cause invalid memory access.

Proper sorting is the responsibility of the logfilenamecompare() callback.
2019-05-30 19:52:31 +04:00
Oleksandr Byelkin
c07325f932 Merge branch '10.3' into 10.4 2019-05-19 20:55:37 +02:00
Sergei Golubchik
2400e06946 remove -fno-rtti 2019-05-18 20:34:03 +02:00
Marko Mäkelä
be85d3e61b Merge 10.2 into 10.3 2019-05-14 17:18:46 +03:00
Marko Mäkelä
26a14ee130 Merge 10.1 into 10.2 2019-05-13 17:54:04 +03:00
Oleksandr Byelkin
c51f85f882 Merge branch '10.2' into 10.3 2019-05-12 17:20:23 +02:00
Vicențiu Ciorbaru
cb248f8806 Merge branch '5.5' into 10.1 2019-05-11 22:19:05 +03:00
Vicențiu Ciorbaru
5543b75550 Update FSF Address
* Update wrong zip-code
2019-05-11 21:29:06 +03:00
Monty
44b8b002f5 Disable 5733_tokudb as the result is not stable 2019-05-09 11:24:06 +03:00
Oleksandr Byelkin
8cbb14ef5d Merge branch '10.1' into 10.2 2019-05-04 17:04:55 +02:00
Vladislav Vaintroub
e8778f1c7c MDEV-19265 Server should throw warning if event is created and event_scheduler = OFF 2019-04-28 12:49:59 +02:00
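
A small sketch of the behaviour this change introduces (the exact warning text
is not reproduced here):

  SET GLOBAL event_scheduler = OFF;

  -- With this change, creating an event while the scheduler is off should
  -- raise a warning so the user knows the event will not run until the
  -- scheduler is enabled again.
  CREATE EVENT ev1 ON SCHEDULE EVERY 1 HOUR DO SET @dummy = 1;
  SHOW WARNINGS;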
Sergei Golubchik
f22ed2779f Merge branch 'merge-tokudb-5.6' into 10.1 2019-04-26 18:26:23 +02:00
Sergei Golubchik
33d8a28367 5.6.43-84.3 2019-04-26 17:02:15 +02:00
Varun Gupta
d1a43973ef Adjusted result for tokudb_bugs.db756_card_part_hash_2_pick 2019-04-26 12:50:26 +05:30
Varun Gupta
87472974cd Results updated for tokudb tests 2019-04-25 22:05:54 +05:30
Marko Mäkelä
c8f8d5ceb7 Merge 10.3 into 10.4 2019-04-03 11:43:39 +03:00
Marko Mäkelä
c6b8b05be4 Merge 10.2 into 10.3 2019-04-03 11:22:51 +03:00
Marko Mäkelä
dbc716675b Merge 10.1 into 10.2 2019-04-03 10:32:21 +03:00
Sergei Golubchik
409f69cd74 cmake: only search for libraries that are needed
in particular, don't search for libjemalloc.a, which is only
needed for tokudb's ftcxx tests, when the tests aren't going
to be built.
2019-04-02 18:22:37 +02:00
Michael Widenius
b5615eff0d Write information about restart in .result
Idea comes from MySQL which does something similar
2019-04-01 19:47:24 +03:00
Sachin
d00f19e832 MDEV-371 Unique Index for long columns
This patch implements an engine-independent unique hash index.

Usage:- A unique HASH index can be created automatically for a blob/varchar/text
 column whose key length > handler->max_key_length(),
 or it can be explicitly specified.

  Automatic Creation:-
   Create TABLE t1 (a blob unique);
  Explicit Creation:-
   Create TABLE t1 (a int , unique(a) using HASH);

Internal KEY_PART Representations:-
 A long unique key_info will have 2 representations.
 (let's understand this with an example: create table t1(a blob, b blob, unique(a, b)); )

 1. User Given Representation:- the key_info->key_part array will be similar to what
 the user has defined. So in the case of the example it will have 2 key_parts (a, b).

 2. Storage Engine Representation:- In this case there will be only one key_part and it
 will point to the HASH_FIELD. This key_part will always be after the user-defined key_parts.

 So:- User Given Representation          [a] [b] [hash_key_part]
                  key_info->key_part ----^
  Storage Engine Representation          [a] [b] [hash_key_part]
                  key_info->key_part ------------^

 table->s->key_info will have the User Given Representation, while table->key_info will
 have the Storage Engine Representation. The representations can be changed into each
 other by calling the re/setup_keyinfo_hash functions.

Working:-

1. When the user specifies a HASH index or the key_length is > handler->max_key_length(),
one extra vfield is added in mysql_prepare_create_table (for each long unique key),
and key_info->algorithm is set to HA_KEY_ALG_LONG_HASH.

2. In init_from_binary_frm_image the values for the hash key_part are set (like fieldnr,
field and flags).

3. In parse_vcol_defs, HASH_FIELD->vcol_info is created. Item_func_hash is used with a list
   of Item_fields; when an explicit length is given by the user, Item_left is used to
   concatenate the Item_field values.

4. In ha_write_row/ha_update_row, check_duplicate_long_entry_key is called, which creates
the hash key from table->record[0] and then calls ha_index_read_map; if a duplicate hash is
found, the result is compared field by field (see the usage sketch below).
2019-02-22 00:35:40 +01:00
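
A small usage sketch of the automatic creation and the duplicate check described
in point 4 above (values are illustrative; the exact error text is not shown):

  CREATE TABLE t1 (a BLOB UNIQUE);   -- long unique HASH index created automatically

  INSERT INTO t1 VALUES (REPEAT('x', 10000));
  -- A second row with the same blob value produces the same hash; the hidden
  -- hash key part is looked up, the rows are compared field by field, and the
  -- insert is rejected with a duplicate-key error.
  INSERT INTO t1 VALUES (REPEAT('x', 10000));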
Oleksandr Byelkin
93ac7ae70f Merge branch '10.3' into 10.4 2019-02-21 14:40:52 +01:00
Igor Babaev
2e73c56120 Merge branch '10.4' into bb-10.4-mdev7486 2019-02-19 03:18:17 -08:00
Varun Gupta
d6db6df995 MDEV-17903: New optimizer defaults: change optimize_join_buffer_size to be ON
optimize_join_buffer_size is switched ON.
2019-02-19 14:27:24 +05:30
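
A small sketch of checking the new default and reverting it per session (the
flag name comes from the commit; everything else is illustrative):

  -- Check whether the flag is enabled in the current optimizer_switch value.
  SELECT @@optimizer_switch LIKE '%optimize_join_buffer_size=on%' AS flag_is_on;

  -- Revert to the old behaviour for the current session if needed.
  SET SESSION optimizer_switch = 'optimize_join_buffer_size=off';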
Galina Shalygina
7a77b221f1 MDEV-7486: Condition pushdown from HAVING into WHERE
Condition can be pushed from the HAVING clause into the WHERE clause
if it depends only on the fields that are used in the GROUP BY list
or depends on the fields that are equal to grouping fields.
Aggregate functions can't be pushed down.

How the pushdown is performed, illustrated with an example:

SELECT t1.a,MAX(t1.b)
FROM t1
GROUP BY t1.a
HAVING (t1.a>2) AND (MAX(c)>12);

=>

SELECT t1.a,MAX(t1.b)
FROM t1
WHERE (t1.a>2)
GROUP BY t1.a
HAVING (MAX(c)>12);

The implementation scheme:

1. Extract the most restrictive condition cond from the HAVING clause of
   the select that depends only on the fields that are used in the GROUP BY
   list of the select (directly or indirectly through equalities)
2. Save cond as a condition that can be pushed into the WHERE clause
   of the select
3. Remove cond from the HAVING clause if it is possible

The optimization is implemented in the function
st_select_lex::pushdown_from_having_into_where().

New test file having_cond_pushdown.test is created.
2019-02-17 23:38:44 -08:00
Oleksandr Byelkin
dcc838168f Merge branch '10.3' into bb-10.3-merge 2019-02-12 12:05:10 +01:00
Igor Babaev
9fe1e83df0 MDEV-16188 Post merge fixes: more for TokuDB 2019-02-08 12:32:31 -08:00
Igor Babaev
651347b6c1 MDEV-16188 Post merge fixes for TokuDB 2019-02-08 01:07:27 -08:00
Oleksandr Byelkin
65c5ef9b49 dirty merge 2019-02-07 13:59:31 +01:00
Monty
d0799a0479 Removed compiler warnings from tokudb
- Backport from 10.4
2019-02-06 22:18:20 +02:00