Commit Graph

681 Commits

SHA1 Message Date
fcbb672faa SQL: NOT NULL for timestamps [#63, #300] 2017-11-13 19:11:03 +03:00
497c6add88 System Versioning pre1.0
Merge branch '10.3' into trunk
2017-11-13 19:09:46 +03:00
d8d7251019 System Versioning pre0.12
Merge remote-tracking branch 'origin/archive/2017-10-17' into 10.3
2017-11-07 00:37:49 +03:00
9ec19b9b41 Reducing memory when using information schema
The background is that one user had a lot of views, and when running some complex
queries on the information schema more than 2G of temporary memory was used.

- Added new element 'total_alloc' to MEM_ROOT for easier debugging.
- Added MAX_MEMORY_USED to information_schema.processlist.
- Added new status variable "Memory_used_initial" that shows how much MariaDB
  uses at startup. This gives the base value for "Memory_used".
- Reuse memory continuously for information schema queries instead of
  only freeing memory at query end.

Other things
- Removed some unneeded set_notnull() calls for NOT NULL columns.
2017-11-02 20:37:25 +02:00
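
A minimal sketch of how the additions above could be inspected; the column and status variable names are taken from the message, the output is illustrative only:
    SELECT ID, MEMORY_USED, MAX_MEMORY_USED
      FROM information_schema.PROCESSLIST;
    SHOW GLOBAL STATUS LIKE 'Memory_used%';   -- includes Memory_used_initial
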
3356e42d01 Improved warning "xxx is not BASE TABLE/SEQUENCE"
- Changed the warning to "'%-.192s.%-.192s' is not of type '%s'" to make the
  English a bit more correct
2017-06-02 13:52:47 +03:00
d9304914be Fixing a few problems with data type and metadata for INT result functions (MDEV-12852, MDEV-12853, MDEV-12869)
This is a joint patch for:
MDEV-12852 Out-of-range errors when CAST(1-2 AS UNSIGNED
MDEV-12853 Out-of-range errors when CAST('-1' AS UNSIGNED
MDEV-12869 Wrong metadata for integer additive and multiplicative operators

1. Fixing all Item_func_numhybrid descendants to set the precise
   data type handler (type_handler_long or type_handler_longlong)
   at fix_fields() time. This fixes MDEV-12869.

2. Fixing Item_func_unsigned_typecast to set the precise data type handler
   at fix_fields() time. This fixes MDEV-12852 and MDEV-12853.
   This is done by:
   - fixing Type_handler::Item_func_unsigned_fix_length_and_dec()
     and Type_handler_string_result::Item_func_unsigned_fix_length_and_dec()
     to properly detect situations when a negative expression is converted
     to UNSIGNED. In this case, the length of the result is now always set to
     MAX_BIGINT_WIDTH without trying to use args[0]->max_length, as very
     short arguments can produce a very long result in such a conversion:
        CAST(-1 AS UNSIGNED) -> 18446744073709551615
   - adding a new virtual method "longlong Item::val_int_max() const",
     to preserve the old behavior for expressions like this:
        CAST(1 AS UNSIGNED)
     to stay under the INT data type (instead of BIGINT) for small
     positive integer literals. Using Item::unsigned_flag would not help,
     because Item_int does not set unsigned_flag to "true" for positive
     numbers.

3. Adding helper methods:
  * Item::type_handler_long_or_longlong()
  * Type_handler::type_handler_long_or_longlong()
  and reusing them in a few places, to reduce code duplication.

4. Reorganizing create_tmp_field() and
   create_field_for_create_select() for Item_hybrid_func and descendants,
   to reduce duplicate code. They all now have similar behavior with
   respect to creating fields. Only Item_func_user_var descendants have
   a different behavior, so the default behavior is moved to Item_hybrid_func
   and overridden at the Item_func_user_var level.
2017-05-23 12:45:47 +04:00
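
A hedged illustration of the conversions discussed above, reusing the message's own example (the comments restate the reasoning, not measured output):
    SELECT CAST(-1 AS UNSIGNED);    -- 18446744073709551615: a short argument, a very long result
    SELECT CAST(1 AS UNSIGNED);     -- a small positive literal; per the message it stays under INT
    CREATE TABLE t1 AS SELECT CAST(1-2 AS UNSIGNED) AS c;   -- the column must be wide enough for such values
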
44e581ebfc Tests: removed from main suite 2017-05-05 20:36:13 +03:00
4c5a2e504d Tests: create, select_pkeycache, select_jcl6 results
create: 5de65ef52df9f941ec8eea93f4801283e3d7fa05
select_pkeycache, select_jcl6: ded545be9f7a416d1d50d653ab278dd0fc7e6c5e
2017-05-05 20:36:12 +03:00
a9a56b2355 SQL: 0.3 per column versioning syntax, .frm, optimized updates and tests 2017-05-05 20:36:10 +03:00
d8c8d7b946 added implicitly generated fields in versioned tables support and refactored code a bit 2017-05-05 20:36:06 +03:00
be6f2d302c 0.1: SQL-level System Versioning 2017-05-05 20:35:08 +03:00
791374354c MDEV-9217 Split Item::tmp_table_field_from_field_type() into virtual methods in Type_handler
- Adding Type_handler::make_table_field() and moving pieces of the code
  from Item::tmp_table_field_from_field_type() to virtual implementations
  for various type handlers.

- Adding a new Type_all_attributes, to access Item's extended
  attributes, such as decimal_precision() and geometry_type().

- Adding a new class Record_addr, to pass record related information
  to Type_handler methods (ptr, null_ptr and null_bit) as a single structure.
  Note, later it will possibly be extended for BIT-alike field purposes,
  by adding new members (bit_ptr_arg, bit_ofs_arg).

- Moving the code from Field_new_decimal::create_from_item()
  to Type_handler_newdecimal::make_table_field().

- Removing Field_new_decimal() and Field_geom() helper constructor
  variants that were used for temporary field creation.

- Adding Item_field::type_handler(), Field::type_handler() and
  Field_blob::type_handler() to return correct type handlers for
  blob variants, according to Field_blob::packlength.

- Adding Type_handler_blob_common, as a common parent for
  Type_handler_tiny_blob, Type_handler_blob, Type_handler_medium_blob
  and Type_handler_long_blob.

- Implementing Type_handler_blob_common::Item_hybrid_func_fix_attributes().

  It's needed for cases when TEXT variants of different character sets are mixed
  in LEAST, GREATEST, CASE and its abbreviations (IF, IFNULL, COALESCE), e.g.:
      CREATE TABLE t1 (
        a TINYTEXT CHARACTER SET latin1,
        b TINYTEXT CHARACTER SET utf8
      );
      CREATE TABLE t2 AS SELECT COALESCE(a,b) FROM t1;
  Type handler aggregation returns TINYTEXT as a common data type
  for the two columns. But as conversion from latin1 to utf8
  happens for "a", the maximum possible length of "a" grows from 255 to 255*3.
  Type_handler_blob_common::Item_hybrid_func_fix_attributes() makes sure
  to update the blob type handler according to max_length.

- Adding Type_handler::blob_type_handler(uint max_octet_length).

- Adding a few m_type_aggregator_for_result.add() pairs, because
  now Item_xxx::type_handler() can return pointers to type_handler_tiny_blob,
  type_handler_blob, type_handler_medium_blob, type_handler_long_blob.
  Before the patch, type_handler_blob was the only possible result of type_handler().

- Making type_handler_tiny_blob, type_handler_blob, type_handler_medium_blob,
  type_handler_long_blob public.

- Removing the condition in Item_sum_avg::create_tmp_field()
  checking Item_sum_avg::result_type() against DECIMAL_RESULT.
  Now both REAL_RESULT and DECIMAL_RESULT are symmetrically handled
  by tmp_table_field_from_field_type().

- Removing Item_geometry_func::create_field_for_create_select(),
  as the inherited version works perfectly.

- Fixing Item_func_as_wkb::field_type() to return MYSQL_TYPE_LONG_BLOB
  rather than MYSQL_TYPE_BLOB. It's needed to make sure that
  tmp_table_field_from_field_type() creates a LONGBLOB field for AsWKB().

- Fixing Item_func_as_wkt::fix_length_and_dec() to set max_length to
  UINT32_MAX rather than MAX_BLOB_WIDTH, to make sure that
  tmp_table_field_from_field_type() creates a LONGTEXT field for AsWKT().

- Removing Item_func_set_user_var::create_field_for_create_select(),
  as the inherited version works fine.

- Adding Item_func_get_user_var::create_field_for_create_select() to
  make sure that "CREATE TABLE t1 AS SELECT @string_user variable"
  always creates a field of LONGTEXT/LONGBLOB type.

- Item_func_ifnull::create_field_for_create_select()
  behavior has changed. Before the patch it passed set_blob_packflag=false,
  which meant creating LONGBLOB for all blob variants.
  Now it takes into account max_length, which gives better column
  data types for:
    CREATE TABLE t2 AS SELECT IFNULL(blob_column1, blob_column2) FROM t1;

- Fixing Item_func_nullif::fix_length_and_dec() to use
  set_handler(args[2]->type_handler()) instead of
  set_handler_by_field_type(args[2]->field_type()).
  This is needed to distinguish between BLOB variants.

- Implementing Item_blob::type_handler(), to make sure to create
  proper BLOB field variant, according to max_length, for queries like:
    CREATE TABLE t1 AS
      SELECT some_blob_field FROM INFORMATION_SCHEMA.SOME_TABLE;

- Fixing Item_field::real_type_handler() to make sure that
  the code aggregating fields for UNION gets a proper BLOB
  variant type handler from fields.

- Adding a special code into Item_type_holder::make_field_by_type(),
  to make sure that after aggregating field types it also properly
  takes into account max_length when mixing TEXT variants of different
  character sets and chooses a proper TEXT variant:
      CREATE TABLE t1 (
        a TINYTEXT CHARACTER SET latin1,
        b TINYTEXT CHARACTER SET utf8
      );
      CREATE TABLE t2 AS SELECT a FROM t1 UNION SELECT b FROM t1;

- Adding tests, for better coverage of IFNULL, NULLIF, UNION.

- Because tmp_table_field_from_field_type() now takes
  BLOB variants into account (instead of always creating LONGBLOB),
  the test results for WEIGHT_STRING(), NULLIF() and UNION
  have become more precise.
2017-04-24 12:09:25 +04:00
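
A short sketch of the aggregation effect described above for mixed-charset TEXT variants; the expected column type follows the message's reasoning (255 characters can become 255*3 bytes after conversion to utf8):
    CREATE TABLE t1 (
      a TINYTEXT CHARACTER SET latin1,
      b TINYTEXT CHARACTER SET utf8
    );
    CREATE TABLE t2 AS SELECT COALESCE(a,b) FROM t1;
    SHOW CREATE TABLE t2;   -- the result column is expected to be a wider TEXT variant, not TINYTEXT
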
847eb24b17 Fixed some wrong/inconsistent error messages
- Added trigger name to "Trigger already exists" error message
- Also added the missing query name to ER_DUP_QUERY_NAME
- Fixed wrong use of MASTER_DELAY_VALUE_OUT_OF_RANGE
2017-04-19 22:30:55 +03:00
f3914d10b6 Merge branch 'bb-10.2-serg-merge' into 10.2 2017-02-11 09:45:34 +01:00
2195bb4e41 Merge branch '10.1' into 10.2 2017-02-10 17:01:45 +01:00
8b2e642aa2 MDEV-7635: Update tests to adapt to the new default sql_mode 2017-02-10 06:30:42 -05:00
8e15768731 Merge branch '10.0' into 10.1 2017-01-16 03:18:14 +02:00
e9aed131ea Merge remote-tracking branch 'origin/5.5' into 10.0 2017-01-06 17:09:59 +02:00
4a5d25c338 Merge branch '10.1' into 10.2 2016-12-29 13:23:18 +01:00
19896d4b3a MDEV-10274 Bundling insert with create statement for table with unsigned Decimal primary key issues warning 1194.
Flags are important for key_length calculations, so they should
        be set before the calculation, not after.
2016-12-19 16:09:20 +04:00
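
A hypothetical repro sketch based only on the bug title; the exact table definition and values are assumptions:
    CREATE TABLE t1 (d DECIMAL(10,0) UNSIGNED PRIMARY KEY)
      SELECT 1 AS d;    -- previously this kind of bundled insert could issue warning 1194
    DROP TABLE t1;
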
1db438c833 MDEV-11066 use MySQL terminology for "virtual columns" 2016-12-12 20:35:51 +01:00
a411d7f4f6 store/show vcols as item->print()
otherwise we'd need to store sql_mode *per vcol*
(consider CREATE INDEX...), and how would SHOW CREATE TABLE
support that?

Additionally, get rid of vcol::expr_str, just to make sure
the string is always generated and never leaked in the
original form.
2016-12-12 20:35:41 +01:00
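
A small sketch of the behaviour described above: the virtual column expression is printed from the Item tree rather than from a stored string (table and column names are illustrative):
    CREATE TABLE t1 (
      a INT,
      b INT AS (a + 1) VIRTUAL
    );
    SHOW CREATE TABLE t1;   -- the vcol expression is regenerated via item->print()
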
2f20d297f8 Merge branch '10.0' into 10.1 2016-12-11 09:53:42 +01:00
7f2fd34500 MDEV-11231 Server crashes in check_duplicate_key on CREATE TABLE ... SELECT
be consistent and don't include the table name in the error message;
no other CREATE TABLE error does it.

(the crash happened, because thd->lex->query_tables was NULL)
2016-12-04 01:59:35 +01:00
af7490f95d Remove the trailing '.' from error messages to make them consistent
Fixed a few failing tests
2016-10-05 01:11:08 +03:00
8be53a389c MDEV-6112 multiple triggers per table
This is similar to MySQL Worklog 3253, but with
a different implementation. The disk format and
SQL syntax are identical to MySQL 5.7.

Features supported:
- "Any" number of triggers of any type
- Supports FOLLOWS and PRECEDES to be
  able to put triggers in a certain execution order.

Implementation details:
- Class Trigger added to hold information about a trigger.
  Before this, trigger information was stored in a set of lists in
  Table_triggers_list and in Table_triggers_list::bodies
- Each Trigger has a next field that points to the next Trigger with the
  same action and time.
- When accessing a trigger, we now always access all linked triggers
- The lists are now only used to load and save trigger files.
- Added the MySQL trigger test case (trigger_wl3253), which we execute
  identically.
- Even more graceful handling of wrong trigger files than before. This
  is useful if a trigger file uses functions or syntax not provided by
  the server.
- Each trigger now has a "Created" field that shows when the trigger was
  created, with 2 decimals.

Other comments:
- Many of the changes in test files were done because of the new "Created"
  field in the trigger file. This shows up in SHOW ... TRIGGER and when
  using information_schema.trigger.
- Don't check if all memory is released if one uses --gdb; this is needed
  to be able to get a list of not-freed memory from safemalloc while
  debugging.
- Added an option to trim_whitespace() to know how many prefix characters
  were skipped.
- Changed a few ulonglong sql_mode to sql_mode_t, to find some wrong usage
  of sql_mode.
2016-10-05 01:11:07 +03:00
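
A minimal sketch of the ordering syntax mentioned above (identifiers and trigger bodies are illustrative):
    CREATE TABLE t1 (a INT);
    CREATE TRIGGER tr1 BEFORE INSERT ON t1 FOR EACH ROW SET @x := 1;
    CREATE TRIGGER tr2 BEFORE INSERT ON t1 FOR EACH ROW FOLLOWS tr1 SET @x := @x + 1;
    SHOW TRIGGERS;   -- both BEFORE INSERT triggers are listed, together with the new "Created" value
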
06b7fce9f2 Merge branch '10.1' into 10.2 2016-09-09 08:33:08 +02:00
6820bf9ca9 do not quote numbers in the DEFAULT clause in SHOW CREATE 2016-08-27 16:59:11 +02:00
6b1863b830 Merge branch '10.0' into 10.1 2016-08-25 12:40:09 +02:00
6f31dd093a Added new status variables to make it easier to debug certain problems:
- Handler_read_retry
- Update_scan
- Delete_scan
2016-08-21 20:18:39 +03:00
6c173324ff Part of MDEV-10134 Add full support for DEFAULT
Print default values for BLOBs.
This is a partial commit of automatic changes, to make the real commit smaller.
All changes here are related to that we now print DEFAULT NULL for blob and
text fields, like we do for all other fields.
2016-06-30 11:43:02 +02:00
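
A short sketch of the printing change (names are illustrative):
    CREATE TABLE t1 (b BLOB, t TEXT);
    SHOW CREATE TABLE t1;   -- both columns are now printed with an explicit DEFAULT NULL
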
7305be2f7e MDEV-5535: Cannot reopen temporary table
mysqld maintains a list of TABLE objects for all temporary
tables created within a session in THD. Here each table is
represented by a TABLE object.

A query referencing a particular temporary table more than
once, however, failed with an ER_CANT_REOPEN_TABLE error
because the TABLE_SHARE was allocated together with the TABLE,
so temporary tables always had only one TABLE per TABLE_SHARE.

This patch lifts this restriction by separating TABLE and
TABLE_SHARE objects and storing TABLE_SHAREs for temporary
tables in a list in THD, and TABLEs in a list within their
respective TABLE_SHAREs.
2016-06-10 18:39:43 -04:00
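
A sketch of the query pattern that used to fail (names are illustrative):
    CREATE TEMPORARY TABLE tmp (a INT);
    INSERT INTO tmp VALUES (1),(2);
    SELECT * FROM tmp AS t1 JOIN tmp AS t2 USING (a);   -- previously failed with ER_CANT_REOPEN_TABLE
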
2dee76f435 MDEV-9518 Increase the range for INFORMATION_SCHEMA.MEMORY_USED column
make INFORMATION_SCHEMA.PROCESSLIST.MEMORY_USED bigint
2016-06-09 00:00:44 +02:00
282497dd6d MDEV-6720 - enable connection log in mysqltest by default 2016-03-31 10:11:16 +04:00
a5679af1b1 Merge branch '10.0' into 10.1 2016-02-23 21:35:05 +01:00
271fed4106 Merge branch '5.5' into 10.0 2016-02-15 22:50:59 +01:00
6b614c620e MDEV-7765: Crash (Assertion `!table || (!table->write_set || bitmap_is_set(table->write_set, field_index) || bitmap_is_set(table->vcol_set, field_index))' fails) on using function over not created table
The problem was that the created table was not marked as used (its query_id was not set), so opening tables for a stored function picked it up (as an open placeholder) and used it, changing TABLE internals.
2016-02-09 18:58:54 +01:00
a2bcee626d Merge branch '10.0' into 10.1 2015-12-21 21:24:22 +01:00
1623995158 Merge branch '5.5' into 10.0 2015-12-13 00:10:40 +01:00
0df22a539e MDEV-7050: MySQL#74603 - Assertion `comma_length > 0' failed in mysql_prepare_create_table
Incorrect array length for the comma buffer.
2015-12-07 09:34:41 +02:00
7ec6558503 MDEV-9021: MYSQLD SEGFAULTS WHEN BUILT USING --WITH-MAX-INDEXES=128
The bitmap implementation defines two template Bitmap classes. One is
optimized for 64-bit wide bitmaps (the default), while the other is used for
all other widths.

In order to optimize the computations, Bitmap<64> class has defined its
own member functions for bitmap operations, the other one, however,
relies on mysys' bitmap implementation (mysys/my_bitmap.c).

Issue 1:
In the case of the non 64-bit Bitmap class, intersect() wrongly reset the
received bitmap while initialising a new local bitmap structure
(bitmap_init() clears the bitmap buffer); thus, the received bitmap was
getting cleared.

Fixed by initializing the local bitmap structure by using a temporary
buffer and later copying the received bitmap to the initialised bitmap
structure.

Issue 2:
The non 64-bit Bitmap class was missing the Iterator, which caused
a compilation failure.

Also added a cmake variable to hold the MAX_INDEXES value when supplied
from the command line (e.g. cmake .. -DMAX_INDEXES=128U). Checks have
been put in place to trigger build failure if MAX_INDEXES value is
greater than 128.

Test modifications:
* Introduced include/have_max_indexes_[64|128].inc to facilitate
skipping of tests for which the output differs with different
MAX_INDEXES.

* Introduced include/max_indexes.inc which would get modified by cmake
to reflect the MAX_INDEXES value used to build the server. This file
simply sets an mtr variable '$max_indexes' to show the MAX_INDEXES
value, which will then be consumed by the above introduced include file.

* Some tests (portions), dependent on MAX_INDEXES value, have been moved
to separate test files.
2015-11-09 09:28:00 -05:00
5c9c8ef1ea MDEV-3929 Add system variable explicit_defaults_for_timestamp for compatibility with MySQL 2015-09-22 14:01:54 +04:00
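
A hedged sketch of the compatibility behaviour this variable is meant to provide (how the variable is enabled here is an assumption):
    -- with explicit_defaults_for_timestamp enabled at server startup
    CREATE TABLE t1 (ts TIMESTAMP);
    SHOW CREATE TABLE t1;   -- ts is expected to be nullable, without the implicit
                            -- DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
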
7dd137c4ac MDEV-6756: map a linux pid (child pid) to a connection id shown in the output of SHOW PROCESSLIST
Added tid (thread ID) for systems where it is present.

 ps -eL -o tid,pid,command

shows the thread on Linux
2015-09-17 14:30:58 +02:00
6b20342651 Ensure that fields declared with NOT NULL don't have DEFAULT values if not specified and if not timestamp or auto_increment
In the original code, one got an automatic DEFAULT value in some cases and not in others.

For example:
create table t1 (a int primary key)      - No default
create table t2 (a int, primary key(a))  - DEFAULT 0
create table t1 SELECT ....              - Default for all fields, even if they were defined as NOT NULL
ALTER TABLE ... MODIFY could sometimes add an unexpected DEFAULT value.

The patch is quite big because we had so many test cases that used
CREATE ... SELECT or CREATE ... (...PRIMARY KEY(xxx)), which don't have an automatic DEFAULT anymore.

Other things:
- Removed warnings from InnoDB when waiting for a semaphore (got this when testing things with --big)
2015-08-18 11:18:57 +03:00
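
A brief sketch contrasting the examples above after this change, as the message describes it:
    CREATE TABLE t1 (a INT PRIMARY KEY);
    CREATE TABLE t2 (a INT, PRIMARY KEY(a));
    SHOW CREATE TABLE t2;   -- no implicit DEFAULT 0 anymore; both tables behave the same
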
91ee98a8c8 MDEV-7807 information_schema.processlist truncates queries with binary strings
Adding a new column INFORMATION_SCHEMA.PROCESSLIST.INFO_BINARY.
2015-05-08 00:34:06 +04:00
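
A minimal usage sketch for the new column:
    SELECT INFO, INFO_BINARY
      FROM information_schema.PROCESSLIST
     WHERE ID = CONNECTION_ID();
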
2db62f686e Merge branch '10.0' into 10.1 2015-03-07 13:21:02 +01:00
87b0cc9912 MDEV-7286 TRIGGER: CREATE OR REPLACE, CREATE IF NOT EXISTS
Based on the patch by Sriram Patil, made under the terms of GSoC 2014.
2015-03-04 09:52:01 +04:00
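
A short sketch of the two new forms from the title (trigger names and bodies are illustrative):
    CREATE OR REPLACE TRIGGER tr1 BEFORE INSERT ON t1 FOR EACH ROW SET @x := 1;
    CREATE TRIGGER IF NOT EXISTS tr1 BEFORE INSERT ON t1 FOR EACH ROW SET @x := 2;
    -- the second statement is expected to produce a warning instead of an error
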
d7e7862364 Merge branch '5.5' into 10.0 2015-02-18 15:16:27 +01:00
8e80f91fa3 Merge remote-tracking branch 'mysql/5.5' into bb-5.5-merge @ mysql-5.5.42 2015-02-11 23:50:40 +01:00
aed8369e43 Bug #16869534 QUERYING SUBSET OF COLUMNS DOESN'T USE TABLE CACHE; OPENED_TABLES I
Description: When querying a subset of columns from the information_schema.TABLES

Analysis: When information about tables is collected for statements like
"SELECT ENGINE FROM I_S.TABLES" we do not perform full-blown table opens
in SE; instead, we only use information from table shares from the Table
Definition Cache or .FRMs. Still, in order to simplify the I_S implementation,
mock TABLE objects are created from TABLE_SHARE during this process.
This is done by calling open_table_from_share() function with special
arguments. Since this function always increments the "Opened_tables" counter,
calls to it can be mistakenly interpreted as full-blown table opens in SE.

Note that the claim that "'SELECT ENGINE FROM I_S.TABLES' statements don't
use the Table Cache" is nevertheless factually correct. But it misses the
point, since such statements a) don't use full-blown TABLE objects and
therefore don't do table opens, and b) still use the Table Definition Cache.

Fix: We are now incrementing the counter when db_stat(i.e open flags for ha_open(

We have considered an optimization which would use TABLE objects from
the Table Cache when available instead of constructing mock TABLE objects,
but found it too intrusive for stable releases.
2014-11-26 16:59:58 +05:30
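
A hedged way to observe the counter behaviour discussed above:
    SHOW GLOBAL STATUS LIKE 'Opened_tables';
    SELECT ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'test';
    SHOW GLOBAL STATUS LIKE 'Opened_tables';
    -- after the fix the counter should no longer grow merely from the mock TABLE objects
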