mirror of https://github.com/MariaDB/server.git synced 2025-11-28 17:36:30 +03:00
Commit Graph

1646 Commits

Author SHA1 Message Date
kkz
f85d488ad2 remove obsolete fix_session_vcol_expr{,_for_read} function declarations 2022-05-26 09:28:22 +10:00
Aleksey Midenkov
08c7ab404f MDEV-24176 Server crashes after insert in the table with virtual
column generated using date_format() and if()

vcol_info->expr is allocated on expr_arena at the parsing stage. Since the
expr item is allocated on expr_arena, all its contained items must be
allocated on expr_arena too. Otherwise fix_session_expr() will encounter a
prematurely freed item (see the sketch after this entry).

When a table is reopened from the table cache, vcol_info contains a stale
expression. We refresh the expression via TABLE::vcol_fix_exprs(), but
first we must prepare a proper context (Vcol_expr_context) which meets
some requirements:

1. As noted above, the expression update must be done on expr_arena, as new
items may be created. This was a bug in fix_session_expr_for_read() that
simply was not reproduced because there was no second refix. Now the refix
is done in more cases, so it does reproduce. Tests affected: vcol.binlog

2. The name resolution context must also be narrowed to the single table.
Tested by: vcol.update main.default vcol.vcol_syntax gcol.gcol_bugfixes

3. sql_mode must be clean so that it does not make the expression update fail.

sql_mode flags such as MODE_NO_BACKSLASH_ESCAPES, MODE_NO_ZERO_IN_DATE, etc.
must not affect the vcol expression update. If the table was created
successfully, any further evaluation must not fail. Tests affected:
main.func_like

Reviewed by: Sergei Golubchik <serg@mariadb.org>
2022-04-18 12:44:27 +03:00
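The arena rule in the entry above can be illustrated with a minimal,
self-contained sketch. The Arena and Item types below are hypothetical
stand-ins for the server's MEM_ROOT-backed arenas and Item tree, not the
actual classes: a contained item allocated on the per-statement arena
dangles once the statement ends, while one allocated on the expression's
own arena stays valid.

    // Minimal sketch, assuming simplified Arena/Item stand-ins (not MariaDB code).
    #include <cstdio>
    #include <memory>
    #include <vector>

    struct Item {                        // stand-in for a parsed expression node
      const char *name = "";
      Item *child = nullptr;             // "contained" item
    };

    struct Arena {                       // stand-in for a MEM_ROOT-backed arena
      std::vector<std::unique_ptr<Item>> items;
      Item *alloc(const char *name) {
        items.push_back(std::make_unique<Item>());
        items.back()->name = name;
        return items.back().get();
      }
      void clear() { items.clear(); }    // a statement arena is freed per statement
    };

    int main() {
      Arena expr_arena;                  // lives as long as the TABLE / vcol_info
      Arena stmt_arena;                  // freed at the end of every statement

      Item *expr = expr_arena.alloc("if(date_format(...), ...)");

      // Wrong: a refix creates a new contained item on the statement arena;
      // once the statement ends the arena is freed and expr->child dangles.
      expr->child = stmt_arena.alloc("collation converter");
      stmt_arena.clear();                // expr->child is now a dangling pointer

      // Right: perform the refix on expr_arena, so the new item lives as long
      // as the expression that points to it.
      expr->child = expr_arena.alloc("collation converter");
      std::printf("child lives on expr_arena: %s\n", expr->child->name);
      return 0;
    }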
Aleksey Midenkov
c02ebf3510 MDEV-24176 Preparations
1. moved fix_vcol_exprs() call to open_table()

mysql_alter_table() doesn't call lock_tables(), so it cannot benefit from a
fix_vcol_exprs() call made there. Tests affected: main.default_session

2. Vanilla cleanups and comments.
2022-04-18 12:44:27 +03:00
Aleksey Midenkov
241ac79e49 MDEV-26778 row_start is not updated in current row for InnoDB
The update was skipped (need_update was false) because compare_record()
took the HA_PARTIAL_COLUMN_READ branch, which skipped the row_start check
since has_explicit_value() was false. When we set the bit for row_start in
has_value_set, the row is updated with the new row_start value (see the
sketch after this entry).

The bug was caused by the combination of MDEV-23446 and 3789692d17. The
latter says:

  ... But generated columns that are written to the table are always
  deterministic and cannot change unless normal non-generated columns
  were changed. ...

Since MDEV-23446, the generated row_start can change while the non-generated
columns are not changed.

The explicit value flag came from HAS_EXPLICIT_DEFAULT, which was used to
distinguish a default-generated value from a user-supplied one.
2022-01-13 23:35:17 +03:00
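A small illustrative sketch of the failure mode described above (simplified,
hypothetical structures, not the server's compare_record()): when the
comparison only looks at columns whose bit is set, a change in the generated
row_start column is invisible until its bit is added to has_value_set.

    // Illustrative sketch only; N_COLS, record_changed and has_value_set are
    // simplified stand-ins introduced for this example.
    #include <bitset>
    #include <cstdio>

    constexpr int N_COLS = 3;          // cols 0,1: normal; col 2: generated row_start

    bool record_changed(const int *old_row, const int *new_row,
                        const std::bitset<N_COLS> &has_value_set) {
      for (int i = 0; i < N_COLS; i++) {
        if (!has_value_set[i])         // column skipped in partial-read comparison
          continue;
        if (old_row[i] != new_row[i])
          return true;                 // need_update
      }
      return false;
    }

    int main() {
      int old_row[N_COLS] = {1, 2, 100};   // 100 = old row_start
      int new_row[N_COLS] = {1, 2, 200};   // only the generated row_start changed

      std::bitset<N_COLS> has_value_set("011");   // row_start bit (col 2) not set
      std::printf("without row_start bit: need_update=%d\n",
                  record_changed(old_row, new_row, has_value_set));

      has_value_set.set(2);                // the fix: set the bit for row_start
      std::printf("with row_start bit:    need_update=%d\n",
                  record_changed(old_row, new_row, has_value_set));
      return 0;
    }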
Marko Mäkelä
8a33d36dac Fix GCC 11.2.0 -Wmaybe-uninitialized
TABLE_LIST::calc_md5(): Remove an untruthful const qualifier.

thd_get_query_start_data(): Pass empty_clex_str instead of
an uninitialized LEX_CSTRING.
2021-08-23 09:00:37 +03:00
Sergei Golubchik
6190a02f35 Merge branch '10.2' into 10.3 2021-07-21 20:11:07 +02:00
Nikita Malyavin
f64a4f672a follow-up MDEV-18166: rename marking functions
Rename the mark_columns_used_by_index* function family to more concise
names:

mark_columns_used_by_index -> mark_index_columns
mark_columns_used_by_index_for_read_no_reset -> mark_index_columns_for_read
mark_columns_used_by_index_no_reset -> mark_index_columns_no_reset
static mark_index_columns -> do_mark_index_columns
2021-07-12 22:00:40 +03:00
Nikita Malyavin
0f6a5b4390 [2/2] MDEV-18166 ASSERT_COLUMN_MARKED_FOR_READ failed on tables with vcols
Several different test cases were failing for the same reason: the fields
in a vcol expression were not marked while marking the columns of a key
containing a virtual column for read.

Fix: make marking the columns of a key for read a special case where
register_field_in_read_map() is called instead of a plain bitmap_set_bit()
(see the sketch after this entry).

Some test cases are only reproducible in 10.4+, but the fix is applicable
to 10.2+
2021-07-12 22:00:39 +03:00
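A minimal sketch of the idea behind the fix, assuming simplified structures
(the Field type and field indexes here are hypothetical, not the server's):
marking a key column that is a virtual column must also mark the base
columns its expression depends on, otherwise evaluating the vcol reads
unmarked fields.

    // Minimal sketch with assumed Field layout; register_field_in_read_map()
    // here is an analogue of the server function named in the entry above.
    #include <bitset>
    #include <cstdio>
    #include <vector>

    constexpr int N_FIELDS = 4;

    struct Field {
      int field_index;
      std::vector<int> vcol_depends_on;  // empty for regular (base) columns
    };

    void register_field_in_read_map(const Field &f,
                                    std::bitset<N_FIELDS> &read_set) {
      read_set.set(f.field_index);       // what plain bitmap_set_bit() would do
      for (int dep : f.vcol_depends_on)  // ...plus the vcol's base columns
        read_set.set(dep);
    }

    int main() {
      // Fields 0 and 1 are base columns; field 2 is a virtual column
      // computed from fields 0 and 1 and is part of an index.
      Field v{2, {0, 1}};
      std::bitset<N_FIELDS> read_set;

      register_field_in_read_map(v, read_set);
      std::printf("read_set = %s\n", read_set.to_string().c_str());  // 0111
      return 0;
    }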
Marko Mäkelä
1864a8ea93 Merge 10.2 into 10.3 2021-05-24 09:38:49 +03:00
Igor Babaev
43c9fcefc0 MDEV-23886 Reusing CTE inside a function fails with table doesn't exist
In the code that existed just before this patch, binding of a table reference
to the specification of the corresponding CTE happens in the function
open_and_process_table(). If the table reference is not the first in the
query, the specification is cloned in the same way as the specification of
a view is cloned for any reference to the view. This works fine for
standalone queries, but does not work for stored procedures / functions,
for the following reason.
When the first call of a stored procedure / function SP is processed, the
body of SP is parsed. When a query of SP is parsed, the info on each
encountered table reference is put into a TABLE_LIST object linked into
a global chain associated with the query. When parsing of the query is
finished, the basic info on the table references from this chain, except
table references to derived tables and information schema tables, is put
into one hash table associated with SP. When parsing of the body of SP is
finished, this hash table is used to construct TABLE_LIST objects for all
table references mentioned in SP and link them into the list of such
objects passed to a pre-locking process that calls open_and_process_table()
for each table from the list.
When a TABLE_LIST for a view is encountered, the view is opened and its
specification is parsed. For any table reference that occurs in
the specification a new TABLE_LIST object is created to be included into
the list for pre-locking. After all objects in the pre-locking list have
been looked through, the tables mentioned in the list are locked. Note that
the objects referencing CTEs are just skipped here, as it is impossible to
resolve these references without any info on the context where they occur.
Now the statements from the body of SP are executed one by one.
At the very beginning of the execution of a query the tables used in the
query are opened, and open_and_process_table() now is called for each table
reference mentioned in the list of TABLE_LIST objects associated with the
query that was built when the query was parsed.
For each table reference, first the reference is checked against the CTE
definitions in whose scope it occurred. If such a definition is found, the
reference is considered resolved, and if this is not the first reference
to the found CTE, the specification of the CTE is re-parsed and the
result of the parsing is added to the parse tree of the query as a
sub-tree. If this sub-tree contains table references to other tables, they
are added to the list of TABLE_LIST objects associated with the query so
that the referenced tables can be opened. When the procedure that opens
the tables comes to the TABLE_LIST object created for a non-first
reference to a CTE, it discovers that the referenced table instance is not
locked and reports an error.
Thus processing non-first table references to a CTE in the same way as
references to views are processed does not work for queries used in stored
procedures / functions. The main problem is that the current
pre-locking mechanism employed for stored procedures / functions does not
allow saving the context in which a CTE reference occurs. It is not trivial
to save the info about the context where a CTE reference occurs, while the
resolution of the table reference cannot be done without this context, and
consequently the specification for the table reference cannot be
determined.

This patch solves the above problem by moving the resolution of all CTE
references to the parsing stage. More exactly, references to CTEs occurring
in a query are resolved right after parsing of the query has finished. After
resolution, any CTE reference is marked as a reference to a derived
table, so it is excluded from the hash table created for pre-locking the
used base tables and views when the first call of a stored procedure /
function is processed (see the sketch after this entry).
This solution required recursive calls of the parser. The function
THD::sql_parser() has been added specifically for recursive invocations of
the parser.
2021-05-21 16:00:35 -07:00
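A conceptual sketch of the resolution step described above, using
hypothetical structures (TableRef, resolve_cte_refs and the table names are
illustrative only, not the server's types): references that resolve to a CTE
are marked as derived tables and therefore never enter the set used to build
the stored routine's pre-locking list.

    // Conceptual sketch only; all names below are assumptions for illustration.
    #include <cstdio>
    #include <set>
    #include <string>
    #include <vector>

    struct TableRef {
      std::string name;
      bool is_derived = false;         // set when the name resolves to a CTE
    };

    void resolve_cte_refs(std::vector<TableRef> &refs,
                          const std::set<std::string> &ctes_in_scope) {
      for (TableRef &r : refs)
        if (ctes_in_scope.count(r.name))
          r.is_derived = true;         // now treated as a derived table
    }

    int main() {
      std::vector<TableRef> refs = {{"cte1"}, {"t1"}, {"cte1"}};
      resolve_cte_refs(refs, {"cte1"});

      // Only real base tables / views go into the pre-locking set.
      std::set<std::string> prelock;
      for (const TableRef &r : refs)
        if (!r.is_derived)
          prelock.insert(r.name);

      for (const std::string &t : prelock)
        std::printf("prelock: %s\n", t.c_str());   // prints only t1
      return 0;
    }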
Monty
6983ce704b MDEV-24710 Uninitialized value upon CREATE .. SELECT ... VALUE...
The failure happened for group by queries when all tables were marked as
'const tables' (tables with 0-1 matching rows), no row matched the
where clause, and there was in addition a direct reference to a field.

In this case the field would not be properly reset and the query would
return 'random data' that happened to be in table->record[0].

Fixed by marking all const tables as null tables in this particular case.

Sergei also provided an extra test case for the code.

@reviewer Sergei Petrunia <psergey@askmonty.org>
2021-03-01 22:09:05 +02:00
Sergei Golubchik
0ab1e3914c Merge branch '10.2' into 10.3 2021-02-22 22:42:27 +01:00
Varun Gupta
b87c342da5 MDEV-11172: EXPLAIN shows non-sensical value for key_len with type=index
The issue happens when the secondary keys are extended with primary
key parts. The function TABLE_SHARE::init_from_binary_frm_image()
adds the length bytes for the primary key parts to the length of the
secondary key. This is not needed because when the extended keys are
used we recalculate the length for the used key parts.

Also removed TABLE_SHARE::total_key_length as it is not used in the code.

Approved-by: Monty <monty@mariadb.org>
2021-01-30 14:41:43 +05:30
Nikita Malyavin
21809f9a45 MDEV-17556 Assertion `bitmap_is_set_all(&table->s->all_set)' failed
The assertion failed in handler::ha_reset upon SELECT under
READ UNCOMMITTED from table with index on virtual column.

This was a debug-only failure, though the problem is much wider:
* MY_BITMAP is a structure containing my_bitmap_map, the latter being a raw
 bitmap.
* read_set, write_set and vcol_set of TABLE are pointers to MY_BITMAP
* The rest of the MY_BITMAPs are stored in TABLE and TABLE_SHARE
* The pointers to the stored MY_BITMAPs, like orig_read_set etc, and
 sometimes all_set and tmp_set, are assigned to those pointers.
* Sometimes tmp_use_all_columns is used to substitute the raw bitmap
 directly with all_set.bitmap
* Sometimes even bitmaps are directly modified, like in
TABLE::update_virtual_field(): bitmap_clear_all(&tmp_set) is called.

The last three bullets in the list, when used together (which is almost
always), make the program flow cumbersome and impossible to follow,
not to mention the errors they cause, like this MDEV-17556, where the
tmp_set pointer was assigned to read_set, write_set and vcol_set, then its
bitmap was substituted with all_set.bitmap by a dbug_tmp_use_all_columns()
call, and then bitmap_clear_all(&tmp_set) was applied to all this.

To untangle this knot, the following rule should be applied:
* Never substitute bitmaps! This patch is about this.
 The orig_* and all_set bitmaps are already never substituted.

This patch changes the following function prototypes:
* tmp_use_all_columns, dbug_tmp_use_all_columns
 to accept MY_BITMAP** and to return MY_BITMAP* instead of my_bitmap_map*
* tmp_restore_column_map, dbug_tmp_restore_column_maps to accept
 MY_BITMAP* instead of my_bitmap_map*

These functions now substitute read_set/write_set/vcol_set directly,
and won't touch the underlying bitmaps (see the sketch after this entry).
2021-01-27 00:50:55 +10:00
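A self-contained sketch of the new calling convention described above. The
struct definitions are simplified stand-ins (not the server's MY_BITMAP and
TABLE): the whole MY_BITMAP pointer in the table is substituted and later
restored, and the raw bitmap inside any MY_BITMAP is left alone.

    // Simplified sketch; tmp_use_all_columns / tmp_restore_column_map here
    // are analogues of the functions named in the entry above.
    #include <cstdio>

    struct MY_BITMAP { unsigned long *bitmap; int n_bits; };

    struct TABLE {
      MY_BITMAP *read_set;             // pointer that may be substituted
      MY_BITMAP  def_read_set;         // the "real" bitmap owned by the table
      MY_BITMAP  all_set;              // never substituted, all bits set
    };

    // Takes MY_BITMAP** and returns the old MY_BITMAP* for later restore.
    MY_BITMAP *tmp_use_all_columns(TABLE *table, MY_BITMAP **bitmap) {
      MY_BITMAP *old = *bitmap;
      *bitmap = &table->all_set;       // substitute the pointer, not the raw bitmap
      return old;
    }

    // Accepts MY_BITMAP*, not my_bitmap_map*.
    void tmp_restore_column_map(MY_BITMAP **bitmap, MY_BITMAP *old) {
      *bitmap = old;
    }

    int main() {
      unsigned long def_buf = 0x5, all_buf = ~0UL;
      TABLE t{};
      t.def_read_set = {&def_buf, 4};
      t.all_set = {&all_buf, 4};
      t.read_set = &t.def_read_set;

      MY_BITMAP *saved = tmp_use_all_columns(&t, &t.read_set);
      std::printf("while substituted: %lx\n", *t.read_set->bitmap);
      tmp_restore_column_map(&t.read_set, saved);
      std::printf("after restore:     %lx\n", *t.read_set->bitmap);
      return 0;
    }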
Marko Mäkelä
5a1a714187 Merge 10.2 into 10.3 (except MDEV-17556)
The fix of MDEV-17556 (commit e25623e78a
and commit 61a362c949) has been
omitted due to conflicts and will have to be applied separately later.
2021-01-11 09:41:54 +02:00
Nikita Malyavin
e25623e78a MDEV-17556 Assertion `bitmap_is_set_all(&table->s->all_set)' failed
The assertion failed in handler::ha_reset upon SELECT under
READ UNCOMMITTED from table with index on virtual column.

This was a debug-only failure, though the problem is much wider:
* MY_BITMAP is a structure containing my_bitmap_map, the latter being a raw
 bitmap.
* read_set, write_set and vcol_set of TABLE are pointers to MY_BITMAP
* The rest of the MY_BITMAPs are stored in TABLE and TABLE_SHARE
* The pointers to the stored MY_BITMAPs, like orig_read_set etc, and
 sometimes all_set and tmp_set, are assigned to those pointers.
* Sometimes tmp_use_all_columns is used to substitute the raw bitmap
 directly with all_set.bitmap
* Sometimes even bitmaps are directly modified, like in
TABLE::update_virtual_field(): bitmap_clear_all(&tmp_set) is called.

The last three bullets in the list, when used together (which is almost
always), make the program flow cumbersome and impossible to follow,
not to mention the errors they cause, like this MDEV-17556, where the
tmp_set pointer was assigned to read_set, write_set and vcol_set, then its
bitmap was substituted with all_set.bitmap by a dbug_tmp_use_all_columns()
call, and then bitmap_clear_all(&tmp_set) was applied to all this.

To untangle this knot, the following rule should be applied:
* Never substitute bitmaps! This patch is about this.
 The orig_* and all_set bitmaps are already never substituted.

This patch changes the following function prototypes:
* tmp_use_all_columns, dbug_tmp_use_all_columns
 to accept MY_BITMAP** and to return MY_BITMAP* instead of my_bitmap_map*
* tmp_restore_column_map, dbug_tmp_restore_column_maps to accept
 MY_BITMAP* instead of my_bitmap_map*

These functions now substitute read_set/write_set/vcol_set directly,
and won't touch the underlying bitmaps.
2021-01-08 16:04:29 +10:00
Sujatha
608b0ee52e MDEV-23033: All slaves crash once in ~24 hours and loop restart with signal 11
Problem:
=======
Upon deleting or updating a row in a parent table (with a primary key), if
the child table has a virtual column and an associated key with ON UPDATE
CASCADE / ON DELETE CASCADE, it will result in a slave crash.

Analysis:
========
Tables which are related through a foreign key require prelocking similar to
triggers, i.e. if a table has triggers/foreign keys we should add all tables
and routines used by them to the prelocking set.  This prelocking happens
during the 'open_and_lock_tables' call.  Each table being opened is checked
for foreign key references. If a foreign key reference exists then the child
table is opened and it is linked to the table_list. Upon any modification
to the parent table its corresponding child tables are retrieved from the
table_list and they are updated accordingly. This prelocking works fine on
the master.

On the slave, prelocking works for the following cases:
 - Statement/mixed based replication
 - In row based replication when trigger execution is enabled through
   'slave_run_triggers_for_rbr=YES/LOGGING/ENFORCE'

Otherwise it results in an assert/crash, as the parent table will not find
the corresponding child table and it will be NULL. Dereferencing the NULL
pointer leads to slave server exit.

Fix:
===
Introduce a new 'slave_fk_event_map' flag similar to 'trg_event_map'. This
flag will ensure that when foreign keys are enabled in row based replication,
all the parent and child tables are prelocked, so that the parent is able to
locate the child table (see the sketch after this entry).

Note: This issue is specific to the slave, hence only the slave needs to be
      upgraded.
2021-01-04 15:06:12 +05:30
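The idea of the fix can be sketched with hypothetical, simplified structures
(TableDef, prelock_set and the event constants below are illustrative only,
not the server's types): a per-table event map tells the slave applier to
pre-lock the foreign key child tables before applying a row event to the
parent, so the child lookup never dereferences a NULL pointer.

    // Hypothetical sketch; all names are assumptions for illustration.
    #include <cstdio>
    #include <string>
    #include <vector>

    enum { FK_UPDATE_EVENT = 1, FK_DELETE_EVENT = 2 };

    struct TableDef {
      std::string name;
      unsigned slave_fk_event_map = 0;        // events that require child prelock
      std::vector<std::string> fk_children;   // child tables with ON ... CASCADE
    };

    // Build the list of tables the applier must open and lock for one row event.
    std::vector<std::string> prelock_set(const TableDef &parent, unsigned event) {
      std::vector<std::string> tables = {parent.name};
      if (parent.slave_fk_event_map & event)   // FK prelocking enabled
        for (const std::string &child : parent.fk_children)
          tables.push_back(child);             // child is opened with the parent
      return tables;
    }

    int main() {
      TableDef parent{"parent", FK_UPDATE_EVENT | FK_DELETE_EVENT, {"child"}};
      for (const std::string &t : prelock_set(parent, FK_DELETE_EVENT))
        std::printf("prelock %s\n", t.c_str());
      return 0;
    }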
Marko Mäkelä
c7f322c91f Merge 10.2 into 10.3 2020-11-02 15:48:47 +02:00
Marko Mäkelä
8036d0a359 MDEV-22387: Do not violate __attribute__((nonnull))
This follows up commit 94a520ddbe and
commit 7c5519c12d.

After these changes, the default test suites on a
cmake -DWITH_UBSAN=ON build no longer fail due to passing
null pointers as parameters that are declared to never be null,
but plenty of other runtime errors remain.
2020-11-02 14:19:21 +02:00
Sergei Golubchik
e64084d5a3 MDEV-21201 No records produced in information_schema query, depending on projection
Reimplement MDEV-14275 Improving memory utilization for information schema

Postpone temp table instantiation until after setup_fields().

Replace all unused (not marked in read_set) columns in an I_S table
with CHAR(0). This can drastically reduce the footprint of a MEMORY
table (a TABLE_CATALOG alone is 1538 bytes per row).

This does not change the engine. If the table was decided to be Aria
(because of, say, blobs) then after optimization it'll stay Aria
even if all blobs were removed.

Note 1: when transforming the table structure, share->blob_fields is
preserved, otherwise Aria might switch from DYNAMIC to STATIC row format
and expect a special field for the deleted mark, which create_tmp_table
didn't provide.

Note 2: optimizer was doing handler::info() (to know the number of rows)
before the temp table is populated. That didn't make much sense. Now
it's done before the table is even instantiated. Preserve the old
behavior and report 0 rows.

This reverts e2664ee836 and a8458a2345
2020-10-23 13:37:26 +02:00
Marko Mäkelä
bafc5c1321 Merge 10.2 into 10.3 2020-08-10 18:40:57 +03:00
Marko Mäkelä
3b6dadb5eb Merge 10.1 into 10.2 2020-08-10 17:57:14 +03:00
Sergei Petrunia
85bd5314c5 Better comment about TABLE::maybe_null 2020-08-06 13:39:10 +03:00
Oleksandr Byelkin
a8458a2345 MDEV-21201 No records produced in information_schema query, depending on projection
In case of NATURAL JOIN / USING, mark all fields (one table cannot be left
unopened in any case, so the optimisation is not worth it).

IMHO the table should be checked for used fields and filled after prepare,
when we will have the whole info about used fields, but that is too big a
change for a bugfix. It will be made later by Serg's patch.
2020-07-31 13:43:03 +02:00
Marko Mäkelä
66ec3a770f Merge 10.2 into 10.3 2020-07-31 13:51:28 +03:00
Nikita Malyavin
5acd391e8b MDEV-16039 Crash when selecting virtual columns generated using functions with DAYNAME()
* Allocate items on thd->mem_root while refixing vcol exprs
* Make vcol tree changes register and roll them back after the statement is executed.

Explanation:
Due to collation implementation specifics an Item tree could change while
being fixed. The tricky thing here is to do it on a proper arena.
It is usually not a problem when a field is deterministic; however, it
becomes a pain otherwise. A non-deterministic field should be refixed on
each statement, since it depends on the environment state.
The tree change should be temporary and therefore reverted after the
statement execution.
2020-07-21 16:18:00 +10:00
Sergei Petrunia
697273554f MDEV-22866: Crash in join optimizer with constant outer join nest
Starting from 10.3, the optimizer is able to detect that entire outer join
nests are constants (because of "Impossible ON") and remove them (see
mark_join_nest_as_const).

However, this was not properly accounted for in the NESTED_JOIN structure
and in the way check_interleaving_with_nj() uses its n_tables member to
check whether the join prefix order is allowed.

(The result was that the optimizer could conclude that no join prefix is
allowed and fail an assertion)
2020-06-23 15:20:48 +03:00
Marko Mäkelä
8300f639a1 Merge 10.2 into 10.3 2020-06-02 10:25:11 +03:00
Marko Mäkelä
d72eebaa3d Merge 10.1 into 10.2 2020-06-01 09:33:03 +03:00
Sergey Vojtovich
c279878493 Thread safe histograms loading
Previously multiple threads were allowed to load histograms concurrently.
There were no known problems caused by this, but given the amount of data
races in this code, they would have happened sooner or later.

To avoid scalability bottleneck, histograms loading is protected by
per-TABLE_SHARE atomic variable.

Whenever histograms were already loaded by a preceding statement (hot path),
a scalable load-acquire check is performed.

Whenever histograms have to be loaded anew, mutual exclusion for loaders
is established by the atomic variable. If histograms are being loaded
concurrently, the statement waits until the load is completed (see the
sketch after this entry).

- Table_statistics::total_hist_size moved to TABLE_STATISTICS_CB: only
  meaningful within TABLE_SHARE (not used for collected stats).
- TABLE_STATISTICS_CB::histograms_can_be_read and
  TABLE_STATISTICS_CB::histograms_are_read are replaced with a tri state
  atomic variable.
- Simplified away alloc_histograms_for_table_share().

Note: there's still likely a data race if a thread attempts accessing
histograms data after it failed to load it (because of concurrent load).
It was there previously and goes out of the scope of this effort. One way
of fixing it could be reviving TABLE::histograms_are_read and adding
appropriate checks whenever it is needed.

Part of MDEV-19061 - table_share used for reading statistical tables is
                     not protected
2020-05-29 21:53:54 +04:00
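A standalone sketch of the loading protocol described above, using a
tri-state atomic. The names (StatisticsCB, hist_state, load_histograms) are
illustrative, not the server's, and the "wait" is a simple yield loop rather
than the server's synchronization: it shows the load-acquire fast path, a
single loader elected by compare-exchange, and concurrent statements waiting
for completion.

    // Standalone sketch; assumed names, not MariaDB internals.
    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    enum State : int { NOT_LOADED, LOADING, LOADED };

    struct StatisticsCB {
      std::atomic<int> hist_state{NOT_LOADED};
      int histograms = 0;                     // the data being loaded
    };

    void load_histograms(StatisticsCB &cb) {
      if (cb.hist_state.load(std::memory_order_acquire) == LOADED)
        return;                               // hot path: already loaded

      int expected = NOT_LOADED;
      if (cb.hist_state.compare_exchange_strong(expected, LOADING)) {
        cb.histograms = 42;                   // read the statistical tables here
        cb.hist_state.store(LOADED, std::memory_order_release);
      } else {
        // Another statement is loading: wait until it completes.
        while (cb.hist_state.load(std::memory_order_acquire) != LOADED)
          std::this_thread::yield();
      }
    }

    int main() {
      StatisticsCB cb;
      std::vector<std::thread> threads;
      for (int i = 0; i < 4; i++)
        threads.emplace_back([&cb] { load_histograms(cb); });
      for (std::thread &t : threads) t.join();
      std::printf("histograms=%d state=%d\n", cb.histograms, cb.hist_state.load());
      return 0;
    }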
Sergey Vojtovich
609a0d3db3 Thread safe statistics loading
Previously multiple threads were allowed to load statistics concurrently.
There were no known problems caused by this, but given the amount of data
races in this code, they would have happened sooner or later.

To avoid scalability bottleneck, statistics loading is protected by
per-TABLE_SHARE atomic variable.

Whenever statistics were already loaded by a preceding statement (hot path),
a scalable load-acquire check is performed.

Whenever statistics have to be loaded anew, mutual exclusion for loaders
is established by the atomic variable. If statistics are being loaded
concurrently, the statement waits until the load is completed.

TABLE_STATISTICS_CB::stats_can_be_read and
TABLE_STATISTICS_CB::stats_is_read are replaced with a tri state atomic
variable.

Part of MDEV-19061 - table_share used for reading statistical tables is
                     not protected
2020-05-29 21:53:54 +04:00
Marko Mäkelä
ecc7f305dd Merge 10.2 into 10.3 2020-05-25 19:41:58 +03:00
Monty
be647ff14d Fixed deadlock with LOCK TABLES and ALTER TABLE
MDEV-21398 Deadlock (server hang) or assertion failure in
Diagnostics_area::set_error_status upon ALTER under lock

This failure could only happen if one locked the same table
multiple times and then did an ALTER TABLE on the table.

The major change is to change all instances of
table->m_needs_reopen= true;
to
table->mark_table_for_reopen();

The main fix for the problem was to ensure that we mark all
instances of the table in the locked_table_list and when we
reopen the tables, we first close all tables before reopening
and locking them.

Other things:
- Don't call thd->locked_tables_list.reopen_tables if there
  are no tables marked for reopen. (performance)
2020-05-23 14:58:33 +03:00
Sergei Golubchik
607467bd63 Merge branch '10.2' into 10.3 2020-05-09 20:20:02 +02:00
Oleksandr Byelkin
985f63cce1 Merge branch '10.1' into 10.2 2020-05-08 13:38:36 +02:00
Sergei Golubchik
0fcc3abf4a MDEV-22180 Planner opens unnecessary tables when updated table is referenced by foreign keys
Only MDL-prelock, but do not open, FK child tables for read-only (RESTRICT)
FK actions.

Tables still need to be opened for CASCADE actions, see 9180e8666b
2020-05-06 20:24:48 +02:00
Oleksandr Byelkin
81bf7d3317 Merge branch 'bb-10.3-release' into 10.3 2019-12-04 15:01:54 +01:00
Aleksey Midenkov
cef2b34f25 MDEV-18929 2nd execution of SP does not detect ER_VERS_NOT_VERSIONED
Wrong assertion condition. SYSTEM_TIME_ALL indicates that
vers_setup_conds() is done. In case FOR SYSTEM_TIME ALL is specified
in the command, the assertion passes but does not check anything.
2019-12-03 15:28:41 +03:00
Faustin Lammler
2df2238cb8 Lintian complains on spelling error
The lintian check complains about a spelling error:
https://salsa.debian.org/mariadb-team/mariadb-10.3/-/jobs/95739
2019-12-02 12:41:13 +02:00
Aleksey Midenkov
db32d9457e MDEV-18929 2nd execution of SP does not detect ER_VERS_NOT_VERSIONED
Don't do skip_setup_conds() unless all errors are checked.

Fixes following errors:
      ER_PERIOD_NOT_FOUND
      ER_VERS_QUERY_IN_PARTITION
      ER_VERS_ENGINE_UNSUPPORTED
      ER_VERS_NOT_VERSIONED
2019-12-02 11:48:37 +03:00
Aleksey Midenkov
6dd41e008e MDEV-21155 Assertion with versioned table upon DELETE from view of view after replacing first view
When a view is merged by DT_MERGE_FOR_INSERT it is then skipped from
processing and its WHERE clause is not updated with
vers_setup_conds(). Note that the view itself cannot be handled by
vers_setup_conds() because it doesn't have row_start, row_end
fields. Thus it is required to descend down to the non-view TABLE_LIST
through calls of mysql_derived_prepare() and run vers_setup_conds()
from there. Luckily, all views (views of views, views of views of
views, etc.) are linked in one list through the next_global pointer, so we
can skip all views of views and get straight to the non-view TABLE_LIST by
checking its merge_underlying_list property for a zero value (it is
assigned by DT_MERGE_FOR_INSERT for merged derived tables); see the sketch
after this entry.

We have to do that only for UPDATE and DELETE. Other DML commands
don't use WHERE clause.

MDEV-21146 Assertion `m_lock_type == 2' in handler::ha_drop_table upon LOAD DATA

LOAD DATA does not use WHERE, so the above call of vers_setup_conds()
is not needed. unit->prepare() led to a wrongly locked temporary table.
2019-12-02 11:48:37 +03:00
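A simplified sketch of the traversal described above, with a hypothetical
node layout (not the server's TABLE_LIST definition): walk the next_global
chain past merged views down to the first node without a
merge_underlying_list, which is the one whose WHERE clause
vers_setup_conds() must see.

    // Simplified sketch; first_non_view() is an illustrative helper,
    // not a server function.
    #include <cstdio>

    struct TABLE_LIST {
      const char *alias;
      TABLE_LIST *merge_underlying_list;   // non-null for merged views
      TABLE_LIST *next_global;             // global chain of all references
    };

    // Returns the first reference in the global chain that is not a merged view.
    TABLE_LIST *first_non_view(TABLE_LIST *tl) {
      while (tl && tl->merge_underlying_list)   // still a merged view: keep going
        tl = tl->next_global;
      return tl;
    }

    int main() {
      TABLE_LIST t  {"t",  nullptr, nullptr};   // the real versioned table
      TABLE_LIST v1 {"v1", &t,      &t};        // view of t
      TABLE_LIST v2 {"v2", &v1,     &v1};       // view of view

      TABLE_LIST *base = first_non_view(&v2);
      std::printf("apply vers_setup_conds() to: %s\n",
                  base ? base->alias : "none");
      return 0;
    }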
Aleksey Midenkov
498a96a478 MDEV-20441 ER_CRASHED_ON_USAGE upon update on versioned Aria table
Turn the read cache off for update and multi-update on versioned
tables. no_cache is reinitialized on each TABLE open because it is
applicable only to specific algorithms.

As a side fix, vers_insert_history_row() honors the vers_write setting.

Aria with row_format=fixed uses an IO_CACHE of type READ_CACHE for
sequential reads in the update loop. When a history row is inserted inside
this loop, the cache misses it and fails with an error.

TODO:

Currently maria_extra() does not support SEQ_READ_APPEND. Probably it
might be possible to use this type of cache.
2019-12-02 11:48:37 +03:00
Aleksey Midenkov
0076dce2c8 MDEV-18727 improve DML operation of System Versioning
MDEV-18957 UPDATE with LIMIT clause is wrong for versioned partitioned tables

UPDATE, DELETE: replace linear search of current/historical records
with vers_setup_conds().

Additional DML cases in view.test
2019-11-22 14:29:03 +03:00
Andrei Elkin
d103c5a489 merge 10.2->10.3 with conflict resolutions 2019-11-11 16:28:21 +02:00
Andrei Elkin
26fd880d5e manual merge 10.1->10.2 2019-11-11 16:03:43 +02:00
Varun Gupta
b1ab2ba583 MDEV-20519: Query plan regression with optimizer_use_condition_selectivity > 1
The issue here is a wrong estimate of the cardinality of a partial join:
the cardinality is too high because the function table_cond_selectivity()
returns an absurd value of 100, while selectivity cannot be greater than 1.

When accessing table t by the outer reference t1.a via an index we do not
perform any range analysis for t. Yet we see that TABLE::quick_key_parts[key]
and TABLE::quick_rows[key] contain non-zero values, though these should have
remained untouched and equal to 0.

Thus the real cause of the problem is that TABLE::init does not clean the
arrays TABLE::quick_key_parts[] and TABLE::quick_rows[]. It should do so
because the TABLE structure created for any instance of a table can be
reused for many queries (see the sketch after this entry).
2019-11-07 15:24:21 +05:30
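A minimal illustration of why the arrays must be cleared, with simplified
structure and member names (not the server's TABLE): the object is reused
across queries, so per-query estimate arrays must be zeroed in init(),
otherwise stale values leak into the next query's cost calculations.

    // Minimal sketch; the Table struct below is an assumed stand-in.
    #include <cstdio>
    #include <cstring>

    constexpr int MAX_KEY = 4;

    struct Table {
      unsigned quick_key_parts[MAX_KEY];
      double   quick_rows[MAX_KEY];

      void init() {                               // called when a query starts
        std::memset(quick_key_parts, 0, sizeof quick_key_parts);
        std::memset(quick_rows, 0, sizeof quick_rows);
      }
    };

    int main() {
      Table t;
      t.init();

      // Query 1 runs range analysis on key 0 and fills in the estimates.
      t.quick_key_parts[0] = 2;
      t.quick_rows[0] = 100;

      // Query 2 reuses the same Table instance but does no range analysis:
      // without the cleanup in init() it would still see quick_rows[0] == 100.
      t.init();
      std::printf("quick_rows[0] after re-init: %g\n", t.quick_rows[0]);
      return 0;
    }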
Monty
b62101f84b Fixes for binary logging --read-only mode
- Any temporary tables created under read-only mode will never be logged
  to the binary log.  Any usage of these tables to update normal tables, even
  after read-only has been disabled, will use row based logging (as the
  temporary table will not be on the slave).
- Analyze, check and repair table will not be logged in read-only mode.

Other things:
- Removed unused variables in
  MYSQL_BIN_LOG::flush_and_set_pending_rows_event.
- Set table_share->table_creation_was_logged for all normal tables.
- THD::binlog_query() now returns -1 if the statement was not logged. This
  is used to update table_share->table_creation_was_logged.
- Don't log admin statements if opt_readonly is set.
- Tables that don't have table_creation_was_logged set will use row based
  binlog format.
- Removed not needed/wrong setting of table->s->table_creation_was_logged
  in create_table_from_items()
2019-10-20 11:52:29 +03:00
Oleksandr Byelkin
de2186dd2f MDEV-20074: Lost connection on update trigger
Instead of checking lex->sql_command, which is not correct in case of
triggers, mark tables for insert.
2019-10-17 17:32:14 +02:00
Marko Mäkelä
537f8594a6 Merge 10.2 into 10.3 2019-09-04 17:52:04 +03:00
Alexander Barkov
7e08ac0b41 Merge 10.2 (up to commit ef00ac4c86) into 10.3 2019-09-04 10:19:58 +04:00