Another fix for the maybe_null problem, and a revert of revno 3608 "MDEV-3873 & MDEV-3876 & MDEV-3912: Wrong result (extra rows) with ALL subquery from a MERGE view."
ORDER BY does not work for materialized cursors:
Use the "dynamic" row format (instead of "block") for MARIA internal
temporary tables created for cursors.
With the "block" row format MARIA may shuffle rows; with the "dynamic" row
format records are inserted sequentially (there are no gaps in the data
file while we fill the temporary table).
This is needed to preserve row order when scanning materialized cursors.
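To make the user-visible symptom concrete, here is a minimal sketch (table,
procedure and variable names are hypothetical): the cursor result is
materialized into an internal temporary table, and fetching from the cursor
must return rows in the ORDER BY order.

  CREATE TABLE t1 (id INT, val VARCHAR(32));
  INSERT INTO t1 VALUES (3,'c'), (1,'a'), (2,'b');
  DELIMITER //
  CREATE PROCEDURE fetch_sorted()
  BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE v_id INT;
    -- The cursor is materialized into an internal temporary table;
    -- scanning it must preserve the ORDER BY order.
    DECLARE cur CURSOR FOR SELECT id FROM t1 ORDER BY id;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
    OPEN cur;
    read_loop: LOOP
      FETCH cur INTO v_id;
      IF done THEN LEAVE read_loop; END IF;
      SELECT v_id;  -- expected: 1, then 2, then 3
    END LOOP;
    CLOSE cur;
  END//
  DELIMITER ;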
When a non-nullable datetime field is used under an IS NULL predicate
of the WHERE condition in a query with outer joins, the remove_eq_conds
function should check whether this field belongs to an inner table
of any outer join, which can, in the general case, be a nested outer join.
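A sketch of the kind of query affected (table and column names are
hypothetical): the datetime column is declared NOT NULL, but because t2 is
the inner table of a LEFT JOIN it can still produce NULL-complemented rows,
so the IS NULL predicate must not be optimized away as always false.

  CREATE TABLE t1 (id INT);
  CREATE TABLE t2 (id INT, created DATETIME NOT NULL);
  INSERT INTO t1 VALUES (1), (2);
  INSERT INTO t2 VALUES (1, '2012-01-01 00:00:00');
  SELECT t1.id
  FROM t1 LEFT JOIN t2 ON t1.id = t2.id
  WHERE t2.created IS NULL;  -- expected: id = 2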
The following variables no longer require LOCK_open protection:
- table_def_cache (renamed to tdc_hash) is protected by rw-lock
LOCK_tdc_hash;
- table_def_shutdown_in_progress doesn't need LOCK_open protection;
- last_table_id uses atomics;
- TABLE_SHARE::ref_count (renamed to TABLE_SHARE::tdc.ref_count)
is protected by TABLE_SHARE::tdc.LOCK_table_share;
- TABLE_SHARE::next, ::prev (renamed to tdc.next and tdc.prev),
oldest_unused_share, end_of_unused_share are protected by
LOCK_unused_shares;
- TABLE_SHARE::m_flush_tickets (renamed to tdc.m_flush_tickets)
is protected by TABLE_SHARE::tdc.LOCK_table_share;
- refresh_version (renamed to tdc_version) uses atomics.
Temporary fix for a number of replication tests (rpl.rpl_temp_table_mix_row,
rpl.rpl_trunc_temp, rpl.rpl_current_user, rpl.rpl_gtid_master_promote):
- THD::decide_logging_format() should not assume that mysql.gtid_slave_pos is
a non-replicated table. This used to cause unintended behavior for the COMMIT
statement: replication would switch to row-based format, etc.
The question of what should be done when a user issues a statement that
explicitly modifies the mysql.gtid_slave_pos table remains open.
includes:
* remove some remnants of "Bug#14521864: MYSQL 5.1 TO 5.5 BUGS PARTITIONING"
* introduce LOCK_share, now LOCK_ha_data is strictly for engines
* rea_create_table() always creates .par file (even in "frm-only" mode)
* fix a 5.6 bug, temp file leak on dummy ALTER TABLE
Includes 5.6 changesets for:
*****
Fix for BUG#13489996 valgrind: conditional jump or move depends on uninitialised values - field_blob.
blob_ptr_size was not initialized properly: remove this variable.
*****
Bug#14021323 CRASH IN FIELD::SET_NULL WHEN INSERTING ROWS TO NEW TABLE
*****
revno: 4559
committer: Marc Alff <marc.alff@oracle.com>
branch nick: mysql-5.6-bug14741537-v4
timestamp: Thu 2012-11-08 22:40:31 +0100
message:
Bug#14741537 - MYSQL 5.6, GTID AND PERFORMANCE_SCHEMA
Before this fix, statements using performance_schema tables:
- were marked as unsafe for replication,
- caused warnings during execution,
- were written to the binlog, either in STATEMENT or ROW format.
When using replication with the new GTID feature,
unsafe warnings are elevated to errors,
which prevents using the performance_schema and GTIDs together.
The root cause of the problem is not related to raising warnings/errors
in some special cases, but deeper: statements involving the performance
schema should not even be written to the binary log in the first place,
because the content of the performance schema tables is 'local' to a server
instance, and may differ greatly between nodes in a replication
topology.
In particular, the DBA should be able to configure (INSERT, UPDATE, DELETE)
or flush (TRUNCATE) performance schema tables on one node,
without affecting other nodes.
This fix introduces the concept of a 'non-replicated' or 'local' table,
and adjusts the replication logic to ignore non-replicated tables
when deciding whether or how to log a statement to the binlog.
Note that while this issue was detected using the performance_schema,
other tables are also affected by the same problem.
This fix defines the following tables as 'local', so they are never
replicated:
- performance_schema.*
- mysql.general_log
- mysql.slow_log
- mysql.slave_relay_log_info
- mysql.slave_master_info
- mysql.slave_worker_info
Existing behavior for information_schema.* is unchanged by this fix,
to limit the scope of changes.
Coding-wise, this fix implements the following changes:
1)
Performance schema tables no longer use any replication flags,
since performance schema tables are not replicated.
2)
In open_table_from_share(),
tables with no replication capabilities (performance_schema.*),
tables with TABLE_CATEGORY_LOG (logs)
and tables with TABLE_CATEGORY_RPL_INFO (replication)
are marked as non-replicated via TABLE::no_replicate.
3)
A new THD member, THD::m_binlog_filter_state,
indicates whether the current statement is written to the binlog
(the normal case for most statements) or is to be discarded
(because the statement affects non-replicated tables).
4)
In THD::decide_logging_format(), the replication logic
is changed to take non-replicated tables into account.
Statements that affect only non-replicated tables are
executed normally (no warnings or errors), but are not
written to the binlog.
Statements that affect (i.e., write to) a replicated table
while also using (i.e., reading from or writing to) a non-replicated table
are executed normally in MIXED and ROW binlog format,
and cause a new error in STATEMENT binlog format.
THD::decide_logging_format() uses THD::m_binlog_filter_state
to indicate whether a statement is to be ignored when writing to
the binlog.
5)
In THD::binlog_query(), statements marked as ignored
are not written to the binary log.
6)
For row based replication, the existing test for 'table->no_replicate'
has been moved from binlog_log_row() to check_table_binlog_row_based().
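A behavioral sketch of the new logic (the performance_schema and user table
names below are examples, not from this changeset):

  -- Writes only to a non-replicated table: executed normally, with no
  -- warning or error, and nothing is written to the binlog.
  TRUNCATE TABLE performance_schema.events_statements_summary_by_digest;

  -- Writes to a replicated table while reading from a non-replicated one:
  -- allowed with binlog_format = ROW or MIXED, but raises an error with
  -- binlog_format = STATEMENT.
  SET SESSION binlog_format = 'STATEMENT';
  INSERT INTO test.t1
  SELECT thread_id FROM performance_schema.threads;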
Implement discovery of table non-existence, and related changes:
1. Split GTS_FORCE_DISCOVERY (which meant two different things in
two different functions) into GTS_FORCE_DISCOVERY and GTS_USE_DISCOVERY.
2. Move GTS_FORCE_DISCOVERY implementation into open_table_def().
3. In recover_from_failed_open() clear old errors *before* discovery,
not after successful discovery. The final error should come
from the discovery.
4. On forced discovery, delete the table's .frm file first. Discovery will
write a new one, if desired.
5. If the .frm file exists, but the table does not exist in the engine, force
rediscovery if the engine supports it.
1. The default db type for partitions was stored as a 1-byte DB_TYPE code,
which doesn't work for dynamically generated codes.
2. The storage engine plugin for the default db type wasn't locked at all,
which could trivially crash for dynamic plugins.
Now the storage engine name is stored in the extra2 section,
and the plugin is correctly locked.
1. DROP DATABASE should use ha_discover_table_names(), not look at .frm files.
2. filename_to_tablename() also encodes temp file names #sql- -> #mysql50##sql
3. no special treatment for #sql- files, no TABLE_LIST::internal_tmp_table
4. also discover table file names that start with #
WITH COMPOSITE KEY COLUMNS
Problem:
A SELECT query with several AGGR(DISTINCT) functions that refer to
different fields of the same composite key returned an incorrect value.
Analysis:
Consider a table with a composite key (a,b,c) and a query like
select COUNT(DISTINCT b), SUM(DISTINCT a) from ....
We first build the list of fields used in the AGGR(DISTINCT) functions
(here a and b; the order of the fields doesn't matter),
and then check whether there is a composite key whose prefix
of index columns matches the fields of the aggregate functions
(in this case (a,b,c) does).
If yes, we can use a loose index scan and need not perform
duplicate removal for DISTINCT in the aggregate functions.
In our table we only visit the rows marked with <-- below, and get:
(a,b,c)       count(distinct b)     sum(distinct a)
              treated as count(b)   treated as sum(a)
(1,1,2) <--   1                     1
(1,2,2) <--   1++=2                 1+1=2
(1,2,3)
(2,1,2) <--   2++=3                 1+1+2=4
(2,2,2) <--   3++=4                 1+1+2+2=6
(2,2,3)
The result will be (4,6), but it should be (2,3).
So in this case our assumption is incorrect. Only for a query like
select count(distinct a,b), sum(distinct a,b) from ..
can we use a loose index scan.
Solution:
When a query has more than one AGGR(DISTINCT) function,
they must all refer to the same fields, as in
select count(distinct a,b), sum(distinct a,b) from ..
--> the loose index scan can be used, as both AGGR(DISTINCT) refer to the same fields a,b.
If they refer to different fields, as in
select count(distinct a), sum(distinct b) from ..
--> the loose index scan is not used, as the AGGR(DISTINCT) functions refer to different fields.
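A minimal reproduction sketch of the example above (table and index names
are hypothetical):

  CREATE TABLE t1 (
    a INT NOT NULL,
    b INT NOT NULL,
    c INT NOT NULL,
    KEY k1 (a, b, c)
  );
  INSERT INTO t1 VALUES
    (1,1,2), (1,2,2), (1,2,3),
    (2,1,2), (2,2,2), (2,2,3);
  -- The aggregates refer to different fields of the same key: the loose
  -- index scan must not be used, otherwise the result is (4, 6) instead
  -- of the correct (2, 3).
  SELECT COUNT(DISTINCT b), SUM(DISTINCT a) FROM t1;
  -- A loose index scan is only safe when the DISTINCT aggregates cover
  -- the same key prefix, e.g.:
  SELECT COUNT(DISTINCT a, b) FROM t1;  -- returns 4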
Merge of 10.0-mdev26 feature tree into 10.0-base.
A global transaction ID is prepended to each event group in the binlog.
A slave connect can request to start from a GTID position instead of
specifying the file name/offset of the master binlog. This makes it easy
to switch to a new master.
The slave GTID state is stored in the table mysql.rpl_slave_state, which can
use InnoDB to make the slave state crash-safe.
Each GTID includes a replication domain ID, allowing distinct positions to
be tracked for each of multiple masters.
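A hedged usage sketch; the syntax shown is that of released MariaDB 10.0
versions and may differ from the tree at the time of this merge:

  -- On the slave: switch from file/offset positioning to GTID positioning,
  -- e.g. when pointing the slave at a new master.
  STOP SLAVE;
  CHANGE MASTER TO
    master_host = 'new-master.example.com',
    master_use_gtid = slave_pos;
  START SLAVE;
  -- The slave GTID state (stored in the mysql.rpl_slave_state table) is
  -- also visible as a variable; each position carries its replication
  -- domain ID as the first component.
  SELECT @@GLOBAL.gtid_slave_pos;  -- e.g. '0-1-100,1-2-57'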
* persistent table versions in the extra2
* ha_archive::frm_compare using TABLE_SHARE::tabledef_version
* distinguish between "important" and "optional" extra2 frm values
* write engine-defined attributes (aka "table options") to extra2, not to extra,
but still read them from the old location if they're found there.
* comments
* cosmetic changes, *(ptr+5) -> ptr[5]
* a couple of trivial functions -> inline
* remove unused argument from pack_header()
* create_frm() no longer creates the frm file (the function used to prepare and
fill a memory buffer and call my_create at the end; now it only prepares
the memory buffer). Renamed accordingly.
* don't call pack_screen twice, go for a smaller screen area in the first attempt
* remove useless calls to check_duplicate_warning()
* don't write unireg screens to .frm files
* remove make_new_entry(); it is basically dead code that always calculated
and wrote the same string value into the frm. Replace the function call
with the constant string.
* print "table doesn't exist in engine" when a table doesn't exist in the engine,
instead of "file not found" (if no file was involved)
* print a complete filename that cannot be found ('t1.MYI', not 't1')
* it's not an error for DROP if a table doesn't exist in the engine (or some table
files cannot be found), as long as the DROP succeeds regardless
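A sketch of the resulting behavior (table name hypothetical), for a table
whose .frm exists but whose files were removed from the engine:

  SELECT * FROM t1;  -- now reports that t1 doesn't exist in the engine,
                     -- rather than a generic 'file not found' error
  DROP TABLE t1;     -- succeeds even though the engine files are gone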