mirror of https://github.com/MariaDB/server.git synced 2025-05-08 15:01:49 +03:00

34399 Commits

Author SHA1 Message Date
Sergei Golubchik
a8e1fa173e fix file_contents to pass with git 2014-05-24 18:09:21 +02:00
Jan Lindström
d12dbe77e2 MDEV-6246: Merge 10.0.10-FusionIO to 10.1. 2014-05-22 14:24:00 +03:00
unknown
787c470cef MDEV-5262: Missing retry after temp error in parallel replication
Handle retry of event groups that span multiple relay log files.

 - If retry reaches the end of one relay log file, move on to the next.

 - Handle refcounting of relay log files, and avoid purging relay log
   files until all event groups that might have needed them for
   transaction retry have completed.
2014-05-15 15:52:08 +02:00
mithun
f220233512 Bug#17217128 : BAD INTERACTION BETWEEN MIN/MAX AND
"HAVING SUM(DISTINCT)": WRONG RESULTS.
ISSUE:
------
If a query uses loose index scan and has both
AGG(DISTINCT) and MIN()/MAX() functions, then the result
values of MIN()/MAX() are set improperly.
When the query has AGG(DISTINCT), end_select is set to
end_send_group. "end_send_group" keeps aggregating
until it sees a record from the next group, and then it
sends out the result row of that group.
Since the query also has MIN()/MAX() and loose index scan is
used, the values of MIN()/MAX() are set as part of the loose
index scan itself. Setting MIN()/MAX() values as part of the
loose index scan overwrites the values computed in
end_send_group. This caused invalid results.
For such queries to work, loose index scan should stop
performing MIN()/MAX() aggregation and let end_send_group
do it. But in the current design loose index scan can
produce only one row per group key. If we have both
MIN() and MAX(), it would have to return two records, which
is not possible as the interface has to use the common buffer
record[0] for both records at a time.

SOLUTIONS:
----------
For such queries to work we would need a new interface for
loose index scan. Hence, do not choose loose index scan for
such cases. A new rule SA7 is introduced to take care of
this.

SA7: "If Q has both AGG_FUNC(DISTINCT ...) and
      MIN/MAX() functions then loose index scan access
      method is not used."
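
As a hedged illustration (hypothetical table and columns), a query of
this shape would now be excluded from the loose index scan access
method by rule SA7:

  SELECT f1, SUM(DISTINCT f2), MIN(f3), MAX(f3)
  FROM t1
  GROUP BY f1;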

mysql-test/r/group_min_max.result:
  Expected result.
mysql-test/t/group_min_max.test:
  1. Test with various combination of AGG(DISTINCT) and
  MIN(), MAX() functions.
  2. Corrected the plan for old queries.
sql/opt_range.cc:
  A new rule SA7 is introduced.
2014-05-15 11:46:57 +05:30
unknown
d60915692c MDEV-5262: Missing retry after temp error in parallel replication
Implement that if first retry fails, we can do another attempt.

Add testcases to test multi-retry that succeeds in second attempt, and
multi-retry that eventually fails due to exceeding slave_trans_retries.
2014-05-13 13:42:06 +02:00
Jan Lindström
9d399c9f35 MDEV-6075: Allow > 16K pages on InnoDB
This patch allows up to 64K pages for tables with DYNAMIC, COMPACT
and REDUNDANT row types. Tables with COMPRESSED row type still
allow only <= 16K page sizes. Note that a single row must still be
<= 16K and the maximum key length is not affected.
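
A hedged sketch (hypothetical table; innodb_page_size is set at server
startup) of what this enables:

  -- with the server started with e.g. innodb_page_size=64K
  CREATE TABLE t1 (a INT, b VARCHAR(2000)) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;
  -- COMPRESSED tables remain limited to page sizes <= 16K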
2014-05-13 13:28:57 +03:00
Sergei Golubchik
edf1fbd25b MDEV-6153 Trivial Lintian errors in MariaDB sources: spelling errors and wrong executable bits 2014-05-13 11:53:30 +02:00
Sergei Golubchik
d3e2e1243b 5.5 merge 2014-05-09 12:35:11 +02:00
Jan Lindström
124428a9e2 MDEV-4791: Assertion range_end >= range_start fails in log0online.c
on select from I_S.INNODB_CHANGED_PAGES

Analysis: limit_lsn_range_from_condition() incorrectly parses
start_lsn and/or end_lsn conditions.

Fix from SergeyP. Added some test cases.
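
A hedged example of the kind of query involved (LSN values hypothetical):

  SELECT * FROM information_schema.INNODB_CHANGED_PAGES
  WHERE start_lsn > 1000000 AND end_lsn < 2000000;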
2014-05-09 11:03:39 +03:00
Venkatesh Duggirala
aa992742db Bug#17283409 4-WAY DEADLOCK: ZOMBIES, PURGING BINLOGS,
SHOW PROCESSLIST, SHOW BINLOGS

Fixing post push test failure (MTR does not like giving
127.0.0.1 for localhost in case of an --embedded run; it thinks
it is an external IP address)
2014-05-09 09:52:15 +05:30
unknown
45a91d8cbb MDEV-6193: Problems with multi-table updates that JOIN against read-only table
All underlying tables should share the same lock type.
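
A hedged sketch of the affected statement shape (hypothetical tables,
with t2 the read-only one):

  UPDATE t1 JOIN t2 ON t1.id = t2.id SET t1.a = t2.b;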
2014-05-08 22:56:36 +03:00
Venkatesh Duggirala
2870bd7423 Bug#17283409 4-WAY DEADLOCK: ZOMBIES, PURGING BINLOGS,
SHOW PROCESSLIST, SHOW BINLOGS

Problem:  A deadlock was occurring when 4 threads were
involved in acquiring locks in the following way
Thread 1: Dump thread (the slave is reconnecting, so on
              the master a new dump thread is trying to kill
              zombie dump threads. It acquired the thread's
              LOCK_thd_data and is about to acquire
              mysys_var->current_mutex, which is LOCK_log)
Thread 2: Application thread is executing show binlogs and
               acquired LOCK_log and it is about to acquire
               LOCK_index.
Thread 3: Application thread is executing Purge binary logs
               and acquired LOCK_index and it is about to
               acquire LOCK_thread_count.
Thread 4: Application thread is executing show processlist
               and acquired LOCK_thread_count and it is
               about to acquire zombie dump thread's
               LOCK_thd_data.
Deadlock Cycle:
     Thread 1 -> Thread 2 -> Thread 3-> Thread 4 ->Thread 1

The same deadlock was also observed when thread 4 is
executing the 'SELECT * FROM information_schema.processlist' command,
has acquired LOCK_thread_count, and is about to acquire the zombie
dump thread's LOCK_thd_data.

Analysis:
There are four locks involved in the deadlock: LOCK_log,
LOCK_thread_count, LOCK_index and LOCK_thd_data.
LOCK_log, LOCK_thread_count and LOCK_index are global mutexes
whereas LOCK_thd_data is local to a thread.
We can divide these four locks into two groups.
Group 1 consists of LOCK_log and LOCK_index, and the order
should be LOCK_log followed by LOCK_index.
Group 2 consists of the other two mutexes,
LOCK_thread_count and LOCK_thd_data, and the order should
be LOCK_thread_count followed by LOCK_thd_data.
Unfortunately, there is no predefined lock order in the
MySQL system for locks across these two groups. In the
problematic example above, each thread individually acquires
its locks in a valid order, but when all 4 threads are
combined they end up in a deadlock.

Fix: 
Since the way the threads take locks seems fine individually,
this patch changes the duration of the locks in Thread 4
to break the deadlock. I.e., before the patch, Thread 4's
('show processlist' command) mysqld_list_processes()
function acquires LOCK_thread_count for the complete duration
of the function and also acquires/releases
each thread's LOCK_thd_data.

LOCK_thread_count is used to protect the addition and
deletion of threads in the global threads list. While show
processlist is looping through all the existing threads,
it is a problem if a thread exits, but there is no problem
if a new thread is added to the system. Hence a new mutex,
"LOCK_thd_remove", is introduced which protects the deletion
of a thread from the global threads list. All threads which are
exiting should acquire LOCK_thd_remove
followed by LOCK_thread_count. (They should take LOCK_thread_count
as well because other places in the code still assume that thread
exit is protected by LOCK_thread_count. In this fix, we are changing
only the 'show processlist' query logic.)
(E.g. the unlink_thd logic will be protected by
LOCK_thd_remove.)

The logic of mysqld_list_processes() (or fill_schema_processlist())
will now be protected with 'LOCK_thd_remove' instead of
'LOCK_thread_count'.

Now the new locking order after this patch is:
LOCK_thd_remove -> LOCK_thd_data -> LOCK_log ->
LOCK_index -> LOCK_thread_count
2014-05-08 18:13:01 +05:30
unknown
b0b60f2498 MDEV-5262: Missing retry after temp error in parallel replication
Start implementing that an event group can be re-tried in parallel replication
if it fails with a temporary error (like deadlock).

The patch is very incomplete; just some very basic retry works.

Stuff still missing (not complete list):

 - Handle moving to the next relay log file, if event group to be retried
   spans multiple relay log files.

 - Handle refcounting of relay log files, to ensure that we do not purge a
   relay log file and then later attempt to re-execute events out of it.

 - Handle description_event_for_exec - we need to save this somehow for the
   possible retry - and use the correct one in case it differs between relay
   logs.

 - Do another retry attempt in case the first retry also fails.

 - Limit the max number of retries.

 - Lots of testing will be needed for the various edge cases.
2014-05-08 14:20:18 +02:00
mithun
ee3c555ad9 Bug #17059925: UNIONS COMPUTES ROWS_EXAMINED INCORRECTLY
ISSUE:
------
For a UNION of selects, the rows examined by the query will be the
sum of the rows examined by the individual select operations and
the rows examined for the union operation. The value of the
session-level counter that is used to count the rows examined by a
select statement should be accumulated and reset before it
is used for the next select statement, but we missed
resetting it. Because of this, the examined row count of a
select query is accounted more than once.

SOLUTION:
---------
In the union, reset the session-level counter used to
accumulate the count of examined rows after its value is saved.
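
A hedged illustration (hypothetical tables and column):

  SELECT a FROM t1 WHERE a > 10
  UNION
  SELECT a FROM t2 WHERE a > 10;
  -- before the fix, rows examined by the first SELECT were
  -- accounted again when the next SELECT and the union ran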

mysql-test/r/union.result:
  Expected output of testcase added.
mysql-test/t/union.test:
  Test to verify examined row count of Union operations.
sql/sql_union.cc:
  Reset the value of thd->examined_row_count after
  accumulating the value.
2014-05-08 14:49:53 +05:30
Sergey Petrunya
0eb84da147 Merge 10.0 -> 10.1 2014-05-08 13:09:15 +04:00
Sergei Golubchik
60dd52e3a3 fix innodb.row_lock test to work in 10.0 2014-05-08 11:09:00 +02:00
Sergei Golubchik
3e7519f17e fix mdl_sync test to work now when ALTER TABLE .. ENGINE=xxx may be executed online 2014-05-08 10:25:24 +02:00
Sergei Golubchik
c8ee83ee8a after merge test case fixes 2014-05-08 10:25:16 +02:00
Sergei Golubchik
914a2b38bf merge of "BUG# 13975227: ONLINE OPTIMIZE TABLE FOR INNODB TABLES"
revno: 5820
committer: Nisha Gopalakrishnan <nisha.gopalakrishnan@oracle.com>
branch nick: mysql-5.6-13975225
timestamp: Mon 2014-02-17 15:12:16 +0530
message:
  BUG# 13975227: ONLINE OPTIMIZE TABLE FOR INNODB TABLES
2014-05-07 22:36:25 +02:00
Sergei Golubchik
5ffe939a6c perfschema 5.6.17 2014-05-07 16:12:16 +02:00
Chaithra Gopalareddy
8ade414b28 Bug#17909656 - WRONG RESULTS FOR A SIMPLE QUERY WITH GROUP BY
Problem:
If there is a predicate on a column referenced by MIN/MAX and
that predicate is not present in all the disjunctions on
keyparts earlier in the compound index, Loose Index Scan will
not return a correct result.

Analysis:
When loose index scan is chosen, range optimizer currently
groups all the predicates that contain group parts separately
and minmax parts separately. It therefore applies all the
conditions on the group parts first to the fetched row.
Then in the call to next_max, it processes the conditions
which have min/max keypart.

For example, in the following query:
Select f1, max(f2) from t1 where (f1 = 10 and f2 = 13) or
(f1 = 3) group by f1;
Condition (f2 = 13) would be applied even for rows that
satisfy (f1 = 3) thereby giving wrong results.

Solution:
Do not choose loose_index_scan for such cases. So a new rule
WA2 is introduced to take care of the same.

WA2: "If there are predicates on C, these predicates must
be in conjuction to all predicates on all earlier keyparts
in I."

To do this, the fix reuses the function get_constant_key_infix().
Since this function would fail for all multi-range conditions, it
is re-written to recognize when the sub-conditions are
equivalent across the disjuncts; in that case it will now succeed.
To achieve this, a new helper function called all_same()
is introduced.

The fix also moves the test of NGA3 up to its former only
caller, get_constant_key_infix().


mysql-test/r/group_min_max_innodb.result:
  Added test result change for Bug#17909656
mysql-test/t/group_min_max_innodb.test:
  Added test cases for Bug#17909656
sql/opt_range.cc:
  Introduced Rule WA2 because of Bug#17909656
2014-05-07 14:59:23 +05:30
Venkatesh Duggirala
3b43142623 Bug#17638477 UNINSTALL AND INSTALL SEMI-SYNC PLUGIN CAUSES SLAVES TO BREAK
Fixing post push failure
2014-05-07 14:33:58 +05:30
Sergei Golubchik
f55a0a6ad2 after perfschema-mergetree merge - update tests and results 2014-05-07 10:21:58 +02:00
Sergei Golubchik
374f0751a2 null-merge from perfschema-5.6 merge tree
(only new files and small style changes are accepted)
2014-05-07 10:21:41 +02:00
Sergei Golubchik
04bce7b569 5.6.17 2014-05-07 10:04:30 +02:00
unknown
3f80740aa8 merge 5.5->5.3 2014-05-07 09:28:12 +03:00
Sergei Golubchik
06d874ba90 perfschema 5.6.10 initial commit.
5.6 files
2014-05-06 23:22:16 +02:00
Sergei Golubchik
d74414399d perfschema 5.6.10 initial commit.
10.0 files
2014-05-06 23:20:50 +02:00
Sergei Golubchik
890982192a update test file for windows 2014-05-06 14:52:40 +02:00
Sergei Golubchik
4a84ee1c25 after InnoDB/XtraDB 5.6.16 merge 2014-05-06 13:57:56 +02:00
Mattias Jonsson
548db49210 Bug#17909699: WRONG RESULTS WITH PARTITION BY LIST COLUMNS()
A typo led to not including the last list values (partition).

Also improved pruning to skip the last partition if it is not used.

rb#4762 approved by Aditya and Marko.
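
A hedged sketch of the partitioning shape involved (hypothetical table
and values):

  CREATE TABLE t1 (a INT)
  PARTITION BY LIST COLUMNS (a)
  (PARTITION p0 VALUES IN (1, 2),
   PARTITION p1 VALUES IN (3, 4));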
2014-05-06 11:05:37 +02:00
Michael Widenius
a55c159424 MDEV-6245 Certain compressed tables with myisampack are corrupted by "CHECK TABLE"
- Fixed a bug where we were using the wrong checksum algorithm for VARCHAR with fixed-length rows
- Ensure in myisampack that HA_OPTION_NULL_FIELDS is set for tables with null fields.

mysql-test/r/myisampack.result:
  Updated results
mysql-test/t/myisampack.test:
  Added more tests
storage/myisam/mi_open.c:
  Use correct checksum algorithm when we have VARCHAR fields with fixed length records
storage/myisam/myisampack.c:
  Ensure HA_OPTION_NULL_FIELDS is set for tables with null fields.
  (This was not set by default for non-compressed tables without checksums, to keep MyISAM tables compatible with MySQL)
2014-05-17 10:42:59 +03:00
Sergei Golubchik
f6524e4963 MDEV-4925 Wrong result - count(distinct), Using index for group-by (scanning)
added the test case
2014-05-12 12:56:13 +02:00
Sergei Golubchik
e2e5d07b28 MDEV-6184 10.0.11 merge
InnoDB 5.6.16
2014-05-06 09:57:39 +02:00
Venkatesh Duggirala
a2333a489a Bug#17638477 UNINSTALL AND INSTALL SEMI-SYNC PLUGIN CAUSES SLAVES TO BREAK
Fixing post push failure
2014-05-06 11:23:42 +05:30
Sergei Golubchik
3792693f31 merge MySQL-5.6 bugfix "Bug#17862905: MYSQLDUMP CREATES USELESS METADATA LOCKS"
revno: 5716
committer: Praveenkumar Hulakund <praveenkumar.hulakund@oracle.com>
branch nick: mysql_5_6
timestamp: Sat 2013-12-28 22:08:40 +0530
message:
  Bug#17862905: MYSQLDUMP CREATES USELESS METADATA LOCKS
2014-05-05 23:53:31 +02:00
Venkatesh Duggirala
b6283d4f97 Bug#17638477 UNINSTALL AND INSTALL SEMI-SYNC PLUGIN CAUSES SLAVES TO BREAK
Problem: Uninstallation of semi sync plugin causes replication to
break.

Analysis: Semisync-enabled replication is a mutual agreement between
master and slave made when the connection (I/O thread) is established.
Once the I/O thread is started, and if semisync is enabled on both
master and slave, the master appends a special magic header to events
using semisync plugin functions and sends them to the slave. The slave
expects each event to have that special magic header format
and reads those bytes using semisync plugin functions.

If, while semisync replication is in use, a user uninstalls
the plugin on the master, the slave gets confused while
interpreting the event's content because it expects the special
magic header at the beginning of the event. The slave SQL thread will
be stopped with a "Missing magic number in the header" error.

A similar problem happens if the plugin is uninstalled
on the slave while semisync replication is in use. The master sends
the events with the magic header, but the slave does not know about the
added magic header and thinks that it received a corrupted event.
Hence the slave SQL thread stops with a "Found corrupted event" error.

Fix: Uninstallation of semisync plugin will be blocked when semisync
replication is in use and will throw 'ER_UNKNOWN_ERROR' error.
To detect that semisync replication is in use, this patch uses
semisync status variable values.
 > On Master, it checks for 'Rpl_semi_sync_master_status' to be OFF
    before allowing the uninstallation of rpl_semi_sync_master plugin.
    >> Rpl_semi_sync_master_status is OFF when
        >>> there is no dump thread running
        >>> there are no semisync slaves
 > On Slave, it checks for 'Rpl_semi_sync_slave_status' to be OFF
    before allowing the uninstallation of rpl_semi_sync_slave plugin.
    >> Rpl_semi_sync_slave_status is OFF when
       >>> there is no I/O thread running
       >>> replication is asynchronous replication.
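
A hedged illustration of the check on the master (status variable and
plugin name as described above):

  SHOW STATUS LIKE 'Rpl_semi_sync_master_status';
  -- while the status is ON, the following is now refused with ER_UNKNOWN_ERROR
  UNINSTALL PLUGIN rpl_semi_sync_master;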
2014-05-05 22:22:15 +05:30
Sergei Golubchik
2f7f2de5cb update test results 2014-05-05 15:41:29 +02:00
Sergei Golubchik
a313864814 MDEV-6056 [PATCH] mysqldump writes usage to stdout even when not explicitly requested 2014-05-05 14:24:25 +02:00
unknown
285160dee2 MDEV-5981: name resolution issues with views and multi-update in ps-protocol
It is a triple bug with one test suite:
1. Incorrect outer table detection
2. Incorrect leaf table processing for multi-update (should be full, like for usual updates and inserts)
3. ON condition fix_fields() should be called for all tables of the query.
2014-05-01 17:19:17 +03:00
Sergei Golubchik
ddc960db4b MDEV-6091 mysqldump goes in a loop and segfaults if --dump-slave is specified and it cannot connect to the server
do_start_slave_sql() is called from maybe_exit().
We should not recurse when maybe_exit() is called for an error during do_start_slave_sql().
Also remove a meaningless (but safe) "goto err".
2014-05-01 15:43:51 +02:00
Alexander Nozdrin
d14f191e6b Patch for Bug#18511348 (DDL_I18N_UTF8 AND DDL_I18N_KOI8R
ARE PERMANENTLY SKIPPED IN 5.5/5.6).

The problem was that some result files were not updated,
so the tests were skipped.

The fix is to record updated result files.
2014-04-30 20:48:29 +04:00
Nisha Gopalakrishnan
5e881cc435 BUG#17994219: CREATE TABLE .. SELECT PRODUCES INVALID STRUCTURE,
BREAKS RBR

Analysis:
--------
A table created using a query of the format:
CREATE TABLE t1 AS SELECT REPEAT('A',1000) DIV 1 AS a;
breaks the Row Based Replication.

The query above creates a table having a field of datatype
'bigint' with a display width of 3000 which is beyond the
maximum acceptable value of 255.

In RBR mode, a CREATE TABLE SELECT statement is
replicated as a combination of a CREATE TABLE statement
equivalent to the one returned by SHOW CREATE TABLE and
row events for the rows inserted. When this CREATE TABLE event
is executed on the slave, an error is reported:
Display width out of range for column 'a' (max = 255)

The following is the output of 'SHOW CREATE TABLE t1':
CREATE TABLE t1(`a` bigint(3000) DEFAULT NULL)
                  ENGINE=InnoDB DEFAULT CHARSET=latin1;

The problem is due to the combination of two facts:

1) The above CREATE TABLE SELECT statement uses the display
   width of the result of DIV operation as the display width
   of the column created without validating the width for out
   of bound condition.
2) The DIV operation incorrectly returns the length of its first
   argument as the display width of its result; thus allowing
   creation of a table with an incorrect display width of 3000
   for the field.

Fix:
----
This fix changes the DIV operation implementation to correctly
evaluate the display width of its result. We check if the estimated
width of DIV's result exceeds the maximum width for an integer
value (21) and, if so, set it to this maximum value.

This patch also fixes the maximum display width evaluation
for the DIV function when its first argument is in UCS2.
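
A hedged sketch of the expected behaviour after the fix (the display
width is assumed to be capped at the integer maximum of 21):

  CREATE TABLE t1 AS SELECT REPEAT('A',1000) DIV 1 AS a;
  SHOW CREATE TABLE t1;  -- column `a` expected as bigint(21)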
2014-04-28 16:28:09 +05:30
Sergei Golubchik
2797f0c534 fix XtraDB version to tell the truth 2014-04-28 12:11:35 +02:00
Igor Babaev
b186575fc0 Merge 10.0->10.1 2014-04-23 23:14:29 -07:00
Tor Didriksen
c76e29c884 Backport from trunk:
Bug#18396916 MAIN.OUTFILE_LOADDATA TEST FAILS ON ARM, AARCH64, PPC/PPC64
  
  The recorded results for the failing tests were wrong.
  They were introduced by the patch for
  Bug#30946 mysqldump silently ignores --default-character-set when used with --tab
  
  Correct results were returned for platforms where 'char' is implemented as unsigned.
  This was reported as 
  Bug#46895 Test "outfile_loaddata" fails (reproducible)
  Bug#11755168 46895: TEST "OUTFILE_LOADDATA" FAILS (REPRODUCIBLE)
  The patch for that bug fixed only parts of the problem,
  leaving the incorrect results in the .result file.
  
  Solution: use 'uchar' for field_terminator and line_terminator on all platforms.
  Also: remove some unnecessary casts, leaving the ones we actually need.
2014-04-23 17:01:35 +02:00
unknown
010971a761 MDEV-6156: Parallel replication incorrectly caches charset between worker threads
Replication caches the character sets used in a query, to be able to quickly
reuse them for the next query in the common case of them not having changed.

In parallel replication, this caching needs to be per-worker-thread. The
code was not modified to handle this correctly, so the caching in one worker
could cause another worker to run a query using the wrong character set,
causing replication corruption.
2014-04-23 16:06:06 +02:00
Alexander Barkov
a24ea50d1a MDEV-5338 XML parser accepts malformed data 2014-04-23 15:53:47 +04:00
Sergey Petrunya
504068b093 MDEV-6209: Assertion `join->best_read < double(1.79769313486231570815e+308L ...
- Use floating-point division in selectivity calculations.
2014-05-05 13:24:54 +03:00
Sergey Petrunya
54265c5f50 MDEV-5980: EITS: if condition is used for REF access, its selectivity is still in filtered%
- Testcase. The bug is fixed by commit for MDEV-6003
2014-04-25 19:12:06 +04:00