mirror of https://github.com/MariaDB/server.git synced 2025-07-05 12:42:17 +03:00
Commit Graph

32064 Commits

Author SHA1 Message Date
8e95e6543b Changed a test case from join_cache.test to make it platform independent 2012-05-17 18:01:13 -07:00
61b86d004e Merge. 2012-05-16 22:33:22 -07:00
4dc963a9f3 Fixed LP bug #999251: Q13 from DBT3 uses table scan instead of covering index scan.
The optimizer chose a less efficient execution plan due to the following
defects of the code:
1. the generic handler function handler::keyread_time did not take into account
   that in clustered primary keys record data is included into each index entry
2. the function make_join_readinfo erroneously decided that index only scan
   could not be used if join cache was employed.

No additional test case was added.
Adjusted some of the test results.
2012-05-16 20:39:03 -07:00
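A minimal, hypothetical cost-model sketch of defect 1 above (this is not the actual handler::keyread_time() code; the struct, field names, and numbers are illustrative): in a clustered primary key each index entry carries the full record, so an "index only" read of that key costs roughly as much as a table scan, and a key-length-only estimate undercosts it.

```cpp
// Sketch only: a keyread-style cost estimate that charges the full row length
// for a clustered primary key, since each of its entries embeds the record.
#include <cstdint>
#include <iostream>

struct IndexInfo {
  std::uint32_t key_length;   // bytes used by the key columns of one entry
  bool is_clustered_pk;       // clustered-PK entries also contain the row data
};

double keyread_cost(const IndexInfo &idx, std::uint32_t row_length,
                    std::uint64_t rows, std::uint32_t block_size = 16 * 1024) {
  // For a clustered primary key the entry size must include the record
  // itself, not just the key columns (the point of defect 1).
  const std::uint64_t entry_size =
      idx.is_clustered_pk ? idx.key_length + row_length : idx.key_length;
  return static_cast<double>(rows) * entry_size / block_size;  // blocks read
}

int main() {
  const IndexInfo secondary{16, false}, clustered_pk{16, true};
  std::cout << "secondary covering scan: "
            << keyread_cost(secondary, 200, 1000000) << " blocks\n"
            << "clustered PK scan:       "
            << keyread_cost(clustered_pk, 200, 1000000) << " blocks\n";
}
```

Without the clustered-PK adjustment the two estimates would look identical, even though scanning the clustered primary key is far closer to a full table scan in cost.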
b1485a4780 More fixes for LOCK TABLE and REPAIR/FLUSH
Changed HA_EXTRA_NORMAL to HA_EXTRA_NOT_USED (cleaner)

mysql-test/suite/maria/lock.result:
  More extensive tests of LOCK TABLE with FLUSH and REPAIR
mysql-test/suite/maria/lock.test:
  More extensive tests of LOCK TABLE with FLUSH and REPAIR
sql/sql_admin.cc:
  Fix that REPAIR TABLE ... USE_FRM works with LOCK TABLES
sql/sql_base.cc:
  Ensure that transactions are closed in ARIA when doing flush
  HA_EXTRA_NORMAL -> HA_EXTRA_NOT_USED
  Don't call extra() multiple times for a table in close_all_tables_for_name()
  Added a check that table_list->table is set, as it can be unset in error situations
sql/sql_partition.cc:
  HA_EXTRA_NORMAL -> HA_EXTRA_NOT_USED
sql/sql_reload.cc:
  Fixed comment
sql/sql_table.cc:
  HA_EXTRA_NORMAL -> HA_EXTRA_NOT_USED
sql/sql_trigger.cc:
  HA_EXTRA_NORMAL -> HA_EXTRA_NOT_USED
sql/sql_truncate.cc:
  HA_EXTRA_FORCE_REOPEN -> HA_EXTRA_PREPARE_FOR_DROP for truncate, as this speeds up truncate by not having to flush the cache to disk.
2012-05-17 01:47:28 +03:00
26cc22f3fe Fixed LP:990187 Assertion `share->reopen == 1' failed at maria_extra on ADD PARTITION
mysql-test/suite/maria/maria-partitioning.result:
  New test case
mysql-test/suite/maria/maria-partitioning.test:
  New test case
sql/sql_base.cc:
  Ignore HA_EXTRA_NORMAL for wait_while_table_is_used()
  More DBUG
sql/sql_partition.cc:
  Don't use HA_EXTRA_FORCE_REOPEN for wait_while_table_is_used() as the table is opened multiple times (in prep_alter_part_table)
  This fixes the assert in Aria where we check if table is opened multiple times if HA_EXTRA_FORCE_REOPEN is issued
2012-05-16 22:04:48 +03:00
c39da19c4a Moved maria tests to suite/maria 2012-05-16 18:46:02 +03:00
6d8e329c9a Fixed bug LP:973039 - Assertion `share->in_trans == 0' failed in maria_close on DROP TABLE under LOCK
- 5.5 was missing calls to ha_extra(HA_PREPARE_FOR_DROP | HA_PREPARE_FOR_RENAME); these were lost in the 5.3 -> 5.5 merge


sql/sql_admin.cc:
  Updated arguments for close_all_tables_for_name
sql/sql_base.h:
  Updated arguments for close_all_tables_for_name
sql/sql_partition.cc:
  Updated arguments for close_all_tables_for_name
sql/sql_table.cc:
  Updated arguments for close_all_tables_for_name
  Removed test of kill, as we have already called 'ha_extra(HA_PREPARE_FOR_DROP)' and the table may be inconsistent.
sql/sql_trigger.cc:
  Updated arguments for close_all_tables_for_name
sql/sql_truncate.cc:
  For truncate that is done with drop + recreate, signal that the table will be dropped.
2012-05-16 18:44:17 +03:00
c17bace4f0 Added --continue-on-error to mysqltest and mysql-test-run
This will continue the test case even if there was an error,
which makes it easier to run a test that contains many sub-tests against one engine.

(originally by Monty)
2012-05-15 19:35:57 +02:00
f2cbc014d9 fix for LP bug#994392
The not_null_tables() of Item_func_not_all and Item_in_optimizer was inherited from
Item_func by mistake. It made the optimizer think that subquery
predicates with ALL/ANY/IN were null-rejecting. This could trigger invalid
conversions of outer joins into inner joins.
2012-05-11 09:35:46 +03:00
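A hedged illustration of the fix above (simplified stand-in classes, not the real Item hierarchy): the default not_null_tables() of a function item treats it as null-rejecting, which is wrong for the ALL/ANY/IN wrappers, so they must override it to report an empty table map.

```cpp
// Sketch only: a null-rejecting default inherited by a subclass that is not
// actually null-rejecting, and the override that corrects it.
#include <cstdint>
#include <iostream>

using table_map = std::uint64_t;

struct Item_func {                 // hypothetical simplification
  table_map used_tables_map = 0;
  // Default for most functions: a NULL argument makes the result NULL, so
  // the function rejects NULL-complemented rows of every table it uses.
  virtual table_map not_null_tables() const { return used_tables_map; }
  virtual ~Item_func() = default;
};

struct Item_func_not_all : Item_func {
  // ALL/ANY/IN predicates can be TRUE even when an argument is NULL, so
  // they must not be treated as null-rejecting.
  table_map not_null_tables() const override { return 0; }
};

int main() {
  Item_func_not_all predicate;
  predicate.used_tables_map = 0x2;   // pretend the predicate uses table t2
  // With the override, the optimizer sees no null-rejected tables and will
  // not convert an outer join on t2 into an inner join.
  std::cout << predicate.not_null_tables() << "\n";   // prints 0
}
```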
326b40c9c8 Merging from mysql-5.1 to mysql-5.5. 2012-05-10 10:33:16 +05:30
391ea219c2 Bug #14007649 65111: INNODB SOMETIMES FAILS TO UPDATE ROWS INSERTED
BY A CONCURRENT TRANSACTION

The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
a table handler clone. InnoDB did not provide a clone operation:
ha_innobase::clone() did not exist, and the generic handler::clone() does
not take care of ha_innobase->prebuilt->select_lock_type. Because of this,
one index was read with a locking read while the other index was read with
a non-locking (consistent) read.

The patch introduces the ha_innobase::clone() member function, implemented
similarly to ha_myisam::clone(). It calls the base class handler::clone()
and then does any additional operations required, setting
ha_innobase->prebuilt->select_lock_type correctly.

rb://1060 approved by Marko
2012-05-10 10:18:31 +05:30
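The following is a minimal sketch of the idea (stand-in types; not the actual InnoDB patch, nor the real handler or row_prebuilt_t definitions): the clone must propagate prebuilt->select_lock_type so both handlers of a ROR-merged scan perform the same kind of read.

```cpp
// Sketch only: clone() does the generic copy and then carries the lock mode
// over, so the cloned handler locks rows exactly when the original does.
#include <iostream>

enum select_lock_t { LOCK_NONE, LOCK_S, LOCK_X };

struct row_prebuilt_t { select_lock_t select_lock_type = LOCK_NONE; };

struct handler {
  virtual handler *clone() const = 0;
  virtual ~handler() = default;
};

struct ha_innobase : handler {
  row_prebuilt_t prebuilt;   // per-handler InnoDB state (simplified)

  handler *clone() const override {
    // In the server this would first call the base class handler::clone();
    // the sketch just allocates a fresh handler.
    auto *copy = new ha_innobase;
    // The essential step from the fix: without it the clone kept LOCK_NONE
    // and did a consistent read while the original did a locking read.
    copy->prebuilt.select_lock_type = prebuilt.select_lock_type;
    return copy;
  }
};

int main() {
  ha_innobase original;
  original.prebuilt.select_lock_type = LOCK_X;   // e.g. an UPDATE takes X locks
  handler *clone = original.clone();
  std::cout << (static_cast<ha_innobase *>(clone)->prebuilt.select_lock_type ==
                LOCK_X)
            << "\n";                             // prints 1
  delete clone;
}
```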
ddd3e261b2 MDEV-254: Server hang with FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT
The code to re-enable checkpointing after UNLOCK TABLES was lost in the 5.5
merge, so add it back.
2012-05-08 14:27:44 +02:00
54534a6984 MDEV-262 : log_state occasionally fails in buildbot.
The failures are missing entries in the slow query log. The cause is sleep() calls with a short duration (10 ms), which is less than the default system timer resolution of the various WaitForXXXObject functions (15.6 ms) and thus cannot work reliably.
The fix is to make the sleeps a bit longer (20 ms instead of 10 ms) in the test.
2012-05-08 12:38:22 +02:00
074ce71e90 Merge from mysql-5.1.63-release 2012-05-08 07:19:14 +02:00
5be07ceadd Merge 5.5.24 back into main 5.5.
This is a weave merge, but without any conflicts.
In 14 source files, the copyright year needed to be updated to 2012.
2012-05-07 22:20:42 +02:00
ea8314fdd5 LP bug#994275 fix.
In 5.3 we substitute constants in ref access values; they cannot be NULL, so we do not need to add NOT NULL for early NULL filtering.
2012-05-07 21:14:37 +03:00
1d47bbe3bf Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM

Merging the fix from mysql-5.1 to mysql-5.5

mysql-test/t/mysqldump.test:
  There is a difference in the test case added as part of this fix
  compared with mysql-5.1: in mysql-5.5 and mysql-5.6, dropping the mysql
  database fails when logging is enabled, hence those lines were removed.
2012-05-07 16:51:26 +05:30
e7364ec29c Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not include dump statements for the general_log and slow_log
tables. That is because of the fix for Bug#26121. Hence, after dropping the
mysql database and applying the dump with logging enabled, "'general_log'
table not found" errors are logged into the server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped dumping the general_log and
slow_log tables, because the data dump of those tables takes LOCKS, which
is not allowed for log tables.

Fix:
----
The approach is that, instead of taking both the metadata and the data dump
for those tables, we take only the metadata dump, which doesn't need LOCKS.
As part of fixing the issue we came up with the algorithm below.
Design before fix:
1) The mysql database has tables like db, event,... general_log,
   ... slow_log...
2) Skip general_log and slow_log while preparing the tables list
3) Take the TL_READ lock on tables which are present in the table 
   list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event,... general_log,
   ... slow_log...
2) Skip general_log and slow_log while preparing the tables list
3) Explicitly call 'show create table' for general_log and
   slow_log
4) Take the TL_READ lock on tables which are present in the table
   list and do 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, "CREATE TABLE"
is replaced with "CREATE TABLE IF NOT EXISTS". This is because we skipped
"DROP TABLE" for those tables: "DROP TABLE" fails for them if logging is
enabled. The customer applies the dump with logging enabled, so if the dump
had "DROP TABLE" it would fail. Hence the "DROP TABLE" statements for those
tables were removed.
  
After the fix we may initially observe "Table 'mysql.general_log'
doesn't exist" errors; this is because in the customer scenario the mysql
database is dropped with logging enabled, so those errors are expected.
Once we apply the dump taken before the "drop database mysql", the errors
are gone.

client/mysqldump.c:
  In get_table_structure(), added code to skip the DROP TABLE statements for
  the general_log and slow_log tables, because those statements fail when
  logging is enabled, and replaced CREATE TABLE with CREATE TABLE IF NOT
  EXISTS for those tables so that the CREATE statement doesn't fail now that
  the DROP statements are gone.
  In dump_all_tables_in_db(), added code to call get_table_structure() for
  the general_log and slow_log tables.
mysql-test/r/mysqldump.result:
  Added a test as part of fix for Bug #11754178
mysql-test/t/mysqldump.test:
  Added a test as part of fix for Bug #11754178
2012-05-07 16:46:44 +05:30
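A small illustrative sketch of the dump logic described above, written as stand-alone C++ rather than the real client/mysqldump.c (the function names and the sample SHOW CREATE text are hypothetical): for the two log tables the DROP statement is skipped and CREATE TABLE is rewritten to CREATE TABLE IF NOT EXISTS.

```cpp
// Sketch only: emit DDL for a table, treating mysql.general_log and
// mysql.slow_log specially as the commit message describes.
#include <iostream>
#include <string>

bool is_log_table(const std::string &db, const std::string &table) {
  return db == "mysql" && (table == "general_log" || table == "slow_log");
}

std::string dump_table_ddl(const std::string &db, const std::string &table,
                           const std::string &show_create_output) {
  std::string ddl;
  if (!is_log_table(db, table))
    ddl += "DROP TABLE IF EXISTS `" + table + "`;\n";   // normal tables only

  std::string create = show_create_output;
  if (is_log_table(db, table)) {
    // The CREATE must tolerate the table already existing, because there was
    // no DROP (a DROP would fail while logging is enabled).
    const std::string from = "CREATE TABLE";
    const std::string to = "CREATE TABLE IF NOT EXISTS";
    if (create.compare(0, from.size(), from) == 0)
      create.replace(0, from.size(), to);
  }
  return ddl + create + ";\n";
}

int main() {
  std::cout << dump_table_ddl("mysql", "general_log",
                              "CREATE TABLE `general_log` (...)");
}
```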
8065143637 Fix for LP bug#993726
Optimization of aggregate functions detected a constant under max() and evaluated it, but the condition in the WHERE clause (which is always FALSE) was not taken into account.
2012-05-07 13:26:34 +03:00
213476ef3e Fix for bug lp:992405
The patch backports two patches from mysql 5.6:
- BUG#12640437: USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT QUERY OUTPUT
- Bug#12578908: SELECT SQL_BUFFER_RESULT OUTPUTS TOO MANY ROWS WHEN GROUP IS OPTIMIZED AWAY

Original comment:
-----------------
3714 Jorgen Loland	2012-03-01
      BUG#12640437 - USING SQL_BUFFER_RESULT RESULTS IN A DIFFERENT 
                     QUERY OUTPUT
      
      For all but simple grouped queries, temporary tables are used to
      resolve grouping. In these cases, the list of grouping fields is
      stored in the temporary table and grouping is resolved
      there (e.g. by adding a unique constraint on the involved
      fields). Because of this, grouping is already done when the rows
      are read from the temporary table.
      
      In the case where a group clause may be optimized away, grouping
      does not have to be resolved using a temporary table. However, if
      a temporary table is explicitly requested (e.g. because the
      SQL_BUFFER_RESULT hint is used, or the statement is
      INSERT...SELECT), a temporary table is used anyway. In this case,
      the temporary table is created with an empty group list (because
      the group clause was optimized away) and it will therefore not
      create groups. Since the temporary table does not take care of
      grouping, JOIN::group shall not be set to false in 
      make_simple_join(). This was fixed in bug 12578908. 
      
      However, there is an exception where make_simple_join() should
      set JOIN::group to false even if the query uses a temporary table
      that was explicitly requested but is not strictly needed. That
      exception is if the loose index scan access method (explain
      says "Using index for group-by") is used to read into the 
      temporary table. With loose index scan, grouping is resolved 
      by the access method. This is exactly what happens in this bug.
2012-05-07 11:02:58 +03:00
867ce618cb merge 2012-05-05 14:59:44 +02:00
daafaa0f86 Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
CAUSES RESTORE PROBLEM
Problem Statement:
------------------
mysqldump does not include dump statements for the general_log and slow_log
tables. That is because of the fix for Bug#26121. Hence, after dropping the
mysql database and applying the dump with logging enabled, "'general_log'
table not found" errors are logged into the server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped dumping the general_log and
slow_log tables, because the data dump of those tables takes LOCKS, which
is not allowed for log tables.

Fix:
----
The approach is that, instead of taking both the metadata and the data dump
for those tables, we take only the metadata dump, which doesn't need LOCKS.
As part of fixing the issue we came up with the algorithm below.
Design before fix:
1) The mysql database has tables like db, event,... general_log,
   ... slow_log...
2) Skip general_log and slow_log while preparing the tables list
3) Take the TL_READ lock on tables which are present in the table 
   list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event,... general_log,
   ... slow_log...
2) Skip general_log and slow_log while preparing the tables list
3) Explicitly call 'show create table' for general_log and
   slow_log
4) Take the TL_READ lock on tables which are present in the table
   list and do 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, "CREATE TABLE"
is replaced with "CREATE TABLE IF NOT EXISTS". This is because we skipped
"DROP TABLE" for those tables: "DROP TABLE" fails for them if logging is
enabled. The customer applies the dump with logging enabled, so if the dump
had "DROP TABLE" it would fail. Hence the "DROP TABLE" statements for those
tables were removed.
  
After the fix we may initially observe "Table 'mysql.general_log'
doesn't exist" errors; this is because in the customer scenario the mysql
database is dropped with logging enabled, so those errors are expected.
Once we apply the dump taken before the "drop database mysql", the errors
are gone.

client/mysqldump.c:
  In get_table_structure(), added code to skip the DROP TABLE statements for
  the general_log and slow_log tables, because those statements fail when
  logging is enabled, and replaced CREATE TABLE with CREATE TABLE IF NOT
  EXISTS for those tables so that the CREATE statement doesn't fail now that
  the DROP statements are gone.
  In dump_all_tables_in_db(), added code to call get_table_structure() for
  the general_log and slow_log tables.
mysql-test/r/mysqldump.result:
  Added a test as part of fix for Bug #11754178
mysql-test/t/mysqldump.test:
  Added a test as part of fix for Bug #11754178
2012-05-04 18:33:34 +05:30
2cf17b09ed Resolve opt_vardir in MTR with realpath. Server resolves some directory names, thus
mtr should do it as well, to avoid differences in test output. 

This fixes sys_vars.secure_file_priv on FreeBSD9.0 buildbot.
2012-05-04 14:46:18 +02:00
ab58904367 Fix FreeBSD test errors. Also link with libexecinfo on FreeBSD for stacktrace functionality. 2012-05-04 14:02:35 +02:00
b757c13078 In perl, to break out of a foreach loop we need to use
the keyword "last" and not "break".  Fixing the failing
test case.
2012-05-04 12:29:49 +05:30
44cf9ee5f7 5.3 merge 2012-05-04 07:16:38 +02:00
d231dc8f59 Fix (hopefully) a race condition in a test. Wait until killed connection
is gone.
2012-05-03 18:58:48 +02:00
d555878050 automatic merge 2012-05-03 16:00:41 +03:00
c9a73aa204 Fix bug lp:993745
This is a backport of the fix for MySQL bug #13723054 in 5.6.

Original comment:
      The crash is caused by arbitrary memory area overwriting in the case of
      BLOB fields, during an attempt to copy a BLOB field key image into the
      record buffer (the record buffer is too small to hold the BLOB key part
      image).
      Note:
      QUICK_GROUP_MIN_MAX_SELECT cannot work with BLOB fields
      because it uses the record buffer as a temporary buffer for key values;
      however, this case is filtered out by the covering_keys() check
      in get_best_group_min_max(), as BLOBs always require a key length
      modifier in the key declaration, and if the key has a BLOB
      then it cannot be a covering key.
      The fix is to use the 'max_used_key_length' key length instead of 0.

Analysis:
Specifically, the crash in this bug was a result of the call to key_copy()
that copied the whole key, including the BLOB field, which is not used
for index access. Copying the BLOB field overwrote memory as far as the
function parameter 'key_info'. As a result the contents of key_info were
all 0, which resulted in a crash when this key_info was accessed a few
lines below in key_cmp().
2012-05-03 14:49:52 +03:00
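A minimal, hypothetical sketch of the fix's effect (not the server's key_copy(); the buffer sizes and helper are made up): passing the used key length bounds the copy to what the record buffer actually reserves, instead of copying the whole key including the BLOB image.

```cpp
// Sketch only: in the real code a key length of 0 meant "copy the full key",
// which overflowed the record buffer; passing max_used_key_length keeps the
// copy inside the space reserved for the used key parts.
#include <cstring>
#include <iostream>

constexpr std::size_t MAX_USED_KEY_LENGTH = 8;   // used key prefix (illustrative)
constexpr std::size_t FULL_KEY_LENGTH = 64;      // key incl. BLOB image (too big)

void copy_key_image(unsigned char *record_buf, std::size_t buf_size,
                    const unsigned char *key_image, std::size_t copy_length) {
  if (copy_length > buf_size) copy_length = buf_size;   // defensive, sketch only
  std::memcpy(record_buf, key_image, copy_length);
}

int main() {
  unsigned char record_buf[MAX_USED_KEY_LENGTH] = {0};
  unsigned char key_image[FULL_KEY_LENGTH] = {0};
  // Copy only the used key length; copying FULL_KEY_LENGTH bytes would have
  // written far past record_buf, as the crash in this bug did.
  copy_key_image(record_buf, sizeof record_buf, key_image, MAX_USED_KEY_LENGTH);
  std::cout << "copied " << MAX_USED_KEY_LENGTH << " bytes, within the buffer\n";
}
```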
550d6871a5 MDEV-246 - Aborted_clients incremented during ordinary connection close
The problem was an increment of the aborted_threads variable due to thd->killed, which was set when a threadpool connection was terminated. The fix is to not set thd->killed anymore; there is no real reason for doing it.

Added a test that checks that status variable aborted_clients does not grow for ordinary disconnects, and that successful KILL increments this variable.
2012-05-03 02:47:06 +02:00
ef4b9a0e57 Fix for failing gis-precise on Windows. 2012-05-03 13:14:40 +05:00
99e2ba4848 5.2 merge 2012-05-02 22:02:06 +02:00
167ad4c4a5 update the result file 2012-05-02 22:00:31 +02:00
8fe40c50db MDEV-214 lp:967242 Wrong result with JOIN, AND in ON condition, multi-part key, GROUP BY, subquery and OR in WHERE
The problem was in the code (update_const_equal_items()) which marked
index parts constant independently of the place where the equality was used.
In the test suite it marked the t2_1.c part constant despite the fact that
it was connected by OR with another expression.

The solution is to mark as constant only top-level equalities connected by AND.
2012-05-02 18:11:02 +02:00
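A hedged sketch of the rule in that fix (a hypothetical mini condition tree, not the server's update_const_equal_items() or Item classes): only equalities reachable through AND nodes alone may mark a key part constant; anything under OR is skipped.

```cpp
// Sketch only: collect "constant" key parts by descending through AND nodes
// and stopping at OR, since an equality under OR holds only for some rows.
#include <iostream>
#include <vector>

struct Cond {
  enum Kind { AND_COND, OR_COND, EQUALITY } kind;
  std::vector<Cond> args;   // children for AND_COND / OR_COND
  int key_part = -1;        // for EQUALITY: the key part it would make constant
};

void mark_top_level_equalities(const Cond &c, std::vector<int> &const_parts) {
  switch (c.kind) {
    case Cond::EQUALITY:
      const_parts.push_back(c.key_part);   // holds for every matched row
      break;
    case Cond::AND_COND:
      for (const Cond &child : c.args)     // every conjunct must hold
        mark_top_level_equalities(child, const_parts);
      break;
    case Cond::OR_COND:
      break;   // must not be marked: the equality is only one alternative
  }
}

int main() {
  // (part 1 = const) AND ((part 2 = const) OR <something else>)
  Cond eq1{Cond::EQUALITY, {}, 1};
  Cond eq2{Cond::EQUALITY, {}, 2};
  Cond or_node{Cond::OR_COND, {eq2}, -1};
  Cond root{Cond::AND_COND, {eq1, or_node}, -1};

  std::vector<int> parts;
  mark_top_level_equalities(root, parts);
  for (int p : parts) std::cout << "constant key part: " << p << "\n";  // only 1
}
```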
26fcc55017 LP993103: Wrong result with LAST_DAY('0000-00-00 00:00:00')IS NULL in WHERE condition
The fix is to set the maybe_null flag for Item_func_last_day.
2012-05-02 16:53:02 +02:00
beec2a2b1d MDEV-241 lp:992722 - Server crashes in get_datetime_value
Create an Item_cache based on item's cmp_type, not result_type in 
subselect_engine.

Use result_field in Item_cache_temporal::cache_value(),
just like all other Item_cache*::cache_value() do.
2012-05-02 15:22:47 +02:00
6920491587 merge 2012-05-02 17:04:28 +02:00
af084bcd78 bug #977021 ST_BUFFER fails with the negative D.
Points and lines should disappear if we get a negative D.
  To make it work properly inside the GEOMETRYCOLLECTION,
  we add the empty operation there.

bug #986977 Assertion `!cur_p->event' failed in Gcalc_scan_iterator::arrange_event(int, int).
  The double->internal coord conversion produced -0 (minus zero) on some data.
  That minus zero produces invalid comparison results when compared against plus zero.
  So we fixed the gcalc_set_double() to avoid it.

per-file comments:
  mysql-test/r/gis-precise.result
        result updated.
  mysql-test/t/gis-precise.test
        tests for #977021 and #986977 added.
  sql/gcalc_slicescan.cc
        bug #986977. The gcalc_set_double fixed to not produce minus-zero.
  sql/item_geofunc.cc
        bug #977021. Add the NOOP for the disappearing features.
2012-04-29 18:08:11 +05:00
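Bug #986977 above hinges on how minus zero behaves; the short, runnable sketch below (not the actual gcalc_slicescan.cc code; the normalization function is hypothetical) shows that -0.0 compares equal to +0.0 yet carries a different sign bit, and how folding it to +0.0 removes the ambiguity.

```cpp
// Sketch only: -0.0 == +0.0 under operator==, but its sign bit differs, so
// sign-based handling of internal coordinates can go wrong unless minus zero
// is normalized away, which is what the fix to gcalc_set_double() does.
#include <cmath>
#include <iostream>

// Hypothetical analogue of the fix: never keep a negative-zero value.
double set_double(double d) {
  return d == 0.0 ? 0.0 : d;   // folds -0.0 into +0.0, leaves other values alone
}

int main() {
  const double minus_zero = -0.0;
  std::cout << std::boolalpha
            << "(-0.0 == 0.0)            -> " << (minus_zero == 0.0) << "\n"
            << "signbit(-0.0)            -> " << std::signbit(minus_zero) << "\n"
            << "signbit(set_double(-0.0)) -> "
            << std::signbit(set_double(minus_zero)) << "\n";
}
```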
476762bd7b Revert two follow-ups for Bug#12762885:
- alexander.nozdrin@oracle.com-20120427151428-7llk1mlwx8xmbx0t
  - alexander.nozdrin@oracle.com-20120427144227-kltwiuu8snds4j3l.
2012-04-27 21:07:53 +04:00
25cdec81e0 Proper follow-up for Bug#12762885 - 61713: MYSQL WILL NOT BIND TO "LOCALHOST"
IF LOCALHOST IS BOTH IPV4/IPV6 ENABLED.

The original patch removed the default value of the bind-address option, so
the default value became NULL. By coincidence, NULL resolves to 0.0.0.0 and ::,
and since the server chooses the first IPv4 address, 0.0.0.0 is chosen. So
there was no change in behaviour.

This patch restores the default value of the bind-address option to "0.0.0.0".
2012-04-27 19:14:28 +04:00
8cfa6c3f33 MDEV-216 lp:976104 - Assertion `0' failed in my_message_sql on UPDATE IGNORE, or unknown error on release build
Don't send_error at the end of mysql_multi_update() if the select failed.
The error, if there was any, was already sent by mysql_select.
2012-04-26 19:21:37 +02:00
c04786d3e3 Fix bug lp:985667, MDEV-229
Analysis:

The reason for the wrong result is the interaction between constant
optimization (in this case 1-row table) and subquery optimization.

- First the outer query is optimized, and 'make_join_statistics' finds that
table t2 has one row, reads that row, and marks the whole table as constant.
This also means that all fields of t2 are constant.

- Next, we optimize the subquery at the end of the outer 'make_join_statistics'.
The field 'f2' is considered constant, with value '3'. The subquery predicate
is rewritten as the constant TRUE.

- The outer query execution detects early that the whole query result is empty
and calls 'return_zero_rows'. Since the query is with implicit grouping, we
have to produce one row with special values for the aggregates (depending on
each aggregate function), and NULL values for all non-aggregate fields.  This
function calls 'no_rows_in_result' to set each aggregate function to the
default value when it aggregates over an empty result, and then calls
'send_data', which in turn evaluates each Item in the SELECT list.

- When evaluation reaches the subquery predicate, it executes the subquery
with field 'f2' having a constant value '3', and the subquery produces the
incorrect result '7'.

Solution:

Implement Item::no_rows_in_result for all subquery predicates. In order to
make this work, it is also necessary to make all val_* methods of all subquery
predicates respect the Item_subselect::forced_const flag. Otherwise subqueries
are executed anyway, and they override the default value set by no_rows_in_result
with whatever result is produced by the subquery evaluation.
2012-04-27 12:59:17 +03:00
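A hedged sketch of the solution's shape (a hypothetical mini class, not the real Item_subselect; the chosen default value is illustrative): once no_rows_in_result() flips a forced-constant flag, the value methods return the stored constant instead of executing the subquery with stale "constant" outer values.

```cpp
// Sketch only: a subquery predicate whose value method honours a forced-const
// flag, mirroring the Item_subselect::forced_const idea described above.
#include <iostream>

struct Item_subselect_sketch {
  bool forced_const = false;   // analogue of Item_subselect::forced_const
  bool const_value = false;    // value reported while forced_const is set

  // Called when the outer query produces the "no rows" row for implicit
  // grouping: from now on the predicate must behave as a constant.
  void no_rows_in_result() {
    forced_const = true;
    // Hypothetical default; the real default depends on the predicate type.
    const_value = false;
  }

  bool val_bool() {
    if (forced_const)
      return const_value;        // do not run the subquery at all
    return execute_subquery();   // normal path
  }

 private:
  bool execute_subquery() {
    // Stand-in for real subquery execution; in the bug this ran with stale
    // "constant" outer field values and produced a wrong result.
    return true;
  }
};

int main() {
  Item_subselect_sketch predicate;
  predicate.no_rows_in_result();
  std::cout << std::boolalpha << predicate.val_bool() << "\n";   // prints false
}
```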
3f98e95354 BUG#13812374 - RPL.RPL_REPORT_PORT FAILS OCCASIONALLY ON PB2
Problem - The failure on PB2 is possibly due to the port number still being in
          use even after the server restarts, which is not reflected in the
          server restart.

Fix - The problem is fixed by starting the servers forcefully using the option
      file; the parameters for the server restart are also passed correctly.

mysql-test/suite/rpl/t/rpl_report_port-master.opt:
  Option file for the master.
2012-04-26 19:34:03 +05:30
678f221cb9 Allow Windows absolute paths in N:\ format for the --vardir option 2012-04-23 12:52:14 +02:00
f189ccf530 merge from 5.1 repo 2012-04-23 12:05:05 +03:00
bd0c38064f BUG#11754117
rpl_auto_increment_bug45679.test is refined because Bug#11749859 (39934) is not fixed in 5.1.
2012-04-23 11:51:19 +03:00
6d74c922af merge bug11754117-45670 fixes from 5.1: fixing result files. 2012-04-21 14:19:06 +03:00
14de6de946 merge bug11754117-45670 fixes from 5.1. 2012-04-21 13:24:39 +03:00
dcb5071b19 BUG#12427262 : 60961: SHOW TABLES VERY SLOW WHEN NOT IN SYSTEM DISK CACHE
Details:
 - The test case bug12427262.test was failing on Windows because
   '/' was not recognized there, and it was used in the
   LIKE clause of the query being run in this test case.

Fix:
 - Windows needs '\\\\' as the path separator in MySQL. I was
   not sure how to keep a single query with two different
   syntaxes based on platform, so the query is modified to make sure
   it runs correctly on both platforms.
2012-04-21 05:23:09 +05:30
d3968407e6 BUG#13979418: SHOW BINLOG EVENTS MAY CRASH THE SERVER
Merge from 5.1 into 5.5.

Conflicts:
 * sql/log.h
 * sql/sql_repl.cc
2012-04-20 23:35:53 +01:00