mirror of https://github.com/MariaDB/server.git synced 2025-12-09 08:01:34 +03:00
Commit Graph

28769 Commits

Author SHA1 Message Date
Sergei Golubchik
22c5ffde30 a simple pam user mapper module 2012-09-25 20:23:01 +02:00
Rohit Kalhans
5530c5e38d BUG#14548159: NUMEROUS CASES OF INCORRECT IDENTIFIER
QUOTING IN REPLICATION 

Problem: Misquoted or unquoted identifiers may lead to
incorrect statements being logged to the binary log.

Fix: we use specialized functions to append quoted identifiers in
the statements generated by the server.
2012-09-22 17:50:51 +05:30
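For illustration, a hedged sketch of the problem class (the table name below is made up, not from this commit): a server-generated statement that embeds an identifier without quoting breaks when the identifier is a reserved word or contains special characters.

  CREATE TABLE `order` (a INT);
  -- Unquoted, a generated statement such as
  --   DELETE FROM order WHERE a = 1
  -- is a syntax error on the slave; with proper quoting it is valid:
  DELETE FROM `order` WHERE a = 1;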
unknown
792efd59bc MDEV-521 fix.
After an item is pulled out during single-row subselect transformation, it should be fixed properly.
2012-09-20 12:48:59 +03:00
unknown
0bc89929ef - Merged the fix for bug lp:1009187, mdev-373.
- Performed some refactoring and simplification that was enabled and required by the merge.
2012-09-17 11:13:46 +03:00
unknown
b917fb63a6 Fix bug lp:1009187, mdev-373, mysql bug#58628
Analysis:
The queries in question use the [unique | index]_subquery execution methods.
These methods reuse the ref keys constructed by create_ref_for_key(), which
does not store in ref.key_copy[] the store_key elements that represent
constants. In particular, it does not store the store_key for NULL constants.

The execution of [unique | index]_subquery calls
subselect_uniquesubquery_engine::copy_ref_key, which, in addition to copying
the left IN argument into an index lookup key, is supposed to detect whether
the left IN argument contains NULLs. Since the store_key for the NULL
constant is not copied into the key array, the NULL is not detected, and
execution erroneously proceeds as if it should look for a complete match.

Solution:
The solution (unlike MySQL) is to reuse already computed information about
NULL presence. Item_in_optimizer::val_int already finds out if the left IN
operand contains NULLs. The fix propagates this to the execution methods
subselect_[unique | index]subquery_engine::exec so it knows if there were
NULL values independent of the presence of keys.

In addition, the patch simplifies copy_ref_key() and the logic that handles
the case of NULLs in the left IN operand.
2012-09-14 11:26:01 +03:00
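A minimal hypothetical query shape for the case described above (table, columns, and values are assumptions, not the original test case): the left IN argument contains a NULL constant, and with [unique | index]_subquery execution the predicate must evaluate to NULL rather than FALSE.

  CREATE TABLE t1 (a INT NOT NULL, b INT NOT NULL, UNIQUE KEY (a, b));
  INSERT INTO t1 VALUES (1, 1);
  SELECT (1, NULL) IN (SELECT a, b FROM t1);   -- expected NULL, not 0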
unknown
54bb28d4a1 MDEV-486 LP BUG#1010116 fix.
Link view/derived table fields to a real table so that we can check whether the table record has been turned into a null row.

The Item_direct_view_ref wrapper now checks whether the table has been turned into a null row.
2012-09-05 23:23:58 +03:00
Alexey Botchkov
589c62fefe Bug #1043845 st_distance() results are incorrect depending on variable order.
Autointersections of an object were treated as nodes, which led to wrong results.

per-file comments:
  mysql-test/r/gis.result
Bug #1043845 st_distance() results are incorrect depending on variable order.
        test result updated.
  mysql-test/t/gis.test
Bug #1043845 st_distance() results are incorrect depending on variable order.
        test case added.
  sql/item.cc
        small fix to make compilers happy.
  sql/item_geofunc.cc
Bug #1043845 st_distance() results are incorrect depending on variable order.
        Skip intersection points when calculating the distance.
2012-08-31 19:50:45 +05:00
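A hedged example of the affected function (the self-intersecting "bowtie" polygon below is illustrative only, not the geometry from the test case):

  SELECT ST_Distance(
      GeomFromText('POINT(5 5)'),
      GeomFromText('POLYGON((0 0, 2 2, 0 2, 2 0, 0 0))')) AS d;
  -- Before the fix, the polygon's autointersection point could be
  -- treated as a node and distort the computed distance.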
Annamalai Gurusami
f3a6816fe5 Bug #13453036 ERROR CODE 1118: ROW SIZE TOO LARGE - EVEN
THOUGH IT IS NOT.

The following error message is misleading because it claims 
that the BLOB space is not counted.  

"ERROR 1118 (42000): Row size too large. The maximum row size for 
the used table type, not counting BLOBs, is 8126. You have to 
change some columns to TEXT or BLOBs"

When ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT is used,
the BLOB prefix is stored inline along with the row. So
the above error message is changed as follows, depending on
the row format used:

For ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC, the error
message is as follows:

"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB may help. In current row format, 
BLOB prefix of 0 bytes is stored inline."

For ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT, the error
message is as follows:

"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or 
ROW_FORMAT=COMPRESSED may help. In current row
format, BLOB prefix of 768 bytes is stored inline."

rb://1252 approved by Marko Makela
2012-08-31 15:42:00 +05:30
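A hypothetical way to run into this error with InnoDB (column count is illustrative; depending on innodb_strict_mode the error may appear at CREATE time or only when a long row is inserted):

  -- With ROW_FORMAT=COMPACT each BLOB/TEXT column keeps a 768-byte
  -- prefix inline, so enough such columns exceed the ~8126-byte limit:
  CREATE TABLE big_row (
      c1 TEXT, c2 TEXT, c3 TEXT, c4 TEXT, c5 TEXT, c6 TEXT,
      c7 TEXT, c8 TEXT, c9 TEXT, c10 TEXT, c11 TEXT
  ) ENGINE=InnoDB ROW_FORMAT=COMPACT;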
Sergei Golubchik
0536c506ff MDEV-437 Microseconds: In time functions precision is calculated modulo 256
store the precision in uint, not uint8
2012-08-30 09:05:27 +02:00
Sergei Golubchik
3444e8e925 MDEV-454 Addition of a time interval reduces the resulting value
1. Field_newdate::get_date should refuse to return a date with zeros when
   TIME_NO_ZERO_IN_DATE is set, not when TIME_FUZZY_DATE is unset
2. Item_func_to_days and Item_date_add_interval can only work with valid dates,
   no zeros allowed.
2012-08-29 17:55:59 +02:00
Sergei Golubchik
a44331ab34 MDEV-456 An out-of-range datetime value (with a 5-digit year) can be created and cause troubles
fix Item_func_add_time::get_date() to generate valid dates.
Move the validity check inside get_date_from_daynr()
instead of relying on callers
(5 that had it, and 2 that did not, but should've)
2012-08-29 10:59:51 +02:00
unknown
95ee3fbf30 MDEV-492: fixed incorrect error check. 2012-08-29 11:35:42 +03:00
unknown
4d2b05b7d7 fix for MDEV-367
The problem was that the was_null and null_value variables were reset on each re-execution of the IN subquery, but the engine was rerun only for non-constant subqueries.

Fixed the constant check in Item_equal sorting.
Fixed constant reporting in Item_subselect.
2012-08-25 09:15:57 +03:00
unknown
4092d08bb8 Merge into latest 5.3 2012-08-24 14:26:23 +02:00
unknown
fc666a0df6 merge from 5.2 2012-08-24 14:02:32 +02:00
unknown
e44a800d91 Merge from 5.2 2012-08-24 13:51:16 +02:00
unknown
89e4d23f3b Merge into latest 5.2. 2012-08-24 12:57:19 +02:00
unknown
96703a63da Merge from 5.1. 2012-08-24 12:32:46 +02:00
unknown
4997ddfa9e Merge with latest 5.1. 2012-08-24 10:34:55 +02:00
unknown
cdeabcfd43 MDEV-382: Incorrect quoting
Various places in the server replication code was incorrectly quoting
strings, which could lead to incorrect SQL on the slave/mysqlbinlog.
2012-08-24 10:06:16 +02:00
Sergei Golubchik
f72a765997 5.2 merge.
two tests still fail:
  main.innodb_icp and main.range_vs_index_merge_innodb
  call records_in_range() with both range ends being open
  (which triggers an assert)
2012-08-22 16:45:25 +02:00
Sergei Golubchik
1fd8150a5b 5.1 merge
increase XtraDB version from 13.0 to 13.01
2012-08-22 16:13:54 +02:00
Sergei Golubchik
115a296756 merge with XtraDB as of Percona-Server-5.1.63-rel13.4 2012-08-22 16:10:31 +02:00
Sergei Golubchik
cefc30b166 merge with MySQL 5.1.65 2012-08-22 11:40:39 +02:00
Igor Babaev
d07b179fd2 Fixed bug mdev-449.
The bug could cause a crash when the server executed a query with
ORDER BY and sort_buffer_size was set to a small enough value.
It happened because the small sort buffer did not allow all merge
buffers to be allocated in it.
Made sure that the allocated sort buffer would be big enough
to contain all possible merge buffers.
2012-08-13 21:13:14 -07:00
Venkata Sidagam
18087b049e Bug #13115401: -SSL-KEY VALUE IS NOT VALIDATED AND IT ALLOWS INSECURE
CONNECTIONS IF SPE

Problem description: the --ssl-key value is not validated: any bogus text can
be assigned to --ssl-key, it is not verified that the file exists, and, more
importantly, the client is still allowed to connect to mysqld.

Fix: Added proper validations checks for --ssl-key.

Note:
1) Documentation changes are required for 5.1, 5.5, 5.6 and trunk in the
   sections listed below; the details are:

 http://dev.mysql.com/doc/refman/5.6/en/ssl-options.html#option_general_ssl
    and
 REQUIRE SSL section of
 http://dev.mysql.com/doc/refman/5.6/en/grant.html

2) A client using the '--ssl' option should be able to get an SSL connection.
This will be implemented as part of a separate fix in 5.6 and trunk.
2012-08-11 15:43:04 +05:30
Elena Stepanova
d1a90e852b MDEV-369 (Mismatches in MySQL engines test suite)
The following reasons caused mismatches:
  - different handling of invalid values;
  - different CAST results with fractional seconds;
  - microseconds support in MariaDB;
  - different algorithm of comparing temporal values;
  - differences in error and warning texts and codes;
  - different approach to truncating datetime values to time;
  - additional collations;
  - different record order for queries without ORDER BY;
  - MySQL bug#66034.
More details in MDEV-369 comments.
2012-07-30 04:16:49 +04:00
Venkata Sidagam
e130d9efbf Bug #12876932 - INCORRECT SELECT RESULT ON FEDERATED TABLE
Fixed the federated/include folder being missing when preparing the package
distribution; the issue happens only in 5.1.
2012-07-27 12:05:37 +05:30
Elena Stepanova
3ca3b44dbb Result files were wrong due to MySQL bug#66034 2012-07-26 23:31:08 +04:00
Venkata Sidagam
b6ecca263c Bug #12876932 - INCORRECT SELECT RESULT ON FEDERATED TABLE
Fix for pb2 test failure.
2012-07-26 23:23:04 +05:30
Venkata Sidagam
aef1982be0 Bug #12876932 - INCORRECT SELECT RESULT ON FEDERATED TABLE
Problem description:
Table 't' is created with two columns and a compound index on both columns,
using the InnoDB/MyISAM engine, on the remote machine. On the local machine
the same table is created using the FEDERATED engine.
A SELECT with a WHERE clause containing an 'AND' condition gives wrong
results on the local machine.

Analysis: 
The given query is wrongly transformed at the FEDERATED engine by the
ha_federated::create_where_from_key() function, and the transformed query is
sent to the remote machine. Hence the local machine shows wrong results.

Given query: "select c1 from t where c1 <= 2 and c2 = 1;"
The query transformed by the ha_federated::create_where_from_key() function is:
SELECT `c1`, `c2` FROM `t` WHERE  (`c1` IS NOT NULL ) AND
( (`c1` >= 2)  AND  (`c2` <= 1) ), and this is sent to real_query().
In the above, the '<=' and '=' conditions were transformed into '>=' and
'<=' respectively.

The ha_federated::create_where_from_key() function behaves as follows:
The key_range has both a start_key and an end_key. The start_key
is used to get the "(`c1` IS NOT NULL )" part of the WHERE clause; this
transformation is correct. The end_key is used to get "( (`c1` >= 2)
AND  (`c2` <= 1) )", which is wrong: here the given conditions ('<=' and '=')
are changed into the wrong conditions ('>=' and '<=').
The end_key is {key = 0x39fa6d0 "", length = 10, keypart_map = 3,
flag = HA_READ_AFTER_KEY}.

The store_length has the value '5'. Based on the store_length and length
values, the conditions are applied in the HA_READ_AFTER_KEY switch case.
The 'HA_READ_AFTER_KEY' switch case is applicable only to the last part of
the end_key; for the previous parts execution goes to the 'HA_READ_KEY_OR_NEXT'
case, where '>=' is added as a condition instead of '<='.

Fix:
Updated the 'if' condition in the 'HA_READ_AFTER_KEY' case so that it applies
to all parts of the end_key, i.e. 'i > 0' is used for the end_key; hence it was
added to the 'if' condition.


mysql-test/suite/federated/federated.test:
  modified the federated.inc file location
mysql-test/suite/federated/federated_archive.test:
  modified the federated.inc file location
mysql-test/suite/federated/federated_bug_13118.test:
  modified the federated.inc file location
mysql-test/suite/federated/federated_bug_25714.test:
  modified the federated.inc file location
mysql-test/suite/federated/federated_bug_35333.test:
  modified the federated.inc file location
mysql-test/suite/federated/federated_debug.test:
  modified the federated.inc file location
mysql-test/suite/federated/federated_innodb.test:
  modified the federated.inc file location
mysql-test/suite/federated/federated_server.test:
  modified the federated.inc file location
mysql-test/suite/federated/federated_transactions.test:
  modified the federated.inc file location
mysql-test/suite/federated/include/federated.inc:
  moved the file from federated suite to federated/include folder
mysql-test/suite/federated/include/federated_cleanup.inc:
  moved the file from federated suite to federated/include folder
mysql-test/suite/federated/include/have_federated_db.inc:
  moved the file from federated suite to federated/include folder
storage/federated/ha_federated.cc:
  updated the 'if condition' in ha_federated::create_where_from_key() 
  function.
2012-07-26 15:09:22 +05:30
Annamalai Gurusami
1383660024 Bug #13113026 INFORMATION_SCHEMA.INNODB_BUFFER_PAGE_LRU FROM 5.6 BACKPORT
Backporting the WL#5716, "Information schema table for InnoDB 
buffer pool information". Backporting revisions 2876.244.113, 
2876.244.102 from mysql-trunk.

rb://1175 approved by Jimmy Yang.
2012-07-25 13:51:39 +05:30
Chaithra Gopalareddy
ddcd6867e9 Bug#11762052: 54599: BUG IN QUERY PLANNER ON QUERIES WITH
"ORDER BY" AND "LIMIT BY" CLAUSE

PROBLEM:
When a 'limit' clause is specified in a query along with
GROUP BY and ORDER BY, the optimizer chooses the wrong index,
thereby examining more rows than required.
However, without the 'limit' clause, the optimizer chooses
the right index.

ANALYSIS:
For the query in question, the range optimizer chooses
the first index as there is a range present (on 'a'). The optimizer
then checks for an index which would give records in sorted
order for the 'group by' clause.

While checking, it chooses the second index (on 'c,b,a') based on
the specified 'limit' and the selectivity of
'quick_condition_rows' (the number of rows present in the range)
in the 'test_if_skip_sort_order' function.
But it fails to consider that an ORDER BY clause on a
different column will result in scanning the entire index, and
hence the estimated number of rows calculated above is
wrong (which results in choosing the second index).

FIX:
Do not enforce the 'limit' clause in the call to
'test_if_skip_sort_order' if we are creating a temporary
table. Creation of a temporary table indicates that there will be
more post-processing, and hence all the rows will be needed.

This fix is backported from 5.6. This problem is fixed in 5.6 as   
part of changes for work log #5558


mysql-test/r/subselect.result:
  Changes for Bug#11762052 result in the correct number of rows.
sql/sql_select.cc:
  Do not pass the actual 'limit' value if 'need_tmp' is true.
2012-07-18 14:36:08 +05:30
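A hedged sketch of the query shape from the analysis above (table, columns, indexes, and the LIMIT value are assumptions, not the original test case):

  CREATE TABLE t (a INT, b INT, c INT, KEY k_a (a), KEY k_cba (c, b, a));
  -- Range on 'a', GROUP BY on 'c', ORDER BY on a different expression,
  -- plus a small LIMIT; the LIMIT made test_if_skip_sort_order prefer
  -- the (c,b,a) index even though the ORDER BY forces a full scan of it.
  SELECT c, COUNT(*) FROM t
  WHERE a BETWEEN 1 AND 100
  GROUP BY c
  ORDER BY COUNT(*) DESC
  LIMIT 5;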
Rohit Kalhans
91c8e79fcd BUG#11762667:MYSQLBINLOG IGNORES ERRORS WHILE WRITING OUTPUT
This is a follow-up patch for the bug, enabling the test
i_binlog.binlog_mysqlbinlog_file_write.test.
The test was disabled in mysql trunk and mysql 5.5 because in the release
build mysqlbinlog was not debug compiled whereas mysqld was.
Since the have_debug.inc script only checks that mysqld is debug
compiled, the test was not being skipped on release builds.

We resolve this problem by creating a new include file,
mysqlbinlog_have_debug.inc, which checks exclusively that mysqlbinlog
is debug compiled; if not, it skips the test.
 

mysql-test/include/mysqlbinlog_have_debug.inc:
  new inc file to check if mysqlbinlog is debug compiled.
2012-07-03 18:00:21 +05:30
Gleb Shchepa
767501fb54 Backport of the deprecation warning from WL#6219: "Deprecate and remove YEAR(2) type"
Print the warning (note):

 YEAR(x) is deprecated and will be removed in a future release. Please use YEAR(4) instead

on "CREATE TABLE ... YEAR(x)" or "ALTER TABLE MODIFY ... YEAR(x)", where x != 4
2012-06-29 12:55:45 +04:00
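For example (table names are illustrative):

  CREATE TABLE t_y2 (y YEAR(2));   -- emits the deprecation note
  CREATE TABLE t_y4 (y YEAR(4));   -- no note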
unknown
1b84c0cfee Fix for LP bug#1007622
TABLE_LIST::check_single_table is made aware of the fact that a table attached to a merged view can now be an (unopened) temporary table
(in 5.2 it was always a leaf table, or none in the case of several tables).
2012-06-26 21:43:34 +03:00
Igor Babaev
20f3f4a273 Merge 5.2->5.3 2012-06-23 15:00:05 -07:00
unknown
60561ae613 Fix for LP bug#1001505 and LP bug#1001510
We set the correct cmp_context during preparation to avoid it being changed later by Item_field::equal_fields_propagator.
(see mysql bugs #57135 #57692 during merging)
2012-06-21 18:47:13 +03:00
Sergey Petrunya
eeea010fdc Update test results (checked) 2012-06-21 14:33:36 +04:00
Sergey Petrunya
20de3c8968 Update test results. 2012-06-20 22:30:24 +04:00
Sergey Petrunya
584d923c32 Post-merge fixes:
- put back the result encoding in func_in.result (messed up by kdiff3)
- update .result for other tests (checked)
2012-06-20 13:41:31 +04:00
Igor Babaev
0c69f22032 Fixed bug mdev-354.
Virtual columns of ENUM and SET data types were not supported properly
in the original patch that introduced virtual columns into MariaDB 5.2.
The problem was that for any  virtual column the patch used the 
interval_id field of the definition of the column in the frm file as
a reference to the virtual column expression.
The fix stores the optional interval_id of the virtual column in the
extended header of the virtual column expression.
2012-06-18 22:32:17 -07:00
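A minimal sketch of the affected feature using MariaDB virtual-column syntax (table and column names are assumptions):

  CREATE TABLE t_vcol (
      code INT,
      status ENUM('new','done') AS (IF(code > 0, 'done', 'new')) VIRTUAL
  );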
Sergey Petrunya
90fbd8b22b Merge 5.2->5.3 2012-06-18 22:38:11 +04:00
unknown
db6dbadb5a Fix bug lp:1008686
Analysis:
The fix for bug lp:985667 implements the method Item_subselect::no_rows_in_result()
for all main kinds of subqueries. The purpose of this method is to be called from
return_zero_rows() and set Items to some default value in the case when a query
returns no rows. Aggregates and subqueries require special treatment in this case.

Every implementation of Item_subselect::no_rows_in_result() called
Item_subselect::make_const() to set the subquery predicate to its default value
irrespective of where the predicate was located in the query. Once the predicate
was set to a constant it was never executed.

At the same time, the JOIN object of the fake select for UNIONs (the one used for
the final result of the UNION), was set after all subqueries in the union were
executed. Since we set the subquery as constant, it was never executed, and the
corresponding JOIN was never created.

In order to decide whether the result of NOT IN is NULL or FALSE, Item_in_optimizer
needs to check if the subquery result was empty or not. This is where we got the
crash, because subselect_union_engine::no_rows() checks for
unit->fake_select_lex->join->send_records, and the join object was NULL.

Solution:
If a subquery is in the HAVING clause it must be evaluated in order to know its
result, so that we can properly filter the result records. Once subqueries in the
HAVING clause are executed even in the case of no result rows, this specific
crash will be solved, because the UNION will be executed, and its JOIN will be
constructed. Therefore the fix for this crash is to narrow the fix for lp:985667,
and to apply Item_subselect::no_rows_in_result() only when the subquery predicate
is in the SELECT clause.
2012-06-15 11:33:24 +03:00
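A hedged reconstruction of the crashing shape (names are assumptions, not the original test case): a NOT IN predicate over a UNION subquery in the HAVING clause of a query with an empty, implicitly grouped result.

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);
  -- The WHERE makes the outer result empty; the UNION subquery must still
  -- be executed so that its fake_select_lex JOIN exists when
  -- Item_in_optimizer asks whether the subquery result was empty.
  SELECT MAX(a) FROM t1
  WHERE 1 = 0
  HAVING 1 NOT IN (SELECT b FROM t2 UNION SELECT b FROM t2);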
unknown
88d3d853f4 Fix bug lp:1008773
Analysis:
Queries with implicit grouping (there is aggregate, but no group by)
follow some non-obvious semantics in the case of empty result set.
Aggregate functions produce some special "natural" value depending on
the function. For instance MIN/MAX return NULL, COUNT returns 0.

The complexity comes from non-aggregate expressions in the select list.
If the non-aggregate expression is a constant, it can be computed, so
we should return its value, however if the expression is non-constant,
and depends on columns from the empty result set, then the only meaningful
value is NULL.

The cause of the wrong result was that for subqueries the optimizer didn't
make a difference between constant and non-constant ones in the case of
empty result for implicit grouping.

Solution:
In all implementations of Item_subselect::no_rows_in_result() check if the
subquery predicate is constant. If it is constant, do not set it to the
default value for implicit grouping, instead let it be evaluated.
2012-06-14 17:03:09 +03:00
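Two illustrative queries for the general semantics described above (the actual bug involved subquery predicates; names are made up, and ONLY_FULL_GROUP_BY is assumed to be disabled):

  CREATE TABLE t1 (a INT);
  SELECT MAX(a), 1 + 1 FROM t1 WHERE 1 = 0;   -- NULL, 2: constant expression is evaluated
  SELECT MAX(a), a + 1 FROM t1 WHERE 1 = 0;   -- NULL, NULL: non-constant expression has no meaningful value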
unknown
775293b898 BUG #13946716: FEDERATED_PLUGIN TEST CASE FAIL ON 64BIT ARCHITECTURES 2012-06-14 17:07:49 +05:30
Igor Babaev
64aa1fcb13 Adjusted results in pbxt.negation_elimination after the fix for lp bug 992380. 2012-06-12 10:06:26 -07:00
Manish Kumar
4a2d65cc31 BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
Replication breaks if the event length exceeds
the size of the master Dump thread's max_allowed_packet.

The failure occurs because the event length, after the addition of the
max_event_header length, exceeds the max_allowed_packet of the Dump thread.
This causes the Dump thread to break replication and throw an error.

That can happen e.g. with row-based replication in an Update_rows event.
            
Fix
====
          
The problem is fixed in 2 steps:

1.) The Dump thread limit to read event is increased to the upper limit
    i.e. Dump thread reads whatever gets logged in the binary log.

2.) On the slave side we increase the max_allowed_packet for the
    slave's threads (IO/SQL) by raising it to 1GB.

    This is done using the new server option (slave_max_allowed_packet),
    which lets the DBA regulate the max_allowed_packet of the
    slave threads (IO/SQL) and facilitates the sending of
    large packets from the master to the slave.

    This allows large packets to be received by the slave and
    applied successfully.

sql/log_event.cc:
  The max_allowed_packet is not evaluated to the new option 
  slave_max_allowed_packet after the fix.
sql/log_event.h:
  Added the new option in the log_event.h file.
sql/mysqld.cc:
  Added a new option to the server.
sql/slave.cc:
  Increase the session max_allowed_packet to a large value for the slave's
  threads, i.e. without taking the global value into consideration.
sql/sql_repl.cc:
  The dump thread's max_allowed_packet is set to the upper limit
  which makes it independent and it now reads whatever gets 
  logged in the binary log.
2012-06-12 12:59:13 +05:30
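A hedged usage sketch of the new option on the slave side (the value shown is the 1GB upper bound mentioned above):

  -- In the slave's my.cnf:
  --   [mysqld]
  --   slave_max_allowed_packet = 1073741824
  -- or dynamically, taking effect for subsequently started slave threads:
  SET GLOBAL slave_max_allowed_packet = 1073741824;
  SHOW VARIABLES LIKE 'slave_max_allowed_packet';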
Sergey Petrunya
af1909fc0e Merge 2012-06-10 14:06:11 +04:00
Sergey Petrunya
ae51b5b698 Merge 2012-06-10 14:04:21 +04:00