mirror of https://github.com/MariaDB/server.git synced 2025-11-02 02:53:04 +03:00
Commit Graph

1818 Commits

Author SHA1 Message Date
Aditya A
cebbe9a807 Bug#11751825 - OPTIMIZE PARTITION RECREATES FULL TABLE INSTEAD JUST PARTITION
Follow up patch to address the pb2 failures.
2012-11-08 14:19:27 +05:30
Shivji Kumar Jha
068478fbbf BUG#14659685 - main.mysqlbinlog_row_myisam and
main.mysqlbinlog_row_innodb are skipped by mtr

=== Problem ===

The following tests are wrongly placed in the main suite and as a
result they are not run with the proper binlog format combinations.
Some are always skipped by mtr.
1) mysqlbinlog_row_myisam
2) mysqlbinlog_row_innodb
3) mysqlbinlog_row.test
4) mysqlbinlog_row_trans.test
5) mysqlbinlog-cp932
6) mysqlbinlog2
7) mysqlbinlog_base64

=== Background ===

mtr runs the tests placed in the main suite with binlog format=stmt.
Those that need to be tested against binlog format=row or mixed,
or against more than one binlog format, and require only one mysql server
are placed in the binlog suite. mtr runs tests in the binlog suite with
all three binlog formats (stmt, row and mixed).

=== Fix ===


1) Moved the tests listed in the problem section above to the binlog suite.
2) Added the prefix "binlog_" to the name of each test case moved.
   Renamed the corresponding result files and option files accordingly.
2012-10-30 10:40:07 +05:30
Krunal Bauskar krunal.bauskar@oracle.com
c2ab844123 bug#14765606: ensure select is active before killing it else kill signal is ignored 2012-10-17 14:30:32 +05:30
Krunal Bauskar krunal.bauskar@oracle.com
c8cebffdbd bug#14704286
SECONDARY INDEX UPDATES MAKE CONSISTENT READS DO O(N^2) UNDO PAGE
LOOKUPS (honoring kill query while accessing sec_index)

If a secondary index is being used for select query evaluation and the
query is operating with a consistent read snapshot, it might take a long time
for the secondary index to return control to mysql, as MVCC kicks in.

If the user issues "kill query <id>" while the query is actively accessing the
secondary index, the kill is not honored, as there is no hook to check
for this condition. Added a hook for this check.
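
For illustration, a hedged sketch of the kind of session the hook is meant
to interrupt (table, index and query id below are invented, not from the patch):

  -- session 1: long consistent-read scan driven by a secondary index
  CREATE TABLE t (id INT PRIMARY KEY, k INT, v VARCHAR(100), KEY k_idx (k)) ENGINE=InnoDB;
  START TRANSACTION WITH CONSISTENT SNAPSHOT;
  SELECT COUNT(*) FROM t FORCE INDEX (k_idx) WHERE k < 1000000;

  -- session 2: find the query's id and ask the server to stop it
  SHOW PROCESSLIST;
  KILL QUERY 42;   -- 42 stands for the id reported by SHOW PROCESSLIST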

-----
In parallel, the case of a secondary index taking too long to evaluate under a
consistent read snapshot is being examined for performance improvement. WL#6540.
2012-10-15 09:24:33 +05:30
Rohit Kalhans
5f003eca00 BUG#14548159: NUMEROUS CASES OF INCORRECT IDENTIFIER
QUOTING IN REPLICATION 

Problem: Misquoted or unquoted identifiers may lead to
incorrect statements being logged to the binary log.

Fix: we use specialized functions to append quoted identifiers to
the statements generated by the server.
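
As a hedged illustration (identifiers invented here), an identifier that is a
reserved word or contains special characters must be quoted for the logged
statement to even parse:

  CREATE TABLE `order` (`from` INT, `key` VARCHAR(10));
  -- an unquoted form such as  DELETE FROM order WHERE from = 1  is invalid SQL;
  -- the server must log the quoted form:
  DELETE FROM `order` WHERE `from` = 1;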
2012-09-22 17:50:51 +05:30
Annamalai Gurusami
4a3d325dc6 Bug #13453036 ERROR CODE 1118: ROW SIZE TOO LARGE - EVEN
THOUGH IT IS NOT.

The following error message is misleading because it claims 
that the BLOB space is not counted.  

"ERROR 1118 (42000): Row size too large. The maximum row size for 
the used table type, not counting BLOBs, is 8126. You have to 
change some columns to TEXT or BLOBs"

When ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT is used,
the BLOB prefix is stored inline along with the row.  So
the above error message is changed as follows, depending on
the row format used:

For ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC, the error
message is as follows:

"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB may help. In current row format, 
BLOB prefix of 0 bytes is stored inline."

For ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT, the error
message is as follows:

"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or 
ROW_FORMAT=COMPRESSED may help. In current row
format, BLOB prefix of 768 bytes is stored inline."
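
As a hedged sketch (table and column names invented, assuming the default 16K
page size and a latin1 character set), a wide fixed-length row can trigger the
reworded error:

  CREATE TABLE big_row (
    c01 CHAR(255), c02 CHAR(255), c03 CHAR(255), c04 CHAR(255),
    c05 CHAR(255), c06 CHAR(255), c07 CHAR(255), c08 CHAR(255),
    c09 CHAR(255), c10 CHAR(255), c11 CHAR(255), c12 CHAR(255),
    c13 CHAR(255), c14 CHAR(255), c15 CHAR(255), c16 CHAR(255),
    c17 CHAR(255), c18 CHAR(255), c19 CHAR(255), c20 CHAR(255),
    c21 CHAR(255), c22 CHAR(255), c23 CHAR(255), c24 CHAR(255),
    c25 CHAR(255), c26 CHAR(255), c27 CHAR(255), c28 CHAR(255),
    c29 CHAR(255), c30 CHAR(255), c31 CHAR(255), c32 CHAR(255),
    c33 CHAR(255)
  ) ENGINE=InnoDB ROW_FORMAT=COMPACT CHARACTER SET latin1;
  -- 33 * 255 bytes > 8126, so this is expected to fail with the
  -- ROW_FORMAT=COMPACT variant of the message above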

rb://1252 approved by Marko Makela
2012-08-31 15:42:00 +05:30
Venkata Sidagam
c6c8645ae9 Bug #12876932 - INCORRECT SELECT RESULT ON FEDERATED TABLE
Fix for pb2 test failure.
2012-07-26 23:23:04 +05:30
Venkata Sidagam
3b954d1ddd Bug #12876932 - INCORRECT SELECT RESULT ON FEDERATED TABLE
Problem description:
Table 't' is created with two columns and a compound index on both
columns, using the innodb/myisam engine on the remote machine. On the local
machine the same table is created under the federated engine.
A select whose where clause combines conditions with 'AND' gives wrong
results on the local machine.
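
A hedged reconstruction of that setup (host, credentials and index name are
placeholders, not taken from the bug report):

  -- on the remote server
  CREATE TABLE t (c1 INT, c2 INT, KEY idx_c1_c2 (c1, c2)) ENGINE=InnoDB;
  INSERT INTO t VALUES (1, 1), (2, 1), (3, 1);

  -- on the local server
  CREATE TABLE t (c1 INT, c2 INT, KEY idx_c1_c2 (c1, c2)) ENGINE=FEDERATED
    CONNECTION='mysql://user:pass@remote-host:3306/test/t';

  -- gives wrong results on the local machine before the fix
  SELECT c1 FROM t WHERE c1 <= 2 AND c2 = 1;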

Analysis:
The given query is wrongly transformed at the federated engine by the
ha_federated::create_where_from_key() function, and the transformed query is
sent to the remote machine. Hence the local machine shows wrong results.

Given query "select c1 from t where c1 <= 2 and c2 = 1;"
The query transformed by ha_federated::create_where_from_key() is:
SELECT `c1`, `c2` FROM `t` WHERE  (`c1` IS NOT NULL ) AND
( (`c1` >= 2)  AND  (`c2` <= 1) )
and this is what was sent to real_query().
In the above, the '<=' and '=' conditions were transformed into '>=' and
'<=' respectively.

ha_federated::create_where_from_key() behaves as follows:
The key_range has both a start_key and an end_key. The start_key
is used to get the "(`c1` IS NOT NULL )" part of the where clause; this
transformation is correct. The end_key is used to get "( (`c1` >= 2)
AND  (`c2` <= 1) )", which is wrong: here the given conditions ('<=' and '=')
are changed into the wrong conditions ('>=' and '<=').
The end_key contains {key = 0x39fa6d0 "", length = 10, keypart_map = 3,
flag = HA_READ_AFTER_KEY}

The store_length has the value '5'. Based on the store_length and length
values, the condition values are applied in the HA_READ_AFTER_KEY switch case.
The 'HA_READ_AFTER_KEY' switch case is applicable only to the last part of
the end_key; for the previous parts execution goes to the 'HA_READ_KEY_OR_NEXT' case,
where '>=' is added as a condition instead of '<='.

Fix:
Updated the 'if' condition in the 'HA_READ_AFTER_KEY' case so that it applies
to all parts of the end_key, i.e. 'i > 0' is used for the end_key; hence it was
added to the 'if' condition.
2012-07-26 15:09:22 +05:30
Annamalai Gurusami
c65932be49 Bug #13113026 INFORMATION_SCHEMA.INNODB_BUFFER_PAGE_LRU FROM 5.6 BACKPORT
Backporting the WL#5716, "Information schema table for InnoDB 
buffer pool information". Backporting revisions 2876.244.113, 
2876.244.102 from mysql-trunk.

rb://1175 approved by Jimmy Yang.
2012-07-25 13:51:39 +05:30
Gleb Shchepa
ba966cff98 Backport of the deprecation warning from WL#6219: "Deprecate and remove YEAR(2) type"
Print the warning (note):

 YEAR(x) is deprecated and will be removed in a future release. Please use YEAR(4) instead

on "CREATE TABLE ... YEAR(x)" or "ALTER TABLE MODIFY ... YEAR(x)", where x != 4
2012-06-29 12:55:45 +04:00
Manish Kumar
1211b5d50b BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
Replication breaks if the event length exceeds
the size of the master Dump thread's max_allowed_packet.

The failure occurs because the event length, once the
max_event_header length is added, exceeds the max_allowed_packet of the Dump
thread. This causes the Dump thread to break replication and throw an error.
                      
That can happen e.g with row-based replication in Update_rows event.
            
Fix
====
          
The problem is fixed in 2 steps:

1.) The Dump thread's limit for reading an event is raised to the upper limit,
    i.e. the Dump thread reads whatever gets logged in the binary log.

2.) On the slave side we increase the max_allowed_packet for the
    slave's threads (IO/SQL) to 1GB.

    This is done using the new server option (slave_max_allowed_packet),
    which lets the DBA regulate the max_allowed_packet of the
    slave threads (IO/SQL) and facilitates the sending of
    large packets from the master to the slave.

    This allows the large packets to be received by the slave and applied
    successfully.
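
A hedged sketch of how the option can be inspected and adjusted on the slave
(the value is just an example; if the variable is not dynamic in a given
version, set it in my.cnf instead):

  SHOW GLOBAL VARIABLES LIKE 'slave_max_allowed_packet';
  SET GLOBAL slave_max_allowed_packet = 1073741824;  -- 1GB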
2012-06-12 12:59:13 +05:30
Manish Kumar
9aa79dc596 BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
Problem
========
            
SQL statements close to the size of max_allowed_packet produce binary
log events larger than max_allowed_packet.
              
The failure occurs because the event length is
more than the total of max_allowed_packet + the max_event_header
length. Since the event length exceeds this size, the master Dump
thread is unable to send the packet on to the slave.
                      
That can happen e.g with row-based replication in Update_rows event.
            
Fix
====
          
The problem was fixed by increasing the max_allowed_packet for the
slave's threads (IO/SQL) to 1GB.
This is done using a new server option which is used to
regulate the max_allowed_packet of the slave threads (IO/SQL).
This allows the large packets to be received by the slave and applied
successfully.
2012-05-21 12:57:39 +05:30
Nuno Carvalho
8bb98d7535 BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
Improved random number filtering verification on
rpl_filter_tables_not_exist test.
2012-05-15 22:06:48 +01:00
Annamalai Gurusami
b76a59f5a6 Bug #14007649 65111: INNODB SOMETIMES FAILS TO UPDATE ROWS INSERTED
BY A CONCURRENT TRANSACTION

The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
a table handler clone. InnoDB does not provide a clone operation:
ha_innobase::clone() does not exist, and handler::clone() does not
take care of ha_innobase->prebuilt->select_lock_type.  Because of
this, for one index we do a locking read, while
for the other index we were doing a non-locking (consistent) read.
The patch introduces the ha_innobase::clone() member function.
It is implemented similarly to ha_myisam::clone(): it calls the
base class handler::clone() and then does any additional operations
required, setting ha_innobase->prebuilt->select_lock_type
correctly.

rb://1060 approved by Marko
2012-05-10 10:18:31 +05:30
Sunanda Menon
d37a28c9b0 Merge from mysql-5.1.63-release 2012-05-08 07:19:14 +02:00
Andrei Elkin
a891de4221 merge from 5.1 repo 2012-04-23 12:05:05 +03:00
Andrei Elkin
d5925c2044 BUG#11754117
rpl_auto_increment_bug45679.test is refined because Bug#11749859 (39934) is not fixed in 5.1.
2012-04-23 11:51:19 +03:00
Nuno Carvalho
ca33df2094 BUG#13979418: SHOW BINLOG EVENTS MAY CRASH THE SERVER
The function mysql_show_binlog_events has a local stack variable
'LOG_INFO linfo;', which is assigned to thd->current_linfo; however,
this variable goes out of scope and is destroyed before
thd->current_linfo is cleaned up.

The problem is solved by moving 'LOG_INFO linfo;' to function scope.
2012-04-20 22:25:59 +01:00
Andrei Elkin
f3509d1d67 BUG#11754117 incorrect logging of INSERT into auto-increment
BUG#11761686 insert_id event is not filtered.
  
Two issues are covered.
  
INSERT into an auto-increment field which is not the first part of a composite primary key
is unsafe by auto-increment logging design. The case is specific to the MyISAM engine
because InnoDB does not allow such a table definition.
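
A hedged example of such a definition (table names invented), which MyISAM
accepts but InnoDB rejects:

  CREATE TABLE tm (
    grp INT NOT NULL,
    id  INT NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (grp, id)
  ) ENGINE=MyISAM;   -- accepted: id is numbered separately per grp value

  CREATE TABLE ti (
    grp INT NOT NULL,
    id  INT NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (grp, id)
  ) ENGINE=InnoDB;   -- rejected: the auto-increment column must lead a key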
  
However, no warning was issued and no row-format logging was done in MIXED
mode; that is fixed.
  
Int-, Rand- and User-var log-events were not filtered out along with their parent
query, which made it possible for them to corrupt the execution context of the
following query.

Fixed by deferring their execution until the parent query is executed.

******
Bug#11754117 

Post review fixes.
2012-04-20 19:41:20 +03:00
Nuno Carvalho
a9a7e6ea24 WL#6236: Allow SHOW MASTER LOGS and SHOW BINARY LOGS with REPLICATION CLIENT
Currently SHOW MASTER LOGS and SHOW BINARY LOGS require the SUPER
privilege. Monitoring tools (such as MEM) often want to check this 
output - for instance MEM generates the SUM of the sizes of the logs 
reported here, and puts that in the Replication overview within the MEM
Dashboard.
However, because of the SUPER requirement, these tools often have an 
account that holds open the connection whilst monitoring, and can lock
out administrators when the server gets overloaded and reaches
max_connections - there is already another SUPER privileged account
connected, the "monitor". 

As SHOW MASTER STATUS, and all other replication related statements,
return with either REPLICATION CLIENT or SUPER privileges, this worklog 
is to make SHOW MASTER LOGS and SHOW BINARY LOGS be consistent with this
as well, and allow both of these commands with either SUPER or 
REPLICATION CLIENT. 
This allows monitoring tools to not require a SUPER privilege any more,
so is safer in overloaded situations, as well as being more secure, as 
lighter privileges can be given to users of such tools or scripts.
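
A hedged sketch of the monitoring account this enables (the 'monitor' user is
invented for illustration):

  CREATE USER 'monitor'@'%' IDENTIFIED BY 'secret';
  GRANT REPLICATION CLIENT ON *.* TO 'monitor'@'%';

  -- as 'monitor', without SUPER:
  SHOW BINARY LOGS;
  SHOW MASTER STATUS;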
2012-04-18 10:08:01 +01:00
Georgi Kodinov
262c156849 merge mysql-5.1->mysql-5.1-security 2012-03-21 14:53:09 +02:00
karen.langford@oracle.com
3adb401c8a Merge from mysql-5.1.62-release 2012-03-20 17:35:41 +01:00
Annamalai Gurusami
d4ed7cf411 Bug #11766634 59783: INNODB DATA GROWS UNEXPECTEDLY WHEN INSERTING, TRUNCATING, INSERTING THE
The test case must insert all the records using a single transaction. Otherwise the test 
case takes more than 15 minutes and will time out in pb2 and mtr.
2012-03-16 12:06:29 +05:30
Luis Soares
975e67083d BUG#12400313
Adding missing sync_slave_with_master to the test case.
2012-03-12 23:23:40 +00:00
Luis Soares
deb49a2683 Automerge with latest mysql-5.1. 2012-03-12 23:16:44 +00:00
Luis Soares
ab03c5bace BUG#12400313
Hardening the test case:
  - including a diff_tables at the end.
  - increasing the tolerance on the relay limit size.
2012-03-12 23:15:01 +00:00
Luis Soares
c41a6fec10 BUG#12400313
Automerge with mysql-5.1.
2012-03-12 21:58:00 +00:00
Luis Soares
5360c4e5bc BUG#12400313 RELAY_LOG_SPACE_LIMIT IS NOT WORKING IN MANY CASES
BUG#64503: mysql frequently ignores --relay-log-space-limit

When the SQL thread goes to sleep, waiting for more events, it sets
the flag ignore_log_space_limit to true. This gives the IO thread a
chance to queue some more events and ultimately the SQL thread will be
able to purge the log once it is rotated. By then the SQL thread
resets the ignore_log_space_limit to false. However, between the time
the SQL thread has set the ignore flag and the time it resets it, the
IO thread will be queuing events in the relay log, possibly going way
over the limit.

This patch makes the IO and SQL threads synchronize when they reach
the space limit and only ask for one event at a time. Thus the SQL
thread sets the ignore_log_space_limit flag and the IO thread resets it to
false every time it processes one more event. In addition, every time
the SQL thread processes the next event, and the limit has been
reached, it checks if the IO thread should rotate. If it should, it
instructs the IO thread to rotate, giving the SQL thread a chance to
purge the logs (freeing space). Finally, this patch removes the
resetting of the ignore_log_space_limit flag from purge_first_log,
because this is now reset by the IO thread every time it processes the
next event when the limit has been reached.

If the SQL thread is in a transaction, it cannot purge, so there is no
point in asking the IO thread to rotate. The only thing it can do is
to ask for more events until the transaction is over (then it can ask
the IO to rotate and purge the log right away). Otherwise, there would
be a deadlock (SQL would not be able to purge and IO thread would not
be able to queue events so that the SQL would finish the transaction).
2012-03-12 12:28:27 +00:00
Annamalai Gurusami
da4418977d Bug #11766634 59783: InnoDB data grows unexpectedly when inserting,
truncating, inserting the same set of rows. When a table is 
re-created with the same set of rows, the data file size must
not grow.  

rb:968
Approved by Marko.
2012-03-09 11:07:16 +05:30
Georgi Kodinov
8232d9a6ee merge mysql-5.1->mysql-5.1-security 2012-03-08 17:16:53 +02:00
Annamalai Gurusami
98642459db The InnoDB plugin module cannot use the DEBUG_SYNC_C facility on Windows.
Taking care of it.
2012-03-01 15:44:23 +05:30
Annamalai Gurusami
27ecea534c Bug#13635833: MULTIPLE CRASHES IN FOREIGN KEY CODE WITH CONCURRENT DDL/DML
There are two threads.  In one thread, a DML operation involving a
cascaded update is going on.  In another thread, an ALTER TABLE ...
ADD FOREIGN KEY constraint is being executed.  Under these
circumstances, it is possible for the DML thread to access a
dict_foreign_t object that has been freed by the DDL thread.
The debug sync test case provides the sequence of operations.
Without the fix, the test case will crash the server (because of a
newly added assert).  With the fix, the ALTER TABLE statement will return
an error message.
      
Backporting the fix from MySQL 5.5 to 5.1

rb:961
rb:947
2012-03-01 11:05:51 +05:30
Praveenkumar Hulakund
9af695fb45 Bug#12601974 - STORED PROCEDURE SQL_MODE=NO_BACKSLASH_ESCAPES IGNORED AND BREAKS REPLICATION
Analysis:
========================
sql_mode "NO_BACKSLASH_ESCAPES": When user want to use backslash as character input,
instead of escape character in a string literal then sql_mode can be set to 
"NO_BACKSLASH_ESCAPES". With this mode enabled, backslash becomes an ordinary 
character like any other. 

The SQL_MODE setting applies to the current client session. When creating a stored
procedure, MySQL stores the current sql_mode and always executes the stored
procedure with the sql_mode stored with the procedure, regardless of the server SQL
mode in effect when the routine is invoked.

In the reported scenario, the routine is created with
sql_mode=NO_BACKSLASH_ESCAPES, and the routine is executed while the invoker's sql_mode
is "" (NOT SET), by executing the statement "call testp('Axel\'s')".
Since the invoker's sql_mode is "" (NOT SET), the '\' in 'Axel\'s' (the argument to the routine)
is treated as an escape character, and the values of column "a" (of table "t1") are
updated to "Axel's". The binary log generated for the above update operation is as below:

  set sql_mode=XXXXXX (for no_backslash_escapes)
  update test.t1 set a= NAME_CONST('var',_latin1'Axel\'s' COLLATE 'latin1_swedish_ci');

While logging stored procedure statements, the local variables (params) used in
statements are replaced with the NAME_CONST(var_name, var_value) (Internal function) 
(http://dev.mysql.com/doc/refman/5.6/en/miscellaneous-functions.html#function_name-const)

On the slave, these logs are applied. NAME_CONST is parsed to get the variable and its
value. Since the stored procedure was created with sql_mode="NO_BACKSLASH_ESCAPES", that sql_mode
is also logged, so that on the slave this sql_mode is set before executing the statements
of the routine.  So on the slave, sql_mode is set to "NO_BACKSLASH_ESCAPES", and then while
parsing the NAME_CONST of the string variable, '\' is considered a NON ESCAPE character,
and parsing reports an error for "'" (as we have only one "'" and no backslash).

On the slave, the parsing itself behaves correctly for sql_mode "NO_BACKSLASH_ESCAPES".
The above error is reported because, while writing the binlog, the "'" (of Axel's) was escaped with
a "\" character. In fact, all special characters (n, r, ', ", \, 0...) are escaped
while writing NAME_CONST for a string variable (param, local variable) to the binlog,
irrespective of the "NO_BACKSLASH_ESCAPES" sql_mode. So, basically, the problem is
that logging a string parameter does not take the sql_mode value into account.

Fix:
========================
So when sql_mode is set to "NO_BACKSLASH_ESCAPES", escaping characters such as
(n, r, ', ", \, 0...) should be avoided. To do so, added a check so that such
characters are not escaped while writing NAME_CONST for string variables to the bin
log.
Instead, when sql_mode is set to NO_BACKSLASH_ESCAPES, the quote character "'" is
represented as ''.
See http://dev.mysql.com/doc/refman/5.6/en/string-literals.html ("There are several
ways to include quote characters within a string").
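
A hedged sketch of the scenario described above (procedure and table names are
taken from the analysis; the column definition is invented):

  CREATE TABLE t1 (a VARCHAR(30));

  SET sql_mode = 'NO_BACKSLASH_ESCAPES';
  CREATE PROCEDURE testp(IN var VARCHAR(30))
    UPDATE t1 SET a = var;

  SET sql_mode = '';               -- invoker session mode
  CALL testp('Axel\'s');           -- '\' escapes the quote here, value is "Axel's"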
2012-02-29 12:23:15 +05:30
Vasil Dimov
a66f29c30c Fix Bug#13639142 64128: INNODB ERROR IN SERVER LOG OF INNODB_BUG34300
Prevent innodb_bug34300 from failing if InnoDB prints:

  120221 11:05:03  InnoDB: ERROR: the age of the last checkpoint is 9439048,
  InnoDB: which exceeds the log group capacity 9433498.

by default the log capacity is 2 log files, 5 MB each.
2012-02-21 17:57:07 +02:00
Georgi Kodinov
d2549def1c merge mysql-5.1->mysql-5.1-security 2012-02-18 10:58:19 +02:00
Georgi Kodinov
637c2d9e4e merge mysql-5.1->mysql-5.1-security 2012-02-17 11:52:41 +02:00
Marko Mäkelä
ae309bd336 Bug#13721257 RACE CONDITION IN UPDATES OR INSERTS OF WIDE RECORDS
This bug was originally filed and fixed as Bug#12612184. The original
fix was buggy, and it was patched by Bug#12704861. Also that patch was
buggy (potentially breaking crash recovery), and both fixes were
reverted.

This fix was not ported to the built-in InnoDB of MySQL 5.1, because
the function signatures of many core functions are different from
InnoDB Plugin and later versions. The block allocation routines and
their callers would have to changed so that they handle block
descriptors instead of page frames.

When a record is updated so that its size grows, non-updated columns
can be selected for external (off-page) storage. The bug is that the
initially inserted updated record contains an all-zero BLOB pointer to
the field that was not updated. Only after the BLOB pages have been
allocated and written can the valid pointer be written to the record.

Between the release of the page latch in mtr_commit(mtr) after
btr_cur_pessimistic_update() and the re-latching of the page in
btr_pcur_restore_position(), other threads can see the invalid BLOB
pointer consisting of 20 zero bytes. Moreover, if the system crashes
at this point, the situation could persist after crash recovery, and
the contents of the non-updated column would be permanently lost.

The problem is amplified by ROW_FORMAT=DYNAMIC and
ROW_FORMAT=COMPRESSED, which were introduced with
innodb_file_format=barracuda in the InnoDB Plugin, but the bug exists
in all InnoDB versions.
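
A hedged sketch of a workload that exercises this path (schema and sizes are
illustrative only; ROW_FORMAT=DYNAMIC additionally assumes
innodb_file_format=barracuda and innodb_file_per_table in these versions):

  CREATE TABLE t (
    id INT PRIMARY KEY,
    a  VARCHAR(8000),
    b  BLOB
  ) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;

  INSERT INTO t VALUES (1, 'small', REPEAT('b', 4000));

  -- growing column a can push the non-updated column b to off-page storage;
  -- before the fix, concurrent readers could briefly see a zeroed BLOB pointer for b
  UPDATE t SET a = REPEAT('a', 8000) WHERE id = 1;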

The fix is as follows. After a pessimistic B-tree operation that needs
to write out off-page columns, allocate the pages for these columns in
the mini-transaction that performed the B-tree operation (btr_mtr),
but write the pages in a separate mini-transaction (blob_mtr). Do
mtr_commit(blob_mtr) before mtr_commit(btr_mtr). A quirk: Do not reuse
pages that were previously freed in btr_mtr. Only write the off-page
columns to 'fresh' pages.

In this way, crash recovery will see redo log entries for blob_mtr
before any redo log entry for btr_mtr. It will apply the BLOB page
writes to pages that were marked free at that point. If crash recovery
fails to see all of the btr_mtr redo log, there will be some
unreachable BLOB data in free pages, but the B-tree will be in a
consistent state.

btr_page_alloc_low(): Renamed from btr_page_alloc(). Add the parameter
init_mtr. Return an allocated block, or NULL. If init_mtr!=mtr but
the page was already X-latched in mtr, do not initialize the page.

btr_page_alloc(): Wrapper for btr_page_alloc_for_ibuf() and
btr_page_alloc_low().

btr_page_free(): Add a debug assertion that the page was a B-tree page.

btr_lift_page_up(): Return the father block.

btr_compress(), btr_cur_compress_if_useful(): Add the parameter ibool
adjust, for adjusting the cursor position.

btr_cur_pessimistic_update(): Preserve the cursor position when
big_rec will be written and the new flag BTR_KEEP_POS_FLAG is defined.
Remove a duplicate rec_get_offsets() call. Keep the X-latch on
index->lock when big_rec is needed.

btr_store_big_rec_extern_fields(): Replace update_inplace with
an operation code, and local_mtr with btr_mtr. When not doing a
fresh insert and btr_mtr has freed pages, put aside any pages that
were previously X-latched in btr_mtr, and free the pages after
writing out all data. The data must be written to 'fresh' pages,
because btr_mtr will be committed and written to the redo log after
the BLOB writes have been written to the redo log.

btr_blob_op_is_update(): Check if an operation passed to
btr_store_big_rec_extern_fields() is an update or insert-by-update.

fseg_alloc_free_page_low(), fsp_alloc_free_page(),
fseg_alloc_free_extent(), fseg_alloc_free_page_general(): Add the
parameter init_mtr. Return an allocated block, or NULL. If
init_mtr!=mtr but the page was already X-latched in mtr, do not
initialize the page.

xdes_get_descriptor_with_space_hdr(): Assert that the file space
header is being X-latched.

fsp_alloc_from_free_frag(): Refactored from fsp_alloc_free_page().

fsp_page_create(): New function, for allocating, X-latching and
potentially initializing a page. If init_mtr!=mtr but the page was
already X-latched in mtr, do not initialize the page.

fsp_free_page(): Add ut_ad(0) to the error outcomes.

fsp_free_page(), fseg_free_page_low(): Increment mtr->n_freed_pages.

fsp_alloc_seg_inode_page(), fseg_create_general(): Assert that the
page was not previously X-latched in the mini-transaction. A file
segment or inode page should never be allocated in the middle of a
mini-transaction that frees pages, such as btr_cur_pessimistic_delete().

fseg_alloc_free_page_low(): If the hinted page was allocated, skip the
check if the tablespace should be extended. Return NULL instead of
FIL_NULL on failure. Remove the flag frag_page_allocated. Instead,
return directly, because the page would already have been initialized.

fseg_find_free_frag_page_slot() would return ULINT_UNDEFINED on error,
not FIL_NULL. Correct a bogus assertion.

fseg_alloc_free_page(): Redefine as a wrapper macro around
fseg_alloc_free_page_general().

buf_block_buf_fix_inc(): Move the definition from the buf0buf.ic to
buf0buf.h, so that it can be called from other modules.

mtr_t: Add n_freed_pages (number of pages that have been freed).

page_rec_get_nth_const(), page_rec_get_nth(): The inverse function of
page_rec_get_n_recs_before(), get the nth record of the record
list. This is faster than iterating the linked list. Refactored from
page_get_middle_rec().

trx_undo_rec_copy(): Add a debug assertion for the length.

trx_undo_add_page(): Return a block descriptor or NULL instead of a
page number or FIL_NULL.

trx_undo_report_row_operation(): Add debug assertions.

trx_sys_create_doublewrite_buf(): Assert that each page was not
previously X-latched.

page_cur_insert_rec_zip_reorg(): Make use of page_rec_get_nth().

row_ins_clust_index_entry_by_modify(): Pass BTR_KEEP_POS_FLAG, so that
the repositioning of the cursor can be avoided.

row_ins_index_entry_low(): Add DEBUG_SYNC points before and after
writing off-page columns. If inserting by updating a delete-marked
record, do not reposition the cursor or commit the mini-transaction
before writing the off-page columns.

row_build(): Tighten a debug assertion about null BLOB pointers.

row_upd_clust_rec(): Add DEBUG_SYNC points before and after writing
off-page columns. Do not reposition the cursor or commit the
mini-transaction before writing the off-page columns.

rb:939 approved by Jimmy Yang
2012-02-17 11:42:04 +02:00
Marko Mäkelä
8b0f2c4d7d Remove a race condition in innodb_bug53756.test.
Before killing the server, tell mysql-test-run that it is to be expected.

Discussed with Bjorn Munch on IM.
2012-02-15 16:28:00 +02:00
Georgi Kodinov
145043fd69 merged mysql-5.1->mysql-5.1-security 2012-02-06 18:24:51 +02:00
Vasil Dimov
17afdb9051 Fix Bug#11754376 45976: INNODB LOST FILES FOR TEMPORARY TABLES ON
GRACEFUL SHUTDOWN

During startup mysql picks up .frm files from the tmpdir directory and
tries to drop those tables in the storage engine.

The problem is that when tmpdir ends in / then ha_innobase::delete_table()
is passed a string like "/var/tmp//#sql123", then it wrongly normalizes it
to "/#sql123" and calls row_drop_table_for_mysql() which of course fails
to delete the table entry from the InnoDB dictionary cache.
ha_innobase::delete_table() returns an error but nevertheless mysql wipes
away the .frm file and the entry in the InnoDB dictionary cache remains
orphaned with no easy way to remove it.

The "no easy" way to remove it is to create a similar temporary table again,
copy its .frm file to tmpdir under "#sql123.frm" and restart mysqld with
tmpdir=/var/tmp (no trailing slash) - this way mysql will pick the .frm file
after restart and will try to issue drop table for "/var/tmp/#sql123"
(notice no double slash), ha_innobase::delete_table() will normalize it to
"tmp/#sql123" and row_drop_table_for_mysql() will successfully remove the
table entry from the dictionary cache.

The solution is to fix normalize_table_name_low() to normalize things like
"/var/tmp//table" correctly to "tmp/table".

This patch also adds a test function which invokes
normalize_table_name_low() with various inputs to make sure it works
correctly and a mtr test that calls this test function.

Reviewed by:	Marko (http://bur03.no.oracle.com/rb/r/929/)
2012-02-06 12:44:59 +02:00
Alexander Barkov
680fd893f0 Postfix for Bug#11752408.
Recording correct test results.

modified:
  mysql-test/suite/engines/funcs/r/db_alter_collate_ascii.result
  mysql-test/suite/engines/funcs/r/db_alter_collate_utf8.result
2012-02-02 16:22:13 +04:00
Marko Mäkelä
a96c87206b Bug#13654923 BOGUS DEBUG ASSERTION IN INDEX CREATION FOR ZERO-LENGTH RECORD
row_merge_buf_write(): Relax the bogus assertion.
2012-02-02 13:38:32 +02:00
Marko Mäkelä
647abc1312 Suppress messages about long semaphore waits in innodb_bug34300.test. 2012-02-02 12:07:06 +02:00
Nuno Carvalho
bffc7ec82e BUG#11893288 60542: RPL.RPL_EXTRA_COL_MASTER_* DOESN'T TEST WHAT WAS INTENDED
Test extra/rpl_tests/rpl_extra_col_master.test (used by
rpl_extra_col_master_*) ends with the active connection pointing to the
slave. Thus, the two last tests never succeed in changing the binlog
format of the master away from 'row'. With correct active connection
(master) tests fail for binlog 'statement' and 'mixed' formats.

Tests rpl_extra_col_master_* only run when binary log format is
row.  Statement and mix replication do not make sense in this
tests since it will try to execute statements on columns that do
not exist.  This fix is basically a backport from mysql-5.5, see
changes done as part of BUG 39934.
2012-01-16 09:17:40 +00:00
Georgi Kodinov
aa03fc5333 weave merge mysql-5.1->mysql-5.1-security 2012-01-12 16:42:23 +02:00
Yasufumi Kinoshita
40203bd584 Bug#12400341 INNODB CAN LEAVE ORPHAN IBD FILES AROUND
If we meet DB_TOO_MANY_CONCURRENT_TRXS during the execution of tab_create_graph from row_create_table_for_mysql(), the .ibd file for the table has already been created, but it was not deleted during error handling.

rb:875 approved by Jimmy Yang
2012-01-10 14:18:58 +09:00
Hemant Kumar
5b576596a2 Bug#12872803 - 62154: FEDERATED.FEDERATED_SERVER TEST FAILS WITH RUN --REPEAT=2
Fixed it to work with "--repeat" option.
2012-01-06 16:28:24 +05:30
Hemant Kumar
595f116df0 Bug#12872804 - 62155: BINLOG.BINLOG_STM_UNSAFE_WARNING FAILS WHEN RUN WITH --REPEAT=2
Fixed the test case by using timestamp logic when grepping the error file.
2012-01-06 15:46:03 +05:30
Vasil Dimov
43ea968d45 Fix Bug#13510739 63775: SERVER CRASH ON HANDLER READ NEXT AFTER DELETE RECORD.
CREATE TABLE bug13510739 (c INTEGER NOT NULL, PRIMARY KEY (c)) ENGINE=INNODB;
INSERT INTO bug13510739 VALUES (1), (2), (3), (4);
DELETE FROM bug13510739 WHERE c=2;
HANDLER bug13510739 OPEN;
HANDLER bug13510739 READ `primary` = (2);
HANDLER bug13510739 READ `primary` NEXT;  <-- crash

The bug is that in this particular test case row_search_for_mysql() picked up
a delete-marked record and quit, leaving the cursor in a non-positioned state,
and on the subsequent 'get next' call the code crashed because of the
non-positioned cursor.

In row0sel.cc (line numbers from mysql-trunk):

4653         if (rec_get_deleted_flag(rec, comp)) {
...
4679                 if (index == clust_index && unique_search) {
4680 
4681                         err = DB_RECORD_NOT_FOUND;
4682                         
4683                         goto normal_return;
4684                 }       

it quit from here, not storing the cursor position.

In contrast, if the record=2 is not found at all (e.g. sleep(1) after DELETE
to let the purge wipe it away completely) then 'get = 2' does find record=3
and quits from here:

4366                 if (0 != cmp_dtuple_rec(search_tuple, rec, offsets)) {
...
4394                         btr_pcur_store_position(pcur, &mtr);
4395 
4396                         err = DB_RECORD_NOT_FOUND;
4397 #if 0
4398                         ut_print_name(stderr, trx, FALSE, index->name);
4399                         fputs(" record not found 3\n", stderr);
4400 #endif
4401 
4402                         goto normal_return;

Another fix could be to extend the condition on line 4366 to hold only if
search_tuple matches rec AND rec is not delete-marked.

Notice that in the above test case if we wait about 1 second somewhere after
DELETE and before 'get = 2', then the testcase does not crash and returns 4
instead. Not sure if this is the correct behavior, but this bugfix removes
the crash and makes the code return what it also returns in the non-crashing
case (if rec=2 is not found during 'get = 2', e.g. we have sleep(1) there).

Approved by:	Marko (http://bur03.no.oracle.com/rb/r/863/)
2011-12-22 12:55:44 +02:00
Annamalai Gurusami
22b3830483 Bug #13117023: Innodb increments handler_read_key when it should not
The counter handler_read_key (SSV::ha_read_key_count) is incremented 
incorrectly.

The mysql server maintains a per thread system_status_var (SSV)
object.  This object contains among other things the counter
SSV::ha_read_key_count. The purpose of this counter is to measure the
number of requests to read a row based on a key (or the number of
index lookups).

This counter was wrongly incremented in the
ha_innobase::innobase_get_index(). The fix removes
this increment statement (for both innodb and innodb_plugin).

The various callers of innobase_get_index() were checked to
determine whether any of them must increment this counter (if they first call
innobase_get_index() and then perform an index lookup).  It was found
that no caller of innobase_get_index() needs to worry about the
SSV::ha_read_key_count counter.
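
A hedged way to observe the counter from SQL (table and column names invented):

  FLUSH STATUS;
  SELECT * FROM t WHERE pk_col = 42;              -- one index lookup expected
  SHOW SESSION STATUS LIKE 'Handler_read_key';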
2011-12-13 14:26:12 +05:30