mirror of https://github.com/MariaDB/server.git synced 2025-07-01 03:26:54 +03:00
Commit Graph

5200 Commits

Author SHA1 Message Date
118b756198 BUG#13556107: CHECK AND REPAIR TABLE SHOULD BE MORE ROBUST [3]
Problem description: Incorrect key file; the key file is reported as
corrupted while reading the keys from the file. The problem here is that
keyseg->start (which should point to the beginning of a field)
points beyond the total record length.

Fix: If keyseg->start is greater than the total record length, then
return an error.
2012-11-14 17:02:36 +05:30
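A minimal C sketch of the kind of bounds check this fix describes; key_segment_t and validate_key_segments are illustrative names, not the real MyISAM structures:

#include <stdio.h>
#include <stddef.h>

/* Hypothetical stand-in for a MyISAM key segment descriptor; the real
   structure lives in the MyISAM sources. */
typedef struct {
    unsigned start;    /* byte offset of the indexed field within the record */
    unsigned length;   /* length of the indexed field */
} key_segment_t;

/* The check the fix describes: if a segment's start offset points beyond
   the total record length, treat the key file as corrupt and return an
   error instead of reading past the record. */
static int validate_key_segments(const key_segment_t *segs, size_t n_segs,
                                 unsigned total_record_length)
{
    for (size_t i = 0; i < n_segs; i++) {
        if (segs[i].start > total_record_length) {
            fprintf(stderr, "corrupt key file: segment %zu starts at %u, "
                            "record length is only %u\n",
                    i, segs[i].start, total_record_length);
            return -1;
        }
    }
    return 0;
}

int main(void)
{
    key_segment_t segs[] = { { 0, 4 }, { 120, 8 } };  /* second segment is bogus */
    return validate_key_segments(segs, 2, 100) == 0 ? 1 : 0;
}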
e42cd2db91 Fix Bug #14753402 - FAILING ASSERTION: LEN == IFIELD->FIXED_LEN
rb://1411 approved by Marko
2012-11-14 17:00:41 +08:00
5a7553f36a Bug #14676111 WRONG PAGE_LEVEL WRITTEN FOR UPPER THAN FATHER PAGE AT BTR_LIFT_PAGE_UP()
btr_lift_page_up() writes the wrong page number (off by -1) for pages above the father page.
But in almost all cases the father page is the root page, with no pages
above it, so this is a very rare path.

In addition, a leaf page should not be lifted unless the father page is the root,
because branch pages should not become leaf pages.

rb://1336 approved by Marko Makela.
2012-11-12 22:31:30 +09:00
04195c30c1 This is a backport of "WL#5674 InnoDB: report all deadlocks (Bug#1784)"
from MySQL 5.6 into MySQL 5.5

Will close Bug#14515889 BACKPORT OF INNODB DEADLOCK LOGGING TO 5.5

The original implementation is in
vasil.dimov@oracle.com-20101213120811-k2ldtnao2t6zrxfn

Approved by:	Jimmy (rb:1535)
2012-11-12 14:24:43 +02:00
51d01d7517 Bug #14676111 WRONG PAGE_LEVEL WRITTEN FOR UPPER THAN FATHER PAGE AT BTR_LIFT_PAGE_UP()
btr_lift_page_up() writes the wrong page number (off by -1) for pages above the father page.
But in almost all cases the father page is the root page, with no pages
above it, so this is a very rare path.

In addition, a leaf page should not be lifted unless the father page is the root,
because branch pages should not become leaf pages.

rb://1336 approved by Marko Makela.
2012-11-12 22:33:40 +09:00
9749b60ee8 Bug#13556000: CHECK AND REPAIR TABLE SHOULD BE MORE ROBUST[2]
Problem description: Corrupt key file for the table. The size of the
key is greater than the maximum specified size, which overflows the
key buffer while reading the key from the key file.

Fix: If the size of the key is greater than the maximum size, return
an error before writing it into the key buffer. The table is reported
as corrupt, but there is no stack overflow.
2012-11-09 19:19:11 +05:30
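A short C sketch of the guard this fix describes; MAX_KEY_LENGTH here is an illustrative limit and read_key_checked an illustrative name, not the MyISAM code:

#include <stdio.h>
#include <string.h>

#define MAX_KEY_LENGTH 1024   /* illustrative limit, not the real MyISAM value */

/* Refuse a key whose claimed length exceeds the maximum before copying it,
   so a corrupt length field in the key file cannot overflow the key buffer
   (a "corrupt file" error instead of a stack overflow). */
static int read_key_checked(unsigned char *dst, const unsigned char *src,
                            size_t key_length)
{
    if (key_length > MAX_KEY_LENGTH) {
        fprintf(stderr, "error: key length %zu exceeds maximum %d; "
                        "key file is corrupt\n", key_length, MAX_KEY_LENGTH);
        return -1;
    }
    memcpy(dst, src, key_length);
    return 0;
}

int main(void)
{
    unsigned char buf[MAX_KEY_LENGTH];
    unsigned char raw[16] = { 0 };

    /* A corrupt header might claim an absurd key length; the check rejects it
       before any copy happens. */
    return read_key_checked(buf, raw, 70000) == 0 ? 1 : 0;
}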
2ad007dfd6 Bug #14669848 CRASH DURING ALTER MAKES ORIGINAL TABLE INACCESSIBLE
When a new primary key is added to an InnoDB table, the InnoDB plugin
takes the following steps:

.  let t1 be the original table.
.  a temporary table t1@00231 will be created by cloning t1.
.  all data will be copied from t1 to t1@00231.
.  rename t1 to t1@00232.
.  rename t1@00231 to t1.
.  drop t1@00232.

The rename and drop operations involve file operations, but file operations
cannot be rolled back.  So in row_merge_rename_tables(), just after doing the data
dictionary update and before doing any file operations, generate redo logs
for the file operations and commit the transaction.  This ensures that after any
crash following this commit, the table is still recoverable by moving the .ibd and
.frm files.  Manual recovery is required.

During recovery, the rename file operation redo logs are processed.
Previously they were ignored.

rb://1460 approved by Marko Makela.
2012-11-09 19:04:01 +05:30
2b68f44034 Merging from mysql-5.1 to mysql-5.5. 2012-11-09 18:56:20 +05:30
7ef879bb81 Bug #14669848 CRASH DURING ALTER MAKES ORIGINAL TABLE INACCESSIBLE
When a new primary key is added to an InnoDB table, the InnoDB plugin
takes the following steps:

.  let t1 be the original table.
.  a temporary table t1@00231 will be created by cloning t1.
.  all data will be copied from t1 to t1@00231.
.  rename t1 to t1@00232.
.  rename t1@00231 to t1.
.  drop t1@00232.

The rename and drop operations involve file operations, but file operations
cannot be rolled back.  So in row_merge_rename_tables(), just after doing the data
dictionary update and before doing any file operations, generate redo logs
for the file operations and commit the transaction.  This ensures that after any
crash following this commit, the table is still recoverable by moving the .ibd and
.frm files.  Manual recovery is required.

During recovery, the rename file operation redo logs are processed.
Previously they were ignored.

rb://1460 approved by Marko Makela.
2012-11-08 18:09:13 +05:30
1d771aa425 Bug #11759445: CAN'T DELETE ROWS FROM MEMORY TABLE WITH HASH KEY.
Merging from 5.1 to 5.5
2012-11-07 09:03:33 +05:30
2226b1084c Bug #11759445: CAN'T DELETE ROWS FROM MEMORY TABLE WITH HASH KEY.
Brief description: After inserting some rows into a MEMORY table with a HASH
key, some rows cannot be deleted in one step.

Problem analysis/solution: info->current_ptr holds the current hash pointer,
from which we can traverse the list to reach all the remaining tuples.

In hp_delete_key() we update info->current_ptr with last_pos based on the
flag parameter (set when the keydef and the last index are the same). As part
of the fix we reset it to zero only when control reaches the end of
hp_delete_key(). This means the next record to be deleted will be at the
start of the list, so the next call to read a record (heap_rnext()) takes the
line 100 path instead of the line 102 path; see the code below from
hp_rnext.c, function heap_rnext().
 99       else if (!info->current_ptr)              /* Deleted or first call */
100         pos= hp_search(info, keyinfo, info->lastkey, 0);
101       else  
102         pos= hp_search(info, keyinfo, info->lastkey, 1);

With that change, hp_search() updates info->current_ptr with the
record that needs to be deleted.

storage/heap/hp_delete.c:
  In hp_delete_key(), set info->current_ptr to 0 when the
  flag is enabled.
2012-11-07 09:00:17 +05:30
02501a0f97 BUG#13556441: CHECK AND REPAIR TABLE SHOULD BE MORE ROBUST [4]
Problem description:
mysql server crashes when we run repair table on a corrupted table.

Analysis:
The problem with this bug seems to be key_reflength out of bounds
(186 according to the debugger). We read this value from the meta-data
segment of the .MYI file while doing mi_open().

If you look into _mi_kpointer() you can see that the upper limit
for key_reflength is 7.

Solution:
In mi_open() there is a line like:
  if (share->base.keystart > 65535 || share->base.rec_reflength > 8)
we should verify key_reflength here as well.
2012-10-31 18:32:53 +05:30
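A sketch of the extended sanity test the commit suggests; the struct definitions below are toy stand-ins for the field names quoted above, not the MyISAM headers:

#include <stdio.h>

/* Toy stand-ins for the header fields named in the commit message; the real
   definitions live in the MyISAM sources. */
struct base_info { unsigned keystart; unsigned rec_reflength; unsigned key_reflength; };
struct share_t   { struct base_info base; };

/* _mi_kpointer() only handles key_reflength values up to 7, so a larger
   value read from the .MYI meta-data marks the file as corrupt. */
static int header_is_sane(const struct share_t *share)
{
    if (share->base.keystart > 65535 ||
        share->base.rec_reflength > 8 ||
        share->base.key_reflength > 7)   /* the added bound */
        return 0;                        /* treat the .MYI file as corrupt */
    return 1;
}

int main(void)
{
    struct share_t s = { { 1024, 4, 186 } };   /* 186: the value seen in the debugger */
    printf("header sane: %d\n", header_is_sane(&s));
    return 0;
}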
a18a3474bb BUG#11754894: MYISAMCHK ERROR HAS INCORRECT REFERENCE
TO 'MYISAM_SORT_BUFFER_SIZE'
Merging from 5.1 to 5.5
2012-10-30 19:00:12 +05:30
40b8b95142 BUG#11754894: MYISAMCHK ERROR HAS INCORRECT REFERENCE
TO 'MYISAM_SORT_BUFFER_SIZE'
Problem: 'myisam_sort_buffer_size' is a parameter used by the
mysqld program only, whereas 'sort_buffer_size' is used by both the
mysqld and myisamchk programs. But the error message printed when the
myisamchk program is run with an insufficient buffer size is
"myisam_sort_buffer_size is too small", which may misleadingly point to
the server parameter myisam_sort_buffer_size.
SOLUTION: A parameter 'myisam_sort_buffer_size' is added as an
alias for 'sort_buffer_size', and the 'sort_buffer_size' parameter
is marked as deprecated. So myisamchk also has both parameters
with the same role.
2012-10-30 18:49:15 +05:30
84d87d938a BUG#11754894: MYISAMCHK ERROR HAS INCORRECT REFERENCE
TO 'MYISAM_SORT_BUFFER_SIZE'
Problem: 'myisam_sort_buffer_size' is a parameter used by the
mysqld program only, whereas 'sort_buffer_size' is used by both the
mysqld and myisamchk programs. But the error message printed when the
myisamchk program is run with an insufficient buffer size is
"myisam_sort_buffer_size is too small", which may misleadingly point to
the server parameter myisam_sort_buffer_size.
SOLUTION: A parameter 'myisam_sort_buffer_size' is added as an
alias for 'sort_buffer_size', and the 'sort_buffer_size' parameter
is marked as deprecated. So myisamchk also has both parameters
with the same role.
2012-10-30 16:53:55 +05:30
e5b11572bc Merge mysql-5.1 to mysql-5.5. 2012-10-22 22:13:26 +03:00
d13554b1f9 Backport from 5.6: Bug#14769820 ASSERT FLEN == LEN
IN ALTER TABLE ... ADD UNIQUE KEY

A bogus debug assertion failure occurred when reporting a duplicate
key on a column prefix of a CHAR column.

This is a regression from Bug#14729221 IN-PLACE ALTER TABLE REPORTS ''
INSTEAD OF REAL DUPLICATE VALUE FOR PREFIX KEYS. The assertion is only
present when UNIV_DEBUG is defined (which it is in debug builds
starting from MySQL 5.5). It is a case of overasserting.

Fix approved by Inaam Rana on IM.
2012-10-22 22:10:33 +03:00
f9389d584d Merge mysql-5.1 to mysql-5.5. 2012-10-18 17:14:57 +03:00
52ea152294 Bug#14758405: ALTER TABLE: ADDING SERIAL NULL DATATYPE: ASSERTION:
LEN <= SIZEOF(ULONGLONG)

This bug was caught during the WL#6255 ALTER TABLE...ADD COLUMN work in
MySQL 5.6, but it exists in all InnoDB versions that support
auto-increment columns.

row_search_autoinc_read_column(): When reading the maximum value of
the auto-increment column, and the column only contains NULL values,
return 0. This corresponds to the case when the table is empty in
row_search_max_autoinc().

rb:1415 approved by Sunny Bains
2012-10-18 17:03:06 +03:00
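The rule stated above, as a small self-contained C illustration (max_autoinc_value is an illustrative function, not the InnoDB code):

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* When scanning the auto-increment column for its maximum value, a column
   containing only NULLs yields 0, matching the "empty table" case in
   row_search_max_autoinc(). */
static uint64_t max_autoinc_value(const uint64_t *values, const bool *is_null,
                                  size_t n_rows)
{
    uint64_t max = 0;

    for (size_t i = 0; i < n_rows; i++) {
        if (is_null[i])
            continue;              /* NULL values contribute nothing */
        if (values[i] > max)
            max = values[i];
    }
    return max;                    /* stays 0 when every value was NULL */
}

int main(void)
{
    uint64_t v[] = { 0, 0, 0 };
    bool     n[] = { true, true, true };

    printf("max autoinc: %llu\n",
           (unsigned long long) max_autoinc_value(v, n, 3));
    return 0;
}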
bdc6895d35 Bug #13702112 : WAIT_FOR_READ IS STUCK IN THE 90S
rb://1334
approved by: Inaam Rana
2012-10-17 16:16:45 +09:00
39e6eafc20 Bug #13702112 : WAIT_FOR_READ IS STUCK IN THE 90S
rb://1334
approved by: Inaam Rana
2012-10-17 15:28:31 +09:00
bb371b649d Merge mysql-5.1 to mysql-5.5. 2012-10-16 14:35:19 +03:00
aec624762b Bug#14729221 IN-PLACE ALTER TABLE REPORTS '' INSTEAD OF
REAL DUPLICATE VALUE FOR PREFIX KEYS

innobase_rec_to_mysql(): Invoke dict_index_get_nth_col_or_prefix_pos()
instead of dict_index_get_nth_col_pos() to find the column.
2012-10-16 14:24:15 +03:00
ed6732dd16 bug#14704286
SECONDARY INDEX UPDATES MAKE CONSISTENT READS DO O(N^2) UNDO PAGE
LOOKUPS (honoring kill query while accessing sec_index)

If a secondary index is used to evaluate a SELECT query and the query
operates with a consistent read snapshot, it may take a long time for the
secondary index scan to return control to MySQL, because MVCC kicks in.

If the user issues "kill query <id>" while the query is actively accessing
the secondary index, it is not honored because there is no hook to check
for this condition. Added a hook for this check.

-----
In parallel, the case of a secondary index taking too long to evaluate
under a consistent read snapshot is being examined for performance
improvement. WL#6540.
2012-10-15 09:49:50 +05:30
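A rough C sketch of the shape of such a hook; statement_killed and scan_secondary_index are illustrative names, not the server API:

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

/* Inside the row-by-row secondary-index loop, periodically check whether the
   statement has been killed and return early instead of finishing a long
   MVCC-heavy scan. */
static bool statement_killed(const bool *kill_flag)
{
    return *kill_flag;
}

static int scan_secondary_index(size_t n_rows, const bool *kill_flag)
{
    for (size_t row = 0; row < n_rows; row++) {
        if ((row & 1023) == 0 && statement_killed(kill_flag))
            return -1;             /* honour KILL QUERY promptly */
        /* ... fetch the row via the secondary index, apply MVCC visibility ... */
    }
    return 0;
}

int main(void)
{
    bool killed = true;

    printf("scan result: %d\n", scan_secondary_index(1000000, &killed));
    return 0;
}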
9bfc910f2f bug#14704286
SECONDARY INDEX UPDATES MAKE CONSISTENT READS DO O(N^2) UNDO PAGE
LOOKUPS (honoring kill query while accessing sec_index)

If a secondary index is used to evaluate a SELECT query and the query
operates with a consistent read snapshot, it may take a long time for the
secondary index scan to return control to MySQL, because MVCC kicks in.

If the user issues "kill query <id>" while the query is actively accessing
the secondary index, it is not honored because there is no hook to check
for this condition. Added a hook for this check.

-----
In parallel, the case of a secondary index taking too long to evaluate
under a consistent read snapshot is being examined for performance
improvement. WL#6540.
2012-10-15 09:24:33 +05:30
1d16fc16dc Fix compilation error in debug mode:
os/os0file.c:1332: error: ISO C90 forbids mixed declarations and code
2012-10-10 22:22:10 +03:00
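For context, a minimal example of the construct ISO C90 rejects and the conventional fix (declarations moved to the top of the block); the variable names are made up for the demo:

#include <stdio.h>

/* ISO C90 requires all declarations at the start of a block; declaring a
   variable after a statement ("mixed declarations and code") is only legal
   from C99 on. */
int main(void)
{
    int fd = 0;            /* C90 style: declared before any statement */
    int bytes_written;     /* declaring this after the printf below would break C90 */

    printf("opening file, fd=%d\n", fd);

    bytes_written = 0;     /* assignment (a statement) is fine anywhere */
    printf("wrote %d bytes\n", bytes_written);
    return 0;
}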
61bd7d0ce9 Merge mysql-5.1 -> mysql-5.5 2012-10-09 16:41:13 +03:00
4cefe863ae Port the test for Bug#14708715 from 5.1/innodb_plugin into 5.1/innodb
although the bug does not exist in 5.1/innodb.
2012-10-09 16:29:00 +03:00
8baabf3090 Update the ChangeLog with the fix of Bug#14708715 2012-10-09 16:08:06 +03:00
93398c307f Fix Bug#14708715 CREATE TABLE MEMORY LEAK ON DB_OUT_OF_FILE_SPACE
The problem is in the error handling in row_create_table_for_mysql().
In the 'disk full' case we may forget to call dict_mem_table_free() on
the table object.

Approved by:	Marko (rb:1377 and rb:1386)
2012-10-09 16:02:58 +03:00
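A generic C sketch of the leak pattern the fix removes; mem_table_create/mem_table_free are illustrative names, not the InnoDB dictionary API:

#include <stdio.h>
#include <stdlib.h>

/* When table creation fails partway (e.g. "disk full"), the in-memory table
   object must be freed on that error path too, not only on the common ones. */
typedef struct { int id; } mem_table_t;

static mem_table_t *mem_table_create(void) { return calloc(1, sizeof(mem_table_t)); }
static void mem_table_free(mem_table_t *t) { free(t); }

static int create_table(int simulate_disk_full)
{
    mem_table_t *table = mem_table_create();
    if (table == NULL)
        return -1;

    if (simulate_disk_full) {
        mem_table_free(table);   /* the cleanup that was missing on this path */
        return -2;               /* out-of-file-space style error */
    }

    /* ... on success the object would be handed over to the data dictionary ... */
    mem_table_free(table);       /* demo only, so the example itself does not leak */
    return 0;
}

int main(void)
{
    printf("create: %d\n", create_table(1));
    return 0;
}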
378a7d1ef5 Bug #14036214 MYSQLD CRASHES WHEN EXECUTING UPDATE IN TRX WITH
CONSISTENT SNAPSHOT OPTION

A transaction is started with a consistent snapshot.  After 
the transaction is started new indexes are added to the 
table.  Now when we issue an update statement, the optimizer
chooses an index.  When the index scan is being initialized
via ha_innobase::change_active_index(), InnoDB reports 
the error code HA_ERR_TABLE_DEF_CHANGED, with message 
stating that "insufficient history for index".

This error message is propagated up to the SQL layer, but
the my_error() API is never called.  The statement-level
diagnostics area is not updated with the correct error
status (it remains in Diagnostics_area::DA_EMPTY).

Hence the following check in Protocol::end_statement()
fails.

  case Diagnostics_area::DA_EMPTY:
  default:
    DBUG_ASSERT(0);
    error= send_ok(thd->server_status, 0, 0, 0, NULL);
    break;

The fix is to backport the fix of bugs 14365043, 11761652 
and 11746399. 

14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED

rb://1227 approved by guilhem and mattiasj.
2012-10-08 19:40:30 +05:30
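A small C sketch of the pattern the backported fixes enforce; the enum, struct, and function names below are illustrative, not the actual server classes:

#include <stdio.h>

/* The return value of the index/rnd init call must be checked and turned into
   a recorded error, so the statement's diagnostics area never stays empty
   when initialization fails. */
enum da_state { DA_EMPTY, DA_OK, DA_ERROR };

struct diagnostics { enum da_state state; int errcode; };

static int index_init(int simulate_failure)
{
    return simulate_failure ? 1 : 0;   /* nonzero: e.g. a "table def changed" error */
}

static int begin_index_scan(struct diagnostics *da, int simulate_failure)
{
    int err = index_init(simulate_failure);

    if (err != 0) {
        da->state = DA_ERROR;          /* the step that was previously skipped */
        da->errcode = err;
        return err;
    }
    da->state = DA_OK;
    return 0;
}

int main(void)
{
    struct diagnostics da = { DA_EMPTY, 0 };

    begin_index_scan(&da, 1);
    printf("diagnostics state=%d errcode=%d\n", (int) da.state, da.errcode);
    return 0;
}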
52a4ef95e8 Merge mysql-5.1 to mysql-5.5.
Also, add debug check for trx_id sanity to row_upd_rec_sys_fields().
2012-10-08 16:18:54 +03:00
b06620868e Bug#14731482 UPDATE OR DELETE CORRUPTS A RECORD WITH A LONG PRIMARY KEY
We did not allocate enough bits for index->trx_id_offset, causing an
UPDATE or DELETE of a table with a PRIMARY KEY longer than 1024 bytes
to corrupt the PRIMARY KEY.

dict_index_t: Allocate enough bits.

dict_index_build_internal_clust(): Check for overflow of
index->trx_id_offset. Trip a debug assertion when overflow occurs.

rb:1380 approved by Jimmy Yang
2012-10-08 16:01:50 +03:00
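An illustration of the bug class described above: a value stored into a too-narrow bitfield silently wraps. The field widths here are made up for the demo; the fix widens the real index->trx_id_offset field and asserts on overflow:

#include <stdio.h>
#include <assert.h>

struct narrow_index { unsigned trx_id_offset : 10; };  /* holds only 0..1023 */
struct wide_index   { unsigned trx_id_offset : 12; };  /* room for longer keys */

int main(void)
{
    struct narrow_index n;
    struct wide_index   w;
    unsigned offset = 1100;   /* PRIMARY KEY longer than 1024 bytes */

    n.trx_id_offset = offset; /* silently truncated: the corruption */
    w.trx_id_offset = offset; /* fits */

    printf("narrow stores %u, wide stores %u\n",
           (unsigned) n.trx_id_offset, (unsigned) w.trx_id_offset);

    assert(offset == w.trx_id_offset);   /* the spirit of the added debug check */
    return 0;
}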
b59a64e2f3 Bug #13249921 ASSERT !BPAGE->FILE_PAGE_WAS_FREED, USUALLY IN
TRANSACTION ROLLBACK

Description:  During the rollback operation, a blob page
is removed earlier than desired.  Consider the following scenario:

1. create table t1(a int primary key,b blob) engine=innodb;
2. insert into t1 values (1,repeat('b',9000));
3. begin;
4. update t1 set b=concat(b,'b');
5. update t1 set a=a+1;
6. insert into t1 values (1,repeat('b',9000));
7. rollback;

The update operation in line 5 produces 2 undo log records. The first
undo record (TRX_UNDO_DEL_MARK_REC) goes to trx->update_undo and the
second undo record (TRX_UNDO_INSERT_REC) goes to trx->insert_undo.
During rollback, they are executed out of order.

When the undo record TRX_UNDO_DEL_MARK_REC is applied/executed,
the blob ownership is also reset.  Because of this, the blob page
is released earlier than desired.  This blob page should have been
freed only as part of applying/executing the undo record
TRX_UNDO_INSERT_REC.

This problem can be avoided by executing the undo records in
order.  This patch makes InnoDB execute the undo records
in order.

rb://1125 approved by Marko.
2012-09-28 16:02:58 +05:30
6bbe24e9a0 Bug#14636528 INNODB CHANGE BUFFERING IS NOT ENTIRELY CRASH-SAFE
Delete-mark change buffer records when resorting to a pessimistic
delete from the change buffer B-tree. Skip delete-marked records in
the change buffer merge and when estimating whether an operation can
be buffered. Without this fix, we could try to apply the same buffered
changes multiple times if the server was killed at the right moment.

In MySQL 5.5 and later: ibuf_get_volume_buffered_count_func(): Ignore
delete-marked (already processed) records.

ibuf_delete_rec(): Add a crash point before optimistic delete. If the
optimistic delete fails, flag the record processed before
mtr_commit().

ibuf_merge_or_delete_for_page(): Ignore delete-marked (already
processed) records.

Backport to 5.1: Rename btr_cur_del_unmark_for_ibuf() to
btr_cur_set_deleted_flag_for_ibuf() and add a parameter.

rb:1307 approved by Jimmy Yang
2012-09-19 22:35:38 +03:00
300f3fb77f Bug#12701488 ASSERT PAGE_ZIP_VALIDATE, UNIV_ZIP_DEBUG
page_zip_validate(), page_zip_validate_low(): Add a parameter for the
B-tree index.

page_zip_validate_low(): If the page contents do not match, check
that the record link chains match. Furthermore, if dict_index_t is
passed, check that the records match. (This reduces coverage a bit: if
index=NULL, we will ignore differences in record contents, that is,
the page payload.)

rb:1264 approved by Inaam Rana
2012-09-17 14:21:00 +03:00
356c42d2c2 Merge from mysql-5.1 to mysql-5.5. 2012-09-28 16:42:31 +05:30
76e4d6acf0 Bug#14594600 ASSERT FROM DROP TABLE CONCURRENT WITH IBUF MERGES
rb://1293
approved by: Marko Makela

There is a race when dropping a single-table tablespace: a reader
thread can initiate a read request before the delete flag is set, and
before that request is finished the deleting thread can attempt to free
the fil_node.

This patch checks the status in fil_io() to make sure that the
tablespace is not being deleted. If it is being deleted, an error is
returned instead of attempting I/O.
2012-09-20 08:44:33 -04:00
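A C sketch of the guard the patch adds; the enum, struct, and tablespace_io names are illustrative, not the fil_io() interface:

#include <stdio.h>
#include <stdbool.h>

/* Before starting I/O on a tablespace, check (under the proper mutex in the
   real code) whether it is already flagged for deletion, and fail the request
   rather than racing with the thread that frees the fil_node. */
enum io_result { IO_OK, IO_ERR_TABLESPACE_DELETED };

struct tablespace { bool is_being_deleted; };

static enum io_result tablespace_io(const struct tablespace *space)
{
    if (space->is_being_deleted)
        return IO_ERR_TABLESPACE_DELETED;  /* report an error, do not touch the file */

    /* ... issue the read or write request ... */
    return IO_OK;
}

int main(void)
{
    struct tablespace s = { true };

    printf("io result: %d\n", (int) tablespace_io(&s));
    return 0;
}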
161a91d39f Merge mysql-5.1 to mysql-5.5. 2012-09-19 22:55:26 +03:00
d7089f6f3e Merge mysql-5.1 to mysql-5.5. 2012-09-17 14:32:07 +03:00
f3a6816fe5 Bug #13453036 ERROR CODE 1118: ROW SIZE TOO LARGE - EVEN
THOUGH IT IS NOT.

The following error message is misleading because it claims 
that the BLOB space is not counted.  

"ERROR 1118 (42000): Row size too large. The maximum row size for 
the used table type, not counting BLOBs, is 8126. You have to 
change some columns to TEXT or BLOBs"

When ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT is used,
the BLOB prefix is stored inline along with the row.  So
the above error message is changed as follows, depending on
the row format used:

For ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC, the error
message is as follows:

"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB may help. In current row format, 
BLOB prefix of 0 bytes is stored inline."

For ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT, the error
message is as follows:

"ERROR 42000: Row size too large (> 8126). Changing some
columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or 
ROW_FORMAT=COMPRESSED may help. In current row
format, BLOB prefix of 768 bytes is stored inline."

rb://1252 approved by Marko Makela
2012-08-31 15:42:00 +05:30
d37f6298cf Bug#14554000 CRASH IN PAGE_REC_GET_NTH_CONST(NTH=0) DURING COMPRESSED
PAGE SPLIT

page_rec_get_nth_const(): Map nth==0 to the page infimum.

btr_compress(adjust=TRUE): Add a debug assertion for nth>0. The cursor
should never be positioned on the page infimum.

btr_index_page_validate(): Add test instrumentation for checking the
return values of page_rec_get_nth_const() during CHECK TABLE, and for
checking that the page directory slot 0 always contains only one
record, the predefined page infimum record.

page_cur_delete_rec(), page_delete_rec_list_end(): Add debug
assertions guarding against accessing the page slot 0.

page_copy_rec_list_start(): Clarify a comment about ret_pos==0.

rb:1248 approved by Jimmy Yang
2012-08-30 21:53:41 +03:00
961486e524 Bug#14547952: DEBUG BUILD FAILS ASSERTION IN RECORDS_IN_RANGE()
ha_innodb::records_in_range(): Remove a debug assertion
that prohibits an open range (full table).

The patch by Jorgen Loland only removed the assertion from the
built-in InnoDB, not from the InnoDB Plugin.
2012-08-30 21:49:24 +03:00
816a8b5384 Bug#14547952: DEBUG BUILD FAILS ASSERTION IN RECORDS_IN_RANGE()
ha_innobase::records_in_range(): Remove a debug assertion
that prohibits an open range (full table).
This assertion catches unnecessary calls to this method,
but such calls do not harm correctness.
2012-08-28 14:51:01 +02:00
9d41d7c57b Bug #14500557 CRASH WHEN USING LONG INNODB INDEXES
The ha_innobase table handler contained two search key buffers
(srch_key_val1, srch_key_val2) of fixed size used to store the search
key.  The size of these buffers was fixed at
REC_VERSION_56_MAX_INDEX_COL_LEN + 2.  But this size is not sufficient
to hold the search key.  Hence the following assert in
row_sel_convert_mysql_key_to_innobase() failed.

                /* Storing may use at most data_len bytes of buf */

                if (UNIV_LIKELY(!is_null)) {
                        ut_a(buf + data_len <= original_buf + buf_len);
                        row_mysql_store_col_in_innobase_format(
                                dfield, buf,
                                FALSE, /* MySQL key value format col */
                                key_ptr + data_offset, data_len,
                                dict_table_is_comp(index->table));
                        buf += data_len;
                }

The buffer size is now calculated with the formula
MAX_KEY_LENGTH + MAX_REF_PARTS*2.  This properly takes into account
the extra bytes needed to store the length of each column.  An index
can contain a maximum of MAX_REF_PARTS columns, and for each
column 2 bytes are needed to store the length.

rb://1238 approved by Marko and Vasil Dimov.
2012-09-04 14:33:56 +05:30
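The sizing rule stated above, as a tiny C illustration; the constant values below are placeholders, the real ones come from the server headers:

#include <stdio.h>

/* The search key buffer must hold the full key plus 2 length bytes per
   key part. */
#define MAX_KEY_LENGTH 3072
#define MAX_REF_PARTS    16

int main(void)
{
    unsigned char srch_key_val[MAX_KEY_LENGTH + MAX_REF_PARTS * 2];

    printf("search key buffer: %zu bytes (%d key bytes + %d length bytes)\n",
           sizeof srch_key_val, MAX_KEY_LENGTH, MAX_REF_PARTS * 2);
    return 0;
}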
3f0e739e3e Merge from mysql-5.1 to mysql-5.5. 2012-09-01 11:27:53 +05:30
22c0db6c3b Merge mysql-5.1 to mysql-5.5. 2012-08-30 22:01:23 +03:00
30aadd6b41 Bug#14145950 AUTO_INCREMENT ON DOUBLE WILL FAIL ON WINDOWS
Backport from mysql-5.6 the fix
(revision-id sunny.bains@oracle.com-20120315045831-20rgfa4cozxmz7kz)

  Bug#13839886 - CRASH IN INNOBASE_NEXT_AUTOINC
  
  The assertion introduced in the fix for Bug#13817703
  is too strong: a negative number can be greater
  than the column max value when the column value is
  a negative number.
  
  rb://978 Approved by Jimmy Yang.

rb:1236 approved by Marko Makela
2012-08-27 15:42:11 +05:30
9bc328a41a Bug#13417440 : 63340: ARCHIVE FILE IO NOT INSTRUMENTED
WARNING

This patch is for mysql-5.5 only,
to be null-merged to mysql-5.6 and mysql-trunk.

This is a partial rollback of the file io instrumentation,
removing the instrumentation for mysql_file_stat in the archive engine.

See the bug comments for details.
2012-08-24 10:01:59 +02:00
3f249921f8 Fix regression from Bug#12845774 OPTIMISTIC INSERT/UPDATE USES WRONG
HEURISTICS FOR COMPRESSED PAGE SIZE

The fix of Bug#12845774 was supposed to skip known-to-fail
btr_cur_optimistic_insert() calls. There was only one such call, in
btr_cur_pessimistic_update(). All other callers of
btr_cur_pessimistic_insert() would release and reacquire the B-tree
page latch before attempting the pessimistic insert. This would allow
other threads to restructure the B-tree, allowing (and requiring) the
insert to succeed as an optimistic (single-page) operation.

Failure to attempt an optimistic insert before a pessimistic one would
trigger an attempt to split an empty page.

rb:1234 approved by Sunny Bains
2012-08-21 10:47:17 +03:00
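A C sketch of the control flow the fix restores; optimistic_insert, pessimistic_insert, and insert_record are illustrative names, not the btr_cur functions:

#include <stdio.h>
#include <stdbool.h>

/* Try the cheap single-page (optimistic) insert first and fall back to the
   pessimistic, page-splitting path only if it fails.  Skipping the optimistic
   attempt, as the regression did, could end up trying to split an empty page. */
static bool optimistic_insert(bool page_has_room)
{
    return page_has_room;          /* succeeds when the page can absorb the record */
}

static void pessimistic_insert(void)
{
    /* ... split the page / extend the tree, then insert ... */
}

static void insert_record(bool page_has_room)
{
    if (optimistic_insert(page_has_room)) {
        puts("optimistic insert succeeded");
        return;
    }
    /* In the real code the B-tree page latch is released and reacquired here,
       so other threads may have restructured the tree; hence a second
       optimistic attempt before resorting to a split. */
    if (optimistic_insert(page_has_room)) {
        puts("optimistic retry succeeded");
        return;
    }
    pessimistic_insert();
    puts("pessimistic insert performed");
}

int main(void)
{
    insert_record(false);
    return 0;
}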