Problem was that ha_partition::records() was not implemented, thus
using the default handler::records(), which is not correct if the engine
does not support HA_STATS_RECORDS_IS_EXACT.
Solution was to implement ha_partition::records() as a wrapper around
the underlying partitions' records.
The rows column in EXPLAIN PARTITIONS will now include the total
number of records in the partitioned table.
(recommit after removing out-commented code)
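A minimal sketch of such a wrapper, assuming the usual ha_partition
members m_file/m_tot_parts and with error handling omitted (the real
signature differs):

    ha_rows ha_partition::records()
    {
      ha_rows total= 0;
      for (uint i= 0; i < m_tot_parts; i++)
        total+= m_file[i]->records();   /* exact count from each partition */
      return total;
    }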
returns erroneous results
Used the wrong function when fixing Bug#30480, which led to
no stop on end_key, resulting in duplicate results from index scans.
Includes test cases for the duplicates Bug#37327 and Bug#37329,
"Duplicate rows and bad performance/High Handler_read_next values".
Recommit after merge issues
called from a SELECT doesn't cause ROLLBACK of statement"
Make private all class handler methods (PSEA API) that may modify
data. Introduce and deploy public ha_* wrappers for these methods
throughout sql/.
This is necessary to keep track of all data modifications in sql/,
which is in turn necessary to be able to optimize two-phase
commit of those transactions that do not modify data.
The methods are private to ensure that the entire server uses their
public ha_* counterparts instead, since only then can we ensure the
proper tracing of these calls that is necessary for Bug#12713.
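A hedged sketch of the wrapper pattern (mark_trx_read_write() stands
in for whatever bookkeeping records that the statement modified data;
the real wrappers do more):

    int handler::ha_write_row(uchar *buf)
    {
      int error;
      if ((error= write_row(buf)))    /* the now-private engine method */
        return error;
      mark_trx_read_write();          /* note that data was modified */
      return 0;
    }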
A pre-requisite for Bug#12713 "Error in a stored function called from
a SELECT doesn't cause ROLLBACK of statement", part 1. Review fixes.
Do not send OK/EOF packets to the client until we have reached the end
of the current statement.
This is a consolidation, to keep the functionality that is shared by
all SQL statements in one place in the server (a sketch of that place
follows the lists below).
Currently this functionality includes:
- close_thread_tables()
- log_slow_statement().
After this patch and the subsequent patch for Bug#12713, it shall also include:
- ha_autocommit_or_rollback()
- net_end_statement()
- query_cache_end_of_result().
In future it may also include:
- mysql_reset_thd_for_next_command().
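A sketch of that single place (mysql_end_statement is an invented name
here; the real consolidation point is wherever statement dispatch
ends):

    void mysql_end_statement(THD *thd)
    {
      close_thread_tables(thd);        /* shared by all statements today */
      log_slow_statement(thd);
      /* after this patch and the one for Bug#12713: */
      ha_autocommit_or_rollback(thd, thd->is_error());
      net_end_statement(thd);          /* OK/EOF may only be sent here */
      query_cache_end_of_result(thd);
    }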
ha_partition::update_create_info() just called update_create_info()
of the first partition, so it only got the auto-increment maximum
of the first partition, and SHOW CREATE TABLE could show a too
small AUTO_INCREMENT value.
Fixed by implementing ha_partition::update_create_info() the way
other handlers do.
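A sketch of the fixed behaviour, assuming (as in other handlers) that
info(HA_STATUS_AUTO) refreshes stats.auto_increment_value and that
ha_partition::info() takes the maximum over all partitions:

    void ha_partition::update_create_info(HA_CREATE_INFO *create_info)
    {
      info(HA_STATUS_AUTO);   /* max auto_increment over all partitions */
      if (!(create_info->used_fields & HA_CREATE_USED_AUTO))
        create_info->auto_increment_value= stats.auto_increment_value;
    }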
HA_ARCHIVE: stats.auto_increment handling made consistent with other
engines (this also fixes Bug#29320, Bug#29493 and Bug#30536).
Problem: Partitioning did not handle unordered scans correctly
for engines with unordered read order.
Solution: do not stop scanning if a record is out of range, since
there can be more records within the range afterwards.
Note: this is the patch that fixes the bug, but since no such storage
engines are shipped with MySQL 5.1 (Falcon comes in 6.0), there are
no test cases (they are in a separate patch that only goes into 6.0).
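Conceptually the scan loop changes from "stop at the first out-of-range
row" to "skip it and keep reading"; a sketch, not the literal patch
(file stands for the partition's handler):

    while (!(error= file->index_next(table->record[0])))
    {
      if (compare_key(end_range) > 0)
        continue;              /* out of range, but later rows may match */
      return 0;                /* row is inside the range */
    }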
The bug was that, for ordered index scans, ha_partition::index_init() did
not put the index columns into table->read_set if the underlying storage
engine did not have the HA_PARTIAL_COLUMN_READ flag.
This caused an assertion failure when handle_ordered_index_scan() tried
to sort the records according to index order.
Fixed by making ha_partition::index_init() put the index columns into
table->read_set for all ordered scans.
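A sketch of the fix (KEY/key_parts follow the server structures; the
per-partition initialization itself is elided):

    int ha_partition::index_init(uint inx, bool sorted)
    {
      if (sorted)
      {
        /* handle_ordered_index_scan() compares rows on the key columns,
           so they must be present in the read_set */
        KEY *key= table->key_info + inx;
        for (uint i= 0; i < key->key_parts; i++)
          bitmap_set_bit(table->read_set,
                         key->key_part[i].field->field_index);
      }
      /* ... initialize the index scan in every partition as before ... */
      return 0;
    }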
Problem was for LINEAR HASH/KEY: crashes because of a wrong partition id
returned when creating the new altered partitions (because of a wrong
linear hash mask).
Solution: update the linear hash mask before using it for the new
altered table.
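For reference, linear hash lookup masks the hash value with the next
power of two above the partition count and folds overflowing ids back;
the crash came from this mask not matching the new partition count.
A sketch with invented names:

    static uint32 linear_hash_part_id(uint32 hash_value,
                                      uint32 mask,      /* 2^k - 1 */
                                      uint32 num_parts)
    {
      uint32 part_id= hash_value & mask;
      if (part_id >= num_parts)  /* past the last real partition: */
        part_id&= mask >> 1;     /* fold back with the smaller mask */
      return part_id;
    }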
The problem: ha_partition::read_range_first() could return a record that
is outside of the scanned range. If that record happened to be in the
next subsequent range, it would satisfy the WHERE clause and appear in
the output twice (we would get it the second time when scanning the next
subsequent range).
Fix:
Made ha_partition::read_range_first() check whether the returned record
is within the scanned range, like other read_range_first()
implementations do.
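The check is conceptually a one-liner after fetching the first row of
the range, given that compare_key() returns > 0 for a row past
end_range (a sketch):

    if (!error && compare_key(end_range) > 0)
      error= HA_ERR_END_OF_FILE;   /* past the range: don't return it */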
The partition handler failed to update tables partitioned on a
timestamp field, as it calculated the timestamp field AFTER it
calculated the partition number of a record.
Fixed by adding a timestamp_field->set_time() call before the partition
is calculated and disabling the subsequent automatic call.
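A sketch of the idea inside the update path (member names follow the
5.1 TABLE structure, as an assumption):

    /* stamp the timestamp column BEFORE computing the row's partition,
       and make sure it is not stamped a second time further down */
    if (table->timestamp_field_type & TIMESTAMP_AUTO_SET_ON_UPDATE)
      table->timestamp_field->set_time();
    table->timestamp_field_type= TIMESTAMP_NO_AUTO_SET;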
Problem: the table's INDEX and DATA DIRECTORY were taken
directly from the table's first partition.
This allowed a rename attack similar to
Bug#32111 when doing ALTER TABLE ... REMOVE PARTITIONING.
Solution: silently ignore the INDEX/DATA DIRECTORY
for the table (like some other storage engines do).
Partitioned tables do not support DATA/INDEX
DIRECTORY on the table level, only on its partitions.
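The ignore itself is tiny; a sketch against the HA_CREATE_INFO fields:

    /* partitioned tables have no table-level DATA/INDEX DIRECTORY,
       so never report one inherited from the first partition */
    create_info->data_file_name= NULL;
    create_info->index_file_name= NULL;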
table to partitioned
Problem:
Crash because of usage of an uninitialized mutex when auto-incrementing
a partitioned temporary table.
Fix:
Only lock (using the mutex) if the table is not temporary.
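A sketch of the guard (member names assumed): a temporary table is
private to one connection, so its auto-increment mutex is never
initialized and never needed.

    bool is_temp= (table_share->tmp_table != NO_TMP_TABLE);
    if (!is_temp)
      pthread_mutex_lock(&table_share->mutex);  /* uninitialized for temp */
    /* ... compute the next auto_increment over all partitions ... */
    if (!is_temp)
      pthread_mutex_unlock(&table_share->mutex);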
- Reserve namespace and place in frm for TABLE_CHECKSUM and PAGE_CHECKSUM create options
- Added syncing of directory when creating .frm files
- Portability fixes
- Added missing cast that could cause bugs
- Code cleanups
- Made some bit functions inline
- Moved things out of myisam.h to my_handler.h to make them more accessible
- Renamed some MyISAM variables and defines to make them more globally usable (as they are used outside of MyISAM)
- Fixed bugs in error conditions
- Use compile-time asserts instead of run-time checks
- Fixed indentation
HA_EXTRA_PREPARE_FOR_DELETE -> HA_EXTRA_PREPARE_FOR_DROP as the old name was wrong
(Added a define for old value to ensure we don't break any old code)
Added HA_EXTRA_PREPARE_FOR_RENAME as a signal for rename (before we used a DROP signal which is wrong)
- Initialize error messages early to get better errors when mysqld or an engine fails to start
- Fixed a Windows bug where query_performance_frequency was not initialized if the registry code failed
- thread_stack -> my_thread_stack_size
Two cases in ha_partition::extra() were missing
(HA_EXTRA_DELETE_CANNOT_BATCH and HA_EXTRA_UPDATE_CANNOT_BATCH),
which are currently only used by NDB (which does not use ha_partition).
"Rows not deleted from innodb partitioned tables if --innodb_autoinc_lock_mode=0"
Due to a previous bugfix that initializes a previously uninitialized
variable, ha_partition::get_auto_increment() may fail to operate
correctly when the storage engine reports that it is only reserving
one value and one or more partitions have a different 'next-value'.
Currently this only affects InnoDB's new-style auto-increment code,
which reserves larger blocks of values and has less inter-thread
contention.
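A sketch of the corrected reservation, simplified against
handler::get_auto_increment() (set_if_bigger is the server macro):
even when only one value is reserved, it must start above the highest
next-value of all partitions.

    ulonglong max_first= 0;
    for (uint i= 0; i < m_tot_parts; i++)
    {
      ulonglong part_first, part_reserved;
      m_file[i]->get_auto_increment(offset, increment, nb_desired_values,
                                    &part_first, &part_reserved);
      set_if_bigger(max_first, part_first);  /* largest next-value wins */
    }
    *first_value= max_first;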
In ha_partition::position() we don't calculate the number
of the partition of the record but use the m_last_part value instead,
relying on it being previously set by some other call like ::write_row().
Delete_rows_log_event::do_exec_row() calls find_and_fetch_row(),
where we used a position() + rnd_pos() call for the InnoDB-based
partitioned table, as it has HA_PRIMARY_KEY_REQUIRED_FOR_POSITION
enabled.
Fixed by introducing a new handler::rnd_pos_by_record() method to be
used for random record-based positioning.
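The generic fallback is essentially the following (the ha_partition
override additionally computes the record's partition first):

    int handler::rnd_pos_by_record(uchar *record)
    {
      position(record);             /* fill handler::ref from the record */
      return rnd_pos(record, ref);  /* then fetch by that position */
    }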
In ha_partition::position() we didn't calculate the number
of the partition of the record. We used the m_last_part value instead,
relying on it being set somewhere else, e.g. by a previous call of a
method like ::write_row(). In replication we don't call any of these
before position(). Delete_rows_log_event::do_exec_row() calls
find_and_fetch_row(). In the case of an InnoDB-based partitioned table,
HA_PRIMARY_KEY_REQUIRED_FOR_POSITION is enabled, so position() /
rnd_pos() calls are used to fetch the record.
Fixed by adding the partition id calculation to ha_partition::position().
Faster thr_alarm()
Added 'Opened_files' status variable to track calls to my_open()
Don't give warnings when running mysql_install_db
Added option --source-install to mysql_install_db
I had to do the following renames, as the polymorphism used didn't work
with the Forte compiler on 64-bit systems:
index_read() -> index_read_map()
index_read_idx() -> index_read_idx_map()
index_read_last() -> index_read_last_map()
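The *_map variants take an explicit key_part_map bitmap of used key
parts instead of a key length; the first one, for example, is declared
roughly as:

    int handler::index_read_map(uchar *buf, const uchar *key,
                                key_part_map keypart_map,
                                enum ha_rkey_function find_flag);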