ha_partition: Remove redundant 'virtual' keywords and add
missing 'override'.
FIXME: handler::table_type() is not declared virtual, yet ha_partition
and ha_sequence are seemingly trying to override it.
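As a standalone illustration (simplified stand-in classes, not the real
server headers) of why the missing 'virtual' matters:

  struct handler_base {
    const char *table_type() const { return "NONE"; }   // not virtual
    virtual ~handler_base() = default;
  };

  struct derived_handler : handler_base {
    // Marking this 'override' makes the compiler reject it, which is
    // exactly the FIXME: there is nothing virtual to override.
    //   const char *table_type() const override;   // compile error
    // Without 'override' the function merely hides the base one, so a
    // call through handler_base* never reaches it.
    const char *table_type() const { return "derived"; }
  };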
- Flag ALTER_STORED_COLUMN_TYPE was set while doing varchar extension
for a partitioned table. If all partitions support
can_be_converted_by_engine(), the flag should instead be
ALTER_COLUMN_TYPE_CHANGE_BY_ENGINE.
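A minimal sketch of the intended rule, with illustrative flag values
and a precomputed all-partitions check standing in for the real loop:

  #include <cstdint>

  static const uint64_t ALTER_STORED_COLUMN_TYPE           = 1ULL << 0;
  static const uint64_t ALTER_COLUMN_TYPE_CHANGE_BY_ENGINE = 1ULL << 1;

  // If every partition's engine can convert the column in place (e.g.
  // a VARCHAR length extension), report the change as one the engine
  // handles instead of a generic stored-column type change.
  uint64_t adjust_alter_flags(uint64_t flags, bool all_parts_can_convert)
  {
    if ((flags & ALTER_STORED_COLUMN_TYPE) && all_parts_can_convert)
    {
      flags &= ~ALTER_STORED_COLUMN_TYPE;
      flags |= ALTER_COLUMN_TYPE_CHANGE_BY_ENGINE;
    }
    return flags;
  }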
For a partitioned table with an AUTO_INCREMENT column we have to check
that the max value is properly loaded, so we need to open all tables in
an INSERT ... PARTITION statement if necessary. We also need to check
whether some partitions are pruned away and, in that case, not count
their max auto-increment values.
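The gist, as a self-contained sketch with invented types:

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct partition_state {
    bool     pruned;        // excluded from the statement by pruning
    uint64_t max_auto_inc;  // known only after the partition is opened
  };

  // Pruned partitions must not contribute: their tables need not be
  // opened, and counting them could produce a wrong next value.
  uint64_t table_max_auto_increment(const std::vector<partition_state> &parts)
  {
    uint64_t max_value = 0;
    for (const partition_state &p : parts)
      if (!p.pruned)
        max_value = std::max(max_value, p.max_auto_inc);
    return max_value;
  }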
MDEV-19486 and one more similar bug appeared because the
handler::write_row() interface allowed the storage engine to modify the
row buffer, while callers are not prepared for that, so further bugs
were possible.
handler::write_row():
handler::ha_write_row(): make argument const
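The effect of the signature change, shown on a simplified stand-in
interface:

  typedef unsigned char uchar;

  struct handler_iface {
    // The engine now sees the row image read-only, so an attempt to
    // modify the caller's buffer fails at compile time.
    virtual int write_row(const uchar *buf) = 0;   // was: (uchar *buf)
    virtual ~handler_iface() = default;
  };

  struct some_engine : handler_iface {
    int write_row(const uchar *buf) override {
      // buf[0] = 0;   // no longer compiles: buf points to const data
      (void) buf;
      return 0;
    }
  };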
Make the live checksum be returned in handler::info(), and the slow
table-scan checksum be calculated in handler::checksum().
Part of MDEV-16249: CHECKSUM TABLE for a Spider table is not parallel
and by default saves all data in memory on the Spider head node.
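Conceptually (an invented class, not the real handler API):

  #include <cstdint>

  class engine {
    uint64_t live_checksum_ = 0;   // maintained on every row change

  public:
    void on_row_change(uint64_t row_hash) { live_checksum_ ^= row_hash; }

    // Cheap path, reported via handler::info(): no table access at all.
    uint64_t info_checksum() const { return live_checksum_; }

    // Slow path, handler::checksum(): a full table scan. Splitting the
    // two lets an engine like Spider run the scan per remote server
    // instead of buffering the whole table on the head node.
    uint64_t scan_checksum() const {
      uint64_t sum = 0;
      // for (each row) sum ^= hash(row);
      return sum;
    }
  };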
Fix partitioning for trx_id-versioned tables.
`partition by hash`, `range` and others now work.
`partition by system_time` is forbidden.
Currently we cannot use row_start and row_end in `partition by`,
because insertion of the versioned fields, as well as setting up the
row_start/row_end values (a transaction id), is done by the engine's
handler -- so that is forbidden as well.
The drawback is that it's now impossible to use `partition by key()`
without parameters for such tables, because it references row_start and
row_end implicitly.
* add handler::vers_can_native()
* drop Table_scope_and_contents_source_st::vers_native()
* drop partition_element::find_engine_flag as unused
* forbid versioning partitioning for trx_id as not supported
* adapt vers tests for trx_id partitioning
* forbid any row_end referencing in `partition by` clauses,
including implicit `by key()` (see the sketch below)
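The last restriction can be pictured as a walk over the fields used by
the partitioning expression (a self-contained sketch with invented
names):

  #include <cstring>

  struct field_ref { const char *name; };

  // Reject any partitioning expression, including the implicit field
  // list of KEY(), that references the generated row_start/row_end
  // columns: their values are assigned by the engine (a transaction id
  // for trx_id-versioned tables), too late for partition routing.
  bool partition_expr_allowed(const field_ref *fields, int n_fields,
                              const char *row_start, const char *row_end)
  {
    for (int i = 0; i < n_fields; i++)
      if (!strcmp(fields[i].name, row_start) ||
          !strcmp(fields[i].name, row_end))
        return false;
    return true;
  }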
This patch contains a full implementation of the optimization that
allows using in-memory rowid / primary key filters built for range
conditions over indexes. In many cases the use of such filters reduces
the number of disk seeks spent fetching table rows.
In this implementation the choice of which filter to apply (if any)
is made purely on cost-based considerations.
This implementation re-architected the partial implementation of
the feature pushed by Galina Shalygina in the commit
8d5a11122c.
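The core idea, reduced to a self-contained sketch:

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  // Built from a cheap index-only range scan; probed during the main
  // scan before each row fetch, so rows that cannot match are never
  // read from disk.
  class rowid_filter {
    std::vector<uint64_t> rowids_;

  public:
    void add(uint64_t rowid) { rowids_.push_back(rowid); }
    void finalize() { std::sort(rowids_.begin(), rowids_.end()); }

    bool contains(uint64_t rowid) const {
      return std::binary_search(rowids_.begin(), rowids_.end(), rowid);
    }
  };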
Besides that, this patch contains a better implementation of the
generic handler function handler::multi_range_read_info_const() that
takes into account the gaps between ranges when calculating the cost
of range index scans. It also contains some corrections to the
implementation of the handler function records_in_range() for MyISAM.
This patch supports the feature for InnoDB and MyISAM.
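A simplified picture of the cost refinement (an invented model, not
the server's actual cost functions):

  // Adjacent ranges are read sequentially; a gap between two ranges
  // costs an extra disk seek. Ignoring the gaps understated the cost
  // of scattered range scans.
  struct range_est { double rows; bool follows_gap; };

  double range_scan_cost(const range_est *ranges, int n,
                         double seek_cost, double row_cost)
  {
    double cost = 0;
    for (int i = 0; i < n; i++)
    {
      if (i == 0 || ranges[i].follows_gap)
        cost += seek_cost;
      cost += ranges[i].rows * row_cost;
    }
    return cost;
  }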
When buffered sort is used in `UPDATE`, keyread is used. In this case
`TABLE::update_virtual_field` should be aborted, but it actually isn't,
because it is called not with the top-level handler but with the one
that actually accesses the disk. Here the problem shows up with
partitioning, so the solution is to recursively mark all the underlying
partition handlers for keyread.
* ha_partition: update keyread state for child partitions
Closes #800
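In outline, the fix above amounts to this (a standalone sketch with
invented names):

  #include <vector>

  struct child_handler {
    bool keyread = false;
    void set_keyread(bool on) { keyread = on; }
  };

  // The partition handler forwards every keyread state change to each
  // child, so the handler that actually touches the disk agrees with
  // the top-level one and TABLE::update_virtual_field can be skipped.
  struct partition_handler {
    std::vector<child_handler> children;

    void set_keyread(bool on) {
      for (child_handler &c : children)
        c.set_keyread(on);
    }
  };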
We don't have many bits left, no need to add another InnoDB-specific flag.
Instead, we say that HA_REQUIRE_PRIMARY_KEY does not apply to SEQUENCE.
Meaning, if the engine declares HA_CAN_TABLES_WITHOUT_ROLLBACK (required
for SEQUENCE) it *must* support tables without a primary key.
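As a sketch of the resulting check (illustrative bit values):

  #include <cstdint>

  static const uint64_t HA_REQUIRE_PRIMARY_KEY         = 1ULL << 0;
  static const uint64_t HA_CAN_TABLES_WITHOUT_ROLLBACK = 1ULL << 1;

  // The primary key requirement is waived for engines SEQUENCE can use.
  bool must_have_primary_key(uint64_t table_flags)
  {
    return (table_flags & HA_REQUIRE_PRIMARY_KEY) &&
           !(table_flags & HA_CAN_TABLES_WITHOUT_ROLLBACK);
  }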
main.derived_cond_pushdown: Move all 10.3 tests to the end,
trim trailing white space, and add an "End of 10.3 tests" marker.
Add --sorted_result to tests where the ordering is not deterministic.
main.win_percentile: Add --sorted_result to tests where the
ordering is no longer deterministic.
Fixed by adding the table flag HA_WANTS_PRIMARY_KEY, which is like
HA_REQUIRE_PRIMARY_KEY but tells the SQL upper layer that the storage
engine can internally handle tables without primary keys (for example
for sequences or through user variables).
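The distinction between the two flags, again with illustrative bits:

  #include <cstdint>

  static const uint64_t HA_REQUIRE_PRIMARY_KEY = 1ULL << 0;
  static const uint64_t HA_WANTS_PRIMARY_KEY   = 1ULL << 1;

  // WANTS is advisory: the upper layer still prefers a primary key,
  // but a table without one is accepted because the engine (e.g.
  // SEQUENCE) handles that case internally.
  bool missing_pk_is_error(uint64_t table_flags)
  {
    return (table_flags & HA_REQUIRE_PRIMARY_KEY) != 0;
  }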
The problem occurs in 10.2 and earlier releases of MariaDB Server
because the Partition Engine was not pushing the engine conditions down
to the underlying storage engine of each partition. With the data
provided by the customer, this caused Spider to return the first 5 rows
of the table; 2 of the 5 rows did not satisfy the WHERE clause, so they
were removed from the result set by the server.
To fix the problem, I have back-ported support for engine condition pushdown
in the Partition Engine from MariaDB Server 10.3.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Cherry-Picked:
Commit eb2ca3d on branch bb-10.2-MDEV-16912
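The mechanism, reduced to a standalone sketch (invented types; the
real entry point is handler::cond_push()):

  #include <vector>

  struct Item;                         // opaque condition tree

  struct child_handler {
    const Item *pushed_cond = nullptr;
    // Returning nullptr means "nothing left over": the child engine
    // (for Spider, the remote server) evaluates the condition itself.
    const Item *cond_push(const Item *cond) {
      pushed_cond = cond;
      return nullptr;
    }
  };

  struct partition_handler {
    std::vector<child_handler> children;

    // The missing piece in 10.2: forward the condition to every
    // partition instead of silently dropping it.
    const Item *cond_push(const Item *cond) {
      for (child_handler &c : children)
        c.cond_push(cond);
      return nullptr;
    }
  };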
The problem occurred because the Spider node was incorrectly handling
timestamp values sent to and received from the data nodes.
The problem has been corrected as follows:
- Added logic to set and maintain the UTC time zone on the data nodes.
To prevent timestamp ambiguity, it is necessary for the data nodes to use
a time zone such as UTC which does not have daylight savings time.
- Removed the spider_sync_time_zone configuration variable, which did not
solve the problem and which interfered with the solution.
- Added logic to convert to the UTC time zone all timestamp values sent to
and received from the data nodes. This is done for both unique and
non-unique timestamp columns. It is done for WHERE clauses, applying to
SELECT, UPDATE and DELETE statements, and for UPDATE columns.
- Disabled Spider's use of direct update when any of the columns to update is
a timestamp column. This is necessary to prevent false duplicate key value
errors.
- Added a new test spider.timestamp to thoroughly test Spider's handling of
timestamp values.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Cherry-Picked:
Commit 97cc9d3 on branch bb-10.3-MDEV-16246
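The conversion itself is ordinary UTC round-tripping; a sketch using
POSIX time functions (timegm() is a common but non-standard extension):

  #include <ctime>

  // With both ends pinned to UTC there is no DST fold, so each time_t
  // maps to exactly one 'YYYY-MM-DD hh:mm:ss' literal and back.
  void format_utc(time_t ts, char *buf, size_t len)
  {
    struct tm tm_utc;
    gmtime_r(&ts, &tm_utc);
    strftime(buf, len, "%Y-%m-%d %H:%M:%S", &tm_utc);
  }

  time_t parse_utc(struct tm tm_utc)
  {
    return timegm(&tm_utc);            // inverse of gmtime_r()
  }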
The main reason was to make it easier to print the above structures in
a debugger. An additional benefit is that I was able to use the same
defines for both structures, which simplifies some code.
Most of the code changes just remove the Alter_info:: and
Alter_inplace_info:: prefixes from alter table flags.
The following renames were done:
HA_ALTER_FLAGS -> alter_table_operations
CHANGE_CREATE_OPTION -> ALTER_CHANGE_CREATE_OPTION
Alter_info::ADD_INDEX -> ALTER_ADD_INDEX
DROP_INDEX -> ALTER_DROP_INDEX
ADD_UNIQUE_INDEX -> ALTER_ADD_UNIQUE_INDEX
DROP_UNIQUE_INDEX -> ALTER_DROP_UNIQUE_INDEX
ADD_PK_INDEX -> ALTER_ADD_PK_INDEX
DROP_PK_INDEX -> ALTER_DROP_PK_INDEX
Alter_info::ALTER_ADD_COLUMN -> ALTER_PARSE_ADD_COLUMN
Alter_info::ALTER_DROP_COLUMN -> ALTER_PARSE_DROP_COLUMN
Alter_inplace_info::ADD_INDEX -> ALTER_ADD_NON_UNIQUE_NON_PRIM_INDEX
Alter_inplace_info::DROP_INDEX -> ALTER_DROP_NON_UNIQUE_NON_PRIM_INDEX
Other things:
- Added typedef alter_table_operations for alter table flags
- DROP CHECK CONSTRAINT can now be done online
- Added checks for Aria tables in alter_table_online.test
- alter_table_flags now takes an ulonglong as argument.
- Don't support online operations if checksum option is used.
- sql_lex.cc doesn't add ALTER_ADD_INDEX if index is not created
Handle string length as size_t, consistently (almost always :)).
Change function prototypes to accept size_t where in the past
ulong or uint were used; change local/member variables to size_t
when appropriate.
This fix excludes rocksdb, spider, sphinx and connect for now.
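Typical shape of the change (an illustrative prototype, not a real
server function):

  #include <cstddef>

  // Lengths flow through as size_t end to end instead of being
  // narrowed to uint/ulong at some call boundaries:
  size_t copy_string(char *dst, size_t dst_size,
                     const char *src, size_t src_len);  // was: uint src_len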
Now we don't open partitions if this was explicitly specified.
The ha_partition::m_opened_partition bitmap was added to track the
partitions that were actually opened.
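In outline (invented names mirroring m_opened_partition):

  #include <cstddef>
  #include <vector>

  struct partition_set {
    std::vector<bool> opened;            // one flag per partition

    explicit partition_set(size_t n) : opened(n, false) {}

    void open_if_needed(size_t part_id) {
      if (!opened[part_id]) {
        // ... open the underlying table of this partition here ...
        opened[part_id] = true;          // consulted again on close
      }
    }
  };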
Includes Spider patches
- 062_mariadb-10.2.0.direct_join_1and3.diff
- 063_mariadb-10.2.0.direct_join_for_single_partition.diff
- Test cases from Kentoku
Allows Spider to push full joins to the Spider engine through the
create_group_by interface.
Other things:
- Increased MYSQL_VERSION_ID to check for 10211 (latest 10.2 version)
- Fix for const_table when calling create_group_by().
Original author: Kentoku SHIBA
Add support for direct update and direct delete requests for spider.
A direct update/delete request handles all qualified rows in a single
operation rather than one row at a time.
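The difference between the two paths, as a stand-in interface
(invented names; the real entry points are the handler's direct
update/delete methods):

  #include <cstdint>

  struct engine {
    // Row-by-row path: the server fetches each qualifying row,
    // modifies it, and hands it back, one call per row.
    virtual int update_row(const uint8_t *old_row, uint8_t *new_row) = 0;

    // Direct path: the engine applies the whole UPDATE itself (for
    // Spider, on the remote data node) and just reports the count.
    virtual int direct_update_rows(uint64_t *updated_rows) = 0;

    virtual ~engine() = default;
  };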
Contains Spiral patches:
006_mariadb-10.2.0.direct_update_rows.diff MDEV-7704
008_mariadb-10.2.0.partition_direct_update.diff MDEV-7706
010_mariadb-10.2.0.direct_update_rows2.diff MDEV-7708
011_mariadb-10.2.0.aggregate.diff MDEV-7709
027_mariadb-10.2.0.force_bulk_update.diff MDEV-7724
061_mariadb-10.2.0.mariadb-10.1.8.diff MDEV-12870
- The differences compared to the original patches:
- Most of the parameters of the new functions are unnecessary. The
unnecessary parameters have been removed.
- Changed bit positions for new handler flags upon consideration of
handler flags not needed by other Spiral patches and handler flags
merged from MySQL.
- Added info_push() (Was originally part of bulk access patch)
- Didn't include code related to handler socket
- Added HA_CAN_DIRECT_UPDATE_AND_DELETE
Original author: Kentoku SHIBA
First reviewer: Jacob Mathew
Second reviewer: Michael Widenius
- In Spider, calling cmp_ref() can be very expensive. In ha_partition.cc
we therefore no longer sort rows according to position for the Spider
engine.
- Removed the Spider-specific call info(HA_EXTRA_STARTING_ORDERED_INDEX_SCAN)
from handle_ordered_index_scan(). It caused performance issues and
does not change results for queries with ORDER BY.
- The visible effect of this patch is that for some storage engines,
rows may be returned in a different order if there is no ORDER BY clause.
- Based on Spiral Patch 052:
052_mariadb-10.2.0.add_partition_skip_pk_sort_for_non_clustered_index
MDEV-7748
- The major difference from the original patch is that there is no
variable to switch back to the old behaviour.
Other things:
- Optimized ha_partition::cmp_ref() and cmp_part_ids() to make them
simpler and faster.
- Changed arguments of cmp_key_part_id() to be the same as those of
cmp_key_rowid_part_id() to simplify the code.
Original author: Kentoku SHIBA
First reviewer: Jacob Mathew
Second reviewer: Michael Widenius
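The simplified comparison, as a sketch:

  #include <cstdint>
  #include <cstring>

  // Order rows by key image first and partition id second; no call to
  // the engine's cmp_ref() (expensive for Spider) is needed unless a
  // full rowid order is really required.
  int cmp_key_part_id(const uint8_t *key_a, const uint8_t *key_b,
                      size_t key_len, uint32_t part_a, uint32_t part_b)
  {
    if (int c = memcmp(key_a, key_b, key_len))
      return c;
    return part_a < part_b ? -1 : (part_a > part_b ? 1 : 0);
  }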