For TIMESTAMP it should behave similarly; however, it did not work. For some reason MariaDB has the function's type set as DATETIME, which for ColumnStore is not the same thing. Added a kludge to ha_mcs_execplan.cpp to handle it.
for the generated replacement statements of the original statements:
* `CREATE TABLE .. LIKE ..`
* `ALTER TABLE .. ENGINE=Columnstore`
* `CREATE TABLE .. AS ..`
MCOL-2000 Process charset definitions in the ALTER TABLE .. ADD COLUMN
MCOL-2000 Yet more fixes for column charsets
* respect column charsets (including the table/db/server defaults)
for TEXT(n) fields
* round the TEXT(n) column length up to the next default length of the
TEXT-like subtypes, 255 (TINYTEXT), 65535 (TEXT) and so on up to 2100000000
(LONGTEXT); see the sketch below
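
A minimal sketch of the rounding rule, assuming the value being rounded is the
byte length (n multiplied by the charset's mbmaxlen); the helper name is
hypothetical, the 16777215 MEDIUMTEXT step is the standard MariaDB value, and
the LONGTEXT cap is taken from the bullet above:

#include <cstdint>

// Hypothetical helper: round a requested TEXT(n) byte length up to the next
// default length of the TEXT-like subtypes.
uint32_t roundTextLengthUp(uint32_t requestedBytes)
{
    if (requestedBytes <= 255)      return 255;        // TINYTEXT
    if (requestedBytes <= 65535)    return 65535;      // TEXT
    if (requestedBytes <= 16777215) return 16777215;   // MEDIUMTEXT
    return 2100000000;                                 // LONGTEXT cap used here
}

Under that assumption, TEXT(300) CHARACTER SET utf8 requests 300*3 = 900 bytes,
which rounds up to the 65535-byte TEXT default.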
1. Add wide decimal support to AggregateColumn::evaluate
and TreeNode::getDecimalVal().
2. Use the PM aggregate attributes to determine the UM aggregate
attributes in TupleAggregateStep::prep2PhasesAggregate.
MCOL-4409 This patch combines VDecimal and Decimal and makes
IDB_Decimal an alias for the result class
MCOL-4409 More boilerplate reduction in Func_mod
Removed a couple of TSInt128::toType() methods
- The code in ha_mcs_partition.cpp erroneously printed data
to a temporary ostringstream "oss" instead of "output".
- The left-side adjustfield (applied when printing the range values)
unintentionally disappeared during the MCOL-4174 refactoring.
Restore the left adjustfield in TypeHandler::PrintPartitionValue*().
For now it consists only of:
using int128_t = __int128;
using uint128_t = unsigned __int128;
All new primitive data types should go into this file in the future.
1. Perform type promotion to wide decimal if the result
of an arithmetic operation has a precision > 18.
2. Only set the decimal width of an arithmetic operation to wide
if both the LHS and RHS of the operation are decimal types.
This commit also adds support in TupleHashJoinStep::forwardCPData,
although we currently do not support wide decimals as join keys.
Row estimation to determine the large side of the join is also updated.
Introduced fDecimalOverflowCheck to enable/disable the overflow check; a sketch
of the promotion rule follows below.
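
A minimal sketch of the promotion rule for an addition, assuming standard SQL
precision/scale arithmetic; the struct and function names are illustrative,
not the ColumnStore implementation:

#include <algorithm>
#include <cstdint>

struct DecimalAttrs
{
    bool     isDecimal = false;
    uint32_t precision = 0;
    uint32_t scale     = 0;
    uint32_t width     = 8;   // bytes: 8 = narrow (int64), 16 = wide (int128)
};

DecimalAttrs additionResultAttrs(const DecimalAttrs& lhs, const DecimalAttrs& rhs)
{
    DecimalAttrs result;

    // Rule 2: only consider the wide width when both operands are decimal.
    if (!(lhs.isDecimal && rhs.isDecimal))
        return result;

    result.isDecimal = true;
    result.scale     = std::max(lhs.scale, rhs.scale);
    // Precision of an addition: max integer digits + max scale + 1 carry digit.
    result.precision = std::max(lhs.precision - lhs.scale, rhs.precision - rhs.scale)
                       + result.scale + 1;

    // Rule 1: a result precision above 18 digits no longer fits in int64.
    result.width = (result.precision > 18) ? 16 : 8;
    return result;
}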
Add support to FunctionColumn.
Low-level scanning crashes on medium-sized data sets.
2. Set Decimal precision in SimpleColumn::evaluate().
3. Add support for int128_t in ConstantColumn.
4. Set IDB_Decimal::s128Value in buildDecimalColumn().
5. Use width 16 as the first if-predicate when branching on decimal width (see the sketch below).
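
A minimal sketch of item 5, checking the 16-byte width first and falling back
to the narrow path otherwise; the member names other than s128Value are
illustrative stand-ins, not the actual IDB_Decimal layout:

#include <cstdint>

using int128_t = __int128;

struct IDBDecimalLike            // illustrative stand-in for IDB_Decimal
{
    uint32_t colWidth;           // 16 for DECIMAL(19..38), 8 otherwise (assumed member name)
    int128_t s128Value;          // wide value, as set in buildDecimalColumn()
    int64_t  value;              // narrow value
};

int128_t rawValue(const IDBDecimalLike& d)
{
    if (d.colWidth == 16)                      // wide width checked first
        return d.s128Value;

    return static_cast<int128_t>(d.value);     // legacy narrow (int64) path
}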
The TupleAggregateStep class method and buildAggregateColumn() now properly set the result data type.
doSum() now handles DECIMAL(38) in an appropriate manner.
Low-level NULL-related methods for the new binary-based data types now handle the magic values for
binary-based DTs.
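
A minimal sketch of magic-value NULL handling for a 16-byte binary type; the
reserved bit pattern chosen here (0x8000...0000) is an assumption for
illustration, not the actual ColumnStore constant:

using int128_t  = __int128;
using uint128_t = unsigned __int128;

// Assumed reserved bit pattern that marks NULL for a 16-byte binary column.
static const uint128_t kBinary128NullBits = uint128_t(1) << 127;

inline bool isNull128(int128_t v)
{
    // Compare raw bit patterns so ordinary values never collide with the marker.
    return static_cast<uint128_t>(v) == kBinary128NullBits;
}

inline void setNull128(int128_t& v)
{
    v = static_cast<int128_t>(kBinary128NullBits);
}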
For CHAR/VARCHAR/TEXT fields, the buffer size of a field represents
the field size in bytes, which can be larger than the field size in
number of characters for multi-byte character sets such as utf8,
utf8mb4, etc. The buffer also contains a length prefix; the byte length
it encodes can be up to 65532 for a VARCHAR field, and much higher for a
TEXT field (we process a maximum TEXT byte length that fits in
4 bytes, which is 2^32 - 1, i.e. about 4GB).
There is also special processing for a TEXT field defined with a default
length like so:
CREATE TABLE cs1 (a TEXT CHARACTER SET utf8)
Here, the byte length is a fixed 65535, irrespective of the character
set used. This is different from a case such as:
CREATE TABLE cs1 (a TEXT(65535) CHARACTER SET utf8), where the byte length
for the field will be 65535*3.
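
A minimal sketch of the byte-length arithmetic above; the helper is
hypothetical, and mbmaxlen is the maximum bytes per character of the column
charset (3 for utf8, 4 for utf8mb4):

#include <cstdint>
#include <cstdio>

// Hypothetical helper: data bytes for the value plus the length-prefix bytes.
uint64_t fieldBufferBytes(uint64_t charLength, uint32_t mbmaxlen, uint32_t prefixBytes)
{
    return charLength * mbmaxlen + prefixBytes;
}

int main()
{
    // VARCHAR(21844) CHARACTER SET utf8: 21844 * 3 = 65532 data bytes plus a 2-byte prefix.
    std::printf("%llu\n", (unsigned long long) fieldBufferBytes(21844, 3, 2));

    // TEXT(65535) CHARACTER SET utf8: 65535 * 3 data bytes; a TEXT length prefix
    // can take up to 4 bytes (maximum byte length 2^32 - 1).
    std::printf("%llu\n", (unsigned long long) fieldBufferBytes(65535, 3, 4));
    return 0;
}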
flag to ha_mcs_impl_start_bulk_insert.
An earlier commit to fix LDI under replication changed the call in
ha_mcs_cache::start_bulk_insert for a non-insert command from
parent::start_bulk_insert_from_cache to parent::start_bulk_insert.
This commit reverts that change for the INSERT...SELECT operation.