DML statements executed on the primary node in a ColumnStore
cluster do not need to be written to the primary's binlog. This
is due to ColumnStore's distributed storage architecture.
With this patch, we disable writing to the binlog when a DML statement
(INSERT/DELETE/UPDATE/LDI/INSERT..SELECT) is performed on a ColumnStore
table. HANDLER::external_lock() calls are used to
1. Turn OFF the OPTION_BIN_LOG flag
2. Turn ON the OPTION_BIN_TMP_LOG_OFF flag
in THD::variables.option_bits during a WRITE lock call.
THD::variables.option_bits is restored to its original state
during the UNLOCK call in HANDLER::external_lock().
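In outline, the lock hook behaves as in the sketch below; the member that
saves the original bits (m_orig_option_bits) is illustrative, and the real
code in ha_mcs differs in detail.

    // Sketch only; the actual implementation in ha_mcs::external_lock() differs.
    int ha_mcs::external_lock(THD* thd, int lock_type)
    {
      if (lock_type == F_WRLCK)
      {
        // Save the session's option bits so they can be restored on unlock.
        m_orig_option_bits = thd->variables.option_bits;  // illustrative member
        thd->variables.option_bits &= ~OPTION_BIN_LOG;    // stop binlog writes
        thd->variables.option_bits |= OPTION_BIN_TMP_LOG_OFF;
      }
      else if (lock_type == F_UNLCK)
      {
        // Restore the original state once the statement releases its lock.
        thd->variables.option_bits = m_orig_option_bits;
      }
      return 0;
    }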
Further, an isDMLStatement() helper is added to reduce code verbosity
when checking whether a given statement is a DML statement.
Note that with this patch, not writing to the primary's binlog means
DML replication from a ColumnStore cluster to another ColumnStore
cluster or to another foreign engine will not work.
A query with an IN subquery on a non-ColumnStore table does not work.
As part of MCOL-4617, we moved the in-to-exists predicate creation
and injection from the server into the engine. However, when a query
with an IN subquery contains a non-ColumnStore table, the server
still performs the in-to-exists predicate transformation for the
foreign engine table. This caused ColumnStore's execution plan to
contain incorrect WHERE predicates. As a fix, we call
mutate_optimizer_flags() for the WRITE lock, in addition to the READ
table lock. In mutate_optimizer_flags(), we change the optimizer
flag from OPTIMIZER_SWITCH_IN_TO_EXISTS to OPTIMIZER_SWITCH_MATERIALIZATION.
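A minimal sketch of the flag change, assuming mutate_optimizer_flags()
receives the THD (the actual function in the engine may carry additional
logic):

    void mutate_optimizer_flags(THD* thd)
    {
      // Swap in-to-exists for materialization for this session, so the server
      // does not inject in-to-exists predicates for the foreign-engine table.
      if (thd->variables.optimizer_switch & OPTIMIZER_SWITCH_IN_TO_EXISTS)
      {
        thd->variables.optimizer_switch &= ~OPTIMIZER_SWITCH_IN_TO_EXISTS;
        thd->variables.optimizer_switch |= OPTIMIZER_SWITCH_MATERIALIZATION;
      }
    }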
* MCOL-4769 Do not replay INSERTs and LDIs on the replica nodes when
the write cache is enabled.
* MCOL-4769 If a table is created with the write cache disabled
(i.e. when columnstore_cache_inserts=OFF), make it accessible when
the cache feature is enabled (columnstore_cache_inserts=ON).
After an AggregateColumn corresponding to SUM(1+1) is created,
it is pushed to the list:
    gwi.count_asterisk_list.push_back(ac)
Later, in getSelectPlan(), the expression SUM(1+1) was erroneously
treated as a constant:
    if (!hasNonSupportItem && !nonConstFunc(ifp) && !(parseInfo & AF_BIT) && tmpVec.size() == 0)
    {
      srcp.reset(buildReturnedColumn(item, gwi, gwi.fatalParseError));
This code freed the original AggregateColumn and replaced it with a
ConstantColumn, but gwi.count_asterisk_list still pointed to the freed
AggregateColumn.
The expression SUM(1+1) was treated as a constant because tmpVec
was empty due to a bug in this code:
    // special handling for count(*). This should not be treated as constant.
    if (isp->argument_count() == 1 &&
        ( sfitempp[0]->type() == Item::CONST_ITEM &&
          (sfitempp[0]->cmp_type() == INT_RESULT ||
           sfitempp[0]->cmp_type() == STRING_RESULT ||
           sfitempp[0]->cmp_type() == REAL_RESULT ||
           sfitempp[0]->cmp_type() == DECIMAL_RESULT)
        )
       )
    {
      field_vec.push_back((Item_field*)item); //dummy
Notice that it handles only aggregate functions with an explicit literal
passed as the argument; it does not handle constant expressions
such as 1+1.
Fix:
- Adding new classes ConstantColumnNull, ConstantColumnString,
ConstantColumnNum, ConstantColumnUInt, ConstantColumnSInt,
ConstantColumnReal, and ValStrStdString, to make code reuse easier.
- Moving a part of the code from the case branch handling CONST_ITEM
in buildReturnedColumn() into a new function
newConstantColumnNotNullUsingValNativeNoTz(). This
makes the code easier to read and to reuse in the future.
- Adding a new function newConstantColumnMaybeNullFromValStrNoTz().
Removing duplicate code from four places, using the new
function instead.
- Adding a function isSupportedAggregateWithOneConstArg() to
properly catch all constant expressions (see the sketch after this list).
Using the new function in parse_item(), in the code commented as
"special handling for count(*)".
Now it pushes all constant expressions to field_vec, not only
explicit literals.
- Moving a part of the code from buildAggregateColumn()
to a helper function processAggregateColumnConstArg().
Using processAggregateColumnConstArg() in the CONST_ITEM
and NULL_ITEM branches.
- Adding a new branch in buildReturnedColumn() handling FUNC_ITEM.
If a function has constant arguments, a ConstantColumn() is
immediately created, without going to
buildArithmeticColumn()/buildFunctionColumn().
- Reusing isSupportedAggregateWithOneConstArg()
and processAggregateColumnConstArg() in buildAggregateColumn().
A new branch catches the case when an aggregate function has only one
constant argument and immediately creates a single ConstantColumn without
traversing into the argument's sub-components.
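As an illustration, the constant-argument check can be as small as the
sketch below; the actual helper in ha_mcs_execplan.cpp may differ, and
sfitempp mirrors the argument array from the snippet above.

    // Sketch: an aggregate qualifies when its single argument is constant,
    // which covers explicit literals (COUNT(1)) as well as constant
    // expressions such as SUM(1+1).
    static bool isSupportedAggregateWithOneConstArg(Item_sum* isp, Item** sfitempp)
    {
      if (isp->argument_count() != 1)
        return false;
      return sfitempp[0]->const_item();
    }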
1. Add a new system variable, columnstore_use_cpimport_for_cache_inserts,
which, when set to ON, uses cpimport for the cache flush into ColumnStore.
The variable is OFF by default, in which case the cache flush is performed
using batch inserts (see the sketch after this list).
2. Disable DMLProc logging of the SQL statement text for the cache
flush operation in case of batch inserts. Under certain heavy loads
involving INSERT statements, this logging becomes a bottleneck for
the cache flush, causing subsequent inserts into the cache table to hang.
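The new variable is declared through the regular plugin variable machinery;
a hedged sketch, assuming a session-scoped variable (the actual declaration
and option flags in ha_mcs_sysvars.cpp may differ):

    // Sketch: exposed to users as columnstore_use_cpimport_for_cache_inserts
    // (the plugin name supplies the prefix); OFF by default.
    static MYSQL_THDVAR_BOOL(
        use_cpimport_for_cache_inserts,
        PLUGIN_VAR_NOCMDARG,
        "Use cpimport instead of batch inserts when flushing the insert cache",
        NULL,  // check function
        NULL,  // update function
        0);    // default: OFF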
* Adds CompressInterfaceLZ4, which uses the LZ4 API for compression/decompression.
* Adds CMake machinery to search for LZ4 on the build host.
* All methods that use static data and do not modify any internal state become `static`,
so they can be used without creating a specific object. This is possible because
the header specification has not been modified: we still use two sections in the header,
the first with file metadata and the second with pointers to the compressed chunks.
* The methods `compress`, `uncompress`, `maxCompressedSize`, and `getUncompressedSize`
become pure virtual, so they can be overridden for other compression algorithms
(see the sketch after this list).
* Adds the method `getChunkMagicNumber`, so the chunk magic number can be verified
for each compression algorithm.
* Renames IDBCompressInterface to CompressInterface (s/IDBCompressInterface/CompressInterface/g)
as required.
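A condensed sketch of the resulting class shape; the method names come from
the list above, while the signatures and the LZ4 wrapper bodies are
illustrative only.

    #include <lz4.h>
    #include <cstddef>
    #include <cstdint>

    // Sketch of the abstract interface after the rename.
    class CompressInterface
    {
     public:
      virtual ~CompressInterface() = default;
      virtual int compress(const char* in, size_t inLen, char* out, size_t* outLen) const = 0;
      virtual int uncompress(const char* in, size_t inLen, char* out, size_t* outLen) const = 0;
      virtual size_t maxCompressedSize(size_t uncompSize) const = 0;
      virtual bool getUncompressedSize(char* in, size_t inLen, size_t* outLen) const = 0;
      virtual uint64_t getChunkMagicNumber() const = 0;
    };

    // Sketch of the LZ4-backed implementation.
    class CompressInterfaceLZ4 : public CompressInterface
    {
     public:
      int compress(const char* in, size_t inLen, char* out, size_t* outLen) const override
      {
        int n = LZ4_compress_default(in, out, (int)inLen, (int)*outLen);
        if (n <= 0)
          return -1;
        *outLen = (size_t)n;
        return 0;
      }
      int uncompress(const char* in, size_t inLen, char* out, size_t* outLen) const override
      {
        int n = LZ4_decompress_safe(in, out, (int)inLen, (int)*outLen);
        if (n < 0)
          return -1;
        *outLen = (size_t)n;
        return 0;
      }
      size_t maxCompressedSize(size_t uncompSize) const override
      {
        return (size_t)LZ4_compressBound((int)uncompSize);
      }
      bool getUncompressedSize(char*, size_t, size_t*) const override
      {
        return false;  // the real code reads the stored size from the chunk header
      }
      uint64_t getChunkMagicNumber() const override
      {
        return 0x4c5a3443;  // hypothetical magic value for LZ4 chunks
      }
    };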
For queries of the form:
    SELECT col1 FROM t1 WHERE col2 = (SELECT 2);
we fix the execution plan, which earlier had an empty filters
expression. For this query, we now build a SimpleFilter with a
SimpleColumn and a ConstantColumn as the LHS and the RHS operands
respectively.
For queries of the form:
    SELECT ... WHERE col1 NOT IN (SELECT <const_item>);
the execution plan earlier built a SimpleFilter with "=" as
the predicate operator of the filter. We fix this by assigning
the correct "<>" operator instead, as sketched below.
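Roughly, the filter is now built along these lines (a sketch; the exact
execplan constructors and setters may differ, and isNotIn, simpleCol, and
constVal are stand-ins):

    // Sketch: build "col <op> <const>" instead of leaving filters empty.
    // For "col2 = (SELECT 2)" the operator is "=", for NOT IN it must be "<>".
    execplan::SOP sop(new execplan::PredicateOperator(isNotIn ? "<>" : "="));
    execplan::SimpleFilter* sf = new execplan::SimpleFilter();
    sf->op(sop);
    sf->lhs(simpleCol->clone());  // the SimpleColumn operand
    sf->rhs(new execplan::ConstantColumn(constVal, execplan::ConstantColumn::NUM));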
A multi-table UPDATE involving a cross-engine join with a ColumnStore table errors out.
ColumnStore cannot directly update a foreign table. We detect whether
a multi-table UPDATE operation is performed on a foreign table; if so, we
do not create the select_handler and let the server execute the UPDATE
operation instead.
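A rough sketch of the detection (the columnstore_hton handle and the exact
table walk are illustrative):

    // Sketch: scan the statement's tables; if any updated table belongs to a
    // foreign engine, return nullptr so the server runs the UPDATE itself.
    for (TABLE_LIST* tl = thd->lex->query_tables; tl; tl = tl->next_global)
    {
      if (tl->updating && tl->table && tl->table->s->db_type() != columnstore_hton)
        return nullptr;  // no select_handler is created
    }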
Fix incorrect predicate evaluation when an aggregate function is applied to a
non-wide decimal column in the HAVING clause.
In buildAggregateColumn(), if an aggregate function (such as avg)
is applied on a non-wide decimal column, we were setting the precision
of the resulting column to -1. Later in the execution, this was converted
to 255, since in some cases precision is stored as a uint8_t.
Predicate operations on a DECIMAL column have logic that uses
the wide Decimal::s128Value field if precision > 18. This logic incorrectly
used Decimal::s128Value instead of the correct value stored in the
narrow Decimal::value field, since the precision of the Decimal column
was 255. The fix is to set the aggregate column precision to
datatypes::INT64MAXPRECISION (18) in buildAggregateColumn() when the
aggregate is applied on a non-wide decimal column.
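The essence of the fix is a small clamp in buildAggregateColumn(); a sketch,
with isWideDecimal and ct standing in for the real variables:

    // Keep the result precision within the narrow-decimal range so that
    // downstream predicate code uses Decimal::value, not the 128-bit field.
    if (!isWideDecimal)
      ct.precision = datatypes::INT64MAXPRECISION;  // 18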
This commit also partially fixes -Wstrict-aliasing GCC warnings.
This feature allows query execution to fall back to the server in case
execution using the select_handler (SH) fails. On fallback, a warning
message containing the original reason for the SH query failure is
generated.
To accomplish this, SH execution is moved to an earlier step, namely to SH
creation in create_columnstore_select_handler(), instead of the previous SH
execution call in ha_columnstore_select_handler::init_scan(). This requires
some prerequisite steps that normally occur in the server in JOIN::optimize()
and JOIN::exec() to be performed before starting SH execution.
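The fallback itself amounts to catching the SH failure at creation time and
downgrading it to a warning; a schematic sketch, where executeViaSelectHandler(),
error_message(), and the constructor arguments are illustrative placeholders:

    // Sketch of create_columnstore_select_handler() with fallback.
    ha_columnstore_select_handler* sh = new ha_columnstore_select_handler(thd, select_lex);

    if (!executeViaSelectHandler(sh))  // run the query through ColumnStore now
    {
      push_warning_printf(thd, Sql_condition::WARN_LEVEL_WARN, ER_INTERNAL_ERROR,
                          "Query could not be executed via the select_handler (%s); "
                          "falling back to server execution", sh->error_message());
      delete sh;
      return nullptr;  // the server executes the query itself
    }
    return sh;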
In addition, missing test cases from MCOL-424 are also added to the MTR suite,
and the corresponding fix using disable_indices_for_CEJ() is reverted,
since the original fix now appears to be redundant.
This patch:
1. Removes the option to declare uncompressed columns (set columnstore_compression_type = 0).
2. Ignores the COMMENT 'compression=0' option at table or column level
(no error messages, just disregard).
3. Removes the option to set more than 2 extents per file (ExtentsPerSegmentFile).
4. Updates the rebuildEM tool to support up to 10 dictionary extents per
dictionary segment file.
5. Adds check for `DBRootStorageType` for rebuildEM tool.
6. Renames rebuildEM to mcsRebuildEM.