The CSC default ctor was private because CSC must not be used outside the thread cache.
However, there are some places in the plugin code that need a standalone syscat that
is cleaned up when leaving the scope. The decision is to make the restriction mentioned
organizational rather than syntactical.
ColumnStore used to include the server's mysql.h
but link all tools with libmariadb.so.
There is no guarantee that this would work, even with the workarounds
it had in dbcon/mysql/sm.cpp.
Fix:
* tools (linked with libmariadb.so) *must* include libmariadb's mysql.h
* as a hack, prevent service_thd_timezone.h from being loaded into tools,
as it conflicts with libmariadb's mysql.h
* the server plugin *must* include the server's mysql.h
* also, don't link every tool with libmariadb.so; link the helper library
(liblibmysqlclient.so) that actually needs it. Tools use this
helper library, not libmariadb.so directly
In aggregation code, the patch disables the padding that forced the hasher to
compute over the whole 2k buffer. This patch also moves the hashing code into
the common place where it belongs.
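A minimal sketch of why the padding mattered, using a toy FNV-1a hash as a
stand-in for the engine's hasher (all names and sizes below are illustrative,
not ColumnStore code): hashing the whole fixed-size buffer costs ~2 KiB of work
per row regardless of how few bytes are actually in use.

#include <cstddef>
#include <cstdint>
#include <cstring>

// Toy FNV-1a hash standing in for the aggregation hasher (illustration only).
static uint64_t fnv1a(const uint8_t* data, std::size_t len)
{
    uint64_t h = 1469598103934665603ULL;
    for (std::size_t i = 0; i < len; ++i)
    {
        h ^= data[i];
        h *= 1099511628211ULL;
    }
    return h;
}

int main()
{
    uint8_t row[2048] = {};            // fixed-size row buffer, zero-padded
    const char key[] = "group-key";
    std::memcpy(row, key, sizeof(key));

    // Before: the hash ran over the whole padded buffer, ~2 KiB per row.
    uint64_t slow = fnv1a(row, sizeof(row));
    // After: only the bytes actually written are hashed.
    uint64_t fast = fnv1a(row, sizeof(key));

    return slow == 0 || fast == 0;     // both non-zero; only the cost differs
}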
The EM scalability project has two parts: phase 1 and phase 2.
This is phase 1; it introduces an EM index to speed up EM lookups (from O(n)
down to the speed of boost::unordered_map) that search for a
<dbroot, oid, partition> tuple to turn it into an LBID,
e.g. most bulk-insertion meta-info operations.
The basis is boost::shared_managed_object, in which EMIndex is
stored. While it is not debug-friendly, it allows putting
nested structs into shmem. EMIndex has 3 tiers, described top-down:
a vector of dbroots, a map of oids to partition vectors, and partition
vectors that hold the EM indices.
The separate EM methods now query the index before they do an EM run.
EMIndex has a separate shmem file with the fixed id
MCS-shm-00060001.
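Conceptually, the three tiers have the shape sketched below. Plain std
containers are used here for readability, and all type and member names are
assumptions for illustration; the real EMIndex lives in shared memory and
therefore uses shmem-aware allocators instead of the defaults.

#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

using OID = uint32_t;
using PartitionNum = uint32_t;

// Tier 3 element: a partition plus the positions of its extents in the EM.
struct PartitionIndexElt
{
    PartitionNum partition;
    std::vector<std::size_t> emIndices;
};

using PartitionVector = std::vector<PartitionIndexElt>;    // tier 3
using OIDIndex = std::unordered_map<OID, PartitionVector>; // tier 2: oid -> partitions
using EMIndex = std::vector<OIDIndex>;                     // tier 1: one entry per dbroot

int main()
{
    EMIndex idx(2);                          // two dbroots
    idx[0][3000].push_back({1, {42, 43}});   // <dbroot 0, oid 3000, partition 1>
    return idx[0][3000].size() == 1 ? 0 : 1;
}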
Part 1:
As part of MCOL-3776 to address synchronization issue while accessing
the fTimeZone member of the Func class, mutex locks were added to the
accessor and mutator methods. However, this slows down processing
of TIMESTAMP columns in PrimProc significantly as all threads across
all concurrently running queries would serialize on the mutex. This
is because PrimProc only has a single global object for the functor
class (class derived from Func in utils/funcexp/functor.h) for a given
function name. To fix this problem:
(1) We remove the fTimeZone as a member of the Func derived classes
(hence removing the mutexes) and instead use the fOperationType
member of the FunctionColumn class to propagate the timezone values
down to the individual functor processing functions such as
FunctionColumn::getStrVal(), FunctionColumn::getIntVal(), etc.
(2) To achieve (1), a timezone member is added to the
execplan::CalpontSystemCatalog::ColType class.
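A minimal sketch of the idea with stand-in types (the real classes are
execplan::CalpontSystemCatalog::ColType and the Func hierarchy in
utils/funcexp/functor.h; the shapes below are simplified assumptions): the
timezone rides along in the operation-type object, so the single global
functor stays stateless and needs no mutex.

#include <cstdint>
#include <string>

// Stand-in for execplan::CalpontSystemCatalog::ColType with the new member.
struct ColType
{
    long timeZone = 0;   // timezone carried per-expression (a long; see Part 2)
};

// Stand-in functor: no fTimeZone member, hence no mutex. One global instance
// can safely serve all threads across all concurrently running queries.
struct FuncStandIn
{
    std::string getStrVal(int64_t ts, const ColType& fOperationType) const
    {
        (void)ts;  // a real functor converts ts using the offset
        return "offset=" + std::to_string(fOperationType.timeZone);
    }
};

int main()
{
    ColType ct;
    ct.timeZone = 19800;   // e.g. +05:30 expressed in seconds
    FuncStandIn f;
    return f.getStrVal(0, ct).empty();
}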
Part 2:
Several functors in the Funcexp code call dataconvert::gmtSecToMySQLTime()
and dataconvert::mySQLTimeToGmtSec() functions for conversion between seconds
since unix epoch and broken-down representation. These functions in turn call
the C library function localtime_r() which currently has a known bug of holding
a global lock via a call to __tz_convert. This significantly reduces performance
in multi-threaded applications where multiple threads concurrently call
localtime_r(). More details on the bug:
https://sourceware.org/bugzilla/show_bug.cgi?id=16145
This bug in localtime_r() caused processing of the Functors in PrimProc to
slow down significantly, since a query execution causes the Functor code to be
processed in a multi-threaded manner.
As a fix, we remove the calls to localtime_r() from gmtSecToMySQLTime()
and mySQLTimeToGmtSec() by performing the timezone-to-offset conversion
(done in dataconvert::timeZoneToOffset()) during the execution plan
creation in the plugin. Note that localtime_r() is only called when the
time_zone system variable is set to "SYSTEM".
This fix also required changing the timezone type from a std::string to
a long across the system.
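A rough sketch of the conversion done once at plan creation, a simplified take
on what dataconvert::timeZoneToOffset() is described to do (the real function
also handles the "SYSTEM" case and full validation):

#include <cstdio>

// Parse "+05:30" / "-08:00" into seconds east of UTC. Doing this once while
// building the execution plan avoids per-row localtime_r() calls entirely.
static bool timeZoneToOffsetSketch(const char* tz, long* out)
{
    char sign = 0;
    int h = 0, m = 0;
    if (std::sscanf(tz, "%c%2d:%2d", &sign, &h, &m) != 3 ||
        (sign != '+' && sign != '-') || h > 14 || m > 59)
        return false;
    long off = h * 3600L + m * 60L;
    *out = (sign == '-') ? -off : off;
    return true;
}

int main()
{
    long off = 0;
    if (timeZoneToOffsetSketch("+05:30", &off))
        std::printf("%ld\n", off);   // prints 19800
    return 0;
}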
* MCOL-4846 dev-6 Handle large join results
Use a loop to shrink the number of results reported per message to something manageable.
* MCOL-4841 small changes requested by review
* Add EXTRA threads to prioritythreadpool
prioritythreadpool is configured at startup with a fixed number of threads available. This is to prevent thread thrashing. Since most of the time BPP job steps are short-lived, and a rescheduling mechanism exists if no threads are available, this works to keep CPU wastage to a minimum.
However, if a query or queries consume all the threads in prioritythreadpool and then block (due to the consumer not consuming fast enough), we can run out of threads and no work will be done until some threads unblock. A new mechanism allows EXTRA threads to be generated for the duration of the blocking action. These threads can act on new queries. When all blocking is completed, these threads are released when idle (see the sketch after this list).
* MCOL-4841 dev6 Reconcile with changes in develop-6
* MCOL-4841 Some format corrections
* MCOL-4841 dev clean up some things based on review
* MCOL-4841 dev 6 ExeMgr Crashes after large join
This commit fixes up memory accounting issues in ExeMgr
* MCOL-4841 remove LDI change
Opened MCOL-4968 to address the issue
* MCOL-4841 Add fMaxBPPSendQueue to ResourceManager
This causes the setting to be loaded at run time (a restart is required to pick up a change); BPPSendThread gets it in its ctor.
Also rolled back changes to TupleHashJoinStep::smallRunnerFcn() that used a local variable to count locally allocated memory, then added it into the global counter at the function's end. Not counting the memory globally caused conversion to a UM-only join far later than it should have. This resulted in MCOL-4971.
* MCOL-4841 make blockedThreads and extraThreads atomic
Also restore previous scope of locks in bppsendthread. There is some small chance the new scope could be incorrect, and the performance boost is negligible. Better safe than sorry.
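A compact sketch of the EXTRA-thread mechanism (the class and member names are
hypothetical, and the real pool's job queue and rescheduling are omitted): when
every fixed worker is blocked, a temporary detached thread is spawned to act on
new queries, and it exits once it goes idle.

#include <atomic>
#include <thread>

class PriorityThreadPoolSketch
{
  public:
    explicit PriorityThreadPoolSketch(unsigned fixedSize) : fixedSize_(fixedSize) {}

    // A worker calls this before blocking on a slow consumer.
    void markBlocked()
    {
        if (++blockedThreads_ >= fixedSize_)
        {
            ++extraThreads_;
            std::thread([this] {
                runJobsUntilIdle();   // serve new queries while workers are stuck
                --extraThreads_;      // released when idle
            }).detach();
        }
    }

    void markUnblocked() { --blockedThreads_; }

  private:
    void runJobsUntilIdle() { /* pull jobs from the queue until none remain */ }

    const unsigned fixedSize_;
    std::atomic<unsigned> blockedThreads_{0};
    std::atomic<unsigned> extraThreads_{0};
};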
DML statements executed on the primary node in a ColumnStore
cluster do not need to be written to the primary's binlog. This
is due to ColumnStore's distributed storage architecture.
With this patch, we disable writing to binlog when a DML statement
(INSERT/DELETE/UPDATE/LDI/INSERT..SELECT) is performed on a ColumnStore
table. HANDLER::external_lock() calls are used to
1. Turn OFF the OPTION_BIN_LOG flag
2. Turn ON the OPTION_BIN_TMP_LOG_OFF flag
in THD::variables.option_bits during a WRITE lock call.
THD::variables.option_bits is restored back to the original state
during the UNLOCK call in HANDLER::external_lock().
Further, an isDMLStatement() function is added to reduce code verbosity
when checking whether a given statement is a DML statement.
Note that with this patch, not writing to the primary's binlog means
DML replication from a ColumnStore cluster to another ColumnStore
cluster or to another foreign engine will not work.
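A self-contained sketch of the flag handling (OPTION_BIN_LOG and
OPTION_BIN_TMP_LOG_OFF are the names of MariaDB's real option bits, but the
THD, the handler, and the bit values below are minimal stand-ins, not the
server's definitions):

#include <cstdint>

constexpr uint64_t OPTION_BIN_LOG = 1ULL << 0;          // stand-in bit values
constexpr uint64_t OPTION_BIN_TMP_LOG_OFF = 1ULL << 1;
enum { F_WRLCK = 1, F_UNLCK = 2 };

struct THD { uint64_t option_bits = OPTION_BIN_LOG; };  // stand-in for the server THD

struct HandlerSketch
{
    uint64_t savedBits = 0;

    bool isDMLStatement() const { return true; }        // stand-in

    int external_lock(THD* thd, int lock_type)
    {
        if (lock_type == F_WRLCK && isDMLStatement())
        {
            savedBits = thd->option_bits;               // remember original state
            thd->option_bits &= ~OPTION_BIN_LOG;        // 1. binlog OFF
            thd->option_bits |= OPTION_BIN_TMP_LOG_OFF; // 2. tmp-log-off ON
        }
        else if (lock_type == F_UNLCK)
        {
            thd->option_bits = savedBits;               // restore on UNLOCK
        }
        return 0;
    }
};

int main()
{
    THD thd;
    HandlerSketch h;
    h.external_lock(&thd, F_WRLCK);   // DML write lock: binlog suppressed
    h.external_lock(&thd, F_UNLCK);   // original option_bits restored
    return (thd.option_bits & OPTION_BIN_LOG) ? 0 : 1;
}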
An IN subquery involving a non-ColumnStore table does not work.
As part of MCOL-4617, we moved the in-to-exists predicate creation
and injection from the server into the engine. However, when a query
with an IN subquery contains a non-ColumnStore table, the server
still performs the in-to-exists predicate transformation for the
foreign engine table. This caused ColumnStore's execution plan to
contain incorrect WHERE predicates. As a fix, we call
mutate_optimizer_flags() for the WRITE lock, in addition to the READ
table lock. And in mutate_optimizer_flags(), we change the optimizer
flag from OPTIMIZER_SWITCH_IN_TO_EXISTS to OPTIMIZER_SWITCH_MATERIALIZATION.
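Roughly, the flag mutation amounts to the following (the OPTIMIZER_SWITCH_*
names are MariaDB's real optimizer_switch bits; the bit values and the
one-field THD below are stand-ins for illustration):

#include <cstdint>

constexpr uint64_t OPTIMIZER_SWITCH_IN_TO_EXISTS = 1ULL << 0;     // stand-in values
constexpr uint64_t OPTIMIZER_SWITCH_MATERIALIZATION = 1ULL << 1;

struct THD { uint64_t optimizer_switch = OPTIMIZER_SWITCH_IN_TO_EXISTS; };

// Prefer materialization so the server does not rewrite IN -> EXISTS for the
// foreign-engine tables that appear in the same query as ColumnStore tables.
void mutate_optimizer_flags(THD* thd)
{
    thd->optimizer_switch &= ~OPTIMIZER_SWITCH_IN_TO_EXISTS;
    thd->optimizer_switch |= OPTIMIZER_SWITCH_MATERIALIZATION;
}

int main()
{
    THD thd;
    mutate_optimizer_flags(&thd);   // called for both READ and WRITE table locks
    return (thd.optimizer_switch & OPTIMIZER_SWITCH_MATERIALIZATION) ? 0 : 1;
}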
* MCOL-4769 Do not replay INSERTs and LDIs on the replica nodes when
the write cache is enabled.
* MCOL-4769 If a table is created with the write cache disabled
(i.e. when columnstore_cache_inserts=OFF), make it accessible when
the cache feature is enabled (columnstore_cache_inserts=ON).
After an AggregateColumn corresponding to SUM(1+1) is created,
it is pushed to the list:
gwi.count_asterisk_list.push_back(ac)
Later, in getSelectPlan(), the expression SUM(1+1) was erroneously
treated as a constant:
if (!hasNonSupportItem && !nonConstFunc(ifp) && !(parseInfo & AF_BIT) && tmpVec.size() == 0)
{
    srcp.reset(buildReturnedColumn(item, gwi, gwi.fatalParseError));
This code freed the original AggregateColumn and replaced it with a ConstantColumn,
but gwi.count_asterisk_list still pointed to the freed AggregateColumn.
The expression SUM(1+1) was treated as a constant because tmpVec
was empty due to a bug in this code:
// special handling for count(*). This should not be treated as constant.
if (isp->argument_count() == 1 &&
    (sfitempp[0]->type() == Item::CONST_ITEM &&
     (sfitempp[0]->cmp_type() == INT_RESULT ||
      sfitempp[0]->cmp_type() == STRING_RESULT ||
      sfitempp[0]->cmp_type() == REAL_RESULT ||
      sfitempp[0]->cmp_type() == DECIMAL_RESULT)))
{
    field_vec.push_back((Item_field*)item);  // dummy
Notice that it handles only aggregate functions with explicit literals
passed as arguments, and does not handle constant expressions
such as 1+1.
Fix:
- Adding new classes ConstantColumnNull, ConstantColumnString,
ConstantColumnNum, ConstantColumnUInt, ConstantColumnSInt,
ConstantColumnReal, ValStrStdString, to make the code easier to reuse.
- Moving a part of the code from the case branch handling CONST_ITEM
in buildReturnedColumn() into a new function
newConstantColumnNotNullUsingValNativeNoTz(). This
makes the code easier to read and to reuse in the future.
- Adding a new function newConstantColumnMaybeNullFromValStrNoTz().
Removing duplicate code from four places, using the new
function instead.
- Adding a function isSupportedAggregateWithOneConstArg() to
properly catch all constant expressions, and using it in parse_item(),
in the code commented as "special handling for count(*)".
Now it pushes all constant expressions to field_vec, not only
explicit literals (see the sketch after this list).
- Moving a part of the code from buildAggregateColumn()
to a helper function processAggregateColumnConstArg().
Using processAggregateColumnConstArg() in the CONST_ITEM
and NULL_ITEM branches.
- Adding a new branch in buildReturnedColumn() handling FUNC_ITEM.
If a function has constant arguments, a ConstantColumn() is
immediately created, without going to
buildArithmeticColumn()/buildFunctionColumn().
- Reusing isSupportedAggregateWithOneConstArg()
and processAggregateColumnConstArg() in buildAggregateColumn().
A new branch catches the case where an aggregate function has only one
constant argument and immediately creates a single ConstantColumn without
traversing into the argument's sub-components.
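A toy sketch of the detection idea (the Item classes below are tiny stand-ins,
not the server's Item hierarchy, and the real check also inspects type() and
cmp_type() as quoted earlier): a recursive const-ness test accepts constant
expressions such as 1+1, not just bare literals.

#include <vector>

struct Item
{
    virtual ~Item() = default;
    virtual bool const_item() const = 0;   // true for anything constant
};

struct ItemLiteral : Item
{
    bool const_item() const override { return true; }
};

struct ItemFunc : Item                     // e.g. the "+" in 1+1
{
    std::vector<Item*> args;
    bool const_item() const override
    {
        for (const Item* a : args)
            if (!a->const_item())
                return false;              // any non-const argument poisons it
        return true;
    }
};

// The old code accepted only explicit literals; a recursive check also
// catches constant expressions, so SUM(1+1) lands in field_vec too.
bool isSupportedAggregateWithOneConstArg(const std::vector<Item*>& args)
{
    return args.size() == 1 && args[0]->const_item();
}

int main()
{
    ItemLiteral one;
    ItemFunc plus;                 // models 1+1
    plus.args = {&one, &one};
    return isSupportedAggregateWithOneConstArg({&plus}) ? 0 : 1;
}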
* MaxConcurrentTransactions was not cleared when the config is wrong (#2093)
* Wrong concatenation in BETWEEN predicate (#2092)
* We forgot to initialize the longdoublenull value (#2091)
* WriteBatchFieldMariaDB m_type was wrong (#2090)
* moda returned local object pointer (#2089)
* Wrong power of 2 in estimator (#2088)
* targetDbroot should be assigned, not compared (#2087)
* obviously wrong bytesTx assignment (#2086)
* GetInterrupted returned bool instead of bool * (#2085)
1. Add a new system variable, columnstore_use_cpimport_for_cache_inserts,
that when set to ON, uses cpimport for the cache flush into ColumnStore.
This variable is set to OFF by default, in which case we perform batch inserts
for the cache flush.
2. Disable DMLProc logging of the SQL statement text for the cache
flush operation in case of batch inserts. Under certain heavy loads
involving INSERT statements, this logging becomes a bottleneck for
the cache flush, causing subsequent inserts into the cache table to hang.
* Adds CompressInterfaceLZ4 which uses LZ4 API for compress/uncompress.
* Adds CMake machinery to search LZ4 on running host.
* All methods which use static data and do not modify any internal data become `static`,
so we can use them without creating a specific object. This is possible because
the header specification has not been modified: we still use 2 sections in the header, the first
one with file metadata, the second one with pointers to the compressed chunks.
* Methods `compress`, `uncompress`, `maxCompressedSize`, `getUncompressedSize` become
pure virtual, so we can override them for other compression algos (see the sketch after this list).
* Adds method `getChunkMagicNumber`, so we can verify the chunk magic number
for each compression algo.
* Renames `s/IDBCompressInterface/CompressInterface/g` according to the requirement.
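A condensed sketch of the resulting interface shape (method names follow the
bullets above, but the signatures, the error convention, and the magic-number
value are assumptions; `getUncompressedSize` is omitted for brevity). The LZ4
calls are the real liblz4 API; build with -llz4.

#include <cstddef>
#include <cstdint>
#include <lz4.h>

class CompressInterface
{
  public:
    virtual ~CompressInterface() = default;
    virtual int compress(const char* in, std::size_t inLen,
                         char* out, std::size_t* outLen) const = 0;
    virtual int uncompress(const char* in, std::size_t inLen,
                           char* out, std::size_t* outLen) const = 0;
    virtual std::size_t maxCompressedSize(std::size_t inLen) const = 0;
    virtual uint32_t getChunkMagicNumber() const = 0;
};

class CompressInterfaceLZ4 : public CompressInterface
{
  public:
    int compress(const char* in, std::size_t inLen,
                 char* out, std::size_t* outLen) const override
    {
        int n = LZ4_compress_default(in, out, (int)inLen, (int)*outLen);
        if (n <= 0) return -1;
        *outLen = (std::size_t)n;
        return 0;
    }

    int uncompress(const char* in, std::size_t inLen,
                   char* out, std::size_t* outLen) const override
    {
        int n = LZ4_decompress_safe(in, out, (int)inLen, (int)*outLen);
        if (n < 0) return -1;
        *outLen = (std::size_t)n;
        return 0;
    }

    std::size_t maxCompressedSize(std::size_t inLen) const override
    {
        return (std::size_t)LZ4_compressBound((int)inLen);
    }

    uint32_t getChunkMagicNumber() const override
    {
        return 0xC0DE0124u;   // hypothetical value, one per algo
    }
};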
For queries of the form:
SELECT col1 FROM t1 WHERE col2 = (SELECT 2);
We fix the execution plan, which earlier had an empty filters
expression. For this query, we now build a SimpleFilter with a
SimpleColumn and a ConstantColumn as the LHS and RHS operands
respectively.
For queries of the form:
SELECT ... WHERE col1 NOT IN (SELECT <const_item>);
The execution plan earlier built a SimpleFilter with an "=" as
the predicate operator of the filter. We fix this by assigning
the correct "<>" operator instead.