Linking with unused libraries creates a difference in dependencies
between --no-as-needed (gcc default) and --as-needed (default on
Fedora rpmbuild) builds.
As part of the charset support, a call to MY_INIT() was added at the
initialization of the above processes. This call initializes the MySQL
thread environment required by the charset library. However, the
accompanying my_end() call required to terminate this thread environment
was not added at the termination of these processes, hence leaking
resources. As a fix, we move the MY_INIT() calls to the Child()
functions of these services and also add the missing my_end() call.
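A minimal sketch of the intended pairing, assuming a simplified Child() entry point (the real services pass their own arguments):

    // Illustrative sketch only: the Child() signature is hypothetical,
    // but the MY_INIT()/my_end() pairing mirrors the fix.
    #include <my_global.h>
    #include <my_sys.h>

    void Child(int argc, char** argv)
    {
      (void)argc;        // unused in this sketch
      MY_INIT(argv[0]);  // set up the MySQL thread environment needed by the charset library

      // ... service main loop ...

      my_end(0);         // tear the environment down again so no resources leak
    }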
This reverts commit f4e3022fbdecc25a22ae6eaf072e462aa6695f35.
The commit apparently caused MCOL-5318 and MCOL-5319, which involve the
internal ColumnStore batch insert mechanism passing through the SQL
layer. The code block involved in this change is a predicate checking
for the HWM extent in WriteEngineServer at the end of the batch insert.
This is done in WE_DMLCommandProc::processBatchInsertHwm(). The original
predicate check in this function for the HWM extent is restored until
further investigation.
In the implementation of MCOL-5021, an assert was added in
`WE_DMLCommandProc::processBatchInsertHwm()` that assumed the
`WriteEngine::TableMetaData` cache is uniform across the cluster.
However, this assumption is incorrect.
This bug caused undefined behaviour in ColumnStore resulting in bugs
such as MCOL-5367. In MCOL-5367, in a multi-node ColumnStore cluster,
an INSERT ... SELECT in a transaction with system variable
`columnstore_use_import_for_batchinsert=OFF/ON` did not show inserted
records when a SELECT query was issued. Assuming a 3-node cluster setup,
DMLProc only sends a given batch of records to be inserted to one of the
3 nodes, and not all nodes. As a result, the `WriteEngine::TableMetaData`
cache is only populated for that one node and is not uniform across the
cluster, causing the assert to fail.
As a fix, we simply remove this assert as it is redundant and should not
have been added in the first place.
DMLProc starts a ROLLBACK when the SELECT part of an UPDATE fails because the EM facility in PP was restarted.
Unfortunately, this ROLLBACK gets stuck if EM/PP are not yet available.
DMLProc must have a timeout with retries when performing the ROLLBACK.
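A minimal sketch of a bounded retry loop of the kind described above; the callable passed in is a hypothetical stand-in for the actual DMLProc rollback request:

    #include <chrono>
    #include <functional>
    #include <thread>

    // Bounded retry loop: keep attempting the ROLLBACK instead of blocking forever.
    bool rollbackWithRetry(const std::function<bool()>& sendRollback, int maxAttempts,
                           std::chrono::seconds delay)
    {
      for (int attempt = 0; attempt < maxAttempts; ++attempt)
      {
        if (sendRollback())  // false while EM/PP are still unavailable
          return true;
        std::this_thread::sleep_for(delay);
      }
      return false;  // give up and surface the failed rollback
    }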
by default), which, when enabled, indiscriminately invalidates all
column extents and performs the actual DELETE only on the AUX
column. The trade-off with this approach is that the first SELECT
for certain query patterns (those containing a WHERE predicate)
after the DELETE operation will slow down, as the invalidated
column extents need to be scanned again to set the min/max values.
the DELETE operation. ColumnOp::readBlock() calls can cause writes
to database files when the active chunk list in ChunkManager is full.
Since non-AUX columns are read-only for the DELETE operation, we prevent
writes of compressed chunks and headers for these columns by passing
an isReadOnly flag to CompFileData, which indicates whether the column
is read-only or read-write.
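A simplified sketch of the idea, assuming pared-down CompFileData and chunk-flush shapes (the real interfaces carry far more state):

    // Pared-down sketch: a read-only CompFileData suppresses chunk/header writes.
    class CompFileData
    {
     public:
      explicit CompFileData(bool isReadOnly) : fIsReadOnly(isReadOnly) {}
      bool isReadOnly() const { return fIsReadOnly; }

     private:
      bool fIsReadOnly;  // true for non-AUX columns during a DELETE
    };

    void flushChunk(const CompFileData& fileData)
    {
      if (fileData.isReadOnly())
        return;  // never write compressed chunks or headers for read-only columns

      // ... write the compressed chunk and update the header ...
    }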
Introduced a UDF and a stored procedure.
Usage:
set columnstore_s3_key='<s3_key>';
set columnstore_s3_secret='<s3_secret>';
set columnstore_s3_region='region';
and then use the UDF:
select columnstore_dataload("<tablename>", "<filename>", "<bucket>", "<db_name>");
For the UDF, db_name can be omitted, in which case the current connection's database will be used.
Or use the stored procedure:
call calpontsys.columnstore_load_from_s3("<tablename>", "<filename>", "<bucket>", "<db_name>");
The EM scalability project has two parts: phase 1 and phase 2.
This is phase 1, which introduces an EM index to speed up EM lookups
(from O(n) down to the speed of a boost::unordered_map) that search
for a <dbroot, oid, partition> tuple to turn it into an LBID,
e.g. most bulk insertion meta info operations.
The basis is boost::shared_managed_object, where EMIndex is
stored. Whilst it is not debug-friendly, it allows putting
nested structs into shmem. EMIndex has 3 tiers, top down: a
vector of dbroots, a map of oids to partition vectors, and partition
vectors that hold EM indices.
The relevant EM methods now query the index before they do an EM run.
EMIndex has a separate shmem file with the fixed id
MCS-shm-00060001.
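A rough illustration of the 3-tier layout using plain std containers; the real EMIndex lives in shmem, so it uses interprocess allocators instead:

    #include <cstddef>
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    // Tier 3: per-partition entries pointing into the ExtentMap.
    struct PartitionIndex
    {
      std::vector<size_t> emIndices;  // positions of the extents in the EM
    };

    // Tier 2: OID -> vector of partitions.
    using OidToPartitions = std::unordered_map<uint32_t, std::vector<PartitionIndex>>;

    // Tier 1: one entry per dbroot.
    using EMIndex = std::vector<OidToPartitions>;

    // Lookup sketch: <dbroot, oid, partition> -> EM positions to resolve into an LBID.
    const std::vector<size_t>* lookup(const EMIndex& idx, uint16_t dbroot, uint32_t oid,
                                      uint32_t partition)
    {
      if (dbroot >= idx.size())
        return nullptr;
      auto it = idx[dbroot].find(oid);
      if (it == idx[dbroot].end() || partition >= it->second.size())
        return nullptr;
      return &it->second[partition].emIndices;
    }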
The idea is relatively simple: encode prefixes of collated strings as
integers and use them to compute extents' ranges. Then we can do
extent elimination for string columns.
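A sketch of the prefix-encoding idea, assuming a simple binary collation (the real code must run the string through the collation's weight function first):

    #include <cstddef>
    #include <cstdint>
    #include <string>

    // Encode up to the first 8 bytes of a (collation-weighted) string as a
    // big-endian unsigned integer so that integer order matches string order.
    // Sketch only: here we assume a plain binary collation.
    uint64_t encodeStringPrefix(const std::string& s)
    {
      uint64_t result = 0;
      for (size_t i = 0; i < 8; ++i)
      {
        result <<= 8;
        if (i < s.size())
          result |= static_cast<unsigned char>(s[i]);
      }
      return result;
    }

    // Extent elimination for an equality predicate: skip the extent if the
    // encoded value lies outside the [min, max] prefix range of that extent.
    bool canSkipExtent(uint64_t extentMin, uint64_t extentMax, uint64_t predicate)
    {
      return predicate < extentMin || predicate > extentMax;
    }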
The actual patch has all the code in place but misses one important
step: we do not keep the collation index, we keep the charset index. Because of
this, some of the tests in the bugfix suite fail and thus the main
functionality is turned off.
The reason this patch is being put into a PR at all is that it contains
changes that make CHAR/VARCHAR columns unsigned. This change is needed for
the vectorization work.
* Fix clang warnings
* Remove vim tab guides
* Initialize variables
* Fix 'strncpy' warning: output truncated before terminating nul when copying as many bytes from a string as its length
* Fix "ISO C++17 does not allow 'register' storage class specifier" for outdated bison
* chars are unsigned on ARM, making if (ival < 0) always false
* chars are unsigned by default on ARM, so a comparison with -1 is always true
Part 1:
As part of MCOL-3776 to address a synchronization issue while accessing
the fTimeZone member of the Func class, mutex locks were added to the
accessor and mutator methods. However, this slows down processing
of TIMESTAMP columns in PrimProc significantly as all threads across
all concurrently running queries would serialize on the mutex. This
is because PrimProc only has a single global object for the functor
class (class derived from Func in utils/funcexp/functor.h) for a given
function name. To fix this problem:
(1) We remove fTimeZone as a member of the Func derived classes
(hence removing the mutexes) and instead use the fOperationType
member of the FunctionColumn class to propagate the timezone values
down to the individual functor processing functions such as
FunctionColumn::getStrVal(), FunctionColumn::getIntVal(), etc.
(2) To achieve (1), a timezone member is added to the
execplan::CalpontSystemCatalog::ColType class (a condensed sketch follows).
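A heavily condensed sketch of the resulting data flow; class shapes are simplified to the members mentioned above:

    #include <cstdint>

    // Simplified shapes; the real classes live in execplan/ and utils/funcexp/.
    // The point: the timezone travels with the ColType of the FunctionColumn
    // instead of living in the shared Func singleton behind a mutex.
    struct ColType
    {
      long timeZone = 0;  // added by this change (offset form, see Part 2)
    };

    struct Row {};

    struct Func
    {
      // No fTimeZone member and no mutex any more; the timezone arrives
      // through the operation type of the calling FunctionColumn.
      virtual int64_t getIntVal(Row& row, const ColType& opType) = 0;
      virtual ~Func() = default;
    };

    class FunctionColumn
    {
     public:
      int64_t getIntVal(Row& row) { return fFunctor->getIntVal(row, fOperationType); }

     private:
      Func* fFunctor = nullptr;  // shared, stateless functor object
      ColType fOperationType;    // per-column copy carrying the timezone
    };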
Part 2:
Several functors in the Funcexp code call dataconvert::gmtSecToMySQLTime()
and dataconvert::mySQLTimeToGmtSec() functions for conversion between seconds
since unix epoch and broken-down representation. These functions in turn call
the C library function localtime_r() which currently has a known bug of holding
a global lock via a call to __tz_convert. This significantly reduces performance
in multi-threaded applications where multiple threads concurrently call
localtime_r(). More details on the bug:
https://sourceware.org/bugzilla/show_bug.cgi?id=16145
This bug in localtime_r() caused processing of the functors in PrimProc to
slow down significantly, since query execution causes functor code to be
processed in a multi-threaded manner.
As a fix, we remove the calls to localtime_r() from gmtSecToMySQLTime()
and mySQLTimeToGmtSec() by performing the timezone-to-offset conversion
(done in dataconvert::timeZoneToOffset()) during the execution plan
creation in the plugin. Note that localtime_r() is only called when the
time_zone system variable is set to "SYSTEM".
This fix also required changing the timezone type from a std::string to
a long across the system.
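A sketch of the offset-based conversion that replaces the localtime_r() path, with simplified signatures (the real dataconvert routines also handle dates and the "SYSTEM" timezone):

    #include <cstdint>

    // Simplified stand-in for dataconvert::gmtSecToMySQLTime(): once the
    // timezone has been reduced to a fixed offset in seconds at plan-creation
    // time, the per-row conversion is pure arithmetic and needs no lock.
    struct MySQLTime
    {
      int64_t hour, minute, second;
    };

    MySQLTime gmtSecToMySQLTime(int64_t gmtSec, long tzOffsetSec)
    {
      int64_t local = gmtSec + tzOffsetSec;             // shift into the session timezone
      int64_t secOfDay = ((local % 86400) + 86400) % 86400;
      return MySQLTime{secOfDay / 3600, (secOfDay % 3600) / 60, secOfDay % 60};
    }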
Data types TEXT, CHAR, VARCHAR, FLOAT and DOUBLE are not yet supported by the vectorized path.
This patch introduces an example for the Google Benchmark suite to measure the performance
difference between the legacy scan/filtering code and the templated version.
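A minimal Google Benchmark skeleton in the spirit of the added example; the two filter kernels below are trivial placeholders, not the ColumnStore code:

    #include <benchmark/benchmark.h>
    #include <cstdint>
    #include <vector>

    // Trivial placeholder kernels standing in for the legacy and templated scans.
    static int64_t filterScanLegacy(const std::vector<int64_t>& col, int64_t threshold)
    {
      int64_t hits = 0;
      for (int64_t v : col)
        if (v < threshold)
          ++hits;
      return hits;
    }

    static int64_t filterScanTemplated(const std::vector<int64_t>& col, int64_t threshold)
    {
      return filterScanLegacy(col, threshold);  // placeholder: same work
    }

    static void BM_LegacyScan(benchmark::State& state)
    {
      std::vector<int64_t> col(state.range(0), 42);
      for (auto _ : state)
        benchmark::DoNotOptimize(filterScanLegacy(col, 100));
    }
    BENCHMARK(BM_LegacyScan)->Arg(1 << 20);

    static void BM_TemplatedScan(benchmark::State& state)
    {
      std::vector<int64_t> col(state.range(0), 42);
      for (auto _ : state)
        benchmark::DoNotOptimize(filterScanTemplated(col, 100));
    }
    BENCHMARK(BM_TemplatedScan)->Arg(1 << 20);

    BENCHMARK_MAIN();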
According to the C++ spec:
If an inline function or variable (since C++17) with external linkage is defined
differently in different translation units, the behavior is undefined.
The undefined behaviour causes link errors for the cpimport binary.
/usr/bin/ld: /tmp/cpimport.bin.av067N.ltrans0.ltrans.o:(.data.rel.ro+0x6c8):
undefined reference to `WriteEngine::ColumnOp::isEmptyRow(unsigned long*, unsigned char const*, int)'
The isEmptyRow method is defined as inline in the .cpp file but not
inline in the header file. As the method is not used as part of an
external API by any of the callers, nor is it overridden, mark it as a
normal (non-virtual) *inline* member function.
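A sketch of the after-shape of the definition; the return type and body are assumed for illustration, and only the placement (defined inline in the header) reflects the fix:

    #include <cstdint>
    #include <cstring>

    namespace WriteEngine
    {
    class ColumnOp
    {
     public:
      // Defined inline in the header so every translation unit sees the same
      // definition and LTO no longer produces an undefined reference.
      inline bool isEmptyRow(uint64_t* curVal, const uint8_t* emptyVal, int bytes)
      {
        // Assumed body: compare the current row value against the empty marker.
        return std::memcmp(curVal, emptyVal, static_cast<size_t>(bytes)) == 0;
      }
    };
    }  // namespace WriteEngine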
* Adds CompressInterfaceLZ4, which uses the LZ4 API for compress/uncompress.
* Adds CMake machinery to search for LZ4 on the running host.
* All methods which use static data and do not modify any internal data become `static`,
so we can use them without creating a specific object. This is possible because
the header specification has not been modified: we still use 2 sections in the header, the first
one with file meta data and the second one with pointers to compressed chunks.
* Methods `compress`, `uncompress`, `maxCompressedSize` and `getUncompressedSize` become
pure virtual, so we can override them for other compression algos (a condensed sketch follows this list).
* Adds the method `getChunkMagicNumber`, so we can verify the chunk magic number
for each compression algo.
* Renames "s/IDBCompressInterface/CompressInterface/g" according to requirement.
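A condensed sketch of the resulting class shape, assuming simplified signatures (the real interface also exposes `getUncompressedSize`, error codes and chunk headers; the magic number below is a placeholder):

    #include <lz4.h>
    #include <cstddef>
    #include <cstdint>

    // Condensed sketch: only a subset of the interface, with simplified signatures.
    class CompressInterface
    {
     public:
      virtual ~CompressInterface() = default;

      // Pure virtual so every compression algo supplies its own implementation.
      virtual size_t compress(const char* in, size_t inLen, char* out, size_t outCap) const = 0;
      virtual size_t uncompress(const char* in, size_t inLen, char* out, size_t outCap) const = 0;
      virtual size_t maxCompressedSize(size_t inLen) const = 0;

      // Each algo exposes its own magic number so chunks can be verified.
      virtual uint32_t getChunkMagicNumber() const = 0;
    };

    class CompressInterfaceLZ4 : public CompressInterface
    {
     public:
      size_t compress(const char* in, size_t inLen, char* out, size_t outCap) const override
      {
        int n = LZ4_compress_default(in, out, static_cast<int>(inLen), static_cast<int>(outCap));
        return n <= 0 ? 0 : static_cast<size_t>(n);
      }

      size_t uncompress(const char* in, size_t inLen, char* out, size_t outCap) const override
      {
        int n = LZ4_decompress_safe(in, out, static_cast<int>(inLen), static_cast<int>(outCap));
        return n < 0 ? 0 : static_cast<size_t>(n);
      }

      size_t maxCompressedSize(size_t inLen) const override
      {
        return static_cast<size_t>(LZ4_compressBound(static_cast<int>(inLen)));
      }

      uint32_t getChunkMagicNumber() const override
      {
        return 0x4C5A3443;  // placeholder value ("LZ4C"); the real magic number differs
      }
    };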