boost::unique_lock's dtor calls unlock logic that can throw in two cases:
first, if the mutex is 0, and second, if the lock doesn't own the mutex.
The first condition causes an unhandled exception for one of the clients
in DEC::writeToClient(). I was unable to find out why Linux can end up with a 0
mutex, so I replaced boost::mutex with std::mutex because libstdc++ should
be more stable than boost.
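A minimal sketch of the swap, assuming a guarded section shaped roughly like DEC::writeToClient() (the member and function names below are illustrative, not the actual ones):

```cpp
#include <mutex>

std::mutex clientMutex;                           // was boost::mutex

void writeToClientGuarded()
{
    std::unique_lock<std::mutex> lock(clientMutex);
    // ... write the response to the client under the lock ...
}                                                 // released by ~unique_lock()
```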
ColumnStore used to include the server's mysql.h
but linked all tools with libmariadb.so.
There's no guarantee that this would work, even with the workarounds
it had in dbcon/mysql/sm.cpp.
Fix:
* tools (linked with libmariadb.so) *must* include libmariadb's mysql.h
* as a hack prevent service_thd_timezone.h from being loaded into tools,
as it conflicts with libmariadb's mysql.h
* server plugin *must* include server's mysql.h
* also, don't link every tool with libmariadb.so; link the helper library
  (liblibmysqlclient.so) that actually needs it. Tools use this
  helper library, not libmariadb.so directly
This is an attempt to make some parts of the code more stable.
For some reason we can get a spurious nullptr in a boost::shared_ptr,
which causes an assert and abort.
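A hedged sketch of the defensive pattern, with invented names (Job, processJob); the idea is to detect the null pointer and skip the single operation instead of asserting and aborting the process:

```cpp
#include <boost/shared_ptr.hpp>
#include <iostream>

struct Job { int id; };

void processJob(const boost::shared_ptr<Job>& job)
{
    if (!job)                      // the spurious nullptr case
    {
        std::cerr << "skipping null job" << std::endl;
        return;                    // previously: assert(job) -> abort()
    }
    // ... normal processing of *job ...
}
```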
* MCOL-5279 Send ByteStream with Primitive message to remote PPs first
and to the local PP last
MCOL-5166 forces EM to interact with PP over a messaging queue when they are in the same process. When a ByteStream is taken from the queue it is drained, so it is impossible to re-use the same ByteStream to send to the other nodes.
* MCOL-5279 Fix a corner case when sending PP messages
Multi-node setups can break when flow control is used to backpressure PPs that EM overflows with Primitive messages.
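A rough sketch of the send order, with stand-in types instead of the real messageqcpp classes:

```cpp
#include <cstdint>
#include <vector>

struct ByteStream { std::vector<uint8_t> buf; };         // stand-in
struct Connection { void write(const ByteStream&) {} };  // stand-in

// Send the Primitive message to every remote PP while the ByteStream is still
// intact; hand it to the local in-process PP queue last, since that queue
// drains the ByteStream and it cannot be reused afterwards.
void sendPrimitiveMsg(ByteStream& bs,
                      std::vector<Connection>& remotePPs,
                      Connection& localPP)
{
    for (auto& remote : remotePPs)
        remote.write(bs);   // remote writes copy the bytes; bs stays intact

    localPP.write(bs);      // local delivery last: the queue drains bs
}
```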
Exit early from the plugin execution of ALTER TABLE statements
on the replica nodes. This is to prevent re-execution of syscat
table population from the replica nodes; it should only be
executed once, by the primary node, in a CS cluster setup.
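A hedged sketch of the early exit; isReplicaNode() and processAlterTable() are invented names, not the actual plugin entry points:

```cpp
static bool isReplicaNode()
{
    // stand-in for however the plugin detects this node's role in the cluster
    return false;
}

static int processAlterTable()
{
    if (isReplicaNode())
        return 0;   // no-op on replicas; the primary node does the work once

    // ... populate syscat tables and execute the DDL as before ...
    return 0;
}
```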
ARM platforms. This patch resolves this issue and unifies AUX column
processing on x86 and ARM using the template class SimdProcessor.
The patch also replaces the uint16_t mask previously used in column.cpp and
SimdProcessor code with the native masks of the platform, e.g. __m128i
or __m128 on x86 and a variety of mask types on ARM.
To unify the processing I introduced a new filtering compare operator, COMPARE_NULLEQ,
with 'c1 IS NULL' semantics.
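A very rough sketch of the mask unification idea, with invented names; the real SimdProcessor is considerably more involved:

```cpp
#include <cstdint>

#if defined(__x86_64__)
  #include <emmintrin.h>
  using NativeMask = __m128i;      // comparison result stays in a vector register
#elif defined(__aarch64__)
  #include <arm_neon.h>
  using NativeMask = uint8x16_t;   // one of the NEON mask vector types
#else
  using NativeMask = uint16_t;     // scalar fallback
#endif

template <typename VT>
struct SimdProcessorSketch
{
    // COMPARE_NULLEQ-style predicate: lanes equal to the column's NULL magic
    // value ('c1 IS NULL' semantics). Declared only, as a shape illustration.
    NativeMask cmpNullEq(VT lanes, VT nullValue);
};
```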
This patch improves the runtime performance of UNION processing in CS, as reported in JIRA issue MCOL-4590. The idea of the optimization is to determine the separate normalization functions beforehand and perform the normalization individually later, instead of going through one huge switch body covering every normalization. The patch also covers engineering optimizations that remove hotspots in UNION processing. After applying this patch, the normalization part takes only about 25% of the whole UNION query in our average experimental case.
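A simplified illustration of the idea with invented types; the actual code normalizes Row values between CS data types:

```cpp
#include <cstdint>
#include <functional>
#include <vector>

using Value = int64_t;                        // stand-in for a column value
using NormalizeFn = std::function<Value(Value)>;

// Chosen once per result column instead of once per row.
NormalizeFn pickNormalizer(int srcType, int dstType)
{
    if (srcType == dstType)
        return [](Value v) { return v; };     // no conversion needed
    return [](Value v) { return v; /* widen/convert as appropriate */ };
}

void normalizeColumn(std::vector<Value>& rows, const NormalizeFn& fn)
{
    for (auto& v : rows)
        v = fn(v);                            // tight loop, no per-row switch
}
```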
Signed-off-by: Jigao Luo <luojigao@outlook.com>
The main CMakeLists.txt was using MY_CHECK_AND_SET_COMPILER_FLAG before the include. This works in-band with the server because it was already included in the server's CMakeLists.txt.
dbcon/mysql included curl as a build dependency. We don't build curl; it's a lib dependency. Not sure why it works in-band. One wouldn't think it should.
The following functions are created:
Create function JSON_VALID and test cases
Create function JSON_DEPTH and test cases
Create function JSON_LENGTH and test cases
Create function JSON_EQUALS and test cases
Create function JSON_NORMALIZE and test cases
Create function JSON_TYPE and test cases
Create function JSON_OBJECT and test cases
Create function JSON_ARRAY and test cases
Create function JSON_KEYS and test cases
Create function JSON_EXISTS and test cases
Create function JSON_QUOTE/JSON_UNQUOTE and test cases
Create function JSON_COMPACT/DETAILED/LOOSE and test cases
Create function JSON_MERGE and test cases
Create function JSON_MERGE_PATCH and test cases
Create function JSON_VALUE and test cases
Create function JSON_QUERY and test cases
Create function JSON_CONTAINS and test cases
Create function JSON_ARRAY_APPEND and test cases
Create function JSON_ARRAY_INSERT and test cases
Create function JSON_INSERT/REPLACE/SET and test cases
Create function JSON_REMOVE and test cases
Create function JSON_CONTAINS_PATH and test cases
Create function JSON_OVERLAPS and test cases
Create function JSON_EXTRACT and test cases
Create function JSON_SEARCH and test cases
Note:
Some functions' output differs from MDB because of session variables that affect function output, e.g. JSON_QUOTE/JSON_UNQUOTE.
This depends on MCOL-5212
in aggregation code
The patch disables the padding that forces the hasher to calculate over the whole 2K buffer. It also moves the hashing code
into the common place where it belongs.
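An illustration of the intent only; hashRow() is a stand-in, not the actual aggregation code:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string_view>

uint64_t hashRow(const uint8_t* buf, size_t usedLen)
{
    // Before the fix, padding meant usedLen was effectively the whole 2K
    // buffer; now only the bytes actually occupied by the row are hashed.
    return std::hash<std::string_view>{}(
        std::string_view(reinterpret_cast<const char*>(buf), usedLen));
}
```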
* MCOL-5092 Ensure column width is correct for datatype
Change MODA return type to STRING
Modify MODA to handle every numeric type
* MCOL-5162 MODA to support char and varchar with collation support
Fixes to the aggregate bit functions
When we fixed the storage sign issue for MCOL-5092, it uncovered a problem in the bit aggregates (bit_and, bit_or and bit_xor). These aggregates should always return UBIGINT, but they relied on the type of the argument column, which gave bad results.
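A small sketch of the bit-aggregate fix's intent, with an invented accumulator type:

```cpp
#include <cstdint>

struct AggBitAnd
{
    uint64_t acc = ~0ULL;               // identity element for bitwise AND

    // Arguments are treated as UBIGINT regardless of the source column type,
    // so the result type no longer depends on the argument column.
    void add(uint64_t v) { acc &= v; }
    uint64_t result() const { return acc; }
};
```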
This patch adds support for an ON clause filter on a table that is not involved in a particular join
by disabling the `merge optimization` for those cases.
The `merge optimization` is an optimization where CS
tries to create a single BPP join with one `large side` table and multiple `small side` tables; in this
case we cannot apply an FE filter if the filter requires columns from a `small side` table that is not
involved in the particular join.
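A purely illustrative sketch of the disabling condition; detecting which columns the FE filter needs is where the real work is:

```cpp
struct JoinStepSketch
{
    // Set by analysis of the ON-clause (FE) filter: does it reference a column
    // from a small-side table that is not part of this particular join?
    bool feFilterNeedsUnjoinedSmallSide = false;

    // Merge the small sides into one BPP join only when no such filter exists;
    // otherwise keep the joins separate so the filter can still be applied.
    bool canMergeSmallSides() const { return !feFilterNeedsUnjoinedSmallSide; }
};
```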