This is a fix of the logging subsystem, nothing else.
The old code expanded an argument into a string and advanced the scan
position too little, so if the expansion contained the argument's index,
it expanded it again. And again.
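A minimal sketch of the corrected loop, assuming "%N"-style placeholders (the actual token syntax in the logging code may differ): after substituting an argument, the scan position jumps past the inserted text, so an index inside the replacement is never re-expanded.

```cpp
#include <cstddef>
#include <string>
#include <vector>

std::string expandArgs(std::string msg, const std::vector<std::string>& args)
{
    for (std::size_t i = 0; i < args.size(); ++i)
    {
        const std::string token = "%" + std::to_string(i + 1);
        std::size_t pos = 0;
        while ((pos = msg.find(token, pos)) != std::string::npos)
        {
            msg.replace(pos, token.size(), args[i]);
            pos += args[i].size(); // key fix: skip over the replacement text
        }
    }
    return msg;
}
```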
The fix is simple: enable subtotals in single-phase aggregation and
disable parallel processing when there are subtotals and aggregation is
single-phase.
Fixes MCOL-5643.
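A toy sketch of that rule, with invented field names, just to pin down the interaction:

```cpp
// Hypothetical plan flags; the real aggregation code is structured differently.
struct AggPlan
{
    bool singlePhase;
    bool hasSubtotals;
    bool parallel;
};

// Subtotals are now allowed in single-phase aggregation, but such a
// plan must run serially.
void adjustPlan(AggPlan& p)
{
    if (p.singlePhase && p.hasSubtotals)
        p.parallel = false;
}
```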
The problem was that different views with the same column names in the
GROUP BY and SELECT clauses produced an error about "projection column is
not an aggregate neither in GROUP BY list."
This was due to an incorrect search in the expression list that led to
duplicate columns in the GROUP BY list.
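A sketch of the idea, with hypothetical types: the lookup has to compare the qualified name (view alias plus column), not just the bare column name, otherwise same-named columns from different views collide and the GROUP BY list picks up a duplicate.

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct ColRef
{
    std::string viewAlias;
    std::string column;
};

// Match on the qualified name; matching on the bare column name is what
// produced duplicate GROUP BY entries across views.
bool sameColumn(const ColRef& a, const ColRef& b)
{
    return a.viewAlias == b.viewAlias && a.column == b.column;
}

bool inGroupBy(const std::vector<ColRef>& groupBy, const ColRef& c)
{
    return std::any_of(groupBy.begin(), groupBy.end(),
                       [&](const ColRef& g) { return sameColumn(g, c); });
}
```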
JSON functions were implemented in violation of the assumption of their
pureness, i.e. that they should not hold any state. This concrete patch
fixes the implementation of the JSON_VALUE function.
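A sketch of the pureness contract under these assumptions (ParserState is a hypothetical stand-in for whatever per-call parsing state JSON_VALUE keeps): all mutable state is local to the call, so a shared functor instance cannot leak state between rows.

```cpp
#include <string>

// Hypothetical per-call parsing state; constructed fresh on every call
// instead of living in the functor.
struct ParserState
{
    std::string json;
    std::string path;
    std::string extract() const { return json; } // stub, not real path evaluation
};

class JsonValueFunc
{
  public:
    // No mutable members: one shared instance stays safe across rows and threads.
    std::string operator()(const std::string& json, const std::string& path) const
    {
        ParserState state{json, path};
        return state.extract();
    }
};
```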
This is "productization" of an old code that would enable extent
elimination for dictionary columns.
This concrete patch enables it, fixes perfomance degradation (main
problem with old code) and also fixes incorrect behavior of cpimport.
We perform intermediate calculations in int128_t when the target is UBIGINT
and check for overflow before converting into the UBIGINT. This is
necessary because addition and multiplication can overflow, whether (some)
operands are signed or all are unsigned.
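A minimal sketch of the widening pattern, assuming GCC/Clang's __int128 (the helper name is illustrative): do the arithmetic in 128 bits, then range-check before narrowing to UBIGINT.

```cpp
#include <cstdint>
#include <stdexcept>

using int128_t = __int128; // GCC/Clang built-in

// Illustrative helper: widen, compute, range-check, then narrow.
uint64_t mulToUBigInt(int64_t a, uint64_t b)
{
    int128_t wide = static_cast<int128_t>(a) * static_cast<int128_t>(b);
    if (wide < 0 || wide > static_cast<int128_t>(UINT64_MAX))
        throw std::overflow_error("result out of UBIGINT range");
    return static_cast<uint64_t>(wide);
}
```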
Adds a special column which helps to differentiate data and rollups of
various depths, and simple logic in row aggregation to add processing of
subtotals.
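A sketch of the row shape, with invented names: the extra column records the rollup depth, so consumers can tell plain data rows apart from subtotal rows of various depths.

```cpp
#include <cstdint>
#include <vector>

// Invented row shape: rollupDepth is the special differentiator column.
// depth 0 = plain data row; depth k = the last k GROUP BY keys rolled up.
struct AggRow
{
    std::vector<int64_t> keys;  // GROUP BY key values
    int64_t value;              // aggregated value
    uint32_t rollupDepth;       // the special column added by this patch
};
```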
1. Extend the calpontsys.syscolumn system catalog table
with a new column, 'charsetnum'.
'charsetnum' field is set to the 'number' member of the
'charset_info_st' struct defined in the server in m_ctype.h.
For CHAR/VARCHAR/TEXT column types, 'charset_info_st' is
initialized to the charset/collation of the column, which
is set at the column-level or at the table-level in the DDL.
For BLOB/VARBINARY binary column types, 'charset_info_st' is
initialized to my_charset_bin (charsetnum=63).
For all other column types, charsetnum is set to 0 (see the
sketch after this list).
2. Add support for the newly added 'charsetnum' column in the
automatic system catalog upgrade logic in dbbuilder.
For existing table definitions, charsetnum for the column is
defaulted to 0.
3. Add MTR test case that creates a few table definitions with
a range of charset/collation combinations and queries the
calpontsys.syscolumn system catalog table with the charsetnum
field for the columns in the table DDLs.
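A sketch of the mapping from item 1, with illustrative names (the real code reads the 'number' member of charset_info_st from the server):

```cpp
#include <cstdint>

// Illustrative type tags, not the actual ColumnStore identifiers.
enum class ColType { CHAR, VARCHAR, TEXT, BLOB, VARBINARY, OTHER };

constexpr uint32_t MY_CHARSET_BIN_NUM = 63; // my_charset_bin's 'number'

uint32_t charsetNumFor(ColType t, uint32_t columnCharsetNumber)
{
    switch (t)
    {
        case ColType::CHAR:
        case ColType::VARCHAR:
        case ColType::TEXT:
            return columnCharsetNumber;  // charset/collation from the DDL
        case ColType::BLOB:
        case ColType::VARBINARY:
            return MY_CHARSET_BIN_NUM;   // binary columns use my_charset_bin
        default:
            return 0;                    // all other column types
    }
}
```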
feat(charset)!: utf8 is the new default charset and utf8_general_ci is the new default collation in the shipped engine configuration file
---------
Co-authored-by: Leonid Fedorov <leonid.fedorov@mariadb.com>
Co-authored-by: mariadb-DanielLee <daniel.lee@mariadb.com>
This patch improves the handling of NULLs in textual fields in ColumnStore.
Previously, empty strings were considered NULLs, which could be a problem
if the data scheme allows empty strings. It was also one of the major
reasons for behavior differences between ColumnStore and other engines in
the MariaDB family.
This patch also fixes some other bugs and incorrect behavior, for
example an incorrect comparison for "column <= ''", which effectively
evaluated to a constant TRUE before this patch.
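A small sketch of the intended semantics, using std::optional as a stand-in for SQL NULL: an empty string is a real value distinct from NULL, and "column <= ''" is an ordinary comparison rather than a constant.

```cpp
#include <optional>
#include <string>

std::optional<bool> lessOrEqualEmpty(const std::optional<std::string>& column)
{
    if (!column)
        return std::nullopt; // SQL NULL propagates through the comparison
    return *column <= "";    // true only for the empty string itself
}
```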
Added a logical transformation of the execplan::ParseTrees that takes out the common factor in expressions of the form "(A AND B) OR (A AND C)", for the purposes of passing the TPC-H Q19 query.
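A toy version of the rewrite on a hand-rolled tree (Node/Op are illustrative, not the actual execplan::ParseTree API): "(A AND B) OR (A AND C)" becomes "A AND (B OR C)" when both AND branches share the same left operand.

```cpp
#include <memory>

enum class Op { AND, OR, LEAF };

struct Node
{
    Op op = Op::LEAF;
    std::shared_ptr<Node> left, right;
    int id = 0; // leaf identity, for the comparison below
};

using NodePtr = std::shared_ptr<Node>;

// Leaf-only comparison keeps the sketch short; real code compares subtrees.
bool same(const NodePtr& a, const NodePtr& b)
{
    return a && b && a->op == Op::LEAF && b->op == Op::LEAF && a->id == b->id;
}

// Rewrites (A AND B) OR (A AND C) into A AND (B OR C).
NodePtr factorCommon(const NodePtr& root)
{
    if (root && root->op == Op::OR &&
        root->left && root->left->op == Op::AND &&
        root->right && root->right->op == Op::AND &&
        same(root->left->left, root->right->left))
    {
        auto orNode = std::make_shared<Node>(
            Node{Op::OR, root->left->right, root->right->right, 0});
        return std::make_shared<Node>(Node{Op::AND, root->left->left, orNode, 0});
    }
    return root;
}
```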
Co-authored-by: Leonid Fedorov <leonid.fedorov@mariadb.com>
1. In TupleUnion::writeNull(), add the missing switch case for
wide decimal with 16-byte column width.
2. MCOL-5432 Disable complete/partial pushdown of the UNION operation
if the query involves an ORDER BY or a LIMIT clause, until
MCOL-5222 is fixed. Also add MTR test cases for this.
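A sketch of the missing case, where WIDE_DECIMAL_NULL is a stand-in for whatever 128-bit NULL sentinel the engine actually uses:

```cpp
#include <cstdint>
#include <cstring>

// Stand-in sentinel only; ColumnStore defines its own wide-decimal NULL magic.
static const unsigned __int128 WIDE_DECIMAL_NULL =
    static_cast<unsigned __int128>(1) << 127;

void writeNullValue(uint8_t* out, uint32_t colWidth)
{
    switch (colWidth)
    {
        case 16: // previously missing: wide decimal, 16-byte column width
            std::memcpy(out, &WIDE_DECIMAL_NULL, sizeof(WIDE_DECIMAL_NULL));
            break;
        default:
            // other widths handled as before (omitted in this sketch)
            break;
    }
}
```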
When a UNION operation involves DECIMAL datatypes whose scale plus digits
before the decimal point exceed the currently supported maximum precision
of 38, we throw an error to the user:
"MCS-2060: Union operation exceeds maximum DECIMAL precision of 38".
This is until MCOL-5417 is implemented, when ColumnStore will have
full parity with the MariaDB server in terms of the maximum supported
DECIMAL precision and scale of 65 and 38 digits respectively.
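A sketch of the guard, with illustrative parameter names: the combined requirement of integer digits and scale is checked against the 38-digit ceiling before the union result type is built.

```cpp
#include <algorithm>
#include <cstdint>
#include <stdexcept>

void checkUnionDecimal(uint32_t intDigitsA, uint32_t scaleA,
                       uint32_t intDigitsB, uint32_t scaleB)
{
    const uint32_t scale = std::max(scaleA, scaleB);
    const uint32_t intDigits = std::max(intDigitsA, intDigitsB);
    if (intDigits + scale > 38) // digits before the point + scale
        throw std::runtime_error(
            "MCS-2060: Union operation exceeds maximum DECIMAL precision of 38");
}
```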
Disable the check for correlated subqueries; basically, those types of queries transform
into join(aggr(table2), table1) and a post-join scalar filter.
* Sort test results so the test case passes
* Server message has been changed
* Added schema name to the query for rows in the test case only. Also use a lower-case schema name
* Changed database to lower case
* Run test case in its own database to avoid "table already exists" errors
Co-authored-by: root <root@rocky8.localdomain>
This patch fixes the reported JIRA issue MCOL-5205, which consists of a wrong union type being derived from two input INT types. The bug results in wrong union answers in ColumnStore. The fix includes handling for more INT cases. Additionally, this patch provides detailed unit tests for correctness of UNION processing with INT types.
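An illustrative widening rule only (not the engine's exact type matrix): unioning signed and unsigned integers of the same width needs a wider result type to hold both value ranges, which is the kind of case the fix adds.

```cpp
// Illustrative type tags and rule; not ColumnStore's actual resolution code.
enum class IntType { INT, UINT, BIGINT, UBIGINT };

IntType unionIntTypes(IntType a, IntType b)
{
    if (a == b)
        return a;
    // Signed + unsigned of the same 32-bit width: neither side's range
    // covers the other, so widen to the next larger signed type.
    if ((a == IntType::INT && b == IntType::UINT) ||
        (a == IntType::UINT && b == IntType::INT))
        return IntType::BIGINT;
    return IntType::BIGINT; // oversimplified default for this sketch
}
```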
Signed-off-by: Jigao Luo <luojigao@outlook.com>
The following functions are created:
Create function JSON_VALID and test cases
Create function JSON_DEPTH and test cases
Create function JSON_LENGTH and test cases
Create function JSON_EQUALS and test cases
Create function JSON_NORMALIZE and test cases
Create function JSON_TYPE and test cases
Create function JSON_OBJECT and test cases
Create function JSON_ARRAY and test cases
Create function JSON_KEYS and test cases
Create function JSON_EXISTS and test cases
Create function JSON_QUOTE/JSON_UNQUOTE and test cases
Create function JSON_COMPACT/DETAILED/LOOSE and test cases
Create function JSON_MERGE and test cases
Create function JSON_MERGE_PATCH and test cases
Create function JSON_VALUE and test cases
Create function JSON_QUERY and test cases
Create function JSON_CONTAINS and test cases
Create function JSON_ARRAY_APPEND and test cases
Create function JSON_ARRAY_INSERT and test cases
Create function JSON_INSERT/REPLACE/SET and test cases
Create function JSON_REMOVE and test cases
Create function JSON_CONTAINS_PATH and test cases
Create function JSON_OVERLAPS and test cases
Create function JSON_EXTRACT and test cases
Create function JSON_SEARCH and test cases
Note:
Some functions' output differs from MDB because of session variables that affect function output, e.g. JSON_QUOTE/JSON_UNQUOTE.
This depends on MCOL-5212
When the length of the string to replace minus the length of the
replacement string is bigger than the length of the input string, and the
processing mode allows binary (memcmp or std::string::find()) comparison,
REPLACE may trigger an invalid capacity assertion and query processing
will stop.
The fix is to properly count the number of occurrences of the string to
replace, basically.
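A sketch of the counting approach with std::string::find(): occurrences are counted first so the result buffer is sized exactly, even when the replacement is shorter than the search string.

```cpp
#include <cstddef>
#include <string>

// Count non-overlapping occurrences of `from` in `input`.
std::size_t countOccurrences(const std::string& input, const std::string& from)
{
    std::size_t count = 0;
    for (std::size_t pos = input.find(from); pos != std::string::npos;
         pos = input.find(from, pos + from.size()))
        ++count;
    return count;
}

std::string replaceAll(const std::string& input, const std::string& from,
                       const std::string& to)
{
    if (from.empty())
        return input;
    const std::size_t n = countOccurrences(input, from);
    std::string result;
    // Exact capacity: may grow or shrink depending on from/to lengths.
    result.reserve(input.size() + n * to.size() - n * from.size());
    std::size_t last = 0;
    for (std::size_t pos = input.find(from); pos != std::string::npos;
         pos = input.find(from, pos + from.size()))
    {
        result.append(input, last, pos - last);
        result.append(to);
        last = pos + from.size();
    }
    result.append(input, last, std::string::npos);
    return result;
}
```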
* MCOL-5092 Ensure column width is correct for datatype
Change MODA return type to STRING
Modify MODA to handle every numeric type
* MCOL-5162 MODA to support char and varchar with collation support
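One possible shape of such an accumulator, assuming values are counted in string form (a sketch, not the actual MODA implementation): a single map-based counter covers every numeric type and naturally returns a STRING.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Sketch of a mode accumulator; ties are resolved arbitrarily here and
// collation handling (MCOL-5162) is omitted.
struct ModaAgg
{
    std::unordered_map<std::string, uint64_t> counts;

    void add(const std::string& v) { ++counts[v]; }

    std::string result() const
    {
        std::string best;
        uint64_t bestCount = 0;
        for (const auto& [value, count] : counts)
            if (count > bestCount) { best = value; bestCount = count; }
        return best;
    }
};
```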
Fixes to the aggregate bit functions
When we fixed the storage sign issue for MCOL-5092, it uncovered a problem in the bit aggregates (bit_and, bit_or and bit_xor). These aggregates should always return UBIGINT, but they relied on the type of the argument column, which gave bad results.
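A minimal sketch of the corrected contract: the accumulator is a fixed UBIGINT regardless of the argument column's type.

```cpp
#include <cstdint>

// Accumulation always happens in uint64_t, never in the argument's type.
struct BitAggregates
{
    uint64_t andAcc = ~0ULL; // identity for BIT_AND
    uint64_t orAcc = 0;      // identity for BIT_OR
    uint64_t xorAcc = 0;     // identity for BIT_XOR

    void add(uint64_t v)
    {
        andAcc &= v;
        orAcc |= v;
        xorAcc ^= v;
    }
};
```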