* configcpp refactored
* chore(build): massive removals, auto add files to debian install file
* chore(build): configure before autobake
* chore(build): use custom cmake commands for components, mariadb-plugin-columnstore.install generated
* chore(build): install deps as separate step for build-packages
* more deps
* chore(codemanagement, build): build refactoring stage2
* chore(safety): locked Map for MessageqCpp, implemented in a simpler way
* chore(codemanagement, ci): better coredumps handling, deps fixed
* Delete build/bootstrap_mcs.py
* Update charset.cpp (add license)
BLOB fields did not work as grouping keys at all: they were assigned
the value NULL for any value, be it NULL or not. The fix is in
rowaggregation.cpp, in initMapping(), where a switch/case branch was
added to handle BLOB field copying.
Also, TEXT columns did not distinguish between NULL and an empty string
in the grouping algorithm; now they do. The fix is in the equals()
function: we now specifically check for isNull() equality between
values.
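A minimal sketch of the NULL-aware comparison idea, using invented stand-in types rather than the actual RowAggregation interfaces:

```cpp
#include <string>

// Hypothetical stand-in for a grouping-key value.
struct ColValue
{
  std::string data;
  bool null;
};

// NULL-aware equality: values compare equal only if their NULL flags
// match, so an empty TEXT value is no longer conflated with NULL.
bool equals(const ColValue& a, const ColValue& b)
{
  if (a.null != b.null)
    return false;          // NULL vs. empty string now differ
  if (a.null)
    return true;           // both NULL
  return a.data == b.data; // both non-NULL: compare contents
}
```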
The purpose of this changeset is to obtain the list of partitions from
the SELECT_LEX structure and pass it down to the joblist and then to
CrossEngineStep, which passes it to InnoDB.
* move GROUP_CONCAT/JSON_ARRAYAGG storage to the RowGroup from
the RowAggregation*
* internal data structures (de)serialization
* get rid of specialized classes for processing JSON_ARRAYAGG
* move the memory accounting to disk-based aggregation classes
* allow aggregation generations to be used for queries with
GROUP_CONCAT/JSON_ARRAYAGG
* Remove the thread id from the error message as it interferes with the mtr
This patch introduces an internal aggregate operator SELECT_SOME that
is automatically added to columns that are not in GROUP BY. It
"computes" some plausible value of the column (actually, the last one
passed).
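A minimal sketch of the "last value passed" idea behind SELECT_SOME, with invented names rather than the actual operator interface:

```cpp
#include <optional>

// Hypothetical sketch: the aggregate just remembers the last value
// pushed into it, which serves as a plausible representative for a
// column that is not listed in GROUP BY.
template <typename T>
class SelectSome
{
  std::optional<T> last_;

 public:
  void add(const T& v) { last_ = v; }  // keep the latest row's value
  const std::optional<T>& result() const { return last_; }
};
```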
Along the way it fixes incorrect handling of HAVING conditions being
transferred into WHERE, window function handling, and a few other
inconsistencies.
The most important fix here is for a possible buffer overrun in the
DATEFORMAT() function. A "%W" format, repeated enough times, would
overflow the 256-byte result buffer. Now we use an ostringstream to
construct the result, and we are safe.
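A sketch of the safe pattern (not the actual function code): append pieces to a growable ostringstream instead of writing into a fixed char[256].

```cpp
#include <sstream>
#include <string>

// Each "%W" expands to a full weekday name, so a fixed 256-byte buffer
// overflows after a few dozen repetitions; an ostringstream grows as
// needed. The weekday value here is a placeholder.
std::string formatDate(const std::string& fmt)
{
  std::ostringstream oss;
  for (size_t i = 0; i < fmt.size(); ++i)
  {
    if (fmt[i] == '%' && i + 1 < fmt.size() && fmt[i + 1] == 'W')
    {
      oss << "Wednesday"; // weekday name (placeholder)
      ++i;
    }
    else
      oss << fmt[i];
  }
  return oss.str(); // no fixed-size buffer to overrun
}
```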
Changes in the date/time projection functions also required fixing
differences between our behavior and the server's. The new, better
behavior is reflected in the updated test results.
Also, there was incorrect logic in the TRUNCATE() and ROUND() functions
when computing the decimal "shift."
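A worked sketch of the decimal "shift" with an invented helper (not the actual functor code): rounding a scaled integer to d decimal places divides by 10^(scale - d), not 10^d.

```cpp
#include <cstdint>

// Example: 123.456 stored as value=123456 with scale=3, rounded to
// d=1 place: shift = 3 - 1 = 2, so (123456 + 50) / 100 = 1235,
// i.e. 123.5 at scale 1.
int64_t roundDecimal(int64_t value, int scale, int d)
{
  int shift = scale - d;
  if (shift <= 0)
    return value; // already at or below the requested precision

  int64_t p = 1;
  for (int i = 0; i < shift; ++i)
    p *= 10; // p = 10^shift

  int64_t half = (value >= 0) ? p / 2 : -(p / 2);
  return (value + half) / p; // round half away from zero
}
```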
* MCOL-4234: improve GROUP BY and ORDER BY interaction (#3194)
This patch fixes the problem in MCOL-4234 and also generally improves
the behavior of GROUP BY.
It does so by introducing a "dummy" aggregate and by wrapping columns
into it. This allows columns that are not in GROUP BY to be used more
freely; for example, in SELECT * FROM tbl GROUP BY col, all columns
other than "col" will be wrapped into an aggregate and the query will
proceed to execution.
The dummy aggregate itself does nothing more than remember the last
value passed into it.
There is also an additional error message that tries to explain what
types of expressions can be wrapped into an aggregate.
* MCOL-5772: incorrect ORDER BY ordering for columns not in GROUP BY (#3214)
When an ORDER BY column is not in GROUP BY and is not an aggregate, and
there is a SELECT column that is also not an aggregate, there was a
problem: ordering happened on the SELECTed column, not the ORDERed one.
This patch fixes that particular problem and also performs some tidying
around the newly added aggregate.
---------
Co-authored-by: Leonid Fedorov <79837786+mariadb-LeonidFedorov@users.noreply.github.com>
We add intermediate calculations in int128_t when the target is UBIGINT
and check for overflow before converting to UBIGINT. This is needed
because addition and multiplication can overflow, whether with (some)
signed operands or with both unsigned.
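A minimal sketch of the widening pattern, assuming GCC's __int128 as the int128_t type and an invented function name:

```cpp
#include <cstdint>
#include <stdexcept>

// Multiply in 128-bit so the intermediate cannot overflow for any mix
// of 64-bit operands, then range-check before narrowing to UBIGINT
// (uint64_t).
uint64_t mulToUBigInt(int64_t a, uint64_t b)
{
  __int128 wide = static_cast<__int128>(a) * static_cast<__int128>(b);

  if (wide < 0 || wide > static_cast<__int128>(UINT64_MAX))
    throw std::overflow_error("value out of range for UBIGINT");

  return static_cast<uint64_t>(wide);
}
```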
Set the charset of the calpontsys.syscolumn syscat table to be latin1.
This change is done in one of the ctors of pColStep, which is
initiated while building the job list from the execution plan.
Adds a special column which helps to differentiate data rows from
rollups of various depths, and simple logic in row aggregation to add
processing of subtotals.
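A hedged sketch of the marker idea with invented types (the real code operates on RowGroups): each output row carries a rollup-depth column so consumers can tell base data from subtotals.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical aggregated row: the grouping keys plus a marker column.
// Depth 0 is a regular data row; depth N means the last N GROUP BY
// keys were rolled up, so a rolled-up key can be told apart from a
// genuine NULL group value.
struct AggRow
{
  std::vector<int64_t> keys;
  uint64_t rollupDepth; // the special differentiating column
};
```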
1. Extend the following CalpontSystemCatalog member functions to
set CalpontSystemCatalog::ColType::charsetNumber, after the
system catalog update to add charset number to calpontsys.syscolumn
in MCOL-5005:
CalpontSystemCatalog::lookupOID
CalpontSystemCatalog::colType
CalpontSystemCatalog::columnRIDs
CalpontSystemCatalog::getSchemaInfo
2. Update cpimport to use the CHARSET_INFO object associated with the
charset number retrieved from the system catalog, for a
dictionary/non-dictionary CHAR/VARCHAR/TEXT column, to truncate
long strings that exceed the target column character length
(see the sketch after this list).
3. Add MTR test cases.
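A hedged sketch of character-aware truncation using the server charset API; the helper name is invented, but CHARSET_INFO and cset->charpos() are the real m_ctype.h interfaces:

```cpp
#include <m_ctype.h> // MariaDB server charset API

// Truncate a byte string to at most maxChars *characters* for the
// column's charset, so multi-byte values are cut on a character
// boundary rather than at a raw byte count. charpos() maps a
// character count to a byte offset for the given charset.
size_t truncateToCharLength(CHARSET_INFO* cs, const char* str,
                            size_t byteLen, size_t maxChars)
{
  size_t offset = cs->cset->charpos(cs, str, str + byteLen, maxChars);
  return offset < byteLen ? offset : byteLen; // resulting byte length
}
```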
1. Extend the calpontsys.syscolumn system catalog table
with a new column, 'charsetnum'.
'charsetnum' field is set to the 'number' member of the
'charset_info_st' struct defined in the server in m_ctype.h.
For CHAR/VARCHAR/TEXT column types, 'charset_info_st' is
initialized to the charset/collation of the column, which
is set at the column-level or at the table-level in the DDL.
For BLOB/VARBINARY binary column types, 'charset_info_st' is
initialized to my_charset_bin (charsetnum=63).
For all other column types, charsetnum is set to 0 (see the
sketch after this list).
2. Add support for the newly added 'charsetnum' column in the
automatic system catalog upgrade logic in dbbuilder.
For existing table definitions, charsetnum for the column is
defaulted to 0.
3. Add MTR test case that creates a few table definitions with
a range of charset/collation combinations and queries the
calpontsys.syscolumn system catalog table with the charsetnum
field for the columns in the table DDLs.
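A hedged sketch of the charsetnum rule from item 1 (helper names invented; my_charset_bin and CHARSET_INFO::number are the real server definitions):

```cpp
#include <m_ctype.h> // MariaDB server charset API
#include <cstdint>

// String columns record their collation's number, binary columns record
// my_charset_bin.number (63), and everything else records 0.
uint32_t charsetNumberFor(bool isStringType, // CHAR/VARCHAR/TEXT
                          bool isBinaryType, // BLOB/VARBINARY
                          const CHARSET_INFO* columnCharset)
{
  if (isStringType)
    return columnCharset->number;
  if (isBinaryType)
    return my_charset_bin.number; // 63
  return 0;
}
```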
This patch:
1. Properly processes the situation when the pm join result count is exceeded.
2. Adds the session variable `columnstore_max_pm_join_result_count` to control the limit (see the sketch after this list).
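A hedged sketch of the limit check with invented names (the real handling lives in the PM join code and may fall back rather than error):

```cpp
#include <cstdint>
#include <stdexcept>

// Compare the running match count against the session's
// columnstore_max_pm_join_result_count and stop cleanly instead of
// letting the result set grow without bound.
void checkPmJoinResultCount(uint64_t produced, uint64_t maxAllowed)
{
  if (produced > maxAllowed)
    throw std::runtime_error(
        "PM join result count exceeds columnstore_max_pm_join_result_count");
}
```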
This patch:
1. Handles a corner case when a bucket exceeded the memory limit, but we cannot redistribute the data in this bucket into new buckets based on a hash algorithm, because the rows have the same values (see the sketch after this list).
2. Adds a force option for the disk join step.
3. Adds an option to control the depth of the partition tree.
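A hedged sketch of the corner-case detection with invented types (the real code works on RowGroups, not plain keys): re-hashing a bucket whose rows all share one key value just refills a single child bucket, so that case must be detected and handled differently.

```cpp
#include <cstdint>
#include <vector>

// Splitting can only help if the bucket holds at least two distinct
// key values; otherwise every row hashes to the same child bucket.
bool canRedistribute(const std::vector<int64_t>& keysInBucket)
{
  for (size_t i = 1; i < keysInBucket.size(); ++i)
  {
    if (keysInBucket[i] != keysInBucket[0])
      return true; // distinct keys exist: a hash split can help
  }
  return false; // all keys identical: fall back (e.g. force/limit depth)
}
```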