Boost 1.85 removed some deprecated code in the filesystem module that is
still used in columnstore:
- The boost/filesystem/convenience.hpp header was removed, but columnstore
does not use any functionality from that file except indirect includes.
Therefore this include is either removed or replaced with the more general
boost/filesystem.hpp. The convenience.hpp header was deprecated in
Filesystem V3, introduced in Boost 1.46.0.
- The `normalize` method was removed and users are advised to replace it
with the `lexically_normal` method, which was introduced in Boost 1.60.0.
The original `normalize` call is preserved for backward compatibility with
old Boost versions; however, `lexically_normal` is preferred with Boost
1.60.0 and newer.
- The `copy_option` enum was removed in favor of `copy_options` (note the
trailing 's'), and its enum values were renamed. Namely, `fail_if_exists`
is replaced with `none` and `overwrite_if_exists` is replaced with
`overwrite_existing`. `copy_options` was introduced in Boost 1.74.0.
The new form is used instead, and a backward compatibility layer for
Boost 1.73.0 and older was introduced in the boost_copy_options_compat.hpp
file. This solution seems less awkward than scattering multiple
#if/#else/#endif blocks throughout the source code.
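A minimal sketch of the compatibility idea (names below are assumed for
illustration and may differ from the actual boost_copy_options_compat.hpp):
expose the Boost >= 1.74 spelling on older Boost versions so call sites can
use a single form unconditionally.

  #include <boost/version.hpp>
  #include <boost/filesystem.hpp>

  namespace compat  // hypothetical namespace, for illustration only
  {
  #if BOOST_VERSION >= 107400
    using copy_opts = boost::filesystem::copy_options;
    constexpr auto overwrite = copy_opts::overwrite_existing;
    constexpr auto fail_if_exists = copy_opts::none;
  #else
    using copy_opts = boost::filesystem::copy_option;
    constexpr auto overwrite = copy_opts::overwrite_if_exists;
    constexpr auto fail_if_exists = copy_opts::fail_if_exists;
  #endif
  }

  // Usage that works with both old and new Boost:
  //   boost::filesystem::copy_file(src, dst, compat::overwrite);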
Taken from FreeBSD ports, this uses the FreeBSD APIs rather than the
Linux-specific prctl to change and retrieve thread names.
Co-authored-by: Bernard Spil <brnrd@FreeBSD.org>
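A hedged sketch of the platform split (the helper names below are
placeholders, not the actual columnstore functions): FreeBSD exposes
thread-name accessors in <pthread_np.h>, whereas Linux goes through
prctl(2).

  #include <pthread.h>
  #include <cstddef>

  #if defined(__FreeBSD__)
  #include <pthread_np.h>
  #else
  #include <sys/prctl.h>
  #endif

  inline void setCurrentThreadName(const char* name)
  {
  #if defined(__FreeBSD__)
    pthread_set_name_np(pthread_self(), name);
  #else
    // Linux truncates names to 15 characters plus the terminating NUL.
    prctl(PR_SET_NAME, name, 0, 0, 0);
  #endif
  }

  inline void getCurrentThreadName(char* buf, std::size_t len)
  {
  #if defined(__FreeBSD__)
    pthread_get_name_np(pthread_self(), buf, len);
  #else
    (void)len;  // PR_GET_NAME expects a buffer of at least 16 bytes.
    prctl(PR_GET_NAME, buf, 0, 0, 0);
  #endif
  }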
in aggregation code
The patch disables padding that forced the hasher to calculate over the
whole 2K buffer. It also moves the hashing code into the common place
where it belongs.
exact functionality that does not use MDB hash function.
This patch also takes a previously forgotten bit from the Robin Hood
hash map implementation that reduces the hash function collision rate.
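Purely illustrative (the hash routine and names below are assumptions,
not the actual columnstore hasher): the effect of the padding change is
that only the occupied bytes are fed to the hasher instead of a fixed,
zero-padded 2K buffer.

  #include <cstdint>
  #include <cstddef>

  // Simple FNV-1a, standing in for the real hash function.
  inline uint64_t fnv1a(const uint8_t* data, std::size_t len)
  {
    uint64_t h = 1469598103934665603ULL;
    for (std::size_t i = 0; i < len; ++i)
    {
      h ^= data[i];
      h *= 1099511628211ULL;
    }
    return h;
  }

  // Before: fnv1a(buf, 2048) regardless of how much of the buffer is used.
  // After:  fnv1a(buf, usedBytes), so trailing padding never reaches the hasher.
  inline uint64_t hashRowKey(const uint8_t* buf, std::size_t usedBytes)
  {
    return fnv1a(buf, usedBytes);
  }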
* MCOL-4846 dev-6 Handle large join results
Use a loop to shrink the number of results reported per message to something manageable.
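A conceptual sketch of the loop (the types and the send function are
placeholders, not the actual ExeMgr interfaces): instead of serializing
the whole join result into one oversized message, emit it in fixed-size
chunks.

  #include <algorithm>
  #include <cstddef>
  #include <vector>

  template <typename Row, typename SendFn>
  void sendInChunks(const std::vector<Row>& results, std::size_t maxRowsPerMsg,
                    SendFn send)
  {
    for (std::size_t off = 0; off < results.size(); off += maxRowsPerMsg)
    {
      std::size_t n = std::min(maxRowsPerMsg, results.size() - off);
      send(&results[off], n);  // one manageable message per iteration
    }
  }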
* MCOL-4841 small changes requested by review
* Add EXTRA threads to prioritythreadpool
prioritythreadpool is configured at startup with a fixed number of available threads. This is to prevent thread thrashing. Since BPP job steps are short-lived most of the time, and a rescheduling mechanism exists if no threads are available, this works to keep CPU wastage to a minimum.
However, if a query or queries consume all the threads in prioritythreadpool and then block (because the consumer is not consuming fast enough), we can run out of threads and no work will be done until some threads unblock. A new mechanism allows EXTRA threads to be created for the duration of the blocking action. These threads can act on new queries. When all blocking is completed, these threads are released once they go idle.
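A simplified sketch of the mechanism; the member names below are
assumptions, not the real prioritythreadpool API:

  #include <atomic>
  #include <thread>

  struct PoolSketch
  {
    std::atomic<unsigned> blockedThreads{0};
    std::atomic<unsigned> extraThreads{0};
    unsigned maxThreads = 16;  // fixed pool size chosen at startup

    // Called when a pooled thread is about to block on a slow consumer.
    void maybeAddExtraThread()
    {
      if (blockedThreads.load() >= maxThreads)
      {
        ++extraThreads;
        std::thread([this]
        {
          runJobsUntilIdle();  // drain newly arriving work
          --extraThreads;      // the extra thread exits once it goes idle
        }).detach();
      }
    }

    void runJobsUntilIdle() {}  // placeholder for processing queued jobs
  };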
* MCOL-4841 dev6 Reconcile with changes in develop-6
* MCOL-4841 Some format corrections
* MCOL-4841 dev clean up some things based on review
* MCOL-4841 dev 6 ExeMgr Crashes after large join
This commit fixes up memory accounting issues in ExeMgr
* MCOL-4841 remove LDI change
Opened MCOL-4968 to address the issue
* MCOL-4841 Add fMaxBPPSendQueue to ResourceManager
This causes the setting to be loaded at run time (a restart is required to pick up a change); BPPSendthread gets this in its ctor.
Also rolled back changes to TupleHashJoinStep::smallRunnerFcn() that used a local variable to count locally allocated memory and then added it to the global counter at the function's end. Not counting the memory globally caused conversion to a UM-only join much later than it should have. This resulted in MCOL-4971.
* MCOL-4841 make blockedThreads and extraThreads atomic
Also restore previous scope of locks in bppsendthread. There is some small chance the new scope could be incorrect, and the performance boost is negligible. Better safe than sorry.
data types TEXT, CHAR, VARCHAR, FLOAT and DOUBLE are not yet supported by the vectorized path
This patch introduces an example for the Google Benchmark suite to measure
the perf diff between the legacy scan/filtering code and the templated
version.
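For reference, a minimal Google Benchmark skeleton of the kind such a
patch adds; the benchmarked bodies below are placeholders, not the real
primitives code:

  #include <benchmark/benchmark.h>

  static void BM_LegacyScan(benchmark::State& state)
  {
    for (auto _ : state)
    {
      // call the legacy column scan/filtering routine here
      benchmark::DoNotOptimize(state.iterations());
    }
  }
  BENCHMARK(BM_LegacyScan);

  static void BM_TemplatedScan(benchmark::State& state)
  {
    for (auto _ : state)
    {
      // call the templated scan/filtering routine here
      benchmark::DoNotOptimize(state.iterations());
    }
  }
  BENCHMARK(BM_TemplatedScan);

  BENCHMARK_MAIN();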
non-wide decimal column in the HAVING clause.
In buildAggregateColumn(), if an aggregate function (such as avg)
is applied on a non-wide decimal column, we were setting the precision
of the resulting column to -1. Later in the execution this got
converted to 255, since in some cases precision is stored as a uint8_t.
The predicate operations on a DECIMAL column have logic that uses the
wide Decimal::s128Value field if precision > 18. This logic incorrectly
used Decimal::s128Value instead of the correct value stored in the
narrow Decimal::value field, since the precision of the Decimal column
was 255. The fix is to set the aggregate column precision to
datatypes::INT64MAXPRECISION (18) in buildAggregateColumn() when the
aggregate is applied on a non-wide decimal column.
This commit also partially fixes -Wstrict-aliasing GCC warnings.
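A small self-contained illustration of the failure mode and why capping
the precision at 18 fixes it (field names follow the description above,
not the exact columnstore code):

  #include <cstdint>
  #include <iostream>

  int main()
  {
    // Storing a precision of -1 in a uint8_t silently turns it into 255 ...
    int precision = -1;
    uint8_t stored = static_cast<uint8_t>(precision);
    std::cout << int(stored) << "\n";                     // prints 255

    // ... and 255 > 18 sends the predicate code down the wide (s128Value)
    // path even though the value lives in the narrow field.
    const int INT64MAXPRECISION = 18;                     // what the fix assigns
    std::cout << (stored > INT64MAXPRECISION) << "\n";    // prints 1: wrong branch
    return 0;
  }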
* Introduce multigeneration aggregation
* Do not save unused part of RGDatas to disk
* Add IO error explanation (strerror)
* Reduce memory usage while aggregating
* introduce in-memory generations for better memory utilization
* Try to keep the number of buckets at a low limit
* Refactor disk aggregation a bit
* pass calculated hash into RowAggregation
* try to keep some RGData with free space in memory
* do not dump more than half of the rowgroups to disk if generations are
allowed; instead, start a new generation (see the sketch after this list)
* for each thread shift the first processed bucket at each iteration,
so the generations start more evenly
* Unify temp data location
* Explicitly create temp subdirectories
whether disk aggregation/join are enabled or not
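A conceptual sketch of the spill policy described above; the names are
invented for illustration and the real disk-aggregation code is
considerably more involved:

  #include <cstddef>
  #include <vector>

  struct AggSketch
  {
    std::vector<int> rowGroups;        // stand-in for in-memory RGDatas
    bool generationsAllowed = true;
    std::size_t generation = 0;

    void onMemoryPressure()
    {
      // Dump at most half of the in-memory rowgroups ...
      dumpToDisk(rowGroups.size() / 2);
      // ... and, if generations are enabled, continue in a new in-memory
      // generation instead of pushing everything to disk.
      if (generationsAllowed)
        ++generation;
    }

    void dumpToDisk(std::size_t n) { (void)n; }  // placeholder
  };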
mcsconfig.h and my_config.h have the following
pre-processor definitions:
1. Conflicting definitions coming from the standard cmake definitions:
- PACKAGE
- PACKAGE_BUGREPORT
- PACKAGE_NAME
- PACKAGE_STRING
- PACKAGE_TARNAME
- PACKAGE_VERSION
- VERSION
2. Conflicting definitions of other kinds:
- HAVE_STRTOLL - this is dirt in the MariaDB headers and should be
fixed in the server code. my_config.h erroneously performs
"#define HAVE_STRTOLL" instead of "#define HAVE_STRTOLL 1" in some
cases. The former is not CMake-compatible style; the latter is.
3. Non-conflicting definitions:
Otherwise, mcsconfig.h and my_config.h should be mutually compatible,
because both are generated by cmake on the same host machine. So
they should have exactly equal definitions like "HAVE_XXX", "SIZEOF_XXX", etc.
Observations:
- It's OK to include both mcsconfig.h and my_config.h provided that we
suppress duplicate definitions of the conflicting kinds #1 and #2 above.
- There is no need to suppress the duplicate definitions mentioned in #3,
as they are compatible!
- my_sys.h and m_ctype.h must always follow a CMake configuration header,
either my_config.h or mcsconfig.h (or both).
They must never be included without a preceding configuration header.
This change makes sure that we resolve conflicts by:
- either disallowing inclusion of mcsconfig.h and my_config.h
at the same time
- or by hiding conflicting definitions #1 and #2
(with their later restoring).
- also, by making sure that my_sys.h and m_ctype.h always follow
a CMake configuration file.
Details:
- idb_mysql.h can now be included only after my_config.h.
An attempt to use idb_mysql.h with mcsconfig.h instead of
my_config.h is caught by the "#error" preprocessor directive.
- mariadb_my_sys.h can now be included only after mcsconfig.h.
An attempt to use mariadb_my_sys.h without mcsconfig.h
(e.g. with my_config.h) is also caught by "#error".
- collation.h can now be included in two ways.
It now has the following effective structure:
#if defined(PREFER_MY_CONFIG_H) && defined(MY_CONFIG_H)
// Remember current conflicting definitions on the preprocessor stack
// Undefine current conflicting definitions
#endif
#include "mcsconfig.h"
#include "m_ctype.h"
#if defined(PREFER_MY_CONFIG_H) && defined(MY_CONFIG_H)
// Restore conflicting definitions from the preprocessor stack
#endif
and can be included as follows:
a. using only mcsconfig.h as a configuration header:
// my_config.h must not be included so far
#include "collation.h"
b. using my_config.h as the first included configuration file:
#define PREFER_MY_CONFIG_H // Force conflict resolution
#include "my_config.h" // can be included directly or indirectly
...
#include "collation.h"
Other changes:
- Adding helper header files
utils/common/mcsconfig_conflicting_defs_remember.h
utils/common/mcsconfig_conflicting_defs_restore.h
utils/common/mcsconfig_conflicting_defs_undef.h
to perform conflict resolution easier.
- Removing `#include "collation.h"` from a number of files,
as it's automatically included from rowgroup.h.
- Removing redundant `#include "utils_utf8.h"`.
This change is not directly related to the problem being fixed,
but it's nice to remove redundant directives for both collation.h
and utils_utf8.h from all the files that do not really need them.
(this change could probably have gone as a separate commit)
- Changing my_init() to MY_INIT(argv[0]) in the MCS services sources.
After the fix of the compilation failure, it appeared that ColumnStore
services compiled in debug mode crash due to recent changes in
safemalloc. The crash happened in strcmp() with `my_progname` as an
argument (where my_progname is a mysys global variable). This problem
should probably be fixed on the server side as well to avoid passing
NULL. But the majority of MariaDB executable programs also use
MY_INIT(argv[0]) rather than my_init(), so let's make MCS do what the
other programs do.
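In essence (a simplified illustration; the real service sources differ):

  #include "mcsconfig.h"       // a CMake configuration header must come first
  #include "mariadb_my_sys.h"  // wrapper that pulls in my_sys.h

  int main(int argc, char** argv)
  {
    MY_INIT(argv[0]);  // sets my_progname before calling my_init(); a bare
                       // my_init() left my_progname NULL, which crashed
                       // debug builds inside safemalloc's strcmp()
    // ... service startup ...
    return 0;
  }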
This change fixes:
MCOL-4462 CAST(varchar_expr AS DECIMAL(M,N)) returns a wrong result
MCOL-4500 Bit functions processing throws internally trying to cast char into decimal representation
MCOL-4532 CAST(AS DECIMAL) returns a garbage for large values
Also, this change makes string-to-decimal conversion 5-10 times faster,
depending on the exact data.
The performance improvement comes from the fact that, unlike the old
implementation, the new version does not do any "string" object copying.
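Illustrative only and not the actual columnstore parser: the gist is to
accumulate digits directly from a (pointer, length) view instead of
materializing intermediate "string" copies. Rounding and overflow
handling are omitted here.

  #include <cstddef>

  using int128_t = __int128;

  int128_t parseDecimal(const char* s, std::size_t len, unsigned scale)
  {
    int128_t v = 0;
    std::size_t i = 0;
    bool neg = (len != 0 && s[0] == '-');
    if (neg)
      ++i;
    bool seenPoint = false;
    unsigned fracDigits = 0;
    for (; i < len; ++i)
    {
      char c = s[i];
      if (c == '.') { seenPoint = true; continue; }
      if (c < '0' || c > '9')
        break;
      if (seenPoint && fracDigits == scale)
        break;                     // digits beyond the target scale are dropped
      v = v * 10 + (c - '0');
      if (seenPoint)
        ++fracDigits;
    }
    while (fracDigits++ < scale)
      v *= 10;                     // pad up to the requested scale
    return neg ? -v : v;
  }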
A non-JOIN condition like `WHERE c1=c2` (with c1 and c2 being columns of the
same table) was still not collation-aware after the main patches for MCOL-4064.
Additionally fixing StrFilterCmd::compare*() to address this.
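A hedged sketch of what a collation-aware column-vs-column comparison
looks like; the real StrFilterCmd::compare*() code differs and goes
through columnstore's own charset wrappers, and the strnncollsp() call
below assumes the MariaDB collation handler interface:

  #include <cstddef>
  #include "collation.h"  // CHARSET_INFO, included as described earlier

  // True when v1 and v2 compare equal under the column's collation,
  // e.g. 'abc' and 'ABC' under a case-insensitive collation.
  inline bool columnsEqual(CHARSET_INFO* cs,
                           const unsigned char* v1, std::size_t l1,
                           const unsigned char* v2, std::size_t l2)
  {
    return cs->coll->strnncollsp(cs, v1, l1, v2, l2) == 0;
  }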
Removed uint128 from joblist/lbidlist.*
Another toString() method for wide-decimal that is EMPTY/NULL aware
Unified decimal processing in WF functions
Fixed a potential issue in EqualCompData::operator() for
wide-decimal processing
Fixed some signedness warnings
For now it consists only of:
using int128_t = __int128;
using uint128_t = unsigned __int128;
All new primitive data types should go into this file in the future.
This commit also adds support in TupleHashJoinStep::forwardCPData,
although we currently do not support wide decimals as join keys.
Row estimation to determine the large side of the join is also updated.
2. Set Decimal precision in SimpleColumn::evaluate().
3. Add support for int128_t in ConstantColumn.
4. Set IDB_Decimal::s128Value in buildDecimalColumn().
5. Use width 16 as the first if-predicate for branching based on decimal width.
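Roughly, the width-based branching looks like this (field names follow
the commit description; the real code handles more cases):

  #include <cstdint>

  using int128_t = __int128;

  struct IDB_Decimal
  {
    int128_t s128Value;  // used when the column is a wide (16-byte) decimal
    int64_t value;       // used for narrow decimals
  };

  inline int signOf(const IDB_Decimal& d, uint32_t colWidth)
  {
    if (colWidth == 16)                              // wide width checked first
      return (d.s128Value > 0) - (d.s128Value < 0);
    return (d.value > 0) - (d.value < 0);
  }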