Mirror of https://github.com/mariadb-corporation/mariadb-columnstore-engine.git
MCOL-987 Add LZ4 compression.
* Adds `CompressInterfaceLZ4`, which uses the LZ4 API for compress/uncompress.
* Adds CMake machinery to search for LZ4 on the running host.
* All methods which use static data and do not modify any internal state become `static`, so they can be called without creating a specific object. This is possible because the header specification has not been modified: we still use two sections in the header, the first with file meta data, the second with pointers to the compressed chunks.
* The methods `compress`, `uncompress`, `maxCompressedSize`, and `getUncompressedSize` become pure virtual, so they can be overridden for other compression algorithms (see the sketch below).
* Adds the method `getChunkMagicNumber`, so the chunk magic number can be verified for each compression algorithm.
* Renames `IDBCompressInterface` to `CompressInterface` ("s/IDBCompressInterface/CompressInterface/g") as required.
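A minimal sketch of the interface shape described above, assuming hypothetical method signatures (the actual ColumnStore headers may differ). Only the lz4.h calls (LZ4_compressBound, LZ4_compress_default, LZ4_decompress_safe) are the library's real API; the class layout and the magic-number constant are illustrative.

    #include <lz4.h>
    #include <cstddef>
    #include <cstdint>

    class CompressInterface
    {
    public:
        virtual ~CompressInterface() = default;

        // Pure virtual, so every algorithm (snappy, LZ4, ...) supplies its own
        // implementation.
        virtual int compress(const char* in, size_t inLen, char* out, size_t* outLen) const = 0;
        virtual int uncompress(const char* in, size_t inLen, char* out, size_t* outLen) const = 0;
        virtual size_t maxCompressedSize(size_t uncompSize) const = 0;
        virtual bool getUncompressedSize(const char* in, size_t inLen, size_t* outLen) const = 0;

        // Per-algorithm magic number, so each chunk header can be verified
        // against the algorithm that is expected to have written it.
        virtual uint32_t getChunkMagicNumber() const = 0;
    };

    class CompressInterfaceLZ4 : public CompressInterface
    {
    public:
        int compress(const char* in, size_t inLen, char* out, size_t* outLen) const override
        {
            // *outLen is the output capacity on entry and the compressed size on exit.
            int n = LZ4_compress_default(in, out, static_cast<int>(inLen), static_cast<int>(*outLen));
            if (n <= 0)
                return -1;  // failed or output buffer too small
            *outLen = static_cast<size_t>(n);
            return 0;
        }

        int uncompress(const char* in, size_t inLen, char* out, size_t* outLen) const override
        {
            int n = LZ4_decompress_safe(in, out, static_cast<int>(inLen), static_cast<int>(*outLen));
            if (n < 0)
                return -1;  // corrupt input or undersized output buffer
            *outLen = static_cast<size_t>(n);
            return 0;
        }

        size_t maxCompressedSize(size_t uncompSize) const override
        {
            // Worst-case compressed size for an incompressible input.
            return static_cast<size_t>(LZ4_compressBound(static_cast<int>(uncompSize)));
        }

        bool getUncompressedSize(const char* in, size_t inLen, size_t* outLen) const override
        {
            // A raw LZ4 block does not embed the original size; a real
            // implementation would take it from ColumnStore's chunk header.
            (void)in; (void)inLen; (void)outLen;
            return false;
        }

        uint32_t getChunkMagicNumber() const override
        {
            return 0x4c5a3400;  // illustrative value, not the real constant
        }
    };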
@@ -121,9 +121,9 @@ int ColumnOpCompress0::saveBlock(IDBDataFile* pFile, const unsigned char* writeB
  * Constructor
  */
 
-ColumnOpCompress1::ColumnOpCompress1(Log* logger)
+ColumnOpCompress1::ColumnOpCompress1(uint32_t compressionType, Log* logger)
 {
-    m_compressionType = 1;
+    m_compressionType = compressionType;
     m_chunkManager = new ChunkManager();
 
     if (logger)
@@ -164,11 +164,7 @@ bool ColumnOpCompress1::abbreviatedExtent(IDBDataFile* pFile, int colWidth) cons
 
 int ColumnOpCompress1::blocksInFile(IDBDataFile* pFile) const
 {
-    CompFileHeader compFileHeader;
-    readHeaders(pFile, compFileHeader.fControlData, compFileHeader.fPtrSection);
-
-    compress::IDBCompressInterface compressor;
-    return compressor.getBlockCount(compFileHeader.fControlData);
+    return m_chunkManager->getBlockCount(pFile);
 }
 
 
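The second hunk removes the on-the-fly IDBCompressInterface instance: blocksInFile now asks the ChunkManager for the block count instead of re-reading the headers itself. For callers that still only need header fields, the "static methods" point above means the renamed CompressInterface helpers can be invoked without constructing an object. A rough sketch under assumed names and an assumed header layout, not the actual ColumnStore code:

    #include <cstdint>
    #include <cstring>

    namespace compress
    {
    class CompressInterface
    {
    public:
        // Hypothetical: reads the block count out of the file meta-data section
        // of the header. Static because it only inspects the (unchanged) header
        // layout and touches no instance state.
        static uint64_t getBlockCount(const void* hdrBuf)
        {
            uint64_t blocks = 0;
            // Offset 16 is illustrative only; the real field position comes from
            // the header specification mentioned in the commit message.
            std::memcpy(&blocks, static_cast<const uint8_t*>(hdrBuf) + 16, sizeof(blocks));
            return blocks;
        }
    };
    }  // namespace compress

    // Before: compress::IDBCompressInterface compressor;
    //         auto n = compressor.getBlockCount(hdr);
    // After:  auto n = compress::CompressInterface::getBlockCount(hdr);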