/* Copyright (C) 2014 InfiniDB, Inc.

   This program is free software; you can redistribute it and/or
   modify it under the terms of the GNU General Public License
   as published by the Free Software Foundation; version 2 of
   the License.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
   MA 02110-1301, USA. */

// $Id: writeengine.h 4726 2013-08-07 03:38:36Z bwilkinson $

/** @file */

#ifndef _WRITE_ENGINE_H_
#define _WRITE_ENGINE_H_

#include <stdio.h>
#include <string>

// the header file for fd
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
// end

#include <boost/lexical_cast.hpp>

#ifdef _MSC_VER
#include <unordered_set>
#else
#include <tr1/unordered_set>
#endif

#include "we_brm.h"
#include "we_colop.h"
#include "we_index.h"
#include "we_tablemetadata.h"
#include "we_dbrootextenttracker.h"
#include "we_rbmetawriter.h"
#include "brmtypes.h"
#include "we_chunkmanager.h"

#define IO_BUFF_SIZE 81920

#if defined(_MSC_VER) && defined(WRITEENGINE_DLLEXPORT)
#define EXPORT __declspec(dllexport)
#else
#define EXPORT
#endif

/** Namespace WriteEngine */
namespace WriteEngine
{

//... Total compression operations: un_compressed, compressed
const int UN_COMPRESSED_OP = 0;
const int COMPRESSED_OP = 1;
const int TOTAL_COMPRESS_OP = 2;

//...Forward class declarations
class Log;

// Bug4312. During transactions, we need to mark each modified extent as invalid.
// To prevent thrashing (marking the same extent every time we get an LBID for it),
// we remember the starting LBID of each extent already marked for the transaction.
// We also add a sequence number so we can age entries out of the list for truly
// long-running transactions.
struct TxnLBIDRec
{
    std::tr1::unordered_set<BRM::LBID_t> m_LBIDSet;
    std::vector<BRM::LBID_t> m_LBIDs;
    std::vector<execplan::CalpontSystemCatalog::ColDataType> m_ColDataTypes;

    TxnLBIDRec() {}
    ~TxnLBIDRec() {}

    void AddLBID(BRM::LBID_t lbid, const execplan::CalpontSystemCatalog::ColDataType& colDataType)
    {
        if (m_LBIDSet.insert(lbid).second)
        {
            m_LBIDs.push_back(lbid);
            m_ColDataTypes.push_back(colDataType);
        }
    }
};
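
// Minimal usage sketch (hypothetical values, not part of the write engine API):
// AddLBID() records an LBID only the first time it is seen, so repeatedly
// touching the same extent within a transaction stays cheap.
//
//   TxnLBIDRec rec;
//   rec.AddLBID(1024, execplan::CalpontSystemCatalog::INT);  // newly inserted
//   rec.AddLBID(1024, execplan::CalpontSystemCatalog::INT);  // duplicate, ignored
//   // rec.m_LBIDs.size() == 1 && rec.m_LBIDSet.size() == 1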
typedef boost::shared_ptr<TxnLBIDRec> SP_TxnLBIDRec_t;
typedef std::set<BRM::LBID_t> dictLBIDRec_t;

/** @brief Range information for 1 or 2 extents changed by a DML operation. */
struct ColSplitMaxMinInfo
{
    ExtCPInfo fSplitMaxMinInfo[2];      /** @brief internal to write engine: min/max ranges for data in one and, possibly, a second extent. */
    ExtCPInfo* fSplitMaxMinInfoPtrs[2]; /** @brief pointers to the CPInfos in fSplitMaxMinInfo above */

    ColSplitMaxMinInfo(execplan::CalpontSystemCatalog::ColDataType colDataType, int colWidth)
        : fSplitMaxMinInfo{ ExtCPInfo(colDataType, colWidth), ExtCPInfo(colDataType, colWidth) }
    {
        fSplitMaxMinInfoPtrs[0] = fSplitMaxMinInfoPtrs[1] = NULL;  // disabled by default.
    }
};

typedef std::vector<ColSplitMaxMinInfo> ColSplitMaxMinInfoList;
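
// Hypothetical sketch (illustrative only) of how a caller could request range
// tracking for just the first extent touched by an operation: a NULL entry in
// fSplitMaxMinInfoPtrs means "do not track that extent".
//
//   ColSplitMaxMinInfo info(execplan::CalpontSystemCatalog::INT, 4);
//   info.fSplitMaxMinInfoPtrs[0] = &info.fSplitMaxMinInfo[0];  // track the first extent
//   // info.fSplitMaxMinInfoPtrs[1] stays NULL: a possible second extent is not tracked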
/** Class WriteEngineWrapper */
class WriteEngineWrapper : public WEObj
{
public:
    /**
     * @brief Constructor
     */
    EXPORT WriteEngineWrapper();

    EXPORT WriteEngineWrapper(const WriteEngineWrapper& rhs);

    /**
     * @brief Default Destructor
     */
    EXPORT ~WriteEngineWrapper();

    /************************************************************************
     * Interface definitions
     ************************************************************************/
    /**
     * @brief Performs static/global initialization for BRMWrapper.
     * Should be called once from the main thread.
     */
    EXPORT static void init(unsigned subSystemID);

    /**
     * @brief Build an index from an oid file (NOTE: this is a write-engine internal
     * function used for test purposes only; not for generic use)
     */
    int buildIndex(const OID& colOid, const OID& treeOid, const OID& listOid,
                   execplan::CalpontSystemCatalog::ColDataType colDataType, int width, int hwm,
                   bool resetFile, uint64_t& totalRows, int maxRow = IDX_DEFAULT_READ_ROW)
    {
        return -1;
    }

    /**
     * @brief Build an index from a file
     */
    int buildIndex(const std::string& sourceFileName, const OID& treeOid, const OID& listOid,
                   execplan::CalpontSystemCatalog::ColDataType colDataType, int width, int hwm, bool resetFile,
                   uint64_t& totalRows, const std::string& indexName, Log* pLogger,
                   int maxRow = IDX_DEFAULT_READ_ROW)
    {
        return -1;
    }

    /**
     * @brief Close an index file
     */
    void closeIndex() { }

    /**
     * @brief Close a dictionary
     */
    int closeDctnry(const TxnID& txnid, int i, bool realClose = true)
    {
        return m_dctnry[op(i)]->closeDctnry(realClose);
    }

    /**
     * @brief Commit transaction
     */
    int commit(const TxnID& txnid)
    {
        m_txnLBIDMap.erase(txnid);
        return BRMWrapper::getInstance()->commit(txnid);
    }

    /**
     * @brief Convert interface value list to internal value array
     */
    EXPORT void convertValArray(const size_t totalRow,
                                const execplan::CalpontSystemCatalog::ColType& cscColType,
                                const ColType colType,
                                ColTupleList& curTupleList, void* valArray,
                                bool bFromList = true);

    /**
     * @brief Updates range information given old range information, old values,
     * new values and column information.
     */
    EXPORT void updateMaxMinRange(const size_t totalNewRow, const size_t totalOldRow,
                                  const execplan::CalpontSystemCatalog::ColType& cscColType,
                                  const ColType colType,
                                  const void* valArray, const void* oldValArray,
                                  ExtCPInfo* maxMin, bool canStartWithInvalidRange);
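
    // Conceptual sketch only (not the actual implementation): for inserts the new
    // values are folded into the extent's casual-partitioning range, e.g.
    //
    //   for each new value v:  min = std::min(min, v);  max = std::max(max, v);
    //
    // For updates and deletes the old values matter too: if a removed value was the
    // current min or max, the stored range can no longer be tightened safely and may
    // have to be widened or marked invalid instead.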
    /**
     * @brief Create a column, including object ids for column data and bitmap files
     * @param dataOid column datafile object id
     * @param dataType column data type
     * @param dataWidth column width
     * @param dbRoot DBRoot under which the file is to be located (1-based)
     * @param partition Starting partition number for segment file path (0-based).
     * @param compressionType compression type
     */
    EXPORT int createColumn(const TxnID& txnid, const OID& dataOid,
                            execplan::CalpontSystemCatalog::ColDataType dataType, int dataWidth,
                            uint16_t dbRoot, uint32_t partition = 0, int compressionType = 0);

    //BUG931
    /**
     * @brief Fill a new column with a default value using row-ids from a reference column
     *
     * @param txnid Transaction id
     * @param dataOid OID of the new column
     * @param dataType Data-type of the new column
     * @param dataWidth Width of the new column
     * @param defaultVal Default value to be filled in the new column
     * @param refColOID OID of the reference column
     * @param refColDataType Data-type of the reference column
     * @param refColWidth Width of the reference column
     */
    EXPORT int fillColumn(const TxnID& txnid, const OID& dataOid, const execplan::CalpontSystemCatalog::ColType& colType,
                          ColTuple defaultVal,
                          const OID& refColOID, execplan::CalpontSystemCatalog::ColDataType refColDataType,
                          int refColWidth, int refCompressionType, bool isNULL, int compressionType,
                          const std::string& defaultValStr, const OID& dictOid = 0, bool autoincrement = false);
    /**
     * @brief Create index-related files, including object ids for index tree and list files
     *
     * @param treeOid index tree file object id
     * @param listOid index list file object id
     */
    int createIndex(const TxnID& txnid, const OID& treeOid, const OID& listOid)
    {
        int rc = -1;
        return rc;
    }

    /**
     * @brief Create dictionary
     * @param dctnryOid dictionary signature file object id
     * @param partition Starting partition number for segment file path (0-based).
     * @param segment segment number
     * @param compressionType compression type
     */
    EXPORT int createDctnry(const TxnID& txnid, const OID& dctnryOid,
                            int colWidth, uint16_t dbRoot,
                            uint32_t partition = 0, uint16_t segment = 0, int compressionType = 0);
    /**
     * @brief Delete a list of rows from a table
     * @param colStructList column struct list
     * @param colOldValueList column old values list (return value)
     * @param rowIdList row id list
     */
    EXPORT int deleteRow(const TxnID& txnid, const std::vector<CSCTypesList>& colExtentsColType,
                         std::vector<ColStructList>& colExtentsStruct,
                         std::vector<void*>& colOldValueList, std::vector<RIDList>& ridLists, const int32_t tableOid);

    /**
     * @brief Delete a list of bad rows from a table
     * @param colStructList column struct list
     * @param rowIdList row id list
     */
    EXPORT int deleteBadRows(const TxnID& txnid, ColStructList& colStructs,
                             RIDList& ridList, DctnryStructList& dctnryStructList);
    /**
     * @brief Delete a dictionary signature and its token
     * @param dctnryStruct dictionary structure
     * @param dctnryTuple dictionary tuple
     */
    //ITER17_Obsolete
    // int deleteToken(const TxnID& txnid, Token& token); // Files need to be opened beforehand
    // int deleteToken(const TxnID& txnid, DctnryStruct& dctnryStruct, Token& token);
    /**
     * @brief Drop a column, including object ids for the column data file
     * @param dataOid column datafile object id
     */
    int dropColumn(const TxnID& txnid, const OID dataOid)
    {
        return m_colOp[0]->dropColumn((FID) dataOid);
    }

    /**
     * @brief Drop files
     * @param dataOids column and dictionary datafile object ids
     */
    int dropFiles(const TxnID& txnid, const std::vector<int32_t>& dataOids)
    {
        return m_colOp[0]->dropFiles(dataOids);
    }

    /**
     * @brief Delete files for one partition
     * @param dataOids column and dictionary datafile object ids
     */
    int deletePartitions(const std::vector<OID>& dataOids,
                         const std::vector<BRM::PartitionInfo>& partitions)
    {
        return m_colOp[0]->dropPartitions(dataOids, partitions);
    }

    int deleteOIDsFromExtentMap(const TxnID& txnid, const std::vector<int32_t>& dataOids)
    {
        return m_colOp[0]->deleteOIDsFromExtentMap(dataOids);
    }

    /**
     * @brief Drop index-related files, including object ids for index tree and list files
     * @param treeOid index tree file object id
     * @param listOid index list file object id
     */
    int dropIndex(const TxnID& txnid, const OID& treeOid, const OID& listOid)
    {
        return -1;
    }

    /**
     * @brief Drop a dictionary
     * @param dctnryOid dictionary signature file object id
     * @param treeOid dictionary tree file object id
     * @param listOid index list file object id
     */
    int dropDctnry(const TxnID& txnid, const OID& dctnryOid, const OID& treeOid, const OID& listOid)
    {
        return m_dctnry[0]->dropDctnry(dctnryOid);
    }
    /**
     * @brief Flush VM write cache
     * @param None
     */
    EXPORT void flushVMCache() const;
    /**
     * @brief Insert values into a table
     * @param colStructList column structure list
     * @param colValueList column value list
     * @param dictStrList dictionary values list
     * @param dbRootExtentTrackers dbroot trackers
     * @param bFirstExtentOnThisPM true when there is no extent on this PM
     * @param insertSelect for insert-with-select, the hwm block is skipped
     * @param isAutoCommitOn if autocommit is on, only the hwm block is versioned,
     * otherwise everything is versioned
     * @param tableOid used to get table meta data
     * @param isFirstBatchPm tracks whether this batch is the first batch for this PM.
     */
    EXPORT int insertColumnRecs(const TxnID& txnid,
                                const CSCTypesList& cscColTypeList,
                                ColStructList& colStructList,
                                ColValueList& colValueList,
                                DctnryStructList& dctnryStructList,
                                DictStrList& dictStrList,
                                std::vector<boost::shared_ptr<DBRootExtentTracker> >& dbRootExtentTrackers,
                                RBMetaWriter* fRBMetaWriter,
                                bool bFirstExtentOnThisPM,
                                bool insertSelect = false,
                                bool isAutoCommitOn = false,
                                OID tableOid = 0,
                                bool isFirstBatchPm = false);

    EXPORT int insertColumnRecsBinary(const TxnID& txnid,
                                      ColStructList& colStructList,
                                      std::vector<uint64_t>& colValueList,
                                      DctnryStructList& dctnryStructList,
                                      DictStrList& dictStrList,
                                      std::vector<boost::shared_ptr<DBRootExtentTracker> >& dbRootExtentTrackers,
                                      RBMetaWriter* fRBMetaWriter,
                                      bool bFirstExtentOnThisPM,
                                      bool insertSelect = false,
                                      bool isAutoCommitOn = false,
                                      OID tableOid = 0,
                                      bool isFirstBatchPm = false);
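
    // Hypothetical call sketch (argument values are illustrative only): a DML/bulk
    // caller builds per-column structure/value lists plus the dictionary lists, then
    // lets the defaulted flags describe the batch, e.g.
    //
    //   rc = wrapper.insertColumnRecs(txnid, cscColTypes, colStructs, colValues,
    //                                 dctnryStructs, dictStrs, dbRootTrackers,
    //                                 rbMetaWriter,
    //                                 true,   // bFirstExtentOnThisPM: no extent on this PM yet
    //                                 false,  // insertSelect
    //                                 true,   // isAutoCommitOn: version only the hwm block
    //                                 tableOid,
    //                                 true);  // isFirstBatchPm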
    /**
     * @brief Insert values into system tables
     * @param colStructList column structure list
     * @param colValueList column value list
     * @param dictStrList dictionary values list
     */
    EXPORT int insertColumnRec_SYS(const TxnID& txnid,
                                   const CSCTypesList& cscColTypeList,
                                   ColStructList& colStructList,
                                   ColValueList& colValueList,
                                   DctnryStructList& dctnryStructList,
                                   DictStrList& dictStrList,
                                   const int32_t tableOid);

    /**
     * @brief Insert a row
     * @param colStructList column structure list
     * @param colValueList column value list
     * @param dictStrList dictionary values list
     */
    EXPORT int insertColumnRec_Single(const TxnID& txnid,
                                      const CSCTypesList& cscColTypeList,
                                      ColStructList& colStructList,
                                      ColValueList& colValueList,
                                      DctnryStructList& dctnryStructList,
                                      DictStrList& dictStrList,
                                      const int32_t tableOid);
    /**
     * @brief Open dictionary
     * @param txnid relevant transaction
     * @param dctnryStruct dictionary column to open
     * @param useTmpSuffix Bulk HDFS usage: use *.tmp file suffix
     */
    // @bug 5572 - HDFS usage: add *.tmp file backup flag
    int openDctnry(const TxnID& txnid, DctnryStruct dctnryStruct, bool useTmpSuffix)
    {
        int compress_op = op(dctnryStruct.fCompressionType);
        m_dctnry[compress_op]->setTransId(txnid);
        return m_dctnry[compress_op]->openDctnry(
            dctnryStruct.dctnryOid,
            dctnryStruct.fColDbRoot,
            dctnryStruct.fColPartition,
            dctnryStruct.fColSegment,
            useTmpSuffix);
    }
    /**
     * @brief Rollback transaction (common portion)
     */
    EXPORT int rollbackCommon(const TxnID& txnid, int sessionId);

    /**
     * @brief Rollback transaction
     */
    EXPORT int rollbackTran(const TxnID& txnid, int sessionId);

    /**
     * @brief Rollback transaction blocks
     */
    EXPORT int rollbackBlocks(const TxnID& txnid, int sessionId);

    /**
     * @brief Rollback transaction versions
     */
    EXPORT int rollbackVersion(const TxnID& txnid, int sessionId);
    /**
     * @brief Set the IsInsert flag in the ChunkManagers.
     * This forces a flush at the end of a block. Used only for bulk insert.
     */
    void setIsInsert(bool bIsInsert)
    {
        m_colOp[COMPRESSED_OP]->chunkManager()->setIsInsert(bIsInsert);
        m_dctnry[COMPRESSED_OP]->chunkManager()->setIsInsert(bIsInsert);  // keep both managers in lockstep
    }

    /**
     * @brief Get the IsInsert flag as set in the ChunkManagers.
     * Since both chunk managers are supposed to be in lockstep as regards the
     * isInsert flag, we need only grab one.
     */
    bool getIsInsert()
    {
        return m_colOp[COMPRESSED_OP]->chunkManager()->getIsInsert();
    }
    std::tr1::unordered_map<TxnID, SP_TxnLBIDRec_t>& getTxnMap()
    {
        return m_txnLBIDMap;
    }

    std::tr1::unordered_map<TxnID, dictLBIDRec_t>& getDictMap()
    {
        return m_dictLBIDMap;
    }
    /**
     * @brief Flush the ChunkManagers.
     */
    int flushChunks(int rc, const std::map<FID, FID>& columOids)
    {
        int rtn1 = m_colOp[COMPRESSED_OP]->chunkManager()->flushChunks(rc, columOids);
        int rtn2 = m_dctnry[COMPRESSED_OP]->chunkManager()->flushChunks(rc, columOids);

        return (rtn1 != NO_ERROR ? rtn1 : rtn2);
    }
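
    // Hypothetical bulk-insert call sequence (illustrative only; real callers live in
    // the DML/bulk components): the insert flag is raised before the batch so both
    // chunk managers flush at block boundaries, and flushChunks() is passed the
    // batch's return code so it can finalize or abandon the pending chunks.
    //
    //   wrapper.setIsInsert(true);
    //   int rc = wrapper.insertColumnRecs(/* ... batch arguments ... */);
    //   rc = wrapper.flushChunks(rc, columnOidMap);   // columnOidMap: FID -> FID
    //   wrapper.setIsInsert(false);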
    /**
     * @brief Set the transaction id into all fileops
     */
    void setTransId(const TxnID& txnid)
    {
        for (int i = 0; i < TOTAL_COMPRESS_OP; i++)
        {
            m_colOp[i]->setTransId(txnid);
            m_dctnry[i]->setTransId(txnid);
        }
    }

    /**
     * @brief Set the fIsBulk flag into all fileops
     */
    void setBulkFlag(bool isBulk)
    {
        for (int i = 0; i < TOTAL_COMPRESS_OP; i++)
        {
            m_colOp[i]->setBulkFlag(isBulk);
            m_dctnry[i]->setBulkFlag(isBulk);
        }
    }

    /**
     * @brief Set the fIsFix flag into all fileops
     */
    void setFixFlag(bool isFix = false)
    {
        for (int i = 0; i < TOTAL_COMPRESS_OP; i++)
        {
            m_colOp[i]->setFixFlag(isFix);
            m_dctnry[i]->setFixFlag(isFix);
        }
    }
    /**
     * @brief Let the chunkmanager start a transaction.
     */
    int startTransaction(const TxnID& txnid)
    {
        int rc = m_colOp[COMPRESSED_OP]->chunkManager()->startTransaction(txnid);
        //if ( rc == 0)
        //    rc = m_dctnry[COMPRESSED_OP]->chunkManager()->startTransaction(txnid);
        return rc;
    }

    /**
     * @brief Let the chunkmanager confirm a transaction.
     */
    int confirmTransaction(const TxnID& txnid)
    {
        int rc = m_colOp[COMPRESSED_OP]->chunkManager()->confirmTransaction(txnid);
        return rc;
    }

    /**
     * @brief Let the chunkmanager end a transaction.
     */
    int endTransaction(const TxnID& txnid, bool success)
    {
        int rc = m_colOp[COMPRESSED_OP]->chunkManager()->endTransaction(txnid, success);
        //if ( rc == 0)
        //    rc = m_dctnry[COMPRESSED_OP]->chunkManager()->endTransaction(txnid, success);
        return rc;
    }
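
    // Hypothetical lifecycle sketch for a compressed-column transaction (doWrites()
    // is an illustrative placeholder, not a real function): the chunk manager is told
    // when the transaction starts, confirms the chunks if the writes succeeded, and
    // is closed out either way.
    //
    //   int rc = wrapper.startTransaction(txnid);
    //   if (rc == NO_ERROR)
    //       rc = doWrites(wrapper, txnid);            // updateColumnRec(), etc.
    //   if (rc == NO_ERROR)
    //       rc = wrapper.confirmTransaction(txnid);
    //   rc = wrapper.endTransaction(txnid, rc == NO_ERROR);
    //   wrapper.commit(txnid);                        // or rollbackTran() on failure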
    /**
     * @brief Tokenize a dictionary signature into a token
     * @param dctnryStruct dictionary structure
     * @param dctnryTuple dictionary tuple
     * @param useTmpSuffix Bulk HDFS usage: use *.tmp file suffix
     */
    EXPORT int tokenize(const TxnID& txnid, DctnryTuple&, int compType);  // Files need to be opened first
    EXPORT int tokenize(const TxnID& txnid, DctnryStruct& dctnryStruct, DctnryTuple& dctnryTuple,
                        bool useTmpSuffix);
    /**
     * @brief Update values in a column (new interface)
     * @param colStructList column structure list
     * @param colValueList column value list
     * @param colOldValueList column old values list (return value)
     * @param ridList row id list
     */
    EXPORT int updateColumnRec(const TxnID& txnid,
                               const std::vector<CSCTypesList>& colExtentsColType,
                               std::vector<ColStructList>& colExtentsStruct,
                               ColValueList& colValueList,
                               std::vector<void*>& colOldValueList,
                               std::vector<RIDList>& ridLists,
                               std::vector<DctnryStructList>& dctnryExtentsStruct,
                               DctnryValueList& dctnryValueList,
                               const int32_t tableOid);

    /**
     * @brief Update values in columns
     * @param colStructList column structure list
     * @param colValueList column value list
     * @param ridList row id list
     */
    EXPORT int updateColumnRecs(const TxnID& txnid,
                                const CSCTypesList& cscColTypeList,
                                std::vector<ColStruct>& colStructList,
                                ColValueList& colValueList,
                                const RIDList& ridLists,
                                const int32_t tableOid);
    /**
     * @brief Release the specified table lock.
     * @param lockID Table lock id to be released.
     * @param errorMsg Return error message
     */
    EXPORT int clearTableLockOnly(uint64_t lockID,
                                  std::string& errorMsg);

    /**
     * @brief Rollback the specified table
     * @param tableOid Table to be rolled back
     * @param lockID Table lock id of the table to be rolled back.
     * Currently used for logging only.
     * @param tableName Name of table associated with tableOid.
     * Currently used for logging only.
     * @param applName Application that is driving this bulk rollback.
     * Currently used for logging only.
     * @param debugConsole Enable logging to console
     * @param errorMsg Return error message
     */
    EXPORT int bulkRollback(OID tableOid,
                            uint64_t lockID,
                            const std::string& tableName,
                            const std::string& applName,
                            bool debugConsole, std::string& errorMsg);
    /**
     * @brief Update the SYSCOLUMN next value
     * @param columnoid OID of the autoincrement column
     * @param nextVal next value to store
     */
    EXPORT int updateNextValue(const TxnID txnId, const OID& columnoid, const uint64_t nextVal,
                               const uint32_t sessionID, const uint16_t dbRoot);

    /**
     * @brief Write active datafiles to disk
     */
    EXPORT int flushDataFiles(int rc, const TxnID txnId, std::map<FID, FID>& columnOids);

    /**
     * @brief Process versioning for batch insert - only version the hwm block.
     */
    EXPORT int processBatchVersions(const TxnID& txnid, std::vector<Column> columns, std::vector<BRM::LBIDRange>& rangeList);

    EXPORT void writeVBEnd(const TxnID& txnid, std::vector<BRM::LBIDRange>& rangeList);
    /************************************************************************
     * Future implementations
     ************************************************************************/
    /**
     * @brief Begin transaction
     */
    // todo: add implementation when we work on version control
    // int beginTran(const TransID transOid) { return NO_ERROR; }

    /**
     * @brief End transaction
     */
    // todo: add implementation when we work on version control
    // int endTran(const TransID transOid) { return NO_ERROR; }

    // WIP
    void setDebugLevel(const DebugLevel level)
    {
        WEObj::setDebugLevel(level);

        for (int i = 0; i < TOTAL_COMPRESS_OP; i++)
        {
            m_colOp[i]->setDebugLevel(level);
            m_dctnry[i]->setDebugLevel(level);
        }
    } // todo: cleanup
    /************************************************************************
     * Internal use definitions
     ************************************************************************/
private:
    /**
     * @brief Check whether the passed parameters are valid
     */
    int checkValid(const TxnID& txnid, const ColStructList& colStructList, const ColValueList& colValueList,
                   const RIDList& ridList) const;

    /**
     * @brief Find the smallest column for this table
     */
    void findSmallestColumn(uint32_t& colId, ColStructList colStructList);

    /**
     * @brief Convert an interface column value to its internal column type representation
     */
    void convertValue(const execplan::CalpontSystemCatalog::ColType& cscColType, ColType colType, void* valArray,
                      size_t pos, boost::any& data, bool fromList = true);
    /**
     * @brief Convert a column value to its internal representation
     *
     * @param colType Column data-type
     * @param value Memory pointer for storing the output value. Should be pre-allocated
     * @param data Column data
     */
    void convertValue(const execplan::CalpontSystemCatalog::ColType& cscColType, const ColType colType, void* value,
                      boost::any& data);

    /**
     * @brief Print input values from the DDL/DML processors
     */
    void printInputValue(const ColStructList& colStructList, const ColValueList& colValueList,
                         const RIDList& ridList) const;

    /**
     * @brief Process version buffer
     */
    int processVersionBuffer(IDBDataFile* pFile, const TxnID& txnid, const ColStruct& colStruct,
                             int width, int totalRow, const RID* rowIdArray,
                             std::vector<BRM::LBIDRange>& rangeList);

    /**
     * @brief Process version buffers for update and delete @Bug 1886,2870
     */
    int processVersionBuffers(IDBDataFile* pFile, const TxnID& txnid, const ColStruct& colStruct,
                              int width, int totalRow, const RIDList& ridList,
                              std::vector<BRM::LBIDRange>& rangeList);

    int processBeginVBCopy(const TxnID& txnid, const std::vector<ColStruct>& colStructList, const RIDList& ridList,
                           std::vector<BRM::VBRange>& freeList, std::vector<std::vector<uint32_t> >& fboLists,
                           std::vector<std::vector<BRM::LBIDRange> >& rangeLists, std::vector<BRM::LBIDRange>& rangeListTot);
    /**
     * @brief Common methods to write values to a column
     */
    int writeColumnRec(const TxnID& txnid,
                       const CSCTypesList& cscColTypes,
                       const ColStructList& colStructList,
                       ColValueList& colValueList,
                       RID* rowIdArray, const ColStructList& newColStructList,
                       ColValueList& newColValueList, const int32_t tableOid,
                       bool useTmpSuffix, bool versioning = true,
                       ColSplitMaxMinInfoList* maxMins = NULL);

    int writeColumnRecBinary(const TxnID& txnid, const ColStructList& colStructList,
                             std::vector<uint64_t>& colValueList,
                             RID* rowIdArray, const ColStructList& newColStructList,
                             std::vector<uint64_t>& newColValueList,
                             const int32_t tableOid,
                             bool useTmpSuffix, bool versioning = true);

    //@Bug 1886,2870 pass the address of the ridList vector
    int writeColumnRecUpdate(const TxnID& txnid,
                             const CSCTypesList& cscColTypeList,
                             const ColStructList& colStructList,
                             const ColValueList& colValueList, std::vector<void*>& colOldValueList,
                             const RIDList& ridList, const int32_t tableOid,
                             bool convertStructFlag = true, ColTupleList::size_type nRows = 0,
                             std::vector<ExtCPInfo*>* cpInfos = NULL);

    // For update-column-from-column to use
    int writeColumnRecords(const TxnID& txnid, const CSCTypesList& cscColTypeList,
                           std::vector<ColStruct>& colStructList,
                           ColValueList& colValueList, const RIDList& ridLists,
                           const int32_t tableOid, bool versioning = true, std::vector<ExtCPInfo*>* cpInfos = NULL);
    /**
     * @brief Utility method to convert a rowid to a column file location
     */
    int convertRidToColumn(RID& rid, uint16_t& dbRoot, uint32_t& partition, uint16_t& segment,
                           const RID filesPerColumnPartition, const RID extentsPerSegmentFile,
                           const RID extentRows, uint16_t startDBRoot, unsigned dbrootCnt);

    void AddDictToList(const TxnID txnid, std::vector<BRM::LBID_t>& lbids);

    void RemoveTxnFromDictMap(const TxnID txnid);

    // Bug 4312: We use a hash map to hold the set of starting LBIDs for a given
    // txn so that we don't waste time marking the same extent as invalid. This
    // list should be trimmed if it gets too big.
    int AddLBIDtoList(const TxnID txnid,
                      const ColStruct& colStruct,
                      const int fbo,
                      ExtCPInfo* cpInfo = NULL  // provide a CPInfo pointer if you want max/min updated.
                      );

    // Get the CPInfo for a given starting LBID and column description structure.
    int GetLBIDRange(const BRM::LBID_t startingLBID, const ColStruct& colStruct, ExtCPInfo& cpInfo);

    // Mark the extents of the transaction as invalid. Erase the transaction from the txn->lbidsrec map if requested.
    int markTxnExtentsAsInvalid(const TxnID txnid, bool erase = false);

    // Write the LBIDs' new ranges.
    int setExtentsNewMaxMins(const ColSplitMaxMinInfoList& maxMins, bool haveSplit);

    int RemoveTxnFromLBIDMap(const TxnID txnid);

    int op(int compressionType)
    {
        return (compressionType > 0 ? COMPRESSED_OP : UN_COMPRESSED_OP);
    }
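
    // For example, op(0) selects UN_COMPRESSED_OP while op(2) selects COMPRESSED_OP,
    // so any positive compression type routes through the compressed file operations
    // (m_colOp[COMPRESSED_OP] / m_dctnry[COMPRESSED_OP]).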
    // This is a map of sets of LBIDs for each transaction. A transaction's list will be removed upon commit or rollback.
    std::tr1::unordered_map<TxnID, SP_TxnLBIDRec_t> m_txnLBIDMap;

    // MCOL-1160: We need to track dictionary LBIDs so we can tell PrimProc
    // to flush the blocks after an API bulk-write.
    std::tr1::unordered_map<TxnID, dictLBIDRec_t> m_dictLBIDMap;

    ColumnOp* m_colOp[TOTAL_COMPRESS_OP];  // column operations
    Dctnry* m_dctnry[TOTAL_COMPRESS_OP];   // dictionary operations
    OpType m_opType;                       // operation type
    DebugLevel m_debugLevel;               // debug level
};

} // end of namespace

#undef EXPORT

#endif // _WRITE_ENGINE_H_