* Release server lock temporarily when BF aborting local SR
transaction during view event processing
* Check the transaction state for BF aborts in before_prepare() after
the lock has been reacquired following fragment removal (see the
sketch after this list)
* Send rollback fragment only from streaming_rollback()
* Check the fragment removal error code in the prepare phase. It is
possible that the transaction gets BF aborted during fragment removal.
* Mark the fragment certified in certify_fragment() even if the provider
returns a certification failure error. With the current wsrep-API error
codes it may not be possible to distinguish a certification failure
from a BF abort during fragment replication. This may also be a
provider bug. As a result, a rollback fragment may sometimes be
replicated when it is not necessary.
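
A minimal sketch of the before_prepare() check above, using hypothetical
types in place of the real wsrep-lib classes: the lock is released for
fragment removal, reacquired, and the transaction state is then rechecked
because a BF abort may have happened in the meantime.

    #include <mutex>

    enum class txn_state { executing, must_abort, aborted, committed };

    struct transaction
    {
        txn_state state{txn_state::executing};
        bool bf_aborted() const { return state == txn_state::must_abort; }
    };

    // Returns 0 on success, 1 if the transaction must roll back
    // instead of preparing.
    int before_prepare(std::unique_lock<std::mutex>& lock,
                       transaction& trx)
    {
        lock.unlock();         // release the lock for fragment removal
        /* remove_fragments() would run here without the lock held */
        lock.lock();           // reacquire before continuing
        if (trx.bf_aborted())  // a BF abort may have happened meanwhile
        {
            return 1;
        }
        return 0;
    }
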
SR transactions are BF aborted or rolled back on primary view
changes according to the following rules:
* Ongoing local SR transactions are BF aborted if the processing
server is not found in the current view.
* All remote SR transactions whose origin server is not included in the
current view are rolled back.
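
The rules above could be implemented roughly as follows; the view,
sr_transaction, bf_abort() and rollback() names are illustrative
placeholders, not the wsrep-lib API.

    #include <algorithm>
    #include <vector>

    struct view
    {
        std::vector<int> members;
        bool includes(int id) const
        {
            return std::find(members.begin(), members.end(),
                             id) != members.end();
        }
    };

    struct sr_transaction { int origin_server_id; };

    void bf_abort(sr_transaction&) { /* placeholder BF abort path */ }
    void rollback(sr_transaction&) { /* placeholder rollback path */ }

    void process_primary_view(const view& v, int own_server_id,
                              std::vector<sr_transaction>& local_sr,
                              std::vector<sr_transaction>& remote_sr)
    {
        // Rule 1: BF abort ongoing local SR transactions if this
        // server is no longer part of the view.
        if (!v.includes(own_server_id))
        {
            for (auto& trx : local_sr) bf_abort(trx);
        }
        // Rule 2: roll back remote SR transactions whose origin
        // server dropped out of the view.
        for (auto& trx : remote_sr)
        {
            if (!v.includes(trx.origin_server_id)) rollback(trx);
        }
    }
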
* Enable the code path to BF abort a high priority SR applier
* Pass ws_handle and ws_meta to the high priority service rollback
call to allow total ordering of the rollback process (sketched
after this list)
* Added server_id to the transaction in order to be able to stop the
streaming applier during a high priority BF abort
* Added the missing applying of commit fragments
* Don't clear fragments for a replaying SR transaction
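
A sketch of a total-ordered rollback of a streaming applier; the types
and the apply_rollback_fragment() name are assumptions, but it shows why
the write set meta data (server_id, transaction_id) is needed to locate
and stop the right streaming applier.

    #include <cstdint>
    #include <map>
    #include <utility>

    struct ws_handle { uint64_t opaque; };

    struct ws_meta
    {
        int       server_id;
        uint64_t  transaction_id;
        long long seqno;
    };

    struct streaming_applier
    {
        void stop() { /* roll back and release applier resources */ }
    };

    using applier_map =
        std::map<std::pair<int, uint64_t>, streaming_applier>;

    // Applied in total order: the write set handle and meta data come
    // from the rollback fragment, and (server_id, transaction_id)
    // identify the streaming applier that must be stopped.
    void apply_rollback_fragment(applier_map& appliers,
                                 const ws_handle& handle,
                                 const ws_meta& meta)
    {
        (void)handle;
        auto it = appliers.find(std::make_pair(meta.server_id,
                                               meta.transaction_id));
        if (it != appliers.end())
        {
            it->second.stop();
            appliers.erase(it);
        }
    }
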
The write set handle and meta data are needed for SR transactions
where the commit context is not known when the transaction starts.
The passed handle and meta data can be set through the client_state
prepare_for_ordering() call before performing the commit.
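
A sketch of how the ordering context might be stored before commit. The
prepare_for_ordering() name comes from the notes above, while the struct
layout and the signature are assumptions for illustration only.

    #include <cstdint>

    struct ws_handle { uint64_t opaque{0}; };
    struct ws_meta   { long long seqno{-1}; };

    struct client_state_sketch
    {
        ws_handle handle{};
        ws_meta   meta{};
        bool      ordered{false};

        // Store the write set handle and meta data so that the later
        // commit can be totally ordered even though the commit context
        // was not known when the SR transaction started.
        void prepare_for_ordering(const ws_handle& h, const ws_meta& m,
                                  bool is_ordered)
        {
            handle  = h;
            meta    = m;
            ordered = is_ordered;
        }
    };

    // Usage before commit (hypothetical):
    //   cs.prepare_for_ordering(handle, meta, true);
    //   /* ordered commit calls follow */
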
The intended purpose of the local mode was local storage access without
entering replication hooks. However, the same can be achieved with the
high priority mode. Removed the replicating mode and use the local mode
instead to denote locally processing clients.
Library-side retrying of the last statement was not quite useful, as
there might not be enough information for it after the statement has
been processed. It is better to handle retrying on the DBMS side. Also
removed the after_statement_result enumeration; after_statement() now
returns a plain int.
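
A sketch of DBMS-side retry handling with the plain int return value;
run_statement(), execute_with_retry() and the zero-on-success convention
are placeholders, not part of the library.

    // Placeholder hooks standing in for statement execution and the
    // library call; only the return convention is illustrated.
    int run_statement()   { return 0; }
    int after_statement() { return 0; }

    int execute_with_retry(int max_retries)
    {
        for (int attempt = 0; attempt <= max_retries; ++attempt)
        {
            run_statement();
            if (after_statement() == 0)
            {
                return 0;   // statement and its ordering completed
            }
            // The DBMS decides here whether the statement can be
            // retried, since the library no longer exposes a retry
            // result.
        }
        return 1;
    }
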
The interface method can be used to notify the DBMS implementation
about state changes in a well-defined order. The call is made under
server_state mutex protection.
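
A sketch of what such a notification hook could look like; the
server_state_id values and the log_state_change() name are assumptions
used only to illustrate the calling convention.

    enum class server_state_id
    {
        disconnected, connecting, connected, joiner, joined, synced,
        donor, disconnecting
    };

    struct dbms_notification_service
    {
        // Invoked while the server_state mutex is held, so the calls
        // arrive in a well-defined order; implementations must not
        // call back into server_state from here.
        virtual void log_state_change(server_state_id prev,
                                      server_state_id next) = 0;
        virtual ~dbms_notification_service() = default;
    };
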
* Pass a condition variable to client_state
* Notify all condition variable waiters when changing the transaction
status to aborted
* Wait in before_command() until the aborting transaction has reached
the aborted state
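
A sketch of the condition variable handshake between the BF aborter and
before_command(); the client_state_sync type and txn_status values are
hypothetical stand-ins for the real client_state.

    #include <condition_variable>
    #include <mutex>

    enum class txn_status { active, aborting, aborted };

    struct client_state_sync
    {
        std::mutex              mutex;
        std::condition_variable cond;   // condition variable passed in
        txn_status              status{txn_status::active};

        // Called by the BF aborter once the rollback has finished.
        void mark_aborted()
        {
            std::lock_guard<std::mutex> lock(mutex);
            status = txn_status::aborted;
            cond.notify_all();          // wake every waiter
        }

        // Called at the start of command processing: block until a
        // pending abort has completed so the command sees a settled
        // transaction state.
        void before_command()
        {
            std::unique_lock<std::mutex> lock(mutex);
            cond.wait(lock, [this]
                      { return status != txn_status::aborting; });
        }
    };
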
* Added bootstrap service call to do DBMS side bootstrap operations
during the cluster bootstrap.
* Added last_committed_gtid() to the provider interface
* Implemented the wait_for_gtid() provider call (see the sketch after
this list)
* Pass initial position to the server state
* Propagate server max protocol version to provider init options
* Store the gtid from the connected call to make the cluster id and the
connect position available
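
A sketch of how the two GTID calls might be combined for a causal read;
only the method names come from the notes above, while the signatures
and the provider_sketch type are assumptions.

    #include <string>

    struct gtid { std::string cluster_id; long long seqno{-1}; };

    struct provider_sketch
    {
        gtid position{};    // updated as write sets are committed

        // Return the position of the last committed write set.
        gtid last_committed_gtid() const { return position; }

        // Block until the given position has been committed locally,
        // or the timeout expires; returns 0 on success.
        int wait_for_gtid(const gtid& g, int timeout_seconds) const
        {
            (void)g; (void)timeout_seconds;
            return 0;       // stubbed out in this sketch
        }
    };

    // A causal read could then be implemented roughly as:
    //   gtid pos = provider.last_committed_gtid();
    //   if (provider.wait_for_gtid(pos, 30) == 0) { /* safe to read */ }
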
Depending on the DBMS client session allocation strategy, the
client id may or may not be available when the client_session
is constructed; therefore there should be a method to assign
an id after construction. Close/cleanup methods were added to
clean up open transactions appropriately.
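
A sketch of assigning the client id after construction and cleaning up
on close; the client_session_sketch type and its method names are
illustrative only.

    #include <cassert>

    struct client_session_sketch
    {
        int  id{-1};            // not yet known at construction time
        bool open_transaction{false};

        // Assign the id once the DBMS has allocated the session.
        void assign_id(int new_id)
        {
            assert(id == -1);   // the id may be assigned only once
            id = new_id;
        }

        // Close the session and clean up any transaction left open.
        void close()
        {
            if (open_transaction)
            {
                /* roll back and release the open transaction here */
                open_transaction = false;
            }
        }
    };
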