When updating non-transactional tables inside a multi-statement transaction
with binlog_direct_non_transactional_updates=1, the non-transactional
updates are binlogged directly through the statement cache while the
transaction cache is still being appended to by the ongoing transaction.
Thus, move engine_binlog_info out of binlog_cache_mngr and into the
individual stmt/trx binlog_cache_data, so that a separate engine_binlog_info
can be active for the statement cache and for the transaction cache.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Support for SAVEPOINT, ROLLBACK TO SAVEPOINT, rolling back a failed
statement (keeping the active transaction), and rolling back the whole
transaction.
For savepoints (and start-of-statement), if the binlog data to be rolled
back is still in the in-memory part of the trx cache, we can simply
truncate the cache to that point.
But if cache contents containing one or more savepoint/start-of-statement
points have to be spilled as out-of-band data, then split the spill at each
such point and inform the engine of the savepoints.
In InnoDB, when a savepoint is set, save the state of the forest of perfect
binary trees being built. Then at rollback, restore the appropriate state.
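In outline, the InnoDB side could look like the following (a sketch only;
the struct fields and function names are assumptions, not the actual
patch):

    #include <cstdint>
    #include <string>
    #include <vector>

    /* Snapshot of the forest of perfect binary trees at a savepoint. */
    struct oob_forest_state
    {
      uint64_t node_count;        /* nodes written out-of-band so far      */
      uint64_t last_root_offset;  /* file offset of the latest tree root   */
      uint64_t pending_bytes;     /* bytes still buffered in the trx cache */
    };

    struct trx_savepoint
    {
      std::string name;
      oob_forest_state forest;    /* state saved when the savepoint is set */
    };

    /* On SAVEPOINT: remember the current forest state. */
    void savepoint_set(std::vector<trx_savepoint> &savepoints,
                       const oob_forest_state &current, const char *name)
    {
      savepoints.push_back(trx_savepoint{name, current});
    }

    /* On ROLLBACK TO SAVEPOINT: restore the saved state so that OOB data
       belonging to the rolled-back part is not referenced at commit.       */
    void savepoint_rollback(std::vector<trx_savepoint> &savepoints,
                            oob_forest_state &current, const char *name)
    {
      while (!savepoints.empty() && savepoints.back().name != name)
        savepoints.pop_back();
      if (!savepoints.empty())
        current= savepoints.back().forest;
    }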
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Instead of returning only one chunk at a time, make
ha_innodb_binlog_reader::read_data() try to read all the chunks on the
page. This reduces the number of times each reader has to latch pages in
the page FIFO, which contends on a global mutex that is also shared with
the writer.
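The reading loop then becomes roughly the following (a sketch; the chunk
format with a 2-byte length prefix and the helper shape are assumptions):

    #include <cstddef>
    #include <cstring>

    /* While the page is latched once, copy out as many complete chunks as
       fit in the caller's buffer instead of a single chunk per call.       */
    size_t read_chunks_on_page(const unsigned char *page, size_t page_size,
                               size_t &in_page_offset,
                               unsigned char *out, size_t out_size)
    {
      size_t copied= 0;
      while (in_page_offset + 2 <= page_size)
      {
        size_t len= page[in_page_offset] | (page[in_page_offset + 1] << 8);
        if (len == 0 || len > page_size - in_page_offset - 2)
          break;                               /* no more data on this page */
        if (copied + len > out_size)
          break;                               /* caller's buffer is full   */
        memcpy(out + copied, page + in_page_offset + 2, len);
        copied+= len;
        in_page_offset+= 2 + len;
      }
      return copied;
    }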
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
This patch makes replication crash-safe with the new binlog implementation,
even when --innodb-flush-log-at-trx-commit=0|2. The idea is to not send any
binlog events to the slave until they have become durable on the master,
thus avoiding that a slave replicates a transaction that is later lost
during master crash recovery, which would diverge the slave from the
master.
Keep track of which point in the binlog has been durably synced to disk
(meaning the corresponding LSN has been durably synced in the InnoDB redo
log). Each write to the binlog inserts an entry with the offset and the
corresponding LSN into a FIFO. A dump thread will first read only up to the
durable point in the binlog. It will then check the LSN FIFO and do an
InnoDB redo log sync if anything is pending. Then the FIFO is emptied of
any LSNs that have now become durable, the durable point in the binlog is
updated, and reading of the binlog can continue.
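In outline, the bookkeeping works like this (a sketch under assumed names;
the real code lives inside InnoDB and uses its own latches and log API):

    #include <cstdint>
    #include <deque>
    #include <mutex>

    struct offset_lsn { uint64_t binlog_offset; uint64_t lsn; };

    static std::mutex fifo_mutex;
    static std::deque<offset_lsn> lsn_fifo;   /* filled by binlog writes   */
    static uint64_t durable_binlog_offset= 0; /* dump threads read to here */

    /* Called for each binlog write. */
    void note_binlog_write(uint64_t offset, uint64_t lsn)
    {
      std::lock_guard<std::mutex> lock(fifo_mutex);
      lsn_fifo.push_back({offset, lsn});
    }

    /* Called by a dump thread that has caught up to the durable point.
       flushed_lsn() and flush_redo_up_to() stand in for the InnoDB log
       API.                                                                 */
    uint64_t advance_durable_point(uint64_t (*flushed_lsn)(),
                                   void (*flush_redo_up_to)(uint64_t))
    {
      std::unique_lock<std::mutex> lock(fifo_mutex);
      if (!lsn_fifo.empty())
      {
        uint64_t wanted= lsn_fifo.back().lsn;
        lock.unlock();
        flush_redo_up_to(wanted);             /* InnoDB redo log sync      */
        lock.lock();
      }
      uint64_t durable_lsn= flushed_lsn();
      while (!lsn_fifo.empty() && lsn_fifo.front().lsn <= durable_lsn)
      {
        durable_binlog_offset= lsn_fifo.front().binlog_offset;
        lsn_fifo.pop_front();
      }
      return durable_binlog_offset;           /* safe to send up to here   */
    }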
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Keep track of, for each binlog file, how many open transactions have
out-of-band data starting in that file. Then at the start of each new
binlog file, record in the header page the file_no of the earliest file to
which commit records in this file may still hold references back to
out-of-band (OOB) records.
Use this in PURGE BINARY LOGS, so that when a dump thread (slave
connection) is active in file number N, and that file (or a later one) may
require looking back into an earlier file number M for out-of-band records,
purge already stops at file number M. This way, we avoid purge accidentally
deleting a binlog file that a dump thread would later fail on because it
needs to read out-of-band data from it.
This patch also includes placeholder data for a similar facility for XA
references; the actual implementation of XA support is left for later.
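The resulting purge limit computation is roughly as follows (a sketch; the
helper names are assumptions):

    #include <cstdint>

    /* Each binlog file's header records oob_ref_file_no(N): the earliest
       file_no that commit records in file N may reference for OOB data.   */
    uint64_t first_file_to_keep(uint64_t earliest_dump_thread_file_no,
                                uint64_t (*oob_ref_file_no)(uint64_t))
    {
      /* A dump thread reading file N may need OOB data back to M <= N.    */
      return oob_ref_file_no(earliest_dump_thread_file_no);
    }

    /* PURGE BINARY LOGS stops at the smaller of the requested stop point
       and the earliest file that must be kept for OOB references.         */
    uint64_t purge_up_to(uint64_t requested_file_no,
                         uint64_t keep_from_file_no)
    {
      return requested_file_no < keep_from_file_no ?
             requested_file_no : keep_from_file_no;
    }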
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Mostly various fixes to avoid initializing or creating any data or files for
the legacy binlog.
A possible later refinement could be to sub-class the binlog class
differently for legacy and in-engine binlogs, with separate virtual
functions for behaviour that differs and with common functionality
extracted into sub-methods. This could remove some of the
if (opt_binlog_engine_hton) conditionals.
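Such a refactoring could take roughly this shape (purely illustrative; the
class and method names are made up):

    /* Behaviour that currently sits behind if (opt_binlog_engine_hton)
       becomes virtual methods on two sub-classes.                         */
    class binlog_base
    {
    public:
      virtual ~binlog_base() {}
      virtual bool create_files()= 0;      /* differs per implementation   */
      virtual bool write_cache()= 0;
      bool commit_ordered() { /* logic common to both */ return false; }
    };

    class legacy_binlog : public binlog_base
    {
      bool create_files() override
      { /* create the index file and .0000NN files */ return false; }
      bool write_cache() override
      { /* write the events out to the binlog file */ return false; }
    };

    class engine_binlog : public binlog_base
    {
      bool create_files() override
      { /* nothing to create; the engine owns the files */ return false; }
      bool write_cache() override
      { /* hand the cache over to the engine */ return false; }
    };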
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Still ToDo: restrict auto-purge so that it does not purge any binlog file
with out-of-band data that might still be needed by a connected slave.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Add option --binlog-directory, used to place the binlogs outside the data
directory (e.g. to put them on a different disk/file system).
Disallow specifying the binlog name in --log-bin when
--binlog-storage-engine is used, as the name is then not user configurable.
A ToDo (not implemented in this commit) is to use the --binlog-directory
value, if given, also for the legacy binlog implementation.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
With this commit, out-of-band binlogging of large event groups as multiple
smaller records interleaved with other event groups is now working.
Instead of being flushed to disk when it reaches @@binlog_cache_size, the
binlog cache is binlogged as an out-of-band record. Then at transaction
commit, a commit record is written containing just the GTID and a link to
the out-of-band data.
To facilitate append-only operation, the binlogged records do not have a
"next" pointer. Instead, they are written out as a forest of perfect binary
trees, with the leftmost leaf of each tree pointing to the root of the
previous tree. This structure lets the binlog reader read out the event
group data consecutively for the binlog dump thread while maintaining only
O(log(N)) memory during the reading.
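The writer side of this structure can be sketched as follows (illustrative
only; the record layout and the write_node() helper are assumptions).
Whenever two perfect trees of equal height exist, their common parent is
appended, so at most O(log(N)) roots are live at any time:

    #include <cstdint>
    #include <vector>

    struct tree_root { uint64_t file_offset; uint32_t height; };

    struct oob_forest
    {
      /* Roots of the forest so far, largest tree first; O(log N) entries. */
      std::vector<tree_root> roots;

      /* Append one out-of-band chunk.  write_node(left, right) stands in
         for appending a record to the binlog (a leaf with a back-pointer,
         or an interior node referencing its two children) and returning
         the record's file offset.                                          */
      void append_leaf(uint64_t (*write_node)(uint64_t, uint64_t))
      {
        /* Each leaf points back to the root of the last tree in the forest
           at the time it is written; for the leftmost leaf of a finished
           tree this is the previous tree's root, as described above.       */
        uint64_t prev_root= roots.empty() ? 0 : roots.back().file_offset;
        tree_root node= { write_node(prev_root, 0), 0 };
        while (!roots.empty() && roots.back().height == node.height)
        {
          /* Two perfect trees of equal height: append their parent node.  */
          node.file_offset= write_node(roots.back().file_offset,
                                       node.file_offset);
          node.height++;
          roots.pop_back();
        }
        roots.push_back(node);
      }
    };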
As part of this commit, the existing binlog reader code is refactored and
greatly improved, with a much cleaner explicit state machine and handling
of chunk/page/file boundaries, etc.
Also fixes some bugs in gtid_search::find_gtid_pos().
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
To find the target position, we first loop backwards over the binlog
files, reading the initial GTID state written at the start of each to find
the file to start in. We then binary-search on the differential GTID
states written every --innodb-binlog-state-interval bytes.
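In outline, the per-file search looks like this (a sketch under assumed
helper names, not the actual reader code):

    #include <cstdint>

    /* Find the last state-interval boundary whose GTID state is still
       before the wanted position; a short linear scan from there finds the
       exact event.  state_before_target(off) stands in for "the GTID state
       stored at offset off does not yet include the target position".      */
    uint64_t binary_search_gtid_pos(uint64_t file_size,
                                    uint64_t state_interval,
                                    bool (*state_before_target)(uint64_t))
    {
      uint64_t lo= 0;                          /* full state at file start */
      uint64_t hi= file_size / state_interval; /* last differential state  */
      while (lo < hi)
      {
        uint64_t mid= lo + (hi - lo + 1) / 2;
        if (state_before_target(mid * state_interval))
          lo= mid;                   /* state at mid is still before target */
        else
          hi= mid - 1;
      }
      return lo * state_interval;    /* start the linear scan from here     */
    }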
This patch makes only minimal changes to the dump thread code in
sql_repl.cc to be able to send binlog data out to the client. Some
re-factoring/cleanup should be done in a follow-up patch to separate the
two code paths more cleanly, avoid a lot of if-statements, and free the
binlog-in-engine code path of much of the cruft from the legacy binlog
implementation.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Every N bytes (hardcoded at 64k for now, to become a configurable setting),
write the binlog GTID state into the binlog tablespace. This allows quickly
finding a given GTID position by binary-searching to the prior GTID state
in the tablespace and then doing a small linear scan from that point.
The full binlog state is dumped at the start of the binlog file; the
remaining states dumped are differential states containing only the changed
(domain_id, server_id) pairs, to save space when the binlog state is large.
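Conceptually, a differential state contains just the entries that changed
since the previously written state (a sketch; the real code works on the
server's binlog state object, and these maps are stand-ins):

    #include <cstdint>
    #include <map>
    #include <utility>

    /* (domain_id, server_id) -> seq_no, standing in for the GTID state.   */
    typedef std::map<std::pair<uint32_t, uint32_t>, uint64_t> gtid_state;

    /* Only the changed pairs are written at each interval; the full state
       is written only at the start of the binlog file.                    */
    gtid_state differential_state(const gtid_state &prev,
                                  const gtid_state &cur)
    {
      gtid_state diff;
      for (const auto &entry : cur)
      {
        auto it= prev.find(entry.first);
        if (it == prev.end() || it->second != entry.second)
          diff.insert(entry);
      }
      return diff;
    }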
This commit only implements the writing of the binlog state to the
tablespace at regular intervals; the binary search will be implemented in a
subsequent commit.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Partial commit of the greater MDEV-34348 scope.
MDEV-34348: MariaDB is violating clang-16 -Wcast-function-type-strict
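As an illustration of the class of construct this warning flags (an
illustrative example, not code from the patch):

    /* clang-16 with -Wcast-function-type-strict warns when a function
       pointer is cast to a function pointer type with a different
       prototype, as is common when registering generic callbacks.         */
    typedef void (*generic_cb)(void *arg);

    static void handle_counter(int *counter) { ++*counter; }

    /* Warns: the parameter type does not match the target prototype.
       generic_cb cb= reinterpret_cast<generic_cb>(handle_counter);        */

    /* The usual fix is a thin wrapper with the exact expected signature.  */
    static void handle_counter_cb(void *arg)
    {
      handle_counter(static_cast<int *>(arg));
    }
    static generic_cb cb= handle_counter_cb;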
Reviewed By:
============
Marko Mäkelä <marko.makela@mariadb.com>
When a derived table which has distinct values and BLOB fields is
materialized, an index is created over all columns to ensure that only
unique values are placed into the result.
This index is created in a special mode, HA_UNIQUE_HASH, to support BLOBs.
Later, the optimizer may incorrectly choose this index to retrieve values
from the derived table, although such an index cannot be used for data
retrieval.
This commit excludes HA_UNIQUE_HASH indexes from being added to the
`JOIN::keyuse` array, thus preventing their subsequent use for data
retrieval.
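The shape of the check is roughly the following (a sketch only; the flag
value and the surrounding structures are stand-ins, not the actual patch):

    #include <cstdint>

    static const uint32_t HA_UNIQUE_HASH_FLAG= 1U << 20; /* assumed value  */

    struct key_def { uint32_t flags; };

    /* Predicate applied while filling JOIN::keyuse: hash-only unique
       indexes created for duplicate elimination during materialization
       cannot be used for ref access, so they are never added.             */
    bool key_usable_for_keyuse(const key_def &key)
    {
      return !(key.flags & HA_UNIQUE_HASH_FLAG);
    }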
Make CREATE .. SELECT work consistently on replication.
Row-based replication does not execute CREATE .. SELECT but instead a
plain CREATE TABLE. CREATE .. SELECT created the implicit system fields in
an unusual place: in between the declared fields and the select fields.
That was done because the select_field_pos logic requires the select
fields to go last in create_list.
So CREATE .. SELECT on the master and CREATE TABLE on the slave created
the system fields at different positions, and replication got a field
mismatch.
To fix this, CREATE .. SELECT is changed to create the implicit system
fields in the usual place, at the end, and select_field_pos is updated to
handle this case.
Fixed by checking whether handler_stats is active, instead of checking
thd->variables.log_slow_verbosity & LOG_SLOW_VERBOSITY_ENGINE.
Reviewed-by: Sergei Petrunia <sergey@mariadb.com>