Marko mentions it could be caused by MDEV-15740, where InnoDB does not
flush the redo log as often as it should with innodb_flush_log_at_trx_commit=1.
The workaround is to use innodb_flush_log_at_trx_commit=2 which,
according to MDEV-15740, is more durable.
Disks with native 4K sectors need 4K alignment and size for unbuffered IO
(i.e. for files opened with FILE_FLAG_NO_BUFFERING).
InnoDB opens the redo log with FILE_FLAG_NO_BUFFERING, yet it always does
512-byte IOs. Thus, the IO on native 4K sectors will fail, rendering
InnoDB non-functional.
The fix is to check whether OS_FILE_LOG_BLOCK_SIZE is a multiple of the
logical sector size, and if it is not, reopen the redo log without the
FILE_FLAG_NO_BUFFERING flag.
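A minimal sketch of that check, assuming a Windows build where
logical_sector_size has already been queried from the OS; the function
below is illustrative, not the actual InnoDB code:
```
#include <windows.h>

static const DWORD OS_FILE_LOG_BLOCK_SIZE = 512;  // InnoDB redo log block size

// Unbuffered IO (FILE_FLAG_NO_BUFFERING) requires every IO to be aligned
// to and sized in whole logical sectors, which 512-byte log blocks cannot
// satisfy on native 4K disks, so fall back to buffered IO there.
static HANDLE open_redo_log(const wchar_t *path, DWORD logical_sector_size)
{
  DWORD flags = FILE_FLAG_NO_BUFFERING;
  if (OS_FILE_LOG_BLOCK_SIZE % logical_sector_size)
    flags = 0;  // reopen without FILE_FLAG_NO_BUFFERING
  return CreateFileW(path, GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ,
                     NULL, OPEN_EXISTING, flags, NULL);
}
```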
"my_snprintf(NULL, 0, ...)" does not follow what snprintf does. Change
the call in wsrep_dump_rbr_buf_with_header to use snprintf (note that
wsrep_dump_rbr_buf was using snprintf all the way).
Fix an off-by-one error in comparison so it actually prints the output
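For reference, the snprintf contract being relied on here is the C99/C++11
one; the format string and values below are illustrative, not taken from
the wsrep code:
```
#include <cstdio>
#include <cstdlib>

int main()
{
  const char *tag = "len";   // illustrative values only
  int value = 42;
  // C99/C++11 snprintf(NULL, 0, ...) returns the number of characters
  // that would have been written; my_snprintf() does not follow this.
  int needed = std::snprintf(NULL, 0, "%s: %d", tag, value);
  char *buf = (char *) std::malloc(needed + 1);  // +1 for the trailing NUL
  std::snprintf(buf, needed + 1, "%s: %d", tag, value);
  std::puts(buf);
  std::free(buf);
  return 0;
}
```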
For some reason, some of these suppressions would fail to suppress
when the code is compiled with clang 6.0, Debug and -DWITH_ASAN=ON.
Possibly it is related to the number of .* or the length of the
regular expression strings.
NULL values when there is no DEFAULT
- Merged the alter_non_null test case into the alter_not_null test case.
- Renamed the alter_non_null_debug test case to alter_not_null_debug.
One can create a table with the same name used for a `field` CHECK constraint
and a `table` CHECK constraint.
For example:
`create table t(a int check(a>0), constraint a check(a>10));`
But when inserting new rows, the same error is always raised.
For example, with
`insert into t values (-1);`
and
`insert into t values (10);`
the same error `ER_CONSTRAINT_FAILED` is obtained, and it is not clear which
constraint is violated.
This patch solves this so that if a field constraint is violated, the first
parameter in the error message is `table.field_name`, and if a table constraint
is violated, the first parameter in the error message is `constraint_name`.
Correct 898a8c3c0c to work when the newer debhelper-10.2 is installed from
xenial-backports (or jessie-backports).
Use the gcc version instead of the debproxy version; this is likely a gcc
issue (as disabling LTO and gcc's linker plugin fixes it).
* ignore CHECK constraint for historical rows;
* FOREIGN KEY test case.
TODO:
MDEV-16301 IB: use real table name for error messages on ALTER
Closes tempesta-tech/mariadb#491
Closes #748
When the definition of the index used for hash join was created
in create_hj_key_for_table(), it could cause a memory overwrite
due to a bug that led to an underestimation of the number of
index components.
Building this plugin, which requires run-time access to the network,
uses a lot of disk space, and is slow, was already partially disabled.
This way we also ensure at the cmake level that it never runs, even if
for some autodetection reason it at times thought it could run.
This fixes the error message:
fatal: unable to access 'https://github.com/awslabs/aws-sdk-cpp.git/':
Problem with the SSL CA cert (path? access rights?)
This complements commit ecb0e0ade4 that
disabled a bunch of plugins from being built on Travis-CI (to save
time and disk space).
When the plugins are not built, the packaging phase will fail due to
missing files. This change omits those files from packaging so the
process can complete successfully.
Change the float comparison function to use a negated version when
comparing for equality. This actually produces less code when compiling
with optimizations (O3) on.
* Exclude some storage engines from Travis to conserve
build time and disk usage per job. Excluded:
TOKUDB MROONGA SPIDER OQGRAPH PERFSCHEMA SPHINX
* Increase travis_wait from default 20m to 30 for MTR
* Use travis_wait for long running MTR command (wait
30m instead of default 20m)
* Increase testcase-timeout to 20m for OSX, 2m for Linux
* Set ccache size only on Linux, adjust timeout again
* Increase cache push timeout to 5 mins
* Remove AWS defines, not needed
* Remove commented out ASAN rules, has been disabled
previously since it has a significant impact on job
runtime, should be used more in buildbot instead
* Misc cleanup and fixes
Several improvements have been made so that builds run
faster and with fewer canceled jobs:
* Set ccache max size to 1GB. Was 512MB for Linux
(too low for MariaDB) and 5GB on macOS with defaults;
* Don't install libasan in Travis if not necessary.
Since ASAN is disabled for the time being, save
time/resources for other steps;
* Decrease number of parallel processes. To prevent
resource exhaustion leading to poor performance. According
to Travis docs, a max of 4 concurrent processes should be
run per job:
https://docs.travis-ci.com/user/common-build-problems/#My-build-script-is-killed-without-any-error
* Reconsider tests exec order and split huge main and rocksdb
test suites into their own job, decreasing the chance of going
over the Travis job execution limit and getting killed;
* Increase Travis testcase-timeout to 4 minutes. Occasionally
on Ubuntu target and frequently on macOS, many tests in main,
rpl, binlog suites take longer than 2 minutes, resulting in
many jobs failing, when in reality the failing tests didn't
get a chance to complete. From my testing, along with the other
speedups, i.e. increasing ccache size, a timeout of 4 minutes
should be OK. Revert to 3 minutes if necessary.
* Build with GCC and Clang version 5,6 only.
* Rename GCC_VERSION to CC_VERSION for clarity. We are using
two compilers after all, GCC and Clang.
* Stop using the somewhat obsolete Clang 4 in Travis. It was also
the reason for the failing test suites in MDEV-15430.
MDEV-7257 made a dump thread read from the binlog concurrently with
writers as long as the read bytes are below a watermark
(MYSQL_BIN_LOG::binlog_end_pos). However, it turned out to be possible for
a dump thread reader to reach bytes past the watermark through a
feature of IO_CACHE that fills the internal buffer; while doing
so it could read what the reader is not supposed to see (the bytes
above MYSQL_BIN_LOG::binlog_end_pos).
The issue is fixed by constraining the IO_CACHE buffer fill to respect
the watermark.
An added unit test proves that reading from the file is bound by an
external parameter passed to the IO_CACHE::end_of_file cache member.
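A minimal sketch of the fix's idea, with a made-up helper name (the real
change lives inside the IO_CACHE read path):
```
#include <cstddef>

// Clamp a buffered refill so it never reads past an externally supplied
// watermark (IO_CACHE::end_of_file, i.e. binlog_end_pos in the server).
static size_t clamp_refill(unsigned long long pos_in_file,
                           unsigned long long end_of_file,
                           size_t buffer_size)
{
  unsigned long long readable = end_of_file - pos_in_file;
  return readable < buffer_size ? (size_t) readable : buffer_size;
}
```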
Problem:
push_handler() created sp_handler_entry instances on THD::main_mem_root,
which is freed only after the SP instructions finish executing.
So in case of a CONTINUE HANDLER inside a loop (e.g. WHILE), this approach
leaked thread memory on every loop iteration.
Changes:
- Removing the sp_handler_entry declaration; it's not really needed.
- Fixing the data type of sp_rcontext::m_handlers from
Dynamic_array<sp_handler_entry*> to Dynamic_array<sp_instr_hpush_jump*>.
- Fixing sp_rcontext::push_handler() to push the pointer to
an sp_instr_hpush_jump instance onto the handler stack.
This instance contains everything we need;
there is no need to allocate anything else.
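A sketch of the resulting shape of the handler stack, with std::vector
standing in for MariaDB's Dynamic_array and a forward declaration standing
in for the real instruction class:
```
#include <vector>

class sp_instr_hpush_jump;                        // owned by the SP itself

struct sp_rcontext_sketch
{
  // Was Dynamic_array<sp_handler_entry*>; now only stores pointers to
  // already existing instructions, so nothing is allocated per iteration.
  std::vector<sp_instr_hpush_jump *> m_handlers;

  void push_handler(sp_instr_hpush_jump *i) { m_handlers.push_back(i); }
  void pop_handler() { m_handlers.pop_back(); }
};
```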
The previous correction of the patch for MDEV-16473 did not work
correctly for databases whose names started with '*'.
Added a test case with a database named "*".
For the purpose of reporting an error to the error log, the shutdown
thread was attempting to access
current_thd->variables.lc_messages->errmsgs->errmsgs, whereas
current_thd was NULL.
We should log errors according to the global lc_messages setting instead
of the session setting.
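A hedged sketch of the idea with stub types (the real server structures
differ in detail; global_system_variables is the server's global settings
object):
```
struct MY_LOCALE_ERRMSGS { const char **errmsgs; };
struct MY_LOCALE { MY_LOCALE_ERRMSGS *errmsgs; };
struct System_variables { MY_LOCALE *lc_messages; };
struct THD { System_variables variables; };

extern THD *current_thd;                  // NULL in the shutdown thread
extern System_variables global_system_variables;

// Pick the global locale when there is no session to read it from.
static const char **current_error_messages()
{
  System_variables &vars =
      current_thd ? current_thd->variables : global_system_variables;
  return vars.lc_messages->errmsgs->errmsgs;
}
```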