For some reason, some of these suppressions would fail to suppress
when the code is compiled with clang 6.0, Debug and -DWITH_ASAN=ON.
Possibly it is related to the number of .* patterns or to the length
of the regular expression strings.
NULL values when there is no DEFAULT
- Merged the alter_non_null test case into the alter_not_null test case.
- Renamed the alter_non_null_debug test case to alter_not_null_debug.
Building this plugin, which requires run-time network access, uses a lot
of disk space, and is slow, was already partially disabled. This change
also ensures at the cmake level that it is never built, even if some
autodetection at times concluded that it could be.
This fixes the error message:
fatal: unable to access 'https://github.com/awslabs/aws-sdk-cpp.git/':
Problem with the SSL CA cert (path? access rights?)
This complements commit ecb0e0ade4, which
disabled a number of plugins from being built on Travis-CI (to save
build time and disk space).
When the plugins are not built, the packaging phase fails due to
missing files. This change omits those files from packaging so the
process can complete successfully.
* Exclude some storage engines from Travis to conserve
build time and disk usage per job. Excluded:
TOKUDB MROONGA SPIDER OQGRAPH PERFSCHEMA SPHINX
* Increase travis_wait from the default 20m to 30m for MTR
* Use travis_wait for long running MTR command (wait
30m instead of default 20m)
* Increase testcase-timeout to 20m for OSX, 2m for Linux
* Set ccache size only on Linux, adjust timeout again
* Increase cache push timeout to 5 mins
* Remove AWS defines, not needed
* Remove commented-out ASAN rules; ASAN has been disabled
previously since it has a significant impact on job
runtime, and should be used more in buildbot instead
* Misc cleanup and fixes
Several improvements have been made so that builds run
faster and with fewer canceled jobs:
* Set ccache max size to 1GB. Was 512MB for Linux
(too low for MariaDB) and 5GB on macOS with defaults;
* Don't install libasan in Travis if not necessary.
Since ASAN is disabled for the time being, save
time/resources for other steps;
* Decrease the number of parallel processes to prevent
resource exhaustion leading to poor performance. According
to Travis docs, at most 4 concurrent processes should be
run per job:
https://docs.travis-ci.com/user/common-build-problems/#My-build-script-is-killed-without-any-error
* Reconsider tests exec order and split huge main and rocksdb
test suites into their own job, decreasing the chance of going
over the Travis job execution limit and getting killed;
* Increase Travis testcase-timeout to 4 minutes. Occasionally
on Ubuntu target and frequently on macOS, many tests in main,
rpl, binlog suites take longer than 2 minutes, resulting in
many jobs failing, when in reality the failing tests didn't
get a chance to complete. From my testing, along with the other
speedups, i.e. increasing the ccache size, a timeout of 4 minutes
should be OK. Revert to 3 minutes if necessary.
* Build with GCC and Clang version 5,6 only.
* Rename GCC_VERSION to CC_VERSION for clarity. We are using
two compilers after all, GCC and Clang.
* Stop using the somewhat obsolete Clang 4 in Travis. It was also
the reason for the failing test suites in MDEV-15430.
Problem:
push_handler() created sp_handler_entry instances on THD::main_mem_root,
which is freed only after the SP instructions execution.
So in case of a CONTINUE HANDLER inside a loop (e.g. WHILE) this approach
leaked thread memory on every loop iteration.
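As an illustration (a hypothetical procedure, not taken from the original
report; client delimiter handling omitted), before this change every
iteration entering the inner block pushed a new sp_handler_entry
allocated on THD::main_mem_root:

    CREATE PROCEDURE p1()
    BEGIN
      DECLARE i INT DEFAULT 0;
      WHILE i < 100000 DO
        BEGIN
          -- pushed a fresh handler entry on every iteration
          DECLARE CONTINUE HANDLER FOR SQLEXCEPTION SET @err = 1;
          SET i = i + 1;
        END;
      END WHILE;
    END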
Changes:
- Removing sp_handler_entry declaration, it's not really needed.
- Fixing the data type of sp_rcontext::m_handlers from
Dynamic_array<sp_handler_entry*> to Dynamic_array<sp_instr_hpush_jump*>
- Fixing sp_rcontext::push_handler() to push the pointer to
an sp_instr_hpush_jump instance to the handler stack.
This instance contains everything we need.
There is no need to allocate anything else.
Problem:
push_cursor() created sp_cursor instances on THD::main_mem_root,
which is freed only after the SP instructions loop.
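Similarly illustrative (a hypothetical procedure, not from the original
report): each pass through the inner block used to allocate a new
sp_cursor on THD::main_mem_root:

    CREATE PROCEDURE p1()
    BEGIN
      DECLARE i INT DEFAULT 0;
      WHILE i < 100000 DO
        BEGIN
          -- push_cursor() used to allocate a new sp_cursor here each time
          DECLARE c CURSOR FOR SELECT 1;
          OPEN c;
          CLOSE c;
        END;
        SET i = i + 1;
      END WHILE;
    END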
Changes:
- Moving sp_cursor declaration from sp_rcontext.h to sql_class.h
- Deriving sp_instr_cpush from sp_cursor. So now sp_cursor is created
only once (at the SP parse time) and then reused on all loop iterations
- Adding a new method reset() into sp_cursor (and its parent classes)
to reset an sp_cursor instance before reuse.
- Moving former sp_cursor members m_fetch_count, m_row_count, m_found
into a separate class sp_cursor_statistics. This helps to reuse
the code in sp_cursor constructors, and in sp_cursor::reset()
- Adding a helper method sp_rcontext::pop_cursor().
- Adding "THD*" parameter to so_rcontext::pop_cursors() and pop_all_cursors()
- Removing "new" and "delete" from sp_rcontext::push_cursor() and
sp_rconext::pop_cursor().
- Fixing sp_cursor not to derive from Sql_alloc, as it's now allocated
only as a part of sp_instr_cpush (and not allocated separately).
- Moving lex_keeper->disable_query_cache() from sp_cursor::sp_cursor()
to sp_instr_cpush::execute().
- Adding tests
MDEV-10581 sql_mode=ORACLE: Explicit cursor FOR LOOP
MDEV-12098 sql_mode=ORACLE: Implicit cursor FOR loop
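For reference, the two tested constructs look roughly like this in
sql_mode=ORACLE (table and column names are made up):

    SET sql_mode=ORACLE;
    CREATE PROCEDURE p1 AS
      CURSOR cur IS SELECT a FROM t1;
    BEGIN
      FOR rec IN cur                 -- explicit cursor FOR loop (MDEV-10581)
      LOOP
        SELECT rec.a;
      END LOOP;
      FOR rec IN (SELECT a FROM t1)  -- implicit cursor FOR loop (MDEV-12098)
      LOOP
        SELECT rec.a;
      END LOOP;
    END;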
Cleanup changes:
- Removing sp_lex_cursor::m_cursor_name
- Adding sp_instr_cursor_copy_struct::m_cursor (the cursor global index)
- Fixing sp_instr_cursor_copy_struct::print() to access the cursor
  name using m_ctx and m_cursor (like other cursor-related instructions do)
  instead of m_cursor_name.
This change is needed to unify sp_assignment_lex and sp_cursor_lex later,
to fix this problem easier:
MDEV-16558 Parenthesized expression does not work as a lower FOR loop bound
Restored the accidentally removed undefinition of MYSQL_SERVER in net_serv.cc
for the embedded server
(the embedded server uses real_net_read/write only as a client)
Prevented an attempt to clean up the embedded server when it was not initialized
NULL values when there is no DEFAULT
The copy and inplace algorithms work similarly for
NULL to NOT NULL conversion in the following cases:
(1) strict sql mode - should give an error;
(2) non-strict sql mode - should give warnings only;
(3) ALTER IGNORE TABLE command - should give warnings only.
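A sketch of the three cases (hypothetical table; error and warning
texts paraphrased):

    CREATE TABLE t1 (a INT);
    INSERT INTO t1 VALUES (NULL);
    SET sql_mode='STRICT_ALL_TABLES';
    ALTER TABLE t1 MODIFY a INT NOT NULL;         -- (1) error, NULL present and no DEFAULT
    ALTER IGNORE TABLE t1 MODIFY a INT NOT NULL;  -- (3) warning only, NULL becomes 0
    -- (2) with sql_mode='' the plain ALTER would likewise succeed with a warning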
Changing columns WITH/WITHOUT SYSTEM VERSIONING doesn't require reading
the data at all. Thus it should be an instant operation.
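After this patch, a statement like the following should complete
instantly, without touching the table data (hypothetical table):

    CREATE TABLE t1 (a INT, b INT) WITH SYSTEM VERSIONING;
    ALTER TABLE t1 MODIFY b INT WITHOUT SYSTEM VERSIONING, ALGORITHM=INSTANT;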
Patch also fixes a bug when ALTER_COLUMN_UNVERSIONED wasn't passed to InnoDB
to change its internal structures.
change_field_versioning_try(): apply WITH/WITHOUT SYSTEM VERSIONING
change in SYS_COLUMNS for one field.
change_fields_versioning_try(): apply WITH/WITHOUT SYSTEM VERSIONING
change in SYS_COLUMNS for every changed field in a table.
change_fields_versioning_cache(): update cache for versioning property
of columns.
This bug happened for queries that used a materialized view that
renamed columns of the specifying query in an inner table of
an outer join. For such a query, name resolution for a column
belonging to the view could fail if the underlying column was
non-nullable.
When creating the definition of the temporary table for
the materialized view used in the inner part of an outer join,
the definitions of the non-nullable columns are created by the
function create_tmp_field_from_item(), which names the columns
according to the names of the underlying columns. So these names
should be changed to the view column names.
This bug cannot be reproduced in 10.2 because there setup_fields(),
called when preparing joins in the view specification, effectively
renames the underlying columns in the function find_field_in_view().
In 10.3 this renaming was removed as improper
(see Monty's commit b478276b04).
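A query of roughly this shape (hypothetical tables) could trigger the
bug: the view is forced to be materialized, renames a non-nullable
column, and sits on the inner side of an outer join:

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (b INT NOT NULL);
    CREATE ALGORITHM=TEMPTABLE VIEW v AS SELECT b AS v_b FROM t2;
    SELECT t1.a, v.v_b
    FROM t1 LEFT JOIN v ON t1.a = v.v_b;  -- v_b could fail to resolve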
Add a Spider test to ensure that a bug similar to MDEV-11084 is not
re-introduced. Spider would crash if the first partition was not used first.
Author:
Eric Herman.
First Reviewer:
Jacob Mathew.
Second Reviewer:
Kentoku Shiba.
When altering from DECIMAL to *INT UNSIGNED or to BIT, go through val_decimal()
to avoid truncation to the biggest possible signed integer
(0x7FFFFFFFFFFFFFFF / 9223372036854775807).
This problem was earlier fixed by the patch cb16d753b2
for MDEV-11337 Split Item::save_in_field() into virtual methods in Type_handler.
Adding tests only.
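An illustrative case (hypothetical table; the value is chosen to exceed
the signed 64-bit range):

    CREATE TABLE t1 (d DECIMAL(20,0));
    INSERT INTO t1 VALUES (18446744073709551615);
    ALTER TABLE t1 MODIFY d BIGINT UNSIGNED;
    -- without the fix, d could come out as 9223372036854775807
    SELECT d FROM t1;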
materialized derived table/view that uses aliases is done
The problem appears when a column alias inside the materialized derived
table/view t1 definition coincides with the column name used in the
GROUP BY clause of t1. If a condition that can be pushed into t1
uses that ambiguous column name, the column is resolved as the one
used in the GROUP BY clause instead of the alias used in the projection
list of t1. That causes a wrong result.
To prevent this, resolve_ref_in_select_and_group() was changed.
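A sketch of the ambiguity (hypothetical table): the alias a in the
derived table names column c, while the real column a is used in the
GROUP BY clause:

    CREATE TABLE t1 (a INT, c INT);
    SELECT * FROM (SELECT c AS a FROM t1 GROUP BY a) AS dt
    WHERE dt.a > 1;  -- the pushed-down condition could bind a to t1.a
                     -- instead of to the alias for c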
Since MariaDB Server 10.2.2 (and MySQL 5.7), the default value of
innodb_checksum_algorithm is crc32 (CRC-32C), not the inefficient "innodb"
checksum. Change Mariabackup to use the same default, so that checksum
validation (when using the default algorithm on the server) will take less
time during mariabackup --backup. Also, mariabackup --prepare should be
a little faster, and the server should read backups faster, because the
page checksums would only be validated against CRC-32C.
fil_page_decompress(): Replaces fil_decompress_page().
Allow the caller to detect errors. Remove
duplicated code. Use the "safe" instead of "fast" variants of
decompression routines.
fil_page_compress(): Replaces fil_compress_page().
The length of the input buffer was always srv_page_size (innodb_page_size).
Remove printouts, and remove the fil_space_t* parameter.
buf_tmp_buffer_t::reserved: Make private; the accessors acquire()
and release() will use atomic memory access.
buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter.
Do not acquire any mutex. Remove the allocation of the buffers.
buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf():
Refactored away from buf_pool_reserve_tmp_slot().
buf_page_decrypt_after_read(): Make static, and simplify the logic.
Use the encryption buffer also for decompressing.
buf_page_io_complete(), buf_dblwr_process(): Check more failures.
fil_space_encrypt(): Simplify the debug checks.
fil_space_t::printed_compression_failure: Remove.
fil_get_compression_alg_name(): Remove.
fil_iterate(): Allocate a buffer for compression and decompression
only once, instead of allocating and freeing it for every page
that uses compression, during IMPORT TABLESPACE. Also, validate the
page checksum before decryption, and reduce the scope of some variables.
fil_page_is_index_page(), fil_page_is_lzo_compressed(): Remove (unused).
AbstractCallback::operator()(): Remove the parameter 'offset'.
The check for it in FetchIndexRootPages::operator() was basically
redundant and dead code since the previous refactoring.
Problem:
The problem was most likely introduced by a fix for MDEV-11597
(commit 5f0c31f928) which removed
the assignment "killed= KILL_BAD_DATA" from THD::raise_condition().
Before MDEV-11597, sp_head::execute() tested thd->killed after
looping through the SP instructions and exited with an error
if thd->killed was set. After MDEV-11597, sp_head::execute()
stopped noticing errors and set the OK status on top of the
error status, which crashed on an assertion.
Fix:
Making sp_cursor::fetch() return -1 if server_side_cursor->fetch(1)
left an error in the diagnostics area. This makes the statement
"err_status= i->execute(thd, &ip)" in sp_head::execute() set the
error code and correctly break the SP instruction loop and
return on error without setting the OK status.
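One hypothetical statement shape that exercises this path (assuming the
FETCH raises a conversion error under a strict sql_mode; client
delimiter handling omitted):

    CREATE TABLE t1 (a VARCHAR(10));
    INSERT INTO t1 VALUES ('not-a-date');
    CREATE PROCEDURE p1()
    BEGIN
      DECLARE v DATE;
      DECLARE c CURSOR FOR SELECT a FROM t1;
      OPEN c;
      FETCH c INTO v;  -- the error raised here must break the instruction loop
      CLOSE c;
    END;
    CALL p1();         -- must return the error rather than assert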