derived table / view by equality
Currently, the rows of a materialized derived table are always put into a
temporary table before the join operation. If BNLH is used to join this
table with the result of a partial join, then both operands of the
join are actually put into main memory. In most cases this is not
efficient.
We could avoid this by sending the rows of the derived table directly
to the join operation. However, this kind of data flow is not supported
yet.
Fixed by not allowing the hash join algorithm to be used for joining a
materialized derived table when it is joined by an equality predicate
of the form f=e, where f is a field of the derived table.
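As a rough illustration of the new restriction (a sketch with made-up
names, not the actual optimizer code), the decision boils down to:

    // Hedged sketch: decide whether hash join (BNLH) may be used for a
    // join with a materialized derived table.  All names are illustrative.
    struct JoinCond { bool is_equality; bool field_belongs_to_derived; };

    bool hash_join_allowed(bool table_is_materialized_derived,
                           const JoinCond &cond)
    {
      // Disallow BNLH when the derived table is materialized and it is
      // joined by a predicate f=e where f is a field of that table:
      // otherwise both join operands end up buffered in main memory.
      if (table_is_materialized_derived &&
          cond.is_equality && cond.field_belongs_to_derived)
        return false;
      return true;
    }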
This warning comes from a copy() operation of the form
memcpy(ptr, ptr+A, B), which is safe but produces a warning
when run under valgrind.
To avoid the warning, I added a copy_or_move() method which uses
memmove() instead of memcpy().
In 10.3 the change in item_strfunc::Item_func_concat() has to be mirrored
in Item_func_concat_oracle() to avoid future valgrind warnings.
This is a regression caused by
commit 73af8af094
(MDEV-15325 Incomplete validation of missing tablespace during recovery).
If the recv_sys->addr_hash hash table ran out of memory, we would
have to do crash recovery in multiple passes. If some tablespaces were
missing, after the MDEV-15325 fix we would rescan the remaining redo log.
But we could incorrectly reset the "rescan" flag. Because of this, we
would fail to apply some of the oldest redo log records to the data files.
(The recv_sys->addr_hash would only contain records from the latest
redo log scan batch.)
Fix:
After checking for missing tablespaces, set the flag rescan=true,
so that all redo log records will be re-read and applied.
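A rough sketch of the intended control flow (illustrative only, not the
actual recv_sys code):

    // Hedged sketch: decide whether another full redo log scan is needed.
    static bool plan_next_recovery_pass(bool rescan, bool missing_tablespaces)
    {
      if (missing_tablespaces)
      {
        /* Before the fix, "rescan" could be cleared here even though
           recv_sys->addr_hash only holds records from the latest scan
           batch, so the oldest redo log records were never applied.
           Forcing rescan=true makes the next pass re-read the whole log. */
        rescan = true;
      }
      return rescan;
    }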
|| node->vcol_info.is_used()' failed
- The purge thread can acquire an MDL lock while initializing the mysql
  template. Set the vcol_info information before acquiring the MDL lock.
- The purge thread doesn't need to use the virtual column info even though
  it was requested. In that case, reset the virtual column info.
rw_lock_x_lock_wait_func(): remove duplicated logic added in an incorrect merge
The counters in question are affected by the InnoDB monitor, but they aren't
stable and thus cannot be reliably tested.
storage/rocksdb/rdb_datadic.cc: In member function 'int myrocks::Rdb_key_def::unpack_integer(myrocks::Rdb_field_packing*, Field*, uchar*, myrocks::Rdb_string_reader*, myrocks::Rdb_string_reader*) const'
storage/rocksdb/rdb_datadic.cc:1781:1: internal compiler error: Segmentation fault
}
on ppc64le, ubuntu bionic gcc 7.3.0 and debian stretch gcc 6.3.0
The error happens with -ftree-loop-vectorize when trying to vectorize
a particular loop (see Rdb_key_def::unpack_integer()).
The compiler gets confused by __attribute__((optimize("O0"))) that comes from
ha_rocksdb_proto.h. The intention of this __attribute__ was to prevent
the function from being inlined (see ha_rocksdb.cc). Let's use a more
specific attribute that prevents inlining but does not confuse the
loop vectorizer.
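In GCC terms the change amounts to something like the following attribute
swap (hypothetical function names, only the attributes matter):

    // Hedged sketch: __attribute__((optimize("O0"))) disables all
    // optimization for the function and can confuse -ftree-loop-vectorize
    // on some GCC versions, while __attribute__((noinline)) expresses the
    // actual intent: keep the function from being inlined.

    // before (illustrative):
    void some_debug_helper_before() __attribute__((optimize("O0")));

    // after (illustrative):
    void some_debug_helper_after() __attribute__((noinline));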
An error in "group commit with MariaDB's binlog" code: we would flush
the WAL even when the transaction did not do any writes (and so the logic
in myrocks::Rdb_transaction::commit caused it to rollback).
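The fix boils down to a check along these lines (hypothetical names, not
the actual MyRocks code):

    // Hedged sketch: during binlog group commit, only flush the RocksDB
    // WAL when the transaction actually wrote something; a write-free
    // transaction is rolled back by the commit logic and has nothing
    // worth flushing.
    struct TxnState { bool has_writes; };

    static bool wal_flush_needed(const TxnState &txn)
    {
      return txn.has_writes;  // skip the expensive WAL flush otherwise
    }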
The unary minus operation on the smallest possible signed long long value
(LONGLONG_MIN) is undefined in C++. Because of this, func_time.test
failed on ppc64 buildbot machines.
Fixing the code to avoid using undefined operations.
This fix is similar to "MDEV-7973 bigint fail with gcc 5.0".
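The usual way around the undefined negation is to do it in the unsigned
domain, for example (a generic sketch, not the exact server code):

    #include <climits>
    #include <cstdio>

    // Hedged sketch: -x is undefined behaviour in C++ when
    // x == LLONG_MIN, because +2^63 does not fit in long long.
    // Negating via unsigned long long is well defined.
    static unsigned long long safe_negate(long long x)
    {
      return 0ULL - static_cast<unsigned long long>(x);
    }

    int main()
    {
      std::printf("%llu\n", safe_negate(LLONG_MIN));  // 9223372036854775808
      return 0;
    }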
RPM solution:
Make all server plugins restart the server when installed.
To avoid multiple server restarts, do it only once, in the posttrans scriptlet.
Add support for CPACK_RPM_<component>_POST_TRANS_SCRIPT_FILE.
For the original test in 10.0 it was not really important whether
find_user_wild() or find_user_exact() was used in sp_grant_privileges():
sp-security.test passed with either of them.
Fixing the test so it reliably fails with find_user_wild()
and passes with find_user_exact().
Simplify, and make it work with a system tablespace outside of the
InnoDB data home.
Also, do not reread the TRX_SYS page in an endless loop if it appears
to be corrupted. Use a finite number of attempts.
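The bounded retry looks roughly like this (hypothetical names, not the
actual InnoDB code):

    // Hedged sketch: instead of rereading a possibly corrupted TRX_SYS
    // page forever, give up after a fixed number of attempts.
    static const int MAX_TRX_SYS_READ_ATTEMPTS = 10;  // illustrative limit

    // read_page is a stand-in for the real "read and validate the page".
    static bool read_trx_sys_with_retries(bool (*read_page)())
    {
      for (int i = 0; i < MAX_TRX_SYS_READ_ATTEMPTS; i++)
        if (read_page())
          return true;       // page read and validated successfully
      return false;          // bounded failure instead of an endless loop
    }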