The MDEV-29693 conflict resolution is from Monty, as is a bug fix where
ANALYZE TABLE wrongly built histograms for a single-column PRIMARY KEY.
Also includes a fix for safe_malloc error reporting.
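As a rough illustration of the histogram fix (the table, data and check
below are made up for the example, not taken from the actual test): a
single-column PRIMARY KEY is unique by definition, so ANALYZE should not
store a histogram for it in mysql.column_stats:
  CREATE TABLE t1 (pk INT PRIMARY KEY, b INT);
  INSERT INTO t1 VALUES (1,10),(2,20),(3,30);
  ANALYZE TABLE t1 PERSISTENT FOR ALL;
  -- after the fix the pk column is expected to end up without a histogram
  SELECT column_name, histogram IS NULL AS no_histogram
  FROM mysql.column_stats WHERE table_name = 't1';
  DROP TABLE t1;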
Other things:
- Copied main.log_slow from 10.4 to avoid an mtr issue
Disabled tests:
- spider/bugfix.mdev_27239 because we started to get
+Error 1429 Unable to connect to foreign data source: localhost
-Error 1158 Got an error reading communication packets
- main.delayed
- Bug#54332 Deadlock with two connections doing LOCK TABLE+INSERT DELAYED
This part is disabled for now as it fails randomly with different
warnings/errors (no corruption).
The problem is that s390x is not using the default bzip library we use
on other platforms, which causes compressed string lengths to be different
from what the mtr tests expect.
Fixed by:
- Added have_normal_bzip.inc, which checks whether compress() returns the
expected length (a sketch follows after this list).
- Adjusted the results to match the expected ones
- main.func_compress.test & archive.archive
- Don't print lengths that depend on the compression library
- mysqlbinlog compress tests & connect.zip
- Don't print DATA_LENGTH for SET column_compression_zlib_level=1
- main.column_compression
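A minimal sketch of what an include like have_normal_bzip.inc can do (the
string and the length 21 below are only illustrative of what compress()
returns for 1000 'a' characters with the default library):
  # Skip the test if compress() does not give the length the result
  # files were recorded with (i.e. a non-default compression library).
  if (`SELECT LENGTH(COMPRESS(REPEAT('a',1000))) <> 21`)
  {
    --skip Test requires the default compression library
  }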
This includes all test changes from
"Changing all cost calculation to be given in milliseconds"
onwards.
Some of the things that caused changes in the result files:
- As part of fixing tests, I added 'echo' to some comments to make it
easier to find out where things went wrong.
- MATERIALIZED now has a higher cost compared to X than before. Because
of this, some MATERIALIZED types have changed to DEPENDENT SUBQUERY.
- Some test cases that required MATERIALIZED to repeat a bug were
changed by adding more rows to force MATERIALIZED to happen.
- 'Filtered' in SHOW EXPLAIN has in many cases changed from 100.00 to
something smaller. This is because filtered now also takes into
account the smallest possible ref access and filters, even if they
were not used. Another reason for 'Filtered' being smaller is that
we now also take into account implicit filtering done for subqueries
using FIRSTMATCH.
(main.subselect_no_exists_to_in)
This is calculated in best_access_path() and stored in records_out.
- Table orders have changed because of more accurate costs.
- 'index' and 'ALL' for small tables have changed to use 'range' or
'ref' because of optimizer_scan_setup_cost (see the sketch after this list).
- 'index' can change to 'range', as the range optimizer assumes it does
not have to read again from disk the blocks it has already read.
This can be confusing when there is no obvious WHERE clause
but instead a hidden 'key_column > NULL' condition added by the optimizer.
(main.subselect_no_exists_to_in)
- A scan on a clustered primary key does not report 'Using index' anymore
(it's a table scan, not an index scan).
- For derived tables, the number of rows is now 100 instead of 2,
which can be seen in EXPLAIN.
- More tests have "Using index for group by" as the cost of this
optimization is now more correct (lower).
- A primary key could be preferred over a normal key, even if it would
access more rows, as it's faster to do 1 lookup and 3 'index_next' calls on a
clustered primary key than one lookup through a secondary key.
(main.stat_tables_innodb)
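The sketch referred to above is purely illustrative (the exact plan depends
on the engine, the statistics and the cost settings): with
optimizer_scan_setup_cost included, a tiny table that used to get 'ALL' or
'index' may now get 'range' or 'ref':
  CREATE TABLE t1 (a INT, KEY(a));
  INSERT INTO t1 VALUES (1),(2),(3),(4),(5);
  # previously often 'ALL' or 'index'; now more likely 'range' or 'ref'
  EXPLAIN SELECT a FROM t1 WHERE a > 3;
  DROP TABLE t1;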
Notes:
- There were 4.7% more calls to best_extension_by_limited_search() in
the main.greedy_optimizer test. However, examining the test results,
it looked like the plans were slightly better (eq_ref accesses were
chained together more), so I assume this is ok.
- I have verified a few test cases where there were notable/unexpected
changes in the plan, and in all cases the new optimizer plans were
faster. (main.greedy_optimizer and some others)
Remove usage of the deprecated variable storage_engine. It was deprecated in
5.5 but never issued a deprecation warning. Make it issue a warning in 10.5.1.
Replaced with default_storage_engine.
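For example, a session along these lines (engine name arbitrary) shows the
difference after this change:
  SET SESSION storage_engine = MyISAM;          -- now raises a deprecation warning
  SHOW WARNINGS;
  SET SESSION default_storage_engine = MyISAM;  -- the replacement, no warning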
Fixed archive.archive failure.
Applied remnants of two revisions, which were partially merged.
Rev. 3225.1.1 (5.0 compatibility):
BUG#11756687 - 48633: ARCHIVE TABLES ARE NOT UPGRADEABLE
Archive tables created by 5.0 were not accessible.
This patch adds various fixes so that 5.0 archive tables
are readable and writable. However, it is strongly recommended
to avoid a binary upgrade of archive tables whenever
possible.
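A sketch of how such an upgrade test is typically structured in mtr (the
table name t1 and the exact checks are assumptions, not the real test; only
the bug48633 file names come from this commit):
  let $MYSQLD_DATADIR= `SELECT @@datadir`;
  --copy_file $MYSQL_TEST_DIR/std_data/bug48633.frm $MYSQLD_DATADIR/test/t1.frm
  --copy_file $MYSQL_TEST_DIR/std_data/bug48633.ARZ $MYSQLD_DATADIR/test/t1.ARZ
  --copy_file $MYSQL_TEST_DIR/std_data/bug48633.ARM $MYSQLD_DATADIR/test/t1.ARM
  SHOW CREATE TABLE t1;
  SELECT COUNT(*) FROM t1;
  DROP TABLE t1;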
Rev. 3710 (due to valgrind warnings):
Bug#13907676: HA_ARCHIVE::INFO
In the WL#4305 refactoring of the archive writer,
ha_archive::info() could flush the writer when it was not yet open.
This happened because, if bulk insert was used but no
rows were actually inserted (write_row was never called),
the writer was marked dirty even though it was not open.
The fix is to only mark it as dirty if it was opened.
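A hypothetical reproducer sketch (table names made up): a bulk insert that
ends up writing no rows must not leave the never-opened writer marked dirty,
otherwise a later info()/SHOW TABLE STATUS call would try to flush it:
  CREATE TABLE t_arch (a INT) ENGINE=ARCHIVE;
  CREATE TABLE t_src (a INT);
  -- start_bulk_insert() is called, but write_row() never is
  INSERT INTO t_arch SELECT * FROM t_src;
  -- used to flush a writer that was never opened (valgrind warnings)
  SHOW TABLE STATUS LIKE 't_arch';
  DROP TABLE t_arch, t_src;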
mysql-test/std_data/bug48633.ARM:
A test case for BUG#11756687: archive table created by 5.0.95.
mysql-test/std_data/bug48633.ARZ:
A test case for BUG#11756687: archive table created by 5.0.95.
mysql-test/std_data/bug48633.frm:
A test case for BUG#11756687: archive table created by 5.0.95.
mysql-test/suite/archive/archive.result:
Modified a test case for BUG#47012 according to fix for
BUG#11756687.
Added a test case for BUG#11756687.
mysql-test/suite/archive/archive.test:
Modified a test case for BUG#47012 according to fix for
BUG#11756687.
Added a test case for BUG#11756687.
No need to remove .ARM files anymore: DROP TABLE will take
care of them.
storage/archive/azio.c:
Do not write AZIO (v.3) header to GZIO file (v.1).
Added initialization of various azio_stream members
to read_header() so it can proceed with v.1 format.
Update data start position only when reading first
GZIO header. That is only on azopen(), but never on
azread().
storage/archive/ha_archive.cc:
Removed guards that rejected opening v.1 archive
tables.
Reload .frm when repairing v.1 tables - they didn't have
storage for .frm.
Do not flush write stream when it is not open.
Let DROP TABLE remove 5.0 .ARM files.
includes:
* remove some remnants of "Bug#14521864: MYSQL 5.1 TO 5.5 BUGS PARTITIONING"
* introduce LOCK_share, now LOCK_ha_data is strictly for engines
* rea_create_table() always creates .par file (even in "frm-only" mode)
* fix a 5.6 bug, temp file leak on dummy ALTER TABLE
* persistent table versions in the extra2
* ha_archive::frm_compare using TABLE_SHARE::tabledef_version
* distinguish between "important" and "optional" extra2 frm values
* write engine-defined attributes (aka "table options") to extra2, not to extra,
but still read from the old location, if they're found there.
* print "table doesn't exist in engine" when a table doesn't exist in the engine,
instead of "file not found" (if no file was involved)
* print a complete filename that cannot be found ('t1.MYI', not 't1')
* it's not an error for a DROP if a table doesn't exist in the engine (or some table
files cannot be found), as long as the DROP succeeds regardless
Moved tests from the main suite to the new suites.
Move tests from maria/t and maria/r to maria
mysql-test/mysql-test-run.pl:
Added support for the new suites