If one of the selected fields is a MIN or MAX and it has been
optimized into a constant, it is not added to the temp table used by
a group by handler (GBH). The GBH therefore cannot store results to
this missing field.
On the other hand, when SELECTing from a view or a derived table,
TMP_TABLE_ALL_COLUMNS is set. If the query has no group by or order
by, an Item_temptable_field is created for this MIN/MAX field and
added to the JOIN. Since the GBH could not store results to the
corresponding field in the temp table, the value of this
Item_temptable_field remains NULL. This NULL value is then passed to
the record, then to the temp row, and finally output as the (wrong)
result.
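For illustration only (a hypothetical sketch, not the actual test
case in spider/bugfix.mdev_26345): assuming a spider table t with an
index on column a, so that the optimizer can fold MIN(a) into a
constant, a query of roughly this shape hits the problem when the
MIN/MAX is selected through a view or derived table:

  CREATE OR REPLACE VIEW v AS SELECT a FROM t;
  SELECT MIN(a) FROM v;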
To fix this, we opt not to create a spider GBH when a view or derived
table is involved.
This fixes spider/bugfix.mdev_26345 for --view-protocol
Also fixed a comment:
TABLE_LIST::belong_to_derived is NULL if the table belongs to a
derived table that has non-MERGE type.
This is a fixup of MDEV-26345 commit
77ed235d50.
In MDEV-26345 the spider group by handler was updated so that it uses
the item_ptr fields of Query::group_by and Query::order_by, instead of
item. This was and is because the call to
join->set_items_ref_array(join->items1) during the execution stage,
just before the execution, replaces the order-by / group-by item
arrays with Item_temptable_field.
Spider traverses the item tree during the group by handler (gbh)
creation at the end of the optimization stage, and decides whether a
gbh could handle the execution of the query. Basically a spider gbh
can handle the execution if it can construct a well-formed query,
execute it on the data node, and store the results in the correct
places. If so, it will create one, otherwise it will return NULL and
the execution will use the usual handler (ha_spider instead of
spider_group_by_handler). To that end, the general principle is that
the items checked during creation should be the same items later used
for query construction. Since in MDEV-26345 we changed to use the
item_ptr field instead of the item field of order-by and group-by in
query construction, in this patch we do the same for the gbh creation.
The item_ptr field could still be the uninitialised NULL value during
the gbh creation. This is because the optimizer may replace a DISTINCT
with a GROUP BY, which only happens if the original GROUP BY is empty.
It creates the artificial GROUP BY by calling create_distinct_group(),
which creates the corresponding ORDER object with its item field
pointing somewhere into ref_pointer_array, but leaves item_ptr NULL.
When spider finds that item_ptr is NULL, it knows there has been some
optimizer skullduggery and that it has been passed a query different
from the original. Without a clear contract between the server layer
and the gbh, it is better to be safe than sorry and not create the gbh
in this case.
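As a hypothetical illustration of a query shape where this can happen
(assuming a spider table t): the DISTINCT below may be turned into an
implicit GROUP BY via create_distinct_group(), producing ORDER
objects whose item_ptr is NULL:

  SELECT DISTINCT a, b FROM t;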
Also add a check and error reporting for the unlikely case of
item_ptr changing from non-NULL at gbh construction to NULL at
execution, to prevent a server crash.
Also, we remove a check, added in MDEV-29480, on order by items being
aggregate functions. That check was added with the premise that spider
was including auxiliary SELECT items which are referenced by ORDER BY
items. This premise has no longer been true since MDEV-26345, and it
caused problems such as MDEV-29546, which was fixed by MDEV-26345.
Stop skipping const items when constructing the SELECT list, but skip
them when storing results to the spider row, to avoid storing into
mismatching temporary table fields.
Skip auxiliary fields when constructing the SELECT list, and
accordingly do not store the (non-existent) results into the
corresponding temporary table fields.
When there are BOTH auxiliary fields AND const items in the auxiliary
field items, do not use the spider GBH. This is a rare occasion, if it
happens at all, and is not worth the added complexity to cover.
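A purely hypothetical illustration of the kind of query these points
concern (assuming a spider table t): the literal 42 below is a const
item in the SELECT list, and the ORDER BY column b, which is not
selected, gives rise to an auxiliary field item:

  SELECT 42, a FROM t ORDER BY b;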
Use the original item (item_ptr) in constructing GROUP BY and ORDER
BY, which also means using item->name instead of field->field_name as
aliases in constructing SELECT items. This fixes spurious regressions
caused by the above changes in some tests using ORDER BY, such as
mdev_24517.test. As a by-product, this also fixes MDEV-29546.
Therefore we update mdev_29008.test to include the MDEV-29546 case.
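A hypothetical sketch of a query affected by the alias change
(assuming a spider table t); the actual MDEV-29546 case added to
mdev_29008.test may differ. Here the ORDER BY refers to a select item
by its alias, so spider must use the item's name rather than the
underlying field name when constructing the remote SELECT list:

  SELECT a, SUM(b) AS total FROM t GROUP BY a ORDER BY total;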
This will avoid issues like MDEV-32486
IDs used in:
- spider_alloc_calc_mem_init()
- spider_string::init_calc_mem()
- spider_malloc()
- spider_bulk_alloc_mem()
- spider_bulk_malloc()
Spider GBH's query rewrite of table joins is overly complex and
error-prone. We replace it with something closer to what
dbug_print() (more specifically, print_join()) does, but tailored to
spider.
More specifically, we replace the body of
spider_db_mbase_util::append_from_and_tables() with a call to
spider_db_mbase_util::append_join(), and remove downstream append_X
functions.
We make it handle const tables by rewriting them as (select 1). This
fixes the main issue in MDEV-26247.
We also ban semijoins from the spider gbh, which fixes MDEV-31645 and
MDEV-30392, as a semi-join is an "internal" join: "semi join" does not
parse, and it differs from "join" in that it deduplicates the right
hand side.
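Two hypothetical query shapes touched by these changes (assuming
spider tables t1, t2, and a const table t3, e.g. a table the optimizer
reads as a single-row const table): in the first query the const table
is now rewritten as (select 1) in the query sent to the data node; the
second query may be converted into a semi-join by the optimizer, in
which case spider now falls back to the usual handler:

  SELECT t1.a FROM t1 JOIN t3 ON t1.a = t3.a;
  SELECT t1.a FROM t1 WHERE t1.a IN (SELECT t2.a FROM t2);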
Not all queries passed to a group by handler are valid (MDEV-32273);
for example, a join ON expression may refer to outer fields not in the
current context. We detect this during the handler creation when
walking the join. See also gbh_outer_fields_in_join.test and the
hypothetical example below.
It also skips eliminated tables, which fixes MDEV-26193.
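A hypothetical example of such a query (assuming spider tables t1, t2
and t3): the ON expression inside the subquery refers to the outer
field t1.a, which is not in the subquery join's context, so spider
declines to create a gbh for it:

  SELECT * FROM t1
    WHERE EXISTS (SELECT 1 FROM t2 JOIN t3 ON t2.b = t1.a);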
Spider gbh query rewrite should get the table for a field in a simple
way. Add a method spider_fields::find_table that searches its table
holders to find the table for a given field. This way we will be able
to get rid of the first pass during the gbh creation where
field_chains and field_holders are created.
We also check that each field belongs to a spider table while walking
through the query, so we can remove
all_query_fields_are_query_table_members(). However, this requires an
earlier creation of the table_holder so that tables are added before
the checking. We do that, and in doing so, also decouple table_holder
and spider_fields.
Remove unused methods and fields. Add comments.
Two methods are removed from spider_fields:
- reappend_tables_part()
- reappend_tables()
There are probably more of these conn_holder related methods that can
be removed.
The MDEV-29693 conflict resolution is from Monty, as is a bug fix
where ANALYZE TABLE wrongly built histograms for a single-column
PRIMARY KEY.
Also includes a fix for safe_malloc error reporting.
Other things:
- Copied main.log_slow from 10.4 to avoid mtr issue
Disabled test:
- spider/bugfix.mdev_27239 because we started to get
+Error 1429 Unable to connect to foreign data source: localhost
-Error 1158 Got an error reading communication packets
- main.delayed
- Bug#54332 Deadlock with two connections doing LOCK TABLE+INSERT DELAYED
This part is disabled for now as it fails randomly with different
warnings/errors (no corruption).
The system variable spider_disable_group_by_handler, if on, disables
the spider group by handler (gbh). Such disablement serves as a
workaround for bugs caused by the gbh, labelled with spider-gbh on
jira, including MDEV-26247, MDEV-28998, MDEV-29163, MDEV-30392, and
MDEV-31645.
Tests for these tickets are added accordingly with the workaround in
place.
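For example, to apply the workaround in a session (assuming the
variable is settable at session scope):

  SET spider_disable_group_by_handler = 1;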
The direct aggregate mechanism seems to be intended to work only when
otherwise a full table scan query would be executed from the spider
node and the aggregation done at the spider node too. Typically this
happens in sub_select(). In the test spider.direct_aggregate_part,
direct aggregate allows sending COUNT statements directly to the data
nodes and adding up the results at the spider node, instead of
iterating over the rows one by one at the spider node.
By contrast, the group by handler (GBH) typically sends aggregated
queries directly to data nodes, in which case DA does not improve the
situation here.
That is why we should fix it by disabling DA when GBH is used.
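A hypothetical illustration of the case DA targets (assuming a spider
table t partitioned across several data nodes and a query that is not
handled by a GBH): with DA, each data node computes its own COUNT and
the spider node only adds up the partial results, instead of fetching
and counting rows one by one:

  SELECT COUNT(*) FROM t;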
There are other reasons supporting this change. First, the creation of
GBH results in a call to change_to_use_tmp_fields() (as opposed to
setup_copy_fields()) which causes the spider DA function
spider_db_fetch_for_item_sum_funcs() to work on wrong items. Second,
the spider DA function only calls direct_add() on the items, and the
follow-up add() needs to be called by the sql layer code. In
do_select(), after executing the query with the GBH, it seems that the
required add() would not necessarily be called.
Disabling DA when GBH is used does fix the bug. There are a few
other things included in this commit to improve the situation with
spider DA:
1. Add a session variable that allows the user to disable DA
completely. This will help as a temporary measure if/when further bugs
with DA emerge.
2. Move the increment of direct_aggregate_count to the spider DA
function. Currently this is done in rather bizarre and random
locations.
3. Fix the spider_db_mbase_row creation so that the last of its row
fields (the sentinel) is NULL. The code already does a null check, but
somehow the sentinel field ends up at an invalid address, causing the
segfaults. With a correct implementation of the row creation, we can
avoid such segfaults.
When generating a query to send to a remote server, spider generates
new aliases for all tables in the query (at least in the group_by
handler). First it walks all the expressions and creates a list of new
table aliases to use for each field. Then - in init_scan() - it
actually generates the query, taking for each field the next alias
from the list.
It dives recursively into functions: for example, for func(f1) it will
go in, see the field f1, and append to the list the new name for the
table of f1. This works fine for non-aggregate functions and for
aggregate functions in the SELECT list. But aggregate functions in the
ORDER BY are always references to the select list; they never need to
be qualified with a table name. That is, even if a field name appears
as an argument of an aggregate function in the ORDER BY, spider must
not append a table alias to the list for it. Let's just skip aggregate
functions when analyzing ORDER BY for table aliases.
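A hypothetical query of the affected shape (assuming a spider table
t): the MAX(b) in the ORDER BY is a reference to the select list, so
no table alias must be collected for the field b inside it:

  SELECT a, MAX(b) FROM t GROUP BY a ORDER BY MAX(b) DESC;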
This fixes spider/bugfix.mdev_29008
(was observed on aarch64, x86, ppc64le, and amd64 --rr)
Delete the deprecated variable spider_use_handler and related code.
Spider no longer supports accessing data nodes via handler statements,
so the notion of SQL kinds is no longer useful. We discard it too.