mirror of
https://github.com/postgres/postgres.git
synced 2025-11-28 11:44:57 +03:00
Remove tabs after spaces in C comments
This was not changed in HEAD, but will be done later as part of a pgindent run. Future pgindent runs will also do this. Report by Tom Lane. Backpatch through all supported branches, but not HEAD.
@@ -288,7 +288,7 @@ set_plain_rel_pathlist(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte)
  * set_append_rel_pathlist
  * Build access paths for an "append relation"
  *
- * The passed-in rel and RTE represent the entire append relation. The
+ * The passed-in rel and RTE represent the entire append relation. The
  * relation's contents are computed by appending together the output of
  * the individual member relations. Note that in the inheritance case,
  * the first member relation is actually the same table as is mentioned in
@@ -360,7 +360,7 @@ set_append_rel_pathlist(PlannerInfo *root, RelOptInfo *rel,
 
 /*
  * We have to copy the parent's targetlist and quals to the child,
- * with appropriate substitution of variables. However, only the
+ * with appropriate substitution of variables. However, only the
  * baserestrictinfo quals are needed before we can check for
  * constraint exclusion; so do that first and then check to see if we
  * can disregard this child.
@@ -422,7 +422,7 @@ set_append_rel_pathlist(PlannerInfo *root, RelOptInfo *rel,
 
 /*
  * We have to make child entries in the EquivalenceClass data
- * structures as well. This is needed either if the parent
+ * structures as well. This is needed either if the parent
  * participates in some eclass joins (because we will want to consider
  * inner-indexscan joins on the individual children) or if the parent
  * has useful pathkeys (because we should try to build MergeAppend
@@ -1064,7 +1064,7 @@ make_rel_from_joinlist(PlannerInfo *root, List *joinlist)
  * independent jointree items in the query. This is > 1.
  *
  * 'initial_rels' is a list of RelOptInfo nodes for each independent
- * jointree item. These are the components to be joined together.
+ * jointree item. These are the components to be joined together.
  * Note that levels_needed == list_length(initial_rels).
  *
  * Returns the final level of join relations, i.e., the relation that is
@@ -1080,7 +1080,7 @@ make_rel_from_joinlist(PlannerInfo *root, List *joinlist)
  * needed for these paths need have been instantiated.
  *
  * Note to plugin authors: the functions invoked during standard_join_search()
- * modify root->join_rel_list and root->join_rel_hash. If you want to do more
+ * modify root->join_rel_list and root->join_rel_hash. If you want to do more
  * than one join-order search, you'll probably need to save and restore the
  * original states of those data structures. See geqo_eval() for an example.
  */
@@ -1179,7 +1179,7 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels)
  * column k is found to be unsafe to reference, we set unsafeColumns[k] to
  * TRUE, but we don't reject the subquery overall since column k might
  * not be referenced by some/all quals. The unsafeColumns[] array will be
- * consulted later by qual_is_pushdown_safe(). It's better to do it this
+ * consulted later by qual_is_pushdown_safe(). It's better to do it this
  * way than to make the checks directly in qual_is_pushdown_safe(), because
  * when the subquery involves set operations we have to check the output
  * expressions in each arm of the set op.
@@ -1272,7 +1272,7 @@ recurse_pushdown_safe(Node *setOp, Query *topquery,
  * check_output_expressions - check subquery's output expressions for safety
  *
  * There are several cases in which it's unsafe to push down an upper-level
- * qual if it references a particular output column of a subquery. We check
+ * qual if it references a particular output column of a subquery. We check
  * each output column of the subquery and set unsafeColumns[k] to TRUE if
  * that column is unsafe for a pushed-down qual to reference. The conditions
  * checked here are:
@@ -1290,7 +1290,7 @@ recurse_pushdown_safe(Node *setOp, Query *topquery,
  * of rows returned. (This condition is vacuous for DISTINCT, because then
  * there are no non-DISTINCT output columns, so we needn't check. But note
  * we are assuming that the qual can't distinguish values that the DISTINCT
- * operator sees as equal. This is a bit shaky but we have no way to test
+ * operator sees as equal. This is a bit shaky but we have no way to test
  * for the case, and it's unlikely enough that we shouldn't refuse the
  * optimization just because it could theoretically happen.)
  */

@@ -59,7 +59,7 @@ static void addRangeClause(RangeQueryClause **rqlist, Node *clause,
  * See clause_selectivity() for the meaning of the additional parameters.
  *
  * Our basic approach is to take the product of the selectivities of the
- * subclauses. However, that's only right if the subclauses have independent
+ * subclauses. However, that's only right if the subclauses have independent
  * probabilities, and in reality they are often NOT independent. So,
  * we want to be smarter where we can.
 
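The "product of selectivities" approach described in the comment above can be sketched as a standalone helper. This is illustrative only, not PostgreSQL code; the name and signature are assumptions.

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Multiply per-clause selectivities, assuming the clauses have
 * independent probabilities (illustrative sketch, not PostgreSQL code). */
static double
product_selectivity(const double *sels, size_t n)
{
	double		result = 1.0;

	for (size_t i = 0; i < n; i++)
		result *= sels[i];
	return result;
}
```

Two half-selective clauses combine to 0.25 under this model, which is exactly the over-optimism the comment warns about when the clauses are correlated.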
@@ -76,12 +76,12 @@ static void addRangeClause(RangeQueryClause **rqlist, Node *clause,
  * see that hisel is the fraction of the range below the high bound, while
  * losel is the fraction above the low bound; so hisel can be interpreted
  * directly as a 0..1 value but we need to convert losel to 1-losel before
- * interpreting it as a value. Then the available range is 1-losel to hisel.
+ * interpreting it as a value. Then the available range is 1-losel to hisel.
  * However, this calculation double-excludes nulls, so really we need
  * hisel + losel + null_frac - 1.)
  *
  * If either selectivity is exactly DEFAULT_INEQ_SEL, we forget this equation
- * and instead use DEFAULT_RANGE_INEQ_SEL. The same applies if the equation
+ * and instead use DEFAULT_RANGE_INEQ_SEL. The same applies if the equation
  * yields an impossible (negative) result.
  *
  * A free side-effect is that we can recognize redundant inequalities such
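The identity in the parenthetical above (hisel + losel + null_frac - 1) can be written out directly. In this sketch a clamp at zero stands in for the DEFAULT_RANGE_INEQ_SEL fallback; the function name is illustrative.

```c
#include <assert.h>
#include <math.h>

/* Combined selectivity of a lo/hi inequality pair, per the comment above:
 * hisel + losel + null_frac - 1, clamped when the equation yields an
 * impossible (negative) result. Illustrative sketch only. */
static double
range_pair_selectivity(double hisel, double losel, double null_frac)
{
	double		s2 = hisel + losel + null_frac - 1.0;

	return (s2 < 0.0) ? 0.0 : s2;
}
```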
@@ -175,7 +175,7 @@ clauselist_selectivity(PlannerInfo *root,
 {
 /*
  * If it's not a "<" or ">" operator, just merge the
- * selectivity in generically. But if it's the right oprrest,
+ * selectivity in generically. But if it's the right oprrest,
  * add the clause to rqlist for later processing.
  */
 switch (get_oprrest(expr->opno))
@@ -460,14 +460,14 @@ treat_as_join_clause(Node *clause, RestrictInfo *rinfo,
  * nestloop join's inner relation --- varRelid should then be the ID of the
  * inner relation.
  *
- * When varRelid is 0, all variables are treated as variables. This
+ * When varRelid is 0, all variables are treated as variables. This
  * is appropriate for ordinary join clauses and restriction clauses.
  *
  * jointype is the join type, if the clause is a join clause. Pass JOIN_INNER
  * if the clause isn't a join clause.
  *
  * sjinfo is NULL for a non-join clause, otherwise it provides additional
- * context information about the join being performed. There are some
+ * context information about the join being performed. There are some
  * special cases:
  * 1. For a special (not INNER) join, sjinfo is always a member of
  * root->join_info_list.
@@ -502,7 +502,7 @@ clause_selectivity(PlannerInfo *root,
 /*
  * If the clause is marked pseudoconstant, then it will be used as a
  * gating qual and should not affect selectivity estimates; hence
- * return 1.0. The only exception is that a constant FALSE may be
+ * return 1.0. The only exception is that a constant FALSE may be
  * taken as having selectivity 0.0, since it will surely mean no rows
  * out of the plan. This case is simple enough that we need not
  * bother caching the result.
@@ -521,11 +521,11 @@ clause_selectivity(PlannerInfo *root,
 
 /*
  * If possible, cache the result of the selectivity calculation for
- * the clause. We can cache if varRelid is zero or the clause
+ * the clause. We can cache if varRelid is zero or the clause
  * contains only vars of that relid --- otherwise varRelid will affect
  * the result, so mustn't cache. Outer join quals might be examined
  * with either their join's actual jointype or JOIN_INNER, so we need
- * two cache variables to remember both cases. Note: we assume the
+ * two cache variables to remember both cases. Note: we assume the
  * result won't change if we are switching the input relations or
  * considering a unique-ified case, so we only need one cache variable
  * for all non-JOIN_INNER cases.
@@ -686,7 +686,7 @@ clause_selectivity(PlannerInfo *root,
 /*
  * This is not an operator, so we guess at the selectivity. THIS IS A
  * HACK TO GET V4 OUT THE DOOR. FUNCS SHOULD BE ABLE TO HAVE
- * SELECTIVITIES THEMSELVES. -- JMH 7/9/92
+ * SELECTIVITIES THEMSELVES. -- JMH 7/9/92
  */
 s1 = (Selectivity) 0.3333333;
 }

@@ -24,7 +24,7 @@
  *
  * Obviously, taking constants for these values is an oversimplification,
  * but it's tough enough to get any useful estimates even at this level of
- * detail. Note that all of these parameters are user-settable, in case
+ * detail. Note that all of these parameters are user-settable, in case
  * the default values are drastically off for a particular platform.
  *
  * seq_page_cost and random_page_cost can also be overridden for an individual
@@ -455,7 +455,7 @@ cost_index(IndexPath *path, PlannerInfo *root,
  * computed for us by query_planner.
  *
  * Caller is expected to have ensured that tuples_fetched is greater than zero
- * and rounded to integer (see clamp_row_est). The result will likewise be
+ * and rounded to integer (see clamp_row_est). The result will likewise be
  * greater than zero and integral.
  */
 double
@@ -651,7 +651,7 @@ cost_bitmap_heap_scan(Path *path, PlannerInfo *root, RelOptInfo *baserel,
 /*
  * For small numbers of pages we should charge spc_random_page_cost
  * apiece, while if nearly all the table's pages are being read, it's more
- * appropriate to charge spc_seq_page_cost apiece. The effect is
+ * appropriate to charge spc_seq_page_cost apiece. The effect is
  * nonlinear, too. For lack of a better idea, interpolate like this to
  * determine the cost per page.
  */
@@ -723,7 +723,7 @@ cost_bitmap_tree_node(Path *path, Cost *cost, Selectivity *selec)
  * Estimate the cost of a BitmapAnd node
  *
  * Note that this considers only the costs of index scanning and bitmap
- * creation, not the eventual heap access. In that sense the object isn't
+ * creation, not the eventual heap access. In that sense the object isn't
  * truly a Path, but it has enough path-like properties (costs in particular)
  * to warrant treating it as one.
  */
@@ -780,7 +780,7 @@ cost_bitmap_or_node(BitmapOrPath *path, PlannerInfo *root)
 /*
  * We estimate OR selectivity on the assumption that the inputs are
  * non-overlapping, since that's often the case in "x IN (list)" type
- * situations. Of course, we clamp to 1.0 at the end.
+ * situations. Of course, we clamp to 1.0 at the end.
  *
  * The runtime cost of the BitmapOr itself is estimated at 100x
  * cpu_operator_cost for each tbm_union needed. Probably too small,
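Under the non-overlap assumption above, OR selectivity reduces to a clamped sum, which can be sketched as follows (illustrative helper, not PostgreSQL's implementation):

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* OR selectivity under the non-overlap assumption described above:
 * sum the inputs and clamp to 1.0. Illustrative sketch only. */
static double
or_selectivity(const double *sels, size_t n)
{
	double		s = 0.0;

	for (size_t i = 0; i < n; i++)
		s += sels[i];
	return (s > 1.0) ? 1.0 : s;
}
```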
@@ -857,7 +857,7 @@ cost_tidscan(Path *path, PlannerInfo *root,
 
 /*
  * We must force TID scan for WHERE CURRENT OF, because only nodeTidscan.c
- * understands how to do it correctly. Therefore, honor enable_tidscan
+ * understands how to do it correctly. Therefore, honor enable_tidscan
  * only when CURRENT OF isn't present. Also note that cost_qual_eval
  * counts a CurrentOfExpr as having startup cost disable_cost, which we
  * subtract off here; that's to prevent other plan types such as seqscan
@@ -950,7 +950,7 @@ cost_functionscan(Path *path, PlannerInfo *root, RelOptInfo *baserel)
  *
  * Currently, nodeFunctionscan.c always executes the function to
  * completion before returning any rows, and caches the results in a
- * tuplestore. So the function eval cost is all startup cost, and per-row
+ * tuplestore. So the function eval cost is all startup cost, and per-row
  * costs are minimal.
  *
  * XXX in principle we ought to charge tuplestore spill costs if the
@@ -1007,7 +1007,7 @@ cost_valuesscan(Path *path, PlannerInfo *root, RelOptInfo *baserel)
  *
  * Note: this is used for both self-reference and regular CTEs; the
  * possible cost differences are below the threshold of what we could
- * estimate accurately anyway. Note that the costs of evaluating the
+ * estimate accurately anyway. Note that the costs of evaluating the
  * referenced CTE query are added into the final plan as initplan costs,
  * and should NOT be counted here.
  */
@@ -1091,7 +1091,7 @@ cost_recursive_union(Plan *runion, Plan *nrterm, Plan *rterm)
  * If the total volume exceeds sort_mem, we switch to a tape-style merge
  * algorithm. There will still be about t*log2(t) tuple comparisons in
  * total, but we will also need to write and read each tuple once per
- * merge pass. We expect about ceil(logM(r)) merge passes where r is the
+ * merge pass. We expect about ceil(logM(r)) merge passes where r is the
  * number of initial runs formed and M is the merge order used by tuplesort.c.
  * Since the average initial run should be about twice sort_mem, we have
  * disk traffic = 2 * relsize * ceil(logM(p / (2*sort_mem)))
@@ -1105,7 +1105,7 @@ cost_recursive_union(Plan *runion, Plan *nrterm, Plan *rterm)
  * accesses (XXX can't we refine that guess?)
  *
  * By default, we charge two operator evals per tuple comparison, which should
- * be in the right ballpark in most cases. The caller can tweak this by
+ * be in the right ballpark in most cases. The caller can tweak this by
  * specifying nonzero comparison_cost; typically that's used for any extra
  * work that has to be done to prepare the inputs to the comparison operators.
  *
@@ -1227,7 +1227,7 @@ cost_sort(Path *path, PlannerInfo *root,
  * Determines and returns the cost of a MergeAppend node.
  *
  * MergeAppend merges several pre-sorted input streams, using a heap that
- * at any given instant holds the next tuple from each stream. If there
+ * at any given instant holds the next tuple from each stream. If there
  * are N streams, we need about N*log2(N) tuple comparisons to construct
  * the heap at startup, and then for each output tuple, about log2(N)
  * comparisons to delete the top heap entry and another log2(N) comparisons
@@ -1383,7 +1383,7 @@ cost_agg(Path *path, PlannerInfo *root,
  * group otherwise. We charge cpu_tuple_cost for each output tuple.
  *
  * Note: in this cost model, AGG_SORTED and AGG_HASHED have exactly the
- * same total CPU cost, but AGG_SORTED has lower startup cost. If the
+ * same total CPU cost, but AGG_SORTED has lower startup cost. If the
  * input path is already sorted appropriately, AGG_SORTED should be
  * preferred (since it has no risk of memory overflow). This will happen
  * as long as the computed total costs are indeed exactly equal --- but if
@@ -1709,10 +1709,10 @@ cost_nestloop(NestPath *path, PlannerInfo *root, SpecialJoinInfo *sjinfo)
  * Unlike other costsize functions, this routine makes one actual decision:
  * whether we should materialize the inner path. We do that either because
  * the inner path can't support mark/restore, or because it's cheaper to
- * use an interposed Material node to handle mark/restore. When the decision
+ * use an interposed Material node to handle mark/restore. When the decision
  * is cost-based it would be logically cleaner to build and cost two separate
  * paths with and without that flag set; but that would require repeating most
- * of the calculations here, which are not all that cheap. Since the choice
+ * of the calculations here, which are not all that cheap. Since the choice
  * will not affect output pathkeys or startup cost, only total cost, there is
  * no possibility of wanting to keep both paths. So it seems best to make
  * the decision here and record it in the path's materialize_inner field.
@@ -1775,7 +1775,7 @@ cost_mergejoin(MergePath *path, PlannerInfo *root, SpecialJoinInfo *sjinfo)
 qp_qual_cost.per_tuple -= merge_qual_cost.per_tuple;
 
 /*
- * Get approx # tuples passing the mergequals. We use approx_tuple_count
+ * Get approx # tuples passing the mergequals. We use approx_tuple_count
  * here because we need an estimate done with JOIN_INNER semantics.
  */
 mergejointuples = approx_tuple_count(root, &path->jpath, mergeclauses);
@@ -1789,7 +1789,7 @@ cost_mergejoin(MergePath *path, PlannerInfo *root, SpecialJoinInfo *sjinfo)
  * estimated approximately as size of merge join output minus size of
  * inner relation. Assume that the distinct key values are 1, 2, ..., and
  * denote the number of values of each key in the outer relation as m1,
- * m2, ...; in the inner relation, n1, n2, ... Then we have
+ * m2, ...; in the inner relation, n1, n2, ... Then we have
  *
  * size of join = m1 * n1 + m2 * n2 + ...
  *
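The "size of join" sum in the comment above is a straightforward dot product of the per-key-value counts. A sketch (hypothetical helper, for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* size of join = m1*n1 + m2*n2 + ... from the comment above, where
 * m[k] and n[k] count occurrences of key value k in the outer and
 * inner inputs. Illustrative sketch only. */
static double
merge_join_output_size(const double *m, const double *n, size_t nkeys)
{
	double		size = 0.0;

	for (size_t k = 0; k < nkeys; k++)
		size += m[k] * n[k];
	return size;
}
```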
@@ -1800,7 +1800,7 @@ cost_mergejoin(MergePath *path, PlannerInfo *root, SpecialJoinInfo *sjinfo)
  * This equation works correctly for outer tuples having no inner match
  * (nk = 0), but not for inner tuples having no outer match (mk = 0); we
  * are effectively subtracting those from the number of rescanned tuples,
- * when we should not. Can we do better without expensive selectivity
+ * when we should not. Can we do better without expensive selectivity
  * computations?
  *
  * The whole issue is moot if we are working from a unique-ified outer
@@ -1972,7 +1972,7 @@ cost_mergejoin(MergePath *path, PlannerInfo *root, SpecialJoinInfo *sjinfo)
 
 /*
  * Decide whether we want to materialize the inner input to shield it from
- * mark/restore and performing re-fetches. Our cost model for regular
+ * mark/restore and performing re-fetches. Our cost model for regular
  * re-fetches is that a re-fetch costs the same as an original fetch,
  * which is probably an overestimate; but on the other hand we ignore the
  * bookkeeping costs of mark/restore. Not clear if it's worth developing
@@ -2065,7 +2065,7 @@ cost_mergejoin(MergePath *path, PlannerInfo *root, SpecialJoinInfo *sjinfo)
 /*
  * For each tuple that gets through the mergejoin proper, we charge
  * cpu_tuple_cost plus the cost of evaluating additional restriction
- * clauses that are to be applied at the join. (This is pessimistic since
+ * clauses that are to be applied at the join. (This is pessimistic since
  * not all of the quals may get evaluated at each tuple.)
  *
  * Note: we could adjust for SEMI/ANTI joins skipping some qual
@@ -2292,7 +2292,7 @@ cost_hashjoin(HashPath *path, PlannerInfo *root, SpecialJoinInfo *sjinfo)
  * If inner relation is too big then we will need to "batch" the join,
  * which implies writing and reading most of the tuples to disk an extra
  * time. Charge seq_page_cost per page, since the I/O should be nice and
- * sequential. Writing the inner rel counts as startup cost, all the rest
+ * sequential. Writing the inner rel counts as startup cost, all the rest
  * as run cost.
  */
 if (numbatches > 1)
@@ -2384,7 +2384,7 @@ cost_hashjoin(HashPath *path, PlannerInfo *root, SpecialJoinInfo *sjinfo)
 /*
  * For each tuple that gets through the hashjoin proper, we charge
  * cpu_tuple_cost plus the cost of evaluating additional restriction
- * clauses that are to be applied at the join. (This is pessimistic since
+ * clauses that are to be applied at the join. (This is pessimistic since
  * not all of the quals may get evaluated at each tuple.)
  */
 startup_cost += qp_qual_cost.startup;
@@ -2437,7 +2437,7 @@ cost_subplan(PlannerInfo *root, SubPlan *subplan, Plan *plan)
 {
 /*
  * Otherwise we will be rescanning the subplan output on each
- * evaluation. We need to estimate how much of the output we will
+ * evaluation. We need to estimate how much of the output we will
  * actually need to scan. NOTE: this logic should agree with the
  * tuple_fraction estimates used by make_subplan() in
  * plan/subselect.c.
@@ -2485,10 +2485,10 @@ cost_subplan(PlannerInfo *root, SubPlan *subplan, Plan *plan)
 /*
  * cost_rescan
  * Given a finished Path, estimate the costs of rescanning it after
- * having done so the first time. For some Path types a rescan is
+ * having done so the first time. For some Path types a rescan is
  * cheaper than an original scan (if no parameters change), and this
  * function embodies knowledge about that. The default is to return
- * the same costs stored in the Path. (Note that the cost estimates
+ * the same costs stored in the Path. (Note that the cost estimates
  * actually stored in Paths are always for first scans.)
  *
  * This function is not currently intended to model effects such as rescans
@@ -2529,7 +2529,7 @@ cost_rescan(PlannerInfo *root, Path *path,
 {
 /*
  * These plan types materialize their final result in a
- * tuplestore or tuplesort object. So the rescan cost is only
+ * tuplestore or tuplesort object. So the rescan cost is only
  * cpu_tuple_cost per tuple, unless the result is large enough
  * to spill to disk.
  */
@@ -2554,8 +2554,8 @@ cost_rescan(PlannerInfo *root, Path *path,
 {
 /*
  * These plan types not only materialize their results, but do
- * not implement qual filtering or projection. So they are
- * even cheaper to rescan than the ones above. We charge only
+ * not implement qual filtering or projection. So they are
+ * even cheaper to rescan than the ones above. We charge only
  * cpu_operator_cost per tuple. (Note: keep that in sync with
  * the run_cost charge in cost_sort, and also see comments in
  * cost_material before you change it.)
@@ -2696,7 +2696,7 @@ cost_qual_eval_walker(Node *node, cost_qual_eval_context *context)
  * evaluation of AND/OR? Probably *not*, because that would make the
  * results depend on the clause ordering, and we are not in any position
  * to expect that the current ordering of the clauses is the one that's
- * going to end up being used. The above per-RestrictInfo caching would
+ * going to end up being used. The above per-RestrictInfo caching would
  * not mix well with trying to re-order clauses anyway.
  */
 if (IsA(node, FuncExpr))
@@ -2811,7 +2811,7 @@ cost_qual_eval_walker(Node *node, cost_qual_eval_context *context)
 else if (IsA(node, AlternativeSubPlan))
 {
 /*
- * Arbitrarily use the first alternative plan for costing. (We should
+ * Arbitrarily use the first alternative plan for costing. (We should
  * certainly only include one alternative, and we don't yet have
  * enough information to know which one the executor is most likely to
  * use.)
@@ -2937,7 +2937,7 @@ adjust_semi_join(PlannerInfo *root, JoinPath *path, SpecialJoinInfo *sjinfo,
 /*
  * jselec can be interpreted as the fraction of outer-rel rows that have
  * any matches (this is true for both SEMI and ANTI cases). And nselec is
- * the fraction of the Cartesian product that matches. So, the average
+ * the fraction of the Cartesian product that matches. So, the average
  * number of matches for each outer-rel row that has at least one match is
  * nselec * inner_rows / jselec.
  *
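The average-match figure in this comment follows from conditioning on "has at least one match": total matches (nselec * outer_rows * inner_rows) divided by matched outer rows (jselec * outer_rows). Written out, with illustrative names:

```c
#include <assert.h>
#include <math.h>

/* Average number of inner matches per outer row that has at least one
 * match, per the comment above: nselec * inner_rows / jselec.
 * Illustrative sketch; assumes jselec > 0. */
static double
avg_matches_per_matched_row(double nselec, double inner_rows, double jselec)
{
	return nselec * inner_rows / jselec;
}
```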
@@ -2960,7 +2960,7 @@ adjust_semi_join(PlannerInfo *root, JoinPath *path, SpecialJoinInfo *sjinfo,
 
 /*
  * If requested, check whether the inner path uses all the joinquals as
- * indexquals. (If that's true, we can assume that an unmatched outer
+ * indexquals. (If that's true, we can assume that an unmatched outer
  * tuple is cheap to process, whereas otherwise it's probably expensive.)
  */
 if (indexed_join_quals)
@@ -3117,7 +3117,7 @@ set_joinrel_size_estimates(PlannerInfo *root, RelOptInfo *rel,
 double nrows;
 
 /*
- * Compute joinclause selectivity. Note that we are only considering
+ * Compute joinclause selectivity. Note that we are only considering
  * clauses that become restriction clauses at this join level; we are not
  * double-counting them because they were not considered in estimating the
  * sizes of the component rels.
@@ -3175,7 +3175,7 @@ set_joinrel_size_estimates(PlannerInfo *root, RelOptInfo *rel,
  *
  * If we are doing an outer join, take that into account: the joinqual
  * selectivity has to be clamped using the knowledge that the output must
- * be at least as large as the non-nullable input. However, any
+ * be at least as large as the non-nullable input. However, any
  * pushed-down quals are applied after the outer join, so their
  * selectivity applies fully.
  *
@@ -3246,7 +3246,7 @@ set_subquery_size_estimates(PlannerInfo *root, RelOptInfo *rel,
 
 /*
  * Compute per-output-column width estimates by examining the subquery's
- * targetlist. For any output that is a plain Var, get the width estimate
+ * targetlist. For any output that is a plain Var, get the width estimate
  * that was made while planning the subquery. Otherwise, we leave it to
  * set_rel_width to fill in a datatype-based default estimate.
  */
@@ -3402,7 +3402,7 @@ set_cte_size_estimates(PlannerInfo *root, RelOptInfo *rel, Plan *cteplan)
  * of estimating baserestrictcost, so we set that, and we also set up width
  * using what will be purely datatype-driven estimates from the targetlist.
  * There is no way to do anything sane with the rows value, so we just put
- * a default estimate and hope that the wrapper can improve on it. The
+ * a default estimate and hope that the wrapper can improve on it. The
  * wrapper's PlanForeignScan function will be called momentarily.
  *
  * The rel's targetlist and restrictinfo list must have been constructed
@@ -3517,7 +3517,7 @@ set_rel_width(PlannerInfo *root, RelOptInfo *rel)
 {
 /*
  * We could be looking at an expression pulled up from a subquery,
- * or a ROW() representing a whole-row child Var, etc. Do what we
+ * or a ROW() representing a whole-row child Var, etc. Do what we
  * can using the expression type information.
  */
 int32 item_width;

@@ -73,7 +73,7 @@ static bool reconsider_full_join_clause(PlannerInfo *root,
  *
  * If below_outer_join is true, then the clause was found below the nullable
  * side of an outer join, so its sides might validly be both NULL rather than
- * strictly equal. We can still deduce equalities in such cases, but we take
+ * strictly equal. We can still deduce equalities in such cases, but we take
  * care to mark an EquivalenceClass if it came from any such clauses. Also,
  * we have to check that both sides are either pseudo-constants or strict
  * functions of Vars, else they might not both go to NULL above the outer
@@ -140,9 +140,9 @@ process_equivalence(PlannerInfo *root, RestrictInfo *restrictinfo,
 collation);
 
 /*
- * Reject clauses of the form X=X. These are not as redundant as they
+ * Reject clauses of the form X=X. These are not as redundant as they
  * might seem at first glance: assuming the operator is strict, this is
- * really an expensive way to write X IS NOT NULL. So we must not risk
+ * really an expensive way to write X IS NOT NULL. So we must not risk
  * just losing the clause, which would be possible if there is already a
  * single-element EquivalenceClass containing X. The case is not common
  * enough to be worth contorting the EC machinery for, so just reject the
@@ -186,14 +186,14 @@ process_equivalence(PlannerInfo *root, RestrictInfo *restrictinfo,
  * Sweep through the existing EquivalenceClasses looking for matches to
  * item1 and item2. These are the possible outcomes:
  *
- * 1. We find both in the same EC. The equivalence is already known, so
+ * 1. We find both in the same EC. The equivalence is already known, so
  * there's nothing to do.
  *
  * 2. We find both in different ECs. Merge the two ECs together.
  *
  * 3. We find just one. Add the other to its EC.
  *
- * 4. We find neither. Make a new, two-entry EC.
+ * 4. We find neither. Make a new, two-entry EC.
  *
  * Note: since all ECs are built through this process or the similar
  * search in get_eclass_for_sort_expr(), it's impossible that we'd match
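The four outcomes enumerated above can be captured in a small decision helper. The types and names here are hypothetical — a sketch of the case analysis, not the EC machinery itself.

```c
#include <assert.h>

/* Classify the sweep's result per the comment above. ec1/ec2 are the
 * indexes of the EquivalenceClasses containing item1/item2, or -1 if
 * the item was not found. Hypothetical representation, illustration only. */
typedef enum
{
	EC_ALREADY_KNOWN,			/* 1. both found in the same EC */
	EC_MERGE,					/* 2. both found, in different ECs */
	EC_ADD_MEMBER,				/* 3. just one found */
	EC_MAKE_NEW					/* 4. neither found */
} EcOutcome;

static EcOutcome
classify_ec_match(int ec1, int ec2)
{
	if (ec1 >= 0 && ec2 >= 0)
		return (ec1 == ec2) ? EC_ALREADY_KNOWN : EC_MERGE;
	if (ec1 >= 0 || ec2 >= 0)
		return EC_ADD_MEMBER;
	return EC_MAKE_NEW;
}
```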
@@ -397,7 +397,7 @@ process_equivalence(PlannerInfo *root, RestrictInfo *restrictinfo,
* Also, the expression's exposed collation must match the EC's collation.
* This is important because in comparisons like "foo < bar COLLATE baz",
* only one of the expressions has the correct exposed collation as we receive
* it from the parser. Forcing both of them to have it ensures that all
* variant spellings of such a construct behave the same. Again, we can
* stick on a RelabelType to force the right exposed collation. (It might
* work to not label the collation at all in EC members, but this is risky
@@ -502,7 +502,7 @@ add_eq_member(EquivalenceClass *ec, Expr *expr, Relids relids,
* single-member EquivalenceClass for it.
*
* sortref is the SortGroupRef of the originating SortGroupClause, if any,
* or zero if not. (It should never be zero if the expression is volatile!)
*
* If rel is not NULL, it identifies a specific relation we're considering
* a path for, and indicates that child EC members for that relation can be
@@ -656,7 +656,7 @@ get_eclass_for_sort_expr(PlannerInfo *root,
*
* When an EC contains pseudoconstants, our strategy is to generate
* "member = const1" clauses where const1 is the first constant member, for
* every other member (including other constants). If we are able to do this
* then we don't need any "var = var" comparisons because we've successfully
* constrained all the vars at their points of creation. If we fail to
* generate any of these clauses due to lack of cross-type operators, we fall
@@ -681,7 +681,7 @@ get_eclass_for_sort_expr(PlannerInfo *root,
* "WHERE a.x = b.y AND b.y = a.z", the scheme breaks down if we cannot
* generate "a.x = a.z" as a restriction clause for A.) In this case we mark
* the EC "ec_broken" and fall back to regurgitating its original source
* RestrictInfos at appropriate times. We do not try to retract any derived
* clauses already generated from the broken EC, so the resulting plan could
* be poor due to bad selectivity estimates caused by redundant clauses. But
* the correct solution to that is to fix the opfamilies ...
@@ -941,7 +941,7 @@ generate_base_implied_equalities_broken(PlannerInfo *root,
* we consider different join paths, we avoid generating multiple copies:
* whenever we select a particular pair of EquivalenceMembers to join,
* we check to see if the pair matches any original clause (in ec_sources)
* or previously-built clause (in ec_derives). This saves memory and allows
* re-use of information cached in RestrictInfos.
*/
List *
@@ -1011,7 +1011,7 @@ generate_join_implied_equalities_normal(PlannerInfo *root,
* First, scan the EC to identify member values that are computable at the
* outer rel, at the inner rel, or at this relation but not in either
* input rel. The outer-rel members should already be enforced equal,
* likewise for the inner-rel members. We'll need to create clauses to
* enforce that any newly computable members are all equal to each other
* as well as to at least one input member, plus enforce at least one
* outer-rel member equal to at least one inner-rel member.
@@ -1034,7 +1034,7 @@ generate_join_implied_equalities_normal(PlannerInfo *root,
}

/*
* First, select the joinclause if needed. We can equate any one outer
* member to any one inner member, but we have to find a datatype
* combination for which an opfamily member operator exists. If we have
* choices, we prefer simple Var members (possibly with RelabelType) since
@@ -1236,8 +1236,8 @@ create_join_clause(PlannerInfo *root,

/*
* Search to see if we already built a RestrictInfo for this pair of
* EquivalenceMembers. We can use either original source clauses or
* previously-derived clauses. The check on opno is probably redundant,
* but be safe ...
*/
foreach(lc, ec->ec_sources)
@@ -1368,7 +1368,7 @@ create_join_clause(PlannerInfo *root,
*
* Outer join clauses that are marked outerjoin_delayed are special: this
* condition means that one or both VARs might go to null due to a lower
* outer join. We can still push a constant through the clause, but only
* if its operator is strict; and we *have to* throw the clause back into
* regular joinclause processing. By keeping the strict join clause,
* we ensure that any null-extended rows that are mistakenly generated due
@@ -1562,7 +1562,7 @@ reconsider_outer_join_clause(PlannerInfo *root, RestrictInfo *rinfo,

/*
* Yes it does! Try to generate a clause INNERVAR = CONSTANT for each
* CONSTANT in the EC. Note that we must succeed with at least one
* constant before we can decide to throw away the outer-join clause.
*/
match = false;
@@ -2051,7 +2051,7 @@ find_eclass_clauses_for_index_join(PlannerInfo *root, RelOptInfo *rel,
* a joinclause between the two given relations.
*
* This is essentially a very cut-down version of
* generate_join_implied_equalities(). Note it's OK to occasionally say "yes"
* incorrectly. Hence we don't bother with details like whether the lack of a
* cross-type operator might prevent the clause from actually being generated.
*/
@@ -2081,7 +2081,7 @@ have_relevant_eclass_joinclause(PlannerInfo *root,
* as a possibly-overoptimistic heuristic.
*
* We don't test ec_has_const either, even though a const eclass won't
* generate real join clauses. This is because if we had "WHERE a.x =
* b.y and a.x = 42", it is worth considering a join between a and b,
* since the join result is likely to be small even though it'll end
* up being an unqualified nestloop.
@@ -2155,7 +2155,7 @@ has_relevant_eclass_joinclause(PlannerInfo *root, RelOptInfo *rel1)
* as a possibly-overoptimistic heuristic.
*
* We don't test ec_has_const either, even though a const eclass won't
* generate real join clauses. This is because if we had "WHERE a.x =
* b.y and a.x = 42", it is worth considering a join between a and b,
* since the join result is likely to be small even though it'll end
* up being an unqualified nestloop.
@@ -2202,7 +2202,7 @@ has_relevant_eclass_joinclause(PlannerInfo *root, RelOptInfo *rel1)
* against the specified relation.
*
* This is just a heuristic test and doesn't have to be exact; it's better
* to say "yes" incorrectly than "no". Hence we don't bother with details
* like whether the lack of a cross-type operator might prevent the clause
* from actually being generated.
*/
@@ -2223,7 +2223,7 @@ eclass_useful_for_merging(EquivalenceClass *eclass,

/*
* Note we don't test ec_broken; if we did, we'd need a separate code path
* to look through ec_sources. Checking the members anyway is OK as a
* possibly-overoptimistic heuristic.
*/

@@ -157,7 +157,7 @@ static Const *string_to_const(const char *str, Oid datatype);
* scan this routine deems potentially interesting for the current query.
*
* We also determine the set of other relids that participate in join
* clauses that could be used with each index. The actually best innerjoin
* path will be generated for each outer relation later on, but knowing the
* set of potential otherrels allows us to identify equivalent outer relations
* and avoid repeated computation.
@@ -334,16 +334,16 @@ find_usable_indexes(PlannerInfo *root, RelOptInfo *rel,
}

/*
* Ignore partial indexes that do not match the query. If a partial
* index is marked predOK then we know it's OK; otherwise, if we are
* at top level we know it's not OK (since predOK is exactly whether
* its predicate could be proven from the toplevel clauses).
* Otherwise, we have to test whether the added clauses are sufficient
* to imply the predicate. If so, we could use the index in the
* current context.
*
* We set useful_predicate to true iff the predicate was proven using
* the current set of clauses. This is needed to prevent matching a
* predOK index to an arm of an OR, which would be a legal but
* pointlessly inefficient plan. (A better plan will be generated by
* just scanning the predOK index alone, no OR.)
@@ -636,7 +636,7 @@ generate_bitmap_or_paths(PlannerInfo *root, RelOptInfo *rel,
* Given a nonempty list of bitmap paths, AND them into one path.
*
* This is a nontrivial decision since we can legally use any subset of the
* given path set. We want to choose a good tradeoff between selectivity
* and cost of computing the bitmap.
*
* The result is either a single one of the inputs, or a BitmapAndPath
@@ -664,12 +664,12 @@ choose_bitmap_and(PlannerInfo *root, RelOptInfo *rel,
* In theory we should consider every nonempty subset of the given paths.
* In practice that seems like overkill, given the crude nature of the
* estimates, not to mention the possible effects of higher-level AND and
* OR clauses. Moreover, it's completely impractical if there are a large
* number of paths, since the work would grow as O(2^N).
*
* As a heuristic, we first check for paths using exactly the same sets of
* WHERE clauses + index predicate conditions, and reject all but the
* cheapest-to-scan in any such group. This primarily gets rid of indexes
* that include the interesting columns but also irrelevant columns. (In
* situations where the DBA has gone overboard on creating variant
* indexes, this can make for a very large reduction in the number of
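The "same clause set, keep cheapest" step described above can be sketched as follows (a toy model only: clause sets are represented as bitmasks, and `BPath`/`reject_duplicate_clause_sets` are hypothetical names, not the real planner API):

```c
/* Toy bitmap path: which WHERE/predicate clauses it uses, and its scan cost. */
typedef struct { unsigned clauses; double cost; } BPath;

/*
 * Keep only the cheapest-to-scan path within each group of paths that use
 * exactly the same clause set.  Survivors are packed at the front of the
 * array; the new count is returned.
 */
static int reject_duplicate_clause_sets(BPath *paths, int n)
{
    int nkept = 0;

    for (int i = 0; i < n; i++)
    {
        int j;

        for (j = 0; j < nkept; j++)
        {
            if (paths[j].clauses == paths[i].clauses)
            {
                if (paths[i].cost < paths[j].cost)
                    paths[j] = paths[i];    /* cheaper path wins its group */
                break;
            }
        }
        if (j == nkept)
            paths[nkept++] = paths[i];      /* first path with this clause set */
    }
    return nkept;
}
```

With the groups collapsed this way, the later O(N^2) "AND group leader" pass runs over far fewer paths than the original input.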
@@ -689,14 +689,14 @@ choose_bitmap_and(PlannerInfo *root, RelOptInfo *rel,
* costsize.c and clausesel.c aren't very smart about redundant clauses.
* They will usually double-count the redundant clauses, producing a
* too-small selectivity that makes a redundant AND step look like it
* reduces the total cost. Perhaps someday that code will be smarter and
* we can remove this limitation. (But note that this also defends
* against flat-out duplicate input paths, which can happen because
* best_inner_indexscan will find the same OR join clauses that
* create_or_index_quals has pulled OR restriction clauses out of.)
*
* For the same reason, we reject AND combinations in which an index
* predicate clause duplicates another clause. Here we find it necessary
* to be even stricter: we'll reject a partial index if any of its
* predicate clauses are implied by the set of WHERE clauses and predicate
* clauses used so far. This covers cases such as a condition "x = 42"
@@ -759,7 +759,7 @@ choose_bitmap_and(PlannerInfo *root, RelOptInfo *rel,
/*
* For each surviving index, consider it as an "AND group leader", and see
* whether adding on any of the later indexes results in an AND path with
* cheaper total cost than before. Then take the cheapest AND group.
*/
for (i = 0; i < npaths; i++)
{
@@ -1015,7 +1015,7 @@ find_indexpath_quals(Path *bitmapqual, List **quals, List **preds)
/*
* find_list_position
* Return the given node's position (counting from 0) in the given
* list of nodes. If it's not equal() to any existing list member,
* add it at the end, and return that position.
*/
static int
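The contract of find_list_position is simple enough to sketch standalone. Here C strings and strcmp() stand in for planner Node trees and equal(), and the fixed-size `NodeList` is a toy, not the real List API:

```c
#include <string.h>

#define MAX_NODES 32

/* Toy node list: strings stand in for Node trees, strcmp() for equal(). */
typedef struct { int n; const char *items[MAX_NODES]; } NodeList;

/*
 * Return the given node's position (counting from 0) in the list; if no
 * existing member matches, append it and return that new position.
 */
static int find_list_position(const char *node, NodeList *list)
{
    for (int i = 0; i < list->n; i++)
        if (strcmp(list->items[i], node) == 0)
            return i;
    list->items[list->n] = node;
    return list->n++;
}
```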
@@ -1056,7 +1056,7 @@ find_list_position(Node *node, List **nodelist)
*
* We can use clauses from either the current clauses or outer_clauses lists,
* but *found_clause is set TRUE only if we used at least one clause from
* the "current clauses" list. See find_usable_indexes() for motivation.
*
* outer_relids determines what Vars will be allowed on the other side
* of a possible index qual; see match_clause_to_indexcol().
@@ -1169,7 +1169,7 @@ group_clauses_by_indexkey(IndexOptInfo *index,
* to the caller-specified outer_relids relations (which had better not
* include the relation whose index is being tested). outer_relids should
* be NULL when checking simple restriction clauses, and the outer side
* of the join when building a join inner scan. Other than that, the
* only thing we don't like is volatile functions.
*
* Note: in most cases we already know that the clause as a whole uses
@@ -1194,7 +1194,7 @@ group_clauses_by_indexkey(IndexOptInfo *index,
* It is also possible to match RowCompareExpr clauses to indexes (but
* currently, only btree indexes handle this). In this routine we will
* report a match if the first column of the row comparison matches the
* target index column. This is sufficient to guarantee that some index
* condition can be constructed from the RowCompareExpr --- whether the
* remaining columns match the index too is considered in
* expand_indexqual_rowcompare().
@@ -1237,7 +1237,7 @@ match_clause_to_indexcol(IndexOptInfo *index,
bool plain_op;

/*
* Never match pseudoconstants to indexes. (Normally this could not
* happen anyway, since a pseudoconstant clause couldn't contain a Var,
* but what if someone builds an expression index on a constant? It's not
* totally unreasonable to do so with a partial index, either.)
@@ -1552,7 +1552,7 @@ match_index_to_pathkeys(IndexOptInfo *index, List *pathkeys)
* Note that we currently do not consider the collation of the ordering
* operator's result. In practical cases the result type will be numeric
* and thus have no collation, and it's not very clear what to match to
* if it did have a collation. The index's collation should match the
* ordering operator's input collation, not its result.
*
* If successful, return 'clause' as-is if the indexkey is on the left,
@@ -1683,7 +1683,7 @@ check_partial_indexes(PlannerInfo *root, RelOptInfo *rel)
/*
* indexable_outerrelids
* Finds all other relids that participate in any indexable join clause
* for the specified table. Returns a set of relids.
*/
static Relids
indexable_outerrelids(PlannerInfo *root, RelOptInfo *rel)
@@ -1870,7 +1870,7 @@ eclass_matches_any_index(EquivalenceClass *ec, EquivalenceMember *em,
* compatible with the EC, since no clause generated from the EC
* could be used with the index. For non-btree indexes, we can't
* easily tell whether clauses generated from the EC could be used
* with the index, so only check for expression match. This might
* mean we return "true" for a useless index, but that will just
* cause some wasted planner cycles; it's better than ignoring
* useful indexes.
@@ -1970,7 +1970,7 @@ best_inner_indexscan(PlannerInfo *root, RelOptInfo *rel,
/*
* Look to see if we already computed the result for this set of relevant
* outerrels. (We include the isouterjoin status in the cache lookup key
* for safety. In practice I suspect this is not necessary because it
* should always be the same for a given combination of rels.)
*
* NOTE: because we cache on outer_relids rather than outer_rel->relids,
@@ -1999,7 +1999,7 @@ best_inner_indexscan(PlannerInfo *root, RelOptInfo *rel,
*
* Note: because we include restriction clauses, we will find indexscans
* that could be plain indexscans, ie, they don't require the join context
* at all. This may seem redundant, but we need to include those scans in
* the input given to choose_bitmap_and() to be sure we find optimal AND
* combinations of join and non-join scans. Also, even if the "best inner
* indexscan" is just a plain indexscan, it will have a different cost
@@ -2137,7 +2137,7 @@ find_clauses_for_join(PlannerInfo *root, RelOptInfo *rel,

/*
* Also check to see if any EquivalenceClasses can produce a relevant
* joinclause. Since all such clauses are effectively pushed-down, this
* doesn't apply to outer joins.
*/
if (!isouterjoin && rel->has_eclass_joins)
@@ -2293,7 +2293,7 @@ match_index_to_operand(Node *operand,
int indkey;

/*
* Ignore any RelabelType node above the operand. This is needed to be
* able to apply indexscanning in binary-compatible-operator cases. Note:
* we can assume there is at most one RelabelType node;
* eval_const_expressions() will have simplified if more than one.
@@ -2360,10 +2360,10 @@ match_index_to_operand(Node *operand,
* indexscan machinery. The key idea is that these operators allow us
* to derive approximate indexscan qual clauses, such that any tuples
* that pass the operator clause itself must also satisfy the simpler
* indexscan condition(s). Then we can use the indexscan machinery
* to avoid scanning as much of the table as we'd otherwise have to,
* while applying the original operator as a qpqual condition to ensure
* we deliver only the tuples we want. (In essence, we're using a regular
* index as if it were a lossy index.)
*
* An example of what we're doing is
@@ -2377,7 +2377,7 @@ match_index_to_operand(Node *operand,
*
* Another thing that we do with this machinery is to provide special
* smarts for "boolean" indexes (that is, indexes on boolean columns
* that support boolean equality). We can transform a plain reference
* to the indexkey into "indexkey = true", or "NOT indexkey" into
* "indexkey = false", so as to make the expression indexable using the
* regular index operators. (As of Postgres 8.1, we must do this here
@@ -2798,7 +2798,7 @@ expand_indexqual_opclause(RestrictInfo *rinfo, Oid opfamily, Oid idxcollation)
/*
* LIKE and regex operators are not members of any btree index opfamily,
* but they can be members of opfamilies for more exotic index types such
* as GIN. Therefore, we should only do expansion if the operator is
* actually not in the opfamily. But checking that requires a syscache
* lookup, so it's best to first see if the operator is one we are
* interested in.
@@ -2881,7 +2881,7 @@ expand_indexqual_opclause(RestrictInfo *rinfo, Oid opfamily, Oid idxcollation)
* column matches) or a simple OpExpr (if the first-column match is all
* there is). In these cases the modified clause is always "<=" or ">="
* even when the original was "<" or ">" --- this is necessary to match all
* the rows that could match the original. (We are essentially building a
* lossy version of the row comparison when we do this.)
*/
static RestrictInfo *
@@ -2964,7 +2964,7 @@ expand_indexqual_rowcompare(RestrictInfo *rinfo,
break; /* no good, volatile comparison value */

/*
* The Var side can match any column of the index. If the user does
* something weird like having multiple identical index columns, we
* insist the match be on the first such column, to avoid confusing
* the executor.

@@ -97,7 +97,7 @@ add_paths_to_joinrel(PlannerInfo *root,

/*
* 1. Consider mergejoin paths where both relations must be explicitly
* sorted. Skip this if we can't mergejoin.
*/
if (mergejoin_allowed)
sort_inner_and_outer(root, joinrel, outerrel, innerrel,
@@ -118,7 +118,7 @@ add_paths_to_joinrel(PlannerInfo *root,

/*
* 3. Consider paths where the inner relation need not be explicitly
* sorted. This includes mergejoins only (nestloops were already built in
* match_unsorted_outer).
*
* Diked out as redundant 2/13/2000 -- tgl. There isn't any really
@@ -149,7 +149,7 @@ add_paths_to_joinrel(PlannerInfo *root,
* We already know that the clause is a binary opclause referencing only the
* rels in the current join. The point here is to check whether it has the
* form "outerrel_expr op innerrel_expr" or "innerrel_expr op outerrel_expr",
* rather than mixing outer and inner vars on either side. If it matches,
* we set the transient flag outer_is_left to identify which side is which.
*/
static inline bool
@@ -238,7 +238,7 @@ sort_inner_and_outer(PlannerInfo *root,
*
* Actually, it's not quite true that every mergeclause ordering will
* generate a different path order, because some of the clauses may be
* partially redundant (refer to the same EquivalenceClasses). Therefore,
* what we do is convert the mergeclause list to a list of canonical
* pathkeys, and then consider different orderings of the pathkeys.
*
@@ -331,7 +331,7 @@ sort_inner_and_outer(PlannerInfo *root,
* cheapest-total inner-indexscan path (if any), and one on the
* cheapest-startup inner-indexscan path (if different).
*
* We also consider mergejoins if mergejoin clauses are available. We have
* two ways to generate the inner path for a mergejoin: sort the cheapest
* inner path, or use an inner path that is already suitably ordered for the
* merge. If we have several mergeclauses, it could be that there is no inner
@@ -648,7 +648,7 @@ match_unsorted_outer(PlannerInfo *root,

/*
* Look for an inner path ordered well enough for the first
* 'sortkeycnt' innersortkeys. NB: trialsortkeys list is modified
* destructively, which is why we made a copy...
*/
trialsortkeys = list_truncate(trialsortkeys, sortkeycnt);
@@ -857,7 +857,7 @@ hash_inner_and_outer(PlannerInfo *root,
* best_appendrel_indexscan
* Finds the best available set of inner indexscans for a nestloop join
* with the given append relation on the inside and the given outer_rel
* outside. Returns an AppendPath comprising the best inner scans, or
* NULL if there are no possible inner indexscans.
*
* Note that we currently consider only cheapest-total-cost. It's not

@@ -182,7 +182,7 @@ join_search_one_level(PlannerInfo *root, int level)
* SELECT * FROM a,b,c WHERE (a.f1 + b.f2 + c.f3) = 0;
*
* The join clause will be usable at level 3, but at level 2 we have no
* choice but to make cartesian joins. We consider only left-sided and
* right-sided cartesian joins in this case (no bushy).
*/
if (joinrels[level] == NIL)
@@ -210,7 +210,7 @@ join_search_one_level(PlannerInfo *root, int level)

/*----------
* When special joins are involved, there may be no legal way
* to make an N-way join for some values of N. For example consider
*
* SELECT ... FROM t1 WHERE
* x IN (SELECT ... FROM t2,t3 WHERE ...) AND
@@ -329,7 +329,7 @@ join_is_legal(PlannerInfo *root, RelOptInfo *rel1, RelOptInfo *rel2,
ListCell *l;

/*
* Ensure output params are set on failure return. This is just to
* suppress uninitialized-variable warnings from overly anal compilers.
*/
*sjinfo_p = NULL;
@@ -337,7 +337,7 @@ join_is_legal(PlannerInfo *root, RelOptInfo *rel1, RelOptInfo *rel2,

/*
* If we have any special joins, the proposed join might be illegal; and
* in any case we have to determine its join type. Scan the join info
* list for conflicts.
*/
match_sjinfo = NULL;
@@ -560,7 +560,7 @@ make_join_rel(PlannerInfo *root, RelOptInfo *rel1, RelOptInfo *rel2)

/*
* If it's a plain inner join, then we won't have found anything in
* join_info_list. Make up a SpecialJoinInfo so that selectivity
* estimation functions will know what's being joined.
*/
if (sjinfo == NULL)
@@ -848,7 +848,7 @@ have_join_order_restriction(PlannerInfo *root,
*
* Essentially, this tests whether have_join_order_restriction() could
* succeed with this rel and some other one. It's OK if we sometimes
* say "true" incorrectly. (Therefore, we don't bother with the relatively
* expensive has_legal_joinclause test.)
*/
static bool
@@ -953,7 +953,7 @@ is_dummy_rel(RelOptInfo *rel)
* dummy.
*
* Also, when called during GEQO join planning, we are in a short-lived
* memory context. We must make sure that the dummy path attached to a
* baserel survives the GEQO cycle, else the baserel is trashed for future
* GEQO cycles. On the other hand, when we are marking a joinrel during GEQO,
* we don't want the dummy path to clutter the main planning context. Upshot

@@ -41,7 +41,7 @@
*
* The added quals are partially redundant with the original OR, and therefore
* will cause the size of the joinrel to be underestimated when it is finally
* formed. (This would be true of a full transformation to CNF as well; the
* fault is not really in the transformation, but in clauselist_selectivity's
* inability to recognize redundant conditions.) To minimize the collateral
* damage, we want to minimize the number of quals added. Therefore we do
@@ -56,7 +56,7 @@
* it is finally formed. This is a MAJOR HACK: it depends on the fact
* that clause selectivities are cached and on the fact that the same
* RestrictInfo node will appear in every joininfo list that might be used
* when the joinrel is formed. And it probably isn't right in cases where
* the size estimation is nonlinear (i.e., outer and IN joins). But it
* beats not doing anything.
*
@@ -96,10 +96,10 @@ create_or_index_quals(PlannerInfo *root, RelOptInfo *rel)
* enforced at the relation scan level.
*
* We must also ignore clauses that are marked !is_pushed_down (ie they
* are themselves outer-join clauses). It would be safe to extract an
* index condition from such a clause if we are within the nullable rather
* than the non-nullable side of its join, but we haven't got enough
* context here to tell which applies. OR clauses in outer-join quals
* aren't exactly common, so we'll let that case go unoptimized for now.
*/
foreach(i, rel->joininfo)
@@ -114,7 +114,7 @@ create_or_index_quals(PlannerInfo *root, RelOptInfo *rel)
* Use the generate_bitmap_or_paths() machinery to estimate the
* value of each OR clause. We can use regular restriction
* clauses along with the OR clause contents to generate
* indexquals. We pass outer_rel = NULL so that sub-clauses that
* are actually joins will be ignored.
*/
List *orpaths;

@@ -259,7 +259,7 @@ make_pathkey_from_sortinfo(PlannerInfo *root,
/*
* EquivalenceClasses need to contain opfamily lists based on the family
* membership of mergejoinable equality operators, which could belong to
* more than one opfamily. So we have to look up the opfamily's equality
* operator and get its membership.
*/
equality_op = get_opfamily_member(opfamily,
@@ -432,7 +432,7 @@ get_cheapest_path_for_pathkeys(List *paths, List *pathkeys,

/*
* Since cost comparison is a lot cheaper than pathkey comparison, do
* that first. (XXX is that still true?)
*/
if (matched_path != NULL &&
compare_path_costs(matched_path, path, cost_criterion) <= 0)
@@ -619,7 +619,7 @@ find_indexkey_var(PlannerInfo *root, RelOptInfo *rel, AttrNumber varattno)
/*
* convert_subquery_pathkeys
* Build a pathkeys list that describes the ordering of a subquery's
* result, in the terms of the outer query. This is essentially a
* task of conversion.
*
* 'rel': outer query's RelOptInfo for the subquery relation.
@@ -672,7 +672,7 @@ convert_subquery_pathkeys(PlannerInfo *root, RelOptInfo *rel,

/*
* Note: it might look funny to be setting sortref = 0 for a
* reference to a volatile sub_eclass. However, the
* expression is *not* volatile in the outer query: it's just
* a Var referencing whatever the subquery emitted. (IOW, the
* outer query isn't going to re-execute the volatile
@@ -706,7 +706,7 @@ convert_subquery_pathkeys(PlannerInfo *root, RelOptInfo *rel,
/*
* Otherwise, the sub_pathkey's EquivalenceClass could contain
* multiple elements (representing knowledge that multiple items
* are effectively equal). Each element might match none, one, or
* more of the output columns that are visible to the outer query.
* This means we may have multiple possible representations of the
* sub_pathkey in the context of the outer query. Ideally we
@@ -936,7 +936,7 @@ make_pathkeys_for_sortclauses(PlannerInfo *root,
* right sides.
*
* Note this is called before EC merging is complete, so the links won't
* necessarily point to canonical ECs. Before they are actually used for
* anything, update_mergeclause_eclasses must be called to ensure that
* they've been updated to point to canonical ECs.
|
||||
*/
|
||||
@@ -1068,7 +1068,7 @@ find_mergeclauses_for_pathkeys(PlannerInfo *root,
|
||||
* It's possible that multiple matching clauses might have different
|
||||
* ECs on the other side, in which case the order we put them into our
|
||||
* result makes a difference in the pathkeys required for the other
|
||||
* input path. However this routine hasn't got any info about which
|
||||
* input path. However this routine hasn't got any info about which
|
||||
* order would be best, so we don't worry about that.
|
||||
*
|
||||
* It's also possible that the selected mergejoin clauses produce
|
||||
@@ -1099,7 +1099,7 @@ find_mergeclauses_for_pathkeys(PlannerInfo *root,
|
||||
|
||||
/*
|
||||
* If we didn't find a mergeclause, we're done --- any additional
|
||||
* sort-key positions in the pathkeys are useless. (But we can still
|
||||
* sort-key positions in the pathkeys are useless. (But we can still
|
||||
* mergejoin if we found at least one mergeclause.)
|
||||
*/
|
||||
if (matched_restrictinfos == NIL)
|
||||
@@ -1131,7 +1131,7 @@ find_mergeclauses_for_pathkeys(PlannerInfo *root,
|
||||
* Returns a pathkeys list that can be applied to the outer relation.
|
||||
*
|
||||
* Since we assume here that a sort is required, there is no particular use
|
||||
* in matching any available ordering of the outerrel. (joinpath.c has an
|
||||
* in matching any available ordering of the outerrel. (joinpath.c has an
|
||||
* entirely separate code path for considering sort-free mergejoins.) Rather,
|
||||
* it's interesting to try to match the requested query_pathkeys so that a
|
||||
* second output sort may be avoided; and failing that, we try to list "more
|
||||
@@ -1462,7 +1462,7 @@ pathkeys_useful_for_merging(PlannerInfo *root, RelOptInfo *rel, List *pathkeys)
|
||||
|
||||
/*
|
||||
* If we didn't find a mergeclause, we're done --- any additional
|
||||
* sort-key positions in the pathkeys are useless. (But we can still
|
||||
* sort-key positions in the pathkeys are useless. (But we can still
|
||||
* mergejoin if we found at least one mergeclause.)
|
||||
*/
|
||||
if (matched)
|
||||
@@ -1492,7 +1492,7 @@ right_merge_direction(PlannerInfo *root, PathKey *pathkey)
|
||||
pathkey->pk_opfamily == query_pathkey->pk_opfamily)
|
||||
{
|
||||
/*
|
||||
* Found a matching query sort column. Prefer this pathkey's
|
||||
* Found a matching query sort column. Prefer this pathkey's
|
||||
* direction iff it matches. Note that we ignore pk_nulls_first,
|
||||
* which means that a sort might be needed anyway ... but we still
|
||||
* want to prefer only one of the two possible directions, and we
|
||||
@@ -1568,13 +1568,13 @@ truncate_useless_pathkeys(PlannerInfo *root,
|
||||
* useful according to truncate_useless_pathkeys().
|
||||
*
|
||||
* This is a cheap test that lets us skip building pathkeys at all in very
|
||||
* simple queries. It's OK to err in the direction of returning "true" when
|
||||
* simple queries. It's OK to err in the direction of returning "true" when
|
||||
* there really aren't any usable pathkeys, but erring in the other direction
|
||||
* is bad --- so keep this in sync with the routines above!
|
||||
*
|
||||
* We could make the test more complex, for example checking to see if any of
|
||||
* the joinclauses are really mergejoinable, but that likely wouldn't win
|
||||
* often enough to repay the extra cycles. Queries with neither a join nor
|
||||
* often enough to repay the extra cycles. Queries with neither a join nor
|
||||
* a sort are reasonably common, though, so this much work seems worthwhile.
|
||||
*/
|
||||
bool
|
||||
|
||||
@@ -19,7 +19,7 @@
|
||||
* representation all the way through to execution.
|
||||
*
|
||||
* There is currently no special support for joins involving CTID; in
|
||||
* particular nothing corresponding to best_inner_indexscan(). Since it's
|
||||
* particular nothing corresponding to best_inner_indexscan(). Since it's
|
||||
* not very useful to store TIDs of one table in another table, there
|
||||
* doesn't seem to be enough use-case to justify adding a lot of code
|
||||
* for that.
|
||||
@@ -57,7 +57,7 @@ static List *TidQualFromRestrictinfo(List *restrictinfo, int varno);
|
||||
* or
|
||||
* pseudoconstant = CTID
|
||||
*
|
||||
* We check that the CTID Var belongs to relation "varno". That is probably
|
||||
* We check that the CTID Var belongs to relation "varno". That is probably
|
||||
* redundant considering this is only applied to restriction clauses, but
|
||||
* let's be safe.
|
||||
*/
|
||||
|
||||